id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
56,244,281 | https://en.wikipedia.org/wiki/Dissection%20into%20orthoschemes | In geometry, it is an unsolved conjecture of Hugo Hadwiger that every simplex can be dissected into orthoschemes, using a number of orthoschemes bounded by a function of the dimension of the simplex. If true, then more generally every convex polytope could be dissected into orthoschemes.
Definitions and statement
In this context, a simplex in d-dimensional Euclidean space is the convex hull of d + 1 points that do not all lie in a common hyperplane. For example, a 2-dimensional simplex is just a triangle (the convex hull of three points in the plane) and a 3-dimensional simplex is a tetrahedron (the convex hull of four points in three-dimensional space). The points that form the simplex in this way are called its vertices.
An orthoscheme, also called a path simplex, is a special kind of simplex. In it, the vertices can be connected by a path, such that every two edges in the path are at right angles to each other. A two-dimensional orthoscheme is a right triangle. A three-dimensional orthoscheme can be constructed from a cube by finding a path of three edges of the cube that do not all lie on the same square face, and forming the convex hull of the four vertices on this path.
A dissection of a shape (which may be any closed set in Euclidean space) is a representation of the shape as a union of other shapes whose interiors are disjoint from each other. That is, intuitively, the shapes in the union do not overlap, although they may share points on their boundaries. For instance, a cube can be dissected into six three-dimensional orthoschemes. A similar result applies more generally: every hypercube or hyperrectangle in d dimensions can be dissected into orthoschemes.
Hadwiger's conjecture is that there is a function f(d) such that every d-dimensional simplex can be dissected into at most f(d) orthoschemes. Hadwiger posed this problem in 1956; it remains unsolved in general, although special cases for small values of d are known.
In small dimensions
In two dimensions, every triangle can be dissected into at most two right triangles, by dropping an altitude from its widest angle onto its longest edge.
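This two-dimensional case is easy to verify numerically. The sketch below is illustrative only (the coordinates are arbitrary): it drops the altitude from the vertex opposite the longest edge and checks that each of the two resulting triangles has a right angle at the foot of the altitude.

```python
# Dissect a triangle into two right triangles by dropping an altitude
# from the vertex opposite its longest edge onto that edge.

def foot_of_altitude(a, b, c):
    """Foot of the perpendicular from vertex c onto the line through a and b."""
    abx, aby = b[0] - a[0], b[1] - a[1]
    t = ((c[0] - a[0]) * abx + (c[1] - a[1]) * aby) / (abx**2 + aby**2)
    return (a[0] + t * abx, a[1] + t * aby)

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

a, b, c = (0.0, 0.0), (4.0, 0.0), (1.0, 2.0)  # longest edge is ab
f = foot_of_altitude(a, b, c)                 # foot lies inside ab

# Each half, (a, f, c) and (f, b, c), has a right angle at f:
cf = (c[0] - f[0], c[1] - f[1])
fa = (a[0] - f[0], a[1] - f[1])
fb = (b[0] - f[0], b[1] - f[1])
print(f)                          # (1.0, 0.0)
print(dot(cf, fa), dot(cf, fb))   # 0.0 0.0 -> right angles in both halves
```

Dropping the altitude from the widest angle guarantees the foot lands in the interior of the longest edge, so both pieces are genuine (non-degenerate) right triangles.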
In three dimensions, some tetrahedra can be dissected in a similar way: drop an altitude perpendicularly from a vertex to a point in the opposite face, connect that point perpendicularly to the sides of the face, and use the three-edge perpendicular paths that run from the vertex through the foot of the altitude to a side and then to a vertex of the face. However, this does not always work. In particular, there exist tetrahedra for which none of the vertices have altitudes with a foot inside the opposite face.
Using a more complicated construction, Lenhard proved that every tetrahedron can be dissected into at most 12 orthoschemes.
Böhm proved that this is optimal: there exist tetrahedra that cannot be dissected into fewer than 12 orthoschemes. In the same paper, he also generalized Lenhard's result to three-dimensional spherical geometry and three-dimensional hyperbolic geometry.
In four dimensions, at most 500 orthoschemes are needed. In five dimensions, the number of orthoschemes needed is again finite, roughly bounded as at most 12.5 million. Again, this applies to spherical geometry and hyperbolic geometry as well as to Euclidean geometry.
Hadwiger's conjecture remains unproven for all dimensions greater than five.
Consequences
Every convex polytope may be dissected into simplexes. Therefore, if Hadwiger's conjecture is true, every convex polytope would also have a dissection into orthoschemes.
A related result is that every orthoscheme can itself be dissected into smaller orthoschemes. Therefore, for simplexes that can be partitioned into orthoschemes, their dissections can have arbitrarily large numbers of orthoschemes.
References
Conjectures
Unsolved problems in geometry
Geometric dissection
Multi-dimensional geometry | Dissection into orthoschemes | [
"Mathematics"
] | 882 | [
"Geometry problems",
"Unsolved problems in mathematics",
"Unsolved problems in geometry",
"Conjectures",
"Mathematical problems"
] |
56,244,331 | https://en.wikipedia.org/wiki/Boundary%20extension | Boundary extension (BE) is a cognitive psychology phenomenon and an error of commission in which people remember more of a scene than was present in the original picture. Boundary extension is typically studied using a recognition memory test: participants are shown a series of photos, then shown new photos that are either the same or have been altered in some way, and asked whether each is the same as or different from the original. For example, during the study phase, where the participant tries to memorize the picture, people are typically presented with either a close-angle photo, which shows less of a picture scene, or a wide-angle photo, which shows more of a picture scene; during the test phase, where the participant is tested on the original photos, they are then shown a close- or wide-angle photo. Consequently, there are four different viewing conditions that people could experience the photos in: close-close, wide-wide, close-wide, or wide-close. If the participants respond that the new photos with more background are the same as the original photos, then they are demonstrating boundary extension, because they are extending the boundary of the original photo.
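The four viewing conditions lend themselves to a simple tally. The sketch below is purely illustrative (the trial data are invented, not taken from any published study): a "same" response to a wider-angle test photo than was studied is counted as a boundary-extension error.

```python
# Each trial records (study angle, test angle, participant response).
trials = [
    ("close", "wide",  "same"),       # boundary extension
    ("close", "close", "same"),       # correct
    ("wide",  "close", "different"),  # correct
    ("wide",  "wide",  "same"),       # correct
    ("close", "wide",  "same"),       # boundary extension
]

# Count "same" responses to test photos that show more background
# than the studied photo (close study -> wide test).
be_errors = sum(
    1 for study, test, resp in trials
    if study == "close" and test == "wide" and resp == "same"
)
print(be_errors)  # 2
```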
The way psychologists study boundary extension has evolved over time. Researchers first studied the phenomenon by asking participants to draw scenes from memory, but after many studies they moved to studying boundary extension through a picture recognition memory task, which is currently the more widely used method.
Boundary extension occurs with a variety of stimuli. For example, boundary extension happens with simple and complex photos, simple and complex objects, line-drawings, and photos and objects that have been zoomed in or out to varying degrees. Multimodal boundary extension also happens with both the haptic and auditory senses. Boundary extension occurs across a variety of ages as well. For example, boundary extension is apparent very early in life, in 3- to 4-month-old infants, and in children. College students are susceptible to boundary extension, and so are older adults. Boundary extension even happens with people who have disorders such as Down syndrome.
Because boundary extension is so universal regarding different altered stimuli and age groups, there are many possible causes, examples, and scenarios of boundary extension. For example, people tend to draw entire scenes instead of what was just in the picture. Also, people naturally add more background into scenes regardless of whether they are just looking at the scene or drawing it. Essentially, what is just beyond the current boundaries becomes a part of the internal representation of the recalled scene in a person’s mind. In addition, many cognitive mechanisms influence boundary extension such as a source monitoring error and a perceptual schema.
Vocabulary related to boundary extension
Source monitoring error
A source monitoring error can be defined as the inability to recall where information came from, especially when trying to recall the source of photos. For example, participants in boundary extension experiments often say that the boundary-extended test photos came from the study pictures, rather than recognizing that they themselves altered the photos in their minds and that the versions they are remembering were self-generated.
Perceptual schema
A perceptual schema is a cognitive phenomenon and an internal mental representation of a scene that is created by oneself using prior knowledge and details of the world. Perceptual schemas often form when a person sees a new image because it can be a way to process the image using prior knowledge of other images that one has viewed and processed in the past. Perceptual schemas can form while the picture is being viewed for the first time or soon after. Perceptual schemas are applicable to boundary extension because one’s perceptual schema might add in background and boundary details that were not in the original photo but are a part of one’s perceptual schema of the photo.
Visual memory
Visual memory can be defined as the process by which one encodes and remembers visual information such as pictures. Visual memory is relevant to boundary extension because boundary extension is a visual memory phenomenon where one has to rely on the visual aspects of memory to try and recall pictures or notice any changes in the pictures or scenes.
Possible causes of boundary extension
There are many possible causes of boundary extension. For example, source monitoring errors, perceptual schemas, and visual memory can all partially contribute to boundary extension because they are all related to how photos are initially processed and then later remembered.
A variety of types of objects and scenes also help facilitate boundary extension. For example, both simple scenes (pictures with one main object) and complex scenes (pictures with more than one main object) cause people to extend boundaries. Highly similar photos that have been zoomed in or out, by either a large or a small amount, also elicit boundary extension. Furthermore, wide-angle scenes (pictures that show more of the background) and close-angle scenes (pictures that show less of the background) both contribute to the phenomenon. These scenes can be of animals, landscapes, people, or other objects. Scenes with objects rotated by varying degrees also elicit boundary extension. Moreover, 3-D models of rooms with furniture are more conducive to boundary extension than 2-D scenes. Even neutral and emotional photos help cause boundary extension. Boundary extension occurs for scene pictures, objects in pictures with blank backgrounds, and line drawings, and outline-scenes and outline-objects elicit it as well. In short, a wide variety of scene stimuli cause boundary extension for the average person.
How boundary extension has evolved
At first, boundary extension was studied by having participants draw scenes from memory. Participants would be presented with a photo; the photo would then be taken away, and participants would be asked to draw it from memory, keeping in mind the proportions of the original photo and the background. But because of the inherent tediousness and imprecision of coding and analyzing this kind of picture data, psychologists transitioned to studying boundary extension through picture recognition memory tasks. In a picture recognition memory task, participants would be shown photos in the study phase and then presented with photos that were the same or slightly altered in the test phase. They would be asked whether the photo was the same or whether the camera angle seemed a little further away, a lot further away, a little closer, or a lot closer. Finally, they would rate how confident they were about their answer, choosing among sure, pretty sure, not sure, and did not see the picture.
Boundary extension affects different age groups
Boundary extension occurs at every age. It is present in infants; children show it whether they draw scenes from memory or complete a picture recognition task; college students show it regardless of the kind of boundary extension task; and adults and older adults demonstrate the same tendencies. Boundary extension persists throughout one's life, starting in infancy.
Multimodal boundary extension
Visual and haptic boundary extension
Boundary extension has been explored by incorporating a haptic element into a boundary extension task. The researchers had college students either view or touch 3-D scene-regions with a frame. The college students then recalled the stimuli that they had interacted with by listing what objects they had either felt or saw. Boundary extension occurred for both the visual and haptic stimuli and conditions. The researchers concluded that boundary extension occurs across modalities. People perceive and remember scenes multimodally through both their eyes and their hands.
Visual and auditory boundary extension
Boundary extension has also been studied by adding an auditory element to see how sound relates to boundary extension and picture memory. Researchers had participants complete the normal picture recognition memory task with an added auditory component. Participants were in one of three conditions: no sound, music, or sound effect. While viewing the photos in the study phase, participants either listened to silence, a sound relevant to the photo, or unrelated music. They then completed the normal test phase structure of the picture recognition memory task. Boundary extension occurred in all three conditions and did not differ across conditions. So, the type of noise did not affect boundary extension. Indeed, boundary extension in both auditory conditions was the same as the control condition where the participants listened to silence while viewing the photos. Auditory stimuli do not affect boundary extension at all.
Boundary extension and different brains
Down syndrome
Despite the differences in their brains, children with Down syndrome still experienced boundary extension on the picture recognition memory task, the drawing task, and the 3-D scene memory task, just as children without Down syndrome did. Participants with Down syndrome typically demonstrated the most boundary extension on the drawing task.
Amnesia
Among test subjects with a type of brain damage that leads to a form of amnesia, the boundary extension error ranges from significantly less erroneous to nonexistent when compared to test subjects that do not have brain damage.
References
Cognitive psychology
Memory | Boundary extension | [
"Biology"
] | 1,779 | [
"Behavioural sciences",
"Behavior",
"Cognitive psychology"
] |
56,244,768 | https://en.wikipedia.org/wiki/MacFarsi%20encoding | MacFarsi encoding is an obsolete encoding for Farsi/Persian, Urdu (and English) texts that was used in Apple Macintosh computers to represent such texts.
The encoding is identical to MacArabic encoding except for the numerals, which are in the Persian/Urdu style, also known as "Extended" or "Eastern" Arabic-Indic numerals. See Arabic script in Unicode for more details.
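The numeral difference can be illustrated in code. A minimal sketch (using Unicode code points rather than the MacFarsi byte layout itself): the Extended Arabic-Indic digits occupy U+06F0–U+06F9, whereas the ordinary Arabic-Indic digits sit at U+0660–U+0669.

```python
# Map ASCII digits to the Extended (Eastern) Arabic-Indic digits
# used for Persian/Urdu, U+06F0 through U+06F9.
EXTENDED = {str(d): chr(0x06F0 + d) for d in range(10)}

def to_persian_digits(s: str) -> str:
    """Replace ASCII digits with Persian/Urdu-style digits; leave other characters alone."""
    return "".join(EXTENDED.get(ch, ch) for ch in s)

print(to_persian_digits("1402"))  # ۱۴۰۲
```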
References
See also
MacArabic encoding
Arabic script in Unicode
Character sets
Farsi
Persian alphabets | MacFarsi encoding | [
"Technology"
] | 101 | [
"Computing stubs",
"Digital typography stubs"
] |
56,245,450 | https://en.wikipedia.org/wiki/Council%20architect | A council architect or municipal architect (properly titled county architect, borough architect, city architect or district architect) is an architect employed by a local authority. The name of the position varies depending on the type of local authority and is similar to that of county surveyor or chief engineer used by some authorities. Council architects are employed in the United Kingdom, and the role also exists in Malta and Ireland.
History
The role was once widespread with many counties, cities and other local authorities employing their own architect to design public works. Council architects acted as designer, client and regulator for their authority, and having significant buying power, they were able to influence suppliers to accommodate their requirements. They worked closely with the council planning department, with whom they were often co-located. In 1953, the London County Council (LCC) employed more than 1,500 people within its architects department.
The LCC architects were key innovators, with the guaranteed salary and relative anonymity allowing them to develop experimental designs without risk to income or the stigma of failure. The LCC architects department also provided research funding, including for the Survey of London, and had in-house testing and development teams. The smaller scale firms in private practice at the time could not provide such luxuries.
Current role
The trend in recent decades has been for councils to close their architects departments. As of 2015, there were 237 council architects in England, 159 in Scotland and 24 in Wales. The biggest employers are Hampshire (44), Glasgow (18), the Highland Council (13) and Lancashire (11). Despite their predecessors having one of the largest and most active architects departments in the country, no London borough now employs more than five council architects.
Once closed, a local authority is highly unlikely to revive an architects department and will instead rely on outsourcing to private firms. One exception is the London Borough of Croydon, which re-established a council architect position in 2015. Hampshire County Architects remains the largest council architects department, and is recognized as a leader in its field, winning several awards for its school designs since the 1980s.
References
Architecture occupations
Government occupations | Council architect | [
"Engineering"
] | 425 | [
"Architecture occupations",
"Architecture"
] |
56,248,002 | https://en.wikipedia.org/wiki/Integrated%20Electronics%20Piezo-Electric | Integrated Electronics Piezo-Electric (IEPE) characterises a technical standard for piezoelectric sensors which contain built-in impedance conversion electronics. IEPE sensors are used to measure acceleration, force or pressure. Measurement microphones also apply the IEPE standard.
Other proprietary names for the same principle are ICP, CCLD, IsoTron or DeltaTron.
The electronics of the IEPE sensor (typically implemented as FET circuit) converts the high impedance signal of the piezoelectric material into a voltage signal with a low impedance of typically 100 Ω. A low impedance signal is advantageous because it can be transmitted across long cable lengths without a loss of signal quality. In addition, special low noise cables, which are otherwise required for use with piezoelectric sensors, are no longer necessary.
The sensor circuit is supplied with constant current. A distinguishing feature of the IEPE principle is that the power supply and the sensor signal are transmitted via one shielded wire.
Most IEPE sensors work at a constant current between 2 and 20 mA. A common value is 4 mA. The higher the constant current, the longer the possible cable length: cables of several hundred meters length can be used without a loss of signal quality. Supplying the IEPE sensor with constant current results in a positive bias voltage, typically between 8 and 12 volts, at the output. The actual measuring signal of the sensor is added to this bias voltage. The supply or compliance voltage of the constant current source should be 24 to 30 V, which is about two times the bias voltage. This ensures maximum amplitudes in both the positive and negative direction.
A typical IEPE sensor supply with 4 mA constant current and 25 V compliance voltage has a power consumption of 100 mW. This can be a drawback in battery powered systems. For such applications low-power IEPE sensors exist which can be operated at only 0.1 mA constant current from a 12 V supply. This may save up to 90 % power.
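The power figures above follow directly from P = I × V. A quick sketch:

```python
def supply_power_mw(current_ma: float, voltage_v: float) -> float:
    # mA * V = mW, since 1 mA * 1 V = 1 mW
    return current_ma * voltage_v

standard = supply_power_mw(4.0, 25.0)   # -> 100.0 mW
low_power = supply_power_mw(0.1, 12.0)  # -> about 1.2 mW
savings = 1 - low_power / standard      # -> roughly 0.99, i.e. well over 90 %
print(standard, low_power, savings)
```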
Many measuring instruments designed for piezoelectric sensors or measurement microphones have an IEPE constant current source integrated at the input.
In measuring instruments with IEPE input the bias voltage is often used for sensor detection.
If the signal lies close to the constant current supply voltage, there is no sensor present or the cable path has been interrupted. A signal close to the saturation voltage indicates a short circuit in the sensor or cable. Between these two limits, a functional sensor has been detected.
The bias voltage is cut off by a coupling capacitor at the instrument input and only the AC signal is processed further.
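The detection logic described above can be sketched as a simple threshold check. The limit values here are assumptions chosen for illustration; real instruments use thresholds matched to their particular supply voltage and expected bias window.

```python
def sensor_status(bias_v: float,
                  supply_v: float = 25.0,       # compliance voltage of the source
                  short_limit_v: float = 1.5,   # assumed lower threshold
                  open_margin_v: float = 1.5):  # assumed margin below supply
    """Classify an IEPE input from its measured DC bias voltage."""
    if bias_v >= supply_v - open_margin_v:
        return "open"    # no sensor present, or broken cable
    if bias_v <= short_limit_v:
        return "short"   # short circuit in sensor or cable
    return "ok"          # bias in the normal window -> sensor detected

print(sensor_status(10.2))  # ok    (typical 8-12 V bias)
print(sensor_status(24.8))  # open
print(sensor_status(0.3))   # short
```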
Piezoelectric sensors which do not possess IEPE electronics, meaning with charge output, remain reserved for applications where lowest frequencies, high operating temperatures, an extremely large dynamic range, very energy saving operation or extremely small design is required.
References
External links
IEPE principle, Metra
Sensors
Accelerometers | Integrated Electronics Piezo-Electric | [
"Physics",
"Technology",
"Engineering"
] | 587 | [
"Accelerometers",
"Physical quantities",
"Acceleration",
"Measuring instruments",
"Sensors"
] |
56,248,020 | https://en.wikipedia.org/wiki/2014%20Dan%20River%20coal%20ash%20spill | The 2014 Dan River coal ash spill occurred in February 2014, when an Eden, North Carolina facility owned by Duke Energy spilled 39,000 tons of coal ash into the Dan River. The company later pled guilty to criminal negligence in their handling of coal ash at Eden and elsewhere and paid fines of over $5 million. The United States Environmental Protection Agency (EPA) has since been responsible for overseeing cleanup of the waste. EPA and Duke Energy signed an administrative order for the site cleanup.
Incident
On February 2, 2014, a drainage pipe burst at a coal ash containment pond owned by Duke Energy in Eden, North Carolina, sending 39,000 tons of coal ash into the Dan River. In addition to the coal ash, 27 million gallons of wastewater from the plant was released into the river. The broken pipe was left unsealed for almost a week before the draining coal ash was stopped. The ash was deposited a considerable distance downstream from the site of the spill and contained harmful metals and chemicals. This catastrophe occurred at the site of the Dan River Steam Station, a retired coal power plant which had ceased operation in 2012. Duke Energy apologized for the incident and announced detailed plans for removal of coal ash at the Dan River site. Workers were only able to remove about ten percent of the coal ash that was spilled into the river, but cleanup is ongoing and Duke Energy plans to spend around 3 million dollars to continue the cleanup efforts.
CNN reported that the river was turned into an oily sludge. The river is a drinking water source for communities in North Carolina and Virginia. Immediate tests showed increased amounts of arsenic and selenium, but the river was deemed by state officials to be a safe source for drinking water. However, further tests showed the ash to contain pollutants including but not limited to arsenic, copper, selenium, iron, zinc and lead. The coal ash immediately endangered animals and fish species that lived in or around the river. Six days after the spill Duke Energy announced that the leakage had been stopped and they pledged to clean up the coal ash.
Reasons for spill
The cause of the ash spill was described by EPA as a limited structural flaw. A storm pipe near the deposits of a coal ash slurry containment area broke, allowing the leakage. Coal ash slurry is produced during the process of burning coal: it consists of the leftover impurities that remain after coal is burned for electricity. Coal companies have found that the cheapest way to store this waste is to mix it with water and store it in a pond. These ponds have been found to develop leaks that can release hazardous material into surface water, among other places. EPA has identified at least 25 coal ash ponds in the southeast that are "high hazard". The material was released into the Dan River because of the collapse of a 48-inch drain pipe. The pipe was made of concrete and corrugated metal, and the reason for the fracture could not be identified. As a result, 39 thousand tons of coal ash and 27 million gallons of ash pond water were deposited into the Dan River.
Environmental impact
EPA has been collecting dissolved contaminant concentration data in the Dan River (from the VA/NC state line to midway between Danville and South Boston) since the coal ash spill. The organization has been periodically comparing the retrieved water/sediment chemistry data to ecological risk screening levels (ERSLs) to assess risk to aquatic and plant life. Coal ash is made up of various materials left after the burning of coal takes place. These include silica, arsenic, boron, cadmium, chromium, copper, lead, mercury, selenium, and zinc. Certain measured contaminants exceeded the screening levels, so the water/sediment chemistry must continue to be monitored. Coal ash can coat and degrade the habitats of aquatic animals as well as cause direct harm to certain organisms.
The latest surface water sampling results were released by EPA in July 2014. All surface water chemical concentrations were found to be below the ERSLs except for lead. The latest sediment sampling results were also released in July 2014. All sediment chemical concentrations were found to be below the ERSLs except for aluminum, arsenic, barium, and iron. The latest soil sampling results were released in June 2014. All soil chemical concentrations were found to be below the ERSLs except for aluminum, barium, iron, and manganese.
The coal ash will never be fully removed from the river. This is due to samples passing human health screening, the potential for historical contamination to become re-suspended, and removal being more detrimental to certain endangered species than the coal ash itself. In addition, the coal ash is already mixed in with existing sediment, complicating its removal further. EPA estimated that about 72 percent of all the toxic water in the country comes directly from coal-fired power plants.
Enforcement
The New York Times reported that the North Carolina Department of Environmental Quality (NCDEQ; formerly the Department of Environment and Natural Resources) was directed to minimize its regulatory role prior to the accident by Governor Pat McCrory. Prior to being Governor, McCrory had worked for Duke Energy for nearly three decades. At the time, it was the third largest coal ash spill to have occurred in the United States. Prior to the incident, environmental groups had attempted to sue Duke Energy three times in 2013 under the Clean Water Act to force the company to fix leaks in its coal ash dumps. Each time, the groups were blocked by NCDEQ, which eventually fined the company $99,111. Federal prosecutors found this fine to be suspiciously low, and investigated both Duke Energy and the state regulators. Many newspaper editorials alleged that Duke Energy's environmental safety controls were lax and that the company "bullied" regulators.
After the incident, Duke Energy was prosecuted by a number of agencies, and substantial evidence was presented indicating that company officials knew about numerous coal ash leaks in various plants including the Eden facility and declined to resolve it or provide local plant administrators the funds they were requesting to monitor and mitigate the problems. At the federal level, Duke was prosecuted by the United States Department of Justice Environment and Natural Resources Division and pled guilty to nine charges of criminal negligence under the Clean Water Act. Duke agreed to pay $102 million in fines and restitution, the largest federal criminal fine in North Carolina history. Duke also agreed to pay fines to North Carolina and Virginia ($2.5 million).
Outcomes
Largely as a result of the attention brought to Duke Energy's handling of coal ash ponds by the 2014 disaster, the North Carolina state legislature ordered Duke Energy to close its 32 ash ponds in the state by 2029. On May 2, 2014, Duke Energy and EPA agreed to a $3 million cleanup agreement. Part of the agreement is having Duke Energy identify areas of necessary cleanup on the Dan River, estimated to cost around 1 million dollars. The other 2 million dollars is allocated to EPA to address future response methods needed in order to clean up the Dan River. A spokesperson for Duke Energy announced that the company plans to exit the coal ash business. Associates have said that well before the Dan River incident the company had allocated 130 million dollars to transitioning plants to handle fly ash in dry form and manage it in lined landfills. Duke Energy said that it created an advisory group of researchers to help with cleaner coal combustion at its facilities.
In February 2016, the EPA proposed a $6.8 million settlement, which Duke Energy immediately appealed. In September the corporation accepted a settlement just shy of the original amount at $5,983,750, to be paid for fines, restitution, cleanup assessment, removal, and community action initiatives. Regarding the initial settlement, EPA sends periodic bills to Duke Energy accounting for direct and indirect costs incurred by EPA, its contractors, and the Department of Justice.
The states affected launched a lawsuit on July 18, 2019, asking that the court declare Duke Energy responsible for the damage done to the environment by the spill.
Cleanup efforts
To keep the energy provider accountable, under the Administrative Settlement Agreement & Order on Consent for Removal Action (AOC) as of May 2014, the Respondent, Duke Energy, was required to submit a number of plans to EPA, including a scope of work, public health, post-removal site control, and engineering plans.
The work plan includes descriptions and a schedule of actions required by the settlement.
The public health plan ensures protection of public health during on-site removal projects.
The post-removal site control plan provides EPA with documentation of all post-removal arrangements.
The engineering report describes steps executed by Duke to improve the structural durability of post-release impoundments and storm sewer lines running under their coal ash impoundments.
Within these plans, Duke Energy is responsible for creating and implementing a Site Assessment that includes but is not limited to ecological analysis, surface water and sediment assessment as well as post-removal monitoring protocols to calculate the extent of pollution in the Dan River in North Carolina and the Kerr Reservoir and Schoolfield Dam in Danville, Virginia. These assessments were approved by the EPA in consultation with the affected state agencies including NCDEQ and the Virginia Department of Environmental Quality (VDEQ). Following the spill and written into the AOC are monitoring protocols in which EPA will sporadically authorize the NCDEQ and VDEQ to take split or duplicate water samples to ensure consistent quality after removal of the coal ash.
As of April 1, 2019, North Carolina has ordered Duke Energy to dig up millions of tons of coal ash at six of its power plants. The dangerous coal ash had been mixed with water and stored in uncovered, unlined ponds for decades, but following the 2014 Dan River coal ash spill, many lawsuits were filed. If the plaintiffs in these cases are successful, Duke Energy would be required to drain all 31 of its ponds. The draining process would cost $5 billion, on top of the existing $5.6 billion cleanup cost from 2014. With the added costs, Duke Energy customers can expect to pay higher fees in the coming years.
See also
2018 Cape Fear River coal ash spill
References
Dan River coal ash spill
Coal-fired power stations
Energy accidents and incidents
Environmental impact of the coal industry
Environmental impact of the energy industry
Hazardous waste
Rockingham County, North Carolina
Waste disposal incidents in the United States
Water pollution in the United States
Environmental disasters in the United States
2014 disasters in the United States | 2014 Dan River coal ash spill | [
"Technology"
] | 2,114 | [
"Hazardous waste"
] |
56,248,459 | https://en.wikipedia.org/wiki/Bituminite | Bituminite is an autochthonous maceral, part of the liptinite group in lignite, that occurs in petroleum source rocks and originates from organic matter such as algae that has undergone alteration or degradation through natural processes such as burial. It occurs as a fine-grained groundmass, laminae or elongated structures that appear as veinlets within horizontal sections of lignite and bituminous coals, and also occurs in sedimentary rocks. Its occurrence in sedimentary rocks is typically found surrounding alginite, parallel along bedding planes. Bituminite is not considered to be bitumen because its properties are different from most bitumens. It is described as having no definite shape or form when present in bedding and can be identified using different kinds of visible and fluorescent lights. There are three types of bituminite: type I, type II and type III, of which type I is the most common. The presence of bituminite in oil shales, other oil source rocks and some coals is an important factor when identifying potential petroleum-source rocks.
Physical properties
The internal structure of bituminite varies from deposit to deposit. It may be homogeneous, streaky, fluidal or finely granular. These properties of internal structure, however, are only visible when particles are irradiated with blue or violet light.
Bituminite is commonly found in the size and shape of irregular, discoidal particles that are typically 100–200 μm in diameter. When observed under transmitted light with oil immersion, the color of bituminite is orange, reddish to brown. Under reflected light, bituminite is dark brown to dark grey and sometimes black in color. Bituminite has an approximate density of ≈1.2–1.3 g/cm3, which was determined by gradient centrifuging. Bituminite has a very low polishing hardness. It usually smears during the polishing process because it is unconsolidated and very soft.
Occurrence
Bituminite is found in oxic to anoxic lacustrine and marine environments, commonly associated with other macerals such as alginite and liptodetrinite. The organic material undergoes diagenesis, forming an amorphous matrix. Framboidal pyrite is a common feature associated with bituminite, caused by bacterial reworking of the digestible organic matter. Due to this reworking of organic matter, the particles/grains of bituminite often appear diffuse and blurred. Though grains of bituminite are often too blurred to distinguish, their optical properties vary widely and are therefore used to determine the bituminite type.
It is very common to have all three types of bituminite in organic-rich sedimentary rocks; however, the modal percentage of each bituminite type varies. Typically, type I bituminite particles are much larger than those of other types and show a negative alteration when irradiated with blue light. Type II is determined by its yellowish or reddish-brown fluorescence and its occasional oil expulsions when irradiated. Type III, the rarest kind of bituminite, appears dark grey under reflected white light but lacks fluorescence. It is also distinguished by its fine granular structure and its association with faunal relics.
History
Bituminite was a general term given to rocks which are rich in bitumen. The term was also used informally to describe irregularly shaped macerals until 1975 when the ICCP clearly defined the term.
Applications and uses
Bituminite is the main source for low-temperature coal tar, which is used in industry, medicine and construction. The value of bituminite increases with grade. At high grade, i.e. high maturity, bituminite has a high hydrogen-to-carbon ratio, which indicates a good hydrocarbon source. However, low grades of bituminite vary depending on type, meaning that hydrogen/carbon ratios are variable.
Bituminite can also be used as potential indicators in the petroleum industry. Research has shown that type I bituminite in modes upwards of 10% of the total organic matter, is indicative of a potential petroleum-source rock.
References
Sedimentology
Organic minerals
Petrology | Bituminite | [
"Chemistry"
] | 895 | [
"Organic compounds",
"Organic minerals"
] |
56,249,073 | https://en.wikipedia.org/wiki/Egocentric%20vision | Egocentric vision or first-person vision is a sub-field of computer vision that entails analyzing images and videos captured by a wearable camera, which is typically worn on the head or on the chest and naturally approximates the visual field of the camera wearer. Consequently, visual data capture the part of the scene on which the user focuses to carry out the task at hand and offer a valuable perspective to understand the user's activities and their context in a naturalistic setting.
The wearable camera looking forwards is often supplemented with a camera looking inward at the user's eye, able to measure the user's eye gaze, which is useful to reveal attention and to better understand the user's activity and intentions.
History
The idea of using a wearable camera to gather visual data from a first-person perspective dates back to the 70s, when Steve Mann invented "Digital Eye Glass", a device that, when worn, causes the human eye itself to effectively become both an electronic camera and a television display.
Subsequently, wearable cameras were used for health-related applications in the context of Humanistic Intelligence and Wearable AI. Egocentric vision is best done from the point-of-eye, but may also be done by way of a neck-worn camera when eyeglasses would be in the way. This neck-worn variant was popularized by way of the Microsoft SenseCam in 2006 for experimental health research. The interest of the computer vision community in the egocentric paradigm arose slowly entering the 2010s and has been growing rapidly in recent years, boosted both by the impressive advances in the field of wearable technology and by the increasing number of potential applications.
The prototypical first-person vision system, described by Kanade and Hebert in 2012, is composed of three basic components: a localization component able to estimate the surroundings, a recognition component able to identify objects and people, and an activity recognition component able to provide information about the current activity of the user. Together, these three components provide complete situational awareness of the user, which in turn can be used to provide assistance to the user or to a caregiver. Following this idea, the first computational techniques for egocentric analysis focused on hand-related activity recognition and social interaction analysis. Also, given the unconstrained nature of the video and the huge amount of data generated, temporal segmentation and summarization were among the first problems addressed. After almost ten years of egocentric vision (2007–2017), the field is still undergoing diversification. Emerging research topics include:
Social saliency estimation
Multi-agent egocentric vision systems
Privacy preserving techniques and applications
Attention-based activity analysis
Social interaction analysis
Hand pose analysis
Ego graphical User Interfaces (EUI)
Understanding social dynamics and attention
Revisiting robotic vision and machine vision as egocentric sensing
Activity forecasting
Technical challenges
Today's wearable cameras are small and lightweight digital recording devices that can acquire images and videos automatically, without user intervention, with different resolutions and frame rates, and from a first-person point of view. Therefore, wearable cameras are naturally primed to gather visual information from our everyday interactions, since they offer an intimate perspective of the visual field of the camera wearer.
Depending on the frame rate, it is common to distinguish between photo-cameras (also called lifelogging cameras) and video-cameras.
The former (e.g., Narrative Clip and Microsoft SenseCam) are commonly worn on the chest and are characterized by a very low frame rate (up to 2 frames per minute) that allows images to be captured over a long period of time without recharging the battery. Consequently, they offer considerable potential for inferring knowledge about, e.g., behaviour patterns, habits or lifestyle of the user. However, due to the low frame rate and the free motion of the camera, temporally adjacent images typically present abrupt appearance changes, so that motion features cannot be reliably estimated.
The latter (e.g., Google Glass, GoPro) are commonly mounted on the head and capture conventional video (around 35 fps) that allows fine temporal details of interactions to be captured. Consequently, they offer potential for in-depth analysis of daily or special activities. However, since the camera moves with the wearer's head, it becomes more difficult to estimate the global motion of the wearer, and in the case of abrupt movements the images can be blurred.
In both cases, since the camera is worn in a naturalistic setting, visual data present a huge variability in terms of illumination conditions and object appearance.
Moreover, the camera wearer is not visible in the image, and what he/she is doing has to be inferred from the information in the visual field of the camera, implying that important information about the wearer, such as, for instance, pose or facial expression, is not available.
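As a rough illustration of the data-volume gap between the two camera classes discussed above, the following back-of-the-envelope calculation compares image counts; the 12-hour wearing session is a hypothetical figure, not taken from the text.

```python
# Back-of-the-envelope comparison of the two wearable-camera classes:
# lifelogging photo-cameras (~2 frames per minute) vs. video-cameras (~35 fps).
# The 12-hour wearing session is a hypothetical assumption.

def images_captured(frames_per_second: float, hours: float) -> int:
    """Total frames captured over a wearing session."""
    return round(frames_per_second * 3600 * hours)

HOURS = 12  # hypothetical waking-hours session

lifelogger = images_captured(2 / 60, HOURS)  # photo-camera: 2 frames per minute
video_cam = images_captured(35, HOURS)       # video-camera: about 35 fps

print(lifelogger)  # 1440 images per day
print(video_cam)   # 1512000 frames per day
```

The three-orders-of-magnitude difference helps explain why temporal segmentation and summarization were among the first problems addressed for video-cameras, while lifelogging cameras are suited to long-term behaviour-pattern analysis.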
Applications
A collection of studies published in a special theme issue of the American Journal of Preventive Medicine has demonstrated the potential of lifelogs captured through wearable cameras from a number of viewpoints. In particular, it has been shown that, used as a tool for understanding and tracking lifestyle behaviour, lifelogs would enable the prevention of noncommunicable diseases associated with unhealthy trends and risky profiles (such as obesity, depression, etc.). In addition, used as a tool of re-memory cognitive training, lifelogs would enable the prevention of cognitive and functional decline in elderly people.
More recently, egocentric cameras have been used to study human and animal cognition, human-human social interaction, human-robot interaction, human expertise in complex tasks.
Other applications include navigation/assistive technologies for the blind, monitoring and assistance of industrial workflows, and augmented reality interfaces.
See also
Eye tracking
Lifelog
Quantified self
Smartglasses
Sousveillance
References
Computer vision | Egocentric vision | [
"Engineering"
] | 1,198 | [
"Artificial intelligence engineering",
"Packaging machinery",
"Computer vision"
] |
56,249,296 | https://en.wikipedia.org/wiki/Quick%20Start%20Programme | The Quick Start Programme (also known as the QSP) is a fund administered by the United Nations Environment Programme. By Resolution I/4 of the First Session of the International Conference on Chemicals Management, it has been granted the responsibility to act as the financial arm of the Strategic Approach to International Chemicals Management.
The QSP has since grown into a fund of approximately US$123.2 million, with an investment portfolio of 184 projects in 108 different countries, including 54 Least Developed Countries and Small Island Developing States.
References
Organizations established in 2006
United Nations Environment Programme
Chemical safety | Quick Start Programme | [
"Chemistry"
] | 114 | [
"Chemical safety",
"Chemical accident",
"nan"
] |
56,249,726 | https://en.wikipedia.org/wiki/%C3%81d%C3%A1m%20Mikl%C3%B3si | Ádám Miklósi (born 25 September 1962) is a Hungarian ethologist, expert on dog cognition and behavior. He holds the position of professor and, between 2006 and 2024, head of the Ethology Department at the Faculty of Sciences of the Eötvös Loránd University in Budapest, Hungary. In 2016 he was elected as a corresponding member, in 2022 as a full member of the Hungarian Academy of Sciences. He is the co-founder and leader of the Family Dog Project, which aims to study human-dog interaction from an ethological perspective. In 2014 he published the 2nd edition of an academic volume entitled Dog Behaviour, Evolution, and Cognition by Oxford University Press
Bibliography
List of publications at the MTMT
List of publications at Google scholar
Books
Dog Behaviour, Evolution, and Cognition
The Dog - A Natural History
References
External links
Ádám Miklósi's cv at the Institute of Biology, Eötvös Loránd University
Family Dog Project
Newly elected members of the HAS, 2016
Canine Science Forum 2008
Canine Science Forum 2018
Canine Science Forum 2023
1962 births
Living people
Animal cognition writers
Academic staff of Eötvös Loránd University
Ethologists
Members of the Hungarian Academy of Sciences
Hungarian biologists | Ádám Miklósi | [
"Biology"
] | 247 | [
"Ethology",
"Behavior",
"Ethologists"
] |
56,249,865 | https://en.wikipedia.org/wiki/Dehalogenimonas%20formicexedens | Dehalogenimonas formicexedens is a Gram-negative, strictly anaerobic and non-spore-forming bacterium from the genus of Dehalogenimonas which has been isolated from contaminated groundwater in Louisiana in the United States.
References
External links
Type strain of Dehalogenimonas formicexedens at BacDive - the Bacterial Diversity Metadatabase
Bacteria described in 2017
Dehalococcoidetes | Dehalogenimonas formicexedens | [
"Biology"
] | 87 | [
"Bacteria stubs",
"Bacteria"
] |
56,250,451 | https://en.wikipedia.org/wiki/Arctur-2 | Arctur-2 is a supercomputer located in Slovenia which is used by scientists and industry professionals to run intensive workloads and computer simulations such as aerodynamics simulations and steel casting simulations.
The Arctur-2 High Performance Computer (HPC) is located in Nova Gorica (Slovenia) and was put into operation in early 2017.
Arctur-2 is a system built by Sugon and consists of 30 nodes, each with two Intel Xeon E5-2690v4 processors; 8 of these nodes are equipped with 4 Nvidia Tesla M60 GPUs each, and another 8 have a large memory capacity of 1024 GB per node.
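The node counts above can be tallied into aggregate cluster resources with a quick calculation. Note that the 14-core count per Xeon E5-2690v4 is an assumption taken from the processor's published specification, not stated in the text.

```python
# Aggregate resources of the Arctur-2 system as described above.
# CORES_PER_CPU = 14 is an assumed figure from the Xeon E5-2690v4's
# published specification, not stated in the article itself.

NODES = 30
CPUS_PER_NODE = 2
CORES_PER_CPU = 14              # assumed per-socket core count
GPU_NODES, GPUS_PER_NODE = 8, 4
BIG_MEM_NODES, BIG_MEM_GB = 8, 1024

total_cores = NODES * CPUS_PER_NODE * CORES_PER_CPU
total_gpus = GPU_NODES * GPUS_PER_NODE
big_mem_total_gb = BIG_MEM_NODES * BIG_MEM_GB

print(total_cores)       # 840 CPU cores
print(total_gpus)        # 32 Tesla M60 GPUs
print(big_mem_total_gb)  # 8192 GB in the big-memory partition
```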
The supercomputer is managed by Arctur.
References
GPGPU supercomputers
Supercomputing in Europe | Arctur-2 | [
"Technology"
] | 163 | [
"Supercomputing in Europe",
"Computing stubs",
"Supercomputing",
"Computer hardware stubs"
] |
56,251,479 | https://en.wikipedia.org/wiki/Anti-exhaustion%20hypothesis | The anti-exhaustion hypothesis is a possible explanation for the existence of large repertoires and the song switching behaviour exhibited in birds. This hypothesis states that muscle exhaustion occurring due to repeating song bouts can be avoided by switching to a different song in the bird's repertoire. The anti-exhaustion hypothesis therefore predicts that birds with larger repertoires are less susceptible to exhaustion because they can readily change the song that they are producing.
The anti-exhaustion hypothesis was first proposed by Marcel Lambrechts and André Dhondt in 1988 after they carried out a study using recordings from great tits, Parus major, during the dawn chorus. There have been several studies carried out in which results have contradicted the anti-exhaustion hypothesis. Recent studies have shown that there is no evidence that the anti-exhaustion hypothesis is the cause of large repertoires in birds. Since the proposal of the anti-exhaustion hypothesis, several hypotheses have been proposed to explain the existence of repertoires and song switching behaviour in birds, including the motivation hypothesis and the warm-up hypothesis.
Anti-exhaustion hypothesis in great tits
The great tit, Parus major, is a passerine bird belonging to the family Paridae. Passerines are commonly referred to as songbirds, and most passerines sing multiple species-specific songs that make up a repertoire. Birds can be ranked depending on how they perform the songs in their repertoire. On one side, there are birds that sing with eventual variety and have small repertoires, meaning that each song type in their repertoire is repeated before they switch to a different song type.
On the other side, there are birds that sing with immediate variety and have larger repertoires, meaning that they switch song types continuously. The great tit, in particular, sings with eventual variety and has a small repertoire, usually consisting of two to seven different song types. Songs can be broken down into several simpler components: they are made up of bouts, which last from about 30 to 600 seconds.
A bout is a stereotyped repetition of one to five notes, which are called a phrase. Between two and 20 phrases are sung in short bursts, which are called strophes. Between strophes are periods of silence, referred to as inter-strophe pauses. Therefore, a great tit sings several strophes of one song type before switching to a bout of another song type from its repertoire.
Biologically, having a large repertoire is advantageous in territorial defence and larger repertoires are also correlated with higher reproductive success. Marcel Lambrechts and André Dhondt proved that average strophe length and repertoire size can be used as proxies for male quality. Male quality refers to the fitness of the bird, measuring how well it survives and its reproductive success. Lambrechts and Dhondt set out to find the answers to four questions also pertaining to percentage performance time and male quality in the great tit.
The first of four questions they sought out to determine was if high quality males, those with a higher fitness, have a higher percentage performance time in their bouts compared to lower quality males, with percentage performance time meaning the percentage of time during which song was being produced. The second was if high quality males sing longer bouts than low quality males. The third answer they sought was whether the percentage performance time changed or stayed the same during a bout. The final question they sought the answer to was whether or not the percentage performance time changed after a bird switches song types.
In order to determine the answer to the four proposed questions, Lambrechts and Dhondt recorded male great tits from 1983–1986 in two plots (L and B) in the Peerdsbos and another plot (U) on campus at the University of Antwerp in Wilrijk. They came up with the anti-exhaustion hypothesis as an explanation for the results obtained from their study. Their results showed that high quality males, as predicted, had a higher percentage performance time than low quality males. Lambrechts and Dhondt also found that all great tits can show a systematic decrease in the percentage performance time during a bout, which is also known as drift.
The finding that drove their hypothesis was that the males were able to recover a high percentage performance by switching song types. If a male was producing shorter strophes and having longer inter-strophe pauses (low percentage performance), then by switching to a different song type the bird would once again be able to produce longer strophes and have shorter inter-strophe pauses. Lambrechts and Dhondt proposed the anti-exhaustion hypothesis, which provided both a functional and causal explanation for song switching behaviour in birds along with the existence of song repertoires.
The anti-exhaustion hypothesis stated that when it is necessary for a bird to sing for a prolonged period of time at a high rate, it must continuously switch song types. The hypothesis was focused on the idea that extended bouts of singing would lead to neuromuscular exhaustion because of the repetitive and stereotyped fashion of song bouts. Lambrechts and Dhondt proposed that the male great tits would have longer inter-strophe pauses towards the end of a song due to this exhaustion. By switching song types, the birds would use alternative sound-producing muscles and nerves, and would therefore be able to recover a high percentage performance once again.
Anti-exhaustion hypothesis in blue tits
A study completed by Angelika Poesel and Bart Kempenaers (2002) aimed to explain drift during blue tit (Cyanistes caeruleus) song and among other Parus species, and to explain their findings in relation to the anti-exhaustion hypothesis and the motivation hypothesis. They studied a group of 20 male blue tits living in mixed deciduous woods at Kolbeterberg in Vienna, Austria. The results from their study showed that male blue tits did show a decrease in performance output (percentage performance time) the longer they performed one song type, which was illustrated by an increase in inter-strophe pauses.
In order to confirm the anti-exhaustion hypothesis, the two factors proposed by Lambrechts and Dhondt that influenced a low percentage performance time needed to be confirmed. These two factors are the initial level of song output (the greater the initial output, the greater the drift would be), and the number of switches between song types (after a switch in song types, performance output was increased). The results from this study could conclude that a repeated stereotyped song is difficult to maintain over a long period of time, which supported the anti-exhaustion hypothesis.
Motivation hypothesis
The motivation hypothesis was a competing hypothesis of the anti-exhaustion hypothesis. The motivation hypothesis, proposed by Weary in 1988, explained that drift may be due to a lack of motivation to keep singing the same song, not because of neuromuscular exhaustion. This study also worked with great tits, playing song to them during the day. Weary suggested that if drift was due to a lack of motivation, then if a bird was presented with the song of a rival, for example, then the bird should be able to increase its song output because of the motivational stimulus.
Weary also argued that if drift was caused by neuromuscular exhaustion, then birds would not be able to increase song output if they did not switch song types, which was not the case in all birds.
Lambrechts argued back that the test completed by Weary was not appropriate, as it was completed during the day, not during the dawn chorus when song output is at its maximum. In order to solve this conflict, Weary and Lambrechts got together and performed a series of tests during the day and during the dawn chorus testing the responsiveness to playback.
Their results showed that both hypotheses were supported, with drift being more common during the dawn chorus (supporting the anti-exhaustion hypothesis), but also that the increase in song output was similar during periods of high and low output (supporting the motivation hypothesis). These results concluded that the anti-exhaustion hypothesis and the motivation hypothesis are both possible and can both occur at the same time.
Warm-up hypothesis
The warm-up hypothesis proposed by Schraft et al. (2016) seems to contradict the anti-exhaustion hypothesis. This hypothesis predicts that a bird's singing performance improves with the number of songs it has sung that day, known as recent practice, regardless of song type. This study was carried out between March and June 2012 at the Cabo Rojo National Wildlife Refuge in Puerto Rico. Male Adelaide's warblers, Setophaga adelaidae, recorded during their breeding season in their resident woods, were found to have an average repertoire of 29 songs per male.
Schraft et al. tested several hypotheses, each with a different prediction and a different independent variable. The song type specific hypothesis predicted performance would decrease with consecutive repetitions of a song type, the independent variable being the run number. This hypothesis is consistent with the anti-exhaustion hypothesis. The song type general hypothesis predicted performance would increase with a higher latency period between songs, the independent variable being latency.
The warm-up hypothesis predicted performance would increase with the number of songs sung, the independent variable being order. The "Type I singing showcases higher performance" hypothesis predicted that Type I songs would have higher performance than Type II songs, the independent variable being song type. The vocal interaction hypothesis predicted performance would increase when countersinging. It was also hypothesized that time-dependent factors influenced performance, the independent variable being time.
The results from the study showed that singing performance improves with time throughout the morning. This was explained only by the warm-up hypothesis, the cumulative number of songs that the bird had sung that morning. The other hypotheses were found to have no effect on singing performance, including the song type specific hypothesis which is consistent with the anti-exhaustion hypothesis. Schraft et al. proposed that the anti-exhaustion hypothesis and the warm-up hypothesis are not mutually exclusive as the birds that warm up still may need to switch song types because of fatigue.
See also
Bird vocalization
References
Bird sounds
Ethology | Anti-exhaustion hypothesis | [
"Biology"
] | 2,076 | [
"Behavioural sciences",
"Ethology",
"Behavior"
] |
56,251,626 | https://en.wikipedia.org/wiki/Ministry%20of%20Energy%20and%20Petroleum%20%28Sudan%29 | The Ministry of Ministry of Energy and Petroleum (MOP) (), previously known as the Ministry of Oil and Gas (); was the governmental body in the Sudan responsible for developing and implementing the government policy for exploiting the oil and gas resources in Sudan in 2017.
References
External links
Ministry of Energy and Petroleum
Oil and Gas
Economy of Sudan
Energy ministries | Ministry of Energy and Petroleum (Sudan) | [
"Engineering"
] | 73 | [
"Energy organizations",
"Energy ministries"
] |
56,253,531 | https://en.wikipedia.org/wiki/NGC%204544 | NGC 4544 is an edge-on spiral galaxy located about 52 million light-years away in the constellation Virgo. NGC 4544 was discovered by astronomer Edward Swift on April 27, 1887. NGC 4544 is a member of the Virgo Cluster.
See also
List of NGC objects (4001–5000)
References
External links
Virgo (constellation)
Spiral galaxies
4544
41958
7756
Astronomical objects discovered in 1887
Virgo Cluster
Discoveries by Edward Swift | NGC 4544 | [
"Astronomy"
] | 94 | [
"Virgo (constellation)",
"Constellations"
] |
56,254,831 | https://en.wikipedia.org/wiki/Star%20Wars%3A%20Rebellion%20%28board%20game%29 | Star Wars: Rebellion is an asymmetrical strategy board game designed by Corey Konieczka and published by Fantasy Flight Games in 2016. The game's setting is inspired by the original Star Wars trilogy. Players control either the Galactic Empire or the Rebel Alliance. Each player pursues a different path to victory, with the Galactic Empire playing seeking to find the Rebel Alliance player's base and destroy it, while the Rebel Alliance player attempts to avoid detection by the Galactic Empire and sabotage their efforts. The game received highly positive reviews and won numerous awards.
Game Overview
Star Wars: Rebellion lets players reenact the "epic conflict" between the Rebel Alliance and the Galactic Empire. Players take control of various prominent characters from the Star Wars saga, sending them on secret missions and leading troops in combat across the galaxy.
The two factions have different strategies and objectives. The Rebel Alliance is vastly outnumbered and cannot survive a head-on fight; instead, it must remain hidden and rely on subterfuge, guerrilla tactics, and diplomacy to undermine the Empire. The Rebels win the game by gaining enough support to start a full-scale galactic revolt and overthrow the Empire.
The Galactic Empire is a vast, tyrannical regime that rules many systems throughout the galaxy with an iron fist. The Imperials can easily build terrifying weapons of war in large quantities. Although their forces are many, their only chance of extinguishing the spark of rebellion is to spread throughout the galaxy, quell uprisings, and search for the hidden Rebel base. They win the game by finding where the Rebel base is located and conquering it.
Gameplay
Each game round is split into three phases: the assignment phase, the command phase, and the refresh phase. During the assignment phase each player, starting with the Rebel player, assigns their leaders to missions. Next, in the command phase, each player takes turns revealing missions or activating systems to move units. Finally, in the refresh phase each player retrieves all their leaders back to their leader pool and draws two mission cards, the Imperial player draws two probe cards, the Rebel player draws an objective card, the time marker advances one space, and units are deployed. The Rebel player moves their reputation marker towards the time marker by completing objectives.
If, usually during the command phase, a system contains both Rebel and Imperial units in either the ground or space arenas, then combat takes place until both arenas contain only one side's units. Combat involves rolling dice to determine hits, and playing tactic cards to perform additional actions in attack and in defense.
The game ends when either the Imperial player wins by conquering the Rebel base's system or the Rebel player wins by having the reputation marker and time marker in the same space of the time track.
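The dice rolls in the combat step can be illustrated with a toy Monte Carlo sketch. The hit probability used below is a hypothetical placeholder for illustration, not the distribution of the game's actual custom dice.

```python
import random

def expected_hits(dice: int, p_hit: float, trials: int = 100_000) -> float:
    """Estimate the mean number of hits when rolling `dice` combat dice,
    each showing a hit with probability p_hit (hypothetical value)."""
    total = 0
    for _ in range(trials):
        total += sum(1 for _ in range(dice) if random.random() < p_hit)
    return total / trials

random.seed(0)
# With 5 dice and a hypothetical 1-in-3 hit face, the mean is about 5/3.
print(round(expected_hits(5, 1 / 3), 2))
```

Tactic cards, which let players modify or reroll results, would shift these averages further; the sketch only covers the basic roll-for-hits step.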
Components
A Game Board (split in 2 halves)
Various cards used for player objectives, missions, actions, tactics, and searching for the rebel base
Faction mats for each player
Plastic miniatures representing the Imperial and Rebel fleets
Cardboard standees representing faction leaders
Various cardboard markers used to represent and track damage, destroyed systems, faction loyalty, character status, and game progression
Custom dice used during in-game combat
A Rules Reference and a Learn to Play booklet
Expansion
The first expansion to Star Wars: Rebellion is Rise of the Empire, released in 2017. This expansion focused on adding characters, units, and missions from Rogue One; it also overhauled the combat system, calling the new system Cinematic Combat. The expansion was nominated for the 2017 Golden Geek Best Board Game Expansion.
Awards
The game has received multiple awards and honors:
2016 UK Games Expo Award for best Board Games with Miniatures.
Golden Geek Board Game of the Year Nominee
2016 Golden Geek Best 2-Player Board Game Winner
2017 Goblin Magnifico Nominee
2016 Tric Trac Nominee
2016 International Gamers Award - General Strategy: Two-players Nominee
2016 Cardboard Republic Immersionist Laurel Nominee
2016 Best Science Fiction or Fantasy Board Game Nominee
Reception
The game has received positive reviews. Ars Technica has noted that the game "has a vibrant game system at its core", Polygon has described it as an "epic game, faithful to the spirit of the original films" that "is worth your time", and BGL has mentioned that the game "delivers the most complete Star Wars experience in board games". Meeple Mountain has also positively commented that "Star Wars: Rebellion not only fits the theme, it NAILS it". As of January 2022, Star Wars: Rebellion is ranked among the top ten games on BoardGameGeek, with a mean rating of 8.4/10 according to 26,000 ratings.
References
External links
Star Wars: Rebellion product page at Fantasy Flight Games
Asymmetric board games
Board games introduced in 2016
Fantasy Flight Games games
Science fiction board wargames
Space opera board games
Rebellion | Star Wars: Rebellion (board game) | [
"Physics"
] | 970 | [
"Asymmetric board games",
"Symmetry",
"Asymmetry"
] |
56,255,412 | https://en.wikipedia.org/wiki/Xylit | Xylit (from xylon, "silk") is a waste product generated by the mining of lignite. As in peat, embedded iron structures do not become completely sedimented. Its density is around 250 kg/m3.
Uses
Despite its very low heat content, even in a dried state, it is used as a fuel for heat generation. It has been used in France as compost, and is sometimes used in potting soil and as a substrate for horticulture. Because it is more elastic and robust than wood, it can be used as a good substitute for bricks.
Its unique structure, which traps nutrients and pollutants, as well as its high specific surface area (encouraging trickling filter development) and its exceptional longevity (30 years), allow it to be used as a filter medium in some decentralized wastewater systems.
Commercial uses
The Belgian company uses Xylit as filtering media in its X-Perco
Aquaterra Solutions uses Xylit for riverbank stabilization
See also
Lignite
Peat
References
This article is partially translated from German and French Wikipedia articles.
Coal
Organic minerals
Sanitation
Water pollution | Xylit | [
"Chemistry",
"Environmental_science"
] | 226 | [
"Organic compounds",
"Organic minerals",
"Water pollution"
] |
56,255,499 | https://en.wikipedia.org/wiki/Dehalogenimonas%20alkenigignens | Dehalogenimonas alkenigignens is a strictly anaerobic bacterium from the genus of Dehalogenimonas which has been isolated from groundwater from Louisiana in the United States.
References
External links
Type strain of Dehalogenimonas alkenigignens at BacDive - the Bacterial Diversity Metadatabase
Bacteria described in 2013
Dehalococcoidetes

Alexey Georgiyevich Postnikov

Alexey Georgiyevich Postnikov (12 June 1921 – 22 March 1995) was a Russian mathematician who worked on analytic number theory. He is known for the Postnikov character formula, which expresses the value of a Dirichlet character by means of a trigonometric function of a polynomial with rational coefficients.
Postnikov's father was a high-ranking economic functionary who was arrested in 1938 and became a victim of Stalin's purges. Alexey Postnikov studied at Lomonosov University from 1939; his studies were interrupted by World War II, so that his degree was delayed until 1946. In 1949 he received his Russian candidate degree (Ph.D.) from Lomonosov University under Alexander Gelfond with the thesis On the differential independence of Dirichlet series. From 1950 Postnikov worked at the Steklov Institute in Moscow in the department of number theory, led by Ivan Vinogradov, who exerted a great influence on him; Postnikov was also influenced by the Leningrad school of number theory under Yuri Linnik. In 1955 he published his famous formula, now known as the Postnikov character formula. This was also the subject of his Russian doctorate (higher doctoral degree) in 1956, Investigation of the method of Vinogradov for trigonometric sums (in Russian). He was later a senior scientist at the Steklov Institute.
He also dealt with probability theory and Tauberian theorems in analysis.
In 1966, jointly with Vinogradov, he was a plenary speaker at the ICM in Moscow with the talk Recent developments in analytic number theory.
Selected publications
Introduction to Analytic Number Theory, American Mathematical Society, Translation of Mathematical Monographs 68, 1988 (translated from Russian original published by Nauka in Moscow in 1971)
Arithmetical modelling of random processes, Trudy Mat. Inst. Steklov 1960 (in Russian)
Ergodic aspects of the theory of congruences and of the theory of Diophantine approximations, Trudy Mat. Inst. Steklov 1966 (Russian), English translation Proc. Steklov Inst. Math. 1967
Tauberian theory and its applications, Trudy Mat. Inst. Steklov 142, 1979 (Russian), English translation Proc. Steklov Inst. Math. 1980
References
External links
mathnet.ru
Obituary (Russian) in Russian Mathematical Surveys 1998, Number Theory Web
1921 births
1995 deaths
20th-century Russian mathematicians
Soviet mathematicians
Scientists from Moscow
Number theorists
Moscow State University alumni

Transcriptome instability

Transcriptome instability is a genome-wide, pre-mRNA splicing-related characteristic of certain cancers. In general, pre-mRNA splicing is dysregulated in a high proportion of cancerous cells. For certain types of cancer, such as colorectal and prostate cancer, the number of splicing errors has been shown to vary greatly between individual cancers, a phenomenon referred to as transcriptome instability. Transcriptome instability correlates significantly with reduced expression levels of splicing factor genes. Mutation of DNMT3A contributes to the development of hematologic malignancies, and DNMT3A-mutated cell lines exhibit transcriptome instability compared to their isogenic wildtype counterparts.
References
Gene expression
Cancer

Testbed aircraft

A testbed aircraft is an aeroplane, helicopter or other kind of aircraft intended for flight research or for testing aircraft concepts or on-board equipment. It may be specially designed or modified from a serial production aircraft.
Use of testbed aircraft
For example, in the development of new aircraft engines, these are fitted to a testbed aircraft for flight testing before certification. New instrument wiring and equipment, a fuel system and piping, structural alterations to the wings, and other adjustments are needed for this adaptation.
The Folland Fo.108 (nicknamed the "Folland Frightful") was a dedicated engine testbed aircraft in service from 1940. The aircraft had a mid-fuselage cabin for test instrumentation and observers. Twelve were built and provided to British aero-engine companies. A large number of aircraft-testbeds have been produced and tested since 1941 in the USSR and Russia by the Gromov Flight Research Institute.
AlliedSignal, Honeywell Aerospace, Pratt & Whitney, and other aerospace companies used Boeing jetliners as flying testbed aircraft.
See also
Index of aviation articles
List of experimental aircraft
List of aerospace flight test centres
Development mule
Iron bird (aviation)
References
Aerospace engineering
Experimental aircraft
Aviation industry
Civil aviation
Military aviation
Aircraft operations
History of aviation

Single-electron transistor

A single-electron transistor (SET) is a sensitive electronic device based on the Coulomb blockade effect. In this device, electrons flow through a tunnel junction between the source/drain and a quantum dot (conductive island). Moreover, the electrical potential of the island can be tuned by a third electrode, known as the gate, which is capacitively coupled to the island. The conductive island is sandwiched between two tunnel junctions, each modelled by a capacitor (C1 and C2) and a resistor (R1 and R2) in parallel.
History
A new subfield of condensed matter physics began in 1977 when David Thouless pointed out that, when made small enough, the size of a conductor affects its electronic properties. This was followed in the 1980s by research in mesoscopic physics, based on the submicron-sized systems investigated. Thus began research related to the single-electron transistor.
The first single-electron transistor based on the phenomenon of the Coulomb blockade was reported in 1986 by the Soviet scientists K. K. Likharev and D. V. Averin. A couple of years later, T. Fulton and G. Dolan at Bell Labs in the US fabricated such a device and demonstrated how it works. In 1992 Marc A. Kastner demonstrated the importance of the energy levels of the quantum dot. In the late 1990s and early 2000s, the Russian physicists S. P. Gubin, V. V. Kolesov, E. S. Soldatov, A. S. Trifonov, V. V. Khanin, G. B. Khomutov, and S. A. Yakovenko were the first to demonstrate a molecule-based SET operational at room temperature.
Relevance
The increasing relevance of the Internet of things and of healthcare applications makes the power consumption of electronic devices ever more important. For this reason, ultra-low power consumption is one of the main research topics in current electronics. The enormous number of tiny computers used in everyday life (e.g. mobile phones and home electronics) requires significantly reduced power consumption of the implemented devices. In this scenario, the SET has appeared as a suitable candidate to achieve this low power range with a high level of device integration.
Applicable areas include: super-sensitive electrometers, single-electron spectroscopy, DC current standards, temperature standards, detection of infrared radiation, voltage state logics, charge state logics, programmable single-electron transistor logic.
Device
Principle
The SET has, like the FET, three electrodes: source, drain, and gate. The main technological difference between the transistor types is in the channel concept. While the channel changes from insulated to conductive with applied gate voltage in the FET, the SET is always insulated. The source and drain are coupled through two tunnel junctions, separated by a metallic or semiconductor-based quantum nanodot (QD), also known as the "island". The electrical potential of the QD can be tuned with the capacitively coupled gate electrode to alter its resistance; by applying a positive gate voltage, the QD changes from the blocking to the non-blocking state and electrons start tunnelling to the QD. This phenomenon is known as the Coulomb blockade.
The current I from source to drain follows Ohm's law when a bias voltage V_SD is applied, and equals I = V_SD/R, where the main contribution to the resistance R comes from the tunnelling effects when electrons move from the source to the QD and from the QD to the drain. The gate voltage regulates the resistance of the QD, which regulates the current. This is exactly the same behaviour as in regular FETs. However, when moving away from the macroscopic scale, quantum effects affect the current.
In the blocking state all lower energy levels are occupied at the QD and no unoccupied level is within tunnelling range of electrons originating from the source (green 1.). When an electron arrives at the QD (2.) in the non-blocking state it will fill the lowest available vacant energy level, which will raise the energy barrier of the QD, taking it out of tunnelling distance once again. The electron will continue to tunnel through the second tunnel junction (3.), after which it scatters inelastically and reaches the drain electrode Fermi level (4.).
The energy levels of the QD are evenly spaced with a separation of ΔE. This gives rise to a self-capacitance C of the island, defined as C = e²/ΔE. To achieve the Coulomb blockade, three criteria need to be met:
The bias voltage must be lower than the elementary charge divided by the self-capacitance of the island: V_bias < e/C.
The thermal energy in the source contact plus the thermal energy in the island, i.e. k_BT, must be below the charging energy: k_BT < e²/(2C), otherwise the electron will be able to pass the QD via thermal excitation.
The tunnelling resistance, R_t, should be greater than h/e² (≈ 25.8 kΩ), which is derived from Heisenberg's uncertainty principle. The corresponding tunnelling time Δt = R_tC refers to the junction resistances shown as R1 and R2 in the schematic figure of the internal electrical components of the SET. The time of electron tunnelling through the barrier itself is assumed to be negligibly small in comparison with the other time scales; this assumption is valid for tunnel barriers used in single-electron devices of practical interest.
If the resistance of all the tunnel barriers of the system is much higher than the quantum resistance h/e², it is enough to confine the electrons to the island, and it is safe to ignore coherent quantum processes consisting of several simultaneous tunnelling events, i.e. co-tunnelling.
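The three criteria above can be checked numerically. The sketch below plugs in an illustrative island capacitance of 1 aF (an assumed value, not from any specific device) and room temperature:

```python
# Numeric sanity check of the three Coulomb-blockade criteria described above.
# The island capacitance (1 aF) and the temperature are illustrative assumptions.
e = 1.602176634e-19      # elementary charge (C)
k_B = 1.380649e-23       # Boltzmann constant (J/K)
h = 6.62607015e-34       # Planck constant (J*s)

C_island = 1e-18         # assumed island self-capacitance: 1 aF
T = 300.0                # temperature (K)

# Criterion 1: bias voltage must stay below e / C
V_max = e / C_island
# Criterion 2: charging energy e^2 / (2C) must exceed the thermal energy k_B * T
E_charging = e**2 / (2 * C_island)
E_thermal = k_B * T
# Criterion 3: tunnel resistance must exceed the resistance quantum h / e^2
R_quantum = h / e**2

print(f"max bias voltage: {V_max * 1e3:.1f} mV")            # ~160 mV
print(f"charging energy:  {E_charging / e * 1e3:.1f} meV")  # ~80 meV
print(f"thermal energy:   {E_thermal / e * 1e3:.1f} meV")   # ~26 meV
print(f"blockade possible at 300 K: {E_charging > E_thermal}")
print(f"minimum tunnel resistance: {R_quantum / 1e3:.1f} kOhm")  # ~25.8 kOhm
```

With these numbers the charging energy (~80 meV) exceeds the room-temperature thermal energy (~26 meV), so a 1 aF island sits near the limit of room-temperature operation.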
Theory
The background charge of the dielectric surrounding the QD is indicated by q0. n1 and n2 denote the number of electrons tunnelling through the two tunnel junctions, and the total number of electrons on the island is n = n1 − n2. The corresponding charges at the tunnel junctions can be written as q1 = C1V1 and q2 = C2V2, so that the island charge is q = q2 − q1 = −ne + q0,
where C1 and C2 are the parasitic leakage capacitances of the tunnel junctions. Given the bias voltage V, one can solve for the voltages at the tunnel junctions: V1 = (C2V + ne − q0)/C_Σ and V2 = (C1V − ne + q0)/C_Σ, with C_Σ = C1 + C2.
The electrostatic energy of a double-connected tunnel junction (like the one in the schematic picture) is E_C = q1²/(2C1) + q2²/(2C2) = (C1C2V² + q²)/(2C_Σ).
The work performed during electron tunnelling through the first and second junctions is, respectively, W1 = n1eVC2/C_Σ and W2 = n2eVC1/C_Σ.
Given the standard definition of free energy, F = E_C − W, where W is the work performed by the voltage source, we find the free energy of a SET as F(n1, n2) = (C1C2V² + q²)/(2C_Σ) − eV(n1C2 + n2C1)/C_Σ.
For further consideration, it is necessary to know the change in free energy at zero temperature at both tunnel junctions: ΔF1± = F(n1 ± 1, n2) − F(n1, n2) = (e/C_Σ)[e/2 ± (en − q0 − VC2)] and ΔF2± = F(n1, n2 ± 1) − F(n1, n2) = (e/C_Σ)[e/2 ∓ (en − q0 + VC1)].
The probability of a tunnel transition will be high when the change in free energy is negative. The leading term in the expressions above keeps ΔF positive as long as the applied voltage does not exceed a threshold value, which depends on the smallest capacitance in the system. In general, for an uncharged QD (n = 0 and q0 = 0) and symmetric junctions (C1 = C2 = C) we have the condition |V| > e/C_Σ = e/(2C)

(that is, the threshold voltage is reduced by half compared with a single junction).
When the applied voltage is zero, the Fermi level at the metal electrodes will be inside the energy gap. When the voltage increases to the threshold value, tunnelling from left to right occurs, and when the reversed voltage increases above the threshold level, tunnelling from right to left occurs.
The existence of the Coulomb blockade is clearly visible in the current–voltage characteristic of a SET (a graph showing how the drain current depends on the bias voltage). At low bias voltages (in absolute value), the drain current is zero, and when the voltage increases above the threshold, the junctions behave like an ohmic resistance (both junctions having the same permeability) and the current increases linearly. The background charge in the dielectric can not only reduce, but completely block the Coulomb blockade.
In the case where the permeability of the tunnel barriers is very different, a stepwise I–V characteristic of the SET arises. An electron tunnels to the island through the first junction and is retained on it, due to the high tunnel resistance of the second junction. After a certain period of time, the electron tunnels through the second junction; however, this process causes a second electron to tunnel to the island through the first junction. Therefore, most of the time the island is charged with an excess charge. For the case with the inverse dependence of the permeabilities, the island will be unpopulated and its charge will decrease stepwise. Only now can we understand the principle of operation of a SET. Its equivalent circuit can be represented as two tunnel junctions connected in series via the QD; perpendicular to the tunnel junctions, another control electrode (the gate) is connected. The gate electrode is connected to the island through a control capacitor C_g. The gate electrode can change the background charge in the dielectric, since the gate additionally polarizes the island, so that the island charge becomes q = −ne + q0 + C_g(V_g − V2).
Substituting this value into the formulas found above, we find new values for the voltages at the junctions: V1 = [(C2 + C_g)V − C_gV_g + ne − q0]/C_Σ and V2 = [C1V + C_gV_g − ne + q0]/C_Σ, with C_Σ = C1 + C2 + C_g.
The electrostatic energy should include the energy stored on the gate capacitor, and the work performed by the gate voltage should be taken into account in the free energy.
At zero temperature, only transitions with negative free energy are allowed: ΔF1 < 0 or ΔF2 < 0. These conditions can be used to find areas of stability in the (V_g, V) plane.
With increasing voltage at the gate electrode, while the supply voltage is maintained below the Coulomb blockade voltage (i.e. V < e/C_Σ), the drain output current oscillates with a period e/C_g. These areas correspond to failures in the field of stability. The oscillations of the tunnelling current occur in time, and the oscillations in two series-connected junctions have a periodicity in the gate control voltage. The thermal broadening of the oscillations increases considerably with temperature.
Temperature dependence
Various materials have successfully been tested for creating single-electron transistors. However, temperature is a major factor limiting implementation in available electronic devices. Most metallic-based SETs work only at extremely low temperatures.
As mentioned in criterion 2 of the list above, the electrostatic charging energy must be greater than the thermal energy k_BT to prevent thermal fluctuations from affecting the Coulomb blockade. This in turn implies that the maximum allowed island capacitance is inversely proportional to the temperature, and needs to be below 1 aF to make the device operational at room temperature.
The island capacitance is a function of the QD size, and a QD diameter smaller than 10 nm is preferable when aiming for operation at room temperature. This in turn puts severe constraints on the manufacturability of integrated circuits because of reproducibility issues.
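As a rough illustration of this inverse proportionality, the sketch below estimates the largest island capacitance that still permits blockade at several temperatures; the safety factor of 10 between charging and thermal energy is an assumption, not a standard value:

```python
# Rough estimate, following the argument above, of the largest island
# capacitance that still allows Coulomb blockade at a given temperature.
# The safety factor of 10 (charging energy >> thermal energy) is an assumption.
e = 1.602176634e-19      # elementary charge (C)
k_B = 1.380649e-23       # Boltzmann constant (J/K)

def max_capacitance(T, safety=10.0):
    """Largest C with e^2 / (2C) > safety * k_B * T, in farads."""
    return e**2 / (2 * safety * k_B * T)

for T in (4.2, 77.0, 300.0):   # liquid helium, liquid nitrogen, room temperature
    print(f"T = {T:6.1f} K -> C_max = {max_capacitance(T) * 1e18:.2f} aF")
```

At room temperature this gives roughly 0.3 aF, consistent with the sub-attofarad requirement stated above, while at liquid-helium temperature capacitances tens of times larger still suffice.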
CMOS compatibility
The level of the electrical current of the SET can be amplified enough to work with available CMOS technology by generating a hybrid SET–FET device.
The EU-funded project IONS4SET (#688072), started in 2016, works on the manufacturability of SET–FET circuits operating at room temperature. The main goal of this project is to design a SET-manufacturability process flow for large-scale operations, seeking to extend the use of hybrid SET–CMOS architectures. To assure room-temperature operation, single dots of diameters below 5 nm have to be fabricated and located between source and drain with tunnel distances of a few nanometers. Up to now there is no reliable process flow to manufacture a hybrid SET–FET circuit operating at room temperature. In this context, the project explores a more feasible way to manufacture the SET–FET circuit by using pillar dimensions of approximately 10 nm.
See also
Coulomb blockade
MOSFET
Transistor model
References
Nanoelectronics
Transistor types

Fibrolane

Fibrolane was the brand name of a regenerated protein fibre produced by Courtaulds Ltd. in Coventry (UK) during the 1940s, 1950s and early 1960s. It was made from the milk protein casein dissolved in alkali and regenerated by spinning the resulting dope into an acid bath, using technology similar to that of viscose rayon production.
The fibre was produced as staple, tow or stretch-broken tow ("tops"), mainly for blending with wool. It had a warm, soft handle and could be converted into fine yarns and soft fabrics. Small amounts of Fibrolane could be added to wool to improve the efficiency of felt production.
References
Phosphoproteins
Organic polymers
Synthetic fibers

17α-Methyl-19-norprogesterone

17α-Methyl-19-norprogesterone (developmental code name H-3510), also known as 17α-methyl-19-norpregn-4-ene-3,20-dione, is a progestin which was never marketed. It is a derivative of progesterone, and is the combined derivative of 17α-methylprogesterone and 19-norprogesterone. The drug is the parent compound of a subgroup of the 19-norprogesterone group of progestins, which includes demegestone (the δ9 derivative), promegestone (the δ9 and 21-methyl derivative), and trimegestone (the δ9, 21-methyl, and 21-hydroxyl derivative).
See also
Gestronol (17α-hydroxy-19-norprogesterone)
References
Abandoned drugs
Diketones
Norpregnanes
Progestogens

Glutamate-sensitive fluorescent reporter

A glutamate-sensitive fluorescent reporter is a genetically engineered fluorescent protein that changes its fluorescence when bound to the neurotransmitter glutamate. Glutamate-sensitive fluorescent reporters (iGluSnFR, colloquially pronounced 'glue sniffer') are used to monitor the activity of presynaptic terminals by fluorescence microscopy. GluSnFRs are a class of optogenetic sensors used in neuroscience research. In brain tissue, two-photon microscopy is typically used to monitor GluSnFR fluorescence.
Design
The widely used iGluSnFR consists of a circularly permuted enhanced green fluorescent protein (cpEGFP) fused to a bacterial glutamate-binding protein (GluBP). When GluBP binds a glutamate molecule, it changes its shape, pulling the EGFP barrel together and increasing its fluorescence. A specific peptide segment (PDGFR) is included to target the sensor to the outside of the cell membrane. In the more recent version by Aggarwal et al. (2022), the researchers fused iGluSnFR to two additional anchoring domains: a glycosylphosphatidylinositol (GPI) anchor, and a modified form of the cytosolic C-terminal domain of Stargazin with a PDZ ligand.
History
The first genetically encoded fluorescent glutamate sensors (FLIPE, GluSnFR and SuperGluSnFR) were constructed by attaching cyan fluorescent protein (CFP) and yellow fluorescent protein (YFP) to a bacterial glutamate-binding protein (GluBP). Glutamate binding changed the distance between CFP and YFP, changing the efficiency of energy transfer (FRET) between the two fluorophores. A breakthrough in visualizing glutamate release was achieved with iGluSnFR, a single-fluorophore glutamate sensor based on EGFP that produces a ~5-fold increase in fluorescence. To measure synaptic transmission at high frequencies, iGluSnFR variants with accelerated kinetics have recently been developed.
References
Neuroscience
Membrane proteins
Fluorescent proteins
Genetic engineering

Asus Xonar

Asus Xonar is a lineup of PC sound cards by the Taiwanese electronics manufacturer ASUS.
The lineup currently comprises the following models:
References
External links
ASUS sound cards
Asus products

Cavity switch

A cavity switch is a device that modulates cavity properties in the time domain. It is known as Q-switching if the quality factor of the cavity is under modulation. Other properties, such as the cavity mode volume, resonant frequency, phase delay, and optical local density of states, can also be switched or modulated. Cavity switches are mainly used in telecommunications and in quantum electrodynamics studies.
See also
Q-switching
References
Laser science
Optoelectronics

Clearing denominators

In mathematics, the method of clearing denominators, also called clearing fractions, is a technique for simplifying an equation that equates two expressions, each of which is a sum of rational expressions – including simple fractions.
Example
Consider the equation

x/6 + y/(15z) = 1.

The smallest common multiple of the two denominators 6 and 15z is 30z, so one multiplies both sides by 30z:

5xz + 2y = 30z.

The result is an equation with no fractions.

The simplified equation is not entirely equivalent to the original. For when we substitute y = 0 and z = 0 in the last equation, both sides simplify to 0, so we get 0 = 0, a mathematical truth. But the same substitution applied to the original equation results in x/6 + 0/0 = 1, which is mathematically meaningless.
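This behaviour can be spot-checked numerically. The sketch below uses the illustrative equation x/6 + y/(15z) = 1 and its cleared form 5xz + 2y = 30z:

```python
# Numeric spot-check: clearing denominators preserves solutions wherever the
# original equation is defined, but the cleared form also admits points where
# the original is undefined. The equation here is an illustrative choice.
def original(x, y, z):
    # x/6 + y/(15 z) -- raises ZeroDivisionError when z == 0
    return x/6 + y/(15*z)

def cleared(x, y, z):
    # 5 x z + 2 y = 30 z, obtained by multiplying through by 30 z
    return 5*x*z + 2*y == 30*z

# A genuine solution: x = 6, y = 0, z arbitrary (6/6 + 0 = 1).
assert abs(original(6, 0, 2.0) - 1) < 1e-12 and cleared(6, 0, 2.0)

# The spurious point y = 0, z = 0: the cleared equation is satisfied ...
assert cleared(1, 0, 0)
# ... but the original expression is undefined there.
try:
    original(1, 0, 0)
except ZeroDivisionError:
    print("original equation undefined at z = 0")
```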
Description
Without loss of generality, we may assume that the right-hand side of the equation is 0, since any equation E1 = E2 may equivalently be rewritten in the form E1 − E2 = 0.
So let the equation have the form

a1/b1 + a2/b2 + ... + an/bn = 0.

The first step is to determine a common denominator b of these fractions – preferably the least common denominator, which is the least common multiple of the bi.

This means that each bi is a factor of b, so b = bici for some expression ci that is not a fraction. Then

ai/bi = (aici)/b,

provided that ci does not assume the value 0 – in which case b also equals 0.

So we have now

(a1c1 + a2c2 + ... + ancn)/b = 0.

Provided that b does not assume the value 0, the latter equation is equivalent to

a1c1 + a2c2 + ... + ancn = 0,

in which the denominators have vanished.

As shown by the provisos, care has to be taken not to introduce zeros of b – viewed as a function of the unknowns of the equation – as spurious solutions.
Example 2

Consider the equation

1/x + 1/(x − 1) = 2/(x − 2).

The least common denominator is x(x − 1)(x − 2).

Following the method as described above results in

(x − 1)(x − 2) + x(x − 2) = 2x(x − 1).

Simplifying this further gives us the solution x = 2/3.

It is easily checked that none of the zeros of x(x − 1)(x − 2) – namely 0, 1, and 2 – is a solution of the final equation, so no spurious solutions were introduced.
References
Elementary algebra
Equations

X-ray emission spectroscopy

X-ray emission spectroscopy (XES) is a form of X-ray spectroscopy in which a core electron is excited by an incident X-ray photon and the resulting excited state decays by emitting an X-ray photon to fill the core hole. The energy of the emitted photon is the energy difference between the electronic levels involved. The analysis of the energy dependence of the emitted photons is the aim of X-ray emission spectroscopy.
There are several types of XES. They can be categorized as non-resonant XES, which includes Kβ mainline measurements, valence-to-core (VtC/V2C) measurements, and Kα measurements, or as resonant XES (RXES or RIXS), which includes XAS+XES 2D measurements, high-resolution XAS, 2p3d RIXS, and combined Mössbauer–XES measurements. In addition, soft X-ray emission spectroscopy (SXES) is used in determining the electronic structure of materials.
History
The first XES experiments were published by Lindh and Lundquist in 1924. In these early studies, the authors utilized the electron beam of an X-ray tube to excite core electrons and obtain the Kβ-line spectra of sulfur and other elements. Three years later, Coster and Druyvesteyn performed the first experiments using photon excitation. Their work demonstrated that electron beams produce artifacts, thus motivating the use of X-ray photons for creating the core hole. Subsequent experiments were carried out with commercial X-ray spectrometers, as well as with high-resolution spectrometers.
While these early studies provided fundamental insights into the electronic configuration of small molecules, XES only came into broader use with the availability of high intensity X-ray beams at synchrotron radiation facilities, which enabled the measurement of (chemically) dilute samples.
In addition to the experimental advances, it is also the progress in quantum chemical computations, which makes XES an intriguing tool for the study of the electronic structure of chemical compounds.
Henry Moseley, a British physicist, was the first to discover a relation between the Kα-lines and the atomic numbers of the probed elements. This was the birth of modern X-ray spectroscopy. Later these lines could be used in elemental analysis to determine the contents of a sample.
William Lawrence Bragg later found a relation between the energy of a photon and its diffraction within a crystal. The formula he established says that an X-ray photon of a given energy is diffracted at a precisely defined angle within a crystal.
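Moseley's relation can be illustrated with a small calculation. The sketch below uses the textbook approximation E ≈ (3/4) · 13.6 eV · (Z − 1)² with a screening constant of 1; measured line energies deviate from it by a few percent:

```python
# Rough Moseley's-law estimate of K-alpha emission energies,
# E ~ Rydberg energy * (3/4) * (Z - 1)^2, screening constant 1.
# Illustrative only; relativistic and many-electron effects shift real lines.
RYDBERG_EV = 13.6057

def k_alpha_energy_ev(Z):
    """Moseley estimate of the K-alpha photon energy for atomic number Z."""
    return RYDBERG_EV * (3 / 4) * (Z - 1)**2

for element, Z in (("S", 16), ("Cu", 29), ("Mo", 42)):
    print(f"{element}: ~{k_alpha_energy_ev(Z) / 1000:.2f} keV")
```

For copper (Z = 29) this gives about 8.0 keV, close to the measured Kα energies near 8.03–8.05 keV.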
Equipment
Analyzers
A special kind of monochromator is needed to diffract the radiation produced in X-ray sources, because X-rays have a refractive index n ≈ 1. Bragg derived the equation that describes X-ray/neutron diffraction when those particles pass through a crystal lattice (see X-ray diffraction).
For this purpose, "perfect crystals" have been produced in many shapes, depending on the geometry and energy range of the instrument. Although they are called perfect, there are miscuts within the crystal structure which lead to offsets of the Rowland plane.

These offsets can be corrected by turning the crystal while looking at a specific energy (for example, the Kα2-line of copper at 8027.83 eV).

When the intensity of the signal is maximized, the photons diffracted by the crystal hit the detector in the Rowland plane. There will now be a slight offset in the horizontal plane of the instrument, which can be corrected by increasing or decreasing the detector angle.
In the von Hámos geometry, a cylindrically bent crystal disperses the radiation along the plane of its flat surface and focuses it along its axis of curvature onto a line-like feature.
The spatially distributed signal is recorded with a position-sensitive detector at the crystal's focusing axis, providing the overall spectrum. Alternative wavelength-dispersive concepts have been proposed and implemented based on the Johansson geometry, which has the source positioned inside the Rowland circle, whereas an instrument based on the Johann geometry has its source placed on the Rowland circle.
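The analyzer crystal's working angle follows from Bragg's law. The sketch below computes a first-order Bragg angle, assuming a Si(111) analyzer (d ≈ 3.1356 Å, an illustrative choice of crystal) and the Cu Kα2 energy of 8027.83 eV:

```python
# Bragg-angle estimate for a spectrometer analyzer crystal. The Si(111)
# d-spacing is an illustrative choice; the constants are standard but rounded,
# so treat the result as approximate.
import math

H_C_EV_ANGSTROM = 12398.4    # h*c in eV*Angstrom
D_SI_111 = 3.1356            # Si(111) lattice-plane spacing (Angstrom)

def bragg_angle_deg(energy_ev, d=D_SI_111, n=1):
    """Bragg angle theta from n * lambda = 2 * d * sin(theta), in degrees."""
    wavelength = H_C_EV_ANGSTROM / energy_ev
    return math.degrees(math.asin(n * wavelength / (2 * d)))

print(f"Cu K-alpha2 (8027.83 eV): theta = {bragg_angle_deg(8027.83):.2f} deg")
```

Higher diffraction orders (n > 1) or crystals with smaller d-spacing move the working angle closer to backscattering, which is what high-resolution analyzers exploit.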
X-ray sources
X-ray sources are produced for many different purposes, yet not every X-ray source can be used for spectroscopy. Commonly used sources for medical applications generate very "noisy" source spectra, because the cathode material need not be very pure for those measurements. These lines must be eliminated as much as possible to obtain a good resolution in all the energy ranges used.

For this purpose, normal X-ray tubes with highly pure tungsten, molybdenum, palladium, etc. are made. Except for the copper in which they are embedded, they produce a relatively "white" spectrum. Another way of producing X-rays is with particle accelerators, which generate X-rays through vectorial changes of the electrons' direction in magnetic fields. Every time a moving charge changes direction, it gives off radiation of corresponding energy. In X-ray tubes this directional change is the electron hitting the metal target (anode); in synchrotrons it is the outer magnetic field accelerating the electron onto a circular path.
There are many different kinds of X-ray tubes, and operators have to choose accurately depending on what is to be measured.
Modern spectroscopy and the importance of Kβ-lines in the 21st century
Today, XES is less used for elemental analysis; instead, measurements of Kβ-line spectra are of growing importance, as the relation between these lines and the electronic structure of the ionized atom becomes more detailed.
If a 1s core electron gets excited into the continuum (out of the atom's energy levels in the MO picture), electrons of higher-energy orbitals need to lose energy and "fall" into the 1s hole that was created, to fulfil Hund's rule (Fig. 2).

Those electron transfers happen with distinct probabilities (see Siegbahn notation).
Scientists noted that after ionization of a somehow bonded 3d transition-metal atom, the Kβ-line intensities and energies shift with the oxidation state of the metal and with the species of the ligand(s). This gave way to a new method in structural analysis:

by high-resolution scans of these lines, the exact energy level and structural configuration of a chemical compound can be determined.
This is because there are only two major electron-transfer mechanisms, if we ignore every transfer not affecting valence electrons.

If we include the fact that chemical compounds of 3d transition metals can be either high-spin or low-spin, we get two mechanisms for each spin configuration.

These two spin configurations determine the general shape of the Kβ1,3 and Kβ′ mainlines, as seen in figures one and two, while the structural configuration of electrons within the compound causes different intensities, broadening, tailing and piloting of the Kβ lines.
Although this is quite a lot of information, these data have to be combined with absorption measurements of the so-called "pre-edge" region.

Those measurements are called XANES (X-ray absorption near-edge structure).
In synchrotron facilities those measurements can be done at the same time, yet the experimental setup is quite complex and needs exact and finely tuned crystal monochromators to diffract the tangential beam coming from the electron storage ring. The method is called HERFD, which stands for high energy resolution fluorescence detection. The collection method is unique in that, after a collection of all wavelengths coming from "the source", called I0, the beam is shone onto the sample holder with a detector behind it for the XANES part of the measurement. The sample itself starts to emit X-rays, and after those photons have been monochromatized, they are collected too.

Most setups use three or more crystal monochromators. The I0 is used in absorption measurements as part of the Beer–Lambert law in the equation

E = log10(I0/I1),

where I1 is the intensity of transmitted photons. The received values for the extinction E are wavelength-specific, which therefore creates a spectrum of the absorption.
The spectrum produced from the combined data shows a clear advantage in that background radiation is almost completely eliminated while still giving an extremely well-resolved view of features at a given absorption edge (Fig. 4).
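The extinction values described above can be computed directly from the measured intensities; in this sketch the photon counts are invented for illustration:

```python
# Extinction (absorbance) from incident and transmitted intensities, following
# the Beer-Lambert relation sketched above. The intensities are made-up numbers.
import math

def extinction(I0, I1):
    """Decadic extinction E = log10(I0 / I1)."""
    return math.log10(I0 / I1)

I0 = 1.0e6   # monochromatized incident photon count (arbitrary units)
I1 = 2.5e5   # transmitted photon count behind the sample
print(f"E = {extinction(I0, I1):.3f}")   # log10(4) ~ 0.602
```

Repeating this at each monochromator energy yields the absorption spectrum as a function of photon energy.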
In the field of development of new catalysts for more efficient energy storage, production and usage in form of hydrogen fuel cells and new battery materials, the research of the -lines is essential nowadays.
The exact shape of specific oxidation states of metals is mostly known, yet newly produced chemical compounds with the potential
of becoming a reasonable catalyst for electrolysis, for example, are measured every day.
Several countries encourage many different facilities all over the globe in this special field of science in the hope for clean, responsible and cheap energy.
Soft X-ray emission spectroscopy
Soft X-ray emission spectroscopy (SXES) is an experimental technique for determining the electronic structure of materials.
Uses
X-ray emission spectroscopy (XES) provides a means of probing the partial occupied density of electronic states of a material. XES is element-specific and site-specific, making it a powerful tool for determining detailed electronic properties of materials.
Forms
Emission spectroscopy can take the form of either resonant inelastic X-ray emission spectroscopy (RIXS) or non-resonant X-ray emission spectroscopy (NXES). Both spectroscopies involve the photonic promotion of a core level electron, and the measurement of the fluorescence that occurs as the electron relaxes into a lower-energy state. The differences between resonant and non-resonant excitation arise from the state of the atom before fluorescence occurs.
In resonant excitation, the core electron is promoted to a bound state in the conduction band. Non-resonant excitation occurs when the incoming radiation promotes a core electron to the continuum. When a core hole is created in this way, it is possible for it to be refilled through one of several different decay paths. Because the core hole is refilled from the sample's high-energy free states, the decay and emission processes must be treated as separate dipole transitions. This is in contrast with RIXS, where the events are coupled, and must be treated as a single scattering process.
Properties
Soft X-rays have different optical properties than visible light and therefore experiments must take place in ultra high vacuum, where the photon beam is manipulated using special mirrors and diffraction gratings.
Gratings diffract each energy or wavelength present in the incoming radiation in a different direction. Grating monochromators allow the user to select the specific photon energy they wish to use to excite the sample. Diffraction gratings are also used in the spectrometer to analyze the photon energy of the radiation emitted by the sample.
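The grating's energy selection follows the standard grating equation, d·sin(θ) = m·λ. Below is a minimal sketch assuming normal incidence for simplicity (real soft X-ray monochromators typically operate at grazing incidence); the photon energy and groove density are hypothetical example values:

```python
import math

H_C_EV_NM = 1239.842  # h*c in eV*nm, to convert photon energy to wavelength

def first_order_angle_deg(energy_ev: float, lines_per_mm: float) -> float:
    """First-order (m = 1) diffraction angle in degrees at normal incidence,
    from the grating equation d * sin(theta) = m * lambda."""
    wavelength_nm = H_C_EV_NM / energy_ev
    groove_spacing_nm = 1e6 / lines_per_mm  # d, in nm
    return math.degrees(math.asin(wavelength_nm / groove_spacing_nm))

# Hypothetical 500 eV soft X-ray photon on a 1200 lines/mm grating.
angle = first_order_angle_deg(500.0, 1200.0)
```

Lower photon energies have longer wavelengths and so diffract to larger angles, which is how rotating the grating selects the photon energy.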
See also
X-ray absorption spectroscopy
References
External links
Soft X-ray Emission Spectroscopy - Description at beamteam.usask.ca
Emission spectroscopy
X-ray spectroscopy
Synchrotron-related techniques
1924 introductions | X-ray emission spectroscopy | [
"Physics",
"Chemistry"
] | 2,266 | [
"Emission spectroscopy",
"X-ray spectroscopy",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
63,217,284 | https://en.wikipedia.org/wiki/To%20the%20Stars%3A%20Costa%20Rica%20in%20NASA | To the Stars: Costa Rica in NASA (2018) is a book by Canadian writer Bruce James Callow and Costa Rican writer Ana Luisa Monge Naranjo published by Editorial Tecnológica de Costa Rica. The book documents the lives of the Costa Ricans who have worked for the National Aeronautics and Space Administration (NASA) up to the date of the book's publication.
Story
The stories documented in the book share common themes, including triumph over adversity and the importance of perseverance when faced with seemingly impossible obstacles. The book is laid out in an easy-to-read interview format, making it accessible to young readers looking for paths to aerospace careers. To the Stars: Costa Rica in NASA reveals the diversity of the important jobs Costa Ricans perform at NASA, which serves as a source of pride and inspiration, including in the wider Hispanic community. The public, and in particular educators focused on STEAM, have responded very positively to the book, both in Costa Rica and in other countries including Canada, the USA and Mexico.
Outreach
Since the book was published in August 2018, the authors have embarked on an ongoing outreach program to share its positive messages as widely as possible. As of March 2020, over 60 workshops and web conferences had been delivered, including a presentation at the Space Explorers Education Conference (SEEC, #SEEC2020) in Houston, Texas, with the participation of Sandra Cauffman, Acting Director of NASA's Earth Science Division. A high priority of the authors is to reach out to underserved populations, and they have held workshops with students in Canadian First Nations communities, in orphanages and in a center for underage mothers. The Costa Ricans at NASA, including engineers Andres Mora and Alfredo Valverde and oceanographer Joaquin Chaves, have spoken to students at these workshops several times via web conferences.
References
Astronomy books
Works about NASA
2018 non-fiction books
English-language non-fiction books
Spanish-language non-fiction books
Costa Rica–United States relations | To the Stars: Costa Rica in NASA | [
"Astronomy"
] | 392 | [
"Astronomy books",
"Works about astronomy"
] |
63,219,935 | https://en.wikipedia.org/wiki/Levitation%20based%20inertial%20sensing | Levitation based inertial sensing is a new and rapidly growing technique for measuring linear acceleration, rotation and orientation of a body. Based on this technique, inertial sensors such as accelerometers and gyroscopes, enables ultra-sensitive inertial sensing. For example, the world's best accelerometer used in the LISA Pathfinder in-flight experiment is based on a levitation system which reaches a sensitivity of and noise of .
History
The pioneering work related to the microparticle levitation was performed by Artur Ashkin in 1970. He demonstrated optical trapping of dielectric microspheres for the first time, forming an optical levitation system, by using a focused laser beam in air and liquid. This new technology was later named "optical tweezer" and applied in biochemistry and biophysics. Later, significant scientific progress on optically levitated systems was made, for example the cooling of the center of mass motion of a micro- or nanoparticle in the millikelvin regime. Very recently a research group published a paper showing motional quantum ground state cooling of a levitated nanoparticle. In addition, levitation based on electrostatic and magnetic approaches have also been proposed and realized.
Levitation systems have shown high force sensitivities in the range. For example, an optically levitated dielectric particle has been shown to exhibit force sensitivities beyond ~ . Thus, levitation systems show promise for ultra-sensitive force sensing, such as detection of short-range interactions. By levitating micro- or mesoparticles with a relatively large mass, this system can be employed as a high-performance inertial sensor, demonstrating nano-g sensitivity.
Method
One possible working principle behind a levitation based inertial sensing system is the following. By levitating a micro-object in vacuum and after a cool-down process, the center of mass motion of the micro-object can be controlled and coupled to the kinematic states of the system. Once the system's kinematic state changes (in other words, the system undergoes linear or rotational acceleration), the center of mass motion of the levitated micro-object is affected and yields a signal. This signal is related to the changes of the system's kinematic states and can be read out.
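This readout can be modeled, in the simplest case, as a harmonic oscillator: a constant platform acceleration a shifts the trapped particle's equilibrium position by x = a/ω², where ω is the trap frequency. The sketch below uses this simplified, undamped model; the 100 kHz trap frequency and nano-g input are hypothetical illustration values:

```python
import math

def equilibrium_displacement(accel_m_s2: float, trap_freq_hz: float) -> float:
    """Steady-state displacement x = a / omega^2 of a levitated particle
    in a harmonic trap whose platform accelerates at accel_m_s2.
    Simplified model: damping and measurement noise are ignored."""
    omega = 2.0 * math.pi * trap_freq_hz
    return accel_m_s2 / omega ** 2

G = 9.81  # m/s^2

# Hypothetical optical trap with a 100 kHz trap frequency sensing 1 nano-g.
x = equilibrium_displacement(1e-9 * G, 1e5)  # displacement in metres
```

The tiny resulting displacement illustrates why an ultra-sensitive position readout of the center of mass motion is required in practice.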
Regarding levitation techniques, there are generally three different approaches: optical, electrostatic and magnetic.
Applications
The sub-attonewton force sensitivity of levitation based system could show promise for applications in many different fields, such as Casimir force sensing, gravitational wave detection and inertial sensing. For inertial sensing, levitation based system could be used to make high-performance accelerometers and gyroscopes employed in inertial measurement units (IMUs) and inertial navigation systems (INSs). These are used in such applications as drone navigation in tunnels and mines, guidance of unmanned aerial vehicles (UAVs), or stabilization of micro-satellites. Levitation based Inertial sensors that have sufficient sensitivity and low noise () for measurements in the seismic band ( to ) can be used in the field of seismometry, in which current inertial sensors cannot meet the requirements.
There are already some commercial products on the market. One example is the iOSG Superconducting gravity sensor, which is based on magnetic levitation and shows a noise of .
Advantages
Future trends in inertial sensing require that inertial sensors have lower cost, higher performance, and smaller size. Levitation based inertial sensing systems have already shown high performance. For example, the accelerometer used in the LISA Pathfinder in-flight experiment has a sensitivity of and noise of .
References
Levitation
Sensors | Levitation based inertial sensing | [
"Physics",
"Technology",
"Engineering"
] | 784 | [
"Physical phenomena",
"Measuring instruments",
"Levitation",
"Motion (physics)",
"Sensors"
] |
63,220,383 | https://en.wikipedia.org/wiki/LG%20V60%20ThinQ | The LG V60 ThinQ 5G, commonly referred to as the LG V60, is an Android phablet manufactured by LG Electronics as part of the LG V series. It was announced in February 2020 and is the successor to the LG V50 ThinQ. On April 5, 2021, LG announced it would be shutting down its mobile phone division and ceasing production of all remaining devices. LG noted the phone would be available until existing inventory ran out.
Specifications
Design and Hardware
Anodized aluminum is used for the frame; unlike on the V50, the edges are chamfered. Gorilla Glass 5 is present on the front, while Gorilla Glass 6 is on the back. The camera module protrudes slightly from the back as on the V40, and the flash is no longer separate, while the rear-mounted fingerprint sensor has been replaced with an under-screen optical unit. On the front, the dual cameras of the V40 and V50 have been omitted in favor of a single front-facing camera, which reduces the size of the display notch. The V60 is available in Classy Blue or Classy White; the IP68 rating is retained.
The device uses the Snapdragon 865 processor with the Adreno 650 GPU, and supports 5G. It is available with 8 GB of LPDDR5 RAM and 128 GB or 256 GB of UFS 3.0 storage. MicroSD card expansion is supported through a hybrid single-SIM slot, up to 2 TB with a single SIM.
The P-OLED display is larger than the V50's at 6.8 inches (172.7 mm) and has a wider 20.5:9 aspect ratio, but the resolution has been lowered to 1080p from 1440p. The display supports Wacom AES active pen input, but no pen is included and there is no built-in storage for one. To compete with folding smartphones, the device offers a case accessory known as "LG DualScreen", which contains a second, 6.8-inch 1080p display panel. It is connected and powered through the USB Type-C connector of the phone, and also supports active pen input. While the DualScreen is being used, the phone can still be charged over a wire with an included magnetic charging tip.
Stereo speakers are present with active noise cancellation, along with a 3.5 mm audio jack. The battery is larger at 5000 mAh, and can be recharged either wired over USB-C (Quick Charge 4.0+) or wirelessly (Qi).
A triple camera setup is used on the rear, consisting of a 64 MP Samsung Bright S5KGW1 main sensor, a 13 MP ultrawide sensor and a time-of-flight 3D depth sensor.
There is no telephoto sensor like on the V40 and V50; LG claims that the high resolution of the wide sensor can shoot lossless zoom photos. The rear camera can now record video at 8K resolution at 26 fps, encoded at 10 bit HDR10+ format in Rec.2020 colour space. The front-facing camera is 10 MP and can now record 4K video at 60 frames per second.
Software
The V60 ships with Android 10 and uses LG's UX 9.
References
LG Electronics smartphones
LG Electronics mobile phones
Mobile phones introduced in 2020
Android (operating system) devices
Mobile phones with multiple rear cameras
Mobile phones with 8K video recording
Discontinued flagship smartphones | LG V60 ThinQ | [
"Technology"
] | 727 | [
"Discontinued flagship smartphones",
"Flagship smartphones"
] |
63,221,711 | https://en.wikipedia.org/wiki/Cyclooctadiene%20iridium%20methoxide%20dimer | Cyclooctadiene iridium methoxide dimer is an organoiridium compound with the formula Ir2(OCH3)2(C8H12)2, where C8H12 is the diene 1,5-cyclooctadiene. It is a yellow solid that is soluble in organic solvents. The complex is used as a precursor to other iridium complexes, some of which are used in homogeneous catalysis.
The compound is prepared by treating cyclooctadiene iridium chloride dimer with sodium methoxide. In terms of its molecular structure, the iridium centers are square planar as is typical for a d8 complex. The Ir2O2 core is folded.
References
Homogeneous catalysis
Cyclooctadiene complexes
Organoiridium compounds | Cyclooctadiene iridium methoxide dimer | [
"Chemistry"
] | 168 | [
"Catalysis",
"Homogeneous catalysis"
] |
63,221,967 | https://en.wikipedia.org/wiki/Climate%20change%20and%20birds | Significant work has gone into analyzing the effects of climate change on birds. Like other animal groups, birds are affected by anthropogenic (human-caused) climate change. The research includes tracking the changes in species' life cycles over decades in response to the changing world, evaluating the role of differing evolutionary pressures and even comparing museum specimens with modern birds to track changes in appearance and body structure. Predictions of range shifts caused by the direct and indirect impacts of climate change on bird species are amongst the most important, as they are crucial for informing animal conservation work, required to minimize extinction risk from climate change.
Climate change mitigation options can also have varying impacts on birds. However, even the environmental impact of wind power is estimated to be much less threatening to birds than the continuing effects of climate change.
Causes
Climate change has raised the temperature of the Earth by about since the Industrial Revolution. Because the extent of future greenhouse gas emissions and mitigation actions determines which climate change scenario is realized, warming by the end of the century may range from an increase over present levels of less than , with rapid and comprehensive mitigation (the Paris Agreement goal), to around ( from the preindustrial) with very high and continually increasing greenhouse gas emissions.
Effects
Physical changes
Birds are a group of warm-blooded vertebrates constituting the class Aves, characterized by feathers, toothless beaked jaws, the laying of hard-shelled eggs, a high metabolic rate, a four-chambered heart, and a strong yet lightweight skeleton.
Climate change has already altered the appearance of some birds by facilitating changes to their feathers. A comparison of museum specimens of juvenile passerines from the 1800s with juveniles of the same species today has shown that these birds now complete the switch from their nesting feathers to adult feathers earlier in their lifecycle, and that females now do this earlier than males. Further, blue tits are defined by blue and yellow feathers, but a study in Mediterranean France has shown that those contrasting colors became less bright and intense in just the period between 2005 and 2019.
A study in Chicago showed that the length of birds' lower leg bones (an indicator of body sizes) shortened by an average of 2.4% and their wings lengthened by 1.3%. In the central Amazon area, birds have decreased in mass (an indicator of size) by up to 2% per decade, and increased in wing length by up to 1% per decade, with links to temperature and precipitation shifts. These morphological trends may demonstrate an example of evolutionary change following Bergmann's rule. Across Eurasia, snowfinches became both smaller and darker over the past 100 years.
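The per-decade rates above compound over longer periods. Below is a minimal sketch of that arithmetic; the 15 g starting mass, 60 mm wing length and 50-year span are hypothetical illustration values, not figures from the studies:

```python
def compounded_change(initial: float, rate_per_decade: float, decades: float) -> float:
    """Value after a constant fractional change applied once per decade."""
    return initial * (1.0 + rate_per_decade) ** decades

# Hypothetical 15 g bird losing 2% of its mass per decade over 50 years,
# with wings lengthening 1% per decade, as in the Amazonian trend above.
mass_g = compounded_change(15.0, -0.02, 5)   # about 13.56 g
wing_mm = compounded_change(60.0, 0.01, 5)   # about 63.06 mm
```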
Rising temperatures due to global warming have also been shown to decrease the size of many migratory birds. In a first study to identify a direct link between cognition and phenotypic responses to climate change, researchers show that size reduction is much more pronounced in smaller-brained birds compared to bigger-brained species. Reduction in body size is a general response to warming temperatures since birds with smaller bodies can dissipate heat easier, helping to cope with the heat-caused stress. Reduced body and brain sizes also lead to reduced cognitive and competitive ability, making the smaller-species birds easier targets for predators. In another study where researchers compared the brain sizes of 1,176 bird species, they found that species that spend more resources on their young have larger brains as adults. Bird species that feed their offspring after hatching have extended durations during which their young can develop their brain, producing more intelligent and larger-brained offspring. Changing environments due to climate change might impact the ability of birds to obtain enough food to sustain their own brains and provide for their young, resulting in reduced brain sizes. Larger-brained and more intelligent birds, such as the New Caledonian crow, may therefore be able to better cope with the challenges posed by climate change.
Phenology
For many species, climate change already results in phenological mismatch, which is a phenomenon where the timing of one aspect of a species' yearly cycle ceases to align with another, impairing the species' evolutionary fitness. Events such as reproduction and migration are energetically expensive, and often only occur during a brief period throughout the annual cycle when seasonal prey availability is the highest. However, many prey items differ in energetic and nutritional content and are responding to climate change at different rates than bird life stages. Some common species like pied flycatcher can compensate for a mismatch between their breeding time and population sizes of their preferred prey (caterpillars) by feeding their offspring alternatives like flying insects and spiders, leading to reduced body mass but avoiding a major decrease in reproductive success. Mismatch is a more acute issue for Arctic shorebirds due to the high rate of climate change in the Arctic, leading to events like the 2016 starvation-caused die-off of around 9000 puffins and other shorebirds in Alaska. Long-distance migrating birds also tend to be more sensitive to phenological mismatch, due to the increasing inability to track changes in the breeding environment the further they migrate or to adjust when they can gather food and breed. There is more phenological mismatch occurring during the spring migration, leading to decline in populations in species that have a greater mismatch or phenological asynchrony, compared to species with a lower sensitivity to the changing climate and therefore less need to adjust migratory patterns. If the timing of the highest availability of a bird species' main food source happens earlier than its migration timeline because of warmer weather, then it will likely miss the time for resource gathering.
In response, changes in bird phenology have been observed over the past 50 years, such as the lengthening of spring migrations. Different species can have different triggers for migration, and so the changes in migration patterns can also differ, but for many, there is a correlation between temperatures and otherwise unexplained variations in migration timing over the short term. In general, the earliest individuals are migrating earlier and the latest migrating at a similar time or later than before. Wood warblers in North America provide a notable example, as an analysis of 60 years of data shows that every additional of early spring temperatures appears to bring their migrations 0.65 days closer. There has been some scientific debate as to whether such shifts represent an evolutionary adaptive change, or phenotypic plasticity. In other words, just because many individuals in a species have altered their phenology, it does not mean that the change will necessarily help those individuals obtain greater reproductive success and perpetuate the change in behaviour in the next generation, since individual phenotypic changes may be mistimed. This is especially important with climate change, as its variable rate makes it harder to adjust the timing correctly, and it's possible for individuals across multiple generations to respond to such environmental cues in the same manner, but without an ultimate reproductive benefit. Some species which have increased their egg laying dates and advanced spring migration timelines have shown more positive population trends, like some passerines breeding in Great Britain, but this only provides indirect evidence. To date, Common terns are one of a few species where the pressure to migrate earlier (forwards shift of 9.3 days over 27 years) was confirmed to have a heritable component to it.
Great tits provide a notable example of the complexities of tracking phenology change. In 2006, population declines were observed due to a >10-day mismatch between their preferred breeding season and peak population spawn of caterpillars, their preferred food source. Consequently, fledglings raised earlier in the season when caterpillar populations are at their peak are in better physiological condition than those raised later in the breeding season, which should act as an evolutionary driver. Yet, caterpillar numbers are affected by more than the climate, with the physical condition of local primary producers like oak trees often being more important for their numbers, and consequently, for when it makes the most sense for great tit individuals to lay their eggs. Nevertheless, by 2021, it was observed that great tit phenology continued to advance even as the late spring warming, and thus the peak of caterpillar numbers, changed much less since 2006. Thus, phenological mismatch for great tits is now substantially lower than before, signifying successful adaptation, but future warming is likely to increase the mismatch again. If the Paris Agreement is fulfilled and the warming peaks at or , then the mismatch will peak around 2050 and then decline again as the species will continue to adapt. Under RCP4.5 and RCP8.5, the two more severe climate change scenarios, average phenological mismatch will once again be at 10 days by the end of the century or even reach the near-unprecedented 15 days, respectively.
Extreme disturbance events
Besides an ongoing increase in temperature and shifts in precipitation patterns, climate change also increases the frequency of extreme weather events, and those can be particularly damaging to species caught in their path. Carnaby's Black Cockatoo is a species in southwestern Australia which suffered a large decrease in population after just two extreme weather events - a severe heatwave and a severe hail storm between October 2009 and March 2010. In Europe, lesser kestrels seem to adjust to ongoing warming, but have been observed to lose more offspring during the extreme drought months.
Climate change is known to increase the risk and the severity of wildfires in many parts of the world, where it dries out vegetation and reduces the extent of snowpack. Wildfires can destroy the habitats of certain birds: after the 2019–20 Australian bushfire season, emu subpopulations on the New South Wales coast are considered at high risk. During the 2020 Western United States wildfire season, one of the only two strongholds of Cassia Crossbill was engulfed in flames. While most bird species can survive the immediate destruction of their habitat by flying away, they can still be heavily affected by wildfire smoke. This is a particular concern for migratory bird species who can be caught in a smoke-filled area right as they are migrating. In 2020, "hundreds of thousands and possibly even up to a million birds have died across at least five U.S. states and in four Mexican states", primarily of migratory species. This "unprecedented" event was connected to wildfire smoke the following year.
Shorebird habitats are often negatively affected by sea level rise, both due to the gradual degradation from the ongoing trend, and the sudden storm surges and other extreme events. Later in the century, sea level on the East Coast of the United States may advance high enough that a large hurricane could flood up to 95% of current piping plover habitat in the area, while its ability to shift habitat inland may be constrained by future shoreline development. Gulf Coast populations are also at risk, with a potential 16% loss of habitat by 2100 to gradual inundation alone, and a risk of both extreme storms and further human development of the shoreline. Ironically, inland piping plover populations may benefit from stronger floods powered by climate change, as the open sand shoals they nest in can only avoid vegetation overgrowth if they are flooded regularly, ideally once in four years, which occurred before the European colonization of the Americas but has now been reduced to once per twenty years by shoreline stabilization efforts to protect human property. Consequently, future flooding caused by climate change may be restoring a historical norm for the species, although there is a small risk of climate change either leading to excessive flooding or drying the area under some scenarios.
Range
Climate change can make nesting conditions intolerable for various bird species. For instance, shorebirds nest in sand, and the coastal populations of least terns and piping plovers are already known to suffer from sand temperatures increasing and at times getting too hot, while desert birds can outright die of dehydration on unprecedentedly hot days. The range of many birds is expected to shift as the result, as "climate change forces species to move, adapt or die." For instance, young house sparrows have been observed to travel further from their parents' nests than before, in response to warming temperatures. Climate change had also been connected with the observed decline in numbers and range reduction of the rusty blackbird, a formerly common yet currently vulnerable North American species. Range shifts are generally increasing in latitude, like with two Asian subspecies of Black-tailed godwit, which are expected to shift closer to the North Pole. Their overall habitat is likely to shrink dramatically to about 16% of its present extent, with all the former high-suitability areas lost. In addition to moving polewards, bird species near the mountains shift to the cooler climate of higher elevations. In India, 66–73% of 1,091 species are expected to move upwards or northwards in response to climate change. Around 60% will see their ranges shrink, with the rest gaining in range.
Besides rising temperatures, climate change can also impact birds' ranges through changes in precipitation. For instance, increased rainfall in some alpine climates is consistent with predictions of effects of climate change on the water cycle. This includes some habitat of savannah sparrows and horned larks, which are known to have higher daily nest mortalities if their environment rained consecutively for more than two days, compared with no rain at all. The Grey-headed robin is restricted to rainforests of the wet tropics region in Australia's northeast Queensland and another population in the New Guinea highlands. It needs cooler temperatures that can only be found in the higher altitudes, yet unlike other species, it cannot keep shifting its range all the way up the mountains, as otherwise it suffers from excessive precipitation. These restrictions on available range make it particularly vulnerable to future climate change. On the other hand, in North America, the southwestern willow flycatcher is expected to lose at least 62% of its population size by 2100 under a high-warming scenario and 36% under an intermediate scenario, but may not suffer any losses under a low-warming scenario, in large part due to its evolutionary potential. However, if the future effects of drought end up particularly severe for the species during its nesting season, it may end up losing the majority of its population size even in the low-warming scenario, and 93% or >99% in the higher-warming scenarios.
Human actions often interact with the effects of climate change. For example, in South American grasslands, the campo miner would lose 77–92% of its area by 2080 under the high-warming scenario and 68–74% under the intermediate scenario, which is particularly concerning due to the lack of protected areas for this species. The pied crow has seen its range decrease in northern Africa but increase in southern Africa due to climate change. Climate change favors the development of forests over grasslands in southern Africa, which provides more trees for nesting. However, their increase in range and density in the south has been helped by electrical power lines. Electrical infrastructure provides additional nesting and perching sites, which may have increased the overall prevalence of the species. And in North America, projected range shifts have been described as "unbelievable" by the experts of National Audubon Society. While one reason is their geographic distance, the other is because for non-migratory species, they can only become reality with assisted migration, as otherwise they wouldn't be able to cross the natural barriers to the newly suitable habitats and would instead be simply extirpated from their old ones.
Extinction
Effects of climate change mitigation activities
Climate change mitigation benefits most bird species in the long run by limiting harmful effects of climate change. However, mitigation strategies may have more complex unintended outcomes. Some provide co-benefits, as forest management that thins forest fire fuels may increase bird habitat. Certain cropping strategies for renewable biomass may also increase overall species richness compared to traditional agricultural practices. On the other hand, tidal power systems may affect wader birds, but there is little research due to the limited uptake of this form of renewable energy.
Wind farms are known for being dangerous to birds, and have been found to harm species such as white-tailed eagles and whooper swans. This may be a problem of visual acuity, as most birds have a poor frontal vision. Wind turbine collisions could potentially be reduced if towers were made more conspicuous to birds, or placed in better locations.
In the United States, it has been estimated that between 140,000 and 500,000 birds die every year from collisions with wind turbines, which could increase to 1.4 million if the wind power capacity were increased six-fold. On average, collisions are the least frequent in the Great Plains region, where about 2.92 birds collide with a turbine every year, are higher in the West and East of the country (4.72 and 6.86 birds per turbine annually) and are the highest in California where 7.85 birds collide with each turbine every year.
In general, older wind farms tended to consider birds less in their placement, and this led to greater mortality rates than for wind farms installed after the development of improved guidelines. Newer research shows that in the Indian state of Karnataka, annual fatalities per turbine are at 0.26 per year, which includes both birds and bats. When a coastal wind farm was built on the East Asian-Australasian Flyway, bird community appeared to adjust after one year of operations. However, even the older wind farms were estimated to be responsible for losing less than 0.4 birds per gigawatt-hour (GWh) of electricity generated in 2009, compared to over 5 birds per GWh for fossil fueled power stations.
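The per-GWh comparison amounts to simple arithmetic: estimated deaths = generation × rate. Below is a minimal sketch using the rates quoted above; the 10,000 GWh annual generation figure is a hypothetical illustration value:

```python
def annual_bird_deaths(generation_gwh: float, deaths_per_gwh: float) -> float:
    """Estimated bird deaths per year for a given annual electricity generation."""
    return generation_gwh * deaths_per_gwh

WIND_RATE = 0.4    # birds per GWh, upper bound for older wind farms (2009 estimate)
FOSSIL_RATE = 5.0  # birds per GWh for fossil fueled power stations

# Hypothetical region generating 10,000 GWh per year from each source.
wind_deaths = annual_bird_deaths(10_000.0, WIND_RATE)      # 4,000 birds
fossil_deaths = annual_bird_deaths(10_000.0, FOSSIL_RATE)  # 50,000 birds
```

Even at the upper-bound wind rate, the fossil-fuel estimate is over an order of magnitude higher for the same generation.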
See also
Assisted migration
Effects of climate change on biomes
Bird migration perils
Bird fallout
Bird conservation
Season creep
References
Effects of climate change
Ornithology
Bird migration
Birds
Birds and humans
Proofs That Really Count: the Art of Combinatorial Proof is an undergraduate-level mathematics book on combinatorial proofs of mathematical identities. That is, it concerns equations between two integer-valued formulas, shown to be equal either by showing that both sides of the equation count the same type of mathematical objects, or by finding a one-to-one correspondence between the different types of object that they count. It was written by Arthur T. Benjamin and Jennifer Quinn, and published in 2003 by the Mathematical Association of America as volume 27 of their Dolciani Mathematical Expositions series. It won the Beckenbach Book Prize of the Mathematical Association of America.
Topics
The book provides combinatorial proofs of thirteen theorems in combinatorics and 246 numbered identities (collated in an appendix). Several additional "uncounted identities" are also included. Many proofs are based on a visual-reasoning method that the authors call "tiling", and in a foreword, the authors describe their work as providing a follow-up for counting problems of the Proofs Without Words books by Roger B. Nelsen.
The first three chapters of the book start with integer sequences defined by linear recurrence relations, the prototypical example of which is the sequence of Fibonacci numbers. These numbers can be given a combinatorial interpretation as the number of ways of tiling a strip of squares with tiles of two types, single squares and dominos; this interpretation can be used to prove many of the fundamental identities involving the Fibonacci numbers, and generalized to similar relations about other sequences defined similarly, such as the Lucas numbers, using "circular tilings and colored tilings". For instance, for the Fibonacci numbers, considering whether a tiling does or does not connect positions and of a strip of length immediately leads to the identity
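The display formula for this identity did not survive extraction; it is presumably the standard "breakability" identity for square-and-domino tilings: a tiling of a strip of length m + n either splits cleanly between cells m and m + 1, or has a domino spanning that boundary. A quick numerical check of that identity, with tilings(n) counting the tilings of an n-strip:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def tilings(n):
    """Number of tilings of a 1-by-n strip by squares and dominoes;
    this is the combinatorial interpretation of the Fibonacci numbers
    used in the book (tilings(n) equals the Fibonacci number F(n+1))."""
    if n < 0:
        return 0          # no strip to tile
    if n <= 1:
        return 1          # empty tiling, or a single square
    # last tile is either a square (n-1 cells left) or a domino (n-2 left)
    return tilings(n - 1) + tilings(n - 2)

# Breakability: either the tiling splits between cells m and m+1,
# or a single domino spans that boundary.
for m in range(1, 12):
    for n in range(1, 12):
        assert tilings(m + n) == (tilings(m) * tilings(n)
                                  + tilings(m - 1) * tilings(n - 1))
```

The same case split, carried out symbolically rather than numerically, is exactly the book's one-line combinatorial proof.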
Chapters four through seven of the book concern identities involving continued fractions, binomial coefficients, harmonic numbers, Stirling numbers, and factorials. The eighth chapter branches out from combinatorics to number theory and abstract algebra, and the final chapter returns to the Fibonacci numbers with more advanced material on their identities.
Audience and reception
The book is aimed at undergraduate mathematics students, but the material is largely self-contained, and could also be read by advanced high school students. Additionally, many of the book's chapters are themselves self-contained, allowing for arbitrary reading orders or for excerpts of this material to be used in classes. Although it is structured as a textbook with exercises in each chapter, reviewer Robert Beezer writes that it is "not meant as a textbook", but rather intended as a "resource" for teachers and researchers. Echoing this, reviewer Joe Roberts writes that despite its elementary nature, this book should be "valuable as a reference ... for anyone working with such identities".
In an initial review, Darren Glass complained that many of the results are presented as dry formulas, without any context or explanation for why they should be interesting or useful, and that
this lack of context would be an obstacle for using it as the main text for a class. Nevertheless, in a second review after a year of owning the book, he wrote that he was "lending it out to person after person".
Reviewer Peter G. Anderson praises the book's "beautiful ways of seeing old, familiar mathematics and some new mathematics too", calling it "a treasure". Reviewer Gerald L. Alexanderson describes the book's proofs as "ingenious, concrete and memorable". The award citation for the book's 2006 Beckenbach Book Prize states that it "illustrates in a magical way the pervasiveness and power of counting techniques throughout mathematics. It is one of those rare books that will appeal to the mathematical professional and seduce the neophyte."
One of the open problems from the book, seeking a bijective proof of an identity combining binomial coefficients with Fibonacci numbers, was subsequently answered positively by Doron Zeilberger, who posted a preprint of his paper on his web site.
Recognition
Proofs That Really Count won the 2006 Beckenbach Book Prize of the Mathematical Association of America, and the 2010 CHOICE Award for Outstanding Academic Title of the American Library Association. It has been listed by the Basic Library List Committee of the Mathematical Association of America as essential for inclusion in any undergraduate mathematics library.
References
External links
Proofs That Really Count on the Internet Archive
Enumerative combinatorics
Mathematical proofs
Mathematics books
2003 non-fiction books
snRNA-seq, also known as single nucleus RNA sequencing, single nuclei RNA sequencing or sNuc-seq, is an RNA sequencing method for profiling gene expression in cells which are difficult to isolate, such as those from tissues that are archived or hard to dissociate. It is an alternative to single cell RNA seq (scRNA-seq), as it analyzes nuclei instead of intact cells.
snRNA-seq minimizes the occurrence of spurious gene expression, as the localization of fully mature ribosomes to the cytoplasm means that any mRNAs of transcription factors that are expressed after the dissociation process cannot be translated, and thus their downstream targets cannot be transcribed. Additionally, snRNA-seq technology enables the discovery of new cell types which would otherwise be difficult to isolate.
Methods and technology
The basic snRNA-seq method requires 4 main steps: tissue processing, nuclei isolation, cell sorting, and sequencing. In order to isolate and sequence RNA inside the nucleus, snRNA-seq involves using a quick and mild nuclear dissociation protocol. This protocol allows for minimization of technical issues that can affect studies, especially those concerned with immediate early gene (IEG) behavior.
The resulting dissociated cells are suspended and the suspension gently lysed, allowing the cell nuclei to be separated from their cytoplasmic lysates using centrifugation. These separated nuclei/cells are sorted using fluorescence-activated cell sorting (FACS) into individual wells, and amplified using microfluidics machinery. Sequencing occurs as normal and the data can be analyzed as appropriate for its use.
This basic snRNA-seq methodology is capable of profiling RNA from tissues that are preserved or cannot be dissociated, but it does not have high throughput capability due to its reliance on nuclei sorting by FACS. This technique cannot be scaled easily to profiling large numbers of nuclei or samples. Massively parallel scRNA-seq methods exist and can be readily scaled, but their requirement of a single cell suspension as input is not ideal and eliminates some of the flexibility that is available with the snRNA-seq method with regard to the types of tissues and cells that can be examined. In response, the DroNc-Seq method of massively parallel snRNA-seq with droplet technology was developed by researchers from the Broad Institute of MIT and Harvard. In this technique, nuclei that have been isolated from their fixed or frozen tissue are encapsulated in droplets with uniquely barcoded beads that are coated with oligonucleotides containing a 3′-terminal deoxythymine (dT) stretch. This coating captures the polyadenylated mRNA content produced when the nuclei are lysed inside the droplets. The captured mRNA is reverse transcribed into cDNA after emulsion breakage. Sequencing this cDNA produces the transcriptomes of all the single nuclei examined, and these can be used for many purposes, including identification of unique cell types.
The sequencing tools and equipment used in scRNA-seq can be used with modifications for snRNA-seq experiments. Illumina outlines a workflow for the basic snRNA-seq method which can be performed with existing equipment. DroNc-Seq can be accomplished with microfluidic platforms which are meant for the Drop-seq scRNA-seq method. However, Dolomite Bio has adapted one of their instruments, the automated Nadia platform for scRNA-seq, to be used natively for DroNc-Seq as well. This instrument could simplify the generation of single nuclei sequencing libraries, as it is being used for its intended purpose.
In regard to data analysis after sequencing, a computational pipeline known as dropSeqPipe was developed by the McCarroll Lab at Harvard. Although the pipeline was originally developed for use with Drop-seq scRNA-seq data, it can be used with DroNc-Seq data as it also utilizes droplet technology.
Difference between snRNA-seq and scRNA-seq
snRNA-seq uses isolated nuclei instead of the entire cells to profile gene expression. That is to say, scRNA-seq measures both cytoplasmic and nuclear transcripts, while snRNA-seq mainly measures nuclear transcripts (though some transcripts might be attached to the rough endoplasmic reticulum and partially preserved in nuclear preps). This allows for snRNA-seq to process only the nucleus and not the entire cell. For this reason, compared to scRNA-seq, snRNA-Seq is more appropriate to profile gene expression in cells that are difficult to isolate (e.g. adipocytes, neurons), as well as preserved tissues.
Additionally, the nuclei required for snRNA-seq can be obtained quickly and easily from fresh, lightly fixed, or frozen tissues, whereas isolating single cells for single-cell RNA-seq (scRNA-seq) involves extended incubations and processing. This gives researchers the ability to obtain transcriptomes which are not as perturbed during isolation.
Application
In neuroscience, neurons have an interconnected nature which makes it extremely hard to isolate intact single neurons. As snRNA-seq has emerged as an alternative method of assessing a cell's transcriptome through the isolation of single nuclei, it has been possible to conduct single-neuron studies from postmortem human brain tissue. snRNA-seq has also enabled the first single-neuron analysis of immediate early gene (IEG) expression associated with memory formation in the mouse hippocampus. In 2019, Velmeshev et al. used the method on cortical tissue from ASD patients to identify ASD-associated transcriptomic changes in specific cell types, the first cell-type-specific transcriptome assessment in brains affected by ASD.
Outside of neuroscience, snRNA-seq has also been used in other research areas. In 2019, Wu et al. compared scRNA-seq and snRNA-seq in a genomic study of the kidney. They found snRNA-seq accomplishes a gene detection rate equivalent to that of scRNA-seq in adult kidney, with several significant advantages (including compatibility with frozen samples, reduced dissociation bias, and so on). In 2019, Joshi et al. used snRNA-seq in a human lung biology study in which they found snRNA-seq allowed unbiased identification of cell types from frozen healthy and fibrotic lung tissues. Adult mammalian heart tissue can be extremely hard to dissociate without damaging cells, which does not allow for easy sequencing of the tissue. However, in 2020, German scientists presented the first report of sequencing an adult mammalian heart by using snRNA-seq and were able to provide practical cell-type distributions within the heart.
Pros and cons of snRNA-seq
Pros
In scRNA-seq, the dissociation process may impair some sensitive cells, and cells in certain tissues (e.g. collagenous matrix) can be extremely hard to dissociate. Such issues can be avoided in snRNA-seq, since only a single nucleus needs to be isolated rather than an entire cell.
Unlike scRNA-seq, snRNA-seq has quick and mild nuclei dissociation protocols that forestall technical issues arising from heating and protease digestion.
snRNA-seq works very well for preserved/frozen tissues.
Cons
Sequencing RNA in the cytoplasm (gene isoforms, RNA in mitochondria and chloroplasts, etc.) is not possible, as snRNA-seq mostly measures nuclear transcripts.
References
RNA sequencing
Molecular biology techniques
Cloud seeding in the United Arab Emirates is a weather modification technique used by the government to address water challenges in the country. Cloud seeding is also referred to as man-made precipitation and artificial rain making. The United Arab Emirates is one of the first countries in the Persian Gulf region to use cloud seeding technology. UAE scientists use cloud seeding technology to address the country's water insecurity, which stems from the extremely hot climate. They use weather radars to continuously monitor the atmosphere of the country. Forecasters and scientists have estimated that cloud seeding operations can enhance rainfall by as much as 30-35% in a clear atmosphere, and by up to 10-15% in a more humid atmosphere. This practice has caused concerns regarding its impact on the environment, because it is difficult to predict its long-term global implications.
Climate needs
The UAE has an arid climate with less than 100mm per year of rainfall, a high evaporation rate of surface water and a low groundwater recharge rate. Rainfall in the UAE has been fluctuating over the last few decades in winter season between December and March.
The UAE is a very dry region aside from the coast and the border between the UAE and Oman, where humidity is high. The country is located in a dust hotspot that contributes to the arid climate. What little rainfall there is comes from frontal systems from the west and northwest, which yield only a few inches per year. This lack of rainfall has scientists and the government worried about water security in the future.
Due to industrialization and population growth, the demand for water has rapidly increased. Current resources are being depleted and scarcity issues are arising. As a result, the UAE is looking to cloud seeding technologies to increase water security as well as renewability to combat water and food scarcity that may arise.
History
Scientists have been experimenting with cloud seeding technology since the 1940s. The cloud-seeding program in the UAE was initiated in the late 1990s, making it one of the first Middle Eastern countries to utilize this technique. In 2005, the UAE launched the UAE Prize for Excellence in Advancing the Science and Practice of Weather Modification in collaboration with the World Meteorological Organization (WMO). In 2010, cloud seeding began as a project by weather authorities to create artificial rain. The project, which began in July 2010 and cost $11 million, succeeded in creating rain storms in the Dubai and Abu Dhabi deserts.
Government involvement
The UAE government developed a research program called the UAE Research Program for Rain Enhancement Science (UAEREP) in 2015. It allows scientists and researchers to pitch their potential solutions and conduct research to improve the accuracy of cloud seeding technology. After pitching research proposals, scientists are awarded grants through the UAEREP. Among its key goals are advancing the science, technology, and implementation of rain enhancement and encouraging additional investments in research funding and research partnerships to advance the field, increasing rainfall and ensuring water security globally. By early 2001, the UAE was conducting research projects in cooperation with the National Center for Atmospheric Research (NCAR) in the U.S., the University of the Witwatersrand in South Africa, and the National Aeronautics and Space Administration (NASA) in the U.S.
The Program for Rain Enhancement Science is an initiative of the United Arab Emirates Ministry of Presidential Affairs. It is overseen by the UAE National Center of Meteorology & Seismology (NCMS) based in Abu Dhabi.
In 2014, a total of 187 missions were sent to seed clouds in the UAE, with each aircraft taking about three hours to target five to six clouds at a cost of $3,000 per operation. In 2017, the UAE had 214 missions; in 2018, it had 184 missions; and 247 missions were launched in 2019. In 2020, new technologies were tested with partners in the United States, including the use of nanomaterials for seeding.
Technology
The augmentation of rainfall considers both the ground-based and airborne processes that occur in different rain cloud types (but is generally focused on convective clouds). The UAE utilizes operational aircraft-based and drone-controlled hygroscopic cloud seeding as opposed to conventional randomized aircraft seeding, which does not take into consideration the varying properties of rain clouds, especially present in dusty and arid regions like the UAE. Since 2021, the devices have been equipped with a payload of electric-charge emission instruments and customized sensors that fly at low altitudes and deliver an electric charge to air molecules. Hygroscopic cloud seeding uses natural salts, such as the potassium chloride and sodium chloride that pre-exist in the atmosphere, delivered with hygroscopic flares. By introducing hygroscopic particles, seeding enhances the natural rain particles, beginning a collision-coalescence process.
At present, the UAE mostly cloud seeds in the eastern mountains on the border to Oman to raise levels in aquifers and reservoirs. There are 75 networked automatic weather stations distributed across the country, 7 air quality stations, a Doppler weather radar network of five stationary and one mobile radar, and six Beechcraft King Air C90 aircraft distributed across the country for cloud seeding operations.
Environmental impact
Flooding
It is predicted that climate change will lead to higher temperatures, increased humidity and a greater risk of flooding in parts of the Gulf region. These issues could be worsened in nations like the UAE which do not have adequate drainage infrastructure to manage heavy rainfall.
Cloud seeding activities conducted in 2019 by the UAE National Center of Meteorology & Seismology (NCM) as part of the UAE Research Program for Rain Enhancement Science were carried out prior to floods in Dubai in 2019. Although the NCM has linked heavier rainfall to cloud seeding operations, it asserts that seeding was not the cause of the flooding. Commercial and residential areas were severely impacted, and pumps were needed to remove excess water because the inadequate drainage systems could not handle the volume of water. The UAE planned to invest 500 million dirhams ($136.1 million) on flood protection and transport infrastructure after severe storms in 2020.
Sharjah, one of the most populous cities in the UAE, has experienced repetitive urban flooding during the rainy season over the last three decades. Possible additional increased rainfall intensity due to cloud seeding would require additional investment in the city's drainage systems to mitigate flood risk.
April 2024 floods
Experts are doubtful that cloud seeding played a role in the UAE's April 2024 floods, suggesting that the heavy rainfall was more likely caused by anthropogenic climate change.
Atmospheric aerosols
Cloud seeding missions require firing salts and silver iodide crystals into the atmosphere. The increased concentration of particulate matter, or micro-pollutants, increases the risk of respiratory illnesses. In 2017, a study was conducted before and after cloud seeding missions, which recorded an increase of particulate matter correlating with the months of active artificial rain. Researchers attribute this to leftover silver iodide crystals that were not dispersed in the rain during the cloud seeding months. A study called the UAE Unified Aerosol Experiment (UAE2) was conducted to assess the progress and effectiveness of cloud seeding specifically in the UAE. Researchers found a significant increase in rainfall trends in areas with cloud seeding. More recently, over 20 regions in the UAE that participated in cloud seeding experiments have a higher concentration of particulate matter. The overall environmental impact of cloud seeding is difficult to measure due to the inability to perform controlled experiments, along with the difficulty of direct tracing.
See also
Cloud seeding
United Arab Emirates
Environmental issues in the United Arab Emirates
Arabian Desert
Abu Dhabi
Dubai Electricity and Water Authority
Sharjah Electricity and Water Authority
Particulates
References
Weather modification
Science and technology in the United Arab Emirates
Science experiments
Nucleomodulins are a family of bacterial proteins that enter the nucleus of eukaryotic cells.
This term comes from the contraction between "nucleus" and "modulins", which are microbial molecules that modulate the behaviour of eukaryotic cells. Nucleomodulins are produced by pathogenic or symbiotic bacteria. They act on various processes in the nucleus: remodelling of the chromatin structure, transcription, splicing of pre-messenger RNA, cell division.
The identification of nucleomodulins in several species of bacterial pathogens of humans, animals and plants has led to the emergence of the concept that direct control of the nucleus is one of the most sophisticated strategies used by microbes to bypass host defences. Nucleomodulins can be directly secreted into the intracellular medium after entry of the bacteria into the cell (as with Listeria monocytogenes), or they can be injected from the extracellular medium or intracellular organelles using a type III or IV bacterial secretion system, also known as a "molecular syringe".
More recently, it has been shown that some of them, such as YopM from Yersinia pestis and IpaH9.8 from Shigella flexneri, can autonomously penetrate eukaryotic cells thanks to a membrane transduction domain.
The diversity of molecular mechanisms triggered by nucleomodulins is a source of inspiration for new biotechnologies. They are true nano-machines capable of hijacking a multitude of nuclear processes. In research, nucleomodulins are the subject of in-depth studies that have led to the discovery of new human nuclear regulators, such as the epigenetic regulator BAHD1.
Examples
Agrobacterium tumefaciens, responsible for crown gall disease, produces an arsenal of Vir proteins, including VirD2 and VirE2, enabling the precise integration of a piece of its DNA, called T-DNA, into that of the host plant.
Listeria monocytogenes, responsible for listeriosis, can modulate the expression of immunity genes. One of the mechanisms at play involves the bacterial protein LntA, which inhibits the function of the epigenetic regulator BAHD1. The action of this nucleomodulin is associated with chromatin decompaction and activation of interferon response genes.
Shigella flexneri, responsible for shigellosis, secretes the IpaH9.8 protein targeting an mRNA splicing protein, disrupting the production of protein isoforms and the inflammatory response in humans.
Legionella pneumophila, responsible for legionellosis, secretes an enzyme with histone methyltransferase activity capable of methylating histones at different chromosome loci or at the level of ribosomal DNA (rDNA) in the nucleolus.
References
Genetics
Leslie S. G. Kovasznay (14 April 1918, Budapest – 17 April 1980) was a Hungarian-American engineer, known as one of the world's leading experts in turbulent flow research.
Kovasznay earned his doctorate in engineering in 1943 at the Royal Hungarian Institute of Technology, in the laboratory of Előd Abody-Anderlik in the faculty of mechanical engineering. After working at that faculty from 1941 to 1946, he spent a year at the Cavendish Laboratory working with Sir Geoffrey Taylor. From 1947 to 1978 Kovasznay was a faculty member of the Aeronautics Department organized by Francis H. Clauser (1913–2013) at Johns Hopkins University (JHU). In December 1978 he resigned from JHU to become a professor of mechanical engineering at the University of Houston, where he remained until his sudden death in 1980.
In the 1970s, he worked with Hajime Fujita on experimental studies of interactions between airfoils and wake turbulence and, with Chih-Ming Ho, on experimental studies of interactions between sound and turbulence.
He travelled widely, lectured at many universities and conferences, and made extended visits in France and Japan. He was the author or coauthor of more than 80 papers. He was a Guggenheim Fellow for the academic year 1955–1956. He was elected a Fellow of the American Physical Society in 1962.
Kovasznay married in 1944. Upon his death, he was survived by his widow and their daughter.
Selected publications
1948
1948
1949
1950
1953
1953
1955
1958
1968
1969
1969
1970
1972
1972
See also
Kovasznay flow
Entropy-vorticity wave
References
1918 births
1980 deaths
20th-century American physicists
Aerodynamicists
Budapest University of Technology and Economics alumni
Fellows of the American Physical Society
Fluid dynamicists
Hungarian aerospace engineers
Hungarian emigrants to the United States
20th-century Hungarian physicists
20th-century Hungarian inventors
Johns Hopkins University faculty
Musicians from Budapest
20th-century American inventors
University of Houston faculty
Applied Cognitive Psychology is a bimonthly peer-reviewed scientific journal covering experimental research in cognitive psychology. It was established in 1987 and is published by John Wiley & Sons. The founding editors-in-chief were Douglas Herrmann and Graham M. Davies, and the current one is Pär Anders Granhag (University of Gothenburg). According to the Journal Citation Reports, the journal has a 2021 impact factor of 2.360, ranking it 56th out of 91 journals in the category "Psychology, Experimental".
References
External links
Applied psychology journals
Cognitive science journals
Cognitive psychology
HRDetect (Homologous Recombination Deficiency Detect) is a whole-genome sequencing (WGS)-based classifier designed to predict BRCA1 and BRCA2 deficiency based on six mutational signatures. Additionally, the classifier is able to identify similarities between the mutational profiles of tumors and those of tumors with BRCA1 and BRCA2 defects, also known as BRCAness. This classifier can be applied to guide the use of PARP inhibitors in patients with BRCA1/BRCA2 deficiency. The final output is a probability of BRCA1/2 mutation.
Background
BRCA1/BRCA2
BRCA1 and BRCA2 play crucial roles in maintaining genome integrity, mainly through homologous recombination (HR) for DNA double-strand break (DSB) repair. Mutations of BRCA1 and BRCA2 can reduce the capacity of the HR machinery, increase genomic instability, and elicit a predisposition to malignancies. People with BRCA1 and BRCA2 deficiency have higher risks of developing certain cancers such as breast and ovarian cancers. Germline defects in BRCA1/BRCA2 genes account for up to 5% of breast cancer cases.
PARP inhibitors
Poly (ADP ribose) polymerase (PARP) inhibitors are designed to treat BRCA1- and BRCA2-defective tumors owing to their homologous recombination deficiency. These drugs have been mainly implemented in breast and ovarian cancers, and their clinical efficacy among patients with other types of cancers, such as pancreatic cancer, is still being investigated. It is vital to identify suitable patients with BRCA1/BRCA2 deficiency to utilize PARP inhibitors optimally. PARP inhibitors operate on the concept of synthetic lethality, whereby cell death is selectively caused in BRCA-mutant cells while normal cells are spared.
HRDetect
HRDetect was implemented to detect tumors with BRCA1/BRCA2 deficiency using the data from whole-genome sequencing. This model quantitatively aggregates six HRD-associated signatures into a single score called HRDetect to accurately classify breast cancers by their BRCA1 and BRCA2 status. The machine learning algorithm assigns weight values to these signatures prior to computing the final score. The six signatures, ranked by decreasing weight, include microhomology-mediated indels, the HRD index, base-substitution signature 3, rearrangement signature 3, rearrangement signature 5, and base-substitution signature 8.
Methodology
Input
HRDetect requires four types of inputs:
Counts of mutations associated with each signature of single-base substitutions
Indels with microhomology at the indel breakpoint junction, indels at polynucleotide-repeat tracts and other complex indels as proportions
Counts of rearrangements associated with each signature
HRD index (Arithmetic sum of loss of heterozygosity (LOH), telomeric-allelic imbalance (TAI), and large-scale state transitions (LST) scores)
Statistical Analysis
HRDetect is based on a supervised learning method using a lasso logistic regression model to separate samples into those with and without BRCA1/2 deficiency. Optimal coefficients are obtained by minimizing the objective function.
Log Transformation
To account for a high substitution count in samples, the genomic data is first log transformed:
Standardization
The transformed data is then standardized to make mutational class values comparable giving each object a mean of 0 and a standard deviation (sd) of 1:
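A sketch of these two preprocessing steps in Python; the exact form of the log transform is not shown in the text, so the common choice of the natural log of (count + 1) is assumed here:

```python
import numpy as np

def preprocess(counts):
    """Log-transform then standardize a samples-by-features matrix of
    mutation counts.  Assumes the transform is ln(count + 1); the
    standardization then gives each feature mean 0 and standard
    deviation 1, as described in the text."""
    logged = np.log1p(counts)     # natural log of (x + 1), an assumption
    mu = logged.mean(axis=0)
    sd = logged.std(axis=0)
    return (logged - mu) / sd
```

In practice the means and standard deviations estimated on the training cohort would be stored and reapplied to any new sample before scoring.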
Lasso Logistic Regression Modelling
To be able to distinguish between those affected and not affected by BRCA1/BRCA2 deficiency, a lasso logistic regression model is used:
where:
: BRCA status of a sample; yi = 1 for BRCA1/BRCA2-null samples, yi = 0 otherwise
: Intercept, interpreted as the log of odds of = 1 when = 0
: Vector of weights
: Number of features characterizing each sample
: Number of samples
: Vector of features characterizing the ith sample
: Penalty promoting the sparseness of the weights
: L1 norm of the vector of weights
The β weights are constrained to be positive to reflect the presence of mutational actions due to BRCA1/BRCA2 defects. Setting the constraint of nonnegative weights ensures that all samples would be scored on the basis of the presence of relevant mutational signatures associated with BRCA1/BRCA2 deficiency, irrespective of whether these signatures are the dominant mutational process in the cancer.
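A minimal sketch of such a fit (not the published implementation): proximal gradient descent on the L1-penalized logistic loss, with the weights projected onto the nonnegative orthant after every step to enforce the constraint described above. The toy data, step size and penalty below are illustrative only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_nonneg_lasso_logistic(X, y, lam=0.05, lr=0.5, n_iter=3000):
    """L1-penalized logistic regression with nonnegative weights.
    Each iteration takes a gradient step on the average logistic loss,
    then applies the proximal step for lam * ||w||_1 together with the
    projection onto w >= 0; for nonnegative weights these combine into
    the single update max(w - lr*lam, 0).  The intercept b is left
    unpenalized and unconstrained."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(n_iter):
        p = sigmoid(X @ w + b)
        w = np.maximum(w - lr * (X.T @ (p - y) / n) - lr * lam, 0.0)
        b -= lr * np.mean(p - y)
    return w, b

# Toy data: only feature 0 carries signal, so the fit should place
# essentially all of its (nonnegative) weight there.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (X[:, 0] > 0).astype(float)
w, b = fit_nonneg_lasso_logistic(X, y)
```

The L1 penalty drives the weights of uninformative signatures to exactly zero, which is why the model ends up concentrating its score on the handful of signatures listed in the text.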
HRDetect Score
Lastly, the weights obtained from the lasso regression are used to give a new sample a probabilistic score, using the normalized mutational data and applying the model parameters (, ):
where:
: variable encoding the status of the ith sample
: Intercept weight
: Vector encoding features of the ith sample
: Vector of weights
Interpretation
The probability value quantifies the degree of BRCA1/BRCA2 defectiveness. A cut-off probability value should be chosen while maintaining a high sensitivity. These scores can be utilized to guide therapy.
Applications
Predicting Chemotherapeutic Outcomes
Mutations in genes responsible for HR are prevalent among human cancers. The BRCA1 and BRCA2 genes are centrally involved in HR, DNA damage repair, end resection, and checkpoint signaling. Mutational signatures of HRD have been identified in over 20% of breast cancers, as well as in pancreatic, ovarian, and gastric cancers. BRCA1/2 mutations confer sensitivity to platinum-based chemotherapies. HRDetect can be independently trained to predict BRCA1/2 status, and has the capacity to predict outcomes of platinum-based chemotherapies.
Breast Cancer
HRDetect was initially developed to detect tumors with BRCA1 and BRCA2 deficiency based on data from whole-genome sequencing of a cohort of 560 breast cancer samples. Within this cohort, 22 patients were known to carry germline BRCA1/BRCA2 mutations. BRCA1/BRCA2-deficiency mutational signatures were found in more breast cancer patients than previously known: the model identified 124 (22%) of the 560 samples as showing BRCA1/2 mutational signatures. Apart from the 22 known cases, an additional 33 patients showed deficiency with germline BRCA1/2 mutations, 22 patients displayed somatic mutation of BRCA1/2, and 47 were recognized as showing a functional defect without a detected BRCA1/2 mutation. With a probabilistic cut-off of 0.7, HRDetect demonstrated 98.7% sensitivity in recognizing BRCA1/2-deficient cases.
In contrast, germline mutations of BRCA1/2 are present in only 1–5% of breast cancer cases. These findings suggest that far more breast cancer patients, as many as 1 in 5 (20%), may benefit from PARP inhibitors than the small percentage of patients currently given the treatment.
Cohort of 80 Breast cancer patients. 6 out of 7 are above HRDetect score 0.7.
Cohort of 80 Breast Cancer Samples
HRDetect was also tested in 80 breast cancer cases, mainly ER-positive and HER2-negative. The tool identified samples exceeding an HRDetect score of 0.7, including one germline BRCA1 mutation carrier, four germline BRCA2 mutation carriers and one somatic BRCA2 mutation carrier. The sensitivity of the tool in this cohort reached 86%.
Compatibility Across Cancers
HRDetect can be applied to other cancer types and yields adequate sensitivity.
Ovarian Cancer
In a cohort of 73 patients with ovarian cancer, 30 patients were known to carry BRCA1/BRCA2 mutations, and 46 (63%) patients were assessed by HRDetect to have scores over 0.7. The sensitivity of detecting BRCA1/2-deficient cancer was almost 100%, with an additional 16 cases identified.
Pancreatic Cancer
In a cohort of 96 patients with pancreatic cancer, 6 cases were known to have a mutation or allele loss, and 11 (11.5%) patients were identified by HRDetect as exceeding the cutoff of 0.7. The study observed a similar sensitivity approaching 100%, with five other cases identified.
Advantages and Limitations
Advantages
The concordance in predictions is high between low-coverage and high-coverage sequencing.
It can be trained on whole-exome sequencing (WES) data
It can be used with sequencing data from formalin-fixed paraffin-embedded (FFPE) samples
It can distinguish BRCA1 from BRCA2 tumors
Limitations
While it can be used with WES data, the sensitivity of detection falls considerably when the model is not trained on such data. The sensitivity increases when training is performed with WES data; however, false positives are still identified.
References
Genome projects | HRDetect | ["Biology"] | 1,912 | ["Genome projects"] |
63,230,609 | https://en.wikipedia.org/wiki/Pythagorean%20Triangles | Pythagorean Triangles is a book on right triangles, the Pythagorean theorem, and Pythagorean triples. It was originally written in the Polish language by Wacław Sierpiński (titled Trójkąty pitagorejskie), and published in Warsaw in 1954. Indian mathematician Ambikeshwar Sharma translated it into English, with some added material from Sierpiński, and published it in the Scripta Mathematica Studies series of Yeshiva University (volume 9 of the series) in 1962. Dover Books republished the translation in a paperback edition in 2003. There is also a Russian translation of the 1954 edition.
Topics
As a brief summary of the book's contents, reviewer Brian Hopkins quotes The Pirates of Penzance: "With many cheerful facts about the square of the hypotenuse."
The book is divided into 15 chapters (or 16, if one counts the added material as a separate chapter). The first three of these define the primitive Pythagorean triples (the ones in which the two sides and hypotenuse have no common factor), derive the standard formula for generating all primitive Pythagorean triples, compute the inradius of Pythagorean triangles, and construct all triangles with sides of length at most 100.
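The "standard formula" derived in those chapters is Euclid's formula, which produces every primitive triple from coprime integers m > n > 0 of opposite parity. A short generator sketch:

```python
from math import gcd

def primitive_triples(limit):
    """Yield primitive Pythagorean triples (a, b, c) with c <= limit,
    via Euclid's formula: a = m^2 - n^2, b = 2mn, c = m^2 + n^2."""
    m = 2
    while m * m + 1 <= limit:
        for n in range(1, m):
            # Primitivity requires gcd(m, n) = 1 and m, n of opposite parity.
            if gcd(m, n) == 1 and (m - n) % 2 == 1:
                a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
                if c <= limit:
                    yield a, b, c
        m += 1

# Yields (3, 4, 5), (5, 12, 13), (15, 8, 17), (7, 24, 25), (21, 20, 29).
for triple in primitive_triples(30):
    print(triple)
```

Sierpiński's chapter 3 construction of all triangles with sides at most 100 amounts to running such an enumeration with a larger bound.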
Chapter 4 considers special classes of Pythagorean triangles, including those with sides in arithmetic progression, nearly-isosceles triangles, and the relation between nearly-isosceles triangles and square triangular numbers. The next two chapters characterize the numbers that can appear in Pythagorean triples, and chapters 7–9 find sets of many Pythagorean triangles with the same side, the same hypotenuse, the same perimeter, the same area, or the same inradius.
Chapter 10 describes Pythagorean triangles with a side or area that is a square or cube, connecting this problem to Fermat's Last Theorem. After a chapter on Heronian triangles, Chapter 12 returns to this theme, discussing triangles whose hypotenuse and sum of sides are squares. Chapter 13 relates Pythagorean triangles to rational points on a unit circle, Chapter 14 discusses right triangles whose sides are unit fractions rather than integers, and Chapter 15 is about the Euler brick problem, a three-dimensional generalization of Pythagorean triangles, and related problems on integer-sided tetrahedra. Sadly, in giving an example of a Heronian tetrahedron found by E. P. Starke, the book repeats a mistake of Starke in calculating its volume.
Audience and reception
The book is aimed at mathematics teachers, in order to inspire their interest in this subject, but (despite complaining that some of its proofs are overly complicated) reviewer Donald Vestal also suggests this as a "fun book for a mostly general audience".
Reviewer Brian Hopkins suggests that some of the book's material could be simplified using modular notation and linear algebra, and that the book could benefit by updating it to include a bibliography, index, more than its one illustration, and pointers to recent research in this area such as the Boolean Pythagorean triples problem. Nevertheless, he highly recommends it to mathematics teachers and readers interested in "thorough and elegant proofs". Reviewer Eric Stephen Barnes rates Sharma's translation as "very readable". The editors of zbMATH write of the Dover edition that "It is a pleasure to have this classic text available again".
References
Pythagorean theorem
Mathematics books
1954 non-fiction books
1962 non-fiction books
2003 non-fiction books | Pythagorean Triangles | ["Mathematics"] | 760 | ["Euclidean plane geometry", "Mathematical objects", "Equations", "Pythagorean theorem", "Planes (geometry)"] |
54,711,924 | https://en.wikipedia.org/wiki/Friedrich%20Weleminsky | Joseph Friedrich ("Fritz") Weleminsky (20 January 1868, in Golčův Jeníkov – 1 January 1945, in London), was a physician, a scientist and a privatdozent in Hygiene (now called Microbiology) at the German University, Prague who, in the early 20th century, created an alternative treatment for tuberculosis, tuberculomucin Weleminsky.
Early life and education
He was born into a Jewish family on 20 January 1868 at Golčův Jeníkov in Bohemia, (then part of the Austro-Hungarian Empire, now in the Czech Republic). His parents were Jacob Weleminsky (1834–1905), a general medical practitioner (GP) in Golčův Jeníkov, and his wife Bertha (née Kohn; 1844–1914). Friedrich was their second child; he had an elder sister, Paula (1867–1936), who in 1888 married a Dresden lawyer, Felix Popper, and a younger brother, Josef ("Pepi") (1870–1937) who, like Friedrich, studied medicine in Prague and who went on to become a laryngologist.
The family moved to Dresden in 1879 when Jacob obtained a position as GP there, and later to Prague. Friedrich attended the Kreuzschule in Dresden and studied medicine in Prague.
Career
Friedrich Weleminsky enrolled in the medical faculty of the German University in Prague in 1893 and obtained a habilitation qualification as Dr.Med. in 1900. He was appointed to a teaching post in the university's medical faculty as a privatdozent in Hygiene in July 1900.
During the First World War, Weleminsky was in charge of the reserve hospital "Halicz", which was stationed in various parts of Austria and Hungary. While stationed in Kleinreifling, a village in the district of Steyr-Land in Upper Austria, he successfully brought a local typhoid epidemic under control, for which he was made an Ehrenbürger (honorary citizen) of Weyer.
Tuberculomucin Weleminsky
Weleminsky's particular area of interest was vaccination against tuberculosis. In 1935, an editorial in the American Journal of Clinical Pathology cited one of his articles as providing "a good review of the voluminous literature accumulated on BCG".
In 1912 Weleminsky, who was then second assistant to Ferdinand Hueppe, the head of the Institute for Hygiene at the German University of Prague, published his discovery of a new treatment for tuberculosis, which he named tuberculomucin (Tbm). It was tested on guinea pigs, with number 1769 being the first to survive due to the treatment in 1909. He also used tuberculomucin Weleminsky (also spelt tuberkulomucin Weleminsky and tuberkulomuzin Weleminsky) to treat cattle which he kept at his country retreat, Schloss Thalheim.
More than 60 papers were published in German describing tuberculomucin's use in humans, but very few of them were read by an English-speaking audience. By the mid-1920s it was known as tuberculomucin Weleminsky and at least two companies were involved in producing and marketing the treatment. (A 1927 advertisement issued by the Biopharma pharmaceutical company in Vienna listed the distinctions between tuberculin and tuberculomucin Weleminsky (TbM) and invited general practitioners and clinical physicians to seek a brochure about TbM, or trial samples for clinical testing, by completing and returning a postcard in which they were also asked to say whether they were already using TbM and what success they had experienced with it.) In 1938, Sanders, a Belgian pharmaceutical company, planned to manufacture TbM and to make it available in Western Europe and other parts of the developed world. However, Weleminsky fled from Prague in 1939, a couple of weeks before the Nazi invasion of Czechoslovakia, and these plans and further development of the treatment ceased.
Personal and family life
On 4 December 1905 he married Jenny Elbogen (1882–1957), at her parents' country home, Schloss Thalheim, Lower Austria. The married couple lived in Prague and at Schloss Thalheim, which Jenny inherited from her father after his death in 1918 and which they ran as a model dairy farm.
They had four children together. Their eldest daughter, Marianne (born 1906), and their son, Anton (born 1908), came to Britain just before the Second World War. Two of their daughters emigrated in the early 1930s to Mandatory Palestine where they took new names – Eliesabeth (born 1909) became Jardenah, and Dorothea (born 1912) was known as Leah.
Facing Nazi persecution for being Jewish, Friedrich and Jenny Weleminsky found sanctuary in Britain in 1939.
Death and legacy
Friedrich Weleminsky died of pneumonia on 1 January 1945 at Fulham Hospital, London and is buried at Golders Green Jewish Cemetery. His wife Jenny, who was 14 years younger, survived him by 12 years. Their grandchildren and great-grandchildren now live in Britain, Israel, Australia, Sweden and Germany.
In 2011, following an approach by Weleminsky's eldest granddaughter, Dr Charlotte Jones, a retired general practitioner, a team at the University College London's Department of Science and Technology Studies resumed research on tuberculomucin Weleminsky. Since 2017, Friedrich's granddaughter Judy Weleminsky has been leading this research.
Publications
Basch, K; Weleminsky, Friedrich (1898). "Ueber die Ausscheidung von Krankheitserregern durch die Milch". Jahrb. f. Kinderheilk. 47, 105–115
Weleminsky, Friedrich (1899). Über Sporenbildung bei Dematium pululans de Bary, 7pp.
Weleminsky, Friedrich (1899). Ueber Akklimitisation in Grossstädten (On acclimisation in large cities), Oldenbourg: Munich. Off-print from Archiv für Hygiene, 26: 2
Jadassohn, J; Pick, Walther; Weleminsky, Friedrich (1903). "Buchanzeigen und Besprechungen". Archiv für Dermatologie und Syphilis, 64(1): 149–160
Weleminsky, Friedrich (1905). "Zur Pathogenese der Lungentuberkulose (On the pathogenesis of lung tuberculosis)". Klinische Wochenschrift (Clinical Weekly). Berlin, Springer-Verlag
Weleminsky, Friedrich (1906). "Ueber Zuchtung von Mikroorganismen in stromenden Nahrboden". Zentralblatt fur Bakteriologie, Parasitenkunde und Infektionskrankheiten (Central Journal of Bacteriology, Parasitics, Infectious Diseases and Hygiene), 42: 1–7
Weleminsky, Friedrich (1907). "Der Gang von Infektionen in den Lymphbahnen (The course of infections in the lymphatics)". Klinische Wochenschrift (Clinical Weekly). Berlin, Springer-Verlag
Weleminsky, Friedrich (1912). "Ueber die Bildung von Elweiss und Mucin durch Tuberkelbacillen". Klinische Wochenschrift (Clinical Weekly). 28: 1–8 Berlin, Springer-Verlag
Weleminsky, Friedrich (1914). "Tierversuche mit Tuberculomucin". Klinische Wochenschrift (Clinical Weekly). 18: 1–10 Berlin, Springer-Verlag
Weleminsky, Friedrich (1928). "Filtrable form of tubercle bacilli". Zentralbl. f. d. gesam. Tuberk.Forsch. 28(5/6): 305–310
Weleminsky, Friedrich (1930). "Die Immunisierung gegen Tuberkulose mit Calmette's BCG". Klinische Wochenschrift (Clinical Weekly). II: 1317–1320 Berlin, Springer-Verlag
Weleminsky, Friedrich (1930). "Die B.C.G.-Literatur in französischer Sprache". Zentralbl. f. d. gesam. Tuberk.Forsch''. 33: 129–135
See also
Jiří Velemínský
Judy Weleminsky
Note
References
Further reading
1868 births
1945 deaths
19th-century Austrian people
19th-century Austrian scientists
20th-century Austrian educators
20th-century Austrian physicians
20th-century Austrian scientists
Academic staff of Charles University
Austrian medical researchers
Austrian microbiologists
Burials at Golders Green Jewish Cemetery
Charles University alumni
Czech educators
Czech medical researchers
Czech microbiologists
Czech pulmonologists
Czechoslovak Jews
Deaths from pneumonia in England
Jewish educators
Jewish emigrants from Austria after the Anschluss to the United Kingdom
Jewish biologists
Jewish physicians
Jewish German scientists
Jews from Bohemia
People from Havlíčkův Brod District
People from Sankt Pölten-Land District
Physicians from Austria-Hungary
Tuberculosis researchers
Vaccinologists
Scientists from Bohemia
People educated at the Kreuzschule | Friedrich Weleminsky | ["Biology"] | 1,920 | ["Vaccination", "Vaccinologists"] |
54,711,947 | https://en.wikipedia.org/wiki/3D%20structure%20change%20detection | 3D Structure Change Detection is a type of Change detection (GIS) processes for GIS (geographical information systems). It is a process that measures how the volume of a particular area have changed between two or more time periods. A high-spatial resolution Digital elevation model (DEM) that provides accurate 4-d (space and time) structural information over area of interest is required to compute such changes. In production, two or more DEMs that cover the same area are used to monitor topographic changes of area. By comparing the DEMs made at different times, structure of terrain changes can be realized by the ground elevation difference from DEMs. Details, occurring time and accuracy of such changes are strongly relied on the resolution, quality of DEMs. In general, the problem of involves whether or not a change has occurred, or whether several changes have occurred. Such structure changes detection has been widely used to assess urban growth, impact of natural disasters like earthquake, volcano and battle damage assessment.
See also
Change detection (GIS)
Digital elevation model
Geographic information system
References
External links
Geographic information systems
Topography techniques
Change detection | 3D structure change detection | ["Technology"] | 224 | ["Information systems", "Geographic information systems"] |
54,715,525 | https://en.wikipedia.org/wiki/Life%20Length | Life Length is a biotechnology company. Located in Madrid, it provides telomere diagnostics as well as telomerase measurement.
It was founded by American entrepreneur Stephen J. Matlin and Dr. María Blasco Marhuenda in 2010 with the objective to commercialize Blasco's HT Q-FISH conceptual work.
Life Length is the only laboratory in Spain accredited under the U.S. CLIA program. Life Length has three main facilities, with offices in Madrid and laboratories located in Tres Cantos.
History
Life Length was established on September 28, 2010, as a spin-off from the Spanish National Cancer Research Centre. In 2016, the company obtained CLIA certification, a U.S. government accreditation for clinical laboratories, becoming the only laboratory in Spain with this certification.
In 2017, the ONCOCHECK project, which involves a series of clinical studies focused on cancer diagnostics, received €3.1 million in funding from the European Union's Horizon 2020 research and innovation program.
In 2021, Life Length opened a clinic at Paseo del General Martinez Campos, 46, in Madrid.
In 2022, the company's prostate cancer diagnostic tool received approval from the Spanish Agency for Medicines and Health Products (AEMPS). The same year, the company launched HEALTHTAV, a new product in its portfolio.
Telomeres and their importance to the company
Telomeres are regions of DNA found at the ends of chromosomes. Their function is to protect the DNA during each cell division by preventing chromosomes from adhering to each other or from losing important information. They are regarded as a precise biomarker for measuring aging. Telomere deterioration has been associated with the ageing process and many other diseases.
Over the years, every time a cell divides, our telomeres successively shorten up to a point where the cells cannot divide any more. Subsequently, they either undergo a process called apoptosis (cells progressively die) or go into senescence (they lose their function).
Many studies link long telomeres and a slower rate of telomere shortening with greater longevity. For example, research done on mice showed that individuals with hyper-long telomeres lived 13% longer than those with normal telomeres. These mice also stored less fat, which may further contribute to greater longevity.
Due to the impact they have at the cellular level, the length of telomeres and their rate of shortening are considered a relevant biomarker for assessing the state of aging of the entire organism.
Projects
Oncocheck
ONCOCHECK is a set of clinical studies conducted by Life Length during 2017. The aim of the project was the clinical validation of telomere-associated variables (TAVs) as cancer biomarkers. It involved more than 1,200 adults and 300 children suffering from one of multiple existing types of cancer, including breast, prostate, lung, and leukemia cancers, among others. ONCOCHECK received funding from the European Union's Horizon 2020 research and innovation program. With more than 7,000 peer-reviewed scientific and clinical publications, telomere length measurement has established itself as a biomarker in cancer diagnosis and prognosis. This project has the invaluable support of some of the most important hospitals in Spain such as "University Hospital 12 de octubre", "University Hospital Puerta de Hierro", "University Hospital Niño Jesús", "Vall d'Hebron Hospital", and "Centro Integral Oncológico Clara Campal (CIOCC)". Within the ONCOCHECK project, Life Length is also conducting studies in advanced solid tumors and chronic lymphocytic leukemia (CLL).
The results of the ONCOCHECK project have enabled Life Length to develop new applications in oncology.
Prostate cancer diagnosis product
Telomeres as cancer biomarkers:
Tumor cells work differently from normal cells. As a cell becomes cancerous, it divides more frequently, and its telomeres shorten faster.
Cancer cells avoid senescence/death and instead become immortal with the ability to replicate indefinitely, even when telomeres are short. Therefore, alterations in telomere length have great potential as a biomarker in cancer.
Prostate cancer is the second most common cancer among men worldwide; about 1 in 8 men are diagnosed with it during their lifetime. This diagnostic is an in vitro diagnostic test (IVD) that makes it possible to identify patients with a higher risk of suffering from aggressive prostate cancer. The prostate cancer diagnosis product, combined with the current screening method, can potentially prevent hundreds of prostate biopsies annually. Life Length launched it on the market in 2022. It is a test that provides doctors with a useful and rapid tool for clinical decision-making.
This test is a minimally invasive procedure that only requires a blood sample from the patient.
Other studies
Lung cancer -
Life Length is working on developing an algorithm to detect patients at risk for lung cancer. With its telomere measurement platform, the test could potentially fill the gap in lung cancer screening techniques in a routine, minimally invasive, and low-cost way. Life Length was awarded the European Seal of Excellence by the European Commission for its project proposal in lung cancer research.
Childhood cancer -
Life Length has conducted the largest childhood cancer research project to date, obtaining samples from about 100 children with cancer and healthy children. The aim of the study was to help oncology and hematology specialists make better decisions and increase the chances of these children beating cancer. Life Length has carried out these studies with the support of one of the most important children's hospitals in Spain, the Hospital Infantil Universitario Niño Jesús.
More studies and projects
Brain Age: Environmental influences in prenatal life have a major impact on brain aging and age-associated brain disorders. Aging is considered a major risk factor for most neurodegenerative diseases, such as Alzheimer's or Parkinson's disease.
EuroBATS: A study that uses both genetic and biological approaches, examining 8,000 identical twins to identify markers of aging. Telomere length measurement is used in this process.
Frailomic: Assesses the utility of biomarkers to characterize elderly individuals at risk of frailty, its progression to disability, and its overall health and well-being consequences. The main objective is to detect and prevent frailty before it develops.
Accreditations
Life Length is considered the most accredited clinical laboratory in Spain.
ISO 15189 – Granted by the International Organization for Standardization (ISO), which develops and publishes international standards.
Rhode Island Department of Health – A state government agency helping to prevent diseases by protecting and promoting health and safety.
License: The center of Health Facilities and Regulation authorized Life Length to conduct and maintain an Out of State Clinical Laboratory in conformity with RIGL C23-16.2.
CMS – Centers for Medicare & Medicaid Services. Their objective would be to strengthen health equity, expand coverage, and improve health outcomes.
License: Pursuant to Section 353 of the Public Health Services Act (42 U.S.C. 263a) as revised by the Clinical Laboratory Improvement Amendments (CLIA).
A2LA – The American Association for Laboratory Accreditation provides comprehensive services in laboratory accreditation and laboratory-related training.
CDPH – The California Department of Public Health is responsible for the public health of California.
Maryland Department of Health – The department supports and improves the health and safety through disease prevention, access to care, quality management, and community engagement.
License: Pursuant to the provisions of TITLE 17, subtitle 2, Health-General Article 17-201 et seq., Annotated Code of Maryland
Pennsylvania Department of Health – The department's objective is to promote healthy behaviors, prevent injury and disease, and assure the safe delivery of quality health care.
License: Pursuant of the act of September 26, 1951, P.L., 1539 as amended, a Permit to operate a Clinical Laboratory.
Salud Madrid– The Servicio Madrileño de Salud is responsible for the system of public health services in the Community of Madrid. This public provider accredits the extraction of samples in non-health organizations.
License: C.2.5.6 Centro de diagnóstico con unidades de U.72 Obtención de muestras, U.73 Análisis clínicos y U.74 Bioquímica clínica
Scientific publications
Life Length has published to date the following articles:
In addition, numerous clients of Life Length have published articles based on the results for work performed by the company demonstrating the uniqueness of the TAT® and related technologies:
References
Biotechnology companies
Companies based in Madrid
Spanish companies established in 2010 | Life Length | ["Engineering", "Biology"] | 1,779 | ["Biotechnology organizations", "Biotechnology companies"] |
54,720,086 | https://en.wikipedia.org/wiki/Vortex%20flowmeter | A Vortex flowmeter is a type of flowmeter used for measuring fluid flow rates in an enclosed conduit.
Composition of vortex flowmeter
A vortex flowmeter has the following components: A flow sensor operable to sense pressure variations due to vortex-shedding of a fluid in a passage and to convert the pressure variations to a flow sensor signal, in the form of an electrical signal; and a signal processor operable to receive the flow sensor signal and to generate an output signal corresponding to the pressure variations due to vortex-shedding of the fluid in the passage.
Working principle
When the medium flows past the bluff body at sufficient speed, alternating vortices are shed from the two sides of the bluff body, forming a "von Kármán vortex street". Because the vortices are shed alternately from either side of the generator, pressure pulsations arise on both sides of it, subjecting the detector to alternating stress. Under this alternating stress, the piezoelectric element encapsulated in the detection probe body generates an alternating charge signal with the same frequency as the vortex shedding. The frequency of these pulses is directly proportional to the flow rate. The signal is amplified by a pre-amplifier and then sent to the intelligent flow totalizer for processing.
Within a certain range of Reynolds number (2×10⁴–7×10⁶), the relationship among the vortex shedding frequency, the fluid velocity, and the width of the vortex generator's flow-facing surface can be expressed by the following equation:

$f = St \cdot \frac{v}{d}$

where $f$ is the shedding frequency of the Kármán vortex street, $St$ is the Strouhal number, $v$ is the flow velocity, and $d$ is the width of the triangular cylinder facing the flow.
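Rearranging the Strouhal relation f = St·v/d for velocity shows how a flowmeter's electronics convert a measured shedding frequency into a flow rate. The numbers below (Strouhal number, bluff-body width, frequency, pipe cross-section) are hypothetical illustrative values:

```python
St = 0.27        # Strouhal number, roughly constant over the stated Reynolds range
d = 0.01         # m, width of the bluff body facing the flow (hypothetical)
f = 270.0        # Hz, measured vortex-shedding frequency (hypothetical)

v = f * d / St   # m/s, flow velocity recovered from f = St * v / d

pipe_area = 3.14159e-3  # m^2, hypothetical pipe cross-section
Q = v * pipe_area       # m^3/s, volumetric flow rate

print(v, Q)
```

Because St is nearly constant over the working Reynolds-number range, the frequency-to-velocity conversion is linear, which is why the meter needs no moving parts.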
Industrial applications
The vortex flowmeter is a broad-spectrum flow meter that can be used for metering, measurement and control of most steam, gas and liquid flows, offering wide medium versatility, high stability and high reliability, with no moving parts, a simple structure and a low failure rate. The vortex flowmeter is relatively economical because of its simple flow measurement system and ease of maintenance. It is widely used in heavy industrial applications, power facilities, and energy industries, particularly in steam processes.
See also
References
Flow meters | Vortex flowmeter | ["Chemistry", "Technology", "Engineering"] | 433 | ["Measuring instruments", "Flow meters", "Fluid dynamics"] |
54,722,182 | https://en.wikipedia.org/wiki/2-Hydroxy-3-morpholinopropanesulfonic%20acid | MOPSO is a zwitterionic organic chemical buffering agent; one of Good's buffers. MOPSO and MOPS (3-morpholinopropanesulfonic acid) are chemically similar, differing only in the presence of a hydroxyl group on the C-2 of the propane moiety. It has a useful pH range of 6.5-7.9 in the physiological range, making it useful for cell culture work. It has a pKa of 6.9 with ΔpKa/°C of -0.015 and a solubility in water at 0°C of 0.75 M.
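The quoted pKa and temperature coefficient let one estimate the working pKa at another temperature, and the Henderson–Hasselbalch equation then gives the base/acid ratio needed for a target pH. A sketch, assuming the quoted pKa of 6.9 refers to 25 °C (an assumption; the reference temperature is not stated above):

```python
pKa_25 = 6.9      # quoted pKa (reference temperature assumed to be 25 C)
dpKa_dT = -0.015  # pKa change per degree C

def pKa_at(temp_c, ref_c=25.0):
    """Temperature-corrected pKa, assuming a linear coefficient."""
    return pKa_25 + dpKa_dT * (temp_c - ref_c)

def base_acid_ratio(pH, pKa):
    """Henderson-Hasselbalch: pH = pKa + log10([base]/[acid])."""
    return 10 ** (pH - pKa)

pKa_37 = pKa_at(37.0)               # ~6.72 at physiological temperature
ratio = base_acid_ratio(7.0, pKa_37)  # [base]/[acid] needed for pH 7.0

print(round(pKa_37, 2), round(ratio, 2))
```

The negative temperature coefficient means the buffer becomes slightly more acidic as an incubator warms, which matters when titrating media at room temperature for use at 37 °C.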
MOPSO has been used as a buffer component for:
Copper analysis via:
Flow injection micellar technique of the catalytic reaction of 3-methyl-2-benzothiazolinone hydrazone and N-ethyl-N-(2-hydroxy-3-sulfopropyl)-3,5-dimethoxyaniline
Electrospray ionization quadrupole time-of-flight mass spectroscopy to measure MOPSO-copper chelated complexes
Discontinuous gel electrophoresis on rehydratable polyacrylamide gels
Buffered charcoal yeast extract agar
Fixing cells in urine in a buffered alcohol
Testing crude oil bioremediation products in marine environments
See also
MOPS
MES
CAPS
CHES
References
Zwitterions
4-Morpholinyl compounds
Sulfonic acids
Secondary alcohols | 2-Hydroxy-3-morpholinopropanesulfonic acid | ["Physics", "Chemistry"] | 313 | ["Matter", "Functional groups", "Zwitterions", "Sulfonic acids", "Ions"] |
54,723,496 | https://en.wikipedia.org/wiki/NGC%203675 | NGC 3675 is a spiral galaxy located in the constellation Ursa Major. It is located at a distance of about 50 million light years from Earth, which, given its apparent dimensions, means that NGC 3675 is about 100,000 light years across. It was discovered by German-British astronomer William Herschel on 14 January 1788. NGC 3675 belongs to the Ursa Major Cluster, part of the Virgo Supercluster.
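The quoted diameter follows from the small-angle relation D ≈ θ·d: a 100,000-light-year disk at 50 million light years subtends about 6.9 arcminutes, of the same order as the galaxy's catalogued apparent size. The figures below are the article's rounded values:

```python
import math

distance_ly = 50e6    # distance to NGC 3675, light years (rounded)
diameter_ly = 100e3   # physical diameter, light years (rounded)

theta_rad = diameter_ly / distance_ly          # small-angle approximation
theta_arcmin = math.degrees(theta_rad) * 60.0  # radians -> arcminutes

print(round(theta_arcmin, 1))  # ~6.9 arcminutes
```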
It hosts a low-ionization nuclear emission-line region (LINER). In the nucleus there is a supermassive black hole with an estimated mass of 10–39 million solar masses, based on the intrinsic velocity dispersion as measured by the Hubble Space Telescope. Although the galaxy was reported to have a strong bar visible in infrared images, there has been no indication of a bar in further observations. Its spiral disk is of type III and there is a dust structure which is more prominent to the east. The galaxy features two ring structures, with diameters of 1.62 and 2.42 arcminutes. The spiral arms are tightly wound and form an inner pseudoring, and they continue for one revolution outside the ring. The outer arms are very patchy and filamentary.
One supernova has been observed in NGC 3675: SN 1984R (type unknown, mag. 13) was discovered by Kaoru Ikeya on 2 December 1984.
Gallery
See also
List of NGC objects (3001–4000)
References
External links
Unbarred spiral galaxies
Ursa Major
3675
06439
035164
+07-24-004
11234+4351
17880114
Discoveries by William Herschel | NGC 3675 | ["Astronomy"] | 336 | ["Ursa Major", "Constellations"] |
54,723,517 | https://en.wikipedia.org/wiki/Bose%E2%80%93Einstein%20condensation%20of%20polaritons | Bose–Einstein condensation of polaritons is a growing field in semiconductor optics research, which exhibits spontaneous coherence similar to a laser, but through a different mechanism. A continuous transition from polariton condensation to lasing can be made similar to that of the crossover from a Bose–Einstein condensate to a BCS state in the context of Fermi gases. Polariton condensation is sometimes called “lasing without inversion”.
Overview
Polaritons are bosonic quasiparticles which can be thought of as dressed photons. In an optical cavity, photons have an effective mass, and when the optical resonance in a cavity is brought near in energy to an electronic resonance (typically an exciton) in a medium inside the cavity, the photons become strongly interacting, and repel each other. They therefore act like atoms which can approach equilibrium due to their collisions with each other, and can undergo Bose-Einstein condensation (BEC) at high density or low temperature. The Bose condensate of polaritons then emits coherent light like a laser. Because the mechanism for the onset of coherence is the interactions between the polaritons, and not the optical gain that comes from inversion, the threshold density can be quite low.
History
The theory of polariton BEC was first proposed by Atac Imamoglu and coauthors including Yoshihisa Yamamoto. These authors claimed observation of this effect in a subsequent paper, but this was eventually shown to be standard lasing. In later work in collaboration with the research group of Jacqueline Bloch, the structure was redesigned to include several quantum wells inside the cavity to prevent saturation of the exciton resonance, and in 2002 evidence for nonequilibrium condensation was reported, including photon-photon correlations consistent with spontaneous coherence. Later experimental groups have used essentially the same design. In 2006, the group of Benoit Deveaud and coauthors reported the first widely accepted claim of nonequilibrium Bose–Einstein condensation of polaritons, based on measurement of the momentum distribution of the polaritons. Although the system was not in equilibrium, a clear peak in the ground state of the system was seen, a canonical prediction of BEC. Both of these experiments created a polariton gas in an uncontrolled free expansion. In 2007, the experimental group of David Snoke demonstrated nonequilibrium Bose–Einstein condensation of polaritons in a trap, similar to the way atoms are confined in traps for Bose–Einstein condensation experiments. The observation of polariton condensation in a trap was significant because the polaritons were displaced from the laser excitation spot, so that the effect could not be attributed to a simple nonlinear effect of the laser light. Jacqueline Bloch and coworkers observed polariton condensation in 2009, after which many other experimentalists reproduced the effect (for reviews see the bibliography). Evidence for polariton superfluidity was reported by Alberto Amo and coworkers, based on the suppressed scattering of the polaritons during their motion.
This effect has been seen more recently at room temperature, which is the first evidence of room temperature superfluidity, albeit in a highly nonequilibrium system.
Equilibrium polariton condensation
The first clear demonstration of Bose–Einstein condensation of polaritons in equilibrium was reported by a collaboration of David Snoke, Keith Nelson, and coworkers, using high quality structures fabricated by Loren Pfeiffer and Ken West at Princeton. Prior to this result, polariton condensates were always observed out of equilibrium. All of the above studies used optical pumping to create the condensate. Electrical injection, which would enable a practical polariton laser device, was demonstrated in 2013 by two groups.
Nonequilibrium condensation
Polariton condensates are the most well-studied example of Bose–Einstein condensation of quasiparticles. Because most of the experimental work on polariton condensates used structures with very short polariton lifetime, a large body of theory has addressed the properties of nonequilibrium condensation and superfluidity. In particular, Jonathan Keeling, Iacopo Carusotto, and C. Ciuti have shown that although a condensate with dissipation is not a “true” superfluid, it still has a critical velocity for the onset of superfluid effects.
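In the equilibrium theory of a weakly interacting condensate, the Landau critical velocity equals the Bogoliubov sound speed, c_s = sqrt(g·n/m); the dissipative results cited above modify but do not remove this threshold. A rough order-of-magnitude sketch, where the polariton mass and interaction (mean-field) energy are assumed typical values, not measurements from the cited experiments:

```python
from math import sqrt

# Illustrative order-of-magnitude values (assumed, not from the cited papers):
m_e = 9.1093837015e-31        # electron mass, kg
m_pol = 1e-4 * m_e            # assumed lower-polariton effective mass, kg
gn = 0.5 * 1.602176634e-22    # assumed mean-field interaction energy g*n: 0.5 meV in J

# Bogoliubov sound speed; in the equilibrium theory this is also the Landau
# critical velocity below which flow is dissipationless.
c_s = sqrt(gn / m_pol)
print(f"c_s ~ {c_s:.1e} m/s")
```

The tiny polariton mass pushes this speed to the 10^5–10^6 m/s range, far above the sound speed of atomic condensates.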
See also
Bose-Einstein condensation of quasiparticles
References
Further reading
Universal Themes of Bose–Einstein Condensation, Cambridge University Press (2017).
John Robert Schrieffer, Theory of Superconductivity (1964).
Bose–Einstein Condensation, Cambridge University Press (1996).
Bose–Einstein condensates
Quasiparticles | Bose–Einstein condensation of polaritons | ["Physics", "Chemistry", "Materials_science"] | 1,026 | ["Bose–Einstein condensates", "Matter", "Phases of matter", "Condensed matter physics", "Quasiparticles", "Subatomic particles"] |
54,723,812 | https://en.wikipedia.org/wiki/Blue-winged%20amazon | The blue-winged amazon (Amazona gomezgarzai) is a proposed Central American species of parrot living in the Yucatan Peninsula, Mexico. It was described in 2017 in the journal PeerJ; however, its existence as a distinct wild species native to the Yucatan Peninsula has been questioned. A critique published in the journal Zootaxa identified numerous weaknesses with the description and suggested that the most plausible hypothesis was that the two specimens on which the description was based were hybrids.
Description
The proposed species was described as having an average body length of and a body weight of . It is characterized by distinct sexual dimorphism, with males larger than females. Depending on the sex, wing length is and the tail length is . Most of the body is covered with green plumage. However, these parrots differ in size and detail of the colors of the plumage from other Amazons found on the Yucatan peninsula (i.e., the white-fronted amazon – Amazona albifrons and the Yucatan amazon – Amazona xantholora).
The forehead and feathers around the male's eye are red. In females, red feathers are restricted to the forehead. Remiges (primaries and secondaries) are blue and green. The tail feathers are green with blue and green tips and red spots on the inner side.
Call
They produce distinctive loud, sharp and repetitive sounds that resemble those of a hawk, a natural predator of these birds. It is possible that this sound is used to alert other birds. The syllables are 3–5 times longer than those of the white-fronted amazon and the Yucatan amazon.
Behaviour
They live in groups of up to a dozen individuals. Couples and their adult offspring tend to remain together as a group.
Taxonomy
Genetic analyses based on mitochondrial DNA indicate a close affinity with the white-fronted amazon and suggest that the blue-winged amazon could be a subspecies of it. However, this may be due to introgression of mitochondrial DNA or to too little variation in the loci examined. It is possible that the new species evolved relatively recently (about 120,000 years ago) within one of the white-fronted amazon populations.
The proposed species was reported as discovered in 2014 by veterinarian and researcher at the Autonomous University of Nuevo León (UANL), Miguel Ángel Gómez Garza, during one of his expeditions on the Yucatan Peninsula and was described by Tony Silva and colleagues. The species has yet to be recognized by leading taxonomic authorities including the International Union for the Conservation of Nature (IUCN), the International Ornithologists' Union (IOU), or the American Ornithological Society (AOS).
Controversy
One of the authors, Tony Silva, was prosecuted in Chicago in March 1996 for attempting to illegally import hundreds of exotic birds into the United States from Mexico and Central America. In addition, the original collector was reportedly an assessor in 2019 to Roberto Chavarria Gallegos, the then director of Parks and Wildlife of Nuevo Leon, who was denounced for the alleged illegal acquisition of six flamingos for the "La Pastora" zoo from a suspected phantom wildlife trader.
References
Further reading
Amazon parrots
Birds described in 2017
Controversial parrot taxa | Blue-winged amazon | ["Biology"] | 639 | ["Biological hypotheses", "Controversial parrot taxa", "Hypothetical species"] |
54,724,282 | https://en.wikipedia.org/wiki/NGC%207096 | NGC 7096 is a grand-design spiral galaxy located about 130 million light-years away in the constellation of Indus. NGC 7096 is also part of a group of galaxies that contains the galaxy NGC 7083. NGC 7096 was discovered by astronomer John Herschel on August 31, 1836.
See also
NGC 7001
References
External links
Unbarred spiral galaxies
Indus (constellation)
7096
67168
IC objects
Astronomical objects discovered in 1836 | NGC 7096 | ["Astronomy"] | 95 | ["Indus (constellation)", "Constellations"] |
54,724,614 | https://en.wikipedia.org/wiki/Watching-eye%20effect | The watching-eye effect says that people behave more altruistically and exhibit less antisocial behavior in the presence of images that depict eyes, because these images insinuate that they are being watched. Eyes are strong signals of perception for humans. They signify that our actions are being seen and paid attention to even through mere depictions of eyes.
These effects are so pronounced that mere depictions of eyes are enough to trigger them: people need not actually be watched, because a simple photograph of eyes can elicit the feeling of being watched and shift behavior toward the pro-social and away from the antisocial. Empirical psychological research has repeatedly shown that the visible presence of images depicting eyes nudges people towards slightly, but measurably, more honest and more pro-social behavior.
The concept is part of the psychology of surveillance and has implications for the areas of crime reduction and prevention without increasing actual surveillance, just by psychological measures alone. By simply inserting signs depicting eyes and leading others to believe they are being watched, crime can be reduced, as it leads to behavior that is more socially acceptable.
Similar effects
The effect differs from the psychic staring effect in that the latter describes the subjective feeling of being watched, whereas the watching-eye effect typically operates at a subconscious level, influencing behaviour without any explicit sense of being observed.
Evidence of effects on behavior
Effects on pro-social behavior
There is evidence that the presence of images of eyes causes people to behave pro-socially. Pro-social behavior is acting in a way, or with the intent, that benefits others. Two forms of motivation support this. The first is negative motivation: people want to avoid behavior that is wrong or violates norms, in order to keep up a positive social image, or to be seen improving that image rather than worsening it. The second is positive motivation: people expect a reward or future benefit, believing that if they behave under watching eyes in a way that benefits others, they are likely to be paid back for it in the future.
Pro-social experiments
Certain studies have shown that under the influence of eyes people behave more honestly. In control groups without images of eyes present, people were more likely to behave antisocially and to lie for the benefit of others. Under watching eyes, people lean toward honesty rather than acting generously, in order to keep a good image and avoid violating norms; in these situations honesty is often chosen because it is seen as the most pro-social behavior.
Further studies show that pro-social behavior is more likely under watching eyes. People were more likely to share money in economic games when presented with images of eyes. People were also more likely to pick up trash at bus stops and to clean up after themselves in a cafeteria, less likely to commit bicycle theft, and much more likely to pay the full amount for their coffee on days when images of eyes were put up nearby.
In an experiment on littering funded by the School of Psychology at Newcastle University, it was found that places that already had trash on the ground tended to have more littering, showing that people tend to behave in ways that seem socially acceptable. Likewise, it was discovered that images of watching eyes reduced littering; however, the reduction was mainly present when larger groups of people were also around. The findings of this study added to the idea that watching eyes reduce antisocial behavior and encourage people to behave more pro-socially.
Donation experiment
In situations where images of eyes were present, people were also more likely to be generous with donations and to give more. One study testing this was done at the University of Virginia by Caroline Kelsey, at a children's museum with a donation box at the front desk. Data was collected in this setting for 28 weeks, covering more than 34,100 visitors over that period. Each week, the sign over the box, which usually read "Donations would be appreciated", was changed to feature primarily images of eyes or control images of inanimate objects such as chairs, or of noses, alongside some wording. The number of visitors and the total donations were recorded each week. By the end of the study it was found that patrons donated more when the signs showed eyes rather than other objects.
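The logic of such a between-conditions comparison can be sketched with a simple permutation test. The weekly donation-per-visitor figures below are hypothetical numbers invented purely for illustration; they are not the data from the Kelsey study:

```python
import random

# Hypothetical weekly donation-per-visitor figures (dollars), invented purely
# for illustration -- these are NOT the data from the Kelsey study:
eyes     = [0.41, 0.38, 0.45, 0.52, 0.40, 0.47, 0.44, 0.50, 0.39, 0.46]
controls = [0.33, 0.36, 0.31, 0.40, 0.35, 0.30, 0.37, 0.34, 0.32, 0.38]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(eyes) - mean(controls)

# Permutation test: shuffle the condition labels many times and count how
# often a difference at least as large as the observed one arises by chance.
random.seed(0)
pooled = eyes + controls
n_eyes = len(eyes)
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[:n_eyes]) - mean(pooled[n_eyes:]) >= observed:
        extreme += 1
p_value = extreme / trials

print(f"observed difference = {observed:.3f} $/visitor, p ~ {p_value:.4f}")
```

A small p-value would indicate that the eyes/no-eyes difference is unlikely to be due to week-to-week chance variation alone.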
More on studies
Other studies related to the watching-eye effect show that people are more cooperative and self-aware when their identity is exposed than when they act anonymously. They act more respectfully and appropriately because their reputation is at risk when they are, or feel they are, being watched by others. Even in studies that assured participants their actions were anonymous, participants were still more generous because the eyes made them feel identified.
Some studies argue that it may not be the effect of these eyes that gives people an incentive to be more generous, but rather the number of people around them, which makes them feel peer pressure to conform to more pro-social behavior.
See also
Decision-making
Evil eye
Eye contact
Fake security camera
Gaze
Hawthorne effect
Security theater
Situation awareness
Subject-expectancy effect
References
"No effect of ‘watching eyes’: An attempted replication and extension investigating individual differences", Rotella et al 2021
Psychological effects
Cognition
Cognitive biases
Cognitive psychology | Watching-eye effect | ["Biology"] | 1,112 | ["Behavioural sciences", "Behavior", "Cognitive psychology"] |
77,656,313 | https://en.wikipedia.org/wiki/National%20Satellite%20Test%20Facility | The National Satellite Test Facility (NSTF) is a testing site for artificial satellites, located in Harwell, Oxfordshire, in the United Kingdom. It is the first dedicated satellite testing facility in the UK. Construction began in 2018 and was completed in 2024. The facility opened in May 2024. It was built through a collaboration between RAL Space (part of the Rutherford Appleton Laboratory) and the National Physical Laboratory.
Its first customers were Airbus Defence and Space UK.
References
Buildings and structures in Oxfordshire
Space programme of the United Kingdom
Science and technology in the United Kingdom | National Satellite Test Facility | ["Astronomy"] | 113 | ["Outer space stubs", "Outer space", "Astronomy stubs"] |
77,657,654 | https://en.wikipedia.org/wiki/Self-differentiation | Self-differentiation is a psychological concept that refers to the ability of an individual to maintain their sense of self while engaging in relationships with others. This concept is deeply rooted in family systems theory, particularly in the work of Murray Bowen. It was later expanded upon and popularized in leadership contexts by Edwin Friedman. Self-differentiation is crucial for understanding how individuals navigate emotional interdependence within families, social groups, and organizational settings.
Overview
Self-differentiation involves balancing two fundamental life forces: the drive for individuality and the drive for togetherness. Individuals with high levels of self-differentiation can maintain their personal values, beliefs, and identity even in the face of pressure from others or within close relationships. Conversely, those with low self-differentiation may struggle to distinguish their own thoughts and feelings from those of others, leading to emotional dependency and reactivity.
Theoretical background
Family systems theory
Self-differentiation is a core concept within family systems theory, developed by Murray Bowen in the mid-20th century. Bowen's theory posits that individuals are best understood within the context of their family relationships, which are seen as interconnected and interdependent. According to Bowen, each family member has a specific level of self-differentiation that influences their behavior within the family system.
Emotional triangles
Bowen also introduced the concept of emotional triangles—the idea that a two-person relationship often becomes unstable and involves a third person to reduce anxiety. The level of self-differentiation in each individual influences how they manage these triangles. Individuals with higher self-differentiation are better able to manage anxiety without triangulating, thus maintaining more stable relationships.
The multigenerational transmission process
The multigenerational transmission process is another key concept in Bowen's theory. It refers to how patterns of emotional functioning are passed down through generations. Levels of self-differentiation can be transmitted from parents to children, often perpetuating either high or low levels of differentiation across generations.
Edwin Friedman’s contribution
Edwin Friedman, a prominent rabbi, therapist, and leadership consultant, expanded Bowen's concept of self-differentiation beyond the family system to organizational and leadership contexts. In his influential book, *A Failure of Nerve*, Friedman applied the principles of self-differentiation to leadership, arguing that effective leaders are those who maintain a clear sense of self and are not easily swayed by the anxiety and reactivity of their organization.
Characteristics of self-differentiation
Emotional regulation
Individuals with high self-differentiation have strong emotional regulation skills. They can separate their own emotions from those of others, allowing them to respond thoughtfully rather than reactively in emotionally charged situations. This is crucial not only in family dynamics, as Bowen emphasized, but also in leadership scenarios, as stressed by Friedman.
Autonomy and connection
Self-differentiated individuals can balance autonomy with connection. They maintain close relationships without losing their sense of self and assert their needs and desires without fearing the loss of connection with others. This balance is essential in both personal and professional relationships, enabling individuals to function effectively in diverse environments.
Tolerance for differences
A high level of self-differentiation is associated with greater tolerance for differences in relationships. These individuals can accept that others may have different opinions, feelings, or needs, and they do not feel compelled to conform to maintain harmony. In organizational settings, as Friedman noted, this quality allows leaders to navigate complex group dynamics and encourage diversity of thought.
Implications in psychotherapy and leadership
Psychotherapy
Self-differentiation is a key focus in various therapeutic approaches, particularly those rooted in family therapy and systems therapy. Therapists may work with clients to increase their level of self-differentiation to improve their relationships and emotional well-being. Techniques might include exploring family-of-origin issues, enhancing emotional awareness, and developing assertiveness.
Leadership and organizations
Friedman's extension of self-differentiation into leadership theory has significant implications for how leaders manage stress, conflict, and change. A self-differentiated leader can resist the pull of organizational anxiety, make clear decisions, and foster an environment where others can develop their own self-differentiation. This approach is increasingly recognized in leadership training and organizational development.
Criticism and controversies
While the concept of self-differentiation is widely respected, some critics argue that it may not fully account for cultural differences in family dynamics or the impact of societal structures on individual behavior. Additionally, the emphasis on self-differentiation can sometimes be seen as overly individualistic, particularly in collectivist cultures where interdependence is highly valued.
In leadership contexts, some argue that Friedman's application of self-differentiation may overlook the importance of empathy and collaboration, focusing too much on the leader's autonomy.
References
External links
Understanding Bowen Family Systems Theory | Psychology Today
Bowen Family Systems Theory - A Guide to Navigating Family Relationships | Carla Corelli
Introduction to Bowen Theory | The Bowen Center
Differentiation Of Self - Murray Bowen | My People Patterns
Applying Edwin Friedman's *A Failure of Nerve* to the Contemporary Church | London Seminary
Crucible Four Points of Balance | Crucible Four Points
Summary of Edwin Friedman’s ‘A Failure of Nerve’ | Alastair's Adversaria
Psychological concepts
Psychotherapy
Organizational behavior
Cognitive behavioral therapy
Social psychology | Self-differentiation | ["Biology"] | 1,033 | ["Behavior", "Organizational behavior", "Human behavior"] |
77,657,810 | https://en.wikipedia.org/wiki/%282R%2C3R%29-Hydroxybupropion | (2R,3R)-Hydroxybupropion, or simply (R,R)-hydroxybupropion, is the major metabolite of the antidepressant, smoking cessation, and appetite suppressant medication bupropion. It is the (2R,3R)-enantiomer of hydroxybupropion, which in humans occurs as a mixture of (2R,3R)-hydroxybupropion and (2S,3S)-hydroxybupropion (radafaxine). Hydroxybupropion is formed from bupropion mainly by the cytochrome P450 enzyme CYP2B6. Levels of (2R,3R)-hydroxybupropion are dramatically higher than those of bupropion and its other metabolites during bupropion therapy.
Exposure with bupropion
Bupropion is substantially converted into metabolites during first-pass metabolism with oral administration and levels of its metabolites are much higher than those of bupropion itself. Exposure to (2R,3R)-hydroxybupropion is 29-fold higher than to (R)-bupropion and exposure to (2S,3S)-hydroxybupropion is 3.7-fold higher than to (S)-bupropion. Other metabolites that circulate at higher concentrations than those of bupropion include threohydrobupropion and to a lesser extent erythrohydrobupropion.
The metabolism of bupropion and its metabolites is stereoselective. During bupropion therapy, exposure to (R)-bupropion is 2- to 6-fold higher than to (S)-bupropion and exposure to (2R,3R)-hydroxybupropion is 20- to 65-fold higher than to (2S,3S)-hydroxybupropion. Hence, (2R,3R)-hydroxybupropion is a major metabolite of bupropion and (2S,3S)-hydroxybupropion is a minor metabolite.
In contrast to humans, only low levels of hydroxybupropion or (2R,3R)-hydroxybupropion occur with bupropion in rats. This highlights substantial species differences in the pharmacokinetics of bupropion between animals and humans. These differences in turn may account for differences in the pharmacodynamic effects of bupropion between species.
Pharmacology
Pharmacodynamics
(2R,3R)-Hydroxybupropion is much less pharmacologically active as a monoamine reuptake inhibitor than bupropion or (2S,3S)-hydroxybupropion. Conversely, its potency as a negative allosteric modulator of nicotinic acetylcholine receptors is variable but overall more similar to that of bupropion and (2S,3S)-hydroxybupropion.
Additional studies have characterized the affinities (Ki) of bupropion and the hydroxybupropion enantiomers at the monoamine transporters, as well as affinities and potencies (IC50) using non-human proteins. In contrast to bupropion and (2S,3S)-hydroxybupropion, racemic hydroxybupropion, using rat proteins, has been found to act as a selective norepinephrine reuptake inhibitor (IC50 = 1,700 nM) with no apparent inhibition of dopamine reuptake (IC50 > 10,000 nM). Normally, activity with racemic mixtures is expected to be closer to that of the active enantiomer than to the inactive enantiomer. The reasons for the discrepancy in the case of racemic hydroxybupropion are unclear. In any case, it was suggested that (2R,3R)-hydroxybupropion might be acting as a negative allosteric modulator of the binding of (2S,3S)-hydroxybupropion to the dopamine transporter.
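The relation between the two measures quoted above (Ki and IC50) is given, for a competitive inhibitor, by the Cheng–Prusoff equation, Ki = IC50 / (1 + [S]/Km). A sketch using the reported NET IC50; the substrate concentration and Km below are placeholder assumptions for illustration, not the conditions of the original assay:

```python
def cheng_prusoff_ki(ic50_nM, substrate_nM, km_nM):
    """Ki of a competitive inhibitor from an assay IC50 (Cheng-Prusoff)."""
    return ic50_nM / (1 + substrate_nM / km_nM)

# IC50 for norepinephrine reuptake inhibition by racemic hydroxybupropion
# (rat protein) as quoted in the text; the substrate concentration and Km
# below are placeholder assumptions, not the original assay conditions.
ic50 = 1700.0
ki = cheng_prusoff_ki(ic50, substrate_nM=50.0, km_nM=500.0)
print(f"Ki ~ {ki:.0f} nM")
```

Because the correction factor depends on the assay's substrate concentration relative to Km, IC50 values from different assay conditions are not directly comparable, which is one reason both measures are reported.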
Bupropion and (2S,3S)-hydroxybupropion are substantially more potent than (2R,3R)-hydroxybupropion in various rodent behavioral tests, such as the forced swim test (an assay of antidepressant-like activity). However, sufficient doses of bupropion, (2S,3S)-hydroxybupropion, and (2R,3R)-hydroxybupropion all produce full methamphetamine-like effects in monkeys (1 mg/kg, 3 mg/kg, and 10 mg/kg, respectively). Bupropion produces nicotine-like effects in rodents and (2S,3S)-hydroxybupropion partially substitutes for nicotine. In contrast, (2R,3R)-hydroxybupropion does not substitute for nicotine and dose-dependently antagonizes the effects of nicotine by up to 50%.
(2R,3R)-Hydroxybupropion is a strong CYP2D6 inhibitor similarly to bupropion. (2R,3R)-Hydroxybupropion alone has been estimated to account for approximately 65% of the total in vivo CYP2D6 inhibition of bupropion, whereas threohydrobupropion accounted for 21% and erythrohydrobupropion accounted for 9% (with 5% remaining or unaccounted for).
Pharmacokinetics
Hydroxybupropion, including both (2R,3R)-hydroxybupropion and (2S,3S)-hydroxybupropion, is mainly formed from bupropion by the cytochrome P450 enzyme CYP2B6. However, CYP2C19, CYP3A4, CYP1A2, and CYP2E1 appear to play a minor role.
CYP2B6 is highly polymorphic and is subject to high interindividual variability of approximately 100-fold. This may result in large interindividual differences in the metabolism of bupropion into hydroxybupropion and the effects of bupropion. However, clearance of bupropion is not affected in different CYP2B6 metabolizer phenotypes. This suggests that other enzymes compensate in the metabolism of bupropion in the context of reduced CYP2B6 function. The moderate CYP2B6 inducer rifampicin increased the clearance of (2R,3R)-hydroxybupropion and decreased its exposure and half-life by approximately 50%.
The elimination half-life of (2R,3R)-hydroxybupropion is 19 to 26 hours.
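Assuming first-order elimination, this half-life range translates directly into an elimination rate constant and a rule-of-thumb time to steady state; a quick sketch:

```python
from math import exp, log

# Reported elimination half-life range for (2R,3R)-hydroxybupropion: 19-26 h.
for t_half in (19.0, 26.0):
    k = log(2) / t_half           # first-order elimination rate constant, 1/h
    remaining_24h = exp(-k * 24)  # fraction of a dose still present after 24 h
    t_ss = 5 * t_half             # rule-of-thumb time to reach steady state
    print(f"t1/2 = {t_half:>4} h: {remaining_24h:.0%} remains after 24 h; "
          f"~{t_ss:.0f} h (~{t_ss / 24:.1f} days) to steady state")
```

Roughly half of the metabolite persists from one day to the next, consistent with its accumulation to levels well above those of the parent drug during repeated dosing.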
Chemistry
Hydroxybupropion has two chiral centers. As a result, there are four possible enantiomers of the compound. However, only (2R,3R)-hydroxybupropion and (2S,3S)-hydroxybupropion are formed in humans. (2R,3S)- and (2S,3R)-Hydroxybupropion do not occur in humans presumably due to steric hindrance precluding their formation.
References
Antidepressants
Cathinones
3-Chlorophenyl compounds
CYP2D6 inhibitors
Enantiopure drugs
Human drug metabolites
Morpholines
Nicotinic antagonists
Norepinephrine–dopamine reuptake inhibitors
Smoking cessation
Stimulants
Tertiary alcohols | (2R,3R)-Hydroxybupropion | ["Chemistry"] | 1,732 | ["Chemicals in medicine", "Stereochemistry", "Human drug metabolites", "Enantiopure drugs"] |
77,658,557 | https://en.wikipedia.org/wiki/Implantable%20bulking%20agent | Implantable bulking agents are self-expanding solid prostheses which are implanted in the tissues around the anal canal. It is a surgical treatment for fecal incontinence and represents a newer evolution of the similar procedure which uses perianal injectable bulking agents.
History
The implantable bulking agents represent the most recent stage of development of a similar procedure which uses perianal injectable bulking agents. That procedure in turn was developed from use of injectable bulking agents in urology to treat urinary incontinence. Many different injectable materials have been used. The biggest problem with injectable materials is that they seem to have only a temporary effect. Over time, the material degrades and may migrate away from the injection site. For example, one study investigated the outcome and ultrasound appearance of 3 of the commonly used injectable bulking agents (Durasphere, PTQ, Solesta) after an average of 7 years. The researchers reported that typically about 14% of the original volume of material was still identifiable on ultrasound, and that complete disappearance of the materials on ultrasound was correlated with poorer clinical outcomes.
Implantable bulking agents use multiple cylindrical HYEXPAN (polyacrylonitrile) implants. Marketed as "Gatekeeper" by Medtronic, Minneapolis, USA, it was first used to treat gastro-esophageal reflux disease. Production of the implant system was transferred to THD S.p.A., Correggio, Italy. Gatekeeper now has a CE marking, and was registered for the treatment of fecal incontinence in 2010. The first publication describing use of these implants in FI was in 2011. Several other publications appeared and the results were initially promising.
In the original description, Gatekeeper used four self-expandable, solid, thin cylinders. Subsequently, six prostheses were used. An advancement of the procedure, marketed as "SphinKeeper", was described in 2016. SphinKeeper uses 10 prostheses which are slightly thicker and longer than those used in the Gatekeeper implant system. One publication stated that the new-generation SphinKeeper implant system has replaced the Gatekeeper, and that the use of 10 prostheses represents a change in the paradigm of injectable and implantable bulking agents. SphinKeeper is a permanent implantable device rather than a bulking agent, and aims to create a kind of artificial neosphincter, whereas previous techniques aimed simply to augment the internal anal sphincter. The first systematic review on implantable bulking agents was published in 2022. However, no randomized placebo-controlled trials have been published yet.
Material
The implants are made of polyacrylonitrile (HYEXPAN). This material is inert, non-allergenic, non-immunogenic, nondegradable and noncarcinogenic. The material is hydrophilic, which allows the implants to slowly absorb water and change in dimensions, which occurs within 48 hours once they are implanted in the tissues.
This polyacrylonitrile material is thought to meet the criteria for the "ideal bulking agent", and therefore may overcome the disadvantages of other bulking agents.
Indications
Implantable bulking agents are indicated for passive fecal incontinence caused by internal anal sphincter dysfunction or damage, with onset at least 6 months previously. It has been recommended that this procedure should be attempted only if non-surgical options (such as pharmacologic, behavioral, and pelvic floor rehabilitation) have failed, and if injectable bulking agents were unsuccessful. Researchers are investigating the use of Gatekeeper and SphinKeeper in patients with a wider range of causes of fecal incontinence.
Contraindications have been suggested by different authors, and include:
Active perianal sepsis (infection)
Inflammatory bowel diseases which involve the anorectal region (Crohn’s disease, ulcerative colitis)
Active treatment of anal, rectal or colon cancer
Rectal bleeding of unknown cause
Rectal prolapse
Uncontrolled blood coagulation disorders
Radiotherapy involving the pelvis
Immunosuppression
Pregnant patient or patient planning pregnancy in the next 12 months.
Lesion involving >60° of the internal anal sphincter and/or >90° of the external anal sphincter, as shown on ultrasound.
Severe anal scarring.
Diabetes mellitus, pudendal neuropathy, and previous implantation of sacral nerve stimulation device are not contraindications to the use of implantable bulking agents.
Procedure
The procedure is carried out under local or regional anesthesia (spinal anaesthesia), with or without sedation, or under general anesthesia. It takes 30-40 minutes and may be done as a day case on an outpatient basis. Intravenous antibiotics may be given at the start of the procedure. The patient is usually put into the lithotomy position.
The surgeon identifies the internal anal sphincter and the intersphincteric groove while using an anal retractor such as the Eisenhammer retractor. A 2 mm incision is made in the perianal skin, 2 cm from the anal verge; this location may minimize the possibility of contamination of the wounds during bowel movements. The prostheses are implanted using a custom "gun" which consists of a delivery system and a dispenser that holds one prosthesis at a time. This device is specific to the type of implant being used: Gatekeeper or SphinKeeper. The needle (cannula/sheath) is inserted into the incision and pushed into the intersphincteric space through a short subcutaneous tunnel. It is thought that the path of this created "tunnel" should not be a straight line, in order to prevent extrusion of the prosthesis along the insertion track. The needle is advanced to a depth just beyond the level of the dentate line, corresponding to the upper part of the anal canal at the level of the puborectalis muscle. The exact position of the needle tip is confirmed by direct vision or with the guidance of endoanal ultrasound. The procedure is described as relatively simple to perform from a technical perspective, and one author stated that ultrasound guidance during placement is not necessary if the surgeon is experienced. Firing the "gun" causes the cannula to retract completely into the delivery system, leaving the prosthesis in the target location in the intersphincteric space.
It is thought that placement of the implants in the intersphincteric space pushes the external anal sphincter outwards and the internal anal sphincter inwards. This may increase the length of the sarcomeres, which theoretically increases the contractility of the muscle. In terms of physiological measurements, the resting anal pressure and the length of the high pressure zone in the anal canal may be improved. In other words, the bulking effect may improve the seal of the anal canal and the length of the anal canal.
The surgical procedure is almost identical for Gatekeeper and SphinKeeper; the two differ in the size and number of the individual prostheses. Gatekeeper uses 4-6 prostheses, while SphinKeeper uses up to 10.
Within 48 hours of implantation, the implant material absorbs water from the tissues because of its hydrophilic properties. Each prosthesis becomes thicker and shorter in shape. This rapid increase in volume allows the prostheses to self-fix in position and prevents displacement and migration (in most cases). The prostheses also become softer in consistency and compliant to external pressures, but are still able to maintain their original shape.
In the dehydrated state, Gatekeeper prostheses are thin cylinders, 2 mm in diameter and 22 mm long. After implantation, they become 6.5 mm in diameter and 17 mm long. Their volume increases by 750% from 70 mm3 to 500 mm3.
SphinKeeper prostheses are slightly thicker and longer cylinders. In the dehydrated state, SphinKeeper prostheses are 3 mm in diameter and 29 mm long. After implantation, they become 7 mm in diameter and 23 mm long. SphinKeeper implants are long enough to restore the normal length of the anal canal. They are also wide enough to make sure there is good filling ability. Therefore, SphinKeeper allows for surgical correction of larger defects of the internal anal sphincter or external anal sphincter.
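The dimensions quoted above can be sanity-checked with a back-of-the-envelope calculation treating each prosthesis as an ideal cylinder (which the real devices only approximate, especially after swelling — so the hydrated figures here are only indicative):

```python
from math import pi

def cylinder_volume_mm3(diameter_mm, length_mm):
    """Volume of an ideal cylinder, in cubic millimetres."""
    return pi * (diameter_mm / 2) ** 2 * length_mm

# Gatekeeper: 2 mm x 22 mm dry, 6.5 mm x 17 mm hydrated
gk_dry = cylinder_volume_mm3(2, 22)    # ~69 mm3, close to the quoted 70 mm3
gk_wet = cylinder_volume_mm3(6.5, 17)  # ~564 mm3 for an ideal cylinder

# SphinKeeper: 3 mm x 29 mm dry, 7 mm x 23 mm hydrated
sk_dry = cylinder_volume_mm3(3, 29)    # ~205 mm3
sk_wet = cylinder_volume_mm3(7, 23)    # ~885 mm3
```

The dry Gatekeeper figure matches the quoted 70 mm3; the ideal-cylinder hydrated figure is larger than the quoted 500 mm3, consistent with the swollen prostheses not being perfect cylinders.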
The process is repeated for each individual prosthesis, placing them into incisions made at the intersphincteric groove at equidistant intervals depending upon the total number of prostheses to be deployed. For example, if 4 prostheses are to be used, incisions may be placed at 12, 3, 6, and 9 o’clock. If 6 prostheses are to be used, incisions may be placed at 1, 3, 5, 7, 9, and 11 o’clock. The exact number of prostheses used is arbitrary, but placing 10 prostheses enables the creation of a circumferential ring of prostheses around the anal canal in the intersphincteric space. This effectively creates a situation similar to an artificial anal sphincter. One publication reported improved outcome with Gatekeeper when more prostheses were used.
Interestingly, it is thought that the exact spacing of the prostheses does not influence the outcome of the procedure, and the important factor appears to be that the prostheses are distributed equally around the anal canal. This is the case even in the presence of tears of the external anal sphincter or internal anal sphincter. Therefore, the implants are placed in the above locations for convenience of the surgeon, even for patients with a tear in a specific part of the sphincter.
The incisions are closed with resorbable sutures. Endoanal ultrasound may be used to confirm the location of each prosthesis. A course of oral antibiotics (e.g. metronidazole) may be given after the procedure. Oral laxatives (e.g. lactulose) may be given to prevent straining and constipation. Anal trauma (e.g. receptive anal intercourse) should be avoided for at least 72 hours after the procedure. Patients are usually advised to rest in bed, with the aim of reducing the risk of dislocation of the prostheses.
After healing, the implants continue to be palpable and are visible on endoanal ultrasound. Each prosthesis appears as a hyperechoic dot with a hypoechoic shadow behind it. Three-dimensional endoanal ultrasound has also been used to visualize the implants, wherein the prostheses appear as a continuous hyperechoic line.
Complications
Compared to other surgical treatment options for fecal incontinence, implantable bulking agents appear to be safe. They are therefore also suitable for elderly or frail patients. However, complications are sometimes reported. For example, acute sepsis (infection) at the implantation site has rarely been recorded.
The most important complication is displacement of a prosthesis (also referred to as migration / dislocation / dislodgement / extrusion). The rate of displacement of at least 1 of the prostheses has been reported to be as low as 0%, or as high as 91% of cases. The patient may report pain, swelling and/or no improvement in symptoms when there is prosthesis displacement. This is potentially noteworthy, since improved symptoms after use of injectable bulking agents have been attributed at least in part to the placebo effect.
Placement in the intersphincteric space is thought to be less liable to extrusion or migration of the prostheses or to other complications such as erosion, ulceration or fistula formation in the anal canal. If the implants were in the submucosal layer, they would be more vulnerable to such complications. Furthermore, displacement is less likely because of the rapid increase in size of the prostheses, allowing them to self-fix in position in most cases.
A systematic review found that in total, migration / dislodgement / dislocation was reported in 41 out of 154 patients (26.6%) across 7 studies. The same researchers reported that some kind of adverse event occurred in 48 out of 166 patients (28.9%). Sometimes, a prosthesis had to be removed, however it is possible to implant a new one in the correct position.
Effectiveness
The first systematic review on Gatekeeper (GK) and SphinKeeper (SK) was published in 2022. It combined results from 8 studies published before 2020 – a total of 166 patients. All studies were judged to be at moderate to high risk of bias. The reviewers reported that severity of fecal incontinence improved in 5 out of 7 of the studies which used the Cleveland Clinic FI Score and in 3 out of 5 of the studies which used the Vaizey score. Quality of life improved in 2 studies which measured that outcome. They concluded that GK and SK may be effective, safe and minimally invasive options for fecal incontinence in those cases where non-surgical treatments have failed. The reviewers called for controlled trials to be conducted.
References
External links
Colorectal surgery
Management of fecal incontinence
Gastroenterology
Digestive system surgery
Symptoms and signs: Digestive system and abdomen
Incontinence
Rohri Canal is a major irrigation canal in Sindh, Pakistan. It is a vital source of water for agriculture in the region. It originates from the left bank of the Indus River at the Sukkur Barrage, located in Sukkur District, Sindh. It traverses through several districts, providing irrigation to vast agricultural lands. The canal's primary flow is towards the south, irrigating districts including Sukkur, Khairpur, Naushahro Feroze, Shaheed Benazirabad, Matiari, Hyderabad, Sanghar and Badin.
It is a perennial canal, meaning it supplies water throughout the year. The Rohri Canal is part of the larger Sukkur Barrage irrigation system. The construction of the barrage itself began in 1923 and was completed in 1932, while the Rohri Canal's construction was finished before the barrage project as a whole.
References
Irrigation canals
Irrigation projects
Irrigation in Pakistan
Thiamphenicol glycinate acetylcysteine (TGA) is a pharmaceutical drug that is a combination of thiamphenicol glycinate ester (TAFGE), which is a derivative of the antibiotic thiamphenicol, and N-acetylcysteine (NAC), which is a mucus-thinning drug. Upon contact with tissue esterases, TGA releases both TAFGE and NAC. As a standalone medication, NAC is widely used in respiratory tract infections for its mucolytic activity.
Medical uses
Thiamphenicol glycinate acetylcysteine (TGA) effectively treats upper respiratory tract infections, including otitis media, pharyngotonsillitis, and rhinosinusitis. It is also used to treat exacerbations of chronic obstructive pulmonary disease (COPD) and chronic bronchitis with bronchiectasis.
TGA is effective in eradicating biofilms in otolaryngologic infections. Biofilms are complex communities of bacteria that can adhere to surfaces and are known to be highly resistant to antibiotic treatment and immune responses, so the ability of TGA to eradicate them represents an advantage over traditional antibiotics.
Mechanism of action
TGA works by releasing thiamphenicol glycinate ester (TAFGE) and N-acetylcysteine (NAC) upon contact with tissue esterases. Esterases are enzymes that break down esters into an acid and an alcohol in a chemical reaction with water called hydrolysis. Such reaction is needed to split TGA into its active components. TAFGE is an antibiotic, while NAC has mucolytic properties.
Thiamphenicol and TAFGE are related compounds, but they have different forms and uses: thiamphenicol is an antibiotic that is the methyl-sulfonyl analogue of chloramphenicol; it has a similar spectrum of activity to chloramphenicol but is 2.5 to 5 times as potent; it is used in many countries as a veterinary antibiotic, but is also approved in some countries such as Brazil, China, Italy, Moldova and Morocco for use in humans. On the other hand, TAFGE is a form of thiamphenicol that is used in parenteral and aerosol dosage forms. After administration, TAFGE is rapidly hydrolysed by tissue esterases, releasing thiamphenicol. Its antibacterial activity is due to thiamphenicol.
Routes of administration
TGA can be administered intramuscularly or by aerosol. It can be delivered with a sprayer or via a nebulizer.
Brand name
TGA is sold under various brand names, including "Fluimucil Antibiotic IT".
References
Combination drugs
Antibiotics
Expectorants
In mathematics, a law is a formula that is always true within a given context. Laws describe a relationship between two or more expressions or terms (which may contain variables), usually using equality or inequality, or between formulas themselves, for instance, in mathematical logic. For example, the formula a² ≥ 0 is true for all real numbers a, and is therefore a law. Laws over an equality are called identities. For example, (a + b)² = a² + 2ab + b² and cos²θ + sin²θ = 1 are identities. Mathematical laws are distinguished from scientific laws, which are based on observations and try to describe or predict a range of natural phenomena. The more significant laws are often called theorems.
Notable examples
Geometric laws
Triangle inequality: If a, b, and c are the lengths of the sides of a triangle, then the triangle inequality states that
c ≤ a + b,
with equality only in the degenerate case of a triangle with zero area. In Euclidean geometry and some other geometries, the triangle inequality is a theorem about vectors and vector lengths (norms):
‖u + v‖ ≤ ‖u‖ + ‖v‖,
where the length of the third side has been replaced by the length of the vector sum u + v. When u and v are real numbers, they can be viewed as vectors in ℝ¹, and the triangle inequality expresses a relationship between absolute values.
Pythagorean theorem: It states that the area of the square whose side is the hypotenuse (the side opposite the right angle) is equal to the sum of the areas of the squares on the other two sides. The theorem can be written as an equation relating the lengths of the sides a, b and the hypotenuse c, sometimes called the Pythagorean equation:
a² + b² = c².
Trigonometric identities
Geometrically, trigonometric identities are identities involving certain functions of one or more angles. They are distinct from triangle identities, which are identities involving both angles and side lengths of a triangle. Only the former are covered here.
These identities are useful whenever expressions involving trigonometric functions need to be simplified. Another important application is the integration of non-trigonometric functions: a common technique which involves first using the substitution rule with a trigonometric function, and then simplifying the resulting integral with a trigonometric identity.
One of the most prominent examples of trigonometric identities involves the equation sin²θ + cos²θ = 1, which is true for all real values of θ. On the other hand, the equation
sin θ = 0
is only true for certain values of θ, not all. For example, this equation is true when θ = 0 but false when θ = 2.
Another group of trigonometric identities concerns the so-called addition/subtraction formulas (e.g. the double-angle identity sin(2θ) = 2 sin θ cos θ, the addition formula for tan(x + y)), which can be used to break down expressions of larger angles into those with smaller constituents.
Algebraic laws
Cauchy–Schwarz inequality: An upper bound on the inner product between two vectors in an inner product space in terms of the product of the vector norms. It is considered one of the most important and widely used inequalities in mathematics.
The Cauchy–Schwarz inequality states that for all vectors u and v of an inner product space
|⟨u, v⟩|² ≤ ⟨u, u⟩ · ⟨v, v⟩,
where ⟨·, ·⟩ is the inner product. Examples of inner products include the real and complex dot product; see the examples in inner product. Every inner product gives rise to a Euclidean norm, called the canonical or induced norm, where the norm of a vector u is denoted and defined by
‖u‖ := √⟨u, u⟩,
where ⟨u, u⟩ is always a non-negative real number (even if the inner product is complex-valued). By taking the square root of both sides of the above inequality, the Cauchy–Schwarz inequality can be written in its more familiar form in terms of the norm:
|⟨u, v⟩| ≤ ‖u‖ ‖v‖.
Moreover, the two sides are equal if and only if u and v are linearly dependent.
Combinatorial laws
Pigeonhole principle: If n items are put into m containers, with n > m, then at least one container must contain more than one item. For example, of three gloves (none of which is ambidextrous/reversible), at least two must be right-handed or at least two must be left-handed, because there are three objects but only two categories of handedness to put them into.
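A small Python illustration: no matter how 4 items are assigned to 3 containers (here the assignment is random, which is the point — the bound holds for every assignment), some container ends up with at least 2 items:

```python
from collections import Counter
import random

def max_load(items, containers):
    """Assign each item to a container and return the largest container load."""
    assignment = Counter(random.choice(containers) for _ in items)
    return max(assignment.values())

random.seed(1)
# 4 items into 3 containers: the pigeonhole principle guarantees some
# container receives at least ceil(4 / 3) = 2 items, however they are placed.
for _ in range(1000):
    assert max_load(range(4), ["a", "b", "c"]) >= 2
```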
Logical laws
De Morgan's laws: In propositional logic and Boolean algebra, De Morgan's laws, also known as De Morgan's theorem, are a pair of transformation rules that are both valid rules of inference. They are named after Augustus De Morgan, a 19th-century British mathematician. The rules allow the expression of conjunctions and disjunctions purely in terms of each other via negation. The rules can be expressed in English as:
not (A or B) = (not A) and (not B)
not (A and B) = (not A) or (not B) where "A or B" is an "inclusive or" meaning at least one of A or B rather than an "exclusive or" that means exactly one of A or B. In formal language, the rules are written as ¬(P ∨ Q) ⟺ (¬P) ∧ (¬Q) and ¬(P ∧ Q) ⟺ (¬P) ∨ (¬Q), where P and Q are propositions,
¬ is the negation logic operator (NOT),
∧ is the conjunction logic operator (AND),
∨ is the disjunction logic operator (OR),
⟺ is a metalogical symbol meaning "can be replaced in a logical proof with", often read as "if and only if". For any combination of true/false values for P and Q, the left and right sides of the arrow will hold the same truth value after evaluation.
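Since each proposition takes only two truth values, both rules can be verified exhaustively over all four assignments; in Python:

```python
from itertools import product

# Exhaustively verify De Morgan's laws over every truth assignment to p, q.
for p, q in product([False, True], repeat=2):
    assert (not (p or q)) == ((not p) and (not q))
    assert (not (p and q)) == ((not p) or (not q))
```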
The three Laws of thought are:
The law of identity: 'Whatever is, is.' For all a: a = a.
The law of non-contradiction (alternately the 'law of contradiction'): 'Nothing can both be and not be.'
The law of excluded middle: 'Everything must either be or not be.' In accordance with the law of excluded middle or excluded third, for every proposition, either its positive or negative form is true: A∨¬A.
Phenomenological laws
Benford's law is an observation that in many real-life sets of numerical data, the leading digit is likely to be small. In sets that obey the law, the number 1 appears as the leading significant digit about 30% of the time, while 9 appears as the leading significant digit less than 5% of the time. Uniformly distributed digits would each occur about 11.1% of the time.
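A quick empirical illustration, using the leading digits of the first 1000 powers of two — a classic data set that follows Benford's predicted distribution P(d) = log₁₀(1 + 1/d) closely:

```python
from collections import Counter
from math import log10

# Leading decimal digits of 2^1, 2^2, ..., 2^1000.
leading = Counter(int(str(2 ** n)[0]) for n in range(1, 1001))

for d in range(1, 10):
    observed = leading[d] / 1000
    predicted = log10(1 + 1 / d)   # Benford's law: P(d) = log10(1 + 1/d)
    assert abs(observed - predicted) < 0.02

# Digit 1 leads roughly 30% of the time, digit 9 under 5%.
assert leading[1] / 1000 > 0.29
assert leading[9] / 1000 < 0.05
```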
Strong law of small numbers, in a humorous way, states any given small number appears in far more contexts than may seem reasonable, leading to many apparently surprising coincidences in mathematics, simply because small numbers appear so often and yet are so few.
See also
Formula
List of inequalities
List of mathematical identities
List of laws
Statement (logic)
Tautology (logic)
Citations
References
Bertrand Russell, The Problems of Philosophy (1912), Oxford University Press, New York, 1997, .
External links
The Encyclopedia of Equation Online encyclopedia of mathematical identities (archived)
A Collection of Algebraic Identities
Mathematical terminology
Theorems
Mathematical logic
Galerina semilanceata is a species of mushroom in the genus Galerina native to Washington state and California.
Description
This species is identified by its heavily veined, light-colored cap and by the small white V-shaped powderings that run up the stem. The mushrooms are usually about 3 cm overall.
Conservation status
This species is not of concern and is quite common.
References
Hymenogastraceae
Fungi described in 1896
Fungus species
Sinkclose is a security vulnerability in certain AMD microprocessors dating back to 2006 that was made public by IOActive security researchers on August 9, 2024. IOActive researchers Enrique Nissim and Krzysztof Okupski presented their findings at the 2024 DEF CON security conference in Las Vegas in a talk titled "AMD Sinkclose: Universal Ring-2 Privilege Escalation".
AMD said it would patch all affected Zen-based Ryzen, Epyc and Threadripper processors but initially omitted Ryzen 3000 desktop processors. AMD followed up and said the patch would be available for them as well. AMD said the patches would be released on August 20, 2024.
Mechanism
Sinkclose affects the System Management Mode (SMM) of AMD processors. It can only be exploited by first compromising the operating system kernel. Once the exploit is effected, it is possible to avoid detection by antivirus software and even compromise a system after the operating system has been re-installed.
References
External links
IOActive announcement
NIST page on CVE-2023-31315
2024 in computing
AMD
Computer security exploits
X86 architecture
Cyclic adenosine-inosine monophosphate (cAIMP, CL-592) is an experimental antiviral drug. It is the best studied of a range of related analogues which act as agonists of the Stimulator of interferon genes (STING) receptor which mediates interferon production by the immune system. It shows broad spectrum antiviral activity against a range of viruses including SARS-CoV-2 and enterovirus 68, and in studies on mice prevented the development of arthritis following infection with Chikungunya virus.
See also
Tilorone
References
Antiviral drugs
Twelve-membered rings
Phosphorus heterocycles
Oxygen heterocycles
Purines
Heterocyclic compounds with 3 rings
UGC 11861 is a barred spiral galaxy in the constellation of Cepheus. Its velocity with respect to the cosmic microwave background is 1334 ± 10 km/s, which corresponds to a Hubble distance of 19.68 ± 1.39 Mpc (~64.2 million light-years). In addition, three non-redshift measurements give a distance of 18.933 ± 5.26 Mpc (~61.7 million light-years). The first known reference to this galaxy comes from volume IV of the Catalogue of Galaxies and of Clusters of Galaxies compiled by Fritz Zwicky in 1968, where it was listed as CGCG 343-003 and described as an "extremely diffuse spiral."
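The distance conversion can be reproduced with Hubble's law, d = v / H₀. The Hubble constant is not stated here; H₀ ≈ 67.8 km/s/Mpc is assumed below because it is a commonly used value and reproduces the quoted figures:

```python
# Hubble's law: d = v / H0, with an assumed H0 (not given in the text).
MLY_PER_MPC = 3.2616          # million light-years per megaparsec

v_cmb = 1334                  # km/s, velocity with respect to the CMB
H0 = 67.8                     # km/s/Mpc (assumed)

d_mpc = v_cmb / H0            # ~19.7 Mpc
d_mly = d_mpc * MLY_PER_MPC   # ~64.2 million light-years
```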
The SIMBAD database lists UGC 11861 as an active galactic nucleus candidate, i.e. it has a compact region at the center that emits a significant amount of energy across the electromagnetic spectrum, with characteristics indicating that this luminosity is not produced by stars. In addition, the galaxy contains two broad spiral arms wrapping around its central region.
Supernovae
Three supernovae have been observed in UGC 11861:
SN 1995ag (type II, mag. 17) was discovered by Jean Mueller on 28 September 1995.
SN 1997db (type II, mag. 16.9) was discovered by Michael Schwartz on 2 August 1997.
SN 2011dm (type Ia, mag. 18.8) was discovered by the Lick Observatory Supernova Search (LOSS) on 15 June 2011.
References
External links
Cepheus (constellation)
Astronomical objects discovered in 1968
Barred spiral galaxies
The San Francisco Estuary Institute (SFEI) is a nonprofit research institute focusing on the estuaries and ecosystems of San Francisco Bay and Northern California. SFEI was created in 1992 in order to coordinate integrated research and monitoring of the Bay. SFEI administers the Aquatic Science Center, a joint powers authority (JPA), which is an agency formed when multiple government agencies have a common mission that can be better achieved by pooling resources and knowledge. SFEI's precursor was the Aquatic Habitat Institute, created in 1986.
Research
Water quality monitoring
SFEI has managed the Regional Monitoring Program for Water Quality in San Francisco Bay (RMP) since its beginning in 1993. Scientists monitor pollutants in water, sediment, and in Bay wildlife, including bivalves, fish, bird eggs, and harbor seals. Samples are analyzed for mercury, PCBs, pesticides, metals, and a variety of contaminants of emerging concern.
Thousands of man-made chemicals are found in Bay water, sediment, and organisms. For many of these, there is little or no data on their impacts on the environment or human health, and they are not regulated by state or federal law. These are often referred to as "contaminants of emerging concern" or CECs. SFEI has studied these chemicals in the Bay since 2001. Scientists have identified the following most likely to have a negative impact on Bay wildlife: PFOS, the pesticide fipronil, nonylphenols and nonylphenol ethoxylates.
Information developed by the RMP is used by state regulators to set Total Maximum Daily Loads. RMP data has also been used in the development of fish consumption advisories by California's Department of Public Health. Levels of PCBs and mercury in certain species of sportfish in San Francisco Bay exceed safe levels for human consumption. The RMP collected data on copper in stormwater, which is toxic to aquatic organisms at elevated concentrations. These data contributed to the passage of California Senate Bill 346, also known as the California Motor Vehicle Brake Friction Material Law. This law supports the development of alternative, less toxic materials for use in brake pads.
Ecology
SFEI scientists have made wide use of the techniques of historical ecology, which incorporates information from historical maps and other records to learn how landscapes and ecosystems have changed over time. This information is used to help guide restoration and management plans for wetlands and other landscapes.
Information technology
SFEI staff have collaborated with NASA scientists to develop an early-warning system for harmful algal blooms, based on satellite remote sensing data and artificial intelligence.
History
In 1987, the San Francisco Estuary Project (SFEP, a state and federal cooperative program) began creating a Comprehensive Conservation and Management Plan (CCMP) for the San Francisco Estuary, involving over 100 stakeholders. The CCMP led to the establishment of the San Francisco Estuary Institute (SFEI) and the Regional Monitoring Program (RMP).
In 1992, the San Francisco Regional Water Quality Control Board mandated the implementation of a regional pollutant monitoring program in the Bay. Under the federal Clean Water Act and the state Porter Cologne Water Quality Act, polluters must have a discharge permit, and must monitor discharges (compliance monitoring) and in the water body near their discharge (receiving water impacts monitoring). This results in a patchwork of data that is not well-suited for science or management. By contrast, coordinated monitoring programs can gather information relevant to managers and with clear scientific objectives. Because cooperative programs can be more efficient and useful, several such programs have been created in the United States, for example for the Chesapeake Bay, in Puget Sound, and the Southern California Bight.
In 1993, the Aquatic Habitat Institute was reorganized as SFEI and the RMP officially began, using previous pilot studies to guide its monitoring efforts. The RMP annual budget has grown from $1.2 million in 1993 to $5.4 million in 2024.
In the early 1990s, California regulatory agencies established numeric criteria for several pollutants, but little was known about whether waterways exceeded these criteria. Early work by the RMP focused on sampling water in the Bay to determine its status, and whether pollutant concentrations met or exceeded standards. In subsequent years, the RMP has expanded its objectives to include estimating inputs ("loads") of pollutants to the Bay, understanding how pollutants enter waterways ("pathways"), and effects of pollutants on wildlife. These goals assist regulators in developing Total Maximum Daily Loads and issuing discharge permits required by the US Clean Water Act and California's Porter-Cologne Water Quality Control Act.
SFEI is located in Richmond, California. SFEI's original location (in 1993) was in Oakland, California, with subsequent offices at the Richmond Field Station before moving to the present location in 2007.
Member Agencies
The Regional Monitoring Program for Water Quality in San Francisco Bay (RMP) is funded by permitted dischargers including oil refineries, industrial facilities, dredgers, wastewater treatment facilities, and municipal stormwater management programs. For a full list, see Appendix A in the RMP charter. Members participate in the RMP in exchange for some regulatory relief, or exemption from conducting some monitoring that would normally be required under the Clean Water Act and the National Pollutant Discharge Elimination System. Participants elect representatives to serve on various committees, through which they oversee the program's finances, guide its management, and provide input and peer review of the science. In addition, the RMP has over a dozen science advisors, nationally recognized experts in various fields of environmental science.
See also
San Francisco Estuary Partnership
The Bay Institute
San Francisco Bay Conservation and Development Commission
Save the Bay
San Francisco Baykeeper
Southern California Coastal Water Research Project
References
External links
San Francisco Estuary Institute
Water in California
Environmental research institutes
Research institutes in California
Tajikistan, officially the Republic of Tajikistan, is a landlocked country in Central Asia. It consists of four administrative provinces (viloyat): Sughd, Khatlon, the autonomous province of Gorno-Badakhshan (abbreviated as GBAO), and the Regions of Republican Subordination (RRP). Each province is further divided into districts, which in turn are subdivided into jamoats (village-level self-governing units) and then villages (qyshloqs). As of 2006, there were 58 districts and 367 jamoats in Tajikistan. Districts under Tajikistan Central Government Jurisdiction, also known as Districts of Republican Subordination, are regions surrounding Dushanbe (the capital city) that are directly governed by the central administration.
This list presents the region-wise life expectancy in Tajikistan. The national average in 2019 was 75.1 years, with 73.5 years for males and 76.8 years for females. GBAO has the highest life expectancy for both genders.
Agency of Statistics
Global Data Lab (2019–2022)
Data source: Global Data Lab
References
Demographics of Tajikistan
Life expectancy
In meteorology, a ring of fire pattern is a type of atmospheric setup where thunderstorms form along the edges of a strong high-pressure ridge in the upper layer of the atmosphere. These storms can produce severe thunderstorms and flooding around the edges of the ridge. It is a similar phenomenon to the heat dome, and the two typically coincide as functions of strong areas of high atmospheric pressure, with both being most common during the warm season.
In the United States, ring of fire patterns are also commonly contributing factors to warm-season derechos, as extreme atmospheric instability builds near the edges of the ridge.
References
Atmospheric dynamics
Meteorological phenomena
In game theory, a mean payoff game is a zero-sum game played on the vertices of a weighted directed graph. The game is played as follows: at the start of the game, a token is placed on one of the vertices of the graph. Each vertex is assigned to either the Maximizer or the Minimizer. The player that controls the vertex the token is currently on chooses one outgoing edge, along which the token moves next. In doing so, the Minimizer pays the Maximizer the number that is on the edge. Then, again, the player controlling the next vertex the token reaches chooses where it goes, and this continues indefinitely. The objective of the Maximizer is to maximize their long-term average payoff, and the Minimizer has the opposite objective.
Formal definition
A mean payoff game consists of a graph G = (V, E) and a weight function w : E → ℤ, where V is the set of vertices, partitioned between the players as V = V_Max ∪ V_Min, and where w(e) is the weight of the edge e. Often, the graph is assumed to be sinkless, which means that every vertex has at least one outgoing edge. A play is a possible outcome of the game, which is an infinite walk on the graph; we could write this as a sequence of edges π = e₀e₁e₂… where the head of eᵢ equals the tail of eᵢ₊₁. The objective value of a play can then be written as follows:
MP(π) = liminf_{n→∞} (1/n) · (w(e₀) + w(e₁) + … + w(e_{n−1})).
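When both players play positionally, the resulting play is a "lasso": a finite prefix followed by a cycle repeated forever. For such a play the liminf above is simply the average weight of the cycle, since the prefix washes out. A small Python sketch (encoding plays as lists of edge weights is an illustrative convention, not standard notation):

```python
def cycle_mean(cycle_weights):
    """Long-run average payoff of a play that eventually repeats this cycle."""
    return sum(cycle_weights) / len(cycle_weights)

def running_average(prefix_weights, cycle_weights, n):
    """Average of the first n edge weights of the play prefix (cycle)^omega."""
    reps = n // len(cycle_weights) + 1
    weights = (prefix_weights + cycle_weights * reps)[:n]
    return sum(weights) / n

prefix, cycle = [5, -7], [1, 2, -6]
assert cycle_mean(cycle) == -1.0
# The running averages approach the cycle mean; the finite prefix vanishes
# in the limit.
assert abs(running_average(prefix, cycle, 30000) - cycle_mean(cycle)) < 0.01
```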
A strategy for the Maximizer is a function σ : W_Max → E, where W_Max is the set of finite walks that start at the initial vertex and end at some vertex v ∈ V_Max, and where σ returns an outgoing edge of the end vertex v. A strategy τ for the Minimizer can be defined analogously. If both players fix a strategy, say they pick strategies σ and τ, then the outcome of the game is fixed, and the resulting play is the path π(σ, τ).
One of the fundamental results for mean payoff games is that they are positionally determined. This means in our case that the game has a unique value, that each player has a strategy attaining this value, and that such a strategy can be taken to be positional, i.e. it only depends on the current vertex the token is on. In formulas, the following equation holds for the value val:
val = sup_σ inf_τ MP(π(σ, τ)) = inf_τ sup_σ MP(π(σ, τ)),
and both optima are attained by positional strategies.
Solving mean payoff games
Solving a mean payoff game can mean several things, although in practice finding one of the following often also yields the others:
Determining the value of one or all starting vertices
Determining the optimal strategy for one or both players
Determining the set of starting vertices from which the Maximizer can guarantee a value of at least 0. Doing so is called finding the zero-mean partition (and is also related to solving energy games)
It is a major open problem in computer science whether there exists a polynomial-time algorithm for solving any of the above problems. These problems are among the few contained in both the classes NP and coNP but not known to be in P. Currently, the fastest known algorithm is a randomized strategy improvement algorithm whose running time is subexponential in the number of Maximizer vertices. The best known deterministic algorithms run in pseudopolynomial time, bounded by a polynomial in the number of edges, the total number of vertices, and the largest absolute edge weight.
Three of the most well-known algorithms for solving mean payoff games are the following (each of which has their own slight variants):
The GKK algorithm. Its main idea is to add a potential to every vertex in the graph, and slowly increase the potential on some nodes until we find the zero-mean partition. This also gives us the energy values in an energy game.
Value iteration. Its main idea is to give each vertex a value, and update the values via fixed point iteration. When the fixed point is reached, the zero-mean partition and energy values can be found.
Strategy improvement. Its main idea is to start with an arbitrary Maximizer strategy and assign it a valuation. Then, by repeatedly making changes to the strategy that improve its valuation, we end up with a strategy for the Maximizer that guarantees a nonnegative payoff wherever that is possible.
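As an illustration of the value-iteration idea above, the following sketch computes the zero-mean partition by Kleene iteration of an energy-style lifting operator. The data layout, the `owner` labels, and the use of n·W + 1 as a cap meaning "no finite initial energy suffices" are assumptions of this sketch, not a fixed presentation from the literature:

```python
def zero_mean_partition(vertices, edges, owner, weight):
    """Vertices from which the Maximizer can secure mean payoff >= 0.

    edges[v]: list of successors of v; owner[v] is "MAX" or "MIN";
    weight[(v, u)]: payoff from Minimizer to Maximizer on edge (v, u).
    """
    n = len(vertices)
    W = max(abs(w) for w in weight.values())
    TOP = n * W + 1  # sentinel: the required energy is unbounded
    f = {v: 0 for v in vertices}  # f[v] = initial energy Maximizer needs at v
    changed = True
    while changed:  # Kleene iteration: lift values until a fixed point
        changed = False
        for v in vertices:
            # Energy needed at v if the token leaves along (v, u)
            vals = [min(TOP, max(0, f[u] - weight[(v, u)])) for u in edges[v]]
            # Maximizer picks the cheapest option, Minimizer the costliest
            new = min(vals) if owner[v] == "MAX" else max(vals)
            if new > f[v]:
                f[v] = new
                changed = True
    return {v for v in vertices if f[v] < TOP}


# A +1 self-loop keeps the mean nonnegative; a -1 self-loop does not.
part = zero_mean_partition(
    ["a", "b"], {"a": ["a"], "b": ["b"]},
    {"a": "MAX", "b": "MAX"}, {("a", "a"): 1, ("b", "b"): -1})
assert part == {"a"}
```

The finite values f[v] at the fixed point are exactly the energy values of the corresponding energy game, which is why this procedure also solves energy games.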
Related games and problems
The problem of solving parity games can be reduced in polynomial time to solving mean payoff games. Solving mean payoff games can be shown to be polynomial-time equivalent to many core problems concerning tropical linear programming. Another game closely related to the mean payoff game is the energy game, in which the Maximizer tries to maximize the smallest cumulative sum within the play instead of the long-term average.
References
Further reading
Game theory
Directed graphs | Mean payoff game | [
"Mathematics"
] | 890 | [
"Game theory"
] |
77,669,779 | https://en.wikipedia.org/wiki/Time-lock%20puzzle | A time-lock puzzle, or time-release cryptography, encrypts a message so that it cannot be decrypted until a specified amount of time has passed. The concept was first described by Timothy C. May, and a solution was first introduced by Ron Rivest, Adi Shamir, and David A. Wagner in 1996. Time-lock puzzles are useful in cases where the confidentiality of information is determined by time, such as a diarist who does not want their views released until 50 years after their death, an auction where bids are sealed until the bidding period is closed, electronic voting, and contract signing. They can additionally be used in constructing further cryptographic primitives, such as verifiable delay functions and zero-knowledge proofs.
Time-release cryptography can be achieved through several different mechanisms.
Use of mathematical problems that require sequential calculations to solve and cannot be parallelized; adding more computers to the problem therefore does not help solve it faster.
Use of a trusted agent, or multiple agents who each hold a part of the message and cryptographic keys, who release the message after a specified time period has passed.
Distribution of public encryption keys to users, with the private cryptographic keys placed with a trusted agent in an offline location, to be released at a later date.
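The first mechanism can be illustrated with the repeated-squaring construction of Rivest, Shamir, and Wagner: decryption requires t sequential modular squarings, while the puzzle creator can shortcut the work using the factorization of the modulus. The tiny primes and the simple additive masking below are illustrative choices for this sketch, not production cryptography:

```python
def create_puzzle(p, q, t, message):
    """Creator knows p and q, so computing 2^(2^t) mod n is cheap."""
    n = p * q
    phi = (p - 1) * (q - 1)
    # Shortcut via Euler's theorem: reduce the huge exponent 2^t mod phi(n)
    key = pow(2, pow(2, t, phi), n)
    return n, t, (message + key) % n  # mask the message with the key


def solve_puzzle(n, t, masked):
    """Without phi(n), the solver must perform t sequential squarings."""
    key = 2 % n
    for _ in range(t):
        key = key * key % n  # inherently sequential: no parallel speedup
    return (masked - key) % n


# Tiny demonstration primes; a real puzzle uses a large RSA-style modulus
n, t, c = create_puzzle(10007, 10009, 1000, 42)
assert solve_puzzle(n, t, c) == 42
```

The delay is tuned by choosing t so that t squarings take the desired wall-clock time on the fastest available hardware.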
See also
LCS35
References
Cryptography | Time-lock puzzle | [
"Mathematics",
"Engineering"
] | 271 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
77,670,060 | https://en.wikipedia.org/wiki/Enterosoma%20genetic%20code | The Enterosoma genetic code (tentative code number 34) translates AGG to methionine, as determined by the codon assignment software Codetta; it was further shown that this recoding is associated with a special tRNA with the appropriate anticodon and tRNA identity elements. The code is found in a small clade of species within the Enterosoma genus, according to the GTDB taxonomy system release 220. Codetta called the Enterosoma code for the following genome assemblies: GCA_002431755.1, GCA_002439645.1, GCA_002436825.1, GCA_002451385.1, GCA_002297105.1, GCA_002297045.1, GCA_002404995.1, and GCA_900549915.1.
See also
Genetic codes: list of alternative codons
List of genetic codes
References
Genetics | Enterosoma genetic code | [
"Biology"
] | 210 | [
"Protein biosynthesis",
"Gene expression",
"Molecular genetics",
"Biosynthesis",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
77,670,146 | https://en.wikipedia.org/wiki/Anaerococcus%20and%20Onthovivens%20genetic%20code | The Anaerococcus and Onthovivens genetic code (tentative code number 36) translates CGG to tryptophan, as determined by the codon assignment software Codetta; it was further shown that this recoding is associated with a special tRNA with the appropriate anticodon and tRNA identity elements appropriate for such decoding. As currently known, this code is limited to two distinct clades, the genus Anaerococcus in the class Clostridia and the genus Onthovivens in the class Bacilli, as defined by the GTDB taxonomy system release 220. Codetta originally called the Anaerococcus and Onthovivens code for the following genome assemblies: GCA_000024105.1, GCA_900445285.1, GCA_902500265.1, GCA_900258475.1, GCA_002399785.1, GCA_004558005.1, GCA_900540365.1, GCA_900540395.1, GCA_900545015.1.
See also
Genetic codes: list of alternative codons
List of genetic codes
References
Genetics | Anaerococcus and Onthovivens genetic code | [
"Biology"
] | 263 | [
"Protein biosynthesis",
"Gene expression",
"Molecular genetics",
"Biosynthesis",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
77,670,151 | https://en.wikipedia.org/wiki/Absconditabacterales%20genetic%20code | The Absconditabacterales genetic code (tentative code number 37) translates UGA to glycine, and CGG and GCA to tryptophan, as determined by the codon assignment software Codetta; it was further shown that these recodings are associated with three special tRNAs with appropriate anticodons and tRNA identity elements. Codetta called the Absconditabacterales code (sometimes leaving the rare CGA codon uncalled) for the following genome assemblies: GCA_002792495.1, GCA_001007975.1, GCA_003488625.1, GCA_003260355.1, GCA_003242865.1, GCA_000350285.1, GCA_002746475.1, GCA_007116275.1, GCA_007115995.1, GCA_002361595.1, GCA_000503875.1, GCA_003543185.1, GCA_002441085.1, and GCA_002791215.1. Review of the GTDB taxonomy system (release 220) for the order Absconditabacterales (phylum Patescibacteria) left two questionable genome assemblies (GCA_002414185.1, for which Codetta had called the CGA codon Arg, and GCA_937862565.1, the only known genome from the CALMFT01 family and untested by Codetta); spot-checking these two genomes shows that they both have all three special tRNAs, suggesting that the code is universal across the order.
See also
Genetic codes: list of alternative codons
List of genetic codes
References
Genetics | Absconditabacterales genetic code | [
"Biology"
] | 401 | [
"Protein biosynthesis",
"Gene expression",
"Molecular genetics",
"Biosynthesis",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
77,672,874 | https://en.wikipedia.org/wiki/Jizera%20Dark-Sky%20Park | The Jizera Dark Sky Park (Polish: Izerski Park Ciemnego Nieba - IPCN, Czech: Jizerská oblast tmavé oblohy - JOTO) is the first transnational dark-sky preserve. It is located in a nearly uninhabited region of the Jizera Mountains that straddles the border between Poland and the Czech Republic. The park primarily serves to inform the general public about the issue of light pollution, as well as to protect nature and the environment.
Establishment of the park
The initiative to create a dark-sky preserve in the Jizera Mountains came from the Astronomical Institute of the University of Wrocław in Poland. It was joined by other institutions, namely the Astronomical Institute of the Academy of Sciences of the Czech Republic, the Nature and Landscape Protection Agency of the Czech Republic [cz], the administration of the Jizera Mountains Protected Area, the Świeradów-Zdrój Forest District, the Szklarska Poręba Forest District, the state enterprise Forests of the Czech Republic [cz], and the regional directorate of Liberec. On November 4, 2009, as part of the United Nations' International Year of Astronomy, these institutions jointly declared the Jizera Dark Sky Region.
Location and purpose
The Jizera dark-sky region covers an area of almost . It is located in a nearly uninhabited part of the Jizera Mountains that spans both the Czech and Polish sides of the mountain range. On the Czech side, it stretches from the settlement of Jizerka to Mount Smrk, while in Poland it continues along the High Jizera ridge and surrounds the Jizera Meadow and the settlement of Orle.
The park has several functions. Above all, it informs the general public about the issue of light pollution by showcasing a night sky much darker than the sky in cities and their surroundings. Another important function is the protection of nature and the environment. The park hosts a number of astronomical events for the public, such as lectures and sky observations, in which the Club of Astronomers of the Liberec Region, a branch of the Czech Astronomical Society, and other institutions take part. On the Polish side, the park is part of the astro-tourism project Astro Izery.
Darkness
Although the night sky in the Jizera dark-sky region is significantly darker than in the cities, it is not as dark as a naturally dark sky free of artificial light. The influence of light pollution from cities stretches tens of kilometers. The brightness of the sky at Jizerka is approximately twice that of a naturally dark sky unaffected by light pollution. Naturally dark night skies effectively do not occur in densely populated Central Europe. The most prominent sources of light pollution visible from the area are the cities of Liberec, Jablonec nad Nisou, Tanvald, and Jelenia Góra. The park's sky quality on the Bortle scale is level 4, reaching level 3 under exceptionally good conditions.
References
External links
Jizera Dark Sky Park
Homepage
Light pollution
Dark-sky preserves
2009 establishments
2009 in the Czech Republic | Jizera Dark-Sky Park | [
"Astronomy"
] | 642 | [
"Dark-sky preserves"
] |
76,239,444 | https://en.wikipedia.org/wiki/Cis-Urocanic%20acid | cis-Urocanic acid (cis-UCA) is a chemical compound produced by ultraviolet irradiation of trans-urocanic acid, a metabolite naturally formed in the body from histidine. cis-Urocanic acid is suspected of involvement in the development of skin cancer. It acts as an immunosuppressant through its action as an agonist of the 5-HT2A receptor, and blocking this receptor has been shown to reduce cis-UCA-mediated photocarcinogenesis. However, the immunomodulatory effects of cis-UCA are complex and also involve other pathways, and at low levels it shows anti-inflammatory actions and may be protective against UV damage in the cornea and retina.
References
Imidazoles
Carboxylic acids
Alkene derivatives | Cis-Urocanic acid | [
"Chemistry"
] | 172 | [
"Pharmacology",
"Carboxylic acids",
"Functional groups",
"Medicinal chemistry stubs",
"Pharmacology stubs"
] |
76,239,903 | https://en.wikipedia.org/wiki/List%20of%20wetland%20plants | This is a list of plants that grow in wetland environments, including aquatic plants and plants that live in the ecotones between terrestrial and aquatic environments.
Major cosmopolitan groups
These are groups with members found in wetland environments throughout the world.
Ceratophyllum demersum is a cosmopolitan species of aquatic plant.
Drosera, the sundews, are carnivorous plants with species found on every continent except Antarctica.
Duckweeds are tiny flowering plants that float on the surface of water, with members of the group found worldwide.
Isoetes is a cosmopolitan genus of lycophyte known as the quillworts.
Juncus is a genus of monocot commonly known as rushes, found in every continent except Antarctica.
Juncus effusus
Mangroves are trees or shrubs that grow in salt or brackish water along coastlines. The group consists of numerous unrelated plants that have convergently evolved. Sometimes, the widely distributed genus Rhizophora is referred to as the true mangroves.
Pistia, a genus with one species that is native to tropical environments and has further extended its range as an introduced species.
Phragmites is a genus of plants known as reeds.
Pondweeds are a family of aquatic plants with a subcosmopolitan distribution.
Sagittaria is a genus of plants known as arrowhead or katniss.
Salix, the willows, are native to many areas throughout the world, usually in riparian ecosystems.
Salvinia natans, the floating fern, is native in Africa, Asia, Europe, South America, and introduced elsewhere.
Sedges are a large family of grass-like plants with many species that form a characteristic part of wetland vegetation.
Bolboschoenus, club rushes.
Carex, the true sedges, contains over 2,000 species, primarily found in wetland environments.
Eleocharis, the spikerushes.
Scirpus, bulrushes.
Sphagnum is a genus of moss that is found primarily in the Northern Hemisphere, as well as in some areas of South America, New Zealand and Tasmania. Sphagnum moss is notable because it forms peat.
Sporobolus, cordgrasses.
Typha, known as cattails or bulrushes, are found throughout the world and a characteristic plant of wetland environments.
Utricularia, known as the bladderworts, are carnivorous plants with species found worldwide.
Water lilies are aquatic flowering plants with leaves that float on the surface of bodies of water.
By distribution
Asia
Acorus calamus, sweetflag
Acrostichum aureum
Aegiceras corniculatum, black mangrove
Avicennia marina
Avicennia officinalis, Indian mangrove
Azolla, mosquito ferns
Barclaya longifolia
Barringtonia acutangula
Bruguiera gymnorhiza, the large-leafed orange mangrove
Eleocharis palustris, common spikerush
Euryale ferox, prickly waterlily
Excoecaria agallocha, the milky mangrove
Glyptostrobus pensilis, the Chinese swamp cypress, highly endangered
Iris halophila
Kandelia candel, a mangrove species
Lepironia articulata, the gray sedge
Lycopus lucidus
Lysimachia maritima
Nechamandra alternifolia
Nelumbo nucifera, the sacred lotus
Nymphaea nouchali, the blue water lily
Oryza coarctata, syn. Porteresia coarctata, a type of wild rice that grows in estuaries
Persicaria hydropiper
Persicaria thunbergii
Potamogeton distinctus
Rhizophora mucronata
Sagittaria guayanensis
Sagittaria trifolia
Salix cheilophila
Salix pierotii
Suaeda salsa, seepweed
Typha orientalis, a species of cattail
Zizania latifolia
Africa
Cyperus papyrus, papyrus
Echinochloa pyramidalis, antelope grass
Ficus verruculosa, water fig
Fimbristylis dichotoma
Miscanthus junceus
Phoenix reclinata wild date palm
Prionium serratum, palmiet, a South African endemic
Rhizophora mucronata
Syzygium cordatum
Trapa natans
Typha australis
Vossia cuspidata, hippo grass
Europe
Calamagrostis pseudophragmites
Dactylorhiza majalis, broad-leaved marsh orchid
Damasonium alisma, star fruit
Eleocharis palustris, common spikerush
Eriophorum sedge, known as cotton grass or bog cotton
Hottonia palustris
Leucojum aestivum, summer snowflake
Lysimachia maritima
Menyanthes trifoliata, bogbean
Myrica gale, bog-myrtle
Narthecium ossifragum, bog-asphodel
Persicaria hydropiper
Sphagnum moss
North America
Acorus americanus, American sweetflag
Anchistea virginica, Virginia chain fern
Asclepias incarnata, swamp milkweed
Borrichia frutescens, sea oxeye
Caltha palustris, marsh marigold
Cephalanthus occidentalis, buttonbush
Eleocharis palustris, common spikerush
Hibiscus moscheutos, marsh mallow
Iris virginica, southern blue flag
Lycopus americanus, American bugleweed
Lysimachia maritima
Persicaria hydropiper
Sacciolepis striata
Salix nigra, black willow
Sarracenia, North American pitcher plants
Saururus cernuus, lizard's tail
Taxodium distichum, bald cypress
Nelumbo lutea, American lotus
Nuphar lutea, Spatterdock
Nymphaea odorata, fragrant water lily
Nyssa biflora, swamp tupelo tree
South America
Alternanthera philoxeroides
Hydrocotyle ranunculoides, floating marshpennywort
Hydrocotyle verticillata
Limnobium laevigatum
Limnocharis flava
Luziola peruviana
Myriophyllum aquaticum, parrot's feather, is now introduced worldwide but originates in the Amazon River.
Nymphoides humboldtiana
Pontederia crassipes, the common water hyacinth, has been introduced worldwide but is native to South America
Pontederia subovata
Sarcocornia ambigua
Syagrus romanzoffiana, queen palm
Victoria cruziana, Santa Cruz waterlily
Australia, New Zealand and Tasmania
Aegiceras corniculatum, black mangrove
Avicennia marina, gray mangrove
Azolla filiculoides red mosquitofern, found in Tasmania
Bruguiera gymnorhiza, the large-leafed orange mangrove
Casuarina glauca, the swamp she-oak, found in inland Australia
Cordyline australis, endemic to New Zealand
Dicksonia squarrosa, the New Zealand tree fern, endemic to New Zealand
Dacrycarpus dacrydioides, kahikatea, endemic to New Zealand
Duma florulenta, tangled lignum
Eleocharis sphacelata, giant spikesedge
Eucalyptus robusta
Marsilea drummondii, a widespread aquatic fern in inland Australia known as nardoo
Melaleuca ericifolia
Nymphoides crenata, the wavy marshwort, endemic to Australia
Ottelia ovalifolia
Persicaria hydropiper
Rhizophora mucronata
Rhizophora stylosa
Samolus repens, the sea primrose
Typha orientalis, a species of cattail
Villarsia, a genus of aquatic flowering plants
References
Wetland
Plants
Wetland | List of wetland plants | [
"Biology",
"Environmental_science"
] | 1,637 | [
"Lists of plants",
"Hydrology",
"Plants",
"Plants by habitat",
"Lists of biota",
"Wetlands",
"Organisms by habitat"
] |
76,242,748 | https://en.wikipedia.org/wiki/Automatic%20track%20warning%20system | Automatic track warning system (ATWS) is a technical device used during track construction for occupational safety. It warns the construction site workers of an approaching train. In Germany, it usually consists of a series of signal lights and acoustic warning devices, which are mounted on steel poles or tripods at the edge of the track bed every 30 meters. There are wired and wireless systems that are automatically activated when a train approaches, for example by a wheel contact in the track bed. As long as the ATWS is generating warning signals, the construction site personnel must stay away from the track in question and allow the train to pass. ATWS is considered safer than the non-automated solution, where one worker constantly watches for approaching trains and alerts their colleagues.
Types
There are collective ATWS and individual ATWS. The collective ATWS generates an acoustic signal for a group of workers. The individual ATWS is a wearable device warning every worker individually.
Acoustic warning signals in Germany
Acoustic signals generated by collective ATWS in Germany are defined in the railway signalling regulations:
A long continuous tone (mixed sound of different high tones) means "Caution! Vehicles are approaching on the neighbouring track"
Two long tones in succession at different pitches signal "Clear the working track!"
At least five times two short tones at different pitches signal "Clear working tracks as quickly as possible!"
After the acoustic signal, the railway workers have 25 seconds to escape from the track.
Noise pollution controversy for collective ATWS
Due to the nature of the system, an ATWS is very loud (97 to 126 dB) so that it can be heard by the track workers to be warned, even in the vicinity of loud working machines and with hearing protection in place. This is a burden for people living near railway tracks for the duration of the works. Modern systems adapt to the ambient volume (mandatory from 2019) and only warn at full sound pressure in very loud environments. Even at reduced volume, this is still perceived as noise pollution. In 2014, in Stuttgart-Sommerrain, an ATWS was sabotaged by unidentified persons.
Research on wearable devices for personal warning
EU Commission funded the project ALARP (A railway automatic track warning system based on distributed personal mobile terminals) in the years 2010–2013 by the total amount of €3,941,877.20. The aim of the project was to improve the safety of track workers through the development of an innovative ATWS using low-cost, rugged, wireless wearable devices.
See also
Automatic warning system
Robotic tech vest
References
External links
Sound of ATWS in Germany
Railway safety
Construction safety
Noise pollution
Wearable devices | Automatic track warning system | [
"Engineering"
] | 529 | [
"Construction",
"Construction safety"
] |
76,245,774 | https://en.wikipedia.org/wiki/Russula%20cremoricolor | Russula cremoricolor, also known as the winter russula, is a species of gilled mushroom. This mushroom has red, cream-yellow, and pink color variants, which complicates attempts at field identification, although finding "red and creamy capped fruitbodies in close proximity is a good clue indicating this species". The winter russula is "mildly toxic," and causes intestinal distress even when consumed in small amounts. The red morph was previously identified as Russula silvicola, but was found to be genetically identical to the cream-colored individuals called R. cremoricolor. The red morph is superficially similar to Russula californiensis, but R. cremoricolor has a much sharper, more peppery taste, tends to associate with mixed forest or tanoak rather than pine, and keeps its gills and stipe white even in age.
See also
List of Russula species
Russula emetica
References
Fungi of North America
Fungi described in 1902
Fungus species
cremoricolor | Russula cremoricolor | [
"Biology"
] | 217 | [
"Fungi",
"Fungus species"
] |
76,245,939 | https://en.wikipedia.org/wiki/Transfluxor | A transfluxor was a specialised type of magnetic core memory element in which each core had two holes, one for writing and another for reading. It had the unusual property that a core's state could be read without erasing it. In addition to binary data, transfluxors could also store analog values, with no need to drive them into core saturation.
The technology is described in U.S. patent 3048828.
Transfluxors were used in the ARMA Micro Computer.
References
History of computing hardware
Non-volatile memory
Types of RAM | Transfluxor | [
"Technology"
] | 118 | [
"Computing stubs",
"History of computing hardware",
"History of computing"
] |
76,246,110 | https://en.wikipedia.org/wiki/Agu%C3%A7adoura%20test%20site | The Aguçadoura test site is an offshore location in the north of Portugal where grid connected offshore renewable energy devices have been tested, for research and project demonstration. It is about 5 km (3 miles) off the coast of Aguçadoura, Póvoa de Varzim, about 35 km NNE of central Porto.
It was established in 2001, and four developers have tested devices there: the Archimedes Wave Swing, three Pelamis Wave Power P1 machines as the Aguçadoura Wave Farm, the Principle Power WindFloat, and CorPower Ocean's C4.
Since 2021, it has been managed by WavEC and OceanACT.
Archimedes Wave Swing
In May 2004, a 2 MW (peak power output) Archimedes Wave Swing (AWS) was installed at Aguçadoura, after unsuccessful attempts in 2001 and 2002. The installation took three and a half days, and was eventually achieved by attaching the convertor to a pontoon and then submerging it and attaching to the seabed while the chamber remained floating.
The AWS device on the submersible pontoon foundation was 48 m long, 28 m wide and 35 m high, and sat on the sea bed beneath the waves. It had a 9.5 m diameter moving captor with a stroke of 7 m that moved with the waves at a maximum speed of 2.2 m/s. It was connected to the Portuguese grid by a 6 km long cable.
The testing was postponed until mid-September 2004, due to technical issues communicating with the device. At the end of October 2004 the testing license expired and the tests finished. The AWS intellectual property was later transferred to a Scottish company AWS Ocean Energy Ltd.
Aguçadoura Wave Farm
Three Pelamis P1 wave energy converters were installed at Aguçadoura in September 2008, and connected to the Portuguese grid. These each had a rated peak power of nominally 750 kW, giving a total of 2.25 MW installed capacity. There were plans to install a further 25 Pelamis WECs, but this never happened.
The three machines were taken back to shore due to technical issues, but were never re-deployed due to the financial crisis. One of the project partners, Babcock & Brown pulled out after a major sale of assets to repay its debts. Another of the partners, Energias de Portugal (EDP), were not discouraged by the failure and signed an agreement with US-based Principle Power to develop floating offshore wind turbines.
Principle Power WindFloat
An initial agreement between Principle Power and EDP was made in 2009 to develop floating offshore wind turbines at the Aguçadoura site. A consortium called WindPlus was set up to develop the project; it included Principle Power, EDP, and Vestas.
In November 2011, the WindFloat 1 semi-submersible platform with a 2 MW Vestas wind turbine was installed around 5 km off the coast of Aguçaduora, following a 350 km tow from Setúbal. The turbine was 54 m high, and with the foundation weighed 1200 t, and can be installed in water depths of over 50 m. The structure was not permanently installed, but held in place by drag-embedment anchors similar to those used to moor floating oil platforms.
After five years, the testing programme was completed, the device having survived 17 m high waves and wind speeds of up to 111 km/h, and generated 17 GWh of renewable electricity for the Portuguese grid. The design of the platform was approved by certification body Bureau Veritas in April 2016.
The WindPlus consortium has since developed the 25 MW WindFloat Atlantic project, about 20 km off the coast of Viana do Castelo, some 30 km North of Aguçadoura. This has three WindFloat foundations each with a Vestas V164-8.4MW turbine, which began supplying power in January 2020.
CorPower Ocean HiWave-5
In late 2020, CorPower Ocean secured a 10-year licence from the Portuguese Directorate-General for Natural Resources that would allow them to test an array of CorPower wave energy converters (WECs) at the Aguçadoura site within the HiWave-5 project. A new subsea electricity cable was installed in Autumn 2022. The first CorPower C4 WEC was installed in September 2023, and started exporting to the Portuguese electricity grid in October 2023. It is planned to install a further three C5 WECs as a demonstration of a CorPack wave cluster.
References
Wave power
Floating wind turbines
Wave farms in Portugal | Aguçadoura test site | [
"Engineering"
] | 951 | [
"Floating wind turbines",
"Offshore engineering"
] |
76,246,243 | https://en.wikipedia.org/wiki/Influencer | An influencer (also known as a social media influencer or online influencer) is an individual who builds a grassroots online presence through engaging content like photos, videos, and updates, using direct audience interaction to establish authenticity, expertise, and appeal, and standing apart from traditional celebrities by growing their platform through social media rather than pre-existing fame. The modern referent of the term is commonly a paid role in which a business entity pays for the social media influence-for-hire activity to promote its products and services, known as influencer marketing. Types of influencers include fashion influencer, travel influencer and virtual influencer, and involve content creators and streamers.
Some influencers are associated with specific social media apps, such as TikTok influencers, Instagram influencers, or Pinterest influencers, and many are also considered internet celebrities. Instagram is the social media platform on which businesses spend the most advertising dollars towards marketing with influencers. However, influencers can exert their influence on any type of social media network. Thus, Instagram's leadership in the influencer marketing space has been under assault by platforms such as LinkedIn, TikTok, Snapchat and Roblox.
Definition
Influencers may be celebrities of any type with large social media followings, including people who are mainly internet celebrities.
There is a lack of consensus about what an influencer is. One writer defines them as "a range of third parties who exercise influence over the organization and its potential customers." Another defines an influencer as a "third party who significantly shapes the customer's purchasing decision but may never be accountable for it." According to another, influencers are "well-connected, create an impact, have active minds, and are trendsetters". And just because an individual has many followers does not necessarily mean they have much influence over those individuals, only that they have many followers. A 1% increase in influencer marketing spending can lead to a 0.5% increase in audience engagement.
Market-research techniques can be used to identify influencers, using predefined criteria to determine the extent and type of influence. "Activists" get involved with organizations such as their communities, political movements, and charities. "Connected influencers" have large social networks. "Authoritative influencers" are trusted by others. "Active minds" have a diverse range of interests. "Trendsetters" are the early adopters (or leavers) of markets. According to Malcolm Gladwell, "The success of any kind of social epidemic is heavily dependent on the involvement of people with a particular and rare set of social gifts". He has identified types of influencers who are responsible for the "generation, communication and adoption" of messages; connectors network with a variety of people, have a wide reach, and are essential to word-of-mouth communication; mavens use information, share it with others, and are insightful about trends.
History
Origins
The origins of online influencing can be traced back to the emergence of digital blogs and platforms in the early 2000s. Nevertheless, recent studies demonstrate that Instagram, an application with more than one billion users, harbors the majority of the influencer demographic. These individuals are sometimes referred to as "Instagrammers" or "Instafamous." A crucial aspect of influencing lies in their association with sponsors. The 2015 debut of Vamp, a company that links influencers with sponsorships, transformed the landscape of influencing.
There is much debate about whether social media influencers can be considered celebrities, as their path to fame is often less traditional and arguably easier. Melody Nouri addresses the differences between the two types in her article "The Power of Influence: Traditional Celebrities vs Social Media Influencer". Nouri asserts that social media platforms have a greater negative impact on young, impressionable audiences than traditional media like magazines, billboards, advertisements, and tabloids featuring celebrities. Online, it is thought to be simpler to manipulate an image and lifestyle in such a way that viewers are more susceptible to believing it.
2000s
The early 2000s saw corporate endeavors to leverage the internet for influence, with some companies participating in forums to promote their products or providing bloggers with complimentary products in return for favorable reviews. Some of these practices were viewed as unethical for taking advantage of the labor of young people without remuneration. In 2004, The Blogstar Network was established by Ted Murphy of MindComet. Bloggers were encouraged to join an email list and receive paid offers from corporations in exchange for creating specific posts; for instance, bloggers were compensated for writing reviews of fast-food meals on their blogs. Blogstar is widely regarded as the first influencer marketing network.
Murphy succeeded Blogstar with PayPerPost, which was introduced in 2006. This platform compensated significant posters on prominent forums and social media platforms for every post made about a corporate product. Payment rates were determined by the influencer's status. Though very popular, PayPerPost received a great deal of criticism because its influencers were not required to disclose their involvement, as traditional journalism would have required. With the success of PayPerPost, the public became aware of a drive by corporate interests to influence what some people were posting to these sites. The platform also incentivized other firms to establish comparable programs.
Despite concerns, marketing networks with influencers continued to grow throughout the 2000s and into the 2010s. The influencer marketing industry was expected to be worth up to $15 billion by 2022, up from as much as $8 billion in 2019, according to estimates from Business Insider Intelligence based on Mediakix data. Evan Asano, the former CEO and founder of the agency Mediakix, previously told Business Insider that he believed influencer marketing on Instagram would continue to grow despite likes being hidden.
2010s
By the 2010s, the term "influencer" described digital content creators with a large following, distinctive brand persona, and a patterned relationship with commercial sponsors. Consumers often mistakenly view celebrities as reliable, leading to trust and confidence in the products being promoted.
A 2001 study from Rutgers University discovered that individuals were using "internet forums as influential sources of consumer information." The study proposed that consumers preferred internet forums and social media over conventional advertising and print sources when making purchasing decisions. An influencer's personality strongly impacts their audience's purchasing decisions, with those who engage with their audience being more persuasive in encouraging product purchases. Companies today place great importance on feedback and comments received through social media platforms, as consumers trust other consumers. Reviews are often relied on to persuade consumers to make a purchase, highlighting the impact a negative review can have on a business's revenue.
A typical method of marketing between the influencer and the audience is "B2C marketing". B2C (business-to-consumer) marketing entails the strategies a business undertakes to promote itself and its services directly to its target audiences, typically through advertising and content created by the influencer themselves. The intention is that followers who relate to or look up to certain influencers will be more inclined to purchase an item because their favorite "Internet celebrity" recommended it. Social media influencers typically promote a lifestyle of beauty and luxury fashion and foster consumer–brand relationships, while selling their own lines of merchandise.
David Rowles explains the methods online influencers employ to increase their audience and brand visibility: digital branding encompasses all online experiences and necessitates the provision of value.
Self-branding
Self-branding, also known as personal branding, describes the development of a public image for commercial gain or social or cultural capital. The rise of social media has been exploited by individuals seeking personal fame and product sales. Platforms such as Instagram, Twitch, Snapchat, VSCO, YouTube, and TikTok, are the most common social media outlets on which online influencers attempt to build a following. Fame can be attained through different avenues and media forms, including art, humor, modeling, and podcasts. Marketing experts have concluded that anyone can build websites easily without any technical knowledge or complex coding languages. They can upload text, pictures, and videos instantly from personal computers or phones. With technological barriers diminishing, the web has become the ideal platform for personal branding.
Student athletes
Following the National Collegiate Athletic Association v. Alston ruling by the Supreme Court of the United States in 2021, pre-college and college athletes became eligible for student athlete compensation for use of their personality rights without loss of athletic eligibility and education-related benefits, which broadened the influencer landscape to people who might not yet be celebrities. Subsequently, it became common for amateur athletes to use the name, image and likeness of their personal brand as influencers for hire outside of the field of play.
Income
In 2023 in the United States, 27 million people were paid content creators. Of those, 12 million did content creation as their full-time profession. 8 million did it as part-time work, and 7 million did it as a hobby. Influencers can make money in various ways, but most of them earn money from endorsements or sponsorships. Social media influencers can use their fame to promote products or experiences to their followers, as a method of providing credibility to products.
Influencers can also expand their source of revenue by creating their own products or merchandise to sell. By doing this, and by using their platform to promote their products to an established audience, influencers can earn money by developing their own reputable brands. Bloggers can feature sponsored posts in social media to make profits. For instance, fashion blogger Chiara Ferragni started as an online blogger, and then gained millions of followers on Instagram. She later created her brand, the Chiara Ferragni Collection. Like many other Instagram celebrities, Ferragni started by charging money per post for promoting brands. She earns revenue from promotional Instagram posts and the sale of her own products.
In 2020, a report by venture-capital firm SignalFire stated that the economy spawned by internet creators was the "fastest-growing type of small business".
Marketing
Regulations
Despite the recent emergence of influencer culture, influencer marketing and advertising remain largely unregulated by existing legislation. This became a prevalent concern when users on social media platforms found it difficult to distinguish advertisements and sponsorships from personal posts. This was evident in the mismanagement of Fyre Festival, where numerous Instagram influencers were sanctioned for their lack of transparency. The ensuing backlash from the public, who felt the promotion of the event deliberately misled and confused target audiences, led numerous advertising bodies to introduce strict regulations and guidelines around influencer marketing. These include the AANA (Australian Association of National Advertisers), which states that influencer advertising must be "clearly distinguishable".
In August 2024, the Federal Trade Commission voted unanimously to ban marketers from using fake user reviews created by generative artificial intelligence chatbots (such as ChatGPT) and influencers paying for bots to increase follower counts.
Payments
Most influencers are paid before the start of a marketing campaign, and others are paid after it ends. No consensus exists about how much an influencer should be paid. Compensation may vary by how many people an influencer can reach, the extent to which they will endorse the product (a deliverable), and how their past endorsements have performed.
Top-tier influencers and celebrities may receive a six- or seven-figure fee for a single social-media post. In addition to (or in lieu of) a fee, payment may include free products or services. While top-tier influencers generate attention, only 4% of all influencers make more than $100,000 a year. For influencers with smaller followings, free products or services may be the only form of compensation. Advertisers are increasingly inclined to see influencers with a small but dedicated follower base as a more efficient use of marketing dollars.
Forrester Research analyst Michael Speyer notes that for small and medium-sized businesses, "IT sales are influenced by several parties, including peers, consultants, bloggers, and technology resellers." According to Speyer, "Vendors need to identify and characterize influencers inside their market. This requires a comprehensive influencer identification program and the establishment of criteria for ranking influencer impact on the decision process."
Categorization
Influencers are categorized by the number of followers they have on social media. They range from celebrity endorsers with large followings to niche content creators with a loyal following on social-media platforms such as YouTube, Instagram, Facebook, and Twitter. Their followers range in number from around 1,000 to hundreds of millions.
Nano-influencers – influencers with a following ranging from 1,000 to 10,000.
Micro-influencers – influencers with followers in the range of 10,000 to 100,000.
Macro-influencers – influencers with followers in the range of 100,000 to 500,000.
Mega/Celeb-influencers – influencers with more than 500,000 followers.
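The tiers above can be expressed as a simple lookup. This is an illustrative sketch: the function name, the handling of counts below 1,000, and the choice to treat each boundary value as the start of the higher tier are assumptions, not part of the source.

```python
def influencer_tier(followers: int) -> str:
    """Classify an account by follower count, using the tiers listed above.

    Boundary values (10,000, 100,000, 500,000) are assigned to the
    higher tier, an arbitrary choice since the source ranges overlap.
    """
    if followers >= 500_000:
        return "mega/celeb"
    if followers >= 100_000:
        return "macro"
    if followers >= 10_000:
        return "micro"
    if followers >= 1_000:
        return "nano"
    return "below nano tier"
```

For example, an account with 42,000 followers falls in the "micro" tier under this scheme.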
Businesses pursue people who aim to lessen their consumption of advertisements, and are willing to pay influencers more to reach them. Targeting influencers is seen as increasing marketing's reach, counteracting a growing tendency by prospective customers to ignore marketing.
Marketing researchers Kapitan and Silvera find that influencer selection extends to product personality, making product–benefit matching key: a shampoo campaign should use an influencer with good hair, and a flashy product that uses bold colors to convey its brand will clash with an influencer who is not flashy. Matching an influencer with the product's purpose and mood is important.
See also
Celebrity culture
Online streamer
Opinion leadership
Social media marketing
References
Celebrity
Cultural trends
Social influence | Influencer | [
"Technology"
] | 2,903 | [
"Computing and society",
"Social media"
] |
76,250,051 | https://en.wikipedia.org/wiki/Applied%20Mathematical%20Modelling | Applied Mathematical Modelling is a scientific journal published by Elsevier, focusing on applied mathematics with an emphasis on mathematical modeling in engineering, environmental processes, manufacturing, and industrial systems. The journal was established as a quarterly journal in 1976 by IPC Science and Technology Press with Christopher J. Rawlins as managing editor. The journal is currently published by Elsevier on a monthly basis, and is edited by Johann Sienz (Swansea University).
Abstracting and indexing
The journal is indexed and abstracted in the following bibliographic databases:
References
Elsevier academic journals
Applied mathematics journals
Monthly journals | Applied Mathematical Modelling | [
"Mathematics"
] | 121 | [
"Applied mathematics",
"Applied mathematics journals"
] |
76,250,400 | https://en.wikipedia.org/wiki/Waste%20Siege | Waste Siege: The Life of Infrastructure in Palestine is a nonfiction book by Sophia Stamatopoulou-Robbins. The book is an ethnography of waste management in the West Bank under the constraints of Israeli occupation, arguing that the Oslo Accords led to the abnormal presence and flow of waste for Palestinians, which Stamatopoulou-Robbins refers to as "waste siege". It is based on a decade of ethnographic fieldwork that she conducted in the West Bank for her dissertation.
Waste Siege was published by Stanford University Press in 2019, and received various recognitions including the Albert Hourani Book Award and selection as a Choice Outstanding Academic Title. Reviewers broadly praised Stamatopoulou-Robbins' ethnographic research and conclusions; some challenged specific portions of her arguments.
Background
In 1995, Israel and the Palestine Liberation Organization signed the Oslo II Accord. Oslo II divided the Israeli-occupied West Bank into three areas with differing levels of control shared by Israel and the new Palestinian National Authority. This and the other Oslo Accords were meant to be temporary but remain in effect. As a result, Palestinian civilians in the occupied Palestinian territories, including the West Bank and Gaza Strip, are under Israeli governance and subject to various different legal systems.
Waste Siege was written by Sophia Stamatopoulou-Robbins, then an assistant professor of anthropology at Bard College. In her second year of graduate school, Stamatopoulou-Robbins took a course titled "Power and Hegemony" taught by Partha Chatterjee. The class focused on Foucault and Gramsci; Stamatopoulou-Robbins wrote a paper about the 2006 Palestinian legislative election, in which Palestinians elected Hamas, and its connection to infrastructure in Palestine. Her interest in Palestinian infrastructure was a response to its invocation by Western leftists as the reason for the popularity of Hamas, which she found overly simplistic in light of the involvement of Israel and international aid organizations as well as the complexity and unpredictability of Palestinian relationships to infrastructure.
Stamatopoulou-Robbins' dissertation, based on 10 years of ethnographic fieldwork in the West Bank, discussed Palestinians in the West Bank and their responses to the governance of the Palestinian Authority. She specifically focused on the Authority's waste management. She later decided that she wanted to reach a broader audience, including "people who don't think much about Palestine" as well as people outside academia, and developed her dissertation into a book—her first—with added content bringing it to 344 pages' length. The book was published in 2019 by Stanford University Press.
Synopsis
Waste Siege focuses on waste management in the West Bank and the ways it is shaped by the constraints of Israeli occupation. As an ethnography, it discusses the lives of Palestinians in the West Bank and the ways those lives are shaped by the presence of abnormal quantities and varieties of waste due to Israeli colonialism. Stamatopoulou-Robbins refers to these conditions as "waste siege", which she claims began in the 2000s after the Oslo Accords and displaced direct violence by Israel. She argues that waste in the West Bank is "matter with no place to go," drawing on discard studies and Latourian materialism as well as more traditional anthropology. The book also has roots in science and technology studies.
Throughout the text, Stamatopoulou-Robbins describes Palestinian encounters with the Israeli state and the subordinate but state-like Palestinian Authority, and the ways these interactions shape the flow of waste for Palestinians alongside their individual and collective improvisations in response to the presence of waste. She conducts ethnographic work with both bureaucrats and typical governed Palestinians.
Waste Siege also includes sensory descriptions of the ways waste in the West Bank looks, smells, and feels, emphasizing the material qualities of waste.
"Compression"
The first chapter, titled "Compression", focuses on landfills in combination and contrast with other waste management methods in the West Bank. Stamatopoulou-Robbins discusses the temporal implications of landfills and their relevance to a Palestinian project of nation-building as an example of collaborative national work. She also characterizes the lives and views of the Palestinian professionals managing these landfills. These professionals know that landfills only function for a finite period of time, but are unable to access more modern waste management technology; they are educated about this technology but unable to bring it to the West Bank.
Palestinian waste professionals must also cooperate with Israel and international aid organizations to get funding for landfills. These foreign investors have the power to shape Palestinian sanitation projects even if these projects become misaligned with what Palestinians themselves believe they need.
The first chapter concludes with a specific example of the conflict that can occur between the national and international actors involved in Palestinian landfill management. The managers of Zahrat al-Finjan, a landfill run by the Palestinian Authority, are forced to choose whether to allow Israeli settlers to dump trash in the landfill or risk those settlers and others illegally dumping trash elsewhere.
"Inundated"
The second chapter, titled "Inundated", focuses on used goods smuggled from Israel into the West Bank to be sold, as well as the planned obsolescence of new goods in the West Bank market. Stamatopoulou-Robbins argues based on her fieldwork that used goods from Israel, or (from rubbish), are valuable not because of sustainability or poverty but because they are of higher quality than the new goods available to Palestinians. The second chapter also describes the Palestinian pursuit of Israeli goods as linked to the Palestinian desire for sovereignty.
A theme in this discussion is the complication of travel by Israeli checkpoints, the Israeli identity card system, and the vehicle registration systems of Israel and of the Palestinian Authority.
"Accumulation"
The third chapter, titled "Accumulation", discusses the accumulation of waste in the village Shuqba. Palestinian and Israeli waste is frequently dumped in the village, including potentially dangerous medical equipment as well as sewage and animal carcasses. Residents are gradually poisoned over time, but the variety of sources for this waste from both sides of the Green Line makes it difficult to know who to blame. In some cases villagers blamed other villagers, even when the hostile force of Israel was involved.
"Gifted"
The fourth chapter, titled "Gifted", focuses on the redistribution of stale and unwanted bread, which is hung on unrelated structures, as an example of the collective creation of infrastructure. Many Palestinians redistribute bread based on a religious prohibition on throwing it away or letting it touch the ground; Stamatopoulou-Robbins argues that bread is also sacred to Palestinians because it represents, among other things, interconnectedness and a desire to support one another. She also analyzes the role of shame in Palestinian methods of bread redistribution; restaurant owners and bakers seek ways to get bread to people who need it without making them feel ashamed. She states that these processes of redistributing bread are "infrastructural in that they mediate urban public life, creating networks that facilitate the flow of people and ideas, allowing for their exchange over space".
"Leakage"
The fifth chapter, titled "Leakage", discusses sewage management in the West Bank. It explores the ways that the sharing of West Bank infrastructure and environment between Israeli settlers and Palestinians is complex and unequal. The concept of a "shared environment" in this space is mobilized to present Palestinians as polluters who are simultaneously incompetent and malicious, enabling Israel to justify continued "custodianship" over the West Bank.
Stamatopoulou-Robbins uses George Orwell's term doublethink to describe a phenomenon in which Palestinian waste managers attempt to manage wastewater in order to claim effective national environmental stewardship, even though they are aware that they do not have the political power to protect the shared environment of the West Bank. She argues that Israel is in fact responsible for the wastewater, but both Israel and Palestine have a political interest in asserting that Palestinian authorities are responsible for the wastewater; Israel uses Palestinian failures to maintain its image as a steward of the environment while Palestine uses waste infrastructure to assert Palestinian environmental stewardship and political power.
Stamatopoulou-Robbins also discusses the specific example of the city Nablus. Israel pipes sewage from Nablus across the Green Line and processes it in Israel, subsequently using it as a free source of water for agricultural irrigation. Israel uses this sewage flow as an example of Palestinian incompetence while repeatedly preventing the Palestinian Authority from building its own sewage infrastructure.
Conclusion
The conclusion of the book discusses the Dead Sea, with Stamatopoulou-Robbins narrating a visit to the body of water which is increasingly contaminated with sewage. It subsequently broadens in scope to discuss the implications of waste siege at a planetary scale, describing it as "a way to name the kind of living we do in the constantly changing ruins we have made".
Reception
Waste Siege won the Albert Hourani Book Award from the Middle East Studies Association in 2020, and was selected as a Choice Outstanding Academic Title. In 2021 it won the Book Award of the Middle East Section of the American Anthropological Association (AAA), shared the Julian Steward Award from the Anthropology & Environment Section of the AAA, and was jointly awarded the Sharon Stephens Book Prize.
Reviews
A 2020 review in Arab Studies Quarterly found Waste Siege "an important work" and "a welcome addition to the sparse literature about the environment, waste, and infrastructure in Palestine and the Middle East more broadly". Reviewer Basma Fahoum praised Stamatopoulou-Robbins' level of knowledge about daily life for West Bank Palestinians. However, she criticized flawed translation and transliteration from Arabic to English, and argued that Palestinian redistribution of bread is not unique but a characteristic shared with many Arab and Muslim countries as well as areas of Israel populated by observant Jews.
In 2021, a positive review in PoLAR: Political and Legal Anthropology Review described the book as an effective criticism of "the putative universality of environmental threats, mobility, fixity, political violence, and state governance". In cultural geographies, reviewers analyzed Waste Siege alongside Electrical Palestine, describing the two texts as "unique in their insistence that scholars researching Palestine begin their work from the material relations that undergird everyday life". They praised Waste Siege as a well-contextualized and object-oriented work, and recommended reading the two together. A review in Arab Studies Journal found Waste Siege "an innovative and methodologically rich text" that effectively links discussion of the environment with analysis of settler colonialism. Reviewer Moné Makkawi critiqued the heavy use of jargon and unclear rhetorical choices, including the interchangeable use of the terms West Bank and Palestine with no analysis of the Gaza Strip, as well as lack of analysis of "the ways in which the violence of both military occupation and waste siege function as extensions of one another".
In HAU: Journal of Ethnographic Theory, which published multiple reviews of Waste Siege in its Spring 2021 issue, Amahl Bishara praised Stamatopoulou-Robbins' ethnographic work and her "creative and caring approach to ethnography". Like Makkawi, Bishara criticized the assertion that waste siege is replacing military violence, arguing that the latter is ongoing. Rosalind Fredericks described Waste Siege as "impressive" and "beautifully written", and emphasized its importance for the field of discard studies; she critiqued the end of the book for globalizing the concept of waste siege, arguing that this "can dilute the more powerful message that is specific to this conjuncture, in this place". Kali Rubaii linked concepts from the book to military creep and noted its "textured" implications, including "that assigning responsibility or causality for events that appear to belong to civilian life is not a straightforward act" in an environment like Palestine. Munira Khayyat described Waste Siege as "a startling new angle" on life in Palestine beyond the spectacle of war, but pushed back on Stamatopoulou-Robbins' assertion that Palestinians do not think of their improvisation and survival as resistance. Stamatopoulou-Robbins responded to this collection of reviews, explaining that she globalized her argument to counteract the way "Palestine tends not to be considered useful for thinking about much of the rest of the planet", and noting that her assertion that waste siege replaced direct Israeli violence was based on the views of Palestinians in the West Bank as expressed during her fieldwork.
In 2022, a positive review in Anthropological Quarterly found that "Waste Siege sheds light on how people negotiate being overdetermined by their colonial conditions, including through their deployment and rejection of the term (and the terms of their) environment". Also in 2022, a review in the Journal of the Royal Anthropological Institute favorably discussed Waste Siege alongside A Mass Conspiracy to Feed People and Pollution Is Colonialism; the reviewer found the book an "excellent ethnography" and a useful addition to its field.
References
Citations
Works cited
2019 non-fiction books
Anthropology books
Books about Israel
Books about Palestine (region)
Books about Palestinians
Ethnographic literature
Science and technology studies works
Stanford University Press books
Waste management in the Palestinian Territories | Waste Siege | [
"Technology"
] | 2,694 | [
"Science and technology studies works",
"Science and technology studies"
] |
76,258,302 | https://en.wikipedia.org/wiki/Tomette | Tomettes are a type of terracotta tile that is commonly used as flooring, particularly in southern regions of France including Provence, Dauphiné and the island of Corsica, but also elsewhere including Paris. They are typically hexagonal (or sometimes octagonal) in shape, which allows them to tessellate into a uniform surface while minimizing the need for a seal substance.
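The claim that the hexagonal shape minimizes sealing material can be checked with a quick computation: among tilings by equal-area regular polygons, the hexagonal grid needs the least joint length per unit of floor area, since each edge is shared by two tiles. This calculation is an illustration added here, not part of the source.

```python
import math

def seal_length_per_area(n_sides: int, area: float = 1.0) -> float:
    """Joint length per unit floor area for a tiling by regular n-gons.

    Only n = 3, 4, 6 tile the plane on their own. A regular n-gon of
    area A has side s with A = (n/4) * s**2 / tan(pi/n); each edge is
    shared by two tiles, so seal per area = perimeter / (2 * A).
    """
    side = math.sqrt(4 * area * math.tan(math.pi / n_sides) / n_sides)
    perimeter = n_sides * side
    return perimeter / (2 * area)

# Squares vs hexagons of the same unit area:
square_seal = seal_length_per_area(4)   # exactly 2.0
hexagon_seal = seal_length_per_area(6)  # about 1.861, roughly 7% less
```

Under this model, hexagonal tiles need about 7% less sealing material per square metre of floor than square tiles of the same area.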
History
Terracotta tiles were historically valued for their ability to retain heat from a hearth, and for keeping rooms cool in the summer. The tomette was developed in response to an economic crisis in 1829, which saw a fall in purchasing power as a result of industrialisation. The shape made it possible to maximize the use of clay by minimizing losses when cutting the tiles, and reduced the amount of sealing material required between them. Production was centred in Apt and Salernes, where the ferruginous clay soil was ideal for the production of the tiles. By the 1850s production had greatly expanded and the tiles were being supplied to merchants in Toulon, Marseille and Nice, but were also being exported abroad to parts of Africa, Italy and the United States. Tomettes were later used in the reconstruction of homes after the Second World War but fell out of fashion after the mid-20th century, although they have recently seen a revival as their properties adapt well to modern underfloor heating.
See also
Azulejos, painted ceramic tiles popular in Portugal and Spain
References
Terracotta
Floors
Buildings and structures in Provence
Buildings and structures in Corsica | Tomette | [
"Engineering"
] | 307 | [
"Structural engineering",
"Floors"
] |
76,258,405 | https://en.wikipedia.org/wiki/Weather%20friar | The Weather friar (Catalan: Frare del temps) is an absorption hygrometer created by Agapito Borràs Pedemonte in 1894.
History
Origin
Borràs, a native of Calella, constructed the Weather friar in 1894, when he was barely 18 years old, after studying several books on recreational physics; he gave this and other gadgets he made to his friends. The Weather friar caught the attention of some businessmen from Arenys de Mar, who convinced Borràs to market it at an initial price of 80 pesetas. This was the seed of one of the most famous toy companies in Spain: Juguetes Borrás. That company would eventually merge with Educa Sallent in 2001, creating the new brand Educa Borrás, although the Weather friar was left out of the merger; today the company in charge of its manufacture and distribution is Tot Ideas S.L., founded by Borràs around 1906 in Mataró, where he had moved to take advantage of the commercial boom brought by the Barcelona–Mataró railway line, the first inaugurated in peninsular Spain (the first on Spanish territory was in Cuba). The company is currently directed by Enric Borrás, great-grandson of Agapito Borràs, and sells about 40,000 Weather friars around the world (mainly in Spain, France, Italy, Portugal and even Malta) at a price of approximately 24 euros.
Precedents
Although the Weather friar has sometimes been wrongly cited as the oldest hygrometer-meteorologist in the world, a similar hygrometer, also in the shape of a Capuchin friar, dates from the late 18th century and is held in the Museum of Arts and Crafts in Paris. It was built by Charles Alexandre Clair and belonged to the physics cabinet of Jacques Charles, who bequeathed his collection of instruments to the State on January 15, 1792; the National Assembly prepared an inventory that records the existence of this object, which was very popular circa 1842 according to the first volume of Maison rustique du XIXe siècle, Encyclopédie d'Agriculture pratique, published under the direction of the scientific popularizer Charles-François Bailly de Merlieux. That book also mentions a hygrometer in which the figure of a Turk moves a saber to indicate the weather, similar to the friar moving the pointer and the hood of his habit. Likewise, the Palaiseau Polytechnic School holds a hair hygrometer from 1809 in which a female figure moves her right arm to indicate on a scale the amount of water vapor in the air. The use of human hair to measure humidity dates to 1775, when Horace-Bénédict de Saussure used it to develop what is considered the first precise hygrometer (perfected in 1824 by Jacques Babinet with a microscopic reading system); Santorio Santorio had already described a thread hygrometer in 1623. An illustration of two hygrometers represented respectively by the figures of a monk and a cat stands out in Arthur Mangin's book L'air et le monde aérien (1865), both described as popular hygrometers. There was a clear tendency in the 19th century to keep hygrometers with decorative figures in homes, especially in rural areas, as an aid to agricultural work.
Description
The hygrometer, made of cardboard, shows a figure of a friar of the Capuchin Order with an open book in his right hand; the left arm and the hood of the habit are mobile thanks to balanced axes. With this arm he carries a bar with which he indicates the weather approximately 24 hours in advance on signs arranged from top to bottom on a column, while he moves his hood to reveal his head when it is hot or to cover it when rain threatens. The weather conditions indicated include "dry", "rough", "wind", "good", "unsafe", "windy", "wet" and "rain", and the friar's functions are illustrated in verses that accompany the contraption. The original model, a copy of which is preserved in the Capuchin Museum of Sarriá, presents the friar standing, although he is currently shown seated, accompanied by a globe at his feet and an hourglass on a table, recalling Ramon Llull in this pose; the first version resembles the drawing of Brother Ramón de los Pirineos made by Celestí Sadurní Deop in 1876 for the cover of the Hermit Calendar. In addition to having the signs available in several languages (Spanish, Catalan, Galician, Basque, Portuguese, French, Italian, German and English), around forty variants of the Weather friar have been developed over time (all of them preserved at the Tot Ideas headquarters), such as a limited edition for the 120th anniversary of its creation in which the Montserrat massif appears as a background landscape (backgrounds were also created with the Expiatory Temple of the Sagrada Família and the Cathedral of Santiago de Compostela), and several minimalist, iconoclastic versions (with a plain frame or one decorated with panots) in white, yellow, red and black, known as Fraile Colors and marketed on the occasion of its 128th birthday. Other models replaced the friar, at the request of consumers, with various figures: nuns (a limited promotion), astronomers, a warrior from the Middle Ages, Christopher Columbus, Felix the Cat and even trademark friars, including the one that illustrates the bottles of Kina San Clemente, a drink very popular in the 1960s as a restorative for children. The column where the weather signs appear has also undergone transformation; it has been identified by some as the terminus cross located in the Plaza de Santa Ana in Mataró, a place the Capuchins used to frequent during their stay in the city between 1610 and 1835.
Operation
The mechanism that governs the movement of both the arm and the hood is a component of natural origin sensitive to humidity: grease-free, very taut human hair glued to an elastic band (preferably women's hair, especially that of young, blonde Slavic women, given its high sensitivity), although some models use catgut or horsehair. It is the same system used in weather houses. The contractions and expansions the hair undergoes according to the water vapor present in the air drive the movement of the arm and the hood; because humidity variations are usually accompanied by a change in atmospheric pressure, the weather indicated by the friar can be checked against the pressure shown on a barometer, a device with which the weather friar is often confused. As it is an absorption hygrometer, adjustments must be made, since the device cannot recognize on its own whether the degree of humidity is high or low. This adjustment, which can be repeated as often as necessary and is best done with the help of a conventional or digital hygrometer, must be carried out on a day without excessive humidity or dryness, after the friar has been exposed for several hours in its final location. Using the hands located on the back, the arm is rotated until the bar points at the "good" sign, and the hood is raised or lowered until it sits halfway along its travel (leaving the friar's head half covered), never making a full 360° turn and ensuring that neither the arm nor the hood catches or rubs against anything while turning. For correct operation, the friar must be placed in a dry, well-ventilated spot in order to avoid saturation of the air, which could negatively affect the device and force it to indicate the wrong weather; being made of cardboard, it should be kept indoors, as otherwise its useful life will be shortened.
Regarding maintenance, because the hair withers and dries over time, it must be replaced approximately once a week, or whenever the device begins to indicate weather that does not match the real conditions. To avoid this inconvenience, electronic weather friars have been created which have an LCD screen; pressure, humidity and temperature sensors; and an internet connection to synchronize the friar's clock and obtain outside weather data, while maintaining the traditional look.
Legacy
Today, many farmers use the weather friar to check whether its predictions match those published in the Hermit's Calendar, and it remains one of the most widely used hygrometers in rural areas of Spain, France, Italy, England and even Malta.
References
External links
Product website
The product on the company's website
Meteorological instrumentation and equipment
1894 in Spain | Weather friar | [
"Technology",
"Engineering"
] | 1,797 | [
"Meteorological instrumentation and equipment",
"Measuring instruments"
] |
76,258,934 | https://en.wikipedia.org/wiki/Association%20of%20Public%20Analysts | The Association of Public Analysts (APA) is a UK professional association for public analysts. It was founded in 1954, although an earlier body, the Society of Public Analysts, was founded in 1874, became the Society for Analytical Chemistry, and was one of the bodies which merged to form the Royal Society of Chemistry in 1980.
Through its activities it seeks to serve the public by using analytical chemistry to address a wide range of issues, including not only the adulteration and contamination of food, drugs and other commercial products but also aiding in their accurate description. The APA president is Jane White.
It publishes the Journal of the Association of Public Analysts (JAPA), which appeared in print from 1963 to 1997, volumes 1-33 (), and was then relaunched as an online journal from volume 34, 2006 ().
References
External links
Environmental chemistry
Food scientists
Public health in the United Kingdom
Food safety | Association of Public Analysts | [
"Chemistry",
"Environmental_science"
] | 184 | [
"Environmental chemistry",
"nan"
] |
76,259,001 | https://en.wikipedia.org/wiki/European%20Photovoltaic%20Solar%20Energy%20Conference%20and%20Exhibition | The European Photovoltaic Solar Energy Conference and Exhibition (EU PVSEC) is an international scientific conference and industry exhibition in the solar energy industry. The event covers developments in different aspects of photovoltaics, including science, technology, systems, finance, policies, and markets. The conference topics span the photovoltaics value chain, from foundational aspects to policy considerations.
EU PVSEC is one of the three hosts of the quadrennial World Conference on Photovoltaic Energy Conversion (WCPEC), along with the IEEE's PVSC on the USA side and PVSEC on the Asia-Pacific side.
Conference Topics
The technical programme of the conference is coordinated by the European Commission's Joint Research Centre (JRC) and is structured on the following 5 topics:
Silicon Materials and Cells
Thin Films and New Concepts
Photovoltaic Modules and BoS Components
PV Systems Engineering, Integrated/Applied PV
PV in the Energy Transition
Prizes and Awards
The list of the prizes and awards that are delivered during the EU PVSEC:
Becquerel Prize
Student Awards
Poster Awards
Becquerel Prize
The Alexandre Edmond Becquerel Prize is granted by the European Commission as a highlight of the Opening Ceremony of the EU PVSEC, in the purpose of honouring outstanding and major contributions in photovoltaic solar electricity. The prize is named after Edmond Becquerel, French physicist who created the world's first photovoltaic cell. It has been awarded since 1989.
Notable recipients of the prize over the years include Roger Van Overstraeten (1989), Werner H. Bloss (1991), Antonio Luque (1992), Adolf Goetzberger (1997), Joachim Luther (2005), Arvind Victor Shah (2007), Mechtild Rothe (2008), Andres Cuevas (2015), Henry Snaith (2020), and (2024), among others.
See also
Solar power in the European Union
European Commission
WCPEC
PVSC
References
Solar_energy
Photovoltaics
Conferences
Exhibitions
Trade_fairs
World's fairs in Europe
Science and technology in Europe
Energy
Solar power | European Photovoltaic Solar Energy Conference and Exhibition | [
"Physics"
] | 448 | [
"Energy (physics)",
"Energy",
"Physical quantities"
] |
70,455,157 | https://en.wikipedia.org/wiki/EIA%201956%20resolution%20chart | The EIA 1956 Resolution Chart (until 1975 called RETMA Resolution Chart 1956) is a test card originally designed in 1956 to be used with black and white analogue TV systems, based on the previous (and very similar) RMA 1946 Resolution Chart. It consisted of a printed chart filmed by a TV camera or monoscope to be displayed on a TV screen, and was also available as individual rolls of test film for testing broadcasting equipment. Inspecting the chart allowed technicians to check for defects like ringing, geometric distortions, raster scan non-linearity, cathode-ray tube non-uniformity and lack of image resolution. If needed, a technician could then perform the necessary hardware adjustments.
Today, this chart continues to be used to measure image resolution of modern cameras and lenses and also in scientific research.
Features and operation
The chart is composed of several features, each designed for a specific test:
Large white circle: Allows for image geometry adjustments (image should be centered with the circles being perfectly round).
Vertical stripe boxes: A grating with a resolution of 200 Television Lines (TVL), a measurement of image resolution on analogue TV systems, allowing adjustment of horizontal linearity and geometry.
Horizontal stripe boxes: A grating, allowing adjustment of vertical linearity.
Grayscale steps: Evaluating gamma and transfer characteristics, they allow for contrast and brightness adjustments (at least 6 to 8 steps should be visible).
Concentric circles: Allow testing of cathode-ray beam sharpness and focus.
Resolution wedges: The gradually expanding lines near the center, labeled with periodic indications of the corresponding spatial frequency, allow checking of image resolution.
Border arrows: Allow for overscan adjustments.
Numbers: Going from 200 to 800, they correspond to TV Lines (TVL).
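To make the relationship between the numbered gratings and TV lines concrete, the following sketch generates a vertical-stripe grating for a given TVL count. It is an illustrative assumption, not part of the chart's specification: TVL counts alternating black and white lines across a horizontal distance equal to the picture height, so on a 480-line, 4:3 raster a 200 TVL grating has stripes roughly 480/200 pixels wide.

```python
import numpy as np

def vertical_grating(tvl, height=480, aspect=4 / 3):
    """Build a black/white vertical-stripe grating for a given TVL count.

    TVL is measured over a distance equal to the picture HEIGHT, so the
    stripe width in pixels is height / tvl (rounded, minimum 1 pixel).
    """
    width = int(height * aspect)
    x = np.arange(width)
    stripe_px = max(1, round(height / tvl))
    stripe = (x // stripe_px) % 2           # 0 = black stripe, 1 = white stripe
    return np.tile(stripe * 255, (height, 1)).astype(np.uint8)

img = vertical_grating(200)                 # the 200 TVL grating from the chart
print(img.shape)                            # (480, 640)
```

Higher TVL values produce narrower stripes, which is why the chart's wedges become harder to resolve as the numbers climb from 200 to 800.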
Used with early monochrome TV systems, this chart was useful in measuring image resolution, determined by inspection of the image as displayed on a CRT.
On such systems an important measure is the limiting horizontal resolution, affected by hardware and transmission quality (vertical resolution is fixed and determined by the video standard used, usually 525 lines or 625 lines).
Usage
RMA 1946 Resolution Chart
The RMA 1946 Resolution Chart was transmitted by NTS and NOS in the Netherlands, SRG SSR in Switzerland, VRT and RTBF in Belgium, RTP in Portugal, TVP in Poland, TVB in Hong Kong, Venevisión in Venezuela (525-lines variant; in conjunction with Indian-head test pattern), WISN-TV in Milwaukee, Wisconsin (525-lines variant) and on low-powered experimental transmissions by Philips Natuurkundig Laboratorium in Eindhoven (NL) and Istanbul University in Turkey.
EIA 1956 resolution chart
The EIA 1956 resolution chart was transmitted by NRK in Norway (in conjunction with the monochrome Pye Test Card G), CKCK-TV in Saskatchewan, Canada (525-lines variant), CERTV in the Dominican Republic (525-lines variant), KRMA-TV, KVVV-TV, WVIZ-TV, WHYY-TV and WUAB-TV in the United States (525-lines variant; WUAB-TV's version later partially overlaid on SMPTE color bars), RTBF and VRT in Belgium, NTS in the Netherlands, Magyar Televízió in Hungary, TVP in Poland, American Forces Network in West Germany (525-lines variant, sometimes also with the centre portion overlaid on top of Multiburst test pattern), Yugoslav Radio Television in the former SFR Yugoslavia, Rediffusion Television in British Hong Kong (where it replaced a modified version of the 1950s Marconi-designed Associated-Rediffusion "diamond" test card), ERTU in Egypt and ORTAS in Syria. It was also used by the pirate TV Noordzee station broadcasting to the Netherlands in the 1960s.
This chart, in conjunction with the RMA 1946 Resolution Chart and later widescreen patterns, is commonly used to test consumer and professional standalone, smartphone and tablet cameras for photo and videography and other imaging equipment like microscopes or CCTV cameras.
Variations
Some variations of the EIA resolution test chart exist. Two Japanese variants of the EIA 1956 resolution chart are called "ITE Resolution Chart /EIAJ Test Chart A" and "JEITA Test Chart II". A widescreen update of the EIA 1956 resolution chart was developed around the 1980s for the HD-MAC broadcasting standard, which was later modified by the Institute of Image Information and Television Engineers of Japan as its ITE Resolution Chart for High-definition Televisions.
Telefunken T 05
In continental Europe, another variation known as Telefunken Test Card T05 was used. It had five diagonal bars on the top left of the centre white circle and different resolution wedges reminiscent of the RMA 1946 Resolution Chart. It was also available as individual rolls of test film, particularly in the DACH countries. As a test card, it was used on ARD (from the 1950s up to the 1970s), Hessischer Rundfunk, Bayerischer Rundfunk, WDR, NWRV in northern Germany, Yugoslav Radio Television, Österreichischer Rundfunk in Austria, BRT in Belgium, Doordarshan in India, some commercial TV stations in Australia, TVE in Spain, Israel Broadcasting Authority and Israeli Educational Television in Israel, TRT in Turkey, and in early-1950s trial television tests by the KTH Royal Institute of Technology in Stockholm, Sweden.
The centre portion of the Telefunken T05 test card was depicted on the obverse side of the 50 Years of Television commemorative coin minted on 9 March 2005 in Austria.
In popular culture
The centre portion of the RMA 1946 Resolution Chart was featured on the cover of Die Kreuzen's 7" single of Pink Flag/Land of Treason, released in 1990.
See also
Television lines
Philips PM5540
Indian-head test pattern
Test card
References
Test cards
Broadcast engineering | EIA 1956 resolution chart | [
"Engineering"
] | 1,231 | [
"Broadcast engineering",
"Electronic engineering"
] |
70,455,164 | https://en.wikipedia.org/wiki/NGC%204033 | NGC 4033 is an elliptical galaxy located in the constellation Corvus. It was discovered on 31 December 1785, by William Herschel.
References
External links
Elliptical galaxies
4033
Corvus (constellation) | NGC 4033 | [
"Astronomy"
] | 43 | [
"Corvus (constellation)",
"Constellations"
] |
70,455,313 | https://en.wikipedia.org/wiki/Tapering%20%28firearms%29 | In firearms, tapering refers to components that narrow along their length in a roughly conical fashion, hence the name taper.
Barrels
In barrels, tapering centralises mass toward the operator. This not only reduces weight at the muzzle but also improves accuracy and sight acquisition and stabilises the balance and handling of the weapon. It also reflects the fact that chamber pressures are highest at the rear of the barrel, where the thickest walls are needed.
Rifling
In rifling, a tapered bore (also called a conical bore) is one in which the calibre narrows toward the muzzle to increase the velocity of the round.
Cartridges
In cartridges, tapering usually helps in chambering and unloading the weapon. It differs from shouldering or bottlenecking, which refers only to the abrupt step near the neck of the case that holds the projectile, whereas tapering refers to the gently angled sides of the cartridge case itself.
See also
Squeeze bore
Built-up gun
Fluting (firearms)
References
The Sportsman's Hand Book Containing Rules, Tables of Weights and Measures, Concise Instructions on Selecting, Caring for and Handling Guns and Fishing Tackle ... and Many Other Hints and Instructions Useful to Beginners By Horace Park · 1885. Page 25
Amateur Gunsmithing By Townsend Whelen · 1924. Page 62
English Patents of Inventions, Specifications 1866, 2137 - 2186 1867. Page 12
The American Rifle A Treatise, a Text Book, and a Book of Practical Instruction in the Use of the Rifle By Townsend Whelen · 1918. Page 134
Gun Research Declassified Visit to Mauser-Werke 2022. Page 52
The Ultimate in Rifle Accuracy By Glenn Newick · 1990
Gunsmithing at Home: Lock, Stock & Barrel - Page 79, John E. Traister · 1996
Gunsmithing Modern Firearms: A Gun Guy's Guide to Making Good Guns Even Better, Bryce M. Towsley · 2019
Gunsmithing - Page 176, Roy F. Dunlap · 1963
The Complete Guide to Gunsmithing Gun Care and Repair, Charles Edward Chapel 1962. Publisher: Skyhorse Publishing; Revised edition (14 May 2015), Paperback: 512 pages, ISBN 1632202697
A Look at the Rigidity of Benchrest Barrels
Why are gun barrels tapered?
How to taper a rifle barrel
Calculating Barrel Pressure and Projectile Velocity in Gun Systems | Close Focus Research - Ballistic Testing Services
Precision Rifle barrel talk, chamber, neck, throat, etc
Firearm components | Tapering (firearms) | [
"Technology"
] | 477 | [
"Firearm components",
"Components"
] |
70,456,258 | https://en.wikipedia.org/wiki/Fissuroma%20wallichiae | Fissuroma wallichiae is a species of fungus in the genus Fissuroma that was discovered in southern Thailand.
References
Fungi described in 2020
Fungus species
Pleosporales | Fissuroma wallichiae | [
"Biology"
] | 38 | [
"Fungi",
"Fungus species"
] |
70,456,329 | https://en.wikipedia.org/wiki/Fissuroma%20maculans | Fissuroma maculans is a species of fungus in the genus Fissuroma.
References
Fungi described in 2011
Fungus species
Pleosporales | Fissuroma maculans | [
"Biology"
] | 32 | [
"Fungi",
"Fungus species"
] |
70,458,009 | https://en.wikipedia.org/wiki/Frank%20A.%20Weinhold | Frank A. Weinhold is an American chemist, academic and author. He is an Emeritus Professor of Chemistry at the University of Wisconsin–Madison.
Weinhold is best known for the development of natural bond orbital methods and associated applications in physical and computational quantum chemistry. He has authored and co-authored over 200 software packages and technical publications along with several books including Valency and Bonding: A Natural Bond Orbital Donor-Acceptor Perspective, Classical and Geometrical Theory of Chemical and Phase Thermodynamics and Discovering Chemistry with Natural Bond Orbitals. His accolades include Alfred P. Sloan Award (1970), Camille Dreyfus Teacher-Scholar Award (1972), Lise Meitner-Minerva Lectureship Award for Computational Quantum Chemistry from Technion and Hebrew University (2007), and an Honorary Doctorate from the University of Rostock (2011).
Weinhold is a Fellow of the Royal Society of Chemistry and the American Association for the Advancement of Science. He served on the Honorary Editorial Advisory Boards of the International Journal of Quantum Chemistry and the Russian Journal of Physical Chemistry.
Education
Weinhold obtained a BA in Chemistry from the University of Colorado-Boulder in 1962 and was awarded a Fulbright Scholarship to study at the University of Freiburg in 1963. He received an AM from Harvard University in 1964 and continued his studies in physical chemistry under Edgar Bright Wilson, earning a PhD in 1967. Subsequently, he conducted postdoctoral research at the University of Oxford with Charles Coulson in 1968 and at the University of California-Berkeley in 1969.
Career
Weinhold began his academic career as an Assistant Professor at Stanford University in 1969. He then moved to the Theoretical Chemistry Institute (TCI) and Chemistry Department of the University of Wisconsin–Madison in 1976, becoming Associate Professor in 1977, Full Professor in 1979, and TCI Director from 1983 to 1991. He has been an Emeritus Professor of Chemistry at the University of Wisconsin–Madison since 2007.
Research
Weinhold's contributions to the field of chemistry include development of upper and lower bounds for quantum-mechanical properties, complex-coordinate rotation theory of autoionizing resonances, Natural Bond Orbital (NBO) and Natural Resonance Theory (NRT) analysis methods, the metric geometry of equilibrium thermodynamics, and the Quantum Cluster Equilibrium (QCE) theory of fluids.
Works
Weinhold's works largely focus on the quantum mechanics of chemical bonding. With Clark R. Landis, he co-authored Discovering Chemistry with Natural Bond Orbitals, which explored the conceptual basis of chemical bonding using the algorithms and keyword options of the NBO program. They also co-wrote the textbook Valency and Bonding: A Natural Bond Orbital Donor-Acceptor Perspective, which provided a modern overview of chemical bonding theory across the periodic table.
In 2009, Weinhold published the textbook, Classical and Geometrical Theory of Chemical and Phase Thermodynamics, which expounds the reformulation of Gibbsian thermodynamics as a metric geometry. This work finds notable application in the physics of black-hole thermodynamics.
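For orientation, the metric at the heart of this geometric reformulation, commonly called the Weinhold metric in the thermodynamic-geometry literature, can be sketched in standard notation (this is a summary from that literature, not a quotation from the book): it is the Hessian of the internal energy U with respect to the extensive variables.

```latex
g^{W}_{ij} \;=\; \frac{\partial^{2} U}{\partial X^{i}\,\partial X^{j}},
\qquad X^{i} \in \{\, S,\ V,\ N_{1},\ N_{2},\ \dots \,\}
```

Positive-definiteness of this Hessian expresses the second-order stability conditions of equilibrium thermodynamics, which is one reason the geometry transfers naturally to settings such as black-hole thermodynamics.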
Natural bond orbital method and extensions
Weinhold's research group pioneered the natural bond orbital analysis methods and their applications to molecular and supramolecular phenomena in successive versions of the NBO program. These methods include Natural Population Analysis (NPA) and Natural Bond Orbital (NBO) algorithms for extracting atomic charge and charge-transfer descriptors of intra- and interatomic interactions from modern computational models.
With Alan E. Reed, Weinhold also developed the algorithm for natural localized molecular orbitals (NLMOs), which provide an exact local representation of SCF and CI wave functions that automatically preserves σ–π separation and other localized bonding conceptions with modest computational overhead.
With Eric D. Glendening, Weinhold also developed the algorithm for Natural Resonance Theory (NRT) and the associated natural bond orders and resonance weightings. These provide computational descriptors that agree closely with the empirical resonance conceptions of Linus Pauling.
NBO deletions for cause-effect analysis of chemical properties
The NBO program, in tandem with the host electronic structure system, allows the user to delete (remove from the total energy evaluation) any chosen donor-acceptor interaction between filled (donor) and unfilled (acceptor) NBOs and re-calculate the energy as though this interaction were absent in nature. In cases where the property depends uniquely on whether the specific NBO interaction is included or not, one has direct cause/effect evidence that the deleted donor-acceptor (Lewis base-Lewis acid) interaction is the responsible physical origin for the property of interest. Such NBO-deletion techniques were used to identify the electronic origin of two mysterious structural properties: the internal rotation barriers of ethane-like molecules and the hydrogen bonds of water and many biomaterials.
With Terry K. Brunck, Weinhold showed that the characteristic 3 kcal/mol barrier to methyl torsions in ethane-like molecules is essentially removed by deletion of the vicinal σ-σ* donor-acceptor interactions between adjoining CH/C'H' NBOs, which intrinsically favor the anti-twisted ground-state geometry. Such vicinal σ-σ* interactions are now recognized as a weak form of resonance-type delocalization (hyperconjugation) that is ubiquitous in saturated molecules.
With Alan E. Reed and Larry K. Curtiss, Weinhold showed that the characteristic 5 kcal/mol H-bonding interaction between water molecules is similarly annihilated by deletion of the intermolecular n-σ* donor-acceptor interaction between a lone-pair (n) of one monomer and the proximal valence antibond (σ*) of the other, which intrinsically favors the characteristic linear O'···HO alignment of the ground-state dimer. Analogous intermolecular halogen bonds (and other X-bonding varieties) are now widely recognized as leading contributors to the clustering forces that lead to phase condensation of all materials at sufficiently low temperature.
Quantum Cluster Equilibrium (QCE) theory
Weinhold also developed the Quantum Cluster Equilibrium (QCE) method for computing (T,P)-dependent fluid-phase properties of water and other pure substances. QCE predictions are based on a model partition function composed from an equilibrium mixture of molecular clusters {Mn}, each optimized at a consistent quantum chemical level. QCE applications and experimental comparisons were performed (with co-workers Ralf Ludwig, Thomas C. Farrar, and Mark Wendt) for a variety of pure liquids, including M = water, ammonia, N-methylacetamide, formic acid and ethanol, and the theory was further extended to 2-component solutions in the Peacemaker program of Barbara Kirchner and coworkers. A noteworthy achievement of QCE theory was successful ab initio modelling of the pH of liquid water.
Awards and honors
1970 – Alfred P. Sloan Award, Alfred P. Sloan Foundation
1972 – Camille Dreyfus Teacher-Scholar Award, The Camille and Henry Dreyfus Foundation
2007 – Lise Meitner-Minerva Lectureship Award, Technion and Hebrew University
2011 – Honorary Doctorate, University of Rostock
Bibliography
Books
Valency and Bonding: A Natural Bond Orbital Donor-Acceptor Perspective (2005) ISBN 978-0521831284
Classical and Geometrical Theory of Chemical and Phase Thermodynamics (2009) ISBN 978-0470402368
Discovering Chemistry with Natural Bond Orbitals (2012) ISBN 978-1118119969
Selected articles
Weinhold, F. (1976). Thermodynamics and geometry. Physics Today, 29(3), 23-29.
Foster, A. J., & Weinhold, F. (1980). Natural hybrid orbitals. Journal of the American Chemical Society, 102(24), 7211-7218.
Reed, A. E., & Weinhold, F. (1985). Natural localized molecular orbitals. The Journal of Chemical Physics, 83(4), 1736-1740.
Reed, A. E., Weinstock, R. B., & Weinhold, F. (1985). Natural population analysis. The Journal of Chemical Physics, 83(2), 735-746.
Reed, A. E., Curtiss, L. A., & Weinhold, F. (1988). Intermolecular interactions from a natural bond orbital, donor-acceptor viewpoint. Chemical Reviews, 88(6), 899-926.
Glendening, E. D., Landis, C. R., & Weinhold, F. (2019). Resonance theory reboot. Journal of the American Chemical Society, 141(10), 4156-4166.
Weinhold, F. (2023). “Noncovalent interaction”: A chemical misnomer that inhibits proper understanding of hydrogen bonding, rotation barriers, and other topics. Molecules, 28(9), 3776.
References
Living people
20th-century American chemists
Theoretical chemists
University of Wisconsin–Madison faculty
Stanford University Department of Chemistry faculty
University of Colorado Boulder alumni
Harvard University alumni
University of Freiburg alumni
1941 births | Frank A. Weinhold | [
"Chemistry"
] | 1,924 | [
"Theoretical chemists",
"American theoretical chemists"
] |
70,458,075 | https://en.wikipedia.org/wiki/Hexenuronic%20acid | Hexenuronic acid (HexA) is an organic compound with the formula C13H20O10. It is an unsaturated sugar produced during the kraft process in the creation of wood pulp.
Kraft process
During the kraft process, which is the turning of wood into wood pulp for papermaking, wood chips are treated with sodium hydroxide and sodium sulfide. Sodium hydroxide catalyzes the demethylation of 4-O-methyl-D-glucuronoxylan, which is found at the ends of the polysaccharide xylan.
Hexenuronic acid increases a pulp's kappa number, a measure of the bleachability of wood pulp, by 3-7 units. It readily reacts with common wood pulp bleaching agents like ozone, peracetic acid, and chlorine dioxide. Consequently, research has focused on ways to break down hexenuronic acid prior to bleaching, to decrease hazardous waste products and costs.
The main method of destroying hexenuronic acid is to treat the wood pulp, after kraft processing, with strong acids at high temperatures. HexA is hydrolyzed and broken down into furan derivatives such as 2-furoic acid and 5-carboxy-2-furaldehyde. In some cases this process has reduced the bleaching costs of the wood pulp by 50%.
In microbes
Polysaccharide lyases (PLs) are a type of enzyme found in numerous microorganisms, including bacteriophages, that break down parts of wood. PLs catalyze the β-elimination of uronic acid-containing polysaccharides, yielding HexA.
References
Papermaking
Oxygen heterocycles
Carboxylic acids
Methoxy compounds
Sugar acids
Dihydropyrans
Tetrahydropyrans
Triols | Hexenuronic acid | [
"Chemistry"
] | 393 | [
"Sugar acids",
"Carbohydrates",
"Carboxylic acids",
"Functional groups"
] |
70,458,614 | https://en.wikipedia.org/wiki/Huawei%20Nova%209 | Huawei Nova 9 is a smartphone manufactured by Huawei. It was announced on September 23, 2021.
References
Nova 9
Mobile phones introduced in 2021
Mobile phones with multiple rear cameras
Mobile phones with 4K video recording | Huawei Nova 9 | [
"Technology"
] | 44 | [
"Mobile technology stubs",
"Mobile phone stubs"
] |
70,459,348 | https://en.wikipedia.org/wiki/River%20reed%20salt | River reed salt is a type of salt produced in Kenya from river reeds called muchua that grow along the Nzoia River.
It is thought that the origins of this practice date back to the 17th century, when the Bukusu people migrated from the area of the Congo River.
The only place the salt is traditionally made is the village of Nabuyole in Webuye Constituency of Bungoma County. To produce the salt, muchua reeds growing along the river are collected, dried, and burnt to obtain ash. The ash is then placed in a vessel with drainage, and water is slowly passed through it and collected in a vessel underneath. The solution is filtered and then boiled to obtain the salt crystals, which are traditionally packaged in banana leaves.
Notes
External links
Edible salt
Kenyan cuisine | River reed salt | [
"Chemistry"
] | 164 | [
"Edible salt",
"Salts"
] |
70,460,606 | https://en.wikipedia.org/wiki/Bird%20%28mathematical%20artwork%29 | Bird, also known as A Bird in Flight, refers to bird-like mathematical artworks that are defined by mathematical equations. A group of these figures was created by combing through tens of thousands of computer-generated images. They are usually defined by trigonometric functions. One example of A Bird in Flight is made up of 500 line segments in the Cartesian plane, where for each i from 1 to 500 the two endpoints of the i-th line segment are given by trigonometric functions of i.
The 500 line segments defined above together form a shape in the Cartesian plane that resembles a bird with open wings. Looking at the line segments on the wings of the bird causes an optical illusion and may trick the viewer into thinking that the segments are curved lines; the shape can therefore also be considered an optical artwork. Another version of A Bird in Flight was defined as the union of 20,001 circles whose centers and radii are likewise given by trigonometric functions of the circle's index.
The set of the 20,001 circles defined above forms a subset of the plane that resembles a flying bird. Although this version's equations are considerably more complicated than those of the version made of 500 segments, it bears a closer resemblance to a real flying bird.
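The general construction of the 500-segment version can be sketched as follows. The specific trigonometric formulas below are illustrative placeholders chosen for demonstration, not the artwork's actual published equations:

```python
import math

def bird_like_segments(n=500):
    """Return n line segments whose endpoints are trigonometric
    functions of the index i.

    The formulas here are placeholder examples of the general form
    used in such artworks, NOT the published equations of A Bird in
    Flight."""
    segments = []
    for i in range(1, n + 1):
        t = 2 * math.pi * i / n
        p = (1.5 * math.sin(t) ** 3, 0.25 * math.cos(6 * t) ** 2)
        q = (0.2 * math.sin(3 * t), -(2 / 3) * math.sin(t - math.pi / 3) ** 2)
        segments.append((p, q))
    return segments

segments = bird_like_segments()
print(len(segments))  # 500
# Each pair (p, q) can then be drawn as a segment, e.g. with
# matplotlib: plt.plot([p[0], q[0]], [p[1], q[1]]).
```

Because the endpoints vary smoothly with i, the family of straight segments produces the envelope curves that cause the optical illusion described above.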
References
Birds in art
Mathematical artworks
Contemporary works of art
Geometric shapes | Bird (mathematical artwork) | [
"Mathematics"
] | 244 | [
"Geometric shapes",
"Mathematical objects",
"Geometric objects"
] |
70,461,225 | https://en.wikipedia.org/wiki/CARMENES%20survey | The CARMENES survey (Calar Alto high-Resolution search for M-dwarfs with Exoearths with Near-infrared and optical Échelle Spectrographs) is a project to examine approximately 300 M-dwarf stars for signs of exoplanets with the CARMENES instrument on the Spanish Calar Alto's 3.5m telescope.
Operating since 2016, it aims to find Earth-sized exoplanets of around 2 Earth masses using Doppler spectroscopy (also called the radial velocity method). More than 20 exoplanets have been found through CARMENES, among them Teegarden b, considered one of the most potentially habitable exoplanets. Another potentially habitable planet found is Gliese 357 d.
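To see why low-mass M dwarfs are favorable targets for the radial velocity method, the velocity semi-amplitude K a planet induces on its star can be estimated from the standard Keplerian relation. The 0.2-solar-mass star, 10-day period and circular orbit below are illustrative assumptions, not survey parameters:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_EARTH = 5.972e24   # kg
DAY = 86400.0        # s

def rv_semi_amplitude(m_planet, m_star, period, e=0.0, sin_i=1.0):
    """Stellar radial-velocity semi-amplitude K in m/s (masses in kg,
    period in s), from the standard Keplerian relation."""
    return ((2 * math.pi * G / period) ** (1 / 3)
            * m_planet * sin_i
            / ((m_star + m_planet) ** (2 / 3) * math.sqrt(1 - e ** 2)))

# A 2 Earth-mass planet on a 10-day circular orbit:
k_mdwarf = rv_semi_amplitude(2 * M_EARTH, 0.2 * M_SUN, 10 * DAY)
k_sunlike = rv_semi_amplitude(2 * M_EARTH, 1.0 * M_SUN, 10 * DAY)
print(f"{k_mdwarf:.2f} m/s vs {k_sunlike:.2f} m/s")  # roughly 1.7 vs 0.6
```

Since K scales as the inverse two-thirds power of the stellar mass, the same small planet produces a signal nearly three times larger around a 0.2-solar-mass M dwarf than around a Sun-like star, bringing it within reach of a stabilized spectrograph.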
Discoveries
See also
List of exoplanet search projects
References
Astronomical surveys | CARMENES survey | [
"Astronomy"
] | 169 | [
"Astronomical surveys",
"Astronomical objects",
"Astronomy stubs",
"Works about astronomy"
] |
70,461,252 | https://en.wikipedia.org/wiki/Mu1%20Octantis | {{DISPLAYTITLE:Mu1 Octantis}}
Mu1 Octantis, Latinized from μ1 Octantis, is a solitary star in the southern circumpolar constellation Octans. It has an apparent magnitude of 5.98, allowing it to be faintly visible to the naked eye under ideal conditions. Located 335 light years away, it is approaching the Sun with a heliocentric radial velocity of .
This object is an F-type star with the blended luminosity class of a giant star and a bright giant. At present it has 1.36 times the mass of the Sun but has expanded to 4.68 times its girth. It radiates at from its enlarged photosphere at an effective temperature of 6,521 K, giving it a yellow white glow. Mu1 Octantis is metal enriched and has an age of 900 million years.
References
Octans
F-type giants
F-type bright giants
Octantis, Mu1
Octantis, 50
Durchmusterung objects
196051
102162
7863 | Mu1 Octantis | [
"Astronomy"
] | 220 | [
"Octans",
"Constellations"
] |
70,461,506 | https://en.wikipedia.org/wiki/Cartellino | A cartellino (Italian for "small piece of paper") is an illusionistic portrayal of a written note included in a painting, mostly from the Renaissance, with a legend that records the name of the artist, the date, the subject, or some other relevant information about the work. About 500 Renaissance paintings include a cartellino, but the device has also been adopted by some later artists.
It usually takes the form of a fictive rectangular scrap of parchment or paper – sometimes with frayed edges, creased or torn – which is depicted as being attached with a pin or wax to a surface that lies parallel to the picture plane, perhaps a foreground parapet or a background wall. Often the cartellino gives the impression of the note being attached to the surface of the painting rather than being part of the artwork itself.
This trompe-l'œil effect may reflect an earlier artistic practice of real notes being physically attached to paintings. Other suggested origins include the inscriptions in the Early Netherlandish works of Jan van Eyck, such as his 1432 Léal Souvenir, and the artistic practice at the studio of Francesco Squarcione in Padua, based on the gothic inscriptions seen in medieval paintings.
History
The cartellino appears in Italian Renaissance painting from the 15th century into the 16th century, and particularly in painting from Venice and the Veneto from the 1470s to the 1520s. One of the first cartellini appears on the Tarquinia Madonna by Filippo Lippi, painted in 1437. Other early examples include Andrea Mantegna's 1448 painting of St Mark, and Marco Zoppo's Wimborne Madonna of c.1455. Later examples include Carlo Crivelli's c.1480 Lenti Madonna, Giovanni Bellini's 1501–1502 Portrait of Doge Leonardo Loredan and Jacopo de' Barbari's 1504 Still-Life with Partridge and Gauntlets.
The cartellino fell out of fashion, as artists desired to be known directly from the virtuosic quality of their work, not as craftsmen with a workshop whose work was identified by their name on a label. By 1548, a character in Paolo Pino's Dialogo di pittura was describing the cartellino as a laughable thing. However, there are several cartellini in Hans Holbein the Younger's Portrait of Georg Giese from 1532, and Francisco de Zurbarán included cartellini in about a fifth of his autograph works, including his 1628 painting of Saint Serapion.
In her 2009 PhD thesis, Kandice Rawlings distinguishes the cartellino from other written elements included in a painting, such as depictions of inscriptions in stone or on wooden plaques, or writing in books held by subjects, or on streamers or banderoles. Other contemporary terms that were used for the same device include letterina, cartucce, and bolletta – that is, small letter, cartouche, or label. Despite the similarity of the word, there is little evidence of any connection with the cardellino (goldfinch, a symbol of Christ's Passion).
Rawlings documents 412 Italian paintings with cartellini, almost all religious subjects or portraits. Early examples are connected with Padua. About three quarters were painted by artists trained or active in Venice and the Veneto. About three quarters were painted between 1470 and 1530, with the largest number in the first decade of the 16th century. About four fifths contain the artist's signature. A third include a date, often alongside a signature. Rawlings identifies another 74 paintings from outside Italy that include cartellini, principally from Germany, mainly Albrecht Dürer in the early 16th century; England, mainly Holbein in the mid-1500s; and Spain, mainly El Greco in the late 16th and early 17th century, Velázquez in the 1630s, and Zurbarán as late as the 1660s.
The cartellino had a knowing revival in Diego Rivera's 1915 Zapatista Landscape.
See also
Museum label
Musca depicta
Notes
References
Cartellino, Glossary, National Gallery
Visual motifs | Cartellino | [
"Mathematics"
] | 849 | [
"Symbols",
"Visual motifs"
] |
70,461,867 | https://en.wikipedia.org/wiki/Serendipitaceae | The Serendipitaceae are a family of fungi in the order Sebacinales. Species do not produce visible basidiocarps (fruit bodies), but form septate basidia on thin, trailing hyphae. Species are mycorrhizal, forming associations with a wide range of plants. Most species have only been detected through environmental DNA sampling or laboratory cultures. The family currently contains the single genus Serendipita.
References
Sebacinales
Serendipitaceae | Serendipitaceae | [
"Biology"
] | 103 | [
"Fungus stubs",
"Fungi"
] |
70,462,002 | https://en.wikipedia.org/wiki/Nishimta | In Mandaeism, the nishimta ( ; plural: ) or nishma ( ) is the human soul. It can also be considered as equivalent to the "psyche" or "ego". It is distinct from ruha ('spirit'), as well as from mana ('nous'). In Mandaeism, humans are considered to be made up of the physical body (pagra), soul (nišimta), and spirit (ruha).
In the afterlife
When a Mandaean person dies, priests perform elaborate death rituals or death masses called masiqta in order to help guide the soul (nišimta) towards the World of Light. In order to pass from Tibil (Earth) to the World of Light, the soul must go through multiple maṭarta (watch-stations, toll-stations, or purgatories; see also Arcs of Descent and Ascent and araf (Islam)) before finally being reunited with the dmuta, the soul's heavenly counterpart.
A successful masiqta merges the incarnate soul ( ; roughly equivalent to the psyche or "ego" in Greek philosophy) and spirit ( ; roughly equivalent to the pneuma or "breath" in Greek philosophy) from the Earth (Tibil) into a new merged entity in the World of Light called the ʿuṣṭuna ('trunk', a word of Indo-Iranian origin). The ʿuṣṭuna can then reunite with its heavenly, non-incarnate counterpart (or spiritual image), the dmuta, in the World of Light, where it will reside in the world of ideal counterparts (Mšunia Kušṭa).
See also
Sidra d-Nishmata (Book of Souls, the first part of the Qulasta)
Ruha (spirit)
Mana (Mandaeism) (nous)
Nafs in Islam
Jiva in Hinduism
Ancient Egyptian conception of the soul
Soul dualism
References
Mandaic words and phrases
Mandaean philosophical concepts
Conceptions of self
Vitalism | Nishimta | [
"Biology"
] | 425 | [
"Non-Darwinian evolution",
"Vitalism",
"Biology theories"
] |