| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
63,079,473 | https://en.wikipedia.org/wiki/The%20Pursuit%20of%20Perfect%20Packing | The Pursuit of Perfect Packing is a book on packing problems in geometry. It was written by physicists Tomaso Aste and Denis Weaire and published in 2000 by Institute of Physics Publishing (doi:10.1887/0750306483), with a second edition published in 2008 by Taylor & Francis.
Topics
The mathematical topics described in the book include sphere packing (including the Tammes problem, the Kepler conjecture, and higher-dimensional sphere packing), the Honeycomb conjecture and the Weaire–Phelan structure, Voronoi diagrams and Delaunay triangulations, Apollonian gaskets, random sequential adsorption, and the physical realizations of some of these structures by sand, soap bubbles, the seeds of plants, and columnar basalt. A broader theme involves the contrast between locally ordered and locally disordered structures, and the interplay between local and global considerations in optimal packings.
As well, the book includes biographical sketches of some of the contributors to this field, and histories of their work in this area, including Johannes Kepler, Stephen Hales, Joseph Plateau, Lord Kelvin, Osborne Reynolds, and J. D. Bernal.
Audience and reception
The book is aimed at a general audience rather than at professional mathematicians. Therefore, it avoids mathematical proofs and is otherwise not very technical. However, it contains pointers to the mathematical literature where readers more expert in these topics can find more detail. Avoiding proof may have been a necessary decision, as some proofs in this area defy summarization: the proof by Thomas Hales of the Kepler conjecture on optimal sphere packing in three dimensions, announced shortly before the publication of the book and one of its central topics, is hundreds of pages long.
Reviewer Johann Linhart complains that (in the first edition) some figures are inaccurately drawn. Although finding the book "entertaining and easy to read", William Satzer finds it "frustrating" for the lack of detail in its stories. Nevertheless, Linhart and reviewer Stephen Blundell highly recommend the book, and reviewer Charles Radin calls it "a treasure trove of intriguing examples" and "a real gem". Despite complaining about a format that mixes footnote markers into mathematical formulas, and the illegibility of some figures, Michael Fox recommends it to "any mathematics or science library".
References
Packing problems
Mathematics books
2000 non-fiction books
2008 non-fiction books | The Pursuit of Perfect Packing | [
"Mathematics"
] | 496 | [
"Mathematical problems",
"Packing problems"
] |
63,079,510 | https://en.wikipedia.org/wiki/Psychological%20impact%20of%20climate%20change | The psychological impacts of climate change concern the effects that climate change can have on individuals' mental and emotional well-being. People experience a wide range of emotions as they grapple with climate change and the tension between their short-term self-interest and their longer-term community interests. People respond to concerns about climate change in various ways: behaviorally, via acts that frequently indicate conflicting attitudes; emotionally, through affective responses; and cognitively, through assessments. There is a wealth of research demonstrating how emotions influence people's decisions in a variety of contexts, including social issues, and can be used to distill personal experiences. They may also relate to more generalized effects on groups and their behaviors, such as the urge to migrate from affected areas of the globe to areas perceived as less affected. These impacts can manifest in various ways and affect people of all ages and backgrounds. Some key psychological impacts of climate change include emotional states such as eco-anxiety, ecological grief, eco-anger or solastalgia. While troublesome, such emotions may not be immediately harmful and can constitute a rational response to the degradation of the natural world, motivating adaptive action. However, other effects on health, such as post-traumatic stress disorder (PTSD) resulting from witnessing or seeing reports of massive wildfires, may be more dangerous.
Efforts to understand the psychological impacts of climate change have antecedents in work from the 20th century and even earlier, making evidence-based links to the changing physical and social environment resulting from accelerated human activity dating from the Industrial Revolution. Empirical investigation of psychological impacts specifically related to climate change began in the late 20th century and intensified in the first decade of the 21st century. From the early 2010s, psychologists were increasingly calling on each other to contribute to the understanding of psychological impacts from climate change. Academic professionals, medical professionals, and various actors are actively seeking to understand these impacts, provide relief, make accurate predictions, and assist in efforts to mitigate and adapt to global warming, including attempts to pause activity leading to further warming.
There are several channels through which climate change can impact a person's mental health, including direct impacts, indirect effects, and awareness of the issue. Specific populations, such as communities of color, children, and adolescents, are particularly vulnerable to these mental health impacts. There are many exceptions, but generally, it is people in developing countries who are more exposed to the direct effects and economic disruption caused by climate change.
The psychological effects of climate change may be investigated within the field of climate psychology or picked up in the course of treatment of mental health disorders. Non-clinical approaches, campaigning options, internet-based support forums, and self-help books may be adopted by those not overwhelmed by climate anxiety. Some psychological impacts may not receive any form of treatment at all and could be productive—for example, when concern about climate change is channeled into information gathering and seeking to influence related policy with others. The psychological effects of climate change may receive attention from governments and others involved in creating public policy, by means of campaigning and lobbying by groups and NGOs.
History
19th Century
Efforts to understand the psychological impacts of climate change have deep roots in the 20th century and even earlier, in the context of reactions to the changing physical and social environment that emerged from changes such as the Industrial Revolution. The foundational concepts of climate change can be traced back to the early 19th century. In 1824, French mathematician Joseph Fourier first described the greenhouse effect, suggesting that gases in the atmosphere trap heat from the sun. Later, in 1896, Swedish scientist Svante Arrhenius quantified the relationship between carbon dioxide (CO2) levels and global temperature, predicting that increased CO2 from fossil fuel combustion would lead to global warming. This early work laid the groundwork for understanding how human activity could influence climate.
20th Century
The mid-20th century marked a significant turning point in climate science. In 1958, Charles David Keeling began precise measurements of atmospheric CO2 at the Mauna Loa Observatory, revealing a consistent upward trend known as the Keeling Curve. This data highlighted the direct correlation between human activity—specifically fossil fuel combustion—and rising greenhouse gas concentrations.
As scientific evidence mounted, international concern grew. The 1972 United Nations Conference on the Human Environment in Stockholm was one of the first major global meetings focused on environmental issues. By the late 1980s, the need for coordinated global action became urgent. In 1988, the Intergovernmental Panel on Climate Change (IPCC) was established to assess scientific knowledge and facilitate international discussions on climate change. The 1992 Earth Summit in Rio de Janeiro produced the Framework Convention on Climate Change (UNFCCC), a landmark treaty aimed at combating climate change and its impacts. This was followed by the Kyoto Protocol in 1997, which committed industrialized nations to reduce greenhouse gas emissions. These agreements represented significant steps in recognizing climate change as a global issue requiring collective action.
21st Century
The 21st century has seen a dramatic increase in public awareness and scientific consensus regarding climate change. From the early 2010s, psychologists were increasingly calling on each other to contribute to the understanding of the psychological impacts of climate change. The IPCC's Fourth Assessment Report in 2007 asserted that climate change is primarily caused by human activities, marking a pivotal moment in climate advocacy. The Paris Agreement, adopted in 2015 by 196 countries, aimed to limit global warming to well below 2 degrees Celsius, representing a unified global commitment to address climate change. While psychologists had almost zero involvement in the first five IPCC reports, at least five contributed to the IPCC Sixth Assessment Report, which was completed in 2023. As of 2020, the discipline of climate psychology has grown to include many subfields; climate psychologists have been working with the United Nations, national and local governments, corporations, NGOs, and individuals.
Today
Today, climate change remains a pressing issue, with continued research and activism underscoring the urgency for comprehensive policy responses. As the effects of climate change become increasingly visible, the historical journey from early scientific understanding to international action highlights the critical need for ongoing engagement and solutions.
Pathways
Three causal pathways by which climate change causes psychological effects have been suggested: direct, indirect, or via psychosocial awareness. In some cases, people may be affected via more than one pathway at once.
There are three broad channels by which climate change affects people's mental state: directly, indirectly, or through unconscious awareness. The direct channel includes stress-related conditions caused by exposure to extreme weather events, such as cyclones and wildfires, leading to conditions such as post-traumatic stress and anxiety disorders. However, psychological impacts can also arise from less intense forms of climate change, such as rising temperatures leading to increased aggression. The indirect pathway operates via disruption to economic and social activities, such as when an area of farmland becomes infertile due to desertification, tourism declines due to damage to the landscape, or transport is interrupted. This can lead to increased stress, depression, and other psychological conditions such as anxiety. The third channel can be through unconscious awareness of the climate change threat, even by individuals not otherwise affected by it. This can include, for instance, feeling intimidated by the threats of food and water insecurity posed by climate change, which can lead to conflict. In general, populations living at sea level and in the Southern Hemisphere tend to be more exposed to economic disruption caused by climate change, whereas recently identified climate-related psychological conditions like "eco-anxiety", resulting from emergent awareness of the threat, can affect people across the planet.
Direct impact
Exposure to extreme weather events, such as hurricanes, floods, or high temperatures associated with drought and wildfires, can cause various emotional disorders. Most commonly, this is short-term stress, from which people can often rapidly recover. But sometimes chronic conditions set in, such as post-traumatic stress, somatoform disorders, or long-term anxiety, especially among those who have been exposed to multiple events. A swift response by authorities to restore a sense of order and security can substantially reduce the risk of long-term psychological impact for most people. However, individuals already suffering from mental ill health who do not receive the required attention when weather conditions disrupt services may face further decline.
The single best-studied connection between weather and human behavior is that between temperature and aggression, which has been investigated in laboratory settings, through historical study, and through extensive fieldwork. Various reviews conclude that high temperatures cause people to become bad-tempered, leading to increased physical violence, including domestic violence, especially in areas of mixed ethnic groups. There has been academic dispute regarding the degree to which the excess violence is caused by climate change, as opposed to natural temperature variability. The psychological effects of unusually low temperatures, which climate change can also cause in some parts of the world, are less thoroughly documented. However, evidence suggests that, unlike unusually high temperatures, they are less likely to lead to increased aggression.
Indirect pathway
Climate change significantly impacts people's financial stability in all parts of the world, for example by reducing agricultural output or making an area unattractive for tourism. This can cause significant stress, which in turn can lead to depression and other adverse psychological conditions. Consequences can be especially severe if financial stress is coupled with considerable disruption to social life, such as relocation to camps. For example, in the aftermath of Hurricane Katrina, the suicide rate for the general population rose by about 300%, but for those who were displaced and had to move into trailer parks, it rose by over 1400%. Effective inter-governmental interventions, especially in some less prosperous countries in the global south, can alleviate an immediate crisis.
Mental health and physical health are largely intertwined, so any climate change-related effects on physical health can directly affect mental health. Environmental disruption, such as the loss of bio-diversity, or even the loss of environmental features like sea-ice, cultural landscapes, or historic heritage can also cause negative psychological responses, such as ecological grief or solastalgia.
Unconscious awareness
Information about the risks posed by climate change, even to those not yet directly affected by it, can cause long-lasting psychological conditions, such as anxiety or other forms of distress. This can especially affect children, and has been compared to nuclear anxiety which occurred during the Cold War. Conditions such as eco-anxiety are very rarely severe enough to require clinical treatment. While unpleasant and thus classified as negative, such conditions have been described as valid rational responses to the reality of climate change.
Mental health
Specific conditions
As climate change becomes increasingly evident and threatening to both the biosphere and human livelihoods, the feelings aroused in response are a focus for exploration. Emotions such as feelings of loss and anxiety, grief, and guilt appear as typical responses to perceived threats posed by climate change. Such emotions have been collectively referred to in the literature as climate distress. Climate change is associated with increased frequency and severity of extreme weather events. The impacts of discrete events such as natural disasters on mental health have been demonstrated through decades of research showing increased levels of PTSD, depression, anxiety, substance abuse, and even domestic violence following the experience of storms.
Emotional reactions to climate change are being studied. Feelings of loss can originate in anticipation of impending catastrophe and after actual destruction. The corresponding 'anticipatory mourning' has been explored. The feelings of grief and distress in response to ecological destruction have elsewhere been termed 'solastalgia,' and the response to pollution of the local environment has been termed 'environmental melancholia.'
However, feelings in response to climate change and its broader ramifications can be unconscious or not fully recognized, resulting in despair and unease, particularly in young people; these feelings can surface in those attending therapy. Because it is difficult to put a name to what one is feeling, the experience is generally termed eco-anxiety, particularly when this negative affect takes on more intense forms such as sleep disturbance and ruminative thinking. Rather than see eco-anxiety as a pathology requiring treatment, Bednarek has suggested that it be construed as an adaptive, healthy response.
It is often difficult to conceptualize emotions in response to the unseen or intangible aspects of climate change. Theoretical approaches have suggested this is due to climate change being part of a greater construct than human cognition can fully comprehend, known as a 'hyperobject.' One of the techniques used by climate psychologists to engage with such 'unthought knowns' and their unconscious, unexplored emotional implications is 'social dreaming.'
Awareness of climate change and its destructive impact, happening in both the present and the future, is often overwhelming. Literature investigating how individuals and society respond to crisis and disaster found that when there was space to process and reflect on emotional experiences, these heightened emotions became adaptive. Furthermore, these adaptations then led to growth and resilience. Doppelt suggested 'transformational resilience' as a property of social systems, in which adversities are catalysts for new meaning and direction in life, leading to changes that increase both individual and community wellbeing above previous levels.
Anthropological perspective on climate psychology
Climate change has devastating effects on Indigenous peoples' psychological wellbeing as it impacts them directly and indirectly. As their lifestyles are often closely linked to the land, climate change directly impacts their physical health and financial stability in quantifiable ways. There is also a concerning correlation between severe mental health issues among Indigenous peoples worldwide and environmental changes. The connection and value Indigenous cultures ascribe to land means that damage to or separation from it directly impacts mental health. For many, their country is interwoven with psychological aspects such as their identity, community, and rituals. This interconnectedness informs a holistic perspective of health which requires balance and spiritual connection to the environment, both of which climate change threatens and Western climate actors do not fully understand.
Inadequate government responses that neglect Indigenous knowledge further worsen the adverse psychological effects linked to climate change. This produces the risk of cultural homogenization due to global adaptation efforts to climate change and the disruption of cultural traditions due to forced relocation. Countries with lower socio-economic status and minority groups in high socio-economic areas are disproportionately affected by the climate crisis. Worsening environmental conditions and catastrophic climate events have created environmental refugees.
Changes in cultural practice and social behavior occurred along with the intensifying climate crisis. Indigenous culture is one example of this shift, as the human body embodies the surrounding physical environment. Understanding how these cultural shifts in the climate crisis influence mental health is essential in creating and providing appropriate support. Anthropologists provide an essential tool for understanding the implications of the climate crisis on human health. The 'environmental body' expands on Scheper-Hughes and Lock's theory of the 'three bodies' – the phenomenological body, the body politic, and the symbolically lived social body. It is now necessary to understand mental health, not just as a product of biomedical imbalance but as a result of the climate crisis. The hegemonic ideology that prioritizes economic expansion drastically affects mental wellbeing and must be brought to light and challenged. The effects will only intensify over time as unpredictable environmental disasters worsen. Due to the extensive impacts of climate change on Indigenous mental health, Indigenous perspectives must be carefully considered and increasingly incorporated into the field of climate psychology.
Other
Other climate-specific psychological impacts are less well-studied than eco-anxiety. They include eco-depression, eco-anger, and states of denial or numbness, which can be brought on by too much exposure to alarmist presentations of the climate threat. A study that used confirmatory factor analysis to separate the effects of eco-anxiety, eco-depression, and eco-anger found that eco-anger was the most adaptive for personal wellbeing and the most effective at motivating participation in collective and individual action to mitigate climate change. A 2021 report found that eco-anger was significantly more common among young people. A 2021 literature review found that emotional responses to crises can be adaptive when the individual has the capacity and support to process and reflect on the emotion. In these cases, individuals can grow from their experiences and support others. In the context of climate change, this capacity for deep reflection is necessary to navigate the emotional challenges that both individuals and societies face.
Impacts on specific groups
People express differing intensities of concern and grief about climate change depending on their worldview, with those holding egoistic views (defined as people who mostly care about themselves and their own health and wellbeing), social-altruistic views (defined as people who express concern for others in their community, such as future generations, friends, family and the general public) and biospheric views (defined as people who are concerned about environmental aspects such as plants and animals) differing markedly. People who belong to the biospheric group expressed the most concern about ecological stress or grief, i.e., a form of grief related to worries about the state of the world's environment, and engaged in ecological coping, which includes connection to community, expression of sorrow and grief, shifting focus to controllable aspects of climate change, and being close to nature. People who belonged to the social-altruistic group engaged in ecological coping but did not express ecological stress.
Indigenous communities
Indigenous communities are disproportionately affected by climate change. "The impacts of climate change that we are feeling today, from extreme heat to flooding to severe storms, are expected to get worse, and people least able to prepare and cope are disproportionately exposed," said EPA Administrator Michael S. Regan. This has short- and long-term effects on physical and mental health. It is important to recognize how environmentalism and racism are intertwined: the repercussions of slavery and colonialism and continuing police brutality still play a key role in how climate change affects communities of color. The response to eco-anxiety is focused on the dominant groups in society and neglects marginalized communities. According to Mental Health America, 17% of Black people and 23% of Native Americans live with a mental illness.
Research has shown that communities of color are less likely to have access to mental health services, less likely to seek out treatment, and more likely to receive low or poor-quality care. This is due to an overwhelming amount of racial, structural, and cultural barriers these communities face. Eco-anxiety is affecting the majority of young adults because they have grown up with climate change and see the impacts it has on them locally. There are very few resources for communities of color to help them cope with eco-anxiety. Researchers recommend talking with a local therapist, reconnecting with nature, and focusing on positive news about climate change. Many minority and low-income communities do not have the same access to green spaces or playgrounds compared to suburban communities. Studies have shown the positive impact that physical activity can have on mental health, but once again, they do not have access to this resource.
People of color
Climate change disproportionately impacts people of color, exacerbating existing social and economic disparities. Environmental racism, where communities of color are more likely to be exposed to environmental hazards, intensifies as climate change intensifies. These communities often reside in areas with poor air quality, proximity to industrial facilities, or vulnerable coastal regions, making them more susceptible to the adverse effects of extreme weather events, such as hurricanes, floods, and heatwaves.
Moreover, climate change can also disrupt livelihoods, as many people of color heavily rely on agriculture, fisheries, or forestry for income, and these sectors are often vulnerable to changing weather patterns. The loss of these livelihoods can lead to increased financial stress and insecurity. Additionally, access to resources and opportunities for adaptation and mitigation measures can be limited for marginalized communities, hindering their ability to cope with the impacts of climate change effectively. Lack of representation in decision-making processes and limited access to education on climate change exacerbate these challenges.
The psychological toll on people of color is significant, as they experience not only the direct impacts of climate change but also the stress and anxiety arising from systemic inequalities. Coping with environmental hazards while facing socioeconomic disadvantages can lead to mental health issues, such as depression, anxiety, and trauma. Recognizing and addressing these disparities is crucial in the fight against climate change. Solutions must be inclusive.
Children
Children and young adults are the most vulnerable to climate change impacts. Many of the impacts of climate change that affect children's physical health also lead to psychological and mental health consequences. Children who live in geographic locations that are most susceptible to the impacts of climate change and/or with weaker infrastructure and fewer supports and services suffer the worst impacts.
Even though children and young adults are the most vulnerable group regarding the impacts of climate change, they have received far less research focus than adults. The World Health Organization states that more than 88% of the existing burden of disease attributable to climate change occurs in children younger than 5 years. Climate change puts children at high risk of mental health consequences such as PTSD, depression, anxiety, phobias, sleep disorders, attachment disorders, and substance abuse. These conditions can lead to problems with emotion regulation, cognition, learning, behavior, language development, and academic performance.
A 2018 study argued that it was crucial to gather information about how children are psychologically affected by climate change because of three primary reasons:
Children will bear a larger burden of the negative consequences of climate change over their lifetimes, and hence, society needs to know how to reduce these impacts and protect them;
They are the next leaders of society, and how they are responding psychologically now has importance for their current and future decision-making;
They will need the capacity to adapt to a climate-changed world, including a rapid psychological and physical transition to a low-carbon economy, and they will require particular knowledge, attitudes, and attributes to facilitate this adaptation.
Adaptive impacts
While most studies on the psychological impact of climate change find negative effects, positive or adaptive impacts are also possible. Direct experience of the negative effects of climate change may lead to positive personal change. For some individuals, experiencing environmental events such as flooding has resulted in greater psychological salience of, and concern for, climate change, which in turn predicts intentions, behaviors, and support for policy in response to climate change. A potential example of positive impact via the indirect channel would be financial benefits for the minority of farmers who could enjoy increased crop yields. While the overall effects of climate change on agriculture are predicted to be strongly negative, some crops in certain areas are predicted to benefit.
At a personal level, emotions like worry and anxiety are a normal, if uncomfortable, part of life. They can be seen as part of a defense system that identifies and deals with threats. From this perspective, anxiety can help motivate people to seek information and take action on a problem. Anxiety and worry are more likely to be associated with engagement when people feel that they can do things. Feelings of agency can be strengthened by including people in participatory decision-making. Problem-focused and meaning-focused coping skills can also be promoted. Problem-focused coping involves gathering information and finding out what you can do. Meaning-focused coping involves behaviors such as identifying positive information, focusing on constructive sources of hope, and trusting that other people are also doing their part. A sense of agency, coping skills, and social support are all important in building general resilience. Education may benefit from a focus on emotional awareness and the development of sustainable emotion-regulation strategies.
For some individuals, the increased engagement caused by the shared struggle against climate change reduces social isolation and loneliness. At a community level, learning about the science of climate change and taking collective action in response to the threat can increase altruism and social cohesion, strengthen social bonds, and improve resilience. Such positive social impact is generally associated only with communities that had somewhat high social cohesion in the first place, prompting community leaders to act to improve social resiliency before climate-related disruption becomes too severe.
Mitigation efforts
Psychologists have increasingly been assisting the worldwide community in facing the "diabolically" difficult challenge of organizing effective climate change mitigation efforts. Much work has been done on how to best communicate climate-related information to have a positive psychological impact, leading to people engaging in the problem rather than evoking psychological defenses like denial, distance, or a numbing sense of doom. In addition to advising on the method of communication, psychologists have investigated the difference it makes when the right sort of person is communicating. For example, when addressing American conservatives, climate-related messages are received more positively if delivered by former military officers. Various people who are not primarily psychologists have also been advising on psychological matters related to climate change. For example, Christiana Figueres and Tom Rivett-Carnac, who led the efforts to organize the unprecedentedly successful 2015 Paris Agreement, have since campaigned to spread the view that a "stubborn optimism" mindset should ideally be part of an individual's psychological response to the climate change challenge.
See also
Barriers to pro-environmental behaviour
Brain health and pollution
Eco-anxiety
Effects of climate change on human health
Effects of climate change on mental health
Politics of climate change
Psychological impact of discrimination on health
Psychology of climate change denial
Notes
References
External links
How to transform apocalypse fatigue into action on global warming , TED Talk by Per Espen Stoknes on overcoming defensive psychological impacts
Climate Psychologists
Climate Psychiatry Alliance
Environmental psychology
Climate change and society
Environment and health
Effects of climate change | Psychological impact of climate change | [
"Environmental_science"
] | 5,231 | [
"Environmental social science",
"Environmental psychology"
] |
63,080,641 | https://en.wikipedia.org/wiki/Dave%20Matthews%20Band%20bus%20incident | On August 8, 2004, a tour bus belonging to Dave Matthews Band dumped human waste from the bus's blackwater tank through the grating of the Kinzie Street Bridge in Chicago onto an open-top passenger sightseeing boat sailing on the Chicago River below. The incident became popularly known as the Dave Matthews Band incident or Poopgate.
As part of a 2005 legal settlement, the band agreed to pay $200,000 to environmental protection and other projects. The band also donated $100,000 to two groups that protect the river and the surrounding area. The band's bus driver, Stefan Wohl, pleaded guilty to dumping the waste in April 2005.
Background
Dave Matthews Band had booked rooms at the Peninsula Hotel at 108 E. Superior Street during a two-night show at Alpine Valley Music Theatre in East Troy, Wisconsin. The incident occurred between the first and second nights of the concert. The band booked five buses for its show; Stefan Wohl drove the bus of the band's violinist, Boyd Tinsley.
During warm months, the Chicago Architecture Center offers a boat tour of the buildings along the Chicago River. The boats have open-roof seating, where passengers sit during the tour.
Most of Chicago's bridges feature riveted grating, which is used for its strength and anti-slip properties. Riveted grating allows rain, snow and other liquids to pass through, removing the need for complicated drainage systems or for salting the bridge deck during snow, and ensuring that the deck does not ice over in wintry weather.
Incident
On August 8, 2004, Wohl was alone in Tinsley's bus and driving to a downtown hotel when he emptied the bus's blackwater tank as it crossed the metal grates of the Kinzie Street Bridge.
Passenger boat Chicago's Little Lady was hosting the 1:00 PM Chicago Architecture Foundation tour of the Chicago River. While passing under the bridge, the boat received the full contents of the tank on the seats of its open-roof terrace. Roughly two-thirds of the 120 passengers aboard the tour boat were soaked. The boat immediately returned to its dock, where all passengers were issued refunds. Five passengers were taken to Northwestern Memorial Hospital for testing. According to the Illinois Attorney General's filing, passengers aboard included persons with disabilities, elderly people, a pregnant woman, a small child, and an infant.
The boat's deck was swabbed by its crew, and service was resumed for its scheduled 3:00 PM tour.
Investigation
Immediately following the incident, the Chicago Police Department said they were investigating but did not yet consider it a crime. On August 9, the Chicago Architecture Foundation released a statement that a witness had recorded license-plate information, which they had turned over to the police as evidence. On August 10, bus driver Jerry Fitzpatrick, who drove for the band, was identified as the owner of the bus's license plate. In a phone interview, Fitzpatrick denied to a Chicago Tribune reporter that he had dumped the waste, asserting that he was parked in front of the band's hotel at the time. A publicist for the Dave Matthews Band issued a statement saying the band's management had determined that every one of its buses was parked at the time of the incident.
Fitzpatrick, who was in Effingham, Illinois, at the time, instructed Sgt. Paul Gardner of the Effingham Police Department to inspect the bus's septic tank to prove that he could not have emptied it. Gardner reported to the Chicago Tribune over Fitzpatrick's cell phone that he had inspected the tank, and that it was nearly full.
State prosecutors worked with a nearby fitness gym, the East Bank Club, to identify the offending bus based on the gym's security videotapes. On August 24, Illinois Attorney General Lisa Madigan filed a $70,000 lawsuit against Stefan Wohl, alleging that he was responsible for the dumping. Wohl denied dumping the waste, and was supported by the band. On August 25, Mayor Richard M. Daley held a press conference in which he released the videotape used as evidence. Daley, himself a fan of the band, expressed his belief that the dumping was "absolutely unacceptable".
In March 2005, Wohl pleaded guilty to reckless conduct and discharging contaminants to cause water pollution. He was sentenced to 150 hours of community service and 18 months of probation, and was fined $10,000, to be paid to Friends of the Chicago River, an environmental organization. The Dave Matthews Band donated $50,000 to the Chicago Park District, $50,000 to Friends of the Chicago River and paid the State of Illinois a settlement of $200,000. The band also agreed to keep a log of when its buses empty their septic tanks.
Impact and legacy
No passengers suffered any long-lasting physical health effects from having the waste dumped on them. In a 2009 interview with WTMX, Dave Matthews said that he'd "apologize for [the incident] as long as [he has] to". Satirical website The Onion published an article about the incident in 2018 containing an edited image supposedly depicting it. On August 8, 2023, the Riot Fest Historical Society attached a plaque to the Kinzie Street Bridge commemorating the 19th anniversary of the incident.
On August 7, 2024, in anticipation of the incident's 20th anniversary, many previously unknown passengers and crew from the incident spoke publicly about their experience, including the captain of Chicago's Little Lady, Sonja Lund. Most passengers reported that, though originally disgusted and upset by their experience, they began to find it humorous over time. On August 8, 2024, a documentary film entitled The Crappening was announced with the stated goal of finding and recording first-hand witness accounts of the event.
Also commemorating the 20th anniversary of the incident, podcaster Justin McElroy produced a mini-documentary about the bridge incident for the McElroy Family YouTube channel. In the video, Justin takes a tour of the locations in Chicago that are related to the event and explains the incident and its backstory to his two brothers: podcaster Travis McElroy and Forbes 30 Under 30 Media Luminary award winner, journalist and podcaster Griffin McElroy.
References
2004 in Chicago
August 2004 crimes in the United States
Crimes in Chicago
Chicago River incident
Waste disposal incidents in the United States
Water pollution in the United States
Feces | Dave Matthews Band bus incident | [
"Biology"
] | 1,313 | [
"Excretion",
"Feces",
"Animal waste products"
] |
63,081,160 | https://en.wikipedia.org/wiki/Gargi%20College%20molestations | On 6 February 2020, at around 6:30 pm, a group of intoxicated men entered the campus of Gargi College, a women's college affiliated to the University of Delhi. The incidents happened during the annual cultural fest of the college, Reverie. Reportedly, some students were sexually assaulted by members of the mob. It was also reported that some of the men masturbated in front of the female students.
An FIR was registered by the principal of Gargi College three days later. A fact-finding committee was created by the college administration to gather relevant information on the incidents.
The Delhi Police said an inspector of the Crime Against Women (CAW) Cell was designated as the investigation officer in the case and the Additional DCP of South Delhi was designated as the inquiry officer. Police arrested 10 suspects on 12 February 2020, and two more on 13 February 2020. Police later arrested five more suspects, bringing the number of arrests to seventeen as of 18 February 2020. On 14 February 2020, the initial ten accused were released on bail.
The incidents
The annual cultural fest of Gargi College was scheduled to be held from 4 to 6 February 2020. On 6 February 2020, the final day, a music concert was organised, for which playback singer Jubin Nautiyal was invited to perform. The crowd began to form at both gates of the campus around 3:30 pm. Entry was scheduled to close after 4:30 pm. Since the college is a women's college, men are only allowed to enter with passes. Reportedly, the gates remained open for longer and identification and passes were not properly checked for the men entering. Some reports say that the gate was not damaged at first: an administration official had opened the gates to give entry to a car, and many people rushed inside. The influx of the crowd continued for hours. It was reported that 5,000 to 10,000 individuals gathered in and around the campus. According to organisers, the expected crowd was around 6,000: 3,000 college students and around 3,000 more entering with passes.
According to eye-witness testimonies and social media posts, the mob entered around 6:30 pm. One report said that the men were middle-aged and came in trucks. Students said that those men did not appear to be students, and some reports claimed the men were returning from a pro-CAA rally and were shouting "Jai Shri Ram". This mob destroyed the campus gates, and some of them climbed over the walls and damaged students' vehicles. The men walked around drunk and shirtless. They brought with them alcohol, cigarettes and weed. They assaulted women and chased them. One student told NDTV that she wanted to report verbal harassment to the Proctor but could not, as signal jammers were installed for the fest. Some students noted that the incursion appeared to be planned, as some of the men carried eggs and threw them at the students.
One student said the crowd was "massive" and she was unable to move out and had to stay inside the campus for 40 minutes. When she went out in an open space, one of the men started masturbating at her. As soon as she escaped, a first-year student ran to her and said that a group of five or six men appeared to be attempting to surround her. Some of the students posted on social media that they were followed to their hostels and accommodations, and some were followed to the metro station when they left the campus.
Security lapse and police inaction
The students alleged that college security personnel were present and when they observed the incidents, they did not intervene, even when specifically asked to do so.
The Economic Times reported that the Rapid Action Force (RAF) and Delhi Police personnel were stationed close to the entrance of the campus from where the mob entered the college. Students said the RAF and police personnel did nothing.
Aftermath
The incident came to mainstream media attention when students posted accounts of their experiences on Instagram. Around 100 students protested outside the gate of the college from 10 February until 16 February 2020, sitting for dharna from 10 a.m. to 2 p.m. to demand an apology from the college principal, assurance of a safe campus and action against the perpetrators. They alleged that even after the incidents were brought to the notice of the college management, it did not take the necessary steps. The Delhi University Teachers' Association (DUTA) supported the students and joined the protest.
Some of the students who were harassed complained to the principal and others. The principal, Dr. Kumar, was reportedly heard saying, "Why come to the fests, if you do not feel safe." The students' union blamed the administration and called the principal's statement "infuriating and appalling". One student stated, "It was scary and traumatic, and the administration refused to help."
Commissions for Women
The Delhi Commission for Women (DCW) Chief, Swati Maliwal interviewed some of the students and asked the Principal Dr. Kumar to appear before the commission.
The National Commission for Women (NCW) also sent investigators. The NCW Chairperson, Rekha Sharma said that she had read about the molestations on social media and sent a team to talk with the principal and also to the Delhi Police.
Police investigation
On 10 February 2020, three days after the incidents, Dr. Promila Kumar, the principal of Gargi College, made a complaint at the Hauz Khas Police Station. An FIR was registered and the case was lodged under sections 452 (trespass), 354 (assault or criminal force with intent to outrage the modesty of a woman), 509 (criminal intimidation) and 34 (acts done by several persons in furtherance of common intention) of the Indian Penal Code.
Also on 10 February 2020, the DCP of South Delhi stated that a police inspector of the Crime Against Women (CAW) Cell had been designated as investigation officer in the case while the Additional DCP of South Delhi, Geetanjali Khandelwal, would oversee the investigation. On 12 February 2020, the Commissioner of Police (South Delhi), Atul Kumar Thakur stated that 11 teams of Delhi Police were working on the case. Police collected evidence and statements from witnesses and scanned CCTV footage from the cameras at the college gate to get evidence. CCTV footage revealed that some of the men broke the college gate to gain access, while some jumped over barricades. The security team at the gate was outnumbered by the mob. Some suspects were identified using this footage.
Petition for court-monitored CBI inquiry
On 13 February 2020, a lawyer filed a PIL in the Supreme Court, seeking a court-monitored Central Bureau of Investigation (CBI) inquiry into the alleged molestation and the arrest of the perpetrators behind the "planned criminal conspiracy". The petitioner told the court that it could be a "criminal conspiracy hatched by the political party" and raised concerns that the electronic evidence could be destroyed. In response, the top court said that the Delhi High Court could pass an order for the authorities to preserve the evidence. The court refused to entertain the petition and directed the petitioner to move it to the Delhi High Court. The PIL was then moved to the High Court, which agreed to hear the petition and listed it for hearing on 17 February 2020.
Identification and arrests
On 12 February 2020, Delhi Police said that 30 suspects had been identified and arrested ten students in connection with the alleged assaults. The police stated that the accused were for the most part local university students. A day later, police arrested two more people. On 14 February, the initial ten accused were granted bail by a Delhi court and were released on a surety of रु 10,000 each. Police said the CCTV footage they have only establishes the arrested persons "barging into the college premises by damaging a gate" but not their involvement in molestation.
The police arrested one more person on 16 February and two on 18 February, for a total of seventeen arrests.
Reaction from political leaders
Chief Minister of Delhi Arvind Kejriwal condemned the mistreatment of the women students and tweeted that the incident was "extremely unfortunate" and that the accused must be brought to justice. Deputy CM of Delhi Manish Sisodia called the incidents "disgusting". Sisodia mentioned in his tweet that "fests are opportunities to celebrate the cultural diversity" but "anti-social elements saw this fest as another chance to inflict harassment and violence on students". BJP chief Manoj Tiwari tweeted that the incident was highly condemnable and that the culprits should be apprehended immediately.
Delhi Congress president Subhash Chopra said that he was "anguished by the Gargi college incident". He tweeted that it is "sorrowful that the girl students are not safe in their own college in the national capital". He also mentioned in his tweet that "highly shameful that Delhi Police silently watched the atrocities against women."
During Question Hour in the Lok Sabha, in response to a question from MP Gaurav Gogoi, HRD Minister of India Ramesh Pokhriyal said the perpetrators were from outside and were not students, and that the college administration had been advised to take action in the matter.
See also
Campus sexual assault
Measures of campus sexual assault
Blank Noise
2012 Guwahati molestation case
List of sexual abuses perpetrated by groups
Mass sexual assault
Sexual abuse by yoga gurus
References
External links
Overcrowding and harassment at Reverie’20, Gargi College by DU Beat (Delhi University Student's newspaper)
Gargi college students statements to NDTV India
Sexual violence Reverie 2020, Gargi College by The Quint
"They were drunk men": Who caused the mayhem at Gargi College? by The Newslaundry
Campus sexual assault
Mass sexual assault
Sexual abuse
Sexual violence at universities and colleges
Student culture in India
Abuse
Sex crimes in India
Sexual ethics
Sexual misconduct
Social issues
Sexuality and society
Sex gangs
Sexual harassment
Violence against women in India
Sex scandals in India
Incidents of violence against women
Delhi University | Gargi College molestations | [
"Biology"
] | 2,073 | [
"Abuse",
"Behavior",
"Aggression",
"Human behavior"
] |
63,083,789 | https://en.wikipedia.org/wiki/Dupin%27s%20theorem | In differential geometry Dupin's theorem, named after the French mathematician Charles Dupin, is the statement:
The intersection curve of any pair of surfaces of different pencils of a threefold orthogonal system is a curvature line.
A threefold orthogonal system of surfaces consists of three pencils of surfaces such that any pair of surfaces out of different pencils intersect orthogonally.
The simplest example of a threefold orthogonal system consists of the coordinate planes and their parallels. However, this example is of no interest, because a plane has no curvature lines.
A simple example with at least one pencil of curved surfaces: 1) all right circular cylinders with the z-axis as their axis, 2) all planes that contain the z-axis, 3) all horizontal planes (see diagram).
A curvature line is a curve on a surface whose tangent at any point has the direction of a principal curvature (maximal or minimal normal curvature). The set of curvature lines of a right circular cylinder consists of the circles (maximal curvature) and the straight lines (minimal curvature). A plane has no curvature lines, because every normal curvature is zero. Hence, only the curvature lines of the cylinder are of interest: a horizontal plane intersects a cylinder in a circle, and a vertical plane through the axis meets the cylinder in straight lines.
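To make the cylinder example concrete, here is a short worked check. It uses a standard parametrization of a right circular cylinder of radius r about the z-axis; this sketch and its notation are not part of the original text.
\[
\mathbf{x}(\varphi ,z)=(r\cos \varphi ,\; r\sin \varphi ,\; z),\qquad
\kappa_1=\frac{1}{r}\ \text{(along the circles } z=\text{const}),\qquad
\kappa_2=0\ \text{(along the rulings } \varphi =\text{const}).
\]
Both principal directions are tangent to the parameter curves, which is exactly the decomposition into circles and straight lines described above.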
The idea of threefold orthogonal systems can be seen as a generalization of orthogonal trajectories. Special examples are systems of confocal conic sections.
Application
Dupin's theorem is a tool for determining the curvature lines of a surface by intersection with suitable surfaces (see the examples below), without time-consuming calculation of derivatives and principal curvatures. The next example shows that the embedding of a surface into a threefold orthogonal system is not unique.
Examples
Right circular cone
Given: A right circular cone, green in the diagram.
Wanted: The curvature lines.
1. pencil: Shifting the given cone C with apex S along its axis generates a pencil of cones (green).
2. pencil: Cones with apexes on the axis of the given cone such that their generator lines are orthogonal to the generator lines of the given cone (blue).
3. pencil: Planes through the cone's axis (purple).
These three pencils of surfaces are an orthogonal system of surfaces. The blue cones intersect the given cone C at a circle (red). The purple planes intersect at the lines of cone C (green).
Alternative with spheres
The points of space can be described by spherical coordinates (r, θ, φ), where θ is the angle measured from the cone's axis. Set S = M = origin.
1. pencil: Cones with point S as apex whose axis is the axis of the given cone C (green): θ = constant.
2. pencil: Spheres centered at M = S (blue): r = constant.
3. pencil: Planes through the axis of cone C (purple): φ = constant.
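As a quick consistency check that these three families are pairwise orthogonal, one can use the standard spherical parametrization with θ measured from the cone's axis; this computation is a sketch and does not appear in the original text:
\[
\mathbf{x}(r,\theta ,\varphi )=r\,(\sin \theta \cos \varphi ,\ \sin \theta \sin \varphi ,\ \cos \theta ),
\qquad
\mathbf{x}_r\cdot \mathbf{x}_\theta =\mathbf{x}_\theta \cdot \mathbf{x}_\varphi =\mathbf{x}_\varphi \cdot \mathbf{x}_r =0 .
\]
Hence the spheres r = const, the cones θ = const and the planes φ = const intersect pairwise orthogonally, so Dupin's theorem again yields the circles and the lines of the given cone as curvature lines.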
Torus
1. pencil: Tori with the same directrix (green).
2. pencil: Cones containing the directrix circle of the torus with apexes on the axis of the torus (blue).
3. pencil: Planes containing the axis of the given torus (purple).
The blue cones intersect the torus at horizontal circles (red).
The purple planes intersect at vertical circles (green).
The curvature lines of a torus generate a net of orthogonal circles.
A torus contains more circles: the Villarceau circles, which are not curvature lines.
Surface of revolution
Usually a surface of revolution is determined by a generating plane curve (meridian). Rotating the meridian around the axis generates the surface of revolution. The method used for a cone and a torus can be extended to a surface of revolution:
1. pencil: Parallel surfaces to the given surface of revolution.
2. pencil: Cones with apices on the axis of revolution with generators orthogonal to the given surface (blue).
3. pencil: Planes containing the axis of revolution (purple).
The cones intersect the surface of revolution at circles (red). The purple planes intersect at meridians (green). Hence:
The curvature lines of a surface of revolution are the horizontal circles and the meridians.
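The same conclusion can be read off directly from the usual parametrization of a surface of revolution; the meridian functions r(u), h(u) below are assumed notation, not taken from the text:
\[
\mathbf{x}(u,v)=\bigl(r(u)\cos v,\ r(u)\sin v,\ h(u)\bigr)
\;\Longrightarrow\;
F=\mathbf{x}_u\cdot \mathbf{x}_v =0,\qquad
M=\frac{\det (\mathbf{x}_u,\,\mathbf{x}_v,\,\mathbf{x}_{uv})}{|\mathbf{x}_u\times \mathbf{x}_v|}=0,
\]
so the parameter curves, that is the meridians v = const and the circles u = const, are curvature lines (compare the proof below).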
Confocal quadrics
The article confocal conic sections deals with confocal quadrics, too. They are a prominent example of a non-trivial orthogonal system of surfaces. Dupin's theorem shows that
the curvature lines of any of the quadrics can be seen as the intersection curves with quadrics out of the other pencils (see diagrams).
Confocal quadrics are never rotational quadrics, so the result on surfaces of revolution (above) cannot be applied. The curvature lines are in general curves of degree 4. (Curvature lines of rotational quadrics are always conic sections!)
Ellipsoid (see diagram)
Semi-axes: .
The curvature lines are sections with one (blue) and two (purple) sheeted hyperboloids. The red points are umbilic points.
Hyperboloid of one sheet (see diagram)
Semi-axes: .
The curvature lines are intersections with ellipsoids (blue) and hyperboloids of two sheets (purple).
Dupin cyclides
A Dupin cyclide and its parallels are determined by a pair of focal conic sections. The diagram shows a ring cyclide together with its focal conic sections (ellipse: dark red, hyperbola: dark blue). The cyclide can be seen as a member of an orthogonal system of surfaces:
1. pencil: parallel surfaces of the cyclide.
2. pencil: right circular cones through the ellipse (their apexes are on the hyperbola)
3. pencil: right circular cones through the hyperbola (their apexes are on the ellipse)
The special feature of a cyclide is the property:
The curvature lines of a Dupin cyclide are circles.
Proof of Dupin's theorem
Any point of consideration is contained in exactly one surface of any pencil of the orthogonal system. The three parameters $u, v, w$ describing these three surfaces can be considered as new coordinates. Hence any point can be represented by:
$\mathbf{x} = (x(u,v,w),\, y(u,v,w),\, z(u,v,w))$, or shortly: $\mathbf{x} = \mathbf{x}(u,v,w)$.
For the example (cylinder) in the lead, the new coordinates are the radius $r$ of the actual cylinder, the angle $\varphi$ between the vertical plane and the x-axis, and the height $z$ of the horizontal plane. Hence, $(r, \varphi, z)$ can be considered as the cylinder coordinates of the point of consideration.
The condition "the surfaces intersect orthogonally" at point means, the surface normals are pairwise orthogonal. This is true, if
are pairwise orthogonal. This property can be checked with help of Lagrange's identity.
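A brief sketch of that check, using the Binet–Cauchy form of Lagrange's identity (the details are not spelled out in the text): the surface $w=\text{const}$ has normal parallel to $\mathbf{x}_u\times \mathbf{x}_v$, the surface $v=\text{const}$ has normal parallel to $\mathbf{x}_u\times \mathbf{x}_w$, and
\[
(\mathbf{x}_u\times \mathbf{x}_v)\cdot (\mathbf{x}_u\times \mathbf{x}_w)
=(\mathbf{x}_u\cdot \mathbf{x}_u)(\mathbf{x}_v\cdot \mathbf{x}_w)-(\mathbf{x}_u\cdot \mathbf{x}_w)(\mathbf{x}_v\cdot \mathbf{x}_u)=0 ;
\]
the remaining two pairs of normals are handled in the same way.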
Hence
(1) $\quad \mathbf{x}_u\cdot \mathbf{x}_v = \mathbf{x}_v\cdot \mathbf{x}_w = \mathbf{x}_w\cdot \mathbf{x}_u = 0.$
Differentiating each of these equations with respect to the variable that does not appear in it, one gets
$\mathbf{x}_{uw}\cdot \mathbf{x}_v + \mathbf{x}_u\cdot \mathbf{x}_{vw} = 0,\qquad \mathbf{x}_{uv}\cdot \mathbf{x}_w + \mathbf{x}_v\cdot \mathbf{x}_{uw} = 0,\qquad \mathbf{x}_{vw}\cdot \mathbf{x}_u + \mathbf{x}_w\cdot \mathbf{x}_{uv} = 0.$
Solving this linear system for the three appearing scalar products yields:
(2) $\quad \mathbf{x}_u\cdot \mathbf{x}_{vw} = \mathbf{x}_v\cdot \mathbf{x}_{uw} = \mathbf{x}_w\cdot \mathbf{x}_{uv} = 0.$
From (1) and (2): The three vectors $\mathbf{x}_u,\ \mathbf{x}_v,\ \mathbf{x}_{uv}$ are orthogonal to the vector $\mathbf{x}_w$ and hence are linearly dependent (they are contained in a common plane), which can be expressed by:
(3) $\quad \det(\mathbf{x}_u,\, \mathbf{x}_v,\, \mathbf{x}_{uv}) = 0.$
From equation (1) one gets $F = \mathbf{x}_u\cdot \mathbf{x}_v = 0$ (coefficient of the first fundamental form) and
from equation (3): $M = \dfrac{\det(\mathbf{x}_u,\, \mathbf{x}_v,\, \mathbf{x}_{uv})}{|\mathbf{x}_u\times \mathbf{x}_v|} = 0$ (coefficient of the second fundamental form)
of the surface $w = \text{const}$ through the point of consideration.
Consequence: Since F = M = 0, the parameter curves on this surface are curvature lines.
The analogous result for the other two surfaces through the point of consideration holds, too.
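For completeness, the standard criterion behind this consequence (a sketch; E, G, L, N denote the remaining coefficients of the first and second fundamental forms): the directions of curvature lines satisfy
\[
\det \begin{pmatrix} \mathrm{d}v^2 & -\mathrm{d}u\,\mathrm{d}v & \mathrm{d}u^2 \\ E & F & G \\ L & M & N \end{pmatrix}=0 ,
\]
which for $F = M = 0$ reduces to $(EN-GL)\,\mathrm{d}u\,\mathrm{d}v = 0$; away from umbilic points this forces $\mathrm{d}u = 0$ or $\mathrm{d}v = 0$, i.e. the parameter curves.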
References
H.S.M. Coxeter: Introduction to geometry, Wiley, 1961, pp. 11, 258.
Ch. Dupin: Développements de géométrie, Paris 1813.
F. Klein: Vorlesungen über Höhere Geometrie, Springer-Verlag, 2013, , p. 9.
Ludwig Schläfli: Über die allgemeinste Flächenschar zweiten Grades, die mit irgend zwei anderen Flächenscharen ein orthogonales System bildet, in L. Schläfli: Gesammelte mathematische Abhandlungen p. 163, Springer-Verlag, 2013, .
J. Weingarten: Über die Bedingung, unter welcher eine Flächenfamilie einem orthogonalen Flächensystem angehört, Journal für die reine und angewandte Mathematik, Band 1877, Heft 83, pp. 1–12, ISSN (Online) 1435-5345, ISSN (Print) 0075-4102.
T. J. Willmore: An Introduction to Differential Geometry, Courier Corporation, 2013, , p. 295.
Surfaces | Dupin's theorem | [
"Mathematics"
] | 1,742 | [
"Theorems in differential geometry",
"Theorems in geometry"
] |
63,083,999 | https://en.wikipedia.org/wiki/C9H20O2 | The molecular formula C9H20O2 may refer to:
Dibutoxymethane
1,9-Nonanediol | C9H20O2 | [
"Chemistry"
] | 43 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
63,084,236 | https://en.wikipedia.org/wiki/WISE%202150%E2%88%927520 | WISE 2150–7520 AB (abbreviated W2150AB) is a binary brown dwarf 78.9 light-years distant from Earth in the southern constellation Octans. The system is a wide binary with a separation of 341 astronomical units. The primary of the system was discovered in 2005 as an infrared object with high proper motion and in 2008 was found to be an ultracool dwarf with a spectral type of L. The secondary, a much cooler T dwarf, was discovered by volunteers of the citizen science project Backyard Worlds: Planet 9, using data from the Wide-field Infrared Survey Explorer (WISE). The system was followed up by the project scientists with Magellan and Spitzer and a scientific paper describing the binary was published in the Astrophysical Journal in 2020.
The system is one of only a few brown dwarf binaries that can be easily resolved by ground-based telescopes. Another example is SDSS J1416+1348.
Brown dwarf system
The system consists of an L1 primary with a mass of and a T8 secondary with a mass of . The brown dwarfs are separated by 341 astronomical units. Other brown dwarf systems show a similarly wide binary configuration, like Oph 162225-240515, but most of them are young or have a higher total mass. W2150AB is unusual in that it does not show signs of youth; the age of the system was estimated at between 0.5 and 10 billion years. The combination of low total mass and large separation results in a low gravitational binding energy for the system. The researchers compared the binding energy and the mass ratio of the system with those of other brown dwarf binaries and identified 2M1101AB as a younger sibling. W2150AB must have formed like other brown dwarf binaries in a more crowded region and then left this natal region, surviving any interactions with nearby stars or giant molecular clouds that could easily have perturbed the pair into two single brown dwarfs.
Gallery
See also
LSPM J0207+3331 another object discovered by a Backyard Worlds volunteer
UScoCTIO 108
Luhman 16
References
External links
W2150AB in wiseview tool created by Backyard Worlds volunteers
W2150B in The Extrasolar Planets Encyclopaedia
Octans
Brown dwarfs
Binary stars
L-type brown dwarfs
T-type brown dwarfs
J215018.25-752039.7
WISE objects | WISE 2150−7520 | [
"Astronomy"
] | 481 | [
"Octans",
"Constellations"
] |
63,085,400 | https://en.wikipedia.org/wiki/GnosticPlayers | GnosticPlayers is a computer hacking group, which is believed to have been formed in 2019 and gained notability for hacking Zynga, Canva, and several other online services.
The Independent reported that GnosticPlayers had claimed responsibility for hacking other online businesses, and stealing hundreds of millions of credentials from web databases such as MyFitnessPal, Dubsmash, and fourteen others; and subsequently selling these credentials on the dark web.
Reported members
In 2020, cybersecurity author Vinny Troia published a report listing the following core group members:
Maxime Thalet-Fischer, who went under the aliases DDB, Casper, RawData and Pumpkin, was the seller of the group.
Nassim Benhaddou, who went under the alias Prosox, was a member of the group and was known to be Gabriel's early associate. According to Troia, Benhaddou later went on to form the group ShinyHunters.
In 2019, Nassim Benhaddou, Gabriel Kimiaie-Asadi Bildstein, as well as Maxime Thalet-Fischer, were arrested after Gabriel confessed that they hacked Gatehub. The hack reportedly involved the theft of $9.5 million worth of cryptocurrency.
Companies affected
GnosticPlayers have taken public responsibility for the following data breaches:
500px • 8fit • 8tracks • Animoto • Armor Games • Artsy • Avito • BlankMediaGames • Bookmate • Bukalapak • Canva • Chegg • CoffeeMeetsBagel • Coinmama • Coubic • DailyBooth • DataCamp • DubSmash • Edmodo • Epic Games • Evite • EyeEm • Fotolog • GameSalad • Gatehub • Ge.tt • GfyCat • HauteLook • Houzz • iCracked • Ixigo • Legendas.tv • LifeBear • LiveJournal • LovePlanet • mefeedia • MindJolt • MyFitnessPal • MyHeritage • MyVestigage • Netlog & Twoo • OMGPop • Onebip • Overblog • Petflow • PiZap • PromoFarma • RoadTrippers • Roll20 • ShareThis • Shein • Singlesnet • Solstice • Storenvy • StoryBird • StreetEasy • Stronghold Kingdoms • Taringa • Wanelo • WhitePages • Wirecard • Yanolja • Yatra • YouNow • Youthmanual • Zomato • Zynga
See also
ShinyHunters
The Dark Overlord
References
Hacker groups
Hacking in the 2020s
Hacking in the 2010s | GnosticPlayers | [
"Technology"
] | 557 | [
"Computer security stubs",
"Computing stubs"
] |
64,439,436 | https://en.wikipedia.org/wiki/Deficiency%20%28graph%20theory%29 | Deficiency is a concept in graph theory that is used to refine various theorems related to perfect matching in graphs, such as Hall's marriage theorem. This was first studied by Øystein Ore. A related property is surplus.
Definition of deficiency
Let G = (V, E) be a graph, and let U be an independent set of vertices, that is, U is a subset of V in which no two vertices are connected by an edge. Let NG(U) denote the set of neighbors of U, which consists of all vertices of V that are connected by an edge to one or more vertices of U. The deficiency of the set U is defined by:
defG(U) := |U| − |NG(U)|
Suppose G is a bipartite graph with bipartition V = X ∪ Y. The deficiency of G with respect to one of its parts (say X) is the maximum deficiency of a subset of X:
def(G;X) := max [U a subset of X] defG(U)
Sometimes this quantity is called the critical difference of G.
Note that defG of the empty subset is 0, so def(G;X) ≥ 0.
Deficiency and matchings
If def(G;X) = 0, then for all subsets U of X, |NG(U)| ≥ |U|. Hence, by Hall's marriage theorem, G admits a matching that saturates X (a matching in which every vertex of X is matched).
In contrast, if def(G;X) > 0, then for some subset U of X, |NG(U)| < |U|. Hence, by the same theorem, G does not admit a matching that saturates X. Moreover, using the notion of deficiency, it is possible to state a quantitative version of Hall's theorem: every bipartite graph G admits a matching in which at most def(G;X) vertices of X are unmatched.
Proof. Let d = def(G;X). This means that, for every subset U of X, |NG(U)| ≥ |U|-d. Add d dummy vertices to Y, and connect every dummy vertex to all vertices of X. After the addition, for every subset U of X, |NG(U)| ≥ |U|. By Hall's marriage theorem, the new graph admits a matching in which all vertices of X are matched. Now, restore the original graph by removing the d dummy vertices; this leaves at most d vertices of X unmatched.
This theorem can be equivalently stated as:
ν(G) = |X| − def(G;X)
where ν(G) is the size of a maximum matching in G (called the matching number of G).
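The identity above can be checked on a small example. The following sketch (an illustration of ours, assuming the NetworkX library; the toy graph is arbitrary) computes the matching number with NetworkX's Hopcroft–Karp implementation and the deficiency by brute force over all subsets of X:

```python
from itertools import combinations
import networkx as nx

# Toy bipartite graph with parts X = {0, 1, 2} and Y = {'a', 'b'}
X = [0, 1, 2]
G = nx.Graph([(0, 'a'), (1, 'a'), (1, 'b'), (2, 'b')])

def deficiency(G, X):
    """def(G;X) = max over subsets U of X of |U| - |N_G(U)| (the empty set gives 0)."""
    best = 0
    for r in range(1, len(X) + 1):
        for U in combinations(X, r):
            neighbors = set().union(*(G[u] for u in U))
            best = max(best, len(U) - len(neighbors))
    return best

# Hopcroft-Karp maximum matching; the returned dict contains each matched edge twice.
matching = nx.bipartite.maximum_matching(G, top_nodes=X)
nu = len(matching) // 2

print(nu == len(X) - deficiency(G, X))   # True: nu = 2, |X| = 3, def = 1
```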
Properties of the deficiency function
In a bipartite graph G = (X+Y, E), the deficiency function is a supermodular set function: for every two subsets X1, X2 of X:
defG(X1 ∪ X2) + defG(X1 ∩ X2) ≥ defG(X1) + defG(X2)
A tight subset is a subset of X whose deficiency equals the deficiency of the entire graph (i.e., equals the maximum). The intersection and union of tight sets are tight; this follows from properties of upper-bounded supermodular set functions.
In a non-bipartite graph, the deficiency function is, in general, not supermodular.
Strong Hall property
A graph G has the Hall property if Hall's marriage theorem holds for that graph, namely, if G has either a perfect matching or a vertex set with a positive deficiency. A graph has the strong Hall property if def(G) = |V| - 2 ν(G). Obviously, the strong Hall property implies the Hall property. Bipartite graphs have both of these properties, however there are classes of non-bipartite graphs that have these properties.
In particular, a graph has the strong Hall property if-and-only-if it is stable - its maximum matching size equals its maximum fractional matching size.
Surplus
The surplus of a subset U of V is defined by:
surG(U) := |NG(U)| − |U| = −defG(U)
The surplus of a graph G w.r.t. a subset X is defined by the minimum surplus of non-empty subsets of X:
sur(G;X) := min [U a non-empty subset of X] surG(U)
Note the restriction to non-empty subsets: without it, the surplus of all graphs would always be 0. Note also that:
def(G;X) = max[0, −sur(G;X)]
In a bipartite graph G = (X+Y, E), the surplus function is a submodular set function: for every two subsets X1, X2 of X:
surG(X1 ∪ X2) + surG(X1 ∩ X2) ≤ surG(X1) + surG(X2)
A surplus-tight subset is a subset of X whose surplus equals the surplus of the entire graph (i.e., equals the minimum). The intersection and union of surplus-tight sets with non-empty intersection are surplus-tight; this follows from properties of lower-bounded submodular set functions.
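Similarly, the relation def(G;X) = max(0, −sur(G;X)) can be verified by brute force on a toy graph (again an illustration of ours, assuming the NetworkX library):

```python
from itertools import combinations
import networkx as nx

X = [0, 1, 2]
G = nx.Graph([(0, 'a'), (1, 'a'), (1, 'b'), (2, 'b')])

def neighbors_of(G, U):
    return set().union(*(G[u] for u in U))

def deficiency(G, X):
    return max([0] + [len(U) - len(neighbors_of(G, U))
                      for r in range(1, len(X) + 1) for U in combinations(X, r)])

def surplus(G, X):
    # Minimum of |N(U)| - |U| over the *non-empty* subsets U of X
    return min(len(neighbors_of(G, U)) - len(U)
               for r in range(1, len(X) + 1) for U in combinations(X, r))

s = surplus(G, X)
print(s, max(0, -s) == deficiency(G, X))   # -1 True
```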
For a bipartite graph G with def(G;X) = 0, the number sur(G;X) is the largest integer s satisfying the following property for every vertex x in X: if we add s new vertices to X and connect them to the vertices in NG(x), the resulting graph has a non-negative surplus.
If G is a bipartite graph with a positive surplus, such that deleting any edge from G decreases sur(G;X), then every vertex in X has degree sur(G;X) + 1.
A bipartite graph has a positive surplus (w.r.t. X) if-and-only-if it contains a forest F such that every vertex in X has degree 2 in F.
Graphs with a positive surplus play an important role in the theory of graph structures; see the Gallai–Edmonds decomposition.
In a non-bipartite graph, the surplus function is, in general, not submodular.
References
Graph theory | Deficiency (graph theory) | [
"Mathematics"
] | 1,189 | [
"Discrete mathematics",
"Mathematical relations",
"Graph theory",
"Combinatorics"
] |
64,439,520 | https://en.wikipedia.org/wiki/A%20Treatise%20on%20the%20Circle%20and%20the%20Sphere | A Treatise on the Circle and the Sphere is a mathematics book on circles, spheres, and inversive geometry. It was written by Julian Coolidge, and published by the Clarendon Press in 1916. The Chelsea Publishing Company published a corrected reprint in 1971, and after the American Mathematical Society acquired Chelsea Publishing it was reprinted again in 1997.
Topics
As is now standard in inversive geometry, the book extends the Euclidean plane to its one-point compactification, and considers Euclidean lines to be a degenerate case of circles, passing through the point at infinity. It identifies every circle with the inversion through it, and studies circle inversions as a group, the group of Möbius transformations of the extended plane. Another key tool used by the book are the "tetracyclic coordinates" of a circle, quadruples of complex numbers describing the circle in the complex plane as the solutions to the equation . It applies similar methods in three dimensions to identify spheres (and planes as degenerate spheres) with the inversions through them, and to coordinatize spheres by "pentacyclic coordinates".
Other topics described in the book include:
Tangent circles and pencils of circles
Steiner chains, rings of circles tangent to two given circles
Ptolemy's theorem on the sides and diagonals of quadrilaterals inscribed in circles
Triangle geometry, and circles associated with triangles, including the nine-point circle, Brocard circle, and Lemoine circle
The Problem of Apollonius on constructing a circle tangent to three given circles, and the Malfatti problem of constructing three mutually-tangent circles, each tangent to two sides of a given triangle
The work of Wilhelm Fiedler on "cyclography", constructions involving circles and spheres
The Mohr–Mascheroni theorem, that in straightedge and compass constructions, it is possible to use only the compass
Laguerre transformations, analogues of Möbius transformations for oriented projective geometry
Dupin cyclides, shapes obtained from cylinders and tori by inversion
Legacy
At the time of its original publication this book was called encyclopedic, and "likely to become and remain the standard for a long period". It has since been called a classic, in part because of its unification of aspects of the subject previously studied separately in synthetic geometry, analytic geometry, projective geometry, and differential geometry. At the time of its 1971 reprint, it was still considered "one of the most complete publications on the circle and the sphere", and "an excellent reference".
References
External links
A Treatise on the Circle and the Sphere (1916 edition) at the Internet Archive
Circles
Spherical geometry
Inversive geometry
Mathematics books
1916 non-fiction books
Treatises
Clarendon Press books | A Treatise on the Circle and the Sphere | [
"Mathematics"
] | 553 | [
"Circles",
"Pi"
] |
64,439,717 | https://en.wikipedia.org/wiki/80%20Million%20Tiny%20Images | 80 Million Tiny Images is a dataset intended for training machine learning systems constructed by Antonio Torralba, Rob Fergus, and William T. Freeman in a collaboration between MIT and New York University. It was published in 2008.
The dataset has size 760 GB. It contains 79,302,017 32×32 pixel color images, scaled down from images scraped from the World Wide Web over 8 months. The images are classified into 75,062 classes. Each class is a non-abstract noun in WordNet. Images may appear in more than one class. The dataset was motivated by non-parametric models of neural activations in the visual cortex upon seeing images.
The CIFAR-10 dataset uses a subset of the images in this dataset, but with independently generated labels, as the original labels were not reliable. The CIFAR-10 set has 6000 examples of each of 10 classes and the CIFAR-100 set has 600 examples of each of 100 non-overlapping classes.
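As a small aside (an illustration of ours, not from the source; it assumes the torchvision package and downloads the data to a local ./data directory), the CIFAR-10 subset described above can be inspected directly to confirm the class structure:

```python
from collections import Counter
from torchvision.datasets import CIFAR10

# CIFAR-10: 32x32 colour images with independently generated labels
train = CIFAR10(root="./data", train=True, download=True)
test = CIFAR10(root="./data", train=False, download=True)

print(len(train.classes))                        # 10 classes
counts = Counter(train.targets) + Counter(test.targets)
print(sorted(counts.values()) == [6000] * 10)    # True: 6000 examples per class
```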
Construction
It was first reported in a technical report in April 2007, during the middle of the construction process, when there were only 73 million images. The full dataset was published in 2008.
They began with all 75,846 non-abstract nouns in WordNet, and for each of these nouns they scraped seven image search engines: AltaVista, Ask.com, Flickr, Cydral, Google, Picsearch and Webshots. After 8 months of scraping, they had obtained 97,245,098 images. Since they did not have enough storage, they downsized the images to 32×32 pixels as they were scraped.
After gathering, they removed images with zero variance and intra-word duplicate images, resulting in the final dataset.
Out of the 75,846 nouns, only 75,062 classes had any results, so the other nouns did not appear in the final dataset.
The number of images per noun follows a Zipf-like distribution, with 1056 images per noun on average. To prevent a few nouns taking up too many images, they put an upper bound of at most 3000 images per noun.
Retirement
The 80 Million Tiny Images dataset was retired from use by its creators in 2020, after a paper by researchers Abeba Birhane and Vinay Prabhu found that some of the labeling of several publicly available image datasets, including 80 Million Tiny Images, contained racist and misogynistic slurs which were causing models trained on them to exhibit racial and sexual bias. The dataset also contained offensive images. Following the release of the paper, the dataset's creators removed the dataset from distribution, and requested that other researchers not use it for further research and to delete their copies of the dataset.
See also
List of datasets in computer vision and image processing
References
Machine learning
Datasets in computer vision | 80 Million Tiny Images | [
"Engineering"
] | 592 | [
"Artificial intelligence engineering",
"Machine learning"
] |
64,439,875 | https://en.wikipedia.org/wiki/Portego | Portego ("porch" in Venetian dialect) is a characteristic compositional element of the Venetian civil buildings built during the years of the Republic of Venice. The portego is similar to a reception hall but has peculiar features.
History
The portego has been known from ancient times; it is present even in the oldest Venetian palaces. In later centuries, and especially with the emergence of Renaissance architecture, the portego's original central structure changed substantially, allowing for T-shaped and L-shaped halls.
Function
In a typical Venetian palace, the portego is the passage hall that joins the water portal with the land portal. On the ground floor, it serves as an entrance hall for loading goods, while on the upper floors the portego is used both as a reception hall and as a passage giving access to the other rooms, located on both sides. Furthermore, the portego was crucial in providing ventilation and air circulation for the palazzo, which, especially during summer, allowed respite from the humid weather and from the smells emitted by the often sewage-laden waterways of Venice.
Architecture
Usually, the portego joins the water portal and the land portal and may pass through the courtyard. This large room is usually decorated with a multi-light window (polifora), whose size depends on the width of the interior. In a traditional palazzo, for example Palazzo Loredan dell'Ambasciatore, the staircase is placed at the portego.
Gallery
See also
Sotoportego
References
Venetian Gothic architecture
Architectural design
Rooms
Architectural elements | Portego | [
"Technology",
"Engineering"
] | 305 | [
"Building engineering",
"Rooms",
"Architectural elements",
"Architectural design",
"Design",
"Components",
"Architecture"
] |
64,439,884 | https://en.wikipedia.org/wiki/Windows%20File%20Recovery | Windows File Recovery is a command-line software utility from Microsoft to recover deleted files. It is freely available for Windows 10 version 2004 (May 2020 Update) and later from the Microsoft Store.
Windows File Recovery can recover files from a local hard disk drive (HDD), USB flash drive, or memory card such as an SD card. It can work to some extent with solid-state drives (SSD).
The program is run using the winfr command. It has a mode designed for NTFS file systems that will attempt to recover files from a disk that is corrupted or has been formatted. Another mode will attempt to recover specific file types from FAT and exFAT (predominantly found on external devices) and ReFS file systems.
See also
Data remanence
Data recovery
List of data recovery software
File History
Trash (computing)
Undeletion
References
External links
Microsoft software
Windows commands
Windows-only freeware | Windows File Recovery | [
"Technology"
] | 186 | [
"Windows commands",
"Computing commands"
] |
64,440,132 | https://en.wikipedia.org/wiki/L25%20ribosomal%20protein%20leader | L25 ribosomal protein leader is a ribosomal protein leader involved in ribosome biogenesis. It is used as an autoregulatory mechanism to control the concentration of the ribosomal protein L25. Known examples were predicted with bioinformatic approaches in Gammaproteobacteria or in Enterobacteria.
The structure is located in the 5′ untranslated regions of mRNAs encoding ribosomal protein L25 (rplY).
See also
Ribosomal protein leader
References
External links
Ribosomal protein leader | L25 ribosomal protein leader | [
"Chemistry"
] | 110 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
64,440,137 | https://en.wikipedia.org/wiki/S10%20ribosomal%20protein%20leader | S10 ribosomal protein leader is a ribosomal protein leader involved in ribosome biogenesis. It is used as an autoregulatory mechanism to control the concentration of the ribosomal protein S10. Known examples were predicted with bioinformatic approaches in Clostridia or other lineages of Bacillota. The structure is located in the 5′ untranslated regions of mRNAs encoding the ribosomal proteins S10 (rpsJ), L3 (rplC) and L4 (rplD).
The ligand remains uncertain because of a lack of experimental investigation.
See also
Ribosomal protein leader
References
External links
Ribosomal protein leader | S10 ribosomal protein leader | [
"Chemistry"
] | 143 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
64,440,565 | https://en.wikipedia.org/wiki/Miro%20Erkintalo | Miro Erkintalo is a New Zealand physicist specialising in nonlinear optics and laser physics, based at the University of Auckland.
Education
Erkintalo was born and grew up in Pori, Finland, with an interest in science and maths. He attended the Tampere University of Technology intending to get his MSc and become a teacher or technologist, but after interning in a research lab decided to become a physicist. He completed three degrees in succession: a BSc (March 2009), an MSc (November 2009) and Doctor of Science in Physics (January 2012).
After his PhD, Erkintalo came to New Zealand in 2012 to take up a postdoctoral fellowship at the University of Auckland, at the suggestion of his mentor John Dudley. He had intended to just stay for two years, but enjoyed New Zealand so much he became a permanent resident. He became a Lecturer in the Department of Physics in 2014, Senior Lecturer in February 2017 and Associate Professor in February 2021. He is a principal investigator at the Dodd-Walls Centre for Photonic and Quantum Technologies.
Areas of research
Erkintalo studies laser light and how it interacts with matter, both fundamental physics and technological applications. He developed the theoretical model for microresonator frequency combs, which can convert a single laser beam into hundreds or thousands of different-coloured beams. Currently fibre-optic communications systems use hundreds of lasers with different wavelengths to increase the amount of information transmitted; a microresonator frequency comb could allow a single beam to do this work, greatly improving performance and energy efficiency. His work on temporal cavity solitons has potential for the development of light-based computer memory.
Erkintalo has also been part of the development of inexpensive ultrashort pulsed lasers with potential applications in microscopy and micro-machining. These lasers have extremely short pulses of hundreds of femtoseconds, which have very high peak energy and can be used in environments where they would have to work under extreme noise, temperature, and vibration.
Honours and awards
Erkintalo was awarded a Rutherford Discovery Fellowship in 2015 and two Marsden Fund grants. He won the Hamilton Award, the Royal Society Te Apārangi's Early Career Research Excellence Award for Science, in 2016 for his work in nonlinear optics and laser physics.
On 30 June 2020 Erkintalo was presented with the 2019 Prime Minister’s MacDiarmid Emerging Scientist Prize for his contributions to new laser technologies. Most of the $200,000 prize will go towards exploring microresonator frequency comb architecture.
Selected publications
References
External links
2016 Hamilton Award winner: Dr Miro Erkintalo (YouTube)
University of Auckland staff profile
Research website
Living people
Optical physicists
Theoretical physicists
Tampere University of Technology alumni
Year of birth missing (living people)
Recipients of Marsden grants
20th-century New Zealand physicists
21st-century New Zealand physicists
Expatriates in New Zealand
Finnish expatriates | Miro Erkintalo | [
"Physics"
] | 599 | [
"Theoretical physics",
"Theoretical physicists"
] |
64,440,890 | https://en.wikipedia.org/wiki/Genetic%20resources%20conservation%20and%20sustainable%20use | Genetic resources are defined as "genetic material of actual or potential value", where genetic material means "any material of plant, animal, microbial or other origin containing functional units of heredity". Genetic resources thus refer to the part of genetic diversity that has or could have practical use, such as in plant breeding. The term was introduced by Otto Frankel and Erna Bennett for a technical conference on the exploration, utilization and conservation of plant genetic resources, organized by the Food and Agriculture Organization (FAO) and the International Biological Program (IBP), held in Rome, Italy, 18–26 September 1967.
Genetic resources are one of the three levels of biodiversity defined by Article 2 of the Convention on Biological Diversity (CBD), adopted in Rio de Janeiro in 1992.
Under the CBD, discussions and negotiations regarding genetic resources are organized by the FAO Commission on Genetic Resources for Food and Agriculture. This commission distinguishes the following domains of genetic resources:
Animal genetic resources
Aquatic genetic resources
Forest genetic resources
Micro-organisms and invertebrates
Plant genetic resources
Genetic resources are threatened by genetic erosion and conservation activities are undertaken to prevent loss of diversity.
History
Before the introduction of the term, the Russian scientist Nikolai Vavilov initiated comprehensive studies of plant genetic resources and conservation work in the 1920s. The American botanist Jack Harlan stressed the tight link between plant genetic resources and man in his seminal publication "Crops and Man".
Methodologies for conservation of genetic resources
There are two complementary ways to conserve genetics resources:
in situ, which consists in managing populations on-site, dynamically evolving in their natural environment. In situ methodologies include:
conservation in natural populations (in nature)
on farm conservation
ex situ, which consists in conserving individuals or populations out of their natural environments. Ex situ gene bank methodologies include:
conservation in seed banks
in vitro conservation
cryopreservation
International policies
Policies are key to ensuring the fair and equitable sharing of benefits derived from the use of genetic resources, for present and future generations. The main international policy framework that regulates the exchange and use of genetic resources is the Nagoya Protocol, which entered into force in 2014. It defines and protects the owners of genetic resources and sets the rules for Access and Benefit Sharing (ABS).
Peer-reviewed literature
The following scientific journals are dedicated to the topic of genetic resources conservation and sustainable use:
Conservation Genetics Resources
Conservation Genetics
Genetic resources (open access)
Journal of Genetic Resources
Plant Genetic Resources
Genetic Resources and Crop Evolution
See also
Germplasm, genetic resources that are preserved for various purposes such as breeding, preservation, and research
International Treaty on Plant Genetic Resources for Food and Agriculture, an international agreement to promote sustainable use of the world's plant genetic resources
Genetic resources contribute to provisioning ecosystem services.
References
Conservation biology
Genetics | Genetic resources conservation and sustainable use | [
"Biology"
] | 551 | [
"Conservation biology",
"Genetics"
] |
64,441,820 | https://en.wikipedia.org/wiki/List%20of%20tallest%20clock%20towers | This is a list of the tallest structures with clocks on their exteriors that can be seen from the ground. The list includes various structures with a working clock face or faces on their exteriors. The first type of structure is the proper clock tower, a structure that fulfils the definition of a tower with a clock face or faces on its exterior wall or walls. Possibly the most famous example is the tower colloquially termed Big Ben. Some structures of this type were originally built as bell towers and had the clocks added later, such as the Springfield Campanile. Some clock towers of this type are freestanding, such as the Joseph Chamberlain Memorial Clock Tower, while others are attached to, or on top of, buildings, such as the tower on Philadelphia City Hall. The second set of structures are buildings (rather than towers) that had clock faces on their exteriors as part of their original design, such as the Wrigley Building. The third set of structures are buildings that have had a clock face or faces added after the original building was constructed, such as the Palace of Culture and Science. This division of structures with clock faces follows the general terminology used in related articles and follows Council on Tall Buildings and Urban Habitat (CTBUH) criteria. For the purposes of comparison and clarity this list includes all structures with clocks and clock faces of the types previously described. The list includes all clock 'tower' structures with a height of at least .
List
See also
List of largest clock faces
List of clock towers
List of tallest towers
List of tallest buildings
References
Clock towers | List of tallest clock towers | [
"Engineering"
] | 310 | [
"Structural engineering",
"Towers"
] |
64,442,240 | https://en.wikipedia.org/wiki/Xiangdong%20Ji | Xiangdong Ji (; born 1962) is a Chinese theoretical nuclear and elementary particle physicist. He is a Distinguished University Professor at the University of Maryland, College Park.
Ji received his bachelor's degree from Tongji University in 1982 and his PhD from Drexel University in 1987. He was a postdoctoral researcher at Caltech and MIT. In 1991, he became an assistant professor at MIT, and in 1996 he moved to the University of Maryland, where he was the founding director of the Maryland Center for Fundamental Physics from 2007 to 2009. He was the dean of the Department of Physics and Astronomy at Shanghai Jiao Tong University from 2009 to 2013.
Ji's main research interest has been the quark and gluon structure of the proton and neutron in Quantum Chromodynamics (QCD). He formulated the spin structure of the proton in terms of local and gauge-invariant spin and orbital angular momentum contributions of quarks and gluons (the Ji spin decomposition), and showed that these can be obtained (via the Ji sum rule) from a class of physical quantities called Generalized Parton Distributions (GPDs), which he introduced independently. GPDs are special cases of Wigner distributions, which provide simultaneous space and momentum information about partons.
Ji found a new class of QCD hard-scattering processes in lepton–nucleon collisions, called deep exclusive processes, which allow the GPDs to be probed experimentally. The simplest example is the production of a high-energy photon and a recoil nucleon in hard scattering, which he named Deeply Virtual Compton Scattering (DVCS).
Deep exclusive processes have been an important part of the experimental program at the Jefferson Lab 12 GeV facility
and at the Electron-Ion Collider at Brookhaven National Laboratory.
In 2013, Ji found that the fundamental quantities characterizing the high-energy properties of the nucleon, the parton distributions introduced by R. Feynman, can be directly calculated in Euclidean lattice field theory. He developed this into the Large-Momentum Effective Theory (LaMET), which allows parton physics, or light-cone correlations, to be computed from the large-momentum expansion of time-independent observables in lattice QCD.
Ji was elected a fellow of the American Physical Society in 2000, "[f]or fundamental contributions to the understanding of the structure of the nucleon and the process of deeply virtual Compton scattering." In 2014 he won the Humboldt Prize and in 2015 he won the Outstanding Nuclear Physicists Award from the Jefferson Sciences Associates.
In 2016 he won the Herman Feshbach Prize in Theoretical Nuclear Physics for pioneering work in developing tools to characterize the structure of the nucleon within QCD and for showing how its properties can be probed through experiments; this work not only illuminates the nucleon theoretically but also acts as a driver of experimental programs worldwide.
Ji is also engaged in elementary particle physics. He was the founder and the first spokesperson (2009-2018) of the PandaX project, one of the three most advanced deep underground liquid xenon experiments in the world (the other two are XENON and LZ), to elucidate the nature of dark matter and fundamental properties of neutrinos.
References
External links
Chinese nuclear physicists
21st-century Chinese physicists
Particle physicists
1962 births
Living people
Fellows of the American Physical Society | Xiangdong Ji | [
"Physics"
] | 678 | [
"Particle physicists",
"Particle physics"
] |
64,442,755 | https://en.wikipedia.org/wiki/Valerii%20Vinokur | Valerii Vinokur (also spelled as Vinokour, or Valery Vinokour; born 26 April 1949) is a condensed matter physicist who works on superconductivity, the physics of vortices, disordered media and glasses, nonequilibrium physics of dissipative systems, quantum phase transitions, quantum thermodynamics, and topological quantum matter. He is a senior scientist and Argonne Distinguished Fellow at Argonne National Laboratory and a senior scientist at the Consortium for Advanced Science and Engineering, Office of Research and National Laboratories, The University of Chicago. He is a Foreign Member of the National Norwegian Academy of Science and Letters and a Fellow of the American Physical Society.
Career
Vinokur earned his BSc in physics of metals at Moscow Institute of Steel and Alloys in 1972 and moved to the Institute of Solid State Physics, Chernogolovka, Russia, where he received a Ph.D. in physics in 1979. He has held appointments as a visiting scientist at CNRS, Grenoble (1987), a visiting scientist at Leiden University (1989), a visiting scientist at ETH (Zurich) (1990), and as visiting director of research at Ecole Normale Superieure (Paris) (1996). Since 1990 till January 2021 Vinokur has worked at the Argonne National Laboratory, having become a Distinguished Argonne Fellow in 2009. Since 2018 till January 2021, he has been a senior scientist at the Consortium for Advanced Science and Engineering, Office of Research and National Laboratories, The University of Chicago. Since January 2021 Vinokur has been working for Terra Quantum AG as a Chief Technology Officer US. Since January 2021 Vinokur also has been an adjunct professor at City College of the City University of New York.
Honors, awards and fellowships
Fellow of the American Physical Society, 1998
University of Chicago Distinguished Performance Award, 1998
, 2003
Alexander von Humboldt Research Award, 2003
Foreign Member of the Norwegian National Academy of Letters and Science, 2013
Alexander von Humboldt Research Award, 2013
International Abrikosov Prize, 2017
Fritz London Memorial Prize, 2020
References
1949 births
Fellows of the American Physical Society
Argonne National Laboratory people
American physicists
Living people
Russian physicists
Jewish physicists
Theoretical physicists | Valerii Vinokur | [
"Physics"
] | 459 | [
"Theoretical physics",
"Theoretical physicists"
] |
64,442,911 | https://en.wikipedia.org/wiki/EncroChat | EncroChat was a Europe-based communications network and service provider that offered modified smartphones allowing encrypted communication among subscribers. It was used primarily by organized crime members to plan criminal activities. Police infiltrated the network between at least March and June 2020 during a Europe-wide investigation. An unidentified source associated with EncroChat announced on the night of 12–13 June 2020 that the company would cease operations because of the police operation.
The service had around 60,000 subscribers at the time of its closure. In the UK the National Crime Agency led an operation resulting in over 2,600 arrests and 1,384 criminal charges.
Background
EncroChat handsets emerged in 2016 as a replacement for a previously disabled end-to-end encrypted service. The company had revealed on 31 December 2015 the Version 115 of EncroChat OS, which appears to be the first public release of their operating system. The earliest version of the company's website archived by the Wayback Machine dates to 23 September 2015.
According to a May 2019 report by the Gloucester Citizen, EncroChat was originally developed for "celebrities who feared their phone conversations were being hacked". In the 2015 murder of English mobster Paul Massey, the killers used a similar service providing encrypted BlackBerry phones based on PGP. After Dutch and Canadian police compromised that service's servers in 2016, EncroChat became a popular alternative among criminals for its security-oriented services in 2017–2018.
The founders and owners of EncroChat are not known. According to Dutch journalist Jan Meeus, a Dutch organized crime gang was involved and financed the developers.
Through a marketing strategy of "relentless online advertising", EncroChat rapidly expanded during its four and a half years of existence, benefiting from the closure of its competitors Amsterdam-based PGP Safe (customised BlackBerry) and Ennetcom. The network eventually reached an estimated 60,000 total subscribers at the time of its closure in June 2020. According to the French National Gendarmerie, 90 percent of subscribers were criminals, and the British National Crime Agency (NCA) said it found no evidence of non-criminals using it.
EncroChat first came to the attention of the media when it was revealed that high-profile criminals Mark Fellows and Steven Boyle had been using the encrypted devices to communicate during the May 2018 gangland murder of John Kinsella in Rainhill, England. The service resurfaced in the media during the summer of 2020 after law enforcement announced that they had infiltrated the encrypted network and investigative journalist Joseph Cox, who had been reviewing EncroChat for months, published an exposé in Vice Motherboard.
Functionality and services
The EncroChat service was available for handsets called "carbon units", whose GPS, camera and microphone functions were disabled by the company for privacy reasons. Devices were sold with pre-installed applications, including EncroChat, an OTR-based messaging app which routed conversations through a central server based in France, EncroTalk, a ZRTP-based voice call service, and EncroNotes, which allowed users to write encrypted private notes. They generally used modified Android devices, with some models based on the BQ Aquaris X2 phone hardware, others on Samsung devices, and sometimes on non-Android BlackBerry mobile phones.
Devices with EncroChat were able to boot in two modes. When only the power button was pressed to turn the handset on, they booted into a dummy Android home screen. But when the handset was switched on by pressing the power button together with the volume button, the phone booted into a hidden, encrypted partition which facilitated secret communication via EncroChat's French servers. A "panic button" feature was available: entering a certain PIN on the unlock screen would erase all data on the phone. According to journalist Jurre van Bergen, the IP address of EncroChat's server pointed to the French web hosting company OVH. EncroChat's SIM provider was the Dutch telecommunications firm KPN.
EncroChat devices were particularly popular in Europe, although they were also sold in the Middle East and elsewhere in the world. One source told Vice Motherboard that they became the "industry standard" among criminals. They were reported in July 2020 to cost €1,000 (£900) each, then €1,500 (£1,350) for a six-month contract to use EncroChat's service. EncroChat's website says that the firm had resellers in Amsterdam, Rotterdam, Madrid and Dubai, although Cox describes EncroChat as a "highly secretive" firm which "does not operate like a normal technology company". The phones were reportedly bought via a physical transaction which "looked like a drug deal", and at least one case involves an ex-military operative selling devices in Northern Ireland.
Infiltration
The EncroChat encrypted messaging service and the related customized phones were discovered by France's National Gendarmerie in 2017 when conducting operations against organized crime gangs. At the time of the Fellows and Boyle trial in December 2018, the NCA struggled to crack the lock screen passcode, as anything was wiped out after a set number of attempts.
The investigation accelerated in early 2019 after receiving EU funding. At the end of January 2020, a judge in Lille, France, authorized the infiltration of the EncroChat servers. Intelligence and technical collaboration between the NCA, the National Gendarmerie and Dutch police culminated in gaining access to messages after the National Gendarmerie put a "technical tool" on EncroChat's servers in France. The malware allowed them to read messages before they were sent and record lock screen passwords. Messages could be read by law enforcement beginning in April. EncroChat estimated that around 50 percent of devices in Europe were affected in June 2020.
The National Gendarmerie formed a special unit to investigate the hacked information on 15 March 2020, then signed an agreement with the Dutch Police to form a joint investigation team (JIT) on 10 April, co-operating through Eurojust with the support of Europol. The data was distributed by the JIT to other European partners, including the UK, Sweden and Norway. The NCA began to receive information about the content of messages on 1 April 2020, then started to build data analysis technology to automatically "identify and locate offenders by analysing millions of messages and hundreds of thousands of images". The chief of the Dutch National Police Force compared the malware to "sitting at the table where criminals were chatting among themselves". In May 2020, the wipe feature was remotely disabled by law enforcement on some units. The company initially tried to push an update in response to what was regarded as a bug, but the devices were struck again by malware that altered lock-screen passwords.
On the night of 12–13 June 2020, once EncroChat suspected the infiltration by law enforcement had occurred, users received a secret message:
A few days later, an "email address long associated with EncroChat" informed Vice Motherboard that the service was shutting down permanently "following several attacks carried out by a foreign organization that seems to originate in the UK"; Cox publicly disclosed excerpts of the email on 22 June. Europol and the National Crime Agency refused to comment at the time. The identity of the persons in charge of EncroChat has not been revealed as of 3 July 2020.
Impact
European joint investigation team, 2020
The Europol-supported JIT, code named Emma 95 in France and 26Lemont in the Netherlands, allowed the gathering in real time of millions of messages between suspects. Information was also shared with law enforcement in several countries that were not participating in the JIT, including the UK, Sweden and Norway.
The Dutch police arrested more than 100 suspects and seized more than 8 tonnes of cocaine, around 1.2 tonnes of crystal methamphetamine, 19 synthetic drug laboratories, dozens of guns and luxury cars, and around €20 million in cash. On 22 June 2020, authorities found police uniforms, stolen vehicles and 25 firearms in a property in Rotterdam, and 25 kg (55 lb) of drugs in a different property. On the same day, the Dutch police also discovered a "torture chamber" in a warehouse about 7 km (5 miles) east of the town of Bergen op Zoom. The facility, which was still under construction when discovered, consisted of seven cells made out of sound-proofed shipping containers; tools for torture were found, including a dentist's chair, hedge trimmers, scalpels and pliers. The place was nicknamed by criminals the "treatment room" or the "ebi", in reference to the Extra Beveiligde Inrichting (EBI), a Dutch maximum-security prison.
EncroChat probes in Ireland left criminals scrambling for cover. €1.1 million worth of cocaine was seized in an Amsterdam flat, and €5.5 million of cannabis in a trailer in County Wexford, both belonging to Irish gangs. Prominent Irish gang boss Daniel Kinahan was reported to have fled his "safe-haven" of Dubai on 9 July 2020.
Arrests were also made in Sweden. French authorities declined to disclose information publicly about the arrests in July 2020.
European joint investigation against Ndrangheta, 2023
In May 2023, "Operation Eureka" led to arrests of 108 people suspected of being involved with 'Ndrangheta in Italy and more than 30 arrests in Germany after 4 years of investigations and having been able to crack EncroChat and SkyEcc.
United Kingdom
Operation Venetic
Operation Venetic was a British national response initiated by the National Crime Agency (NCA). In June 2020, EncroChat had 10,000 users in the UK alone. As a result of the infiltration of the network, UK police arrested 746 individuals, including major crime bosses, intercepted two tonnes of drugs (with a street value at the time in excess of £100 million), seized £54 million in cash, as well as weapons, including submachine guns, handguns, grenades, an AK-47 assault rifle, and more than 1,800 rounds of ammunition. More than 28 million tablets of the sedative Etizolam were found in a factory in Rochester, Kent. Additionally, 354 kg (780 lb) of cocaine were seized by the Eastern unit in Essex and East Anglia, and 233 kg (514 lb) by the West Midlands unit. Police Scotland seized 164 kg (362 lb) of cocaine, £200,000 of cannabis and £750,000 in cash in several busts. In May 2020, police found two suitcases containing £1.1 million in Sheffield.
On 24 March 2020 NCA Agent and G3 Operations Manager J. Wayne filled out the Application for a Targeted Equipment Interference Warrant Under the Investigatory Powers Act 2016 form for TEI warrant 91-TEI-0141-2020.
On 25 March 2020 the NCA applied to their judiciary for an amendment to also scan for wi-fi networks that were adjacent to the infected Encrochat devices.
On 26 March 2020 the TEI was granted by the Judicial Commissioner. The warrant was further approved by the NCA director-general Dame Lynne Owens QPM CBE.
As of 8 July 2020, four people had been charged by the NCA with conspiracy to murder. British police claimed to have prevented up to 200 gangland killings, although Vice News noted that "the number of homicides linked to high level organised crime—as opposed to street gangs—in this county is relatively low". Two corrupt law enforcement officers were also arrested as a result of the operation.
On 22 December 2020, Thomas Maher was jailed for 14 years and 8 months at Liverpool Crown Court. He had pleaded guilty to four counts of conspiracy to commit a crime at an earlier hearing. He was involved in conspiracies to smuggle about £1.5 million (€1.6m) of cocaine from the Netherlands to Ireland as well as laundering about £1 million (€1.09m) in cash between Ireland and the Netherlands. He had used two EncroChat phones, which were not recovered, using the aliases "Satirical" and "Snacker".
In March 2022 the first murder plot convictions due to EncroChat were secured, against Paul Fontaine and Frankie Sinclair. By that time the NCA said that 2,631 people had been arrested in the UK as part of Operation Venetic; 1,384 had been charged, 260 convicted and over five and a half tons of class A drugs, 165 weapons and £75m in criminal cash had been seized.
By 9 October 2023, Operation Venetic had led to more than 3,100 arrests, 1,240 convictions and a combined 7,938 years in prison sentences. The operation had also recovered 173 firearms, 3459 rounds of ammunition and more than 9 tonnes of class A drugs.
In November 2023 Natalie Mottram, a former police analyst, was sentenced to almost four years imprisonment for misconduct in public office, perverting the course of justice and unauthorised access to computer material. She had told a criminal friend that police were monitoring EncroChat messages and that the police had information on him.
Operation Legality
The legality of the Targeted Equipment Interference (TEI) warrant (91-TEI-0141-2020) was questioned due to the unorthodox nature of the warrant as well as the legal arguments in the affidavit supporting the application. There is nothing new in arguing the merits of obtaining the identities of the users of a system and bringing them to justice. Neither is it particularly unusual to exaggerate the number of criminals who will be arrested or to downplay the number of innocent people who will be affected by the intrusion, however unethical that method may be. In this warrant, however, the NCA essentially indicated that if the warrant was not granted, the French would proceed with the operation anyway, and the NCA would be exposed as having violated civil and criminal statutes in the United Kingdom (see page 9): "...there is a significant risk that the NCA is encouraging an offense under the CMA, which may amount to an offence under ss. 44, 45, 46 of the Serious Crime Act 2007 (the "SCA 2007)"..." In other words, the NCA's argument for obtaining the warrant was, "if you don't grant this, we could be prosecuted for criminally participating in the hacking of United Kingdom citizens' devices." An amendment to the initial warrant was also requested to allow for the scanning of wireless access points available to the EncroChat devices.
An Investigatory Powers Tribunal (IPT) hearing into Operation Venetic was conducted from September 2022 to May 2023. The defence barristers accused the NCA of "deliberately concealing" information when it applied for the EncroChat warrants. In addition to the issues surrounding the text of the TEI warrant, the defence argued that applying for a TEI warrant rather than a TI warrant was in and of itself a "serious and fundamental error" and that the position was "tenuous at best". "The NCA started with the result they wanted and tried to fit that into the Investigatory Powers Act. They wanted a TEI and nothing else," a barrister acting for complainants told the court. "Their motive was understandable. They wanted to make the intercept available in court."
The Investigatory Powers Tribunal (IPT) concluded that the National Crime Agency (NCA) did not deliberately conceal information from the Judicial Commissioner when applying for the Targeted Equipment Interference (TEI) warrant. The tribunal found that the NCA's actions were lawful and that they were not wrong in seeking a TEI warrant instead of a Targeted Interception (TI) warrant. The tribunal dismissed various claims and complaints, declaring that the TEI warrant was lawfully issued and that the NCA did not fail in its duty of candor.
Operation Eternal
Operation Eternal, the London Metropolitan Police arm of the EncroChat operation, described itself as "the most significant operation the Metropolitan Police Service has ever launched against serious and organised crime". Around 1,400 EncroChat users were based in London at the time of its closure in June 2020. The Metropolitan Police seized more than £13.4 million in cash, 16 firearms, more than 500 rounds of ammunition, 620 kg (1400 lb) of Class A drugs, and arrested 171 people. As of 8 July 2020, 113 of them have been charged; 88 face charges of conspiracy to supply Class A drugs, and 16 have been charged with firearms offences.
In September 2020 nine people were arrested after raids in Brighton, Portslade, Kent, and London linked to Operation Eternal. Three men were arrested in Brighton and Portslade, five men and a woman in Kent and London. They were arrested for a variety of charges, including conspiracy to supply cocaine. Police seized 10 kg (20 lb) of Class A drugs and £60,000.
By 9 October 2023, Operation Eternal had led to 942 arrests and 426 convictions with a combined prison sentences of 3,722 years. Around £19 million in cash had been seized along with more than three tonnes of class A and B drugs and 49 guns.
Convictions
On 21 May 2021, Carl Stewart of Gem Street, Liverpool was sentenced to 13 years and 6 months at Liverpool Crown Court after pleading guilty to attempting to smuggle cocaine, heroin, MDMA and ketamine, as well as transferring criminal property. He had used EncroChat to transfer large amounts of class A and B drugs under the alias "ToffeeForce" (a reference to Everton F.C.). He was identified from a photo he had sent via Encrochat showing his hands holding a block of Blue Stilton. Police were able to identify him via his fingerprints in the photo.
Vincent Coggins, a boss in the Huyton Firm organized crime group, used EncroChat, and was jailed for 28 years.
In July 2024, former Gibraltar international footballer Jason Pusey was sentenced to 11 years in prison for his involvement in a large-scale drug operation, coordinating the supply of significant quantities of cocaine, ketamine, and cannabis.
Similar products
The Canada-based company Phantom Secure, which started as a legitimate firm selling modified mobile phones, provided "secure communications to high-level drug traffickers and other criminal organization leaders" according to a 2018 FBI takedown announcement. Its CEO, Vincent Ramos, was sentenced in 2019 to a nine-year prison sentence after telling undercover agents that he created the device to help drug traffickers. Customers included members of the Sinaloa Cartel, and the FBI reportedly asked Ramos to plant a backdoor in Phantom Secure's encrypted network, which he refused to do.
The "secure messenger" ANOM was launched after Phantom Secure was shut down, but in 2021 was revealed to be a sting operation run by law enforcement agencies.
The secure mobile phone company MPC was revealed in 2019 to have been created by Scottish criminals James and Barrie Gillespie. Christopher Hughes, a former employee of the company, is wanted by Dutch police for the murder of criminal turned blogger Martin Kok in December 2016.
Sky ECC was an encrypted chat service by Sky Global, a Canadian service provider. In March 2021, Dutch and Belgian police claimed to have accessed and decrypted the system traffic, leading to numerous arrests.
Ennetcom was a Dutch telecom provider accused of having retailed its customised phones for €1,500 each largely for use by criminals, with traffic of the company's servers (mostly Canada-based) used for routing encrypted messages between its about 19,000 subscribers.
Ghost was an Australian provider that was raided in 2024.
References
Cyberspace
Dark web
Defunct darknet markets
Distributed computing architecture
End-to-end encryption
File sharing
Internet architecture
Internet culture
Network architecture
Virtual private networks
Android (operating system) forks
Mobile Linux
Mobile operating systems
2016 software
Venetic
Organized crime in Europe
2020 disestablishments
Law enforcement operations in France | EncroChat | [
"Technology",
"Engineering"
] | 4,205 | [
"Internet architecture",
"IT infrastructure",
"Cyberspace",
"Network architecture",
"Computer networks engineering",
"Information technology"
] |
64,444,693 | https://en.wikipedia.org/wiki/Ripple20 | Ripple20 is a set of vulnerabilities discovered in 2020 in a software library that implemented a TCP/IP stack. The security concerns were discovered by JSOF, which named the collective vulnerabilities for how one company's code became embedded into numerous products. The software library was created around 1997 and had been implemented by many manufacturers of online devices.
Description
Ripple20 is a set of 19 vulnerabilities discovered in 2020 in a software library developed by the Cincinnati-based company Treck Inc., which implemented a TCP/IP stack.
History
The first release of Treck's library was around 1997. Treck had also worked with Elmic Systems, which created a fork of the library when the companies ended their collaboration. In September 2019, JSOF researchers analyzed a device containing code from the library and discovered it had vulnerabilities. Further analysis determined that the code originated from Treck's library, which had been widely implemented by numerous manufacturers. The disclosure of the vulnerabilities was made in June 2020. Ripple20 was chosen as the name for the set of vulnerabilities based on the disclosure year and the idea that the problems "rippled" through the supply chain from one company. It is difficult to identify all affected devices, because manufacturers may not realize that the library was used in one of their components.
References
External links
Computer security exploits | Ripple20 | [
"Technology"
] | 285 | [
"Computer security exploits"
] |
64,445,673 | https://en.wikipedia.org/wiki/F.%20Riesz%27s%20theorem | F. Riesz's theorem (named after Frigyes Riesz) is an important theorem in functional analysis that states that a Hausdorff topological vector space (TVS) is finite-dimensional if and only if it is locally compact.
The theorem and its consequences are used ubiquitously in functional analysis, often without being explicitly mentioned.
Statement
Recall that a topological vector space (TVS) is Hausdorff if and only if the singleton set consisting entirely of the origin is a closed subset of the space.
A map between two TVSs is called a TVS-isomorphism or an isomorphism in the category of TVSs if it is a linear homeomorphism.
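For reference, the theorem itself can be written compactly as follows (a reconstruction of the standard statement; the underlying scalar field is assumed to be the real or complex numbers):

```latex
\textbf{F. Riesz's theorem.} Let $X$ be a Hausdorff topological vector space
over $\mathbb{F} \in \{\mathbb{R}, \mathbb{C}\}$. Then
\[
    X \text{ is locally compact} \iff \dim X < \infty ,
\]
and in this case $X$ is TVS-isomorphic to $\mathbb{F}^{\,\dim X}$ with its
usual Euclidean topology.
```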
Consequences
Throughout, F, X and Y are TVSs (not necessarily Hausdorff), with F a finite-dimensional vector space.
Every finite-dimensional vector subspace of a Hausdorff TVS is a closed subspace.
All finite-dimensional Hausdorff TVSs are Banach spaces and all norms on such a space are equivalent.
Closed + finite-dimensional is closed: If M is a closed vector subspace of a TVS Y and if F is a finite-dimensional vector subspace of Y (where Y, M and F are not necessarily Hausdorff), then M + F is a closed vector subspace of Y.
Every vector space isomorphism (i.e. a linear bijection) between two finite-dimensional Hausdorff TVSs is a TVS isomorphism.
Uniqueness of topology: If X is a finite-dimensional vector space and if τ1 and τ2 are two Hausdorff TVS topologies on X, then τ1 = τ2.
Finite-dimensional domain: A linear map L : F → Y between Hausdorff TVSs, where the domain F is finite-dimensional, is necessarily continuous.
In particular, every linear functional of a finite-dimensional Hausdorff TVS is continuous.
Finite-dimensional range: Any continuous surjective linear map L : X → Y with a Hausdorff finite-dimensional range is an open map and thus a topological homomorphism.
In particular, the range of L is TVS-isomorphic to the quotient X / ker L.
A TVS X (not necessarily Hausdorff) is locally compact if and only if the quotient X / cl({0}) of X by the closure of the origin is finite-dimensional.
The convex hull of a compact subset of a finite-dimensional Hausdorff TVS is compact.
This implies, in particular, that the closed convex hull of a compact subset of such a space is equal to the convex hull of that set.
A Hausdorff locally bounded TVS with the Heine-Borel property is necessarily finite-dimensional.
See also
References
Bibliography
Theorems in functional analysis
Lemmas
Topological vector spaces | F. Riesz's theorem | [
"Mathematics"
] | 495 | [
"Theorems in mathematical analysis",
"Mathematical theorems",
"Vector spaces",
"Topological vector spaces",
"Space (mathematics)",
"Theorems in functional analysis",
"Mathematical problems",
"Lemmas"
] |
64,447,976 | https://en.wikipedia.org/wiki/Algorithmic%20Puzzles | Algorithmic Puzzles is a book of puzzles based on computational thinking. It was written by computer scientists Anany and Maria Levitin, and published in 2011 by Oxford University Press.
Topics
The book begins with a "tutorial" introducing classical algorithm design techniques including backtracking, divide-and-conquer algorithms, and dynamic programming, methods for the analysis of algorithms, and their application in example puzzles. The puzzles themselves are grouped into three sets of 50 puzzles, in increasing order of difficulty. A final two chapters provide brief hints and more detailed solutions to the puzzles, with the solutions forming the majority of pages of the book.
Some of the puzzles are well known classics, some are variations of known puzzles making them more algorithmic, and some are new. They include:
Puzzles involving chessboards, including the eight queens puzzle, knight's tours, and the mutilated chessboard problem
Balance puzzles
River crossing puzzles
The Tower of Hanoi
Finding the missing element in a data stream (one standard approach is sketched just after this list)
The geometric median problem for Manhattan distance
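The book's own solutions are not reproduced here, but as a flavour of the kind of algorithmic reasoning involved, the following is a minimal sketch (in Python, not taken from the book) of one standard approach to the missing-element puzzle, assuming the stream contains every integer from 1 to n except one.

```python
# One classical solution to the "missing element in a data stream" puzzle:
# compare the stream's running sum against the known sum of 1..n.
# Runs in O(n) time with O(1) extra memory.

def find_missing(stream, n):
    """Return the single value in 1..n that never appears in `stream`."""
    expected = n * (n + 1) // 2   # closed-form sum of 1..n
    return expected - sum(stream)

print(find_missing([3, 1, 5, 4], n=5))  # prints 2
```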
Audience and reception
The puzzles in the book cover a wide range of difficulty, and in general do not require more than a high school level of mathematical background.
William Gasarch notes that grouping the puzzles only by their difficulty and not by their themes is actually an advantage, as it provides readers with fewer clues about their solutions.
Reviewer Narayanan Narayanan recommends the book to any puzzle aficionado, or to anyone who wants to develop their powers of algorithmic thinking. Reviewer Martin Griffiths suggests another group of readers, schoolteachers and university instructors in search of examples to illustrate the power of algorithmic thinking.
Gasarch recommends the book to any computer scientist, evaluating it as "a delight".
References
Algorithms
Puzzle books
2011 non-fiction books
Oxford University Press books | Algorithmic Puzzles | [
"Mathematics"
] | 358 | [
"Applied mathematics",
"Algorithms",
"Mathematical logic"
] |
64,448,041 | https://en.wikipedia.org/wiki/S4%20ribosomal%20protein%20leader | The S4 ribosomal protein leader is a ribosomal protein leader involved in ribosome biogenesis. It is used as an autoregulatory mechanism to control the concentration of the ribosomal protein S4. Two examples of such leaders that use different conserved structures, in Bacillota and Gammaproteobacteria, have been experimentally confirmed.
Four additional S4 ribosomal protein leaders, each with distinct structures, were predicted in various bacteria phyla. In Bacteroidia or Bacillota, the structure is located in the 5′ untranslated regions of mRNAs encoding ribosomal proteins S4 (rpsD), RNA polymerase alpha subunit (rpoA) and L17 (rplQ).
In Clostridia (whose S4 ribosomal protein leader differs from that of other Bacillota) and Gammaproteobacteria, the ribosomal proteins S13 (rpsM) and S11 (rpsK) were also part of the mRNA encoding region.
See also
Ribosomal protein leader
References
External links
Ribosomal protein leader | S4 ribosomal protein leader | [
"Chemistry"
] | 227 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
64,448,052 | https://en.wikipedia.org/wiki/EL15%20ribosomal%20protein%20leader | An eL15 ribosomal protein leader is a ribosomal protein leader involved in ribosome biogenesis. It is used as an autoregulatory mechanism to control the concentration of the ribosomal protein eL15, which is used in archaea and eukaryotes. Known Examples were predicted in Euryarchaeota with bioinformatic approaches. The structure is located in the 5′ untranslated regions of mRNAs encoding ribosomal proteins eL15 (rpl15e). Similarities between the eL15 ribosomal protein and the rRNA site to which this protein binds were detected.
References
External links
Ribosomal protein leader | EL15 ribosomal protein leader | [
"Chemistry"
] | 131 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
64,448,059 | https://en.wikipedia.org/wiki/L17%20ribosomal%20protein%20leader | An L17 ribosomal protein leader is a ribosomal protein leader involved in ribosome biogenesis. It is used as an autoregulatory mechanism to control the concentration of the ribosomal protein L17. Known Examples were predicted in Actinomycetota and Pseudomonadota with bioinformatic approaches. The structure is located in the 5′ untranslated regions of mRNAs encoding ribosomal protein L17 (rplQ).
See also
Ribosomal protein leader
References
External links
Ribosomal protein leader | L17 ribosomal protein leader | [
"Chemistry"
] | 109 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
64,448,072 | https://en.wikipedia.org/wiki/L31%20ribosomal%20protein%20leader | An L31 ribosomal protein leader is a ribosomal protein leader involved in ribosome biogenesis. It is used as an autoregulatory mechanism to control the concentration of the ribosomal protein L31. Five structurally distinct types of L31 ribosomal protein leader were predicted with a bioinformatic approach. The structure is located in the 5′ untranslated regions of mRNAs encoding ribosomal protein L31 (rpmE), and in one case L32 (rpmF). These are found in different species of Actinomycetota, Bacillota or Gammaproteobacteria. The gammaproteobacterial type was also detected and validated in an independent experimental study using the organism Escherichia coli.
See also
Ribosomal protein leader
References
External links
Ribosomal protein leader | L31 ribosomal protein leader | [
"Chemistry"
] | 172 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
64,448,084 | https://en.wikipedia.org/wiki/S16%20ribosomal%20protein%20leader | A S16 ribosomal protein leader is a ribosomal protein leader involved in ribosome biogenesis. It is used as an autoregulatory mechanism to control the concentration of the ribosomal protein S16. Known Examples were predicted in Flavobacteria with bioinformatic approaches. The structure is located in the 5′ untranslated regions of mRNAs encoding ribosomal proteins L16 (rpsP) and the ribosome maturation factor protein (rimM).
References
External links
Ribosomal protein leader | S16 ribosomal protein leader | [
"Chemistry"
] | 110 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
64,448,087 | https://en.wikipedia.org/wiki/S6%3AS18%20ribosomal%20protein%20leader | S6:S18 ribosomal protein leader is a ribosomal protein leader involved in the ribosome biogenesis. It is used as an autoregulatory mechanism to control the concentration of the ribosomal proteins S6:S18 complex. An experimentally confirmed example of such a leader occurs in a wide variety of bacteria, though not all phyla. A S6:S18 ribosomal leader was predicted in Chlorobia, and its predicted structure differs from that of the validated S6:S18 ribosomal leader. This structure is located in the 5′ untranslated regions of mRNAs encoding ribosomal proteins rpsF (S6), the Single-strand DNA-binding protein A (ssbA), S18 (rpsR) and L7/L12 (rpll).
See also
Ribosomal protein leader
References
External links
Ribosomal protein leader | S6:S18 ribosomal protein leader | [
"Chemistry"
] | 186 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
64,449,141 | https://en.wikipedia.org/wiki/Carbon%20dioxide%20angiography | Carbon dioxide angiography is a diagnostic radiographic technique in which a carbon dioxide (CO2) based contrast medium is used - unlike traditional angiography where the contrast medium normally used is iodine based – to see and study the body vessels. Since CO2 is a non-radio-opaque contrast medium, angiographic procedures need to be performed in digital subtraction angiography (DSA).
History
The use of carbon dioxide as a contrast agent goes back to the 1920s, when the gas was used to visualize retroperitoneal structures. In the 1950s and early 1960s, CO2 was injected intravenously to delineate the right atrium for the detection of pericardial effusion. This imaging technique developed from animal and clinical studies which demonstrated that CO2 was safe and well tolerated with venous injections. In the early 1970s, Dr. Hawkins and Dr. Cho started using and studying CO2 as a contrast agent for peripheral vascular imaging and intervention as well. With the advent of the digital subtraction angiography (DSA) technique in the 1980s, CO2 evolved into a safe and useful alternative contrast agent in both arteriography and venography. Because of its lack of renal toxicity and allergic potential, CO2 is a preferred contrast agent in patients with renal failure or iodinated contrast medium allergy, and particularly in patients who require large volumes of contrast medium for complex endovascular procedures.
Technique
CO2 angiography is intended only for peripheral procedures. For procedures in the arterial system, CO2 may be injected only below the diaphragm, while in the venous system supradiaphragmatic regions can also be investigated, provided that the cerebral vessels are excluded. Taking this aspect into consideration, the practical approach follows that of iodinated contrast procedures. The contrast injection can be carried out, similarly, both with manual devices and with automatic injectors (Automated Carbon Dioxide Angiography, ACDA).
Properties
Being naturally present in the human body, CO2 is the only 100% biocompatible contrast agent, meaning no adverse reactions, such as allergy, nephrotoxicity, and hepatotoxicity.
Carbon dioxide is a negative contrast medium with low radiopacity (whereas iodinated contrast media are defined as positive contrast media due to their high radiopacity). Contrast is caused by the difference in X-ray absorption coefficients between the tissue and the contrast agent. In vascular images produced using CO2, vessels look brighter than the surrounding tissues, because the contrast medium absorbs less X-ray radiation than an iodine-based contrast medium, with which the vessels are instead displayed in black.
CO2 does not mix with blood. At atmospheric pressure CO2 is in gaseous form and, when it exits the catheter, it forms a train of bubbles which displaces blood, causing a transient ischemia related to the bloodstream (systolic pressure). When the frames are added together by DSA "stacking" software, the result is a composite diagnostic image.
Carbon dioxide is highly soluble, allowing multiple injections with no maximum dose per procedure (the literature indicates a limit of 100 mL per injection); in case of multiple injections, an adequate time interval between them should be allowed so that the gas can be expelled from the body. Compared with oxygen, the most abundant gaseous substance in the body, CO2 is more than 20 times more soluble, which makes it possible to inject large quantities into the body.
High compressibility and explosive delivery. The more pressure is exerted on the gas, the more its density increases, resulting in a decrease in gas volume and an increase in gas pressure. The effusion of the gas from the catheter orifice into a region of lower pressure, such as a blood vessel, leads to a sudden increase in the volume of the gas (the "explosive delivery" or "jet effect"), which could place excessive stress on vessel walls. To avoid this, immediately prior to the injection of CO2 a flush is performed, injecting small amounts of CO2 to reduce gas compression and guarantee gas delivery at a steady flow rate. A simple numerical illustration of this expansion is given at the end of this section.
CO2 is 400 times less viscous than iodinated contrast medium, allowing its injection through devices with a very small inner lumen, such as microcatheters, or even alongside other devices inserted in the catheter, such as guidewires and balloons, or during atherectomy procedures. The low viscosity of CO2 also makes it easy to pass through small vessels, visualizing tight stenoses, collaterals, small bleedings and endoleaks in AAA procedures.
Expulsion: Once dissolved in the plasma, CO2 is transported to the lungs and removed in a single pass by the alveoli, favoring the possibility of performing multiple injections without complications (in healthy patients, meaning no severe COPD or significant POF, especially in the presence of pulmonary embolism).
Buoyancy is defined as the tendency of a body to float when submerged in a fluid. CO2 is lighter than blood and therefore floats above the bloodstream. The main advantage is the ease of filling the more superficial (in the transverse plane) vessels of the body; conversely, the main disadvantage is the greater difficulty of filling the deeper ones.
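As a rough numerical illustration of the compressibility effect described above, the following sketch applies Boyle's law to an injected CO2 bolus; the pressures and volume are invented for illustration and do not correspond to any specific clinical protocol.

```python
# Illustrative only: isothermal expansion (Boyle's law, P1*V1 = P2*V2) of a CO2
# bolus leaving a pressurized delivery system and entering a vessel at lower
# pressure.  The numbers below are hypothetical, not clinical recommendations.

def expanded_volume(p_source_atm, v_source_ml, p_vessel_atm):
    """Volume the gas occupies once it equilibrates to the vessel pressure."""
    return p_source_atm * v_source_ml / p_vessel_atm

v_out = expanded_volume(p_source_atm=2.0, v_source_ml=30.0, p_vessel_atm=1.1)
print(f"30 mL of CO2 stored at 2.0 atm expands to about {v_out:.0f} mL at 1.1 atm")
# -> about 55 mL: this sudden expansion is the "jet effect" that the
#    pre-injection flush described above is intended to mitigate.
```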
Side effects
Pins and needles, a burning sensation, nausea and temporary discomfort are possible sensations during CO2 angiography, mainly because of the transient ischemia caused by the CO2 bubbles flowing in the bloodstream. CO2 is also neurotoxic, so brain injections should be avoided. The most feared complication for intravascular use is air embolism, which can result in stroke, myocardial infarction, paralysis, amputation, or death, although this risk across all patients is less than 1%. A large amount of CO2 trapped in the pulmonary artery or right side of the heart (only of concern during venography) obstructs venous return, resulting in bradycardia and hypotension. If this happens, the patient should be rotated into a left lateral decubitus position to attempt to separate the CO2 into a gas layer floating "on top of" and no longer interfering with the flow of the liquid and solid components of blood (vapor lock). Therefore, a delivery system which prevents the diffusion of room air into the system is a necessary safety measure for patients.
References
Carbon dioxide
Radiography | Carbon dioxide angiography | [
"Chemistry"
] | 1,350 | [
"Greenhouse gases",
"Carbon dioxide"
] |
73,155,123 | https://en.wikipedia.org/wiki/Leon%20Lucy | Leon B. Lucy (1938–2018) was a British-American astrophysicist, best known for his contribution to the Richardson-Lucy deconvolution algorithm and spearheading the development of smoothed-particle hydrodynamics methods. He won the Gold Medal of the Royal Astronomical Society in 2000.
References
External links
Columbia University faculty
1938 births
2018 deaths | Leon Lucy | [
"Physics",
"Astronomy"
] | 76 | [
"Astrophysics stubs",
"Astronomy stubs",
"Astrophysics"
] |
73,155,315 | https://en.wikipedia.org/wiki/Abillion | abillion is a mobile application helping users to find vegan and sustainable products.
The platform allows users to review plant-based, cruelty-free and sustainable products, while donating between $0.10 and $1 to nonprofit organisations for each review written. As of May 2023, the company claims to have donated over $2.8M to various nonprofit organisations, including Sea Shepherd and Mercy for Animals.
The main objective of the company is to reach the number of one billion people following a vegan diet and lifestyle by 2030.
History
The American entrepreneur Vikas Garg founded the company in Singapore, and the app was officially launched in May 2018.
The start-up was first named abillionveg and shortened its name to abillion in 2020.
In 2019, the company raised $3M in its first round of funding (pre-Series A).
In 2021, it raised $10M in its Series A funding.
In February 2023, the company announced the launch of a community investment round, using the crowdfunding platform Wefunder, which reached a total of $500,000.
In May 2023, it celebrated its 5th anniversary and reached 1 million downloads.
Awards
Using data from the reviews published by its users, abillion gives awards to the most-liked vegan products and brands.
In May 2023, the company published a world Top 10 Best Plant Based Burgers, among the winning brands were Beyond Meat, NotCo and Sojasun.
References
Mobile applications
Veganism
Vegetarianism
2018 establishments in Singapore | Abillion | [
"Technology"
] | 317 | [
"Mobile technology stubs"
] |
73,155,361 | https://en.wikipedia.org/wiki/Acta%20Sedimentologica%20Sinica | Acta Sedimentologica Sinica (Chinese name: 沉积学报) is a peer-reviewed scientific journal covering the fields of sedimentology, sedimentary mineral deposits, and geochemistry. It is sponsored by the Professional Committee of Sedimentology of the Chinese Society of Mineral and Rock Geochemistry and the Professional Committee of Sedimentary Geology of the Chinese Geological Society.
History
The journal was established in 1983 and, as submissions increased, switched to a bimonthly schedule in 2006. The editor-in-chief is Chengshan Wang (Chinese Academy of Sciences). Baojun Liu (Division of Earth Sciences, Chinese Academy of Sciences) is honorary editor-in-chief.
Abstracting and indexing
The journal is abstracted and indexed by:
Scopus
EBSCO databases
Chemical Abstracts Service
DOAJ
Notable articles
According to the Web of Science, the following three articles have been cited most often (>350 times):
See also
Nature Geoscience
Palaeogeography, Palaeoclimatology, Palaeoecology
Sedimentary Geology
References
External links
Sedimentology
Geochemistry journals
Bimonthly journals
Chinese-language journals
Academic journals established in 1983
Academic journals of China | Acta Sedimentologica Sinica | [
"Chemistry"
] | 235 | [
"Geochemistry journals"
] |
73,157,151 | https://en.wikipedia.org/wiki/Climate%20change%20ethics | Climate change ethics is a field of study that explores the moral aspects of climate change. Climate change is often studied and addressed by scientists, economists, and policymakers in value neutral ways. However, philosophers such as Stephen M. Gardiner and the scientific authors of the Intergovernmental Panel on Climate Change (IPCC), argue that decisions related to climate change are moral issues and involve value judgment. Climate change involves difficult moral questions relating to global inequality and human development, who bears responsibility for past emissions, as well as the role of future generations, personal responsibility and many more.
The two main ethical implications of climate change are related to its effects. The causes and effects of climate change are separated in time and space. Anthropogenic climate change is caused mainly by humans burning fossil fuels. The primary beneficiaries of fossil fuel burning are developed countries, whereas the majority of climate impacts will be felt by the developing world. Further, climate change occurs on timescales much greater than a single generation of the human population, causing conflict between economic and political interests, which are products of society, and the interests of future people—an ethical and moral concept.
Beginnings
Climate change has become a concern for a number of disciplines due to its potentially catastrophic impacts on environmental systems, wildlife, nature, and humans. Climate change poses a serious threat to the global economy, as economic development, especially in the West, has been largely dependent on the extraction and burning of fossil fuels since the Industrial Revolution. Burning fossil fuels increases the concentration of greenhouse gases in the atmosphere, which is the primary driver of current global anthropogenic climate change. This notion has led to the study of the economics of climate change. Climate change is also a deeply political issue, as there are disagreements among actors on whether and to what extent society should act on climate change. Economics is insufficient to guide policymaking alone, however, as it is only capable of making predictions regarding how different policy decisions will affect the economy and how to proceed along those different pathways; it cannot tell us which pathway to choose, as that is determined by which values we act on as a society. Because of this, some philosophers have argued that climate change is "fundamentally an ethical issue" which raises questions about "how we ought to live, what kinds of societies we want, and how we should relate to nature and other forms of life."
Global justice
Climate change can be considered a global justice issue because the actors with the largest contribution to climate change are not the ones suffering from the most severe impacts. Historically, wealthy, developed nations have been emitting, and currently emit, disproportionally large amounts of greenhouse gases compared to poorer developing nations. For example, Bangladesh is highly vulnerable to the effects of climate change. The country's per capita emissions are 1/20 of the global average and 1/100 of the per capita emissions in the United States, but its low-lying topography makes it extremely vulnerable to sea level rise and cyclones—which are predicted to increase in frequency and intensity with climate change. Thus climate change can be seen as a global justice issue because the perpetrators of climate change impacts (developed nations) and the victims of those impacts (developing nations) are distinct actors.
In addition to climate change being a global justice issue due to the disparities between the roles of developed and developing nations, the global justice issue can also be framed in terms of wealth. "Half the world’s carbon is emitted by the world’s richest 500 million people" meaning that regardless of where one lives, the higher their income, the higher their emissions. Although the United States has one of the highest per capita greenhouse gas emissions in the world, there are lower-income people in the U.S. with relatively lower emissions. Further, poorer people, regardless of where they live, are more likely to experience the effects of climate change because they have a reduced means to adapt compared to rich people.
Intergenerational ethics
The intergenerational ethics of climate change addresses the responsibility of current generations to be environmentally conscious and to ensure that the sustainable use of environmental resources can continue for future generations. Moral responsibility is a crucial consideration in intergenerational climate change ethics. This responsibility extends to various interests, including humans, animals, future people, and nature. The interests of the current generation must be weighed against those of future generations, balancing current needs against future aspirations.
The effects of climate change are dispersed temporally and spatially. Ethical implications due to spatial dispersion are those discussed in the previous section on global justice: those causing the problem are not in the same physical space as those experiencing the worst of its effects. Temporal ethical implications mainly relate to the fact that current greenhouse gas emissions will affect future generations more than they will affect current people. This notion of pushing climate change impacts on future people poses epistemic difficulties, making it hard to grasp cause and effect, which could undermine motivation to respond. Institutional inadequacy further complicates the issue. Democratic political institutions have relatively short time horizons which are at odds with the timescale of global climate change. Politicians are concerned about voter support for the next election, on a scale of a few years, whereas climate change operates on much longer timescales of hundreds to thousands of years. Therefore, climate change gets put on the back burner of political agendas because it won’t help politicians win the next election cycle.
Economics
Some economists propose prioritizing adaptation over mitigation due to the high costs associated with mitigation; however, conventional economic analyses have philosophical limitations. Such analyses discount future generations and prioritize human interests, failing to consider all relevant costs and benefits of climate change mitigation. Henry Shue argues that the "No Harm Principle" gives us reason for acting on climate change, despite the uncertainty of future impacts.
Temporal discounting
The concept of temporal discounting in economics is relevant to climate change ethics due to the temporal dispersion of its effects. Economists use discount rates to determine the value of future goods because it is assumed that the global economy will continue to grow and future people will have more goods than current people. The more goods you have, the less valuable any one good is; hence, future goods are discounted. Using different discount rates, economists can arrive at very different conclusions regarding how much of the global budget should be dedicated to climate change mitigation, adaptation, or other things. Prioritarianism offers one ethical justification for employing a high discount rate: because future people will be better off than we are today, benefiting people today is more valuable than benefiting future people. Utilitarianism, on the other hand, favors a lower discount rate (or none) under the idea that benefits to future people are equally valuable as benefits to current people.
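As a purely illustrative example of how strongly the choice of discount rate shapes such conclusions, the following sketch compares the present value of a hypothetical future climate damage under a low and a high annual discount rate; all figures are invented for illustration.

```python
# Hypothetical illustration of exponential discounting: PV = FV / (1 + r)**t.
# The damage figure, time horizon, and rates below are invented for illustration.

def present_value(future_value, annual_rate, years):
    """Value today of a cost or benefit occurring `years` from now."""
    return future_value / (1.0 + annual_rate) ** years

damage = 1.0e12  # $1 trillion of climate damage assumed to occur in 100 years
for rate in (0.014, 0.05):  # a low and a high annual discount rate
    pv = present_value(damage, rate, 100)
    print(f"discount rate {rate:.1%}: present value ~ ${pv / 1e9:,.0f} billion")

# The low rate values the future damage at roughly $250 billion today, the high
# rate at under $10 billion -- hence very different implied mitigation budgets.
```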
Human rights
Climate change is a pressing issue that threatens the basic human rights of individuals and communities around the world. Climate change violates several human rights, including the right to life, health, food, water, and shelter. Climate change exacerbates existing inequalities and disproportionately affects vulnerable populations, such as low-income communities, indigenous peoples, and small island developing states. Adopting a rights-based approach to climate change that recognizes the link between climate change and human rights would provide significant improvements.
A moral threshold approach to climate change identifies the minimum standards required to protect human rights. This approach involves identifying a set of moral principles that establish the minimum standards of protection required to ensure that human rights are not violated by climate change. The moral threshold approach also involves identifying the duties and responsibilities of different actors in addressing climate change, including states, corporations, and individuals.
States can take action to address climate change, as they are the primary sources of greenhouse gas emissions. States can take measures to reduce their emissions and contribute to the global effort to limit the increase in global temperatures. Additionally, corporations have a responsibility to reduce their emissions and contribute to sustainable development. Individuals can play a role by adopting sustainable lifestyles and advocating for policies that address climate change. It is also an open moral question whether or not acts of civil disobedience by individuals or groups aimed at raising awareness of the climate crisis can be justified.
Climate change is a human rights issue that requires action; a rights-based approach, combined with a moral threshold framework, has been proposed for addressing it. By recognizing the link between climate change and human rights, people can work towards a more just and equitable future for all. It is the responsibility of all actors, including states, corporations, and individuals, to take action to address climate change and protect human rights.
References
Climate change and society
Environmental ethics | Climate change ethics | [
"Environmental_science"
] | 1,760 | [
"Environmental ethics"
] |
73,158,275 | https://en.wikipedia.org/wiki/Dust%20corner | Dust corners are triangle-shaped pieces, usually made of brass or nickel, that are used to prevent dust from accumulating in corners. Stair dust corners are used on staircases at the point where the tread, riser, and stringer meet. Dust corners make household chores such as sweeping and vacuuming more convenient. Stair dust corners originated in the 1880s, during the Victorian era. Dust corners typically have a small hole in the middle so a nail can be hammered into the stairs. Gail Caskey Winkler, author of Victorian Interior Decoration, believes dust corners originated in response to the public's new knowledge of the germ theory of disease.
Dust corners are most commonly found in older homes, but are still available for purchase in the 21st century. Vacuum cleaners have made dust corners largely obsolete, but dust corners are still used for decorative purposes and may make vacuum cleaning easier.
See also
Dust bunny
Broom
Stairs
Stair tread
Vacuum cleaner
Victorian decorative arts
References
1880s introductions
Cleaning
Dust
Stairs
Victorian culture | Dust corner | [
"Chemistry"
] | 200 | [
"Cleaning",
"Surface science"
] |
73,160,662 | https://en.wikipedia.org/wiki/Nalepella | Nalepella, the rust mites, is a genus of very small Trombidiform mites in the family Phytoptidae. They are commonly found on a variety of conifers, including hemlock, spruce, balsam fir, and pine. They sometimes infest Christmas trees in nurseries. Nalepella mites are vagrants, meaning they circulate around the tree; females overwinter in bark cracks. Infested spruce emit a characteristic odour.
Distribution
The genus is holarctic, and species are found in North America, Europe, and China.
Effects
The mites feed on the cell sap of the tree's needles, sometimes causing severe damage. Typical effects from a Nalepella infestation include needle discolouration and premature needle drop. The colour of discolouration varies by species; for example, Nalepella tsugifoliae causes yellowed or grey discolouration, while Nalepella halourga's discolouration is more bronze in colour. Some species are considered serious pests of ornamental coniferous trees. They are commonly found on Christmas trees in North America and Europe, and they may seriously damage the tree.
Spruce infested by Nalepella were found to increase emissions of certain compounds that may cause the characteristic smell of infested plants. Another study in 2009 found that some compounds emitted by infected spruce attracted or repelled Hylobius abietis, another pest of conifers.
Life cycle
Nalepella mite eggs overwinter on needles, then hatch early in the spring. As cold-season mites, they are most active in the early spring and the fall. The mites deposit eggs during the fall, but may continue to be active into the winter. They have multiple generations per year.
Species
Species details
Nalepella brewrieanae
N. brewrieanae was first discovered in 2003 on Picea breweriana. It was first described from Germany, but is also known from Poland. Besides P. breweriana, it is also known from P. abies and P. glauca.
Nalepella danica
Nalepella danica infests members of the Abies (fir) genus. Specifically, it has been recorded from A. alba, A. concolor, A. lasiocarpa, and A. nordmanniana. It causes small rusty brown to bronze spots on the needles of its host plant, but a severe infestation can result in defoliation. Nymphs typically grow to between 90 and 108 μm, while adult females grow to between 145 and 240 μm. They are known exclusively from Denmark.
Nalepella ednae
Nalepella ednae is distributed across the central and Northwestern United States, as well as in British Columbia. They are of concern in Mexico, where they may be introduced via cut Christmas trees. Although it is only known from a few fir species, all may be hosts. The damage they cause is unknown.
Nalepella haarlovi
Nalepella haarlovi is known from Denmark and Finland. It has been recorded infesting Picea sitchensis. They are one of the most economically important members of the genus. This species has four to eight generations per year.
Nalepella halourga
Nalepella halourga, commonly known as the spruce rust mite, is restricted to Picea (spruce). Their colour varies throughout the year; during the growing season, they are colourless to pale yellow, but in the fall they turn reddish-purple. They are found in Eastern North America.
Nalepella longoctonema
Nalepella longoctonema was first described in 1991 from two fir species in Oregon. They grow to 206 μm in length, and have been collected in large numbers on fir plantations. They are one of the most economically important members of the genus.
Nalepella shevtchenkoi
Nalepella shevtchenkoi lives around the bases of the host plant's needles, as well as on its stems. It is known from Abies (fir) and Picea (spruce) species. The species is considered one of the most damaging of the eriophyoid mites. It is found in parts of central and eastern Europe.
Nalepella tsugifoliae
The hemlock rust mite is reddish-orange in colour, and has relatively large eggs. They infest fir, hemlock, larch, and yew at high densities; there may be as many as 100 mites on one needle. Infested trees turn bluish, then yellow, before beginning to drop needles. They feed on both sides of the tree's needles.
Notes
References
Pest arthropods
Trombidiformes
Trombidiformes genera | Nalepella | [
"Biology"
] | 1,004 | [
"Pest arthropods",
"Pests (organism)"
] |
73,160,668 | https://en.wikipedia.org/wiki/Japanese%20Adverse%20Drug%20Event%20Report%20database | The Japanese Adverse Drug Event Report (JADER) database is a spontaneous reporting system of drug adverse events which is managed by the Pharmaceuticals and Medical Devices Agency (PMDA) in Japan. It has been available since 2012.
See also
Pharmacovigilance
FDA Adverse Event Reporting System (FAERS)
References
Government databases in Japan
Pharmacovigilance databases | Japanese Adverse Drug Event Report database | [
"Chemistry"
] | 75 | [
"Pharmacovigilance databases",
"Drug safety"
] |
73,160,700 | https://en.wikipedia.org/wiki/Christine%20Joblin | Christine Joblin is a French astrochemist who uses spectroscopy to study photodissociation and the polycyclic aromatic hydrocarbons in cosmic dust. Beyond her experimental and observational work, she also contributed to the first clear finding of buckminsterfullerene in a meteorite, a ureilite that exploded over the Nubian Desert in late 2008. She is a director of research for the French National Centre for Scientific Research (CNRS), affiliated with the Institut de Recherche en Astrophysique et Planétologie in Toulouse.
Education and career
Joblin earned a master's degree in astrophysics in 1989 through study at Paris Diderot University, Paris-Sud University, and the École normale supérieure (Paris). She completed a Ph.D. in astrophysics in 1992 at Paris Diderot University, and has a 2005 habilitation at Toulouse III - Paul Sabatier University.
After postdoctoral research at NASA's Ames Research Center in California from 1992 to 1995, she has been a CNRS researcher in Toulouse since 1995, originally with the Centre d'Étude Spatiale des Rayonnements (CESR, the Center for the Study of Radiation in Space). She was promoted to director of research in 2007. In 2011 CESR and several other laboratories merged to form the Institut de Recherche en Astrophysique et Planétologie (IRAP, the Institute for Research in Astrophysics and Planetology), her current affiliation.
Outreach
In an effort to bring her research to a wider audience, Joblin co-created an English-language comic book and webcomic, Estrella, with visual artist Lorenzo Palloni. Its plot features a young girl who (as in the 1966 film Fantastic Voyage) is shrunk to nanoscopic scale to learn about the creation of cosmic dust in dying stars. It was published in 2018 by ERCcOMICS, a program of the Publications Office of the European Union.
Recognition
Joblin won the 2001 young scientist prize of the Société Française d'Astronomie et d'Astrophysique, and received the CNRS Silver Medal for 2015. She was the 2020 winner of a prize of the French Academy of Sciences.
She was named a chevalier of the Legion of Honour in 2016, and was named to the Ordre national du Mérite in 2022.
References
External links
Estrella (archived copy)
Year of birth missing (living people)
Living people
French astrophysicists
French chemists
French women chemists
Women astrophysicists
Astrochemists
Research directors of the French National Centre for Scientific Research
Knights of the Legion of Honour
Recipients of the Ordre national du Mérite | Christine Joblin | [
"Chemistry"
] | 545 | [
"Astrochemists"
] |
73,161,370 | https://en.wikipedia.org/wiki/Farmacovigilancia%20Espa%C3%B1ola%2C%20Datos%20de%20Reacciones%20Adversas | The Farmacovigilancia Española, Datos de Reacciones Adversas (FEDRA), also known as the Spanish Pharmacovigilance Datatabase or Spanish Pharmacovigilance System, is a pharmacovigilance database in Spain which was developed in 1982.
See also
Pharmacovigilance
FDA Adverse Event Reporting System (FAERS)
References
External links
https://www.notificaram.es/
Pharmacovigilance databases | Farmacovigilancia Española, Datos de Reacciones Adversas | [
"Chemistry"
] | 109 | [
"Pharmacovigilance databases",
"Drug safety"
] |
73,163,203 | https://en.wikipedia.org/wiki/Shokken | Shokken (食券 "food ticket") are a type of Japanese ticket machine/vending machine, usually used at restaurants for ordering food.
Information
Shokken machines were first seen in 1926 at Tokyo Station. There are currently over 43,000 shokken machines in Japan.
Shokken are often found in restaurants, cafes, fast-food restaurants and other establishments. A typical shokken machine features buttons where the customer can select an item, a coin slot, where the customer can pay for the item and a printer where the customer can receive their receipt. Upon receiving their receipt, the customer can then exchange their receipt for their purchased item. Shokken machines can be standalone machines and sometimes are located on countertops and tables.
Companies often use shokken machines as they can reduce the amount of staff needed, reduce theft, reduce the turnover rate and can help reduce ordering errors. While useful, shokken machines are not associated with a fine dining atmosphere, as they are often seen in inexpensive restaurants such as Matsuya, Yoshinoya and Sukiya. Shokken machines also can break and limit customized orders.
References
Dispensers
Vending machines
Retail formats
Commercial machines | Shokken | [
"Physics",
"Technology",
"Engineering"
] | 237 | [
"Machines",
"Commercial machines",
"Vending machines",
"Automation",
"Physical systems"
] |
73,163,376 | https://en.wikipedia.org/wiki/PLAC-Seq | Proximity ligation-assisted chromatin immunoprecipitation sequencing (PLAC-seq) is a chromatin conformation capture(3C)-based technique to detect and quantify genomic chromatin structure from a protein-centric approach. PLAC-seq combines in situ Hi-C and chromatin immunoprecipitation (ChIP), which allows for the identification of long-range chromatin interactions at a high resolution with low sequencing costs. Mapping long-range 3-dimensional(3D) chromatin interactions is important in identifying transcription enhancers and non-coding variants that can be linked to human diseases.
Different 3C-based techniques have been used to study the higher-order 3D chromatin structure, and it has been combined with high-throughput sequencing to determine the chromatin structure on a genome-wide level. Hi-C is one of the most widely used 3C-based techniques because it allows for high-resolution (kilobase-scale) genome-topology identification. However, it requires billions of sequencing reads which has limited its application. Another commonly used 3C-based technique is chromatin interaction analysis by paired-end tag sequencing (ChiA-PET). ChiA-PET can identify long-range interactions of transcription promoters and enhancers at a high resolution but requires millions of cells.
PLAC-seq alleviates these issues by using in situ Hi-C, which creates long-range DNA contacts in situ in the nucleus before lysis. Unlike ChiA-PET which performs ChIP and proximity ligation after chromatin shearing, performing proximity ligation in the nuclei first prevents large disruptions of protein/DNA complexes. This decreases false-positive interactions and improves DNA contact capture efficiency, meaning that PLAC-seq is more accurate and requires fewer cells.
History
PLAC-seq was developed in 2016 and an almost identical technique called HiChIP was also developed in the same year. Both methods combine in situ Hi-C and ChIP but have different library preparation methods. While PLAC-seq uses biotin pull-down followed by end-repair, adapter ligation, and PCR, HiChIP uses Tn5 tagmentation, biotin pull-down, and PCR. However, both techniques can use the same quality control and data analysis techniques.
Different computational software tools can be used to analyze the data from PLAC-seq, for example, Fit-Hi-C, HiCCUPS, Mango, Hichipper, MAPS, and FitHiChIP. Many of the earlier software tools were developed for other 3C-based technologies and were not optimized for PLAC-seq/HiChIP data. Fit-Hi-C and HiCCUPS, both developed in 2014, were mainly developed for Hi-C data, and utilize a matrix-balancing-based normalization approach. Mango was developed in 2015, and is mainly used for ChIA-PET data, but has high false-positive rates in analyzing PLAC-seq/HiChIP data due to the different biases. Hichipper was developed in 2018 to alleviate this issue and introduced a bias-correcting algorithm, but it still has difficulties identifying protein interactions between protein-binding and non-protein-binding regions on the chromosome. MAPS and FitHiChIP were developed in 2019 as PLAC-seq/HiChIP-specific analysis pipelines, and are generally thought to be more effective than the existing models at analyzing PLAC-seq/HiChIP data.
Procedure
The general workflow of PLAC-seq involves cell harvesting and crosslinking, in situ digestion and proximity ligation, ChIP, library construction, sequencing, and data analysis. The first step of PLAC-seq includes the preparation and crosslinking of cell and tissue samples, which typically begins with cell collection through centrifugation. The next step involves the use of a DNA crosslinking agent such as formaldehyde (HCHO), followed by the addition of glycine to stop the crosslinking reaction. The cross-linked cells can then be pelleted by centrifugation and either stored at −80 °C or used in the next step of the procedure.
In situ digestion involves cell lysis with the use of a lysis buffer followed by digestion with the restriction enzyme MboI. This step allows for uniform digestion of genetic material while keeping the crosslinked regions of the chromosome intact. After inactivation of the digestion reaction, dNTPs and biotin are added in order to repair overhangs and mark the DNA for pull-down, respectively. In situ proximity ligation occurs when the biotinylated ends of the crosslinked DNA are ligated with each other. Chromatin fragmentation by sonication allows for the shearing of non-crosslinked fragments of DNA. This is followed by immunoprecipitation of biotinylated DNA through the use of antibody-coated beads. The DNA is then reverse-crosslinked and purified using column-based DNA purification or phenol-chloroform extraction.
The library construction step first involves the pull-down of biotinylated DNA and the addition of sequencing adapters. The cycle number for amplification needs to be determined prior to the final amplification and library purification.
Data analysis of PLAC-seq sequencing data can be carried out in multiple ways; common methods involve the use of Fit-Hi-C, FitHiChIP, and MAPS. Data analysis involves mapping to a reference genome, using software tools such as Hichipper to identify peaks, and downstream analysis involving peak comparison and functional enrichment analysis. The resulting data can also be integrated with other genomic data, such as Hi-C or RNA-seq, in order to identify potential regulatory networks.
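The analysis tools named above each have their own interfaces, which are not reproduced here. Purely as an illustration of what a generic downstream filtering step might look like after interaction calling, the following sketch assumes a hypothetical tab-separated table of candidate interactions with invented column names and an arbitrary FDR cutoff; it does not represent the interface of MAPS, FitHiChIP, Hichipper, or any other specific tool.

```python
# Hypothetical downstream step after interaction calling: keep statistically
# significant long-range contacts from a BEDPE-like table.  The file name,
# column names, and thresholds are assumptions made for this sketch only.
import pandas as pd

loops = pd.read_csv("called_interactions.tsv", sep="\t")

significant = loops[
    (loops["fdr"] < 0.01)                                            # significance cutoff
    & ((loops["bin2_start"] - loops["bin1_start"]).abs() > 20_000)   # long-range only
]

significant.to_csv("significant_loops.tsv", sep="\t", index=False)
print(f"kept {len(significant)} of {len(loops)} candidate interactions")
```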
Applications
PLAC-seq was developed to map and analyze long-range chromatin interactions. These interactions have important implications when it comes to the transcriptional regulation of genes.
One challenge for mammalian cells is fitting around two meters of genetic material into a nucleus that is around a few microns in diameter, and at the same time organizing the genetic material to be able to access and use the genetic and epigenetic information. To do this, DNA is compacted around histone octamers into 2D structures, and then further packaged into 3D compartments by various mechanisms such as cis-regulatory interactions and repressive interactions. Therefore, chromosomal regions distant in 2D may have intra- and interchromosomal long-range interactions in 3D. These 3D structures are involved in the induction and repression of genes that have biological implications on basic cell functions such as cell cycle, replication, and development. Aberrant 3D structures have roles in the development of diseases and abnormalities such as cancer. This can involve interactions between promoters and terminators/enhancers through the formation of long-range chromatin loops.
PLAC-seq has been utilized to study H3K4me3 and H3K27ac PLACE (PLAC-Enriched) interactions. It has also been used to call significant H3K4me3-mediated chromatin interactions, thereby allowing for the identification of differential epigenetic modification in different cell types, such as those found in the developing human cortex.
Use
Advantages: Compared to ChIA-PET, PLAC-seq requires a significantly smaller amount of starting biological material. Because shearing is one of the first steps in ChIA-PET, it leads to the disruption of protein and DNA complexes. PLAC-seq avoids this by having the crosslinking reaction precede the shearing process. Furthermore, PLAC-seq requires fewer sequencing reads than Hi-C. While ChIA-PET requires 100 million starting cells, PLAC-seq only requires 5 million cells. Even with 20-fold fewer cells, PLAC-seq was able to produce more reads (175 million) with a lower PCR duplication rate (33%) than ChIA-PET (16 million reads and 44%, respectively). PLAC-seq was also nearly 100 times more cost-effective than ChIA-PET.
Disadvantages: While many of the 3C-based techniques have different biases from the protocols, PLAC-seq (and HiChIP) data have biases from immunoprecipitation efficiencies that need to be corrected for in the computational step. Effective ways of reducing and/or removing the different biases in 3C-based technologies is still being studied.
References
Molecular biology techniques | PLAC-Seq | [
"Chemistry",
"Biology"
] | 1,784 | [
"Molecular biology techniques",
"Molecular biology"
] |
73,164,006 | https://en.wikipedia.org/wiki/4%20Draconis | 4 Draconis, also known as HR 4765 and CQ Draconis, is a star about 570 light years from the Earth, in the constellation Draco. It is a 5th magnitude star, so it will be faintly visible to the naked eye of an observer far from city lights. It is a variable star, whose brightness varies slightly from 4.90 to 5.12 over a period of 4.66 years.
In 1967, Olin Eggen discovered that 4 Draconis is a variable star, during a multicolor photometric survey of red stars. In 1973 it was given the variable star designation CQ Draconis.
Until the year 1985, 4 Draconis was thought to be a normal red giant star. In 1985, Dieter Reimers announced that the International Ultraviolet Explorer had detected a hot companion to the red giant, which itself appeared to be a binary cataclysmic variable star, making the complete system a triple star. However a 2003 study by Peter Wheatley et al., who examined ROSAT X-ray data for the star, concluded that the hot companion was more apt to be a single white dwarf, rather than a binary, and that the white dwarf is accreting material from the red giant. There does not yet appear to be a consensus about the multiplicity; some later studies consider 4 Draconis to be a binary, and some a triple.
In 1987, Alexander Brown announced that 6 cm wavelength radio emission had been detected by the Very Large Array. The strength of the radio emission was variable on a timescale of weeks to months.
It is possible that an outburst of 4 Draconis was the "guest star" reported by Chinese astronomers in the year 369 CE, in the constellation Zigong.
References
Draco (constellation)
060998
108907
Draconis, CQ
Z Andromedae variables
Draconis, 4
M-type giants | 4 Draconis | [
"Astronomy"
] | 397 | [
"Constellations",
"Draco (constellation)"
] |
73,164,564 | https://en.wikipedia.org/wiki/Ammonium%20hexachlorotellurate | Ammonium hexachlorotellurate is an inorganic chemical compound with the chemical formula .
Physical properties
The compound forms yellow octahedral crystals that decompose gradually in air. The compound contains ammonium cations (NH4+) and hexachlorotellurate(IV) anions ([TeCl6]2−).
References
Tellurium compounds
Ammonium compounds | Ammonium hexachlorotellurate | [
"Chemistry"
] | 74 | [
"Salts",
"Ammonium compounds",
"Inorganic compounds",
"Inorganic compound stubs"
] |
73,164,769 | https://en.wikipedia.org/wiki/Ordered%20two-template%20relay | Ordered Two-Template Relay (OTTR) is a library preparation technique used to improve quantitation of highly modified non-coding RNA (ncRNA) species, which have been difficult to characterize using traditional cDNA sequencing approaches. OTTR leverages a retroelement reverse transcriptase (RT), termed BoMoC, with template jumping properties and high processivity across modified RNA templates, to generate cDNA products for next-generation sequencing (NGS). Overall, OTTR offers a streamlined approach for cDNA library production of full-length and modified ncRNA targets.
Background
Cellular ncRNA pools are known to be dynamically regulated and can have high degrees of variation between different cell types and developmental stages. Dysregulation of transfer RNAs (tRNAs), a type of ncRNA, has been linked to a diverse array of detrimental physiological conditions including neurological diseases and cancer. While characterization of transfer RNA (tRNA) diversity is relevant to disease, current library preparation approaches are limited in their ability to capture highly modified tRNA bases, which block reverse transcriptase and interfere with the production of full-length cDNA intermediates needed for sequencing. To date, several cDNA library preparation techniques, including OTTR, have attempted to overcome these problems and improve our ability to characterize ncRNA pools.
OTTR Workflow
BoMoC Reverse Transcriptase
Reverse transcriptases (RTs) are polymerases capable of synthesizing complementary DNA (cDNA) using either RNA or DNA templates and have become essential biotechnology tools in both clinical and laboratory settings. OTTR makes use of a unique non-long terminal repeat (LTR) retroelement RT called BoMoC, due to its specialized ability to synthesize cDNA opposite templates containing modified bases or sugar backbones and its high processivity across discontinuous RNA templates. Originally purified from the silk moth Bombyx mori, the BoMoC used in OTTR is N-terminally truncated and modified to introduce a stabilizing active site mutation.
cDNA Synthesis
Initially, the RNA or DNA of interest is purified and denoted as the input template (IT). The OTTR library preparation protocol requires that the IT is incubated with BoMoC, which uses terminal transferase or 'tailing' activity to add a chain-terminating dideoxynucleotide base, ddRTP (ddATP or ddGTP), to the 3′ end of the IT, in the presence of manganese (Mn2+). To promote cDNA synthesis in the steps to follow, the divalent cation source is switched to magnesium (Mg2+), free ddRTPs are inactivated and dNTPs are added. Next, an RNA-DNA duplex containing a 3′ +1Y (dTTP or dCTP) base overhang is added, allowing the RNA-DNA duplex to base pair with the ddRTP-containing IT. BoMoC extends from the 3′ end of the +1Y base across the IT. Following this, dNTP concentrations are altered to encourage the addition of dGTP to the 3′ end of the IT cDNA through the non-template nucleotide addition (NTA) activity of BoMoC. A 3′ adaptor template (AT) containing a 3′ dCTP is added to the reaction, promoting base pairing between the cDNA 3′ G overhang and the 3′ C base of the AT, and subsequent extension by BoMoC. When using RNA as the input template, addition of RNase A and RNase H is needed to degrade remaining RNA, leaving only the cDNA template.
Library preparation for sequencing
Based on the sequencing approach used, the 5’ and 3’ adaptor sequences used to tag the cDNA library can be altered as needed. Previously, dual adapter-tagged cDNA libraries have been characterized using Illumina NGS. Low-cycle PCR can also be used to index universal adaptor cDNA libraries following the RT reaction. Alternatively, full-length adaptor sequences of choice can be included in the 5’ and 3’ adaptors used in the initial RT reaction.
Applications
To date, OTTR has been used for quantifying tRNA species. This method provides a reliable and precise quantification of tRNAs and allows for the detection of changes in tRNA levels under different physiological conditions. Therefore, this method is useful for a variety of research applications in the field of molecular biology and genetics.
As tRNAs are essential components of translation, the ability to quantify tRNA levels accurately is crucial for understanding how the translation machinery is regulated. Using OTTR, researchers can determine changes in tRNA levels in response to different growth conditions, environmental stress, or genetic modifications. This information can help to identify factors that affect tRNA abundance and their potential roles in modulating translation.
In particular, the OTTR protocol has been used to characterize the small RNA composition of mammalian sperm populations. This work revealed that sperm small RNA pools are composed largely of rRNA fragments and both 5' and 3' tRNA halves from the majority of tRNAs. This improved understanding of sperm payload composition has implications for our understanding of the biogenesis of structural RNA fragments in the male germline, as well as the biochemical nature of the RNAs delivered to the zygote upon fertilization.
The OTTR protocol could also be used in the study of piwi-interacting RNAs (piRNAs), another important class of small RNAs. While the applications of OTTR have mainly been focused on tRNA fragment detection, OTTR has also been shown to perform well in the capture of miRNAs, which could be useful for the study of miRNA expression patterns in different cell types or under different conditions.
Future applications of protocols similar to OTTR are also being considered in the development of diagnostic assays for small RNAs. The high fidelity of the OTTR protocol in capturing small RNAs could make it an attractive option when paired with liquid biopsy assays, where circulating small RNAs are analyzed as biomarkers for various diseases.
Advantages and Limitations
Advantages
Full-length ncRNA capture – BoMoC has high processivity across modified RNA templates, resulting in the production of more full-length cDNA products compared to common reverse transcriptases, which are prone to premature termination at modified sites.
Single tube reaction – All cDNA synthesis steps can be performed in a single tube, without the need for intermediate purification steps. This allows for automation of the OTTR approach. Additionally, this reduces the total amount of input RNA required, since intermediate purification steps are prone to sample loss.
Capture of modified sites – Mis-incorporation signatures of BoMoC across modified RNA bases have been characterized. Therefore, OTTR allows for the identification of modification status of RNA transcripts where modified sites are known.
Sequencing of both RNA and DNA templates – While RNA templates have been used to benchmark the OTTR technique, BoMoC is processive over DNA, meaning this library preparation approach could be adapted for DNA characterization.
Limitations
Identification of novel modified sites – BoMoC does not capture the modification directly but rather the mis-incorporation signature resulting from the modified base. Therefore, at sites where the modification status has not been previously established, it may be difficult to confirm the identity of the modification.
Premature termination of cDNA – Approximately 10% of miRNA species characterized using OTTR yield prematurely terminated cDNA products. While these levels are low compared to common reverse transcriptases, some species carrying very bulky modifications may be challenging to capture with OTTR.
Diversity in species used for benchmarking – To date, miRNA and tRNA pools have been characterized with OTTR. However, further validation with diverse RNA and DNA species will broaden the potential applications of OTTR.
References
RNA sequencing
Molecular biology | Ordered two-template relay | [
"Chemistry",
"Biology"
] | 1,606 | [
"Genetics techniques",
"RNA sequencing",
"Molecular biology techniques",
"Molecular biology",
"Biochemistry"
] |
73,164,831 | https://en.wikipedia.org/wiki/Myelinoid | A myelinoid or myelin organoid is a three dimensional in vitro cultured model derived from human pluripotent stem cells (hPSCs) that represents various brain regions, the spinal cord or the peripheral nervous system in early fetal human development. Myelinoids have the capacity to recapitulate aspects of brain developmental processes, microenvironments, cell to cell interaction, structural organization and cellular composition. The differentiating aspect dictating whether an organoid is deemed a cerebral organoid/brain organoid or myelinoid is the presence of myelination and compact myelin formation that is a defining feature of myelinoids. Due to the complex nature of the human brain, there is a need for model systems which can closely mimic complicated biological processes. Myelinoids provide a unique in vitro model through which myelin pathology, neurodegenerative diseases, developmental processes and therapeutic screening can be accomplished.
History
In vitro models have been a critical component of many biological studies. Monolayers, or 2D cultures, have been widely used in the past, however, they are limited by their lack of complexity and fail to recapitulate tissue architecture involved in biological processes occurring in vivo. Model organisms, such as Mus musculus, Caenorhabditis elegans, Drosophila melanogaster, and Saccharomyces cerevisiae, recapitulate biological complexity better than 2D monolayer cultures. However, these model organisms do not perfectly capture human biology. Specifically, there are stark differences in brain development between mice and humans. Major developmental differences include variability in division patterns of neural stem cells and localization and types of glial cells that occur at specific stages in development.
Leveraging pluripotent stem cell technologies, brain organoids and cerebral organoids were developed to fill the gap in model systems to study human specific brain development and pathology in vitro. The first cerebral organoid was established in 2013. Since then, various protocols have emerged for generating organoids for different brain regions such as cerebellar, hippocampal, midbrain, forebrain, and hypothalamic organoids. Cerebral organoids provide a neurological model through which diseases, development and therapeutics can be studied. However, a major constraint of cerebral organoids is that they lack robust myelin formation and are therefore not well suited to studies investigating white matter.
This limitation of cerebral organoids was addressed in 2018 when brain organoids containing a robust population of myelinating oligodendrocytes were generated. The process of generating these myelinated brain organoids lasted 210 days and involved the addition of various growth factors and media at specific time points. Due to the prolonged duration of the 2018 protocol, there were efforts to speed up and streamline the differentiation and generation of these myelinated organoids. A similar protocol which differed slightly in growth factors added and timing of media changes was described in 2019. This protocol was able to generate organoids with compact myelin formation by day 160.
Another protocol developed in 2019 demonstrated that myelinated organoid generation could be accelerated further. Using a novel protocol, myelin basic protein (MBP), a marker for oligodendrocyte differentiation and myelination in the CNS, was detectable as early as day 63 (9 weeks) and myelinated axons were observed by day 105 (15 weeks), effectively halving the duration of the protocol.
A protocol of similar duration was established in 2021, however, the resulting organoids differ slightly in their biological context. This protocol leveraged the fact that spinal cord myelination is observed prior to cortical myelination. This protocol generated organoids with robust myelination with a ventral caudal cell fate. These organoids, although not technically brain organoids, can also be used to study myelin disease pathology, validated in the study through generating organoids recapitulating the disease pathology observed in Nfasc 155-/- patients. In this protocol, they referred to their myelinated organoids as "myelinoids" thus creating the category of organoids referred to as myelinoids.
In 2021, a group of researchers aimed to address the fact that the lengthy differentiation protocols renders myelinoids less practical for high throughput experimentation such as drug screening. To do this, scientists developed a human induced pluripotent stem cell (hiPSC) line that relies on early expression of an oligodendroglial gene which enabled the accelerated generation of myelinated organoids in just 42 days. To date, this is the fastest protocol for generating mature oligodendrocytes in a brain organoid.
Culturing methods
To generate organoids, human pluripotent stem cells (hPSCs) are allowed to aggregate into embryoid bodies (EBs) in low attachment plates (in suspension), which are then cultivated in a rotating bioreactor with lineage specific factors to promote cell amplification, growth and differentiation. EBs have the capacity to differentiate into all embryonic germ layers, mesoderm, endoderm and ectoderm. In vivo, the nervous system, including myelin, is generated from the ectoderm. To recapitulate this in vitro and generate myelin organoids, the EBs are cultured in media with specific growth factors and supplements that lead to ectodermal differentiation specifically, followed by subsequent neural induction. More specifically, neural induction factors are added to induce the formation of neural progenitor cells which give rise to neurons and glial cells, including oligodendrocytes, in vivo.
A well established method used to efficiently differentiate hPSC into neural cells is by dual inhibition of SMAD signaling using dorsomorphin (also known as compound C) and SB431542. To promote further proliferation of neural precursor cells specific growth factors are added to the media such as epidermal growth factor (EGF) and fibroblast growth factor 2 (FGF-2). Before neural and glial induction, the spheroids are generally embedded in an extracellular matrix, such as Matrigel, and transferred to a rotating bioreactor where different small molecules and growth factors are continuously supplemented to promote the differentiation of cells into specific structures and cell types.
In vivo, neuronal induction precedes oligodendrocyte formation. Therefore, in culture, neuronal induction factors are added first to induce neuro-cortical patterning of the spheroids, followed by factors that induce oligodendrocyte precursor cell (OPC) formation and differentiation into oligodendroglia. To promote formation of neurons from neural precursor cells, brain-derived neurotrophic factor (BDNF) and neurotrophic factor 3 (NT3) can be added to the media. Subsequently, factors such as platelet-derived growth factor AA (PDGF-AA) and insulin-like growth factor 1 (IGF-1) are added to the media to result in an expansion of the OPC populations present within the organoid by promoting OPC proliferation and survival.
Finally, factors that induce OPC differentiation into oligodendrocytes, and ultimately myelinating oligodendrocytes, are added. These include thyroid hormone (T3), which has been shown to induce oligodendrocyte generation from OPCs in vivo. The organoids are maintained in suspension where they grow and mature until required for analysis. The fundamentals of this workflow are generally used to obtain myelin organoids; however, various protocols that rely on it have introduced multiple modifications for different purposes. Madhavan et al. were the first to establish a reproducible protocol that allowed for generating organoids with robust OPC and oligodendrocyte populations, and therefore myelination; these are referred to as myelin organoids, or myelinoids.
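The staged addition of factors described above can be summarized schematically. The short Python snippet below is only an illustrative summary of the stages and representative factors named in this section; timings, concentrations, and media compositions vary between published protocols and are omitted.

```python
# Illustrative summary of a generic myelinoid differentiation workflow, based on the
# factors named in this section. Stage order follows the text; exact timing and
# concentrations differ between published protocols.
myelinoid_workflow = [
    ("Embryoid body formation", ["hPSC aggregation in low-attachment plates"]),
    ("Neural induction (dual SMAD inhibition)", ["dorsomorphin", "SB431542"]),
    ("Neural precursor expansion", ["EGF", "FGF-2"]),
    ("Neuronal differentiation", ["BDNF", "NT3"]),
    ("OPC expansion", ["PDGF-AA", "IGF-1"]),
    ("Oligodendrocyte differentiation and myelination", ["T3 (thyroid hormone)"]),
]

for stage, factors in myelinoid_workflow:
    print(f"{stage}: {', '.join(factors)}")
```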
Properties and components
The generation of myelin organoids generally relies on neurocortical patterning factors that establish the structural and cellular framework necessary for the induction of oligodendrogenesis later on in the differentiation protocol. Therefore, the properties and components of myelin organoids in the early stages of differentiation are very similar to that observed in cerebral organoids where populations of neural progenitor cells, precursors of neurons and glial cells, start to emerge and self-organize into distinct layers that recapitulate features of the cortex during early embryogenesis.
At these early stages, myelin organoids start to form large continuous neuroepithelial structures that encompass a fluid-filled cavity representative of a brain ventricle. The progenitor cells surrounding the putative ventricle organize into distinct layers defined by specific neural markers that become more defined as the organoid matures. The layers include a ventricular zone surrounding the cavity with cells expressing PAX6, SOX2 and Ki-67, followed by the outer subventricular zone and intermediate zone with cells expressing Ki-67 and TBR2, and finally a cortical plate layer with cells expressing CTIP2, MAP2 and TBR1.
Following neurocortical patterning, oligodendrocyte lineage growth factors drive the expansion of the native OPC populations, causing a substantial increase in the number of cells expressing SOX6, SOX10 and OLIG2, markers of glial induction and OPC specification. As the myelin organoid matures, the OPCs differentiate into oligodendrocytes that express proteolipid protein 1 (PLP1), the predominant component of myelin, and MYRF, an oligodendrocyte-specific transcription factor. The oligodendrocytes are distributed throughout the neuronal layers where, upon maturation, their processes express MBP and CNP (an early myelination marker) and begin extending to wrap and myelinate the surrounding axons. The myelin undergoes maturation, refinement and compaction, eventually leading to the formation of functional neuronal networks with compactly wrapped myelin lamellae. Further myelin maturation leads to distinct axonal subdomains with a paranodal axo-glial junction (PNJ) and node of Ranvier. The observation of paranodal and nodal assembly is protocol dependent: some protocols report it, while others do not. Overall, the oligodendrocytes in myelin organoids demonstrate the ability to form compact myelin that wraps and organizes around neuronal axons, recapitulating the three dimensional architecture of myelinated axonal networks in humans.
Applications
Disease modelling
Myelinoids recapitulate various fundamental aspects of brain development and myelination, and therefore of related disease and pathology. They can thus be used to model various diseases and to study disease mechanisms associated with myelin defects, including neurodegenerative diseases, CNS injury, Pelizaeus-Merzbacher disease (PMD), and NFASC-related disorders.
Pelizaeus-Merzbacher disease (PMD)
PMD is a rare monogenic disease caused by various mutations of the X-linked proteolipid protein 1 gene (PLP1). PLP1 is a critical protein for myelin formation. PMD is classified as a leukodystrophy, meaning that it is a disease affecting the white matter of the brain. Madhavan et al. tested how well their myelinoid system could recapitulate the established cellular pathology of PMD. Organoids were derived from three patients with varying disease severity, with the subjects carrying a deletion, a duplication, and a point mutation showing mild, moderate, and severe phenotypes, respectively. Their results demonstrated that the myelinating oligocortical spheroids recapitulated the degrees of cellular pathology associated with the genetic variants, and can therefore serve as models for understanding the relationships between PMD genotypes and phenotypes, which have not yet been fully characterized.
Neurofascin (NFASC) nonsense mutation
The NFASC gene encodes a cell adhesion molecule that is involved in neurite outgrowth and fasciculation. Additionally, NFASC is involved in the organization of the axon initial segment and the nodes of Ranvier during development. Patients with nonsense mutations in NFASC have abnormalities in the paranodal axo-glial junction (PNJ). James et al. demonstrated widespread myelin formation in both patient- and control-derived myelinoids; however, as expected, the PNJs in patient-derived myelinoids showed disrupted paranode formation.
Myelin structure and integrity analysis
Myelin structure and integrity is inherently hard to study in humans at a molecular level. MRI can shed light on myelin abnormalities in a human brain, however, many studies utilize animal models to study myelin related changes in response to genetic variants. Myelinoids provide a 3D human derived system to study myelin structure. Measuring the number and length of myelin sheaths, paranodal/nodal organization and structure, myelin volume and compaction, cellular identity and composition, and cellular organization are all methods for quantifying myelin changes.
Testing drugs and therapeutics
Studies have shown that in myelinoids, human myelination can be pharmacologically manipulated in a quantifiable manner at both cellular and global levels across the myelinoids. Therefore, myelin organoids can be used as a preclinical model for evaluating myelin associated candidate therapeutics and drugs in a human physiologically relevant context.
Promyelinating drugs
Myelin organoids can be used to study the therapeutic potential of possible myelination strategies for individuals with diseases associated with demyelination such as leukodystrophies and multiple sclerosis, an auto-immune demyelinating disease affecting the CNS.
Clemastine and ketoconazole are promyelinating drugs that function as potent stimulators of oligodendrocyte generation and myelination in rodent models. The previously known effects of both drugs have been recapitulated using myelin organoids as they enhanced and accelerated the extent and rate of oligodendrocyte generation, maturation and myelination in organoids.
ER stress pathway small-molecule modulators
In certain classes of Pelizaeus-Merzbacher disease (PMD), proteolipid protein 1 (PLP1) shows perinuclear retention in oligodendrocytes. Perinuclear retention of misfolded proteins is a hallmark of endoplasmic reticulum (ER) stress, which might be implicated in the pathology observed in PMD. In a myelinoid model of PMD developed in 2018, treatment with a modulator of ER stress pathways called GSK2656157, an inhibitor of protein-kinase-R-like ER kinase, partially rescued PLP1 perinuclear retention, mobilizing the protein away from the ER and into the processes of oligodendrocytes. In addition, treatment increased the number of cells expressing MYRF, an oligodendrocyte-specific transcription factor whose expression has been observed to be reduced in PMD oligodendrocytes compared to controls.
Gene-editing: CRISPR
In a myelinoid model of PMD caused by point mutations in proteolipid protein 1 (PLP1), a CRISPR correction to the wildtype sequence in the hPSCs used to generate it rescued some aspects of PMD pathology. The correction resolved the perinuclear retention of PLP1, restoring its mobilization into oligodendrocyte processes, and increased the number of oligodendrocytes expressing MYRF, an oligodendrocyte-specific marker, to levels observed in healthy controls. The myelin organoids derived from hPSCs after the CRISPR correction of PLP1 point mutations generated myelin after 20 weeks in culture.
Genome analysis
'Omics' has a broad application to organoids and since the development of organoid technology, transcriptome, epigenome, proteome, and metabolome analysis have been used. Additionally, targeted gene editing and host-microbiome interactions have been studied using organoids.
Single-cell omics
It is not possible to study gene expression patterns of the brain in human subjects, so the ability to recapitulate some of the complexity of the human brain in vitro allows aspects of human development and disease to be investigated. Single-cell omics is a powerful tool that has been used to identify previously undefined subpopulations of oligodendrocyte progenitor cells (OPCs) and mature oligodendrocytes in mouse models. Oligodendrocytes were previously thought to be functionally homogeneous; however, distinct cell populations can be characterized through specific transcriptional signatures and gene ontology profiles.
Single-cell RNA sequencing (scRNA seq) analysis of myelinoids generated in 2018, confirmed that there were distinct populations of oligodendrocytes throughout multiple stages of development in oligocortical spheroids which closely matched the single-cell transcriptome data obtained from human fetal cortex. Due to their close transcriptomic resemblance to human fetal brain data, the regulatory landscape of cells within cerebral organoids can inform on the underlying regulatory mechanisms governing human brain development.
In 2020, researchers described an approach to obtain meaningful scRNA seq and assay for transposase-accessible chromatin using sequencing (ATAC-seq) data from brain organoids. The protocol can likely translate to myelin organoids due to the similar biology between cerebral organoids and myelinoids.
Orgo-seq
Orgo-seq is a framework through which bulk RNA (bRNA) and scRNA sequencing data of organoids can be integrated. This platform was developed to address challenges associated with phenotyping organoids and demonstrated its ability to identify critical cell types and cell type specific driver genes involved with neurodevelopmental disorders and disease manifestation.
Using the Orgo-Seq framework, three datasets (bRNA-seq from donor-derived organoids, scRNA-seq data from cerebral organoids and fetal brains in previous studies, and bRNA-seq from the BrainSpan Project of human post-mortem brains) were used to study copy number variants in autism spectrum disorder. The authors leveraged these datasets to identify the types of cells present and cell-specific driver genes in patient-derived organoids.
Brain organoids serve as a human-derived model through which genetic variation and its impact on cell-specific processes and association with neurodevelopmental and neurodegenerative disorders can be studied. Specifically, myelinoids provide a system to study the cell type-specific effects in oligodendrocytes that are disrupted by genetic variants. Overall, Orgo-Seq provides a quantitative and validated framework for investigating driver genes and their role in neurodevelopmental and neurological disorders. In the future, Lim et al. aim to develop a precision medicine framework to identify gene networks and effects of genetic variants in an organoid system, which would include myelinoids, that recapitulates the patient's exact genetic background.
Advantages
More physiologically relevant to humans compared to animal models
A more faithful recapitulation of the complexity of the human brain than oligodendrocyte monolayers (2D model systems)
Contains much more robust OPC and myelinating oligodendrocyte populations compared to cerebral organoids (compact myelin formation)
Personalized therapeutics – Myelinoids from patient derived iPSCs
Given the limited availability of human brain tissue, myelinoids offer unprecedented opportunities for studying oligodendrogenesis and myelination. While animal models are valuable for studying human diseases, they do not fully recapitulate human brain development and show many discrepancies affecting their translatability to human physiology. Considering the resemblance of myelin organoids to the human brain, they have been proposed as models bridging animal models and human physiology.
Other hPSC-derived oligodendrocyte systems have been established, such as two-dimensional (2D) monolayer oligodendrocyte models. However, when compared to 2D systems, myelin organoids more faithfully recapitulate the structure and functionality of the developing human brain, providing a more physiologically relevant microenvironment that includes 3D cytoarchitecture, neural circuits, and cell interactions.
While cerebral organoids recapitulate brain cytoarchitecture and composition, they generally lack oligodendrocytes, the cells responsible for myelination in the central nervous system. The myelinoid protocol pioneered in 2018, and subsequently modified by others, offers a reproducible method for generating organoids with robust OPC and oligodendrocyte populations that track the endogenous neurons, forming functional neuronal networks ensheathed with myelin.
Finally, the ability to generate myelinoids from patient-derived hPSCs (induced PSCs) offers major advantages and opportunities to explore patient-specific pathogenesis across the developmental and maturation stages of oligodendrocytes. This allows for the development of personalized therapeutic approaches.
Limitations
Experimental variability
Lengthy myelinoid generation protocol
Fail to capture all cell types and certain phenotypes (i.e. behavioral abnormalities)
Necrotic centers
As is the case with every model system, myelinoids have their limitations. Due to the methods involved in generating the organoids, there can be a large degree of experimental variability. Additionally, because myelination occurs over a long period, optimizing the dosage of molecules and treatments involved in myelin development can be difficult. The suitability of this model for drug screening also comes with its own limitations: it can be difficult to scale myelinoid experiments to a level appropriate for high-throughput screening due to the long duration of protocols and limited efficiency.
Myelinoids capture a large number of cell types found in vivo, however, they fail to capture all cell types. Microglia are absent in some myelinoids as was observed in the 2021 protocol. Myelinoids also do not capture any behavioral abnormalities.
Finally, a challenge with all organoid cultures is that they rely on diffusion for nutrients to reach cells. Therefore, many organoids will develop a necrotic center due to a lack of nutrients making their way to the innermost cells. Recently, developing vascularized organoids has been of interest and may potentially alleviate this issue. However, myelinoids as described in current protocols are not vascularized.
References
Developmental neuroscience
Stem cells
Central nervous system
Synthetic biology
Biology articles needing expert attention | Myelinoid | [
"Engineering",
"Biology"
] | 4,757 | [
"Synthetic biology",
"Biological engineering",
"Molecular genetics",
"Bioinformatics"
] |
73,165,077 | https://en.wikipedia.org/wiki/Yasmin%20Umar | Mohammad Yasmin bin Haji Umar (born 23 April 1956) is a Bruneian aristocrat, politician, and retired military officer who served as minister of energy from 2010 to 2018 and deputy minister of defence from 2005 to 2010.
Early life and education
Mohammad Yasmin bin Haji Umar, born in Brunei on 23 April 1956, pursued his early education at Anthony Abell College in Seria. On 12 July 1979, he earned a Bachelor of Science (Hons) degree in electronics from the University of Wales in the United Kingdom. Continuing his academic journey, he enrolled at the University of Loughborough, also in the UK, where he specialised in digital communication systems. On 1 December 1981, he was awarded a Master of Science degree by the faculty of electrical and electronic engineering.
Military career
Yasmin began his career in the Royal Brunei Malay Regiment (RBMR) as a commissioned officer, receiving a promotion to lieutenant on 9 November 1981. On 25 June 1986, he was awarded the certified chartered engineer insignia by the Institution of Chartered Engineers. Throughout his career, he participated in numerous courses, seminars, and workshops in the United Kingdom, Australia, Singapore, Japan, and the United States. In 1987, he attended the 22nd army staff course, division 1, at the Royal Military College of Science in the United Kingdom.
He held various roles in policy, corporate management, logistics, and strategy. He began as an engineering officer, initially assigned to the First Flotilla of the RBMR, now known as the Royal Brunei Navy, where he served as a weapons engineering officer. On 1 April 1988, he was appointed senior engineering officer, leading the Naval Engineering Department.
Political career
Ministry of Defence
Subsequently, on 14 September 1990, Yasmin became head of research in the defence minister's office and the directorate of strategic planning (DMO/DSP). In 1991, he participated in the Defence Research Fellow Exchange Programme at the National Institute of Defence Studies in Japan. In 1992, Yasmin took on the role of staff officer grade 1 maintenance at the directorate of logistics, where he developed maintenance guidelines for armed forces equipment. On 2 May 1994, he returned to the DMO/DSP as a staff officer grade 1. He was appointed director of intelligence and security on 14 July 1995, a position he held until December 1998. He attended the Australian Defence College in Canberra in 1999. He was appointed as the director of DMO/DSP on 4 January 1999.
Yasmin was appointed as one of three newly appointed permanent secretaries in Brunei, assuming a role at the Ministry of Defence on 24 January 2003, where he oversaw policy and administration. This appointment was later confirmed when Sultan Hassanal Bolkiah received the appointees at Istana Nurul Iman on 6 February that same year. Yasmin was officially appointed to the Legislative Council by the sultan on 6 September 2004.
Deputy Minister of Defence
On 24 May 2005, Yasmin was appointed as the deputy minister of defence under the sultan's order as part of a cabinet reshuffle.
On 2 March 2007, Yasmin emphasised the importance of human resources in strengthening Brunei's defence readiness. During the 23rd National Day celebration, he reiterated the sultan's message that the country's future progress, both regionally and internationally, relies on effectively managing its human resources to produce specialists and intellectuals. Yasmin noted that without a skilled and knowledgeable workforce, Brunei would struggle to compete with more advanced nations. On 5 December 2007, Yasmin was present at the Langkawi International Maritime and Aerospace Exhibition, where a memorandum of understanding (MOU) was signed between World Aerospace (M) and Royal Brunei Technical Services for the management of BRIDEX 2009.
Minister of Energy
As part of a cabinet reshuffle, Yasmin was appointed minister of energy at the Prime Minister's Office (PMO) on 29 May 2010. Shortly after his appointment, on 2 November 2011, Yasmin became one of the respondents in a legal case filed by Captain (Retired) Huraizah Duraman, who alleged wrongful dismissal from the Royal Brunei Armed Forces (RBAF). Yasmin, along with other defendants, was accused of recommending or conspiring to cause Huraizah's discharge from the RBAF. However, the court ruled that the dismissal was solely a result of the sultan's prerogative power, which could not be influenced or questioned by the respondents. Additionally, Yasmin and the other defendants were protected by constitutional immunity, shielding them from legal action regarding their actions in this matter. Ultimately, the court dismissed the case, concluding that Yasmin’s involvement did not lead to the plaintiff's dismissal.
In 2011, Yasmin criticised Brunei Shell Petroleum (BSP) for allowing large businesses to dominate energy contracts, which he believed hindered the growth of small and medium-sized enterprises (SMEs). He called for greater transparency and faster vendor registration to support SMEs, advocating for a more inclusive approach to contract allocation in the energy sector. His remarks aimed to foster a more equitable environment for SMEs in Brunei's energy industry. Following this, on 1 February 2012, the Energy Department at the PMO, with the Sultan's approval, released Directive No. 2–Local Business Development (LBD) Framework. Yasmin hoped this would lead to spin-offs, as Brunei Shell Joint Venture and TotalEnergies planned to invest B$5–6 billion over the next two years.
Minister of Energy and Industry
On 22 October 2015, Yasmin was appointed Minister of Energy and Industry in the PMO as part of a wider cabinet reshuffle, which saw several top officials reassigned to new roles. In his new position, Yasmin took charge of Brunei's increasingly important energy sector, overseeing the nation's energy policies and fostering the growth of the oil and gas industry, crucial to the country's economic development.
On 3 November 2016, Yasmin reaffirmed Brunei's commitment to a zero-tolerance policy towards corruption. He stressed that corruption could undermine the country’s progress by depriving citizens of essential opportunities, such as job creation. Yasmin also warned international companies operating in Brunei against interfering with corruption investigations, describing corruption as a destructive force akin to a disease that could erode the social fabric if not addressed. He underscored the need for a workforce that aligns with Brunei's principles of , emphasising the importance of integrity in public and private sectors.
On 7 May 2017, Yasmin met with Saudi Arabia's Ministry of Energy, Industry and Mineral Resources, Khalid A. Al-Falih, to discuss strengthening Brunei–Saudi relations. The two discussed potential Saudi investments in Brunei's ammonia and urea projects, as well as opportunities in the petrochemical sector, particularly the supply of Saudi crude oil for the downstream industry. They also reviewed the extension of the December 2016 agreement on oil output adjustments under the OPEC/non-OPEC cooperation declaration. On the same day, Yasmin emphasised the importance of Brunei's MSMEs engaging in the digital economy, stressing that for MSMEs to thrive, they must embrace digital commerce. He highlighted the government's initiative to train 1,000 MSMEs in e-commerce through Darussalam Enterprise, aimed at improving their operations and boosting the national economy. Additionally, Yasmin reaffirmed the government's commitment to enhancing the business climate by simplifying business processes and supporting MSMEs. He encouraged MSMEs to seize opportunities to expand their market presence and contribute to Brunei's GDP, particularly through participation in expos.
In August 2017, Amrtur Corporation filed a US$45 million claim against BSP, alleging lost revenues of B$61.2 million (US$45 million) between 2012 and 2016 due to breaches of contracts with BSP. Yasmin was named as one of the 12 defendants in the case, which became widely discussed after a leaked letter related to the dispute went viral on social media. The case was linked to allegations of corruption within the Brunei sultanate, with Yasmin accused of involvement. Accusations arose that he had a conflict of interest during his time on the BSP board. Amrtur Corporation's complaint focused on an alleged breach of contracts and income loss between 2012 and 2016, and the case was seen as part of the sultan's efforts to resolve conflicts of interest and promote government transparency. Later, Yasmin accompanied the sultan on his state visit to Beijing on 13 September 2017, where he also attended the 14th China–ASEAN Expo.
Following a cabinet reshuffle on 30 January 2018, Yasmin was removed from his role as Minister of Energy and Industry, with Mat Suny succeeding him. This significant reorganisation, aimed at advancing the sultan's commitment to combating corruption and fostering national development, sought to introduce fresh talent and accelerate the implementation of Wawasan Brunei 2035.
Political views
Using Brunei's energy sector as a basis for economic growth and diversification was at the heart of Yasmin's political views. Through government LBD directives, which promoted the expansion of local businesses and generated employment possibilities, he fervently argued for maximising local content in oil and gas activities. In order to enhance Brunei's oil and gas resources, lessen its susceptibility to price swings, and draw in substantial foreign direct investment (FDI), like the multibillion-dollar investments made by Hengyi Industries and Brunei Fertilizer Industries, Yasmin placed a strong emphasis on the growth of downstream industries.
Yasmin also highlighted Brunei's ability to compete for FDI by making doing business easier. He did this by pointing out improvements that made Brunei the most improved economy in the World Bank's 2016 and 2017 Doing Business Reports. In line with the Wawasan Brunei 2035 goal of diversifying the economy and lowering dependency on oil, he thought these measures increased investor confidence in both the oil and non-oil industries. Yasmin went on to highlight Brunei's distinct assets, including its unexplored natural biodiversity and high-quality halal standards, as major forces behind regional competitiveness in high-priority industries including halal, technology, tourism, and business services.
Personal life
Yasmin married Datin Hajah Noryasimah binti Abdullah on 5 August 1983, and the couple has a daughter.
Titles, styles and honours
Titles and styles
Yasmin was honoured by Sultan Hassanal Bolkiah with the manteri title of , bearing the style .
Honours
Yasmin has been bestowed the following honours:
National
Order of Setia Negara Brunei First Class (PSNB; 15 July 2011) – Dato Seri Setia
Order of Seri Paduka Mahkota Brunei First Class (SPMB; 15 July 2006) – Dato Seri Paduka
Order of Seri Paduka Mahkota Brunei Second Class (DPMB; 15 July 2003) – Dato Paduka
Order of Seri Paduka Mahkota Brunei Third Class (SMB)
Sultan Hassanal Bolkiah Medal First Class (PHBS; 15 July 2010)
Sultan of Brunei Silver Jubilee Medal (5 October 1992)
Sultan of Brunei Golden Jubilee Medal (5 October 2017)
National Day Silver Jubilee Medal (23 February 2009)
Proclamation of Independence Medal (1997)
General Service Medal
Long Service Medal and Good Conduct (PKLPB)
Royal Brunei Armed Forces Silver Jubilee Medal (31 May 1986)
Fellow of Pertubuhan Ukur Jurutera & Arkitek (1 May 2010)
Foreign
Jordan:
Grand Cordon of the Order of Independence (13 May 2008)
Philippines:
Grand Cross of the Order of Sikatuna (GCrS; 24 August 2008)
Singapore:
Darjah Utama Bakti Cemerlang (Tentera) (DUBC; 16 June 2011)
United Kingdom:
Fellow of the Institution of Engineering and Technology (28 April 2008)
References
Further reading
Living people
1956 births
Bruneian Muslims
Government ministers of Brunei
Members of the Legislative Council of Brunei
Bruneian military personnel
Alumni of the University of Wales
Alumni of Loughborough University
Grand Cordons of the Order of Independence (Jordan)
Bruneian colonels
Fellows of the Institution of Engineering and Technology
Recipients of the Darjah Utama Bakti Cemerlang (Tentera) | Yasmin Umar | [
"Engineering"
] | 2,537 | [
"Institution of Engineering and Technology",
"Fellows of the Institution of Engineering and Technology"
] |
73,165,446 | https://en.wikipedia.org/wiki/Fling%20%28social%20network%29 | Fling was a social media app available for IOS and Android. It was founded in 2014 by Marco Nardone and was taken offline in August 2016.
Overview
In 2012, Marco Nardone founded the startup Unii and launched Unii.com, a social network intended for students in the UK. While working on this service, Nardone had the idea for a messaging service where pictures could be sent to strangers in January 2014. The app Fling was then developed and released between March and July 2014. After a month, it already had 375,000 downloads and 180,000 active users on iOS.
Users were able to take pictures inside the app and send them to 50 random people all over the world. The recipient could then choose to answer via chat or reply by sending a picture themselves.
The app was used by many users as a medium to exchange sexually explicit pictures and for sexting with strangers. This led to the app being removed from the App Store in June 2015.
In the 19 days that followed, Fling's developers rewrote the app almost completely from scratch, working around the clock. The feature to message random strangers was removed, and the app was readmitted into the App Store as a messenger app resembling Snapchat.
But the redesigned application did not have the success of its predecessor. The funding ran out and the parent company Unii went bankrupt. The company was no longer able to pay its content moderation team, leading to a new surge of pornographic content on the app.
Shortly after that, the social network was taken offline in August 2016. It has been inactive since.
During the 2 years Fling was online, $21 million was raised from investors while generating no revenue at all. Of this $21 million (£16.5m), £5 million came from Nardone's father.
Allegations against CEO
Former employees made multiple allegations against Marco Nardone, the founder and CEO of Unii and Fling. According to these claims, he behaved erratically and abusively, throwing "things across the office".
He hired his girlfriend as the head of human resources to handle issues between him and his staff. Employees who left the company often had "some part of their pay held back".
According to the reports, he also spent the money raised from investors irresponsibly, having no clear concept of a budget. Some of that money was used on expensive restaurants in London, a luxurious office for CEO Nardone and advertisements for Fling on Twitter and Facebook.
Nardone also spent time partying in Ibiza with two employees, while the developer team in London frantically tried to get Fling back online after its removal from the App Store.
In December 2017 he pleaded guilty to assaulting his girlfriend at a domestic violence court.
References
Social media
Defunct social networking services
Mobile applications
Instant messaging clients | Fling (social network) | [
"Technology"
] | 573 | [
"Instant messaging",
"Computing and society",
"Instant messaging clients",
"Social media"
] |
73,165,579 | https://en.wikipedia.org/wiki/Nickel%28II%29%20stearate | Nickel(II) stearate is a metal-organic compound, a salt of nickel and stearic acid with the chemical formula . The compound is classified as a metallic soap, i.e. a metal derivative of a fatty acid. The compound is harmful if swallowed and may cause skin sensitization.
Synthesis
An exchange reaction of sodium stearate and nickel dichloride:
2 C17H35COONa + NiCl2 → Ni(C17H35COO)2 + 2 NaCl
Physical properties
Nickel(II) stearate forms a green powder.
The compound is insoluble in water, methanol, ethanol, or ether, soluble in carbon tetrachloride and pyridine, slightly soluble in acetone.
Uses
The compound is used as a lubricant and in various industrial applications.
References
Stearates
Nickel compounds | Nickel(II) stearate | [
"Chemistry"
] | 157 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
73,165,907 | https://en.wikipedia.org/wiki/Geranium%20%C3%97%20cantabrigiense | Geranium × cantabrigiense is a hybrid flowering plant in the cranesbill family Geraniaceae. It is an hybrid between Geranium dalmaticum and G. macrorrhizum.
Etymology
The name cantabrigiense comes from Cantabrigia, the Latin name for Cambridge, England.
Origin
Geranium × cantabrigiense was originally obtained in cultivation in 1974, when Dr. Helen Kiefer of the Cambridge University Botanic Garden used pollen of G. dalmaticum to fertilise G. macrorrhizum. The resulting plant is sterile, producing long-lasting pink flowers that do not set seed, but spreads vegetatively through trailing stems.
This hybrid has since been found in the wild, having formed through natural hybridisation where both parents co-occur. One naturally occurring form discovered in the Biokovo mountains of Croatia has been introduced in cultivation as the cultivar 'Biokovo'.
References
cantabrigiense
Plants described in 1985
Hybrid plants | Geranium × cantabrigiense | [
"Biology"
] | 211 | [
"Hybrid plants",
"Plants",
"Hybrid organisms"
] |
73,166,413 | https://en.wikipedia.org/wiki/Strontium%20stearate | Strontium stearate is a metal-organic compound, a salt of strontium and stearic acid with the chemical formula . The compound is classified as a metallic soap, i.e. a metal derivative of a fatty acid.
Synthesis
A reaction of strontium hydroxide with stearic acid.
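Schematically, this neutralization can be written as:
Sr(OH)2 + 2 C17H35COOH → Sr(C17H35COO)2 + 2 H2O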
Physical properties
The compound forms a white powder. It is insoluble in alcohol and soluble (forming a gel) in aliphatic and aromatic hydrocarbons.
Uses
Strontium stearate is used in grease and wax compounding.
It is also used as a lubricant to improve the flow characteristics of polyolefin resins.
References
Stearates
Strontium compounds | Strontium stearate | [
"Chemistry"
] | 143 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
73,166,658 | https://en.wikipedia.org/wiki/Varsha%20%28season%29 | Varsha () is the season of monsoon in the Hindu calendar. It is one of the six seasons (ritu), each lasting two months, the others being Vasanta (spring), Grishma (summer), Sharada (autumn), Hemanta (pre-winter), and Shishira (winter).
It falls in the two months of Shravana and Bhadrapada of the Hindu calendar, or July and August of the Gregorian calendar. It is preceded by Grishma, the summer season, and followed by Sharada, the autumn season.
In addition to the season, the word Varsha can also be used for rain or rainfall. In Urdu, Varsha (rainfall) is referred to as Baarish.
References
Hindu calendar
Seasons | Varsha (season) | [
"Physics"
] | 163 | [
"Physical phenomena",
"Earth phenomena",
"Seasons"
] |
73,166,794 | https://en.wikipedia.org/wiki/List%20of%20trichloroethylene-related%20incidents | Trichloroethylene (TCE) is a common industrial solvent mostly used for metal degreasing. Due to its wide use in industries, there have been several incidences of waste TCE leaking into aquifers and contaminating groundwaters.
Due to their similar industrial uses, areas contaminated with mainly TCE may also be contaminated with tetrachloroethylene in smaller amounts.
Background
The first known report of TCE in groundwater was given in 1949 by two English public chemists who described two separate instances of well contamination by industrial releases of TCE.
Exposure to TCE occurs mainly through contaminated drinking water. With a specific gravity greater than 1 (denser than water), trichloroethylene can be present as a dense non-aqueous phase liquid (DNAPL) if sufficient quantities are spilled in the environment. Another significant source of vapor exposure in Superfund sites that had contaminated groundwater, such as the Twin Cities Army Ammunition Plant, was by showering. TCE readily volatilizes out of hot water and into the air. Long, hot showers would then volatilize more TCE into the air. In a home closed tightly to conserve the cost of heating and cooling, these vapors would then recirculate. Based on available federal and state surveys, between 9% and 34% of the drinking water supply sources tested in the U.S. may have some TCE contamination, though EPA has reported that most water supplies are in compliance with the maximum contaminant level (MCL) of 5 ppb.
In addition, a growing concern in recent years at sites with TCE contamination in soil or groundwater has been vapor intrusion in buildings, which has resulted in indoor air exposures, such as in a recent case in the McCook Field neighborhood of Dayton, Ohio, United States. Trichloroethylene has been detected in 852 Superfund sites across the United States, according to the Agency for Toxic Substances and Disease Registry (ATSDR). Under the Safe Drinking Water Act of 1974, as amended, annual water quality testing is required for all public drinking water distributors. The EPA's current guidelines for TCE are available online.
The EPA's table of "TCE Releases to Ground" is dated 1987 to 1993, thereby omitting one of the largest Superfund cleanup sites in the nation, the North IBW in Scottsdale, Arizona. Earlier, TCE was dumped here, and was subsequently detected in the municipal drinking water wells in 1982, prior to the study period.
Marine Corps Base Camp Lejeune in North Carolina may be the largest TCE contamination site in the United States. Legislation could force the EPA to establish a health advisory and a national public drinking water regulation to limit trichloroethylene.
The 1998 film A Civil Action dramatizes the EPA lawsuit Anne Anderson, et al., v. Cryovac, Inc. concerning trichloroethylene contamination that occurred in Woburn, Massachusetts in the 1970s and 1980s.
1980s
Between 1975 and 1985, the water supply of Marine Corps Base Camp Lejeune was contaminated with trichloroethylene and other volatile organic compounds.
In 1986, and later again in 2009, two plumes containing trichloroethylene were found on Long Island, New York, originating from Northrop Grumman's Bethpage factories, which worked in conjunction with the United States Navy during the 1930s and 1940s.
In 1988, the EPA discovered tons of TCE that had been leaked or dumped into the ground by the United States military and semiconductor industry (companies including Fairchild Semiconductor, Intel Corporation, and Raytheon Company) just outside NASA Ames in Moffett Field, Mountain View, California.
In 1987, Hill Air Force Base in Layton, Utah, was declared a Superfund site and added to the U.S. Environmental Protection Agency's National Priorities List. Contamination with TCE has been detected in the groundwater throughout Weber County, Utah.
1990s
In 1990, Fort Ord, CA was added to the EPA's National Priorities List. Veterans have linked trichloroethylene as the underlying cause for high incidence rates of multiple myeloma.
In 1992, Lockformer conducted soil sampling on their property and found TCE in the soil at levels as high as 680 parts per million (ppm). During the summer of 2000, a group of residents hired legal counsel, and on October 11, 2000, these residents had their private well water tested by a private environmental consultant. The group owned homes south of the Lockformer property in the suspected path of groundwater flow. The consultant collected a second round of well water samples on November 10, 2000, and TCE was detected in some of the wells sampled. Beginning in December 2000, Illinois EPA collected about 350 more private well water samples north and south of the Lockformer property.
For over 20 years of operation, RCA Corporation had been pouring toxic wastewater into a well in its Taoyuan City, Taiwan facility. The pollution from the plant was not revealed until 1994, when former workers brought it to light. Investigation by the Taiwan Environmental Protection Administration confirmed that RCA had been dumping chlorinated organic solvents into a secret well and caused contamination to the soil and groundwater surrounding the plant site. High levels of TCE and tetrachloroethylene can be found in groundwater drawn as far as two kilometers from the site.
In 1998, the View-Master factory supply well in Beaverton, Oregon was found to have been contaminated with high levels of TCE. It was estimated that 25,000 factory workers had been exposed to it from 1950 to 2001.
In the case of Lisle, Illinois, releases of trichloroethylene had allegedly occurred on the Lockformer property beginning in 1968 and continuing for an undetermined period. The company used TCE in the past as a degreaser to clean metal parts. Contamination at the Lockformer site is presently under investigation by the U.S. Environmental Protection Agency and Illinois EPA.
2000s and 2010s
As of 2007, 57,000 pounds, or 28.5 tons of TCE have been removed from the system of wells that once supplied drinking water to the residents of Scottsdale, Arizona. One of the three drinking water wells previously owned by the City of Phoenix and ultimately sold to the City of Scottsdale, tested at 390 ppb TCE when it was closed in 1982. The City of Scottsdale recently updated its website to clarify that the contaminated wells were "in the Scottsdale area," and amended all references to the measured levels of TCE discovered when the wells were closed (including "390 ppb") to "trace".
In June 2012, residents of an area off of Stony Hill Road, Wake Forest, NC were contacted by the EPA and DWQ about possible TCE contamination after authorities followed up on existing TCE contamination in 2005. Subsequent EPA testing found multiple sites with detectable levels of TCE and several with levels above the MCL.
In December 2017, tonnes of waste TCE and tetrachloroethylene were dumped into sewers in the Tuzla district of Istanbul, Turkey. The leak allegedly affected thousands of people, especially those with asthma, in neighbouring areas, and about 97 people were hospitalised. The Istanbul Metropolitan Municipality (İBB) stated that the situation did not pose any danger to human health. Various residents said that it was a "normal occurrence", that chemical leaks were "the fate of Tuzla", and that they consumed yoghurt after the heavy exposure. Trichloroethylene is widely used and unregulated in Turkey; TCE imports were thought to be worth about 2.16 million dollars in 2020.
In February 2020, McClymonds High School in West Oakland, California was temporarily closed after trichloroethylene was found in groundwater beneath the school.
Regulations
United States
Until recent years, the US Agency for Toxic Substances and Disease Registry (ATSDR) contended that trichloroethylene had little-to-no carcinogenic potential, and was probably a co-carcinogen—that is, it acted in concert with other substances to promote the formation of tumors.
In 2023, the United States EPA determined that trichloroethylene presents an unreasonable risk of injury to human health under 52 out of 54 conditions of use, including during manufacturing, processing, mixing, recycling, vapor degreasing, as a lubricant, adhesive, sealant, cleaning product, and spray. It is dangerous from both inhalation and dermal exposure and was most strongly associated with immunosuppressive effects for acute exposure, as well as autoimmune effects for chronic exposures.
As of June 1, 2023, two U.S. states (Minnesota and New York) have acted on the EPA's findings and banned trichloroethylene in all cases but research and development.
Proposed U.S. federal regulation
In 2001, a draft report of the Environmental Protection Agency (EPA) laid the groundwork for tough new standards to limit public exposure to trichloroethylene. The assessment set off a fight between the EPA and the Department of Defense (DoD), the Department of Energy, and NASA, who appealed directly to the White House. They argued that the EPA had produced junk science, its assumptions were badly flawed, and that evidence exonerating the chemical was ignored.
The DoD has about 1,400 military properties nationwide that are contaminated with trichloroethylene. Many of these sites are detailed and updated by www.cpeo.org and include a former ammunition plant in the Twin Cities area. Twenty three sites in the Energy Department's nuclear weapons complex—including Lawrence Livermore National Laboratory in the San Francisco Bay area, and NASA centers, including the Jet Propulsion Laboratory in La Cañada Flintridge are reported to have TCE contamination.
Political appointees in the EPA sided with the Pentagon and agreed to pull back the risk assessment. In 2004, the National Academy of Sciences was given a $680,000 contract to study the matter, releasing its report in the summer of 2006. The report has raised more concerns about the health effects of TCE.
European Union
In the European Union, the Scientific Committee on Occupational Exposure Limit Values (SCOEL) recommends an exposure limit for workers exposed to trichloroethylene of 10 ppm (54.7 mg/m3) for 8-hour TWA and of 30 ppm (164.1 mg/m3) for STEL (15 minutes).
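The relationship between the ppm and mg/m3 figures quoted above is the standard conversion mg/m3 = ppm × molar mass / molar volume of air. The short Python check below approximately reproduces the quoted limits; the small differences from the stated 54.7 and 164.1 mg/m3 depend on the exact molar mass and reference temperature assumed here.

```python
# Rough check of the ppm-to-mg/m3 conversion for trichloroethylene (TCE),
# assuming ~20 degrees C and 101.3 kPa.
molar_mass_tce = 131.4    # g/mol
molar_volume_air = 24.06  # L/mol at ~20 degrees C, 101.3 kPa

def ppm_to_mg_per_m3(ppm: float) -> float:
    return ppm * molar_mass_tce / molar_volume_air

print(ppm_to_mg_per_m3(10))  # ~54.6 mg/m3 for the 8-hour TWA limit
print(ppm_to_mg_per_m3(30))  # ~163.8 mg/m3 for the STEL
```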
Existing EU legislation aimed at protection of workers against risks to their health (including Chemical Agents Directive 98/24/EC and Carcinogens Directive 2004/37/EC) currently do not impose binding minimum requirements for controlling risks to workers health during the use phase or throughout the life cycle of trichloroethylene. However, in case the ongoing discussions under the Carcinogens Directive will result in setting of a binding Occupational Exposure Limit for trichloroethylene for protection of workers; this conclusion may be revisited.
The Solvents Emissions Directive 1999/13/EC and Industrial Emissions Directive 2010/75/EC impose binding minimum requirements for emissions of trichloroethylene to the environment for certain activities, including surface cleaning. However, the activities with solvent consumption below a specified threshold are not covered by these minimum requirements.
According to European regulation, the use of trichloroethylene is prohibited for individuals at a concentration greater than 0.1%. In industry, trichloroethylene should be substituted before April 21, 2016 (unless an exemption is requested before October 21, 2014) by other products such as tetrachloroethylene (perchloroethylene), methylene chloride (dichloromethane), or other hydrocarbon derivatives (ketones, alcohols, ...).
Reduced production
In recent times, there has been a substantial reduction in the production output of trichloroethylene; alternatives for use in metal degreasing abound, some chlorinated aliphatic hydrocarbons being phased out in a large majority of industries due to the potential for health effects and the legal liability that ensues as a result.
The U.S. military has virtually eliminated its use of the chemical, allegedly purchasing only 11 gallons in 2005. About 100 tons of it was used annually in the U.S. as of 2006.
References
Soil contamination
Water pollution
Pollution-related lists | List of trichloroethylene-related incidents | [
"Chemistry",
"Environmental_science"
] | 2,578 | [
"Environmental chemistry",
"Soil contamination",
"Water pollution"
] |
73,166,872 | https://en.wikipedia.org/wiki/Hemanta%20%28season%29 | Hemanta () is the season of early winter in the Hindu calendar. It is one of the six seasons (ritu), each lasting two months, the others being Vasanta (spring), Grishma (summer), Sharada (autumn), Varsha (monsoon), and Shishira (winter).
It falls in the two months of Agrahayana and Pausha of the Hindu calendar, or November and December of the Gregorian calendar. It is preceded by Sharada, the autumn season, and followed by Shishira, the winter season.
References
Hindu calendar
Seasons | Hemanta (season) | [
"Physics"
] | 124 | [
"Physical phenomena",
"Earth phenomena",
"Seasons"
] |
73,166,961 | https://en.wikipedia.org/wiki/Zirconium%20stearate | Zirconium stearate is a metal-organic compound, a salt of zirconium and stearic acid with the chemical formula .
The compound is classified as a metallic soap, i.e. a metal derivative of a fatty acid.
Synthesis
Zirconium stearate is prepared by boiling stearic acid and sodium carbonate in water and then adding zirconium oxychloride solution.
Also, zirconium stearate can be prepared by reacting zirconium nitrate and sodium oleate.
Physical properties
The compound forms a white powder.
Uses
Zirconium stearate is used as a raw material for waterproofing materials and emulsion stabilizers.
Also used as a flattening agent.
References
Stearates
Zirconium(IV) compounds | Zirconium stearate | [
"Chemistry"
] | 166 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
73,167,663 | https://en.wikipedia.org/wiki/Tcr-seq | TCR-Seq (T-cell Receptor Sequencing) is a method used to identify and track specific T cells and their clones. TCR-Seq utilizes the unique nature of a T-cell receptor (TCR) as a ready-made molecular barcode. This technology can be applied both in single-cell sequencing and in high-throughput screens.
Background
T-cell Receptor (TCR)
T cells are a part of the adaptive immune system and play a critical role in protecting the body from foreign pathogens. T-cell receptors (TCRs) are a group of membrane proteins found on the surface of T cells which can bind to foreign antigens. TCRs interact with major histocompatibility complexes (MHC) on cell surfaces to recognize antigens. They are heterodimers made up of predominantly α and β chains (or more rarely δ and γ chains) and consist of a variable region and a constant region. Variable regions are produced through a process called VDJ recombination, which results in unique amino acid sequences for α, β, and γ chains. The result is that each TCR is unique and recognizes a specific antigen
Complementarity Determining Regions (CDRs)
Complementarity determining regions (CDRs) are a part of the TCR and play an essential role in TCR-MHC interactions. CDR1 and CDR2 are encoded by V genes, while CDR3 is made from the region between V and J genes or between D and J genes (termed "VDJ genes" when referred to together). CDR3 is the most variable of the CDRs, and is in direct contact with the antigen. As such, CDR3 is used as the “barcode region” to identify unique T cell populations, as it is highly unlikely for two T cells to have the same CDR3 sequence unless they came from the same parental T cell.
Clonality
VDJ recombination produces such a vast amount of unique TCRs that many receptors never encounter the antigen they are best suited for. When a foreign antigen is present in the body, the few T cells that recognize that antigen are positively selected for so that the body has an adequate number of T cells to mount an effective immune response. The selected T cells rapidly divide and differentiate into effector T-cells through a process called clonal expansion, which retains the TCR sequence (including the CDR3 sequence) that originally recognized the antigen
TCR-Seq uses the unique nature of the TCR - in particular CDR3 - as a molecular barcode to track T cells through a variety of processes like differentiation and proliferation, which can be used for a wide variety of purposes.
Methods
Bulk vs Single-Cell Sequencing
TCR sequencing can be performed in on pooled cell populations (“bulk sequencing”) or single cells (“single cell sequencing”). Bulk sequencing is useful to explore entire TCR repertoires - all the TCRs within an individual or a sample - and to generate comparisons between repertoires of different individuals. This method can sequence millions of cells in a single experiment. However, one major disadvantage is that bulk sequencing cannot determine which TCR chains pair together, only the frequency within the repertoire. The large amount of TCRs sampled also means that lower abundance TCRs may not be detected
Single cell sequencing can determine TCR chain pairs, making them more useful for identifying specific TCRs. Some major disadvantages of this technique are its high costs, limited capacity of a few thousand cells, and the necessity of live cells which may be more challenging to obtain
Target Sequences
Any TCR chain can be sequenced, although the α and β chain are more commonly chosen due to their abundance in the T cell population. In particular, the β chain is of interest due to its higher diversity and specificity compared to other chains. The presence of a D gene component in the β chain which is not present in the α chain allows more diverse combinations. As well, β chains are unique to each T cell, which can be used to identify distinct T cell populations within a sample
To perform TCR-sequencing, polymerase chain reaction (PCR) amplification is performed on the CDR3 region as a measure of unique T cells within a population. The CDR3 region is chosen over CDR1 and CDR2 as it is directly responsible for antigen interactions and is generally unique to TCRs from the same lineage, which allows identification of distinct populations
Library Preparation
The goal of this step is to generate a library of transcripts to be sequenced. There are 3 major ways of generating a library for TCR sequencing.
Multiplex DNA
Multiplex PCR can be employed on both genomic DNA (gDNA) or RNA which has been converted to double-stranded complementary DNA (cDNA). Primer pools with primer pairs targeting J and V alleles are used to amplify the CDR3 region of the TCR transcript. The transcript goes through two or more rounds of PCR to amplify the region of interest, then adaptors are ligated onto either end of the resulting transcript. This method is among the most used in the generation of libraries for TCR-seq as it can capture a great deal of the diversity of the TCR through the primer pool. However, as it is near-impossible to optimize PCR conditions for all the primers in the pool, multiplex DNA can result in amplification bias where some CDR3 regions with primers that bind poorly may not be amplified. This means the abundance of amplified segments may not correspond with the actual abundance within the cell
Target Enrichment In-Solution
This method can use genomic DNA (gDNA) or RNA converted to cDNA. The starting material is first processed to generate DNA or cDNA transcripts with indexed adaptors on the 5’ and 3’ ends. These transcripts are then incubated with RNA baits designed to bind to regions of interest, which is generally the CDR3 region. These baits, which are normally bound to magnetic beads, can be isolated using a magnet. This allows the isolation of transcripts of the CDR3 region, which can then be amplified using PCR. Target enrichment using RNA baits requires fewer PCR amplification steps, which may decrease amplification bias. However, the efficiency of the capture by magnets may affect the diversity of the amplified transcripts.
5’-RACE
Rapid Amplification of cDNA Ends (RACE) is a method that uses RNA transcripts for generation of the library. Although RACE can be applied to the 3' or the 5' end, the 5' end is more commonly used for TCR-seq. This method revolves around the addition of a common 5' adaptor sequence to the transcript, which can be done in a few different ways. One method is to add on the adapter following reverse transcription. During the generation of the reverse DNA strand from the RNA template, a forward primer adds a sequence complementary to the 5' adapter, leading to template switching. This allows a 5' adapter to be incorporated into the cDNA when the complementary sequence is generated. Primers can be designed to amplify the entire region from the adaptor to the constant region, then adaptor ligation can be performed in a second PCR reaction. As all the different transcripts now share an identical adapter, they can be amplified using a single primer pair. As such, this method decreases amplification bias and improves the ability to detect more uncommon TCR populations with greater certainty. However, as TCR transcription levels differ between cells, this method cannot provide an accurate measurement of the number of different T cell types in the sample based on the level of RNA transcripts alone.
Sequencing
Following generation of the library, the products can be sequenced, generally via Next Generation Sequencing (NGS). Use of machines capable of longer reads that maintain read quality at the 3’ end is important, as the CDR3 region is at the 3’ end of an approximately 500 base pair transcript.
The error rate of NGS presents a challenge for analysis of TCR repertoires. Small variations in the TCR can change their specificity towards antigens, and as such may be of interest to researchers. However, errors in sequencing can generate a minor change that may be interpreted as a low-frequency, distinct TCR population, which is a problem when analyzing changes in TCR repertoires. Efforts have been made to establish thresholds to remove low abundance reads from analysis, as well as to develop algorithms to correct these errors.
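One common post-processing step is therefore a simple abundance filter applied to collapsed clonotype counts. The Python sketch below is a hypothetical, minimal illustration of such a filter; the threshold value, input format and function name are assumptions made for this example, not a published standard.

```python
from collections import Counter

def filter_clonotypes(cdr3_reads, min_fraction=1e-4):
    """Collapse identical CDR3 sequences into clonotypes and drop
    low-abundance clonotypes that are likely sequencing-error artifacts.

    cdr3_reads   -- iterable of CDR3 strings (amino acid or nucleotide)
    min_fraction -- clonotypes below this fraction of all reads are removed
    """
    counts = Counter(cdr3_reads)
    total = sum(counts.values())
    cutoff = total * min_fraction
    # Keep only clonotypes at or above the abundance cutoff.
    return {seq: n for seq, n in counts.items() if n >= cutoff}

# Toy example: a single-read variant of the dominant CDR3 is discarded.
reads = ["CASSLGQAYEQYF"] * 9500 + ["CASSLGQAYEQYW"] + ["CASSPTSGGELFF"] * 499
print(filter_clonotypes(reads, min_fraction=1e-3))
```

In practice, dedicated repertoire tools use error models rather than a single global threshold, but the idea of suppressing rare, near-duplicate sequences is the same.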
Applications
Generally, the data collected from TCR-seq is used to compare TCR repertoires, either between the same patient at different timepoints, or between different patients. Recent studies examined the characteristics of a healthy repertoire, and found a high degree of variation in TCR β chain levels and types, though a subset is shared across different individuals. However, this diversity has yet to be shown to strongly correlate with any conditions of interest, such as rates of infection or chance of cancer relapse, suggesting further research is necessary.
Infectious Diseases
Clonal expansion of T cells allows the immune system to deal with a variety of infectious diseases with high specificity. Thus, understanding the changes that occur in the T cell repertoire following infection can aid early diagnosis, disease monitoring, and therapeutic development.
Acquired Immunodeficiency Syndrome (AIDS) is a devastating disease caused by Human Immunodeficiency Virus (HIV) infection, which results in the death of CD4+ T cells and dysfunctional CD8+ T cells. Recent studies have suggested that increased TCR diversity may decrease HIV diversity and limit disease progression. Sequencing of the TCR would also increase understanding of the progression of AIDS and help predict morbidity. Additionally, sequencing the TCR repertoire of individuals with natural defense against HIV infection could help development of a vaccine to limit further spread of the disease.
Cancer
Cancer is the uncontrolled proliferation of malignant cells which can spread throughout the body. This is caused by mutations within the cancer cell, which often lead to expression of mutant proteins termed neoantigens. Identification of these neoantigens has great therapeutic benefit, as they can be exploited to target cancer cells without harming normal cells. As CD8+ T cells can recognize some neoantigens through their TCR, sequencing of TCR repertoires can help identify potential cancer biomarkers. In addition to biomarker identification, sequencing of the TCR repertoire can also track changes in cancer progression, assess responses to immunotherapy, and evaluate the tumour microenvironment for conditions that may make it permissive to cancer growth.
See also
NOMe-seq
PLAC-Seq
References
DNA sequencing
Molecular biology techniques | Tcr-seq | [
"Chemistry",
"Biology"
] | 2,202 | [
"Molecular biology techniques",
"DNA sequencing",
"Molecular biology"
] |
73,168,113 | https://en.wikipedia.org/wiki/Circular%20fashion | Circular fashion is an application of circular economy to the fashion industry, where the life cycles of fashion products are extended. The aim is to create a closed-loop system where clothing items are designed, produced, used, and then recycled or repurposed in a way that minimizes waste and reduces the environmental impact of the fashion industry. It involves moving away from the traditional linear model of take-make-use-and-dispose towards a circular model of reduce-reuse-recycle-and-regenerate. This model not only helps in reducing environmental impact but also promotes economic growth through innovative business models and sustainable practices.
According to the definition of The European Parliament, this involves "sharing, leasing, reusing, repairing, refurbishing and recycling existing materials and products as long as possible." As suggested by The European Commission report, circular fashion encompasses a range of practices and strategies such as designing clothes for longevity, using sustainable materials, implementing recycling programs, and promoting secondhand markets. It also involves reducing the environmental impact of the production process by using sustainable energy sources and reducing the use of chemicals and water. Garments used in circular fashion are designed for longevity and durability with eco-friendly materials to encourage longer lifespans and methods that minimize waste and environmental impact.
Pioneering work and terminology on circular fashion reached the mainstream through a 2017 report by the Ellen MacArthur Foundation titled "A New Textile Economy: Redesigning Fashion's Future". So far, the EU has been the main proponent for developing frameworks around circular fashion on a policy level, such as the Circular Economy Action Plan, part of the European Commission's "EU strategy for sustainable and circular textiles," launched in March 2022.
References
Further reading
Can clothes ever be fully recycled?; BBC
Fashion industry
Environmental economics
Sustainable business
Clothing and the environment
Recycling
Reuse
2014 neologisms | Circular fashion | [
"Environmental_science"
] | 379 | [
"Environmental economics",
"Environmental social science"
] |
73,169,134 | https://en.wikipedia.org/wiki/HD%20189080 | HD 189080, also known as HR 7621 or rarely 74 G. Telescopii, is a solitary orange-hued star located in the southern constellation Telescopium. It has an apparent magnitude of 6.18, placing it near the limit for naked eye visibility. Gaia DR3 parallax measurements place it at a distance of 357 light years and it is currently receding rapidly with a heliocentric radial velocity of . At its current distance, HD 189080's brightness is diminished by 0.17 magnitudes due to extinction from interstellar dust. It has an absolute magnitude of +1.1.
This is an evolved red giant with a stellar classification of K0 III. It is currently on the red giant branch, fusing a hydrogen shell around an inert helium core. It has 119% the mass of the Sun, but at the age of 4.83 billion years it has expanded to 9.9 times the radius of the Sun. It radiates 43.6 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of . HD 189080 is slightly metal deficient with [Fe/H] = −0.11 and spins too slowly to be measured accurately.
References
K-type giants
Telescopium
Telescopii, 74
CD-49 12949
189080
098482
7621 | HD 189080 | [
"Astronomy"
] | 288 | [
"Telescopium",
"Constellations"
] |
73,169,693 | https://en.wikipedia.org/wiki/Grothendieck%20trace%20theorem | In functional analysis, the Grothendieck trace theorem is an extension of Lidskii's theorem about the trace and the determinant of a certain class of nuclear operators on Banach spaces, the so-called ⅔-nuclear operators. The theorem was proven in 1955 by Alexander Grothendieck. Lidskii's theorem does not hold in general for Banach spaces.
The theorem should not be confused with the Grothendieck trace formula from algebraic geometry.
Grothendieck trace theorem
Given a Banach space $(B, \|\cdot\|)$ with the approximation property, denote its dual by $B'$.
⅔-nuclear operators
Let $T$ be a nuclear operator on $B$; then $T$ is a $\tfrac{2}{3}$-nuclear operator if it has a decomposition of the form
$$T = \sum_{k=1}^{\infty} \varphi_k \otimes f_k,$$
where $\varphi_k \in B'$ and $f_k \in B$, and
$$\sum_{k=1}^{\infty} \|\varphi_k\|^{2/3}\,\|f_k\|^{2/3} < \infty.$$
Grothendieck's trace theorem
Let $\lambda_j(T)$ denote the eigenvalues of a $\tfrac{2}{3}$-nuclear operator $T$, counted with their algebraic multiplicities. Then the eigenvalue sequence is absolutely summable,
$$\sum_{j} |\lambda_j(T)| < \infty,$$
and the following equalities hold:
$$\operatorname{tr} T = \sum_{j} \lambda_j(T),$$
and for the Fredholm determinant
$$\det(\mathrm{Id} + T) = \prod_{j} \bigl(1 + \lambda_j(T)\bigr).$$
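As a minimal illustration of the two identities (an example added here, not part of the theorem's original statement), consider a rank-one operator:

```latex
% Illustration: a rank-one operator T = \varphi \otimes f, with \varphi \in B' and f \in B,
% acts as Tx = \varphi(x)\, f. Its only nonzero eigenvalue is \varphi(f), so
\operatorname{tr} T = \varphi(f),
\qquad
\det(\mathrm{Id} + T) = 1 + \varphi(f),
% which matches the sum and the product over the (single-term) eigenvalue sequence.
```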
See also
Literature
References
Theorems in functional analysis
Topological tensor products
Determinants | Grothendieck trace theorem | [
"Mathematics",
"Engineering"
] | 212 | [
"Theorems in mathematical analysis",
"Tensors",
"Theorems in functional analysis",
"Topological tensor products"
] |
73,169,871 | https://en.wikipedia.org/wiki/NOMe-seq | Nucleosome Occupancy and Methylome Sequencing (NOMe-seq) is a genomics technique used to simultaneously detect nucleosome positioning and DNA methylation... This method is an extension of bisulfite sequencing, which is the gold standard for determining DNA methylation. NOMe-seq relies on the methyltransferase M.CviPl, which methylates cytosines in GpC dinucleotides unbound by nucleosomes or other proteins, creating a nucleosome footprint. The mammalian genome naturally contains DNA methylation, but only at CpG sites, so GpC methylation can be differentiated from genomic methylation after bisulfite sequencing. This allows simultaneous analysis of the nucleosome footprint and endogenous methylation on the same DNA molecules. In addition to nucleosome foot-printing, NOMe-seq can determine locations bound by transcription factors. Nucleosomes are bound by 147 base pairs of DNA whereas transcription factors or other proteins will only bind a region of approximately 10-80 base pairs. Following treatment with M.CviPl, nucleosome and transcription factor sites can be differentiated based on the size of the unmethylated GpC region.
Nucleosome occupancy determines DNA accessibility, which provides insight into regulatory regions of the genome. Important regulatory elements within a cell (such as promoters, enhancers, silencers, etc.), are located in open or accessible regions to allow binding of transcription factors or other regulatory molecules. NOMe-seq can therefore be used to elucidate regulatory information. Alternative DNA accessibility techniques include MNase-seq, DNase-seq, FAIRE-seq, and their successor ATAC-seq. NOMe-seq has the additional benefit of providing DNA methylation status, which also plays a crucial role in the regulation of genomic activity. Interestingly, increased DNA methylation is associated with transcriptional silencing whereas accessible DNA unbound by nucleosomes is generally associated with transcriptional activation. In this sense, NOMe-seq consists of two independent methylation analyses that are functionally oppositional.
History
The M.CviPl methyltransferase was first described in 1998, where the gene was cloned from Chorella virus NYs-1. After its discovery, the methyltransferase was used for nucleosome foot-printing as early as 2004, but NOMe-seq was not officially described until 2012. M.CviPl was not the only methyltransferase used for nucleosome foot-printing; Methylase-sensitive Single Promoter Analysis (M-SPA) was described in 2005 using the CpG methyltransferase M.Sssi. M.CviPl techniques quickly overtook M-SPA as GpC specificity is preferable to CpG specificity, with GpC dinucleotides having a broader distribution throughout the genome and no endogenous methylation. The NOMe-seq assay was subsequently developed, with the earliest mention being in 2011 and an in depth description published in 2012. The technique has since been adapted for single cell technologies, with single cell NOMe-seq (scNOMe-seq) described in 2017 and NOMe-seq using nanopore sequencing (nanoNOMe) described in 2020. These adaptations have allowed high resolution analyses that can compare and contrast DNA accessibility between single cells.
Methods
Components
For isolating nuclei, components include Dulbecco's phosphate-buffered saline (DPBS), trypsin or dispase (depending on cell type), trypan blue, hemocytometer, lysis buffer, and wash buffer
To treat nuclei with M.CviPl, components include GpC Buffer, S-adenosylhomocysteine (SAM), M.CviPl, Sucrose, Nuclease-free water, and Stop buffer
For isolating M.CviPl-treated DNA, components include NaCl, Proteinase K, Phenol:Chloroform (1:1), Ethanol, TE Buffer, EDTA pH 8, and a Nanodrop spectrophotometer
For fragmenting M.CviPI-treated DNA, components include Covaris sonicator, Covaris MicroTUBE AFA Pre-slit Snap-Cap 6x16mm, Nanodrop Spectrophotometer, and DNA High Sensitivity Kit (Agilent) for use with Agilent 2100 Bioanalyzer
For bisulfite conversion of M.CviPI-treated DNA, components include EZ DNA Methylation Kit
For constructing NOMe-seq Library, components include Accel-NGS Methyl-Seq DNA Library Kit for Illumina Platforms, Methyl-Seq Set A Indexing Kit, Magnetic Beads, dsDNA HS Assay Kit, and DNA High Sensitivity Kit
Workflow
Isolate Nuclei: Lysation of pelleted cells
Treat Nuclei with M. CviPl to Methylate GpCs: Incubation with GpC buffer and M.CviPI
Purify and Fragment MCviPl-treated Nuclei
Treat MCviPI-treated Nuclei with Bisulfite: Conversion of unmethylated Cs to Ts
Construct NOME-seq library
Sequence NOMe-seq Library
Perform Quality Check: Analysis of the sequence by aligning the sequenced genomic clones to bisulfite converted sequence
Post-process: Analysis of duplicates and coverage quality
Call for Methylation at CpG and GpC sites: CpG is found in all HCG trinucleotides; GpC is found in all GCH trinucleotides (see the context-classification sketch after this list)
Call for NDRs
Perform Quality Analysis of methylation and NDRs identification: Visualization of methylation levels
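To make the HCG/GCH calling convention in the workflow above concrete, the following Python sketch is a simplified, hypothetical illustration (not a published pipeline): it labels each cytosine of a reference sequence as an endogenous-methylation context (HCG) or an accessibility context (GCH), skips ambiguous GCG positions, and tallies methylation calls per context.

```python
def classify_cytosine_contexts(ref_seq):
    """Label cytosines on the forward strand of ref_seq as
    'HCG' (endogenous CpG methylation context) or
    'GCH' (M.CviPI / accessibility context).
    GCG positions are ambiguous and therefore skipped.
    Returns {position: context}."""
    ref = ref_seq.upper()
    contexts = {}
    for i in range(1, len(ref) - 1):
        if ref[i] != "C":
            continue
        prev_base, next_base = ref[i - 1], ref[i + 1]
        if prev_base != "G" and next_base == "G":
            contexts[i] = "HCG"   # CpG not preceded by G
        elif prev_base == "G" and next_base != "G":
            contexts[i] = "GCH"   # GpC not followed by G
        # GCG positions and all other cytosines are ignored
    return contexts

def methylation_fraction(contexts, methylation_calls, wanted):
    """Fraction of methylated cytosines among positions with the wanted
    context; methylation_calls maps position -> True/False, as derived
    from bisulfite-converted reads."""
    positions = [p for p, c in contexts.items() if c == wanted]
    if not positions:
        return None
    return sum(bool(methylation_calls.get(p)) for p in positions) / len(positions)

# Toy example
ctx = classify_cytosine_contexts("AACGTTGCATGCGA")
print(ctx)  # {2: 'HCG', 7: 'GCH'}; the GCG cytosine at position 11 is skipped
```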
Use
Advantages
High resolution
Identifies regulatory elements without needing to understand nucleosome modifications (in comparison to CHIP-seq)
Greater depth regarding the position of nucleosomes (in comparison to DNase-seq, FAIRE-seq, and ATAC-seq)
Identifies information on dual nucleosome position
Identifies DNA methylation at a single-DNA molecule resolution
Relatively short reaction time: 15 minutes while utilizing approximately 200,000 cells.
Limitations
Relies on the presence of GpC residues: while the broad distribution of GpCs provides high depth information, mapping is still based on GpC presence in regions unbound by nucleosomes or transcription factors and therefore can not provide single nucleotide resolution
Higher expense compared to other sequencing methods due to the depth in information generated
Other Applications & Complementary Methods
scNOMe-seq
scNOMe-seq is a method that was adapted from NOMe-seq to be used in single cells studies. This has been found to produce similar results as NOMe-seq when using bulk samples of human cell cultures. Single cell analyses have many benefits in cases where gene expression can vary between cells. For example, to further develop cancer treatments, it would be useful to understand the differences that arise between individual cells using the scNOMe-seq method.
nanoNOMe
nanoNOMe is a method that was adapted from NOMe-seq that uses nanopore sequencing instead of bisulfite sequencing. Nanopore sequencing is a long-read sequencing method that also detects DNA methylation, providing additional insight into long-range patterns on individual molecules.
NOMePlot
NOMePlot is a bioinformatic tool that was developed for datasets derived by NOMe-seq. This tool easily obtains single molecule locus-specific information in genome-wide datasets from bulk cell populations and has been validated using mouse embryonic stem cells.
See also
Pore-C
References
DNA sequencing
Molecular biology techniques | NOMe-seq | [
"Chemistry",
"Biology"
] | 1,613 | [
"Molecular biology techniques",
"DNA sequencing",
"Molecular biology"
] |
73,170,392 | https://en.wikipedia.org/wiki/Anne-Christine%20Hladky | Anne-Christine Hladky-Hennion (born 1965) is a French researcher in acoustic metamaterials. She is a director of research for the French National Centre for Scientific Research (CNRS), and scientific deputy director of the CNRS (INSIS).
Education and career
Hladky is originally from Lille, where she was born in 1965. After earning a diploma in 1987 from the Institut supérieur de l'électronique et du numérique in Lille, she continued her education at the Lille University of Science and Technology, where she earned a doctorate in 1990, in materials science. Her doctoral dissertation, Application de la méthode des éléments finis à la modélisation de structures périodiques utilisées en acoustique, was supervised by Jean-Noël Decarpigny.
She joined CNRS in 1992, and became a director of research in 2015.
Recognition
Hladky was the 1990 winner of the Young Researcher Prize of the French Acoustical Society. In 2018 she received the CNRS Silver Medal.
References
1965 births
Living people
Scientists from Lille
French materials scientists
Women materials scientists and engineers
Metamaterials scientists
Research directors of the French National Centre for Scientific Research
Acousticians | Anne-Christine Hladky | [
"Materials_science",
"Technology"
] | 246 | [
"Metamaterials",
"Materials scientists and engineers",
"Metamaterials scientists",
"Women materials scientists and engineers",
"Women in science and technology"
] |
73,172,241 | https://en.wikipedia.org/wiki/Oxidation%20state%20localized%20orbitals | Oxidation state localized orbitals (OSLOs) are a recently introduced concept used to determine the oxidation state of each fragment of a coordination complex. Based on the result of a density functional theory (DFT) calculation, all the occupied molecular orbitals are remixed to obtain the oxidation state localized orbitals. These orbitals are assigned to one of the fragments in the molecule based on the fragment orbital localization index (FOLI). After all the electrons are assigned, the oxidation state of each fragment can be obtained by calculating the difference between the number of electrons and protons in that fragment.
History
Oxidation state is an important index to evaluate the charge distribution within molecules. The most common definition of oxidation state was established by IUPAC: the atom with higher electronegativity takes all the bonding electrons, and the difference between the number of electrons and protons around each atom gives its oxidation state. However, this definition does not thoroughly consider the distribution of the bonding electrons, which restricts the applicability of oxidation states.
To precisely assign the oxidation state for each component in the molecule, especially for organometallic complexes, several different research groups, including Pedro Salvador and Martin Head-Gordon, have developed different methods to determine the oxidation states. In 2009, Martin Head-Gordon group established a new method called localized orbitals bonding analysis (LOBA) to assign the electrons associated with each localized orbitals. However, this method failed to provide reasonable oxidation states since the orbitals cannot be localized for some complicated systems.
To overcome this problem and obtain a correct assignment of oxidation states, in 2022 Martin Head-Gordon and Pedro Salvador decided to localize the electrons based on fragments rather than atoms. They developed the method known as oxidation state localized orbitals (OSLOs), which can accurately assign electrons to different fragments to obtain the oxidation states of each fragment.
General methods
Generation of full set of orbitals
Based on the density functional theory, a full set of orbitals will compose the resulting OSLOs for each fragment. Then, these sets will be imported to the algorithm for further assignment of oxidation states and construction of OSLOs.
Localization measurement
The extent of delocalization could be quantified by using Pipek's delocalization measurement. For orbitals which are highly localized, the Pipek's indexes will be very close to 1. On the other hand, for highly delocalized orbitals, the Pipek's indexes become larger.
However, this method cannot evaluate the extent of localization on each fragment, so a new measurement is necessary. The fragment orbital localization index (FOLI) is defined as the square root of the fragment population over the delocalization index.
Based on this localization index, the extent of localization on each fragment can be determined: a higher FOLI means that the orbital is less localized on that fragment, and vice versa. Thus, after the FOLI values are acquired, the electrons in each OSLO are assigned to the fragment with the lowest FOLI.
Workflow
The procedure starts from the results of density functional theory calculations, which provide the candidate sets of orbitals. After the FOLI has been calculated for each set, the set with the minimal FOLI is selected for further analysis. For the selected set, the OSLOs are removed and the oxidation states are assigned based on these OSLOs; in this method, the fragment with the higher electron population gets all the electrons in the orbital. All the other sets become the input for the next round of analysis, and the process repeats until all OSLOs are constructed and all electrons are assigned.
Result
Significance
The valence OSLOs of the molecule can also be constructed using the method. The oxidation state of the ligand and metal are also determined and show consistency with the expected Lewis structure and can provide great insight for evaluating the redox reactivity.
The last FOLI and the Δ-FOLI are two important values for evaluating the quality of the localization result. A last FOLI close to 1 means that the OSLOs are highly localized on one fragment. The Δ-FOLI is the difference between the last FOLI and the second-last FOLI; a larger Δ-FOLI means the selected set of OSLOs is much better than the other options, which indicates that the result is unambiguous.
Notable result
For example, using the OSLOs for ferrocene shows great consistency with the prediction. The metal center was assigned the oxidation state of +2, and the Cp ligands were assigned the oxidation state of -1, which is quite consistent with the aromatic behavior of Cp. Furthermore, the last FOLI for ferrocene is 1.313 and the Δ-FOLI is 1.800, both indicating the unambiguity of the result.
However, for some complicated species possessing noninnocent ligands, the results become ambiguous. For example, several copper-trifluoromethyl complexes show a small Δ-FOLI, which means the result is no longer unique. Moreover, whether the copper has an oxidation state of +3 or +1 remains controversial. In addition, for the Grubbs catalyst, the result is inconsistent with the conventional Fischer and Schrock classifications.
References
Wikipedia Student Program
Coordination complexes
Coordination chemistry | Oxidation state localized orbitals | [
"Chemistry"
] | 1,085 | [
"Coordination chemistry",
"Coordination complexes"
] |
68,775,476 | https://en.wikipedia.org/wiki/Yardena%20Samuels | Yardena Samuels or Samuels-Lev is an Israeli molecular biologist who is the Director of the Ekard Institute for Cancer Diagnosis Research at the Weizmann Institute of Science. Her research considers the genetic mutations of melanoma.
Early life and education
Samuels was born in Tel HaShomer. She first visited the Weizmann Institute of Science at the age of seventeen, when she attended a summer school at the international summer science institute. Her mother is a diplomat and her father is a Director for International Relations. She was an undergraduate student at University of Cambridge, where she earned a Bachelor's degree in 1993. Samuels moved to the Hebrew University of Jerusalem, where she worked toward an MSc in immunology. She returned to the United Kingdom for graduate studies in molecular cancer biology, during which she was based at the Ludwig Institute for Cancer Research. She moved to Johns Hopkins University as a postdoctoral fellow in Bert Vogelstein's laboratory in 2003. She identified that the gene encoding PI3-Kalpha is mutated in one third of colorectal cancer patients. During this position, she became interested in personalised medicine for cancer treatment.
Research and career
In 2006, Samuels was appointed Assistant Professor at the Cancer Genetics Branch of the National Human Genome Research Institute at the National Institutes of Health. She established a tumour bank of almost 120 normal and tumour tissue samples. The bank allowed her to analyse for mutations in melanomas, as well as offering hope for the identification of new drug targets. She returned to Israel as a researcher in 2012, where she established her own laboratory at the Weizmann Institute of Science. She was made Director of the Weizmann Brazil Tumor Bank. The bank helps scientists identify genes that are associated with tumour growth.
Samuels' research involves the use of DNA sequencing to identify genetic mutations in melanoma. She identified a mutation that is present in one in five of melanoma cases.
Awards and honours
Elected to the Council of the European Molecular Biology Organization
European Research Council award
Alfred Blalock, Young Investigators' Day Award
Genome Technology’s Top 25 Young Investigators
Elected Fellow of the European Academy of Cancer Sciences
Selected publications
Personal life
Samuels is married to Ori Lev and has two sons.
References
Molecular biologists
Israeli biologists
Year of birth missing (living people)
Hebrew University of Jerusalem alumni
Alumni of Imperial College London
National Institutes of Health faculty
Academic staff of Weizmann Institute of Science
Living people | Yardena Samuels | [
"Chemistry"
] | 500 | [
"Molecular biologists",
"Biochemists",
"Molecular biology"
] |
68,776,341 | https://en.wikipedia.org/wiki/Trino%20%28SQL%20query%20engine%29 | Trino is an open-source distributed SQL query engine designed to query large data sets distributed over one or more heterogeneous data sources. Trino can query data lakes that contain a variety of file formats such as simple row-oriented CSV and JSON data files to more performant open column-oriented data file formats like ORC or Parquet residing on different storage systems like HDFS, AWS S3, Google Cloud Storage, or Azure Blob Storage using the Hive and Iceberg table formats. Trino also has the ability to run federated queries that query tables in different data sources such as MySQL, PostgreSQL, Cassandra, Kafka, MongoDB and Elasticsearch. Trino is released under the Apache License.
History
In January 2019, the original creators of Presto, Martin Traverso, Dain Sundstrom, and David Phillips, created a fork of the Presto project. They initially kept the name Presto and used the PrestoSQL web handle to distinguish it from the original PrestoDB project. Simultaneously, they announced the Presto Software Foundation. The foundation is a not-for-profit organization dedicated to the advancement of the Presto open source distributed SQL query engine.
In December 2020, PrestoSQL was rebranded as Trino. The Trino Software Foundation, code base, and all other PrestoSQL assets were renamed as part of the rebrand.
Presto and Trino were originally designed and developed by Martin, Dain, David, and Eric Hwang at Facebook to allow data analysts to run interactive queries on its large data warehouse in Apache Hadoop. Trino shares the first six years of development with the Presto project; for the earlier history of Trino, see the history of the Presto project.
Trino is used in many data platforms and products from cloud providers and other vendors. Customization of these products varies from pure Trino usage to heavily customized systems to run a data platform or integration in specialized data platforms for usage with specific data. Examples include Amazon Athena, Starburst Galaxy, Dune, and many others.
Architecture
Trino is written in Java. It runs on a cluster of servers that contains two types of nodes, a coordinator and a worker.
The coordinator is responsible for parsing, analyzing, optimizing, planning, and scheduling a query submitted by a client. The coordinator interacts with the service provider interface (SPI) to obtain the available tables, table statistics, and other information needed to carry out its tasks.
The workers are responsible for executing the tasks and operators fed to them by the scheduler. These tasks process rows from the data sources which produce results that are returned to the coordinator and ultimately back to the client.
Trino adheres to the ANSI SQL standard and includes various parts of the following ANSI specifications: SQL-92, SQL:1999, SQL:2003, SQL:2008, SQL:2011, SQL:2016, SQL:2023.
Trino supports the separation of compute and storage and may be deployed both on-premises and in the cloud.
Trino has a distributed, massively parallel processing (MPP) architecture. Trino first distributes work over multiple workers by running ad-hoc partitioning operations or by relying on existing partitions in the underlying data store. Once the data has reached a worker, it is processed by pipelined operators running on multiple threads.
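As an illustration of how a client interacts with the coordinator, the sketch below uses the Trino Python client (installed separately, e.g. pip install trino) to submit a federated query. The host name, catalogs, schema and table names are placeholders rather than a real deployment, and the DB-API style calls shown reflect the client's documented interface at the time of writing.

```python
# Minimal sketch using the Trino Python client; all identifiers are placeholders.
import trino

conn = trino.dbapi.connect(
    host="trino-coordinator.example.com",  # the coordinator node
    port=8080,
    user="analyst",
    catalog="hive",       # default catalog for unqualified table names
    schema="default",
)
cur = conn.cursor()

# A federated query: join a table in a Hive/S3 data lake with one in PostgreSQL.
cur.execute("""
    SELECT o.order_id, o.total, c.segment
    FROM hive.sales.orders AS o
    JOIN postgresql.crm.customers AS c
      ON o.customer_id = c.customer_id
    WHERE o.order_date >= DATE '2024-01-01'
""")
for row in cur.fetchall():
    print(row)
```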
See also
Presto (SQL query engine)
Big data
Data Intensive Computing
Apache Drill
Computer cluster
References
External links
Trino Software Foundation (formerly Presto Software Foundation)
SQL
Free system software
Hadoop
Cloud platforms
Java platform | Trino (SQL query engine) | [
"Technology"
] | 750 | [
"Cloud platforms",
"Computing platforms",
"Java platform"
] |
68,777,315 | https://en.wikipedia.org/wiki/Classical%20probability%20density | The classical probability density is the probability density function that represents the likelihood of finding a particle in the vicinity of a certain location subject to a potential energy in a classical mechanical system. These probability densities are helpful in gaining insight into the correspondence principle and making connections between the quantum system under study and the classical limit.
Mathematical background
Consider the example of a simple harmonic oscillator initially at rest with amplitude $A$. Suppose that this system was placed inside a light-tight container such that one could only view it using a camera which can only take a snapshot of what's happening inside. Each snapshot has some probability of seeing the oscillator at any possible position $x$ along its trajectory. The classical probability density encapsulates which positions are more likely, which are less likely, the average position of the system, and so on. To derive this function, consider the fact that the positions where the oscillator is most likely to be found are those positions at which the oscillator spends most of its time. Indeed, the probability of being at a given $x$-value is proportional to the time spent in the vicinity of that $x$-value. If the oscillator spends an infinitesimal amount of time $\mathrm{d}t$ in the vicinity $\mathrm{d}x$ of a given $x$-value, then the probability of being in that vicinity will be
$$P(x)\,\mathrm{d}x \propto \mathrm{d}t.$$
Since the force acting on the oscillator is conservative and the motion occurs over a finite domain, the motion will be cyclic with some period which will be denoted $T$. Since the probability of the oscillator being at any possible position between the minimum possible $x$-value $x_{\min}$ and the maximum possible $x$-value $x_{\max}$ must sum to 1, the normalization
$$\int_{x_{\min}}^{x_{\max}} P(x)\,\mathrm{d}x = N \int \mathrm{d}t = 1$$
is used, where $N$ is the normalization constant. Since the oscillating mass covers this range of positions in half its period (a full period goes from $x_{\min}$ to $x_{\max}$ then back to $x_{\min}$), the integral over $t$ is equal to $T/2$, which sets $N$ to be $2/T$.
Using the chain rule, $\mathrm{d}t$ can be put in terms of the height at which the mass is lingering by noting that $\mathrm{d}t = \mathrm{d}x/|v(x)|$, so our probability density becomes
$$P(x) = \frac{2}{T}\,\frac{1}{|v(x)|},$$
where $|v(x)|$ is the speed of the oscillator as a function of its position. (Note that because speed is a scalar, $|v(x)|$ is the same for both half periods.) At this point, all that is needed is to provide a function $v(x)$ to obtain $P(x)$. For systems subject to conservative forces, this is done by relating speed to energy. Since the kinetic energy is $\tfrac{1}{2}mv^{2}$ and the total energy is $E = \tfrac{1}{2}mv^{2} + U(x)$, where $U(x)$ is the potential energy of the system, the speed can be written as
$$v(x) = \sqrt{\frac{2}{m}\bigl[E - U(x)\bigr]}.$$
Plugging this into our expression for $P(x)$ yields
$$P(x) = \frac{2}{T}\sqrt{\frac{m}{2\bigl[E - U(x)\bigr]}} = \frac{1}{T}\sqrt{\frac{2m}{E - U(x)}}.$$
Though our starting example was the harmonic oscillator, all the math up to this point has been completely general for a particle subject to a conservative force. This formula can be generalized for any one-dimensional physical system by plugging in the corresponding potential energy function. Once this is done, $P(x)$ is readily obtained for any allowed energy $E$.
Examples
Simple harmonic oscillator
Starting with the example used in the derivation above, the simple harmonic oscillator has the potential energy function
$$U(x) = \tfrac{1}{2}kx^{2} = \tfrac{1}{2}m\omega^{2}x^{2},$$
where $k$ is the spring constant of the oscillator and $\omega = \sqrt{k/m}$ is the natural angular frequency of the oscillator. The total energy of the oscillator is given by evaluating $U(x)$ at the turning points $x = \pm A$, which gives $E = \tfrac{1}{2}m\omega^{2}A^{2}$. Plugging this into the expression for $P(x)$, and using $T = 2\pi/\omega$, yields
$$P(x) = \frac{1}{\pi\sqrt{A^{2} - x^{2}}}.$$
This function has two vertical asymptotes at the turning points, which makes physical sense since the turning points are where the oscillator is at rest, and thus will be most likely found in the vicinity of those values. Note that even though the probability density function tends toward infinity, the probability is still finite due to the area under the curve, and not the curve itself, representing probability.
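Because the density follows directly from the time spent per unit length, it can also be checked numerically by sampling the trajectory at uniformly distributed times and histogramming the positions. The short Python sketch below, with arbitrary parameter values, compares such a histogram against the analytic result above.

```python
import numpy as np

# Arbitrary oscillator parameters for the check.
A, omega = 1.0, 2.0
T = 2 * np.pi / omega

# Sample the trajectory x(t) = A sin(omega t) at uniformly random times.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, T, size=1_000_000)
x = A * np.sin(omega * t)

# Compare a histogram of positions with the analytic classical density.
hist, edges = np.histogram(x, bins=50, range=(-A, A), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
analytic = 1.0 / (np.pi * np.sqrt(A**2 - centers**2))
# Deviations are largest near the turning points, where the density diverges.
print(np.max(np.abs(hist - analytic) / analytic))
```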
Bouncing ball
For the lossless bouncing ball, the potential energy and total energy are
$$U(z) = mgz, \qquad E = mgH,$$
where $H$ is the maximum height reached by the ball. Plugging these into $P(z)$ yields
$$P(z) = \frac{1}{2\sqrt{H(H - z)}},$$
where the relation $T = 2\sqrt{2H/g}$ was used to simplify the factors out front. The domain of this function is $0 \le z \le H$ (the ball does not fall through the floor at $z = 0$), so the distribution is not symmetric as in the case of the simple harmonic oscillator. Again, there is a vertical asymptote at the turning point $z = H$.
Momentum-space distribution
In addition to looking at probability distributions in position space, it is also helpful to characterize a system based on its momentum. Following a similar argument as above, the result is
$$P(p) = \frac{2}{T}\,\frac{1}{|F(x)|},$$
where $F(x)$ is the force acting on the particle as a function of position. In practice, this function must be put in terms of the momentum $p$ by a change of variables.
Simple harmonic oscillator
Taking the example of the simple harmonic oscillator above, the potential energy and force can be written as
$$U(x) = \tfrac{1}{2}m\omega^{2}x^{2}, \qquad F(x) = -m\omega^{2}x.$$
Identifying $p_{0} = \sqrt{2mE}$ as the maximum momentum of the system, this simplifies to
$$P(p) = \frac{1}{\pi\sqrt{p_{0}^{2} - p^{2}}}.$$
Note that this has the same functional form as the position-space probability distribution. This is specific to the problem of the simple harmonic oscillator and arises due to the symmetry between $x$ and $p$ in the equations of motion.
Bouncing ball
The example of the bouncing ball is more straightforward, since in this case the force is a constant,
$$F = -mg,$$
resulting in the probability density function
$$P(p) = \frac{1}{2p_{0}}, \qquad -p_{0} \le p \le p_{0},$$
where $p_{0} = m\sqrt{2gH}$ is the maximum momentum of the ball. In this system, all momenta are equally probable.
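This uniformity can be seen directly from the trajectory: between bounces the momentum decreases linearly in time, so uniformly sampled times map to uniformly distributed momenta. A brief Python sketch with arbitrary parameters illustrates the check.

```python
import numpy as np

m, g, H = 1.0, 9.81, 2.0
p0 = m * np.sqrt(2 * g * H)          # maximum momentum
t_flight = 2 * np.sqrt(2 * H / g)    # duration of one up-and-down flight

rng = np.random.default_rng(1)
t = rng.uniform(0.0, t_flight, size=1_000_000)
p = p0 - m * g * t                   # momentum decreases linearly between bounces

hist, _ = np.histogram(p, bins=20, range=(-p0, p0), density=True)
print(hist)  # all bins are close to 1/(2*p0), i.e. the distribution is uniform
```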
See also
Probability density function
Correspondence principle
Classical limit
Wave function
References
Concepts in physics
Classical mechanics
Theoretical physics | Classical probability density | [
"Physics"
] | 1,060 | [
"Theoretical physics",
"Mechanics",
"Classical mechanics",
"nan"
] |
68,778,849 | https://en.wikipedia.org/wiki/Tisotumab%20vedotin | Tisotumab vedotin, sold under the brand name Tivdak, is an antibody-drug conjugate used to treat cervical cancer. It is a combination of tisotumab, a monoclonal antibody against tissue factor, and monomethyl auristatin E (MMAE), a potent inhibitor of cell division. It is administered by infusion into a vein.
Tisotumab vedotin was approved for medical use in the United States in September 2021. The U.S. Food and Drug Administration considers it to be a first-in-class medication.
Adverse effects
In the United States, Tivdak carries a black box warning for ocular toxicity, which occurs in up to 60% of treated patients. In clinical trials, the most common forms of ocular toxicity were dry eye, conjunctivitis, corneal damage, and blepharitis.
Other common adverse effects include bleeding (occurring in approximately 60% of patients, most often nosebleed) and peripheral neuropathy (42% of patients). Like all drugs containing MMAE, tisotumab vedotin can cause inflammation of the lungs.
Mechanism of action
The antibody portion of tisotumab vedotin (tisotumab) binds to and forms a complex with tissue factor, a molecule expressed on the surface of cancer cells. This complex is then taken up into the cell, where tisotumab vedotin is broken down by proteolytic cleavage, releasing MMAE, which stops the cell cycle and kills the cell by apoptosis.
History
Tisotumab vedotin was developed by Genmab in Utrecht, the Netherlands, and Copenhagen, Denmark, with the code name TF-011-MMAE. In September 2021, tisotumab vedotin was granted accelerated approval by the United States Food and Drug Administration for the treatment of recurrent or metastatic cervical cancer with disease progression on or after chemotherapy.
Society and culture
Legal status
In April 2024, Tisotumab vedotin was granted traditional approval for recurrent or metastatic cervical cancer with disease progression on or after chemotherapy. Tisotumab vedotin previously received accelerated approval for this indication.
Names
Tisotumab vedotin is the international nonproprietary name. Tivdak is the brand name for tisotumab vedotin in the United States.
References
External links
Antibody-drug conjugates
Monoclonal antibodies for tumors | Tisotumab vedotin | [
"Biology"
] | 512 | [
"Antibody-drug conjugates"
] |
68,780,058 | https://en.wikipedia.org/wiki/Arsinide | An arsinide, arsanide, dihydridoarsenate(1−) or arsanyl compound is a chemical derivative of arsine, where one hydrogen atom is replaced with a metal or cation. The arsinide ion has the formula [AsH2]−. It can be considered as a ligand, in which case it is named arsanido. Few chemists study arsanyl compounds, as they are both toxic and unstable. The IUPAC names are arsanide and dihydridoarsenate(1−). For the ligand the name is arsanido. The neutral group is termed arsanyl.
Formation
Alkali metal arsinides can form by bubbling arsine through a liquid ammonia solution of alkali metal such as sodium, potassium or alkaline earth metal such as calcium.
Arsinides are also formed when arsine reacts with thin layers of alkali metals.
The arsine may reduce some compounds to metals, so for example an attempt to make an indium arsinide results in metallic indium.
Reactions
When heated, metal hydrogen arsinide and metal dihydrogen arsinide compounds lose hydrogen to become a metal arsenide.
With lithium dihydrogen arsinide (LiAsH2), arsine can also be lost to give dilithium hydrogen arsinide (Li2AsH).
These reactions take place even at room temperature, and result in a discolouration of the original chemical.
Sodium dihydrogen arsinide reacts with alkyl halides RX (where X = F, Cl, Br, I, and R is alkyl) to make dialkylarsine . Potassium dihydrogen arsinide reacts with alkyl halides to make trialkylarsine .
Sodium dihydrogen arsinide reacts with diethyl carbonate to yield the 2-arsaethynolate ion (analogous to the cyanate ion), which can be crystallised with the sodium ion and 18-crown-6.
Arsinides react with water to yield arsine (AsH3).
Potassium dihydrogen arsinide reacts with halobenzenes , where X = Cl, Br, I (chlorobenzene , bromobenzene , iodobenzene ) to produce benzene , tetraphenyldiarsine and triphenylarsine .
Potassium dihydrogen arsinide reacts with a silyl halide, e.g. chlorosilane , producing trisilylarsine.
Potassium dihydrogen arsinide reacts with and a crown ether resulting in .
List
Related
The hydrogen atoms in the arsinide anion may be substituted by organic or other groups, which can then also produce ions, for example by methyl, as in potassium methyl arsinide, or by trimethylsilyl. The doubly bonded ligand =AsH is called arsinidene.
References
Arsenic compounds
Anions
Hydrides | Arsinide | [
"Physics",
"Chemistry"
] | 602 | [
"Ions",
"Matter",
"Anions"
] |
68,780,641 | https://en.wikipedia.org/wiki/Liv%20Hornek%C3%A6r | Liv Hornekær (born 1972 in Copenhagen) is a Danish experimental physicist who works in nanotechnology and astrochemical research.
She is a professor at the Department of Physics and Astronomy at Aarhus University and head of the surface dynamics group at the department. Her research mainly covers the interaction between hydrogen atoms and carbon-based surfaces.
In 2016, she won the prestigious EliteForsk Prize, which was awarded by the Danish Ministry of Higher Education and Science. In 2017 she was appointed Professor of Physics at Aarhus University as the first woman ever, and in 2020 she was elected as a member of The Royal Danish Academy of Sciences and Letters.
References
External links
Danish women physicists
Scientists from Copenhagen
Royal Danish Academy of Sciences and Letters
Academic staff of Aarhus University
Experimental physicists
Nanotechnologists
Astrochemists
1972 births
Living people
21st-century Danish physicists
21st-century Danish women scientists | Liv Hornekær | [
"Physics",
"Chemistry",
"Materials_science"
] | 176 | [
"Experimental physics",
"Nanotechnology",
"Astrochemists",
"Experimental physicists",
"Nanotechnologists"
] |
68,781,702 | https://en.wikipedia.org/wiki/Venomics | Venomics is the study of proteins associated with venom, a toxic substance secreted by animals, which is typically injected either offensively or defensively into prey or aggressors, respectively.
Background
Venom is produced in a specialised gland (or glands) and is delivered through hollow fangs or a stinger in a process called envenomation. The main function of venom is to disrupt the physiological processes of the wounded animal through neurotoxic, cytotoxic, myotoxic, or haemotoxic mechanisms. This can help in processes such as procuring prey or defending against predators. Venom has evolved many times in multiple phyla, each having developed its own unique types of venom and methods of delivery independently. Because there are so many venomous animals in the world, they are a greater cause of animal-related deaths (~57,000 in 2013) than non-venomous animals (~22,000). For example, globally, someone is bitten by a snake every 10 seconds, according to estimates. Snakes are responsible for more than 5.4 million bite injuries, resulting in 1.8–2.7 million envenomings and around 81,410 to 137,880 deaths annually. Bites by venomous snakes can cause acute medical emergencies involving severe paralysis that may prevent breathing, cause bleeding disorders that can lead to fatal haemorrhage, cause irreversible kidney failure and severe local tissue destruction that can cause permanent disability and limb amputation. Children may suffer more severe effects and can experience the effects more quickly than adults due to their smaller body mass. With venomic methods, venom can be co-opted into beneficial substances such as new medicines and effective insecticides. For instance, Captopril® (Enalapril), Integrilin® (Eptifibatide) and Aggrastat® (Tirofiban) are drugs based on snake venoms, which have been approved by the FDA. In addition to these approved drugs, many other snake venom components are now involved in preclinical or clinical trials for a variety of therapeutic applications.
The Creation and History of Venomics Techniques
Venom is made up of multiple proteinaceous components, each differing in structural complexity. Venom can be a mixture of simple peptides, proteins with secondary structure (α-helices and β-sheets) and proteins with tertiary structure (crystalline structures). Furthermore, depending on the organism, there can be fundamental differences in the strategies incorporated in their venom contents, the biggest difference being between invertebrates and vertebrates. For example, the majority of funnel-web spider venom is made up of peptides between 3 and 5 kDa (75%), with the remaining peptides being between 6.5 and 8.5 kDa in mass. Conversely, snake venom is made up of more complex proteins such as modified saliva proteins (CRISPs & kallikrein) and protein families that have had their genes recruited from other tissue groups (acetylcholinesterase, crotasin, defensin & cystatin). Due to this extraordinary variation in the components that make up venom, a new field was needed to identify and categorise the millions of bioactive molecules found within venoms. Therefore, by combining the methods of multiple fields such as genomics, transcriptomics, proteomics and bioinformatics, a new field emerged, aptly named venomics.
Venomics was first established in the latter half of the 20th century as different ‘-omic’ technologies began to rise in popularity. However, the progression of venomics since its inception has always been reliant on, and limited by, the advancement of technology. Juan Calvete draws attention to this explicitly when detailing the history of venomics, declaring that "the last revolutions made in venomics research in the last decade (1989–1999) are the direct result of advancements made in proteomic-centered methods and the indirect result of more widely available and cost-effective forms of transcriptomics and bio-informatics analysis". One of the first popular research topics of venomics was the pharmacological properties of the polypeptide toxins found in snake venom (specifically, Elapidae and Hydrophidae), due to their neurotoxic properties and their ability to cause respiratory failure in animals. However, due to the lack of competent technology, research was limited to less complex techniques, such as dialysis to separate the venom, followed by simple chromatography and electrophoresis analysis.
Evidence of early interest in snake venom is present throughout the early 20th century, with one of the first big breakthroughs coming in the mid-1960s. For example, Halbert Raudonat was one of the first researchers to fractionate Cobra (Naja nivea) venom using sophisticated dialysis and paper chromatography techniques. Furthermore, Evert Karlsson and David Eaker were able to successfully purify the specific neurotoxins found in Cobra (Naja nigricollis) venom and found that those isolated polypeptides had a consistent molecular weight of around 7000.
Future research in this field would eventually lead to indirect predictive models and then direct crystal structures of many important protein superfamilies. For example, Barbara Low was one of the first to release a 3D structure of the three-finger protein (TFP), Erabutoxin-b. TFPs are an example of α-neurotoxins; they are small in structure (~60-80 amino acids in length) and are a predominant component of many snake venoms (representing up to 70%-95% of all toxins).
The Current State and Methodology of Venomics
Retrospectively, venomics has made a lot of progress in sequencing and creating accurate models of toxic molecules through current advanced methods. Through these methods, global categorisation of venoms has also taken place, with previously studied venoms being documented and widely available. An example of this is the ‘Animal toxin annotation project’ (provided by UniProtKB/Swiss-Prot), a database that aims to provide a high-quality and freely available source of protein sequences, 3D structures and functional information on thousands of animal venoms and poisons. So far, it has categorised over 6,500 toxins (both venoms and poisons) at the protein level, with the overall UniProt organisation having reviewed over 500,000 proteins and provided the proteomes of 100,000 organisms. However, even with today's technology, the deconstruction and cataloguing of the individual components of an animal's venom takes a large amount of time and resources due to the overwhelming number of molecules found in a single venom sample. This is complicated further by the fact that some animals (e.g. cone snails) can change the complexity and make-up of their venom depending on the circumstances of the envenoming (offensive or defensive). Furthermore, differences also exist between the males and females of a species, with their venoms varying in quantity and toxicity.
Professor Juan J. Calvete is a prolific researcher in venomics at the biomedical institute in Valencia and has extensively explained the process involved in untangling and analysing venom (once in 2007 and again in 2017).
These involve the following steps:
(1) Venom collection, (2) Separation and quantification, (3) Identification and (4) Representation of components found.
(1) Venom collection methods
Venom milking is the simplest way of collecting a venom sample. It usually involves inducing a vertebrate animal (typically a snake) to deliver a venomous bite into a container. Similarly, electrical stimulation can be used for invertebrate subjects (insects and arachnids). This practice has allowed for the discovery of the basic properties of venom and for understanding the biological factors involved in venom production, such as venom regeneration periods. Other methods involve post-mortem dissection of the venom glands to collect the required materials (venom or tissue).
(2) Separation and quantification methods
Separation methods are the first step to decomplexify the venom sample, with a common method being reverse-phase high performance liquid chromatography (RP-HPLC). This method can be applied broadly to nearly all venoms as a crude fractionation method and to detect the peptide bonds present. Less common techniques like 1D/2D gel electrophoresis can also be used for venoms containing heavy, complex peptides (preferably >10 kDa). This means that, in addition to RP-HPLC, gel electrophoresis can help identify large molecules (such as enzymes) and help refine venom prior to further analytical methods. Next, N-terminal sequencing is used to find the amino acid order of the fractionated proteins/peptides starting from the N-terminal end. Furthermore, SDS-PAGE (sodium dodecyl sulfate-polyacrylamide gel electrophoresis) can be performed on the proteins isolated from the RP-HPLC to identify proteins of interest before moving on to the identification stage.
(3) Identification methods
There are two predominantly used proteomic methods for identifying the structure of a peptide/protein: top-down proteomics (TDP) and bottom-up proteomics (BUP). TDP involves taking fractionated venom samples and analysing the intact peptides/proteins with liquid chromatography tandem mass spectrometry (LC-MS/MS), resulting in the identification and characterisation of all peptides/proteins present in the initial sample. BUP, in contrast, consists of fractionating and breaking down the peptides/proteins before LC-MS/MS analysis, using chemical reduction, alkylation and enzymatic digestion (typically with trypsin). BUP is more commonly used than TDP because breaking down the samples allows the components to fall within the ideal mass range for LC-MS/MS analysis. However, there are disadvantages and limitations to both identification methods. BUP results are prone to protein inference problems, as large toxins can be broken down into smaller fragments that appear in the output but do not exist naturally within the venom sample. TDP is the newer method and is able to fill in the gaps that BUP leaves, but it requires instruments with high resolving power (typically 50,000 or above). Most studies use both methods in parallel to obtain the most accurate results. Furthermore, transcriptomic/genomic methods can be used to create cDNA libraries from the mRNA molecules expressed in the venom glands of a venomous animal. These methods optimise the protein identification process by producing the DNA sequences of all proteins expressed in the venom glands. A major problem in using transcriptomic/genomic analysis in venomic studies is the lack of full genome sequences for many venomous animals. However, this is a diminishing problem due to the number of full-genome projects devoted to sequencing venomous animals, such as the 'venomous system genome project' (launched in 2003). Through these projects, various fields of study, such as ecological/evolutionary studies and venomic studies, can provide supporting information and systematic analysis of toxins.
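To illustrate the enzymatic digestion step used in bottom-up proteomics, the sketch below performs a simple in-silico tryptic digest of a protein sequence, using the common rule that trypsin cleaves C-terminal to lysine (K) and arginine (R) except when the next residue is proline. It ignores missed cleavages and post-translational modifications, and the example sequence is an arbitrary placeholder rather than a real toxin.

```python
def tryptic_digest(sequence: str) -> list[str]:
    """Split a protein sequence into tryptic peptides.

    Trypsin cleaves after K or R, but not when the following residue is P.
    Missed cleavages are not modelled in this simple sketch.
    """
    peptides, start = [], 0
    for i, residue in enumerate(sequence):
        next_res = sequence[i + 1] if i + 1 < len(sequence) else ""
        if residue in "KR" and next_res != "P":
            peptides.append(sequence[start:i + 1])
            start = i + 1
    if start < len(sequence):
        peptides.append(sequence[start:])  # C-terminal peptide
    return peptides


# Arbitrary placeholder sequence, not a real venom protein.
example = "MKWVRPLLIVKAGPNRTFDKECYATRPQK"
for peptide in tryptic_digest(example):
    # In BUP, the resulting short peptides fall in a mass range well suited to LC-MS/MS.
    print(f"{peptide}  (length {len(peptide)})")
```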
(4) Accurate representation of components
Renata Rodrigues produced an informative study detailing both the proteome and the transcriptome of Neuwied's lancehead (Bothropoides pauloensis), using all of the methods described above. The proteome showed the presence of nine protein families, with the majority of components belonging to snake venom metalloproteinases (SVMPs, 38%), phospholipase A2 (PLA2, 31%) and bradykinin-potentiating peptides/C-type natriuretic peptides (BPP/C-NP, 12%). The transcriptome yielded a cDNA library of over 1,100 expressed sequence tags (ESTs), with only 688 sequences being related to the venom gland. The transcriptome showed matching results, with SVMPs making up the largest share of the ESTs (36%), followed by PLA2 (26%) and BPP/C-NP (17%) sequences. This study shows that, through the combined use of proteomics and transcriptomics, the components within a venom can be comprehensively characterised. This can then reveal both the molecular structures and the functions of many bioactive components, which can in turn lead to the bioprospecting of venom components for new medicines and help to develop better methods of creating antivenoms.
The Future Possibilities of Venomics
The field of venomics has been vastly expanded since its origin in the 20th century and continues to be improved with contemporary methods such as next-generation sequencing and nuclear magnetic resonance spectroscopy. From this trend, it seems that venomics will be progressively enhanced in its capabilities through the persistent technological advancements of the 21st century. As previously mentioned, one route that venomics can expand upon further is the co-opting of venom-specific molecules into specialised medicines. The first example of this came in the 1970s, when captopril was developed as an inhibitor of angiotensin-converting enzyme (ACE) and found to be a means of treating hypertension in people. Glenn King discusses the current state of venom-derived drugs, with six drugs derived from venom being FDA-approved and ten more currently in clinical trials. Michael Pennington gives a detailed update on the current landscape of venom-derived drugs and the potential future of the field (Table 1).
Antivenoms are another branch of medicine that needs to be improved, given the problems many developing countries face with venomous animals. Regions such as South/Southeast Asia and sub-Saharan Africa are where many cases of both morbidity (e.g. limb amputation) and mortality take place. Snakes (especially the Elapidae and Viperidae) are the leading cause of envenomings, and antivenoms are in constant short supply in high-risk areas due to the strenuous production methods (immunised animals) and the strict storage requirements (constant storage below 0 °C). The problem is compounded by the medicine itself having limited effect on localised tissue damage and causing either acute (anaphylactic or pyrogenic) or delayed (serum sickness type) reactions in most patients. However, by using different 'omic' technologies, 'antivenomics' can potentially provide safer, more cost-effective and less time-consuming ways of producing antivenoms for a range of toxic organisms. New antivenom methods are also being investigated today with the use of monoclonal antibodies (mAbs) and the expansion of venom databases, allowing for more effective approaches when screening the cross-reactivity of antivenoms. Lastly, agriculture can be improved by venomic techniques through the development of insect-specific biopesticides created from venom. Insects are both agricultural/horticultural pests and vectors or carriers of many parasites and diseases, so effective insecticides are always needed to control the destructive effects of many insect species. However, many insecticides used in the past do not meet current regulations and have been banned due to harmful effects, such as affecting non-target species (DDT) or high toxicity towards mammals (neonicotinoids). Monique Windley proposes arachnid venom as a potential solution to this problem, due to the abundance of neurotoxic compounds present in such venoms (a predicted 10 million bioactive peptides) and because these venoms are specific towards insects.
Table 1. Venom-derived medicines discussed by Pennington, Czerwinski et al., (2017).
References
Toxic effects of venomous animals
Venomous animals
Neurotoxins
Toxins by organ system affected | Venomics | [
"Chemistry"
] | 3,298 | [
"Neurochemistry",
"Neurotoxins"
] |
68,783,415 | https://en.wikipedia.org/wiki/Finials%20of%20Cologne%20Cathedral | The finials of Cologne Cathedral from the tops of the two towers (north and south towers) at a height of 149 to 157 metres. A copy of this finial in original size, but made of concrete, has stood below the steps in front of the west façade of the cathedral since 1991.
Shape and construction
The finials consist of a central shaft surrounded by two leaf wreaths of different sizes. They date from the last construction phase of Cologne Cathedral around 1880, although the plans still go back to master builder Ernst Friedrich Zwirner († 1861), who based his plans on the original, medieval façade plan F. In this design, the finials were to have a diameter of 5.20 metres.
Zwirner's successor as cathedral architect was Richard Voigtel, who is considered to have completed the cathedral. He was already planning a smaller diameter of initially 5.02 metres, later 4.75 metres, for the lower leaf wreath. The natural limitations of the material to be extracted from the Obernkirchen sandstone quarries finally tipped the scales: the final diameter of the lower leaf wreath is 4.58 metres, and the height around eight metres.
In addition to the size of the stone blocks, transporting them to heights of over 150 metres posed a challenge in the 19th century: Not only were scaffolding and rope hoists too weak, but the steam-powered freight lift could carry a maximum weight of four tonnes. A one-piece lower leaf wreath alone would have weighed over 17 tonnes. This is one of the reasons why the finials, with their approximately 37 cubic metres of stone each, are made up of a total of 24 individual stones.
To stabilise the construction of the spire, a system of brackets and reinforcements was developed, mostly made of copper to counter the danger of corrosion. The leaves of the lower ring, joined together in the middle on a comparatively small surface, project outwards by up to 2.30 metres. They are therefore supported from below by stone brackets, and held in place on top by an octagonal copper band on the shaft and by metal rods.
A wrought-iron rod, 10 centimetres in diameter and 21 metres long and encased in a copper sheath, was passed through the centre of the shaft to stabilise it. This rod hangs down into the tower's spire and is weighted in the manner of a pendulum.
Copper ladders lead from an exit about 17 metres below to the tops of the finials, where there is a lightning rod.
Structure and modification
The finials were made in the winter of 1879/80 in the stonemasons' workshop of the cathedral building lodge; the raising and setting began on 16 July 1880, after the raising scaffolding had been reinforced as a precaution. For example, the hemp rope was replaced by steel cables.
The finial of the north tower was completed and put in place on 23 July 1880, that of the south tower on 14 August 1880 – but without the keystone, which was put in place to celebrate the completion of the cathedral on 15 October 1880.
Shortly after completion, however, protests from the population increased, as the finials appeared too compact and bulky even when seen from that distance. For this reason, it was decided shortly afterward to rework the leaf wreaths by hand.
In the winter of 1880/81, wooden housings were mounted around the finials to create a heated workspace for the workers in the cold. 40 stonemasons worked until 12 February 1881 to make the leaf wreaths more filigree afterward.
Model on the Domplatte
Cathedral master builder Richard Voigtel had originally hoped to produce a third finial as a "monument to the completion of the cathedral". In a sketch and design from 1879, he envisaged a 10.5-metre-high replica of the finials to be erected on the south-eastern corner of the cathedral terrace. However, Voigtel was unable to win support for this idea.
In 1980, the year of the cathedral's jubilee, the sculptor Uspelkat made a plastic model based on construction drawings, which was erected in front of the cathedral on 18 March 1980. Although not entirely true to scale and to the original, it enjoyed great popularity until it was severely damaged by storm Wiebke in 1990.
On 11 October 1991, the Cologne Tourist Office had a newly created model of the finial erected in front of the cathedral. The concrete model of the southern finial on a scale of 1:1 was placed 50 metres in front of the west façade of the cathedral between the street Unter Fettenhennen and the . The faithful sculpture demonstrates the dimension and details of its prototype.
In an effort to replace the model with a durable structure, the choice fell on a concrete casting due to the considerably lower cost compared to natural stone. First, the finial of the south tower was re-measured and photographed from the air. Using a plaster model on a scale of 1:10, the segmentation, reinforcement, formwork and concreting sequences were developed. Onto a 1:1 raw model made of polystyrene foam blocks, the negative mould of silicone rubber was applied, which received a supporting body made of epoxy for casting. The finished structure comprised 13 prefabricated parts made of dark grey, through-coloured reinforced concrete. Except for the massive leaf crowns and the keystone, all parts were designed as hollow bodies with wall thicknesses between 15 and 20 centimetres to save weight.
The finial, which was assembled using a crane, is almost 10 metres high, 5 metres wide and weighs 35 tonnes, less than half the weight of the natural stone model. It is set in a circular flowerbed and bears explanatory panels in 15 languages on its base.
The model of the finial has become a popular meeting point in front of the cathedral and is the starting point for numerous city tours around Cologne Cathedral.
Discussion on the location of the finial replica
In 2012, the "Urban Congress" project commissioned by the City of Cologne, which focused on the conscious handling of art in the public urban space of Cologne, presented a number of recommendations for action, including the removal of the finial replica in front of the cathedral, with the aim of calming down or "clearing out" the area in front of the cathedral and giving the actual art monument at this location, the Taubenbrunnen by Ewald Mataré, a more prominent place and a new visibility.
In December 2014, the city centre district council decided to relocate the replica and commissioned the city administration to look for an alternative location, which, however, had not been found even months later. Alternative locations discussed included the Burgmauer in the western viewing axis of the cathedral, the , or the location of the former concrete mushrooms on the Domplatte; the district council finally decided on the Deutz side of the Rhine near the Hohenzollern Bridge, i.e. the viewing axis of the cathedral on the right bank of the Rhine. In the recommendations for action of the Urban Congress, the location of the Dombauhütte in front of the cathedral choir is considered typologically sensible, but the terrace of the Café Reichard opposite is also considered.
Barbara Schock-Werner had already criticised the object at this location during her tenure as cathedral architect, since in the middle of the cathedral's visual axis it was charged with a significance that it did not have. City dean also supported the decision, as did the then cathedral provost , the architect Allmann Sattler Wappner, commissioned with the redesign of the Domplatte, and from the Romano-Germanic Museum. Overall, the "majority of the interlocutors" in public and private discussions of the "Urban Congress" had considered the current location "unsuitable both for the Dove Fountain and for the perspective on the main portal of the cathedral". After the publication of the plans, opposing voices appeared in letters to the editor and comments in the daily press, as well as in an online petition, which attracted almost 2,900 supporters. In politics, at the district level, the Greens, Die Linke, Deine Freunde and the Pirates were in favour of demolition; in the city council, the SPD was against it and the CDU was in favour of a move to the Burgmauer, although both groups questioned in principle the decision-making power of the district council on this point. A dialogue commissioned by the city council at the end of 2015 between the new Lord Mayor Henriette Reker and District Mayor led to the compromise that the finial would remain in its place for the time being, until the planned renewal of the western cathedral surroundings.
References
Further reading
70. Dombaubericht by Richard Voigtel in the : amtliche Mittheilungen des Central-Dombau-Vereins, Nr. 325, 14 April 1882,
External links
Cologne Cathedral
Ornaments (architecture)
Stone buildings
Roofs | Finials of Cologne Cathedral | [
"Technology",
"Engineering"
] | 1,831 | [
"Structural system",
"Structural engineering",
"Roofs"
] |
68,784,218 | https://en.wikipedia.org/wiki/List%20of%20Apple%20TV%2B%20original%20films | Apple TV+ is a global on-demand Internet streaming media provider, owned and operated by Apple Inc., that features a number of original programs that includes original series, specials, miniseries, documentaries, and films distributed under Apple Original Films. Some films were released in theaters on or before their release on Apple TV+.
Original films
Feature films
Documentaries
Specials
Shorts
Upcoming original films
Feature films
Documentaries
In development
Notes
References
External links
– official site
Apple TV+
Apple TV+ | List of Apple TV+ original films | [
"Technology"
] | 94 | [
"Computing-related lists",
"Apple Inc. lists"
] |
68,785,206 | https://en.wikipedia.org/wiki/Silicide%20carbide | Silicide carbides or carbide silicides are compounds containing anions composed of silicide (Si4−) and carbide (C4−) or clusters therof. They can be considered as mixed anion compounds or intermetallic compounds, as silicon could be considered as a semimetal.
Related compounds include the germanide carbides, phosphide silicides, boride carbides and nitride carbides. Other related compounds may contain more condensed anion combinations, such as the carbidonitridosilicates containing C(SiN3)4 units, in which N bridges between two silicon atoms.
Production
Silicide carbide compounds can be made by heating silicon, graphite, and metal together. It is important to exclude oxygen before and during the reaction. The flux method involves a reaction in a molten metal. Gallium is suitable, because it dissolves carbon and silicon, but does not react with them.
Properties
Silicide carbides are a kind of ceramic, yet they also have metallic properties. They are not as brittle as most ceramics, but are stiffer than metals. They have high melting temperatures.
In air silicide carbide compounds are stable, and are hardly affected by water. The appearance is often metallic grey. When powdered the colour is dark grey.
When ErFe2SiC is dissolved in acid, mostly methane is produced, but the products include some hydrocarbons with two and three carbon atoms.
The lanthanide contraction is evident in the unit cell sizes of the rare-earth-element silicide carbides.
List
References
Carbides
Mixed anion compounds
Silicides | Silicide carbide | [
"Physics",
"Chemistry"
] | 347 | [
"Ions",
"Matter",
"Mixed anion compounds"
] |
60,936,140 | https://en.wikipedia.org/wiki/June%20Lindsey | June Monica Lindsey ( Broomhead, June 7, 1922 – November 4, 2021) was a British-Canadian physical chemist. Whilst working on X-ray crystallography at the University of Cambridge, Lindsey was influential in the elucidation of the structure of DNA. She solved the structures of the purines, adenine and guanine. Her depiction of intramolecular hydrogen bonds in adenine crystals was central to Watson and Crick's elucidation of the double helical structure of DNA.
Education and early career
June Broomhead was born in Doncaster, England in June 1922. She joined the University of Cambridge, UK, in 1941. She completed the requirements for her degree in 1944, and joined the Cavendish Laboratory at the University of Cambridge. World War II forced her to leave her research career, however. She was encouraged to become a teacher and spent two years teaching science in a school. She returned to Cambridge in 1946.
She completed her undergraduate courses in 1944 at Newnham College, but Cambridge did not give women undergraduate degrees prior to 1948. She was awarded her bachelor's degree 50 years after earning it.
She solved the crystal structure of a complex of adenine and guanine. She delineated the shape and dimensions of the two nitrogenous subunits of DNA. She proposed that complementary nucleobases are bound together by hydrogen bonds, work that was expanded by Bill Cochran. Her research, particularly the prediction of hydrogen bonds, was studied and used by Watson and Crick to determine the structure of DNA. They created cardboard models based on the dimensions from Lindsey's crystal structures. Francis Crick worked opposite Lindsey at the University of Cambridge. Watson and Crick did not acknowledge Lindsey's contributions in their discovery of the molecular structure of nucleic acids.
Career
Lindsey was awarded her doctorate (Ph.D.) in 1950, and then moved to the University of Oxford where she worked as a postdoctoral scholar with Dorothy Hodgkin on Vitamin B12. Lindsey moved to Canada in 1951. Before she left, Lawrence Bragg wrote to her requesting that she join him working on experimental and theoretical crystallography. In a letter, he wrote: “We badly need your hands to tackle knotty crystallographic problems, both experimental and theoretical. I wish all these things had come up while you were still with us; they would have been just in your line.”
She worked at the National Research Council Canada on the structure of codeine and morphine. Her husband, George Lindsey, was stationed in Montreal. Lindsey left her career in crystallography to look after her two children. They moved to Italy on a NATO mission in 1961.
Lindsey collected her bachelor's degree in 1998, over 50 years after completing it, when Cambridge granted Dr. Lindsey and 900 other "unofficial" female graduates their degrees earned prior to 1948.
Belated recognition
Alex MacKenzie, a pediatrician at the Children's Hospital of Eastern Ontario in Ottawa, who knew Lindsey as a family friend, asked her about her career. She told him about her 1940s work on crystallography, which inspired him to research her scientific contributions. MacKenzie was amazed by what he found and did not want her work to go unnoticed; it is "something we should shout from the mountaintops". He led the rediscovery of her contributions to the discovery of the double helix structure of DNA.
Personal life
Lindsey died in Ottawa on November 4, 2021, at the age of 99. She was predeceased by her husband, George.
References
1922 births
2021 deaths
Academics of the University of Oxford
Alumni of the University of Cambridge
British emigrants to Canada
Canadian crystallographers
Women biochemists
People from Doncaster
British crystallographers | June Lindsey | [
"Chemistry"
] | 758 | [
"Biochemists",
"Women biochemists"
] |
60,936,591 | https://en.wikipedia.org/wiki/Mohammed%20Nasser%20Al%20Ahbabi | Mohammed Nasser Al Ahbabi is the Director General of United Arab Emirates Space Agency. Before joining the UAE Space Agency, Ahbabi was part of a UAE Armed Forces think tank project, where he worked alongside military and government stakeholders, on concepts and technologies in Smart Defense and Cyber Warfare, amongst others. He has an active role in ITU-R and has served as the head of YAHSAT MilSatCom Project.
Education
In 1998, Mohammed Nasser Al Ahbabi obtained a degree in electronic engineering from the University of California, United States of America. He obtained a master's degree in communications from the University of Southampton, United Kingdom, in 2001, and a Ph.D. in laser and fibre optics from the same university in 2005.
Career
Mohammed Nasser Al Ahbabi initially served as a telecommunications officer for the UAE Armed Forces. Concurrently, he worked as a coordinator for Dubai Internet City. From 2005 to 2012, he was a telecommunications officer at Sharyan Al Doea Network, and a project manager in the military division at Al Yah Satellite Communications. He is a part of the Hope Mars Mission team, which plans to send the Hope Space exploration probe into Mars' orbit by 2020.
Recognition
Mohammed Nasser Al Ahbabi has been ranked 43 in the Top 100 Most Powerful Arabs 2018 list compiled by Gulf Business. He was ranked 13 in Richtopia's list of the world’s 100 most influential figures in the space exploration sector.
References
Living people
Year of birth missing (living people) | Mohammed Nasser Al Ahbabi | [
"Astronomy"
] | 304 | [
"People associated with astronomy"
] |
60,937,698 | https://en.wikipedia.org/wiki/NGC%204150 | NGC 4150 is an elliptical galaxy located approximately 45 million light years away in the constellation Coma Berenices. It was discovered by William Herschel on March 13, 1785.
See also
List of NGC objects (4001–5000)
Gallery
References
External links
Elliptical galaxies
Coma Berenices
4150
038742 | NGC 4150 | [
"Astronomy"
] | 65 | [
"Coma Berenices",
"Constellations"
] |
60,938,323 | https://en.wikipedia.org/wiki/Julieta%20Norma%20Fierro%20Gossman | Julieta Norma Fierro Gossman (born in Mexico City on February 24, 1948), better known as Julieta Fierro, is a Mexican astrophysicist and science communicator. She is a full researcher at the Institute of Astronomy and professor of the Sciences Faculty at the National Autonomous University of Mexico (UNAM). She is part of the Researchers National System in Mexico, holding a level III position. Since 2004 she is a member of the Mexican Academy of Language.
Her research is focused on the study of the interstellar medium, and her latest research involves the study of the Solar System. Nonetheless, she is best known for her science communication work. She holds three honoris causa doctorates, and several laboratories, libraries, planetariums, astronomical societies, and schools bear her name.
Biography
Julieta Fierro was born on February 24, 1948, in Mexico City. She studied physics at the UNAM School of Sciences and obtained her degree in 1974. Afterwards she earned a master's degree in astrophysics at the same institution. She is a researcher at the Institute of Astronomy at UNAM and a full professor at the School of Sciences of the same university.
From March 2000 to January 2004, she was UNAM's General Director of Scientific Outreach. She has been in positions such as vice president and president of the Education Commission of the International Astronomical Union and president of the Mexican Academy of Natural Sciences Teachers and of the Mexican Association of Science and Technology Museums. Furthermore, she belonged to the board of directors of the Astronomical Society of the Pacific, which is focused on communicating science to improve education.
She was elected a member of the Academia Mexicana de la Lengua on July 24, 2003, and took possession of its 25th chair on August 26, 2004, with the lecture entitled Imaginemos un Caracol (Let's Imagine a Snail). She was elected a corresponding member of the Royal Spanish Academy on April 21, 2005.
Science communication
Throughout her career, she has written 40 books, of which 23 are on popular science. She has published dozens of articles in national and international journals. One of her writings was published in Mayan (an indigenous language). With the purpose of communicating science to broader audiences, she has given hundreds of talks and lectures, and designed multiple workshops of science for kids. During 2020 she published a series of scientific activities to perform at home during the lockdown periods due to the COVID-19 pandemic.
She participated in the creation of the astronomy room at Universum, one of the most popular University museums in Latin America. She was the director of Universum and of the Museo Descubre of Aguascalientes. She collaborated on the creation of a science museum in Puerto Rico and the McDonald Observatory in the United States and the Sutherland in South Africa. She collaborates actively with Universum, the Museum of Natural Sciences, the Museo de la Luz (Museum of the Light), the McDonald Observatory in Texas and Puerto Rico, and with the Global Fair in Japan.
She has participated in thousands of radio shows where she reads about science and talks about her passion for it. Sometimes she invites other scientists and interviews them to enrich the conversation. She hosted a television series titled Más allá de las estrellas (Beyond the Stars), which was chosen as the best science show in Mexico in 1998. Her most recent collaboration with Mexican TV was with Canal 11, a channel of the Instituto Politécnico Nacional, on the TV show Sofía Luna, agente espacial (Sofía Luna, Space Agent).
Selected publications
La astronomía de México. Lectorum, 2001, . Reissued in 2005.
Albert Einstein: Un científico de nuestro tiempo. Co-authored with Héctor Domínguez, Lectorum, 2005, .
Lo grandioso de la luz, Gran paseo por la ciencia. Editorial Nuevo México, 2005, .
Lo grandioso del tiempo, Gran paseo por la ciencia. Editorial Nuevo México, 2005, .
Cartas Astrales: Un romance científico del tercer tipo. Co-authored with Adolfo Sánchez Valenzuela, Alfaguara, 2006, .
La luz de las estrellas. Co-authored with Héctor Domínguez. Ediciones La Vasija, 2006, .
Galileo y el telescopio, 400 años de ciencia. Co-authored with Héctor Domínguez. Uribe y Ferrari Editores, 2007, .
Newton, la luz y el movimiento de los cuerpos. Co-authored with Héctor Domínguez. Uribe y Ferrari Editores, 2007, .
From the collection Ciencia para todos (science for everyone) from the Fondo de Cultura Económica in México, her main works are:
La Evolución Química del Sol. Co-authored with Manuel Peimbert Sierra, 2012, .
Nebulosas planetarias: la hermosa muerte de las estrellas. Co-authored with Silvia Torres Castilleja, 2009, .
Fronteras del universo. Book compiled by Manuel Peimbert Sierra, with Silvia Torres Castilleja, Miguel Ángel Herrera, Miriam Peña, Luis Felipe Rodríguez, Dany Page, José Jesús González, Deborah Dultzin, 2000, . She wrote one chapter, about planetary systems.
La familia del Sol, co-authored with Miguel Ángel Herrera, 1989, .
Awards and recognitions
Throughout her career she has been awarded with multiple prizes and her work recognized by different institutions:
Third World Network of Scientific Organizations outreach award, 1992.
Kalinga Prize, UNESCO, 1995.
Primo Rovis Gold Medal, Trieste Center of Astrophysical Theory, 1996.
Klumpke-Roberts Award, Astronomical Society of the Pacific, 1998.
National Award for Science Journalism, 1998.
Latin American Award for the Popularization of Science, Chile, 2001.
Citizen's Medal of Merit from the Mexico City Assembly of Representatives, 2003.
Benito Juárez Medal, 2004.
Flama Recognition, Autonomous University of Nuevo León, 2005
Vasco de Quiroga Medal, 2011.
TWAS-ROLAC Regional Prize, 2017.
Medal for the Scientific Merit, 2021. Engineer Mario Molina.
Elected to the American Academy of Arts and Sciences, 2023. Class I – Mathematical and Physical Sciences. Section 4 – Astronomy, Astrophysics, and Earth Sciences.
Honorary degrees
2006: Awarded by CITEM
2009: Awarded by Coordinadora de Identidades Territoriales Mapuche and Michoacan University of Saint Nicholas of Hidalgo.
2017: Awarded by Universidad Autónoma Benito Juárez de Oaxaca.
References
External links
Julieta Norma Fierro Gossman at the UNAM Institute of Astronomy
1948 births
20th-century Mexican scientists
21st-century Mexican scientists
20th-century Mexican women writers
20th-century Mexican writers
21st-century Mexican women writers
Kalinga Prize recipients
Living people
Members of the Mexican Academy of Language
Mexican astrophysicists
Mexican women physicists
National Autonomous University of Mexico alumni
Academic staff of the National Autonomous University of Mexico
21st-century science writers
Scientists from Mexico City
Women science writers
Writers from Mexico City
20th-century Mexican women scientists
21st-century Mexican women scientists
20th-century women physicists
Fellows of the American Academy of Arts and Sciences
Mexican scientists
Mexican science communicators | Julieta Norma Fierro Gossman | [
"Technology"
] | 1,507 | [
"Women science writers",
"Women in science and technology"
] |
60,938,387 | https://en.wikipedia.org/wiki/Estradiol%20benzoate/estradiol%20phenylpropionate/testosterone%20propionate/testosterone%20phenylpropionate/testosterone%20isocaproate | Estradiol benzoate/estradiol phenylpropionate/testosterone propionate/testosterone phenylpropionate/testosterone isocaproate (EB/EPP/TP/TPP/TiC), sold under the brand names Estandron Prolongatum, Lynandron Prolongatum, and Mixogen, was an injectable combination medication of the estrogens estradiol benzoate (EB) and estradiol phenylpropionate (EPP) and the androgens/anabolic steroids testosterone propionate (TP), testosterone phenylpropionate (TPP), and testosterone isocaproate (TiC) which was used in menopausal hormone therapy for women. It was also used to suppress lactation in postpartum women.
The medication was provided in the form of 1 mL ampoules and 2 mL vials containing 1 mg/mL EB, 4 mg/mL EPP, 20 mg/mL TP, 40 mg/mL TPP, and 40 mg/mL TiC in an oil solution and was administered by intramuscular injection. EB/EPP/TP/TPP/TiC reportedly has a duration of about 14 days.
Estandron Prolongatum, Lynandron Prolongatum, and Mixogen were all introduced for medical use by 1956. Oral tablet products with the same brand names of Estandron, Lynandron, and Mixogen, containing ethinylestradiol and methyltestosterone, were marketed around the same time, and should not be confused with the injectable products. Estandron Prolongatum, Lynandron Prolongatum, and Mixogen remained marketed as late as the 1980s. EB/EPP/TP/TPP/TiC appears to no longer be marketed.
See also
Estradiol benzoate/estradiol phenylpropionate
List of combined sex-hormonal preparations
References
Abandoned drugs
Combined estrogen–androgen formulations | Estradiol benzoate/estradiol phenylpropionate/testosterone propionate/testosterone phenylpropionate/testosterone isocaproate | [
"Chemistry"
] | 445 | [
"Drug safety",
"Abandoned drugs"
] |
60,938,901 | https://en.wikipedia.org/wiki/NGC%203705 | NGC 3705 is a barred spiral galaxy in the constellation Leo. It was discovered by William Herschel on Jan 18, 1784. It is a member of the Leo II Groups, a series of galaxies and galaxy clusters strung out from the right edge of the Virgo Supercluster.
One supernova has been observed in NGC 3705: SN 2022xxf (type Ic, mag. 15.5).
See also
List of NGC objects (3001–4000)
Gallery
References
External links
Barred spiral galaxies
Leo (constellation)
3705
Astronomical objects discovered in 1784
Discoveries by William Herschel
035440 | NGC 3705 | [
"Astronomy"
] | 126 | [
"Leo (constellation)",
"Constellations"
] |
60,939,030 | https://en.wikipedia.org/wiki/Myers%20deoxygenation | In organic chemistry, the Myers deoxygenation reaction is an organic redox reaction that reduces an alcohol into an alkyl position by way of an arenesulfonylhydrazine as a key intermediate. This name reaction is one of four discovered by Andrew Myers that are named after him; this reaction and the Myers allene synthesis reaction involve the same type of intermediate. The other reactions are Myers' asymmetric alkylation and Myers-Saito Cycloaromatization.
R–CH2OH + H2NNHSO2Ar → R–CH2N(SO2Ar)NH2 → R–CH2N=NH → R–CH3 + N2
The reaction is a three-step one-pot process in which the alcohol first undergoes a Mitsunobu reaction with ortho-nitrobenzenesulfonylhydrazine in the presence of triphenylphosphine and diethyl azodicarboxylate. Unlike hydrazone-synthesis reactions, this reaction occurs on the same nitrogen of the hydrazine that has the arenesulfonyl substituent. Upon warming, this product undergoes an elimination of arylsulfinic acid to give an unstable diazene as a reactive intermediate. A radical process then promptly occurs with loss of dinitrogen to give the final alkyl product.
The alkyl-radical intermediate can instead undergo an intramolecular reaction with various other suitably-positioned functional groups within the molecule, such as alkenes or cyclopropanes, leading to alternate products.
If the diazene intermediate is able to undergo a sigmatropic rearrangement, this process occurs in preference to the simple radical reduction to give a hydrocarbon with a transposed π bond. For example, in the Myers allene synthesis, one of the two π bonds of the alkyne of a propargyl alcohol shifts, forming an allene. Likewise, the benzylic alcohol 1-naphthylmethanol rearranges to give a methylene-cyclohexyl product with loss of aromaticity.
References
Organic reduction reactions
Name reactions | Myers deoxygenation | [
"Chemistry"
] | 447 | [
"Name reactions",
"Organic chemistry stubs"
] |
60,939,156 | https://en.wikipedia.org/wiki/Felisa%20N%C3%BA%C3%B1ez%20Cubero | Felisa Núñez Cubero (January 21, 1924 - August 10, 2017) was a Spanish physicist. She was the first female professor at the Polytechnic University of Madrid.
Career
She graduated in Chemical Sciences in 1946 in Valladolid, and began working with Professor Velayos, who greatly influenced her scientific vocation, orienting it towards physics and supervising her doctoral thesis in the area of magnetism. In 1958 she received her doctorate in physics from the UCM with a thesis on permanent magnets, and three years later she obtained a scholarship from the Ramsay Memorial Fellowship Trust to expand her research activity at the University of Nottingham, working on magnetic domains with Professor Bates. Her work is cited in the books Modern Magnetism by Bates and Magnetism by Rado and Shull, whose four volumes constitute an authentic encyclopedia of magnetism.
In her academic life she carried out teaching activities, starting as an assistant and associate professor at the University of Valladolid (1946-1956), later as an assistant professor at the Complutense University of Madrid (1956-1982) and finally at the Polytechnic University of Madrid. There she was professor of physics, first at the University School of Telecommunications Engineering (1964-1983) and later at the University School of Forestry Technical Engineering (1983-2000), the last ten years as professor emerita. In 1990 the Madrid universities UCM and UPM awarded her gold medals.
Selected works
"Electricity and Magnetism Laboratory" Editorial Urmo, 1972
"100 problems Electromagnetism" Alianza Editorial, 1997
Awards and honors
First Prize Teaching of Physics. 1999, awarded by the Royal Spanish Society of Physics
Gold Medal of the Complutense University of Madrid. June 1989
Gold Medal of the Polytechnic University of Madrid. October 1989
References
Spanish women physicists
Spanish physicists
Condensed matter physicists
1924 births
2017 deaths
Technical University of Madrid
Academic staff of the Complutense University of Madrid | Felisa Núñez Cubero | [
"Physics",
"Materials_science"
] | 391 | [
"Condensed matter physicists",
"Condensed matter physics"
] |
60,939,597 | https://en.wikipedia.org/wiki/Japan%20Vulnerability%20Notes | Japan Vulnerability Notes (JVN) is Japan's national vulnerability database. It is maintained by the Japan Computer Emergency Response Team Coordination Center and the Japanese government's Information-Technology Promotion Agency.
References
External links
https://jvn.jp/en/
Security vulnerability databases | Japan Vulnerability Notes | [
"Technology"
] | 58 | [
"Computer security stubs",
"Computing stubs"
] |
60,940,195 | https://en.wikipedia.org/wiki/NGC%207773 | NGC 7773 is a barred spiral galaxy located in the constellation of Pegasus at an approximate distance of 400 million light years. NGC 7773 was discovered on October 9, 1790 by William Herschel.
See also
Galaxy
Gallery
References
External links
NGC 7773 on SIMBAD
7773
Pegasus (constellation)
Barred spiral galaxies | NGC 7773 | [
"Astronomy"
] | 67 | [
"Pegasus (constellation)",
"Constellations"
] |
60,940,623 | https://en.wikipedia.org/wiki/Ro%203-0412 | Ro 3-0412 is an acetylcholinesterase inhibitor. It is the organophosphate analog of neostigmine.
See also
Neostigmine
Ro 3-0419
Ro 3-0422
References
Acetylcholinesterase inhibitors
Organophosphates
Quaternary ammonium compounds
Phenol esters
Methyl esters
Methylsulfates | Ro 3-0412 | [
"Chemistry"
] | 78 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
60,941,204 | https://en.wikipedia.org/wiki/Dimethanospiro%282.2%29octaplane | Dimethanospiro[2.2]octaplane is a hypothetical saturated hydrocarbon that is expected to have a carbon atom in with a stable, unusual square-planar coordination rather than the usual tetrahedral geometry of a carbon atom with four bonds.
Molecular architecture
An octaplane contains a central carbon atom surrounded by four carbon atoms, which are held in place by perpendicular links to two cyclooctane rings above and below. The parent structure octaplane itself is expected to have a very low ionization potential and a square-planar geometry as the monocation; however, calculations on the neutral compound found that the central carbon would distort to a square-pyramidal geometry.
In dimethanospiro[2.2]octaplane, two pairs of the carbons attached to the central one are bonded to each other to make a spiropentane, and there are two methylene linkages between the two cyclooctane rings.
References
Hypothetical chemical compounds
Polycyclic nonaromatic hydrocarbons
Cyclopropanes
Spiro compounds
Cyclohexanes | Dimethanospiro(2.2)octaplane | [
"Chemistry"
] | 233 | [
"Theoretical chemistry stubs",
"Hypotheses in chemistry",
"Organic compounds",
"Theoretical chemistry",
"Hypothetical chemical compounds",
"Spiro compounds"
] |
60,941,624 | https://en.wikipedia.org/wiki/Kelly%20West%20Award | The Kelly M. West Award for Outstanding Achievement in Epidemiology is an honor bestowed by the American Diabetes Association. It has been awarded annually to an individual since 1986. The award is named in honor of Kelly M. West.
Winners
Source:
See also
Banting Medal
List of medicine awards
References
Medicine awards
American science and technology awards
Epidemiology
American Diabetes Association
Awards established in 1986
1986 establishments in the United States | Kelly West Award | [
"Technology",
"Environmental_science"
] | 86 | [
"Science and technology awards",
"Environmental social science",
"Epidemiology",
"Medicine awards"
] |
60,941,951 | https://en.wikipedia.org/wiki/Laborat%C3%B3rio%20Nacional%20de%20Energia%20e%20Geologia | Laboratório Nacional de Energia e Geologia (National Laboratory of Energy and Geology) is a Portuguese public R&D institution in the fields of energy and geology. It was created in 2007 by the merger of the former National Institute of Engineering, Technology and Innovation with several smaller research and regulation bodie.s
References
External links
Research institutes in Portugal
2007 establishments in Portugal
Energy research institutes
Earth science research institutes
Multidisciplinary research institutes
Amadora | Laboratório Nacional de Energia e Geologia | [
"Engineering"
] | 94 | [
"Energy research institutes",
"Energy organizations"
] |
60,942,301 | https://en.wikipedia.org/wiki/NGC%203666 | NGC 3666 is an unbarred spiral galaxy in the constellation Leo. It was discovered by William Herschel on March 15, 1784. It is a member of the Leo II Groups, a series of galaxies and galaxy clusters strung out from the right edge of the Virgo Supercluster.
See also
List of NGC objects (3001-4000)
Gallery
References
External links
Astronomical objects discovered in 1784
Unbarred spiral galaxies
3666
Leo (constellation)
035043 | NGC 3666 | [
"Astronomy"
] | 97 | [
"Leo (constellation)",
"Constellations"
] |
60,944,280 | https://en.wikipedia.org/wiki/Scanning%20helium%20microscopy | The scanning helium microscope (SHeM) is a form of microscopy that uses low-energy (5–100 meV) neutral helium atoms to image the surface of a sample without any damage to the sample caused by the imaging process. Since helium is inert and neutral, it can be used to study delicate and insulating surfaces. Images are formed by rastering a sample underneath an atom beam and monitoring the flux of atoms that are scattered into a detector at each point.
The technique is different from a scanning helium ion microscope, which uses charged helium ions that can cause damage to a surface.
Motivation
Microscopes can be divided into two general classes: those that illuminate the sample with a beam, and those that use a physical scanning probe. Scanning probe microscopies raster a small probe across the surface of a sample and monitor the interaction of the probe with the sample. The resolution of scanning probe microscopies is set by the size of the interaction region between the probe and the sample, which can be sufficiently small to allow atomic resolution. Using a physical tip (e.g. AFM or STM) does have some disadvantages, though, including a relatively small imaging area and difficulty in observing structures with a large height variation over a small lateral distance.
Microscopes that use a beam have a fundamental limit on the minimum resolvable feature size, d, which is given by the Abbe diffraction limit,

$$ d = \frac{\lambda}{2 n \sin\alpha}, $$

where λ is the wavelength of the probing wave, n is the refractive index of the medium the wave is travelling in, and the wave is converging to a spot with a half-angle of α. While it is possible to overcome the diffraction limit on resolution by using a near-field technique, it is usually quite difficult. Since the denominator of the above equation for the Abbe diffraction limit will be approximately two at best, the wavelength of the probe is the main factor in determining the minimum resolvable feature, which is typically about 1 μm for optical microscopy.
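As a rough numerical illustration of the Abbe limit, the short sketch below evaluates the formula for green light; the two numerical-aperture values are arbitrary examples chosen for illustration, not figures taken from the text.

```python
wavelength = 550e-9                 # green light, metres
for n_sin_alpha in (0.25, 1.0):     # modest objective vs. close to the practical maximum
    d = wavelength / (2 * n_sin_alpha)
    print(f"n*sin(alpha) = {n_sin_alpha}: d = {d * 1e9:.0f} nm")
# A modest objective gives a resolution on the order of 1 micrometre, while even the
# best case remains a few hundred nanometres - far above atomic length scales.
```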
To overcome the diffraction limit, a probe with a smaller wavelength is needed, which can be achieved either by using light with a higher energy or by using a matter wave.
X-rays have a much smaller wavelength than visible light, and therefore can achieve superior resolutions when compared to optical techniques. Projection X-ray imaging is conventionally used in medical applications, but high resolution imaging is achieved through scanning transmission X-ray microscopy (STXM). By focussing the X-rays to a small point and rastering across a sample, a very high resolution can be obtained with light. The small wavelength of X-rays comes at the expense of a high photon energy, meaning that X-rays can cause radiation damage. Additionally, X-rays are weakly interacting, so they will primarily interact with the bulk of the sample, making investigations of a surface difficult.
Matter waves have a much shorter wavelength than visible light and therefore can be used to study features below about 1 μm. The advent of electron microscopy opened up a variety of new materials that could be studied due to the enormous improvement in the resolution when compared to optical microscopy.
The de Broglie wavelength, λ, of a matter wave in terms of its kinetic energy, E, and particle mass, m, is given by

$$ \lambda = \frac{h}{\sqrt{2 m E}}, $$

where h is the Planck constant.
Hence, for an electron beam to resolve atomic structure, the wavelength of the matter wave would need to be at most about λ = 1 Å, and therefore the beam energy would need to be E > 100 eV.
Since electrons are charged, they can be manipulated using electromagnetic optics to form extremely small spot sizes on a surface. Because the wavelength of an electron beam can be so small, the Abbe diffraction limit can be pushed below atomic resolution and electromagnetic lenses can be used to form very intense spots on the surface of a material. The optics in a scanning electron microscope usually require the beam energy to be in excess of 1 keV to produce the best-quality electron beam.
The high energy of the electrons leads to the electron beam interacting not only with the surface of a material, but forming a tear-drop interaction volume underneath the surface. While the spot size on the surface can be extremely low, the electrons will travel into the bulk and continue interacting with the sample. Transmission electron microscopy avoids the bulk interaction by only using thin samples, however usually the electron beam interacting with the bulk will limit the resolution of a scanning electron microscope.
The electron beam can also damage the material, destroying the structure that is to be studied, due to the high beam energy. Electron beam damage can occur through a variety of different processes that are specimen-specific. Examples of beam damage include the breaking of bonds in a polymer, which changes the structure, and knock-on damage in metals that creates a vacancy in the lattice, which changes the surface chemistry. Additionally, the electron beam is charged, which means that the surface of the sample needs to be conducting to avoid artefacts of charge accumulation in images. One method to mitigate this issue when imaging insulating surfaces is to use an environmental scanning electron microscope (ESEM).
Therefore, in general, electrons are often not particularly suited to studying delicate surfaces due to the high beam energy and lack of exclusive surface sensitivity. Instead, an alternative beam is required for the study of surfaces at low energy without disturbing the structure.
Given the equation for the de Broglie wavelength above, the same wavelength of a beam can be achieved at lower energies by using a beam of particles that have a higher mass. Thus, if the objective were to study the surface of a material at a resolution that is below that which can be achieved with optical microscopy, it may be appropriate to use atoms as a probe instead. While neutrons can be used as a probe, they are weakly interacting with matter and can only study the bulk structure of a material. Neutron imaging also requires a high flux of neutrons, which usually can only be provided by a nuclear reactor or particle accelerator.
A beam of helium atoms with a wavelength of λ = 1 Å has an energy of 20 meV, which is about the same as the thermal energy. Using particles of a higher mass than that of an electron means that it is possible to obtain a beam with a wavelength suitable to probe length scales down to the atomic level with a much lower energy.
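A short calculation with standard physical constants (a sketch, not taken from the cited literature) is consistent with the figures quoted above: an electron needs well over 100 eV to reach a 1 Å wavelength, while a helium atom reaches the same wavelength at roughly thermal energy.

```python
h = 6.626e-34        # Planck constant, J s
e = 1.602e-19        # joules per electronvolt
m_e = 9.109e-31      # electron mass, kg
m_He = 6.646e-27     # helium-4 atomic mass, kg

def de_broglie_energy(mass, wavelength):
    """Kinetic energy (in eV) giving the requested de Broglie wavelength."""
    return h**2 / (2 * mass * wavelength**2) / e

wavelength = 1e-10   # 1 angstrom
print(f"Electron at 1 A:   {de_broglie_energy(m_e, wavelength):.0f} eV")          # ~150 eV
print(f"Helium atom at 1 A: {de_broglie_energy(m_He, wavelength) * 1e3:.1f} meV")  # ~21 meV
```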
Thermal energy helium atom beams are exclusively surface sensitive, giving helium scattering an advantage over other techniques such as electron and x-ray scattering for surface studies. For the beam energies that are used, the helium atoms will have classical turning points 2–3 Å away from the surface atom cores. The turning point is well above the surface atom cores, meaning that the beam will only interact with the outermost electrons.
History
The first discussion of obtaining an image of a surface using atoms was by King and Bigas, who showed that an image of a surface can be obtained by heating a sample and monitoring the atoms that evaporate from the surface. King and Bigas suggest that it could be possible to form an image by scattering atoms from the surface, though it was some time before this was demonstrated.
The idea of imaging with atoms instead of light was subsequently widely discussed in the literature.
The initial approach to producing a helium microscope assumed that a focussing element is required to produce a high-intensity beam of atoms. An early approach was to develop an atomic mirror, which is appealing since the focussing is independent of the velocity distribution of the incoming atoms. However, producing an appropriate surface that is macroscopically curved and defect-free on an atomic length scale has so far proved too challenging.
Metastable atoms are atoms that have been excited out of the ground state, but remain in an excited state for a significant period of time. Microscopy using metastable atoms has been shown to be possible: the metastable atoms release stored internal energy into the surface, releasing electrons that provide information on the electronic structure. The low kinetic energy of the metastable atoms means that only the surface electronic structure is probed, but the large energy exchange when the metastable atom de-excites will still perturb delicate sample surfaces.
The first two-dimensional neutral helium images were obtained using a conventional Fresnel zone plate by Koch et al. in a transmission setup. Helium will not pass through a solid material, therefore a large change in the measured signal is obtained when a sample is placed between the source and the detector. By maximising the contrast and using transmission mode, it was much easier to verify the feasibility of the technique. However, the setup used by Koch et al. with a zone plate did not produce a high enough signal to observe the reflected signal from the surface at the time. Nevertheless, the focussing obtained with a zone plate offers the potential for improved resolution due to the small beam spot size in the future. Research into neutral helium microscopes that use a Fresnel zone plate is an active area in Holst’s group at the University of Bergen.
Since using a zone plate proved difficult due to the low focussing efficiency, alternative methods for forming a helium beam to produce images with atoms were explored.
Recent efforts have avoided focussing elements and instead are directly collimating a beam with a pinhole. The lack of atom optics means that the beam width will be significantly larger than in an electron microscope. The first published demonstration of a two-dimensional image formed by helium reflecting from the surface was by Witham and Sánchez, who used a pinhole to form the helium beam. A small pinhole is placed very close to a sample and the helium scattered into a large solid angle is fed to a detector. Images are collected by moving the sample around underneath the beam and monitoring how the scattered helium flux changes.
In parallel to the work by Witham and Sánchez, a proof-of-concept machine named the scanning helium microscope (SHeM) was being developed in Cambridge in collaboration with Dastoor's group from the University of Newcastle. The approach that was adopted was to simplify previous attempts that involved an atom mirror by using a pinhole, but to still use a conventional helium source to produce a high-quality beam. Other differences from the Witham and Sánchez design include using a larger sample-to-pinhole distance, so that a larger variety of samples can be used, and using a smaller collection solid angle, so that it may be possible to observe more subtle contrast. These changes also reduced the total flux in the detector, meaning that higher-efficiency detectors are required (which is in itself an active area of research).
Image formation process
The atomic beam is formed through a supersonic expansion, which is a standard technique used in helium atom scattering. The centreline of the gas is selected by a skimmer to form an atom beam with a narrow velocity distribution. The gas is then further collimated by a pinhole to form a narrow beam, which is typically between 1 and 10 μm wide. The use of a focusing element (such as a zone plate) allows beam spot sizes below 1 μm to be achieved, but this currently still comes with low signal intensity.
The gas then scatters from the surface and is collected into a detector. In order to measure the flux of the neutral helium atoms, they must first be ionised. The inertness of helium that makes it a gentle probe means that it is difficult to ionise and therefore reasonably aggressive electron bombardment is typically used to create the ions. A mass spectrometer setup is then used to select only the helium ions for detection.
Once the flux from a specific part of the surface is collected, the sample is moved underneath the beam to generate an image. By obtaining the value of the scattered flux across a grid of positions, the values can then be converted into an image, as sketched below.
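A minimal sketch of this raster-scanning logic is given below; the stage-movement and detector-readout functions are hypothetical placeholders standing in for whatever hardware interface a given instrument provides.

```python
import numpy as np

def acquire_shem_image(move_stage, read_counts, nx, ny, step, dwell_time):
    """Raster the sample under the fixed beam and build a 2D image.

    move_stage(x, y) and read_counts(dwell_time) are hypothetical hardware
    wrappers: the first positions the sample stage, the second returns the
    detected helium count for one pixel.
    """
    image = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            move_stage(i * step, j * step)          # position the sample under the beam
            image[j, i] = read_counts(dwell_time)   # scattered flux for this pixel
    return image
```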
The observed contrast in helium images has typically been dominated by the variation in topography of the sample. Typically, since the wavelength of the atom beam is small, surfaces appear extremely rough to the incoming atom beam. Therefore, the atoms are diffusely scattered and roughly follow Knudsen's law (the atom equivalent of Lambert's cosine law in optics). However, more recently work has begun to see divergence from diffuse scattering due to effects such as diffraction and chemical contrast effects. The exact mechanisms for forming contrast in a helium microscope are an active field of research; most cases involve some complex combination of several contrast mechanisms, making it difficult to disentangle the different contributions.
Combinations of images from multiple perspectives allow stereophotogrammetry to produce partial three-dimensional images, which are especially valuable for biological samples subject to degradation in electron microscopes.
Optimal configurations
The optimal configurations of scanning helium microscopes are geometrical configurations that maximise the intensity of the imaging beam within a given lateral resolution and under certain technological constraints.
When designing a scanning helium microscope, scientists strive to maximise the intensity of the imaging beam while minimising its width. The reason behind this is that the beam's width gives the resolution of the microscope while its intensity is proportional to its signal to noise ratio. Due to their neutrality and high ionisation energy, neutral helium atoms are hard to detect. This makes high-intensity beams a crucial requirement for a viable scanning helium microscope.
In order to generate a high-intensity beam, scanning helium microscopes are designed around a supersonic expansion of the gas into vacuum, which accelerates neutral helium atoms to high velocities. Scanning helium microscopes exist in two different configurations: the pinhole configuration and the zone plate configuration. In the pinhole configuration, a small opening (the pinhole) selects a section of the supersonic expansion far away from its origin, after it has been collimated by a skimmer (essentially, another small pinhole). This section then becomes the imaging beam. In the zone plate configuration, a Fresnel zone plate focuses the atoms coming from a skimmer into a small focal spot.
Each of these configurations has a different optimal design, because each is governed by different optics equations.
Pinhole configuration
For the pinhole configuration, the width of the beam (which we aim to minimise) is largely given by geometrical optics. The size of the beam at the sample plane is set by the straight lines connecting the skimmer edges with the pinhole edges. When the Fresnel number is very small (much less than one), the beam width is also affected by Fraunhofer diffraction.
The beam width is usually expressed as a full width at half maximum combining this geometrical projection of the beam with an Airy diffraction term, with a Heaviside step function of the Fresnel number indicating whether the diffraction term is present. There are variations of this expression depending on exactly what is defined as the "beam width". Due to the small wavelength of the helium beam, the Fraunhofer diffraction term can usually be omitted.
The intensity of the beam (which we aim to maximise) is described by the Sikora and Andersen model. The model expresses the intensity in terms of the total intensity stemming from the supersonic expansion nozzle (taken as a constant in the optimisation problem), the radius of the pinhole, the speed ratio of the beam, the radius of the skimmer, the radius of the supersonic expansion quitting surface (the point in the expansion from which atoms can be considered to travel in straight lines), the distance between the nozzle and the skimmer, and the distance between the skimmer and the pinhole. There are several other versions of this expression, depending on the intensity model, but they all show a quadratic dependency on the pinhole radius (the bigger the pinhole, the more intensity) and an inverse quadratic dependency on the distance between the skimmer and the pinhole (the more the atoms spread out, the less intensity).
By combining the beam-width and intensity expressions, one can obtain, for a given beam width in the geometrical optics regime, the skimmer radius and pinhole radius that correspond to intensity maxima. These optimal values are expressed as functions of the distance between the skimmer and the pinhole, the working distance of the microscope, and a constant that stems from the definition of the beam width. The global maximum of intensity can then be obtained numerically by substituting these values into the intensity expression. In general, smaller skimmer radii coupled with smaller distances between the skimmer and the pinhole are preferred, leading in practice to the design of increasingly small pinhole microscopes.
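The qualitative trade-off described above, namely that larger pinholes and shorter skimmer-to-pinhole distances give more intensity but also a wider beam, can be explored with a brute-force numerical search. The sketch below does not implement the Sikora and Andersen model; it uses a deliberately simplified stand-in in which the intensity scales as (pinhole radius)² / (skimmer-to-pinhole distance)² and the beam width at the sample follows from straight-line geometry through the skimmer and pinhole edges. All parameter values are illustrative assumptions.

```python
import numpy as np

def beam_width(r_skimmer, r_pinhole, a, working_distance):
    """Geometric beam width at the sample: straight lines from the skimmer edges
    through the pinhole edges, evaluated a working_distance beyond the pinhole
    (diffraction neglected). Purely illustrative."""
    half_angle = (r_skimmer + r_pinhole) / a
    return 2.0 * (r_pinhole + half_angle * working_distance)

def relative_intensity(r_pinhole, a):
    """Stand-in for the intensity model: quadratic in the pinhole radius and
    inverse-quadratic in the skimmer-to-pinhole distance, as described above."""
    return r_pinhole**2 / a**2

target_width = 1.0e-6       # desired 1 um beam width at the sample (assumed)
working_distance = 2.0e-3   # 2 mm working distance (assumed)
r_skimmer = 5.0e-6          # 5 um skimmer radius (assumed)

best = None
for a in np.linspace(5e-3, 200e-3, 400):               # skimmer-to-pinhole distance
    for r_pinhole in np.linspace(0.05e-6, 2e-6, 400):
        if beam_width(r_skimmer, r_pinhole, a, working_distance) <= target_width:
            score = relative_intensity(r_pinhole, a)
            if best is None or score > best[0]:
                best = (score, a, r_pinhole)

score, a_opt, r_opt = best
print(f"best relative intensity {score:.3e} at a = {a_opt*1e3:.1f} mm, "
      f"pinhole radius = {r_opt*1e6:.2f} um")
```

Even in this crude form, the search balances the resolution constraint against the loss of flux with distance; rerunning it with a smaller assumed skimmer radius pushes the optimum towards shorter skimmer-to-pinhole distances, in line with the trend towards smaller pinhole microscopes noted above.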
Zone plate configuration
The zone plate microscope uses a zone plate (which acts roughly like a classical lens) instead of a pinhole to focus the atom beam into a small focal spot. This changes the beam-width expression significantly.
The beam width now depends on the zone plate magnification and on the width of the smallest zone, and it includes a chromatic aberration term, since the spread of velocities in the beam translates into a spread of de Broglie wavelengths. The expression simplifies in the regime in which the distance between the zone plate and the skimmer is much larger than the focal length of the zone plate.
The first term in this expression is similar to the geometric contribution in the pinhole case: a bigger zone plate (all other parameters held constant) corresponds to a bigger focal spot size. The third term differs from the pinhole configuration optics, as it includes a quadratic relation with the skimmer size (which is imaged through the zone plate) and a linear relation with the zone plate magnification, which in turn depends on the zone plate radius.
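For orientation, the de Broglie wavelength that sets the diffraction scale of the zone plate can be estimated from the beam velocity with the textbook relations λ = h/(mv) and, for a zone plate of radius r and outermost zone width Δr, a first-order focal length f ≈ 2rΔr/λ. The numerical values below are illustrative assumptions rather than the parameters of any particular instrument.

```python
# Rough numbers for a room-temperature supersonic helium beam and a small
# Fresnel zone plate. All input values below are illustrative assumptions.
h = 6.626e-34        # Planck constant, J s
m_he = 6.646e-27     # mass of a helium-4 atom, kg

v = 1.8e3            # beam velocity in m/s (typical order for a ~300 K expansion)
wavelength = h / (m_he * v)                 # de Broglie wavelength, ~0.05 nm

r_zp = 100e-6        # zone plate radius (assumed)
dr = 50e-9           # width of the smallest, outermost zone (assumed)
focal_length = 2 * r_zp * dr / wavelength   # first-order focal length, f = 2 r dr / lambda

print(f"de Broglie wavelength    = {wavelength * 1e9:.3f} nm")
print(f"zone plate focal length  = {focal_length * 1e3:.0f} mm")
print(f"diffraction-limited spot ~ 1.22 * dr = {1.22 * dr * 1e9:.0f} nm")
```

Because the wavelength is so small, even a zone plate with a very fine outermost zone has a focal length of many centimetres, and the attainable spot size is limited in practice by how narrow the outermost zone can be made, which is why the width of the smallest zone is the quantity fixed to its smallest achievable value in the optimisation below.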
The quantity to maximise, the intensity, is given by the same expression as in the pinhole case, with the pinhole radius replaced by the zone plate radius. Substituting the expression for the zone plate magnification introduces the average de Broglie wavelength of the beam into the intensity expression. Taking the width of the smallest zone as a constant, which should be made equal to the smallest value achievable in fabrication, the maxima of the intensity with respect to the zone plate radius and the skimmer–zone plate distance can be obtained analytically. Setting the derivative of the intensity with respect to the zone plate radius equal to zero reduces the problem to a cubic equation in that radius.
The cubic equation is written in terms of two groupings: a constant giving the relative size of the smallest aperture of the zone plate compared with the average wavelength of the beam, and a modified beam width, used throughout the derivation to avoid explicitly carrying the constant Airy term.
This cubic equation is obtained under a series of geometrical assumptions and has a closed-form analytical solution that can be consulted in the original paper or obtained with modern computer algebra software. Its practical consequence is that zone plate microscopes are optimally designed when the distances between the components are small and the radius of the zone plate is also small. This is in line with the results obtained for the pinhole configuration and, likewise, points in practice towards the design of smaller scanning helium microscopes.
See also
Helium atom scattering
Atom optics
Atomic mirror
Matter wave
References
Microscopes
Nanotechnology
Atomic, molecular, and optical physics | Scanning helium microscopy | [
"Physics",
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 3,932 | [
"Materials science",
"Measuring instruments",
"Microscopes",
" molecular",
"Microscopy",
"Atomic",
"Nanotechnology",
" and optical physics"
] |
60,946,079 | https://en.wikipedia.org/wiki/Marianne%20Walck | Marianne C. Walck is Director of the National Energy Technology Laboratory. She previously served as Vice President of the Sandia National Laboratories, where she led nuclear weapons stewardship, and as the Chief Research Officer at the Idaho National Laboratory.
Early life and education
Walck studied physics and geology at Hope College, which she graduated in 1978. She earned master's and doctorate degrees in geophysics at the California Institute of Technology. For her doctorate (1984), Walck worked on teleseismic array analysis of upper mantle velocity structure with Robert Clayton and Don Anderson. Her subsequent research considered subsurface energy sources and treaty verification.
Research and career
Walck joined Sandia National Laboratories in 1984. After six years as a research scientist, she served as manager of the Geophysics Department. Her group conducted geophysical R&D, including monitoring subsurface processes using microseismic monitoring. In 2003, she was named Senior Manager for Nuclear Energy Safety Technologies, where she was responsible for five research and development groups working on a range of topics, including civilian nuclear power and the transportation of nuclear waste. This involved studies for the Nuclear Regulatory Commission, in which she assessed the vulnerabilities of nuclear power plants to terrorist attacks. Her group's work was used during the Fukushima Daiichi nuclear disaster. Starting in 2011, she also served as associate director of the United States Department of Energy Center for Frontiers of Subsurface Energy Security.
While Vice President for Sandia's California Laboratory (2015–2017), she was responsible for a 1300-person site in Livermore, CA that performed research and development in nuclear security and energy. She also led Sandia's Energy and Climate program, which looked at renewable energy, transportation energy systems and the nuclear fuel cycle. She was associate director of CFSES, the Center for Frontiers of Subsurface Energy Security, a collaboration between the University of Texas at Austin and Sandia National Laboratories. In 2015 she was named a Vice President of Sandia National Laboratories.
Walck retired from Sandia National Laboratories in 2017. She was announced as the deputy director for Science and Technology of the Idaho National Laboratory in 2018, in which capacity she led the laboratory's research, science and technology. She is a Distinguished Expert for the California Council on Science and Technology, and serves on a variety of advisory panels.
Personal life
Walck is married with two children. She is a violinist in her local community orchestra.
References
Hope College alumni
California Institute of Technology alumni
Idaho National Laboratory
Environmental scientists
Sandia National Laboratories people
20th-century American scientists
20th-century American women scientists
21st-century American scientists
21st-century American women scientists
Year of birth missing (living people)
Living people | Marianne Walck | [
"Environmental_science"
] | 542 | [
"American environmental scientists",
"Environmental scientists"
] |
60,946,866 | https://en.wikipedia.org/wiki/Noa-name | A noa-name is a word that replaces a taboo word, generally out of fear that the true name would summon the thing. The term derives from the Polynesian concept of noa, which is the antonym of tapu (from which derives the word taboo) and serves to lift the tapu from a person or object.
A noa-name is sometimes described as a euphemism, though the meaning is more specific; a noa-name is a non-taboo synonym used to avoid bad luck, and replaces a name considered dangerous. The noa-name may be innocuous or flattering, or it may be more accusatory.
Examples
In the Germanic languages, the word for 'bear' was replaced with a noa-name meaning 'brown', the Proto-Germanic *berô, with descendants including Swedish björn, English bear, German Bär and Dutch beer.
In Finnish, there are several noa-names for karhu ('bear'), used instead of calling the animal by its name and inadvertently attracting its attention. The word karhu is itself a noa-name, adopted to avoid the original names, which are now relatively uncommon. (See Finnish mythology.)
In Swedish, the word for 'wolf' was replaced by a noa-name meaning 'stranger'. The spirits of the hearth (corresponding to the Scottish brownie or the Cornish pixie) were known by a name meaning 'dear little relatives'.
In Irish folklore, fairies more commonly called sidhe are referred to as 'the little people' or 'the good people.'
The Icelandic word huldufólk translates to 'the hidden people' and refers to supernatural beings otherwise known as álfar (elves).
In English, the Devil has been referred to by a variety of names (e.g. 'Old Nick', 'Mr. Scratch') to avoid attracting his attention through his name.
In Greek legend, the Erinyes (the Furies, the spirits of revenge) were commonly known as the Eumenides ('the benevolent ones'). Additionally, Hades, god of the underworld, was usually referred to with euphemisms like Ploútōn ('the wealthy one') in order to avoid attracting his attention.
In Jewish culture, it is forbidden to speak the name of God (represented as YHWH) and the noa-name adonai, 'my lord', or HaShem, 'the Name', is used instead.
To avoid the negative connotations of the left side and left-handedness, most Romance languages created noa-names to avoid Latin sinister: see French gauche, Spanish izquierda, Romanian stâng. Greek likewise created aristerós, a derivation from áristos ('best'), to avoid the older word for 'left'.
See also
Apotropaic names are negative words applied to ward off evil.
Avoidance speech, a sociolinguistic phenomenon found in some aboriginal languages
Heiti
Kenning
The love that dare not speak its name
Mokita, a Trobriand term that translates as 'the truth we all know but agree not to talk about'
The evil wizard Lord Voldemort, typically referred to in the Harry Potter series as "He Who Must Not Be Named" or "You-Know-Who"
The name of the William Shakespeare play Macbeth is, by longstanding theatrical custom, not to be mentioned in order to avoid bad luck; reference is instead made, for instance, to "the Scottish play"
References
English-language idioms
Etiquette
Euphemisms
Taboo | Noa-name | [
"Biology"
] | 721 | [
"Etiquette",
"Behavior",
"Human behavior"
] |
60,947,361 | https://en.wikipedia.org/wiki/Robert%20McCann%20%28mathematician%29 | Robert John McCann is a Canadian mathematician, known for his work in transportation theory. He has worked as a professor at the University of Toronto since 1998, and as Canada Research Chair in Mathematics, Economics, and Physics since 2020.
Life and work
McCann was raised in Windsor, Ontario. He studied engineering and physics at Queen's University before graduating with a degree in math, and earned a PhD in mathematics from Princeton University in 1994. McCann was a Tamarkin Assistant Professor at Brown University from 1994, before joining the University of Toronto Department of Mathematics in the fall of 1998. He served as editor-in-chief of the Canadian Journal of Mathematics from 2007 to 2016, and again since 2022. He was an invited speaker at the International Congress of Mathematicians in Seoul in 2014. He was elected a Fellow of the American Mathematical Society in 2012, of the Royal Society of Canada in 2014, of the Fields Institute in 2015 and of the Canadian Mathematical Society in 2020. In 2025 he received the Norbert Wiener Prize in Applied Mathematics.
References
External links
Year of birth missing (living people)
Applied mathematicians
Living people
20th-century Canadian mathematicians
21st-century Canadian mathematicians
Brown University faculty
Fellows of the American Mathematical Society
Mathematical economists
Mathematical physicists
People from Windsor, Ontario
Princeton University alumni
Queen's University at Kingston alumni
Academic staff of the University of Toronto | Robert McCann (mathematician) | [
"Mathematics"
] | 270 | [
"Applied mathematics",
"Applied mathematicians"
] |
67,409,235 | https://en.wikipedia.org/wiki/Polar%20factorization%20theorem | In optimal transport, a branch of mathematics, polar factorization of vector fields is a basic result due to Brenier (1987), with antecedents of Knott-Smith (1984) and Rachev (1985), that generalizes many existing results among which are the polar decomposition of real matrices, and the rearrangement of real-valued functions.
The theorem
Notation. Denote by $s_{\#}\mu$ the image measure of a measure $\mu$ through a measurable map $s$.
Definition: Measure preserving map. Let $(X,\mu)$ and $(Y,\nu)$ be some probability spaces and $s : X \to Y$ a measurable map. Then, $s$ is said to be measure preserving iff $s_{\#}\mu = \nu$, where $s_{\#}\mu$ is the pushforward measure. Spelled out: for every $\nu$-measurable subset $B$ of $Y$, $s^{-1}(B)$ is $\mu$-measurable, and $\mu(s^{-1}(B)) = \nu(B)$. The latter is equivalent to:
$$\int_X f(s(x))\,\mu(dx) = \int_Y f(y)\,\nu(dy),$$
where $f$ is $\nu$-integrable and $f \circ s$ is $\mu$-integrable.
Theorem. Consider a map $u : \Omega \to \mathbb{R}^d$, where $\Omega$ is a convex subset of $\mathbb{R}^d$, and $\mu$ a measure on $\Omega$ which is absolutely continuous. Assume that $u_{\#}\mu$ is absolutely continuous. Then there is a convex function $\psi : \Omega \to \mathbb{R}$ and a map $s : \Omega \to \Omega$ preserving $\mu$ such that
$$u = \nabla\psi \circ s.$$
In addition, $\nabla\psi$ and $s$ are uniquely defined almost everywhere.
Applications and connections
Dimension 1
In dimension 1, and when $\mu$ is the Lebesgue measure over the unit interval, the result specializes to Ryff's theorem. When $\Omega = [0,1]$ and $\mu$ is the uniform distribution over $\Omega$, the polar decomposition boils down to
$$u(t) = F^{-1}(s(t)),$$
where $F$ is the cumulative distribution function of the random variable $u(t)$ when $t$ has a uniform distribution over $[0,1]$. $F$ is assumed to be continuous, and $s = F \circ u$ preserves the Lebesgue measure on $[0,1]$.
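As a simple worked illustration in this one-dimensional setting (the particular choice of $u$ here is arbitrary), take $u(t) = 1 - t$ on $[0,1]$ with the uniform measure. Then
$$F(x) = x \ \text{on } [0,1], \qquad s(t) = F(u(t)) = 1 - t, \qquad u(t) = F^{-1}(s(t)) = \nabla\psi(s(t)) \quad\text{with } \psi(x) = \tfrac{1}{2}x^{2}.$$
Here $s(t) = 1 - t$ preserves the Lebesgue measure on $[0,1]$, while $F^{-1}$ (the identity map in this case) is the nondecreasing rearrangement of $u$, as the theorem requires.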
Polar decomposition of matrices
When $u$ is a linear map and $\mu$ is the Gaussian normal distribution, the result coincides with the polar decomposition of matrices. Assuming $u(x) = Mx$, where $M$ is an invertible matrix, and considering the standard Gaussian probability measure, the polar decomposition boils down to
$$M = SO,$$
where $S$ is a symmetric positive definite matrix and $O$ an orthogonal matrix. The connection with the polar factorization is $\psi(x) = \tfrac{1}{2}\langle Sx, x\rangle$, which is convex, and $s(x) = Ox$, which preserves the measure.
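The matrix case can be checked numerically: for an invertible matrix $M$, the factor $S = (MM^\top)^{1/2}$ is symmetric positive definite and $O = S^{-1}M$ is orthogonal. The short sketch below (with an arbitrary example matrix; the numbers are illustrative only) verifies this.

```python
import numpy as np

def left_polar(M):
    """Left polar decomposition M = S @ O, with S symmetric positive definite
    and O orthogonal, computed from the eigendecomposition of M M^T."""
    w, V = np.linalg.eigh(M @ M.T)        # M M^T is symmetric positive definite
    S = V @ np.diag(np.sqrt(w)) @ V.T     # principal square root of M M^T
    O = np.linalg.solve(S, M)             # O = S^{-1} M
    return S, O

M = np.array([[2.0, 1.0],
              [0.5, 3.0]])
S, O = left_polar(M)

print("max |M - S O|   :", float(np.abs(M - S @ O).max()))          # ~0 (round-off)
print("max |O^T O - I| :", float(np.abs(O.T @ O - np.eye(2)).max()))
print("S symmetric, positive definite:",
      bool(np.allclose(S, S.T)), bool(np.all(np.linalg.eigvalsh(S) > 0)))
```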
Helmholtz decomposition
The results also allow one to recover the Helmholtz decomposition. Letting $u : \Omega \to \mathbb{R}^d$ be a smooth vector field, it can then be written in a unique way as
$$u = w + \nabla p,$$
where $p$ is a smooth real function defined on $\Omega$, unique up to an additive constant, and $w$ is a smooth divergence-free vector field, parallel to the boundary of $\Omega$.
The connection can be seen by assuming that $\mu$ is the Lebesgue measure on a compact set $\Omega$ and by considering the perturbation of the identity map
$$u_\varepsilon(x) = x + \varepsilon\, u(x),$$
where $\varepsilon$ is small. The polar decomposition of $u_\varepsilon$ is given by $u_\varepsilon = \nabla\psi_\varepsilon \circ s_\varepsilon$. Then, for any test function $f$ the following holds:
$$\int_\Omega f(u_\varepsilon(x))\,dx = \int_\Omega f\big(\nabla\psi_\varepsilon(s_\varepsilon(x))\big)\,dx = \int_\Omega f\big(\nabla\psi_\varepsilon(x)\big)\,dx,$$
where the fact that $s_\varepsilon$ preserves the Lebesgue measure was used in the second equality.
In fact, as $\varepsilon \to 0$, one can expand $\nabla\psi_\varepsilon(x) = x + \varepsilon\,\nabla p(x) + o(\varepsilon)$, and therefore $s_\varepsilon(x) = x + \varepsilon\, w(x) + o(\varepsilon)$ with $w = u - \nabla p$. As a result, $\int_\Omega \nabla f(x)\cdot w(x)\,dx = 0$ for any smooth function $f$, which implies that $w$ is divergence-free.
See also
References
Measures (measure theory)
Theorems involving convexity | Polar factorization theorem | [
"Physics",
"Mathematics"
] | 546 | [
"Measures (measure theory)",
"Quantity",
"Physical quantities",
"Size"
] |
67,410,383 | https://en.wikipedia.org/wiki/Taeniolella%20serusiauxii | Taeniolella serusiauxii is a species of lichenicolous fungus in the family Mytilinidiaceae. It was described as a new species in 1992 by Paul Diederich. The type was collected in France, where it was found growing on Dendrographa decolorans. The specific epithet serusiauxii honours the Belgian lichenologist Emmanuël Sérusiaux.
The fungus is an anamorph, with little differentiated mycelium. Its conidia are coarsely cracked and fissured (rhagadiose) to scaly (squamulose), measuring 5–22 by 3.5–6.5 μm, often with long, more or less translucent germ tubes; the conidiophores are 2.5–5 μm wide. The fungus has been recorded from Brazil, British Overseas Territories, France, Papua New Guinea, Tanzania, and the United States. It grows on the lichens Dendrographa decolorans, Tylophoron moderatum, and T. protrudens.
References
Mytilinidiales
Fungi described in 1992
Fungi of Africa
Fungi of Europe
Fungi of South America
Fungi of New Guinea
Fungi of the United States
Lichenicolous fungi
Fungi without expected TNC conservation status
Fungus species | Taeniolella serusiauxii | [
"Biology"
] | 269 | [
"Fungi",
"Fungus species"
] |
67,411,784 | https://en.wikipedia.org/wiki/Computational%20models%20in%20epilepsy | Computational models in epilepsy mainly focus on describing an electrophysiological manifestation associated with epilepsy called seizures. For this purpose, computational neurosciences use differential equations to reproduce the temporal evolution of the signals recorded experimentally. A book published in 2008, Computational Neuroscience in Epilepsy. summarizes different works done up to this time. The goals of using its models are diverse, from prediction to comprehension of underlying mechanisms.
The seizure phenomenon exists, and shares certain dynamical properties, across different scales and different organisms. It is possible to distinguish different approaches: phenomenological models focus on the observed dynamics and are generally reduced to a few dimensions, which facilitates their study from the point of view of the theory of dynamical systems, whereas more mechanistic models explain the biophysical interactions underlying seizures. It is also possible to use these approaches to model and analyse the interactions between different regions of the brain (in this case the notion of a network plays an important role) and the transition to the ictal state. These large-scale approaches have the advantage that they can be related to recordings made in humans with electroencephalography (EEG). This offers new directions for clinical research, particularly as an additional tool in the treatment of refractory epilepsy.
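As a purely illustrative sketch of what a low-dimensional phenomenological model looks like in practice, the snippet below integrates a generic two-variable excitable system (not any specific published seizure model) and shows the transition from a quiescent state to large sustained oscillations as a single control parameter is increased; all parameter values are arbitrary.

```python
import numpy as np

def simulate(drive, t_max=200.0, dt=0.01):
    """Euler integration of a generic FitzHugh-Nagumo-like two-variable system,
    used here only as a toy model of a transition to an oscillatory (ictal-like) state."""
    n = int(t_max / dt)
    v, w = -1.0, -0.5               # fast "activity" and slow "recovery" variables
    trace = np.empty(n)
    for i in range(n):
        dv = v - v**3 / 3.0 - w + drive
        dw = 0.08 * (v + 0.7 - 0.8 * w)
        v += dt * dv
        w += dt * dw
        trace[i] = v
    return trace

for drive in (0.0, 0.5):            # below and above the oscillation threshold
    x = simulate(drive)
    tail = x[len(x) // 2:]          # discard the initial transient
    print(f"drive = {drive}: late-time amplitude = {tail.max() - tail.min():.2f}")
```

In this toy system, a small change in the drive parameter moves the model through a bifurcation from rest to sustained oscillation, which is the kind of qualitative transition that low-dimensional phenomenological seizure models aim to capture.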
Other approaches use models to try to understand the mechanisms underlying seizures through biophysical descriptions starting at the neuronal scale. This makes it possible to study the role of homeostasis and the link between physical quantities (such as the concentration of potassium, for example) and the pathological dynamics observed.
This area of research has evolved rapidly in recent years and continues to show promise for the understanding and treatment of the epilepsies, whether through direct clinical application in the case of refractory epilepsy or through fundamental research that guides experimental work.
References
Computational biology
Epilepsy | Computational models in epilepsy | [
"Biology"
] | 380 | [
"Computational biology"
] |
67,412,446 | https://en.wikipedia.org/wiki/Predictive%20methods%20for%20surgery%20duration | Predictions of surgery duration (SD) are used to schedule planned/elective surgeries so that utilization rate of operating theatres be optimized (maximized subject to policy constraints). An example for a constraint is that a pre-specified tolerance for the percentage of postponed surgeries (due to non-available operating room (OR) or recovery room space) not be exceeded. The tight linkage between SD prediction and surgery scheduling is the reason that most often scientific research related to scheduling methods addresses also SD predictive methods and vice versa. Durations of surgeries are known to have large variability. Therefore, SD predictive methods attempt, on the one hand, to reduce variability (via stratification and covariates, as detailed later), and on the other employ best available methods to produce SD predictions. The more accurate the predictions, the better the scheduling of surgeries (in terms of the required OR utilization optimization).
An SD predictive method would ideally deliver a predicted SD statistical distribution (specifying the distribution and estimating its parameters). Once the SD distribution is completely specified, various desired types of information can be extracted from it, for example, the most probable duration (mode), or the probability that SD does not exceed a certain threshold value. In less ambitious circumstances, the predictive method would at least predict some of the basic properties of the distribution, like location and scale parameters (mean, median, mode, standard deviation or coefficient of variation, CV). Certain desired percentiles of the distribution may also be the objective of estimation and prediction. Expert estimates, empirical histograms of the distribution (based on historical computer records), and data mining and knowledge discovery techniques often replace the ideal objective of fully specifying the theoretical SD distribution.
Reducing SD variability prior to prediction (as alluded to earlier) is commonly regarded as part and parcel of an SD predictive method. Most probably, SD has, in addition to random variation, also a systematic component; namely, the SD distribution may be affected by various related factors (like medical specialty, patient condition or age, professional experience and size of the medical team, the number of surgeries a surgeon has to perform in a shift, or the type of anesthetic administered). Accounting for these factors (via stratification or covariates) would diminish SD variability and enhance the accuracy of the predictive method. Incorporating expert estimates (like those of surgeons) in the predictive model may also help diminish the uncertainty of data-based SD prediction. Often, statistically significant covariates (also referred to as factors, predictors or explanatory variables) are first identified (for example, via simple techniques like linear regression and knowledge discovery), and only later are more advanced big-data techniques employed, like artificial intelligence and machine learning, to produce the final prediction.
Literature reviews of studies addressing surgeries scheduling most often also address related SD predictive methods. Here are some examples (latest first).
The rest of this entry reviews various perspectives associated with the process of producing SD predictions: SD statistical distributions, methods to reduce SD variability (stratification and covariates), predictive models and methods, and surgery as a work-process. The latter addresses the characterization of surgery as a work-process (repetitive, semi-repetitive or memoryless) and its effect on the shape of the SD distribution.
SD Statistical Distributions
Theoretical models
A most straightforward SD predictive method comprises specifying a set of existing statistical distributions and, based on available data and distribution-fitting criteria, selecting the most fitting distribution. There is a large volume of comparative studies that attempt to select the most fitting models for the SD distribution. The distributions most frequently addressed are the normal, the three-parameter lognormal, the gamma (including the exponential) and the Weibull. Less frequent "trial" distributions (for fitting purposes) are the loglogistic model, the Burr, the generalized gamma and the piecewise-constant hazard model. Attempts to present the SD distribution as a mixture distribution have also been reported (normal-normal, lognormal-lognormal and Weibull–Gamma mixtures). Occasionally, predictive methods are developed that are valid for a general SD distribution, or more advanced techniques, like kernel density estimation (KDE), are used instead of the traditional methods (like distribution-fitting or regression-oriented methods). There is broad consensus that the three-parameter lognormal describes most SD distributions best. A new family of SD distributions, which includes the normal, lognormal and exponential as exact special cases, has recently been developed. Here are some examples (latest first).
Using historical records to specify an empirical distribution
As an alternative to specifying a theoretical distribution as a model for SD, one may use historical records to construct a histogram of the available data, and use the related empirical distribution function (the cumulative plot) to estimate various required percentiles (like the median or the third quartile). Historical records or expert estimates may also be used to specify location and scale parameters, without specifying a model for the SD distribution.
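For instance, under the simplifying assumption that past case durations for a given procedure type are available as a plain list of minutes, percentile-based planning values can be read directly from the empirical distribution (the figures below are made-up illustrative data):

```python
import numpy as np

# Hypothetical historical durations, in minutes, for one procedure type.
durations = np.array([55, 62, 70, 71, 75, 78, 80, 84, 90, 97, 105, 120, 150])

median = np.percentile(durations, 50)
q3 = np.percentile(durations, 75)       # a common conservative planning value
p90 = np.percentile(durations, 90)

print(f"median {median:.0f} min, third quartile {q3:.0f} min, 90th percentile {p90:.0f} min")
```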
Data mining methods
These methods have recently gained traction as an alternative to specifying in-advance a theoretical model to describe SD distribution for all types of surgeries. Examples are detailed below ("Predictive models and methods").
Reducing SD variability (stratification and covariates)
To enhance SD prediction accuracy, two major approaches are pursued to reduce SD data variability: Stratification and covariates (incorporated in the predictive model). Covariates are often referred to in the literature also as factors, effects, explanatory variables or predictors.
Stratification
The term means that the available data are divided (stratified) into subgroups, according to a criterion statistically shown to affect the SD distribution. The predictive method then aims to produce SD predictions for specified subgroups, whose SD has appreciably reduced variability. Examples of stratification criteria are medical specialty, procedure code systems, patient-severity condition, or hospital/surgeon/technology (with the resulting models referred to as hospital-specific, surgeon-specific or technology-specific). Examples of implementation are Current Procedural Terminology (CPT) and ICD-9-CM diagnosis and procedure codes (International Classification of Diseases, 9th Revision, Clinical Modification).
Covariates (factors, effects, explanatory variables, predictors)
This approach to reducing variability incorporates covariates into the prediction model. The same predictive method may then be applied more generally, with covariates assuming different values for the different levels of the factors shown to affect the SD distribution (usually by affecting a location parameter, like the mean, and, more rarely, also a scale parameter, like the variance). A most basic method to incorporate covariates into a predictive method is to assume that SD is lognormally distributed. The logged data (taking the log of the SD data) then represent a normally distributed population, allowing the use of multiple linear regression to detect statistically significant factors. Other regression methods, which do not require data normality or are robust to its violation (generalized linear models, nonlinear regression), and artificial intelligence methods have also been used (references sorted chronologically, latest first).
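A minimal sketch of this basic approach, assuming a small table of historical durations with two candidate covariates (all variable names and numbers below are hypothetical), is to regress the log of the duration on the covariates with ordinary least squares and inspect the coefficients:

```python
import numpy as np

# Hypothetical records: duration (minutes), patient age, emergency flag (0/1).
durations = np.array([60.0, 75.0, 50.0, 120.0, 95.0, 65.0, 140.0, 85.0])
age       = np.array([34.0, 55.0, 29.0, 71.0, 63.0, 40.0, 80.0, 52.0])
emergency = np.array([0.0,  0.0,  0.0,  1.0,  1.0,  0.0,  1.0,  0.0])

# Lognormal assumption: log(duration) is modelled as a linear function of covariates.
y = np.log(durations)
X = np.column_stack([np.ones_like(y), age, emergency])       # intercept + covariates

beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
residual_var = np.sum((y - X @ beta) ** 2) / (len(y) - X.shape[1])

print("coefficients (intercept, age, emergency):", np.round(beta, 3))
print("residual variance on the log scale:", round(float(residual_var), 3))

# Predicted *median* duration for a hypothetical 60-year-old emergency case:
x_new = np.array([1.0, 60.0, 1.0])
print("predicted median duration:", round(float(np.exp(x_new @ beta)), 1), "minutes")
```

Exponentiating the fitted linear predictor returns the median (rather than the mean) of the implied lognormal distribution, which is one reason the lognormal assumption is convenient when scheduling at a chosen percentile.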
Predictive models and methods
Following is a representative (non-exhaustive) list of models and methods employed to produce SD predictions (in no particular order). These, or a mixture thereof, may be found in the sample of representative references below:
Linear regression (LR); Multivariate adaptive regression splines (MARS); Random forests (RF); Machine learning; Data mining (rough sets, neural networks); Knowledge discovery in databases (KDD); Data warehouse model (used to extract data from various, possibly non-interacting, databases); Kernel density estimation (KDE); Jackknife; Monte Carlo simulation.
Surgery as work-process (repetitive, semi-repetitive, memoryless)
Surgery is a work-process, and like any work-process it requires inputs to achieve the desired output, a recuperating post-surgery patient. Examples of work-process inputs, from production engineering, are the five M's: "money, manpower, materials, machinery, methods" (where "manpower" refers to the human element in general). Like all work-processes in industry and the services, surgeries have a certain characteristic work-content, which may be unstable to various degrees (within the defined statistical population at which the prediction method aims). This generates a source of SD variability that affects the shape of the SD distribution (from the normal distribution, for purely repetitive processes, to the exponential, for purely memoryless processes). Ignoring this source may confound its variability with that due to covariates (as detailed earlier). Therefore, just as all work-processes may be partitioned into three types (repetitive, semi-repetitive, memoryless), surgeries may be similarly partitioned. A stochastic model that takes account of work-content instability has recently been developed; it delivers a family of distributions with the normal/lognormal and exponential as exact special cases. This model was applied to construct a statistical process control scheme for SD.
References
Prediction
Health care management
Surgery
Hospitals
Health informatics
Health Resources and Services Administration | Predictive methods for surgery duration | [
"Biology"
] | 1,902 | [
"Health informatics",
"Medical technology"
] |
67,414,125 | https://en.wikipedia.org/wiki/List%20of%20heritage%20railways%20and%20funiculars%20in%20Switzerland | This is a list of heritage railways in Switzerland. For convenience, the list includes any pre-World War II railway in the large sense of the term (either adhesion railway, rack railway or funicular) currently operated with at least several original or historical carriages.
Switzerland has a very dense rail network, both standard and narrow gauge. The overwhelming majority of railways, built between the mid-19th and early 20th century, are still in regular operation today and were electrified earlier than in the rest of Europe. The major exception is the partially rack and pinion-operated Furka Steam Railway, the longest unelectrified line in the country. However, numerous rail operators, notably SBB Historic, provide services with well-maintained historical rolling stock.
List
Blonay–Chamby museum railway (adhesion)
Brienz Rothorn Railway (rack)
Dampfbahn-Verein Zürcher Oberland (adhesion)
Etzwilen–Singen railway (adhesion)
(funicular)
Furka Steam Railway (rack and adhesion)
(adhesion)
Giessbachbahn (funicular)
Heimwehfluhbahn (funicular)
International Rhine Regulation Railway (adhesion)
La Traction (adhesion)
Les Avants–Sonloup (funicular)
Montreux–Glion–Rochers-de-Naye railway (rack)
Montreux–Lenk im Simmental line (adhesion)
(adhesion)
Pilatus Railway (rack)
Reichenbachfall Funicular
Rhaetian Railway, notably on the Albula and Bernina lines (adhesion)
Riffelalp tram (adhesion)
Rigi Railways (rack)
Rorschach–Heiden railway (rack)
SBB Historic (adhesion)
(adhesion)
Schynige Platte Railway (rack)
Sonnenberg (funicular)
(funicular)
Vapeur Val-de-Travers (adhesion)
(adhesion)
Zürcher Museums-Bahn (adhesion)
See also
List of railway museums in Switzerland
List of narrow-gauge railways in Switzerland
List of mountain railways in Switzerland
List of funiculars in Switzerland
Lists of tourist attractions in Switzerland
Swiss Museum of Transport ()
References
Switzerland | List of heritage railways and funiculars in Switzerland | [
"Engineering"
] | 456 | [
"Lists of heritage railways",
"Engineering preservation societies"
] |
67,414,446 | https://en.wikipedia.org/wiki/Carbonea%20supersparsa | Carbonea supersparsa is a species of lichenicolous fungus belonging to the family Lecanoraceae. It is widespread in the Northern Hemisphere. In Iceland it has been reported growing on Lecanora cenisia near Egilsstaðir and Lecanora polytropa near Seyðisfjörður.
References
Lecanoraceae
Fungi described in 1865
Fungi of Iceland
Fungi of Europe
Fungi of North America
Taxa named by William Nylander (botanist)
Fungus species | Carbonea supersparsa | [
"Biology"
] | 102 | [
"Fungi",
"Fungus species"
] |
67,414,541 | https://en.wikipedia.org/wiki/Carbonea%20vitellinaria | Carbonea vitellaria is a species of lichenicolous fungus belonging to the family Lecanoraceae. It has a worldwide distribution. In Iceland it has been reported growing on Candelariella vitellina near Egilsstaðir and on King George Island, Antarctica.
References
Lecanoraceae
Fungi described in 1852
Fungi of Iceland
Taxa named by William Nylander (botanist)
Lichenicolous fungi
Fungus species | Carbonea vitellinaria | [
"Biology"
] | 89 | [
"Fungi",
"Fungus species"
] |
67,414,554 | https://en.wikipedia.org/wiki/Live%2C%20Laugh%2C%20Love | "Live, Laugh, Love" is a motivational three-word phrase that became a popular slogan on motivational posters and home decor in the late 2000s and early 2010s. By extension, the saying has also become pejoratively associated with a style of "basic" Generation X decor and with what Vice described as "speaking-to-the-manager shallowness".
The phrase is an abridged form of the 1904 poem "Success" by Bessie Anderson Stanley, which begins: "He has achieved success who has lived well, laughed often, and loved much."
This phrase was subsequently popularized by Ann Landers and a 1990 Dear Abby column, where it was misattributed to Ralph Waldo Emerson.
2010s merchandise
"Live, Laugh, Love" and variants on the phrase have appeared on framed posters, wall decals, ornaments, cushions, mugs, bed linen, jewellery and even on coffins. The Live Love Laugh Foundation, a mental health organization in India founded by Deepika Padukone, takes its name from the phrase.
Vice noted that the trend had largely passed by 2020. Google Trends shows that searches for the phrase peaked between 2009 and 2014 in the United States, falling in popularity since then.
See also
Keep Calm and Carry On, another motivational phrase, originally from World War II in Britain, that became popular around the same time.
References
2010s fads and trends
English phrases
Memes
Motivation
Slogans | Live, Laugh, Love | [
"Biology"
] | 274 | [
"Ethology",
"Behavior",
"Motivation",
"Human behavior"
] |
67,414,711 | https://en.wikipedia.org/wiki/Muellerella%20lichenicola | Muellerella lichenicola is a species of lichenicolous fungus in the family Verrucariaceae. It was first formally described as a new species in 1826 by Søren Christian Sommerfelt, as Sphaeria lichenicola. David Leslie Hawksworth transferred it to the genus Muellerella in 1979.
It has been reported growing on Caloplaca aurantia, Caloplaca saxicola and Physcia aipolia in Sicily, and on an unidentified crustose lichen in Iceland. In Mongolia, it has been reported growing on the thallus of a Biatora lichen in the Bulgan district and on Aspicilia in the Altai district. In Victoria Land, Antarctica, it has been reported from multiple hosts, including members of the Teloschistaceae and Physciaceae.
References
Verrucariales
Fungi described in 1826
Fungi of Iceland
Fungi of Asia
Lichenicolous fungi
Fungi of Europe
Fungus species | Muellerella lichenicola | [
"Biology"
] | 207 | [
"Fungi",
"Fungus species"
] |