id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
64,298,954 | https://en.wikipedia.org/wiki/RNP%20world | The RNP world is a hypothesized intermediate period in the origin of life characterized by the existence of ribonucleoproteins. The period followed the hypothesized RNA world and ended with the formation of DNA and contemporary proteins. In the RNP world, RNA molecules began to synthesize peptides. These would eventually become proteins which have since assumed most of the diverse functions RNA performed previously. This transition paved the way for DNA to replace RNA as the primary store of genetic information, leading to life as we know it.
Principle of concept
Thomas Cech, in 2009, proposed the existence of the RNP world after his observation of apparent differences in the composition of catalysts in the two most fundamental processes that maintain and express genetic systems. For DNA, the maintenance, replication, and transcription processes are accomplished purely by protein polymerases. However, the mRNA processes of gene expression via splicing and protein synthesis are catalyzed by RNP complexes (the spliceosome and ribosome).
This difference between protein and ribonucleoprotein catalysts can be explained by extending the RNA world theory. The older RNA molecules were originally self-catalyzing ribozymes, which later recruited proteins to form RNPs; the newer DNA molecule, by contrast, used the more efficient protein-only machinery from the start. Thus, our current DNA world could have resulted from the gradual replacement of RNA catalytic machinery with proteins. In this view, ribonucleoproteins and nucleotide-based cofactors are remnants of an intermediary era, the RNP world.
See also
First universal common ancestor
References
Ribonucleoproteins
RNA
DNA
Peptides
Proteins
Origin of life | RNP world | [
"Chemistry",
"Biology"
] | 363 | [
"Biomolecules by chemical classification",
"Origin of life",
"Molecular biology",
"Biological hypotheses",
"Proteins",
"Peptides"
] |
64,305,899 | https://en.wikipedia.org/wiki/Vortex%20filter | A vortex filter is a filter used in rainwater harvesting systems to separate medium to large sized debris from the flow of water before the water flows into a tank, cistern or reservoir. It works by directing the flow around the inside of the wall of the filter housing: any material with a density greater than water is pushed to the outside, allowing cleaner water to flow through a central fine mesh basket into the supply pipe.
All debris washes out of a large diameter drain pipe in the base of the filter body.
They are best suited for commercial and residential rainwater harvesting; however, they can be used for other process and wastewater filtering applications.
Before entering the tank for storage, rainwater should be both filtered and aerated. Filtration removes large particulate matter, which frequently both carries and feeds bacteria. Removal of this particulate matter, along with oxygenation of the water, greatly reduces the number of harmful bacteria in the tank.
WISY pre-tank filters accomplish both of these tasks, protecting the water quality in the tank. WISY Filters are also self-cleaning and require minimal annual maintenance.
References
Water filters | Vortex filter | [
"Chemistry"
] | 226 | [
"Water treatment",
"Water filters",
"Filters"
] |
64,307,867 | https://en.wikipedia.org/wiki/ARM%20Cortex-X1 | The ARM Cortex-X1 is a central processing unit implementing the ARMv8.2-A 64-bit instruction set designed by ARM Holdings' Austin design centre as part of ARM's Cortex-X Custom (CXC) program.
Design
The Cortex-X1 design is based on the ARM Cortex-A78, but redesigned purely for performance rather than for a balance of performance, power, and area (PPA).
The Cortex-X1 is a 5-wide decode out-of-order superscalar design with a 3K-entry macro-OP (MOP) cache. It can fetch five instructions and eight MOPs per cycle, and rename and dispatch eight MOPs and 16 μOPs per cycle. The out-of-order window size has been increased to 224 entries. The backend has 15 execution ports, with a pipeline depth of 13 stages and execution latencies of 10 stages. It also features four 128-bit (4x128b) SIMD units.
ARM claims the Cortex-X1 offers 30% faster integer and 100% faster machine learning performance than the ARM Cortex-A77.
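As a quick sanity check, this claim can be combined with the roughly 20% improvement over the Cortex-A78 quoted in the comparison list below, assuming the percentages are multiplicative factors over a common baseline (an illustrative calculation, not an ARM figure):

```python
# Illustrative arithmetic only: compose the claimed relative speedups,
# assuming the percentages are multiplicative factors over a common baseline.
x1_vs_a77 = 1.30   # "30% faster integer" than Cortex-A77 (ARM's claim above)
x1_vs_a78 = 1.20   # "around 20%" improvement over Cortex-A78 (list below)

implied_a78_vs_a77 = x1_vs_a77 / x1_vs_a78
print(f"Implied A78 vs A77: {implied_a78_vs_a77:.3f}x "
      f"(~{(implied_a78_vs_a77 - 1) * 100:.0f}%)")
# -> about 1.083x, i.e. ~8%, consistent with the A78 targeting PPA balance
# rather than peak performance.
```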
The Cortex-X1 supports ARM's DynamIQ technology and is expected to be used as the high-performance core in combination with ARM Cortex-A78 "mid" cores and ARM Cortex-A55 "little" cores.
Architecture changes in comparison with ARM Cortex-A78
Around 20% performance improvement (+30% from A77)
30% faster integer
100% faster machine learning performance
Out-of-order window size has been increased to 224 entries (from 160 entries)
Up to 4x128b SIMD units (from 2x128b)
15% more silicon area
5-way decode (from 4-way)
8 MOPs/cycle decoded cache bandwidth (from 6 MOPs/cycle)
64 KB L1D + 64 KB L1I (from 32/64 KB L1)
Up to 1 MB/core L2 cache (from 512 KB/core max)
Up to 8 MB L3 cache (from 4 MB max)
Licensing
The Cortex-X1 is available as a SIP core to partners of ARM's Cortex-X Custom (CXC) program, and its design makes it suitable for integration with other SIP cores (e.g. GPU, display controller, DSP, image processor, etc.) into one die constituting a system on a chip (SoC).
Usage
Samsung Exynos 2100
Qualcomm Snapdragon 888(+)
Google Tensor
See also
ARM Cortex-A78, related high performance microarchitecture
ARM Neoverse V1 (Zeus), server sister core to the Cortex-X1
Comparison of ARMv8-A cores, ARMv8 family
References
ARM processors | ARM Cortex-X1 | [
"Technology",
"Engineering"
] | 576 | [
"Computing stubs",
"Computer engineering stubs",
"Computer engineering"
] |
64,308,146 | https://en.wikipedia.org/wiki/Erd%C5%91s%20sumset%20conjecture | In additive combinatorics, the Erdős sumset conjecture is a conjecture which states that if a subset A of the natural numbers has positive upper density, then there are two infinite subsets B and C of the natural numbers such that A contains the sumset B + C. It was posed by Paul Erdős, and was proven in 2019 in a paper by Joel Moreira, Florian Richter and Donald Robertson.
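In standard notation (the symbols here are conventional, supplied for clarity rather than taken from this article, with the upper Banach density written as d*), the statement reads:

```latex
% Erdős sumset conjecture, proved by Moreira, Richter and Robertson (2019):
% a set of positive upper (Banach) density contains a sumset B + C
% with B and C both infinite.
\[
  d^{*}(A) > 0
  \;\Longrightarrow\;
  \exists\, B, C \subseteq \mathbb{N} \text{ infinite such that }
  B + C = \{\, b + c : b \in B,\ c \in C \,\} \subseteq A .
\]
```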
See also
List of conjectures by Paul Erdős
Notes
Conjectures that have been proved
Paul Erdős
Combinatorics | Erdős sumset conjecture | [
"Mathematics"
] | 98 | [
"Mathematical theorems",
"Discrete mathematics",
"Combinatorics",
"Conjectures that have been proved",
"Combinatorics stubs",
"Mathematical problems"
] |
41,482,032 | https://en.wikipedia.org/wiki/Wind-wave%20dissipation | Wind-wave dissipation or "swell dissipation" is the process by which a wave generated via a weather system loses the mechanical energy transferred from the atmosphere via wind. Wind waves, as their name suggests, are generated by wind transferring energy from the atmosphere to the ocean's surface; capillary gravity waves play an essential role in this effect. "Wind waves" or "swell" are also known as surface gravity waves.
General physics and theory
The process of wind-wave dissipation can be explained by applying energy spectrum theory, in a manner similar to that used for the formation of wind waves (generally assuming that spectral dissipation is a function of the wave spectrum). However, although recent innovative improvements in field observation (such as those of Banner and Babanin et al.) have helped unravel wave-breaking behaviour, there is still no clear, exact theory of the wind-wave dissipation process because of its non-linear behaviour.
Based on past and present observations and the theories derived from them, the physics of ocean-wave dissipation can be categorized by the water depth of the region a wave passes through. In deep water, wave dissipation occurs through friction or drag forces, such as opposing winds or viscous forces generated by turbulent flows (usually nonlinear forces). In shallow water, wave dissipation mostly takes the form of shore wave breaking (see Types of wave breaking).
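The two regimes are separated by the depth dependence of the linear dispersion relation. The following minimal sketch uses the standard textbook formula; it is illustrative and not a model taken from this article:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def phase_speed(depth_m: float, wavelength_m: float) -> float:
    """Linear-theory phase speed c = sqrt((g/k) * tanh(k*h)), with k = 2*pi/L.
    tanh(k*h) tends to 1 in deep water and to k*h in shallow water, which is
    the depth dependence that separates the two dissipation regimes."""
    k = 2 * math.pi / wavelength_m
    return math.sqrt((G / k) * math.tanh(k * depth_m))

# A 100 m swell slows markedly as the water shallows, steepening until it breaks:
for depth in (1000.0, 50.0, 10.0, 2.0):
    print(f"depth {depth:6.0f} m -> c = {phase_speed(depth, 100.0):4.1f} m/s")
```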
Some simple general descriptions of wind-wave dissipation (defined by Luigi Cavaleri et al.) consider only ocean surface waves such as wind waves. For simplicity, many proposed mechanisms ignore the interactions of waves with the vertical structure of the upper layers of the ocean.
Sources of wind-wave dissipation
In general, the physics of wave dissipation can be categorized by its dissipation sources: 1) wave breaking, 2) wave–turbulence interaction, and 3) wave–wave modulation. (The descriptions below also follow the reference.)
1) dissipation by "wave breaking"
Wind-wave breaking in coastal areas is a major source of wind-wave dissipation. Wind waves lose their energy to the shore, or sometimes back to the ocean, when they break at the shore (see "Ocean-surface wave breaking" below).
2) dissipation by "wave–turbulence interaction"
Both turbulent wind flows and viscous eddies inside waves can contribute to wave dissipation. In early understanding, viscosity was thought to barely affect wind waves, so the dissipation of swell by viscosity was likewise barely considered. However, recent weather forecasting models have begun to include wave–turbulence interaction in wave modelling. How much turbulence-induced dissipation changes the overall wave profile is still debated, but the ideas of wave–turbulence interaction for surface viscous layers and wave bottom boundary layers are now generally accepted.
3) dissipation by "wave-wave modulation"
Wave–wave interactions can also contribute to wave dissipation. The idea that the breaking of short waves can take energy from long waves through modulation was proposed early on by Phillips (1963) and Longuet-Higgins (1969). These ideas were challenged by Hasselmann's work (1971), whose results indicated that dissipation by interactions between wave modulations should be much weaker than Phillips' theory predicted; in the current understanding, however, dissipation of this kind is typically a little stronger than dissipation by wave–turbulence interaction when reasonable modulation transfer functions are implemented. Most swell dissipation is due to this dissipation type.
Ocean-surface wave breaking
When wind waves approach the coast from deep water, their heights and lengths change. The wave height increases and the wavelength shortens as the wave slows on approaching the shore. If the water depth is sufficiently shallow, the wave crest becomes steeper and the trough broader and shallower; finally, the ocean waves break at the shore. The motion of wave breaking varies with the steepness of the shore and of the waves, and can be categorized into the three types below (a classification sketch follows the list).
• Spilling breaker
With a low shore slope, the waves lose energy slowly as they approach the shore, spilling sea water down their front faces as they break.
• Plunging breaker
With a moderately steep shore slope, the wave loses energy quickly. If the slope is steep enough, the crest of the wave moves faster than the trough; the crest curls over the front of the wave and then plunges sea water into the trough. (Plunging breakers are good for surfing.)
• Surging breaker
With a highly steep shore slope (extreme steepness, such as a seawall), the waves cannot reach the critical steepness needed to break. The waves climb the shore slope and release their energy backward from the shore, without white-cap breaking; in the extreme case of a seawall, however, the waves break with white foam.
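The three breaker types above are commonly distinguished by the Iribarren (surf-similarity) number. The sketch below uses the widely cited Battjes (1974) thresholds; it is an illustration, not a criterion stated in this article:

```python
import math

def iribarren(beach_slope_rad: float, wave_height_m: float,
              deep_wavelength_m: float) -> float:
    """Surf-similarity (Iribarren) number: xi0 = tan(beta) / sqrt(H / L0)."""
    return math.tan(beach_slope_rad) / math.sqrt(wave_height_m / deep_wavelength_m)

def breaker_type(xi0: float) -> str:
    """Battjes (1974) thresholds on the deep-water Iribarren number."""
    if xi0 < 0.5:
        return "spilling"  # gentle slope: energy released gradually
    if xi0 < 3.3:
        return "plunging"  # moderate slope: crest curls over and plunges
    return "surging"       # very steep slope: wave surges up the shore face

# A 1.5 m high, 100 m long wave on three different beach slopes:
for slope_deg in (1, 6, 30):
    xi = iribarren(math.radians(slope_deg), 1.5, 100.0)
    print(f"slope {slope_deg:2d} deg -> xi0 = {xi:4.2f} -> {breaker_type(xi)}")
```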
See also
Dispersion (water waves)
External links
Breaking and dissipation of ocean surface waves – Alexander V. Babanin
References
Coastal geography
Physical oceanography
Water waves
Oceanographical terminology | Wind-wave dissipation | [
"Physics",
"Chemistry"
] | 1,149 | [
"Physical phenomena",
"Applied and interdisciplinary physics",
"Water waves",
"Waves",
"Physical oceanography",
"Fluid dynamics"
] |
41,482,967 | https://en.wikipedia.org/wiki/Bourne%20%28stream%29 | A bourne is an intermittent stream flowing from a spring. Bournes are frequent in chalk and limestone country, where the rock becomes saturated with winter rain that slowly drains away; when the rock becomes dry, the stream ceases. The word is from the Anglo-Saxon language of England.
The word can be found in northern England in placenames such as Redbourne and Legbourne, but is commonly in use in southern England (particularly Dorset) as a name for a small river, particularly in compound names such as winterbourne. A winterbourne is a stream or river that is dry through the summer months.
Bourne is used as a place name or as a part of a place name, usually in chalk downland countryside. Alternative forms are bourn or borne or born. The apparent variant, borne found in the placename: Camborne, arises from the Cornish language and is in fact a false friend: it refers to a hill (Cornish: bronn, from Common Brythonic *brunda; compare Welsh bryn). Born/borne in German also means fount, or spring, and is related to the Indo-European root, *bhreu. That born/borne appears throughout Europe as a placename is also an important clue that this spelling is an etymological precursor to the Middle English bourne/burn.
Cf. Burn (landform), in common use in Scotland and North East England especially, but also found (in placenames) elsewhere in England, such as Blackburn, Gisburn, Woburn, Kilburn, Winkburn, and so forth.
For rivers and places named Bourne or having this word as part of the name, see Bourne (disambiguation).
References
Water streams
Fluvial landforms
Geomorphology
Hydrology
Rivers
Bodies of water | Bourne (stream) | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 367 | [
"Hydrology",
"Environmental engineering"
] |
41,489,324 | https://en.wikipedia.org/wiki/Intelligent%20maintenance%20system | An intelligent maintenance system (IMS) is a system that uses collected data from machinery in order to predict and prevent potential failures in them. The occurrence of failures in machinery can be costly and even catastrophic. In order to avoid failures, there needs to be a system which analyzes the behavior of the machine and provides alarms and instructions for preventive maintenance. Analyzing the behavior of the machines has become possible by means of advanced sensors, data collection systems, data storage/transfer capabilities and data analysis tools. These are the same set of tools developed for prognostics. The aggregation of data collection, storage, transformation, analysis and decision making for smart maintenance is called an intelligent maintenance system (IMS).
Definition
An intelligent maintenance system is a system that uses data analysis and decision support tools to predict and prevent the potential failure of machines. The recent advancement in information technology, computers, and electronics have facilitated the design and implementation of such systems.
The key research elements of intelligent maintenance systems consist of:
Transformation of data to information to knowledge and synchronization of the decisions with remote systems
Intelligent, embedded prognostic algorithms for assessing degradation and predicting future performance (see the sketch after this list)
Software and hardware platforms to run online models
Embedded product services and life cycle information for closed-loop product designs
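As a minimal illustration of the prognostic element above, the sketch below fits a linear trend to a machine-health indicator and extrapolates when it will cross a failure threshold. It is a toy remaining-useful-life estimate; the function name, data, and threshold are illustrative assumptions, not part of any IMS standard:

```python
import numpy as np

def remaining_useful_life(times, health, failure_threshold):
    """Fit a linear degradation trend to a health indicator and estimate
    the time remaining until it crosses the failure threshold."""
    slope, intercept = np.polyfit(times, health, 1)
    if slope >= 0:
        return float("inf")  # no measurable degradation trend yet
    t_fail = (failure_threshold - intercept) / slope
    return max(t_fail - times[-1], 0.0)

# Toy hourly health indicator derived from sensor data (1.0 = healthy):
t = np.arange(10.0)
h = 1.0 - 0.02 * t  # steady degradation of 2% per hour, noise omitted
print(f"Estimated remaining useful life: {remaining_useful_life(t, h, 0.6):.0f} h")
```

In a real IMS the linear fit would be replaced by a degradation model matched to the asset, but the decision logic (trend, threshold, time-to-failure) is the same.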
E-manufacturing and e-maintenance
With evolving applications of tether-free communication technologies (e.g. the Internet), e-intelligence is having a larger impact on industries. Such impact has become a driving force for companies to shift manufacturing operations from traditional factory integration practices towards an e-factory and e-supply chain philosophy, transforming companies from local factory automation to global business automation. The goal of e-manufacturing is to use plant floor assets to predict deviations in product quality and the possible loss of any equipment. This brings about the predictive maintenance capability of the machines.
The major functions and objectives of e-manufacturing are: “(a) provide a transparent, seamless and automated information exchange process to enable an only handle information once (OHIO) environment; (b) improve the use of plant floor assets using a holistic approach combining the tools of predictive maintenance techniques; (c) links entire supply chain management (SCM) operation and asset optimization; and (d) deliver customer services using the latest predictive intelligence methods and tether-free technologies”.
The e-Maintenance infrastructure consists of several information sectors:
Control systems and production schedulers
Engineering product data management systems
Enterprise resource planning (ERP) systems
Condition monitoring systems
Maintenance scheduling (CMMS/EAM) systems
Plant asset management (PAM) systems
See also
Big Data
Cyber manufacturing
Cyber-physical system
Decision support systems
Industrial artificial intelligence
Industrial Big Data
Industry 4.0
Internet of Things
Intelligent transformation
Machine to machine
Maintenance, repair, and operations
Predictive maintenance
Preventive maintenance
Prognostics
Smart, connected products
References
Further reading
M. J. Ashby et al., “Intelligent maintenance advisor for turbine engines”, The Journal of the Operational Research Society, vol. 46, No. 7 (July 1995), 831–853.
A. K. S. Jardine et al., “A review on machinery diagnostics and prognostics implementing condition-based maintenance”, Mechanical Systems and Signal Processing 20 (2006) 1483–1510.
R. C. M. Yam et al., “Intelligent Predictive Decision Support System forCondition-Based Maintenance”, Int J Adv Manuf Technol (2001) 17:383–391
A. Muller et al., “On the concept of e-maintenance: Review and current research”, Reliability Engineering and System Safety 93 (2008) 1165–1187
A. Bos et al., “SCOPE: An Intelligent Maintenance System for Supporting Crew Operations”, AUTOTESTCON 2004. Proceedings. IEEE, 2004.
Maintenance
Prediction
Survival analysis | Intelligent maintenance system | [
"Engineering"
] | 793 | [
"Maintenance",
"Mechanical engineering"
] |
41,491,685 | https://en.wikipedia.org/wiki/Prosoplasia | Prosoplasia (from Greek πρόσω prósō, "forward" + πλάσις plásis, "formation") is the differentiation of cells either to a higher function or to a higher level of organization.
Assuming an increasing cellular peculiarity from a presupposed stem-cell fate, prosoplasia is therefore a forward differentiation, unlike anaplasia (a backward differentiation).
Examples of prosoplasia include the forward differentiation of cells in the mucosa in Warthin's tumor.
References
Oncology
Induced stem cells | Prosoplasia | [
"Biology"
] | 115 | [
"Induced stem cells",
"Stem cell research"
] |
41,492,109 | https://en.wikipedia.org/wiki/C21H26N2O | The molecular formula C21H26N2O (molar mass: 322.452 g/mol, exact mass: 322.2045 u) may refer to:
Acetylfentanyl
Benzylfentanyl (R-4129)
Molecular formulas | C21H26N2O | [
"Physics",
"Chemistry"
] | 73 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
60,797,380 | https://en.wikipedia.org/wiki/Beth%20Kane | Elizabeth Kane, also known as Alice and Red Alice, is a fictional character created by Greg Rucka and J. H. Williams III. Beginning as a supervillain, she first appeared in August 2009 in the comic book Detective Comics, published by DC Comics. Her relationship with her twin sister Kate Kane defines much of Batwoman's emotional life. During The New 52, it is established that Kate and Beth are cousins of Bruce Wayne, the alter-ego of the superhero Batman, through his mother Martha Wayne (née Kane).
Alice appears in the Arrowverse TV series Batwoman as part of the main cast, portrayed by Rachel Skarsten.
Fictional character biography
Alice's origin is told in flashback. Elizabeth "Beth" Kane is the identical twin sister of Katherine "Kate" Kane, and was older than Kate by two minutes. She is the daughter of Jacob Kane and his wife Gabrielle Kane, both career soldiers in the U.S. Army. The Kanes are Jewish, and Jacob Kane inherited vast wealth along with his other siblings. Bette Kane (the superheroine known as Flamebird, and later Hawkfire) is a cousin, and Bruce Wayne's mother Martha Kane Wayne was Jacob's sister.
Jacob Kane is promoted to colonel and assigned to NATO headquarters in Brussels, Belgium. When the twins turn 12 years old, their mother takes them to a restaurant for a birthday dessert. A terrorist group (later revealed to be the organization known as the "Many Arms of Death") kidnaps the family, and Col. Kane leads a rescue mission to save them. During the battle, Gabrielle is murdered by the terrorists, who also kidnap and murder another young girl. Kate, seeing the body of a young girl under a blanket, is left with the impression that her sister died. Col. Kane, however, knows that the terrorists have Beth. Despite looking for years, Col. Kane never finds Beth, and he never tells Kate that Beth might still be alive. The Many Arms of Death needed twins to rule their organization, but since Kate Kane was rescued, Beth was no longer useful to them. Beth's fragile psyche led the Many Arms of Death to send her to the United States, where she was raised by the Religion of Crime.
Fifteen years later, Kate Kane becomes Batwoman.
First appearance
Alice makes her first appearance in 2009 in Detective Comics #854. With the death of Bruno Mannheim, the supervillain group known as the Religion of Crime is leaderless. The thirteen covens that make up the Religion of Crime elect Alice to lead the group, giving her the title "High Madame". Beth is shown to be insane: she dresses in clothes and makeup to resemble the character Alice from Lewis Carroll's novels and speaks only in quotations from them. She kills a number of members of her own group when they fail her or question her abilities.
Alice kidnaps Col. Kane, who immediately recognizes his now-grown daughter, and uses him to gain access to a military base near Gotham City. She seizes chemical weapons from the base and intends to kill everyone in the city by dispersing them from an aircraft. During her final battle with Alice, Batwoman pushes her from the aircraft and Alice falls into Gotham Bay. Batwoman believes Alice to be dead.
The Gotham Police, however, never recover a body, and Alice's final words implied that Col. Kane was her father. Using Alice's blood, spattered on her Batwoman costume, Kate Kane performs DNA testing and discovers that Alice is her sister, Beth. The knowledge that her father hid Beth's possible survival from her leads to a long rift in Kate and Jacob's relationship.
Reappearance
Alice reappears alive inside a sarcophagus in Batwoman (vol. 2) #17. According to Department of Extranormal Operations (DEO) Agent Cameron Chase, the Religion of Crime (ROC) was in the process of founding a cult based on Batwoman. The cult retrieved Beth's body from Gotham Harbor and placed it inside the sarcophagus. The sarcophagus brought Beth back to life, and kept her in suspended animation. Agent Chase, tasked by the DEO to discover the secret identity of Batwoman, uncovered the cult. All the cult members died defending the sarcophagus, which was brought back to DEO headquarters by Agent Chase. Scanning by DEO technicians revealed Beth was inside, and although she was apparently conscious the DEO did not open the sarcophagus for several months.
Now in the custody of the DEO, Beth appears traumatized by her months spent in the sarcophagus: sometimes she is lucid, and at other times she reverts to her "Alice" personality. Mister Bones, director of the DEO, believes himself to be Jacob Kane's illegitimate son and wants to use Beth for his own purposes. Batwoman agrees to uncover Batman's secret identity if the DEO will turn Beth over to her, destroy all its files on the Kane family, stop targeting Bette Kane, stop putting pressure on Maggie Sawyer, and agree to no longer see Batwoman as one of its agents.
Thanks to Bette Kane's electronic listening devices, Batwoman's entire family and Maggie Sawyer realize what Batwoman is up against and how high the stakes are. Col. Kane sets up the "Murder of Crows", the elite group of ex-military and intelligence operatives who trained Kate, to provide backup support for Batwoman. The Crows and Hawkfire kidnap Agent Asaf, Mister Bones' top subordinate at the DEO, and induce him to reveal the location of Beth Kane in exchange for Batwoman's help in discrediting Bones (which will allow Asaf to take over the directorship of the DEO). Hawkfire and the Crows break into the DEO safe house and find Beth, but are captured. Batwoman and Batman agree to work together to stop Bones and free Beth. Bones, whose body generates cyanide, threatens to kill Beth rather than hand her over; Asaf shoots Bones in the head, and Beth is freed.
Col. Kane takes Beth to the Kane family's private island for psychiatric treatment.
Red Alice
In Batwoman (vol. 2) #36, Beth is depicted flying back to Gotham City, where she takes up residence in the mothballed family manor house on the Kane estate. She has returned to renew her relationship with Kate, having had a major breakthrough in her psychiatric treatment some weeks earlier (although she still speaks in quotations from Carroll at times). Clearly aware of Kate's superhero identity, she breaks into Kate's city apartment and reunites with Batwoman.
Calling herself Red Alice, Beth is introduced to Natalia Mitternacht (the vampire also known as Nocturna). Kate has abandoned her long-term relationship with Maggie Sawyer and formed one with Natalia. Beth instinctively realizes that Natalia is evil and has Kate under some sort of mental control. Beth says she wants to atone for the evil she did, and she shows familiarity with the steam-powered gun grappling hook Batwoman uses as well as incredible strength as she swings on filament lines above Gotham's city streets. When the witch Morgaine le Fey attacks an amnesiac Jason Blood in order to stop Etrigan the Demon from manifesting from Jason's body, Red Alice saves Jason from falling to his death. Red Alice later confronts Nocturna and accuses her of hypnotizing Kate. Nocturna has Batwoman attack Red Alice. Realizing she cannot defeat her sister physically, Beth offers her throat. The shock of almost being driven to kill the one person she loves more than anyone else allows Kate to break the hold Nocturna has on her. When Nocturna brags about the murders she's committed in Batwoman's name, Beth reveals that she's captured the admission on her mobile phone and live-streamed the admission to the Gotham Police Department. Afterward, Beth helps Kate deal with Natalia's emotional and sexual betrayal and successfully encourages her to reconcile with Maggie Sawyer.
Red Alice also participates in Batwoman's battle with Morgaine le Fey. Morgaine manages to recover a magical tool known as the "sorcerer's stone", which will enhance her powers dramatically. She intends to transform the world into a version of Avalon, with herself as empress. To do so, she and her demon horde ascend to a space station in outer space (the highest point above the planet). Red Alice accompanies Batwoman, Etrigan the Demon, Clayface, and Ragman aboard a Space Shuttle into orbit to stop Morgaine. The helmet of Alice's spacesuit cracks in battle, and Ragman saves her life by absorbing her soul into his costume. Batwoman's team is defeated by Morgaine, and they crash back to Earth. On a transformed Earth, Ragman restores Beth's soul to her body. The other evil souls trapped in Ragman's costume try to hold Beth back, but she resists them and screams that she wants to atone for all the wrong she has done. Red Alice then assists Batwoman's team in defeating le Fey and undoing the spell.
Reemergence of Alice
In a flashback in Batwoman (vol. 3) #7, Beth is depicted receiving further psychiatric treatment at the Weiße Kaninchen Sanatorium near Geneva, Switzerland.
The Alice persona reemerges in the "Many Arms of Death"/"Fall of the House of Kane" storyline. As depicted previously and during this story, Kate Kane comes out as a lesbian while obtaining her military education at the United States Military Academy. Depressed at the loss of her lover (who chose to keep her lesbianism a secret and remain in the Army) and her military career, Kate begins drinking heavily and taking drugs while traveling around the world and spending large sums of money. While sailing near the island of Coryana, she falls overboard and receives a severe head injury after striking a coral reef. The island's ruler, Safiyah Sohail, saves Kate's life by sewing Kate's skull shut with gold thread. The two become lovers, to the distress of Tahani, Safiyah's former partner. Coryana is a "pirate nation", providing tax havens, untraceable bank accounts, freedom of movement for arms dealers, and more, none of which worries Kate. Unwittingly, Kate becomes an asymptomatic carrier for a deadly bacterium found on the reefs on which she was injured; this bacterium causes a disease which ravages Coryana's fox population. To protect Kate, Safiyah scapegoats a troublesome man on the island, accusing him of releasing the plague, and has him killed. Kate is horrified and, after a brief fight with Safiyah and Tahani (now known by the codename "Knife"), leaves Coryana.
Years later, Batman asks Batwoman to help break the "Many Arms of Death", a terrorist organization. Batwoman learns that Beth is missing from the Weiße Kaninchen Sanatorium, and assumes Safiyah has her. Following a clue left by Safiyah, Batwoman travels to the long-abandoned Kane house in Brussels. Safiyah is there, but denies kidnapping Beth. She reveals that getting Batwoman to Belgium was a ruse to get her away from prying eyes and eavesdropping equipment. Safiyah reveals that Knife has betrayed them both, kidnapping Beth and using drugs to force her Alice personality to reemerge. Alice has subsequently taken over the Many Arms of Death, and plans to destroy Gotham City by unleashing thousands of deadly disease-carrying bats. Batwoman destroys the bats by trapping them in her airship and then initiating its self-destruction. Batwoman manages to further mitigate the damage of the attack with the help of her mission partner Julia Pennyworth, who synthesizes an aerosolized vaccine and disperses it over Gotham from the duo's secondary airship. As Batman (summoned by Julia) attempts to subdue Alice, Batwoman fights him off while arguing that Alice belongs with her and not in Arkham Asylum. She convinces him that family (Alice is Bruce Wayne's cousin, too) is more important. He allows her to keep control of Alice, although Batwoman's relationship with Batman becomes strained.
Three months after being rescued from Knife, Beth (sane once more now that the drugs are out of her system) is living with Kate in Kate's Gotham apartment. Somewhat psychologically and physically incapacitated by the drugs, she is cared for by Kate and Julia Pennyworth. The "Alice" persona is now theorized to be something magical implanted in Beth by the Religion of Crime, not induced by trauma. She receives outpatient therapy from a woman with a top hat (the comic implies this is the magical superheroine Zatanna).
In the story "Disinformation Campaign", part of the "Fear State" crossover storyline, Beth is still dealing with controlling her Alice persona. In order to discover information about Seer, an "Anti-Oracle" figure spreading misinformation throughout Gotham during the larger crisis, Beth works alongside her sister, disguising herself as Alice to infiltrate a gathering of the Religion of Crime in an attempt to recruit those followers to find Seer. Though this recruitment fails, the twins still identify the location of Seer; Kate relays this information to Nightwing and Oracle. During the mission, Beth has an interior conversation with her Alice persona, and comes to terms with keeping her under control and accepts that, for better or worse, Alice is now a part of her for good.
Batwoman: Future's End
Red Alice also appears in the comic book Batwoman: Future's End. Set five years into a potential future, Batwoman has become a vampire. Red Alice joins with Clayface, Jason Blood/Etrigan the Demon, and Ragman to try to stop her. During the battle, Batwoman kills Jason and Clayface. Red Alice fends off Batwoman's attacks using technology given to her by Bruce Wayne, and then reluctantly and tearfully kills her sister by driving a wooden stake through her heart.
Other appearances
Beth Kane appears several times in Batwoman stories in cameos and other minor roles.
Batwoman dreams of the child Beth and the adult Alice after she injects herself with Scarecrow fear-toxin.
When Batwoman is poisoned by the villain Wolf Spider, she hallucinates about Beth as a child and as Alice.
Batwoman envisions a dead, skeletal Beth as she motivates herself to work harder at building her strength and fighting skills.
Beth has a cameo in Batwoman's memories about her childhood at Kane Estate.
While under the influence of Scarecrow's improved fear toxin, Batwoman has a hallucination in which the child Beth is killed by a warped version of the adult Alice. Beth appears in a flashback as Batwoman thinks about her family in an attempt to break the toxin's hold on her.
A young Beth appears in one of Batwoman's memories about a time when Kate changed her Halloween costume to a mummy so Beth, despite having a broken wrist, would feel comfortable trick-or-treating.
A young Beth appears in one of Batwoman's visions.
Description
Alice is 24 years of age when she makes her first appearance. She suffers from a psychosis in which she presents a personality based on the fictional Lewis Carroll character, Alice, and speaks in quotations from Alice novels and stories. She is depicted as having chalk-white skin, short and wavy blonde hair, red nails and lips, and using heavy black mascara and eye-liner. She dresses in white, pseudo-Victorian fashion with a low décolletage and dress cut away in front to expose her thigh-high stockings and garter. The Alice personality's speech balloons are black with white borders. The text is also white, as well as serif transitional, partly italicized, and in upper and lower case. This indicates her psychosis. The Beth personality's speech balloons are white with black, sans-serif, all-caps text. This is the same style used by all other characters in the comic, which represents her lucidity. Alice is usually armed with one or more handguns, sometimes carries sharp-edged weapons such as razor blades and knives, and has an acquired immunity to many poisons and chemical weapons. She has extensive knowledge of a wide range of chemicals, drugs, hallucinogens, and poisons.
Red Alice has a similar appearance to Alice, although the right side of her head is shaved. She dresses in roughly the same pseudo-Victorian costume (although without the long dress in the rear), but her clothing is now colored burgundy. Her makeup is also different: she now sports a spray-painted purplish-red domino mask around her eyes. Red Alice exhibits familiarity with a number of gadgets and weapons used by Batwoman, as well as the physical strength and dexterity needed to use them.
Under the influence of Tahani's drugs, Alice appears similar to the way she looked in her first appearance. She wears a simplified, tailored short dress with bodice and lace-up thigh boots. Her hair is no longer shaved on one side of her head, but she continues to paint her lips and nails red. To depict her insanity, her speech balloons are either black or deep red and outlined in white. She also no longer speaks in Lewis Carroll quotations.
In other media
Television
Alice appears in Batwoman, portrayed by Rachel Skarsten, while her younger self is portrayed by Ava Sleeth. This version was presumed dead after a car accident, having actually been rescued and held captive by August Cartwright, who wanted her as a companion for his disfigured son Jonathan "Mouse" Cartwright. In addition, Jacob Kane's wife Catherine Hamilton-Kane used DNA analysts and the skull fragments of a deer to make Jacob think that Beth was dead. Alice was also abused by August's mother Mabel, whom she referred to as the Queen of Hearts. After successfully escaping from August, Beth hid on a ship and was briefly taken in by Safiyah Sohail. Growing up, she becomes Alice, forms the Wonderland Gang, and seeks revenge against her father for abandoning her. In the pilot episode, Alice encounters her twin sister Kate, who quickly realizes who Alice is, during an incident in which Crows operative Sophie Moore is abducted. Kate becomes Batwoman, rescues Sophie, and prevents the detonation of a bomb at a viewing event, but Alice gets away. After placing a cryptic call to Jacob from his apartment while he attends a gala hosted by Tommy Elliot, she saves Batwoman by knocking out Tommy, wanting her alive for now. Catherine later finds playing cards left by Alice in her bedroom and begs Jacob to deal with her despite his concerns that she might be Beth. Later, Alice crashes the gala, poisoning and killing Catherine. After briefly visiting Mouse in the ICU disguised as a Crows agent, Alice learns of another Beth on Earth-Prime. Kate gives Mary's blood to the alternate Beth, but while she lingers by a trapped Alice, the alternate Beth is shot by a sniper, allowing Alice to recover and knock out Kate. After catching August, Alice leaves him for Kate, and he confesses to Kate and Jacob about what he did. Subsequently, Alice and Mouse are incarcerated at Arkham Asylum, forming an alliance with Tommy Elliot.
Skarsten also plays the Earth-99 counterpart of Beth in the crossover event "Crisis on Infinite Earths".
Kate later encounters a similar Beth (also portrayed by Skarsten) on Earth-Prime, displaced from her now non-existent reality; this Beth is shocked to learn of her doppelganger's villainy and even briefly assumes Alice's identity in an attempt to save Kate. After Mouse ends up in Crows custody under heavy guard in their ICU, Alice and the alternate Beth start to develop severe headaches. This Beth reveals she was never separated from her family and went on to earn a master's degree in astrophysics. She was later shot and killed by August Cartwright, who mistook her for Alice.
Film
Beth Kane makes a non-speaking cameo appearance in a flashback in Batman: Bad Blood.
Notes
References
External links
Batwoman
Characters created by Greg Rucka
Chemical war and weapons in popular culture
Comics characters introduced in 2009
DC Comics female supervillains
DC Comics television characters
Female characters in television
Fictional American Jews in comics
Fictional characters with mental disorders
Fictional chemists
Fictional crime bosses
Fictional identical twins
Twin characters in comics
Fictional mass murderers
Jewish superheroes
Superhero television characters
Batman characters | Beth Kane | [
"Chemistry"
] | 4,273 | [
"Chemical war and weapons in popular culture",
"Chemical weapons"
] |
60,803,057 | https://en.wikipedia.org/wiki/Microtechnique | Microtechnique is an aggregate of methods used to prepare micro-objects for study. It is currently employed in many fields of life science. Two well-known branches of microtechnique are botanical (plant) microtechnique and zoological (animal) microtechnique.
With respect to both plant and animal microtechnique, four types of methods are commonly used in micro experiments: whole mounts, smears, squashes, and sections. Plant microtechnique includes direct macroscopic examination, freehand sectioning, clearing, maceration, embedding, and staining. Three preparation methods used in zoological micro observation are the paraffin method, the celloidin method, and the freezing method.
History
The field of microtechnique dates from the end of the 1930s, when the principle of dry preparation emerged. The early development of microtechnique in botany is closely related to that in zoology, and zoological and botanical discoveries were adopted by both zoologists and botanists. After Hooke discovered cells, microtechnique developed alongside the emergence of early microscopes, and it advanced over the period 1800–1875; after 1875, modern micro methods emerged. In recent years, both traditional methods and modern microtechniques have been in use in many experiments.
Commonly used methods
Some general microtechniques can be used in both plant and animal micro observation. Whole mounts, smears, squashes, and sections are four methods commonly used to prepare plant and animal specimens for specific purposes.
Whole mounts
Whole mounts are usually used when observers need to examine a whole organism or do detailed research on a specific organ structure. This method requires objects from which moisture can be removed, such as seeds and microfossils.
According to purpose, whole mounts can be divided into three categories: temporary, semi-permanent, and permanent. Temporary whole mounts are usually used for teaching activities in class. Semi-permanent whole mounts are prepared for a longer usage time of no more than fourteen days; in this preparation, Canada balsam is used to seal the specimens, and the method is used to observe unicellular and colonial algae, fungal spores, moss protonemata, and prothalli. The third category is the permanent whole mount, for which two methods are usually used: the hygrobutol method and the glycerine-xylol method.
Smears
Smearing is an easy way of preparing slides and is used in many laboratories. Smears can be employed to make slide specimens by spreading liquid or semi-liquid materials, or loose tissues and cells of animals and plants, evenly on the slide. For solid material, the material is placed on the glass slide and a blade is used to press it from one side, so that the cells are pressed out and distributed evenly over the slide in a thin layer, as when an anther is smeared.
Squashes
Squashing is a method in which objects are crushed with force. It is suitable for preparing transparent and tender tissues. When preparing squash slides, specimens should be thin and transparent so that objects can be observed clearly under the microscope.
In this technique, the material is placed on the glass slide and teased apart with a scalpel or dissecting needle, and a drop of dye solution is then added. A second slide is applied to cover the first, and pressure is applied evenly to break up the material and disperse the cells. Alternatively, the specimen can be squashed between a cover slip and the slide with even pressure.
Sections
Sections are the thin slices needed in all studies of cellular structure. This technique can be used to prepare tissue of animals and plants. For optical microscopy, the thickness of the material should be between 2 and 25 micrometres; for electron microscopy, sections should be 20 to 30 nanometres. A microtome can be used to cut sufficiently thin slices. If the material cannot satisfy the thickness requirement, it must be dehydrated with alcohol before sectioning. Three commonly used sectioning methods are the freehand section technique, the paraffin method, and the celloidin method.
Methods used in plant micro-experiments
Botanical microtechnique is an aggregate of methods providing micro-level visualization of genes and gene products in an entire plant, and a study that provides valuable experimental information. It involves classical methods developed over a hundred years ago as well as new methods developed to expand the scope and depth of research in botanical micro studies. Both traditional and new microtechniques are useful for experimental research, and some will have a significant influence on further study. Different methods are used to prepare plant specimens, including direct macroscopic examination, freehand sectioning, clearing, maceration, embedding, and staining.
Direct microscopic examinations
Direct microscopic examination is a simple way of observing micro-objects. It is also useful for checking whether mold has grown on the surface of specimens, and can serve as the initial step of a micro experiment.
Freehand section
Freehand slicing is the method of directly cutting fresh or fixed experimental material (generally plants with a low degree of lignification) into thin slices with a hand-held blade, without special instruments or special chemical reagents.
Clearing
The clearing technique produces translucent slides by removing part of the cytoplasmic content and then treating the tissue with reagents of high refractive index. This method is suitable for preparing whole-mount slides. Clearing is the procedure of using clearing reagents to remove alcohol and make the tissue translucent; xylene is the most popular clearing agent.
Maceration
Macerating tissues is the process of separating the constituent cells of a tissue, enabling observers to study whole cells in three-dimensional detail. The chemical maceration method uses chemicals to soften the tissue of an organ or part and separate its cells so that the different cells can be identified.
Embedding
The embedding technique is an intermediate stage of the sectioning process. When preparing specimens, it is difficult to cut uniform slices while the tissue is soft, so the tissue is soaked in a substance that hardens it throughout and thereby facilitates slicing; this process is called embedding. The substance used to embed tissue is the embedding medium, the choice of which depends on the type of microscope, the type of microtome, and the type of tissue. Paraffin wax, whose melting point is 56 to 62 °C, is commonly used for embedding.
Staining
Since few plant tissues have a colour of their own, there is little chromatic difference between plant tissues, making it difficult to differentiate botanical structures. Material is therefore usually dyed before mounting. This process is called staining, and it makes it possible to distinguish one part of the sample from another by colour. Basic dyes are typically used to colour nuclei, while other cellular components are stained with acid dyes. There are also staining machines, which allow tissue to be stained automatically.
Microtechnique used for animal observation
Zoological microtechnique is the art of preparing animals for microscopic observation. Although many microtechniques can be used in both plant and animal micro experiments, some methods differ between the two fields when employed. Commonly used preparation methods in zoological micro observation are the paraffin method, the celloidin method, and the freezing method, together with miscellaneous techniques.
Paraffin method
Infiltration and embedding
This process usually consists of the steps of infiltration, embedding, sectioning, affixing, and processing the sections. After the initial stage, fixation, the next step is dehydration, which removes the water in the tissue using alcohol. The tissue can then be infiltrated and embedded with wax. A tissue specimen can be kept for several years after being embedded in wax. Paraffin wax, which is soft and colourless, is the most commonly used reagent.
Sectioning
Sectioning a tissue can be done with either the microtome knife or the razor blade as the cutting blade.
The microtome knife is used for hand sectioning and is necessary when preparing sections thinner than 9 microns (1 micron equals 1/1000 millimetre); the operator must be extremely careful when using it. Because this instrument is sometimes impractical, the razor blade is used for general work to prepare sections of 9 microns and above. Furthermore, the razor blade works better than the microtome knife when thick sections of 20 microns or more are required.
Affixing and processing
After sectioning, the prepared slices are affixed to slides. There are two commonly used affixatives, Haupt's and Mayer's. Haupt's affixative contains 100 cc (cubic centimetres) of distilled water, 1 gm of gelatin, 2 gm of phenol crystals, and 15 cc of glycerine. Mayer's affixative consists of 5 cc of egg albumen, 50 cc of glycerine, and 1 gm of sodium salicylate. The general steps of affixing paraffin sections are: 1. clean the required slides; 2. mark the cleaned slides; 3. drop affixative on each slide; 4. put on another slide; 5. spread the affixative; 6. drop on floating medium; 7. divide the paraffin into the required lengths; 8. transfer the sections; 9. add more floating medium if incomplete floating occurs; 10. raise the temperature; 11. remove the slides and redundant floating medium; 12. dry the sections.
Processing paraffin sections includes: 1. deparaffination; 2. removing the deparaffination solution; 3. hydration; 4. staining; 5. dehydration; 6. dealcoholisation and clearing; 7. mounting the cover slip.
Celloidin method
The celloidin technique is the procedure of embedding a specimen in celloidin, and it can be used for embedding large, hard objects. Celloidin is a purified nitrocellulose fibre; it is flammable and soluble in acetone, clove oil, and a mixture of anhydrous alcohol and ether. Celloidin turns into a white, turbid emulsion when it meets water, so a dry container is required to hold it.
The celloidin slicing method is to fix and dehydrate the tissue, then treat it with the anhydrous alcohol-ether mixture, and after this step to impregnate, embed, and slice the tissue with celloidin. This slicing method can cut large tissues and has the advantage that, because no heat is involved, the tissue does not shrink. However, the technique has some shortcomings: the slices cannot be cut very thin (no less than 20 microns), and impregnation with celloidin is time-consuming.
Freezing method
The freezing technique is the most commonly used sectioning method. It preserves the immune activity of various antigens well, and both fresh tissue and fixed tissue can be frozen. It is also a technique used for making frozen sections of either fresh or fixed plant tissues.
During the freezing procedure, the water in tissues readily forms ice crystals, which often affect antigen localization. It is generally believed that small ice crystals have little effect, while large ice crystals cause great damage to the tissue structure; this damage is more likely in tissues with a higher moisture content. The size of an ice crystal is directly proportional to its growth rate and inversely proportional to the nucleation (formation) rate: the greater the number of ice crystals formed, the smaller each crystal is, and the less serious the impact on the structure. The formation of large ice crystals should therefore be minimized. The freezing method allows tissues to be sectioned rapidly and biopsies to be taken without using reagents. The procedure should be rapid to prevent the formation of large ice crystals.
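Written compactly, the proportionality above reads as follows (the symbols are chosen here for illustration and are not defined in the source):

```latex
% Characteristic ice-crystal size d versus growth rate G and nucleation rate J:
\[
  d \;\propto\; \frac{G}{J}
\]
% Rapid freezing raises the nucleation rate J relative to the growth rate G,
% giving many small crystals and therefore less damage to tissue structure.
```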
See also
Microtechnology
Histology
References
Scientific techniques
Microbiology | Microtechnique | [
"Chemistry",
"Biology"
] | 2,626 | [
"Microbiology",
"Microscopy"
] |
60,808,823 | https://en.wikipedia.org/wiki/Artificial%20metalloenzyme | An artificial metalloenzyme (ArM) is a designer metalloprotein, not found in nature, which can catalyze desired chemical reactions. Despite fitting into classical enzyme categories, ArMs also have potential in new-to-nature chemical reactivity, such as catalysing Suzuki coupling and metathesis, which have never been reported among natural enzymatic reactions. ArMs have two main components: a protein scaffold and an artificial catalytic moiety, which, in this case, features a metal center. This class of designer biocatalysts is unique because of the potential to improve catalytic performance through chemogenetic optimization, the parallel improvement of both the direct metal surroundings (first coordination sphere) and the protein scaffold (second coordination sphere). The second coordination sphere (the protein scaffold) is easily evolvable and, in the case of ArMs, responsible for very high (stereo)selectivity. With progress in organometallic synthesis and protein engineering, more and more new designs of ArMs have been developed, showing a promising future in both academia and industry.
In 2018, one-half of the Nobel Prize in Chemistry was awarded to Frances H. Arnold "for the directed evolution of enzymes"; her laboratory evolved artificial metalloenzymes to realize efficient and highly selective new-to-nature chemical reactions in vitro and in vivo.
History
The first protein-modified transition metal catalyst was documented in 1956: a palladium(II) salt was adsorbed onto silk fibroin fiber and reduced by hydrogen to give the first reported ArM, which could catalyze asymmetric hydrogenation. This work was not reproducible, but it is considered the first work in the field of artificial metalloenzymes. At that time, the major challenge blocking further studies was underdeveloped protein production and purification technology. The first attempt to anchor an abiotic metal center onto a protein was reported by Whitesides et al., who used the biotin-avidin interaction to make an artificial hydrogenase; the presence of avidin significantly increased the catalytic capacity of the rhodium(I) cofactor in aqueous phosphate buffer. Another pioneering work was conducted by Kaiser et al., in which carboxypeptidase A (CPA) was repurposed into an oxidase for the oxidation of ascorbic acid by substituting the Zn(II) center with Cu(II).
The real potential of ArMs was unleashed once recombinant protein production was developed: in 1997, Distefano and Davies reported the scaffold modification of a recombinant adipocyte lipid-binding protein (ALBP) with iodoacetamido-1,10-phenanthroline coordinating Cu(II) for the stereoselective hydrolysis of racemic esters.
Formation
Abiotic cofactor anchoring
Four strategies have been used to assemble ArMs:
Covalent immobilization of a metal-containing catalytic moiety by an irreversible reaction with the protein;
Supramolecular interactions between a protein and a high-affinity substrate could be used to anchor a metal cofactor;
The metal substitution in a natural metalloenzyme can result in a novel catalytic activity to the protein. The metal could be part of a prosthetic group (e.g., heme) or bound to amino acids;
Amino acids with Lewis-basic properties in a hydrophobic pocket could interact with coordinatively unsaturated metal center.
These four strategies have led to great progress in the field of artificial metalloenzymes since the beginning of the 21st century, unlocking exceptional selectivity for new-to-nature reactions.
Covalent
With the development of bioconjugation technology, there are plenty of strategies to covalently bind an artificial metallocofactor onto a protein scaffold:
cysteine residue based chemistry: Cys-maleimide, Cys-α-haloketone, and Cys-benzyl halide chemistry, and disulfide formation,
post-translational bioorthogonal modification based on Amber stop codon suppression (e.g., Click chemistry)
enzyme active site modification (e.g., covalent bond formation between lipase and lipase inhibitor).
Supramolecular
Streptavidin or avidin in combination with biotinylated artificial metal cofactors is the most commonly used supramolecular strategy for making ArMs. In an early example from Ward et al., the ligand of a ruthenium complex was covalently linked to biotin, and the whole complex was then anchored to streptavidin through the specific and strong biotin-streptavidin interaction. The resulting ArM can catalyze the reduction of prochiral ketones. Taking advantage of protein evolvability, different mutants of streptavidin can achieve different stereoselectivities. Throughout the years, many streptavidin-based enzymes have been developed, enabling the catalysis of very complex transformations in water under ambient conditions.
Besides biotin-streptavidin based ArMs, another important example of the supramolecular assembly strategy is antigen-antibody recognition. First reported in 1989 by Lerner et al., a monoclonal antibody-based ArM was raised to hydrolyze a specific peptide.
Other interesting scaffolds used as platforms for supramolecularly assembled ArMs are multidrug resistance regulators (MDRs), particularly the PadR family of proteins without native catalytic activity, whose function in nature is to recognize foreign agents and activate the subsequent cellular response. Among them, the lactococcal multidrug resistance regulator (LmrR) has mainly been used to create ArMs, using different strategies, including the supramolecular one. Namely, Roelfes et al. incorporated a Cu(II)-phenanthroline complex in the hydrophobic pocket of LmrR to perform the Friedel-Crafts reaction enantioselectively, and an Fe-heme complex which catalyzed cyclopropanation enantioselectively.
Metal substitution
This strategy involves substitution of a native metal center in a metallocofactor by another metal, which may or may not already be present in living systems. In this way, the electronic and steric properties of the catalytic active site are altered compared to the wild-type enzyme, and novel catalytic pathways are unlocked.
Dative
The dative anchoring strategy uses natural amino acid residues in the protein scaffold, such as His, Cys, Glu, Asp and Ser, to coordinate a metal center. As in the first example of Pd-fibroin, dative anchoring to natural amino acids is not commonly used nowadays and often results in a more ambiguous metal binding site compared with the previous three methods.
However, these challenges can be overcome by incorporating metal-chelating non-canonical amino acids (ncAAs) into the protein scaffold in vivo. The side chains of these genetically encoded ncAAs carry chelating moieties, such as 2,2'-bipyridine (3-(2,2'-bipyridin-5-yl)-L-alanine) and 8-hydroxyquinoline (2-amino-3-(8-hydroxyquinolin-3-yl)propanoic acid), that can selectively coordinate different metals. Combining protein scaffolds featuring chelating ncAAs with different metals yields exceptionally selective artificial metalloenzymes with various application potentials. ncAAs are usually incorporated by means of Amber stop codon suppression, via an orthogonal translation system (OTS).
Natural metalloenzyme repurposing
In addition to anchoring artificial metal centers in protein scaffolds, researchers such as Frances Arnold and Yang Yang have focused on changing the native environment of natural metallocofactors. Because of the large sequence space that can be evolved in natural metalloenzymes, they can be evolved to catalyze non-native transformations. This process is known as enzyme repurposing. Directed evolution is commonly used to tailor the catalytic capacity and repurpose the enzyme function. Working mostly with the native porphyrin metallocofactor, Arnold's lab has developed many ArMs catalyzing regioselective and/or enantioselective transformations, such as carbon-boron bond formation, carbene insertion, and aminohydroxylation, by evolving the sequence context of the corresponding ArMs.
As the pioneers of metalloredox radical biocatalysis, Yang et al. repurposed cytochrome P450s to catalyze atom transfer radical cyclization (ATRC), and Huang et al. repurposed non-heme Fe-dependent enzymes to catalyze an abiological radical-relay azidation and radical fluorination.
Function
So far, ArMs can catalyze a wide range of chemical reactions, such as: allylic alkylation, allylic amination, aldol reaction, alcohol oxidation, C-H activation, click reaction, catechol oxidation, reduction, cyclopropanation, Diels-Alder reaction, epoxidation, epoxide ring opening, Friedel-Crafts alkylation, hydrogenation, hydroformylation, Heck reaction, metathesis, Michael addition, nitrite reduction, NO reduction, Suzuki reaction, Si-H insertion, polymerization (atom transfer radical polymerization), atom transfer radical cyclization (ATRC) and radical fluorination.
References
Metalloproteins
Organometallic chemistry
Bioinorganic chemistry
Enzymes
| Artificial metalloenzyme | [
"Chemistry",
"Biology"
] | 2,022 | [
"Biochemistry",
"Metalloproteins",
"Organometallic chemistry",
"Bioinorganic chemistry"
] |
54,391,065 | https://en.wikipedia.org/wiki/%28Diacetoxyiodo%29benzene | (Diacetoxyiodo)benzene, also known as phenyliodine(III) diacetate (PIDA), is a hypervalent iodine chemical with the formula C6H5I(OCOCH3)2. It is used as an oxidizing agent in organic chemistry.
Preparation
This reagent was originally prepared by Conrad Willgerodt by reacting iodobenzene with a mixture of acetic acid and peracetic acid:

C6H5I + CH3CO3H + CH3CO2H → C6H5I(OCOCH3)2 + H2O

PIDA can also be prepared from iodosobenzene and glacial acetic acid:

C6H5IO + 2 CH3CO2H → C6H5I(OCOCH3)2 + H2O
More recent preparations directly from iodine, acetic acid, and benzene have been reported, using either sodium perborate or potassium peroxydisulfate as the oxidizing agent.
Structure
The PIDA molecule is termed hypervalent as its iodine atom (technically a hypervalent iodine) is in the +III oxidation state and has more than the typical number of covalent bonds. It adopts a T-shaped molecular geometry, with the phenyl group occupying one of the three equatorial positions of a trigonal bipyramid (lone pairs occupy the other two) and the axial positions occupied by oxygen atoms from the acetate groups. The "T" is distorted in that the phenyl-C to I to acetate-O bond angles are less than 90°. A separate investigation of the crystal structure confirmed that it has orthorhombic crystals in space group Pnn2 and reported unit-cell dimensions in good agreement with the original paper. The bond lengths around the iodine atom were 2.08 Å to the phenyl carbon atom and equal 2.156 Å bonds to the acetate oxygen atoms. This second crystal structure determination explained the distortion in the geometry by noting the presence of two weaker intramolecular iodine–oxygen interactions, resulting in an "overall geometry of each iodine [that] can be described as a pentagonal-planar arrangement of three strong and two weak secondary bonds."
Unconventional reactions
One use of PIDA is in the preparation of similar reagents by substitution of the acetate groups. For example, it can be used to prepare (bis(trifluoroacetoxy)iodo)benzene (phenyliodine(III) bis(trifluoroacetate), PIFA) by heating in trifluoroacetic acid:

C6H5I(OCOCH3)2 + 2 CF3CO2H → C6H5I(OCOCF3)2 + 2 CH3CO2H
PIFA can be used to carry out the Hofmann rearrangement under mildly acidic conditions, rather than the strongly basic conditions traditionally used. The Hofmann decarbonylation of an N-protected asparagine has been demonstrated with PIDA, providing a route to β-amino-L-alanine derivatives.
PIDA is also used in Suárez oxidation, where photolysis of hydroxy compounds in the presence of PIDA and iodine generates cyclic ethers. This has been used in several total syntheses, such as the total synthesis of (−)-majucin, (−)-Jiadifenoxolane A, and cephanolide A.
References
Reagents for organic chemistry
Oxidizing agents
Iodanes
Acetates
Phenyl compounds | (Diacetoxyiodo)benzene | [
"Chemistry"
] | 644 | [
"Iodanes",
"Redox",
"Oxidizing agents",
"Reagents for organic chemistry"
] |
55,974,555 | https://en.wikipedia.org/wiki/List%20of%20major%20snow%20and%20ice%20events%20in%20the%20United%20States | The following is a list of major snow and ice events in the United States that have caused noteworthy damage and destruction in their wake. The categories presented below are not used to measure the strength of a storm, but are rather indicators of how severely the snowfall affected the population in the storm's path. Some information such as snowfall amounts or lowest pressure may be unavailable due to a lack of documentation. Winter storms can produce both ice and snow, but are usually more notable in one of these two categories. The "Maximum accumulation" sections reflect the more notable category, which is represented in inches of snow unless otherwise stated. Only Category 1 and higher storms, as defined by their Regional Snowfall Index, are included here.
Note: A blizzard is defined as having sustained winds of at least 35 mph for three hours or more.
Seasonal summaries
The following is a table that shows North American winter season summaries dating back to 2009. While there is no well-agreed-upon date used to indicate the start of winter in the Northern Hemisphere, there are two definitions of winter which may be used. The first is astronomical winter, which has the season starting on a date known as the winter solstice, often on or around December 21. The season lasts until the spring equinox, which often occurs on or around March 20. The second is meteorological winter, whose start date varies with latitude. Winter is often defined by meteorologists to be the three calendar months with the lowest average temperatures. Since both definitions span the start of the calendar year, it is possible for a winter storm to occur across two different years.
18th–19th century
20th century
21st century
2000s
2010s
2020s
See also
List of blizzards
List of Regional Snowfall Index Category 5 winter storms
List of Regional Snowfall Index Category 4 winter storms
List of Northeast Snowfall Impact Scale winter storms
Winter storm naming in the United States
Notes
References
Blizzards in the United States
Lists of disasters in the United States
Winter weather events in the United States
Weather-related lists | List of major snow and ice events in the United States | [
"Physics"
] | 403 | [
"Weather",
"Physical phenomena",
"Weather-related lists"
] |
55,976,060 | https://en.wikipedia.org/wiki/Microscopy%20with%20UV%20surface%20excitation | Microscopy with UV Surface Excitation (MUSE) is a novel microscopy method that utilizes the shallow penetration of ultraviolet (UV, 230–300 nm) excitation photons. Compared to conventional microscopes, which usually require sectioning to exclude blurred signals from outside of the focal plane, MUSE's low penetration depth limits the excitation volume to a thin surface layer and removes the tissue sectioning requirement. The entire signal collected is the desired light, and all photons collected contribute to the image formation.
Mechanism
The microscope setup is based on an inverted microscope design. An automated stage is used to record larger areas by mosaicing a series of single adjacent frames. The LED light is focused using a ball lens with a short focal length onto the sample surface in an oblique-angle cis-illumination scheme since standard microscopy optics do not transmit UV light efficiently. No dichroic mirror or filter is required as microscope objectives are opaque to UV excitation light. The emitted fluorescence light is collected using a long-working-distance objective and focused via a tube lens onto a CCD camera.
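As an illustration of the stage-scanning step, the sketch below (my own; the field size, overlap, and function name are assumptions rather than details from the source) generates serpentine stage positions for acquiring the adjacent frames of a mosaic.

# Illustrative sketch of a serpentine (boustrophedon) stage scan for
# mosaicing adjacent MUSE frames; field size and overlap are assumed values.
def mosaic_positions(n_cols, n_rows, field_um=500.0, overlap=0.1):
    """Yield (x, y) stage positions in micrometers, row by row,
    reversing direction on alternate rows to minimize stage travel."""
    step = field_um * (1.0 - overlap)  # neighboring tiles share 10% overlap
    for r in range(n_rows):
        cols = range(n_cols) if r % 2 == 0 else reversed(range(n_cols))
        for c in cols:
            yield (c * step, r * step)

for x, y in mosaic_positions(3, 2):
    print(f"move stage to x={x:.0f} um, y={y:.0f} um; acquire frame")

Adjacent frames overlap slightly so that the single frames can later be registered and stitched into one large image.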
Specimens are submerged in exogenous dye for 10 seconds and then briefly washed in water or phosphate-buffered saline (PBS). The resulting stained specimens generate signals bright enough for direct and interpretable visualization through the microscope eyepiece.
Contrast enhancement
Previous work with MUSE includes the detection of endogenous fluorescent molecules in intact clinical human tissues for functional and structural characterization, which is limited by the relatively dim autofluorescence found in tissue. However, the use of bright exogenous dyes can provide substantially more remitted light than the autofluorescence approach.
Several dyes have been studied for MUSE's application, including eosin, rhodamine, DAPI, Hoechst, acridine orange, propidium iodide, and proflavine. Eosin and rhodamine stain the cytoplasm and the extracellular matrix, making the bulk of the tissue visible. Hoechst and DAPI fluoresce brightly when bound to DNA, allowing them to serve as excellent nuclear stains.
Innovation and significance
Microscope-based diagnostics are widely performed and serve as a gold standard in histological analysis. However, this procedure generally requires a series of time-consuming lab-based steps, including fixation, paraffin embedding, sectioning, and staining, to produce microscope slides with optically thin tissue sections (4–6 μm). While histology is commonly used in developed regions, people who live in areas with limited resources can hardly access it and consequently need a low-cost, more efficient way to access pathological diagnosis. The main significance of the MUSE system comes from its capacity to produce high-resolution microscopic images with subcellular features in a time-efficient manner, at lower cost and with less laboratory expertise required.
With 280 nm deep-UV excitation and a simple but robust hardware design, the MUSE system can collect fluorescence signals without the need for fluorescence filtering techniques or complex mathematical image reconstruction. It has the potential to generate high-quality images containing more information than microscope slides owing to their 2.5-dimensional character. MUSE images have been validated to have diagnostic value. The system is capable of producing images from various tissue types in different sizes, either fresh or fixed.
Use
The MUSE system mainly serves as a low-cost alternative to traditional histological analysis for cancer diagnostics, with simpler and less time-consuming techniques. By integrating microscopy and fresh-tissue fluorescence staining into an automated optical system, the overall acquisition time needed to obtain digital images of diagnostic value can be shortened to the scale of minutes, compared with conventional pathology, where the general procedure can take hours to days. The color-mapping techniques that correlate fluorescence staining with traditional H&E staining provide the same visual representation to pathologists based on existing knowledge, with no need for additional training in image recognition.
Additionally, this system also has great potential for intraoperative consultation, a method performed in the pathology lab that examines the microscopic features of tissue during oncological surgery, usually for rapid cancer lesion and margin detection. It can also play an important role in biological and medical research, which might require examination of the cellular features of tissue samples. In the future, the system can be further optimized to include more features, including staining protocols and LED wavelengths, for more research usages and applications.
Advantages and disadvantages
References
Microscopes
Ultraviolet radiation | Microscopy with UV surface excitation | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 910 | [
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Measuring instruments",
"Ultraviolet radiation",
"Microscopes",
"Microscopy"
] |
67,250,063 | https://en.wikipedia.org/wiki/First%20sunrise | The first sunrise refers to the custom of observing the first sunrise of the year. Such a custom may be just an observation of the sunrise on a special day, or it may have a religious meaning for those who worship the Sun, such as the followers of traditional religions in Korea and Japan and the Inuit, Yupik, Aleut, Chukchi and the Iñupiat in the Arctic Circle, who pray for good luck.
Japan
In Japan, the observation of the first sunrise of the year on the first day of the Old Calendar has been part of the traditional Shintoist worship of Amaterasu, the sun goddess. Nowadays, Japanese travel agents arrange trips to observe the earliest first sunrise of the year on the new Gregorian calendar in the easternmost Ogasawara Islands of the Japanese archipelago.
Mongolia
In Mongolia, there is a custom of observing the first sunrise from a mountaintop on the first day of the year of the Mongolian lunisolar calendar, commonly known as Tsagaan Sar. The holiday has shamanistic influences.
Korea
In Korea, there is also a custom of observing the first sunrise on the first day of the year, either on the traditional Korean calendar or the new calendar. Pohang Homigot, Ulleung County and Jeongdongjin are famous places to watch the first sunrise.
Canada, Greenland, Russia and the United States
In the Arctic Circle, the Inuit, Yupik, Aleut, Chukchi and the Iñupiat observe the first sunrise on the first day of the year by extinguishing three qulliqs and relighting them. This is to honour the sun and moon.
See also
Sunrise
Japanese New Year
Korean New Year
Quviasukvik
Caroline Island
Pitt Island
Young Island
Heliacal rising
References
External links
Solar phenomena
January
January observances
New Year in Japan
Culture of Korea
Eskimo culture
New Year in Canada
New Year in Russia
New Year in the United States | First sunrise | [
"Physics"
] | 399 | [
"Physical phenomena",
"Stellar phenomena",
"Solar phenomena"
] |
67,252,443 | https://en.wikipedia.org/wiki/Spongy%20degeneration%20of%20the%20central%20nervous%20system | Spongy degeneration of the central nervous system, also known as Canavan's disease, Van Bogaert-Bertrand type or Aspartoacylase (AspA) deficiency, is a rare autosomal recessive neurodegenerative disorder. It belongs to a group of genetic disorders known as leukodystrophies, where the growth and maintenance of myelin sheath in the central nervous system (CNS) are impaired. There are three types of spongy degeneration: infantile, congenital and juvenile, with juvenile being the most severe type. Common symptoms in infants include lack of motor skills, weak muscle tone, and macrocephaly. It may also be accompanied by difficulties in feeding and swallowing, seizures and sleep disturbances. Affected children typically die before the age of 10, but life expectancy can vary.
The cause of spongy degeneration of the CNS is a mutation in the gene coding for aspartoacylase (AspA), an enzyme that hydrolyzes N-acetyl aspartic acid (NAA). In the absence of AspA, NAA accumulates and results in spongy degeneration. The exact pathophysiological causes of the disease are currently unclear, but there are developing theories. Spongy degeneration can be diagnosed with neuroimaging techniques and urine examination. There is no current treatment for spongy degeneration, but research utilising gene therapy to treat the disease is underway. Spongy degeneration is found to be more prevalent among Ashkenazi Jews, with an incidence of 1/6000 amongst this ethnic group.
Clinical Symptoms
Spongy Degeneration of the CNS is classified into three types: infantile, juvenile and congenital; based on the age of onset and severity of symptoms.
Infantile Type
The infantile type is the most common type of spongy degeneration of the CNS. Usually, affected infants appear normal for the first few months of life. The age of onset is around 6 months, when infants begin to develop noticeable psychomotor defects. Various motor skills, such as turning over and stabilising head movements, are affected. Hypotonia and macrocephaly are also observed in the first few months.
During the latter part of the first year, most children's eyes fail to respond to visual stimuli, with episodic saccadic eye movements observed, rendering most children blind in the second year.
The symptoms in the terminal stage of disease development are sweating, emesis, hyperthermia, seizures, and hypotension, which usually results in the death of the child. Life expectancies of affected infants vary, but most infants do not live past the age of ten.
Congenital Type
The age of onset is typically a few days after birth in the congenital type. Pregnancy and delivery are not affected, and the child is born with a normal appearance and no health issues. However, affected infants may become lethargic in the following days and find movements such as sucking and swallowing difficult. As the disease progresses, patients may have decreased muscle tone and inactivation of the Moro reflex, also known as the startle reflex. This may lead to the development of Cheyne–Stokes respiration a few weeks or even days after delivery, which may be fatal.
Juvenile Type
The age of onset of the juvenile type is around five years of age. Most patients with the juvenile type survive until late adolescence. Affected children typically develop progressive cerebellar syndrome and mental deterioration, which is followed by vision loss, optic atrophy, and generalised spasticity. Unlike the infantile form, macrocephaly is not exhibited.
Pathophysiology
Although the pathophysiological causes of CD symptoms are still unclear, there are developing theories on the causes of myelination issues, gelatinous cortical white matter and seizures.
Issues in Myelination
Molecular Water Pump (MWP) and Osmolyte Imbalance
Increased cerebrospinal fluid (CSF) pressure and intramyelinic edema in CD patients suggest the existence of an efficient MWP in the brain. The MWP is a membrane protein responsible for pumping water molecules, along with dissolved NAA molecules, from the intraneuronal space to the interstitial space. In healthy individuals, NAA is first transported down the concentration gradient through the MWP from neurons to the interstitial space and subsequently hydrolyzed by AspA in neighbouring oligodendrocytes.
In patients with CD, it is theorized that AspA deficiency causes accumulation of NAA in the interstitial space, inducing an osmolyte imbalance and accumulation of water in the interstitial space. This increases hydrostatic pressure between interlamellar spaces and extracellular periaxonal and parenchymatous space, loosening the tight junctions between them, thus causing intramyelinic edema. Subsequent demyelination possibly contributes to vacant spaces in the white matter or spongy degeneration.
Dysmyelination
NAA-derived acetates are involved in the synthesis of fatty acids, which are subsequently incorporated into myelin lipids. It is hypothesized that in CD patients, AspA deficiency reduces NAA-derived acetates and consequently decreases the synthesis of myelin-associated lipids. This leads to dysmyelination, which promotes the formation of vacuoles in the interstitial space and spongy degeneration. However, it has been shown that spongy degeneration is not directly caused by the disrupted synthesis of myelin. Animal models show that myelination may still occur in AspA-lacking species, possibly due to parallel pathways for myelination during the initial stages of myelinogenesis.
Protein Folding and Stabilization
Deficiency of AspA lowers the expression of acetyl coenzyme A (CoA) in cells, which may be responsible for the stabilization and correct folding of proteins. This leads to protein degradation, with a particularly large effect in oligodendrocytes. In animal studies of AspA-deficient species, protein degradation in oligodendrocytes has been shown to cause severe loss of myelin proteins.
Gelatinous Subcortical White Matter
The deficiency in AspA, which is vital in oligodendrocytes for producing NAA-derived acetate, leads to a lack of regulation of the genetic structure and expression in these cells. This results in the death of oligodendrocytes and hence induces neuronal injury and the formation of vacuoles in the subcortical matter. These vacuoles contribute to the formation of the gelatinous-textured subcortical white matter found in many CD patients.
Seizures and Neurodegeneration
The pathophysiological causes of seizures and neurodegeneration in CD patients are likely due to oxidative stress generated by NAA accumulation. It is postulated that NAA promotes oxidative stress by promoting reactive oxygen species, as well as by reducing non-enzymatic antioxidant defenses. NAA also affects multiple antioxidant enzymes, such as catalase and glutathione peroxidase, impairing the detoxification of hydrogen peroxide. Recent animal studies have shown that the chronic oxidative stress may cause dysfunction in mitochondria, rendering the brain more susceptible to epileptic seizures.
Diagnosis
Canavan's disease is initially recognized by the appearance of symptoms, yet further examinations are needed for definitive diagnosis. Neuroimaging techniques such as Computed Tomography (CT) scan or Magnetic Resonance imaging (MRI) are typically used to detect the presence of degenerative subcortical white matter. Microscopy of the cerebrospinal fluid can also be used for diagnosis, where swollen astrocytes with distorted and elongated mitochondria can be seen in patients.
Urine examinations are used to differentiate CD patients from other neurodegenerative disorders with similar morphology, such as Alexander diseases and Tay-Sachs diseases (which similarly exhibit macrocephaly), as patients with CD uniquely display increased excretion of NAA.
Prevention
DNA analysis is generally used to determine if parents are carriers of the mutant gene. Prenatal diagnosis through either DNA analysis or determination of NAA in amniotic fluid (which would be increased in an affected pregnancy) can also be used when DNA analysis cannot be performed on parents. It has been observed that there is an abnormally high carrier rate in the Ashkenazi Jewish population. The risk of their offspring having spongy degeneration is one in four if both parents are carriers of the mutant gene.
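As a worked illustration of these figures (a sketch of my own, not from the article), one can enumerate the allele combinations for two carrier parents and relate the carrier rate to the reported incidence.

# Illustrative Punnett-square count for an autosomal recessive trait:
# both parents are carriers ("Aa", where "a" is the mutant allele).
from itertools import product

offspring = ["".join(sorted(pair)) for pair in product("Aa", repeat=2)]
affected = offspring.count("aa")
print(f"{affected}/{len(offspring)} of offspring affected")  # 1/4

carrier = 1 / 40  # reported Ashkenazi carrier rate
print(round(1 / (carrier**2 * 0.25)))  # 6400, consistent with ~1/6000 incidence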
Treatment methods
There are currently no specific forms of treatment known for spongy degeneration of the CNS. Certain treatment modules are under experimental trials and current patients are supported by palliative measures, all of which are described below.
Current palliative measures
Current patients are supported following the care guidelines for other paediatric neurodegenerative diseases. For patients with respiratory issues, suction machines are used to clear mucus from the upper airway of the lungs. Oxygen concentrators are also administered for airway clearance and a continuous supply of air to aid breathing. As for infants with hypotonia, it is addressed by the provision of positioning equipment like specialized strollers, bath chairs and feeder seats.
Possible treatment modules under development
Intraperitoneal injections of lipoic acid
Lipoic acid, which can cross the blood–brain barrier, has recently been trialed in preclinical studies, where it has been injected intraperitoneally into tremor rats. Tremor rats are deemed a naturally occurring model for spongy degeneration of the CNS, in which NAA induces oxidative stress. Positive results have emerged from these studies, suggesting that lipoic acid may be a possible approach for symptomatic treatment.
Intraperitoneal lithium administration
A possible treatment is to employ neuroprotective techniques to offset the neurological damage in the CNS caused by the accumulation of NAA. One potential treatment that has been identified is lithium, which has been observed to induce neuroprotective effects in dementia patients. Administration of intraperitoneal lithium has been tested in both tremor and wild-type rats, causing a decrease in NAA levels in both groups. In human trials, NAA levels in patients' brains and urine were found to drop after one year of treatment. This was coupled with elevated alertness and visual tracking. However, CD symptoms including axial hypotonia and spastic diplegia remained.
Gene therapy
Since CD arises from a monogenic defect and is localized in the CNS, gene replacement therapy is a potential treatment. This therapy involves replacing the mutant gene of the disease with a fully functional gene using a vector, which transports therapeutic DNA into cells, allowing cells to produce AspA. Adeno-associated viruses (AAVs) are widely used as vectors for gene therapy. They are adopted as they do not replicate themselves and are almost non-toxic. Two serotypes are used for the treatment: AAV2 and AAV9. The difference between the serotypes is that AAV2 is limited by the blood–brain barrier (BBB), whilst AAV9 can cross the BBB, allowing for treatment even at the later stages of the disease. However, current research shows that AAVs may trigger unwanted immune responses in infants and have limited gene-encapsulating capacity.
Epidemiology
Spongy degeneration of the CNS is pan-ethnic, but it is especially prevalent among Ashkenazi Jews. There are two common mutations found among them: a missense mutation (Glu285Ala) and a nonsense mutation (Tyr231X). In the missense mutation, there is a substitution of glutamic acid by alanine. As for the nonsense mutation, the tyrosine codon is replaced by a termination codon. Genetic screening reveals that around 1 in 40 healthy Ashkenazi Jews are carriers, and the incidence of this disease in this population is as high as 1 in 6000.
History
The first case of spongy degeneration of the CNS was reported in 1928 by Globus and Strauss, who designated the case as Schilder's disease, a term for diffuse myelinoclastic sclerosis. In 1931, Canavan reported a case in which the megalencephaly of brain degeneration differed from that caused by a tumour. However, she failed to recognize the spongy alterations that suggest a unique pathological cause distinguishing her case from Schilder's disease. Later, in 1937, Eislebergl reported six cases from Jewish families and discovered the familial characteristics of spongy degeneration, but classified these cases as Krabbe's sclerosis. It was not until 1949 that Van Bogaert and Bertrand reported five cases from Jewish families, whereupon further pathological analysis confirmed that spongy degeneration is a distinct nosologic entity.
References
Neuroscience
Neurology
Central nervous system
Central nervous system disorders | Spongy degeneration of the central nervous system | [
"Biology"
] | 2,714 | [
"Neuroscience"
] |
67,253,209 | https://en.wikipedia.org/wiki/Abolition%20of%20time%20zones | Various proposals have been made to replace the system of time zones with Coordinated Universal Time (UTC) as a local time.
History
For most of history, the position of the sun was used for timekeeping. During the 19th century, most towns kept their own local time. The standardization of time zones started in 1884 in the US.
Proposals
Arthur C. Clarke proposed the use of a single time zone in 1976. Attempts to abolish time zones date back half a century and include Swatch Internet Time. Economics professor Steve Hanke and astrophysics professor Dick Henry at Johns Hopkins University have been proponents of the concept and have integrated it into their Hanke–Henry Permanent Calendar.
Usage
UTC as a universal time zone is already used by airline operators around the world and other international settings where time coordination is especially critical. This includes military operations, the US National Oceanic and Atmospheric Administration and the International Space Station. Within the United States, some have cited effective international use of UTC in certain industries as evidence that a permanent national time zone would work within the United States, a change the Secretary of Transportation would have the authority to make.
Advantages
The same time is used globally, which removes the requirement of calculations between different zones.
Possible health benefits, as people who live on the eastern side of a time zone are out of sync with their circadian rhythms.
Disadvantages
The date will change during daylight hours in parts of the Americas and Asia–Pacific.
Requires changes in linguistic terminology related to time.
Conceptually, time zones would still be in effect, as different regions would still carry out activities such as business hours, lunch, and school at different UTC times, essentially trading one system for an equivalent one.
For example, at 08:00 (8 AM), with UTC±0 as a worldwide standard, the sky in the Eastern United States would look how it normally does at 03:00 (3 AM), and in China would look how it does at 16:00 (4 PM). However, in the United Kingdom, the sky would look the exact same as it normally does at 08:00 (8 AM).
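As an illustration of this point (my own sketch; the zone names and date are examples), the standard library's time-zone database shows what local wall clocks read at the single shared instant of 08:00 UTC:

# Illustrative sketch: local wall-clock readings at 08:00 UTC.
# Requires Python 3.9+ for zoneinfo; zone names are examples.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

instant = datetime(2024, 1, 15, 8, 0, tzinfo=timezone.utc)  # 08:00 UTC

for zone in ("America/New_York", "Europe/London", "Asia/Shanghai"):
    local = instant.astimezone(ZoneInfo(zone))
    print(f"{zone:20s} {local:%H:%M (%Z)}")  # 03:00 EST, 08:00 GMT, 16:00 CST

Under a single worldwide UTC clock everyone would instead read 08:00 at that instant, but daily activities would still begin at locally different UTC hours.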
See also
Eastern Standard Tribe
Calendar reform
Time in China
References
Time zones
Time scales | Abolition of time zones | [
"Physics",
"Astronomy"
] | 445 | [
"Physical quantities",
"Time",
"Astronomical coordinate systems",
"Spacetime",
"Time scales"
] |
49,219,918 | https://en.wikipedia.org/wiki/HAAAP%20family | The Hydroxy/Aromatic Amino Acid Permease (HAAAP) Family (TC# 2.A.42) is a member of the large Amino Acid-Polyamine-OrganoCation (APC) Superfamily of secondary carrier proteins. Members of the HAAAP family all function in amino acid uptake. Homologues are present in many Gram-negative and Gram-positive bacteria, with at least one member classified from archaea (TC# 2.A.42.1.7, Thermococcus barophilus).
Structure
Proteins of the HAAAP family possess 403–443 amino acyl residues and exhibit eleven putative or established TransMembrane α-helical Spanners (TMSs). These proteins exhibit topological features common to the eukaryotic amino acid/auxin permease (AAAP) family (TC# 2.A.18). These proteins also exhibit limited sequence similarity with some of the AAAP family members. A phylogenetic relationship has been proposed between members of the HAAAP family and the APC family, since they exhibit limited sequence similarity with one another.
As of early 2016, no crystal structural data is available for members of the HAAAP family in RCSB.
Members
The HAAAP family includes three well-characterized aromatic amino acid/H+ symport permeases of E. coli and two hydroxy amino acid permeases. The aromatic amino acid/H+ symport permeases are:
Mtr, a high affinity tryptophan-specific permease (TC# 2.A.42.1.2).
TnaB, a low affinity tryptophan permease (TC# 2.A.42.1.3).
TyrP, a tyrosine-specific permease (TC# 2.A.42.1.1).
The two hydroxy amino acid permeases are:
SdaC, the serine permease (TC# 2.A.42.2.1), of E. coli. SdaC of E. coli (TC# 2.A.42.2.1) is also called DcrA and, together with the periplasmic protein DcrB (P37620), has been reported to play a role in phage DNA uptake in conjunction with outer membrane receptors of the OMR family (TC# 1.B.14).
TdcC, the threonine permease (TC# 2.A.42.2.2), of E. coli.
Thus, FhuA (TC# 1.B.14.1.4) transports phage T5 DNA while BtuB (TC# 1.B.14.3.1) transports phage C1 DNA (Samsonov et al., 2002). DcrB is a putative lipoprotein found only in enteric bacteria.
All members of the HAAAP family can be found in the Transporter Classification Database.
Transport Reaction
The generalized transport reaction catalyzed by proteins of the HAAAP family is:
Amino acid (out) + nH+ (out) → Amino acid (in) + nH+ (in).
References
Protein families
Membrane proteins
Transmembrane proteins
Transmembrane transporters
Transport proteins
Integral membrane proteins | HAAAP family | [
"Biology"
] | 695 | [
"Protein families",
"Protein classification",
"Membrane proteins"
] |
49,221,628 | https://en.wikipedia.org/wiki/SIAM%20Journal%20on%20Applied%20Mathematics | The SIAM Journal on Applied Mathematics is a peer-reviewed academic journal in applied mathematics published by the Society for Industrial and Applied Mathematics (SIAM), with Paul A. Martin (Colorado School of Mines) as its editor-in-chief. It was founded in 1953 as SIAM's first journal, the Journal of the Society for Industrial and Applied Mathematics, and was given its current name in 1966. In most years since 1999, it has been ranked by SCImago Journal Rank as a second-quartile journal in applied mathematics. Together with Communications on Pure and Applied Mathematics it has been called "one of the two greatest American entries in applied math".
References
Applied mathematics journals
SIAM academic journals
Academic journals established in 1953 | SIAM Journal on Applied Mathematics | [
"Mathematics"
] | 147 | [
"Applied mathematics",
"Applied mathematics journals"
] |
49,222,617 | https://en.wikipedia.org/wiki/VeRoLog | The European Working Group on Vehicle Routing and Logistics Optimization (also EWG VeRoLog, or simply VeRoLog) is a working group within EURO, the Association of European Operational Research Societies, whose objective is to promote the application of operations research models, methods and tools to the field of vehicle routing and logistics, and to encourage the exchange of information among practitioners, end-users, and researchers, stimulating the work on new and important problems with sound scientific methods.
History
VeRoLog is one of the working groups of EURO, the Association of European Operational Research Societies. The Group was founded in 2011 by Daniele Vigo, Marielle Christiansen, Angel Corberan, Wout Dullaert, Richard Eglese, Geir Hasle, Stefan Irnich, Frederic Semet and Maria Grazia Speranza.
Governance
The group is managed by a Coordinator and an Advisory Board including the founding members. The current coordinator is Daniele Vigo.
Membership
The group is suitable for people who are presently engaged in Vehicle Routing and Logistics, either in theoretical aspects or in business, industry or public administration applications. Currently (2015), the group has about 1,500 members from 67 countries.
Conferences
VeRoLog holds conferences on a regular basis (once a year during summer) and every year issues an award for the best doctoral dissertation on vehicle routing and logistics optimization.
Publications
In most cases, the annual conference is followed by a peer-reviewed special issue of an international journal, presenting a selection of the contributions presented at the meeting. Recent special issues have appeared in the European Journal of Operational Research and Computers and Operations Research.
A newsletter is emailed to all members every month.
References
Operations research
Working groups
Organizations established in 2011 | VeRoLog | [
"Mathematics"
] | 348 | [
"Applied mathematics",
"Operations research"
] |
49,223,016 | https://en.wikipedia.org/wiki/Electrolithoautotroph | An electrolithoautotroph is an organism which feeds on electricity. These organisms use electricity to convert carbon dioxide into organic matter, using electrons taken directly from solid inorganic electron donors. Electrolithoautotrophs are microorganisms found in the deep crevices of the ocean, where the warm, mineral-rich environment provides a rich source of nutrients. In laboratory studies, the electron source for carbon assimilation can be switched from diffusible Fe2+ ions to an electrode, under the condition that electrical current is the only source of energy and electrons. Electrolithoautotrophy forms a third metabolic pathway alongside photosynthesis (plants converting light into sugar) and chemosynthesis (bacteria converting chemical energy into food).
References
Biological processes
Electricity
Metabolism
Organisms by adaptation
Trophic ecology | Electrolithoautotroph | [
"Chemistry",
"Biology"
] | 163 | [
"Organisms by adaptation",
"Cellular processes",
"nan",
"Biochemistry",
"Metabolism"
] |
49,226,157 | https://en.wikipedia.org/wiki/Weak%20gravity%20conjecture | In theoretical physics, the weak gravity conjecture (WGC) is a conjecture regarding the strength gravity can have in a theory of quantum gravity relative to the gauge forces in that theory. It roughly states that gravity should be the weakest force in any consistent theory of quantum gravity. It was first proposed by Nima Arkani-Hamed, Luboš Motl, Alberto Nicolis, and Cumrun Vafa in 2007.
When gravity is compared to a U(1) gauge group interaction like electromagnetism, the mildest version of the weak gravity conjecture implies that there exists an object with electric charge q and mass m such that

q/m ≥ Q/M

where Q/M is the charge-to-mass ratio of an arbitrarily large extremal black hole.
The conjecture was originally motivated by the requirement that black holes should be able to decay, which calls for the existence of a state whose charge-to-mass ratio exceeds that of the black hole. Separately, the insensitivity of black hole evaporation to global charge suggests that black holes can violate global symmetries and global charge conservation.
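As a numerical illustration (my own, not from the article), the bound is satisfied enormously by the lightest charged particle, the electron, whose charge-to-mass ratio in Planck units dwarfs the extremal value of order one:

# Hypothetical numerical check: the electron's charge-to-mass ratio in
# Planck units versus the O(1) extremal black hole value. Constants in SI.
import math

e    = 1.602176634e-19   # elementary charge, C
m_e  = 9.1093837015e-31  # electron mass, kg
hbar = 1.054571817e-34   # reduced Planck constant, J s
c    = 2.99792458e8      # speed of light, m/s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

q_planck = math.sqrt(4 * math.pi * eps0 * hbar * c)  # Planck charge, C
m_planck = math.sqrt(hbar * c / G)                   # Planck mass, kg

q = e / q_planck    # ~0.0854 (square root of the fine-structure constant)
m = m_e / m_planck  # ~4.2e-23

print(f"q/m for the electron in Planck units: {q / m:.2e}")  # ~2e21 >> 1

In this sense gravity is indeed by far the weakest force acting on the electron.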
See also
Fundamental interaction
Swampland (physics)
References
Quantum gravity
String theory
Conjectures | Weak gravity conjecture | [
"Physics",
"Astronomy",
"Mathematics"
] | 209 | [
"Astronomical hypotheses",
"Unsolved problems in mathematics",
"Unsolved problems in physics",
"Quantum mechanics",
"Quantum gravity",
"Conjectures",
"String theory",
"Mathematical problems",
"Physics beyond the Standard Model",
"Quantum physics stubs"
] |
44,357,173 | https://en.wikipedia.org/wiki/Kaplan%E2%80%93Yorke%20conjecture | In applied mathematics, the Kaplan–Yorke conjecture concerns the dimension of an attractor, using Lyapunov exponents. By arranging the Lyapunov exponents in order from largest to smallest, $\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_n$, let j be the largest index for which

$$\sum_{i=1}^{j} \lambda_i \geq 0$$

and

$$\sum_{i=1}^{j+1} \lambda_i < 0.$$

Then the conjecture is that the dimension of the attractor is

$$D = j + \frac{\sum_{i=1}^{j} \lambda_i}{|\lambda_{j+1}|}.$$

This idea is used for the definition of the Lyapunov dimension.
Examples
Especially for chaotic systems, the Kaplan–Yorke conjecture is a useful tool in order to estimate the fractal dimension
and the Hausdorff dimension of the corresponding attractor.
The Hénon map with parameters a = 1.4 and b = 0.3 has the ordered Lyapunov exponents $\lambda_1 \approx 0.603$ and $\lambda_2 \approx -2.34$ (base-2 logarithms). In this case, we find j = 1 and the dimension formula reduces to

$$D = 1 + \frac{\lambda_1}{|\lambda_2|} = 1 + \frac{0.603}{2.34} \approx 1.26.$$
The Lorenz system shows chaotic behavior at the parameter values $\sigma = 16$, $\rho = 45.92$ and $\beta = 4.0$. The resulting Lyapunov exponents are {2.16, 0.00, −32.4}. Noting that j = 2, we find

$$D = 2 + \frac{2.16 + 0.00}{|-32.4|} \approx 2.07.$$
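The computation above is easy to mechanize. The following sketch (illustrative; the function name is my own) computes the Kaplan–Yorke dimension from a list of Lyapunov exponents and reproduces both examples:

# Illustrative sketch: Kaplan-Yorke (Lyapunov) dimension from a list of
# Lyapunov exponents, sorted from largest to smallest internally.
from itertools import accumulate

def kaplan_yorke_dimension(exponents):
    lam = sorted(exponents, reverse=True)
    partial = list(accumulate(lam))  # partial[k] = lam_1 + ... + lam_{k+1}
    # j = largest (1-based) index with a non-negative partial sum
    j = max((k + 1 for k, s in enumerate(partial) if s >= 0), default=0)
    if j == len(lam):
        raise ValueError("partial sums never become negative")
    return j + (partial[j - 1] / abs(lam[j]) if j > 0 else 0.0)

print(kaplan_yorke_dimension([0.603, -2.34]))       # Henon map: ~1.26
print(kaplan_yorke_dimension([2.16, 0.00, -32.4]))  # Lorenz system: ~2.07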
References
Dimension
Dynamical systems
Limit sets
Conjectures | Kaplan–Yorke conjecture | [
"Physics",
"Mathematics"
] | 219 | [
"Geometric measurement",
"Limit sets",
"Unsolved problems in mathematics",
"Physical quantities",
"Conjectures",
"Topology",
"Mechanics",
"Theory of relativity",
"Mathematical problems",
"Dimension",
"Dynamical systems"
] |
44,357,613 | https://en.wikipedia.org/wiki/K%C3%A4ll%C3%A9n%20function | The Källén function, also known as the triangle function, is a polynomial function in three variables, which appears in geometry and particle physics. In the latter field it is usually denoted by the symbol $\lambda$. It is named after the theoretical physicist Gunnar Källén, who introduced it as a short-hand in his textbook Elementary Particle Physics.
Definition
The function is given by a quadratic polynomial in three variables:

$$\lambda(x,y,z) \equiv x^2 + y^2 + z^2 - 2xy - 2yz - 2zx.$$
Applications
In geometry the function describes the area of a triangle with side lengths $a, b, c$:

$$A = \frac{1}{4}\sqrt{-\lambda(a^2, b^2, c^2)}.$$
See also Heron's formula.
The function appears naturally in the kinematics of relativistic particles, e.g. when expressing the energy and momentum components in the center of mass frame by Mandelstam variables.
Properties
The function is (obviously) symmetric in permutations of its arguments, as well as independent of a common sign flip of its arguments:

$$\lambda(x,y,z) = \lambda(-x,-y,-z).$$

If $y, z \geq 0$ the polynomial factorizes into two factors:

$$\lambda(x,y,z) = \left(x - \left(\sqrt{y} + \sqrt{z}\right)^2\right) \left(x - \left(\sqrt{y} - \sqrt{z}\right)^2\right).$$

If $x = a^2, y = b^2, z = c^2$ the polynomial factorizes into four factors:

$$\lambda(a^2, b^2, c^2) = (a+b+c)(a-b+c)(a+b-c)(a-b-c).$$

Its most condensed form is

$$\lambda(x,y,z) = (x-y-z)^2 - 4yz.$$

Interesting special cases are

$$\lambda(x,y,y) = x(x-4y), \qquad \lambda(x,x,x) = -3x^2.$$
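A minimal numerical sketch (not part of the original article; names and test values are illustrative) checking the definition, the Heron's-formula connection, and two of the identities above:

# Minimal numerical sketch of the Kallen (triangle) function.
import math

def kallen(x, y, z):
    return x*x + y*y + z*z - 2*(x*y + y*z + z*x)

a, b, c = 3.0, 4.0, 5.0  # a right triangle of area 6
area = 0.25 * math.sqrt(-kallen(a*a, b*b, c*c))
print(area)  # 6.0, matching Heron's formula

# four-factor form for squared arguments
lhs = kallen(a*a, b*b, c*c)
rhs = (a+b+c) * (a-b+c) * (a+b-c) * (a-b-c)
print(math.isclose(lhs, rhs))  # True

# condensed form
x, y, z = 1.7, 0.4, 2.9
print(math.isclose(kallen(x, y, z), (x - y - z)**2 - 4*y*z))  # True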
References
Kinematics (particle physics)
Triangles
Eponymous geometric shapes | Källén function | [
"Chemistry"
] | 208 | [
"Scattering",
"Kinematics (particle physics)"
] |
44,358,091 | https://en.wikipedia.org/wiki/IEC%2062682 | IEC 62682 is a technical standard titled Management of alarm systems for the process industries.
Scope
The standard specifies principles and processes for the management of alarm systems based on distributed control systems and computer-based Human-Machine Interface (HMI) technology for the process industries. It covers alarms from all systems presented to the operator, which can include basic process control systems, annunciator panels, safety instrumented systems, fire and gas systems, and emergency response systems. The practices are applicable to continuous, batch, and discrete processes. The process industry sector includes many types of manufacturing processes, such as refineries, petrochemical, chemical, pharmaceutical, pulp and paper, and power.
Standard
The standard addresses all lifecycle phases (development, design, installation, and operation) for alarm management in the process industries. The standard defines the terminology and work processes recommended to effectively maintain an alarm system throughout the lifecycle. The standard was written as an extension of the existing ISA 18.2-2009 standard which utilized numerous industry alarm management guidance documents in its development such as EEMUA 191. Ineffective alarm systems have often been cited as contributing factors in the investigation reports following major process incidents. The standard is intended to provide a methodology that will result in the improved safety of the process industries.
See also
Butterfleye
External links
IEC 62682
Alarm Network
Safety
Alarms
Industrial processes
Electrical standards | IEC 62682 | [
"Physics",
"Technology"
] | 282 | [
"Electrical standards",
"Electrical systems",
"Alarms",
"Physical systems",
"Warning systems"
] |
44,360,676 | https://en.wikipedia.org/wiki/Octotiamine | Octotiamine (INN, JAN; Gerostop, Neuvita, Neuvitan), also known as thioctothiamine, is an analogue of vitamin B1 which is used in Japan and Finland.
See also
Vitamin B1 analogue
References
Thiamine | Octotiamine | [
"Chemistry"
] | 58 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
47,448,696 | https://en.wikipedia.org/wiki/Transdermal%20drug%20delivery | Transdermal drug delivery techniques include:
Cream
DMSO solution
Hydrogel
Iontophoresis (using electricity)
Jet injector
Liniment
Lip balm
Liposomes
Lotion
Medicated shampoo
Ointment
Paste
Thin-film drug delivery
Topical cream
Topical gel
Transdermal patch
Transdermal spray
Transfersome vesicles
Routes of administration | Transdermal drug delivery | [
"Chemistry"
] | 73 | [
"Pharmacology",
"Routes of administration"
] |
47,449,480 | https://en.wikipedia.org/wiki/Human%E2%80%93animal%20hybrid | A human–animal hybrid (also called an animal–human hybrid) is an organism that incorporates elements from both humans and non-human animals. Technically, in a human–animal hybrid, each cell has both human and non-human genetic material. This is in contrast to an individual in which some cells are human and some are derived from a different organism, called a human–animal chimera. (A human chimera, on the other hand, consists only of human cells, from different zygotes.)
Examples of human–animal hybrids mainly include humanized mice that have been genetically modified by xenotransplantation of human genes. Humanized mice are commonly used as small animal models in biological and medical research for human therapeutics.
Human–animal hybrids are the subject of legal, moral, and technological debate in the context of recent advances in genetic engineering.
Human–animal hybrids have existed throughout social cultures for a long time (particularly in terms of mythology), being a part of storytelling across multiple continents, and have also been incorporated into comic books, films, video games, and other related mass media in recent decades.
Terminology
Defined by the magazine H+ as "genetic alterations that are blendings [sic] of animal and human forms", such hybrids may occasionally be referred to by other names, such as "para-humans". They may additionally be called "humanized animals". Technically speaking, they are also related to "cybrids" (cytoplasmic hybrids), with "cybrid" cells featuring foreign human nuclei inside of them being a topic of interest. Possibly, a real-world human–animal hybrid may be an entity formed from either a human egg fertilized by a non-human sperm or a non-human egg fertilized by a human sperm.
Examples
Artificially created human-animal hybrids include humanized mice that have been xenotransplanted with human gene products, so as to be utilized for gaining relevant insights in the in vivo context for understanding of human-specific physiology and pathologies. Humanized mice are commonly used as small animal models in biological and medical research for human therapeutics including infectious diseases and cancer. For example, genetically modified mice may be born with human leukocyte antigen genes in order to provide a more realistic environment when introducing human white blood cells into them in order to study immune system responses.
Moral discussions
Advances in genetic engineering have generally caused a large number of debates and discussions in the fields related to bioethics, including research relating to the creation of human–animal hybrids. Although the two topics are not strictly related, the debates involving the creation of human–animal hybrids have paralleled the debates around the stem-cell research controversy.
The question of what line exists between a "human" being and a "non-human" being has been a difficult one for many researchers to answer. While animals having one percent or less of their cells originally coming from humans may clearly appear to be in the same boat as other animals, no consensus exists on how to think about beings in a genetic middle ground that have something like an even mix. "I don't think anyone knows in terms of crude percentages how to differentiate between humans and nonhumans," U.S. patent office official John Doll has stated. Critics of increased government restrictions include scientists such as Dr. Douglas Kniss, head of the Laboratory of Perinatal Research at Ohio State University, who has remarked that formal laws aren't the best option since the "notion of animal-human hybrids is very complex." He has also argued that their creation is inherently "not the kind of thing we support" in his kind of research, since scientists should "want to respect human life".
In contrast, notable socio-economic theorist Jeremy Rifkin has expressed opposition to research that creates beings crossing species boundaries, arguing that it interferes with the fundamental 'right to exist' possessed by each animal species. "One doesn't have to be religious or into animal rights to think this doesn't make sense," he has argued when expressing support for anti-chimera and anti-hybrid legislation. As well, William Cheshire, associate professor of neurology at the Mayo Clinic's Florida branch, has called the issue "unexplored biologic territory" and advocated for a "moral threshold of human neural development" to restrict the destruction of a human embryo to obtain cell material and/or the creation of an organism that is partly human and partly animal. He has said, "We must be cautious not to violate the integrity of humanity or of animal life over which we have a stewardship responsibility".
Legality
While laws against the creation of hybrid beings have been proposed in U.S. states and in the U.S. Congress, several scientists have argued that legal barriers might go too far and prohibit medically beneficial studies into human modification.
In terms of scientific ethics, restrictions on the creation of human–animal hybrids have proved a controversial matter in multiple countries. While the state of Arizona banned the practice altogether in 2010, a proposal on the subject that sparked some interest in the United States Senate from 2011 to 2012 ended up going nowhere. Although the two concepts are not strictly related, discussions of experimentation into blended human and animal creatures have paralleled the discussions around embryonic stem-cell research (the 'stem cell controversy'). The creation of genetically modified organisms for a multitude of purposes has taken place in the modern world for decades, examples being specifically designed foodstuffs made to have features such as higher crop yields through better disease resistance.
President George W. Bush brought up the topic in his 2006 State of the Union Address, in which he called for the prohibition of "human cloning in all its forms", "creating or implanting embryos for experiments", "creating human-animal hybrids", and also "buying, selling, or patenting human embryos". He argued, "A hopeful society has institutions of science and medicine that do not cut ethical corners and that recognize the matchless value of every life." He also stated that humanity "should never be discarded, devalued or put up for sale."
A 2005 appropriations bill passed by the U.S. Congress and signed into law by President Bush contained specific wording forbidding any patents on humans or human embryos. In terms of outright bans on hybrid research in the first place, a measure came up in the 110th Congress entitled the Human-Animal Hybrid Prohibition Act of 2008. Congressman Chris Smith (R, NJ-4) introduced it on April 24, 2008. The text of the proposed act stated that "human dignity and the integrity of the human species are compromised" if such hybrids exist and set up the punishment of imprisonment for up to ten years as well as a fine of over one million dollars. Though attracting support from many co-sponsors such as then Representatives Mary Fallin, Duncan Hunter, Joseph R. Pitts, and Rick Renzi among others, the Act failed to get through Congress.
A related proposal had come up in the U.S. Senate the prior year, the Human-Animal Hybrid Prohibition Act of 2007, and it also had failed. That effort was proposed by then-Senator Sam Brownback (R, KS) on November 15, 2007. Featuring the same language as the later measure in the House, its bipartisan group of cosponsors included then Senators Tom Coburn, Jim DeMint, and Mary Landrieu.
A localized measure designed to ban the creation of hybrid entities came up in the state of Arizona in 2010. The proposal was signed into law by then Governor Jan Brewer. Its sponsor stated that it was needed to clarify important "ethical boundaries" in research.
In myth
For thousands of years, these hybrids have been one of the most common themes in storytelling about animals throughout the world. The lack of a strong divide between humanity and animal nature in multiple traditional and ancient cultures has provided the underlying historical context for the popularity of tales where humans and animals have mingling relationships, such as those in which one turns into the other or in which some mixed being goes through a journey. Interspecies friendships within the animal kingdom, as well as between humans and their pets, additionally provide an underlying root for the popularity of such beings.
In various mythologies throughout history, many particularly famous hybrids have existed, including as a part of Egyptian and Indian spirituality. The entities have also been characters in fictional media, such as H. G. Wells' work The Island of Doctor Moreau, adapted into the popular 1932 film Island of Lost Souls. In legendary terms, the hybrids have played varying roles from that of trickster and/or villain to serving as divine heroes in very different contexts, depending on the given culture.
Legendary historical and mythological human-animal hybrids
Beings displaying a mixture of human and animal traits, while also having a similarly blended appearance, have played a vast and varied role in multiple traditions around the world. Artist and scholar Pietro Gaietto has written that "representations of human-animal hybrids always have their origins in religion". In "successive traditions they may change in meaning but they still remain within spiritual culture", Gaietto has argued, looking back from an evolution-minded point of view. The beings show up in both Greek and Roman mythology, with various elements of ancient Egyptian society ebbing and flowing into those cultures in particular. Prominent examples in ancient Egyptian religion, featuring some of the earliest such hybrid beings, include the canine-like god of death known as Anubis and the lion-like Sphinx. Other instances of these types of characters include figures within both Chinese and Japanese mythology. The observation of interspecies friendships within the animal kingdom, as well as the bonds existing between humans and their pets, have been a source of the appeal in such stories.
A prominent hybrid figure that is internationally known is the mythological Greek figure of Pan. A deity that rules over and symbolizes the untamed wild, he helps express the inherent beauty of the natural world as the Greeks saw things. He received reverence specifically from ancient hunters, fishermen, shepherds, and other groups with a close connection to nature. Pan is a satyr who possesses the hindquarters, legs, and horns of a goat while otherwise being essentially human in appearance; stories of his encounters with different gods, humans, and others have been a part of popular culture in several different cultures for many years. The human–animal hybrid has appeared in acclaimed works of art by figures such as Francis Bacon, and has also been mentioned in poetic pieces such as John Fletcher's writings. Additional famous mythological hybrids include the Egyptian god of death, named Anubis, and the fox-like Japanese beings that are called Kitsune.
In Chinese mythology, the figure of Chu Pa-chieh undergoes a personal journey in which he gives up wickedness for virtue. After causing a disturbance in heaven with his licentious actions, he is exiled to Earth. By mistake, he enters the womb of a sow and ends up being born as a half-man/half-pig entity. With the head and ears of a pig coupled with a human body, his already animal-like sense of selfishness from his past life remains. Killing and eating his mother as well as devouring his brothers, he makes his way to a mountain hideout, spending his days preying on unwary travelers unlucky enough to cross his path. However, the exhortations of the kind goddess Kuan Yin, journeying in China, persuade him to seek a nobler path, and his life's journey on the side of goodness proceeds such that he is even ordained a priest by the goddess herself. Remarking on the character's role in the religious novel Journey to the West, where the being first appears, Professor Victor H. Mair has commented that "[p]ig-human hybrids represent descent and the grotesque, a capitulation to the basest appetites" rather than "self-improvement".
Several hybrid entities have long played a major role in Japanese media and in traditional beliefs within the country. For example, a warrior god known as Amida received worship as a part of Japanese mythology for many years; he possessed a generally humanoid appearance while having a canine-like head. However, the god's devotional popularity declined around the middle of the 19th century. A Tanuki resembles a raccoon dog, but its shape-shifting talents allow it to take human form for the purposes of trickery, such as impersonating Buddhist monks. The fox-like creatures known as Kitsune also possess similar powers, and stories abound of them tricking human men into marriage by turning into seductive women.
Other examples include characters in ancient Anatolia and Mesopotamia. The latter region has had the tradition of a malevolent human-animal hybrid deity in Pazuzu, a demon featuring a humanoid shape yet having grotesque features such as sharp talons. The character received revived attention when an interpretation of it appeared in William Peter Blatty's 1971 novel The Exorcist and the Academy Award-winning 1973 film adaptation of the same name, in which the demon possesses the body of an innocent young girl. The movie, regarded as one of the greatest horror films of all time, has a prologue in which co-protagonist Father Merrin (Max von Sydow) visits an archaeological dig in Iraq and ominously discovers an old statue of the monstrous being.
Theriocephaly studies
"Theriocephaly" (from Greek θηρίον therion 'beast' and κεφαλή kefalí 'head') is the anthropomorphic condition or quality of having the head of an animal with a body either mostly or entirely looking human – the term being commonly used to refer the depiction of deities or otherwise specially able individuals. An entity with such qualities is said to be "theriomorphous". Many of the gods and goddesses worshipped by the ancient Egyptians, for example, were commonly depicted as being theriocephalic. This phenomenon partly represented an intermediate step in a longer process of anthropomorphization of former animal deities (e.g. the goddess Hathor in her earliest form was depicted as a cow and in her latest manifestation as a woman with cows ears and sometimes a hairstyle resembling cows horns). But the form of depiction sometimes depended also on the aspects of a deity an artist wanted to accentuate (e.g. Ba, the aspect of personality of a human soul, was depicted as a bird with a humans head). This can also be seen in the different hieroglyphs that could be used to write the name of a single deity.
Other notable examples include:
Horus features the head of a falcon.
Anubis has a jackal's head.
Set, often depicted with the head of an unknown creature, is associated with a being referred to as the "Set animal" by Egyptologists.
Khonsu (god of the moon disc) is depicted as a man with a falcon's head or as a human child, both with a moon disc on top of the head.
Examples from other geographic areas include:
Cernunnos, a historic Celtic deity, has been adapted as the Horned God in Wicca tradition.
The Minotaur menaces people in Greek mythology.
In some Eastern Orthodox Church icon traditions, some saints, particularly St. Christopher, are depicted as having the head of a dog.
In Hinduism, Ganesha features an elephant head.
In Abenaki mythology, a part of the history of the indigenous peoples of the United States, the spirit Pamola was a being who possessed the head of a moose as well as the wings and taloned feet of an eagle.
In fiction
Many prominent pieces of children's literature over the past two centuries have featured humanized animal characters, often as protagonists in the stories. In the opinion of popular educator Lucy Sprague Mitchell, the appeal of such mythical and fantastic beings comes from how children desire "direct" language "told in terms of images— visual, auditory, tactile, muscle images". Another author has remarked that an "animal costume" provides "a way to emphasize or even exaggerate a particular characteristic".
The anthropomorphic characters in the seminal works by English writer Beatrix Potter in particular occupy an ambiguous position, having human dress yet displaying many instinctive animal traits. Writing on the popularity of Peter Rabbit, a later author commented that in "balancing humanized domesticity against wild rabbit foraging, Potter subverted parental authority and its built in hypocrisy" in Potter's child-centered books. Writer Lisa Fraustino has cited R.M. Lockley's tongue-in-cheek observation on the subject: "Rabbits are so human. Or is it the other way around— humans are so rabbit?"
Writer H. G. Wells created his famous work The Island of Doctor Moreau, featuring a mixture of horror and science fiction elements, to promote the anti-vivisection cause as a part of his long-time advocacy for animal rights. Wells' story describes a man stranded on an island ruled over by the titular Dr. Moreau, a morally depraved scientist who has created several human-animal hybrids, referred to as 'Beast Folk', through vivisection, in some cases combining parts of several animals. The story has been adapted into film several times, with varying success. The most acclaimed version is the 1932 black-and-white treatment called Island of Lost Souls. Wells himself wrote that "this story was the response of an imaginative mind to the reminder that humanity is but animal rough-hewn to a reasonable shape and in perpetual internal conflict between instinct and injunction," with the scandals surrounding Oscar Wilde being the impetus for the English writer's treatment of themes such as ethics and psychology. Challenging the Victorian-era viewpoints of its time, the 1896 work presents a complex situation in which enhancing animals into hybrids involves terrifying violence and pain and appears essentially futile, given the power of raw instinct. A pessimistic view of the ability of human civilization to live by law-abiding, moral standards for long thus follows.
The 1986 horror film The Fly features a deformed and monstrous human-animal hybrid, played by actor Jeff Goldblum. His character, scientist Seth Brundle, undergoes a teleportation experiment that goes awry and fuses him at a fundamental genetic level with a common fly caught beside him. Brundle experiences drastic mutations as a result that horrify him. Movie critic Gerardo Valero has written that the famous horror work, "released at the dawn of the AIDS epidemic", "was seen by many as a metaphor for the disease" while also playing on the bodily fears of dismemberment and coming apart that human beings inherently share.
The science fiction film Splice, released 2009, shows scientists mixing together human and animal DNA in the hopes of advancing medical research at the pharmaceutical company that they work at. Calamitous results occur when the hybrid named Dren is born.
The H. P. Lovecraft–inspired movie Dagon, released in 2001, additionally features grotesque hybrid beings. In terms of comic books, examples of fictional human-animal hybrids include the characters in Charles Burns' Black Hole series. In those comics, a set of teenagers in a 1970s era town become afflicted by a bizarre disease; the sexually transmitted affliction mutates them into monstrous forms.
Multiple video games have featured human-animal hybrids as enemies for the protagonist(s) to defeat, including powerful boss characters. For instance, the 2014 survival horror release The Evil Within includes grotesque, undead-looking hybrid beings attacking main character Detective Sebastian Castellanos. With partners Joseph Oda and Julie Kidman, the protagonist attempts to investigate a multiple homicide at a mental hospital yet discovers a mysterious figure who turns the world around them into a living nightmare, forcing Castellanos to uncover the truth about the criminal psychopath.
Heroic character examples of human-animal anthropomorphic characters include the two protagonists of the 2002 movie The Cat Returns (Japanese title: 猫の恩返し), with the animated film featuring a young girl (named "Haru") being transformed against her will into a feline-human hybrid and fighting a villainous king of the cats with the help of a dashing male cat companion (known as the "Baron") at her side.
On a more everyday life tone, featuring human-animal hybrids of mythological beings having common human experiences, A Centaur's Life, known in Japan as , is a Japanese slice of life comedy manga series by Kei Murayama. The series has been serialized in Tokuma Shoten's Monthly Comic Ryū magazine since February 2011, and is published in English by Seven Seas Entertainment. An anime television series adaptation by Haoliners Animation League aired in Japan from July to September 2017.
Within general U.S. popular culture and its various subcultures, the furry fandom consists of individuals interested in a variety of artistic materials, often featuring "furry art... [that] depicts a human-animal hybrid in everyday life". Specific people involved in creative media will frequently come up with a "fursona" depicting a version or versions of themselves as a hybrid creature. This practice functions as an outlet based on "personal ideas of self-expression" (self-realization).
See also
References
External links
"Chinese Human-animal Hybrid Embryo Experiments Have Been Interrupted" – Sina.com report
"The First Individual Animal-hybrid Embryos Are from China" – Xinhua News Agency report
Animals in folklore
Animals in mythology
Anthropomorphic animals
Fantasy creatures
Genetic engineering
Human–animal interaction
Science fiction themes
Transhumanism | Human–animal hybrid | [
"Chemistry",
"Technology",
"Engineering",
"Biology"
] | 4,503 | [
"Biological engineering",
"Animals",
"Genetic engineering",
"Transhumanism",
"Ethics of science and technology",
"Human–animal interaction",
"Humans and other species",
"Molecular biology"
] |
57,712,865 | https://en.wikipedia.org/wiki/Chemical%20looping%20reforming%20and%20gasification | Chemical looping reforming (CLR) and gasification (CLG) are the operations that involve the use of gaseous carbonaceous feedstock and solid carbonaceous feedstock, respectively, in their conversion to syngas in the chemical looping scheme. The typical gaseous carbonaceous feedstocks used are natural gas and reducing tail gas, while the typical solid carbonaceous feedstocks used are coal and biomass. The feedstocks are partially oxidized to generate syngas using metal oxide oxygen carriers as the oxidant. The reduced metal oxide is then oxidized in the regeneration step using air. The syngas is an important intermediate for generation of such diverse products as electricity, chemicals, hydrogen, and liquid fuels.
The motivation for developing the CLR and CLG processes lies in their ability to avoid the use of pure oxygen in the reaction, thereby circumventing the energy-intensive air separation step required in the conventional reforming and gasification processes. The energy conversion efficiency of the processes can thus be significantly increased. Steam and carbon dioxide can also be used as the oxidants. As the metal oxide also serves as the heat transfer medium in the chemical looping process, the exergy efficiency of the reforming and gasification processes, like that of the combustion process, is also higher than that of the conventional processes.
Description
The CLR and CLG processes use solid metal oxides as the oxygen carrier instead of pure oxygen as the oxidant. In one reactor, termed the reducer or fuel reactor, the carbonaceous feedstock is partially oxidized to syngas, while the metal oxide is reduced to a lower oxidation state as given by:
CHaOb + MeOx → CO + H2 + MeOx-δ
where Me is a metal. It is noted that the reaction in the reducer of the CLR and CLG processes differs from that in the chemical looping combustion (CLC) process in that the feedstock in the CLC process is fully oxidized to CO2 and H2O. In another reactor, termed the oxidizer, combustor or air reactor (when air is used as the regeneration agent), the reduced metal oxide from the reducer is re-oxidized by air or steam as given by:
MeOx-δ + O2 (air) → MeOx + (O2 depleted air)
MeOx-δ + H2O → MeOx + H2
The solid metal oxide oxygen carrier is then circulated between these two reactors. That is, the reducer and the oxidizer/combustor are connected in a solids circulatory loop, while the gaseous reactants and products from each of the two reactors are isolated by the gas seals between the reactors. This streamlined configuration gives the chemical looping system a process intensification property, with a smaller process footprint as compared to that of the traditional systems.
Oxygen carriers
The Ellingham diagram that provides the Gibbs free energy formation of a variety of metal oxides is widely used in metallurgical processing for determining the relative reduction-oxidation potentials of metal oxides at different temperatures. It depicts the thermodynamic property of a variety of metal oxides to be used as potential oxygen carrier materials. It can be modified to provide the Gibbs free energy changes for metals and metal oxides under various oxidation states so that it can be directly used for the selection of metal oxide oxygen carrier materials based on their oxidation capabilities for specific chemical looping applications. The modified Ellingham diagram is given in Fig 1a. As shown in Fig 1b, the diagram can be divided into four different sections based on the following four key reactions:
Reaction line 1: 2CO + O2 → 2CO2
Reaction line 2: 2H2 + O2 → 2H2O
Reaction line 3: 2C + O2 → 2CO
Reaction line 4: 2CH4 + O2 → 2CO + 4H2
The sections identified in Fig 1b provide information on the metal oxide materials that can be selected as potential oxygen carriers for desired chemical looping applications. Specifically, highly oxidative metal oxides, such as NiO, CoO, CuO, Fe2O3 and Fe3O4, belong to the combustion section (Section A) and all lie above the reaction lines 1 and 2. These metal oxides have a high oxidizing tendency and can be used as oxygen carriers for the chemical looping combustion, gasification or partial oxidation processes. The metal oxides in Section E, the small section between the reaction lines 1 and 2, can be used for CLR and CLG, although a significant amount of H2O may be present in the syngas product. The section for syngas production lies between reaction lines 2 and 3 (Section B). Metal oxides lying in this region, such as CeO2, have moderate oxidation tendencies and are suitable for CLR and CLG but not for the complete oxidation reactions. Metal oxides below reaction line 3 (Sections C and D) are not thermodynamically favored for oxidizing the fuels to syngas. Thus, they cannot be used as oxygen carriers and are generally considered to be inert. These materials include Cr2O3 and SiO2. They can, however, be used as support materials along with active oxygen carrier materials. In addition to the relative redox potentials of the metal oxide materials illustrated in Fig 1b, the development of desired oxygen carriers for chemical looping applications requires consideration of such properties as oxygen carrying capacity, redox reactivity, reaction kinetics, recyclability, attrition resistance, heat carrying capacity, melting point, and production cost.
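The section logic just described can be captured in a few lines of code. The sketch below is a minimal illustration written for this description (it is not part of any cited work): it classifies a candidate metal oxide from the relative positions of its Gibbs free energy line and the three reference reaction lines at one temperature. All ΔG values, expressed per mole of O2 at the same temperature, must be supplied by the user; no thermodynamic data are hard-coded.

```python
def ellingham_section(dG_oxide, dG_line1, dG_line2, dG_line3):
    """Classify a metal oxide redox couple by its position on the
    modified Ellingham diagram.

    All arguments are Gibbs free energies per mole of O2 at the same
    temperature; a larger (less negative) value means 'above' on the
    diagram. Returns the section described in the text:
      'A'     - above lines 1 and 2: full combustion (CLC) carriers
      'E'     - between lines 1 and 2: CLR/CLG, with significant H2O
      'B'     - between lines 2 and 3: syngas production (CLR/CLG)
      'inert' - below line 3: support material only
    """
    upper, lower = max(dG_line1, dG_line2), min(dG_line1, dG_line2)
    if dG_oxide > upper:
        return "A"
    if dG_oxide > lower:
        return "E"
    if dG_oxide > dG_line3:
        return "B"
    return "inert"
```

A value lying above both the CO and H2 combustion lines is thus classified into Section A, mirroring carriers such as NiO or CuO in the text.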
Process configurations
The CLR and CLG processes can be configured based on the types of carbonaceous feedstocks given and desired products to be produced. Among a broad range of products, the CLG process can produce electricity through chemical looping IGCC. The syngas produced from the CLR and the CLG can be used to synthesize a variety of chemicals, liquid fuels and hydrogen. Given below are some specific examples of the CLR and CLG processes.
Steam methane reforming with chemical looping combustion (CLC-SMR)
Hydrogen and syngas are currently produced largely by steam methane reforming (SMR). The main reaction in SMR is:
CH4 + H2O → CO + 3H2
Steam can be further used to convert CO to H2 via the water-gas shift reaction (WGS):
H2O + CO → CO2 + H2
The SMR reaction is endothermic, which requires heat input. The state-of-art SMR system places the tubular catalytic reactors in a furnace, in which fuel gas is burned to provide the required heat.
In the SMR with chemical looping combustion (CLC-SMR) concepts shown in Fig 2, the syngas production is carried out by the SMR in a tubular catalytic reactor while the chemical looping combustion system is used to provide the heat for the catalytic reaction. Depending on which chemical looping reactor is used to provide the SMR reaction heat, two CLC-SMR schemes can be configured. In Scheme 1 (Fig 2a), the reaction heat is provided by the reducer (fuel reactor). In Scheme 2 (Fig 2b), the reaction heat is provided by the combustor (air reactor). In either scheme, the combustion of the metal oxide by air in the chemical looping system provides the heat source that sustains the endothermic SMR reactions. In the chemical looping system, natural gas and the recycled off-gas from the pressure swing adsorption (PSA) of the SMR process system are used as the feedstock for the CLC fuel reactor operation, with CO2 and steam as the reaction products. The CLC-SMR concepts have mainly been studied from the perspective of process simulation. It is seen that neither scheme engages the chemical looping system directly as a means for syngas production.
Chemical looping reforming (CLR)
Chemical looping systems can directly be engaged as an effective means for syngas production. Compared to the conventional partial oxidation (POX) or autothermal reforming (ATR) processes, the key advantage of the chemical looping reforming (CLR) process is the elimination of the air separation unit (ASU) for oxygen production. The gaseous fuel, typically natural gas, is fed to the fuel reactor, in which a solid metal oxide oxygen carrier partially oxidizes the fuel to generate syngas:
CH4 + MeOx → CO + 2H2 + MeOx-δ
Steam can be added to the reaction in order to increase the generation of H2, via the water-gas shift reaction (WGS) and/or steam methane reforming.
The CLR process can produce a syngas with a H2:CO molar ratio of 2:1 or higher, which is suitable for Fischer–Tropsch synthesis, methanol synthesis, or hydrogen production. The reduced oxygen carrier from the reducer is oxidized by air in the combustor:
MeOx-δ + O2 (air) → MeOx
The overall reaction in the CLR system is a combination of the partial oxidation reaction of the fuel and the WGS reaction:
CH4 + (1-a)/2 O2 + a H2O → CO + (2+a) H2
It is noted that the actual reaction products for such reactions as those given above can vary depending on the actual operating conditions. For example, the CLR reactions can also produce CO2 when highly oxidative oxygen carriers such as NiO and Fe2O3 are used. Carbon deposition occurs particularly when the oxygen carrier is highly reduced. Reduced oxygen carrier species, such as Ni and Fe, catalyze the hydrocarbon pyrolysis reactions.
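To make the effect of the steam co-feed concrete, the following sketch evaluates the ideal stoichiometry of the overall CLR reaction above for a few values of a. It assumes complete conversion and ignores the side reactions just mentioned (full oxidation to CO2, carbon deposition), so the numbers are upper bounds rather than predictions of an actual reactor.

```python
def clr_syngas(a):
    """Ideal syngas from CH4 + (1-a)/2 O2 + a H2O -> CO + (2+a) H2.

    Returns product moles per mole of CH4 converted, assuming complete
    conversion and no side reactions (no CO2, no carbon deposition).
    """
    products = {"CO": 1.0, "H2": 2.0 + a}
    products["H2:CO"] = products["H2"] / products["CO"]
    # Oxygen supplied by the lattice oxygen of the metal oxide carrier.
    products["O2 (lattice)"] = (1.0 - a) / 2.0
    return products

for a in (0.0, 0.5, 1.0):
    print(f"a = {a}: {clr_syngas(a)}")
```

The H2:CO ratio is simply 2 + a, consistent with the statement above that the CLR process can reach ratios of 2:1 or higher.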
Fig 3 shows a CLR system that has been studied experimentally by Vienna University of Technology. The system consists of a fluidized bed reducer and a fluidized bed combustor, connected by loop seals and cyclones. Commonly used oxygen carriers are based on NiO or Fe2O3. The NiO-based oxygen carriers exhibit excellent reactivity, as shown by the high conversion of natural gas. The Fe2O3-based oxygen carriers have a lower material cost, while their reactivity is lower than that of the NiO-based ones. Operating variables such as temperature, pressure, type of metal oxide, and molar ratio of metal oxide to gaseous fuel will influence the fuel conversion and product compositions. However, because of the back-mixing and distributed residence time of the metal oxide particles in the fluidized bed, the oxidation state of the metal oxide particles in the bed varies, which prevents a high-purity syngas from being produced from the reactor.
The moving bed reactor, which does not suffer from back-mixing of the metal oxide particles, is another gas-solid contact configuration for CLR/CLG operation. This reactor system developed by Ohio State University is characterized by a co-current gas-solid moving bed reducer as given in Fig 4. The moving bed reducer can maintain a uniform oxidation state of the exit metal oxide particles from the reactor, thereby synchronizing the process operation to achieve the thermodynamic equilibrium conditions. The CLR moving bed process applied to the methane to syngas (MTS) reactions has the flexibility of co-feeding CO2 as a feedstock with such gaseous fuels as natural gas, shale gas, and reducing tail gases, yielding a CO2-negative process system. The CLR-MTS system can yield a higher energy efficiency and cost benefits over the conventional syngas technologies. In a benchmark study for production of 50,000 barrels per day of liquid fuels using natural gas as the feedstock, the CLR-MTS system for syngas production can reduce the natural gas usage by 20% over the conventional systems involving the Fischer–Tropsch technology.
Chemical looping gasification (CLG)
Chemical looping gasification (CLG) differs from CLR in that it uses solid fuels such as coal and biomass instead of gaseous fuels as feedstocks. The operating principles of CLG are similar to those of CLR. For solid feedstocks, devolatilization and pyrolysis of the solid fuel occur when the solid fuels are introduced into the reducer and mixed with the oxygen carrier particles. With the fluidized bed reducer, the released volatiles, including light organic compounds and tars, may channel through the reducer and exit with the syngas. The light organic compounds may reduce the purity of the syngas, while the tars may accumulate in downstream pipelines and instruments. For example, the carbon efficiency using the coal CLG fluidized bed reducer may vary from 55% to 81%, whereas the carbon efficiency using the coal moving bed reducer can reach 85% to 98%. The syngas derived from the biomass CLG fluidized bed reducer may consist of up to 15% methane, while the syngas derived from the biomass CLG moving bed reducer can reach a methane concentration of less than 5%. In general, increasing the temperature of the CLG system can promote volatile and char conversion. This may also promote the full oxidation side reaction, resulting in an increased CO2 concentration in the syngas. Additional equipment for gas cleanup, including a scrubber, catalytic steam reformer and/or tar reformer, may be necessary downstream of the CLG system in order to remove or convert the unwanted byproducts in the syngas stream. Char, the solid remaining from the devolatilization and reactions, requires additional time for conversion. For a fluidized bed reducer with particle back-mixing, unconverted char may leave the reducer with the reduced metal oxide particles. A carbon stripper may be needed at the solid outlet of the fluidized bed reducer to allow the unconverted char to be separated from the oxygen carriers. The char can be recycled back to the reducer for further conversion.
In an operating scheme similar to the CLR-MTS system given in Fig 4, chemical looping gasification (CLG) of solid fuels carried out in a co-current moving bed reducer to partially oxidize solid fuels into syngas can reach an appropriate H2/CO ratio for downstream processing. Coal ash is removed through an in-situ gas-solid separation operation. The moving bed prevents the channeling or bypassing of the volatiles and chars, thereby maximizing the conversion of the solid fuel. The full oxidation side reactions can be impeded through control of the oxidation state formed for the oxygen carriers in the moving bed reactor. The chemical looping moving bed process applied to the coal to syngas (CTS) reactions also has the flexibility of co-feeding CO2 as a feedstock with coal, yielding a CO2-negative process system with a high purity of syngas production. In a benchmark study for production of 10,000 ton/day of methanol from coal, the upstream gasification capital cost can be reduced by 50% when the chemical looping moving bed gasification system is used.
Broader context
In a general sense, the CLR and CLG processes for syngas production are part of the chemical looping partial oxidation or selective oxidation reaction schemes. The syngas production can lead to hydrogen production from the downstream water-gas shift reaction. The CLG process can also be applied to electricity generation, resembling IGCC based on the syngas generated from the chemical looping processes. The chemical looping three-reactor system (comprising reducer, oxidizer and combustor), using a moving bed reducer for metal oxide reduction by fuel followed by a moving bed oxidizer for water splitting to produce hydrogen, is given in Fig 5. For coal-based feedstock applications, this system is estimated to reduce the cost of electricity generation by 5-15% as compared to conventional systems.
The selective oxidation based chemical looping processes can be used to produce directly in one step value-added products beyond syngas. These chemical looping processes require the use of designed metal oxide oxygen carrier that has a high product selectivity and a high feedstock conversion. An example is the chemical looping selective oxidation process developed by DuPont for producing maleic anhydride from butane. The oxygen carrier used in this process is vanadium phosphorus oxide (VPO) based material. This chemical looping process was advanced to the commercial level. Its commercial operation, however, was hampered in part by the inadequacies in the chemical and mechanical viability of the oxygen carrier VPO and its associated effects on the reaction kinetics of the particles.
Chemical looping selective oxidation was also applied to the production of olefins from methane. In chemical looping oxidative coupling of methane (OCM), the oxygen carrier selectively converts methane into ethylene.
References
looping reforming and gasification
Chemical process engineering
Industrial gases
Synthetic fuel technologies | Chemical looping reforming and gasification | [
"Chemistry",
"Engineering"
] | 3,496 | [
"Chemical engineering",
"Petroleum technology",
"Chemical processes",
"Industrial gases",
"Synthetic fuel technologies",
"nan",
"Chemical process engineering"
] |
57,712,889 | https://en.wikipedia.org/wiki/Self-adaptive%20mechanisms | Self-adaptive mechanisms, sometimes simply called adaptive mechanisms, in engineering, are underactuated mechanisms that can adapt to their environment. One of the most well-known example of this type of mechanisms are underactuated fingers, grippers, and robotic hands. Contrary to standard underactuated mechanisms where the motion is governed by the dynamics of the system, the motion of self-adaptive mechanisms is generally constrained by compliant elements cleverly located in the mechanisms.
Definition
Underactuated mechanisms have a lower number of actuators than degrees of freedom (DOF). In a two-dimensional plane, a mechanism can have up to three DOF (two translations, one rotation), and in three-dimensional Euclidean space, up to six (three translations, three rotations). In the case of self-adaptive mechanisms, the lack of actuators is compensated by passive elements that constrain the motion of the system. Springs are a good example of such elements, but others can be used depending on the type of mechanism.
One of the earliest examples of a self-adaptive mechanism is the flapping wing proposed by Leonardo da Vinci in the Codex Atlanticus.
Underactuated hands
The first commonly known underactuated finger was the Soft-Gripper designed by Shigeo Hirose in the late 1970s. The most common type of transmission mechanisms used in self-adaptive hands are linkages and tendons.
Kinetostatics
Underactuated fingers and hands are usually analyzed with respect to their kinetostatics (negligible kinetic energy, static analysis of a mechanism in motion) rather than the dynamics of the system, as the kinetic energy of these systems is generally negligible compared to the potential energy stored into the passive elements. The forces applied by each phalanx of an underactuated finger can be computed with the following expression:
$\mathbf{F} = \mathbf{J}^{-T}\,\mathbf{T}^{*T}\,\mathbf{t}$

where F is the vector of the forces applied by the phalanges, J is the Jacobian matrix of the finger, T* is the transmission matrix, and t is the torque vector (actuator and passive elements). This relation expresses the quasi-static virtual-work balance $\mathbf{J}^{T}\mathbf{F} = \mathbf{T}^{*T}\mathbf{t}$ between the contact forces and the input torques.
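As a numerical illustration of this relation, the following Python sketch computes the contact forces of a planar two-phalanx underactuated finger. The Jacobian and transmission matrices below are simplified, hypothetical forms chosen for the example (they do not describe a specific published finger design), so the resulting numbers are purely illustrative.

```python
import numpy as np

def phalanx_forces(J, T_star, t):
    """Solve the quasi-static balance J^T F = T*^T t for the contact forces F."""
    return np.linalg.solve(J.T, T_star.T @ t)

# Hypothetical two-phalanx finger: contact at the middle of each phalanx.
l1, l2 = 0.05, 0.04        # phalanx lengths in meters (illustrative values)
k1, k2 = l1 / 2, l2 / 2    # contact locations along each phalanx

# Simplified Jacobian mapping joint rates to normal contact-point velocities.
J = np.array([[k1,      0.0],
              [l1 + k2, k2 ]])

# Simplified transmission matrix: one actuator driving both joints
# plus a passive return spring acting on the distal joint.
T_star = np.array([[1.0, 0.0],
                   [1.0, 1.0]])

# Input torques in N*m: actuator torque, then spring torque.
t = np.array([0.5, -0.05])

F = phalanx_forces(J, T_star, t)
print(F)  # contact forces on phalanges 1 and 2, in newtons
```

A negative entry in F would indicate a contact that cannot be sustained, which is related to the ejection phenomena studied for underactuated grasps.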
Applications
A self-adaptive robotic hand, SARAH (Self-Adaptive Robot Auxiliary Hand), was designed and built to be part of Dextre's toolbox. Dextre is a robotic telemanipulator that resides at the end of CANADARM-2 on the International Space Station. The Yale OpenHand is an example of open source self-adaptive mechanisms that can be found online. Some companies are also selling self-adaptive hands for industrial purposes. Prosthetics is another application for self-adaptive hands. One known example is the SPRING (Self-Adaptive Prosthesis for Restoring Natural Grasping) hand.
Other examples
Self-adaptive mechanisms can be used for other applications, such as walking robots.
Compliant mechanisms are another example of self-adaptive mechanisms, where the passive elements and the transmission mechanism are a single monolithic block.
References
Mechanical engineering | Self-adaptive mechanisms | [
"Physics",
"Engineering"
] | 594 | [
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
57,717,208 | https://en.wikipedia.org/wiki/Gannet%20oil%20and%20gas%20field | Gannet is an oil and gas field located in the United Kingdom's continental shelf in the North Sea. It is east of Aberdeen, and the water depth at the Gannet offshore installation is . The field is located in Blocks 22/21, 22/25, 22/26 and 21/30. It is half-owned by Royal Dutch Shell (50%) and partly by ExxonMobil (50%) and has been operated by Shell UK Ltd since ‘first oil’ in November 1993. The Gannet A installation is the host platform for subsea tiebacks designated Gannet B to G. Like most Shell fields in the central and northern North Sea the field is named after a sea bird the gannet.
The Gannet reservoirs
The Gannet reservoirs are located at a depth of between and extend over several blocks. They comprise good-quality turbiditic sands of Tertiary age (Tay, Rogaland, Forties, Lista and Andrew Formations) and were discovered in 1973. The formation comprises a mixture of hydrocarbon reservoirs.
{| class="wikitable"
|+Gannet oil and gas reservoirs
!
!Location from Gannet A installation
!Reservoir type
!API gravity
!Reserves
!Wells
|-
|Gannet A
|––
|Oil rim with a gas cap
|40°
|68 million barrels of oil and condensate; 411 billion cubic feet of gas
|11 wells
|-
|Gannet B
|5 km north-west
|Gas/condensate
|
|3 million barrels of condensate; 139 billion cubic feet of gas
|2 subsea wells
|-
|Gannet C
|5 km south-west
|Oil rim with a gas cap
|40°
|60 million barrels of oil; 140 billion cubic feet of gas
|6 horizontal oil wells plus 2 gas production wells
|-
|Gannet D
|16 km north-east
|Unsaturated oil without a gas cap
|40°
|31 million barrels of oil; 30 billion cubic feet of gas
|5 subsea wells
|-
|Gannet E
|14 km
|Unsaturated heavy oil without a gas cap
|20°
|23 million barrels of heavy crude
|Subsea well with electric submersible pump
|-
|Gannet F
|11 km
|Unsaturated oil without a gas cap
|40°
|19 million barrels of oil
|
|-
|Gannet G
|
|Unsaturated oil without a gas cap
|40°
|13 million barrels of oil
|2 subsea wells
|}
Design
The topsides for Gannet were designed by Matthew Hall Engineering which was also responsible for procurement and construction and commissioning assistance. They were awarded the contract in July 1989. Initially there were facilities for 15 oil production wells, two gas production wells and seven spare slots; there was also provision for 39 subsea risers. The production capacity was 56,000 barrels of oil per day and 4.0 million standard cubic metres of gas per day. There are four production trains for both oil and gas processing. Electricity generation is powered by two 21-megawatt Rolls-Royce RB211 gas turbines. The topside accommodation was for 40 people. The integrated deck topsides (9600 tonnes) are supported by a four leg steel jacket (lift weight 8400 tonnes). The topsides were fabricated by Redpath Offshore North; and the jacket by RGC Offshore at Methil.
Operation
Oil from the platform wellheads and subsea wells is routed to one of the three horizontal first stage 3-phase separators. Oil from the separators is combined and fed to common second, third and fourth stage 3-phase separators. Processed oil is exported by pipeline to the Fulmar A installation and thence via Norpipe to Teesside. Gas from the second, third and fourth stage separators is compressed to the operating pressure of the first stage separators, with which it is combined and further compressed. Gas from the Gannet A gas wellheads and from the Gannet B and C subsea wells flows to one of the two vertical separators. Gas co-mingled with the off-gas from the oil separators is dehydrated through counter-current contact with triethylene glycol. Gas is compressed for export to St Fergus via the Fulmar gas pipeline. There are also facilities for gas injection into the Gannet A, B and C reservoirs and for gas lift to oil production wells on reservoirs A, C, D, E, F and G. Produced water is treated prior to overboard disposal.
Oil export capacity is 88,000 barrels per day. Gas compression and dehydration capacity is 246 million standard cubic feet per day. Gannet A has a gas lift capacity of 130 million standard cubic feet per day and a produced water handling capability of 60,000 barrels per day. Shell have indicated that there is greater than 25% ullage of the total system capability available in the plant for additional third party processing and transportation. Peak production was: Gannet A 1 million tonnes per year (1999); Gannet B 0.8 billion cubic metres per year (1996); Gannet C 1.6 million tonnes per year (1993); Gannet D 0.5 million tonnes per year (1994).
East of Gannet and Montrose Fields MPA(NC)
In 2014 of sea to the east of Gannet and the neighbouring Montrose oil field was declared a Nature Conservation Marine Protected Area under the title East of Gannet and Montrose Fields MPA(NC). The sands and gravels that form most of the seabed within the MPA are the preferred habitat for ocean quahog, which bury themselves deep into the sand to escape predation. When buried, ocean quahogs can survive long periods of time without food or oxygen, and are among the longest-living creatures on Earth, having a lifespan of more than 400 years.
The MPA also includes a band of offshore deep-sea mud, which forms a habitat for many species of worm and mollusc that live buried in the mud.
See also
Energy policy of the United Kingdom
Energy use and conservation in the United Kingdom
References
North Sea energy
Natural gas platforms
Oil and gas industry in Scotland
Oil fields of Scotland
Nature Conservation Marine Protected Areas of Scotland
Shell plc oil and gas fields
ExxonMobil oil and gas fields | Gannet oil and gas field | [
"Engineering"
] | 1,297 | [
"Structural engineering",
"Natural gas platforms"
] |
57,718,310 | https://en.wikipedia.org/wiki/Hymatic | Hymatic, also known as Hymatic Engineering, are a British manufacturer of heat exchangers, fluid control technology and cryogenic systems as part of an aircraft's environmental control system (ECS), headquartered in Worcestershire.
History
The company was founded on 27 September 1937.
It began making air compressors (pneumatics), anti-g valves, pressure reducing valves, stop valves and fuel system relief valves. Most well-known British aircraft in the 1950s and 1960s contained their valves and pneumatic equipment.
It developed the fuel system for Concorde. Concorde carried around 22,000 gallons of fuel. Concorde's fuel system had to overcome boiling of fuel at high altitudes.
In 2002, the company had a turnover of £21.7m.
Research
It has worked with the Cryogenic Engineering Group at the University of Oxford, in making linear compressors.
Ownership
The company was bought by Honeywell in February 2004, with the case referred to the Office of Fair Trading (OFT).
Structure
It is situated on the Moon's Moat Industrial Estate in Redditch. The company is registered with the British Cryogenics Council.
Products
Anti-ice valves
Cryocoolers for infrared sensors
Stored energy systems
References
External links
Grace's Guide
Cryogenic Cooling Solutions
Aircraft component manufacturers of the United Kingdom
British companies established in 1937
Companies based in Redditch
Cryogenics
Honeywell
Manufacturing companies established in 1937
Science and technology in Worcestershire
Valve manufacturers | Hymatic | [
"Physics"
] | 299 | [
"Applied and interdisciplinary physics",
"Cryogenics"
] |
61,992,323 | https://en.wikipedia.org/wiki/Chandrasekhar%E2%80%93Page%20equations | Chandrasekhar–Page equations describe the wave function of the spin-1/2 massive particles, that resulted by seeking a separable solution to the Dirac equation in Kerr metric or Kerr–Newman metric. In 1976, Subrahmanyan Chandrasekhar showed that a separable solution can be obtained from the Dirac equation in Kerr metric. Later, Don Page extended this work to Kerr–Newman metric, that is applicable to charged black holes. In his paper, Page notices that N. Toop also derived his results independently, as informed to him by Chandrasekhar.
By assuming a normal mode decomposition of the form $e^{i(\sigma t + m\phi)}$ (with $m$ being a half integer and with the convention $\sigma > 0$) for the time and the azimuthal component of the spherical polar coordinates $(r,\theta,\phi)$, Chandrasekhar showed that the four bispinor components of the wave function, $f_1$, $f_2$, $g_1$ and $g_2$,
can be expressed as products of radial and angular functions. The separation of variables is effected for these functions (with $a$ being the angular momentum per unit mass of the black hole) as in

$f_1 = R_{-1/2}(r)\,S_{-1/2}(\theta),\quad f_2 = R_{+1/2}(r)\,S_{+1/2}(\theta),\quad g_1 = R_{+1/2}(r)\,S_{-1/2}(\theta),\quad g_2 = R_{-1/2}(r)\,S_{+1/2}(\theta).$
Chandrasekhar–Page angular equations
The angular functions satisfy the coupled first-order eigenvalue equations

$\mathcal{L}_{1/2}S_{+1/2} = -(\lambda - a\mu\cos\theta)\,S_{-1/2},\qquad \mathcal{L}_{1/2}^{\dagger}S_{-1/2} = +(\lambda + a\mu\cos\theta)\,S_{+1/2},$

where $\mu$ is the particle's rest mass (measured in units so that it is the inverse of the Compton wavelength),

$\mathcal{L}_{1/2} = \frac{d}{d\theta} + Q + \frac{1}{2}\cot\theta,\qquad \mathcal{L}_{1/2}^{\dagger} = \frac{d}{d\theta} - Q + \frac{1}{2}\cot\theta,$

and $Q = a\sigma\sin\theta + m\csc\theta$. Eliminating $S_{-1/2}$ between the foregoing two equations, one obtains

$\left[\mathcal{L}_{1/2}^{\dagger}\mathcal{L}_{1/2} - \frac{a\mu\sin\theta}{\lambda - a\mu\cos\theta}\,\mathcal{L}_{1/2} + \lambda^{2} - a^{2}\mu^{2}\cos^{2}\theta\right]S_{+1/2} = 0.$
The function $S_{-1/2}$ satisfies the adjoint equation, which can be obtained from the above equation by replacing $\theta$ with $\pi - \theta$. The boundary conditions for these second-order differential equations are that $S_{+1/2}$ (and $S_{-1/2}$) be regular at $\theta = 0$ and $\theta = \pi$. The eigenvalue problem presented here in general requires numerical integration to be solved. Explicit solutions are available for the case where $\sigma = \mu$.
Chandrasekhar–Page radial equations
The corresponding radial equations are given by

$\Delta^{1/2}\mathcal{D}_{0}R_{-1/2} = (\lambda + i\mu r)\,\Delta^{1/2}R_{+1/2},\qquad \Delta^{1/2}\mathcal{D}_{0}^{\dagger}\Delta^{1/2}R_{+1/2} = (\lambda - i\mu r)\,R_{-1/2},$

where $M$ is the black hole mass, $\Delta = r^{2} - 2Mr + a^{2}$ is the horizon function,

$\mathcal{D}_{0} = \frac{d}{dr} + \frac{iK}{\Delta},\qquad \mathcal{D}_{0}^{\dagger} = \frac{d}{dr} - \frac{iK}{\Delta},$

and $K = (r^{2}+a^{2})\sigma + am$. Eliminating $R_{-1/2}$ from the two equations, we obtain

$\left[\Delta^{1/2}\mathcal{D}_{0}\Delta^{1/2}\mathcal{D}_{0}^{\dagger} + \frac{i\mu\Delta}{\lambda - i\mu r}\,\mathcal{D}_{0}^{\dagger} - (\lambda^{2} + \mu^{2}r^{2})\right]\Delta^{1/2}R_{+1/2} = 0.$
The function $R_{-1/2}$ satisfies the corresponding complex-conjugate equation.
Reduction to one-dimensional scattering problem
The problem of solving the radial functions for a particular eigenvalue $\lambda$ of the angular functions can be reduced to a problem of reflection and transmission, as for the one-dimensional Schrödinger equation; see also Regge–Wheeler–Zerilli equations. In particular, we end up with the equations

$\left(\frac{d^{2}}{dr_{*}^{2}} + \sigma^{2}\right)Z_{\pm} = V_{\pm}Z_{\pm},$
where the Chandrasekhar–Page potentials $V_{\pm}$ are determined by $\lambda$, $\mu$ and the metric functions, $r_{*}$ is the tortoise coordinate defined by $dr_{*}/dr = \varpi^{2}/\Delta$ with $\varpi^{2} = r^{2} + a^{2} + am/\sigma$, and the functions $Z_{\pm}$ are linear combinations of the radial functions $\Delta^{1/2}R_{+1/2}$ and $R_{-1/2}$.
Unlike the Regge–Wheeler–Zerilli potentials, the Chandrasekhar–Page potentials do not vanish for $r_{*}\to\infty$ but, the particle being massive, tend to $\mu^{2}$ there. As a result, the asymptotic behaviour of the solutions is modified accordingly, with the wave number at infinity given by $\sqrt{\sigma^{2}-\mu^{2}}$ rather than $\sigma$.
References
Spinors
Black holes
Ordinary differential equations | Chandrasekhar–Page equations | [
"Physics",
"Astronomy"
] | 532 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Unsolved problems in physics",
"Astrophysics",
"Density",
"Stellar phenomena",
"Astronomical objects"
] |
61,997,982 | https://en.wikipedia.org/wiki/Discovery%20and%20development%20of%20bisphosphonates | Bisphosphonates are an important class of drugs originally commercialised in the mid to late 20th century. They are used for the treatment of osteoporosis and other bone disorders that cause bone fragility and diseases where bone resorption is excessive. Osteoporosis is common in post-menopausal women and patients in corticosteroid treatment where biphosphonates have been proven a valuable treatment and also used successfully against Paget's disease, myeloma, bone metastases and hypercalcemia. Bisphosphonates reduce breakdown of bones by inhibiting osteoclasts, they have a long history of use and today there are a few different types of bisphosphonate drugs on the market around the world.
Discovery
Bisphosphonates were originally synthesized in the 19th century and used in industry for their antiscaling and anticorrosive properties. In the late 1960s their potential to treat diseases related to bone metabolism became evident. The first generation of bisphosphonates included etidronic acid and clodronic acid, which were introduced in the 1970s and 1980s. They were the first bisphosphonate drugs to be used successfully in the clinic. They have since been developed further with the intention of making them more potent, enhancing their distribution inside the bone and extending their duration of action. This has made it possible to give zoledronate, the most recent bisphosphonate drug to be placed on the market, in a single annual dose by intravenous infusion.
Development
The original (first-generation) bisphosphonates were simple molecules with small groups of single atoms or alkyl chains in positions R1 and R2. They had only a rather weak inhibiting effect on bone resorption. The inclusion of an amino group marked the beginning of the second generation of bisphosphonates, with higher potency. The first was pamidronate, and similar analogues followed in which the position of the nitrogen in the side chain was the key to a more potent drug. Later it became apparent that the nitrogen does not necessarily have to be connected to an alkyl chain but can instead be part of a heterocyclic group. A few such drugs have been developed and placed on the market, of which zoledronate is the most notable. Minodronic acid is even more potent and has been placed on the market in Japan. Its potency is such that it is effective even at picomolar concentrations.
Further development has not resulted in the placing on the market of compounds of equal potency. Arylalkyl substitutes of pamidronate are among the most recent bisphosphonates to be used clinically; in these, the hydroxyl group in position R2 has been omitted to ensure stability.
Recent research in this area has opened up an opportunity to develop new bisphosphonate drug therapies.
Bisphosphonates with a more lipophilic character have been developed and have shown potential as tumor suppressants. They operate by a slightly different mechanism, in which they inhibit not only the key enzyme farnesyl pyrophosphate synthase (FPPS) of the mevalonate pathway but also geranylgeranyl pyrophosphate synthase (GGPS), another enzyme located in the mevalonate pathway. They do not have the same affinity for the bone minerals.
GGPS has since been successfully inhibited by a novel bisphosphonate compound with a triazole group within R2 and a methyl group in R1. This may become useful in therapies against malignancies like multiple myeloma.
In 2018, a dendritic bisphosphonate was introduced containing three bisphosphonate units. It has shown potential for bone-specific delivery of large therapeutic molecules by taking advantage of the high affinity of bisphosphonates for the bone minerals.
Mechanism of action
The mechanism of action of the bisphosphonates (BPs) has evolved as new generations of drugs have been developed. The function of the first-generation bisphosphonates differs from that of the more recent nitrogen-containing BPs, but both are apparently internalised by endocytosis of a membrane-bound vesicle in which the drug is most likely in a complex with Ca2+ ions. Other cells in the bone are largely unaffected, as this uptake is selective to osteoclasts.
The common function which applies to all bisphosphonate drugs is a physicochemical interaction with the bone mineral that prevents the physical resorption of the bone by the osteoclasts. This is especially relevant at sites where bone remodelling is most active. The bisphosphonates have an intrinsic affinity for the calcium ions (hydroxyapatite) of the bone mineral, just as the endogenous pyrophosphates do. The difference lies in the non-hydrolysable carbon-phosphorus bonds of the bisphosphonates, which prevent their metabolism while at the same time ensuring effective absorption from the gastrointestinal tract.
The primary inhibiting action of the first generation of bisphosphonates on osteoclasts is by inducing apoptosis. The mechanism of action is apparently the formation of an ATP analogue or metabolite from bisphosphonates like etidronic acid and clodronic acid. The ATP analogue accumulates in the cytosol of the osteoclast with a cytotoxic effect.
The primary mechanism of action of the more developed nitrogen-containing bisphosphonates is, however, through cellular effects on osteoclasts by inhibition of the mevalonate pathway, in particular the subsequent formation of isoprenoid lipids. The inhibition takes place at a key branch point in the pathway catalyzed by farnesyl pyrophosphate synthase (FPPS). Isoprenoid lipids are necessary for post-translational modifications of small GTP-binding regulatory proteins like Rac, Rho and Ras of the Ras superfamily. The function of osteoclasts depends on them for a variety of cellular processes, including apoptosis.
Structure activity relationship
Pharmacophore
Bisphosphonates mimic the endogenous inorganic pyrophosphate, with the oxygen backbone replaced by carbon (P-C-P for P-O-P). The two additional groups or side chains on the carbon backbone are usually referred to as R1 and R2. R1 is usually a hydroxyl group, which enhances the affinity for calcium by forming a tridentate ligand along with the phosphonate groups. The compound can be made more potent by optimizing the structure of the R2 group to best inhibit bone resorption.
Phosphonate
Phosphonate groups in the chemical structure are important for the binding of the drug to the target enzyme. Studies have shown that removal or replacement of a phosphonate group with a carboxylic acid causes a drastic loss in potency of the drug, and the enzyme-inhibitor complex no longer enters an isomerized state.
Hydroxyl group (R1 side chain)
Modification of the R1 side chain on bisphosphonates is very minor today; a single hydroxyl group at that position seems to give the best results in terms of activity. The hydroxyl group plays a role in forming a water-induced bond with glutamine (Gln240) on the target enzyme. Drugs that have no hydroxyl group initially cause better inhibition than the parent compounds; without the hydroxyl group, the drug seems to fit more easily into the open active site. The absence of the hydroxyl group, however, reduces the ability to hold the target enzyme complex in the isomerized state. The biological activity of bisphosphonates with a hydroxyl group therefore appears over a longer time.
Nitrogen (R2 side chain)
Nitrogen-containing bisphosphonates are currently the most used drugs in the class because of their potency. Studies have shown that the nitrogen on bisphosphonates forms a hydrogen bond with threonine (Thr201) and the carbonyl part of lysine (Lys200) on the target enzyme, thereby enhancing the binding of the complex. Altering the position of the nitrogen can significantly change the ability of this nitrogen hydrogen bond to form.
Modification of nitrogen containing side chain (R2 side chain)
Increasing the carbon length of the nitrogen-containing R2 side chain alters activity. A side chain made of three carbons has proven to be the ideal length in terms of activity; increasing or decreasing the length of the chain from there has a negative effect on biological activity. Alendronate, a common bisphosphonate drug, has a three-carbon side chain, for example. Risedronate has a heterocyclic structure containing nitrogen. Heterocyclic nitrogen-containing bisphosphonates have shown better results in terms of activity compared with the earlier bisphosphonates in which the nitrogen is bound to a carbon chain. Studies on risedronate analogues with different placements of the nitrogen on the ring have shown no measurable difference in biological activity. Increasing the length of the carbon chain connecting to the ring revealed negative results. Zoledronate, the most potent bisphosphonate drug today, is available only as an intravenous injection. It is the only bisphosphonate drug that has two nitrogen groups in the side chain, hence its potency and route of administration differ from other drugs in the same class.
References
Drug discovery
Bisphosphonates | Discovery and development of bisphosphonates | [
"Chemistry",
"Biology"
] | 1,980 | [
"Life sciences industry",
"Medicinal chemistry",
"Drug discovery"
] |
61,999,316 | https://en.wikipedia.org/wiki/Pyrogallolarenes | A pyrogallolarene (also calix[4]pyrogallolarene) is a macrocycle, or a cyclic oligomer, based on the condensation of pyrogallol (1,2,3-trihydroxybenzene) and an aldehyde. Pyrogallolarenes are a type of calixarene, and a subset of resorcinarenes that are substituted with a hydroxyl at the 2-position.
Pyrogallolarenes, like all resorcinarenes, form inclusion complexes with other molecules, acting as hosts in host–guest complexes. Pyrogallolarenes (like resorcinarenes) self-assemble into larger supramolecular structures, forming a hydrogen-bonded hexamer. The pyrogallolarene hexamer differs from those formed from resorcinarene in that it does not incorporate solvent molecules into the structure. Both in the crystalline state and in organic solvents, six molecules will form an assembly with an internal volume of around one cubic nanometer (a nanocapsule) and shapes similar to the Archimedean solids. A number of solvent or other molecules may reside in the capsule interior. The pyrogallolarene hexamer is generally more stable than the resorcinarene hexamer, even in polar solvents.
Synthesis
The pyrogallolarene macrocycle is typically prepared by condensation of pyrogallol and an aldehyde in concentrated acid solution in the presence of an alcohol solvent, usually methanol or ethanol. The reaction conditions can usually be carefully adjusted to precipitate the pure product or the product may be purified by recrystallization.
Pyrogallol[4]arene is simply made by mixing a solvent-free dispersion of isovaleraldehyde with pyrogallol, and a catalytic amount of p-toluenesulfonic acid, in a mortar and pestle.
References
Supramolecular chemistry
Macrocycles
Cyclophanes | Pyrogallolarenes | [
"Chemistry",
"Materials_science"
] | 438 | [
"Organic compounds",
"Macrocycles",
"nan",
"Nanotechnology",
"Supramolecular chemistry"
] |
62,004,450 | https://en.wikipedia.org/wiki/Ring%20of%20modular%20forms | In mathematics, the ring of modular forms associated to a subgroup of the special linear group is the graded ring generated by the modular forms of . The study of rings of modular forms describes the algebraic structure of the space of modular forms.
Definition
Let $\Gamma$ be a subgroup of $\operatorname{SL}(2,\mathbf{Z})$ that is of finite index and let $M_k(\Gamma)$ be the vector space of modular forms of weight $k$. The ring of modular forms of $\Gamma$ is the graded ring $M(\Gamma) = \bigoplus_{k \geq 0} M_k(\Gamma)$.
Example
The ring of modular forms of the full modular group $\operatorname{SL}(2,\mathbf{Z})$ is freely generated by the Eisenstein series $E_4$ and $E_6$. In other words, $M(\operatorname{SL}(2,\mathbf{Z}))$ is isomorphic as a $\mathbf{C}$-algebra to $\mathbf{C}[E_4, E_6]$, which is the polynomial ring of two variables over the complex numbers.
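As a concrete illustration (a standard fact, with the usual normalization), the modular discriminant, the weight-12 cusp form, is expressed in these two generators by

$\Delta = \frac{E_4^3 - E_6^2}{1728}.$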
Properties
The ring of modular forms is a graded Lie algebra since the Lie bracket $[f,g] = k f g' - \ell f' g$ of modular forms $f$ and $g$ of respective weights $k$ and $\ell$ is a modular form of weight $k + \ell + 2$. Brackets can be defined involving the higher derivatives of modular forms, and such brackets are called Rankin–Cohen brackets.
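For reference, the $n$-th Rankin–Cohen bracket of $f$ and $g$ can be written, in one common normalization (conventions differ by scalar factors), as

$[f,g]_n = \sum_{r+s=n} (-1)^r \binom{k+n-1}{s} \binom{\ell+n-1}{r} f^{(r)} g^{(s)},$

a modular form of weight $k + \ell + 2n$; the Lie bracket above is the case $n = 1$.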
Congruence subgroups of SL(2, Z)
In 1973, Pierre Deligne and Michael Rapoport showed that the ring of modular forms $M(\Gamma)$ is finitely generated when $\Gamma$ is a congruence subgroup of $\operatorname{SL}(2,\mathbf{Z})$.
In 2003, Lev Borisov and Paul Gunnells showed that the ring of modular forms is generated in weight at most 3 when $\Gamma$ is the congruence subgroup $\Gamma_1(N)$ of prime level $N$, using the theory of toric modular forms. In 2014, Nadim Rustom extended the result of Borisov and Gunnells for $\Gamma_1(N)$ to all levels and also demonstrated that the ring of modular forms for the congruence subgroup $\Gamma_0(N)$ is generated in weight at most 6 for some levels $N$.
In 2015, John Voight and David Zureick-Brown generalized these results: they proved that the graded ring of modular forms of even weight for any congruence subgroup $\Gamma$ of $\operatorname{SL}(2,\mathbf{Z})$ is generated in weight at most 6 with relations generated in weight at most 12. Building on this work, in 2016, Aaron Landesman, Peter Ruhm, and Robin Zhang showed that the same bounds hold for the full ring (all weights), with the improved bounds of 5 and 10 when $\Gamma$ has some nonzero odd weight modular form.
General Fuchsian groups
A Fuchsian group $\Gamma$ corresponds to the orbifold obtained from the quotient $\Gamma \backslash \mathbb{H}$ of the upper half-plane $\mathbb{H}$. By a stacky generalization of Riemann's existence theorem, there is a correspondence between the ring of modular forms of $\Gamma$ and a particular section ring closely related to the canonical ring of a stacky curve.
There is a general formula for the weights of generators and relations of rings of modular forms due to the work of Voight and Zureick-Brown and the work of Landesman, Ruhm, and Zhang.
Let be the stabilizer orders of the stacky points of the stacky curve (equivalently, the cusps of the orbifold ) associated to . If has no nonzero odd weight modular forms, then the ring of modular forms is generated in weight at most and has relations generated in weight at most . If has a nonzero odd weight modular form, then the ring of modular forms is generated in weight at most and has relations generated in weight at most .
Applications
In string theory and supersymmetric gauge theory, the algebraic structure of the ring of modular forms can be used to study the structure of the Higgs vacua of four-dimensional gauge theories with N = 1 supersymmetry. The stabilizers of superpotentials in N = 4 supersymmetric Yang–Mills theory are rings of modular forms of the congruence subgroup of .
References
Lie algebras
Modular forms
Number theory | Ring of modular forms | [
"Mathematics"
] | 730 | [
"Modular forms",
"Discrete mathematics",
"Number theory"
] |
62,925,989 | https://en.wikipedia.org/wiki/Parallel%20task%20scheduling | Parallel task scheduling (also called parallel job scheduling or parallel processing scheduling) is an optimization problem in computer science and operations research. It is a variant of optimal job scheduling. In a general job scheduling problem, we are given n jobs J1, J2, ..., Jn of varying processing times, which need to be scheduled on m machines while trying to minimize the makespan - the total length of the schedule (that is, when all the jobs have finished processing). In the specific variant known as parallel-task scheduling, all machines are identical. Each job j has a length parameter pj and a size parameter qj, and it must run for exactly pj time-steps on exactly qj machines in parallel.
Veltman et al. and Drozdowski denote this problem by $P|size_j|C_{\max}$ in the three-field notation introduced by Graham et al. P means that there are several identical machines running in parallel; sizej means that each job has a size parameter; Cmax means that the goal is to minimize the maximum completion time. Some authors use $P|m_j|C_{\max}$ instead. Note that the problem of parallel-machines scheduling is a special case of parallel-task scheduling where $q_j = 1$ for all $j$, that is, each job should run on a single machine.
The origins of this problem formulation can be traced back to 1960. For this problem, there exists no polynomial-time approximation algorithm with a ratio smaller than $3/2$ unless $P = NP$.
Definition
There is a set $\mathcal{J}$ of $n$ jobs, and $m$ identical machines. Each job $j \in \mathcal{J}$ has a processing time $p_j$ (also called the length of $j$), and requires the simultaneous use of $q_j$ machines during its execution (also called the size or the width of $j$).
A schedule assigns each job to a starting time and a set of machines to be processed on. A schedule is feasible if each processor executes at most one job at any given time.
The objective of the problem denoted by $P|size_j|C_{\max}$ is to find a schedule with minimum length $C_{\max}$, also called the makespan of the schedule.
A sufficient condition for the feasibility of a schedule is that, at each start time, the total size of the jobs running at that time does not exceed the number of machines:

$\sum_{j \in \mathcal{J}(t)} q_j \leq m \quad \text{for every start time } t,$

where $\mathcal{J}(t)$ denotes the set of jobs being processed at time $t$.
If this property is satisfied for all starting times, a feasible schedule can be generated by assigning free machines to the jobs at each time starting with time $t = 0$. Furthermore, the number of machine intervals used by jobs and idle intervals at each time step can be bounded by . Here a machine interval is a set of consecutive machines of maximal cardinality such that all machines in this set are processing the same job. A machine interval is completely specified by the index of its first and last machine. Therefore, it is possible to obtain a compact way of encoding the output with polynomial size.
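The construction sketched above can be made concrete as follows. The Python sketch below is an illustration written for this description (not taken from the cited literature): it assigns concrete machines to jobs with given start times, raising an error if the sufficient condition fails at some start time, and it makes no attempt to minimize the number of machine intervals.

```python
def assign_machines(jobs, m):
    """jobs: list of (start, length, size); returns a machine set per job.

    Assumes the total size of active jobs never exceeds m at any start
    time (the sufficient feasibility condition); raises otherwise.
    """
    order = sorted(range(len(jobs)), key=lambda j: jobs[j][0])
    free = set(range(m))
    running = []  # list of (finish_time, machine_set)
    assignment = {}
    for j in order:
        start, length, size = jobs[j]
        # Release the machines of jobs that finished by this start time.
        still_running = []
        for finish, machines in running:
            if finish <= start:
                free |= machines
            else:
                still_running.append((finish, machines))
        running = still_running
        if size > len(free):
            raise ValueError("start times violate the feasibility condition")
        chosen = {free.pop() for _ in range(size)}
        assignment[j] = chosen
        running.append((start + length, chosen))
    return assignment

# Three jobs (start, length, size) on m = 3 machines.
print(assign_machines([(0, 2, 2), (0, 2, 1), (2, 2, 2)], 3))
```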
Computational hardness
This problem is NP-hard even when there are only two machines and the sizes of all jobs are qj = 1 (i.e., each job needs to run only on a single machine). This special case, denoted by P2||Cmax, is a variant of the partition problem, which is known to be NP-hard.
When the number of machines m is at most 3, that is: for the variants P2|sizej|Cmax and P3|sizej|Cmax, there exists a pseudo-polynomial time algorithm, which solves the problem exactly.
In contrast, when the number of machines is at least 4, that is: for the variants Pm|sizej|Cmax for any m ≥ 4, the problem is also strongly NP-hard (this result improved a previous result showing strong NP-hardness for m ≥ 5).
If the number of machines is not bounded by a constant, then there can be no approximation algorithm with an approximation ratio smaller than 3/2 unless P = NP. This holds even for the special case in which the processing time of all jobs is pj = 1, since this special case is equivalent to the bin packing problem: each time-step corresponds to a bin, m is the bin size, each job corresponds to an item of size qj, and minimizing the makespan corresponds to minimizing the number of bins.
Variants
Several variants of this problem have been studied. The following variants have also been considered in combination with each other.
Contiguous jobs: In this variant, the machines have a fixed order M1, …, Mm. Instead of assigning the jobs to any subset of the machines, the jobs have to be assigned to a contiguous interval of machines. This problem corresponds to the problem formulation of the strip packing problem.
Multiple platforms: In this variant, the set of machines is partitioned into independent platforms. A scheduled job can only use the machines of one platform and is not allowed to span over multiple platforms when processed.
Moldable jobs: In this variant each job j has a set of feasible machine-counts Dj. For each count d ∈ Dj, the job can be processed on d machines in parallel, and in this case, its processing time will be pj,d. To schedule a job j, an algorithm has to choose a machine count d ∈ Dj and assign j to a starting time and to d machines during the corresponding time interval. A usual assumption for this kind of problem is that the total workload of a job, which is defined as d · pj,d, is non-increasing for an increasing number of machines.
Release dates: In this variant, denoted by P|rj,sizej|Cmax, not all jobs are available at time 0; each job j becomes available at a fixed and known time rj. It must be scheduled after that time.
Preemption: In this variant, denoted by P|sizej,pmtn|Cmax, it is possible to interrupt jobs that are already running, and schedule other jobs that become available at that time.
Algorithms
The list scheduling algorithm by Garey and Graham has an absolute ratio 2, as pointed out by Turek et al. and Ludwig and Tiwari.
Feldmann, Sgall and Teng observed that the length of a non-preemptive schedule produced by the list scheduling algorithm is actually at most 2 times the optimum preemptive makespan.
A polynomial-time approximation scheme (PTAS) for the case when the number of processors is constant, denoted by Pm|sizej|Cmax, was presented by Amoura et al. and Jansen et al.
Later, Jansen and Thöle found a PTAS for the case where the number of processors is polynomially bounded in the number of jobs.
In this algorithm, the number of machines appears polynomially in the time complexity of the algorithm. Since, in general, the number of machines appears only logarithmically in the size of the instance, this algorithm is a pseudo-polynomial time approximation scheme as well.
A (3/2 + ε)-approximation was given by Jansen, which closes the gap to the lower bound of 3/2 except for an arbitrarily small ε.
Differences between contiguous and non-contiguous jobs
Given an instance of the parallel task scheduling problem, the optimal makespan can differ depending on the constraint to the contiguity of the machines. If the jobs can be scheduled on non-contiguous machines, the optimal makespan can be smaller than in the case that they have to be scheduled on contiguous ones.
The difference between contiguous and non-contiguous schedules was first demonstrated in 1992 on a specific small instance.
Błądek et al. studied these so-called c/nc-differences and proved the following points:
For a c/nc-difference to arise, there must be at least three tasks with pj ≥ 2, and at least three tasks with qj ≥ 2.
For a c/nc-difference to arise, at least processors are required (and there exists an instance with a c/nc-difference with ).
For a c/nc-difference to arise, the non-contiguous schedule length must be at least
The maximal c/nc-difference is at least and at most
To decide whether there is a c/nc-difference in a given instance is NP-complete.
Furthermore, they proposed the following two conjectures, which remain unproven:
For a c/nc-difference to arise, at least tasks are required.
Related problems
There are related scheduling problems in which each job consists of several operations, which must be executed in sequence (rather than in parallel). These are the problems of open shop scheduling, flow shop scheduling and job shop scheduling.
References
Optimal scheduling
Packing problems | Parallel task scheduling | [
"Mathematics",
"Engineering"
] | 1,602 | [
"Optimal scheduling",
"Mathematical problems",
"Packing problems",
"Industrial engineering"
] |
71,578,738 | https://en.wikipedia.org/wiki/PHY-Level%20Collision%20Avoidance | PHY-Level Collision Avoidance (PLCA) is a component of the Ethernet reconciliation sublayer (between the PHY and the MAC) defined within IEEE 802.3 clause 148. The purpose of PLCA is to avoid the shared medium collisions and associated retransmission overhead. PLCA is used in 802.3cg (10BASE-T1), which focuses on bringing Ethernet connectivity to short-haul embedded internet of things and low throughput, noise-tolerant, industrial deployment use cases.
In order for a multidrop 10BASE-T1S standard to successfully compete with CAN XL, some kind of arbitration was necessary. The linear arbitration scheme of PLCA somewhat resembles that of Byteflight, but PLCA was designed from scratch to accommodate the existing shared-medium Ethernet MACs with their busy-sensing mechanisms.
Operation
Under a PLCA scheme all nodes are assigned unique sequential numbers (IDs) in the range from 0 to N. Zero ID corresponds to a special "master" node that during the idle intervals transmits the synchronization beacon (a special heartbeat frame). After the beacon (within the PLCA cycle) each node gets its transmission opportunity (TO). Each opportunity interval is very short (typically 20 bits), so the overhead for the nodes that do not have anything to transmit is low. If the PLCA circuitry discovers that the node's TO cannot be used (another node with a lower ID has started its transmission and the medium is busy at the beginning of the TO for this node), it asserts the "local collision" input of the MAC, thus delaying the transmission. The condition is cleared once the node gets its TO. A standard MAC reacts to the local collision with a backoff; however, since this is the first and only backoff for this frame, the backoff interval is equal to the smallest possible frame, and the backoff timer will definitely expire by the time the TO is granted, so there is no additional loss of performance.
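The cycle can be illustrated with a toy model (a deliberate simplification: the bit counts are example values, and inter-frame gaps and the MAC backoff interplay described above are ignored):

def plca_cycle(queues, to_bits=20, beacon_bits=20):
    """Toy model of one PLCA cycle.
    queues[i] is the size in bit-times of node i's pending frame (0 = idle).
    Returns (elapsed bit-times, per-node log)."""
    elapsed = beacon_bits                 # node 0 opens the cycle with a beacon
    log = [(0, "beacon")]
    for node, frame in enumerate(queues):
        if frame > 0:
            elapsed += frame              # the node transmits at its TO
            log.append((node, "frame"))
        else:
            elapsed += to_bits            # an unused TO costs only a short window
            log.append((node, "yield"))
    return elapsed, log

print(plca_cycle([512, 0, 0, 1024]))

Nodes with nothing to send cost only one short TO window each, which is why the overhead for idle nodes is low.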
See also
Internet of things (IOT)
References
Sources
Ethernet
Computer networking
Algorithms
Internet of things | PHY-Level Collision Avoidance | [
"Mathematics",
"Technology",
"Engineering"
] | 426 | [
"Computer networking",
"Computer engineering",
"Applied mathematics",
"Algorithms",
"Mathematical logic",
"Computer science"
] |
71,587,738 | https://en.wikipedia.org/wiki/Barocaloric%20material | Barocaloric materials are characterized by strong, reversible thermic responses to changes in pressure. Many involve solid-to-solid phase changes from disordered to ordered and rigid under increased pressure, releasing heat. Barocaloric solids undergo solid-to-solid phase change. One barocaloric material processes heat without a phase change: natural rubber.
Input energy
Barocaloric effects can be achieved at pressures above 200 MPa for intermetallics or about 100 MPa in plastic crystals. However, some materials change phase at pressures as low as 80 MPa. The hybrid organic–inorganic layered perovskite (CH3–(CH2)n−1–NH3)2MnCl4 (n = 9,10) shows a reversible barocaloric entropy change of ΔSr ~ 218, 230 J kg−1 K−1 at 0.08 GPa at 294-311.5 K (transition temperature).
Barocaloric materials are one of several classes of materials that undergo caloric phase transitions. The others are magnetocaloric, electrocaloric, and elastocaloric. Magnetocaloric effects typically require field strengths larger than 2 T, while electrocaloric materials require field strengths in the kV to MV/m range. Elastocaloric materials may require force levels as large as 700 MPa.
Potential applications
Barocaloric materials have potential use as refrigerants in cooling systems instead of gases such as hydrofluorocarbons. In such cooling cycles, applied pressure drives a solid-to-solid phase change. A prototype air conditioner was made from a metal tube filled with a metal-halide perovskite (the refrigerant) and water or oil (heat/pressure transport material). A piston pressurizes the liquid.
Another refrigerant project achieved reversible entropy changes of ~71 J K−1 kg−1 at ambient temperature. The phase transition temperature is a function of pressure, varying at a rate of ~0.79 K MPa−1. The accompanying saturation driving pressure is ~40 MPa, a barocaloric strength of ~1.78 J K−1 kg−1 MPa−1, and a temperature span of ~41 K under 80 MPa. Neutron scattering characterizations of crystal structures/atomic dynamics show that reorientation-vibration coupling is responsible for the pressure sensitivity.
See also
Thermoelectric effect
References
Refrigerants
Phase transitions | Barocaloric material | [
"Physics",
"Chemistry"
] | 513 | [
"Physical phenomena",
"Phase transitions",
"Critical phenomena",
"Phases of matter",
"Statistical mechanics",
"Matter"
] |
71,587,864 | https://en.wikipedia.org/wiki/Silicon%E2%80%93oxygen%20bond | A silicon–oxygen bond ( bond) is a chemical bond between silicon and oxygen atoms that can be found in many inorganic and organic compounds. In a silicon–oxygen bond, electrons are shared unequally between the two atoms, with oxygen taking the larger share due to its greater electronegativity. This polarisation means Si–O bonds show characteristics of both covalent and ionic bonds. Compounds containing silicon–oxygen bonds include materials of major geological and industrial significance such as silica, silicate minerals and silicone polymers like polydimethylsiloxane.
Bond polarity, length and strength
On the Pauling electronegativity scale, silicon has an electronegativity of 1.90 and oxygen 3.44. The electronegativity difference between the elements is therefore 1.54. Because of this moderately large difference in electronegativities, the bond is polar but not fully ionic. Carbon has an electronegativity of 2.55 so carbon–oxygen bonds have an electronegativity difference of 0.89 and are less polar than silicon–oxygen bonds. Silicon–oxygen bonds are therefore covalent and polar, with a partial positive charge on silicon and a partial negative charge on oxygen: Siδ+—Oδ−.
Silicon–oxygen single bonds are longer (1.6 vs 1.4 Å) but stronger (452 vs. about 360 kJ mol−1) than carbon–oxygen single bonds. However, silicon–oxygen double bonds are weaker than carbon–oxygen double bonds (590 vs. 715 kJ mol−1) due to a better overlap of p orbitals forming a stronger pi bond in the latter. This is an example of the double bond rule. For these reasons, carbon dioxide is a molecular gas containing two C=O double bonds per carbon atom whereas silicon dioxide is a polymeric solid containing four Si–O single bonds per silicon atom; molecular SiO2 containing two Si=O double bonds would polymerise. Other compounds containing Si=O double bonds are normally very reactive and unstable with respect to polymerisation or oligomerization. Silanones oligomerise to siloxanes unless they are stabilised, for example by coordination to a metal centre, coordination to Lewis acids or bases, or by steric shielding.
Bond angles
Disiloxane groups, Si–O–Si, tend to have larger bond angles than their carbon counterparts, C–O–C. The Si–O–Si angle ranges from about 130–180°, whereas the C–O–C angle in ethers is typically 107–113°. Si–O–C groups are intermediate, tending to have bond angles smaller than Si–O–Si but larger than C–O–C. The main reasons are hyperconjugation (donation from an oxygen p orbital to an Si–R σ* sigma antibonding molecular orbital, for example) and ionic effects (such as electrostatic repulsion between the two neighbouring partially positive silicon atoms). Recent calculations suggest π backbonding from an oxygen 2p orbital to a silicon 3d orbital makes only a minor contribution to bonding as the Si 3d orbital is too high in energy.
The Si–O–Si angle is 144° in α-quartz, 155° in β-quartz, 147° in α-cristobalite and (153±20)° in vitreous silica. It is 180° in coesite (another polymorph of SiO2), in Ph3Si–O–SiPh3, and in the [O3Si–O–SiO3]6− ion in thortveitite, Sc2Si2O7. It increases progressively from 133° to 180° in Ln2Si2O7 as the size and coordination number of the lanthanide decreases from neodymium to lutetium. It is 150° in hemimorphite and 134° in lithium metasilicate and sodium metasilicate.
Coordination number
In silicate minerals, silicon often forms single bonds to four oxygen atoms in a tetrahedral molecular geometry, forming a silicon–oxygen tetrahedron. At high pressures, silicon can increase its coordination number to six, as in stishovite.
See also
Organosilicon compound
Carbon–hydrogen bond
Carbon–carbon bond
Carbon–fluorine bond
Bonding in solids
References
Chemical bonding | Silicon–oxygen bond | [
"Physics",
"Chemistry",
"Materials_science"
] | 902 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
71,590,400 | https://en.wikipedia.org/wiki/Carbon%20dichalcogenide | Carbon dichalcogenides are chemical compounds of carbon and chalcogen elements. They have the general chemical formula CZ2, where Z = O, S, Se, Te.
This includes:
Carbon dioxide, CO2
Carbon disulfide, CS2
Carbon diselenide, CSe2
Carbonyl sulfide, OCS
Carbonyl selenide, OCSe
Thiocarbonyl selenide, SCSe
Thiocarbonyl telluride, SCTe
Stability
Double bonds between carbon and chalcogen elements, C=Z, become weaker the heavier the chalcogen, Z. This trend means carbon dichalcogenide monomers are less stable and more susceptible to polymerisation as Z changes from O to Te. For example, CO2 is stable, CS2 polymerises under extreme conditions, CSe2 tends to polymerise, CSeTe is unstable and CTe2 does not exist. This trend is an example of the double bond rule.
Bonding
In carbon dichalcogenides, C=O bond lengths are around 1.16 Å, C=S around 1.56 Å, C=Se around 1.70 Å and C=Te around 1.90 Å.
References
Inorganic carbon compounds
Chalcogenides | Carbon dichalcogenide | [
"Chemistry"
] | 240 | [
"Inorganic carbon compounds",
"Inorganic compounds"
] |
71,590,696 | https://en.wikipedia.org/wiki/Subgroup%20distortion | In geometric group theory, a discipline of mathematics, subgroup distortion measures the extent to which an overgroup can reduce the complexity of a group's word problem. Like much of geometric group theory, the concept is due to Misha Gromov, who introduced it in 1993.
Formally, let S generate the group H, and let G be an overgroup for H generated by T. Then each generating set defines a word metric on the corresponding group; the distortion of H in G is the asymptotic equivalence class of the function

δ(r) = diam_H(H ∩ B_G(e, r)) / r,

where B_G(e, r) is the ball of radius r about the center e in G and diam_H(A) is the diameter of A with respect to the word metric of H.
A subgroup with bounded distortion is called undistorted, and is the same thing as a quasi-isometrically embedded subgroup.
Examples
For example, consider the infinite cyclic group Z = ⟨b⟩, embedded as a normal subgroup of the Baumslag–Solitar group BS(1, 2) = ⟨a, b⟩. With respect to the chosen generating sets, the element b^(2^n) = a^n b a^(−n) is distance 2n + 1 from the origin in BS(1, 2), but distance 2^n from the origin in ⟨b⟩. In particular, ⟨b⟩ is at least exponentially distorted with base 2.
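This follows from the defining relation; assuming the standard presentation BS(1, 2) = ⟨a, b | a b a^{-1} = b^2⟩, induction on n gives

a^{n} b a^{-n}
  = a^{n-1} \left( a b a^{-1} \right) a^{-(n-1)}
  = a^{n-1} b^{2} a^{-(n-1)}
  = \left( a^{n-1} b a^{-(n-1)} \right)^{2}
  = b^{2^{n}},

so a word of length 2n + 1 in the overgroup represents b^(2^n), an element of word length 2^n in the subgroup.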
On the other hand, any embedded copy of Z in the free abelian group Z^2 on two generators is undistorted, as is any embedding of Z into itself.
Elementary properties
In a tower of groups K ≤ H ≤ G, the distortion of K in G is at least the distortion of K in H.
A normal abelian subgroup has distortion determined by the eigenvalues of the conjugation overgroup representation; formally, if an element g of the overgroup acts on the subgroup with an eigenvalue λ, then the subgroup is at least exponentially distorted with base |λ|. For many non-normal but still abelian subgroups, the distortion of the normal core gives a strong lower bound.
Known values
Every computable function with at most exponential growth can be a subgroup distortion, but Lie subgroups of a nilpotent Lie group always have distortion of the form r^α for some rational α.
The denominator in the definition is always r; for this reason, it is often omitted. In that case, a subgroup that is not locally finite has superadditive distortion; conversely every superadditive function (up to asymptotic equivalence) can be found this way.
In cryptography
The simplification in a word problem induced by subgroup distortion suffices to construct a cryptosystem, algorithms for encoding and decoding secret messages. Formally, the plaintext message is any object (such as text, images, or numbers) that can be encoded as a number n. The transmitter then encodes n as an element w of the secret subgroup H, with word length n. In a public overgroup G that distorts H, the element w has a word of much smaller length, which is then transmitted to the receiver along with a number of "decoys" from G, to obscure the secret subgroup H. The receiver then picks out the element of H, re-expresses the word in terms of generators of H, and recovers n.
References
Geometric group theory
Low-dimensional topology | Subgroup distortion | [
"Physics",
"Mathematics"
] | 573 | [
"Geometric group theory",
"Group actions",
"Low-dimensional topology",
"Topology",
"Symmetry"
] |
71,590,725 | https://en.wikipedia.org/wiki/Socolar%20tiling | A Socolar tiling is an example of an aperiodic tiling, developed in 1989 by Joshua Socolar in the exploration of quasicrystals. There are 3 tiles a 30° rhombus, square, and regular hexagon. The 12-fold symmetry set exist similar to the 10-fold Penrose rhombic tilings, and 8-fold Ammann–Beenker tilings.
The 12-fold tiles easily tile periodically, so special rules are defined to limit their connections and force nonperiodic tilings. The rhombus and the square are disallowed from touching another copy of themselves, while the hexagon can connect to both tiles as well as to itself, but only on alternate edges.
Dodecagonal rhomb tiling
The dodecagonal rhomb tiling includes three tiles: a 30° rhombus, a 60° rhombus, and a square. Another set includes a square, a 30° rhombus and an equilateral triangle.
See also
Pattern block - 6 tiles based on 12-fold symmetry, including the 3 Socolar tiles
Socolar–Taylor tile - A different tiling named after Socolar
References
Aperiodic tilings | Socolar tiling | [
"Physics",
"Mathematics"
] | 250 | [
"Tessellation",
"Geometry",
"Geometry stubs",
"Aperiodic tilings",
"Symmetry"
] |
71,591,647 | https://en.wikipedia.org/wiki/DoITPoMS | Dissemination of IT for the Promotion of Materials Science (DoITPoMS) is a web-based educational software resource designed to facilitate the teaching and learning of Materials science, at the tertiary level for free.
History
The DoITPoMS project originated in the early 1990s, incorporating customized online sources into the curriculum of the Materials Science courses in the Natural Sciences Tripos of the University of Cambridge. The initiative became formalized in 2000, with the start of a project supported by the UK national Fund for the Development of Teaching and Learning (FDTL). This was led by the Department of Materials Science and Metallurgy at the University of Cambridge with five partner institutions, including the University of Leeds, London Metropolitan University, the University of Manchester, Oxford Brookes University, and the University of Sheffield. This period of cooperation lasted for about 10 years.
The FDTL project was aimed at building on expertise concerning the use of Information Technology (IT) to enhance the student learning experience and to disseminate these techniques within the Materials Education community in the UK and globally. This was done by creating an archive of background information, such as video clips, micrographs, and simulations, and libraries of teaching and learning packages (TLPs), each of which covers a particular topic. These were designed both for independent usage by students and as a teaching aid for educators. A vital feature of these packages is a high level of user interactivity.
DoITPoMS has no commercial sponsors and no advertising is permitted on the site. The background science to the resources within DoITPoMS has all been input by unpaid volunteers, most of whom have been academics based in universities. A single person retains responsibility for a particular resource, and these people are credited on the site. While the logo of the University of Cambridge does appear on the site, its content is freely available and licensed under CC BY-NC-SA 2.0 UK.
Format and usage
The set of resources currently available on the site comprises Libraries of TLPs (~75), Micrographs (~900), Video clips (~150), Lecture demonstration packages (5), and Stand-alone simulations (2). These all have slightly different purposes, and the modes of usage cover a wide range. In each TLP, several simulations typically allow the user to input data to visualise the characteristics of particular effects or phenomena. This enables students to explore areas in their own way and facilitates the creation of exercises by educators. Each TLP has a set of questions at the end, designed to test whether the main points of the TLP have been understood.
The TLPs cover many diverse topics within the broad field of materials science, ranging from basics, such as crystal structures and thermal conduction, to more applied areas, such as the design and functioning of batteries and fuel cells. Tools such as X-ray diffraction and the finite element method are also included. Many, although not all, of these topics go into greater depth and are designed explicitly as educational resources.
Approximately half a million users accessed the site in 2021.
References
External links
DoITPoMS on Flickr.
DoITPoMS on YouTube.
British educational websites
Educational materials
Virtual learning environments
Science communication
Open educational resources
University of Cambridge
Organisations associated with the University of Cambridge | DoITPoMS | [
"Physics",
"Materials_science",
"Engineering"
] | 658 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
53,100,045 | https://en.wikipedia.org/wiki/Trilaciclib | Trilaciclib, sold under the brand name Cosela, is a medication used to reduce the frequency of chemotherapy-induced bone marrow suppression.
The most common side effects include fatigue; low levels of calcium, potassium and phosphate; increased levels of an enzyme called aspartate aminotransferase; headache; and infection in the lungs (pneumonia).
Trilaciclib may help protect bone marrow cells from damage caused by chemotherapy by inhibiting cyclin-dependent kinase 4/6, a type of enzyme. Trilaciclib is the first therapy in its class and was approved for medical use in the United States in February 2021. The U.S. Food and Drug Administration considers it to be a first-in-class medication.
Chemotherapy drugs are designed to kill cancer cells but can damage normal tissues as well. The bone marrow is particularly susceptible to chemotherapy damage. The bone marrow makes red blood cells, white blood cells, and platelets (small fragments in the blood) that transport oxygen, fight infection, and stop bleeding. When damaged, the bone marrow produces fewer of these cells, leading to fatigue, increased risk of infection, and bleeding, among other problems. Trilaciclib may help protect the normal bone marrow cells from the harmful effects of chemotherapy.
Medical uses
Trilaciclib is indicated to reduce the frequency of chemotherapy-induced bone marrow suppression in adults receiving certain types of chemotherapy for extensive-stage (when the cancer has spread beyond the lungs) small cell lung cancer.
History
The effectiveness of trilaciclib was evaluated in three randomized, double-blind, placebo-controlled studies in participants with extensive-stage small cell lung cancer. Combined, these studies randomly assigned 245 participants to receive either an intravenous infusion of trilaciclib or a placebo before chemotherapy. The studies then compared the two groups for the proportion of participants with severe neutropenia (a very low count of white blood cells called neutrophils) and the duration of severe neutropenia in the first cycle of chemotherapy. In all three studies, participants who received trilaciclib had a lower risk of having severe neutropenia compared to participants who received a placebo. Among those who had severe neutropenia, participants who received trilaciclib, on average, had it for a shorter time than participants who received a placebo.
The U.S. Food and Drug Administration (FDA) granted the application for trilaciclib priority review and breakthrough therapy designations. The FDA granted the approval of Cosela to G1 Therapeutics, Inc.
References
External links
Protein kinase inhibitors
Chemotherapeutic adjuvants
Pyridines
4-Methylpiperazin-1-yl compounds
Spiro compounds
Amides
Guanidines
CDK inhibitors | Trilaciclib | [
"Chemistry"
] | 574 | [
"Guanidines",
"Functional groups",
"Organic compounds",
"Amides",
"Spiro compounds"
] |
53,102,285 | https://en.wikipedia.org/wiki/Mass-spring-damper%20model | The mass-spring-damper model consists of discrete mass nodes distributed throughout an object and interconnected via a network of springs and dampers. This model is well-suited for modelling object with complex material properties such as nonlinearity and viscoelasticity.
Packages such as MATLAB may be used to run simulations of such models. As well as engineering simulation, these systems have applications in computer graphics and computer animation.
Derivation (Single Mass)
Deriving the equations of motion for this model is usually done by summing the forces on the mass (including any applied external force F(t)):

m\ddot{x} + c\dot{x} + kx = F(t).

By rearranging this equation, we can derive the standard form:

\ddot{x} + 2\zeta\omega_n \dot{x} + \omega_n^2 x = u(t),

where \omega_n = \sqrt{k/m} is the undamped natural frequency, \zeta = c/(2\sqrt{mk}) is the damping ratio, and u(t) = F(t)/m. The homogeneous equation for the mass spring system is:

\ddot{x} + 2\zeta\omega_n \dot{x} + \omega_n^2 x = 0.

This has the solution:

x(t) = A e^{\omega_n\left(-\zeta + \sqrt{\zeta^2 - 1}\right)t} + B e^{\omega_n\left(-\zeta - \sqrt{\zeta^2 - 1}\right)t}.

If \zeta < 1 then \zeta^2 - 1 is negative, meaning the square root will be imaginary and therefore the solution will have an oscillatory component.
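As a numerical complement, here is a minimal sketch (the parameter values are arbitrary examples) that integrates the unforced equation with a semi-implicit Euler scheme; with these values \zeta = c/(2\sqrt{mk}) = 0.125 < 1, so the computed response is oscillatory. Packages such as MATLAB or SciPy offer more robust integrators:

import numpy as np

def simulate_msd(m=1.0, c=0.5, k=4.0, x0=1.0, v0=0.0, dt=1e-3, t_end=10.0):
    """Integrate m x'' + c x' + k x = 0 with semi-implicit Euler."""
    n = int(t_end / dt)
    t = np.linspace(0.0, t_end, n)
    x, v = np.empty(n), np.empty(n)
    x[0], v[0] = x0, v0
    for i in range(1, n):
        a = -(c * v[i-1] + k * x[i-1]) / m   # acceleration from the force balance
        v[i] = v[i-1] + dt * a
        x[i] = x[i-1] + dt * v[i]            # semi-implicit: uses the updated v
    return t, x

t, x = simulate_msd()
print(x[:3])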
See also
Numerical methods
Soft body dynamics#Spring/mass models
Finite element analysis
References
Classical mechanics
Mechanical vibrations | Mass-spring-damper model | [
"Physics",
"Engineering"
] | 207 | [
"Structural engineering",
"Mechanics",
"Classical mechanics",
"Mechanical vibrations"
] |
53,107,281 | https://en.wikipedia.org/wiki/Yang%20Shiming | Yang Shiming or Shih-Ming Yang () was a Chinese thermodynamicist, who was a pioneer in heat transfer in mainland China.
Yang was a standing member of the Executive Committee of the Chinese Society of Engineering Thermophysics and vice chairman of the National Educational Advisory Committee on Thermal Engineering. He had also served on the editorial board of the Journal of Engineering Thermophysics since 1980, the honorary editorial advisory board of the International Journal of Heat and Mass Transfer since 1982, and the executive committee as well as the Scientific Council of the International Centre of Heat and Mass Transfer since 1987.
Early life and education
Yang fled to Shanghai at age 12. He came under the influence of the Communist Party of China (CPC) while studying at high school, and he joined the party in 1941.
Yang enrolled at Shanghai Jiao Tong University in 1942. His undergraduate studies were interrupted; he returned to the university in 1946 and received his BS degree there in 1948. In the meantime, he served successively as a teacher at a technical school and as an engineer at a rubber factory. Afterwards, he received his MS degree from the Case Institute of Technology in 1950. He then became a graduate student under the guidance of Professor Max Jakob and obtained his PhD degree from the Illinois Institute of Technology in 1953.
Career
At Jakob's request, Yang joined the Heat Transfer Laboratory of IIT before he returned to Shanghai at the end of 1953. Their concerted efforts gave rise to the Jakob number.
Yang began teaching at Jiao Tong University in 1956, and he followed the university's westward move to Xi'an the next year. During 1958–85, he was in charge of the thermal engineering section of Xi'an Jiaotong University, first as an associate professor and then as a professor.
The death of his wife led to Yang's return to Shanghai Jiao Tong University, where he continued his work until retirement.
Yang made great contributions to thermodynamics education in China. The textbook Heat Transfer, which he edited, received the National Excellent Textbook Award in 1988.
Family
Yang's wife died in the 1980s. They had a daughter and a son.
References
1925 births
2017 deaths
National Chiao Tung University (Shanghai) alumni
Academic staff of Shanghai Jiao Tong University
Thermodynamicists
People from Wuxi | Yang Shiming | [
"Physics",
"Chemistry"
] | 470 | [
"Thermodynamics",
"Thermodynamicists"
] |
64,317,992 | https://en.wikipedia.org/wiki/Combinatorial%20Geometry%20in%20the%20Plane | Combinatorial Geometry in the Plane is a book in discrete geometry. It was translated from a German-language book, Kombinatorische Geometrie in der Ebene, which its authors Hugo Hadwiger and Hans Debrunner published through the University of Geneva in 1960, expanding a 1955 survey paper that Hadwiger had published in L'Enseignement mathématique. Victor Klee translated it into English, and added a chapter of new material. It was published in 1964 by Holt, Rinehart and Winston, and republished in 1966 by Dover Publications. A Russian-language edition, , translated by I. M. Jaglom and including a summary of the new material by Klee, was published by Nauka in 1965. The Basic Library List Committee of the Mathematical Association of America has recommended its inclusion in undergraduate mathematics libraries.
Topics
The first half of the book provides the statements of nearly 100 propositions in the discrete geometry of the Euclidean plane, and the second half sketches their proofs. Klee's added chapter, lying between the two halves, provides another 10 propositions, including some generalizations to higher dimensions, and the book concludes with a detailed bibliography of its topics.
Results in discrete geometry covered by this book include:
Carathéodory's theorem that every point in the convex hull of a planar set belongs to a triangle determined by three points of the set, and Steinitz's theorem that every point interior to the convex hull is interior to the convex hull of four points of the set.
The Erdős–Anning theorem, that if an infinite set of points in the plane has an integer distance between every two points, then the given points must all lie on a single line.
Helly's theorem, that if a family of compact convex sets has a non-empty intersection for every triple of sets, then the whole family has a non-empty intersection.
A Helly-like property of visibility related to the art gallery theorem: if every three points of a polygon are visible from some common point within the polygon, then there is a point from which the entire polygon is visible. In this case the polygon must be a star-shaped polygon.
The impossibility of covering a closed parallelogram by three translated copies of its interior, and the fact that every other compact convex set can be covered in this way.
Jung's theorem, that (for sets in the plane) the radius of the smallest enclosing circle is at most 1/√3 times the diameter. This bound is tight for the equilateral triangle.
Paradoxes of set decomposition into smaller sets, related to the Banach–Tarski paradox.
Radon's theorem that every four points in the plane can be partitioned into two subsets with intersecting convex hulls.
Sperner's lemma on colorings of triangulations.
The Sylvester–Gallai theorem, in the form that if a finite set of points in the plane has the property that every line through two of the points contains a third point from the set, then the given points must all lie on a single line.
Tarski's plank problem, in the form that whenever two infinite strips together cover a compact convex set, their total width is at least as large as the width of the narrowest strip that covers the set by itself.
Whenever a line is covered by two closed subsets, then at least one of the two subsets has pairs of points at all possible distances.
It also includes some topics that belong to combinatorics but are not inherently geometric, including:
Hall's marriage theorem characterizing the bipartite graphs that have a perfect matching.
Ramsey's theorem that, if the k-tuples of points from an infinite set of points are assigned finitely many colors, then an infinite subset has k-tuples of only one color.
Audience and reception
The book is written at a level appropriate for undergraduate students in mathematics, and assumes a background knowledge in real analysis and undergraduate-level geometry. One goal of the book is to expose students at this level to research-level problems in mathematics whose statement is readily accessible.
References
Discrete geometry
Mathematics books
1964 non-fiction books | Combinatorial Geometry in the Plane | [
"Mathematics"
] | 860 | [
"Discrete geometry",
"Discrete mathematics"
] |
64,318,108 | https://en.wikipedia.org/wiki/Vertex%20cover%20in%20hypergraphs | In graph theory, a vertex cover in a hypergraph is a set of vertices, such that every hyperedge of the hypergraph contains at least one vertex of that set. It is an extension of the notion of vertex cover in a graph.
An equivalent term is a hitting set: given a collection of sets, a set which intersects all sets in the collection in at least one element is called a hitting set. The equivalence can be seen by mapping the sets in the collection onto hyperedges.
Another equivalent term, used more in a combinatorial context, is transversal. However, some definitions of transversal require that every hyperedge of the hypergraph contains precisely one vertex from the set.
Definition
Recall that a hypergraph H is a pair (V, E), where V is a set of vertices and E is a set of subsets of V called hyperedges. Each hyperedge may contain one or more vertices.
A vertex-cover (aka hitting set or transversal) in H is a set T ⊆ V such that, for all hyperedges e ∈ E, it holds that T ∩ e ≠ ∅.
The vertex-cover number (aka transversal number) of a hypergraph H is the smallest size of a vertex cover in H. It is often denoted by τ(H).
For example, if H is this 3-uniform hypergraph:

{ {1,2,3}, {1,4,5}, {4,5,6}, {2,3,6} }

then H admits several vertex-covers of size 2, for example:

{1, 6}

However, no subset of size 1 hits all the hyperedges of H. Hence the vertex-cover number of H is 2.
Note that we get back the case of vertex covers for simple graphs if the maximum size of the hyperedges is 2.
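For small instances the vertex-cover number can be computed by exhaustive search over candidate sets of increasing size; the following sketch (illustrative only, and exponential-time) verifies the example above:

from itertools import combinations

def vertex_cover_number(vertices, hyperedges):
    """Smallest size of a set hitting every (non-empty) hyperedge."""
    for size in range(len(vertices) + 1):
        for cand in combinations(vertices, size):
            s = set(cand)
            if all(s & set(e) for e in hyperedges):
                return size, s

H = [{1, 2, 3}, {1, 4, 5}, {4, 5, 6}, {2, 3, 6}]
print(vertex_cover_number(range(1, 7), H))  # -> (2, {1, 6})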
Algorithms
The computational problems minimum hitting set and hitting set are defined as in the case of graphs.
If the maximum size of a hyperedge is restricted to d, then the problem of finding a minimum d-hitting set permits a d-approximation algorithm. Assuming the unique games conjecture, this is the best constant-factor algorithm that is possible and otherwise there is the possibility of improving the approximation to d − 1.
For the hitting set problem, different parametrizations make sense. The hitting set problem is W[2]-complete for the parameter OPT, that is, it is unlikely that there is an algorithm that runs in time f(OPT) · n^O(1) where OPT is the cardinality of the smallest hitting set. The hitting set problem is fixed-parameter tractable for the combined parameter (OPT, d), where d is the size of the largest edge of the hypergraph. More specifically, there is an algorithm for hitting set that runs in time d^OPT · n^O(1).
Hitting set and set cover
The hitting set problem is equivalent to the set cover problem: An instance of set cover can be viewed as an arbitrary bipartite graph, with sets represented by vertices on the left, elements of the universe represented by vertices on the right, and edges representing the inclusion of elements in sets. The task is then to find a minimum cardinality subset of left-vertices which covers all of the right-vertices. In the hitting set problem, the objective is to cover the left-vertices using a minimum subset of the right vertices. Converting from one problem to the other is therefore achieved by interchanging the two sets of vertices.
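The conversion amounts to transposing the incidence structure; a minimal sketch (the function name is illustrative):

def hitting_set_to_set_cover(universe, sets):
    """For each element x, collect the indices of the sets that x hits;
    hitting all of 'sets' with few elements becomes covering all set
    indices with few of these index collections."""
    return {x: {i for i, s in enumerate(sets) if x in s} for x in universe}

sets = [{1, 2, 3}, {1, 4, 5}, {4, 5, 6}, {2, 3, 6}]
print(hitting_set_to_set_cover(range(1, 7), sets)[1])  # element 1 hits sets {0, 1}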
Applications
An example of a practical application involving the hitting set problem arises in efficient dynamic detection of race conditions. In this case, each time global memory is written, the current thread and set of locks held by that thread are stored. Under lockset-based detection, if later another thread writes to that location and there is not a race, it must be because it holds at least one lock in common with each of the previous writes. Thus the size of the hitting set represents the minimum lock set size to be race-free. This is useful in eliminating redundant write events, since large lock sets are considered unlikely in practice.
Fractional vertex cover
A fractional vertex-cover is a function assigning a weight in [0, 1] to each vertex in V, such that for every hyperedge e in E, the sum of fractions of vertices in e is at least 1. A vertex cover is a special case of a fractional vertex cover in which all weights are either 0 or 1. The size of a fractional vertex-cover is the sum of fractions of all vertices.
The fractional vertex-cover number of a hypergraph H is the smallest size of a fractional vertex-cover in H. It is often denoted by τ*(H).
Since a vertex-cover is a special case of a fractional vertex-cover, for every hypergraph H: fractional-vertex-cover-number(H) ≤ vertex-cover-number(H); in symbols: τ*(H) ≤ τ(H). The fractional-vertex-cover-number of a hypergraph is, in general, smaller than its vertex-cover-number. A theorem of László Lovász provides an upper bound on the ratio between them:
If each vertex is contained in at most d hyperedges (i.e., the degree of the hypergraph is at most d), then τ(H) ≤ (1 + ln d) · τ*(H).
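The fractional vertex-cover number can be written as the optimum of a linear program (a standard restatement of the definition above):

\tau^*(H) = \min \sum_{v \in V} w(v)
\quad \text{subject to} \quad
\sum_{v \in e} w(v) \ge 1 \ \text{for every } e \in E,
\qquad 0 \le w(v) \le 1.

By linear programming duality, this optimum equals the fractional matching number of H.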
Transversals in finite projective planes
A finite projective plane is a hypergraph in which every two hyperedges intersect. Every finite projective plane is r-uniform for some integer r. Denote by Hr the r-uniform projective plane. The following projective planes are known to exist:
H2: it is simply a triangle graph.
H3: it is the Fano plane.
Hq+1 exists whenever q is the power of a prime number.
When Hr exists, it has the following properties:
It has r^2 − r + 1 vertices and r^2 − r + 1 hyperedges.
It is r-uniform - each hyperedge contains exactly r vertices.
It is r-regular - each vertex is contained in exactly r hyperedges.
τ(Hr) ≤ r: the vertices in each hyperedge e are a vertex-cover of Hr (since every other hyperedge intersects e).
The only transversals of size r are the hyperedges; all other transversals have size at least r + 1.
Hence τ(Hr) = r.
ν(Hr) = 1: every matching in Hr contains at most a single hyperedge.
Minimal transversals
A vertex-cover (transversal) T is called minimal if no proper subset of T is a transversal.
The transversal hypergraph of H is the hypergraph whose hyperedge set consists of all minimal transversals of H.
Computing the transversal hypergraph has applications in combinatorial optimization, in game theory, and in several fields of computer science such as machine learning, indexing of databases, the satisfiability problem, data mining, and computer program optimization.
See also
Matching in hypergraphs – discusses also the duality between vertex-cover and matching.
References
Graph theory
Hypergraphs | Vertex cover in hypergraphs | [
"Mathematics"
] | 1,286 | [
"Discrete mathematics",
"Mathematical relations",
"Graph theory",
"Combinatorics"
] |
64,319,867 | https://en.wikipedia.org/wiki/Plasmonics%20%28journal%29 | Plasmonics is a bimonthly peer-reviewed scientific journal covering plasmonics, including the theory of plasmonic metamaterials, fluorescence and surface-enhanced Raman spectroscopy. It is published by Springer Science+Business Media. Its current editor is Chris D. Geddes, Director of the Institute of Fluorescence at the University of Maryland Biotechnology Institute. According to the Journal Citation Reports, the journal has a 2023 impact factor of 3.3.
Abstracting and indexing
The journal is abstracted and indexed in:
Science Citation Index Expanded
Current Contents/Physical, Chemical & Earth Sciences
EBSCO Academic Search
EBSCO Discovery Service
EBSCO Engineering Source
EBSCO STM Source
ProQuest Advanced Technologies & Aerospace Database
ProQuest Central
ProQuest SciTech Premium Collection
ProQuest Technology Collection
ProQuest-ExLibris Primo
ProQuest-ExLibris Summon
References
External links
Springer Science+Business Media academic journals
Optics journals
Nanotechnology journals
Materials science journals
English-language journals
Bimonthly journals
Academic journals established in 2006 | Plasmonics (journal) | [
"Materials_science",
"Engineering"
] | 216 | [
"Nanotechnology journals",
"Materials science journals",
"Materials science"
] |
64,322,092 | https://en.wikipedia.org/wiki/M-%CE%B1-HMCA | M-α-HMCA (3-(benzo[d][1,3]dioxol-5-yl)-2-hydroxy-N,2-dimethyl-3-(methylamino)propanamide) is an unintentional sideproduct during the synthesis of MDMA using PMK glycidate as a precursor. It was identified in MDMA pills. The biological properties of this molecule have not yet been documented. The backbone of the chemical structure is M-alpha whose psychoactive activity has been described by Alexander Shulgin.
References
Amides
Amines
Benzodioxoles | M-α-HMCA | [
"Chemistry"
] | 136 | [
"Amides",
"Functional groups"
] |
64,322,614 | https://en.wikipedia.org/wiki/Vecchia%20approximation | Vecchia approximation is a Gaussian processes approximation technique originally developed by Aldo Vecchia, a statistician at United States Geological Survey. It is one of the earliest attempts to use Gaussian processes in high-dimensional settings. It has since been extensively generalized giving rise to many contemporary approximations.
Intuition
A joint probability distribution for events x1, x2, and x3, denoted P(x1, x2, x3), can be expressed as

P(x1, x2, x3) = P(x1) P(x2 | x1) P(x3 | x1, x2).

Vecchia's approximation takes the form, for example,

P(x1, x2, x3) ≈ P(x1) P(x2 | x1) P(x3 | x2),

and is accurate when events x1 and x3 are close to conditionally independent given knowledge of x2. Of course one could have alternatively chosen the approximation

P(x1, x2, x3) ≈ P(x1) P(x2 | x1) P(x3 | x1),

and so use of the approximation requires some knowledge of which events are close to conditionally independent given others. Moreover, we could have chosen a different ordering, for example

P(x1, x2, x3) ≈ P(x3) P(x2 | x3) P(x1 | x2).
Fortunately, in many cases there are good heuristics making decisions about how to construct the approximation.
More technically, general versions of the approximation lead to a sparse Cholesky factor of the precision matrix. Using the standard Cholesky factorization produces entries which can be interpreted as conditional correlations, with zeros indicating conditional independence (since the model is Gaussian). These independence relations can be alternatively expressed using graphical models and there exist theorems linking graph structure and vertex ordering with zeros in the Cholesky factor. In particular, it is known that independencies that are encoded in a moral graph lead to Cholesky factors of the precision matrix that have no fill-in.
Formal description
The problem
Let X be a Gaussian process indexed by a set S, with mean function μ and covariance function K. Assume that S0 = {s1, …, sn} is a finite subset of S and x = (x1, …, xn) is a vector of values of X evaluated at S0, i.e. xi = X(si) for i = 1, …, n. Assume further that one observes y = (y1, …, yn), where yi = xi + εi with εi independent Gaussian noise terms of variance σ².
In this context the two most common inference tasks include evaluating the likelihood of the observations y,
or making predictions of values of X(s*) for indices s* ∈ S with s* ∉ S0, i.e. calculating the conditional distribution of X(s*) given y.
Original formulation
The original Vecchia method starts with the observation that the joint density of observations y = (y1, …, yn) can be written as a product of conditional distributions

p(y) = p(y1) ∏ i=2..n p(yi | y1, …, yi−1).

Vecchia approximation assumes instead that for some k ≪ n

p(y) ≈ p(y1) ∏ i=2..n p(yi | yi−k, …, yi−1).

Vecchia also suggested that the above approximation be applied to observations that are reordered lexicographically using their spatial coordinates. While his simple method has many weaknesses, it reduced the computational complexity to O(n k^3). Many of its deficiencies were addressed by the subsequent generalizations.
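A minimal sketch of this original scheme (the exponential covariance, the zero mean and the simulated data are illustrative choices) conditions each observation on at most its k predecessors in the ordering:

import numpy as np

def vecchia_loglik(y, locs, cov_fn, k):
    """Vecchia log-likelihood: O(n k^3) instead of the exact O(n^3)."""
    n, ll = len(y), 0.0
    for i in range(n):
        nb = list(range(max(0, i - k), i))      # up to k previous indices
        idx = np.array(nb + [i])
        K = cov_fn(locs[idx, None], locs[None, idx])
        if nb:
            w = np.linalg.solve(K[:-1, :-1], K[:-1, -1])
            mu = w @ y[nb]                       # conditional mean
            var = K[-1, -1] - K[:-1, -1] @ w     # conditional variance
        else:
            mu, var = 0.0, K[-1, -1]
        ll += -0.5 * (np.log(2 * np.pi * var) + (y[i] - mu) ** 2 / var)
    return ll

rng = np.random.default_rng(0)
locs = np.sort(rng.uniform(0, 10, 200))
cov = lambda s, t: np.exp(-np.abs(s - t))        # exponential covariance
y = rng.multivariate_normal(np.zeros(200), cov(locs[:, None], locs[None, :]))
print(vecchia_loglik(y, locs, cov, k=10))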
General formulation
While conceptually simple, the assumption of the Vecchia approximation often proves to be fairly restrictive and inaccurate. This inspired important generalizations and improvements introduced in the basic version over the years: the inclusion of latent variables, more sophisticated conditioning and better ordering. Different special cases of the general Vecchia approximation can be described in terms of how these three elements are selected.
Latent variables
To describe extensions of the Vecchia method in its most general form, define z = (x, y) and notice that for its joint density it holds that, like in the previous section,

p(z) = p(x1) ∏ i=2..n p(xi | x1, …, xi−1) ∏ i=1..n p(yi | xi),

because given xi all other variables are independent of yi.
Ordering
It has been widely noted that the original lexicographic ordering based on coordinates when S is two-dimensional produces poor results. More recently other orderings have been proposed, some of which ensure that points are ordered in a quasi-random fashion. Highly scalable, they have been shown to also drastically improve accuracy.
Conditioning
Similar to the basic version described above, for a given ordering a general Vecchia approximation can be defined as

p̂(z) = p(x1) ∏ i=2..n p(xi | xq(i)) ∏ i=1..n p(yi | xi),

where q(i) ⊆ {1, …, i − 1}. Since each yi depends on the rest of z only through xi, the latent variables in the conditioning sets can in principle be exchanged for the corresponding observations. It turns out, however, that sometimes conditioning on some of the observations increases sparsity of the Cholesky factor of the precision matrix of z. Therefore, one might instead consider sets A(i) and B(i) such that A(i) ∪ B(i) ⊆ {1, …, i − 1} and express p̂ as

p̂(z) = ∏ i=1..n p(xi | xA(i), yB(i)) p(yi | xi).

Multiple methods of choosing A(i) and B(i) have been proposed, most notably the nearest-neighbour Gaussian process (NNGP), meshed Gaussian process and multi-resolution approximation (MRA) approaches using B(i) = ∅, standard Vecchia using A(i) = ∅ and Sparse General Vecchia where both A(i) and B(i) are non-empty.
Software
Several packages have been developed which implement some variants of the Vecchia approximation.
GPvecchia is an R package available through CRAN which implements most versions of the Vecchia approximation
GpGp is an R package available through CRAN which implements a scalable ordering method for spatial problems which greatly improves accuracy.
spNNGP is an R package available through CRAN which implements the latent Vecchia approximation
pyMRA is a Python package available through pyPI implementing Multi-resolution approximation, a special case of the general Vecchia method used in dynamic state-space models
meshed is an R package available through CRAN which implements Bayesian spatial or spatiotemporal multivariate regression models based on a latent Meshed Gaussian Process (MGP) using Vecchia approximations on partitioned domains
Notes
Geostatistics
Computational science
Computational statistics
Statistical software | Vecchia approximation | [
"Mathematics"
] | 969 | [
"Applied mathematics",
"Computational mathematics",
"Computational science",
"Statistical software",
"Computational statistics",
"Mathematical software"
] |
64,322,620 | https://en.wikipedia.org/wiki/Gaussian%20process%20approximations | In statistics and machine learning, Gaussian process approximation is a computational method that accelerates inference tasks in the context of a Gaussian process model, most commonly likelihood evaluation and prediction. Like approximations of other models, they can often be expressed as additional assumptions imposed on the model, which do not correspond to any actual feature, but which retain its key properties while simplifying calculations. Many of these approximation methods can be expressed in purely linear algebraic or functional analytic terms as matrix or function approximations. Others are purely algorithmic and cannot easily be rephrased as a modification of a statistical model.
Basic ideas
In statistical modeling, it is often convenient to assume that X, the phenomenon under investigation, is a Gaussian process indexed by S which has mean function μ and covariance function K.
One can also assume that data y = (y1, …, yn) are values of a particular realization of this process for indices s1, …, sn.
Consequently, the joint distribution of the data can be expressed as

y ∼ N(μ, K),

where K = (K(si, sj)) i,j=1..n and μ = (μ(s1), …, μ(sn)), i.e. respectively a matrix with the covariance function values and a vector with the mean function values at corresponding (pairs of) indices.
The negative log-likelihood of the data then takes the form

−log p(y) = (1/2) ( n log(2π) + log det K + (y − μ)ᵀ K⁻¹ (y − μ) ).

Similarly, the best predictor of x*, the values of X for indices s*, given data y has the form

E[x* | y] = μ* + K*ᵀ K⁻¹ (y − μ),

where K* collects the covariances between the indices s* and s1, …, sn, and μ* is the mean function evaluated at s*.
In the context of Gaussian models, especially in geostatistics, prediction using the best predictor, i.e. mean conditional on the data, is also known as kriging.
The most computationally expensive component of the best predictor formula is inverting the covariance matrix K, which has cubic complexity O(n³). Similarly, evaluating likelihood involves calculating both K⁻¹ and the determinant of K, which has the same cubic complexity.
Gaussian process approximations can often be expressed in terms of assumptions on X under which K⁻¹ and det K can be calculated with much lower complexity. Since these assumptions are generally not believed to reflect reality, the likelihood and the best predictor obtained in this way are not exact, but they are meant to be close to their original values.
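For reference, here is a minimal sketch of the exact O(n³) computations that the approximations below try to avoid (a zero mean and the variable names are illustrative simplifications; the Cholesky factorization carries the cubic cost):

import numpy as np

def gp_loglik_and_predict(y, K, k_star, k_starstar):
    """Exact Gaussian log-likelihood and kriging mean/variance (zero mean).
    K: n x n data covariance, k_star: covariances between the data and a
    new point, k_starstar: prior variance at the new point."""
    n = len(y)
    L = np.linalg.cholesky(K)                       # the O(n^3) step
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    loglik = (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
              - 0.5 * n * np.log(2 * np.pi))
    v = np.linalg.solve(L, k_star)
    return loglik, k_star @ alpha, k_starstar - v @ v  # kriging mean, variance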
Model-based methods
This class of approximations is expressed through a set of assumptions which are imposed on the original process and which, typically, imply some special structure of the covariance matrix. Although most of these methods were developed independently, most of them can be expressed as special cases of the sparse general Vecchia approximation.
Sparse covariance methods
These methods approximate the true model in a way the covariance matrix is sparse. Typically, each method proposes its own algorithm that takes the full advantage of the sparsity pattern in the covariance matrix. Two prominent members of this class of approaches are covariance tapering and domain partitioning. The first method generally requires a metric d over S and assumes that we have K(si, sj) ≠ 0 only if d(si, sj) ≤ r for some radius r. The second method assumes that there exists a partition of S into subsets S1, …, Sp such that K(si, sj) = 0 whenever si and sj belong to different subsets. Then with appropriate distribution of indices among partition elements and ordering of elements of S the covariance matrix is block diagonal.
Sparse precision methods
This family of methods assumes that the precision matrix Q = K⁻¹ is sparse and generally specifies which of its elements are non-zero. This leads to fast inversion because only those elements need to be calculated. Some of the prominent approximations in this category include the approach based on the equivalence between Gaussian processes with Matern covariance function and stochastic PDEs, periodic embedding, and Nearest Neighbour Gaussian processes. The first method applies to the case of a Matern covariance function and when S has a defined metric, and takes advantage of the fact that the Markov property holds, which makes the precision matrix very sparse. The second extends the domain and uses Discrete Fourier Transform to decorrelate the data, which results in a diagonal precision matrix. The third one requires a metric on S and takes advantage of the so-called screening effect, assuming that Qij ≠ 0 only if sj is among a small number of points nearest to si.
Sparse Cholesky factor methods
In many practical applications, calculating K⁻¹ is replaced with computing first L, the Cholesky factor of K, and second its inverse L⁻¹. This is known to be more stable than a plain inversion. For this reason, some authors focus on constructing a sparse approximation of the Cholesky factor of the precision or covariance matrices. One of the most established methods in this class is the Vecchia approximation and its generalization. These approaches determine the optimal ordering of indices and then assume a dependency structure which minimizes fill-in in the Cholesky factor. Several other methods can be expressed in this framework, such as the Multi-resolution Approximation (MRA), Nearest Neighbour Gaussian Process, Modified Predictive Process and Full-scale approximation.
Low-rank methods
While this approach encompasses many methods, the common assumption underlying them all is that X, the Gaussian process of interest, is effectively low-rank. More precisely, it is assumed that there exists a small set of k indices S* such that, conditionally on the values of X at S*, the values at every other set of indices are nearly independent; this leads to an approximation of the form

K ≈ U C Uᵀ + D,

where U is an n × k matrix, C is a k × k matrix and D is a diagonal matrix. Depending on the method and application various ways of selecting S* have been proposed. Typically, k is selected to be much smaller than n, which means that the computational cost of inverting the k × k block is manageable (O(k³) instead of O(n³)).
More generally, on top of selecting S*, one may also find an n × k matrix Φ and assume that x is approximated by Φ x*, where x* are values of a Gaussian process possibly independent of X. Many machine learning methods fall into this category, such as subset-of-regressors (SoR), relevance vector machine, sparse spectrum Gaussian Process and others, and they generally differ in the way they derive Φ and x*.
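The computational payoff of the low-rank-plus-diagonal structure K ≈ U C Uᵀ + D comes from the Woodbury matrix identity, which reduces the inversion to a k × k problem at O(n k²) cost:

(U C U^{\mathsf T} + D)^{-1}
  = D^{-1} - D^{-1} U \left( C^{-1} + U^{\mathsf T} D^{-1} U \right)^{-1} U^{\mathsf T} D^{-1}.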
Hierarchical methods
The general principle of hierarchical approximations consists of a repeated application of some other method, such that each consecutive application refines the quality of the approximation. Even though they can be expressed as a set of statistical assumptions, they are often described in terms of a hierarchical matrix approximation (HODLR) or basis function expansion (LatticeKrig, MRA, wavelets). The hierarchical matrix approach can often be represented as a repeated application of a low-rank approximation to successively smaller subsets of the index set S. Basis function expansion relies on using functions with compact support. These features can then be exploited by an algorithm which steps through consecutive layers of the approximation. In the most favourable settings some of these methods can achieve quasi-linear (O(n log n)) complexity.
Unified framework
Probabilistic graphical models provide a convenient framework for comparing model-based approximations. In this context, value of the process at index can then be represented by a vertex in a directed graph and edges correspond to the terms in the factorization of the joint density of . In general, when no independent relations are assumed, the joint probability distribution can be represented by an arbitrary directed acyclic graph. Using a particular approximation can then be expressed as a certain way of ordering the vertices and adding or removing specific edges.
Methods without a statistical model
This class of methods does not specify a statistical model or impose assumptions on an existing one. Three major members of this group are the meta-kriging algorithm, the gapfill algorithm and the Local Approximate Gaussian Process approach. The first one partitions the set of indices into components S1, …, Sp, calculates the conditional distribution for each of those components separately and then uses the geometric median of the conditional PDFs to combine them. The second is based on quantile regression using values of the process which are close to the value one is trying to predict, where distance is measured in terms of a metric on the set of indices. The Local Approximate Gaussian Process uses a similar logic but constructs a valid stochastic process based on these neighboring values.
References
Geostatistics
Computational science
Computational statistics | Gaussian process approximations | [
"Mathematics"
] | 1,531 | [
"Computational science",
"Applied mathematics",
"Computational statistics",
"Computational mathematics"
] |
64,322,655 | https://en.wikipedia.org/wiki/SILAM | SILAM (System for Integrated Modeling of Atmospheric Composition) is a global-to-meso-scale atmospheric dispersion model developed by the Finnish Meteorological Institute (FMI).
Model
It provides information on atmospheric composition, air quality, and wildfire smoke (PM2.5) and is also able to solve the inverse dispersion problem. It can take data from a variety of sources, including natural ones such as sea salt, blown dust, and pollen.
The FMI provides three datasets based on SILAM: a 4-day global air pollutant (SO2, NO, NO2, O3, PM2.5, and PM10) forecast based on TNO-MACC (global emission) and IS4FIRES (wildfire), a 5-day global wildfire smoke forecast based on IS4FIRES, and a 5-day pollen forecast for Europe.
References
Atmospheric dispersion modeling
Air pollution | SILAM | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 194 | [
"Atmospheric dispersion modeling",
"Environmental modelling",
"Environmental engineering"
] |
64,323,695 | https://en.wikipedia.org/wiki/The%20CRISPR%20Journal | The CRISPR Journal is a peer-reviewed scientific journal published every three months by Mary Ann Liebert. It covers all aspects of CRISPR research, including its uses in synthetic biology and genome editing.
Editors
Its editor-in-chief is Rodolphe Barrangou. The journal's editorial board includes key figures in the development of CRISPR technology, including Jennifer Doudna, Emmanuelle Charpentier, and George Church. The inaugural issue of the journal was published in February 2018.
See also
Genome editing
External links
Genome editing
Biology journals
Academic journals established in 2018
English-language journals
Mary Ann Liebert academic journals | The CRISPR Journal | [
"Engineering",
"Biology"
] | 127 | [
"Genetics techniques",
"Genetic engineering",
"Genome editing"
] |
44,370,787 | https://en.wikipedia.org/wiki/First%20passage%20percolation | First passage percolation is a mathematical method used to describe the paths reachable in a random medium within a given amount of time.
Introduction
First passage percolation is one of the most classical areas of probability theory. It was first introduced by John Hammersley and Dominic Welsh in 1965 as a model of fluid flow in a porous medium. It is part of percolation theory, and classical Bernoulli percolation can be viewed as a subset of first passage percolation.
Most of the beauty of the model lies in its simple definition (as a random metric space) and in the fact that several of its fascinating conjectures can be stated with little effort. Usually, the goal of first passage percolation is to understand a random distance on a graph, where weights are assigned to edges. Most questions are tied either to finding the path with the least weight between two points, known as a geodesic, or to understanding how the random geometry behaves at large scales.
Mathematics
As is the case in percolation theory in general, many of the problems related to first passage percolation involve finding optimal routes or optimal times.
The model is defined as follows.
Let G = (V, E) be a graph. We place a non-negative random variable t(e), called the passage time of the edge e, at each nearest-neighbor edge of the graph G. The collection {t(e) : e ∈ E} is usually assumed to be independent and identically distributed, but there are variants of the model.
The random variable t(e) is interpreted as the time or the cost needed to traverse the edge e.
Since each edge in first passage percolation has its own individual weight (or time), we can write the total time of a path γ as the sum of the weights of the edges in the path: T(γ) = Σ_{e ∈ γ} t(e).
Given two vertices v, w of G one then sets
T(v, w) = inf T(γ),
where the infimum is over all finite paths γ that start at v and end at w.
The function T induces a random pseudo-metric on G.
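Because T(v, w) is a shortest-path distance with non-negative edge weights, passage times from a fixed source can be computed with Dijkstra's algorithm. The sketch below, with illustrative names, samples i.i.d. Exp(1) passage times on an n × n grid; the ball of radius t is then simply {v : T(source, v) ≤ t}.

```python
import heapq
import random

def passage_times(n, source=(0, 0)):
    """First passage times T(source, v) on an n x n grid graph.

    Each nearest-neighbour edge receives an i.i.d. Exp(1) passage time,
    sampled lazily; T is computed with Dijkstra's algorithm.
    """
    def neighbours(v):
        x, y = v
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            u = (x + dx, y + dy)
            if 0 <= u[0] < n and 0 <= u[1] < n:
                yield u

    weight = {}  # each undirected edge weight is sampled exactly once
    def w(u, v):
        e = (min(u, v), max(u, v))
        if e not in weight:
            weight[e] = random.expovariate(1.0)
        return weight[e]

    T = {source: 0.0}
    heap = [(0.0, source)]
    done = set()
    while heap:
        t, v = heapq.heappop(heap)
        if v in done:
            continue
        done.add(v)
        for u in neighbours(v):
            nt = t + w(v, u)
            if nt < T.get(u, float("inf")):
                T[u] = nt
                heapq.heappush(heap, (nt, u))
    return T
```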
The most famous model of first passage percolation is on the lattice Z^d. One of its best-known questions is "What does a ball of large radius look like?". This question was raised in the original paper of Hammersley and Welsh in 1965 and gave rise to the Cox-Durrett limit shape theorem in 1981.
Although the Cox-Durrett theorem provides existence of the limit shape, not many properties of this set are known. For instance, it is expected that under mild assumptions this set should be strictly convex. As of 2016, the best result is the existence of the Auffinger-Damron differentiability point in the Cox-Liggett flat edge case.
There are also some specific examples of first passage percolation that can be modeled using Markov chains. For example, a complete graph can be described using Markov chains and recursive trees, and strips of width 2 can be described using a Markov chain and solved using a Harris chain.
Applications
First passage percolation is well-known for giving rise to other tools of mathematics, including the Subadditive Ergodic Theorem, a fundamental result in ergodic theory.
Outside mathematics, the Eden growth model is used to model bacterial growth and deposition of material. Another example is comparing a minimized cost from the Vickrey–Clarke–Groves auction (VCG auction) to a minimized path from first passage percolation to gauge how pessimistic the VCG auction is at its lower limit. Both problems are solved similarly, and one can find distributions to use in auction theory.
References
Network theory
Combinatorics
Percolation theory | First passage percolation | [
"Physics",
"Chemistry",
"Mathematics"
] | 720 | [
"Physical phenomena",
"Phase transitions",
"Discrete mathematics",
"Percolation theory",
"Graph theory",
"Combinatorics",
"Network theory",
"Mathematical relations",
"Statistical mechanics"
] |
44,370,960 | https://en.wikipedia.org/wiki/SIMPLEC%20algorithm | The SIMPLEC (Semi-Implicit Method for Pressure Linked Equations-Consistent) algorithm, a modified form of the SIMPLE algorithm, is a commonly used numerical procedure in the field of computational fluid dynamics to solve the Navier–Stokes equations.
This algorithm was developed by Van Doormal and Raithby in 1984. The algorithm follows the same steps as the SIMPLE algorithm, with the variation that the momentum equations are manipulated, allowing the SIMPLEC velocity correction equations to omit terms that are less significant than those omitted in SIMPLE. This modification attempts to minimize the effects of dropping velocity neighbor correction terms.
Algorithm
The steps involved are the same as in the SIMPLE algorithm, and the algorithm is iterative in nature. p*, u*, v* are the guessed pressure, x-direction velocity and y-direction velocity respectively; p', u', v' are the corresponding correction terms; and p, u, v are the correct fields. Φ is the property for which we are solving, and the d terms involve the under-relaxation factor. The steps are as follows (a structural code sketch of the overall loop follows the list):
1. Specify the boundary conditions and guess the initial values.
2. Determine the velocity and pressure gradients.
3. Calculate the pseudo velocities.
4. Solve the pressure equation and obtain p.
5. Set p*=p.
6. Using p* solve the discretized momentum equation and get u* and v*.
7. Solve the pressure correction equation.
8. Get the pressure correction term and evaluate the corrected velocities and get p, u, v, Φ*.
9. Solve all other discretized transport equations.
10. If Φ has converged, then stop; if not, set p*=p, u*=u, v*=v, Φ*=Φ and start the iteration again.
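The following is a structural sketch of this outer loop in Python; all discretization details are hidden behind the supplied callables, whose names and signatures are illustrative rather than part of any standard CFD library.

```python
def simplec_loop(solve_momentum, solve_pressure_correction,
                 correct_fields, solve_scalars,
                 p0, u0, v0, tol=1e-6, max_iter=500):
    """Outer SIMPLEC iteration (structural sketch only).

    Steps 1-5 (boundary conditions, initial guesses, pseudo-velocities,
    initial pressure solve) are assumed done and summarized by p0, u0, v0.
    Each callable encapsulates one discretized sub-problem.
    """
    p, u, v = p0, u0, v0
    for _ in range(max_iter):
        u_star, v_star = solve_momentum(p)                    # step 6
        p_prime = solve_pressure_correction(u_star, v_star)   # step 7
        p, u, v = correct_fields(p, u_star, v_star, p_prime)  # step 8
        phi, residual = solve_scalars(u, v)                   # step 9
        if residual < tol:                                    # step 10
            return p, u, v, phi
    raise RuntimeError("SIMPLEC did not converge")
```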
Peculiar features
The discretized pressure correction equation is the same as in the SIMPLE algorithm, except for the d terms used in the momentum equations.
p = p* + p', which shows that the under-relaxation factor applied to pressure in SIMPLE is not needed in SIMPLEC.
The SIMPLEC algorithm is seen to converge 1.2-1.3 times faster than the SIMPLE algorithm.
Unlike the SIMPLER algorithm, it does not solve extra equations.
The cost per iteration is the same as in the case of SIMPLE.
As in SIMPLE, a bad pressure field guess will destroy a good velocity field.
See also
SIMPLE algorithm
SIMPLER algorithm
Navier–Stokes equations
References
Computational fluid dynamics | SIMPLEC algorithm | [
"Physics",
"Chemistry"
] | 499 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Computational physics"
] |
44,372,439 | https://en.wikipedia.org/wiki/Computational%20methods%20for%20free%20surface%20flow | In physics, a free surface flow is a flow of fluid whose bounding surface is subject to both zero normal stress perpendicular to the surface and zero shear stress parallel to it. This surface can be the boundary between two homogeneous fluids, like water in an open container and the air in the Earth's atmosphere, which form a boundary at the open face of the container.
Computation of free surfaces is complex because of the continuous change in the location of the boundary. Conventional methods of computation are insufficient for such analysis, so special methods have been developed for the computation of free surface flows.
Introduction
Computation of flows with free and moving boundaries, like open-channel flow, is a difficult task. The position of the boundary is known only at the initial time, and its location at later times has to be determined, using methods such as the interface tracking method and the interface capturing method.
Boundary conditions
Neglecting the phase change at the free surface, the following boundary conditions apply.
Kinematic condition
The free surface should be a sharp boundary separating the two fluids. There should be no flow through this boundary, i.e.,
(v − v_fs) · n = 0,
or equivalently n · v = n · v_fs,
where the subscript fs stands for the free surface. This implies that the normal component of the velocity of the fluid at the surface is equal to the normal component of the velocity of the free surface.
Dynamic condition
The forces acting on the fluid at free surface should be in equilibrium, i.e. the momentum is conserved at the free surface. The normal forces on either side of the free surface are equal and opposite in direction and the forces in tangential direction should be equal in magnitude and direction.
Here σ is the surface tension, n, t and s are unit vectors in a local orthogonal coordinate system (n,t,s) at the free surface (n is outward normal to the free surface while the other two lie in the tangential plane and are mutually orthogonal). The indices 'l' and 'g' denote liquid and gas, respectively and K is the curvature of the free surface.
K = 1/Rt + 1/Rs, with Rt and Rs being the radii of curvature along the coordinates t and s.
The surface tension σ is force per unit length of a surface element and acts tangential to the free surface.
For an infinitesimally small surface element dS, the tangential components of the surface tension forces cancel out when σ = constant, and the normal component can be expressed as a local force that results in a pressure jump across the surface.
Methods of computation
Interface tracking method
This is a method that treats the free surface as a sharp interface whose motion is followed. In this method, boundary-fitted grids are used and advanced each time the free surface is moved.
The interface tracking method is useful in situations like the calculation of flow around submerged bodies. In this case the free surface is linearized about its unperturbed state, and a height function is introduced for the free surface elevation relative to that state.
This gives the kinematic boundary condition a new form: ∂h/∂t + u ∂h/∂x + v ∂h/∂y = w, where h is the free-surface elevation and u, v, w are the components of the fluid velocity.
This equation can be integrated, and the fluid velocity at the free surface can be obtained either by extrapolation from the interior or by using the dynamic boundary condition. For the calculation of the flow, the finite volume (FV) method is widely used. The steps for a fully conservative FV method of this type are:
The momentum equation is solved to obtain the velocity at the current free surface using the specified pressure.
Local mass conservation is enforced in each CV by solving a pressure-correction equation. Mass is conserved both globally and locally, but a velocity correction is produced at the free surface, giving a non-zero mass flux.
The position of the free surface is corrected to compensate for the non-zero mass flux, matching it with the volume flux due to the movement of each free-surface cell face, thereby enforcing the kinematic boundary condition.
Iterate until no further correction is needed, satisfying the continuity and momentum equations.
Advance to the next time step.
The main problem with the algorithm in this procedure is that there is only one equation per cell but a large number of moving grid nodes. To avoid instability and wave reflection, the method is modified as follows:
From the previous steps, we can calculate the volume of fluid that must flow in or out of each CV to conserve mass. To obtain the coordinates of the CV vertices at the free surface, there are more unknowns than equations, because there is a single volumetric flow rate for each cell.
Hence the CVs are defined by the cell-face centers rather than the vertices, and the vertices are obtained by interpolation. This gives a tridiagonal system in 2D, which can be solved using the TDMA method. In 3D, the system is block tridiagonal and is best solved by one of the iterative solvers.
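For reference, a minimal sketch of the TDMA (Thomas algorithm) for one tridiagonal system is shown below; the array names are illustrative.

```python
import numpy as np

def tdma(a, b, c, d):
    """Thomas algorithm for a tridiagonal system.

    a: sub-diagonal (a[0] unused), b: main diagonal, c: super-diagonal
    (c[-1] unused), d: right-hand side. Returns the solution x.
    """
    n = len(d)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                  # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):         # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```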
Interface capturing method
In computation of two-fluid flows, in some cases the interface might be too complex to track while keeping the frequency of re-meshing at an acceptable level. Not being able to reduce the frequency of re-meshing in 3D might introduce overwhelming mesh generation and projection costs, making the computations with the interface-tracking technique no longer feasible. In such cases, interface-capturing techniques, which do not normally require costly mesh update steps, could be used with the understanding that the interface will not be represented as accurately as we would have with an interface-tracking technique.
Interface capturing methods do not define the interface as a sharp boundary. A fixed grid extends beyond the free surface, over which the computation is performed. To determine the shape of the free surface, the fraction of each cell near the interface that is partially filled is computed.
Marker-and-cell or MAC Scheme
The MAC scheme was proposed by Harlow and Welch in 1965. In this method, mass-less marker particles are introduced at the free surface at the initial time, and the motion of these particles is followed with the passage of time.
Benefit: This scheme can treat complex phenomena like wave breaking.
Drawback: In three-dimensional flow, simultaneously solving the equations governing the fluid flow and following the motion of a large number of markers demands high computational power.
Volume-of-fluid or VOF scheme
The VOF scheme was proposed by Hirt and Nichols in 1981. In this method, the fraction of each cell occupied by the liquid phase is calculated by solving a transport equation for the filled fraction c:
∂c/∂t + div(c v) = 0
where c is the fraction of the control volume that is filled: c = 1 for completely filled and c = 0 for completely empty control volumes.
In total, the VOF method requires solving three sets of equations: the conservation equations for mass, the conservation equations for momentum, and the filled-fraction equation for each control volume.
Note that in incompressible flows the above equation gives the same results with c and 1 − c, making the enforcement of mass conservation a must.
Since higher-order schemes are preferred over lower-order schemes to prevent artificial mixing of the two fluids, it is important to prevent overshoots and undershoots so that the condition 0 ≤ c ≤ 1 holds. For such problems, modifications were made to the MAC and VOF schemes.
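As a minimal illustration of the filled-fraction transport equation, the sketch below advects a 1-D volume fraction field with a first-order upwind scheme at constant velocity; being monotone under the CFL condition, it keeps 0 ≤ c ≤ 1 without limiters. The velocity, grid, and function names are illustrative.

```python
import numpy as np

def advect_vof_1d(c, u, dx, dt, steps):
    """First-order upwind update of a 1-D volume fraction field c.

    Solves dc/dt + d(u c)/dx = 0 for constant u > 0 with periodic
    boundaries; the upwind flux at a face is u times the upstream cell
    value, so the update is monotone and c stays within [0, 1].
    """
    assert u > 0 and u * dt / dx <= 1.0, "CFL condition violated"
    c = c.copy()
    for _ in range(steps):
        flux = u * c                          # upwind face flux per cell
        c -= dt / dx * (flux - np.roll(flux, 1))
    return c
```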
Modifications to MAC and VOF scheme
Marker and micro-cell method, in which local grid refinement is done according to the following criterion:
only the cells having 0 < c < 1 are refined.
This method is more efficient than the MAC scheme because only the cells at the boundary are refined. However, in this method the free surface profile is not sharply defined.
Hybrid methods
There are some fluid flows which do not come under either category, for example, bubbly flows. For the computation of such two-phase flows, which do not come under any of the above discussed categories, elements are borrowed from both surface-capturing and surface-tracking methods. Such methods are called hybrid methods. In these methods, fluid properties are smeared over a fixed number of grid points normal to the interface. As in the interface capturing method, both fluids are treated as a single fluid with variable properties. The interface is also tracked, as in the interface-tracking method, to prevent it from smearing, by moving the marker particles using the velocity field generated by the flow solver. Marker particles are added and removed to maintain accuracy by keeping the approximate spacing between them equal.
References | Computational methods for free surface flow | [
"Physics",
"Chemistry"
] | 1,628 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Computational physics"
] |
62,008,819 | https://en.wikipedia.org/wiki/Birchfield%E2%80%93Tomasi%20dissimilarity | In computer vision, the Birchfield–Tomasi dissimilarity is a pixelwise image dissimilarity measure that is robust with respect to sampling effects. In the comparison of two image elements, it fits the intensity of one pixel to the linearly interpolated intensity around a corresponding pixel on the other image. It is used as a dissimilarity measure in stereo matching, where one-dimensional search for correspondences is performed to recover a dense disparity map from a stereo image pair.
Description
When performing pixelwise image matching, the measure of dissimilarity between pairs of pixels from different images is affected by differences in image acquisition such as illumination bias and noise. Even when assuming no difference in these aspects between an image pair, additional inconsistencies are introduced by the pixel sampling process, because each pixel is a sample obtained integrating the continuous light signal over a finite region of space, and two pixels matching the same feature of the image content may correspond to slightly different regions of the real object that can reflect light differently and can be subject to partial occlusion, depth discontinuity, or different lens defocus, thus generating different intensity signals.
The Birchfield–Tomasi measure compensates for the sampling effect by considering the linear interpolation of the samples. Pixel similarity is then determined by finding the best match between the intensity of a pixel sample in one image and the interpolated function in an interval around a location in the other image.
Considering the stereo matching problem for a rectified stereo pair, where the search for correspondences is performed in one dimension, given two columns xL and xR along the same scanline of the left and right image respectively, it is possible to define two symmetric functions d(xL, xR) and d(xR, xL),
where ÎL and ÎR denote the linear interpolations of the left and right image intensities IL and IR along the scanline, and each function measures the smallest absolute difference between the intensity of one pixel and the interpolated intensity of the other image over a half-pixel interval around the corresponding location. The Birchfield–Tomasi dissimilarity can then be defined as the minimum of the two: dBT(xL, xR) = min{d(xL, xR), d(xR, xL)}.
In practice the measure can be computed with only a small and constant overhead with respect to the calculation of the simple intensity difference, because it is not necessary to reconstruct the interpolant function. Given that the interpolant is linear within each unit interval centred around a pixel, its minimum is located at one of the interval's extremities. Therefore, d(xL, xR) can be written as
d(xL, xR) = max{0, IL(xL) − Imax, Imin − IL(xL)},
where
Imax = max{ÎR−, ÎR+, IR(xR)} and Imin = min{ÎR−, ÎR+, IR(xR)},
denoting with ÎR+ and ÎR− the values of the interpolated intensity at the rightmost and leftmost extremities of a one-pixel interval centred around xR, i.e. ÎR± = (IR(xR) + IR(xR ± 1))/2.
The other function can be similarly rewritten, completing the expression for dBT(xL, xR).
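A direct transcription of this closed form, with illustrative names and assuming scanline arrays of numeric intensities with valid neighbours at xL ± 1 and xR ± 1, might look as follows.

```python
def bt_dissimilarity(IL, IR, xL, xR):
    """Birchfield-Tomasi dissimilarity between pixel xL of scanline IL
    and pixel xR of scanline IR.

    Exploits the closed form above: the linear interpolant attains its
    extrema at the half-pixel boundaries, so only three samples per side
    are needed. Requires 1 <= xL <= len(IL) - 2 and likewise for xR.
    """
    def half(I, x, s):
        # interpolated intensity at x + s/2, s in {-1, +1}
        return 0.5 * (I[x] + I[x + s])

    def d_bar(I1, x1, I2, x2):
        lo = min(half(I2, x2, -1), half(I2, x2, +1), I2[x2])
        hi = max(half(I2, x2, -1), half(I2, x2, +1), I2[x2])
        return max(0, I1[x1] - hi, lo - I1[x1])

    return min(d_bar(IL, xL, IR, xR), d_bar(IR, xR, IL, xL))
```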
References
Computer vision | Birchfield–Tomasi dissimilarity | [
"Engineering"
] | 521 | [
"Artificial intelligence engineering",
"Packaging machinery",
"Computer vision"
] |
62,013,142 | https://en.wikipedia.org/wiki/Nanolattice | A nanolattice is a synthetic porous material consisting of nanometer-size members patterned into an ordered lattice structure, like a space frame. The nanolattice is a newly emerged material class that has been rapidly developed over the last decade. Nanolattices redefine the limits of the material property space. Despite being composed of 50-99% air, nanolattices are mechanically very robust because they take advantage of the size-dependent properties generally seen in nanoparticles, nanowires, and thin films. The most typical mechanical properties of nanolattices include ultrahigh strength, damage tolerance, and high stiffness. Thus, nanolattices have a wide range of applications.
Driven by the evolution of 3D printing techniques, nanolattices aiming to exploit beneficial material size effects through miniaturized lattice designs were first developed in the mid-2010s. Nanolattices are the smallest man-made lattice truss structures and a class of metamaterials that derive their properties from both their geometry (the general metamaterial definition) and the small size of their elements. Therefore, they can possess effective properties not found in nature, and properties that may not be achieved with larger-scale lattices of the same geometry.
Synthesis
To produce nanolattice materials, polymer templates are manufactured by high-resolution 3D printing processes, such as multiphoton lithography, self-assembly, self-propagating photopolymer waveguides, and direct laser writing techniques. Those methods can synthesize the structure with a unit cell size down to the order of 50 nanometers. Genetic engineering also has potential for synthesizing nanolattices. Ceramic, metal or composite material nanolattices are formed by post-treatment of the polymer templates with techniques including pyrolysis, atomic layer deposition, electroplating and electroless plating. Pyrolysis, which additionally shrinks the lattices by up to 90%, creates the smallest-size structures, whereby the polymeric template material transforms into carbon or other ceramics and metals through thermal decomposition in an inert atmosphere or vacuum.
Properties
At the nanoscale, size effects and different dimensional constraints, like grain boundaries, dislocations, and the distribution of voids, can tremendously change the properties of a material. Nanolattices possess unparalleled mechanical properties, and they are the strongest existing cellular materials despite being extremely lightweight. Though consisting of 50%-99% air, a nanolattice can be as strong as steel; its effective strength can reach up to 1 GPa. On the order of 50 nm, the extremely small volume of their individual members, such as walls, nodes, and trusses, statistically nearly eliminates the material flaw population, so the base material of nanolattices can reach mechanical strengths on the order of the theoretical strength of an ideal, perfect crystal. While such effects are typically limited to individual, geometrically primitive structures like nanowires, the specific architecture allows nanolattices to exploit them in complex, three-dimensional structures of notably larger overall size. Nanolattices can be designed to be highly deformable and recoverable, even with ceramic base materials: they are able to undergo 80% compressive strain without catastrophic failure and then still recover 100% of their original shape. Nanolattices can possess mechanical metamaterial properties like auxetic (negative Poisson's ratio) or meta-fluidic behavior (large bulk modulus). They can combine mechanical resilience with ultra-low thermal conductivity and can have electromagnetic metamaterial characteristics like optical cloaking. However, one of the challenges in nanolattice research is figuring out how to retain these robust properties while scaling up, since it is inherently difficult to keep nanoscale size effects in a bulk structure. A straightforward workaround is to combine bulk processes with thin-film deposition techniques to retain the hollow frame structure.
Application
The first market for nanolattices may be small-scale, small-lot components for biomedical, electrochemical, microfluidic, and aerospace applications, which require highly customizable and extreme combinations of properties. In the aerospace industry, the application of nanolattices could make aircraft lighter and save substantial energy.
See also
Metamaterials
Nanomaterials
Microlattice
References
Nanomaterials
Metamaterials
Foams | Nanolattice | [
"Chemistry",
"Materials_science",
"Engineering"
] | 922 | [
"Metamaterials",
"Foams",
"Materials science",
"Nanotechnology",
"Nanomaterials"
] |
47,458,792 | https://en.wikipedia.org/wiki/Lyle%20Benjamin%20Borst | Lyle Benjamin Borst (November 24, 1912 – July 30, 2002) was an American nuclear physicist and inventor. He worked with Enrico Fermi in Chicago, was involved with the Manhattan District Project, and worked with Ernest O. Wollan to conduct neutron scattering and neutron diffraction studies.
Life and times
Borst was born on November 24, 1912, in Chicago, Cook County, Illinois, the son of George William Borst, aged 39, of Chicago, Illinois, and Jennie Beveridge, aged 26. Borst was married to Ruth Barbara Mayer Borst for 63 years and had three children: sons John Benjamin and Stephen Lyle, and daughter Frances Elizabeth Wright. He also had 7 grandchildren and 4 great-grandchildren. He died at his home in Williamsville, New York on July 30, 2002.
Career
Borst attended the University of Illinois at Urbana–Champaign, where he received bachelor's and master's degrees. He then attended the University of Chicago and was awarded a doctorate in chemistry in 1941.
Borst worked as a senior physicist on the Manhattan Project from 1943 to 1946 at the Clinton Laboratories in Oak Ridge, Tennessee. In 1944 Ernest O. Wollan and Borst used neutron diffraction to produce "rocking curves" for crystals of gypsum and sodium chloride (salt). In 1946 Karl Z. Morgan and Borst at Oak Ridge developed a film badge to measure worker exposure to fast neutrons. From 1946 to 1951 Borst was chairman of the department of reactor science and engineering at Brookhaven National Laboratory and was responsible for the operation and oversight of the Brookhaven Graphite Research Reactor; he played a key role in the design of the research reactor. Borst was at the University of Utah from 1951 to 1953 as professor of physics. From 1956 to 1961 he was chairman of the department of physics at the college of engineering at New York University. From 1961 to 1983 Borst was professor of physics at the State University of New York in Buffalo, New York, and was appointed professor emeritus in 1983. In 1969 he served as master of Clifford Furnas College at the State University of New York at Buffalo.
Professional service
National Board of the American Civil Liberties Union, member
ACLU, Niagara Frontier Chapter, chairman
American Physical Society, Fellow
American Association for the Advancement of Science, member
Association of Oak Ridge Scientists, founding member
Federation of Atomic Scientists, founder
Publications
Thesis and dissertation
The Angular Distribution of Recoil Nuclei. 1941
Patents
Adjustable support for spectrometer reflectors. 18 December 1951.
Method of testing hermetic containers. 17 February 1959.
Central control system. 22 September 1959.
Neutronic reactor shielding. 11 July 1961.
Convergent Neutronic Reactor. 5 June 1962.
Neutron amplifier. 2 October 1962.
Process for cooling a nuclear reactor. 11 December 1962.
Improvements in neutron reactors. 1962.
Temperature measuring method and apparatus. 30 July 1963.
Nuclear reactor for a railway vehicle. 31 March 1964.
Neutron reactors. 20 July 1965.
Nuclear power reactor. 27 July 1965.
Process for controlling thermal neutron concentration in an irradiated system. 18 October 1966.
Neutron amplifier. 13 December 1966.
Photographic process. 29 June 1976.
References
1912 births
2002 deaths
American nuclear physicists
20th-century American inventors
Health physicists
Manhattan Project people
Oak Ridge National Laboratory people
Neutron scattering
Neutron instrumentation
American Civil Liberties Union people
Fellows of the American Physical Society
Brookhaven National Laboratory staff
Polytechnic Institute of New York University faculty
State University of New York faculty
University of Utah faculty
University of Illinois Urbana-Champaign alumni
University of Chicago alumni
Scientists from Chicago
20th-century American physicists
20th-century American chemists | Lyle Benjamin Borst | [
"Chemistry",
"Technology",
"Engineering"
] | 720 | [
"Scattering",
"Neutron instrumentation",
"Measuring instruments",
"Neutron scattering"
] |
47,470,324 | https://en.wikipedia.org/wiki/Launching%20gantry | A launching gantry (also called bridge building crane, and bridge-building machine) is a special-purpose mobile gantry crane used in bridge construction, specifically segmental bridges that use precast box girder bridge segments or precast girders in highway and high-speed rail bridge construction projects. The launching gantry is used to lift and support bridge segments or girders as they are placed while being supported by the bridge piers instead of the ground.
While superficially similar, launching gantry machines should not be confused with movable scaffolding systems, which also are used in segmental bridge construction. Both feature long girders spanning multiple bridge spans which move with the work, but launching gantry machines are used to lift and support precast bridge segments and bridge girders, while movable scaffolding systems are used for cast-in-place construction of bridge segments.
Operation and design
Typically, precast segmental bridges and precast girders are placed using ground-based cranes to lift each segment or girder. However, ground access to the spans may be challenged by the presence of existing infrastructure or bodies of water, or the height to which the segments must be raised can exceed the reach of ground-based cranes. A launching gantry can be used to solve these issues.
The most visible feature of a launching gantry is the pair of twin parallel girders, which can be either above (upper-beam) or below (lower-beam or underslung) the bridge deck. However, a single beam can also be used, typically in the upper-beam configuration. The launching gantry machine usually is sized to the construction project, with the length of the twin main girders approximately 2.3 times the distance between spans. This length enables the launching gantry to span the gap between two adjacent bridge piers while providing allowances for the distance required for launching to the next span and flexibility of movement to accommodate curved paths between piers. In some cases, hinges have been inserted into the gantry girders to allow tighter curves. The launching gantry girders are supported at each pier by braced frames which have a limited range of movement to facilitate placement of bridge segments or bridge girders; the launching gantry does not generally contact the bridge deck.
Two gantry trolleys can run the full length of the launching gantry girders. Each trolley is equipped with two winches: a main winch to suspend the load, and a translation winch to move the trolley along the girders. When bridge segments (or bridge girders) are delivered at the ground level, the launching gantry is used to pick them up and raise them to deck or pier height. If the segments (or girders) are delivered instead at the bridge deck level, the launching gantry moves back to allow the forward trolley to pick up the front end of the next segment (or girder), while the back end of the segment (or girder) is supported by the transportation vehicle; as the forward trolley moves forward, the rear trolley takes over supporting the back end from the vehicle.
Bridge segments (or bridge girders) are set in place by the launching gantry until the span between adjacent piers is completed. For segmental bridges, typically a span-by-span or balanced-cantilever approach is adopted to place segments. To free up the gantry trolley(s), temporary hangers are used to support each segment after it has been placed. In the span-by-span approach, all the segments for a span are placed before bridge tendons are tensioned; in this fashion, work progresses from one pier towards an adjacent pier. In the balanced-cantilever approach, segments are placed simultaneously on each side and work progresses from a central pier towards the two nearest piers instead. In either case, the launching gantry girders and hangers essentially serve as falsework prior to tensioning.
Once the bridge span between adjacent piers is completed, the winches on the trolleys are used to lift the gantry girders and "launch" them ahead to the next span. The process of lifting and placing bridge segments (or girders) followed by launching the gantry girders ahead is repeated until the bridge is complete.
An example of a large launching gantry is the SLJ900/32 designed in China by the Shijiazhuang Railway Design Institute and manufactured by the Beijing Wowjoint Machinery Company. This launching gantry is long, wide, and weighs . When driving, the machine is supported by 64 wheels, in four sections of 16 wheels each (forming two trucks, one at each end). When launching, the forward end of the machine is supported (on sliding rails) by a strut lowered onto a bridge support column, while the truck for that end hangs off the gantry backbone with no support from beneath. Once the gantry straddles the open span, the bridge segment is lowered onto the bridge support piers, and the process reverses to retract the launching gantry. The SLJ900 moves at unloaded, and carrying a bridge segment.
References
Civil engineering
Construction equipment | Launching gantry | [
"Engineering"
] | 1,062 | [
"Construction",
"Construction equipment",
"Civil engineering",
"Industrial machinery"
] |
47,471,695 | https://en.wikipedia.org/wiki/Nojirimycin | Nojirimycin is the parent compound of a class of antibiotics and glycosidase inhibitors. Nojirimycin and its derivatives are mainly obtained from a class of Streptomyces species. Chemically, it is an iminosugar.
Derivatives
1-deoxynojirimycin or duvoglustat
1-deoxygalactonojirimycin or migalastat, a drug for the treatment of Fabry disease
References
Antibiotics
Streptomyces
Iminosugars | Nojirimycin | [
"Chemistry",
"Biology"
] | 112 | [
"Iminosugars",
"Carbohydrates",
"Biotechnology products",
"Antibiotics",
"Biocides"
] |
49,236,353 | https://en.wikipedia.org/wiki/Energy%20Company%20Obligation | The Energy Company Obligation (ECO) is a British Government programme. It is designed to offset emissions created by energy company power stations. The first obligation period ran from January 2013 to 31 March 2015. The second obligation period, known as ECO2, ran from 1 April 2015 to 31 March 2017. The third obligation period, known as ECO3, ran from 3 December 2018 until 31 March 2022. The fourth iteration, ECO4, commenced on 1 April 2022 and will run until 31 March 2026.
The Government obligates the larger energy suppliers to help lower-income households improve their energy efficiency.
ECO replaced two previous schemes, the Carbon Emission Reduction Target (CERT) and the Community Energy Saving Programme (CESP). It was announced that the programme would be replaced in 2017 by a less extensive version.
The programme focused on heating, in particular improving insulation.
Ofgem has been appointed the scheme administrator on behalf of the Department for Energy Security & Net Zero.
How does ECO work?
The ECO scheme works by placing an obligation on large and medium energy suppliers in England, Scotland and Wales to provide energy-saving measures to households deemed to be in fuel poverty. Each supplier's obligation is allocated based on its overall share of the domestic gas and electricity market.
The range of measures available through the scheme includes heating upgrades, solar panels, and wall and roof insulation. The provision of these measures is designed to help vulnerable families reduce their energy bills. The scheme is also seen as a way of helping the government reach its net zero target by 2050.
ECO3 target reached
Ofgem's ECO3 final determination report provides details on the overall performance of the scheme and conclusions regarding energy suppliers' achievement against their obligations. The overall target for all participating suppliers was an estimated lifetime bill savings of £8.253 billion. The ECO3 final report confirms that this target was exceeded, with total estimated lifetime bill savings of £8.457 billion achieved.
The other highlights of the findings were as follows:
"All but one active supplier successfully met their HHCRO obligation and sub-obligation lifetime bill saving targets.
1.03 million energy saving measures were installed over the course of ECO3. This included:
Broken down or energy inefficient boilers being replaced in 251,741 households with energy efficient condensing boilers or low carbon heating alternatives
Cavity wall insulation installed in 152,938 households
Underfloor insulation installed in 133,173 households
Loft insulation installed in 88,588 households
It is estimated that measures installed since the first ECO scheme was introduced in 2013 will provide lifetime carbon savings of around 58.2 MtCO2e. This is equivalent to the amount of carbon absorbed by 264 million mature trees over 10 years."
ECO4
The latest iteration of the Energy Company Obligation (ECO4) formally began on 27 July 2022, covering measures installed from 1 April 2022, and will run until 31 March 2026. ECO4 focuses on improving the least energy-efficient properties and targets homes with an energy rating between D and G. It also aims to provide a more complete retrofit of properties to ensure maximum carbon emission savings. A minimum project scoring methodology is in place to ensure a multi-measure, whole-house approach to each property. This is designed to encourage the installation of a variety of measures per household, including insulation, solar panels and renewable heating systems.
For homeowners looking to take advantage of the ECO4 scheme, organisations like UK Energy Management (UKEM) can assist in securing funding and arranging the installation of energy-efficient measures. These improvements are government-backed but supplied by energy companies, and may include loft insulation, solar panel installations, and energy-efficient boilers, aimed at reducing energy consumption and carbon emissions.
The eligibility criteria for ECO4 have seen the removal of the disability benefits which qualified under the ECO3 component of the scheme. ECO4 focuses solely on households that receive income-based benefits, some tax credits, and pension credits. This change was introduced to ensure that the scheme targets those households most in need of energy efficiency support, particularly those at risk of fuel poverty. However, there have been concerns that the removal of disability benefits from the eligibility criteria may leave some vulnerable households unsupported.
ECO4 qualifying benefits:
Child tax credit (CTC)
Child benefit
Housing Benefit
Jobseeker's Allowance (JSA)
Employment and Support Allowance (ESA)
Income Support (IS)
Pension Credit Guarantee Credit
Pension Credit Saving Credit
Universal Credit (UC)
Warm Home Discount Scheme Rebate
Working Tax Credit (WTC)
Local authorities can sign declarations for eligible households that apply through Flexible Energy under the programme, but the works are carried out by private companies, with funding from energy suppliers. Householders are recommended to check that installers are registered on the TrustMark website.
According to Ofgem's statistics, as of 7 May 2024 a total of 100,708 Energy Company Obligation 4 projects had been submitted. This highlights the scale of the programme in improving energy efficiency in homes across the UK. The scheme aims to provide long-term energy savings while contributing to the UK's carbon reduction targets.
The statistics on energy supplier performance at 7 May 2024 can be viewed on the Energy Saving Genie website.
References
External links
ECO4 delivery guidance for suppliers from OFGEM
Government programs
Emissions reduction
Energy in the United Kingdom
Climate change policy in the United Kingdom | Energy Company Obligation | [
"Chemistry"
] | 1,085 | [
"Greenhouse gases",
"Emissions reduction"
] |
49,236,889 | https://en.wikipedia.org/wiki/Plinian%20Core | Plinian Core is a set of vocabulary terms that can be used to describe different aspects of biological species information. Under "biological species information" all kinds of properties or traits related to taxa, both biological and non-biological, are included. Thus, for instance, terms pertaining to descriptions, legal aspects, conservation, management, demographics, nomenclature, or related resources are incorporated.
Description
The Plinian Core is aimed at facilitating the exchange of information about species and higher taxa.
What is in scope?
Species level catalogs of any kind of biological objects or data.
Terminology associated with biological collection data.
Striving for compatibility with other biodiversity-related standards.
Facilitating the addition of components and attributes of biological data.
What is not in scope?
Data interchange protocols.
Non-biodiversity-related data.
Occurrence level data.
This standard is named after Pliny the Elder, a very influential figure in the study of biological species.
Plinian Core design requirements include: ease of use, being self-contained, the ability to support data integration from multiple databases, and the ability to handle different levels of granularity. Core terms in its current version can be grouped as follows:
Metadata
Base Elements
Record Metadata
Nomenclature and Classification
Taxonomic description
Natural history
Invasive species
Habitat and Distribution
Demography and Threats
Uses, Management and Conservation
associatedParty, MeasurementOrFact, References, AncillaryData
Background
Plinian Core started as a collaborative project between Instituto Nacional de Biodiversidad and GBIF Spain in 2005. A series of iterations in which elements were defined and implemented in different projects resulted in a "Plinian Core Flat" [deprecated].
As a result, a new development effort to overcome its limitations was launched in 2012, driven by new formal requirements, additional input, and a desire to better support the standard and its documentation, as well as to align it with the processes of TDWG, the world reference body for biodiversity information standards.
A new version, Plinian Core v3.x.x, was defined. This provides more flexibility to fully represent the information of a species in a variety of scenarios. New elements dealing with aspects such as IPR, related resources, references, etc. were introduced, and elements already included were better defined and documented.
Partners for the development of Plinian Core in this new phase included the University of Granada (UG, Spain), the Alexander von Humboldt Institute (IAvH, Colombia), the National Commission for the Knowledge and Use of Biodiversity (Conabio, Mexico) and the University of São Paulo (USP, Brazil).
A "Plinian Core Task Group" within the TDWG "Interest Group on Species Information" was constituted and is currently working on its development.
Levels of the standard
Plinian Core is presented at two levels: the abstract model and the application profiles.
The abstract model (AM), comprising the abstract model schema(xsd) and the terms' URIs, is the normative part. It is all comprehensive, and allows for different levels of granularity in describing species properties. The AM should be taken as a "menu" from which to choose terms and level of detail needed in any specific project.
The subsets of the abstract model intended to be implemented in specific projects are the "application profiles" (APs). Besides containing part of the elements of the AM, APs can impose additional specifications on the included elements, such as controlled vocabularies. Some examples of APs in use follow:
Application profile CONABIO
Application profile INBIO
Application profile GBIF.ES
Application profile Banco de Datos de la Naturaleza.Spain
Application profile SIB-COLOMBIA
Relation to other standards
Plinian Core incorporates a number of elements already defined by other standards.
External links
Main page
An Implementation of Plinian Core as GBIF's IPT Extensions
Plinian Core Terms Quick Reference Guide
Plinian Core Abstract Model (xsd). Current version
Biodiversity Information Standards (TDWG)
Sistema de información de la naturaleza de euskadi. Aplicación del estandar Plinian Core
Estándar Plinian Core para la gestión integrada de la información sobre especies. Ministerio de Agricultura, Alimentación y Medio Ambiente de España
Modelo conceptual de la Base Nacional de Datos de Vegetación. Ministerio del Ambiente, Ecuador
References
Knowledge representation
Interoperability
Metadata standards | Plinian Core | [
"Engineering"
] | 912 | [
"Telecommunications engineering",
"Interoperability"
] |
49,239,358 | https://en.wikipedia.org/wiki/Kirklington%20Hall%20Research%20Station | Kirklington Hall Research Station was a geophysical research institute of BP in Kirklington, Nottinghamshire. During the 1950s it was the main research site of BP.
Background
Cricketer John Boddam-Whetham was born at the site in 1843. Sir Albert Bennett, 1st Baronet, Liberal MP for Mansfield from 1922 to 1923, lived there from 1920; the Bennett baronetcy was created in 1929. Lady Evelyn Maude Robinson, the wife of Sir John Robinson of Worksop (who died aged 74 on Saturday 2 December 1944), was the owner from around 1930.
The previous owner died aged 73 on Friday 14 December 1945, leaving £138,365 in her will. In June 1945 the property was put up for auction, with 631 acres, 15 bedrooms, 6 bathrooms, and 11 servants' rooms. It was sold for £24,500 in Derby in July 1945.
For two years Nottinghamshire County Council looked to buy the property for a residential further education college, but it was a big investment, and the council chose another site in July 1947.
History
BP
As part of the East Midlands Oil Province, oil was found in eastern Nottinghamshire. The station was also known as the BP Research Centre or the Geophysical Centre, part of BP's Exploration Division.
The site was acquired in July 1949 because of its proximity to Eakring. The local church, together with Hockerton, held garden parties at the site in the summer.
The research centre was established in 1950. Its first employee was Jack Birks, later managing director of BP. From 1950 it was the main geophysical research site of BP, until BP sold the site in 1957 for £12,000. Research moved to Sunbury-on-Thames, in Surrey, in 1957. Sunbury Research Centre had been built around the same time as the Kirklington site, in the early 1950s.
Private property
It was put up for sale in November 1957. In 1958 there was the possibility of the site being a teacher training college.
From 1958 it was a private school, which had been formed in Southwell in 1945.
It was put up for sale in 1987, with a guide price of £850,000.
Kirklington Hall today is a private school.
Structure
The former site is situated north of the A617.
Function
It conducted geophysical research supporting oil exploration for BP; this part of BP is now known as BP Exploration. Work was conducted on core samples and with seismic methods.
See also
British Geological Survey, also in Nottinghamshire
Sunbury Research Centre, where most of BP's research takes place in the UK today.
Petroleum geology
Seismology measurement
References
British Petroleum and Global Oil 1950-1975: The Challenge of Nationalism, James Bamberg, page 33
External links
Our Nottinghamshire
1950 establishments in England
1957 disestablishments in England
BP buildings and structures
Buildings and structures in Nottinghamshire
Energy research institutes
Engineering research institutes
Earth science research institutes
Petroleum industry in the United Kingdom
Petroleum organizations
Research institutes established in 1950
Research institutes in England
Science and technology in Nottinghamshire
Research stations | Kirklington Hall Research Station | [
"Chemistry",
"Engineering"
] | 615 | [
"Engineering research institutes",
"Petroleum",
"Petroleum organizations",
"Energy research institutes",
"Energy organizations"
] |
49,240,826 | https://en.wikipedia.org/wiki/Zinc%20finger%20protein%20557 | Zinc finger protein 557 is a protein that in humans is encoded by the ZNF557 gene.
References
Further reading
Proteins | Zinc finger protein 557 | [
"Chemistry"
] | 28 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
49,241,310 | https://en.wikipedia.org/wiki/Anion%20exchanger%20family | The anion exchanger family (TC# 2.A.31, also named bicarbonate transporter family) is a member of the large APC superfamily of secondary carriers. Members of the AE family are generally responsible for the transport of anions across cellular barriers, although their functions may vary. All of them exchange bicarbonate. Characterized protein members of the AE family are found in plants, animals, insects and yeast. Uncharacterized AE homologues may be present in bacteria (e.g., in Enterococcus faecium, 372 aas; gi 22992757; 29% identity in 90 residues). Animal AE proteins consist of homodimeric complexes of integral membrane proteins that vary in size from about 900 amino acyl residues to about 1250 residues. Their N-terminal hydrophilic domains may interact with cytoskeletal proteins and therefore play a cell structural role. Some of the currently characterized members of the AE family can be found in the Transporter Classification Database.
Family overview
Bicarbonate (HCO3−) transport mechanisms are the principal regulators of pH in animal cells. Such transport also plays a vital role in acid-base movements in the stomach, pancreas, intestine, kidney, reproductive organs and the central nervous system. Functional studies have suggested different HCO3− transport modes.
Anion exchanger proteins exchange HCO3− for Cl− in a reversible, electroneutral manner.
Na+/HCO3− co-transport proteins mediate the coupled movement of Na+ and HCO3− across plasma membranes, often in an electrogenic manner.
Sequence analysis of the two families of HCO3− transporters that have been cloned to date (the anion exchangers and Na+/HCO3− co-transporters) reveals that they are homologous. This is not entirely unexpected, given that they both transport HCO3− and are inhibited by a class of pharmacological agents called disulphonic stilbenes. They share around ~25-30% sequence identity, which is distributed along their entire sequence length, and have similar predicted membrane topologies, suggesting they have ~10 transmembrane (TM) domains.
A conserved domain is found at the C terminus of many bicarbonate transport proteins. It is also found in some plant proteins responsible for boron transport. In these proteins it covers almost the entire length of the sequence.
The Band 3 anion exchange proteins that exchange bicarbonate are the most abundant polypeptide in the red blood cell membrane, comprising 25% of the total membrane protein. The cytoplasmic domain of band 3 functions primarily as an anchoring site for other membrane-associated proteins. Included among the protein ligands of this domain are ankyrin, protein 4.2, protein 4.1, glyceraldehyde-3-phosphate dehydrogenase (GAPDH), phosphofructokinase, aldolase, hemoglobin, hemichromes, and the protein tyrosine kinase (p72syk).
Anion exchangers in humans
In humans, anion exchangers fall under the solute carrier family 4 (SLC4) family, which is composed of 10 paralogous members (SLC4A1-5; SLC4A7-11). Nine encode proteins that transport HCO3−. Functionally, eight of these proteins fall into two major groups: three Cl−/HCO3− exchangers (AE1-3) and five Na+-coupled HCO3− transporters (NBCe1, NBCe2, NBCn1, NBCn2, NDCBE). Two of the Na+-coupled transporters (NBCe1, NBCe2) are electrogenic; the other three Na+-coupled HCO3− transporters and all three AEs are electroneutral. Two others (AE4, SLC4A9 and BTR1, SLC4A11) are not characterized. Most, though not all, are inhibited by 4,4'-diisothiocyanatostilbene-2,2'-disulfonate (DIDS). SLC4 proteins play roles in acid-base homeostasis, transport of H+ or HCO3− by epithelia (e.g. absorption of HCO3− in the renal proximal tubule, secretion of HCO3− in the pancreatic duct), as well as the regulation of cell volume and intracellular pH.
Based on their hydropathy plots all SLC4 proteins are hypothesized to share a similar topology in the cell membrane. They have relatively long cytoplasmic N-terminal domains composed of a few hundred to several hundred residues, followed by 10-14 transmembrane (TM) domains, and end with relatively short cytoplasmic C-terminal domains composed of ~30 to ~90 residues. Although the C-terminal domain comprises a small percentage of the size of the protein, this domain in some cases, has (i) binding motifs that may be important for protein-protein interactions (e.g., AE1, AE2, and NBCn1), (ii) is important for trafficking to the cell membrane (e.g., AE1 and NBCe1), and (iii) may provide sites for regulation of transporter function via protein kinase A phosphorylation (e.g., NBCe1).
The SLC4 family comprises the following proteins.
SLC4A1
SLC4A2
SLC4A3
SLC4A4
SLC4A5
SLC4A7
SLC4A8
SLC4A9
SLC4A10
SLC4A11
Anion exchanger 1
The human anion exchanger 1 (AE1 or Band 3) binds carbonic anhydrase II (CAII) forming a "transport metabolon" as CAII binding activates AE1 transport activity about 10 fold. AE1 is also activated by interaction with glycophorin, which also functions to target it to the plasma membrane. The membrane-embedded C-terminal domains may each span the membrane 13-16 times. According to the model of Zhu et al. (2003), AE1 in humans spans the membrane 16 times, 13 times as α-helix, and three times (TMSs 10, 11 and 14) possibly as β-strands. AE1 preferentially catalyzes anion exchange (antiport) reactions. Specific point mutations in human anion exchanger 1 (AE1) convert this electroneutral anion exchanger into a monovalent cation conductance. The same transport site within the AE1 spanning domain is involved in both anion exchange and cation transport.
AE1 in human red blood cells has been shown to transport a variety of inorganic and organic anions. Divalent anions may be symported with H+. Additionally, it catalyzes flipping of several anionic amphipathic molecules such as sodium dodecyl sulfate (SDS) and phosphatidic acid from one monolayer of the phospholipid bilayer to the other monolayer. The rate of flipping is sufficiently rapid to suggest that this AE1-catalyzed process is physiologically important in red blood cells and possibly in other animal tissues as well. Anionic phospholipids and fatty acids are likely to be natural substrates. However, the mere presence of TMSs enhances the rates of lipid flip-flop.
Structure
The crystal structure of AE1 (CTD) at 3.5 angstroms has been determined. The structure is locked in an outward-facing open conformation by an inhibitor. Comparing this structure with a substrate-bound structure of the uracil transporter UraA in an inward-facing conformation allowed identification of the likely anion-binding position in the AE1 (CTD), and led to the proposal of a possible transport mechanism that could explain why selected mutations lead to disease. The 3-D structure confirmed that the AE family is a member of the APC superfamily.
There are several crystal structures available for the AE1 protein in RCSB (links are also available in TCDB).
Other members
Renal Na+:HCO3− cotransporters have been found to be members of the AE family. They catalyze the reabsorption of HCO3− in the renal proximal tubule in an electrogenic process that is inhibited by typical stilbene inhibitors of AE such as DIDS and SITS. They are also found in many other body tissues. At least two genes encode these symporters in any one mammal. A 10 TMS model has been presented, but this model conflicts with the 14 TMS model proposed for AE1. The transmembrane topology of the human pancreatic electrogenic Na+:HCO3− transporter, NBC1, has been studied. A TMS topology with N- and C-termini in the cytoplasm has been suggested. An extracellular loop determines the stoichiometry of Na+-HCO3− cotransporters.
In addition to the Na+-independent anion exchangers (AE1-3) and the Na+:HCO3− cotransporters (NBCs) (which may be either electroneutral or electrogenic), a Na+-driven HCO3−/Cl− exchanger (NCBE) has been sequenced and characterized. It transports Na+ + HCO3− preferentially in the inward direction and H+ + Cl− in the outward direction. This NCBE is widespread in mammalian tissues where it plays an important role in cytoplasmic alkalinization. For example, in pancreatic β-cells, it mediates a glucose-dependent rise in pH related to insulin secretion.
Animal cells in tissue culture expressing the gene encoding the ABC-type chloride channel protein CFTR (TC# 3.A.1.202.1) in the plasma membrane have been reported to exhibit cyclic AMP-dependent stimulation of AE activity. Regulation was independent of the Cl− conductance function of CFTR, and mutations in nucleotide-binding domain #2 of CFTR altered regulation independently of their effects on chloride channel activity. These observations may explain impaired HCO3− secretion in cystic fibrosis patients.
Anion exchangers in plants and fungi
Plants and yeast have anion transporters that, in both the pericycle cells of plants and the plasma membrane of yeast cells, export borate or boric acid (pKa = 9.2). In A. thaliana, boron is exported from pericycle cells into the root stellar apoplasm against a concentration gradient for uptake into the shoots. In S. cerevisiae, export is also against a concentration gradient. The yeast transporter recognizes HCO3−, I−, Br−, NO3− and Cl−, which may be substrates. Tolerance to boron toxicity in cereals is known to be associated with reduced tissue accumulation of boron. Expression of genes from roots of boron-tolerant wheat and barley with high similarity to efflux transporters from Arabidopsis and rice lowered boron concentrations due to an efflux mechanism. The mechanism of energy coupling is not known, nor is it known whether borate or boric acid is the substrate. Several possibilities (uniport, anion:anion exchange and anion:cation exchange) can account for the data.
Transport reactions
The physiologically relevant transport reaction catalyzed by anion exchangers of the AE family is:
Cl− (in) + HCO3− (out) ⇌ Cl− (out) + HCO3− (in).
That for the Na+:HCO3− cotransporters is:
Na+ (out) + nHCO3− (out) → Na+ (in) + nHCO3− (in).
That for the Na+/HCO3−:H+/Cl− exchanger is:
Na+ (out) + HCO3− (out) + H+ (in) + Cl− (in) ⇌ Na+ (in) + HCO3− (in) + H+ (out) + Cl− (out).
That for the boron efflux protein of plants and yeast is:
Boron (in) → Boron (out)
See also
Solute carrier family
Transporter Classification Database
References
Protein families
Transmembrane transporters
Solute carrier family | Anion exchanger family | [
"Biology"
] | 2,627 | [
"Protein families",
"Protein classification"
] |
49,241,355 | https://en.wikipedia.org/wiki/Gastruloid | Gastruloids are three-dimensional aggregates of embryonic stem cells (ESCs) that, when cultured in specific conditions, exhibit an organization resembling that of an embryo. They develop with three orthogonal axes and contain the primordial cells for various tissues derived from the three germ layers, without the presence of extraembryonic tissues. Notably, they do not possess forebrain, midbrain, and hindbrain structures. Gastruloids serve as a valuable model system, an embryonic organoid, for studying mammalian development (including human development) and associated disease.
Background
The Gastruloid model system draws its origins from work by Marikawa et al. In that study, small numbers of mouse P19 embryonal carcinoma (EC) cells were aggregated as embryoid bodies (EBs) and used to model and investigate the processes involved in anteroposterior polarity and the formation of a primitive streak region. In this work, the EBs were able to organise themselves into structures with polarised gene expression, axial elongation/organisation and up-regulation of posterior mesodermal markers. This was in stark contrast to work using EBs from mouse ESCs, which had shown some polarisation of gene expression in a small number of cases but no further development of the multicellular system.
Following this study, the Martinez Arias laboratory in the Department of Genetics at the University of Cambridge demonstrated how aggregates of mouse embryonic stem cells (ESCs) were able to generate structures that exhibited collective behaviours with striking similarity to those seen during early development, such as symmetry-breaking (in terms of gene expression), axial elongation and germ-layer specification. To quote from the original paper: "Altogether, these observations further emphasize the similarity between the processes that we have uncovered here and the events in the embryo. The movements are related to those of cells in gastrulating embryos and for this reason we term these aggregates ‘gastruloids’". As noted by the authors of this protocol, a crucial difference between this culture method and previous work with mouse EBs was the use of small numbers of cells, which may be important for generating the correct length scale for patterning, and the use of culture conditions derived from directed differentiation of ESCs in adherent culture.
Brachyury (T/Bra), a gene which marks the primitive streak and the site of gastrulation, is up-regulated in the Gastruloids following a pulse of the Wnt/β-Catenin agonist CHIR99021 (Chi; other factors have also been tested) and becomes regionalised to the elongating tip of the Gastruloid. From or near the region expressing T/Bra, cells expressing the mesodermal marker Tbx6 are extruded, similar to cells in the gastrulating embryo; it is for this reason that these structures are called Gastruloids.
Further studies revealed that the events that specify T/Bra expression in gastruloids mimic those in the embryo. After seven days, gastruloids exhibit an organization very similar to that of a midgestation embryo, with spatially organized primordia for all mesodermal (axial, paraxial, intermediate, cardiac, cranial and hematopoietic) and endodermal derivatives as well as the spinal cord. They also implement Hox gene expression with the same spatiotemporal coordinates as the embryo. Gastruloids lack a brain as well as extraembryonic tissues, but characterisation of the cellular complexity of gastruloids at the level of single-cell and spatial transcriptomics reveals that they contain representatives of the three germ layers including neural crest, primordial germ cells and placodal primordia.
A feature of gastruloids is a disconnect between their transcriptional programs and their morphogenesis. However, changes in the culture conditions can elicit morphogenesis; most significantly, gastruloids have been shown to form somites and early cardiac structures. In addition, interactions between gastruloids and extraembryonic tissues promote an anterior, brain-like polarised tissue.
Gastruloids have recently been obtained from human ESCs, which gives developmental biologists the ability to study early human development without needing human embryos. Importantly, though, the human gastruloid model is not able to form a human embryo, meaning that it is non-intact, non-viable and not equivalent to in vivo human embryos.
The term Gastruloid has been expanded to include self-organised human embryonic stem cell arrangements on micropatterned surfaces that mimic early patterning events in development; these arrangements should be referred to as 2D gastruloids.
References
Stem cells
Tissue engineering
Animal developmental biology | Gastruloid | [
"Chemistry",
"Engineering",
"Biology"
] | 1,015 | [
"Biological engineering",
"Cloning",
"Chemical engineering",
"Tissue engineering",
"Medical technology"
] |
55,985,697 | https://en.wikipedia.org/wiki/ULAS%20J1342%2B0928 | ULAS J1342+0928 is the third-most distant known quasar and contains the second-most distant and oldest known supermassive black hole, at a reported redshift of z = 7.54. The ULAS J1342+0928 quasar is located in the Boötes constellation. The related supermassive black hole is reported to be "780 million times the mass of the Sun". At its discovery, it was the most distant known quasar. In 2021 it was eclipsed by QSO J0313-1806 as the most distant quasar.
Discovery
On 6 December 2017, astronomers published that they had found the quasar using data from the Wide-field Infrared Survey Explorer (WISE) combined with ground-based surveys from the United Kingdom Infrared Telescope and the DECam Legacy Survey. It was spectroscopically confirmed using data from the Magellan Telescopes at Las Campanas Observatory in Chile, as well as the Large Binocular Telescope in Arizona and the Gemini North telescope in Hawaii. The related black hole of the quasar existed when the universe was about 690 million years old (about 5 percent of its currently known age of 13.80 billion years).
The quasar comes from a time known as "the epoch of reionization", when the universe emerged from its Dark Ages. Extensive amounts of dust and gas have been detected to be released from the quasar into the interstellar medium of its host galaxy.
Description
ULAS J1342+0928 has a measured redshift of 7.54, which corresponds to a comoving distance of 29.36 billion light-years from Earth. When it was reported in 2017, it was the most distant quasar yet observed. The quasar emitted the light observed on Earth today less than 690 million years after the Big Bang, about 13.1 billion years ago.
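These distance and age figures follow from the redshift via a cosmological model. As a rough check, the following sketch uses astropy's built-in Planck18 cosmology; its parameters differ slightly from those adopted in the discovery paper, so the numbers will not match the quoted values exactly:

```python
from astropy.cosmology import Planck18
import astropy.units as u

z = 7.54  # reported redshift of ULAS J1342+0928

# comoving distance, lookback time, and the age of the universe at emission
d_c = Planck18.comoving_distance(z).to(u.Glyr)
t_lb = Planck18.lookback_time(z)
t_at_z = Planck18.age(z)

print(f"comoving distance : {d_c:.1f}")     # ~29 Glyr
print(f"lookback time     : {t_lb:.1f}")    # ~13.1 Gyr
print(f"age at emission   : {t_at_z:.2f}")  # ~0.7 Gyr after the Big Bang
```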
The quasar is extremely luminous; this energy output is generated by a supermassive black hole estimated at roughly 780 million solar masses. According to lead astronomer Bañados, "This particular quasar is so bright that it will become a gold mine for follow-up studies and will be a crucial laboratory to study the early universe."
Significance
The light from ULAS J1342+0928 was emitted before the end of the theoretically predicted transition of the intergalactic medium from an electrically neutral to an ionized state (the epoch of reionization). Quasars may have been an important energy source in this process, which marked the end of the cosmic Dark Ages, so observing a quasar from before the transition is of major interest to theoreticians. Because of their high ultraviolet luminosity, quasars also are some of the best sources for studying the reionization process. The discovery is also described as challenging theories of black hole formation, by having a supermassive black hole much larger than expected at such an early stage in the Universe's history, though this is not the first distant quasar to offer such a challenge.
See also
List of the most distant astronomical objects
List of quasars
J0313–1806
References
External links
Carnegie Institution for Science
NASA APOD − Quasar Pictures
Quasar Image Gallery/perseus
Astronomical objects discovered in 2017
Supermassive black holes
Boötes
Quasars | ULAS J1342+0928 | [
"Physics",
"Astronomy"
] | 700 | [
"Black holes",
"Boötes",
"Unsolved problems in physics",
"Supermassive black holes",
"Constellations"
] |
55,993,407 | https://en.wikipedia.org/wiki/Cooperative%20segmental%20mobility | Cooperative segmental mobility is a phenomenon associated with the mobility of tens to a few hundreds of repeat units of a polymer and is important in defining the transition between the “glassy” and “rubbery” states of the polymer. This cooperative segmental mobility is closely related to the dynamics of the polymer near its glass transition temperature. In the glassy state, the relaxation process of a polymer chain is a cooperative phenomenon, and the molecular motion of each segment depends to some degree on that of its neighbors.
Background
F. Bueche was the first to propose a theory of segmental mobility of polymers near their glass transition temperature. He presented two methods to describe the mobility of polymer segments near Tg. Based on his excluded-volume calculations, he concluded that any factor which increases the volume disturbed by a rotating segment of a polymer chain, or which increases the density of the polymer, will in turn increase Tg. Hence, increasing chain stiffness and adding bulky but stiff chain side groups will increase Tg.
Later in 1965, Adam and Gibbs formulated the temperature dependence of cooperative relaxation properties in glass forming liquids. It was theorized that the local relaxation of polymer chains occurs in cooperative rearranging regions (CRR) where there is a collective motion of small polymer segments (hence the name segmental mobility). They correlated the size of these cooperative regions to the configurational entropy of the system. Typically the size of the CRR near Tg ranges between 1 and 4 nm.
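The central quantitative statement of Adam–Gibbs theory relates the structural relaxation time τ to this configurational entropy (standard textbook form; τ0 and C are material-dependent constants):

τ(T) = τ0 exp[ C / (T · Sc(T)) ]

Because the configurational entropy Sc(T) shrinks on cooling, the cooperatively rearranging regions grow and the relaxation time increases sharply as Tg is approached.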
Alexandrov Solunov formulated an equation to determine the average size of the cooperative rearranging regions which corresponds to the average number of structural units that relax together to cross from one configuration to another. He concluded that the glass transition temperature is proportional to the activation energy of these CRRs.
Following this work, many researchers began investigating ways to quantify this cooperative segmental mobility, which in turn affects the relaxation time of polymers.
Theory
The concept of cooperative segmental mobility follows from the quantification of free volume. With the reduction in temperature, the free volume available to the polymer segments is reduced. Due to this loss in free volume, cooperative segmental dynamics is significantly slowed near the glass transition temperature and is practically arrested at the glass transition temperature. In other words, the molecular rearrangements are frozen.
This cooperative segmental mobility has a pronounced effect on the glass transition temperature of the polymer. As F. Bueche concluded in his work, increasing the chain stiffness and/or adding a bulky, stiff side group significantly increases Tg. For example, the glass transition temperature of poly(dimethyl siloxane) is −125 °C whereas that of poly(isobutylene) is −75 °C, due to the stiffer backbone chain. Similarly, Tg for polypropylene is −20 °C whereas for polystyrene it is 95 °C, due to the presence of a bulkier side group.
The glass transition temperature in thin films is also affected by this phenomenon. Exposure to the free surface enhances the cooperative segmental mobility which reduces Tg. This can be detrimental in applications like nano electronics, where polymer thin films are used.
Experimentally, single-molecule fluorescence microscopy can be used to detect the motion of single molecules, and thus reflect the segmental mobility in their neighboring domains. From simulation perspective, the probability of segment movement (PSM) is a parameter used to directly measure the segmental mobility. PSM is defined as the probability of movement of each segment either by jumping into its neighboring empty sites or partial sliding diffusion. The value of 〈PSM〉 decreases initially with the decrease of temperature, and then levels off at low temperatures, indicating that the chain segments enter the completely frozen state.
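As an illustration of the PSM idea, the following minimal Monte Carlo sketch tracks a single chain on a 2D lattice and counts the fraction of attempted single-segment hops that succeed. All parameters (lattice size, chain length, contact energy) are hypothetical and chosen only to show the qualitative trend of ⟨PSM⟩ falling as temperature drops:

```python
import math
import random

L = 12       # lattice size (periodic boundaries)
N = 10       # number of chain segments
EPS = -1.0   # attractive non-bonded contact energy (illustrative)

def neighbours(p):
    x, y = p
    return [((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L)]

def contact_energy(chain, i, pos):
    """Non-bonded nearest-neighbour contact energy of segment i placed at pos."""
    others = set(chain) - {chain[i]}
    bonded = {chain[j] for j in (i - 1, i + 1) if 0 <= j < N}
    return EPS * sum(1 for q in neighbours(pos) if q in others and q not in bonded)

def mean_psm(T, attempts=20000):
    chain = [(i, 0) for i in range(N)]   # straight initial conformation
    moved = 0
    for _ in range(attempts):
        i = random.randrange(N)
        occupied = set(chain)
        # vacant neighbour sites that keep both chain bonds intact
        cands = [q for q in neighbours(chain[i]) if q not in occupied
                 and all(chain[j] in neighbours(q)
                         for j in (i - 1, i + 1) if 0 <= j < N)]
        if not cands:
            continue
        new = random.choice(cands)
        dE = contact_energy(chain, i, new) - contact_energy(chain, i, chain[i])
        if dE <= 0 or random.random() < math.exp(-dE / T):  # Metropolis criterion
            chain[i] = new
            moved += 1
    return moved / attempts

for T in (2.0, 1.0, 0.5, 0.2):
    print(f"T = {T:3.1f}   <PSM> ~ {mean_psm(T):.3f}")
```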
References
Polymer physics | Cooperative segmental mobility | [
"Chemistry",
"Materials_science"
] | 750 | [
"Polymer physics",
"Polymer chemistry"
] |
72,977,646 | https://en.wikipedia.org/wiki/Mode%20conversion | Mode conversion is the transformation of a wave at an interface into other wave types (modes).
Principle
Mode conversion occurs when a wave encounters an interface between materials of different impedances and the incident angle is not normal to the interface. Thus, for example, if a longitudinal wave from a fluid (e.g., water or air) strikes a solid (e.g., steel plate), it is usually refracted and reflected as a function of the angle of incidence, but if some of the energy causes particle movement in the transverse direction, a second transverse wave is generated, which can also be refracted and reflected. Snellius' law of refraction can be formulated as sin θ1 / c1 = sin θ2 / c2, where θ1 and θ2 are the angles from the surface normal and c1 and c2 are the phase velocities of the respective wave modes.
This means that the incident wave is split into two different wave types at the interface. If we consider a wave incident on an interface of two different solids (e.g. aluminum and steel), the wave type of the reflected wave also splits.
Besides these simple mode conversions, an incident wave can also be converted into surface waves. For example, if one radiates a longitudinal wave at a shallower angle than that of total reflection onto a boundary surface, it will be totally reflected, but in addition a surface wave traveling along the boundary layer will be generated. The incident wave is thus converted into reflected longitudinal and surface wave.
In general, mode conversions are not discrete processes, i.e. a part of the incident energy is converted into different types of waves. The amplitudes (transmission factor, reflection factor) of the converted waves depend on the angle of incidence.
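The angles of the converted waves follow directly from the generalized Snell's law given above. A small sketch (the wave speeds are typical textbook figures for a water–steel interface, not measured values):

```python
import math

def refraction_angle(theta_incident_deg, c_incident, c_refracted):
    """Refracted angle in degrees, or None beyond the critical angle
    (that mode is then totally reflected)."""
    s = math.sin(math.radians(theta_incident_deg)) * c_refracted / c_incident
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

c_water_L = 1480.0   # longitudinal speed in water, m/s
c_steel_L = 5900.0   # longitudinal speed in steel, m/s
c_steel_T = 3230.0   # transverse (shear) speed in steel, m/s

theta = 10.0  # angle of incidence from the surface normal, degrees
for name, c in (("longitudinal", c_steel_L), ("transverse", c_steel_T)):
    a = refraction_angle(theta, c_water_L, c)
    label = "totally reflected" if a is None else f"{a:.1f} deg"
    print(f"refracted {name} wave in steel: {label}")
```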
Seismic waves
In seismology, a wave conversion specifically refers to the conversion between P and S waves at discontinuities. Body waves are reflected and refracted when they hit a boundary layer within the earth. Here, P-waves can be converted into S-waves (PS-wave) at interfaces, as well as vice versa (SP-wave). For an incident P-wave, the analogous relation applies: sin θP / vP = sin θS / vS, where vP and vS are the P- and S-wave velocities.
The change in amplitudes can be described with the Zoeppritz equations.
References
Wave mechanics | Mode conversion | [
"Physics"
] | 426 | [
"Wave mechanics",
"Waves",
"Physical phenomena",
"Classical mechanics"
] |
72,988,466 | https://en.wikipedia.org/wiki/Axial%20current | In particle physics, the axial current, also denoted the pseudo-vector or chiral current, is the conserved current associated to the chiral symmetry or axial symmetry of a system.
Origin
According to Noether's theorem, each symmetry of a system has an associated conserved quantity. For example, the rotational invariance of a system implies the conservation of its angular momentum, and invariance under spacetime translations implies the conservation of energy–momentum. In quantum field theory, internal symmetries also result in conserved quantities. For example, the U(1) gauge transformation of QED implies the conservation of the electric charge. Likewise, if a theory possesses an internal chiral or axial symmetry, there will be a conserved quantity, which is called the axial charge. Further, just as the motion of an electrically charged particle produces an electric current, a moving axial charge constitutes an axial current.
Definition
The axial current resulting from the motion of an axially charged moving particle is formally defined as j5μ = ψ̄γμγ5ψ, where ψ is the particle field represented by a Dirac spinor (since the particle is typically a spin-1/2 fermion) and γμ and γ5 are the Dirac gamma matrices.
For comparison, the electromagnetic current produced by an electrically charged moving particle is jμ = ψ̄γμψ.
Meaning
As explained above, the axial current is simply the equivalent of the electromagnetic current for the axial symmetry instead of the U(1) symmetry. Another perspective is given by recalling that the
chiral symmetry is the invariance of the theory under the field rotation ψL → e^(iθL)ψL and ψR → ψR (or alternatively ψL → ψL and ψR → e^(iθR)ψR), where ψL denotes a left-handed field and ψR a right-handed one.
From this, as well as the fact that ψ = ψL + ψR and the definition of j5μ above, one sees that the axial current is the difference between the current due to left-handed fermions and that from right-handed ones, whilst the electromagnetic current is the sum.
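The sum/difference statement can be made explicit with the standard chiral projectors (a sketch; the sign convention for γ5 varies between textbooks):

```latex
% left/right decomposition of the vector and axial currents
\psi_{L,R} = P_{L,R}\,\psi , \qquad P_{L,R} = \tfrac{1}{2}\,(1 \mp \gamma^5)

% cross terms such as \bar{\psi}_L \gamma^\mu \psi_R vanish, leaving
j^{\mu}  = \bar{\psi}\,\gamma^\mu\,\psi
         = \bar{\psi}_R\,\gamma^\mu\,\psi_R + \bar{\psi}_L\,\gamma^\mu\,\psi_L
         \quad \text{(electromagnetic current: sum)}

j^{5\mu} = \bar{\psi}\,\gamma^\mu\gamma^5\,\psi
         = \bar{\psi}_R\,\gamma^\mu\,\psi_R - \bar{\psi}_L\,\gamma^\mu\,\psi_L
         \quad \text{(axial current: difference)}
```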
Chiral symmetry is exhibited by vector gauge theories with massless fermions. Since there is no known massless fermion in nature, chiral symmetry is at best an approximate symmetry in fundamental theories, and the axial current is not conserved. (Note: this explicit breaking of the chiral symmetry by non-zero masses is not to be confused with the spontaneous chiral symmetry breaking that plays a dominant role in hadronic physics.) An important consequence of such non-conservation is the neutral pion decay and the chiral anomaly, which is directly related to the pion decay width.
Applications
The axial current is an important part of the formalism describing high-energy scattering reactions. In such reactions, two particles scatter off each other by exchanging a force boson, e.g., a photon for electromagnetic scattering.
The cross-section for such a reaction is proportional to the square of the scattering amplitude, which in turn is given by the product of the boson propagator and the two currents associated with the motions of the two colliding particles. Therefore, currents (axial or electromagnetic) are one of the two essential ingredients needed to compute high-energy scattering, the other being the boson propagator.
In electron–nucleon scattering (or more generally, charged lepton–hadron/nucleus scattering) the axial current yields the spin-dependent part of the cross-section. (The spin-average part of the cross-section comes from the electromagnetic current.)
In neutrino–nucleon scattering, neutrinos couple only via the axial current, thus accessing different nucleon structure information than with charged leptons.
Neutral pions also couple only via the axial current because pions are pseudoscalar particles and, to produce amplitudes (scalar quantities), a pion must couple to another pseudoscalar object like the axial current. (Charged pions can also couple via the electromagnetic current.)
See also
Chiral anomaly
Chiral symmetry breaking
Chiral perturbation theory
Chiral magnetic effect
Parity (physics)
QCD
References
Physical quantities
Quantum field theory
Particle physics
Nuclear physics
Quantum chromodynamics
Standard Model
Conservation equations
Conservation laws
Symmetry
Four-vectors | Axial current | [
"Physics",
"Mathematics"
] | 830 | [
"Physical phenomena",
"Physical quantities",
"Four-vectors",
"Quantum mechanics",
"Mathematical objects",
"Particle physics",
"Nuclear physics",
"Equations of physics",
"Quantity",
"Vector physical quantities",
"Geometry",
"Symmetry",
"Physics theorems",
"Standard Model",
"Quantum field ... |
59,385,967 | https://en.wikipedia.org/wiki/Azanide | Azanide is the IUPAC-sanctioned name for the anion NH2−. The term is obscure; derivatives of NH2− are almost invariably referred to as amides, despite the fact that amide also refers to the organic functional group –C(O)NH2. The NH2− anion is the conjugate base of ammonia, so it is formed by the self-ionization of ammonia. It is produced by deprotonation of ammonia, usually with strong bases or an alkali metal. Azanide has an H–N–H bond angle of 104.5°.
Alkali metal derivatives
The alkali metal derivatives are best known, although usually referred to as alkali metal amides. Examples include lithium amide, sodium amide, and potassium amide. These salt-like solids are produced by treating liquid ammonia with strong bases or directly with the alkali metals (blue liquid ammonia solutions due to the solvated electron):
2 M + 2 NH3 → 2 MNH2 + H2, where M = Li, Na, K
Silver(I) amide (AgNH2) is prepared similarly.
Transition metal complexes of the amido ligand are often produced by salt metathesis reaction or by deprotonation of metal ammine complexes.
References
Anions
Nitrogen hydrides | Azanide | [
"Physics",
"Chemistry"
] | 244 | [
"Ions",
"Matter",
"Anions"
] |
59,393,164 | https://en.wikipedia.org/wiki/Uschi%20Steigenberger | Ursula "Uschi" Steigenberger (25 April 1951 – 12 December 2018) FInstP was a German condensed matter physicist and director of the ISIS neutron source. She was one of the founders of the Institute of Physics Juno Award.
Education
Steigenberger was born in Augsburg. She studied physics at the University of Würzburg, where she remained for her graduate studies. She earned a PhD in condensed matter physics under the supervision of Michael von Ortenburg. For her doctoral thesis she designed a stress rig, which allowed her to apply uniaxial pressure to single crystals of tellurium. After earning her PhD in 1981, Steigenberger worked in various magnet labs, including the CNRS Grenoble High Field Magnet Facility in Grenoble and Lebedev Physical Institute in Moscow.
Career and research
Steigenberger joined the Institut Laue–Langevin in 1982. She worked on the PRogetto dell'Istituto di Strutura della MAteria del CNR (PRISMA) spectrometer with collaborators in Italy, and continued to work with them on the motion of molecules. She worked on cadmium telluride that had been alloyed with magnesium. Here she met British physicist Keith McEwen, whom she married in 1986 before the couple moved to the United Kingdom.
Steigenberger joined the ISIS neutron source in 1986. She was initially responsible for the PRISMA spectrometer, leading a collaboration between the Engineering and Physical Sciences Research Council (EPSRC) and the Italian Consiglio Nazionale delle Ricerche. PRISMA was used for inelastic neutron scattering from single crystals. It can measure the change in scattering during the application of temperature, pressure or magnetic fields. Steigenberger was made one of the ISIS Excitations Group leaders, becoming the first woman Division Head in 1994. She attended the Oxford School on Neutron Scattering in 1991. In 1996 Steigenberger coordinated a workshop on pulsed neutron experiments. She served as director at the ISIS neutron source from 2011 to 2012. She worked with scientists from Norway to create Larmor, a high-intensity small-angle scattering instrument that uses a beam of polarised neutrons to study the movement of atoms in a material. Steigenberger served as Chair of the Helmholtz-Zentrum Berlin. She retired from the ISIS neutron source in 2013. Her work was recognised by scientists all over the world, and she developed important partnerships with Japan and Italy.
Awards and honours
Steigenberger was shortlisted for a WISE Campaign Lifetime Achievement Award. She was a Fellow of the Institute of Physics, and part of their review of women in the physics workplace that led to the Juno award scheme as well as being a member of their Science Board. In the 2016 New Year Honours she was appointed Officer of the Order of the British Empire (OBE) for services to science, which she collected in 2017. This OBE was honorary as Steigenberger kept her German citizenship whilst working in the UK.
Death
Steigenberger was diagnosed with pancreatic cancer in January 2017, and died on 12 December 2018.
References
2018 deaths
German women physicists
Condensed matter physicists
20th-century German physicists
20th-century German women scientists
21st-century German physicists
21st-century German women scientists
1951 births | Uschi Steigenberger | [
"Physics",
"Materials_science"
] | 657 | [
"Condensed matter physicists",
"Condensed matter physics"
] |
59,401,907 | https://en.wikipedia.org/wiki/Christel%20Marian | Christel Maria Marian (born 4 June 1954) is a German chemist. She is a full professor and the director of the Institute of Theoretical and Computational Chemistry at the University of Düsseldorf.
Education and professional life
Marian studied chemistry in Cologne and Bonn. She finished her doctorate in Theoretical Chemistry at the University of Bonn under the supervision of Sigrid D. Peyerimhoff in 1980. She did a postdoc in the Theoretical Physics Department of Stockholm University (Sweden) in the group of Per E. M. Siegbahn. She completed her habilitation at the University of Bonn in 1991. In 2001, she joined the University of Düsseldorf as a full professor. Between 2011 and 2015, she was Dean of the Mathematical and Natural Science Faculty at the University of Düsseldorf.
Personal life
She has two daughters.
Research
Her research focuses on the development and application of theoretical and computational excited-state electronic structure methods – also for biomolecules. She also worked on spin-orbit coupling in molecules.
Selected publications
Some of her most cited publications are:
Awards
She is a member of the North Rhine-Westphalian Academy of Sciences, Humanities and the Arts.
References
1954 births
20th-century German chemists
German women chemists
Living people
Theoretical chemists
20th-century German women scientists
21st-century German chemists | Christel Marian | [
"Chemistry"
] | 264 | [
"Quantum chemistry",
"Theoretical chemistry",
"Theoretical chemists",
"Physical chemists"
] |
51,559,936 | https://en.wikipedia.org/wiki/Realizing%20Increased%20Photosynthetic%20Efficiency | Realizing Increased Photosynthetic Efficiency (RIPE) is a translational research project that is genetically engineering plants to photosynthesize more efficiently to increase crop yields. RIPE aims to increase agricultural production worldwide, particularly to help reduce hunger and poverty in Sub-Saharan Africa and Southeast Asia by sustainably improving the yield of key food crops including soybeans, rice, cassava and cowpeas. The RIPE project began in 2012, funded by a five-year, $25 million grant from the Bill and Melinda Gates Foundation. In 2017, the project received a $45 million reinvestment from the Gates Foundation, Foundation for Food and Agriculture Research, and the UK Government's Department for International Development. In 2018, the Gates Foundation contributed an additional $13 million to accelerate the project's progress.
Background
During the 20th century, the Green Revolution dramatically increased yields through advances in plant breeding and land management. This period of agricultural innovation is credited for saving millions of lives. However, these approaches are reaching their biological limits, leading to stagnation in yield improvement. In 2009, the Food and Agriculture Organization projected that global food production must increase by 70% by 2050 to feed an estimated world population of 9 billion people. Meeting the demands of 2050 is further challenged by shrinking arable land, decreasing natural resources, and climate change.
Research
The RIPE project's proof-of-concept study, published in Science, established that photosynthesis can be improved to increase yields. The Guardian named this discovery one of the 12 key science moments of 2016. Computer model simulations identify strategies to improve the basic underlying mechanisms of photosynthesis and increase yield. First, researchers transform, or genetically engineer, model plants that are tested in controlled environments, e.g. growth chambers and greenhouses. Next, successful transformations are tested in randomized, replicated field trials. Finally, transformations with statistically significant yield increases are translated to the project's target food crops. Likely several approaches could be combined to additively increase yield. "Global access" ensures smallholder farmers will be able to use and afford the project's intellectual property.
Organization
RIPE is led by the University of Illinois at the Carl R. Woese Institute for Genomic Biology. The project's partner institutions include the Australian National University, Chinese Academy of Sciences, Commonwealth Scientific and Industrial Research Organisation, Lancaster University, Louisiana State University, University of California at Berkeley, University of Cambridge, University of Essex, and the United States Department of Agriculture/Agricultural Research Service.
The Executive Committee oversees the various research strategies; its members are listed in the table below.
References
Genetic engineering and agriculture
Research projects
Photosynthesis | Realizing Increased Photosynthetic Efficiency | [
"Chemistry",
"Engineering",
"Biology"
] | 541 | [
"Biochemistry",
"Genetic engineering and agriculture",
"Genetic engineering",
"Photosynthesis"
] |
62,943,310 | https://en.wikipedia.org/wiki/Green%20hydrogen | Green hydrogen (GH2) is hydrogen produced by the electrolysis of water, using renewable electricity. Production of green hydrogen causes significantly lower greenhouse gas emissions than production of grey hydrogen, which is derived from fossil fuels without carbon capture.
Green hydrogen's principal purpose is to help limit global warming to 1.5 °C, reduce fossil fuel dependence by replacing grey hydrogen, and provide for an expanded set of end-uses in specific economic sectors, sub-sectors and activities. These end-uses may be technically difficult to decarbonize through other means such as electrification with renewable power. Its main applications are likely to be in heavy industry (e.g. high temperature processes alongside electricity, feedstock for production of green ammonia and organic chemicals, as direct reduction steelmaking), long-haul transport (e.g. shipping, aviation and to a lesser extent heavy goods vehicles), and long-term energy storage.
As of 2021, green hydrogen accounted for less than 0.04% of total hydrogen production. Its cost relative to hydrogen derived from fossil fuels is the main reason green hydrogen is in less demand. For example, hydrogen produced by electrolysis powered by solar power was about 25 times more expensive than that derived from hydrocarbons in 2018. By 2024, this cost disadvantage had decreased to approximately 3x more expensive.
Definition
Most commonly, green hydrogen is defined as hydrogen produced by the electrolysis of water, using renewable electricity. In this article, the term green hydrogen is used with this meaning.
Precise definitions sometimes add other criteria. The global Green Hydrogen Standard defines green hydrogen as "hydrogen produced through the electrolysis of water with 100% or near 100% renewable energy with close to zero greenhouse gas emissions."
A broader, less-used definition of green hydrogen also includes hydrogen produced through various other methods that produce relatively low emissions and meet other sustainability criteria. For example, these production methods may involve nuclear energy or biomass feedstocks.
Electrolysis
Hydrogen can be produced from water by electrolysis. Electrolysis powered by renewable energy is carbon neutral. The business consortium Hydrogen Council said that, as of December 2023, manufacturers are preparing for a green hydrogen expansion by building out the electrolyzer pipeline by 35 percent to meet the needs of more than 1,400 announced projects.
Biochar-assisted
Biochar-assisted water electrolysis (BAWE) reduces energy consumption by replacing the oxygen evolution reaction (OER) with the biochar oxidation reaction (BOR). An electrolyte dissolves the biochar as the reaction proceeds. A 2024 study claimed that the reaction was 6x more efficient than conventional electrolysis, operating at <1 V with ~250 mA/gcat current at 100% Faradaic efficiency. The process could be driven by small-scale solar or wind power.
Cow manure biochar operated at only 0.5 V, better than materials such as sugarcane husks, hemp waste, and paper waste. Almost 35% of the biochar and solar energy was converted into hydrogen. Biochar production (via pyrolysis) is not carbon neutral.
Uses
There is potential for green hydrogen to play a significant role in decarbonising energy systems where there are challenges and limitations to replacing fossil fuels with direct use of electricity.
Hydrogen fuel can produce the intense heat required for industrial production of steel, cement, glass, and chemicals, thus contributing to the decarbonisation of industry alongside other technologies, such as electric arc furnaces for steelmaking. However, it is likely to play a larger role in providing industrial feedstock for cleaner production of ammonia and organic chemicals. For example, in steelmaking, hydrogen could function as a clean energy carrier and also as a low-carbon reducing agent, replacing coal-derived coke.
Hydrogen used to decarbonise transportation is likely to find its largest applications in shipping, aviation and to a lesser extent heavy goods vehicles, through the use of hydrogen-derived synthetic fuels such as ammonia and methanol, and fuel cell technology. As an energy resource, hydrogen has a superior gravimetric energy density (39.6 kWh/kg) compared with batteries (lithium battery: 0.15–0.25 kWh/kg). For light duty vehicles including passenger cars, hydrogen is far behind other alternative fuel vehicles, especially compared with the rate of adoption of battery electric vehicles, and may not play a significant role in future.
Green hydrogen can also be used for long-duration grid energy storage, and for long-duration seasonal energy storage. It has been explored as an alternative to batteries for short-duration energy storage.
Green methanol
Green methanol is a liquid fuel that is produced by combining carbon dioxide and hydrogen (CO2 + 3 H2 → CH3OH + H2O) under pressure and heat with catalysts. It is a way to reuse captured carbon for recycling. Methanol can store hydrogen economically at standard outdoor temperatures and pressures, compared to liquid hydrogen and ammonia, which need to use a lot of energy to stay cold in their liquid state. In 2023 the Laura Maersk was the first container ship to run on methanol fuel. Ethanol plants in the midwest are a good place for pure carbon capture to combine with hydrogen to make green methanol, with abundant wind and nuclear energy in Iowa, Minnesota, and Illinois. Mixing methanol with ethanol could make methanol a safer fuel to use, because methanol doesn't have a visible flame in daylight and doesn't emit smoke, while ethanol has a visible light-yellow flame. Green hydrogen production at 70% efficiency, followed by 70%-efficient methanol production, would give a 49% (0.70 × 0.70) overall energy conversion efficiency.
Market
As of 2022, the global hydrogen market was valued at $155 billion and was expected to grow at a compound annual growth rate (CAGR) of 9.3% between 2023 and 2030.
Of this market, green hydrogen accounted for about $4.2 billion (2.7%).
Due to the higher cost of production, green hydrogen represents a smaller fraction of the hydrogen produced compared to its share of market value.
The majority of hydrogen produced in 2020 was derived from fossil fuel. 99% came from carbon-based sources. Electrolysis-driven production represents less than 0.1% of the total, of which only a part is powered by renewable electricity.
The current high cost of production is the main factor limiting the use of green hydrogen. A price of $2/kg is considered by many to be a potential tipping point that would make green hydrogen competitive against grey hydrogen. It is cheapest to produce green hydrogen with surplus renewable power that would otherwise be curtailed, which favours electrolysers capable of responding to low and variable power levels (such as proton exchange membrane electrolysers).
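For intuition on why cheap surplus power matters, the electricity component of the production cost can be estimated in a few lines (a sketch: 33.3 kWh/kg is hydrogen's lower heating value, while the electrolyser efficiency and electricity prices are illustrative assumptions, not market quotes):

```python
LHV_KWH_PER_KG = 33.3  # lower heating value of hydrogen, kWh per kg

def electricity_cost_per_kg(price_usd_per_kwh, electrolyser_efficiency=0.70):
    """Electricity component only; capex, water and compression are excluded."""
    kwh_needed = LHV_KWH_PER_KG / electrolyser_efficiency
    return price_usd_per_kwh * kwh_needed

for price in (0.06, 0.04, 0.02):  # $/kWh, from grid power down to cheap surplus
    print(f"${price:.2f}/kWh -> ${electricity_cost_per_kg(price):.2f} per kg H2")

# Cheap surplus renewables (~$0.02/kWh) bring the electricity component close
# to $1/kg, which is why they matter for reaching the ~$2/kg tipping point.
```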
The cost of electrolysers fell by 60% from 2010 to 2022, and green hydrogen production costs are forecasted to fall significantly to 2030 and 2050, driving down the cost of green hydrogen alongside the falling cost of renewable power generation. Goldman Sachs analysis observed in 2022, just prior to Russia's invasion of Ukraine that the "unique dynamic in Europe with historically high gas and carbon prices is already leading to green H2 cost parity with grey across key parts of the region", and anticipated that globally green hydrogen achieve cost parity with grey hydrogen by 2030, earlier if a global carbon tax were placed on grey hydrogen.
As of 2021, the green hydrogen investment pipeline was estimated at 121 gigawatts of electrolyser capacity across 136 projects in planning and development phases, totaling over $500 billion. If all projects in the pipeline were built, they could account for 10% of hydrogen production by 2030.
The market could be worth over $1 trillion a year by 2050 according to Goldman Sachs.
An energy market analyst suggested in early 2021 that the price of green hydrogen would drop 70% by 2031 in countries that have cheap renewable energy.
Projects
Australia
In 2020, the Australian government fast-tracked approval for the world's largest planned renewable energy export facility in the Pilbara region. In 2021, energy companies announced plans to construct a "hydrogen valley" in New South Wales at a cost of $2 billion to replace the region's coal industry.
As of July 2022, the Australian Renewable Energy Agency (ARENA) had invested $88 million in 35 hydrogen projects ranging from university research and development to first-of-a-kind demonstrations. In 2022, ARENA was expected to close on two or three of Australia's first large-scale electrolyser deployments as part of its $100 million hydrogen deployment round.
In 2024 Andrew Forrest delayed or cancelled plans to manufacture 15 million tonnes of green hydrogen per year by 2030.
Brazil
Brazil's energy matrix is considered one of the cleanest in the world. Experts highlight the country's potential for producing green hydrogen. Research carried out in the country indicates that biomass (such as starches and waste from sewage treatment plants) can be processed and converted into green hydrogen (see: Bioenergy, Biohydrogen and Biological hydrogen production). The Australian company Fortescue Metals Group has plans to install a green hydrogen plant near the port of Pecém, in Ceará, with an initial forecast of starting operations in 2022. In the same year, the Federal University of Santa Catarina announced a partnership with the German Deutsche Gesellschaft für Internationale Zusammenarbeit, for the production of H2V. Unigel has plans to build a green hydrogen/green ammonia plant in Camaçari, Bahia, which is scheduled to come into operation in 2023. Initiatives in this area are also ongoing in the states of Minas Gerais, Paraná, Pernambuco, Piauí, Rio de Janeiro, Rio Grande do Norte, Rio Grande do Sul and São Paulo. Research work by the University of Campinas and the Technical University of Munich has determined the space required for wind and solar parks for large-scale hydrogen production. According to this, significantly less land will be required to produce green hydrogen from wind and photovoltaic energy than is currently required to grow fuel from sugarcane. In this study, author Herzog assumed an electricity requirement for the electrolysers of 120 gigawatts (GW). On November 20, 2023, Ursula von der Leyen, President of the European Commission, announced support for the production of 10 GW of hydrogen and subsequently ammonia in the state of Piauí. Ammonia will be exported from there.
Canada
World Energy GH2's Project Nujio'qonik aims to be Canada's first commercial green hydrogen / ammonia producer created from three gigawatts of wind energy on the west coast of Newfoundland and Labrador, Canada. Nujio'qonik is the Mi'kmaw name for Bay St. George, where the project is proposed. Since June 2022, the project has been undergoing environmental assessment according to regulatory guidelines issued by the Government of Newfoundland and Labrador.
Chile
Chile's goal to use only clean energy by the year 2050 includes the use of green hydrogen. The EU Latin America and Caribbean Investment Facility provided a €16.5 million grant and the EIB and KfW are in the process of providing up to €100 million each to finance green hydrogen projects.
China
In 2022 China was the leader of the global hydrogen market with an output of 33 million tons (a third of global production), mostly using fossil fuel.
As of 2021, several companies have formed alliances to increase production of the fuel fifty-fold in the next six years.
Sinopec aimed to generate 500,000 tonnes of green hydrogen by 2025. Hydrogen generated from wind energy could provide a cost-effective alternative for coal-dependent regions like Inner Mongolia. As part of preparations for the 2022 Winter Olympics, a hydrogen electrolyser described as the "world's largest" began operations to fuel vehicles used at the games. The electrolyser was powered by onshore wind.
Egypt
Egypt has opened the door to $40 billion of investment in green hydrogen and renewable technology by signing seven memoranda of understanding with international developers in the fields. The projects located in the Suez canal economic zone will see an investment of around $12 billion at an initial pilot phase, followed by a further $29 billion, according to the country's Planning Minister, Hala Helmy el-Said.
Germany
Germany invested €9 billion to construct 5 GW of electrolyzer capacity by 2030.
India
Reliance Industries announced its plan to use about 3 gigawatts (GW) of solar energy to generate 400,000 tonnes of hydrogen. Gautam Adani, founder of the Adani Group announced plans to invest $70 billion to become the world's largest renewable energy company, and produce the cheapest hydrogen across the globe. The power ministry of India has stated that India intends to produce a cumulative 5 million tonnes of green hydrogen by 2030.
In April 2022, the public sector Oil India Limited (OIL), which is headquartered in eastern Assam's Duliajan, set up India's first 99.99% pure green hydrogen pilot plant in keeping with the goal of "making the country ready for the pilot-scale production of hydrogen and its use in various applications" while "research and development efforts are ongoing for a reduction in the cost of production, storage and the transportation" of hydrogen.
In January 2024, green hydrogen projects with a combined capacity of nearly 412,000 metric tons per year were awarded, with production due by the end of 2026.
Japan
In 2023, Japan announced plans to spend US$21 billion on subsidies for delivered clean hydrogen over a 15-year period.
Mauritania
Mauritania launched two major projects on green hydrogen. The NOUR Project would become one of the world's largest hydrogen projects, with 10 GW of capacity by 2030, in cooperation with the Chariot company. The second is the AMAN Project, which includes 12 GW of wind capacity and 18 GW of solar capacity to produce 1.7 million tons per annum of green hydrogen or 10 million tons per annum of green ammonia for local use and export, in cooperation with the Australian company CWP Renewables.
Namibia
Namibia has commissioned a green hydrogen production project with German support. The $10 billion project involves the construction of wind farms and photovoltaic plants with a total capacity of 7 gigawatts (GW) to produce green hydrogen. It aims to produce 2 million tonnes of green ammonia and hydrogen derivatives by 2030 and will create 15,000 jobs, of which 3,000 will be permanent.
Oman
An association of companies announced a $30 billion project in Oman, which would become one of the world's largest hydrogen facilities. Construction was to begin in 2028. By 2038 the project was to be powered by 25 GW of wind and solar energy.
Portugal
In April 2021, Portugal announced plans to construct the first solar-powered plant to produce hydrogen by 2023. Lisbon based energy company Galp Energia announced plans to construct an electrolyser to power its refinery by 2025.
Saudi Arabia
In 2021, Saudi Arabia, as a part of the NEOM project, announced an investment of $5bn to build a green hydrogen-based ammonia plant, which would start production in 2025.
Singapore
Singapore started the construction of a 600 MW hydrogen-ready power plant that is expected to be ready by the first half of 2026.
Spain
In February 2021, thirty companies announced a pioneering project to provide hydrogen bases in Spain. The project intended to supply 93 GW of solar and 67 GW of electrolysis capacity by the end of the decade.
United Arab Emirates
In 2021, in collaboration with Expo 2020 Dubai, a pilot project was launched which is the first "industrial scale", solar-driven green hydrogen facility in the Middle East and North Africa.
United Kingdom
In August 2017, EMEC, based in Orkney, Scotland, produced hydrogen gas using electricity generated from tidal energy in Orkney. This was the first time that hydrogen has been created from tidal energy anywhere in the world.
In March 2021, a proposal emerged to use offshore wind in Scotland to power converted oil and gas rigs into a "green hydrogen hub" which would supply fuel to local distilleries.
In June 2021, Equinor announced plans to triple UK hydrogen production. In March 2022 National Grid announced a project to introduce green hydrogen into the grid, with a 200 m wind turbine powering an electrolyser to produce gas for about 300 homes.
In December 2023, the UK government announced a £2 billion fund would be setup to back 11 separate projects. The then Energy Secretary, Claire Coutinho announced the funding would be invested over a 15-year period. The first allocation round would be known as HAR1. Vattenfall planned to generate green hydrogen from a test offshore wind turbine near Aberdeen in 2025.
United States
The federal Infrastructure Investment and Jobs Act, which became law in November 2021, allocated $9.5 billion to green hydrogen initiatives. In 2021, the U.S. Department of Energy (DOE) was planning the first demonstration of a hydrogen network in Texas. The department had previously attempted a hydrogen project known as Hydrogen Energy California. Texas is considered a key part of green hydrogen projects in the country as the state is the largest domestic producer of hydrogen and has a hydrogen pipeline network. In 2020, SGH2 Energy Global announced plans to use plastic and paper via plasma gasification to produce green hydrogen near Los Angeles.
In 2021 then New York governor Andrew Cuomo announced a $290 million investment to construct a green hydrogen fuel production facility. State authorities backed plans for developing fuel cells to be used in trucks and research on blending hydrogen into the gas grid. In March 2022 the governors of Arkansas, Louisiana, and Oklahoma announced the creation of a hydrogen energy hub between the states. Woodside announced plans for a green hydrogen production site in Ardmore, Oklahoma. The Inflation Reduction Act of 2022 established a 10-year production tax credit, which includes a $3.00/kg subsidy for green hydrogen.
Public-private projects
In October 2023, Siemens announced that it had successfully performed the first test of an industrial turbine powered by 100 per cent green hydrogen generated by a 1 megawatt electrolyser. The turbine also operates on gas and any mixture of gas and hydrogen.
Government support
In 2020, the European Commission adopted a dedicated strategy on hydrogen. The "European Green Hydrogen Acceleration Center" is tasked with developing a €100 billion a year green hydrogen economy by 2025.
In December 2020, the United Nations together with RMI and several companies, launched Green Hydrogen Catapult, with a goal to reduce the cost of green hydrogen below US$2 per kilogram (equivalent to $50 per megawatt hour) by 2026.
In 2021, with the support of the governments of Austria, China, Germany, and Italy, UN Industrial Development Organization (UNIDO) launched its Global Programme for Hydrogen in Industry. Its goal is to accelerate the deployment of GH2 in industry.
In 2021, the British government published its policy document, a "Ten Point Plan for a Green Industrial Revolution," which included investing to create 5 GW of low carbon hydrogen production capacity by 2030. The plan included working with industry to complete the necessary testing that would allow up to 20% blending of hydrogen into the gas distribution grid by 2023. A BEIS consultation in 2022 suggested that grid blending would only have a "limited and temporary" role due to an expected reduction in the use of natural gas.
The Japanese government planned to transform the nation into a "hydrogen society". Energy demand would require the government to import/produce 36 million tons of liquefied hydrogen. At the time, Japan's commercial imports were projected to be 100 times less than this amount by 2030, when the use of the fuel was expected to commence. Japan published a preliminary road map that called for hydrogen and related fuels to supply 10% of the power for electricity generation as well as a significant portion of the energy for uses such as shipping and steel manufacture by 2050. Japan created a hydrogen highway consisting of 135 subsidized hydrogen fuel stations and planned to construct 1,000 by the end of the 2020s.
In October 2020, the South Korean government announced its plan to introduce the Clean Hydrogen Energy Portfolio Standards (CHPS) which emphasizes the use of clean hydrogen. During the introduction of the Hydrogen Energy Portfolio Standard (HPS), it was voted on by the 2nd Hydrogen Economy Committee. In March 2021, the 3rd Hydrogen Economy Committee was held to pass a plan to introduce a clean hydrogen certification system based on incentives and obligations for clean hydrogen.
Morocco, Tunisia, Egypt and Namibia have proposed plans to include green hydrogen as a part of their climate change agenda. Namibia is partnering with European countries such as Netherlands and Germany for feasibility studies and funding.
In July 2020, the European Union unveiled the Hydrogen Strategy for a Climate-Neutral Europe. A motion backing this strategy passed the European Parliament in 2021. The plan is divided into three phases. From 2020 to 2024, the program aims to decarbonize existing hydrogen production. From 2024 to 2030, green hydrogen would be integrated into the energy system. From 2030 to 2050, large-scale deployment of hydrogen would occur. Goldman Sachs estimated hydrogen could supply up to 15% of the EU energy mix by 2050.
Six European Union member states: Germany, Austria, France, the Netherlands, Belgium and Luxembourg, requested that hydrogen funding be backed by legislation. Many member countries have created plans to import hydrogen from other nations, especially from North Africa. These plans would increase hydrogen production, but have been criticized as exporting the changes needed within Europe. The European Union required that, starting in 2021, all new gas turbines made in the bloc must be ready to burn a hydrogen–natural gas blend.
In November 2020, Chile's president presented the "National Strategy for Green Hydrogen," stating he wanted Chile to become "the most efficient green hydrogen producer in the world by 2030". The plan includes HyEx, a project to make solar based hydrogen for use in the mining industry.
Regulations and standards
In the European Union, certified 'renewable' hydrogen, defined as produced from non-biological feedstocks, requires an emission reduction of at least 70% below the fossil fuel it is intended to replace. This is distinct in the EU from 'low carbon' hydrogen, which is defined as made using fossil fuel feedstocks. For it to be certified, low carbon hydrogen must achieve at least a 70% reduction in emissions compared with the grey hydrogen it replaces.
In the United Kingdom, just one standard is proposed, for 'low carbon' hydrogen. Its threshold GHG emissions intensity of 20 g CO2-equivalent per megajoule should be easily met by renewably-powered electrolysis of water for green hydrogen production, but has been set at a level to allow for and encourage other 'low carbon' hydrogen production, principally blue hydrogen. Blue hydrogen is grey hydrogen with added carbon capture and storage, which to date has not been produced with carbon capture rates in excess of 60%. To meet the UK's threshold, its government has estimated that an 85% carbon capture rate would be necessary.
In the United States, planned tax credit incentives for green hydrogen production are to be tied to the emissions intensity of 'clean' hydrogen produced, with greater levels of support on offer for lower greenhouse gas intensities.
See also
Alternative fuel
Carbon-neutral fuel
Combined cycle hydrogen power plant
Fossil fuel phase-out
Hydrogen economy
Green methanol fuel
References
External links
Green hydrogen explainer video from Scottish Power
Emissions reduction | Green hydrogen | [
"Chemistry"
] | 4,812 | [
"Greenhouse gases",
"Emissions reduction"
] |
62,944,196 | https://en.wikipedia.org/wiki/Ethyl%20acetoxy%20butanoate | Ethyl acetoxy butanoate (EAB) is a volatile chemical compound found as a minor component of the odour profile of ripe pineapples, though in its pure form it has a smell more similar to sour yoghurt. It can be metabolized in humans into GHB, and thus can produce similar sedative effects.
It is synthesised by the reaction of gamma-butyrolactone and ethyl acetate with sodium ethoxide.
See also
1,4-Butanediol (1,4-BD)
1,6-Dioxecane-2,7-dione
Aceburic acid
gamma-Hydroxybutyraldehyde
References
Acetate esters
Butyrate esters
GABAB receptor agonists
GHB receptor agonists
Neurotransmitter precursors
Prodrugs
Sedatives | Ethyl acetoxy butanoate | [
"Chemistry"
] | 188 | [
"Chemicals in medicine",
"Prodrugs"
] |
62,946,131 | https://en.wikipedia.org/wiki/UK%20Battery%20Industrialisation%20Centre | The UK Battery Industrialisation Centre (UKBIC) is a research centre in the United Kingdom that develops new electric batteries for the British automotive industry. UKBIC provides over £60 million worth of specialized manufacturing equipment, supporting manufacturers, entrepreneurs, researchers, and educators in battery technology development. It has accelerated low-carbon R&D, contributing to the UK's net zero goal for 2050.
History
Funding for the UK Battery Industrialisation Centre (UKBIC) is supplied by United Kingdom Research and Innovation (UKRI). This financial support was announced on 29 November 2017. The facility was officially inaugurated by the British Prime Minister, Boris Johnson, in July 2021, as documented on the UKBIC's official website.
Location
The UKBIC facility is located outside Coventry, adjacent to Coventry airport and about half a mile east of the junction between the A46 and A45. This is just outside the city boundary, in the extreme north of Warwick District, Warwickshire.
References
External links
2020 establishments in England
Automotive industry in the United Kingdom
Engineering research institutes
Research institutes in Warwickshire
Warwick District | UK Battery Industrialisation Centre | [
"Engineering"
] | 219 | [
"Engineering research institutes"
] |
62,947,198 | https://en.wikipedia.org/wiki/Blood%20compatibility%20testing | Blood compatibility testing is conducted in a medical laboratory to identify potential incompatibilities between blood group systems in blood transfusion. It is also used to diagnose and prevent some complications of pregnancy that can occur when the baby has a different blood group from the mother. Blood compatibility testing includes blood typing, which detects the antigens on red blood cells that determine a person's blood type; testing for unexpected antibodies against blood group antigens (antibody screening and identification); and, in the case of blood transfusions, mixing the recipient's plasma with the donor's red blood cells to detect incompatibilities (crossmatching). Routine blood typing involves determining the ABO and RhD (Rh factor) type, and involves both identification of ABO antigens on red blood cells (forward grouping) and identification of ABO antibodies in the plasma (reverse grouping). Other blood group antigens may be tested for in specific clinical situations.
Blood compatibility testing makes use of reactions between blood group antigens and antibodies—specifically the ability of antibodies to cause red blood cells to clump together when they bind to antigens on the cell surface, a phenomenon called agglutination. Techniques that rely on antigen-antibody reactions are termed serologic methods, and several such methods are available, ranging from manual testing using test tubes or slides to fully automated systems. Blood types can also be determined through genetic testing, which is used when conditions that interfere with serologic testing are present or when a high degree of accuracy in antigen identification is required.
Several conditions can cause false or inconclusive results in blood compatibility testing. When these issues affect ABO typing, they are called ABO discrepancies. ABO discrepancies must be investigated and resolved before the person's blood type is reported. Other sources of error include the "weak D" phenomenon, in which people who are positive for the RhD antigen show weak or negative reactions when tested for RhD, and the presence of immunoglobulin G antibodies on red blood cells, which can interfere with antibody screening, crossmatching, and typing for some blood group antigens.
Medical uses
Blood compatibility testing is routinely performed before a blood transfusion. The full compatibility testing process involves ABO and RhD (Rh factor) typing; screening for antibodies against other blood group systems; and crossmatching, which involves testing the recipient's blood plasma against the donor's red blood cells as a final check for incompatibility. If an unexpected blood group antibody is detected, further testing is warranted to identify the antibody and ensure that the donor blood is negative for the relevant antigen. Serologic crossmatching may be omitted if the recipient's antibody screen is negative, there is no history of clinically significant antibodies, and their ABO/Rh type has been confirmed against historical records or against a second blood sample; and in emergencies, blood may be transfused before any compatibility testing results are available.
Blood compatibility testing is often performed on pregnant women and on the cord blood from newborn babies, because incompatibility puts the baby at risk for developing hemolytic disease of the newborn. It is also used before hematopoietic stem cell transplantation, because blood group incompatibility can be responsible for some cases of acute graft-versus-host disease.
Principles
Blood types are defined according to the presence or absence of specific antigens on the surface of red blood cells. The most important of these in medicine are the ABO and RhD antigens but many other blood group systems exist and may be clinically relevant in some situations. As of 2021, 43 blood groups are officially recognized.
People who lack certain blood group antigens on their red cells can form antibodies against these antigens. For example, a person with type A blood will produce antibodies against the B antigen. The ABO blood group antibodies are naturally occurring, meaning that they are found in people who have not been exposed to incompatible blood. Antibodies to most other blood group antigens, including RhD, develop after people are exposed to the antigens through transfusion or pregnancy. Some of these antibodies can bind to incompatible red blood cells and cause them to be destroyed, resulting in transfusion reactions and other complications.
Serologic methods for blood compatibility testing make use of these antibody-antigen reactions. In blood typing, reagents containing blood group antibodies, called antisera, are added to suspensions of blood cells. If the relevant antigen is present, the antibodies in the reagent will cause the red blood cells to agglutinate (clump together), which can be identified visually. In antibody screening, the individual's plasma is tested against a set of red blood cells with known antigen profiles; if the plasma agglutinates one of the red blood cells in the panel, this indicates that the individual has an antibody against one of the antigens present on the cells. In crossmatching, a prospective transfusion recipient's plasma is added to the donor red blood cells and observed for agglutination (or hemolysis) to detect antibodies that could cause transfusion reactions.
Blood group antibodies occur in two major forms: immunoglobulin M (IgM) and immunoglobulin G (IgG). Antibodies that are predominantly IgM, such as the ABO antibodies, typically cause immediate agglutination of red blood cells at room temperature. Therefore, a person's ABO blood type can be determined by simply adding the red blood cells to the reagent and centrifuging or mixing the sample, and in crossmatching, incompatibility between ABO types can be detected immediately after centrifugation. RhD typing also typically uses IgM reagents although anti-RhD usually occurs as IgG in the body. Antibodies that are predominantly IgG, such as those directed towards antigens of the Duffy and Kidd systems, generally do not cause immediate agglutination because the small size of the IgG antibody prevents formation of a lattice structure. Therefore, blood typing using IgG antisera and detection of IgG antibodies requires use of the indirect antiglobulin test to demonstrate IgG bound to red blood cells.
In the indirect antiglobulin test, the mixture of antiserum or plasma and red blood cells is incubated at 37°C, the ideal temperature for reactivity of IgG antibodies. After incubation, the red blood cells are washed with saline to remove unbound antibodies, and anti-human globulin reagent is added. If IgG antibodies have bound to antigens on the cell surface, anti-human globulin will bind to those antibodies, causing the red blood cells to agglutinate after centrifugation. If the reaction is negative, "check cells"—reagent cells coated with IgG—are added to ensure that the test is working correctly. If the test result is indeed negative, the check cells should react with the unbound anti-human globulin and demonstrate agglutination.
Blood typing
ABO and Rh typing
In ABO and Rh typing, reagents containing antibodies against the A, B, and RhD antigens are added to suspensions of blood cells. If the relevant antigen is present, the red blood cells will demonstrate visible agglutination (clumping). In addition to identifying the ABO antigens, which is termed forward grouping, routine ABO blood typing also includes identification of the ABO antibodies in the person's plasma. This is called reverse grouping, and it is done to confirm the ABO blood type. In reverse grouping, the person's plasma is added to type A1 and type B red blood cells. The plasma should agglutinate the cells that express antigens that the person lacks, while failing to agglutinate cells that express the same antigens as the patient. For example, the plasma of someone with type A blood should react with type B red cells, but not with A1 cells. If the expected results do not occur, further testing is required. Agglutination is scored from 1+ to 4+ based on the strength of the reaction. In ABO typing, a score of 3+ or 4+ indicates a positive reaction, while a score of 1+ or 2+ is inconclusive and requires further investigation.
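The forward/reverse cross-check lends itself to a simple lookup. The sketch below (Python; the data layout and the handling of discrepancies are illustrative assumptions, not a laboratory standard) reports a type only when both groupings agree:

```python
# Minimal sketch of ABO interpretation from forward and reverse grouping.
# Reactions are reduced to True/False; real typing also grades strength (1+ to 4+).

FORWARD = {  # (anti-A reagent reactive, anti-B reagent reactive) -> ABO type
    (True, False): "A", (False, True): "B",
    (True, True): "AB", (False, False): "O",
}
REVERSE = {  # (A1 cells reactive, B cells reactive) -> ABO type
    (False, True): "A",    # plasma contains anti-B
    (True, False): "B",    # plasma contains anti-A
    (False, False): "AB",  # no ABO antibodies expected
    (True, True): "O",     # both anti-A and anti-B
}

def abo_type(anti_a, anti_b, a1_cells, b_cells):
    forward = FORWARD[(anti_a, anti_b)]
    reverse = REVERSE[(a1_cells, b_cells)]
    # A mismatch is an ABO discrepancy and must be resolved before reporting.
    return forward if forward == reverse else None

assert abo_type(True, False, False, True) == "A"   # typical type A pattern
assert abo_type(True, False, True, True) is None   # discrepancy, e.g. A2 with anti-A1
```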
Other blood group systems
Prior to receiving a blood transfusion, individuals are screened for the presence of antibodies against antigens of non-ABO blood group systems. Blood group antigens besides ABO and RhD that are significant in transfusion medicine include the RhC/c and E/e antigens and the antigens of the Duffy, Kell, Kidd, and MNS systems. If a clinically significant antibody is identified, the recipient must be transfused with blood that is negative for the corresponding antigen to prevent a transfusion reaction. This requires the donor units to be typed for the relevant antigen. The recipient may also be typed for the antigen to confirm the identity of the antibody, as only individuals who are negative for a blood group antigen should produce antibodies against it.
In Europe, females who require blood transfusions are often typed for the Kell and extended Rh antigens to prevent sensitization to these antigens, which could put them at risk for developing hemolytic disease of the newborn during pregnancy. The American Society of Hematology recommends that people with sickle cell disease have their blood typed for the RhC/c, RhE/e, Kell, Duffy, Kidd, and MNS antigens prior to transfusion, because they often require transfusions and may become sensitized to these antigens if transfused with mismatched blood. Extended red blood cell phenotyping is also recommended for people with beta-thalassemia. Blood group systems other than ABO and Rh have a relatively small risk of complications when blood is mixed, so in emergencies such as major hemorrhage, the urgency of transfusion can exceed the need for compatibility testing against other blood group systems (and potentially Rh as well).
Antibody screening and identification
Antibodies to most blood group antigens besides those of the ABO system develop after exposure to incompatible blood. Such "unexpected" blood group antibodies are only found in 0.8–2% of people; however, recipients of blood transfusions must be screened for these antibodies to prevent transfusion reactions. Antibody screening is also performed as part of prenatal care, because antibodies against RhD and other blood group antigens can cause hemolytic disease of the newborn, and because Rh-negative mothers who have developed an anti-RhD antibody are not eligible to receive Rho(D) immune globulin (Rhogam).
In the antibody screening procedure, an individual's plasma is added to a panel of two or three sets of red blood cells which have been chosen to express most clinically significant blood group antigens. Only group O cells are used in antibody screening, as otherwise the cells would react with the naturally occurring ABO blood group antibodies. The mixture of plasma and red cells is incubated at 37°C and tested via the indirect antiglobulin test. Some antibody screening and identification protocols incorporate a phase of testing after incubation at room temperature, but this is often omitted because most unexpected antibodies that react at room temperature are clinically insignificant.
Agglutination of the screening cells by the plasma, with or without the addition of anti-human globulin, indicates that an unexpected blood group antibody is present. If this occurs, further testing using more cells (usually 10–11) is necessary to identify the antibody. By examining the antigen profiles of the red blood cells the person's plasma reacts with, it is possible to determine the antibody's identity. An "autocontrol", in which the individual's plasma is tested against their own red cells, is included to determine whether the agglutination is due to an alloantibody (an antibody against a foreign antigen), an autoantibody (an antibody against one's own antigens), or another interfering substance.
The following worked example shows the interpretation of an antibody panel used in serology to detect antibodies towards the most relevant blood group antigens. Each row represents "reference" or "control" red blood cells of donors which have known antigen compositions and are ABO group O. The + symbol means that the antigen is present on the reference red blood cells, and 0 means it is absent; nt means "not tested". The "result" column to the right displays reactivity when mixing reference red blood cells with plasma from the patient in 3 different phases: room temperature, 37°C and AHG (with anti-human globulin, by the indirect antiglobulin test).
Step 1: Annotated in blue: Start by excluding antigens that show no reaction in all 3 phases. Look at the first reference cell row with no reaction (0 in the column at right, in this case cell donor 2) and exclude (here marked by X) each present antigen whose antithetical partner is either practically non-existent (such as for D) or absent, meaning the antigen is expressed homozygously (in this case homozygous c). When both partners are + (heterozygous cases), both are excluded (here marked by X), except for the C/c, E/e, Duffy, Kidd and MNS antigens, where antibodies of the patient may still react only towards blood cells with homozygous antigen expression, because homozygous expression results in a higher dosage of the antigen. Thus, in this case, E/e is not excluded in this row, while K/k is, as well as Jsb (regardless of what Jsa would have shown).
Step 2: Annotated in brown: Going to the next reference cell row with a negative reaction (in this case cell donor 4), and repeating for each antigen type that is not already excluded.
Step 3: Annotated in purple. Repeating the same for each reference cell row with negative reaction.
Step 4: Discounting antigens that were absent in all or almost all reactive cases (here marked with \). These are often antigens with low prevalence, and while there is a possibility of such antibodies being produced, they are generally not the type that is responsible for the reactivity at hand.
Step 5: Comparing the remaining possible antigens for a most likely culprit (in this case Fya), and selectively ruling out significant differential antigens, such as by testing an additional donor cell type known to lack Fya but to carry C and Jka.
In this case, the antibody panel shows that anti-Fya antibodies are present. This indicates that donor blood typed to be negative for the Fya antigen must be used. Still, if a subsequent cross-matching shows reactivity, additional testing should be done against previously discounted antigens (in this case potentially E, K, Kpa and/or Lua).
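The rule-out steps above can be expressed as a short program. The following Python sketch assumes a simplified panel representation (antigen name mapped to homozygous/heterozygous/absent) and applies only the exclusion logic of steps 1–4; choosing the single most likely antibody (step 5) still requires judgment:

```python
# Sketch of antibody panel exclusion (steps 1-4 above), under simplifying assumptions.
# Each panel cell maps antigen -> "hom" (homozygous), "het" (heterozygous) or None (absent).

# Antigens showing dosage: heterozygous cells may react too weakly to rule out.
DOSAGE = {"C", "c", "E", "e", "Fya", "Fyb", "Jka", "Jkb", "M", "N", "S", "s"}

def rule_out(panel):
    """panel: list of (antigens, reactive) pairs; returns antigens not excluded."""
    candidates = {a for antigens, _ in panel for a, z in antigens.items() if z}
    for antigens, reactive in panel:
        if reactive:
            continue  # only non-reactive cells can exclude antigens
        for antigen, zygosity in antigens.items():
            if zygosity == "hom" or (zygosity == "het" and antigen not in DOSAGE):
                candidates.discard(antigen)
    return candidates

panel = [
    ({"D": "hom", "Fya": "het", "K": "het"}, True),             # reactive cell
    ({"D": "hom", "c": "hom", "E": "het", "K": "het"}, False),  # non-reactive cell
]
print(rule_out(panel))  # D, c and K are excluded; Fya and E remain candidates
```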
When multiple antibodies are present, or when an antibody is directed against a high-frequency antigen, the normal antibody panel procedure may not provide a conclusive identification. In these cases, hemagglutination inhibition can be used, wherein a neutralizing substance cancels out a specific antigen. Alternatively, the plasma may be incubated with cells of known antigen profiles in order to remove a specific antibody (a process termed adsorption); or the cells can be treated with enzymes such as ficain or papain which inhibit the reactivity of some blood group antibodies and enhance others. The effect of ficain and papain on major blood group systems is as follows:
Enhanced: ABO, Rh, Kidd, Lewis, P1, Ii
Destroyed: Duffy (Fya and Fyb), Lutheran, MNS
Unaffected: Kell
People who have tested positive for an unexpected blood group antibody in the past may not exhibit a positive reaction on subsequent testing; however, if the antibody is clinically significant, they must be transfused with antigen-negative blood regardless.
Crossmatching
Crossmatching, which is routinely performed before a blood transfusion, involves adding the recipient's blood plasma to a sample of the donor's red blood cells. If the blood is incompatible, the antibodies in the recipient's plasma will bind to antigens on the donor red blood cells. This antibody-antigen reaction can be detected through visible clumping or destruction of the red blood cells, or by reaction with anti-human globulin, after centrifugation.
If the transfusion recipient has a negative antibody screen and no history of antibodies, an "immediate spin" crossmatch is often performed: the red blood cells and plasma are centrifuged immediately after mixing as a final check for incompatibility between ABO blood types. If a clinically significant antibody is detected (or was in the past), or if the immediate spin crossmatch demonstrates incompatibility, a "full" or "IgG crossmatch" is performed, which uses the indirect antiglobulin test to detect blood group incompatibility caused by IgG antibodies. The IgG crossmatching procedure is more lengthy than the immediate spin crossmatch, and in some cases may take more than two hours.
Individuals who have a negative antibody screen and no history of antibodies may also undergo an "electronic crossmatch", provided that their ABO and Rh type has been determined from the current blood sample and that the results of another ABO/Rh type are on record. In this case, the recipient's blood type is simply compared against that of the donor blood, without any need for serologic testing. In emergencies, blood may be issued before crossmatching is complete.
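The decision logic described in the preceding paragraphs can be condensed into a few lines; the Python sketch below is an illustrative summary, not a transfusion-service protocol:

```python
# Sketch of crossmatch selection based on the criteria described above.
def crossmatch_strategy(screen_positive: bool, antibody_history: bool,
                        abo_rh_confirmed_twice: bool) -> str:
    if screen_positive or antibody_history:
        # Clinically significant antibodies (current or past) require the
        # full antiglobulin crossmatch, which may take over two hours.
        return "full IgG (antiglobulin) crossmatch"
    if abo_rh_confirmed_twice:
        # No antibodies and a doubly confirmed ABO/Rh type permit an
        # electronic crossmatch with no serologic testing.
        return "electronic crossmatch"
    return "immediate spin crossmatch"

print(crossmatch_strategy(False, False, True))  # electronic crossmatch
```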
Methods
Tube and slide methods
Blood typing can be performed using test tubes, microplates, or blood typing slides. The tube method involves mixing a suspension of red blood cells with antisera (or plasma, for reverse grouping) in a test tube. The mixture is centrifuged to separate the cells from the reagent, and then resuspended by gently agitating the tube. If the antigen of interest is present, the red blood cells agglutinate, forming a solid clump in the tube. If it is absent, the red blood cells go back into suspension when mixed. The microplate method is similar to the tube method, except rather than using individual test tubes, blood typing is carried out in a plate containing dozens of wells, allowing multiple tests to be performed at the same time. The agglutination reactions are read after the plate is centrifuged.
Antibody screening and identification can also be carried out by the tube method. In this procedure, the plasma and red cells are mixed together in a tube containing a medium that enhances agglutination reactions, such as low ionic strength saline (LISS). The tubes are incubated at body temperature for a defined period of time, then centrifuged and examined for agglutination or hemolysis; first immediately following the incubation period, and then after washing and addition of anti-human globulin reagent. Crossmatching, likewise, may be performed by the tube method; the reactions are read immediately after centrifugation in the immediate spin crossmatch, or after incubation and addition of AHG in the full crossmatching procedure.
The slide method for blood typing involves mixing a drop of blood with a drop of antisera on a slide. The slide is tilted to mix the cells and reagents together and then observed for agglutination, which indicates a positive result. This method is typically used in under-resourced areas or emergency situations; otherwise, alternative methods are preferred.
Column agglutination
Column agglutination techniques for blood compatibility testing (sometimes called the "gel test") use cards containing columns of dextran-polyacrylamide gel. Cards designed for blood typing contain pre-dispensed blood typing reagents for forward grouping, and wells containing only a buffer solution, to which reagent red blood cells and plasma are added, for reverse grouping. Antibody screening and crossmatching can also be carried out by column agglutination, in which case cards containing anti-human globulin reagent are used. The gel cards are centrifuged (sometimes after incubation, depending on the test), during which red blood cell agglutinates become trapped at the top of the column because they are too large to migrate through the gel. Cells that have not agglutinated collect on the bottom. Therefore, a line of red blood cells at the top of the column indicates a positive result. The strength of positive reactions is scored from 1+ to 4+ depending on how far the cells have travelled through the gel. The gel test has advantages over manual methods in that it eliminates the variability associated with manually re-suspending the cells and that the cards can be kept as a record of the test. The column agglutination method is used by some automated analyzers to perform blood typing automatically. These analyzers pipette red blood cells and plasma onto gel cards, centrifuge them, and scan and read the agglutination reactions to determine the blood type.
Solid-phase assay
Solid-phase assays (sometimes called the "antigen capture" method) use reagent antigens or antibodies affixed to a surface (usually a microplate). Microplate wells coated with anti-A, -B and -D reagents are used for forward grouping. The test sample is added and the microplate is centrifuged; in a positive reaction, the red blood cells adhere to the surface of the well. Some automated analyzers use solid phase assays for blood typing.
Genotyping
Genetic testing can be used to determine a person's blood type in certain situations where serologic testing is insufficient. For example, if a person has been transfused with large volumes of donor blood, the results of serologic testing will reflect the antigens on the donor cells and not the person's actual blood type. Individuals who produce antibodies against their own red blood cells or who are treated with certain drugs may show spurious agglutination reactions in serologic testing, so genotyping may be necessary to determine their blood type accurately. Genetic testing is required for typing red blood cell antigens for which no commercial antisera are available.
The AABB recommends RhD antigen genotyping for women with serologic weak D phenotypes who have the potential to bear children. This is because some people with weak D phenotypes can produce antibodies against the RhD antigen, which can cause hemolytic disease of the newborn, while others cannot. Genotyping can identify the specific type of weak D antigen, which determines the potential for the person to produce antibodies, thus avoiding unnecessary treatment with Rho(D) immune globulin. Genotyping is preferred to serologic testing for people with sickle cell disease, because it is more accurate for certain antigens and can identify antigens that cannot be detected by serologic methods.
Genotyping is also used in prenatal testing for hemolytic disease of the newborn. When a pregnant woman has a blood group antibody that can cause HDN, the fetus can be typed for the relevant antigen to determine if it is at risk of developing the disease. Because it is impractical to draw blood from the fetus, the blood type is determined using an amniocentesis sample or cell-free fetal DNA isolated from the mother's blood. The father may also be genotyped to predict the risk of hemolytic disease of the newborn, because if the father is homozygous for the relevant antigen (meaning having two copies of the gene) the baby will be positive for the antigen and thus at risk of developing the disease. If the father is heterozygous (having only one copy), the baby only has a 50% chance of being positive for the antigen.
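The paternal-genotype reasoning reduces to simple Mendelian arithmetic; the sketch below is purely illustrative:

```python
# Probability that the fetus carries the antigen, given the father's genotype
# and assuming the mother is antigen-negative (she has formed the antibody).
def fetal_antigen_risk(father: str) -> float:
    return {"homozygous": 1.0, "heterozygous": 0.5}[father]

print(fetal_antigen_risk("heterozygous"))  # 0.5
```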
Limitations
ABO discrepancies
In ABO typing, the results of the forward and reverse grouping should always correspond with each other. An unexpected difference between the two results is termed an ABO discrepancy, and must be resolved before the person's blood type is reported.
Forward grouping
Weak reactions in the forward grouping may occur in people who belong to certain ABO subgroups—variant blood types characterized by decreased expression of the A or B antigens or changes in their structure. Weakened expression of ABO antigens may also occur in leukemia and Hodgkin's lymphoma. Weak reactions in forward grouping can be strengthened by incubating the blood and reagent mixture at room temperature or 4°C, or by using certain enzymes to enhance the antigen-antibody reactions.
Occasionally, two populations of red blood cells are apparent after reaction with the blood typing antisera. Some of the red blood cells are agglutinated, while others are not, making it difficult to interpret the result. This is called a mixed field reaction, and it can occur if someone has recently received a blood transfusion with a different blood type (as in a type A patient receiving type O blood), if they have received a bone marrow or stem cell transplant from someone with a different blood type, or in patients with certain ABO subgroups, such as A3. Investigation of the person's medical history can clarify the cause of the mixed field reaction.
People with cold agglutinin disease produce antibodies against their own red blood cells that cause them to spontaneously agglutinate at room temperature, leading to false positive reactions in forward grouping. Cold agglutinins can usually be deactivated by warming the sample to 37°C and washing the red blood cells with saline. If this is not effective, dithiothreitol can be used to destroy the antibodies.
Cord blood samples may be contaminated with Wharton's jelly, a viscous substance that can cause red blood cells to stick together, mimicking agglutination. Wharton's jelly can be removed by thoroughly washing the red blood cells.
In a rare phenomenon known as "acquired B antigen", a patient whose true blood type is A may show a weak positive result for B in the forward grouping. This condition, which is associated with gastrointestinal diseases such as colon cancer and intestinal obstruction, results from conversion of the A antigen to a structure mimicking the B antigen by bacterial enzymes. Unlike the true B antigen, acquired B antigen does not react with anti-B reagents within a certain pH range.
Reverse grouping
Infants under 3 to 6 months of age exhibit missing or weak reactions in reverse grouping because they produce very low levels of ABO antibodies. Therefore, reverse grouping is generally not performed for this age group. Elderly people may also exhibit decreased antibody production, as may people with hypogammaglobulinemia. Weak reactions can be strengthened by allowing the plasma and red cells to incubate at room temperature for 15 to 30 minutes, and if this is not effective, they can be incubated at 4°C.
Approximately 20% of individuals with the blood type A or AB belong to a subgroup of A, termed A2, while the more common subgroup, encompassing approximately 80% of individuals, is termed A1. Because of small differences in the structure of the A1 and A2 antigens, some individuals in the A2 subgroup can produce an antibody against A1. Therefore, these individuals will type as A or AB in the forward grouping, but will exhibit an unexpected positive reaction with the type A1 red cells in the reverse grouping. The discrepancy can be resolved by testing the person's red blood cells with an anti-A1 reagent, which will give a negative result if the patient belongs to the A2 subgroup. Anti-A1 antibodies are considered clinically insignificant unless they react at 37°C. Other subgroups of A exist, as well as subgroups of B, but they are rarely encountered.
If high levels of protein are present in a person's plasma, a phenomenon known as rouleaux may occur when their plasma is added to the reagent cells. Rouleaux causes red blood cells to stack together, which can mimic agglutination, causing a false positive result in the reverse grouping. This can be avoided by removing the plasma, replacing it with saline, and re-centrifuging the tube. Rouleaux will disappear once the plasma is replaced with saline, but true agglutination will persist.
Antibodies to blood group antigens other than A and B may react with the reagent cells used in reverse grouping. If a cold-reacting autoantibody is present, the false positive result can be resolved by warming the sample to 37°C. If the result is caused by an alloantibody, an antibody screen can be performed to identify the antibody, and the reverse grouping can be performed using samples that lack the relevant antigen.
Weak D phenotype
Approximately 0.2 to 1% of people have a "weak D" phenotype, meaning that they are positive for the RhD antigen, but exhibit weak or negative reactions with some anti-RhD reagents due to decreased antigen expression or atypical variants of antigen structure. If routine serologic testing for RhD results in a score of 2+ or less, the antiglobulin test can be used to demonstrate the presence of RhD. Weak D testing is also performed on blood donors who initially type as RhD negative. Historically, blood donors with weak D were treated as Rh positive and patients with weak D were treated as Rh negative in order to avoid potential exposure to incompatible blood. Genotyping is increasingly used to determine the molecular basis of weak D phenotypes, as this determines whether or not individuals with weak D can produce antibodies against RhD or sensitize others to the RhD antigen.
Red cell antibody sensitization
The indirect antiglobulin test, which is used for weak D testing and typing of some red blood cell antigens, detects IgG bound to red blood cells. If IgG is bound to red blood cells in vivo, as may occur in autoimmune hemolytic anemia, hemolytic disease of the newborn and transfusion reactions, the indirect antiglobulin test will always give a positive result, regardless of the presence of the relevant antigen. A direct antiglobulin test can be performed to demonstrate that the positive reaction is due to sensitization of red cells.
Other pretransfusion testing
Some groups of people have specialized transfusion requirements. Fetuses, very low-birth-weight infants, and immunocompromised people are at risk for developing severe infection with cytomegalovirus (CMV)―an opportunistic pathogen for which approximately 50% of blood donors test positive―and may be transfused with CMV-negative blood to prevent infection. Those who are at risk of developing graft-versus-host disease, such as bone marrow transplant recipients, receive blood that has been irradiated to inactivate the T lymphocytes that are responsible for this reaction. People who have had serious allergic reactions to blood transfusions in the past may be transfused with blood that has been "washed" to remove plasma. The history of the patient is also examined to see if they have previously identified antibodies and any other serological anomalies.
A direct antiglobulin test (Coombs test) is also performed as part of the antibody investigation.
Donor blood is generally screened for transfusion-transmitted infections such as HIV. As of 2018, the World Health Organization reported that nearly 100% of blood donations in high- and upper-middle-income countries underwent infectious disease screening, but the figures for lower-middle-income and low-income countries were 82% and 80.3% respectively.
History
In 1901, Karl Landsteiner published the results of an experiment in which he mixed the serum and red blood cells of five different human donors. He observed that a person's serum never agglutinated their own red blood cells, but it could agglutinate others', and based on the agglutination reactions the red cells could be sorted into three groups: group A, group B, and group C. Group C, which consisted of red blood cells that did not react with any person's plasma, would later be known as group O. A fourth group, now known as AB, was described by Landsteiner's colleagues in 1902. This experiment was the first example of blood typing.
In 1945, Robin Coombs, A.E. Mourant and R.R. Race published a description of the antiglobulin test (also known as the Coombs test). Previous research on blood group antibodies had documented the presence of so-called "blocking" or "incomplete" antibodies: antibodies that occupied antigen sites, preventing other antibodies from binding, but did not cause red blood cells to agglutinate. Coombs and his colleagues devised a method to easily demonstrate the presence of these antibodies. They injected human immunoglobulins into rabbits, which caused them to produce an anti-human globulin antibody. The anti-human globulin could bind to antibodies already attached to red blood cells and cause them to agglutinate. The invention of the antiglobulin test led to the discovery of many more blood group antigens. By the early 1950s, companies had begun producing commercial antisera for special antigen testing.
Notes
References
Blood tests
Transfusion medicine
Immunologic tests | Blood compatibility testing | [
"Chemistry",
"Biology"
] | 7,006 | [
"Blood tests",
"Chemical pathology",
"Immunologic tests"
] |
62,948,318 | https://en.wikipedia.org/wiki/Uri%20Sivan | Uri Sivan (אורי סיון; born 1955) is an Israeli physicist who is the 17th president of the Technion – Israel Institute of Technology. He is also the holder of the Bertoldo Badler Chair in the Technion's Faculty of Physics.
Biography
Uri Sivan's parents immigrated to Mandatory Palestine from Poland in 1936. They studied at the Technion – Israel Institute of Technology after being banned from European universities because they were Jewish.
Sivan served as a pilot in the Israeli Air Force.
Sivan has a BSc in Physics and Mathematics, and an MSc and PhD in Physics from Tel Aviv University.
Sivan lives in Haifa, Israel. He is married and has three children.
Academic career
In 1991, after three years at IBM’s T. J. Watson Research Center in New York State, Sivan joined the Faculty of Physics at the Technion – Israel Institute of Technology, and became the holder of the Bertoldo Badler Chair.
Sivan set up and led the Russell Berrie Nanotechnology Research Institute at Technion from 2005 to 2010, and in 2017 he set up the National Advisory Committee for Quantum Science and Technology of the Council for Higher Education's Planning and Budgeting Committee. In 2022, Israel's second astronaut carried into space the nano-bible created by Sivan, a 0.5-square-millimeter silicon nanochip containing 1.2 million letters.
In September 2019, Sivan became the 17th President of the Technion – Israel Institute of Technology, replacing Peretz Lavie.
Awards and recognition
Sivan was awarded the Israel Academy of Sciences Bergmann Prize, the Mifal Hapais Landau Prize for the Sciences and Research, the Rothschild Foundation Bruno Prize, the Technion's Hershel Rich Innovation Award, and the Taub Award for Excellence in Research.
References
Scientists from Haifa
20th-century Israeli physicists
Academic staff of Technion – Israel Institute of Technology
Tel Aviv University alumni
Jewish physicists
Living people
Israeli people of Polish-Jewish descent
Technion – Israel Institute of Technology presidents
Quantum physicists
Physics educators
Israeli Air Force personnel
21st-century Israeli physicists
1955 births | Uri Sivan | [
"Physics"
] | 453 | [
"Quantum physicists",
"Quantum mechanics"
] |
60,815,736 | https://en.wikipedia.org/wiki/Kink%20%28materials%20science%29 | Kinks are deviations of a dislocation defect along its glide plane. In edge dislocations, the constant glide plane allows short regions of the dislocation to turn, converting into screw dislocations and producing kinks. Screw dislocations have rotatable glide planes, thus kinks that are generated along screw dislocations act as an anchor for the glide plane. Kinks differ from jogs in that kinks are strictly parallel to the glide plane, while jogs shift away from the glide plane.
Energy
Pure edge and pure screw dislocations are conceptually straight, which minimizes their length and, through it, the strain energy of the system. Low-angle mixed dislocations, on the other hand, can be thought of as primarily edge dislocations with screw kinks in a staircase structure (or vice versa), switching between straight pure-edge and pure-screw dislocation segments. In reality, kinks are not sharp transitions. Both the total length of the dislocation and the kink angle depend on the free energy of the system. The primary dislocation segments lie in Peierls-Nabarro potential minima, while the kink requires additional energy in the form of an energy peak. To minimize free energy, the kink equilibrates at a certain length and angle. Large energy peaks create short but sharp kinks in order to minimize dislocation length within the high-energy region, while small energy peaks create long and drawn-out kinks in order to minimize total dislocation length.
Kink movement
Kinks facilitate the movement of dislocations along their glide planes under shear stress and are directly responsible for the plastic deformation of crystals. When a crystal undergoes shear force, e.g. when cut with scissors, the applied shear causes dislocations to move through the material, displacing atoms and deforming the material. The entire dislocation does not move at once – rather, the dislocation produces a pair of kinks, which then propagate in opposite directions down the length of the dislocation, eventually shifting the entire dislocation by a Burgers vector. The velocity of dislocations moving by kink propagation is also limited by the nucleation frequency of kinks, as a lack of kinks compromises the mechanism by which dislocations move.
As the shear stress approaches infinity, the velocity at which dislocations migrate is limited by the physical properties of the material, maximizing at the material's sound velocity. At lower shear stresses, the dislocation velocity relates exponentially to the applied shear stress:

v = A\, e^{-D/\tau}

where

\tau is the applied shear stress

A and D are experimentally found constants

The above equation gives the upper limit on dislocation velocity. The interactions of the moving dislocation with its environment, particularly with other defects such as jogs and precipitates, result in drag and slow down the dislocation:

v = \frac{\tau b}{B}

where

b is the magnitude of the Burgers vector

B is the drag parameter of the crystal
Kink movement is strongly dependent on temperature as well. Higher thermal energy assists in the generation of kinks, as well as increasing atomic vibrations and promoting dislocation motion.
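To illustrate the exponential law, the snippet below evaluates v = A·e^(−D/τ) for a few stresses; A and D here are hypothetical placeholder values chosen only to show the saturation of v toward the upper limit A, not measured constants for any material:

```python
import math

A = 3.0e3   # m/s; on the order of a sound velocity (assumed value)
D = 5.0e7   # Pa; fitting constant (assumed value)

for tau in (1e7, 5e7, 2e8, 1e9):  # applied shear stress in Pa
    v = A * math.exp(-D / tau)
    print(f"tau = {tau:.0e} Pa  ->  v = {v:8.1f} m/s")
# v climbs from ~20 m/s toward the limiting value A as tau grows.
```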
Kinks may also form under compressive stress due to the buckling of crystal planes into a cavity. At high compressive forces, masses of dislocations move at once. Kinks align with each other, forming walls of kinks that propagate all at once. At sufficient forces, the tensile force produced by the dislocation core exceeds the fracture stress of the material, combining kink boundaries into sharp kinks and de-laminating the basal planes of the crystal.
References
Crystallographic defects | Kink (materials science) | [
"Chemistry",
"Materials_science",
"Engineering"
] | 767 | [
"Crystallographic defects",
"Crystallography",
"Materials degradation",
"Materials science"
] |
60,816,692 | https://en.wikipedia.org/wiki/Milan%20Mrksich | Milan Mrksich (born 15 August 1968) is an American chemist. He is the Henry Wade Rogers Professor at Northwestern University with appointments in chemistry, biomedical engineering and cell & developmental biology.
He also served as both the founding director of the Center for Synthetic Biology and as an associate director of the Robert H. Lurie Comprehensive Cancer Center at Northwestern. Mrksich also served as the Vice President for Research of Northwestern University.
His research involves the chemistry and synthesis of surfaces that contact biological environments. His laboratory has pioneered several technologies, including strategies to integrate living cells with microelectronic devices, methods to enable high throughput assays for drug discovery, and approaches to making synthetic fusion proteins for applications as therapeutics. Most notably, he developed the SAMDI-MS biochip technology that allows for high-throughput quantification of surface-based biochemical assays using MALDI mass spectrometry. Through SAMDI-MS, Mrksich has become a leader in using label-free technology for drug discovery, founding the company SAMDI Tech in 2011 that primarily serves global pharmaceutical companies. His work has been described in over 240 publications (h-index 98), 500 invited talks, and 18 patents.
Early life and education
Milan Mrksich was born on August 15, 1968, to Serbian immigrants and raised in Justice, Illinois. He graduated from the University of Illinois at Urbana-Champaign in 1989 with a B.S. in chemistry, working in the laboratory of Steven Zimmerman on molecular tweezers. He completed his PhD in organic chemistry in 1994 at Caltech under chemist Peter B. Dervan. After graduate school, he was an American Chemical Society postdoctoral fellow at Harvard University under chemist George M. Whitesides before joining the faculty at the University of Chicago in 1996. He worked there for 15 years before joining the faculty at Northwestern University in 2011.
Research history
Early career
Early on as an independent investigator, Mrksich developed and executed the concept of dynamic substrates for cell culture. Here, self-assembled monolayers (SAMs) present cell adhesive ligands with perfect control over density and orientation against a non-adhesive, inert background, such as ethylene glycol. These monolayers can be further modified with electroactive groups that selectively release immobilized ligand when stimulated with an electric potential. Several strategies using this approach were studied in the context of cell signaling, migration, and co-culture. Subsequent cell-based work focused on developing methods to pattern cells on the aforementioned SAMs. The work has mostly utilized microcontact printing to confine adherent cells into defined positions, shapes, and sizes. Ultimately, his group's work has revealed examples of how cellular mechanics and cytoskeletal structure influence phenotype. A primary example of this involved investigating how cell shape exerts control over the differentiation of mesenchymal stem cells. Further work utilized these patterned monolayers to investigate the relationship between various cytoskeletal elements and to observe complex phenotypic differences in patient-derived neuroprogenitor cells. Recent work in the group investigating cell patterning has utilized photoactive adhesive peptides, allowing for local, spatiotemporal control of cell adhesion to study gap junction formation.
SAMDI-MS
While performing much of the early dynamic substrate and cell patterning work, Mrksich also pioneered an assay platform that utilizes SAMs of alkanethiolates on gold. The monolayers contain capture ligands (e.g. biotin or maleimide) that can selectively immobilize a peptide of interest. Subsequently, the monolayer can be treated with a specific enzyme or a complex mixture, such as cell lysate, that can modify the peptide through various biological processes (e.g. phosphorylation). For quality control, the monolayers present these peptides against a background of tri(ethylene glycol) groups to prevent the nonspecific adsorption of protein to the surface that could obfuscate the reaction signal and, therefore, enable quantitative and reproducible assays. Most significantly, the monolayers can be characterized with MALDI mass spectrometry in a technique known as SAMDI-MS, which provides the masses of the substituted alkanethiolates and, therefore, the mass change of the immobilized peptide that results from enzyme activity. The method is compatible with standard array formats and liquid handling robotics, allowing a throughput in the tens of thousands of reactions per day. Importantly, the matrix-assisted laser desorption time-of-flight mass spectrometry (MALDI-TOF) analysis provides a fast and quantitative mass shift readout without the need for labels.
Megamolecules
Most recently, Mrksich's group has focused on developing a technique for assembling large molecular structures with perfectly defined structures and orientations, known as Megamolecules. This is primarily done through use of fusion proteins and irreversible inhibitor linkers that assemble stable intermediates. Structure-function relationships, including synthesis of cyclic and antibody-mimic structures have been investigated for potential therapeutic application.
Entrepreneurship
Mrksich has been an active entrepreneur over the past twenty years. He co-founded SAMDI Tech in 2011, which uses his label-free assay technology to perform high throughput screens for pharmaceutical companies. SAMDI Tech entered into a partnership with Charles River Laboratories in 2018 and was purchased by CRL in 2023. Mrksich also co-founded WMR Biomedical in 2008, with George Whitesides and Carmichael Roberts to develop resorbable stent materials; this company was renamed Lyra Therapeutics and had an IPO in 2020 (NASDAQ LYRA) and has drug-eluting stents in clinical trials for ear, nose and throat disease, including chronic rhinosinusitis. Mrksich has recently founded ModuMab Therapeutics, which applies his megamolecule technology to creating antibody mimics for a broad range of diseases.
Service
Mrksich has also been active in serving the scientific community in a number of roles. These include his current service as the Scientific Director of the Searle Scholars Program, as a member of the Board of Governors for Argonne National Laboratory, and as a member of the Board of Directors for the Camille & Henry Dreyfus Foundation. His past appointments include serving on and chairing DARPA’s Defense Sciences Research Council and many program advisory committees.
Awards and honors
Personal life
Milan lives in Hinsdale, Illinois with his two children.
References
American bioengineers
Scientists from Chicago
Northwestern University faculty
University of Illinois Urbana-Champaign alumni
California Institute of Technology alumni
American Chemical Society
University of Chicago faculty
People from Hinsdale, Illinois
Living people
1968 births
American people of Serbian descent | Milan Mrksich | [
"Physics",
"Chemistry"
] | 1,403 | [
"Monolayers",
"American Chemical Society",
"Atoms",
"Matter"
] |
60,819,688 | https://en.wikipedia.org/wiki/Skin%20temperature%20%28atmosphere%29 | The skin temperature of an atmosphere is the temperature of a hypothetical thin layer high in the atmosphere that is transparent to incident solar radiation and partially absorbing of infrared radiation from the planet. It provides an approximation for the temperature of the tropopause on terrestrial planets with greenhouse gases present in their atmospheres.
The skin temperature of an atmosphere should not be confused with the surface skin temperature, which is more readily measured by satellites, and depends on the thermal emission at the surface of a planet.
Background
The concept of a skin temperature builds on a radiative-transfer model of an atmosphere, in which the atmosphere of a planet is divided into an arbitrary number of layers. Each layer is transparent to the visible radiation from the Sun but acts as a blackbody in the infrared, fully absorbing and fully re-emitting infrared radiation originating from the planet's surface and from other atmospheric layers. Layers are warmer near the surface and colder at higher altitudes. If the planet's atmosphere is in radiative equilibrium, then the uppermost of these opaque layers should radiate infrared radiation upwards with a flux equal to the incident solar flux. The uppermost opaque layer (the emission level) will thus radiate as a blackbody at the planet's equilibrium temperature.
The skin layer of an atmosphere references a layer far above the emission level, at a height where the atmosphere is extremely diffuse. As a result, this thin layer is transparent to solar (visible) radiation and translucent to planetary/atmospheric (infrared) radiation. In other words, the skin layer acts as a graybody, because it is not a perfect absorber/emitter of infrared radiation. Instead, most of the infrared radiation coming from below (i.e. from the emission level) will pass through the skin layer, with only a small fraction being absorbed, resulting in a cold skin layer.
Derivation
Consider a thin layer of gas high in the atmosphere with some absorptivity (i.e. the fraction of incoming energy that is absorbed), ε. If the emission layer has some temperature Teq, the total flux reaching the skin layer from below is given by:

F_{up} = \sigma T_{eq}^{4}

assuming the emission layer of the atmosphere radiates like a blackbody according to the Stefan-Boltzmann law, where σ is the Stefan-Boltzmann constant.

As a result, a flux \varepsilon \sigma T_{eq}^{4} is absorbed by the skin layer, while the remaining (1 - \varepsilon)\,\sigma T_{eq}^{4} passes through the skin layer, radiating directly into space.

Assuming the skin layer is at some temperature Ts, and using Kirchhoff's law (absorptivity = emissivity), the total radiation flux produced by the skin layer is given by:

F_{skin} = 2\,\varepsilon \sigma T_{s}^{4}

where the factor of 2 comes from the fact that the skin layer radiates in both the upwards and downwards directions.

If the skin layer remains at a constant temperature, the energy fluxes in and out of the skin layer should be equal, so that:

\varepsilon \sigma T_{eq}^{4} = 2\,\varepsilon \sigma T_{s}^{4}

Therefore, by rearranging the above equation, the skin temperature can be related to the equilibrium temperature of an atmosphere by:

T_{s} = \frac{T_{eq}}{2^{1/4}} \approx 0.841\, T_{eq}

The skin temperature is thus independent of the absorptivity/emissivity of the skin layer.
Applications
A multi-layered model of a greenhouse atmosphere will produce predicted temperatures for the atmosphere that decrease with height, asymptotically approaching the skin temperature at high altitudes. The temperature profile of the Earth's atmosphere does not follow this type of trend at all altitudes, as it exhibits two temperature inversions, i.e. regions where the atmosphere gets warmer with increasing altitude. These inversions take place in the stratosphere and the thermosphere, due to absorption of solar ultraviolet (UV) radiation by ozone and absorption of solar extreme ultraviolet (XUV) radiation respectively. Although the reality of Earth's atmospheric temperature profile deviates from the many-layered model due to these inversions, the model is relatively accurate within Earth's troposphere. The skin temperature is a close approximation for the temperature of the tropopause on Earth. An equilibrium temperature of 255 K on Earth yields a skin temperature of 214 K, which compares with a tropopause temperature of 209 K.
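The closing numbers can be checked directly from the relation derived above:

```python
# Check of T_s = T_eq / 2**(1/4) for Earth's equilibrium temperature.
T_eq = 255.0             # K
T_s = T_eq / 2 ** 0.25   # skin temperature
print(round(T_s))        # 214 K, close to the 209 K tropopause temperature
```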
References
Temperature
Atmospheric radiation | Skin temperature (atmosphere) | [
"Physics",
"Chemistry"
] | 838 | [
"Scalar physical quantities",
"Temperature",
"Thermodynamic properties",
"Physical quantities",
"SI base quantities",
"Intensive quantities",
"Thermodynamics",
"Wikipedia categories named after physical quantities"
] |
53,115,925 | https://en.wikipedia.org/wiki/Microflotation | Microflotation is a further development of standard dissolved air flotation (DAF): a water treatment technology operating with microbubbles of 10–80 μm in size, instead of the 80–300 μm bubbles of conventional DAF units.

The general operating method of microflotation is similar to that of standard recycled-stream DAF units. The advances of microflotation are lower-pressure operation, a smaller footprint, and lower energy consumption.
Process description
The method of Microflotation is comparable to recycled stream DAF.
A portion of the clarified effluent water leaving the microflotation tank is pumped into a small pressure vessel into which compressed air is also introduced. This results in saturating the pressurized effluent water with air. The air-saturated water stream is recycled to the front of the microflotation cell and flows through a pressure release valve just as it enters the front of the float tank, which results in the air being released in the form of tiny bubbles. Bubbles form at nucleation sites on the surface of the suspended particles, adhering to the particles. As more bubbles form, the lift from the bubbles eventually overcomes the force of gravity. This causes the suspended matter to float to the surface where it forms a froth layer which is then removed by a skimmer. The froth-free water exits the float tank as the clarified effluent from the microflotation unit. A particular circular DAF design, called "zero speed", keeps the water in the tank quiescent and thereby achieves the highest performance; a typical example is the Easyfloat 2K DAF system.
Advantages
Microflotation is an enhanced method to float particles to the surface with the aid of adherent air bubbles.
The smaller the bubbles, the more easily and strongly suspended solids adhere to them. Because of the improved adherence capacity of small microbubbles, air saturation and particle capture improve, leading to better suspended-solids reduction, a higher solids content in the float sludge, and a more stable float sludge on the surface of the microflotation cell.
Microflotation must be distinguished from the dispersed-air flotation used in the mining industry for mineral separation, where the bubbles are larger (500–2000 μm in size) and the volume of air is many times the volume of water. Traditional dissolved air flotation (DAF) mainly operates with bubble sizes ranging from 80 to 300 μm, with a very inhomogeneous bubble size distribution.

A major difference between low-pressure dissolved air flotation and other flotation processes lies in the bubble volumes, the amount of air, and the rise velocities. A single macrobubble can be 1000 times larger in volume than a single microbubble; conversely, about 1000 microbubbles hold the same volume of air as one macrobubble.

Microflotation produces bubbles of 40–70 μm with rise rates of 3–10 m/h. This rise rate is slow enough not to destroy the fragile flocs (agglomerations of particles with weak mutual bonding), yet high enough to allow separation within practical times. As particles attach to bubbles, the size of the bubble-floc aggregates grows, and their rise velocities grow with it. The separation rate is thereby accelerated, leading to residence times of 10 to 60 minutes for combined chemical precipitation and flotation, small footprint areas for treatment plants, and lower treatment costs.

A bubble size distribution between 20 and 50 microns is the necessary requirement for an optimum flotation result. Even a small number of bubbles with diameters above 100 microns can disable a flotation separation process, because larger bubbles rise more quickly and cause turbulence, which severely destroys already-formed air-floc agglomerates.
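As a rough plausibility check of these rise rates, one can apply Stokes' law for small rigid spheres, v = g·d²·(ρ_w − ρ_air)/(18·μ); this model is an assumption of the sketch below, not a claim of the article, and it breaks down for larger, deforming bubbles:

```python
# Stokes-law rise velocity for small air bubbles in water at ~20 C (SI units).
g, rho_w, rho_air, mu = 9.81, 998.0, 1.2, 1.0e-3

for d_um in (40, 50, 70, 100):
    d = d_um * 1e-6                                  # diameter in metres
    v = g * d ** 2 * (rho_w - rho_air) / (18 * mu)   # m/s
    print(f"{d_um:>3} um bubble rises at {v * 3600:4.1f} m/h")
# 40-70 um bubbles land in the quoted 3-10 m/h band; a 100 um bubble
# already rises at roughly 20 m/h, illustrating the turbulence problem.
```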
Applications
Microflotation is technically suitable, and often more economical, as a substitute for classic technologies such as sand filtration and sedimentation. Beyond this, there are several applications in which low-pressure microflotation is an alternative, or a convincing complement, to membrane technology.
Microflotation can be used as:
Non-chemical/chemical industrial pre-treatment (COD, BOD, FOG and TSS reduction; heavy metal and color removal)
Primary treatment
Tertiary treatment
Replacement or protection of filtration units
Sludge thickening
Protection and performance improvement of MBR units and of aerobic and anaerobic biological treatment stages
References
Flotation processes
Water treatment
Waste treatment technology | Microflotation | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 931 | [
"Water treatment",
"Water pollution",
"Environmental engineering",
"Oil refining",
"Flotation processes",
"Water technology",
"Waste treatment technology"
] |
53,121,641 | https://en.wikipedia.org/wiki/Nano-suction%20technology | Nano-suction is a technology that uses vacuum, negative fluid pressure and millions of nano-sized suction cups to adhere objects securely to a flat, non-porous surface. When a nano-suction object is pressed against a flat surface, millions of miniature suction cups create a partial vacuum, generating a suction force that can support substantial weight. The nature of the technology allows easy removal without residue, and makes it reusable.
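The holding force follows from the pressure differential across the pad times the contact area, F = ΔP·A; the values below are assumptions chosen for illustration only:

```python
# Back-of-envelope holding force of a vacuum-adhesion pad: F = dP * A.
dP = 5.0e4     # Pa; assumed partial vacuum (a perfect vacuum gives ~1.0e5 Pa)
A = 0.1 * 0.1  # m^2; a 10 cm x 10 cm pad
F = dP * A     # newtons
print(f"~{F:.0f} N, i.e. roughly {F / 9.81:.0f} kg of supported weight")
```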
Applications
There have been a wide range of applications of nano-suction technology, also known as "anti-gravity", ranging from hooks, frames, mirrors, notepad organisers, mobile phone cases and large houseware products.
See also
Synthetic setae
Suction cup
References
Nanotechnology
Tools
Vacuum
Joining | Nano-suction technology | [
"Physics",
"Materials_science",
"Engineering"
] | 159 | [
"Nanotechnology",
"Vacuum",
"Matter",
"Materials science"
] |
70,161,885 | https://en.wikipedia.org/wiki/2%2C4%2C6-Tri-tert-butylpyrimidine | 2,4,6-Tri-tert-butylpyrimidine is the organic compound with the formula HC(ButC)2N2CtBu where tBu = (CH3)3C. It is a substituted derivative of the heterocycle pyrimidine. Known also as TTBP, this compound is of interest as a base that is sufficiently bulky not to bind boron trifluoride but still able to bind protons. It is less expensive than the related bulky derivatives of pyridine such as 2,6-di-tert-butylpyridine, 2,4,6-tri-tert-butylpyridine, and 2,6-di-tert-butyl-4-methylpyridine.
References
Pyrimidines
Reagents for organic chemistry
Non-nucleophilic bases
Tert-butyl compounds | 2,4,6-Tri-tert-butylpyrimidine | [
"Chemistry"
] | 192 | [
"Non-nucleophilic bases",
"Bases (chemistry)",
"Reagents for organic chemistry"
] |
70,163,331 | https://en.wikipedia.org/wiki/Hydraulic%20modular%20trailer | A hydraulic modular trailer (HMT) is a special platform trailer unit featuring swing axles, hydraulic suspension, independently steerable axles, and two or more axle rows; units can be joined to one another longitudinally and laterally, and a power pack unit (PPU) supplies hydraulic power for steering and height adjustment. These trailer units are used to transport oversized loads which are difficult to disassemble and are overweight. The trailers are manufactured from high-tensile steel, which makes it possible to bear the weight of the load with the help of one or more ballast tractors that push and pull the units via a drawbar or gooseneck; this combination of tractor and trailer is also termed a heavy hauler.
Typical loads include oil rig modules, bridge sections, buildings, ship sections, and industrial machinery such as generators and turbines; many militaries also use HMTs for tank transportation. Only a limited number of manufacturers produce these heavy-duty trailers, because oversized loads account for a very small share of the overall transportation industry. Self-powered units of the hydraulic modular trailer, called self-propelled modular transporters (SPMTs), are used when ballast tractors cannot be applied due to space constraints.
History
In 1957 the first ever hydraulic modular trailers were made by Willy Scheuerle, a German trailer specialist; these were four-axle, 32-wheel modules for Robert Wynn and Sons Ltd, a Shaftesbury-based, Guinness Book of Records-winning heavy haulage company. Wynns were also the first to use pneumatic tires for loads weighing more than 100 tons and the first to use hydraulic-suspension trailers, which were manufactured by Cranes Trailers Limited of Dereham.
In 1962 Cranes Trailers Limited developed two four-axle, 32-wheel modules for Pickfords, a London-based heavy haulage company, with a combined payload capacity of 160 tons on a total of eight axles and 64 wheels. The modules incorporated hydraulic suspension, each axle was interlinked with a mechanical steering system, and the operational height could be varied from 2.9 to 3.11 ft. The modules had drawbar couplings that could be fitted at either end, or at both ends for a push-pull combination.
In 1963 Goldhofer developed modular trailers in Europe for heavy haulers. In the same year, Cometto developed a 300-ton-capacity module in a 14-axle, seven-row configuration. Scheuerle also demonstrated its modules at events in 1967, and later King Truck Equipment Ltd signed an agreement with Scheuerle that gave them exclusive rights to manufacture Scheuerle trailers in the UK.
In 1971, King Truck Equipment Ltd demonstrated two units custom-built for Pickfords. A single unit was able to carry 150 tons on six axle rows and 48 wheels in total. Pickfords used them mostly with their Scammell ballast tractors via a drawbar coupling. These trailers had independent suspension and steering, powered by a Petter twin-cylinder diesel engine used as a PPU. In the 1970s, many manufacturers started to develop HMTs, as the industry believed that conventional low loaders had various limitations. To comply with new regulations and with safety in mind, the industry knew that it needed more axles to distribute the payload, and HMTs were the ultimate answer to this demand. Manufacturers opted for hydraulic suspension instead of mechanical leaf springs or air suspension because of its compact size and adjustable characteristics. They chose high-tensile steel instead of aluminum because, for HMTs carrying oversized loads, minimizing the weight of the trailer itself matters less than its payload capacity. The only weak point of an HMT was its tires, which remain a significant weakness to this day and are the reason why SPMTs have solid tires; HMTs operate at higher speeds than SPMTs, which is why solid tires are not an option for HMTs.
Specifications
The number of axles on an HMT is not fixed; two-, three-, four-, five-, six-, and eight-axle units are manufactured. Multiple units can be coupled longitudinally and laterally to transport heavier loads; each axle has a lifting capacity ranging from 18 to 45 tons and a steering angle of 50 to 60 degrees. Some combinations require a trailer operator who controls the steering and height adjustments of the trailer via a modular controller that can be mounted at the front or rear end of the trailer. Large combinations may also have a cabin for the operator, while typical combinations have a seat attached to the controller.
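Because per-axle-line ratings vary between models, a combination is sized by dividing the gross weight by the line rating; the figures in this sketch are illustrative only, not manufacturer data:

```python
import math

def axle_lines_needed(payload_t, trailer_self_weight_t, rating_per_line_t):
    """Minimum number of axle lines so that the gross weight (payload
    plus trailer self-weight) stays within the per-line rating."""
    gross_t = payload_t + trailer_self_weight_t
    return math.ceil(gross_t / rating_per_line_t)

# e.g. a 300 t transformer on lines rated at 36 t each (assumed values)
print(axle_lines_needed(300, 60, 36))  # -> 10 axle lines
```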
Hydraulic cylinders are used for the steering and suspension of the trailer. Each axle has an individual suspension cylinder and a steering rod connected to the main steering cylinder at the front end of the trailer, which makes all the axles steer at once in the same direction. One axle row consists of two turntables, two knees, two suspension cylinders, and four to eight wheels attached to a high-strength steel platform. The steering and suspension cylinders are hydraulically operated, with hydraulic fluid supplied through hoses from the hydraulic tank, which is located near the PPU. The PPU, which powers the flow of hydraulic fluid from the tank to the suspension and steering cylinders, puts out about 18 to 25 hp and is available in both diesel and petrol variants manufactured by renowned brands such as Kohler, Yanmar, and Hatz.
Multiple HMT units can be interconnected longitudinally by pins and interconnecting couplings mounted in the centre of the chassis at the front and rear; to interconnect them laterally, they are bolted along the side walls of the chassis. HMTs cannot move by themselves, so there are two couplings by which an HMT can be connected to a tractor unit that pushes and pulls the trailer: the gooseneck and the drawbar.
The gooseneck is the most common coupling used in the industry. A swan-neck-shaped coupling is attached to the trailer and to the tractor via the trailer pin and the tractor's fifth wheel. The coupling can be hydraulically adjusted to suit the tractor's height, and the steering controls are connected through it. Goosenecks are easy to use and allow the use of conventional tractors, but the coupling has two major drawbacks: it cannot be applied in a two-file (side-by-side) HMT configuration, which limits the payload, and it cannot be applied in a push-pull configuration. Goosenecks are manufactured by the trailer manufacturers themselves.
The drawbar is the most efficient and economical coupling. It consists of an A-shaped frame with an I-shaped loop that is attached to the trailer and connected to a ballast tractor via the tractor's towing hitch. This coupling is widely used in developing countries because of its low cost. Unlike the gooseneck, it can be applied in side-by-side and push-pull configurations, but it cannot be connected to a typical tractor; it requires a ballast tractor, which has a ballast box instead of a fifth wheel and tow hitches at the front and rear. Drawbars and tow hitches are manufactured by companies such as JOST and Ringfeder.
Since 2005 in the United States, HMTs have gained extra features and design changes, including widening axles and a halfway folding system. Because of differing road regulations in different states, almost all manufacturers have adopted the US design and developed a product for the US market. These HMTs are named dual-lane trailers, after the widening characteristic of the trailer. Dual-lane trailers can change their width between narrow and wide settings to make transporting empty trailers easier and to comply with state regulations when required.
Accessories
Gooseneck
Draw bar
Drop Deck
Vessel Bridge
Intermediate spacer
Excavator deck
Extendable spacer
Turntables (bolster)
Blade Lifter
Tower adapter
Girder frame
Trailer power assist
Manufacturers
Goldhofer
Scheuerle
Nicolas
Kamag
Tiiger
Tratec
Faymonville
Cometto
Drake Trailers
Kennedy Trailers
Trabosa
Taskers of Andover
DOLL Fahrzeugbau
TRT Australia
Broshuis
Tuff Trailers
Colombo
Capperi
Modern Transport Engineers
King Trailers Limited
Leonardo DRS
BEML
Operators
ALE
Sarens
Mammoet
Lampson International
Pickfords
Alstom
CLP Group
Omega Morgan
United States Army
Indian Army
British Army
Republic of Korea Armed Forces
French Army
Italian Army
Turkish Armed Forces
Gallery
See also
Heavy hauler
Tractor unit
Ringfeder
Ballast tractor
Transporter Industry International
Faymonville Group
References
Trailers
Heavy haulage
Engineering vehicles
Modularity
Machines | Hydraulic modular trailer | [
"Physics",
"Technology",
"Engineering"
] | 1,712 | [
"Physical systems",
"Engineering vehicles",
"Machines",
"Mechanical engineering"
] |
70,169,832 | https://en.wikipedia.org/wiki/Hi-C%20%28genomic%20analysis%20technique%29 | Hi-C is a high-throughput genomic and epigenomic technique to capture chromatin conformation (3C). In general, Hi-C is considered as a derivative of a series of chromosome conformation capture technologies, including but not limited to 3C (chromosome conformation capture), 4C (chromosome conformation capture-on-chip/circular chromosome conformation capture), and 5C (chromosome conformation capture carbon copy). Hi-C comprehensively detects genome-wide chromatin interactions in the cell nucleus by combining 3C and next-generation sequencing (NGS) approaches and has been considered as a qualitative leap in C-technology (chromosome conformation capture-based technologies) development and the beginning of 3D genomics.
Similar to the classic 3C technique, Hi-C measures the frequency (as an average over a cell population) at which two DNA fragments physically associate in 3D space, linking chromosomal structure directly to the genomic sequence. The general procedure of Hi-C involves first crosslinking chromatin material using formaldehyde. Then, the chromatin is solubilized and fragmented, and interacting loci are re-ligated together to create a genomic library of chimeric DNA molecules. The relative abundance of these chimeras, or ligation products, is correlated to the probability that the respective chromatin fragments interact in 3D space across the cell population. While 3C focuses on the analysis of a set of predetermined genomic loci to offer “one-versus-some” investigations of the conformation of the chromosome regions of interest, Hi-C enables “all-versus-all” interaction profiling by labeling all fragmented chromatin with a biotinylated nucleotide before ligation. As a result, biotin-marked ligation junctions can be purified more efficiently by streptavidin-coated magnetic beads, and chromatin interaction data can be obtained by direct sequencing of the Hi-C library.
Analyses of Hi-C data not only reveal the overall genomic structure of mammalian chromosomes, but also offer insights into the biophysical properties of chromatin as well as more specific, long-range contacts between distant genomic elements (e.g. between genes and regulatory elements), including how these change over time in response to stimuli. In recent years, Hi-C has found its application in a wide variety of biological fields, including cell growth and division, transcription regulation, fate determination, development, autoimmune disease, and genome evolution. By combining Hi-C data with other datasets such as genome-wide maps of chromatin modifications and gene expression profiles, the functional roles of chromatin conformation in genome regulation and stability can also be delineated.
History
At its inception, Hi-C was a low-resolution, high-noise technology that was only capable of describing chromatin interaction regions within a bin size of 1 million base pairs (Mb). The Hi-C library also required several days to construct, and the datasets themselves were low in both output and reproducibility. Nevertheless, Hi-C data offered new insights for chromatin conformation as well as nuclear and genomic architectures, and these prospects motivated scientists to put efforts to modify the technique over the past decade.
Between 2012 and 2015, several modifications to the Hi-C protocol took place, using 4-cutter restriction digestion or deeper sequencing to obtain higher resolution. The use of restriction endonucleases that cut more frequently, or of DNaseI and micrococcal nucleases, also significantly increased the resolution of the method. More recently (2017), Belaghzal et al. described a Hi-C 2.0 protocol that was able to achieve kilobase (kb) resolution. The key adaptation to the base protocol was the removal of the SDS solubilization step after digestion, which preserves nuclear structure and prevents random ligation between chromatin fragments by performing ligation within intact nuclei; this formed the basis of in situ Hi-C. In 2021, Hi-C 3.0 was described by Lafontaine et al., with higher resolution achieved by enhancing crosslinking with formaldehyde followed by disuccinimidyl glutarate (DSG). While formaldehyde captures the amino and imino groups of both proteins and DNA, the NHS esters in DSG react with primary amines on proteins and can capture amine-amine interactions. These updates to the base protocol allowed scientists to examine conformational structures such as chromosomal compartments and topologically associating domains (TADs) in more detail, as well as high-resolution conformational features such as DNA loops.
To date, a variety of derivatives of Hi-C have already emerged, including in situ Hi-C, low Hi-C, SAFE Hi-C, and Micro-C, with distinctive features related to different aspects of standard Hi-C, but the basic principle has remained the same.
Traditional Hi-C
The outline of the classical Hi-C workflow is as follows: cells are cross-linked with formaldehyde; chromatin is digested with a restriction enzyme that generates a 5’ overhang; the 5’ overhang is filled with biotinylated bases and the resulting blunt-ended DNA is ligated. The ligation products, with biotin at the junction, are selected for using streptavidin and further processed to prepare a library ready for subsequent sequencing efforts.
The number of pairwise interactions that Hi-C can capture across the genome is immense, so it is important to analyze an appropriately large sample in order to capture unique interactions that may only be observed in a minority of the cell population. To obtain a high-complexity library of ligation products that ensures high resolution and depth of data, a sample of 20–25 million cells is required as input for Hi-C. Primary human samples, which may be available only in limited cell numbers, can be used for standard Hi-C library preparation with as few as 1–5 million cells. However, such a low cell input may be associated with low library complexity, which results in a high percentage of duplicate reads during library preparation.
Standard Hi-C gives data on pairwise interactions at the resolution of 1 to 10 Mb, requires high sequencing depth and the protocol takes around 7 days to complete.
Formaldehyde cross-linking
Cell and nuclear membranes are highly permeable to formaldehyde. Formaldehyde cross-linking is frequently employed for the detection and quantification of DNA-protein and protein-protein interactions. Of interest in the context of Hi-C, and all 3C-based methods, is the ability of formaldehyde to capture cis chromosomal interactions between distal segments of chromatin. It does so by forming covalent links between spatially adjacent chromatin segments. Formaldehyde can react with macromolecules in two steps: first it reacts with a nucleophilic group on a DNA base for example, and forms a methylol adduct, which is then converted to a Schiff base. In the second step, the Schiff base, which can decompose rapidly, forms a methylene bridge with another functional group on another molecule. It can also make this methylene bridge with a small molecule in solution such as glycine, which is used in excess to quench formaldehyde in Hi-C. Quenchers can typically exert an effect on formaldehyde from outside the cell. A key feature of this two-step formaldehyde crosslinking reaction is that all the reactions are reversible, which is vital for chromatin capture.
Crosslinking is a pivotal step of the chromatin capture workflow as the functional readout of the technique is the frequency at which two genomic regions are crosslinked to each other. Thus, the standardization of this step is important and for that, one must consider potential sources of variation. Presence of serum, which contains a high concentration of protein, in culture media can decrease the effective concentration of formaldehyde available for chromatin crosslinking, by sequestering it in the culture media. Therefore, in cases where serum is used in culture, it should be removed for the crosslinking step. The nature of cells, i.e., whether they are suspension or adherent, is also a pertinent consideration for the crosslinking step. Adherent cells bind to surfaces with the help of molecular mechanisms of cytoskeletons. It has been shown that there is a link between cytoskeleton-maintained nuclear and cellular morphology which, if altered, may negatively impact global nuclear organization. Adherent cells therefore, should be crosslinked while still attached to their culture surface.
Lysis, restriction digest and biotinylation
Cells are lysed on ice with cold hypotonic buffer containing sodium chloride, Tris-HCl at pH 8.0, and non-ionic detergent IGEPAL CA-630, supplemented with protease inhibitors. The protease inhibitors and incubation on ice help preserve the integrity of crosslinked chromatin complexes from endogenous proteases. The lysis step helps to release the nucleic material from the cells.
Following cell lysis, chromatin is solubilized with dilute SDS in order to remove proteins that have not been crosslinked and to open chromatin and make it more accessible for subsequent restriction endonuclease-mediated digestion. If the incubation with SDS exceeds the recommended 10 minutes, the formaldehyde crosslinks can be reversed and so the incubation with SDS must be immediately followed by an incubation on ice. A non-ionic detergent called Triton X-100 is used to quench SDS in order to prevent enzyme denaturation in the next step.
Any restriction enzyme that generates a 5’ overhang, such as HindIII can be used to digest the now accessible chromatin overnight. This 5’ overhang provides the template required by the Klenow fragment of DNA Polymerase I to add biotinylated CTP or ATP to the digested ends of chromatin. This step allows for the selection of Hi-C ligation products for library preparation.
Proximity ligation
A dilution ligation is performed on DNA fragments that are still crosslinked to one another in order to favor the intramolecular ligation of fragments within the same chromatin complex instead of ligation events between fragments across different complexes. Since this ligation step occurs between blunt-ended DNA fragments (since the sticky ends have been filled in with biotin-labeled bases), the reaction is allowed to go on for up to 4 hours to make up for its inherent inefficiency. As a result of proximity ligation, the terminal HindIII sites are lost and an NheI site is generated.
Biotin removal, DNA shearing, size selection and end repair
The biotin-labeled ligation products can be purified using phenol-chloroform DNA extraction. To remove any fragments with biotin-labeled ends that have not been ligated, T4 DNA Polymerase with 3’ to 5’ exonuclease activity is used to remove nucleotides from the ends of such fragments. This step ensures that none of these unligated fragments are selected for library preparation. The reaction is stopped with EDTA and the DNA is purified once again using phenol-chloroform DNA extraction.
The ideal size of DNA fragments for the sequencing library depends on the sequencing platform that will be used. DNA can first be sheared to fragments around 300–500 bp long using sonication. Fragments of this size are suitable for high-throughput sequencing. Following sonication, fragments can be size selected using AMPure XP beads from Beckman Coulter to obtain ligation products with a size distribution between 150 and 300 bp. This is the optimal fragment size window for HiSeq cluster formation.
DNA shearing causes asymmetric DNA breaks and must be repaired before biotin pulldown and sequencing adaptor ligation. This is achieved by using a combination of enzymes that fill in 5’ overhangs, and add 5’ phosphate groups and adenylate to the 3’ ends of fragments to allow for ligation of sequencing adaptors.
Biotin pull-down
Using an excess of streptavdin beads, such as the My-One C1 streptavidin bead solution from Dynabeads, biotinylated Hi-C ligation products can be pulled-down and enriched for. Ligation of the Illumina paired-end adapters is performed while the DNA fragments are bound to the streptavidin beads. Adsorption to the beads increases efficiency of the ligation of these blunt-ended DNA fragments to the adaptors, as it decreases their mobility.
Library preparation and sequencing
After the ligation of the adaptors is complete, PCR amplification of the library is performed. The PCR step can introduce a high number of duplicates in a low-complexity Hi-C ligation product sample as a result of over-amplification. This results in very few interactions being captured, often because the input sample had a low number of cells. It is important to titrate the number of cycles required to get at least 50 ng of Hi-C library DNA for sequencing. The fewer the cycles, the better, so that there are no PCR artifacts (such as off-target amplicons, non-specificity, etc.). The ideal range of PCR cycles is 9–15, and it is better to pool multiple PCR reactions to get enough DNA for sequencing than to increase the number of cycles for one PCR reaction. The PCR products are purified again using AMPure beads to remove primer dimers and then quantified before being sequenced. Regions of chromatin that interact with each other are then identified by paired-end sequencing of the biotinylated, ligated products.
Any platform that can allow for the ligated fragments to be sequenced across the NheI junction (Roche 454) or by paired-end or mate-paired reads (Illumina GA and HiSeq platforms) would be suitable for Hi-C. Before high-throughput sequencing, the quality of the library should be verified using Sanger sequencing, wherein the long sequencing read will read through the biotin junction. Thirty-six or 50 bp reads are sufficient to identify most chromatin interacting pairs using Illumina paired-end sequencing. Since the average size of fragments in the library is 250 bp, 50bp paired-end reads have been found to be optimum for Hi-C library sequencing.
Quality control of Hi-C libraries
There are several pressure points throughout the Hi-C sample preparation workflow that are well documented and reported. DNA at various stages can be run on 0.8% agarose gels to assay the size distribution of fragments. This is particularly important after the shearing and size selection steps. Degradation of DNA can also be monitored as low-molecular-weight smears on gels. Degradation can occur due to insufficient protease inhibitors during lysis, endogenous nuclease activity, or thermal degradation due to incorrect icing. 3C PCR reactions can be performed to test for the formation of proximity ligation products.
Variants
Standard Hi-C has a high input cell number cost, requires deep sequencing, generates low-resolution data, and suffers from formation of redundant molecules that contribute to low complexity libraries when cell numbers are low. To combat these issues in order to be able to apply this technique in contexts where cell number is a limiting factor, for example, with primary human cell work, several Hi-C variants have been developed since the first conceptualization of Hi-C.
The four main classes into which Hi-C variants fall are: dilution ligation, in situ ligation, single cell, and low-noise improvement systems. Standard Hi-C is a type of dilution ligation; other dilution ligation methods include DNase Hi-C and Capture Hi-C. In contrast to standard and Capture Hi-C, DNase Hi-C requires only 2–5 million cells as input, uses DNaseI for chromatin fragmentation, and employs an in-gel dilution proximity ligation. The use of DNaseI has been shown to greatly improve the efficiency and resolution of Hi-C. Capture Hi-C is a genome-wide assaying technique for examining chromatin interactions of specific loci using hybridization-based capture of targeted genomic regions. It was first developed by Mifsud et al. to map long-range promoter contacts in human cells by generating a biotinylated RNA bait library that targeted 21,841 promoter regions. These variants, in addition to others (described below), represent modifications to the foundational technique of standard Hi-C and address one or more limitations of the original method.
In situ Hi-C
In situ Hi-C combines standard Hi-C with the nuclear ligation assay, i.e., proximity ligation performed in intact nuclei. The protocol is similar to standard Hi-C in its basic workflow outline but differs in other ways. In situ Hi-C requires 2 to 5 million cells, compared to the ideal 20 to 25 million required for standard Hi-C, and the protocol takes only 3 days to complete versus 7 days for standard Hi-C. Furthermore, proximity ligation does not take place in solution as in standard Hi-C, decreasing the frequency of random, biologically irrelevant contacts and ligations, as indicated by the lower frequency of ligation junctions between mitochondrial and nuclear DNA in the captured biotinylated DNA. This is achieved by leaving the nuclei intact for the ligation step. Cells are still lysed with a buffer containing Tris-HCl at pH 8.0, sodium chloride, and the detergent IGEPAL CA630 before ligation, but instead of homogenization of the cell lysate, cell nuclei are pelleted after initial lysis of the cell membrane. After proximity ligation is complete, cell nuclei are incubated for at least 1.5 hours at 68 degrees Celsius to permeabilize the nuclear membrane and release the nuclear contents.
The resolution that can be achieved with in situ Hi-C can be as fine as 950 to 1000 bp, compared to the 1 to 10 Mb resolution of standard Hi-C and the 100 kb resolution of DNase Hi-C. While standard Hi-C makes use of a 6-bp cutter such as HindIII for the restriction digest step, in situ Hi-C uses a 4-bp cutter such as MboI or its isoschizomer DpnII (which is not sensitive to CpG methylation) to increase efficiency and resolution (as the restriction sites of MboI and DpnII occur more frequently in the genome). Data between replicates for in situ Hi-C are consistent and highly reproducible, with very little background noise and clear chromatin interactions. It is, however, possible that some of the captured interactions are not genuine intermolecular interactions: since the nucleus is densely packed with protein and DNA, performing proximity ligations in intact nuclei may pull down confounding interactions that form merely due to the nature of nuclear packaging rather than unique chromosomal interactions with functional impact in the cell. The method also requires an extremely high sequencing depth of around 5 billion paired-end reads per sample to achieve the resolution described by Rao et al. Several techniques that have adapted the concept of in situ Hi-C exist, including Sis Hi-C, OCEAN-C, and in situ capture Hi-C. Described below are two of the most prominent in situ Hi-C-based techniques.
1. Low-C
Low-C is an in situ Hi-C protocol adapted for use with low cell numbers, which is particularly useful in contexts where cell number is a limiting factor, for example, in primary human cell culture. The method makes use of minor changes, including adjusted volumes and concentrations and the timing and order of certain experimental steps, to allow the generation of high-quality Hi-C libraries from as few as 1000 cells. Despite the potential to generate usable, high-resolution data with as few as 1000 cells, Diaz et al. still recommend using at least 1 to 2 million cells if feasible, or otherwise a minimum of 500,000 cells. Library quality was first assessed on the Illumina MiSeq platform (2×84 bp paired-end reads) and, once it passed quality control criteria (including low PCR duplicates), the library was sequenced on the Illumina NextSeq (2×80 bp paired-end). Overall, this technique circumvents the high cell number input required for Hi-C and the high sequencing depth required to obtain high-resolution data, but it can only achieve resolutions of up to 5 kb and may not always be reproducible due to the variable nature of the sample sizes used and the data generated from them.
2. SAFE Hi-C
SAFE Hi-C, or simplified, fast, and economically efficient Hi-C, generates sufficient ligated fragments for high-throughput sequencing without amplification. Published in situ Hi-C data indicate that amplification (at the PCR step of library preparation) introduces a distance-dependent amplification bias, which results in a higher noise-to-signal ratio with genomic distance. SAFE Hi-C was successfully used to generate an amplification-free in situ Hi-C ligation library from as few as 250,000 K562 cells. Ligation fragments are between 200 and 500 bp long, with an average of about 370 bp. All ligation product libraries were sequenced on the Illumina HiSeq platform (2×150 bp paired-end reads). Although SAFE Hi-C can be used with a cell input as low as 250,000, Niu et al. recommend using 1 to 2 million cells. Samples produce enough ligation products to be sequenced on one-fourth of a lane. SAFE Hi-C has been demonstrated to increase library complexity by eliminating the PCR duplicates that lower the overall percentage of unique paired reads. Overall, SAFE Hi-C preserves the integrity of chromosomal interactions while also reducing the required sequencing depth and saving overall cost and labor.
Micro-C
Micro-C is a version of Hi-C that includes a micrococcal nuclease (MNase) digestion step to look at interactions between pairs of nucleosomes, thus enabling resolution of sub-genomic TAD structures at the 1 to 100 nucleosome scale. It was first developed for use in yeast and was shown to conserve the structural data obtained from a standard Hi-C but with greater signal-to-noise ratio. When used with human embryonic stem cells and fibroblasts, 2.6 to 4.5 billion uniquely mapped reads were obtained per sample. Hsieh et al. analyzed 2.64 billion reads from mouse embryonic stem cells and demonstrated that there was increased power for detecting short-range interactions.
Single cell Hi-C
Hi-C has also been adapted for use with single cells but these techniques require high levels of expertise to perform and are plagued with issues such as low data quality, coverage, and resolution.
Adaptations for Ancient DNA: PaleoHi-C
PaleoHi-C is a specialized adaptation of the Hi-C genomic analysis technique designed to study the three-dimensional genome architecture in ancient DNA samples. It addresses the challenges posed by degraded and fragmented DNA, enabling researchers to reconstruct chromatin interactions in extinct species.
Methodology
PaleoHi-C modifies the traditional Hi-C protocol to account for the specific characteristics of ancient DNA:
Sample Preparation: DNA is extracted from well-preserved tissues, such as bones or skin, often found in cold or arid environments that minimize degradation.
Fragmentation and Ligation: Due to the inherent fragmentation of ancient DNA, PaleoHi-C utilizes optimized ligation protocols to capture chromatin interactions even in highly degraded samples.
Data Analysis: Advanced computational tools process the interaction data, reconstructing chromatin structures and identifying features like topologically associating domains (TADs) and chromatin compartments.
Applications
PaleoHi-C has opened new avenues in paleogenomics, including:
Genome Reconstruction: It has been used to map the three-dimensional genome architecture of extinct species, such as the 52,000-year-old woolly mammoth (Mammuthus primigenius), revealing similarities with modern relatives like the Asian elephant (Elephas maximus).
Epigenetic Insights: By identifying preserved chromatin interactions, PaleoHi-C provides a unique window into the regulation of genes in ancient organisms. Studies have demonstrated that chromatin organization, including Barr bodies representing inactive X chromosomes, can remain intact in ancient nuclei.
Evolutionary Studies: The technique aids in understanding how genome organization has evolved over time and across species.
Significance
The adaptation of Hi-C for ancient DNA has transformed the field of paleogenomics, allowing for detailed studies of extinct species at a molecular level. By preserving and analyzing chromatin interactions, PaleoHi-C sheds light on genome structure, evolution, and adaptation in ancient ecosystems.
Limitations
PaleoHi-C is constrained by the availability of well-preserved samples and the inherent challenges of working with highly degraded DNA. However, advances in sequencing technologies and computational methods continue to expand its potential applications.
Data analysis
The chimeric DNA ligation products generated by Hi-C represent pairwise chromatin interactions or physical 3D contacts within the nucleus, and can be analyzed by a variety of downstream approaches. Briefly, deep sequencing data is used to build unbiased genome-wide chromatin interaction maps. Then several different methods can be employed to analyze these maps to identify chromosomal structural patterns and their biological interpretations. Many of these data analysis approaches also apply to 3C-sequencing or other equivalent data.
Read mapping
Hi-C data produced by deep sequencing is in the form of a traditional FASTQ file, and the reads can be aligned to the genome of interest using sequence alignment software (e.g. Bowtie, bwa, etc.). Because Hi-C ligation products may span hundreds of megabases and may bridge loci on different chromosomes, Hi-C read alignment is often chimeric in the sense that different parts of a read may be aligned to loci distant apart, possibly in different orientations. Long-read aligners (e.g. minimap2) often support chimeric alignment and can be directly applied to long-read Hi-C data. Short-read Hi-C alignment is more challenging.
Notably, Hi-C generates ligation junctions of varying sizes, but the exact position of the ligation site is not measured. To circumvent this problem, iterative mapping is used, which avoids having to locate the junction site before splitting the reads in two and mapping them separately to identify the interaction pairs. The idea behind iterative mapping is to map as short a sequence as possible, ensuring unique identification of interaction pairs before reaching the junction site. Thus, 25-bp truncations starting from the 5’ end are mapped to the genome first, and reads that do not map uniquely to a single locus are extended by an additional 5 bp and re-mapped. This process is repeated until all reads map uniquely or have been extended to their full length. Only paired-end reads with each side uniquely mapped to a single genomic locus are kept; all other paired-end reads are discarded.
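The iterative mapping loop can be summarized as follows; `align` stands in for an external aligner call (e.g. a wrapper around Bowtie) and is a hypothetical placeholder, not a real API:

```python
def iterative_map(read_seq, align, start=25, step=5):
    """Map a 25-bp 5' prefix, extending by 5 bp until the alignment is
    unique or the full read length is reached (sketch of the scheme
    described above). `align(seq)` returns a list of candidate hits."""
    ends = list(range(start, len(read_seq), step)) + [len(read_seq)]
    for end in ends:
        hits = align(read_seq[:end])
        if len(hits) == 1:          # uniquely mapped: keep this side
            return hits[0]
    return None                     # ambiguous or unmappable: discard

# A read pair is kept only if both sides map uniquely:
# keep = (iterative_map(r1, align) is not None
#         and iterative_map(r2, align) is not None)
```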
Several variations of read mapping techniques are implemented in many bioinformatics pipelines, such as ICE, HiC-Pro, HIPPIE, HiCUP, and TADbit, to map two portions of a paired end read separately, in the case that the two portions match distinct genomic positions, thus addressing the challenge where reads span the ligation junctions.
With increased read length, more recent pipelines (e.g. Juicer and the 4D-Nucleosome Data Portal) often align short Hi-C reads with an alignment algorithm capable of chimeric alignment, such as bwa-mem, chromap and dragmap. This procedure calls alignment once and is simpler than iterative mapping.
Fragment assignment and filtering
Each mapped read is then assigned a single genomic alignment location according to its 5’ mapped position in the genome. For each read pair, a location is assigned to only one restriction fragment, and thus should fall in close proximity to a restriction site, less than the maximum molecule length away. Reads mapped farther than the maximum molecule length from the closest restriction site result from physical breakage of the chromatin or non-canonical nuclease activity. Because these reads also carry information on chromatin interactions, they are not discarded, but appropriate filtering must take place after genomic locations are assigned in order to remove technical noise from the dataset.
Depending on whether the read pair falls within the same or different restriction fragments, different filtering criteria are applied. If the paired reads map to the same restriction fragment, they likely represent un-ligated dangling ends or circularized fragments that are uninformative, and are therefore removed from the dataset. These reads could also represent PCR artifacts, undigested chromatin fragments, or simply, reads with low alignment quality. Whatever their origin, reads mapped to the same fragment are considered “spurious signals” and are typically discarded before downstream processing.
The remaining paired reads mapped to distinct restriction fragments are also filtered to discard identical/redundant PCR products, and this is achieved by removing reads sharing the exact same sequence or 5’ alignment positions. Additional levels of filtering could also be applied to fit the experimental purpose. For example, potential undigested restriction sites could be specifically filtered out, rather than passively identified, by removing reads mapped to the same chromosomal strand with a small distance (user-defined, experience-based) in between.
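In code, the fragment-level filters reduce to dropping same-fragment pairs and position-level duplicates; the tuple layout below is an assumed minimal representation of a mapped read pair:

```python
def filter_read_pairs(pairs):
    """Yield informative Hi-C pairs, dropping same-fragment products
    (dangling ends, self-circles) and PCR duplicates that share both
    5' positions. Pair layout: (chrom1, pos1, frag1, chrom2, pos2, frag2)."""
    seen = set()
    for c1, p1, f1, c2, p2, f2 in pairs:
        if (c1, f1) == (c2, f2):            # same restriction fragment
            continue
        key = tuple(sorted(((c1, p1), (c2, p2))))
        if key in seen:                      # redundant PCR product
            continue
        seen.add(key)
        yield (c1, p1, f1, c2, p2, f2)
```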
Binning and bin-level filtering
Based on their midpoint coordinates, Hi-C restriction fragments are binned into fixed genomic intervals, with bin sizes ranging from 40 kb to 1 Mb. The rationale behind this approach is that by reducing the complexity of the data and lowering the number of candidate genome-wide interactions per bin, genomic bins allow for the construction of more robust and less noisy signals, in the form of contact frequencies, at the expense of resolution (though restriction fragment length still remains the ultimate physical limit to Hi-C resolution). Bin to bin interactions are aggregated by simply taking the sum, although more focused and informative methods have also been developed over the years to further enhance the signal. One such method described by Rao et al. aims to push the limit of bin size to smaller and smaller bins, eventually having > 80% of bins covered by 1000 reads each, which significantly increased the resolution of the final analysis results.
Bin-level filtering, just like fragment-level filtering, also takes place to shed experimental artifacts from the obtained data. Bins with high noise and low signals are removed as they typically represent highly repetitive genomic contents around the telomeres and centromeres. This is done by comparing the individual bin sums to the sum of all bins and removing the bottom 1% of bins, or by using the variance as a measure of noise. Low-coverage bins, or bins three standard deviations below the center of a log-normal distribution (which fits the total number of contacts per genomic bin), are removed using the MAD-max (maximum allowed median absolute deviation) filter. After binning, Hi-C data will be stored in a symmetrical matrix format.
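For a single chromosome, binning reduces to counting fragment-midpoint pairs per fixed genomic interval; this sketch assumes pre-filtered intrachromosomal pairs:

```python
import numpy as np

def build_contact_matrix(midpoint_pairs, chrom_length, bin_size=100_000):
    """Aggregate (midpoint1, midpoint2) pairs into a symmetric binned
    contact matrix; fragment midpoints decide bin membership."""
    n_bins = chrom_length // bin_size + 1
    matrix = np.zeros((n_bins, n_bins))
    for mid1, mid2 in midpoint_pairs:
        i, j = mid1 // bin_size, mid2 // bin_size
        matrix[i, j] += 1
        if i != j:
            matrix[j, i] += 1   # keep the matrix symmetric
    return matrix
```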
More recently, many approaches have been proposed to predetermine the optimal bin size for different Hi-C experiments. Li et al. in 2018 described deDoc, a method in which the bin size is selected as the one at which the structural entropy of the Hi-C matrix reaches a stable minimum. QuASAR, on the other hand, offers additional quality assessment and compares replicate scores of the samples (provided that replicates are included in the experimental design) to find the maximum usable resolution. Some publications have also tried to score interaction frequencies at the single-fragment level, where higher coverage can be achieved even with a lower number of reads. HiCPlus, a tool developed by Zhang et al. in 2018, is able to impute Hi-C matrices similar to the original ones using only 1/16 of the original reads.
Balancing/normalization
Balancing refers to the process of bias correction of the obtained Hi-C data, and can be either explicit or implicit. Explicit balancing methods require the explicit definitions of biases known to be associated with Hi-C reads (or any high-throughput sequencing technique in general) including the read mappability, GC content, as well as individual fragment length. A correction factor is first computed for each of the considered biases, followed by each of their combination, and then applied to the read counts per genomic bin.
However, some biases can come from an unknown origin, in which case an implicit balancing approach is used instead. Implicit balancing relies on the assumption that each genomic locus should have “equal visibility”, which suggests that the interaction signal at each genomic locus in the Hi-C data should add up to the same total amount. One approach called iterative correction uses the Sinkhorn–Knopp balancing algorithm and attempts to balance the symmetrical matrix using the aforementioned assumption (by equalizing the sum of each and every row and column in the matrix). The algorithm iteratively alternates between two steps: 1) dividing each row by its mean, and 2) dividing each column by its mean, which are guaranteed to converge in the end and leave no obviously high rows or columns in the interaction matrix. Other computational methods also exist to normalize the biases inherent to Hi-C data, including sequential component normalization (SCN), the Knight-Ruiz matrix-balancing approach, and eigenvector decomposition (ICE) normalization. In the end, both the explicit and the implicit bias correction methods yield comparable results.
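A minimal version of the iterative correction procedure, assuming a symmetric contact matrix in which noisy bins have already been filtered out (set to zero), looks like this:

```python
import numpy as np

def ice_balance(matrix, n_iter=50, tol=1e-5):
    """Sinkhorn-Knopp-style iterative correction: rescale rows and
    columns until every locus has approximately equal total visibility."""
    m = matrix.astype(float).copy()
    for _ in range(n_iter):
        coverage = m.sum(axis=1)
        coverage /= coverage[coverage > 0].mean()  # relative visibility
        coverage[coverage == 0] = 1.0              # skip filtered bins
        m /= np.outer(coverage, coverage)
        if np.abs(coverage - 1.0).max() < tol:     # already balanced
            break
    return m
```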
Analysis and data interpretation
With a binned, genome-wide interaction matrix, common interaction patterns observed in mammalian genomes can be identified and interpreted biologically, while more rare, less frequently observed patterns such as circular chromosomes and centromere clustering, may require additional specially-tailored methods to be identified.
1. Cis/trans interaction ratio
Cis/trans interactions are one of the two strongest interaction patterns observed in Hi-C maps. They are not locus-specific, and thus are considered as a genome-level pattern. Typically, a higher interaction frequency is observed, on average, for pairs of loci residing on the same chromosome (in cis) than pairs of loci residing on different chromosomes (in trans). In Hi-C interaction matrices, cis/trans interactions appear as square blocks centered along a diagonal, matching individual chromosomes at the same time. Because this pattern is relatively consistent across different species and cell types, it can be used to assess the quality of the data. A noisier experiment, due to random background ligation or any unknown factor, will result in a lower cis to trans interaction ratio (as the noise is expected to affect both cis and trans interactions to a similar extent), and high-quality experiments typically have a cis/trans interaction ratio between 40 and 60 for the human genome.
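Computed as the percentage of contacts joining loci on the same chromosome (one common convention for expressing this ratio), the check is a short pass over the filtered pairs:

```python
def percent_cis(chrom_pairs):
    """Share of contacts with both ends on the same chromosome;
    each element of chrom_pairs is a (chrom1, chrom2) tuple."""
    cis = sum(c1 == c2 for c1, c2 in chrom_pairs)
    return 100.0 * cis / len(chrom_pairs)

# A value well below ~40 for a human library would hint at random
# background ligation rather than genuine chromatin contacts.
```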
2. Distance-dependent interaction frequency
This pattern refers to the distance-dependent decay of interaction frequencies on a genome level, and represents the second one of the two strongest Hi-C interaction patterns. As the interaction frequencies between cis-interacting loci decrease (as a result of further distance between them), a gradual decrease of interaction frequency can be observed moving away from the diagonal in the interaction matrix.
Various polymer models exist to statistically characterize the properties of loci pairs separated by a given distance, but discrete binning and fitting continuous functions are two common ways to analyze the distance-dependent interaction frequencies between datapoints. First, interaction frequencies can be binned based on their genomic distance, then a continuous function is fitted to the data using information of the average of each bin. The resulting decay function is plotted on a log-log plot so that a linear line can be used to represent the power-law decays predicted by polymer models. However, oftentimes a simple polymer model will not be sufficient to fully represent the distance-dependent interaction frequencies, at which point more complicated decay functions might result, which might affect the reproducibility of the data due to the presence of locus-specific rather than genome-wide patterns observed in the Hi-C matrix (which are not taken into consideration by polymer models).
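The decay curve and its log-log slope can be estimated directly from a binned matrix; the distance window below is an arbitrary choice for illustration:

```python
import numpy as np

def expected_by_distance(matrix):
    """Mean contact frequency at each genomic separation (in bins)."""
    n = len(matrix)
    return np.array([np.diagonal(matrix, d).mean() for d in range(n)])

def decay_exponent(decay, d_min=2, d_max=100):
    """Slope of the decay curve on a log-log plot; simple polymer
    models predict a power law P(s) ~ s**alpha."""
    d = np.arange(d_min, d_max)
    y = decay[d_min:d_max]
    ok = y > 0                      # log is undefined for empty bins
    slope, _intercept = np.polyfit(np.log(d[ok]), np.log(y[ok]), 1)
    return slope
```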
3. Chromatin compartments
The strongest locus-specific pattern found in Hi-C maps is chromatin compartments, which takes the shape of a plaid or “checker-board”-like pattern on the interaction matrix, with alternating blocks that range between 1 and 10 Mb in size (which makes them easy to extract even in experiments with very low sampling) in the human genome. This pattern can be found at both high and low frequencies. Because chromosomes consist of two types of genomic regions that alternate along the length of individual chromosomes, the interaction frequencies between two regions of the same type and interaction frequencies between two regions of different types can be quite different.
The definition of the active (A) and inactive (B) chromatin compartments is based on principal component analysis, first established by Lieberman-Aiden et al. in 2009. Their approach calculated the correlation of the Hi-C matrix of observed vs. expected signal (obtained from a distance-normalized contact matrix) ratio, and used the sign of the first eigenvector to denote positive and negative parts of the resulting plot as A and B compartments, respectively. Many genomic studies have indicated that chromatin compartments are correlated with chromatin states, such as gene density, DNA accessibility, GC content, replication timing, and histone marks. Therefore, type A compartments are more specifically defined to represent the gene-dense regions of euchromatin, while type B compartments represent heterochromatic regions with less gene activities. Overall, chromatin compartments offer insights on the general organization principles of the genome of interest.
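A compact restatement of the 2009 eigenvector procedure, assuming empty bins were removed beforehand; note that the sign of the eigenvector is arbitrary and is oriented in practice using GC content or gene density:

```python
import numpy as np

def compartment_labels(matrix, expected):
    """Observed/expected normalisation, Pearson correlation matrix,
    then the sign of the leading eigenvector splits A from B."""
    n = len(matrix)
    dist = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    oe = matrix / np.maximum(expected, 1e-12)[dist]   # observed/expected
    corr = np.corrcoef(oe)
    eigvals, eigvecs = np.linalg.eigh(corr)
    ev1 = eigvecs[:, np.argmax(eigvals)]   # first principal eigenvector
    return np.where(ev1 >= 0, "A", "B")
```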
More and more bioinformatics tools capable of performing compartment calling have been developed over the past decade, including HOMER, the HiTC R package, and CscoreTool. Although each has its own differences and optimizations relative to the original 2009 approach, their base protocols still rely on principal component analysis.
4. Topologically associating domains (TADs)
TADs are sub-Mb structures that may harbor gene-regulatory features, such as local promoter-enhancer interactions. More generally, TADs are considered as an emergent property of underlying biological mechanisms, which defines TADs as loop extrusions, compartmentalizations, or any dynamic genomic pattern rather than a static structural feature of the genome. Thus, TADs represent regulatory microenvironments and usually show up on a Hi-C map as blocks of highly self-interacting regions in which interaction frequencies within the region are significantly higher than interaction frequencies between two adjacent regions. In Hi-C interaction matrices, TADs are square blocks of elevated interaction frequencies centred along the diagonal. However, this is merely an oversimplified description, and identifying the actual pattern requires much more statistical processing and estimation.
One approach to identify TADs was described by Dixon et al., where they first calculated (within some genomic range) the difference between the average upstream interactions and the average downstream interactions of each bin in the matrix. This difference was then transformed into a chi-squared statistic based on the Hidden Markov Model, and any sharp change in this chi-squared value, called the directionality index, will define the boundaries of TADs. Alternatively, one could simply take the ratio between average upstream and downstream interactions to define TAD boundaries, as did Naumova et al.
Another approach is to calculate the average interaction frequencies crossing over each bin, again within some predetermined genomic range. The resulting value is referred to as the insulation score and can be thought of as the average of a square sliding along the diagonal of the matrix (Crane et al.). This value is expected to be lower at TAD boundaries; thus, one can use standard statistical techniques to find local minima (boundaries), and define regions between consecutive boundaries to be TADs.
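The insulation-score approach translates into a short loop; the window size is given in bins and the value used here is arbitrary:

```python
import numpy as np

def insulation_score(matrix, window=10):
    """Mean signal in a window-by-window square slid along the diagonal;
    dips in this track mark candidate TAD boundaries."""
    n = len(matrix)
    score = np.full(n, np.nan)
    for i in range(window, n - window):
        score[i] = matrix[i - window:i, i + 1:i + 1 + window].mean()
    return score

def boundary_bins(score):
    """Local minima of the log2-normalised insulation track."""
    s = np.log2(score / np.nanmean(score))
    return [i for i in range(1, len(s) - 1) if s[i - 1] > s[i] < s[i + 1]]
```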
However, as is increasingly recognized today, TADs represent a hierarchical series of structures that cannot be fully characterized by one-dimensional scores given by the previous methods. The increased resolution available in newer datasets can now explicitly address TADs with multiscale analysis approaches. As first introduced by Armatus, resolution specific domains can be identified and a consensus set of domains conserved across resolutions can be calculated, which transforms the problem of TAD calling into the optimization of scoring functions based on their local interaction densities. Variations of this approach with different objective functions, such as Lavaburst, MrTADFinder, 3DNetMod, and Matryoshka, are also developed to achieve better computing performance on higher resolution datasets.
5. Point interactions
Biologically, regulatory interactions usually occur at a much smaller scale than TADs, and two genomic elements can activate/inhibit the expression of a gene over a distance as small as 1 kb. Therefore, point interactions are important in interpreting Hi-C maps and are expected to appear as local enrichments in contact probability. However, current methodologies for the identification of point interactions are all implicit in nature, in that they do not prescribe what a point interaction should look like. Instead, point interactions are identified as outliers with higher interaction frequencies than expected within the Hi-C matrix, given a background model that consists only of the strongest signals such as the distance-decay functions. The background model can be estimated and constructed using both local signal distributions and global approaches (i.e. chromosome-wide/genome-wide). Many of the aforementioned bioinformatics packages incorporate algorithms to identify point interactions. In short, the significance of each pairwise interaction is calculated, and significantly high outliers are corrected for multiple testing before being recognized as truly informative point interactions. It is helpful to complement identified point interactions with additional evidence, such as analysis of enrichment scores and biological replicates, to indicate that these interactions are indeed of biological significance.
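A deliberately naive caller illustrates the logic (local background, significance test, multiple-testing correction); real tools use far more careful neighbourhood definitions than this sketch:

```python
import numpy as np
from scipy.stats import poisson

def call_point_interactions(matrix, expected, alpha=0.05, flank=2):
    """Test each pixel against the larger of its local neighbourhood
    mean and the distance-decay expectation, under a Poisson model,
    with a Bonferroni correction over all tested pixels."""
    n = len(matrix)
    tests = []
    for i in range(flank, n - flank):
        for j in range(i + flank + 1, n - flank):
            local = matrix[i - flank:i + flank + 1, j - flank:j + flank + 1]
            background = max((local.sum() - matrix[i, j]) / (local.size - 1),
                             expected[j - i], 1e-9)
            p = poisson.sf(matrix[i, j] - 1, background)  # P(X >= observed)
            tests.append((i, j, p))
    return [(i, j) for i, j, p in tests if p * len(tests) < alpha]
```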
Uses
Development
1. Cell division
Hi-C can reveal chromatin conformation changes during cell division. In interphase, chromatin is generally loose and dynamic, so that transcription regulation and other regulatory activities can take place. Upon entering mitosis and cell division, chromatin becomes compactly folded into dense cylindrical chromosomes. Within the past five years, the development of single-cell Hi-C has enabled the depiction of the entire 3D structural landscape of chromatin and chromosomes throughout the cell cycle, and many studies have discovered that the identified genomic domains remain unchanged in interphase and are erased by silencing mechanisms when the cell enters mitosis. When mitotic division is completed and the cell re-enters interphase, chromatin 3D structures are re-established and transcription regulation is restored.
2. Transcription regulation and fate determination
It has been suspected that the differentiation of embryonic stem cells (ESCs) and induced pluripotent stem cells (iPSCs) into various mature cell lineages is accompanied by global changes in chromosomal structures and consequently interaction dynamics to allow for the regulation of transcriptional activation/silencing. Standard Hi-C can be used to investigate this research question.
In 2015, Dixon et al. applied standard Hi-C to capture global 3D dynamics in human ESCs during their differentiation into high five cells. Due to the ability of Hi-C to depict dynamic interactions in differentiation-related TADs, the researchers discovered increases in the number of DHS sites, CTCF binding ability, active histone modifications, and target gene expressions within these TADs of interest, and found significant participation of major pluripotency factors such as OCT4, NANOG, and SOX2 in the interaction network during somatic cell reprogramming. Since then, Hi-C has been recognized as one of the standard methods to probe for transcriptional regulatory activities, and has confirmed that chromosome architecture is closely related to cell fate.
3. Growth and development
Mammalian growth and development starts with the fertilization of the oocyte by sperm, followed by the zygote stage; the 2-cell, 4-cell, and 8-cell stages; the blastocyst stage; and finally the embryo stage. Hi-C has made it possible to explore the comprehensive genomic architecture during growth and development, as both sis-Hi-C and in situ Hi-C have reported that TADs and genomic A and B compartments are not obviously present and appear to be less well structured in oocytes. These structural features of the chromatin are only gradually established after fertilization, from weaker to cleaner and more frequent signals, as developmental stages progress.
Genome evolution
As data on 3D genome structures have become more prevalent in recent years, Hi-C has begun to be used as a means to track evolutionary structural features and changes. Genomic single nucleotide polymorphisms (SNPs) and TADs are typically conserved across species, along with the CTCF factor in chromatin domain evolution. Other factors, however, have been revealed by Hi-C techniques to undergo structural evolution in 3D architecture. These include codon usage frequency similarity (CUFS), paralog gene co-regulation, and spatially co-evolving orthologous modules (SCOMs). Regarding large-scale domain evolution, chromosomal translocations, syntenic regions, and genomic rearrangement regions were all relatively conserved. These findings imply that Hi-C technologies are capable of providing an alternative point of view on the eukaryotic tree of life.
Cancer
Several studies have employed Hi-C to describe and study chromatin architecture in different cancers and its impact on disease pathogenesis. Kloetgen et al. used in situ Hi-C to study T cell acute lymphoblastic leukemia (T-ALL) and found a TAD fusion event that removed a CTCF insulation site, allowing the promoter of the oncogene MYC to interact directly with a distal super-enhancer. Using in situ Hi-C, Fang et al. have also shown T-ALL-specific gains or losses of chromatin insulation, which alter the strength of the genome's TAD architecture. Low-C has been used to map the chromatin structure of primary B cells from a diffuse large B-cell lymphoma patient and revealed high chromosomal structural variation between the patient's and healthy B cells. Overall, the application of Hi-C and its variants in cancer research provides unique insight into the molecular underpinnings of the driving factors of cell abnormality. It can help explain biological phenomena (such as high MYC expression in T-ALL) and aid drug development targeting mechanisms unique to cancerous cells.
References
Genomics techniques | Hi-C (genomic analysis technique) | [
"Chemistry",
"Biology"
] | 9,874 | [
"Genetics techniques",
"Genomics techniques",
"Molecular biology techniques"
] |
68,675,721 | https://en.wikipedia.org/wiki/Thermal%20laser%20epitaxy | Thermal laser epitaxy (TLE) is a physical vapor deposition technique that utilizes irradiation from continuous-wave lasers to heat sources locally for growing films on a substrate. This technique can be performed under ultra-high vacuum pressure or in the presence of a background atmosphere, such as ozone, to deposit oxide films.
TLE operates at power densities between 10⁴ and 10⁶ W/cm², which results in evaporation or sublimation of the source material, with no plasma or high-energy particle species being produced. Despite operating at comparatively low power densities, TLE is capable of depositing many materials with low vapor pressures, including refractory metals, a process that is challenging to perform with molecular beam epitaxy.
Physical process
TLE uses continuous-wave lasers (typically with a wavelength of around 1000 nm) located outside the vacuum chamber to heat sources of material in order to generate a flux of vapor via evaporation or sublimation. Owing to the localized nature of the heat induced by the laser, a portion of the source may be transformed into a liquid state while the rest remains solid, such that the source acts as its own crucible. The strong absorption of light and the small diameter of the laser beam cause the laser-induced heat to be highly localized, which can also have the effect of confining the heat to the axis of the source. The absorption corresponds to a typical photon penetration depth on the order of 2 nm due to the high absorption coefficients of α ~ 10⁵ cm⁻¹ of many materials. Heat loss via conduction and radiation further localizes the high-temperature region close to the irradiated surface of the source. The localized character of the heating enables many materials to be grown by TLE from freestanding sources without a crucible. Owing to the direct transfer of energy from the laser to the source, TLE is more efficient than other techniques such as thermal evaporation and molecular beam epitaxy, which typically rely on wire-based Joule heaters to reach high temperatures.
By heating the source, a flux of vapor is produced, the pressure of which frequently has an approximately exponential relation to temperature. The vapor is then deposited onto a laser-heated substrate. The very high substrate temperatures achievable by laser heating allow the use of adsorption-controlled growth modes, similar to molecular beam epitaxy, ensuring precise control of the stoichiometry and temperature of the deposited film. This precise control is valuable for growing thin-film heterostructures of complex materials, such as high-Tc superconductors. By positioning all lasers outside of the evaporation chamber, contamination can be reduced compared to using in situ heaters, resulting in highly pure deposited films.
The deposition rate of the vapor impinging upon the substrate is controlled by adjusting the power of the incident source laser. The deposition rate frequently increases exponentially with source temperature, which in turn increases linearly with incident laser power. Stability in the deposition rate may be achieved by continuously moving the laser beam around the source, while compensating for any coating of any laser optics inside the TLE chamber.
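As an illustration of this control loop, the following sketch (with hypothetical constants, not values from the article) shows how a linear power-to-temperature relation combined with an exponential vapor-pressure law yields a deposition rate that rises exponentially with laser power:

```python
import numpy as np

# Illustrative sketch, not from the article: all constants
# (t0, k, p0, e_a) are hypothetical placeholders.

K_B = 1.380649e-23                        # Boltzmann constant, J/K

def source_temperature(laser_power_w, t0=300.0, k=1.5):
    """Source temperature assumed to rise linearly with laser power."""
    return t0 + k * laser_power_w

def vapor_pressure(temp_k, p0=1e11, e_a=5e-19):
    """Clausius-Clapeyron-like law: p = p0 * exp(-E_a / (k_B * T))."""
    return p0 * np.exp(-e_a / (K_B * temp_k))

def deposition_rate(laser_power_w):
    """Rate taken proportional to the Hertz-Knudsen flux, p / sqrt(T)."""
    t = source_temperature(laser_power_w)
    return vapor_pressure(t) / np.sqrt(t)

for p in (200.0, 400.0, 600.0):
    print(f"P = {p:5.0f} W -> relative rate {deposition_rate(p):.3e}")
```

Doubling or tripling the laser power in this toy model changes the relative rate by many orders of magnitude, which is why the incident power is the natural control knob for the flux.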
The gas in the chamber can be incorporated into the deposited film. With the addition of an oxygen or ozone atmosphere, oxide films can readily be grown with TLE at pressures up to 10⁻² hPa.
History
Shortly after the invention of the laser by Theodore Maiman in 1960, it was quickly recognized that a laser could act as a point source to evaporate source material in a vacuum chamber for fabricating thin films. In 1965, Smith and Turner succeeded in depositing thin films using a ruby laser, after which Groh deposited thin films using a continuous-wave CO2 laser in 1968. Further work demonstrated that laser-induced evaporation is an effective way to deposit dielectric and semiconductor films. However, issues occurred with regard to stoichiometry and the uniformity of the deposited films, thus diminishing their quality compared to films deposited by other techniques. Experiments to investigate the deposition of thin films using a pulsed laser at high power densities laid the foundation for pulsed laser deposition, an extremely successful growth technique that is widely used today.
Experiments utilizing continuous-wave lasers continued to be performed throughout the latter half of the twentieth century, highlighting the many advantages of continuous-wave laser evaporation including low power densities, which can reduce surface damage to sensitive films. It proved challenging to achieve congruent evaporation from compound sources using continuous-wave lasers, and film deposition was typically limited to sources with high vapor pressures due to the low continuous wave power densities available.
In 2019, the evaporation of sources using continuous-wave lasers was rediscovered at the Max Planck Institute for Solid State Research and dubbed "thermal laser epitaxy". This new technique uses elemental sources illuminated by high-power continuous-wave lasers (typically with peak powers around 1 kW at a wavelength of 1000 nm), thus allowing the deposition of low-vapor-pressure materials such as carbon and tungsten while avoiding issues with congruent evaporation from compound sources.
References
External links
Thermal Laser Epitaxy - Max Planck Institute for Solid State Research
Physical vapor deposition techniques
Thin film deposition
Semiconductor device fabrication
Crystallography
Methods of crystal growth | Thermal laser epitaxy | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,079 | [
"Thin film deposition",
"Microtechnology",
"Methods of crystal growth",
"Coatings",
"Thin films",
"Materials science",
"Semiconductor device fabrication",
"Crystallography",
"Condensed matter physics",
"Planes (geometry)",
"Solid state engineering"
] |
68,679,499 | https://en.wikipedia.org/wiki/Ms2%20%28software%29 | ms2 is a non-commercial molecular simulation program. It comprises both molecular dynamics and Monte Carlo simulation algorithms. ms2 is designed for the calculation of thermodynamic properties of fluids. A large number of thermodynamic properties can be readily computed using ms2, e.g. phase equilibrium, transport and caloric properties. ms2 is limited to homogeneous state simulations.
Features
ms2 contains two molecular simulation techniques: molecular dynamics (MD) and Monte Carlo (MC). ms2 supports the calculation of vapor-liquid equilibria of pure components as well as multi-component mixtures. Different phase equilibrium calculation methods are implemented in ms2. Furthermore, ms2 is capable of sampling various classical ensembles such as NpT, NVE, NVT and NpH. To evaluate the chemical potential, Widom's test molecule method and thermodynamic integration are implemented. Algorithms for the sampling of transport properties are also implemented in ms2: transport properties are determined by equilibrium MD simulations following the Green-Kubo formalism and the Einstein formalism, as in the sketch below.
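As an illustration of the Green-Kubo formalism that such codes implement internally (a generic sketch, not ms2 source code; the synthetic velocities merely stand in for MD trajectory output), the self-diffusion coefficient follows from integrating the velocity autocorrelation function, D = (1/3) ∫ ⟨v(0)·v(t)⟩ dt:

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_particles, dt = 5000, 64, 0.002        # hypothetical reduced units
vel = rng.normal(size=(n_steps, n_particles, 3))  # placeholder trajectory

def vacf(v, max_lag):
    """Velocity autocorrelation <v(0).v(t)>, averaged over particles and time origins."""
    c = np.empty(max_lag)
    for lag in range(max_lag):
        c[lag] = np.mean(np.sum(v[: len(v) - lag] * v[lag:], axis=-1))
    return c

c = vacf(vel, max_lag=200)
# Trapezoidal integration of the VACF gives the Green-Kubo integral.
d_self = (c[0] / 2 + c[1:-1].sum() + c[-1] / 2) * dt / 3.0
print(f"D (reduced units) = {d_self:.4f}")
```

The Einstein route computes the same coefficient from the slope of the mean squared displacement; in equilibrium MD the two estimates should agree within statistical error.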
Applications
ms2 has been frequently used for predicting thermophysical properties of fluids for chemical engineering applications as well as for scientific computing and soft matter physics. It has been used for modelling both model fluids and real substances. A large number of interaction potentials are implemented in ms2, e.g. the Lennard-Jones potential, the Mie potential, electrostatic interactions (point charges, point dipoles and point quadrupoles), and external forces. Force fields from databases such as the MolMod database can readily be used in ms2.
See also
Comparison of software for molecular mechanics modeling
List of Monte Carlo simulation software
List of free and open-source software packages
References
External links
Molecular dynamics software
Computational chemistry
Molecular modelling software
Molecular dynamics
Force fields (chemistry) | Ms2 (software) | [
"Physics",
"Chemistry"
] | 380 | [
"Molecular dynamics software",
"Molecular modelling software",
"Molecular physics",
"Computational chemistry software",
"Computational physics",
"Molecular dynamics",
"Computational chemistry",
"Theoretical chemistry",
"Molecular modelling",
"Force fields (chemistry)"
] |
68,681,251 | https://en.wikipedia.org/wiki/Phosphide%20carbide | Phosphide carbides or carbide phosphides are compounds containing anions composed of carbide (C4−) and phosphide (P3−). They can be considered as mixed anion compounds. Related compounds include phosphide silicides, germanide phosphides, arsenide carbides, nitride carbides and silicide carbides.
In light rare earth phosphide carbides, the ethenide ion [C=C]4− exists. In these compounds, P and C2 are disordered and randomly substitute for each other, even though their charges differ. Bonds between carbon and phosphorus are weak, so these kinds of compounds do not contain a C–P bond; instead, carbon and phosphorus occur separately.
Phosphide carbides can be made by heating a metal, red phosphorus and graphite powder together in a carbon crucible under an inert gas atmosphere.
List
References
Phosphides
Carbides
Mixed anion compounds | Phosphide carbide | [
"Physics",
"Chemistry"
] | 218 | [
"Ions",
"Matter",
"Mixed anion compounds"
] |
57,721,868 | https://en.wikipedia.org/wiki/Mass-flux%20fraction | The mass-flux fraction (or Hirschfelder-Curtiss variable or Kármán-Penner variable) is the ratio of the mass flux of a particular chemical species to the total mass flux of a gaseous mixture. It includes both the convective mass flux and the diffusional mass flux. It was introduced by Joseph O. Hirschfelder and Charles F. Curtiss in 1948 and later by Theodore von Kármán and Sol Penner in 1954. The mass-flux fraction of a species i is defined as
\epsilon_i = \frac{\rho_i (v + V_i)}{\rho v} = \frac{Y_i (v + V_i)}{v},
where
Y_i = \rho_i/\rho is the mass fraction
v is the mass average velocity of the gaseous mixture
V_i is the average velocity with which the species i diffuses relative to v
\rho_i is the density of species i
\rho is the gas density.
It satisfies the identity
\sum_{i=1}^{N} \epsilon_i = 1,
similar to the mass fraction, but the mass-flux fraction can take both positive and negative values. This variable is used in steady, one-dimensional combustion problems in place of the mass fraction. For one-dimensional (x direction) steady flows, the conservation equation for the mass-flux fraction reduces to
\rho v \, \frac{d\epsilon_i}{dx} = w_i,
where w_i is the mass production rate of species i.
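As a worked illustration (a sketch under assumptions: the production-rate profile w_i(x) below is hypothetical, chosen only to mimic a thin reaction zone), the one-dimensional steady equation can be integrated directly, since the mass flux ρv is constant:

```python
import numpy as np

mass_flux = 1.2                                   # rho*v, kg m^-2 s^-1 (hypothetical)
x = np.linspace(0.0, 0.01, 1001)                  # 1 cm domain
dx = x[1] - x[0]
w = 50.0 * np.exp(-((x - 0.005) / 0.001) ** 2)    # production rate, kg m^-3 s^-1

eps0 = 0.2                                        # inflow mass-flux fraction
# Trapezoid-rule cumulative integral of w_i / (rho*v).
deps = np.cumsum((w[:-1] + w[1:]) / 2.0) * dx / mass_flux
eps = np.concatenate(([eps0], eps0 + deps))
print(f"eps_i rises from {eps[0]:.3f} to {eps[-1]:.3f} across the reaction zone")
```

The mass-flux fraction changes only where the species is produced or consumed, which is what makes it a convenient working variable in flame-structure analyses.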
References
Chemical properties
Dimensionless numbers of chemistry
Combustion | Mass-flux fraction | [
"Chemistry"
] | 229 | [
"Combustion",
"Dimensionless numbers of chemistry",
"nan"
] |
57,724,562 | https://en.wikipedia.org/wiki/Universal%20dielectric%20response | In physics and electrical engineering, the universal dielectric response, or UDR, refers to the observed emergent behaviour of the dielectric properties exhibited by diverse solid state systems. In particular, this widely observed response involves power law scaling of dielectric properties with frequency under conditions of alternating current, AC. The UDR was first defined in a landmark article by A. K. Jonscher published in Nature in 1977, which attributed its origins to the dominance of many-body interactions in such systems and to their equivalence to RC networks.
The universal dielectric response manifests in the variation of AC conductivity with frequency and is most often observed in complex systems consisting of multiple phases of similar or dissimilar materials. Such systems, which can be called heterogeneous or composite materials, can be described from a dielectric perspective as a large network consisting of resistor and capacitor elements, known also as an RC network. At low and high frequencies, the dielectric response of heterogeneous materials is governed by percolation pathways. If a heterogeneous material is represented by a network in which more than 50% of the elements are capacitors, percolation through capacitor elements will occur. This percolation results in conductivity at high and low frequencies that is directly proportional to frequency. Conversely, if the fraction of capacitor elements in the representative RC network (Pc) is lower than 0.5, dielectric behavior at low and high frequency regimes is independent of frequency. At intermediate frequencies, a very broad range of heterogeneous materials show a well-defined emergent region, in which power law correlation of admittance to frequency is observed. The power law emergent region is the key feature of the UDR. In materials or systems exhibiting UDR, the overall dielectric response from high to low frequencies is symmetrical, being centered at the middle point of the emergent region, which occurs in equivalent RC networks at a frequency of ω = 1/(RC). In the power law emergent region, the admittance of the overall system follows the general power law proportionality Y ∝ ω^α, where the power law exponent α can be approximated by the fraction of capacitors in the equivalent RC network of the system, α ≅ Pc.
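The RC-network picture can be checked numerically. The following is a minimal sketch under assumptions (a small square lattice, unit R and C, and a toy driving geometry; this is not the construction of the original article): bonds are randomly capacitors with probability Pc or resistors otherwise, the network is driven between its left and right edges, and the fitted slope of log|Y| against log ω comes out close to Pc:

```python
import numpy as np

rng = np.random.default_rng(1)
n, pc, R, C = 12, 0.5, 1.0, 1.0

nodes = np.arange(n * n).reshape(n, n)
bonds = []                    # (node a, node b, is_capacitor)
for i in range(n):
    for j in range(n):
        if j + 1 < n: bonds.append((nodes[i, j], nodes[i, j + 1], rng.random() < pc))
        if i + 1 < n: bonds.append((nodes[i, j], nodes[i + 1, j], rng.random() < pc))

left, right = nodes[:, 0], nodes[:, -1]

def admittance(omega):
    y = np.zeros((n * n, n * n), dtype=complex)   # complex nodal Laplacian
    for a, b, is_cap in bonds:
        g = 1j * omega * C if is_cap else 1.0 / R
        y[a, a] += g; y[b, b] += g; y[a, b] -= g; y[b, a] -= g
    v = np.zeros(n * n, dtype=complex)
    v[left] = 1.0                                 # fixed boundary potentials, right edge at 0
    free = np.setdiff1d(np.arange(n * n), np.concatenate([left, right]))
    rhs = -y[np.ix_(free, left)].sum(axis=1)      # move the known V = 1 terms to the RHS
    v[free] = np.linalg.solve(y[np.ix_(free, free)], rhs)
    return (y[left] @ v).sum()                    # total current injected at the left edge

freqs = np.logspace(-3, 3, 13)
ys = np.array([admittance(w) for w in freqs])
slope = np.polyfit(np.log(freqs), np.log(np.abs(ys)), 1)[0]
print(f"fitted power-law exponent ~ {slope:.2f} (expect ~ pc = {pc})")
```

At pc = 0.5 the emergent power-law region spans the whole frequency window, so a single fitted slope suffices; away from 0.5 the power law only holds between the low- and high-frequency percolation regimes.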
Significance of the UDR
The power law scaling of dielectric properties with frequency is valuable in interpreting impedance spectroscopy data towards the characterisation of responses in emerging ferroelectric and multiferroic materials.
References
Dielectrics
Electric and magnetic fields in matter
Condensed matter physics
Electrical engineering
Electronic engineering | Universal dielectric response | [
"Physics",
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 524 | [
"Computer engineering",
"Phases of matter",
"Electric and magnetic fields in matter",
"Materials science",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Electrical engineering",
"Dielectrics",
"Matter"
] |
57,726,150 | https://en.wikipedia.org/wiki/C9H12O | {{DISPLAYTITLE:C9H12O}}
The molecular formula C9H12O (molar mass: 136.19 g/mol, exact mass: 136.0888 u) may refer to:
Mesitol (2,4,6-trimethylphenol)
2-Phenyl-2-propanol
2,3,6-Trimethylphenol
Molecular formulas | C9H12O | [
"Physics",
"Chemistry"
] | 92 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
57,726,619 | https://en.wikipedia.org/wiki/H-matrix%20%28iterative%20method%29 | In mathematics, an H-matrix is a matrix whose comparison matrix is an M-matrix. It is useful in iterative methods.
Definition: Let A = (a_ij) be an n × n complex matrix. Then the comparison matrix M(A) of the complex matrix A is defined as M(A) = (m_ij), where m_ii = |a_ii| for all i and m_ij = −|a_ij| for all i ≠ j. If M(A) is an M-matrix, then A is an H-matrix.
An invertible H-matrix guarantees the convergence of Gauss–Seidel iterative methods.
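A minimal sketch of this definition (generic NumPy code, not tied to any particular library's H-matrix routines): build M(A) and test it with the standard criterion that a Z-matrix is a nonsingular M-matrix exactly when its inverse exists and is entrywise nonnegative:

```python
import numpy as np

def comparison_matrix(a):
    """M(A): |a_ii| on the diagonal, -|a_ij| off the diagonal."""
    m = -np.abs(a)
    np.fill_diagonal(m, np.abs(np.diag(a)))
    return m

def is_h_matrix(a, tol=1e-12):
    m = comparison_matrix(a)
    try:
        # Nonsingular M-matrix test: inverse of the Z-matrix is nonnegative.
        return bool(np.all(np.linalg.inv(m) >= -tol))
    except np.linalg.LinAlgError:        # singular comparison matrix
        return False

a = np.array([[ 4.0, -1.0,  0.5],
              [ 1.0,  5.0, -2.0],
              [-0.5,  1.0,  3.0]])
print(is_h_matrix(a))  # True: A is strictly diagonally dominant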
See also
Hurwitz-stable matrix
P-matrix
Perron–Frobenius theorem
Z-matrix
L-matrix
M-matrix
Comparison matrix
References
Matrices | H-matrix (iterative method) | [
"Mathematics"
] | 126 | [
"Matrices (mathematics)",
"Mathematical objects",
"Matrix stubs"
] |
71,594,556 | https://en.wikipedia.org/wiki/Lattice%20Boltzmann%20methods%20for%20solids | The Lattice Boltzmann methods for solids (LBMS) are a set of methods for solving partial differential equations (PDE) in solid mechanics. The methods use a discretization of the Boltzmann equation, and their use is known as the lattice Boltzmann methods for solids.
LBMS methods are categorized by their reliance on:
Vectorial distributions
Wave solvers
Force tuning
The LBMS subset remains highly challenging from a computational as much as from a theoretical point of view. Solving solid equations within the LBM framework is still a very active area of research. Success in solving solids shows that the Boltzmann equation is capable of describing solid motion as well as fluids and gases, thus unlocking complex physics such as fluid-structure interaction (FSI) in biomechanics.
Proposed insights
Vectorial distributions
The first attempts at LBMS used a Boltzmann-like equation for force (vectorial) distributions. The approach requires more computational memory, but results have been obtained in fracture and solid cracking.
Wave solvers
Another approach consists in using LBM as acoustic solvers to capture waves propagation in solids.
Force tuning
Introduction
This idea consists of introducing a modified version of the forcing term F (or of the equilibrium distribution) into the LBM as a stress divergence force. This force is considered space-time dependent and contains the solid properties:
F(x, t) = \nabla \cdot \sigma + \rho g,
where \sigma denotes the Cauchy stress tensor, and g and \rho are respectively the gravity vector and the solid matter density.
The stress tensor is usually computed across the lattice using finite difference schemes, as in the sketch below.
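A minimal sketch of that evaluation (an assumed discretisation with a hypothetical smooth stress field standing in for solver output, not a complete LBM implementation): the divergence of σ is taken with central differences on the lattice and combined with the gravity term:

```python
import numpy as np

nx, ny, h = 32, 32, 1.0                 # lattice size and spacing
rho = 1000.0                            # solid matter density, kg/m^3 (placeholder)
g = np.array([0.0, -9.81])              # gravity vector, m/s^2

x, y = np.meshgrid(np.arange(nx) * h, np.arange(ny) * h, indexing="ij")
# Hypothetical smooth Cauchy stress components used only for illustration;
# a real solver would obtain sigma from its constitutive law.
sxx = 1e3 * np.sin(x / 5.0)
syy = 1e3 * np.cos(y / 5.0)
sxy = 1e2 * x * y / (nx * ny)

ddx = lambda f: np.gradient(f, h, axis=0)   # central finite differences
ddy = lambda f: np.gradient(f, h, axis=1)

fx = ddx(sxx) + ddy(sxy) + rho * g[0]   # (div sigma)_x + rho g_x
fy = ddx(sxy) + ddy(syy) + rho * g[1]   # (div sigma)_y + rho g_y
print(fx.shape, fy.shape)               # body-force field fed into the LBM forcing term
```

The resulting (fx, fy) field is what the forcing term injects into the lattice Boltzmann update at each node and time step.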
Some results
Force tuning has recently proven its efficiency, with a maximum error of 5% in comparison with standard finite element solvers in mechanics. Accurate validation of results can also be a tedious task since these methods are very different; common issues are:
Meshes or lattice discretization
Location of computed fields at elements or nodes
Hidden information in software used for finite element analysis comparison
Non-linear materials
Steady state convergence for LBMS
Notes
References
Biomechanics
Fluid dynamics
Thermodynamics | Lattice Boltzmann methods for solids | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 413 | [
"Biomechanics",
"Dynamical systems",
"Chemical engineering",
"Mechanics",
"Thermodynamics",
"Piping",
"Fluid dynamics"
] |
71,597,444 | https://en.wikipedia.org/wiki/Cambodian%20mat | A Cambodian mat also known as a kantael (Khmer: កន្ទេល) is a woven mat made from palm or reed in Cambodia. The Cambodian mat consists of an ordinary mat, below which are fixed pads of strongly packed cotton, with the help of a special loom. They are specific to the Khmer people.
History
Mats have been woven in Cambodia since Angkorian times, as evidenced by carvings on the bas-relief of Angkor Wat.
When the French missionary Charles-Émile Bouillevaux, after being the first Frenchman to discover Angkor Wat, traveled to the eastern bank of the Mekong and encountered the Bunong people, he considered it an honour to be invited to sit on a Cambodian mat.
During his exploration trip in the 1880s, the French anthropologist Edouard Maurel acknowledged that there was something unique to the Cambodian mat, which he took as evidence of the luxury of the once flourishing Khmer civilization.
Auguste Pavie during his exploration of Cambodia, noticed that the King of Cambodia himself could sit on this type of Cambodian mat.
At the end of the 19th century, the Cambodian mat was seen as the model for all straw mats across Asia. Thus, French explorers in Vietnam referred to these straw mats as "Cambodian mats", while explorers in Yunnan described the beds of Chinese peasants as made of three planks on wooden trestles covered with rice straw and a Cambodian mat on top.
The French protectorate of Cambodia promoted the export of Cambodian mats. The Cambodian mat was promoted as "a fine and neat article" which attracted the attention of Japanese merchants at the Hanoi Exhibition in 1903; it was often sold in Saigon stuffed with kapok. Rather than Picot camp beds which were heavy and difficult to carry around in Indochina, the French colonialists recommended the use of Cambodian mats when travelling in French Indochina.
Unfortunately, woven plant mats have been largely supplanted by colored plastic mats imported from Thailand and Vietnam since 1981.
Since the beginning of the 21st century, weavers have learned how to dye and design patterns such as lanterns, pineapple eyes, grids, and strings.
In 2017, the French Cultural Center in Cambodia organized an itinerant exhibition called Mats and Tablecloths (Nattes et Nappes), which presented tablecloths and sitting mats as two distinctive elements of France and Cambodia respectively.
The Cambodia Sedge Mats Business Association (CSMA) was set up to work as a trade organisation and promote Cambodian mats on the national and international markets. As of 2022, the Cambodian mats remain widely popular within the country and natural mats are preferred to nylon mats.
Production
The region of Cambodia best-known for mat weaving is the Mekong floodplain, especially around Lvea Aem district. Mat making is usually a cottage industry, with craftswomen weaving while sitting on mats in their private homes. The most popular mats in Cambodia are made of mangrove fan palm. While rarer, Cambodian mats can also be made of wicker and rattan (tbanh kanchoeur) from dryandra trees. Reeds for mat making are usually grown on the edges of rice fields; when the water recedes from the lake behind the village during the dry season, weaving is done from January to May.
Cambodian mats can be made from a variety of sedges, rattan and leaves such as grey sedge, rice sedge, red nut sedge, cool mat, Calamus viminalis or Khmer rattan, mangrove fan palm, palm leaves, banana leaves, talipot palm leaves, sago palm leaves and water hyacinth.
Mangrove palm tree mats: kantael pa'au
The mangrove palm tree first needs to be cut and divided into three sections: one is the central spine, and the other two are the soft wings on both sides. The fiber is then split and flattened. The shell is peeled off, and only the soft thread remains. The threads are then dried in the sun for a whole day and collected in bundles in the shade. Water is sprayed regularly to prevent the fiber from drying out and twisting. The threads are then sorted, with the long threads set apart for weaving mats.
Red mats: kantael krahom
Red mats or kantael krahom are a kind of reed mat woven from the bark of the red nut sedge, known in Khmer as kravanh chruk. Craftsmen cut the reeds into small pieces of one meter in length before dyeing the fibers of the cuttings by dipping them in red, white, green or yellow according to the preferred color. Cambodian red mats were exported and sold in Vietnam at least since the 19th century. Red mats are usually woven with white reeds that are left undyed at one top side to identify the mat's orientation, as it would be inconvenient for the head to lie where the feet have trodden. Weaving these red mats is a secondary source of income for Cambodian farmers, who can add up to 2,000 US dollars to their yearly revenue.
Water hyacinth mats: kantael komplaok
In recent years, Khmer people have also made mats for tableware and sleeping from dried water hyacinth. In fact, the plant, despite its beauty, is fast-growing and often clogs waterways on the Tonle Sap. While its soft texture has made it popular, its durability is limited.
Use
Cambodian mats are an important piece of furniture in all Cambodian homes, where such furniture is usually limited. Traditionally, palm mats were used both as a sleeping mattress and as a tablecloth on which families sit while they share their meals.
Mats are commonly laid out for guests and are important building materials for homes, and they are often used as wedding gifts.
During religious ceremonies, Cambodian people do not usually sit on chairs or bare floors but rather on mat-covered floors.
While these Cambodian mats were for family use, they have become popular among urban Khmer people and foreign tourists for decoration.
Literature
The French author Claude Farrère often refers to the Cambodian mat in Les Petites Allées, Le Quadrille des Mers de Chine, and La Sonate à la Mer, as an exotic reference to the colonial fantasy, which can also be found in the novel Lélie, fumeuse d'opium, published under a pseudonym and illustrated with pin-up illustrations of nude and semi-nude women by Raphael Kirchner.
References
External links
Interior design
Khmer folklore
Straw products
Units of area | Cambodian mat | [
"Mathematics"
] | 1,304 | [
"Quantity",
"Units of area",
"Units of measurement"
] |