---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:8000
- loss:TripletLoss
base_model: intfloat/multilingual-e5-base
widget:
- source_sentence: >-
What is the main purpose of placing small pieces of metal in the skin
during Neuroreflexotherapy (NRT)?
A. To enhance skin healing
B. To interrupt neural pain processes
C. To increase blood circulation
D. To promote muscle growth
sentences:
- >-
The Office of the Chief of Military Security Affairs (OCMSA),
commonly referred to by its Burmese acronym Sa Ya Pha (စရဖ), is the
military intelligence agency of the Myanmar Armed Forces, tasked with
intelligence gathering. It was created to replace the Directorate of
Defence Services Intelligence (DDSI), which was disbanded in 2004.
OCMSA is charged with handling political issues, and played a central
role in monitoring the 2007 popular protests in Myanmar, coordinating
widespread arrests of protesters and their interrogation.
Human Rights Watch reported that as part of its interrogation process,
OCMSA uses sleep deprivation and condones the beating and kicking of
detainees until they are unconscious.
Notable former commanders of OCMSA include Vice President Lieutenant
General (Ret.) Myint Swe, Chief of General Staff (Army, Navy and
Airforce) General Mya Tun Oo and Union Minister for Home Affairs
Lieutenant General Kyaw Swe. As of September 2016, OCMSA is headed by
Lieutenant General Soe Htut. Brig.-Gen Tin Oo (no relation to Gen. Tin
Oo) was trained by the CIA on the Pacific island of Saipan and went on
to run one of the most feared and effective military intelligence spy
networks in Asia throughout the 1970s and ’80s.
Chiefs
See also
Bureau of Special Investigation
Special Intelligence Department
References
- >-
David Carruthers is a Canadian curler.
He is a 1998 Labatt Brier champion.
Teams
Personal life
In 2012, David Carruthers worked as an icemaker at both the Brampton and
Chinguacousy curling clubs in Brampton, Ontario. He retired in
November 2017.
He is married to Gail Carruthers. He resides in Brampton, Ontario.
References
External links
David Carruthers – Curling Canada Stats Archive
- >-
Neuroreflexotherapy (NRT) is a type of alternative medicine treatment
used for some cases of low back pain (LBP). Small pieces of metal are
placed just under the surface of the skin in the ear and back, and are
intended to interrupt the neural pain processes. In total, three
publications in the world medical literature are devoted to this method
[Kovacs FM, 1993, 1997, 2002]. All are published by the same first
author and claim almost absolute efficacy of the therapy and a lack of
effect of other treatments. Three reviews likewise repeat the definition
of the method and the claimed results.
Efficacy
A 2005 Cochrane review said that while some limited research reported
"surprising" results for the efficacy of this therapy in treating
nonspecific LBP, a lack of confirming research made it impossible to
reach general conclusions about its overall efficacy. A note from the
Co‐Editors of the review says that the conclusions would be more sound
"if similar evidence was available from RCTs conducted in other
countries, with other care providers and different researchers. The very
large positive response in the intervention group compared to the
placebo group is unusual for trials in chronic back pain. ... Therefore,
... there is no strong evidence that it will work as well outside the
specialty clinics in Spain."
Side effects
Negative skin reactions have been described including dermatitis and
skin infections.
Technique
In the procedure, small pieces of metal are placed in the skin of the
back and ear: surgical staples are placed through the outer layer of
skin, and burins are implanted just beneath the skin surface. The pieces
are either removed or fall out on their own between two weeks and three
months after implantation. The procedure is done on an outpatient basis,
and takes about one hour to complete. A 2005 review did not find reports
of pain or scarring associated with the procedure.
Mechanism of action
The pieces of metal are placed in locations believed to be in neural
pain transmission pathways in order to interrupt them. The locations
chosen are specific dermatomes in the back - regions served by a
particular nerve root - along with what Marlowe describes as associated
"referred tender points" in the ear. This combination of locations is
theorized to reduce pain by interfering with the neural pain
transmission and processing pathways, interrupting nerve signals that
tighten the lower back muscles, and reducing neurogenic inflammation.
The metal pieces in the lower back stimulate the release of peptides
that inhibit the generation of pain messages, and the ones in the ear
are theorized to activate pain-relieving mechanisms in the brain.
History
Several research works in neurophysiology and other fields of medicine
were carried out by Prof. Dr. René Kovacs in the second half of the 20th
century in France and Spain. These works led to the development of
Neuroreflexotherapy (NRT), which the Kovacs Foundation has continued
since 1986.
References
- source_sentence: >-
What is the year when Hygrophoropsis laevis was first described as new to
science?
A. 1980
B. 1985
C. 1990
D. 2000
sentences:
- >-
A dislon is a quantized field associated with the quantization of the
lattice displacement in crystalline solids. It is a localized collective
excitation of a crystal dislocation.
Description
Dislons are special quasiparticles that emerge from the quantization of
the lattice displacement field around a dislocation in a crystal. They
exhibit unique particle statistics depending on the dimension of
quantization. In one-dimensional quantization, dislons behave as bosonic
quasiparticles. However, in three-dimensional quantization, the
topological constraint of the dislocation leads to a breakdown of the
canonical commutation relation, resulting in the emergence of two
independent bosonic fields known as the d-field and f-field.
Interaction
Dislons interact with other particles such as electrons and phonons. In
the presence of multiple dislocations, the electron-dislon interaction
can affect the electrical conductivity of the system. The
distance-dependent interaction between electrons and dislocations leads
to oscillations in the electron self-energy away from the dislocation
core.
Applications
The study of dislons provides insights into various phenomena in
materials science, including the variation of superconducting transition
temperatures in dislocated crystals. Dislons play a role in
understanding the interaction between dislocations and phonons,
affecting thermal transport properties in the presence of dislocations.
- >-
Hygrophoropsis laevis is a species of fungus in the family
Hygrophoropsidaceae. Found in Malawi, it was described as new to science
in 1985.
References
External links
- >-
Excluding the northernmost districts, Kazakhstan consists of endorheic
basins, where rivers flow into one of the numerous lakes. The most
important drainage system is known as Yedisu, meaning "seven rivers" in
Turkic languages. Below is a list of the more important lakes, some of
which are shared (Caspian Sea, Lake Aral, Lake Aike, etc.) with the
neighbouring countries.
See also
Sor (geomorphology)
- source_sentence: >-
What is karyorrhexis primarily associated with in the context of cell
death?
A. Increased cell division
B. Destructive fragmentation of the nucleus
C. Enhancement of cellular functions
D. Normal cellular differentiation
sentences:
- >-
Xenon isotope geochemistry uses the abundance of xenon (Xe) isotopes and
total xenon to investigate how Xe has been generated, transported,
fractionated, and distributed in planetary systems. Xe has nine stable
or very long-lived isotopes. Radiogenic 129Xe and fissiogenic
131,132,134,136Xe isotopes are of special interest in geochemical
research. The radiogenic and fissiogenic properties can be used in
deciphering the early chronology of Earth. Elemental Xe in the
atmosphere is depleted and isotopically enriched in heavier isotopes
relative to estimated solar abundances. The depletion and heavy isotopic
enrichment can be explained by hydrodynamic escape to space that
occurred in Earth's early atmosphere. Differences in the Xe isotope
distribution between the deep mantle (from Ocean Island Basalts, or
OIBs), shallower Mid-ocean Ridge Basalts (MORBs), and the atmosphere can
be used to deduce Earth's history of formation and differentiation of
the solid Earth into layers.
Background
Xe is the heaviest noble gas in the Earth's atmosphere. It has seven
stable isotopes (126Xe, 128Xe, 129Xe, 130Xe, 131Xe, 132Xe, 134Xe) and two
isotopes (124Xe, 136Xe) with long-lived half-lives. Xe has four
synthetic radioisotopes with very short half-lives, usually less than
one month.
Xenon-129 can be used to examine the early history of the Earth. 129Xe
was derived from the extinct nuclide of iodine, iodine-129 or 129I (with
a half-life of 15.7 million years, or Myr), which can be used in
iodine-xenon (I-Xe) dating. The production of 129Xe stopped within about
100 Myr after the start of the Solar System because 129I became extinct.
In the modern atmosphere, about 6.8% of atmospheric 129Xe originated
from the decay of 129I in the first ~100 Myr of the Solar System's history,
i.e., during and immediately following Earth's accretion.
Fissiogenic Xe isotopes were generated mainly from the extinct nuclide,
plutonium-244 or 244Pu (half-life of 80 Myr), and also the extant
nuclide, uranium-238 or 238U (half-life of 4468 Myr). Spontaneous
fission of 238U has generated ~5% as much fissiogenic Xe as 244Pu. Pu
and U fission produce the four fissiogenic isotopes, 136Xe, 134Xe,
132Xe, and 131Xe in distinct proportions. A reservoir that remains an
entirely closed system over Earth's history has a ratio of Pu- to
U-derived fissiogenic Xe of ~27. Accordingly, the isotopic
composition of the fissiogenic Xe for a closed-system reservoir would
largely resemble that produced from pure 244Pu fission. Loss of Xe from
a reservoir after 244Pu becomes extinct (500 Myr) would lead to a
greater contribution of 238U fission to the fissiogenic Xe.
Notation
Differences in the abundance of isotopes among natural samples are
extremely small (almost always below 0.1% or 1 per mille). Nevertheless,
these very small differences can record meaningful geological processes.
To compare these tiny but meaningful differences, isotope abundances in
natural materials are often reported relative to isotope abundances in
designated standards, with the delta (δ) notation. The absolute values
of Xe isotopes are normalized to atmospheric 130Xe. Define
δiXe = [(iXe/130Xe)sample / (iXe/130Xe)air − 1] × 1000,
where i = 124, 126, 128, 129, 131, 132, 134, 136.
Applications
The age of Earth
Iodine-129 decays with a half-life of 15.7 Myr into 129Xe, resulting in
excess 129Xe in primitive meteorites relative to primordial Xe isotopic
compositions. The property of 129I can be used in radiometric
chronology. However, as detailed below, the age of Earth's formation
cannot be deduced directly from I-Xe dating. The major problem is the Xe
closure time, or the time when the early Earth system stopped gaining
substantial new material from space. When the Earth became closed for
the I-Xe system, Xe isotope evolution began to obey a simple radioactive
decay law as shown below and became predictable.
The principle of radiogenic chronology is: if at time t1 the quantity of
a radioisotope is P1 while at some previous time t0 this quantity was P0,
the interval between t1 and t0 is given by the law of radioactive decay
as
t1 − t0 = (1/λ) ln(P0/P1).
Here λ is the decay constant of the radioisotope, which is the
probability of decay per nucleus per unit time. The decay constant is
related to the half-life t1/2 by t1/2 = ln(2)/λ.
Calculations
The I-Xe system was first applied in 1975 to estimate the age of the
Earth. For all Xe isotopes, the initial isotope composition of iodine in
the Earth is given by
(129I/127I)Earth = (129I/127I)0 × exp(−λ·Δt),
where (129I/127I)Earth is the isotopic ratio of iodine at the time that
Earth primarily formed, (129I/127I)0 is the isotopic ratio of iodine at
the end of stellar nucleosynthesis, and Δt is the time interval between
the end of stellar nucleosynthesis and the formation of the Earth. The
estimated iodine-127
concentration in the Bulk Silicate Earth (BSE) (= crust + mantle
average) ranges from 7 to 10 parts per billion (ppb) by mass. If the BSE
represents Earth's chemical composition, the total 127I in the BSE
ranges from 2.26×10^17 to 3.23×10^17 moles. The meteorite Bjurböle is 4.56
billion years old with an initial 129I/127I ratio of 1.1×10−4, so an
equation can be derived as
(129I/127I)Earth = 1.1×10^−4 × exp(−λ·Δt′),
where Δt′ is the interval between the formation of the Earth and the
formation of the meteorite Bjurböle. Given the half-life of 129I of 15.7
Myr, and assuming that all the initial 129I has decayed to 129Xe, the
following equation can be derived:
129Xe(total) = (129I/127I)Earth × 127I(BSE).
129Xe in the modern atmosphere is 3.63×10^13 grams. The iodine content
for BSE lies between 10 and 12 ppb by mass. Consequently, the interval
should be about 108 Myr, i.e., the Xe-closure age is 108 Myr younger
than the age of
meteorite Bjurböle. The estimated Xe closure time was ~4.45 billion
years ago when the growing Earth started to retain Xe in its atmosphere,
which is coincident with ages derived from other geochronology dating
methods.
Xe closure age problem
There are some disputes about using I-Xe dating to estimate the Xe
closure time. First, in the early solar system, planetesimals collided
and grew into larger bodies that accreted to form the Earth. But there
could be a gap of 10 to 100 million years in Xe closure time between
the Earth's inner and outer regions. Some research supports the view
that 4.45 Ga represents the time when the last giant impactor
(Martian-size) hit Earth, but some regard it as the time of
core-mantle differentiation.
The second problem is that the total inventory of 129Xe on Earth may be
larger than that of the atmosphere since the lower mantle hadn't been
entirely mixed, which may lead to underestimating 129Xe in the
calculation. Last but not least, if Xe gas had been lost from the
atmosphere during a long interval of early Earth's history, the
chronology based on 129I-129Xe would need revising, since the 129Xe
inventory could have been greatly altered.
Loss of Earth's earliest atmosphere
Compared with solar xenon, Earth's atmospheric Xe is enriched in heavy
isotopes by 3 to 4% per atomic mass unit (amu). However, the total
abundance of xenon gas is depleted by one order of magnitude relative to
other noble gases. The elemental depletion while relative enrichment in
heavy isotopes is called the "Xenon paradox". A possible explanation is
that some processes can specifically diminish xenon rather than other
light noble gases (e.g. Krypton) and preferentially remove lighter Xe
isotopes.
In the last two decades, two categories of models have been proposed to
solve the xenon paradox. The first assumes that the Earth accreted from
porous planetesimals, and isotope fractionation happened due to
gravitational separation. However, this model cannot reproduce the
abundance and isotopic composition of light noble gases in the
atmosphere. The second category supposes a massive impact resulted in an
aerodynamic drag on heavier gases. Both the aerodynamic drag and the
downward gravitational effect lead to a mass-dependent loss of Xe gases.
But subsequent research suggested that Xe isotope mass fractionation
could not have been a rapid, single event.
Research published since 2018 on noble gases preserved in Archean
(3.5–3.0 Ga old) samples may provide a solution to the Xe paradox.
Isotopically mass fractionated Xe is found in tiny inclusions of ancient
seawater in Archean barite and hydrothermal quartz. The distribution of
Xe isotopes lies between the primordial solar and the modern atmospheric
Xe isotope patterns. The isotopic fractionation gradually increases
relative to the solar distribution as Earth evolves over its first 2
billion years. This two billion-year history of evolving Xe
fractionation coincides with early solar system conditions including
high solar extreme ultraviolet (EUV) radiation and large impacts that
could drive hydrogen escape to space at rates large enough to drag
xenon along. However, models of neutral xenon atoms
escaping cannot resolve the problem that other lighter noble gas
elements don't show the signal of depletion or mass-dependent
fractionation. For example, because Kr is lighter than Xe, Kr should
also have escaped in a neutral wind. Yet the isotopic distribution of
atmospheric Kr on Earth is significantly less fractionated than
atmospheric Xe.
A current explanation is that hydrodynamic escape can preferentially
remove lighter atmospheric species and lighter isotopes of Xe in the
form of charged ions instead of neutral atoms. Hydrogen is liberated
from hydrogen-bearing gases (H2 or CH4) by photolysis in the early Earth
atmosphere. Hydrogen is light and can be abundant at the top of the
atmosphere and escape. In the polar regions where there are open
magnetic field lines, hydrogen ions can drag ionized Xe out from the
atmosphere to space even though neutral Xe cannot escape.
The mechanism is summarized as below.
Xe can be directly photo-ionized by UV radiation energetic enough to
exceed xenon's ionization energy:
Xe + hν → Xe+ + e−
Or Xe can be ionized by charge exchange with ions derived from H2 and
CO2 through
Xe + H+ → Xe+ + H
Xe + CO2+ → Xe+ + CO2
where H+ and CO2+ can come from EUV dissociation. Xe+ is chemically
inert in H, H2, or CO2 atmospheres. As a result, Xe+ tends to persist.
These ions interact strongly with each other through the Coulomb force
and are finally dragged away by strong ancient polar wind. Isotope mass
fractionation accumulates as lighter isotopes of Xe+ preferentially
escape from the Earth. A preliminary model suggests that Xe can escape
in the Archean if the atmosphere contains >1% H2 or >0.5% methane.
When O2 levels increased in the atmosphere, Xe+ could exchange positive
charge with O2 through
Xe+ + O2 → Xe + O2+
From this reaction, Xe escape stopped when the atmosphere became
enriched in O2. As a result, Xe isotope fractionation may provide
insights into the long history of hydrogen escape that ended with the
Great Oxidation Event (GOE). Understanding Xe isotopes is promising to
reconstruct hydrogen or methane escape history that irreversibly
oxidized the Earth and drove biological evolution toward aerobic
ecological systems. Other factors, such as the hydrogen (or methane)
concentration becoming too low or EUV radiation from the aging Sun
becoming too weak, can also cease the hydrodynamic escape of Xe, but are
not mutually exclusive.
Organic hazes on Archean Earth could also scavenge isotopically heavy
Xe. Ionized Xe can be chemically incorporated into organic materials,
going through the terrestrial weathering cycle on the surface. The
trapped Xe is mass fractionated by about 1% per amu in heavier isotopes
but they may be released again and recover the original unfractionated
composition, making them not sufficient to totally resolve Xe paradox.
Comparison between Kr and Xe in the atmosphere
Observed atmospheric Xe is depleted relative to chondritic meteorites by
a factor of 4 to 20 when compared to Kr. In contrast, the stable
isotopes of Kr are barely fractionated. This mechanism is unique to Xe
since Kr+ ions are quickly neutralized via charge exchange with
neutral gases whose ionization energies lie below that of Kr.
Therefore, Kr is rapidly returned to neutral and wouldn't be dragged
away by the charged ion wind in the polar region. Hence Kr is retained
in the atmosphere.
Relation with Mass-Independent Fractionation of Sulfur Isotopes (MIF-S)
The signal of mass-independent fractionation of sulfur isotopes, known
as MIF-S, correlates with the end of Xe isotope fractionation. During
the Great Oxidation Event (GOE), the ozone layer formed when O2 rose,
accounting for the end of the MIF-S signature. The disappearance of the
MIF-S signal has been regarded as changing the redox ratio of Earth's
surface reservoirs. However, potential memory effects of MIF-S due to
oxidative weathering can lead to large uncertainty on the process and
chronology of GOE. Compared to the MIF-S signals, hydrodynamic escape of
Xe is not affected by the ozone formation and may be even more sensitive
to O2 availability, promising to provide more details about the
oxidation history of Earth.
Xe Isotopes as mantle tracers
Xe isotopes are also promising in tracing mantle dynamics in Earth's
evolution. The first explicit recognition of non-atmospheric Xe in
terrestrial samples came from the analysis of CO2-well gas in New
Mexico, displaying an excess of 129I-derived or primitive source 129Xe
and high content in 131-136Xe due to the decay of 238U. At present, the
excess of 129Xe and 131-136Xe has been widely observed in mid-ocean
ridge basalt (MORBs) and Oceanic island basalt (OIBs). Because 136Xe
receives more fissiogenic contribution than other heavy Xe isotopes,
129Xe (decay of 129I) and 136Xe are usually normalized to 130Xe when
discussing Xe isotope trends of different mantle sources. MORBs'
129Xe/130Xe and 136Xe/130Xe ratios lie on a trend from atmospheric
ratios to higher values, and appear to be contaminated by air. Oceanic
island basalt (OIB) data lie lower than those of MORBs, implying
different Xe sources for OIBs and MORBs.
The deviations in 129Xe/130Xe ratio between air and MORBs show that
mantle degassing occurred before 129I was extinct, otherwise 129Xe/130Xe
in the air would be the same as in the mantle. The differences in the
129Xe/130Xe ratio between MORBs and OIBs may indicate that the mantle
reservoirs are still not thoroughly mixed. The chemical differences
between OIBs and MORBs remain to be fully explained.
To obtain mantle Xe isotope ratios, it is necessary to remove
contamination by atmospheric Xe, which could start before 2.5 billion
years ago. Theoretically, the many non-radiogenic isotopic ratios
(124Xe/130Xe, 126Xe/130Xe, and 128Xe/130Xe) can be used to accurately
correct for atmospheric contamination if slight differences between air
and mantle can be precisely measured. Still, we cannot reach such
precision with current techniques.
Xe in other planets
Mars
On Mars, Xe isotopes in the present atmosphere are mass fractionated
relative to their primordial composition, according to in situ
measurements by the Curiosity rover at Gale Crater. Paleo-atmospheric
Xe trapped in
the Martian regolith breccia NWA 11220 is mass-dependently fractionated
relative to solar Xe by ~16.2‰. The extent of fractionation is
comparable for Mars and Earth, which may be compelling evidence that
hydrodynamic escape also occurred in the Mars history. The regolith
breccia NWA7084 and the >4 Ga orthopyroxene ALH84001 Martian meteorites
trap ancient Martian atmospheric gases with little if any Xe isotopic
fractionation relative to modern Martian atmospheric Xe. Alternative
models for Mars consider that the isotopic fractionation and escape of
Mars atmospheric Xe occurred very early in the planet's history and
ceased around a few hundred million years after planetary formation
rather than continuing throughout its evolutionary history.
Venus
Xe has not been detected in Venus's atmosphere. 132Xe has an upper limit
of 10 parts per billion by volume. The absence of data on the abundance
of Xe precludes us from evaluating whether the abundance of Xe is close
to solar values or whether there is a Xe paradox on Venus. The lack also
prevents us from checking whether the isotopic composition has been
mass-dependently fractionated, as in the case of Earth and Mars.
Jupiter
Jupiter's atmosphere has 2.5 ± 0.5 times the solar abundance value for
xenon, and similarly elevated argon and krypton (2.1 ± 0.5 and 2.7 ± 0.5
times solar values, respectively). These enrichments are due to these
elements having arrived at Jupiter in very cold (T < 30 K) icy
planetesimals.
- >-
Karyorrhexis (from Greek κάρυον karyon 'kernel, seed, nucleus' and ῥῆξις
rhexis 'bursting') is the destructive fragmentation of the nucleus of a
dying cell whereby its chromatin is distributed irregularly throughout
the cytoplasm. It is usually preceded by pyknosis and can occur as a
result of either programmed cell death (apoptosis), cellular senescence,
or necrosis.
In apoptosis, the cleavage of DNA is carried out by Ca2+- and
Mg2+-dependent endonucleases.
Overview
During apoptosis, a cell goes through a series of steps as it eventually
breaks down into apoptotic bodies, which undergo phagocytosis. In the
context of karyorrhexis, these steps are, in chronological order,
pyknosis (the irreversible condensation of chromatin), karyorrhexis
(fragmentation of the nucleus and condensed DNA) and karyolysis
(dissolution of the chromatin due to endonucleases).
Karyorrhexis involves the breakdown of the nuclear envelope and the
fragmentation of condensed chromatin due to endonucleases. In cases of
apoptosis, karyorrhexis ensures that nuclear fragments are quickly
removed by phagocytes. In necrosis, however, this step fails to progress
in an orderly manner, leaving behind fragmented cellular debris, further
contributing to tissue damage and inflammation.
Process of Nuclear Envelope Dissolution During Karyorrhexis
In the intrinsic pathway of apoptosis, environmental factors such as
oxidative stress signal pro-apoptotic members of the Bcl-2 protein
family to eventually break the outer membrane of the mitochondria. This
causes cytochrome c to leak into the cytoplasm, which causes a cascade
of events that eventually leads to the activation of several caspases.
One of these caspases, caspase-6, is known to cleave nuclear lamina
proteins such as lamin A/C, which hold the nuclear envelope together,
thereby aiding in the dissolution of the nuclear envelope.
Process of Condensed Chromatin Fragmentation During Karyorrhexis
In the process of karyorrhexis through apoptosis, DNA is fragmented in
an orderly manner by endonucleases such as caspase-activated DNase and
discrete nucleosomal units are formed. This is because the DNA has
already been condensed during pyknosis, meaning it has been wrapped
around histones in an organized manner, with around 180 base pairs per
nucleosome. The fragmented chromatin observed during karyorrhexis is made
when activated endonucleases cleave the DNA in between the histones,
resulting in orderly, discrete nucleosomal units. These short DNA
fragments left by the endonucleases can be identified on an agarose gel
during electrophoresis due to their unique “laddered” appearance,
allowing researchers to better identify cell death through apoptosis.
Nucleus Degradation in Other Forms of Cell Death
Karyorrhexis is associated with a controlled breakdown of the nuclear
envelope, typically by caspases that destroy lamins during apoptosis.
However, for other forms of cell death that are less controlled than
apoptosis, such as necrosis (unprogrammed cell death), the degradation
of the nucleus is caused by other factors. Unlike apoptotic cells,
necrotic cells are characterized by a ruptured plasma membrane, no
association with the activation of caspases, and, typically, an
inflammatory response. Because necrosis is a caspase-independent
process, the nucleus may stay intact during early stages of cell death
before being ripped open due to osmotic stress and other factors
associated with having a hole in the plasma membrane. A specialized form
of necrosis, called necroptosis, has a slightly more controlled
degradation of the nucleus. This process is dependent on calpain, which
is a protease that also degrades lamins, destabilizing the structure of
the nucleus. However, similar to necrosis, this process also involves a
ruptured plasma membrane, which contributes to the uncontrolled
degradation of the nuclear envelope.
Unlike karyorrhexis in apoptosis which produces apoptotic bodies to be
digested through phagocytosis, karyorrhexis in necroptosis leads to the
expulsion of cell contents into extracellular space to be digested
through pinocytosis.
Triggers and Mechanisms
The process of apoptosis, and thereby nucleus degradation through
karyorrhexis, is invoked by various physiological and pathological
stimuli. DNA damage, oxidative stress, hypoxia, and infections can
initiate signaling cascades leading to nuclear degradation through the
intrinsic pathway of apoptosis. The intrinsic pathway can also be
induced through ethanol, which activates apoptosis-related proteins such
as BAX and caspases. Additionally, if the death receptors on a cell’s
surface are activated, such as CD95, the activation of caspases and
nuclear envelope degradation can be triggered as well. In all of these
processes, caspases such as caspase-3 play a key role by cleaving
nuclear lamins and promoting chromatin fragmentation. In necrosis,
uncontrolled calcium influx and activation of proteases such as calpains
accelerate the process, highlighting the contrasting regulatory
mechanisms between necrotic and apoptotic karyorrhexis.
The level of DNA damage determines whether a cell undergoes apoptosis or
cell senescence. Cellular senescence refers to the cessation of the cell
cycle and thus cell division, which can be observed after a fixed amount
(approximately 50) of doublings in primary cells. One cause of cellular
senescence is DNA damage through the shortening of telomeres. This
causes a DNA damage response (DDR), which, if prolonged over a long
period of time, activates ATR and ATM damage kinases. These kinases
activate two more kinases, Chk1 and Chk2 kinases, which can alter the
cell in a few different ways. One of these ways is by activating a
transcription factor known as p53. If the level of DNA damage is mild,
the p53 will opt to activate CIP, which inhibits CDKs, arresting the
cell cycle. However, if the level of DNA damage is severe enough, p53
can trigger apoptotic pathways which lead to the dissolution of the
nuclear envelope through karyorrhexis.
Pathological Implications
Karyorrhexis is a prominent feature in conditions related to cell death,
such as ischemia and neurodegenerative disorders. It has been observed
during myocardial infarction and brain stroke, indicating its
contribution to cell death in acute stress responses. Moreover,
disorders such as placental vascular malperfusion have highlighted the
role of karyorrhexis in fetal demise, particularly when it disrupts
normal tissue homeostasis.
In cancer, apoptotic karyorrhexis plays a dual role. While it
facilitates controlled cell death, aiding in tumor suppression,
resistance to apoptosis in cancer cells results in evasion of this
pathway, promoting malignancy. Therapeutic interventions targeting
apoptotic pathways attempt to restore this phase of nuclear degradation
to induce tumor regression.
- "Prompt engineering is the process of structuring or crafting an instruction in order to produce the best possible output from a generative artificial intelligence (AI) model.\n\nA prompt is natural language text describing the task that an AI should perform. A prompt for a text-to-text language model can be a query, a command, or a longer statement including context, instructions, and conversation history. Prompt engineering may involve phrasing a query, specifying a style, choice of words and grammar, providing relevant context, or describing a character for the AI to mimic.\n\nWhen communicating with a text-to-image or a text-to-audio model, a typical prompt is a description of a desired output such as \"a high-quality photo of an astronaut riding a horse\" or \"Lo-fi slow BPM electro chill with organic samples\". Prompting a text-to-image model may involve adding, removing, or emphasizing words to achieve a desired subject, style, layout, lighting, and aesthetic.\n\nHistory \nIn 2018, researchers first proposed that all previously separate tasks in natural language processing (NLP) could be cast as a question-answering problem over a context. In addition, they trained a first single, joint, multi-task model that would answer any task-related question like \"What is the sentiment\" or \"Translate this sentence to German\" or \"Who is the president?\"\n\nThe AI boom saw an increase in the amount of \"prompting technique\" to get the model to output the desired outcome and avoid nonsensical output, a process characterized by trial-and-error. After the release of ChatGPT in 2022, prompt engineering was soon seen as an important business skill, albeit one with an uncertain economic future.\n\nA repository for prompts reported that over 2,000 public prompts for around 170 datasets were available in February 2022. In 2022, the chain-of-thought prompting technique was proposed by Google researchers. 
In 2023, several text-to-text and text-to-image prompt databases were made publicly available. The Personalized Image-Prompt (PIP) dataset, a generated image-text dataset that has been categorized by 3,115 users, was also made publicly available in 2024.\n\nText-to-text \nMultiple distinct prompt engineering techniques have been published.\n\nChain-of-thought \n\nAccording to Google Research, chain-of-thought (CoT) prompting is a technique that allows large language models (LLMs) to solve a problem as a series of intermediate steps before giving a final answer. In 2022, Google Brain reported that chain-of-thought prompting improves reasoning ability by inducing the model to answer a multi-step problem with steps of reasoning that mimic a train of thought. Chain-of-thought techniques were developed to help LLMs handle multi-step reasoning tasks, such as arithmetic or commonsense reasoning questions.\n\nFor example, given the question, \"Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?\", Google claims that a CoT prompt might induce the LLM to answer \"A: The cafeteria had 23 apples originally. They used 20 to make lunch. So they had 23 - 20 = 3. They bought 6 more apples, so they have 3 + 6 = 9. The answer is 9.\" When applied to PaLM, a 540 billion parameter language model, according to Google, CoT prompting significantly aided the model, allowing it to perform comparably with task-specific fine-tuned models on several tasks, achieving state-of-the-art results at the time on the GSM8K mathematical reasoning benchmark. It is possible to fine-tune models on CoT reasoning datasets to enhance this capability further and stimulate better interpretability.\n\nAn example of a CoT prompt:\n Q: {question}\n A: Let's think step by step.\n\nAs originally proposed by Google, each CoT prompt included a few Q&A examples. This made it a few-shot prompting technique. 
However, according to researchers at Google and the University of Tokyo, simply appending the words \"Let's think step-by-step\" has also proven effective, which makes CoT a zero-shot prompting technique. OpenAI claims that this prompt allows for better scaling as a user no longer needs to formulate many specific CoT Q&A examples.\n\nIn-context learning \nIn-context learning refers to a model's ability to temporarily learn from prompts. For example, a prompt may include a few examples for a model to learn from, such as asking the model to complete \"maison → house, chat → cat, chien →\" (the expected response being dog), an approach called few-shot learning.\n\nIn-context learning is an emergent ability of large language models. It is an emergent property of model scale, meaning that breaks in downstream scaling laws occur, leading to its efficacy increasing at a different rate in larger models than in smaller models. Unlike training and fine-tuning, which produce lasting changes, in-context learning is temporary. Training models to perform in-context learning can be viewed as a form of meta-learning, or \"learning to learn\".\n\nSelf-consistency decoding \nSelf-consistency decoding performs several chain-of-thought rollouts, then selects the most commonly reached conclusion out of all the rollouts.\n\nTree-of-thought \nTree-of-thought prompting generalizes chain-of-thought by generating multiple lines of reasoning in parallel, with the ability to backtrack or explore other paths. It can use tree search algorithms like breadth-first, depth-first, or beam search.\n\nPrompting to estimate model sensitivity \nResearch consistently demonstrates that LLMs are highly sensitive to subtle variations in prompt formatting, structure, and linguistic properties. Some studies have observed performance differences of up to 76 accuracy points across formatting changes in few-shot settings. 
Linguistic features such as morphology, syntax, and lexico-semantic choices significantly influence prompt effectiveness and can meaningfully enhance task performance across a variety of tasks. Clausal syntax, for example, improves consistency and reduces uncertainty in knowledge retrieval. This sensitivity persists even with larger model sizes, additional few-shot examples, or instruction tuning.\n\nTo address the sensitivity of models and make them more robust, several methods have been proposed. FormatSpread facilitates systematic analysis by evaluating a range of plausible prompt formats, offering a more comprehensive performance interval. Similarly, PromptEval estimates performance distributions across diverse prompts, enabling robust metrics such as performance quantiles and accurate evaluations under constrained budgets.\n\nAutomatic prompt generation\n\nRetrieval-augmented generation \n\nRetrieval-augmented generation (RAG) is a technique that enables generative artificial intelligence (Gen AI) models to retrieve and incorporate new information. It modifies interactions with a large language model (LLM) so that the model responds to user queries with reference to a specified set of documents, using this information to supplement information from its pre-existing training data. This allows LLMs to use domain-specific and/or updated information.\n\nRAG improves large language models (LLMs) by incorporating information retrieval before generating responses. Unlike traditional LLMs that rely on static training data, RAG pulls relevant text from databases, uploaded documents, or web sources. According to Ars Technica, \"RAG is a way of improving LLM performance, in essence by blending the LLM process with a web search or other document look-up process to help LLMs stick to the facts.\" This method helps reduce AI hallucinations, which have led to real-world issues like chatbots inventing policies or lawyers citing nonexistent legal cases. 
By dynamically retrieving information, RAG enables AI to provide more accurate responses without frequent retraining.\n\nGraph retrieval-augmented generation \n\nGraphRAG (coined by Microsoft Research) is a technique that extends RAG with the use of a knowledge graph (usually, LLM-generated) to allow the model to connect disparate pieces of information, synthesize insights, and holistically understand summarized semantic concepts over large data collections. It was shown to be effective on datasets like the Violent Incident Information from News Articles (VIINA).\n\nEarlier work showed the effectiveness of using a knowledge graph for question answering using text-to-query generation. These techniques can be combined to search across both unstructured and structured data, providing expanded context and improved ranking.\n\nUsing language models to generate prompts \nLarge language models (LLMs) themselves can be used to compose prompts for large language models. The automatic prompt engineer algorithm uses one LLM to beam search over prompts for another LLM:\n\n There are two LLMs. One is the target LLM, and another is the prompting LLM.\n The prompting LLM is presented with example input-output pairs, and asked to generate instructions that could have caused a model following the instructions to generate the outputs, given the inputs. \n Each of the generated instructions is used to prompt the target LLM, followed by each of the inputs. The log-probabilities of the outputs are computed and added. This is the score of the instruction.\n The highest-scored instructions are given to the prompting LLM for further variations.\n Repeat until some stopping criterion is reached, then output the highest-scored instructions.\nCoT examples can be generated by LLMs themselves. In \"auto-CoT\", a library of questions is converted to vectors by a model such as BERT. The question vectors are clustered. 
Questions close to the centroid of each cluster are selected, in order to have a subset of diverse questions. An LLM does zero-shot CoT on each selected question. The question and the corresponding CoT answer are added to a dataset of demonstrations. These diverse demonstrations can then be added to prompts for few-shot learning.\n\nText-to-image \n\nIn 2022, text-to-image models like DALL-E 2, Stable Diffusion, and Midjourney were released to the public. These models take text prompts as input and use them to generate images.\n\nPrompt formats \n\nEarly text-to-image models typically don't understand negation, grammar and sentence structure in the same way as large language models, and may thus require a different set of prompting techniques. The prompt \"a party with no cake\" may produce an image including a cake. As an alternative, negative prompts allow a user to indicate, in a separate prompt, which terms should not appear in the resulting image. Techniques such as framing the normal prompt into a sequence-to-sequence language modeling problem can be used to automatically generate an output for the negative prompt.\n\nA text-to-image prompt commonly includes a description of the subject of the art, the desired medium (such as digital painting or photography), style (such as hyperrealistic or pop-art), lighting (such as rim lighting or crepuscular rays), color, and texture. Word order also affects the output of a text-to-image prompt. Words closer to the start of a prompt may be emphasized more heavily.\n\nThe Midjourney documentation encourages short, descriptive prompts: instead of \"Show me a picture of lots of blooming California poppies, make them bright, vibrant orange, and draw them in an illustrated style with colored pencils\", an effective prompt might be \"Bright orange California poppies drawn with colored pencils\".\n\nArtist styles \n\nSome text-to-image models are capable of imitating the style of particular artists by name. 
For example, the phrase in the style of Greg Rutkowski has been used in Stable Diffusion and Midjourney prompts to generate images in the distinctive style of Polish digital artist Greg Rutkowski. Famous artists such as Vincent van Gogh and Salvador Dalí have also been used for styling and testing.\n\nNon-text prompts \n\nSome approaches augment or replace natural language text prompts with non-text input.\n\nTextual inversion and embeddings \nFor text-to-image models, textual inversion performs an optimization process to create a new word embedding based on a set of example images. This embedding vector acts as a \"pseudo-word\" which can be included in a prompt to express the content or style of the examples.\n\nImage prompting \nIn 2023, Meta's AI research released Segment Anything, a computer vision model that can perform image segmentation by prompting. As an alternative to text prompts, Segment Anything can accept bounding boxes, segmentation masks, and foreground/background points.\n\nUsing gradient descent to search for prompts \nIn \"prefix-tuning\", \"prompt tuning\", or \"soft prompting\", floating-point-valued vectors are searched directly by gradient descent to maximize the log-likelihood on outputs.\n\nFormally, let Z be a set of soft prompt tokens (tunable embeddings), while X and Y be the token embeddings of the input and output respectively. During training, the tunable embeddings, input, and output tokens are concatenated into a single sequence concat(Z; X; Y), and fed to the LLM. The losses are computed over the output tokens; the gradients are backpropagated to prompt-specific parameters: in prefix-tuning, they are parameters associated with the prompt tokens at each layer; in prompt tuning, they are merely the soft tokens added to the vocabulary.\n\nMore formally, this is prompt tuning. Let an LLM be written as LLM(X) = F(E(X)), where X is a sequence of linguistic tokens, E is the token-to-vector function, and F is the rest of the model. 
In prompt tuning, one provides a set of input-output pairs {(X^i, Y^i)}, and then uses gradient descent to search for the soft prompt Z maximizing the summed log-likelihood sum_i log Pr[Y^i | concat(Z; E(X^i))]. In words, log Pr[Y^i | concat(Z; E(X^i))] is the log-likelihood of outputting Y^i if the model first encodes the input X^i into the vector E(X^i), then prepends the \"prefix vector\" Z, then applies F. For prefix tuning, it is similar, but the \"prefix vector\" is pre-appended to the hidden states in every layer of the model.\n\nAn earlier result uses the same idea of gradient descent search, but is designed for masked language models like BERT, and searches only over token sequences, rather than numerical vectors. Formally, it searches for the token sequence maximizing the same log-likelihood, where the search ranges over token sequences of a specified length.\n\nLimitations \n\nWhile the process of writing and refining a prompt for an LLM or generative AI shares some parallels with an iterative engineering design process, such as through discovering 'best principles' to reuse and discovery through reproducible experimentation, the actual learned principles and skills depend heavily on the specific model being learned rather than being generalizable across the entire field of prompt-based generative models. Such patterns are also volatile and exhibit significantly different results from seemingly insignificant prompt changes. According to The Wall Street Journal in 2025, the job of prompt engineer was one of the hottest in 2023, but has become obsolete due to models that better intuit user intent and to company trainings.\n\nPrompt injection \n\nPrompt injection is a cybersecurity exploit in which adversaries craft inputs that appear legitimate but are designed to cause unintended behavior in machine learning models, particularly large language models (LLMs). This attack takes advantage of the model's inability to distinguish between developer-defined prompts and user inputs, allowing adversaries to bypass safeguards and influence model behaviour. 
While LLMs are designed to follow trusted instructions, they can be manipulated into carrying out unintended responses through carefully crafted inputs."
- source_sentence: >-
What is one of the main challenges associated with airborne wind turbines
as mentioned in the text?
A. High installation costs
B. Difficulty in maintaining turbines at high altitudes
C. Lack of public interest
D. Limited wind resources
sentences:
- "Wildlife crossings are structures that allow animals to cross human-made barriers safely. Wildlife crossings may include underpass tunnels or wildlife tunnels, viaducts, and overpasses or green bridges (mainly for large or herd-type animals); amphibian tunnels; fish ladders; canopy bridges (especially for monkeys and squirrels); tunnels and culverts (for small mammals such as otters, hedgehogs, and badgers); and green roofs (for butterflies and birds).\n\nWildlife crossings are a practice in habitat conservation, allowing connections or reconnections between habitats, combating habitat fragmentation. They also assist in avoiding collisions between vehicles and animals, which in addition to killing or injuring wildlife may cause injury or death to humans and property damage.\n\nSimilar structures can be used for domesticated animals, such as cattle creeps.\n\nRoads and habitat fragmentation\n\nHabitat fragmentation occurs when human-made barriers such as roads, railroads, canals, electric power lines, and pipelines penetrate and divide wildlife habitat. Of these, roads have the most widespread and detrimental effects. Scientists estimate that the system of roads in the United States affects the ecology of at least one-fifth of the land area of the country. For many years ecologists and conservationists have documented the adverse relationship between roads and wildlife, and identify four ways that roads and traffic detrimentally affect wildlife populations: (1) they decrease habitat amount and quality, (2) they increase mortality due to wildlife-vehicle collisions (road kill), (3) they prevent access to resources on the other side of the road, and (4) they subdivide wildlife populations into smaller and more vulnerable sub-populations (fragmentation). 
Habitat fragmentation can lead to extinction or extirpation if a population's gene pool is restricted enough.\n\nThe first three effects (loss of habitat, road kill, and isolation from resources) exert pressure on various animal populations by reducing available resources and directly killing individuals in a population. For instance, one study found that road kills do not pose a significant threat to healthy populations but can be devastating to small, shrinking, or threatened populations. Road mortality has significantly affected a number of prominent species in the United States, including white-tailed deer (Odocoileus virginianus), Florida panthers (Puma concolor coryi), and black bears (Ursus americanus). In addition, habitat loss can be direct, if habitat is destroyed to make room for a road, or indirect, if habitat quality close to roads is compromised due to emissions from the roads (e.g. noise, light, runoff, pollution, etc.). Finally, species that are unable to migrate across roads to reach resources such as food, shelter and mates will experience reduced reproductive and survival rates, which can compromise population viability.\n\nIn addition to the first three factors, numerous studies have shown that the construction and use of roads is a direct source of habitat fragmentation. As mentioned above, populations surrounded by roads are less likely to receive immigrants from other habitats and as a result, they suffer from a lack of genetic diversity. These small populations are particularly vulnerable to extinction due to demographic, genetic, and environmental stochasticity because they do not contain enough alleles to adapt to new selective pressures such as changes in temperature, habitat, and food availability.\n\nThe relationship between roads and habitat fragmentation is well documented. One study found that roads contribute more to fragmentation in forest habitats than clear cuts. 
Another study concluded that road fragmentation of formerly contiguous forest in eastern North America is the primary cause for the decline of forest bird species and has also significantly harmed small mammals, insects, and reptiles in the United States. After years of research, biologists agree that roads and traffic lead to habitat fragmentation, isolation and road kill, all of which combine to significantly compromise the viability of wildlife populations throughout the world.\n\nWildlife-vehicle collisions \n\nWildlife-vehicle collisions have a significant cost for human populations because collisions damage property and injure and kill passengers and drivers. Research in the 1990s estimated the number of collisions with ungulates in traffic in Europe at 507,000 per year, resulting in 300 people killed, 30,000 injured, and property damage exceeding $1 billion. In parallel, 1.5 million traffic accidents involving deer in the United States cause an estimated $1.1 billion in vehicle damage each year. On a larger scale, research indicates that wildlife-vehicle collisions in the United States result in 29,000 injuries and more than 200 fatalities per year.\n\nThe conservation issues associated with roads (wildlife mortality and habitat fragmentation) coupled with the substantial human and economic costs resulting from wildlife-vehicle collisions have caused scientists, engineers, and transportation authorities to consider a number of mitigation tools for reducing the conflict between roads and wildlife. Of the currently available options, structures known as wildlife crossings have been the most successful at reducing both habitat fragmentation and wildlife-vehicle collisions caused by roads.\n\nWildlife crossings are structural passages beneath or above roadways that are designed to facilitate safe wildlife movement across roadways. 
In recent years, conservation biologists and wildlife managers have advocated wildlife crossings coupled with roadside fencing as a way to increase road permeability and habitat connectivity while decreasing wildlife-vehicle collisions. Wildlife crossing is the umbrella term encompassing underpasses, overpasses, ecoducts, green bridges, amphibian/small mammal tunnels, and wildlife viaducts . All of these structures are designed to provide semi-natural corridors above and below roads so that animals can safely cross without endangering themselves and motorists.\n\nHistory\nWritten reports of rough fish ladders date to 17th-century France, where bundles of branches were used to create steps in steep channels to bypass obstructions. A version was patented in 1837 by Richard McFarlan of Bathurst, New Brunswick, Canada, who designed a fishway to bypass a dam at his water-powered lumber mill. In 1880, the first fish ladder was built in Rhode Island, United States, on the Pawtuxet Falls Dam. As the Industrial Age advanced, dams and other river obstructions became larger and more common, leading to the need for effective fish by-passes.\n\nThe first overland wildlife crossings were constructed in France during the 1950s. European countries including the Netherlands, Switzerland, Germany, and France have been using various crossing structures to reduce the conflict between wildlife and roads for several decades and use a variety of overpasses and underpasses to protect and re-establish wildlife such as: amphibians, badgers, ungulates, invertebrates, and other small mammals.\n\nThe Humane Society of the United States reported in 2007 that the more than 600 tunnels installed under major and minor roads in the Netherlands have helped to substantially increase population levels of the endangered European badger. 
The longest \"ecoduct\" overpass, Natuurbrug Zanderij Crailoo, in the Netherlands, spans a highway, railway and golf course.\n\nWildlife crossings are becoming increasingly common in Canada and the United States. Recognizable wildlife crossings are found in Banff National Park in Alberta, where vegetated overpasses provide safe passage over the Trans-Canada Highway for bears, moose, deer, wolves, elk, and many other species. The 24 wildlife crossings in Banff were constructed as part of a road improvement project in 1978. In the United States, thousands of wildlife crossings have been built in the past 30 years, including culverts, bridges, and overpasses. These have been used to protect mountain goats in Montana, spotted salamanders in Massachusetts, bighorn sheep in Colorado, desert tortoises in California, and endangered Florida panthers in Florida. The Henry Street salamander tunnels are tunnels under Henry Street in North Amherst, Massachusetts: they help salamanders cross Henry Street to get to vernal pools that the salamanders use for breeding.\n\nThe first wildlife crossing in the Canadian province of Ontario was built in 2010, along Ontario Highway 69 between Sudbury and Killarney, as part of the route's ongoing freeway conversion.\n\nCosts and benefits \nThe benefits derived from constructing wildlife crossings to extend wildlife migration corridors over and under major roads appear to outweigh the costs of construction and maintenance. One study estimates that adding wildlife crossings to a road project increases the total cost of the project by 7–8%. 
Theoretically, the monetary costs associated with constructing and maintaining wildlife crossings in ecologically important areas are trumped by the benefits associated with protecting wildlife populations, reducing property damage to vehicles, and saving the lives of drivers and passengers by reducing the number of collisions caused by wildlife.\n\nA study completed for the Virginia Department of Transportation estimated that underpasses for wildlife become cost effective, in terms of property damage, when they prevent between 2.6 and 9.2 deer-vehicle collisions per year, depending on the cost of the underpass. Approximately 300 deer crossed through the underpasses in the year the study took place .\n\nEffectiveness \n\nA number of studies have been conducted to determine the effectiveness of wildlife corridors at providing habitat connectivity (by providing viable migration corridors) and reducing wildlife-vehicle collisions. The effectiveness of these structures appears to be highly site-specific (due to differences in location, structure, species, habitat, etc.), and also dependent on design, but crossings have been beneficial to a number of species in a variety of locations.\n\nExamples\n\nBanff National Park \n\nBanff National Park offers one of the best opportunities to study the effectiveness of wildlife crossings because the park contains a wide variety of species and is bisected by the Trans-Canada Highway (TCH), a large commercial road. To reduce the effects of the four-lane TCH, 24 wildlife crossings (22 underpasses and two overpasses) were built to ensure habitat connectivity and protect motorists . In 1996, Parks Canada developed a contract with university researchers to assess the effectiveness of the crossings. 
Subsequently, a number of publications have analyzed the crossings' effect on various species and overall wildlife mortality.\n\nUsing a variety of techniques to monitor the crossings since the early 1980s, scientists report that 10 species of large mammals (including deer, elk, black bear, grizzly bear, mountain lion, wolf, moose, and coyote) have used the 24 crossings in Banff a total of 84,000 times as of January 2007. The research also identified a \"learning curve\" such that animals need time to acclimate to the structures before they feel comfortable using them. For example, grizzly bear crossings increased from seven in 1996 to more than 100 in 2006, although the actual number of individual bears using the structures remained constant over this time at between two and four bears (Parks Canada, unpublished results). A similar set of observations was made for wolves, with crossings increasing from two to approximately 140 over the same 10-year period. However, in this case the actual number of wolves in the packs using the crossings increased dramatically, from a low of two up to a high of over 20 individuals.\n\nOne study reported that the use of wildlife crossings and fencing reduced traffic-induced mortality of large ungulates on the TCH by more than 80 percent. Recent analysis for carnivores showed results were not as positive however, with bear mortality increasing by an average of 116 percent in direct parallel to an equal doubling of traffic volumes on the highway, clearly showing no effect of fencing to reduce bear mortality (Hallstrom, Clevenger, Maher and Whittington, in prep). Research on the crossings in Banff has thus shown mixed value of wildlife crossings depending on the species in question.\n\nParks Canada is currently planning to build 17 additional crossing structures across the TCH to increase driver safety near the hamlet of Lake Louise. 
Lack of effectiveness of standard fencing in reducing bear mortality demonstrates that additional measures such as wire 'T-caps' on the fence may be needed for fencing to mitigate effectively for bears (Hallstrom, Clevenger, Maher and Whittington, in prep).\n\nCollier and Lee Counties in Florida \nTwenty-four wildlife crossings (highway underpasses) and 12 bridges modified for wildlife have been constructed along a 40-mile stretch of Interstate 75 in Collier and Lee Counties in Florida . These crossings are specifically designed to target and protect the endangered Florida panther, a subspecies of cougar found in the Southeastern United States. Scientists estimate that there are only 80–100 Florida panthers alive in the wild, which makes them one of the most endangered large mammals in North America . The Florida panther is particularly vulnerable to wildlife-vehicle collisions, which claimed 11 panthers in 2006 and 14 in 2007 .\n\nThe Florida Fish and Wildlife Conservation Commission (FWC) has used a number of mitigation tools in an effort to protect Florida panthers and the combination of wildlife crossings and fences have proven the most effective . As of 2007, no panthers have been killed in areas equipped with continuous fencing and wildlife crossings and the FWC is planning to construct many more crossing structures in the future. The underpasses on I-75 also appeared to benefit bobcats, deer, and raccoons by significantly reducing wildlife-vehicle collisions along the interstate .\n\nSouthern California \nWildlife crossings have also been important for protecting biodiversity in several areas of southern California. In San Bernardino County, biologists have erected fences along State Route 58 to complement underpasses (culverts) that are being used by the threatened desert tortoise. 
Tortoise deaths on the highway declined by 93% during the first four years after the introduction of the fences, proving that even makeshift wildlife crossings (storm-drainage culverts in this case) have the ability to increase highway permeability and protect sensitive species . Studies by and report that underpasses in Orange, Riverside, and Los Angeles Counties have drawn significant use from a variety of species including bobcats, coyotes, gray fox, mule deer, and long-tailed weasels. These results could be extremely important for wildlife conservation efforts in the region's Puente Hills and Chino Hills links, which have been increasingly fragmented by road construction . Los Angeles County's first wildlife-purpose built underpass is at Harbor Boulevard. It was built in partnership between Los Angeles County, California State Parks and the Puente Hills Habitat Preservation Authority.\n\nThe Wallis Annenberg Wildlife Crossing in Agoura Hills, California, will be the world's largest wildlife crossing once completed in 2026.\n\nEcoducts, Netherlands \n\nThe Netherlands has over 66 wildlife crossings (overpasses and ecoducts) that have been used to protect the endangered European badger, as well as populations of wild boar, red deer, and roe deer. As of 2012, the Veluwe, of woods, heathland and drifting sands, the largest lowland nature area in North Western Europe, contains nine ecoducts, wide on average, that are used to shuttle wildlife across highways that transect the Veluwe. The first two ecoducts on the Veluwe were built in 1988 across the A50 when the highway was constructed. Five of the other ecoducts on the Veluwe were built across existing highways, one was built across a two lane provincial road. The two ecoducts across the A50 were used by nearly 5,000 deer and wild boar during a one-year period .\n\nThe Netherlands also boasts the world's longest ecoduct-wildlife overpass called the Natuurbrug Zanderij Crailoo (sand quarry nature bridge at Crailo) . 
The massive structure, completed in 2006, is wide and over long and spans a railway line, business park, roadway, and sports complex . Monitoring is currently underway to examine the effectiveness of this innovative project combining wildlife protection with urban development. The oldest wildlife passage is Zeist West - A 28, opened in 1988.\n\nSlaty Creek Wildlife Underpass, Calder Freeway, Black Forest, Australia \nAnother case study of the effectiveness of wildlife crossings comes from an underpass built to minimize the ecological effect of the Calder Freeway as it travels through the Black Forest in Victoria, Australia. In 1997, the Victorian Government Roads Corporation built Slaty Creek wildlife underpass at a cost of $3 million . Scientists used 14 different techniques to monitor the underpass for 12 months in order to determine the abundance and diversity of species using the underpass . During the 12-month period, 79 species of fauna were detected in the underpass (compared with 116 species detected in the surrounding forest) including amphibians, bats, birds, koalas, wombats, gliders, reptiles, and kangaroos . The results indicate that the underpass could be useful to a wide array of species but the authors suggest that Slaty Creek could be improved by enhanced design and maintenance of fencing to minimise road kill along the Calder Freeway and by attempting to exclude introduced predators such as cats and foxes from the area.\n\nI-70 Vail Pass, Colorado\nIn 2005, area environmental groups floated the idea of a wildlife overpass west of Vail Pass. 
In 2010, ARC Solutions – an interdisciplinary partnership – initiated the International Wildlife Crossing Infrastructure Design Competition for a wildlife crossing over Interstate 70 in the high country west of Denver, Colorado; designers had to account for challenges unique to the area, including snow and severe weather, high elevation and steep grades, a six-lane roadway, a bike path, and high traffic volumes, as well as multiple species of wildlife, including lynx.\n\nAfter receiving 36 submissions from nine countries, a jury of international experts in landscape architecture, engineering, architecture, ecology and transportation selected five finalists in November 2010 to further develop their conceptual designs for a wildlife crossing structure. In January 2011, the team led by HNTB with Michael Van Valkenburgh & Associates (New York) were selected as the winners. The design features a single 100\_m (328\_ft) concrete span across the highway that is planted with a variety of vegetation types, including a pine-tree forest and meadow grasses, to attract different species to cross. A modular precast concrete design means that much of the bridge can be constructed offsite and moved into place.\n\nIn late 2020, Summit County Safe Passages released the \"I-70 East Vail Pass Wildlife Crossings Feasibility Study\" for a wildlife overpass.\n\nI-90 near Snoqualmie Pass \nIn 2005, the Washington State Department of Transportation received approval to begin a safety improvement project through the Snoqualmie Pass area along the Interstate 90 corridor from Hyak to Easton, through the Central Cascades and Mountains to Sound Greenway National Heritage Area, including a series of wildlife crossings. Wildlife habitat on either side of I-90 will be reconnected with the installation of new bridges and culverts, protecting both wildlife and the traveling public. The construction of the wildlife overcrossing began in 2015 and was completed in late 2019. 
Work to restore habitat on the wildlife bridge over I-90 has continued throughout 2020, with 90,000 trees and shrubs planted on the overcrossing.\n\nInterstate 80 in Parleys Canyon \nIn 2018, the Utah Department of Transportation announced a wildlife crossing over Interstate 80 in Parleys Canyon. The project was completed in early 2019 and measures long by wide. It is currently the only wildlife overpass in the state, though Utah has more than 50 wildlife underpasses.\n\nRobert L.B. Tobin Land Bridge \n\nOn December 11, 2020, the Robert L.B. Tobin Land Bridge opened over Wurzbach Parkway in San Antonio, Texas' Phil Hardberger Park. The project cost $23 million and is designed for both wildlife and pedestrians. Construction began on November 26, 2018, originally expected to end in April 2020, and opened in December 2020. At long and wide, it was the largest wildlife bridge in the United States when it was constructed.\n\nCanopy Bridge in Anamalai Tiger Reserve \nMany endangered lion-tailed macaques used to be killed while crossing the highway at Puduthotam in Valparai, South India. Thanks to the efforts of NGOs and the forest department, several canopy bridges were installed, connecting trees on either side of the road. This helped to lower the numbers of lion-tailed macaques killed in the region. The Environment Conservation Group had initiated a national mission to increase awareness on the importance of adopting roadkill mitigation methods through their mission PATH traveling more than across 22 states.\n\nSee also\n\n Colored walls or corridors\n Aquatic organism passage\n Emerald network\n Wildlife corridor, green corridor\n\n Animal passages\n Amphibian and reptile tunnel\n Bat bridge\n Squirrel bridge\n Toad tunnel\n\n Ecological network\n Habitat destruction\n Rewilding\n Landscape connectivity\n\nBibliography \n\nHallstrom, W., A. P. Clevenger, A. Maher and J Whittington. 2008. Effectiveness of highway mitigation fencing for ungulates and carnivores. 
Journal of Applied Ecology - In Review.\n.\n\nExternal links\n\nEco-Logical: An Ecosystem Approach to Developing Infrastructure Projects - Federal Highway Administration (FHWA)\nWildlife Crossing Structures - Yellowstone to Yukon Conservation Initiative\nWildlife Crossings in Banff National Park \nRoad Ecology Center, UC Davis\nCalifornia Roadkill Observation System\nMaine Audubon Wildlife Road Watch\nAn Assessment of Wildlife Habitat Linkages on Interstate 70, Utah \nWildlife Consulting Resources Wildlife Crossing and Linkage Information for New Highway Projects\nWildlife Crossings Project - The Wildlife Crossings Project provides information about georreferenced wildlife crossings all around the world, and allow specialists to publish them."
- "Radio frequency sweep or frequency sweep or RF sweep apply to scanning a radio frequency band for detecting signals being transmitted there. A radio receiver with an adjustable receiving frequency is used to do this. A display shows the strength of the signals received at each frequency as the receiver's frequency is modified to sweep (scan) the desired frequency band.\n\nMethods and tools\nA spectrum analyzer is a standard instrument used for RF sweep. It includes an electronically tunable receiver and a display. The display presents measured power (y axis) vs frequency (x axis).\nThe power spectrum display is a two-dimensional display of measured power vs. frequency. The power may be either in linear units, or logarithmic units (dBm). Usually the logarithmic display is more useful, because it presents a larger dynamic range with better detail at each value. An RF sweep relates to a receiver which changes its frequency of operation continuously from a minimum frequency to a maximum (or from maximum to minimum). Usually the sweep is performed at a fixed, controllable rate, for example 5\_MHz/sec.\n\nSome systems use frequency hopping, switching from one frequency of operation to another. One method of CDMA uses frequency hopping. Usually frequency hopping is performed in a random or pseudo-random pattern.\n\nApplications\nFrequency sweeps may be used by regulatory agencies to monitor the radio spectrum, to ensure that users only transmit according to their licenses. The FCC for example controls and monitors the use of the spectrum in the U.S. In testing of new electronic devices, a frequency sweep may be done to measure the performance of electronic components or systems. For example, RF oscillators are measured for phase noise, harmonics and spurious signals; computers for consumer sale are tested to avoid radio frequency interference with radio systems. 
Portable sweep equipment may be used to detect some types of covert listening device (bugs).\n\nIn professional audio, the optimum use of wireless microphones and wireless intercoms may require performing a sweep of the local radio spectrum, especially if many wireless devices are being used simultaneously. The sweep is generally limited in bandwidth to only the operating bandwidth of the wireless devices. For instance, at American Super Bowl games, audio engineers monitor (sweep) the radio spectrum in real time to make certain that all local wireless microphones are operating at previously agreed-upon and coordinated frequencies.\n\nSee also\n Measuring receiver\n Spectrum management\n Technical surveillance counter-measures\n\nReferences\n\nDonald G. Fink, Donald Christiansen – Electronic Engineer's Handbook, Second edition. \nUlrich L. Rohde, Jerry C. Whitaker, T.T.N. Bucher – Communications Receivers: Principles and Design, Second edition."
- >-
An airborne wind turbine is a design concept for a wind turbine with a
rotor supported in the air without a tower, thus benefiting from the
higher velocity and persistence of wind at high altitudes, while
avoiding the expense of tower construction, or the need for slip rings
or yaw mechanism. An electrical generator may be on the ground or
airborne. Challenges include safely suspending and maintaining turbines
hundreds of meters off the ground in high winds and storms, transferring
the harvested and/or generated power back to earth, and interference
with aviation.
Airborne wind turbines may operate in low or high altitudes; they are
part of a wider class of Airborne Wind Energy Systems (AWES) addressed
by high-altitude wind power and crosswind kite power. When the generator
is on the ground, then the tethered aircraft need not carry the
generator mass or have a conductive tether. When the generator is aloft,
then a conductive tether would be used to transmit energy to the ground
or used aloft or beamed to receivers using microwave or laser. Kites and
helicopters come down when there is insufficient wind; kytoons and
blimps may resolve the matter with other disadvantages. Also, bad
weather such as lightning or thunderstorms, could temporarily suspend
use of the machines, probably requiring them to be brought back down to
the ground and covered. Some schemes require a long power cable and, if
the turbine is high enough, a prohibited airspace zone. As of 2022, few
commercial airborne wind turbines are in regular operation.
Aerodynamic variety
An aerodynamic airborne wind power system relies on the wind for
support.
In one class, the generator is aloft; an aerodynamic structure resembling a kite, tethered to the ground, extracts wind energy by supporting a wind turbine. In another class of devices, such as crosswind kite power, generators are on the ground; one or more airfoils or kites exert force on a tether, which is converted to electrical energy. An airborne turbine requires conductors in the tether or some other apparatus to transmit power to the ground. Systems that rely on a winch can instead place the weight of the generator at ground level, and the tethers need not conduct electricity.
Aerodynamic wind energy systems have been a subject of research interest
since at least 1980. Multiple proposals have been put forth but no
commercial products are available.
Other projects for airborne wind energy systems include:
Ampyx Power
Kitepower
KiteGen
Rotokite
HAWE System
SkySails
X-Wind technology
Makani Power crosswind hybrid kite system
Windswept and Interesting Kite Turbine Ring to Ring Torque Transfer
Kitemill
Aerostat variety
An aerostat-type wind power system relies at least in part on buoyancy
to support the wind-collecting elements. Aerostats vary in their designs
and resulting lift-to-drag ratio; the kiting effect of higher
lift-over-drag shapes for the aerostat can effectively keep an airborne
turbine aloft; a variety of such kiting balloons were made famous in the
kytoon by Domina Jalbert.
Balloons can be incorporated to keep systems up without wind, but
balloons leak slowly and have to be resupplied with lifting gas,
possibly patched as well. Very large, sun-heated balloons may solve the
helium or hydrogen leakage problems.
An Ontario based company called Magenn was developing a turbine called
the Magenn Air Rotor System (MARS). A future -wide MARS system would use
a horizontal rotor in a helium suspended apparatus which is tethered to
a transformer on the ground. Magenn claims that their technology
provides high torque, low starting speeds, and superior overall
efficiency thanks to its ability to deploy higher in comparison to
non-aerial solutions. The first prototypes were built by TCOM in April
2008. No production units have been delivered.
Boston-based Altaeros Energies uses a helium-filled balloon shroud to
lift a wind turbine into the air, transferring the resultant power down
to a base station through the same cables used to control the shroud. A
35-foot prototype using a standard Skystream 2.5kW 3.7m wind turbine was
flown and tested in 2012. In fall 2013, Altaeros was at work on its
first commercial-scale demonstration in Alaska.
Another concept, released in 2023, proposed a helium-filled balloon with
attached sails, which create pressure and drive the rotation of the
system around its horizontal axis. The kinetic energy is transferred to
a generator on the ground through ropes in circular motion.
See also
High-altitude wind power
Ram air turbine
References
Bibliography
Vance, E. Wind power: High hopes. Nature 460, 564–566 (2009). https://doi.org/10.1038/460564a
External links
Kitemill - Taking windpower to new heights
Energy Kite Systems
Why Airborne Wind Energy Airborne Wind Energy Labs
- source_sentence: |-
What is the main purpose of chain coding in image segmentation?
A. To enhance the color depth of images
B. To compress binary images by tracing contours
C. To convert images into three-dimensional models
D. To increase the size of image files
sentences:
- >-
A cobweb plot, known also as Lémeray Diagram or Verhulst diagram is a
visual tool used in dynamical systems, a field of mathematics to
investigate the qualitative behaviour of one-dimensional iterated
functions, such as the logistic map. The technique was introduced in the
1890s by E.-M. Lémeray. Using a cobweb plot, it is possible to infer
the long-term status of an initial condition under repeated application
of a map.
Method
For a given iterated function , the plot consists of a diagonal () line
and a curve representing . To plot the behaviour of a value , apply the
following steps.
Find the point on the function curve with an x-coordinate of . This has the coordinates ().
Plot horizontally across from this point to the diagonal line. This has the coordinates ().
Plot vertically from the point on the diagonal to the function curve. This has the coordinates ().
Repeat from step 2 as required.
Interpretation
On the Lémeray diagram, a stable fixed point corresponds to the segment
of the staircase with progressively decreasing stair lengths or to an
inward spiral, while an unstable fixed point is the segment of the
staircase with growing stairs or an outward spiral. It follows from the
definition of a fixed point that the staircases converge whereas spirals
center at a point where the diagonal line crosses the function graph.
A period-2 orbit is represented by a rectangle, while greater period
cycles produce further, more complex closed loops. A chaotic orbit would
show a "filled-out" area, indicating an infinite number of non-repeating
values.
See also
Jones diagram – similar plotting technique
Fixed-point iteration – iterative algorithm to find fixed points (produces a cobweb plot)
References
- >-
A chain code is a lossless compression based image segmentation method
for binary images based upon tracing image contours. The basic principle
of chain coding, like other contour codings, is to separately encode
each connected component, or "blob", in the image.
For each such region, a point on the boundary is selected and its
coordinates are transmitted. The encoder then moves along the boundary
of the region and, at each step, transmits a symbol representing the
direction of this movement.
This continues until the encoder returns to the starting position, at
which point the blob has been completely described, and encoding
continues with the next blob in the image.
This encoding method is particularly effective for images consisting of
a reasonably small number of large connected components.
Variations
Some popular chain codes include:
the Freeman Chain Code of Eight Directions (FCCE)
Directional Freeman Chain Code of Eight Directions (DFCCE)
Vertex Chain Code (VCC)
Three OrThogonal symbol chain code (3OT)
Unsigned Manhattan Chain Code (UMCC)
Ant Colonies Chain Code (ACCC)
Predator-Prey System Chain Code (PPSCC)
Beaver Territories Chain Code (BTCC)
Biological Reproduction Chain Code (BRCC)
Agent-Based Modeling Chain Code (ABMCC)
In particular, FCCE, VCC, 3OT and DFCCE can be transformed from one to
another
A related blob encoding method is crack code. Algorithms exist to
convert between chain code, crack code, and run-length encoding.
A new trend of chain codes involve the utilization of biological
behaviors. This started by the work of Mouring et al. who developed an
algorithm that takes advantage of the pheromone of ants to track image
information. An ant releases a pheromone when they find a piece of food.
Other ants use the pheromone to track the food. In their algorithm, an
image is transferred into a virtual environment that consists of food
and paths according to the distribution of the pixels in the original
image. Then, ants are distributed and their job is to move around while
releasing pheromone when they encounter food items. This helps other
ants identify information, and therefore, encode information.
In use
Recently, the combination of move-to-front transform and adaptive
run-length encoding accomplished efficient compression of the popular
chain codes.
Chain codes also can be used to obtain high levels of compression for
image documents, outperforming standards such as DjVu and JBIG2.
- >-
Meripilus sumstinei, commonly known as the giant polypore or the
black-staining polypore, is a species of fungus in the family
Meripilaceae.
Taxonomy
Originally described in 1905 by William Alphonso Murrill as Grifola
sumstinei, the species was transferred to Meripilus in 1988.
Description
The cap of this polypore is wide, with folds of flesh up to thick. It
has white to brownish concentric zones and tapers toward the base; the
stipe is indistinct.
Distribution and habitat
It is found in eastern North America from June to September. It grows in
large clumps on the ground around hardwood (including oak) trunks,
stumps, and logs.
Uses
The mushroom is edible.
References
datasets:
- ngkan146/mnlp_encoder_data
pipeline_tag: sentence-similarity
library_name: sentence-transformers
SentenceTransformer based on intfloat/multilingual-e5-base
This is a sentence-transformers model finetuned from intfloat/multilingual-e5-base on the mnlp_encoder_data dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: intfloat/multilingual-e5-base
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset: mnlp_encoder_data
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
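The Pooling and Normalize modules above can be mirrored in plain PyTorch. The sketch below (tensor values are illustrative, not taken from this model) shows masked mean pooling over token embeddings followed by L2 normalization, corresponding to `pooling_mode_mean_tokens: True` and the final `Normalize()` step:

```python
import torch
import torch.nn.functional as F

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Masked mean over the token axis, as in the Pooling module above."""
    mask = attention_mask.unsqueeze(-1).to(token_embeddings.dtype)  # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(dim=1)                   # sum of non-padding tokens
    counts = mask.sum(dim=1).clamp(min=1e-9)                        # number of non-padding tokens
    return summed / counts

# Toy batch: one sequence, three token positions, the last one padding.
tokens = torch.tensor([[[1.0, 0.0], [3.0, 0.0], [99.0, 99.0]]])
mask = torch.tensor([[1, 1, 0]])

pooled = mean_pool(tokens, mask)             # padding token is ignored
embedding = F.normalize(pooled, p=2, dim=1)  # unit length, as Normalize() produces
```

Because the output embeddings are unit-normalized, the cosine similarity reported by the model reduces to a dot product between embeddings.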
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("ngkan146/test-encoder-st")
# Run inference
sentences = [
'What is the main purpose of chain coding in image segmentation? \nA. To enhance the color depth of images \nB. To compress binary images by tracing contours \nC. To convert images into three-dimensional models \nD. To increase the size of image files',
'A chain code is a lossless compression based image segmentation method for binary images based upon tracing image contours. The basic principle of chain coding, like other contour codings, is to separately encode each connected component, or "blob", in the image.\n\nFor each such region, a point on the boundary is selected and its coordinates are transmitted. The encoder then moves along the boundary of the region and, at each step, transmits a symbol representing the direction of this movement.\n\nThis continues until the encoder returns to the starting position, at which point the blob has been completely described, and encoding continues with the next blob in the image.\n\nThis encoding method is particularly effective for images consisting of a reasonably small number of large connected components.\n\nVariations \nSome popular chain codes include:\n the Freeman Chain Code of Eight Directions (FCCE)\n Directional Freeman Chain Code of Eight Directions (DFCCE)\n Vertex Chain Code (VCC)\n Three OrThogonal symbol chain code (3OT)\n Unsigned Manhattan Chain Code (UMCC)\n Ant Colonies Chain Code (ACCC)\n Predator-Prey System Chain Code (PPSCC)\n Beaver Territories Chain Code (BTCC)\n Biological Reproduction Chain Code (BRCC)\n Agent-Based Modeling Chain Code (ABMCC)\n\nIn particular, FCCE, VCC, 3OT and DFCCE can be transformed from one to another\n\nA related blob encoding method is crack code. Algorithms exist to convert between chain code, crack code, and run-length encoding.\n\nA new trend of chain codes involve the utilization of biological behaviors. This started by the work of Mouring et al. who developed an algorithm that takes advantage of the pheromone of ants to track image information. An ant releases a pheromone when they find a piece of food. Other ants use the pheromone to track the food. 
In their algorithm, an image is transferred into a virtual environment that consists of food and paths according to the distribution of the pixels in the original image. Then, ants are distributed and their job is to move around while releasing pheromone when they encounter food items. This helps other ants identify information, and therefore, encode information.\n\nIn use \nRecently, the combination of move-to-front transform and adaptive run-length encoding accomplished efficient compression of the popular chain codes.\nChain codes also can be used to obtain high levels of compression for image documents, outperforming standards such as DjVu and JBIG2.',
'Meripilus sumstinei, commonly known as the giant polypore or the black-staining polypore, is a species of fungus in the family Meripilaceae.\n\nTaxonomy \nOriginally described in 1905 by William Alphonso Murrill as Grifola sumstinei, the species was transferred to Meripilus in 1988.\n\nDescription \nThe cap of this polypore is wide, with folds of flesh up to thick. It has white to brownish concentric zones and tapers toward the base; the stipe is indistinct.\n\nDistribution and habitat \nIt is found in eastern North America from June to September. It grows in large clumps on the ground around hardwood (including oak) trunks, stumps, and logs.\n\nUses \nThe mushroom is edible.\n\nReferences',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
Training Details
Training Dataset
mnlp_encoder_data
- Dataset: mnlp_encoder_data at 39af5de
- Size: 8,000 training samples
- Columns: anchor, positive, and negative
- Approximate statistics based on the first 1000 samples:

|         | anchor | positive | negative |
|---------|--------|----------|----------|
| type    | string | string   | string   |
| details | min: 23 tokens<br>mean: 65.95 tokens<br>max: 171 tokens | min: 19 tokens<br>mean: 413.21 tokens<br>max: 512 tokens | min: 14 tokens<br>mean: 405.39 tokens<br>max: 512 tokens |
- Samples:
  - anchor: What are the two key processes that relative nonlinearity depends on for maintaining species diversity?
    A. Species must differ in their resource consumption and reproductive rates.
    B. Species must differ in their responses to resource density and affect competition differently.
    C. Species must have identical growth rates and resource requirements.
    D. Species must compete for the same resources and have similar responses to competition.
    positive: Relative nonlinearity is a coexistence mechanism that maintains species diversity via differences in the response to and effect on variation in resource density or some other factor mediating competition. Relative nonlinearity depends on two processes: 1) species have to differ in the curvature of their responses to resource density and 2) the patterns of resource variation generated by each species must favor the relative growth of another species. In its most basic form, one species grows best under equilibrium competitive conditions and another performs better under variable competitive conditions. Like all coexistence mechanisms, relative nonlinearity maintains species diversity by concentrating intraspecific competition relative to interspecific competition. Because resource density can be variable, intraspecific competition is the reduction of per-capita growth rate under variable resources generated by conspecifics (i.e. individuals of the same species). Interspecific competitio...
    negative: Muellerella lichenicola is a species of lichenicolous fungus in the family Verrucariaceae. It was first formally described as a new species in 1826 by Søren Christian Sommerfelt, as Sphaeria lichenicola. David Leslie Hawksworth transferred it to the genus Muellerella in 1979. It has been reported growing on Caloplaca aurantia, Caloplaca saxicola and Physcia aipolia in Sicily, and on an unidentified crustose lichen in Iceland. In Mongolia, it has been reported growing on the thallus of a Biatora-lichen at elevation in the Bulgan district and on Aspicilia at elevation in the Altai district. In Victoria Land, Antarctica, it has been reported from multiple hosts, including members of the Teloschistaceae and Physciaceae. References
  - anchor: What was the unemployment rate in Japan in 2010?
    A. 3.1%
    B. 4.2%
    C. 5.1%
    D. 6.0%
    positive: The labor force in Japan numbered 65.9 million people in 2010, which was 59.6% of the population of 15 years old and older, and amongst them, 62.57 million people were employed, whereas 3.34 million people were unemployed which made the unemployment rate 5.1%. The structure of Japan's labor market experienced gradual change in the late 1980s and continued this trend throughout the 1990s. The structure of the labor market is affected by: 1) shrinking population, 2) replacement of postwar baby boom generation, 3) increasing numbers of women in the labor force, and 4) workers' rising education level. Also, an increase in the number of foreign nationals in the labor force is foreseen. As of 2019, Japan's unemployment rate was the lowest in the G7. Its employment rate for the working-age population (15-64) was the highest in the G7. By 2021 the size of the labor force changed to 68.60 million, a decrease of 0.08 million from the previous year. Viewing by sex, the male labor force was 38.0...
    negative: The Aircraft Classification Rating (ACR) - Pavement Classification Rating (PCR) method is a standardized international airport pavement rating system developed by ICAO in 2022. The method is scheduled to replace the ACN-PCN method as the official ICAO pavement rating system by November 28, 2024. The method uses similar concepts as the ACN-PCN method, however, the ACR-PCR method is based on layered elastic analysis, uses standard subgrade categories for both flexible and rigid pavement, and eliminates the use of alpha factor and layer equivalency factors. The method relies on the comparison of two numbers: The ACR, a number defined as two times the derived single wheel load (expressed in hundreds of kilograms) conveying the relative effect on an airplane of a given weight on a pavement structure for a specified standard subgrade strength; The PCR, a number (and series of letters) representing the pavement bearing strength (on the same scale as ACR) of a given pavement section (runwa...
  - anchor: What was the original name of WordMARC before it was changed due to a trademark conflict?
    A. MUSE
    B. WordPerfect
    C. Document Assembly
    D. Primeword
    positive: WordMARC Composer was a scientifically oriented word processor developed by MARC Software, an offshoot of MARC Analysis Research Corporation (which specialized in high end Finite Element Analysis software for mechanical engineering). It ran originally on minicomputers such as Prime and Digital Equipment Corporation VAX. When the IBM PC emerged as the platform of choice for word processing, WordMARC allowed users to easily move documents from a minicomputer (where they could be easily shared) to PCs. WordMARC was the creation of Pedro Marcal, who pioneered work in finite element analysis and needed a technical word processor that both supported complex notations and was capable of running on minicomputers and other high-end machines such as Alliant and AT&T. WordMARC was originally known as MUSE (MARC Universal Screen Editor), but the name was changed because of a trademark conflict with another company when the product was ported to the IBM PC. Features In comparison with WordPerf...
    negative: Parametric stereo (abbreviated as PS) is an audio compression algorithm used as an audio coding format for digital audio. It is considered an Audio Object Type of MPEG-4 Part 3 (MPEG-4 Audio) that serves to enhance the coding efficiency of low bandwidth stereo audio media. Parametric Stereo digitally codes a stereo audio signal by storing the audio as monaural alongside a small amount of extra information. This extra information (defined as "parametric overhead") describes how the monaural signal will behave across both stereo channels, which allows for the signal to exist in true stereo upon playback. History Background Advanced Audio Coding Low Complexity (AAC LC) combined with Spectral Band Replication (SBR) and Parametric Stereo (PS) was defined as HE-AAC v2. A HE-AAC v1 decoder will only give a mono output when decoding a HE-AAC v2 bitstream. Parametric Stereo performs sparse coding in the spatial domain, somewhat similar to what SBR does in the frequency domain. An AAC HE v2 b...
- Loss: TripletLoss with these parameters: { "distance_metric": "TripletDistanceMetric.EUCLIDEAN", "triplet_margin": 5 }
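The loss above can be sketched in plain PyTorch. This is an illustrative reimplementation (not the sentence-transformers internals) of triplet loss with the Euclidean distance metric and margin of 5 from the parameters:

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor: torch.Tensor, positive: torch.Tensor,
                 negative: torch.Tensor, margin: float = 5.0) -> torch.Tensor:
    """max(0, d(a, p) - d(a, n) + margin), averaged over the batch."""
    d_pos = (anchor - positive).norm(p=2, dim=1)  # Euclidean, as in TripletDistanceMetric.EUCLIDEAN
    d_neg = (anchor - negative).norm(p=2, dim=1)
    return F.relu(d_pos - d_neg + margin).mean()

# A negative at least `margin` farther from the anchor than the positive incurs zero loss.
a = torch.tensor([[0.0, 0.0]])
p = torch.tensor([[0.0, 0.0]])
n = torch.tensor([[3.0, 4.0]])  # distance 5 from the anchor
loss = triplet_loss(a, p, n)    # -> tensor(0.)
```

Training thus pushes each anchor's positive passage closer than its negative passage by at least the margin in embedding space.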
Training Hyperparameters
Non-Default Hyperparameters
- learning_rate: 2e-05
- weight_decay: 0.01
- num_train_epochs: 1
- warmup_steps: 10
- remove_unused_columns: False
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: no
- prediction_loss_only: True
- per_device_train_batch_size: 8
- per_device_eval_batch_size: 8
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.01
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 10
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: False
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- tp_size: 0
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
Training Logs
| Epoch | Step | Training Loss |
|---|---|---|
| 0.1 | 100 | 4.2263 |
| 0.2 | 200 | 3.9742 |
| 0.3 | 300 | 3.9605 |
| 0.4 | 400 | 3.9198 |
| 0.5 | 500 | 3.8953 |
| 0.6 | 600 | 3.8793 |
| 0.7 | 700 | 3.8918 |
| 0.8 | 800 | 3.8691 |
| 0.9 | 900 | 3.8747 |
| 1.0 | 1000 | 3.8523 |
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.7.0+cu126
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
TripletLoss
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}