id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
74,203,091 | https://en.wikipedia.org/wiki/Configuron | A configuron is an elementary configurational excitation in an amorphous material that involves the breaking of a chemical bond. The term was coined by C.A. Angell and K.J. Rao; the concept encompasses both the breaking and the reforming of chemical bonds.
These configurational excitations, or configurons, serve as a crucial aspect of understanding the dynamic behaviors of amorphous materials. Essentially, these are the fundamental building blocks that dictate the arrangements of atoms or molecules within these substances.
Understanding configurons can open avenues in various fields, such as materials science and electronics, by allowing more precise manipulation of amorphous materials' properties.
See also
Quasiparticle
Amorphous solid
Condensed matter physics
Configuration interaction
References
Condensed matter physics
Materials science
Quasiparticles
Amorphous solids | Configuron | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 170 | [
"Matter",
"Applied and interdisciplinary physics",
"Phases of matter",
"Materials science",
"Unsolved problems in physics",
"Condensed matter physics",
"Quasiparticles",
"Amorphous solids",
"Subatomic particles"
] |
74,204,240 | https://en.wikipedia.org/wiki/Reactances%20of%20synchronous%20machines | The reactances of synchronous machines comprise a set of characteristic constants used in the theory of synchronous machines. Technically, these constants are specified in units of the electrical reactance (ohms), although they are typically expressed in the per-unit system and thus dimensionless. Since for practically all (except for the tiniest) machines the resistance of the coils is negligibly small in comparison to the reactance, the latter can be used instead of (complex) electrical impedance, simplifying the calculations.
Two-reaction theory
The air gap of machines with a salient pole rotor is quite different along the pole axis (the so-called direct axis) and in the orthogonal direction (the so-called quadrature axis). André Blondel in 1899 proposed in his paper "Empirical Theory of Synchronous Generators" the two-reaction theory, which divided the armature magnetomotive force (MMF) into two components: the direct axis component and the quadrature axis component. The direct axis component is aligned with the magnetic axis of the rotor, while the quadrature (or transverse) axis component is perpendicular to the direct axis. The relative strengths of these two components depend on the design of the machine and the operating conditions. Since the equations naturally split into direct and quadrature components, many reactances come in pairs, one for the direct axis (with the index d), one for the quadrature axis (with the index q). This analysis is often performed using the direct-quadrature-zero transformation.
In machines with a cylindrical rotor the air gap is uniform, the reactances along the d and q axes are equal, and d/q indices are frequently dropped.
States of the generator
The flux linkages of the generator vary with its state. This analysis is usually applied to transients following a short circuit. Three states are considered:
the steady-state is the normal operating condition with the armature magnetic flux going through the rotor;
the sub-transient state () is the one the generator enters immediately after the fault (short circuit). In this state the armature flux is pushed completely out of the rotor. The state is very brief, as the current in the damper winding quickly decays, allowing the armature flux to enter the rotor poles only; the generator then goes into the transient state;
in the transient state () the flux is still kept out of the field winding of the rotor. The transient state decays to the steady state in a few cycles.
The sub-transient () and transient () states are characterized by significantly smaller reactances.
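As a rough numerical illustration of why these smaller reactances matter: the initial fault current seen in each state scales inversely with the corresponding reactance. All per-unit values below are assumed for illustration only, not taken from any machine datasheet.

```python
# Rough fault-current comparison across the three states: the same internal
# EMF divided by the sub-transient, transient and synchronous reactances.
# All per-unit values are assumed for illustration only.
e = 1.0                # internal EMF, per unit
x_subtransient = 0.15  # X'' (assumed)
x_transient = 0.30     # X'  (assumed)
x_synchronous = 1.60   # X   (assumed)

for name, x in [("sub-transient", x_subtransient),
                ("transient", x_transient),
                ("steady-state", x_synchronous)]:
    print(f"{name}: I = {e / x:.2f} pu")
# The smaller the reactance, the larger the initial fault current.
```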
Leakage reactances
The nature of magnetic flux makes it inevitable that part of the flux deviates from the intended "useful" path. In most designs, the productive flux links the rotor and stator; the flux that links just the stator (or the rotor) to itself is useless for energy conversion and thus is considered to be wasted leakage flux (stray flux). The corresponding inductance is called leakage inductance. Due to the presence of air gap, the role of the leakage flux is more important in a synchronous machine in comparison to a transformer.
Synchronous reactances
The synchronous reactances are exhibited by the armature in the steady-state operation of the machine. The three-phase system is viewed as a superposition of two: the direct one, where the maximum of the phase current is reached when the pole is oriented towards the winding, and the quadrature one, which is offset by 90°.
The per-phase reactance can be determined in a thought experiment where the rotor poles are perfectly aligned with a specific angle of the phase field in the armature (0° for , 90° for the ). In this case, the reactance will be related to the flux linkage and the phase current as , where is the circular frequency. The conditions for this thought experiment are hard to recreate in practice, but:
when the armature is short-circuited, the flowing current is practically all reactive (as the coil resistance is negligible), thus under the short-circuit condition the poles of the rotor are aligned with the armature magnetomotive force;
when the armature is left open-circuit, the voltage on the terminals is also aligned with the same phase and is equal to . If saturation is neglected, the flux linkage is the same.
Therefore, the direct synchronous reactance can be determined as a ratio of the voltage in open condition to short-circuit current : . These current and voltage values can be obtained from the open-circuit saturation curve and the synchronous impedance curve.
The synchronous reactance is a sum of the leakage reactance and the reactance of the armature itself (): .
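A minimal numeric sketch of the two relations above, using assumed per-unit test values rather than data from a real machine:

```python
# Sketch of estimating the direct-axis synchronous reactance; the per-unit
# numbers are assumed test values, not data from a real machine.

def synchronous_reactance(v_open_circuit, i_short_circuit):
    """X_d as the ratio of open-circuit voltage to short-circuit current."""
    return v_open_circuit / i_short_circuit

# From a (hypothetical) open-circuit / short-circuit test pair:
x_d_from_test = synchronous_reactance(v_open_circuit=1.0, i_short_circuit=0.625)

# Equivalently, X_d as the sum of the leakage and armature reactances:
x_leakage = 0.12   # per unit (assumed)
x_armature = 1.48  # per unit (assumed)
x_d_from_parts = x_leakage + x_armature

print(x_d_from_test, x_d_from_parts)  # both values come out near 1.6 pu
```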
Sequence network reactances
When analyzing unbalanced three-phase systems, it is common to use the method of symmetrical components. This models the machine by three components, each with a positive sequence reactance , a negative sequence reactance , and a zero sequence reactance .
List of reactances
Das identifies the following reactances:
leakage reactance . Potier reactance is an estimate of the armature leakage reactance;
synchronous reactance (also );
transient reactance ;
subtransient reactance ;
quadrature axis reactances , , , counterparts to , , ;
negative sequence reactance ;
zero sequence reactance .
References
Sources
Electrical engineering
Electrical generators | Reactances of synchronous machines | [
"Physics",
"Technology",
"Engineering"
] | 1,137 | [
"Physical systems",
"Electrical generators",
"Machines",
"Electrical engineering"
] |
74,209,853 | https://en.wikipedia.org/wiki/Thermodynamic%20modelling | Thermodynamic modelling is a set of different strategies that are used by engineers and scientists to develop models capable of evaluating different thermodynamic properties of a system. At each thermodynamic equilibrium state of a system, the thermodynamic properties of the system are specified. Generally, thermodynamic models are mathematical relations that relate different state properties to each other in order to eliminate the need of measuring all the properties of the system in different states.
The simplest thermodynamic models, also known as equations of state, can come from simple correlations that relate different thermodynamic properties using a linear or second-order polynomial function of temperature and pressure. They are generally fitted using experimental data available for those specific properties. This approach can result in limited predictability of the correlation, and as a consequence it can be adopted only in a limited operating range.
By contrast, more advanced thermodynamic models are built in a way that can predict the thermodynamic behaviour of the system, even if the functional form of the model is not based on the real thermodynamic behaviour of the material. These types of models contain different parameters that are gradually developed for each specific model in order to enhance the accuracy of the evaluated thermodynamic properties.
Cubic model development
Cubic equations of state refer to the group of thermodynamic models that can evaluate the specific volume of gas and liquid systems as a function of pressure and temperature. To develop a cubic model, first, it is essential to select a cubic functional form. The most famous functional forms of this category are Redlich-Kwong, Soave-Redlich-Kwong and Peng-Robinson. Although their initial form is empirically suggested, they are categorised as semi-empirical models as their parameters can be adjusted to fit the real experimental measurement data of the target system.
Pure component modelling
In case the development of a cubic model for a pure component is targeted, the purpose is to replicate the specific volume behaviour of the fluid in terms of temperature and pressure. At a given temperature, any cubic functional form results in two physically meaningful roots, which makes it possible to model the behaviour of both the vapour and liquid phases within a single model. The roots of the cubic function are found by simulating the vapour-liquid equilibrium condition of the pure component, where the fugacity coefficients of the two phases are equal to each other.
So, in this case, the main aim can be limited to deriving fugacity coefficients of vapour and liquid phases from the cubic model and refining the adjustable parameters of the model such that they will become equal to each other at different equilibrium pairs of temperature and pressure. As the equilibrium pressure and temperature are related together in the case of a pure component system, the functional form of cubic models are able to evaluate the specific volume of the system in the wide range of temperature and pressure domain.
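As a rough sketch of the root-finding step described above, the Peng-Robinson form can be written as a cubic in the compressibility factor Z and solved numerically. The CO2 critical constants below are standard literature values, while the chosen state point is an arbitrary illustrative one, not taken from this article:

```python
# Sketch of a Peng-Robinson cubic EOS for a pure component (CO2 here).
# Critical constants are standard literature values; the state point is
# an arbitrary illustrative choice.
import numpy as np

R = 8.314462  # J/(mol K)

def pr_z_factors(T, P, Tc, Pc, omega):
    """Return the real compressibility-factor roots of the PR cubic, sorted."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1 + kappa * (1 - np.sqrt(T / Tc)))**2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    A = a * P / (R * T)**2
    B = b * P / (R * T)
    # Z^3 - (1 - B) Z^2 + (A - 3B^2 - 2B) Z - (AB - B^2 - B^3) = 0
    coeffs = [1.0, -(1.0 - B), A - 3 * B**2 - 2 * B, -(A * B - B**2 - B**3)]
    roots = np.roots(coeffs)
    return sorted(z.real for z in roots if abs(z.imag) < 1e-8)

# CO2 below its critical temperature: the smallest and largest real roots
# correspond to the liquid and vapour phases respectively.
zs = pr_z_factors(T=280.0, P=40e5, Tc=304.13, Pc=73.77e5, omega=0.2239)
v_molar = [z * R * 280.0 / 40e5 for z in zs]  # molar volumes, m^3/mol
```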
Multi-component modelling
Cubic model development for mixtures of more than one component is different because, according to the Gibbs phase rule, at each temperature level of a multi-component system, equilibrium states can exist at multiple pressure levels. Because of this, development of the thermodynamic model proceeds in several steps:
Selection of the cubic model: The initial step is the selection of a cubic functional form. Essentially, there exists no specific rule for this step. It can be done based on common practices of the cubic models already developed for the pure components existing in the mixture.
Single phase: Although a cubic model for a pure component is capable of predicting the specific volume of the system in both vapour and liquid phases, this is not the case for multi-component systems. Currently, cubic models are used for the prediction of specific volume only in the vapour phase, while the liquid phase is modelled with more complex models based on the excess Gibbs energy, such as UNIFAC, UNIQUAC, etc.
Vapour and liquid phases: Cubic models can be expanded to model multi-component systems at both vapour and liquid phases by integrating a proper mixing rule in their structural function.
Mixing rules
Mixing rules refer to different approaches that can be used to modify the cubic model in the case of multi-component mixtures. The simplest mixing rule was proposed by van der Waals and is called the van der Waals one-fluid (vdW1f) mixing rule. As its name suggests, this mixing rule is only used in the case of modelling a single phase (the vapour phase). As a first step, to combine the model parameters for each binary combination of the mixture, the following equations are suggested:
where and are the parameters of the main target cubic model that was previously chosen. Then, all the possible binary combinations together with the concentration of each constituent in the mixture are used to define the final parameters for the mixture model as below:
In the case of using this mixing rule, apart from the two adjustable binary interaction parameters (BIPs) for each combination ( and ), all other parameters are specified based on the pure component parameters and the concentration of the different constituents in the mixture. Model development in this case is therefore limited to adjusting these two parameters such that the fugacity coefficients in the different phases become equal to each other at a given temperature and pressure level. To overcome the limitation of predicting only single-phase behaviour with this mixing rule, more advanced mixing rules have been developed. To predict the thermodynamic behaviour of a multi-component system in different phases, it is essential to build an energy function as a fundamental property of the system. Although this is mainly the case for the fundamental models, advanced mixing rules such as the Huron-Vidal mixing rule and the Wong-Sandler mixing rule have been developed to adjust the parameters of cubic models to incorporate these fundamental properties. This is usually done by building a mathematical structure capable of calculating the excess Gibbs energy of the system, generally via one of two widely used approaches: UNIFAC and the Non-Random Two-Liquid (NRTL) method. The choice of the proper mixing rule for the target system can be made based on its inherent properties, such as the polarity of the different components, the reactivity of the system's constituents with respect to each other, etc.
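The combining and mixing equations themselves are elided in this text, but the vdW1f rule is simple enough to sketch: geometric-mean combining for the energy parameter, arithmetic-mean combining for the co-volume, each corrected by a BIP, then a concentration-weighted double sum. All numerical values below are placeholders, not fitted parameters.

```python
# Sketch of the van der Waals one-fluid (vdW1f) mixing rule. The pure
# component a_i, b_i values and the BIPs k_ij, l_ij are placeholders.
import math

def vdw1f_mix(x, a, b, k=None, l=None):
    """Combine pure-component EOS parameters into mixture a_mix, b_mix."""
    n = len(x)
    k = k if k is not None else [[0.0] * n for _ in range(n)]
    l = l if l is not None else [[0.0] * n for _ in range(n)]
    # a_ij = sqrt(a_i a_j)(1 - k_ij);  b_ij = (b_i + b_j)/2 (1 - l_ij)
    a_mix = sum(x[i] * x[j] * math.sqrt(a[i] * a[j]) * (1 - k[i][j])
                for i in range(n) for j in range(n))
    b_mix = sum(x[i] * x[j] * 0.5 * (b[i] + b[j]) * (1 - l[i][j])
                for i in range(n) for j in range(n))
    return a_mix, b_mix

# Equimolar binary mixture with all BIPs left at zero:
a_mix, b_mix = vdw1f_mix(x=[0.5, 0.5], a=[0.40, 0.25], b=[3.0e-5, 2.7e-5])
```

With the BIPs at zero, b_mix reduces to the mole-fraction-weighted mean of the pure-component co-volumes; fitting the BIPs to equilibrium data is what adapts the model to a specific mixture.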
Fundamental model development
Fundamental models refer to a family of thermodynamic models that propose a mathematical form for one of the fundamental thermodynamic properties of the system, such as Gibbs free energy or Helmholtz free energy. The core idea behind this type of thermodynamic models is that, by constructing the fundamental property, it is possible to take advantage of thermodynamic relations that express different thermodynamic properties as the first or second-order derivatives of fundamental properties, with respect to pressure, temperature or density.
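As a toy illustration of this derivative structure (using the ideal-gas Helmholtz energy purely as an example, not one of the fundamental models discussed here): once A(T, V) is postulated, pressure and entropy follow by differentiation.

```python
# Toy fundamental model: postulate a Helmholtz free energy A(T, V), then
# obtain other properties as derivatives. The ideal-gas form is used here
# only as a simple, verifiable example.
import sympy as sp

T, V, n, R, V0 = sp.symbols('T V n R V0', positive=True)
A = -n * R * T * sp.log(V / V0)  # temperature-only terms omitted

P = -sp.diff(A, V)  # pressure:  P = -(dA/dV)_T
S = -sp.diff(A, T)  # entropy:   S = -(dA/dT)_V

print(sp.simplify(P))  # recovers the ideal-gas law P = nRT/V
```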
Helmholtz free energy models
For the development of Helmholtz free energy models, the idea is to associate different parameters with the different intermolecular forces between system species. As a result, these models are referred to as multi-parameter models. The steps to develop a Helmholtz free energy model can be summarized as:
Helmholtz free energy of pure components: Like all the thermodynamic models, the first step is to build the Helmholtz free energy of pure constituents of a system. For well-known components, such as carbon dioxide and nitrogen, such functions are already established and reported in the literature. These can be used as the starting point to establish such models for multi-component systems.
Helmholtz free energy of binary mixtures: the Helmholtz free energy of a multi-component system can be obtained from the weighted sum of the Helmholtz free energies of each binary combination of the system constituents. The binary Helmholtz free energy contains different terms that take into account the various intermolecular forces that can exist, based on the nature of the two target components. Such models have been developed for natural gas components through the GERG-2008 thermodynamic model, and through EOS-CG for humid and combustion gas-like mixtures. The main advantage of these models is their generality, which makes them applicable to a wide range of pressures and temperatures, and to the whole concentration range of the involved constituents.
Thermodynamic model criteria
A thermodynamic model predicts different properties with a certain level of accuracy. Depending on the functional form of the thermodynamic model and the real behaviour of the system, some properties can be predicted with a high level of accuracy, while others may not be predicted accurately enough to comply with different industrial needs. In this regard, several criteria should be taken into account for the proper choice of a thermodynamic model for the targeted application.
Applicability
Although thermodynamic models are generally developed to predict thermodynamic properties over a wide range of temperatures and pressures, due to the lack of experimental data for different compounds across the full operational range, model accuracy varies when moving towards wider temperature and pressure ranges. When a model is targeted for a specific application, the initial step is to identify the temperatures and pressures at which the model is intended to be used. If the model is able to perform in the target operating window, the second step is to investigate whether the model can cover all the system constituents within the concentration ranges of interest. Fundamental models address this issue by covering the whole concentration range of the compounds they involve. However, this is not the case for ad-hoc cubic model developments, which may be valid only in a specific concentration range depending on the application.
Robustness
Thermodynamic models should be robust and reliable, providing consistent results across different conditions and applications. They should be able to handle non-ideal behaviour, phase transitions, and complex interactions without significant loss of accuracy. Although some models are capable of taking into account possible reactions between the system constituents, this is not the case for other, simpler models that can only predict the behaviour of the system in a specific phase. It is therefore essential to identify the typical behaviour of the fluid in the target application in order to select and develop a proper model. In most engineering applications, however, a model that is able to predict the thermodynamic properties of the system in different phases and critical regions, while taking into account possible reactions between the system constituents, is a necessity.
Accuracy
Depending on the foundation on which each thermodynamic model is built, the accuracy can vary, not only for a specific property evaluated by different models but also across the different properties predicted by a single model. Cubic models are developed based on phase equilibrium and, as a result, they can predict the phase equilibrium of pure and multi-component systems within an acceptable accuracy level, provided the model is fine-tuned to the experimental data of interest. However, this family of models is not accurate enough in predicting density and specific heat capacity, the two main thermodynamic properties that are of importance in most industrial applications. For this reason, corrections have been suggested to enhance the accuracy of cubic models for different properties, such as the Péneloux volume translation for density prediction.
On the other hand, models that are developed based on fundamental properties such as Gibbs free energy or Helmholtz free energy are generally capable of predicting a wider range of properties. Because these models have a large number of adjustable parameters fitted to experimental data for different properties, they are generally the most accurate.
Computational speed
The model should be computationally efficient, especially for complex systems and large-scale simulations. The model's equations and algorithms should be designed to minimize computational time. This is especially important in cases where transient processes are targeted, in which thermodynamic properties change significantly over the transient time domain and computationally demanding models cannot satisfy industrial needs.
Availability
In certain applications, it may be important to consider the acceptance and implementation of a specific thermodynamic model within the industry. Industrial standards and guidelines can provide insights into the preferred models for specific processes. However, not all thermodynamic models are widely available in commercial software packages. This is especially the case for more complex fundamental models which, despite their robustness, are not yet widely adopted by industry due to their limited availability.
See also
Thermodynamic equilibrium
List of thermodynamic properties
Equation of state
Wong-Sandler mixing rule
Combining rules
UNIFAC
NRTL
References
Thermodynamic models
Engineering thermodynamics
Equations of state | Thermodynamic modelling | [
"Physics",
"Chemistry",
"Engineering"
] | 2,565 | [
"Equations of physics",
"Thermodynamic models",
"Engineering thermodynamics",
"Statistical mechanics",
"Thermodynamics",
"Mechanical engineering",
"Equations of state"
] |
74,209,862 | https://en.wikipedia.org/wiki/Medical%20open%20network%20for%20AI | Medical open network for AI (MONAI) is an open-source, community-supported framework for Deep learning (DL) in healthcare imaging. MONAI provides a collection of domain-optimized implementations of various DL algorithms and utilities specifically designed for medical imaging tasks. MONAI is used in research and industry, aiding the development of various medical imaging applications, including image segmentation, image classification, image registration, and image generation.
MONAI was first introduced in 2019 by a collaborative effort of engineers from Nvidia, the National Institutes of Health, and the King's College London academic community. The framework was developed to address the specific challenges and requirements of DL applied to medical imaging.
Built on top of PyTorch, a popular DL library, MONAI offers a high-level interface for performing everyday medical imaging tasks, including image preprocessing, augmentation, DL model training, evaluation, and inference for diverse medical imaging applications. MONAI simplifies the development of DL models for medical image analysis by providing a range of pre-built components and modules.
MONAI is part of a larger suite of Artificial Intelligence (AI)-powered software called NVIDIA Clara. Besides MONAI, Clara also comprises NVIDIA Parabricks for genome analysis.
Medical image analysis foundations
Medical imaging is a range of imaging techniques and technologies that enables clinicians to visualize the internal structures of the human body. It aids in diagnosing, treating, and monitoring various medical conditions, thus allowing healthcare professionals to obtain detailed and non-invasive images of organs, tissues, and physiological processes.
Medical imaging has evolved, driven by technological advancements and scientific understanding. Today, it encompasses modalities such as X-ray, Computed Tomography (CT), Magnetic Resonance Imaging (MRI), ultrasound, nuclear medicine, and digital pathology, each offering capabilities and insights into human anatomy and pathology.
The images produced by these medical imaging modalities are interpreted by radiologists, trained specialists in analyzing and diagnosing medical conditions based on the visual information captured in the images. In recent years, the field has witnessed advancements in computer-aided diagnosis, integrating Artificial intelligence and Deep learning techniques to automate medical image analysis and assist radiologists in detecting abnormalities and improving diagnostic accuracy.
Features
MONAI provides a robust suite of libraries, tools, and Software Development Kits (SDKs) that encompass the entire process of building medical imaging applications. It offers a comprehensive range of resources to support every stage of developing Artificial intelligence (AI) solutions in the field of medical imaging, from initial annotation (MONAI Label), through models development and evaluation (MONAI Core), and final application deployment (MONAI deploy application SDK).
Medical data labeling
MONAI Label is a versatile tool that enhances the image labeling and learning process by incorporating AI assistance. It simplifies the task of annotating new datasets by leveraging AI algorithms and user interactions. Through this collaboration, MONAI Label trains an AI model for a specific task and continually improves its performance as it receives additional annotated images. The tool offers a range of features and integrations that streamline the annotation workflow and ensure seamless integration with existing medical imaging platforms.
AI-assisted annotation: MONAI Label assists researchers and practitioners in medical imaging by suggesting annotations based on user interactions by utilizing AI algorithms. This AI assistance significantly reduces the time and effort required for labeling new datasets, allowing users to focus on more complex tasks. The suggestions provided by MONAI Label enhance efficiency and accuracy in the annotation process.
Continuous learning: as users provide additional annotated images, MONAI Label utilizes this data to improve its performance over time. The tool updates its AI model with the newly acquired annotations, enhancing its ability to label images and adapt to specific tasks.
Integration with medical imaging platforms: MONAI Label integrates with medical imaging platforms such as 3D Slicer, Open Health Imaging Foundation viewer for radiology, QuPath, and digital slide archive for pathology. These integrations enable communication between MONAI Label and existing medical imaging tools, facilitating collaborative workflows and ensuring compatibility with established platforms.
Custom viewer integration: developers have the flexibility to integrate MONAI Label into their custom image viewers using the provided server and client APIs. These APIs are abstracted and thoroughly documented, facilitating smooth integration with bespoke applications.
Deep learning model development and evaluation
Within MONAI Core, researchers can find a collection of tools and functionalities for dataset processing, loading, Deep learning (DL) model implementation, and evaluation. These utilities allow researchers to evaluate the performance of their models. MONAI Core offers customizable training pipelines, enabling users to construct and train models that support various learning approaches such as supervised, semi-supervised, and self-supervised learning. Additionally, users have the flexibility to implement different computing strategies to optimize the training process.
Image I/O, processing, and augmentation: domain-specific APIs are available to transform data into arrays and different dictionary formats. Additionally, patch sampling strategies enable the generation of class-balanced samples from high-dimensional images. This ensures that the sampling process maintains balance and fairness across different classes present in the data. Furthermore, invertible transforms provided by MONAI Core allow for the reversal of model outputs to a previous preprocessing step. This is achieved by leveraging tracked metadata and applied operations, enabling researchers to interpret and analyze model results in the context of the original data.
Datasets and data loading: multi-threaded cache-based datasets support high-frequency data loading, public dataset availability accelerates model deployment and performance reproducibility, and custom APIs support compressed, image-based, patch-based, and multimodal data sources.
Differentiable components, networks, losses, and optimizers: MONAI Core provides network layers and blocks that can seamlessly handle spatial 1D, 2D, and 3D inputs. Users have the flexibility to effortlessly integrate these layers, blocks, and networks into their personalized pipelines. The library also includes commonly used loss functions, such as Dice loss, Tversky loss, and Dice focal loss, which have been (re-)implemented from literature. In addition, MONAI Core offers numerical optimization techniques like Novograd and utilities like learning rate finder to facilitate the optimization process.
Evaluation: MONAI Core provides a comprehensive set of evaluation metrics for assessing the performance of medical image models. These metrics include mean Dice, Receiver operating characteristic curves, Confusion matrices, Hausdorff distance, surface distance, and occlusion sensitivity. The metric summary report generates statistical information such as mean, median, maximum, minimum, percentile, and standard deviation for the computed evaluation metrics.
GPU acceleration, performance profiling, and optimization: MONAI leverages a range of tools including DLProf, Nsight, NVTX, and NVML to detect performance bottlenecks. The distributed data-parallel APIs seamlessly integrate with the native PyTorch distributed module, PyTorch-ignite distributed module, Horovod, XLA, and the SLURM platform.
DL model collection: by offering the MONAI Model Zoo, MONAI establishes itself as a platform that enables researchers and data scientists to access and share cutting-edge models developed by the community. Leveraging the MONAI Bundle format, users can seamlessly and efficiently utilize any model within the MONAI frameworks (Core, Label, or Deploy).
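Among the evaluation metrics listed above, the mean Dice score is the most common for segmentation. The snippet below shows the underlying computation in plain NumPy on a tiny synthetic binary mask; MONAI's own implementation lives in `monai.metrics` and adds batching and related options, so this is only an illustrative sketch of the formula.

```python
# Plain NumPy sketch of the Dice score on a tiny synthetic binary
# segmentation; not MONAI's implementation, just the underlying formula.
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice = 2|X ∩ Y| / (|X| + |Y|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_score(pred, target))  # 2*2 / (3 + 3) ≈ 0.667
```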
AI-inference application development kit
The MONAI deploy application SDK offers a systematic series of steps empowering users to develop and fine-tune their AI models and workflows for deployment in clinical settings. These steps act as checkpoints, guaranteeing that the AI inference infrastructure adheres to the essential standards and requirements for seamless clinical integration.
Key components of the MONAI Deploy Application SDK include:
Pythonic framework for app development: the SDK presents a Python-based framework designed specifically for creating healthcare-focused applications. With its adaptable foundation, this framework enables the streamlined development of AI-driven applications tailored to the healthcare domain.
MONAI application package packaging mechanism: the SDK incorporates a tool for packaging applications into MONAI Application Packages (MAP). These MAP instances establish a standardized format for bundling and deploying applications, ensuring portability and facilitating seamless distribution.
Local MAP execution via app runner: the SDK provides an app runner feature that enables the local execution of MAP instances. This functionality empowers developers to run and test their applications within a controlled environment, allowing prototyping and debugging.
Sample applications: the SDK includes a selection of sample applications that serve as both practical examples and starting points for developers. These sample applications showcase different use cases and exemplify best practices for effectively utilizing the MONAI Deploy framework.
API documentation: the SDK is complemented by comprehensive documentation that outlines the available APIs and provides guidance to developers on effectively leveraging the provided tools and functionalities.
Applications
MONAI has found applications in various research studies and industry implementations across different anatomical regions. For instance, it has been utilized in academic research involving automatic cranio-facial implant design, brain tumor analysis from Magnetic Resonance images, identification of features in focal liver lesions from MRI scans, radiotherapy planning for prostate cancer, preparation of datasets for fluorescence microscopy imaging, and classification of pulmonary nodules in lung cancer.
In healthcare settings, hospitals have leveraged MONAI to enhance mammography reading by employing Deep learning models for breast density analysis. This approach reduces the waiting time for patients, allowing them to receive mammography results within 15 minutes, so clinicians save time and patients experience shorter waits. This advancement enables patients to engage in immediate discussions with their clinicians during the same appointment, facilitating prompt decision-making and discussion of next steps before leaving the facility. Moreover, hospitals can employ MONAI to identify indications of a COVID-19 patient's deteriorating condition or determine if they can be safely discharged, optimizing patient care and post-COVID-19 decision-making.
In the corporate realm, companies choose MONAI to develop product applications addressing various clinical challenges. These include ultrasound-based scoliosis assessment, Artificial intelligence-based pathology image labeling, in-field pneumothorax detection using ultrasound, characterization of brain morphology, detection of micro-fractures in teeth, and non-invasive estimation of intracranial pressure.
See also
Artificial intelligence in healthcare
Medical imaging
Deep learning
Image segmentation
Image registration
Image generation
References
Further reading
External links
Medical software
Free health care software | Medical open network for AI | [
"Biology"
] | 2,178 | [
"Medical software",
"Medical technology"
] |
74,213,354 | https://en.wikipedia.org/wiki/Huygens%20principle%20of%20double%20refraction | Huygens principle of double refraction, named after Dutch physicist Christiaan Huygens, explains the phenomenon of double refraction observed in uniaxial anisotropic material such as calcite. When unpolarized light propagates in such materials (along a direction different from the optical axis), it splits into two different rays, known as ordinary and extraordinary rays. The principle states that every point on the wavefront of birefringent material produces two types of wavefronts or wavelets: spherical wavefronts and ellipsoidal wavefronts. These secondary wavelets, originating from different points, interact and interfere with each other. As a result, the new wavefront is formed by the superposition of these wavelets.
History
The systematic exploration of light polarization began during the 17th century. In 1669, Rasmus Bartholin made an observation of double refraction in a calcite crystal and documented it in a published work in 1670. Later, in 1690, Huygens identified polarization as a characteristic of light and provided a demonstration using two identical blocks of calcite placed in succession. Each crystal divided an incoming ray of light into two, which Huygens referred to as "regular" and "irregular" (in modern terminology: ordinary and extraordinary). However, if the two crystals were aligned in the same orientation, no further division of the light occurred.
Huygens–Fresnel principle
While the Huygens' principle of double refraction explains the phenomenon of double refraction in an optically anisotropic medium, the Huygens–Fresnel principle pertains to the propagation of waves in an optically isotropic medium. According to the Huygens–Fresnel principle, each point on a wavefront can be considered a secondary point source of waves, so a new wavefront is formed after the secondary wavelets have traveled for a period equal to one vibration cycle. This new wavefront can be described as an envelope or tangent surface to these secondary wavelets. Understanding and forecasting the classical wave propagation of light is based on the Huygens-Fresnel principle.
Polarization of light
Light is a transverse electromagnetic wave, consisting of mutually perpendicular, oscillating electric and magnetic fields. Both fields are perpendicular to the propagation direction of the wave. For example, if the wave propagation is in the z-direction, both the electric field and the magnetic field lie in the xy-plane. The electric field points in a specific direction in space since it is a vector. The direction of an electromagnetic wave's electric field vector E is referred to as polarization. If the electric field oscillates in the x-direction, the polarization of the light will be linear, along the x-direction.
Plane wave equation of the light
The electromagnetic wave equation's sinusoidal solution has the following form:
E(r, t) = E0 cos(k · r − ωt + φ),
where
t is time (in seconds),
ω is the angular frequency (in radians per second),
φ is the phase angle constant (in rad), and
k is the wave vector of the wave (in rad/m).
The wave vector is related to the angular frequency and the speed of light c by
k = ω/c = 2π/λ,
where k is the wavenumber (the magnitude of the wave vector) and λ is the wavelength.
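A short numerical sketch can make these relations concrete. The snippet below (function and variable names are illustrative, not from the article) evaluates the scalar plane-wave form E0·cos(kz − ωt + φ) for propagation along z, using k = 2π/λ and ω = ck:

```python
import math

C = 299_792_458.0  # speed of light in vacuum (m/s)

def plane_wave(E0, wavelength, z, t, phase=0.0):
    """Scalar field E0*cos(k*z - omega*t + phase) of a plane wave along z."""
    k = 2 * math.pi / wavelength  # wavenumber (rad/m)
    omega = C * k                 # angular frequency (rad/s), omega = c*k
    return E0 * math.cos(k * z - omega * t + phase)

# For 500 nm light, the field at a fixed point repeats after one period T = wavelength/c.
wl = 500e-9
T = wl / C
assert abs(plane_wave(1.0, wl, 0.0, 0.0) - plane_wave(1.0, wl, 0.0, T)) < 1e-9
```

The assertion simply confirms the temporal periodicity implied by the cosine form.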
Unpolarized light
If we were able to observe a light wave originating from an ordinary source and directed toward us, such as the light emitted by an incandescent bulb, we would find that it consists of a mixture of light waves. These waves exhibit electric field components that fluctuate at a rapid pace, nearly matching the optical frequency itself, with a time scale of approximately 10^-14 seconds. Consequently, the direction of oscillation of the electric field vector occurs in all possible planes perpendicular to the direction of the light beam. Unpolarized light is a type of light wave where the electric field vector oscillates in multiple planes. Light emitted by the sun, incandescent lamps, or candle flames is considered to be unpolarized.
Types of polarization
The light wave polarization specifies the form and location of the electric field vector's direction at a particular point in space as a function of time (in the plane perpendicular to the propagation direction). There are three possible polarization states for light, depending on where the vector's direction is located. The first is plane or linear polarization, the second is elliptical polarization, and the third is circular polarization.
The light may also be partially polarized in addition to these. The polarization of light cannot be determined by the human eye on its own. However, some animals and insects have a vision that is sensitive to polarization.
Plane linear polarized light
Light waves that exhibit oscillation in a single plane are referred to as plane-polarized light waves. In such waves, the electric field vector (E) oscillates exclusively within a single plane that is perpendicular to the direction of wave propagation. This type of wave is also called a linearly polarized wave since the orientation of the field vector at any given point in space and time lies along a line within a plane perpendicular to the wave's direction of propagation.
Isotropic and anisotropic materials
Materials can be classified into two categories based on their isotropy. Materials that are isotropic have the same physical characteristics throughout. In other words, regardless of the direction in which they are measured, their characteristics, such as optical, electrical, and mechanical, stay constant. Gases, liquids, and amorphous solids like glass are instances of isotropic materials. On the other hand, anisotropic materials show various physical characteristics depending on the direction of measurement. Their characteristics are not constant throughout the substance. Crystal structure, molecule orientation, or the presence of preferred axes can all be causes of anisotropy. Crystals, certain polymers, calcite, and numerous minerals are typical examples of anisotropic materials. The physical characteristics of anisotropic materials, such as refractive index, electrical conductivity, and mechanical qualities, can differ depending on the direction of measurement.
Optical axis and types of anisotropic materials
A frequent notion in the study of anisotropic materials, particularly in the context of optics, is the optical axis. It refers to a particular axis within the material along which certain optical characteristics remain unaltered. To put it in another way, the light that travels along the optical axis does not experience anisotropic behaviours on the transverse plane.
It is possible to further divide anisotropic materials into two categories: uniaxial anisotropic and biaxial anisotropic materials. One optical axis, also referred to as the extraordinary axis, exists in uniaxially anisotropic materials. In these materials, light propagating along the optical axis experiences the same effects independently of its polarization. The optical plane, also known as the plane of polarization, is perpendicular to the optical axis. Light exhibits birefringence within this plane, which means that the refractive index, and all the phenomena associated with it, depend on the polarization. A common effect that can be observed is the splitting of an incident ray into two rays when propagating in a birefringent medium. Due to the presence of two independent optical axes in biaxial anisotropic materials, light travelling in two different directions will experience different optical characteristics.
Positive and negative uniaxial material
There are two types of uniaxial material, depending on the values of the refractive indices for the e-ray and o-ray. When the refractive index of the e-ray (ne) is larger than that of the o-ray (no), the material is positive uniaxial. When the refractive index of the e-ray (ne) is less than that of the o-ray (no), the material is negative uniaxial. Ice and quartz are examples of positive uniaxial materials; calcite and tourmaline are examples of negative uniaxial materials.
Huygens' explanation of double refraction
The ordinary ray (o-ray) has a spherical wavefront because the o-ray has a constant refractive index (n0) independent of propagation direction inside the uniaxial material and the same velocity in all directions. On the other hand, the extraordinary ray (e-ray) has an ellipsoidal wavefront due to its refractive index, which varies with the propagation direction within the uniaxial material, leading to different velocities in different directions. The two wavefronts come into contact at the points where they intersect with the optical axis.
When unpolarized light is incident on the birefringent material, the o-ray and e-ray will generate new wavefronts. The new wavefront for the o-ray will be tangent to the spherical wavelets, while the new wavefront for the e-ray will be tangent to the ellipsoidal wavelets. Each plane wavefront propagates straight ahead but with different velocities: V0 for the o-ray and Ve for the e-ray. The direction of the k-vector is always perpendicular to the wavefronts and is calculated from Snell's law. For normal incidence, the o-ray and e-ray have the same k-vector direction. However, the Poynting vector, describing the direction of propagation of optical power, is different for the two rays. The power direction for each ray is determined by connecting the line from the imaginary source on the old wavefront to the intersection point between the new wavefront and the spherical or ellipsoidal wavefront. As a result, the o-ray and e-ray will propagate in different directions with different velocities inside the material. For the e-ray, the angle between the k-vector and the power direction is called the walk-off angle.
When a light travels through the crystal, these two wave surfaces follow distinct paths within the crystal. Eventually, two refracted rays emerge as a result of this propagation.
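The direction dependence of the e-ray's refractive index described above is usually expressed through the standard uniaxial index relation 1/n(θ)² = cos²θ/no² + sin²θ/ne², with θ measured from the optical axis — a textbook formula the article alludes to without writing out. A minimal sketch follows, using textbook values for calcite (names are our own):

```python
import math

def n_extraordinary(theta, n_o, n_e):
    """Effective index for the e-ray when the wave vector makes angle
    theta with the optical axis (standard uniaxial index relation)."""
    s, c = math.sin(theta), math.cos(theta)
    return 1.0 / math.sqrt(c * c / (n_o * n_o) + s * s / (n_e * n_e))

# Calcite is negative uniaxial: n_o ≈ 1.658 > n_e ≈ 1.486.
n_o, n_e = 1.658, 1.486
assert abs(n_extraordinary(0.0, n_o, n_e) - n_o) < 1e-12          # along the axis: no birefringence
assert abs(n_extraordinary(math.pi / 2, n_o, n_e) - n_e) < 1e-12  # perpendicular: full n_e
```

Along the optical axis the e-ray sees no, consistent with the statement that the two wavefronts touch where they cross the axis.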
See also
Double refraction
Electromagnetic wave equation
Huygens–Fresnel principle
Isotropy
Polarization
Poynting vector
Wave vector
References
External links
Refraction
Optics
Polarization (waves) | Huygens principle of double refraction | [
"Physics",
"Chemistry"
] | 2,163 | [
"Physical phenomena",
"Applied and interdisciplinary physics",
"Refraction",
"Optics",
"Astrophysics",
"Optical phenomena",
" molecular",
"Atomic",
"Polarization (waves)",
" and optical physics"
] |
75,584,466 | https://en.wikipedia.org/wiki/Bunkbed%20conjecture | The bunkbed conjecture (also spelled bunk bed conjecture) is a statement in percolation theory, a branch of mathematics that studies the behavior of connected clusters in a random graph. The conjecture is named after its analogy to a bunk bed structure. It was first posited by Pieter Kasteleyn in 1985. A preprint giving a proposed counterexample to the conjecture was posted on the arXiv in October 2024 by Nikita Gladkov, Igor Pak, and Alexander Zimin.
Description
The conjecture has many equivalent formulations. In the most general formulation it involves two identical graphs, referred to as the upper bunk and the lower bunk. These graphs are isomorphic, meaning they share the same structure. Additional edges, termed posts, are added to connect each vertex in the upper bunk with the corresponding vertex in the lower bunk.
Each edge in the graph is assigned a probability. The edges in the upper bunk and their corresponding edges in the lower bunk share the same probability. The probabilities assigned to the posts can be arbitrary.
A random subgraph of the bunkbed graph is then formed by independently deleting each edge based on the assigned probability.
Equivalently, it can be assumed that all edges have the same deletion probability 1/2.
Statement of the conjecture
The bunkbed conjecture states that in the resulting random subgraph, the probability that a vertex x in the upper bunk is connected to another vertex y in the upper bunk is greater than or equal to the probability that x is connected to y′, the isomorphic copy of y in the lower bunk.
Interpretation and significance
The conjecture suggests that two vertices of a graph are more likely to remain connected after randomly removing some edges if the graph distance between the vertices is smaller. This is intuitive, and similar questions for random walks and the Ising model were resolved positively. The original motivation for the conjecture was its implication that, in percolation on the infinite square grid, the probability of the origin being connected to a vertex (x, y) with x, y ≥ 0 is greater than the probability of the origin being connected to (x + 1, y).
Despite its intuitiveness, proving this conjecture is not straightforward, and it is an active area of research in percolation theory. It was proved for specific types of graphs, such as wheels, complete graphs, complete bipartite graphs, and graphs with a local symmetry. It was also proved in the limit p → 1 for any graph. Counterexamples for generalizations of the bunkbed conjecture have been published for site percolation, hypergraphs, and directed graphs.
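For small base graphs, the conjectured inequality can be checked exactly by enumerating every subgraph of the bunkbed. The sketch below (all names are illustrative) does this for a three-vertex path: mirrored edges share the same retention probability but are deleted independently, and posts are kept with their own probability.

```python
from itertools import product

def reachable(adj, a, b):
    """Depth-first connectivity test on an adjacency dict."""
    seen, stack = {a}, [a]
    while stack:
        v = stack.pop()
        if v == b:
            return True
        for w in adj.get(v, ()):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return False

def exact_bunkbed_probs(base_edges, n, p_edge, p_post, s, t):
    """Enumerate every subgraph of the bunkbed over base vertices 0..n-1
    (upper copy v, lower copy v+n) and return the exact probabilities
    P(s <-> t) and P(s <-> t'), where t' = t + n is t's lower-bunk copy."""
    coins = ([(u, v, p_edge) for u, v in base_edges] +          # upper edges
             [(u + n, v + n, p_edge) for u, v in base_edges] +  # lower edges
             [(v, v + n, p_post) for v in range(n)])            # posts
    p_upper = p_lower = 0.0
    for keep in product((True, False), repeat=len(coins)):
        prob, adj = 1.0, {}
        for (u, v, q), k in zip(coins, keep):
            prob *= q if k else 1.0 - q
            if k:
                adj.setdefault(u, []).append(v)
                adj.setdefault(v, []).append(u)
        if reachable(adj, s, t):
            p_upper += prob
        if reachable(adj, s, t + n):
            p_lower += prob
    return p_upper, p_lower

# Base graph: the path 0-1-2. The conjecture predicts P(0 <-> 2) >= P(0 <-> 2').
pu, pl = exact_bunkbed_probs([(0, 1), (1, 2)], n=3, p_edge=0.5, p_post=0.3, s=0, t=2)
assert pu >= pl
```

Exhaustive checks like this are feasible only for tiny graphs; the proposed counterexample is far too large for direct enumeration.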
References
Percolation theory
Disproved conjectures | Bunkbed conjecture | [
"Physics",
"Chemistry",
"Mathematics"
] | 507 | [
"Physical phenomena",
"Phase transitions",
"Percolation theory",
"Combinatorics",
"Statistical mechanics"
] |
75,585,731 | https://en.wikipedia.org/wiki/List%20of%20least%20massive%20black%20holes | Below there is a list of the least massive known black holes, sorted by increasing mass. The unit of measurement is the solar mass, equivalent to kg.
List
See also
Stellar black hole
List of most massive black holes
References
Lists of superlatives in astronomy
Black holes | List of least massive black holes | [
"Physics",
"Astronomy"
] | 55 | [
"Black holes",
"Physical phenomena",
"Astronomy-related lists",
"Physical quantities",
"Lists of superlatives in astronomy",
"Unsolved problems in physics",
"Astrophysics",
"Density",
"Stellar phenomena",
"Astronomical objects"
] |
71,260,980 | https://en.wikipedia.org/wiki/Heme%20transporter | A heme transporter is a protein that delivers heme to the various parts of a biological cell that require it.
Heme is a major source of dietary iron in humans and other mammals, and its synthesis in the body is well understood, but heme transport pathways are not as well understood. It is likely that heme is tightly regulated for two reasons: the toxic nature of iron in cells, and the lack of a regulated excretory system for excess iron. Understanding heme pathways is therefore important in understanding diseases such as hemochromatosis and anemia.
Heme transport
Members of the SLC48 and SLC49 solute carrier family participate in heme transport across cellular membranes (heme-transporting ATPase).
SLC48A1—also known as Heme-Responsive Gene 1 (HRG1)—and its orthologues were first identified as a heme transporter family through a genetic screen in C.elegans. The protein plays a role in mobilizing heme from the lysosome to the cytoplasm.
Deletion of the gene in mice leads to accumulation of heme crystals called hemozoin within the lysosomes of bone marrow, liver and splenic macrophages, but the gene is not known to be associated with human disease.
FLVCR1 was originally identified as the receptor for the feline leukemia virus; genetic disruption of FLVCR1 leads to anemia and disrupted heme transport. It appears to protect cells at the CFU-E stage by exporting heme to prevent heme toxicity. Rare homozygous mutations result in autosomal recessive posterior column ataxia with retinitis pigmentosa.
FLVCR2 is closely related to FLCVR1, and genetic transfection experiments indicate that it transports heme. Mutations in the gene are associated with proliferative vasculopathy and hydranencephaly-hydrocephaly syndrome (PVHH, also known as Fowler syndrome).
Related genes SLC49A3 and SLC49A4 are less well characterized functionally, although SLC49A4 is also known as Disrupted In Renal Cancer Protein 2 or RCC4 due to an association with renal cell cancer.
References
Proteins
Molecular biology stubs
Molecular biology | Heme transporter | [
"Chemistry",
"Biology"
] | 471 | [
"Biomolecules by chemical classification",
"Molecular and cellular biology stubs",
"Molecular biology stubs",
"Biochemistry stubs",
"Molecular biology",
"Biochemistry",
"Proteins"
] |
71,272,920 | https://en.wikipedia.org/wiki/Human%20Biomolecular%20Atlas%20Program | The Human Biomolecular Atlas Program (HuBMAP) is a program funded by the US National Institutes of Health to characterize the human body at single cell resolution, integrated to other efforts such as the Human Cell Atlas. Among the products of the program is the Azimuth reference datasets for single-cell RNA seq data and the ASCT+B Reporter, a visualization tool for anatomical structures, cell types and biomarkers.
Millitomes are used to create uniformly sized tissue blocks that match the shape and size of organs from HuBMAP's 3D Reference Object Library.
The HuBMAP received 27 million US dollars of funding from the NIH in 2020 and about 28.5 million in 2021.
References
External links
Official website
Biological databases
Proteomics
National Institutes of Health | Human Biomolecular Atlas Program | [
"Biology"
] | 164 | [
"Bioinformatics",
"Biological databases"
] |
71,273,640 | https://en.wikipedia.org/wiki/Neodymium%28II%29%20bromide | Neodymium(II) bromide is an inorganic compound of neodymium and bromide.
Preparation
Neodymium(II) bromide can be obtained via the reduction of neodymium(III) bromide with neodymium in a vacuum at 800 to 900 °C.
Properties
Neodymium(II) bromide is a dark green solid. The compound is extremely hygroscopic and can only be stored and handled under carefully dried inert gas or under a high vacuum. In air or on contact with water, it converts to hydrates by absorbing moisture, but these are unstable and more or less rapidly transform into oxybromides with evolution of hydrogen. The compound crystallizes in the lead(II) chloride structure type.
References
Neodymium(II) compounds
Lanthanide halides
Bromides | Neodymium(II) bromide | [
"Chemistry"
] | 171 | [
"Bromides",
"Salts"
] |
78,541,502 | https://en.wikipedia.org/wiki/Narayama%20Tile%20Kiln%20Sites | The is the collective name for several archaeological sites containing Nara period kilns located in northern Nara city in Nara Prefecture and southern Kizugawa city in Kyoto Prefecture in the Kansai region of Japan. The site was designated a National Historic Site of Japan in 1976 with the area under protection expanded in 2010..
Overview
The Narayama kiln ruins are located in the gentle hill range at an elevation of 90 to 100 meters to the north of the site of Heijō-kyō palace, the capital of Japan in the Nara period. The hills are dotted with the remains of several kilns that fired roof tiles for the palace and temples of Heijō-kyō. Roof tiles made of fired clay were introduced to Japan from Baekche during the 6th century along with Buddhism. During the 570s, under the reign of Emperor Bidatsu, the king of Baekche sent six people to Japan skilled in various aspects of Buddhism, including a temple architect. Initially, tiled roofs were a sign of great wealth and prestige, used for temple and government buildings. The material had the advantages of great strength and durability, and could also be made at locations around the country wherever clay was available.
At the site in northern Nara, the existence of six semi-underground flat kilns lined up from north to south has been confirmed, with two or three more flat-type kilns suspected to exist to the south of those. The one at the southern end is the best preserved: a flat kiln with a total length of 4.2 meters and a rotisserie-type structure with a flame passage with seven flame vents that separate the combustion chamber from the firing chamber. The kiln walls are made of flat tiles and clay, and the fire hole is made of stone. The firing chamber is about 0.5 meters higher than the combustion chamber and measures 1.1 meters in length and 2.3 meters in width. The roof tiles fired at this kiln site are mainly round tiles and flat tiles, the same as the roof tiles found at the Heijō-kyō Palace from the end of the Nara period. It was designated a National Historic Site in 1976. It is located on the east-facing slope of the western hill facing the Shika River in northern Nara.
Four groups of kiln remains are located in Kizugawa. The first consists of four kiln remains and several post-hole buildings; it has been confirmed as the location where the roof tiles for the temple of Hokke-ji were fired. Currently, this tile kiln site has been backfilled and preserved, and full-size replicas of the two kilns in good condition have been made and are on display. A second site contains traces of four buildings and eight kilns. A third has been confirmed as the location where the roof tiles for the temple of Kofuku-ji were fired; seven kiln remains have been found there. At the fourth, a clay pit and parts of "mokko" (wooden sacks) used for transportation of roof tiles were excavated along with the remains of two kilns.
Another site, newly added to the designation, was discovered in 1972 and found to be a kiln that supplied tiles for the construction of the first Daigokuden-in Hall of Heijō-kyō Palace, making it an important archaeological site as the earliest operating tile kiln on Narayama. Ten kilns have been identified, including rebuilt ones, and there are differences in their structures. They are thought to have been in operation between the time when the capital was moved to Nara (710) and when it was temporarily moved to Kuni-kyo (740). The site is currently preserved in a backfilled state within a private residential area.
The Utahime Tile Kiln ruins are about a 10-minute walk from Heijoyama Station on the JR West Kansai Main Line.
See also
List of Historic Sites of Japan (Kyoto)
List of Historic Sites of Japan (Nara)
References
External links
Kizugawa City home page
Nara City home page
History of Kyoto Prefecture
Nara, Nara
History of Nara Prefecture
Kizugawa, Kyoto
Yamato Province
Yamashiro Province
Historic Sites of Japan
Japanese pottery kiln sites | Narayama Tile Kiln Sites | [
"Chemistry",
"Engineering"
] | 871 | [
"Kilns",
"Japanese pottery kiln sites"
] |
78,553,332 | https://en.wikipedia.org/wiki/Stibinidene | Stibinidenes are a class of organoantimony compounds in which the antimony center exhibits a formal oxidation state of +1. The parent stibinidenes have the formula R–Sb, with the antimony center possessing two lone pairs of electrons and a vacant 5p orbital (Figure 1). Reflecting their unusual low coordination number]] (i.e., 1) at [antimony]], stibinidines cannot be isolated. Instead, their oligomers or their adducts are often robust.
Synthesis
Attempted synthesis of stibinidenes, like that of carbenes, gives cyclic oligomeric forms. 6-, 5-, 4-, and 3-membered rings have been characterized. They are orange solids. These ring sizes exist in equilibrium with one another.
Distibinidenes, in principle, can be produced by reduction of the corresponding dichlorides RSbCl2.
With very bulky substituents R, such as 2,4,6-tris[bis(trimethylsilyl)methyl]phenyl, 2,6-bis[bis(trimethylsilyl)methyl]-4-[tris(trimethylsilyl)methyl]phenyl, and various m-terphenyls, the products exist as dimers with the formula RSb=SbR.
When R is bulky, the product "RSb" is obtained as rings with Sb–Sb bonds. Larger substituents give smaller rings; otherwise 5- and 6-membered rings form. In some cases, a dimer with an Sb=Sb bond is isolated.
Base-stabilized stibinidene
Monomeric stibinidenes were first obtained by Dostál, who reported an Sb(I) center stabilized by an N,C,N-pincer ligand. The ligand employed was L = 2,6-bis[N-(2',6'-dimethylphenyl)ketimino]phenyl. The synthesis of this complex was achieved by reducing LSb(III)Cl2 with two equivalents of K[B(iBu)3H], resulting in the formation of isolable crystals of the stable monomeric stibinidene [C6H3-2,6-(C(Me)=N-2',6'-Me2C6H3)2]Sb via dihydrogen elimination (Scheme 2). In this system, coordination from the nitrogen centers provides thermodynamic stabilization to the Sb(I) center by delocalizing electron density, while the bulky N,C,N ligand introduces significant steric hindrance, which kinetically stabilizes the monomeric stibinidene by preventing dimerization or further reactions. Subsequently, other N,C,N-coordinating ligands were developed to produce stibinidenes, such as ArSb (where Ar = C6H3-2,6-(CH=NtBu)2 & Ar = C6H3-2,6-(CH=NDipp)2), which gained prominence in studies on stibinidene reactivity.
Carbene stabilized stibinidene
Diamidocarbenes (DACs) stabilize monomeric stibinidenes. The synthesis involved the reaction of phenylantimony dichloride, stabilized by a DAC, with magnesium powder in THF (Scheme 3). This process yielded stable, isolable, fluorescent red crystals of the carbene-stabilized stibinidene, (DAC)Sb-Ph. Despite the exocyclic Sb(I) center being exposed, the compound exists as a monomer, with its stability attributed to the strong backbonding between the DAC and the antimony center. The steric bulk of the mesityl group in the carbene further contributes to the compound's kinetic stability. Density functional theory (DFT) calculations revealed that the stability of the compound arises from partial double bond character between the carbene carbon and the Sb(I) center. This is attributed to backbonding from the antimony center into the vacant p orbital of the carbene. Chloro-substituted stibinidenes have been trapped using a cyclic alkyl(amino)carbene (CAAC) ligand. The synthesis involved reduction of CAAC-coordinated SbCl3 with KC8. Subsequently, the phosphine-stabilized stibinidene (o-PPh2)C6H4(Ar*)Ge(Cl)Sb (E, where Ar* = 2,6-Trip2C6H3) was reported.
Reactivity
Theoretically, singlet stibinidenes are ambiphilic due to the presence of both empty and filled 5p orbitals, which respectively confer Lewis acidic and Lewis basic character. However, N,C,N-pincer-coordinated stibinidenes exhibit diminished Lewis acidity because of nN → p*Sb donor-acceptor interactions. Despite this reduction in Lewis acidity, Dostál’s stibinidene remains widely utilized in reactivity studies. In contrast, carbene-stabilized stibinidenes show significantly reduced reactivity as strong electron donation from the carbene ligand diminishes the Lewis acidic nature, while strong back-donation from the Sb center to the carbene weakens their Lewis basicity. Due to their ambiphilic nature, Dostál’s stibinidenes are capable of activating small molecules, like disulfides, through oxidative addition. This reactivity arises from their ability to donate electron density to the LUMO of small molecules while simultaneously accepting electron density into the vacant 5p orbital. Dostál's N,C,N-coordinated stibinidene ArSb (where Ar = C6H3-2,6-(CH=NtBu)2) has been reported to act as a catalyst in the hydroboration of disulfides (Scheme 5). This reactivity exploits the ability of the stibinidene to reversibly interconvert between Sb(I) and Sb(III) oxidation states under the reaction conditions. The catalytic cycle involves the oxidative addition of disulfides to the Sb(I) center, followed by reductive elimination to regenerate the active species, enabling efficient hydroboration. As of 2024, this is the only reported example of catalysis involving stibinidene, demonstrating its potential in organometallic catalysis. Notably, triplet stibinidenes exhibit a distinct mode of reactivity. Acting as diradicals, they can react with small molecules such as alkynes and butadienes, forming antimony-substituted heterocycles, including three-membered and five-membered rings respectively (Scheme 4).
Small molecule activation and catalysis
The stibinidene ArSb (where Ar = C6H3-2,6-(CH=NtBu)2) oxidatively adds E2Ph2 (E = S, Se), giving ArSb(EPh)2 (Scheme 5). A catalytic cycle was built around this oxidized product (Scheme 5). The Sb(III) dithiolate reacts with pinacolborane at 70 °C to produce ArSb(SR)(H) and the S-borylated thiophenol derivatives. This process can be made catalytic in the presence of an α,β-unsaturated carbonyl to facilitate Michael addition reactions.
A distibene supported by the bulky Fluind ligand, reported by Cornella et al., exhibits remarkable small-molecule activation. Under a 1.2 bar atmosphere of H2 or ethylene at 60 °C, the distibene was converted into the corresponding antimony dihydride or stibacyclopropane, respectively, via a transient stibinidene intermediate. NMR studies confirmed that this transient stibinidene adopts a triplet electronic configuration, allowing it to activate small molecules in a diradical fashion. Similarly, the reactivity of an isolated triplet stibinidene was observed. Acting as a diradical, this stibinidene reacts with small molecules such as 2,3-dimethyl-1,3-butadiene and 4-tert-butylphenylacetylene, leading to the formation of antimony-substituted heterocycles, including five-membered and three-membered rings.
Hetero Diels-Alder reaction with alkynes
The Dostál group demonstrated that N,C,N-pincer-coordinated stibinidenes can act as masked heterocyclic dienes. When treated with the electron-deficient alkyne dimethyl acetylenedicarboxylate (DMAD), these stibinidenes undergo a hetero Diels–Alder [4+2] cycloaddition reaction (Scheme 6). This transformation yields a CO2Me-disubstituted 1-stiba-1,4-dihydro-iminonaphthalene, effectively converting one of the pendant imine arms of the stibinidene into a nitrogen-bridged stibacyclohexadiene. In this product, the Sb(III) atom serves as a bridgehead, while the second imine arm loses coordination with the Sb(III) center. Additionally, similar cycloaddition reactions were observed between Dostal's stibinidene and other substrates, such as methyl propiolate and N-alkyl/aryl-maleimides, RN(C(O)CH)2 (R = Me, tBu, Ph). These findings highlight the reactivity of stibinidenes as dienes, expanding their utility in cycloaddition chemistry.
Transition metal-"stabilized" stibinidenes
Complexes containing one or more ligands with the formula RSb (R = halide, alkyl, aryl) are called stibinidene complexes. The terminology is debatable because these complexes do not release RSb. As ligands, stibinidenes resemble carbenes to some extent. Supporting ligands used to stabilize stibinidenes include bulky N,C,N-pincer ligands as well as phosphine- and gallium-based ligands. Based on computational studies, π-donating substituents, such as nitrogen- and phosphorus-based anionic ligands attached to the pnictogen atom, significantly stabilize the singlet ground state of stibinidenes. In this state, the molecule features one stereochemically inactive lone pair with predominantly s-character and another lone pair with predominantly p-character, accompanied by a vacant p orbital, making stibinidenes ambiphilic (Figure 1). In contrast, σ-type ligands, such as hydride and alkyl groups, favor the triplet ground state, where two unpaired electrons occupy two 5p orbitals and one lone pair resides in the 5s orbital.
One early example, obtained from phenyldiiodostibane, features a trigonal planar geometry at Sb; the authors proposed the presence of Sb–Mn π-bonding. The chloro-substituted stibinidene complex [ClSb{Cr(CO)5}2] again features a three-center, four-π-electron bond across both Sb–Cr bonds. Trigonal planar stibinidene complexes of the type [ClSb{M(CO)5}2] (A, where M = Cr, Mo, W) are typically prepared via salt-elimination reactions between Na2[M2(CO)10] and SbCl3 (Scheme 1). However, these complexes are highly unstable due to the vacant p orbital on the antimony center and, in the case of M = Mo or W, cannot easily be isolated. To stabilize these complexes, they can be trapped using Lewis bases (LB), forming stable adducts with the general formula [ClSb{M(CO)5}2LB] (B) (Scheme 1). Huttner and colleagues also identified distibene complexes of the type [RSb=SbR][W(CO)5]3 as side products during stibinidene synthesis, particularly when non-donor solvents were used. This observation highlights the critical role of donor molecules in stabilizing these compounds.
Stibinidene cation
Stibinidene cations are isoelectronic with carbenes (Scheme 8). The stibinidene cation was generated by reduction of SbX3 (X = F, Cl) with KC8, in the presence of one equivalent of LiOTf, with stabilization provided by the addition of an IPr CAAC ligand. This process resulted in the formation of a CAAC-stabilized Sb(I) cation. Previously, attempts to stabilize Sb(I) cations were made using a bis(diisopropylamino)cyclopropenylidene ligand. However, the resulting species was obtained in low yield and exhibited significant instability, undergoing decomposition. Subsequently, Majumdar et al. reported the isolation of an Sb(I) cation stabilized with a diphosphine ligand. In this synthesis, SbCl3, the bis(phosphine) ligand, and trimethylsilyl trifluoromethanesulfonate were reacted in a 1:2:3 ratio at room temperature. The bis(phosphine) ligand was found to act as both a reductant and a supporting ligand. Despite the overall positive charge of the Sb(I) site, it was observed to bind metal centers, forming complexes with Au(I), Ag(I), and Cu(I). Further progress was made by Zhenbo et al., who isolated an Sb(I) cation stabilized by a bis-silylene ligand. The lone pair on the Sb(I) center in this species was shown to coordinate with Cr and Mo carbonyls. Sb(I) cations can also be generated when a diiminopyridine ligand is bound to Sb.
Further reading
References
Organoantimony compounds | Stibinidene | [
"Chemistry"
] | 3,011 | [
"Functional groups",
"Octet-deficient functional groups"
] |
77,207,015 | https://en.wikipedia.org/wiki/TOP%20Assay | The TOP Assay (Total Oxidizable Precursor Assay) is a laboratory method developed in 2012 that oxidatively converts (possibly unknown) precursor compounds of perfluorocarboxylic acids (PFCAs) into the latter, making their quantification possible. Potassium peroxodisulfate is used as the oxidizing agent. By comparing a sample before and after application of the TOP Assay, this sum parameter can be used to determine the concentration of precursor compounds present.
Application
This method is used, for example, in the analysis of fire-fighting foams (aqueous film forming foam), textiles or water samples. Blood serum can also be analyzed in this way.
In addition to fluorotelomer compounds, hydrogen-substituted perfluorosulfonic acids (Hn-PFSAs), for example, can also be oxidized using the TOP Assay. Saturated and unsaturated perfluorosulfonic acids as well as perfluoroalkyl ether sulfonic acids, on the other hand, are stable.
Further reading
References
Laboratory techniques
Chemistry | TOP Assay | [
"Chemistry"
] | 227 | [
"nan"
] |
77,208,569 | https://en.wikipedia.org/wiki/H3LiIr2O6 |
H3LiIr2O6 is a material considered to best fit the archetype of a special type of quantum spin liquid called a Kitaev spin liquid. Though known not to freeze at low temperatures, H3LiIr2O6 is notoriously difficult to produce in a laboratory and is known to contain disorder, muddying whether it is truly a spin liquid.
H3LiIr2O6 is considered to be a spin liquid that is proximate to the Kitaev-limit quantum spin liquid. Its ground state shows no magnetic order or spin freezing as expected for the spin liquid state. However, hydrogen zero-point motion and stacking faults are known to be present.
References
Liquids
Lithium compounds
Iridium compounds
Oxides | H3LiIr2O6 | [
"Physics",
"Chemistry"
] | 170 | [
"Phases of matter",
"Oxides",
"Salts",
"Matter",
"Liquids"
] |
77,210,919 | https://en.wikipedia.org/wiki/Becker%E2%80%93Morduchow%E2%80%93Libby%20solution | Becker–Morduchow–Libby solution is an exact solution of the compressible Navier–Stokes equations that describes the structure of one-dimensional shock waves. The solution was discovered in a restrictive form by Richard Becker in 1922 and generalized by Morris Morduchow and Paul A. Libby in 1949. The solution was also discovered independently by M. Roy and L. H. Thomas in 1944. The solution showed that there is a non-monotonic variation of the entropy across the shock wave. Before these works, Lord Rayleigh obtained solutions in 1910 for fluids with viscosity but without heat conductivity and for fluids with heat conductivity but without viscosity. Following this, in the same year, G. I. Taylor solved the whole problem for weak shock waves by taking both viscosity and heat conductivity into account.
Mathematical description
In a frame fixed with a planar shock wave, the shock wave is steady. In this frame, the steady Navier–Stokes equations for a viscous and heat conducting gas can be written as
where is the density, is the velocity, is the pressure, is the internal energy per unit mass, is the temperature, is an effective coefficient of viscosity, is the coefficient of viscosity, is the second viscosity and is the thermal conductivity. To this set of equations, one has to prescribe an equation of state and an expression for the energy in terms of any two thermodynamics variables, say . Instead of , it is convenient to work with the specific enthalpy
Let us denote properties pertaining to the upstream side of the shock with the subscript "" and to the downstream side with "". The shock wave speed itself is denoted by . The first integrals of the governing equations, after imposing the condition that all gradients vanish upstream, are found to be
By evaluating these on the downstream side, where all gradients vanish, one recovers the familiar Rankine–Hugoniot conditions, , and . Further integration of the above equations requires numerical computation, except in one special case where the integration can be carried out analytically.
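For an ideal gas, the Rankine–Hugoniot conditions recovered above reduce to standard jump ratios in terms of the upstream Mach number and the specific-heat ratio. A minimal sketch (the function name and symbols are illustrative, not from the original text; the relations themselves are the standard normal-shock formulas):

```python
def rankine_hugoniot(M1, gamma=1.4):
    """Downstream/upstream jump ratios across a normal shock for an
    ideal gas with specific-heat ratio gamma and upstream Mach number M1
    (standard textbook relations, used here for illustration)."""
    rho_ratio = (gamma + 1) * M1**2 / ((gamma - 1) * M1**2 + 2)
    p_ratio = (2 * gamma * M1**2 - (gamma - 1)) / (gamma + 1)
    T_ratio = p_ratio / rho_ratio  # ideal-gas law: T proportional to p/rho
    return rho_ratio, p_ratio, T_ratio
```

For example, at M1 = 2 and gamma = 1.4 the density ratio is 8/3 and the pressure ratio is 4.5, while at M1 = 1 all ratios reduce to unity (no jump).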
Analytical solution
Two assumptions have to be made to facilitate explicit integration of the third equation. First, assume that the gas is ideal (polytropic, since we shall assume constant values for the specific heats), in which case the equation of state is and further , where is the specific heat at constant pressure and is the specific heat ratio. The third equation then becomes
where is the Prandtl number based on ; when , say as in monoatomic gases, this Prandtl number is just the ordinary Prandtl number . The second assumption made is that this Prandtl number equals 3/4, so that the terms inside the parentheses become a total derivative, i.e., . This is a reasonably good approximation, since in normal gases the Prandtl number is approximately equal to 3/4. With this approximation, and integrating once more by imposing the condition that is bounded downstream, we find
The above relation indicates that the quantity is conserved everywhere, not just on the upstream and downstream sides. Since for the polytropic gas , where is the specific volume and is the sound speed, the above equation provides the relation between the ratio and the corresponding velocity (or density or specific volume) ratio
,
i.e.,
where is the Mach number of the wave with respect to upstream and . Combining this with momentum and continuity integrals, we obtain the equation for as follows
We can introduce the reciprocal-viscosity-weighted coordinate
where , so that
The equation clearly exhibits translational invariance in the -direction, which can be fixed, say, by fixing the origin to be the location where the intermediate value is reached. Using this last condition, the solution to this equation is found to be
As (or, ), we have and as (or, ), we have . This ends the search for the analytical solution. From here, other thermodynamic variables of interest can be evaluated. For instance, the temperature ratio is easily found to be given by
and the specific entropy , by
The analytical solution is plotted in the figure for and . The notable feature is that the entropy does not monotonically increase across the shock wave: it increases to a larger value and then decreases to a constant behind the shock wave. Such a scenario is possible because of heat conduction, as becomes apparent by looking at the entropy equation, which is obtained from the original energy equation by substituting the thermodynamic relation , i.e.,
While the viscous dissipation associated with the term always increases the entropy, heat conduction increases the entropy in the colder layers where , whereas it decreases the entropy in the hotter layers where .
Taylor's solution: Weak shock waves
When , an analytical solution is possible only in the weak shock-wave limit, as first shown by G. I. Taylor in 1910. In the weak shock-wave limit, all terms such as , etc., will be small. The thickness of the shock wave is of the order , so that differentiation with respect to increases the order of smallness by one; e.g., is a second-order small quantity. Without going into the details, and treating the gas as a generic gas (not just polytropic), the solution for is found to be related to the steady travelling-wave solution of Burgers' equation and is given by
where
in which is the Landau derivative (for polytropic gas ) and is a constant which when multiplied by some characteristic frequency squared provides the acoustic absorption coefficient. The specific entropy is found to be proportional to and is given by
Note that is a second-order small quantity, although is a third-order small quantity as can be inferred from the above expression which shows that for both . This is allowed since , unlike , passes through a maximum within the shock wave.
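The connection to Burgers' equation noted above can be checked numerically: the steady travelling wave of Burgers' equation is a tanh profile whose width is set by the viscosity, and in the frame moving with the shock it satisfies (u − c) u′ = ν u″ with c the mean of the upstream and downstream speeds. A sketch (u1, u2, and nu are arbitrary illustrative values, not taken from the text):

```python
import numpy as np

# Steady travelling-wave ("Taylor"-type) profile of Burgers' equation,
# viewed in the frame of the shock, which moves with speed c = (u1 + u2)/2.
u1, u2, nu = 2.0, 1.0, 0.1          # illustrative upstream/downstream speeds, viscosity
ubar, du = 0.5 * (u1 + u2), u1 - u2

x = np.linspace(-5.0, 5.0, 2001)
u = ubar - 0.5 * du * np.tanh(du * x / (4.0 * nu))

# Finite-difference residual of (u - c) u' = nu u'' with c = ubar;
# it should vanish up to discretization error.
ux = np.gradient(u, x)
uxx = np.gradient(ux, x)
residual = (u - ubar) * ux - nu * uxx
```

The profile tends to u1 far upstream and u2 far downstream, and the residual is small everywhere away from the grid edges.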
Validity of continuum hypothesis: since the thermal velocity of the molecules is of the order and the kinematic viscosity is of the order , where is the mean free path of the gas molecules, we have ; an estimate based on heat conduction gives the same result. Combining this with the relation , we see that
i.e., the shock-wave thickness is of the order of the mean free path of the molecules. However, in the continuum hypothesis the mean free path is taken to be zero. It follows that the continuum equations alone cannot strictly be used to describe the internal structure of strong shock waves; in weak shock waves, can be made as small as possible to make large.
Rayleigh's solution
Two problems that were originally considered by Lord Rayleigh are given here.
Fluids with heat conduction and without viscosity
The problem in which viscosity is neglected but heat conduction is allowed is of significant interest in the astrophysical context due to the presence of other heat-exchange mechanisms such as radiative heat transfer, electron heat transfer in plasmas, etc. Neglecting viscosity means that the viscous forces in the momentum equation and the viscous dissipation in the energy equation disappear. Hence the first integrals of the governing equations are simply given by
All the required ratios can be expressed in terms of immediately,
By eliminating from the last two equations, one can obtain the equation , which can be integrated. It turns out that there is no continuous solution for strong shock waves, precisely when
for this condition becomes
Fluids with viscosity and without heat conduction
Here continuous solutions can be found for all shock-wave strengths. Further, the entropy increases monotonically across the shock wave due to the absence of heat conduction. The first integrals are given by
One can eliminate the viscous terms in the last two equations and obtain a relation between and . Substituting this back in any one of the equations, we obtain an equation for , which can be integrated.
See also
Taylor–von Neumann–Sedov blast wave
References
Flow regimes
Fluid dynamics | Becker–Morduchow–Libby solution | [
"Chemistry",
"Engineering"
] | 1,596 | [
"Piping",
"Chemical engineering",
"Flow regimes",
"Fluid dynamics"
] |
77,211,763 | https://en.wikipedia.org/wiki/Advanced%20Synthesis%20%26%20Catalysis | Advanced Synthesis & Catalysis is a bimonthly peer-reviewed scientific journal established in 1999 by Wiley. It covers research on homogeneous, heterogeneous, organic, and enzyme catalysis as key technologies for green synthesis, together with contributions to the same goal from synthesis design, reaction techniques, flow chemistry and continuous processing, multiphase catalysis, green solvents, catalyst immobilization and recycling, separation science, and process development. The editor-in-chief is Joe P. Richmond.
References
External links
Monthly journals
Catalysis
Chemical industry in Germany
Chemistry journals
Academic journals established in 1999
Wiley-VCH academic journals
English-language journals | Advanced Synthesis & Catalysis | [
"Chemistry"
] | 139 | [
"Catalysis",
"Chemical kinetics"
] |
77,212,117 | https://en.wikipedia.org/wiki/Agnew%27s%20theorem | Agnew's theorem, proposed by American mathematician Ralph Palmer Agnew, characterizes reorderings of terms of infinite series that preserve convergence for all series.
Statement
We call a permutation an Agnew permutation if there exists such that any interval that starts with 1 is mapped by to a union of at most intervals, i.e., , where counts the number of intervals.
Agnew's theorem. is an Agnew permutation if and only if, for all converging series of real or complex terms , the series converges to the same sum.
Corollary 1. (the inverse of ) is an Agnew permutation if and only if, for all diverging series of real or complex terms , the series diverges.
Corollary 2. and are Agnew permutations if and only if, for all series of real or complex terms , the convergence type of the series is the same.
Usage
Agnew's theorem is useful when the convergence of has already been established: any Agnew permutation can be used to rearrange its terms while preserving convergence to the same sum.
Corollary 2 is useful when the convergence type of is unknown: the convergence type of is the same as that of the original series.
Examples
An important class of permutations is infinite compositions of permutations in which each constituent permutation acts only on its corresponding interval (with ). Since for , we only need to consider the behavior of as increases.
Bounded groups of consecutive terms
When the sizes of all groups of consecutive terms are bounded by a constant, i.e., , and its inverse are Agnew permutations (with ), i.e., arbitrary reorderings can be applied within the groups with the convergence type preserved.
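The bounded-group case can be illustrated numerically. In the sketch below (block size 3 is an arbitrary choice, not from the text), each consecutive block of three terms of the alternating harmonic series is reversed; since the groups are bounded, the rearrangement preserves convergence to the same sum, ln 2:

```python
import math

# Alternating harmonic series: sum_{n>=1} (-1)^(n+1)/n = ln 2.
N = 300_000  # number of terms; a multiple of the block size
terms = [(-1) ** (n + 1) / n for n in range((1), N + 1)]

# Reverse each consecutive block of 3 terms: a permutation with
# bounded groups, hence (per the statement above) sum-preserving.
permuted = []
for i in range(0, N, 3):
    permuted.extend(reversed(terms[i:i + 3]))

s_orig, s_perm = sum(terms), sum(permuted)
```

Both partial sums agree with each other (up to floating-point reordering error) and approach ln 2 as N grows.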
Unbounded groups of consecutive terms
When the sizes of groups of consecutive terms grow without bounds, it is necessary to look at the behavior of .
Mirroring permutations and circular shift permutations, as well as their inverses, add at most 1 interval to the main interval , hence and its inverse are Agnew permutations (with ), i.e., mirroring and circular shifting can be applied within the groups with the convergence type preserved.
A block reordering permutation with > 1 blocks and its inverse add at most intervals (when is large) to the main interval , hence and its inverse are Agnew permutations, i.e., block reordering can be applied within the groups with the convergence type preserved.
Notes
References
Mathematical theorems | Agnew's theorem | [
"Mathematics"
] | 529 | [
"Sequences and series",
"Theorems in mathematical analysis",
"Mathematical analysis",
"Mathematical structures",
"Series (mathematics)",
"Mathematical theorems",
"Calculus",
"Mathematical problems"
] |
69,680,149 | https://en.wikipedia.org/wiki/Dysprosium%20phosphide | Dysprosium phosphide is an inorganic compound of dysprosium and phosphorus with the chemical formula DyP.
Synthesis
The compound can be obtained by the reaction of phosphorus and dysprosium at high temperature.
4 Dy + P4 → 4 DyP
Physical properties
DyP has the NaCl structure (a = 5.653 Å), in which dysprosium is in the +3 valence state. Its band gap is 1.15 eV, and the Hall mobility (μH) is 8.5 cm2/V·s.
DyP forms crystals of a cubic system, space group Fm3m.
Uses
The compound is a semiconductor used in high power, high frequency applications and in laser diodes.
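As a rough, illustrative cross-check (not from the source), the quoted 1.15 eV band gap corresponds via λ = hc/E to a photon wavelength of about 1.08 μm, in the near-infrared range relevant to laser diodes:

```python
HC_EV_NM = 1239.84        # h*c expressed in eV*nm (approximate value)
band_gap_ev = 1.15        # band gap quoted for DyP
wavelength_nm = HC_EV_NM / band_gap_ev  # roughly 1078 nm, near-infrared
```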
References
Phosphides
Dysprosium compounds
Semiconductors
Rock salt crystal structure | Dysprosium phosphide | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 172 | [
"Electrical resistance and conductance",
"Physical quantities",
"Semiconductors",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Solid state engineering",
"Matter"
] |
68,394,738 | https://en.wikipedia.org/wiki/Thrust%20%28particle%20physics%29 | In high energy physics, thrust is a property (one of the event shape observables) used to characterize the collision of high energy particles in a collider.
When two high energy particles collide, they typically produce jets of secondary particles. This happens when one or several quark-antiquark pairs are produced during the collision. Each colored quark/antiquark pair travels its separate way and subsequently hadronizes. Many new particles are created by the hadronization process and travel in approximately the same direction as the original pair. This set of particles constitutes a jet.
The thrust quantifies the coherence, or "jettiness", of the group of particles resulting from one collision. It is defined as:
,
where is the momentum of particle , and is a unit vector that maximizes and defines the thrust axis. The sum is over all the final particles resulting from the collision. In practice, the sum may be carried over the detected particles only.
The thrust is stable under collinear splitting of particles, and therefore it is a robust observable, largely insensitive to the details of the specific hadronization process.
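The definition can be sketched in code by scanning candidate thrust axes on a grid (an illustrative brute-force approximation; real analyses use faster dedicated algorithms, and the function name is ours):

```python
import math

def thrust(momenta, n_theta=200, n_phi=200):
    """Brute-force estimate of thrust T = max_n sum_i |p_i . n| / sum_i |p_i|
    over unit axis vectors n sampled on a (theta, phi) grid."""
    norm = sum(math.sqrt(px * px + py * py + pz * pz) for px, py, pz in momenta)
    best = 0.0
    for i in range(n_theta):
        theta = math.pi * (i + 0.5) / n_theta
        st, ct = math.sin(theta), math.cos(theta)
        for j in range(n_phi):
            phi = 2 * math.pi * j / n_phi
            nx, ny, nz = st * math.cos(phi), st * math.sin(phi), ct
            s = sum(abs(px * nx + py * ny + pz * nz) for px, py, pz in momenta)
            best = max(best, s / norm)
    return best
```

A perfectly back-to-back two-particle event gives T ≈ 1, and splitting one momentum into two collinear halves leaves the value unchanged, illustrating the collinear stability noted above.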
References
Experimental particle physics
Quantum chromodynamics | Thrust (particle physics) | [
"Physics"
] | 249 | [
"Particle physics stubs",
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
68,402,947 | https://en.wikipedia.org/wiki/Event%20shape%20observables | In high energy physics, event shapes observables are quantities used to characterize the geometry of the outcome of a collision between high energy particles in a collider. Specifically, event shapes observables quantify the general pattern traced by the trajectories of the particles resulting from the collision.
The most common event shape observables include:
The sphericity;
The aplanarity;
The thrust;
The C-parameter;
The jet broadening.
References
Experimental particle physics | Event shape observables | [
"Physics"
] | 104 | [
"Particle physics stubs",
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
68,403,092 | https://en.wikipedia.org/wiki/Shoe%20dryer | A shoe dryer or boot dryer is a machine used for drying shoes, usually by blowing air inside them. The airflow causes the shoes to dry faster. The air can be heated for even faster drying, and heated models are the most common type. Shoe dryers can be especially useful for people who often have wet shoes, such as families with small children or people who hike outdoors often, and for ski boots, which are often moist after use. Many shoe dryers have a timer which shuts off the dryer after some time. There are also shoe dryers which instead use a heated grate on which the shoes are placed, and which do not blow air.
History
Several patents have been awarded for shoe dryers, with some of the oldest dating back to 1963.
Noise
Many fan-driven shoe dryers emit bothersome noise during use. In a test from 2019, the quietest model was measured at 45 decibels (dB), while the other models were measured at 50 and 57 dB. In 2022, another model was measured at 56 dB in "tornado" mode and 29 dB in "whisper" mode, and in 2023 another variant of the same dryer was measured at 72 dB. It was also commented that the higher pitch of these models' noise could contribute to it being perceived as more intense and bothersome.
Air flow
The volumetric flow rate, i.e. the amount of air that is moved, is an important measure of fan-based shoe dryers. For example, a model tested in 2023 was stated to have a volume flow of 12 cubic meters per hour (m³/h), which corresponds to 12 000 liters per hour or just over 3 liters of air per second. Larger diameters of tubing and fans are beneficial for increased volumetric flow, and also results in lower air speed and thus less noise.
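The unit conversion in the example above is simple arithmetic; a trivial sketch:

```python
flow_m3_per_h = 12                       # figure quoted for the model tested in 2023
litres_per_hour = flow_m3_per_h * 1000   # 1 cubic meter = 1000 liters
litres_per_second = litres_per_hour / 3600  # just over 3 liters per second
```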
Heated air
Shoe dryers with a fan often emit slightly lukewarm or warm air. In a test from 2019, one of the models was rated at a power of 350 watts, of which about 30 W went to the fan and the remaining roughly 320 W to heating. Another model in the test had two temperature settings, for choosing between 40 °C and 55 °C air temperature. A model tested in 2023 had settings for blowing air at room temperature, or heated air at 37, 45 or 60 degrees Celsius.
Not all shoes can withstand heated drying. High heat can wear out shoes made of certain materials; for example, the use of a tumble dryer, heating cables or a heating cabinet can lead to leather shoes cracking.
Fire hazard
Shoe dryers with heating can be a fire hazard if left on for too long, as with any heating appliance, and should therefore be used under supervision.
See also
Dehumidifier
Drying cabinet
Drying room
References
Home appliances | Shoe dryer | [
"Physics",
"Technology"
] | 594 | [
"Physical systems",
"Machines",
"Home appliances"
] |
74,216,190 | https://en.wikipedia.org/wiki/Zhu%20algebra | In mathematics, the Zhu algebra and the closely related C2-algebra, introduced by Yongchang Zhu in his PhD thesis, are two associative algebras canonically constructed from a given vertex operator algebra. Many important representation theoretic properties of the vertex algebra are logically related to properties of its Zhu algebra or C2-algebra.
Definitions
Let be a graded vertex operator algebra with , and let be the vertex operator associated to . Define to be the subspace spanned by elements of the form for . An element is homogeneous with if . There are two binary operations on , defined by for homogeneous elements and extended linearly to all of . Define to be the span of all elements .
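The formulas for the two binary operations did not survive extraction in this copy. In Zhu's standard formulation (quoted here from the literature as an assumption about what the stripped text contained, with Y(a,z) the vertex operator and wt a the weight of a homogeneous element a), they read:

```latex
a * b = \operatorname{Res}_{z}\!\left( Y(a,z)\, \frac{(1+z)^{\operatorname{wt} a}}{z}\, b \right),
\qquad
a \circ b = \operatorname{Res}_{z}\!\left( Y(a,z)\, \frac{(1+z)^{\operatorname{wt} a}}{z^{2}}\, b \right),
```

with O(V) spanned by the elements a ∘ b, so that the Zhu algebra is the quotient A(V) = V/O(V).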
The algebra with the binary operation induced by is an associative algebra called the Zhu algebra of .
The algebra with multiplication is called the C2-algebra of .
Main properties
The multiplication of the C2-algebra is commutative and the additional binary operation is a Poisson bracket on which gives the C2-algebra the structure of a Poisson algebra.
(Zhu's C2-cofiniteness condition) If is finite-dimensional then is said to be C2-cofinite. There are two main representation-theoretic properties related to C2-cofiniteness. A vertex operator algebra is rational if the category of admissible modules is semisimple and there are only finitely many irreducibles. It was conjectured that rationality is equivalent to C2-cofiniteness and a stronger condition, regularity; however, this was disproved in 2007 by Adamovic and Milas, who showed that the triplet vertex operator algebra is C2-cofinite but not rational. Various weaker versions of this conjecture are known, including that regularity implies C2-cofiniteness and that for C2-cofinite the conditions of rationality and regularity are equivalent. This conjecture is a vertex-algebra analogue of Cartan's criterion for semisimplicity in the theory of Lie algebras, because it relates a structural property of the algebra to the semisimplicity of its representation category.
The grading on induces a filtration where so that There is a surjective morphism of Poisson algebras .
Associated variety
Because the C2-algebra is a commutative algebra, it may be studied using the language of algebraic geometry. The associated scheme and associated variety of are defined to be , which are an affine scheme and an affine algebraic variety, respectively. Moreover, since acts as a derivation on , there is an action of on the associated scheme, making a conical Poisson scheme and a conical Poisson variety. In this language, C2-cofiniteness is equivalent to the property that is a point.
Example: If is the affine W-algebra associated to affine Lie algebra at level and nilpotent element then is the Slodowy slice through .
References
Algebras
Algebraic geometry | Zhu algebra | [
"Mathematics"
] | 586 | [
"Mathematical structures",
"Algebras",
"Fields of abstract algebra",
"Algebraic structures",
"Algebraic geometry"
] |
74,227,856 | https://en.wikipedia.org/wiki/ARJ%20Home%20Appliances%20Company | Arj Home Appliance Company () is a white goods manufacturer in Iran. It was founded by Khalil Arjomand in 1937 as a small workshop in Tehran, Iran with eight employees. It started by producing water coolers and refrigerators. It later expanded its production line to include gas heaters, washing machines, freezers, and other devices.
History
The company originally began with a small factory producing metal products. Gradually, it broadened its range of products, increasing the number of employees to eight in the 1940s. With the expansion, the brand moved to Tehran-Karaj Road – the main industrial area to the west of the Iranian capital – where it broadened its range to include several household appliances.
However, in 1979, as with many of its competitors, Arj was nationalized due to the new revolutionary government's dislike of large businesses not under national control. By 1995 the majority of its shares were then sold off to Iran's largest state bank, Melli (National) Bank, which sold them on to different private shareholders.
Bankrupt and closure
Arj had been closed since 1995. As a result of increasing competition from foreign brands and financial problems, the company declared bankruptcy in the fiscal year 2016–17.
Arj officially closed in 2016 due to escalating production problems. The main reasons leading to its closure may have been:
Public administration and inefficient management policies
Exchange rate fluctuations causing a sharp rise in imports
Lack of strategy
Old technology and poor quality products
Revival of the brand
The Ministry of Industry decided to hold talks with the multinational conglomerate General Electric (GE), the American home appliance manufacturer and marketer Whirlpool Corporation, and the Italian Ariston Thermo Group to revive and modernize Iranian companies such as Arj.
References
Iranian brands | ARJ Home Appliances Company | [
"Physics",
"Technology"
] | 371 | [
"Physical systems",
"Machines",
"Home appliances"
] |
74,228,604 | https://en.wikipedia.org/wiki/Damper%20winding | The damper winding (also amortisseur winding) is a squirrel-cage-like winding on the rotor of a typical synchronous electric machine. It is used to dampen the transient oscillations and facilitate the start-up operation.
Since the design of a damper winding is similar to that of an asynchronous motor, the winding technically enables direct-on-line starting and can even be used for motor operation in the asynchronous mode.
Originally, the damper winding was invented by Maurice Leblanc in France and Benjamin G. Lamme in the US to deal with the problem of hunting oscillations caused by early generators being driven by directly connected steam engines with their pulsating torque. In modern designs the generators are driven by turbines and the issue of hunting is less important, although pulsating torque is still encountered by motors, for example when driving piston compressors.
The construction of the damper windings is complex and largely based on empirical knowledge. A typical damper winding consists of short-circuit bars that in the machines with cylindrical rotors share the slots with the field windings, and in the case of salient pole rotors are located in the dedicated slots on the surfaces of pole shoes. There are no bars in the quadrature axis area of the salient pole machines. The bars are terminated on rings or plates encircling the rotor.
References
Sources
Electrical engineering
Electromagnetic coils | Damper winding | [
"Engineering"
] | 307 | [
"Electrical engineering"
] |
72,772,320 | https://en.wikipedia.org/wiki/Joggle%20%28architecture%29 | A joggle is a joint or projection that interlocks blocks (such as a lintel's stone blocks or an arch's voussoirs).
Often joggles are semicircular and knob-shaped, so joggled stones have a jigsaw- or zigzag-like pattern.
Joggling can be found in pre-Frankish buildings, in Roman Spain and Roman France.
In Islamic architecture, the earliest joggles were in the desert castles of the Umayyad Caliphate, such as Qasr al-Hayr al-Sharqi.
In Mamluk architecture, joggling is usually combined with ablaq (alternating colors).
Joggling also characterizes Ottoman architecture in Cairo.
The protruding joggle is also called a "he-joggle", whereas the corresponding slot is called a "she-joggle".
See also
Dovetail joint: dovetailing can be considered a type of joggling.
References
Joinery
Masonry
Arabic architecture
Islamic architectural elements
Mamluk architecture
Ottoman architecture
Architecture in Egypt
Architecture in Syria
Architecture in the State of Palestine | Joggle (architecture) | [
"Engineering"
] | 230 | [
"Construction",
"Masonry"
] |
72,777,484 | https://en.wikipedia.org/wiki/Bohr%20Festival | The Bohr Festival () was a series of seven lectures given by Niels Bohr from 12 to 22 June 1922 at the Institute of Theoretical Physics in Göttingen. These were the Wolfskehl Lectures, funded by the Wolfskehl Foundation. Taking place in the fortnight leading up to the Göttingen International Handel Festival, it became known as the Bohr Festival. In 1991, Friedrich Hund suggested that James Franck was responsible for the comparison.
In the lectures Bohr outlined the current development of the Bohr-Sommerfeld theory, remarking "how incomplete and uncertain everything still is".
References
1920 in Germany
Quantum mechanics | Bohr Festival | [
"Physics"
] | 127 | [
"Quantum mechanics",
"Quantum physics stubs",
"Works about quantum mechanics"
] |
78,556,095 | https://en.wikipedia.org/wiki/Nephrocalcin | Nephrocalcin is an acidic glycoprotein produced by renal proximal tubule cells. It inhibits crystal nucleation, growth, and aggregation, and it is one of the key inhibitors of nephrolithiasis, kidney stone disease.
There are at least 4 known isoforms of nephrocalcin: NC-A, NC-B, NC-C, and NC-D. Higher secretion of NC-C and NC-D is found in kidney stone patients, whereas higher secretion of NC-A and NC-B is found in non-patients.
References
Glycoproteins | Nephrocalcin | [
"Chemistry"
] | 134 | [
"Glycoproteins",
"Glycobiology"
] |
78,560,571 | https://en.wikipedia.org/wiki/Organocalcium%20chemistry | Organocalcium chemistry is the chemistry of compounds containing a calcium to carbon bond, or in broader definitions, organic compounds that contain calcium. Although discovered around the same time as the now commonly utilized organomagnesium compounds, organocalcium compounds were subject to greatly reduced interest due to drastic differences in stability. However, recent advances in stabilization of these highly reactive compounds has spurred increased interest in organocalcium compounds and allowed for multiple research directions to form. Because calcium metal is less reactive to organic reagents than magnesium and the organocalcium compounds are more reactive than organomagnesium compounds, synthesis of novel compounds still poses a significant challenge. Calcium also has access to empty d orbitals that the lighter alkaline earth metals cannot access, and the degree to which this affects bonding and reactivity has sparked a fundamental debate. Lastly, despite the inherent instability of most organocalcium complexes, the unique basicity and size of the calcium ion together with the highly polarized bonds formed has opened up applications for organocalcium compounds in organic transformations and catalytic cycles.
Compounds
In general, organocalcium synthesis is complicated by relatively unreactive calcium metal (compared to magnesium or the alkali metals due to a high atomization energy) and high reactivity of most organocalcium compounds to oxygen, water, and even ethereal solvents. To sustain the highly electropositive calcium center, the vast majority of compounds have anionic ligands by which they can be categorized, with neutral coordinating ligands utilized for increased stability.
Aryl, allyl, and alkyl derivatives
The earliest organocalcium compounds to receive some sustained interest were alkyl- and arylcalcium compounds. The first of these was reported in 1905 by Ernst Beckmann, where synthesis of phenylcalcium iodide was claimed following stirring of calcium shavings with iodobenzene in diethyl ether (Et2O). Subsequent study by Henry Gilman and Ferdinand Schulze argued that the isolated product in this report was actually the Et2O adduct of CaI2, and, although phenylcalcium halides have been reported numerous times, they are usually characterized through subsequent derivatization products. It took a full century until, in 2005, Matthias Westerhausen and colleagues obtained the first structural characterization of an arylcalcium compound, crystallizing phenylcalcium iodide as an adduct of tetrahydrofuran (THF) and calcium oxide. A consistent challenge in the formation of organocalcium compounds has been the activation of calcium metal. Recent advancements in mechanochemistry have opened up simpler synthetic setups, with unactivated calcium being used to form arylcalcium reagents in situ during ball-milling.
Allylcalcium compounds have also seen recent synthetic success, beginning with Timothy Hanusa and colleagues’ synthesis of a bis(allyl)calcium complex stabilized by sterically large, silyl substituents. These successes have largely been driven by the use of salt metathesis reactions, where potassium salts of allyl anions exchange metals with a calcium halide, typically CaI2. This same strategy has been used to synthesize the unsubstituted complex Ca(η3-C3H5)2 as a soluble triglyme adduct. This has been proven to be a versatile strategy, with a full series of substituted allylcalcium complexes of different sizes also characterized through a salt metathesis pathway.
The carbon atom in the calcium-carbon bond takes on a significant negative charge. Because of the greater nucleophilicity of alkyl ligands, the alkylcalcium reagents are in general harder to synthesize than the arylcalcium compounds. A common stabilizing strategy is to use bulky silyl and phenyl substituents to stabilize this negative charge. When targeting a Grignard analogue, the decreased reactivity from this method and the poor stability of the less protected methyl- and ethylcalcium halides has led to in situ generation of reactive alkylcalcium halides as the preferred method over the synthesis of isolable compounds. Because of this poor stability, the pure organometallic dimethylcalcium was only isolated in 2018 by Reiner Anwander and colleagues as an insoluble, amorphous solid, with the THF adduct being structurally characterizable as a heptametallic cluster.
Metallocenes
Few calcium metallocenes (“calcocenes”) have been isolated, but they are of particular interest due to the insights into bonding that have come from their study. The first synthesis of Cp2Ca (Cp = cyclopentadienyl) from calcium metal and cyclopentadiene in THF produced an insoluble, polymeric product. A crystal structure showed that, unlike most transition metal metallocenes, the Cp-Ca-Cp angle is significantly bent and Cp2Ca has an opening that can be utilized to access derivatives. As seen in the first monomeric synthesis of a calcocene, ethereal solvents such as Et2O and THF almost always coordinate in this opening and can be challenging to remove through sublimation. This bent structure can be leveraged into different coordination environments. For example, two butenyl-substituted Cp ligands will coordinate to Ca through both the five-membered rings and the olefins, but the olefins will not coordinate to Mg, where the Cp-Mg-Cp angle is not bent.
Low-oxidation-state compounds
Although low oxidation state beryllium and magnesium chemistry has developed significantly in the last two decades, only a few reports exist of organocalcium compounds stabilizing any oxidation state other than Ca(II). The first and only report of an isolable Ca(I) compound came in 2009, where two THF-coordinated Ca(I) ions sit on either side of an arene ring. The π-antibonding orbitals of the sandwiched arene help stabilize the two calcium ions, which are further stabilized by the coordinating solvent. Other studies of Ca(I) were done at low temperatures in exotic conditions or examine formally Ca(II) compounds that imply Ca(I)-containing intermediates either during synthesis or further reactivity. A landmark example of this from Sjoerd Harder and coworkers is the reported reduction of arenes and N2 by a bridged Ca(I)-Ca(I) species generated in situ. The ease of activating the normally inert N2 to turn it into a strong reductant even at room temperature highlights the instability of Ca(I) species. Although not isolable as a Ca(I)-Ca(I) dimer, it possesses similar reactivity as a stronger reducing agent than a Mg(I) dimer.
Amides, hydrides, and fluorides
There are several classes of calcium complexes that have become especially relevant despite not necessarily containing a Ca-C bond. The calcium amides, for example, have been investigated for numerous applications as a stoichiometric or catalytic reagent. Several modern synthetic strategies have allowed for a wide range of calcium amides to be realized. Transmetalation, such as from a Sn(II) amide, allowed for the early preparation of amides yet again stabilized by bulky silyl groups. Additional electronic and kinetic stabilization can be provided through carbenes, despite lacking the π-backbonding that other main group elements are capable of. A breakthrough in eliminating side product formation and other contamination was the development of mechanochemical syntheses that forgo the use of solvent. Simply ball-milling CaI2 with a potassium amide salt yielded the corresponding bis(amido) complex.
Inspired by the well-studied and useful solid-state CaH2, several molecular calcium hydrides have been synthesized with the hope of interesting small molecule activation. In 2006, Sjoerd Harder and Julie Brettar accomplished the synthesis of a well-defined, dimeric calcium hydride through the reaction of a calcium amide with phenylsilane. Subsequent studies have expanded the library of stabilizing ligands, but all are multidentate ligands that coordinate through nitrogen sites.
Several recent advances have been made in the synthesis of molecular calcium fluorides. The solid-state CaF2 is an important source of fluorides for organofluorine compounds, but existing routes rely on dangerous HF intermediates. The early well-characterized molecular calcium fluorides are clusters and are formed by reacting CaF2 with large, multidentate ligands. Recent work from Simon Aldridge and coworkers has resulted in more accessible fluoride coordination environments that can act as reagents for nucleophilic fluoride addition to organic compounds.
Bonding Descriptions
The changes in properties going down the alkaline earth group cause calcium to possess qualitatively distinct bonding characteristics from those of the lighter beryllium and magnesium ions. In particular, calcium is significantly larger, more reducing, and has a much lower electronegativity. This enforces a strong preference for the Ca(II) oxidation state and an essentially ionic bond with carbon, so that the carbon in the Ca-C bond can reasonably be described as a carbanion.
A key difference in calcium bonding descriptions compared to magnesium and beryllium is the occasional use of the unfilled 3d orbitals to fully explain bonding and structural patterns. For example, the bent nature of calcocene, and the potentially bent geometry of CaH2, can be explained by increased involvement of the 3d orbitals in bonding. This has been highly debated, however, with other explanations invoking the polarizability of the larger Ca core and a stabilizing van der Waals interaction between the two ligands. A similar debate is ongoing regarding the degree of π-backbonding in a Ca(CO)8 complex. Although still controversial, computational studies on the degree of sp-d hybridization have caused some to label Ca as an honorary transition metal.
Reactivity
Heavy Grignard reactivity
Organocalcium compounds show more similarities to organolithium compounds than to organomagnesium compounds. This is largely due to differences in electronegativity, which allow organocalcium compounds to function as a base more often than typical magnesium-based Grignard reagents do. This basicity is exemplified by the facile deprotonation and subsequent cleavage of ethers such as THF.
Another point of differentiation from the magnesium-based Grignard reagents is the higher positive charge localized on the calcium atom, due to the higher degree of ionicity in the Ca-C bond versus the Mg-C bond, which can enable unique reactivity not seen in the lighter alkaline earth compounds. For example, a dimeric Ca alkynide complex was shown to enable the coupling of two anionic alkynides to form an extended, fully double bonded four-carbon chain. The previously mentioned in situ generation of reactive alkylcalcium species has also been successfully used to react with amines to form calcium amides. This reactivity relies on fast ligand exchange of calcium Grignard reagents due to the ionic nature of this bond – the initially formed product is a heteroleptic calcium monoamide monohalide, but ligand exchange quickly forms the full calcium diamide and an insoluble calcium dihalide that drives the Schlenk equilibrium to completion. Non-Grignard alkylcalcium complexes have also shown unique reactivity, such as alkylation of benzene driven by the formation of a calcium hydride.
Catalytic reactivity
Catalysis with organocalcium compounds has historically been limited due to poor stability. However, significant recent progress has been made in multiple areas of catalytic applications. Inspired by the use of alkali metal-based organometallic compounds in anionic polymerization, organocalcium compounds have also been investigated as polymerization catalysts. For example, fast polymerization has been seen for polylactide synthesis with excellent selectivity for the isotactic form. This is enabled not only by the previously discussed electronic and electrostatic differences, but also by the larger size of calcium in comparison to the alkali metals or magnesium, which allows an unusual trigonal prismatic coordination geometry utilized throughout the mechanism. The ionic nature of Ca-C bonding can also be leveraged for living polymerization, as was demonstrated for a stereoselective synthesis of polystyrene.
Catalysis has also been performed using organocalcium compounds for a series of organic transformations. This most prominently includes hydroamination, where numerous viable substrates and modes of selectivity have been demonstrated. Catalytic activity has also been shown for the analogous hydrophosphination, the hydrogenation of alkenes with dihydrogen, regioselective hydrosilylation of conjugated alkenes, and the hydroboration of alkenes, although the role of calcium in the latter mechanism is still debated. The redistribution of arylsilane and hydrosilane groups has also been performed catalytically, relying on the cleavage and reformation of C-Si and Si-H bonds driven by the simultaneous cleavage and reformation of Ca-C and Ca-H bonds.
References
Calcium
Organometallic chemistry | Organocalcium chemistry | [
"Chemistry"
] | 2,815 | [
"Organometallic chemistry"
] |
78,562,224 | https://en.wikipedia.org/wiki/2-Aminoadipic-2-oxoadipic%20aciduria | 2-Aminoadipic-2-oxoadipic aciduria (AMOXAD) is a rare, autosomal recessive metabolic disorder caused by defects in the degradation of the amino acids lysine and tryptophan. It is classified as an organic aciduria and results from mutations in the DHTKD1 gene, which encodes a mitochondrial enzyme essential for the breakdown of 2-aminoadipate and 2-oxoadipate. The condition leads to the accumulation of these metabolites in blood and urine.
Genetics
The disorder stems from compound heterozygous mutations in the DHTKD1 gene, located on chromosome 10p14. These mutations disrupt the function of the mitochondrial 2-oxoadipate dehydrogenase complex (OADHC), a multienzyme system critical for amino acid metabolism. This complex catalyzes the oxidative decarboxylation of 2-oxoadipate during lysine and tryptophan degradation. Its dysfunction leads to the accumulation of toxic intermediates, which impair mitochondrial function, causing oxidative stress and energy deficits. Inheritance follows an autosomal recessive pattern, meaning an individual must inherit defective copies of the gene from both parents to manifest the disease. While AMOXAD is extremely rare, many cases remain asymptomatic or are diagnosed later in life.
Pathophysiology
The pathogenic mechanisms of AMOXAD are not fully elucidated. The lysine degradation pathway is a complex, multistep process involving mitochondrial, cytosolic, and peroxisomal enzymes. It begins with the conversion of lysine into saccharopine and subsequently into 2-aminoadipate-6-semialdehyde. This step is catalyzed by alpha-aminoadipic semialdehyde synthase (AASS). The semialdehyde is then converted to 2-aminoadipate, which is subsequently deaminated into 2-oxoadipate. In the mitochondria, 2-oxoadipate is decarboxylated by the 2-oxoadipate dehydrogenase complex (OADHC), which depends on DHTKD1. This reaction yields glutaryl-CoA, which can enter the tricarboxylic acid cycle after conversion to acetyl-CoA. Mutations in DHTKD1 disrupt this crucial decarboxylation step, causing an accumulation of upstream metabolites such as 2-aminoadipate and 2-oxoadipate. This leads to mitochondrial dysfunction, increased oxidative stress, and toxic effects that contribute to the symptoms of AMOXAD. The pathway also intersects with the degradation of hydroxylysine and tryptophan, converging at the intermediates 2-aminoadipate and 2-oxoadipate. The exact pathways through which these metabolites cause damage remain a focus of ongoing research.
Clinical Symptoms
Over 20 cases of AMOXAD have been identified, with varying outcomes. While some patients remain asymptomatic, others experience a range of neurological and muscular symptoms, including:
Hypotonia (reduced muscle tone)
Developmental delays or intellectual disabilities of varying severity
Ataxia (impaired coordination)
Seizures
Behavioral abnormalities, such as attention deficit hyperactivity disorder (ADHD)
Diagnosis
Diagnosis involves analyzing urinary organic acids using gas chromatography–mass spectrometry. Characteristic findings include elevated levels of 2-oxoadipate and 2-hydroxyadipate in the urine and 2-aminoadipate in the blood. Molecular genetic testing can confirm mutations in the DHTKD1 gene, solidifying the diagnosis.
Treatment
Currently, there is no specific cure for AMOXAD. Management focuses on symptomatic treatment and supportive care, including dietary modifications (e.g., a low-lysine diet) to reduce the accumulation of toxic metabolites. Antiepileptic drugs are used to manage seizures, but vigabatrin should be avoided due to its potential to exacerbate underlying metabolic imbalances or increase the accumulation of toxic intermediates in lysine metabolism. Research is ongoing to identify targeted therapies that address the enzymatic deficiencies caused by DHTKD1 mutations.
Prognosis
The prognosis depends on the severity of symptoms. While asymptomatic individuals can lead normal lives, those with severe manifestations may experience significant developmental and neurological challenges.
References
External links
Rare diseases
Metabolic disorders | 2-Aminoadipic-2-oxoadipic aciduria | [
"Chemistry"
] | 941 | [
"Metabolic disorders",
"Metabolism"
] |
78,563,566 | https://en.wikipedia.org/wiki/Glass%20production%20in%20Licking%20County%2C%20Ohio | Licking County has been tied to the glass-making industry throughout the Midwest since the 1800s. This is due to the silica deposits found throughout rivers in Ohio. Entrepreneurs such as Edward H. Everett supported this industry. Although glass production has decreased in Licking County since the 1800s, it is still relevant today.
History of glass manufacturing
Shields King & Co. was a glass manufacturing company founded in 1871 by William Shields, David E. Stevens, Oren G. King, William E. Atkinson, and David C. Winegarner, and it began by making various glass bottles. The founders worked alongside others, such as Richard Lumley, on various patents, including self-sealing fruit jars. The company operated the Newark Star Glassworks factory, producing beer bottles, jars, and bottle stoppers.
The company was successful after opening in 1871, and its purchase by Edward H. Everett in 1880 prompted a significant increase in business. During the late 1800s, $20,000 worth of beer bottles were produced for a brewing company in Cincinnati. The factory remained in production until it burnt down in May 1893, only to resume production that December. Edward H. Everett later facilitated a combination with other glass companies to create The American Bottle Company, a glass container manufacturer in the Midwest. It was founded in 1905 and is known for producing various bottles and jars for multiple industries.
Edward H. Everett drove growth within the glass industry as these factories shifted entirely to machine-based production. Machines increased the speed at which glassware was produced, but they also displaced the heritage craft of glass blowing, taking away the jobs of former employees.
The “Stevens Tin Top” is an example of a piece of glass produced in the Newark Star Glassworks: a hand-blown jar in blue aquamarine glass with a groove-ring wax sealer and a tool-applied lip. The company patented two fruit jars in 1875, sold under the name The Western Pride Self Sealing Jar. Shields King & Co. advertised that their jars were the cheapest on the market and, because no wrench was needed, easier to open, setting them apart from their competitors.
Silica deposits
In McDermott, Ohio, some sandstones are of sufficient purity to serve as a source of silica. Silica sand units were mined throughout Ohio during the Civil War, and production continued to grow; shortly after World War I, large amounts of silica products were being produced in Ohio. In the 1900s, these sandstones brought in large amounts of money, especially in recent decades. In 1986, 2 million tons of silica sandstone were sold, with a value of $24 million. The name Licking County originated from the salt licks found on the river's banks. These salt licks were not only beneficial for glass making but were also enjoyed by the wildlife surrounding the area.
Other prominent glass manufacturers in Licking County
Holophane, founded in France in 1895, brought its glass technology to Newark, Ohio, in 1902, capitalizing on the area's natural resources and skilled workforce. The Holophane company manufactures glass reflectors, refractors, and lenses that cover the light source. Initially, Holophane collaborated with the A.H. Heisey Glass Company, which manufactured the glass Holophane used in lights throughout the early 20th century; Holophane established its own plant in 1910. Heisey and Holophane also worked together on various other projects, including the restoration of Heisey glass molds. Holophane's creative glass products became essential for industrial and street lighting, particularly during World War II, when it supplied military bases and airfields.
After the war, Holophane expanded operations to nearby cities like Springfield and Pataskala, introduced overhead street lights in 1948, and diversified into decorative lighting. Today, Holophane leads in sustainable and energy-efficient lighting, integrating LED technology while maintaining its reputation for quality and innovation.
Holophane and other production companies came under the management of Acuity Brands in 1999. Under the new management, work changed drastically for the Newark operation when Acuity Brands abruptly announced that it would be moving assembly lines to Mexico. In 2008, Acuity Brands had committed to the Bill Clinton Climate Initiative before facing backlash over its negligence under the Clean Water Act: the company faced a nearly $4 million fine when it was discovered that a detergent plant under its authority in Atlanta had not disclosed phosphorus leaking into the public water supply. The announced move of Holophane to Mexico left many Holophane employees without a job, and despite pushback from Ohio members of Congress, Acuity remained determined to move Holophane production and dismissed other opinions.
References
Licking County, Ohio
Glass production
History of Ohio | Glass production in Licking County, Ohio | [
"Materials_science",
"Engineering"
] | 1,032 | [
"Glass engineering and science",
"Glass production"
] |
77,217,245 | https://en.wikipedia.org/wiki/Machine%20unlearning | Machine unlearning is a branch of machine learning focused on removing specific undesired elements, such as private data, outdated information, copyrighted material, harmful content, dangerous abilities, or misinformation, without needing to rebuild models from the ground up.
Large language models, like the ones powering ChatGPT, may be asked not just to remove specific elements but also to unlearn a "concept," "fact," or "knowledge," which aren't easily linked to specific examples. New terms such as "model editing," "concept editing," and "knowledge unlearning" have emerged to describe this process.
History
Early research efforts were largely motivated by Article 17 of the GDPR, the European Union's privacy regulation commonly known as the "right to be forgotten" (RTBF), introduced in 2014.
Present
The GDPR did not anticipate that the development of large language models would make data erasure a complex task. This issue has since led to research on "machine unlearning," with a growing focus on removing copyrighted material, harmful content, dangerous capabilities, and misinformation. Just as early experiences in humans shape later ones, some concepts are more fundamental and harder to unlearn. A piece of knowledge may be so deeply embedded in the model’s knowledge graph that unlearning it could cause internal contradictions, requiring adjustments to other parts of the graph to resolve them.
References
Machine learning | Machine unlearning | [
"Engineering"
] | 306 | [
"Artificial intelligence engineering",
"Machine learning"
] |
77,217,245 | https://en.wikipedia.org/wiki/Optimal%20network%20design | Optimal network design is a problem in combinatorial optimization. It is an abstract representation of the problem faced by states and municipalities when they plan their road network. Given a set of locations to connect by roads, the objective is to have a short traveling distance between every two points. More specifically, the goal is to minimize the sum of shortest distances, where the sum is taken over all pairs of points. For each two locations, there is a number representing the cost of building a direct road between them. A decision must be made about which roads to build with a fixed budget.
Formal definition
The input to the optimal network design problem is a weighted graph G = (V,E), where the weight of each edge (u,v) in the graph represents the cost of building a road from u to v; and a budget B.
A feasible network is a subset S of E, such that the sum of w(u,v) for all (u,v) in S is at most B, and there is a path between every two nodes u and v (that is, S contains a spanning tree of G).
For each feasible network S, the total cost of S is the sum, over all pairs of nodes u and v in V, of the length of the shortest path from u to v that uses only edges in S. The objective is to find a feasible network with a minimum total cost.
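The definition above can be made concrete with a small sketch: Floyd-Warshall evaluates the total cost of a candidate edge subset, and an exhaustive search over budget-feasible subsets finds the optimum. Function names here are illustrative, not from the literature, and the brute-force search is exponential, so it only scales to toy instances (consistent with the NP-hardness of the problem).

```python
from itertools import combinations

def total_cost(n, subset):
    """Sum of shortest-path lengths over all node pairs, using only the
    edges in `subset` (Floyd-Warshall). Returns None if `subset` does
    not connect all n nodes."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in subset:  # roads are undirected
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    cost = sum(d[i][j] for i in range(n) for j in range(i + 1, n))
    return None if cost == INF else cost

def optimal_network(n, edges, budget):
    """Exhaustive search over all edge subsets whose building cost fits
    the budget; returns (best total cost, best subset)."""
    best_cost, best_subset = None, None
    for r in range(n - 1, len(edges) + 1):  # need >= n-1 edges to connect
        for subset in combinations(edges, r):
            if sum(w for _, _, w in subset) > budget:
                continue
            c = total_cost(n, subset)
            if c is not None and (best_cost is None or c < best_cost):
                best_cost, best_subset = c, subset
    return best_cost, best_subset
```

For a triangle with unit building costs, a budget of 2 forces a spanning tree (total cost 1 + 1 + 2 = 4), while a budget of 3 allows building all three roads (total cost 3).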
Results
Johnson, Lenstra and Kan proved that the problem is NP-hard, even for the simple case where all edge weights are equal and the budget restricts the choice to spanning trees.
Dionne and Florian studied branch and bound algorithms, and showed that they work in reasonable time on medium-sized inputs, but not on large inputs. Therefore, they presented heuristic approximation algorithms.
Anshelevich, Dasgupta, Tardos and Wexler study a game of network design, where every agent has a set of terminals and wants to build a network in which his terminals are connected while paying as little as possible. They study the computational problem of checking whether a Nash equilibrium exists. For some special cases, they give a polynomial time algorithm that finds a (1+ε)-approximate Nash equilibrium.
Boffey and Hinxman present a heuristic method, and show that it yields high-quality results. They also study solution methods based on branch-and-bound, and evaluate the effects of making various approximations when calculating lower bounds. They also generalize the problem to networks with link construction costs not proportional to length, and with trip demands that are not all equal.
See also
Network planning and design
Minimum routing cost spanning tree – a similar problem in which the selected set must be a spanning tree.
References
Combinatorial optimization
Networks
Transport
Spanning tree | Optimal network design | [
"Physics"
] | 566 | [
"Physical systems",
"Transport"
] |
77,221,512 | https://en.wikipedia.org/wiki/1997%20California%20New%20Years%20Floods | The 1997 California New Years Floods resulted from a series of winter storms, from December 26 to January 3 of 1997, fed with tropical moisture by an atmospheric river. It impacted Northern California, resulting in some of the most devastating flooding since the Great Flood of 1862. Similarly to the 1862 event, the flooding was a combined effect of heavy rainfall and excessive snowmelt of the relatively large early-season Sierra Nevada snowpack. The resulting flooding in the Central Valley and other low-lying areas forced over 120,000 people from their homes and caused over $2 billion in property damage alone. 48 out of California's 58 counties were declared disaster areas with many streamflow gauge stations in these areas recording return intervals of over 100 years. It would take months for the worst-hit areas to recover fully.
Meteorological Setting
Before the warm storms arrived, a cold system brought 5-8 feet of snow to the Sierras from December 21-22, with heavy accumulations even below 5,000 feet. This storm, along with earlier colder systems, contributed to the large snowpack (150% to 200% of average) in the Sierras. During Christmas, a shift in the weather pattern to what is known as a Pineapple Express began the series of successive storms that contributed to the flooding. The upper-level ridge began to shift west with cooler air dropping across British Columbia. An upper-level high situated over the Aleutian Islands was undercut by an upper-level low and stalled between 40 degrees North and 160 degrees West. This and an upper-level jet extension with peak wind speeds of 180 knots in the Western Pacific ultimately contributed to the influx of tropical moisture into California. Precipitable water in the atmosphere peaked near 1.8 inches just off the California coast on January 1. Normally, the Sierras get about three times more rain than the Sacramento Valley, but during this event, they got up to ten times more rain because of the wind direction and strength of the winds.
The combination of slow-moving weather systems and strong winds brought warm, moist air into Northern California, causing prolonged and heavy rainfall. This warmth caused snow levels to rise very high, above 10,000 feet. As a result, most of the precipitation fell as rain instead of snow. This rain not only added a lot of water but also melted most of the existing snow, leading to even more runoff and contributing to the flooding. A very active Madden-Julian oscillation is also thought to have contributed to this extreme precipitation event.
Precipitation Totals
Impact
North Coast
During the December 26 to January 3 storm period, the North Coast river basins, despite their lower elevations, received significant precipitation ranging from 10 to 25 inches. The most substantial rainfall occurred in the Eel and Russian River basins, leading to severe flooding. The Russian River at Guerneville reached a flood stage of 45 feet, about 3.5 feet lower than the record 48.56 feet stage in 1986, but still the second-highest stage since 1995. The flooding of the Russian River caused significant damage to farmland and vineyards along the banks of the river including the city of Guerneville.
Central Valley and Sierra Nevada
During the event, runoff from the Sierra Nevada basins that drain into the Central Valley was significantly increased by rain at higher elevations and melting snow. The New Year's Day storm tested the Sacramento-Feather River flood control system, which had to manage local runoff and reservoir releases to maintain its integrity. Prior to the major storms, reservoirs were able to reduce storage and regain flood reservation space based on forecasts and operations. However, the intense storms around New Year's Day quickly filled these reservoirs near capacity, necessitating increased downstream releases and setting new peak flow records into Lake Shasta and Lake Oroville.
On January 1, the Napa River reached 3 feet above flood stage in Napa, and Cache Creek reached a record stage height of 14.14 feet and a record flow of 13,200 cfs, which caused only minimal damage in Yolo County. The worst flooding occurred on the Feather River on January 2. The river stage height peaked at 50.4 feet at Nicolaus (2.4 feet above flood stage). Multiple levees broke along the river causing significant flooding to Marysville and Arboga. A levee break south of Yuba City devastated the town of Olivehurst. Roughly 100,000 people from Oroville had to be evacuated due to the high flows coming from Lake Oroville. On January 2, the Cosumnes River at Michigan Bar reached a record peak stage height of 18.54 feet and a record flow of 93,000 cfs. The Cosumnes flooded surrounding areas, forcing the closing of SR 99 and I-5. The river breached levees in many places and began to flow above the levees altogether. The communities of Sloughhouse and Wilton were also flooded as a result. The Yuba River at Marysville reached a record peak stage height of 91.64 feet and a peak flow of 161,000 cfs on January 2. The American River had its second-highest peak stage height ever at 26.40 feet and the second-fastest peak flow rate ever at 180,000 cfs due to high flow releases from the Folsom Dam. A mudslide blocked US 50 near White Hall. I-80 was also closed. The Merced River at Pohono Bridge in Yosemite reached a record stage height of 23.43 feet and a record peak flow of 24,600 cfs, which caused some of the greatest flooding since 1862. For the first time, the Don Pedro Reservoir reached maximum capacity, forcing releases with high flows downstream. As a result, the Tuolumne River at Modesto reached a record stage height of 71.21 feet and a near-record flow of 55,800 cfs on January 4. The record flows on the river caused considerable flooding to farmland and housing along the river and some neighborhoods in Modesto.
Dry Creek flooded neighborhoods near the Creekside Golf Course. The San Joaquin River at Vernalis reached a record peak stage height of 34.88 feet and a near-record flow of 75,600 cfs on January 5. The San Joaquin flooded many communities along its banks, causing substantial damage in Manteca. The Sacramento River at Verona reached a near-record stage height of 42.09 feet and a record flow of 102,000 cfs. Consequently, levee breaches on the Sacramento and the flooding of the Yolo Bypass inundated many acres of farmland. The Truckee River also had near-record flows with a peak stage height of 13.13 feet and a flow rate of 14,900 cfs at Farad (well above flood stage), which flooded Downtown Truckee.
Aftermath
During the 1996–1997 water year, Northern California experienced extremely wet conditions in December and January. However, the rest of the winter and early spring saw little precipitation. Consequently, the snowpack in the northern Sierra Nevada was only 60% of the average by April 1, and many major reservoirs in California did not fill to capacity from the spring snowmelt. The flooding caused roughly $2 billion in damages ($2.94 billion in 2023) and was blamed for the deaths of 9 people. It took many places affected by the floods months to recover. In June 1997, Yosemite was provided with $178.5 million to repair and replace infrastructure, resources, and property damaged by the floods, along with an additional $79.2 million. It took until 2012 for the final flood recovery funds to be obligated. Since the 1997 floods, the California Department of Water Resources (DWR) has significantly improved flood risk management through better data collection, forecasting, and emergency response. Collaborating with various partners, DWR has implemented Forecast Informed Reservoir Operations (FIRO) to reduce flood risks by optimizing reservoir storage. Technological advancements like LIDAR surveys enhance snowpack data accuracy, crucial for managing water supply. The department has invested billions in flood management systems, including levee improvements and habitat restoration. Public education and coordinated emergency responses further bolster California's flood preparedness and resilience.
See also
Floods in California
Floods in the United States
1997 Merced River flood - Detailed look at flooding in Yosemite from these storms
1997 Nevada floods - Flooding from these storms in Nevada
2017 California floods - Flooding that occurred in the same areas
References
Natural disasters
1997 | 1997 California New Years Floods | [
"Physics"
] | 1,702 | [
"Weather",
"Physical phenomena",
"Natural disasters"
] |
75,593,523 | https://en.wikipedia.org/wiki/Vladimir%20Alexandrovich%20Koptsik | Vladimir Alexandrovich Koptsik (26 February 1924 – 2 April 2005) was a Soviet crystallographer and physicist. In 1966 Koptsik was the first to publish the complete atlas of all 1651 antisymmetry space groups. In 1972 he published Symmetry in Science and Art with extensive coverage of dichromatic and polychromatic symmetry.
Life
Career
Koptsik was born on 26 February 1924 in Ivanovo. In 1941-1944 he worked as a turner in a defence plant in Moscow. Koptsik graduated from Moscow State University in 1949. He then began post-graduate work under the supervision of A.V. Shubnikov and submitted his candidate's dissertation in 1953.
In 1953 Koptsik was hired as an assistant to Shubnikov in the new department of Crystallography and Crystal Physics at MSU. He progressed through various positions, earning his doctorate in 1963, becoming full professor in 1967, and serving as head of the department from 1968 to 1974, succeeding Shubnikov.
Koptsik is known for his contributions to the physics of electrically and magnetically ordered crystals, the tensor representation of anisotropic media, the theory of crystal symmetry, and the symmetry aspects of structural phase transitions.
From 1966 Koptsik was a member of the Committee on International Crystallographic Tables of the International Union of Crystallography (IUCr); in 1983 he became a member of the subcommittee on nomenclature of n-dimensional crystallography.
Works
The majority of Koptsik's works were published in Russian. Books published by Koptsik:
Shubnikov groups: handbook on the symmetry and physical properties of crystal structures (1966)
Symmetry in Science and Art (1972); English translation (1974)
Problem exercises for crystal physics (1982 and 1988)
Koptsik published 300 academic papers. Selected papers available in English:
Polymorphic phase transitions and symmetry (1957)
A general sketch of the development of the theory of symmetry and its applications in physical crystallography over the last 50 years (1968)
Views of Aleksei Vasil'evich Shubnikov on crystallography and crystal physics (on the ninetieth anniversary of his birth) (1977)
Symmetry principle in physics (1983)
Generalized symmetry in crystal physics (1988)
Symmetry bases. The contemporary symmetry theory in solids (1994)
Honours and awards
E. S. Fedorov Prize of the Russian Academy of Sciences for his contributions to the theory of symmetry (1973)
Honoured Professor of Moscow State University (1996)
Honoured Scientist of the Russian Federation (1999)
References
1924 births
2005 deaths
Soviet physicists
Crystallographers | Vladimir Alexandrovich Koptsik | [
"Chemistry",
"Materials_science"
] | 540 | [
"Crystallographers",
"Crystallography"
] |
75,595,931 | https://en.wikipedia.org/wiki/Aluminylene | Aluminylenes are a sub-class of aluminium(I) compounds that feature singly-coordinated aluminium atoms with a lone pair of electrons. As aluminylenes exhibit two unoccupied orbitals, they are not strictly aluminium analogues of carbenes until stabilized by a Lewis base to form aluminium(I) nucleophiles. The lone pair and two empty orbitals on the aluminium allow for ambiphilic bonding where the aluminylene can act as both an electrophile and a nucleophile. Aluminylenes have also been reported under the names alumylenes and alanediyl.
The +1 oxidation state for aluminium is less stable than in the heavier group 13 elements, but the lower stability and higher reactivity of aluminium(I) compounds make for interesting chemistry. The first aluminium(I) compound to be isolated was Dohmeier's (AlCp*)4, which existed as a tetrameric solid but dissociated in solution to the monomer. This was followed by Roesky's synthesis of a doubly coordinated aluminium(I) and nitrogen heterocycle analogous to an aluminium Arduengo carbene. Despite some rich aluminium(I) chemistry following those discoveries, it was not until 2020 that a free (not Lewis base stabilized) aluminylene was synthesized.
Free aluminylenes
Simple aluminylenes have been studied but are highly reactive and only exist in the gas phase under extreme conditions. The first free aluminylene came from Tuononen and Power, who used bulky terphenyl ligands to stabilize the reduction of the aluminium(III) diiodide. The isolated arylaluminylene formed thermally stable yellow-orange crystals that were characterized via X-ray crystallography and NMR spectroscopy. The aluminylene demonstrated more reactivity than its gallium analogue and quickly formed an aluminium hydride upon reaction with hydrogen gas.
Soon after, Liu and coworkers as well as Hinz and coworkers separately synthesized free nitrogen-bound aluminylenes stabilized with bulky carbazolyl ligands. While also thermally stable, the N-aluminylene was extremely sensitive to air and water. Part of the stability of the N-aluminylene is based on slight pi-donation from the nitrogen atom, facilitated by the planar nature of the molecule. This conclusion is supported by electronic structure calculations and a slightly shorter N-Al bond distance than would be expected for an N-Al single bond. Both free aluminylenes largely depend on the steric bulk of their ligands for kinetic protection, a common motif in stabilizing reactive main group complexes.
Reactivity
The ambiphilic nature of aluminylenes, as well as the reactivity of aluminium(I) complexes more generally, allows for aluminylenes to participate in a diverse range of reactions. Natural Bond Orbital (NBO) calculations showed that the frontier orbitals of these aluminylenes matched expectations with the aluminium lone pair as the HOMO and a largely aluminium p-orbital based LUMO.
Redox reactions
Power's aluminylene was shown to react with organic azides to create aluminium(III) imides. In a reaction with ArMe6N3, the terphenyl aluminylene was able to form an Al-N triple bond, a conclusion supported by the shortest reported Al-N bond distance (1.625 Å). This aluminylene also reacted with less bulky azides, but the lack of steric protection meant that a second equivalent of azide reacted to give a multiply coordinated aluminium(III) compound.
The N-aluminylene reported by Liu and coworkers was shown to undergo an oxidative insertion reaction when mixed with IDippCuCl (IDipp=1,3-bis(2,6-diisopropylphenyl)imidazol-2-ylidene) to form a terminal copper-alumanyl complex.
Liu also demonstrated that the N-aluminylene could act as an important precursor to organoaluminium compounds. In these reactions, the aluminylene performs cycloaddition with unsaturated hydrocarbons to create aluminium heterocycles. Subsequently, the Al-N bond can be cleaved using a nucleophilic salt to free the newly formed organoaluminium compound.
In 2023, Liu and coworkers published further examples of the reactivity of their N-aluminylene as they attempted to react the compound with various boron based Lewis acids. Upon reaction with Ph2BOBPh2, the aluminylene formed a tricoordinate species featuring new aluminium-boron and aluminium-oxygen bonds. This free alumaborane was characterized via 11B NMR and showed two three-coordinate boron atoms, an observation further supported by x-ray crystallography data. The formation of Lewis adducts was also observed when the aluminylene was mixed with strong Lewis acids such as BCF (Tris(pentafluorophenyl)borane) and Piers’ borane (HB(C6F5)2).
Lewis base stabilized aluminylenes
In addition to free aluminylenes, there have been several attempts to further stabilize these reactive species through the coordination of another Lewis base. Transient versions of these compounds have been reported on the way to other products via coordination with N-heterocyclic carbenes (NHCs) and amidophosphines. However, in 2022 Liu and coworkers were able to form an adduct between their N-aluminylene and an NHC, a combination that demonstrated increased reactivity compared to the free aluminylene. They explained this with Density Functional Theory calculations at the M06-2X/def2-SVP level showing that the NHC coordination narrowed the HOMO-LUMO gap by raising the energy of the aluminium lone pair (HOMO). This aluminylene-NHC adduct was then shown to activate otherwise unreactive arene species to initiate ring expansions.
Aluminylene coordination chemistry
Aluminylenes have also demonstrated the ability to act as ligands and coordinate to transition metal centers. Tokitoh demonstrated multiple methods for using dialumene starting materials to create arylaluminylene platinum complexes. NBO calculations showed that the Al-Pt bond involves a large degree of electrostatic interaction, supplemented by sigma donation from the aluminium and pi-backbonding from the platinum.
The N-aluminylene reported by Liu also demonstrated an ability to coordinate to metal atoms. UV irradiation of tungsten hexacarbonyl in the presence of the N-aluminylene created an aluminylene-W(CO)5 compound. Furthermore, treatment of the N-aluminylene with W(CO)6 and Cr(CO)6 in coordinating solvents such as THF and DMAP also formed the aluminylene-transition metal complexes. In these cases, the aluminylene was stabilized by having a THF molecule or two DMAP molecules donate their lone pairs into the aluminylene's empty orbitals. Intrinsic Bond Orbital calculations showed a significant degree of pi-backbonding from the aluminylene in the tungsten and chromium complexes, which added further stabilization.
References
Aluminium(I) compounds
Organoaluminium compounds
Coordination complexes | Aluminylene | [
"Chemistry"
] | 1,588 | [
"Coordination chemistry",
"Functional groups",
"Octet-deficient functional groups",
"Coordination complexes"
] |
75,596,143 | https://en.wikipedia.org/wiki/Minister%20for%20Biosecurity | The Minister for Biosecurity is a minister in the New Zealand Government with the responsibility of managing biosecurity.
The current Minister for Biosecurity is Andrew Hoggard.
History
The portfolio was created after the 1996 general election. Previously, biosecurity matters had been under the purview of the Minister of Agriculture; it was John Falloon, acting in that portfolio, who had been responsible for the passage of the Biosecurity Act 1993. Briefly from 1998 to 1999 and again from 2011 to 2017, the portfolio was consolidated with other primary industries portfolios, first as the Minister for Food, Fibre, Biosecurity and Border Control and latterly as the Minister for Primary Industries.
List of ministers for biosecurity
The following ministers have held the office of Minister for Biosecurity.
Notes
References
Lists of government ministers of New Zealand
Biosecurity | Minister for Biosecurity | [
"Environmental_science"
] | 183 | [
"Toxicology",
"Biosecurity"
] |
75,596,569 | https://en.wikipedia.org/wiki/C14H14N4O3 | {{DISPLAYTITLE:C14H14N4O3}}
The molecular formula C14H14N4O3 may refer to:
Avadomide
Obidoxime | C14H14N4O3 | [
"Chemistry"
] | 40 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
75,597,213 | https://en.wikipedia.org/wiki/HD%2010390 | HD 10390 (HR 490; 51 H. Trianguli) is a solitary star located in the northern constellation Triangulum. It is faintly visible to the naked eye as a bluish-white hued point of light with an apparent magnitude of 5.64. The object is located relatively close at a distance of 292 light-years based on Gaia DR3 parallax measurements, and it is drifting closer with a heliocentric radial velocity of . At its current distance, HD 10390's brightness is diminished by an interstellar extinction of only five-hundredths of a magnitude and it has an absolute magnitude of +1.00.
HD 10390 has a stellar classification of B9 IV-V, indicating that it is a slightly evolved B-type star with a luminosity class intermediate between a subgiant and a main sequence star. Osawa (1959) gave a class of B9 V, instead indicating that it is an ordinary B-type main-sequence star that is generating energy via hydrogen fusion at its core. It has 2.62 times the mass of the Sun and 2.14 times the radius of the Sun. It radiates 51.5 times the luminosity of the Sun from its photosphere at an effective temperature of . HD 10390 is metal-deficient with an iron abundance of [Fe/H] = −0.2, or 63.1% of the Sun's, and it spins modestly with a projected rotational velocity of , well below its breakup velocity of 355 km/s. Despite the first, slightly evolved classification, HD 10390 has only completed 16.8% of its main sequence lifetime at the age of approximately 50 million years.
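The quoted 63.1% figure follows directly from the logarithmic definition of the [Fe/H] index; a short Python sketch (illustrative, not part of the source article) performs the conversion.

```python
def feh_to_fraction(feh):
    """Convert a logarithmic [Fe/H] metallicity index into an iron
    abundance expressed as a fraction of the solar value: 10**[Fe/H]."""
    return 10 ** feh

# [Fe/H] = -0.2 corresponds to roughly 63.1% of the solar iron abundance.
print(round(feh_to_fraction(-0.2) * 100, 1))  # -> 63.1
```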
References
B-type main-sequence stars
Triangulum
BD+34 00297
010390
07943
0490
00061524043 | HD 10390 | [
"Astronomy"
] | 382 | [
"Triangulum",
"Constellations"
] |
75,604,189 | https://en.wikipedia.org/wiki/Zhegalkin%20algebra | In mathematics, Zhegalkin algebra is a set of Boolean functions defined by the nullary operation taking the value $1$, use of the binary operation of conjunction $\wedge$, and use of the binary operation of sum modulo 2, $\oplus$. The constant $0$ is introduced as $0 = 1 \oplus 1$. The negation operation is introduced by the relation $\bar{x} = x \oplus 1$. The disjunction operation follows from the identity $x \vee y = x \oplus y \oplus (x \wedge y)$.
Using Zhegalkin algebra, any perfect disjunctive normal form can be uniquely converted into a Zhegalkin polynomial (via Zhegalkin's theorem).
Basic identities
$\bar{x} = x \oplus 1$,
$x \vee y = x \oplus y \oplus (x \wedge y)$.
Thus, the basis $\{\wedge, \oplus, 1\}$ of Boolean functions is functionally complete.
Its inverse logical basis $\{\vee, \leftrightarrow, 0\}$ is also functionally complete, where $\leftrightarrow$ (logical equivalence) is the inverse of the XOR operation. For the inverse basis, the identities are inverted as well: $1 = 0 \leftrightarrow 0$ yields the constant, $\bar{x} = x \leftrightarrow 0$ yields the negation operation, and $x \wedge y = x \leftrightarrow y \leftrightarrow (x \vee y)$ yields the conjunction operation.
The functional completeness of these two bases follows from the completeness of the basis $\{\vee, \wedge, \neg\}$.
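The operations and identities above can be checked mechanically. The following Python sketch (illustrative, not from the source) verifies the negation and disjunction identities over all Boolean inputs, and computes the Zhegalkin (algebraic normal form) coefficients of a truth table via the standard Möbius (butterfly) transform.

```python
from itertools import product

def neg(x):
    """Negation in the Zhegalkin basis: not-x = x XOR 1."""
    return x ^ 1

def disj(x, y):
    """Disjunction in the Zhegalkin basis: x OR y = x XOR y XOR (x AND y)."""
    return x ^ y ^ (x & y)

# Check both identities against the ordinary Boolean operations.
for x, y in product((0, 1), repeat=2):
    assert neg(x) == int(not x)
    assert disj(x, y) == (x | y)

def zhegalkin_coeffs(truth_table):
    """Möbius (butterfly) transform: truth table -> Zhegalkin/ANF coefficients.

    truth_table[i] holds f(x) where bit k of i is variable x_k; the returned
    list c satisfies f(x) = XOR of c[s] over all bit-subsets s of x.
    """
    c = list(truth_table)
    n = len(c).bit_length() - 1
    for k in range(n):
        for i in range(len(c)):
            if i & (1 << k):
                c[i] ^= c[i ^ (1 << k)]
    return c

# f(x, y) = x OR y has Zhegalkin polynomial x XOR y XOR xy:
print(zhegalkin_coeffs([0, 1, 1, 1]))  # -> [0, 1, 1, 1]
```

The transform runs in O(n·2^n) time for n variables, and applying it twice returns the original truth table, since it is its own inverse over GF(2).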
See also
Zhegalkin polynomial
References
Notes
Further reading
https://encyclopediaofmath.org/wiki/Zhegalkin_algebra
Boolean algebra | Zhegalkin algebra | [
"Mathematics"
] | 236 | [
"Boolean algebra",
"Fields of abstract algebra",
"Mathematical logic"
] |
75,604,392 | https://en.wikipedia.org/wiki/Bismuth%20organometallic%20chemistry | The stabilization of bismuth's +3 oxidation state due to the inert pair effect yields a plethora of organometallic bismuth-transition metal compounds and clusters with interesting electronics and 3D structures.
Catalysts
Due to the inert pair effect in this heavy element, organometallic compounds of Bi(III) show Lewis acid properties, given the lower ability of the 6s electron pair to mix with molecular orbitals and form σ-bonds. The search for non-toxic equivalents of boronic acids to advance the Suzuki-Miyaura carbon-carbon coupling reactions and to expand the scope of carbon-nitrogen and carbon-oxygen coupling ones turned chemists' attention to organometallic bismuth chemistry. Two catalytic mechanisms were proposed for the C-C bond formation catalyzed by bismuth organometallic compounds. The major difference arises from the rate of the oxidative addition of Pd(0) into a C-Bi bond or a C-O one, yielding cycles A and B, respectively (see image).
Compounds with a metal-Bi σ-bond
Among the first representatives of organometallic bismuth chemistry are a series of iron cyclopentadienyl compounds synthesized by Cullen et al. Characteristic of these is a σ Fe-Bi bond; the iron center, bound only to one cyclopentadienyl and to carbon monoxide ligands, would have 17 electrons in its coordination sphere in the absence of the Bi bond.
Adding to this, Huttner et al. described the synthesis of mixed Mn-Bi compounds. Most of the synthetic routes use bismuth trichloride as the bismuth metal source. The first proposed route relied on manganese cyclopentadienyl tricarbonyl as the starting material. A better-yielding route employed the [Cp(CO)2Mn(SiPh3)] anionic species as the manganese metal source. The synthesized [{Cp(CO)2Mn}2BiCl] adduct dimerizes in the solid state.
Bismuth compounds derived from transition metal carbonyl complexes
Compounds derived from various transition metal carbonyl complexes are organometallic representatives with somewhat unusual cyclic structure and electronics. Such a representative is given in the form of the paramagnetic, ten-electron, tetrahedral [Cp2Co][Bi{Co(CO)4}4] complex.
Additionally, clusters like closo-[Bi3Cr2(CO)6]3- and [Bi3Mo2(CO)6]3- have been reported to stabilize the ozone-like structure of [Bi3]3-. The [Bi3]3- species, isostructural and isoelectronic with ozone, can be analyzed independently as a moiety bound to the metal carbonyl complexes. The reported Bi-Bi distance falls in between the single and double bond region and is elongated compared to the Bi=Bi bond in the [Bi4]2- cluster, the latter displaying a bond order of 1.25. This experimental observation is rationalized by some amount of π-donation to the metal carbonyl center and simultaneous π* back-bonding to the bismuth cluster from the metallocene complex.
In 2009, Pearl et al. described the synthesis and isomerization of heterometallic complexes containing bismuth and rhenium. The precursors used in the synthesis were an alkene-coordinated carbonyl rhenium complex and BiPh3. The reaction yields two types of heteronuclear bismuth-rhenium complexes and a homodinuclear rhenium one as a side product. Upon heating, the hexametallic tribismuth-trirhenium heteronuclear complex undergoes isomerization to cis- and trans-clusters containing the bicyclo[3.3.0] core (see scheme below). Under subsequent irradiation, both stereoisomers convert to a common spiro[4.3] cluster compound.
Dibismuth transition metal-clusters
Adding to the transition metal-bismuth carbonyl clusters, dibismuth clusters with transition metals have also been explored by synthetic chemists. The core of such compounds is represented in the form of a dibismuthene or dibismuthyne unit, in which the Bi atoms retain the inert 6s lone pair and, through π-bond donation, are able to coordinate to carbonyl moieties of transition metals.
The common synthetic precursor is the trimethylsilylmethyl-cyclobismuthane. Upon reaction with tungsten pentacarbonyl, the resulting side-on adduct preserved the dibismuthene unit, while reaction with diiron nonacarbonyl yields a tetracyclic heteronuclear iron-bismuth carbonyl compound (see scheme to the right).
The complexity of the dibismuthene complexes ranges from incorporation of cobalt ions, generating the dicapped prismatic cobalt carbonyl structure of [(CO)11Co4Bi2]−, to iron incorporation, yielding a diiron dibismuth tetracyclic moiety side-on capped with a cobalt carbonyl unit. A similar structure was synthesized with tungsten replacing the iron units, this time capped with a bismuth-iron carbonyl-Cp'' unit. Finally, another example comes in the form of a zirconium dicyclopentadienyl unit coordinated side-on to the dibismuth mesitylene moiety (see figure).
Bismuth-containing clusters
Multiple bismuth-containing clusters were reported, some of them synthesized through carbon monoxide ligand loss from the previously reported bismuth complexes. Strained cluster complexes with monodentate as well as bridging carbon monoxide units have also been isolated, such as [{Cp(μ2-CO)Fe}3(μ3-Bi)] and [(μ3-Bi)Co3(CO)6(μ-CO)3].
Spiro-like clusters such as [{Ru2(CO)8}(μ4-Bi){(μ-H)Ru3(CO)10}] and cubane-like ones such as [Bi4Co*4] are representatives as well. The former displays a tetracoordinate bismuth metallic center along with a dicoordinated hydride ligand. The structure of the latter is cubic, with the edges alternating bismuth and cobalt metallic centers.
"Paddlewheel" complexes
Inspired by the dirhodium tetraacetate bimetallic salt, synthetic chemists decided to explore the synthesis of paddlewheel mixed heteronuclear bismuth-rhodium salts. The synthesis involves treatment of the [Rh2(O2CR)4] salt with the dibismuth tetrakis(trifluoroacetate) equivalent [Bi2(O2CCF3)4]. Depending on the nature and sterics of the R ligand, the resulting mixed salt has either two tBu R-substituents, resulting in the cis mixed salt, or a single Me R-substituent originating from the dirhodium precursor (see scheme to the right). The mixed salts display increased air and moisture stability compared to the parent dimetallic salts and show Lewis acidity at the rhodium center.
See also
Organobismuth chemistry
Bismuth compounds
References
Bismuth compounds
Organometallic chemistry | Bismuth organometallic chemistry | [
"Chemistry"
] | 1,568 | [
"Organometallic chemistry"
] |
75,605,736 | https://en.wikipedia.org/wiki/Dinitrogen%20complexes%20of%20main-group%20elements | While the first dinitrogen complex was discovered in 1965, reports of dinitrogen complexes of main group elements have been significantly limited relative to their transition metal complex analogues. Examples span both the s- and p-blocks, with particular breakthroughs in Groups 1, 2, 13, 14, and 15 of the periodic table. These complexes tend to involve somewhat weak interactions between N2 and the main group atoms it binds. The formation of such compounds is of interest to chemists who seek to extend transition metal reactivity into the main group elements, and especially those interested in main group-mediated N2 activation.
Examples
One quintessential dinitrogen complex of a main group element is Gernot Frenking's triphenylphosphinazine, first reported in 2013 in Angewandte Communications. This compound was notable for demonstrating the double Lewis acid behavior of dinitrogen, as the publication describes the N2 moiety in the doubly excited 1Γg state with four lone pairs on the N–N fragment. The authors concluded that this electronic configuration renders dinitrogen a very strong Lewis acid given its electronic sextet as well as its relative electronegativity. Thus, the Lewis acidity of the N2 fragment strengthens the Ph3P→N2←PPh3 attraction, making triphenylphosphinazine kinetically stable despite its thermodynamic instability. Indeed, Frenking et al. calculated the energy for the dissociation of N2(PPh3)2 to N2 + 2 PPh3 at RI-BP86/def2-TZVPP and found, with corrections for thermal and entropic contributions, a Gibbs free energy of −74.5 kcal mol−1. Meanwhile, Wilson et al.'s MP2/TZVP//B3LYP/TZVP value was slightly larger in magnitude at −87.8 kcal mol−1, but in either computation method, triphenylphosphinazine is thermodynamically unstable, with a strongly exergonic dissociation reaction. Thus, the kinetic contributions of the electronic structure of this compound are striking. Its isolation demonstrates that compounds whose dissociations would otherwise be strongly exergonic become isolable provided sufficient stabilization of their electronic structures. In other words, very strong donor-acceptor interactions may be sufficiently stabilizing to enable the isolation of compounds with very large heats of formation. The authors of this paper performed an EDA-NOCV analysis (Energy Decomposition Analysis-Natural Orbitals for Chemical Valence) to gain further information on the electrostatic interactions in this complex and found that, coupled with NBO analysis, this technique revealed that the P-N bonding in triphenylphosphinazine is more a function of P → N σ donation than it is N → P π back-donation.
As such, the authors proposed a representation of this molecule consisting of dative P-N bonding. This is consistent with the partial charge of -1.73 on the N2 fragment calculated by NBO analysis, which also identified two lone pairs at each N atom and a single bond between the N atoms, supporting the Ph3P→N←PPh3 representation.
p-block
Complexes of dinitrogen in the p-block tend to be rather weakly coordinated. One such notable example in the realm of dinitrogen complexes of main group elements are those formed with main group radicals. In 2011, it was reported that paramagnetic main group compounds can form complexes with dinitrogen; the Sn(Hyp)3 radical (where Hyp = Si(SiMe3)3) was found to form a complex with weak van der Waals interactions with N2 detectable via electron paramagnetic resonance (EPR) and hyperfine sublevel correlation spectroscopy (HYSCORE). The van der Waals complex features transfer of unpaired electron spin density from Sn to N2 and is among the first examples of a dinitrogen complex to a large radical species in solution.
Another useful example of a dinitrogen complex to a p-block main group element is Ga-N2. Himmel et al. used matrix isolation experiments to spectroscopically probe the interactions between Ga and N2 in this species. The authors found that the bond between Ga and N relies on donation from the filled p orbital on the N atom into the empty p orbital on Ga; this was consistent with indications in the UV/Vis and Raman spectra that the complex's 2S excited state features a stronger Ga-N2 bond than its 2P ground state, as the excited state has a stronger σ interaction due to the removal of the unpaired electron from the p orbital. Spectroscopic data also allowed the authors to calculate a bond energy of 79 kJ mol−1 for the Ga-N2 complex. In terms of Group 13 dinitrogen complexes more generally, Himmel et al. found that the interactions between Group 13 metals and N2 are likely to be weak, as various experiments have demonstrated that N2 dissociates from the adduct at high temperatures. Interestingly, variations in pressure at constant temperature do not impact the decomposition rate with respect to N2.
s-block
Dinitrogen complexes have also been reported with main group elements in the s-block. In 1971, Andrews et al. reported the synthesis of two lithium dinitrogen complexes via simultaneous deposition of samples of nitrogen gas and lithium atomic beams onto a cesium iodide window at 15K. The N-enriched matrices were recovered via recondensation in liquid helium. The deposited samples were monitored via infrared spectroscopy, allowing the authors to observe two new absorptions in the matrix of lithium and nitrogen atoms. The resulting IR spectra also showed shifts at 1800 and 1535 cm−1, corresponding to nitrogen-nitrogen vibrations. Two new dinitrogen complexes of lithium were thus reported: LiN2 and LiN2N2, lithium supernitride and lithium disupernitride, respectively.
Further work with lithium involved reaction of metallic lithium with ethylene and N2 under an inert atmosphere yielding the Li(C2H4)(N2) complex, in which N2 is only weakly coordinated, as well as Li+N2−, whose formation ethylene catalyzes. In 1986, Andrews et al. synthesized and characterized both kinds of products spectroscopically.
While most main group complexes of dinitrogen involve end-on binding, in 2020, a collaboration between Mingfei Zhou and Gernot Frenking saw the first reported covalently bonded side-on N2 adducts of a main group element, with NNBe(η2-N2) and (NN)2Be(η2-N2). Pulsed laser evaporated beryllium atoms were allowed to react with N2 in neon at 4 K, allowing these collaborators to identify various beryllium dinitrogen products via infrared absorption spectroscopy. They further investigated isomers of Be(NN)n with n=2 or 3 using computational studies involving DFT at the M06-2X-D3/cc-pVTZ level and calculations at the CCSD(T)-Full/aug-cc-pVQZ level, which identified NNBe(η2-N2) and (NN)2Be(η2-N2) as the most energetically favorable isomers. Energy decomposition analysis (EDA) was used to confirm the characterization of these species as side-on N2 adducts as opposed to cyclic metalladiazirines governed by (NN)nBe→ η2-N2 π back-donation, a determination which was further supported by the authors' QTAIM analysis. The reported Laplacian contour maps of these species displayed bond critical points and regions of local charge concentration pointing from the Be atoms to the η2-N2 ligands, hence the classification of these species as π-bonded.
Subsequent computational work by Rovaletti and coworkers highlighted the relevance of side-on bonding of dinitrogen to alkaline earth metals in that Ca(I) can bind dinitrogen in a side-on manner, but Mg(I) cannot bind dinitrogen because the N2 would be inserted end-on in the most stable conformation, which would have a triplet ground state. Molecular orbital analysis confirmed the energetic favorability of N2 binding to Ca(I) over Mg(I), the latter of which has not yet been experimentally reported to have any activity toward N2.
Later alkaline earth metals have received growing attention for their potential to mimic transition metal reactivity with respect to dinitrogen in an effort to study the N2 analogues of eight-coordinate metal carbonyl complexes of calcium, strontium, and barium. A 2020 paper reported DFT calculations indicating that cubic alkaline earth complexes of N2 and CO may share similar activation ligand activation capabilities, though such reactivity remains to be demonstrated experimentally.
References
Nitrogen compounds
Coordination complexes
1965 in science | Dinitrogen complexes of main-group elements | [
"Chemistry"
] | 1,906 | [
"Coordination chemistry",
"Coordination complexes"
] |
69,695,925 | https://en.wikipedia.org/wiki/Black%20hole%20greybody%20factors | Black hole greybody factors are functions of frequency and angular momentum that characterize the deviation of the emission spectrum of a black hole from a pure black-body spectrum. As a result of quantum effects, an isolated black hole emits radiation that, at the black-hole horizon, matches the radiation from a perfect black body. However, this radiation is scattered by the geometry of the black hole itself. Stated more intuitively, the particles emitted by the black hole are subject to the gravitational attraction of the black hole, and so some of them fall back into the black hole. As a result, the actual spectrum measured by an asymptotic observer deviates from a black-body spectrum. This deviation is captured by the greybody factors. The name "greybody" is simply meant to indicate the difference of the spectrum of a black hole from that of a pure black body.
The greybody factors can be computed by a classical scattering computation of a wave-packet off the black hole.
Mathematical definition
The rate at which a black hole emits particles with energy between $E$ and $E + dE$ and with angular momentum quantum numbers $(\ell, m)$ is given by
$$\frac{dN_{\ell m}}{dt} = \frac{\sigma_{\ell m}(E)}{e^{E/kT} \mp 1}\,\frac{dE}{2\pi},$$
where k is the Boltzmann constant and T is the Hawking temperature of the black hole (in units with $\hbar = c = 1$). The constant in the denominator is $-1$ for bosons and $+1$ for fermions. The factors $\sigma_{\ell m}(E)$ are called the greybody factors of the black hole. For a charged black hole, these factors may also depend on the charge of the emitted particles.
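As a sketch of how the greybody factor enters the spectrum, the toy function below evaluates the per-mode emission rate in units with ħ = c = 1, for a user-supplied transmission probability (the function and argument names are illustrative, not from any library):

```python
import math

def emission_rate(E, T, greybody, statistics="boson", k=1.0):
    """Toy per-mode emission rate: greybody / (exp(E/kT) -/+ 1).

    greybody -- transmission probability for this mode, between 0 and 1;
    the denominator uses -1 for bosons (Planck) and +1 for fermions.
    """
    sign = -1.0 if statistics == "boson" else 1.0
    return greybody / (math.exp(E / (k * T)) + sign)

# With greybody = 1 (a perfect absorber) the pure Planck/Fermi factors return:
rate_b = emission_rate(1.0, 1.0, 1.0, "boson")    # 1 / (e - 1)
rate_f = emission_rate(1.0, 1.0, 1.0, "fermion")  # 1 / (e + 1)
```

A frequency-dependent greybody factor would be passed in as `greybody`, suppressing the low-frequency part of the spectrum relative to a pure black body.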
References
Black holes
Astrophysics | Black hole greybody factors | [
"Physics",
"Astronomy"
] | 301 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Unsolved problems in physics",
"Astronomy stubs",
"Astrophysics",
"Stellar astronomy stubs",
"Astrophysics stubs",
"Density",
"Relativity stubs",
"Theory of relativity",
"Stellar phenomena",
"Astronomical objects",
"Astronomical ... |
71,275,203 | https://en.wikipedia.org/wiki/Webb%27s%20First%20Deep%20Field | Webb's First Deep Field is the first operational image taken by the James Webb Space Telescope (JWST). The deep-field photograph, which covers a tiny area of sky visible from the Southern Hemisphere, is centered on SMACS 0723, a galaxy cluster in the constellation of Volans. Thousands of galaxies are visible in the image, some as old as 13 billion years. It is the highest-resolution image of the early universe ever taken. Captured by the telescope's Near-Infrared Camera (NIRCam), the image was revealed to the public by NASA on 11 July 2022.
Background
The James Webb Space Telescope is a space telescope operated by NASA and designed primarily to conduct infrared astronomy. Launched in December 2021, the spacecraft has been in a halo orbit around the second Sun–Earth Lagrange point (L2), about 1.5 million kilometres from Earth, since January 2022. At L2, the gravitational pull of the Sun combines with the gravitational pull of the Earth to produce an orbital period that matches Earth's, and the Earth and Sun remain co-aligned (as seen from that point) as the Earth and the spacecraft orbit the Sun together.
Webb's First Deep Field was taken by the telescope's Near-Infrared Camera (NIRCam) and is a composite produced from images at different wavelengths, totalling 12.5 hours of exposure time.
SMACS 0723 is a galaxy cluster visible from Earth's Southern Hemisphere, and has often been examined by Hubble and other telescopes in search of the deep past.
Scientific results
The image shows the galaxy cluster SMACS 0723 as it appeared 4.6 billion years ago, covering an area of sky with an angular size approximately equal to a grain of sand held at arm's length. Many of the objects in the image have undergone notable redshift due to the expansion of space over the extreme distance traveled by the light radiating from them. The redshifts of nearly 200 of these objects have been measured to date, with the highest redshift measured at 8.498.
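The wavelength stretch implied by a given redshift can be computed directly; in the minimal sketch below, the rest-frame Lyman-alpha wavelength (121.6 nm) is an illustrative choice, while z = 8.498 is the value quoted above:

```python
def observed_wavelength(rest_nm, z):
    """Cosmological redshift stretches wavelengths by a factor of (1 + z)."""
    return rest_nm * (1.0 + z)

# Rest-frame Lyman-alpha (121.6 nm) emitted at z = 8.498 arrives at ~1155 nm,
# well inside NIRCam's near-infrared range:
lam = observed_wavelength(121.6, 8.498)
```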
The combined mass of the galaxy cluster acts as a gravitational lens, magnifying and distorting the images of much more distant galaxies behind it. Webb's NIRCam brought the distant galaxies into sharp focus, revealing tiny, faint structures that had never been seen before, including star clusters and diffuse features.
Diffraction spikes in the photo
The six bright and two fainter spikes around the point sources of light in the photo are an artifact created by the physical limitations of the telescope. The six bright spikes are a result of diffraction from the mirror's edges. The mirror is composed of 18 individual units, each having the shape of a regular hexagon. The hexagonal rim of the units that make up the telescope's large mirror give rise to the six spikes. Telescopes with circular mirrors or lenses do not produce such spikes; instead, diffraction from a circular rim creates a pattern of concentric rings known as the Airy pattern.
The two additional spikes are a result of diffraction from the struts holding the telescope's secondary mirror in front of the main mirror. As shown in the figure on the right, diffraction from the three struts creates six spikes, but four of these are designed to co-align with the spikes created from the diffraction caused by the rim. This leaves the two faint horizontal spikes visible in the photo.
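The counting in the two paragraphs above can be made concrete. In the sketch below the spike angles are schematic, chosen only to reproduce the described geometry (three edge orientations giving six mirror spikes, three struts giving six more, four of which are designed to coincide):

```python
# Spike directions in degrees; each edge or strut orientation contributes a
# perpendicular pair of spikes. Angles are schematic, not measured JWST values.
mirror_spikes = {i * 60 for i in range(6)}    # hexagonal segment edges -> 6 spikes
strut_spikes = {60, 240, 120, 300, 90, 270}   # three struts -> 6 spikes

extra = strut_spikes - mirror_spikes          # only the non-aligned pair survives
total = len(mirror_spikes | strut_spikes)     # 6 bright + 2 faint = 8 spikes
```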
Significance
Deepest image of the Universe
On 11 July 2022, JWST delivered the deepest sharp infrared image of the universe to date. Webb's First Deep Field is the first full false-color image from the JWST, and the highest-resolution infrared view of the universe yet captured. The image reveals thousands of galaxies in a tiny sliver of the universe, with Webb's sharp near-infrared view bringing out faint structures in extremely distant galaxies, offering the most detailed view of the early universe to date. Thousands of galaxies, which include the faintest objects ever observed in the infrared, have appeared in Webb's view for the first time.
It was first revealed to the public during an event on 11 July 2022 by U.S. President Joe Biden.
Comparison with the Hubble Space Telescope
The following images are a comparison with the image taken by the Hubble Space Telescope and the image taken by Webb of the same galaxy cluster.
See also
List of deep fields
References
James Webb Space Telescope
Physical cosmology
Sky regions
Astronomy image articles
2022 in spaceflight
2020s photographs
Color photographs
2022 works | Webb's First Deep Field | [
"Physics",
"Astronomy"
] | 938 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"James Webb Space Telescope",
"Works about astronomy",
"Astrophysics",
"Astronomy image articles",
"Space telescopes",
"Sky regions",
"Physical cosmology"
] |
71,279,138 | https://en.wikipedia.org/wiki/Dov%20Levine | Dov I. Levine (דב לוין, born July 19, 1958) is an American-Israeli physicist, known for his research on quasicrystals, soft condensed matter physics (including granular materials, emulsions, and foams), and statistical mechanics out of equilibrium.
Education and career
The son of a professor of physical chemistry, Dov Levine grew up in New York. He graduated in 1979 with a B.S. from Stony Brook University and in 1986 with a Ph.D. in physics from the University of Pennsylvania. His Ph.D. thesis Quasicrystals: A New Class of Ordered Structure was supervised by Paul Steinhardt.
In 1981, Levine and Steinhardt began developing their theory of a hypothetical new form of matter with icosahedral symmetry (or other forbidden symmetries) that violated the century-old laws of crystallography. The idea, motivated by their study of Penrose tilings, was to consider atomic arrangements that are quasiperiodic rather than periodic. They introduced the term quasicrystals, short for quasiperiodic crystal, to describe the idea. Independently, in April 1982, while studying an aluminum-manganese alloy, Al6Mn, Dan Shechtman made a scientific observation, published in 1984, of "a metallic solid which diffracts electrons like a single crystal but has a point group symmetry (icosahedral) that is inconsistent with lattice translations." When Levine and Steinhardt were shown a preprint, they recognized the diffraction pattern as matching their prediction for an icosahedral quasicrystal and, hence, published their theory and proposed that explanation.
Levine was from 1986 to 1988 a postdoctoral member of UCSB's ITP (now known as KITP) and from 1988 to 1989 a visiting scientist at the Weizmann Institute. He was from 1988 to 1991 an assistant professor at the University of Florida. In 1990 he joined the physics department of the Technion, where he is now a professor of physics. For the academic year 1997–1998 he was a visiting member of UCSB's ITP.
In 2020 he published, with Shankar Ghosh and five other colleagues, research on the development of rechargeable N95 masks.
Awards and honors
National Science Foundation Presidential Young Investigator Award
Alon Fellowship at Tel Aviv University
Minoru and Ethel Tsutsui Distinguished Graduate Research Award from the New York Academy of Sciences.
With Paul Steinhardt and Alan Mackay, the Oliver E. Buckley Condensed Matter Prize
2021 Fellow of the American Physical Society.
See also
Biham–Middleton–Levine traffic model
References
1958 births
Living people
Stony Brook University alumni
University of Pennsylvania alumni
University of Florida faculty
Academic staff of Technion – Israel Institute of Technology
Condensed matter physicists
Israeli materials scientists
Israeli physicists
Jewish American physicists
Oliver E. Buckley Condensed Matter Prize winners
Fellows of the American Physical Society
Quasicrystals
Scientists from New York City
American physicists | Dov Levine | [
"Physics",
"Chemistry",
"Materials_science"
] | 643 | [
"Tessellation",
"Crystallography",
"Quasicrystals",
"Symmetry"
] |
68,407,128 | https://en.wikipedia.org/wiki/OrthoFinder | OrthoFinder is a command-line software tool for comparative genomics. OrthoFinder determines the correspondence between genes in different organisms (also known as orthology analysis). This correspondence provides a framework for understanding the evolution of life on Earth, and enables the extrapolation and transfer of biological knowledge between organisms.
OrthoFinder takes FASTA files of protein sequences as input (one per species) and as output provides:
Orthogroups
Rooted Phylogenetic trees of all orthogroups
A rooted species tree for the set of species included in the input dataset
Hierarchical orthogroups for each node in the species tree
Orthologs between all species
Gene duplication events mapped to branches in the species tree
Comparative genomic statistics
As of August 2021, the tool has been referenced by more than 1500 published studies.
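As a command-line tool, a typical invocation is minimal. The sketch below assumes OrthoFinder is installed and on the PATH, and that `proteomes/` (an illustrative directory name) contains one protein FASTA file per species:

```shell
# Run the full OrthoFinder analysis on a directory of proteomes,
# one FASTA file of protein sequences per species:
orthofinder -f proteomes/

# Outputs (orthogroups, rooted gene and species trees, orthologs,
# duplication events, comparative statistics) are written to a
# dated results directory created inside the input directory.
```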
See also
Bioinformatics
Homology (biology)
Sequence homology
Protein family
Sequence clustering
References
Evolutionary biology
Bioinformatics software
Phylogenetics | OrthoFinder | [
"Biology"
] | 205 | [
"Evolutionary biology",
"Bioinformatics software",
"Taxonomy (biology)",
"Bioinformatics",
"Phylogenetics"
] |
68,408,039 | https://en.wikipedia.org/wiki/Shit%20flow%20diagram | A shit flow diagram (also called excreta flow diagram or SFD) is a high level technical drawing used to display how excreta moves through a location, and functions as a tool to identify where improvements are needed. The diagram has a particular focus on treatment of the waste, and its final disposal or use. SFDs are most often used in developing countries.
Development
In 2012–2013, the World Bank's Water and Sanitation Program sponsored a study on the fecal sludge management of twelve cities with the goal of developing tools for better understanding the flow of excreta through the cities. As a result, Isabel Blackett, Peter Hawkins, and Christiaan Heymans authored The missing link in sanitation service delivery: a review of fecal sludge management in 12 cities. Using this as a basis, a group of excreta management institutions began collaborating in June 2014 to continue development of SFDs.
In November 2014, the SFD Promotion Initiative was started with funding from the Bill & Melinda Gates Foundation. Initially funded as a one year project, it was extended in 2015. In September 2019, the focus of the program shifted to scaling up the current methods of producing SFDs to allow for citywide sanitation in South Asia and Africa. As of 2021 more than 240 shit flow diagram reports have been published. The initiative is managed as part of the Sustainable Sanitation Alliance and is supported by the Bill and Melinda Gates Foundation. It is partnered with many nonprofit organizations such as the Centre for Science and Environment, Eawag, and the Global Water Security & Sanitation Partnership.
Use in developing countries
The great majority of those living in urban areas, especially the poor, use non-sewer sanitation systems. This poses environmental and health challenges for growing urban areas in developing countries, and many of these countries will need to change their sanitation strategies as their population grows. Using a shit flow diagram allows political leaders and members of the community to see at a glance the challenges facing their sanitation systems, and where improvements will be most effective. The simplified nature of the diagram allows for easier dialog about local excreta management. Over 140 cities in the developing world have had SFDs prepared and published, many by nonprofit organizations. They are then used to identify where resources should be focused.
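The arithmetic behind an SFD is simple flow accounting: every pathway excreta can take is assigned a share of the total, and the shares ending in safe treatment or containment are summed. The sketch below uses entirely made-up flows for a hypothetical city (the category names and percentages are illustrative only):

```python
# Made-up flows for a hypothetical city, each as a percentage of all excreta
# generated; an SFD renders "safe" flows in green and "unsafe" flows in red.
flows = {
    "contained, emptied, and treated": 22,
    "sewered and delivered to treatment": 18,
    "contained, never emptied": 15,
    "sewered, not treated": 20,
    "emptied, not treated": 15,
    "open defecation": 10,
}
safe = ["contained, emptied, and treated",
        "sewered and delivered to treatment",
        "contained, never emptied"]

safely_managed = sum(flows[k] for k in safe)  # 55% safely managed in this sketch
assert sum(flows.values()) == 100             # every flow must be accounted for
```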
References
Biological waste
Biodegradable waste management
Sanitation
Sewerage
Excretion
Human physiology
Diagrams | Shit flow diagram | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 476 | [
"Biodegradable waste management",
"Excretion",
"Water pollution",
"Biodegradation",
"Sewerage",
"nan",
"Environmental engineering"
] |
68,408,273 | https://en.wikipedia.org/wiki/Nuclearite | Nuclearites are hypothetical objects consisting of nuggets of strange quark matter or a strangelet surrounded by an electron shell, forming an atom-like neutral system, but with masses much larger than a normal atom. These heavy compact particles were first proposed by E. Witten, and the name coined by A. De Rujula and S. L. Glashow to describe such particles colliding with the Earth's atmosphere, by analogy to more conventional meteorites. It is predicted that nuclearites would travel at hundreds of kilometers per second. Owing to their high energies and mass to size ratio, they should form streaks of light in the lower atmospheric regions. To date, no nuclearites have been successfully observed, but this failure itself places constraints on some theories of dark matter.
Properties of nuclearites
The strangelet forms what is called a nuclearite core, composed of up, down, and strange quarks in almost equal proportions. Nuclearites are estimated to have masses between 0.1 and 100 kg. Additionally, they are predicted to be more stable than particles composed solely of up and down quarks. Nuclearites are expected to have a constant matter density. Hypothesized sources of these particles include relics of the early universe or Big Bang, as well as extremely energetic astrophysical phenomena such as the merger of two quark stars.
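The quoted masses and speeds imply enormous kinetic energies, which is why bright atmospheric light streaks are expected. A back-of-envelope sketch (the 250 km/s speed is an assumed typical value within the "hundreds of km/s" range above):

```python
def kinetic_energy_joules(mass_kg, speed_km_s):
    """Classical kinetic energy; hundreds of km/s is safely non-relativistic."""
    v = speed_km_s * 1e3  # convert km/s to m/s
    return 0.5 * mass_kg * v**2

# A 1 kg nuclearite at an assumed 250 km/s:
E = kinetic_energy_joules(1.0, 250.0)  # 3.125e10 J, roughly 7 tons of TNT
```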
Experimental techniques for detection
Nuclearites should in principle be detectable based on their interaction with the Earth's atmosphere, with neutrino telescopes, and in collider experiments. In particular, neutrino telescopes such as ANTARES or IceCube are possible detectors for nuclearites.
See also
Strangelet
Cosmic rays
References
Exotic matter
Hypothetical objects | Nuclearite | [
"Physics"
] | 348 | [
"Hypotheses in physics",
"Theoretical physics",
"Particle physics",
"Exotic matter",
"Particle physics stubs",
"Matter"
] |
68,411,455 | https://en.wikipedia.org/wiki/Time%20in%20Rwanda | Time in Rwanda is given by a single time zone, officially denoted as Central Africa Time (CAT; UTC+02:00). Rwanda has never observed daylight saving time.
IANA time zone database
In the IANA time zone database, Rwanda is given one zone in the file zone.tab – Africa/Kigali – listed under "RW", the country's ISO 3166-1 alpha-2 code.
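The zone can be queried programmatically. A minimal sketch using Python's standard `zoneinfo` module, which reads the IANA database (the dates are chosen arbitrarily):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # backed by the IANA time zone database

kigali = ZoneInfo("Africa/Kigali")

# CAT is UTC+02:00 year-round; January and July offsets match (no DST):
jan = datetime(2024, 1, 15, 12, 0, tzinfo=kigali).utcoffset()
jul = datetime(2024, 7, 15, 12, 0, tzinfo=kigali).utcoffset()
assert jan == jul == timedelta(hours=2)
```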
See also
List of time zones by country
List of UTC time offsets
UTC+02:00 (other countries in the same time zone as Rwanda)
References
External links
Current time in Rwanda at Time.is
Time in Rwanda at TimeAndDate.com
Time by country
Geography of Rwanda
Time in Africa | Time in Rwanda | [
"Physics"
] | 172 | [
"Spacetime",
"Physical quantities",
"Time",
"Time by country"
] |
68,413,557 | https://en.wikipedia.org/wiki/Marina%20Guenza | Marina Guenza is an Italian theoretical physical chemist who studies the fluid dynamics of macromolecules. She is a professor of chemistry and biochemistry at the University of Oregon.
Education and career
Guenza earned a master's degree at the University of Genoa in 1985, and completed her Ph.D. in 1989 through a consortium of the University of Genoa, University of Turin, and University of Pavia.
Formerly a tenured researcher for the National Research Council (Italy), she moved to the University of Oregon as an assistant professor in 2002, earned tenure as an associate professor in 2006, and became full professor in 2012.
Recognition
In 2011, Guenza was named a Fellow of the American Physical Society (APS), after a nomination from the APS Division of Polymer Physics, "for significant contributions to the field of polymer physics through the development of theoretical methods to study macromolecular structure and dynamics". She became a Fellow of the American Association for the Advancement of Science in 2018.
References
External links
The Guenza Lab
Year of birth missing (living people)
Living people
21st-century Italian chemists
Italian women chemists
Fluid dynamicists
University of Genoa alumni
Fellows of the American Physical Society
Fellows of the American Association for the Advancement of Science | Marina Guenza | [
"Chemistry"
] | 257 | [
"Fluid dynamicists",
"Fluid dynamics"
] |
68,414,750 | https://en.wikipedia.org/wiki/Chet%20Moritz | Chet T. Moritz is an American neural engineer, neuroscientist, physiologist, and academic researcher. He is a Professor of Electrical and Computer Engineering, and holds joint appointments in the School of Medicine departments of Rehabilitation Medicine, and Physiology & Biophysics at the University of Washington.
Moritz's research is focused on neurotechnology including stimulation to restore function after brain and spinal cord injury. His work also includes brain-computer interfaces to control muscle and spinal stimulation. His discoveries have been featured in Nature, MSNBC national news, Wired, Popular Mechanics and local TV news and community outreach videos. He has also been quoted in the New York Times, Newsweek, Scientific American, Forbes, and Science News, and in a news story by Nature.
Education
Moritz graduated with a bachelor's degree in Zoology from the University of Washington in 1998. He then enrolled at the University of California, Berkeley, and earned his Doctoral Degree in Integrative Biology in 2003. From 2003 till 2004, he served as a Postdoctoral Fellow of Integrative Physiology at the University of Colorado, and subsequently rejoined the University of Washington as a Senior Fellow.
Career
Following his Postdoctoral fellowship, Moritz joined the faculty at the University of Washington as a Research Assistant Professor in the Department of Physiology & Biophysics in 2009, and was promoted to Assistant Professor of Rehabilitation Medicine in 2010. Along with this appointment, he held secondary appointments as assistant professor in the Department of Physiology and Biophysics. He was promoted to Associate Professor in 2014, and later joined the Department of Electrical & Computer Engineering in 2018. Since 2010, he has been a member of the Graduate Faculty, and a mentor for the Neuroscience Graduate Program.
Research
Moritz has worked in the area of neurotechnology, neuromodulation, brain-computer interfaces, and home rehabilitation physical therapy.
Brain computer interfaces
Moritz conducted a study in 2008 demonstrating that a brain-computer interface can be used to control stimulation of paralyzed muscles and restore movement. This has spawned several successful human trials of this concept in people with spinal cord injury. With Alik Widge, Moritz also demonstrated that cognitive areas of the pre-frontal cortex could be used to control limbic stimulation, paving the way for psychiatric neuroprostheses and an allowed patent. With David Bjanes, Moritz demonstrated a new way to provide sensory feedback directly to the brain.
Neurotechnology
Moritz's team demonstrated that stimulation of the spinal cord could lead to lasting improvements in hand and arm function that persisted beyond stimulation. This demonstration of 'engineered neuroplasticity' paved the way for human trials of spinal cord stimulation. Recent studies by Moritz and Fatma Inanici indicate that non-invasive transcutaneous electrical stimulation of spinal networks is very effective in restoring movement and function of the hands and arms for people with both complete paralysis and long-term spinal cord injury. This work led directly to a multi-site clinical trial with ONWARD Medical, for which Moritz serves as one of two co-PIs. Parallel work is also exploring optogenetic stimulation of the spinal cord with collaborators Polina Anikeeva and Sarah Mondello.
Motor unit physiology and biomechanics
In his studies of motor unit physiology, Moritz focused on experimentally measured force variability across a wide range of forces to improve the ability of a motor unit model to predict steadiness in the hand. He also published a paper in 2004 demonstrating the contributions of feed-forward anticipation and neuro-mechanical reaction when humans encounter surprise, expected, and random changes from a soft elastic surface to a hard surface underfoot. Furthermore, he studied implications regarding muscle pre-stretch and elastic energy storage in locomotion.
Home rehabilitation
Moritz and colleagues demonstrated that surface electromyography (sEMG) can be used to control a therapy video game using activation of weak or spastic muscles. Termed NeuroGame Therapy (NGT), the approach showed improved wrist control in children with cerebral palsy (CP) and was also tested in older adults following stroke.
Awards and honors
2003 - President's Award, American Society of Biomechanics
2009 - EUREKA Award, National Institutes of Health
2012 - Young Faculty Award, Defense Advanced Research Projects Agency (DARPA)
2013 -2018 - Allen Distinguished Investigator, Paul G. Allen Family Foundation
2015 -2018 - International Research Consortium on Spinal Cord Injury, Christopher and Dana Reeve Foundation
2020 - Weill Neurohub Investigator, Weill Neurohub at UCSF, Berkeley and U. Washington
Bibliography
Moritz, C. T., Barry, B. K., Pascoe, M. A., & Enoka, R. M. (2005). Discharge rate variability influences the variation in force fluctuations across the working range of a hand muscle. Journal of Neurophysiology, 93(5), 2449–2459.
Moritz, C. T., Perlmutter, S. I., & Fetz, E. E. (2008). Direct control of paralysed muscles by cortical neurons. Nature, 456(7222), 639–642.
Kasten, M. R., Sunshine, M. D., & Moritz, C. T. (2012). Cervical intraspinal microstimulation improves forelimb motor recovery after spinal contusion injury. International Functional Electrical Stimulation Society.
Widge, A. S., & Moritz, C. T. (2014). Pre-frontal control of closed-loop limbic neurostimulation by rodents using a brain–computer interface. Journal of neural engineering, 11(2), 024001.
Inanici, F., Samejima, S., Gad, P., Edgerton, V. R., Hofstetter, C. P., & Moritz, C. T. (2018). Transcutaneous electrical spinal stimulation promotes long-term recovery of upper extremity function in chronic tetraplegia. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 26(6), 1272–1278.
Bjånes, D. A., & Moritz, C. T. (2019). A robust encoding scheme for delivering artificial sensory information via direct brain stimulation. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 27(10), 1994–2004.
Inanici, F., Brighton, L. N., Samejima, S., Hofstetter, C. P., & Moritz, C. T. (2021). Transcutaneous spinal cord stimulation restores hand and arm function after spinal cord injury. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 29, 310–319.
Samejima, S., Khorasani, A., Ranganathan, V., Nakahara, J., Tolley, N. M., Boissenin, A., ... & Moritz, C. T. (2021). Brain-Computer-Spinal Interface Restores Upper Limb Function After Spinal Cord Injury. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 29, 1233–1242.
References
Neural engineering
American neuroscientists
Physiologists
Electrical and computer engineering
Year of birth missing (living people)
Living people
University of Washington faculty | Chet Moritz | [
"Engineering"
] | 1,495 | [
"Electrical and computer engineering"
] |
78,573,730 | https://en.wikipedia.org/wiki/LCD%20manufacturing | LCD manufacturing is the process of making liquid crystal display (LCD) panels. It involves using glass and silicon substrates. Photolithography is used to pattern the substrates, and liquid crystal materials are added. In the case of a color TFT LCD, color filters are patterned in layers to make red, green, and blue pixels.
Liquid crystal displays are manufactured in cleanrooms, borrowing techniques from semiconductor device manufacturing.
Process
A class of photolithography known as display lithography is used to etch patterns into substrates.
LCD manufacturing shares some of the process with OLED manufacturing.
The process flow involves multiple separate components that are joined together: a process for making a thin-film transistor (TFT) backplane, a process for making color filters, and a liquid crystal cell process.
Large-scale chemical vapor deposition (CVD) systems have been used in the manufacture of LCDs.
Once LCD panels are manufactured, they can be measured for color quality and panel uniformity using characterization equipment.
TFT backplane process
TFT backplanes are made using photolithography techniques, which involve using photomasks. The photomask(s) are used to create TFTs on a substrate, which involves formation of a gate layer, source/drain layer formation, and contact-hole formation.
The TFT backplane process involves patterning of indium tin oxide (ITO), which is a transparent and electrically conductive material.
Conventional LCDs use a back-channel etched (BCE) TFT display pixel structure.
Liquid crystal cell process
The cell process involves layer alignment, sealant formation, and depositing liquid crystal. The panels are then bonded and cut into individual displays.
A technique that can be used is one drop fill (ODF).
UV photocuring equipment can be used for bonding LCD panels.
Modules
An LCD module (LCM) is a ready-to-use LCD with a backlight. Thus, a factory that makes LCD modules does not necessarily make LCDs, it may only assemble them into the modules.
An LCD panel is attached to a driver board using anisotropic conductive film.
Generations
LCDs are manufactured using large sheets of glass whose size has increased over time. Several displays are manufactured at the same time, and then cut from the sheet of glass, also known as the mother glass or LCD glass substrate. The increase in size allows more displays or larger displays to be made, just like with increasing wafer sizes in semiconductor manufacturing. The glass sizes are as follows:
In 2004, Sharp started manufacturing panels using the 6th-generation glass size, which is 1.8 meters by 1.5 meters.
Until Gen 8, manufacturers would not agree on a single mother glass size and as a result, different manufacturers would use slightly different glass sizes for the same generation. Some manufacturers have adopted Gen 8.6 mother glass sheets which are only slightly larger than Gen 8.5, allowing for more 50- and 58-inch LCDs to be made per mother glass. For 58-inch LCDs in particular, six can be produced on a Gen 8.6 mother glass versus only three on a Gen 8.5 mother glass, significantly reducing waste. The thickness of the mother glass also increases with each generation, so larger mother glass sizes are better suited for larger displays.
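The yield arithmetic behind those panel counts can be sketched with a naive grid-cut calculation. The mother-glass dimensions below are commonly quoted nominal sizes and should be treated as assumptions, as should the zero-kerf, zero-margin cutting:

```python
import math

def panels_per_sheet(glass_w_mm, glass_h_mm, diag_in, aspect=(16, 9)):
    """Naive grid cut: how many panels of a given diagonal fit on one sheet.

    Ignores saw kerf and edge margins, so real-world yields are lower.
    """
    aw, ah = aspect
    d_mm = diag_in * 25.4
    w = d_mm * aw / math.hypot(aw, ah)  # panel width from diagonal and aspect
    h = d_mm * ah / math.hypot(aw, ah)  # panel height
    # try both panel orientations and keep the better cut
    return max(
        (glass_w_mm // w) * (glass_h_mm // h),
        (glass_w_mm // h) * (glass_h_mm // w),
    )

# Assumed nominal sizes: Gen 8.5 ~ 2200 x 2500 mm, Gen 8.6 ~ 2250 x 2600 mm
gen85 = panels_per_sheet(2200, 2500, 58)  # 3 per Gen 8.5 sheet
gen86 = panels_per_sheet(2250, 2600, 58)  # 6 per Gen 8.6 sheet
```

This reproduces the six-versus-three figure for 58-inch panels quoted above.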
Companies
Companies that have made or sold LCD panels include:
Sharp Corporation
Japan Display
AUO Corporation
Companies that have produced FPD lithography equipment include Canon and Nikon.
LCD glass substrates are made by companies such as AGC Inc., Corning Inc., and Nippon Electric Glass.
Display lithography equipment include the H803T and H1003T from Canon. Display Technologies, Inc. is a defunct joint venture that manufactured LCD panels.
Materials
Optically clear adhesives are used to bond display components in the manufacturing process.
See also
Liquid crystal on silicon
References
Manufacturing
Liquid crystal displays | LCD manufacturing | [
"Engineering"
] | 798 | [
"Manufacturing",
"Mechanical engineering"
] |
78,584,063 | https://en.wikipedia.org/wiki/Bayo%20Ojulari | Bayo Bashir Ojulari is a Nigerian engineer and expert in petroleum, process and production engineering. He was managing director of Shell Nigeria Exploration and Production Company (SNEPCo) from 2015 to 2021.
Career
Ojulari began his engineering career in Nigeria and served in different leadership positions in Nigerian engineering professional organisations, including as chairman and member of the board of trustees of the Society of Petroleum Engineers (SPE Nigeria Council) between 1998 and 1999. He is a Fellow of the Nigerian Society of Engineers (NSE). Ojulari worked in Europe and the Middle East in different managerial capacities in petroleum engineering, process engineering, production engineering, and in health and safety roles.
Ojulari was appointed Managing Director Shell Nigeria Exploration and Production Company (SNEPCo) and as general manager, Deepwater in November 2015. Within this period, he served as a member of the board of directors of Shell Petroleum Development Company (SPDC) responsible for Onshore and Offshore Petroleum Engineering, Technical Integration of Development, Well and Project Engineering. He retired from his positions in Shell in July 2021. He is chairman of BAT Advisory & Energy Company.
References
Living people
Nigerian engineers
Petroleum engineers
Year of birth missing (living people) | Bayo Ojulari | [
"Engineering"
] | 246 | [
"Petroleum engineers",
"Petroleum engineering"
] |
78,586,725 | https://en.wikipedia.org/wiki/Semi-Dirac%20fermion | In condensed matter physics, semi-Dirac fermions are a class of fermionic quasiparticles with the unusual property that their energy dispersion relation changes from quadratic to linear depending on their direction of motion. Their theoretical properties have been studied for some time.
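A commonly used model dispersion captures this anisotropy. The sketch below, in illustrative units with m = v = 1, shows the quadratic-versus-linear scaling along the two momentum directions:

```python
import math

def semi_dirac_energy(px, py, m=1.0, v=1.0):
    """Model dispersion E = sqrt((px^2 / 2m)^2 + (v * py)^2):
    free-particle-like (quadratic) along x, Dirac-like (linear) along y."""
    return math.hypot(px * px / (2.0 * m), v * py)

# Doubling momentum quadruples the energy along x but only doubles it along y:
assert math.isclose(semi_dirac_energy(2, 0), 4 * semi_dirac_energy(1, 0))
assert math.isclose(semi_dirac_energy(0, 2), 2 * semi_dirac_energy(0, 1))
```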
Their first observation in a solid was in zirconium silicon sulfide (ZrSiS), a topological semi-metal, and was published in 2024.
See also
Dirac fermion
References
Fermions
Quasiparticles
External links
David Nield: Physicists Find Particle That Only Has Mass When Moving in One Direction. ScienceAlert, 14 December 2024. | Semi-Dirac fermion | [
"Physics",
"Materials_science"
] | 139 | [
"Matter",
"Fermions",
"Quantum physics stubs",
"Quantum mechanics",
"Condensed matter physics",
"Quasiparticles",
"Subatomic particles"
] |
72,797,009 | https://en.wikipedia.org/wiki/Bioconvergence | Bioconvergence is a multidisciplinary approach in life sciences that combines the disciplines of biotechnology, engineering, and computing to address complex challenges. The method is used in diagnostic processes and in the development of materials and pharmaceuticals. In addition to healthcare, bioconvergence contributes to improvements in sectors such as agriculture, energy, food, security, and climate. Research by McKinsey & Company indicates that the majority of bioconvergence's potential uses fall outside the healthcare sector, in areas like agriculture, aquaculture, consumer products, novel materials, chemistry, and energy. McKinsey estimates that bioconvergence solutions currently under development could generate an economic impact of up to annually within the next 10 to 20 years.
Implications
Bioconvergence uses methods from various disciplines such as biology, engineering, medicine, agriculture, computational sciences and artificial intelligence (AI), in order to solve challenges across several sectors.
Healthcare
Bioconvergence technologies in healthcare may include translational medicine, enabling the extraction of new insights from massive data sets; neuromorphic computing, which seeks to emulate the biological neural structure of the brain to increase processing performance and energy efficiency; creation of digital twins for clinical trials; and biochips such as an "organ on a chip" (OOC). Other potential implications of bioconvergence include new methods of using nanorobotics for drug delivery, regenerative medicine, diagnostics and biological sensors, optogenetics, bioelectronics, engineered "living" materials, and more. According to Belén Garijo, CEO of Merck, bioconvergence can also bring about "the potential of personalized medicine".
Food and agriculture
Traditional agriculture relies on land, water, and a suitable climate. Proponents of bioconvergence research assert that its technologies could be used to grow food anywhere, in labs and indoor vertical farms.
Potential applications also include new ways to conduct breeding of animals and plants using molecular or genetic markers that may be quicker than established selective-breeding methods; more precise tools for genetic engineering of plants; use of the microbiome of plants, soil, animals, and water to improve the quality and productivity of agricultural production; and the development of alternative proteins, including cultured meat, alternative eggs, and alternative milk.
Energy, climate and advanced materials
Bioconvergence could transform the natural resource sector through new ways of making and obtaining raw materials and fuels, as well as new manufacturing techniques. This could potentially reduce consumption of natural resources.
History
The term "bioconvergence" was used in 2005 to describe the integration of bio- and information-technologies into the healthcare industry. Since 2020, it has gained wider recognition.
In April 2020, the European Investment Bank and the Israel Innovation Authority concluded a cooperation agreement to jointly pursue investments in the globally emerging domain of bioconvergence.
In March 2021, the US National Intelligence Council (NIC), which bridges the United States Intelligence Community with policy makers in the US, published a research paper on the "Future of Biology", concluding that "During the next 20 years, a more multidisciplinary and data-intensive approach to life sciences will shift our understanding of and ability to manipulate living matter. These disciplines, combined with cognitive science, nanotechnology, physics, and others, are propelling new leaps in our understanding. It is anticipated that the collective application of these diverse technologies to the life sciences—known as bioconvergence— will accelerate discovery and predictability in biotech design and production."
In September 2021, CELLINK Life Sciences, a Swedish publicly traded company that commercialized the first bio-based ink in 2016, changed its group name to BICO Group, short for "bioconvergence". It is building a portfolio that blends biology, engineering and computer science technologies and is considering acquisition opportunities in bioconvergence technology companies.
In May 2022, Israel launched a 5-year national plan worth () to boost research and development in bioconvergence. Also in May 2022, Ben-Gurion University of the Negev (BGU) and Soroka Medical Center announced a strategic collaboration for the development of novel technologies in the field of bioconvergence.
In October 2022, Japan announced that it will establish a global center of bioconvergence innovation at the Okinawa Institute of Science and Technology. It will be supported by a grant from the Japan Science and Technology Agency (JST) Program on Open Innovation Platform for Academia-Industry Co-Creation.
According to a McKinsey report on public policy and "Biological innovations for complex problems", the Israel Innovation Authority is "investing in bioconvergence technologies to ensure that professionals in biology, computer science, mathematics, engineering, and nanoscience work seamlessly together". The Israel Innovation Authority views bioconvergence as potentially "one of the next significant growth engines of Israeli high-tech".
Market
According to research company Grand View Research, the global bioconvergence market was valued at USD 110.9 billion in 2021 and is anticipated to expand at a compound annual growth rate (CAGR) of 7.4% from 2022 to 2030. The significant market growth can be attributed to the growing elderly population and to advances in stem cell technology for repairing injured cells, tissues, and organs. A McKinsey report in 2020 suggests that a pipeline of over 400 scientifically feasible use cases is already visible, and that these applications alone could have a direct economic impact of up to per year over the next 10 to 20 years.
References
Further reading
Biotechnology
Biological engineering
Medical technology | Bioconvergence | [
"Engineering",
"Biology"
] | 1,142 | [
"Biological engineering",
"nan",
"Biotechnology",
"Medical technology"
] |
77,230,607 | https://en.wikipedia.org/wiki/Blanes%20Canyon | The Blanes canyon is an underwater canyon that forms the underwater valley located off the coast of Blanes, in the province of Girona, Catalonia. This underwater canyon is a significant geological feature of the Balearic Sea and plays an important role in the region's marine biodiversity.
Geography
The canyon extends from the continental shelf to depths exceeding . Its formation is due to erosive and tectonic processes that have shaped the seabed over millions of years. Its head lies about from the coast; the canyon is a valley at the bottom of the sea about long and wide, with vertical walls that descend to a depth of about . The mouth of the canyon is located near the mouth of the Tordera River, which contributes to the mixing of fresh and salt water in the area.
Biodiversity
The Blanes canyon is home to a great diversity of marine species. Among the benthic organisms that inhabit the canyon are corals, sponges, and a variety of invertebrates. In addition, it is a passage area for several species of pelagic fish and marine mammals, including dolphins. Dense cold-water corals have recently been discovered on its walls, living at temperatures around . The canyon acts as an oasis of biodiversity for many crustaceans and fish, and its rocky walls shelter an immense variety of organisms, some of which, such as corals, sponges and gorgonians, are protected and in danger of extinction.
Ecological importance
This underwater canyon is crucial for the conservation of marine biodiversity in the Balearic Sea. The upward currents that are generated in the canyon bring nutrients from the depths to the surface, which favors biological productivity and the presence of a rich marine fauna.
Scientific research
The Blanes canyon has been the subject of numerous oceanographic studies. Researchers from several institutions have explored the canyon to better understand the geological and ecological processes that take place at these depths. These studies are essential for the conservation and sustainable management of marine resources. In 2017, as part of an ICM-CSIC project aimed at studying the effect of trawling on deep marine sediments, Pere Puig's team found a large number of coral colonies in the Blanes canyon. He did so with the help of international researchers and scientists from the Institute of Marine Sciences, who collaborated in the exploration of the canyon aboard the CSIC oceanographic ship Sarmiento de Gamboa.
Threats and conservation
Despite its ecological importance, the Blanes Canyon faces threats such as trawling and marine pollution. It is essential to implement conservation measures to protect this valuable underwater ecosystem and ensure its preservation for future generations.
See also
Palamós Canyon
Catalan Sea
Submarine canyon
References
Submarine canyons
Oceanography | Blanes Canyon | [
"Physics",
"Environmental_science"
] | 575 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
75,608,576 | https://en.wikipedia.org/wiki/Otto%20calculus | The Otto calculus (also known as Otto's calculus) is a mathematical system for studying diffusion equations that views the space of probability measures as an infinite dimensional Riemannian manifold by interpreting the Wasserstein distance as if it was a Riemannian metric.
It is named after Felix Otto, who developed it in the late 1990s and published it in a 2001 paper on the geometry of dissipative evolution equations. Otto acknowledges inspiration from earlier work by David Kinderlehrer and conversations with Robert McCann and Cédric Villani.
See also
Itô calculus
References
Diffusion
Partial differential equations
Riemannian manifolds | Otto calculus | [
"Physics",
"Chemistry",
"Mathematics"
] | 122 | [
"Transport phenomena",
"Physical phenomena",
"Diffusion",
"Space (mathematics)",
"Metric spaces",
"Riemannian manifolds"
] |
75,612,982 | https://en.wikipedia.org/wiki/Spatial%20Planning%20Act%202023 | The Spatial Planning Act 2023 (SPA), now repealed, was one of three laws introduced by the Sixth Labour Government in order to replace New Zealand's Resource Management Act 1991 (RMA). Its purpose was to provide for regional spatial strategies that assisted the purpose of the Natural and Built Environment Act 2023 (NBA) and promote integration in the performance of functions under the NBA, the Land Transport Management Act 2003, the Local Government Act 2002, and the Water Services Entities Act 2022.
The Bill passed its third reading on 15 August 2023, and received royal assent on 23 August 2023. On 23 December 2023, the SPA and NBA were both repealed by the National-led coalition government.
Key provisions
The Spatial Planning Act 2023 required all regions to have a regional spatial strategy aligned with the geographical boundaries of the region. The Chatham Islands' regional planning committee and offshore islands administered by the Minister of Conservation were excluded from this requirement.
The Spatial Planning Act also outlined the scope, contents, preparation and implementation of the regional spatial strategies including matters of national and regional importance. The Act also entrenched Te Ture Whaimana as the primary direction-setting document for the Waikato and Waipā Rivers, along with activities within their catchments affecting the rivers.
The Spatial Planning Act also required regional spatial strategies to take into account customary marine title areas and identified Māori land. Regional planning committees were also required to comply with Māori consultation arrangements. The Act also outlined the process for consulting with Māori groups.
The Act also contained provisions for cross-regional planning committees to develop plans affecting two or more regions. The Act also outlined the responsibilities and process for the Minister responsible for managing the RMA process.
The Spatial Planning Act also amended several existing laws including the Conservation Act 1987, Environment Act 1986, the Land Transport Management Act 2003, the Local Government Act 2002 and the Water Services Entities Act 2022.
Legislative history
Introduction
In 2020, a review of the Resource Management Act 1991 (RMA) identified various problems with the existing resource management system, and concluded that it could not cope with modern environmental pressures. In January 2021, the Sixth Labour Government announced that the RMA would be replaced by three acts: the core Natural and Built Environment Act, focusing on land use and environmental regulation; the Strategic Planning Act, focusing on development laws; and the Climate Change Adaptation Act, focusing on managed retreat and climate change funding.
On 14 November 2022, the Labour Government introduced the Spatial Planning Act into the New Zealand House of Representatives alongside the companion Natural and Built Environment Act (NBA) as part of its RMA reform efforts. The opposition National and ACT parties opposed the two replacement bills, claiming that they created more centralisation, bureaucracy and did little to address the problems with the RMA process. The Green Party expressed concerns about the perceived lack of environment protection in the proposed legislation.
First reading
On 22 November 2022, Environment Minister David Parker introduced the Strategic Planning Act during its first reading. Several Labour and Green MPs including Parker, Rachel Brooking, Tāmati Coffey, Eugenie Sage, Anahila Kanongata'a-Suisuiki, Duncan Webb, Lemauga Lydia Sosene and Angie Warren-Clark argued that the SPA would help simplify the resource consent process for housing, infrastructural development, and spatial planning. By contrast, National and ACT MPs including Scott Simpson, Stuart Smith, Simon Court, Sam Uffindell, and David Bennett expressed concerns about red tape and centralisation, and claimed that the bill would do little to address the housing shortage. The SPA passed its first reading by a margin of 74 (Labour and the Greens) to 45 votes (National, ACT, and Te Pāti Māori), and was referred to the Environment select committee.
Select committee stage
On 27 June 2023, the Environment Committee voted by a majority to progress the SPA, with several amendments, to its second reading. These amendments included promoting integration in the functions of the regional spatial strategies (RSS) with the NBA, upholding te Oranga o te Taiao, promoting integration between the RSS and proposed water services entities, clarifying the role of Māori iwi (tribes) and hapū (sub-groups) in the bill, and clarifying the wording around the regional spatial planning process and the transitional process from the RMA framework. The ACT and National parties also published their minority reports. ACT claimed that the SPA would frustrate development by creating more red tape and duplication. National's minority report claimed that the SPA created legal uncertainty, increased bureaucracy, complicated decarbonisation efforts, and undermined property rights.
Second reading
During its second reading on 18 July 2023, Parliament voted by a margin of 71 (Labour, Greens) to 48 (National, ACT, Te Paati Māori, independent Members of Parliament Elizabeth Kerekere and Meka Whaitiri) to endorse the Environment Committee's amendments. The SPA passed its second reading by a margin of 72 (Labour, Greens, Kerekere) to 47 (National, ACT, Te Paati Māori, and Whaitiri). Labour MPs Parker, Brooking, Phil Twyford, Warren-Clark, Arena Williams, Tracey McLellan, and Sosene, and Green MP Sage gave speeches defending the Bill. National MPs Chris Bishop, Simpson, Barbara Kuriger, and Tama Potaka, and ACT MP Court spoke against the Bill.
Third reading
The Bill passed its third reading on 15 August 2023 by a margin of 72 (Labour, Greens, and Kerekere) to 47 (National, ACT, Te Paati Māori, and Whaitiri). Labour MPs Parker, Brooking, Twyford, Warren-Clark, Sarah Pallett, Dan Rosewarne, and Sosene and Green MP Sage spoke in favour of the Bill. National MPs Bishop, Simpson, Kuriger, Potaka, Smith and ACT MP Court opposed the Bill. The Bill received royal assent on 23 August 2023.
Repeal
Following the 2023 New Zealand general election, the National-led coalition government repealed the Spatial Planning Act and Natural and Built Environment Act on 23 December 2023. The country reverted to the Resource Management Act 1991 while the Government worked on introducing new replacement legislation.
Notes and references
External links
2022 in New Zealand law
2023 in New Zealand law
2022 in the environment
2023 in the environment
Environmental law in New Zealand
Environmental mitigation
Natural resource management
Repealed New Zealand legislation
Urban planning in New Zealand | Spatial Planning Act 2023 | [
"Chemistry",
"Engineering"
] | 1,325 | [
"Environmental mitigation",
"Environmental engineering"
] |
74,259,198 | https://en.wikipedia.org/wiki/Guangzhao%20Mao | Professor Guangzhao Mao is an American chemical engineer and an academic. She is professor and Head of the School of Engineering at the University of Edinburgh. From 2020 to 2024 she served as the Head of the School of Chemical Engineering at the University of New South Wales. She has held positions as chief investigator at the Australian Research Council (ARC) Centre of Excellence for Carbon Science and Innovation, the ARC Research Hub for Resilient Intelligent Infrastructure Systems, and the ARC Research Hub for Connected Sensors for Health.
Mao is most known for her work on nanotechnology, primarily focusing on targeted drug delivery and electrochemistry for sensors.
Education
Mao completed her BSc in chemistry from Nanjing University in 1988 and obtained her PhD in chemical engineering from the University of Minnesota in 1994. She then completed her postdoctoral fellowship at the same institution in 1995.
Career
Mao began her academic career in 1995 by joining Wayne State University as an assistant professor; she was promoted to full professor and served there until 2020. Since 2020, she has been serving as a professor at the school of chemical engineering at the University of New South Wales.
Mao served as the director of the material science graduate program at Wayne State University from 2011 to 2015 and as the Chair of the Chemical Engineering and Material Science Department at Wayne State University from 2015 to 2020. From 2020 to 2024, she held the position of the Head of the School of Chemical Engineering at the University of New South Wales. She joined the University of Edinburgh as Head of the School of Engineering in September 2024.
Mao has been the chief investigator of the ARC Research Hub for Connected Sensors for Health and the ARC Research Hub for Resilient Intelligent Infrastructure Systems, and as of 2023, she has also been serving as the chief investigator of the ARC Centre of Excellence for Carbon Science and Innovation.
Research
Mao has authored numerous publications spanning the areas of nanomanufacturing, nanofabrication, and nanochemistry, including articles in peer-reviewed journals.
Targeted drug delivery
Centered on localized gene delivery, Mao's research proposed biodegradable polymer coatings for sequential DNA release from implantable devices. This was built on her PhD research on multilayer films. In 2016, she and her team pioneered the idea of using retrograde transport proteins to specifically deliver drugs for treating respiratory issues linked to spinal cord injury. In related research, she collaborated with Harry Goshgarian and Abdulghani Sankari to advance nanotherapeutics by integrating retrograde transport proteins, adenosine receptor antagonists, and nanoparticle carriers. Furthermore, she proposed a new technique for delivering drugs specifically to the central nervous system (CNS) using nanoparticles that are chemically attached to neural tract tracer proteins and can be transported along specific neural pathways, allowing them to bypass the blood–brain barrier and target the CNS directly. Mao used human embryonic stem cells (hESCs) for assessing nanotoxicology, specifically, the effect of nanoparticle size on the viability, pluripotency, neuronal differentiation, and DNA methylation of hESCs. Her work revealed a type of gold nanoparticles to be highly toxic and demonstrated the potential of hESCs in predicting nanotoxicity.
Nanotechnology and nanosensor manufacturing
Mao's other nanotechnology research has focused on seed-mediated crystallization for nanosensor scale up. Her early research examined the potential of designing nucleation seeds to induce shape change in molecular crystals. In her investigation of the impact of seed size and surface chemistry, her study illustrated the capability of nanoparticles to effectively change the ordering pattern of molecular crystals nucleated on the nanoparticle. Moreover, she examined the use of electrochemistry to deposit both the nanoparticle seeds and the molecular crystals on the seed to form a hybrid nanostructure. In 2020, her research group introduced a method for manufacturing nanowire sensors by electrochemically depositing charge-transfer salt nanowire crystals on sensor substrates, demonstrating their gas sensing capabilities for detecting ammonia concentrations in the range of 1–100 ppm through electrical impedance measurements. In 2023, Mao demonstrated the potential of electrochemistry for precise deposition and scale up of nanosensors. She applied atomic force microscopy and surface forces measurement techniques for the study of colloidal and biomolecular interfaces including liposomes, DNA nanoparticles, and viral particles.
Awards and honors
1997 – Faculty Career Award, National Science Foundation
2002 – Fulbright Senior Scholar
2022 – Fellow of the American Institute of Chemical Engineers
Selected articles
Mao, G., Tsao, Y., Tirrell, M., Davis, H. T., Hessel, V., & Ringsdorf, H. (1993). Self-assembly of photopolymerizable bolaform amphiphile mono-and multilayers. Langmuir, 9(12), 3461–3470.
D Chen, R Wang, I Arachchige, G Mao, SL Brock (2004), Particle− Rod Hybrids: Growth of Arachidic Acid Molecular Rods from Capped Cadmium Selenide Nanoparticles, Journal of the American Chemical Society 126 (50), 16290–16291.
MC Senut, Y Zhang, F Liu, A Sen, DM Ruden, G Mao (2016), Size‐dependent toxicity of gold nanoparticles on human embryonic stem cells and their neural derivatives, Small 12 (5), 631–646.
Y Zhang, JB Walker, Z Minic, F Liu, H Goshgarian, G Mao (2016), Transporter protein and drug-conjugated gold nanoparticles capable of bypassing the blood-brain barrier, Scientific reports 6 (1), 1–8.
MM Hassan, M Hettiarachchi, M Kilani, X Gao, A Sankari, C Boyer, G Mao (2021), Sustained A1 adenosine receptor antagonist drug release from nanoparticles functionalized by a neural tracing protein, ACS Chemical Neuroscience 12 (23), 4438–4448.
M Kilani, M Ahmed, M Mayyas, Y Wang, K Kalantar‐Zadeh, G Mao (2023), Toward Precision Deposition of Conductive Charge‐Transfer Complex Crystals Using Nanoelectrochemistry, Small Methods 7 (4), 2201198.
References
Chemical engineers
Nanjing University alumni
University of Minnesota alumni
Faculties of the University of New South Wales
American chemical engineers
Living people
Year of birth missing (living people)
Academics of the University of Edinburgh
Fellows of the American Institute of Chemical Engineers | Guangzhao Mao | [
"Chemistry",
"Engineering"
] | 1,366 | [
"Chemical engineering",
"Chemical engineers"
] |
74,262,417 | https://en.wikipedia.org/wiki/Fiber-reinforced%20cementitious%20matrix | A fiber-reinforced cementitious matrix (FRCM) is a reinforcement system composed by fibers (such as steel, aramid, basalt, plant fibers, carbon, polyparaphenylenebenzobisoxazole, and glass) embedded in an inorganic-based matrix, usually made by cement or lime mortar. Plant fibers are a promising area but they are subjected to degradation in the alkaline environment and elevated temperatures during cement hydration.
In international literature, FRCMs are also called textile-reinforced concrete (TRC), textile reinforced mortars (TRM), fabric-reinforced mortar (FRM), or inorganic matrix-grid composites (IMG).
Starting from the second decade of the 21st century they are used for the structural rehabilitation of existing buildings, in particular made by masonry (existing and historical) or by reinforced concrete, to increase their load-bearing capacity under both vertical and horizontal loads (including seismic ones).
History
The efficacy of FRCM lies in combining multiple materials to give better mechanical properties to structural systems. A historical example that shares some features with FRCM is the combination of sun-dried clay and straw for the production of bricks in Mesopotamia, or the Roman cocciopesto. The first FRP composite materials appeared in the 1940s in aeronautical engineering. FRCM composite materials, on the other hand, saw their first applications in the early years of the 21st century. In the second decade of that century, FRCMs joined the now-classic FRPs in importance for structural rehabilitation. This is because the inorganic matrix has shown numerous advantages over the organic counterpart (FRP), including a better response when applied to fragile substrates such as masonry and reinforced concrete, thanks to the greater compatibility of the mortar layer with such substrates.
Properties
FRCM composites constitute systems or kits according to the definition set out in point 2 of the art. 2 of EU Regulation 305/2011. They are composed of two fundamental components: an inorganic matrix and a reinforcement. Sometimes, to improve their mechanical characteristics and adherence, connectors, anchoring devices or additives can also be introduced.
An FRCM package is created in situ and applied to the structure that needs to be consolidated. An FRCM system can be constituted by a single textile or by several textiles embedded in a single thickness of mortar.
The matrix (or mortar), cementitious, airborne, hydraulic, bastard or based on natural lime, is reinforced with fibers made by:
high tensile steel (UHTSS – Ultra High Tensile Strength Steel);
basalt;
natural (plant) fibers
polyparaphenylenebenzobisoxazole (PBO);
glass;
carbon;
aramid.
The fibers constitute the textile. The textile is grouped into yarns and can be dry or impregnated with organic resins. Yarns are grouped into nets and spaced at intervals to be defined appropriately in accordance with CNR DT 215.
The main net characteristics to be defined are:
the distance between yarns in both directions of the textile (respectively called "warp" and "weft");
weights;
warping methods.
Mechanical characteristics
The constitutive stress-strain relationship of an FRCM reinforcement system in a coupon test is characterised by three stages. Stage A corresponds to the uncracked sample, Stage B to the sample undergoing cracking, and Stage C to the fully cracked one. In Stage C the stress is expressed with reference to the area of the fibers alone, without considering the inorganic matrix. However, the mechanical behavior of FRCMs is very complex, so the constitutive relationship alone is not sufficient to characterise it, because the FRCM is placed on a substrate. It is therefore necessary to take into account multiple failure mechanisms that can occur as a result of the interaction between support and reinforcement. Such mechanisms include:
the detachment with cohesive failure of the support from the reinforcement system;
the detachment at the matrix-support interface;
the detachment at the matrix-fiber interface;
the sliding of the fiber in the matrix;
the sliding of the fiber and the cracking of the outer layer of mortar;
the tensile failure of the fiber.
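The three-stage constitutive relationship described above can be sketched as an idealised trilinear stress-strain curve. In the snippet below, all moduli and strain limits are illustrative placeholders, not values from CNR DT 215 or any qualification test.

```python
def frcm_stress(strain, e_a=1000.0, eps_a=0.02, e_b=100.0, eps_b=0.06, e_c=400.0):
    """Idealised trilinear sigma-epsilon curve of an FRCM coupon:
    Stage A (uncracked, stiff), Stage B (crack development, reduced slope),
    Stage C (cracked, fiber-governed). Parameters are illustrative only."""
    if strain <= eps_a:                       # Stage A: uncracked
        return e_a * strain
    sigma_a = e_a * eps_a
    if strain <= eps_b:                       # Stage B: cracking
        return sigma_a + e_b * (strain - eps_a)
    sigma_b = sigma_a + e_b * (eps_b - eps_a)
    return sigma_b + e_c * (strain - eps_b)   # Stage C: cracked
```

The curve is continuous at the stage transitions, and the Stage B slope is lower than the Stage A slope, reproducing the flattening observed while cracks develop.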
See also
Reinforced concrete
Kevlar
References
Further reading
External links
CNR DT 215
Linea Guida per la identificazione, la qualificazione ed il controllo di accettazione di compositi fibrorinforzati a matrice inorganica (FRCM) da utilizzarsi per il consolidamento strutturale di costruzioni esistenti
Textile-reinforced mortar (TRM) versus FRP as strengthening material of URM walls: in-plane cyclic loading
Composite materials
Plastics
Structural engineering
Fibre-reinforced polymers
Polymers
Materials science | Fiber-reinforced cementitious matrix | [
"Physics",
"Materials_science",
"Engineering"
] | 998 | [
"Structural engineering",
"Applied and interdisciplinary physics",
"Composite materials",
"Materials science",
"Unsolved problems in physics",
"Construction",
"Materials",
"Civil engineering",
"nan",
"Amorphous solids",
"Matter",
"Plastics"
] |
74,269,270 | https://en.wikipedia.org/wiki/Machine-learned%20interatomic%20potential | Machine-learned interatomic potentials (MLIPs), or simply machine learning potentials (MLPs), are interatomic potentials constructed by machine learning programs. Beginning in the 1990s, researchers have employed such programs to construct interatomic potentials by mapping atomic structures to their potential energies. These potentials are referred to as MLIPs or MLPs.
Such machine learning potentials promised to fill the gap between density functional theory, a highly accurate but computationally intensive modelling method, and empirically derived or intuitively-approximated potentials, which were far lighter computationally but substantially less accurate. Improvements in artificial intelligence technology heightened the accuracy of MLPs while lowering their computational cost, increasing the role of machine learning in fitting potentials.
Machine learning potentials began by using neural networks to tackle low-dimensional systems. While promising, these models could not systematically account for interatomic energy interactions; they could be applied to small molecules in a vacuum, or molecules interacting with frozen surfaces, but not much else – and even in these applications, the models often relied on force fields or potentials derived empirically or with simulations. These models thus remained confined to academia.
Modern neural networks construct highly accurate and computationally light potentials, as theoretical understanding from materials science has increasingly been built into their architectures and preprocessing. Almost all are local, accounting for all interactions between an atom and its neighbors up to some cutoff radius. There exist some nonlocal models, but these have been experimental for almost a decade. For most systems, reasonable cutoff radii enable highly accurate results.
Almost all neural networks intake atomic coordinates and output potential energies. For some, these atomic coordinates are converted into atom-centered symmetry functions. From this data, a separate atomic neural network is trained for each element; each atomic network is evaluated whenever that element occurs in the given structure, and then the results are pooled together at the end. This process – in particular, the atom-centered symmetry functions which convey translational, rotational, and permutational invariances – has greatly improved machine learning potentials by significantly constraining the neural network search space. Other models use a similar process but emphasize bonds over atoms, using pair symmetry functions and training one network per atom pair.
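As a concrete illustration of this preprocessing, the sketch below implements one radial atom-centered symmetry function of the Behler-Parrinello G2 form with a cosine cutoff. The parameter values (eta, R_s, cutoff radius) and the toy environment are illustrative placeholders, not values from any published potential.

```python
import math

def cutoff(r, r_c):
    """Cosine cutoff: smoothly sends a neighbor's contribution to zero at r_c."""
    if r >= r_c:
        return 0.0
    return 0.5 * (math.cos(math.pi * r / r_c) + 1.0)

def g2(center, neighbors, eta=0.5, r_s=0.0, r_c=6.0):
    """Radial G2 symmetry function: a fingerprint of one atom's environment
    built only from interatomic distances, summed over neighbors."""
    total = 0.0
    for pos in neighbors:
        r = math.dist(center, pos)
        total += math.exp(-eta * (r - r_s) ** 2) * cutoff(r, r_c)
    return total

# Toy environment: two neighbors of a central atom at the origin.
env = [(1.0, 0.0, 0.0), (0.0, 1.5, 0.0)]
fp = g2((0.0, 0.0, 0.0), env)
```

Because the value depends only on distances and a sum over neighbors, it is unchanged by translating the structure, rotating it, or permuting the neighbor list, which is exactly the invariance property that constrains the network search space.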
Other models learn their own descriptors rather than using predetermined symmetry-dictating functions. These models, called message-passing neural networks (MPNNs), are graph neural networks. Treating molecules as three-dimensional graphs (where atoms are nodes and bonds are edges), the model takes feature vectors describing the atoms as input, and iteratively updates these vectors as information about neighboring atoms is processed through message functions and convolutions. These feature vectors are then used to predict the final potentials. The flexibility of this method often results in stronger, more generalizable models. In 2017, the first-ever MPNN model (a deep tensor neural network) was used to calculate the properties of small organic molecules. Such technology was commercialized, leading to the development of Matlantis in 2022, which extracts properties through both the forward and backward passes.
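A stripped-down sketch of the message-passing idea follows: feature vectors sit on atoms, each round sums messages from bonded neighbors, and a pooled readout gives a permutation-invariant graph feature that an energy head could consume. The update rule, feature values, and graph are illustrative, not those of the deep tensor neural network or Matlantis.

```python
import numpy as np

def message_pass(h, edges, rounds=2):
    """Toy MPNN step: update each atom's feature vector with the sum of its
    neighbors' vectors, then pool over atoms for a graph-level readout."""
    h = h.copy()
    for _ in range(rounds):
        msg = np.zeros_like(h)
        for i, j in edges:          # undirected bonds: messages both ways
            msg[i] += h[j]
            msg[j] += h[i]
        h = np.tanh(h + msg)        # simple nonlinear update function
    return h.sum(axis=0)            # permutation-invariant readout

# 3-atom chain with 2-dimensional feature vectors.
h0 = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
g = message_pass(h0, [(0, 1), (1, 2)])
```

Relabeling the atoms (and the bond list accordingly) leaves the readout unchanged, the same invariance a real MPNN potential must respect.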
Gaussian Approximation Potential (GAP)
One popular class of machine-learned interatomic potential is the Gaussian Approximation Potential (GAP), which combines compact descriptors of local atomic environments with Gaussian process regression to machine-learn the potential energy surface of a given system. To date, the GAP framework has been used to successfully develop a number of MLIPs for various systems, including for elemental systems such as carbon, silicon, phosphorus, and tungsten, as well as for multicomponent systems such as Ge2Sb2Te5 and austenitic stainless steel, Fe7Cr2Ni.
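The regression step at the heart of GAP can be sketched with plain Gaussian process regression. The snippet below is a schematic only: it uses a one-dimensional stand-in descriptor and a squared-exponential kernel, not the GAP software or its many-body (e.g. SOAP) descriptors.

```python
import numpy as np

def rbf_kernel(X1, X2, length=1.0):
    """Squared-exponential kernel between rows of two descriptor matrices."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

def gpr_fit_predict(X_train, y_train, X_test, noise=1e-8):
    """Gaussian process regression: posterior mean energy at X_test."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)          # kernel regression weights
    return rbf_kernel(X_test, X_train) @ alpha

# Toy "descriptor -> energy" data at four well-separated training points.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.sin(X).ravel()
pred = gpr_fit_predict(X, y, X)
```

With a small noise term, the posterior mean nearly interpolates the training energies, which is the behavior exploited when fitting a GAP to reference (e.g. density functional theory) data.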
References
Machine learning
Materials science
Density functional theory software | Machine-learned interatomic potential | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 783 | [
"Applied and interdisciplinary physics",
"Computational chemistry software",
"Machine learning",
"Materials science",
"Density functional theory software",
"nan",
"Artificial intelligence engineering"
] |
74,277,546 | https://en.wikipedia.org/wiki/Computed%20torque%20control | Computed torque control is a control scheme used in motion control in robotics. It combines feedback linearization via a PID controller of the error with a dynamical model of the controlled robot.
Let the dynamics of the controlled robot be described by

M(q)q̈ + C(q, q̇)q̇ + g(q) = τ

where q is the state vector of joint variables that describe the system, M(q) is the inertia matrix, C(q, q̇)q̇ is the vector of Coriolis and centrifugal torques, g(q) are the torques caused by gravity and τ is the vector of joint torque inputs.

Assume that we have an approximate model of the system made up of M̂(q), Ĉ(q, q̇) and ĝ(q). This model does not need to be perfect, but it should justify the approximations M̂(q) ≈ M(q) and Ĉ(q, q̇)q̇ + ĝ(q) ≈ C(q, q̇)q̇ + g(q).

Given a desired trajectory q_d(t), the error relative to the current state is then e = q_d − q.

We can then set the input of the system to be

τ = M̂(q)(q̈_d + K_d ė + K_p e + K_i ∫e dt) + Ĉ(q, q̇)q̇ + ĝ(q)

With this input the dynamics of the entire system become

ë + K_d ė + K_p e + K_i ∫e dt ≈ 0
and the normal methods for PID controller tuning can be applied. In this way the complicated nonlinear control problem has been reduced to a relatively simple linear control problem.
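A minimal numerical sketch of the scheme for a single pendulum joint, with an exact model and PD gains (all parameters illustrative):

```python
import numpy as np

# Plant: M(q)q̈ + g(q) = τ for one joint, M = m*l², g(q) = m*g0*l*sin(q)
m, l, g0 = 1.0, 1.0, 9.81
M = m * l**2
def grav(q):
    return m * g0 * l * np.sin(q)

Kp, Kd = 25.0, 10.0                         # PD gains on the tracking error
q_d, qd_d, qdd_d = 1.0, 0.0, 0.0            # constant setpoint "trajectory"

q, qd = 0.0, 0.0                            # initial state
dt = 1e-3
for _ in range(5000):                       # 5 s of simulation
    e, ed = q_d - q, qd_d - qd
    # computed torque: model-based feedforward plus PD feedback
    tau = M * (qdd_d + Kp * e + Kd * ed) + grav(q)
    qdd = (tau - grav(q)) / M               # the "true" plant dynamics
    qd += qdd * dt
    q += qd * dt

print(abs(q_d - q))                         # tracking error after 5 s
```

Because the model-based term cancels the nonlinearity, the tracking error obeys the linear equation ë + 10ė + 25e = 0 and decays at the rate set by the gains.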
References
Motion control
Robotics engineering | Computed torque control | [
"Physics",
"Technology",
"Engineering"
] | 203 | [
"Physical phenomena",
"Computer engineering",
"Robotics engineering",
"Automation",
"Motion (physics)",
"Motion control"
] |
74,285,579 | https://en.wikipedia.org/wiki/Open-circuit%20saturation%20curve | The open-circuit saturation curve (also open-circuit characteristic, OCC) of a synchronous generator is a plot of the output open circuit voltage as a function of the excitation current or field. The curve is typically plotted alongside the synchronous impedance curve.
At low field, the permeable iron in the magnetic circuit of the generator is not saturated, so the reluctance depends almost entirely on the fixed contribution of the air gap; the part of the curve that starts at the origin is therefore a linear "air-gap line" (output voltage proportional to excitation current). As the iron saturates with higher excitation and thus higher magnetic flux, the reluctance increases, and the OCC deflects down from the air-gap line.
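The shape of the curve can be illustrated with a simple saturable model of the Fröhlich form E = a·If / (1 + b·If), with purely illustrative constants:

```python
# a*If is the unsaturated "air-gap line"; the denominator models saturation
a, b = 120.0, 0.4

def occ(i_f):
    """Open-circuit voltage as a function of field current (toy model)."""
    return a * i_f / (1.0 + b * i_f)

for i_f in (0.1, 0.5, 1.0, 2.0):
    airgap = a * i_f
    print(i_f, round(occ(i_f), 1), round(airgap, 1))
# at low field occ ≈ air-gap line; at high field it deflects below it
```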
The curve is obtained by rotating the generator at the rated RPM with the output terminals disconnected, with the output voltage typically taken to at least 120% of the rated value for the device. Hydraulic units sometimes have to be tested at lower RPM, with the resulting voltage scaled up to account for the difference in frequency. Since the test goes above the rated voltage, the step-up transformer is typically also disconnected to avoid damaging it.

The open-circuit saturation curve can be used together with the zero-power-factor curve in the Potier triangle method.
References
Sources
Electrical generators | Open-circuit saturation curve | [
"Physics",
"Technology"
] | 278 | [
"Physical systems",
"Electrical generators",
"Machines"
] |
74,285,940 | https://en.wikipedia.org/wiki/Synchronous%20impedance%20curve | The synchronous impedance curve (also short-circuit characteristic, SCC) of a synchronous generator is a plot of the output short circuit current as a function of the excitation current or field. The curve is typically plotted alongside the open-circuit saturation curve.
The SCC is almost linear, since under short-circuit conditions the magnetic flux in the generator is below the iron saturation level and the reluctance is thus almost entirely defined by the fixed reluctance of the air gap. The name "synchronous impedance curve" is due to the fact that in the short-circuit condition all the generated voltage is dropped across the generator's internal synchronous impedance.
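As a worked example (illustrative per-phase values), the unsaturated synchronous impedance follows from dividing the open-circuit voltage by the short-circuit current measured at the same field current:

```python
E_oc = 240.0        # V, open-circuit phase voltage at a given field current
I_sc = 60.0         # A, short-circuit current at the same field current
Z_s = E_oc / I_sc   # ohms, synchronous impedance
print(Z_s)          # 4.0
```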
The curve is obtained by rotating the generator at the rated RPM with the output terminals shorted, with the output current taken up to 100% of the rated value for the device (higher values are typically not tested to avoid overheating).
References
Sources
Electrical generators | Synchronous impedance curve | [
"Physics",
"Technology"
] | 198 | [
"Physical systems",
"Electrical generators",
"Machines"
] |
74,287,597 | https://en.wikipedia.org/wiki/Spectroswiss | Spectroswiss is a Swiss technology company developing and producing hardware components and software for Fourier transform mass spectrometry. The company was formed in 2014 as a spin-out from the Biomolecular Mass Spectrometry Laboratory at Ecole Polytechnique Fédérale de Lausanne in Switzerland. The company's headquarters are located in Lausanne, Switzerland, with subsidiary in Cambridge, Massachusetts.
References
Mass spectrometry
Technology companies of Switzerland | Spectroswiss | [
"Physics",
"Chemistry"
] | 92 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
69,706,366 | https://en.wikipedia.org/wiki/Bing%20%28measuring%20unit%29 | A Bing (秉) was a measuring unit used in ancient China for volume.
One bing was equal to 16 hu (斛), each of which was equal to 10 dou (斗). A dou is roughly equivalent to 10 litres in modern units. Therefore, a bing would have been roughly equivalent to 1600 litres.
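The conversion chain above can be checked with a short calculation:

```python
DOU_L = 10              # 1 dou ≈ 10 litres
HU_DOU = 10             # 1 hu = 10 dou
BING_HU = 16            # 1 bing = 16 hu

bing_litres = BING_HU * HU_DOU * DOU_L
print(bing_litres)      # 1600

# the Analects example: 5 bing given to Zihua's mother
print(5 * bing_litres)  # 8000 litres
```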
Usage
The bing unit was used in ancient times to measure volumes of grain.
It is mentioned in book six of the Analects when Ran Qiu requests Confucius, while he is presumably serving in the government of the State of Lu, to give a dole of grain to Zihua's mother. Confucius offers to give only a fu (釜) and a yu (庾), together equaling about 88 litres of grain. Ran Qiu, who seems to feel this is too little, then provides her with 5 bing (8000 litres). Confucius then gives an indirect criticism saying that Zihua was a wealthy man and that it was not virtuous to give charity to the wealthy: "I have heard that a superior man helps the distressed, but does not add to the wealth of the rich."
References
Measurement
Ancient China | Bing (measuring unit) | [
"Physics",
"Mathematics"
] | 243 | [
"Quantity",
"Physical quantities",
"Measurement",
"Size"
] |
69,712,039 | https://en.wikipedia.org/wiki/Terbium%20phosphide | Terbium phosphide is an inorganic compound of terbium and phosphorus with the chemical formula TbP.
Synthesis
TbP can be obtained by the reaction of terbium and red phosphorus at 800–1000 °C:
4 Tb + P4 → 4 TbP
The compound can also be obtained by the reaction of sodium phosphide and anhydrous terbium chloride at 700–800 °C.
Physical properties
TbP undergoes a phase transition at 40 GPa from a NaCl-structure to a CsCl-structure. The compound can be sintered with zinc sulfide to make a green phosphor layer.
TbP forms crystals of a cubic system, space group Fm3m.
Uses
The compound is a semiconductor used in high power, high frequency applications and in laser diodes and other photo diodes.
References
Phosphides
Terbium compounds
Semiconductors
Rock salt crystal structure | Terbium phosphide | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 187 | [
"Electrical resistance and conductance",
"Physical quantities",
"Semiconductors",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Solid state engineering",
"Matter"
] |
69,712,429 | https://en.wikipedia.org/wiki/Gadolinium%20phosphide | Gadolinium phosphide is an inorganic compound of gadolinium and phosphorus with the chemical formula GdP.
Synthesis
Gadolinium phosphide can be obtained by reacting gadolinium and phosphorus at high temperature, and single crystals can be obtained by mineralization.
4 Gd + P4 → 4 GdP
Physical properties
GdP has a NaCl-structure and transforms to a CsCl-structure at 40 GPa.
GdP forms crystals of a cubic system, space group Fm3m.
Gadolinium phosphide is antiferromagnetic.
Uses
The compound is a semiconductor used in high power, high frequency applications and in laser diodes.
References
Phosphides
Gadolinium compounds
Semiconductors
Rock salt crystal structure | Gadolinium phosphide | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 157 | [
"Electrical resistance and conductance",
"Physical quantities",
"Semiconductors",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Solid state engineering",
"Matter"
] |
78,590,623 | https://en.wikipedia.org/wiki/Diradicaloid | Biradicaloids or diradicaloids are molecules with two radical electrons that have significant interaction with each other. The two unpaired electrons are coupled and can either form a singlet ground state (antiferromagnetic coupling) or a triplet ground state (ferromagnetic coupling) (Figure 1).
This is in contrast to "disbiradicals," where the two radical electrons have no significant interaction and act independently as isolated radical species. Diradicals are characterized by their diradical character, commonly quantified using an indicator y. In the limit of fully degenerate frontier molecular orbitals, y approaches a value of 1, representing 100% diradical character. However, diradicaloids have a small gap between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) and thus can be described as having incomplete diradical character, generally corresponding to a y value between 0.20 and 0.80. Diradicals have historically been characterized as transient species describing the transition state of a bond breaking and/or making process, but recently, the introduction of steric strain to prevent bond formation and substitution of carbon atoms with main-group elements have been found to significantly stabilize diradical species, leading to their isolation and structural characterization. However, these modifications decrease diradical character, leading these species to be more properly designated as diradicaloids. Diradicaloids have found applications in small molecule activation, molecular switching, nonlinear optics, and spintronics.
Theoretical description
Electronic structure
Due to the coupling interaction between the radical electrons in a diradical(oid) species, they cannot be simply described as the union of two independent radical centers. Both the open-shell singlet and triplet states must be considered to fully describe the electronic structure of diradical(oid) species.
The triplet state wavefunction Ψ_T can be described as a single electronic configuration with a single Slater determinant. However, when the frontier molecular orbitals are degenerate or nearly degenerate, the lowest-energy singlet state wavefunction must account for multiple electronic configurations (see electronic correlation). Thus, Ψ_S is most accurately represented as a combination of Slater determinants. Here, the configuration interaction (CI) coefficients c_1 and c_2 define the contribution of each determinant to the total wavefunction, where h refers to the HOMO and l refers to the LUMO:

Ψ_S = c_1 |h²⟩ − c_2 |l²⟩,  with c_1² + c_2² = 1

When c_1 = c_2, h and l are degenerate, and the singlet wavefunction describes a perfect diradical. As the HOMO-LUMO gap increases, the wavefunction approaches that of a classical closed-shell species; c_1 approaches 1 and c_2 approaches 0 so that the lowest-energy singlet state is dominated by the doubly occupied HOMO.

To gain a more intuitive understanding of the diradical nature of the wavefunction, the triplet and singlet wavefunctions can be represented using a localized orbital basis, where a and b are the two localized orbitals (Figure 2). Assuming a and b are orthogonal, the overlap integral ⟨a|b⟩ becomes 0. The HOMO can be decomposed into the in-phase overlap of a and b, while the LUMO can be decomposed into the out-of-phase overlap of a and b:

h = (a + b)/√2,  l = (a − b)/√2

Consequently, the singlet wavefunction can be expressed as the combination of a covalent contribution Ψ_cov and an ionic contribution Ψ_ion. The covalent component Ψ_cov represents the electron configuration in which both localized orbitals are singly occupied; this corresponds to diradical character. The ionic component Ψ_ion represents the electron configuration in which one localized orbital is doubly occupied, leaving the other localized orbital empty; this corresponds to zwitterionic character:

Ψ_S = λ_cov Ψ_cov + λ_ion Ψ_ion

where λ_cov = (c_1 + c_2)/√2 and λ_ion = (c_1 − c_2)/√2

When c_1 = c_2 = 1/√2, λ_cov = 1 and λ_ion = 0; thus, this situation describes 100% diradical character. As the HOMO-LUMO gap increases, c_1 approaches 1 and c_2 approaches 0, which results in λ_cov = λ_ion = 1/√2; thus, this situation reduces to the complete delocalization of the electrons over the two-orbital system, which is equivalent to the electron configuration of the closed-shell species.
Indicators of diradical character
The CI coefficients c_1 and c_2 can be used to provide a quantification of diradical character; a common indicator is the diradical character index y = 2c_2².

All of the above indicators effectively describe how much greater the relative weight of the covalent contribution is to the singlet wavefunction compared to the ionic contribution. Thus, the greater the values of these indicators, the greater the diradical character. In the limit of 100% diradical character, these indicators approach a value of 1; in the limit of 100% classical closed-shell character, these indicators approach a value of 0.
Natural orbital (NO) occupation numbers are also another theoretical indicator of diradical character. The occupancy of the lowest unoccupied NO is equal to the indicator y and ranges from 0 to 1; the closer the calculated occupancy is to 1, the greater the predicted diradical character. On the other hand, the occupancy of the highest occupied NO ranges from 1 to 2; the closer the calculated occupancy is to 1, the greater the predicted diradical character. These natural orbital occupancy numbers can be calculated using almost all computational methods and therefore can often be obtained with less computational cost than calculating y using CI methods.
A small singlet-triplet energy gap can also indicate increased diradical character. Lastly, if the calculated A-B distance (where A and B are the two radical centers) is elongated compared to the sum of the covalent radii (the typical A-B distance of a closed-shell molecule) but is shorter than the sum of the van der Waals radii, this may also suggest the presence of a diradicaloid. Incorporating sterically bulky substituents and introducing ring strain in heterocycles can help to prevent bond formation and/or generate elongated bonds.
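The occupation-number indicator described above can be sketched as a short calculation. This assumes the common two-configuration definition in which the diradical character equals the occupancy of the LUMO-derived natural orbital, i.e. y = 2c_2² for normalized CI coefficients:

```python
def y_from_ci(c1, c2):
    """Diradical character y = 2*c2**2 (two-configuration model),
    normalizing the CI coefficients first."""
    norm = (c1**2 + c2**2) ** 0.5
    return 2 * (c2 / norm) ** 2

print(y_from_ci(1.0, 0.0))                       # 0.0: closed-shell limit
print(round(y_from_ci(2**-0.5, 2**-0.5), 12))    # 1.0: degenerate (pure diradical) limit
print(round(y_from_ci(0.95, 0.3124), 2))         # 0.2: weak diradical character
```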
Synthesis
Cyclobutane-1,3-diyl analogues
Cyclobutane-1,3-diyl
Cyclobutane-1,3-diyl is the planar four-membered carbon ring species with radical character localized at the 1 and 3 positions. The singlet cyclobutane-1,3-diyl is predicted to be the transition state for the ring inversion of bicyclobutane, proceeding via homolytic cleavage of the transannular carbon-carbon bond (Figure 3).
A 1,3-dimethyl substituted derivative in the triplet state was detected by electron paramagnetic resonance spectroscopy; the diradical species was generated via irradiation of the precursor diazo compound below 25 K in a solid matrix (Figure 4). However, the all-carbon cyclobutane-1,3-diyl is very short-lived and quickly reacts to form the bicyclobutane isomer.
1,3-diphospha-cyclobutane-2,4-diyl
In 1995, Niecke and coworkers reported the first synthesis of a phosphorus analog of cyclobutane-1,3-diyl, [ClC(μ-PMes*)]2. This species consists of a [P2C2]-four-membered heterocycle with radical character centered on the two carbon atoms. The heterocycle was synthesized from the reaction of aryl(dichloromethylene)phosphene (aryl = Mes*, supermesityl) with n-butyllithium in a 2:1 ratio, followed by elimination of LiCl (Figure 5). X-ray diffraction revealed that the [P2C2] unit exists in the planar four-membered ring form, rather than as the bicyclic isomer. MCSCF calculations predicted a singlet ground state. In addition, the calculated CI wavefunction has contributions from both the doubly occupied HOMO state and the doubly occupied LUMO state; this corresponded to occupation of the HOMO with 1.6 electrons, indicating considerable diradical character. The diphosphacyclobutane heterocycle is thermally stable, and transannular C-C bond formation is thermally forbidden according to the Woodward-Hoffmann rules. Heating at 100 °C in toluene led to the cleavage of the P-C bond, likely generating a ring-opened carbene intermediate that subsequently performed intramolecular C-H activation.
Another synthetic route was developed by Yoshifuji and Ito to access a wider variety of substituents at phosphorus (Figure 7). 2 equivalents of Mes*-substituted phosphaalkyne can be reacted with the lithiated compound of the first substituent on phosphorus, forming the anionic [P2C2] four-membered ring. This intermediate can then be alkylated to attach the second phosphorus substituent. This two-step synthetic pathway allows for the synthesis of unsymmetrically substituted 1,3-diphospha-cyclobutane-2,4-diyls. The substituents on carbon are limited to Mes*, however, due to the limitation of the phosphaalkyne starting material. Most diradicaloids of this type can be handled in air and display high kinetic stability due to the steric protection provided by the Mes* substituents on the carbon radical centers.
1,3-diaza-2,4-dipnicta-cyclobutane-2,4-diyl
These diradical species consist of a [Pn1(μ-NR)2Pn2] heterocyclic core (Pn = pnictogen) where the radical sites are centered on the pnictogen atoms. The presence of a nitrogen atom in the heterocycle is thought to stabilize the planar form relative to the bicyclic isomer. This is believed to result from the inability of Pn-Pn bond formation in the bicyclobutane form to energetically compensate for the increase in Pn-N-Pn angle strain; consequently, the planar form, which allows for larger Pn-N-Pn angles, is more stable. The lack of electron delocalization found in calculations suggests that aromaticity from the presence of 6π electrons does not play a significant role in stabilization of the planar isomers.
In 2011, Schulz and coworkers synthesized the first example of a [P2N2] four-membered ring diradicaloid (here, Pn = phosphorus) with meta-terphenyl and hypersilyl substituents on the nitrogen atoms. The synthetic route begins with the chlorinated P2N2 heterocycle, which is then reduced to the diradicaloid with relatively mild titanium(II) or titanium(III) reducing agents (Figure 8). The bulky terphenyl and hypersilyl groups provide kinetic stabilization, preventing dimerization. The terphenyl-substituted diradicaloid is almost indefinitely stable under argon atmosphere at ambient temperatures as a solid and in solvent. The crystal structure reveals a planar [P2N2] four-membered ring and a long distance between the two phosphorus atoms (2.6186 Å compared to 2.22 Å, the sum of covalent radii), indicating no significant transannular interactions. Computations also support the diradical character of this species and predict a singlet ground state. The calculated CI wavefunction has contributions from both the doubly occupied HOMO state and the doubly occupied LUMO state; this corresponds to occupation of the HOMO with 1.7 electrons, indicating considerable diradical character.
Using a similar synthetic route, the arsenic analogue was also synthesized from the chlorinated precursor; reduction using magnesium metal generated the arsenic centered diradicaloid. The crystal structure confirmed a long As-As distance, and EPR spectroscopy indicated a singlet ground state. A mixed phosphorus-arsenic diradicaloid was also reported in 2015, the first with different radical centers. The crystal structure revealed a kite-shaped planar four membered ring with a transannular As-P distance of 2.790 Å, which is shorter than the sum of van der Waals radii (3.65 Å) but longer than the sum of covalent radii (2.32 Å).
Heavier derivatives (where Pn = antimony and bismuth) were observed in situ but could not be isolated due to rapid decomposition to the allyl analogues in the presence of magnesium; however, the corresponding diradicaloids could be trapped through [2+2] cycloadditions with alkynes, thereby providing evidence for their existence. Calculations suggest that the antimony and bismuth-centered diradicaloids have higher diradical character than the lighter pnictogen analogues due to the singlet-triplet energy gap decreasing with heavier, larger pnictogens.
Other hetero-cyclobutane-1,3-diyls
In 2002, Bertrand and coworkers synthesized the first 1,3-diphospha-2,4-dibora-cyclobutane-2,4-diyl, in which the diradical character is localized on the boron atoms. In 2009, Schnöckel and coworkers reported the synthesis of a heavier aluminum-centered diradical analog. A silicon-centered diradical (1,3-diaza-2,4-disilacyclobutane-2,4-diyl) is also known, synthesized by Sekiguchi and coworkers in 2011. An analog in which the nitrogen atoms are replaced with carbon, as well as an all-silicon cyclobutane-1,3-diyl, have been synthesized. In 2004, Power and coworkers reported the synthesis of a germanium-centered diradical, the heavier analog of Sekiguchi's silicon diradical. The corresponding tin-centered diradicals have also been synthesized by Lappert and coworkers in 2004. In 2017, N-heterocyclic carbene-stabilized phosphorus-centered diradicals were reported; like the Niecke-type diradicaloid, the core heterocycle is a [P2C2] four-membered ring, but the radical centers are located on phosphorus rather than carbon. Lastly, one of the first hetero-cyclobutanediyl derivatives synthesized was N2S2, disulfur dinitride, but its diradical character has been widely discussed in the literature and is still disputed today.
Cyclopentane-1,3-diyl analogues
Cyclopentane-1,3-diyl
Cyclopentane-1,3-diyl is the planar five-membered carbon ring species with radical character localized at the 1 and 3 positions. The triplet diradical was detected by EPR spectroscopy; the diradical species was generated via irradiation of the precursor diazo compound at 5.5 K in a solid matrix (Figure 11). Due to its very short lifetime, all-carbon cyclopentane-1,3-diyl cannot be isolated, but heating cyclopentane-1,3-diyl leads to the formation of a transannular C-C bond, producing the housane isomer. While the triplet state is predicted to be an energy minimum, the singlet state is predicted to be the transition state for housane inversion.
Hetero-cyclopentane-1,3-diyls
Five-membered diradicals with radical character localized on pnictogen atoms can be synthesized via the insertion of carbon monoxide and isonitriles into the corresponding pnictogen-centered cyclobutane-1,3-diyls. In 2015, Schulz and coworkers reported the first stable cyclopentane-1,3-diyl species generated from the ring expansion of terphenyl-substituted diphosphadiazanediyl using carbon monoxide (Figure 12). The computed structural data support an almost planar five-membered ring, and the HOMO/LUMO contributions to the CI wavefunction indicate an occupation of the HOMO with 1.44 electrons, suggesting diradical character. Experimentally, additions of phosphaalkyne and elemental sulfur across the phosphorus atom are consistent with diradicaloid reactivity.
Isonitriles can also insert into the same diphosphadiazanediyls to form the corresponding heterocyclic 5-membered diradicaloids (Figure 13a). The insertion reaction is sensitive to the steric bulk of the substituent on the isonitrile; for example, the terphenyl-substituted isonitrile was unable to undergo the insertion reaction, while the smaller 2,6-dimethylphenyl isonitrile was able to insert into the P-N bond.
Isonitrile insertion was also explored with mixed phosphorus-nitrogen and phosphorus-arsenic centered 4-membered ring diradicaloids. With the latter compound, the isonitrile selectively inserts into the arsenic-nitrogen bond over the phosphorus-nitrogen bond (Figure 13b). The resulting five-membered ring species was characterized via X-ray structural analysis, confirming the above connectivity (Figure 14). Calculations revealed a substantial diradical character (y = 0.24), which agrees with the experimentally observed activation of triple bonds.
Other main group diradicaloids
Diradicaloid 6-membered heterocycles have been reported. In 2020, a cyclic alkylaminocarbene-stabilized 9,10-diboraanthracene was synthesized. EPR spectroscopy and quantum calculations indicated a singlet diradical ground state, and the incorporation of boron atoms was demonstrated to lower the HOMO-LUMO band gap. In 2021, a cyclic germanium-centered diradicaloid with a [C4Ge2] framework was isolated. Calculations indicated a singlet diradical ground state, and the ability of the germanium species to split dihydrogen at room temperature further supported its diradical character.
A 1,2-diborete diradicaloid containing a highly strained [B2C2] framework was reported by Braunschweig and coworkers in 2022. In 2024, the first diborepin diradicals, in which the boron radical sites are disjointed, were synthesized by Gilliard and coworkers.
Reactivity
Diradicaloids, depending on the reaction conditions and extent of diradical character, can display both closed-shell and open-shell reactivity. Closed-shell reactivity (e.g., pericyclic reactions) is best understood using the delocalized molecular orbital picture, while open-shell reactivity (e.g., radical additions) is best understood using the localized atomic orbital picture.
Closed-shell reactivity
For example, the phosphorus-centered diradicaloid [P(μ-NTer)2]2 can undergo concerted pericyclic reactions with single bonds (H2), double bonds (alkenes, aldehydes), and triple bonds (alkynes, nitriles) (Figure 15). Only the cis-addition products are observed, which is consistent with a concerted mechanism.
From a molecular orbital perspective, the formation of new bonds at phosphorus occurs through the interaction of the antibonding HOMO of the diradicaloid with the antibonding LUMO of the reacting partner or the interaction of the bonding LUMO of the diradicaloid with the bonding HOMO of the reacting partner, both of which are symmetry-allowed.
Interestingly, H2 addition is reversible; below 50 °C, H2 addition is observed, and above 60 °C, H2 release occurs to regenerate the original diradicaloid species.
Diradicaloids can also react as nucleophiles or electrophiles from their zwitterionic resonance forms. For example, [P(μ-NTer)2]2 has been shown to react with both Lewis basic N-heterocyclic carbenes as well as Lewis acidic gold(I) chloride (Figure 17).
Open-shell reactivity
For example, the phosphorus-centered diradicaloid [P(μ-NTer)2]2 can undergo stepwise radical addition reactions with alkyl bromides (Figure 18). The trans-addition products were exclusively formed, which is consistent with a stepwise radical abstraction followed by radical recombination mechanism.
Applications
Hetero-cyclopentane-1,3-diyls have been shown to display molecular switching behavior; this property relies on the ability to use external stimuli to switch a molecule between two different stable states, thereby allowing for easy modulation of special reactivity and/or other properties. Diradicaloids can serve as molecular switches if certain external stimuli can reversibly toggle between the planar isomer, which displays diradical character and corresponding reactivity, and the bicyclic housane isomer, which is a closed shell species.
For example, the concept of switchable diradicals was demonstrated using the hetero-cyclopentane-1,3-diyl with phosphorus-phosphorus centered radicals (Figure 19). Upon exposure to red light, the planar five-membered ring diradical isomerizes to the bicyclic housane species. After irradiation, the thermally induced reverse reaction occurs, breaking the transannular bond to regenerate the planar diradicaloid species. Thus, the activation chemistry of the diradical can be switched "off" via irradiation and can be switched back "on" via stopping irradiation. This switching could be repeated several times without degradation of the diradicaloid.
References
Molecular machines
Radicals | Diradicaloid | [
"Physics",
"Chemistry",
"Materials_science",
"Technology"
] | 4,647 | [
"Physical systems",
"Nanotechnology",
"Machines",
"Molecular machines"
] |
78,594,273 | https://en.wikipedia.org/wiki/Mortgage%20button | The "mortgage button" or "amity button" was a small ornamental inlay often featured on newel posts of a main staircase in the 19th and early 20th centuries, particularly in American and European homes. It was used to hide joinery.
The name comes from the historical misconception that they represented a homeowner who had paid off their mortgage. According to tradition, the homeowner would arrange to have a button made of ivory set onto the newel post when the house was paid off. Another version is that a scrimshaw maker would engrave the date the loan was paid off onto a piece of ivory, which was then inserted into the newel post.
One popular myth was that the decorative cap was concealing a deed to the house, or a mortgage document, which had been rolled up and hidden inside the newel post. According to writer Mary Miley Theobald, no such documents have ever been found, although house plans were found inside the newel post on one occasion.
Others have suggested that the ivory button on the newel post was a symbol of cooperation or brotherly love.
References
Stairways
Architectural elements
Stairs | Mortgage button | [
"Technology",
"Engineering"
] | 232 | [
"Building engineering",
"Architectural elements",
"Components",
"Architecture"
] |
78,601,151 | https://en.wikipedia.org/wiki/Kincardine%20floating%20offshore%20wind%20farm | The Kincardine floating offshore wind farm is located off the east coast of Scotland, about south-east of Aberdeen. It has an installed capacity of nearly 50 MW, and when commissioned in 2021 it was the world's largest wind farm with floating turbines.
It is a demonstration project that consists of six turbines, all mounted on semi-submersible platforms. The first turbine, a Vestas V80-2 MW, was installed in 2018. This was joined by five larger V164-9.5 MW turbines in 2021.
The project was developed by Kincardine Offshore Windfarm Ltd. (KOWL), initially comprising Pilot Offshore Renewables and Atkins, however this is now a joint venture between Flotation Energy and the Cobra Group.
Statkraft has a power purchase agreement to buy all of the power generated, estimated at over 200 GWh per year, equivalent to about 50,000 homes. This contract gives a guaranteed minimum price until 2029, reducing the financial risk of the project.
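A back-of-envelope check of the quoted figures (assuming roughly 4 MWh of annual consumption per home, an illustrative value):

```python
capacity_mw = 50.0      # nominal installed capacity
annual_gwh = 200.0      # estimated annual generation

homes = annual_gwh * 1000 / 4                           # MWh/year ÷ MWh per home
capacity_factor = annual_gwh * 1000 / (capacity_mw * 8760)
print(int(homes))                                       # 50000
print(round(capacity_factor, 2))                        # 0.46
```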
Although the project is named Kincardine, it is not located near Kincardine, Fife or Kincardineshire.
Technology
Each of the turbines is mounted on a triangular floating semi-submersible platform, the WindFloat® designed by Principle Power. These have three buoyant columns about high and apart. These are tethered to the seabed by cables, allowing them to be installed in much deeper waters than conventional (fixed) offshore wind turbines.
The wind turbine tower is mounted on one of the buoyant columns, as shown in the CAD rendering on the right.
Power from the wind farm is exported via two subsea cables, each rated at 33 kV, which were supplied by Prysmian Group.
History
Plans for the project were announced in 2014 by Pilot Offshore Renewables and Atkins, with the aim of starting construction in 2016 and generating power by 2018.
In March 2017, the Scottish Government approved plans for the Kincardine demonstration project. It was originally proposed to consist of eight turbines rated at 6 MW.
By November 2017, revised plans were announced to develop the project in two or more phases, in order to meet the Scottish Government deadline of October 2018 for eligibility for 3.5 Renewables Obligation Certificates (ROCs) for floating wind. An initial phase with a 2 MW turbine would be installed in 2018, followed by the remaining 48 MW in 2019 and 2020.
In 2016, the project was anticipated to cost around £250m, but by 2018 the costs had risen to around £500m.
The first turbine was assembled onto its foundation in the Port of Dundee, and on 16 August 2018 was towed out to the wind farm site by the Pacific Duchess and two tugs. The first power was exported on 18 September 2018, just meeting the ROC deadline. This turbine has been operating at the site since October 2018.
The floating foundations for the second phase were constructed in a fabrication yard in Ferrol, Spain. They were then towed to Rotterdam, where the diameter turbines with a tip height of were added. The first of the turbines was towed from Rotterdam to the site in December 2020, by the Boskalis anchor handling vessel Manta. Once on site, the turbines were connected to the pre-installed spread moorings.
By October 2021, all six turbines were commissioned and the project started commercial operation.
In July/August 2024, the generator of one of the turbines was replaced at sea, the first time any major turbine component had ever been replaced at sea. Previously, turbines had to be returned to port for any significant maintenance. A crane supplied by LiftOff was assembled on the turbine nacelle and used to lift the generator and lower it onto the floating foundation, from where it was transferred to an offshore supply vessel. The new generator was installed using the reverse of this process.
References
Floating wind turbines
Offshore wind farms in the North Sea
Renewable energy in Scotland | Kincardine floating offshore wind farm | [
"Engineering"
] | 794 | [
"Floating wind turbines",
"Offshore engineering"
] |
78,602,131 | https://en.wikipedia.org/wiki/Microplastic%20remediation | Microplastic remediation refers to environmental remediation techniques focused on the removal, treatment and containment of microplastics (small plastic particles) from environmental media such as soil, water, or sediment.
Microplastics can be removed using physical, chemical, or biological techniques.
Remediation of microplastics in air
Microplastics are a type of airborne particulate matter and have been found to be prevalent in air.
Remediation of microplastics in water
Microplastics can be removed from water by filtration or absorption. Absorption devices include sponges made of cotton and squid bones.
Biochar filtration has been used in wastewater treatment plants.
Efforts to physically remove microplastics from the Great Pacific Garbage Patch have used nets and collection bags.
Remediation of microplastics in soil
Microplastics are commonly found in soil. Techniques are under development to achieve reductions in soil microplastics via photodegradation, chemical extraction, or bioremediation.
See also
Environmental remediation
Microplastics and human health
Plastic pollution
References
Plastics and the environment
Pollution control technologies
Cleaning and the environment | Microplastic remediation | [
"Chemistry",
"Engineering"
] | 240 | [
"Pollution control technologies",
"Environmental engineering"
] |
71,293,099 | https://en.wikipedia.org/wiki/Daniel%20B.%20Szyld | Daniel B. Szyld (born 1955 in Buenos Aires) is an Argentinian and American mathematician who is a professor at Temple University in Philadelphia. He has made contributions to numerical and applied linear algebra as well as matrix theory.
Education
He was admitted without an undergraduate degree to the graduate school at New York University, where he defended his PhD in 1983. While there, he worked as a research assistant for Wassily Leontief.
International awards and appointments
He was named as a SIAM Fellow and as a fellow of the American Mathematical Society in 2017. In 2020, he was elected president of the International Linear Algebra Society. He was editor-in-chief for the Electronic Transactions on Numerical Analysis from 2005 to 2013 and SIAM Journal on Matrix Analysis and Applications from 2015 to 2020, and is on the editorial boards of several journals, including the Electronic Journal of Linear Algebra (ELA), the Electronic Transactions on Numerical Analysis (ETNA), Linear Algebra and its Applications, Mathematics of Computation, Numerical Linear Algebra with Applications, and Journal of Numerical Analysis and Approximation Theory. A conference in honor of his 65th birthday was held in 2022.
Books and edited proceedings
Selected papers
References
1955 births
Living people
Fellows of the Society for Industrial and Applied Mathematics
Argentine emigrants to the United States
Fellows of the American Mathematical Society
Temple University faculty
20th-century Argentine mathematicians
21st-century Argentine mathematicians
Algebraists
21st-century American mathematicians
Courant Institute of Mathematical Sciences alumni
20th-century American mathematicians
Scientists from Buenos Aires | Daniel B. Szyld | [
"Mathematics"
] | 298 | [
"Algebra",
"Algebraists"
] |
71,295,844 | https://en.wikipedia.org/wiki/Capacitated%20arc%20routing%20problem | In mathematics, the capacitated arc routing problem (CARP) is that of finding the shortest tour with a minimum graph/travel distance of a mixed graph with undirected edges and directed arcs given capacity constraints for objects that move along the graph that represent snow-plowers, street sweeping machines, or winter gritters, or other real-world objects with capacity constraints. The constraint can be imposed for the length of time the vehicle is away from the central depot, or a total distance traveled, or a combination of the two with different weighting factors.
There are many different variations of the CARP described in the book Arc Routing: Problems, Methods, and Applications by Ángel Corberán and Gilbert Laporte.
Solving the CARP involves the study of graph theory, arc routing, operations research, and geographical routing algorithms to find the shortest path efficiently.
The CARP is an NP-hard arc routing problem.
The CARP can be solved with combinatorial optimization including convex hulls.
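As a concrete illustration of the capacity constraint, the following is a minimal Python sketch of a greedy heuristic in the spirit of the classical "path-scanning" approach: each vehicle repeatedly services the nearest required edge that still fits within its remaining capacity, and a new route is opened when nothing else fits. The function names, the data layout, and the toy treatment of the problem are illustrative assumptions, not taken from the source.

```python
import itertools

def floyd_warshall(nodes, edges):
    """All-pairs shortest-path distances on an undirected graph.
    edges: {(u, v): cost} for each undirected edge."""
    INF = float("inf")
    dist = {(u, v): (0 if u == v else INF) for u in nodes for v in nodes}
    for (u, v), c in edges.items():
        dist[u, v] = min(dist[u, v], c)
        dist[v, u] = min(dist[v, u], c)
    for k, i, j in itertools.product(nodes, repeat=3):  # k varies slowest
        if dist[i, k] + dist[k, j] < dist[i, j]:
            dist[i, j] = dist[i, k] + dist[k, j]
    return dist

def greedy_carp(nodes, edges, demands, depot, capacity):
    """Build vehicle routes servicing every required edge without
    exceeding the vehicle capacity.  demands: {(u, v): demand}."""
    dist = floyd_warshall(nodes, edges)
    remaining = dict(demands)          # required edge -> unserviced demand
    routes = []
    while remaining:
        pos, load, route = depot, 0, []
        while True:
            best = None
            for (u, v), q in remaining.items():
                if load + q > capacity:
                    continue           # servicing this edge would overload
                for a, b in ((u, v), (v, u)):
                    cand = (dist[pos, a], (u, v), b)
                    if best is None or cand[0] < best[0]:
                        best = cand
            if best is None:           # nothing else fits: close the route
                break
            _, edge, end = best
            load += remaining.pop(edge)
            route.append(edge)
            pos = end                  # traverse the edge while servicing it
        routes.append(route)
    return routes
```

On a triangle graph with two required edges of unit demand, a vehicle of capacity 2 can service both edges in a single tour, while a vehicle of capacity 1 needs two separate routes from the depot, illustrating how the capacity bound drives the number of tours.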
The large-scale capacitated arc routing problem (LSCARP) is a variant of the capacitated arc routing problem that applies to hundreds of edges and nodes to realistically simulate and model large complex environments.
References
Graph theory | Capacitated arc routing problem | [
"Mathematics"
] | 246 | [
"Discrete mathematics",
"Mathematical relations",
"Graph theory",
"Combinatorics"
] |
71,300,141 | https://en.wikipedia.org/wiki/Inception%20score | The Inception Score (IS) is an algorithm used to assess the quality of images created by a generative image model such as a generative adversarial network (GAN). The score is calculated based on the output of a separate, pretrained Inception v3 image classification model applied to a sample of (typically around 30,000) images generated by the generative model. The Inception Score is maximized when the following conditions are true:
The entropy of the distribution of labels predicted by the Inceptionv3 model for the generated images is minimized. In other words, the classification model confidently predicts a single label for each image. Intuitively, this corresponds to the desideratum of generated images being "sharp" or "distinct".
The predictions of the classification model are evenly distributed across all possible labels. This corresponds to the desideratum that the output of the generative model is "diverse".
It has been somewhat superseded by the related Fréchet inception distance. While the Inception Score only evaluates the distribution of generated images, the FID compares the distribution of generated images with the distribution of a set of real images ("ground truth").
Definition
Let there be two spaces, the space of images $\Omega_X$ and the space of labels $\Omega_Y$. The space of labels is finite.
Let $p_{gen}$ be a probability distribution over $\Omega_X$ that we wish to judge.
Let a discriminator be a function of type $p_{dis}: \Omega_X \to M(\Omega_Y)$, where $M(\Omega_Y)$ is the set of all probability distributions on $\Omega_Y$. For any image $x$, and any label $y$, let $p_{dis}(y|x)$ be the probability that image $x$ has label $y$, according to the discriminator. It is usually implemented as an Inception-v3 network trained on ImageNet.
The Inception Score of $p_{gen}$ relative to $p_{dis}$ is
$$IS(p_{gen}, p_{dis}) := \exp\left(\mathbb{E}_{x \sim p_{gen}}\left[ D_{KL}\left( p_{dis}(\cdot|x) \,\Big\|\, \int p_{dis}(\cdot|x)\, p_{gen}(x)\, dx \right) \right]\right)$$
Equivalent rewrites of this expression exist; $\ln IS(p_{gen}, p_{dis})$ is nonnegative by Jensen's inequality.
Pseudocode:
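A minimal NumPy sketch of this computation (the function name and the use of a precomputed probability matrix in place of live Inception-v3 outputs are assumptions, not from the source):

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """Inception Score from discriminator outputs.

    probs: (n_images, n_labels) array; row i is the predicted label
    distribution p_dis(.|x_i) for generated image x_i.
    """
    probs = np.asarray(probs, dtype=float)
    # Marginal label distribution: empirical estimate of the
    # integral of p_dis(.|x) p_gen(x) dx over the sample.
    marginal = probs.mean(axis=0)
    # Mean KL divergence between each conditional and the marginal.
    kl = np.sum(probs * (np.log(probs + eps) - np.log(marginal + eps)), axis=1)
    return float(np.exp(kl.mean()))  # lies in [1, n_labels]
```

Perfectly sharp predictions spread evenly over two labels give the maximum score of 2, while identical predictions for every image give the minimum score of 1, matching the interpretation below.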
Interpretation
A higher inception score is interpreted as "better", as it means that $p_{gen}$ is a "sharp and distinct" collection of pictures.
$IS(p_{gen}, p_{dis}) \in [1, N]$, where $N$ is the total number of possible labels.
$IS(p_{gen}, p_{dis}) = 1$ iff $p_{dis}(\cdot|x) = \hat p_{dis}$ for almost all $x$, where $\hat p_{dis} := \int p_{dis}(\cdot|x)\, p_{gen}(x)\, dx$. That means $p_{gen}$ is completely "indistinct". That is, for any image $x$ sampled from $p_{gen}$, the discriminator returns exactly the same label predictions $p_{dis}(\cdot|x)$.
The highest inception score $N$ is achieved if and only if the two conditions are both true:
For almost all $x$, the distribution $p_{dis}(y|x)$ is concentrated on one label, i.e. $p_{dis}(y|x) \in \{0, 1\}$ for every $y$. That is, every image sampled from $p_{gen}$ is exactly classified by the discriminator.
For every label $y$, the proportion of generated images labelled as $y$ is exactly $\frac{1}{N}$. That is, the generated images are equally distributed over all labels.
References
Machine learning
Computer graphics | Inception score | [
"Engineering"
] | 518 | [
"Artificial intelligence engineering",
"Machine learning"
] |
75,619,418 | https://en.wikipedia.org/wiki/Hydrocodone/guaifenesin | Hydrocodone/guaifenesin, sold under the brand name Obredon among others, is a fixed-dose combination medication used for the treatment of cough. It contains hydrocodone, as the bitartrate, an opioid agonist; and guaifenesin, an expectorant. It is taken by mouth.
Hydrocodone/guaifenesin was approved for medical use in the United States in 2014.
Adverse effects
In the US, the label for hydrocodone/guaifenesin contains a black box warning about addiction, abuse, and misuse.
References
Combination drugs
Expectorants | Hydrocodone/guaifenesin | [
"Chemistry"
] | 137 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
75,624,739 | https://en.wikipedia.org/wiki/Airway%20tone | Airway tone, short for airway smooth muscle tone, is the degree of sustained contractile activation of airway smooth muscle. The airways have a tone baseline, and consequently a baseline level of contraction of their smooth musculature. Airway tone is a key determinant of lung function and the presence of respiratory symptoms in obstructive lung diseases such as asthma, where baseline airway tone is elevated. The upper extreme of the spectrum of airway tone represents bronchoconstriction, wherein the airway smooth muscles are significantly contracted, while the lower extreme represents bronchodilatation, wherein the muscles are relatively relaxed.
While airway tone is related to respiratory airflow and airway caliber insofar as an increase in airway tone decreases airflow due to the airway smooth muscle contraction, the two are not synonymous as airflow is determined by the structural and functional properties of the airways as well as the lung parenchyma in addition to airway tone.
Airway tone and airway resistance are mostly correlated, but adequate upper airway tone is necessary for airflow and airway patency; insufficient upper airway tone during sleep can, for instance, result in obstructive sleep apnea.
Autonomic nervous system signalling
Autonomic nervous system signalling plays a pivotal role in determining airway tone. The innervation of airway smooth musculature varies between the upper and lower airways.
Upper airway tone
The pharynx is innervated by cranial nerves VII, IX, XII, while both the pharynx and the larynx are innervated by the vagus nerve.
Lower airway tone
Lower airway, bronchial, or bronchus tone is mediated both by the innervation of airway smooth musculature and, possibly, also by the innervation of airway mucosal vasculature. Lower airway smooth muscles are mostly only innervated by the vagus nerve.
Cholinergic signalling
Airway smooth muscle is primarily innervated by cholinergic parasympathetic nerves, while its adrenergic sympathetic innervation is sparse to non-existent. Specifically, cholinergic parasympathetic signalling increases the airway tone, meaning the airway tone is proportional to the vagal tone.
Despite this overall airway tone-increasing effect, the individual effects of muscarinic acetylcholine receptors expressed by airway muscle cells, of which there are 5 subtypes, M1 through M5, are ambivalent. M3 receptors directly lead to airway smooth muscle contraction, i.e., an increase in airway tone, while M2 receptors (also) expressed by airway neurons suppress the further release of acetylcholine in a negative feedback loop, wherein cholinergic parasympathetic signalling reduces further cholinergic parasympathetic signalling, which may explain the unexpectedly low effectivity of certain non-selective muscarinic receptor antagonists such as ipratropium bromide.
M2 receptors are less functional in asthma, disrupting the negative feedback which normally reduces airway tone, which may play a role in asthmatic airway hyperresponsiveness.
Adrenergic signalling
As mentioned, adrenergic sympathetic innervation of airway smooth muscle is likely insignificant; however, the sympathetic innervation of the airway mucosal vasculature is significant. Airway muscular vasculature controls the flow of nutrients to the airways, the temperature of the airways, as well as the clearance of insoluble particles in the airways, which may play an important role in the activity of inhaled bronchodilators, thus affecting airway reactivity and airway tone changes in obstructive lung diseases.
Dopaminergic signalling
There is conflicting evidence regarding dopamine's effect on airway tone in vivo, with some studies reporting bronchoconstriction and others bronchodilatation following dopamine inhalation. In one study, inhaled dopamine attenuated the increase in airway tone caused by cholinergic signalling but exacerbated histaminergic bronchoconstriction, while both signals were attenuated in the same study following the administration of intravenous dopamine. Thus, no conclusion can be drawn at this time.
Acute activation of D2 receptors expressed by airway smooth muscle cells inhibits the adenylyl cyclase, lowering cAMP levels, leading to an increase in airway tone. However, their prolonged activation by quinpirole, a D2 and D3 receptor agonist, paradoxically enhances adenylyl cyclase activity, raising cAMP levels, leading to bronchodilatation via phospholipase C and protein kinase C.
Histaminergic signalling
Histamine is a direct bronchoconstrictor that increases airway tone by activating H1 receptors expressed by airway smooth muscle cells.
Bitter taste receptor signalling
Six type 2 (bitter) taste receptors (TAS2Rs) are expressed by airway smooth muscle cells. In the tongue, bitter taste receptors have probably evolved for avoiding the ingestion of plant toxins. In the lungs, bitter taste receptors serve a paradoxically reversed function, causing the relaxation of airway smooth muscle, i.e., a lowering of airway tone. Thus, bitter taste receptor agonists represent promising potential novel bronchodilators.
Phosphodiesterase inhibition
Theophylline's non-selective phosphodiesterase inhibition has been proposed as the mechanism behind its bronchodilatating action. Phosphodiesterases degrade intracellular cAMP, which leads to muscle contraction. Inhibiting phosphodiesterases increases cAMP concentrations in airway smooth muscle cells, lowering airway tone. Adenosine receptor agonism probably does not play a major role in theophylline-induced lowering of airway tone, as inhalation of adenosine actually increases airway tone, though it is probably the cause of theophylline's arrhythmogenicity.
Cysteinyl leukotriene signalling
Like histamine, some cysteinyl leukotrienes, such as leukotriene D4, are direct bronchoconstrictors and increase airway tone by binding to receptors on airway smooth muscle cells. Bronchoconstrictive leukotrienes act via a common cys-LT1 receptor.
Thromboxane signalling
Thromboxane is a direct bronchoconstrictor that acts via the thromboxane receptor on airway smooth muscle cells.
References
Muscular system
Respiratory system | Airway tone | [
"Biology"
] | 1,391 | [
"Organ systems",
"Respiratory system"
] |
75,625,936 | https://en.wikipedia.org/wiki/Memantine/donepezil | Memantine/donepezil, sold under the brand name Namzaric among others, is a fixed dose combination medication used for the treatment of dementia of the Alzheimer's type. It contains memantine, as the hydrochloride, a NMDA receptor antagonist; and donepezil as the hydrochloride, an acetylcholinesterase inhibitor. It is taken by mouth.
Memantine/donepezil was approved for medical use in the United States in 2014.
References
Further reading
Acetylcholinesterase inhibitors
Antidementia agents
Combination drugs
NMDA receptor antagonists
Treatment of Alzheimer's disease | Memantine/donepezil | [
"Chemistry"
] | 135 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
75,627,032 | https://en.wikipedia.org/wiki/Slavik%20Vlado%20Jablan | Slavik Vlado Jablan (; 10June 1952 – 26February 2015) was a Serbian mathematician and crystallographer. Jablan is known for his contributions to antisymmetry, knot theory, the theory of symmetry and ornament, and ethnomathematics.
Career
Jablan was born on 10 June 1952 in Sarajevo. Jablan graduated in mathematics from the University of Belgrade (1977), where he also gained his M.A. degree (1981) and Ph.D. degree (1984) with the dissertation Theory of Simple and Multiple Antisymmetry in E2 and E2\{O}. He was a Fulbright scholar in 2003/4. Jablan was a professor of geometry at the University of Niš until 1999; subsequently he was a researcher at the Mathematical Institute of the Serbian Academy of Sciences and Arts.
Jablan established the online journal VisMath in 2005 and was its editor from its inception until 2014. He joined the editorial board of the journal Symmetry in 2009 and was editor-in-chief from 2012 until 2015. After his death the journal printed a 14-page obituary. Journal of Knot Theory and Its Ramifications printed a special issue in his memory in 2016.
Works
Books published by Jablan:
Theory of symmetry and ornament (1995)
Symmetry, ornament and modularity (2002)
LinKnot: knot theory by computer (2007)
Jablan published 65 academic papers. Selected papers available in English:
Antisymmetry and coloured symmetry:
Groups of conformal antisymmetry and complex antisymmetry In E2\{0} (1985)
A new method of generating plane groups of simple and multiple antisymmetry (1986)
Enantiomorphism of antisymmetric figures (1986)
Colored antisymmetry (1992)
Farbgruppen and their place in the history of colored symmetry (2007)
Knot theory:
Nonplanar graphs derived from Gauss codes of virtual knots and links (2011)
Knots in art (2012)
Delta diagrams (2016)
Ornament and ethnomathematics:
Antisymmetry and modularity in ornamental art (2001)
Elementary constructions of Persian mosaics (2006)
Knots and links in architecture (2012)
References
1952 births
2015 deaths
Serbian mathematicians
Crystallographers | Slavik Vlado Jablan | [
"Chemistry",
"Materials_science"
] | 488 | [
"Crystallographers",
"Crystallography"
] |
77,239,550 | https://en.wikipedia.org/wiki/Thokozani%20Majozi | Thokozani Majozi (born 3 October 1972) is a South African chemical engineer. He has been the Dean of Engineering and the Built Environment at the University of the Witwatersrand since 2021. He holds the South African Research Chair in Sustainable Process Engineering at the same university. His research focuses on chemical process engineering, particularly batch chemical process integration.
Majozi joined the University of the Witwatersrand's School of Chemical and Metallurgical Engineering as a professor in 2013. Before that he worked at the University of Pretoria from 2004 to 2013 and at the University of Pannonia from 2005 to 2009.
Early life and education
Majozi was born in 1972 in KwaMashu in present-day KwaZulu-Natal. His mother was a teacher and his father was a post office clerk. He attended Mqhawe High School in nearby Inanda, and he matriculated as the top achiever in the province. Although he had initially planned to become a medical doctor, he received a bursary from Anglo American to study engineering. He completed a BScEng in chemical engineering at the University of Natal in 1994.
In 1994, as dictated by his bursary obligations, he began his professional career as a junior process engineer at Unilever. Thereafter he joined Dow AgroSciences as a senior process engineer in 1996, specialising in competency improvement. While at Dow he met Professor Chris Buckley of the University of Natal's Pollution Research Group, who suggested that Majozi should return to the university for postgraduate study; under Buckley's supervision, he completed an MScEng in 1998.
In 1999, Majozi moved to Manchester, England to study at the University of Manchester Institute of Science and Technology on a Commonwealth Scholarship. He completed his PhD in process integration in 2002. Later the same year he joined Sasol Technology as technical leader for optimisation and integration; he worked there until he joined academia in 2004.
Academic career
In 2004, Majozi was appointed as an associate professor of chemical engineering at the University of Pretoria. His research was initially supported by Water Research Commission funds given to Buckley, who transferred them to Majozi. He was tenured as a full professor at the University of Pretoria in 2008, and in parallel, from 2005 to 2009, he was an associate professor of computer science at the University of Pannonia in Veszprém, Hungary. Later, from 2009 to 2012, he was the vice-president of the Engineering Council of South Africa.
After nine years at the University of Pretoria, Majozi moved to the University of the Witwatersrand (Wits) in 2013, becoming a professor in the Wits School of Chemical and Metallurgical Engineering. There he took up the South African Research Chair in Sustainable Process Engineering, with the joint sponsorship of the Department of Science and Technology and the National Research Foundation. In addition, he was appointed as board chairperson of the Council for Scientific and Industrial Research in 2015.
On 23 September 2021, the Wits Council approved Majozi's appointment to a five-year term as Dean of the Faculty of Engineering and the Built Environment. He succeeded Professor Ian Jandrell in that position.
Scholarship and research
Majozi's main research interest is batch chemical process integration. He has focused in particular on the minimisation of industrial wastewater in batch processing; he was the first person to apply water minimisation techniques in batch plants. The National Research Foundation rated him as a B1-level researcher.
Honours and awards
Majozi has received three National Science and Technology Forum awards. He also received the University of Pretoria's Leading Minds Centenary Award in 2008, the S2A3 British Association Silver Medal in 2008, the National Research Foundation's President Award in 2007 and 2009, and the South African Institution of Chemical Engineers Bill Neal-May Gold Award in 2010. In 2021 the Water Research Commission gave him a Water Research Legends Award.
On 25 April 2019, South African President Cyril Ramaphosa admitted him to the Order of Mapungubwe. He received the award in Bronze for: "His outstanding contribution to science, particularly the development of a novel mathematical technique for near-zero-effluent batch chemical facilities which enables the reuse of wastewater; as a young scientist, more trailblazing is expected of him in the years ahead." Majozi is also a member of the Academy of Science of South Africa, and he is a fellow of the African Academy of Sciences, the South African Academy of Engineering, the Water Institute of Southern Africa, and the Institution of Chemical Engineers. He is an alumnus of the Global Young Academy.
References
External links
Thokozani Majozi at Loop
Thokozani Majozi at University of the Witwatersrand
Living people
1972 births
21st-century South African engineers
Academic staff of the University of Pannonia
Academic staff of the University of Pretoria
Academic staff of the University of the Witwatersrand
Alumni of the University of Manchester Institute of Science and Technology
Environmental engineers
Fellows of the African Academy of Sciences
Members of the Academy of Science of South Africa
People from KwaMashu
South African chemical engineers
University of Natal alumni | Thokozani Majozi | [
"Chemistry",
"Engineering"
] | 1,053 | [
"Environmental engineers",
"Environmental engineering"
] |
77,241,904 | https://en.wikipedia.org/wiki/TESCREAL | TESCREAL is an acronym neologism proposed by computer scientist Timnit Gebru and philosopher Émile P. Torres that stands for "transhumanism, Extropianism, singularitarianism, (modern) cosmism, Rationalism, Effective Altruism, and longtermism". Gebru and Torres argue that these ideologies should be treated as an "interconnected and overlapping" group with shared origins. They say this is a movement that allows its proponents to use the threat of human extinction to justify expensive or detrimental projects and consider it pervasive in social and academic circles in Silicon Valley centered around artificial intelligence. As such, the acronym is sometimes used to criticize a perceived belief system associated with Big Tech.
Origin
Gebru and Torres coined "TESCREAL" in 2023, first using it in a draft of a paper titled "The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence". First Monday published the paper in April 2024, though Torres and Gebru popularized the term elsewhere before the paper's publication. According to Gebru and Torres, transhumanism, extropianism, singularitarianism, (modern) cosmism, Rationalism, effective altruism, and longtermism are a "bundle" of "interconnected and overlapping ideologies" that emerged from 20th-century eugenics, with shared progenitors. They use the term "TESCREAList" to refer to people who subscribe to, or appear to endorse, most or all of the ideologies captured in the acronym.
Analysis
According to critics of these philosophies, TESCREAL describes overlapping movements endorsed by prominent people in the tech industry to provide intellectual backing to pursue and prioritize projects including artificial general intelligence (AGI), life extension, and space colonization. Science fiction author Charles Stross, using the example of space colonization, argued that the ideologies allow billionaires to pursue massive personal projects driven by a right-wing interpretation of science fiction by arguing that not to pursue such projects poses an existential risk to society. Gebru and Torres write that, using the threat of extinction, TESCREALists can justify "attempts to build unscoped systems which are inherently unsafe". Media scholar Ethan Zuckerman argues that by only considering goals that are valuable to the TESCREAL movement, futuristic projects with more immediate drawbacks, such as racial inequity, algorithmic bias, and environmental degradation, can be justified. Speaking at Radio New Zealand, politics writer Danyl McLauchlan said that many of these philosophies may have started off with good intentions but might have been pushed "to a point of ridiculousness."
Philosopher Yogi Hale Hendlin has argued that by both ignoring the human causes of societal problems and over-engineering solutions, TESCREALists ignore the context in which many problems arise. Camille Sojit Pejcha wrote in Document Journal that TESCREAL is a tool for tech elites to concentrate power. In The Washington Spectator, Dave Troy called TESCREAL an "ends justifies the means" movement that is antithetical to "democratic, inclusive, fair, patient, and just governance". Gil Duran wrote that "TESCREAL", "authoritarian technocracy", and "techno-optimism" were phrases used in early 2024 to describe a new ideology emerging in the tech industry.
Gebru, Torres, and others have likened TESCREAL to a secular religion due to its parallels to Christian theology and eschatology. Writers in Current Affairs compared these philosophies and the ensuing techno-optimism to "any other monomaniacal faith... in which doubters are seen as enemies and beliefs are accepted without evidence". They argue pursuing TESCREAL would prevent an actual equitable shared future.
Artificial General Intelligence (AGI)
Much of the discourse about existential risk from AGI occurs among supporters of the TESCREAL ideologies. TESCREALists are either considered "AI accelerationists", who consider AI the only way to pursue a utopian future where problems are solved, or "AI doomers", who consider AI likely to be unaligned to human survival and likely to cause human extinction. Despite the risk, many doomers consider the development of AGI inevitable and argue that only by developing and aligning AGI first can existential risk be averted.
Gebru has likened the conflict between accelerationists and doomers to a "secular religion selling AGI enabled utopia and apocalypse". Torres and Gebru argue that both groups use hypothetical AI-driven apocalypses and utopian futures to justify unlimited research, development, and deregulation of technology. By considering only far-reaching future consequences, creating hype for unproven technology, and fear-mongering, Torres and Gebru allege TESCREALists distract from the impacts of technology that may adversely affect society, disproportionately harm minorities through algorithmic bias, and have a detrimental impact on the environment.
Pharmaceuticals
Neşe Devenot has used the TESCREAL acronym to refer to "global financial and tech elites" who promote new uses of psychedelic drugs as mental health treatments, not because they want to help people, but so that they can make money on the sale of these pharmaceuticals as part of a plan to increase inequality.
Claimed bias against minorities
Gebru and Torres claim that TESCREAL ideologies directly originate from 20th-century eugenics and that the bundle of ideologies advocates a second wave of new eugenics. Others have similarly argued that the TESCREAL ideologies developed from earlier philosophies that were used to justify mass murder and genocide. Some prominent figures who have contributed to TESCREAL ideologies have been alleged to be racist and sexist. McLauchlan has said that, while "some people in these groups want to genetically engineer superintelligent humans, or replace the entire species with a superior form of intelligence" others "like the effective altruists, for example, most of them are just in it to help very poor people ... they are kind of shocked ... that they've been lumped into this malevolent ... eugenics conspiracy".
Criticism and debate
Writing in Asterisk, a magazine related to effective altruism, Ozy Brennan criticized Gebru's and Torres's grouping of different philosophies as if they were a "monolithic" movement. Brennan argues Torres has misunderstood these different philosophies, and has taken philosophical thought experiments out of context. James Pethokoukis, of the American Enterprise Institute, disagrees with criticizing proponents of TESCREAL. He argues that the tech billionaires criticized in a Scientific American article for allegedly espousing TESCREAL have significantly advanced society. McLauchlan has noted that critics of the TESCREAL bundle have objected to what they see as disparate and sometimes conflicting ideologies being grouped together, but opines that TESCREAL is a good way to describe and consolidate many of the "grand bizarre ideologies in Silicon Valley". Eli Sennesh and James Hughes, publishing in the blog for the technoprogressive Institute for Ethics and Emerging Technologies, have argued that TESCREAL is a left-wing conspiracy theory that unnecessarily groups disparate philosophies together without understanding the mutually exclusive tenets in each.
According to Torres, "If advanced technologies continue to be developed at the current rate, a global-scale catastrophe is almost certainly a matter of when rather than if." Torres believes that "perhaps the only way to actually attain a state of 'existential security' is to slow down or completely halt further technological innovation", and criticized the longtermist view that technology, although dangerous, is essential for human civilization to achieve its full potential. Brennan contends that Torres's proposal to slow or halt technological development represents a more extreme position than TESCREAL ideologies, preventing many improvements in quality of life, healthcare, and poverty reduction that technological progress enables.
Alleged adherents
Venture capitalist Marc Andreessen has self-identified as a TESCREAList. He published the "Techno-Optimist Manifesto" in October 2023, which Jag Bhalla and Nathan J. Robinson have called a "perfect example" of the TESCREAL ideologies. In the document, he argues that more advanced artificial intelligence could save countless future potential lives, and that those working to slow or prevent its development should be condemned as murderers.
Elon Musk has been described as sympathetic to some TESCREAL ideologies. In August 2022, Musk tweeted that William MacAskill's longtermist book What We Owe the Future was a "close match for my philosophy". Some writers believe Musk's Neuralink pursues TESCREAList goals. Some AI experts have complained about the focus of Musk's xAI company on existential risk, arguing that it and other AI companies have ties to TESCREAL movements. Dave Troy believes Musk's natalist views originate from TESCREAL ideals.
It has also been suggested that Peter Thiel is sympathetic to TESCREAL ideas. Benjamin Svetkey wrote in The Hollywood Reporter that Thiel and other Silicon Valley CEOs who support the Donald Trump 2024 presidential campaign are pushing for policies that would shut down "regulators whose outdated restrictions on things like human experimentation are slowing down progress toward a technotopian paradise".
Sam Altman and much of the OpenAI board has been described as supporting TESCREAL movements, especially in the context of his attempted firing in 2023. Gebru and Torres have urged Altman not to pursue TESCREAL ideals. Lorraine Redaud writing in Charlie Hebdo described Sam Altman and multiple other Silicon Valley executives as supporting TESCREAL ideals.
Self-identified transhumanists Nick Bostrom and Eliezer Yudkowsky, both influential in discussions of existential risk from AI, have also been described as leaders of the TESCREAL movement. Redaud said Bostrom supported some ideals "in line with the TESCREALists movement".
Sam Bankman-Fried, former CEO of the FTX cryptocurrency exchange, was a prominent and self-identified member of the effective altruist community. According to The Guardian, since FTX's collapse, administrators of the bankruptcy estate have been trying to recoup about $5 million that they allege was transferred to a nonprofit to help secure the purchase of a historic hotel that has been repurposed for conferences and workshops associated with longtermism, Rationalism, and effective altruism. The property hosted liberal eugenicists and other speakers the Guardian said had racist and misogynistic histories.
Longtermist and effective altruist William MacAskill, who frequently collaborated with Bankman-Fried to coordinate philanthropic initiatives, has been described as a TESCREAList.
See also
Effective accelerationism
LessWrong
Utilitarianism
The Californian Ideology
References
2023 neologisms
Acronyms
Effective altruism
Ethical theories
Ethics of science and technology
Eugenics
Existential risk from artificial general intelligence
Extropianism
Futures studies
Ideologies
Philosophy of artificial intelligence
Philosophy of technology
Rationalism
Singularitarianism
Subcultures
Transhumanism
Political neologisms
Natalism | TESCREAL | [
"Technology",
"Engineering",
"Biology"
] | 2,358 | [
"Effective altruism",
"Behavior",
"Existential risk from artificial general intelligence",
"Philosophy of technology",
"Altruism",
"Science and technology studies",
"Genetic engineering",
"Transhumanism",
"Ethics of science and technology"
] |
72,808,068 | https://en.wikipedia.org/wiki/Parameterized%20approximation%20algorithm | A parameterized approximation algorithm is a type of algorithm that aims to find approximate solutions to NP-hard optimization problems in polynomial time in the input size and a function of a specific parameter. These algorithms are designed to combine the best aspects of both traditional approximation algorithms and fixed-parameter tractability.
In traditional approximation algorithms, the goal is to find solutions that are at most a certain factor α away from the optimal solution, known as an α-approximation, in polynomial time. On the other hand, parameterized algorithms are designed to find exact solutions to problems, but with the constraint that the running time of the algorithm is polynomial in the input size n and a function of a specific parameter k. The parameter describes some property of the input and is small in typical applications. The problem is said to be fixed-parameter tractable (FPT) if there is an algorithm that can find the optimum solution in f(k)·n^O(1) time, where f is a function independent of the input size n.
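To make the FPT notion concrete, the textbook bounded-search-tree algorithm for Vertex Cover decides whether a graph has a vertex cover of size at most k in O(2^k · m) time — here f(k) = 2^k times a polynomial in the input size. A minimal sketch (the edge-list representation and function name are ours, for illustration only):

```python
def vertex_cover_at_most_k(edges, k):
    """Decide whether the graph has a vertex cover of size <= k.

    Classic bounded-search-tree FPT algorithm: pick any uncovered
    edge (u, v); some endpoint must be in the cover, so branch on
    both choices.  The search tree has depth <= k and branching
    factor 2, giving f(k) * poly(n) time with f(k) = 2^k.
    """
    if not edges:
        return True          # nothing left to cover
    if k == 0:
        return False         # edges remain but the budget is spent
    u, v = edges[0]
    # Branch 1: put u in the cover; drop every edge touching u.
    rest_u = [e for e in edges if u not in e]
    # Branch 2: put v in the cover; drop every edge touching v.
    rest_v = [e for e in edges if v not in e]
    return (vertex_cover_at_most_k(rest_u, k - 1)
            or vertex_cover_at_most_k(rest_v, k - 1))
```

On a triangle no single vertex covers all three edges, but any two vertices do.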
A parameterized approximation algorithm aims to find a balance between these two approaches by finding approximate solutions in FPT time: the algorithm computes an α-approximation in f(k)·n^O(1) time, where f is a function independent of the input size n. This approach aims to overcome the limitations of both traditional approaches by having stronger guarantees on the solution quality compared to traditional approximations while still having efficient running times as in FPT algorithms. An overview of the research area studying parameterized approximation algorithms can be found in the survey of Marx and the more recent survey by Feldmann et al.
Obtainable approximation ratios
The full potential of parameterized approximation algorithms is utilized when a given optimization problem is shown to admit an α-approximation algorithm running in f(k)·n^O(1) time, while in contrast the problem neither has a polynomial-time α-approximation algorithm (under some complexity assumption, e.g., P ≠ NP), nor an FPT algorithm for the given parameter k (i.e., it is at least W[1]-hard).
For example, some problems that are APX-hard and W[1]-hard admit a parameterized approximation scheme (PAS), i.e., for any ε > 0 a (1+ε)-approximation can be computed in f(k,ε)·n^g(ε) time for some functions f and g. This then circumvents the lower bounds in terms of polynomial-time approximation and fixed-parameter tractability. A PAS is similar in spirit to a polynomial-time approximation scheme (PTAS) but additionally exploits a given parameter k. Since the degree of the polynomial in the runtime of a PAS depends on a function g(ε), the value of ε is assumed to be arbitrary but constant in order for the PAS to run in FPT time. If this assumption is unsatisfying, ε is treated as a parameter as well to obtain an efficient parameterized approximation scheme (EPAS), which for any ε > 0 computes a (1+ε)-approximation in f(k,ε)·n^O(1) time for some function f. This is similar in spirit to an efficient polynomial-time approximation scheme (EPTAS).
k-cut
The k-cut problem has no polynomial-time (2−ε)-approximation algorithm for any ε > 0, assuming P ≠ NP and the small set expansion hypothesis. It is also W[1]-hard parameterized by the number k of required components. However an EPAS exists, which computes a (1+ε)-approximation in f(k,ε)·n^O(1) time.
Steiner Tree
The Steiner Tree problem is FPT parameterized by the number k of terminals. However, for the "dual" parameter consisting of the number p of non-terminals contained in the optimum solution, the problem is W[2]-hard (due to a folklore reduction from the Dominating Set problem). Steiner Tree is also known to be APX-hard. However, there is an EPAS computing a (1+ε)-approximation in f(p,ε)·n^O(1) time. The more general Steiner Forest problem is NP-hard on graphs of treewidth 3. However, on graphs of treewidth tw an EPAS can compute a (1+ε)-approximation in f(tw,ε)·n^O(1) time.
Strongly-connected Steiner subgraph
It is known that the Strongly Connected Steiner Subgraph problem is W[1]-hard parameterized by the number k of terminals, and also does not admit an O(log^(2−ε) n)-approximation in polynomial time (under standard complexity assumptions). However a 2-approximation can be computed in f(k)·n^O(1) time. Furthermore, this is best possible, since no (2−ε)-approximation can be computed in f(k)·n^O(1) time for any function f, under Gap-ETH.
k-median and k-means
For the well-studied metric clustering problems of k-median and k-means parameterized by the number k of centers, it is known that no (1 + 2/e − ε)-approximation for k-Median and no (1 + 8/e − ε)-approximation for k-Means can be computed in f(k)·n^O(1) time for any function f, under Gap-ETH. Matching parameterized approximation algorithms exist, but it is not known whether matching approximations can be computed in polynomial time.
Clustering is often considered in settings of low dimensional data, and thus a practically relevant parameterization is by the dimension of the underlying metric. In Euclidean space, the k-Median and k-Means problems admit an EPAS parameterized by the dimension d, and also an EPAS parameterized by k. The former was generalized to an EPAS for the parameterization by the doubling dimension. For the loosely related highway dimension parameter, only an approximation scheme with XP runtime is known to date.
k-center
For the metric k-center problem a 2-approximation can be computed in polynomial time. However, when parameterizing by either the number k of centers, the doubling dimension (in fact the dimension of a Manhattan metric), or the highway dimension, no parameterized (2−ε)-approximation algorithm exists, under standard complexity assumptions. Furthermore, the k-Center problem is W[1]-hard even on planar graphs when simultaneously parameterizing it by the number k of centers, the doubling dimension, the highway dimension, and the pathwidth. However, when combining k with the doubling dimension an EPAS exists, and the same is true when combining k with the highway dimension. For the more general version with vertex capacities, an EPAS exists for the parameterization by k and the doubling dimension, but not when using k and the highway dimension as the parameter. Regarding the pathwidth, k-Center admits an EPAS even for the more general treewidth parameter, and also for cliquewidth.
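The polynomial-time 2-approximation for metric k-center is classically obtained by farthest-first traversal (Gonzalez's algorithm). A sketch for points in the plane; the helper names are ours:

```python
import math

def gonzalez_k_center(points, k):
    """Farthest-first traversal: repeatedly add the point that is
    farthest from the centers chosen so far.  For any metric this
    yields a covering radius at most twice the optimum."""
    centers = [points[0]]                        # arbitrary first center
    while len(centers) < k:
        farthest = max(points,
                       key=lambda p: min(math.dist(p, c) for c in centers))
        centers.append(farthest)
    return centers

def covering_radius(points, centers):
    """Largest distance from any point to its nearest center."""
    return max(min(math.dist(p, c) for c in centers) for p in points)
```

For two well-separated pairs of points, two centers already achieve the optimal radius here, well within the factor-2 guarantee.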
Densest subgraph
An optimization variant of the k-Clique problem is the Densest k-Subgraph problem (which is a 2-ary Constraint Satisfaction problem), where the task is to find a subgraph on k vertices with a maximum number of edges. It is not hard to obtain a k-approximation by just picking a matching of size k/2 in the given input graph, since the maximum number of edges on k vertices is always at most k(k-1)/2. This is also asymptotically optimal, since under Gap-ETH no k^o(1)-approximation can be computed in FPT time parameterized by k.
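The trivial matching-based bound can be sketched as follows — a toy illustration with our own naming, which assumes the input graph actually contains k/2 disjoint edges:

```python
def dense_k_subgraph_via_matching(edges, k):
    """Pick floor(k/2) disjoint edges greedily and return their endpoints.

    Sketch of the trivial argument above: the chosen k vertices span at
    least k/2 edges, while any k vertices span at most k(k-1)/2 edges,
    so this is an O(k)-factor approximation whenever the graph contains
    a matching of size k/2 (an assumption of this sketch).
    """
    chosen, used = [], set()
    for u, v in edges:                      # greedy disjoint-edge selection
        if len(chosen) == k // 2:
            break
        if u not in used and v not in used:
            chosen.append((u, v))
            used.update((u, v))
    return sorted(used)
```

On the path 1-2-3-4-5 with k = 4, the greedy pass picks the disjoint edges (1, 2) and (3, 4).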
Dominating set
For the Dominating Set problem it is W[1]-hard to compute any g(k)-approximation in f(k)·n^O(1) time for any functions f and g.
Approximate kernelization
Kernelization is a technique used in fixed-parameter tractability to pre-process an instance of an NP-hard problem in order to remove "easy parts" and reveal the NP-hard core of the instance. A kernelization algorithm takes an instance x and a parameter k, and returns a new instance x′ with parameter k′ such that the size of x′ and k′ is bounded as a function of the input parameter k, and the algorithm runs in polynomial time. An α-approximate kernelization algorithm is a variation of this technique that is used in parameterized approximation algorithms. It returns a kernel x′ such that any β-approximation in x′ can be converted into an α·β-approximation to the input instance in polynomial time. This notion was introduced by Lokshtanov et al., but there are other related notions in the literature, such as Turing kernels and α-fidelity kernelization.
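To illustrate the general shape of a kernelization algorithm, here is a sketch of the classic (exact, non-approximate) Buss kernel for Vertex Cover: a vertex of degree greater than k must be in every solution of size at most k, and once no such vertex remains, more than k² surviving edges rule out any solution. The representation and names are ours:

```python
def buss_kernel(edges, k):
    """Classic Buss kernelization for Vertex Cover.

    Returns (kernel_edges, k') with at most k'^2 kernel edges, or the
    string 'NO' when the instance has no vertex cover of size <= k.
    """
    edges = {frozenset(e) for e in edges}
    while True:
        # Degree of every vertex in the current graph.
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        # Reduction rule: a vertex of degree > k is in every small cover.
        high = next((v for v, d in deg.items() if d > k), None)
        if high is None:
            break
        edges = {e for e in edges if high not in e}
        k -= 1
        if k < 0:
            return "NO"
    # Max degree is now <= k, so a size-k cover covers at most k*k edges.
    if len(edges) > k * k:
        return "NO"
    return sorted(tuple(sorted(e)) for e in edges), k
```

A star with five leaves and budget k = 1 reduces to the empty kernel (the center is forced into the cover), while a triangle with k = 1 is correctly rejected.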
As for regular (non-approximate) kernels, a problem admits an α-approximate kernelization algorithm if and only if it has a parameterized α-approximation algorithm. The proof of this fact is very similar to the one for regular kernels. However the guaranteed approximate kernel might be of exponential size (or worse) in the input parameter. Hence it becomes interesting to find problems that admit polynomial sized approximate kernels. Furthermore, a polynomial-sized approximate kernelization scheme (PSAKS) is an α-approximate kernelization algorithm that computes a polynomial-sized kernel and for which α can be set to 1 + ε for any ε > 0.
For example, while the Connected Vertex Cover problem is FPT parameterized by the solution size, it does not admit a (regular) polynomial sized kernel (unless NP ⊆ coNP/poly), but a PSAKS exists. Similarly, the Steiner Tree problem is FPT parameterized by the number of terminals, does not admit a polynomial sized kernel (unless NP ⊆ coNP/poly), but a PSAKS exists. When parameterizing Steiner Tree by the number of non-terminals in the optimum solution, the problem is W[2]-hard (and thus admits no exact kernel at all, unless FPT=W[2]), but still admits a PSAKS.
Talks on parameterized approximations
Daniel Lokshtanov: A Parameterized Approximation Scheme for k-Min Cut
Tuukka Korhonen: Single-Exponential Time 2-Approximation Algorithm for Treewidth
Karthik C. S.: Recent Hardness of Approximation results in Parameterized Complexity
Ariel Kulik. Two-variable Recurrence Relations with Application to Parameterized Approximations
Meirav Zehavi. FPT Approximation
Vincent Cohen-Addad: On the Parameterized Complexity of Various Clustering Problems
Fahad Panolan. Parameterized Approximation for Independent Set of Rectangles
Andreas Emil Feldmann. Approximate Kernelization Schemes for Steiner Networks
References
Algorithms
Approximation algorithms
Parameterized complexity | Parameterized approximation algorithm | [
"Mathematics"
] | 1,952 | [
"Applied mathematics",
"Algorithms",
"Mathematical logic",
"Approximation algorithms",
"Mathematical relations",
"Approximations"
] |
72,816,186 | https://en.wikipedia.org/wiki/Los%20Angeles%20abrasion%20test | The Los Angeles abrasion test (LA abrasion) is the North American standard for testing toughness (resistance to abrasion and degradation) of construction aggregate or gravel and its suitability for road construction. Test methodology and equipment is defined in the ASTM International publications ASTM C131 for particle sizes smaller than 37 mm (1.5 inches) and ASTM C535 for sizes larger than 19 mm (3/4 of an inch); the overlapping range of 19 to 37 mm can be tested by either of two standards.
The Los Angeles machine defined in the standard is a simple ball mill of specified size and shape. The standard charge of rock depends on the size of the particles. The drum of the mill has a single shelf plate that scoops test samples and steel balls from the bottom, lifts them up and then drops them, creating a crushing impact. The interaction of the drum, steel balls and the samples at the bottom of the drum causes further abrading and grinding. The complete test requires 500 drum revolutions at a speed of 30-33 revolutions per minute. The crushed sample is then separated from fine dust on a sieve, washed, dried and weighed. The test reports the loss of mass to abrasion and impact, expressed as a percentage of the initial sample mass. The maximum acceptable loss for the base course of a road is 45%; the more demanding surface course must be 35% or less.
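The reported figure is a simple mass-loss percentage; a sketch of the calculation together with the acceptance limits quoted above (the function names are ours):

```python
def la_abrasion_loss(initial_mass, final_mass):
    """Loss of mass to abrasion and impact, expressed as a percentage
    of the initial sample mass (masses in any consistent unit)."""
    return 100.0 * (initial_mass - final_mass) / initial_mass

def acceptable_for(course, loss_percent):
    """Acceptance limits from the text: at most 45% loss for the
    base course, at most 35% for the surface course."""
    limits = {"base": 45.0, "surface": 35.0}
    return loss_percent <= limits[course]
```

A sample ground down from 5000 g to 3000 g has lost 40% of its mass — acceptable for a base course, but not for a surface course.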
The test was developed by the city engineers of Los Angeles in the 1920s. The California Highway Commission found the new methodology superior to the established Deval abrasion test, and adopted the LA test in 1927. In the 1930s, national studies demonstrated that the Deval test did not correlate with the service record of sampled rock at all, while an LA loss rating of less than 40% was a reliable indicator of quality. The federal standard for LA abrasion testing was formally adopted by the ASTM in 1937. Decades later, field studies found that the LA test results do not always correlate with reality, thus engineers outside of the United States developed different national standards like the French wet micro-Deval procedure or the British Standard 812.
Citations
References
ASTM standards
Standards of the United States
Construction standards
Roads in the United States
Construction in the United States
Stone (material)
Quarrying | Los Angeles abrasion test | [
"Engineering"
] | 470 | [
"Construction",
"Construction standards"
] |
78,607,422 | https://en.wikipedia.org/wiki/Potassium%20asparaginate | Potassium asparaginate is a potassium salt of L-asparagine amino acid.
Potassium asparaginate can be considered both a salt and a coordination complex. As a salt, potassium asparaginate is formed when the potassium ion (K⁺) replaces the hydrogen ion (H⁺) in the carboxyl group of L-asparagine, an amino acid; in this process, the carboxyl group (COOH) in L-asparagine loses a hydrogen, which is replaced by potassium. As a coordination complex, in the context of coordination chemistry, the potassium ion coordinates with the L-asparagine, forming a stable structure where the central (metal) ion is surrounded by and associated with the L-asparagine, a ligand (complexing molecule), through coordinate covalent bonds.
Chemical properties
The composition by mass of elemental potassium (K) in potassium asparaginate (C₄H₇KN₂O₃) is approximately 23%, given that the molar mass of a potassium atom is 39.1 grams per mole (g/mol) and the molar mass of potassium asparaginate is 170.21 g/mol (39.1 / 170.21 ≈ 23%).
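The quoted figure follows directly from the two molar masses; a one-line check using only the values from the text:

```python
def mass_percent(part_molar_mass, compound_molar_mass):
    """Mass percentage contributed by one constituent of a compound,
    from the constituent's and the compound's molar masses."""
    return 100.0 * part_molar_mass / compound_molar_mass

# Molar masses from the text: K = 39.1 g/mol, salt = 170.21 g/mol.
potassium_share = mass_percent(39.1, 170.21)   # approximately 23 %
```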
The solubility of potassium asparaginate, in g/100ml of various solvents (water, ethanol, methanol), at temperatures of 30, 35 and 40 degree Celsius, is the following:
Synthesis
Potassium asparaginate can be obtained from L-asparagine and potassium fluoride (KF) in a chemical reaction which yields potassium asparaginate and hydrofluoric acid (HF).
Applications
Medicine
Potassium asparaginate, along with magnesium asparaginate, is marketed in Russia and Eastern European countries to treat or prevent potassium deficiency (hypokalemia) and magnesium deficiency (hypomagnesemia). Potassium asparaginate and magnesium asparaginate purportedly improve metabolism in the myocardium (heart muscle), enhance the tolerance of cardiac glycosides (heart medications) and exhibit antiarrhythmic activity (help regulate heart rhythm). Still, these health claims are not backed up by reliable studies. In the United States, potassium asparaginate is not specifically approved by the Food and Drug Administration (FDA) for treating any medical condition; to treat hypokalemia, potassium is instead administered as other salts, namely gluconate, citrate, chloride or bicarbonate.
Nonlinear optics
In nonlinear optics, crystals of potassium asparaginate are investigated as a potential nonlinear optical material, as salts of some amino acids possess strong nonlinear optical properties. A nonlinear optical material is a substance with high optical nonlinearity. Such substances are useful in applications such as signal transmission, data storage, or optical switching. High optical nonlinearity refers to the property of materials to respond to light (e.g., a laser) in a nonlinear manner, meaning that the response does not scale linearly with the intensity of the applied light.
References
Potassium compounds
Optical materials
Metal-amino acid complexes | Potassium asparaginate | [
"Physics",
"Chemistry"
] | 647 | [
"Coordination chemistry",
"Materials",
"Optical materials",
"Metal-amino acid complexes",
"Matter"
] |
78,611,182 | https://en.wikipedia.org/wiki/Zero%20power%20factor%20curve | The zero power factor curve (also zero power factor characteristic, ZPF, ZPFC) of a synchronous generator is a plot of the output voltage as a function of the excitation current or field using a zero power factor (purely inductive) load that corresponds to rated voltage at rated current (1 p.u.). The curve is typically plotted alongside the open-circuit characteristic.
It is obtained by measuring the terminal voltage while the stator current is held at zero power factor, using a purely inductive load that can be regulated to absorb the reactive power delivered by the generator.
The curve is recorded by rotating the generator at the rated RPM with the output terminals connected to the inductive load, varying the excitation current and recording the output voltage.
The ZPFC can be used together with the open-circuit saturation curve in the Potier triangle method.
The zero power factor characteristic is similar to the open-circuit characteristic but shifted down by the voltage drop across the Potier (leakage) reactance.
References
Electrical generators | Zero power factor curve | [
"Physics",
"Technology"
] | 199 | [
"Physical systems",
"Electrical generators",
"Machines"
] |
78,612,468 | https://en.wikipedia.org/wiki/Sipavibart | Sipavibart is an experimental medication under investigation for the prevention of COVID-19 in people who are immunocompromised. Sipavibart is a recombinant human IgG1 monoclonal antibody that provides passive immunization against SARS-CoV-2 by binding its spike protein receptor binding domain.
Society and culture
Legal status
In December 2024, the Committee for Medicinal Products for Human Use of the European Medicines Agency adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Kavigale, intended for the prevention of COVID-19 in immunocompromised people aged twelve years of age and older. Kavigale was reviewed under the EMA's accelerated assessment program. The applicant for this medicinal product is AstraZeneca AB.
Names
Sipavibart is the international nonproprietary name.
References
Further reading
Anti–RNA virus drugs
Antiviral drugs
COVID-19 drug development
Experimental antiviral drugs
Monoclonal antibodies | Sipavibart | [
"Chemistry",
"Biology"
] | 213 | [
"Antiviral drugs",
"COVID-19 drug development",
"Biocides",
"Drug discovery"
] |
78,614,078 | https://en.wikipedia.org/wiki/Cyclin%20B3 | G2/mitotic-specific cyclin-B3 is a protein encoded by the CCNB3 gene located on the X chromosome in humans. Cyclin B3 has features of both A type cyclins and B type cyclins and is a distinct subfamily of B type cyclins conserved across many species. However, human cyclin B3 is considerably larger than all other previously characterized invertebrate or vertebrate cyclin B3s. Unlike cyclin B1 and cyclin B2, it is solely expressed in germ cells in mammals, with a significant role in meiosis and gamete formation.
Structure
Cyclin B3 was originally identified in chickens from cDNA as a 403 amino acid protein. It has roughly 30% similarity to chicken and Xenopus B and A type cyclins. The cyclin box of chicken cyclin B3 has 15 residues different from the consensus sequence for B type cyclins and 22 residues different from the consensus sequence for A type cyclins. The destruction box sequence for chicken cyclin B3 also differs from the expected sequence: it contains a phenylalanine rather than a leucine. The nuclear localization sequence (NLS) of chicken cyclin B3 appears to be in the 26 C-terminal residues, consistent with A type cyclins.
Human cyclin B3 is the largest cyclin, 1395 amino acids long, due to large variable domain (contained in exon 8) between the destruction box and cyclin box. There are indications of alternative splicing that alters localization to the cytoplasm.
Expression
Cyclin B3 is nearly entirely localized to the nucleus and cycles similarly to other B cyclins in somatic cells. In humans it is primarily expressed in germ cells in the testis, somewhat contradictory to its observed function in oocyte meiosis in other organisms.
Function
When it was initially characterized in HeLa cells, human cyclin B3 was found to associate with CDK2 but it did not significantly spur histone H1 kinase activity as is common with other cyclin-CDK complexes. However, further research has shown that cyclin B3 associates with CDK1 rather than CDK2 (as seen with chicken cyclin B3). In HeLa cells, cyclin B3 was observed to degrade during the metaphase-anaphase transition when it had a complete destruction box. Accumulation of cyclin B3 was also shown to induce the beginning of mitosis early and prevent exit from M phase by arresting cells in anaphase.
Role in mitosis
Cyclin B3 has primarily mitotic functionality in Caenorhabditis elegans where it is primarily localized to the nucleus and is necessary for chromatid separation. Cyclin B3 is especially important in early C. elegans embryos where it again governs chromatid separation as well as kinetochore and microtubule assembly. It additionally appears to drive rapid mitosis in early C. elegans embryos, roughly three times faster than mitosis in adult worms.
Role in meiosis and gamete production
Oogenesis
Cyclin B3 has been investigated in the context of oogenesis as its initial mammalian characterization found mRNA expression in fetal ovaries but not adult ovaries. Female mice with null or severe loss of function mutations to both copies of cyclin B3 (Ccnb3-/-) are sterile: most ccnb3-/- oocytes do not form polar bodies. Cyclin B3-CDK1 complexes promote the degradation of Anaphase Promoting Complex/Cyclosome (APC/C) substrates securin and cyclin B1, which potentially leads to the onset of anaphase I. Cyclin B3 is also degraded as the oocyte leaves meiosis I.
Cyclin B3-CDK1 complexes also phosphorylate Emi2, an APC/C inhibitor, flagging it for degradation which maintains APC/C activity. Importantly, cyclin B3 is not present during meiosis II, which allows for arrest in metaphase II. This pattern of degradation, different from cyclins B1 and B2, is potentially the result of its destruction box sequence which does not match cyclins B1 and B2.
Cyclin B3 seems to maintain this key function in oogenesis in other organisms like Drosophila, where Cyclin B3 acts directly on APC/C, and Caenorhabditis elegans. Interestingly, injection of frog (Xenopus laevis), zebrafish (Danio rerio), or fly (Drosophila) cyclin B3 mRNA rescued Ccnb3-/- mutant fertility in mice, suggesting that cyclin B3 is highly conserved amongst all animals.
Spermatogenesis
As its initial mammalian characterization found cyclin B3 is primarily expressed in human testis and implicated in meiosis. Its role in spermatogenesis has been studied in mouse models. Cyclin B3 mRNA is observed beginning in prophase I, and continues to accumulate in leptotene and zygotene stages, decreasing as sperm cells enter the pachytene stage. When cyclin B3 expression is artificially extended until the end of meiosis, spermatogenesis is negatively affected. This extended expression leads to decrease in sperm counts, cells in seminiferous tubules with abnormal morphology and increased instances of apoptosis, and resulted in no functional gametes.
Interestingly, male mice and flies with null or severe loss of function mutations of cyclin B3 (Ccbn3-/Y) retain their fertility and exhibit normal spermatogenesis which shows that cyclin B3 is not necessary for spermatogenesis and has some redundant functionality in males.
Cancer
Despite its primary role in meiosis, cyclin B3 has been implicated in cancer, first described in bone sarcomas as a fusion of BCOR and CCNB3. Tumors with this mutation are relatively rare, but they are more prevalent in adolescents and young adults and significantly more common in men than women. No reasons for this demographic breakdown have been proposed.
References
Proteins | Cyclin B3 | [
"Chemistry"
] | 1,319 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
74,309,258 | https://en.wikipedia.org/wiki/Threshold%20of%20toxilogical%20concern | The threshold of toxicological concern (or TTC) is a method for determining the level of exposure to chemicals above which they would be considered toxic, in cases where data about such chemicals are scarce or non-existent.
References
Toxicology | Threshold of toxilogical concern | [
"Environmental_science"
] | 49 | [
"Toxicology",
"Toxicology stubs"
] |
74,310,950 | https://en.wikipedia.org/wiki/Liquid3 | Liquid 3 (also known as Liquid Trees) is a clean energy photobioreactor project designed to replace the function of trees in heavily polluted urban areas where planting and growing real vegetation is not viable.
The project was designed by the Institute for Multidisciplinary Research at the University of Belgrade. The United Nations Development Programme (UNDP) selected Liquid 3 as an "innovative" solution for "Climate Smart Urban Development," a project produced in partnership with Serbia's Ministry of Environmental Protection and the municipality of Stari Grad.
Overview
The Liquid3 algal photobioreactor is powered by solar panels. The glass tank is embedded into a structure that acts as a bench and is outfitted with other utilities such as charging ports. Similar to other photobioreactors, air is sucked through a pressure pump and fed to the microalgae, with oxygen released as a byproduct. Additionally, the Liquid 3 bioreactor can filter out heavy metal contaminants in the air and contains a temperature regulation system in case external climate conditions become too extreme for the microalgae. The creator of the Liquid 3, Dr. Ivan Spasojevic, was motivated to install it in Belgrade due to the city's struggle with pollution.
See also
CityTrees
Smog tower
References
External links
Belgrade
Bioreactors | Liquid3 | [
"Chemistry",
"Engineering",
"Biology"
] | 273 | [
"Bioreactors",
"Biological engineering",
"Chemical reactors",
"Biochemical engineering",
"Microbiology equipment"
] |
74,312,779 | https://en.wikipedia.org/wiki/CERN-MEDICIS | CERN-MEDical Isotopes Collected from ISOLDE (MEDICIS) is a facility located in the Isotope Separator On Line DEvice (ISOLDE) facility at CERN, designed to produce high-purity isotopes for developing the practice of patient diagnosis and treatment. The facility was initiated in 2010, with its first radioisotopes (terbium-155) produced on 12 December 2017.
The target used to produce radioactive nuclei at the ISOLDE facility only absorbs 10% of the proton beam. MEDICIS positions a second target behind the first, which is irradiated by the leftover 90% of the proton beam. The target is then moved to an off-line mass separation system and isotopes are extracted from the target. These isotopes are implanted in metallic foil and can be delivered to research facilities and hospitals.
MEDICIS is a nuclear class A laboratory and takes into account various radioprotection procedures to prevent irradiation and contamination.
Background
An isotope of an element contains the same number of protons but a different number of neutrons, giving it a different mass number from the element found on the periodic table. Isotopes with a large imbalance between their proton and neutron numbers decay into more stable nuclei, and are known as radionuclides or radioisotopes.
The field of nuclear medicine uses radioisotopes to diagnose and treat patients. The radiation and particles emitted by these radioisotopes can be used to weaken or destroy target cells, for example in the case of cancer. For diagnosis, a radioactive dose is given to a patient and its activity can be tracked to study the functionality of a target organ. The tracers used within this process are generally short-lived isotopes.
Diagnostic radiopharmaceuticals are used to examine organ functionality, blood flow, bone growth and other diagnostic procedures. Radioisotopes needed for this procedure must emit high-energy gamma radiation and have a short half-life, so that the radiation can escape the body and the isotope decays quickly. There is currently a trend to use cyclotron-produced isotopes as they are becoming more widely available.
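The short-half-life requirement rests on the exponential decay law: activity halves every half-life. A minimal sketch — the example half-life of 6 hours is roughly that of technetium-99m, a common diagnostic tracer, and is our illustrative assumption, not a value from the text:

```python
def remaining_activity(initial_activity, elapsed_time, half_life):
    """Radioactive decay law: A(t) = A0 * 2**(-t / T_half).
    The two time arguments must share one unit (e.g. hours)."""
    return initial_activity * 2.0 ** (-elapsed_time / half_life)
```

After four half-lives (24 h at a 6 h half-life) only 1/16 of the initial activity remains, which is why such tracers clear from the patient quickly.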
Positron emission tomography (PET) is an imaging technique using radioisotopes that are also most often produced with a cyclotron. They are injected into the patient, accumulate in the target tissue, and decay through positron emission. The positron annihilates with a nearby electron, which results in the emission of two gamma rays (photons) in opposite directions. A PET camera detects these rays and can determine quantitative information about the target tissue.
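Each of the two back-to-back photons carries the electron rest energy, E = m_e·c² ≈ 511 keV — the coincidence signature a PET camera looks for. A quick check from CODATA constants (an illustrative calculation of ours, not from the text):

```python
def annihilation_photon_energy_keV():
    """Energy of each gamma photon from electron-positron annihilation
    at rest: E = m_e * c**2, converted from joules to keV."""
    m_e = 9.1093837015e-31        # electron mass, kg (CODATA)
    c = 2.99792458e8              # speed of light in vacuum, m/s
    joule_per_ev = 1.602176634e-19
    return m_e * c * c / joule_per_ev / 1e3
```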
Therapeutic radiopharmaceuticals are used to destroy or weaken malfunctioning cells, using a radioisotope localised to a specific organ. This process is called radionuclide therapy (RNT), and uses proton-rich radioisotopes (located in the north-west area of the nuclide chart) that decay through beta or alpha emission.
Facility and process
The MEDICIS facility is located in the extension of building 179 at the CERN Meyrin site, next to the ISOLDE building. The facility was established by CERN in 2010, along with contributions from the CERN Knowledge Transfer Fund, as well as receiving a European Commission Marie-Skłodowska-Curie training grant under the title MEDICIS-PROMED. The construction of the facility started in September 2013 and was completed in 2017.
ISOLDE directs a 1.4 GeV proton beam from the Proton Synchrotron Booster (PSB) onto a thick target, the material dependent on the desired produced isotopes. Only 10% of the proton beam used in the ISOLDE facility is absorbed by the target, with the rest otherwise hitting the beam dump. MEDICIS uses these wasted protons to irradiate a second target, which produces specific isotopes, placed behind each of ISOLDE's target stations, the High Resolution Separator (HRS) and the General Purpose Separator (GPS). Alternatively, the facility uses pre-irradiated targets that are provided by external institutions. MEDICIS was one of the few facilities operating throughout the Long Shutdown 2, due to it being provided with 34 externally irradiated target materials.
Due to the high levels of radiation, the targets are transferred from the irradiation station to the radioisotope mass-separation beamline using an automated rail conveyer system (RCS). A KUKA robot is used to transport the target to the station, where the isotope of interest can be collected and radiochemically purified. This is done by heating the target up to very high temperatures, often more than 2000 °C, which causes the specified isotopes to diffuse. The isotopes are then ionised and accelerated by an ion source to be sent through a mass separator. The mass separator extracts the isotope of interest so that it can be implanted onto thin gold foils with a one-sided metallic or salt coating.
In 2019, the MEDICIS Laser Ion Source Setup At CERN (MELISSA) became fully operational, containing the individual lasers, auxiliary and control systems, and optical beam transport. The MELISSA laser laboratory has helped to successfully increase the separation efficiency and the yield of the isotopes. The laser excites only isotopes of the desired element, allowing an element-selective isotope separation for a given atomic mass from other isobars by the mass separator.
A shielded trolley is used to retrieve the samples after the radioisotopes have been collected, in order to avoid risk of contamination. Once the target is no longer in use, it is sent to a hot cell to be safely dismantled and placed in waste bins. Once collected, the samples can be sent to hospitals and research facilities for the development of patient imaging, treatment, and therapy protocols.
Additionally, next to the MEDICIS facility there is a nanolab laboratory designed for the development and assembly of nanomaterials. The nanomaterials are sealed in a glovebox, meaning there is no contact with the outside environment. The laboratory builds on the development of the first nanostructured targets used for isotope production, and further exploits developments initiated in MEDICIS-Promed under the guidance of Prof. Konstantin "Kostya" Novoselov.
Projects and results
Targeted therapy
Several lanthanides produced at CERN-MEDICIS, samarium and terbium, are of interest for targeted therapy, much like lutetium, which is already used in the clinic. Lutetium emits low-energy β particles with a short range, used for irradiation of smaller-volume tumor targets. Terbium-149 emits short-range alpha particles, gamma rays and positrons in its decay scheme, which makes it suitable for targeted alpha therapy. 149Tb produced by ISOLDE has been studied in particular for folate receptor therapy, relevant in ovarian and lung cancer.
153Sm, produced in the BR2 reactor at SCK CEN and subsequently mass-separated at MEDICIS to increase its molar activity, was found to be suitable for targeted radionuclide therapy (TRNT) in a proof-of-concept research project. It emits low-energy β particles and gamma peaks, and its half-life is acceptable for logistics and ambulatory care, making it a candidate of choice for theranostic approaches.
Theranostics
Theranostics, a treatment approach that combines therapy and diagnosis, is a new trend in precision medicine for which the radioisotopes produced at MEDICIS have already triggered research projects. The facility's strategy is to find an element with two radioisotopes, one used for imaging and the other for therapy.
A promising element for use in theranostics is terbium as it has four different radioisotopes for use in therapy and PET or SPECT imaging. In 2021, Tb radioisotope production was successfully performed with the MELISSA laser ion source, with a 53% ionisation efficiency obtained by MEDICIS-Promed students. Since 2021, three other non-conventional isotopes of interest for PET imaging or therapeutic applications have been produced.
Exploration of mass-separated 153Sm at MEDICIS using in vitro biological studies showed that tumor uptake and retention were improved compared to normal tissues. SPECT-CT scans of mice obtained post-injection showed cleared activity after twenty-four hours.
Involvement with PRISMAP
The PRoduction of high purity Isotopes by mass Separation for Medical APplication (PRISMAP) is the European medical radionuclide programme, with the goal of providing a sustainable source of high-purity radioisotopes for medicine. The programme brings together 23 beneficiaries from 13 countries to create a single entry point for the medical isotope user community. The MEDICIS facility provides mass separation of isotopes, which can then be transported to nearby research facilities hosting external researchers, limiting long-haul transport of the samples.
References
External links
MEDICIS page within CERN website
CERN facilities
Isotopes | CERN-MEDICIS | [
"Physics",
"Chemistry"
] | 1,859 | [
"Isotopes",
"Nuclear physics"
] |
68,435,028 | https://en.wikipedia.org/wiki/Architextiles | Architextiles refers to a broad range of projects and approaches that combine architecture, textiles, and materials science. Architextiles explore textile-based approaches and inspirations for creating structures, spaces, surfaces, and textures, and contribute to the creation of adaptable, interactive, and process-oriented spaces. The awning is the most basic type of architectural textile. In Roman times, a velarium was used as an awning covering the entire cavea, the seating area within amphitheaters, to protect spectators from the sun.
Hylozoic Ground, on the other hand, is a modern and complex architextile example. Hylozoic Ground is an interactive architecture model presented in the 18th Biennale of Sydney. Olympiastadion is another example of modern architecture presented in an unusual way.
Etymology
Architextiles is a portmanteau of architecture and textiles. 'Technology' and 'textiles' both derive from a Latin word meaning 'construct' or 'weave'. 'Textiles' also derives from the Proto-Indo-European root "tek", which is likewise the root of 'architecture'.
Architecture and textiles
Architectural textiles
Architextiles is architecture inspired by the characteristics, elements, and manufacturing techniques of textiles. It is a field that spans multiple disciplines, combining textile and architectural manufacturing techniques. Laser cutting, ultrasonic welding, thermoplastic setting, pultrusion, electrospinning, and other advanced textile manufacturing techniques are all used in architextiles. Architextiles integrates fields such as architecture, textile design, engineering, physics and materials science.
Textile inspirations
Architextiles exploits the sculptural potential of textile-based structures. Textiles motivate architects with their numerous features, enabling them to express ideas via design and create environmentally conscious buildings. Textiles also influence architecture in the following ways:
Characteristics
Textiles are adaptable, lightweight, and useful for a variety of structures, both temporary and permanent. Tensile surfaces composed of structural fabrics, such as canopies, roofs, and other types of shelter, are included in architectural textiles. If necessary, the materials are given special-purpose finishes, such as waterproofing, to make them suitable for outdoor use.
Coated fabrics
There is considerable use of coated materials in certain architectures; pneumatic structures are made of Teflon- or PVC-coated synthetic fabrics. Coated fiberglass, coated polyethylene and coated polyester are the most common materials used in lightweight structural textiles. According to the Industrial Fabrics Association International (IFAI), lightweight fabric constructions accounted for 13.2 square yards of total usage in 2006. Chemically inert, polytetrafluoroethylene-coated fibreglass can withstand temperatures as low as -100 °F (-73 °C) and as high as +450 °F (232 °C).
Interactive textiles
Textiles that can sense stimuli are known as interactive textiles. They have the capability to adapt or react to the environment. Felecia Davis has designed interactive textiles such as parametric tents that are able to change size and shape in response to changes in light and the number of people underneath.
3D structures
3D woven walls with a ribbed structure are suitable for soundproofing and interior design. Aleksandra Gaca designed the furnishing of the concept car Renault Symbioz with a 3D fabric named 'boko'.
Origami-inspired textiles
Textiles inspired by origami impart novel properties to architecture. Architects try out origami and three-dimensional fabric structures when designing structures.
History
Examples of architextiles have been found dating back a long way. Over centuries, nomadic tribes in the Middle East, Africa, the Orient, and the Americas have developed textile structures.
Historical structures
Historical architextiles include yurts and tents, the great awnings of the Colosseum in Rome, the tents of the Mongol Empire, and the ziggurat of Aqar Quf near Baghdad.
Present
Properties
Architextiles have a number of advantages; primarily, they are cost effective and can be used to construct temporary or transportable structures. The programming can be modified at any time.
Examples of architextiles
Muscle NSA
NSA Muscle is a pressurized (inflatable) structure and an interactive model. Equipped with sensors and computing systems, the MUSCLE is programmed to respond to human visitors.
Carbon tower
The carbon tower is a prototype carbon fiber building.
Hylozoic Ground
Hylozoic Ground is an exemplar of live architecture, an interactive model of architecture and a kind of architextile.
Textile growth monument
The textile growth monument ('textielgroeimonument') is a 3D 'woven' structure in the city of Tilburg.
Pneumatrix
Pneumatrix (RCA Department of Architecture, London) is a deployable and flexible theatre.
See also
3D textiles
Maison folie de Wazemmes
Lars Spuybroek
Tent
Wearable technology
References
Textiles
Architecture
Buildings and structures by type | Architextiles | [
"Engineering"
] | 1,059 | [
"Construction",
"Buildings and structures by type",
"Architecture"
] |
68,435,956 | https://en.wikipedia.org/wiki/Neptunium%20diarsenide | Neptunium diarsenide is a binary inorganic compound of neptunium and arsenic with the chemical formula NpAs2. The compound forms crystals.
Synthesis
Heating stoichiometric amounts of neptunium hydride and arsenic:
Physical properties
Neptunium diarsenide forms crystals of the tetragonal system, space group P4/nmm, cell parameters a = 0.3958 nm, c = 0.8098 nm.
References
Neptunium(III) compounds
Arsenides
Inorganic compounds | Neptunium diarsenide | [
"Chemistry"
] | 110 | [
"Inorganic compounds"
] |
68,436,286 | https://en.wikipedia.org/wiki/Triazolium%20salt | Triazolium salts are chemical compounds based on the substituted triazole structural element. They are composed of a cation based on a heterocyclic five-membered ring with three nitrogen atoms, two of which are functionalized, and a corresponding counterion (anion). Depending on the arrangement of the three nitrogen atoms, the triazolium salts are divided into two isomers, namely 1,3,4-trisubstituted-1,2,3-triazolium salts and 1,2,4-triazolium salts. They are precursors for the preparation of N-heterocyclic carbenes.
1,3,4-trisubstituted-1,2,3-triazolium salts
1,3,4-trisubstituted-1,2,3-triazolium salts can be synthesized from a 3,4-disubstituted-1,2,3-triazole by quaternization of the N1 nitrogen. This quaternization can be carried out by reaction with alkyl iodides (or other alkyl halides, though lower yields are generally observed due to their lower reactivity; alkyl fluorides are rarely used as they are mostly unreactive), yielding the corresponding 1,3,4-trisubstituted-1,2,3-triazolium iodide. Similarly, 1,3,5-trisubstituted-1,2,3-triazolium salts can be obtained from a 3,5-disubstituted-1,2,3-triazole.
1,4-disubstituted 1,2,4-triazolium salts
1,4-disubstituted 1,2,4-triazolium salts can be synthesized from a 4-substituted 1,2,4-triazole by quaternization of the N1 nitrogen. This quaternization can be carried out by reaction with alkyl iodides (or other alkyl halides, though lower yields are generally observed due to their lower reactivity; alkyl fluorides are rarely used as they are mostly unreactive), yielding the corresponding 1,4-disubstituted 1,2,4-triazolium iodide.
References
Triazoles
Heterocyclic compounds
Quaternary ammonium compounds | Triazolium salt | [
"Chemistry"
] | 510 | [
"Organic compounds",
"Heterocyclic compounds"
] |
69,718,146 | https://en.wikipedia.org/wiki/Berkelium%28III%29%20nitrate | Berkelium(III) nitrate is the berkelium salt of nitric acid with the formula Bk(NO3)3. It commonly forms the tetrahydrate, Bk(NO3)3·4H2O, a light green solid. If heated to 450 °C, it decomposes to berkelium(IV) oxide. A 22-milligram quantity of a solution of this compound is reported to have cost one million dollars.
Production and uses
Berkelium(III) nitrate is produced by the reaction of berkelium metal, the hydroxide, or the chloride with nitric acid. The compound has no commercial uses, but it was used in the synthesis of the element tennessine: the aqueous compound was painted onto a titanium foil, which was then bombarded with calcium-48 ions.
This compound is used as a pathway to pentavalent berkelium compounds by the collision-induced dissociation of this compound to produce BkO2(NO3)2– which contains berkelium in the +5 oxidation state.
References
Berkelium compounds
nitrates | Berkelium(III) nitrate | [
"Chemistry"
] | 234 | [
"Nitrates",
"Oxidizing agents",
"Salts"
] |
69,720,399 | https://en.wikipedia.org/wiki/Rig%20category | In category theory, a rig category (also known as bimonoidal category or 2-rig) is a category equipped with two monoidal structures, one distributing over the other.
Definition
A rig category is given by a category equipped with:
a symmetric monoidal structure (⊕, 0)
a monoidal structure (⊗, 1)
distributing natural isomorphisms: A ⊗ (B ⊕ C) ≅ (A ⊗ B) ⊕ (A ⊗ C) and (A ⊕ B) ⊗ C ≅ (A ⊗ C) ⊕ (B ⊗ C)
annihilating (or absorbing) natural isomorphisms: A ⊗ 0 ≅ 0 and 0 ⊗ A ≅ 0
Those structures are required to satisfy a number of coherence conditions.
Examples
Set, the category of sets with the disjoint union as ⊕ and the cartesian product as ⊗. Such categories where the multiplicative monoidal structure is the categorical product and the additive monoidal structure is the coproduct are called distributive categories.
Vect, the category of vector spaces over a field, with the direct sum as ⊕ and the tensor product as ⊗.
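In Set, the distributivity isomorphism can be checked concretely on small finite sets. The following sketch (illustrative, with arbitrary element names) confirms that A × (B ⊕ C) and (A × B) ⊕ (A × C) have the same cardinality, as the natural bijection requires:

```python
from itertools import product

def disjoint_union(X, Y):
    # Tag elements so overlapping sets stay distinct (the coproduct in Set).
    return [(0, x) for x in X] + [(1, y) for y in Y]

A, B, C = ['a1', 'a2'], ['b1'], ['c1', 'c2', 'c3']

left = list(product(A, disjoint_union(B, C)))                     # A x (B (+) C)
right = disjoint_union(list(product(A, B)), list(product(A, C)))  # (A x B) (+) (A x C)

# The natural bijection sends (a, (i, x)) to (i, (a, x)); here we just
# confirm the cardinalities agree:
print(len(left), len(right))  # 8 8
```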
Strictification
Requiring all isomorphisms involved in the definition of a rig category to be strict does not give a useful definition, as it implies an equality which signals a degenerate structure. However it is possible to turn most of the isomorphisms involved into equalities.
A rig category is semi-strict if the two monoidal structures involved are strict, both of its annihilators are equalities and one of its distributors is an equality. Any rig category is equivalent to a semi-strict one.
References
Monoidal categories | Rig category | [
"Mathematics"
] | 283 | [
"Monoidal categories",
"Mathematical structures",
"Category theory"
] |
69,720,530 | https://en.wikipedia.org/wiki/Dual%20input | Dual input or dual point user input are common terms describing the 'multiple touch input on two devices simultaneously' challenge.
When touch input commands arrive from two touch monitors simultaneously, a technical solution is required for this to work, because some operating systems only allow one cursor. When there are two users, as in the picture example, the two simultaneous input actions would require two "cursors" in the operating system. If one of the users also has a mouse connected to their display, there is a risk that the second user would interrupt the first user by moving the mouse cursor; in this example, the second display user would normally interfere with the main screen user.
These technical solutions can for example be observed in patent applications in the dual input field.
End consumers sometimes need help and assistance to get this setup working with two touch monitors.
There are dedicated companies providing dual input solutions to enterprise companies, for example ID24 second displays. Another B2B example that required a technical solution involved two 55" LCD TVs, each with its own IR touch overlay. This required additional help to solve dual input on the two screens simultaneously.
Finally, web technology frameworks are also adding dual input support. One example is Smart client, which released support for dual input in version 12 of its software.
See also
Multi-monitor
Touchscreen
References
Multi-monitor
Electronic display devices
Display technology | Dual input | [
"Engineering"
] | 285 | [
"Electronic engineering",
"Display technology"
] |
69,728,002 | https://en.wikipedia.org/wiki/Mona%20Canyon | Mona Canyon (Spanish: Cañón de la Mona), also known as the Mona Rift, is a submarine canyon located in the Mona Passage, between the islands of Hispaniola (particularly the Dominican Republic) and Puerto Rico, with steep walls measuring between in height from bottom to top. The Mona Canyon stretches from the Desecheo Island platform, specifically the Desecheo Ridge, in the south to the Puerto Rico Trench, which contains some of the deepest points in the Atlantic Ocean, in the north. The canyon is also particularly associated with earthquakes and subsequent tsunamis, with the 1918 Puerto Rico earthquake having its epicenter in the submarine canyon.
Geomorphology
The Mona submarine canyon geomorphology is highly complex yet unexplored. The complex seafloor is the result of oceanographic and tectonic forces that are actively forming and reshaping the landscape of the region. The canyon is located in an intricate and irregular tectonic region at the boundary between the Caribbean and North American plates, where the east–west transversing subduction Septentrional Fault ends in an approximately hole west of the landform.
See also
List of submarine canyons
References
Canyons and gorges of the United States
Geography of Puerto Rico
Landforms of Puerto Rico
Physical oceanography
Submarine canyons of the Atlantic Ocean | Mona Canyon | [
"Physics"
] | 270 | [
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
69,729,976 | https://en.wikipedia.org/wiki/Samarium%28III%29%20arsenide | Samarium(III) arsenide is a binary inorganic compound of samarium and arsenic with the chemical formula SmAs.
Synthesis
Samarium arsenide can be synthesised by heating the pure elements in vacuum: Sm + As → SmAs
Physical properties
Samarium arsenide forms crystals of the cubic system, space group Fm3m, with cell parameter a = 0.5921 nm and Z = 4, adopting the NaCl (rock salt) structure.
The compound melts congruently at 2257 °C.
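From the cell data above, the theoretical (X-ray) density follows from ρ = Z·M / (N_A·a³). A sketch using standard atomic masses for Sm and As (values not given in the article):

```python
N_A = 6.02214076e23       # Avogadro constant, 1/mol
a_cm = 0.5921e-7          # cubic cell parameter a = 0.5921 nm, in cm
Z = 4                     # formula units per cell (rock-salt structure)
M = 150.36 + 74.922       # molar mass of SmAs (Sm + As), g/mol

density = Z * M / (N_A * a_cm**3)  # g/cm^3
print(round(density, 2))  # ~7.21
```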
Uses
SmAs is used as a semiconductor and in photo optic applications.
References
Arsenides
Samarium(III) compounds
Semiconductor materials
Rock salt crystal structure | Samarium(III) arsenide | [
"Chemistry"
] | 127 | [
"Semiconductor materials"
] |
69,730,264 | https://en.wikipedia.org/wiki/Biofumigation | Biofumigation is a method of pest control in agriculture, a variant of fumigation where the gaseous active substance—fumigant—is produced by decomposition of plant material freshly chopped and buried in the soil for this purpose.
Plants from the Brassicaceae family (e.g., mustards, cauliflower, and broccoli) are primarily used due to their high glucosinolate content; in the process of decomposition, glucosinolates are broken down to volatile isothiocyanates which are toxic to soil organisms such as bacteria, fungi and nematodes, but less toxic and persistent in the environment than synthetic fumigants. Alternatively, grasses such as sorghum can be used, in which case hydrogen cyanide is produced to similar effect.
The method consists of mowing and chopping the plants during flowering to ensure maximum glucosinolate content and speed up decomposition. The ground needs to be irrigated to field capacity, after which the chopped material is incorporated into the top layer and covered with impermeable film to prevent the gas from escaping. After three or four weeks, the film is removed and the ground is ready for planting 24 hours later. Burying biofumigant crops after the growing season to plant cash crops normally next year may in theory lead to buildup of active substance in the soil after a few cycles of crop rotation, but direct short-term suppression of pests is not notable in this case.
The method can be used as a more sustainable and environment-friendly alternative to classic fumigation and other chemical pest control methods. Additionally, it can serve to replenish the nutrient content of the soil and promote growth of beneficial organisms. On the other hand, it requires changes in cultivation practice due to the time needed for the method to take effect, can be costly if biofumigant-producing plants need to be brought from elsewhere (i. e. if they are not used in crop rotation to be chopped and buried on site), and is difficult to standardize due to varying active substance content in different cultivars.
References
See also
biosolarization
Pest control techniques
Agricultural terminology
Soil science
Soil contamination
Biocides | Biofumigation | [
"Chemistry",
"Biology",
"Environmental_science"
] | 456 | [
"Environmental chemistry",
"Biocides",
"Soil contamination",
"Toxicology"
] |
71,308,064 | https://en.wikipedia.org/wiki/Corundum%20%28structure%29 | Corundum is the name for a structure prototype in inorganic solids, derived from the namesake polymorph of aluminum oxide (α-Al2O3). Other compounds, especially among the inorganic solids, exist in corundum structure, either in ambient or other conditions. Corundum structures are associated with metal-insulator transition, ferroelectricity, polar magnetism, and magnetoelectric effects.
Structure
The corundum structure has the space group R-3c (No. 167). It typically exists in binary compounds of the type A2B3, where A is metallic and B is nonmetallic, including sesquioxides (A2O3), sesquisulfides (A2S3), etc. When A is nonmetallic and B is metallic, the structure becomes the antiphase of corundum, called the anticorundum structure type, with examples including β-Ca3N2 and borates. Ternary and multinary compounds can also exist in the corundum structure. The corundum-like structure with the composition A2BB'O6 is called double corundum. A list of examples is tabulated below.
See also
Corundum
Ilmenite
Perovskite (structure)
References
Crystal structure types
Crystallography
Mineralogy | Corundum (structure) | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 277 | [
"Crystallography",
"Materials science",
"Condensed matter physics",
"Crystal structure types"
] |
71,308,823 | https://en.wikipedia.org/wiki/Capability%20curve | Capability curve of an electrical generator describes the limits of the active (MW) and reactive power (MVAr) that the generator can provide. The curve represents a boundary of all operating points in the MW/MVAr plane; it is typically drawn with the real power on the horizontal axis, and, for the synchronous generator, resembles a letter D in shape, thus another name for the same curve, D-curve. In some sources the axes are switched, and the curve gets a dome-shaped appearance.
Synchronous generators
For a traditional synchronous generator the curve consists of multiple segments, each due to some physical constraint:
at the right part of the curve (close to the rated voltage), the generator is constrained by the heat dissipation in the armature (stator for large generators). The heating is proportional to the sum of squares of active and reactive currents, at the near-constant voltage it is closely proportional to the sum of squares of MW and MVAr, therefore this part of the curve (armature heating limit) resembles a section of a semicircle with the center at (0,0);
at the upper part of the curve (generator produces a lot of reactive power) operation requires higher voltage on the output of the generator and thus higher excitation field. The rotating excitation winding has its own field heating limit;
at the bottom of the curve (generator absorbs a lot of reactive power) the magnetic flux constraints in the stator cause heating of the magnetic core at the stator end (core end heating limit).
The corners between the sections of the curve define the limits of the power factor (PF) that the generator can sustain at its nameplate capacity (the illustration has the PF ticks placed at 0.85 lagging and 0.95 leading angles). In practice, the prime mover (the power source that drives the generator) is designed for less active power than the generator is capable of (since in real life the generator always has to deliver some reactive power), so a prime mover limit (a vertical dashed line on the illustration) changes the constraints somewhat; in the example, the leading PF limit, now at the intersection of the prime mover limit and the core end heating limit, lowers to 0.93.
Due to high cost of a generator, a set of sensors and limiters will trigger the alarm when the generator approaches the capability-set boundary and, if no action is taken by the operator, will disconnect the generator from the grid.
The D-curve for a particular generator can be expanded by improved cooling. Hydrogen-cooled turbo generator's cooling can be improved by increasing the hydrogen pressure, larger generators, from 300 MVA, use more efficient water cooling.
The practical D-curve of a typical synchronous generator has one more limitation, minimum load. The minimum real power requirement means that the left-side of a D-curve is detached from the vertical axis. Although some generators are designed to be able to operate at zero load (as synchronous condensers), operation at real power levels between zero and the minimum is not possible even with these designs.
Wind and solar photovoltaics generators
The inverter-based resources (like solar photovoltaic (PV) generators, doubly-fed induction generators and full-converter wind generators, also known as "Type 3" and "Type 4" turbines) need to have reactive capabilities in order to contribute to the grid stability, yet their contribution is quite different from the synchronous generators and is limited by internal voltage, temperature, and current constraints. Due to flexibility allowed by the presence of the power converter, the doubly-fed and full-converter wind generators on the market have different shapes of the capability curve: "triangular", "rectangular", "D-shape" (the latter one resembles the D-curve of a synchronous generator). The rectangular and D-shapes of the curve theoretically allow using the generator to provide voltage regulation services even when the unit does not produce any active energy (due to low wind or no sun), essentially working as a STATCOM, but not all designs include this feature. The fixed speed wind turbines without a power converter (also known as "Type 1" and "Type 2") cannot be used for voltage control. They simply absorb the reactive power (like any typical induction machine), so a switched capacitor bank is usually used to correct the power factor to unity.
Older PV generators were intended for distribution networks. Since the current state of these networks does not include voltage regulation, the inverters in these units operated at unity power factor. When PV devices started to appear in transmission networks, inverters with reactive power capability appeared on the market. Since the power limit of an inverter is based on the maximum total current, the natural shape of the capability curve is similar to a semicircle, and at full capacity the real power always needs to be lowered if reactive power is to be produced or absorbed. Theoretically, PV generators can be used as STATCOMs, although in practice solar plants are disconnected at night.
Effects on electricity pricing
For a synchronous generator operating inside its D-curve, the marginal cost of providing reactive power is close to zero. However, once the generator's operating point reaches the corners of the D-curve, increasing the reactive power output will require reduction of the real (active) power. Since the electricity markets payments are typically based on real power, the generating company will have a disincentive to provide more reactive power if requested by the independent system operator. Therefore the reactive power management (voltage control) is separated into an ancillary service with its own tariffs, like the Reactive Supply and Voltage Control from Generation Sources (GSR) in the US.
References
Sources
Electrical generators | Capability curve | [
"Physics",
"Technology"
] | 1,227 | [
"Physical systems",
"Electrical generators",
"Machines"
] |