Dataset columns: id (int64, 39 to 79M), url (string, lengths 32 to 168), text (string, lengths 7 to 145k), source (string, lengths 2 to 105), categories (list, lengths 1 to 6), token_count (int64, 3 to 32.2k), subcategories (list, lengths 0 to 27).
77,725,717
https://en.wikipedia.org/wiki/Garko%2C%20Gombe%20State
Garko is a ward in the Akko Local Government Area of Gombe State, Nigeria. About 12.7 kilometers, or 7.9 miles, from Garko. The distance from Garko to Abuja, the capital of Nigeria, is roughly 421 kilometers (262 mi). The postcode of the area is 771104. Gallery References Town Populated places in Gombe State Climate and weather statistics Nigeria geography stubs Gombe State Climate of Africa
Garko, Gombe State
[ "Physics" ]
101
[ "Weather", "Physical phenomena", "Climate and weather statistics" ]
77,728,984
https://en.wikipedia.org/wiki/List%20of%20luminous%20blue%20variable%20stars
This is a list of luminous blue variable stars in order of their distance from Earth. List Milky Way galaxy (confirmed LBVs) Milky Way galaxy (candidate LBVs) Magellanic Clouds The Large Magellanic Cloud (LMC) is around 163 kly distant and the Small Magellanic Cloud (SMC) is around 204 kly distant. Andromeda Galaxy and Triangulum Galaxy The Andromeda Galaxy (M31) is 2.5 Mly distant and the Triangulum Galaxy is around 3.2 Mly distant. Single LBV Galaxies See also List of Wolf-Rayet stars List of O-type stars References Lists of stars Star systems Lists by distance
List of luminous blue variable stars
[ "Physics", "Astronomy" ]
140
[ "Lists by distance", "Physical quantities", "Distance", "Astronomical objects", "Star systems" ]
77,735,541
https://en.wikipedia.org/wiki/Design%20principles
Design principles are propositions that, when applied to design elements, form a design. Unity/harmony According to Alex White, author of The Elements of Graphic Design, achieving visual unity is a main goal of graphic design. When all elements are in agreement, a design is considered unified. No individual part is viewed as more important than the whole design. A good balance between unity and variety must be established to avoid a chaotic or a lifeless design. Methods Perspective: sense of distance between elements. Similarity: ability to seem repeatable with other elements. Continuation: the sense of having a line or pattern extend. Repetition: elements being copied or mimicked numerous times. Rhythm: achieved when a recurring position, size, color, and use of a graphic element has a focal point interruption. Altering the basic theme achieves unity and helps keep interest. Balance It is a state of equalized tension and equilibrium, which may not always be calm. Types of balance in visual design Symmetry Asymmetrical balance produces an informal balance that is attention-attracting and dynamic. Radial balance is arranged around a central element. The elements placed in a radial balance seem to 'radiate' out from a central point in a circular fashion. Overall is a mosaic form of balance which normally arises from too many elements being put on a page. Due to the lack of hierarchy and contrast, this form of balance can look noisy but sometimes quiet. Hierarchy/Dominance/Emphasis A good design contains elements that lead the reader through each element in order of its significance. The type and images should be expressed starting from the most important to the least important. Dominance is created by contrasting size, positioning, color, style, or shape. The focal point should dominate the design with scale and contrast without sacrificing the unity of the whole. Scale/proportion Using the relative size of elements against each other can attract attention to a focal point. When elements are designed larger than life, scale is being used to show drama. Scale can be considered both objectively and subjectively. Objectively, scale refers to the exact literal physical dimensions of an object in the real world, or the correlation between the representation and the real one. Printed maps are good examples, as they have an exact scale representing the real physical world. Subjectively, however, scale refers to one's impression of an object's size. A representation “lacks scale” when there is no exact cause linking it to lived experience, giving it a physical identity. As an example, a book may have a grand or intimate scale based on how it relates to our own body or our knowledge of other books. Scale in design A printed piece can be as small as a postage stamp or as large as a billboard. A logo should be legible both at tiny dimensions and from a distance on a screen. Some projects have their specified scale designed for a certain medium or site, while others need to work in various sizes designed for reproduction in multiple scales. No matter what size the design work is, it should have its own sense of scale. Increasing an element's scale in a design piece increases its value in terms of hierarchy and causes it to be seen before other elements, while decreasing an element's scale reduces its value. Similarity and contrast Planning a consistent and similar design is an important aspect of a designer's work to make their focal point visible. 
Too much similarity is boring, but without similarity important elements will not exist, and an image without contrast is uneventful, so the key is to find the balance between similarity and contrast. Similar environment There are several ways to develop a similar environment: Build a unique internal organization structure. Manipulate shapes of images and text to correlate together. Express continuity from page to page in publications. Items to watch include headers, themes, borders, and spaces. Develop a style manual and adhere to it. Contrasts Space: Filled / Empty, Near / Far, 2-D / 3-D. Position: Left / Right, Isolated / Grouped, Centered / Off-Center, Top / Bottom. Form: Simple / Complex, Beauty / Ugly, Whole / Broken. Direction: Stability / Movement. Structure: Organized / Chaotic, Mechanical / Hand-Drawn. Size: Large / Small, Deep / Shallow, Fat / Thin. Color: Grey scale / Color, Black & White / Color, Light / Dark. Texture: Fine / Coarse, Smooth / Rough, Sharp / Dull. Density: Transparent / Opaque, Thick / Thin, Liquid / Solid. Gravity: Light / Heavy, Stable / Unstable. Movement is the path the viewer's eye takes through the artwork, often to focal areas. Such movement can be directed along lines, edges, shapes, and colors within the artwork, and more. See also Composition (visual arts) Gestalt laws of grouping Interior design Landscape design Pattern language Elements of art Principles of art Color theory Notes References Kilmer, R., & Kilmer, W. O. (1992). Designing Interiors. Orlando, FL: Holt, Rinehart and Winston, Inc. Nielson, K. J., & Taylor, D. A. (2002). Interiors: An Introduction. New York: McGraw-Hill Companies, Inc. Pile, J. F. (1995; fourth edition, 2007). Interior Design. New York: Harry N. Abrams, Inc. Sully, Anthony (2012). Interior Design: Theory and Process. London: Bloomsbury. External links Art, Design, and Visual Thinking. An online, interactive textbook by Charlotte Jirousek at Cornell University. The 6 Principles of Design Design Composition in visual art
Design principles
[ "Engineering" ]
1,109
[ "Design" ]
77,735,553
https://en.wikipedia.org/wiki/Design%20elements
Design elements are the basic units of any visual design which form its structure and convey visual messages. Painter and design theorist Maitland E. Graves (1902–1978), who attempted to gestate the fundamental principles of aesthetic order in visual design, in his book, The Art of Color and Design (1941), defined the elements of design as line, direction, shape, size, texture, value, and color, concluding that "these elements are the materials from which all designs are built." Color Color is the result of light reflecting back from an object to our eyes. The color that our eyes perceive is determined by the pigment of the object itself. Color theory and the color wheel are often referred to when studying color combinations in visual design. Color is often deemed to be an important element of design as it is a universal language which presents the countless possibilities of visual communication. Color serves various purposes to contribute to the overall effectiveness of the design. It is used as an element to convey meaning and emotion, create visual hierarchy, enhance brand identity, improve readability and accessibility, create visual interest and appeal, differentiate information and elements, and make cultural and contextual significance. Hue, saturation, and brightness are the three characteristics that describe color. Hue can simply be referred to as "color" as in red, yellow, or green. Saturation gives a color brightness or dullness, which impacts the vibrance of the color. Values, tints and shades of colors are created by adding black to a color for a shade and white for a tint. Creating a tint or shade of color reduces the saturation. Color theory in visual design Color theory studies color mixing and color combinations. It is one of the first things that marked a progressive design approach. In visual design, designers refer to color theory as a body of practical guidance to achieving certain visual impacts with specific color combinations. Theoretical color knowledge is implemented in designs in order to achieve a successful color design. Color harmony Color harmony, often referred to as a "measure of aesthetics", studies which color combinations are harmonious and pleasing to the eye, and which color combinations are not. Color harmony is a main concern for designers given that colors always exist in the presence of other colors in form or space. When a designer harmonizes colors, the relationships among a set of colors are enhanced to increase the way they complement one another. Colors are harmonized to achieve a balanced, unified, and aesthetically pleasing effect for the viewer. Color harmony is achieved in a variety of ways, some of which consist of combining a set of colors that share the same hue, or a set of colors that share the same values for two of the three color characteristics (hue, saturation, brightness). Color harmony can also be achieved by simply combining colors that are considered compatible to one another as represented in the color wheel. Color contrasts Color contrasts are studied with a pair of colors, as opposed to color harmony, which studies a set of colors. In color contrasting, two colors with perceivable differences in aspects such as luminance, or saturation, are placed side by side to create contrast. Johannes Itten presented seven kinds of color contrasts: contrast of light and dark, contrast of hue, contrast of temperature, contrast of saturation, simultaneous contrast, contrast of sizes, and contrast of complementary. 
These seven kinds of color contrasts have inspired past works involving color schemes in design. Color schemes Color schemes are defined as the set of colors chosen for a design. They are often made up of two or more colors that look appealing beside one another, and that create an aesthetic feeling when used together. Color schemes depend on color harmony, as they point to which colors look pleasing beside one another. A satisfactory design product is often accompanied by a successful color scheme. Over time, color design tools with the function of generating color schemes were developed to facilitate color harmonizing for designers. Use of color in visual design Color is used to create harmony, balance, and visual comfort in a design. Color is used to evoke the desired mood and emotion in the viewer. Color is used to create a theme in the design. Color holds meaning and can be symbolic. In certain cultures, different colors can have different meanings. Color is used to put emphasis on desired elements and create visual hierarchy in a piece of art. Color can create identity for a certain brand or design product. Color allows viewers to have different interpretations of visual designs. The same color can evoke different emotions, or have various meanings to different individuals and cultures. Color strategies are used for organization and consistency in a design product. In the architectural design of a retail environment, colors affect decision-making, which motivates consumers to buy particular products. Color strengthens narrative and storytelling in visual design. Color can represent characters, themes, and symbolism. Color is a tool that designers use to strategically add layers of meaning and subtext to their designs. Colors can create recurring visual motifs in a design, strengthening ideas and fostering coherence. Color is an effective tool for communication because it allows for complex interpretation and expression. Line The line is an element of art defined by a point moving in space. More specifically, a line is defined as a series of points, or the connection between two points, or the path of a moving point. The importance of line comes from its versatility, as its characteristics are highly expressive. Lines can be vertical, horizontal, diagonal, or curved; they may also appear as linear shapes that take on a line-like quality, or as a suggested line perceived by the eye as it follows a sequence of related shapes. Line may be used in two-dimensional forms, enclosing a space as an outline and creating shape, or in three-dimensional forms. Lines can be any width or texture, and can be continuous, implied, or broken. On top of that, there are different types of lines aside from the ones previously mentioned. For example, you could have a line that is horizontal and zigzagged or a line that is vertical and zigzagged. Different lines create different moods; it all depends on what mood you are using line to create and convey. Point A point is basically the beginning of “something” in “nothing”. It forces the mind to think upon its position and gives something to build upon in both imagination and space. Some abstract points in a group can provoke human imagination to link it with familiar shapes or forms. Shape A shape is defined as a two-dimensional area that stands out from the space next to or around it due to a defined or implied boundary, or because of differences of value, color, or texture. 
Shapes are recognizable objects and forms and are usually composed of other elements of design. For example, a square that is drawn on a piece of paper is considered a shape. It is created with a series of lines which serve as a boundary that shapes the square and separates it from the space around it that is not part of the square. Types of shapes Geometric shapes or mechanical shapes are shapes that can be drawn using a ruler or compass, such as squares, circles, triangles, ellipses, parallelograms, stars, and so on. Mechanical shapes, whether simple or complex, produce a feeling of control and order. Organic shapes are irregular shapes that are often complex and resemble shapes that are found in nature. Organic shapes can be drawn by hand, which is why they are sometimes subjective and only exist in the imagination of the artist. Curvilinear shapes are composed of curved lines and smooth edges. They give a more natural feeling to the shape. In contrast, rectilinear shapes are composed of sharp edges and right angles, and give off a sense of order in the composition. They look more human-made, structured, and artificial. Artists can choose to create a composition that revolves mainly around one of these styles of shape, or they can choose to combine both. Texture Texture refers to the physical and visual qualities of a surface. Definition of texture Texture is the variation of data at a scale smaller than the scale of the main object. Taking a person wearing a Hawaiian shirt as an example, as long as we consider the person the main object we are looking at, the patterns of their shirt are considered texture. However, if we try to identify the pattern of the shirt, each flower or bird of the pattern is a non-textured object, as no smaller detail inside of it can be recognized. Texture in our environment helps us to better understand the nature of things, as a smooth paved road signals safe passage and thick fog creates a veil on our view. Texture in design Texture in design includes the literal physical surface employed in a printed piece as well as the optical appearance of the surface. Physical texture affects how the piece feels in hand and also how it conveys the design, as a glossy surface, for example, reflects light differently than a soft or pebbly one. Many of the textures manipulated by graphic designers, however, cannot be physically experienced, as they are utilized in the visual representation aspect of the design. Texture adds detail to an image in a way that conveys the overall quality of a surface. Graphic designers use texture to establish a mood, reinforce a point of view, or convey a sense of physical presence, whether setting type or drawing a tree. Uses of texture in design Texture can be used to attract or repel interest to an element, depending on how pleasant the texture is perceived to be. Texture can also be used to add complex detail into the composition of a design. In theatrical design, the surface qualities of a costume sculpt the look and feel of a character, which influences the way the audience reacts to the character. Types of texture Tactile texture, also known as "actual texture", refers to the physical three-dimensional texture of an object. Tactile texture can be perceived by the sense of touch. A person can feel the tactile texture of a sculpture by running their hand over its surface and feeling its ridges and dents. Painters use impasto to build peaks and create texture in their painting. Texture can be created through collage. 
This is when artists assemble three-dimensional objects and apply them onto a two-dimensional surface, like a piece of paper or canvas, to create one final composition. Papier collé is another collaging technique in which artists glue paper to a surface to create different textures on its surface. Assemblage is a technique that consists of assembling various three-dimensional objects into a sculpture, which can also reveal textures to the viewer. Visual texture, also referred to as "implied texture", is not detectable by our sense of touch, but by our sense of sight. Visual texture is the illusion of a real texture on a two-dimensional surface. Any texture perceived in an image or photograph is a visual texture. A photograph of rough tree bark is considered a visual texture. It creates the impression of a real texture on a two-dimensional surface which would remain smooth to the touch no matter how rough the represented texture is. In painting, different paints are used to achieve different types of textures. Paints such as oil, acrylic, and encaustic are thicker and more opaque and are used to create three-dimensional impressions on the surface. Other paints, such as watercolor, tend to be used for visual textures, because they are thinner and have transparency, and do not leave much tactile texture on the surface. Pattern Many textures appear to repeat the same motif. When a motif is repeated over and over again on a surface, it results in a pattern. Patterns are frequently used in fashion design or textile design, where motifs are repeated to create decorative patterns on fabric or other textile materials. Patterns are also used in architectural design, where decorative structural elements such as windows, columns, or pediments are incorporated into building design. Space In design, space is concerned with the area the design will take place on. For a two-dimensional design, space concerns creating the illusion of a third dimension on a flat surface: Overlap is the effect where objects appear to be on top of each other. This illusion makes the top element look closer to the observer. There is no way to determine the depth of the space, only the order of closeness. Shading adds gradation marks to make an object on a two-dimensional surface seem three-dimensional. Highlight, Transitional Light, Core of the Shadow, Reflected Light, and Cast Shadow give an object a three-dimensional look. Linear Perspective is the concept relating to how an object seems smaller the farther away it gets. Atmospheric Perspective is based on how air acts as a filter to change the appearance of distant objects. Form In visual design, form is described as the way an artist arranges elements in the entirety of a composition. It may also be described as any three-dimensional object. Form can be measured from top to bottom (height), side to side (width), and from back to front (depth). Form is also defined by light and dark. It can be defined by the presence of shadows on surfaces or faces of an object. There are two types of form: geometric (artificial) and natural (organic). Form may be created by combining two or more shapes. It may be enhanced by tone, texture or color. It can be illustrated or constructed. See also Composition (visual arts) Interior design Landscape design Pattern language Elements of art Color theory Notes References Kilmer, R., & Kilmer, W. O. (1992). Designing Interiors. Orlando, FL: Holt, Rinehart and Winston, Inc. Nielson, K. 
J., & Taylor, D. A. (2002). Interiors: An Introduction. New York: McGraw-Hill Companies, Inc. Pile, J.F. (1995; fourth edition, 2007). Interior Design. New York: Harry N. Abrams, Inc. Sully, Anthony (2012). Interior Design: Theory and Process. London: Bloomsbury. . External links Art, Design, and Visual Thinking. An online, interactive textbook by Charlotte Jirousek at Cornell University. The 6 Principles of Design Design
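The tint/shade relationship described in the Color section above (white added for a tint, black added for a shade) can be illustrated with a short script. This is a minimal sketch that assumes simple linear mixing in RGB space; the function names are illustrative, not taken from any particular library.

```python
def mix(color, target, amount):
    """Linearly mix an (r, g, b) color toward a target color.

    amount = 0.0 returns the original color, 1.0 returns the target.
    """
    return tuple(round(c + (t - c) * amount) for c, t in zip(color, target))

def tint(color, amount):
    """Tint: mix toward white, lightening the color and reducing saturation."""
    return mix(color, (255, 255, 255), amount)

def shade(color, amount):
    """Shade: mix toward black, darkening the color."""
    return mix(color, (0, 0, 0), amount)

pure_red = (255, 0, 0)
print(tint(pure_red, 0.5))   # (255, 128, 128) - a pink tint of red
print(shade(pure_red, 0.5))  # (128, 0, 0)     - a dark shade of red
```

A 50% tint of pure red already looks pink and noticeably less saturated, which is the effect the text describes.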
Design elements
[ "Engineering" ]
2,893
[ "Design" ]
61,130,095
https://en.wikipedia.org/wiki/Dust%20defense
Dust defense, sometimes called environmental defense, was a proposed anti-ballistic missile (ABM) system considered for protecting both Minuteman and MX Peacekeeper missile silos from Soviet attack. Operation The system works by burying a number of high-yield warheads near the missile field below the anticipated flight corridor of approaching enemy reentry vehicles (warheads). Approximately five to ten minutes before the arrival of the enemy warheads, the dust defense warheads would be detonated, sending a cloud of dust high into the atmosphere. Enemy RVs would strike this cloud of dust at approximately , which would rapidly abrade the RV's heat shield, causing the warhead to fail from reentry heating or to be knocked off course. The effectiveness of the system depended on the hardness of enemy RVs to abrasion and the yield of the weapon used. Approximately of dust would be produced per megaton of warhead yield. While it is possible that an RV could be hardened against the effects of this dust, doing so would carry a steep penalty in RV weight. Advantages and disadvantages of the system The advantage of the system compared to more conventional interceptor-based ABM systems is that dust defense provides a "screen" that, while active, provides protection regardless of decoys and the number of warheads used. Dust defense also has the advantage of being an "unambiguous" defense, in that the system is only seen as suitable for defending hard targets such as ICBM silos, and therefore it only protects retaliatory capability. By only protecting retaliatory capability, the system is thought to help preserve the balance of mutually assured destruction instead of degrading this balance as an ABM system protecting population centers would. Ignoring the issues of detonating a dust defense warhead near population centers, dust defense only provides protection for thirty minutes to an hour: long enough to allow ICBMs to launch, but once the dust cloud settles the city remains and can simply be targeted again. One of the issues with the system is that of a false alarm. Due to fallout and the consequences of detonating a nuclear weapon on your own soil, the consequences of a false alarm are much more severe than those found in interceptor-based ABM systems. However, if during a false alarm a nation is forced to use launch on warning to prevent the destruction of its ICBM force on the ground, that nation would receive massive casualties from the enemy's retaliatory attack. Therefore, a dust defense system that removes the need for launch on warning would be safer, as in a false alarm a nation has only slightly irradiated some of its own territory instead of facing the full brunt of a retaliatory attack. Dust defense, like any other ground-burst nuclear detonation, would produce fallout. However, the stationary nature of the system means that weight and size are not an issue, so very low fission fraction technologies developed for Project Plowshare could be used to reduce the effects of fallout. One source alleges that fission fractions of less than 2% in a weapon would be possible. This would be less than a percent of the fission produced by the Soviet warheads the system stops. Another source claims that by assembling the weapons inside underground vaults and then surrounding the weapons with borated water (which readily absorbs neutrons), fission fractions could be reduced to 1% of that of a conventional weapon. 
Variations A variation of dust defense was proposed that would instead time the detonation to several seconds before the RVs detonated. Instead of destroying enemy RVs through dust, this would destroy RVs through the impact (instead of abrasion) of material thrown up by the blast. This proposal was believed to have a greater chance of success than the abrasion concept (which was thought to require testing and had uncertainties) while also providing a cloud of abrasive dust that could also destroy RVs. This system was proposed to defend Dense Pack. Dense Pack itself was based on the dust defense concept, except that in its basic form the dust was created by the Soviet warheads attacking Dense Pack. See also Ash Carter, author of Ballistic Missile Defense and later US Secretary of Defense. References Nuclear warfare
Dust defense
[ "Chemistry" ]
858
[ "Radioactivity", "Nuclear warfare" ]
61,132,898
https://en.wikipedia.org/wiki/Mamanteo
A mamanteo or amuna is a pre-Columbian water harvesting system used in mountainous parts of Peru. It works by 'delaying' rainy-season runoff in the mountains so it can be used in lowland settlements during the dry season. Using canals, the system directs flood water onto permeable sections of soil or rock. The water is filtered by the ground layers and emerges in downslope springs weeks or months later, where it can be used to mitigate drought. The technology may have been used by the Andean Wari culture as early as 700 AD, and some of the ancient systems have been restored in the 21st century to serve modern cities. Modern restoration work Lima is the world's second-largest desert city, behind Cairo, and is served by a much smaller river than Cairo is. To meet the needs of a 'desert city which keeps growing', Sunass, the Peruvian national water agency, intends to '[combine] grey infrastructure with green infrastructure' and has created a tax-supported PES fund; some of this money has gone to regrouting mamanteos. As of 2016, ten mamanteos had been restored. A study of the system in Huamantanga, Peru, modeled an expansion to serve modern Lima, claiming it could extend the growing season for local farms. The study authors measured the amount of water stored and the time taken to deliver the water downslope, and proposed an extrapolation model to estimate large-scale effects of up-scaling the system. A climate risk study by Peruvian hydrology agencies includes the restoration of amunas on its shortlist of adaptive practices that have 'low cost and high hydrological benefit', along with management of grasslands, restoration of wetlands and forests, and other hydrological engineering projects. External links Ancestral technology, cheese and water for Lima, weadapt.org Ancient intervention could boost dwindling water reserves in coastal Peru, phys.org Ancient Peruvian engineering could help solve modern water shortages, Ars Technica Seeking relief from dry spells, Peru’s capital looks to its ancient past, National Geographic Amunas, Siembra de Agua, Cusqueña See also Qanat, an ancient Persian irrigation system for bringing well water downslope References Pre-Columbian cultures Water wells Irrigation Water supply Ancient technology
Mamanteo
[ "Chemistry", "Engineering", "Environmental_science" ]
466
[ "Hydrology", "Water wells", "Water supply", "Environmental engineering" ]
61,141,504
https://en.wikipedia.org/wiki/C22H30N2O2
{{DISPLAYTITLE:C22H30N2O2}} The molecular formula C22H30N2O2 (molar mass: 354.485 g/mol) may refer to: A-796,260 Eprozinol Martinostat Molecular formulas
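The quoted molar mass can be reproduced from standard atomic weights. Here is a minimal sketch; the atomic-weight values and the parse_formula helper are illustrative, not taken from any specific package, and small differences from the quoted figure come from the atomic-weight values chosen.

```python
import re

# Standard atomic weights in g/mol (approximate values).
ATOMIC_WEIGHT = {"C": 12.0107, "H": 1.00794, "N": 14.0067, "O": 15.9994}

def parse_formula(formula):
    """Split a formula like 'C22H30N2O2' into an {element: count} mapping."""
    counts = {}
    for element, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if element:
            counts[element] = counts.get(element, 0) + int(count or 1)
    return counts

def molar_mass(formula):
    """Sum the atomic weights weighted by each element's count."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in parse_formula(formula).items())

print(round(molar_mass("C22H30N2O2"), 3))  # ~354.486, close to the quoted 354.485 g/mol
```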
C22H30N2O2
[ "Physics", "Chemistry" ]
62
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
61,142,179
https://en.wikipedia.org/wiki/C18H18N2O
{{DISPLAYTITLE:C18H18N2O}} The molecular formula C18H18N2O (molar mass: 278.348 g/mol) may refer to: AC-262,536 Demexiptiline Mariptiline Proquazone Molecular formulas
C18H18N2O
[ "Physics", "Chemistry" ]
65
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
61,143,497
https://en.wikipedia.org/wiki/C22H25NO5
{{DISPLAYTITLE:C22H25NO5}} The molecular formula C22H25NO5 (molar mass: 383.437 g/mol, exact mass: 383.1733 u) may refer to: Acetylpropionylmorphine Sacubitrilat Molecular formulas
C22H25NO5
[ "Physics", "Chemistry" ]
69
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
61,144,035
https://en.wikipedia.org/wiki/C6428H9912N1694O1987S46
{{DISPLAYTITLE:C6428H9912N1694O1987S46}} The molecular formula C6428H9912N1694O1987S46 (molar mass: 144190.3 g/mol) may refer to: Adalimumab Infliximab
C6428H9912N1694O1987S46
[ "Chemistry" ]
75
[ "Isomerism", "Set index articles on molecular formulas" ]
61,146,406
https://en.wikipedia.org/wiki/Log-rank%20conjecture
In theoretical computer science, the log-rank conjecture states that the deterministic communication complexity of a two-party Boolean function is polynomially related to the logarithm of the rank of its input matrix. Let D(f) denote the deterministic communication complexity of a function f, and let rank(f) denote the rank of its input matrix (over the reals). Since every protocol using up to c bits partitions the input matrix into at most 2^c monochromatic rectangles, and each of these has rank at most 1, log rank(f) ≤ D(f). The log-rank conjecture states that D(f) is also upper-bounded by a polynomial in the log-rank: for some constant C, D(f) = O((log rank(f))^C). Lovett proved the upper bound D(f) = O(sqrt(rank(f)) · log rank(f)). This was improved by Sudakov and Tomon, who removed the logarithmic factor, showing that D(f) = O(sqrt(rank(f))). This is the best currently known upper bound. The best known lower bound, due to Göös, Pitassi and Watson, states that D(f) can be nearly quadratic in log rank(f). In other words, there exists a sequence of functions f_n, whose log-rank goes to infinity, such that D(f_n) ≥ (log rank(f_n))^(2−o(1)). In 2019, an approximate version of the conjecture was disproved. See also List of unsolved problems in computer science References Communication Computational complexity theory Conjectures Unsolved problems in computer science Information theory
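The rectangle-counting argument behind the trivial lower bound above can be written out as a short derivation, using D(f) for the deterministic communication complexity and M_f for the communication matrix:

```latex
% A deterministic protocol exchanging D(f) bits partitions M_f into at most
% 2^{D(f)} monochromatic rectangles; the all-1 rectangles alone sum to M_f,
% and each of them has rank at most 1.
\[
  M_f \;=\; \sum_{i=1}^{k} R_i, \qquad k \le 2^{D(f)}, \qquad \operatorname{rank}(R_i) \le 1 .
\]
% Subadditivity of rank then gives the trivial bound:
\[
  \operatorname{rank}(M_f) \;\le\; \sum_{i=1}^{k} \operatorname{rank}(R_i) \;\le\; 2^{D(f)}
  \quad\Longrightarrow\quad
  \log_2 \operatorname{rank}(M_f) \;\le\; D(f).
\]
```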
Log-rank conjecture
[ "Mathematics", "Technology", "Engineering" ]
248
[ "Unsolved problems in mathematics", "Telecommunications engineering", "Unsolved problems in computer science", "Applied mathematics", "Conjectures", "Information theory", "Computer science", "Mathematical problems" ]
53,502,028
https://en.wikipedia.org/wiki/Gambierol
Gambierol is a marine polycyclic ether toxin which is produced by the dinoflagellate Gambierdiscus toxicus. Gambierol is collected from the sea at the Rangiroa Peninsula in French Polynesia. The toxins accumulate in fish through the food chain and can therefore cause human intoxication. The symptoms of the toxicity resemble those of ciguatoxins, which are extremely potent neurotoxins that bind to voltage-sensitive sodium channels and alter their function. These ciguatoxins cause ciguatera fish poisoning. Because of the resemblance, there is a possibility that gambierol is also responsible for ciguatera fish poisoning. Because the natural source of gambierol is limited, biological studies are hampered. Therefore, chemical synthesis is required. Structure and reactivity Gambierol is a ladder-shaped polyether, composed of eight ether rings, 18 stereocenters, and two challenging pyranyl rings having methyl groups that are in a 1,3-diaxial orientation to one another. Different structural analogues were synthesized to determine which groups and side chains attached to gambierol are essential for its toxicity. Not only is the fused polycyclic ether core essential; the triene side chain at C51 and the C48-C49 double bond were also indispensable. In the triene side chain, the double bond between C57 and C58 was essential. The C1 and C8 hydroxy groups were not essential, but they enhance the activity. The conjugated diene in the triene side chain also enhances the toxicity. Synthesis The synthesis of gambierol builds on two tetracyclic precursor molecules, one alcohol and one acetic acid, that are fused together; the numbered compounds below refer to intermediates in the published synthetic scheme. After obtaining the octacyclic backbone, the tail is added via Stille coupling. The acetic acid (compound 1) and alcohol (compound 2) are converted to compound 3. The reaction of compound 3 with the titanium alkylidene derived from 1,1-dibromoethane provides the cyclic enol ether (compound 4). Oxidation of the alcohols gives mainly compound 5, but also compound 6. These are both ketones, but they differ in stereochemistry. Compound 6 can be converted back into compound 5 with reactant c, thereby moving the equilibrium towards compound 5. This ketone can be converted further to produce gambierol. By reductive cyclization of the D ring, the octacyclic core structure (compound 7) was made. Oxidation of compound 7 to the aldehyde was followed by formation of the diiodoolefin. Stereoselective reduction, global deprotection and Stille coupling of compound 8 with the dienyl stannane (compound 9) provide gambierol. Mechanism of action Gambierol acts as a low-efficacy partial agonist at voltage-gated sodium channels (VGSCs) and as a high-affinity inhibitor of voltage-gated potassium currents. It reduces the current through potassium channels irreversibly by stabilizing some of the closed channels. It acts as an intermembrane anchor where it displaces lipids and prevents the voltage sensor domain of the channel from moving during physiologically important changes. This causes the channel to remain in the closed state and lowers the current. Gambierol also decreases the amplitude of inward sodium currents and hyperpolarizes the inward sodium current activation. Gambierol has a particularly high affinity for Kv1.1-1.5 channels and the Kv3.1 channel. Kv1.1-1.5 channels are responsible for repolarization of the membrane potential. The Kv1.3 channel, however, has additional functions, regulating Ca2+ signaling in T cells. 
If they are blocked, the T cells at the site of inflammation are paralysed and are not reactivated. Kv3.1 channels are responsible for the high-frequency firing of action potentials. If the potassium channels are closed, the depolarized membrane cannot repolarize to its resting state, causing a permanent action potential. This leads to paralysis of, for example, the respiratory system and therefore suffocation of the organism. In neurons, gambierol has been reported to induce Ca2+ oscillations because of blockage of the voltage-gated potassium channels. The Ca2+ oscillations involve glutamate release and activation of NMDARs (glutamate receptors). This is, however, secondary to the blockade of potassium channels. The oscillations reduce the cytoplasmic Ca2+ threshold for the activation of Ras. Ras stimulates MAPKs to phosphorylate ERK1/2, which induces the outgrowth of neurites. This is, however, dependent on intracellular concentrations and on interaction with the NMDA receptors; both work bidirectionally. An increase in intracellular calcium concentration also activates nitric oxide synthase to produce nitric oxide. In combination with superoxide, nitric oxide forms peroxynitrite and causes oxidative stress in different sorts of tissues. This explains the toxic symptoms derived from intake of gambierol. Metabolism Metabolism of gambierol is not known yet, but the expectation is that gambierol acts almost the same as the ciguatoxins. Ciguatoxins are polycyclic polyether compounds. Their molecular weight is between 1,023 and 1,159 daltons. Gambierol is structurally similar to ciguatoxins and it can be synthesized together with them. Excretion of these ciguatoxins is largely via the feces and in smaller amounts via urine. The compounds are very lipophilic and will therefore diffuse to multiple organs and tissues, for example the liver, fat and the brain. The highest concentration was found in the brain, but muscles contained the highest total amount after a few days. Because gambierol is lipophilic, it can easily persist and accumulate in the fish food chain. The detoxification pathways are still unknown, but it is possible to eliminate gambierol. This will take several months or years. Efficacy and side effects The membrane potential and calcium signaling in T lymphocytes are controlled by ion channels. T cells can be activated if membrane potential and calcium signaling are altered, because they are coupled to signal transduction pathways. If these signal transduction pathways are disrupted, it can prevent the T cells from responding to antigens. This is called immune suppression. Gambierol is a potent blocker of potassium channels, which in part determine the membrane potential. Gambierol is therefore a good option for the development of a drug that can be used in immunotherapy. Such immunosuppression is used, for example, in diseases like multiple sclerosis, diabetes mellitus type 1 and rheumatoid arthritis. Treatment with gambierol is not used yet, because the compound is toxic and also blocks other channels, thereby disrupting important processes. Intake of gambierol can also cause pain, because Kv1.1 and Kv1.4 channels are blocked and that increases the excitability of central circuits. It also causes illness for several weeks. This is explained by the fact that gambierol is very lipophilic. Lipophilic compounds have high affinity for the lipid bilayer of cell membranes. 
It is likely that gambierol remains in the cell membrane for days or a few weeks, which explains the long-term illness associated with gambierol exposure. There are also other symptoms already explained by the mechanism of action of gambierol, for example difficulties with respiration and hypotension. Gambierol is also an interesting compound in research into treatments of pathologies like Alzheimer's disease, which are caused by increased expression of β-amyloid and/or tau hyperphosphorylation. Increases in β-amyloid accumulation and/or tau phosphorylation affect neurons the most. The neurons then degenerate, so this process affects the nervous system. However, gambierol can reduce this overexpression of β-amyloid and/or tau hyperphosphorylation in vitro and in vivo. Gambierol's function in inducing outgrowth of neurites in a bidirectional manner can potentially be used after neural injury. After a trauma or a stroke, for example, gambierol can be used to change the structural plasticity of the brain. The ability of gambierol to cross the blood–brain barrier is very important in this case. Toxicity Poisoning by gambierol normally occurs after eating contaminated fish. Gambierol exhibits potent toxicity against mice at 50-80 μg/kg by intraperitoneal injection or 150 μg/kg when consumed orally. Symptoms resemble those of ciguatera poisoning. The symptoms concerning the gastrointestinal tract are: Abdominal pain Nausea Vomiting Diarrhea Painful defecation The neurological symptoms include: Paradoxical temperature reversal; cold objects feel hot and vice versa. Dental pain; teeth feel loose. Treatment There is no known antidote for gambierol poisoning. References Neurotoxins Polyether toxins Ion channel toxins Non-protein ion channel toxins Potassium channel blockers Heterocyclic compounds with 7 or more rings
Gambierol
[ "Chemistry" ]
1,986
[ "Polyether toxins", "Neurochemistry", "Neurotoxins", "Toxins by chemical classification" ]
53,510,000
https://en.wikipedia.org/wiki/Diseases%20of%20abnormal%20polymerization
Diseases of abnormal polymerization, or simply DAPs, are a class of disorders characterized by a novel alteration in base unit proteins that results in a structure with pathogenic potential. This functional alteration in a protein, in relation to its thermodynamic and kinetic properties, enacts an extended chain response among neighboring proteins until an extensive and potentially harmful polymerized structure is formed. Due to this endogenous foreign formation, these diseases are often untreatable and very severe in clinical manifestation. Although DAPs are rare, the poor outcome in patients and the need for further understanding make this class of diseases a pillar for future research. Replication by recruitment Diseases of abnormal polymerization are said to undergo "replication", in that the number of polymerized proteins generally increases much as in a natural course of infection. Since the functional "pathogens" of DAPs are protein units, the diseases are almost entirely independent of nucleic acids. Multiple models illustrating this recruitment function exist, including the PrP protein in prion disease. The PrP protein is the major agent in spongiform encephalopathies and undergoes a clear process of polymerization based upon the natural balancing of thermodynamic states and kinetic summation. Like most proteins, PrP can exist in two forms, one major and one minor, an alpha helix structure and a beta-pleated sheet structure respectively, that are balanced under nearly all conditions, but with dominance granted to the stable helix form. In certain instances, it may be possible for two beta forms to contact each other at the same time, and in this case the pair can form bonds that stabilize the beta forms thermodynamically, allowing these structures to remain. This is termed the "seed" of polymerization: from this point, the continued interaction, or recruitment, among the beta forms increases perpetually, since there is a constant presence of stable beta forms and since beta forms, or beta-pleated sheets, have a greater number of reactive nucleation sites. This progression slowly forms extended fibrils over time that then cause localized cytopathology, resulting in the characteristic sites of cell degradation or "sponginess". This general template of recruitment is also characteristic of sickle cell anemia, in which the red blood cells are misshapen because of the formation of extended polymer fibrils. Scenarios for pathogenesis Spontaneous development DAPs can arise spontaneously from the existence of multiple thermodynamic states and the kinetic equilibrium between the alpha helix and the beta-pleated sheet, with the latter polymerizing into extended fibrils. Genetic predisposition In certain DAPs, such as Gerstmann-Straussler or fatal familial insomnia, individuals naturally encode a form of the PrP protein that shifts the equilibrium slightly to make the beta form more favorable, thus increasing the likelihood of additional nucleation and extended polymerization. Infectious etiology Many DAPs, including Creutzfeldt-Jakob disease and Kuru, have an infectious nature and can be transmitted to other hosts through various means. In the case of Kuru, familial and cultural rituals in indigenous peoples of Papua New Guinea promoted the consumption of a relative's body upon death, and, if the tissue was contaminated, resulted in the transmission of the original "seed" of polymerization. 
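The recruitment mechanism described above (a rare beta conformation that only becomes stable once two beta-form proteins meet, after which the seed keeps recruiting) can be caricatured in a toy simulation. This is purely an illustrative sketch with made-up rate parameters, not a quantitative kinetic model of any real protein.

```python
import random

random.seed(0)

N_PROTEINS = 1000     # pool of monomers, each in the alpha (stable) or beta (rare) state
P_TO_BETA = 0.001     # chance per step that an alpha monomer flips to beta
P_TO_ALPHA = 0.5      # a lone beta monomer usually relaxes back to alpha
P_RECRUIT = 0.2       # chance per step that the fibril captures a given free beta monomer

free_alpha, free_beta, fibril = N_PROTEINS, 0, 0

for step in range(1, 1001):
    # spontaneous flips between the two conformational states
    to_beta = sum(random.random() < P_TO_BETA for _ in range(free_alpha))
    to_alpha = sum(random.random() < P_TO_ALPHA for _ in range(free_beta))
    free_alpha += to_alpha - to_beta
    free_beta += to_beta - to_alpha

    if fibril == 0 and free_beta >= 2:
        # two beta forms meet and stabilize each other: the "seed"
        fibril, free_beta = 2, free_beta - 2
    elif fibril > 0:
        # the established seed keeps recruiting available beta monomers
        recruited = sum(random.random() < P_RECRUIT for _ in range(free_beta))
        fibril += recruited
        free_beta -= recruited

    if step % 250 == 0:
        print(f"step {step}: fibril size {fibril}, free beta {free_beta}")
```

Before a seed exists, stray beta monomers almost always relax back to alpha; once two of them happen to coexist, the fibril count only grows, mirroring the perpetual recruitment the article describes.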
Examples Spongiform encephalopathies Prion disease (Kuru, CJD, GSS, BSE) Alzheimer's disease Sickle cell anemia Parkinson's disease Huntington's disease References Polymerization reactions
Diseases of abnormal polymerization
[ "Chemistry", "Materials_science" ]
747
[ "Polymerization reactions", "Polymer chemistry" ]
53,511,508
https://en.wikipedia.org/wiki/International%20Beacon%20Project
The International Beacon Project (IBP) is a worldwide network of radio propagation beacons. It consists of 18 continuous wave (CW) beacons operating on five designated frequencies in the high frequency band. The IBP beacons provide a means of assessing the prevailing ionospheric signal propagation characteristics to both amateur and commercial high frequency radio users. The project is coordinated by the Northern California DX Foundation (NCDXF) and the International Amateur Radio Union (IARU). The first beacon of the IBP started operations from Northern California in 1979. The network was expanded to include 8 and subsequently 18 international transmission sites. History The first beacon was put into operation in 1979 using the call sign . It transmitted a one-minute-long beacon every 10 minutes on 14.1 MHz using custom-built transmitter and controller hardware. The signal consisted of the beacon's call sign transmitted in Morse code at 100 watts, four 9-second-long dashes at 100 watts, 10 watts, 1 watt, and 0.1 watt respectively, followed by sign-out at 100 watts. The Northern California DX Foundation and seven partnering organizations from the United States, Finland, Portugal, Israel, Japan, and Argentina operated the first iteration of the beacon network. Due to difficulties encountered in building beacon hardware, each site used a Kenwood TS-120 transceiver keyed and controlled by a custom-built beacon controller. The network operated on 14.1 MHz and the beacon format remained unchanged. In 1995, work began to improve the existing beacon network so it could operate on five designated frequencies in the high frequency band. The new beacon network used Kenwood TS-50 transceivers keyed and controlled by an upgraded beacon controller unit. The number of partner organizations was expanded to 18 and the new 10-second beacon format was adopted. Notable projects Beyond helping amateur radio operators better understand HF radio propagation, the project has aided scientists in better understanding the Earth's ionosphere, improved propagation prediction models, and aided in radio direction finding. Frequencies and transmission schedule The beacons transmit around the clock on the frequencies 14.100 MHz, 18.110 MHz, 21.150 MHz, 24.930 MHz, and 28.200 MHz. Each beacon transmits its signal once on each frequency, in sequence from low (14.100 MHz) to high (28.200 MHz), followed by a 130-second pause during which beacons at other sites transmit in turn on the same frequencies, after which the cycle repeats. Each transmission is 10 seconds long and consists of the call sign of the beacon transmitted at 22 words per minute followed by four dashes. The call sign and the first dash are transmitted at 100 watts of power. The subsequent three dashes are transmitted at 10 watts, 1 watt, and 0.1 watt respectively. All beacon transmissions are coordinated using GPS time. As such, at a given frequency, all 18 beacons transmit in succession once every 3 minutes. Hardware Beacons transmit using commercial HF transceivers (Kenwood TS-50 or Icom IC-7200) keyed and coordinated by a purpose-built hardware beacon controller. Beacons The International Beacon Project operates the following beacons as of January 2024 (beacon number, region, call sign where listed, transmit site, operator):
1. United Nations headquarters, New York City (United Nations Staff Recreation Council Amateur Radio Club, UNRC)
2. northern Canada, VE8AT, Inuvik, NT, grid CP38gh (RAC/NARC)
3. California, United States, Mt. Umunhum (Northern California DX Foundation, NCDXF)
4. Hawaii, United States, Maui (Maui Amateur Radio Club, Maui ARC)
5. New Zealand, Masterton (New Zealand Association of Radio Transmitters, NZART)
6. Western Australia, Roleystone (Wireless Institute of Australia, WIA)
7. Honshū, Japan, Mt. Asama (Japan Amateur Radio League, JARL)
8. Siberia, Russia, Novosibirsk (Russian Amateur Radio Union, SRR)
9. Hong Kong, Hong Kong (Hong Kong Amateur Radio Transmitting Society, HARTS)
10. Sri Lanka, Colombo (Radio Society of Sri Lanka, RSSL)
11. South Africa, Pretoria (South African Radio League, SARL)
12. Kenya, Kariobangi (Amateur Radio Society of Kenya, ARSK)
13. Israel, Tel Aviv (Israel Amateur Radio Club, IARC)
14. Finland, Lohja (Finnish Amateur Radio League, SRAL)
15. Madeira Island, Portugal, Santo da Serra (Rede dos Emissores Portugueses, REP)
16. Argentina, Buenos Aires (Radio Club Argentino, RCA)
17. Peru, Lima (Radio Club Peruano, RCP)
18. northern Venezuela, Caracas (Radio Club Venezolano, RCV)
References Radio frequency propagation Beacons
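The slot arithmetic implied by the schedule above (10-second transmissions, 18 beacons, five bands, a full cycle every 3 minutes) can be sketched in a few lines. This is an illustrative calculation that assumes the beacons are numbered 1-18 in the order of the table and that beacon 1 opens each 3-minute cycle on 14.100 MHz; the actual network keys this to GPS-derived UTC.

```python
FREQUENCIES_MHZ = [14.100, 18.110, 21.150, 24.930, 28.200]
NUM_BEACONS = 18
SLOT_SECONDS = 10
CYCLE_SECONDS = NUM_BEACONS * SLOT_SECONDS  # 180 s = 3 minutes

def transmitting(seconds_into_cycle):
    """Return {frequency_MHz: beacon_number} for a moment within the 3-minute cycle."""
    slot = int(seconds_into_cycle) // SLOT_SECONDS
    schedule = {}
    for band, freq in enumerate(FREQUENCIES_MHZ):
        # The beacon now on band `band` started `band` slots earlier on 14.100 MHz.
        schedule[freq] = (slot - band) % NUM_BEACONS + 1
    return schedule

# 45 seconds into the cycle: beacon 5 is on 14.100 MHz, beacon 4 on 18.110 MHz, and so on.
print(transmitting(45))
```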
International Beacon Project
[ "Physics" ]
1,501
[ "Physical phenomena", "Spectrum (physical sciences)", "Radio frequency propagation", "Electromagnetic spectrum", "Waves" ]
52,012,420
https://en.wikipedia.org/wiki/K%20band%20%28NATO%29
The NATO K band is the obsolete designation given to the radio frequencies from 20 to 40 GHz (equivalent to wavelengths between 1.5 and 0.75 cm) during the Cold War period. Since 1992, frequency allocations, allotments and assignments have been in line with the NATO Joint Civil/Military Frequency Agreement (NJFA). However, in order to identify military radio spectrum requirements, e.g. for crisis management planning, training, electronic warfare activities, or military operations, this system is still in use. References Microwave bands Satellite broadcasting
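The quoted wavelength range follows directly from the frequency limits via lambda = c/f; as a quick check:

```latex
\[
  \lambda = \frac{c}{f}, \qquad
  \lambda_{20\,\mathrm{GHz}} = \frac{3\times10^{8}\,\mathrm{m/s}}{20\times10^{9}\,\mathrm{Hz}} = 1.5\,\mathrm{cm}, \qquad
  \lambda_{40\,\mathrm{GHz}} = \frac{3\times10^{8}\,\mathrm{m/s}}{40\times10^{9}\,\mathrm{Hz}} = 0.75\,\mathrm{cm}.
\]
```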
K band (NATO)
[ "Engineering" ]
110
[ "Telecommunications engineering", "Satellite broadcasting" ]
52,012,421
https://en.wikipedia.org/wiki/K%20band%20%28infrared%29
In infrared astronomy, the K band is an atmospheric transmission window centered on 2.2 μm (in the near-infrared, around 136 THz). HgCdTe-based detectors are typically preferred for observing in this band. Photometric systems used in astronomy are sets of filters or detectors that have well-defined windows of absorption, based around a central peak detection frequency, with the edges of the detection window typically reported where sensitivity drops below 50% of peak. Various organizations have defined systems with various peak frequencies and cutoffs in the K band, including variants such as KS and Kdark. See also Absolute magnitude References Electromagnetic spectrum Infrared imaging
K band (infrared)
[ "Physics", "Astronomy" ]
133
[ "Astronomy stubs", "Spectrum (physical sciences)", "Electromagnetic spectrum" ]
52,017,892
https://en.wikipedia.org/wiki/William%20J.%20Evans%20%28chemist%29
William J. Evans is a Distinguished Professor at the University of California, Irvine, who specializes in the inorganic and organometallic chemistry of heavy metals, specifically the rare earth metals (i.e. Sc, Y, and the lanthanides), actinides, and bismuth. He has published over 500 peer-reviewed research papers on these topics. Evans was born in Madison, Wisconsin, and raised in Menomonee Falls, Wisconsin. He received a Bachelor of Science degree at the University of Wisconsin-Madison in 1969 where he did undergraduate research on pentaborane chemistry with Professor Donald F. Gaines. Subsequently, he attended the University of California, Los Angeles, where he obtained his PhD degree in 1973. His PhD research on the synthesis of metallocarboranes was supervised by Professor M. Frederick Hawthorne. He did postdoctoral research at Cornell University on the synthesis of transition metal phosphite complexes under the direction of Professor Earl L. Muetterties. Evans began his independent research career in 1975 at the University of Chicago. He chose an area of research completely different from his training and experience, namely the chemistry of the rare-earth metals and actinides, with the central thesis that the special properties of these metals should lead to unique chemistry. He used exploratory synthesis to generate the new molecular species with the appropriate coordination environments to allow the special chemistry of these metals to be accessed. After receiving tenure at Chicago in 1982, he was recruited to the University of California, Irvine, where he has been a Professor since 1983. Among his recent accomplishments at UCI is the discovery of molecular species containing nine new rare earth and actinide oxidation states. Evans is one of the few people to have received the American Chemical Society (ACS) Awards in both Inorganic Chemistry and Organometallic Chemistry. He has also received the Sir Edward Franklin Award and the Centenary Prize of the Royal Society of Chemistry, the Frank Spedding Award for Excellence in the Science and Technology of Rare Earths, the Terrae Rarae Award of the Tage der Seltenen Erden Society in Germany, the Richard C. Tolman Award of the Southern California Section of the ACS, a Special Creativity Extension Award from the National Science Foundation, the UCI Distinguished Faculty Award for Research, and the UCI Physical Sciences Outstanding Contributions to Undergraduate Education Award. He was also honored with UCI's highest faculty award, the Lauds and Laurels Outstanding Faculty Achievement Award. Recently, he was named Director of the Eddleman Quantum Institute at UCI and has been active in promoting interdisciplinary quantum science. Research Evans initially examined metal vapor methods to make new classes of lanthanide complexes in the 0 oxidation state. In efforts to characterize these products he identified the first crystallographically characterized lanthanide hydrides, [(C5H5)2(THF)Ln(μ-H)]2 (Ln = rare-earth metal), and the first soluble organometallic complex of samarium in the +2 oxidation state, (C5Me5)2Sm(THF)2. The latter complex demonstrated that lanthanide complexes could accomplish small molecule activation in unique ways, e.g. by reductive homologation of three CO molecules to (O2CC=C=O)2-. Desolvation of (C5Me5)2Sm(THF)2 formed the first bent metallocene with no other ligands, (C5Me5)2Sm. 
Decamethylsamarocene, as it was called, was surprising because it was previously thought that large cyclooctatetraenyl rings were required with lanthanides and actinides to form two-ring metallocenes. (C5Me5)2Sm was even more unusual in that it had a bent geometry instead of the parallel plane ferrocene-like structure expected for a simple ionic complex of a +2 ion with two large anionic cyclopentadienyl rings. (C5Me5)2Sm provided the first evidence of a rare-earth metal dinitrogen complex, [(C5Me5)2Sm]2(μ-η2:η2-N2). More importantly, this was the first example of a coplanar arrangement of dinitrogen with two metal atoms. Previous M2N2 complexes had a butterfly geometry in which each metal could interact with a perpendicular N−N pi bond. Subsequent studies of lanthanide-based dinitrogen reduction led to over forty crystallographically characterized examples of the formerly unprecedented planar M2(μ-η2:η2-N2) structures. These studies also led to the first examples of complexes of (N2)3- and (NO)2- radical anions. In collaboration with Professor Jeffrey R. Long, the (N2)3- complexes were found to constitute a new class of single-molecule magnets. Evans' synthetic study of (C5Me5)2Sm led to the discovery of a series of sterically crowded tris(pentamethylcyclopentadienyl) (C5Me5)3M complexes (M = rare earth and actinide). Previously, it was thought that three of these large ligands could not fit around any metal. This discovery was significant because the metal-ligand bond lengths in these complexes were 0.1 Å longer than conventional distances. The sterically crowded (C5Me5)3M complexes exhibit reduction chemistry termed sterically induced reduction (SIR), as well as η1-alkyl reactivity. Further exploration of dinitrogen reduction led to the synthesis of the first crystallographically characterizable molecular complexes containing Pr2+, Gd2+, Tb2+, Ho2+, Y2+, Er2+, and Lu2+ ions. These new lanthanide ions unexpectedly had 4fn5d1 electron configurations, and not the conventional 4fn+1 configuration generated by reduction of 4fn Ln3+ ions of Eu, Yb, Sm, Tm, Dy, and Nd. The first molecular complexes of U2+ and Th2+ were also discovered in the Evans lab. In the Th2+ case, the complex contained the first example of any ion with a 6d2 electron configuration. This is the configuration expected for superheavy metal ions like Rf2+ and Db3+. References Year of birth missing (living people) Living people 21st-century American chemists American inorganic chemists University of California, Irvine faculty University of California, Los Angeles alumni Chemists from Wisconsin
William J. Evans (chemist)
[ "Chemistry" ]
1,375
[ "American inorganic chemists", "Inorganic chemists" ]
56,314,301
https://en.wikipedia.org/wiki/Surface%20treatment%20of%20PTFE
Polytetrafluoroethylene (PTFE), better known by its trade name Teflon, has many desirable properties which make it an attractive material for numerous industries. It has good chemical resistance, a low dielectric constant, low dielectric loss, and a low coefficient of friction, making it ideal for reactor linings, circuit boards, and kitchen utensils, to name a few applications. However, its nonstick properties make it challenging to bond to other materials or to itself. A number of adhesion promotion methods have been developed to enhance PTFE bond strength. The primary methods currently used in industry are sodium etching and plasma etching. Results of ion beam treatment and laser surface roughening have also been reported in the literature, but do not have a significant presence as commercial processes. Sodium etching Wetting the surface of PTFE with commercially available solvents and liquid adhesives is virtually impossible. The exception to this is with special halogenated solvents that have a surface energy lower than PTFE, such as 3M's FC series solvents. These 3M solvents are, however, toxic and expensive. Additionally, even if wettability is acceptable, the PTFE will not form hydrogen bonds which are the primary source of adhesion strength. The PTFE surface therefore must be chemically modified to produce a surface which is capable of forming hydrogen bonds. Early sodium etching solutions Sodium etching of fluoropolymers has been used for decades to enhance bondability of PTFE. It is performed by immersion of the PTFE in a solution containing sodium followed by rinsing in alcohol and water. The process was originally performed by dissolving sodium metal in liquid ammonia. An alternative method was to form a complex with naphthalene, which was then dissolved in an ether such as tetrahydrofuran (THF). Both types of solutions carry risks to the user – both ammonia and THF are irritants, and both are flammable. At higher concentrations, THF is also a central nervous system depressant. In rats, the inhalation LC(50) (Lethal Concentration which kills 50% of test subjects) is 21,000 ppm for 3 hours. In humans, chronic effects have not been reported, but researchers using THF have developed severe occipital headaches and marked decreases in white blood cell counts. Newer sodium etching solutions More recently, glycol ethers (known as glymes) have come into use as carriers for the sodium naphthalene complex for PTFE etching. These glymes are ethylene glycol dimethyl ether (monoglyme), diethylene glycol dimethyl ether (diglyme), and tetraethylene glycol dimethyl ether (tetraglyme). Glymes pose minimal or no health risks to the user, and the solutions do not require special storage conditions. When using glyme-based etchants, it is recommended that the etching process be performed at moderately elevated temperatures, about 50 °C. The elevated temperature causes the etchant to release more active sodium. It also lowers the viscosity of the etchant which enhances wetting of high aspect ratio features such as plated through-holes in printed circuit boards. Tests of diglyme-based etchants used at 50 °C have shown bond strength increases of 50% or more over room temperature etching. Commercially available etchants today are primarily glyme-based. Rogers Corporation, a manufacturer of PTFE printed circuit board laminates, refers to Poly-Etch and FluoroEtch etchants in its Fabrication Guidelines, "Bonding PTFE Materials for Microwave Stripline Packages and Other Multilayer Circuits". 
Poly-Etch is a sodium naphthalene complex in tetraglyme, while Fluoro-Etch is a sodium naphthalide complex in diglyme. Matheson, the manufacturer of Poly-Etch, also manufactures a monoglyme-based etchant called Poly-Etch W. Fulcrum Chemicals manufactures three different etchants called Natrex25, NatrexHighFp and Natrex64. Sodium etching mechanism The main effect of sodium etching is defluorination of the PTFE, stripping fluorine atoms from the carbon backbone of the polymer. The fluorine-to-carbon atomic ratio (F/C ratio) is reduced from PTFE's theoretical ratio of 2.0 to 0.2 or less, after exposure to sodium naphthalene for 1 minute. The fluorine atoms are replaced with hydroxyl, carbonyl, and other functional groups which can form hydrogen bonds. Topographically, chemical etching of PTFE with sodium results in a highly porous defluorinated layer. Superficially, it displays a characteristic "mud crack" appearance. Wettability is improved significantly by the sodium etching process. The resultant surface has an increased surface energy, reported in one study as increasing from 16.4 mN/m to 62.2 mN/m. Contact angle is reduced from approximately 115 degrees to approximately 60 degrees. Bond strengths Relative to untreated PTFE, the sodium etching process has been well-documented to increase PTFE bond strengths significantly regardless of the test method (tensile, peel, lap shear) used to evaluate samples bonded with epoxy. Virtually all sodium etching bond strengths reported in academic journals predate the advent of glymes as carriers for sodium naphthalene complex. In adhesion tests per ASTM D4541, in which an aluminum stud is bonded to the test surface and the stud is pulled in the direction normal to the surface, both surfaces of the failure interface were analyzed by X-ray photoelectron spectroscopy (XPS). The F/C ratio was used as an indicator of the failure mode: an F/C ratio of zero corresponds to failure in the epoxy, while an F/C ratio near 2.0 indicates failure in the bulk PTFE. Intermediate F/C ratios indicate that failure occurred in the zone modified by the pretreatment. Using this analysis method, failure in sodium etched samples is shown to be cohesive, occurring between the modified layer and the bulk PTFE and not between the epoxy and the treated PTFE. The adhesion properties therefore are assumed to be limited by the properties of the treated layer. Sodium-treated PTFE will degrade with exposure to UV radiation. Immediately after sodium treatment, the PTFE surface is dark brown. The weaker the etching solution, the lighter the color change and the weaker the bond will be. When exposed to UV radiation, the treated PTFE will gradually return to its original white color. Exposure to light, abrasion, heat and some oxidizing agents will also degrade the treated surface. The shelf life of treated surfaces may be as high as 3 to 4 months when stored below 5 °C in a dark oxygen- and moisture-free environment. Plasma etching In plasma etching, the PTFE is exposed to plasma, an electrically charged gas. Various gases may be used to generate the plasma. Like chemical etching, plasma etching also defluorinates the PTFE, though not to the same degree. F/C ratios drop from 2.0 to 1.4 with an argon plasma, to 1.8 with an oxygen plasma, and to 0.7-0.8 with an ammonia or hydrogen plasma. Topographically, plasma treatment changes the surface morphology, with different morphologies resulting from different plasma gases used. 
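Both the sodium-etching and plasma-etching results above are commonly summarized through the XPS-derived F/C ratio. The following minimal sketch encodes the qualitative interpretation given in this article (a ratio near zero indicates failure in the epoxy, a ratio near 2.0 indicates failure in the bulk PTFE, and intermediate values indicate failure in the treatment-modified layer); the numeric cutoffs used here are illustrative assumptions, not values taken from ASTM D4541 or any published study.

```python
def fc_ratio(atomic_percent_f, atomic_percent_c):
    """Fluorine-to-carbon atomic ratio from XPS atomic percentages."""
    return atomic_percent_f / atomic_percent_c

def failure_mode(fc, near_zero=0.2, near_ptfe=1.8):
    """Qualitative failure-mode call based on the F/C ratio of a fracture surface.

    The cutoffs are illustrative assumptions: the article only states that
    a ratio of ~0 means failure in the epoxy, ~2.0 means failure in bulk PTFE,
    and intermediate values mean failure in the treatment-modified layer.
    """
    if fc <= near_zero:
        return "failure within the epoxy adhesive"
    if fc >= near_ptfe:
        return "failure within the bulk PTFE"
    return "failure within the treatment-modified PTFE layer"

# Example: a fracture surface showing partial defluorination
print(failure_mode(fc_ratio(35.0, 50.0)))  # F/C = 0.7 -> modified layer
```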
Contact angle decreased with treatment by some, but not all, plasma gases – in one study, argon plasma decreased the contact angle from about 105 degrees to 30 degrees after 1 hour of treatment, but oxygen plasma did not affect the contact angle. Surface energy increased from 16.4 mN/m to 48.8 mN/m after ammonia plasma treatment and 36.8 mN/m after hydrogen plasma. The aluminum stud pull-off test showed an increase from 31 N to about 200 N after either ammonia or hydrogen plasma treatment. XPS analysis of the plasma treated failure interface indicated cohesive failure between the modified layer and the bulk PTFE, similar to the chemically etched samples. Comparison between chemical etching and plasma etching Despite similar failure mechanisms in both sodium-etched and plasma-etched samples, sodium etching produces much higher bond strengths than plasma etching. Sodium-etched samples exhibited 4 to 5 times the strength of plasma-etched samples when tested in tension per ASTM D4541. When tested in peel, sodium-etched samples exhibited 3 to 12 times the peel strength of plasma-etched samples, depending on the type of plasma used. One proposed explanation for the large difference in bond strengths is that chemical etching modifies the PTFE to a greater depth than plasma etching, increasing the tortuosity of the fracture path through the etched layer during adhesion testing. Another explanation for the large difference in bond strengths is that, in addition to defluorination, sodium etching results in cross-linking which may stabilize the modified PTFE interface, while plasma etching may cause chain scission (breakage of the PTFE polymer chain), since the C-C bond is weaker than the C-F bond. This polymer chain scission weakens the strength of the modified PTFE. While plasma etching is not able to achieve adhesion increases approaching those of chemical etching, it does provide some improvement to PTFE adhesion over untreated PTFE. Other PTFE surface treatments Ion beam and laser treatments have also been studied as methods to improve PTFE adhesion. However, neither of these treatment modalities appears to be in use commercially. Ion beam-treated PTFE exhibits significantly greater surface morphology changes than either chemical etching or plasma etching. Ion beam treatments with pure argon or pure oxygen result in minimal defluorination as determined by F/C ratio. Contact angle actually increased with ion beam treatment. Peel strength with ion beam treatment increased as a function of the ion beam dose, achieving higher peel strengths than plasma-treated samples at doses above 5E15 ions/cm2. The primary effect of ion beam treatment therefore is morphology modification, with little chemical effect. Longer ion beam treatment time is assumed to increase surface area for bonding, which in turn increases peel strength. Laser surface roughening of PTFE has also been studied as a potential method for increasing bond strength to PTFE. In one study, Rauh et al. treated PTFE with a pulsed ArF laser at 193 nm. Multiple pulses were required to achieve a uniform roughness across the surface due to inhomogeneity of the untreated material. Peel test results using epoxy resin showed an increase from 0.9 N/cm to 8.9 N/cm. References Fluoropolymers Biomaterials Dielectrics Dry lubricants DuPont DuPont products Fluorocarbons Plastics
Surface treatment of PTFE
[ "Physics", "Biology" ]
2,279
[ "Biomaterials", "Unsolved problems in physics", "Materials", "Medical technology", "Dielectrics", "Amorphous solids", "Matter", "Plastics" ]
56,316,763
https://en.wikipedia.org/wiki/Testing%20of%20advanced%20thermoplastic%20composite%20welds
Welding of advanced thermoplastic composites is a beneficial method of joining these materials compared to mechanical fastening and adhesive bonding. Mechanical fastening requires intense labor and creates stress concentrations, while adhesive bonding requires extensive surface preparation and long curing cycles. Welding these materials is a cost-effective method of joining with respect to preparation and execution, and these materials retain their properties upon cooling, so no post processing is necessary. These materials are widely used in the aerospace industry to reduce the weight of a part while maintaining strength. For many industries there are codes and standards that need to be followed when a material is implemented into service. The quality of the welds made on these materials is important in ensuring people receive safe products. There are no codes written specifically for welds in advanced thermoplastic composites, so the codes for adhesive bonding of plastics and metals are slightly altered and used in order to properly test these materials. Even though the joining method is different, these materials have mechanical requirements they need to meet. Weld testing and analysis There are several mechanical properties that need to be tested to ensure the quality of welds. The testing methods discussed in this article are referenced from the ASTM adhesive bonding standards. The properties that need to be tested are shear strength, fracture toughness, and fatigue properties. Optical microscopy is also often done to look for weld defects. Testing for shear strength According to ASTM D1002, the specimens tested are configured as lap joints. They need to be sectioned in a way that they can fit in the grips used for the tensile testing. The length of the overlap for the lap joint is determined by the thickness of the material, the yield point of the metal, and the value that is 50% of the estimated average shear strength in an adhesive bond; for the purpose of this article it is specified for a welded joint. The code also specifies the required capabilities of the machine used to test the shear strength. The breaking load of the specimens must fall between 15 and 85 percent of the full scale capability of the apparatus. For thermoplastic composites these machines need to be able to maintain a loading rate of 80–100 kg/cm2. The jaws of the machine must be aligned with the test specimen so that, as soon as the test starts, the long axis of the test specimen is aligned with the direction of the applied tension. The machine's grip on the test specimen must be 63 mm (2.5 in). The code specifies what needs to be recorded from the test, such as the material used, material thickness and other necessary sample measurements, and material properties. ASTM also addresses the precision of testing and the avoidance of bias in the results. Fatigue strength ASTM D3166 specifies fatigue testing methods for metal-to-metal adhesive joints. It references ASTM D1002 for creating test specimens. The testing machine must be capable of applying a sinusoidal cyclic axial load. The cycle rate and type of equipment can influence the results of the tests being run. A rate of 1800 cycles/minute is recommended unless otherwise specified. Tests are generally run at ambient temperature and humidity, which is specified as 50% relative humidity ±4% and 23 °C ±1.1 °C. At least 5 S-N curves need to be generated for a welded joint to give a usable range of cyclical loads on the material. 
The loads need to be varied from a minimal value, ideally one corresponding to a fatigue life above 2000 cycles, to a load equal to 10% of the material's maximum strength. Fracture toughness Impact testing is done to test the fracture toughness of the welded joints. ASTM D5041 is used for reference when doing impact tests on advanced thermoplastic composites. Impact tests provide data on how much energy is needed to break the material, and they can also shed light on the modes of failure for a given joint. The testing machine must move at a constant speed before impact and be able to give a force readout on impact; generally a wedge is used as the impact tool, along with other requirements specified by the code. The code calls for the tests to be done at ambient lab conditions, but depending on the application of the material this may change. The standard speed of testing for impacts is 127 mm/min, while the standard chart speed is 250 mm/min. Optical microscopy Optical microscopy is a necessary testing method for observing the quality of the weld joint. There are defects that can occur during welding that can weaken the joint or cause stress concentrations. Voids can occur during processes such as induction, ultrasonic, and resistance welding, so visual inspection is important for ensuring a quality joint, both while developing weld procedures and when putting parts into service. Inspection can be done with the naked eye, an optical microscope, or higher-powered devices such as a scanning electron microscope (SEM). Taking a cross section of the welded joint will allow the joint to be inspected for defects. Non-destructive Testing of Thermoplastic Composite Welds Many of the nondestructive testing (NDT) methods available for testing of thermoplastic composite base materials can be used for welds in thermoplastic composites as well. In some cases, modifications are necessary. International standards like EN 13100-1, 13100-2, 13100-3 & 13100-4 govern inspection of the base materials. While these standards were not necessarily developed specifically for the welds in said materials, the physical principles are often still applicable. The methods include: Visual Inspection (VT) is typically the first option for any attempt at NDT, being the least expensive, as it requires the least specialized training and usually few if any special tools. Defects on the surfaces of thermoplastic composite welds can be detected visually if they are of sufficient size. Weld defects such as misalignment, porosity, lack of fusion and degradation of the matrix and/or fibers may be visually apparent. Subsurface defects may not be visible unless the composite matrix is nearly transparent and the embedded fibers do not obscure them. Ultrasonic testing (UT) can offer detailed NDT information for welded thermoplastic composites. Tests can be done with shear (transverse) waves, though the composite materials often attenuate the signals significantly and care must be taken to account for this. Contact methods using either manual or automated transducers coupled to the part being inspected or non-contact methods using water immersion or a bubbler (i.e. a continuous stream of water through which the ultrasound passes) can be effective if designed and calibrated properly. Amplitude of reflection data may be used to generate B-scan or C-scan images, which can show the materials being welded at various, discrete depths or cross sections, a capability not available with traditional radiographic methods. 
Ultrasound can detect delaminations, lack of fusion, porosity, voids, inclusions and other defects mostly regardless of their orientation. Deterring factors include that the method is time consuming and the data are open to some interpretation, requiring a skilled technician to perform and interpret the test. Radiographic testing (RT) can be performed in several ways. Typically low energies are required for testing of composites in order to see any detail, which restricts the radiation sources used to X-ray types rather than gamma sources like Ir-192 or Cobalt-60, which tend to have higher energy levels. Data may be recorded either on film or digitally, using specially developed screens for detecting and saving an image that can be manipulated later with the proper software and hardware. Because radiographic testing relies on differences in material density to provide an image, resolution of fibers like carbon from the thermoplastic matrix is not always very high, since the density of the plastic does not differ much from that of the carbon or glass filaments. For digital imaging, the lack of contrast may be partially addressed after the radiographic images are taken, using digital imaging software. Radiography can detect porosity, voids and possibly differences in fiber density or orientation in the composite matrix due to the welding process. Lack of fusion may not be visible by RT unless it is perpendicular to the direction of the source of radiation. Computed Tomography (CT), a subset of radiographic testing, is proving useful for the inspection of thermoplastic composite welds. CT involves the computerized building of a 3-D image using X-rays taken from numerous, incremental angles. It is particularly useful for the determination of fiber orientation in welds of glass reinforced composites. Thermography involves testing the part for discontinuities that can be seen by an infrared camera when the part is heated or cooled. It offers a significant improvement on some of the more traditional NDT methods in that it can be used on large areas of, for example, airplane parts or storage tanks. Eddy Current testing (ET) has been found to be useful for characterizing the nature of fibers and their orientation in certain composite materials, particularly those with conductive reinforcing fibers. It would not be useful for composites reinforced with glass or aramid fibers, for example, as no currents can be induced in these insulating materials. Much higher magnetic field frequencies are used to generate the eddy currents used for testing plastic composites than are typically used for metals. Though delaminations in the material were either undetectable or nearly so, more recent research has found that by induction heating the part in addition to exciting an alternating magnetic field, some delaminations could also be detected in CFRP. Laser Shearography involves accurately measuring perturbations in the surfaces of a (usually thin) part under load or strain with the aid of lasers scanning across the surface being evaluated. Voids, pores, delaminations and other defects in composite welds can be detected by this method. Acoustic Emission testing provides qualitative information on the presence and potential growth of defects such as cracks and delaminations in welded composite materials. 
Typically this method is used to help narrow down the location(s) of defects in large structures before using a more precise NDT method such as radiography or ultrasonic testing to help localize and characterize the nature of the defect. References Materials testing Materials science Plastic welding Welding
Testing of advanced thermoplastic composite welds
[ "Physics", "Materials_science", "Engineering" ]
2,107
[ "Applied and interdisciplinary physics", "Welding", "Materials science", "Materials testing", "Mechanical engineering", "nan" ]
56,320,236
https://en.wikipedia.org/wiki/Pointing
Pointing is a gesture specifying a direction from a person's body, usually indicating a location, person, event, thing or idea. It typically is formed by extending the arm, hand, and index finger, although it may be functionally similar to other hand gestures. Types of pointing may be subdivided according to the intention of the person, as well as by the linguistic function it serves. Pointing typically develops within the first two years of life in humans, and plays an important role in language development and reading in children. It is central to the use of sign language, with a large number of signs being some variation on pointing. The nature of pointing may differ for children who have autism or who are deaf, and may also vary by gender. It is typically not observed in children who are blind from birth. Pointing may vary substantially across cultures, with some having many distinct types of pointing, both with regard to the physical gestures employed and their interpretation. Pointing, especially at other people, may be considered inappropriate or rude in certain contexts and in many cultures. It is generally regarded as a species-specific human feature that does not normally occur in other primates in the wild. It has been observed in animals in captivity; however, there is disagreement on the nature of this non-human pointing. Definition and types The primary purpose of pointing is to indicate a direction, location, event or thing relative to a person. Pointing is typically defined as having either three or four essential elements: Extension of the index finger; Flexing the remaining fingers into the palm, possibly with the thumb to the side; Usually, but not always, the pronation of the palm to face downward, or to face the mid-line of the body; and Extension of the arm. Gestures that do not meet these three or four criteria are usually classified as a "reach" or an “indicative gesture”, although there is no clear consensus on how to differentiate between the two. Additionally, there may be little or no behavioral or functional difference depending on whether a gesture is considered to be pointing, reaching, or otherwise indicative, and reaching may be considered a form of whole-hand pointing. In one review, 11 separate definitions were identified for the related motions of reach, reaching out, reaching, indicating, and indicates. Imperative, declarative and interrogative pointing Types of pointing are traditionally further divided by purpose, between imperative and declarative pointing. Imperative pointing is pointing to make a request for an object, while declarative pointing is pointing to declare, to comment on an object. As Kovacs and colleagues phrase it, "'Give that to me' vs. 'I like that'". This division is similar to that made by Harris and Butterworth between "giving" and "communicative" pointing. Determining the intention of pointing in infants is done by considering three factors: Whether the behavior is direct toward another, Whether it includes "visual-orientation behaviors" such as observing the recipient of the pointing in addition to the object pointed to, and Whether the gesture is repeated if it fails to achieve the intended effect on the recipient. Declarative pointing may further be divided into declarative expressive pointing, to express feelings about a thing, and declarative interrogative pointing, to seek information about a thing. 
However, according to Kovacs and colleagues interrogative pointing is clearly different from declarative pointing, since its function is to gain new information about a referent to learn from a knowledgeable addressee. Therefore, unlike declarative pointing, interrogative pointing implies an asymmetric epistemic relation between communicative partners. Linguistic function Types of communicative pointing may be divided by linguistic function into three main groups: Objective pointing: pointing to an object within the visual field of both the pointer and the receiver, such as pointing to a chair which is physically present Syntactic or anaphoric pointing: pointing to linguistic entities or expressions previously identified, such as pointing to the chair which is not physically present Imaginative pointing: pointing to things that exist in the imagination, such as pointing to a fictional or remembered chair Additionally, pointing in children who are deaf may be divided between diectic or "natural" pointing, which is shared with hearing children, and symbolic pointing used specifically in sign language, learned by observing and imitating others who sign. Development Pointing is the first communicative gesture that develops in human infants. It is not clear to what extent the behavior may first emerge as a form of meaningless ritualization, whether some infants may comprehend and visually follow the pointing of others without yet pointing themselves, or whether pointing begins as a form of meaningful imitation, where an infant learns they can produce the same effect in adults as adult produce in them, by mimicking the action of pointing, and drawing attention to an object. Pointing generally emerges within the first two years of life, weeks prior to a baby's first spoken word, and plays a central role in language acquisition. The onset of pointing behavior is typically between seven and 15 months of age, with an average of between 11 months and one year. By eight months, parents reported that 33% of babies exhibited pointing behaviors, with pointing to nearby objects usually occurring by 11 months, and pointing to more distant object by 13 months. By one year of age, more than half of children will exhibit pointing behavior. As early as 10 months of age, children have been shown to spend more time being attentive to novel objects when they are pointed to by others, when compared to objects that are merely presented to them. This time is increased if the object is also labelled verbally. Pointing by children is associated with a high rate of verbal response from adults, specifically labeling the object pointed to. This interaction allows the child to check for words labeling objects they do not yet know, and, when combined with declarative verbal statements on the part of the child, may allow them verify the accuracy of words they have already learned. Infants may begin to point in situations where no one else is present, as a form of egocentric expression, termed "pointing-for-self". This is differentiated from "pointing-for-others" which is done while looking at a "recipient" of the pointing, and done as a communicative gesture. Kita specifies this variety of pointing in the context of being a deictic gesture, which is done for the benefit of an audience, as distinct from what are deemed "superficially similar behaviors". 
Demonstrating this, as they mature infants will first point at an object, and then visually verify whether the recipient is being attentive to the object, and by 15 months of age, will first verify that they have the attention of the recipient, and only then point as a means of redirecting that attention. Children are more likely to point for adults who respond positively to the gesture. At 16 months they are less likely to point for adults who are shown to be unreliable, adults who have mislabeled objects the children already know the correct word for. At two years of age, children have been shown to be more likely to point for adults than for children their own age. Relationship with language A meta-analysis by Colonnesi and colleagues found a strong relationship between pointing and language, including between pointing at an early age and language ability in later life, and pointing at an early age as a predictor of two-word vocal combinations. They concluded that only a "few studies" had not found a strong correlation between pointing and the development of language. Research has also shown that the frequency of communicative pointing from ages 9 months to 12.5 months was positively correlated with vocabulary size for children at age two. The relationship between language development and pointing tends to be stronger in studies which examined declarative pointing specifically or pointing generally, rather than imperative pointing. In school age children, finger-pointing-reading (reading while pointing to words or letters as they are spoken) can play an important role in reading development, by helping to emphasize the association between the spoken and printed word, and encouraging children to be attentive to the meaning of text. Deaf speakers of sign language Pointing plays an important role in sign language, which as much as 25% of signs being a variation of pointing. Children who are deaf have been shown to begin pointing at a similar age to non-deaf children, but this did not confer any advantage in the acquisition of pronouns in American Sign Language. Initial observations give some indication that deaf children acquiring the use of American Sign Language (ASL) may exhibit self-pointing behavior earlier than hearing children who are acquiring speech. Pointing to a location begins being deictic for deaf children and hearing alike, but becomes lexicalized for more mature signers. There is a distinction between linguistic pointing in ASL and gestural pointing by deaf users, the latter being identical for deaf and hearing people. One small-scale study found that the errors in pointing behavior produced by autistic deaf children and autistic hearing children were similar. Both deaf and hearing children use pointing abundantly while learning language, and initially for the same reason, although that starts to diverge as deaf children acquire signs. Pre-verbal hearing infants use pointing extensively, and use a combination of one word plus one gesture (mostly pointing) before they can produce a two-word sentence. Another study looked at deaf Japanese infants acquiring language from ages four months to two years, and found that the infants moved from duos (where a point plus an iconic sign referred to the same thing) to two-sign combinations where they referred to two different things. As they grew, the latter grew more frequent and led to the development of two-sign sentences in Japanese Sign Language. 
Autistic people Children with autism show marked differences compared to others, and greater difficulty in their ability to interpret pointing as a form of communication, and a sign of "something interesting". This is similar to difficulties they may experience with other deictic communication, which depend on an interpretation of the relationship between speaker and listener or on particular spatial references. A lack of declarative or proto-declarative pointing and the inability to follow a point are important diagnostic criteria for children with autism, and have been incorporated into screening tools such as the Modified Checklist for Autism in Toddlers. Other factors Pointing is dependent on vision, and is not observed in children who are blind from birth. A number of differences have been observed regarding the onset of pointing behavior and gender, and the tendency to point using the right or left hand, with girls being more likely to point up to 15 degrees into the left visual periphery using their right hand, and being ambidextrous further to the left, while boys are typically ambidextrous for 15 degrees to the left and right of center. Pointing at remembered targets can be impaired in cognitive impairment due to dementia. Cultural variations The gestures used for pointing and their interpretation vary among different cultures. While studies have observed index finger pointing in infants across a range of cultures, because those studied are also ones where adults frequently use this type of pointing, further study is needed to determine whether these are examples of imitation of behaviors performed by observed adults, or whether it indicates pointing may be biologically determined. In much of the world, pointing with the index finger is considered rude or disrespectful, especially pointing to a person. Pointing with the left hand is taboo in some cultures. Pointing with an open hand is considered more polite or respectful in some contexts. In Nicaragua, pointing is frequently done with the lips in a "kiss shape" directed towards the object of attention. Different cultures may point using a range of variations on index finger pointing. In Japan, pointing is done with the fingers together and the palm facing upwards. Those of Indian heritage may point using the chin, whole hand, or thumb. They may consider index finger pointing rude, but further distinguish a point using two fingers for use only at someone considered inferior. In those living near the Vaupés River, Dixon noted at least three distinct types of pointing: pointing with the lips for "visible and near" ... pointing with the lips plus a backwards tilt of the head for "visible and not near" ... pointing with the index finger for "not visible" (if the direction in which the object lies is known) Alternatively, among Aboriginal Australian speakers of Arrernte, researchers identified six distinct types of pointing: index finger, open hand palm down, open hand palm vertical, "'horn-hand' pointing (with the thumb, the index finger and the pinkie extended)", pointing with a protruded lip, and pointing with the eye. When pointing to indicate a position in time, many, but not all cultures tend to point toward the front to indicate events in the future, and toward the back to indicate events in the past. One noted exception is that of speakers of Aymara, who instead tend to associate what is in the past, what is known, with what is in the front, what is seen, and vice versa. 
In non-human animals There is considerable disagreement as to the nature of pointing behaviors in non-human animals. Miklósi and Soproni described pointing as a "species specific, human communicative gesture" not regularly used by any other species of primates living in the wild. Kita concluded similarly that "on the evidence to date only humans use the pointing gesture declarative to share attention with conspecifics." Kovács and colleagues state "pointing as a referential communicative act seems to be unique to human behavior." However, the claims that pointing is a unique human gesture have not gone unchallenged. A study in 1998 by Veà and Sabater-Pi described examples of explicit declarative pointing in bonobos in what is now the Democratic Republic of the Congo through these observations: “Noises are heard coming from the vegetation. A young male swings from a branch and leaps into a tree... He emits sharp calls, which are answered by other individuals who are not visible. He points—with his right arm stretched out and his hand half closed except for his index and ring fingers—to the position of the two groups of camouflaged observers who are in the undergrowth.” This was one of the only observations of pointing in the wild by primates for years, but recently other possible examples have been documented. Researchers claimed to observe imperative pointing gestures produced by bonobos when attempting to initiate genital rubbing, as well as by chimpanzees when reaching towards objects they desired, although even these researchers admitted the rarity of chimpanzee pointing in the wild. In the wild, both chimpanzees and bonobos have been shown to seemingly signal and gesture to direct each other's attention, through acts like beckoning and “directed scratching.” Thus, while it is clear that other primates use gestures to direct attention, it is still uncertain as to whether this is done overtly as in humans through pointing. Although the question of whether primates point in the wild is still up for debate, pointing in captivity is widely established in primates. Leavens and Hopkins note that pointing behavior has been observed in captivity for a range of species. In some, such as apes, the majority of such behavior is spontaneous (meaning without explicit training to do so), but occurs only rarely in others, such as monkeys. When present, this may be accompanied by visual monitoring of the person being interacted with, the audience of their gesture, rather than being attentive only to the object pointed at. Moreover, it seems that non-human great apes also take the perspective of the communicative partner in order to produce clear, unambiguous points. Studies have found that apes point with the dual goal of directing attention and requesting food, and additionally that they are sensitive to the looking behavior in response to their pointing gestures. There remain questions as to whether this constitutes "true pointing", and whether non-humans have the social or cognitive abilities to understand the intentional communicative nature of pointing. These questions arise especially due to the nature of these primate pointing gestures. They are only produced for humans and not for other apes, and often use the full hand instead of the typical extended index finger that humans use. However, this may be countered with observations in apes trained to use sign language, which do point with the index finger. 
Recently, studies have also indicated that chimpanzees in captivity use pointing as a flexible signal, by raising their arms in order to point to objects that are further away. Thus, while apes certainly can point, it is difficult to ascertain whether they point naturally. However, this debate may result from the fact that there are procedural differences in how non-human primates are tested, making it more difficult to ascertain if non-human primates do point. The experimenter must be safe, as a result of testing non-human primates, thus a barrier is introduced between the participant and experimenter. Dogs and infants do not have this precaution. However, Udell and colleagues tested dogs with and without a fence, using the object choice task in a similar manner to that of a barrier. The authors reported that the barrier produced a decrease of 31% in terms of success for canines. This has also been shown in pointing as well, where barriers that are present for dogs showed lower success rates than when absent, highlighting that this debate may be partially the result of systematic procedural differences. In contrast to the production of pointing, some non-human animal species can appropriately respond to pointing gestures, preferring an object or direction, which was previously indicated by the gesture. Cats, elephants, ferrets, horses and seals can follow the pointing gesture of a human above chance, while dogs can rely on different types of human points and their performance is comparable to that of 2-year-old toddlers in a similar task. However, it seems, that the default function of pointing is different in dogs and humans, because pointing actions refer to particular locations or directions for dogs in an imperative manner, while these gestures usually indicate specific objects in humans to ask for new information or to comment on an object. There is considerable debate as to whether apes are able to comprehend pointing gestures as well, and it has even been argued that dogs are better able to understand pointing than apes. Some hypothesize that pointing comprehension should be more prevalent in species with a stronger tendency to cooperate, which would explain negative results in apes due to the mainly competitive relationship present in most species of apes and monkeys. However, wolves fare poorly in pointing comprehension tests as well, and are a highly cooperative species, countering this hypothesis. More recent studies have refuted the claim that apes are poor at comprehending pointing, and provide evidence that the tests used to evaluate pointing comprehension are often inaccurate, especially in apes. Thus there is conflicting evidence and debate as to whether apes fully comprehend pointing gestures. See also Deixis, words and phrases that cannot be fully understood without additional contextual information Finger gun, a similar hand gesture Joint attention, shared focus of two people on an object, resulting from pointing or other cues List of gestures used in non-verbal communication Ostensive definition, conveys the meaning of a term by pointing out examples Pointing breed, dogs trained to find and indicate the direction of game Pointing device, an input interface for inputting spatial data into a computer Semiotics, the study of meaning-making, sign process, and meaningful communication Sign (semiotics), something that communicates a meaning that is not the sign itself Notes References External links Hand gestures Developmental psychology Nonverbal communication
Pointing
[ "Biology" ]
4,019
[ "Behavioural sciences", "Behavior", "Developmental psychology" ]
56,322,565
https://en.wikipedia.org/wiki/Kazan%20Soda%20Elektrik
The Kazan Soda Elektrik, full name Kazan Soda Elektrik Üretim A.Ş., is a chemical industry and electric energy company in Ankara Province, Turkey, producing natural soda ash and baking soda from trona. The company is a subsidiary of Ciner Holding. Background The trona ore deposits were owned by Rio Tinto Group, an Australian-British multinational and one of the world's largest metals and mining corporations. After survey activities, which lasted more than fifteen years, the company concluded that it would be unable to operate the mining of the trona ore reserves there, and sold the deposits to Ciner Holding in 2010. The construction of the soda products plant began in 2015, after five years of efforts to secure bureaucratic permissions and financing. The investment budget of the project was US$1.5 billion. The financing of the project was provided by the Industrial and Commercial Bank of China (ICBC), Exim Bank of China and Deutsche Bank, backed by the China Export and Credit Insurance Corporation (Sinosure). Sberbank of Russia financially contributed during the groundbreaking phase. The construction of the facility was carried out by the China Tianchen Engineering Corporation (TCC). The facility was completed within two and a half years. The Kazan Soda Elektrik plant was inaugurated on January 15, 2018, in the presence of Turkish President Recep Tayyip Erdoğan, Minister of Energy and Natural Resources Berat Albayrak, Minister of Labour and Social Security Jülide Sarıeroğlu, Ambassador of China Yu Hongyang and many other high-profile politicians and officials. Plant and production The Kazan Soda Elektrik consists of three sections, namely mining, processing and cogeneration. While the mining area is located in Kahramankazan district, the production plant is situated within the Sincan district of Ankara Province, northwest of Ankara. It is about north of Ankara. The plant's mining section supplies the processing section with the trona solution (trisodium hydrogendicarbonate dihydrate), which is the primary source of soda ash. For this, trona ore, lying on average at a depth of underground, is injected with hot water through drilled boreholes, and the dissolved trona is pumped up in the form of trona solution. The plant has five processing lines. The cogeneration facility produces 380 MWe of electric power and 400 tons of steam. The annual production capacity of the plant is 2.5 million tons of soda ash (sodium carbonate, Na2CO3) and 200,000 tons of baking soda (sodium bicarbonate, NaHCO3). If all the production were exported to Europe, it would increase the supply of this key glass raw material there by around 25%. Around 1,000 people are employed by the company. The trona ore reserve of Kazan Soda Elektrik is the world's second largest. The plant is the biggest soda ash production facility in Europe. With both Kazan Soda and Eti Soda, Ciner Holding and Turkey become leading soda ash producers in the world. The soda ash produced has a purity grade of 99.8%, which is the purest in the world. The total annual export value of the products from Kazan Soda Elektrik and Eti Soda will be US$800 million. Sustainability The company has published a CDP report scoring its environmental impact. Its scope 1 and 2 emissions intensity in 2019 was 0.345 tonnes CO2e per tonne of product. However, the first implementation of the EU Carbon Border Adjustment Mechanism does not include soda. 
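For a rough sense of scale, the reported emissions intensity and nameplate capacities can be combined in a back-of-the-envelope estimate. This is purely illustrative: it assumes full utilization of the stated capacity and is not a figure reported by the company or CDP.

```python
# Illustrative only: combines the 2019 scope 1+2 intensity reported above
# with the stated nameplate capacities; assumes full capacity utilization.
intensity_t_co2e_per_t = 0.345      # tonnes CO2e per tonne of product (2019, scope 1+2)
soda_ash_capacity_t = 2_500_000     # tonnes per year
baking_soda_capacity_t = 200_000    # tonnes per year

total_product_t = soda_ash_capacity_t + baking_soda_capacity_t
estimated_emissions_t = total_product_t * intensity_t_co2e_per_t

print(f"~{estimated_emissions_t / 1e6:.2f} million tonnes CO2e per year")
# -> ~0.93 million tonnes CO2e per year at full capacity
```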
See also Eti Soda, Turkey Ciner Wyoming, United States References Ciner Glass and Chemicals Group Mining companies of Turkey Chemical companies of Turkey Chemical plants Industrial buildings in Turkey Industrial buildings completed in 2018 Chemical companies established in 2010 Non-renewable resource companies established in 2010 2010 establishments in Turkey Companies based in Ankara Kahramankazan District Sincan, Ankara 21st-century architecture in Turkey
Kazan Soda Elektrik
[ "Chemistry" ]
808
[ "Chemical process engineering", "Chemical plants" ]
71,972,078
https://en.wikipedia.org/wiki/Global%20Ocean%20Ship-based%20Hydrographic%20Investigations%20Program
GO-SHIP (The Global Ocean Ship-based Hydrographic Investigations Program) is a multidisciplinary project to monitor ocean/climate changes. So far, this program has involved twelve countries and completed or planned 116 cruises. The participating countries are the United States, United Kingdom, Japan, Canada, Germany, Spain, Australia, Norway, France, South Africa, Ireland and Sweden. Most of the cruises are completed by the United States, United Kingdom, Japan, Canada, Germany and Spain. Background Between 1872 and 1876, the Challenger expedition started modern marine surveying and marked the foundation of oceanography. Since then, scientific exploration of the oceans has made many discoveries. At the end of the 19th century, America built the to carry out ocean surveys. In 1893, the Norwegian scientist Fridtjof Nansen froze his ship Fram into the Arctic ice-cap for three years to undertake long-term observations of oceanographic, meteorological and astronomical data (Apsley Cherry-Garrard, The Worst Journey in the World, Carroll & Graf Publishers, 1922, p. xxii). One of the first acoustic measurements of the ocean floor was made in 1919. From 1925 to 1927, the Meteor expedition used echo sounders to take 70,000 ocean depth measurements and explore the Mid-Atlantic Ridge. In 1953, Maurice Ewing and Bruce Heezen discovered the global ridge system extending along the Mid-Atlantic Ridge. In 1960, Harry Hammond Hess developed the theory of seafloor spreading from ocean exploration. The Deep Sea Drilling Project started in 1968. In recent years, oceanographic investigation has revealed that the ocean environment is changing, for example through ocean acidification, changing water temperature, changes in the carbon cycle, and sea level rise. Oceanographers are trying to find solutions to these changes through ocean exploration. However, it is hard to understand the whole system from a single discipline, because the ocean environment is balanced by both its physical and chemical conditions, which together are essential for the diversity of marine biology. For example, if the ocean surface temperature rises, it affects nutrient distributions, mixed layer depth, ocean currents, pH conditions, salinity distributions and so on. Such a series of changes in the ocean environment could even cause a dramatic decrease in some species and affect the entire ocean food web. Scientists have many assumptions and predictions about the consequences of climate change in the ocean, but only long-term ocean exploration can test these assumptions. On the other hand, the ocean is large, accounting for about 97.2% of the Earth's water resources and covering more than 70% of the Earth's surface (see water distribution on Earth), and its basins are connected with each other. If one of the oceans changes, the others are also influenced. Thus it is necessary to use global ocean data to measure how a change in one can influence the others. However, ocean exploration is costly, and no single country can afford continuous yearly global ocean cruises on its own. Therefore, GO-SHIP was launched as one of several global ocean observation and exploration programs. Besides GO-SHIP, there are other programs such as the World Ocean Circulation Experiment, the Tropical Ocean Global Atmosphere program, Argo (oceanography), NPOCE, the Global Ocean Observing System and the International Ocean Discovery Program. Contributions and discoveries GO-SHIP data have suggested that from the 1990s to the 2000s the deep ocean (z > 2000 m) warmed by absorbing some of the extra heat in the climate system. 
The GO-SHIP global sampling has shown that the warming is markedly larger in regions of Antarctic Bottom Water (AABW), especially in the Southern Ocean where AABW forms. An anthropogenic carbon storage rate of 2.9 (± 0.4) Pg C year-1 was estimated for the most recent decade. This mean annual ocean uptake rate equates to approximately 27% of the total anthropogenic carbon emissions over 1994 to 2010. Global Cruise Plan The cruise plan includes cruises completed and planned during 2014–2027. The table was updated in May 2022. References Oceanography Research projects Oceanographic expeditions
Global Ocean Ship-based Hydrographic Investigations Program
[ "Physics", "Environmental_science" ]
815
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
71,972,797
https://en.wikipedia.org/wiki/Fast%20endophilin-mediated%20endocytosis
Fast endophilin-mediated endocytosis (FEME) is an endocytic pathway found in eukaryotic cells. It requires the activity of endophilins as well as dynamins, but does not require clathrin. In clathrin-dependent endocytic pathways, endosomes budding from the cell membrane into the cell form in clathrin-coated pits and are coated by clathrin triskelions. In FEME, however, endosomes form when coated by actin, and internalise endophilin A2. Function Each endocytic pathway focuses on a particular component, and FEME is primarily involved in transporting receptors. These include receptors for acetylcholine and IL-2. Associated proteins EGFR HGFR VEGFR PDGFR NGFR IGF1R SHIP1 SHIP2 References Molecular biology Cellular processes
Fast endophilin-mediated endocytosis
[ "Chemistry", "Biology" ]
187
[ "Biochemistry", "Cellular processes", "Molecular biology" ]
71,985,905
https://en.wikipedia.org/wiki/Salisbury%20Municipal%20Incinerator
The Salisbury Municipal Incinerator was a waste management system built for handling municipal waste in Salisbury, Maryland, United States. It burned trash at high temperatures, releasing toxic gases into the atmosphere and the community. In response to increased awareness of environmental impacts, the city demolished the incinerator and now uses other modern waste management methods. History The Salisbury Municipal Incinerator construction project started in 1949 and was estimated to be fully completed by September 1950. The incinerator was a three-story structure that cost an estimated $261,000 to build. The project was commissioned due to the need for the city of Salisbury, Maryland to have a method for city-wide waste management. The city incinerator later closed down circa the 1960s. There were initial talks of turning it into a museum. Still, the site was later demolished, and the City of Salisbury Police Department was built there. The site was located on Delaware Avenue near the intersection with Route 50. Environmental factors caused by incinerators After its closure, Maryland state law required 23 counties to conform to a waste management plan to handle city waste better. Salisbury's solution for this state mandate was to build a landfill to handle city waste. The Environmental Service Commission spearheaded the project. The landfill project, though completed, faced several issues due to heavy rainfall and flooding, so it had to be closed until after the inclement weather. In November 1992, the Salisbury City Council was quoted an estimated cost of $16 million - $20 million to build a more extensive and efficient landfill. Waste materials dumped into landfills release gases such as methane, ammonia, sulfides, and carbon dioxide, which can contribute to climate change. In 1963, the Clean Air Act initiated the development of a national program to reduce air pollution and protect people's public health and welfare. The 1967 Clean Air Act permitted the enforcement of regulations that impacted waste management services like the Salisbury Municipal Incinerator. Present day The Wicomico County, Maryland Solid Waste Division is now a complex that houses buildings of various functions to enable equipment storage and a physical workplace for employees. Sixty employees work at the landfill to run its day-to-day operations, including landfill operations, dredging, and recycling. To efficiently manage daily operations, the workers utilize 85 pieces of equipment to complete daily tasks. For utmost efficiency, some of the work is contracted out when necessary. In partnership with Ingenco Wholesale Power LLC (now Archaea Energy), the Wicomico Solid Waste Division manages its landfill methane emissions by converting them to renewable energy to heat homes and businesses. References Incinerators Waste management in the United States Buildings and structures in Maryland Landfills Methane Renewable energy Recycling Salisbury, Maryland
Salisbury Municipal Incinerator
[ "Chemistry" ]
549
[ "Greenhouse gases", "Incinerators", "Incineration", "Methane" ]
71,985,996
https://en.wikipedia.org/wiki/Ferrocenecarboxylic%20acid
Ferrocenecarboxylic acid is the organoiron compound with the formula (C5H5)Fe(C5H4CO2H). It is the simplest carboxylic acid derivative of ferrocene. It can be prepared in two steps from ferrocene by acylation with 2-chlorobenzoyl chloride followed by hydrolysis. Reactions and derivatives The pKa of ferrocenecarboxylic acid is 7.8. Upon oxidation to the ferrocenium cation, the acidity increases more than a thousand-fold, to a pKa of 4.54 (a decrease of about 3.3 pKa units, corresponding to roughly an 1800-fold increase in the acid dissociation constant). By treatment with thionyl chloride, the corresponding carboxylic acid anhydride is produced. Derivatives of ferrocenecarboxylic acid are components of some redox switches. Related compounds 1,1'-Ferrocenedicarboxylic acid Ferrocenecarboxaldehyde References Ferrocenes Cyclopentadienyl complexes Aromatic acids
Ferrocenecarboxylic acid
[ "Chemistry" ]
191
[ "Organometallic chemistry", "Cyclopentadienyl complexes" ]
64,634,201
https://en.wikipedia.org/wiki/BioTech%20Foods
BioTech Foods is a Spanish biotechnology company dedicated to the development of cultured meat from the cultivation of muscle cells previously extracted from animals. It is a subsidiary of Brazilian company JBS S.A. History Origins The company is based in Donostia–San Sebastián, Basque Country and was co-founded in 2017 by the CTO of the project, Mercedes Vila, and CEO Iñigo Charola. This project is based on the construction of tissues from the natural proliferation of animal cells in a controlled environment of humidity and temperature, without genetic modification or antibiotics. Cultured meat based on tissue engineering aims to help alleviate three serious sustainability problems: the sharp increase in global demand for animal proteins; the environmental impact of factory farming, associated with the production of greenhouse gases and deforestation; and animal welfare. Development The start-up obtained the support of the CIC Nanogune, a research centre promoted by the Basque government. In 2019 BioTech Foods received the Entrepreneur XXI Award and came first in Expansión's Start Up awards in the Food and Agrotech category. By February 2020, BioTech Foods was in the development phase of Ethicameat, its brand of pig protein products for the general public and the meat sector. BioTech Foods was one of the first companies to emerge in the global cultured meat sector, which could help increase food safety and prevent zoonotic diseases. Pilot plant and JBS investment As of July 2019, one of the main challenges of cultivated meat was the high production cost of its products. BioTech Foods stated it sought 'to reach pilot scale by 2021'. By the end of 2019, BioTech had opened its first pilot plant. In November 2021, BioTech Foods announced an agreement by which JBS S.A. was going to acquire a majority of shares in the company, including the pilot plant it operated in San Sebastián. JBS was going to invest in the construction of a new production plant to help BioTech achieve commercial production capacity in mid-2024. References External links Cellular agriculture Food technology organizations Biotechnology companies of Spain Tissue engineering Spanish companies established in 2017 Spanish subsidiaries of foreign companies JBS S.A. subsidiaries
BioTech Foods
[ "Chemistry", "Engineering", "Biology" ]
437
[ "Biological engineering", "Cloning", "Chemical engineering", "Tissue engineering", "Medical technology" ]
64,635,007
https://en.wikipedia.org/wiki/Toxicology%20Research
Toxicology Research is a publication of Oxford University Press as of 2020. The Journal launched in 2012 and focuses on articles that cover biological, chemical, clinical, or environmental health aspects of the toxic response and the mechanisms involved. Abstracting and Indexing Toxicology Research is abstracted and indexed in Biological Abstracts, BIOSIS Previews, EBSCOhost, Current Contents, Embase, Journal Citation Reports, ProQuest, PubMed, Science Citation Index, Scopus, and Web of Science. According to the latest Journal Citation Reports, the journal has a 2019 impact factor of 2.283, ranking it 64th out of 92 journals in the category "Toxicology". References External links Journal homepage Submission website Toxicology journals Hybrid open access journals Oxford University Press academic journals Academic journals established in 2012 Bimonthly journals Toxicology in the United Kingdom
Toxicology Research
[ "Environmental_science" ]
173
[ "Toxicology in the United Kingdom", "Toxicology", "Toxicology journals" ]
64,636,692
https://en.wikipedia.org/wiki/HAT-P-36
HAT-P-36, also referred to as Tuiren, is a 12th-magnitude G-type main-sequence star estimated to be approximately 958 light-years away from Earth in the constellation Canes Venatici. HAT-P-36 is too faint to be seen with the naked eye, but it is possible to view it with binoculars or a small telescope. In 2012 a hot Jupiter-type exoplanet was discovered orbiting HAT-P-36 with an orbital period of about 1.3 Earth days. In December 2019, HAT-P-36 was named Tuiren and its planetary companion, HAT-P-36b, was named Bran as a result of Ireland's contribution to the 2019 NameExoWorlds campaign. Bran has a mass approximately 1.8 times and a radius approximately 1.2 times that of Jupiter. Etymology HAT-P-36 and its planet are named after characters from The Birth of Bran, a story in the book Irish Fairy Tales by James Stephens. The book is a re-telling of various stories from Irish folklore. Tuiren was the aunt of the mythical hero Fionn mac Cumhaill and was turned into a hound by the fairy Uchtdealbh after Tuiren married her husband. Bran and Sceólan were the two puppies mothered by Tuiren while she was a dog. They were cousins of Fionn mac Cumhaill. The names were proposed by John Murphy, a teacher at Regina Mundi College, Cork. Planets HAT-P-36b (Bran) was discovered in 2012 by the HATNet Project using the transit method. A search for transit timing variations had not resulted in the detection of additional planets in the system as of 2021. Surprisingly, an increase in the planet's orbital period of 0.014 seconds per year had been detected by 2021. References G-type main-sequence stars Canes Venatici Stars with proper names Planetary systems with one confirmed planet Planetary transit variables
HAT-P-36
[ "Astronomy" ]
404
[ "Canes Venatici", "Constellations" ]
59,686,459
https://en.wikipedia.org/wiki/Material%20extrusion-based%20additive%20manufacturing
Material extrusion-based additive manufacturing (EAM) represents one of the seven categories of 3D printing processes, defined by the ISO international standard 17296-2. While it is mostly used for plastics, under the name of FDM or FFF, it can also be used for metals and ceramics. In this AM process category, the feedstock materials are mixtures of a polymeric binder (from 40% to 60% by volume) and a fine grain solid powder of metal or ceramic materials. Similar type of feedstock is also used in the Metal Injection Molding (MIM) and in the Ceramic Injection Molding (CIM) processes. The extruder pushes the material towards a heated nozzle thanks to the controlled axial movement of a piston inside a heated barrel, or the controlled axial rotation of a screw inside a heated barrel, or the controlled rotation of two feeding rollers. Process of Creating EAM Metal Parts The process for creating material extruded metal parts typically involves several stages, transforming them from plastic/metal composites to fully metal parts. Printing: The process begins with printing the part using a filament containing metal powder bound in plastic. This filament, similar to that used in conventional FFF printers, is infused with metal. The printer deposits the metal-infused filament layer by layer, building up the shape of the part. These printed parts are referred to as "green" parts. To compensate for predictable shrinkage during the subsequent sintering process, the green parts are scaled up by 15-20% from their final dimensions. Debinding: After printing, the green parts are placed in a debinding station. In this step, an organic solvent dissolves most of the plastic binding material. Consequently, the green parts transition into "brown" parts. The debinding process eliminates excess plastic, leaving behind a structure of metal powder. Sintering: The brown parts, now washed, are transferred to a sintering furnace. This furnace adheres to a material-specific profile, depending on the material used. Initially, it burns away any remaining binder. Subsequently, it consolidates the metal powder, transforming it into a fully dense, finished metal part. The sintering process is integral as it ensures that the part attains its required mechanical properties. Use: At this stage, the part becomes a fully metal component, ready for use. History R&D developments In 1995, the Fraunhofer IFAM designed a Rapid Prototyping system, starting from a powder‐binder mixture which is squeezed out through a computer‐controlled nozzle. 
Parts are manufactured layer by layer and the “green parts” are debinded and sintered to reach their final density; IFAM restarted this line of research in 2017. In 1998, the concept of hybrid, additive/subtractive Shape Deposition Manufacturing for ceramics was proposed and tested at Carnegie Mellon University. In 2000, a system was developed at Rutgers University for the solid freeform fabrication of multiple ceramic actuators and sensors, starting from green ceramic filaments. In 2005, a system was developed at Drexel University, based on material extrusion, consisting of a mini-extruder with a single screw mounted on a high-precision positioning system, fed with bulk material in granulated form (pellets). In 2015, a 3D printing machine was developed at Politecnico di Milano for MIM metals and CIM ceramics, based on extrusion of pellets with a stationary piston-based extruder over a reversed Delta Robot table. In 2016, developments in multi-material printing enabled material extrusion printers to utilize ceramic-based support materials designed for easy removal. This advancement significantly facilitates the creation of complex geometries, as the support material can be effortlessly broken off after printing. A notable example is Desktop Metal’s machine, which employs a ceramic interface layer on all support structures. This feature ensures that the supports can be snapped off with minimal effort, enhancing the overall efficiency and precision of the printing process. In the past few years, advances in materials science and the expansion of material extrusion systems at companies like Markforged, Desktop Metal, and AIM3D have expanded the range of materials suitable for material extrusion printers. These materials include stainless steel, low-alloy steel, tool steel, aluminum 6061, bronze, chromium zirconium copper, cobalt chrome, and even gold. Commercial developments After 2015, several commercial providers began offering the technology as products, mostly for metal applications, e.g. the Metal X by Markforged, the Studio System by Desktop Metal, and the ExAM by AIM3D. Reference List Manufacturing 3D printing 3D printing processes Fused filament fabrication
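The 15-20% oversizing of green parts described in the process steps above is simply a linear scale factor applied at print time. The short Python sketch below is an illustration only; the function name, dimensions, and shrinkage value are assumptions, not figures from the source.

```python
# Illustrative sketch: oversizing a "green" part so that it reaches the
# desired dimensions after sintering. If sintering causes a linear
# shrinkage fraction s, the green dimensions must be the target
# dimensions divided by (1 - s).

def green_dimensions(target_mm, linear_shrinkage):
    """Return as-printed (green) dimensions for the desired final size."""
    scale = 1.0 / (1.0 - linear_shrinkage)
    return [round(d * scale, 3) for d in target_mm]

# A linear shrinkage of roughly 13-17% corresponds to the 15-20% green-part
# oversizing mentioned above, since 1 / (1 - 0.15) is about 1.18.
print(green_dimensions([50.0, 20.0, 10.0], 0.15))  # -> [58.824, 23.529, 11.765]
```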
Material extrusion-based additive manufacturing
[ "Engineering" ]
979
[ "Manufacturing", "Mechanical engineering" ]
76,344,116
https://en.wikipedia.org/wiki/Ciguatoxin%201
Ciguatoxin 1 or CTX-1 is a toxic chemical compound, the most common and potent type in the group of ciguatoxins. It is a large molecule consisting of polycyclic polyethers that can be found in certain types of fish in the Pacific Ocean. The compound is produced by the dinoflagellate Gambierdiscus toxicus and is passed on through the food chain by fish. The compound has no effect in fish but is toxic to humans. History Before ciguatoxin was discovered and identified, its presence in the food chain was hypothesised by Randall et al., who assumed that the toxin enters the food chain via herbivorous fish that feed on toxic microalgae and then gets passed on to humans directly or by passing through other carnivorous fish. This hypothesis was proven by Helfrich and Banner, who also showed that the toxin has no effect on fish, both herbivorous and carnivorous. Ciguatoxin-1 was first discovered in 1967 by Scheuer et al. when studying ciguatera fish responsible for food poisoning. Later on, in 1977 Yasumoto et al. isolated the compound from dinoflagellates and named it ciguatoxin, after which it was classified as a polyether compound. In the 1980s and early 1990s the full structure of ciguatoxin-1 was elucidated using NMR spectroscopy, mass spectrometry and X-ray crystallography. Due to the high complexity of its structure, ciguatoxin-1 has not been assigned an official IUPAC name and is denoted simply as ciguatoxin-1 or CTX-1. As such, the compound has not found any practical use in daily life. However, it has proved useful for studies of voltage-gated sodium channels, where it can be used as a tool to alter the channel permeability and polarisability. Ecosystem distribution CTX-1 is produced by dinoflagellates called Gambierdiscus toxicus. These dinoflagellates are either free-flowing in the water or associated with different types of microalgae. The toxin from Gambierdiscus toxicus accumulates in the fish that consume these organisms, and through the food chain the toxin eventually enters the human body. CTX-1 cannot be fully removed from the fish by cooking it. The toxin is found in tropical and subtropical coral reef fishes. Typically, these fishes are large predator fishes like moray eels, barracudas, snappers, Spanish mackerels and groupers. There have been studies suggesting the transmission of the toxin from a pregnant mother to the foetus, and from a nursing mother to her child. There have also been some reports of sexual partners of ciguatoxin poisoning patients also experiencing symptoms. Food poisoning symptoms Eating fish containing a high enough dose of CTX-1 causes Ciguatera Food Poisoning (CFP). It is suspected that a concentration of 0.08 µg/kg of fish is high enough to cause clinical symptoms, and concentrations over 0.1 µg/kg of fish are considered a health risk. There are different types of symptoms of CFP: gastrointestinal, cardiac, neurological and neuropsychological symptoms. Gastrointestinal symptoms include nausea, vomiting, abdominal pain and diarrhoea. These symptoms usually start to show within 6-12 hours of fish consumption and resolve spontaneously within 1-4 days. Cardiac symptoms include hypotension and bradycardia. These signs can necessitate emergency medical care. The neurological and neuropsychological symptoms usually become prominent after the gastrointestinal symptoms appear, and they usually become present within two days of the illness.
The signs and symptoms include weakness, toothache, the sensation of loose teeth, paraesthesia, dysesthesia, itching, confusion, reduced memory, difficulty concentrating, sweating and blurred vision. A characteristic symptom is cold allodynia, sometimes referred to as ‘hot-cold reversal’, which is characterised by an abnormal sensation when touching cold water or objects. Mechanism of action In neuronal tissue One of the most prominent studies on the effects of P-CTX-1 on neural tissue, by Benoit et al. in 1994, revealed that ciguatoxin-1 can induce spontaneous action potentials in frog myelinated neural fibres, which were eliminated by the addition of TTX. This allowed researchers to conclude that the mechanism of action of P-CTX-1 must involve voltage-dependent (Nav) sodium channels. Later, in 2005, a similar study by Birinyi-Strachan et al. confirmed this hypothesis by analysing the effects of P-CTX-1 on the excitability of rat dorsal root ganglion neurons. This study showed that ciguatoxin-1 can prolong the action potential and increase the afterhyperpolarisation of the cells. It was also shown that P-CTX-1 acts differently on TTX-sensitive and TTX-resistant cells: in the former, it causes a leakage current and a reduction in peak signal amplitude, while in the latter it causes a reduction of peak amplitude and an increased recovery rate from inactivation. These findings show that the different action mechanisms of P-CTX-1 may contribute to the wide variety of neurological symptoms, as each type of neural tissue reacts to ciguatoxin in a different manner. Further studies were carried out to identify the mechanism of action of P-CTX-1 on Nav channels, which assumed a direct interaction between the toxin and the sodium channels. However, in 1992 Lewis disproved the original hypothesis and showed that the interaction is indirect, by means of beta1-adrenoceptor stimulation. In their study, Birinyi-Strachan et al. also showed that P-CTX-1 can block delayed-rectifier voltage-gated potassium channels in rat neurons, which could generally contribute to the overall membrane depolarisation, prolonged action potentials, increased afterhyperpolarisation, and lowered threshold for action potential firing. These findings could further explain the origin of various symptoms of ciguatera such as paraesthesia or dysesthesia. It was also found that CTX-1 releases noradrenaline and ATP by asynchronous discharge of preganglionic perivascular axons. CTX-1 prolongs the action potential and afterhyperpolarisation duration. In a subpopulation of neurons, tonic action potential firing can be produced. In the gastrointestinal tract Even though ciguatera causes major gastrointestinal issues, so far P-CTX-1 has not been proven to have a direct effect on digestive systems. Terao et al. showed that no morphological alterations were observed in the mucosa or muscle layers of the small intestine, despite the severe diarrhoea commonly observed upon P-CTX-1 administration. Other studies (Lewis et al. 1984; Lewis and Hoy 1983) have shown that P-CTX-1 causes acetylcholine release from parasympathetic cholinergic nerve terminals, which suggests that nerve stimulation by P-CTX-1 is followed by nerve blockade, likely due to further nerve depolarisation. Toxicokinetics To date, only a few studies have been carried out on the biodistribution and toxicokinetics of ciguatoxins, most of them in rats or in vitro.
In a 2011 study by Bottein et al., it was shown that in rats, detectable levels of P-CTX-1 were observed in liver, muscle and brain up to 96 hours after intraperitoneal and oral administration. The terminal half-lives were reported as 112 h and 82 h, respectively. The main excretion route was shown to be faecal, with P-CTX-1 or its polar metabolites being present in faeces for up to 4 days after administration. Another article speculated that ciguatoxins might be biotransformed in vitro by means of glucuronidation. However, it was shown that glucuronidation was not observed for any of the ciguatoxins used in the study (including P-CTX-1) by either rat or human, which could suggest the prevalence of other conjugation pathways in mammals. References Ion channel toxins Polyether toxins Marine neurotoxins Phycotoxins Heterocyclic compounds with 7 or more rings Oxygen heterocycles Polyols
Ciguatoxin 1
[ "Chemistry" ]
1,788
[ "Polyether toxins", "Toxins by chemical classification" ]
76,346,163
https://en.wikipedia.org/wiki/Mezhigorje%20Formation
The Mezhigorje Formation is a geologic formation in Belarus and Ukraine that dates to the Early Oligocene. Rovno amber is found in this formation and it has been studied since the 1990s. History Small amounts of rough, partially worked, and fully shaped amber discovered in the Mezhigorje Formation suggest that between 13,300 and 10,500 BC, the amber was used by early humans. Rovno amber was first collected from the Mezhigorje Formation during the 1950s on a small scale and the amber was initially used for burning. The amber was first used for jewellery in the 1970s when another outcrop was discovered. In 1991, the outcrop discovered during the 1970s began to be mined and later became the Pugach Quarry, and in 1993, the Ukrainian government began to oversee excavations of the Mezhigorje Formation for the first time. Despite this, 90% of amber collected from the Mezhigorje Formation is extracted illegally and the trade is controlled by armed organised crime groups. Geological context The Early Oligocene Rovno amber is hosted in the Mezhigorje Formation, and it overlies the Late Eocene Obukhov Formation. The formation is found along the northwestern margin of the Ukrainian Crystalline Shield exposed in the Rivne region of Ukraine and across the border near Rechitsa in the Gomel Region of Belarus. The granite basement rock was overlain by sandy to clayey deposits that were host to alluvial amber. The two formations total between in thickness, both containing interbeds or mixtures of brown coals and carbonized vegetation. Both formations are sandy to clayey in texture, with the Mezhigorje Formation containing mostly medium to fine grained sands of a greenish gray tone, and with occasional iron impregnation and layering. References Geology of Belarus Geology of Ukraine Amber
Mezhigorje Formation
[ "Physics" ]
379
[ "Amorphous solids", "Unsolved problems in physics", "Amber" ]
76,346,550
https://en.wikipedia.org/wiki/Conservation%20genomics
Conservation genomics is the use of genomic study to aid in the preservation and viability of diverse organisms and populations. Genomics can be utilized to classify or assess diversity, hybridization, and history, as well as to identify different and similar species. Genomics can evaluate how these measures relate to effective population size as well as other ideas under the umbrella of conservation genetics, and overall biological conservation. Genomic analysis can evaluate the extent to which alleles at certain loci interact with one another, revealing nuanced ways in which the genome may be intertwined. Genetic diversity Genetic diversity is a measure of the number of different alleles or combinations of alleles present in a population. This may be measured by comparing the amount of heterozygosity observed to the amount of heterozygosity expected under Hardy-Weinberg equilibrium. Evaluating genetic diversity in the genomes of populations can inform us about levels of biodiversity and allele frequencies. Genetics play a large role in the extinction of species, and understanding how certain alleles accumulate and interact at the genome level is crucial to the preservation of those species. Heterozygosity One of the most important measures of genomic health is the ratio of expected heterozygosity to measured heterozygosity. Generally, an individual or species with more heterozygous alleles has a higher chance of survival. This is known as heterosis. Low levels of heterozygosity are a sign of possible concern for a species or organism. Evaluating the genomes of organisms that exist in an endangered species or population segment can provide insight into the severity of their endangerment. This method can be used to classify and rank species as well. Linkage disequilibrium Linkage disequilibrium is a concept that defines the non-random association between two alleles at different loci in the genome. Measures of linkage disequilibrium are useful tools for gene and genome mapping. A linkage between two genes may be due to their positions relative to each other in the genome, or it may be due to selection acting to favor certain combinations of alleles. On a genomic scale, linkage disequilibrium plays a large analytical role. The term linkage disequilibrium concedes that there may also be a disruption or lack of linkage between two alleles. Any sort of deviation from the expected linkage between two alleles is considered linkage disequilibrium. In the age of genetic analysis for conservation, understanding how alleles interact with each other is imperative to understanding how genomic diversity, and its conservation, can be aided. Applications Genomics and genomic analysis provide a bigger-picture explanation than aggregates of smaller genetic evaluations. Different relationships and conclusions can be drawn from genomic data than from genetic data alone. Genomics is beginning to be used in the field of invasive species, where it is thought that understanding how genomes drive invasive processes can help better protect native species from their devastating effects. These studies are aimed at tackling the major issue of habitat damage and disrupted ecosystems that invasive species have caused. In historical contexts, genomic analysis can teach us much about the patterns and effects of natural processes.
Historical genomic evaluations have been performed on animals like hominids, and even on diseases like multiple sclerosis and the Black Death, in order to determine their origins and evolutionary histories. In the case of diseases, this information is often used in eradication efforts, as it would be used with invasive species. However, this data can be used in the preservation of species as well. Understanding the history of the genomics of a population is an important step in making decisions about how to correctly interpret and apply genomic analysis for conservation today. In conservation contexts specifically, genomics can be used to research and understand how the relationships amongst the factors listed above affect an organism's fitness. The study of organisms through a genomic lens can lead to knowledge about their history that can be applied today. Hybridization Hybridization practices, as they become a larger player in conservation, may be impacted by genomic research. Hybridization can create entirely new allelic relationships and disequilibria. Understanding these factors is important in the interests of effective conservation efforts. Furthermore, studying these new relationships in hybrids can continue to provide insight into the efforts to conserve species. Hybridization still has many critics and has been shown to cause reduced fitness amongst natural populations. There are still many questions about the consequences of hybridization for the evolution and genetic health of species. Genomic studies may help to draw conclusions about new allelic relationships and how they may affect the outcomes for those species. Current landscape The current landscape of conservation genomics is still in its infancy. New ways to apply and understand genomics for use in conservation are arising, and there is much thought that the understanding of gene interactions plays an important role in conservation. Currently, despite genomics being valuable to the conservation sphere, there is not enough of a connection between the researchers who study it and those with the means to apply it. References Wikipedia Student Program Conservation biology Genomics
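As a concrete illustration of the heterozygosity comparison described above, the following Python sketch computes observed heterozygosity at one locus and the expectation under Hardy-Weinberg equilibrium (He = 1 − Σ p_i², summed over allele frequencies p_i). The function name and the toy genotype data are illustrative assumptions, not from the source.

```python
from collections import Counter

# Illustrative sketch: observed vs. Hardy-Weinberg expected heterozygosity
# at a single locus. A deficit of observed heterozygotes relative to the
# expectation is one signal conservation genomicists look for.

def heterozygosity(genotypes):
    """genotypes: list of (allele, allele) tuples for one diploid locus."""
    n = len(genotypes)
    allele_counts = Counter(a for pair in genotypes for a in pair)
    total = sum(allele_counts.values())
    expected = 1.0 - sum((c / total) ** 2 for c in allele_counts.values())
    observed = sum(1 for a, b in genotypes if a != b) / n
    return observed, expected

ho, he = heterozygosity([("A", "A"), ("A", "A"), ("a", "a"), ("A", "a")])
print(f"observed = {ho:.2f}, expected = {he:.2f}")  # 0.25 vs. ~0.47
```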
Conservation genomics
[ "Biology" ]
1,055
[ "Conservation biology" ]
70,509,023
https://en.wikipedia.org/wiki/Balance%20of%20angular%20momentum
The balance of angular momentum or Euler's second law in classical mechanics is a law of physics stating that to alter the angular momentum of a body, a torque must be applied to it. An example is a playground merry-go-round: to set it rotating, it must be pushed, which technically means applying a torque that feeds angular momentum into the merry-go-round. Frictional forces in the bearing and air drag, however, exert a resistive torque that gradually lessens the angular momentum and eventually stops the rotation. The mathematical formulation states that the rate of change of the angular momentum L of a body about a point is equal to the sum of the external torques M acting on that body about that point: dL/dt = ΣM. The reference point is a fixed point in an inertial system or the center of mass of the body. In the special case when external torques vanish, it shows that the angular momentum is conserved. The d'Alembert force counteracting the change of angular momentum appears as a gyroscopic effect. From the balance of angular momentum follows the equality of corresponding shear stresses, or the symmetry of the Cauchy stress tensor. The same follows from the Boltzmann Axiom, according to which internal forces in a continuum are torque-free. Thus the balance of angular momentum, the symmetry of the Cauchy stress tensor, and the Boltzmann Axiom in continuum mechanics are related terms. Especially in the theory of the spinning top the balance of angular momentum plays a crucial part. In continuum mechanics it serves to exactly determine the skew-symmetric part of the stress tensor. The balance of angular momentum is, besides the Newtonian laws, a fundamental and independent principle and was first introduced by the Swiss mathematician and physicist Leonhard Euler in 1775. History Swiss mathematician Jakob I Bernoulli applied the balance of angular momentum in 1703 – without explicitly formulating it – to find the center of oscillation of a pendulum, which he had already done in a first, somewhat incorrect manner in 1686. The balance of angular momentum thus preceded Newton's laws, which were first published in 1687. In 1744, Euler was the first to use the principles of momentum and of angular momentum to state the equations of motion of a system. In 1750, in his treatise "Discovery of a new principle of mechanics", he published Euler's equations of rigid body dynamics, which today are derived from the balance of angular momentum, but which Euler could deduce for the rigid body from Newton's second law. After studies on plane elastic continua, for which the balance of torques is indispensable, Euler raised the balance of angular momentum to an independent principle for the calculation of the movement of bodies in 1775. In 1822, French mathematician Augustin-Louis Cauchy introduced the stress tensor, whose symmetry in combination with the balance of linear momentum ensured the fulfillment of the balance of angular momentum in the general case of the deformable body. The interpretation of the balance of angular momentum was first noted by M. P. Saint-Guilhem in 1851. Kinetics of rotation Kinetics deals with states that are not in mechanical equilibrium. According to Newton's second law, an external force leads to a change in velocity (acceleration) of a body. Analogously, an external torque leads to a change in angular velocity, resulting in an angular acceleration. The inertia relating to rotation depends not only on the mass of a body but also on its spatial distribution.
With a rigid body this is expressed by the moment of inertia. For a rotation around a fixed axis, the torque is proportional to the angular acceleration, with the moment of inertia as the proportionality factor. Here the moment of inertia depends not only on the position of the axis of rotation (see Steiner Theorem) but also on its direction. If the above law is to be formulated more generally for an arbitrary axis of rotation, the inertia tensor must be used. In the two-dimensional special case, a torque only results in an acceleration or slowing down of a rotation. In the general three-dimensional case, however, it can also alter the direction of the axis (precession). Formulations In rigid body dynamics the balance of angular momentum leads to Euler's equations. In continuum mechanics the balance of angular momentum leads to Cauchy's second law of motion, which states the symmetry of the Cauchy stress tensor. The Boltzmann Axiom has the same consequence. Boltzmann Axiom In 1905, Austrian physicist Ludwig Boltzmann pointed out that when a body is subdivided into infinitesimally small volume elements, the inner reactions have to meet all static conditions for mechanical equilibrium. Cauchy's stress theorem handles the equilibrium in terms of force. For the analogous statement in terms of torque, German mathematician Georg Hamel coined the name Boltzmann Axiom. This axiom is equivalent to the symmetry of the Cauchy stress tensor. So that the resultants of the stresses do not exert a torque on the volume element, the resultant force must pass through the center of the volume element. The lines of action of the inertia forces and of the normal stress resultants σxx·dy and σyy·dx pass through the center of the volume element. In order that the shear stress resultants τxy·dy and τyx·dx also pass through the center of the volume element, τxy = τyx must hold. This is exactly the statement of the equality of corresponding shear stresses in the xy plane. Cosserat Continuum In addition to the torque-free classical continuum with a symmetric stress tensor, Cosserat continua (polar continua) that are not torque-free have also been defined. One application of such a continuum is the theory of shells. Cosserat continua are capable of transporting not only a momentum flux but also an angular momentum flux. Therefore, there may also be sources of momentum and angular momentum inside the body. Here the Boltzmann Axiom does not apply and the stress tensor may be skew-symmetric. If these fluxes are treated as usual in continuum mechanics, field equations arise in which the skew-symmetric part of the stress tensor has no energetic significance. The balance of angular momentum becomes independent of the balance of energy and is used to determine the skew-symmetric part of the stress tensor. American mathematician Clifford Truesdell saw in this the "true basic sense of Euler's second law". Area rule The area rule is a corollary of the angular momentum law in the form: the resulting moment is equal to the product of twice the mass and the time derivative of the areal velocity. It refers to the position vector x from the reference point to a point mass with mass m. The point mass has the angular momentum L = x × p, with the velocity v = dx/dt and the momentum p = m·v. In the infinitesimal time dt the trajectory sweeps over a triangle whose vector surface element is dA = ½·(x × v)·dt, so the (vector) areal velocity is dA/dt = ½·x × v (with the cross product "×"). It therefore turns out that L = m·x × v = 2m·dA/dt. With Euler's second law this becomes M = dL/dt = 2m·d(dA/dt)/dt, the form stated above. The special case of plane, moment-free central force motion is treated by Kepler's second law, also known as the area rule.
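A compact rendering of the area-rule argument above, written out in standard vector notation (the symbols x, v, p, L, M and the areal velocity f are notational choices used in the sketch above, not taken from a particular source):

```latex
% Sketch of the area rule in vector notation (symbols as in the text above).
\begin{align*}
  \vec{L} &= \vec{x} \times \vec{p} = m\,\vec{x} \times \vec{v}
    && \text{angular momentum of the point mass} \\
  \vec{f} &= \frac{\mathrm{d}\vec{A}}{\mathrm{d}t}
            = \tfrac{1}{2}\,\vec{x} \times \vec{v}
    && \text{vector areal velocity of the swept triangle} \\
  \Rightarrow\; \vec{L} &= 2m\,\vec{f},
  \qquad
  \vec{M} = \frac{\mathrm{d}\vec{L}}{\mathrm{d}t} = 2m\,\dot{\vec{f}}
    && \text{Euler's second law in area-rule form} \\
  \vec{M} &= \vec{0} \;\Rightarrow\; \vec{f} = \text{const.}
    && \text{central-force case: Kepler's second law}
\end{align*}
```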
References Continuum mechanics Equations of physics Scientific observation
Balance of angular momentum
[ "Physics", "Mathematics" ]
1,486
[ "Equations of physics", "Continuum mechanics", "Mathematical objects", "Classical mechanics", "Equations" ]
70,513,943
https://en.wikipedia.org/wiki/Carbon%20tech
Carbon tech is a group of existing and emerging technologies that are rapidly transforming oil and gas into low-emissions energy. Combined, these technologies take a circular carbon economy approach to managing and reducing carbon footprints, while optimizing biological and industrial processes. It is built on the principles of the circular economy for managing carbon emissions: to reduce the amount of carbon emissions entering the atmosphere, to reuse carbon emissions as a feedstock in different industries, to recycle carbon through the natural carbon cycle with bioenergy, and to remove carbon and store it. Carbon tech provides a third option for climate and environmental policy as an alternative to the binary choice between business as usual and radical change. Carbon management can be achieved through nature-based solutions such as reforestation and afforestation, or through technological strategies. Available technologies range from carbon capture, utilization and storage (CCUS) to negative emissions technologies such as bioenergy with carbon capture and storage, direct air carbon capture, enhanced weathering, biofuel, and biochar from waste that exists in today's processes. Principles The circular carbon economy is a closed-loop system that encompasses the 4Rs (Reduce, Reuse, Recycle, and Remove) and applies them to managing carbon emissions. Reduce Energy efficiency, flaring minimization, modern SCADA controls, artificial intelligence, and making consumer products greener are among the strategies used to control the anthropogenic release of carbon emissions. This is complementary to opportunities to reduce fossil fuel use through substitution with low-carbon energy sources like nuclear, hydropower, bioenergy, and non-carbon-emitting renewables. Reuse Carbon can be reused by pooling streams for energy generation, in waste management and in product manufacturing. It can be reused in fuels, enhanced oil recovery, chemicals, bioenergy, food and beverages. CO₂ can also be reused in building materials, which provides a form of long-term storage. Recycle CO₂ can be chemically transformed through organic chemistry into different products such as fertilizers, cement, biofuels, vodka, carbon nanotubes, material coatings, plastics, methanol, diamonds, clothing, foam, and detergents. CO₂ is also transformed into other forms of energy like synthetic fuels. Synthetic hydrocarbon fuels are of importance in the aviation industry because few low-carbon alternatives are available. Remove Carbon emissions are captured both at the combustion stage and directly from the atmosphere, then stored in deep underground geological formations. Examples are capturing CO₂ from coal and natural gas power plants, hydrocarbon fuels, and heavy industries such as steel and cement manufacturing. Planting flora, such as mangroves, also contributes toward reduction by increasing photosynthesis. Mangrove trees are among the largest stores of blue carbon.
The IEA also notes that when oil and gas are produced through enhanced oil recovery, the full lifecycle emissions intensity can be neutral or even carbon-negative. According to a new report by research and consultancy firm Wood Mackenzie, Canada is a leader in carbon tech with projects underway that could reduce Canada's greenhouse gas emissions by up to 60% of their 2030 goal. References Carbon Climate engineering Carbon capture and storage
Carbon tech
[ "Engineering" ]
741
[ "Planetary engineering", "Geoengineering", "Carbon capture and storage" ]
70,516,451
https://en.wikipedia.org/wiki/Inversion%20recovery
Inversion recovery is a magnetic resonance imaging sequence that provides high contrast between tissue and lesion. It can be used to provide strongly T1-weighted images, strongly T2-weighted images, and to suppress the signals from fat, blood, or cerebrospinal fluid (CSF). Fluid-attenuated inversion recovery Fluid-attenuated inversion recovery (FLAIR) is an inversion-recovery pulse sequence used to nullify the signal from fluids. For example, it can be used in brain imaging to suppress cerebrospinal fluid so as to bring out periventricular hyperintense lesions, such as multiple sclerosis plaques. By carefully choosing the inversion time TI (the time between the inversion and excitation pulses), the signal from any particular tissue can be suppressed. Turbo inversion recovery magnitude Turbo inversion recovery magnitude (TIRM) measures only the magnitude of a turbo spin echo after a preceding inversion pulse and is thus phase-insensitive. TIRM is superior in the assessment of osteomyelitis and in suspected head and neck cancer. Osteomyelitis appears as high-intensity areas. In head and neck cancers, TIRM has been found to give both high signal in the tumor mass and a low degree of overestimation of tumor size by reactive inflammatory changes in the surrounding tissues. Double inversion recovery Double inversion recovery is a sequence that suppresses both cerebrospinal fluid (CSF) and white matter, and samples the remaining transverse magnetisation with a fast spin echo readout, so that the majority of the signal comes from the grey matter. Thus, this sequence is useful in detecting small changes in the brain cortex such as focal cortical dysplasia and hippocampal sclerosis in those with epilepsy. These lesions are difficult to detect with other MRI sequences. History Erwin Hahn first used the inversion recovery technique to determine the value of T1 (the time taken for longitudinal magnetisation to recover 63% of its maximum value) for water in 1949, three years after nuclear magnetic resonance was discovered. References Magnetic resonance imaging Nuclear magnetic resonance Quantum mechanics
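The tissue-nulling choice of TI described above can be made explicit with the standard inversion-recovery signal equation: for a repetition time much longer than T1, longitudinal magnetization recovers as Mz(TI) = M0·(1 − 2·e^(−TI/T1)), which crosses zero at TI = T1·ln 2. The Python sketch below is illustrative only; the function name and the example T1 values are rough, field-strength-dependent assumptions rather than figures from the source.

```python
import math

# Illustrative sketch: inversion time TI that nulls a tissue with
# longitudinal relaxation time T1, assuming an ideal 180-degree inversion
# and a repetition time much longer than T1, so that
#   Mz(TI) = M0 * (1 - 2 * exp(-TI / T1))  ->  zero at TI = T1 * ln(2).

def nulling_ti(t1_ms):
    """Return the TI (ms) that nulls tissue with the given T1 (ms)."""
    return t1_ms * math.log(2)

# Rough example T1 values (assumptions, roughly of 1.5 T order of magnitude):
print(f"fat-suppression (STIR-like) TI  ~ {nulling_ti(260):.0f} ms")
print(f"CSF-suppression (FLAIR-like) TI ~ {nulling_ti(4000):.0f} ms")
```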
Inversion recovery
[ "Physics", "Chemistry" ]
429
[ "Nuclear magnetic resonance", "Magnetic resonance imaging", "Theoretical physics", "Quantum mechanics", "Nuclear physics" ]
70,519,561
https://en.wikipedia.org/wiki/Transition%20metal%20complexes%20of%202%2C2%27-bipyridine
Transition metal complexes of 2,2'-bipyridine are coordination complexes containing one or more 2,2'-bipyridine ligands. Complexes have been described for all of the transition metals. Although few have any practical value, these complexes have been influential. 2,2'-Bipyridine (bipy) is classified as a diimine ligand. Unlike the structures of pyridine complexes, the two rings in bipy are coplanar, which facilitates electron delocalization. As a consequence of this delocalization, bipy complexes often exhibit distinctive optical and redox properties. Complexes Bipy forms a wide variety of complexes. Almost always, it is a bidentate ligand, binding metal centers through its two nitrogen atoms. Examples: Mo(CO)4(bipy), derived from Mo(CO)6. RuCl2(bipy)2, a popular precursor to mixed ligand complexes. [Ru(bipy)3]2+, a well studied luminophore. [Fe(bipy)3]2+ has been used for the colorimetric analysis of iron ions. {[Ru(bipyridine)2(OH2)]2(O)}2+, "ruthenium blue", has attracted academic interest as a rare complex that catalyzes the oxidation of water. Tris-bipy complexes Bipyridine complexes absorb intensely in the visible part of the spectrum. The electronic transitions are attributed to metal-to-ligand charge transfer (MLCT). In the "tris(bipy) complexes" three bipyridine molecules coordinate to a metal ion, written as [M(bipy)3]n+ (M = metal ion; Cr, Fe, Co, Ru, Rh and so on). These complexes have six-coordinate, octahedral structures and exist as enantiomeric pairs. These and other homoleptic tris-2,2′-bipy complexes of many transition metals are electroactive. Often, both the metal-centred and ligand-centred electrochemical reactions are reversible one-electron reactions that can be observed by cyclic voltammetry. Under strongly reducing conditions, some tris(bipy) complexes can be reduced to neutral derivatives containing bipy− ligands. Examples include M(bipy)3, where M = Al, Cr, Si. Square planar complexes Square planar complexes of the type [Pt(bipy)2]2+ react with nucleophiles because of the steric clash between the 6,6' positions of the pair of bipy ligands. This clash is indicated by the bowing of the pyridyl rings out of the plane defined by PtN4. Related ligands Many ring-substituted variants of bipy have been described, especially dimethyl-2,2'-bipyridines. Alkyl substituents enhance the solubility of the complexes in organic solvents. 6,6'-Substituents tend to protect the metal center. The related N,N-heterocyclic ligand phenanthroline forms similar complexes. With respective pKa's of 4.86 and 4.3 for their conjugate acids, phenanthroline and bipy are of comparable basicity. 2,2'-Biquinoline is closely related to bipy as a ligand. References Chelating agents Bipyridine complexes
Transition metal complexes of 2,2'-bipyridine
[ "Chemistry" ]
725
[ "Chelating agents", "Process chemicals" ]
70,519,860
https://en.wikipedia.org/wiki/Nitride%20iodide
An iodide nitride is a mixed anion compound containing both iodide (I−) and nitride ions (N3−). Another name is metalloiodonitrides. They are a subclass of halide nitrides or pnictide halides. Some different kinds include ionic alkali or alkaline earth salts, small clusters where metal atoms surround a nitrogen atom, layered group 4 element 2-dimensional structures (which could be exfoliated to a monolayer), and transition metal nitrido complexes counter-balanced with iodide ions. There is also a family with rare earth elements and nitrogen and sulfur in a cluster. Related mixed-anion compounds include halogen variations: nitride fluoride, nitride chloride, and nitride bromide, and pnictogen variations phosphide iodide, arsenide iodide and antimonide iodides. Production Nitride iodides may be produced by heating metal nitrides with metal iodides. The ammonolysis process heats a metal iodide with ammonia. A related method heats a metal or metal hydride with ammonium iodide. The nitrogen source could also be an azide or an amide. List References Iodides Mixed anion compounds Nitrides
Nitride iodide
[ "Physics", "Chemistry" ]
282
[ "Ions", "Matter", "Mixed anion compounds" ]
74,766,728
https://en.wikipedia.org/wiki/Robert%20Va%C3%9Fen
Robert Vaßen is a German physicist and holds a teaching professorship at Ruhr University Bochum at the Institute of Materials in the Department of Ceramics Technology. He is head of the department "Materials for High Temperature Technologies" and deputy head of the Institute of Energy Materials and Devices (IMD-2): Materials Synthesis and Processing at Forschungszentrum Jülich. Life and career Vaßen studied physics at RWTH Aachen University from 1980 to 1986, where he received his diploma. At the same university, he received his PhD in solid-state physics under Prof. Uhlmaier in 1990 with the thesis Diffusion of Helium in Body-Centered Cubic and Hexagonal Metals. After his PhD, he was a scientific assistant at the IEK-1 Institute for Energy and Climate Research (now the Institute of Energy Materials and Devices, IMD-2) at Forschungszentrum Jülich, where he became head of department in 1998 and has been deputy head of the institute since 2014. During this time, he habilitated at Ruhr University Bochum in 2004 on the development of new oxide thermal barrier coatings for applications in stationary and aero gas turbines. Since 2010, he has been a visiting professor at University West, Trollhättan, Sweden. In 2014, he turned down a call for a W3 professorship in coating technology at the Technische Universität Berlin. Since 2014, Vaßen has supervised more than 75 PhD students, 40 of them as his own PhD students at Ruhr University Bochum and the others as co-supervisor at various universities such as University West, Trollhättan, Sweden, the University of Cambridge, Imperial College London and the University of Manchester (all three in the United Kingdom), the University of Stuttgart, the University of Bayreuth, Mines Paris Tech, and others. Research focus Vaßen's research focuses on the development of high-temperature materials and coatings, including those with additional functional properties such as sensing properties, self-healing capabilities or enhanced strain tolerance. He is also active in the development of functional coatings for solid oxide fuel cells and membranes for oxygen and hydrogen separation. Recently, repair technologies, especially by cold gas spraying and aerosol deposition processes, and coating solutions for alkaline and PEM electrolysis have also been developed. Memberships Since 2008: Member of the DIN Standards Committee on Welding and Allied Processes (NAS) German Ceramic Society (DKG) Gemeinschaft Thermisches Spritzen e.V. (GTS) - Association of Thermal Sprayers Reviewer for the German research foundation (Deutsche Forschungsgemeinschaft, DFG), the Alexander von Humboldt Foundation, the Carl-Zeiss-Stiftung, the AiF, and several national research organizations Awards 2017: Appointment as Fellow of the American Ceramic Society 2017: Induction into the ASM/TSS Thermal Spray Hall of Fame Since 2017: Editor of the Journal of the European Ceramic Society 2019: Appointment as Fellow of ASM/TSS 2019: Elected as a member of the DFG Review Board "Materials Engineering" for "Coating and Surface Technology" 2022: European SOFT Innovation Award (€50,000 prize money), together with scientists of KIT (Karlsruhe Institute of Technology), for the development of plasma-sprayed, functionally graded coatings for fusion power plants Selected publications with Daniel E. Mack, Martin Tandler, Yoo J. Sohn, Doris Sebold, Olivier Guillon: Unique performance of thermal barrier coatings made of yttria-stabilized zirconia at extreme temperatures (> 1500 °C). In: Journal of the American Ceramic Society, Volume 104, No.
1, September 2020. doi.org/10.1111/jace.17452, Pages 463–471. with Apurv Dash, Olivier Guillon, Jesus Gonzalez-Julian: Molten salt shielded synthesis of oxidation prone materials in air. In: Nature Materials, Volume 18, No. 5, May 2019. doi.org/10.1038/s41563-019-0328-1, Pages 465–470. with Tobias Kalfhaus, M. Schneider, B. Ruttert, Doris Sebold, T. Hammerschmidt, Werner Theisen, Gunther F. Eggler, Olivier Guillon: Repair of Ni-based single-crystal superalloys using Vacuum Plasma Spray. In: Materials & Design, Volume 168, 15 April 2019. doi.org/10.1016/j.matdes.2019.107656, Page 107656. with R. Singh, S. Schruefer, S. Wilson, Jens Gibmeier: Influence of coating thickness on residual stress and adhesion-strength of cold-sprayed Inconel 718 coatings. In: Surface and Coatings Technology, Volume 350, September 2018. doi.org/10.1016/j.surfcoat.2018.06.08, Pages 64–73. with Emine Bakan: Ceramic top coats of plasma-sprayed thermal barrier coatings: materials, processes, and properties. In: Journal of Thermal Spray Technology, Volume 26, No. 6, July 2017. doi.org/10.1007/s11666-017-0597-7, Pages 992–1010. with Armelle Vardelle, Christian Moreau, Jun Akedo, Hossein Ashrafizadeh, Christopher C. Berndt, Jörg O. Berghaus, Petri Vuoristo: The 2016 thermal spray roadmap. In: Journal of Thermal Spray Technology, Volume 25, No. 8, December 2016. dx.doi.org/10.1007/s11666-016-0473-x, Pages 1376–1440. with Markus Haydn, Kai Ortner, Thomas Franco, Sven Uhlenbruck, Norbert H. Menzler, Detlev Stöver: Multi-layer thin-film electrolytes for metal supported solid oxide fuel cells. In: Journal of Power Sources, Volume 256, June 2014. doi.org/10.1016/j.jpowsour.2014.01.043, Pages 52–60. with Maria O. Jarligo, Georg Mauer, Martin Bram, Stefan Baumann: Plasma Spray Physical Vapor Deposition of La1-xSrxCoyFe1−yO3−δ Thin-Film Oxygen Transport Membrane on Porous Metallic Supports. In: Journal of Thermal Spray Technology, Volume 23, No. 1, 2014. doi.org/10.1007/s11666-013-0004-y, Pages 213–219. with Xueqiang Cao, Frank Tietz, Debabrata Basu, Detlev Stöver: Zirconates as new materials for thermal barrier coatings. In: Journal of the American Ceramic Society, Volume 83, No. 8. doi.org/10.1111/j.1151-2916.2000.tb01506.x, Pages 2023–2028. with Detlev Stöver: Processing and properties of nanograin silicon carbide. In: Journal of the American Ceramic Society, Volume 82, No. 10, October 1999. doi.org/10.1111/j.1151-2916.1999.tb02127.x, Pages 2585–2593. References External links Profile of Robert Vaßen on the website of Forschungszentrum Jülich Website of Materials Synthesis and Processing (IMD-2) at Forschungszentrum Jülich Profile on the platform ORCID Profile on the Ruhr University Bochum website 20th-century German physicists 21st-century German physicists RWTH Aachen University alumni Materials scientists and engineers Year of birth missing (living people) Living people
Robert Vaßen
[ "Materials_science", "Engineering" ]
1,629
[ "Materials scientists and engineers", "Materials science" ]
74,773,534
https://en.wikipedia.org/wiki/Jet%20fire
A jet fire is a high temperature flame of burning fuel released under pressure in a particular orientation. The material burned is a continuous stream of flammable gas, liquid or a two-phase mixture. A jet fire is a significant hazard in process and storage plants which handle or keep flammable fluids under pressure. The heat flux of the jet flame can cause rapid mechanical failure, thereby compromising structural integrity and leading to incident escalation. Context The Piper Alpha disaster in 1988 demonstrated how the accidental release of hydrocarbon can lead to the catastrophic failure of an installation with the rupture of major pipeline risers. Jet fires impinged on vessels, pipework and firewalls. Under these conditions the fireproofing material was compromised within a few minutes rather than the one to two hours which had been specified. Even without direct impingement, the high thermal radiation emitted by jet flames also affected plant and would have been fatal to personnel. Characteristics A jet fire, also known as a spray fire if the fuel is a liquid or liquefied gas, is a turbulent diffusion flame of flammable material. The characteristics of a jet fire depend on a number of factors. These include: fuel composition; release conditions; release rate; release geometry; direction; and ambient wind conditions. For full details of the mechanism and structure of jet fires see High Pressure Jet. Some characteristics of specific jet fires are: Sonic releases of natural gas are characterized by high velocity, low buoyancy flames that are relatively non-luminous with low radiative energy. A jet flame of higher hydrocarbons is lazy, buoyant and luminous, with black smoke at the tail of the flame; such flames are highly radiative. The surface emissive power (SEP) of jet flames is of the order of 200 kW/m2 to 400 kW/m2. Such flames have a temperature of 1350 °C. These high heat fluxes can readily compromise the integrity of structures and vessels and can lead to mechanical failure of plant and equipment. A jet fire is a particular hazard to personnel. People are able to survive and escape from exposure to heat fluxes of less than 5 kW/m2, while higher fluxes can be fatal. Designing for jet fires Process plant is generally protected by a pressure relief system. However, local heating of a pressure vessel by a jet fire may compromise the integrity of the vessel before the pressure relief device operates. The measures taken for protection against jet fires are as follows: Prevention of leaks using effective maintenance Flange orientation and elimination Blowdown systems, to reduce the inventory and pressure in the plant Isolation of leaks Robust external insulation Emergency response Water deluge can reduce the heat loading of plant so that its temperature is maintained below that at which failure occurs, or so that the temperature rise is sufficiently reduced that shutdown and depressurization can take place. Older plants may have been sized on an earlier version of the American Petroleum Institute's Pressure-Relieving and Depressuring Systems standard, which did not include consideration of jet fires. The international standard publication ISO 22899 (Determination of the Resistance to Jet Fires of Passive Fire Protection Materials) sets requirements for the specification of passive fire protection against jet fires. See also High pressure jet Process hazard analysis Process safety References Industrial fires and explosions Process safety Types of fire Combustion
Jet fire
[ "Chemistry", "Engineering" ]
674
[ "Industrial fires and explosions", "Safety engineering", "Combustion", "Process safety", "Explosions", "Chemical process engineering" ]
69,015,860
https://en.wikipedia.org/wiki/Brendel%E2%80%93Bormann%20oscillator%20model
The Brendel–Bormann oscillator model is a mathematical formula for the frequency dependence of the complex-valued relative permittivity, sometimes referred to as the dielectric function. The model has been used to fit the complex refractive index of materials with absorption lineshapes exhibiting non-Lorentzian broadening, such as metals and amorphous insulators, across broad spectral ranges, typically near-ultraviolet, visible, and infrared frequencies. The dispersion relation bears the names of R. Brendel and D. Bormann, who derived the model in 1992, despite first being applied to optical constants in the literature by Andrei M. Efimov and E. G. Makarova in 1983. Around that time, several other researchers also independently discovered the model. The Brendel-Bormann oscillator model is aphysical because it does not satisfy the Kramers–Kronig relations. The model is non-causal, due to a singularity at zero frequency, and non-Hermitian. These drawbacks inspired J. Orosco and C. F. M. Coimbra to develop a similar, causal oscillator model. Mathematical formulation The general form of an oscillator model is given by ε_r(ω) = ε_∞ + Σ_j χ_j(ω), where ε_r is the relative permittivity, ε_∞ is the value of the relative permittivity at infinite frequency, ω is the angular frequency, and χ_j(ω) is the contribution from the jth absorption mechanism oscillator. The Brendel-Bormann oscillator χ_j^BB is related to the Lorentzian oscillator χ_j^L(ω; x) = s_j / (x² − ω² − iΓ_j·ω) and the Gaussian oscillator χ_j^G(x) = exp(−(x − ω_0j)² / (2σ_j²)) / (√(2π)·σ_j), where s_j is the Lorentzian strength of the jth oscillator, ω_0j is the Lorentzian resonant frequency of the jth oscillator, Γ_j is the Lorentzian broadening of the jth oscillator, and σ_j is the Gaussian broadening of the jth oscillator. The Brendel-Bormann oscillator is obtained from the convolution of the two aforementioned oscillators in the manner of χ_j^BB(ω) = ∫ χ_j^G(x)·χ_j^L(ω; x) dx (integrated over all x), which yields χ_j^BB(ω) = (i·√π·s_j / (2·√2·a_j·σ_j)) · [w((a_j − ω_0j)/(√2·σ_j)) + w((a_j + ω_0j)/(√2·σ_j))], where w is the Faddeeva function and a_j = √(ω² + i·Γ_j·ω). The square root in the definition of a_j must be taken such that its imaginary component is positive. This is achieved by a_j = √((√(ω⁴ + Γ_j²·ω²) + ω²)/2) + i·√((√(ω⁴ + Γ_j²·ω²) − ω²)/2). References See also Cauchy equation Sellmeier equation Forouhi–Bloomer model Tauc–Lorentz model Lorentz oscillator model Condensed matter physics Electric and magnetic fields in matter Optics
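For readers who want to evaluate the model numerically, the following Python sketch implements a single oscillator term following the closed-form convolution result written in the formulation above, using scipy.special.wofz for the Faddeeva function. The function names and the placeholder parameter values are assumptions for illustration, not reference values from the source.

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function w(z)

# Illustrative sketch of one Brendel-Bormann oscillator term, following the
# closed-form convolution result quoted above. s, w0, gamma, sigma are the
# Lorentzian strength, resonant frequency, Lorentzian broadening and
# Gaussian broadening of a single oscillator (all in the same angular
# frequency units as omega).

def bb_oscillator(omega, s, w0, gamma, sigma):
    a = np.sqrt(omega**2 + 1j * gamma * omega)
    a = np.where(a.imag < 0, -a, a)  # branch with positive imaginary part
    prefactor = 1j * np.sqrt(np.pi) * s / (2.0 * np.sqrt(2.0) * a * sigma)
    return prefactor * (wofz((a - w0) / (np.sqrt(2.0) * sigma))
                        + wofz((a + w0) / (np.sqrt(2.0) * sigma)))

def permittivity(omega, eps_inf, oscillators):
    """Relative permittivity: eps_inf plus a sum of oscillator terms."""
    return eps_inf + sum(bb_oscillator(omega, *p) for p in oscillators)

# Placeholder parameters in arbitrary angular-frequency units.
omega = np.linspace(0.5, 5.0, 5)
print(permittivity(omega, 1.0, [(4.0, 2.0, 0.3, 0.2)]))
```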
Brendel–Bormann oscillator model
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
513
[ "Applied and interdisciplinary physics", "Optics", "Phases of matter", "Electric and magnetic fields in matter", "Materials science", "Condensed matter physics", " molecular", "Atomic", "Matter", " and optical physics" ]
69,023,656
https://en.wikipedia.org/wiki/Biopiracy
Biopiracy (also known as scientific colonialism ) is the unauthorized appropriation of knowledge and genetic resources of farming and indigenous communities by individuals or institutions seeking exclusive monopoly control through patents or intellectual property. While bioprospecting is the act of exploring natural resources for undiscovered chemical compounds with medicinal or anti-microbial properties, commercial success from bioprospecting leads to the company's attempt at protecting their intellectual property rights on indigenous medicinal plants, seeds, genetic resources, and traditional medicines. Moreover, if biological resources and traditional knowledge are taken from indigenous or marginalized groups, the commercialization of their natural resource can harm communities. Despite the medicinal and innovative benefits of bioprospecting and biochemical research, the expropriation of indigenous land for their genetic resources without fair compensation inevitably leads to exploitation. Biopiracy can harm indigenous populations in multiple ways. Without proper compensation or reward for traditional knowledge of natural resources, the sudden increase in commercial value of the species producing the active compound can make it now unaffordable for the native people. In some cases, a patent filed by the western company could prohibit the use or sale of the resource by any individual or institution, including the indigenous group. With nearly one third of all small-molecule drugs approved by the U.S. Food and Drug Administration (FDA) between 1981 and 2014 being either natural products or compounds derived from natural products, bioprospecting or piracy is growing more significantly, especially in the pharmaceutical industry. Furthermore, the United Nations Educational, Scientific and Cultural Organization (UNESCO) mentions, in the context of intangible cultural heritage (ICH), that the medicinal traditions and knowledge of the Kallawaya communities in Peru have been affected by the lack of legal protection from pharmaceutical companies. A number of research projects are currently being developed on this subject, such as the research carried out using digital methods on the biopiracy of traditional medicines, which shows the current context of the problem by developing a description and analysis of the data, and by visualising and mapping the various organisations and actors in the social networks. With the advancement of extraction techniques of genetic material in biochemistry and molecular biology, scientists are now able to identify a specific gene, which directs to enzymes capable of converting one molecule to another. This scientific breakthrough brings up the question of whether the organism containing the gene that has been modified through a series of tests and experiments should be accredited to the country of origin. History Colonial implications Biopiracy is historically associated with colonialism, where developing resource-rich countries and indigenous populations would be exploited without permission. Since the arrival of European settlers in search of gold, silver, and rare spices, the wealth of knowledge on plant-based riches was highly valued. Following Marco Polo's journey through Southwestern India and China, Christopher Columbus expanded upon the "Spice Route" with the help of the Spanish Court. These explorers, amongst hundreds more, share an infamous history of pillaging through indigenous villages and depriving countries of their natural resources. 
Western food and pharmaceutical companies have profited immensely from these efforts. Valuable commodities like sugar, pepper, quinine, and coffee were all taken from colonized countries that led to environmental destructions in the corresponding developing countries. The General Agreement on Tariffs and Trade (GATT) of 1947 was an effort to encourage international trade by reducing or eliminating trade barriers like tariffs or quotas. Trade-Related Intellectual Property Rights (TRIPS) was negotiated at the end of GATT. Similarly, Columbus set a precedent in 1492 through land titles granted by European kings and queens, which acted as a sort of patent for colonizers. The World Trade Organization (WTO) agreement of TRIPS attempts to signal the importance of maintaining a balance between trade and intellectual property. This agreement, since 1994, requires WTO member countries to develop legal frameworks to protect plant and animal resources in agricultural, pharmaceutical, chemical, textile, or other commodity contexts. Several countries have criticized this agreement, claiming that it's counterproductive in protecting their natural resources. The Eurocentric roots of property claiming and piracy are reinforced by modern Intellectual Property laws established by GATT and WTO which supplements the colonial ideas to "discover and conquer" and to "subdue, occupy, and possess." Environmental activist and food sovereignty advocate Vandana Shiva calls patenting and claiming rights to genetic material and bio-resources "the second coming of Columbus" due to its reinforcement of colonial power dynamics. For example, the intellectual property for Indian products like tamarind, turmeric, and Darjeeling tea have been taken and patented by private corporations in historically colonial countries. More specifically, in 2010 The University of Michigan attempted to patent curcumin, the active ingredient of turmeric powder, to create drugs used for wound healing without directly crediting Indian communities, where turmeric was traditionally used in medicine for treating wounds, infections and skin problems for centuries. "Gene Rush" in Sri Lanka The "Gene Rush" is the new era of biotechnology that allows scientists to extract specific genes from living organisms as raw materials. With the introduction of deoxyribonucleic acid (DNA) research, Sri Lanka has been marked with imminent danger as a target of biopiracy. Spotted in the top 34 biodiversity hotspots, Sri Lanka claims the highest biodiversity per unit area of terrestrial among Asian countries. Currently, Sri Lanka has 1,500 identified species of medicinal herbs and plants, and its attraction to biopiracy has put environment protection and conservation at a significant priority in the country. Recent efforts were enacted by United Nations Industrial Development Organization (UNIDO) in collaboration with the Spice Council and the government of Sri Lanka to enhance the productive capacities and competitiveness of the cinnamon value chain in the country. Terminology "Biopiracy" was coined in the early 1990s by Pat Mooney, founder of ETC Group which works to protect the world's most vulnerable people from socioeconomic and environmental impacts of new, modern technologies. He defines it as when researchers or research organizations take biological resources without official sanction, largely from less affluent countries or marginalized people. 
Biopiracy includes theft or misappropriation of genetic resources and traditional knowledge through the intellectual property system and unauthorized and uncompensated collection of genetic resources for commercial purposes. Mooney, along with other critics of the patent system, believes that the current intellectual property system creates inequities in the system by allowing wealthy and powerful groups of people to own the most basic building blocks of life. Intellectual Property and international law Intellectual property (IP) rights include patents, copyright, industrial design rights, trademarks, plant variety rights, trade dress, geographical indications, and sometimes trade secrets. Intellectual property rights (IPR) were created to promote and reward scientific knowledge and creativity. However, they naturally weigh towards benefiting transnational corporations. Restrictions in favor of corporations Early intellectual property treaties were crafted in the late 19th century by European powers, and inherently ignored large parts of the impact of intellectual property on non-European peoples, cultures, and traditions. In the late 20th century, more inequalities were added to the intellectual property system, representing a shift from common rights to private rights of knowledge. The preamble of TRIPS agreement acknowledges these rights as private rights. By privatizing intellectual commons, TRIPS encourages corporate monopoly. A second restriction comes from the fact that IP rights are only recognized when they generate profit, rather than then when they meet social needs. The TRIPS agreement clarifies that an innovation must be capable of industrial profit in order to be recognized as an IPR, which discourages recognition of social good. Legal framework against biopiracy in relation to genetic resources and traditional knowledge In parallel, the international community has been working on different legal pathways to rebalance the intellectual property system in favour of indigenous peoples and local communities, in an attempt to address concerns related to biopiracy. 1992–2010: Biodiversity Convention in Rio & Protocol in Nagoya First elements related to genetic resources and traditional knowledge were included in the 1992 Convention on Biological Diversity (CBD). In 2010 (in force 2014), the Nagoya protocol to the CBD created actionable mechanisms to ensure a fair access and benefit-sharing (FABS or ABS) of genetic resources (GR). 2023: High Seas Treaty in New York In June 2023, the UN adopted the United Nations agreement on biodiversity beyond national jurisdiction (BBNJ Agreement), also called "High Seas Treaty". It concerns the conservation and sustainable use of marine biological diversity in areas beyond national jurisdiction, following maritime jurisdictions established under the United Nations Convention on the Law of the Sea. 2024: GRATK Treaty in Geneva Since 2001, the World Intellectual Property Organization through its Intergovernmental Committee on Intellectual Property and Genetic Resources, Traditional Knowledge and Folklore (IGC) has worked on several areas to bridge the gaps in international law in relation to biopiracy in genetic resources, traditional knowledge (TK), and traditional cultural expressions (TCE, formerly called "folklore"). The first output from the work of the IGC was a Diplomatic Conference, held in May 2024 to agree on a treaty for patent disclosures requirements in relation to "GR and Associated TK" (hence the treaty's acronym, GRATK). 
On 24 May 2024, the WIPO Diplomatic Conference finally adopted the "landmark" WIPO Treaty on Intellectual Property, Genetic Resources and Associated Traditional Knowledge (GRATK), which was signed by 30 countries on the day of its conclusion. Examples Neem tree In the arid areas of India, the neem tree, or Azadirachta indica, is a fast-growing evergreen of up to 20 meters in height. From its roots to leaves, the tree contains a number of potent chemical compounds, including azadirachtin which can be found in the seeds. The neem tree has applications in medicine, toiletries, contraception, timber, fuel, and in agriculture. Historically, access to the neem tree's various products has been free or cheap. There are about 14 million neem trees in India, and the centuries old village techniques of seed oil extraction and pesticidal emulsions do not require expensive equipment. Villagers relied on the large number of different medicinal compounds accessible through the neem material which were commonly available. When US timber importer Robert Larson noticed the tree's usefulness in 1971, he conducted research over the next decade on the pesticidal properties in the neem extract called Margosan-O. After gaining clearance for the product from the US Environmental Protection Agency (EPA) in 1985, he sold the patent for the product to W.R. Grace. While the corporation patented the neem tree seed extract for their antifungal spray, Neemex in 1994, neem extracts have been used by rural farmers in India for more than 2,000 years in insect repellants. Challenge against patent India-based Research Foundation for Science, Technology, and Ecology (RFSTE) challenged the US patent with the claim that the qualities of the neem tree and their use had been known in India for over 2,000 years. The Congressional Research Service (CRS) reported to US Congress in justification of the patent claiming that the synthetic form or the process of synthesis of the naturally occurring compound should be patentable. The patent was finally overturned by the EPO in 2000. The village neem tree has become a symbol of Indian Indigenous knowledge and resistance against transnational corporations, and protestors against international property rights legislation carried twigs or branches of neem. Hoodia The Hoodia plant of the Kalahari Desert was used for thousands of years by the nomadic San people in southern Africa to help survive through hunger and thirst during their long expeditions in the desert. In 2016, the South African Council for Scientific and Industrial Research (CSIR) gained a government-funded patent for a new drug (P57) derived from the succulent for its appetite-suppressing qualities. CSIR scientists isolated the P57 molecule in 1996 after decades of research on indigenous plants. The patented formula was sold to western pharmaceutical multinational companies Pfizer and Phytopharm as a miracle drug for weight loss. Challenge against patent Following the confirmation of the patent, representatives of the San people, backed by the global support of patent-law critics and bioethicists, demanded restitution of their rights to their common intellectual property. After a long dispute, CSIR and the San people came to a confidential 'benefit-sharing' agreement where the San people were given royalties, knowledge exchange and creation of jobs from the industry. Pineapple leather Piña cloth, in the nineteenth century, was a creation unique to the Philippines. 
With fibers collected from the leaves of pineapples, the weaving mechanism of piña is a complex and labor-intensive process used by a small number of women to dress the country's elite. Scraping, the most common method of extracting pineapple leaf fibre (PALF), starts with carefully removing the prickles, epidermis, and pulp from the sides of the leaf with a dull knife. This is followed by exposing the fiber and finely combing it to separate the strands. With the help of a bamboo device, the separated strands are then threaded and weaved together through a delicate process to create a continuous filament. After some years of research and development of potential leather alternatives at the Royal College of Art in London, Dr. Carmen Hijosa, founder and chief creative innovation officer of Ananas Anam, claimed the rights to Piñatex, a leather alternative made from PALF. The patent on this technology makes it nearly impossible for the people of the Philippines and indigenous people to gain credit for the fabric that greatly impacted the shape of their history and culture for generations. Piñatex recently partnered with Dole, promising scaled-up leather production with the waste product from their pineapple farms in the Philippines. Despite the violent history of the Dole Empire, Piñatex has been expanding its market by collaborating with brands like Chanel, H&M, and Nike. The patent remains to this day. Corporate greenwashing Greenwashing is a term coined by environmentalist Jay Westervelt in 1986, meaning the false claims by companies that give the impression of sustainability and environmentalism. Without clarifying the metrics and quantifiable goal of the company's environmental agenda, many big companies attempt to paint the picture of ethical and eco-friendly images. Resources and materials pirated from indigenous communities are often commodified and recycled into corporate environmentalist agendas. Due to the exploitative nature of the fast fashion supply chain, many 'green' collections released by corporations only promote their marketing strategies and increase problems with textile waste and climate change. Nike received backlash after the 2020 Impact Report which showed the lack of sustainability in footwear. To tackle the feedback, Nike launched the Happy Pineapple Collection featuring Ananas Anam's vegan leather material and a tropical fruit design embroidered across the Air Max 90, the Air Max 95, Air Force One, and Air-Zoom collections. The Conscious Collection released by H&M in 2010 also partners with Ananas Anam to produce vegan leather jackets. Due to inconclusive data on Piñatex biodegradability, the Norwegian Consumer Authority accused the brand of misleading customers with vague details of the sustainability claims made. The brand responded by saying they would accept the criticism and communicate the extra value. New efforts The Convention on Biological Diversity created by the United Nations in 1992 demanded that bioprospecting should not be done without the consent of the host country. The convention concluded that exploitation of local resources for medicinal and pharmaceutical purposes should actively involve local traditional communities and the produced profit and benefits be shared in an equitable way. The International Cooperative Biodiversity Group (ICBG) is a network of bioprospecting projects funded by the US government. 
While the main objective is to discover and research plants bearing chemical compounds that could cure diseases in the United States, the countries hosting the searches can expect equitable rewards and benefits. Local job creation in communities is promoted by conducting the initial extraction and analysis steps in local laboratories. If the research leads to commercialized drugs, 50% of the royalties are invested into community development funds run by indigenous people. See also Intellectual property Bioprospecting Greenwashing Neo-colonial science References Biotechnology Population genetics
Biopiracy
[ "Biology" ]
3,363
[ "Biotechnology", "nan", "Biodiversity", "Biopiracy" ]
62,377,568
https://en.wikipedia.org/wiki/Solvent%20vapour%20annealing
Solvent vapor annealing (SVA) is a widely used technique for controlling the morphology and ordering of block copolymer (BCP) films. By controlling the block ratio (f = N_A/N), sphere, cylinder, gyroid, and lamellar structures can be generated by forming a swollen and mobile thin-film layer from added solvent vapor to facilitate the self-assembly of the polymer blocks. The process increases lateral ordering by several orders of magnitude compared to previous methods. It is a milder alternative to thermal annealing. Ideally, the chamber in which SVA takes place is a metal chamber that is inert to reaction with the given solvent, allowing for high precision in forming the desired nanostructures. Computer-controlled valves for solvent addition and withdrawal are used to increase precision as well. This regulated inlet, along with close monitoring of pressure gauges and film thickness, allows immediate response and control while the annealing and evaporation phases proceed. Factors Affecting SVA Two of the main factors affecting SVA are the solvent used and the nanostructure to be obtained. For example, if a hierarchical structure is desired, a solvent whose vapor can selectively mobilize the amorphous chains of a semi-crystalline polymer is ideal, because it preserves the integrity of the crystals and allows the secondary structure to form. BCPs themselves form ordered nanostructures because of thermodynamic incompatibility between the different blocks of the polymer. The equilibrium morphology can be predicted from the molar mass of the blocks, the degree of polymerization of the chains (N), and the Flory-Huggins interaction parameter (χ), which measures how incompatible the different blocks are. These factors, along with the composition of the BCP, drive microphase separation of the chains and rearrangement into the desired product (a rough numerical sketch of this composition-to-morphology mapping appears at the end of this entry). The composition is an especially important part of the process: knowing the ordering of the blocks, such as alternating AB monomers, sheds light on how to section the polymer in the desired manner. Along with this, the selection of a specific type of block copolymer is important for the process and its effectiveness. The main considerations are the original structure of the block at room temperature, as well as the temperatures at which each block begins to change phase. Knowing these temperatures is critical for determining when each block will begin to take in solvent and at what rate, which in turn is critical for pushing the block copolymer toward a desired morphology through annealing. Other factors that affect SVA are parameters such as vapor pressure, solvent concentration in the film, and the evaporation rate of the solvent. Each of these factors contributes to the occasional volatility and imprecision of the method, which lacks a set mechanism for constructing desired structures such as nanocylinders. Perfect control over the desired morphology has yet to be achieved, given the plethora of factors dictating formation. Applications There are many applications in technology and laboratory work for this process to create desired polymer morphologies. One of these applications is inscribing secondary nanostructures onto electrospun fibers. 
With poly(ε‐caprolactone) (PCL) fibers, solvents such as acetone can be used to move the amorphous chains of the block polymer onto a pre-existing crystal, creating the inscribed secondary structure. When the PCL is annealed with acetone, the amorphous chains can be mobilized to a desired region while the overall integrity of the fully crystallized regions stays intact. With careful choice of the semi-crystalline polymer and an appropriate solvent vapor, this simple process can be applied to many different systems and allows the creation of many types of hierarchical polymer materials. Another application of SVA is improving photovoltaic device efficiency through the annealing of perovskite materials. Greater performance of these energy cells depends on higher-quality perovskite materials and on the use of SVA to create higher-quality films that retain energy more efficiently. Solvent engineering is key to making the perovskite material and to improving its quality through SVA in an anhydrous isopropanol environment, where the crystalline material has low solubility, which causes the performance to increase greatly. The use of SVA here offers a promising, more energy-efficient path for using specific polymers to advance energy storage. Challenges and Areas to Focus on for Improvement Several areas of focus remain for SVA to keep improving and driving innovation in technology. Firstly, the chambers in which SVA takes place should continue to be improved to allow precision of the process, as well as reproducibility of the same structure on each attempt. Discussion of these chambers and the components that make them precise has so far been largely hypothetical, centered on which parameters affect reproducibility. It is imperative to continue improving control over the annealing by regulating all factors, such as humidity and temperature. The point of defining such parameters meticulously is so that multiple labs can reproduce a certain compound to the same effect. Next, with improvement of the apparatus in which the process takes place, in situ studies using X-ray and neutron scattering methods can give more accurate images of the swollen and dried states of the BCP. Methods such as ellipsometry and interferometry can also reveal the thickness of the polymer films in different states and the nanostructure orientation, which will help in learning more about the equilibrium structure and the kinetics of developing a specified morphology. It is also important to be able to define small-molecule additions to different parts of the block copolymer at different points of the annealing and evaporation, so as to know precisely how the moieties will create certain orientations and directionality in the structure. The final area moving forward is the implementation of the created block copolymers in new applications and technology, beyond laboratory study and characterization of the method. It is important to go beyond creating the nanostructures and assess their utility in an application, which will help reveal practical shortcomings of the created polymers and areas for improvement in parts of the structure, such as film integrity and attachment strength of the amorphous chains. 
Going beyond simple surface imaging will also make it possible to recognize and address some of the dangers and hindrances to functionality, such as the toxicity of working with organic solvents or the issues with dewetting of the swollen state of the BCP. References Copolymers Polymer chemistry
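As referenced earlier in this entry, the mapping from block fraction f and segregation strength χN to an expected equilibrium morphology can be illustrated with a minimal sketch. The numerical boundaries below (an order-disorder transition near χN ≈ 10.5 for a symmetric diblock, and composition windows of roughly f < 0.17 for spheres, 0.17–0.33 for cylinders, 0.33–0.37 for gyroid, and 0.37–0.5 for lamellae) are approximate mean-field values quoted only for illustration; real phase boundaries shift with χN and with the specific polymer pair, and the function name is hypothetical.

```python
# Rough sketch: map a diblock copolymer's composition to an expected
# equilibrium morphology using approximate mean-field boundaries.
# The numeric windows below are illustrative assumptions, not exact values.

def expected_morphology(f: float, chi: float, N: int) -> str:
    """f = block fraction N_A/N, chi = Flory-Huggins parameter, N = degree of polymerization."""
    if not 0.0 < f < 1.0:
        raise ValueError("block fraction f must lie between 0 and 1")
    if chi * N < 10.5:           # approximate order-disorder transition for a symmetric diblock
        return "disordered"
    f_min = min(f, 1.0 - f)      # the phase diagram is symmetric about f = 0.5
    if f_min < 0.17:
        return "spheres"
    if f_min < 0.33:
        return "cylinders"
    if f_min < 0.37:
        return "gyroid"
    return "lamellae"

if __name__ == "__main__":
    for f in (0.10, 0.25, 0.35, 0.50):
        print(f, expected_morphology(f, chi=0.08, N=400))
```

For example, with χN = 32 the sketch predicts spheres at f = 0.10 and lamellae at f = 0.50, which matches the qualitative trend described in the text above.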
Solvent vapour annealing
[ "Chemistry", "Materials_science", "Engineering" ]
1,434
[ "Materials science", "Polymer chemistry" ]
62,387,950
https://en.wikipedia.org/wiki/Dimethyl%20chlorothiophosphate
Dimethyl chlorothiophosphate is a chemical that is used as an intermediate in the manufacture of pesticides and plasticisers. It is an organophosphate with sulfur and chlorine also bonded to the central phosphorus atom. In 1985 American Cyanamid had an accidental release of this chemical from its Linden plant, and it was smelled 32 km away. References Organophosphates Thiophosphoryl compounds
Dimethyl chlorothiophosphate
[ "Chemistry" ]
94
[ "Functional groups", "Thiophosphoryl compounds" ]
62,389,592
https://en.wikipedia.org/wiki/Indentation%20size%20effect
The indentation size effect (ISE) is the observation that hardness tends to increase as the indent size decreases at small scales. When an indent (any small mark, but usually made with a special tool) is created during material testing, the hardness of the material is not constant. At the small scale, materials will actually be harder than at the macro-scale. For the conventional indentation size effect, the smaller the indentation, the larger the difference in hardness. The effect has been seen through nanoindentation and microindentation measurements at varying depths. Dislocations increase material hardness by increasing flow stress through dislocation blocking mechanisms. Materials contain statistically stored dislocations (SSD) which are created by homogeneous strain and are dependent upon the material and processing conditions. Geometrically necessary dislocations (GND) on the other hand are formed, in addition to the dislocations statistically present, to maintain continuity within the material. These additional geometrically necessary dislocations (GND) further increase the flow stress in the material and therefore the measured hardness. Theory suggests that plastic flow is impacted by both strain and the size of the strain gradient experienced in the material. Smaller indents have higher strain gradients relative to the size of the plastic zone and therefore have a higher measured hardness in some materials. For practical purposes this effect means that hardness in the low micro and nano regimes cannot be directly compared if measured using different loads. However, the benefit of this effect is that it can be used to measure the effects of strain gradients on plasticity. Several new plasticity models have been developed using data from indentation size effect studies, which can be applied to high strain gradient situations such as thin films. References Hardness tests Materials science Plasticity (physics)
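As an illustration of the depth dependence described above, the Nix-Gao strain-gradient relation, a widely cited model of the conventional indentation size effect that is not named in the text above, predicts hardness rising as indentation depth decreases according to H(h) = H0·sqrt(1 + h*/h), where H0 is the macroscopic hardness and h* a material-dependent characteristic length. The parameter values in the sketch below are hypothetical and chosen only to show the trend.

```python
import math

# Illustrative sketch of depth-dependent hardness using the Nix-Gao relation
# H(h) = H0 * sqrt(1 + h_star / h). Parameter values are hypothetical.

def nix_gao_hardness(h_um: float, H0_gpa: float, h_star_um: float) -> float:
    """Hardness at indentation depth h (micrometres) for macroscopic hardness H0 (GPa)
    and characteristic length h_star (micrometres)."""
    if h_um <= 0:
        raise ValueError("indentation depth must be positive")
    return H0_gpa * math.sqrt(1.0 + h_star_um / h_um)

if __name__ == "__main__":
    # Hypothetical values: H0 = 1.0 GPa, h* = 0.5 um.
    for h in (0.05, 0.1, 0.5, 2.0, 10.0):
        print(f"h = {h:5.2f} um -> H = {nix_gao_hardness(h, 1.0, 0.5):.2f} GPa")
```

Running the sketch shows hardness falling from about 3.3 GPa at 50 nm depth toward the macroscopic value of 1 GPa at large depths, the trend the article describes qualitatively.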
Indentation size effect
[ "Physics", "Materials_science", "Engineering" ]
368
[ "Applied and interdisciplinary physics", "Deformation (mechanics)", "Materials science", "Plasticity (physics)", "Materials testing", "nan", "Hardness tests" ]
58,052,112
https://en.wikipedia.org/wiki/Pparg%20coactivator%201%20alpha
Peroxisome proliferator-activated receptor gamma coactivator 1-alpha (PGC-1α) is a protein that in humans is encoded by the PPARGC1A gene. PPARGC1A is also known as human accelerated region 20 (HAR20). It may, therefore, have played a key role in differentiating humans from apes. PGC-1α is the master regulator of mitochondrial biogenesis. PGC-1α is also the primary regulator of liver gluconeogenesis, inducing increased gene expression for gluconeogenesis. Function PGC-1α is a gene that contains two promoters, and has 4 alternative splicings. PGC-1α is a transcriptional coactivator that regulates the genes involved in energy metabolism. It is the master regulator of mitochondrial biogenesis. This protein interacts with the nuclear receptor PPAR-γ, which permits the interaction of this protein with multiple transcription factors. This protein can interact with, and regulate the activity of, cAMP response element-binding protein (CREB) and nuclear respiratory factors (NRFs) . PGC-1α provides a direct link between external physiological stimuli and the regulation of mitochondrial biogenesis, and is a major factor causing slow-twitch rather than fast-twitch muscle fiber types. Endurance exercise has been shown to activate the PGC-1α gene in human skeletal muscle. Exercise-induced PGC-1α in skeletal muscle increases autophagy and unfolded protein response. PGC-1α protein may also be involved in controlling blood pressure, regulating cellular cholesterol homeostasis, and the development of obesity. Regulation PGC-1α is thought to be a master integrator of external signals. It is known to be activated by a host of factors, including: Reactive oxygen species and reactive nitrogen species, both formed endogenously in the cell as by-products of metabolism but upregulated during times of cellular stress. Fasting can also increase gluconeogenic gene expression, including hepatic PGC-1α. It is strongly induced by cold exposure, linking this environmental stimulus to adaptive thermogenesis. It is induced by endurance exercise and recent research has shown that PGC-1α determines lactate metabolism, thus preventing high lactate levels in endurance athletes and making lactate as an energy source more efficient. cAMP response element-binding (CREB) proteins, activated by an increase in cAMP following external cellular signals. Protein kinase B (Akt) is thought to downregulate PGC-1α, but upregulate its downstream effectors, NRF1 and NRF2. Akt itself is activated by PIP3, often upregulated by PI3K after G protein signals. The Akt family is also known to activate pro-survival signals as well as metabolic activation. SIRT1 binds and activates PGC-1α through deacetylation inducing gluconeogenesis without affecting mitochondrial biogenesis. PGC-1α has been shown to exert positive feedback circuits on some of its upstream regulators: PGC-1α increases Akt (PKB) and Phospho-Akt (Ser 473 and Thr 308) levels in muscle. PGC-1α leads to calcineurin activation. Akt and calcineurin are both activators of NF-kappa-B (p65). Through their activation, PGC-1α seems to activate NF-kappa-B. Increased activity of NF-kappa-B in muscle has recently been demonstrated following induction of PGC-1α. The finding seems to be controversial. Other groups found that PGC-1s inhibit NF-kappa-B activity. The effect was demonstrated for PGC-1 alpha and beta. PGC-1α has also been shown to drive NAD biosynthesis to play a large role in renal protection in acute kidney injury. 
Clinical significance PPARGC1A has been implicated as a potential therapy for Parkinson's disease conferring protective effects on mitochondrial metabolism. Moreover, brain-specific isoforms of PGC-1alpha have recently been identified which are likely to play a role in other neurodegenerative disorders such as Huntington's disease and amyotrophic lateral sclerosis. Massage therapy appears to increase the amount of PGC-1α, which leads to the production of new mitochondria. PGC-1α and beta has furthermore been implicated in polarization to anti-inflammatory M2 macrophages by interaction with PPAR-γ with upstream activation of STAT6. An independent study confirmed the effect of PGC-1 on polarisation of macrophages towards M2 via STAT6/PPAR gamma and furthermore demonstrated that PGC-1 inhibits proinflammatory cytokine production. PGC-1α has been recently proposed to be responsible for β-aminoisobutyric acid secretion by exercising muscles. The effect of β-aminoisobutyric acid in white fat includes the activation of thermogenic genes that prompt the browning of white adipose tissue and the consequent increase of background metabolism. Hence, the β-aminoisobutyric acid could act as a messenger molecule of PGC-1α and explain the effects of PGC-1α increase in other tissues such as white fat. PGC-1α increases BNP expression by coactivating Estrogen-related receptor alpha (ERRα) and / or AP1. Subsequently, BNP induces a chemokine cocktail in muscle fibers and activates macrophages in a local paracrine manner, which can then contribute to enhancing the repair and regeneration potential of trained muscles. Most studies reporting effects of PGC-1α on physiological functions have used mouse models in which the PGC-1α gene is either knocked out or overexpressed from conception. However, some of the proposed effects of PGC-1α have been questioned by studies using inducible knockout technology to remove the PGC-1α gene only in adult mice. For example, two independent studies have shown that adult expression of PGC-1α is not required for improved mitochondrial function after exercise training. This suggests that some of the reported effects of PGC-1α are likely to occur only in the developmental stage. In the metabolic disorder of combined malonic and methylmalonic aciduria (CMAMMA) due to ACSF3 deficiency, there is a massively increased expression of PGC-1α, which is consistent with upregulated beta oxidation. Interactions PPARGC1A has been shown to interact with: CREB-binding protein Estrogen-related receptor alpha (ERRα), estrogen-related receptor beta (ERR-β), estrogen-related receptor gamma (ERR-γ). Farnesoid X receptor FBXW7 MED1, MED12, MED14, MED17, NRF1 Peroxisome proliferator-activated receptor gamma Retinoid X receptor alpha Thyroid hormone receptor beta ERRα and PGC-1α are coactivators of both glucokinase (GK) and SIRT3, binding to an ERRE element in the GK and SIRT3 promoters. See also MB-3 (drug) PPARGC1B Transcription coregulator References Further reading External links Exercise biochemistry
Pparg coactivator 1 alpha
[ "Chemistry", "Biology" ]
1,544
[ "Biochemistry", "Exercise biochemistry" ]
58,054,326
https://en.wikipedia.org/wiki/Bis-oxadiazole
Bis-oxadiazole, more formally known as bis(1,2,4-oxadiazole)bis(methylene) dinitrate, is a nitrated heterocyclic compound of the oxadiazole family. Bis-oxadiazole is related to bis-isoxazole tetranitrate (BITN), which was developed at the United States Army Research Laboratory (ARL). With a high nitrogen content, these compounds are poised to release a large volume of very stable N2. It is a “melt-cast” explosive material that is potentially both a more powerful and a more environmentally friendly alternative to TNT. Synthesis Glyoxal condenses with hydroxylamine to yield diaminoglyoxime (DAG). Treating DAG with in the presence of base at high temperature, followed by nitration, yields bis(1,2,4-oxadiazole). Replacement for TNT TNT is an attractive explosive because it is melt-castable. A low melting point of about 80 °C and a high decomposition temperature of 295 °C allow manufacturers to safely pour TNT into molds. The production of TNT generates hazardous waste, e.g. red water and pink water. Bis-oxadiazole, which is also melt-castable, is about 1.5 times more powerful than TNT and yet produces less hazardous waste. A major challenge in the production of bis-oxadiazole is its low yield. References Military technology Explosive chemicals Oxadiazoles Nitrate esters
Bis-oxadiazole
[ "Chemistry" ]
320
[ "Explosive chemicals" ]
73,400,837
https://en.wikipedia.org/wiki/Osmium%28IV%29%20fluoride
Osmium(IV) fluoride is an inorganic chemical compound of osmium metal and fluorine with the chemical formula OsF4. Synthesis Passing fluorine over heated osmium at 280 °C: Os + 2 F2 → OsF4. Reaction products can be contaminated with other osmium fluorides. Physical properties Osmium(IV) fluoride forms yellow hygroscopic crystals. Chemical properties Osmium(IV) fluoride reacts with water. References Osmium compounds Fluorides Platinum group halides
Osmium(IV) fluoride
[ "Chemistry" ]
99
[ "Fluorides", "Salts" ]
73,401,172
https://en.wikipedia.org/wiki/Krypton%20hexafluoride
Krypton hexafluoride is an inorganic chemical compound of krypton and fluorine with the chemical formula KrF6. It is still a hypothetical compound. Calculations indicate it is unstable. History In 1933, Linus Pauling predicted that the heavier noble gases would be able to form compounds with fluorine and oxygen. He also predicted the existence of krypton hexafluoride. Calculations suggest it would have octahedral molecular geometry. So far, out of all possible krypton fluorides, only krypton difluoride (KrF2) has actually been synthesized. References Krypton compounds Fluorides Nonmetal halides Hexafluorides Fluorinating agents Hypothetical chemical compounds
Krypton hexafluoride
[ "Chemistry" ]
150
[ "Inorganic compounds", "Hypotheses in chemistry", "Salts", "Inorganic compound stubs", "Fluorinating agents", "Theoretical chemistry", "Hypothetical chemical compounds", "Reagents for organic chemistry", "Fluorides" ]
73,402,513
https://en.wikipedia.org/wiki/Breastmilk%20medicine
Breastmilk medicine refers to the non-nutritional usage of human breast milk (HBM) as a medicine or therapy to cure diseases. Breastmilk is perceived as an important food that provides essential nutrition to infants. It also provides protection in terms of immunity by direct transfer of antibodies from mothers to infants. The immunity developed via this mean protects infants from diseases such as respiratory diseases, middle ear infections, and gastrointestinal diseases. HBM can also produce lifelong positive therapeutic effects on a number of chronic diseases, including diabetes mellitus, obesity, hyperlipidemia, hypertension, cardiovascular diseases, autoimmunity, and asthma. Therapeutic use of breastmilk has long been a part of natural pharmacopeia, and ethnomedicine. The effectiveness of HBM and fresh colostrum as a treatment for inflammatory disorders such as rhinitis, skin infection, soring nipples, and conjunctivitis has been reported by public health nurses. Currently, many breastmilk components have shown therapeutic benefits in preclinical studies and are being evaluated by clinical studies. Anti-inflammatory effects HBM can be used to treat inflammations. Breastfeeding has an anti-inflammatory effect that is conveyed by its chemical components’ interaction with body cells. The major chemical component that produces the anti-inflammatory effect in both colostrum and transitional milk are glycoprotein and lactoferrin. Lactoferrin has multiple actions including lymph-stimulatory, anti-inflammatory, anti-bacterial, anti-viral, and anti-fungal effects.  The anti-inflammatory effects of lactoferrin are attributed to its iron-binding properties, inhibition of inflammation-causing molecules including interleukin-1β (IL-1β) and tumor necrosis factor-alpha (TNF-α), stimulation of the activity and maturation of lymphocytes as well as preservation of an antioxidant environment.  Besides, lactoferrin protects infants against bacterial and fungal infections in combination with other peptides present in HBM. Respiratory viral infection in infants Lactoferrin in HBM can also inhibit the invasion and proliferation of respiratory syncytial virus (RSV), which is a virus commonly found in the human respiratory tract and causes mild cold-like symptoms.  Lactoferrin can interact directly with the F glycoprotein which is a protein on the surface of the virus that is responsible for presenting the virus to body cells and causing infections. Adenovirus is another group of viruses that targets the mucosal membrane of the human respiratory tract. It usually causes mild to severe infection with symptoms like the common cold or flu.  Lactoferrin can prevent infection of adenovirus since it can interfere with the primary receptors of the virus. HBM regurgitation into the nose after breastfeeding is a way to eliminate these mucosal bacteria and protect infants against recurring nose infections in breastfed infants in the long term. Skin Problems: atopic eczema and diaper dermatitis Atopic eczema is an inflammatory disorder that occurs in the outermost skin layer called the epidermis. This skin disorder affects 50% of infants in the first year after birth. Infants suffering from atopic eczema are characterized by intense itching, redness, and crusting in their skin. Skin thickening may result in chronic or sub-acute patients due to scratching and fissuring over time. One of the commonly used medications for atopic eczema is non-prescription cream containing an anti-inflammatory agent 1% hydrocortisone. 
On the other hand, applying HBM to the skin as an ointment is therapeutically beneficial to infants with mild to moderate atopic eczema. Evidence suggests that HBM is similarly effective to 1% hydrocortisone in relieving infants' inflamed skin. Diaper dermatitis is another prevalent infant dermatological disorder. Common clinical features of diaper dermatitis include inflamed, itchy, tender skin and soreness in the diaper area. Study results have shown that human breastmilk is highly effective in healing diaper rash. There is much evidence supporting the anti-inflammatory effect of HBM. The immunological components in HBM help strengthen the baby's immune system. These immunological components include antimicrobial proteins that can inhibit or kill a wide range of pathogens whose invasion may lead to an inflammatory response. This antimicrobial effect could be achieved by indirectly creating an unfavorable environment for the growth of pathogens by modifying commensal flora, pH, or the level of bacterial substrates. The antimicrobial effect is also produced by the antibody immunoglobulin A (IgA), which is the predominant immunoglobulin present in HBM and can protect infants from a variety of skin infections. Nipple problems: sore nipples Nipple pain is a common difficulty for mothers who breastfeed. Topical application of expressed breastmilk has long been a non-pharmacological intervention to reduce nipple pain. According to many studies, topical application of HBM can help reduce the perception of nipple pain in a treatment course of 4 to 5 days. It is also stated that HBM is more effective in lowering pain perception than lanolin. However, another study indicates that lanolin produces lower pain levels in mothers with nipple pain than HBM. This study stated that lanolin shows a better therapeutic effect for healing rates, nipple trauma, and nipple pain. Although lanolin may be more efficacious than HBM for nipple problems, HBM has not been shown to be ineffective for treating nipple pain. Considering that HBM is more readily available than lanolin, it remains useful for treating nipple problems in a practical sense. Eye Problems Traditional uses The topical application of breastmilk as a treatment for the infectious disease conjunctivitis has been practiced since ancient times in Egypt, Rome, Greece, and elsewhere. HBM was also recommended by the Greek physician Galen as a remedy for conjunctivitis. Physicians in early modern England recommended human milk for conditions ranging from mild symptoms such as soreness to even blindness. Healers in that era even believed that a mixture of HBM with other components could restore eyesight. Scientific findings Evidence from clinical research has shown that applying HBM can prevent people from getting conjunctivitis infections. Gonorrhea is a sexually transmitted infection. Besides sex-borne transmission, it can also be transmitted to babies during childbirth. This infectious disease can cause neonatal conjunctivitis, which could lead to blindness if untreated. Hospitals in the United States are required to apply antibiotics to the eyes of newborns within one hour of childbirth to prevent the development of conjunctivitis. 
This is because certain bacteria in HBM are found to be effective against gonorrhea bacteria and may serve as a convenient and readily available substitute for antibiotics in places where antibiotics are not widely available. Breastmilk in umbilical cord care After labor in childbirth, the umbilical cord is clamped and cut, and part of it stays in contact with the infant. The remaining part of the umbilical cord dries out and eventually separates after 5–15 days. In taking care of the umbilical cord, the dry care method involving soap and water is recommended by the WHO and many national health organizations. For the use of HBM in umbilical cord care, clinical studies found that topical application of breast milk will lead to a shorter time of cord separation than other methods including ethanol and dry cord care. Anti tumoricidal and anti-bacterial effects Human alpha-lactalbumin is a natural protein component of HBM. It can be extracted by chromatography from breast milk. It complexes with oleic acid to form a protein called the “human alpha-lactalbumin made lethal to tumor cells” (HAMLET). The HAMLET complex induces apoptosis in lung carcinoma cells. In in vitro and animal model studies, HAMLET has shown preventative and therapeutic effects in reducing and controlling tumor growth. The physiological effects of HAMLET may explain the proposal that breastfeeding has protective effects for mothers and children against cancer, as shown by the association length of breastfeeding and childhood cancer incidence. The HAMLET has also been found to have anti-bacterial effects through the inhibition of enzymes in glycolysis. Role in society Researchers’ interest in HBM is led by the discovery of a number of chemical components in HBM. These components include growth factors, cytokines, and a heterogeneous population of cells which are stem cells, probiotic bacteria, and the HAMLET complex. By considering the easy accessibility of HBM and high prevalence of infant inflammation disorders, breastmilk may be a cheap and convenient ways to relieve inflammatory symptoms. The prophylactic antibiotic use of human milk may be important in areas where mothers and infants do not have easy access to medicine, such as people living in developing countries. Under these circumstances, practice of HBM therapy as medicine will be a determining factor in infant recovery and survival. General limitations Breastfeeding difficulties Breastfeeding may not be feasible and easy for some mothers due to psychological or physiological reasons. For instance, breastfeeding self-efficacy, the mother's confidence in her breastfeeding abilities, is positively associated with exclusive breastfeeding while postpartum depression makes it more difficult to breastfeed. Mothers who have undergone breast surgeries such as mastectomy may have reduced capabilities of HBM production. Suitability of breastmilk For some individuals, HBM may not be suitable for use, as it may transmit of viruses and other pathogens to infants. For instance, cytomegalovirus, HIV, and bacterial infections from the mother may be transmitted through HBM, causing complications for infants. Evaluation of medical effectiveness of breastmilk There is difficulty in the generalization of study results in evidence-based practice due to inconsistencies in the clinical study findings on breastfeeding medicine. HBM compositions are diverse among different individuals, or the same individual at various times. 
It is influenced by factors such as maternal diet and changes over time after pregnancy. For instance, the protein content of HBM is higher in the earlier stages of lactation. Gradually, the mother produces more mature milk, which is whiter in color compared to the yellowish colostrum. These changes may affect the effectiveness of HBM in medical use. References Medicine
Breastmilk medicine
[ "Biology" ]
2,196
[ "Medicine" ]
73,403,570
https://en.wikipedia.org/wiki/Anthony%20J.%20Dias
Anthony J. Dias is a retired ExxonMobil materials scientist known for scientific contributions in polyolefins and elastomers that led to commercialized products. Education Dias earned a B.S. in chemistry from Kean College in 1982. He earned a Ph.D. in Polymer Science and Engineering from the University of Massachusetts Amherst in 1987 under Prof. Thomas J. McCarthy. Career Dias joined ExxonMobil Chemical Company in 1988 and has held both research and management responsibilities. His most cited scientific work concerned the development of metallocene catalysts on a nonreacting polystyrene support to replace reactive silica supports. Dias retired in June 2021 as Chief Scientist. Awards and recognition 1998 - Sparks–Thomas award from the ACS Rubber Division 2015 - Fellow of the American Chemical Society 2018 - Division Fellow, Polymeric Materials: Science and Engineering References Polymer scientists and engineers ExxonMobil people Living people Fellows of the American Chemical Society University of Massachusetts Amherst alumni Kean University alumni Year of birth missing (living people) Place of birth missing (living people) 20th-century American chemists 21st-century American chemists
Anthony J. Dias
[ "Chemistry", "Materials_science" ]
235
[ "Polymer scientists and engineers", "Physical chemists", "Polymer chemistry" ]
73,410,057
https://en.wikipedia.org/wiki/Palladium%20hexafluoride
Palladium hexafluoride is an inorganic chemical compound of palladium metal and fluorine with the chemical formula PdF6. It is reported to still be a hypothetical compound. This is one of many palladium fluorides. Synthesis Fluorination of palladium powder with atomic fluorine at 900–1700 Pa. Physical properties Palladium hexafluoride is predicted to be stable. The compound is reported to form a dark red solid that decomposes to . Palladium hexafluoride is a very powerful oxidizing agent. References Palladium compounds Fluorides Hexafluorides Metal halides Oxidizing agents Hypothetical chemical compounds Theoretical chemistry
Palladium hexafluoride
[ "Chemistry" ]
145
[ "Redox", "Inorganic compounds", "Oxidizing agents", "Inorganic compound stubs", "Salts", "Hypotheses in chemistry", "Theoretical chemistry", "Hypothetical chemical compounds", "Metal halides", "nan", "Fluorides" ]
73,410,855
https://en.wikipedia.org/wiki/Zeteletinib
Zeteletinib (BOS-172738, DS-5010) is an experimental anticancer medication which acts as a RET inhibitor. See also Enbezotinib Pralsetinib Rebecsinib Resigratinib Selpercatinib References Oxazoles Acetamides Pyridines Quinolinols Methoxy compounds Trifluoromethyl compounds Enzyme inhibitors
Zeteletinib
[ "Chemistry" ]
90
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
73,411,485
https://en.wikipedia.org/wiki/Americium%20hexafluoride
Americium hexafluoride is an inorganic chemical compound of americium metal and fluorine with the chemical formula AmF6. It is still a hypothetical compound. Synthesis by fluorination of americium tetrafluoride was unsuccessfully attempted in 1990. A thermochromatographic identification in 1986 remains inconclusive. Calculations suggest that it may be distorted from octahedral symmetry. Synthesis It is proposed that it can be prepared, in both the condensed and gaseous states, by the reaction of with in anhydrous HF at 313–333 K. References Americium compounds Metal halides Hexafluorides Hypothetical chemical compounds Theoretical chemistry Actinide halides
Americium hexafluoride
[ "Chemistry" ]
144
[ "Inorganic compounds", "Theoretical chemistry stubs", "Hypotheses in chemistry", "Salts", "Theoretical chemistry", "nan", "Hypothetical chemical compounds", "Metal halides" ]
73,414,173
https://en.wikipedia.org/wiki/Robert%20G.%20Wilhelm
Robert Gerard Wilhelm (born June 27, 1960) is an American mechanical engineer. Wilhelm holds the Kate Foster professorship in Mechanical and Materials Engineering at the University of Nebraska — Lincoln. From 2018 to 2023 he served as the Vice Chancellor for Research and Economic Development at UNL. Before joining the University of Nebraska — Lincoln, he served as Vice Chancellor for Research and Economic Development at the University of North Carolina at Charlotte. There, he also held a faculty appointment as a professor. His expertise is in precision engineering and advanced manufacturing. Early life and education Bob Wilhelm was born June 27, 1960, in Mobile, Alabama. As a child, his family moved to Raleigh, North Carolina, where his father, William J. Wilhelm, earned a PhD in Civil Engineering at North Carolina State University. Their family relocated to Morgantown, West Virginia when William J. Wilhelm joined the West Virginia University civil and environmental engineering faculty. While there, Wilhem's mother, Patricia Zietz, earned a Bachelor of Arts in elementary education and Master of Arts in special education. Later, his father joined Wichita State University as the Dean of the College of Engineering, and their family relocated to Wichita, Kansas. Wilhelm earned a Bachelor of Science in Industrial Engineering from Wichita State University in 1981, after beginning coursework at West Virginia University from 1977 to 1979. From 1981 to 1982, he studied the history of science and technology at the University of Leicester and the Ironbridge Gorge Museum as a Rotary Foundation Fellow. In 1984, he earned a Master of Science in Industrial Engineering from Purdue University. In 1992, he received a Ph.D. in Mechanical Engineering from the University of Illinois. Career Early in his career, Wilhelm worked on naval structures and submarines. He also worked in restoration of historic structures including the original iron furnace at Ironbridge (Coalbrookdale, United Kingdom), Jackson's Mill (Lewis County, West Virginia), Staats Mill Covered Bridge and the Fink-Type Truss Bridge (Hamden, New Jersey). Wilhelm has also worked in engineering at Cincinnati Milacron and the Palo Alto Laboratory of Rockwell Science Center. His engineering has impacted results in mechanical design and computational geometry, digital twin approaches to manufacturing for Caterpillar Inc., aerospace design and manufacturing for the Boeing F/A-18E/F Super Hornet and AI approaches to logistics for the US military program Dynamic Analysis and Replanning Tool. He joined University of North Carolina at Charlotte in 1992 as a faculty member and later co-founded a high-tech company in Charlotte, OpSource. In 1994, he was recognized with the Young Investigator Award of the National Science Foundation. He was a founding faculty member at UNC Charlotte in 5 different PhD programs: Mechanical Engineering, Biology and Biotechnology, Information Technology, Optical Sciences, and Nanoscale Sciences. Wilhelm was a very early and longstanding member of the Precision Engineering and Metrology Group at the University of North Carolina at Charlotte. Wilhelm's engineering research has addressed metrology and measurement theory for complex mechanical parts, virtual manufacturing for design of manufacturing systems, software, and automation and artificial intelligence for mechanical design and tolerance synthesis. 
As a higher education leader he has led university organizations at UNC Charlotte and UNL that envisioned, built and operated innovation campuses with partner companies working collaboratively on the university site. In Charlotte, these organizations included The Charlotte Research Institute Campus at UNC Charlotte and the University Research Park. In Nebraska, Wilhelm led the Nebraska Innovation Campus during his time as vice chancellor at the University of Nebraska - Lincoln. Awards Wilhelm is a fellow of the National Academy of Inventors and the International Academy for Production Engineering. In 2012, he received the Society of Manufacturing Engineers S.M. Wu Research Implementation Award. References Mechanical engineering 1960 births Living people Fellows of the National Academy of Inventors
Robert G. Wilhelm
[ "Physics", "Engineering" ]
769
[ "Applied and interdisciplinary physics", "Mechanical engineering" ]
73,414,919
https://en.wikipedia.org/wiki/Pilar%20Ibarrola
María del Pilar Ibarrola Muñoz (born 1944) is a Spanish statistician and stochastic control theorist, part of the early expansion of statistics into an academic discipline in Spain in the 1960s and 1970s. She was named professor of decision theory at Complutense University of Madrid in 1974, but soon after left for the University of La Laguna, where she was named as University Professor. She returned to the professorship of decision theory at Complutense University in 1979. Ibarrola served as the third president of the Spanish Statistics and Operations Research Society (SEIO), from 1984 to 1986. She received the SEIO Medal in 2013. References 1944 births Living people Spanish statisticians Spanish women statisticians Control theorists Academic staff of the University of La Laguna Academic staff of the Complutense University of Madrid
Pilar Ibarrola
[ "Engineering" ]
172
[ "Control engineering", "Control theorists" ]
66,134,466
https://en.wikipedia.org/wiki/Alona%20Ben-Tal
Alona Ben-Tal is an Israeli and New Zealand applied mathematician who works as an associate professor and deputy head of school in the School of Natural and Computational Sciences at Massey University. Her research concerns dynamical systems and the mathematical modeling of human and bird breathing and of electrical power systems. Education and career Ben-Tal originally studied mechanical engineering at the Technion – Israel Institute of Technology, earning a bachelor's degree there in 1991 and a master's degree in 1994. After working in industry for three years, she moved with her family to New Zealand and returned to graduate study in mathematics, completing a Ph.D. in 2001 at the University of Auckland with the dissertation A Study of Symmetric Forced Oscillators supervised by Vivien Kirk, Graeme Wake and Geoff Nicholls. After she completed her doctorate, she held positions at the University of Auckland as a fixed-term lecturer in mathematics, and then as a NZ Science & Technology Post-doctoral Fellow in the Bioengineering Institute, before moving to Massey University as a lecturer in 2005. Contributions In her work on human breathing, Ben-Tal has studied respiratory sinus arrhythmia, the phenomenon that the heart rate speeds up while inhaling and slows down while exhaling. Initially hypothesising that this variability would improve the rate of gas exchange in the lungs, her research found that instead it saves effort by the heart while maintaining even levels of blood oxygenation. In birds, Ben-Tal has studied the one-way nature of certain air passages in bird lungs, and the ability of birds to change the speed of airflow through these passages. Her research found that, in some circumstances, birds spend less time inhaling than they do exhaling. Recognition Ben-Tal was named a fellow of the New Zealand Mathematical Society in 2016. References External links Year of birth missing (living people) Living people Israeli mathematicians Israeli women mathematicians New Zealand women mathematicians Applied mathematicians Technion – Israel Institute of Technology alumni Academic staff of the University of Auckland Academic staff of Massey University University of Auckland alumni
Alona Ben-Tal
[ "Mathematics" ]
416
[ "Applied mathematics", "Applied mathematicians" ]
66,137,611
https://en.wikipedia.org/wiki/Ron%20Heeren
Ron M.A. Heeren (born 1965, Tilburg) is a Dutch scientist in mass spectrometry imaging. He is currently a distinguished professor at Maastricht University and the scientific director of the Multimodal Molecular Imaging Institute (M4I), where he heads the division of Imaging Mass Spectrometry. Scientific career Heeren obtained a PhD degree in Technical Physics at the University of Amsterdam in 1992 under the supervision of Aart Kleyn. He led a FOM-AMOLF research group on macromolecular ion physics and biomolecular imaging mass spectrometry (1995–2014). He was also professor at the chemistry faculty of Utrecht University from 2001 to 2019. Between 1995 and 2015, he worked on new approaches towards high spatial resolution and high-throughput molecular imaging mass spectrometry using secondary ion mass spectrometry (SIMS) and matrix-assisted laser desorption and ionization (MALDI). Heeren has coauthored over 300 peer-reviewed articles, which have been cited over 12,600 times (Google Scholar). Research Heeren’s academic research interests are fundamental studies of the energetics of macromolecular systems, conformational studies of non-covalently bound protein complexes, translational imaging research, high-throughput bioinformatics, and the development and validation of new mass spectrometry–based proteomic imaging techniques for the life sciences. During his postdoctoral fellowship, he worked on the development of innovative ion sources, vacuum systems, data acquisition systems and novel temperature-controlled ion cyclotron resonance cells. He used the FTICR-MS instrument for the study of collisional energy transfer and internal energy distributions. These methods were deployed to investigate their role in the determination of dissociation pathways of biomolecular systems. As a project leader (1995–1997), Heeren led the application of high-resolution MS (FTICR-MS, FTIR imaging spectroscopy and SIMS) to the field of conservation science. He discovered and identified saponified pigment particulates in so-called protrusions in Rembrandt’s “The Anatomy Lesson of Dr. Nicolaes Tulp” in collaboration with the Mauritshuis museum in The Hague. Heeren and his group have pioneered the development of active pixelated detectors for mass spectrometry imaging. One such detector, the Medipix detector, has been adapted to allow microscope-mode imaging mass spectrometry for biomolecules, enabling combined high-throughput and high-resolution molecular imaging using MALDI and SIMS. Professional activities From 2008 to 2013, Heeren was the research director for emerging technologies at the Netherlands Proteomics Centre. In 2014, he was appointed to his current position as one of the scientific directors of M4I at Maastricht University. He was president of the Dutch Society of Mass Spectrometry between 2001 and 2005. He is one of the founding members of the Mass Spectrometry Imaging Society, and was elected its president in 2017. Commercialization Heeren holds 7 patents and has established two spin-off companies, Omics2Image/ASI and the Dutch Screening Group. In 2019, he was awarded the NWO Valorisation Prize in Physics.
Awards
2020 Hans Fisher Senior Fellowship, Institute of Advanced Studies, Technical University of Munich
2020 Thomson Medal, International Mass Spectrometry Foundation, for distinguished contribution to international MS
2019 NWO-Physics Valorisation prize (see above)
2019 Brightlands Convention Award
2014 Robert Feulgen lecturer, Society for Histochemistry
2013 Winner, 10th Venture Challenge, Netherlands Genomics Initiative
2012 Award, Exploratory Measurement Science Group (EMSG), University of Edinburgh, UK
2010 Distinguished Wiley Visiting Scientist award, Environmental Molecular Sciences Laboratory, Department of Energy, US
2008 RCM Beynon Prize, Rapid Communications in Mass Spectrometry
2002 Bert L. Schram Award, Dutch Society for Mass Spectrometry (NVMS)
Most cited publications References 1965 births People from Tilburg 21st-century Dutch chemists Academic staff of Maastricht University Living people Thomson Medal recipients Mass spectrometrists University of Amsterdam alumni
Ron Heeren
[ "Physics", "Chemistry" ]
875
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
66,141,422
https://en.wikipedia.org/wiki/Nandini%20Trivedi
Nandini Trivedi is an Indian-American physicist and Professor of Physics at Ohio State University. Her research is on the emergence of new states of matter arising from strong interactions between electrons in quantum materials. She was elected a Fellow of the American Association for the Advancement of Science in 2020. Early life and education Trivedi started her scientific career at the Indian Institutes of Technology. She moved to Cornell University for her graduate studies, where she worked on transport in disordered systems and quantum size effects in thin film heterostructures. After earning her doctorate, Trivedi was a postdoctoral research fellow at the University of Illinois at Urbana–Champaign and at the State University of New York in Stony Brook. Research and career Trivedi started her career at the Argonne National Laboratory as an assistant scientist and was promoted to scientist. She then joined the Tata Institute of Fundamental Research as a faculty member in 1995. In 2004 Trivedi joined Ohio State University as a Professor in the Department of Physics. Her research explores the emergence of new phases of matter in condensed matter systems. Awards and honours 2010 Elected Fellow of the American Physical Society 2015 Simons Foundation Fellow 2019 Ohio State University Distinguished Scholar 2020 Elected Fellow of the American Association for the Advancement of Science Select publications References Living people Year of birth missing (living people) American people of Indian descent Indian Institutes of Technology alumni Cornell University alumni Ohio State University faculty Condensed matter physicists Fellows of the American Association for the Advancement of Science Fellows of the American Physical Society
Nandini Trivedi
[ "Physics", "Materials_science" ]
301
[ "Condensed matter physicists", "Condensed matter physics" ]
66,141,540
https://en.wikipedia.org/wiki/Rotary%20friction%20welding
Rotary friction welding (RFW) is one of the methods of friction welding; in its classic form it uses the work of friction to create a permanent, inseparable weld. Typically, one of the components is rotated relative to the other and forged (pressed together by an axial force). The heating of the material is caused by friction work and creates a permanent joint. The materials to be welded can be identical, dissimilar, composite or non-metallic. Friction welding methods are generally regarded as solid-state welding.

History
Some applications and patents connected with friction welding date back to the turn of the 20th century, and rotary friction welding is the oldest of these methods. W. Richter patented the linear friction welding (LFW) process in 1924 in England and in 1929 in the Weimar Republic, although the description of the process was vague; H. Klopstock patented the same process in the Soviet Union in 1924. The first description and experiments related to rotary friction welding, however, took place in the Soviet Union in 1956 and are due to A. J. Chudikov (А. И. Чудиков), a machinist who, after studying a large number of scientific works, proposed the method as a commercial process. He had first discovered it by accident in the Elbrussky mine where he worked: having not paid enough attention to lubricating the inside of a lathe chuck, he found that he had welded the workpiece to the lathe. He wondered whether this accident could be exploited for joining and concluded that it was necessary to work at high rotation speeds (about 1000 revolutions per second), then brake immediately and press the welded components together. He wrote to the Ministry of Metallurgy and received the answer that such welding was inappropriate, but short notes about the method were published in Soviet newspapers and aroused the interest of Yu. Ya. Terentyeva, manager of the national Scientific Research Institute of Electrical Welding Equipment; in time Chudikov's method was disseminated. The process was introduced to the United States in 1960. The American companies Caterpillar Tractor Company (Caterpillar - CAT), Rockwell International, and American Manufacturing Foundry all developed machines for this process, and patents were also issued throughout Europe and the former Soviet Union. The first studies of friction welding in England were carried out by the Welding Institute in 1961. In the United States, Caterpillar Tractor Company and MTI developed an inertia process in 1962. In Europe, KUKA AG and Thompson launched rotary friction welding for industrial applications in 1966, developed a direct-drive process, and in 1974 built the rRS6 double spindle machine for heavy truck axles. In 1997, an international patent application entitled "Method of Friction Welding Tubular Members" was filed. The inventor, A. Graham, demonstrated on pipes with a diameter of 152.4 mm a method that uses radial friction welding with an intermediate ring to connect long pipes, succeeding after earlier attempts made in 1975 and after scientists in Leningrad had theorized about the idea in newspapers. Another method, the friction stir welding (FSW) process, was invented and experimentally proven at The Welding Institute (TWI) in the UK and patented in 1991. In 2008 KUKA AG developed the SRS 1000 rotary friction welding machine with a forging force of 1000 tons.
An improved modification is Low Force Friction Welding, a hybrid technology developed by EWI and Manufacturing Technology Inc. (MTI). The process can be applied to both linear and rotary friction welding. KUKA has been operating in 44 countries and has built more than 1200 systems, including for subcontract facilities; however, there are more companies in the world with experience. The Welding Institute (TWI), for example, has more than 50 years of expertise and insight in process development. With the help of more and more companies, friction welding has become popular worldwide with various materials, both in scientific studies and in industrial applications.

Applications
Rotary friction welding is widely implemented across the manufacturing sector and has been used for numerous applications, including: parts of gas turbines such as turbine shafts, turbine discs and compressor drums; automotive parts including steel truck axles and casings, overhead valve engines, hollow motor pistons, passenger car wheel rims, converters for passenger car automatic gears, drive shafts and yoke shafts; turbines for aircraft engines; Monel to steel marine fittings; piston rods; copper - aluminium electrical connections; heat exchangers; cutting tools, for example for lengthening drill bits; drill pipes; reactor pressure vessel nozzles; tubular transition joints combining dissimilar metals (aluminium - titanium and aluminium - stainless steel); potential medical applications; and teaching of students at technical universities.

Connections geometry
Rotary friction welding can join a wide range of part geometries, typically: tube to tube, tube to plate, tube to bar, tube to disk, bar to bar and bar to plate; in addition, a rotating ring can be used to connect long components. The geometry of the contact surface is not always flat; it can, for example, be conical.

Types of materials to be welded
Rotary friction welding makes it possible to weld a variety of materials. Metallic materials, identical or dissimilar, as well as composites, superalloys and non-metallic materials such as thermoplastic polymers can be welded, and even the welding of wood has been investigated. Weldability tables for metallic alloys can be found on the Internet and in books. Sometimes an interlayer is used to connect otherwise incompatible materials.

Division due to drive motor
In direct-drive friction welding (also called continuous drive friction welding) the drive motor and chuck are connected. The drive motor continually drives the chuck during the heating stages. Usually, a clutch is used to disconnect the drive motor from the chuck, and a brake is then used to stop the chuck. In inertia friction welding the drive motor is disengaged, and the workpieces are forced together by a friction welding force. The kinetic energy stored in the rotating flywheel is dissipated as heat at the weld interface as the flywheel speed decreases. Before welding, one of the workpieces is attached to the rotary chuck along with a flywheel of a given weight. The piece is then spun up to a high rate of rotation to store the required energy in the flywheel. Once spinning at the proper speed, the motor is removed and the pieces forced together under pressure. The force is kept on the pieces after the spinning stops to allow the weld to "set". A rough numerical illustration of the energy stored in the flywheel is sketched below.
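As a rough illustration of the inertia variant, the following minimal sketch estimates the kinetic energy stored in the flywheel from its mass, radius and spindle speed, using the standard relation for a rotating body (the same formula appears as equation (13) further below). The flywheel dimensions and speed are illustrative assumptions only, not data for any particular machine.

import math

def flywheel_energy_joules(mass_kg, radius_m, rpm):
    """Kinetic energy of a solid-disc flywheel: E = 0.5 * I * omega^2,
    with I = 0.5 * m * r^2 for a uniform disc (an assumed geometry)."""
    inertia = 0.5 * mass_kg * radius_m ** 2
    omega = 2.0 * math.pi * rpm / 60.0
    return 0.5 * inertia * omega ** 2

# Illustrative flywheel: 120 kg, 0.25 m radius, spun up to 2000 rpm.
energy = flywheel_energy_joules(120.0, 0.25, 2000.0)
print(f"stored energy: {energy / 1000:.1f} kJ")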
Stages of process
Step 1 and 2, friction stage: one of the components is set in rotation and then pressed against the other, stationary, component along the axis of rotation.
Step 3, braking stage: the rotating component is stopped within the braking time.
Step 4, upsetting stage: the welded elements are still forged (pressed together) by the forge pressure.
Step 5: in standard RFW welding (standard parameters), a flash is created. The outside flash can be cut off on the welder.
However, referring to the stages above, modifications of the process exist: the details may depend on the process variant (direct-drive, inertia friction welding, hybrid welding), there are many designs of welding machine, many materials with different properties can be welded in various geometries, and the real-life process does not have to match the ideal settings on the welding machine.

RFW friction work on cylindrical rod workpieces
Friction work creates the weld, and for cylindrical workpieces it can be estimated as follows.
Work done by a moment of force:
(1) W = M·α
Moment of force M, general formula:
(2) M = F·r
The force F here is the friction force T (F = T), so substituting into formula (2):
(3) M = T·r
The friction force T is the pressure force F multiplied by the friction coefficient μ:
(4) T = μ·F
So the moment of force M is:
(5) M = μ·F·r
The angle α through which each point moves about the axis of the rotating cylindrical workpiece is:
(6) α = 2·π·n·t
So the friction work is:
(7) W = 2·π·n·t·μ·F·r [verification needed]
For a value of μ that varies over the friction time:
(8) W = 2·π·n·F·r·∫ μ(t) dt
This requires verification, but from the equation it appears that the turnover and the force (or the pressure on the surface) enter the friction work W linearly; for example, if the pressure increases 2 times then the friction work also increases 2 times, and if the turnover increases 2 times then the friction work also increases 2 times, so referring to the conservation of energy this can heat 2 times as much material to the same temperature, or the temperature rise may roughly double. Pressure acts in the same way over the entire surface, but rotation has more impact away from the axis of rotation because the motion is rotary. Referring to thermal conductivity, the friction time affects the flash size: when a shorter time is used, the friction work is concentrated in a smaller area.
For variable values of μ, n and F over the friction time:
(9) W = 2·π·r·∫ μ(t)·n(t)·F(t) dt
where t [s] is the friction time (while the piece rotates), μ is the coefficient of friction, F [N] is the pressure force, r [m] is the radius of the workpiece, n [1/s] is the turnover per second and W [J] is the friction work. A calculation done in this simplified way is therefore not fully reliable; the real process is more complicated.
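As a numerical illustration of formulas (7) and (9) above, the following minimal sketch evaluates the simplified friction work for a small cylindrical rod. The input numbers are purely illustrative assumptions, and the simplified formula carries the same caveat as above: it treats the full workpiece radius as the friction radius, so the result is only an order-of-magnitude estimate.

import math

def friction_work(mu, force_n, radius_m, rev_per_s, time_s):
    """Simplified friction work for a solid cylindrical workpiece,
    following equation (7) above: W = 2*pi*n*t*mu*F*r."""
    return 2.0 * math.pi * rev_per_s * time_s * mu * force_n * radius_m

def friction_work_variable(mu_of_t, n_of_t, f_of_t, radius_m, time_s, steps=1000):
    """Equation (9): numerical integration when mu, n and F vary with time."""
    dt = time_s / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt
        total += 2.0 * math.pi * radius_m * mu_of_t(t) * n_of_t(t) * f_of_t(t) * dt
    return total

# Illustrative (hypothetical) numbers: 20 mm diameter rod, 1500 rpm,
# 40 kN axial force, mu = 0.3, 4 s of friction time.
w_const = friction_work(mu=0.3, force_n=40e3, radius_m=0.010, rev_per_s=1500 / 60, time_s=4.0)
w_var = friction_work_variable(lambda t: 0.3, lambda t: 25.0, lambda t: 40e3, 0.010, 4.0)
print(f"friction work, constant parameters: {w_const / 1000:.1f} kJ")
print(f"friction work, integrated form:     {w_var / 1000:.1f} kJ")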
An example considering a coefficient of friction that varies with temperature, for a mild steel - Al6061 - alumina joint, is described by authors from Malaysia in the paper "Evaluation of Properties and FEM Model of the Friction Welded Mild Steel-Al6061-Alumina". Based on that work, an instructional (though not step-by-step) simulation video in the Abaqus software has also been created. In the paper it is possible to find the choice of mesh type used in the simulation, as well as some guidance such as the use of the Johnson-Cook material model, the value of the dissipation coefficient and the friction welding conditions. The paper also includes the physical formulas related to rotary friction welding as described by the authors, such as the heat transfer and convection equations in rods and equations related to the deformation processes. The article gives information on the parameters of the authors' research, but it is not a simple step-by-step instruction (nor is the video), and it is worth adding that it is not the only such position in the literature. Its conclusion includes the statement that: "Even though the FE model proposed in this study cannot replace a more accurate analysis, it does provide guidance in weld parameter development and enhances understanding of the friction welding process, thus reducing costly and time consuming experimental approaches."

The coefficient of friction changes with temperature, and there are a number of further factors: internal friction (viscosity, e.g. dynamic viscosity according to Carreau's fluid law), the forge, material properties that vary during welding, and plastic deformation.
Carreau's fluid law: for a generalized Newtonian fluid the viscosity η depends on the shear rate γ̇ by the following equation:
(10) η = η_∞ + (η_0 − η_∞)·[1 + (λ·γ̇)²]^((n−1)/2)
where η_0, η_∞, λ and n are material coefficients:
η_0 = viscosity at zero shear rate (Pa·s)
η_∞ = viscosity at infinite shear rate (Pa·s)
λ = relaxation time (s)
n = power index

Modelling of the frictional heat generated within the RFW process can be realized as a function of the conducted frictional work and its dissipation coefficient; the incremental frictional work of a node i on the contacting surface can be described as a function of its distance from the rotation centre, the current frictional shear stress, the rotational speed and the time increment. The dissipation coefficient β_FR is often set to 0.9, meaning that 90% of the frictional work is dissipated into heat.
(11) dq_FR(i) = β_FR · dW_FR(i) = β_FR · τ_R(i) · ω · r_i · dt, on the contacting surface at node i
β_FR - dissipation coefficient,
W_FR - frictional work,
r_i - distance from the rotation centre,
dt - time increment,
τ_R(i) - current frictional shear stress,
ω - rotational speed.

Friction work can also be calculated from the power used for welding and the friction time (it will not be greater than the friction time multiplied by the power of the welder's motor), referring to the conservation of energy. This calculation looks the simplest:
(12) E = P·t, or for non-constant power E = ∫ P(t) dt
E - energy, P - power, t - power runtime.
However, in this case energy can also be stored in the flywheel, if one is used, depending on the welder construction. The general flywheel energy formula is:
(13) E_k = ½·I·ω²
where E_k is the stored kinetic energy, ω is the angular velocity, and I is the moment of inertia of the flywheel about its axis of symmetry.
Sample calculations not based on computer simulation also exist in the literature, for example related to power input and temperature distribution; they can be found in the 1974 report "Flywheel friction welding research" by K. K. Wang and Wen Lin of Cornell University, who calculated the welding process by hand and, even at that time, analysed the weld structure. In general, however, the calculations can be complicated.

Weld Zone Description
Weld photo gallery
Heat and mechanical affected zones
Friction work is converted into a rise of temperature in the welding zone area, and as a result the weld structure is changed. In a typical rotary friction welding process the rise of temperature at the beginning of the process should be greater away from the axis of rotation, because points far from the axis have a greater linear velocity; during the weld the temperature then disperses according to the thermal conductivity of the welded parts. The sketch below illustrates this radial dependence of the heat generation.
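The following minimal sketch illustrates that radial dependence using formula (11), summed over annular rings of the contact face. The shear stress, speed and radius are illustrative assumptions, and the shear stress is taken as constant over the face, which is a simplification.

import math

# Minimal sketch of equation (11): dq = beta * tau * omega * r * dt,
# evaluated per annular ring of the contact face to show that more
# frictional heat is generated farther from the rotation axis.
# All numbers below are illustrative assumptions, not measured values.
beta = 0.9            # fraction of friction work dissipated as heat
tau = 50e6            # assumed constant frictional shear stress, Pa
rpm = 1500
omega = 2 * math.pi * rpm / 60   # rotational speed, rad/s
outer_radius = 0.010             # workpiece radius, m
rings = 5
dr = outer_radius / rings

for i in range(rings):
    r_mid = (i + 0.5) * dr
    area = 2 * math.pi * r_mid * dr                 # annulus area, m^2
    q_per_s = beta * tau * omega * r_mid * area     # heat generation rate in this ring, W
    print(f"ring centred at r = {r_mid * 1000:.1f} mm: {q_per_s / 1000:.2f} kW")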
"Technically the WCZ and the TMAZ are both "thermo-mechanically affected zones" but due to the vastly different microstructures they possess they are often considered separately. The WCZ experiences significant dynamic recrystallisation (DRX), the TMAZ does not. The material in HAZ is not deformed mechanically but is affected by the heat. The region from one TMAZ/HAZ boundary to the other is often referred to as the "TMAZ thickness" or the plastically affected zone (PAZ). For the remainder of this article this region will be referred to as the PAZ." Zones: WCZ– weld center zone, HAZ – heat affected zone, TMAZ – Thermo-Mechanically Affected Zone, BM – base material, parent material, Flash. Furthermore, in the literature, there is also a subdivided according to the type of grain. Similar terms exist in welding. During typical welding initially, the outer region heats up more, due to the higher linear velocity. Next, the heat spreads, and the material is pushed outside, thus creating an outside flash which can be cut off on the welding machine. Heat flow, heat flux in rods It can create a hypothesis that heat flows in welding time like in a cylindrical rod it makes it possible to calculate a temperature in individual places and times from knowing the issues of heat flow and heat flux in rods for example, temperature can be read by using thermocouples and compare with computer simulation. Weld measuring system To provide knowledge about the process, monitoring systems are often used and this are carried out in several ways which affects the accuracy and the list of measured parameters. The list of measured and calculated parameters can looks like this: axial force and pressure, angular - rotation speed, spindle centre, velocity, vibration, length (burn off rate), temperature. Temperature measuring systems Examples of weld measurements. In the literature, can be found measurements of the thermal weld area with thermocouples and not only the non-contact thermographic method is also used. However, it also depends on the specific case for a very small area of the weld and HAZ there are cans by difficulties in thermal measuring in real time it can be calculated later after friction time there is heat flow. Sample source code for temperature measurement made on arduino, this is far away from the topic, however there are missing full open friction welding codes. Exists the free open source software for simulation (List of finite element software) but there is no welding open codes and detailed instructions to this software. Research, temperature, parameters in the rotary friction welding process Quality requirements of welded joints depend on the form of application, e.g. in the space or fly industry weld errors are not allowed. Science tries to gets good quality welds, also some people have been interested in many years in welding knowledge, so there are many scientific articles describing the methods of joining, for example Bannari Amman Institute of Technology, published in 2019 year a literature review in their paper is possible to find out list of people who are interested in friction welding, however in this list not all off people are mentioned for example there is not mentioned about mti youtube channel also there are not written about low force friction welding additionally, the list of people may change over time. They are performed weld tests which give knowledge about mechanical properties of material in welded zone e.g. hardness tests, and tensile tests are performed. 
Research, temperature, parameters in the rotary friction welding process
Quality requirements for welded joints depend on the application; for example, in the space or aviation industry weld errors are not allowed. Researchers try to obtain good quality welds, and some people have been interested in welding knowledge for many years, so there are many scientific articles describing the joining methods. For example, the Bannari Amman Institute of Technology published a literature review in 2019; in their paper it is possible to find a list of people who are interested in friction welding, although not everyone is mentioned in that list (for example, the MTI YouTube channel is not mentioned and low force friction welding is not covered), and such a list may change over time. Weld tests are performed which give knowledge about the mechanical properties of the material in the welded zone; for example, hardness tests and tensile tests are carried out.

Based on the tensile tests a stress-strain curve is created, which directly gives knowledge about the ultimate tensile strength, breaking strength, maximum elongation and reduction in area; from these measurements the Young's modulus, Poisson's ratio, yield strength and strain-hardening characteristics are derived (a minimal sketch of extracting some of these quantities from test data is given at the end of this section). The articles often contain only data related to tensile tests, such as: yield strength in MPa, ultimate tensile strength in MPa, elongation in %. The relevant SI units are K, kg, N, m and s, and hence Pa; knowledge of this is needed to introduce data and material properties correctly and to avoid errors in simulation programs.
Research articles also often contain information about the chemical composition of the connected components and, naturally, the process parameters, such as: friction pressure (MPa), friction time (s), welding speed (rpm), upset pressure (MPa), upset time (s).
It is also possible to find descriptions in the research literature of mechanical properties, microstructure, corrosion and wear resistance, and even the cytotoxicity of the welded material. One may ask why research connects the topic of cytotoxicity to welding when it is not closely related (cytotoxicity is the quality of being toxic to cells). In this context it can be noted that some toxic metals and metal vapours exist, such as polonium, and that in some cases, when welding at high temperatures, harmful metal vapours are released; protection is then recommended, such as access to fresh air and extraction of the vapours to the outside.
There are several methods to determine the quality of a weld; for example, the weld microstructure is examined by optical microscopy and scanning electron microscopy. The computer finite element method (FEM) is used to predict the shape of the flash and of the interface, not only for rotary friction welding (RFW) but also for friction stir welding (FSW), linear friction welding (LFW) and FRIEX. In addition to the weld testing, the weld heat-affected zones are described; knowledge of the maximum temperatures in the welding process makes it possible to define the area of structural changes. Processes are analysed, e.g. temperature measurements are also carried out for scientific purposes (research materials, journals), using contact thermocouples or sometimes non-contact thermography methods. For example, an ultra-fine grain structure of an alloy or metal obtained by techniques such as severe plastic deformation or powder metallurgy is desirable and should not be changed by high temperature, so a large heat-affected zone is unwanted. Temperature may degrade material properties because dynamic recrystallization will occur, and there may be changes in grain size and phase transformations in the structures of the welded materials; in steel, between austenite, ferrite, pearlite, bainite, cementite and martensite.
Various welding parameters are tested. Setting completely different parameters can produce a different weld; for example, the structural changes will not have the same width. It is possible to obtain a smaller heat-affected zone (HAZ) and plastically affected zone (PAZ), and the width of the weld is then smaller.
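Returning to the tensile tests mentioned above, the following minimal sketch shows how a few of the listed quantities can be read from engineering stress-strain data. The data pairs are invented for illustration and do not come from any welded specimen.

# Minimal sketch: extracting basic tensile quantities from engineering
# stress-strain data points. The pairs below are made-up illustrative values
# (strain [-], stress [MPa]), not measurements from any weld.
data = [(0.000, 0), (0.001, 200), (0.002, 400), (0.004, 520),
        (0.010, 560), (0.050, 610), (0.120, 640), (0.180, 600)]

strains = [d[0] for d in data]
stresses = [d[1] for d in data]

# Young's modulus from the initial (assumed linear) part of the curve, in MPa
E = (stresses[2] - stresses[0]) / (strains[2] - strains[0])

uts = max(stresses)                 # ultimate tensile strength, MPa
elongation = strains[-1] * 100      # strain at the last recorded point, %

print(f"Young's modulus ~ {E / 1000:.0f} GPa")
print(f"ultimate tensile strength ~ {uts} MPa")
print(f"elongation at end of test ~ {elongation:.0f} %")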
The results differ, for example, between welds made for the European Space Agency with a high turnover of ω = 14000 rpm and another example from the Warsaw University of Technology with 12000 rpm and an untypically short friction time of only 60 milliseconds instead of standard parameters; in the latter case an ultra-fine grain alloy was welded, but the welded rod workpiece was only 6 mm in diameter, so it is small-rod friction welding. Other examples close to this, with short friction times of e.g. only 40 ms, also exist in the literature, but likewise only for small diameters. Unfortunately, welding in a very short time carries the risk of welding imperfections such as weld discontinuities. Some welds are made only individually or only in research, for example welds created with specific parameters (welding time below 100 ms), with a particular front surface (for example a conical contact surface), or with materials that are difficult to weld (tungsten to steel); these are not always in serial production. The rotational speeds reported in the research literature for small diameters can exceed the standard ones, even reaching e.g. 25000 rpm. Unfortunately, the diameter of the workpiece can be a limitation on the use of high rotational speeds.
The key points to understand are that, according to the Hall-Petch relation, a fine grain in the welded metal should give better strength (a numerical illustration of this relation is sketched below), and that Percy Williams Bridgman won the Nobel Prize in Physics in 1946 with reference to achievements related to one technique for obtaining such material, High Pressure Torsion (HPT). However, High Pressure Torsion yields only material of thin-film thickness.
There is also research into the introduction of interlayers. Even though dissimilar material joining is often more difficult, the introduction of, for example, a nickel interlayer by an experimental electrodeposition technique to increase the connection quality has been investigated by the Indian Institute of Metals; in that case, however, the nickel interlayer thickness was 70 μm (micrometres) and only small rods of 12 mm diameter were welded. This nickel layer is only on top of the welded parts. In addition, although the topic is not closely related to welding, a nickel layer may affect corrosion resistance.
Some scientists describe materials research. The group of known materials is large and includes nickel-based superalloys such as Inconel, ultra-fine grain materials such as ultra-fine grain aluminium, and low-carbon steels, e.g. Ultra Low Carbon Bainitic Steel (ULCBS). Friction welding is used to connect many materials, including superalloys such as nickel-based Inconel; scientists describe connecting various materials, articles about this can be found on the internet, and part of the research relates to joining superalloys or materials with improved properties. Nickel-based superalloys exhibit excellent high-temperature strength, high-temperature corrosion and oxidation resistance, and creep resistance. However, it is worth adding that nickel is not the most common and cheapest material (see the prices of the chemical elements).
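As a rough numerical illustration of the Hall-Petch relation mentioned above, the following minimal sketch evaluates the yield strength for a few grain sizes. The constants are assumptions of the order reported for low-carbon steels, not values fitted to any welded material.

import math

def hall_petch(sigma0_mpa, k_mpa_sqrt_m, grain_size_m):
    """Hall-Petch relation: sigma_y = sigma_0 + k / sqrt(d)."""
    return sigma0_mpa + k_mpa_sqrt_m / math.sqrt(grain_size_m)

# Illustrative constants of the order used for low-carbon steels
# (assumptions for demonstration only).
sigma0 = 70.0          # friction stress, MPa
k = 0.74               # Hall-Petch coefficient, MPa*m^0.5

for d_um in (50, 10, 1, 0.3):          # grain sizes in micrometres
    d = d_um * 1e-6
    print(f"grain size {d_um:>5} um -> yield strength ~ {hall_petch(sigma0, k, d):.0f} MPa")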
Parameters
Turnover: typically the turnover is selected depending on the type of material and the dimensions of the welded parts and takes different values, 400 - 1450 rpm, sometimes up to a maximum of 10000 rpm; untypically, in the research literature, turnover reaches up to 25000 rpm.
Friction time: typically 1 to several dozen seconds; untypically, in the research literature, the friction time can be in the tens of milliseconds, although when the time is very short and the parameters are not typical the process can require a lot of preliminary preparation and testing to reach a positive result.
Forge time: up to a few seconds.
However, the parameters will differ because elements of different sizes can be welded; production ranges, for example, from the smallest components with a diameter of 3 mm to turbine components with a diameter in excess of 400 mm.
By combining methods of connecting long elements, future research might perhaps study the friction welding of rails, for example for the high-speed railway industry, using preheated low force linear friction welding or a modified linear friction welding (LFW) method with a vibrating insert (analogous to the rotating insert in the FRIEX method), if such machines are developed; it is worth adding that most attention is directed to the safety of travellers, and user safety should come first. Preliminary research involving similar welds and geometry has shown improved tensile strength and increased performance in fatigue tests.

Controversies in research
As of August 2022, there are no step-by-step reviewed instructions on how to simulate the temperature of welded components, and there are no shared source files for programs in which a simulation of welding is possible. On the other hand, some of the available knowledge is difficult to follow; for example, the article by Maien Mohamed Osman Hamed, "Numerical simulation of friction welding processes: An arbitrary Lagrangian-Eulerian approach", shared on Google Scholar as of August 2022, is difficult to understand. There are also problems with open article reviews, or reviews do not exist at all.
Other concerns are the correct use of grants and the repetition of knowledge: the structure of many articles is similar, and sometimes essentially only the next material is welded, so there is nothing new, yet some of this research is funded by grants. It is only friction welding, generally nothing difficult; on the other hand there are many complicated descriptions, but the practical result of these articles is missing, while the expectation is to create something new, unknown so far.
Further concerns are the correct use of grants, the timing of articles and priority (who was first), and announcements by a university of the desire to create something new, an innovative device, which is then not done. For Poland, Google Scholar finds 246 articles in response to the phrase "friction welding" as of August 2022; this is only part of all available research, and for some of it financial grants were given. For example, in 2016 the Institute of Electronic Materials Technology in Poland published an article in Polish about welding Al/Al2O3 composites; two years later, in 2018, the Warsaw University of Technology published an article in Polish about friction welding of ultrafine-grained 316L steel. Although the materials were different, the process parameters suggest that the welding machine was the same, so in this case the Institute of Electronic Materials Technology published the data first. In summary, in 2018 only the new material, steel, was welded and tested; the machine was not new, yet a grant was obtained for this (843 920 PLN, roughly US$177,476). It was not written into the articles that the two institutes had a machine and that studies with short friction times had already been carried out.
Students were informed about the university's intention to build a new, innovative friction welder, but as of August 2022 there is no information about this; there are new research articles, but the device is still the old one (information valid as of 2019).

Low Force Friction Welding
An improved modification of standard friction welding is Low Force Friction Welding, a hybrid technology developed by EWI and Manufacturing Technology Inc. (MTI), which "uses an external energy source to raise the interface temperature of the two parts being joined, thereby reducing the process forces required to make a solid-state weld compared to traditional friction welding". The process applies to both linear and rotary friction welding. According to the information from the Manufacturing Technology blog and website, the technology is promising.
Low force friction welding advantages: little or no flash; joining of components previously limited by friction welding, for example those with a high melting point, such as refractory metals like molybdenum, tantalum and tungsten, or where there is a difference in material properties.
The manufacturer also lists some advantages which are not fully explained and which are not true for every case: reduced machine footprint (but the machine must have additional heating elements); reduced weld cycle time (but preheating also takes time); higher orientation precision; part repeatability (but this may also be achieved by some traditional welders if the welding is repeatable). Moreover, as of 2021 the number of scientific articles about low force friction welding, for example on Google Scholar, is smaller than for the standard method, in which an external energy source to raise the interface temperature is not used.

Construction of the welding machine
Depending on the design, a standard welding machine may include the following systems:
Control system; motor or motors (e.g. in a direct-drive welder); pneumatic or hydraulic pressure system; handle; non-rotating vice; clutch (in a direct-drive friction welder); spindle; flywheel (in an inertia friction welder); housing; measuring systems.
Producers present their own solutions, and welding machines can include: dimensional measurement and control systems (active travel control, burn-off rate measurement); automation solutions; defined-angle positioning; component lifter; automatic door operation; weld data export; readiness for industrial solutions; automatic temperature control of the headstock; monitoring of the cooling unit; servo motor control; solutions for a clean environment with no arcs, sparks, smoke or flames; an ergonomic workspace and attractive design; no requirement for special foundations or power supplies; process control and documentation systems (all process data documented numerically and graphically, program management); calculated parameters and smart-machine HMI touchscreen panels; a barcode scanner to generate a database of friction-welded elements (with optional QR or barcode tagging of manufactured elements, for example on an additional machine such as a laser barcode marker, where necessary and possible); flash cut-off devices on the welder, flash removal and facing, chip conveyor; a completely integrated solution within a specific production workflow using state-of-the-art 3D process simulation; service assistance (remote service, alarm conditions); certificates; vapour extractor; advanced measurement systems; and innovative solutions, for example the hybrid Low Force Friction Welding technology and the systems associated with it.
However, there is not just one manufacturer on the market and not just one welding machine model; in addition, the same materials and diameters are not always welded, and a good presentation, technology description or design may or may not point to the best solution. Advertising presentations related to welding also exist.

Workpiece handles
The type of chuck depends on the technology used; its construction may sometimes be similar to that of a lathe or milling machine.

Safety during friction welding
Before starting work, at least short and basic safety regulations should be known:
Compliance with occupational safety and health regulations.
Following the manufacturer's recommendations.
Setting up the machine in a safe place: not blocking the entrance door, keeping electric wires away from water, allowing free movement of the users.
Recommended safety systems, for example an emergency stop button and the possibility of stopping the machine quickly.
Protection against touching an object that is too hot; it is not always visible that an object is hot, which also depends on the material being welded, for example when welding copper to aluminium.
Protection when lifting massive components.
Caution with hot and sharp things, for example hot welded components or chips if they are cut off on the welding machine.
Fresh air; for example, do not smoke on the production floor near the machine, and in some cases use a vapour extractor on the welder venting to the outside.
Covering moving components.
The description of the safety rules depends on the joining method and situation: access to fresh air, electrical grounding, wearing protective clothing and protecting the eyes are required. Personal protective equipment is recommended, but in some cases it may be uncomfortable and sometimes unnecessary, so protection depends on the situation. The human factor also influences safety.
Staff negligence can include: theft, for example of copper grounding, because it can be sold for scrap; neglect of medical examinations, or examinations performed carelessly even though they are paid for, because the focus is on earning money rather than on staff health; lack of cleaning, for example because the shift time is over; accidents on the way to work; alcohol, or an employee simply having a bad day; spinal strains, e.g. from several hours of quality control of manufactured components in a forced body position, because for management workforce productivity, quality and earnings matter more than staff health; outsourcing, i.e. transferring responsibility to another company; and negligence by the management itself, which sometimes wants only to make money and looks at production rather than at the employees.

Other techniques of friction welding
Forge welding, friction stir welding (FSW), friction stir spot welding (FSSW), linear friction welding (LFW), research on friction welding of pipeline girth welds (FRIEX), friction hydro pillar overlap processing (FHPPOW), friction hydro pillar processing (FHHP).

Terms and definitions, name shortcuts
Welding vs joining: the definitions depend on the author. "Welding" in the Cambridge English dictionary means "the activity of joining metal parts together", and in the Collins dictionary "the activity of uniting metal or plastic by softening with heat and hammering, or by fusion", which means that welding is related to connecting. "Join" or "joining" has a similar meaning to welding and can mean the same; in an English dictionary it means "to connect or fasten things together", but joining also has other meanings, for example "If roads or rivers join, they meet at a particular point". Joining, as opposed to welding, is a general term, and there are several methods available for joining metals, including riveting, soldering, adhesives, brazing, coupling, fastening and press fits. Welding is only one type of joining process.
Solid-state weld - a joint made below the melting point.
Welder - a welding machine, but the word also means a person who welds metal.
Weld - the place of connection where the materials are mixed.
Weldability - a measure of the ease of making a weld without errors.
Interlayer - an intermediate component or material.
To quote ISO (the International Organization for Standardization, unfortunately the all ISO text is not free and open shared) - ISO 15620:2019(en) Welding "axial force - force in axial direction between components to be welded, burn-off length - loss of length during the friction phase, burn-off rate - rate of shortening of the components during the friction welding process, component - single item before welding, component induced braking - reduction in rotational speed resulting from friction between the interfaces, external braking - braking located externally reducing the rotational speed, faying surface - surface of one component that is to be in contact with a surface of another component to form a joint, forge force - force applied normal to the faying surfaces at the time when relative movement between the components is ceasing or has ceased, forge burn-off length - amount by which the overall length of the components is reduced during the application of the forge force, forge phase - interval time in the friction welding cycle between the start and finish of application of the forge force, forge pressure - pressure (force per unit area) on the faying surfaces resulting from the axial forge force, forge time - time for which the forge force is applied to the components, friction force - force applied perpendicularly to the faying surfaces during the time that there is relative movement between the components, friction phase - interval time in the friction welding cycle in which the heat necessary for making a weld is generated by relative motion and the friction forces between the components i.e. from contact of components to the start of deceleration, friction pressure - pressure (force per unit area) on the faying surfaces resulting from the axial friction force, friction time - time during which relative movement between the components takes place at rotational speed and under application of the friction forces, interface - contact area developed between the faying surfaces after completion of the welding operation, rotational speed - number of revolutions per minute of rotating component, stick-out - distance a component sticks out from the fixture, or chuck in the direction of the mating component, deceleration phase - interval in the friction welding cycle in which the relative motion of the components is decelerated to zero, deceleration time - time required by the moving component to decelerate from friction speed to zero speed, total length loss (upset) - loss of length that occurs as a result of friction welding, i.e. the sum of the burn-off length and the forge burn-off length, total weld time - time elapsed between component contact and end of forging phase, welding cycle - succession of operations carried out by the machine to make a weldment and return to the initial position, excluding component - handling operations, weldment - two or more components joined by welding." 
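As a small illustration of how some of the quantities defined above fit together (in particular, that the total length loss, or upset, is the sum of the burn-off length and the forge burn-off length), the following minimal sketch groups them into a simple record. The field names, units and example values are illustrative assumptions and are not taken from the standard.

from dataclasses import dataclass

@dataclass
class WeldCycleRecord:
    """Minimal record grouping a few of the quantities defined above.
    Field names and units are illustrative, not taken from ISO 15620."""
    rotational_speed_rpm: float
    friction_force_n: float
    friction_time_s: float
    forge_force_n: float
    forge_time_s: float
    burn_off_length_mm: float
    forge_burn_off_length_mm: float

    def total_length_loss_mm(self) -> float:
        # "Total length loss (upset)" is defined above as the sum of the
        # burn-off length and the forge burn-off length.
        return self.burn_off_length_mm + self.forge_burn_off_length_mm

# Hypothetical example values for a small rod weld.
record = WeldCycleRecord(1500, 20e3, 4.0, 40e3, 2.0, 3.5, 1.5)
print(f"total length loss (upset): {record.total_length_loss_mm():.1f} mm")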
Further abbreviations: RFW - Rotary friction welding, LFW - Linear friction welding, FSSW - Friction stir spot welding, FRIEX - Research on friction welding of pipeline girth welds, FHPPOW - Friction hydro pillar overlap processing, FHHP - Friction hydro pillar processing, LFFW - Low Force Friction Welding, FSW - Friction stir welding, BM - Base material, HAZ - Heat affected zone, PAZ - Plastically affected zone, DRX - Dynamic recrystallization, TMAZ - Thermo-mechanically affected zone, UFG - Ultra fine grain, SPD - Severe plastic deformation, HPT - High pressure torsion, FEM - Finite element method, SEM - Scanning electron microscopy, ADC - Analog to digital converter.
See also
Welding, Friction, Friction welding, Friction stir welding, Temperature, Heat-affected zone, Dynamic recrystallization, Grain boundary strengthening, Severe plastic deformation
Curiosities
Friction welding (μFSW) has also been performed using a CNC machine, which does not mean that it is safe or recommended for a milling machine. Sometimes it is possible to perform welding on a lathe. Scientists have even described measurements of acoustic emissions during joining.
References
External links
Rotary Friction Welding at Google Scholar - a scientific search engine that also leads to many articles about rotary friction welding. Rotary Friction Welding at TWI and search results at TWI
Welding
Rotary friction welding
[ "Engineering" ]
7,758
[ "Welding", "Mechanical engineering" ]
66,142,292
https://en.wikipedia.org/wiki/GI%20Monocerotis
GI Monocerotis, also known as Nova Monocerotis 1918, was a nova that erupted in the constellation Monoceros during 1918. It was discovered by Max Wolf on a photographic plate taken at the Heidelberg Observatory on 4 February 1918. At the time of its discovery, it had a photographic magnitude of 8.5, and had already passed its peak brightness. A search of plates taken at the Harvard College Observatory showed that it had a photographic magnitude of 5.4 on 1 January 1918, so it would have been visible to the naked eye around that time. By March 1918 it had dropped to ninth or tenth magnitude. By November 1920 it was a little fainter than 15th magnitude. A single pre-eruption photographic detection of GI Monocerotis exists, showing its magnitude was 15.1 before the nova event. GI Monocerotis dropped by 3 magnitudes from its peak in about 23 days, making it a "fast nova". Long after the nova eruption, six small outbursts with a mean amplitude of 0.9 magnitudes were detected when the star was monitored from the year 1991 through 2000. Radio emission from the nova has been detected at the JVLA in the C (5 GHz), X (8 GHz) and K (23 GHz) bands. All novae are binary stars, with a "donor" star orbiting a white dwarf. The two stars are so close together that matter is transferred from the donor star to the white dwarf. Worpel et al. report that the orbital period for the binary is probably 4.33 hours, and there is a 48.6 minute period which may represent the rotation period for the white dwarf. Their X-ray observations indicate that GI Mon is a non-magnetic cataclysmic variable star, meaning that the material lost from the donor star forms an accretion disk around the white dwarf, rather than flowing directly to the surface of the white dwarf. It is estimated that the donor star is transferring of material to the accretion disk each year. A 1995 search for an optically resolved nova remnant using the Anglo-Australian Telescope was unsuccessful. References Novae Monoceros 1918 in science Monocerotis, GI 058756
GI Monocerotis
[ "Astronomy" ]
453
[ "Novae", "Astronomical events", "Monoceros", "Constellations" ]
66,142,757
https://en.wikipedia.org/wiki/BLC1
BLC1 (Breakthrough Listen Candidate 1) was a candidate SETI radio signal detected and observed during April and May 2019, and first reported on 18 December 2020, spatially coincident with the direction of the Solar System's closest star, Proxima Centauri. Signal The apparent shift in its frequency, consistent with the Doppler effect, was suggested to be inconsistent with what would be caused by the movement of Proxima b, a planet of Proxima Centauri. The Doppler shift in the signal was the opposite of what would be expected from the Earth's spin, in that the signal was observed to increase in frequency rather than decrease. Although the signal was detected by Parkes Radio Telescope during observations of Proxima Centauri, due to the beam angle of Parkes Radio telescope, the signal would be more accurately described as having come from within a circle roughly 16 arcminutes (approximately 1/4 of a degree, half the angular width of Earth's moon) in angular diameter, containing Proxima Centauri, so the signal could have originated elsewhere in the Alpha Centauri system. The signal had a frequency of 982.002 MHz. The radio signal was detected during 30 hours of observations conducted by Breakthrough Listen through the Parkes Observatory in Australia in April and May 2019. As of December 2020, follow-up observations had failed to detect the signal again, a step necessary to confirm that the signal was a technosignature. Origin A paper by other astronomers released 10 days before the news report about BLC1 reports the detection of "a bright, long-duration optical flare, accompanied by a series of intense, coherent radio bursts" from Proxima Centauri also in April and May 2019. Their finding has not been put in direct relation to the BLC1 signal by scientists or media outlets as of January 2021 but implies that planets around Proxima Centauri and other red dwarfs are uninhabitable for humans and other currently known organisms. In February 2021, a new study proposed that, as the probability of a radio-transmitting civilization emerging on the Sun's closest stellar neighbour was calculated to be approximately 10−8, the Copernican principle made BLC1 very unlikely to be a technological radio signal from the Alpha Centauri System. On 25 October 2021, researchers published two studies concluding that the signal is unlikely to be a technosignature due to its similarity to previously detected terrestrial interference. See also List of interstellar radio messages Wow! signal References External links Unsolved problems in astronomy Proxima Centauri Search for extraterrestrial intelligence 2019 in Australia 2020 in science
BLC1
[ "Physics", "Astronomy" ]
552
[ "Concepts in astronomy", "Unsolved problems in astronomy", "Astronomical controversies" ]
53,516,779
https://en.wikipedia.org/wiki/Multiple%20Michael/aldol%20reaction
Multiple Michael/aldol reaction (or domino Michael/aldol reaction) is a consecutive series of reactions composed of either Michael addition reactions or aldol reactions. More than two steps of reaction are usually involved. This reaction has been used for synthesis of large macrocyclic or polycyclic ring structures. Gary Posner and co-workers were the first to report using multiple Michael/aldol reactions to construct macrolide structures. Their method utilized a Michael-Michael-Michael-ring closure (MIMI-MIRC) or a Michael-Michael-aldol-ring closure annulation sequences to assemble acrylates and/or aldehydes together to form substituted 9-, 10-, and 11-membered macrolide structures. Besides synthesis of complex ring structures, multiple Michael/aldol reaction can also be used for rapid production of complex compound libraries. Aldolases have been used to mediate multiple aldol reactions. Chi-Huey Wong and co-workers had shown that 2-deoxyribose-5-phosphate aldolase and fructose-1, 6-diphosphate aldolase could be used together in a one-pot reaction to connect two aldehydes and one ketone together through sequential aldol reactions. This reaction could be used to generate a variety of carbohydrate derivatives. See also Robinson annulation, a classic reaction involving a Michael addition followed by an aldol condensation References Organic reactions
Multiple Michael/aldol reaction
[ "Chemistry" ]
299
[ "Organic reactions" ]
53,517,673
https://en.wikipedia.org/wiki/Transcriptional%20amplification
In genetics, transcriptional amplification is the process in which the total amount of messenger RNA (mRNA) molecules from expressed genes is increased during disease, development, or in response to stimuli. In eukaryotic cells, the transcribing activity of RNA Polymerase II results in mRNA production. Transcriptional amplification is specifically defined as the increase in per-cell abundance of this set of expressed mRNAs. Transcriptional amplification is caused by changes in the amount or activity of transcription-regulating proteins. Mechanisms of transcriptional amplification Gene expression is regulated by numerous types of proteins that directly or indirectly influence transcription by RNA Polymerase II. As opposed to transcriptional activators or repressors that selectively activate or repress specific genes, amplifiers of transcription act globally on expressed genes. Several known regulators of transcriptional amplification have been characterized including the oncogene Myc, the Rett syndrome protein MECP2, and the BET bromodomain protein BRD4. In particular, the Myc protein amplifies transcription by binding to promoters and enhancers of active genes where it directly recruits the transcription elongation factor P-TEFb. Furthermore, the BRD4 protein is a regulator of Myc activity. Identifying and measuring transcriptional amplification Commonly used gene expression experiments interrogate the expression of one gene (qPCR) or many genes (microarray, RNA-Seq). These techniques generally measure relative mRNA levels and employ normalization methods that assume only a small number of genes show altered expression. In contrast, single cell or cell-count normalized absolute measurements of mRNA abundance are required to reveal transcriptional amplification. Additionally, global measurements of mRNA or total mRNA per cell can also uncover evidence for transcriptional amplification. Cells in which transcription has been amplified have additional hallmarks suggesting that amplification has occurred. Cells with increased mRNA levels may be larger, consistent with an increased abundance of gene products. This increase in the amount of gene products may result in a decreased doubling time. Role in disease Transcriptional amplification has been implicated in cancer, Rett syndrome, heart disease, Down syndrome, and cellular aging. In cancer, Myc-driven transcriptional amplification is posited to help tumor cells overcome rate-limiting constraints in growth and proliferation. Drugs that target the transcription or mRNA processing machinery are known to be particularly effective against Myc-driven tumor models, suggesting that dampening of transcriptional amplification can have anti-tumor effects. Similarly, small molecules targeting the BET bromodomain protein BRD4, which is up-regulated during heart failure, can block cardiac hypertrophy in mouse models. In Rett syndrome, which is caused by loss of function of the transcriptional regulator MeCP2, MeCP2 was shown to specifically amplify transcription in neurons and not neuronal precursors. Restoration of MeCP2 reverses disease symptoms associated with Rett syndrome References Genetics
Transcriptional amplification
[ "Biology" ]
615
[ "Genetics" ]
77,739,515
https://en.wikipedia.org/wiki/Engineering%20failures%20in%20the%20U.S.
Engineering failures in the United States can be costly, disruptive, and deadly, with the largest incidents prompting changes to engineering practice. Examples Infrastructure Francis Scott Key bridge collapse (2024) The Francis Scott Key Bridge (informally, Key Bridge or Beltway Bridge) collapsed on March 26, 2024 at 1:28 a.m., after a container ship struck one of its piers. Six members of a maintenance crew were killed. Hubert H. Humphrey Metrodome collapse (2010) Five times in the stadium's history, heavy snow or other weather conditions significantly damaged the roof. At about 5 a.m. on Sunday morning, December 12, 2010, the roof of Minneapolis's Hubert H. Humphrey Metrodome tore under the weight of 17 inches of snow. The Metrodome had a roof of fiberglass fabric inflated by the stadium's air pressure; a weekend blizzard caused the roof to sag and tear, dumping a large volume of snow onto the field. No one was injured. I-35W Mississippi River bridge collapse (2007) On August 1, 2007, at 6:05 p.m., the central span of the bridge gave way, sending the occupants of 111 vehicles to the river or its banks, killing 13 and injuring 145. The NTSB cited a design flaw as the likely cause of the collapse, noting that an excessively thin gusset plate ripped along a line of rivets. Levee failures in New Orleans (2005) Levees and floodwalls protecting New Orleans, Louisiana, and its suburbs failed in 50 locations on August 29, 2005, following the passage of Hurricane Katrina, killing 1,392 people. Four major investigations all concurred that the primary cause of the flooding was inadequate design and construction by the U.S. Army Corps of Engineers. Cypress Freeway collapse (1989) During the 1989 Loma Prieta earthquake, the collapse of the upper tier of the Cypress Freeway in Oakland, California, onto the lower tier caused 42 of the 63 total fatalities. The design was unable to survive the earthquake because the upper portions of the exterior columns were not tied by reinforcing to the lower columns, and the concrete columns were not sufficiently reinforced with steel ties to prevent bursting. Hyatt Regency Hotel walkway collapse (1981) On July 17, 1981, two overhead walkways loaded with partygoers at the Hyatt Regency Hotel in Kansas City, Missouri, collapsed. The concrete and glass platforms fell onto a tea dance in the lobby, killing 114 and injuring 216. Investigations concluded the walkway would have failed under one-third the weight it held that night because of an inadequate support connection derived from a revised detail. Sunshine Skyway Bridge collapse (1980) On the morning of May 9, 1980, the freighter MV Summit Venture collided with a support pier near the center of the bridge during a sudden storm, resulting in the catastrophic failure of the southbound roadway and the deaths of 35 people when several vehicles, including a Greyhound bus, plunged into Tampa Bay. Tacoma Narrows Bridge collapse (1940) The first Tacoma Narrows Bridge was a suspension bridge in Washington that spanned the Tacoma Narrows strait of Puget Sound. It dramatically collapsed on November 7, 1940. The proximate cause was moderate winds which produced aeroelastic flutter that was self-exciting and unbounded. For any constant sustained wind speed above about 35 mph, the amplitude of the (torsional) flutter oscillation would continuously increase.
New London School natural gas explosion (1937) The New London School explosion occurred on March 18, 1937, when a natural gas leak caused an explosion and destroyed the London School in New London, Texas, United States killing more than 300 students and teachers. Experts from the United States Bureau of Mines concluded that the connection to the cheap 'residue gas' line was faulty and allowed odorless and colorless gas to leak into the school, and because there was no odor, the leak was unnoticed for quite some time. St. Francis Dam collapse (1928) The St. Francis Dam was a concrete gravity dam located in San Francisquito Canyon in Los Angeles County, California, built from 1924 to 1926 to serve Los Angeles's growing water needs. It failed in 1928 due to a defective foundation design, triggering a flood that claimed the lives of at least 431 people. Knickerbocker Theatre roof collapse (1922) The theater's roof collapsed on January 28, 1922, under the weight of snow from a two-day blizzard that was later dubbed the Knickerbocker storm and killed 98 patrons and injured 133. The investigations concluded that the collapse was most likely the result of poor design, blaming the failure on the support for one of the arch girders that supported the roof, which had shifted, allowing the girder to slip off of one of the support pillars. South Fork Dam rupture (1889) The Johnstown Flood occurred on May 31, 1889, when the South Fork Dam located on the Little Conemaugh River upstream of the town of Johnstown, Pennsylvania, failed after days of heavy rainfall killing at least 2,209 people. A 2016 hydraulic analysis confirmed that changes made to the dam severely reduced its ability to withstand major storms. Ashtabula River Bridge collapse (1876) The Ashtabula River railroad disaster occurred December 29, 1876 when a bridge over the Ashtabula River near Ashtabula, Ohio failed as a Lake Shore and Michigan Southern Railway train passed over it killing at least 92 people. Modern analyses blame failure of an angle block lug, thrust stress and low temperatures. Pemberton Mill building collapse (1860) On January 10, 1860, at around 4:30 PM, a section of the Pemberton Mill textiles factory building suddenly collapsed, trapping several hundred workers underneath the rubble and killing up to 145 workers. Investigators attributed the disaster to substandard construction that was then drastically overloaded with second-floor equipment. Aeronautics Space Shuttle Columbia explosion (2003) The Space Shuttle Columbia disaster occurred on February 1, 2003, during the final leg of the 113th flight of the Space Shuttle program. While reentering Earth's atmosphere over Louisiana and Texas the shuttle unexpectedly disintegrated, resulting in the deaths of all seven astronauts on board. The cause was damage to thermal shielding tiles from impact with a falling piece of foam insulation from an external tank during the January 16 launch. Space Shuttle Challenger explosion (1986) The Space Shuttle Challenger disaster occurred on January 28, 1986, when the NASA Space Shuttle orbiter Challenger broke apart 73 seconds into its flight, leading to the deaths of its seven crew members. Disintegration of the vehicle began after an O-ring seal in its right solid rocket booster (SRB) failed at liftoff. Apollo 13 (1970) Apollo 13 was the seventh crewed mission in the Apollo space program and the third meant to land on the Moon. 
The craft was launched from Kennedy Space Center on April 11, 1970, but the lunar landing was aborted after an oxygen tank in the service module (SM) ruptured two days into the mission, disabling its electrical and life-support system. The crew, supported by backup systems on the lunar module (LM), instead looped around the Moon in a circumlunar trajectory and returned safely to Earth on April 17. See also Engineering disasters Industrial disasters List of maritime disasters List of spaceflight-related accidents and incidents List of building and structure collapses Nuclear and radiation accidents and incidents Structural integrity and failure References US Man-made disasters in the United States
Engineering failures in the U.S.
[ "Technology", "Engineering" ]
1,534
[ "Systems engineering", "Reliability engineering", "Technological failures", "Engineering failures", "Civil engineering" ]
77,750,654
https://en.wikipedia.org/wiki/National%20Institute%20of%20Clean-and-Low-Carbon%20Energy
The National Institute of Clean-and-Low-Carbon Energy (NICE) is a leading clean and renewable energy research institute located in China and affiliated with China Energy. Established in 2009 and headquartered in the Future Science Park in Changping, near Beijing, NICE also has R&D centers in Germany and California. NICE aims to drive innovation and collaboration in clean energy research, contributing to a sustainable future. It is home to the State Key Laboratory of Water Resource Protection and Utilization in Coal Mining. With a team of about 500 researchers, the institute focuses on a range of areas, including carbon emission reduction, carbon neutrality, clean energy, the coal chemical industry, hydrogen energy, energy storage technology, energy networks, water treatment, environmental protection, the global carbon cycle, smart energy, and energy-related applications of artificial intelligence. The advisory board features internationally recognized scientists such as Norman N. Li, Robin John Batterham, Robert Grubbs, Ke-Chang Xie, Uma Chowdhry, and Alexis T. Bell. NICE collaborates closely with partner universities and institutions, including Tsinghua University, Sichuan University, China University of Petroleum, Tianjin University, Zhejiang University, Tongji University, Dalian University of Technology, China University of Mining and Technology, Eindhoven University of Technology, University of Pittsburgh, GE, Pacific Northwest National Laboratory, and Jacobs Consultancy, among others. Since its inception, NICE has undertaken 68 national-level research projects in China, published 67 national industry standards, and received 61 awards from national, provincial, and industry associations. R&D Centres Beijing R&D Centre The Beijing R&D Centre, which also serves as NICE's headquarters, is located in Changping (about 30 kilometers from central Beijing), with a campus covering 35 acres on the northern shore of the Wenyu River (and 53 additional adjacent acres shared with the Shenhua Management School, which is affiliated with the same China Energy group). This R&D centre specializes in research related to the global carbon cycle, carbon emissions reduction, carbon neutrality, climate change, hydrogen energy, environmental protection, new energy storage technologies, advanced materials, water treatment, coal catalysts, deep earth geology, and energy intelligence, which encompasses applied artificial intelligence and data science. European R&D Centre The European R&D Centre, located in Berlin, Germany, concentrates on renewable energy, innovative electric power systems, new chemical materials, carbon reduction technologies, and environmental solutions. American R&D Centre The American R&D Centre is based in Silicon Valley, California. This centre focuses on shale gas catalysts, energy networks, carbon management, hydrogen energy, and additional related fields. Notable labs, centers, and programmes Postdoctoral programme host, China National Postdoctoral Council State Key Laboratory of Water Resource Protection and Utilization in Coal Mining Industrial Company Quality Management Model Technology Hub of National Energy Clean Coal Conversion and Utilization, China National Energy Administration Beijing Nanostructured Thin Film Solar Cell Engineering Technology Research Center Beijing Engineering Technology Research Center Academic journal The institute operates the open-access academic journal Clean Energy.
References Research institutes established in 2009 Research institutes in China 2009 establishments in China
National Institute of Clean-and-Low-Carbon Energy
[ "Engineering" ]
630
[ "Geoengineering", "Carbon capture and storage" ]
61,148,504
https://en.wikipedia.org/wiki/Limiting%20absorption%20principle
In mathematics, the limiting absorption principle (LAP) is a concept from operator theory and scattering theory that consists of choosing the "correct" resolvent of a linear operator at the essential spectrum based on the behavior of the resolvent near the essential spectrum. The term is often used to indicate that the resolvent, when considered not in the original space (which is usually the space $L^2$), but in certain weighted spaces (usually $L^2_s$, see below), has a limit as the spectral parameter approaches the essential spectrum. This concept developed from the idea of introducing a complex parameter into the Helmholtz equation for selecting a particular solution. This idea is credited to Vladimir Ignatowski, who was considering the propagation and absorption of electromagnetic waves in a wire. It is closely related to the Sommerfeld radiation condition and the limiting amplitude principle (1948). The terminology – both the limiting absorption principle and the limiting amplitude principle – was introduced by Aleksei Sveshnikov. Formulation To find which solution to the Helmholtz equation with nonzero right-hand side, $\Delta u(x) + k^2 u(x) = -F(x)$, with some fixed $k > 0$, corresponds to the outgoing waves, one considers the limit $u = \lim_{\varepsilon \to 0^+} \left(-\Delta - (k^2 + i\varepsilon)\right)^{-1} F$. The relation to absorption can be traced to the expression for the electric field used by Ignatowsky: absorption corresponds to a nonzero conductivity $\sigma$ of the medium, which gives the coefficient in the corresponding Helmholtz equation (or reduced wave equation) a nonzero imaginary part, so that the spectral parameter no longer belongs to the spectrum of the operator. In this setting, $\mu$ is the magnetic permeability, $\sigma$ is the electric conductivity, $\epsilon$ is the dielectric constant, and $c$ is the speed of light in vacuum. Example and relation to the limiting amplitude principle One can consider the Laplace operator in one dimension, $A = -\frac{d^2}{dx^2}$, which is an unbounded operator acting in $L^2(\mathbb{R})$ and defined on the domain $D(A) = H^2(\mathbb{R})$, the Sobolev space. Let us describe its resolvent, $R(z) = (A - z)^{-1}$. Given the equation $\left(-\frac{d^2}{dx^2} - z\right) u = f$, then, for the spectral parameter $z$ from the resolvent set $\mathbb{C} \setminus [0, +\infty)$, the solution is given by $u = G_z * f$, where $G_z * f$ is the convolution of $f$ with the fundamental solution $G_z$: $(G_z * f)(x) = \int_{\mathbb{R}} G_z(x - y) f(y)\, dy$, where the fundamental solution is given by $G_z(x) = \frac{e^{-\sqrt{-z}\,|x|}}{2\sqrt{-z}}$. To obtain an operator bounded in $L^2(\mathbb{R})$, one needs to use the branch of the square root which has positive real part (so that $e^{-\sqrt{-z}\,|x|}$ decays for large absolute value of $x$), so that the convolution of $G_z$ with $f$ makes sense. One can also consider the limit of the fundamental solution $G_z(x)$ as $z$ approaches the spectrum of $-\frac{d^2}{dx^2}$, given by $[0, +\infty)$. Assume that $z$ approaches $z_0$, with some $z_0 > 0$. Depending on whether $z$ approaches $z_0$ from above ($\operatorname{Im} z > 0$) or from below ($\operatorname{Im} z < 0$) the real axis, there will be two different limiting expressions: $G^{+}_{z_0}(x) = \frac{i\, e^{i \sqrt{z_0}\, |x|}}{2 \sqrt{z_0}}$ when $z$ approaches $z_0$ from above and $G^{-}_{z_0}(x) = \frac{-i\, e^{-i \sqrt{z_0}\, |x|}}{2 \sqrt{z_0}}$ when $z$ approaches $z_0$ from below. The resolvent $R^{+}(z_0)$ (convolution with $G^{+}_{z_0}$) corresponds to outgoing waves of the inhomogeneous Helmholtz equation $\left(-\frac{d^2}{dx^2} - z_0\right) u = f$, while $R^{-}(z_0)$ corresponds to incoming waves. This is directly related to the limiting amplitude principle: to find which solution corresponds to the outgoing waves, one considers the inhomogeneous wave equation with zero initial data, $\partial_t^2 \psi(x,t) - \partial_x^2 \psi(x,t) = f(x)\, e^{-i \sqrt{z_0}\, t}$, $\psi(x, 0) = 0$, $\partial_t \psi(x, 0) = 0$. A particular solution to the inhomogeneous Helmholtz equation corresponding to outgoing waves is obtained as the limit of $e^{i \sqrt{z_0}\, t}\, \psi(x, t)$ for large times. Estimates in the weighted spaces Let $A$ be a linear operator in a Banach space $X$, defined on the domain $D(A) \subset X$. For the values of the spectral parameter from the resolvent set of the operator, $z \in \rho(A) \subset \mathbb{C}$, the resolvent $R(z) = (A - z)^{-1}$ is bounded when considered as a linear operator acting from $X$ to itself, $R(z): X \to X$, but its bound depends on the spectral parameter $z$ and tends to infinity as $z$ approaches the spectrum of the operator, $\sigma(A) = \mathbb{C} \setminus \rho(A)$.
More precisely, there is the relation $\lVert R(z) \rVert_{X \to X} \ge \frac{1}{\operatorname{dist}(z, \sigma(A))}$. Many scientists refer to the "limiting absorption principle" when they want to say that the resolvent $R(z)$ of a particular operator $A$, when considered as acting in certain weighted spaces, has a limit (and/or remains uniformly bounded) as the spectral parameter $z$ approaches the essential spectrum, $\sigma_{\mathrm{ess}}(A)$. For instance, in the above example of the Laplace operator in one dimension, $A = -\frac{d^2}{dx^2}$, defined on the domain $D(A) = H^2(\mathbb{R})$, for $z_0 > 0$, both operators $R^{\pm}(z_0)$ with the integral kernels $G^{\pm}_{z_0}(x - y)$ are not bounded in $L^2(\mathbb{R})$ (that is, as operators from $L^2(\mathbb{R})$ to itself), but will both be uniformly bounded for $z_0$ in any compact subset of $(0, +\infty)$ when considered as operators from $L^2_s(\mathbb{R})$ to $L^2_{-s}(\mathbb{R})$, with fixed $s > 1/2$. The spaces $L^2_s(\mathbb{R})$ are defined as spaces of locally integrable functions $u$ such that their $L^2_s$-norm, $\lVert u \rVert^2_{L^2_s} = \int_{\mathbb{R}} (1 + |x|^2)^s\, |u(x)|^2\, dx$, is finite. See also Sommerfeld radiation condition Limiting amplitude principle References Linear operators Operator theory Scattering theory Spectral theory
Limiting absorption principle
[ "Chemistry", "Mathematics" ]
852
[ "Functions and mappings", "Scattering theory", "Mathematical objects", "Linear operators", "Scattering", "Mathematical relations" ]
61,149,311
https://en.wikipedia.org/wiki/Alternative%20approaches%20to%20redefining%20the%20kilogram
The scientific community examined several approaches to redefining the kilogram before deciding on a revision of the SI in November 2018. Each approach had advantages and disadvantages. Prior to the redefinition, the kilogram and several other SI units based on the kilogram were defined by an artificial metal object called the international prototype of the kilogram (IPK). There was broad agreement that the older definition of the kilogram should be replaced. The International Committee for Weights and Measures (CIPM) approved a redefinition of the SI base units in November 2018 that defines the kilogram by defining the Planck constant to be exactly 6.62607015×10−34 J⋅s. This approach effectively defines the kilogram in terms of the second and the metre, and took effect on 20 May 2019. In 1960, the metre, which had similarly been defined previously with reference to a single platinum-iridium bar with two marks on it, was redefined in terms of an invariant physical constant (the wavelength of a particular emission of light emitted by krypton, and later the speed of light) so that the standard could be independently reproduced in different laboratories by following a written specification. At the 94th Meeting of the International Committee for Weights and Measures (CIPM) in 2005, it was recommended that the same be done with the kilogram. In October 2010, the CIPM voted to submit a resolution for consideration at the General Conference on Weights and Measures (CGPM), to "take note of an intention" that the kilogram be defined in terms of the Planck constant h (which has dimensions of energy times time) together with other physical constants. This resolution was accepted by the 24th conference of the CGPM in October 2011 and further discussed at the 25th conference in 2014. Although the Committee recognised that significant progress had been made, they concluded that the data did not yet appear sufficiently robust to adopt the revised definition, and that work should continue to enable the adoption at the 26th meeting, scheduled for 2018. Such a definition would theoretically permit any apparatus that was capable of delineating the kilogram in terms of the Planck constant to be used as long as it possessed sufficient precision, accuracy and stability. The Kibble balance is one way to do this. As part of this project, a variety of very different technologies and approaches were considered and explored over many years. Some of these approaches were based on equipment and procedures that would have enabled the reproducible production of new, kilogram-mass prototypes on demand using measurement techniques and material properties that are ultimately based on, or traceable to, physical constants. Others were based on devices that measured either the acceleration or weight of hand-tuned kilogram test masses and which expressed their magnitudes in electrical terms via special components that permit traceability to physical constants. Such approaches depend on converting a weight measurement to a mass, and therefore require the precise measurement of the strength of gravity in laboratories. All approaches would have precisely fixed one or more constants of nature at a defined value. Kibble balance The Kibble balance (known as a "watt balance" before 2016) is essentially a single-pan weighing scale that measures the electric power necessary to oppose the weight of a kilogram test mass as it is pulled by Earth's gravity. It is a variation of an ampere balance, with an extra calibration step that eliminates the effect of geometry.
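The balance condition underlying this measurement can be illustrated with a short numerical sketch. The sketch below assumes the idealised watt-balance relation, in which the mechanical power m·g·v equals the electrical power U·I; the values used for voltage, current, coil velocity and local gravity are invented round numbers chosen only so that the result comes out near one kilogram, not data from any actual instrument.

```python
# Idealised Kibble (watt) balance relation: m * g * v = U * I.
# All numbers below are invented, rounded values for illustration only;
# they are not measurements from any real balance.

g = 9.80665      # assumed local gravitational acceleration, m/s^2
v = 0.002        # assumed coil velocity during the moving phase, m/s
U = 1.0          # assumed voltage induced across the coil in the moving phase, V
I = 0.0196133    # assumed current needed to support the mass in the weighing phase, A

# Solve the virtual-power balance m * g * v = U * I for the mass:
m = U * I / (g * v)
print(f"inferred mass: {m:.6f} kg")   # ~1.000000 kg with these illustrative numbers
```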
The electric potential in the Kibble balance is delineated by a Josephson voltage standard, which allows voltage to be linked to an invariant constant of nature with extremely high precision and stability. Its circuit resistance is calibrated against a quantum Hall effect resistance standard. The Kibble balance requires extremely precise measurement of the local gravitational acceleration g in the laboratory, using a gravimeter. For instance when the elevation of the centre of the gravimeter differs from that of the nearby test mass in the Kibble balance, the NIST compensates for Earth's gravity gradient of , which affects the weight of a one-kilogram test mass by about . In April 2007, the NIST's implementation of the Kibble balance demonstrated a combined relative standard uncertainty (CRSU) of 36μg. The UK's National Physical Laboratory's Kibble balance demonstrated a CRSU of 70.3μg in 2007. That Kibble balance was disassembled and shipped in 2009 to Canada's Institute for National Measurement Standards (part of the National Research Council), where research and development with the device could continue. The virtue of electronic realisations like the Kibble balance is that the definition and dissemination of the kilogram no longer depends upon the stability of kilogram prototypes, which must be very carefully handled and stored. It frees physicists from the need to rely on assumptions about the stability of those prototypes. Instead, hand-tuned, close-approximation mass standards can simply be weighed and documented as being equal to one kilogram plus an offset value. With the Kibble balance, while the kilogram is delineated in electrical and gravity terms, all of which are traceable to invariants of nature; it is defined in a manner that is directly traceable to three fundamental constants of nature. The Planck constant defines the kilogram in terms of the second and the metre. By fixing the Planck constant, the definition of the kilogram depends in addition only on the definitions of the second and the metre. The definition of the second depends on a single defined physical constant: the ground state hyperfine splitting frequency of the caesium-133 atom . The metre depends on the second and on an additional defined physical constant: the speed of light . With the kilogram redefined in this manner, physical objects such as the IPK are no longer part of the definition, but instead become transfer standards. Scales like the Kibble balance also permit more flexibility in choosing materials with especially desirable properties for mass standards. For instance, Pt10Ir could continue to be used so that the specific gravity of newly produced mass standards would be the same as existing national primary and check standards (≈21.55g/ml). This would reduce the relative uncertainty when making mass comparisons in air. Alternatively, entirely different materials and constructions could be explored with the objective of producing mass standards with greater stability. For instance, osmium-iridium alloys could be investigated if platinum's propensity to absorb hydrogen (due to catalysis of VOCs and hydrocarbon-based cleaning solvents) and atmospheric mercury proved to be sources of instability. Also, vapor-deposited, protective ceramic coatings like nitrides could be investigated for their suitability for chemically isolating these new alloys. The challenge with Kibble balances is not only in reducing their uncertainty, but also in making them truly practical realisations of the kilogram. 
Nearly every aspect of Kibble balances and their support equipment requires such extraordinarily precise and accurate, state-of-the-art technology that—unlike a device like an atomic clock—few countries would currently choose to fund their operation. For instance, the NIST's Kibble balance used four resistance standards in 2007, each of which was rotated through the Kibble balance every two to six weeks after being calibrated in a different part of NIST headquarters facility in Gaithersburg, Maryland. It was found that simply moving the resistance standards down the hall to the Kibble balance after calibration altered their values 10ppb (equivalent to 10μg) or more. Present-day technology is insufficient to permit stable operation of Kibble balances between even biannual calibrations. When the new definition takes effect, it is likely there will only be a few—at most—Kibble balances initially operating in the world. Other approaches Several alternative approaches to redefining the kilogram that were fundamentally different from the Kibble balance were explored to varying degrees, with some abandoned. The Avogadro project, in particular, was important for the 2018 redefinition decision because it provided an accurate measurement of the Planck constant that was consistent with and independent of the Kibble balance method. The alternative approaches included: Atom-counting approaches Avogadro project One Avogadro constant-based approach, known as the International Avogadro Coordination's Avogadro project, would define and delineate the kilogram as a 93.6mm diameter sphere of silicon atoms. Silicon was chosen because a commercial infrastructure with mature technology for creating defect-free, ultra-pure monocrystalline silicon already exists, the Czochralski process, to service the semiconductor industry. To make a practical realisation of the kilogram, a silicon boule (a rod-like, single-crystal ingot) would be produced. Its isotopic composition would be measured with a mass spectrometer to determine its average relative atomic mass. The boule would be cut, ground, and polished into spheres. The size of a select sphere would be measured using optical interferometry to an uncertainty of about 0.3nm on the radius—roughly a single atomic layer. The precise lattice spacing between the atoms in its crystal structure (≈192pm) would be measured using a scanning X-ray interferometer. This permits its atomic spacing to be determined with an uncertainty of only three parts per billion. With the size of the sphere, its average atomic mass, and its atomic spacing known, the required sphere diameter can be calculated with sufficient precision and low uncertainty to enable it to be finish-polished to a target mass of one kilogram. Experiments are being performed on the Avogadro Project's silicon spheres to determine whether their masses are most stable when stored in a vacuum, a partial vacuum, or ambient pressure. However, no technical means currently exist to prove a long-term stability any better than that of the IPK's, because the most sensitive and accurate measurements of mass are made with dual-pan balances like the BIPM's FB2 flexure-strip balance (see , below). Balances can only compare the mass of a silicon sphere to that of a reference mass. Given the latest understanding of the lack of long-term mass stability with the IPK and its replicas, there is no known, perfectly stable mass artefact to compare against. 
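The atom-counting arithmetic behind dimensioning such a sphere can be illustrated with a rough calculation. The sketch below uses approximate textbook values (a {220} lattice spacing of about 192 pm, equivalent to a cubic lattice parameter of about 543.1 pm with eight atoms per conventional diamond-cubic cell, and a mean molar mass of about 28.085 g/mol for natural silicon), and it ignores the surface oxide layer and all uncertainty considerations; it is an order-of-magnitude check only, not the procedure used by the International Avogadro Coordination.

```python
import math

# Back-of-the-envelope atom count for a 93.6 mm diameter silicon sphere.
# Approximate textbook values are assumed; these are not the exact
# metrological figures used by the Avogadro project.
d220 = 192.0e-12                 # {220} lattice spacing of silicon, m (approx.)
a = d220 * 2 * math.sqrt(2)      # cubic lattice parameter, ~543.1e-12 m
atoms_per_cell = 8               # diamond-cubic silicon: 8 atoms per conventional cell
molar_mass = 28.085e-3           # mean molar mass of natural silicon, kg/mol (approx.)
N_A = 6.02214076e23              # Avogadro constant, 1/mol

d_sphere = 93.6e-3               # sphere diameter, m
volume = (4.0 / 3.0) * math.pi * (d_sphere / 2) ** 3   # sphere volume, m^3

n_atoms = atoms_per_cell * volume / a ** 3             # atoms in the sphere
mass = n_atoms * molar_mass / N_A                      # resulting mass, kg

print(f"atoms in sphere: {n_atoms:.3e}")   # roughly 2.1e25 atoms
print(f"sphere mass:     {mass:.4f} kg")   # close to 1 kg
```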
Single-pan scales, which measure weight relative to an invariant of nature, are not precise to the necessary long-term uncertainty of 10–20 parts per billion. Another issue to be overcome is that silicon oxidises and forms a thin layer (equivalent to silicon atoms deep) of silicon dioxide (quartz) and silicon monoxide. This layer slightly increases the mass of the sphere, an effect that must be accounted for when polishing the sphere to its finished size. Oxidation is not an issue with platinum and iridium, both of which are noble metals that are roughly as cathodic as oxygen and therefore don't oxidise unless coaxed to do so in the laboratory. The presence of the thin oxide layer on a silicon-sphere mass prototype places additional restrictions on the procedures that might be suitable to clean it to avoid changing the layer's thickness or oxide stoichiometry. All silicon-based approaches would fix the Avogadro constant but vary in the details of the definition of the kilogram. One approach would use silicon with all three of its natural isotopes present. About 7.78% of silicon comprises the two heavier isotopes: 29Si and 30Si. As described in below, this method would define the magnitude of the kilogram in terms of a certain number of 12C atoms by fixing the Avogadro constant; the silicon sphere would be the practical realisation. This approach could accurately delineate the magnitude of the kilogram because the masses of the three silicon nuclides relative to 12C are known with great precision (relative uncertainties of 1ppb or better). An alternative method for creating a silicon sphere-based kilogram proposes to use isotopic separation techniques to enrich the silicon until it is nearly pure 28Si, which has a relative atomic mass of . With this approach, the Avogadro constant would not only be fixed, but so too would the atomic mass of 28Si. As such, the definition of the kilogram would be decoupled from 12C and the kilogram would instead be defined as atoms of 28Si (≈ fixed moles of 28Si atoms). Physicists could elect to define the kilogram in terms of 28Si even when kilogram prototypes are made of natural silicon (all three isotopes present). Even with a kilogram definition based on theoretically pure 28Si, a silicon-sphere prototype made of only nearly pure 28Si would necessarily deviate slightly from the defined number of moles of silicon to compensate for various chemical and isotopic impurities as well as the effect of surface oxides. Carbon-12 Though not offering a practical realisation, this definition would precisely define the magnitude of the kilogram in terms of a certain number of carbon12 atoms. Carbon12 (12C) is an isotope of carbon. The mole is currently defined as "the quantity of entities (elementary particles like atoms or molecules) equal to the number of atoms in 12 grams of carbon12". Thus, the current definition of the mole requires that moles ( mol) of 12C has a mass of precisely one kilogram. The number of atoms in a mole, a quantity known as the Avogadro constant, is experimentally determined, and the current best estimate of its value is This new definition of the kilogram proposed to fix the Avogadro constant at precisely with the kilogram being defined as "the mass equal to that of atoms of 12C". The accuracy of the measured value of the Avogadro constant is currently limited by the uncertainty in the value of the Planck constant. That relative standard uncertainty has been 50parts per billion (ppb) since 2006. 
By fixing the Avogadro constant, the practical effect of this proposal would be that the uncertainty in the mass of a 12C atom—and the magnitude of the kilogram—could be no better than the current 50ppb uncertainty in the Planck constant. Under this proposal, the magnitude of the kilogram would be subject to future refinement as improved measurements of the value of the Planck constant become available; electronic realisations of the kilogram would be recalibrated as required. Conversely, an electronic definition of the kilogram (see , below), which would precisely fix the Planck constant, would continue to allow moles of 12C to have a mass of precisely one kilogram but the number of atoms comprising a mole (the Avogadro constant) would continue to be subject to future refinement. A variation on a 12C-based definition proposes to define the Avogadro constant as being precisely 3 (≈) atoms. An imaginary realisation of a 12-gram mass prototype would be a cube of 12C atoms measuring precisely atoms across on a side. With this proposal, the kilogram would be defined as "the mass equal to 3× atoms of 12C." Ion accumulation Another Avogadro-based approach, ion accumulation, since abandoned, would have defined and delineated the kilogram by precisely creating new metal prototypes on demand. It would have done so by accumulating gold or bismuth ions (atoms stripped of an electron) and counting them by measuring the electric current required to neutralise the ions. Gold (197Au) and bismuth (209Bi) were chosen because they can be safely handled and have the two highest atomic masses among the mononuclidic elements that are stable (gold) or effectively so (bismuth). See also Table of nuclides. With a gold-based definition of the kilogram for instance, the relative atomic mass of gold could have been fixed as precisely , from the current value of . As with a definition based upon carbon12, the Avogadro constant would also have been fixed. The kilogram would then have been defined as "the mass equal to that of precisely atoms of gold" (precisely atoms of gold or about fixed moles). In 2003, German experiments with gold at a current of only demonstrated a relative uncertainty of 1.5%. Follow-on experiments using bismuth ions and a current of 30mA were expected to accumulate a mass of 30g in six days and to have a relative uncertainty of better than 1 ppm. Ultimately, ionaccumulation approaches proved to be unsuitable. Measurements required months and the data proved too erratic for the technique to be considered a viable future replacement to the IPK. Among the many technical challenges of the ion-deposition apparatus was obtaining a sufficiently high ion current (mass deposition rate) while simultaneously decelerating the ions so they could all deposit onto a target electrode embedded in a balance pan. Experiments with gold showed the ions had to be decelerated to very low energies to avoid sputtering effects—a phenomenon whereby ions that had already been counted ricochet off the target electrode or even dislodged atoms that had already been deposited. The deposited mass fraction in the 2003 German experiments only approached very close to 100% at ion energies of less than around (<1km/s for gold). 
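The accumulation rates quoted above can be checked with a simple order-of-magnitude calculation. The sketch below assumes singly charged bismuth ions, one hundred percent deposition efficiency and continuous operation, so it only roughly reproduces the quoted figure of about 30 g in six days; duty cycle, charge-state and efficiency details are ignored.

```python
# Order-of-magnitude check of the quoted bismuth ion-accumulation rate.
# Assumes singly charged 209Bi ions, a continuous beam and 100% deposition
# efficiency; real experiments fall short of this idealisation.
e = 1.602176634e-19        # elementary charge, C
u = 1.66053907e-27         # atomic mass constant, kg
m_bi = 208.98 * u          # approximate mass of a 209Bi atom, kg

current = 0.030            # beam current, A (the planned 30 mA)
t = 6 * 24 * 3600          # six days, in seconds

n_ions = current * t / e   # number of singly charged ions neutralised
mass = n_ions * m_bi       # accumulated mass, kg

print(f"ions deposited: {n_ions:.3e}")
print(f"mass deposited: {mass * 1000:.1f} g")   # a few tens of grams
```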
If the kilogram had been defined as a precise quantity of gold or bismuth atoms deposited with an electric current, not only would the Avogadro constant and the atomic mass of gold or bismuth have to have been precisely fixed, but also the value of the elementary charge (e), likely to (from the currently recommended value of ). Doing so would have effectively defined the ampere as a flow of electrons per second past a fixed point in an electric circuit. The SI unit of mass would have been fully defined by having precisely fixed the values of the Avogadro constant and elementary charge, and by exploiting the fact that the atomic masses of bismuth and gold atoms are invariant, universal constants of nature. Beyond the slowness of making a new mass standard and the poor reproducibility, there were other intrinsic shortcomings to the ionaccumulation approach that proved to be formidable obstacles to ion-accumulation-based techniques becoming a practical realisation. The apparatus necessarily required that the deposition chamber have an integral balance system to enable the convenient calibration of a reasonable quantity of transfer standards relative to any single internal ion-deposited prototype. Furthermore, the mass prototypes produced by ion deposition techniques would have been nothing like the freestanding platinum-iridium prototypes currently in use; they would have been deposited onto—and become part of—an electrode imbedded into one pan of a special balance integrated into the device. Moreover, the ion-deposited mass wouldn't have had a hard, highly polished surface that can be vigorously cleaned like those of current prototypes. Gold, while dense and a noble metal (resistant to oxidation and the formation of other compounds), is extremely soft so an internal gold prototype would have to be kept well isolated and scrupulously clean to avoid contamination and the potential of wear from having to remove the contamination. Bismuth, which is an inexpensive metal used in low-temperature solders, slowly oxidises when exposed to room-temperature air and forms other chemical compounds and so would not have produced stable reference masses unless it was continually maintained in a vacuum or inert atmosphere. Ampere-based force This approach would define the kilogram as "the mass which would be accelerated at precisely when subjected to the per-metre force between two straight parallel conductors of infinite length, of negligible circular cross section, placed one metre apart in vacuum, through which flow a constant current of elementary charges per second". Effectively, this would define the kilogram as a derivative of the ampere rather than the present relationship, which defines the ampere as a derivative of the kilogram. This redefinition of the kilogram would specify elementary charge (e) as precisely coulomb rather than the current recommended value of It would necessarily follow that the ampere (one coulomb per second) would also become an electric current of this precise quantity of elementary charges per second passing a given point in an electric circuit. The virtue of a practical realisation based upon this definition is that unlike the Kibble balance and other scale-based methods, all of which require the careful characterisation of gravity in the laboratory, this method delineates the magnitude of the kilogram directly in the very terms that define the nature of mass: acceleration due to an applied force. 
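The per-metre force invoked by this proposed definition can be made concrete with the classical Ampère force law between two long, thin, parallel conductors. The sketch below assumes the pre-2019 conventional (exact) value of the magnetic constant and a current of exactly one ampere at a separation of one metre; it does nothing more than illustrate the size of the force involved.

```python
import math

# Force per unit length between two long, thin, parallel conductors,
# from the classical Ampere force law: F/L = mu0 * I**2 / (2 * pi * d).
# Assumes the pre-2019 conventional (exact) value of mu0.
mu0 = 4 * math.pi * 1e-7      # magnetic constant, N/A^2 (pre-2019 convention)
I = 1.0                       # current in each conductor, A
d = 1.0                       # separation between the conductors, m

force_per_metre = mu0 * I ** 2 / (2 * math.pi * d)
print(f"force per metre: {force_per_metre:.1e} N/m")   # 2.0e-07 N/m
```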
Unfortunately, it is extremely difficult to develop a practical realisation based upon accelerating masses. Experiments over a period of years in Japan with a superconducting, 30g mass supported by diamagnetic levitation never achieved an uncertainty better than ten parts per million. Magnetic hysteresis was one of the limiting issues. Other groups performed similar research that used different techniques to levitate the mass. Notes References SI base units Units of mass
Alternative approaches to redefining the kilogram
[ "Physics", "Mathematics" ]
4,296
[ "Matter", "Quantity", "Units of mass", "Mass", "Units of measurement" ]
61,152,852
https://en.wikipedia.org/wiki/Mirnov%20oscillations
Mirnov oscillations (also known as magnetic oscillations) are amplitude perturbations of the magnetic field in a plasma. They are named after Sergei V. Mirnov, who in 1965 designed a probe, now known as the Mirnov coil, to measure these oscillations. Mirnov oscillations have been extensively studied in tokamaks because they provide information about the plasma instabilities that occur within the system. The instabilities create local fluctuations in the current, which induce a varying magnetic flux density that is picked up by the coils in accordance with Faraday's law of induction. References Plasma phenomena
Mirnov oscillations
[ "Physics" ]
135
[ "Plasma phenomena", "Physical phenomena", "Plasma physics stubs", "Plasma physics" ]
61,156,704
https://en.wikipedia.org/wiki/C17H26N4O
The molecular formula C17H26N4O (molar mass: 302.41 g/mol, exact mass: 302.2107 u) may refer to: Alniditan Emedastine Molecular formulas
C17H26N4O
[ "Physics", "Chemistry" ]
63
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
61,158,940
https://en.wikipedia.org/wiki/XCMS%20Online
XCMS Online is a cloud version of the original eXtensible Computational Mass Spectrometry (XCMS) technology (a bioinformatics software package designed for statistical analysis of mass spectrometry data), created by the Siuzdak Lab at Scripps Research. XCMS introduced the concept of nonlinear retention time alignment, which allowed for the statistical assessment of the detected peaks across LC-MS and GC-MS datasets. XCMS Online was designed to facilitate XCMS analyses through a cloud portal and as a more straightforward (non-command-driven) way to analyze, visualize and share untargeted metabolomic data. Further to this, the combination of XCMS and METLIN allows for the identification of known molecules using METLIN's tandem mass spectrometry data, and enables the identification of unknown (uncharacterized) molecules via similarity searching of tandem mass spectrometry data. XCMS Online has also become a systems biology tool for integrating different omic data sets. As of January 2021, the XCMS Online - METLIN platform has over 44,000 registered users. XCMS - METLIN was recognized in 2023 as the year's top analytical innovation. XCMS Online works by comparing groups of raw or preprocessed metabolomic data to discover metabolites, using methods such as nonlinear retention time alignment and feature detection and matching. Once analysis is complete, the data can be viewed in several different ways, including via bubble plots, heat maps, chromatograms, and box plots. In addition, XCMS Online is integrated with METLIN, a large metabolite database. Several common mass spectrometry file formats are supported for direct upload to the site. History In 2005, the Siuzdak Lab created an open-source tool named XCMS in the programming language R. Noticing the need for a more accessible, graphical data processing tool, they created the cloud-based XCMS Online in 2012. The ability for users to stream data directly from instruments while the data are being acquired was added in 2014. Also in that year, a commercial version named XCMS Plus (owned by Mass Consortium Corporation) was released and, in 2015, SCIEX became a reseller. In 2017 it was shown that XCMS Online could be used in a systems biology workflow. One year later, in the absence of a publicly available alternative, a version of XCMS Online was released with the ability to perform multiple reaction monitoring (MRM). References External links XCMS Online Bioinformatics software Mass spectrometry software
XCMS Online
[ "Physics", "Chemistry", "Biology" ]
532
[ "Spectrum (physical sciences)", "Chemistry software", "Bioinformatics software", "Bioinformatics", "Mass spectrometry software", "Mass spectrometry" ]
52,024,457
https://en.wikipedia.org/wiki/Valbenazine
Valbenazine, sold under the brand name Ingrezza, is a medication used to treat tardive dyskinesia. It acts as a vesicular monoamine transporter 2 (VMAT2) inhibitor. Medical use Valbenazine is used to treat tardive dyskinesia in adults. Tardive dyskinesia is a drug-induced neurological injury characterized by involuntary movements. The clinical trials that led to the approval of valbenazine by the US Food and Drug Administration (FDA) were six weeks in duration. An industry-sponsored study has studied the use of valbenazine for up to 48 weeks, in which it was found to be safe and effective for maintaining short-term (6 week) improvements in tardive dyskinesia. Contraindications There are no contraindications for the use of valbenazine according to the prescribing information. Adverse effects Side effects may include sleepiness or QT prolongation. Significant prolongation has not yet been observed at recommended dosage levels, however, those taking inhibitors of the liver enzymes CYP2D6 or CYP3A4 – or who are poor CYP2D6 metabolizers – may be at risk for significant prolongation. Valbenazine has not been effectively studied in pregnancy, and it is recommended that women who are pregnant or breastfeeding avoid use of valbenazine. Pharmacology Mechanism of action Valbenazine is known to cause reversible reduction of dopamine release by selectively inhibiting pre-synaptic human vesicular monoamine transporter type 2 (VMAT2). In vitro, valbenazine shows great selectivity for VMAT2 and little to no affinity for VMAT1 or other monoamine receptors. Although the exact cause of tardive dyskinesia is unknown, it is hypothesized that it may result from neuroleptic-induced dopamine hypersensitivity because it is exclusively associated with the use of neuroleptic drugs. By selectively reducing the ability of VMAT2 to load dopamine into synaptic vesicles, the drug reduces overall levels of available dopamine in the synaptic cleft, ideally alleviating the symptoms associated with dopamine hypersensitivity. The importance of valbenazine selectivity inhibiting VMAT2 over other monoamine transporters is that VMAT2 is mainly involved with the transport of dopamine, and to a much lesser extent other monoamines such as norepinephrine, serotonin, and histamine. This selectivity is likely to reduce the likelihood of "off-target" adverse effects which may result from the upstream inhibition of these other monoamines. Pharmacokinetics Valbenazine is a prodrug which is an ester of [+]-α-dihydrotetrabenazine (DTBZ) with the amino acid L-valine. It is extensively hydrolyzed to the active metabolite DTBZ. Plasma protein binding of valbenazine is over 99%, and that of DTBZ is about 64%. The biological half-life of both valbenazine and DTBZ is 15 to 22 hours. Liver enzymes involved in inactivation are CYP3A4, CYP3A5 and CYP2D6. The drug is excreted, mostly in form of inactive metabolites, via the urine (60%) and the feces (30%). Society and culture Legal status Valbenazine is produced by Neurocrine Biosciences. Valbenazine is the first medication approved by the FDA for the treatment of tardive dyskinesia, in April 2017. Economics While Neurocrine Biosciences does not hold a final patent for valbenazine or elagolix, they do hold a patent for the VMAT2 inhibitor [9,10-dimethoxy-3-(2-methylpropyl)-1H,2H,3H,4H,6H,7H,11bH-pyrido-[2,1-a]isoquinolin-2-yl]methanol and related compounds, which includes valbenazine. Names The International Nonproprietary Name (INN) is valbenazine. 
Research Valbenazine is being studied for the treatment of Tourette's syndrome. References Further reading Antidyskinetic agents Monoamine-depleting agents Prodrugs Tardive dyskinesia VMAT inhibitors
Valbenazine
[ "Chemistry" ]
991
[ "Chemicals in medicine", "Prodrugs" ]
52,026,083
https://en.wikipedia.org/wiki/Electric%20overhead%20traveling%20crane
Electric overhead traveling cranes or EOT cranes are a common type of overhead crane, also called bridge cranes. They consist of parallel runways, much like the rails of a railroad, with a traveling bridge spanning the gap. EOT cranes are specifically powered by electricity. Applications EOT cranes are extensively used in warehouses and industry. An EOT crane is able to carry heavy objects to anywhere needed on the factory floor, and can also be used for lifting. However, it cannot be used in every industry, as the working temperature is limited to a range between −20°C and 40°C. Single girder EOT crane A single girder EOT crane has one main girder, making it easy to install and requiring less maintenance. The most common single girder EOT cranes are the LD type, the LDP type, and the HD type single girder EOT crane. Single girder cranes are used for lighter industrial applications as they have lower weight limits. Double girder EOT crane Common double girder EOT cranes include the QD type hook double bridge crane, the LH electric hoist double girder bridge crane, and the NLH type double girder EOT crane. References Cranes (machines)
Electric overhead traveling crane
[ "Engineering" ]
241
[ "Engineering vehicles", "Cranes (machines)" ]
52,037,261
https://en.wikipedia.org/wiki/IEEE%20Journal%20of%20Solid-State%20Circuits
The IEEE Journal of Solid-State Circuits is a monthly peer-reviewed scientific journal on new developments and research in solid-state circuits, published by the Institute of Electrical and Electronics Engineers (IEEE) in New York City. The journal serves as a companion venue for expanding on work presented at the International Solid-State Circuits Conference, the Symposia on VLSI Technology and Circuits, and the Custom Integrated Circuits Conference. The journal has an impact factor of 6.12 and is edited by Dennis Sylvester (University of Michigan). References External links Journal of Solid-State Circuits, IEEE Electronics journals Semiconductor journals Monthly journals English-language journals Academic journals established in 1966 Electrical and electronic engineering journals
IEEE Journal of Solid-State Circuits
[ "Engineering" ]
140
[ "Electrical engineering", "Electronic engineering", "Electrical and electronic engineering journals" ]
54,779,970
https://en.wikipedia.org/wiki/New%20Breeding%20Techniques
New Breeding Techniques (NBT), also named New Plant Engineering Techniques, are a suite of methods that could increase and accelerate the development of new traits in plant breeding. These new techniques often involve 'genome editing', which is intended to modify DNA at specific locations within the plants' genes so that new traits and properties are produced in crop plants. An ongoing discussion in many countries concerns whether NBTs should be included within the same pre-existing governmental regulations that control genetic modification. Methods involved New breeding techniques (NBTs) make specific changes within plant DNA in order to change its traits, and these modifications can vary in scale from altering a single base to inserting or removing one or more genes. The various methods of achieving these changes in traits include the following: Cutting and modifying the genome during the repair process (three tools are used to achieve this: zinc finger nucleases, TALENs, and CRISPR/Cas tools) Genome editing to introduce changes to just a few base pairs (using a technique called 'oligonucleotide-directed mutagenesis' (ODM)). Transferring a gene from an identical or closely related species (cisgenesis) Adding in a reshuffled set of regulatory instructions from the same species (intragenesis) Deploying processes that alter gene activity without altering the DNA itself (epigenetic methods) Grafting of an unaltered plant onto a genetically modified rootstock Potential benefits and disbenefits Many European environmental organisations came together in 2016 to jointly express serious concerns over new breeding techniques. Regulation OECD The Organisation for Economic Co-operation and Development (OECD) has its own 'Working Group on Harmonization of Regulatory Oversight in Biotechnology' but, as of 2015, there had been virtually no progress in addressing issues around NBTs; this was also the case in many major food-producing countries such as Russia, South Africa, Brazil, Peru, Mexico, China, Japan and India. Despite the huge potential importance of NBTs for trade and agriculture, as well as their potential risks, the majority of food-producing countries in the world at that date still had no policies or protocols for regulating or analysing food products derived specifically from new breeding techniques. South America Argentina introduced regulations and protocols affecting NBTs. These were in place by 2015 and gave clarity to plant developers at an early stage so they could anticipate whether or not their products were likely to be regarded as GMOs. The protocols conform to the internationally recognised 2003 Cartagena Protocol on Biosafety. North America United States The United States Department of Agriculture is responsible for determining whether food products derived from NBTs should be regulated, and this is undertaken in a case-by-case manner under the US Plant Protection Act. As of 2015 there was no specific policy towards NBTs, although in the summer of that year the White House announced plans to update the U.S. Regulatory Framework for Biotechnology. Canada Canada's food regulatory system differs from those of most other countries, and its procedures already accommodate products from any breeding technique, including NBTs. This is because its 1993 'Biotechnology Regulatory Framework' is built around a concept of regulatory triggering based upon "Plants with Novel Traits".
In other words, if a new trait does not exist within normal cultivated plant populations in Canada, then no matter how it was developed, it will trigger the normal regulatory processes and testing. See also Genetic engineering in North America Synthetic biology References Further reading Plant genetics Plant breeding
New Breeding Techniques
[ "Chemistry", "Biology" ]
702
[ "Plant breeding", "Plant genetics", "Plants", "Molecular biology" ]
54,795,122
https://en.wikipedia.org/wiki/Off-target%20activity
Off-target activity is biological activity of a drug at targets other than its intended biological target, producing effects different from its intended activity. It most commonly contributes to side effects. However, in some cases, off-target activity can be taken advantage of for therapeutic purposes. An example of this is the repurposing of the antimineralocorticoid and diuretic spironolactone, which was found to produce feminization and gynecomastia as side effects, for use as an antiandrogen in the treatment of androgen-dependent conditions like acne and hirsutism in women. Metformin also exhibits off-target activity. See also Antitarget References Bioactivity Pharmacodynamics
Off-target activity
[ "Chemistry" ]
153
[ "Pharmacology", "Pharmacodynamics" ]
63,291,291
https://en.wikipedia.org/wiki/Marcy%20Zenobi-Wong
Marcy Zenobi-Wong is an American engineer and professor of Tissue Engineering and Biofabrication at the Swiss Federal Institute of Technology (ETH Zurich). She is known for her work in the field of Tissue Engineering. Education and career Zenobi-Wong completed her undergraduate degree in mechanical engineering at the Massachusetts Institute of Technology, and a graduate degree at Stanford University. She completed her PhD on the role of mechanical forces in skeletal development in 1990. After this, she first worked for a year as a postdoc in the Orthopaedic Research Laboratories, University of Michigan, before moving to the University of Bern as group leader Cartilage Biomechanics in 1992, where she habilitated in 2000. In 2003, she moved to ETH Zürich, first to the Institute for Biomedical Engineering, and later to the Department of Health Sciences and Technology, where she became an associate professor in 2017. Work Zenobi-Wong works in the area of tissue engineering, in particular for cartilage regeneration. She develops functional biomaterials which mimic the extracellular matrix. The biofabrication techniques used to develop these materials include electrospinning, casting, two-photon polymerization and bioprinting. Zenobi-Wong holds four licensed patents in the fields of tissue engineering, tissue engineering techniques, and gene expression assays. She was one of the originators of the MSc Biomedical Engineering program at ETH Zürich, and developed several graduate level courses in tissue engineering and biomedical engineering. Zenobi-Wong currently serves as President of the Swiss Society for Biomaterials and Regenerative Medicine, and as secretary general of the International Society of Biofabrication. References External links ETH Zürich Department of Health Sciences and Technology - Tissue Engineering and Biofabrication Group Living people Biomaterials Tissue engineering Academic staff of ETH Zurich 1963 births 21st-century American engineers 21st-century American educators MIT School of Engineering alumni Stanford University alumni American women engineers American women academics 21st-century American women
Marcy Zenobi-Wong
[ "Physics", "Chemistry", "Engineering", "Biology" ]
415
[ "Biomaterials", "Biological engineering", "Cloning", "Chemical engineering", "Materials", "Tissue engineering", "Matter", "Medical technology" ]
63,295,596
https://en.wikipedia.org/wiki/Janson%20inequality
In the mathematical theory of probability, Janson's inequality is a collection of related inequalities giving an exponential bound on the probability of many related events happening simultaneously by their pairwise dependence. Informally Janson's inequality involves taking a sample of many independent random binary variables, and a set of subsets of those variables and bounding the probability that the sample will contain any of those subsets by their pairwise correlation. Statement Let be our set of variables. We intend to sample these variables according to probabilities . Let be the random variable of the subset of that includes with probability . That is, independently, for every . Let be a family of subsets of . We want to bound the probability that any is a subset of . We will bound it using the expectation of the number of such that , which we call , and a term from the pairwise probability of being in , which we call . For , let be the random variable that is one if and zero otherwise. Let be the random variables of the number of sets in that are inside : . Then we define the following variables: Then the Janson inequality is: and Tail bound Janson later extended this result to give a tail bound on the probability of only a few sets being subsets. Let give the distance from the expected number of subsets. Let . Then we have Uses Janson's Inequality has been used in pseudorandomness for bounds on constant-depth circuits. Research leading to these inequalities were originally motivated by estimating chromatic numbers of random graphs. References Probabilistic inequalities
Janson inequality
[ "Mathematics" ]
327
[ "Theorems in probability theory", "Probabilistic inequalities", "Inequalities (mathematics)" ]
56,326,541
https://en.wikipedia.org/wiki/Greenland%20Telescope
The Greenland Telescope is a radio telescope situated at the Thule Air Base in north-western Greenland. It will later be deployed at the Summit Station research camp, located at the highest point of the Greenland ice sheet at an altitude of 3,210 meters (10,530 feet). The telescope is an international collaboration between: The Academia Sinica Institute of Astronomy and Astrophysics (Taiwan) (project leaders) The Smithsonian Astrophysical Observatory of the Center for Astrophysics Harvard & Smithsonian (United States) The National Radio Astronomy Observatory (United States) The Haystack Observatory of the Massachusetts Institute of Technology (United States) In 2011 the U.S. National Science Foundation gave the Smithsonian Astrophysical Observatory a 12-meter radio antenna that had been used as a prototype for the ALMA project in Chile. The antenna was to be deployed in Greenland. Deploying the telescope in the middle of Greenland is ideal for detecting certain radio frequencies. The telescope will be used to study the event horizons of black holes and to test how general relativity behaves in environments with extreme gravity. The Greenland Telescope will become part of the global network of telescopes that makes up the Event Horizon Telescope that will study supermassive black holes and explore the origin of the relativistic jet in the active galaxy Messier 87. Progress and current status Between 2013 and 2015, the Taiwanese Academia Sinica Institute of Astronomy and Astrophysics modified the telescope so that it would better work in the cold environment of the Arctic. The telescope was shipped to Greenland in July 2016 and re-assembled in July 2017 at Thule Air Base in north-western Greenland. The telescope took its first image on 25th of December 2017. An update on "Construction, Commissioning, and Operations" of the telescope at Pituffik Space Base (the revised name for the complex) was published on ArXiv in July 2023, describing "the lessons learned from the operations in the Arctic regions, and the prospect of the telescope." One of the systems tested was the location system; when the telescope is deployed on the ice cap summit, it will move with the ground it is mounted on. Establishing the telescope's geographical position to the required accuracy of 5m required about an hour of observation time. The snow and ice removal systems were also successfully tested. The telescope will be deployed at the Summit Station research camp, located at the highest point of the Greenland ice sheet. References Additional sources Hirashita, Hiroyuki; Koch, Patrick M.; Matsushita, Satoki; Takakuwa, Shigehisa; Nakamura, Masanori; Asada, Keiichi; Liu, Hauyu Baobab; Urata, Yuji; Wang, Ming-Jye; Wang, Wei-Hao; Takahashi, Satoko; Tang, Ya-Wen; Chang, Hsian-Hong; Huang, Kuiyun; Morata, Oscar; Otsuka, Masaaki; Lin, Kai-Yang; Tsai, An-Li; Lin, Yen-Ting; Srinivasan, Sundar; Martin-Cocher, Pierre; Pu, Hung-Yi; Kemper, Francisca; Patel, Nimesh; Grimes, Paul; Huang, Yau-De; Han, Chih-Chiang; Huang, Yen-Ru; Nishioka, Hiroaki; Lin, Lupin Chun-Che; Zhang, Qizhou; Keto, Eric; Burgos, Roberto; Chen, Ming-Tang; Inoue, Makoto; Ho, Paul T. P.. "First-generation science cases for ground-based terahertz telescopes". Publications of the Astronomical Society of Japan, 2016: Volume 68, Issue 1, id.R1 pp. doi:10.1093/pasj/psv115 10.1093/pasj/psv115 The M87 Workshop: Towards the 100th Anniversary of the Discovery of Cosmic Jets Arctic Greenland Telescope Opens New Era of Astronomy SpaceRef, 2018-05-31. 
Telescopes Radio observatories Astronomy in Taiwan Interferometric telescopes Radio telescopes Astronomical imaging Astronomical instruments
Greenland Telescope
[ "Astronomy" ]
842
[ "Telescopes", "Astronomical instruments" ]
56,328,827
https://en.wikipedia.org/wiki/Biological%20methanation
Biological methanation (also: biological hydrogen methanation (BHM) or microbiological methanation) is a conversion process that generates methane by means of highly specialized microorganisms (archaea) within a technical system. The process can be applied in a power-to-gas system to produce biomethane and is regarded as an important storage technology for variable renewable energy in the context of the energy transition. The technology was first successfully implemented at a power-to-gas plant of this kind in 2015. Disambiguation Biological methanation is based on methanogenesis, a specific anaerobic metabolic pathway in which hydrogen and carbon dioxide are converted into methane. An analogous chemical-catalytic process, known as the Sabatier reaction, also exists. Principle of function Numerous common microorganisms within the domain Archaea convert hydrogen (H2) and carbon dioxide (CO2) into methane biocatalytically. The relevant metabolic processes run under strictly anaerobic conditions in an aqueous environment. Archaea suitable for this process are the so-called methanogens with a hydrogenotrophic metabolism. They belong primarily to the orders Methanopyrales, Methanobacteriales, Methanococcales and Methanomicrobiales. These methanogens are naturally adapted to a range of anaerobic environments and conditions. In general, methanogens require aqueous, anoxic conditions with at least 50% water and a redox potential below −330 mV. They prefer slightly acidic to alkaline living conditions and are found over a very wide temperature range, from 4 to 110 °C. Reactor types The most commonly utilized reactor type for biological methanation is the stirred-tank reactor, in which mass transfer is influenced by several factors such as reactor geometry, impeller configuration, agitation speed and gas flow rate. Less investigated reactor types, such as trickle-bed reactors, bubble-column reactors and gas-lift reactors, have their own specific advantages and drawbacks with respect to these mass-transfer limitations. Potential applications of biological methanation Biological methanation can take place as an in-situ process within a fermenter or as an ex-situ process in a separate reactor. Biological methanation in a biogas or clarification plant with a gas processing system (in-situ process) Hydrogen is added directly to the fermentation material during the fermentation process and biological methanation subsequently takes place in the thoroughly gassed fermentation material. Depending on its purity, the gas is upgraded to methane before being fed into the gas grid. Alternatively, biological methanation takes place in a separate methanation plant, and the gas is completely converted into methane before being fed into the gas grid. The carbon dioxide produced in a gas processing system can also be converted into methane in a separate methanation plant by adding hydrogen, and can then be fed into the gas grid. Biological methanation in combination with an arbitrary carbon dioxide source (ex-situ process) In a separate methanation plant, hydrogen is converted into methane together with carbon dioxide and then fed into the gas grid (stand-alone solution). Biological methanation in a pressurized reactor vessel (in-situ process) Pressure allows for better hydrogen solubility and therefore easier conversion into methane by the microorganisms. 
A possible reactor configuration is autogenerative high-pressure digestion. Research in Korea has demonstrated that biogas with more than 90% CH4 and an energy density of 180 MJ/m3 can be produced in this way. Implementation in the field Since March 2015, the first power-to-gas plant of this kind worldwide has been feeding synthetic biomethane, generated by biological methanation, into the public gas grid in Allendorf (Eder), Germany. The plant runs at an output rate of 15 Nm3/h, which corresponds to about 400,000 kWh per year. This amount of gas would allow a CNG vehicle to travel about 750,000 kilometers per year. References Biogas technology Methane
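The hydrogen demand of such a plant follows from the methanation stoichiometry, 4 H2 + CO2 → CH4 + 2 H2O, i.e. four volumes of hydrogen per volume of methane for ideal gases. A minimal sketch in Python, using the Allendorf plant's nominal output as an illustrative input; the assumption of complete conversion is a simplification:

# Rough stoichiometric estimate for a biological methanation plant.
# Reaction: 4 H2 + CO2 -> CH4 + 2 H2O (hydrogenotrophic methanogenesis).
# For ideal gases, volume ratios equal mole ratios.

def feed_gas_demand(ch4_output_nm3_per_h, conversion=1.0):
    """Return (H2, CO2) demand in Nm3/h for a given CH4 output, assuming ideal-gas behaviour."""
    ch4_produced = ch4_output_nm3_per_h / conversion
    h2_demand = 4.0 * ch4_produced
    co2_demand = 1.0 * ch4_produced
    return h2_demand, co2_demand

if __name__ == "__main__":
    h2, co2 = feed_gas_demand(15.0)  # 15 Nm3/h CH4, as for the Allendorf plant
    print(f"H2 demand:  {h2:.0f} Nm3/h")
    print(f"CO2 demand: {co2:.0f} Nm3/h")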
Biological methanation
[ "Chemistry", "Biology" ]
888
[ "Greenhouse gases", "Biofuels technology", "Methane", "Biogas technology" ]
69,026,877
https://en.wikipedia.org/wiki/LunaNet
LunaNet is a NASA and ESA project and proposed data network aiming to provide a "Lunar Internet" for cis-lunar spacecraft and installations. It will be able to store and forward data to provide a Delay/Disruption Tolerant Network (DTN). The objective is to avoid needing to preschedule data communications back to Earth. LunaNet will also offer navigation services, e.g. for orbit determination and for navigation on the lunar surface. Draft interoperability specifications have been issued. The LunaNet Interoperability Specification (LNIS) is the document which publishes the LunaNet standard. LNIS version 4 was published online on September 12, 2022. A draft of LNIS version 5 was provided online for review and comment in late 2023. NASA's instantiation of LunaNet is called the Lunar Communication Relay and Navigation System (LCRNS). The Moonlight Initiative is an ESA project intending to adopt the specifications. JAXA's instantiation of LunaNet is called the Lunar Navigation Satellite System (LNSS). See also Deep Space Network, NASA spacecraft communications Artemis program, NASA's return to the Moon Laser communication in space Coordinated Lunar Time References External links LunaNet Interoperability Specification Documents NASA's Lunar Communications Relay and Navigation Systems (LCRNS) NASA Spacecraft communication
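Store-and-forward is the core idea behind DTN: a node holds data bundles locally until a communication link is actually available, rather than requiring an end-to-end path to be scheduled in advance. A minimal sketch of that buffering behaviour in Python; the node and bundle names are illustrative and not part of the LunaNet specification:

from collections import deque

class DTNNode:
    """Toy delay-tolerant node: bundles wait in a queue until a contact opens."""
    def __init__(self, name):
        self.name = name
        self.buffer = deque()

    def receive(self, bundle):
        # Store the bundle even if no downstream link is currently available.
        self.buffer.append(bundle)

    def contact(self, next_hop):
        # Forward everything that has been waiting once a link to next_hop exists.
        while self.buffer:
            next_hop.receive(self.buffer.popleft())

# Example: a lunar relay buffers science data until Earth is in view.
relay = DTNNode("lunar-relay")
earth = DTNNode("earth-station")
relay.receive("science-data-bundle-001")   # no contact yet: data is stored
relay.contact(earth)                        # contact window opens: data is forwarded
print(list(earth.buffer))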
LunaNet
[ "Engineering" ]
260
[ "Spacecraft communication", "Aerospace engineering" ]
69,039,000
https://en.wikipedia.org/wiki/Glugging
Glugging (also referred to as "the glug-glug process") is the physical phenomenon which occurs when a liquid is poured rapidly from a vessel with a narrow opening, such as a bottle. It is a facet of fluid dynamics. As liquid is poured from a bottle, the air pressure in the bottle is lowered, and air at higher pressure from outside the bottle is forced into the bottle, in the form of a bubble, impeding the flow of liquid. Once the bubble enters, more liquid escapes, and the process is repeated. The reciprocal action of glugging creates a rhythmic sound. The English word "glug" is onomatopoeic, describing this sound, and other languages have their own onomatopoeic words for it. Academic papers have been written about the physics of glugging, and about the impact of glugging sounds on consumers' perception of products such as wine. Research into glugging has been done using high-speed photography. Factors which affect glugging are the viscosity of the liquid, its carbonation, the size and shape of the container's neck and its opening (collectively referred to as "bottle geometry"), the angle at which the container is held, and the ratio of air to liquid in the bottle (which means that the rate and the sound of the glugging change as the bottle empties). See also References fluid dynamics food science
Glugging
[ "Chemistry", "Engineering" ]
299
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
71,986,552
https://en.wikipedia.org/wiki/Text-to-video%20model
A text-to-video model is a machine learning model that uses a natural language description as input to produce a video relevant to the input text. Advancements during the 2020s in the generation of high-quality, text-conditioned videos have largely been driven by the development of video diffusion models. Models There are different models, including open source models. The Chinese-language-input CogVideo is the earliest text-to-video model, "of 9.4 billion parameters", to be developed, with a demo version of its open source code first presented on GitHub in 2022. That year, Meta Platforms released a partial text-to-video model called "Make-A-Video", and Google Brain (later Google DeepMind) introduced Imagen Video, a text-to-video model with a 3D U-Net. In March 2023, a research paper titled "VideoFusion: Decomposed Diffusion Models for High-Quality Video Generation" was published, presenting a novel approach to video generation. The VideoFusion model decomposes the diffusion process into two components: base noise, which is shared across frames to ensure temporal coherence, and residual noise. By utilizing a pre-trained image diffusion model as a base generator, the model efficiently generated high-quality and coherent videos. Fine-tuning the pre-trained model on video data addressed the domain gap between image and video data, enhancing the model's ability to produce realistic and consistent video sequences. In the same month, Adobe introduced Firefly AI as part of its features. In January 2024, Google announced development of a text-to-video model named Lumiere, which is anticipated to integrate advanced video editing capabilities. Matthias Niessner and Lourdes Agapito at AI company Synthesia work on developing 3D neural rendering techniques that can synthesise realistic video by using 2D and 3D neural representations of shape, appearances, and motion for controllable video synthesis of avatars. In June 2024, Luma Labs launched its Dream Machine video tool. That same month, Kuaishou extended its Kling AI text-to-video model to international users. In July 2024, TikTok owner ByteDance released Jimeng AI in China, through its subsidiary, Faceu Technology. By September 2024, the Chinese AI company MiniMax debuted its video-01 model, joining other established AI model companies like Zhipu AI, Baichuan, and Moonshot AI, which contribute to China’s involvement in AI technology. Alternative approaches to text-to-video models include Google's Phenaki, Hour One, Colossyan, Runway's Gen-3 Alpha, and OpenAI's Sora. Several additional text-to-video models, such as Plug-and-Play, Text2LIVE, and TuneAVideo, have emerged. Google is also preparing to launch a video generation tool named Veo for YouTube Shorts in 2025. FLUX.1 developer Black Forest Labs has announced its text-to-video model SOTA. Architecture and training There are several architectures that have been used to create text-to-video models. Similar to text-to-image models, these models can be trained using recurrent neural networks (RNNs) such as long short-term memory (LSTM) networks, which have been used for pixel-transformation models and stochastic video generation models, which aid in consistency and realism respectively. An alternative to these is the transformer model. Generative adversarial networks (GANs), variational autoencoders (VAEs) — which can aid in the prediction of human motion — and diffusion models have also been used to develop the image generation aspects of the model. 
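The base-noise/residual-noise decomposition described for VideoFusion can be illustrated in a few lines: each frame's noise is a mixture of one noise tensor shared by all frames (enforcing temporal coherence) and an independent per-frame component. A minimal sketch in Python/NumPy; the mixing weight and tensor shapes are illustrative assumptions, not values from the paper:

import numpy as np

def decomposed_video_noise(num_frames, shape, base_weight=0.8, seed=0):
    """Sample per-frame noise as a mix of shared 'base' noise and per-frame 'residual' noise."""
    rng = np.random.default_rng(seed)
    base = rng.standard_normal(shape)                  # one tensor shared by every frame
    frames = []
    for _ in range(num_frames):
        residual = rng.standard_normal(shape)          # independent noise for this frame
        # Weights chosen so the mixture stays unit-variance.
        frames.append(np.sqrt(base_weight) * base + np.sqrt(1.0 - base_weight) * residual)
    return np.stack(frames)

noise = decomposed_video_noise(num_frames=16, shape=(4, 32, 32))
print(noise.shape)  # (16, 4, 32, 32): frames are correlated through the shared base noise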
Text-video datasets used to train models include, but are not limited to, WebVid-10M, HDVILA-100M, CCV, ActivityNet, and Panda-70M. These datasets contain millions of original videos of interest, generated videos, captioned videos, and textual information that help train models for accuracy. Text-prompt datasets used to train models include, but are not limited to, PromptSource, DiffusionDB, and VidProM. These datasets provide the range of text inputs needed to teach models how to interpret a variety of textual prompts. The video generation process involves synchronizing the text inputs with video frames, ensuring alignment and consistency throughout the sequence. This predictive process is subject to decline in quality as the length of the video increases due to resource limitations. Limitations Despite the rapid evolution of text-to-video models in their performance, a primary limitation is that they are very computationally heavy, which limits their capacity to provide high-quality and lengthy outputs. Additionally, these models require a large amount of specific training data to be able to generate high-quality and coherent outputs, which brings about the issue of accessibility. Moreover, models may misinterpret textual prompts, resulting in video outputs that deviate from the intended meaning. This can occur due to limitations in capturing semantic context embedded in text, which affects the model’s ability to align generated video with the user’s intended message. Various models, including Make-A-Video, Imagen Video, Phenaki, CogVideo, GODIVA, and NUWA, are currently being tested and refined to enhance their alignment capabilities and overall performance in text-to-video generation. Ethics The deployment of text-to-video models raises ethical considerations related to content generation. These models have the potential to create inappropriate or unauthorized content, including explicit material, graphic violence, misinformation, and likenesses of real individuals without consent. Ensuring that AI-generated content complies with established standards for safe and ethical usage is essential, as content generated by these models may not always be easily identified as harmful or misleading. The ability of AI to recognize and filter out NSFW or copyrighted content remains an ongoing challenge, with implications for both creators and audiences. Impacts and applications Text-to-video models offer a broad range of applications that may benefit various fields, from educational and promotional to creative industries. These models can streamline content creation for training videos, movie previews, gaming assets, and visualizations, making it easier to generate high-quality, dynamic content. These features provide users with economical and personal benefits. See also Text-to-image model VideoPoet, Google's unreleased model, precursor of Lumiere Deepfake Human image synthesis ChatGPT References External links Free text-to-video Artificial intelligence engineering Algorithms Language Natural language processing Video Computers
Text-to-video model
[ "Mathematics", "Technology", "Engineering" ]
1,363
[ "Algorithms", "Mathematical logic", "Applied mathematics", "Software engineering", "Natural language processing", "Artificial intelligence engineering", "Natural language and computing" ]
71,988,529
https://en.wikipedia.org/wiki/Directional%20freezing
Directional freezing is freezing that proceeds from only one direction. It can freeze water into clear ice by letting the water solidify from only one direction or side of a container. Directional freezing in a domestic freezer can be done by putting water in an insulated container so that the water freezes from the top down, and removing the ice before it is fully frozen, so that the minerals in the water are not frozen in. F. Hoffmann-La Roche AG / Roche Diagnostics GmbH holds a 2017 patent on directional freezing for drying solid material. See also Aquamelt Clear ice Hydrogel Freeze-casting§Static vs. dynamic freezing profiles Molecular self-assembly Further reading References Phase transitions Cryobiology Molecular physics Intermolecular forces Nanotechnology
Directional freezing
[ "Physics", "Chemistry", "Materials_science", "Engineering", "Biology" ]
139
[ " and optical physics stubs", "Physical phenomena", "Phase transitions", "Molecular physics", "Biochemistry", "Phases of matter", "Critical phenomena", "Cryobiology", "Intermolecular forces", "Materials science", " molecular", "nan", "Atomic", "Nanotechnology", "Statistical mechanics", ...
71,990,227
https://en.wikipedia.org/wiki/1%2C1%27-Ferrocenedicarboxylic%20acid
1,1'-Ferrocenedicarboxylic acid is the organoiron compound with the formula Fe(C5H4CO2H)2. It is the simplest dicarboxylic acid derivative of ferrocene. It is a yellow solid that is soluble in aqueous base. The 1,1' part of its name refers to the location of the carboxylic acid groups on separate rings. It can be prepared by hydrolysis of its diesters Fe(C5H4CO2R)2 (R = Me, Et), which in turn are obtained by treatment of ferrous chloride with the sodium salt of the corresponding carboxyester of cyclopentadienide, NaC5H4CO2R. Ferrocenedicarboxylic acid is the precursor to many derivatives such as the diacid chloride, the diisocyanate, the diamide, and the diamine, respectively Fe(C5H4COCl)2, Fe(C5H4NCO)2, Fe(C5H4CONH2)2, and Fe(C5H4NH2)2. Derivatives of ferrocenedicarboxylic acid are components of some redox switches and redox active coatings. Related compounds Ferrocenecarboxylic acid References Ferrocenes Cyclopentadienyl complexes Dicarboxylic acids Aromatic acids
1,1'-Ferrocenedicarboxylic acid
[ "Chemistry" ]
230
[ "Organometallic chemistry", "Cyclopentadienyl complexes" ]
71,992,445
https://en.wikipedia.org/wiki/Mermin%27s%20device
In physics, Mermin's device or Mermin's machine is a thought experiment intended to illustrate the non-classical features of nature without making a direct reference to quantum mechanics. The challenge is to reproduce the results of the thought experiment in terms of classical physics. The inputs of the experiment are particles, starting from a common origin, that reach detectors which are independent from each other; the outputs are the lights of the device, which turn on following a specific set of statistics depending on the configuration of the device. The results of the thought experiment are constructed in such a way as to reproduce the results of a Bell test using quantum entangled particles, which demonstrates how quantum mechanics cannot be explained using a local hidden variable theory. In this way Mermin's device is a pedagogical tool to introduce the unconventional features of quantum mechanics to a larger public. History The original version, with two particles and three settings per detector, was first devised in a paper called "Bringing home the atomic world: Quantum mysteries for anybody" authored by the physicist N. David Mermin in 1981. Richard Feynman told Mermin that it was "One of the most beautiful papers in physics". Mermin later described this accolade as "the finest reward of my entire career in physics". Ed Purcell shared Mermin's article with Willard Van Orman Quine, who then asked Mermin to write a version intended for philosophers, which he then produced. Mermin also published a second version of the thought experiment in 1990 based on the GHZ experiment, with three particles and detectors with only two configurations. In 1993, Lucien Hardy devised a paradox that can be made into a Mermin-device-type thought experiment with two detectors and two settings. Original two particle device Assumptions In Mermin's original thought experiment, he considers a device consisting of three parts: two detectors A and B, and a source C. The source emits two particles whenever a button is pushed; one particle reaches detector A and the other reaches detector B. The three parts A, B and C are isolated from each other (no connecting pipes, no wires, no antennas) in such a way that the detectors are not signaled when the button of the source has been pushed nor when the other detector has received a particle. Each detector (A and B) has a switch with three configurations labeled 1, 2 and 3, and a red and a green light bulb. Either the green or the red light will turn on (never both) when a particle enters the device after a given period of time. The light bulbs only emit light in the direction of the observer working on the device. Additional barriers or instruments can be put in place to check that there is no interference between the three parts (A, B, C), as the parts should remain as independent as possible: only a single particle is allowed to go from C to A and a single particle from C to B, and nothing else may pass between A and B (no vibrations, no electromagnetic radiation). The experiment runs in the following way. The button of the source C is pushed, the particles take some time to travel to the detectors, and the detectors flash a light with a color determined by the switch configuration. There are nine possible configurations of the switches in total (three for A, three for B). The switches can be changed at any moment during the experiment, even if the particles are still traveling to reach the detectors, but not after the detectors flash a light. 
The distance between the detectors can be changed so that the detectors flash a light at the same time or at different times. If detector A is set to flash a light first, the configuration of the switch of detector B can be changed after A has already flashed (similarly, if B is set to flash first, the settings of A can be changed before A flashes). Expected results The expected results of the experiment, expressed in percentages, are as follows. Every time the detectors are set to the same setting, the bulbs in each detector always flash the same colors (either A and B flash red, or A and B flash green) and never opposite colors (A red B green, or A green B red). Every time the detectors are at different settings, the detectors flash the same color a quarter of the time and opposite colors 3/4 of the time. The challenge consists in finding a device that can reproduce these statistics. Hidden variables and classical implementation In order to make sense of the data using classical mechanics, one can consider the existence of three variables per particle that are measured by the detectors and follow the percentages above. The particle that goes into detector A carries variables (a1, a2, a3) and the particle that goes into detector B carries variables (b1, b2, b3), where each variable takes the value R or G. These variables determine which color will flash for a specific setting (1, 2 and 3). For example, if the particle that goes in A has variables (R,G,G), then if detector A is set to 1 it will flash red (labelled R), and if set to 2 or 3 it will flash green (labelled G). Reproducing the results of table 1 when the same setting is selected for both detectors requires that both particles carry the same instruction set, so there are 8 possible states: (RRR), (RRG), (RGR), (RGG), (GRR), (GRG), (GGR) and (GGG). For any given configuration, if the detector settings were chosen randomly, when the settings of the devices are different (12, 13, 21, 23, 31, 32), the color of their lights would agree 100% of the time for the states (GGG) and (RRR), and for the other states the results would agree 1/3 of the time. Thus we reach an impossibility: there is no possible distribution of these states that would allow the system to flash the same colors 1/4 of the time when the settings are not the same. Thereby, it is not possible to reproduce the results provided in Table 1. Quantum mechanical implementation Contrary to the classical implementation, table 1 can be reproduced using quantum mechanics by means of quantum entanglement. Mermin reveals a possible construction of his device based on David Bohm's version of the Einstein–Podolsky–Rosen paradox. One can set the two spin-1/2 particles that leave the source in the maximally entangled singlet Bell state |ψ⟩ = (|↑↓⟩ − |↓↑⟩)/√2, where |↑↓⟩ (|↓↑⟩) is the state in which the projection of the spin of particle 1 is aligned (anti-aligned) with a given axis and particle 2 is anti-aligned (aligned) with the same axis. The measurement devices can be replaced with Stern–Gerlach devices, which measure the spin in a given direction. The three different settings determine whether the detectors are vertical or at ±120° to the vertical in the plane perpendicular to the line of flight of the particles. Detector A flashes green when the spin of the measured particle is aligned with the detector's magnetic field and flashes red when anti-aligned. Detector B has the opposite color scheme with respect to A: detector B flashes red when the spin of the measured particle is aligned and flashes green when anti-aligned. Another possibility is to use photons that have two possible polarizations, using polarizers as detectors, as in Aspect's experiment. 
Quantum mechanics predicts a probability of measuring opposite spin projections given by P = cos²(θ/2), where θ is the relative angle between the settings of the detectors. For θ = 0° and θ = ±120°, the system reproduces the results of table 1 while keeping all the assumptions. Three particle device Mermin's improved three particle device demonstrates the same concepts deterministically: no statistical analysis of multiple experiments is necessary. It has three detectors, each with two settings, 1 and 2, and two lights, one red and one green. Each run of the experiment consists of setting the switches to values 1 or 2 and observing the color of the lights that flash when particles enter the detectors. The detectors again are assumed independent of one another, and cannot interact. For the improved device, the expected results are the following: if one detector is switched to setting 1 while the others are on setting 2, an odd number of red lights flash. If all three detectors are set to 1, an odd number of red light flashes never occurs. Mermin then imagines that each of the three particles emitted from the common source and entering the detectors has a hidden instruction set, dictating which light to flash for each switch setting. If only one device of the three has a switch set to 1, there will always be an odd number of red flashes. However, Mermin shows that every instruction set reproducing that first result also predicts an odd number of red lights when all three devices are set to 1. No instruction set built in to the particles can explain the expected results. This contradiction implies that local hidden variable theory cannot explain such a device. Quantum mechanical implementation The improved device can be built using quantum mechanics. This implementation is based on the Greenberger–Horne–Zeilinger (GHZ) experiment. The device can be constructed if the three particles are quantum entangled in a GHZ state, written as |GHZ⟩ = (|000⟩ + |111⟩)/√2, where |0⟩ and |1⟩ represent two states of a two-level quantum system. For electrons, the two states can be the up and down projections of the spin along the z-axis. The detector settings correspond to two other orthogonal measurement directions (for example, projections along the x-axis or along the y-axis). See also Quantum pseudo-telepathy References Additional references Physical paradoxes Quantum measurement Thought experiments in quantum mechanics
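The contradiction in the two-particle version can be checked by brute force: enumerate all eight instruction sets, compute how often the two detectors agree when their settings differ, and compare with the quantum prediction cos²(θ/2) at the 120° relative angle. A minimal sketch in Python; the enumeration and angles follow the description above, and the variable names are illustrative:

from itertools import product
from math import cos, radians

# All 8 instruction sets: a color (R or G) for each of the three settings.
instruction_sets = list(product("RG", repeat=3))

# With a shared instruction set, both detectors flash the color listed for their setting,
# so "agreement" for settings (i, j) means the instruction set has the same color at i and j.
different_pairs = [(i, j) for i in range(3) for j in range(3) if i != j]

for s in instruction_sets:
    agree = sum(s[i] == s[j] for i, j in different_pairs) / len(different_pairs)
    print(f"{''.join(s)}: agreement on different settings = {agree:.2f}")

# Every instruction set agrees at least 1/3 of the time on different settings,
# while quantum mechanics predicts cos^2(60 deg) = 1/4 for the 120-degree case.
print("quantum prediction:", round(cos(radians(60)) ** 2, 2))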
Mermin's device
[ "Physics" ]
1,883
[ "Quantum measurement", "Quantum mechanics", "Thought experiments in quantum mechanics" ]
71,994,871
https://en.wikipedia.org/wiki/Angus%20J.%20Wilkinson
Angus J. Wilkinson is a professor of materials science based at the Department of Materials, University of Oxford. He is a specialist in micromechanics, electron microscopy and crystal plasticity. He assists in overseeing the MicroMechanics group while focusing on the fundamentals of material deformation. He developed the HR-EBSD method for mapping stress and dislocation density at high spatial resolution used at the micron scale in mechanical testing and micro-cantilevers to extract data on mechanical properties that are relevant to materials engineering. Selected publications Wilkinson AJ, Meaden G, Dingley DJ. High-resolution elastic strain measurement from electron backscatter diffraction patterns: New levels of sensitivity. Ultramicroscopy 2006;106:307–13. https://doi.org/10.1016/j.ultramic.2005.10.001. Wilkinson AJ, Hirsch PB. Electron diffraction based techniques in scanning electron microscopy of bulk materials. Micron 1997;28:279–308. https://doi.org/10.1016/S0968-4328(97)00032-2. Wilkinson AJ, Britton T. Strains, planes, and EBSD in materials science. Materials Today 2012;15:366–76. https://doi.org/10.1016/S1369-7021(12)70163-3. Wilkinson AJ, Randman D. Determination of elastic strain fields and geometrically necessary dislocation distributions near nanoindents using electron back scatter diffraction. Philosophical Magazine 2010;90:1159–77. https://doi.org/10.1080/14786430903304145. Guo Y, Britton TB, Wilkinson AJ. Slip band–grain boundary interactions in commercial-purity titanium. Acta Mater 2014;76:1–12. https://doi.org/10.1016/J.ACTAMAT.2014.05.015. Britton TB, Wilkinson AJ. High resolution electron backscatter diffraction measurements of elastic strain variations in the presence of larger lattice rotations. Ultramicroscopy 2012;114:82–95. https://doi.org/10.1016/J.ULTRAMIC.2012.01.004. Zhai T, Wilkinson AJ, Martin JW. A crystallographic mechanism for fatigue crack propagation through grain boundaries. Acta Mater 2000;48:4917–27. https://doi.org/10.1016/S1359-6454(00)00214-7. See also Department of Materials, University of Oxford Electron backscatter diffraction Ultramicroscopy References External links Angus Wilkinson at Department of Materials, University of Oxford Angus Wilkinson at Research.com. Living people Microscopists Academics of the University of Oxford British materials scientists Year of birth missing (living people) Metallurgists
Angus J. Wilkinson
[ "Chemistry", "Materials_science" ]
619
[ "Metallurgists", "Metallurgy", "Microscopists", "Microscopy" ]
71,995,177
https://en.wikipedia.org/wiki/Manganese%20cycle
The manganese cycle is the biogeochemical cycle of manganese through the atmosphere, hydrosphere, biosphere and lithosphere. There are bacteria that oxidise manganese to insoluble oxides, and others that reduce it to Mn2+ in order to use it. Manganese is a heavy metal that comprises about 0.1% of the Earth's crust and a necessary element for biological processes. It is cycled through the Earth in similar ways to iron, but with distinct redox pathways. Human activities have impacted the fluxes of manganese among the different spheres of the Earth. Global manganese cycle Manganese is a necessary element for biological functions such as photosynthesis, and some manganese oxidizing bacteria utilize this element in anoxic environments. Movement of manganese (Mn) among the global "spheres" (described below) is mediated by both physical and biological processes. Manganese in the lithosphere enters the hydrosphere from erosion and dissolution of bedrock in rivers, in solution it then makes its way into the ocean. Once in the ocean, Mn can form minerals and sink to the ocean floor where the solid phase is buried. The global manganese cycle is being altered by anthropogenic influences, such as mining and mineral processing for industrial use, as well as through the burning of fossil fuels. Lithosphere Manganese is the tenth most abundant metal in the Earth's crust, making up approximately 0.1% of the total composition, or about 0.019 mol kg−1, which is found mostly in the oceanic crust. Crust Manganese (Mn) commonly precipitates in igneous rocks in the form of early-stage crystalline minerals, which, once exposed to water and/or oxygen, are highly soluble and easily oxidized to form Mn oxides on the surfaces of rocks. Dendritic crystals rich in Mn form when microbes reprecipitate the Mn from the rocks on which they develop onto the surface after utilizing the Mn for their metabolism. For certain cyanobacteria found on desert varnish samples, for example, it has been found that manganese is used as a catalytic antioxidant to facilitate survival in the harsh sunlight and water conditions they face on desert rock surfaces. Soil Manganese is an important soil micronutrient for plant growth, playing an essential role as a catalyst in the oxygen-evolving complex of photosystem II, a photosynthetic pathway. Soil fungi in particular have been found to oxidize the reduced, soluble form of manganese (Mn2+) under anaerobic conditions, and may reprecipitate it as manganese oxides (Mn+3 to Mn+7) under aerobic conditions, where the preferred metabolic pathway typically involves the utilization of oxygen. Although not all iron-reducing bacteria have the capability of reducing manganese, there is overlap in the taxa that can perform both metabolisms; these organisms are very common in a range of environmental conditions. Challenges however persist in isolating these microbes in cultures. Depending on the pH, organic substrate availability, and oxygen concentration, Mn can either behave as an oxidation catalyst or an electron receptor. Though much of the total Mn that is cycled in soil is biologically-mediated, some inorganic reactions also contribute to Mn oxidation or precipitation of Mn oxides. The reduction potential (pe) and pH are two known constraints on the solubility of Mn in soils. As pH increases, Mn speciation becomes less sensitive to variations in pe. 
In acidic (pH = 5) soils with high reduction potentials (pe > 8), the forms of Mn are mostly reducible, with exchangeable and soluble Mn decreasing dramatically in concentration with increases in pe. Mn is also found in inorganic chelation complexes, where Mn forms coordinate bonds with SO42-, HCO3−, and Cl− ions. These complexes are important for organic matter stabilization in soils, as they have high surface areas and interact with organic matter through adsorption. Hydrosphere Iron (Fe) and manganese (Mn) have similarities in their respective cycles and are often studied together. Both have similar sources in the hydrosphere, namely hydrothermal vent fluxes, dust inputs, and weathering of rocks. The major removal of Mn from the ocean involves processes similar to those for Fe as well, with most removal from the hydrosphere occurring via biological uptake, oxidative precipitation, and scavenging. Microorganisms oxidize the bioavailable Mn(II) to form Mn(IV), an insoluble manganese oxide that aggregates to form particulate matter that can then sink to the ocean floor. Manganese is important in aquatic ecosystems for photosynthesis and other biological functions. Freshwater and estuary Advection from tidal flows re-suspends estuary beds and can unearth manganese. The particulate manganese is dissolved via reduction that forms Mn(II), adding it to the internal cycle of manganese in organisms in the ecosystem. Estuary biogeochemistry is heavily influenced by tidal oscillations, temperature, and pH changes, and thus the manganese input into the internal cycling is variable. Mn in rivers and streams typically has a lower residence time than in estuaries, and a large majority of the Mn is soluble Mn(II). In these freshwater ecosystems, manganese cycling is dependent on sediment fluxes that provide an influx of Mn into the system. Oxidation of Mn(II) from sediment drives the redox reactions that fuel the biogeochemical processes involving Mn, as well as Mn-reducing microbes. Marine In the ocean, different patterns of manganese cycling are seen. In the photic zone, there is a decrease in Mn particulate formation during the daytime, as rates of microbially catalyzed oxidation decrease and photo-dissolution of Mn oxides increases. The GEOTRACES program has led to the production of the first global manganese model, with which predictions of global manganese distribution can be made. This global model found strong removal rates of Mn as water moves from the Atlantic Ocean surface to the North Atlantic deep water, resulting in Mn depletion in water moving southward along the thermohaline conveyor. Overall, when looking at organism interactions with manganese, it is known that redox reactions play a key role and that Mn has important biological functions; however, far less is known about uptake and remineralization processes than is the case for iron. Early Earth Terrestrial manganese has existed since the formation of Earth around 4.6 Ga. The Sun and the Solar System formed during the collapse of a molecular cloud populated with many trace metals, including manganese. The chemical composition of the molecular cloud determined the composition of the many celestial bodies that form within it. Nearby supernova explosions populated the cloud with manganese; the most common manganese-forming supernovae are Type Ia supernovae.   The early Earth contained very little free oxygen (O2) until the Great Oxygenation Event around 2.35 Ga. Without O2, redox cycling of Mn was limited. 
Instead, soluble Mn(II) was only released into the oceans via silicate weathering on igneous rocks and supplied through hydrothermal vents. The increase in Mn oxidation occurred during the Archean Eon (> 2.5 Ga), whereas the first evidence of manganese redox cycling appears ~ 2.4 Ga, before the Great Oxygenation Event and during the Paleoproterozoic Era. Although the Great Oxygenation Event raised the abundance of oxygen on Earth, the oxygen levels were still relatively low compared to modern levels. It is believed that many primary producers were anoxygenic phototrophs and took advantage of abundant hydrogen sulfide (H2S) to catalyze photosynthesis. Anoxygenic phototrophy and oxygenic photosynthesis both require electron donors, with all known forms of anoxygenic phototrophy relying on reaction center electron acceptors with reduction potentials around 250-500 mV. Oxygenic photosynthesis requires reduction potentials around 1250 mV. It has been hypothesized that this wide difference in reduction potential indicates an evolutionary missing link in the origin of oxygenic photosynthesis. Mn(II) is the leading candidate for bridging this gap. The water-oxidizing complex, a key component of PSII, begins with the oxidation of Mn(II), which, along with additional evidence, strongly supports the hypothesis that manganese was a necessary step in the evolution of oxygenic photosynthesis. Anthropogenic influences While manganese naturally occurs in the environment, the global Mn cycle is influenced through anthropogenic activities. Mn is utilized in many commercial products, such as fireworks, leather, paint, glass, fertilizer, animal feed, and dry cell batteries. However, the effect of Mn pollution from these sources is minor compared to that of mining and mineral processing. The burning of fossil fuels, such as coal and natural gas, further contribute to the anthropogenic cycling of Mn. Mining and mineral processing Anthropogenic influences on the manganese cycle mainly stem from industrial mining and mineral processing, specifically, within the iron and steel industries. Mn is used in iron and steel production to improve hardness, strength, and stiffness, and is the primary component used in low-cost stainless steel and aluminum alloy production. Anthropogenic mining and mineral processing has spread Mn through three methods: wastewater discharge, industrial emissions, and releases in soils. Wastewater discharge Waste from mining and mineral processing facilities is typically separated into liquid and solid forms. Due to insufficient management and poor mining processes, especially in developing countries, liquid waste containing Mn can be discharged into bodies of water through anthropogenic effluents. Domestic wastewater and sewage sludge disposal are the main anthropogenic sources of Mn within aquatic ecosystems. In marine systems, the disposal of mine tailings contributes to aquatic anthropogenic Mn concentrations where high levels can be toxic to marine life. Industrial emissions The main anthropogenic influence of Mn input to the atmosphere is through industrial emissions, and roughly 80% of industrial emissions of Mn is due to steel and iron processing facilities. In the Northern Hemisphere, some of the Mn pollutants released through industrial emissions are transferred to Arctic regions through atmospheric circulation, where particulates settle and accumulate in natural bodies of water. 
Such atmospheric pollution of Mn can be hazardous for humans working or living near industrial facilities. Dust and smoke containing manganese dioxide and manganese tetroxide released into the air during mining is a primary cause of manganism in humans. Releases in soils The solid waste disposal of substances containing Mn by industrial sources typically ends up in landfills. Additional Mn deposition in soils can result from particulate settling of Mn released through industrial emissions. An analysis of datasets on the soil chemistry of North America and Europe revealed greater than 50% of Mn in ridge soils near iron or steel processing facilities was attributed to anthropogenic industrial inputs, whether through solid waste disposal or previously airborne particulates depositing in soils. Burning of fossil fuels Anthropogenically sourced Mn from the burning of fossil fuels has been found in the atmosphere, hydrosphere, and lithosphere. Mn is a trace element in fly ash, a residue from the use of coal for power production, which often ends up in the atmosphere, soils, and bodies of water. Methylcyclopentadienyl Mn tricarbonyl (MMT), a gasoline additive containing Mn, also contributes to Mn anthropogenic cycling. Due to the use of MMT as a fuel additive, motor vehicles are a significant source of Mn in the atmosphere, especially in regions of high traffic activity. In some regions, roughly 40% of Mn in the atmosphere was due to exhaust from traffic. Particulate manganese phosphate, manganese sulfate, and manganese oxide are the primary emissions from MMT combustion through its usage in gasoline. A portion of these particulates eventually leave the atmosphere to settle in soils and bodies of water. References Wikipedia Student Program Manganese Biogeochemical cycle
Manganese cycle
[ "Chemistry" ]
2,495
[ "Biogeochemical cycle", "Biogeochemistry" ]
71,995,920
https://en.wikipedia.org/wiki/Vertically%20Generalized%20Production%20Model
The Vertically Generalized Production Model (VGPM) is a model commonly used to estimate primary production within the ocean. The VGPM was designed by Behrenfeld and Falkowski and was originally published in a 1997 article in Limnology and Oceanography. It is one of the most frequently used models for estimating primary production because it can be applied to chlorophyll a data from satellites and because of its relatively simple design. Chlorophyll a is a common measure of primary production, as it is a main component of photosynthesis. Primary production is often measured using three variables: the biomass (or amount in weight) of the phytoplankton, the availability of light, and the rate of carbon fixation. The VGPM is now one of the most popular models to use with satellite chlorophyll data because it is surface-light dependent and because it uses an estimated maximum rate of primary production per unit of chlorophyll within the water column, known as PBopt. It also considers environmental factors that often influence primary production and relies on variables that can be collected by remote satellites, so that primary production can be derived without having to physically sample the water. PBopt was found to be dependent on surface chlorophyll, and data for this can be collected using satellites. Satellites can only collect the parameters used to estimate primary production; they cannot calculate it themselves, which is why a model is needed. Because it is a generalized model, it is intended to represent the open ocean most accurately. Localized areas, especially coastal regions, may need to incorporate additional factors to get the most accurate representation of primary production. The values produced using the VGPM are estimates, and there will be some level of uncertainty in using this model. References Ecology Oceanography
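The model itself is a single algebraic expression. A minimal sketch of the commonly cited depth-integrated VGPM form follows; the functional form and the 0.66125 and 4.1 constants are an assumption here and should be checked against Behrenfeld and Falkowski (1997), and the input values are purely illustrative:

def vgpm_npp(chl, pb_opt, par, z_eu, day_length_h):
    """Net primary production (mg C m^-2 day^-1) from the commonly cited VGPM form."""
    light_limitation = 0.66125 * par / (par + 4.1)   # dimensionless light-dependence term
    return chl * pb_opt * light_limitation * z_eu * day_length_h

# Illustrative (made-up) inputs: surface chlorophyll in mg m^-3, PBopt in mg C (mg Chl)^-1 h^-1,
# PAR in mol quanta m^-2 day^-1, euphotic depth in m, day length in hours.
print(vgpm_npp(chl=0.5, pb_opt=4.0, par=40.0, z_eu=60.0, day_length_h=12.0))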
Vertically Generalized Production Model
[ "Physics", "Biology", "Environmental_science" ]
379
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics", "Ecology" ]
72,003,352
https://en.wikipedia.org/wiki/Fuzzy%20differential%20inclusion
Fuzzy differential inclusion is the extension of differential inclusion to fuzzy sets, which were introduced by Lotfi A. Zadeh. A fuzzy differential inclusion has the form x'(t) ∈ F(t, x(t)) with x(0) ∈ X0, where F is a fuzzy-valued continuous function on Euclidean space taking values in the collection of all normal, upper semi-continuous, convex, compactly supported fuzzy subsets of Rn. Second order differential A second-order fuzzy differential inclusion has an analogous form; in the example originally given, the coefficient is a trapezoidal fuzzy number and the right-hand side is a triangular fuzzy number (−1, 0, 1). Applications Fuzzy differential inclusion (FDI) has applications in Cybernetics Artificial intelligence, Neural network, Medical imaging Robotics Atmospheric dispersion modeling Weather forecasting Cyclone Pattern recognition Population biology References Dynamical systems Variational analysis Fuzzy logic
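One common way to interpret such an inclusion (an assumption here, following the usual level-set approach rather than any particular source) is cut-wise: for each membership level α, the derivative is required to lie in the α-cut of the fuzzy right-hand side.

\dot{x}(t) \in \bigl[\,F\bigl(t, x(t)\bigr)\,\bigr]_{\alpha}, \qquad x(0) \in \bigl[\,X_{0}\,\bigr]_{\alpha}, \qquad \alpha \in [0, 1]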
Fuzzy differential inclusion
[ "Physics", "Mathematics" ]
140
[ "Mechanics", "Dynamical systems" ]
74,777,997
https://en.wikipedia.org/wiki/Difluorodioxirane
Difluorodioxirane (CF2O2) is a rare, stable member of the dioxirane family, known for its single oxygen-oxygen bond (O-O). Unlike most dioxiranes, which decompose quickly, difluorodioxirane is surprisingly stable at room temperature, making it potentially useful for further research and applications. Synthesis Difluorodioxirane was first synthesised by Russo and DesMarteau in 1993 by treating fluorocarbonyl hypofluorite (FCOOF) with X2 (= F2, Cl2 or ClF) over pelletized CsF in a flow system. It also likely exists as a possible intermediate in reactions involving other fluorine-containing compounds. Properties Unlike most dioxiranes that decompose quickly, difluorodioxirane is surprisingly stable at room temperature due to the stabilising interaction of the two fluorine atoms with the ring. This effect makes the O-O bond less reactive and more stable compared to other dioxiranes. The central F–C–F angle is 109°, approximately a tetrahedral angle. Difluorodioxirane is known for its ability to perform regiospecific and stereoselective oxidations. This makes it a valuable tool in organic synthesis for precise manipulation of molecules. Despite its increased stability, difluorodioxirane can still act as an oxidizing agent, transferring oxygen to other molecules. It often leads to cleaner and more predictable reaction outcomes due to its controlled reactivity. Uses Difluorodioxirane itself has not yet found widespread applications due to its recent discovery. However, its unique stability and reactivity similar to other dioxiranes suggest potential uses in several areas: Organic synthesis: Due to its oxidizing properties, difluorodioxirane could be a valuable reagent in organic reactions, particularly for controlled oxidation processes. Researchers are exploring its potential applications in epoxidation (adding oxygen atoms to create epoxide rings), hydroxylation (adding hydroxyl groups -OH), and other oxidation reactions. Development of new catalysts: The stability and reactivity profile of difluorodioxirane make it a promising candidate for the development of new and more efficient catalysts for various organic transformations. See also Dimesityldioxirane Dimethyldioxirane Shi epoxidation References Dioxiranes Organic peroxides Oxidizing agents Heterocyclic compounds with 1 ring Oxygen heterocycles
Difluorodioxirane
[ "Chemistry" ]
535
[ "Organic compounds", "Redox", "Oxidizing agents", "Organic peroxides" ]
74,784,001
https://en.wikipedia.org/wiki/Bisbenzylisoquinoline%20alkaloids
Bisbenzylisoquinoline alkaloids are natural products found primarily in the barberry family, the Menispermaceae, the Monimiaceae, and the buttercup family. Occurrence More than 225 different bisbenzylisoquinoline alkaloids are known and have been isolated. Representatives The bisbenzylisoquinoline alkaloids are considered the largest group within the isoquinoline alkaloids. The best-known representative of this group is tubocurarine chloride. Other representatives include dauricine, oxyacanthine, tetrandrine, and tiliacorine. Structure Bisbenzylisoquinoline alkaloids are characterized by their structure. Typically, they consist of two benzyltetrahydroisoquinoline units linked by ether groups, and occasionally by C-C bonds. Multiple ether bridges are often present. These alkaloids can be categorized into three groups, using the nomenclature head for the 1,2,3,4-tetrahydroisoquinoline unit and tail for the 1-benzyl residue: head-head-, tail-tail-, and head-tail-linked bisbenzylisoquinoline alkaloids. Dauricine is the simplest representative, with a tail-tail linkage. Oxyacanthine and tetrandrine contain both head-head and tail-tail linkages, while tiliacorine features one tail-tail linkage and two head-head linkages linking to a dibenzodioxin moiety. Tubocurarine chloride is characterized by two head-tail linkages of the tetrahydroisoquinoline units. Uses The alkaloids belonging to the bisbenzylisoquinoline group are generally toxic and exhibit curarizing effects. Oxyacanthine serves as a sympatholytic agent, an antagonist to epinephrine, and a vasodilator. Tubocurarine, a potent curarizing poison, is the oldest known muscle relaxant. South American indigenous populations have traditionally employed it as an arrow poison (see curare). Tetrandrine, found as an ingredient in the Chinese medicine "han-fang-shi," possesses analgesic and antipyretic properties. References Alkaloids
Bisbenzylisoquinoline alkaloids
[ "Chemistry" ]
511
[ "Organic compounds", "Biomolecules by chemical classification", "Natural products", "Alkaloids" ]
74,786,662
https://en.wikipedia.org/wiki/Liquidus%20and%20solidus
While chemically pure materials have a single melting point, chemical mixtures often partially melt at the temperature known as the solidus (TS or Tsol), and fully melt at the higher liquidus temperature (TL or Tliq). The solidus is always less than or equal to the liquidus, but they need not coincide. If a gap exists between the solidus and liquidus it is called the freezing range, and within that gap, the substance consists of a mixture of solid and liquid phases (like a slurry). Such is the case, for example, with the olivine (forsterite-fayalite) system, which is common in Earth's mantle. Definitions In chemistry, materials science, and physics, the liquidus temperature specifies the temperature above which a material is completely liquid, and the maximum temperature at which crystals can co-exist with the melt in thermodynamic equilibrium. The solidus is the locus of temperatures (a curve on a phase diagram) below which a given substance is completely solid (crystallized). The solidus temperature specifies the temperature below which a material is completely solid, and the minimum temperature at which a melt can co-exist with crystals in thermodynamic equilibrium. Liquidus and solidus are mostly used for impure substances (mixtures) such as glasses, metal alloys, ceramics, rocks, and minerals. Lines of liquidus and solidus appear in the phase diagrams of binary solid solutions, as well as in eutectic systems away from the invariant point. When distinction is irrelevant For pure elements or compounds, e.g. pure copper, pure water, etc. the liquidus and solidus are at the same temperature, and the term melting point may be used. There are also some mixtures which melt at a particular temperature, known as congruent melting. One example is eutectic mixture. In a eutectic system, there is particular mixing ratio where the solidus and liquidus temperatures coincide at a point known as the invariant point. At the invariant point, the mixture undergoes a eutectic reaction where both solids melt at the same temperature. Modeling and measurement There are several models used to predict liquidus and solidus curves for various systems. Detailed measurements of solidus and liquidus can be made using techniques such as differential scanning calorimetry and differential thermal analysis. Effects For impure substances, e.g. alloys, honey, soft drink, ice cream, etc. the melting point broadens into a melting interval. If the temperature is within the melting interval, one may see "slurries" at equilibrium, i.e. the slurry will neither fully solidify nor melt. This is why new snow of high purity on mountain peaks either melts or stays solid, while dirty snow on the ground in cities tends to become slushy at certain temperatures. Weld melt pools containing high levels of sulfur, either from melted impurities of the base metal or from the welding electrode, typically have very broad melting intervals, which leads to increased risk of hot cracking. Behavior when cooling Above the liquidus temperature, the material is homogeneous and liquid at equilibrium. As the system is cooled below the liquidus temperature, more and more crystals will form in the melt if one waits a sufficiently long time, depending on the material. Alternately, homogeneous glasses can be obtained through sufficiently fast cooling, i.e., through kinetic inhibition of the crystallization process. 
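Within the freezing range, the relative amounts of solid and liquid at equilibrium can be estimated with the lever rule, given the overall composition and the solidus and liquidus compositions at the temperature of interest. A minimal sketch in Python; the compositions used are illustrative, not data for any particular system:

def lever_rule(c_overall, c_solid, c_liquid):
    """Return (solid fraction, liquid fraction) for a binary mixture between solidus and liquidus."""
    if not min(c_solid, c_liquid) <= c_overall <= max(c_solid, c_liquid):
        raise ValueError("overall composition must lie between the solidus and liquidus compositions")
    f_liquid = (c_overall - c_solid) / (c_liquid - c_solid)
    return 1.0 - f_liquid, f_liquid

# Example: overall composition 30 wt% B, coexisting solid at 20 wt% B and liquid at 45 wt% B.
f_s, f_l = lever_rule(0.30, 0.20, 0.45)
print(f"solid fraction: {f_s:.2f}, liquid fraction: {f_l:.2f}")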
The crystal phase that crystallizes first on cooling a substance to its liquidus temperature is termed primary crystalline phase or primary phase. The composition range within which the primary phase remains constant is known as primary crystalline phase field. The liquidus temperature is important in the glass industry because crystallization can cause severe problems during the glass melting and forming processes, and it also may lead to product failure. See also Melting/freezing point Phase diagram Solvus References Glass chemistry Glass engineering and science Glass physics Materials science Metallurgy Phase transitions Threshold temperatures Physical chemistry
Liquidus and solidus
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
818
[ "Glass engineering and science", "Physical phenomena", "Phase transitions", "Applied and interdisciplinary physics", "Glass chemistry", "Metallurgy", "Phases of matter", "Threshold temperatures", "Critical phenomena", "Materials science", "Glass physics", "Condensed matter physics", "nan", ...
76,358,134
https://en.wikipedia.org/wiki/Valery%20I.%20Levitas
Valery I Levitas is a Ukrainian mechanics and material scientist, academic and author. He is an Anson Marston Distinguished Professor and Murray Harpole Chair in Engineering at Iowa State University and was a faculty scientist at the Ames National Laboratory. Levitas is most known for his works on the mechanics of materials, stress and strain-induced phase transformations and chemical reactions. Among his authored works are his publications in academic journals, including Science, Nature Communications, Nano Letters as well as monographs such as Large Deformation of Materials with Complex Rheological Properties at Normal and High Pressure. He is the recipient of the 2018 Khan International Award for outstanding contributions to the field of plasticity. Education Levitas earned his M.S. in Mechanical Engineering from Kiev Polytechnic Institute in 1978, followed by a PHD in Materials Science and Engineering from the Institute for Superhard Materials in 1981. In 1988, he completed a Doctor of Science degree in Continuum Mechanics from the Institute of Electronic Machine Building. Furthermore, in 1995, he obtained his Doctor-Engineer habilitation in Continuum Mechanics from the University of Hannover. Career Levitas commenced his academic journey in 1978 at the Institute for Superhard Materials of the Ukrainian Academy of Sciences in Kiev. From 1978 to 1981, he served as an engineer and then as a junior researcher from 1981 to 1984. During his tenure at the institute, he led a research group consisting of researchers and students from 1982 to 1994. Simultaneously, he held the positions of senior researcher from 1984 to 1988 and leading researcher from 1989 to 1994. Additionally, he founded the private research firm, Strength, in 1988. Since 1993 he was a Humboldt Research Fellow at the Institute of Structural and Computational Mechanics at the University of Hannover, serving until 1995. From 1995 to 1999, he continued at the same institution as a research and visiting professor. In 1999, he transitioned to Texas Tech University, where he was an associate professor in the Department of Mechanical Engineering until 2002, and then a professor until 2008. He was also the Founding Director of the Center for Mechanochemistry and Synthesis of New Materials from 2002 till 2007. From 2008 to 2017, he served as the Schafer 2050 Challenge Professor in both the Department of Aerospace Engineering and the Department of Mechanical Engineering at Iowa State University. Between 2017 and 2023, he was the Vance Coffman Faculty Chair Professor in Aerospace Engineering, and since 2023 the Murray Harpole Chair in Engineering. Moreover, he has been the Anson Marston Distinguished Professor in Engineering since 2018, all at the same Departments. In addition, he has served as a faculty scientist at the Ames National Laboratory within the US Department of Energy from 2008 to 2023. Since 2002 he has also run the research and consulting firm Material Modeling. Research Levitas' research has focused on the interplay between plasticity and phase transformations across various scales through the creation of various methodologies. He pioneered the field of theoretical high-pressure mechanochemistry through the development of a comprehensive four-scale theory and simulations spanning from the first principle and molecular dynamics to nano- and microscale phase-field approaches and macroscale treatment. 
His work includes coupling theoretical frameworks with quantitative in-situ experiments using synchrotron radiation facilities to investigate phase transformations and plastic flow in various materials under high pressure and large deformations. These efforts resulted in the identification of novel phenomena and phases, methods for controlling phase transformations, and the search for new high-pressure materials. Additionally, his research has contributed to the determination of material properties such as transformational, structural, deformational, and frictional characteristics from high-throughput heterogeneous sample fields. His research team discovered and harnessed the phenomenon of "rotational plastic instability" to lower the pressure required for producing superhard cubic BN, reducing it from 55 to 5.6 GPa. In addition, their theoretical insights enabled a reduction in the transformation pressure from graphite to diamond, dropping it from 70 to 0.7 GPa through shear-induced plasticity. Moreover, his team unveiled a new amorphous phase of SiC; the self-blow-up phase transformation-induced plasticity-heating process explaining deep-focus earthquakes; the pressure self-focusing effect; and virtual melting at temperatures up to 5000 K below the melting point as a novel mechanism of solid-phase transformation, stress relaxation, and plastic flow. Furthermore, his group introduced a mechanochemical melt dispersion mechanism to explain unusual phenomena in the combustion of Al particles at the nano- and microscales, proposing significant advances in particle synthesis, including the creation of prestressed particles, to enhance their energetic performance. He also advanced the phase-field approach to various phase transformations, dislocation evolution, fracture, surface-induced phenomena, and their interactions by introducing more rigorous mechanics, large-strain formulations, stricter formulation requirements, and extensions to larger sample scales. Patents Levitas holds patents on 11 inventions. They are mostly related to the development of high-pressure apparatuses for diamond synthesis and physical studies. They include a rotational diamond anvil cell. Awards and honors 1995 – Distinguished Paper Award, International Journal of Engineering Sciences 1998 – Richard von Mises Award, GAMM 2007 – ASME Fellow, American Society of Mechanical Engineers 2010 – Lifetime Achievement Award, International Biographical Centre 2011 – Honorary Doctor in Materials, Institute for Superhard Materials 2018 – Khan International Award 2023 – Member, EU Academy of Sciences 2023 – Member, European Academy of Sciences and Arts 2023 – IAAM Fellow, International Association of Advanced Materials Bibliography Books Large Deformation of Materials with Complex Rheological Properties at Normal and High Pressure (1996) ISBN 1560720859 Selected articles Levitas, V. I. (1998). Thermomechanical theory of martensitic phase transformations in inelastic materials. International Journal of Solids and Structures, 35(9–10), 889–940. Mielke, A., Theil, F., & Levitas, V. I. (2002). A Variational Formulation of Rate-Independent Phase Transformations Using an Extremum Principle. Archive for Rational Mechanics and Analysis, 162, 137–177. Levitas, V. I., & Preston, D. L. (2002). Three-dimensional Landau theory for multivariant stress-induced martensitic phase transformations. I. Austenite ↔ martensite. Physical Review B, 66(13), 134206. Levitas, V. I., Asay, B. W., Son, S. F., & Pantoya, M. (2006).
Melt dispersion mechanism for fast reaction of nanothermites. Applied Physics Letters, 89(7), 071909. Hsieh S., Bhattacharyya P., Zu C., Mittiga T., Smart T. J., Machado F., Kobrin B., Höhn T. O., Rui N. Z., Kamrani M., Chatterjee S., Choi S., Zaletel M., Struzhkin V. V., Moore J. E., Levitas V. I., Jeanloz R., Yao N. Y. (2019). Imaging stress and magnetism at high pressures using a nanoscale quantum sensor. Science, 366, 1349–1354. Levitas V. I. and Samani K. (2011). Size and mechanics effects in surface-induced melting of nanoparticles. Nature Communications, 2, 284. References Materials scientists and engineers 21st-century American academics Kyiv Polytechnic Institute alumni University of Hanover alumni Iowa State University faculty 21st-century Ukrainian scientists Living people Year of birth missing (living people)
Valery I. Levitas
[ "Materials_science", "Engineering" ]
1,574
[ "Materials scientists and engineers", "Materials science" ]
76,367,043
https://en.wikipedia.org/wiki/Viral%20epitranscriptome
The viral epitranscriptome includes all modifications to viral transcripts, studied by viral epitranscriptomics. Like the more general epitranscriptome, these modifications do not affect the sequence of the transcript, but rather have consequences for its subsequent structures and functions. History The discovery of mRNA modifications dates back to 1957 with the discovery of the pseudouridine modification. Many of these modifications were found in the noncoding regions of cellular RNA. Once these modifications were discovered in mRNA, discoveries in viral transcripts soon followed. Detection has been aided by the advancement and use of new techniques such as m6A-seq. Mechanisms Complexes Viral RNA modifications use the same machinery as cellular RNA. This involves the use of "writer" and "reader" complexes. The writer complex contains the enzyme methyltransferase-like 3 (METTL3) and cofactors such as METTL14, WTAP, KIAA1429 and RBM15/RBM15B, and adds the m6A modification in the nucleus. Proteins of the YTH family, such as YTHDC1 and YTHDC2, detect these modifications within the nucleus. In the cytoplasm, the reading duties are carried out by YTHDF1, YTHDF2, and YTHDF3. The proteins ALKBH5 and FTO remove the m6A modification, functionally serving as erasers, with the latter having a more restricted selectivity depending on the position of the modification. N6-Methyladenosine (m6A) This modification involves the addition of a methyl (-CH3) group to the sixth nitrogen of the adenine base in an mRNA molecule. It was among the first mRNA modifications to be discovered, in 1974. This modification is common in viral mRNA transcripts and is found in nearly 25% of them. The distribution of the modification is not uniform, with some transcripts containing more than 10. m6A modification is a dynamic process with roles ranging from viral interactions with cellular machinery and structural adjustments to control of the viral life cycle. Studies have shown different regulatory patterns for different viruses depending on the context. For single-stranded RNA viruses, the effects of the modifications appear to differ on the basis of the viral family. In the HIV-1 genome, the single-stranded positive-sense RNA contains m6A modifications at multiple sites in both the untranslated and coding regions. The presence of these modifications in the viral transcript is enough to increase corresponding modifications in host-cell mRNA through binding interactions between the HIV-1 gp120 envelope protein and the CD4 receptor in T lymphocytes. For HIV-1 and other RNA viral families such as chikungunya, enteroviruses and influenza, studies show both positive and negative roles for m6A modifications in viral replication and infection. For other families, the effects are clearer. In the Flaviviridae family, the modification plays a negative role and hinders viral replication, while in respiratory syncytial virus it plays a positive role and enhances viral replication and infection. Why responses differ within the same family of viruses, and why viral families such as Flaviviridae conserve m6A modifications when they negatively impact their replication cycles, is currently unknown and under investigation. Most RNA viruses carry out their replication cycles in the cytoplasm, away from the machinery for writing and erasing m6A modifications, which is housed in the nucleus.
For DNA viruses, which replicate in the nucleus with direct access to that machinery, no clear general positive or negative regulatory role can be attributed to m6A modifications. In the simian virus and hepatitis B viruses, different m6A reading complexes were shown to have different roles in regulation, with some having a conserved positive role and others having a neutral or negative effect on replication. 2'-O-methylation This modification involves the addition of a methyl group to the 2' hydroxyl (-OH) group of the ribose sugar of RNA molecules. In contrast with the m6A modification, it is the ribose sugar, part of the backbone, rather than the base, that is altered. It is present in various kinds of cellular RNA, providing coding and structural support. 2'-O-methylation of viral RNA is often accompanied by the addition of an inverted N7-methylguanosine to the 5' end on the phosphate group. These modifications regulate important functions of viral RNA such as metabolism and immune-system interactions. Different viruses have their own mechanisms for acquiring this modification. Cytoplasmic RNA viruses such as Flaviviridae and coronaviruses encode the enzymes required to catalyze the cap-formation reactions; some need only one enzyme for both the 5' cap and 2'-O-methylation, while others, such as poxviruses, require two enzymes. Others, like influenza virus, can hijack methylguanosine caps from host-cell mRNA and be preferentially translated. 5-methylcytidine (m5C) One viral epitranscriptome modification that has been identified is 5-methylcytidine (m5C). HIV-1 and MLV transcriptomes contain levels of these residues elevated approximately 14–30 fold compared to a cell's normal levels. NSUN2 encodes the cytidine methyltransferase credited with m5C formation in cells and its amplification in viral epitranscriptomes. NSUN2 affects translation of the viral mRNA, boosting expression of the viral genome. m5C has also been found to alter the splicing pattern and splicing locations in the viral transcriptome; this affected the HIV-1 transcript in both early and late infection. Immune system Viral RNA modifications play important roles in interactions with the immune system of host cells. The m6A modification of viral RNAs allows viruses to escape recognition by the retinoic acid-inducible gene I receptor (RIG-I) in the type I IFN response, a crucial pathway of innate immunity. 5' N7-methylguanosine capping and 2'-O-methylation also play vital roles in viral infection. The cap structures help viral RNA blend in among modified cellular mRNA and avoid triggering immune response systems. References Molecular biology Virology Viral genes
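As a side note on how candidate m6A sites are often located computationally, m6A deposited by the METTL3/METTL14 writer complex typically falls within the DRACH consensus motif (D = A/G/U, R = A/G, H = A/C/U). The sketch below scans a transcript sequence for that motif; the example sequence and function name are illustrative assumptions, and a motif hit alone does not show that a site is actually methylated (techniques such as m6A-seq are used for that).

```python
import re

# DRACH consensus motif for m6A: D=[AGT], R=[AG], then A, C, H=[ACT]
# (DNA alphabet, with T standing in for U); lookahead allows overlapping hits
DRACH = re.compile(r"(?=([AGT][AG]AC[ACT]))")

def find_drach_sites(transcript: str):
    """Return 0-based positions of the candidate methylated A in each DRACH match."""
    transcript = transcript.upper().replace("U", "T")
    # the adenosine that can carry m6A is the third base of the five-base motif
    return [m.start() + 2 for m in DRACH.finditer(transcript)]

# Hypothetical fragment, not a real viral sequence
seq = "GGACUUAAGGACAUUUGAACU"
print(find_drach_sites(seq))  # -> [2, 10, 18]
```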
Viral epitranscriptome
[ "Chemistry", "Biology" ]
1,308
[ "Biochemistry", "Molecular biology" ]
53,529,240
https://en.wikipedia.org/wiki/Nitronic
Nitronic is the trade name for a collection of nitrogen-strengthened stainless steel alloys. They are austenitic stainless steels. History Nitronic alloys were developed by Armco Steel. The first of these alloys, Nitronic 40, was introduced in 1961. Since 2022, the trademark has been owned by Cleveland-Cliffs Steel Corp., successor to AK Steel. Electralloy is the licensed producer in North America for a wide range of Nitronic products. The Nitronic name comes from the addition of nitrogen to the alloy, which strengthens the material internally rather than through surface nitriding, as some steels are treated. The nitrogen is distributed homogeneously throughout the material. Nitronic materials have about twice the yield strength of 304L and 316L. Uses Nitronic 30 is used to lighten transportation vehicles. Buses and railcars benefit from the high strength-to-weight ratio for weight savings. Nitronic 40 is used at cryogenic temperatures and in the aerospace industry as hydraulic tubing. Nitronic 50 is used in marine environments, including boat shafting and solid rod rigging. Nitronic 60 and the similar alloy Gall-Tough have high resistance to galling, a form of wear caused by adhesion between sliding surfaces, and to metal-to-metal wear. Composition Nitronic alloys have widely varying compositions, but all are predominantly iron, chromium, manganese and nitrogen. References Superalloys Aerospace materials Chromium alloys
Nitronic
[ "Chemistry", "Engineering" ]
307
[ "Aerospace materials", "Superalloys", "Alloys", "Aerospace engineering", "Chromium alloys" ]
70,520,651
https://en.wikipedia.org/wiki/Fat%20suppression
Fat suppression is an MRI technique in which fat signal from adipose tissue is suppressed to better visualize uptake of contrast material by bodily tissues, to reduce chemical shift artifact, and to characterize certain types of lesions such as adrenal gland tumors, bone marrow infiltration, fatty tumors, and steatosis by determining the fat content of the tissues. Due to short relaxation times, fat exhibits a strong signal in magnetic resonance imaging (MRI), easily discernible on scans. Fat suppression can be achieved through various techniques as outlined below: Frequency Selective Pulses (CHESS): This method leverages the difference in resonance frequency between fat and water, employing frequency-selective pulses. Known as fat saturation (fat-sat) techniques, this approach facilitates effective fat suppression. Phase Contrast Techniques: Operating on the same principle as black boundary or India ink artifacts, phase contrast techniques contribute to suppressing fat signals in MRI. Inversion Recovery Sequences (STIR Technique): Utilizing fat's short T1 relaxation time, the STIR technique involves inversion recovery sequences to achieve fat suppression. Dixon Method: A distinct approach primarily used to achieve uniform fat suppression. Hybrid Techniques (e.g., SPIR): Innovative approaches involve the combination of multiple fat suppression techniques, exemplified by SPIR, which integrates spectral presaturation with inversion recovery. The choice of a specific fat suppression technique should be guided by several factors, including the intended purpose, whether contrast enhancement or tissue characterization. Considerations such as the quantity of fat in the tissue under examination, the magnetic field strength, and the homogeneity of the main magnetic field play crucial roles in the selection process. References Magnetic resonance imaging Nuclear magnetic resonance
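To make the Dixon approach concrete, the two-point variant reconstructs water-only and fat-only images from in-phase (IP) and opposed-phase (OP) acquisitions as W = (IP + OP)/2 and F = (IP − OP)/2. The NumPy sketch below illustrates only this arithmetic under idealised phase behaviour; the array names and example values are assumptions for demonstration, not data from any particular scanner.

```python
import numpy as np

def two_point_dixon(in_phase: np.ndarray, opposed_phase: np.ndarray):
    """Illustrative two-point Dixon separation (assumes ideal phase behaviour).

    water = (IP + OP) / 2, fat = (IP - OP) / 2
    """
    water = (in_phase + opposed_phase) / 2.0
    fat = (in_phase - opposed_phase) / 2.0
    return water, fat

# Hypothetical 2x2 pixel signal values purely for demonstration
ip = np.array([[100.0, 80.0], [60.0, 40.0]])   # in-phase signal
op = np.array([[20.0, 80.0], [-60.0, 0.0]])    # opposed-phase signal
water, fat = two_point_dixon(ip, op)
print(water)  # water-only image
print(fat)    # fat-only image
```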
Fat suppression
[ "Physics", "Chemistry" ]
342
[ "Nuclear magnetic resonance", "Magnetic resonance imaging", "Nuclear chemistry stubs", "Nuclear magnetic resonance stubs", "Nuclear physics" ]
70,523,734
https://en.wikipedia.org/wiki/Common%20data%20model
A common data model (CDM) can refer to any standardised data model which allows for data and information exchange between different applications and data sources. Common data models aim to standardise logical infrastructure so that related applications can "operate on and share the same data", and can be seen as a way to "organize data from many sources that are in different formats into a standard structure". A common data model has been described as one of the components of a "strong information system". A standardised common data model has also been described as a typical component of a well-designed agile application, alongside a common communication protocol. Providing a single common data model within an organisation is one of the typical tasks of a data warehouse. Examples of common data models Border crossings X-trans.eu was a cross-border pilot project between the Free State of Bavaria (Germany) and Upper Austria with the aim of developing a faster procedure for the application and approval of cross-border large-capacity transports. The portal was based on a common data model that contained all the information required for approval. Climate data The Climate Data Store Common Data Model is a common data model set up by the Copernicus Climate Change Service for harmonising essential climate variables from different sources and data providers. General information technology Within service-oriented architecture, S-RAMP is a specification released by HP, IBM, Software AG, TIBCO, and Red Hat which defines a common data model for SOA repositories as well as an interaction protocol to facilitate the use of common tooling and sharing of data. Content Management Interoperability Services (CMIS) is an open standard for interoperation of different content management systems over the internet, and provides a common data model for typed files and folders used with version control. The NetCDF software libraries for array-oriented scientific data implement a common data model called the NetCDF Java common data model, which consists of three layers built on top of each other to add successively richer semantics. Health Within genomic and medical data, the Observational Medical Outcomes Partnership (OMOP) research program established under the U.S. National Institutes of Health has created a common data model for claims and electronic health records which can accommodate data from different sources around the world. PCORnet, which was developed by the Patient-Centered Outcomes Research Institute, is another common data model for health data including electronic health records and patient claims. The Sentinel Common Data Model was initially started as Mini-Sentinel in 2008. It is used by the Sentinel Initiative of the USA's Food and Drug Administration. The Generalized Data Model was first published in 2019. It was designed to be a stand-alone data model as well as to allow for further transformation into other data models (e.g., OMOP, PCORnet, Sentinel). It has a hierarchical structure to flexibly capture relationships among data elements. The JANUS clinical trial data repository also provides a common data model which is based on the SDTM standard to represent clinical data submitted to regulatory agencies, such as tabulation datasets, patient profiles, listings, etc.
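As a rough illustration of what mapping heterogeneous sources into a common data model involves, the sketch below normalises two differently shaped patient records into one shared structure. The field names and the target schema are invented for illustration and do not correspond to OMOP, PCORnet, Sentinel, or any other real model.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CommonPatientRecord:
    # Hypothetical common-model fields; not an actual OMOP/PCORnet schema
    patient_id: str
    birth_date: date
    diagnosis_code: str

def from_ehr(row: dict) -> CommonPatientRecord:
    # Source A: an EHR export using ISO dates and ICD-10 codes
    return CommonPatientRecord(
        patient_id=row["mrn"],
        birth_date=date.fromisoformat(row["dob"]),
        diagnosis_code=row["icd10"],
    )

def from_claims(row: dict) -> CommonPatientRecord:
    # Source B: a claims feed with a different layout and date format
    y, m, d = (int(x) for x in row["birth"].split("/"))
    return CommonPatientRecord(
        patient_id=row["member_id"],
        birth_date=date(y, m, d),
        diagnosis_code=row["dx"],
    )

records = [
    from_ehr({"mrn": "A-001", "dob": "1980-02-29", "icd10": "E11.9"}),
    from_claims({"member_id": "B-417", "birth": "1975/07/04", "dx": "I10"}),
]
print(records)  # both sources now share one structure for downstream use
```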
Logistics SX000i is a specification developed jointly by the Aerospace and Defence Industries Association of Europe (ASD) and the American Aerospace Industries Association (AIA) to provide information, guidance and instructions to ensure compatibility and commonality. The associated SX002D specification contains a common data model. Microsoft Common Data Model The Microsoft Common Data Model is a collection of many standardised extensible data schemas with entities, attributes, semantic metadata, and relationships, which represent commonly used concepts and activities in various business areas. It is maintained by Microsoft and its partners, and is published on GitHub. Microsoft's Common Data Model is used in Microsoft Dataverse, among others, and with various Microsoft Power Platform and Microsoft Dynamics 365 services. Rail transport RailTopoModel is a common data model for the railway sector. Other There are many more examples of various common data models for different uses published by different sources. See also Apache OFBiz, an open-source enterprise resource planning system which provides a common data model Canonical model Data Reference Model, one of five reference models of the U.S. government federal enterprise architecture Data platform Metadata Open Semantic Framework, which internally uses RDF to convert all data to a common data model Requirements Interchange Format Generic data model References Sharing Information theory Data modeling Application software Databases
Common data model
[ "Mathematics", "Technology", "Engineering" ]
897
[ "Telecommunications engineering", "Applied mathematics", "Data modeling", "Computer science", "Information theory", "Data engineering" ]
70,527,346
https://en.wikipedia.org/wiki/Rixosome
The rixosome is a protein complex involved in RNA degradation, ribosomal RNA (rRNA) processing, and ribosome biogenesis. It was named after the S. cerevisiae gene RIX1. The rixosome is associated with human PRC1 and PRC2 complexes. The interaction with PRC1 appears to be through the RING1B subunit of PRC1, based on mutational analysis. The co-localization of the rixosome and the PRC complexes suggests a role for rixosomal degradation of nascent RNA in contributing to the silencing of many Polycomb targets in human cells. Regulation of the interaction with the PRC1 complex is mediated by SENP3, which deSUMOylates several rixosome subunits. Components The rixosome complex contains the following components: WD repeat domain 18 LAS1L TEX10 PELP1 References Protein complexes
Rixosome
[ "Chemistry" ]
185
[ "Biochemistry stubs", "Protein stubs" ]
70,529,013
https://en.wikipedia.org/wiki/Canadian%20Nuclear%20Laboratories%20Research%20Facilities
Canadian Nuclear Laboratories (CNL) research facilities are located in Chalk River, Ontario, Canada, approximately 180 km north-west of Ottawa. There are three new additions to the site. The Logistics Warehouse (5016 sqm / 54,000 sqft) contains a large reception space, offices, and storage. The Support and Maintenance Facility (4800 sqm / 51,670 sqft) houses equipment, offices, and flexible open spaces. The Science Collaboration Centre (8198 sqm / 88,240 sqft) has studios, laboratories, and administrative spaces. CNL is a nuclear technology and research institute. Its ageing facilities required an overhaul to continue innovation. The campus contains over 300 buildings across a 3700-hectare plot of land along the Ottawa River. Design firm HDR was the architect. History Historically, the Chalk River Laboratories site was a nuclear power plant and advanced nuclear research facility. CNL began developing nuclear technology in the late 1940s and early 1950s. The government-owned company Atomic Energy of Canada Limited (AECL) took over Chalk River Nuclear Laboratories in 1952, but today the site remains operated through contractors such as CNL. This is referred to as GoCo management: government-owned, contractor-operated. The research led to the development of the CANDU reactor. Other research included fuels, hydrogen production, storage and handling of radiation, and, more recently, alpha-therapy medical isotope treatment for cancer. In 2014, Ontario became the first jurisdiction in North America to leave behind coal-fired power plants and rely fully on nuclear power and renewable energies. In 2016, a $1.2 billion investment plan over ten years was released by the Government of Canada. The investment plan required the decommissioning of 120 aged facilities and the design of new centres. Design The Logistics Warehouse is the new public face of the campus. It finished construction in September 2020 at a cost of $30.6 million. This is a two-storey building that houses spaces for reception and information, offices, and storage. The front half of the building is the public space, with storage in the back half. The front facade design is mostly transparent, using glazing with wood slatting in front of the curtain walls. The Support and Maintenance Facility is also a two-storey building. This is the campus's manufacturing and servicing depot. This facility was completed in March 2021 at a cost of $32.8 million. The interior of the warehouse has entirely exposed services and structure. The facade on this building is mostly solid, with thin glazing that frames the surrounding forest. The Science Collaboration Centre is a six-storey, multi-use building that will act as the heart of the campus. Its projected completion date is spring 2023, with a budget of $62 million. The building will feature three open-plan studios, offices, laboratories, and data storage. The facade design is mostly glass, which will reveal the active spaces inside as well as the wood structure. References Research institutes in Canada Buildings and structures in Renfrew County Sustainable architecture Wooden buildings and structures
Canadian Nuclear Laboratories Research Facilities
[ "Engineering", "Environmental_science" ]
609
[ "Sustainable architecture", "Environmental social science", "Architecture" ]
70,533,744
https://en.wikipedia.org/wiki/Cheluviation
Cheluviation is the process in which metal ions in the upper layer of the soil combine with organic ligands to form coordination complexes or chelates, which move downwards through eluviation and are then deposited. Metal ions that can participate in chelation include Fe, Al, Mn, Ca, Mg and trace elements in soil, while the organic ligands that bind these metal ions come mainly from soil organic matter. Soil organic matter includes relatively stable complex organic compounds (such as lignin, protein, humus, etc.), as well as some simple organic acids and intermediate products of microbial decomposition of organic matter. These organic compounds all contain reactive groups to varying degrees. Chain-like organic ligands complex with metal ions, and when the resulting complexes contain multiple coordination atoms forming a ring structure with the metal ion, they are called chelates. The stability of a chelate is related to the number of atoms in the chelate ring, the stability constant of the chelation reaction, and the concentrations of the organic chelating agents and metal ions. The chelates formed by fulvic acid and metal ions in soil humus are strongly leached and redeposited, and are therefore an important manifestation of soil cheluviation, generally resulting in the formation of gray-white leached layers and dark brown or red deposited layers. Dissolution and chelation of metal elements Organic acids have the ability to dissolve soil minerals and can break down silicate minerals and iron and aluminum oxides, so that metal ions are released and complexed with organic complexing agents through ion exchange, surface adsorption, and chelation reaction mechanisms. For example, at low pH, a large number of metal ions are complexed with organic acids. When the organic acid occupies the coordination positions of the metal ion, it can prevent the precipitation and crystallization of the metal oxide and increase its solubility. Conversely, at high pH (e.g. 7–8), dissolved metal ions such as Fe(III) will precipitate out of the solution as insoluble complexes. Eluviation of chelate compounds The eluviation of chelate compounds is the downward movement of soil chelates. It can be affected by: Acidity. Organic acids produced under acidic conditions can increase the solubility of metal elements such as iron and aluminum, thereby enhancing soil eluviation. Iron and aluminum are easily leached at low pH. As the pH increases, ferric hydroxide and aluminum hydroxide compounds precipitate. Redox conditions. Under reducing conditions, more organic acids are produced and metal ions are reduced to soluble metal complexes that migrate through the soil. Under oxidizing conditions, metal ions are easily precipitated, and the chelate is easily polymerized, thereby separating the chelate from the metal ions. Soil texture. Clay has a certain adsorption capacity for chelates, which weakens the leaching of complexes. On the other hand, soils with a coarse texture and water-saturated soils will likely enhance the leaching of chelates. References Soil chemistry Coordination chemistry
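To make the role of the stability constant concrete, consider the simplest 1:1 chelation equilibrium; the numerical values below are illustrative assumptions, not measured soil data.

\[
\mathrm{M} + \mathrm{L} \rightleftharpoons \mathrm{ML}, \qquad K_{\mathrm{stab}} = \frac{[\mathrm{ML}]}{[\mathrm{M}][\mathrm{L}]}
\]

Rearranging gives the ratio of chelated to free metal, \([\mathrm{ML}]/[\mathrm{M}] = K_{\mathrm{stab}}\,[\mathrm{L}]\). With an assumed \(K_{\mathrm{stab}} = 10^{10}\) and a free-ligand concentration \([\mathrm{L}] = 10^{-6}\ \mathrm{mol/L}\), the ratio is \(10^{10} \times 10^{-6} = 10^{4}\), so essentially all of the metal is carried as the chelate and can move downwards with percolating water rather than remaining as a free ion.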
Cheluviation
[ "Chemistry" ]
653
[ "Soil chemistry", "Coordination chemistry" ]