Dataset columns:
id: int64 (39 to 79M)
url: string (lengths 31 to 227)
text: string (lengths 6 to 334k)
source: string (lengths 1 to 150)
categories: list (lengths 1 to 6)
token_count: int64 (3 to 71.8k)
subcategories: list (lengths 0 to 30)
64,683,481
https://en.wikipedia.org/wiki/Deathgarden
Deathgarden was an asymmetrical multiplayer first-person shooter video game developed and published by Behaviour Interactive, with gameplay similar to another of the studio's games, Dead by Daylight. It was initially launched on Steam Early Access in the summer of 2018 and was relaunched as Deathgarden: Bloodharvest in 2019. The same year, however, its development ceased and it went free-to-play after failing to maintain a sufficient playerbase. The game pitted a single invincible Hunter against five Scavengers (previously called Runners), who had to try to escape. While praised by critics for its central concept and lore, it was also criticized for unbalanced gameplay caused by its asymmetrical design. On 12 August 2020, the game's servers closed. Plot In the story of the relaunched version of the game, the Bloodharvest is the only way for the Scavengers to earn a spot in the rare, high-class Enclaves left after a global apocalypse. Gameplay Each map was randomized at the start of every round. The Scavengers had to capture control points that granted them a chance to escape, but could not kill the Hunter. The Hunter, the only character armed with a gun, had to shoot Scavengers to knock them down and could then execute them. If the Hunter killed all of the Scavengers, they won. A Hunter who chose not to execute downed Scavengers immediately earned bonus experience. Because immediate executions nevertheless remained common, a "mercy" mechanic was added later in development that allowed Runners one knockdown and respawn before they could be executed. Development The game suffered from a troubled early development, with the playerbase drying up only a few days after its release on Early Access. Rather than cancelling the game, the developers decided to continue development and relaunch it as Deathgarden: Bloodharvest. 
Reception Cass Marshall of Polygon called the game "The Hunger Games combined with Judge Dredd", praising its concept but criticizing the maps as "disappointing" for their generic nature and calling Hunters too powerful in the hands of a skilled player, even though Scavengers were more fun to play. References 2018 video games 2019 video games 2020 video games Cancelled Windows games Inactive multiplayer online games Products and services discontinued in 2020 First-person shooters Asymmetrical multiplayer video games Video games developed in Canada Early access video games Free-to-play video games Dystopian video games Windows games PlayStation 4 games Xbox One games
Deathgarden
[ "Physics" ]
534
[ "Asymmetrical multiplayer video games", "Symmetry", "Asymmetry" ]
64,684,602
https://en.wikipedia.org/wiki/Large%20diameter%20centrifuge
The large diameter centrifuge, or LDC, is any centrifuge extending several meters which rotates samples to increase their acceleration and thus enhance the effect of gravity. Large diameter centrifuges are used to study the effect of hypergravity (gravitational acceleration stronger than that of the Earth) on biological samples, including but not limited to plants, organs, bacteria, and astronauts (such as in NASA's Human Performance Centrifuge), or on non-biological samples in experiments in fluid dynamics, geology, biochemistry and other fields. Description Frequently, "LDC" is used to refer to the centrifuge at the European Space Agency (ESA)'s campus known as ESTEC (European Space research and TEChnology centre). This is an 8 m diameter, four-arm centrifuge covered by a dome, which is available for research. A total of six gondolas, each able to carry an 80 kg payload, can spin at a maximum of 20 times Earth's gravity, corresponding to 67 revolutions per minute. Full technical specifications are available for free on the ESA website. Competition and grants The European Space Agency (ESA) and UNOOSA let students compete with research proposals for the use of the LDC. These competitions are known as 'Spin Your Thesis!'. When a proposal is accepted, the students are guided at ESA/ESTEC in using the LDC. Support is given for a variety of fields including biology, physics and chemistry. See also Gravitropism Random positioning machine Free Fall Machine Clinostat References Laboratory equipment Gravitational instruments
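The quoted figures can be cross-checked against the centripetal acceleration formula a = ω²r. A minimal sketch, assuming the gondolas swing out to roughly the 4 m arm radius (half of the 8 m diameter; the exact effective radius is an assumption):

```python
import math

G = 9.81          # m/s^2, standard gravity
radius_m = 4.0    # assumed gondola radius: half the 8 m diameter
rpm = 67          # maximum rotation rate quoted above

omega = 2 * math.pi * rpm / 60   # angular velocity in rad/s
accel = omega ** 2 * radius_m    # centripetal acceleration, a = w^2 * r

print(round(accel / G, 1))       # ~20, consistent with "20 times Earth's gravity"
```

The close agreement suggests the quoted 67 rpm figure indeed refers to the hypergravity at the gondola, not at the arm midpoint.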
Large diameter centrifuge
[ "Technology", "Engineering" ]
334
[ "Measuring instruments", "Gravitational instruments" ]
74,820,522
https://en.wikipedia.org/wiki/Tuxedo%20Computers
Tuxedo Computers GmbH (proper spelling: TUXEDO Computers) is a computer manufacturer based in Augsburg, Germany. The company specializes in desktop computers and notebooks with a pre-installed Linux operating system. The devices are manufactured in Leipzig, Germany. Tuxedo Computers equips its devices with Tuxedo OS, its own Ubuntu-based Linux distribution, or installs a selection of other distributions, as well as Microsoft Windows alongside the Linux system or in a virtual machine. History Tuxedo Computers was founded on February 1, 2004, by current managing director Herbert Feiler in Bayreuth. In 2013, the company moved to Königsbrunn, and in 2019 to its current headquarters in Augsburg. The name derives from the Linux mascot Tux, whose plumage resembles a tuxedo. The company emerged from an online store specializing in the distribution of promotional items related to Linux and open-source software, and of software boxes with Linux distributions. Because of their better Linux compatibility, Tuxedo Computers originally carried only desktop computers, as notebooks often required special adaptations. Notebooks and small form factor desktop computers have since been added to the range. The names of the devices borrow from stars and planets, space travel, and science and technology. Tuxedo OS With Tuxedo OS (proper spelling: TUXEDO OS), Tuxedo Computers develops its own Linux distribution based on Ubuntu. The first version was released on September 29, 2022. Compared to Ubuntu, Tuxedo OS omits the Snap package manager initiated by Canonical and adds the latest Linux kernel and the latest version of KDE Plasma. In addition, Tuxedo OS uses its own software repositories, operated by hosting providers located in Germany, and refrains from phoning home. Tuxedo OS can be freely downloaded from the project page as an ISO disk image. 
Complementing Tuxedo OS, the company develops tools to control hardware functions and improve usability. Licensed under the GNU General Public License, the Tuxedo Control Center allows, among other things, control of the fans, the clocking of the central and graphics processing units, and the adjustment of the backlit keyboard of Tuxedo laptops. With WebFAI, Tuxedo Computers provides its in-house tool for automatically installing Tuxedo OS or other Linux distributions on its computers. Community relations Tuxedo Computers is one of the Patrons of KDE and also supports the KDE developers by hosting events; for example, the KDE Plasma Sprint 2023 was held at the company's offices. The Linux User Group Augsburg meets regularly on its premises. Developers from Tuxedo Computers regularly submit code and patches to the Linux kernel. MyTuxedo Under the name MyTuxedo, Tuxedo Computers operates a cloud storage service based on Nextcloud. The service is currently available only to buyers of a Tuxedo computer. The servers as well as the backup storage are located in data centers in Germany; the service is therefore subject to the European General Data Protection Regulation. See also Linux adoption Framework Computer Purism (company) System76 Pine64 References External links MyTuxedo Companies based in Augsburg Companies based in Bavaria Computer hardware companies Computer systems companies Consumer electronics brands Online retailers of Germany Cloud storage Ubuntu
Tuxedo Computers
[ "Technology" ]
699
[ "Computer hardware companies", "Computer systems companies", "Computers", "Computer systems" ]
74,823,213
https://en.wikipedia.org/wiki/Bhojpuri%20numerals
Bhojpuri number words include numerals and other words derived from them, along with words borrowed from other languages. Cardinal numbers Base numbers 1-99 The Old Bhojpuri word for twenty is kor̤ī, which is still used in Trinidadian Bhojpuri. In Western Standard Bhojpuri, egara, baara, etc. end with "e" instead of "a"; hence egare, baare, tere, etc. are used up to eighteen. The word for hundred in Bhojpuri is sai. Higher numbers The word for thousand is hajār, a Persian loanword; the Old Bhojpuri word is sahas. The word for one hundred thousand is lākh. Numbers above one hundred are formed by subjoining the lower number to the higher one. Base-20 counting A counting system with 20 as its base is also used in Bhojpuri. Hence, 65 is expressed as (3*20)+5, i.e. teen bees/kori aa panch. Sometimes numbers slightly below a multiple of twenty are also expressed in terms of twenty; for example, eighteen can be expressed as du kam bees/kori (two less than twenty). Ordinals The first four ordinals are: The rest of the ordinals are made by adding -wā to the cardinals, e.g. pachwā (fifth). Multiplicative numerals Multiplicatives are formed by adding hālī, hālā, ber, beri, tor, torī to the numbers. Notes References numerals Numerals
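The base-20 decomposition described above, including the subtractive kam ("less") form, can be sketched as follows. This is a minimal illustration: the Romanized spellings follow the article, while the cutoff for when the kam form is preferred is an assumption.

```python
def vigesimal(n):
    """Express n in the base-20 style described above:
    multiples of twenty (kori) plus a remainder, or a deficit (kam)."""
    scores, rest = divmod(n, 20)
    if rest >= 18:  # assumed cutoff: numbers "near twenty" use the kam form
        return f"{20 - rest} kam {scores + 1} kori"
    return f"{scores} kori aa {rest}"

print(vigesimal(65))  # "3 kori aa 5"  -> teen kori aa panch
print(vigesimal(18))  # "2 kam 1 kori" -> du kam bees/kori
```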
Bhojpuri numerals
[ "Mathematics" ]
332
[ "Numeral systems", "Numerals" ]
74,823,985
https://en.wikipedia.org/wiki/Mark%20D.%20Foster
Mark D. Foster is the Thomas A. Knowles Professor of Polymer Science and Polymer Engineering, and associate dean of programs, policy and engagement at the University of Akron. His area of research is polymer surfaces and interfaces. Education Foster completed his undergraduate studies in Chemical Engineering at Washington University in St. Louis in 1981. He completed his doctorate, also in Chemical Engineering, at the University of Minnesota, Twin Cities in 1987. He then took a postdoctoral staff scientist position at the Max Planck Institute for Polymer Research. He held a senior postdoctoral position under Frank S. Bates at the University of Minnesota during 1989–1990. Career Foster joined the University of Akron Polymer Science faculty in 1990 at the rank of assistant professor. He served as chair of the department of polymer science from 2005 to 2008. Since 2008, he has held roles as associate dean of the College of Polymer Science and Polymer Engineering, as well as director of the Akron Global Polymer Academy. His most cited work treats the subject of epoxy-terminated self-assembled monolayers. Awards and recognition 2005 - Sparks–Thomas Award from the ACS Rubber Division References Living people Polymer scientists and engineers Year of birth missing (living people) McKelvey School of Engineering alumni University of Minnesota College of Science and Engineering alumni University of Akron faculty
Mark D. Foster
[ "Chemistry", "Materials_science" ]
259
[ "Polymer scientists and engineers", "Physical chemists", "Polymer chemistry" ]
74,825,642
https://en.wikipedia.org/wiki/Fluoroether%20E-1
Fluoroether E-1 (known chemically as heptafluoropropyl 1,2,2,2-tetrafluoroethyl ether) is a chemical compound in the class of per- and polyfluoroalkyl substances (PFAS). This synthetic fluorochemical is used in the GenX process and may arise from the degradation of GenX chemicals, including FRD-903. Production Fluoroether E-1 is mainly produced within the GenX process, where FRD-903 (2,3,3,3-tetrafluoro-2-(heptafluoropropoxy)propanoic acid) is used to generate FRD-902 (ammonium 2,3,3,3-tetrafluoro-2-(heptafluoropropoxy)propanoate) and Fluoroether E-1 (heptafluoropropyl 1,2,2,2-tetrafluoroethyl ether). Properties Fluoroether E-1 is a colorless liquid that is practically insoluble in water. It is volatile and has a low boiling point. References Perfluorinated compounds Chemours Chemical processes
Fluoroether E-1
[ "Chemistry" ]
273
[ "Chemical process engineering", "Chemical processes", "nan" ]
74,827,926
https://en.wikipedia.org/wiki/Data%20centre%20tiers
Data centre tiers are defined levels of resiliency and redundancy for IT facility infrastructure. They are widely used in the data centre, ISP and cloud computing industries as part of the engineering design for high availability systems. The standard data centre tiers are: Tier I: no redundancy Tier II: partial N+1 redundancy Tier III: full N+1 redundancy of all systems, including power supply and cooling distribution paths Tier IV: as Tier III, but with 2N+1 redundancy of all systems A Tier III system is intended to operate at Tier II resiliency even when under maintenance, and a Tier IV system is intended to operate at Tier III resiliency even when under maintenance. Most commercial data centres are Tier III; instead of using Tier IV data centres, many large service providers typically use multiple availability zones to implement their services, thus achieving greater resilience than would be possible with any single data centre. The data centre tier system was created by the Uptime Institute. See also Availability zone References Data centers Reliability engineering
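The tier ladder above can be modelled as a small lookup table, for example to check which tiers remain fully serviceable while one distribution path is taken down for maintenance. This is an illustrative sketch only; the path counts are a simplified reading of the tier descriptions above, not Uptime Institute data:

```python
# Simplified model: independent distribution paths and redundancy per tier.
TIERS = {
    "I":   {"paths": 1, "redundancy": "N"},
    "II":  {"paths": 1, "redundancy": "N+1"},
    "III": {"paths": 2, "redundancy": "N+1"},    # concurrently maintainable
    "IV":  {"paths": 2, "redundancy": "2N+1"},   # fault-tolerant
}

def concurrently_maintainable(tier: str) -> bool:
    """A tier stays online during maintenance if a spare path remains."""
    return TIERS[tier]["paths"] > 1

print([t for t in TIERS if concurrently_maintainable(t)])  # ['III', 'IV']
```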
Data centre tiers
[ "Technology", "Engineering" ]
222
[ "Systems engineering", "Reliability engineering", "Data centers", "Computing stubs", "Computers" ]
74,828,085
https://en.wikipedia.org/wiki/Availability%20zone
In cloud computing, an availability zone is a subset of an IT infrastructure system that shares no service-critical components (including power, cooling and access) with any other availability zone. Availability zones are typically geographically separated from one another, to prevent local disasters from affecting more than one availability zone. Some service providers also make higher-level regional distinctions between availability zones, which allows even regional-scale disasters such as earthquakes and forest fires to be mitigated. Applications requiring high availability are typically implemented as distributed systems that span multiple availability zones. Services offering distinct availability zones include Amazon Web Services, Microsoft Azure and Google Cloud. See also Single point of failure Active redundancy References Reliability engineering Cloud computing Fault-tolerant computer systems Distributed computing Disaster management
Availability zone
[ "Technology", "Engineering" ]
149
[ "Systems engineering", "Reliability engineering", "Computer systems", "Fault-tolerant computer systems", "Computing stubs" ]
74,828,277
https://en.wikipedia.org/wiki/HD%20202772%20Ab
HD 202772 Ab is a hot Jupiter orbiting the brighter component of the visual binary star HD 202772, located in the constellation Capricornus at a distance of about 480 light-years from Earth. The discovery was announced on 5 October 2018. HD 202772 Ab orbits its host star once every 3.3 days. It is an inflated hot Jupiter, and a rare example of hot Jupiters around evolved stars. It is also one of the most strongly irradiated planets known, with an equilibrium temperature of . References Capricornus Exoplanets discovered in 2018 Exoplanets discovered by TESS Transiting exoplanets
HD 202772 Ab
[ "Astronomy" ]
137
[ "Capricornus", "Constellations" ]
74,830,262
https://en.wikipedia.org/wiki/LY305
LY305 is a transdermally bioavailable selective androgen receptor modulator (SARM) developed by Eli Lilly for the treatment of osteoporosis in men. Its chemical structure includes an N-arylhydroxyalkyl group. A phase one trial found promising results. See also Compound 2f (SARM) References Selective androgen receptor modulators Chloroarenes Anilines Nitriles Cyclopentanols Benzonitriles
LY305
[ "Chemistry" ]
102
[ "Nitriles", "Functional groups" ]
74,831,326
https://en.wikipedia.org/wiki/Sobetirome
Sobetirome (GC-1) is a thyromimetic drug that binds preferentially to the thyroid hormone receptor TRβ1 compared to TRα1. It has been investigated for the treatment of dyslipidemia, obesity, Pitt–Hopkins syndrome, cholestatic liver disease, multiple sclerosis, bleomycin-induced lung fibrosis, and COVID-19-induced ARDS. It was designated as an orphan drug by the FDA for the treatment of X-linked adrenoleukodystrophy. References Orphan drugs Thyroid hormone receptor beta agonists Carboxylic acids Phenols Isopropyl compounds
Sobetirome
[ "Chemistry" ]
140
[ "Carboxylic acids", "Functional groups" ]
74,831,484
https://en.wikipedia.org/wiki/Praseodymium%20bromate
Praseodymium bromate is an inorganic compound with the chemical formula Pr(BrO3)3. It is soluble in water and can form the dihydrate, tetrahydrate and nonahydrate. The nonahydrate melts in its own crystal water at 56.5 °C and completely loses its crystal water at 130 °C. It can be produced by the reaction of barium bromate and praseodymium sulfate. References Praseodymium(III) compounds Bromates
Praseodymium bromate
[ "Chemistry" ]
108
[ "Bromates", "Oxidizing agents" ]
74,831,538
https://en.wikipedia.org/wiki/VK2809
VK2809 (formerly known as MB07811) is a thyromimetic prodrug whose active form is selective for the THR-β isoform. It is being developed by Viking Therapeutics in a phase II trial for the treatment of nonalcoholic steatohepatitis and is also being investigated for glycogen storage disease type Ia. In 2023, Viking Therapeutics filed a lawsuit against the developer of ASC41, Chinese company Ascletis BioScience, accusing it of stealing Viking's trade secrets to develop ASC41, which is allegedly similar or identical to VK2809. References Thyroid hormone receptor beta agonists Prodrugs Dioxaphosphorinanes Phenols 3-Chlorophenyl compounds Isopropyl compounds Experimental drugs developed for non-alcoholic fatty liver disease
VK2809
[ "Chemistry" ]
185
[ "Chemicals in medicine", "Prodrugs" ]
74,831,553
https://en.wikipedia.org/wiki/Polonium%28IV%29%20sulfate
Polonium(IV) sulfate is an inorganic compound, a salt of polonium and the sulfate anion with the chemical formula of . In anhydrous form it is a dark purple crystalline solid; as a hydrate it forms colourless or white crystals and is soluble in water. It can be obtained by the reaction of polonium tetrachloride (or hydrated polonium dioxide) with sulfuric acid. Polonium(IV) sulfate can be reduced to by hydroxylamine in acidic solutions; it decomposes to polonium dioxide at 550 °C. It is radioactive and produces gases as it decays. References Polonium compounds Sulfates
Polonium(IV) sulfate
[ "Chemistry" ]
138
[ "Sulfates", "Salts" ]
74,831,610
https://en.wikipedia.org/wiki/Polonium%20tetrabromide
Polonium tetrabromide is a bromide of polonium with the chemical formula PoBr4. Preparation Polonium tetrabromide can be formed by the direct reaction of bromine and polonium at 200 °C to 250 °C. Like polonium tetraiodide, polonium tetrabromide can also be produced by the reaction of polonium dioxide and hydrogen bromide: Properties Polonium tetrabromide is a light red solid that is easily deliquescent. It crystallizes in the cubic crystal system, with space group Fm3m (No. 225) and lattice parameter a = 5.6 Å. References Polonium compounds Bromides
Polonium tetrabromide
[ "Chemistry" ]
140
[ "Bromides", "Salts" ]
74,831,611
https://en.wikipedia.org/wiki/Eprotirome
Eprotirome is a thyromimetic drug that has been investigated for the treatment of dyslipidemia. A Phase III trial in humans was discontinued after the drug was found to have negative effects on cartilage in dogs. References Thyroid hormone receptor beta agonists Abandoned drugs Carboxylic acids Bromobenzene derivatives Anilides Isopropyl compounds Phenols
Eprotirome
[ "Chemistry" ]
83
[ "Functional groups", "Carboxylic acids", "Drug safety", "Abandoned drugs" ]
74,832,110
https://en.wikipedia.org/wiki/Kreuzbau%20%28Hamburg%29
The Kreuzbau (also Klassenkreuz) is a building type for school buildings in Hamburg. Between 1957 and 1963, Kreuzbau buildings were erected there at more than 60 locations of state schools. They have four wings on a cruciform ground plan, from which the name is derived. The Kreuzbau has three floors and a flat roof. Each floor has four classrooms with associated small group rooms, so a Kreuzbau can accommodate twelve school classes. The classrooms are accessed directly from a central staircase, without corridors, in the manner of the Schustertyp ("cobbler type"). The design for the Kreuzbau came from the Hamburg building director Paul Seitz. The main advantage of this type of building was its rapid assembly; from today's perspective, its disadvantage is its poor thermal insulation. More than 80% of the Kreuzbau buildings erected in Hamburg are still standing and mostly serve elementary schools as classrooms. History Antecedents After the end of World War II, almost half of Hamburg's former 463 school buildings were no longer usable: 21% of the schools were destroyed and 26% were so badly damaged that they could hardly be used. From 1945 to 1947, the number of students in Hamburg doubled from 95,000 to 186,000. The main reason for this increase was the return of families evacuated after the 1943 "Feuersturm," including students returning from the Kinderlandverschickung. The settlement of Germans displaced in the flight and expulsion of 1944–50 and of refugees from the Soviet occupation zone reinforced the effect. Until 1948, school construction was limited to makeshift repairs of damage and the use of shacks and other temporary facilities. The shortage of space could only be met by "shift teaching." This situation generated considerable public pressure, since parents' gainful employment was severely hampered when several children attended school in staggered shifts. 
In Hamburg's state politics, the shortage of school space was, along with the housing shortage, the dominant irritant, and it contributed to the SPD's loss of its majority in the 1953 Hamburg elections, even though top candidate Max Brauer had promised the completion of one new school per month in the election program "A flourishing Hamburg." The winner of the 1953 elections was the bourgeois Hamburg Block, which used the issue of the lack of school buildings to stop the school reform pushed by the SPD. That did not change the urgent task of multiplying the pace of school construction. At the same time, the city's resources were limited: there were enough other expensive tasks in housing construction and industrial settlement. In 1952, Paul Seitz was appointed First Director of Construction and Head of the Structural Engineering Office in Hamburg, and thus also deputy to the Senior Director of Construction at the Hamburg Building Authority, Werner Hebebrand. Seitz held these posts until 1963, when he left Hamburg for a professorship at the Berlin University of the Arts. During his ten-year tenure, Seitz designed mainly schools, university buildings and other public buildings. For schools, he relied entirely on serial designs used according to a modular principle. In doing so, he pursued two concepts: the "Green School" and the "wachsende Schule" (growing school). The Green School was intended to realize ideas of the new education movement of a return to nature by placing smaller school buildings of at most two stories on generously sized, green school grounds, ideally with direct access to the garden from the classroom. This type of construction deliberately stood out from the imposing "school barracks" of the Wilhelmine period and was intended to appear transparent and light. 
The dimensions of these buildings were to have a human scale, a deliberate contrast to the old school buildings in which a "whole generation was drilled in racial-ideological and militaristic values." School construction was not a focus of Nazi architecture, since hardly any new schools were planned and completed between 1933 and 1945. Nevertheless, the new school buildings were also intended to express a turning away from the secondary virtues that had made the Nazi state and world war possible. The "wachsende Schule" was to be available quickly and then grow with the needs of the school. Initially, this concept also called for school buildings to be easily relocatable, and was accompanied by the abandonment of costly foundations and basements. The first series of pavilion schools was built in Hamburg according to this concept; a much-noted prototype for this type of construction at the time is the listed Mendelssohnstraße school in Bahrenfeld. The serial construction developed in the process was the pavilion type A, made of lightweight materials by Polensky & Zöllner. By 1961, 459 new classrooms of this type had been erected. Design phase The design task for Seitz and his working group in the structural engineering department was clear: How could new school construction in Hamburg be drastically accelerated, despite a shortage of skilled workers and budget constraints, without abandoning the ideals of the "Green School"? What could a series design look like that would also work on smaller or denser school sites? And how would this serial design have to be laid out in order to contain all functions of a school, as the nucleus of a "wachsende Schule", already in the first construction phase? The answer to these questions was the Kreuzbau. The shortage of skilled workers in the construction industry was a major obstacle to accelerating the school building program. 
Conventional buildings needed skilled masons, foremen, scaffolders, and roofers. Public school construction competed for these skilled workers with residential construction and the private sector. The use of precast concrete reduced the need for such workers on the job site, while the quick 15-day erection time allowed the erection crew to move on while the build-out continued. Standardized school construction reduced the need for skilled workers in the shell construction phase to specially trained assemblers; the "Pavilion A" assembly system had already cut construction time to one fifth. The pavilion schools were cheap and quick to build, but they usually had only one or at most two stories, and correspondingly high land consumption. A Volksschule with the usual number of classes of the time required a plot of at least 24,000 m2. This was still possible in the planned new development areas on the edges of Hamburg (e.g. Rahlstedt and Bramfeld), but in densification areas such as Wilhelmsburg and Wandsbek, or where war-damaged school buildings near the inner city had to be replaced, as in Horn and Hamm, such plots were not available. In addition, it became apparent that the influx and birth rates in the new development areas were higher than expected, so that the schools already designed had to accommodate more students. A classroom building of more than two stories with a good ratio of usable to circulation space was essential to achieve the necessary space efficiency. In response to this design task, Seitz developed the Kreuzbau starting in 1955. The three-story arrangement made better use of the land than a single-story pavilion. In addition, the direct access to the classrooms from the staircase resulted in a very favorable ratio of 80% usable space to 20% circulation space. However, the lowest grades of a school were still to be housed in pavilions accessible at ground level. 
The design relied heavily on precast concrete elements and placed great emphasis on rapid erectability. The small heating cellar with oil-fired heating made it possible to start school operations immediately, even in winter, regardless of progress in further construction phases. In 1955/56, a pre-series type of the Klassenkreuz was built for the school at St. Catherine's Church. In contrast to the series production, this prototype had four floors, because the plot at the Katharinenkirchhof was only 7,000 m2 and thus lacked space for the erection of further school buildings. The school at St. Catherine's Church was listed as a historical monument, but was nevertheless demolished in 2011 in favor of the newly built "St. Catherine's Quarter". Construction phase From then on, the Klassenkreuz was to serve as the centerpiece and first construction phase of the "wachsende Schule". Once it was erected, classes could begin immediately in the Kreuzbau, while other school buildings were added around it. Ideally, the following sequence was typical: First construction phase: Klassenkreuz Second construction phase: classrooms with differentiation rooms in pavilions Third construction phase: administration rooms, janitor's apartment, common room / break hall Fourth construction phase: classrooms, gymnasium (side hall), smaller gymnasium, auditorium The space available in the Klassenkreuz was in accordance with the specifications of the Room and Furnishing Program for Hamburg Schools of 1958. The production of the prefabricated parts for the Kreuzbau buildings was entrusted to the "Arbeitsgemeinschaft Kreuzschulen," which consisted of Polensky & Zöllner and Paul Thiele AG. The prototype at St. Catherine's Church was accepted on August 9, 1957. Four more acceptances of Kreuzbau buildings followed before the end of August 1957. 
Based on the experience of the first series of ten, the construction time for a Kreuzbau was half a year, half that of a "normal" school building. The planned cost was 670,000 marks per Kreuzbau. At the end of August 1957, school inspector Dressel announced that the new construction method would eliminate the school space shortage in Hamburg within four to five years. In October 1961, the topping-out ceremony for the 50th Kreuzbau was celebrated at the Gymnasium Corveystrasse. On October 21, 1963, the last Kreuzbau was accepted at Krohnstieg. In just over six years, the type had been built 67 times in Hamburg, providing 796 classrooms. Kreuzbau buildings were also erected outside Hamburg, for example in modified form at the Gottfried-Röhl Elementary School in Berlin, built between 1961 and 1964. In Freiburg im Breisgau, nine Kreuzbau buildings based on Hamburg's design were built by 1976 at school sites in new housing estates west of the city center. From the early 1960s, school construction in Hamburg could no longer keep up with the pace of new housing construction. In Bramfeld, shift teaching was reintroduced in 1961 in the newly built Hegholt housing estate, and in 1965, Wilhelm Dressel, the school inspector responsible for school construction, publicly admitted that they had "lost the race with the new housing developments". As an emergency solution, "classrooms and more classrooms" had to be built in the outlying districts, and the construction of gymnasiums, break halls, subject rooms, and auditorium buildings was postponed. While new school construction proceeded in absolute numbers throughout the city, the "growing school" concept faltered at individual school sites, or from the mid-1960s onward led to growth only in classroom buildings of the newer "Type-65", "Honeycomb construction," and "Type-68" ("Double-H") series. 
Gymnasiums were not built at some sites until ten years after the opening of the school, and auditorium buildings were rarely built at all. After the abolition of the Volksschule in 1964, the need for specialized rooms was concentrated at secondary schools; many of the Kreuzbau school sites of the early 1960s are now elementary schools. Across the various types of buildings, the Hamburg school construction program was "unique in scope" compared to other large West German cities. Nowhere else in the Federal Republic of Germany did new schools between 1950 and 1980 rely so heavily on assembly and type buildings as in Hamburg, accompanied by the extensive abandonment of individual designs. In the German-speaking world, this is surpassed only by the standardized school construction of the GDR. Description The Kreuzbau is a three-story building with a cruciform floor plan. There are four classrooms on each floor, accessed through a central stairwell without corridors. From the stairwell, the schoolchild reaches his or her classroom through a small anteroom that serves as a cloakroom. Each classroom is assigned a smaller differentiation room, separated from it by a glass wall. The classrooms measure between 65 and 68 m2, the differentiation rooms between 8 and 11 m2. In addition, there are WC rooms on each floor. Room layout The ground plan of this type of building is not axially symmetric. The shape of the Kreuzbau floor plan is sometimes compared to the sails of a windmill, because the surfaces of the wings are laterally displaced with respect to the center of rotation. Unlike a windmill, however, the Kreuzbau floor plan is not rotationally symmetrical either, because the wings are not of equal length. This results from the fact that the wings are "pushed" into the floor plan of the stairwell to different extents: only two wings accommodate the WC rooms and emergency stairwells and are correspondingly longer. 
In the usual arrangement, the north wing protrudes furthest from the building at about 17 m, while the shortest wing, at about 12 m, is either the west wing (right-tapered variant) or the east wing (left-tapered variant). The floor plan of each classroom has the shape of a right trapezoid. In each of the four wings, one side extends from the stairwell at a right angle, while the other side of the wing tapers toward the front. The tapered side is the same for each wing; in some Kreuzbau buildings it is always the right side, in others always the left. The four end faces are always parallel to the staircase. The depth of the classrooms is up to 8 m. With an economical room height, this requires two-sided lighting, which also enables cross-ventilation. Typologically, the Klassenkreuz is thus a cobbler type, since there are no corridors and each classroom is lit and ventilated from two sides. Each classroom has a main window wall with tall windows and, opposite, a secondary window wall with a light-diffusing glazed window band just below the ceiling. The windowless panel wall always forms the end face of the wing. If the particular plot of land allowed it, the Kreuzbau building was always placed for maximum use of sunlight: the main window wall of the east and west wings faces south, while for the north and south wings it faces east. Thus, when two wings are viewed together, there is a characteristic sequence of main and secondary window walls, from which the cardinal direction of the respective wing can be derived. Development The concept of the pavilion school provides for a loose, organic arrangement of the building structures on a generous plot, connected to each other by open arcades. These arcades, regularly used in Hamburg schools of the 1950s and early 1960s, were designed as elevated flat roofs supported by unadorned tubular frames.
In Hamburg, the arcades usually have a clear height of little more than two meters and connect to the pavilions at the upper edge of the doorway. Apart from the emergency exits, the Kreuzbau building has two entrances. These are located at the points where the wings abut, one entrance on the southeast side and the other diagonally opposite on the northwest side. Thus, one entrance lies between two secondary window walls and the other between two main window walls. Because of the height of the windows relative to the height of the arcades, only the northwest entrance can be connected to the arcade system, where the arcade roof surface can be routed below the bottom edge of the secondary window band. At the lower-reaching main windows, the roof would otherwise run in front of the window area. Many Kreuzbau buildings are not (or no longer) connected to the arcade system at their locations at all. Behind the not particularly wide glass entrance door, a lobby leads into the central stairwell. The lobby on the first floor occupies the same position in the floor plan as the escape connecting corridors on the two upper floors, which serve there as a separate escape route. The stairwell has a rectangular footprint, with its long side oriented along the north–south axis. The actual staircase is located in the southwest corner of the stairwell and leads to the next floor in two flights at right angles to each other, connected by a landing. The stairwell eye has the shape of a kite with strongly rounded corners. Together with the narrow metal handrails, this design is very typical of the 1950s. The stairwell is an interior one, so it has no windows. Although both the entrance doors on the first floor and the escape connecting corridors on the upper floors are glazed, they lie at the two ends of the vestibule or corridor. As a result, only a limited amount of daylight enters the stairwell.
Two circular skylights are incorporated in the roof for additional lighting. The fire protection concept of the Kreuzbau building provides that the main escape route leads through the stairwell, which thus forms the "necessary stairs". In addition to this main stairwell, there are two emergency stairwells on the end faces of the north and west wings, from where lateral emergency exits lead to the outside. Each classroom therefore has a second escape route, either by direct connection to an emergency stairwell or by an escape connecting corridor to an adjacent classroom from which an emergency stairwell can be reached. The escape connecting corridor is separated from the main stairwell in a smoke-tight manner. This concept corresponded to the building police regulation of 1938 valid at the time of construction, which was confirmed in 1957 and 1958 in consultations of all responsible expert commissions (the so-called "theater commission"). Building police approval was granted in 1961 and confirmed again by the building regulations office in 1974. The first Kreuzbau buildings erected belonged to the series "K1 V1". In this first series, the two emergency stairwells at the end faces of the north and west wings are glazed. In later, less elaborate series, the emergency stairwells are still present but no longer visible from the outside. Interior In 2012, Hamburg's Office for the Protection of Historical Monuments commissioned a survey of the post-war buildings of the Uferstraße Vocational School, which included a Kreuzbau building, an eight-classroom wing and an administration building. This ensemble was listed in 1973 together with the buildings by Fritz Schumacher. In the process, the following original design of the Kreuzbau building was worked out: The interiors of the Kreuzbau building were designed on the vertical surfaces with glass elements, floor-to-ceiling wooden panels and light yellow exposed brickwork.
These surfaces were articulated both vertically and horizontally. The ceilings of the rooms were clad with rectangular acoustic panels framed with simple wood trim at the transition to the wall surfaces. The doors leading from the stairwell were recessed into wood-clad wall surfaces and finished in exposed wood. The radiators and the inner doors of the WCs were painted in a light yellowish-red color. The doors leading from the classrooms were also finished in exposed wood and had glass panels. The slender staircase was executed in béton brut, and the handrail was made of metal. The floor was covered with dark, iridescent floor tiles. The classrooms were equipped with mobile chairs and desks, in contrast to the rigid school desks of the pre-war period. Specialized rooms were equipped with sound-absorbing ceiling paneling and built-in cabinets. More than half of the Kreuzbau buildings were equipped with artworks purchased with funds from the "Art in Construction" program of the building authority. Most of these were murals or reliefs in the stairwells.
Hamburg artists funded in this way included (street names of demolished Kreuzbau buildings (as of 2020) in italics): Ulrich Beier (Stephanstraße), Gerhard Brandes (Walddörferstraße), Annette Caspar (Potsdamer Straße), Jens Cords (Schenefelder Landstraße, Fahrenkrön), Hanno Edelmann (An der Berner Au), Arnold Fiedler (Alsterredder), Heinz Glüsing (Beltgens Garten), Erich Hartmann (Vermoor), Helmuth Heinsohn (Wesperloh), Volker Detlef Heydorn (Windmühlenweg), Fritz Husmann (Sanderstraße), Diether Kressel (Brucknerstraße, Humboldtstrasse), Nanette Lehmann (Fährstrasse), Max Hermann Mahlmann (Heinrich-Helbing-Strasse), Maria Pirwitz (Schimmelmannstrasse), Ursula Querner (Benzenbergweg), Albert Christoph Reck (Rahlaukamp), Walter Siebelist (An der Berner Au), Herbert Spangenberg (Stockflethweg), Eylert Spars (Francoper Straße, Hanhoopsfeld, Krohnstieg), Hans Sperschneider (Hinsbleek), Hann Trier (Struenseestraße) and Johannes Ufer (Neubergerweg). Building construction and assembly Structurally, the Kreuzbau building is a skeleton building made of precast concrete elements, built on a foundation without a full basement and finished with a flat roof. After setting up the construction site, the basement and foundation were built using conventional construction methods: a boiler room basement was excavated under one of the four wings, and the ceiling above the basement and the rest of the foundation were then constructed in reinforced concrete. All other slabs of the Kreuzbau building were assembled from 16 cm thick precast concrete elements. These precast elements were brought to the construction site by special trucks and lifted into place by a single mobile crane; a tower crane was not required. The assembly of the Kreuzbau building was carried out by means of an auxiliary scaffold that extended over two stories.
This scaffold was precisely aligned and served as falsework for the preliminary attachment of the reinforced concrete columns and wall sections as well as for the support of the floor slabs. The largest components were the 10.7 m long vertical main columns that run through all three floors. The slabs are designed as T-beams, with two to four main ribs (webs) transferring the load. Connecting reinforcement bars project inward from the main columns, to which an edge beam of cast-in-place concrete is attached, connecting the main columns to the slabs. Once the inner corner section was installed, the structure had sufficient torsional stability and the auxiliary scaffold was removed. After 15 working days the concrete skeleton was complete and the flat roof could be added. Construction was thus practically independent of the weather, as can be seen from the completion dates, which show no winter break. After completion of the roof, the installation work and drywall construction took place, while at the same time the end faces of the wings were bricked up with masonry. Demolition or redevelopment? Some of the series and prefabricated buildings of the Hamburg Building Authority from the post-war period are now considered unsuitable for renovation. This applies in particular to the Type A pavilions, whose wood-frame walls were clad with an outer skin of "Fulgurit" and an inner layer of "Lignat", brand names for fire-retardant, asbestos-containing fibre cement boards. Beginning in 1987, the city of Hamburg had its schools inspected for asbestos and in some cases closed. From 1988, 182 school pavilions in Hamburg were disposed of or demolished because of asbestos contamination. From 1993, the use of asbestos in new buildings was generally prohibited. Kreuzbau buildings are not structurally contaminated with asbestos, so the question of renovation was decided primarily on the basis of economic efficiency and space requirements.
Space requirements and building condition From 2010 onwards, the decision on whether to renovate or to demolish and replace most of the Kreuzbau buildings became urgent: the Kreuzbau buildings were by then around 50 years old and no longer met current requirements, especially in terms of thermal insulation. Moreover, the accessibility required by the increasing inclusion of pupils with walking disabilities exists only on the first floor of the original buildings. Unrenovated Kreuzbau buildings were therefore consistently rated 4 or 5 in the 2019 building classification of the Hamburg authorities (1 = new construction, 2 = basic renovation meeting all current standards, 6 = practically unusable). A building classification (GKL) of 2 is targeted through renovation. At the same time, the number of pupils in Hamburg has risen sharply since the turn of the millennium. Between 2011 and 2020 alone, the total number of students increased by 11%, and the number of elementary school students by more than 17%. The increase of 11,600 elementary school students from 2011 to 2020 alone corresponds to more than 500 classrooms at a maximum class size of 23 students. By 2030, the number of students is expected to reach about 240,000, a further increase of about 20% over 2020. In view of this development, with the need for renovation on the one hand and rapidly growing space requirements on the other, the decision had to be made at most locations between renovation or demolition with replacement construction. In most cases, the state operation Schulbau Hamburg (SBH) decided in favor of renovation. When a school site was abandoned altogether or a comprehensive new building concept was implemented, however, the Kreuzbau buildings were demolished. Kreuzbau buildings are difficult to integrate into existing or new buildings because of the lack of options for horizontal circulation.
If a corridor is to lead from a directly adjacent building into a Kreuzbau building, this can only be done via the end face of a wing, which thus becomes a passageway and is lost as a classroom. About 80% of the Kreuzbau buildings were still standing in 2020, mainly used by elementary schools as classrooms, and the majority of them have been renovated. A few buildings of this type are under ensemble protection, i.e. listed. Accessibility for physically handicapped children has so far been realized in only one case: in the Kreuzbau building of the Schule Hinsbleek, an elevator was installed in the stairwell eye. Thermal insulation Since the buildings almost never enjoy monument protection, the Energieeinsparverordnung (energy saving ordinance) applied to these renovations without concessions. This led to "large thermal insulation package[s]" and an often "crude renovation." The slender profiles of the windows and the piers were often lost, and in some cases the shading louvers were also removed. A positive counterexample is the renovation of the Kreuzbau building at Schierenberg, where the reinforced concrete piers were excluded from the insulation. These are in direct contact with the building structure only at the floor slabs and thus contribute little to heat loss. The non-load-bearing walls and the windows, on the other hand, were rebuilt instead of packing the old structure with an insulation layer on wooden battens, as is usually done. The yellow clinker bricks of the original buildings are lost in any case, as the thermal insulation layer requires a new facing. At Schierenberg, green-white glass mosaics in the parapets and black clinker on the end walls were chosen for this purpose: a quotation of the architecture of the 1960s, but not a reconstruction.
In some other renovated Kreuzbau buildings (Beltgens Garten, Stengelestraße), strong color contrasts were used, but often the aim is a color scheme that corresponds to the original materials. Art on building In front of and inside some of the Kreuzbau buildings scheduled for demolition, works by renowned artists had been installed. Where these works could be easily dismantled, they were usually moved to other buildings on the school site; sculptures in front of Kreuzbau buildings could thus be relocated. Murals or frescoes, on the other hand, are permanently attached to the building. The Kreuzbau building of the Gymnasium Rahlstedt contained three wall paintings by Eduard Bargheer from 1959. The removal of the picture carrier with the listed paintings was calculated at 150,000 euros. In 2019, two of the three paintings were installed in the atrium of the new Gymnasium building; the remaining painting had been damaged during removal. Fire protection Despite repeated tightening of the fire protection regulations since 1938, the Kreuzbau buildings still comply with the current requirements. Hamburg's 2001 fire protection regulations for school buildings require that each classroom on the same floor have two independent escape routes to exits to the outside or to necessary stairs. The length of the escape route to reach the stairs is limited to a maximum of 35 m. Since the classrooms are 9 m long and up to 8 m wide, the furthest corner results in a diagonal of 12 m per classroom, which must be traversed twice in the worst case. This still leaves more than enough escape route length for the escape connecting corridor between the classrooms. The second escape route through the emergency stairwells must be kept clear, and the other requirements for door and aisle widths are met. Had this not been the case, an exterior stairwell would have had to be added to at least two of the wings during renovation, with corresponding costs.
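The escape-route arithmetic above can be verified with elementary geometry; a minimal sketch (room dimensions and the 35 m limit are taken from the regulations cited above):

```python
import math

# Classroom dimensions and escape-route limit from the Hamburg 2001 regulations
length_m, width_m = 9.0, 8.0
limit_m = 35.0

# Diagonal from the furthest corner of a classroom
diagonal_m = math.hypot(length_m, width_m)  # sqrt(9^2 + 8^2) ~ 12.04 m

# Worst case: the diagonal is traversed twice (own room plus the
# adjacent room reached via the escape connecting corridor)
worst_case_m = 2 * diagonal_m

print(f"diagonal: {diagonal_m:.2f} m, worst case: {worst_case_m:.2f} m")
print("within limit:", worst_case_m <= limit_m)
```

The worst-case path of roughly 24 m stays well under the 35 m limit, which is why no exterior stairwells had to be added.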
The practical suitability of the escape concept has so far been "tested" in one case: at the Schule Eckerkoppel, a fire broke out in the Kreuzbau building while school was in session. The fire originated on the second floor of the east wing and from there set fire to the floor above and the flat roof structure. All students were evacuated; there were no injuries. Two classrooms were completely burned out, and a third was severely damaged by extinguishing water and soot. The Kreuzbau building was subsequently demolished. As a replacement, after one year of planning and nine months of construction, a modular wooden building of the "Hamburger Klassenhaus" type was erected to accommodate twelve classes on two floors. The new building was occupied in January 2020, initiating the next phase of serial classroom construction in Hamburg. Classification and evaluation The Kreuzbau building was part of an attempt to greatly accelerate school construction and significantly increase space efficiency compared to the pavilion school, while maintaining the ideal of the "school in the green." These goals were partially achieved, but the "growing school" concept could not keep pace with the need for classrooms. A balance of classroom buildings, community buildings and green spaces was rarely achieved, certainly not with buildings that followed the same stylistic idea. Instead, many school sites in Hamburg feature a mixture of serial buildings of different generations and styles that wrap around the schoolyard like annual rings: starting with lightweight pavilions, then a Kreuzbau building, plus a Seitz gymnasium and administration building with a clinker facade, and concluding with honeycomb buildings made of concrete or a string of Type 65 bricks. From a functional and aesthetic point of view, the Kreuzbau building is a successful design, at least as an individual building.
The "communicative central staircase" and the "quality of the trapezoidal floor plan" of the classrooms, which provide a good framework for group work in addition to frontal teaching, are praised, as is the "generous sun protection". Finally, the "formative building shape" creates something like a focal point of the school grounds, especially in comparison to the low box shapes of the other Hamburg series buildings. However, the Kreuzbau building does not stand alone, but is part of an ensemble. In 1961, Egbert Kossak, who later became Hamburg's chief building director, expressed scathing criticism of Hamburg's "inferior, template-like school construction" in a letter to the editor of an architectural magazine: "With striking but unresisting monotony, the 'famous' Klassenkreuz, pavilions, and gymnasium blocks are scattered over Hamburg. [...] Hamburg [...] boasts of the mass production of proportionless structures that stand out for their questionable modernist design." The main advantage of type construction was rapid assembly with only a few workers, since in post-war Hamburg there was an immense demand for school replacement buildings and new buildings that the construction industry could not satisfy by conventional means. On the other hand, the goal of cost reduction compared to individual designs or solid buildings was not achieved: this can be seen in a comparison with the designs for special schools, which were executed by individual architects outside the structural engineering department even during the Seitz era. The poor thermal insulation compared to today's standards results from the construction method typical of the time, with slim profiles and many thermal bridges. In this respect, the Kreuzbau building is neither better nor worse than other post-war buildings. At least it is not contaminated with asbestos, and in most cases renovation is significantly less expensive than replacement construction.
Due to its construction, the Kreuzbau building is difficult to connect to new buildings. Carefully renovated, it can be an attractive solitaire. Locations The following list of Kreuzbau buildings in Hamburg does not claim to be complete. Legend: #: numbering of the Kreuzbau buildings in alphabetical order by name. Name: current user of the Kreuzbau building; for elementary schools, the name is shortened to "school", for district schools to "STS". Address: street address of the school location, linked with coordinates. A map with all coordinates is linked at the top of the article. District: district of the location of the Kreuzbau building. Borough: borough of the location of the Kreuzbau building. Year: year of construction of the Kreuzbau building, defined as the year of acceptance. Image: link to the Commons category on the school site: "Yes", there are images of the corresponding Kreuzbau building; "-", no images of the Kreuzbau building, but information on the school building. Notes: building condition, historic preservation, renovation. "First series" refers to "K1 V1" type Kreuzbau buildings, which have two glazed emergency stairwells. For demolished Kreuzbau buildings, the corresponding line is grayed out. References Bibliography Boris Meyn: Der Architekt und Städteplaner Paul Seitz. Eine Werkmonographie. Verein für Hamburgische Geschichte, Hamburg 1996. Boris Meyn: Die Entwicklungsgeschichte des Hamburger Schulbaus (= Schriften zur Kulturwissenschaft. Band 18). Kovač, Hamburg 1998. Olaf Bartels: Kreuzbau am Schierenberg. In: Bauwelt, Nr. 47.2015, pp. 30–33. Das Hamburger Klassenkreuz. In: Das Werk: Schweizer Monatsschrift für Architektur, Kunst und künstlerisches Gewerbe, Band 50 (1963), Heft 6 ("Schulbau"), pp. 234–236. Paul Seitz, Wilhelm Dressel (Hrsg.): Schulbau in Hamburg 1961. Verlag der Werkberichte, Hamburg 1961. Baubehörde der Freien und Hansestadt Hamburg (Hrsg.): Hamburger Schulen in Montagebau. Hamburg 1962, PPN 32144938X.
External links Energetic renovation of the Kreuzbau elementary school Surenland, brochure by SBH Schulbau Hamburg with floor plan Video footage of the renovated Kreuzbau at Gymnasium Meiendorf (formerly Schule Schierenberg School): aerial view, entrance, first floor, stairs. Building 20th-century architecture in Germany Hamburg Education in Germany
Kreuzbau (Hamburg)
[ "Engineering" ]
7,536
[ "Construction", "Building" ]
74,834,638
https://en.wikipedia.org/wiki/Miriam%20M.%20Unterlass
Miriam Margarethe Unterlass (born 1986 in Erlangen, Germany) is a German chemist. She is full professor of solid state chemistry at the University of Konstanz, as well as adjunct principal investigator at CeMM - Research Center for Molecular Medicine of the Austrian Academy of Sciences. On 1 October 2024, Unterlass took over the management of the Fraunhofer Institute for Silicate Research ISC in Würzburg. Education and career Miriam M. Unterlass was born in 1986 in Erlangen, Germany. She studied chemistry, process engineering and materials science as part of a double diploma degree in Würzburg, Germany, in Lyon, France, and in Southampton, United Kingdom. She completed her PhD under the supervision of Professor Markus Antonietti at the Max Planck Institute of Colloids and Interfaces in Potsdam-Golm, Germany. In 2011 she obtained her doctoral degree (magna cum laude) at the University of Potsdam, Germany, with a doctoral thesis entitled "From Monomer Salts and Their Tectonic Crystals to Aromatic Polyimides: Development of Neoteric Synthesis Routes". In 2011 she continued her career with a postdoc in the Centre national de la recherche scientifique (CNRS) laboratory Soft Matter and Chemistry under the supervision of Professor Ludwik Leibler at the École supérieure de physique et de chimie industrielles de la ville de Paris (ESPCI). In 2012 she was a visiting scholar at the Massachusetts Institute of Technology (MIT), hosted by Professor Gregory C. Rutledge. Later that year, she started as an independent group leader of the research group "Advanced Organic Materials" at the Institute of Materials Chemistry of the Vienna University of Technology (TU Wien). She habilitated (venia docendi) in materials chemistry at TU Wien in 2018 and became assistant professor with tenure in 2019. In 2018 she joined CeMM - Research Center for Molecular Medicine of the Austrian Academy of Sciences, where she continues to work as adjunct principal investigator.
In 2021 she became an associate professor at TU Wien, and since May 2021 she has been full professor of solid state chemistry at the University of Konstanz, Germany. In 2022 she was a guest professor at the Department of Chemical Science and Engineering of the Institute of Science Tokyo (formerly known as Tokyo Institute of Technology), Japan, hosted by Professor Shinji Ando. Research Unterlass works at the intersection of materials science and synthetic chemistry. Her research primarily focuses on the development of sustainable routes towards advanced materials and small molecules, based on the central hypothesis that water can serve as a near-universal solvent for chemical synthesis and processing. Her group has demonstrated that water is an ideal medium for producing advanced materials, profiting from the properties of water under hydrothermal conditions. This approach uses hot liquid water as a reaction medium to produce a variety of materials, e.g. high-performance polymers suitable for aeronautics and microelectronics, small molecules relevant to biology, medicine or optoelectronics, and inorganic-organic hybrid materials. Moreover, her group employs modern computational and automation approaches to maximize efficiency and accelerate the discovery of new materials. She has published over 40 peer-reviewed articles and given more than 80 scientific talks at conferences. In addition, she has filed over 7 patents and patent applications, and actively works alongside industry partners to translate her findings into practical applications. Selected publications F. A. Amaya-García and M. M. Unterlass*: "Synthesis of 2,3-Diarylquinoxaline Carboxylic Acids in High-Temperature Water", Synthesis 2022, 54(15), 3367-3382. F. A.
Amaya-García, M. Caldera, A. Koren, S. Kubicek, J. Menche, and M. M. Unterlass*: "Green hydrothermal synthesis of fluorescent 2,3-diarylquinoxalines and large-scale computational comparison to existing alternatives", ChemSusChem 2021, 14(8), 1853-1863. M. J. Taublaender, S. Mezzavilla, S. Thiele, F. Glöcklhofer, and M. M. Unterlass*: "Hydrothermal Generation of Conjugated Polymers on the Example of Pyrrone Polymers and Polybenzimidazoles", Angew. Chem. Int. Ed. 2020, 59, 15050-15060. M. J. Taublaender, F. Glöcklhofer, M. Marchetti-Deschmann, and M. M. Unterlass*: "Green and Rapid Hydrothermal Crystallization and Synthesis of Fully Conjugated Aromatic Compounds", Angew. Chem. Int. Ed. 2018, 57, 12270-12274. M. M. Unterlass*: "Hot Water Generates Crystalline Organic Materials", Angew. Chem. Int. Ed. 2018, 57, 2292-2294. L. Leimhofer, B. Baumgartner, M. Puchberger, T. Prochaska, T. Konegger, and M. M. Unterlass*: "Green one-pot synthesis and processing of polyimide-silica hybrid materials", J. Mater. Chem. A 2017, 5, 16326-16335. B. Baumgartner, A. Svirkova, J. Bintinger, C. Hametner, M. Marchetti-Deschmann, and M. M. Unterlass*: "Green and highly efficient synthesis of perylene and naphthalene bisimides in nothing but water", Chem. Commun. 2017, 53, 1229-1232. B. Baumgartner, M. J. Bojdys, and M. M. Unterlass*: "Geomimetics for Green Polymer Synthesis: Highly Ordered Polyimides via Hydrothermal Techniques", Polym. Chem. 2014, 5, 3771-3776.
Memberships
Member of the German Society of Materials Science (DGM), since 2023
Member of the International Society for Advancement of Supercritical Fluids (ISASF), since 2023
Member of the International Solvothermal and Hydrothermal Association (ISHA) and representative for Austria, since 2019
Member of the Young Academy of the Austrian Academy of Sciences (ÖAW), since 2018
Member of the Royal Society of Chemistry (MRSC), since 2015
European Crystallographic Association (ECA), since 2015
German Association of University Professors and Lecturers (DHV), since 2014
Austrian Chemical Society (GÖCH), since 2013
German Chemical Society (GDCh), since 2005
Awards and honors
2023 Roy-Somiya Award of the International Solvothermal and Hydrothermal Association (ISHA) for her "outstanding contributions to the field of hydrothermal and solvothermal synthesis by a scientist under the age of 45"
2023 Nomination for the LUKS teaching awards of the University of Konstanz
2022 Appointment to the Scientific Advisory Board (materials field) of the Federal Institute for Materials Research and Testing (BAM) for the term 2022–2025
2021 Appointment to the Board of Trustees of the Hochschuljubiläumsfonds of the city of Vienna
2020 National Patent Award (Staatspreis Patent) from the Austrian Federal Ministry for Climate Action, Environment, Energy, Mobility, Innovation, and Technology
2020 Bürgenstock JSP Fellow of the Swiss Chemical Society (SCS)
2020 Associate Editor of the Journal of Materials Chemistry A (Royal Society of Chemistry, RSC)
2020 Associate Editor of the journal Materials Advances (RSC)
2020 Member of the selection committee for scholarships and representative for Solid State and Materials Chemistry of the Alexander von Humboldt Foundation
2020 Selected as mentoring lecturer and member of the selection committee of the Austrian Study Foundation (Österreichische Studienstiftung)
2019 Member of the Scientific Advisory Board of the European Forum Alpbach
2019 Member of AcademiaNet, nominated by the Austrian Science Fund (FWF)
2019 Full member of the Wolfgang Pauli Institute (WPI) Vienna
2019 Finalist in the RSC Emerging Technologies competition (category: Enabling Technologies) with UGP Materials
2018 Selected as one of 100 Women in Materials Science by the Royal Society of Chemistry
2018 Selected for the International Visitor Leadership Program (IVLP) of the U.S. Department of State's Bureau of Educational and Cultural Affairs (specific program: "Hidden No More: Empowering Women Leaders in Science, Technology, Engineering, the Arts, and Mathematics (STEAM)")
2018 Elected member of the Young Academy of the Austrian Academy of Sciences
2018 Sallinger Fonds S&B (Science-to-Business) award to UGP Materials
2017 START prize of the Austrian Science Fund (FWF)
2017 PHÖNIX award (Austrian founders award) in the category "best prototype"
2017 Selected for the Hochschullehrer-Nachwuchs-Workshop of the German Chemical Society (GDCh), section for Macromolecular Chemistry
2016 Member of the Fast Track program ("Excellence and Leadership Skills for Outstanding Women in Science") of the Robert Bosch Foundation
2016 Named one of the Young Talents 2016 by the journal Macromolecular Chemistry and Physics
2016 Pro Didactica Awards, 2nd place in the category best lecture within the BSc curriculum "Technical Chemistry" at TU Wien for "Inorganic Chemistry I"
2016 Pro Didactica Awards, 2nd place in the category fairest exam within the BSc curriculum "Technical Chemistry" at TU Wien for "Inorganic Chemistry I"
2016 INiTS Startup-Camp Award for the best overall concept presented at the i2c StartAcademy
2015 Young Participant of the Lindau Nobel Laureate Meeting
2015 Named one of the Emerging Investigators 2015 by the journal Polymer Chemistry
2014 Anton Paar Science Award of the Austrian Chemical Society (GÖCH)
2014 One of six finalists for the Austrian Innovator of the Year Award
References
External links
Miriam M.
Unterlass publications indexed by Google Scholar Website of the UnterlassLAB - Research Group Official Website Website at University of Konstanz Website at CeMM Press Release - Prof. Miriam Unterlass Appointed New Director of the Fraunhofer Institute for Silicate Research ISC Living people 1986 births German chemists People from Erlangen Solid state chemists German women chemists University of Würzburg alumni Academic staff of TU Wien Academic staff of the University of Konstanz
Miriam M. Unterlass
[ "Chemistry" ]
2,234
[ "Solid state chemists" ]
74,834,793
https://en.wikipedia.org/wiki/Vanadium%28II%29%20fluoride
Vanadium(II) fluoride is a fluoride of vanadium, with the chemical formula VF2. It forms blue crystals. Preparation Vanadium(II) fluoride can be produced by the reduction of vanadium trifluoride by hydrogen in a hydrogen fluoride atmosphere at 1150 °C: 2 VF3 + H2 → 2 VF2 + 2 HF Properties Physical properties Vanadium(II) fluoride crystallizes in the tetragonal crystal system with space group P42/mnm (No. 136). Its lattice constants are a = 480.4 pm and c = 323.7 pm. Reactions Vanadium(II) fluoride is a strong reducing agent that can reduce nitrogen to hydrazine in the presence of magnesium hydroxide. It dissolves in water to form [V(H2O)6]2+ ions. References Vanadium(II) compounds Fluorides
Vanadium(II) fluoride
[ "Chemistry" ]
189
[ "Fluorides", "Salts" ]
74,834,873
https://en.wikipedia.org/wiki/Tantalum%28IV%29%20iodide
Tantalum(IV) iodide is an inorganic compound with the chemical formula TaI4. It dissolves in water to give a green solution, but the color fades when left in the air and produces a white precipitate. Preparation Tantalum(IV) iodide can be prepared by the reduction reaction of tantalum(V) iodide and tantalum. If pyridine is used as the reducing agent, the adduct TaI4(py)2 is formed. Tantalum(IV) iodide can also be obtained by reacting tantalum(V) iodide with aluminum, magnesium or calcium at 380 °C. Ta6I14 is also formed, which makes it difficult to produce very pure crystallized tantalum(IV) iodide. Properties Tantalum(IV) iodide is a black solid. It has a crystal structure isotypic to that of niobium(IV) iodide. Single-crystalline tantalum(IV) iodide was first obtained in 2008 by Rafal Wiglusz and Gerd Meyer as a chance product of a reaction in a tantalum ampoule that was supposed to lead to the product Rb(Pr6C2)I12. The single crystal has a triclinic crystal structure with space group P1̄ (space group no. 2) with two formula units per unit cell (a = 707.36 pm, b = 1064.64 pm, c = 1074.99 pm, α = 100.440°, β = 89.824° and γ = 104.392°). The crystal structure differs from that of other transition metal tetraiodides, which usually have a MI4/2I2/1 chain structure, as it consists of TaI6 octahedra bridged over a common surface to form a dimer. Two such dimers bridge over a common edge to form a tetramer. References Tantalum compounds Iodides Metal halides
Tantalum(IV) iodide
[ "Chemistry" ]
426
[ "Inorganic compounds", "Metal halides", "Salts" ]
74,835,475
https://en.wikipedia.org/wiki/Kneeling%20windows
[Image: Kneeling windows of Palazzo Medici-Riccardi] Kneeling windows () is a type of opening used from the fifteenth century, especially in the Tuscany area. History It is a monumental type used especially on the ground floor: the sill rests on protruding supports that resemble those of a kneeler's bench. Typical of the Mannerist and Tuscan Baroque periods, it is usually enclosed by a grille, framed and crowned by a tympanum, sometimes with decorations, often zoomorphic: for example, the two supports are often carved as lion's paws and sometimes the space between them is decorated with a bas-relief. The first kneeling window is traditionally the one in Palazzo Medici Riccardi in Florence, attributed to Michelangelo. It was made to occupy the large arch of a portal that once led to a family loggia. Among the architects who indulged in the creation and decoration of kneeling windows were Bartolomeo Ammannati and Bernardo Buontalenti. Notes Bibliography Italian sources AA.VV. Enciclopedia dell'Architettura, Garzanti, Milano 1996. Pevsner, Fleming and Honour, Dizionario di architettura, Utet, Torino 1978; reprinted as Dizionario dei termini artistici, Utet Tea, 1994 External links Ritratto di Michelangelo. Firenze, Casa Buonarroti, 1535 ca. Finestra inginocchiata Architectural elements
Kneeling windows
[ "Technology", "Engineering" ]
317
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
69,071,532
https://en.wikipedia.org/wiki/Dumping%20in%20Dixie
Dumping in Dixie is a 1990 book by the American professor, author, activist, and environmental sociologist Robert D. Bullard. Bullard spotlights the quintessence of the economic, social, and psychological consequences induced by the siting of noxious facilities in mobilizing the African American community. Starting with the assertion that every human has the right to a healthy environment, the book documents the journey of five American communities of color as they rally to safeguard their health and homes from the lethal effects of pollution. Further, Bullard investigates the heterogeneous obstacles to social and environmental justice that African American communities often encounter. Dumping in Dixie is widely acknowledged as the first book to discuss environmental injustices and distill the concept of environmental justice holistically. Since the publication of Dumping in Dixie, Bullard has emerged as one of the seminal figures of the environmental justice movement; some even label Bullard as the "father of environmental justice". Publication history Dumping in Dixie has three editions. Westview Press published the first edition on October 16, 1990. Subsequently, Westview Press launched the second edition on July 1, 1994. Routledge released the third and latest edition on March 24, 2000. Summary of chapters Seven chapters compose Bullard's book. Chapter 1 - Environmentalism and Social Justice The introductory chapter of Bullard's book chronicles the rise of the environmental movement in the United States, exposes the void of existing literature examining the intersection of environmentalism and social justice, and details the methodologies employed to produce the book. Chapter 2 - Race, Class, and the Politics of Place The second chapter considers the intersection of race, class, and place.
In particular, Bullard alludes to four cases (Chemical Waste Management, SCA Services, Industrial Chemical Company, and Warren County PCB Landfill) to depict how the siting of toxic waste facilities and landfills often burdens communities with high percentages of poor, elderly, young, and minority (mainly black) residents. Chapter 3 - Dispute Resolution and Toxics: Case Studies In the third chapter, Bullard examines five case studies from diverse settings (Houston, Dallas, Virginia, Louisiana, and Alabama) to highlight the conflicts and unfairness surrounding "unwanted land uses." Chapter 4 - The Environmental Justice Movement: Survey Results Bullard commences the fourth chapter by revealing a gap in the existing literature—while studies underlining how unwanted land uses disproportionately jeopardize poor or minority communities are burgeoning, information regarding how victims cope with such ecological threats is limited. Correspondingly, Bullard presents the results of his survey study investigating environmental dispute-resolution strategies employed by black residents, deriving from 523 responses from 5 locations. Chapter 5 - Environmental Racism Revisited Through chapter five, Bullard unpacks the concept of environmental racism—how public policies and industrial practices emanate benefits for white people while shifting costs to people of color—by depicting the economic transition of southern states like Louisiana. Chapter 6 - Environmental Justice as a Working Model In essence, the sixth chapter delves into the contemporary environmental protection model while simultaneously advancing an alternative framework, the national environmental justice framework, for addressing the wants and concerns of disenfranchised communities. Chapter 7 - Action Strategies for the Twenty-First Century The seventh chapter provides a brief recapitulation of Bullard's chief findings on people-of-color environmentalism.
Further, it outlines action strategies necessary to enhance the environmental justice movement and environmentalism in this generation. Reaction Favorable Bailus Walker Jr.'s review in the Journal of Public Health Policy commends Bullard's "sensibility" and "style". Moreover, Walker Jr. labels Bullard as a "master at reaching relevant conclusions" while simultaneously lauding Dumping in Dixie's uniformity and clarity. Similarly, accentuating the book's "unusual approach", H.H. Fawcett posits that Dumping in Dixie is "a book well worth considering" in a review in the Journal of Hazardous Materials. Finally, Daniel Suman's review in Ecology Law Quarterly ascertains that Bullard's Dumping in Dixie "is an important contribution" to the burgeoning field of environmental equity and racism. Mixed Eddie Girdner's review proposes that Dumping in Dixie is a "valuable contribution to the literature on the hazardous waste issue and the environmental movement." Nevertheless, shortly after, Girdner cautions readers that Bullard's book reaches an "overly optimistic conclusion." Likewise, Robert Collin's review published in the Journal of the American Planning Association maintains that Bullard's book presents "both original research and good descriptive data" regarding "an important land-use issue." However, Collin also warns that "although Dumping in Dixie is successful in targeting a general audience, planners will find themselves with unanswered questions" by dint of the book's lack of specificity. Finally, Bruce Wade, writing in Contemporary Sociology, notes that Dumping in Dixie is a "timely book on an old topic: environmental pollution." In addition, Wade commends the book's "lucid" writing style and "logical" organization, further adding that Bullard's commentary "provides valuable insight into the different processes that foster social protest." Regardless, Wade asserts that Bullard's advanced solutions are "mundane and traditional."
All in all, Wade concludes that "As an explanation, Bullard's work falls short; his description, however, is provocative and helpful." Unfavorable Lawrence Hamilton's review in Social Forces denounces Dumping in Dixie's "narrow" solutions imbued by the NIMBY (Not In My Backyard) ideology to the waste management problem. In particular, Hamilton emphasizes that Bullard "misses a chance to connect local activism with larger environmental issues." Similarly, Texas A&M University professor John Thomas posits that Dumping in Dixie is excessively "laden with human exploitation, suffering, apathy, and pain." Furthermore, Thomas holds that Bullard has made a "commitment to prod our consciences." Aftermath Since the release of Dumping in Dixie, Bullard has continued his activism and points to situations of environmental injustice emerging in his current work. In 1991, after the book’s release, Bullard was involved in planning the First National People of Color Environmental Leadership Summit. He also assisted in passing an executive order which attempted to ensure that environmental justice had to be considered by government bodies. Additionally, he has gone on to release many other books, including several contributions from other leading activists. Environmental justice continues to be an ongoing struggle exemplified in events across the United States such as Hurricane Katrina, which Bullard has focused on through his continued body of work. Awards National Wildlife Federation (NWF) Conservation Achievement Award in Science (1990) See also Flammable: Environmental Suffering in an Argentine Shantytown (2008) Black Faces, White Spaces (2014) Salvage the Bones (2011) Toms River (2013) A Civil Action (1995) References Further reading Full text on Taylor & Francis (paywalled) Environmental sociology Environmental justice in the United States 1990 non-fiction books Environmental racism in the United States
Dumping in Dixie
[ "Environmental_science" ]
1,473
[ "Environmental sociology", "Environmental social science" ]
69,071,767
https://en.wikipedia.org/wiki/Prompt%20engineering
Prompt engineering is the process of structuring or crafting an instruction in order to produce the best possible output from a generative artificial intelligence (AI) model. A prompt is natural language text describing the task that an AI should perform. A prompt for a text-to-text language model can be a query, a command, or a longer statement including context, instructions, and conversation history. Prompt engineering may involve phrasing a query, specifying a style, choice of words and grammar, providing relevant context, or describing a character for the AI to mimic. When communicating with a text-to-image or a text-to-audio model, a typical prompt is a description of a desired output such as "a high-quality photo of an astronaut riding a horse" or "Lo-fi slow BPM electro chill with organic samples". Prompting a text-to-image model may involve adding, removing, emphasizing, and re-ordering words to achieve a desired subject, style, layout, lighting, and aesthetic. History In 2018, researchers first proposed that all previously separate tasks in natural language processing (NLP) could be cast as a question-answering problem over a context. In addition, they trained a first single, joint, multi-task model that would answer any task-related question like "What is the sentiment" or "Translate this sentence to German" or "Who is the president?" The AI boom saw an increase in the number of "prompting techniques" used to get the model to output the desired outcome and avoid nonsensical output, a process characterized by trial-and-error. After the release of ChatGPT in 2022, prompt engineering was soon seen as an important business skill, albeit one with an uncertain economic future. A repository for prompts reported that over 2,000 public prompts for around 170 datasets were available in February 2022. In 2022, the chain-of-thought prompting technique was proposed by Google researchers.
In 2023, several text-to-text and text-to-image prompt databases were made publicly available. The Personalized Image-Prompt (PIP) dataset, a generated image-text dataset that has been categorized by 3,115 users, has also been made available publicly in 2024. Text-to-text Multiple distinct prompt engineering techniques have been published. Chain-of-thought For example, given the question, "Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?", Google claims that a CoT prompt might induce the LLM to answer "A: The cafeteria had 23 apples originally. They used 20 to make lunch. So they had 23 - 20 = 3. They bought 6 more apples, so they have 3 + 6 = 9. The answer is 9." When applied to PaLM, a 540 billion parameter language model, Google claims that CoT prompting significantly aided the model, allowing it to perform comparably with task-specific fine-tuned models on several tasks, achieving state-of-the-art results at the time on the GSM8K mathematical reasoning benchmark. According to Google, it is possible to fine-tune models on CoT reasoning datasets to enhance this capability further and stimulate better interpretability. An example of a CoT prompt: Q: {question} A: Let's think step by step. As originally proposed by Google, each CoT prompt included a few Q&A examples. This made it a few-shot prompting technique. However, according to researchers at Google and the University of Tokyo, simply appending the words "Let's think step-by-step" has also proven effective, which makes CoT a zero-shot prompting technique. OpenAI claims that this prompt allows for better scaling as a user no longer needs to formulate many specific CoT Q&A examples. Few-shot learning A prompt may include a few examples for a model to learn from, such as asking the model to complete "maison → house, chat → cat, chien →" (the expected response being dog), an approach called few-shot learning.
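The zero-shot and few-shot variants described above amount to simple string templates. A minimal sketch, assuming nothing beyond the template wording quoted in the text (the helper names are illustrative, not a real library API):

```python
# Sketch of assembling chain-of-thought (CoT) prompts.
# zero_shot_cot appends the "Let's think step by step" cue;
# few_shot_cot prepends worked Q&A examples before the new question.

def zero_shot_cot(question: str) -> str:
    """Zero-shot CoT: append the step-by-step cue after the question."""
    return f"Q: {question}\nA: Let's think step by step."

def few_shot_cot(examples, question: str) -> str:
    """Few-shot CoT: worked (question, answer) pairs, then the new question."""
    parts = [f"Q: {q}\nA: {a}" for q, a in examples]
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

prompt = zero_shot_cot(
    "The cafeteria had 23 apples. If they used 20 to make lunch "
    "and bought 6 more, how many apples do they have?"
)
print(prompt)
```

The resulting string would then be sent to whatever LLM client is in use; only the prompt construction is shown here.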
Self-consistency decoding Self-consistency decoding performs several chain-of-thought rollouts, then selects the most commonly reached conclusion out of all the rollouts. If the rollouts disagree substantially, a human can be queried for the correct chain of thought. Tree-of-thought Tree-of-thought prompting generalizes chain-of-thought by prompting the model to generate one or more "possible next steps", and then running the model on each of the possible next steps by breadth-first, beam, or some other method of tree search. The LLM has additional modules that can convey the history of the problem-solving process to the LLM, which allows the system to backtrack steps in the problem-solving process. Prompting to disclose uncertainty By default, the output of language models may not contain estimates of uncertainty. The model may output text that appears confident, though the underlying token predictions have low likelihood scores. Large language models like GPT-4 can have accurately calibrated likelihood scores in their token predictions, and so the model output uncertainty can be directly estimated by reading out the token prediction likelihood scores. Prompting to estimate model sensitivity Research consistently demonstrates that LLMs are highly sensitive to subtle variations in prompt formatting, structure, and linguistic properties. Some studies have shown performance variations of up to 76 accuracy points across formatting changes in few-shot settings. Linguistic features significantly influence prompt effectiveness—such as morphology, syntax, and lexico-semantic changes—which meaningfully enhance task performance across a variety of tasks. Clausal syntax, for example, improves consistency and reduces uncertainty in knowledge retrieval. This sensitivity persists even with larger model sizes, additional few-shot examples, or instruction tuning. To address the sensitivity of models and make them more robust, several methods have been proposed.
FormatSpread facilitates systematic analysis by evaluating a range of plausible prompt formats, offering a more comprehensive performance interval. Similarly, PromptEval estimates performance distributions across diverse prompts, enabling robust metrics such as performance quantiles and accurate evaluations under constrained budgets. Automatic prompt generation Retrieval-augmented generation Retrieval-augmented generation (RAG) is a two-phase process involving document retrieval and answer generation by a large language model. The initial phase uses dense embeddings to retrieve documents. This retrieval can be based on a variety of database formats depending on the use case, such as a vector database, summary index, tree index, or keyword table index. In response to a query, a document retriever selects the most relevant documents. This relevance is typically determined by first encoding both the query and the documents into vectors, then identifying documents whose vectors are closest in Euclidean distance to the query vector. Following document retrieval, the LLM generates an output that incorporates information from both the query and the retrieved documents. RAG can also be used as a few-shot learner. Graph retrieval-augmented generation GraphRAG (coined by Microsoft Research) is a technique that extends RAG with the use of a knowledge graph (usually, LLM-generated) to allow the model to connect disparate pieces of information, synthesize insights, and holistically understand summarized semantic concepts over large data collections. It was shown to be effective on datasets like the Violent Incident Information from News Articles (VIINA). Earlier work showed the effectiveness of using a knowledge graph for question answering using text-to-query generation. These techniques can be combined to search across both unstructured and structured data, providing expanded context, and improved ranking. 
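The document-retrieval phase described under retrieval-augmented generation can be sketched as a nearest-neighbor search over embedding vectors. In this toy illustration, embed() is an assumed bag-of-words stand-in for a real dense encoder, chosen only so the example is self-contained; production systems use a learned embedding model and a vector database:

```python
# Sketch of the RAG retrieval step: encode query and documents as vectors,
# then return the documents whose vectors lie closest to the query vector
# (Euclidean distance, as described in the text).
import math

VOCAB = ["cafeteria", "apples", "poppies", "orange", "prompt"]

def embed(text: str) -> list:
    """Toy embedding: count how often each vocabulary word appears."""
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

def euclidean(a, b) -> float:
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(query: str, documents: list, k: int = 1) -> list:
    """Return the k documents nearest to the query in embedding space."""
    qv = embed(query)
    return sorted(documents, key=lambda d: euclidean(embed(d), qv))[:k]

docs = ["the cafeteria had 23 apples", "bright orange california poppies"]
print(retrieve("how many apples in the cafeteria", docs))
# → ['the cafeteria had 23 apples']
```

The retrieved documents would then be concatenated with the query and passed to the LLM for the generation phase.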
Using language models to generate prompts Large language models (LLMs) themselves can be used to compose prompts for large language models. The automatic prompt engineer algorithm uses one LLM to beam search over prompts for another LLM: There are two LLMs. One is the target LLM, and another is the prompting LLM. The prompting LLM is presented with example input-output pairs, and asked to generate instructions that could have caused a model following the instructions to generate the outputs, given the inputs. Each of the generated instructions is used to prompt the target LLM, followed by each of the inputs. The log-probabilities of the outputs are computed and added. This is the score of the instruction. The highest-scored instructions are given to the prompting LLM for further variations. Repeat until some stopping criterion is reached, then output the highest-scored instructions. CoT examples can be generated by LLMs themselves. In "auto-CoT", a library of questions is converted to vectors by a model such as BERT. The question vectors are clustered. Questions nearest to the centroids of each cluster are selected. An LLM does zero-shot CoT on each question. The resulting CoT examples are added to the dataset. When prompted with a new question, CoT examples for the nearest questions can be retrieved and added to the prompt. In-context learning Prompt engineering can possibly be further enabled by in-context learning, defined as a model's ability to temporarily learn from prompts. The ability for in-context learning is an emergent ability of large language models. In-context learning itself is an emergent property of model scale, meaning breaks in downstream scaling laws occur such that its efficacy increases at a different rate in larger models than in smaller models. In contrast to training and fine-tuning for each specific task, which are not temporary, what has been learnt during in-context learning is of a temporary nature.
It does not carry the temporary contexts or biases, except the ones already present in the (pre)training dataset, from one conversation to the other. This is a result of "mesa-optimization" within transformer layers, a form of meta-learning or "learning to learn". Text-to-image In 2022, text-to-image models like DALL-E 2, Stable Diffusion, and Midjourney were released to the public. These models take text prompts as input and use them to generate images. Text-to-image models typically do not understand grammar and sentence structure in the same way as large language models, and thus may require a different set of prompting techniques. Text-to-image models do not natively understand negation. The prompt "a party with no cake" is likely to produce an image including a cake. As an alternative, negative prompts allow a user to indicate, in a separate prompt, which terms should not appear in the resulting image. Techniques such as framing the normal prompt into a sequence-to-sequence language modeling problem can be used to automatically generate an output for the negative prompt. Prompt formats A text-to-image prompt commonly includes a description of the subject of the art, the desired medium (such as digital painting or photography), style (such as hyperrealistic or pop-art), lighting (such as rim lighting or crepuscular rays), color, and texture. Word order also affects the output of a text-to-image prompt. Words closer to the start of a prompt may be emphasized more heavily. The Midjourney documentation encourages short, descriptive prompts: instead of "Show me a picture of lots of blooming California poppies, make them bright, vibrant orange, and draw them in an illustrated style with colored pencils", an effective prompt might be "Bright orange California poppies drawn with colored pencils". Artist styles Some text-to-image models are capable of imitating the style of particular artists by name.
For example, the phrase in the style of Greg Rutkowski has been used in Stable Diffusion and Midjourney prompts to generate images in the distinctive style of Polish digital artist Greg Rutkowski. Famous artists such as Vincent van Gogh and Salvador Dalí have also been used for styling and testing. Non-text prompts Some approaches augment or replace natural language text prompts with non-text input. Textual inversion and embeddings For text-to-image models, textual inversion performs an optimization process to create a new word embedding based on a set of example images. This embedding vector acts as a "pseudo-word" which can be included in a prompt to express the content or style of the examples. Image prompting In 2023, Meta's AI research released Segment Anything, a computer vision model that can perform image segmentation by prompting. As an alternative to text prompts, Segment Anything can accept bounding boxes, segmentation masks, and foreground/background points. Using gradient descent to search for prompts In "prefix-tuning", "prompt tuning", or "soft prompting", floating-point-valued vectors are searched directly by gradient descent to maximize the log-likelihood on outputs. Formally, let Z = (z_1, ..., z_k) be a set of soft prompt tokens (tunable embeddings), while X and Y be the token embeddings of the input and output respectively. During training, the tunable embeddings, input, and output tokens are concatenated into a single sequence concat(Z; X; Y), and fed to the LLMs. The losses are computed over the Y tokens; the gradients are backpropagated to prompt-specific parameters: in prefix-tuning, they are parameters associated with the prompt tokens at each layer; in prompt tuning, they are merely the soft tokens added to the vocabulary. More formally, this is prompt tuning. Let an LLM be written as LLM(X) = F(E(X)), where X is a sequence of linguistic tokens, E is the token-to-vector function, and F is the rest of the model.
In prompt tuning, one provides a set of input-output pairs {(X^i, Y^i)}_i, and then uses gradient descent to search for arg max_Z Σ_i log Pr[Y^i | Z ∗ E(X^i)]. In words, log Pr[Y^i | Z ∗ E(X^i)] is the log-likelihood of outputting Y^i, if the model first encodes the input X^i into the vector E(X^i), then prepends the vector with the "prefix vector" Z, then applies the rest of the model F. For prefix tuning, it is similar, but the "prefix vector" Z is pre-appended to the hidden states in every layer of the model. An earlier result uses the same idea of gradient descent search, but is designed for masked language models like BERT, and searches only over token sequences, rather than numerical vectors. Formally, it searches for arg max_X′ log Pr[Y | X′], where X′ ranges over token sequences of a specified length. Prompt injection Prompt injection is a family of related computer security exploits carried out by getting a machine learning model (such as an LLM) which was trained to follow human-given instructions to follow instructions provided by a malicious user. This stands in contrast to the intended operation of instruction-following systems, wherein the ML model is intended only to follow trusted instructions (prompts) provided by the ML model's operator. See also Social engineering (security) References Deep learning Machine learning Natural language processing Unsupervised learning 2022 neologisms Linguistics Generative artificial intelligence
Prompt engineering
[ "Technology", "Engineering" ]
3,047
[ "Generative artificial intelligence", "Machine learning", "Natural language processing", "Artificial intelligence engineering", "Natural language and computing" ]
69,072,077
https://en.wikipedia.org/wiki/Eat-me%20signals
Eat-me signals are molecules exposed on the surface of a cell to induce phagocytes to phagocytose (eat) that cell. Currently known eat-me signals include: phosphatidylserine, oxidized phospholipids, sugar residues (such as galactose), deoxyribonucleic acid (DNA), calreticulin, annexin A1, histones and pentraxin-3 (PTX3). The most well characterised eat-me signal is the phospholipid phosphatidylserine. Healthy cells do not expose phosphatidylserine on their surface, whereas dead, dying, infected, injured and some activated cells expose phosphatidylserine on their surface in order to induce phagocytes to phagocytose them. Most glycoproteins and glycolipids on the surface of our cells have short sugar chains that terminate in sialic acid residues, which inhibit phagocytosis, but removal of these residues reveals galactose residues (and subsequently N-acetylglucosamine and mannose residues) that can bind opsonins or directly activate phagocytic receptors. Calreticulin, annexin A1, histones, pentraxin-3 and DNA may be released by (and onto the surface of) dying cells to encourage phagocytes to eat these cells, thereby acting as self-opsonins. Eat-me signals, or the opsonins that bind them, are recognised by phagocytic receptors on phagocytes, inducing engulfment of the cell exposing the eat-me signal. See also Find-me signals References Cell biology Cellular senescence
Eat-me signals
[ "Biology" ]
363
[ "Senescence", "Cellular senescence", "Cell biology", "Cellular processes" ]
69,072,129
https://en.wikipedia.org/wiki/Token-based%20replay
Token-based replay technique is a conformance checking algorithm that checks how well a process conforms with its model by replaying each trace on the model (in Petri net notation). Using the four counters produced tokens, consumed tokens, missing tokens, and remaining tokens, it records the situations where a transition is forced to fire and the remaining tokens after the replay ends. Based on the count at each counter, we can compute the fitness value between the trace and the model. The algorithm The token-replay technique uses four counters to keep track of a trace during the replaying: p: produced tokens c: consumed tokens m: missing tokens (consumed while not there) r: remaining tokens (produced but not consumed) Invariants: At any time: m ≤ c and c ≤ p + m. At the end: r = p + m − c. At the beginning, a token is produced for the source place (p = 1) and at the end, a token is consumed from the sink place (c is increased by 1). When the replay ends, the fitness value can be computed as follows: fitness = 1/2 (1 − m/c) + 1/2 (1 − r/p) Example Suppose there is a process model in Petri net notation as follows: Example 1: Replay the trace () on the model M Step 1: A token is initiated. There is one produced token (p = 1). Step 2: The activity consumes 1 token to be fired and produces 2 tokens (c = 1 and p = 3). Step 3: The activity consumes 1 token and produces 1 token (c = 2 and p = 4). Step 4: The activity consumes 1 token and produces 1 token (c = 3 and p = 5). Step 5: The activity consumes 2 tokens and produces 1 token (c = 5, p = 6). Step 6: The token at the end place is consumed (c = 6). The trace is complete. The fitness of the trace () on the model is: fitness = 1/2 (1 − 0/6) + 1/2 (1 − 0/6) = 1 Example 2: Replay the trace (a, b, d) on the model M Step 1: A token is initiated. There is one produced token (p = 1). Step 2: The activity a consumes 1 token to be fired and produces 2 tokens (c = 1 and p = 3). Step 3: The activity b consumes 1 token and produces 1 token (c = 2 and p = 4). Step 4: The activity d needs to be fired but there are not enough tokens. One artificial token was produced and the missing token counter is increased by one (m = 1).
The artificial token and the token at the other input place are consumed (c = 4) and one token is produced at the end place (p = 5). Step 5: The token in the end place is consumed (c = 5). The trace is complete. There is one remaining token (r = 1). The fitness of the trace (a, b, d) on the model is: fitness = 1/2 (1 − 1/5) + 1/2 (1 − 1/5) = 0.8 References Algorithms
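The fitness values in the two examples can be checked with a short script. It uses the standard token-replay fitness formula fitness = 1/2 (1 − m/c) + 1/2 (1 − r/p); the counter values follow from the replay steps above (example 1 ends with p = c = 6 and no missing or remaining tokens, example 2 with p = c = 5, one missing and one remaining token):

```python
# Fitness computation for token-based replay, using the four counters
# p (produced), c (consumed), m (missing), and r (remaining).

def replay_fitness(p: int, c: int, m: int, r: int) -> float:
    """Standard token-replay fitness: 1/2*(1 - m/c) + 1/2*(1 - r/p)."""
    return 0.5 * (1 - m / c) + 0.5 * (1 - r / p)

# Example 1: no missing and no remaining tokens -> perfect fitness.
print(replay_fitness(p=6, c=6, m=0, r=0))  # → 1.0

# Example 2: one missing and one remaining token.
print(replay_fitness(p=5, c=5, m=1, r=1))  # → 0.8
```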
Token-based replay
[ "Mathematics" ]
534
[ "Applied mathematics", "Algorithms", "Mathematical logic" ]
69,072,781
https://en.wikipedia.org/wiki/Find-me%20signals
Cells destined for apoptosis release molecules referred to as find-me signals. These signal molecules are used to attract phagocytes which engulf and eliminate damaged cells. Find-me signals are typically released by the apoptotic cells while the cell membrane remains intact. This ensures that the phagocytic cells are able to remove the dying cells before their membranes are compromised. A leaky membrane leads to secondary necrosis which may cause additional inflammation; therefore, it is best to remove dying cells before this occurs. One cell is capable of releasing multiple find-me signals. Should a cell lack the ability to release its find-me signal, other cells may release additional find-me signals to overcome the discrepancy. Inflammation can be suppressed by find-me signals during cell clearance. A phagocyte may also be able to engulf more material or enhance its ability to engulf materials when stimulated by find-me signals. A wide range of molecules, from cellular lipids, proteins, peptides, to nucleotides, act as find-me signals. History The correlation between the early stages of cell death and the removal of apoptotic cells was first studied in C. elegans. Mutants that could not carry out normal caspase-mediated apoptosis were used to demonstrate that cells in the beginning stages of death were still efficiently recognized and removed by phagocytes. This occurred because the engulfment machinery of the phagocytes was still functioning normally even though the apoptotic process in the dying cell was disrupted. A study done in 2003 showed that breast cancer cells release a find-me signal known as lysophosphatidylcholine. This research brought the concept of find-me signals to the forefront of cell clearance research and introduced the idea that dying cells release signals that flow throughout the body's tissues in order to alert and recruit monocytes to their location.
Chemicals that act as find-me signals Known types of find-me signals include: Lipids: lysophosphatidylcholine (lysoPC) and sphingosine-1-phosphate (S1P). Proteins and peptides: fractalkine (CX3CL1), interleukin-8 (IL-8), complement components C3a and C5a, split tyrosyl tRNA synthetase (mini TyrRS), dimerized ribosomal protein S19 (RP S19), endothelial monocyte-activating polypeptide II (EMAP II), and formyl peptides, especially N-formylmethionyl-leucyl-phenylalanine (fMLP). Nucleotides: adenosine triphosphate (ATP), adenosine diphosphate (ADP), uridine triphosphate (UTP) and uridine diphosphate (UDP). All of these molecules are linked to monocyte or macrophage recruitment towards dying cells. The receptor on the monocyte or other phagocyte for ATP and UTP signals has been shown to be P2Y2 in vivo. The receptor on the monocyte or other phagocyte for the CX3CL1 signal has been shown to be CX3CR1 in vivo. The roles of the S1P and LPC signals remain to be established in an in vivo model. Lipids Lysophosphatidylcholine (LPC) Identified in breast cancer cells, this find-me signal is released by MCF-7 cells to attract THP-1 monocytes. Other cells and different methods of apoptosis may be able to release LPC, but MCF-7 cells have been the most thoroughly studied. The enzyme calcium-independent phospholipase A2 (iPLA2) is most likely responsible for the release of LPC from the apoptotic cell as it is dying. The amount of LPC released is small, so it is unclear how it is able to set up a concentration gradient in the serum or plasma in order to attract phagocytes to its location. High concentrations of LPC cause lysis of many cells in its vicinity. LPC may be present in a different chemical form rather than its native form when released by an apoptotic cell. It may bind to components of the serum, making it unavailable to be modified or taken into other tissues. LPC may also be able to function with other soluble molecules. 
The receptor on the phagocyte that is thought to be linked to LPC is G2A, but this has not been confirmed. The role of LPC as a find-me signal has also not been characterized in vivo. Sphingosine 1-phosphate (S1P) It has been suggested that the induction of apoptosis results in increased expression of S1P kinase 1 (SphK1). The increased presence of SphK1 is linked to the creation of S1P, which then recruits macrophages to the immediate area surrounding apoptotic cells. It has also been suggested that S1P kinase 2 (SphK2) is a target of caspase-1, and that a cleaved fragment of SphK2 is what is released from dying cells into the surrounding extracellular space, where it is transformed into S1P. All of the studies thus far characterizing S1P have been done in vitro, and the role of S1P in recruiting phagocytes to apoptotic cells in vivo has not been determined. Staurosporine-induced cell death has been shown to influence caspase-1 to initiate the cleavage of SphK2. In other forms of apoptosis, caspase-1 is not normally induced, meaning the formation of S1P needs to be studied further. S1P can be recognized by the G protein-coupled receptors S1P1 through S1P5. Which of these receptors is relevant in the recruitment of phagocytes to apoptotic cells is not yet known. Sphingosine kinase 1 and sphingosine kinase 2 have been linked to S1P generation during apoptosis through different pathways: the level of SphK1 is increased during apoptosis, while caspases cleave SphK2. CX3CL1 CX3CL1 is a soluble fragment of the fractalkine protein that serves as a find-me signal for monocytes. Fractalkine usually resides on the plasma membrane as an intercellular adhesion molecule, but during apoptosis a soluble 60 kDa fragment is sent out as a find-me signal. CX3CL1 release is indirectly dependent upon caspase activity. CX3CL1 can also be released as part of microparticles during the beginning stages of apoptotic death of Burkitt lymphoma cells. 
The receptors on monocytes that are able to detect the presence of CX3CL1 are CX3CR1 receptors, as shown in both in vivo and in vitro studies. Nucleotides: ATP and UTP These were the most recent find-me signals to be characterized as components of the supernatant of apoptotic cells. Studies were able to show that the controlled release of the nucleotides ATP and UTP from cells in the beginning stages of apoptosis can attract monocytes both in vivo and in vitro. This has been observed in Jurkat cells, primary thymocytes, and lung epithelial cells. Release is dependent upon caspase activity. The ATP released during the beginning stages of cell death, while the dying cell's plasma membrane is still intact, amounts to less than 2% of the cell's total ATP. The released ATP preferentially attracts phagocytes through chemotaxis (directed migration) rather than through random migration (chemokinesis). The receptors on monocytes that are able to sense the release of nucleotides are in the P2Y family of nucleotide receptors. Monocytic P2Y2 has been shown to be able to recognize nucleotides in vitro and in genetically modified mice. Nucleotides in the extracellular space are often degraded by nucleotide triphosphatases (NTPases). Only a small amount of ATP is released during find-me signaling, so it is unclear how the nucleotide avoids degradation by NTPases in order to establish a gradient that recruits monocytes. NTPases may serve as regulators in various tissues, controlling how far the nucleotide signal can travel. The signaling pathway within the monocyte downstream of P2Y receptor activation is still unknown. Others The ribosomal protein S19 has been suggested as a possible find-me signal. Apoptosis causes a dimerization of S19, inducing a conformational change that allows it to bind to the C5a receptor on monocytes. Research suggests that S19 is released during the late to final stages of apoptosis. 
EMAP II, a fragment of tyrosyl tRNA synthetase, has also been shown to attract monocytes. This molecule has inflammatory properties, meaning it is capable of attracting and activating neutrophils. In apoptosis Background Humans turn over billions of cells every day as part of normal bodily processes, which corresponds to about 1 million cells being replaced per second. The ultimate goal of the body's intrinsic cell death mechanisms is to clear dying cells efficiently and asymptomatically. There are many reasons why the body needs to get rid of both diseased and non-diseased cells. As part of the cell's natural division process, excess cells may be generated during normal growth, development, or tissue repair after illness or injury. Only a fraction of these new cells will stay and mature, while the rest will die and be cleared by the body's immune system. Cells may also need to be removed because they are too old or have become damaged over time. Cell damage can occur through environmental factors such as air pollution, UV radiation from the sun, or physical injury. In most cases, the cells that are dying are recognized by phagocytes through find-me signals and removed. Quick and efficient clearing of apoptotic cells is crucial to prevent secondary necrosis of dying cells and to avoid autoantigens causing immune responses. Find-me signals alert phagocytes to the presence of apoptotic cells in the beginning stages of dying, and the phagocytes use these signals to locate the dying cell. Find-me signals set up a gradient within the surrounding tissue that attracts phagocytes to the dying cell's location. Receptors on the phagocyte respond to the find-me signals and initiate an internal signaling pathway, causing the phagocyte to move into the proximity of the cell emitting those signals. 
If the body's immune system, or more specifically its phagocytes, fails to clear dying cells, symptoms such as chronic inflammation, autoimmune disorders, and developmental abnormalities have been shown to occur. As long as the engulfment process is functioning and efficient, uncleared apoptotic cells go unnoticed in the body and do not cause any long-term symptoms. If this process is disrupted in any way, secondary necrotic cells can accumulate in the tissues of the body. This is associated with autoimmune disorders, causing the immune system to attack self-antigens on the uncleared cells. Release from dying cells The main function of a find-me signal is to be released while a cell undergoing apoptosis is still intact, in order to attract phagocytes to clear the dying cell before secondary necrosis can occur. This suggests that the initiation of apoptosis may be coupled with the release of find-me signals from the dying cells. As of now, it is unknown how LPC is released from apoptotic cells. S1P generation involves the caspase-1-dependent release of sphingosine kinase 2 (SphK2) fragments. CX3CL1 release is mediated through the release of a 60 kDa microparticle fragment of fractalkine during the beginning stages of Burkitt lymphoma cell apoptosis. Nucleotide release is one of the better-defined find-me signal release mechanisms. Nucleotides are released through a pannexin family channel known as PANX1, a four-pass transmembrane protein that forms large pores in the plasma membrane of a cell, allowing molecules up to 1 kDa in size to pass through. The nucleotides are detected by P2Y2 on monocytes, which causes the monocytes to migrate to the location of the apoptotic cell. Engulfment and clearance of apoptotic cells by phagocytes Phagocytes are able to sense the find-me signals presented by an apoptotic cell during the beginning stages of cell death. They sense the find-me signal gradient and migrate to the vicinity of the signaling cell. 
Using the presented find-me signal along with the "eat-me" signal also exposed by the apoptotic cell, the phagocyte is able to recognize the dying cell and engulf it. Phagocytes contribute to the "final stages" of cell death by apoptosis. In some systems, phagocytes are already near a dying cell and do not have to travel far in order to engulf and clear it. In most mammalian systems, however, this is not the case. In the human thymus, for example, a dying thymocyte is unlikely to be engulfed by a healthy neighboring thymocyte; rather, a macrophage or dendritic cell that resides in the thymus is likely to carry out clearance of the corpse. In this case, a dying cell needs to send out an advertisement of sorts to declare its state of death and recruit phagocytes to its location. Dying cells do this using soluble find-me signals, and phagocytes detect the gradient set up by these signals in order to navigate to the dying cell's location. Steps in the engulfment and clearance of apoptotic cells by phagocytes: Phagocytes need to be in the vicinity of the cells presenting find-me signals; the phagocytes use the find-me signals to locate these cells and move to their location. The phagocytes interact with the dying cells through the presented eat-me signals via specific eat-me signal receptors on the phagocytic cell. The phagocyte engulfs the eat-me-signal-presenting cell through induced signaling of engulfment receptors and reorganization of the phagocytic cell's cytoskeleton. The components of the dying cell are processed by the phagocyte within its lysosomes. Non-apoptotic roles Find-me signals may also play a role in the phagocytic activity of cells in the direct vicinity of cells undergoing apoptosis. This phenomenon allows cells adjacent to the apoptotic cell sending out the find-me signal to be engulfed without releasing find-me signals of their own. 
Find-me signals may also play a role in priming phagocytes to enhance their phagocytic capacity. In addition, they may enhance the production of certain bridging molecules created by macrophages. See also Eat-me signals References Molecules Phagocytes
Find-me signals
[ "Physics", "Chemistry" ]
3,248
[ "Molecular physics", "Molecules", "Physical objects", "nan", "Atoms", "Matter" ]
69,072,799
https://en.wikipedia.org/wiki/Mixtilinear%20incircles%20of%20a%20triangle
In plane geometry, a mixtilinear incircle of a triangle is a circle which is tangent to two of its sides and internally tangent to its circumcircle. The mixtilinear incircle of a triangle ABC tangent to the two sides containing vertex A is called the A-mixtilinear incircle. Every triangle has three unique mixtilinear incircles, one corresponding to each vertex. Proof of existence and uniqueness The A-excircle of triangle ABC is unique. Let Φ be a transformation defined by the composition of an inversion centered at A with radius √(AB · AC) and a reflection with respect to the angle bisector on A. Since inversion and reflection are bijective and preserve touching points, Φ does as well. Then, the image of the A-excircle under Φ is a circle internally tangent to sides AB, AC and the circumcircle of ABC, that is, the A-mixtilinear incircle. Therefore, the A-mixtilinear incircle exists and is unique, and a similar argument can prove the same for the mixtilinear incircles corresponding to B and C. Construction The A-mixtilinear incircle can be constructed with the following sequence of steps. Draw the incenter I by intersecting angle bisectors. Draw a line through I perpendicular to the line AI, touching lines AB and AC at points D and E respectively. These are the tangent points of the mixtilinear circle. Draw perpendiculars to AB and AC through points D and E respectively and intersect them in O. O is the center of the circle, so a circle with center O and radius OD is the mixtilinear incircle. This construction is possible because of the following fact: Lemma The incenter I is the midpoint of the segment joining the touching points of the mixtilinear incircle with the two sides. Proof Let Ω be the circumcircle of triangle ABC and T the tangency point of the A-mixtilinear incircle ω and Ω. Let M_B be the intersection of line TD with Ω and M_C the intersection of line TE with Ω. Homothety with center T between ω and Ω implies that M_B, M_C are the midpoints of arcs AB and AC respectively. The inscribed angle theorem implies that C, I, M_B and B, I, M_C are triples of collinear points. 
Pascal's theorem on the hexagon ABM_C TM_B C inscribed in Ω implies that D, I, E are collinear. Since the angles ∠ADI and ∠AEI are equal, it follows that I is the midpoint of segment DE. Other properties Radius The following formula relates the radius r of the incircle and the radius ρ of the A-mixtilinear incircle of a triangle ABC: ρ = r · sec²(∠A/2), where ∠A is the magnitude of the angle at A. Relationship with points on the circumcircle The midpoint of the arc BC that contains point A is on the line TI. The quadrilateral TM_B AM_C is harmonic, which means that TA is a symmedian on triangle TM_B M_C. Circles related to the tangency point with the circumcircle BDIT and CEIT are cyclic quadrilaterals. Spiral similarities T is the center of a spiral similarity that maps B, I to I, C respectively. Relationship between the three mixtilinear incircles Lines joining vertices and mixtilinear tangency points The three lines joining a vertex to the point of contact of the circumcircle with the corresponding mixtilinear incircle meet at the external center of similitude of the incircle and circumcircle. The Online Encyclopedia of Triangle Centers lists this point as X(56). It is defined by trilinear coordinates a/(b+c−a) : b/(c+a−b) : c/(a+b−c) and barycentric coordinates a²/(b+c−a) : b²/(c+a−b) : c²/(a+b−c). Radical center The radical center of the three mixtilinear incircles is the point which divides the segment OI in a fixed ratio expressible in terms of r and R, where I, r, O, R are the incenter, inradius, circumcenter and circumradius respectively. References Euclidean plane geometry
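The straightedge-and-compass construction described in the article (incenter, chord through the incenter perpendicular to the line from the vertex, perpendiculars to the sides at the tangent points) can be checked numerically. The following is a minimal illustrative sketch, not part of the original article; Python and the labels A, B, C for the vertices and I for the incenter are assumptions made for the example:

```python
import math

def intersect(p, d, q, e):
    """Intersection of the lines p + t*d and q + s*e in the plane."""
    det = d[0] * e[1] - d[1] * e[0]
    t = ((q[0] - p[0]) * e[1] - (q[1] - p[1]) * e[0]) / det
    return (p[0] + t * d[0], p[1] + t * d[1])

def a_mixtilinear_incircle(A, B, C):
    """Centre and radius of the A-mixtilinear incircle, following the
    construction: the chord through the incenter I perpendicular to AI
    meets AB and AC at the tangent points; perpendiculars to the sides
    at those points meet at the centre."""
    a, b, c = math.dist(B, C), math.dist(C, A), math.dist(A, B)
    s = a + b + c
    # incenter as the side-length-weighted average of the vertices
    I = ((a * A[0] + b * B[0] + c * C[0]) / s,
         (a * A[1] + b * B[1] + c * C[1]) / s)
    dAI = (I[0] - A[0], I[1] - A[1])
    chord = (-dAI[1], dAI[0])                               # perpendicular to AI
    D = intersect(I, chord, A, (B[0] - A[0], B[1] - A[1]))  # tangent point on AB
    E = intersect(I, chord, A, (C[0] - A[0], C[1] - A[1]))  # tangent point on AC
    # perpendiculars to the sides at the tangent points meet at the centre
    O = intersect(D, (A[1] - B[1], B[0] - A[0]),
                  E, (A[1] - C[1], C[0] - A[0]))
    return O, math.dist(O, D)

# 3-4-5 right triangle with the right angle at A: incircle radius r = 1,
# so the radius formula r / cos^2(A/2) predicts a mixtilinear radius of 2
O, rho = a_mixtilinear_incircle((0, 0), (4, 0), (0, 3))
print(O, rho)  # (2.0, 2.0) 2.0
```

For this triangle the circumcenter is (2, 1.5) with R = 2.5, and the computed centre lies at distance R − ρ = 0.5 from it, confirming internal tangency to the circumcircle.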
Mixtilinear incircles of a triangle
[ "Mathematics" ]
745
[ "Planes (geometry)", "Euclidean plane geometry" ]
69,072,943
https://en.wikipedia.org/wiki/Perth%20Water%20Works
Perth Water Works (also known as Corporation Water Works) is an historic building in Perth, Scotland, dating to 1832. Standing at the corner of Tay Street and Marshall Place (both part of the A989), the building, a former engine house and water tank, has been the home of The Fergusson Gallery, displaying the work of John Duncan Fergusson, since 1992. The building is Category A listed. Historic Environment Scotland states that it is one of Scotland's most significant industrial buildings, and that its large-scale cast-iron construction may be the very first in the world. Clean water was drawn from filter beds on Moncreiffe Island, in the adjacent River Tay, and pumped beneath the river, by a steam engine, into a holding tank in the building's rotunda. The building's architect was Adam Anderson, the rector of Perth Academy. An inscription over the door in the rotunda reads Aquam igne et aqua haurio ("I draw water by fire and water"). The engine house has a tall Doric columned chimney, capped by a Roman urn (a fibreglass replica of the original, which was destroyed by a lightning strike in 1871). The building became surplus to requirements in 1965, when the city opened a new water works. It was restored in 1973, for use as a Tourist Information Centre, by James Morris and Robert Steedman, and then converted to its current use nineteen years later. Its dome was reconstructed in 2003 as part of a restoration funded by the Heritage Lottery Fund, Historic Scotland and Perth and Kinross Council. Gallery See also List of listed buildings in Perth, Scotland References External links The Fergusson Gallery – Culture Perth & Kinross 1835 establishments in Scotland Water Works, Perth Category A listed buildings in Perth and Kinross Water treatment facilities
Perth Water Works
[ "Chemistry" ]
364
[ "Water treatment", "Water treatment facilities" ]
69,074,412
https://en.wikipedia.org/wiki/American%20Center%20for%20Mobility
The American Center for Mobility (ACM) is a vehicular automation research center and federally designated automated vehicle proving ground located in Ypsilanti Township, Michigan. History Founded in December 2017 on the site of the Willow Run manufacturing complex, the American Center for Mobility began as a joint initiative of the State of Michigan, partnering with Ann Arbor SPARK, Business Leaders for Michigan, the Michigan Department of Transportation, the Michigan Economic Development Corporation, the University of Michigan, and Ypsilanti Township, as a way of accelerating autonomous vehicle research regionally and nationally. Portions of the US Highway 12 alignment and ramps to the former manufacturing complex were repurposed to create a test track. Additional roadways and connections were constructed on the site of the complex. Features The test track includes a curved tunnel, a highway loop, an off-road course, and two double overpasses, along with various intersections and roundabouts. The track is branded as "Powered by Intertek", as Intertek serves as the operations and maintenance partner. In January 2020, ACM opened its Technology Park, designed to serve as an incubation hub for startups and onsite offices for partners, as well as event and demonstration space. In addition to the test track and research center in Ypsilanti, ACM also operates the Detroit Smart Parking Center in Detroit in partnership with Ford, Bedrock, and Bosch. References Road test tracks 2017 establishments in Michigan Self-driving cars
American Center for Mobility
[ "Engineering" ]
293
[ "Automotive engineering", "Self-driving cars" ]
69,075,483
https://en.wikipedia.org/wiki/Lydia%20Bieri
Lydia Rosina Bieri (born 1972) is a Swiss-American applied mathematician, geometric analyst, mathematical physicist, cosmologist, and historian of science whose research concerns general relativity, gravitational waves, and gravitational memory effects. She is a professor of mathematics and director of the Michigan Center for Applied and Interdisciplinary Mathematics at the University of Michigan. Education and career Bieri is originally from Sempach, in Switzerland. She studied mathematics at ETH Zurich, earning a diploma (the equivalent of a master's degree) in 2001. She completed a doctorate (Dr. sc.) at ETH Zurich in 2007, with the support of a Swiss National Funds Fellowship. Her dissertation, An Extension of the Stability Theorem of the Minkowski Space in General Relativity, was supervised by Demetrios Christodoulou, and jointly promoted by Michael Struwe. After postdoctoral research as a Benjamin Peirce Fellow in mathematics at Harvard University from 2007 to 2010, Bieri became an assistant professor of mathematics at the University of Michigan in 2010. She became associate professor in 2015, director of the Michigan Center for Applied and Interdisciplinary Mathematics in 2019, and full professor in 2021. Books With Harry Nussbaumer of ETH Zurich, Bieri is the coauthor of a general-audience book on cosmology and its history, Discovering the Expanding Universe (Cambridge University Press, 2009). She is also the coauthor, with Nina Zipser, of a research monograph, Extensions of the Stability Theorem of the Minkowski Space in General Relativity (AMS/IP Studies in Advanced Mathematics, American Mathematical Society, 2009). Recognition Bieri won an NSF CAREER Award in 2013 and was named a Simons Fellow in Mathematics in 2018. 
She was named a Fellow of the American Physical Society (APS) in 2021, after a nomination from the APS Division of Gravitational Physics, "for fundamental results on the global existence of solutions of the Einstein field equations, and many contributions to the understanding of gravitational wave memory". She was named to the 2023 class of Fellows of the American Mathematical Society, "for contributions to mathematical general relativity and geometric analysis". References External links Home page 1972 births Living people People from Sursee District 21st-century American mathematicians American cosmologists American women physicists American physicists Swiss mathematicians Swiss women mathematicians Swiss cosmologists Swiss women physicists Cosmologists Applied mathematicians Differential geometers Mathematical physicists American historians of science ETH Zurich alumni University of Michigan faculty Fellows of the American Mathematical Society Fellows of the American Physical Society 21st-century American women mathematicians
Lydia Bieri
[ "Mathematics" ]
521
[ "Applied mathematics", "Applied mathematicians" ]
69,076,412
https://en.wikipedia.org/wiki/List%20of%20UWB-enabled%20mobile%20devices
Ultra-wideband (UWB, ultra wideband, ultra-wide band and ultraband) is a radio technology that can use a very low energy level for short-range, high-bandwidth communications over a large portion of the radio spectrum. The following is a list of devices that support the technology from various UWB silicon providers. Smartphones, foldables, & tablets Smartwatches IoT devices References Near-field communication NFC-enabled mobile devices
List of UWB-enabled mobile devices
[ "Technology" ]
97
[ "Near-field communication", "Mobile telecommunications" ]
69,077,760
https://en.wikipedia.org/wiki/Compton%20Spectrometer%20and%20Imager
The Compton Spectrometer and Imager (COSI) is a NASA SMEX astrophysics mission that will launch a soft gamma-ray telescope (0.2–5 MeV) in 2027. It is a wide-field compact Compton telescope (CCT) that is uniquely suited to investigate the "MeV gap" (0.1–10 MeV). It provides imaging, spectroscopy, and polarimetry of astrophysical sources, and its germanium detectors provide excellent energy resolution for emission line measurements. The germanium detectors have an instantaneous field of view of more than 25% of the sky, and they are surrounded on the sides and bottom by active shields, which provide background rejection while also allowing for detection of gamma-ray bursts and other gamma-ray flares across the majority of the sky. History COSI has its origins in the Nuclear Compton Telescope (NCT), which was a prototype designed to study astrophysical sources of nuclear line emission and gamma-ray polarization. The NCT was a balloon-borne soft gamma-ray telescope, flying on several high-altitude balloon missions to test and refine its technology. Flights included launches from locations such as New Zealand and Antarctica. COSI evolved from the advancements and lessons learned from the NCT missions. The improved design incorporated germanium cross-strip detectors to enhance sensitivity and resolution. This progression culminated in the development of a more sophisticated instrument capable of high-resolution spectroscopy, wide-field imaging, and gamma-ray polarization measurements. The successful demonstration of NCT's technology in balloon missions laid the groundwork for COSI to be selected as a NASA Small Explorer (SMEX) mission. Mission "For more than 60 years, NASA has provided opportunities for inventive, smaller-scale missions to fill knowledge gaps where we still seek answers", said Thomas Zurbuchen, associate administrator for the agency's Science Mission Directorate in Washington, D.C. 
"COSI will answer questions about the origin of the chemical elements in our own Milky Way galaxy, the very ingredients critical to the formation of Earth itself". The principal investigator is John Tomsick at the University of California, Berkeley. The mission will cost approximately US$145 million, with an additional launch cost of approximately US$69 million. The spacecraft is targeted to launch August 2027 on a SpaceX Falcon 9 rocket from Cape Canaveral Space Force Station. The COSI team spent decades developing their technology through flights on scientific balloons. In 2016, they sent a version of the gamma-ray instrument aboard NASA's super pressure balloon, which is designed for long flights and heavy lifts. NASA's Explorers Program is the agency's oldest continuous program. It provides frequent, low-cost access to space using principal investigator-led space research relevant to the astrophysics and heliophysics programs. Since the 1958 launch of Explorer 1, which discovered Earth's radiation belts, the program has launched more than 90 missions. The Cosmic Background Explorer (COBE), another NASA Explorer mission, led to a Nobel Prize in 2006 for its principal investigators. NASA's Goddard Space Flight Center in Greenbelt, Maryland, manages the program for NASA. Partnership COSI is a collaboration between High Energy Astrophysics Group, Space Sciences Laboratory at UC Berkeley; Center for Astrophysics and Space Sciences, UCSD; Institute of Astronomy at National Tsing Hua University, Taiwan; Lawrence Berkeley National Laboratory; NASA Goddard Space Flight Center; U.S. Naval Research Laboratory; NASA Columbia Scientific Balloon Facility and NASA Super Pressure Balloon Blog. 
See also Gamma-ray astronomy Compton scattering Semiconductor detector: Germanium detectors References External links Project web site How the instrument functions The Compton Spectrometer and Imager Astro2020 APC White Paper (Tomsick et al.) Space telescopes Explorers Program 2027 in spaceflight
Compton Spectrometer and Imager
[ "Astronomy" ]
781
[ "Space telescopes" ]
69,079,808
https://en.wikipedia.org/wiki/Lunar%20arithmetic
Lunar arithmetic, formerly called dismal arithmetic, is a version of arithmetic in which the addition and multiplication operations on digits are defined as the max and min operations. Thus, in lunar arithmetic, 2 + 7 = max(2, 7) = 7 and 2 × 7 = min(2, 7) = 2. The lunar arithmetic operations on nonnegative multidigit numbers are performed as in usual arithmetic, as illustrated in the following examples. The world of lunar arithmetic is restricted to the set of nonnegative integers. 976 + 348 ---- 978 (adding digits column-wise) 976 × 348 ---- 876 (multiplying the digits of 976 by 8) 444 (multiplying the digits of 976 by 4) 333 (multiplying the digits of 976 by 3) ------ 34876 (adding digits column-wise) The concept of lunar arithmetic was proposed by David Applegate, Marc LeBrun, and Neil Sloane. In the general definition of lunar arithmetic, one considers numbers expressed in an arbitrary base and defines the lunar arithmetic operations as the max and min operations on the digits corresponding to the chosen base. However, for simplicity, in the following discussion it will be assumed that the numbers are represented using 10 as the base. Properties of the lunar operations A few of the elementary properties of the lunar operations are listed below. The lunar addition and multiplication operations satisfy the commutative and associative laws. The lunar multiplication distributes over the lunar addition. The digit 0 is the identity under lunar addition. No non-zero number has an inverse under lunar addition. The digit 9 is the identity under lunar multiplication. No number different from 9 has an inverse under lunar multiplication. Some standard sequences Even numbers It may be noted that, in lunar arithmetic, n + n = n, so doubling cannot be defined through addition; the even numbers are instead the numbers of the form n × 2. The first few distinct even numbers under lunar arithmetic are 0, 1, 2, 10, 11, 12, 20, 21, 22, 100, 101, 102, ... These are the numbers whose digits are all less than or equal to 2. Squares A square number is a number of the form n × n. So in lunar arithmetic, the first few squares are the following. 
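The column-wise operations illustrated in the worked example above can be reproduced with a short sketch. This implementation is illustrative and not part of the original article; Python and the function names are assumptions (digit characters are compared directly, which works because '0' < '1' < ... < '9'):

```python
def lunar_add(a, b):
    """Lunar addition: digit-wise maximum, aligned from the units digit."""
    sa, sb = str(a), str(b)
    n = max(len(sa), len(sb))
    return int("".join(max(x, y) for x, y in zip(sa.zfill(n), sb.zfill(n))))

def lunar_mul(a, b):
    """Lunar multiplication: digit-wise minimum for each partial product,
    shifted as in long multiplication and combined column-wise with
    lunar addition (there are no carries anywhere)."""
    total = 0
    for shift, d in enumerate(reversed(str(b))):
        partial = int("".join(min(d, x) for x in str(a))) * 10 ** shift
        total = lunar_add(total, partial)
    return total

print(lunar_add(976, 348))  # 978
print(lunar_mul(976, 348))  # 34876
print(lunar_mul(9, 976))    # 976 (9 is the multiplicative identity)
```

The two printed results match the column-wise worked example in the text.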
Triangular numbers A triangular number is a number of the form 1 + 2 + ... + n. The first few triangular lunar numbers are: Factorials In lunar arithmetic, the first few values of the factorial are as follows: Prime numbers In the usual arithmetic, a prime number is defined as a number whose only possible factorisation is 1 × n. Analogously, in lunar arithmetic, a prime number is defined as a number whose only factorisation is 9 × n, where 9 is the multiplicative identity, which corresponds to 1 in usual arithmetic. Accordingly, the following are the first few prime numbers in lunar arithmetic: 19, 29, 39, 49, 59, 69, 79, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 109, ... Every number of the form 90...09 (a 9, followed by an arbitrary number of zeros, followed by a 9: 99, 909, 9009, ...) is a prime in lunar arithmetic. Since the number of zeros is arbitrary, this shows that there are an infinite number of primes in lunar arithmetic. Sumsets and lunar multiplication There is an interesting relation between the operation of forming sumsets of subsets of nonnegative integers and lunar multiplication on binary numbers. Let S and T be nonempty subsets of the set of nonnegative integers. The sumset S + T is defined by S + T = {s + t : s in S, t in T}. To the set S we can associate a unique binary number N(S) as follows. Let m = max(S). For i = 0, 1, ..., m we define b_i = 1 if i is in S and b_i = 0 otherwise, and then we define N(S) = b_m b_(m−1) ... b_0, read as a binary number. It has been proved that N(S + T) = N(S) × N(T), where the "×" on the right denotes the lunar multiplication on binary numbers. Magic squares of squares using lunar arithmetic A magic square of squares is a magic square formed by squares of numbers. It is not known whether there are any magic squares of squares of order 3 with the usual addition and multiplication of integers. However, it has been observed that, if we consider the lunar arithmetic operations, there are an infinite number of magic squares of squares of order 3. Here is an example: See also Tropical arithmetic References External links Multiplication Elementary arithmetic Arithmetic Prime numbers Integer sequences
Lunar arithmetic
[ "Mathematics" ]
756
[ "Sequences and series", "Integer sequences", "Elementary arithmetic", "Mathematical structures", "Recreational mathematics", "Prime numbers", "Mathematical objects", "Combinatorics", "Elementary mathematics", "Arithmetic", "Numbers", "Number theory" ]
69,079,936
https://en.wikipedia.org/wiki/BmK%20NSPK
BmK NSPK (Buthus martensii Karsch, Neurite-Stimulating Peptide targeting Kv channels) is a neurotoxin isolated from the venom of the Chinese armor-tail scorpion (Mesobuthus martensii). It specifically targets voltage-gated potassium channels (Kv), resulting in a direct inhibition of outward potassium current. Chemistry BmK NSPK is a short-chain toxin with a primary amino acid sequence of 38 amino acids and a molecular weight of 3962.2 Da. The toxin has a cysteine-stabilized α-helix-β-sheet motif, containing one α-helix connected by disulfide bonds to two parallel β-sheets, which suggests that BmK NSPK belongs to the CSαβ potassium channel blockers. In comparison with BmKTX (Buthus martensii kaliotoxin), a previously discovered Kv1.3 channel blocker, BmK NSPK shows 79% sequence homology. BmKTX and BmK NSPK also show structural similarities: in both toxins, the α-helices contain Leu14 and Ala19/20, and the β-sheets show a similar folding pattern. Target and Mode of Action Whole-cell patch-clamp recordings in mouse spinal cord neurons (SCNs) determined that BmK NSPK targets Kv channels. More specifically, given the similarities between BmK NSPK and BmKTX, Kv1.3 channels are expected to be the target of BmK NSPK. When BmK NSPK (300 nM) is introduced to SCNs, outward potassium (IK) currents are inhibited. As a result, the membrane potential will not repolarize. This is due to inhibition of the transient components (IA) and sustained delayed rectifier components (ID) of IK, which are normally responsible for membrane repolarization. BmK NSPK has additional effects in SCNs, possibly caused indirectly by the membrane depolarization that results from the inhibition of outward potassium currents: spontaneous calcium oscillations (SCOs) are increased in amplitude and frequency with the addition of BmK NSPK. 
At higher concentrations (3–300 nM) of BmK NSPK, the membrane of SCNs depolarizes. BmK NSPK also modulates the release of nerve growth factor (NGF) in SCNs through the NGF/TrkA signaling pathway, leading to enhanced neurite outgrowth. References Scorpion toxins Ion channel toxins Neurotoxins
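The 79% figure quoted above is the output of a pairwise sequence comparison; the following is a generic sketch of such a percent-identity calculation (the sequences below are made-up placeholders, not the real BmK NSPK/BmKTX sequences):

```python
def percent_identity(seq_a, seq_b):
    """Ungapped percent identity between two pre-aligned sequences of equal length."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# toy aligned peptide fragments with three mismatches in sixteen positions
identity = percent_identity("VCKDLCTANGAESGYC", "VCKELCTANGAKSGFC")  # 81.25
```

Published homology figures are normally computed over a full alignment (with gaps) rather than this naive ungapped comparison, but the underlying ratio of matching positions is the same idea.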
BmK NSPK
[ "Chemistry" ]
582
[ "Neurochemistry", "Neurotoxins" ]
69,079,937
https://en.wikipedia.org/wiki/N58A
N58A is a peptide depressant β-neurotoxin found in the venom of certain East Asian scorpions. The toxin affects voltage-gated sodium channels, specifically the Nav1.8 and Nav1.9 channels. Etymology and Chemistry The N58A protein is a scorpion depressant β-toxin in which the asparagine (N) at the 58th position of the peptide is replaced by an alanine (A). In scorpion depressant β-toxins, the N58 residue is known to play an important role in activity, and the substitution with alanine severely reduces the toxicity of the peptide. The molecular mass of N58A is approximately 9 kDa, with a UV absorption peak at 279.4 nm. Sources N58A is originally found in the venom of some East Asian scorpion species, similar to Leiurus quinquestriatus hebraeus, whose venom contains the structurally similar LqhIT2 toxin. In addition, N58A can be obtained by using molecular biology techniques such as PCR amplification and recombinant technology to express the peptide in E. coli. Target and mode of action N58A affects voltage-gated sodium channels, and its effects have specifically been shown on the Nav1.8 and Nav1.9 channels. These effects include a reduction in neuronal channel expression and current density, and shifts of the channel activation and inactivation to more depolarized and hyperpolarized membrane potentials, respectively. Aside from acting on voltage-gated sodium channels, N58A also reduces phosphorylation of MAPK pathway proteins, most importantly ERK 1/2 (affecting the posterior horn of the spinal cord) and P38 (involved in inflammatory and analgesic processes). As a result of the aforementioned effects, the transmission of peripheral pain signals is blocked. The analgesic effect of N58A is similar to that of morphine, which also reduces protein phosphorylation in the MAPK pathway, but to a lesser extent. However, morphine does not have an effect on the Nav1.8 and Nav1.9 channels, in contrast to N58A.
Toxicity and treatment Scorpion depressant β-toxins are selectively toxic to insects, and when N58A is applied at a dose of 400 μg/100 mg of body weight in rats, no neurotoxicity is observed. As such, there is no need for treatment of the toxin in human cases. Therapeutic use Due to its analgesic effect, N58A has been tested as a possible treatment for trigeminal neuralgia in a rat model. This model shows a reduced threshold for thermal and mechanical pain, which was alleviated by administration of N58A. At a dose of 400 μg/100 mg of body weight, the analgesic effects of N58A last for several hours and are similar to the effects of morphine. The injection of N58A does not affect motor control of the limbs in rats. References Ion channel toxins Neurotoxins Scorpion toxins
N58A
[ "Chemistry" ]
658
[ "Neurochemistry", "Neurotoxins" ]
69,080,137
https://en.wikipedia.org/wiki/Izhar%20Bar-Gad
Izhar Bar-Gad is a full professor at the Leslie and Susan Gonda Brain Research Center at Bar-Ilan University. Bar-Gad is a researcher in the field of neurophysiology and neural computation. His main areas of research are information processing in the basal ganglia in a normal state and in various pathologies, such as Parkinson's disease and Tourette's syndrome. Biography Bar-Gad was born in Rehovot and raised in Pretoria, South Africa, and Qiryat Gat, Israel. He enlisted in the Israel Defense Forces in 1989, serving in the Israeli Intelligence Corps, and was discharged in 1994 with the rank of Captain. In the years 1994–1997, he worked for Amdocs as a researcher and project manager in the development of distributed artificial intelligence. From 1997 to 2002, he worked for Sanctum, in Israel and later in California, as the Chief Technology Officer (CTO). In 2005, he was appointed a lecturer at the Center for Brain Research at Bar-Ilan University. In the years 2008–2011 he served as head of the Department of Neuroscience. In 2010 he was appointed a senior lecturer, in 2012 an associate professor, and since 2018 he has been a full professor at Bar-Ilan University. Izhar Bar-Gad researches information processing in the basal ganglia in the normal condition and in various pathological conditions. His research combines experimental methods from systems neurophysiology with computational methods from data science and neural computation. His early research was mainly concerned with changes in brain computation that occur during Parkinson's disease and its treatments, including drug therapies and deep brain stimulation (DBS). His later research deals with the neurophysiological changes that occur in neurodevelopmental disorders, such as Tourette's syndrome and attention deficit hyperactivity disorder (ADHD). Academic education In 2003, he completed a Ph.D. at the Hebrew University in neural computation, under the guidance of Prof. Hagai Bergman and Prof. Yaakov Ritov.
His doctoral dissertation was written on the subject of "Reinforcement driven dimensionality reduction as a model for information processing in the basal ganglia". References External links Website of Izhar Bar-Gad Lab Google Scholar profile Brain researchers from Bar-Ilan won an international award sponsored by the Michael J. Fox Foundation for Parkinson's Research Dr. Izhar Bar-Gad, on the Ynet website People from Giv'at Shmuel Academic staff of Bar-Ilan University People of the Military Intelligence Directorate (Israel) People in information technology Tel Aviv University alumni Hebrew University of Jerusalem alumni Israeli neuroscientists Israeli officers 1971 births Living people
Izhar Bar-Gad
[ "Technology" ]
556
[ "People in information technology", "Information technology" ]
69,081,496
https://en.wikipedia.org/wiki/Harvard%20Six%20Cities%20study
The Harvard "Six Cities" study was a major epidemiological study of over 8,000 adults in six American cities that helped to establish the connection between fine-particulate air pollution (such as diesel engine soot) and reduced life expectancy ("excess mortality"). Widely acknowledged as a landmark piece of public health research, it was initiated by Benjamin G. Ferris, Jr at Harvard School of Public Health and carried out by Harvard's Douglas Dockery, C. Arden Pope of Brigham Young University, Ferris himself, Frank E. Speizer, and four other collaborators, and published in the New England Journal of Medicine in 1993. Following a lawsuit by The American Lung Association, the study, and its various follow-ups, led to a tightening of pollution standards by the US Environmental Protection Agency. This prompted an intense backlash from industry groups in the late 1990s, culminating in a Supreme Court case, in what Science magazine termed "the biggest environmental fight of the decade". Background The Six Cities study was born in the wake of the 1970s energy crisis amid growing concerns that a squeeze on oil supply would lead to greater use of low-quality coal and, therefore, higher mortality from air pollution. The harmful health effects of burning coal had already come to light following the 1952 Great London Smog (in the United Kingdom), the 1948 Donora tragedy (in the United States), and other major pollution episodes, but it was unclear which part of coal pollution (sulphur dioxide, particulates, or some combination of these and other emissions) was of most concern. There were also differences of scientific opinion about how particulates affected human health, which types were most harmful, and whether there were impacts even at low to moderate levels of exposure. The Harvard Six Cities study aimed to address some of these questions. 
As it acknowledged in its introduction, it built on a number of earlier studies that had found "associations between mortality rates and particulate air pollution in U.S. metropolitan areas", including a 1970 Science paper "Air Pollution and Human Health" by Lester Lave and Eugene Seskin of Carnegie Mellon University. Crucially, unlike the earlier studies, which were generally cross-sectional in design (statistical snapshots of large, anonymous populations taken at arbitrary times), the Harvard Six Cities study was a cohort study that tracked the same people over their lives, allowing risk factors such as age, sex, and smoking history to be controlled for and the effects of air pollution to be studied in isolation. Methodology Dockery and colleagues studied a cohort of 8,111 adults living in six American cities "selected as representative of the range of particulate air pollution in the United States": Harriman, Tennessee; Portage, Wisconsin; St Louis, Missouri; Steubenville, Ohio; Topeka, Kansas; and Watertown, Massachusetts. Over a decade and a half, each person was questioned on such things as their medical history and lifestyle (including whether and how much they smoked, their body mass index, their education level, their age, and so on). This data was compared with ambient air pollution measurements from the six cities and mortality data from the National Death Index. Conclusion The study found that people living in the most polluted city (Steubenville) were 26 percent more likely to die than those in the least polluted city (Portage), suggesting an association between particulate pollution and higher death rates in urban areas: "Although the effects of other, unmeasured risk factors cannot be excluded with certainty, these results suggest that fine-particulate air pollution, or a more complex pollution mixture associated with fine particulate matter, contributes to excess mortality in certain U.S. cities."
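The kind of between-city comparison the study reported can be illustrated with a crude mortality rate ratio; the counts below are invented for illustration and are not the study's data (the published analysis adjusted for individual risk factors rather than comparing crude rates):

```python
def mortality_rate(deaths, person_years):
    # crude mortality rate per person-year of follow-up
    return deaths / person_years

# hypothetical follow-up counts for a "most polluted" and a "least polluted" city
rate_high = mortality_rate(252, 20_000)
rate_low = mortality_rate(200, 20_000)
rate_ratio = rate_high / rate_low  # 1.26, i.e. 26% higher mortality
```

A rate ratio of 1.26 corresponds to the "26 percent more likely to die" result quoted above for Steubenville relative to Portage.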
Confirmation The Six Cities study was followed (and its findings effectively confirmed) by a much bigger epidemiological project, generally referred to as the American Cancer Society (ACS) study, which was carried out by three authors of the original study (Pope, Dockery, and Frank E. Speizer) and four other collaborators. The ACS study correlated air pollution data, lifestyle factors, and death records for a sample of 552,138 adults in 151 urban areas followed over a 16-year period and concluded, just as the original had done, that breathing particulate pollution increases a person's risk of death: "Particulate air pollution was associated with cardiopulmonary and lung cancer mortality but not with mortality due to other causes. Increased mortality is associated with sulfate and fine particulate air pollution at levels commonly found in U.S. cities." A variety of similar epidemiological studies have also supported the association between fine particulate pollution and higher mortality. Crucially, a 2006 paper by Francine Laden and members of the original Harvard team (Frank Speizer and Douglas Dockery) also confirmed the opposite effect: reductions in particulate pollution save lives. Impact Following the publication of the Six Cities and ACS studies, there were new calls for tougher pollution standards in the United States, and The American Lung Association ultimately sued the US Environmental Protection Agency to bring that about. As a result, in 1997, the EPA introduced the National Ambient Air Quality Standards (NAAQS) with new limits on particulates. This, in turn, prompted pushback from industry groups and various legal challenges, including a request to release data from the original studies for scrutiny by third parties. 
Medical confidentiality agreements prevented this, so, as a compromise, the studies were independently re-analyzed by Daniel Krewski, Richard Burnett, and colleagues on behalf of the Health Effects Institute, which used different statistical methods but essentially confirmed the original findings. The legal challenges were eventually settled by a Supreme Court ruling on February 27, 2001 (Whitman v. American Trucking Associations, S.Ct. No. 99-1257) that unanimously sided with the EPA. Since then, largely as a result of the initial Six Cities and ACS studies, and the follow-up research they inspired, air quality standards and guidelines for particulate pollution have been introduced throughout the world, potentially saving many millions of lives. According to air pollution scientist Gary Fuller: "It is hard to overstate the impact of the Six Cities study on global health... the results still offer the best estimate for how much our lives are shortened by the particle pollution that we breathe." Proposed EPA "Honest Act" When then-EPA administrator Scott Pruitt announced his proposed scientific research policy requiring full transparency of all studies that inform public environmental policies, it would have excluded studies such as the Six Cities studies, because they used confidential data from personal medical reports that could not be made openly available. Critics of Pruitt's policy traced its roots to the Harvard Six Cities study. Various iterations of the bill have been supported by the American Chemistry Council, an organization that advises DuPont and Monsanto, among others. It has also been supported by Koch Industries, Peabody Energy, and ExxonMobil. According to the American Association for the Advancement of Science, some within the chemical, manufacturing and energy sectors did not approve of the clean air regulations that were implemented because of the Six Cities studies, so they are trying to "attack the science underlying the regulation".
The "demand for transparency" was in reality a way to "undermine scientific independence." The Honest and Open New E.P.A. Science Treatment Act, which was sponsored by Lamar Smith (R-Texas) provided the basis for Pruitt's plans for transparency in science policy that he announced on The Daily Caller in March 2018. References Further reading Air pollution in the United States Epidemiology Health Public health research Epidemiological study projects Cohort studies Health research
Harvard Six Cities study
[ "Environmental_science" ]
1,577
[ "Epidemiology", "Environmental social science" ]
69,081,952
https://en.wikipedia.org/wiki/Cl6a
μ-THTX-Cl6a, also known as Cl6a, is a 33-residue peptide toxin extracted from the venom of the spider Cyriopagopus longipes. The toxin acts as an inhibitor of the tetrodotoxin-sensitive (TTX-S) voltage-gated sodium channel (NaV1.7), thereby causing sustained reduction of NaV1.7 currents. Etymology and Source Cl6a is a peptide extracted from the venom of Cyriopagopus longipes, which was first described by von Wirth and Striffler in 2005. This spider lives in multiple Southeast Asian countries such as Thailand, Cambodia and Laos. Cyriopagopus longipes belongs to Cyriopagopus, a genus of Southeast Asian tarantulas. Chemistry Structure Cl6a is a 33 amino acid residue peptide toxin with a molecular weight of 3775.6 Dalton. Its molecular structure encompasses six cysteine residues, which form three disulfide bonds that assemble an inhibitor cystine knot (ICK) scaffold. Generally, ICK peptides are most prominently present in the venom of snails and spiders. These peptides usually function as gating modifier toxins, affecting the gating and kinetic properties of voltage-gated ion channels. The ICK fold is characterized by two β-strands composing the polypeptide backbone, from which two disulfide bonds originate. These disulfide bonds and the polypeptide backbone form a ring structure, which is threaded by a third disulfide bond, creating a pseudoknot. This yields an extraordinarily stable protein structure, which is resistant to heat denaturation, extreme pH environments and proteolysis. Family and homology Cl6a belongs to the voltage-gated sodium channel (NaV)-targeting spider toxin (NaSpTx) family 1, because it shares the same cysteine framework and ICK scaffold. Toxins of NaSpTx family 1 are characterized by the following motif in their amino acid sequence: (R/K)X(R/K)WCK. The amino acid sequence of Cl6a contains the NaSpTx family 1 motif KHKWCK.
The amino acid sequence of Cl6a closely resembles those of other spider peptide toxins: it shows 67% sequence similarity with hainantoxin (HNTX) III and 97% with huwentoxin (HWTX) I. Target Cl6a is a selective antagonist of voltage-gated sodium channels. It targets TTX-S channels (NaV1.2, NaV1.3, NaV1.4, NaV1.6 and NaV1.7), with the highest affinity for NaV1.7 channels. NaV1.7 channels are expressed in nociceptive dorsal root ganglion (DRG), sympathetic and olfactory sensory neurons. NaV1.7 channels are located in epidermal free nerve endings and, more centrally, in the superficial lamina of the spinal cord. Mode of action Cl6a induces an irreversible inhibition of NaV1.7 peak currents with a slow onset of action. Cl6a alters the current-voltage relationship of NaV1.7 channels by binding to site 4 on domain II (DII S3-S4), which contains acidic residues. Cl6a has a positively charged surface that interacts with the acidic residues of site 4. Subsequently, the DII S3-S4 voltage sensor becomes trapped, making the channel less sensitive to changes in the membrane voltage. As a consequence, the inward sodium current is not initiated and neuronal excitability decreases. Toxicity Cl6a selectively blocks NaV1.7 channels, which are involved in peripheral pain signaling. The half-maximal inhibitory concentration (IC50) of Cl6a is 11.00 ± 2.5 nM. Generally, spider peptide toxins can incapacitate prey and thereby promote successful predation. In combination with the irreversible properties of Cl6a, this spider peptide toxin could indirectly be lethal to its prey. Therapeutic use Cl6a exhibits a high affinity for NaV1.7 channels, which are known to be critically involved in peripheral pain regulation. Other spider peptide toxins like HNTX-III and GpTx1 cause reversible inhibition of NaV1.7 channels. However, Cl6a induces irreversible inhibition of NaV1.7 channel activity.
Therefore, this spider peptide toxin is a potential therapeutic lead, as it provides prolonged blockage of NaV1.7 channels. Nonetheless, Cl6a also inhibits NaV1.4 and NaV1.5 currents, which are involved in skeletal and cardiac muscle functioning. Since Cl6a contains the ICK motif, which is notably resistant to proteases, this peptide toxin could have further therapeutic implications. ICK peptides are stable in the human body for multiple days and demonstrate half-lives longer than 12 hours in a simulated gastric environment. Altogether, this implies that the ICK motif of Cl6a could contribute to the delivery of certain therapeutic agents to a specific location in the human body. References Ion channel toxins Neurotoxins Spider toxins
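The reported IC50 can be turned into an expected concentration-inhibition curve with a standard Hill equation; this is a sketch, and the Hill coefficient of 1 is an assumption rather than a reported value:

```python
def fractional_inhibition(conc_nM, ic50_nM=11.0, hill=1.0):
    """Fraction of NaV1.7 current blocked at a given toxin concentration,
    assuming a simple one-site Hill model with the article's IC50 of 11 nM."""
    return conc_nM**hill / (conc_nM**hill + ic50_nM**hill)
```

By construction, 11 nM Cl6a blocks half of the current under this model, while a tenfold higher concentration blocks roughly 90%.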
Cl6a
[ "Chemistry" ]
1,127
[ "Neurochemistry", "Neurotoxins" ]
69,082,360
https://en.wikipedia.org/wiki/Speed%20limits%20in%20Cyprus
The general speed limits in Cyprus are as follows: References Cyprus Roads in Cyprus
Speed limits in Cyprus
[ "Physics" ]
17
[ "Physical systems", "Transport", "Transport stubs" ]
69,082,373
https://en.wikipedia.org/wiki/DKK-Sp1
DKK-SP1 is one of the many neurotoxins present in the scorpion Mesobuthus martensii. This toxin inhibits the voltage-gated sodium channel Nav1.8. Sources DKK-SP1 is a neurotoxin named after the DKK-SP1 gene, which was extracted from a cDNA library of Mesobuthus martensii. DKK-SP1 naturally occurs in the venom gland located in the scorpion's stinger (or telson). This scorpion is also known as the Golden Chinese Scorpion or the Manchurian scorpion. It was formerly called Buthus martensii Karsch (BmK). Chemistry Family DKK-SP1 is part of the alpha-like toxin family of scorpion toxins, which typically consist of 60 to 76 amino acid residues. DKK-SP1 specifically is composed of 66 amino acid residues folded into beta sheets with an alpha helix atop. The structures of these alpha-like toxins are stabilized by four disulfide bonds. Structure The molecular mass of the peptide chain of the toxin is 7157.96 Da. Its gene sequence is as follows: GTTCGTGATGCTTATATTGCCAAGCCCGAAAACTGTGTATACCATTGTGCTACAAATGAAGGTTGCAACAAATTATGTACTGACAATGGTGCTGAGAGTGGCTATTGCCAATGGGGAGGTAGATATGGAAATGCCTGCTGGTGCATAAAGTTGCCCGATAGTGTACCGATTGAAGTACCAGGAAAATGCCAACGCTAA As a result, the amino acid sequence is: Val-Arg-Asp-Ala-Tyr-Ile-Ala-Lys-Pro-Glu-Asn-Cys-Val-Tyr-His-Cys-Ala-Thr-Asn-Glu-Gly-Cys-Asn-Lys-Leu-Cys-Thr-Asp-Asn-Gly-Ala-Glu-Ser-Gly-Tyr-Cys-Gln-Trp-Gly-Gly-Arg-Tyr-Gly-Asn-Ala-Cys-Trp-Cys-Ile-Lys-Leu-Pro-Asp-Ser-Val-Pro-Ile-Glu-Val-Pro-Gly-Lys-Cys-Gln-Arg Homology DKK-SP1 shares 92.19% of its amino acid sequence with BmK M4, which is also a scorpion alpha-like toxin. DKK-SP1 is also functionally similar to the delta-conotoxins, which bind to receptor site 6 of voltage-gated sodium channels, slowing channel inactivation. Target DKK-SP1 specifically targets site 3 of the voltage-gated sodium channel Nav1.8. Mode of Action The alpha-like toxin family of scorpion toxins occupies receptor site 3 of voltage-gated sodium channels.
Receptor site 3 is located between segments 3 and 4 of the alpha subunit of the voltage-gated sodium channel. Mechanistically, alpha-like scorpion toxins prevent the normal gating movement of the sodium channel S4 segment by changing its conformational properties. This leads to delayed channel inactivation, which prolongs the depolarization phase of the action potential. Toxicity Generally, Mesobuthus martensii is not considered highly lethal to humans. Scorpion injuries usually cause skin redness, swelling, and numbness. The Buthus scorpion may also provoke symptoms of severe excitation of the autonomic nervous system. Goudet et al. showed that the LD50 of Mesobuthus martensii toxins is 2.4 mg/kg in humans. The LD50 of DKK-Sp1 in mice is 20.57 grams/kg. Treatment Although Mesobuthus martensii is not considered a particularly lethal scorpion, its injuries may still cause local symptoms. Treatment against such effects is mostly symptomatic. Moderate and severe cases may require hospitalization. A 2002 study conducted by Liu et al. concluded that DKK-SP1 exhibited anti-inflammatory properties in mice. Mesobuthus martensii toxins have been used in Chinese medicine for years. However, no clinical trials have been performed on humans yet. References Scorpion toxins Ion channel toxins Neurotoxins
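The published nucleotide sequence can be checked against the stated amino acid sequence with a short translation script; this is a sketch using the standard genetic code, with the open reading frame copied from the article:

```python
from itertools import product

# standard genetic code, codons enumerated in TCAG order ("*" marks stop codons)
bases = "TCAG"
aa = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
codon_table = {"".join(c): a for c, a in zip(product(bases, repeat=3), aa)}

def translate(dna):
    """Translate an open reading frame, stopping at the first stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        residue = codon_table[dna[i:i + 3]]
        if residue == "*":
            break
        protein.append(residue)
    return "".join(protein)

# the DKK-SP1 gene sequence quoted above (198 nucleotides, ending in the TAA stop)
dkk_sp1_orf = (
    "GTTCGTGATGCTTATATTGCCAAGCCCGAAAACTGTGTATACCATTGTGCTACAAATGAAGGT"
    "TGCAACAAATTATGTACTGACAATGGTGCTGAGAGTGGCTATTGCCAATGGGGAGGTAGATAT"
    "GGAAATGCCTGCTGGTGCATAAAGTTGCCCGATAGTGTACCGATTGAAGTACCAGGAAAATGC"
    "CAACGCTAA"
)
peptide = translate(dkk_sp1_orf)  # one-letter form of the Val-Arg-Asp-... sequence
```

Running the translation reproduces the residue sequence listed in the article, in one-letter code, up to the TAA stop codon.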
DKK-Sp1
[ "Chemistry" ]
894
[ "Neurochemistry", "Neurotoxins" ]
69,082,806
https://en.wikipedia.org/wiki/Speed%20limits%20in%20Austria
The general speed limits in Austria are as follows: References Austria Roads in Austria
Speed limits in Austria
[ "Physics" ]
17
[ "Physical systems", "Transport", "Transport stubs" ]
69,082,863
https://en.wikipedia.org/wiki/Speed%20limits%20in%20Malta
The general speed limits in Malta are as follows: References Malta Roads in Malta
Speed limits in Malta
[ "Physics" ]
17
[ "Physical systems", "Transport", "Transport stubs" ]
69,082,905
https://en.wikipedia.org/wiki/Speed%20limits%20in%20Luxembourg
The general speed limits in Luxembourg are as follows: References Luxembourg Roads in Luxembourg
Speed limits in Luxembourg
[ "Physics" ]
17
[ "Physical systems", "Transport", "Transport stubs" ]
69,083,337
https://en.wikipedia.org/wiki/Speed%20limits%20in%20Liechtenstein
The general speed limits in Liechtenstein are the same for every category of vehicle. They are as follows: References Liechtenstein Roads in Europe
Speed limits in Liechtenstein
[ "Physics" ]
27
[ "Physical systems", "Transport", "Transport stubs" ]
69,083,416
https://en.wikipedia.org/wiki/Speed%20limits%20in%20Kazakhstan
The general speed limits in Kazakhstan are as follows: References Kazakhstan Roads in Kazakhstan
Speed limits in Kazakhstan
[ "Physics" ]
17
[ "Physical systems", "Transport", "Transport stubs" ]
67,653,537
https://en.wikipedia.org/wiki/Basil%20Lythgoe
Basil Lythgoe FRS (18 August 1913 – 18 April 2009) was a British organic chemist who investigated the structure of many natural substances including nucleosides, plant toxins, and vitamin D2. He was Professor of Organic Chemistry at the University of Leeds. Biography Basil Lythgoe was born in Leigh, the second of three children of Peter Whittaker Lythgoe (company secretary of a local textile firm) and Agnes Lily Lythgoe (née Shepherd). Basil, like his father, attended Leigh Grammar School. Aided by a county grant, he progressed to the University of Manchester in 1930. His final degree examinations were delayed by a severe throat infection; he graduated in 1934, with first class honours. Lythgoe stayed at Manchester to work on his PhD, supervised by Professor I. W. Heilbron, FRS; it was awarded in 1936. He then joined ICI in Huddersfield, where he worked on a synthetic dye. But he soon returned to the University of Manchester as an assistant lecturer, where he worked with Alexander Todd, successor to Heilbron. In 1946 Lythgoe accompanied Todd to Cambridge as an “assistant in research”; he was later promoted to Lecturer. Their main area of research was nucleosides. Their findings, and those of others in the group, contributed to determining the correct structure of DNA. By 1948 Lythgoe was working independently, although still on nucleosides. Later, he turned his attention to the structure of macrozamin, a very toxic natural substance. In 1953 Basil Lythgoe moved to the University of Leeds to take up the professorship of organic chemistry. One of his principal research areas for many years was taxine alkaloids. Lythgoe was an early user of nuclear magnetic resonance (NMR), which helped him determine the correct structure of taxine-I. Another extensive area of research was calciferols. In one paper he described the synthesis of cholecalciferol, which involved the use of the Wittig reaction.
It was “early days for Wittig reagents, and possibly Lythgoe saw this as one of the strengths of the work in the 1958 paper”. Basil Lythgoe retired in 1978. Honours Lythgoe was elected Fellow of the Royal Society (FRS) in 1958. He was also appointed to the Tilden Lectureship of the Chemical Society in 1958, received the Synthetic Organic Chemistry Award of the Chemical Society and the Simonsen Lectureship of the Chemical Society in 1978, and received the Chemical Society Award for Organic Synthesis in 1979. Family The engagement of Basil Lythgoe and the mathematician Kathleen (Kate) Cameron Hallum was announced in April 1946. They had met at Manchester and were married on 29 June. They had two sons: John Cameron (1948) and Andrew Hallum (1950). John graduated in electrical and electronic engineering at the University of Birmingham and Andrew in materials science at the University of Liverpool. Both followed industrial careers and were to marry and have families. Kate died in Leeds on 10 July 2003. Basil Lythgoe developed dementia in later life. He died on 18 April 2009 at a nursing home in Wychbold. References People from Leigh, Greater Manchester British organic chemists Fellows of the Royal Society Alumni of the University of Manchester Alumni of the University of Cambridge Alumni of the University of Leeds Academics of the University of Manchester Academics of the University of Cambridge Fellows of King's College, Cambridge Academics of the University of Leeds 1913 births 2009 deaths
Basil Lythgoe
[ "Chemistry" ]
727
[ "Organic chemists", "British organic chemists" ]
67,654,081
https://en.wikipedia.org/wiki/Demetrios%20Magiros
Demetrios G. Magiros (Δημήτριος Γ. Μαγείρος, 19 December 1912, Euboea, Greece – 19 January 1982, Philadelphia, Pennsylvania) was a Greek-American mathematician, specializing in the stability of dynamical systems. Education and career Magiros did his undergraduate and graduate study at the University of Athens (National and Kapodistrian University of Athens), where he received his doctorate in pure mathematics in 1940. At the National Technical University of Athens he was appointed a lecturer in mechanics and geodesy and subsequently was promoted to professor of mathematics. During WW II he published no papers but in 1946 he published three papers on the catenary. In 1949 he went to the USA. There he studied applied mathematics at Brown University, at the Courant Institute of Mathematical Sciences, and at Massachusetts Institute of Technology (MIT). After holding research positions at Columbia University's IBM Thomas J. Watson Research Laboratory, the Republic Aviation Corporation, and at the Courant Institute, he was appointed professor of mathematics and mechanics at Hofstra University. When he was a professor at Hofstra, he was also a consultant for the General Electric Company's Missile and Space Vehicle Department at the Valley Forge Technology Center. In 1960 he resigned from Hofstra University to work full-time as a researcher at General Electric's Missile and Space Vehicle Department. He worked for General Electric Aerospace for the remainder of his career. During his career Magiros published 54 papers, 2 of them in the Proceedings of the National Academy of Sciences. In 2012 a book containing a selection of 43 of his papers was published with Spyros G. Tzafestas as editor. The book is organized into three parts: mathematics applied to engineering modelling and social issues (with 11 papers), nonlinear mechanics (with 18 papers), and dynamic systems analysis (with 12 papers), plus an appendix with 2 papers published in Soviet mathematical journals. 
The section on nonlinear mechanics contains 8 papers on celestial and orbital mechanics. The section on dynamical systems analysis contains 6 papers on stability analysis, 4 on precessional phenomena, and 2 on separatrices of dynamical systems. Selected publications (reprinted from the 1966 original in the Proceedings of the Athens Academy of Sciences; George H. Reehl (1923–2012) was an electrical engineer employed by General Electric) References Greek emigrants to the United States 20th-century Greek mathematicians Applied mathematicians National and Kapodistrian University of Athens alumni Academic staff of the National Technical University of Athens Hofstra University faculty General Electric employees People from Euboea (regional unit) 1912 births 1982 deaths
Demetrios Magiros
[ "Mathematics" ]
548
[ "Applied mathematics", "Applied mathematicians" ]
67,654,122
https://en.wikipedia.org/wiki/Philip%20George%20Burke
Philip George Burke FRS (18 October 1932 – 4 June 2019) was a British theoretical and computational physicist who developed the R-matrix method for studying electron collisions with atoms and molecules. Life He was born in London. He graduated from University College of the South West and University College London. He worked at the National Physical Laboratory, Teddington. From 1959 to 1960, he worked at the Lawrence Berkeley Radiation Laboratory. The majority of Burke's research career was based at Queen's University Belfast, where he was a member of the Centre for Theoretical Atomic, Molecular and Optical Physics. He was elected Fellow of the Royal Society in 1978 and was awarded a CBE in 1993. References British physicists Computational physicists British theoretical physicists Fellows of the Royal Society Academics of Queen's University Belfast 1932 births 2019 deaths Members of the Order of the British Empire
Philip George Burke
[ "Physics" ]
172
[ "Computational physicists", "Computational physics" ]
67,654,308
https://en.wikipedia.org/wiki/Tides%20in%20marginal%20seas
Tides in marginal seas are tides affected by their location in semi-enclosed areas along the margins of continents and differ from tides in the open oceans. Tides are water level variations caused by the gravitational interaction between the Moon, the Sun and the Earth. The resulting tidal force is a secondary effect of gravity: it is the difference between the actual gravitational force and the centrifugal force. While the centrifugal force is constant across the Earth, the gravitational force is dependent on the distance between the two bodies and is therefore not constant across the Earth. The tidal force is thus the difference between these two forces at each location on the Earth. In an idealized situation, assuming a planet with no landmasses (an aqua planet), the tidal force would result in two tidal bulges on opposite sides of the earth. This is called the equilibrium tide. However, due to global and local ocean responses, different tidal patterns are generated. The complicated ocean responses are the result of the continental barriers, resonance due to the shape of the ocean basin, the tidal wave's inability to keep up with the tracking of the Moon, the Coriolis acceleration, and the elastic response of the solid earth. In addition, when the tide arrives in the shallow seas it interacts with the sea floor, which leads to the deformation of the tidal wave. As a result, tides in shallow waters tend to be larger, of shorter wavelength, and possibly nonlinear relative to tides in the deep ocean. Tides on the continental shelf The transition from the deep ocean to the continental shelf, known as the continental slope, is characterized by a sudden decrease in water depth. To satisfy conservation of energy, the tidal wave has to deform as a result of the decrease in water depth. The total energy of a linear progressive wave per wavelength is the sum of the potential energy (PE) and the kinetic energy (KE).
The potential and kinetic energy integrated over a complete wavelength are the same, under the assumption that the water level variations are small compared to the water depth (η ≪ H): PE = KE = ½ρg ∫₀^λ η² dx, where ρ is the density, g the gravitational acceleration and η the vertical tidal elevation. The total wave energy becomes: E = PE + KE = ρg ∫₀^λ η² dx. If we now solve for a harmonic wave η = a cos(kx − ωt), where k is the wave number and a the amplitude, the total energy per unit area of surface becomes: E = ½ρga². A tidal wave has a wavelength that is much larger than the water depth, and thus according to the dispersion relation of gravity waves it travels with the phase and group velocity of a shallow water wave: c = c_g = √(gH). The wave energy is transmitted by the group velocity of a wave and thus the energy flux (F) is given by: F = Ec_g = ½ρga²√(gH). The energy flux needs to be conserved, and with ρ and g constant this leads to: a²√H = constant, where F₁ = F₂ and thus a ∝ H^(−1/4). When the tidal wave propagates onto the continental shelf, the water depth H decreases. In order to conserve the energy flux, the amplitude a of the wave needs to increase (see figure 1). Transmission coefficient The above explanation is a simplification, as not all tidal wave energy is transmitted; it is partly reflected at the continental slope. The transmission coefficient of the tidal wave is given by: T = a_t/a₀ = 2/(1 + √(H₂/H₁)), where H₁ is the water depth of the deep ocean and H₂ that of the continental shelf. This equation indicates that when H₂ = H₁ the transmitted tidal wave has the same amplitude as the original wave. Furthermore, the transmitted wave will be larger than the original wave when H₂ < H₁, as is the case for the transition to the continental shelf. The reflected wave amplitude (a_r) is determined by the reflection coefficient of the tidal wave: r = a_r/a₀ = (1 − √(H₂/H₁))/(1 + √(H₂/H₁)). This equation indicates that when H₂ = H₁ there is no reflected wave, and if H₂ < H₁ the reflected tidal wave will be smaller than the original tidal wave. Internal tide and mixing At the continental shelf the reflection and transmission of the tidal wave can lead to the generation of internal tides on the pycnocline. The surface (i.e.
barotropic) tide generates these internal tides where stratified waters are forced upwards over a sloping bottom topography. The internal tide extracts energy from the surface tide and propagates in both the shoreward and seaward directions. The shoreward-propagating internal waves shoal when reaching shallower water, where the wave energy is dissipated by wave breaking. The shoaling of the internal tide drives mixing across the pycnocline, high levels of carbon sequestration and sediment resuspension. Furthermore, through nutrient mixing the shoaling of the internal tide has a fundamental control on the functioning of ecosystems on the continental margin. Tidal propagation along coasts After entering the continental shelf, a tidal wave quickly faces a boundary in the form of a landmass. When the tidal wave reaches a continental margin, it continues as a boundary-trapped Kelvin wave. Along the coast, a boundary-trapped Kelvin wave is also known as a coastal Kelvin wave or edge wave. A Kelvin wave is a special type of gravity wave that can exist when there is (1) gravity and stable stratification, (2) sufficient Coriolis force and (3) the presence of a vertical boundary. Kelvin waves are important in the ocean and shelf seas; they form a balance between inertia, the Coriolis force and the pressure gradient force. The simplest equations that describe the dynamics of Kelvin waves are the linearized shallow water equations for homogeneous, inviscid flows. These equations can be linearized for a small Rossby number, no frictional forces and under the assumption that the wave height is small compared to the water depth (η ≪ H). The linearized depth-averaged shallow water equations become: u momentum equation: ∂u/∂t − fv = −g ∂η/∂x; v momentum equation: ∂v/∂t + fu = −g ∂η/∂y; the continuity equation: ∂η/∂t + H(∂u/∂x + ∂v/∂y) = 0, where u is the zonal velocity (x direction), v the meridional velocity (y direction), t is time and f is the Coriolis frequency.
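Several of the relations above lend themselves to a quick numerical check. The sketch below is illustrative only: the depths, latitude and function names are assumptions, not values from the article. It computes the Green's-law shoaling amplification a ∝ H^(−1/4), the transmission and reflection coefficients at an abrupt depth step, and the shallow-water phase speed and Rossby radius of deformation that govern the Kelvin-wave dynamics discussed next:

```python
import math

g = 9.81          # gravitational acceleration (m/s^2)
Omega = 7.292e-5  # Earth's rotation rate (rad/s)

def greens_law_amplification(H1, H2):
    """Amplitude ratio a2/a1 from conservation of the energy flux, a ~ H^(-1/4)."""
    return (H1 / H2) ** 0.25

def transmission_coefficient(H1, H2):
    """Transmitted/incident amplitude for a long wave crossing a depth step H1 -> H2."""
    return 2.0 / (1.0 + math.sqrt(H2 / H1))

def reflection_coefficient(H1, H2):
    """Reflected/incident amplitude at the same depth step."""
    return (1.0 - math.sqrt(H2 / H1)) / (1.0 + math.sqrt(H2 / H1))

def shallow_water_speed(H):
    """Phase (and group) speed of a shallow-water wave, c = sqrt(g H)."""
    return math.sqrt(g * H)

def rossby_radius(H, lat_deg):
    """Barotropic Rossby radius of deformation, R = sqrt(g H) / f."""
    f = 2 * Omega * math.sin(math.radians(lat_deg))
    return shallow_water_speed(H) / f

# Deep ocean (4000 m) onto a continental shelf (200 m):
amp = greens_law_amplification(4000.0, 200.0)  # ~2.1: the tide roughly doubles
T = transmission_coefficient(4000.0, 200.0)    # > 1: transmitted wave is larger
r = reflection_coefficient(4000.0, 200.0)      # partial reflection, 0 < r < 1
R = rossby_radius(50.0, 52.0)                  # ~190 km trapping scale on a 50 m shelf at 52°N
```

Note that T = 1 + r, which expresses continuity of the surface elevation across the depth step.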
Kelvin waves are named after Lord Kelvin, who first described them after finding solutions to the linearized shallow water equations with the boundary condition v = 0, i.e. no flow perpendicular to a coast taken here along the x-axis. When this assumption is made, the linearized depth-averaged shallow water equations that can describe a Kelvin wave become: u momentum equation: ∂u/∂t = −g ∂η/∂x; v momentum equation: fu = −g ∂η/∂y; the continuity equation: ∂η/∂t + H ∂u/∂x = 0. Now it is possible to get an expression for η, by taking the time derivative of the continuity equation and substituting the u momentum equation: ∂²η/∂t² = gH ∂²η/∂x². The same can be done for u, by taking the time derivative of the u momentum equation and substituting the continuity equation: ∂²u/∂t² = gH ∂²u/∂x². Both of these equations take the form of the classical wave equation, with c = √(gH), which is the same velocity as the tidal wave and thus that of a shallow water wave. These preceding equations govern the dynamics of a one-dimensional non-dispersive wave, for which the following general solution exists: η = e^(−y/R) F(x − ct) and u = √(g/H) e^(−y/R) F(x − ct), where the length R = √(gH)/f is the Rossby radius of deformation and F is an arbitrary function describing the wave motion. In its most simple form, F is a cosine or sine function which describes a wave motion in the positive and negative direction. The Rossby radius of deformation is a typical length scale in the ocean and atmosphere that indicates when rotational effects become important; it is a measure for the trapping distance of a coastal Kelvin wave. The exponential term results in an amplitude that decays away from the coast. Therefore this set of equations describes a wave that travels along the coast with a maximum amplitude at the coast which declines towards the ocean. These solutions also indicate that a Kelvin wave always travels with the coast on its right hand side in the Northern Hemisphere and with the coast on its left hand side in the Southern Hemisphere. In the limit of no rotation, where f → 0, the Rossby radius of deformation grows without bound, the coastal trapping vanishes, and the wave becomes a simple gravity wave with its crests oriented perpendicular to the coast. In the next section, it is shown how these Kelvin waves behave when traveling along a coast, in enclosed shelf seas or in estuaries and basins. Tides in enclosed shelf seas The expression of tides as a bounded Kelvin wave is well observable in enclosed shelf seas around the world (e.g. the English Channel, the North Sea or the Yellow Sea). Animation 1 shows the behaviour of a simplified case of a Kelvin wave in an enclosed shelf sea for the case with (lower panel) and without friction (upper panel). The shape of an enclosed shelf sea is represented as a simple rectangular domain in the Northern Hemisphere which is open on the left hand side and closed on the right hand side.
The tidal wave, a Kelvin wave, enters the domain in the lower left corner and travels to the right with the coast on its right. The sea surface height (SSH, left panels of animation 1), the tidal elevation, is maximum at the coast and decreases towards the centre of the domain. The tidal currents (right panels of animation 1) are in the direction of wave propagation under the crest and in the opposite direction under the trough. They are both maximum under the crest and the trough of the wave and decrease towards the centre. This is expected, as the expressions for the elevation and velocity are in phase: they both depend on the same arbitrary function describing the wave motion and on the same exponential decay term. On the enclosed right hand side, the Kelvin wave is reflected, and because it always travels with the coast on its right, it will now travel in the opposite direction. The energy of the incoming Kelvin wave is transferred through Poincaré waves along the enclosed side of the domain to the outgoing Kelvin wave. The final pattern of the SSH and the tidal currents is made up of the sum of the two Kelvin waves. These two can amplify each other, and this amplification is maximum when the length of the shelf sea is a quarter wavelength of the tidal wave. Next to that, the sum of the two Kelvin waves results in several static minima in the centre of the domain which hardly experience any tidal motion; these are called amphidromic points. In the upper panel of figure 2, the absolute time-averaged SSH is shown in red shading and the dotted lines show the zero tidal elevation level at roughly hourly intervals, also known as cotidal lines. Where these lines intersect, the tidal elevation is zero during a full tidal period, and thus this is the location of the amphidromic points. In the real world, the reflected Kelvin wave has a lower amplitude due to energy loss as a result of friction and through the transfer via Poincaré waves (lower left panel of animation 1).
The tidal currents are proportional to the wave amplitude and therefore also decrease on the side of the reflected wave (lower right panel of animation 1). Finally, the static minima are no longer in the centre of the domain, as the wave amplitude is no longer symmetric. Therefore, the amphidromic points shift towards the side of the reflected wave (lower panel of figure 2). The dynamics of a tidal Kelvin wave in an enclosed shelf sea are well manifested and studied in the North Sea. Tides in estuaries and basins When tides enter estuaries or basins, the boundary conditions change as the geometry changes drastically. The water depth becomes shallower and the width decreases; moreover, the depth and width become significantly variable over the length and width of the estuary or basin. As a result the tidal wave deforms, which affects the tidal amplitude, phase speed and the relative phase between tidal velocity and elevation. The deformation of the tide is largely controlled by the competition between bottom friction and channel convergence. Channel convergence increases the tidal amplitude and phase speed, as the energy of the tidal wave travels through a smaller area, while bottom friction decreases the amplitude through energy loss. The modification of the tide leads to the creation of overtides, i.e. higher harmonics of the tidal constituents. These overtides are multiples, sums or differences of the astronomical tidal constituents, and as a result the tidal wave can become asymmetric. A tidal asymmetry is a difference between the duration of the rise and the fall of the tidal water elevation, and it can manifest itself as a difference between flood and ebb tidal currents. The tidal asymmetry and the resulting currents are important for sediment transport and turbidity in estuaries and tidal basins. Each estuary and basin has its own distinct geometry, and these can be subdivided into several groups of similar geometries, each with its own tidal dynamics.
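How an overtide distorts the tidal curve can be illustrated with a minimal sketch. The M2 and M4 amplitudes and their relative phase below are arbitrary illustrative choices, not values from any particular estuary:

```python
import math

def tide(t, a_m2=1.0, a_m4=0.25, phase=-math.pi / 2, period=12.42 * 3600.0):
    """M2 tide plus an M4 overtide; the relative phase sets the asymmetry."""
    omega = 2 * math.pi / period
    return a_m2 * math.cos(omega * t) + a_m4 * math.cos(2 * omega * t + phase)

def rise_and_fall_durations(n=20000, period=12.42 * 3600.0):
    """Time from low water to high water and back, found by dense sampling."""
    ts = [i * period / n for i in range(n)]
    etas = [tide(t) for t in ts]
    t_high = ts[etas.index(max(etas))]   # time of high water
    t_low = ts[etas.index(min(etas))]    # time of low water
    fall = (t_low - t_high) % period     # high water -> low water
    rise = (t_high - t_low) % period     # low water -> high water
    return rise, fall
```

With this phase choice the fall of the water level takes less time than the rise, i.e. the curve is asymmetric even though each constituent on its own is a pure sinusoid.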
See also References Tides Planetary science Geophysics Oceanography Fluid dynamics
Tides in marginal seas
[ "Physics", "Chemistry", "Astronomy", "Engineering", "Environmental_science" ]
2,685
[ "Hydrology", "Applied and interdisciplinary physics", "Oceanography", "Chemical engineering", "Geophysics", "Piping", "Planetary science", "Astronomical sub-disciplines", "Fluid dynamics" ]
67,654,568
https://en.wikipedia.org/wiki/International%20Longevity%20Alliance
The International Longevity Alliance (ILA) is an international nonprofit organization that is a platform for interaction between regional organizations that support anti-aging technologies, usually at the administrative and popularization levels. Purpose The declared objectives of the organization are to establish regional organizations' interaction and collaboration, to popularize the idea of the need to combat the aging process as a negative but treatable medical condition of the body, and to provide support for scientific research in all possible ways and at all possible levels around the world (up to cooperation with WHO). History ILA began to function in January 2013 as an informal platform for communication between managers and representatives of several organizations. In September 2014, the alliance was formally registered in Paris, France, acquiring the status of an official organization. Organizations As of September 2024, ILA includes 65 nonprofit organizations from 35 countries. Some of them are: SENS Research Foundation Healthy Life Extension Society (HEALES) Israeli Longevity Alliance Global Healthspan Policy Institute (GHPI) Council for Public Health and the Problems of Demography (CPHD) I Am Future Foundation Institute of Exponential Sciences Moreover, the ILA's Board of Advisors includes Aubrey de Grey, Natasha Vita-More, and others. Activity In addition to being a platform for interaction between organizations and facilitating their activities, the alliance also periodically holds online conferences, seminars and other public events to draw people's attention to the problem of aging. The ILA popularizes the initiative of holding International Longevity Day (October 1) and International Longevity Month (October) to promote biomedical aging research.
Another anniversary date that the organization popularizes and promotes is Metchnikoff Day, which falls on May 15 – the birthday of Élie Metchnikoff, who is considered the founder of gerontology. Permanently supported projects: Major Mouse Testing Program (MMTP) – project aimed at testing potential anti-aging approaches in mice. DENIGMA – IT platform for the computational biology of aging. Longevity for All – public information resource. Longevity History – educational resource on the history of the study of aging. ILA attaches particular importance to cooperation with WHO in order to draw the attention of state and interstate structures to the problem of aging as a type of medical problem that needs scientific study and treatment. In particular, ILA took an active part in the discussion as a result of which WHO included a special additional code, XT9T, in the International Classification of Diseases ICD-11. Since then, aging has been officially recognized as a major factor that increases the risk of diseases, the severity of their course and the difficulty of treatment. Critique ILA does not have an official office – ILA members are located in different countries around the world and in the vast majority of cases communicate with each other only via the Internet. ILA conferences are also usually online. ILA does not have its own scientific laboratories, always acting only as a partner organization and/or providing administrative and public support. See also Longevity escape velocity Timeline of senescence research References External links Longevity History Biotechnology advocacy Life extension organizations International non-profit organizations Science advocacy organizations Organizations established in 2013
International Longevity Alliance
[ "Engineering", "Biology" ]
639
[ "Biotechnology organizations", "Biotechnology advocacy" ]
67,654,719
https://en.wikipedia.org/wiki/Southern%20Caribbean%20upwelling%20system
The Southern Caribbean Upwelling system (SCUS) is a low latitude tropical upwelling system, where due to multiple environmental and bathymetric conditions water from the deep sea is forced to the surface layers of the ocean. The SCUS is located at about 10°N on the southern coast of the Caribbean sea basin off Colombia, Venezuela, and Trinidad. There are two main upwelling zones in the system that vary in intensity throughout the year: the Western Upwelling Zone (WUZ) and the Eastern Upwelling Zone (EUZ). The WUZ is situated between 74–71°W and generates mainly seasonal upwelling and high offshore transport due to intense winds. The EUZ, situated between 71–60°W, is less intense but is more favourable for upwelling throughout the year. General information In the three decades after 1990, the upwelling intensified, producing cooling of the sea surface temperature (SST) in the WUZ; this is in contrast to the general temperature of the Caribbean Sea, which has been shown to increase. The "typical" Caribbean surface water is a mixture of North Atlantic Surface Water (NASW) and riverine waters from the Orinoco and Amazon rivers. The intensity of the Caribbean low-level jet (further explained below) and the coastal orientation determine the timing and spatial variability of this upwelling system. The system is likely responsible for a major part of the primary production, due to the nutrients that are added to the system through the upwelling. Under the Caribbean surface waters more saline water is found, with values close to those typical for the Subtropical Underwater (SUW): salinity SA ~37, Θ ~22 °C. This forms a subsurface maximum (SSM) of water more saline than the water on top of it. After the rainy season the SSM is lower due to dilution of the surface waters. Characterization of the SCUS Since 1994, variations in upwelling have been studied using cycles of satellite SST (sea surface temperature).
The SST is used as a proxy for upwelling (explained in more detail below) in this tropical region, as are the dominant winds and chlorophyll a. These are all proxies that are relatively easy to measure and quite easily accessible. Location and source of the SCUS The location of the SCUS depends on the Rossby radius, R. The Rossby radius changes the positioning of the upwelling relative to the coastline. The Rossby radius for this region is ~19 km, estimated using a mean depth h = 35 m and gravity g = 9.81 m s⁻². The upwelling zones are found close to the coast, roughly within the ~19 km Rossby radius. However, in rare cases upwelled water moves offshore by over 250 km from the coast. Upwelled waters in the SCUS are consistent in geochemical composition with the "Subtropical Underwater". This is a water mass that comes from the central Atlantic and, due to its relatively dense water properties (salinity SA ~37 g per kg of water, temperature ~22 °C), lies under the Caribbean surface waters. Because the properties are so similar to the water that is upwelled in the SCUS, it is likely that the water comes from this water mass. Sea surface temperature (SST) The SCUS is studied through the SST at high resolution (1 km grid) with a radiometer (National Oceanic and Atmospheric Administration / NOAA). These data are used to identify differences in the SST and to locate upwelling regions. There is a semi-annual cycle of SST within the upwelling areas, with cooling periodically occurring between December and April, showing 2–4 upwelling pulses peaking during February–March. Around May there is a typical increase in SST, followed by cooling during June–August due to a midyear upwelling. There is a strong relationship between SST and chlorophyll-a; this is explained in more detail later in this article. Wind The trade winds that blow over the southern Caribbean Sea, amplified by the Caribbean low-level jet, generate northward Ekman transport.
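This wind-driven transport can be estimated from the standard bulk formulae. In the sketch below, the drag coefficient and the example wind speed are assumptions for illustration, not measured SCUS values:

```python
import math

rho_air, rho_sea = 1.22, 1025.0  # air and seawater densities (kg/m^3)
C_d = 1.3e-3                      # bulk drag coefficient (typical open-ocean value)
Omega = 7.292e-5                  # Earth's rotation rate (rad/s)

def wind_stress(U):
    """Bulk formula for the wind stress on the sea surface, tau = rho_air * C_d * U^2 (N/m^2)."""
    return rho_air * C_d * U * U

def ekman_transport(U, lat_deg=10.0):
    """Depth-integrated Ekman transport magnitude, M = tau / (rho_sea * f), in m^2/s."""
    f = 2 * Omega * math.sin(math.radians(lat_deg))
    return wind_stress(U) / (rho_sea * f)

# Trade winds of 8 m/s at the ~10°N latitude of the SCUS:
M = ekman_transport(8.0)  # ~4 m^2/s of offshore transport per metre of coastline
```

Because f decreases towards the equator, the same wind stress drives a larger Ekman transport at the low latitude of the SCUS than it would at mid-latitudes.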
The intensity of the trade winds varies per season (the image of the seasonal differences shows the WUZ in b) and c) in December and in d) and e) in February) and explains the variation in upwelling and the measured differences in SST mentioned above. The driving force of the Ekman transport is the wind stress τ on the sea surface. The driving winds are divided into multiple areas; east of 68°W has relatively stable wind speeds (>6 m s⁻¹), slightly lower during August–November (4–6 m s⁻¹). The direction of the wind is generally parallel to the southern Caribbean Sea margin. However, between May and October the EUZ has more along-shore winds, deviating only ~1.7° from the along-shore direction, and from November until April the direction is more onshore, within ±12°. In the WUZ the wind is more offshore during the majority of the year, at approximately −14°. This changes during October–November, when winds are aligned with the shoreline (~0.2°). These wind directions produce the northward wind curl and thus the offshore Ekman transport that is favourable year-round for the SCUS. Chlorophyll-a Chlorophyll a is used to assess the productivity of phytoplankton and therefore zooplankton. Amounts of chlorophyll-a increase with the higher nutrient concentrations that are found in upwelled water, and it can therefore be used as a proxy for upwelling systems. Within the SCUS there are strong correlations between SST and chlorophyll-a. These show chlorophyll-a maxima in December and April and a shorter maximum between June and July, further confirming the upwelling of nutrient-rich water. Biological impact As mentioned in the chlorophyll-a section, the nutrient concentrations that come with the water from the SUW have a strong positive impact on phytoplankton productivity. It is estimated that up to 95% of the small pelagic biomass in the southern Caribbean Sea is sustained by the primary production that comes with these upwelled waters.
In the EUZ there is a four times higher amount of small pelagic biomass compared to the WUZ. This difference is attributed to the prolonged duration of the upwelling: the water in the EUZ has an SST < 26 °C for 8.5 months and the WUZ for 6.9 months. In addition to that, the EUZ has a wider continental shelf. Upwelling over wide and shallow continental shelves can generate resuspension and transport of essential microelements from the benthic boundary layer to the surface. The Caribbean low-level jet (CLLJ) The CLLJ has a core in the western basin (70°W–80°W, 15°N) and maximum horizontal wind speeds of up to 16 m/s, peaking in July and February. The Caribbean low-level jet is an amplification of the large-scale circulation of the North Atlantic subtropical high (NASH). The NASH interacts closely with the trade winds and therefore connects the CLLJ with the trade winds. References Caribbean Sea Oceanography
Southern Caribbean upwelling system
[ "Physics", "Environmental_science" ]
1,532
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
67,654,734
https://en.wikipedia.org/wiki/Memory%20of%20Mankind%20on%20the%20Moon
Memory of Mankind on the Moon was a time capsule that was launched onboard Astrobotic Technology's Peregrine lander. It was made in collaboration with Hungarian company Puli Space Technologies and Memory of Mankind. References Space Peregrine Payloads
Memory of Mankind on the Moon
[ "Physics", "Astronomy", "Mathematics" ]
52
[ "Outer space", "Astronomy stubs", "Space", "Geometry", "Spacetime", "Outer space stubs" ]
67,655,566
https://en.wikipedia.org/wiki/Elizabeth%20Niespolo
Elizabeth Niespolo is an American geologist. Her work utilizes geochemical methods to understand archaeological sites and human activity on multiple continents. Niespolo integrates laboratory and field work studying natural materials such as ostrich egg shells, corals, and minerals in rocks to quantify potential human environmental signatures preserved in these materials and their relevance in piecing together an understanding of Homo sapiens through time. In the absence of archaeological sites, Niespolo uses high precision isotopic dating of minerals from volcanoes to determine their petrologic and eruptive history. Early life and education Niespolo was a first-generation college student when she started her studies at the university level and chose to pursue a path integrating science, the Earth, and human history. She earned her undergraduate degree, a Bachelor of Arts, with a double major from the University of California, Berkeley in Berkeley, California. She chose one major in the humanities, Classics, and one in the sciences, Astrophysics. During her undergraduate studies, she was drawn towards ancient history. Pursuing this initial inclination, she worked for archaeologists in the classics department to get a feel for what archaeology entailed. Part of this work included on-site field work in Greece, where she found herself drawn to natural materials such as soils and fossils, their story, and how their stories were intertwined with the archaeological deposits. This interest morphed into realizations about natural resources and how things including drywall in houses and cell phone electrical circuits, chips, and battery components are partially made of earth materials requiring mining and quarrying to extract from the Earth and put into everyday use.
She pursued additional field work in Italy where she worked on pottery from Pompeii and saw Mount Vesuvius, which piqued her interest in volcanoes and their interactions with human civilizations. This theme of human civilization interactions with natural processes became a theme in the course of her graduate studies. Graduate studies in geology Niespolo began her graduate studies in geology at California State University, Long Beach in Long Beach, California focusing her thesis work on the minerals in and geochemical signatures of jadeitites from Guatemala. These signatures she used to develop a new way of finding where Mesoamerican jade artifacts originated from. As part of her research she worked at a Mayan archaeological site in Chiapas, Mexico. Her efforts at California State University, Long Beach both on-campus and in the field in Mexico earned her a Master of Science, MS, degree. After completing her MS, Niespolo continued her graduate studies in geology by pursuing a Doctor of Philosophy, PhD, at the University of California, Berkeley. Her work was split into two settings: on-campus laboratory work including activities such as taking geochemical measurements and on-location field work in various locations including the East African Rift Valley. In 2019 Niespolo finished and filed her dissertation and earned her PhD from the University of California, Berkeley. Niespolo's dissertation focused on using both stable and radiogenic isotopes to determine the geochronology and environmental context of human evolution in the past, the Quaternary specifically. Career 2010s: The Americas, Polynesia, and Africa beginnings Niespolo started integrating her archaeological experience with her physics background through geochemical dating methods based on fundamental nuclear physical reactions and radioactive decay chains in the chemical elements. 
Her physics background also came through in the instrumentation she used to measure both radioactive and stable isotope abundances, including mass spectrometry. Isotopic abundances became a center of her work on jadeitites from Guatemala, Central America. Her emphasis was on finding ways to fingerprint natural materials used by humans to make tools and artwork, so that the origin of these artifacts and the materials used in their construction can be pinpointed. She has also emphasized the importance of jade sourcing in terms of economics in the Mayan Empire. As part of this work, Niespolo was a recipient of a research grant in 2014 from the Geological Society of America. Niespolo furthered her experience working with natural minerals by focusing on volcanic sanidine from northern California. This work provided high precision dating to aid in understanding petrologic and eruptive processes that may have been preserved in minerals erupted in the Alder Creek rhyolite during the early Pleistocene. Niespolo pursued radiometric dating of corals from the Cook Islands in Polynesia to aid in providing precise dates for human arrival and inhabitation of the island of Mangaia. As islands in Polynesia were not originally inhabited by humans, precise geochemical dating can help to provide precise timelines for human arrival and colonization of the islands, as well as of the non-native plants humans brought with them. Specifically, Niespolo found that coral abraders contained chemical evidence, in the form of thorium, that Polynesians arrived at Mangaia by 1011.6 ± 5.8 CE and that sweet potatoes arrived by 1361–1466 CE. Following her work on natural materials from Central America, North America, and Polynesia, Niespolo broadened her scope geographically by homing in on piecing together what past physical and chemical environments were like in Africa.
The goal of her work is to correlate geochemical findings with archaeological sites to understand the timing of new tool development, human evolution, and human migration in relation to the land around them. Part of why Niespolo chose to pursue geochronology was her interest in understanding past human evolution in response to changing environmental conditions, which may be helpful in modern times for understanding potential environmental changes in response to human activity. Foundational work on this topic for Niespolo included geochemical measurements of stable isotopes providing understanding of rainfall and vegetation variability in Eastern Africa during the Pleistocene–Holocene. This work provided environmental context for archaeological sites in Eastern Africa in the form of stable isotope abundances from ostrich egg shells and was funded by a $199,496 grant from the National Science Foundation. 2020s: Africa continues and professorship Niespolo draws inspiration in her geology work from the past geologist Charles Lyell, emphasizing that to understand the present, the past is particularly pertinent. Niespolo narrows this point down in an interview with Scientific American, "Geology is the direct means to understanding our resources, and we use natural resources for literally everything (your house, your drinking water, energy). If we don't know the geologic processes controlling these observable resources, we will be hard pressed to continue utilizing them safely and responsibly, and developing more sustainable resource use in the future." In April 2021, Niespolo's research from leading a group of scientists in investigating marine resource overuse in South Africa during the Middle Stone Age and providing high precision dating to correlate Homo sapiens resource use with sea level change in the area was published in the Proceedings of the National Academy of Sciences.
Specifically, the study expanded her use of isotopic dating of ostrich egg shells, this time utilizing 230Th/U (thorium/uranium) burial dating, on remains from an archaeological site near modern-day Cape Town, and found that the deposit of dated remains accumulated between 113,000 and 120,000 years before the study. The study also found that inhabitants of the archaeological site continued to maintain a consistent diet even as sea level dropped following a highstand of the sea, which was attributed to selective foraging by the inhabitants, in part due to an increase in aridity over the dated time period. The same month, Princeton University announced that Niespolo was one of 10 new faculty members approved by Princeton University's Board of Trustees. Her faculty appointment began in the autumn of 2021 at the assistant professor level. In 2022, her further expansion of her geochronology expertise in assessing human–climate dynamics in the past was published in a collaborative study investigating tool and technology development in relation to changes in wind intensity and rainfall around 80,000 to 92,000 years before the published 2022 study in what is modern-day South Africa, to which she contributed uranium-series dating of natural materials. Personal Niespolo enjoys the research setting, its cutting edge nature, and being a part of new discoveries. However, one of the frustrations she has with researching in an academic setting is the amount of time and effort spent on securing funding as opposed to doing the science itself. This particular aspect of research in higher education Niespolo identified as her least favorite part of what she does. Niespolo is a vocal proponent of female mentorship in science disciplines, taking initiative herself by participating in organizations including Bay Area Scientists in Schools.
See also 2021 in mammal paleontology Kondoa Rock-Art Sites Polynesian navigation References Living people American women geologists Women geochemists American women archaeologists Princeton University faculty California State University, Long Beach people California State University, Long Beach alumni Year of birth missing (living people) 21st-century American women
Elizabeth Niespolo
[ "Chemistry" ]
1,820
[ "Geochemists", "Women geochemists" ]
67,655,835
https://en.wikipedia.org/wiki/FragAttacks
FragAttacks, or fragmentation and aggregation attacks, are a group of Wi-Fi vulnerabilities discovered by security researcher Mathy Vanhoef. Since the vulnerabilities are design flaws in the Wi-Fi standard, any device released after 1997 could be vulnerable. The attack can be executed without special privileges. The attack was detailed on August 5, 2021 at Black Hat Briefings USA and later at the USENIX 30th Security Symposium, where recordings were shared publicly. The attack does not leave any trace in the network logs. Patches Vanhoef worked with the Wi-Fi Alliance to help vendors issue patches. Microsoft started issuing patches for Windows 7 through Windows 10 on May 11, 2021. References External links Fragment and Forge: Breaking Wi-Fi Through Frame Aggregation and Fragmentation by Mathy Vanhoef Computer-related introductions in 2021 Computer security exploits Wi-Fi
FragAttacks
[ "Technology" ]
180
[ "Computer security stubs", "Wireless networking", "Wi-Fi", "Computer security exploits", "Computing stubs" ]
67,657,720
https://en.wikipedia.org/wiki/Wave%20nonlinearity
The nonlinearity of surface gravity waves refers to their deviations from a sinusoidal shape. In the fields of physical oceanography and coastal engineering, the two categories of nonlinearity are skewness and asymmetry. Wave skewness and asymmetry occur when waves encounter an opposing current or a shallow area. As waves shoal in the nearshore zone, in addition to their wavelength and height changing, their asymmetry and skewness also change. Wave skewness and asymmetry are important in ocean engineering and coastal engineering for the modelling of random sea states, in particular regarding the distribution of wave height, wavelength and crest length. For practical engineering purposes, it is important to know the probability of these wave characteristics in seas and oceans at a given place and time. This knowledge is crucial for the prediction of extreme waves, which are a danger to ships and offshore structures. Satellite altimeter data from Envisat RA-2 show geographically coherent skewness fields in the ocean, and from these data it has been concluded that large values of skewness occur primarily in regions of large significant wave height. In the nearshore zone, skewness and asymmetry of surface gravity waves are the main drivers of sediment transport. Skewness and asymmetry Sinusoidal waves (or linear waves) are waves having equal height and duration of the crest and the trough, and they can be mirrored in both the crest and the trough. Due to nonlinear effects, waves can transform from sinusoidal to a skewed and asymmetric shape. Skewed waves In probability theory and statistics, skewness refers to a distortion or asymmetry that deviates from a normal distribution. Waves that are asymmetric along the horizontal axis are called skewed waves. Asymmetry along the horizontal axis indicates that the wave crest deviates from the wave trough in terms of duration and height.
Generally, skewed waves have a short and high wave crest and a long and flat wave trough. A skewed wave shape results in larger orbital velocities under the wave crest compared to smaller orbital velocities under the wave trough. For waves having the same velocity variance, the ones with higher skewness produce a larger net sediment transport. Asymmetric waves Waves that are asymmetric along the vertical axis are referred to as asymmetric waves. Wave asymmetry indicates the leaning forward or backward of the wave, with a steep front face and a gentle rear face. A steep front face corresponds to a forward-leaning wave; a steep rear face corresponds to a backward-leaning wave. The duration and height of the wave crest equal the duration and height of the wave trough. An asymmetric wave shape results in a larger acceleration between trough and crest and a smaller acceleration between crest and trough. Mathematical description Skewness (Sk) and asymmetry (As) are measures of the wave nonlinearity and can be defined in terms of the zero-mean wave surface elevation η:

Sk = ⟨η³⟩ / ⟨η²⟩^(3/2)

As = ⟨𝓗(η)³⟩ / ⟨η²⟩^(3/2)

in which 𝓗(η) is the Hilbert transform of η and the angle brackets indicate averaging over many waves. Values for the skewness are positive, with typical values between 0 and 1, where values of 1 indicate high skewness. Values for the asymmetry are negative, with typical values between -1.5 and 0, where values of -1.5 indicate high asymmetry. Ursell number The Ursell number, named after Fritz Ursell, relates the skewness and asymmetry and quantifies the degree of sea surface elevation nonlinearity. Ruessink et al. defined the Ursell number as

Ur = (3/8) · H_s k / (kh)³,

where H_s is the local significant wave height, k is the local wavenumber and h is the mean water depth.
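These definitions can be illustrated with a short pure-Python sketch. The synthetic two-harmonic wave records and all numerical parameter values below are illustrative assumptions, not values from the literature; the Ursell coefficient of 3/8 follows the form attributed to Ruessink et al. above. The sketch uses the fact that the Hilbert transform of cos(m·t + p) is sin(m·t + p), so the transform is known exactly for a sum of cosines:

```python
import math

def sk_as(components, n=4096):
    """Skewness Sk and asymmetry As of a zero-mean record built from
    cosine components [(amplitude, harmonic, phase), ...]. The Hilbert
    transform of cos(m*t + p) is sin(m*t + p), so it is known exactly."""
    eta, hil = [], []
    for i in range(n):
        t = 2.0 * math.pi * i / n
        eta.append(sum(a * math.cos(m * t + p) for a, m, p in components))
        hil.append(sum(a * math.sin(m * t + p) for a, m, p in components))
    m2 = sum(x * x for x in eta) / n                  # variance <eta^2>
    sk = sum(x ** 3 for x in eta) / n / m2 ** 1.5     # <eta^3> / <eta^2>^(3/2)
    asym = sum(x ** 3 for x in hil) / n / m2 ** 1.5   # <H(eta)^3> / <eta^2>^(3/2)
    return sk, asym

def ursell(hs, k, h):
    """Ursell number (coefficient 3/8, the Ruessink et al. form assumed here)."""
    return 3.0 / 8.0 * hs * k / (k * h) ** 3

# Second harmonic in phase with the primary: sharp crest, flat trough (skewed).
sk1, as1 = sk_as([(1.0, 1, 0.0), (0.3, 2, 0.0)])
# Second harmonic shifted by 90 degrees: tilted (asymmetric) wave.
sk2, as2 = sk_as([(1.0, 1, 0.0), (0.3, 2, math.pi / 2)])

print(round(sk1, 2), round(as1, 2))  # skewed wave: Sk > 0, As = 0
print(round(sk2, 2), round(as2, 2))  # asymmetric wave: Sk = 0, As < 0
print(round(ursell(1.0, 2.0 * math.pi / 50.0, 2.0), 2))  # shallow, nonlinear case
```

For the first record the second harmonic steepens the crest without tilting the wave (Sk ≈ 0.56, As = 0); the 90° phase shift converts the same nonlinearity entirely into wave tilt (Sk = 0, As ≈ -0.56), matching the sign conventions quoted above.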
The skewness and asymmetry at a certain location nearshore can be predicted from the Ursell number. For small Ursell numbers, the skewness and asymmetry both approach zero and the waves have a sinusoidal shape; thus waves having small Ursell numbers do not produce net sediment transport. For intermediate Ursell numbers, the skewness is maximal and the asymmetry is small, and the waves have a skewed shape. For large Ursell numbers, the skewness approaches 0 and the asymmetry is maximal, resulting in an asymmetric wave shape. In this way, if the wave shape is known, the Ursell number can be estimated, and consequently the size and direction of sediment transport at a certain location can be predicted. Impact on sediment transport The nearshore zone is divided into the shoaling zone, surf zone and swash zone. In the shoaling zone, the wave nonlinearity increases due to the decreasing depth, and the sinusoidal waves approaching the coast transform into skewed waves. As waves propagate further towards the coast, the wave shape becomes more asymmetric due to wave breaking in the surf zone, until the waves run up on the beach in the swash zone. Skewness and asymmetry are not only observed in the shape of the wave, but also in the orbital velocity profiles beneath the waves. The skewed and asymmetric velocity profiles have important implications for sediment transport in shallow conditions, where they affect both the bedload transport and the suspended load transport. Skewed waves have higher flow velocities under the crest of the waves than under the trough, resulting in a net onshore sediment transport, as the high velocities under the crest are much more capable of moving large sediments. Beneath waves with high asymmetry, the change from onshore to offshore flow is more gradual than that from offshore to onshore; sediments are stirred up during peaks in offshore velocity and are then transported onshore because of the sudden change in flow direction.
The local sediment transport generates nearshore bar formation and provides a mechanism for the generation of three-dimensional features such as rip currents and rhythmic bars. Models including wave skewness and asymmetry Two different approaches exist to include wave shape in models: the phase-averaged approach and the phase-resolving approach. In the phase-averaged approach, wave skewness and asymmetry are included through parameterizations. Phase-averaged models describe the evolution of the wave spectrum in frequency, direction, space, and time. Examples of these kinds of models are WAVEWATCH3 (NOAA) and SWAN (TU Delft). WAVEWATCH3 is a global wave forecasting model with a focus on the deep ocean. SWAN is a nearshore model and mainly has coastal applications. Phase-resolving models, by contrast, simulate the sea surface wave by wave and can therefore represent skewness and asymmetry directly, at a much higher computational cost. Advantages of phase-averaged models are that they compute wave characteristics over a large domain, they are fast, and they can be coupled to sediment transport models, which makes them an efficient tool to study morphodynamics. See also Infragravity waves Wind waves Wind stress Sediment transport References Gravity waves
Wave nonlinearity
[ "Chemistry" ]
1,394
[ "Gravity waves", "Fluid dynamics" ]
67,658,437
https://en.wikipedia.org/wiki/Wulbari%20%28god%29
Wulbari is a supreme deity figure worshipped in the traditional religions of the Krache and Guang people in Ghana and Togo. Aside from his role as a supreme deity, Wulbari is a sky god, and he has lived in the sky ever since he retreated from Earth. He is also often depicted as the foil to the spider god Anansi. Legends Retreat from earth There are several versions of the folktale that led to Wulbari's retreat from earth to the skies, which represented the heavens. Lynch and Roberts (2010) laid out several of these accounts: A woman who was pounding with her pestle caused Wulbari pain, since her motions hit him. So Wulbari kept moving higher to escape the pain and eventually arrived at the skies. Wulbari was used as a towel for humans' soiled hands. Earth became too crowded, and Wulbari left for the skies to escape the masses. The smoke of the cooking fires annoyed Wulbari greatly, and he decided to move to the skies. Wulbari was cut to pieces by a woman who used him as a seasoning, and so he left earth. Origin of death A hornbill bird called Animabri started killing and eating mankind. Wulbari called on his court to decide what to do next. The members of the court, represented by animals, were later asked by Wulbari to name their people and their place. The elephant claimed the far countryside, and thus the lands are under its control. The goat stated that it had dominion over the grasslands. The dog claimed the humans, and thus Wulbari put him in charge of the medicine that would revive those killed by Animabri. Unfortunately, on his journey to deliver the medicine to the humans, the dog became hungry and left the medicine on the roadside while he feasted on a bone. The goat snatched the medicine and poured it all over the grasses. Thus, mankind dies and cannot return to life, while the grasses that die each season return to life in the next season. See also Nyame Abassi List of African mythological figures References African mythology Sky and weather deities
Wulbari (god)
[ "Physics" ]
433
[ "Weather", "Sky and weather deities", "Physical phenomena" ]
67,659,075
https://en.wikipedia.org/wiki/HD%20203949
HD 203949 is a K-type giant star 257 light-years away in the constellation of Microscopium. Its surface temperature is 4618 K. It is either on the red giant branch, fusing hydrogen in a shell around a helium core, or, more likely, a red clump star currently fusing helium in its core. HD 203949 is enriched in heavy elements relative to the Sun, with a metallicity ([Fe/H]) of . As is common for red giants, HD 203949 has an enhanced concentration of sodium and aluminium compared to iron. Multiplicity surveys had not found any stellar companions around HD 203949 as of 2019. Planetary system In 2014, one planet orbiting HD 203949 was discovered by the radial velocity method. The planet is highly unlikely to have survived the red giant stage of stellar evolution on its present orbit; it was probably scattered recently from a wider orbit. The planetary system configuration is favourable for direct imaging of exoplanets in the near future, and it was included among the ten easiest known targets by 2018. References Microscopium K-type giants Planetary systems with one confirmed planet J21262286-3749458 105854 CD-38 14551 203949 8200
HD 203949
[ "Astronomy" ]
260
[ "Microscopium", "Constellations" ]
67,659,539
https://en.wikipedia.org/wiki/Per%20sign
The per sign is a rare symbol used to indicate a ratio. In English, it can replace the word "per" in phrases such as miles per hour ("miles ⅌ hour"). Unicode The Unicode code point is . The symbol does not appear in the ASCII set. See also Wiktionary's entry on the symbol References Typographical symbols
Per sign
[ "Mathematics" ]
78
[ "Symbols", "Typographical symbols" ]
67,659,951
https://en.wikipedia.org/wiki/Animal%20Welfare%20%28Sentience%29%20Act%202022
The Animal Welfare (Sentience) Act 2022 (c. 22) is an act of the Parliament of the United Kingdom. It was introduced to Parliament by the Government of the United Kingdom at the 2021 State Opening of Parliament. The act recognises animal sentience in law for the first time. The scope of the legislation includes all vertebrates and some invertebrates, such as octopuses and lobsters. Background The bill was created after an original attempt to reintroduce animal sentience into the law through the Animal Welfare (Sentencing and Recognition of Sentience) Bill 2017. Before Brexit, sentience was provided for through Article 13 of the Treaty on the Functioning of the European Union, which stated that member states "shall, since animals are sentient beings, pay full regard to the welfare requirements of animals" when they formulate EU policies. On 15 November 2017, a vote was taken on whether to incorporate Article 13 into the EU (Withdrawal) Bill; the proposal was defeated by 313 votes to 295 in the House of Commons, and by 211 votes to 169 in the House of Lords. The Animal Welfare (Sentience) Bill partly came about through the desire to separate the two subjects of the Animal Welfare (Sentencing and Recognition of Sentience) Bill: sentencing and sentience. Passage The bill was introduced by minister Lord Goldsmith of Richmond Park on 13 May 2021. Reception Fears that the bill would infringe on "kosher and halal slaughter, game shooting, killing vermin on farms and testing medical products on animals" were raised in a letter written by several Tory donors. Further complaints were raised, such as the notion that the UK has already recognised animal sentience and that animal welfare legislation has existed in the UK for 200 years (originally introduced in the Cruel Treatment of Cattle Act 1822). This sentiment was expressed by Nick Herbert in the House of Lords as well as by Daniel Hannan, arguing that the law is already clear on the issue.
The idea that the law already in place is sufficient may not hold after Brexit: Angus Nurse argues that leaving the EU will result in a step backwards in terms of animal rights, returning animals to the status of things. The reason for this is that there are significant differences between the laws MPs cite as reasons for not including sentience in the law and the protections that used to be granted by EU law, in particular Article 13. Steven McCulloch draws attention to the fact that the Animal Welfare Act fails to protect wild animals, whereas Article 13 protects all sentient animals. The firm placement of animal sentience in the law could therefore be a step in the right direction for animal welfare, according to Jessica Horton and Jonathan Merritt. There has also been criticism that the original bill did not go far enough, as it failed to recognise the sentience of invertebrates. According to studies conducted by C. M. Sherwin, the notion that invertebrates are not sentient may be incorrect: studies on invertebrates are often conducted differently, and if the same arguments from analogy were used in investigations on invertebrates, it would be concluded that they are sentient. Leaving them out of the bill may therefore leave them unduly unprotected. The final bill was amended to include some invertebrate animals, such as octopuses and lobsters, following a scientific review that concluded that there was "strong scientific evidence" that octopuses are sentient. See also Animal Welfare (Kept Animals) Bill List of acts of the Parliament of the United Kingdom References Animal cognition Animal welfare and rights legislation in the United Kingdom United Kingdom Acts of Parliament 2022
Animal Welfare (Sentience) Act 2022
[ "Biology" ]
723
[ "Animals", "Animal cognition" ]
67,660,345
https://en.wikipedia.org/wiki/Behnke%E2%80%93Stein%20theorem%20on%20Stein%20manifolds
In mathematics, especially several complex variables, the Behnke–Stein theorem states that a connected, non-compact (open) Riemann surface is a Stein manifold. In other words, it states that there is a nonconstant single-valued holomorphic function (univalent function) on such a Riemann surface. It is a generalization of the Runge approximation theorem and was proved by Heinrich Behnke and Karl Stein in 1948. Method of proof The study of Riemann surfaces typically belongs to the field of one-variable complex analysis, but the proof method uses the approximation by the polyhedron domain used in the proof of the Behnke–Stein theorem on domains of holomorphy and the Oka–Weil theorem. References Several complex variables Riemann surfaces
Behnke–Stein theorem on Stein manifolds
[ "Mathematics" ]
164
[ "Several complex variables", "Functions and mappings", "Mathematical relations", "Mathematical objects" ]
67,661,885
https://en.wikipedia.org/wiki/Miles-Phillips%20mechanism
In physical oceanography and fluid mechanics, the Miles-Phillips mechanism describes the generation of wind waves from a flat sea surface by two distinct mechanisms. Wind blowing over the surface generates tiny wavelets. These wavelets develop over time and become ocean surface waves by absorbing the energy transferred from the wind. The Miles-Phillips mechanism is a physical interpretation of these wind-generated surface waves. Both mechanisms apply to gravity-capillary waves and have in common that waves are generated by a resonance phenomenon. The Miles mechanism is based on the hypothesis that waves arise as an instability of the sea-atmosphere system. The Phillips mechanism assumes that turbulent eddies in the atmospheric boundary layer induce pressure fluctuations at the sea surface. The Phillips mechanism is generally assumed to be important in the first stages of wave growth, whereas the Miles mechanism is important in later stages, where the wave growth becomes exponential in time. History It was Harold Jeffreys in 1925 who first produced a plausible explanation for the phase shift between the water surface and the atmospheric pressure which can give rise to an energy flux between the air and the water. For the waves to grow, a higher pressure on the windward side of the wave, in comparison to the leeward side, is necessary to create a positive energy flux. Using dimensional analysis, Jeffreys showed that the atmospheric pressure can be written as

p = s ρ_a (U∞ − c)² ∂η/∂x

where s is the constant of proportionality, also termed the sheltering coefficient, ρ_a is the density of the atmosphere, U∞ is the wind speed, c is the phase speed of the wave and η is the free surface elevation. The subscript ∞ is used to make the distinction that no boundary layer is considered in this theory. Expanding this pressure term to the energy transfer yields

dE/dt = (1/2) s ρ_a c k² a² (U∞ − c)²

where E = (1/2) ρ_w g a² is the wave energy per unit area, ρ_w is the density of the water, g is the gravitational acceleration, a is the wave amplitude and k is the wavenumber.
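Jeffreys' energy argument can be checked numerically. In the sketch below (pure Python; the wave and wind parameter values are arbitrary illustrative choices, not values from Jeffreys' paper), the sheltering pressure p = s ρ_a (U∞ − c)² ∂η/∂x is applied to a sinusoidal wave η = a cos(kx − ωt), and the magnitude of the period-averaged energy flux ⟨p ∂η/∂t⟩ is compared against the closed form (1/2) s ρ_a c k² a² (U∞ − c)²:

```python
import math

# Illustrative parameters (assumptions, not values from Jeffreys 1925):
s = 0.3          # sheltering coefficient
rho_a = 1.225    # air density [kg/m^3]
U = 10.0         # wind speed [m/s]
a = 0.5          # wave amplitude [m]
k = 0.4          # wavenumber [rad/m]
g = 9.81         # gravitational acceleration [m/s^2]

c = math.sqrt(g / k)   # deep-water phase speed [m/s]
omega = c * k          # angular frequency [rad/s]

# Average p * (d eta / d t) over one wave period at x = 0.
n = 10000
flux = 0.0
for i in range(n):
    t = (2.0 * math.pi / omega) * i / n
    deta_dx = -a * k * math.sin(-omega * t)     # slope of eta = a cos(kx - omega t)
    deta_dt = a * omega * math.sin(-omega * t)  # vertical surface velocity
    flux += s * rho_a * (U - c) ** 2 * deta_dx * deta_dt
flux = abs(flux) / n

closed_form = 0.5 * s * rho_a * c * k ** 2 * a ** 2 * (U - c) ** 2
print(round(flux, 3), round(closed_form, 3))  # the two values agree
```

The agreement simply confirms that, for a pressure component in phase with the surface slope, the mean rate of working on the surface is proportional to (U∞ − c)², which is why waves travelling slower than the wind gain energy.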
With this theory, Jeffreys calculated the sheltering coefficient at a value of 0.3 based on observations of wind speeds. In 1956, Fritz Ursell examined available data on pressure variation in wind tunnels from multiple sources and concluded that the value of s found by Jeffreys was too large. This result led Ursell to reject Jeffreys' theory. Ursell's work also resulted in new advances in the search for a plausible mechanism for wind-generated waves. These advances led a year later to two new theoretical concepts: the Miles and Phillips mechanisms. Miles' Theory John W. Miles developed his theory in 1957 for inviscid, incompressible air and water. He assumed that the air can be described as a mean shear flow whose speed varies with height above the surface. By solving the hydrodynamic equations for the coupled sea-atmosphere system, Miles was able to express the free surface elevation in terms of wave parameters and sea-atmosphere characteristics, namely a scale parameter, the phase speed c of free gravity waves, the wind speed and the angular frequency ω of the wave. The wind speed as a function of height was found by integrating the Orr-Sommerfeld equation with the assumption of a logarithmic boundary layer and of no currents below the sea surface in the equilibrium state:

U(z) = (u*/κ) ln(z/z₀)

where κ is the von Kármán constant, u* is the friction velocity, defined from the Reynolds stress τ as u* = (τ/ρ_a)^(1/2), and z₀ is the roughness length. Furthermore, Miles defined the growth rate of the wave energy for arbitrary angles between the wind and the waves. Miles determined this growth rate in his 1957 paper by solving the inviscid form of the Orr-Sommerfeld equation. He further expanded his theory on the growth rate of wind-driven waves by finding an expression for the dimensionless growth rate at a critical height above the surface, the height at which the wind speed equals the phase speed of the gravity waves, U(z_c) = c. The resulting expression involves the frequency of the wave, the amplitude of the vertical velocity field at the critical height, and the first and second derivatives of the wind velocity profile at that height.
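The logarithmic boundary-layer profile assumed in Miles' theory is easy to evaluate. A minimal sketch, in which the friction velocity and roughness length are illustrative open-ocean values rather than quantities from Miles' paper:

```python
import math

KAPPA = 0.40  # von Karman constant

def wind_speed(z, u_star, z0):
    """Logarithmic boundary-layer profile: U(z) = (u*/kappa) * ln(z/z0)."""
    return (u_star / KAPPA) * math.log(z / z0)

# Illustrative values: friction velocity 0.3 m/s, roughness length 0.2 mm.
u10 = wind_speed(10.0, 0.3, 2e-4)
print(round(u10, 1))  # prints 8.1 (wind speed at 10 m height, in m/s)
```

The profile grows only logarithmically with height, so doubling the measurement height adds a fixed increment u* ln(2)/κ (about 0.5 m/s here) rather than doubling the speed.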
The first derivative describes the shear of the wind velocity field and the second derivative describes its curvature. This result represents Miles' classical result for the growth of surface waves. It becomes clear that without wind shear in the atmosphere, the mechanism fails, hence the name 'shear instability mechanism'. Even though this theory gives an accurate description of the transfer of energy from the wind to the waves, it also has some limitations: Miles considered the case of inviscid air and water, which means that viscous effects are neglected. The effects that waves have on the atmospheric boundary layer are not taken into account. Only linear effects are examined with this theory. Miles' theory predicts growth of waves for all wind speeds; observations show, however, that there exists a minimum wind speed of 0.23 m/s before growth occurs. The atmospheric energy input from the wind to the waves enters the wave energy balance as a source term. Snyder and Cox (1967) were the first to produce a relationship for the experimental growth rate due to atmospheric forcing by use of experimental data. They found a growth rate proportional to (U₁₀/c − 1), where U₁₀ is the wind speed measured at a height of 10 meters, using a spectrum of the JONSWAP form. The JONSWAP spectrum is a spectrum based on data collected during the Joint North Sea Wave Observation Project and is a variation on the Pierson-Moskowitz spectrum, multiplied by an extra peak enhancement factor. Phillips' Theory At the same time, but independently of Miles, Owen M. Phillips (1957) developed his theory for the generation of waves based on the resonance between a fluctuating pressure field and surface waves. The main idea behind Phillips' theory is that this resonance mechanism causes the waves to grow when the length of the waves matches the length of the atmospheric pressure fluctuations.
This means that the energy will be transferred to the components in the spectrum which satisfy the resonance condition. Phillips expressed the atmospheric source term of his theory in terms of the frequency spectrum of the pressure fluctuations evaluated at the three-dimensional wavenumber of the surface wave. The strong points of this theory are that waves can grow from an initially smooth surface, so the initial presence of surface waves is not necessary, and that, contrary to Miles' theory, it predicts that no wave growth can occur if the wind speed is below a certain value. Miles' theory predicts exponential growth of waves with time, while Phillips' theory predicts linear growth with time. Linear growth of the waves is observed especially in the earliest stages of wave growth; for later stages, Miles' exponential growth is more consistent with observations. See also Gravity waves Wind waves Wind stress Swell References Physical oceanography
Miles-Phillips mechanism
[ "Physics" ]
1,294
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
67,662,457
https://en.wikipedia.org/wiki/Inosperma%20adaequatum
Inosperma adaequatum, until 2019 known as Inocybe adaequata, is a species of fungus of the family Inocybaceae found in North America and Europe. References adaequatum Fungi described in 1879 Fungi of North America Fungi of Europe Fungus species
Inosperma adaequatum
[ "Biology" ]
62
[ "Fungi", "Fungus species" ]
67,662,946
https://en.wikipedia.org/wiki/Geological%20engineering
Geological engineering is a discipline of engineering concerned with the application of geological science and engineering principles to fields such as civil engineering, mining, environmental engineering, and forestry, among others. The work of geological engineers often directs or supports the work of other engineering disciplines, for example by assessing the suitability of locations for civil engineering, environmental engineering, mining operations, and oil and gas projects through geological, geoenvironmental, geophysical, and geotechnical studies. They are involved with impact studies for facilities and operations that affect surface and subsurface environments. The engineering design input and other recommendations made by geological engineers on these projects often have a large impact on construction and operations. Geological engineers plan, design, and implement geotechnical, geological, geophysical, hydrogeological, and environmental data acquisition. This ranges from manual ground-based methods to deep drilling, to geochemical sampling, to advanced geophysical techniques and satellite surveying. Geological engineers are also concerned with the analysis of past and future ground behaviour, mapping at all scales, and ground characterization programs for specific engineering requirements. These analyses lead geological engineers to make recommendations and prepare reports which can have major effects on the foundations of construction, mining, and civil engineering projects. Some examples of projects include rock excavation, building foundation consolidation, pressure grouting, hydraulic channel erosion control, slope and fill stabilization, landslide risk assessment, groundwater monitoring, and assessment and remediation of contamination. In addition, geological engineers are included on design teams that develop solutions to surface hazards, groundwater remediation, underground and surface excavation projects, and resource management.
Like mining engineers, geological engineers also conduct resource exploration campaigns, mine evaluation and feasibility assessments, and contribute to the ongoing efficiency, sustainability, and safety of active mining projects. History While the term geological engineering was not coined until the 19th century, principles of geological engineering are demonstrated throughout millennia of human history. Ancient engineering One of the oldest examples of geological engineering principles is the Euphrates tunnel, which was constructed around 2180–2160 B.C. This tunnel, and other tunnels and qanats from around the same time, were used by ancient civilizations such as Babylon and Persia for the purposes of irrigation. Another famous example where geological engineering principles were used in an ancient engineering project was the construction of the Eupalinos aqueduct tunnel in Ancient Greece. This was the first tunnel to be constructed inward from both ends using principles of geometry and trigonometry, marking a significant milestone for both civil engineering and geological engineering. Geological engineering as a discipline Although projects that applied geological engineering principles in their design and construction have existed for thousands of years, these were included within the civil engineering discipline for most of this time. Courses in geological engineering have been offered since the early 1900s; however, these remained specialized offerings until a large increase in demand arose in the mid-20th century. This demand was created by issues encountered in the development of increasingly large and ambitious structures, human-generated waste, scarcity of mineral and energy resources, and anthropogenic climate change – all of which created the need for a more specialized field of engineering with professional engineers who were also experts in geological or Earth sciences.
Notable disasters that contributed to the formal creation of the geological engineering discipline include dam failures in the United States and western Europe in the 1950s and 1960s. These most famously include the St Francis dam failure (1928), the Malpasset dam failure (1959), and the Vajont dam failure (1963); a lack of knowledge of geology resulted in almost 3,000 deaths between the latter two alone. The Malpasset dam failure is regarded as the largest civil engineering disaster of the 20th century in France, and the Vajont dam failure is still the deadliest landslide in European history. Education Post-secondary degrees in geological engineering are offered at various universities around the world but are concentrated primarily in North America. Geological engineers often obtain degrees that include courses in both geological or Earth sciences and engineering. To practice as a professional geological engineer, a bachelor's degree in a related discipline from an accredited institution is required. For certain positions, a master's or doctoral degree in a related engineering discipline may be required. After obtaining these degrees, an individual who wishes to practice as a professional geological engineer must become licensed by a professional association or regulatory body in their jurisdiction. Canadian institutions In Canada, 8 universities are accredited by Engineers Canada to offer undergraduate degrees in geological engineering. Many of these universities also offer graduate degree programs in geological engineering.
These include: Queen’s University (Department of Geological Sciences and Geological Engineering) (1975 – present), École Polytechnique (1965 – present), Université Laval (1965 – present), Université du Québec à Chicoutimi (1983 – present), University of British Columbia (1965 – present), University of New Brunswick (jointly administered by Department of Earth Sciences and Department of Civil Engineering) (1984 – present), University of Saskatchewan (1965 – present), and University of Waterloo (1986 – present). American institutions In the United States there are 13 geological engineering programs recognized by the Engineering Accreditation Commission (EAC) of the Accreditation Board for Engineering and Technology (ABET). These include: Colorado School of Mines (1936 – present), Michigan Technological University (1951 – present), Missouri University of Science and Technology (1973 – present), Montana Technological University (1972–present), South Dakota School of Mines and Technology (1950 – present), The University of Utah (1952 – present), University of Alaska-Fairbanks (1941 – present), University of Minnesota Twin Cities (1950 – present), University of Mississippi (1987 – present), University of Nevada, Reno (1958 – present), University of North Dakota (1984 – present), University of Texas at Austin (1998 – present), and University of Wisconsin – Madison (1993 – present). Other institutions Universities in other countries that hold accreditation to offer degree programs in geological engineering from the EAC by the ABET include: Escuela Superior Politécnica Del Litoral, Guayaquil, Ecuador (2018 – present), Istanbul Technical University, Istanbul, Turkey (2009 – present), Universidad Nacional de Ingeniería, Rímac, Peru (2017 – present), and Universidad Politécnica de Madrid, Madrid, Spain (2014 – present). 
Specializations In geological engineering there are multiple subdisciplines which analyze different aspects of Earth sciences and apply them to a variety of engineering projects. The subdisciplines listed below are commonly taught at the undergraduate level, and each has overlap with disciplines external to geological engineering. However, a geological engineer who specializes in one of these subdisciplines throughout their education may still be licensed to work in any of the other subdisciplines. Geoenvironmental and hydrogeological engineering Geoenvironmental engineering is the subdiscipline of geological engineering that focuses on preventing or mitigating the environmental effects of anthropogenic contaminants within soil and water. It solves these issues via the development of processes and infrastructure for the supply of clean water, waste disposal, and control of pollution of all kinds. The work of geoenvironmental engineers largely deals with investigating the migration, interaction, and result of contaminants; remediating contaminated sites; and protecting uncontaminated sites. Typical work of a geoenvironmental engineer includes: The preparation, review, and update of environmental investigation reports, The design of projects such as water reclamation facilities or groundwater monitoring wells which lead to the protection of the environment, Conducting feasibility studies and economic analyses of environmental projects, Obtaining and revising permits, plans, and standard procedures, Providing technical expertise for environmental remediation projects which require legal actions, The analysis of groundwater data for the purpose of quality-control checks, The site investigation and monitoring of environmental remediation and sustainability projects to ensure compliance with environmental regulations, and Advising corporations and government agencies regarding procedures for cleaning up contaminated sites. 
Mineral and energy resource exploration engineering Mineral and energy resource exploration (commonly known as MinEx) is the subdiscipline of geological engineering that applies modern tools and concepts to the discovery and sustainable extraction of natural mineral and energy resources. A geological engineer who specializes in this field may work on several stages of mineral exploration and mining projects, including exploration and orebody delineation, mine production operations, mineral processing, and environmental impact and risk assessment programs for mine tailings and other mine waste. Like a mining engineer, mineral and energy resource exploration engineers may also be responsible for the design, finance, and management of mine sites. Geophysical engineering (applied geophysics) Geophysical engineering is the subdiscipline of geological engineering that applies geophysics principles to the design of engineering projects such as tunnels, dams, and mines, or to the detection of subsurface geohazards, groundwater, and pollution. Geophysical investigations are undertaken from the ground surface, in boreholes, or from space to analyze ground conditions, composition, and structure at all scales. Geophysical techniques apply a variety of physics principles such as seismicity, magnetism, gravity, and resistivity. This subdiscipline emerged in the early 1990s as a result of increased demand for more accurate subsurface information, driven by a rapidly growing global population. Geophysical engineering and applied geophysics differ from traditional geophysics primarily in their need for marginal returns and optimized designs and practices, as opposed to satisfying regulatory requirements at a minimum cost. Job responsibilities Geological engineers are responsible for the planning, development, and coordination of site investigation and data acquisition programs for geological, geotechnical, geophysical, geoenvironmental, and hydrogeological studies.
These studies are traditionally conducted for civil engineering, mining, petroleum, waste management, and regional development projects but are becoming increasingly focused on environmental and coastal engineering projects and on more specialized projects for long-term underground nuclear waste storage. Geological engineers are also responsible for analyzing and preparing recommendations and reports to improve construction of foundations for civil engineering projects such as rock and soil excavation, pressure grouting, and hydraulic channel erosion control. In addition, geological engineers analyze and prepare recommendations and reports on the settlement of buildings, stability of slopes and fills, and probable effects of landslides and earthquakes to support construction and civil engineering projects. They must design means to safely excavate and stabilize the surrounding rock or soil in underground excavations and surface construction, in addition to managing water flow from, and within these excavations. Geological engineers also perform a primary role in all forms of underground infrastructure including tunnelling, mining, hydropower projects, shafts, deep repositories and caverns for power, storage, industrial activities, and recreation. Moreover, geological engineers design monitoring systems, analyze natural and induced ground response, and prepare recommendations and reports on the settlement of buildings, stability of slopes and fills, and the probable effects of natural disasters to support construction and civil engineering projects. In some jobs, geological engineers conduct theoretical and applied studies of groundwater flow and contamination to develop site specific solutions which treat the contaminants and allow for safe construction. Additionally, they design means to manage and protect surface and groundwater resources and remediation solutions in the event of contamination. 
If working on a mine site, geological engineers may be tasked with planning, developing, coordinating, and conducting theoretical and experimental studies in mining exploration, mine evaluation, and feasibility studies for the mining industry. They conduct surveys and studies of ore deposits and ore reserve calculations, and contribute mineral resource expertise, geotechnical and geomechanical design and monitoring expertise, and environmental management to a developing or ongoing mining operation. In a variety of projects, they may be expected to design and perform geophysical investigations from the ground surface, in boreholes, or from space to analyze ground conditions, composition, and structure at all scales. Professional associations and licensing Professional Engineering licenses may be issued through a municipal, provincial/state, or federal/national government organization, depending on the jurisdiction. The purpose of this licensing process is to ensure professional engineers possess the necessary technical knowledge, real-world experience, and basic understanding of the local legal system to practice engineering at a professional level. In Canada, the United States, Japan, South Korea, Bangladesh, and South Africa, the title of Professional Engineer is granted through licensure. In the United Kingdom, Ireland, India, and Zimbabwe the granted title is Chartered Engineer. In Australia, the granted title is Chartered Professional Engineer. Lastly, in the European Union, the granted title is European Engineer. All these titles have similar requirements for accreditation, including a recognized post-secondary degree and relevant work experience. Canada In Canada, Professional Engineer (P.Eng.) and Professional Geoscientist (P.Geo.) licenses are regulated by provincial professional bodies which have the groundwork for their legislation laid out by Engineers Canada and Geoscientists Canada. The provincial organizations are listed in the table below.
United States In the United States, all individuals seeking to become a Professional Engineer (P.E.) must attain their license through the Engineering Accreditation Commission (EAC) of the Accreditation Board for Engineering and Technology (ABET). Licenses to be a Certified Professional Geologist in the United States are issued and regulated by the American Institute of Professional Geologists (AIPG). Professional societies Professional societies in geological engineering are not-for-profit organizations that seek to advance and promote the represented profession(s) and connect professionals through networking, regular conferences, meetings, and other events. They also provide platforms to publish technical literature in the form of conference proceedings, books, technical standards, and suggested methods, and offer opportunities for professional development such as short courses, workshops, and technical tours. Some regional, national, and international professional societies relevant to geological engineers are listed here: American Geophysical Union (AGU) American Geosciences Institute (AGI) American Rock Mechanics Association (ARMA) Association of Environmental and Engineering Geologists (AEG) Association for Mineral Exploration (AME) Atlantic Geoscience Society (AGS) Canadian Dam Association (CDA) Canadian Federation of Earth Sciences (CFES) Canadian Geophysical Union (CGU) Canadian Geotechnical Society (CGS) Canadian Institute of Mining, Metallurgy and Petroleum (CIM) Canadian Society of Petroleum Geologists (CSPG) Canadian Rock Mechanics Association (CARMA) European Association of Geoscientists & Engineers (EAGE) European Geosciences Union (EGU) European Federation of Geologists (EFG) Geological Association of Canada (GAC) Geological Society of America (GSA) Geoscience Information Society (GSIS) Institute of Materials, Minerals and Mining (IOM3) International Association for Engineering Geology and the Environment (IAEG) International Association of
Hydrogeologists (IAH) International Council on Mining and Metals (ICMM) International Society for Rock Mechanics and Rock Engineering (ISRM) International Society for Soil Mechanics and Geotechnical Engineering (ISSMGE) International Tunnelling Association (ITA) International Union of Geological Sciences (IUGS) Mineralogical Association of Canada (MAC) Mining Association of Canada (MAC) Prospectors and Developers Association of Canada (PDAC) Society for Mining, Metallurgy & Exploration (SME) Society of Exploration Geophysicists (SEG) Tunnelling Association of Canada (TAC) U.S. Geological Survey (USGS) U.S. National Mining Association (NMA) Distinction from engineering geology Engineering geologists and geological engineers are both interested in the study of the Earth, its shifting movement, and alterations, and the interactions of human society and infrastructure with, on, and in Earth materials. Both disciplines require licenses from professional bodies in most jurisdictions to conduct related work. The primary difference between geological engineers and engineering geologists is that geological engineers are licensed professional engineers (and sometimes also professional geoscientists/geologists) with a combined understanding of Earth sciences and engineering principles, while engineering geologists are geological scientists whose work focusses on applications to engineering projects, and they may be licensed professional geoscientists/geologists, but not professional engineers. The following subsections provide more details on the differing responsibilities between engineering geologists and geological engineers. Engineering geology Engineering geologists are applied geological scientists who assess problems that might arise before, during, and after an engineering project. They are trained to be aware of potential problems like: landslides, faults, unstable ground, groundwater challenges, and floodplains. 
They use a variety of field and laboratory testing techniques to characterize ground materials that might affect the construction, the long-term safety, or environmental footprint of a project. Job responsibilities of an engineering geologist include: collecting samples and surveys, conducting lab tests on samples, assessing in situ soil or rock conditions at many scales, preparing reports based on testing and on-site observations for clients, and creating geological models, maps, and sections. Geological engineering Geological engineers are engineers with extensive knowledge of geological or Earth sciences as well as engineering geology, engineering principles, and engineering design practices. These professionals are qualified to perform the role of or interact with engineering geologists. Their primary focus, however, is the use of engineering geology data, as well as engineering skills to: Design advanced exploration programs, environmental management or remediation projects including: Groundwater extraction and sustainability, Natural hazard mitigation systems, Energy resource exploration and extraction, Mineral resource exploration and extraction, and Environmental remediation. Design Infrastructure, including: Surface works, Foundations, Tunnels, Dams, Caverns, and Other construction that interfaces with the ground. Oversee components of mining including: Advanced resource assessment and economics, Mineral processing, Mine planning, and Geomechanical and geotechnical stability. In all these activities, the geological model, geological history, and environment, as well as measured engineering properties of relevant Earth materials are critical to engineering design and decision making. References See also Civil engineering Engineering geology Geology Environmental engineering Mining engineering Petroleum engineering Engineering disciplines
Geological engineering
[ "Engineering" ]
3,671
[ "nan" ]
67,663,189
https://en.wikipedia.org/wiki/Lenore%20Fahrig
Lenore Fahrig is a Chancellor's Professor in the biology department at Carleton University, Canada and a Fellow of the Royal Society of Canada. Fahrig studies effects of landscape structure—the arrangement of forests, wetlands, roads, cities, and farmland—on wildlife populations and biodiversity, and is best known for her work on habitat fragmentation. In 2023, she was elected to the National Academy of Sciences. Early life and education Fahrig is from Ottawa, Ontario. She completed a BSc (Biology) at Queen's University, Kingston, in 1981 and an MSc from Carleton University, Ottawa in 1983 under the supervision of Gray Merriam, on habitat connectivity and population stability. She completed her PhD in 1987 at the University of Toronto under the supervision of Jyri Paloheimo, on the effects of animal dispersal behaviour on the relationship between population size and habitat spatial arrangement. Research and career After her PhD, Fahrig worked for two years as a postdoctoral fellow at the University of Virginia, researching how different plant dispersal strategies allow species to respond to environmental disturbances. She then spent two years as a research scientist for the Federal Department of Fisheries and Oceans in St. John's, Newfoundland, Canada, where she modeled the spatial and temporal interactions between fisheries and fish populations. In 1991 she joined the faculty of the Biology Department at Carleton University, Ottawa, where, as of 2024, she is a Chancellor's Professor and the Gray Merriam Chair in Landscape Ecology. Fahrig is best known for her work on habitat fragmentation. Her early work in this area culminated in her highly cited 2003 review. Fahrig argues that the effects of fragmentation (breaking of habitat into small patches) on biodiversity should be estimated independently of the effects of habitat loss, showing that the combined effects of habitat loss and fragmentation are almost entirely due to the effects of habitat loss alone. 
This is important for species conservation because it means that, on a per-area basis, habitat in small patches is as valuable for conservation as habitat in large patches. This finding negates a common 'excuse' for habitat destruction, namely the assumed low conservation value of small patches. Fahrig's later work on habitat fragmentation found that effects of habitat fragmentation on biodiversity, independent of effects of habitat loss, are more likely to be positive than negative. This indicates that small patches have high cumulative value for biodiversity, and provides support for small-scale conservation efforts. Fahrig presented her work on habitat fragmentation at the International Biogeography Society's 50th anniversary celebration of The Theory of Island Biogeography, and at the World Biodiversity Forum. She published a retrospective article on her habitat fragmentation research for the 30th anniversary of the journal Global Ecology and Biogeography. Fahrig has also worked on habitat connectivity, road ecology, and effects of cropland heterogeneity on biodiversity. Based on her MSc thesis in 1983, Fahrig and Merriam published the first paper on habitat connectivity, and provided the earliest evidence for the concept of wildlife movement corridors. These concepts, habitat connectivity and wildlife movement corridors, are widely used in large-scale conservation planning, in municipal and regional greenways planning, and in mitigation of road effects on wildlife. Fahrig and colleagues' further work demonstrated the importance of distinguishing between structural and functional connectivity, and showed that habitat fragmentation does not necessarily decrease functional connectivity. Fahrig's contributions in road ecology include the first paper to show that roadkill causes declines in wildlife populations. Her later work showed strong and widespread impacts of roads on wildlife populations.
Fahrig and her students found that the groups of species whose populations are most impacted by roads are amphibians, reptiles, and mammals with low reproductive rates. They also argued that high roadkill sites are not necessarily the best sites for mitigating road effects on wildlife, and that ecopassages alone do not reduce roadkill. Her research on cropland heterogeneity shows that regions with small crop fields have higher biodiversity than regions with large crop fields, even when the total area under crop production is the same. Further, her group showed that this benefit of cropland heterogeneity to biodiversity is as large as the benefits from reducing intense practices such as pesticide use. She is a co-author of a book on road ecology, and several major reviews of the subject. Honours and distinctions 2016 Elected Fellow of the Royal Society of Canada 2018 Miroslaw Romanowski Medal for environmental science 2019 Chancellor's Professor: highest honour by Carleton University for research and scholarship 2021 Guggenheim Fellowship in Geography & Environmental Studies from the Guggenheim Foundation 2021 President's Award from the Canadian Society for Ecology and Evolution for Research Excellence 2021 BBVA Foundation Frontiers of Knowledge Awards in Ecology and Conservation Biology 2022 Gerhard Herzberg Canada Gold Medal for Science and Engineering References Living people Academic staff of Carleton University Fellows of the Royal Society of Canada Year of birth missing (living people) Canadian women scientists Women ecologists Ecologists Members of the United States National Academy of Sciences
Lenore Fahrig
[ "Environmental_science" ]
1,018
[ "Ecologists", "Environmental scientists" ]
67,663,938
https://en.wikipedia.org/wiki/Asiatic%20Low
The Asiatic Low is a low-pressure trough which lies over southern Asia during early summer. It is located roughly over India, extending over the Bay of Bengal. It is a major action centre for the Northern Hemisphere during that time of the year. It is created by the more intense July sun, which causes the desert land areas of Northern Africa and Asia to warm rapidly. Winds around it circulate counterclockwise from May to September or October, giving persistent southwest monsoon winds over the north Indian Ocean and South China Sea, as well as south-southwest or south winds over the west Pacific Ocean. Its counterpart during the winter is the Siberian High. The Asiatic Low is part of the Intertropical Convergence Zone (ITCZ). These persistent winds gradually generate the summer monsoon over the Indian subcontinent and Southeast Asia. See also Aleutian Low Azores High East Asian Monsoon Cyclogenesis Tropical wave Trough (meteorology) References Atmospheric dynamics Meteorological phenomena
Asiatic Low
[ "Physics", "Chemistry" ]
220
[ "Physical phenomena", "Earth phenomena", "Atmospheric dynamics", "Meteorological phenomena", "Fluid dynamics" ]
67,664,358
https://en.wikipedia.org/wiki/Alessandro%20De%20Angelis%20%28astrophysicist%29
Alessandro De Angelis (born 16 August 1959 in Cencenighe Agordino, Italy) is an Italian and Argentine physicist and astrophysicist. A Professor of Experimental Physics at the University of Padova and Professor Catedratico of Astroparticle Physics at IST Lisboa, he is mostly known for his role in the proposal, construction, and data analysis of new telescopes for gamma-ray astrophysics. He is a member of the Istituto nazionale di fisica nucleare (INFN), Istituto nazionale di astrofisica (INAF), Italian Physical Society (SIF), International Astronomical Union (IAU), and Gruppo2003. Career De Angelis graduated in physics from the University of Padova in 1983, studying charmed particles produced in the LExan Bubble Chamber at the European Hybrid Spectrometer. He held a postdoctoral position at CERN, eventually becoming a staff member in Ugo Amaldi's DELPHI experiment. After returning to Italy, he has worked since 1999 mostly on particle astrophysics. He participated in the design and construction of NASA's Fermi Gamma-ray Space Telescope and of the MAGIC Telescopes on the Canary Island of La Palma. He is principal investigator of the space project ASTROGAM and is among the proponents of the Southern Wide-field Gamma-ray Observatory (SWGO), a very-high-energy gamma-ray observatory to be constructed in the Andes. He proposed the mixing between gamma rays and axions in intergalactic magnetic fields. From 2010 to 2011 he was a guest scientist at the Werner Heisenberg Max Planck Institute for Physics in Munich, and from 2014 he served for three years as Director of Research at INFN. He also works on the popularization of science and on the history and philosophy of physics, in particular in relation to cosmic rays and to the period of Galileo. He is editor for Springer Nature in the area of History of Physics. Prizes Highly Cited Researcher, Thomson-Reuters/Clarivate, 2016 Thomson-Reuters Award for belonging to the "top 1% researchers publishing in the field of Space Science over the [...]
decade" 2001–2010, 2011 American Astronomical Society's Bruno Rossi Prize with the Fermi LAT Team, 2011 Highlight of the European Physical Society for the article "Nationalism and internationalism in science: the case of the discovery of cosmic rays", with P. Carlson, Eur. Phys. J. H 36, 309, 2010 NASA Group Achievement Award, 2008 Honors Books With prefaces by Ugo Amaldi and Telmo Pievani. With a preface by Francis Halzen. With a preface by Margherita Hack. References 1959 births Academic staff of the University of Padua Living people 20th-century Italian physicists Italian astrophysicists Argentine physicists Argentine astrophysicists Particle physicists People associated with CERN
Alessandro De Angelis (astrophysicist)
[ "Physics" ]
590
[ "Particle physicists", "Particle physics" ]
77,792,436
https://en.wikipedia.org/wiki/Bangkok%20Design%20Week
Bangkok Design Week (BKKDW) () is an annual event held to celebrate and promote design and creativity in Bangkok, Thailand. Supported by the Creative Economy Agency and Bangkok Metropolitan Administration, the event was inaugurated in 2018 by the Thailand Creative & Design Center. In 2022, UNESCO designated Bangkok as 'Creative City of Design'. Overview The inaugural 2018 Bangkok Design Week was hosted under the theme "The New-ist Vibes", seeking to promote local creative businesses across five areas: Charoen Krung, Klong San, Sam Yan, Rama 1 and Sukhumvit. In 2024, the event was hosted under the theme "Livable Scape: The More We Act, the Better Our City", seeking to improve public infrastructure and utilities in Bangkok. The 2025 event will be hosted under the theme "Design Up+Rising". See also Chiang Mai Design Week Pakk Taii Design Week References Design events Events in Bangkok Events in Thailand Festivals in Thailand Annual events in Bangkok
Bangkok Design Week
[ "Engineering" ]
207
[ "Design", "Design events" ]
77,793,120
https://en.wikipedia.org/wiki/Chiang%20Mai%20Design%20Week
Chiang Mai Design Week () is an annual event held to celebrate and promote design and creativity in Chiang Mai, Thailand. Supported by the Creative Economy Agency, it was Thailand's first design week. In 2017, UNESCO designated Chiang Mai as a member of its Creative Cities Network in the field of Crafts and Folk Art. Overview The inaugural event was held 6–14 December 2014 by the Thailand Creative & Design Center. In 2021, the event was hosted under the theme "Co-Forward" during the COVID-19 pandemic. In 2023, the event was hosted under the theme "Transforming Local: Adapt / Enhance / Local / Grow". The 2024 event will be hosted under the theme "Scaling Local: Creativity, Technology, Sustainability". See also Chiang Mai Creative City Bangkok Design Week Pakk Taii Design Week References Festivals in Thailand Events in Thailand Design events Culture of Chiang Mai
Chiang Mai Design Week
[ "Engineering" ]
177
[ "Design", "Design events" ]
77,793,337
https://en.wikipedia.org/wiki/Pakk%20Taii%20Design%20Week
Pakk Taii Design Week () is an annual event held to celebrate and promote design and creativity in Southern Thailand. The inaugural event was launched in 2023 in Songkhla, Thailand, hosted under the theme "The Next Spring". “Pakk Taii” refers to Southern Thailand (; ). Overview Supported by the Creative Economy Agency and the Tourism Authority of Thailand, Pakk Taii Design Week hosts events including live performances, workshops, music, creative markets, and talks focused on Southern Thailand's culture. In 2024, the event was hosted under the theme "The South's Turn!", seeking to revitalize the region's economy. It also held the Microwave Film Festival, focused on Thai cinema. See also Bangkok Design Week Chiang Mai Design Week References Design events Events in Thailand Festivals in Thailand Southern Thai culture
Pakk Taii Design Week
[ "Engineering" ]
173
[ "Design", "Design events" ]
77,793,394
https://en.wikipedia.org/wiki/1952%20Tacoma%20C-54%20crash
The 1952 Tacoma C-54 crash was an aviation accident involving a Douglas C-54G Skymaster of the United States Air Force, which occurred in the early hours of Friday, November 28, 1952, near McChord Field in the vicinity of Tacoma, Washington, resulting in the deaths of 37 people. Crew The aircraft's crew consisted of 7 members: Captain — 29-year-old Captain Albert J. Fenton; First officer — 27-year-old First Lieutenant James D. Harvey; 20-year-old Second Officer John H. Benedict; 24-year-old Third Officer Patricia Bentley; 24-year-old Staff Sergeant Joseph H. Bokinsky; 21-year-old Second Officer Wilber C. Childers; 20-year-old Third Officer Bobby R. Wilson. Accident The Douglas C-54G military transport aircraft from the 1701st Air Transport Wing was performing a passenger flight from Fairbanks, Alaska, to Tacoma, Washington, transporting a group of military personnel with their families. There were 39 people on board (including 7 crew members), among them 7 women (5 of whom were civilians) and 8 children. Just after midnight, while approaching McChord Field near Tacoma, the crew requested weather data for Seattle-Tacoma Airport at 00:30. According to the data received, the region was experiencing fog, with visibility reaching ¾ mile (1.2 km), which was above the meteorological minimum. Therefore, the decision was made to land at McChord from the south side. However, as the aircraft descended to an altitude of 300 feet (approximately 91 meters), visibility sharply dropped to near zero. Consequently, at 00:48, the pilots reported aborting the approach and returning to their home base at Malmstrom Air Force Base in Great Falls, Montana. A few minutes later, a call was received at the airbase from the sheriff of Lakewood, Washington, stating that an air crash had occurred south of the city. Continuing northward, at 00:50, the Douglas struck trees and crashed into a field one and a half kilometers from the airfield.
The fuselage split in two upon impact, debris scattered over two hundred yards, and the spilled fuel ignited, causing a significant fire. Only three people were initially rescued at the scene: crew member 20-year-old Bobby Wilson, who suffered third-degree burns and internal injuries; passenger 23-year-old Officer Curtis Redd; and 8-year-old Joseph M. Iacovitti, who lost his father, mother, two brothers, and sister in the crash. However, on November 29, Wilson died from his injuries. The two surviving passengers also sustained severe injuries. In total, the air crash near Tacoma claimed 37 lives, completely destroying three families. The crash attracted attention because it occurred just 8 days after the crash of a military C-124 in Alaska (which killed 52 people). Three weeks after the Tacoma C-54 crash, another military C-124 crashed near Moses Lake, Washington, killing 87 people. In total, over four weeks in the northwestern United States, three consecutive military transport plane crashes claimed a total of 176 lives (52+37+87). Causes of the crash The commission investigating the crash was led by Brigadier General Richard J. O’Keefe, who was summoned from Norton Air Force Base in California. One of the aircraft's propellers was found a hundred yards from the main debris area, embedded vertically in the ground. An examination showed that the propeller was not rotating at the moment of impact. Witnesses reported seeing flames on the right wing or engine, but no reports of fire were received from the crew. After all the investigations, the following sequence of events was established. When the crew encountered very thick fog during the approach and decided to divert to Malmstrom Air Base, the aircraft's nose was sharply raised to gain altitude, and the engines were set to maximum power. However, the No. 3 engine failed unexpectedly, significantly reducing the overall thrust, and the aircraft could no longer climb, continuing to fly at low altitude. Flying very low over a hill, the C-54 struck two fir trees about 100 feet (approximately 30 meters) tall, causing it to lose speed and crash to the ground. References Pierce County, Washington November 1952 1952 in Washington (state) Aviation accidents and incidents in the United States in 1952 Aviation accidents and incidents in Washington (state) Accidents and incidents involving the Douglas DC-4 Aviation accidents and incidents caused by engine failure
1952 Tacoma C-54 crash
[ "Technology" ]
967
[ "Aviation accidents and incidents caused by engine failure", "Engines" ]
77,793,650
https://en.wikipedia.org/wiki/Boundaries%20of%20Macau
The Boundaries of Macau, officially the Delimitation of the administrative division of the Macao Special Administrative Region of the People's Republic of China (, ), is a regulated administrative border with border control operated by the Public Security Police Force under the One country, two systems constitutional principle. It separates the Macau Special Administrative Region from mainland China by a land border fence of and a maritime boundary of , enforcing a separate immigration- and customs-controlled jurisdiction from that of mainland China. Immigration control points As of 2024, there were a total of 10 checkpoints, or points of entry and exit, in operation in Macau. See also Borders of China Boundaries of Hong Kong References M Borders of Macau Borders of China Law enforcement in Macau
Boundaries of Macau
[ "Engineering" ]
144
[ "Separation barriers", "Border barriers" ]
77,795,578
https://en.wikipedia.org/wiki/Glossary%20of%20astrology
The following is a list of terms associated with astrology, a range of divinatory practices based on the apparent positions of celestial objects. References Further reading Glossaries Astrology
Glossary of astrology
[ "Astronomy" ]
37
[ "Astrology", "History of astronomy" ]
77,795,799
https://en.wikipedia.org/wiki/Gravity%20map
A gravity map is a map that depicts gravity measurements across an area of space, which are typically obtained via gravimetry. Gravity maps are an extension of the field of geodynamics. Readings are typically taken at regular intervals for surface analysis on Earth. Other methods include analysis of artificial satellite orbital mechanics, which can allow comprehensive gravity maps of planets, as has been done for Mars by NASA. Gravity maps are typically based on depictions of gravity anomalies or a planet's geoid. Creation of gravity maps Measurements are typically taken via ground stations, with surveys conducted at regular intervals. For surface mapping of gravity, placement of instruments can be randomized. Surface gravity mapping is often used to map out gravity anomalies such as a Bouguer anomaly or isostatic gravity anomalies. Derivative gravity maps are an extension of standard gravity maps, involving mathematical analysis of the local gravitational field strength to present data in formats analogous to a geologic map. When gravity maps are rendered in a 'heat' style, intensity typically represents concentrations of mass in a given area, which correlate with a stronger gravitational field; an example would be a mountain range. Conversely, geological structures such as oceanic trenches, or landmass depressions caused by glaciers or fault lines, show lower gravitational field values, due to the smaller amount of underlying mass in the area. Satellite orbital mechanics likewise allow comprehensive gravity maps of other planets: the Goddard Mars Model (GMM) 3 is a gravity map of the gravitational field of Mars. Three orbital craft over Mars, the Mars Global Surveyor (MGS), Mars Odyssey (ODY), and the Mars Reconnaissance Orbiter (MRO), assisted in the creation of GMM 3 through the study of their orbital flight paths.
The Doppler shift and travel times of radio signals between the craft and the parabolic antennas of the Deep Space Network revealed incremental variations in the orbits, allowing the creation of an accurate GMM 3. The Martian gravity map was generated using more than sixteen years of data. External links World Gravity Map Project (WGM) References Geophysics Gravimetry
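The Bouguer anomaly mentioned above subtracts, among other terms, a slab correction from the measured gravity. A minimal sketch of the infinite-slab correction follows; the density and station height are common textbook values chosen for illustration, not figures from the article:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
rho = 2670.0    # assumed crustal density, kg/m^3 (a standard reduction density)
h = 1500.0      # assumed station height above the reference datum, m

# Attraction of an infinite horizontal slab of thickness h and density rho:
# g = 2 * pi * G * rho * h
g_slab = 2.0 * math.pi * G * rho * h   # m/s^2
g_slab_mgal = g_slab * 1e5             # 1 mGal = 1e-5 m/s^2, ~168 mGal here
```

Subtracting this correction (plus a free-air term) from the observed gravity leaves the anomaly attributable to density variations below the datum.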
Gravity map
[ "Physics" ]
468
[ "Applied and interdisciplinary physics", "Geophysics" ]
77,796,881
https://en.wikipedia.org/wiki/Peripolar%20cell
Peripolar cells are specialized epithelial cells located within Bowman's capsule at its vascular pole. These cells were first discovered at the vascular pole of the sheep glomerulus. The cells contain numerous cytoplasmic granules. The granules in peripolar cells are secretory, and the cells show features of secretory epithelial cells, although no exocytosis has been observed. By secreting specific molecules, they may influence the composition of the filtrate and the reabsorption processes in the renal tubules. There is also ongoing research into whether they are part of the juxtaglomerular apparatus (JGA). The number, size, and appearance of peripolar cells vary across different mammalian species. References Renal physiology Cell biology Histology Nephrology
Peripolar cell
[ "Chemistry", "Biology" ]
173
[ "Histology", "Cell biology", "Microscopy" ]
77,798,836
https://en.wikipedia.org/wiki/Framework%20Convention%20on%20Artificial%20Intelligence
The Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (also called the Framework Convention on Artificial Intelligence or AI convention) is an international treaty on artificial intelligence. It was signed on 5 September 2024. The treaty is intended to protect human rights, to protect democracy from misinformation, and to prevent AI from damaging government institutions. References Regulation of artificial intelligence 2024 treaties September 2024
Framework Convention on Artificial Intelligence
[ "Technology" ]
85
[ "Computing and society", "Regulation of artificial intelligence" ]
77,799,419
https://en.wikipedia.org/wiki/KMT-2020-BLG-0414L
KMT-2020-BLG-0414L is a white dwarf star about 4,000 light-years away in the constellation Sagittarius, which is orbited by an Earth-mass exoplanet and a brown dwarf. Discovery This system was discovered via the gravitational microlensing event KMT-2020-BLG-0414, when it passed in front of a background star. The discovery observations were made by the Korea Microlensing Telescope Network (KMTNet) in 2020, at a time when many observatories (including two of the three KMTNet sites) were shut down due to the COVID-19 pandemic. The discovery was announced in 2021. Due to the detection method, all that was initially known about the star was its location and mass. By 2024, follow-up observations had ruled out the possibility of it being a main-sequence star, so given its mass, it must be a white dwarf. Planetary system The planet KMT-2020-BLG-0414Lb is close in mass to Earth and is one of the least massive exoplanets detected by microlensing. It is about twice as far from its star as Earth is from the Sun. The second companion, KMT-2020-BLG-0414Lc, is a brown dwarf about 30 times the mass of Jupiter. It likely lies far from its star, at about 22 AU (between the orbits of Uranus and Neptune in the Solar System), though in an unlikely alternative scenario it may be much closer to the star, at 0.2 AU. KMT-2020-BLG-0414Lb is the first confirmed terrestrial planet orbiting a white dwarf; previously only gas giants and asteroidal bodies were known. As such, this planet serves as an analog for Earth in the far future, when the Sun becomes a white dwarf. See also MOA-2010-BLG-477L List of exoplanets and planetary debris around white dwarfs References Sagittarius (constellation) White dwarfs Brown dwarfs Planetary systems with one confirmed planet Astronomical objects discovered in 2021 Gravitational lensing
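As a rough illustration of the microlensing geometry behind such a detection, the angular Einstein radius of a lens follows from its mass and the lens and source distances. Only the ~4,000 light-year lens distance comes from the article; the lens mass (~0.5 solar masses, typical for a white dwarf) and source distance (~8 kpc, a Galactic bulge star) are illustrative assumptions:

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_SUN = 1.989e30   # kg
LY = 9.461e15      # metres per light-year
PC = 3.086e16      # metres per parsec

M = 0.5 * M_SUN    # assumed white-dwarf lens mass
D_l = 4000 * LY    # lens distance (from the article)
D_s = 8000 * PC    # assumed source distance (Galactic bulge)

# Angular Einstein radius: theta_E = sqrt( (4*G*M/c^2) * (D_s - D_l) / (D_s * D_l) )
theta_E = math.sqrt(4 * G * M / c**2 * (D_s - D_l) / (D_s * D_l))  # radians
theta_E_mas = theta_E * 206265.0 * 1000.0  # convert to milliarcseconds
```

For these inputs the Einstein radius comes out on the order of a milliarcsecond, which is why microlensing events are detected photometrically (as a brightening of the background star) rather than resolved directly.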
KMT-2020-BLG-0414L
[ "Astronomy" ]
438
[ "Sagittarius (constellation)", "Constellations" ]
77,801,224
https://en.wikipedia.org/wiki/HPP%2B
HPP+, also known as haloperidol pyridinium, is a monoaminergic neurotoxin and a metabolite of haloperidol. Formation and metabolism HPP+ is formed from haloperidol, and its dehydration product HPTP, by CYP3A enzymes in the liver. The compound can cross the blood–brain barrier and has been detected in the brain following haloperidol administration in both animals and humans. Neurotoxicity HPP+ is structurally related to the selective dopaminergic neurotoxin MPTP (and its active metabolite MPP+), which induces Parkinson's disease-like symptoms in humans. HPP+ is a neurotoxin specifically affecting serotonergic and dopaminergic neurons, and its neurotoxicity resembles that of MPTP. Extrapyramidal symptoms (EPS) HPP+ may contribute to the development of extrapyramidal symptoms (EPS) in patients undergoing long-term haloperidol therapy. An alternative theory posits that these symptoms result from long-term dopamine receptor supersensitivity, rather than direct neurotoxicity. Discovery HPP+ was first identified as a neurotoxic metabolite of haloperidol in 1990 and 1991, many years after haloperidol was introduced clinically and following the discovery of MPTP. Additional metabolites Besides HPP+, another reactive metabolite of haloperidol, RHPP+, has been detected in humans. The parent form of RHPP+ is RHPTP. References 4-Fluorophenyl compounds Alkanones Chloroarenes Human drug metabolites Human pathological metabolites Monoaminergic neurotoxins Pyridinium compounds
HPP+
[ "Chemistry" ]
399
[ "Ketones", "Chemicals in medicine", "Functional groups", "Human drug metabolites" ]
77,802,759
https://en.wikipedia.org/wiki/Transdetermination
Transdetermination is a concept in developmental biology to describe the process by which pluripotent stem cells change their fate from becoming one kind of specialized cell lineage to a different lineage. It is contrasted to transdifferentiation where a differentiated cell switches to another lineage without intermediate stages of dedifferentiation. In Drosophila, it has been shown that imaginal disc cells could convert from eye to wing tissue through a factor called winged eye (wge) which induces histone modifications that lead to the altered fate. References Cell biology Developmental biology concepts
Transdetermination
[ "Biology" ]
119
[ "Developmental biology concepts", "Cell biology" ]
77,804,243
https://en.wikipedia.org/wiki/MS-20%20Daglezja
The MS-20 Daglezja is a Polish armoured vehicle-launched bridge mounted on the Jelcz C662 wheeled chassis, in service with the Polish Army since 2012. History The Daglezja program began in 2002, when, at the request of the Department of Armament Policy of the Ministry of National Defense, OBRUM ( – Research and Development Centre for Mechanical Appliances) began to develop tactical and technical assumptions for a modern automotive bridge set. Research and development work began in 2003; a bridge model was created in 2004 and the first prototype in 2005. Because the set failed to meet the planned weight and reliability requirements, work on the program was discontinued. OBRUM therefore built a second, thoroughly redesigned prototype with its own funds in 2008. After it successfully passed tests in 2010, the Department of Armament Policy of the Ministry of National Defense signed an agreement with OBRUM for implementation work, which culminated in the delivery of two vehicles of the implementation batch to the army. The first bridge was taken over on 20 November 2012 and sent to the Engineering and Chemical Troops Training Centre in Wrocław; the second one was handed over on 10 December 2012 and delivered to the 2nd Engineer Battalion in Stargard the same year. Following the two implementation vehicles, 10 series-production bridges were delivered in 2017. In 2021, as a result of a 2018 agreement, four MS-20 bridges modified to suit local conditions were delivered to the People's Army of Vietnam. Based on the accompanying bridge, a whole family of bridges is being built, including an assault bridge on the MG-20 Daglezja-G tracked chassis (a T-72 tank chassis extended to 14 wheels), the MS-40 support bridge (enabling crossing of obstacles up to 40 m wide), and the Daglezja-P pontoon bridge. The prototype of the assault bridge was built in 2011. 
After tests, in 2018, the prototype was to be converted to a serial standard and a second copy was to be built. Construction details The MS-20 Daglezja is a companion bridge on a wheeled chassis. The entire set consists of a Jelcz C662D.43-M tractor unit in a 6×6 configuration, a bridge semi-trailer, a bridge layer and the PM-20 span. The PM-20 span was made in such a way that its width can be changed: in the transport position it is 3 m, and in the working position 4 m. In addition, the span has fillings between the girders, enabling, for example, the passage of people and vehicles. The bridge enables securing crossings or overcoming obstacles up to 20 m wide by tracked vehicles exerting a load of the MLC70 class (corresponding to a vehicle weight of up to 63.5 tonnes) and wheeled vehicles or their combinations exerting a load of the MLC110 class (weight of up to 73 tonnes). Specifications span width: 3 m (in transport condition) 4 m (in working condition) span weight: 15 tons span length: 23 m (25.5 m with access ramps) load capacity (according to the STANAG 2021 standard): MLC70 for tracked vehicles MLC110 for wheeled vehicles Orders Polish Land Forces (in October 2023 an order was placed for 43 MS-20 Daglezja-S, to be delivered between 2025 and 2028) Vietnam People's Ground Forces (4 MS-20 Daglezja were delivered in 2021 as part of an agreement signed in 2018) See also M1074 Joint Assault Bridge System M104 Wolverine PP-64 Wstęga List of equipment of the Polish Land Forces References Bibliography Military engineering vehicles Military vehicles of Poland Military bridging equipment
MS-20 Daglezja
[ "Engineering" ]
769
[ "Military bridging equipment", "Engineering vehicles", "Military engineering", "Military engineering vehicles" ]
77,805,948
https://en.wikipedia.org/wiki/NGC%206051
NGC 6051 is a giant elliptical galaxy located in the constellation of Serpens. The galaxy lies 453 million light-years from Earth and, given its apparent dimensions, is around 250,000 light-years across. It is the brightest cluster galaxy of a relaxed poor cluster called AWM 4, a fossil system made up of at least 30 member galaxies. Observational history NGC 6051 was discovered by Edouard Stephan on June 20, 1881. According to John Louis Emil Dreyer, he described it as a faint, small, round object with a bright middle nucleus and a 10th-magnitude star to the southeast. The SIMBAD and HyperLEDA databases list NGC 6051 as IC 4588, but according to Harold Corwin, these are two separate objects. O'Sullivan and associates (2011) also treat them as separate entities, with NGC 6051 being the central dominant galaxy of a cluster. Characteristics NGC 6051 is a cD-type galaxy. It is much more luminous than the other galaxies in AWM 4 and has a low surface brightness profile, but it does not have a stellar envelope. The nucleus of NGC 6051 is considered active and the galaxy is a radio galaxy, classified as a hybrid Fanaroff-Riley source transitional between Type I and Type II. Hosting a powerful wide-angle tailed radio source, NGC 6051 contains two reflection-symmetrical wiggling radio jets and large radio lobes extending ~80 kiloparsecs (kpc) from its radio core. The most accepted explanation for this active galactic nucleus (AGN) activity is the presence of a supermassive black hole, whose mass is estimated to be 10^(9.57 ± 0.02) M⊙ based on the MBH-MK correlation. According to the jet-to-counterjet brightness ratio, the central ~10 kpc jet region of NGC 6051 is potentially oriented close to the plane of the sky. From the analysis of a gradually steepening spectral index, the jets and lobes have an estimated age of 160 million years, indicating that the radio source in NGC 6051 is old. 
A study has found traces of iron in the entrained gas produced from the central region of NGC 6051, with a mass of ~1.4 × 10^6 M⊙. With an energy of ~4.5 × 10^57 erg, the gas might have been transported out of the galaxy by its jets, enriching the intracluster medium to some extent. See also List of NGC objects (5001–6000) NGC 6166 NGC 4889 External links References Serpens Elliptical galaxies Radio galaxies Discoveries by Édouard Stephan
NGC 6051
[ "Astronomy" ]
554
[ "Constellations", "Serpens" ]
62,435,722
https://en.wikipedia.org/wiki/OpenHAB
open Home Automation Bus (openHAB) is an open source home automation software written in Java. It is deployed on premises and connects to devices and services from different vendors. As of 2019, close to 300 bindings are available as OSGi modules. Actions, such as switching on lights, are triggered by rules, voice commands, or controls on the openHAB user interface. The openHAB project started in 2010. In 2013, the core functionality became an official project of the Eclipse Foundation under the name Eclipse SmartHome. openHAB is based on Eclipse SmartHome and remains the project for the development of bindings. According to Black Duck Open Hub, it is developed by one of the largest open-source teams in the world. It also has an active user community. Features Installation and runtime OpenHAB requires a JVM and can be deployed on servers running various operating systems, a dedicated Raspberry Pi instance, or some network-attached storage systems. The required bindings can be added at runtime via OSGi. OpenHAB supports a number of persistence backends for storing and querying the smart home data, including relational and time series databases. By default openHAB uses rrd4j for persistence. Discovery and configuration After installation, openHAB scans the local network and discovers devices that can be included in the smart home solution. Users can provide credentials and meaningful device names via an administration user interface. Things and Items Since major version 2 of openHAB the connection to physical devices is split into two levels. "Things" are the interface elements to a specific physical device (e.g. an interface to a home automation network like KNX, Z-Wave or Zigbee). Within these Things, one or more "Items" can then be defined or discovered. These "Items" correspond to one specific component, like a relay controlling a light, the desired temperature of a heating system, or a dimmer percentage. 
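As an illustration of the Thing/Item split described above, a minimal .items file might look as follows; the Z-Wave node numbers, channel IDs and group name here are hypothetical examples, not taken from a real installation:

```text
// A dimmer Item bound to a (hypothetical) Z-Wave dimmer Thing's channel
Dimmer LR_Light "Living Room Light [%d %%]" <light> (gLivingRoom) { channel="zwave:device:controller:node5:switch_dimmer" }

// A temperature Item bound to a sensor on the same (hypothetical) network
Number LR_Temperature "Living Room [%.1f °C]" <temperature> (gLivingRoom) { channel="zwave:device:controller:node7:sensor_temperature" }
```

Each line names the Item type, its identifier, a display label with a formatting pattern, an icon, group membership, and the binding channel of the Thing it reads from or commands.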
Sitemaps Sitemaps allow the user to determine how the devices in the smart home are arranged. A sitemap groups devices by floor and room and determines how they are visualized and controlled. The following example illustrates what a typical sitemap definition might look like:

sitemap demo label="My home automation" {
    Frame label="Date" {
        Text item=Date
    }
    Frame label="Demo" {
        Switch item=Lights icon="light"
        Text item=LR_Temperature label="Livingroom [%.1f °C]"
        Group item=Heating
        Text item=LR_Multimedia_Summary label="Multimedia [%s]" icon="video" {
            Selection item=LR_TV_Channel mappings=[0="off", 1="DasErste", 2="BBC One", 3="Cartoon Network"]
            Slider item=LR_TV_Volume
        }
    }
}

User interface Once the system is configured, openHAB users can view data and perform actions using a number of user interfaces. These include a browser-based interface as well as apps for Android, Windows 10, and iOS. All of these UIs are defined by the sitemap mechanism. Rules An event-condition-action rule-based system is used to automate the smart home. The following example turns off a light once the sun rises at the configured location:

rule "Start wake up light on sunrise"
when
    Channel "astro:sun:home:rise#event" triggered
then
    switch(receivedEvent.getEvent()) {
        case "START": {
            Light.sendCommand(OFF)
        }
    }
end

openHAB Cloud OpenHAB Cloud is a companion cloud service and backend for openHAB. It provides secure remote access and enables openHAB users to remotely monitor, control and steer their homes through the internet. The openHAB foundation provides a demo system without SLA guarantees. Version 3 improvements In 2020, the code was forked for a major rework, separating the 2.5 version from the upcoming 3.0 branch. Apart from some technical code changes (e.g. use of Java 11), several functional improvements are foreseen: the UI is unified, and pages (previously managed in sitemaps) are now managed in the openHAB designer. 
User and group management will be available to control who can use specific parts of the UI. Rules and scripts are extended and can be edited directly in the openHAB designer. The main drawback is that backward compatibility to openHAB add-ons for version 1 is dropped. Version 3.0 has been released as of 21 December 2020. Security Many security and privacy concerns have been raised with home automation and IoT in general. OpenHAB's on-premises engine and open source character are answers to these concerns. However, it was criticized for its use of default configurations. Reception OpenHAB won the IoT Challenge 2013 and the JavaOne Duke's Choice Award 2013. It was nominated for the JAX Innovation Award 2014 and was the People's Choice Winner at the Postscapes IoT Awards 2014/15. See also Home Assistant, another popular open source home automation software List of home automation software References External links Home automation Internet of things Smart home hubs
OpenHAB
[ "Technology" ]
1,067
[ "Home automation", "Smart home hubs" ]
62,436,503
https://en.wikipedia.org/wiki/Green%20finance%20and%20the%20Belt%20and%20Road%20Initiative
Green finance is officially promoted as an important feature of the Belt and Road Initiative, China's signature global economic development initiative. The official vision for the BRI calls for an environmentally friendly "Green Belt and Road". Policy Chinese policy documents for the BRI coordinate and encourage green finance and investment. The Ministry of Ecology and Environment with four other ministries released the "Guidance on Promoting a Green Belt and Road" in 2017. A section of the policy document covers mobilizing capital for financing green projects using "international multilateral and bilateral cooperative institutions and funds, such as Silk Road Fund, South-South Cooperation Assistance Fund, China-ASEAN Investment Cooperation Fund, China-Central and Eastern Europe Investment Cooperation Fund, China-ASEAN Maritime Cooperation Fund, Special Fund for Asian Regional Cooperation and LMC Special Fund." Policy institutions like the China Development Bank and Export-Import Bank of China are to play the "guiding role". The Development Research Center of the State Council and Export-Import Bank of China released a report in 2019 on green finance for the Belt and Road. The report gives recommendations and draws on lessons for China to develop Belt and Road green finance and goes into the details about "implementing the concept of green finance" by Export-Import Bank. Forms The various forms of green finance includes investments, lending, and insurance by Chinese state-owned financial entities and companies for renewable energy projects in host countries of the Belt and Road. Bonds The market for green bonds in China is the second largest in the world. In the international bond market, Chinese banks have also issued green bonds. China Development Bank in November 2017 issued the first green bond specifically for Belt and Road projects. 
This first green BRI bond had EUR and USD tranches of US$1.1 billion for "renewable energy, clean transportation and water resource management projects" in BRI countries. In the same month, the Bank of China issued a green bond on the London Stock Exchange, although not specifically for projects in the BRI. Loans The two primary Chinese policy banks for financing BRI projects are the China Development Bank and the Export-Import Bank of China, and each has stated support for advancing more green loans. Both banks consider green loans to mean financing projects in renewable energy or environmental protection. The Export-Import Bank claimed to fulfill green obligations under the Belt and Road by supporting "a large number of projects featuring low energy consumption and high value added in areas of new energy development and utilization and the circular economy." However, out of the energy project loans advanced by both banks between 2014 and 2017 for the BRI, 18% went to coal while solar and wind accounted for 3.4% and 2.9% respectively. Coal projects The primary contradiction between adherence to green finance and BRI projects is the large amount of lending by Chinese banks for coal-fired power plants. In contrast, Western financial institutions have limited or prohibited financing of coal-fired power plants, starting with the World Bank and European Investment Bank in 2013. State-owned Chinese commercial banks have shown a willingness to limit coal projects. In 2017, ICBC and China Construction Bank decided not to fund the Carmichael coal mine after environmental protests by the Australian public. In 2021, the International Institute of Green Finance reported that China did not finance any coal projects via the Belt and Road Initiative in the first half of 2021, a first since the BRI was launched in 2013. 
References Belt and Road Initiative Bonds in foreign currencies Economy of China Finance in China Foreign trade of China Industrial ecology International finance Investment Natural resources Resource economics Sustainability
Green finance and the Belt and Road Initiative
[ "Chemistry", "Engineering" ]
707
[ "Industrial ecology", "Industrial engineering", "Environmental engineering" ]
62,436,652
https://en.wikipedia.org/wiki/Partial%20allocation%20mechanism
The Partial Allocation Mechanism (PAM) is a mechanism for truthful resource allocation. It is based on the max-product allocation - the allocation maximizing the product of agents' utilities (also known as the Nash-optimal allocation or the Proportionally-Fair solution; in many cases it is equivalent to the competitive equilibrium from equal incomes). It guarantees to each agent at least 0.368 of his/her utility in the max-product allocation. It was designed by Cole, Gkatzelis and Goel. Setting There are m resources that are assumed to be homogeneous and divisible. There are n agents, each of whom has a personal function that attributes a numeric value to each "bundle" (combination of resources). The valuations are assumed to be homogeneous functions. The goal is to decide what "bundle" to give to each agent, where a bundle may contain a fractional amount of each resource. Crucially, some resources may have to be discarded, i.e., free disposal is assumed. Monetary payments are not allowed. Algorithm PAM works in the following way. Calculate the max-product allocation; denote it by z. For each agent i: Calculate the max-product allocation when i is not present. Let fi = (the product of the other agents in z) / (the max-product of the other agents when i is not present). Give to agent i a fraction fi of each resource he gets in z. Properties PAM has the following properties. It is a truthful mechanism - each agent's utility is maximized by revealing his/her true valuations. For each agent i, the utility of i is at least 1/e ≈ 0.368 of his/her utility in the max-product allocation. When the agents have additive linear valuations, the allocation is envy-free. PA vs VCG The PA mechanism, which does not use payments, is analogous to the VCG mechanism, which uses monetary payments. 
VCG starts by selecting the max-sum allocation, and then for each agent i it calculates the max-sum allocation when i is not present, and pays i the difference (max-sum when i is present)-(max-sum when i is not present). Since the agents are quasilinear, the utility of i is reduced by an additive factor. In contrast, PA does not use monetary payments, and the agents' utilities are reduced by a multiplicative factor, by taking away some of their resources. Optimality It is not known whether the fraction of 0.368 is optimal. However, there is provably no truthful mechanism that can guarantee to each agent more than 0.5 of the max-product utility. Extensions The PAM has been used as a subroutine in a truthful cardinal mechanism for one-sided matching. References Fair division protocols Mechanism design
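For additive linear valuations, the PAM steps above can be sketched end to end. The brute-force grid search below is a stand-in for computing max-product (Nash-optimal) allocations in a two-agent, two-good toy case, not the method used in the original paper; the function names are illustrative:

```python
from itertools import product

def utility(bundle, values):
    """Additive utility of a fractional bundle."""
    return sum(b * v for b, v in zip(bundle, values))

def max_product(values, steps=50):
    """Brute-force max-product allocation for 2 additive agents."""
    grid = [k / steps for k in range(steps + 1)]
    best, best_alloc = -1.0, None
    for fracs in product(grid, repeat=len(values[0])):
        # agent 0 receives `fracs` of each good, agent 1 the complement
        alloc = [list(fracs), [1 - f for f in fracs]]
        p = utility(alloc[0], values[0]) * utility(alloc[1], values[1])
        if p > best:
            best, best_alloc = p, alloc
    return best_alloc

def pam(values, steps=50):
    z = max_product(values, steps)
    utils = [utility(z[i], values[i]) for i in range(2)]
    final, fractions = [], []
    for i in range(2):
        other = 1 - i
        # With agent i absent, the other agent receives everything, so the
        # max-product of "the others" is just their full-bundle utility.
        f_i = utils[other] / sum(values[other])
        fractions.append(f_i)
        final.append([f_i * x for x in z[i]])  # i keeps fraction f_i of z_i
    return final, fractions
```

With valuations [[2, 1], [1, 2]], the max-product allocation gives each agent their preferred good, and PAM lets each keep 2/3 of it; the discarded third is the price of truthfulness, and 2/3 is comfortably above the 1/e ≈ 0.368 guarantee.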
Partial allocation mechanism
[ "Mathematics" ]
588
[ "Game theory", "Mechanism design" ]
62,437,054
https://en.wikipedia.org/wiki/Mar%C3%ADa%20Teresa%20Lozano%20Im%C3%ADzcoz
María Teresa Lozano Imízcoz (born July 31, 1946) is a Spanish emeritus professor and mathematician. She studies topology principally in three dimensions. She has been awarded a Royal Spanish Mathematical Society (RSME) medal for her career and as a trailblazer for women to be involved in mathematical research. Life Lozano was born in Pamplona in 1946. In 1969 she obtained a degree and five years later she completed her doctorate in mathematics at the University of Zaragoza. Her postdoctoral work began at the University of Wisconsin, where she was an honorary fellow. In 1978 she returned to Spain where she became a professor at the University of Zaragoza. In 1990 she was made the Professor of Geometry and Topology. She was the first professor and the first director in her university's Faculty of Sciences. She was also the first emeritus professor of her faculty. Memberships In 1996 she became an Academician of the Royal Academy of Exact, Physical, Chemical and Natural Sciences of Zaragoza. In 2006 she became a Corresponding Academician of the Spanish Royal Academy of Sciences. In 2016 she was awarded the Royal Spanish Mathematical Society (RSME) Medal in recognition of the 40 years that she had contributed to the mathematics profession. The citation mentioned her dissemination work and her studies with Hugh Michael Hilden and Vicente Montesinos on knot theory and 3-dimensional topology. The newspapers also mentioned her as a trailblazer for women to be involved in mathematical research. References 1946 births Living people People from Pamplona Topologists 20th-century Spanish mathematicians 21st-century Spanish mathematicians Spanish women mathematicians University of Zaragoza alumni University of Wisconsin–Madison fellows Academic staff of the University of Zaragoza
María Teresa Lozano Imízcoz
[ "Mathematics" ]
334
[ "Topologists", "Topology" ]
62,437,333
https://en.wikipedia.org/wiki/Digital%20dystopia
Digital dystopia, cyber dystopia or algorithmic dystopia refers to an alternate future or present in which digitized technologies or algorithms have caused major societal disruption. It refers to dystopian narratives of technologies influencing social, economic, and political structures, and its diverse set of components includes virtual reality, artificial intelligence, ubiquitous connectivity, ubiquitous surveillance, and social networks. In popular culture, technological dystopias often depict mass loss of privacy due to technological innovation and social control. They feature heightened socio-political issues like social fragmentation, intensified consumerism, dehumanization, and mass human migrations. Origins In 1998, "digital dystopia" was used to describe the negative effects of multichannel television on society. "Cyber-dystopia" was coined in 1998 in connection with cyber-punk literature. One of the earliest mentions was in 2004, when an academic and blogger was expelled for commenting on how The Sims Online game, set in the city of Alphaville, had become a digital dystopia controlled by "president" Donald Meacham and a corrupt faction of robot nobles, with crime, cyber-sex prostitution and general civic chaos. Digital experimentation with the elements of cyberspace became extremely invasive and took on the appearance of anarchy in Alphaville. In August 2007, David Nye presented the idea of cyber-dystopia, which envisions a world made worse by technological advancements. Cyber-dystopian principles focus on the individual losing control, becoming dependent and being unable to stop change. Nancy Baym describes the negative effect of a cyber-dystopia on social interactions, in which new media "take people away from their intimate relationships, as they substitute mediated relationships or even media use itself for face to face engagement". 
The dystopian voices of Andrew Keen, Jaron Lanier, and Nicholas Carr warn that society as a whole could sacrifice its humanity to the cult of cyber-utopianism. In particular, Lanier describes it as "an apocalypse of self-abdication" in which "consciousness is attempting to will itself out of existence", warning that by emphasising the majority or crowd, we are de-emphasising individuality. Similarly, Keen and Carr write that a dangerous mob mentality dominates the internet: rather than creating more democracy, the internet is empowering the rule of the mob. Instead of achieving social equality or utopianism, the internet has created a "selfie-centered" culture of voyeurism and narcissism. John Naughton, writing for The Guardian, described Aldous Huxley, the author of Brave New World, as the prophet of digital dystopia. See also Cyberpunk Digital sublime Further reading References External links Deviance (sociology) Cyberspace Social theories Technophobia
Digital dystopia
[ "Technology", "Biology" ]
595
[ "Behavior", "Cyberspace", "Information technology", "Deviance (sociology)", "Human behavior" ]
62,438,732
https://en.wikipedia.org/wiki/Prescribed%20daily%20dose
Prescribed daily dose (PDD) is the usual dose of medication calculated by looking at a group of prescriptions for the medication in question. At times the PDD needs to be related to the condition being treated. See also References Pharmacy
Prescribed daily dose
[ "Chemistry" ]
49
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs", "Pharmacy" ]
62,439,273
https://en.wikipedia.org/wiki/H%C3%BCnk%C3%A2r%20Mahfili
A Hünkâr Mahfili is a structure within the prayer hall of a mosque used for worship by the Sultan, the royal family, and high-ranking government officials. It originated in the Ottoman mosques of Turkey. Often raised, it provides privacy and protection from would-be assassins. It is often attached to a Hünkâr Kasrı. Gallery See also Maqsura References External links Hünkar Mahfili Nedir? Fatih Camii Hünkâr Mahfili on YouTube Mosques
Hünkâr Mahfili
[ "Engineering" ]
108
[ "Architecture stubs", "Architecture" ]
62,439,312
https://en.wikipedia.org/wiki/Copper%28II%29%20thiocyanate
Copper(II) thiocyanate (or cupric thiocyanate) is a coordination polymer with formula Cu(SCN)2. It is a black solid which slowly decomposes in moist air. It was first reported in 1838 by Karl Ernst Claus, and its structure was first determined in 2018. Structure The structure of Cu(SCN)2 was determined via powder X-ray diffraction and consists of chains of Cu(NCS)2 linked together by weak Cu–S–Cu bonds into two-dimensional layers. It can be considered a Jahn–Teller distorted analogue of the mercury thiocyanate structure type. Each copper is octahedrally coordinated by four sulfurs and two nitrogens. The sulfur end of the SCN− ligand is doubly bridging. Synthesis Copper(II) thiocyanate can be prepared from the reaction of concentrated aqueous solutions of copper(II) and a soluble thiocyanate salt, precipitating as a black powder. With rapid drying, pure Cu(SCN)2 can be isolated. Reaction at lower concentrations and over longer periods of time instead generates copper(I) thiocyanate. Magnetism Copper(II) thiocyanate, like copper(II) bromide and copper(II) chloride, is a quasi-low-dimensional antiferromagnet and orders into a conventional Néel ground state. References Copper(II) compounds Coordination polymers Thiocyanates
Copper(II) thiocyanate
[ "Chemistry" ]
316
[ "Thiocyanates", "Functional groups" ]
62,440,712
https://en.wikipedia.org/wiki/Uth%20Roeun
Uth Roeun (born 1944) is a Cambodian cartoonist responsible for the first Cambodian comic book in 1964. He is a key figure in the "golden age" of Cambodian comics prior to the Khmer Rouge regime. He later served as Department Chief for Publishing at the Cambodian Ministry of Education, Youth and Sport. Career He was attending the Lycée Sisowath in 1962 when his art teacher introduced him to French bandes dessinées, and he was inspired to draw Khmer-language comics. Of his first comic book, the story of two teenage friends, he said: “Frankly the illustrations weren’t beautiful. I had no idea how to draw properly. When I’d finished illustrating the first one, no printers wanted to take it [...] it was new and nothing like it had ever been produced before.” Eventually it was published in 1964 as Neytung Neysang, the first Cambodian comic book. The popular book sold twenty thousand copies and earned him four thousand riel. His second book, Preah Thoung Neang Neak (1964), contained content about tolerance towards Muslims, which prompted the police to arrest and interrogate him for a day. He missed an exam and quit school entirely to devote himself to cartooning. He also drew book covers, illustrated magazines and schoolbooks, and painted. Many comic books were lost during the Khmer Rouge regime. The earliest surviving comic book by Uth Roeun is 1966's One Night for You. His career came to a halt when he was conscripted during the Khmer Republic. During the Khmer Rouge regime, he drew construction plans for the government. “The government said you are an artist, then you must draw. There was no money paid, just food given. I drew pictures for the Khmer Rouge. If they were building a dam, I did the drawing of it. My fingers were red from exerting myself while drawing for the Pol Pot regime. [...] I served by drawing soldiers, Khmer Rouge plans. I was too skinny to work in the rice fields; I was very thin and could not do hard, physical work.”
Comics had a resurgence in the 1980s, with new books being published and surviving books reprinted, though many were pirated without attribution to the artists. These years featured reprints of Uth Roeun's Torn Chey (1985), about a peasant boy who outwits the king, and Tum Teav (1986), an adaptation of the classic Cambodian romance, and his new work New Life in Kompong Preah (1986). In 2000 he founded The Association of Cambodian Artist Friends, based near Wat Phnom. References Cambodian artists 1944 births Living people
Uth Roeun
[ "Engineering" ]
552
[ "Design engineering", "Draughtsmen" ]
62,440,811
https://en.wikipedia.org/wiki/Milk%20fiber
Milk fiber or milk wool is a type of Azlon, a regenerated protein fiber based on the casein protein found in milk. There are several trade names for milk-casein-based fibers, including Lanital, Fibrolane and Aralac. Invention and history First produced and patented in Italy in 1935 by Antonio Ferretti and sold under the name Lanital, milk fiber was created under an Italian national self-sufficiency drive and was intended to capitalize on previous successes with rayon. Milk fibers enjoyed a brief period of success in the 1930s and 1940s. The popularity of milk fibers declined rapidly once full-synthetic fibers were developed. Fully synthetic fibers, such as acrylic, were able to significantly undercut milk fiber on price while being more durable. During the 2010s several producers tried to reintroduce milk fibers to commercial production. Production process The production process of milk fiber was of some public interest and was documented on film by several contemporary sources. A simplified overview of the process is as follows: Acid is mixed with milk to extract the casein. Water is evaporated to form casein crystals. The casein is hydrated to a thick syrup and extruded through spinnerets. The resulting fiber is passed through a hardening bath. The continuous fiber is then cut to the desired length. References Materials Fibers
Milk fiber
[ "Physics" ]
276
[ "Materials", "Matter" ]
62,441,095
https://en.wikipedia.org/wiki/Herbivore%20effects%20on%20plant%20diversity
Herbivores' effects on plant diversity vary across environmental conditions: herbivores can either increase or decrease plant diversity. Loss of plant diversity due to climate change can also affect herbivore–plant community relationships. Herbivores are crucial in determining the distribution, abundance, and diversity of plant populations. Research indicates that by consuming large amounts of plant biomass, herbivores can directly reduce the local abundance of plants, thereby affecting the spatial distribution of different plant species. For example, the impact of herbivory is typically more pronounced in grassland species than in woodland forbs, especially in environments that undergo frequent disturbances. Dominant species effect Herbivores were long thought to increase plant diversity by preventing dominance, since dominant species tend to exclude subordinate species through competitive exclusion. However, the effects on plant diversity caused by variation in dominance can be beneficial or negative. Herbivores do increase biodiversity by consuming dominant plant species, but they can also prefer eating subordinate species, depending on the plants’ palatability and quality. Plant palatability also heavily affects which plant species becomes dominant and which becomes subordinate, since palatability largely determines whether herbivores consume a given plant more or less, and hence affects its course of growth. Beyond herbivore preference, herbivores' effects on plant diversity are also influenced by other factors: defense trade-off theory, the predator–prey interaction, and intrinsic traits of the environment and the herbivores themselves. Defense trade-off theory effect One way that plants can differ in their susceptibility to herbivores is through defense trade-offs. Defense trade-off theory is commonly seen as a fundamental framework for explaining how ecological evenness is maintained.
Plants can make trade-offs in resource allocation, such as between defense and growth. The effects of defenses against herbivores can vary with the situation: they may be neutral, detrimental, or beneficial for plant fitness. Defense trade-offs allow plants to change phenotype in response to environmental challenges such as herbivory. Even in the absence of defensive trade-offs, herbivores may still increase plant diversity, for example when they prefer subordinate species over dominant species. The predator–prey interaction, especially “top-down” regulation, also shapes plant diversity. One consequence of high grazing pressure is that plant productivity is reduced by the removal of photosynthetic tissue, reducing plant richness and/or abundance in the ecosystem. Herbivore damage to non-photosynthetic plant tissue has also been found to reduce flowering-plant productivity because it makes plants less attractive to pollinators. This is known as the top-down effect, which in this case centers on the herbivore population and plant communities. The predator–prey interaction encourages adaptation in the plant species that the predator prefers, and “top-down” ecological regulation disproportionately reduces the biomass of dominant species, which increases diversity. Herbivore effects on plants are widespread but still differ significantly between sites, and can be positive or negative. Overall, the effect of herbivory on plant diversity fluctuates with many variables, such as herbivore population, plant phenology, and palatability to herbivores. Productivity effect In a highly productive system, the environment provides organisms with adequate resources to grow, and the effects of herbivores competing with plants for resources are more complicated. Moderate levels of herbivory can increase the productivity of biomass, including that of plants.
The presence of herbivores can increase plant diversity by reducing the abundance of dominant species; the freed-up resources can then be used by subordinate species. Therefore, in a highly productive system, direct consumption of dominant plants can indirectly benefit herbivory-resistant and unpalatable species. A less productive system, by contrast, can support only limited herbivores because of the lack of resources; there, herbivory boosts the abundance of the most tolerant species and reduces the presence of less-tolerant species, accelerating plant extinction. A moderately productive system sometimes shows barely any long-term effect on plant diversity, because the environment supports a stable coexistence of different organisms: even when herbivores disturb the community, the system can recover to its original state. Light is one of the most important environmental resources for plants. Competition for light availability and predator avoidance are equally important. As resources are added, competition among plant species intensifies, but herbivores can buffer the resulting loss of diversity. Large herbivores in particular can enhance biodiversity by selectively removing tall, dominant plant species and increasing light availability. Plants can sense being touched, and they use several strategies to defend against damage caused by herbivores, including producing secondary metabolites known as allelochemicals, altering their attractiveness, and employing defensive tactics such as escaping or avoiding herbivores, diverting herbivores toward non-essential parts, and encouraging the presence of the herbivores' natural enemies. Body size of herbivores effect The body size of herbivores is a key factor underlying the interaction between herbivores and plant diversity, and it explains many of the phenomena connected to herbivore–plant interactions.
A larger body size means an animal requires more nutrients and energy to sustain itself. Small herbivores are less likely to decrease plant diversity, because small non-digging animals may not cause much disturbance to the environment. Intermediate-sized herbivores mostly increase plant diversity by consuming or influencing the dominant plant species; herbivorous birds, for example, can directly exploit dominant plant species, while some herbivores enhance plant diversity through indirect effects on plant competition. Some digging animals at this size create local environmental fluctuations in the community, and the adaptations of plant species to avoid predators can also adjust the vegetation structure and increase diversity. Larger herbivores often increase plant diversity: they consume competitively dominant plant species, disperse seeds, and disturb the soil. Their urine deposition also adjusts the local plant distribution and reduces light competition. With a larger body size, large herbivores tend to consume more, and higher-quality, plants to regain the required amount of nutrition and energy. They also leave behind larger amounts of fecal matter, which tends to increase nutrients needed for plant growth, such as nitrogen and phosphorus, in herbivore-dominated areas such as grasslands. Plant diversity can be highly variable in the presence of herbivores; however, studies have shown that grazing by herbivore assemblages, such as a mixture of cattle and sheep, can increase plant diversity. The mechanisms of herbivores' effects on plant diversity are therefore complicated. Generally, the presence of herbivores increases plant diversity, and moderate herbivory enhances plant productivity because it reduces self-shading and accelerates nutrient cycling, but the outcome varies with environmental factors: multiple factors combine to determine how herbivores influence plant diversity. References Herbivory Plant ecology
Herbivore effects on plant diversity
[ "Biology" ]
1,439
[ "Eating behaviors", "Plant ecology", "Plants", "Herbivory" ]
62,442,381
https://en.wikipedia.org/wiki/Index%20of%20architecture%20articles
This is an alphabetical index of articles related to architecture. 0–9 4th millennium BC in architecture 30th century BC in architecture 29th century BC in architecture 27th century BC in architecture 26th century BC in architecture 25th century BC in architecture 21st century BC in architecture 19th century BC in architecture 18th century BC in architecture 14th century BC in architecture 13th century BC in architecture 6th century BC in architecture 5th century BC in architecture 2nd century in architecture 3rd century in architecture 4th century in architecture 5th century in architecture 6th century in architecture 7th century in architecture 8th century in architecture 9th century in architecture 10th century in architecture 11th century in architecture 14th century in architecture 1000s in architecture 2000 in architecture A A-frame building A-un Abacus Ab anbar Abat-son Abbasid architecture Ablaq Acanthus Accolade Achaemenid architecture Acropolis Acroterion Adam style Adaptive reuse Additive Architecture Adirondack Architecture Adobe Advanced work Adyton Aedicula Aeolic order Aerary Aerospace architecture Affordable housing by country Affordable housing in Canada Afromodernism Agadir Airey house Aisle Akbari architecture Albarrana tower Alcazaba Alcázar Alcove Alfarje Alfiz Alure Amalaka Ambry Ambulacrum Ambulatory American colonial architecture American Foursquare American Renaissance Ammonite order Amphiprostyle Amphitheatre Amsterdam School Anastylosis Anathyrosis Anatolian Seljuk architecture Anchor plate Ancient Chinese wooden architecture Ancient Egyptian architecture Ancient Greek and Roman roofs Ancient Greek architecture Ancient Greek temple Ancient Indian architecture Ancient monuments of Java Ancient Roman architecture Ancient Roman defensive walls Andalusian patio Andaruni Andean Baroque Andron Anglo-Japanese style Anglo-Saxon architecture Anglo-Saxon turriform churches Annulet Anta Anta capital Antarala Antae temple Antebellum architecture 
Antechamber Ante-chapel Ante-choir Antefix Apadana Apartment Apodyterium Apophyge Apron Apse Apse chapel Apsidiole Aqueduct Arabesque Araeostyle Arcachon villa Arcade Arch Arch bridge Architect Architects of Iran Architrave Archivolt Architect of record Architectural acoustics Architectural analytics Architectural animation Architectural conservation Architectural design competition Architectural design optimization Architectural design values Architectural designer Architectural development of the eastern end of cathedrals in England and France Architectural drawing Architectural education in the United Kingdom Architectural educator Architectural endoscopy Architectural engineer (PE) Architectural engineering Architectural Experience Program (AXP) Architectural forgery in Japan Architectural firm Architectural geometry Architectural glass Architectural Heritage Society of Scotland Architectural historian Architectural icon Architectural illustrator Architectural ironmongery Architectural light shelf Architectural lighting design Architectural metals Architectural model Architectural mythology Architectural photographers Architectural photography Architectural plan Architectural propaganda Architectural psychology in Germany Architectural rendering Architectural reprography Architectural Review Architectural school of Nakhchivan Architectural sculpture Architectural sculpture in the United States Architectural style Architectural technologist Architectural technology Architectural terracotta Architectural theory Architectural vaults Architecture Architecture for Humanity Architecture in early modern Scotland Architecture in modern Scotland Architecture in Omaha, Nebraska Architecture museum Architecture of Aarhus Architecture of Aberdeen Architecture of Afghanistan Architecture of Africa Architecture of Albania Architecture of Albany, New York Architecture of Algeria Architecture of Almaty Architecture of ancient Sri Lanka Architecture of Angola Architecture of 
Argentina Architecture of Atlanta Architecture of Australia Architecture of Aylesbury Architecture of Azerbaijan Architecture of Baku Architecture of Bangladesh Architecture of Barcelona Architecture of Bathurst, New South Wales Architecture of Belfast Architecture of Belgrade Architecture of Bengal Architecture of Berlin Architecture of Bermuda Architecture of Bhutan Architecture of Birmingham Architecture of Bolivia Architecture of Bosnia and Herzegovina Architecture of Boston Architecture of Brazil Architecture of Buffalo, New York Architecture of the Bulgarian Revival Architecture of the California missions Architecture of Canada Architecture of Cantabria Architecture of Cape Verde Architecture of Cardiff Architecture of Casablanca Architecture of cathedrals and great churches Architecture of Central Asia Architecture of Chennai Architecture of Chicago Architecture of Chile Architecture of Chiswick House Architecture of Colombia Architecture of Copenhagen Architecture of Costa Rica Architecture of Croatia Architecture of Cuba Architecture of the Cucuteni–Trypillia culture Architecture of Dakota Crescent Architecture of Delhi Architecture of Denmark Architecture of Dhaka Architecture of England Architecture of Estonia Architecture of Ethiopia Architecture of Fez Architecture of Fiji Architecture of Finland Architecture of Fredericksburg, Texas Architecture of Georgia Architecture of Germany Architecture of Glasgow Architecture of Goan Catholics Architecture of Gujarat Architecture of Hong Kong Architecture of Houston Architecture of Hungary Architecture of Hyderabad Architecture of Iceland Architecture of India Architecture of Indonesia Architecture of Ireland Architecture of Istanbul Architecture of Italy Architecture of Jacksonville Architecture of Jiangxi Architecture of Johannesburg Architecture of Jordan Architecture of Kansas City Architecture of Karnataka Architecture of Kathmandu Architecture of Kerala Architecture of Kievan Rus' Architecture of Kosovo 
Architecture of Kuala Lumpur Architecture of Kuwait Architecture of Lagos Architecture of Lahore Architecture of Las Vegas Architecture of Lebanon Architecture of Leeds Architecture of Letterkenny Architecture of Lhasa Architecture of Limerick Architecture of Liverpool Architecture of London Architecture of the London Borough of Croydon Architecture of Lucknow Architecture of Luxembourg Architecture of Macau Architecture of Madagascar Architecture of Madrid Architecture of Maharashtra Architecture of Mali Architecture of Malta Architecture of Manchester Architecture of Mangalorean Catholics Architecture of the medieval cathedrals of England Architecture of Melbourne Architecture of Mesopotamia Architecture of metropolitan Detroit Architecture of Mexico Architecture of Monaco Architecture of Mongolia Architecture of Montenegro Architecture of Montreal Architecture of Mostar Architecture of Mumbai Architecture of the Netherlands Architecture of Nepal Architecture of New York City Architecture of New Zealand Architecture of Nigeria Architecture of Normandy Architecture of North Macedonia Architecture of Norway Architecture of Ottawa Architecture of Paris Architecture of the Paris Métro Architecture of Palestine Architecture of Peć Architecture of Penang Architecture of Peru Architecture of Philadelphia Architecture of the Philippines Architecture of Plymouth, Pennsylvania Architecture of Portland, Oregon Architecture of Provence Architecture of Puerto Rico Architecture of Quebec Architecture of Quebec City Architecture of Rajasthan Architecture of Rome Architecture of Samoa Architecture of San Antonio Architecture of San Francisco Architecture of Saudi Arabia Architecture of Scotland Architecture of Scotland in the Industrial Revolution Architecture of Scotland in the Middle Ages Architecture of Scotland in the Prehistoric era Architecture of Scotland in the Roman era Architecture of Seattle Architecture of Serbia Architecture of Singapore Architecture of Sri Lanka 
Architecture of the Song dynasty Architecture of South Korea Architecture of St. John's, Newfoundland and Labrador Architecture of St. Louis Architecture of Stockholm Architecture of Sumatra Architecture of Sweden Architecture of Switzerland Architecture of Sydney Architecture of Taiwan Architecture of Tamil Nadu Architecture of the Tarnovo Artistic School Architecture of Tehran Architecture of Telangana Architecture of Texas Architecture of Thailand Architecture of Tibet Architecture of Tokyo Architecture of Toronto Architecture of Turkey Architecture of the United Arab Emirates Architecture of the United Kingdom Architecture of the United States Architecture of Uttar Pradesh Architecture of Uzbekistan Architecture of Vancouver Architecture of Vatican City Architecture of Veliko Tarnovo Architecture of Wales Architecture of Warsaw Architecture of Western Australia Architecture of Yugoslavia Architecture of Zimbabwe Architecture parlante Architecture schools in Switzerland Architecture studio Architecture terrible Arcology Arcosolium Ardhamandapa Area Arena Armenian architecture Armenian church architecture Arris Arrowslit Art Deco Art Deco architecture Art Deco architecture of New York City Art Deco in Mumbai Art Deco in Paris Art Deco in the United States Art Deco buildings in Sydney Art Nouveau Art Nouveau architecture in Riga Art Nouveau architecture in Russia Art Nouveau in Alcoy Art Nouveau in Antwerp Art Nouveau in Strasbourg Art Nouveau religious buildings Artesonado Articular church Articulation Ashlar Assam-type architecture Association of German Architects Astragal Asturian architecture Astylar Atalburu Atlantean figures Atlas Atmosphere Atrium Attap dwelling Attic Attic base Attic style Aula regia Australian architectural styles Australian non-residential architectural styles Australian residential architectural styles Autonomous building Avant-garde architecture
Avant-corps Awning Azekurazukuri Aztec architecture B Barabara Bachelor of Architectural Studies Bachelor of Architecture Back-to-back house Badami Chalukya architecture Bailey Baita Balairung Balconet Balconies of Cusco Balconies of Lima Balcony Bald arch Baldachin Baldresca Bale kulkul Bali Aga architecture Balinese architecture Balinese traditional house Ball flower Baluster Banjarese architecture Banna'i Banqueting house Banquette Baptistery Baradari Barbican Bargeboard Bargrennan chambered cairn Barndominium Baroque architecture Baroque architecture in Portugal Baroque Revival architecture Barrel roof Barrel vault Bartizan Baseboard Basement Basilica Bastide (Provençal manor) Bastion Bastion fort Bastle house Batak architecture Batter Battered corner Battle of the Styles Battlement Baubotanik Bauhaus Bay Bay-and-gable Bay window Beach house Bead and reel Beaux-Arts architecture Bed-mould Beehive house Belarusian Gothic Belfry Bell-cot Bell-gable Bell roof Bell tower Bell tower (wat) Belsize Architects Belt course Belvedere Bench table Bent Bent entrance Berg house Béton brut Bezantée Biedermeier Bifora Bildts farmhouse Biomimetic architecture Bionic architecture Black and white bungalow Black-and-white Revival architecture Black Forest house Blackhouse Blind arcade Blind arch Blobitecture Blockhouse Blue roof Bolection Bond beam Bosnian style in architecture Boss Bossage Bossche School Bouleuterion Bowellism Bowtell Bow window Box gutter Brabantine Gothic Bracket Brahmasthan Branchwork Brâncovenesc style Brattishing Breezeway Bresse house Bressummer Bretèche Brick Expressionism Brick Gothic Brick Gothic buildings Brick nog Brick Renaissance Brick Romanesque buildings Brickwork Bridge castle Brief Brise soleil Bristol Byzantine British megalith architecture Broach spire Broch Brutalist architecture Brutalist structures Bucranium Buddhist architecture Building Building code Building design Building envelope Building restoration Building typology Buildings and 
architecture of Allentown, Pennsylvania Buildings and architecture of Bath Buildings and architecture of Brighton and Hove Buildings and architecture of Bristol Buildings and architecture of New Orleans Buildings in Dubai Burdock piling Burgus Burnham Baroque But and ben Butterfly roof Buttress Byre-dwelling Byzantine architecture Byzantine Revival architecture C Caisson Caldarium Calendar house California bungalow Camarín Camber beam Cambridge School of Architecture and Landscape Architecture Canada's grand railway hotels Canadian Centre for Architecture Candi bentar Candi of Indonesia Canopy Cant Cantilever Cantoris Cape Dutch architecture Capilla abierta Capital Caravanserai Carolingian architecture Carpenter Gothic Carport Cartilage Baroque Cartouche Caryatid Casa montañesa Cascina a corte Cas di torto Casemate Casement stay Casement window Castellum Cast-iron architecture Castle Castle chapel Cast stone Catalan Gothic Catalan Romanesque Churches of the Vall de Boí Catalan vault Catenary arch Cathedral Cathedral arch Cathedral Architect Cathedral floorplan Cathedrals in Spain Catshead (architecture) Cavaedium Cavalier Cave castle Cavea Cavetto Cavity wall Ceiling Cella Cell church Cenotaph Central-passage house Centring Ceramic house Chahartaq Chalet Chamber gate Chamber tomb Chambered cairn Chambranle Chamfer Chancel Channel letters Chantlate Chapel Chapter house Chardak Charleston single house Charrette Chartaque Charter bole Chartered architect Château Châteauesque Chattel house Chemin de ronde Chemise Cherokee Gothic Chhajja Chhatri Chicago school Chigi Chilotan architecture Chimney Chimney breast Chinese architecture Chinese Chippendale Chinese Islamic architecture Chinese pagoda Chinese temple architecture Choga Choir Chola art and architecture Church architecture Church architecture in England Church architecture in Scotland Church window Churches in Norway Churches of Chiloé Churrigueresque Ciborium Circulation Circus Cistercian architecture Citadel 
City Beautiful movement City block City gate City of Vicenza and the Palladian Villas of the Veneto Clapboard Classical architecture Classical order Clerestory Clerk of works Cliff dwelling Clock gable Cloister Cloister vault Coade stone Cobblestone architecture Coenaculum Coercion castle Coffer Collegiate Gothic Colonette Colonial architecture Colonial architecture in Jakarta Colonial architecture in Padang Colonial architecture in Surabaya Colonial architecture of Indonesia Colonial architecture of Makassar Colonial architecture of Southeast Asia Colonial Revival architecture Colonnade Column Comacine masters Combination stair Compass Complementary architecture Compound pier Compression member Computer-aided architectural design Comtois steeple Concatenation Concentric castle Conceptual architecture Conch house Concrete landscape curbing Concrete shell Congrès Internationaux d'Architecture Moderne Conical roof Conisterium Connected farm Construction partnering Constructivist architecture Consumption wall Contemporary architecture Contextual architecture Conversation pit Coping Copper cladding Copper in architecture Coptic architecture Copyright in architecture in the United States Corbel Corbel arch Cordonata Core Corinthian order Cornerstone Corner tower Cornice Coron Corps de logis Cosmatesque Cotswold architecture Cottage flat Cottage orné Cottage window Council architect Council on Tall Buildings and Urban Habitat Counter-arch Coupled column Court of honor (Cour d'honneur) Course Court cairn Courtyard house Cove lighting Coved ceiling Covertway Crannog Creole architecture in the United States Crepidoma Crescent Cresting Crimson Architectural Historians Crinkle crankle wall Critical regionalism Croatian pre-Romanesque art and architecture Crocket Crooked spire Cross-in-square Cross-wall Cross-window Cross-wing Crossing Crowdsourcing architecture Crown molding Crown steeple Crownwork Cruciform Crypt Cryptoporticus Cubiculum Curtain wall Cyclopean masonry 
Cyclostyle Cymatium Cyzicene hall Czech architecture Czech Baroque architecture Czech Cubism Czech Gothic architecture Czech Renaissance architecture D Dado Dado rail Daibutsuyō Dakkah Danish design Darbazi Dargah Dartmoor longhouse Deck Deconstruction Deconstructivism Deep foundation Deep Jyoti Stambh Deep plan Defensive wall Defensive towers of Cantabria Demerara window Dentil Destruction of country houses in 20th-century Britain Detinets Diagrid Diamond vault Diapering Diaphragm arch Diaulos Digital architecture Dikka Diocletian window Discharging arch Disordered piling Dissenting Gothic Distyle Distyle in antis Dō Doctor of Architecture Dog-tooth Dome Domus Doric order Dormer Double chapel Double-skin facade Dougong Dragestil Dravidian architecture Drawing board Dropped ceiling Drum tower Dry stone Dun (fortification) Duomo Duplex (building) Dutch architecture in Semarang Dutch Baroque architecture Dutch brick Dutch Colonial architecture Dutch Colonial Revival architecture Dutch door Dutch gable Dwarf gallery Dzong architecture E Early Christian art and architecture Early New York Architecture in 19th Century Early skyscrapers Earthquake Baroque East Asian hip-and-gable roof Easter Sepulchre Eastern Orthodox church architecture Eastlake movement Eave return Eaves Eclecticism in architecture Edwardian architecture Edwardian Baroque architecture Egg-and-dart Egyptian pyramids Egyptian pyramid construction techniques Egyptian Revival architecture Egyptian Revival architecture in the British Isles Elevated entrance Elizabethan architecture Elizabethan Baroque Ell Ellipsoidal dome Elliptical dome Embrasure Emissary Empire style Enceinte Enclosure castle Enfilade Engaged column Engawa English Baroque English country house English Gothic architecture Entablature Entasis Ergastulum Estate houses in Scotland Estipite Estonian vernacular architecture Etruscan architecture European medieval architecture in North America European Route of Brick Gothic European Union Prize 
for Contemporary Architecture Euthynteria Examination for Architects in Canada Exedra Experimental architecture Expression Expressionist architecture F Fabric structure Facade Facadism False door Falsework Fanlight Fan vault Fantastic architecture Farmhouse Fascia Fascist architecture Fatimid architecture Fatimid Great Palaces Fauces Faussebraye Federal architecture Federal modernism Federation architecture Fender pier Ferro Festoon Fina Finial Firebox Fire door Fire lookout tower Firewall First national architectural movement First Period First Romanesque Flak tower Flamboyant Flame palmette Flanking tower Flat roof Flèche (architecture) Flèche (fortification) Flèche faîtière Fleuron Float glass Floating floor Flood arch Floor medallion Floor plan Floor vibration Florida cracker architecture Florida modern Flushwork Fluting (architecture) Flying arch Flying buttress Foil Folk Victorian Folly Folly fort Forced perspective Forecourt Form follows function Fortification Fortified gateway Fortified house Fortified tower Fortochka Fortress church Forum Foundation Four-centred arch Frederician Rococo Free plan French architecture French Baroque architecture French Colonial French Gothic architecture French Renaissance architecture French Restoration style French Romanesque architecture Frëngji Fretwork Frieze Frigidarium Frisian farmhouse Frontispiece Fumarium Funco Functionalism Fusuma G Gabion Gable Gablefront house Gable roof Gablet roof Gable stone Gaiola Galilee Gallery Galleting Gambrel Gaper Garbhagriha Garderobe Gargoyle Garland bearers Garret Garrison Gatehouse Gate tower Gavaksha Gavit Gazebo Geestharden house Geison Genius loci Geodesic dome Georgian architecture Gibbs surround Gingerbread Girih Girih tiles Girt Giyōfū architecture Glass brick Glass floor Glass in green buildings Glass mosaic Glass mullion system Glass tile Glazed architectural terra-cotta Glazing Gloriette Gold leaf Gonbad Gongbei Gothic architecture Gothic architecture in Lithuania Gothic 
architecture in modern Poland Gothic brick buildings in Germany Gothic brick buildings in the Netherlands Gothic buildings Gothic cathedrals and churches Gothic Revival architecture Gothic Revival architecture in Canada Gothic Revival architecture in Poland Gothic Revival buildings Gothic secular and domestic architecture Goût grec Grade beam Graecostasis Granary Grands Projets of François Mitterrand Great chamber Great hall Great house Great Rebuilding Great room Great Seljuk architecture Greek Baths Greek Revival architecture Green building Gridshell Grille (architecture) Grillwork Groin vault Grotesque Grotto Gründerzeit Guard stone Guard tower Guastavino tile Guerrilla architecture Gulf house Gutta Gymnasium Gynaeceum H Hachiman-zukuri Hagioscope Haiden Hakka walled village Half tower Hall Hall and parlor house Hall church Hall house Hammerbeam roof Han dynasty tomb architecture Hanover school of architecture Harappan architecture Harling Hasht-behesht Hashti Haubarg Hausa architecture Hawaiian architecture Hay hood Heiden Heimatschutz Heliopolis style Heliotrope Hemadpanti architecture Henry II style Henry IV style Heritage houses in Sydney Heritage structures in Chennai Herma Herodian architecture Heroon Herrerian style Herzog & de Meuron Hexafoil Hexagonal window Hidden roof High-rise building High-tech architecture High Victorian Gothic Hill castle Hillfort Hillforts in Scotland Hillside castle Hilltop castle Hindu and Buddhist architectural heritage of Pakistan Hindu architecture Hindu temple architecture Hip roof Hippodrome Hirairi Hisashi Historic house Historicism History of architectural engineering History of architecture History of domes in South Asia History of early and simple domes History of early modern period domes History of Italian Renaissance domes History of medieval Arabic and Western European domes History of modern period domes History of the world's tallest buildings History of urban planning Hiyoshi-zukuri Hoarding Hogan Hokkien 
architecture Hokora Honden Hood mould Hórreo Horreum Horseshoe arch Hosh Hostile architecture Hôtel particulier House Housebarn House-commune House plan Housing in Azerbaijan Housing in China Housing in Europe Housing in Glasgow Housing in Hong Kong Housing in India Housing in Japan Housing in New Zealand Housing in Pakistan Housing in Portugal Housing in Scotland Housing in Senegal Housing in the United Kingdom Howz Hoysala architecture Huabiao Hui-style architecture Hunky punk Hypaethral Hyphen Hypocaust Hypostyle Hypotrachelium I I-house Iberian pre-Romanesque art and architecture Ice house Icelandic turf house Iconostasis Ideal town Illusionistic ceiling painting Imbrex and tegula Imperial castle Imperial Crown Style Imperial roof decoration Imperial staircase Impluvium Impluvium (house) Impost Inca architecture Indented corners Indian rock-cut architecture Indian vernacular architecture Indies Empire style Indigenous architecture Indo-Corinthian capital Indo-Islamic architecture Indo-Saracenic architecture Industrial architecture Infill wall Inglenook Insula (building) Insula (Roman city) Interactive architecture Intercolumniation Interior architecture Intern architect Intern Architect Program International Gothic International Style International Union of Architects Interstitial space Inverted arch Inverted bell Inverted pyramid Ionic order Ipswich window Iranian architecture Irish round tower Iron railing Irori Isabelline Isfahani style Ishi-no-ma-zukuri Islamic architecture Islamic geometric patterns Island castle Italian Baroque architecture Italian Gothic architecture Italian modern and contemporary architecture Italian Neoclassical architecture Italianate architecture Iwan Izba J Jacal Jack arch Jacobean architecture Jagati Jali Jamaican Georgian architecture Jama masjid Jamb Jamb statue Japan Institute of Architects Japanese architecture Japanese Buddhist architecture Japanese pagoda Japanese wall Japanese-Western Eclectic Architecture Javanese 
traditional house Jeffersonian architecture Jengki style Jesmonite Jettying Jharokha Joglo Jugendstil Jutaku K Kadamba architecture Kagura-den Kairō Kalae house Kalang house Kalinga architecture Kalybe (temple) Karahafu Karamon Kasbah Kasuga-zukuri Kath kuni architecture Katōmado Katsuogi Keep Keystone Khmer architecture Khorasani style Khrushchyovka Kibitsu-zukuri Kinetic architecture King post Kit house Kiva Kliros Knee Knee wall Knotted column Koil Kokoshnik architecture Komainu Konak Korean architecture Korean pagoda Kraton Kremlin Kucheh Kura Kuruwa Kyōzō L L-plan castle Labrum Laconicum Lally column Lamolithic house Lanai Lancet window Landhuis Landscape architect Landscript Lantern tower Latina Lattice tower Latticework Lesene Leuit Levantine Gothic Liberty style Library stack Lierne Lightwell Lime plaster Limes Linenfold Lingnan architecture Linhay Linked house Lintel Listed building Liwan Lobby Loculus Log building Log cabin Log house Loggia Lombard architecture Lombard band London Festival of Architecture Long barrow Long gallery Longhouse Longhouses of the indigenous peoples of North America Lookout Lopo house Lorraine house Louis period styles Louis XIII style Louis XIV style Louis XV style Louis XVI style Louis Philippe style Louver Low-energy house Low German house Lowland castle Low-rise building Lucarne Lunette Lunette (fortification) Luten arch M Maashaus Machiya Machicolation Maenianum Mahal Mahoney tables Main Hall Major town houses of the architect Victor Horta (Brussels) Malay house Maltese Baroque architecture Mamluk architecture Mammisi Mandaloun Mandapa Mannerism Manor house Mansard roof Mansion Mansionization Manueline Manufactured housing Maqam Maqsurah Mar del Plata style Margent Marine architecture Marriage stone Marsh castle Martello tower Martyrium Māru-Gurjara architecture Mas (Provençal farmhouse) Mascaron Mashrabiya Masia Massing Mastaba Master of Architecture Materiality Mathematical tile Mathematics and architecture Mathura lion 
capital Matroneum Mausoleum Maya architecture Mayan Revival architecture Mead hall Meander Medallion Medici villas Medieval architecture Medieval fortification Medieval Serbian architecture Medieval stained glass Medieval turf building in Cronberry Mediterranean Revival architecture Megalithic architectural elements Megaron Megastructure Meitei architecture Membrane structure Memorial gates and arches Mendicant monasteries in Mexico Merlon Merovingian art and architecture Meru tower Mesoamerican architecture Mesoamerican ballcourt Mesoamerican pyramids Metabolism Metaphoric architecture Metope Metroon Mezzanine Miami Modern architecture Microdistrict Mid-century modern Middle German house Mihashira Torii Mihrab Minaret Minimal Traditional Minka Minstrels' gallery Mission Revival architecture Mithraeum Model maker Modern architecture Modern architecture in Athens Modern Greek architecture Moderne architecture Modernisme Modillion Modular building Mokoshi Moldavian style Molding Mole Mon Monaco villas Mondop Monitor Monofora Monolithic architecture Monolithic church Monolithic column Monolithic dome Mono-pitched roof Monopteros Monterey Colonial architecture Monumental sculpture Monumentalism Moon gate Moorish architecture Moorish Revival architecture Moorish Revival architecture in Bosnia and Herzegovina Morava architectural school Moroccan architecture Moroccan riad Moroccan style Morphology Mosaic Mosque Motte-and-bailey castle Motte-and-bailey castles Mozarabic art and architecture Mudéjar Mudéjar architecture of Aragon Mughal architecture Muisca architecture Mullion Mullion wall Multi-family residential Multifoil arch Muntin Muqarnas Muragala Murder hole Musalla Museum architecture Musgum mud huts Myanmar architecture Mycenaean Revival architecture N Nabataean architecture Nagare-zukuri Naiskos Nakazonae Namako wall Nano House Napoleon III style Naqqar khana Narthex Naryshkin Baroque National Aptitude Test in Architecture National Park Service rustic National 
Romantic style Natural building Nave Nazi architecture Neck ditch Neo-Andean Neo-Byzantine architecture in the Russian Empire Neo-eclectic architecture Neo-futurism Neo-Grec Neo-historism Neo-Manueline Neomodern Neo-Mudéjar Neo-Tiwanakan architecture Neoclásico Isabelino Neoclassical architecture Neoclassical architecture in Belgium Neoclassical architecture in Milan Neoclassical architecture in Poland Neoclassical architecture in Russia Neoclassicism in France Neolithic architecture Neolithic long house Neorion New Classical architecture New Formalism (architecture) New Hague School New Indies Style New Khmer Architecture New Mexico vernacular New Objectivity New Spanish Baroque New Urbanism Newa architecture Newar window Newel Niche Nieuwe Zakelijkheid Nightingale floor Nijūmon Nilachal architecture Niōmon Nipa hut Nocturnal architecture Nonbuilding structure types Non-Referential Architecture Nordic Classicism Nordic megalith architecture Norman architecture Norman architecture in Cheshire Norman Revival architecture North light North-Western Italian architecture Novelty architecture Nubian architecture Nubian vault Nuraghe Nymphaeum O Ōbaku Zen architecture Obelisk Observation deck Observation tower Octagon house Octagon on cube Oculus Odeon Oecus Oeil-de-boeuf Ogee Ogive Okinawan architecture Old Frisian farmhouse Old Frisian longhouse Oldest buildings in Scotland One-day votive churches Onigawara Onion dome Open building Open plan Openwork Opisthodomos Opus Opus africanum Opus albarium Opus compositum Opus craticum Opus emplectum Opus gallicum Opus incertum Opus isodomum Opus latericium Opus listatum Opus mixtum Opus quadratum Opus regulatum Opus reticulatum Opus sectile Opus signinum Opus spicatum Opus tessellatum Opus testaceum Opus vermiculatum Opus vittatum Orangery Order Organic architecture Oriel window Origins and architecture of the Taj Mahal Orillon Ornamentalism Orri Orthostates Ottoman architecture Ottoman architecture in Egypt Overhang Overlay 
architecture Ovolo P Padmasana Paduraksa Pagoda Pair-house Pakistani architecture Palace Palaestra Palas Palazzo Palazzo style architecture Palisade church Palladian architecture Palladio Award Pallava art and architecture Palloza Palmette Pandyan art and architecture Paned window Panelák Panelling Panjdari Parabolic arch Paraguayan architecture Parametricism Parapet Parclose screen Pargeting Paris architecture of the Belle Époque Parlour Parthenon Parthian style Parti pris Party wall Parvise Pataliputra capital Patera Patina Patio Patio home Pattern Pattern book Pattern language Paulista School Pavement Pavilion Pavilion (exhibition) Peak ornament Pedestal Pediment Pedimental sculpture Pedway Peel tower Pelmet Pend Pendant vault Pendentive Pendhapa Performative architecture Pergola Peribolos Peripteros Peristasis Peristyle Perpend stone Perron Perserschutt Persian column Peruvian colonial architecture Petrine Baroque Phallic architecture Phenomenology Phiale Philosophy of architecture Piano nobile Pier Pierrotage Pieve Pila Pilae stacks Pilaster Piloti Pinnacle Pit-house Place-of-arms Plafond Plan Plank house Plantagenet style Plateresque Plattenbau Plot plan Pluteus Plyscraper Podium Pointed arch Polifora Polish Cathedral style Polished plaster Polite architecture Polychrome Polychrome brickwork Polygonal fort Polygonal masonry Pombaline style Ponce Creole Pont Street Dutch Porch Portal Portcullis Porte-cochère Portego Portico Porticus Porto School of Architecture Portuguese Architecture Portuguese colonial architecture Portuguese Gothic architecture Portuguese Romanesque architecture Post Post and lintel Post-and-plank Post church Post in ground Postconstructivism Postern Postmodern architecture Poteaux-sur-sol Poupou Prairie School Pranala Prang Prasat Prastara Prefabricated building Prefabricated home Prefabs in the United Kingdom Prehistoric pile dwellings around the Alps Pre-Parsian style Pre-Romanesque art and architecture Pre-war architecture Primitive Hut 
Pritzker Architecture Prize Prodigy house Professional requirements for architects Project architect Promenade architecturale Promontory fort Proportion Propylaea Prospect 100 best modern Scottish buildings Prostyle Prow house Prytaneion Pseudodipteral Pseudoperipteros Pteron Pucca housing Pueblo Deco architecture Pueblo Revival architecture Pullman Pulpitum Pulvino Purism Purlin Puteal Putlog hole Puuc PWA Moderne Pyatthat Pylon Pyramidion Q Qa'a Qadad Qalat Quadrangle Quadrangular castle Quadrant Quadrifora Quarry-faced stone Quarter round Quatrefoil Quattrocento Queen Anne Revival architecture in the United Kingdom Queen Anne style architecture Queen Anne style architecture in the United States Queenslander Quincha Quoin Qutb Shahi architecture R Rafter Raised floor Rampart Ranch-style house Rangkiang Raška architectural school Ratha Rationalism Raygun Gothic Rayonnant Realism Reconstruction Redoubt Reduit Reeding Reflecting pool Refuge castle Regency architecture Regia Regional characteristics of Romanesque churches Reglet Regulating Lines Reinforced concrete column Relief Religious architecture in Belgrade Religious architecture in Novi Sad Renaissance architecture Renaissance Revival architecture Repoblación art and architecture Residence Residential architecture in Historic Cairo Residential architecture in Ibiza Resort architecture Respond Responsive architecture Retaining wall Retractable roof Retrofuturism Retroquire Rhenish helm Revenue house Revivalism Revolving door RIBA Competitions RIBA Journal Ribat Rib vault Richardsonian Romanesque Ridge castle Ridge-post framing Ridge turret Rim joist Rinceau Ringfort Riwaq Rocca Rock castle Rock-cut architecture Rock-cut architecture of Cappadocia Rococo architecture in Portugal Rococo in Spain Roman amphitheatre Roman aqueduct Roman architectural revolution Roman brick Roman bridge Roman canal Roman cistern Roman concrete Roman dams and reservoirs Roman domes Roman shower Roman temple Roman theatre Roman villa 
Romanesque architecture Romanesque architecture in Poland Romanesque architecture in Sardinia Romanesque architecture in Spain Romanesque buildings Romanesque churches in Madrid Romanesque Revival architecture in the United Kingdom Romanesque secular and domestic architecture Romanian architecture Romano-Gothic Rōmon Rondavel Rondocubism Roof comb Roof garden Roof lantern Roofline Roof pitch Roof window Rood screen Room Rorbu Rosette Rose window Roshandan Rostra Rostral column Rota Rotunda Round barn Roundel Roundhouse Round-tower church Royal Gold Medal Royal Institute of British Architects Ruin value Ruins Rumah Gadang Rumah limas Rumah ulu Rumoh Aceh Rundbogenstil Russian architecture Russian church architecture Russian cultural heritage register Russian neoclassical revival Russian Revival architecture Rustication S Sacellum Sacral architecture Saddle roof Saddleback roof Sahn Sail shade Saka guru Sakuji-bugyō Sala Sally port Saltbox house Sand Hills cottage architecture Sandō Sanmon Sarasota School of Architecture Sarnath capital Sasak architecture Sasanian architecture Sash window Scaenae frons Scagliola Scamilli impares Scarsella Schinkel school Scissors truss Sconce Scottish baronial architecture Scottish castles Scottish Vernacular Screened porch Scroll Seattle box Sebil Second Empire architecture in Europe Second Empire architecture in the United States and Canada Secondary suite Secret passage Secular building Sedilia Segmental arch Self-cleaning floor Self-cleaning glass Semi-basement Semi-detached Semi-dome Serbian wooden churches Serbo-Byzantine architecture Serbo-Byzantine Revival Serpentine shape Setback Setchūyō Set-off Sexpartite vault Shabaka Shabestan Shah Jahan period architecture Shallow foundation Shanxi architecture Shear wall Shed style Shell keep Shibi Shinbashira Shinden-zukuri Shingle style architecture Shinmei-zukuri Shinto architecture Shinto shrine Shipping container architecture Shipping container clinic Shitomi Shoebox style 
Shoin-zukuri Shōji Shophouse Shōrō Shotgun house Siberian Baroque Sicilian Baroque Side-deck Side passage plan architecture Sikh architecture Silesian architecture Sill plate Sima Single- and double-pen architecture Single-family detached home Sino-Portuguese architecture Site plan Site-specific architecture Skylight Skyscraper Index Skyway Slab hut Sleeping porch Slenderness ratio Slipcover Sliver building Slow architecture Smoke hole Snout house Sobrado Sociology of architecture Socle Soffit Soft Portuguese style Solar Solar architecture Solar chimney Solarized architectural glass Solomonic column Somali architecture Sōmon Sondergotik Sopo Sōrin Sotoportego Southern Colonial style in California Southern French Gothic Spa architecture Space Space architecture Spatiality Spandrel Spanish architecture Spanish Baroque architecture Spanish Colonial architecture Spanish Colonial Revival architecture Spanish Gothic architecture Spanish Romanesque Sphaeristerium Spire Spire light Spite house Split-level home Springer Spolia Spur Spur castle Squinch Stabilization Staddle stones Stained glass Stair riser Staircase tower Stalinist architecture Stanchion Starchitect State architect State room Stavanger Renaissance Stave church Steeple Step pyramid Stepwell Stepped gable Stick style Stile Umbertino Stillicidium Still room Stilt house Stilt tower Stilts Stoa Stone ender Stoop Storybook house Strap footing Strapwork Streamline Moderne Stripped Classicism Structuralism Structures built by animals Studio apartment Stupa Style Sapin Stylobate Sudatorium Sundanese traditional house Sudano-Sahelian architecture Sukanasa Sukiya-zukuri Sumbanese traditional house Summer architecture Sumiyoshi-zukuri Sunburst Sunken courtyard Sunroom Suntop Homes Superposed order Suprematism Surau Suspensura Sustainable architecture Svan towers Swahili architecture Swiss Chalet Revival architecture Swiss chalet style Symbolism of domes T Taberna Tablinum Tadelakt Taenia Tahōtō Taisha-zukuri Tajug 
Talud-tablero Tambo Tambour Tas-de-charge Tatar mosque Technical drawing Teito Telamon Temazcal Temple Templon Tenaille Tenement Tenshu Tensile structure Tension member Teocalli Tepidarium Term Terrace Terraced house Terraced houses in Australia Terraced houses in the United Kingdom Terreplein Territorial Style Territorial Revival architecture Tessellated roof Tetraconch Tetrapylon Thai temple art and architecture The 20th-Century Architecture of Frank Lloyd Wright Thin-shell structure Tholobate Tholos Three hares Tibetan Buddhist architecture Tidewater architecture Tie Tiltyard Timber framing Timber roof truss Timeline of architectural styles Timeline of architectural styles 1750–1900 Timeline of Art Nouveau Timeline of Italian architecture Tin ceiling Tiny house movement Tokyō Toll castle Tongkonan Tong lau Tulou Torana Torii Torp Totalitarian architecture Tourelle Tower Tower blocks in Great Britain Tower castle Tower house Tower houses in Britain and Ireland Tower houses in the Balkans Townhouse Townhouse (Great Britain) Tracery Trachelium Traditional architecture of Enggano Traditional Chinese house architecture Traditional Korean roof construction Traditional Persian residential architecture Traditional Thai house Traditionalist School Transept Transom Transverse rib Trefoil Trefoil arch Trellis Triadic pyramid Tribune Triclinium Trifora Triforium Triglyph Trilithon Trinitarian steeple Triodetic dome Triquetra Triumphal arch Trombe wall Trompe-l'œil Trophy of arms Trullo Trumeau Truss Truth to materials Truth window Tsumairi The Leeds Look Tudor architecture Tudor Revival architecture Türbe Turret Twig work Two-up two-down Tympanum U Ubaid house Ukrainian architecture Ukrainian Baroque Ultimate bungalow Uma Umayyad architecture Undercroft Unfinished building Universal design Upper Lusatian house Upright and Wing Urban canyon Urban castle Urban design Urban planning Urban planning in ancient Egypt Urban planning in Australia Urban planning in communist 
countries Urban planning in Nazi Germany Usonia Uthland-Frisian house V Vainakh tower architecture Valencian Art Nouveau Valencian Gothic Vancouver Special Vancouverism Vanderbilt houses Vastu shastra Vatadage Vault Velarium Vellar cupola Venereum Venetian door Venetian Gothic architecture Venetian Renaissance architecture Venetian window Venice Biennale of Architecture Ventilation Ventilation shaft Veranda Verify in field Vernacular architecture Vernacular architecture in Norway Vernacular architecture of the Carpathians Vernacular residential architecture of Western Sichuan Vesara Vestibule Viaduct Victorian architecture Victorian house Victorian restoration Victory column Viga Vihāra Vijayanagara architecture Viking ring fortress Villa Villa rustica Vimana Vineyard style Visigothic art and architecture Vitruvian module Vitruvian opening Vitruvian scroll Volume and displacement indicators for an architectural structure Volute Vomitorium Votive column Voussoir W Wada Waldlerhaus Wall Wall dormer Wall footing Walipini Wantilan Wat Watchtower Water castle Watergate Waterleaf Water table Water tower Wattle and daub Wayō Wealden hall house Weavers' cottage Weavers' windows Wedding-cake style Weep Well house Welsh Tower houses Wessobrunner School Western Chalukya architecture Western false front architecture Westwork Wetu Wharenui Whispering gallery Widow's walk Wilhelminism Wind brace Windcatcher Window Window blind Window sill Wing Wing wall Witch window Witches' stones World Architecture Festival World Architecture Survey WPA Rustic Wunderlich X Xylotechnigraphy Xystum Xystus Y Yagura Yakhchāl Yalı Yaodong Yeseria Yett Z Z-plan castle Zakopane Style Zarih Zellige Zenshūyō Zero carbon housing Zero-energy building Zingel Zoomorphic architecture Zoophorus Zvonnitsa Zwinger Lists Architects Architects of supertall buildings Architectural historians Architecture schools Architectural styles Architecture awards Architecture criticism Architecture firms Architecture 
magazines Bizarre buildings Building types Buildings and structures Firsts in architecture Greek and Roman architectural records Historic houses House styles House types Largest domes Nonbuilding structure types Oldest known surviving buildings Professional architecture organizations Tallest buildings Twisted buildings Visionary tall buildings and structures See also Outline of architecture Outline of classical architecture Table of years in architecture Timeline of architecture Glossary of architecture
Index of architecture articles
https://en.wikipedia.org/wiki/Multi-material%203D%20printing
Multi-material 3D printing is the additive manufacturing procedure of using multiple materials at the same time to fabricate an object. Similar to single-material additive manufacturing, it can be realised through methods such as FFF, SLA and Inkjet (material jetting) 3D printing. By expanding the design space to different materials, it establishes the possibility of creating 3D printed objects of different colour or with different material properties like elasticity or solubility.

The first multi-material 3D printer, Fab@Home, became publicly available in 2006. The concept was quickly adopted by the industry, followed by many consumer-ready multi-material 3D printers.

Multi-material 3D printing Technologies

Fused Filament Fabrication (FFF)

Fused Filament Fabrication (also known as Fused Deposition Modeling - FDM) describes the process of continuously extruding a line of thermoplastic material to form a three-dimensional model. The FFF process supports a variety of materials, ranging from biodegradable ones like PLA to PETG, ABS and engineering-grade materials like PEEK. This technology additionally allows for the use of flexible materials like TPU. Two possible solutions to realise a multi-material FFF 3D printer are:

Single Nozzle Design

The single nozzle design combines the different materials before or in the melting zone of the print head, such that the materials are extruded through the same nozzle. For example, the different filaments can be cut and rejoined into a single strand of mixed filament before being fed into the melting chamber. Such a technique is implemented in the Mosaic Palette. Another example is the multi-material upgrade by Prusa3d, which is mounted on top of a single-material printer to add multi-material capabilities. It uses a Bowden-style extrusion system with an additional axis to cut and select the material.
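The cut-and-rejoin approach described above can be sketched as a simple splicing plan: from the ordered material segments a sliced print will consume, compute the lengths to cut from each source filament. This is only an illustration of the idea; the segment lengths and the joint margin are made-up values, not Palette specifics.

```python
# Sketch of Palette-style filament splicing: given the ordered material
# segments a sliced print will consume, compute the pieces to cut from
# each source filament and join into one mixed strand.
# joint_margin_mm is a hypothetical allowance for the splice joint.

def splice_plan(segments, joint_margin_mm=5.0):
    """segments: list of (material_name, length_mm) in print order."""
    plan = []
    for material, length in segments:
        # Each piece is cut slightly longer to absorb the splice joint.
        plan.append((material, length + joint_margin_mm))
    total = sum(length for _, length in plan)
    return plan, total

plan, total = splice_plan([("PLA red", 120.0), ("PLA blue", 40.0), ("PLA red", 200.0)])
print(plan)   # each piece padded by the joint margin
print(total)  # 375.0 mm of spliced filament
```

The spliced strand is then fed to an unmodified single-nozzle extruder, which is what lets such upgrades sit on top of an ordinary single-material printer.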
To prevent impurities inside the object, a combined melting chamber has to be cleared of the previous material before a new one can be used. Depending on the implementation, the amount of waste material produced during the printing process may be significant. In some implementations, the previous material may be used as infill to prevent waste, or to simultaneously print a different object in which colour does not matter.

Multi-Nozzle Design

The multi-nozzle design features a separate nozzle for each material. The nozzles can either be mounted on the same print head or on independent print heads. For this approach to work, the different nozzles have to be calibrated to the exact same height relative to the print surface, to prevent an inactive nozzle from interfering with the printed object. Such a design significantly reduces the amount of waste material during the printing process compared to a single nozzle design that does not use the previous material as infill or to print another object.

Stereolithography (SLA)

Stereolithography is the process of solidifying a photopolymer with a laser, layer by layer, to form a three-dimensional object. To realise multi-material prints with this technology, one can use multiple reservoirs for different photopolymers. A major problem with this approach is the removal of the not yet polymerised material, as the print may contain cavities filled with the old material which have to be emptied before the next material can be used.

The photopolymer resins used for SLA can have highly different physical properties, generally being more brittle and having a lower heat deflection temperature. The standard SLA resins come in different colours and opacities. Besides engineering-grade materials like ABS-like or PP-like resins, there exist biocompatible ones used for medical applications as well as flexible resins.
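The multi-reservoir SLA approach described above can be sketched as a vat-scheduling problem: given the resin required for each layer, count how often the partially cured print must move between trays (each move is also a cleaning/draining step, as noted above). The function and names below are illustrative, not from any particular printer.

```python
# Sketch of multi-vat SLA planning: given the resin required for each
# layer, compute the sequence of trays the print visits and the number
# of tray swaps (each swap also implies cleaning off uncured resin).

def vat_schedule(layers):
    """layers: list of resin names in layer order.
    Returns (swap_count, tray_sequence)."""
    sequence = []
    for resin in layers:
        if not sequence or sequence[-1] != resin:
            sequence.append(resin)
    swaps = len(sequence) - 1   # each change of resin is one tray swap
    return swaps, sequence

swaps, seq = vat_schedule(["clear", "clear", "flexible", "flexible", "clear"])
print(swaps)  # 2
print(seq)    # ['clear', 'flexible', 'clear']
```

Grouping layers by resin where the geometry allows it keeps the swap count, and hence the cleaning overhead, low.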
Material Jetting

The process of material jetting, often also called Inkjet 3D printing, is similar to the 2D inkjet printing procedure. The print head consists of multiple small nozzles which jet droplets of photopolymer on demand. Each nozzle can extrude a different material, which allows for the creation of multi-material parts. The droplets of material are immediately cured using a UV light source mounted on the print head. In contrast to the FFF printing process, a layer is not formed by moving the print head along a pre-calculated path, but by scanning the layer line by line. The Stratasys J750, for example, allows for full-colour prints.

The materials supported by the material jetting process are similar to those of the SLA process and hence share similar properties. Additionally, there have been advances in material jetting of metals by suspending metal nanoparticles in a fluid; after the removal of the support material, the printed object has to be sintered to create a final metal part.

Binder Jetting

A binder jetting 3D printer uses particles of a fine-grained powder, which are fused together using a binder, to form a three-dimensional object. In principle, it consists of two separate chambers: one functions as a reservoir for the powdered material, the other as the printing chamber. To fabricate a layer of an object, a blade pushes the material out of the reservoir and spreads it over the printing surface to create a thin layer of powder. A print head similar to the one found in a 2D inkjet printer then applies the binder to the layer, solidifying it and binding it to the previous one. Although binder jetting does not allow for multi-material support, there exist printers that feature a second print head to apply pigment to the layer after the binder, allowing for full-colour prints.

Workflow

Designing

Designing a three-dimensional object is the first step in the workflow of 3D printing.
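The material annotation used during this design step can be modelled as geometry paired with a material label. The sketch below is a hypothetical minimal data model, not the schema of any particular CAD program or file format.

```python
# Minimal sketch of a multi-material part description: each geometric
# body carries a material annotation, and the part is the combination
# of the annotated bodies. Names and fields are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Body:
    name: str        # identifier of a mesh or solid
    material: str    # material annotation, e.g. "TPU" or "PLA"

@dataclass
class MultiMaterialPart:
    bodies: list = field(default_factory=list)

    def materials(self):
        """Distinct materials the printer must provide, in first-use order."""
        seen = []
        for body in self.bodies:
            if body.material not in seen:
                seen.append(body.material)
        return seen

part = MultiMaterialPart([Body("grip", "TPU"), Body("shell", "PLA"), Body("hinge", "TPU")])
print(part.materials())  # ['TPU', 'PLA']
```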
This design process can be supported by software. Such CAD software is capable of creating, managing and manipulating different 3D geometric figures while giving the user feedback through a graphical interface. Most CAD programs already support the annotation of a geometric figure with a material. The combination of different geometries then forms a single multi-material object. However, not all file formats support the annotation of materials together with the geometry of the object.

Slicing

Slicing is the process of splitting a 3D model into layers in order to transform it into a sequence of G-code instructions. These instructions can be processed by a 3D printer to manufacture the corresponding model in either a bottom-up, top-down or even left-to-right manner. Before generating the instructions, support structures can be added to connect overhanging sections of the model to either the printing surface or other parts of the model. The support structures have to be removed in a post-processing step after the print has finished.

The slicing process for multi-material prints differs depending on the hardware used. For FFF-based machines, instructions for changing the material have to be added. This comes with multiple computational challenges, such as handling two print heads at the same time without them interfering with each other, or clearing the melting chamber of the previous material. For SLA-based multi-material prints, the slicing software has to handle the additional degrees of freedom arising from the possibility of moving the print from one resin tray to the next. The slicing procedure for material jetting printers involves the generation of multiple bitmap images representing the voxels of the object.

Post-Processing

3D printed objects may need to be post-processed before they can be used as a prototype or a finished product.
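The layer-splitting at the heart of the slicing step described above can be sketched as intersecting each mesh triangle with a horizontal plane; the crossing points of the triangle's edges form one line segment of the layer outline. This is a deliberately simplified sketch: it ignores degenerate cases such as triangles or vertices lying exactly in the plane.

```python
# Core of slicing: intersect a mesh triangle with the plane z = h to
# obtain the (x, y) line segment it contributes to that layer's outline.
# Simplified: vertices lying exactly on the plane are not handled.

def slice_triangle(tri, h):
    """tri: three (x, y, z) vertices. Returns the (x, y) segment where
    the plane z = h crosses the triangle, or None if it does not."""
    points = []
    for i in range(3):
        (x1, y1, z1), (x2, y2, z2) = tri[i], tri[(i + 1) % 3]
        if (z1 - h) * (z2 - h) < 0:            # edge crosses the plane
            t = (h - z1) / (z2 - z1)           # linear interpolation factor
            points.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return tuple(points) if len(points) == 2 else None

# A triangle spanning z = 0 to z = 2, sliced at half height:
tri = ((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 0.0, 2.0))
print(slice_triangle(tri, 1.0))  # ((1.0, 0.0), (0.0, 0.0))
```

A real slicer runs this over every triangle for every layer height, chains the resulting segments into closed loops, and only then generates the machine instructions.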
Such post-processing steps may include sanding the surface of the object to make it smoother, or painting it to match the colours of the design. Depending on the printing method and the object's geometry, support structures may have to be removed. The use of multi-material 3D printing reduces the amount of post-processing needed for the same result, as colours can be printed directly. Furthermore, it is possible to use a water-soluble material for printing the support structures, as their removal then only involves placing the object in a water bath.

Applications

Food 3D Printing

The rising trend of food 3D printing supports the customisation of shape, colour, flavour, texture and nutrition of different meals. Multi-material 3D printing enables the use of multiple ingredients like peanut butter, jelly or dough in the printing process, which is essential for the creation of most foods.

Medical Applications

Multi-material 3D printing technology is often used in the production of 3D printed prosthetics. It enables the use of different materials, such as a soft TPU at the contact points with the body and a stiff carbon-fibre material for the corpus of the prosthetic. The prosthetic can therefore be adjusted to suit the varying needs and desires of an individual.

Another medical use case is the generation of artificial tissue structures. This research focuses on creating tissue that mimics human tissue in terms of feel, elasticity and structure. Such artificial tissues can be used by surgeons to train and learn on realistic models, which is otherwise hard or expensive to achieve. Current research also targets 3D printed drug delivery systems that deploy a medication or vaccine efficiently; through multi-material printing, biocompatible structures can be created that interact with the human body on a cellular level.

Physical Properties

The capability of switching between different materials is essential for controlling the physical properties of a 3D printed object.
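The effect of combining a hard and a soft material on stiffness can be roughly estimated with the rule of mixtures (the Voigt upper bound). The moduli below are illustrative round numbers, not datasheet values for any real filament or resin.

```python
# Voigt rule-of-mixtures upper bound for the stiffness of a region
# printed with volume fraction f of the hard material:
#   E = f * E_hard + (1 - f) * E_soft
# Moduli are illustrative round numbers, not datasheet values.

E_HARD_MPA = 3000.0   # e.g. a rigid PLA-like material
E_SOFT_MPA = 30.0     # e.g. a flexible TPU-like material

def voigt_modulus(hard_fraction):
    """Upper-bound composite modulus for the given hard-material fraction."""
    return hard_fraction * E_HARD_MPA + (1.0 - hard_fraction) * E_SOFT_MPA

for f in (0.0, 0.5, 1.0):
    print(f, voigt_modulus(f))  # 30.0, 1515.0 and 3000.0 MPa
```

By varying the local mix of the two materials, a multi-material printer can in this way grade the rigidity of an object between the two extremes.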
Besides being able to manipulate the strength of an object through micro-structures, the user can switch between harder and softer materials in the printing process to affect the rigidity of the object. Combinations of hard and soft materials are also applied to fabricate biomimetic structures with desired properties. The use of materials of different colour or elasticity can affect the look and haptics of the resulting object. Additionally, it is possible to reduce the amount of post-processing needed by choosing a suitable material for the support structures or the outer hull of the part. Rapid Prototyping Multi-material 3D printing enables designers to rapidly manufacture and test their prototypes. The use of multiple materials in a single part lets the designer create functional and visually appealing prototypes. An example of how 3D printing can be included in the design process is automotive design, where it is necessary to quickly test and verify a prototype to get the design approved for production. The reduced post-processing enabled by multi-material 3D printing results in a shorter fabrication time. Additionally, multi-material 3D printing reduces the part count of the produced prototypes compared to traditional fabrication methods like milling or molding, because the assembly of multiple parts with different materials is no longer required. File Formats Multiple file formats exist to represent three-dimensional objects suitable for 3D printing, yet not all of them support defining different materials in the same file as the geometry. The table below lists the most common file formats and their capabilities: References External links 3D Printing Timeline STL 2.0: A Proposal for a Universal Multi-Material Additive Manufacturing File Format 3D Printing File Formats The PLY file format 3D printing Computer printers DIY culture Industrial design Industrial processes
Multi-material 3D printing
[ "Engineering" ]
2,203
[ "Industrial design", "Design engineering", "Design" ]
62,443,960
https://en.wikipedia.org/wiki/Geochemical%20Perspectives%20Letters
Geochemical Perspectives Letters is a peer-reviewed open access scholarly journal publishing original research in geochemistry. It is published by the European Association for Geochemistry. Abstracting and indexing The journal is abstracted and indexed in: References External links Open access journals Academic journals established in 2015 English-language journals Geochemistry journals
Geochemical Perspectives Letters
[ "Chemistry" ]
68
[ "Geochemistry journals", "Geochemistry stubs" ]
62,445,239
https://en.wikipedia.org/wiki/Thuc-Quyen%20Nguyen
Thuc-Quyen Nguyen is Director and Professor at the Center for Polymers and Organic Solids (CPOS), and a professor in the Chemistry & Biochemistry department at the University of California Santa Barbara. Her research focuses on organic electronic devices, using optical, electrical, and structural techniques to understand materials and devices such as photovoltaics, LEDs, photodiodes, and field-effect transistors. Early life and education Professor Nguyen was born in Ban Me Thuot, Vietnam. She was curious from an early age, always trying to understand how things work. Her mother was a math teacher, which inspired her to become a teacher herself. There are four generations of teachers in her family, and as a young child, she went along to her mother's classes as there was no daycare to attend. These experiences sparked her interest in being an effective teacher. In 1991, when she was 21, she moved with her family to the United States, arriving with very little knowledge of English. To improve her language skills and progress through school, she attended three schools at once, going to morning, afternoon, and evening classes. In her first term at Santa Monica College, she took four ESL courses at the same time, and after a year was able to begin normal coursework. After graduating from Santa Monica College in 1995 with an A.S., she began working toward a bachelor's degree at UCLA, while also working in the library in the evenings to help pay for university. She also began working in a plant physiology lab, beginning by washing glassware. She asked for a research internship in the same lab during the summer, but the lab manager turned her down, as did several other labs at UCLA. One professor said "research is not for everyone" and that she should focus on learning English. Research and career Professor Nguyen completed her master's degree in 1998 and her PhD in 2001, both at UCLA.
In her PhD, she processed and studied conducting polymers using ultrafast spectroscopy under the supervision of Professor Benjamin Schwartz. After her PhD, Professor Nguyen worked as a research associate at Columbia University with Professor Louis Brus, and also spent some time at the IBM Thomas J. Watson Research Center. In 2004, Nguyen joined the UCSB Chemistry and Biochemistry department as an assistant professor, and was appointed full professor in 2011. She collaborated with Guillermo Bazan and Alan Heeger at UCSB for many years. Professor Nguyen's current research focuses on organic electronic devices. She studies how chemical structure influences the performance and function of organic devices like PVs, OLEDs, OFETs, and OPDs. She is interested in improving organic solar cells as well as developing flexible electronics. Awards 2005 Office of Naval Research Young Investigator Award. 2006 NSF CAREER award. 2007 Harold Plous Award. 2008 Camille Dreyfus Teacher-Scholar Award. 2009 Alfred P. Sloan Research Fellow. 2010 Alexander von Humboldt Senior Research Award. 2010 American Competitiveness and Innovation Fellowship (ACI). 2015 Alexander von Humboldt Research Award for Senior Scientists. 2015-2019 World's most influential scientific minds.
2016 Fellow of the Royal Society of Chemistry. 2019 Fellow of the American Association for the Advancement of Science (AAAS). 2019 Beaufort Visiting Scholar, St John’s College, Cambridge University. 2019 Hall of Fame, Advanced Materials. 2020 UCSB Outstanding Graduate Student Mentor Award. 2023 Wilhelm Exner Medal. 2023 Elected to the US National Academy of Engineering. 2023 De Gennes Prize for Materials Chemistry (Royal Society of Chemistry). 2023 Fellow of the US National Academy of Inventors. See also VinFuture References External links 1970 births Living people People from Đắk Lắk province University of California, Santa Barbara faculty Women biochemists 21st-century Vietnamese scientists Santa Monica College alumni Fellows of St John's College, Cambridge
Thuc-Quyen Nguyen
[ "Chemistry" ]
777
[ "Biochemists", "Women biochemists" ]
62,445,848
https://en.wikipedia.org/wiki/Medicine%20Hat%20Ocean
The Medicine Hat Ocean is an inferred small ocean basin that closed in the Proterozoic as the Hearne craton and Wyoming craton collided. See also List of ancient oceans Geology of Wyoming Geology of Montana References Historical oceans Oceanography Proterozoic North America Geology of North America
Medicine Hat Ocean
[ "Physics", "Environmental_science" ]
61
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
63,340,225
https://en.wikipedia.org/wiki/Eta2%20Doradus
{{DISPLAYTITLE:Eta2 Doradus}} Eta2 Doradus, Latinized from η2 Doradus, is a star in the southern constellation of Dorado. It is visible to the naked eye as a dim, reddish star with an apparent visual magnitude of 5.01. It is about 580 light years from the Sun, as shown by parallax, and it is receding with a radial velocity of +34.5 km/s. It is circumpolar south of latitude  S. This object is an M-type giant star, with a stellar classification of M2.5III. It has left the main sequence after exhausting its core hydrogen and expanded to around . The star is radiating about 1200 times the Sun's luminosity from its photosphere at an effective temperature of 3726 K. References External links 2004. Starry Night Pro, Version 5.8.4. Imaginova. www.starrynight.com M-type giants 043455 029353 PD-65 00561 Dorado Doradus, Eta2 2245
Eta2 Doradus
[ "Astronomy" ]
231
[ "Dorado", "Constellations" ]
63,340,505
https://en.wikipedia.org/wiki/Adiabatic%20electron%20transfer
In chemistry, adiabatic electron transfer is a type of oxidation-reduction process. The mechanism is ubiquitous in nature, in both the inorganic and biological spheres. Adiabatic electron transfers proceed without making or breaking chemical bonds, and can occur by either optical or thermal mechanisms. Electron transfer during a collision between an oxidant and a reductant occurs adiabatically on a continuous potential energy surface. History Noel Hush is often credited with the formulation of the theory of adiabatic electron transfer. Figure 1 sketches the basic elements of adiabatic electron-transfer theory. Two chemical species (ions, molecules, polymers, protein cofactors, etc.) labelled D (for “donor”) and A (for “acceptor”) become separated by a distance R, whether through collisions, covalent bonding, location in a material, protein or polymer structure, etc. A and D have different chemical environments, and each polarizes its surrounding condensed medium. Electron-transfer theories describe the influence of a variety of parameters on the rate of electron transfer. All electrochemical reactions occur by this mechanism. Adiabatic electron-transfer theory stresses that intricately coupled to such charge transfer is the ability of any D-A system to absorb or emit light. Hence, a fundamental understanding of any electrochemical process demands simultaneous understanding of the optical processes that the system can undergo. Figure 2 sketches what happens if light is absorbed by just one of the chemical species, taken to be the charge donor. This produces an excited state of the donor. As the donor and acceptor are close to each other and to surrounding matter, they experience a coupling . If the free energy change is favorable, this coupling facilitates primary charge separation to produce D+-A−, yielding separated charges. In this way, solar energy is captured and converted to electrical energy.
This process is typical of natural photosynthesis as well as of modern organic photovoltaic and artificial-photosynthesis solar-energy capture devices. The inverse of this process is also used to make organic light-emitting diodes (OLEDs). Adiabatic electron transfer is also relevant to the area of solar energy harvesting. Here, light absorption directly leads to charge separation D+-A−. Hush's theory for this process considers the donor-acceptor coupling , the energy required to rearrange the atoms from their initial geometry to the preferred local geometry and environment polarization of the charge-separated state, and the energy change associated with charge separation. In the weak-coupling limit ( ), Hush showed that the rate of light absorption (and hence charge separation) is given from the Einstein equation by … (1) This theory explained how Prussian blue absorbs light, creating the field of intervalence charge-transfer spectroscopy. Adiabatic electron transfer is also relevant to the Robin-Day classification system, which codifies types of mixed-valence compounds. An iconic system for understanding inner-sphere electron transfer is the mixed-valence Creutz-Taube ion, wherein otherwise equivalent Ru(III) and Ru(II) are linked by a pyrazine. The coupling is not small: charge is not localized on just one chemical species but is shared quantum mechanically between the two Ru centers, presenting classically forbidden half-integral valence states. The critical requirement for this phenomenon is … (2) Adiabatic electron-transfer theory stems from London's approach to charge transfer, and indeed general chemical reactions, applied by Hush using parabolic potential-energy surfaces. Hush himself carried out many theoretical and experimental studies of mixed-valence complexes and long-range electron transfer in biological systems.
Hush's quantum-electronic adiabatic approach to electron transfer was unique; directly connecting with the quantum chemistry concepts of Mulliken, it forms the basis of all modern computational approaches to modeling electron transfer. Its essential feature is that electron transfer can never be regarded as an “instantaneous transition”; instead, the electron is partially transferred at all molecular geometries, with the extent of the transfer being a critical quantum descriptor of all thermal, tunneling, and spectroscopic processes. It also leads seamlessly to an understanding of the electron-transfer transition-state spectroscopy pioneered by Zewail. In adiabatic electron-transfer theory, the ratio is of central importance. In the very strong coupling limit, when Eqn. (2) is satisfied, intrinsically quantum molecules like the Creutz-Taube ion result. Most intervalence spectroscopy occurs in the weak-coupling limit described by Eqn. (1), however. In both natural photosynthesis and artificial solar-energy capture devices, is maximized by minimizing through the use of large molecules like chlorophylls, pentacenes, and conjugated polymers. The coupling can be controlled by controlling the distance R at which charge transfer occurs; the coupling typically decreases exponentially with distance. When electron transfer occurs during collisions of the D and A species, the coupling is typically large and the “adiabatic” limit applies, in which rate constants are given by transition-state theory. In biological applications, however, as well as in some organic conductors and other device materials, R is externally constrained, and the coupling is therefore fixed at low or high values. In these situations, weak-coupling scenarios often become critical. In the weak-coupling (“non-adiabatic”) limit, the activation energy for electron transfer is given by the expression derived independently by Kubo and Toyozawa and by Hush.
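The weak-coupling (non-adiabatic) rate just described can be evaluated numerically. The sketch below assumes the standard classical high-temperature form, k = (2π/ħ)|V|²(4πλk_BT)^(−1/2) exp(−(λ+ΔG°)²/4λk_BT); the parameter values are illustrative choices, not taken from the text:

```python
import math

# Physical constants (SI)
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
KB   = 1.380649e-23      # Boltzmann constant, J/K
EV   = 1.602176634e-19   # J per eV

def nonadiabatic_rate(V_eV, lam_eV, dG_eV, T=298.0):
    """Classical non-adiabatic electron-transfer rate constant (1/s):
       k = (2*pi/hbar) * V^2 * (4*pi*lambda*kB*T)^(-1/2)
           * exp(-(lambda + dG)^2 / (4*lambda*kB*T))
    V: electronic coupling, lam: reorganization energy, dG: driving force
    (all in eV); T: temperature in kelvin."""
    V, lam, dG = V_eV * EV, lam_eV * EV, dG_eV * EV
    prefactor = (2 * math.pi / HBAR) * V**2 / math.sqrt(4 * math.pi * lam * KB * T)
    activation = (lam + dG)**2 / (4 * lam)      # Marcus activation energy
    return prefactor * math.exp(-activation / (KB * T))

# Symmetric reaction (dG = 0): activation energy reduces to lambda/4
k_sym = nonadiabatic_rate(V_eV=0.01, lam_eV=1.0, dG_eV=0.0)
# Barrierless point (dG = -lambda): activation vanishes, rate is maximal
k_opt = nonadiabatic_rate(V_eV=0.01, lam_eV=1.0, dG_eV=-1.0)
print(f"k(dG=0) = {k_sym:.3e} 1/s, k(dG=-lambda) = {k_opt:.3e} 1/s")
```

For a symmetric reaction (ΔG° = 0) the activation energy reduces to λ/4, and the rate peaks at ΔG° = −λ, consistent with the Marcus picture described in the text.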
Using adiabatic electron-transfer theory, in this limit Levich and Dogonadze then determined the electron-tunneling probability, expressing the rate constant for thermal reactions as … (3) This approach is widely applicable to long-range ground-state intramolecular electron transfer, electron transfer in biology, and electron transfer in conducting materials. It also typically controls the rate of charge separation in the excited-state photochemical application described in Figure 2 and related problems. Marcus showed that the activation energy in Eqn. (3) reduces to in the case of symmetric reactions with . In that work, he also derived the standard expression for the solvent contribution to the reorganization energy, making the theory more applicable to practical problems. The use of this solvation description (instead of the form that Hush originally proposed) in approaches spanning the adiabatic and non-adiabatic limits is often termed “Marcus-Hush Theory”. These and other contributions, including the widespread demonstration of the usefulness of Eqn. (3), led to the award of the 1992 Nobel Prize in Chemistry to Marcus. Adiabatic electron-transfer theory is also widely applied in molecular electronics. In particular, it reconnects adiabatic electron-transfer theory with its roots in proton-transfer and hydrogen-atom-transfer theory, leading back to London's theory of general chemical reactions. References Physical chemistry Reaction mechanisms
Adiabatic electron transfer
[ "Physics", "Chemistry" ]
1,410
[ "Reaction mechanisms", "Applied and interdisciplinary physics", "Physical organic chemistry", "nan", "Chemical kinetics", "Physical chemistry" ]
63,342,615
https://en.wikipedia.org/wiki/Remix%20Fuel
REMIX fuel (REgenerated MIXture of U, Pu oxides) was developed in Russia to simplify reprocessing, reuse spent fuel, reduce the consumption of natural uranium, and enable multi-recycling. By comparison, "conventional" MOX (mixed oxide) fuel, as deployed in some western European and East Asian nations, generally consists of depleted uranium mixed with between 4% and 7% reactor-grade plutonium. Only a few Generation II and about half of Generation III reactor designs are MOX-fuel compliant, allowing them to use a 100% MOX fuel load with no safety concerns. Nuclear physics background All moderated reactors using lightly enriched uranium fuel produce plutonium in the course of normal operation, as uranium-238 (typically 94% to 97% of the uranium content in lightly enriched uranium) captures neutrons and undergoes successive beta decays until it is transmuted to plutonium-239. This internally produced plutonium increases in percentage until it is common enough that a growing share of the fission reactions within the fuel actually occur in the plutonium generated during the fuel cycle. Approximately half of the plutonium-239 "bred" during the fuel cycle is fissioned, and another 25% is transmuted through additional neutron capture into other plutonium isotopes, primarily Pu-240. Virtually all of the minor actinides present in spent nuclear fuel are produced by successive neutron capture in the plutonium and as decay products of shorter-lived isotopes. As a consequence of these factors, fresh uranium oxide fuel initially generates all of its fission reactions from U-235, but by the end of the cycle this has shifted to 50% U-235/50% Pu-239 fission reactions. In total, about 33% of the energy generated by uranium fuel over its life cycle actually comes from the bred and consumed Pu-239.
Because the thermal neutron spectrum is not very good for fissioning Pu-239, the fuel shifts from 100% uranium at the start of the cycle to 96% uranium, 1% plutonium and a 3% mixture of transuranic minor actinides and fission products. The longer the fuel remains in the reactor undergoing fission, the more the uranium percentage decreases while the other materials increase. In effect, all power reactors have long been known to be capable of operating with a mixed fissionable core containing 1% reactor-grade plutonium, without issues arising like those caused by the more highly concentrated MOX fuel used in western reactors. Ultimately, the spent fuel is removed from power reactors long before all available "fuel" is actually consumed, as neutron poisons and minor actinides with undesirable properties build up to unacceptable levels and alter the reaction parameters too much. Nuclear reprocessing is primarily done to remove undesirable parts of the spent fuel and either re-use the other parts or store them as waste. Reprocessed uranium, for example, which is derived from spent fuel, usually has a higher uranium-235 content than natural uranium. Process Russia spent nearly a decade developing techniques similar to nuclear pyroprocessing that allow it to reprocess spent nuclear fuel without separating the recycled uranium and plutonium, as is done in the PUREX chemical reprocessing system used to manufacture MOX fuel. Small volumes of enriched uranium are added to this recovered mixture of non-separated uranium and plutonium so that it performs similarly to fuel made only from freshly enriched uranium. After extensive in-reactor testing beginning in 2016, Russia has been deploying REMIX fuel as replacement fuel for its VVER pressurized water reactors since February 2020. Experiments at Balakovo Nuclear Power Plant Balakovo Nuclear Power Plant is used for the pilot program.
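The blending step described above — topping the recovered, non-separated U/Pu mixture with fresh enriched uranium to reach a target fissile content — amounts to a simple linear mass balance. The sketch below uses assumed round-number compositions purely for illustration; they are not actual REMIX specifications:

```python
# Illustrative mass balance for blending a recycled U/Pu mixture with
# fresh enriched uranium.  All percentages are assumed round numbers
# for illustration only -- not actual REMIX fuel specifications.

def blend_fraction(f_recycled, f_fresh, f_target):
    """Weight fraction w of fresh enriched uranium needed so that
       w * f_fresh + (1 - w) * f_recycled == f_target
    where each f is the fissile content of a stream (0..1)."""
    return (f_target - f_recycled) / (f_fresh - f_recycled)

# Assumed fissile contents: recycled mix ~2% (residual U-235 + fissile Pu),
# fresh uranium enriched to ~19%, target ~4% fissile in the final blend.
w = blend_fraction(f_recycled=0.02, f_fresh=0.19, f_target=0.04)
print(f"fresh enriched uranium needed: {w:.1%} of the blend")
```

Solving the same linear balance for different recycled compositions shows how the required top-up of fresh enriched uranium shrinks as the fissile content of the recycled stream rises.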
In December 2024, the third and final 18-month phase of the program started, with the goal of achieving a closed nuclear fuel cycle for VVER reactors. A mixture of enriched uranium with recycled uranium and plutonium recovered from used nuclear fuel of VVER reactors is used instead of standard enriched uranium. After the first two of the three stages, fuel elements were inspected and approved for the third, final stage, which should conclude in 2026, when the fuel will be unloaded and studied further. REMIX fuel has a lower plutonium content, of up to 5%, compared with MOX fuel. References Fuels Fuel production Nuclear reprocessing
Remix Fuel
[ "Chemistry" ]
877
[ "Fuels", "Chemical energy sources" ]
63,343,321
https://en.wikipedia.org/wiki/Local%20history%20book
A local history book (also known as a (rural) farm book or local chronicle; ), is a Norwegian publication genre describing the history and population of one or more rural settlements. Many local history books feature a short history of each farm and a chronology of its owners dating back several generations or centuries. Norwegian local history books have usually been published under the auspices of or in collaboration with the municipality. Such local history books began being published in Norway around 1910 starting with the work of Lorens Berg, but one can trace the roots of the phenomenon back to the topographic literature of the Enlightenment. Local history books can be divided into three main categories, and many local history books contain volumes of several types: General rural and cultural history; Topic-based rural history with chapters on building practices, geology, dialects, school, churches, and the like; and Farm history and genealogy history, where the village is described based on the properties. For each farm, information is provided on its name, property tax, operating statistics, inheritance, division, and ownership or user change. In addition, with some variation, information about the inhabitants from historical archives (from the 15th or 16th century) is often presented by family based on the owner or user of the property. As a rule, a very brief presentation of individual biographies follows. It is common to list years of birth and death, years of marriages, spouses, and possibly destinations of emigration. The oldest local history books only presented farm owners, whereas newer local history books describe entire families and also crofters, tenants, and the homeless. References External links Overview of local history books in Norway at the Slekt1 genealogy research organization List of digitized local history books at the Norwegian Genealogical Society What is a "bygdebok"? 
by Martin Roe Eidhammer Local history Genealogy publications Demography History of agriculture
Local history book
[ "Environmental_science" ]
375
[ "Demography", "Environmental social science" ]