id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
67,834,439 | https://en.wikipedia.org/wiki/Trimeric%20intracellular%20cation-selective%20channel | The trimeric intracellular cation-selective channels or TRIC proteins are a group of homo-trimeric cation channel proteins of ~300 residues in the ER membrane. There are two known TRIC proteins, TRIC-A and TRIC-B.
Channel function
TRICs are permeable to both Na+ and K+ but not divalent cations like Ca2+. They exhibit marked voltage-dependence, becoming more open when the cytosol is more positively charged than the ER lumen.
TRIC-A
TRIC-A is predominantly expressed in excitable tissues including brain and skeletal muscle. TRIC-A activity is thought to support RyR1-mediated efflux of Ca2+ ions from the sarcoplasmic reticulum into the cytosol.
TRIC-B
K+ flux into the ER through TRIC-B is thought to support IP3-induced efflux of Ca2+ ions through IP3-gated Ca2+ channels in the ER membrane.
Clinical significance
TRIC-A has been implicated in the regulation of arterial blood pressure through regulating the excitability of vascular smooth muscle cells. Several single-nucleotide polymorphisms (SNPs) have been identified in close proximity to the TRIC-A locus and may, in future, serve as important biomarkers in the diagnosis of essential hypertension.
Null mutations in TMEM38B encoding TRIC-B are an uncommon but relatively severe cause of autosomal recessive osteogenesis imperfecta or "brittle bone disease".
References
Proteins | Trimeric intracellular cation-selective channel | [
"Chemistry"
] | 325 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
67,834,500 | https://en.wikipedia.org/wiki/Border%20incident | A border incident, also known as cross-border incident, is an event at a border, often in the form of armed clashes.
See also
List of border conflicts
References
Borders | Border incident | [
"Physics"
] | 37 | [
"Spacetime",
"Borders",
"Space"
] |
67,834,566 | https://en.wikipedia.org/wiki/Rurouni%20Kenshin%20%281996%20TV%20series%29 | , sometimes called Samurai X, is a Japanese anime television series, based on Nobuhiro Watsuki's manga series Rurouni Kenshin. It was directed by Kazuhiro Furuhashi, produced by SPE Visual Works and Fuji Television, and animated by Studio Gallop (episodes 1–66) and Studio Deen (episodes 67–95). It was broadcast on Fuji TV from January 1996 to September 1998. Besides an animated feature film, three series of original video animations (OVAs) were also produced; the first adapts stories from the manga that were not featured in the anime series; the second is both a retelling and a sequel to the anime series; and the third was a reimagining of the second story arc of the series.
Sony Pictures Television International produced its own English dub of the series, releasing it as Samurai X in Southeast Asia. Media Blasters later licensed the series in North America and released it on home video from 2000 to 2002. The series was aired in the United States on Cartoon Network's Toonami programming block in 2003, only broadcasting the first 62 episodes.
Rurouni Kenshin has ranked among the 100 most-watched series in Japan multiple times.
A second anime television series adaptation by Liden Films premiered in 2023 on Fuji TV's Noitamina programming block.
Plot
In the 11th year of the Meiji era (1878), the former Ishin Shishi Himura Kenshin wanders around Japan until reaching Tokyo. There, he is attacked by a young woman named Kamiya Kaoru, who believes him to be the Hitokiri Battōsai but ends up forgetting about him upon the appearance of a man claiming to be the Hitokiri Battōsai and tarnishing the name of the swordsmanship school that she teaches. Kenshin decides to help her and defeats the fake Battōsai, revealing himself as the actual former manslayer, who has become a pacifist.
Kaoru invites Kenshin to stay at her dojo, claiming she is not interested in his past. Although Kenshin accepts the invitation, his fame causes him to inadvertently attract other warriors who wish him dead. However, Kenshin also meets new friends, including the young Myōjin Yahiko, who wishes to match his strength but ends up becoming Kaoru's student; the fighter-for-hire Sagara Sanosuke of the Sekihō Army, who realizes the current Kenshin is different from the Ishin Shishi he detested for killing his leader Sagara Sōzō; and the doctor Takani Megumi, who wishes to atone for her sins as a drug dealer, inspired by Kenshin's devotion to atoning for his past.
Production
In a manga volume published prior to the release of the anime, Watsuki said that while some fans might object to the adaptation of the series into anime, he looked forward to the adaptation and felt it would work since the manga was already "anime-esque." He had some worries about the series, since the creation of the anime was sudden and it had a "tight" production schedule. In another note in the same volume, Watsuki added that he had little input in the series, as he was too busy with publishing; in addition, his schedule did not match the schedule of the anime production staff. Watsuki said that it would be impossible to make the anime and manga exactly the same, so he would feel fine with the anime adaptation as long as it took advantage of the strengths of the anime format.
After the anime began production, Watsuki said that the final product was "better than imagined" and that it was created with the "pride and soul of professionals." Watsuki criticized the timing, the "off-the-wall, embarrassing subtitles," and the condensing of the stories; for instance, he felt the Jin-e storyline would not fit sufficiently into two episodes. Watsuki said that he consulted a director and that he felt the anime would improve after that point. Watsuki was disappointed that the CD book voice actors, especially Megumi Ogata and Tomokazu Seki, who had portrayed Kenshin and Sanosuke in the CD books, respectively, did not get their corresponding roles in the anime. Watsuki reported receiving some letters of protest against the voice actor change and letters requesting that Ogata portray Seta Sōjirō; Watsuki said that he wanted Ogata to play Misao and that Ogata would likely find "stubborn girl" roles more challenging than the "pretty boy" roles she usually got, though he felt Ogata would have "no problem" portraying a "stubborn girl." Watsuki said that the new voice actor arrangement "works out" and that he hoped the CD book voice actors would find roles in the anime. Watsuki said that the reason the CD book voice actors did not get the corresponding roles in the anime was that many more companies were involved in the production of the anime than in the production of the CD books, and therefore the "industry power-structure" affected the series.
The second season of the anime television series had some original stories not found in the manga. Watsuki said that some people disliked "TV originals," but to him the concept was "exciting." Watsuki said that because the first half of the original storyline that existed by the time of the production of the tenth volume was "jammed" into the first season, he looked forward to a "more entertaining" second season. Watsuki added that it was obvious that the staff of the first season had "put their hearts and souls" into the work, but that the second season would be "a much better stage for their talents."
In producing the English dub version of the series, Media Blasters initially considered casting a woman as Kenshin, as in the Japanese version, with Mona Marshall considered a finalist for the role. Richard Hayworth was eventually selected, giving Kenshin's character a more masculine voice in the English adaptation; Marshall was also selected to voice the younger Kenshin during flashback scenes. Clark Cheng, Media Blasters' dub script writer, said that localizing Kenshin's unusual speech was a difficult process. His use of de gozaru and oro were not only character trademarks that indicated his state of mind, but important elements of the story. However, neither is directly translatable into English, and in the end the company chose to replace de gozaru with "that I did," "that I am," or "that I do." Kenshin's signature oro was replaced with "huah" to simulate a "funny sound" that had no real meaning. Lex Lang is Sanosuke's voice actor. When writing Sanosuke's dialogue, Cheng noted that the character came across as smarter than he would have liked in the first few episodes, so he gradually changed the character's dialogue to make Sanosuke seem less intelligent and thus more similar to his equivalent in the Japanese version of the series.
Release
Directed by Kazuhiro Furuhashi, Rurouni Kenshin was broadcast for 94 episodes on Fuji TV from January 10, 1996, to September 8, 1998. It was produced by SPE Visual Works and Fuji TV, and animated by Studio Gallop (episodes 1–66) and Studio Deen (episode 67 onwards). The anime only adapts the manga up until the fight with Shishio; from then on, it features original material not included in the manga. The unaired final episode was released on VHS on December 2, 1998. The episodes were collected on 26 VHS sets, released from September 21, 1997, to June 2, 1999; they were later collected on 26 DVD sets, released from June 19, 1999, to March 23, 2000. Three DVD box sets were released from September 5, 2001, to March 20, 2002.
Sony Pictures Television International produced its own English dub of the series, and released it under the name Samurai X in Southeast Asia. Sony attempted and failed to market Samurai X via an existing company in the United States. In October 1999, Media Blasters announced that it had licensed the series, later confirming that it would be released on home video. Media Blasters produced an English dub at Bang Zoom!, and 22 DVDs were released from July 25, 2000, to September 24, 2002. The series later aired in the United States on Cartoon Network, as a part of the Toonami programming block, starting on March 17, 2003, but ended with the 62nd episode, aired on October 18 of that same year. The series was heavily edited for content during its broadcast on Toonami. Media Blasters later split the series into three seasons and released them as three premium DVD box sets from November 18, 2003, to July 27, 2004; they were re-released as "Economy" box sets from November 15, 2005, to February 15, 2006. The series, with both the original Japanese audio and the Media Blasters dub, was available on Netflix from 2016 to 2020.
Soundtracks
The music for the series was composed by Noriyuki Asakura. The first soundtrack album was released on April 1, 1996, containing 23 tracks. The second one, Rurouni Kenshin OST 2 – Departure was released on October 21, 1996, containing 15 tracks. The third one, Rurouni Kenshin OST 3 – Journey to Kyoto, was released on April 21, 1997, containing 13 tracks. The fourth one, Rurouni Kenshin OST 4 – Let it Burn was released on February 1, 1998, containing 12 tracks.
Several compilations of the songs were also released on collection CDs. Thirty tracks were selected and collected on a CD called Rurouni Kenshin – The Director's Collection, released on July 21, 1997. Rurouni Kenshin: Best Theme Collection, containing ten tracks, was released on March 21, 1998. All opening and ending themes were also collected on a CD, titled Rurouni Kenshin – Theme Song Collection, on December 6, 2000. Two character song albums, containing tracks performed by the Japanese voice actors, were released on July 21, 1996, and July 18, 1998. Tracks from all soundtrack albums, including those of the OVAs and films, were collected in Rurouni Kenshin Complete CD-Box, released on September 19, 2002. It contains the four TV OSTs, the two OVA OSTs, the movie OST, the two game OSTs, an opening and closing theme collection, and the two character song albums. On July 27, 2011, Rurouni Kenshin Complete Collection, which includes all the opening and ending themes and the theme song of the animated film, was released.
Related media
Anime film
An anime film, Rurouni Kenshin: The Motion Picture, premiered on December 20, 1997.
Original video animations
A four-episode original video animation (OVA), titled Rurouni Kenshin: Trust & Betrayal, which served as a prequel to the series, was released in 1999.
A two-episode OVA, titled Rurouni Kenshin: Reflection, which served as a sequel to the series, was released from 2001 to 2002.
A two-episode OVA, Rurouni Kenshin: New Kyoto Arc, which remade the series' Kyoto arc, was released from 2011 to 2012.
Reception
On TV Asahi's top 100 most popular anime television series poll, Rurouni Kenshin ranked 66th. TV Asahi also conducted an online web poll, in which the series ranked 62nd. Nearly a year later, TV Asahi once again conducted an online poll for the top one hundred anime, and Rurouni Kenshin advanced in rank, coming in twenty-sixth place. It also ranked tenth in the animation category of the Web's Most Wanted 2005. The fourth DVD of the anime was also Anime Castle's best-selling DVD in October 2001. Rurouni Kenshin was also a finalist in the American Anime Awards in the category "Long Series" but lost to Fullmetal Alchemist. In 2010, Mania.com's Briana Lawrence listed Rurouni Kenshin at number three in the website's "10 Anime Series That Need a Reboot".
The anime was also reviewed by Chris Shepard of Anime News Network (ANN), who noted a well-crafted plot and good action scenes. However, he criticized the early episodes, in which the fights never get quite interesting because it becomes a bit predictable that Kenshin is going to win, with the same victory music repeated many times. Lynzee Loveridge of ANN highlighted it as the best-known series to use the Meiji period and regarded the Kyoto arc as one of its best.
However, Mark A. Grey from the same site mentioned that those negative points disappear during the Kyoto arc thanks to amazing fights and a great soundtrack. Tasha Robinson from SciFi.com remarked that "Kenshin's schizoid personal conflict between his ruthless-killer side and his country-bumpkin side" was a perfect way to develop good stories, which was one of the factors that made the series popular. Anime News Network acclaimed Shishio's characterization with regard to what he represents of Kenshin's past: "a merciless killer who believes his sword to be the only justice in the land." Similarly, Chris Beveridge of Mania Entertainment praised the build-up of the anime's Kyoto arc, as after so much anticipation Shishio fights with skills that would amaze viewers, despite suffering major wounds in the process. Beveridge reflected that while Shishio's death was caused by his old wounds rather than by an attack from Kenshin, the series' protagonist was also pushed to his limits in the story arc by fighting Sojiro and Shinomori before Shishio. Nevertheless, the writer concluded that the arc still paid off, despite assumptions that Shishio's death might initially come across as a copout.
Although Carlos Ross from THEM Anime Reviews also liked the action scenes and storyline, he added that the number of childish and violent scenes makes the show a bit unbalanced, saying it is not recommended for younger children. Daryl Surat of Otaku USA approved of the anime series, stating that while half of the first-season episodes consisted of filler, the situation "clicks" upon the introduction of Saitō Hajime, and that he disagreed with people who disliked the television series compared to the OVAs. Surat said that while the Media Blasters anime dub is "well-cast," the English dub does not sound natural since the producers were too preoccupied with making the voice performances mimic the Japanese performances. Surat said that while he "didn't mind" the first filler arc involving the Christian sect, he could not stomach the final two filler arcs, and that Japanese audiences also disapproved of them. Robin Brenner from Library Journal noted that despite its pacifist messages, Rurouni Kenshin was too violent, recommending it to older audiences.
In the making of the 2019 anime series Dororo, Kazuhiro Furuhashi was selected as its director mainly due to his experience directing Rurouni Kenshin.
Notes
References
Further reading
External links
Rurouni Kenshin
Adventure anime and manga
Anime series based on manga
Anime Works
Aniplex
Fiction set in 1878
Fuji Television original programming
Gallop (studio)
Historical anime and manga
Madman Entertainment anime
Martial arts anime and manga
Meiji era in fiction
Romance anime and manga
Samurai in anime and manga
Studio Deen
Television series by Sony Pictures Television
Television series set in the 1870s
Works about atonement | Rurouni Kenshin (1996 TV series) | [
"Biology"
] | 3,226 | [
"Behavior",
"Works about atonement",
"Works about behavior"
] |
67,835,259 | https://en.wikipedia.org/wiki/NGC%20677 | NGC 677 is an elliptical galaxy located in the constellation Aries. It was discovered on September 25, 1886, by the astronomer Lewis A. Swift. It is located about 200 million light-years (70 megaparsecs) from Earth at the center of a rich galaxy cluster. It has a LINER nucleus.
According to A.M. Garcia, NGC 677 is a member of the NGC 673 Group (also known as LGG 31). This group contains at least 17 galaxies, including IC 156, IC 162, NGC 665, NGC 673, NGC 683, and 11 galaxies from the UGC catalogue.
References
See also
List of NGC objects (1–1000)
Elliptical galaxies
Aries (constellation)
0677
006673 | NGC 677 | [
"Astronomy"
] | 157 | [
"Aries (constellation)",
"Constellations"
] |
67,835,347 | https://en.wikipedia.org/wiki/JUWELS | JUWELS (Jülich Wizard for European Leadership Science) is a supercomputer developed by Atos and hosted by the Jülich Supercomputing Centre (JSC) of the Forschungszentrum Jülich.
Supercomputer
It is capable of a theoretical peak of 70.980 petaflops (for the JUWELS Booster Module) and serves as the replacement for the now out-of-operation JUQUEEN supercomputer. The JUWELS Booster Module was ranked as the seventh fastest supercomputer in the world at its debut on the November 2020 TOP500 list. The Booster Module is part of a modular system architecture, and a second, Xeon-based JUWELS Cluster Module ranked separately as the 44th fastest supercomputer in the world on the same list.
The JUWELS Booster Module uses AMD Epyc processors with Nvidia A100 GPUs for acceleration. The University of Edinburgh contracted a deal to utilise JUWELS to pursue research in the fields of particle physics, astronomy, cosmology and nuclear physics.
In 2021, JUWELS Booster, along with eight other supercomputing systems, participated in the MLPerf HPC training benchmark, a benchmark developed by a consortium of artificial intelligence developers from academia, research labs, and industry that aims to evaluate, without bias, the training and inference performance of hardware, software, and services used for AI. JUWELS has also ranked among the top 15 on the worldwide Green500 list of energy-efficient supercomputers.
The Simulation and Data Laboratory (SimLab) for Climate Science at Forschungszentrum Jülich uses JUWELS to detect gravity waves in the atmosphere, running programs that continuously download and process operational radiance measurements from NASA's data servers.
See also
Computer science
Computing
Supercomputing in Europe
References
External links
Forschungszentrum Jülich website
Supercomputing in Europe
Jülich Research Centre | JUWELS | [
"Technology"
] | 417 | [
"Supercomputing in Europe",
"Supercomputing"
] |
67,838,325 | https://en.wikipedia.org/wiki/Stuart%20Loudon | Stuart Loudon (born 11 April 1988 in Uddingston) is a British rally co-driver and engineer. He was a co-driver to Gus Greensmith at the 2021 Rally Italia Sardegna for M-Sport Ford.
Personal life
Loudon was born in Uddingston. His grandfather is Boyd Tunnock, who owns Tunnock's. Loudon is also a qualified Rolls-Royce aeronautical engineer.
Rally results
* Season still in progress.
References
External links
Stuart Loudon's e-wrc profile
1988 births
Living people
Automotive engineers
British rally co-drivers
People from Uddingston
World Rally Championship co-drivers | Stuart Loudon | [
"Engineering"
] | 129 | [
"Automotive engineering",
"Automotive engineers"
] |
67,838,357 | https://en.wikipedia.org/wiki/Attosecond%20chronoscopy | Attosecond chronoscopy are measurement techniques for attosecond-scale delays of atomic and molecular single photon processes like photoemission and photoionization. Ionization-delay measurements in atomic targets provide information about the timing of the photoelectric effect, resonances, electron correlations, and transport.
Attosecond chronoscopy deals with the time-resolved observation of ultrafast electronic processes in the quantum physics of matter, with applications to atoms, molecules, and solids. Typical time scales covered range from attoseconds (10⁻¹⁸ s) to femtoseconds (10⁻¹⁵ s). Real-time observations of such processes became possible with the availability of well-controlled subfemtosecond laser pulses. Chronoscopy can provide information complementary to that accessible through conventional spectroscopy. While spectroscopy aims at characterizing processes through measurements with the highest possible energy resolution but without time resolution, chronoscopy attempts to capture dynamical aspects of quantum dynamics through high time resolution but with only limited energy resolution. Important applications are non-stationary and decaying states, quantum transport and charge migration, irreversible processes (the "arrow of time"), and the loss of phase information, called decoherence, of a quantum system due to its interaction with the environment.
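The trade-off between time and energy resolution reflects the time–energy uncertainty relation. As a rough, illustrative estimate (not a figure from the sources), a probe pulse of duration Δt ≈ 100 as carries an energy bandwidth of order ΔE ≳ ħ/Δt ≈ (6.6 × 10⁻¹⁶ eV·s)/(10⁻¹⁶ s) ≈ 7 eV, so attosecond time resolution necessarily comes with an energy uncertainty of several electronvolts.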
See also
Attophysics
Bibliography
Time-resolved spectroscopy
Atomic physics
Molecular physics | Attosecond chronoscopy | [
"Physics",
"Chemistry"
] | 288 | [
" and optical physics stubs",
"Molecular physics",
"Spectrum (physical sciences)",
"Quantum mechanics",
"Time-resolved spectroscopy",
"Atomic physics",
" molecular",
"nan",
"Atomic",
"Molecular physics stubs",
"Spectroscopy",
"Physical chemistry stubs",
" and optical physics"
] |
63,508,559 | https://en.wikipedia.org/wiki/Chair%20tiling | In geometry, a chair tiling (or L tiling) is a nonperiodic substitution tiling created from L-tromino prototiles. These prototiles are examples of rep-tiles and so an iterative process of decomposing the L tiles into smaller copies and then rescaling them to their original size can be used to cover patches of the plane. Chair tilings do not possess translational symmetry, i.e., they are examples of nonperiodic tilings, but the chair tiles are not aperiodic tiles since they are not forced to tile nonperiodically by themselves. The trilobite and cross tiles are aperiodic tiles that enforce the chair tiling substitution structure and these tiles have been modified to a simple aperiodic set of tiles using matching rules enforcing the same structure. Barge et al. have computed the Čech cohomology of the chair tiling and it has been shown that chair tilings can also be obtained via a cut-and-project scheme.
References
External links
Tilings Encyclopedia, Chair
Aperiodic tilings | Chair tiling | [
"Physics",
"Mathematics"
] | 228 | [
"Tessellation",
"Geometry",
"Geometry stubs",
"Aperiodic tilings",
"Symmetry"
] |
63,510,049 | https://en.wikipedia.org/wiki/Recovery%20Toolbox | Recovery Toolbox is a family of tools and online services for recovering corrupted files, file formats, and recovering passwords for various programs.
History
Recovery Toolbox was created by Recovery Toolbox, Inc., which has developed software for repairing damaged files since 2003.
Components
The Recovery Toolbox family includes both installable software and web services.
Installable software
Freeware recovery tools include:
Recovery Toolbox for CD Free for repairing data from optical discs, including CDs, HD DVDs and Blu-ray discs, affected by system errors or physically damaged (scratched, exposed to liquids, etc.).
Recovery Toolbox File Undelete for recovering deleted files on HDDs with the NTFS file system (Windows), though it does not work with SSD storage.
Some Recovery Toolbox tools are provided as shareware:
Recovery Toolbox for Flash “undeletes” files from various storage media with FAT file systems: SD, CF, MMC, and other memory cards, smart media cards, IBM MicroDrives, Flash and USB drives, digital cameras, and floppy disks.
Recovery Toolbox for RAR repairs damaged RAR archives, including all formats and compression rates, password-protected archives, and archives stored on corrupted media.
Recovery Toolbox for Excel repairs corrupted Microsoft Excel files while preserving most tabular data, styles, fonts, sheets, formulas, functions, cell colors, borders, etc.
Recovery Toolbox for Outlook repairs data from corrupted Microsoft Outlook's PST and OST files, including emails, contacts, reminders, meetings, tasks, notes, calendar entries, logs, etc.
Web services
Recovery Toolbox web services allow repairing the following file formats:
Adobe file formats: PDF documents and presentations (Adobe Acrobat/PDF Reader), AI image files (Adobe Illustrator), and PSD project files (Adobe Photoshop)
Microsoft Office file formats: Excel spreadsheets, Word documents (including RTF), PowerPoint presentations, and Project files; email formats: PST and OST (Outlook), and DBX (Outlook Express)
Other image file formats: DWG (AutoCAD) and CDR (CorelDraw)
Database formats: ACCDB and MDB (Access), DBF (FoxPro/Clipper/dBase), etc.
References
External links
Official Website
2003 software
Utility software
Computer memory
Windows software
System software
Recovery
Data recovery
Data recovery software
Data recovery companies | Recovery Toolbox | [
"Engineering"
] | 491 | [
"Reliability engineering",
"Backup"
] |
63,510,139 | https://en.wikipedia.org/wiki/KeeWeb | KeeWeb is a free and open-source password manager compatible with KeePass, available as a web version and desktop apps. The underlying file format is KDBX (KeePass database file).
Technology
KeeWeb is written in JavaScript and uses WebCrypto and WebAssembly to process password files in the browser, without uploading them to a server. It can synchronize files with popular file hosting services, such as Dropbox, Google Drive, and OneDrive.
KeeWeb is also available as an Electron bundle which resembles a desktop app. The desktop version adds some features not available on web:
auto-typing passwords
ability to open and save local files
sync to WebDAV without CORS enabled
KeeWeb can also be deployed as a standalone server, or installed as a Nextcloud app.
Reception
KeeWeb was praised by Ghacks Technology News in 2016 as a "brand-new" tool fixing the "shortcoming of a web-based version" of KeePass, and by Tech Advisor in 2020 as a "well-designed cross-platform password manager".
See also
List of password managers
Password manager
Cryptography
References
External links
Android (operating system) software
Cross-platform free software
Cryptographic software
Free password managers
Free software for Linux
Free software for macOS
Free software for Windows
IOS software
Password managers | KeeWeb | [
"Mathematics"
] | 273 | [
"Cryptographic software",
"Mathematical software"
] |
63,510,811 | https://en.wikipedia.org/wiki/Katherine%20Weimer | Katherine Ella Mounce Weimer (April 15, 1919 – April 23, 2000) was a research physicist at the Princeton Plasma Physics Laboratory at the Princeton University. She is known for her scientific research in the field of plasma magnetohydrodynamic equilibrium and contribution to stability theory of a magnetically confined plasma.
Education
Originally from New Jersey, Weimer received a scholarship to Purdue University and got her B.Sc. in chemistry in 1939. She continued her education at Ohio State University, switching her area of interest from chemistry to physics, and received her Ph.D. in physics in 1943. Her thesis was entitled "Artificial Radioactivity of Barium and Lanthanum" and was supervised by Marion Llewellyn Pool.
Katherine Weimer was the first woman to receive a Ph.D. in physics from Ohio State University.
Scientific career
In 1957, Weimer joined the theory group at Princeton Plasma Physics Laboratory. She was the first female research staff member at the laboratory and successfully developed her scientific career for 29 years at PPPL. She conducted fundamental research in the field of plasma equilibrium and magnetohydrodynamic stability in the toroidal magnetic confinement devices, like tokamaks and stellarators. Her work resulted in many important designs of experiments through PPPL, including devices such as the Adiabatic Toroidal Compressor (ATC), Model C Stellarator, and the Poloidal Divertor Experiment (PDX). In 1984, she retired from Princeton University after 29 years at PPPL.
Scientific legacy
The American Physical Society Division of Plasma Physics established the Katherine E. Weimer award in 2001 to "recognize and encourage outstanding achievement in plasma science research by a woman physicist in the early years of her career." The Division of Plasma Physics has historically had lower female representation compared to other divisions, and the award was established to attract and retain more female physicists in the field. The winning physicist receives $4000 and an invitation to speak at the Division of Plasma Physics annual meeting.
References
1919 births
2000 deaths
American women scientists
American women physicists
American physicists
Plasma physicists
Scientists from New Jersey
Ohio State University Graduate School alumni
Purdue University alumni
20th-century American women
20th-century American people | Katherine Weimer | [
"Physics"
] | 444 | [
"Plasma physicists",
"Plasma physics"
] |
63,511,158 | https://en.wikipedia.org/wiki/Predator%3A%20Hunting%20Grounds | Predator: Hunting Grounds is a 2020 multiplayer video game developed by IllFonic and originally published by Sony Interactive Entertainment. The game is part of the Predator franchise, featuring Arnold Schwarzenegger reprising his role as Alan "Dutch" Schaefer (Predator), Alice Braga reprising her role as Isabelle (Predators), and Jake Busey reprising his role as Sean Keyes (The Predator). Set in the remote jungles of the world, it tasks a team of four elite operatives with completing paramilitary operations before a single Predator can find and eliminate them.
Predator: Hunting Grounds was the first Predator video game in a decade, following the Predators-themed mobile games from Angry Mob and Gameloft released in 2010, and the first full title for consoles since 2005's Predator: Concrete Jungle (although several other games featuring the Yautja were released in the interim).
Predator: Hunting Grounds was released for PlayStation 4 and Windows on April 24, 2020. Upon release, the game received mixed reviews from critics. PlayStation 5 and Xbox Series X/S ports were released on October 1, 2024.
Gameplay
Predator: Hunting Grounds is an asymmetrical multiplayer video game taking place in remote jungle locations. One player controls the Predator, while 4 others play as a team of special operations operators known as "Fireteam Voodoo" on a mission to collect intel or eliminate a drug lord until being forced to fight the Predator. The chief element is to either avoid being hunted by the Predator or capture and kill the Predator who in turn will be controlled by the player.
Objectives for Fireteam Voodoo include neutralizing computer-controlled NPC enemies, sabotaging their shipments and retrieving important VIP targets from them, as well as other special tasks. The game's maps offer various tactical opportunities for Fireteam players, from working together as a cohesive unit to splitting their force to reach their objectives. While this element of the game plays out, another player takes control of the Predator and tries to wipe out all of the special forces team members. If the human players manage to kill the Predator, their operation will be taken over by the Other Worldly Life Forms Program (OWLF) and they will be instructed to guard the body against hostiles until they can be extracted.
For the first time in a Predator game, players have the option of playing as a female Yautja.
The game includes a lootcrate system known as "Field Lockers", which are unlocked during gameplay and grant various appearance and weapon customization options for both Fireteam and Predator characters, including the flintlock pistol of Prey's Raphael Adolini. Field Lockers are randomized and can contain duplicate items, in which case the duplicate item will be converted to additional in-game currency. As well as being a reward for increasing in rank, Field Lockers can also be purchased using "Veritanium", a form of in-game currency that can be earned through gameplay or found hidden within the game map. Items may also be purchased directly for Veritanium, although some of the rarer items can be many times more expensive than a single Field Locker.
Two characters from previous Predator films also return in paid DLC: Major Alan "Dutch" Schaefer (reprised by Arnold Schwarzenegger) from the original 1987 Predator and Israeli Defense Forces sniper Isabelle (reprised by Alice Braga) from 2010's Predators. The Yautja Wolf from the Alien vs. Predator franchise is also included as a playable character.
Release
Predator: Hunting Grounds was announced at a PlayStation State of Play livestream in May 2019. It was noted that the game would allow cross-play between PlayStation 4 and Windows. The beta version of the game was released on March 27, 2020, and was available until March 29, with the full game released on April 24. PlayStation 5 and Xbox Series X/S ports were released on October 1, 2024.
Reception
Critical reception
Predator: Hunting Grounds received "mixed or average" reviews from critics, according to review aggregator website Metacritic.
Tomas Franzese of Inverse reviewed the beta calling it the "worst Sony game of this generation", that the "game feels like a mess", visually outdated and unpolished. Jonathon Dornbush of IGN, who also played the trial weekend, noted the excessive wait times to get in to a game and said he hopes that IllFonic "can find a better balance to making the other objectives a bit more interesting".
Sales
In Japan, the PlayStation 4 version of Predator: Hunting Grounds sold 9,172 units, making it the tenth best-selling retail game during its first week of release in the country.
References
External links
2020 video games
Asymmetrical multiplayer video games
First-person shooters
Multiplayer video games
PlayStation 4 games
Predator (franchise) games
Science fiction video games
Sony Interactive Entertainment games
Video games developed in the United States
Video games set in South America
Video games with cross-platform play
Windows games
Unreal Engine 4 games
IllFonic games | Predator: Hunting Grounds | [
"Physics"
] | 1,029 | [
"Asymmetrical multiplayer video games",
"Symmetry",
"Asymmetry"
] |
63,511,900 | https://en.wikipedia.org/wiki/Marasmius%20cohaerens | Marasmius cohaerens is a species of gilled mushroom which is fairly common in European woods.
Description
This section uses the given references throughout.
The matt or slightly felted cap grows from about 1 cm to 3.5 cm, and can be pale brown, yellow brown or chocolate brown, sometimes also with a pink tinge. The shape develops with age from campanulate to flat.
There is no ring or other veil remnant. The stem is about 5 to 9 cm long and up to 0.5 cm in diameter and varies from dark brown at the base to whitish at the top with some ochraceous to reddish colour in the middle. It has a distinctive shiny and horny consistency.
The adnate to almost free gills are quite distant and have a cream to brownish colour with a darker brown edge and there are tiny hairs on the edge which can be seen with a hand-lens.
The taste is mild and there is little smell.
The spores are ellipsoid to almond-shaped and are around 8-10.5 μm by 4–5.5 μm. There are cheilocystidia which take a broadly club-shaped form with finger-like protrusions at the far end; such cells are known as "broom cells of the siccus type" (see Marasmius siccus).
Distribution, habitat, ecology and human impact
This saprobic mushroom grows singly or in small groups on humus and litter in beech forests or with other deciduous trees and (only occasionally) in coniferous forests.
It is widely distributed and fairly common in Europe, and in eastern Asia. It also occurs though rarely in North America, and there other varieties have been identified (the European one being M. cohaerens var. cohaerens).
Naming
This species was originally described by the mycologist Christiaan Hendrik Persoon in 1801 as Agaricus cohaerens. Then, in 1878, in a joint work published in London, Mordecai Cubitt Cooke and Lucien Quélet assigned the current name, which has remained the same for over 100 years.
The Latin epithet cohaerens has the same origin as the English word "coherent" and means "keeping together" (i.e. it is difficult to pull the mushroom apart).
References
Links
cohaerens
Taxa named by Christiaan Hendrik Persoon
Fungus species | Marasmius cohaerens | [
"Biology"
] | 490 | [
"Fungi",
"Fungus species"
] |
63,511,984 | https://en.wikipedia.org/wiki/Be%C5%9Fir%20Fuad | Beşir Fuad (5 February 1887) was an Ottoman soldier, intellectual, and writer during the First Constitutional Era.
He wrote works on science, philosophy, literary criticism and biography. Unlike Tanzimat era intellectuals, who generally subscribed to romanticism, he promulgated realism and naturalism in literature; and positivism in philosophy. He has been called "the first Turkish positivist and naturalist".
His suicide at the age of 35 had wide repercussions in Ottoman society and the press, which were unfamiliar with the concept of suicide until then. His death was reported to have started a suicide epidemic in Istanbul.
Early life and military career
Beşir Fuad was born in Constantinople (modern-day Istanbul) to a family of Georgian descent. He was the son of Habibe Hanım and Hurşid Pasha, who had served as mutasarrif of Marash and Adana.
After graduating from Fatih Highschool, he continued his education at the Aleppo Jesuit School in Syria, where his father was posted. During his stay in Aleppo, he learned French. He graduated from Kuleli Military High School in 1871 and the Ottoman Military Academy in 1873. After graduating, he served as an aide-de-camp to Sultan Abdulaziz for three years. When the Serbian-Ottoman war of 1876-1877 began, he joined the army as a volunteer. Afterwards, he took part in the Russo-Turkish War of 1877–1878 and the suppression of the Cretan revolt of 1878; achieving the rank of bimbashi (lieutenant colonel). He stayed in Crete for several years, and learned English and German during this time.
His marriage to an aunt was arranged when he was very young, he had a son named Mehmet Cemil from this marriage. He divorced a short time later and married Şaziye Hanım, daughter of Salih Pasha, a son of the palace doctor Kadri Pasha. He had two sons from this marriage, Namık Kemal and Mehmed Selim. He also had a daughter named Feride, born to a French mistress.
Career as a writer
Beşir Fuad was interested in science and philosophy, and thanks to his knowledge of English, French and German, he was able to keep up with Western intellectual and artistic developments. He started his career as a writer in 1883 by translating articles for the Envâr-ı Zekâ magazine. He left the military in 1884, and from then he devoted himself entirely to writing.
He published over 200 articles on science, philosophy, language learning and the military, as well as reviews of theatrical plays. During his short writing career, he also published 16 books, and introduced Western figures such as Émile Zola, Alphonse Daudet, Charles Dickens, Gustave Flaubert, Auguste Comte, Karl Georg Büchner, Herbert Spencer, Jean le Rond d'Alembert, Julien Offray de La Mettrie, Diderot, Claude Bernard and Gabriel Tarde to the Ottoman audience.
He published the magazine Hâver, later Güneş, which ran for 12 issues. He wrote the editorials of Ceride-i Havadis for a month and a half. After the closure of that newspaper, he wrote articles for Tercüman-ı Adalet and Saadet.
Despite not writing any literature himself, he engaged in literary criticism, often contradicting the dominant views in the Ottoman Empire at the time. He defended the power and value of science and philosophy against the Romantic writers of the period, engaging in fierce arguments with Mehmet Tahir and Namık Kemal. He expressed his thoughts on art and philosophy in his work Intikad, which includes his correspondence with Muallim Naci. Upon the death of Victor Hugo in 1885, he wrote a small book about him. This work is considered the first critical monograph written in the history of Turkish literature. In another monograph on Voltaire, he defended positivism.
Death
His son Namık Kemal died of scarlet fever at the age of one and a half in 1885, and Beşir Fuad could not get over the loss. After his mother, who suffered from a mental illness, died in March 1886, he began to worry because he thought the disease could be hereditary. He turned to nightlife and had several mistresses, being torn between them and his wife. He had a daughter, Feride, from a French mistress. He fell into financial difficulties after spending his father's inheritance, and decided to kill himself. He took the decision two years before he carried it out, motivated also by his disbelief in an afterlife.
He committed suicide on February 5, 1887 by cutting his wrists in his house. He first injected himself with cocaine to relieve the pain, and then cut his wrists. He took notes while he remained conscious, regarding his suicide as a scientific experiment:I performed my operation and did not feel any pain. It hurts a little as the blood flows out. My sister-in-law came downstairs while the blood was flowing. I told her I shut the door because I was writing and send her back. Fortunately she did not come in. I cannot think of a sweeter death than this. I raised my arm like fury to let the blood out. I started to feel dizzy...
He had intended to donate his body to the Imperial School of Medicine, but he was buried in Eyüp Cemetery instead. His tomb was later lost. Beşir Fuad's death was widely reported in the press. Since suicide was a rarely discussed topic in the Ottoman Empire, it was reported as starting a suicide epidemic in Istanbul. Subsequently, Abdul Hamid II's government banned newspapers from publishing news involving suicide.
References
1850s births
1887 deaths
1880s suicides
19th-century journalists from the Ottoman Empire
Opinion journalists
19th-century translators
Positivists
Materialists
Journalists from Istanbul
Political people from the Ottoman Empire
Philosophers from the Ottoman Empire
Georgians from the Ottoman Empire
Suicides in the Ottoman Empire
Suicides by sharp instrument in Turkey
Kuleli Military High School alumni
Ottoman Military Academy alumni
Ottoman Army officers
Serbian–Turkish Wars (1876–1878)
Ottoman military personnel of the Russo-Turkish War (1877–1878)
Burials at Eyüp Cemetery
Military personnel who died by suicide | Beşir Fuad | [
"Physics"
] | 1,288 | [
"Materialism",
"Matter",
"Materialists"
] |
63,513,617 | https://en.wikipedia.org/wiki/Pioneering%20Women%20in%20American%20Mathematics | Pioneering Women in American Mathematics: The Pre-1940 PhD's is a book on women in mathematics. It was written by Judy Green and Jeanne LaDuke, based on a long study beginning in 1978, and was published in 2009 by the American Mathematical Society and London Mathematical Society as volume 34 in their joint History of Mathematics series. Unlike many previous works on the topic, it aims at encyclopedic coverage of women in mathematics in the pre-World War II United States, rather than focusing only on the biographies of individual women or on collecting stories of only the most famous women in mathematics. The Basic Library List Committee of the Mathematical Association of America has strongly recommended its inclusion in undergraduate mathematics libraries.
Topics
The first part of the book discusses the institutions that granted doctorates to women in mathematics before 1940, and the milieu in which they operated, including typical practices of the time that demanded that women resign upon marriage, that forbade institutions from hiring wives or other relatives of their male faculty, or that in some cases prevented women who had done all the work for a graduate degree from being granted one. It also discusses the patterns the authors found in these women's lives, including the discovery that their life expectancies were higher than typical for their time. Its eight chapters include material on the family background of the subjects, their undergraduate and graduate education, hiring and careers, and their contributions to mathematics.
The second part of the book provides biographical profiles of every woman that the authors could identify as having earned a doctorate in mathematics in the US before 1940, as well as four American women who earned doctorates abroad, giving 228 in all. The typical biography in this section is approximately 2/3 of a page to a page in length, with information drawn from reference works, review journals, and archival material as well as interviews with the subjects still living at the time of the study. The 1940 cutoff for the biographies in the book represents both a time of "a precipitous drop in enrollment" for women in mathematics, and the starting time for two previous studies on women in mathematics and science by Margaret A. M. Murray and Margaret W. Rossiter. The rate of doctorates given to women in the period covered by the book, approximately 14%, would not be reached again until the 1980s.
A companion web site provides additional information on the subjects of the book, and can be considered as a third and "potentially most valuable" section of the book itself.
Audience and reception
This book is readable by a general audience, but reviewer Charles Ashbacher writes that "only people deeply interested in the history of mathematics, particularly in the role of women, will find it a critical read", and suggests that the second half should be used as reference material rather than read straight through. Reviewer Amy Shell-Gellasch agrees, writing "It is intended as a reference, not necessarily as a book to sit down and read." Reviewer Silke Göbel adds that, beyond mathematics, the book will also be of interest to sociologists.
Ashbacher rates the book as "an excellent resource for information in this area". Despite calling it "a labor of love" and "an important contribution", Shell-Gellasch writes that she was "disappointed by the lack of references" in the book, although significantly more references can be found in the companion web site. In contrast, reviewer Andrea Blunck calls the book "really fascinating", writing that she was "surprised to learn how numerous" these women were, "and how different yet how similar their lives and careers were". And reviewer Margaret A. M. Murray calls the book "spectacular" and "a stunning historical achievement", writing that because of it "we now know more about this first cohort of American women mathematicians than we know about any cohort of mathematicians, male or female."
Notable mentions
Mary Nicholas Arnoldy
Grace Hopper
M. Henrietta Reilly
References
External links
Additional Material for the Book, American Mathematical Society
Women in mathematics
Biographies and autobiographies of mathematicians
2009 non-fiction books | Pioneering Women in American Mathematics | [
"Technology"
] | 827 | [
"Women in science and technology",
"Women in mathematics"
] |
63,513,954 | https://en.wikipedia.org/wiki/NGC%20766 | NGC 766 is an elliptical galaxy located in the Pisces constellation about 362 million light years from the Milky Way. It was discovered by British astronomer John Herschel in 1828.
Due to NGC 766 being situated close to the celestial equator, it is at least partly visible from both hemispheres at certain times of the year.
See also
List of NGC objects (1–1000)
John Herschel
References
External links
Elliptical galaxies
766
Pisces (constellation)
007468 | NGC 766 | [
"Astronomy"
] | 99 | [
"Pisces (constellation)",
"Constellations"
] |
63,513,992 | https://en.wikipedia.org/wiki/NGC%20767 | NGC 767 is a barred spiral galaxy located in the constellation Cetus about 241 million light years from the Milky Way. It was discovered by the American astronomer Francis Leavenworth in 1886.
One supernova has been observed in NGC 767: SN 2019lre (type II, mag. 19.2).
See also
List of NGC objects (1–1000)
References
External links
Barred spiral galaxies
Cetus
0767
007483
Discoveries by Francis Leavenworth
Astronomical objects discovered in 1886 | NGC 767 | [
"Astronomy"
] | 105 | [
"Cetus",
"Constellations"
] |
63,514,033 | https://en.wikipedia.org/wiki/NGC%20768 | NGC 768 is a barred spiral galaxy located in the constellation Cetus about 314 million light years from the Milky Way. It was discovered by the American astronomer Lewis Swift in 1885.
See also
List of NGC objects (1–1000)
References
External links
Barred spiral galaxies
0768
Cetus
007465 | NGC 768 | [
"Astronomy"
] | 64 | [
"Cetus",
"Constellations"
] |
63,514,080 | https://en.wikipedia.org/wiki/NGC%20769 | NGC 769 is a spiral galaxy located in the constellation Triangulum about 197 million light years from the Milky Way. It was discovered by the American astronomer Truman Safford in 1866.
See also
List of NGC objects (1–1000)
References
External links
Spiral galaxies
Triangulum
769
007537 | NGC 769 | [
"Astronomy"
] | 64 | [
"Triangulum",
"Constellations"
] |
63,514,300 | https://en.wikipedia.org/wiki/Indian%20Vaccination%20Act%20of%201832 | Indian Vaccination Act of 1832 is a US federal law passed by the US Congress in 1832. The purpose of the act was to vaccinate the American Indians against smallpox to prevent the spread of the disease.
History
The act was first passed on May 5, 1832. Lewis Cass, Secretary of War, designed the act. Members of Congress appropriated US$12,000 to vaccinate them. By February 1, 1833, more than 17,000 Indians had been vaccinated.
Congress allocated $12,000 for the entire program, to be administered by Indian agents and sub-agents. Some US Army surgeons refused to participate because of the lack of funds, leaving the agents themselves and others with no medical training to produce and administer the vaccines. However, not everyone was included: the Mandan, Hidatsa, and Arikara had been excluded from the act, and as a result, a few years later, smallpox killed 90% of the Mandan.
References
See also
1721 Boston smallpox outbreak
1738–1739 North Carolina smallpox epidemic
1770s Pacific Northwest smallpox epidemic
1775–1782 North American smallpox epidemic
1837 Great Plains smallpox epidemic
1862 Pacific Northwest smallpox epidemic
Plenipotentiary letters regarding smallpox in Colonial America
General Court of Massachusetts Province Laws
Audio media archive
External links
1832 in American law
22nd United States Congress
Public health in the United States
Smallpox eradication
Smallpox in the United States
Smallpox vaccines
United States federal Native American legislation
Vaccination law
Vaccination in the United States | Indian Vaccination Act of 1832 | [
"Biology"
] | 304 | [
"Biotechnology law",
"Vaccination law",
"Vaccination"
] |
63,514,425 | https://en.wikipedia.org/wiki/Praseodymium%28IV%29%20oxide | Praseodymium(IV) oxide is an inorganic compound with chemical formula PrO2.
Production
Praseodymium(IV) oxide can be produced by boiling Pr6O11 in water or acetic acid:
Pr6O11 + 3 H2O → 4 PrO2 + 2 Pr(OH)3
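The stoichiometry reflects the mixed-valence nature of Pr6O11, which can be written as 4PrO2·Pr2O3, i.e. four Pr(IV) and two Pr(III) centres per formula unit: the Pr(IV) fraction remains as PrO2, while the Pr(III) fraction is converted by water to Pr(OH)3, giving the 4:2 product ratio in the equation above.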
Chemical reactions
Praseodymium(IV) oxide starts to decompose at 320–360 °C, liberating oxygen.
References
Praseodymium compounds
Oxides
Fluorite crystal structure | Praseodymium(IV) oxide | [
"Chemistry"
] | 108 | [
"Oxides",
"Salts"
] |
63,516,197 | https://en.wikipedia.org/wiki/Database%20repair | The problem of database repair is a question about relational databases which has been studied in database theory, and which is a particular kind of data cleansing. The problem asks about how we can "repair" an input relational database in order to make it satisfy integrity constraints. The goal of the problem is to be able to work with data that is "dirty", i.e., does not satisfy the right integrity constraints, by reasoning about all possible repairs of the data, i.e., all possible ways to change the data to make it satisfy the integrity constraints, without committing to a specific choice.
Several variations of the problem exist, depending on:
what we intend to figure out about the dirty data: figuring out if some database tuple is certain (i.e., is in every repaired database), figuring out if some query answer is certain (i.e., the answer is returned when evaluating the query on every repaired database)
which kinds of ways are allowed to repair the database: can we insert new facts, remove facts (so-called subset repairs), and so on
which repaired databases do we study: those where we only change a minimal subset of the database tuples (e.g., minimal subset repairs), those where we only change a minimal number of database tuples (e.g., minimal cardinality repairs)
The problem of database repair has been studied to understand what is the complexity of these different problem variants, i.e., can we efficiently determine information about the state of the repairs, without explicitly materializing all of these repairs.
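As a small, concrete illustration (a minimal sketch; the relation, the key constraint, and the query are invented for this example), the following script enumerates the subset repairs of a relation that violates a key constraint and computes the certain answers of a simple query, i.e. the answers returned on every repair.
```python
from itertools import product

# A "dirty" relation Emp(name, city) with the integrity constraint that `name` is a key.
# The instance violates the key: "alice" appears with two different cities.
emp = [
    ("alice", "paris"),
    ("alice", "rome"),
    ("bob", "berlin"),
]

def subset_repairs(rel):
    """All maximal subsets of `rel` that satisfy the key on the first attribute.

    Under a key constraint, every maximal consistent subset keeps exactly one tuple
    per key value, so the repairs are the cartesian product of the key groups.
    """
    groups = {}
    for t in rel:
        groups.setdefault(t[0], []).append(t)
    return [set(choice) for choice in product(*groups.values())]

def query(rel):
    """Example query: the set of cities in which somebody works."""
    return {city for _, city in rel}

repairs = subset_repairs(emp)
answers = [query(r) for r in repairs]

certain = set.intersection(*answers)   # returned by every repair
possible = set.union(*answers)         # returned by at least one repair

print("repairs:", repairs)    # two repairs: keep (alice, paris) or (alice, rome)
print("certain:", certain)    # {'berlin'} -- bob's tuple survives every repair
print("possible:", possible)  # {'berlin', 'paris', 'rome'}
```
In this instance the two repairs correspond to the two possible resolutions of the key violation; "berlin" is a certain answer because it is returned on both repairs, while "paris" and "rome" are only possible answers. The brute-force enumeration used here is exponential in general, which is exactly why the complexity of reasoning over repairs without materializing them is studied.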
References
See also
Probabilistic database
Database theory | Database repair | [
"Technology"
] | 333 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
63,517,158 | https://en.wikipedia.org/wiki/Roborock | Roborock (also known as Beijing Roborock Technology Co. Ltd.; ) is a Chinese consumer goods company known for its robotic sweeping and mopping devices and handheld cordless stick vacuums. Xiaomi played a key role in the company's founding.
History
Beijing Roborock Technology Co. Ltd. was founded in 2014 in Beijing, China. Its launch was largely supported by Xiaomi. The company raised about $640 million in its February 2020 IPO, and the company had annual revenue of approximately CNY 4.5 billion as of August 2021.
Roborock currently trades on Beijing's STAR market.
Products
Newer models in Roborock's "S" line of robotic floor cleaning devices have an obstacle avoidance system which uses dual cameras and a microprocessor to discern objects as small as 5 cm wide by 3 cm high. As the cleaners move about a space they create a schematic map, marking objects to be avoided later.
Roborock has previously claimed that their floor cleaning devices do not store images or upload them to the cloud, and that all captured images are immediately deleted after processing.
Roborock introduced ReactiveAI 2.0 with the release of the Roborock S7 MaxV. It has an RGB camera and 3D structured light scanning with a new neural processor for improved object recognition regardless of lighting conditions.
In addition to their front-mounted cameras, newer Roborock floor cleaning devices use top-mounted LIDAR to map rooms. Using an app, users can set off-limits areas to ensure the device does not clean there. Users can also set "no-mop" areas where the device may vacuum but not mop.
Roborock Q7 Max, released in 2022, generates 4,200 Pa suction, and can be controlled by Alexa, Siri, or Google Assistant.
In 2023, Roborock released the S8, S8 Plus and S8 Pro Ultra. The main difference between the models is the docking station each includes. The S8 has a standard charging base whereas the S8 Plus includes an Auto-Empty Dock. The S8 Pro Ultra ships with the RockDock Ultra, the most advanced dock Roborock offers. In addition to emptying the S8's dustbin and charging the robot, the dock also manages the S8's mopping system including refilling its water and drying its mop pad. The S8 Pro Ultra is the first Roborock robot vacuum with lifting dual brushrolls.
The S8 and S8 Plus have dual brushrolls but they do not lift. All models which precede the S8 have a single brushroll.
Roborock S7 MaxV Ultra has 5,100 Pa suction and a livestreaming camera. Roborock S7, which debuted at CES 2021, uses trademarked VibraRise technology. Roborock S7 can detect the type of floor to decide whether to use its mop or vacuum. The Roborock S6 MaxV operates at 67 dB and generates maximum suction of 2,500 Pa. Its dustbin measures 460 mL at full capacity. It can vacuum approximately 250 square meters between charges, and its mop can cover about 200 square meters of hard flooring on the same charge. The Roborock S4 does not mop.
In 2022, Roborock released the Q5 which replaces the S models, and is similar to the S4 Max and the S5. The Q5 has a higher suction power but lacks the mop feature.
References
External links
Xiaomi
Companies based in Beijing
Robotic vacuum cleaners
Home automation
Chinese companies established in 2014
Vacuum cleaner manufacturers
Chinese brands | Roborock | [
"Technology"
] | 757 | [
"Home automation",
"Home automation companies"
] |
58,295,661 | https://en.wikipedia.org/wiki/Eleftherios%20Economou | Eleftherios N. Economou (born 7 February 1940) is a Greek theoretical physicist and professor emeritus at the department of physics of the University of Crete. He has contributed to various areas of theoretical condensed matter physics, starting with the study of surface plasmons during his thesis in 1969.
Economou has influenced the evolution of theoretical physics in Greece since the late 1970s, as described in detail in a special volume published in 2000, in Physica B, on the occasion of his 60th birthday. His perspective is still solicited by major science journals, such as Nature Materials, in particular on how to address challenges related to the effects of the economic crisis in Greece on science and technology. He contributed substantially to shaping the first steps of the University of Crete and led the effort in creating the Foundation for Research & Technology – Hellas (FORTH), serving as its first director general from its foundation in 1983 until 2004. He has been teaching in the department of physics of the University of Crete since 1982 and also wrote 13 textbooks, mostly on topics related to theoretical physics and condensed matter physics. He has published over 250 refereed papers, which have received more than 23,000 citations according to Google Scholar.
Early years
Eleftherios Economou was born in Athens, Greece on February 7, 1940. He grew up in a working-class neighbourhood of Kallithea and his early years, along with his parents Nikos and Sophia and his younger brother Vassilis, were influenced by World War II and in particular by the Greek Civil War that followed. In 1952 he was admitted to the selective public Experimental School of the University of Athens, from where he graduated in 1958. The same year he was admitted to the School of Electrical & Mechanical Engineering of the National Technical University of Athens. Among his classmates was John Iliopoulos, a theoretical physicist. He graduated with first class honours in 1963 and commenced the military service that was compulsory in Greece, which at the time lasted two and a half years. During this period, he was given the opportunity to attend a number of graduate level physics courses at the "Center for Advanced Studies and Philosophy of Science" which was organised at the National Centre of Scientific Research "Demokritos".
In 1965 Economou applied for graduate studies in the U.S. and was admitted to the department of physics of the University of Chicago. Before leaving for the U.S., in February 1966, he married Athanasia Paganou and they have a daughter, Sophia Economou, who is professor of physics at Virginia Polytechnic Institute and State University. He arrived in Chicago in 1966. Two months later he passed the qualifying exams, ranking first among his classmates, which made it easy to be accepted as a PhD student in the group of Morrel H. Cohen. Economou decided to focus on theoretical condensed matter physics. He completed his studies and was awarded a PhD in three years with a dissertation entitled, "Surface Plasmons in Thin Films" in 1969.
Scientific career
Economou became assistant professor and later professor in the department of physics at the University of Virginia (1970–1981). He has been visiting professor at the University of Chicago (1994), Université de Lausanne (1992), Princeton University (1978), Iowa State University (1991–1992) and an affiliated member of Ames Laboratory since 1992. He was visiting researcher at the National Centre of Scientific Research "Demokritos" (1997–1998) and a Chair Professor of Theoretical Physics at the University of Athens (1978–1981). In 1981 he and P. Lambropoulos were the first elected professors in the newly founded department of physics of the University of Crete (Greece). As the first chairman of the department he established the bases in the Physics curriculum and he was instrumental in hiring high quality faculty at the University of Crete. He was and remains a strong advocate on topics related to the negative influence of political parties and direct involvement of students in the normal operation of the universities. He retired in 2007, and still serves and teaches in the department of physics as professor emeritus.
Economou was the prominent figure of a group of five Greek scientists from abroad, namely Fotis Kafatos, Dionysios (Dennis) Tsichritzis, Grigoris Sifakis and Peter (Panagiotis) Lambropoulos, who planned the idea and with the help of the Minister of Research and Technology Georgios Lianis, convinced the Greek Government to create the first three Institutes of the Research Center of Crete (RCC; Ερευνητικό Κέντρο Κρήτης – ΕΚΕΚ) in Heraklion. During his leadership as its first director general (1983–2004), RCC expanded with the Institute of Mediterranean Studies (IMS) in Rethymno and the Institute of Applied Computational Mathematics in Heraklion. In 1986 Skinakas Observatory, jointly supported by RCC, the University of Crete and the Max Planck Institute for Extraterrestrial Physics (Germany) also commenced its operations. In 1987 with the agreement of George Papatheodorou and Iacovos Vasalos Directors of the Institute of Chemical Engineering & High Temperature Processes - (ICE/HT) in Patras, and the Chemical Process Engineering Research Institute (CPERI) Thessaloniki, the two Institutes joined RCC and the Foundation for Research & Technology – Hellas (FORTH) was created. In 2000 CPERI was renamed Centre for Research and Technology Hellas (CERTH) becoming independent of FORTH. In 2002 the Biomedical Research Institute (BRI), based in Ioannina was incorporated into FORTH. In parallel, with financial support from the European Union, the construction of the FORTH buildings in Heraklion, Patras, Thessaloniki, and the restoration of the IMS building in Rethymno began. Science and Technology Parks were established connected to the institutes in Heraklion, Patras, and Thessaloniki and respective buildings with European funding were erected. The main FORTH building infrastructure in Heraklion in 2004 had an area of 30,000 square meters. Most of the buildings were designed and supervised by Panos Koulermos, then a professor of architecture at the University of Southern California. FORTH has been established as the premier research organisation in Greece, ranking consistently first in scientific quality and international recognition by a variety of metrics, including evaluations by external Committees as well as attracting funding by European Research Council grants. Achieving this status was not easy, as several times the supervising agencies attempted to introduce political criteria and to intervene.
Economou stepped down from the position of director general of FORTH in April 2004, and he was succeeded by Stelios C. Orphanoudakis. Three years later, in 2007, he reached the compulsory retirement age of 67 at the university. He was awarded the title of emeritus professor at the University of Crete and he continues to teach and remains active in research at the department of physics.
Research areas
Economou worked in a broad range of topics in the area of condensed matter physics. These topics include electronic properties of many different materials and systems, with emphasis on systems with defects (crystallographic defects) and disordered systems (e.g. amorphous semiconductors), magnetic and optical properties of different materials, including superconductors and strongly correlated materials, surface plasmons and their interactions in metals and semiconductors, electron-phonon interactions, non-linear systems and properties, acoustic wave and elastic wave propagation in random and periodic media (e.g. phononic crystals), and electromagnetic wave propagation in complex systems, with emphasis on photonic crystals and metamaterials.
Since 1990 Economou's research has mainly focused on electromagnetic and acoustic/elastic wave propagation in complex systems. He was one of the initiators of the field of phononic crystals (i.e. acoustic/elastic wave band gap materials), which led to the more recent and wider field of acoustic metamaterials. His 1992 publication "Elastic and acoustic wave band structure" was one of the two (almost simultaneous) publications discussing for the first time the concept of the acoustic band gap. This paper was followed by many other well-recognized works by Economou in the field of phononic crystals and elastic wave propagation in complex systems.
In the field of electromagnetic metamaterials, Economou's research helped to eliminate some of the first objections of the scientific community on the possibility of the existence of Negative-index metamaterials, revealing the possibility and limitations for the achievement of negative refractive index in the optical spectrum, and demonstrating some of the unique properties and capabilities of metamaterials (e.g. the possibility for achievement of repulsive Casimir force in chiral metamaterials). In his research on metamaterials, Economou works in close collaboration with his long-term colleague Costas Soukoulis, since the period 1979 to 1982 when Soukoulis was a postdoctoral researcher under his supervision at the University of Virginia. The collaborative research on negative index metamaterials led by Economou and Soukoulis, including scientists from Imperial College (Sir John Pendry), Karlsruhe Institute of Technology and Bilkent University (Ekmel Özbay), was recognized with the award of the European Union Descartes prize for collaborative research in 2005.
His PhD work on surface plasmons is considered one of Economou's most important scientific contributions. The relevant 1969 publication "Surface plasmons in thin films", among the first of his career, has become a reference work for the modern field of plasmonics.
Among Economou's earlier research, his work on Anderson localization in systems with defects (i.e., crystallographic defects) and in disordered systems is also considered quite important. Many of the novel and important results of this research are summarized in his book "Green's Functions in Quantum Physics".
Books
Economou has written several physics textbooks in Greek and in English. His book Green's functions in Quantum Physics, originally published in 1979, has received more than 2500 citations according to Google Scholar. It was included in the electronic Springer Book Archives containing 40 renowned imprints published by Springer between 1842 and 2005.
Books in English
"Green's functions in Quantum Physics", Springer-Verlag, 1979. Second edition 1983, third edition 2006.
"A Short Journey from Quarks to the Universe" , SpringerBriefs, 2011. A 2nd enlarged edition appeared in Jan. 2016 entitled "From Quarks to the Universe: A short Physics Course", Springer-Verlag, 2016.
"The Physics of Solids. Essentials and Beyond", Springer-Verlag, 2010.
Books in Greek
"Statistical Physics and Thermodynamics", Crete University Press, 1994, 2nd ed., 2001
"Science: How its allurement set", Eurasian Publications, Athens 2012
"From Quarks to the Universe: A short journey", Crete University Press, 2012
"Solid State Physics Vol. I: Metals, Semiconductors, Insulators", Crete University Press, 1997
"Solid State Physics. Vol. II: Order, Disorder, Correlations", Crete University Press, 2003
"Solid State Physics: A shortened version", Crete University Press, 2016
"Solids I – General View" & "Solids I – Metals and Semiconductors", Hellenic Open University, 1999
"Nuclear Weapons and Human Civilization", Crete University Press, 1985, 2nd ed. 1987
"Contemporary Physics", Volume 1, Crete University Press, 1989, 5th ed., 2010 (co-author)
"Contemporary Physics", Volume 2, Crete University Press, 1989, 1991, 5th ed., 2010 (co-author)
Awards
Fellow of the American Physical Society (1994) "...For contributions to the theory of disordered systems including mobility edges and localization of classical waves."
Honorary PhD, Grenoble Institute of Technology, France (1994)
Honorary PhD, Department of Materials Science & Engineering, University of Ioannina, Greece (2004)
Outstanding Referee of the American Physical Society (2008)
Award of the Foundation of the Greek Parliament (2010)
Commander of the Order of the Phoenix by the President of the Greek Republic (2013)
Award of Ethical Order of the city of Heraklion (2020)
Naming of the main building of FORTH in Heraklion to "Building Eleftherios Economou" (2023)
Selected publications
E.N. Economou, "Surface Plasmons in Thin Films", Phys. Rev. 182, 539–554 (1969)
E.N. Economou, M.H. Cohen, "Existence of Mobility Edges in Anderson's Model for Random Lattices", Phys. Rev. B. 5, 2931–2948 (1972)
E.N. Economou, C.M. Soukoulis, "Static Conductance and Scaling Theory of Localization in One Dimension", Phys. Rev. Lett. 46, 618–621 (1981)
M. Sigalas, E.N. Economou,"Elastic and Acoustic Wave Band Structure", Journal of Sound and Vibration, 158 (2), 377 (1992)
M. Kafesaki, R. S. Penciu, and E. N. Economou “Air bubbles in water: a strongly multiple scattering medium for acoustic waves” Physical Review Letters 84 (26), 6050 (2000)
S. Foteinopoulou, E.N. Economou, C.M. Soukoulis, "Refraction at Media with Negative Refractive Index", Phys. Rev. Lett. 90, 107402 (2003)
J. Zhou, Th. Koschny, M. Kafesaki, E.N. Economou, J. Pendry, and C.M. Soukoulis, "Saturation of the Magnetic Response of Split-Ring Resonators at Optical Frequencies", Phys. Rev. Lett. 95 (22) 223902 (2005)
S. Droulias, I. Katsantonis, M. Kafesaki, C.M. Soukoulis, E.N. Economou “Chiral Metamaterials with PT-Symmetry and Beyond” Physical Review Letters 122 (21), 213201 (2019)
References
1940 births
Living people
National Technical University of Athens alumni
20th-century Greek physicists
Academic staff of the University of Crete
Academic staff of the National and Kapodistrian University of Athens
Theoretical physicists
Scientists from Athens
Optical physicists
Metamaterials scientists
21st-century Greek physicists | Eleftherios Economou | [
"Physics",
"Materials_science"
] | 3,073 | [
"Metamaterials scientists",
"Theoretical physics",
"Theoretical physicists",
"Metamaterials"
] |
58,281,008 | https://en.wikipedia.org/wiki/Andrew%20P.%20Carter | Andrew P. Carter is a British structural biologist who works at the Medical Research Council (MRC) Laboratory of Molecular Biology (LMB) in Cambridge, UK. He is known for his work on the microtubule motor dynein.
Education
Carter studied Biochemistry at the University of Oxford, graduating in 1999. He obtained a PhD in 2003 from the MRC Laboratory of Molecular Biology where he worked with Venki Ramakrishnan on the ribosome. He was a member of the team in Ramakrishnan's lab that solved the first X-ray crystal structure of the small (30S) ribosomal subunit. Carter also determined structures of 30S bound to antibiotics and bound to the initiation factor IF1. Ramakrishnan shared the Nobel prize in Chemistry for the team's work on the 30S.
Career and research
Carter was a post-doc in Ron Vale's lab at University of California, San Francisco from 2003 to 2010. During his post-doc, he studied the molecular motor protein, dynein using X-ray crystallography and single molecule fluorescence microscopy.
He became a group leader at MRC Laboratory of Molecular Biology in Cambridge in 2010 where he uses X-ray crystallography, electron microscopy, and single molecule microscopy assays to understand how dynein transports cargo. His group solved X-ray crystal structures of the dynein motor domain showing how it generates force to pull cargos along microtubules and reconstituted a recombinant dynein, showing how its processive movement is activated by cofactors/cargo adaptors. His group used cryoEM to solve the structure of dynein's cofactor dynactin and the full length dynein complex. They showed how dynein and dynactin come together in the presence of cargos and how this activates transport.
Grants, awards and honours
2001 Clare College Junior Research Fellowship
2002 Max Perutz PhD Student Prize (MRC Laboratory of Molecular Biology)
2003 Agouron Institute / Jane Coffin Childs Memorial Fund Fellowship
2006 Leukemia & Lymphoma Society Special Fellow Award
2010 Fellow of Clare College and Director of Studies for Biological Sciences
2012 EMBO Young Investigator Program
2012 Wellcome Trust New Investigator Award
2016 Member, European Molecular Biology Organisation (EMBO)
2018 Wellcome Trust Investigator Award
2024 Fellow of the Royal Society
References
Structural biologists
Alumni of the University of Oxford
University of California, San Francisco faculty
Year of birth missing (living people)
Living people
21st-century British biologists
Fellows of the Royal Society | Andrew P. Carter | [
"Chemistry"
] | 523 | [
"Structural biologists",
"Structural biology"
] |
58,295,661 | https://en.wikipedia.org/wiki/Jan%20Krissler | Jan Krissler, better known by his pseudonym starbug, is a German computer scientist and hacker. He is best known for his work on defeating biometric systems, most prominently the iPhone's TouchID. He is also an active member of the German and European hacker community.
Fingerprints of prominent German politicians
Krissler, along with the Chaos Computer Club, published the fingerprints of then Interior Minister Wolfgang Schäuble as a means of protest as well as a proof of concept. He photographed fingerprint traces on a glass used by Schäuble with a digital camera and processed the image digitally. Previously, Schäuble's Ministry of the Interior had introduced biometric passports which included a digital copy of the holder's fingerprint.
He further refined the attack in 2014 when he reproduced Minister of Defense Ursula von der Leyen's fingerprint from a high resolution press photo, using Neurotechnology's "VeriFinger" software. The attack was presented during 2014's Chaos Communication Congress.
Scientific work
Aside from his activities and popular papers published as an activist, Krissler is also a published scientist. His early works looked into the security of biometric systems. Later, Krissler researched the foundations of optic fibre systems and the development of novel attacks on smart cards.
From 2014 onwards, his work has focused on novel methods of defeating biometric systems. He is internationally recognized for his research on the risks emanating from high resolution smartphone cameras, which may allow malicious actors to covertly steal fingerprints. Deficiencies in biometric payment systems are another field of his research.
Currently, Krissler is a research assistant at TU Berlin working with Jean-Pierre Seifert's research group.
References
Hackers
German computer scientists | Jan Krissler | [
"Technology"
] | 372 | [
"Lists of people in STEM fields",
"Hackers"
] |
58,296,164 | https://en.wikipedia.org/wiki/Arterial%20spin%20labelling | Arterial spin labeling (ASL), also known as arterial spin tagging, is a magnetic resonance imaging technique used to quantify cerebral blood perfusion by labelling blood water as it flows throughout the brain. ASL specifically refers to magnetic labeling of arterial blood below or in the imaging slab, without the need of gadolinium contrast. A number of ASL schemes are possible, the simplest being flow alternating inversion recovery (FAIR) which requires two acquisitions of identical parameters with the exception of the out-of-slice saturation; the difference in the two images is theoretically only from inflowing spins, and may be considered a 'perfusion map'.
The ASL technique was developed by John S. Leigh Jr, John A. Detre, Donald S. Williams, and Alan P. Koretsky in 1992.
Physics
Arterial spin labeling utilizes the water molecules circulating within the brain: a radiofrequency pulse magnetically labels the blood water, which is then tracked as it circulates through the brain. After a post-labeling delay long enough for the labeled blood to reach the brain tissue (typically on the order of one to two seconds), a 'label' image is acquired. A 'control' image is also acquired without labeling of the blood water. A subtraction technique gives a measurement of perfusion. In order to increase SNR, collections of control and label images can be averaged. There are also other specifications of the MRI scanner that can increase SNR, such as the number of channels in the head coil or a stronger field strength (3 T is standard, but 1.5 T is satisfactory). In order to properly scale the perfusion values into cerebral blood flow units (CBF, ml/100g/min), a separate proton density map with the same parameters (but a longer TR to fully relax the blood spins) is recommended to be acquired as well. Alternatively, the average control image can be used to generate CBF, which is the case for Philips pCASL readouts. Usually background suppression is also applied to increase the SNR. Due to the different variations of each implementation, it is recommended that a large multi-scanner study design a protocol minimizing the variety of readout methods used across scanners.
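As a rough sketch of the subtraction and averaging step described above, the NumPy fragment below computes a perfusion-weighted difference map from paired control/label volumes. The array shapes, variable names and the simple M0 scaling are illustrative assumptions, not the API of any specific ASL toolbox, and absolute CBF quantification requires the full kinetic model mentioned in the comments.

```python
import numpy as np

def perfusion_weighted_map(control, label):
    """Mean pairwise control-minus-label difference (deltaM).

    control, label: 4D arrays of shape (x, y, z, n_pairs), e.g. obtained by
    splitting a motion-corrected pCASL time series into alternating volumes.
    """
    diff = control - label        # labelled inflowing spins lower the signal
    return diff.mean(axis=-1)     # averaging repeated pairs improves SNR

def relative_cbf(delta_m, m0, eps=1e-6):
    """Scale deltaM by a proton-density (M0) image to get a relative CBF map.

    Absolute quantification in ml/100g/min additionally requires labelling
    efficiency, blood T1, labelling duration and the post-labelling delay.
    """
    return delta_m / (m0 + eps)
```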
One study has shown that although there are voxel-level differences when different readout methods are used, average gray matter CBF values are still comparable. Differences in SNR are apparent when individual voxels are compared, but are collectively negligible.
Continuous arterial spin labelling
In continuous arterial spin labeling (CASL), the blood water is inverted as it flows through a single labeling plane. CASL is characterized by a single long pulse (around 1–3 seconds). This may be disadvantageous for certain scanners that are not designed to maintain a radiofrequency pulse that long, and would therefore require adjustments to the RF amplifier. This is rectified in pseudo-continuous arterial spin labeling (pCASL), where a single long pulse is replaced with multiple (up to a thousand) millisecond pulses. This leads to a higher labelling efficiency. pCASL is the preferred implementation of ASL. There are different readout modules for pCASL, depending on the scanner used, with 2D pCASL usually being implemented for all scanners and a 3D pCASL stack of spirals implemented on GE scanners.
Pulsed arterial spin labelling
In pulsed arterial spin labeling (PASL), blood water is inverted as it passes through a labeling slab (of 15 to 20 cm) instead of a plane. There are different variations of this implementation, including EPISTAR, PICORE and PULSAR. Most scanners have been designed to have PASL work out-of-the-box for research use.
Velocity selective arterial spin labelling
Velocity selective arterial spin labeling is a strategy that still requires validation. It is advantageous in populations where blood flow may be impeded (e.g. stroke), because the labeling occurs closer to the capillaries. This allows the post-labeling delay to be shorter.
Diffusion prepared pseudocontinuous arterial spin labelling (DP-pCASL)
Diffusion-prepared pseudocontinuous ASL (DP-pCASL) is a more recent ASL variant sequence that magnetically labels water molecules and measures their movement across the blood-brain barrier complex, which allows for the calculation of the water exchange rate (kw). kw is used as a surrogate for BBB function and permeability. Water exchange across the BBB is mediated by a number of processes, including passive diffusion, active co-transport through the endothelial membrane, and predominantly by facilitated diffusion through the dedicated water channel aquaporin-4 (AQP4). Several studies have investigated the use of DP-pCASL in cerebrovascular diseases, including acute ischemic stroke, CADASIL, hereditary cerebral small vessel disease as well as in animal models.
Analysis of ASL images
ASL maps can largely be analyzed using the same tools used to analyze fMRI and VBM data. Many ASL-specific toolboxes have been developed to assist in ASL analysis, such as BASIL (Bayesian inference for arterial spin labelling MRI), part of the FSL neuroimaging package, and Ze Wang's ASL toolbox (using MATLAB), which assists in the subtraction and averaging of the tagged/control pairs. A visual quality check is often needed to make sure that the perfusion map is valid (for example, correct registration, or correct exclusion of non-cerebral tissue such as the dura mater). A whole-brain/voxel-wise approach can be analyzed by registering the ASL map into MNI space for group comparisons. A region-of-interest approach can be analyzed by registering the ASL map to a selected cluster or an atlas, either a standard atlas (such as the Harvard-Oxford cortical atlas) or an individual atlas developed by software like FreeSurfer. The recommended procedure for ASL registration in voxel-wise analysis is to register the perfusion map to a gray matter segmentation of each individual in a non-rigid procedure.
Gray matter often requires more oxygenation and is the source of more brain activity compared to white matter. Therefore, gray matter CBF is often higher than white matter CBF. The single value of gray matter CBF is often isolated in order to give a broad overview of CBF differences. Gray matter and white matter CBF can be localized using atlases or Freesurfer.
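For the kind of single-value gray matter summary described above, a minimal nibabel/NumPy sketch might extract mean gray matter CBF from a quantified CBF map and a gray matter probability map. The file names and the 0.7 probability threshold are assumptions for illustration, not a prescribed pipeline.

```python
import nibabel as nib

cbf = nib.load("cbf_map.nii.gz").get_fdata()          # quantified CBF, ml/100g/min
gm  = nib.load("gm_probability.nii.gz").get_fdata()   # gray matter probability map

gm_mask = gm > 0.7                    # keep voxels that are mostly gray matter
mean_gm_cbf = cbf[gm_mask].mean()     # single summary value for the subject
print(f"Mean gray-matter CBF: {mean_gm_cbf:.1f} ml/100g/min")
```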
ASL functional connectivity protocols can be designed with parameters conducive to a long scan time. Studies have suggested that ASL complements resting-state fMRI findings well, but differentiates between resting-state brain networks (such as the default mode network) less well.
Comparison with fMRI
Functional MRI (fMRI) has been the modality of choice to visualize brain activity, and takes advantage of a range of techniques that can be used to interpret it. However, the signal that fMRI acquires is the BOLD signal, which does not directly correlate with blood flow. Cerebral blood flow, on the other hand, does, allowing for analysis of cardiovascular disease (CVD) and inflammatory risk factors, and of disorders (such as schizophrenia and bipolar disorder) that have comorbid effects with CVD. ASL imaging can be a useful tool to complement fMRI and vice versa.
Clinical use
In cerebral infarction, the penumbra has decreased perfusion. Besides acute and chronic neurovascular diseases, the value of ASL has been demonstrated in brain tumors, epilepsy and neurodegenerative disease, such as Alzheimer's disease, frontotemporal dementia and Parkinson disease. Additionally, DP-pCASL has promising potential for assessing blood-brain barrier integrity in patients with ischemic stroke.
Although the primary form of fMRI uses the blood-oxygen-level dependent (BOLD) contrast, ASL is another method of obtaining contrast.
There has been research into applying ASL to renal imaging, pancreas imaging, and placenta imaging. A challenge in this sort of non-cerebral perfusion imaging is motion due to breathing. Additionally, there is far less development on the segmentation of these specific organs, so the studies are relatively small scale.
Safety
ASL is in general a safe technique, although, as with other MRI techniques, injuries may occur as a result of failed safety procedures or human error.
ASL, like other MRI modalities generate a fair amount of acoustic noise during the scan, so earplugs are advised.
References
External links
mriquestions.com
Neuroimaging
Nuclear magnetic resonance
Imaging
Magnetic resonance imaging
Scientific techniques | Arterial spin labelling | [
"Physics",
"Chemistry"
] | 1,785 | [
"Nuclear magnetic resonance",
"Magnetic resonance imaging",
"Nuclear physics"
] |
58,296,197 | https://en.wikipedia.org/wiki/Surfactant%20leaching%20%28decontamination%29 | Surfactant leaching is a method of water and soil decontamination, e.g., for oil recovery in petroleum industry. It involves mixing of contaminated water or soil with surfactants with the subsequent leaching of emulsified contaminants. In oil recovery, most common surfactant types are ethoxylated alcohols, ethoxylated nonylphenols, sulphates, sulphonates, and biosurfactants.
References
Soil contamination
Solid-solid separation
Oil spill remediation technologies | Surfactant leaching (decontamination) | [
"Chemistry",
"Environmental_science"
] | 113 | [
"Solid-solid separation",
"Environmental chemistry",
"Soil contamination",
"Separation processes by phases"
] |
58,296,445 | https://en.wikipedia.org/wiki/Piezoelectric%20microelectromechanical%20systems | A piezoelectric microelectromechanical system (piezoMEMS) is a miniature or microscopic device that uses piezoelectricity to generate motion and carry out its tasks. It is a microelectromechanical system that takes advantage of an electrical potential that appears under mechanical stress. PiezoMEMS can be found in a variety of applications, such as switches, inkjet printer heads, sensors, micropumps, and energy harvesters.
Development
Interest in piezoMEMS technology began around the early 1990s as scientists explored alternatives to electrostatic actuation in radio frequency (RF) microelectromechanical systems (MEMS). For RF MEMS, electrostatic actuation required specialized high-voltage charge pump circuits due to small electrode gap spacing and large driving voltages. In contrast, piezoelectric actuation allowed for high sensitivity as well as low voltage and power consumption, with driving voltages as low as a few millivolts. It also had the ability to close large vertical gaps while still allowing for low microsecond operating speeds. Lead zirconate titanate (PZT), in particular, offered the most promise as a piezoelectric material because of its high piezoelectric coefficient, tunable dielectric constant, and electromechanical coupling coefficient. PiezoMEMS have been applied to various different technologies from switches to sensors, and further research has led to the creation of piezoelectric thin films, which aided in the realization of highly integrated piezoMEMS devices.
The first reported piezoelectrically actuated RF MEMS switch was developed by scientists at the LG Electronics Institute of Technology in Seoul, South Korea in 2005. The researchers designed and actualized a RF MEMS switch with a piezoelectric cantilever actuator that had an operation voltage of 2.5 volts.
In 2017, researchers from the U.S. Army Research Laboratory (ARL) evaluated the radiation effects in the piezoelectric response of PZT thin films for the first time. They determined that PZT exhibited a degree of radiation hardness that could be further extended by using conductive oxide electrodes instead of traditional platinum electrodes. Gamma radiation tests have also shown that actuated devices such as switches, resonators, and inertial devices could benefit from the radiation tolerance of PZT, suggesting the possibility that actuators and sensors can be integrated into platforms evaluating nuclear material and reduce human exposure to radiation.
This experiment was part of a decades-long research investment effort at ARL to improve the use of PZT thin film technology for piezoMEMS. Other piezoMEMS-related work included developing a piezoelectric microphone based on PZT thin films, creating new integrated surface micromachining processes for RF MEMS to incorporate thin film PZT actuators, providing the first experimental demonstration of monolithically integrated piezoMEMS RF switches with contour mode filters, and demonstrating the feasibility of vibrational energy harvesting using thin film PZT MEMS. In their work, researchers from ARL have also increased the overall electromechanical response of PZT thin films by 15-30% by incorporating iridium oxide electrode materials.
Design
There exists three primary approaches to realizing PiezoMEMS devices:
The additive approach: The piezoelectric thin films are deposited on silicon substrates with layers of insulating and conducting material followed by surface or silicon bulk micromachining.
The subtractive approach: Single crystal or polycrystalline piezoelectrics and piezoceramics are subjected to direct bulk micromachining, followed by electrode deposition.
The integrative approach: Micromachined structures are integrated in silicon or piezoelectrics by using bonding techniques on bulk piezoelectric or silicon substrates.
PiezoMEMS use two principal crystal structures, the wurtzite and perovskite structures.
Challenges
PiezoMEMS still face many difficulties that impede their successful commercialization. For instance, successful deposition of uniform piezoelectric films still depends heavily on the use of appropriate nucleation and film-growth layers. As a result, extensive device-specific development efforts are needed to create a proper sensor structure. In addition, researchers continue to search for ways to reduce and control the material and sensor drift and aging characteristics of thin film piezoelectric materials. Deposition techniques to create thin films with properties approaching those of bulk materials remain in development and in need of improvement. Furthermore, the etching of most piezoelectric materials remains very slow and their etch chemistry challenging.
References
Mechanical engineering
Electrical engineering
Microtechnology
Microelectronic and microelectromechanical systems
Transducers
Electrical phenomena
Energy harvesting | Piezoelectric microelectromechanical systems | [
"Physics",
"Materials_science",
"Engineering"
] | 991 | [
"Physical phenomena",
"Applied and interdisciplinary physics",
"Microtechnology",
"Materials science",
"Electrical phenomena",
"Mechanical engineering",
"Electrical engineering",
"Microelectronic and microelectromechanical systems"
] |
58,296,557 | https://en.wikipedia.org/wiki/Thomas%20Stevens%20Stevens | Thomas Stevens Stevens (8 October 1900 – 12 November 2000) was a Scottish organic chemist. He was affectionately known as T.S.S. or Tommy Stevens.
Life
He was born in Renfrew on 8 October 1900, the only son of John Stevens and his wife, Jane Irving. His father was a design engineer and Production Director of William Simons & Co. shipbuilders in Renfrew. He was home educated by his mother (a former schoolteacher) until 1908 then educated at Paisley Grammar School. In 1915 he moved to Glasgow Academy and completed his education there in 1917.
He studied Science at Glasgow University under a Taylor Open Bursary, graduating BSc in 1921. He continued at Glasgow as a researcher and as assistant to Horwood Tucker. In 1923 he went to Oxford University to study under Prof William Henry Perkin Jr, gaining his first doctorate (PhD) in 1925.
He returned to Glasgow University in 1925 as a university assistant. In 1928 he became a teaching assistant and in 1933 a Lecturer, a post he held until 1947. In 1947 he moved to Sheffield University as a Senior Lecturer in Organic Chemistry. He became a Reader in 1949 and a Professor in 1963.
In 1963 he was elected a Fellow of the Royal Society of London. In 1964 he was elected a Fellow of the Royal Society of Edinburgh. His proposers were Peter Pauson, James Bell, Ian Dawson and John Monteath Robertson.
He retired in 1966. In 1985 he was awarded an honorary doctorate (DSc) from Glasgow University.
He died on 12 November 2000, a few weeks after his 100th birthday.
Family
In 1949 he married Janet Wilson Forsyth (d. 1994).
References
1900 births
2000 deaths
People from Renfrew
Alumni of the University of Glasgow
Academics of the University of Glasgow
Academics of the University of Sheffield
British organic chemists
Scottish chemists
Fellows of the Royal Society
Fellows of the Royal Society of Edinburgh
Scottish men centenarians | Thomas Stevens Stevens | [
"Chemistry"
] | 389 | [
"Organic chemists",
"British organic chemists"
] |
58,297,367 | https://en.wikipedia.org/wiki/Lisa%20Welp | Lisa Welp is a biogeochemist who utilizes stable isotopes to understand how water and carbon dioxide are exchanged between the land and atmosphere. She is a professor at Purdue University in the department of Earth, Atmosphere, and Planetary Sciences.
Early life and education
Lisa Welp grew up in Ferdinand, Indiana. In high school, Welp participated in a program from Indiana University titled Exploration of Careers in Science, where she spent eight weeks on campus doing research for the university. She focuses her research on oxygen-18 isotopes of CO2 and water, and she did field work in Alaska and Siberia studying carbon cycling in the forests. Welp attained her Master of Science in Environmental Science and Engineering from the California Institute of Technology in Pasadena, California in 2002. She also received her PhD from the California Institute of Technology in the same field in 2006. Welp obtained her undergraduate degree in chemistry (minor in geology) in 2000 from Indiana University in Bloomington, Indiana.
Career and research
Welp is currently an assistant professor at Purdue University in Earth, Atmosphere, and Planetary Sciences. Prior to this, she was an Assistant Project Scientist (2012-2014) and Postdoctoral Scholar (2008-2012) at the Scripps Institution of Oceanography, UCSD and postdoctoral Research Associate & Lecturer at Yale University (2006-2008).
Welp's areas of research concern stable isotope biogeochemistry, water and carbon dioxide exchange between the land biosphere and the atmosphere, and boreal forest carbon cycling. In 2008, she worked with the Keeling CO2 Lab run by Ralph Keeling, son of Charles David Keeling (see also Keeling Curve). Her work at Scripps led to greater understanding of how ENSO (El Niño Southern Oscillation) affects global shifts in primary production, due to the redistribution of moisture. Welp quantified this using δ18O-CO2 (the isotopic shift in precipitation is transferred to respired CO2). Her analysis suggested that global estimates of gross primary production (GPP) were too low and revised them upwards.
Welp's collaborative research program has provided insight into how seasonal warming and drying affect ecosystem exchange of carbon in boreal forests, and how the seasonal exchange of carbon between the atmosphere and biosphere has shifted over time, potentially due to increased water use efficiency proportional to the rise in atmospheric CO2.
Awards and fellowships
Great Lakes Chief Scientist Training Cruise Award
BASIN young investigator travel grant 2011
Outstanding poster contribution at the International Carbon Dioxide Conference 2009
EPA Science to Achieve Results (STAR) Fellowship: 2001-2004
BASIN student travel grants: 2002-2004
References
External links
Purdue University faculty
Living people
Year of birth missing (living people)
Place of birth missing (living people)
Biogeochemists
People from Dubois County, Indiana
California Institute of Technology alumni
Indiana University Bloomington alumni
American geochemists
Women geochemists | Lisa Welp | [
"Chemistry"
] | 585 | [
"Geochemists",
"American geochemists",
"Women geochemists",
"Biogeochemistry",
"Biogeochemists"
] |
58,297,912 | https://en.wikipedia.org/wiki/Peter%20Pauson | Prof Peter Ludwig Israel Pauson FRSE FRIC (1925–2013) was a German–Jewish emigrant who settled in Britain and who is remembered for his contributions to chemistry, most notably the Pauson–Khand reaction and as joint discoverer of ferrocene.
Life
He was born in Bamberg, Germany on 30 July 1925, the son of Stefan Pauson and his wife, Helene Dorothea Herzfelder. His parents escaped to England in 1939 with Peter and his two sisters to flee the Nazi persecution of Jews.
In 1942 the family moved to Glasgow and he began studying chemistry at the University of Glasgow under Thomas Stevens Stevens. After graduating in 1946, he moved to Sheffield University as a postgraduate, studying under Robert Downs Haworth and receiving his doctorate in 1949. He then went to Duquesne University in Pittsburgh, Pennsylvania and pursued research on tropolones and other aromatic non-benzenoid molecules. His discovery of ferrocene with his student, Thomas J. Kealy, arose from an attempt to dimerize cyclopentadienylmagnesium bromide using iron(III) chloride; the orange-yellow solid with formula C10H10Fe was described as a "molecular sandwich" in Pauson's note which was published in Nature in 1951.
From 1951 to 1952 he studied at the University of Chicago under Morris Kharasch, then becoming a DuPont Fellow at Harvard University. He then gained practical experience at the DuPont Laboratories in Wilmington. Returning to Britain, he became a lecturer at Sheffield University and in 1959 became Professor of Organic Chemistry at Strathclyde University. In 1964 he was elected a Fellow of the Royal Society of Edinburgh.
Pauson and his postdoctoral assistant, Ihsan Khand, discovered the reaction now renowned as the Pauson–Khand reaction in 1971, though Pauson always referred to it as the "Khand reaction".
In 1994, the University of Strathclyde established the Merck Pauson Chair in Preparative Chemistry, funded by Merck, marking the contribution of Pauson to chemistry and to the university.
Pauson retired in 1995 and died peacefully at home on 10 December 2013. He was cremated at Clydebank Crematorium. In his obituary, he is described as "a gentleman of modesty, humility, and compassion … a fine man and a marvellous scientist".
Family
He married Lai-Ngau Mary (née Wong) (1928 – March 18, 2010), having met her at a party hosted by Enrico Fermi when Pauson was at the University of Chicago in the early 1950s. They went on to have two children, Hilary and Alfred.
Selected publications
Organometallic Chemistry (1967)
References
1925 births
2013 deaths
Jewish emigrants from Nazi Germany to the United Kingdom
German organic chemists
Alumni of the University of Sheffield
Academics of the University of Sheffield
Academics of the University of Strathclyde
Fellows of the Royal Society of Edinburgh
Duquesne University alumni | Peter Pauson | [
"Chemistry"
] | 603 | [
"Organic chemists",
"German organic chemists"
] |
58,297,950 | https://en.wikipedia.org/wiki/Pauline%20Harrison | Pauline May Harrison (née Cowan, 24 August 1926 – 28 May 2024) was a British protein crystallographer and professor emeritus at the University of Sheffield. She gained her chemistry degree from Somerville College, Oxford in 1948, followed by a DPhil in X-ray crystallography in 1952 supervised by Dorothy Hodgkin. After three years at King's College London (contemporary with Rosalind Franklin) she moved to the University of Sheffield in 1955 as a demonstrator in the Biochemistry department (now Molecular Biology and Biotechnology), obtaining an MRC grant to study the iron storage protein Ferritin, publishing preliminary X-ray diffraction data in the 1st volume of the Journal of Molecular Biology in 1959. The molecule which became her life's work. In 1978, she was awarded a personal chair and retired in 1991. In 2001 she was appointed a CBE for services to higher education.
Personal life and death
Harrison was the daughter of botanists Adeline May Organe and John Macqueen Cowan, Assistant Keeper of the Royal Botanic Garden, Edinburgh. She was married to Royden Harrison, also a lecturer at Sheffield and a figure in the Labour movement until his death in 2002. Harrison was an alumna of St. Trinnean's School.
Harrison died on 28 May 2024, at the age of 97.
References
1926 births
2024 deaths
Alumni of Somerville College, Oxford
British biochemists
British crystallographers
Academics of the University of Sheffield
English biophysicists
X-ray crystallography
British women biologists
British women chemists
20th-century British women scientists
20th-century British scientists
21st-century British women scientists
21st-century British scientists | Pauline Harrison | [
"Chemistry",
"Materials_science"
] | 345 | [
"X-ray crystallography",
"Crystallography"
] |
58,299,142 | https://en.wikipedia.org/wiki/Quercus%20%C3%97%20hispanica | Quercus × hispanica, commonly known as Spanish oak, is tree in the family Fagaceae. It is a semideciduous hybrid between the European trees Turkey oak (Quercus cerris) and cork oak (Quercus suber).
Taxonomy
The taxon was first described as the species Quercus hispanica by Jean-Baptiste Lamarck in 1785. Plants of the World Online treated it as the hybrid between the European species Quercus cerris (Turkey oak) and Quercus suber (cork oak), using the hybrid name Quercus × hispanica. In this treatment, one of its many synonyms is Quercus × crenata, which may also be treated as a separate species.
Distribution
Hybridisation occurs naturally in southwestern Europe where both parent species occur. The Lucombe oak cultivar is frequently found in British collections. To be a true Lucombe oak, cultivars must be clones of the original hybrid arising in William Lucombe's Exeter nursery.
Cultivation
A number of named cultivars are grown in gardens, parks, arboreta and botanical gardens.
Cultivars
Quercus × hispanica 'Lucombeana' ("Lucombe oak"), originally raised by William Lucombe at his Exeter, UK nursery in 1762. An early Lucombe Oak is in Kew Gardens arboretum, and is regarded as one of their 'heritage trees'. The Tree Register of the British Isles−TROBI Champion is at Phear Park in Exmouth, measuring in height, with a trunk diameter of in 2008.
Quercus × hispanica 'Waasland' ("Waasland select oak"), leaves display an unusual slender, lobed shape.
Quercus × hispanica 'Wageningen' ("Wageningen oak")
Quercus × hispanica 'Fulhamensis' ("Fulham oak")
References
hispanica
Trees of Europe
Trees of Mediterranean climate
Plants described in 1785
Hybrid plants | Quercus × hispanica | [
"Biology"
] | 402 | [
"Hybrid plants",
"Plants",
"Hybrid organisms"
] |
58,300,295 | https://en.wikipedia.org/wiki/Structural%20support | A structural support is a part of a building or structure that provides the necessary stiffness and strength in order to resist the internal forces (vertical forces of gravity and lateral forces due to wind and earthquakes) and guide them safely to the ground. External loads (actions of other bodies) that act on buildings cause internal forces (forces and couples by the rest of the structure) in building support structures. Supports can be either at the end or at any intermediate point along a structural member or a constituent part of a building and they are referred to as connections, joints or restraints.
Building support structures, no matter what materials are used, have to transfer loads accurately and safely. A structure depends less on the weight and stiffness of a material and more on its geometry for stability. Whatever the conditions are, a specific rigidity is necessary in connection design. The support connection type affects the load-bearing capacity of each element that makes up a structural system. Each support condition influences the behaviour of the elements and therefore of the system. Structures can be either horizontal-span support systems (floor and roof structures) or vertical building structure systems (walls, frames, cores, etc.).
Structure
Structure is necessary for buildings, but architecture, as an idea, does not require structure. Every building has both load-bearing structures and non-load-bearing portions. Structural members form systems and transfer the loads acting upon the structural systems through a series of elements to the ground. Building structure elements include line elements (beams, columns, cables, frames or arches, space frames), surface elements (walls, slabs or shells) and freeform elements.
The structure's functional requirements will narrow the possible forms that one can consider. Other factors such as the availability of materials, foundation conditions, aesthetic requirements and economic limitations also play important roles in establishing the structural form. Structural systems, or all their members and parts, are considered to be in equilibrium if the systems are initially at rest and remain at rest when a system of forces and couples acts on them. Support conditions are not aspects of a model that should be guessed. To be able to analyze a structure, it is necessary to be clear about the forces acting on it, which can be quite complicated.
There are two types of forces, External Forces which are the actions of other bodies on the structure under consideration and Internal Forces which the rest of the structure exert on a member or portion of the structure as forces and couples. A little deflection or play is required for a structure to protect other surrounding materials from those forces.
Support structure
There are five basic idealized support structure types, categorized by the types of deflection they constrain: roller, pinned, fixed, hanger and simple support.
Roller supports
A roller support allows thermal expansion and contraction of the span and prevents damage to other structural members such as a pinned support. The typical application of roller supports is in large bridges. In civil engineering, roller supports can be seen at one end of a bridge.
A roller support cannot prevent translational movements in horizontal or lateral directions and any rotational movement but prevents vertical translations. Its reaction force is a single linear force perpendicular to, and away from, the surface (upward or downward). This support type is assumed to be capable of resisting normal displacement.
It can be rubber bearings, rocker or a set of gears allowing a limited amount of lateral movement. A structure on roller skates, for example, remains in place as long as it must only support itself. As soon as lateral load pushes on the structure, a structure on roller skates will roll away in response to the force.
Pinned support
A pinned support attaches only the web of a beam to a girder, in what is called a shear connection. The support can exert a force on a member acting in any direction and prevents translational movements, or relative displacement of the member-ends, in all directions, but cannot prevent any rotational movements. Its reaction force is a single linear force of unknown direction, or equivalently horizontal and vertical forces which are the components of that single force of unknown direction.
A pinned support is just like a human elbow. It can be extended and flexed (rotation), but you cannot move your forearm left to right (translation).
One benefit of pinned supports is that they carry no internal moment forces, with only their axial force playing a big role in their design. However, a single pinned support cannot completely restrain a structure; at least two supports are needed to resist a moment. Trusses are one frequent application of this support.
Fixed support
Rigid or fixed supports maintain the angular relationship between the joined elements and provide both force and moment resistance. A fixed support exerts forces acting in any direction and prevents all translational movements (horizontal and vertical) as well as all rotational movements of a member. These supports' reaction forces are the horizontal and vertical components of a linear resultant, as well as a moment. It is a rigid type of support or connection. Fixed supports are beneficial when only a single support can be used, and they are most widely used as the only support for a cantilever. They are common in beam-to-column connections of moment-resisting steel frames and beam, column and slab connections in concrete frames.
Hanger support
A hanger support only exerts a force and prevents a member from acting or translating away in the direction of the hanger. However, this support cannot prevent translational movement in all directions and any rotational movement. This is one of the simplest structural forms in which the elements are in pure tension. Structures of this type range from simple guyed or stayed structures to large cable-supported bridge and roof systems.
Simple support
A simple support is basically where the structural member rests on an external structure, as in two concrete blocks holding a resting plank of wood on their tops. This support is similar to a roller support in the sense that it restrains vertical forces but not horizontal forces. Therefore, it is not widely used in real-life structures unless the engineer can be sure that the member will not translate.
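As a worked illustration (not taken from the article) of how these idealized supports determine reaction forces, consider a simply supported beam, i.e. a pin at end A and a roller at end B, of span L carrying a point load P at a distance a from A. Static equilibrium of vertical forces and of moments about A gives the two reactions:

```latex
\sum F_y = 0:\; R_A + R_B - P = 0, \qquad
\sum M_A = 0:\; R_B L - P a = 0
\;\;\Rightarrow\;\;
R_B = \frac{P a}{L}, \qquad R_A = \frac{P (L - a)}{L}.
```

The roller at B supplies only the vertical reaction R_B, while the pin at A can in general also resist a horizontal force, which here is zero because the load is purely vertical.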
Varieties of support
See also
Bracket (architecture)
References
Civil engineering | Structural support | [
"Engineering"
] | 1,222 | [
"Construction",
"Civil engineering"
] |
58,300,612 | https://en.wikipedia.org/wiki/NGC%206053 | NGC 6053 is an elliptical galaxy located about 450 million light-years away in the constellation Hercules. The galaxy was discovered by astronomer Lewis Swift on June 8, 1886 and is member of the Hercules Cluster.
See also
List of NGC objects (6001–7000)
References
External links
http://ngcicproject.org/NGC/NGC_60xx/NGC_6057.htm
http://www.astronomy-mall.com/Adventures.In.Deep.Space/NGC%206000%20-%206999%20(11-30-17).htm
6053
57090
Hercules (constellation)
Hercules Cluster
Astronomical objects discovered in 1886
Elliptical galaxies | NGC 6053 | [
"Astronomy"
] | 146 | [
"Hercules (constellation)",
"Constellations"
] |
58,301,658 | https://en.wikipedia.org/wiki/Mathletics%20%28educational%20software%29 | Mathletics is an online educational website which launched in 2005. The website operates through a subscription model, offering access at an individual and school level. Online users, known as 'Mathletes', have access to math quizzes and challenges, and can participate in a real-time networked competition known as 'Live Mathletics'. A customisable avatar visually represents each player in the 'Live Mathletics' competitions. 'Credits' are awarded through the completion of quizzes and tasks, which can be used to customise their avatar's clothing and aesthetics.
In 2007, Mathletics started World Maths Day, and in 2010, World Maths Day obtained a Guinness World Record for the Largest Online Maths Competition. As of 2023, Mathletics caters to 3.2 million users worldwide and 14,000 schools.
History
Mathletics was established as a Personal Learning Environment (PLE) application in 2005 by 3P Learning, catering for Australian schools. The website is structured to facilitate engagement with students from the K-12 educational level, and offers various visual resources in its interactive, online Web 2.0 treatment of the Australian Curriculum. Though initially based around this curriculum, Mathletics broadened its offices as well as its student and teacher audiences to various other countries in North America, Europe, Asia and the Middle East, adapting to those regions' various school curricula. The US and Canadian version of the website aligns with state-based educational standards including the Common Core and Texas Essential Knowledge and Skills (TEKS) from kindergarten through high school. The UK version of the website follows the various National Curricula within Britain, comprising Foundation Stage to Key Stage 5. Both the Middle Eastern and Asian versions of the website adopt and reflect international curricula, and offer an entire translation of the English course.
Mathletics is a project by 3P Learning, an organisation that creates education applications such as Reading Eggs and Mathseeds.
Content
Primary Students
Mathletics functions via an emphasis upon the bilateral capabilities of Web 2.0, which concern interface and user interactivity. Mathletics heavily anchors its teaching style within the "Primary" section of the website through the lens of 'visual learning', employing a vast array of colours combined with cartoon imagery to create a "captivating" website aesthetic intended to appeal to students under twelve years old. The site offers animated tutorials and learning support that display animated adolescent characters offering mathematics tips and answers to questions. The website currently offers 1200 unique questions that have been individually tailored to suit each user's mathematical comprehension. Students are encouraged to participate in mathematics activities which host up to 20 questions related to a certain topic. Once a student answers a question, the website recognises its completion and then adapts to the "student's progress in understanding", leading to questions that may be more complex in difficulty. At the completion of each topic, students are offered the opportunity to take a 'Topic Test' which collects the hardest questions from the past activities.
A certificate award is presented to a student once they have earned 1000 points within a week. Ten points are awarded per correct activity answer in a regular activity. Twenty points are awarded per correct answer in a 'Topic Test'.
The 'Primary' section of the website is accessible on tablet devices and is available for offline use when the user has no reliable internet connection. All points earned while offline are synchronised to the online servers the next time the user connects.
Secondary Students
Mathletics believes that "secondary school is a whole new world and a new school demands an older, more study-focused interface for new students". As a result, the "Secondary" section of the website does not use the juvenile decor that saturates the "Primary" section, replacing it with a "more study-based" interface. Further, this "Secondary" area of the website tracks student progress through a 'Traffic Light System', which categorises a user's understanding of a mathematical topic into three colour-coded bands (a short illustrative sketch of this banding follows the list below):
Green = 85% to 100% correct marks.
Orange = 50% to 84% correct marks.
Red = 0% to 49% correct marks.
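The banding rule above is simple enough to express directly in code. The following sketch is illustrative only; the function name and the handling of fractional scores are assumptions, not part of Mathletics itself:

```python
def traffic_light(percent_correct: float) -> str:
    """Map a topic score (0-100) to the colour band described above."""
    if percent_correct >= 85:
        return "green"
    if percent_correct >= 50:
        return "orange"
    return "red"

print(traffic_light(92))  # green
print(traffic_light(60))  # orange
print(traffic_light(30))  # red
```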
Alongside this colour-coded progress guide, secondary students have access to "adaptive practice activities with animated support, plus interactive and video content" as well as a library of printable eBooks. Secondary students are also offered the option to customise the website's interface from a range of backgrounds to suit their "learning needs". The collection of backgrounds encompasses pictures of natural environments, sporting fields, live animals and vibrant patterns of colour.
As in the 'Primary' section, cumulative points are awarded for the completion of questions in activities. Secondary students also have access to 'Topic Tests', which draw together the most difficult questions of the topic.
The "Secondary Students" section of the website is also reachable on a Tablet device, and can be accessed for offline use. The same process of offline synchronisation to the online profile is also applied in the 'Secondary Students" section of the website.
Early Learners
The 'Early Learners Numeracy' section of the site offers a series of multimedia resources that are designed to support students aged from four to seven. The section's mascots are animated 'Numbeanies', drawn and designed to appeal to infants. These 'Numbeanies' present younger users with a series of flash cards that represent numbers as "collections, numerals and words".
The overall purpose of the Early Learners section is to be entertaining and to provide the basic foundations of mathematics through interactive games and videos.
Teachers
Users registered as teachers have access to a 'Mathletics Teacher Console', which manages their classroom's collective progress as well as providing insight into each individual student's progress. The console presents live data analysis of each student's progress, represented via colour-coded visuals, intended to give teachers greater agency in assigning "targeted and personalised learning pathways" for the class. It also provides tools to manage classes, create custom mathematical learning courses to suit different learning groups, and supply students with multimedia resources to help them answer the assigned questions. Teachers can set mandatory assignments and activities; these must be completed before a student can participate in games of 'Live Mathletics' or other activities.
The teacher console is multi-platform and available on various devices, including tablets and mobile phones.
Avatar
Each individual subscriber of Mathletics must create an identifiable avatar. The customisable avatar template provided by Mathletics is a portrait-view shot of a head and face in the foreground, combined with an animated environment in the background. This online persona is known as the user's 'Mathlete'. The avatar does not have to represent the user's actual facial features; however, its design represents the user against others in competitions of 'Live Mathletics'. The avatar can be updated through the 'Face Maker' interface by purchasing upgrades with credits earned from completing tasks and 'Live Mathletics' competitions.
Mathletics operates via a credits-based incentive system, awarding students who complete quizzes or 'Live Mathletics' competitions with in-app credits that act as a virtual currency and can be used to purchase aesthetic upgrades for their 'Mathlete'. The website awards a flat 10 credits to a user for participating in a quiz or competition, in addition to credits earned from their score.
Live Mathletics
Users are offered the option to participate in real-time, networked mathematics competitions known as 'Live Mathletics' on the Mathletics website. The objective is to answer as many addition, subtraction and multiplication problems as possible before a one-minute timer ends. Users select a difficulty level on a scale from 1 to 10 (1 being the easiest, 10 the hardest), which dictates the complexity of the questions asked by the website. The user who answers the most questions correctly wins.
'Live Mathletics' incorporates a "Who's Online" panel which allows users to read a live feed of other students in their class that are currently online and engaged with 'Live Mathletics'.
Reception
The overall reception of Mathletics as educational software has been generally positive. Technology reviewer TeachWire praised Mathletics, calling it an "intuitive and engaging resource; one that's bound to improve the learners' skills, knowledge and ability in maths, especially in numerical skills and speed". EducationWorld named Mathletics a "tremendous resource" and an educational website that "inject[ed] a little competition into lessons".
More critically, Macquarie University mathematics education expert Dr Michael Cavanagh described Mathletics to the Sydney Morning Herald as "drill and practise" learning software. He believes that "this type of program needs to be complemented - and this is when the teacher comes in, to develop a deeper and broader understanding". He substantiated his view that Mathletics is only "one piece of the puzzle" in mathematics learning by adding: "If all students do is stuff on Mathletics then that's a pretty shallow approach".
External links
Mathletics
3p learning
References
Australian educational websites
Educational math software
Year of establishment missing | Mathletics (educational software) | [
"Mathematics"
] | 2,015 | [
"Educational math software",
"Mathematical software"
] |
58,301,902 | https://en.wikipedia.org/wiki/V-topology | In mathematics, especially in algebraic geometry, the v-topology (also known as the universally subtrusive topology) is a Grothendieck topology whose covers are characterized by lifting maps from valuation rings.
This topology was introduced by Rydh and studied further by Bhatt and Scholze, who introduced the name v-topology, where v stands for valuation.
Definition
A universally subtrusive map is a map f: X → Y of quasi-compact, quasi-separated schemes such that for any map v: Spec (V) → Y, where V is a valuation ring, there is an extension V → W of valuation rings and a map Spec W → X lifting v.
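The lifting condition can be summarised by a commutative square; the following rendering is a sketch of the diagram implicit in the definition above:

```latex
\[
\begin{array}{ccc}
\operatorname{Spec} W & \longrightarrow & X \\
\big\downarrow & & \big\downarrow {\scriptstyle f} \\
\operatorname{Spec} V & \xrightarrow{\;\;v\;\;} & Y
\end{array}
\qquad \text{with } V \to W \text{ an extension of valuation rings.}
\]
```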
Examples
Examples of v-covers include faithfully flat maps and proper surjective maps. In particular, any Zariski covering is a v-covering. Moreover, universal homeomorphisms, such as the normalisation of the cusp and the Frobenius in positive characteristic, are v-coverings. In fact, the perfection of a scheme is a v-covering.
Voevodsky's h topology
See h-topology, relation to the v-topology
Arc topology
Bhatt and Mathew have introduced the arc-topology, which is similar in its definition, except that only valuation rings of rank ≤ 1 are considered. A variant of this topology, bearing the same relationship to the arc-topology that the cdh topology bears to the h-topology, called the cdarc-topology, was later introduced by Elmanto, Hoyois, Iwasa and Kelly (2020).
The Amitsur complex of an arc covering of perfect rings can be shown to be an exact complex.
See also
List of topologies on the category of schemes
References
Algebraic geometry | V-topology | [
"Mathematics"
] | 346 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
59,923,907 | https://en.wikipedia.org/wiki/List%20of%20named%20storms | Tropical cyclones are named to avoid confusion with the public and streamline communications, as more than one tropical cyclone can exist at a time. Names are drawn in order from predetermined lists, and are usually assigned to tropical cyclones with one-, three- or ten-minute windspeeds of more than . However, standards vary from basin to basin.
See also
Tropical cyclone
List of historical tropical cyclone names
Lists of tropical cyclone names
European windstorm names
Atlantic hurricane season
Pacific hurricane season
South Atlantic tropical cyclone
References
Named | List of named storms | [
"Physics"
] | 105 | [
"Weather",
"Physical phenomena",
"Weather-related lists"
] |
59,931,116 | https://en.wikipedia.org/wiki/Pillow-plate%20heat%20exchanger | Pillow-plate heat exchangers are a class of fully welded heat exchanger design, which exhibit a wavy, “pillow-shaped” surface formed by an inflation process. Compared to more conventional equipment, such as shell and tube and plate and frame heat exchangers, pillow plates are a quite young technology. Due to their geometric flexibility, they are used as well as “plate-type” heat exchangers and as jackets for cooling or heating of vessels. Pillow plate equipment is currently experiencing increased attention and implementation in process industry.
Construction
Pillow plates are manufactured by an inflation process, in which two thin metal sheets are spot-welded to each other over the entire surface by laser or resistance welding. The sides of the plates are sealed by seam welding, apart from the connecting ports. Finally, the gap between the thin metal sheets is pressurized by a hydraulic fluid, causing plastic forming of the plates, which eventually leads to their characteristic wavy surface.
In principle, there are two different types of pillow plates: single-embossed and double-embossed. The former commonly form the double walls of jacketed vessels, while the latter are assembled to a stack (bank) to manufacture pillow plate heat exchangers. Single-embossed pillow plates are formed when the base plate is significantly thicker than the top plate. The thinner top plate deforms, while the base plate remains plane.
Furthermore, pillow plates are commonly equipped with “baffle” seam weldings, which offer targeted flow guidance in the pillow plate channels in cases where flow distribution or fluid velocity might be an issue. A method for obtaining flow guidance by baffles in the channels between adjacent pillow plates in pillow plate heat exchangers has recently been proposed in the literature.
Due to their construction, pillow plates are hermetically sealed, they have a high structural stability and their manufacturing is mostly automated and highly flexible. Pillow plates can be operated at pressures > 100 MPa and temperatures of up to 800 °C.
Application
The application of pillow plates is very extensive, due to their favorable properties such as high geometric flexibility and good adaptivity to almost every process. Their implementation depends on their underlying construction, i.e. pillow plate banks or pillow plate jacketed tanks. The relatively flat external surface is easy to clean and suitable for high fouling or sanitary applications, but the internal surface has fine seams around each spot weld and is not easy to clean, therefore the internal surface is only suitable for non-fouling fluids like water, steam or refrigerants.
Pillow plate banks (heat exchangers)
Pillow plate banks are typically used in applications involving liquid-liquid, gas-liquid, high viscosity or dirty media, low pressure loss requirements, condensation (e.g. top condensers), falling film evaporation (e.g. paper & pulp industry), reboilers, water chilling, drying of solids, flake ice generation (food industry) and more. They are also commonly used as immersion chillers (e.g. in electroplating), where the banks are immersed directly into the tank. Banks can be constructed to allow the individual plates to be separated from the stack, allowing easy cleaning or maintenance.
Pillow plate jacketed tanks
The most extensive application of pillow plates to date is with jacketed vessels, because of their flexibility, full surface area coverage for heat transfer, low fluid hold-up, favorable manufacturing costs & time, and easy cleaning, especially in sterile applications. The tanks can be equipped with multiple jackets over its surface, including also the tank bottom, e.g. conical or dished, and can include additional cylindrical shells inside the tank. Typical areas of implementation of pillow plate jacketed tanks are in food and beverage industry and in chemical and pharmaceutical industry. These jackets are also referred to as "dimple jackets".
Other
Due to their geometrical flexibility, pillow plates can be customized/adapted to almost any geometry to offer targeted heat transfer where it is needed. Some examples are cooling of pipes in thermal processes or even battery packs and electric motors for electric vehicles in automotive industry.
Know-how and research on pillow plates
In contrast to more conventional heat exchangers, knowledge of the thermohydraulic performance of pillow plates and experience with their design are limited. To overcome this bottleneck, efforts are currently being made to develop commercial software tools. A rough overview of the state of the art on pillow plates can be found in the literature.
Research on pillow plates can be subdivided into three main categories: geometrical analysis, analysis of fluid flow and heat transfer in pillow plates and analysis of fluid flow and heat transfer in the gap between adjacent pillow plates.
Geometrical analysis
Methods for the calculation of surface area, fluid hold-up volume, cross-sectional area and hydraulic diameter, needed in thermohydraulic calculations, have been proposed in the literature. The mentioned geometrical parameters were determined using the finite element method (FEM), which imitates the inflation process during manufacturing of pillow plates. Moreover, theoretical burst pressures of pillow plates could be estimated with FEM.
Fluid flow and heat transfer in pillow plates (inner channels)
The complex wavy geometry in pillow plate channels promotes fluid mixing, which leads to favorable heat transfer rates but is also unfavorable for pressure loss (formation of recirculation regions in the wake of welding spots). Information on fluid flow and heat transfer in pillow plates is available in the literature, as are correlations for the calculation of the Darcy friction factor and Nusselt number in pillow plates over a wide range of geometrical parameter variations and process conditions.
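As an illustration of how such correlations are applied in practice, the sketch below computes a heat transfer coefficient and pressure loss for a pillow-plate channel from a generic power-law Nusselt and friction-factor correlation. The correlation constants, channel dimensions and fluid properties are placeholders, not values taken from the studies referenced above.

```python
# Illustrative only: generic correlation-based sizing of a pillow-plate channel.
# Constants C1, m1, C2, m2 are placeholders; real correlations come from the
# pillow-plate literature referenced in the text.

def hydraulic_diameter(cross_section_area, wetted_perimeter):
    return 4.0 * cross_section_area / wetted_perimeter

def nusselt(re, pr, C1=0.1, m1=0.7):
    return C1 * re**m1 * pr**(1.0 / 3.0)        # placeholder power-law form

def friction_factor(re, C2=0.5, m2=-0.25):
    return C2 * re**m2                          # placeholder power-law form

# Placeholder channel geometry and water-like fluid properties
d_h = hydraulic_diameter(cross_section_area=4e-4, wetted_perimeter=0.42)  # m
u   = 0.8       # mean velocity, m/s
rho = 998.0     # kg/m^3
mu  = 1.0e-3    # Pa*s
k   = 0.6       # W/(m*K)
cp  = 4180.0    # J/(kg*K)
L   = 1.2       # channel length, m

re = rho * u * d_h / mu
pr = cp * mu / k
h  = nusselt(re, pr) * k / d_h                            # heat transfer coefficient, W/(m^2 K)
dp = friction_factor(re) * (L / d_h) * 0.5 * rho * u**2   # Darcy-Weisbach pressure loss, Pa

print(f"Re = {re:.0f}, h = {h:.0f} W/m2K, dp = {dp:.0f} Pa")
```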
Fluid flow and heat transfer in the gap between adjacent pillow plates (outer channels)
Similar to the inner channels of pillow plates, the channels formed between adjacent pillow plates (outer channels) are also wavy and promote fluid mixing, which is in turn favorable for heat transfer rates. However, pressure loss in the outer channels is significantly lower than in the inner ones because of the absence of welding spots, which act as obstacles for the flow (flow around welding spots). Information on fluid flow and heat transfer in the outer channels of pillow plate heat exchangers is available in the literature.
Falling film flow over the surface of pillow plates
The reliable design of condensers, falling film evaporators and water chillers requires detailed knowledge of fluid dynamics and heat transfer of the falling liquid film over the surface of the pillow plates.
References
Heat exchangers | Pillow-plate heat exchanger | [
"Chemistry",
"Engineering"
] | 1,307 | [
"Chemical equipment",
"Heat exchangers"
] |
59,934,381 | https://en.wikipedia.org/wiki/Spectral%20G-index | The spectral G-Index is a variable that was developed to quantify the amount of short wavelength light in a visible light source relative to its visible emission (it is a measure of the amount of blue light per lumen). The smaller the G-index, the more blue, violet, or ultraviolet light a lamp emits relative to its total output. It is used in order to select outdoor lamps that minimize skyglow and ecological light pollution. The G-index was originally proposed by David Galadí Enríquez, an astrophysicist at Calar Alto Observatory.
Definition
The G-index is grounded in the system of astronomical photometry; its defining formula is expressed in terms of the following quantities:
G is the spectral G-index;
λ is the wavelength in nanometers;
E is the spectral power distribution of the lamp;
V(λ) is the luminosity function
The sums in the defining formula are to be taken using a step size of 1 nm. For lamps with absolutely no emissions below 500 nm (e.g. Low Pressure Sodium or PC Amber LED), the G-index would in principle be undefined. In practice, such lamps would be reported as having G greater than some value, due to the limits of measurement precision. The Regional Government of Andalusia has developed a spreadsheet to allow calculation of the G-index for any lamp for which the spectral power distribution is known, and it can also be calculated in the "Astrocalc" software or the f.luxometer web app.
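For orientation only, the sketch below shows how an index of this "blue light per lumen" type can be evaluated from a tabulated spectral power distribution using magnitude-style scaling. The band limits used here (380–500 nm for the blue region, 380–780 nm for the photopically weighted total) and the absence of a zero-point constant are assumptions of this sketch, not the official definition; the Andalusian spreadsheet or Astrocalc should be used for real calculations.

```python
# Hedged sketch: a blue-light-per-lumen index in astronomical-magnitude style.
# Band limits and normalisation are assumptions of this example, not the
# official definition of the G-index.
import numpy as np

def g_like_index(wavelength_nm, spd, v_lambda):
    """wavelength_nm: 1 nm grid; spd: lamp spectral power; v_lambda: luminosity function."""
    blue = (wavelength_nm >= 380) & (wavelength_nm <= 500)   # assumed blue band
    vis  = (wavelength_nm >= 380) & (wavelength_nm <= 780)   # assumed visible band
    ratio = spd[blue].sum() / (spd[vis] * v_lambda[vis]).sum()
    return -2.5 * np.log10(ratio)

# Toy spectrum: a "blue pump plus broad phosphor" shape, purely illustrative.
wl = np.arange(380, 781, 1.0)
spd = np.exp(-0.5 * ((wl - 450) / 10) ** 2) + 3.0 * np.exp(-0.5 * ((wl - 600) / 60) ** 2)
v = np.exp(-0.5 * ((wl - 555) / 50) ** 2)   # crude stand-in for the CIE V(lambda) curve
print(round(g_like_index(wl, spd, v), 2))
```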
The G-index does not directly measure light pollution, but rather says something about the color of light coming from a lamp. For example, since the equation defining G-index is normalised to total flux, if twice as many lamps are used, the G-index would not change; it is a measure of fractional light, not total light. Similarly, the definition of G-index does not include the direction in which light shines, so it is not directly related to skyglow, which depends strongly on direction.
Rationale
The ongoing global switch from (mainly) orange high pressure sodium lamps for street lighting to (mainly) white LEDs has resulted in a shift towards broad spectrum light, with greater short wavelength (blue) emissions. This switch is problematic from the perspective of increased astronomical and ecological light pollution. Short wavelength light is more likely to scatter in the atmosphere, and therefore produces more artificial skyglow than an equivalent amount of longer wavelength light. Additionally, both broad spectrum (white) light and short wavelength light tend to have greater overall ecological impacts than narrow band and long wavelength visible light. For this reason, lighting guidelines, recommendations, norms, and legislation frequently place limits on blue light emissions. For example, the "fixture seal of approval" program of the International Dark-Sky Association limits lights to have a correlated color temperature (CCT) below 3000 K, while the national French light pollution law restricts CCT to maximum 3000 K in most areas, and 2400 K or 2700 K in protected areas such as nature reserves.
The problem with these approaches is that CCT is not perfectly correlated with blue light emissions. Lamps with identical CCT can have quite different fractional blue light emissions. This is because CCT is based upon comparison to a blackbody light source, which is a poor approximation for LEDs and vapor discharge lamps such as high pressure sodium. The G-index was therefore developed for use in decision making for the purchase of outdoor lamps and in lighting regulations as an improved alternative to the CCT metric.
Use
In 2019, the European Commission's Joint Research Centre incorporated the G-index into their guidelines for the Green Public Procurement of road lighting. Specifically, in areas needing protection for astronomical or ecological reasons, they recommend the use of the G-index instead of CCT in making lighting decisions, because the G-index more accurately quantifies the amount of blue light. In their "core criteria", they recommend that "in parks, gardens and areas considered by the procurer to be ecologically sensitive, the G-index shall be ≥1.5". In the case that G-index could for some reason not be calculated, they suggest that CCT≤3000 K is likely to satisfy this criterion. In the stricter "comprehensive criteria", they recommend that parks and ecologically sensitive areas or areas at specified distances from optical astronomy observatories have a G-index greater than or equal to 2.0. Again, in this case if calculating the G-index is not possible, CCT≤2700 K is suggested.
The G-index is planned to be used by the Regional Government of Andalusia, specifically for the purpose of protecting the night sky. Depending on the "environmental zone", the regulation requires lighting to have a G value above 2, 1.5, or 1. In areas where astronomical activities are ongoing, it is expected that only monochromatic or quasi-monochromatic lamps will be used, with G>3.5 and in principle only emissions in the interval 585-605 nm.
Questionable Use Warning
The G-index has not been evaluated or adopted by a standards development organization (SDO), such as the CIE. Generally, for a specification to be used in a regulation or tender, it must go through the rigorous process of evaluation and adoption by an SDO. It is thus questionable for the EC Joint Research Center and the Andalusian Regional Government (and others) to suggest or prescribe mandatory requirements based on the G-index.
A measure focused solely on reducing blue light will not provide ecological protection. Because the intensity of light plays a role as strong or stronger than spectrum, putting the light in the right places (on road surfaces and sidewalks) and avoiding spillage into ecological regions is likely to be more effective than manipulating the spectrum of the light. Spectrum does play a role, but in order to prevent disturbance to sensitive animals, changes must be made to the spectrum which cannot be described by the G-index. Those changes are also species dependent. A specific (red-dominant) spectrum has been proven to be as good as darkness for many (but not all) light sensitive insect and bat species. An amber spectrum is proven to be less eco-friendly than a red spectrum for some species, although both have negligible blue content and ‘favorable’ G index. Therefore the use of spectral G-index is overly simplistic and may do more harm than good. The use of the G-index is therefore strongly discouraged for use in lighting specifications or regulations.
References
External links
More information, including a spreadsheet for calculating G-index, Regional Government of Andalusia (note: Spanish language page; English language handbook on index G, and LibreOffice spreadsheet in English for index computation, are linked from the lower part of the page.)
Information on Green Public Procurement of Street Lighting in the EU, Joint Research Centre
f.luxometer web tool, Online calculator for g-index and other indexes
Astrocalc software, Carlos Tapia Ayuga (note: Spanish language page, English instructions at bottom)
Color
Lighting
Radiometry | Spectral G-index | [
"Engineering"
] | 1,467 | [
"Telecommunications engineering",
"Radiometry"
] |
59,935,860 | https://en.wikipedia.org/wiki/Eiss%20Archive | Eiss Archive refers to the collection of documents and related memorabilia documenting the rescue by Polish diplomats of Jews threatened by the Holocaust during World War II. The archive is named after Chaim Yisroel Eiss, a Jewish Rabbi and activist who jointly set up the Ładoś Group.
History
The archive is named after Chaim Eiss, a Jewish activist, who during World War II co-created the Ładoś Group (also known as the Bernese Group), a group of Polish diplomats and Jewish activists led by the Polish ambassador to Switzerland in Bern, Aleksander Ładoś. During the war the group developed a system of illegal production of Latin American passports aimed at saving European Jews from The Holocaust. The documents are said to have made their way to Israel with one of Eiss’ descendants after World War II.
The documents that form the Archive were acquired by the Polish Ministry of Culture from a private collector in Israel in 2018. They were displayed in the Polish embassy in Switzerland in January 2019, and later were transferred to the Auschwitz-Birkenau State Museum in Poland.
Contents
The collection includes eight forged Paraguayan passports as well as correspondence between persons to be rescued and Polish diplomats and Jewish organisations, photos of Jews seeking to obtain the documents, and a list of thousands of individuals, Polish Jews in ghettos in occupied Poland, who corresponded with the rescue activists.
Implications
The documents in the Eiss archive helped establish that 330 people survived the Holocaust due to the actions of the Ładoś Group. Despite their efforts, 387 individuals corresponding with the group were identified as Holocaust victims even though they held the forged passports. The fate of 430 others known to have communicated with the group is not known.
See also
Hotel Polski
Righteous Among the Nations
References
Rescue of Jews during the Holocaust
The Holocaust in Poland
Archives in Poland
2019 establishments in Poland
Holocaust historical documents | Eiss Archive | [
"Biology"
] | 374 | [
"Rescue of Jews during the Holocaust",
"Behavior",
"Altruism"
] |
59,936,434 | https://en.wikipedia.org/wiki/C/1879%20Q1%20%28Palisa%29 | Palisa's Comet, also known formally as C/1879 Q1 by its modern nomenclature, is a parabolic comet that was barely visible to the naked eye in late 1879. It was the only comet discovered by Austrian astronomer, Johann Palisa.
Discovery and observations
Johann Palisa discovered this comet on 21 August 1879, initially mistaking it for a nebula not recorded in the catalogs of Messier and d'Arrest before confirming the object's motion a few hours later. At the time it was located within the constellation Ursa Major, where he described the comet as "round, small, but bright". One of the first ephemerides of the comet was calculated on September 5.
The comet was moving inbound through the inner Solar System between September and October 1879, enabling further observations and refining orbital calculations. Pietro Tacchini measured the coma diameter as 1.7' on October 7. Ralph Copeland described the comet as "bright and round" on October 19 while measuring the comet's spectra.
References
Notes
Citations
External links
Non-periodic comets
Hyperbolic comets
Astronomical objects discovered in 1879 | C/1879 Q1 (Palisa) | [
"Astronomy"
] | 225 | [
"Astronomy stubs",
"Comet stubs"
] |
59,937,019 | https://en.wikipedia.org/wiki/Urumiit | Urumiit or uruniit (Inuktitut syllabics: ᐅᕈᓅᑦ, uruniit; Greenlandic: urumiit) is a term used by native Inuit in Greenland and the Canadian High Arctic to refer to the feces of the rock ptarmigan (Lagopus muta) and the willow ptarmigan (Lagopus lagopus), which are considered a delicacy in their food cultures. The droppings are collected when they have dried out during the winter months (fresh droppings in the summer are thought to be unpleasant to eat), a time in which food sources are scarce, especially on land, so the pre-digested willow and birch plant matter in ptarmigan scat provides a much needed source of nutrition in a harsh environment. One ptarmigan may defecate as many as 50 times in one spot, so urumiit is very plentiful and easy to gather. The pellet-shaped droppings are generally cooked in rancidified seal fat before eating; sometimes mixed with seal or ptarmigan meat or blood. Historically in some areas, the meat cooked with urumiit is prepared by being pre-chewed by the women of a household. The smell of cooked urumiit in rancid fat has been compared to that of Gorgonzola cheese. It has been cited as a dish which non-Inuit are particularly likely to find disgusting, and as an example of how much taste in food can vary between cultural contexts.
See also
Coprophagia
References
Inuit cuisine
Foods and drinks produced with excrement
Feces | Urumiit | [
"Biology"
] | 331 | [
"Feces",
"Excretion",
"Animal waste products"
] |
59,937,667 | https://en.wikipedia.org/wiki/NGC%204092 | NGC 4092 is a spiral galaxy located 310 million light-years away in the constellation Coma Berenices. It was discovered by astronomer Heinrich d'Arrest on May 2, 1864. NGC 4092 is a member of the NGC 4065 Group and hosts an AGN.
See also
List of NGC objects (4001–5000)
References
External links
4092
038338
Coma Berenices
Astronomical objects discovered in 1864
Spiral galaxies
NGC 4065 Group
Active galaxies
07087 | NGC 4092 | [
"Astronomy"
] | 102 | [
"Coma Berenices",
"Constellations"
] |
59,937,780 | https://en.wikipedia.org/wiki/Katharina%20Ribbeck | Katharina Ribbeck is a German-American biologist. She is the Andrew (1956) and Erna Viterbi Professor of Biological Engineering at the Massachusetts Institute of Technology. She is known as one of the first researchers to study how mucus impacts microbial behavior. Ribbeck investigates both the function of mucus as a barrier to pathogens such as fungi, bacteria, and viruses and how mucus can be leveraged for therapeutic purposes. She has also studied changes that cervical mucus undergoes before birth, which may lead to a novel diagnostic for the risk of preterm birth.
Education
Ribbeck received her B.S. in biology from the University of Heidelberg in 1998. During her senior year, she attended the University of California, San Diego, to study neurobiology for her diploma thesis. She earned her Ph.D. in biology, also from the University of Heidelberg, in 2001.
Career
Upon completing her Ph.D., Ribbeck continued her research as a postdoctoral scientist at the European Molecular Biology Laboratory in Heidelberg, Germany, and then Harvard Medical School. After her postdoctoral research, she moved to Harvard University as an independent Bauer Fellow in 2007, where she began to investigate how particles and bacteria move through mucus barriers.
In 2010, Ribbeck moved to the Department of Biological Engineering at the Massachusetts Institute of Technology as an assistant professor. She attained tenure as a full professor in 2017.
Research on nuclear pore complexes
During her Ph.D. work, Ribbeck investigated the selective transport of molecules through the nuclear pore complex, which is partly mediated by a hydrogel barrier. With her Ph.D. advisor, Dirk Görlich, Ribbeck developed a selective phase model for molecular transport through the nuclear pore barrier. Görlich and Ribbeck also showed that molecular transport through nuclear pore complexes may be facilitated by hydrophobic interactions.
Research on mitotic spindles
As a postdoctoral researcher at the European Molecular Biology Laboratory, Ribbeck studied proteins involved in the organization of the mitotic spindle, a dynamic bundle consisting of proteins and molecules that aids in chromosome segregation during cell division. Her research contributed to the discovery of a novel protein (NuSAP) that plays a crucial role in mitotic spindle organization.
Research on mucus
In 2007, Ribbeck's research returned to hydrogels, with a specific focus on mucus, i.e., a large natural hydrogel that is closely related to the polymer network she and Görlich had proposed to exist within nuclear pore complexes. Her work has elucidated the role of mucins, a primary component of mucus, in human health. Ribbeck is known for her pioneering work in this field, which has shown that mucus plays an active role in protecting against harmful pathogens, including fungi, bacteria, and viruses. Specifically, her research has shown that mucins and their associated sugar chains (glycans) can "tame" pathogens by inhibiting virulence traits such as biofilm formation, cell adhesion, and toxin secretion.
She has shown that mucins prevent bacteria such as Pseudomonas aeruginosa and Streptococcus mutans, the bacteria that cause tooth decay, from forming biofilms, which make them hard to eradicate. Ribbeck demonstrated that mucin glycans can reduce the virulence of pathogens such as Pseudomonas aeruginosa, a bacterium that can cause illness in individuals with cystic fibrosis or compromised immune systems, by inhibiting the cell-cell communication, toxin secretion, and biofilm formation ability of these bacteria.
Ribbeck's work has also demonstrated the role of mucus in protecting against fungal infections. Her studies have shown that mucins and specific mucin glycans induce a morphological change, accompanied by a reduction in biofilm formation and cell adhesion, in Candida albicans, a fungal pathogen that causes a variety of diseases in humans. Her work has also shown that mucins found in multiple types of mucus, including human spit, can prevent fungal pathogens from causing disease in healthy humans.
Ribbeck identified a correlation between the properties of mucus in the cervix in pregnant women and the likelihood of preterm birth and has developed probes to test mucus permeability as a step towards diagnosing the risk for premature birth.
Ribbeck has extensively investigated the biophysical properties of mucus and other hydrogels and the mechanisms by which some particles and molecules, including viruses such as SARS-CoV-2, selectively pass through the barrier. Ribbeck has also studied hydrogels produced by pathogens and has found that the extracellular matrix formed by the pathogenic bacterium Pseudomonas aeruginosa protects the bacterium against antibiotics.
Ribbeck has investigated approaches for engineering mucus, with the aim of potentially influencing the population of bacteria in the human body. In collaboration with others, Ribbeck demonstrated that synthetic mucins can block toxins produced by Vibrio cholerae, the bacteria that causes cholera. She has also shown that purified foreign mucins can prevent viruses from infecting cells and suggested that they could be used to supplement the anti-viral activity of native mucins.
Ribbeck has given presentations about her work on mucus at the MIT Museum and the Boston Museum of Science. Regarding educating others on the importance of mucus in human health, she has stated: "The intention here is to really introduce a field to the generations to come, so they grow up understanding that mucus is not a waste product. It's an integral part of our physiology and a really important piece of our health. If we understand it, it can really give us a lot of information that will help us stay healthy and possibly treat diseases." In 2015, Ribbeck and her team produced a TED-Ed lesson to provide basic education about mucus and its role in human health. Ribbeck has been interviewed on NPR and STAT News and has been featured in articles in WIRED and MIT News.
Awards and recognitions
2003: Ruprecht-Karls Prize Heidelberg University
2007: Award for Genome-Related Research (Merck)
2013: John Kendrew Award (EMBL).
2014: Popular Science, "Brilliant 10"
2015: NSF CAREER award
2015: Junior Bose Award for Excellence in Teaching (MIT)
2016: Harold E. Edgerton Faculty Achievement Award (MIT)
2018: Professor Amar G. Bose Research Grant (MIT), given for "work that is unorthodox, and potentially world-changing".
References
External links
The Ribbeck lab
How mucus keeps you healthy (YouTube)
Science Friday: It's snot what you think
STAT News: Why mucus is the ‘unsung hero’ of the human body
WIRED: How the Sugars in Spit Tame the Body's Unruly Fungi
MIT Technology Review: The science of slime
Living people
American biophysicists
MIT School of Engineering faculty
Women biochemists
21st-century American women scientists
21st-century American chemists
Heidelberg University alumni
21st-century American physicists
Scientists from Darmstadt
Year of birth missing (living people)
American women academics
German emigrants to the United States | Katharina Ribbeck | [
"Chemistry"
] | 1,481 | [
"Biochemists",
"Women biochemists"
] |
59,939,359 | https://en.wikipedia.org/wiki/Conway%27s%2099-graph%20problem | In graph theory, Conway's 99-graph problem is an unsolved problem asking whether there exists an undirected graph with 99 vertices, in which each two adjacent vertices have exactly one common neighbor, and in which each two non-adjacent vertices have exactly two common neighbors. Equivalently, every edge should be part of a unique triangle and every non-adjacent pair should be one of the two diagonals of a unique 4-cycle. John Horton Conway offered a $1000 prize for its solution.
Properties
If such a graph exists, it would necessarily be a locally linear graph and a strongly regular graph with parameters (99,14,1,2). The first, third, and fourth parameters encode the statement of the problem: the graph should have 99 vertices, every pair of adjacent vertices should have 1 common neighbor, and every pair of non-adjacent vertices should have 2 common neighbors. The second parameter means that the graph is a regular graph with 14 edges per vertex.
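These parameters pass the standard arithmetic feasibility check for strongly regular graphs, k(k − λ − 1) = (v − k − 1)μ: here 14 × (14 − 1 − 1) = 168 and (99 − 14 − 1) × 2 = 168, so the condition does not rule such a graph out.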
If this graph exists, it cannot have symmetries that take every vertex to every other vertex. Additional restrictions on its possible groups of symmetries are known.
History
The possibility of a graph with these parameters was already suggested in 1969 by Norman L. Biggs,
and its existence noted as an open problem by others before Conway.
Conway himself had worked on the problem as early as 1975, but offered the prize in 2014 as part of a set of problems posed in the DIMACS Conference on Challenges of Identifying Integer Sequences.
Other problems in the set include the thrackle conjecture, the minimum spacing of Danzer sets, and the question of who wins after the move 16 in the game sylver coinage.
Related graphs
More generally, there are only five possible combinations of parameters for which a strongly regular graph could exist with each edge in a unique triangle and each non-edge forming the diagonal of a unique quadrilateral. It is only known that graphs exist with two of these five combinations. These two graphs are the nine-vertex Paley graph (the graph of the 3-3 duoprism) with parameters (9,4,1,2) and the Berlekamp–van Lint–Seidel graph with parameters (243,22,1,2). The parameters for which graphs are unknown are: (99,14,1,2), (6273,112,1,2) and (494019,994,1,2). The 99-graph problem describes the smallest of these combinations of parameters for which the existence of a graph is unknown.
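The nine-vertex case can be checked directly by computer. The sketch below builds the 3 × 3 rook's graph, which is isomorphic to the nine-vertex Paley graph mentioned above (the construction is chosen only for convenience), and verifies the (9,4,1,2) conditions:

```python
# Verify that the 3x3 rook's graph has strongly regular parameters (9, 4, 1, 2):
# every vertex has 4 neighbours, adjacent pairs share 1 common neighbour,
# non-adjacent pairs share 2.
from itertools import combinations

vertices = [(r, c) for r in range(3) for c in range(3)]
adj = {v: {w for w in vertices
           if w != v and (w[0] == v[0] or w[1] == v[1])} for v in vertices}

assert all(len(adj[v]) == 4 for v in vertices)
for u, v in combinations(vertices, 2):
    common = len(adj[u] & adj[v])
    assert common == (1 if v in adj[u] else 2)
print("parameters (9, 4, 1, 2) verified")
```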
References
Strongly regular graphs
Unsolved problems in graph theory
John Horton Conway | Conway's 99-graph problem | [
"Mathematics"
] | 545 | [
"Unsolved problems in mathematics",
"Mathematical problems",
"Unsolved problems in graph theory"
] |
59,939,845 | https://en.wikipedia.org/wiki/Differentiable%20programming | Differentiable programming is a programming paradigm in which a numeric computer program can be differentiated throughout via automatic differentiation. This allows for gradient-based optimization of parameters in the program, often via gradient descent, as well as other learning approaches that are based on higher order derivative information. Differentiable programming has found use in a wide variety of areas, particularly scientific computing and machine learning. One of the early proposals to adopt such a framework in a systematic fashion to improve upon learning algorithms was made by the Advanced Concepts Team at the European Space Agency in early 2016.
Approaches
Most differentiable programming frameworks work by constructing a graph containing the control flow and data structures in the program. Attempts generally fall into two groups:
Static, compiled graph-based approaches such as TensorFlow, Theano, and MXNet. They tend to allow for good compiler optimization and easier scaling to large systems, but their static nature limits interactivity and the types of programs that can be created easily (e.g. those involving loops or recursion), as well as making it harder for users to reason effectively about their programs. A proof of concept compiler toolchain called Myia uses a subset of Python as a front end and supports higher-order functions, recursion, and higher-order derivatives.
Operator-overloading, dynamic graph-based approaches such as PyTorch, the autograd package for NumPy, and Pyaudi. Their dynamic and interactive nature lets most programs be written and reasoned about more easily. However, they lead to interpreter overhead (particularly when composing many small operations), poorer scalability, and reduced benefit from compiler optimization. A minimal sketch of this style is given below.
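The following sketch of the operator-overloading, dynamic-graph style assumes PyTorch is installed; the toy function is chosen only to show that ordinary Python control flow is differentiated exactly as it executes.

```python
# Minimal sketch of dynamic-graph automatic differentiation with PyTorch.
import torch

x = torch.tensor(2.0, requires_grad=True)

y = x * x
while y < 100.0:       # data-dependent control flow is handled naturally
    y = y * 2

y.backward()           # reverse-mode automatic differentiation
print(x.grad)          # dy/dx for the path actually taken (here y = 32*x**2, grad = 128)
```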
The use of just-in-time compilation has recently emerged as a possible solution to overcome some of the bottlenecks of interpreted languages. The C++ library heyoka and the Python package heyoka.py make extensive use of this technique to offer advanced differentiable programming capabilities (also at high orders). Zygote, a package for the Julia programming language, works directly on Julia's intermediate representation.
A limitation of earlier approaches is that they are only able to differentiate code written in a suitable manner for the framework, limiting their interoperability with other programs. Newer approaches resolve this issue by constructing the graph from the language's syntax or IR, allowing arbitrary code to be differentiated.
Applications
Differentiable programming has been applied in areas such as combining deep learning with physics engines in robotics, solving electronic structure problems with differentiable density functional theory, differentiable ray tracing, image processing, and probabilistic programming.
Multidisciplinary application
Differentiable programming is making significant strides in various fields beyond its traditional applications. In healthcare and life sciences, for example, it is being used for deep learning in biophysics-based modelling of molecular mechanisms. This involves leveraging differentiable programming in areas such as protein structure prediction and drug discovery. These applications demonstrate the potential of differentiable programming in contributing to significant advancements in understanding complex biological systems and improving healthcare solutions.
See also
Differentiable function
Machine learning
Notes
References
Differential calculus
Programming paradigms | Differentiable programming | [
"Mathematics"
] | 621 | [
"Differential calculus",
"Calculus"
] |
59,940,863 | https://en.wikipedia.org/wiki/Onion%20Test | The onion test is a way of assessing the validity of an argument for a functional role for junk DNA. It relates to the paradox that would emerge if the majority of eukaryotic non-coding DNA were assumed to be functional and the difficulty of reconciling that assumption with the diversity in genome sizes among species. The term "onion test" was originally proposed informally in a blog post by T. Ryan Gregory in order to help clarify the debate about junk DNA. The term has been mentioned in newspapers and online media, scientific journal articles, and a textbook. The test is defined as: The onion test is a simple reality check for anyone who thinks they have come up with a universal function for junk DNA. Whatever your proposed function, ask yourself this question: Can I explain why an onion needs about five times more non-coding DNA for this function than a human?Onions and their relatives vary dramatically in their genome sizes, without changing their ploidy, and this gives an exceptionally valuable window on the genomic expansion junk DNA. Since the onion (Allium cepa) is a diploid organism having a haploid genome size of 15.9 Gb, it has 4.9x as much DNA as does a human genome (3.2 Gb). Other species in the genus Allium vary hugely in DNA content without changing their ploidy. Allium schoenoprasum (chives) for example has a haploid genome size of 7.5 Gb, less than half that of onions, yet Allium ursinum (wild garlic) has a haploid genome size of 30.9 Gb, nearly twice (1.94x) that of onion and over four times (4.1x) that of chives. This extreme size variation between closely related species in the genus Allium is also part of the extended onion test rationale as originally defined:Further, if you think perhaps onions are somehow special, consider that members of the genus Allium range in genome size from 7 pg to 31.5 pg. So why can A. altyncolicum make do with one fifth as much regulation, structural maintenance, protection against mutagens, or [insert preferred universal function] as A. ursinum?
C-value paradox
Some researchers argue that the onion test is related to wider issues involving the C-value paradox and is only valid if one can justify the presumption that genome size has no bearing on organismal physiology. According to Larry Moran, the onion test is not an argument for junk DNA, but an approach to assessing possible functional explanations for non-coding DNA. According to him, it asks why Allium species need so much more of that proposed function than humans do, and why so much more (or less) than other closely related species of Allium; it does not address the variation in genome size (C-value) among organisms itself.
Responses
According to Christian creationist Jonathan McLatchie, the onion test is only valid if one can justify the presumption that genome size has no bearing on organismal physiology. Long sequences of repetitive DNA can be highly relevant to an organism and can contribute to transcription delays and developmental timing mechanisms. Furthermore, he argues that there is a positive correlation between genome size and cell volume for unicellular eukaryotes like plants and protozoa, and that the larger amount of DNA thus provides a selective advantage by contributing to the skeleton and volume of the nucleus of these cells. Larry Moran, who was addressed in McLatchie's post, replied at length: "[the onion test is] designed as a thought experiment to test a hypothesis about the possible function of large amounts of noncoding DNA. If you think you have an explanation for why most of the human genome has a function then you should explain how that accounts for the genomes of onions. Ryan Gregory knew that most so-called explanations look very silly when you try using them to account for genome size in onion species."
Ewan Birney (then head of the ENCODE Project) explained the difference as a product of polyploidy, and therefore not relevant to the discussion of humans: "(re: onions etc); polyploidy and letting your repeats 'go crazy' (bad piRNAs anyone) mean your genome can be v. big"
Similar claims were made by John Mattick in an article defending the ENCODE project against arguments disputing the main finding of the project: "The other substantive argument that bears on the issue, alluded to in the quotes that preface the Graur et al. article, and more explicitly discussed by Doolittle, is the so-called ‘C-value enigma’, which refers to the fact that some organisms (like some amoebae, onions, some arthropods, and amphibians) have much more DNA per cell than humans, but cannot possibly be more developmentally or cognitively complex, implying that eukaryotic genomes can and do carry varying amounts of unnecessary baggage. That may be so, but the extent of such baggage in humans is unknown. However, where data is available, these upward exceptions appear to be due to polyploidy and/or varying transposon loads (of uncertain biological relevance), rather than an absolute increase in genetic complexity. Moreover, there is a broadly consistent rise in the amount of non-protein-coding intergenic and intronic DNA with developmental complexity, a relationship that proves nothing but which suggests an association that can only be falsified by downward exceptions, of which there are none known."
Freeling et al. proposed a genome balance hypothesis that presumably accounts for the C-Value Paradox and passes the Onion Test.
References
Non-coding DNA
Evolutionary biology | Onion Test | [
"Biology"
] | 1,173 | [
"Evolutionary biology"
] |
64,979,699 | https://en.wikipedia.org/wiki/Gale%20diagram | In the mathematical discipline of polyhedral combinatorics, the Gale transform turns the vertices of any convex polytope into a set of vectors or points in a space of a different dimension, the Gale diagram of the polytope. It can be used to describe high-dimensional polytopes with few vertices, by transforming them into sets with the same number of points, but in a space of a much lower dimension. The process can also be reversed, to construct polytopes with desired properties from their Gale diagrams. The Gale transform and Gale diagram are named after David Gale, who introduced these methods in a 1956 paper on neighborly polytopes.
Definitions
Transform
Given a d-dimensional polytope with n vertices, adjoin 1 to the Cartesian coordinates of each vertex to obtain a (d + 1)-dimensional column vector. The matrix A of these n column vectors has dimensions (d + 1) × n, defining a linear mapping from n-space to (d + 1)-space, surjective with rank d + 1. The kernel of A describes linear dependencies among the original vertices with coefficients summing to zero; this kernel has dimension n − d − 1. The Gale transform of A is a matrix B of dimension n × (n − d − 1), whose column vectors are a chosen basis for the kernel of A. Then B has n row vectors of dimension n − d − 1. These row vectors form the Gale diagram of the polytope. A different choice of basis for the kernel changes the result only by a linear transformation.
Note that the n vectors in the Gale diagram are in natural bijection with the n vertices of the original d-dimensional polytope, but the dimension of the Gale diagram is smaller than that of the polytope whenever n ≤ 2d.
A proper subset of the vertices of a polytope forms the vertex set of a face of the polytope, if and only if the complementary set of vectors of the Gale transform has a convex hull that contains the origin in its relative interior.
Equivalently, the subset of vertices forms a face if and only if its affine span does not intersect the convex hull of the complementary vectors.
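As a concrete illustration of the construction, the sketch below computes a Gale transform of the regular octahedron (d = 3, n = 6) numerically, using a null-space basis obtained from the singular value decomposition; the particular basis, and hence the exact vectors printed, depend on this arbitrary choice.

```python
# Sketch: Gale transform of the regular octahedron (d = 3, n = 6) via a
# null-space basis of the 4 x 6 matrix of homogenised vertex coordinates.
import numpy as np

vertices = np.array([[ 1, 0, 0], [-1, 0, 0],
                     [ 0, 1, 0], [ 0,-1, 0],
                     [ 0, 0, 1], [ 0, 0,-1]], dtype=float)
n, d = vertices.shape                      # n = 6 vertices, d = 3 dimensions

A = np.vstack([vertices.T, np.ones(n)])    # (d+1) x n matrix; each column is a vertex with 1 adjoined
_, s, Vt = np.linalg.svd(A)
kernel_basis = Vt[np.count_nonzero(s > 1e-9):]   # rows spanning the kernel, dimension n - d - 1 = 2

gale_vectors = kernel_basis.T              # one (n - d - 1)-dimensional row vector per vertex
print(np.round(gale_vectors, 3))
# Opposite vertices of the octahedron receive equal Gale vectors, giving the
# "three pairs of equal points" described later in the article.
```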
Linear diagram
Because the Gale transform is defined only up to a linear transformation, its nonzero vectors can be normalized to all be (n − d − 1)-dimensional unit vectors. The linear Gale diagram is a normalized version of the Gale transform, in which all the vectors are zero or unit vectors.
Affine diagram
Given a Gale diagram of a polytope, that is, a set of n unit and zero vectors in an (n − d − 1)-dimensional space, one can choose an (n − d − 2)-dimensional subspace through the origin that avoids all of the nonzero vectors, and a parallel subspace that does not pass through the origin. Then, a central projection from the origin to this parallel subspace will produce a set of (n − d − 2)-dimensional points. This projection loses the information about which vectors lie above and which lie below it, but this information can be represented by assigning a sign (positive, negative, or zero) or equivalently a color (black, white, or gray) to each point. The resulting set of signed or colored points is the affine Gale diagram of the given polytope. This construction has the advantage, over the Gale transform, of using one less dimension to represent the structure of the given polytope.
Gale transforms and linear and affine Gale diagrams can also be described through the duality of oriented matroids.
As with the linear diagram, a subset of vertices forms a face if and only if there is no affine function (a linear function with a possibly nonzero constant term) that assigns a non-negative value to each positive vector in the complementary set and a non-positive value to each negative vector in the complementary set.
Examples
The Gale diagram is particularly effective in describing polyhedra whose numbers of vertices are only slightly larger than their dimensions.
Simplices
A d-dimensional polytope with d + 1 vertices, the minimum possible, is a simplex. In this case, the linear Gale diagram is 0-dimensional, consisting only of zero vectors. The affine diagram has d + 1 gray points.
One additional vertex
In a d-dimensional polytope with d + 2 vertices, the linear Gale diagram is one-dimensional, with the vector representing each point being one of the three numbers −1, 0, or +1. In the affine diagram, the points are zero-dimensional, so they can be represented only by their signs or colors without any location value. In order to represent a polytope, the diagram must have at least two points with each nonzero sign. Two diagrams represent the same combinatorial equivalence class of polytopes when they have the same numbers of points of each sign, or when they can be obtained from each other by negating all of the signs.
For d = 2, the only possibility is two points of each nonzero sign, representing a convex quadrilateral. For d = 3, there are two possible Gale diagrams: the diagram with two points of each nonzero sign and one zero point represents a square pyramid, while the diagram with two points of one nonzero sign and three points with the other sign represents the triangular bipyramid.
In general, the number of distinct Gale diagrams with n = d + 2 points, and the number of combinatorial equivalence classes of d-dimensional polytopes with d + 2 vertices, is ⌊d²/4⌋.
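The count can be checked by a short enumeration of the sign patterns described above (at least two points of each nonzero sign, identified under negating all signs). The script below is an illustrative check, assuming that distinct diagrams correspond exactly to distinct multisets of signs as stated above:

```python
# Count Gale diagrams for d-polytopes with d + 2 vertices: multisets of signs
# (+, -, 0) on d + 2 points with at least two of each nonzero sign, counted up
# to negating every sign, and compare with floor(d^2 / 4).
def count_diagrams(d):
    n = d + 2
    patterns = set()
    for plus in range(2, n + 1):
        for minus in range(2, n - plus + 1):
            patterns.add(frozenset([(plus, minus), (minus, plus)]))  # identify with global negation
    return len(patterns)

for d in range(2, 9):
    assert count_diagrams(d) == d * d // 4
print("matches floor(d^2/4) for d = 2..8")
```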
Two additional vertices
In a d-dimensional polytope with d + 3 vertices, the linear Gale diagram consists of points on the unit circle (unit vectors) and at its center. The affine Gale diagram consists of labeled points or clusters of points on a line. Unlike for the case of d + 2 vertices, it is not completely trivial to determine when two Gale diagrams represent the same polytope.
Three-dimensional polyhedra with six vertices provide natural examples where the original polyhedron is of a low enough dimension to visualize, but where the Gale diagram still provides a dimension-reducing effect.
A regular octahedron has linear Gale diagram comprising three pairs of equal points on the unit circle (representing pairs of opposite vertices of the octahedron), dividing the circle into arcs of angle less than π. Its affine Gale diagram consists of three pairs of equal signed points on the line, with the middle pair having the opposite sign to the outer two pairs.
A triangular prism has linear Gale diagram comprising six points on the circle, in three diametrically opposed pairs, with each pair representing vertices of the prism that are adjacent on two square faces of the prism. The corresponding affine Gale diagram has three pairs of points on a line, like the regular octahedron, but with one point of each sign in each pair.
Applications
Gale diagrams have been used to provide a complete combinatorial enumeration of the -dimensional polytopes with vertices, and to construct polytopes with unusual properties. These include:
The Perles polytope, an 8-dimensional polytope with 12 vertices that cannot be realized with rational Cartesian coordinates. Micha Perles constructed it from the Perles configuration (nine points and nine lines in the plane that cannot be realized with rational coordinates) by doubling three of the points, assigning signs to the resulting 12 points, and treating the resulting signed configuration as the Gale diagram of a polytope. Although irrational polytopes are known with dimension as low as four, none are known with fewer vertices.
The Kleinschmidt polytope, a 4-dimensional polytope with 8 vertices, 10 tetrahedral facets, and one octahedral facet, constructed by Peter Kleinschmidt. Although the octahedral facet has the same combinatorial structure as a regular octahedron, it is not possible for it to be regular. Two copies of this polytope can be glued together on their octahedral facets to produce a 10-vertex polytope in which some pairs of realizations cannot be continuously deformed into each other.
The bipyramid over a square pyramid is a 4-dimensional polytope with 7 vertices having the dual property, that the shape of one of its vertex figures (the apex of its central pyramid) cannot be prescribed. Originally found by David W. Barnette, it was rediscovered by Bernd Sturmfels using Gale diagrams.
The construction of small "unneighborly polytopes", that is, polytopes without a universal vertex, and "illuminated polytopes", in which every vertex is incident to a diagonal that passes through the interior of the polytope. The cross polytopes have these properties, but in 16 or more dimensions there exist illuminated polytopes with fewer vertices, and in 6 or more dimensions the illuminated polytopes with the fewest vertices need not be simplicial. The construction involves Gale diagrams.
Notes
References
Polyhedral combinatorics | Gale diagram | [
"Mathematics"
] | 1,748 | [
"Polyhedral combinatorics",
"Combinatorics"
] |
64,981,054 | https://en.wikipedia.org/wiki/2020%20United%20Kingdom%20school%20exam%20grading%20controversy | Due to the COVID-19 pandemic in the United Kingdom, all secondary education examinations due to be held in 2020 were cancelled. As a result, an alternative method had to be designed and implemented at short notice to determine the qualification grades to be awarded to students for that year. A standardisation algorithm was produced in June 2020 by the regulator Ofqual in England, Qualifications Wales in Wales, Scottish Qualifications Authority in Scotland, and CCEA in Northern Ireland. The algorithm was designed to combat grade inflation, and was to be used to moderate the existing but unpublished centre-assessed grades for A-Level and GCSE students. After the A-Level grades were issued, and after criticism, Ofqual, with the support of HM Government, withdrew these grades. It issued all students the Centre Assessed Grades (CAGs), which had been produced by teachers as part of the process. The same ruling was applied to the awarding of GCSE grades, just a few days before they were issued: CAG-based grades were the ones released on results day.
A similar controversy erupted in Scotland, after the Scottish Qualifications Authority marked down as many as 75,000 predicted grades to "maintain credibility", and later agreed to upgrade the results and issue new exam certificates. The Scottish Government apologised for the controversy, with Nicola Sturgeon, the First Minister of Scotland saying of the situation that the Scottish Government "did not get it right".
Background
In England, Wales and Northern Ireland, students sit General Certificate of Secondary Education (GCSE) and A-Level exams, typically at ages 16 and 18 respectively. Similar but equivalent international versions of these qualifications are offered by UK exam boards.
On 18 March 2020, the government decided to cancel all examinations in England due to the COVID-19 pandemic, although the regulator, Ofqual, had advised that holding exams in a socially distanced manner was the best option. The same cancellation decision was taken by the Scottish, Welsh and Northern Ireland devolved governments. The governments announced that, in their place, grades were to be based on teacher predictions which would be moderated to prevent grade inflation. Overseas exams provided by CIE were cancelled on 23 March 2020, and grades were issued on the same basis as in England.
Secretary of State for Education Gavin Williamson stated that his "priority now is to ensure no young person faces a barrier when it comes to moving on to the next stage of their lives – whether that's further or higher education, an apprenticeship or a job" and that he had "asked exam boards to work closely with the teachers who know their pupils best to ensure their hard work and dedication is rewarded and fairly recognised." Students unhappy with their calculated grades would be able to appeal through their school, or sit exams in the autumn.
For homeschooled students, or those retaking exams, Ofqual stated they may not receive a grade, and would have to sit exams in 2021 because of a "lack of any credible alternatives identified". It was estimated that over 20,000 students would be affected, and would be unable to move on to college or university.
Standardisation algorithm
A grades standardisation algorithm was produced by Ofqual, the regulator of qualifications, exams and tests in England. It was designed to combat grade inflation, and was to be used to standardise or moderate the teacher-predicted grades for A Level and GCSE qualifications.
A-Level results
The A-Level grades were announced in England, Wales and Northern Ireland on 13 August 2020. Nearly 36% were one grade lower than teachers' predictions and 3% were down two grades. By comparison, 79% of university entrants in 2019 did not achieve their predicted grades.
Reaction
The release of results resulted in a public outcry. Particular criticism was made of the disparate effect of the grading algorithm, which downgraded the results of those who attended state schools and upgraded the results of pupils at privately funded independent schools, thus disadvantaging pupils of a lower socio-economic background, in part due to the algorithm's behaviour around small cohort sizes; private schools consequently saw a bigger yearly increase in the proportion of students getting As and A*s than others.
Students and teachers felt deprived and upset following the controversial algorithm calculation and protested against it, with many demanding that Prime Minister Boris Johnson and his government take immediate action. In response to the public outcry, on 15 August, Gavin Williamson said that the grading system was here to stay and that there would be "no U-turn, no change". Williamson criticised Scottish ministers for their u-turn the week prior, stating that awarding unmoderated grades would be "unwise" and cause "rampant grade inflation". Instead, he suggested that schools appeal swiftly on behalf of affected students, to ensure any errors could be amended. Boris Johnson stated that the results were "robust and dependable".
Legal action, in the form of judicial review, was initiated by multiple students and legal advocacy organisations such as the Good Law Project.
A-Level results revised
On 17 August, Ofqual and Secretary of State for Education Gavin Williamson agreed that grades would be reissued using unmoderated teacher predictions. As a result, there was an annual increase by more than 10 percentage points in the number of top grades awarded (from 25.2% to an estimated 37.7%), the biggest increase for at least 20 years.
The initial algorithm 'upgraded' students, leading 100,000 to secure their firm university choices, which filled courses at top universities. The switch to teacher-assessed grades meant that a further 15,000, who at first missed their firm offers, then met their grade requirements. This caused a capacity issue that meant that some oversubscribed universities, such as Durham University, had to offer incentives for students to defer their place to the following academic year. Incentives from Durham included money and a guarantee of accommodation choice.
GCSE results
On 20 August 2020 the GCSE results were released. After the problems arising from the use of the grade algorithm for A-Levels, it was decided that GCSE grades awarded to each student would be the higher of the teacher predicted result or algorithm standardised result for each subject they took.
Vocational and technical qualifications (BTEC) results
A further 200,000 students who had taken the level one and two vocational qualifications were told on 19 August 2020, hours before results day, that they would not receive them on time. About 250,000 level-three grades, which had already been awarded, were also reassessed; these vocational equivalents to A-Levels had been given a result at the same time as the A-Levels were released. The examining board, Pearson Edexcel, withdrew them when the controversy broke, re-marked them upwards, and issued revised certificates on a rolling basis in the week beginning 24 August.
Aftermath
On 25 August 2020, Sally Collier resigned from the position of chief regulator of Ofqual following the grading controversy. Three days later, Permanent Secretary Jonathan Slater, the most senior civil servant at the Department for Education (DfE), stood down. Subsequently, the government was accused of scapegoating civil servants and avoiding accountability.
On 1 September, the question of blame was reopened by The Guardian. In a report, OCR, one of the exam boards, told Williamson that the algorithm was producing some rogue results. But Williamson and the DfE were told by Ofqual that the appeals procedure would correct the few rogue results. OCR informed them that this was more than a few results and that patterns could be observed, such as students with better results than a low-performing group the year before.
On 2 September, Ofqual's chair Roger Taylor appeared before the Education Select Committee of the House of Commons during their inquiry into the impact of COVID-19 on education and children's services. He apologised to students, parents and teachers, and stated that the Secretary of State made the decisions to cancel examinations and to abruptly withdraw the procedure to challenge calculated A-level grades.
Scottish Highers
On 4 August 2020, secondary school students in Scotland received their Higher grades. Having also been unable to take their exams because of the pandemic, their grades were estimated by teachers, but the body awarding the qualifications was reported to have downgraded around a quarter of the marks awarded in order to "maintain credibility". Following criticism of the system from teachers and students, on 10 August, First Minister Nicola Sturgeon apologised for the controversy, saying the Scottish Government "did not get it right". The following day, on 11 August, the Scottish Government agreed to upgrade thousands of exam results, and accept teachers' estimates of pupils' results. On 18 August, the Scottish Qualifications Authority announced that 75,000 new exam certificates would be issued.
See also
2000 SQA examinations controversy (Scotland)
2020 AP exams controversy (United States and other countries)
Impact of the COVID-19 pandemic on education in the United Kingdom
Government by algorithm
Impact of the COVID-19 pandemic on education
Social impact of the COVID-19 pandemic in the United Kingdom
References
External links
Taking exams during the coronavirus (COVID-19) outbreak – guidance from the Department for Education, published 20 March 2020, updated 27 August
"Your results, what next?" – guidance from Ofqual, via Internet Archive:
27 July – first version, archived 28 July
20 August – updated after method changed, archived 20 August
Education Committee Oral evidence: The Impact of Covid-19 on education and children's services, HC 254 Wednesday 2 September 2020
Code repository − Ofqual, published 7 December 2020
GCSE and A-Level grading
GCSE and A-Level grading
GCSE and A-Level grading
GCSE and A-Level grading
2020 controversies
GCSE and A-Level grading
Government by algorithm
Impact of the COVID-19 pandemic on education
GCSE and A-Level grading | 2020 United Kingdom school exam grading controversy | [
"Engineering"
] | 2,044 | [
"Government by algorithm",
"Automation"
] |
64,982,850 | https://en.wikipedia.org/wiki/Energy%20poverty%20and%20gender | Energy poverty is defined as the lack of access to affordable, sustainable energy services. Geographically, it is unevenly distributed across developing and developed countries. In 2019, an estimated 770 million people had no access to electricity, with approximately 95% of them in Asia and sub-Saharan Africa.
In developing countries, poor women and girls living in rural areas are significantly affected by energy poverty, because they are usually responsible for providing the primary energy for households. In developed countries, older women living alone are most affected by energy poverty due to low incomes and the high cost of energy services.
Even though energy access is an important climate change adaptation tool, especially for maintaining health (i.e. access to air conditioning, information etc.), a systematic review published in 2019 found that research does not account for these effects on vulnerable populations such as women.
Energy poverty has a disproportionate impact on women. Without access to other energy sources, 13% of the global population is compelled to collect wood for fuel. Of that population, women and girls contribute more than 85% of the work involved in gathering wood for fuel.
In developing countries
Domestic responsibilities
In developing countries, energy poverty has significant gender characteristics. Approximately 70% of the 1.3 billion people living in poverty in developing countries are women. Women living in rural areas are usually responsible for housework, including gathering fuel and water, cooking, and farming. Studies in India indicate that rural women provide approximately 92% of total household energy supply, and 85% of their energy for cooking is provided by biomass from forests or fields.
Health impacts from energy consumption
Energy poverty in rural households causes health problems for women and children. One health problem is indoor air pollution from traditional stoves; studies predict that cooking with biomass will lead to 1.5 million deaths per year by 2030. Other health risks are caused by the heavy workload of collecting fuel and by exposure to malnutrition. Meanwhile, the scarcity of fuel makes households less likely to use fuel for boiling water, which might increase the risk of water-borne diseases.
Powering medical equipment and tools, storing blood and vaccines, and performing basic health procedures after dark all depend on a reliable energy supply. An unreliable energy supply prevents patient care at night, especially for pregnant women during delivery and those undergoing emergency caesarean sections. These factors contribute to 95% of maternal mortality in sub-Saharan Africa.
Time poverty
Energy poverty further affects women by putting them in a situation of "time poverty", which refers to the lack of time for resting, leisure, working outside the home, getting an education, and so on. It is the consequence of spending a long time gathering fuel to supply domestic energy use. Forest degradation caused by climate change might exacerbate the current problem.
Participation in decision-making
Energy poverty and gender also intersect in decision-making and participation within the household. Studies have shown that in rural areas of developing countries, men usually have more power in making decisions about purchasing energy devices or new technologies. This is because men and women have different and distinct perceptions of energy needs. Excluding women from public discussion and the decision-making process is likely to lead to failure in addressing the effects of energy poverty on women.
Energy Poverty and Education
Energy poverty affects teaching and learning. Lack of access to energy reduces children's performance and attendance.
Example
In sub-Saharan African countries, energy poverty is especially challenging, due to the high cost of extending grid electricity to existing scattered rural settlements. For example, in Tanzania, energy poverty affects the livelihoods of the majority, with only 15.5% of the population having access to electricity. The lack of electricity means the absence of efficient energy services like cooking and lighting; hence the basic capabilities for development, like education, health and transportation, are restricted. In the face of energy poverty, the burden of supplying household energy falls disproportionately on women rather than men. A case study in Tanzania examines the impact of a women-oriented solar lighting social enterprise project on health, education, livelihood and gender equality. The results indicate that increasing the accessibility of energy services for women could contribute to empowering women, children and local families' development.
In developed countries
In developed countries, lone and older women are affected disproportionately by energy poverty. There are more women living alone than men because of their relatively longer life expectancy. These older women usually have smaller pensions to support themselves, because they mostly worked inside the home. The rise in energy costs affects the affordability of heating and cooling services at home. Data from the UK Office for National Statistics indicate that women have higher Excess Winter Mortality (EWM) than men, and that EWM among women under 65 increased from 8.2% to 12.4% between 2012 and 2013. Furthermore, increasing energy prices, relatively low incomes, and energy-inefficient houses together contribute to energy poverty in developed countries.
Components
There are gender gaps in the energy labor market, energy-related education, and decision-making processes in developed countries. In the European Union, men dominate the energy sector, making up 77.9% of the workforce. Studies show that this under-representation is attributed to the following reasons: a lack of necessary skills caused by the energy education gap, the perception of the energy sector as a stereotypically male domain, and a lack of opportunities for women working in energy sectors. The gendered nature of energy education is related to traditional images of 'feminine' or 'masculine' subjects, as well as the lack of mentoring programs engaging female students to study science subjects, like energy. Women are also under-represented in the decision-making process in energy sectors in developed countries. A study conducted in Germany, Sweden and Spain shows that no women worked in management groups or as board members in the 295 energy companies it investigated in 2010. A similar situation is observed in the public energy sector, with 82.7% of high-level positions occupied by men, though it is better in Nordic countries than in Mediterranean countries. These gender gaps contribute to the "gender blindness" of energy policies in developed countries.
Example
Caitlin Robinson (2019) conducted a study on gender and poverty in England. Using socio-spatial analysis, she argued that energy poverty can increase gendered vulnerabilities. Five dimensions of gendered socio-spatial energy vulnerability are examined, including
Exclusion from a productive economy
Unpaid reproductive, caring or domestic roles
Coping and helping others to cope
Susceptibility to physiological and mental health impacts
Lack of social protection during a life course
The results indicated that energy poverty is connected with economic and social activities and with health, but that the more complex effects of energy vulnerability and gender should be analyzed at the household level, since they are relatively individual.
Responses
Some research indicates that investing in low-emission energy technologies can increase access to modern energy services, which will benefit women living in energy poverty. Low-emission technologies are believed to be able to free poor women from fuel collection and drudgery, protect them from the air pollution caused by burning biomass, and give them time for education, participation in public discussion, and so on.
Other research argues that a purely technological approach is not enough, and suggests engaging local women in the decision-making process for locally appropriate energy programs.
Pueyo & Maestre (2019) further studied whether men and women benefit differently from electrification. The results indicate that electrification helps women access paid work, but not as much as men; women still have relatively lower-quality work after electrification. Policies that address gender mainstreaming are suggested to consider both women's existing domestic work and their access to profitable activities, hence empowering them for long-term development.
References
Energy
Gender
Gender equality | Energy poverty and gender | [
"Physics",
"Biology"
] | 1,584 | [
"Behavior",
"Physical quantities",
"Energy (physics)",
"Energy",
"Gender",
"Human behavior"
] |
64,985,942 | https://en.wikipedia.org/wiki/1%2C3%2C5-Triheptylbenzene | 1,3,5-Triheptylbenzene (also called sym-triheptylbenzene) is an aromatic organic compound with the chemical formula C27H48 and a molar mass of 372.67 g/mol. It can be prepared by hydrogenation (reduction) of 1,1',1''-(benzene-1,3,5-triyl)tris(heptan-1-one). Alternatively, 1-nonyne trimerizes to 1,3,5-triheptylbenzene when catalyzed by rhodium trichloride.
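As a rough check on the figures above, the rhodium-catalysed cyclotrimerization balances as follows; the structural formulas and the 124.22 g/mol value for 1-nonyne are supplied here for illustration and are not taken from the article:

```latex
% Mass-balance sketch of the alkyne trimerization (conditions and yield not implied).
3\ \underbrace{\mathrm{CH_3(CH_2)_6C\!\equiv\!CH}}_{\text{1-nonyne},\ \mathrm{C_9H_{16}},\ 124.22\ \mathrm{g\,mol^{-1}}}
\ \xrightarrow{\ \mathrm{RhCl_3}\ }\
\underbrace{\mathrm{C_6H_3(C_7H_{15})_3}}_{\text{1,3,5-triheptylbenzene},\ \mathrm{C_{27}H_{48}},\ 372.67\ \mathrm{g\,mol^{-1}}}
```

Three equivalents of the alkyne account for all 27 carbons and 48 hydrogens of the product, consistent with the stated molar mass (3 × 124.22 ≈ 372.67, within rounding).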
References
Alkylbenzenes | 1,3,5-Triheptylbenzene | [
"Chemistry"
] | 139 | [
"Organic chemistry stubs"
] |
64,988,125 | https://en.wikipedia.org/wiki/History%20of%20mobile%20games | The popularisation of mobile games began as early as 1997 with the introduction of Snake preloaded on Nokia feature phones, demonstrating the practicality of games on these devices. Several mobile device manufacturers included preloaded games in the wake of Snake's success. In 1999, the introduction of the i-mode service in Japan allowed a wide variety of more advanced mobile games to be downloaded onto smartphones, though the service was largely limited to Japan. By the early 2000s, the technical specifications of Western handsets had also matured to the point where downloadable applications (including games) could be supported, but mainstream adoption continued to be hampered by market fragmentation between different devices, operating environments, and distributors.
The introduction of the iPhone and its dedicated App Store provided a standard means for developers of any size to develop and publish games for the popular smartphone. Several early success stories from app developers in the wake of the App Store's launch in 2008 attracted a large number of developers to speculate on the platform. Most initial games were published as premium (pay-once) titles, but the addition of in-app purchases in October 2009 allowed games to try other models, with notable successes Angry Birds and Cut the Rope using a combination of free-to-try and ad-supported games. Apple's success with the App Store drastically altered the mobile landscape, and within a few years only its devices and Google's Android-based smartphones, served by the Google Play app store, remained as the dominant players.
A major transition in game monetization came with the introduction of Candy Crush Saga and Puzzle & Dragons, taking gameplay concepts from social-network games which generally required the player to wait some length of time after exhausting a number of turns for a day, and offering the use of in-app purchases to refresh their energy. These games generated revenue numbers previously unseen in the mobile game sector, and became the standard for many freemium games that followed. Many of the most successful games have hundreds of millions of players, and have annual revenues exceeding a year, with the top games breaking .
More recent trends have included hyper-casual games such as Crossy Road and location-based games like Pokémon Go.
Prior to mobile phones
Early precursors of mobile gaming include handheld electronic games and early handheld video game consoles, though these devices were always game-oriented with nearly no utility function. Nintendo's Gunpei Yokoi had conceived of their Game & Watch line - handheld games that also served as a digital timepiece - after seeing a bored businessman on a commuter train pass time by using a calculator to play makeshift games.
Personal digital assistants (PDAs), themselves precursors to modern smartphones, arrived in 1984, and early models included built-in or add-on games, such as on the Sharp Wizard in 1989. As most PDAs used low-resolution monochromatic liquid crystal displays (LCDs) designed for displaying text rather than graphics, these games tended to be simple, including block or tile games like Tetris. These types of games carried over into some of the earlier smartphone models, such as the Hagenuk MT-2000 in 1993, but did not achieve as much popularity.
Introducing gaming on smartphones (1997−2006)
In 1997, Nokia introduced its Nokia 6110 mobile phone which included Snake. Snake proved to be one of the phone's popular features, and Nokia continued to include the game, or a variation of it, on nearly every phone it released since, with about 400 million devices shipped with the game installed as of 2016.
In 1999, NTT Docomo launched the i-mode mobile platform in Japan, allowing mobile games to be downloaded onto smartphones. Several Japanese video game developers announced games for the i-mode platform that year, such as Konami announcing its dating simulation Tokimeki Memorial. The same year, Nintendo and Bandai were developing mobile phone adapters for their handheld game consoles, the Game Boy Color and WonderSwan, respectively. By 2001, i-mode had users in Japan, along with more advanced handsets with graphics comparable to 8-bit consoles. A wide variety of games were available for the i-mode service, along with announcements from established video game developers such as Taito, Konami, Namco, and Hudson Soft, including ports of classic arcade games and 8-bit console games.
Snake showed there was a viable interest in expanding the capabilities of mobile phones for gaming applications. With the introduction of the Wireless Application Protocol (WAP), many mobile phones were able to access limited browser-based games, and later to download new apps that could be purchased from their wireless carrier or a third-party distributor to use on their phone. However, at this stage, in the early 2000s, there were wide discrepancies in the technologies available in terms of both hardware and software. Phones still came in a wide range of form factors, input features, and screen resolutions, so game developers typically focused their efforts on specific software platforms and subsets of available devices. Additionally, a range of software platform standards, like J2ME, Macromedia Flash Lite, DoJa, and Binary Runtime Environment for Wireless (BREW), existed, the implementations of which varied by phone manufacturer and model, further limiting the portability of games. Thus, while games were developed for mobile devices over the next several years, they tended to be limited. Mobile game discoverability was further complicated by the limitations of the early mobile internet. Games were often primarily offered via a content store provided by a wireless carrier (the "carrier deck"). Publishers would license games for inclusion on these portals. These stores tended to be largely text-based, offering very limited descriptions of products or sophisticated search and navigation. As a result, games promoted by carriers (thereby appearing nearer the top of the store) tended to be considerably more successful, while others listed below would not be seen by many users who did not scroll beyond the first page of the deck.
Prior to 2007, Japan was the leading developer for games on handsets since most of the primary handset developers were located there and smartphones had a greater proliferation among the population. A wide array of various genres were tried, including virtual pet games which used early camera phone features as part of the gameplay cycle.
Meanwhile, handheld consoles still typically offered superior gaming experiences compared to the limited smartphone games; Nintendo had released its Game Boy Advance in 2001 as a successor to the widely popular Game Boy. To try to merge the two markets, Nokia released the N-Gage in 2003, designed as both a handheld console and a phone. The N-Gage was able to offer similar video games as the Advance, but even with its N-Gage QD redesign in 2004, the unit was a commercial failure.
The iPhone and the App Store (2007−2008)
Apple, Inc. had been an early player in the PDA market with the Apple Newton, but Steve Jobs had discontinued the line in 1998 to focus the company's hardware towards devices like the iMac and iPod. Under Jobs' direction, the same teams worked to develop the iPhone, which Apple first released in June 2007. Among key hardware features in the iPhone was a large random access memory (RAM) size compared to most other smartphones on the market as well as a larger screen, making it capable of running more complex apps, and a new operating system that could handle multitasking, far surpassing any other device on the market at the time. The iPhone also included various sensors such as an accelerometer, and also included a capacitive touchscreen that did not require any stylus and could be controlled by a finger, with later models adding support for multipoint sensing. In 2008, alongside the iPhone 3G, Apple released an iPhone OS software development kit, allowing developers to officially and inexpensively develop native apps (whereas previously, only web apps were allowed and native apps could only be installed through jailbreaking), which could be published through the newly available App Store.
Developers, including game developers, rushed to take advantage of the App Store. At launch, there were 500 apps, while six months later, there were over 15,000, along with over half a billion app downloads. These figures doubled three months later (circa March 2009), and by November 2009, the App Store had over 100,000 apps with over 2 billion downloads.
Gaming applications were one major area that found success on the App Store. One such early success was Trism, a tile-matching game incorporating the phone's accelerometer, released near the App Store launch and developed by a single person, Steve Demeter. Demeter had priced the game at , and within two months of launch had made in profit; Demeter was highly publicized as a rags-to-riches story on the lucrative nature of developing for the iPhone. Another early success was Tap Tap Revenge, a rhythm game by Tapulous, which also was released at the App Store's launch and saw over one million downloads in 20 days. Following on similar stories, numerous smaller developers tried to release the next big game, while larger game publishers took to their existing catalogs and released mobile-compatible titles where possible. PopCap Games, which had already had success with a line of computer and browser-based puzzle games such as Bejeweled, was one of the first companies to transition their products to mobile versions in 2009, which helped them rapidly grow their mobile business and led to their acquisition by Electronic Arts in 2011, allowing Electronic Arts to compete in the mobile and casual games area.
Beyond games, the iPhone and App Store caused most other smartphone manufacturers, such as BlackBerry and Symbian, to abandon their own attempts to build out a more sophisticated smartphone environment. BlackBerry had attempted to release its own app store but failed to gain the success of Apple's. Only two major competitors remained after the iPhone's introduction: the Android-based devices (built around the Java language), using the operating system that had been developed by Google, and Windows Phone by Microsoft, which had close interoperability with its Microsoft Windows operating system. Both took up the same approach as Apple, introducing app stores in Google Play and the Windows Phone Store, respectively, with similar developer policies. Ultimately, Microsoft ceased active development of Windows Phone, leaving iOS and Android as the principal players in the mobile operating system and app store market.
Angry Birds: transitioning from premium to free-to-play (2009–2011)
At launch, the iOS App Store only allowed single-time purchases of apps, akin to how one purchased music from iTunes, so most games were purchased on the traditional "premium" model, buying the game upfront. In October 2009, the store introduced "in-app purchases" (IAP), microtransactions that an app could offer with the transaction made through the App Store's storefront. Some existing app developers were quick to take advantage of this; Tapulous released Tap Tap Revenge 3 shortly after this change, which included IAP to obtain new songs. Similar IAPs were added to the Google Play store on Android as well.
In December 2009, Rovio Entertainment released Angry Birds on the App Store, a physics-based game involving launching cartoonish birds at structures occupied by pigs that have stolen their eggs, aiming to do as much damage as possible, which had been inspired by the browser game Crush the Castle and others like it. As released on the iOS store, it was still a premium game at , and its low cost, as well as being featured by Apple in February 2010, led to it becoming highly successful and leading the Top Paid App charts by mid-2010. When Rovio ported the game to Android, they introduced an ad-supported version that could be downloaded for free, but a user could pay to remove the ads, such that Rovio gained revenue from both the IAP and the ads, which shortly after the Android version's release in October 2010 was estimated to be about a month. Another game, Cut the Rope, released on both iOS and Android at the same time, followed a model of releasing a free version with a few levels, and with an in-game purchase to unlock the rest of the game. It was one of the fastest-selling games on the iOS App Store at that time according to its publisher Chillingo.
Mobile game development was also not limited to the English-speaking world, as Japan and many Asian countries had an active mobile development scene. As the app stores on iOS and Android had regional distinctions, apps developed in different regions typically would not be available in others unless translated or localized. An important region during this period was China. Separate from most other markets, the Chinese video game industry had been relatively small prior to 2008 due to poor economic conditions. The Chinese government set about trying to improve the economic welfare of the country and introduce more high-technology education and jobs. However, computer costs remained high and importing consoles was difficult, so many used PC bangs, giving rise to free-to-play or subscription-based games like massively multiplayer online games (MMOs). China is also recognized for creating social-network games with Happy Farm, developed by 5 Minutes in 2008, which served as direct inspiration for FarmVille.
Apple further introduced the iPad in 2010, its tablet computer based on similar design principles as the iPhone. While tablets had existed before as descendants of PDAs, the iPad was the first tablet to achieve mass-market success. Part of the iPad's success was using iOS for its operating system, assuring that all apps and games on the App Store worked for the iPad as they did for the iPhone. Android-based phone manufacturers followed suit with their own suites of Android-based tablets in the years that followed to create a similar dichotomy. Mobile game developers had a whole new audience available to them without any extra work, while others saw potential in tablet-based games due to the larger screen space that they offered. These could be geared towards children for educational purposes, or towards the elderly, whose hand dexterity may not be agile enough for a smaller screen. Amazon developed its own Amazon Fire tablet, first released in 2011 with Quanta Computer, with its own customized version of Android as a means to offer digital products from its storefront to users, which included apps and games.
Candy Crush Saga and Puzzle & Dragons: Establishing the freemium model (2012–2014)
While casual games like Angry Birds and Cut the Rope were gaining success on mobile devices, the development of new social network sites using advanced web browser technology on personal computers, such as Facebook, gave rise to free-to-play browser games and social-network games, generally supported by ads on the hosting website. One of the most notable examples of these is Zynga's FarmVille, released in 2009. The farm management simulation game had the player work to raise crops and tend livestock on a virtual farm, but afforded them only a limited number of actions per day. Players, however, could engage their Facebook friends to ask for extra actions, and give extra actions back when requested. The "time-lapse" or "energy" gameplay mechanic was heavily criticized by traditional game designers, since any reasonable progression required one to commit time to the game. However, the game was considered highly successful, with more than 80 million players by February 2010.
Zynga's success with FarmVille drew gamers away from non-social browser games on portal sites. King, who ran one such portal site, was impacted by this and decided to change their own model to incorporate Facebook games that worked alongside their portal games. One of the first games King offered on this approach was Bubble Witch Saga, released in October 2011. Bubble Witch Saga used mechanics similar to the older game Puzzle Bobble, where players shot colored orbs to clear away matching orbs. However, to avoid the drawn-out gameplay that FarmVille was noted for, King introduced the "saga" model; the game was divided into a number of levels, each effectively a puzzle. The player had a number of turns (shots) to clear the board or meet other conditions. If they did this, they were able to continue, but otherwise they lost one "life", though these lives would regenerate in real time, or players could ask friends on Facebook for free lives. The game thus only required the player to commit a few minutes each day. By January 2012, Bubble Witch Saga had over 10 million players and was the fastest-growing game on Facebook. King followed this with Candy Crush Saga on its portal and Facebook by April 2012, a more direct tile-matching game but using the same "saga" approach, which also enjoyed similar success.
Buoyed by the success of these games, King opted to enter the mobile game market with these titles, developing ad-supported versions for iOS that synchronized with the portal and Facebook versions; Bubble Witch Saga for mobile was released in July 2012, and Candy Crush Saga in October 2012. Both games still integrated with Facebook to ask friends for lives, but also included in-app purchases to fully restore one's lives or buy special power-ups; however, the games were still designed to be playable without having to purchase these, and 70% of the players had been able to make it to the final level of the game (as of September 2013) without spending any money. Candy Crush Saga proved to be the more popular game, and by the end of 2013, King had seen over 400 million new players of the game and their revenues had jumped from in 2011 to from advertising revenue and in-app purchases. In June 2013, King opted to eliminate advertising in-game and simply let the mobile versions of its games earn revenue from in-app purchases as they continued to release additional games. The strategy proved effective, as by the final quarter of 2014, King had seen 356 million monthly unique players, with only 8.3 million spending money on their games (2.3%), but had brought in over per player per month, making over across its game portfolio that quarter. King's success with Candy Crush Saga created the freemium model that numerous mobile games that followed used.
Separately, in Japan, developer GungHo Online Entertainment had released Puzzle & Dragons in February 2012, first in Japan, a tile-matching game with some role-playing elements that included improving one's team of "monsters". At the time of its release, among the more popular mobile apps in Japan were card battle games, but GungHo believed they could improve on the formula. Like Candy Crush Saga, the game used regenerable "stamina" to limit how many times the player could play in a row, but players could use in-app purchases to immediately restore their stamina, or obtain other forms of in-game currency. By October 2013 the game had been downloaded 20 million times in Japan (about 1/6th of the nation's population) and over a million times in North America, and was earning an estimated a day. News of these numbers caused GungHo's stock market capitalization to rise sharply in October, surpassing that of Nintendo at around , further establishing the success of the freemium model for mobile games.
In 2013, Apple was able to secure deals to distribute the iPhone cheaply in China. Because of the feature set and its relatively low cost compared to a computer, the iPhone became nearly ubiquitous for many Chinese residents. This spurred mobile game development within China particularly across the 2013-2014 period. These games followed the established freemium models from Candy Crush Saga and Puzzle & Dragons, using a mix of advertising and in-app purchases for revenue generation. Chinese publishers and developers, though limited by the type of content that they can release within the country due to the government's oversight of the media, were able to publish their games to the mobile app stores to release their titles beyond China, including to other southeast Asian countries or globally when possible, which helped to draw in additional revenue. This also led to some of the larger publishers within China, such as Tencent and Perfect World Games to establish foreign subsidiaries or acquire foreign companies to make them subsidiaries for mobile game development.
Clash of Clans and the massively-multiplayer role-playing experience (2012−2015)
During this same period, Supercell released Clash of Clans in 2012. Clash of Clans is a strategy game that at its core has elements of city management and tower defense, as the player oversees a fighting clan's home base. To obtain resources to maintain and upgrade the base, the player can send their forces to attack another player's base, which is handled asynchronously with the opposing player's forces managed by the computer. Should the attacking player win, they steal some resources from the losing player, while the losing player, when they next access the game, will learn of these losses. To encourage cooperation, players can join into "clans" which help to attack or defend automatically. Clash of Clans retains similar in-app purchases as Candy Crush Saga and Puzzle & Dragons, which can be used to rush certain building objectives, but also leans heavily on social engagement, similar to MMOs. By September 2014, the app was earning per day, and many users had reported playing the game for thousands of hours since its launch. Supercell considered part of its success to be its ability to draw in both casual and hardcore gamers with the Clash of Clans gameplay.
Clash of Clans inspired numerous other games that gave a simulated multiplayer experience, including Game of War: Fire Age and Empires & Allies that typically required more of a time commitment and a deeper understanding of the game rules to be successful but still could be played in a casual manner.
In China, Tencent released Honor of Kings in 2015, which, when it was exported to other markets, was rebranded as Arena of Valor. Honor/Arena built on the type of gameplay found in League of Legends, a multiplayer online battle arena that had been built by Riot Games, an American company which Tencent had previously acquired. Riot had believed that League could not be replicated on mobile devices, leading Tencent and its Chinese studio TiMi Studios to develop Honor of Kings. Within China the game was a success, with more than 50 million daily players, and it spurred its own esports league by 2016. Tencent saw the potential for its global release, but replaced the game's heavy Chinese mythology with more traditional fantasy characters in rebranding it as Arena. With its international release, Honor and Arena combined have remained among the top-grossing mobile games overall, with over in annual revenue in 2019. In 2020, Riot Games did make mobile versions of several of its games, with League of Legends being one of them, under the name Wild Rift.
Crossy Road and the growth of the hyper-casual game (2014–2015)
Around early 2015, a new type of gaming app emerged on the app stores, called hyper-casual games, with Crossy Road by Hipster Whale considered one of the key examples in this period, though earlier games like Flappy Bird by dotGears in 2013 had displayed the same principles in gameplay. Hyper-casual games differentiated themselves from the bulk of existing app games by being small and lightweight downloads, using simple graphics, and having extremely simple rulesets, but were otherwise infinitely replayable. In the case of Crossy Road, the goal is to maneuver a character as far as possible across lanes of a busy road while avoiding traffic, a type of endless game of Frogger, earning in-game coins based on distance and on any coins picked up along the way, which can be used to unlock new characters or buy power-ups. In-app purchases also could be used to buy coins, or coins could be earned through advertising. The game's monetization scheme was designed to avoid some of the bad reputation that in-app purchasing had been getting in recent years, using the lure of new characters to get players to spend money rather than to extend gameplay sessions. Within 90 days of release, the app had earned from over 50 million users.
Other companies soon followed to build on the hyper-casual games market, with Voodoo and Ketchapp among those releasing a new wave of hyper-casual games with similar monetization schemes as Crossy Road. Often these games were reductions of other gameplay concepts or simple expansions of more trivial games: Voodoo's Paper.io was effectively a remake of Snake, and its later Hole.io a simpler version of Donut County. Hyper-casual games have continued to gain popularity, both as games that are easier for players to get into compared to titles like Clash of Clans, and as games that are typically much easier and cheaper to develop, and are said to have disrupted the mobile gaming market as much as Candy Crush Saga had done when it was introduced. For established studios, the rapid development time allowed them to publish more experimental titles which they could monitor to see if players took to them, and if any title became popular, they could commit more resources and advertising to it.
Pokémon Go and location-based gaming (2016−2017)
Under license from The Pokémon Company and Nintendo, Niantic released Pokémon Go in July 2016 as a freemium app for mobile phones. Having already had experience with location-based games from its prior Ingress title, Niantic used phones' GPS to map out spots close to players where they could find and try to capture Pokémon, which they could then use at virtual local Pokémon gyms, also determined by GPS location. In game, Pokémon were shown to the player using augmented reality atop the camera's view so that the player knew they had found the Pokémon and could engage in its capture. In-app purchases could be used to buy improved Pokéballs used to capture Pokémon and other powerups and items to help one's Pokémon. Pokémon Go had record-breaking numbers of players, with both its initial iOS and Android releases seeing over 100 million players worldwide within a month of release. The game was recognized by the Guinness World Records for numerous milestones by August 2016. The game was a cultural phenomenon for several months, in a wave of "Pokémon Go Mania", or "Pokémania", though it also led to several incidents where, due to how Niantic's servers had planned out Pokémon spots and gyms, people flocked to private homes and other sites. By the end of 2017, the game had grossed over in revenue, and it has continued to bring in more than each year.
While Pokémon Go was not the first location-based game released for mobile devices, it established a fundamental monetization model for making such a game work while engaging the user in physical activity by moving to nearby local areas. It was also seen as having a positive impact on social interactions, since players would often interact face-to-face at the gyms. Other location-based games based on popular properties have since been released with similar gameplay and monetization models, including Harry Potter: Wizards Unite and Minecraft Earth.
Video game analysts had been watching the mobile market for several years, in part due to the growth of mobile gaming from China. Market analysis firms identified that global gross revenues from mobile gaming exceeded those of either personal computer or console games for the first time in 2016, earning around , and mobile remained one of the fastest-growing sectors of the video game market.
Fortnite and cross-platform play (2018–present)
In mid-2017, Epic Games released Fortnite, a third-person shooter with base-building elements, as its Fortnite: Save the World component on personal computers in an early access model, and then by September 2017 had released a standalone Fortnite Battle Royale mode, based on the success of the battle royale game genre from PlayerUnknown's Battlegrounds released earlier that year. Fortnite Battle Royale rapidly grew popular, leading Epic to port the game to other systems, including mobile devices by mid-2018. From launch, the mobile versions of the game supported cross-platform play with computer and console versions, making it one of the first games to incorporate mobile devices into direct interactive cross-platform play. By June 2018, the game had over 125 million registered players across all platforms. Revenue, earned through the purchase of in-game currency to buy customization options and battle passes, had brought the game to over in revenue daily by July 2018. A large portion of the game's audience is younger school-aged children, who are able to play it on their mobile phones, and parents and teachers expressed concern about the game's impact on coursework inside and outside of school.
Notably, Epic Games challenged the requirement from both Apple and Google that in-game purchases had to be made through the specific storefront. In August 2020, Epic purposely released a version of Fortnite on mobile that allowed players to purchase directly from Epic. The game was pulled from both the App Store and Play Store, leading Epic to file a pair of lawsuits against Apple and Google citing that this practice was an anti-trust violation. While the lawsuit was largely decided in Apple's favor in 2021, the judge did affirm that Apple's anti-steering policy which prevented apps from informing users of alternate pay schemes violated various laws and required the company to allow apps to notify users of such systems.
Game subscription services, cloud gaming, and popular players
Apple introduced the Apple Arcade in September 2019, which worked with its iOS, macOS, and Apple TV. Comparable to Xbox Game Pass, users pay a flat monthly fee to gain access to a number of curated games, with new games added to the service periodically while other games are removed over time. Games on the service lack in-game purchase options or advertisements, but allow the user to purchase the game outright to keep, as well as store progress through their iCloud account if they purchase the game at a later time. Thus, games on the Apple Arcade tended to be those that resembled more traditional premium-priced games that were not built on microtransactions. Google followed suit with its own Google Play Pass, launched in the same month, which also extended to general apps as well as games.
Separately, both Microsoft and Google have been developing cloud gaming services, in Xbox Game Pass cloud gaming and Stadia, that would allow console-quality games to be run and played on other devices, including mobile phones. Currently, due to restrictions Apple has on iOS applications, these cloud streaming services are only targeted at Android phones and devices.
The COVID-19 pandemic in 2019 and 2020 caused many people around the globe to be quarantined or forced to stay at home to prevent transmission of the virus, and video games became a popular pastime. Mobile games saw a significant boost in revenues as a result of the pandemic, with a 40% increase year-over-year in the second quarter of 2020 according to Sensor Tower. Mobile-friendly games such as Among Us and Genshin Impact, alongside Fortnite and other mobile titles, saw large player counts during the pandemic period.
Through most of mobile gaming's history, mobile game publishers have come from new publishers created in that space, such as Chillingo and Glu Mobile, or from the developers themselves, as with Rovio and King, rather than from large AAA publishers such as Electronic Arts, Activision, Ubisoft, and Take-Two Interactive. As mobile proved to be a viable space, these AAA publishers started adapting to the model, either becoming mobile publishers themselves and acquiring studios, or acquiring mobile publishers, but these were still generally seen as secondary business models relative to their computer and console games. Ubisoft was the first major AAA publisher to commit, in a 2021 investor report, to shifting focus away from computer and console games and putting a stronger emphasis on mobile gaming, with plans to transition to this approach by their 2023 fiscal year.
See also
History of arcade games
History of online games
History of video games
List of most-played mobile games by player count
References
History of video games | History of mobile games | [
"Technology"
] | 6,487 | [
"History of video games",
"History of computing"
] |
64,989,268 | https://en.wikipedia.org/wiki/IBRO-Kemali%20Prize | The IBRO Dargut and Milena Kemali International Prize for Research in the field of Basic and Clinical Neurosciences is a prize awarded every two years to an outstanding researcher, under 45 years old, who has made important contributions in the field of Basic and Clinical Neurosciences. The award was established in 1998.
The prize award equals 25,000 Euros, and the prize winner is invited to give a lecture at the Federation of European Neuroscience Societies (FENS) Forum of Neuroscience, held every two years. According to the FENS regulations, speakers from the previous FENS Forum cannot be speakers at the next FENS Forum. Nominations should be submitted in electronic format and are evaluated by the Prize Committee of the IBRO Dargut & Milena Kemali Foundation.
Prize winners
2022 – Sergiu P. Pasca (Romania, USA) – for his innovative research work using stem cell technology to create human brain organoids and assembloids, and their application to realistic studies of cellular mechanisms of human brain development and disease mechanisms.
2020 - Hailan Hu (Zhejiang, China) – for impressive work on the fundamental neurobiological mechanisms of emotional and affective behaviors.
2018 - Guillermina López-Bendito (Alicante, Spain) - for outstanding work on mechanisms of axon guidance in brain development, and in particular in thalamocortical connectivity.
2016 - Casper Hoogenraad (Utrecht, The Netherlands) - for outstanding work on cytoskeleton dynamics and intracellular transport in neural development and synaptic plasticity.
2014 - Patrik Verstreken (Leuven, Belgium) - for success in undoing the effect of one of the genetic defects that leads to Parkinson's using vitamin K2.
2012 - Eleanor Maguire (London, UK) - for innovative contributions to understanding human memory.
2010 - (Stockholm, Sweden) - for pioneering contributions to understanding of neurogenesis in the central nervous system.
2008 - (San Diego, CA, USA) - for seminal discoveries on how cerebral cortex perceives the environment by showing that cortical circuits operate in an activity-dependent and non-linear fashion using canonical feed-forward and feed-back inhibition circuits as feature detectors of incoming stimuli.
2006 - (Stockholm, Sweden) - for outstanding work on the expression and function of neurotrophic factors and neuropeptide and their receptors exploiting transgenic techniques.
2004 - Cornelia I. Bargmann (San Francisco, CA, USA) for fundamental discoveries concerning genes, behavior, and the sense of smell in the nematode C. elegans.
2002 - Daniele Piomelli (Irvine, CA, USA) for fundamental discoveries concerning the functional roles and regulation of endogenous cannabinoids in the brain and peripheral tissues.
2000 - Robert C. Malenka (Boston, MA, USA) - for fundamental contributions in the field of synaptic plasticity, in particular long term potentiation and long term depression, and the characterization of the role of silent synapses in these processes.
1998 - Tamas Freund (Budapest, Hungary) - for outstanding contributions to the organization and chemical characterization of identified neuronal circuits and cell types in the brain, in particular in the hippocampus.
References
Neuroscience awards
European science and technology awards
International awards | IBRO-Kemali Prize | [
"Technology"
] | 683 | [
"Science and technology awards",
"International science and technology awards",
"Neuroscience awards"
] |
64,989,513 | https://en.wikipedia.org/wiki/Anavip | Anavip (stylized as ANAVIP) is the trade name of a snake antivenin indicated for the management of adult and pediatric patients with North American rattlesnake envenomation. As defined by the FDA, the proper name is crotalidae immune F(ab')2 (equine). It is manufactured by Instituto Bioclon for Rare Disease Therapeutics in the United States.
Anavip is a divalent fragment antigen-binding protein, F(ab')2, derived from the blood of horses immunized with the venom of the snakes Bothrops asper and Crotalus durissus. The product is produced by pepsin digestion of horse blood plasma and then purified, resulting in a preparation containing >85% F(ab')2.
References
Antitoxins
Medical treatments | Anavip | [
"Chemistry"
] | 173 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
64,989,681 | https://en.wikipedia.org/wiki/Aboso%20Glass%20Factory | Aboso Glass Factory is a glass company located in Aboso, a town near Tarkwa, which is the capital of Wassa West District, in the Western Region of Ghana.
Controversy
The situation surrounding the Aboso Glass Factory highlights ongoing tensions regarding resource management and community involvement. The opposition by the youth to Linkin Birds Company accessing scraps from the factory suggests concerns about transparency, equitable benefits for locals, or potential environmental impacts.
Such disputes are common when decisions about local resources are made without sufficient community engagement, particularly when those resources hold economic or historical significance. GIHOC Distilleries Company, as a key stakeholder, might need to work closely with the community to address these concerns and ensure that any agreements are fair and mutually beneficial.
In a phone interview, Maxwell Kofi Jumah, Managing Director of GIHOC Distilleries Company, explained that the directive originated from a new investor and assured that only the scraps would be removed, with nothing else taken out.
The community, however, disputes this claim, stating that scraps are still being taken away. They allege that roofing sheets from the building have also been removed, and transformers powering the factory have been dismantled.
History
The Aboso Glass Factory, set up by Kwame Nkrumah in 1966, was a major manufacturer and supplier of bottles for the beverage industry, among many other products. Most of the employees were residents of Aboso and other neighbouring communities. The company was renamed the Tropical Glass Factory when it was handed over to Gilchrist Olympio. However, due to financial constraints, the company collapsed. In 2003, the Aboso Glass Factory was placed on a divestiture listing after ECG ceased power supply for its works. In 2017, the government announced plans to restore the company to its former glory with the help of some investors. In 2019, GIHOC Distilleries Company Limited declared a takeover of the Aboso Glass Factory. Aboso youth in the Prestea Huni Valley Municipality of the Western Region have threatened to hit the streets if there are no signs of the glass factory being revived.
References
Glassmaking companies
Ghanaian brands
Western Region (Ghana)
1966 establishments in Ghana | Aboso Glass Factory | [
"Materials_science",
"Engineering"
] | 443 | [
"Glass engineering and science",
"Glassmaking companies",
"Engineering companies"
] |
64,991,194 | https://en.wikipedia.org/wiki/Time%20in%20Slovakia | In Slovakia, the standard time is Central European Time (UTC+01:00). Daylight saving time is observed from the last Sunday in March (02:00 CET) to the last Sunday in October (03:00 CEST). This is shared with several other EU member states.
IANA time zone database
The IANA time zone database gives Slovakia Europe/Bratislava.
See also
Time in Europe
List of time zones by country
List of time zones by UTC offset
References
External links
Current time in Slovakia at Time.is
Time in Slovakia at TimeAndDate.com
Time in Slovakia at Lonely Planet
Geography of Slovakia | Time in Slovakia | [
"Physics"
] | 126 | [
"Spacetime",
"Physical quantities",
"Time",
"Time by country"
] |
64,991,197 | https://en.wikipedia.org/wiki/Zero-overhead%20looping | Zero-overhead looping is a feature of some processor instruction sets whose hardware can repeat the body of a loop automatically, rather than requiring software instructions which take up cycles (and therefore time) to do so. Zero-overhead loops are common in digital signal processors and some CISC instruction sets.
Background
In many instruction sets, a loop must be implemented by using instructions to increment or decrement a counter, check whether the end of the loop has been reached, and if not jump to the beginning of the loop so it can be repeated. Although this typically only represents around 3–16 bytes of space for each loop, even that small amount could be significant depending on the size of the CPU caches. More significant is that those instructions each take time to execute, time which is not spent doing useful work.
The overhead of such a loop is apparent compared to a completely unrolled loop, in which the body of the loop is duplicated exactly as many times as it will execute. In that case, no space or execution time is wasted on instructions to repeat the body of the loop. However, the duplication caused by loop unrolling can significantly increase code size, and the larger size can even impact execution time due to cache misses. (For this reason, it's common to only partially unroll loops, such as transforming a loop into one which performs the work of four iterations in one step before repeating. This balances the advantages of unrolling with the overhead of repeating the loop.) Moreover, completely unrolling a loop is only possible for a limited number of loops: those whose number of iterations is known at compile time.
For example, the following C code could be compiled and optimized into the following x86 assembly code:
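(A minimal sketch rather than actual compiler output; the function name sum, the array a, the length n and the register assignments are illustrative assumptions.)

int sum(const int *a, int n)
{
    int s = 0;
    for (int i = 0; i < n; i++)   /* the counter update, test and branch are pure loop overhead */
        s += a[i];
    return s;
}

; illustrative x86-64 translation (System V convention: rdi = a, esi = n)
    xor   eax, eax            ; s = 0
    xor   edx, edx            ; i = 0
loop_top:
    cmp   edx, esi            ; check whether the end of the loop has been reached
    jge   loop_done           ; if so, leave the loop
    add   eax, [rdi+rdx*4]    ; useful work: s += a[i]
    inc   edx                 ; increment the counter
    jmp   loop_top            ; jump back to the beginning so the body repeats
loop_done:
    ret

Each pass through loop_top spends four instructions (cmp, jge, inc and jmp) on bookkeeping for one instruction of useful work, which is exactly the overhead that zero-overhead looping hardware removes.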
Implementation
Processors with zero-overhead looping have machine instructions and registers to automatically repeat one or more instructions. Depending on the instructions available, these may only be suitable for count-controlled loops ("for loops") in which the number of iterations can be calculated in advance, or only for condition-controlled loops ("while loops") such as operations on null-terminated strings.
Examples
PIC
In the PIC instruction set, the REPEAT and DO instructions implement zero-overhead loops. REPEAT only repeats a single instruction, while DO repeats a specified number of following instructions.
Blackfin
Blackfin offers two zero-overhead loops. The loops can be nested; if both hardware loops are configured with the same "loop end" address, loop 1 will behave as the inner loop and repeat, and loop 0 will behave as the outer loop and repeat only if loop 1 would not repeat.
Loops are controlled using the LTn and LBn registers (n either 0 or 1) to set the top and bottom of the loop — that is, the first and last instructions to be executed, which can be the same for a loop with only one instruction — and LCn for the loop count. The loop repeats if LCn is nonzero at the end of the loop, in which case LCn is decremented.
The loop registers can be set manually, but this would typically consume 6 bytes to load the registers, and 8–16 bytes to set up the values to be loaded. More common is to use the loop setup instruction (represented in assembly either as LOOP with the pseudo-instructions LOOP_BEGIN and LOOP_END, or in a single line as LSETUP), which optionally initializes LCn and sets LTn and LBn to the desired values. This only requires 4–6 bytes, but can only set LTn and LBn within a limited range relative to where the loop setup instruction is located.
P0 = array + 396;
R0 = 100;
LC0 = R0;
LOOP my_loop LC0; // sets LT0 and LB0
LOOP_BEGIN my_loop; // pseudo-instruction; generates a label used to compute LT0
// LC0 cannot be written directly to memory,
// so we must use a temporary register.
R0 += -1; // equally fast and small would be R0 = LC0
[P0--] = R0;
LOOP_END my_loop; // pseudo-instruction; generates a label used to compute LB0
x86
The x86 assembly language REP prefixes (REP, REPE/REPZ and REPNE/REPNZ) implement zero-overhead loops for a few instructions (namely the string instructions MOVS, STOS, LODS, CMPS, SCAS, INS and OUTS). Depending on the prefix and the instruction, the instruction will be repeated a number of times with the CX/ECX/RCX register holding the repeat count, or until a match (or non-match) is found with CMPS or with SCAS. This can be used to implement some types of searches and operations on null-terminated strings.
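A hedged illustration of both uses (the prefixes and instructions are standard x86; the count values, and the assumption that rsi and rdi already point at the source and destination, are illustrative):

    mov   ecx, 100        ; repeat count
    rep movsb             ; copy 100 bytes from [rsi] to [rdi]; the hardware decrements ECX with no explicit branch

    mov   al, 0           ; byte to search for (the terminator of a C string)
    mov   ecx, -1         ; large maximum count
    repne scasb           ; repeat SCASB until the byte at [rdi] equals AL (or the count runs out)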
References
Computer hardware | Zero-overhead looping | [
"Technology",
"Engineering"
] | 925 | [
"Computer engineering",
"Computer hardware",
"Computer systems",
"Computer science",
"Computers"
] |
64,991,255 | https://en.wikipedia.org/wiki/List%20of%20inflammatory%20disorders |
Nervous system
CNS
Encephalitis
Myelitis
Meningitis
Arachnoiditis
PNS
Neuritis
Eye
Dacryoadenitis
Scleritis
Episcleritis
Keratitis
Retinitis
Chorioretinitis
Blepharitis
Conjunctivitis
Uveitis
Ear
Otitis externa
Otitis media
Labyrinthitis
Mastoiditis
Cardiovascular system
Carditis
Endocarditis
Myocarditis
Pericarditis
Vasculitis
Arteritis
Phlebitis
Capillaritis
Respiratory system
Upper
Sinusitis
Rhinitis
Pharyngitis
Laryngitis
Lower
Tracheitis
Bronchitis
Bronchiolitis
Pneumonitis
Pleuritis
Mediastinitis
Digestion system
Mouth
Stomatitis
Gingivitis
Gingivostomatitis
Glossitis
Tonsillitis
Sialadenitis/Parotitis
Cheilitis
Pulpitis
Gnathitis
Gastrointestinal tract
Esophagitis
Gastritis
Gastroenteritis
Enteritis
Colitis
Enterocolitis
Duodenitis
Ileitis
Caecitis
Appendicitis
Proctitis
Accessory digestive organs
Hepatitis
Ascending cholangitis
Cholecystitis
Pancreatitis
Peritonitis
Integumentary system
Dermatitis
Folliculitis
Cellulitis
Hidradenitis
Musculoskeletal system
Arthritis
Dermatomyositis
Soft tissue
Myositis
Synovitis/Tenosynovitis
Bursitis
Enthesitis
Fasciitis
Capsulitis
Epicondylitis
Tendinitis
Panniculitis
Osteochondritis: Osteitis/Osteomyelitis
Spondylitis
Periostitis
Chondritis
Urinary system
Nephritis
Glomerulonephritis
Pyelonephritis
Ureteritis
Cystitis
Urethritis
Reproductive system
Female
Oophoritis
Salpingitis
Endometritis
Parametritis
Cervicitis
Vaginitis
Vulvitis
Mastitis
Male
Orchitis
Epididymitis
Prostatitis
Seminal vesiculitis
Balanitis
Posthitis
Balanoposthitis
Pregnancy/newborn
Chorioamnionitis
Funisitis
Omphalitis
Endocrine system
Insulitis
Hypophysitis
Thyroiditis
Parathyroiditis
Adrenalitis
Lymphatic system
Lymphangitis
Lymphadenitis
Physiology
Inflammations | List of inflammatory disorders | [
"Biology"
] | 500 | [
"Physiology"
] |
64,991,614 | https://en.wikipedia.org/wiki/Estradiol%20benzoate/progesterone/testosterone%20propionate | Estradiol benzoate/progesterone/testosterone propionate (EB/P4/TP), sold under the brand names Lukestra, Steratrin, Trihormonal, and Trinestryl, is an injectable combination medication of estradiol benzoate (EB), an estrogen, progesterone (P4), a progestogen, and testosterone propionate (TP), an androgen/anabolic steroid. It contained 1 to 3 mg EB, 20 to 25 mg P4, and 25 mg TP, was provided in the form of ampoules, and was administered by intramuscular injection. The medication was introduced by 1949 and was marketed in the United States, the United Kingdom, and Germany among other places. It is no longer available.
See also
List of combined sex-hormonal preparations § Estrogens, progestogens, and androgens
References
Abandoned drugs
Combined estrogen–progestogen–androgen formulations | Estradiol benzoate/progesterone/testosterone propionate | [
"Chemistry"
] | 221 | [
"Drug safety",
"Abandoned drugs"
] |
64,991,874 | https://en.wikipedia.org/wiki/Alien%20Oceans | Alien Oceans: The Search for Life in the Depths of Space is a 2020 non-fiction book by American writer and scientist Kevin Peter Hand. The book explores the possibility of life on planets and moons with subsurface oceans, and argues that the common understanding of the habitable zone should include natural satellites around gas giants. Satellites discussed in the book include Europa, Enceladus, and Triton.
Hand wrote the book to make the scientific information it discusses readily accessible to the public.
References
2020 non-fiction books
Books about extraterrestrial life
Astronomy books
American non-fiction books
Princeton University Press books | Alien Oceans | [
"Astronomy"
] | 124 | [
"Astronomy books",
"Astronomy book stubs",
"Works about astronomy",
"Astronomy stubs"
] |
64,996,713 | https://en.wikipedia.org/wiki/Influences%20upon%20Gothic%20architecture | The Gothic style of architecture was strongly influenced by the Romanesque architecture which preceded it. Why the Gothic style emerged from Romanesque, and what the key influences on its development were, is a difficult problem for which there is a lack of concrete evidence because medieval Gothic architecture was not accompanied by contemporary written theory, in contrast to the 'Renaissance' and its treatises. A number of contrasting theories on the origins of Gothic have been advanced: for example, that Gothic emerged organically as a 'rationalist' answer to structural challenges; that Gothic was informed by the methods of medieval Scholastic philosophy; that Gothic was an attempt to imitate heaven and the light referred to in various Biblical passages such as Revelation; that Gothic was 'medieval modernism' deliberately rejecting the 'historicist' forms of classical architecture. Beyond specific theories, the style was also shaped by the specific geographical, political, religious and cultural context of Europe in the 12th century onwards (the 'first' Gothic building is considered to have been St Denis, in France in the 1140s by scholarly consensus).
Political
At the end of the 12th century, Europe was divided into a multitude of city states and kingdoms. The area encompassing modern Germany, southern Denmark, the Netherlands, Belgium, Luxembourg, Switzerland, Austria, Czech Republic and much of northern Italy (excluding Venice and Papal States) was nominally part of the Holy Roman Empire, but local rulers exercised considerable autonomy. France, Denmark, Poland, Hungary, Portugal, Scotland, Castile, Aragon, Navarre, and Cyprus were independent kingdoms, as was the Angevin Empire, whose Plantagenet kings ruled England and large domains in what was to become modern France. Norway came under the influence of England, while the other Scandinavian countries and Poland were influenced by trading contacts with the Hanseatic League. Swabian kings brought the Gothic tradition from Germany to Southern Italy, part of the Norman Kingdom of Sicily, while after the First Crusade the Lusignan kings introduced French Gothic architecture to Cyprus and the Kingdom of Jerusalem.
Throughout Europe at this time there was a rapid growth in trade and an associated growth in towns. Germany and the Lowlands had large flourishing towns that grew in comparative peace, in trade and competition with each other, or united for mutual weal, as in the Hanseatic League. Civic building was of great importance to these towns as a sign of wealth and pride. England and France remained largely feudal and produced grand domestic architecture for their kings, dukes and bishops, rather than grand town halls for their burghers.
Religious
The Roman Catholic Church prevailed across Western Europe at this time, influencing not only faith but also wealth and power. Bishops were appointed by the feudal lords (kings, dukes and other landowners) and they often ruled as virtual princes over large estates. The early mediaeval periods had seen a rapid growth in monasticism, with several different orders being prevalent and spreading their influence widely. Foremost were the Benedictines whose great abbey churches vastly outnumbered any others in France, Normandy and England. A part of their influence was that towns developed around them and they became centres of culture, learning and commerce. They were the builders of the Abbey of Saint-Denis, and Abbey of Saint-Remi in France. Later Benedictine projects (constructions and renovations) include Rouen's Abbey of Saint-Ouen, the Abbey La Chaise-Dieu, and the choir of Mont Saint-Michel in France. English examples are Westminster Abbey, originally built as a Benedictine order monastic church; and the reconstruction of the Benedictine church at Canterbury.
The Cluniac and Cistercian Orders were prevalent in France, the great monastery at Cluny having established a formula for a well planned monastic site which was then to influence all subsequent monastic building for many centuries. The Cistercians spread the style as far east and south as Poland and Hungary. Smaller orders such as the Carthusians and Premonstratensians also built some 200 churches, usually near cities.
In the 13th century Francis of Assisi established the Franciscans, or so-called "Grey Friars", a mendicant order. Saint Dominic founded the mendicant Dominicans who, based in Toulouse and Bologna, were particularly influential in the building of Italy's Gothic churches.
The Teutonic Order, a military order, spread Gothic art into Pomerania, East Prussia, and the Baltic region.
Geographic
From the 10th to the 13th century, Romanesque architecture had become a pan-European style and manner of construction, affecting buildings in countries as far apart as Ireland, Croatia, Sweden and Sicily. The same wide geographic area was then affected by the development of Gothic architecture, but the acceptance of the Gothic style and methods of construction differed from place to place, as did the expressions of Gothic taste. The proximity of some regions meant that modern country borders do not define divisions of style. On the other hand, some regions such as England and Spain produced defining characteristics rarely seen elsewhere, except where they have been carried by itinerant craftsmen, or the transfer of bishops. Regional differences that are apparent in the churches of the Romanesque period often become even more apparent in the Gothic.
The local availability of materials affected both construction and style. In France, limestone was readily available in several grades, the very fine white limestone of Caen being favoured for sculptural decoration. England had coarse limestone and red sandstone as well as dark green Purbeck marble which was often used for architectural features.
In Northern Germany, Netherlands, northern Poland, Denmark, and the Baltic countries local building stone was unavailable but there was a strong tradition of building in brick. The resultant style, Brick Gothic, is called "Backsteingotik" in Germany and Scandinavia and is associated with the Hanseatic League. In Italy, stone was used for fortifications, but brick was preferred for other buildings. Because of the extensive and varied deposits of marble, many buildings were faced in marble, or were left with undecorated façade so that this might be achieved at a later date.
The availability of timber also influenced the style of architecture, with timber buildings prevailing in Scandinavia. Availability of timber affected methods of roof construction across Europe. It is thought that the magnificent hammer-beam roofs of England were devised as a direct response to the lack of long straight seasoned timber by the end of the mediaeval period, when forests had been decimated not only for the construction of vast roofs but also for ship building.
Romanesque tradition
Gothic architecture grew out of the previous architectural genre, Romanesque. For the most part, there was not a clean break, as there was to be later in Renaissance Florence with the revival of the Classical style in the early 15th century.
By the 12th century, builders throughout Europe developed Romanesque architectural styles (termed Norman architecture in England because of its association with the Norman Conquest). Scholars have focused on categories of Romanesque/Norman building, including the cathedral church, the parish church, the abbey church, the monastery, the castle, the palace, the great hall, the gatehouse, the civic building, the warehouse, and others.
Many architectural features that are associated with Gothic architecture had been developed and used by the architects of Romanesque buildings, particularly in the building of cathedrals and abbey churches. These include ribbed vaults, buttresses, clustered columns, ambulatories, wheel windows, spires, stained glass windows, and richly carved door tympana. These were already features of ecclesiastical architecture before the development of the Gothic style, and all were to develop in increasingly elaborate ways.
It was principally the development of the pointed arch which brought about the change that separates Gothic from Romanesque. This technological change broke the tradition of massive masonry and solid walls penetrated by small openings, replacing it with a style where light appears to triumph over substance. With its use came the development of many other architectural devices, previously put to the test in scattered buildings and then called into service to meet the structural, aesthetic and ideological needs of the new style. These include the flying buttresses, pinnacles and traceried windows.
Eastern Christian, Sasanian, and Islamic Architecture
The pointed arch, one of the defining attributes of Gothic, appears in Late Roman Byzantine architecture and the Sasanian architecture of Iran during late antiquity, although the form had been used earlier, as in the possibly 1st century AD Temple of Bel, Dura Europos in Roman Mesopotamia. In the Roman context it occurred in church buildings in Syria and occasional secular structures, like the Karamagara Bridge in modern Turkey. In Sassanid architecture parabolic and pointed arches were employed in both palace and sacred construction. A very slightly pointed arch built in 549 exists in the apse of the Basilica of Sant'Apollinare in Classe in Ravenna, and slightly more pointed example from a church, built 564 at Qasr Ibn Wardan in Roman Syria. Pointed arches' development may have been influenced by the elliptical and parabolic arches frequently employed in Sasanian buildings using pitched brick vaulting, which obviated any need for wooden centring and which had for millennia been used in Mesopotamia and Syria. The oldest pointed arches in Islamic architecture are in the Dome of the Rock, completed in 691/2, while some others appear in the Great Mosque of Damascus, begun in 705. The Umayyads were responsible for the oldest significantly pointed arches in medieval western Europe, employing them alongside horseshoe arches in the Great Mosque of Cordoba, built from 785 and repeatedly extended. The Abbasid palace at al-Ukhaidir employed pointed arches in 778 as a dominant theme both structural and decorative throughout the façades and vaults of the complex, while the tomb of al-Muntasir, built 862, employed a dome with a pointed arch profile. Abbasid Samarra had many pointed arches, notably its surviving Bab al-ʿAmma (monumental triple gateway). By the 9th century the pointed arch was used in Egypt and North Africa: in the Nilometer at Fustat in 861, the 876 Mosque of Ibn Tulun in Cairo, and the 870s Great Mosque of Kairouan. Through the 8th and 9th centuries, the pointed arch was employed as standard in secular buildings in architecture throughout the Islamic world. The 10th century Aljafería at Zaragoza displays numerous forms of arch, including many pointed arches decorated and elaborated to a level of design sophistication not seen in Gothic architecture for a further two centuries.
Increasing military and cultural contacts with the Muslim world, including the Norman conquest of Islamic Sicily between 1060 and 1090, the Crusades, beginning 1096, and the Islamic presence in Spain, may have influenced medieval Europe's adoption of the pointed arch, although this hypothesis remains controversial. The structural advantages of pointed arches seem first to have been realised in a medieval Latin Christian context at the abbey church known as Cluny III at Cluny Abbey. Begun by abbot Hugh of Cluny in 1089, the great Romanesque church of Cluny III was the largest church in the west when completed in 1130. Kenneth John Conant, who excavated the site of the church's ruins, argued that the architectural innovations of Cluny III were inspired by the Islamic architecture of Sicily via Monte Cassino. The Abbey of Monte Cassino was the foundational community of the Benedictine Order and lay within the Norman Kingdom of Sicily. The rib vault with pointed arches was used at Lessay Abbey in Normandy in 1098, Cefalù Cathedral in Sicily and at Durham Cathedral in England at about the same time. In those parts of the Western Mediterranean subject to Islamic control or influence, rich regional variants arose, fusing Romanesque, Byzantine and later Gothic traditions with Islamic decorative forms, as seen, for example, in Monreale and Cefalù Cathedrals, the Alcázar of Seville, and Teruel Cathedral.
Notes
Citations
Bibliography
Clark, W. W.; King, R. (1983). Laon Cathedral, Architecture. Courtauld Institute Illustration Archives. 1. London: Harvey Miller Publishers.
Architectural history
Architectural styles
European architecture
Architecture in England
Architecture in Italy
Medieval French architecture
Catholic architecture
12th-century architecture
13th-century architecture
14th-century architecture
15th-century architecture
16th-century architecture | Influences upon Gothic architecture | [
"Engineering"
] | 2,470 | [
"Architectural history",
"Architecture"
] |
64,997,934 | https://en.wikipedia.org/wiki/DDO%2044 | DDO 44 (or UGCA 133) is a dwarf spheroidal galaxy in the M81 Group, believed to be a satellite galaxy of the nearby NGC 2403.
Structure
DDO 44 is a relatively large dwarf galaxy, and it has been observed to possess a tidal tail extending at least 50,000 parsecs from its center. It has an estimated metallicity ([Fe/H]) of −1.54 ± 0.14. Due to its proximity and relative velocity to the larger NGC 2403, it is believed to be NGC 2403's satellite galaxy. Stellar streams have been observed to originate from DDO 44, flowing towards and away from NGC 2403, indicating tidal disruption. Around 20 percent of the galaxy's stars are believed to be of intermediate age (between 2 and 8 Gyr old), with the most recent star formation estimated at 300 Mya, based on the lack of young, bright blue stars. This lack of bright stars gives DDO 44 a relatively low brightness.
It is located approximately 3 million parsecs from the Milky Way, and 79 arcminutes to the north-northwest of NGC 2403 (approximately 75 kpc). Mass estimates based on luminosity measurements give a galactic mass of 2×10⁷–6×10⁷ M☉. This makes DDO 44 by far NGC 2403's most massive known satellite galaxy, with the other known satellite galaxy (MADCASH J074238+652501-dw) having a mass of just ~10⁵ M☉. HI observations place an upper limit on DDO 44's hydrogen gas mass of 4×10⁵ M☉.
References
Bibliography
Dwarf spheroidal galaxies
M81 Group
UGCA objects
Camelopardalis | DDO 44 | [
"Astronomy"
] | 371 | [
"Camelopardalis",
"Constellations"
] |
56,553,444 | https://en.wikipedia.org/wiki/Katalin%20Vesztergombi | Katalin Vesztergombi (born July 17, 1948) is a Hungarian mathematician known for her contributions to graph theory and discrete geometry. A student of Vera T. Sós and a co-author of Paul Erdős, she is an emeritus associate professor at Eötvös Loránd University and a member of the Hungarian Academy of Sciences.
Education
As a high-school student in the 1960s, Vesztergombi became part of a special class for gifted mathematics students at Fazekas Mihály Gimnázium with her future collaborators László Lovász, József Pelikán, and others. She completed her Ph.D. in 1987 at Eötvös Loránd University. Her dissertation, Distribution of Distances in Finite Point Sets, is connected to the Erdős distinct distances problem and was supervised by Vera Sós.
Contributions
Vesztergombi's research contributions include works on permutations, graph coloring and graph products,
combinatorial discrepancy theory, distance problems in discrete geometry, geometric graph theory,
the rectilinear crossing number of the complete graph, and graphons.
With László Lovász and József Pelikán, she is the author of the textbook Discrete Mathematics: Elementary and Beyond.
Personal
Vesztergombi is married to László Lovász, with whom she is also a frequent research collaborator.
Selected publications
Books
Research articles
References
Living people
20th-century Hungarian mathematicians
21st-century Hungarian mathematicians
Women mathematicians
Graph theorists
Geometers
Academic staff of Eötvös Loránd University
Members of the Hungarian Academy of Sciences
1948 births | Katalin Vesztergombi | [
"Mathematics"
] | 330 | [
"Geometers",
"Graph theory",
"Mathematical relations",
"Geometry",
"Graph theorists"
] |
56,553,637 | https://en.wikipedia.org/wiki/%CE%92-Leucine | β-Leucine (beta-leucine) is a beta amino acid and positional isomer of -leucine which is naturally produced in humans via the metabolism of -leucine by the enzyme leucine 2,3-aminomutase. In cobalamin (vitamin B12) deficient individuals, plasma concentrations of β-leucine are elevated.
Biosynthesis and metabolism in humans
A small fraction of L-leucine metabolism – less than 5% in all tissues except the testes, where it accounts for about 33% – is initially catalyzed by leucine 2,3-aminomutase, producing β-leucine, which is subsequently metabolized into β-ketoisocaproate (β-KIC), β-ketoisocaproyl-CoA, and then acetyl-CoA by a series of uncharacterized enzymes.
References
Beta-Amino acids
Non-proteinogenic amino acids | Β-Leucine | [
"Chemistry",
"Biology"
] | 188 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
56,553,704 | https://en.wikipedia.org/wiki/Lachancea%20thermotolerans | Lachancea thermotolerans is a species of yeast.
Taxonomy
L. thermotolerans is the type species of the genus Lachancea. The species has previously been known as Kluyveromyces thermotolerans and Zygosaccharomyces thermotolerans, which is the name by which it was first described in 1932.
Habitat and ecology
L. thermotolerans is widely distributed and occurs in diverse environments, both natural and man-made. It has been isolated from locations around the world. The species is commonly associated with fruit and with insects such as fruit flies that feed on fruit. In some cases, it has been identified as one of several species found in naturally fermented foods.
Uses
L. thermotolerans is unusual among yeasts in its ability to produce lactic acid through fermentation. This property has prompted study of L. thermotolerans in the production of wine and beer, both of which are traditionally produced using Saccharomyces yeasts. In winemaking, L. thermotolerans and other yeast species have been studied for the effects of their metabolites on the flavor profile of wines. Systems including L. thermotolerans in co-fermentation with wine yeast or in place of lactic acid bacteria have been described as an alternative to traditional malolactic fermentation. L. thermotolerans has been sold commercially on its own and in a yeast blend. In beer brewing, L. thermotolerans has been considered as a method for producing sour beer. It has been observed that this kind of yeast ferments at low temperatures (17 °C) as well as at high temperatures (27 °C) and with SO2 doses of 25 mg/L and 75 mg/L with an ethanol yield between 7-11% vol. Sequential inoculations (binary) and sequential co-inoculations (ternary) with different non-Saccharomyces, including L. thermotolerans, have also been studied, resulting in very significant synergies and inhibitions in lactic acid production.
References
Yeasts
Fungi described in 2003
Wine chemistry
Yeasts used in brewing
Fungus species | Lachancea thermotolerans | [
"Chemistry",
"Biology"
] | 466 | [
"Fungi",
"Fungus species",
"Yeasts",
"Alcohol chemistry",
"Wine chemistry"
] |
56,553,748 | https://en.wikipedia.org/wiki/Volatility%20tax | The volatility tax is a mathematical finance term first published by Rick Ashburn, CFA in a 2003 column, and formalized by hedge fund manager Mark Spitznagel, describing the effect of large investment losses (or volatility) on compound returns. It has also been called volatility drag, volatility decay or variance drain. This is not literally a tax in the sense of a levy imposed by a government, but the mathematical difference between geometric averages compared to arithmetic averages. This difference resembles a tax due to the mathematics which impose a lower compound return when returns vary over time, compared to a simple sum of returns. This diminishment of returns is in increasing proportion to volatility, such that volatility itself appears to be the basis of a progressive tax. Conversely, fixed-return investments (which have no return volatility) appear to be "volatility tax free".
Overview
As Spitznagel wrote:
Quantitatively, the volatility tax is the difference between the arithmetic and geometric average (or “ensemble average” and “time average”) returns of an asset or portfolio. It thus represents the degree of “non-ergodicity” of the geometric average.
Standard quantitative finance assumes that a portfolio's net asset value changes follow a geometric Brownian motion (and thus are log-normally distributed) with arithmetic average return (or "drift") μ, standard deviation (or "volatility") σ, and geometric average return

μ − σ²/2.

So the geometric average return is the difference between the arithmetic average return and a function of volatility. This function of volatility,

σ²/2,

represents the volatility tax. (Though this formula is under the assumption of log-normality, the volatility tax provides an accurate approximation for most return distributions. The precise formula is a function of the central moments of the return distribution.)
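As a simple worked illustration (the numbers are chosen for arithmetic convenience and are not drawn from the sources cited here): a portfolio that returns +20% one year and −20% the next has an arithmetic average return of 0%, but its compound value is 1.2 × 0.8 = 0.96, a geometric average of roughly −2% per year, close to the σ²/2 = 0.2²/2 = 2% volatility tax predicted by the formula. With swings of ±50% the arithmetic average is still 0%, yet the compound value is 1.5 × 0.5 = 0.75, a geometric average of about −13.4% per year, showing how larger swings pay a disproportionately larger volatility tax.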
The mathematics behind the volatility tax is such that a very large portfolio loss has a disproportionate impact on the volatility tax that it pays, and, as Spitznagel wrote, this is why the most effective risk mitigation focuses on large losses.
According to Spitznagel, the goal of risk mitigation strategies is to solve this “vexing non-ergodicity, volatility tax problem” and thus raise a portfolio’s geometric average return, or CAGR, by lowering its volatility tax (and “narrow the gap between our ensemble and time averages”). This is “the very name of the game in successful investing. It is the key to the kingdom, and explains in a nutshell Warren Buffett’s cardinal rule, ‘Don’t lose money.’” Moreover, “the good news is the entire hedge fund industry basically exists to help with this—to help save on volatility taxes paid by portfolios. The bad news is they haven't done that, not at all.”
As Nassim Nicholas Taleb wrote in his 2018 book Skin in the Game, “more than two decades ago, practitioners such as Mark Spitznagel and myself built our entire business careers around the effect of the difference between ensemble and time.”
See also
Annual growth %
Arithmetic mean
Compound interest
Ecological fallacy (Averages do not predict individual performance)
Exponential growth
Geometric Brownian motion
Geometric mean
Log-normal distribution
Mathematical finance
Rate of return
References
Interest
Mathematical finance
Exponentials
Risk management | Volatility tax | [
"Mathematics"
] | 702 | [
"E (mathematical constant)",
"Mathematical finance",
"Exponentials",
"Applied mathematics"
] |
56,553,986 | https://en.wikipedia.org/wiki/3%CE%B1-Hydroxytibolone | 3α-Hydroxytibolone (developmental code name ORG-4094) is a synthetic steroidal estrogen which was never marketed. Along with 3β-hydroxytibolone and δ4-tibolone, it is a major active metabolite of tibolone, and 3α-hydroxytibolone and 3β-hydroxytibolone are thought to be responsible for the estrogenic activity of tibolone.
References
Abandoned drugs
Alkene derivatives
Ethynyl compounds
Diols
Estranes
Human drug metabolites
Synthetic estrogens
Secondary alcohols
Tertiary alcohols | 3α-Hydroxytibolone | [
"Chemistry"
] | 130 | [
"Chemicals in medicine",
"Drug safety",
"Human drug metabolites",
"Abandoned drugs"
] |
56,553,987 | https://en.wikipedia.org/wiki/3%CE%B2-Hydroxytibolone | 3β-Hydroxytibolone (developmental code name ORG-30126) is a synthetic steroidal estrogen which was never marketed. Along with 3α-hydroxytibolone and δ4-tibolone, it is a major active metabolite of tibolone, and 3α-hydroxytibolone and 3β-hydroxytibolone are thought to be responsible for the estrogenic activity of tibolone.
References
Abandoned drugs
Alkene derivatives
Ethynyl compounds
Diols
Estranes
Human drug metabolites
Synthetic estrogens
Secondary alcohols
Tertiary alcohols | 3β-Hydroxytibolone | [
"Chemistry"
] | 129 | [
"Chemicals in medicine",
"Drug safety",
"Human drug metabolites",
"Abandoned drugs"
] |
56,556,160 | https://en.wikipedia.org/wiki/PLEKHG2 | Pleckstrin homology domain containing, family G member 2 (PLEKHG2) is a protein that in humans is encoded by the PLEKHG2 gene. It is sometimes written as ARHGEF42, FLJ00018.
The PLEKHG2 protein is a large protein of about 1300 amino acids (roughly 130 kDa) with a Dbl homology (DH) domain and a pleckstrin homology (PH) domain near the N terminus. The DH domain is responsible for guanine nucleotide exchange activity, which converts GDP bound to Rho family small GTPases (Rho GTPases) to GTP; because PLEKHG2 has this domain, it acts as a Rho-specific guanine nucleotide exchange factor (RhoGEF).
Activation of Rho GTPases remodels the actin cytoskeleton and changes cell morphology, so PLEKHG2 may contribute to cell motility and to the development of neuronal networks via Rho GTPases and actin remodeling (see below).
Cloning
BXH2 and AKXD recombinant inbred mice mutated by retroviral transduction are known to develop myeloid leukemia and B-cell and T-cell leukemia at high frequency.
In 2002, Himmel et al. used this model of acute myelogenous leukemia to show that a novel Dbl-family guanine nucleotide exchange factor gene lies downstream of the retroviral integration site called Evi24. They named this gene Clg. Himmel and colleagues cloned Clg and showed its homology with PLEKHG2, located in the human chromosome 19q13.1 region. From these observations they pointed out an association with acute myeloid leukemia.
Functions
In their 2002 paper, Himmel et al. showed that a construct containing the DH-PH domain of Clg promotes guanine nucleotide exchange on Cdc42 but not on Rac1 or RhoA. In addition, introducing the DH-PH domain or full-length Clg into NIH3T3 cells caused transformation.
Later, Ueda and colleagues introduced an expression construct encoding full-length human PLEKHG2 into HEK 293 cells. In these cells, the Gβγ subunits of heterotrimeric G proteins interacted directly with PLEKHG2. Ueda and colleagues also showed that PLEKHG2 is activated by Gβγ and activates the Rho GTPases Rac1 and Cdc42, contributing to changes in cell morphology.
In 2013, Runne et al. showed that PLEKHG2 is elevated in several leukemia cell lines, including Jurkat T cells. In addition, they showed that GPCR signal-dependent activation of Rac and Cdc42 regulates the chemotaxis of lymphocytes via actin polymerization. From this observation, PLEKHG2 was considered to be an important regulator of cell motility.
Furthermore, in recent years it has become clear that PLEKHG2 is regulated through modifications such as phosphorylation and through interactions with other proteins in response to various intracellular signals (see the sections on interactions and protein modification below). However, its function in vivo is still unclear.
Diseases associated with PLEKHG2
In 2016, Edvardson et al. identified homozygosity for an Arg204Trp mutation in the PLEKHG2 gene in patients with dystonia or postnatal microcephaly.
Interactions
PLEKHG2 is known to interact with the following proteins:
Gβγ
β-actin
Four and a half LIM domains 1 (FHL1)
Gαs
Protein modification
PLEKHG2 is known to undergo modifications such as phosphorylation in response to the following signals:
SRC
EGFR
References
Genes
Proteins | PLEKHG2 | [
"Chemistry"
] | 825 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
56,556,388 | https://en.wikipedia.org/wiki/Basic%20Principles%20for%20the%20Treatment%20of%20Prisoners | The Basic Principles for the Treatment of Prisoners were adopted and proclaimed by the General Assembly of the United Nations by resolution 45/111 on 14 December 1990.
Article 1 protects human dignity. Article 2 bans discrimination.
References
United Nations General Assembly resolutions
Human rights
Prisoners and detainees
Discrimination
1990 in law | Basic Principles for the Treatment of Prisoners | [
"Biology"
] | 59 | [
"Behavior",
"Aggression",
"Discrimination"
] |
56,556,509 | https://en.wikipedia.org/wiki/Sodium%20peroxycarbonate | Sodium peroxycarbonate or sodium percarbonate, sodium permonocarbonate is a chemical compound, a peroxycarbonate of sodium, with formula
See also
Sodium percarbonate
Peroxycarbonate
References
Sodium compounds
Peroxides
Oxidizing agents | Sodium peroxycarbonate | [
"Chemistry"
] | 57 | [
"Redox",
"Oxidizing agents"
] |
56,557,404 | https://en.wikipedia.org/wiki/Networked%20flying%20platform | Networked flying platforms (NFPs) are unmanned flying platforms of various types including unmanned aerial vehicles (UAVs), drones, tethered balloon and high-altitude/medium-altitude/low-altitude platforms
(HAPs/MAPs/LAPs) carrying RF/mmWave/FSO payload (transceivers) along with an extended battery life capabilities, and are floating or moving in the air at a quasi-stationary positions with the ability to move horizontally and vertically to offer 5G and beyond 5G (B5G) cellular networks and network support services.
Deployment configurations
There are following two possible NFPs deployment configurations:
Deployment configuration 1: NFPs are expected to complement conventional cellular networks to further enhance wireless capacity, expand coverage and improve network reliability for temporary events where there is a high density of mobile users or small cells in a limited or hard-to-reach area, or in a remote region where infrastructure is unavailable or expensive to deploy, e.g., sports events and concert gatherings.
Deployment configuration 2: NFPs can be deployed for unexpected scenarios, such as in emergency situations to support disaster relief activities and to enable communications when conventional cellular networks are either damaged or congested. In addition, owing to their mobility, NFPs are expected to deploy quickly and efficiently to support cellular networks, enhance network quality of service (QoS) and improve network resilience under emergency scenarios
NFPs can be manually (non-autonomously) controlled but are mainly designed for autonomous, pre-determined flights. An NFP can operate either in single-NFP mode, in which it does not cooperate with any other NFPs in the network, or as part of a swarm of NFPs, in which multiple interconnected NFPs cooperate, collaborate and perform the network mission autonomously, with one of the NFPs designated as the mother-NFP.
References
External links
BT Drone flights to connect Isle of Lewis with mainland
Qualcomm Technologies releases LTE drone trial results
Intel testing drones over AT&T LTE Networks, Verizon starts 5G Trials with Samsung
Project Skybender: Google's secretive 5G internet drone tests revealed
Wireless networking
Telecommunications
Radio technology | Networked flying platform | [
"Technology",
"Engineering"
] | 445 | [
"Information and communications technology",
"Telecommunications engineering",
"Wireless networking",
"Computer networks engineering",
"Telecommunications",
"Radio technology"
] |
56,560,841 | https://en.wikipedia.org/wiki/Matrilineal%20belt | In anthropology, the matrilineal belt is an area in Africa south of the equator centered in south-central Africa where matrilineality is predominant. The matrilineal belt runs diagonally from the Atlantic to the Indian ocean, crossing Angola, Zambia, Malawi and Mozambique. The belt is linked to horticultural household economics, and Bantu groups that have embraced pastoralism have tended to lose matrilinearity.
Hypotheses linking the matrilineal belt to a supposed matrilineal Bantu expansion have been rejected as lacking evidence.
References
Belt regions
Cultural regions
Geography of Africa
Kinship and descent
Matriarchy | Matrilineal belt | [
"Biology"
] | 130 | [
"Behavior",
"Human behavior",
"Kinship and descent"
] |
56,561,099 | https://en.wikipedia.org/wiki/Ortrud%20Oellermann | Ortrud R. Oellermann is a South African mathematician specializing in graph theory. She is a professor of mathematics at the University of Winnipeg.
Education and career
Oellermann was born in Vryheid.
She earned a bachelor's degree, cum laude honours, and a master's degree at the University of Natal in 1981, 1982, and 1983 respectively,
as a student of Henda Swart.
She completed her Ph.D. in 1986 at Western Michigan University.
Her dissertation was Generalized Connectivity in Graphs and was supervised by Gary Chartrand.
Oellermann taught at the University of Durban-Westville, Western Michigan University, University of Natal, and Brandon University, before moving to Winnipeg in 1996. At Winnipeg, she was co-chair of mathematics and statistics for 2011–2013.
Contributions
With Gary Chartrand, Oellermann is the author of the book Applied and Algorithmic Graph Theory (McGraw Hill, 1993).
She is also the author of well-cited research publications on metric dimension of graphs, on distance-based notions of convex hulls in graphs, and on highly irregular graphs in which every vertex has a neighborhood in which all degrees are distinct. The phrase "highly irregular" was a catchphrase of her co-author Yousef Alavi; because of this, Ronald Graham suggested that there should be a concept of highly irregular graphs, by analogy to the regular graphs, and Oellermann came up with the definition of these graphs.
Recognition
In 1991, Oellermann was the winner of the annual Silver British Association Medal of the Southern Africa Association for the Advancement of Science.
She won the Meiring Naude Medal of the Royal Society of South Africa in 1994.
She was also one of three winners of the Hall Medal of the Institute of Combinatorics and its Applications in 1994, the first year the medal was awarded.
Selected publications
Book
Research articles
References
External links
Home page
Year of birth missing (living people)
Living people
Canadian mathematicians
South African mathematicians
Women mathematicians
Graph theorists
University of Natal alumni
Western Michigan University alumni
Western Michigan University faculty
Academic staff of the University of Natal
Academic staff of Brandon University
Academic staff of University of Winnipeg | Ortrud Oellermann | [
"Mathematics"
] | 437 | [
"Mathematical relations",
"Graph theory",
"Graph theorists"
] |
56,563,719 | https://en.wikipedia.org/wiki/Dragon%27s%20Breath%20%28dessert%29 | The dragon's Breath is a frozen dessert made from cereal dipped in liquid nitrogen. When placed in the eater's mouth, it produces vapors which comes out of the nose and mouth, giving the dessert its name.
Description
Dragon's Breath is made using colorful cereal balls described as having a flavor similar to Froot Loops. The cereal is dipped in liquid nitrogen and served in a cup. The eater uses a stick to skewer the balls. Once in the eater's mouth, the cold of the liquid nitrogen combines with the warmth of the mouth to release visible vapors out of the nose and mouth.
According to Glutto Digest, Dragon's Breath was originally invented and served at a “minibar” by José Andrés in 2008. After Andrés stopped serving it at his LA restaurant, “The Bazaar”, in 2009, it spread throughout Taiwan, Korea, and the Philippines over the following years.
According to The Straits Times, Dragon's Breath first appeared in the Philippines and South Korea circa 2015, but gained popularity when Los Angeles-based chain, Chocolate Chair, added it to its menu.
Dragon's Breath is noted for the spectacle of its consumption more than its flavor, with several publications commenting on its compatibility with Instagram trends.
Safety
Liquid nitrogen is used in several foods and drinks to quickly freeze them or for the vapors it produces. Its consumption poses several dangers to humans. The extreme cold temperature can cause damage to human tissue, and the displacement of oxygen by nitrogen can cause asphyxiation.
At a shop in Singapore in 2016, a woman was burned when the dessert stuck to her gums.
In October 2017, two children at the Pensacola Interstate Fair in Florida were injured while handling or consuming Dragon's Breath. A 14-year-old suffered a burn on her thumb from contact with the frozen dessert. Another child suffered second degree burns on the roof of her mouth. Following complaints, the fair's general manager announced the vendor would not be allowed to sell Dragon's Breath at the next year's event.
The smoking balls, puffy cereals infused with liquid nitrogen, are also sold as Heaven's Breath and Nitro Puff. In 2018, the FDA issued an alert against the delicacy, warning of the colorful balls' danger to children with asthma and of severe skin and internal organ damage, burns, breathing difficulty and life-threatening injuries.
External links
References
Frozen desserts
Cryogenics | Dragon's Breath (dessert) | [
"Physics"
] | 502 | [
"Applied and interdisciplinary physics",
"Cryogenics"
] |
56,564,310 | https://en.wikipedia.org/wiki/MRI%20artifact | An MRI artifact is a visual artifact (an anomaly seen during visual representation) in magnetic resonance imaging (MRI). It is a feature appearing in an image that is not present in the original object. Many different artifacts can occur
during MRI, some affecting the diagnostic quality, while others may be confused with pathology. Artifacts can be classified as patient-related, signal processing-dependent and hardware (machine)-related.
Patient-related MR artifacts
Motion artifacts
A motion artifact is one of the most common artifacts in MR imaging. Motion can cause either ghost images or diffuse image noise in the phase-encoding direction. The reason for mainly affecting data sampling in the phase-encoding direction is the significant difference in the time of acquisition in the frequency- and phase-encoding directions. Frequency-encoding sampling in all the rows of the matrix (128, 256 or 512) takes place during a single echo (milliseconds). Phase-encoded sampling takes several seconds, or even minutes, owing to the collection of all the k-space lines to enable Fourier analysis. Major physiological movements are of millisecond to seconds duration and thus too slow to affect frequency-encoded sampling, but they have a pronounced effect in the phase-encoding direction. Periodic movements such as cardiac movement and blood vessel or CSF pulsation cause ghost images, while non-periodic movement causes diffuse image noise (Fig. 1). Ghost image intensity increases with amplitude of movement and the signal intensity from the moving tissue. Several methods can be used to reduce motion artifacts, including patient immobilisation, cardiac and respiratory gating, signal suppression of the tissue causing the artifact, choosing the shorter dimension of the matrix as the phase-encoding direction, view-ordering or phase-reordering methods and swapping phase and frequency-encoding directions to move the artifact out of the field of interest.
Flow
Flow can manifest as either an altered intravascular signal (flow enhancement or flow-related signal loss), or as flow-related artifacts (ghost images or spatial misregistration). Flow enhancement, also known as inflow effect, is caused by fully magnetised protons entering the imaged slice while the stationary protons have not fully regained their magnetization. The fully magnetized protons yield a high signal in comparison with the rest of the surroundings. High velocity flow causes the protons entering the image to be removed from it by the time the 180-degree pulse is administered. The effect is that these protons do not contribute to the echo and are registered as a signal void or flow-related signal loss (Fig. 2). Spatial misregistration manifests as displacement of an intravascular signal owing to position encoding of a voxel in the phase direction preceding frequency encoding by time TE/2. The intensity of the artifact is dependent on the signal intensity from the vessel, and is less apparent with increased TE.
Metal artifacts
Metal artifacts occur at interfaces of tissues with different magnetic susceptibilities, which cause local magnetic fields to distort the external magnetic field. This distortion changes the precession frequency in the tissue leading to spatial mismapping of information. The degree of distortion depends on the type of metal (stainless steel having a greater distorting effect than titanium alloy), the type of interface (most striking effect at soft tissue-metal interfaces), pulse sequence and imaging parameters. Metal artifacts are caused by external ferromagnetics such as cobalt containing make-up, internal ferromagnetics such as surgical clips, spinal hardware and other orthopaedic devices, and in some cases, metallic objects swallowed by people with pica. Manifestation of these artifacts is variable, including total signal loss, peripheral high signal and image distortion (Figs 3 and 4). Reduction of these artifacts can be attempted by orientating the long axis of an implant or device parallel to the long axis of the external magnetic field, possible with mobile extremity imaging and an open magnet. Further methods used are choosing the appropriate frequency encoding direction, since metal artifacts are most pronounced in this direction, using smaller voxel sizes, fast imaging sequences, increased readout bandwidth and avoiding gradient-echo imaging when metal is present. A technique called MARS (metal artifact reduction sequence) applies an additional gradient, along the slice select gradient at the time the frequency encoding gradient is applied.
Signal processing dependent artifacts
The ways in which the data are sampled, processed and mapped out on the image matrix manifest these artifacts.
Chemical shift artifact
Chemical shift artifact occurs at the fat/water interface in the frequency encoding direction (Fig. 5). These artifacts arise due to the difference in resonance of protons as a result of their micromagnetic environment. The protons of fat resonate at a slightly lower frequency than those of water. High field strength magnets are particularly susceptible to this artifact. Determination of the artifact can be made by swapping the phase- and frequency-encoding gradients and examining the resultant shift of fat tissue.
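As a rough worked example (typical textbook values rather than figures from a specific source): fat and water protons differ in resonance frequency by about 3.5 ppm, which at 1.5 T (Larmor frequency of roughly 64 MHz) corresponds to a frequency difference of roughly 220 Hz. If the receiver bandwidth works out to, say, 100 Hz per pixel, fat-containing tissue is mapped about two pixels away from its true position along the frequency-encoding axis; halving the bandwidth per pixel doubles the displacement, and doubling the field strength (all else equal) doubles it as well.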
Partial volume
Partial volume artifacts arise from the size of the voxel over which the signal is averaged. Objects smaller than the voxel dimensions lose their identity, and loss of detail and spatial resolution occurs. Reduction of these artifacts is accomplished by using a smaller pixel size and/or a smaller slice thickness.
Wrap-around
A wrap-around artifact, also known as an aliasing artifact, is a result of mismapping of anatomy that lies outside the field of view but within the slice volume. The selected field of view is smaller than the size of the imaged object. The anatomy is usually displaced to the opposite side of the image (Figs 6 and 7). It can be caused by non-linear gradients or by undersampling of the frequencies contained within the return signal. The sampling rate must be twice the maximal frequency that occurs in the object (Nyquist sampling limit). If not, the Fourier transform will assign very low values to the frequency signals greater than the Nyquist limit. These frequencies will then ‘wrap around’ to the opposite side of the image, masquerading as low-frequency signals. In the frequency encode direction a filter can be applied to the acquired signal to eliminate frequencies greater than the Nyquist frequency. In the phase encode direction, artifacts can be reduced by increasing the number of phase encode steps (at the cost of increased imaging time). For correction, a larger field of view may be chosen.
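A toy numerical sketch of the mechanism (the field of view, matrix size and function name are arbitrary assumptions, not taken from any MRI software): because the phase imparted by the phase-encoding gradient only resolves position modulo the field of view, anatomy lying outside the field of view produces the same encoding as anatomy inside it.

#include <stdio.h>
#include <math.h>

/* Map a position (mm) along the phase-encode axis to the pixel row where its
 * signal is reconstructed; positions outside [0, fov_mm) wrap around.        */
static int phase_encode_row(double position_mm, double fov_mm, int matrix_rows)
{
    double frac = position_mm / fov_mm;  /* fraction of the FOV, may be < 0 or > 1   */
    frac -= floor(frac);                 /* encoding only resolves position mod FOV  */
    return (int)(frac * matrix_rows);
}

int main(void)
{
    /* 240 mm FOV, 256-row matrix: tissue 20 mm beyond the top edge of the FOV... */
    printf("%d\n", phase_encode_row(260.0, 240.0, 256)); /* ...prints 21: it appears near the opposite edge */
    return 0;
}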
Gibbs artifacts
Gibbs artifacts or Gibbs ringing artifacts, also known as truncation artifacts, are caused by the under-sampling of high spatial frequencies at sharp boundaries in the image. Lack of appropriate high-frequency components leads to an oscillation at a sharp transition, known as a ringing artifact. It appears as multiple, regularly spaced parallel bands of alternating bright and dark signal that slowly fade with distance (Fig. 8). Ringing artifacts are more prominent at smaller digital matrix sizes. Methods employed to correct Gibbs artifacts include filtering the k-space data prior to Fourier transform, increasing the matrix size for a given field of view, the Gegenbauer reconstruction and the Bayesian approach.
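A hedged one-dimensional analogy: reconstructing an ideal sharp edge from only its lowest spatial frequencies (a truncated Fourier series) produces an overshoot of roughly 9% of the step height next to the edge, and the overshoot does not shrink as more frequencies are retained; the oscillations merely become narrower and more closely spaced. This is why the artifact appears as fine, regularly spaced bands hugging sharp boundaries, and why it is more conspicuous at small matrix sizes.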
Machine/hardware-related artifacts
This is a wide and still expanding subject. Only a few common artifacts are recognised.
Radiofrequency (RF) quadrature
RF detection circuit failure arises from improper detector channel operation. Fourier-transformed data display a bright spot in the centre of the image. If one channel of the detector has a higher gain than the other it will result in object ghosting in the image. This is the result of a hardware failure and must be addressed by a service representative.
External magnetic field (B0) inhomogeneity
B0 inhomogeneity leads to mismapping of tissues. Inhomogeneous external magnetic field causes either spatial, intensity, or both distortions. Intensity distortion occurs when the field in a location is greater or less than in the rest of the imaged object (Fig. 9). Spatial distortion results from long-range field gradients, which remain constant in the inhomogeneous field.
Gradient field artifacts
Magnetic field gradients are used to spatially encode the location of signals from excited protons within the volume being imaged. The slice select gradient defines the volume (slice). Phase- and frequency-encoding gradients provide the information in the other two dimensions. Any deviation in the gradient would be represented as a distortion. As the distance increases from the centre of the applied gradient, loss of field strength occurs at the periphery. Anatomical compression occurs and is especially pronounced on coronal and sagittal imaging. When the phase-encoding gradient is different, the width or height of the voxel is different, resulting in distortion. Anatomical proportions are compressed along one or the other axis. Square pixels (and voxels) should be obtained. Ideally the phase gradient should be assigned to the smaller dimension of the object and the frequency gradient to the larger dimension. In practice this is not always possible because of the necessity of displacing motion artifacts. This may be corrected by reducing the field of view, by lowering the gradient field strength or by decreasing the frequency bandwidth of radio signal. If correction is not achieved, the cause might be either a damaged gradient coil or an abnormal current passing through the gradient coil.
RF (B1) inhomogeneity
Variation in intensity across the image may be due to the failure of the RF coil, a non-uniform B1 field, non-uniform sensitivity of the receive-only coil (spaces between wires in the coil, uneven distribution of wire), or the presence of non-ferromagnetic material in the imaged object.
When using a FLASH sequence, tip angle variations due to B1 inhomogeneity can affect the contrast of the image. Similarly, inversion recovery pulses and other T1-dependent methods will suffer from signal intensity errors and generally lower T1 weighting. This is due to imperfect flip angles throughout the slice, particularly around the edges of the body, resulting in imperfect magnetization recovery.
RF tip angle theory vs reality
The human body is full of protons, and during imaging the B0 field aligns these individual protons into a net magnetization in the direction of the magnetic field. An RF pulse applied perpendicular to the main magnetic field flips the spins to a desired angle. This flip angle scales with the B1 field amplitude. An accurate flip angle is crucial because the measured MR signals depend on the flip angle of the protons. However, this theory assumes that the B1 field is homogeneous and, therefore, that all spins in a slice are flipped by an equal amount.
In reality, different areas of a slice see different radiofrequency fields, leading to different flip angles. One reason this occurs is that the RF wavelength is inversely proportional to B0: the RF wavelength decreases as B0 increases. At a B0 field of 1.5T, RF wavelengths are long compared to the size of the body, but as the main magnetic field is increased these wavelengths become comparable to or smaller than the regions of the body being imaged, resulting in flip angle inhomogeneity. In images of a healthy brain, this field inhomogeneity is readily visible at 3T and 7T.
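As a rough worked example (the gyromagnetic ratio is the standard proton value, while the tissue relative permittivity of about 60 is an assumed, representative number), the in-tissue RF wavelength can be estimated for common field strengths:

```python
# Rough estimate of the RF wavelength inside tissue at different field strengths.
# The gyromagnetic ratio is the standard proton value; the relative permittivity
# of tissue (~60) is an assumed, illustrative figure.
gamma_mhz_per_t = 42.58
c = 3.0e8            # speed of light, m/s
eps_r = 60.0         # assumed relative permittivity of tissue

for b0 in (1.5, 3.0, 7.0):
    f_hz = gamma_mhz_per_t * 1e6 * b0
    wavelength_m = c / (f_hz * eps_r ** 0.5)
    print(f"B0 = {b0} T: f = {f_hz / 1e6:.0f} MHz, in-tissue wavelength = {wavelength_m * 100:.0f} cm")
```

The printed wavelengths shrink from tens of centimetres at 1.5T to around a dozen centimetres at 7T, i.e. comparable to the dimensions of the head and torso, which is why flip angle inhomogeneity becomes prominent at high field.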
Wavelength effects are not the only cause of B1 inhomogeneity: it can also be due to the RF pulse design, B0 field inhomogeneity, or even patient movement.
Asymmetrical brightness
There is a uniform decrease in signal intensity along the frequency encoding axis. Signal drop-off is due to filters that are too tight about the signal band. Some of the signal generated by the imaged section is, thereby, inappropriately rejected. A similar artifact may be caused by non-uniformity in slice thickness.
RF noise
RF pulses and precessional frequencies of MRI instruments occupy the same frequency bandwidth as common sources such as TV, radio, fluorescent lights and computers. Stray RF signals can cause various artifacts. Narrow-band noise is projected perpendicular to the frequency-encoding direction. Broadband noise disrupts the image over a much larger area. Appropriate site planning, proper installation and RF shielding (Faraday cage) eliminate stray RF interference.
Zero line and star artifacts
Zero line and star artifacts appear as a bright linear signal in a dashed pattern that decreases in intensity across the screen; they can occur as a line or a star pattern, depending on the position of the patient in the ‘phase-frequency space’. They are due to system noise or any source of RF pollution within the room (Faraday cage). If this pattern persists, check for sources of system noise such as bad electronics or alternating-current line noise, loose connections to surface coils, or any source of RF pollution. If a star pattern is encountered, the manufacturer needs to readjust the system software so that the image is moved off the zero point.
Zipper artifacts
Although less common, zippers are bands through the image centre due to an imperfect Faraday cage, with RF pollution in, but originating from outside, the cage. Residual free induction decay stimulated echo also causes zippers.
Bounce point artifact
Absence of signal from tissues of a particular T1 value is a consequence of magnitude-sensitive reconstruction in inversion recovery imaging. When the chosen inversion time (TI) equals 69% of the T1 value of a particular tissue, the longitudinal magnetization of that tissue is nulled and the bounce point artifact occurs. It can be avoided by using phase-sensitive inversion recovery reconstruction techniques.
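A short worked example of the bounce-point condition (the T1 value used is purely illustrative), based on the standard inversion recovery longitudinal magnetization Mz(TI) = M0(1 - 2e^(-TI/T1)) with full relaxation between repetitions:

```python
import numpy as np

# In magnitude-reconstructed inversion recovery, Mz(TI) = M0 * (1 - 2*exp(-TI/T1))
# passes through zero at TI = T1 * ln(2), i.e. at about 69% of T1 (the "bounce
# point"), where that tissue yields no signal.
t1_ms = 260.0                          # illustrative T1 value
ti_null = t1_ms * np.log(2)
print(f"signal null at TI = {ti_null:.0f} ms (TI = {np.log(2):.2f} x T1)")
```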
Surface coil artifacts
Close to the surface coil the signals are very strong resulting in a very intense image signal (Fig. 10). Further from the coil the signal strength drops rapidly due to the attenuation with a loss of image brightness and significant shading to the uniformity. Surface coil sensitivity intensifies problems related to RF attenuation and RF mismatching.
Slice-to-slice interference
Non-uniform RF energy received by adjacent slices during a multi-slice acquisition, due to cross-excitation of adjacent slices, causes contrast loss in the reconstructed images (Fig. 11). To overcome these interference artifacts, two independent sets of gapped multi-slice images can be acquired and subsequently reordered during display of the full image set.
Artifact correction
Motion correction
Gating
Gating, also known as triggering, is a technique that acquires MRI data in a low-motion state. An example of this could be acquiring an MRI slice only when the lung capacity is low (i.e. between large breaths). Gating is a very simple solution that can have a very large effect.
Gating is best suited for mitigating breathing and cardiac artifacts, because these types of motion are repetitive, so acquisitions can be triggered during a ‘low motion state’. Gating is used for cine imaging, MRA, free-breathing chest scans, CSF flow imaging, and more.
In order to gate correctly, the system needs to have knowledge of the patient's cardiac motion and breathing pattern. This is commonly done by using a pulse oximeter or EKG sensor to read a cardiac signal and/or a bellows to read the breathing signal.
A big disadvantage to gating is ‘dead time’, defined as time wasted due to waiting for a high motion state to pass. For example, we do not want to acquire an MRI image while someone is in the process of inhaling, since this would be a high motion state. So, we have many time periods where we are waiting for a high motion state to pass. This is even more prominent when we consider respiratory and cardiac gating together. The windows of time where the respiratory and cardiac motions are low are very infrequent, leading to high dead times. However, the advantage is that images acquired with both cardiac and respiratory gating have a significant improvement in image quality.
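The acceptance-window idea and the resulting dead time can be sketched with a toy respiratory signal (the waveform, sampling rate, and threshold below are arbitrary assumptions, not values from any scanner or vendor):

```python
import numpy as np

# Toy respiratory bellows signal: acquire data only while the signal is in a
# low-motion window (here, below 30% of its range); everything else is dead time.
t = np.linspace(0, 60, 6000)                        # 60 s sampled at 100 Hz
bellows = 0.5 * (1 + np.sin(2 * np.pi * t / 4.0))   # toy ~4 s breathing period

threshold = 0.3
acquire = bellows < threshold                       # gating window per time point

duty_cycle = acquire.mean()
print(f"acquisition window: {duty_cycle:.0%} of scan time "
      f"(dead time: {1 - duty_cycle:.0%})")
```

Adding a second (cardiac) condition to the acceptance test shrinks the window further, which is the dead-time penalty of combined respiratory and cardiac gating described above.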
Pilot tone
The Pilot Tone method involves turning on a constant RF frequency to detect patient motion. More specifically, the MRI machine will detect the pilot tone signal when acquiring an image. The strength of the pilot tone signal at every TR varies with the breathing and motion of the patient; that is, the patient's movements cause the received constant RF tone to be amplitude modulated. A very large advantage of the pilot tone is that it requires no contact with the patient.
Extracting a breathing signal using a pilot tone is simple in theory: one must place a constant-frequency signal near the MRI bore, acquire an image, and take an FFT along the readout direction to extract the pilot tone. Technical considerations include choosing the RF frequency: the pilot tone must be detectable by the MRI machine, but must be chosen carefully so that it does not interfere with the MRI image. The pilot tone shows up as a zipper (for a Cartesian acquisition).
The location of that line is determined by the frequency of the RF tone. For this reason, pilot tone acquisitions usually have slightly larger FOVs, to make room for the pilot tone.
Once an image has been acquired, the pilot tone signal can be extracted by taking the FFT along the readout direction and plotting the amplitude of the resulting signal. The pilot tone will show up as a line (of varying amplitude) when taking an FFT along the readout direction. The pilot tone method can also be used prospectively to acquire cardiac images.
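A toy NumPy sketch of the retrospective extraction step (the k-space dimensions, tone position, noise level, and breathing waveform are all synthetic assumptions): a constant tone whose amplitude follows the breathing signal is added to every readout, and an FFT along the readout direction recovers that modulation.

```python
import numpy as np

# Toy multi-TR acquisition: each row is one readout (one TR) of k-space samples.
n_tr, n_readout = 200, 256
rng = np.random.default_rng(0)
kspace = 0.1 * (rng.standard_normal((n_tr, n_readout))
                + 1j * rng.standard_normal((n_tr, n_readout)))   # "anatomy" + noise

# Pilot tone: a constant RF frequency whose received amplitude is modulated by
# breathing; it appears at a fixed position along the readout direction.
breathing = 1.0 + 0.3 * np.sin(2 * np.pi * np.arange(n_tr) / 40.0)
tone_bin = 40                                                    # assumed tone position
readout_phase = np.exp(2j * np.pi * tone_bin * np.arange(n_readout) / n_readout)
kspace += breathing[:, None] * readout_phase[None, :]

# Extraction: FFT along the readout axis, then track the magnitude of the
# pilot-tone column over TRs; this is the estimated respiratory signal.
projections = np.fft.fft(kspace, axis=1)
resp_estimate = np.abs(projections[:, tone_bin]) / n_readout
print("correlation with true breathing:",
      round(float(np.corrcoef(resp_estimate, breathing)[0, 1]), 3))
```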
The Pilot Tone method is great for detecting respiratory motion artifacts. This is because there is a very large and distinct modulation due to human breathing patterns. Heart signals are much more subtle and difficult to detect using a pilot tone. Retrospective techniques using the pilot tone are able to increase the level of detail and reduce blurring in free-breathing radial images.
TAMER
Targeted Motion Estimation and Reduction (TAMER) is a retrospective motion correction method developed by Melissa Haskell, Stephen Cauley, and Lawrence Wald. The method was first introduced in their paper Targeted Motion Estimation and Reduction (TAMER): Consistency Based Motion Mitigation for MRI using a Reduced Model Joint Optimization, as part of the IEEE Transactions on Medical Imaging Journal. The method corrects motion-related artifacts by jointly estimating the desired motion-free image and the associated motion trajectory, minimizing the data consistency error of a SENSE forward model that includes rigid-body subject motion.
Preliminaries
The TAMER Method utilizes the SENSE forward model (described below) that has been modified to include the effects of motion in a 2D multi-shot imaging sequence. Note: the following modified SENSE model is described in detail in Melissa Haskell's doctoral dissertation, Retrospective Motion Correction for Magnetic Resonance Imaging.
Suppose that we have N_c coils. Let x be a column vector of image voxel values, let N_s be the number of k-space samples acquired per shot, and let s be the signal data from the N_c coils. Let E_θ be the encoding matrix for a given patient motion trajectory vector θ. E_θ is composed of many sub-matrices (one encoding matrix for each shot i).
For each shot i, we have the sub-matrix E_θ,i, which is the encoding matrix for that particular shot, where:
U_i is the under-sampling operator
F is the Fourier encoding operator
T_xy,i is the in-plane translation operator
T_z,i is the through-plane translation operator
R_i is the rotation operator
SENSE Motion Forward Model: s = E_θ x
SENSE model extended to describe a 2D multi-shot imaging sequence: s = [s_1; s_2; …; s_N], where s_i = E_θ,i x is the signal acquired during shot i.
The rigid-body motion forward model is nonlinear, and the process of solving for estimations of both the motion trajectory and the image volume is computationally challenging and time-consuming. In the effort to speed up and simplify computations, the TAMER method separates the vector of image voxel values, x, into a vector of target voxel values, x_t, and a vector of fixed voxels, x_f. Given any choice of target voxels and fixed voxels, we have the following: s = E_θ x = E_θ,t x_t + E_θ,f x_f, where E_θ,t and E_θ,f contain the columns of E_θ corresponding to the target and fixed voxels, respectively.
Note: the length of x_t only makes up about 5% of the total length of x.
Now the optimization can be reduced to fitting the signal contribution of the target voxels to the correct target voxel values x_t and the correct motion θ.
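The separation into target and fixed voxels is simply a column partition of the encoding matrix, as the following small numerical check illustrates (the matrix sizes and the choice of target voxels are arbitrary, and a generic random matrix E stands in for a real SENSE encoding):

```python
import numpy as np

# Toy encoding matrix E (rows: acquired k-space samples, columns: image voxels).
rng = np.random.default_rng(1)
n_samples, n_voxels = 64, 32
E = (rng.standard_normal((n_samples, n_voxels))
     + 1j * rng.standard_normal((n_samples, n_voxels)))
x = rng.standard_normal(n_voxels)

# Choose a small subset of "target" voxels; the rest are "fixed".
target = np.arange(0, n_voxels, 8)
fixed = np.setdiff1d(np.arange(n_voxels), target)

# The signal splits exactly into target and fixed contributions.
s_full = E @ x
s_split = E[:, target] @ x[target] + E[:, fixed] @ x[fixed]
print("max difference:", np.abs(s_full - s_split).max())   # numerically zero
```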
TAMER Algorithm
The TAMER algorithm has 3 main stages: Initialization, Jumpstart of Motion Parameter Search, and the Joint Optimization Reduced Model Search.
Initialization:
The first stage of the TAMER algorithm acquires the initial reconstruction of the full image volume, x_0, by assuming that all motion parameters are zero. One can solve for x_0 by minimizing the least-squares error of the SENSE forward model without motion, i.e. solve the normal equations E_0^H E_0 x_0 = E_0^H s, where E_0 denotes the encoding matrix with all motion parameters set to zero and E_0^H is its conjugate transpose. We have discussed the notion of separating the SENSE model into target and fixed voxel contributions; however, we haven't yet discussed how the target voxels are chosen. Voxels that are strongly coupled together indicate motion. In a motion-free Cartesian acquisition, each voxel would only be coupled to itself, so our goal is essentially to un-couple these voxels. As described in the paper Targeted Motion Estimation and Reduction (TAMER): Consistency Based Motion Mitigation for MRI using a Reduced Model Joint Optimization, as part of the IEEE Transactions on Medical Imaging Journal, the TAMER algorithm converges fastest when choosing target voxels that are highly coupled. The target voxels can be entirely determined by the sequence parameters and coil sensitivities.
Target Voxel Selection Process:
Coils are first grouped based on their artifact properties: the model error is computed assuming no motion, the model correlation is then computed across all channels, and TAMER is applied to the groups of coils with the largest correlation artifacts to obtain the motion and image estimates.
The initial target voxels are selected by first choosing a root voxel (generally the center of the image). Once the root voxel is chosen, the correlation between the root voxel and all other voxels is determined by taking the column vector of the correlation matrix corresponding to the root voxel. The magnitude of the entries in this column vector represents the strength of interaction between the root voxel and all of the other voxels. The root voxel, along with the voxels that have the strongest interaction with it, is then chosen as the initial set of target voxels.
Note: For each iteration of the TAMER process, the target voxels are selected by shifting the target voxels from the previous iteration perpendicularly to the phase encode direction by a preset amount.
Jumpstart of Motion Parameter Search:
Now the initial guess of the patient's motion is determined by evaluating the data consistency metric over a range of values for each of the motion parameters; the best value for each parameter is then selected to construct the initial motion estimate.
Joint Optimization Reduced Model Search:
We now have the initial target voxels, motion estimate, and coil groupings. The following procedure is now executed.
Let θ_k be the motion trajectory estimate at search step k, let x_t,k and x_f,k be the corresponding target and fixed voxel values, and let K_max be the maximum number of iterations.
While k < K_max, repeat the following:
Solve for the target voxel values x_t,k that minimize the data consistency error of the forward model, with the fixed voxels x_f,k held constant and the motion fixed at θ_k
Solve for the updated motion estimate θ_k+1 that minimizes the data consistency error given the updated voxel values
Set the full image estimate by combining the updated target voxels with the fixed voxels
Set the next group of target voxels by shifting the current group perpendicular to the phase-encode direction (as described above)
Set k = k + 1
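The overall flow of the reduced-model search can be sketched in a few lines of NumPy. This is a deliberately simplified toy rather than the authors' implementation: the 'motion' is a single phase parameter applied to one of two synthetic shots, the encoding matrices are random stand-ins, the coupling-based target selection is replaced by a sliding window of voxels, and the motion update is a plain grid search. It only illustrates the alternation between the reduced image fit and the motion update.

```python
import numpy as np

rng = np.random.default_rng(2)
n_per_shot, n_vox, n_target = 48, 16, 4

# Toy two-shot "SENSE + motion" model: shot 1 is motion-free, shot 2 is
# modulated by a single motion parameter theta (a stand-in for E_theta).
E1 = rng.standard_normal((n_per_shot, n_vox)) + 1j * rng.standard_normal((n_per_shot, n_vox))
E2 = rng.standard_normal((n_per_shot, n_vox)) + 1j * rng.standard_normal((n_per_shot, n_vox))

def encoding(theta):
    return np.vstack([E1, np.exp(1j * theta) * E2])

theta_true = 0.6
x_true = rng.standard_normal(n_vox)
s = encoding(theta_true) @ x_true                      # "acquired" signal

def consistency_error(x, theta):
    return np.linalg.norm(s - encoding(theta) @ x)

# Initialization: full least-squares reconstruction assuming zero motion.
x_est = np.linalg.lstsq(encoding(0.0), s, rcond=None)[0]
theta_est = 0.0
print("initial error:", round(float(consistency_error(x_est, theta_est)), 3))

for k in range(24):
    # Reduced model: re-fit only a small, sliding group of target voxels,
    # with all other voxels held constant.
    target = (np.arange(n_target) + k * n_target) % n_vox
    fixed = np.setdiff1d(np.arange(n_vox), target)
    E = encoding(theta_est)
    resid = s - E[:, fixed] @ x_est[fixed]
    x_est[target] = np.linalg.lstsq(E[:, target], resid, rcond=None)[0]

    # Coarse search over the motion parameter to reduce the data consistency error.
    candidates = np.linspace(-1.0, 1.0, 201)
    theta_est = candidates[int(np.argmin([consistency_error(x_est, t) for t in candidates]))]

print("final error:", round(float(consistency_error(x_est, theta_est)), 3))
print("motion estimate:", round(float(theta_est), 2), "(true value:", theta_true, ")")
```

In this toy the data-consistency error drops and the motion estimate moves from zero toward the true value; the real algorithm additionally uses the coil-informed target selection, jumpstart, and rigid-body motion parameterization described above.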
TAMER: Advantages and Disadvantages
Advantages:
TAMER retrospectively corrects for motion, so modifications to the MRI exam procedure are not necessary.
TAMER doesn't alter the acquisition procedure, so it can be easily integrated into current clinical MRI scans.
TAMER significantly reduces computation of the joint optimization model used to estimate motion parameters and image voxels.
Disadvantages:
Current TAMER implementations have lengthy overall computation times.
TAMER requires multi-channel data, as the motion parameters need additional degrees of freedom, which are provided by the multi-channel acquisition.
The TAMER algorithm assumes static coil profiles that don't change with the motion of the patient. This assumption would be an issue for larger motion.
Neural network approaches
In recent years, neural networks have generated a great deal of interest by outperforming traditional methods on longstanding problems across many fields. Machine learning, and by extension neural networks, have been used in many facets of MRI — for instance, speeding up image reconstruction, or improving reconstruction quality when working with a lack of data. Neural networks have also been used in motion artifact correction thanks to their ability to learn visual information from data, as well as infer underlying, latent representations in data.
NAMER
Network Accelerated Motion Estimation and Reduction (NAMER) is a retrospective motion correction technique that utilizes convolutional neural networks (CNNs), a class of neural networks designed to process and learn from visual information such as images. This is a follow-up from the authors of the TAMER paper titled Network Accelerated Motion Estimation and Reduction (NAMER): Convolutional neural network guided retrospective motion correction using a separable motion model. Similar to TAMER, the paper aims to correct for motion-related artifacts by way of estimating a desired motion-free image and optimizing parameters for a SENSE forward model describing the relationship between raw k-space data and image space while factoring in rigid motion.
Setup
A SENSE forward model is used to induce synthetic motion artifacts in raw k-space data, giving access both to data with motion artifacts and to the ground-truth image without motion artifacts. This is important to the NAMER technique, because it utilizes a convolutional neural network (CNN) to frontload image estimation and guide model parameter estimation. Convolutional neural networks leverage convolution kernels to analyze visual imagery. Here, a 27-layer network is used with multiple convolution layers, batch normalization, and ReLU activations, trained with a standard Adam optimizer.
Image Estimation
The CNN attempts to learn the image artifacts present in the motion-corrupted input data x_corrupted. The estimate of these artifacts, denoted x_artifact, is then subtracted from the motion-corrupted input data in order to produce a best estimate of the motion-free image: x_CNN = x_corrupted - x_artifact. This serves two purposes: first, it allows the CNN to perform backpropagation and update its model weights by using a mean squared error loss function comparing x_CNN with the known ground-truth motion-free image. Second, it gives us a good estimate of the motion-free image that provides a starting point for model parameter optimization.
SENSE model parameter optimization
Using a CNN effectively allows us to bypass the second stage of TAMER by skipping the joint parameter search. This means that we can focus on solely estimating the motion parameters θ. Because θ is really a vector of multiple, independent parameters, we can parallelize our optimization by estimating each parameter separately.
Optimizing the Optimization Procedure
Before, a single joint problem was used to optimize both the image and the motion parameters at once, i.e. minimizing ||s - E_θ x|| over both x and θ. Now, we can optimize solely the motion values: θ* = argmin_θ ||s - E_θ x_CNN||.
On top of this, if a multi-shot acquisition was performed, we can estimate the parameters θ_i for each of the shots separately, and go even further by estimating the parameters θ_i,l for each line l in each shot i: θ_i,l* = argmin ||s_i,l - E_θ_i,l x_CNN||.
This allows us to massively reduce computation time, from around 50 minutes with TAMER to just 7 minutes with NAMER.
Reconstruction
The new model parameters are then used in a standard least-squares optimization problem to reconstruct an image that minimizes the distance between the k-space data and the result of applying the SENSE forward model, under the new parameter estimate, to our best estimate of the motion-free image. This process is repeated for a desired number of steps, or until the change in the reconstructed image is sufficiently small. The NAMER technique has shown itself to be very effective in correcting for rigid motion artifacts, and converges much faster than other methods, including TAMER. This illustrates the power of deep learning in improving results across myriad fields.
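The iteration structure can likewise be sketched with a toy model (again not the published implementation): the trained CNN is replaced by a trivial placeholder function, the motion model is a single phase per shot with the first shot treated as a motion-free reference, and the encoding matrices are random stand-ins. The sketch only shows the alternation between the CNN-based image estimate, the separable per-shot motion search, and the least-squares reconstruction.

```python
import numpy as np

rng = np.random.default_rng(3)
n_per_shot, n_shots, n_vox = 48, 3, 16
E0 = (rng.standard_normal((n_per_shot * n_shots, n_vox))
      + 1j * rng.standard_normal((n_per_shot * n_shots, n_vox)))

def encoding(thetas):
    # Toy per-shot motion model: one phase-like parameter per shot.
    row_thetas = np.repeat(thetas, n_per_shot)
    return np.exp(1j * row_thetas)[:, None] * E0

def cnn_artifact_estimate(x):
    # Placeholder for the trained CNN: slightly damps the current image.
    # (Purely illustrative; NOT a real network or the published model.)
    return 0.1 * x

theta_true = np.array([0.0, 0.4, -0.3])     # shot 0 is the motion-free reference
x_true = rng.standard_normal(n_vox)
s = encoding(theta_true) @ x_true           # "acquired" k-space data

thetas = np.zeros(n_shots)
x_est = np.linalg.lstsq(encoding(thetas), s, rcond=None)[0]

for _ in range(10):
    # 1) CNN step: subtract the estimated artifact from the current image.
    x_cnn = x_est - cnn_artifact_estimate(x_est)

    # 2) Motion step: each shot's parameter is estimated separately (parallelisable).
    grid = np.linspace(-1.0, 1.0, 201)
    for i in range(1, n_shots):
        errs = []
        for t in grid:
            trial = thetas.copy()
            trial[i] = t
            errs.append(np.linalg.norm(s - encoding(trial) @ x_cnn))
        thetas[i] = grid[int(np.argmin(errs))]

    # 3) Reconstruction step: least-squares image under the new motion estimate.
    x_est = np.linalg.lstsq(encoding(thetas), s, rcond=None)[0]

print("estimated per-shot motion:", np.round(thetas, 2))
print("true per-shot motion:     ", theta_true)
```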
Generative adversarial networks
Other more advanced techniques take advantage of generative adversarial networks (GANs) which aim to learn the underlying latent representation of data in order to synthesize new examples that are indistinguishable from real data. Here, two neural networks, a Generator Network and a Discriminator Network, are modelled as agents competing in a game. The Generator Network's goal is to produce synthetic images that are as close as possible to images from the true distribution, while the Discriminator Network's goal is to distinguish generated synthetic images from the true data distribution. Specific to motion artifact correction in MRI, the Generator Network takes in an image with motion artifacts, and outputs an image without motion artifacts. The Discriminator Network then differentiates between the synthesized image and ground truth data. Various studies have shown that GANs perform very well in correcting for motion artifacts.
RF (B1) Inhomogeneity Correction
External Objects
B1 inhomogeneity due to constructive or destructive interference from the permittivity of body tissue can be mitigated using external objects with high dielectric constants and low conductivity. These objects, called radiofrequency/dielectric cushions, can be placed over or near the imaging slice to improve B1 homogeneity. The combination of a high dielectric constant and low conductivity allows the cushion to alter the phase of the RF standing waves and has been shown to reduce signal loss due to B1 inhomogeneity. This correction method was shown to have the greatest effect on sequences that suffer from B1 inhomogeneity artifacts but has no effect on those with B0 inhomogeneity. In one study, the dielectric cushion improved image quality for turbo spin echo-based T2-weighted sequences but not for gradient echo-based T2-weighted sequences.
Coil Mitigated Corrections
B1 inhomogeneity has been successfully mitigated by adjusting coil type and configurations.
Reducing the number of coils
One method is as simple as using the same transmit and receive coil to improve homogeneity. This method exploits the tradeoff between B1 dependence and coil sensitivity dependence in FLASH sequences and allows the user to select an optimized flip angle that will reduce B1 dependence. By using the same coil for transmitting and receiving, the receiver coil sensitivity can offset some of the nonuniformities in the transmitter coil, reducing the overall RF inhomogeneity. For anatomical studies using the FLASH sequence that can be performed with one transmit and receive coil, this method can be used to reduce B1 inhomogeneity artifacts. However, the method would not be suitable for exams under strict time constraints, since the user first needs to perform flip angle optimization.
Coil excitation
Modifying the field distribution within the RF coils will create a more homogenous field. This can be done by changing the way that the RF coil is driven and excited. One method uses a four-port RF excitation that applies different phase shifts at each port.
By implementing a four-port drive, the power requirement is decreased by a factor of 2, the SNR is increased by a factor of √2, and the overall B1 homogeneity is improved.
Spiral coil
Changing the shape of the coils can be used to reduce B1 inhomogeneity artifacts. The use of spiral coil instead of standard coils at higher fields has been shown to eliminate the effects of standing waves in larger samples. This method can be effective when imaging large samples at 4T or higher; however, the proper equipment is required to implement this correction method. Unlike post-processing or sequence modulations, changing the coil shape is not feasible in all scanners.
Parallel excitation with coils
Another method to correct for B1 inhomogeneity is to employ the infrastructure in place from a parallel system to generate multiple RF pulses of lower flip angles that, together, can result in the same flip angle as that created using a single transmit coil. This method uses the multiple transmit coils from parallel imaging systems to reduce and better mitigate the RF power deposition by relying on shorter RF pulses. One advantage of using parallel excitation with coils is the potential to reduce scan time by combining the multiple short RF pulses and the parallel imaging capabilities to cut scan time. Overall, when this method is used with the correct selection of RF pulses and optimized for a low power deposition, the artifacts from B1 inhomogeneity can be greatly reduced.
Active Power modulation
Actively modulating the RF transmit power for each slice position compensates for B1 inhomogeneity. This method focuses on inhomogeneity along the axial (z-axis) direction, since it is the most dominant in terms of poor homogeneity and the least sample dependent.
Prior to inhomogeneity correction, measurement of the B1 profile along the z-axis of the coil is necessary for calibration. Once calibrated, the B1 data can be used for active transmit power modulation. For a specific pulse sequence, the B1 values at each slice position are pre-determined and the appropriate RF transmitter power scale values are read from a look-up table. Then, while the sequence runs, a real-time slice counter varies the attenuation of the RF transmit power.
This method is advantageous for reducing artifacts at the source, particularly when accurate flip angle is critical and for increasing signal to noise ratio. Even though this technique can only be used to compensate for the B1 variation along the z-axis in axially acquired images, it's still significant since B1 inhomogeneity is most dominant along this axis.
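A toy sketch of the look-up-table idea (the slice positions, the parabolic B1 profile along z, and the assumption that B1 amplitude scales with the square root of transmit power are illustrative simplifications, not calibration data from any scanner):

```python
import numpy as np

# Toy slice-wise transmit-power modulation: a calibrated B1 profile along z is
# turned into a per-slice power scale (look-up table) applied during the scan.
z_slices = np.linspace(-0.10, 0.10, 11)          # assumed slice positions (m)
b1_profile = 1.0 - 2.5 * z_slices ** 2           # toy calibrated relative B1 vs z

# B1 amplitude scales with the square root of transmit power, so restoring a
# nominal flip angle means scaling the power by 1 / B1^2 for each slice.
power_scale_lut = 1.0 / b1_profile ** 2

for z, scale in zip(z_slices, power_scale_lut):
    print(f"slice at z = {z:+.2f} m: transmit power x {scale:.2f}")
```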
B1 insensitive adiabatic pulses
One way to achieve perfect spin inversion despite B1 inhomogeneity is to use adiabatic pulses. This correction method works by removing the source of the problem and applying pulses that will not generate flip angle errors. Specific sequences that employ adiabatic pulses for increased flip angle uniformity include a slice selective spin-echo pulse, adiabatic 180 degrees inversion RF pulses, and 180 degrees refocusing pulses.
Image post-processing
Post-processing techniques correct for intensity inhomogeneity (IIH) of the same tissue over an image domain. This method applies a filter to the data, typically based on a pre-acquired IIH map of the B1 field. If a map of the IIH in the image domain is known, then the IIH can be corrected by dividing the acquired image by this map. A popular model describing the IIH effect is:
v = u · b + ξ
where v is the measured intensity, u is the true intensity, b is the IIH effect (bias field) and ξ is the noise.
This method is advantageous because it can be conducted offline, i.e., the patient is not required to be in the scanner. Therefore, correction time is not an issue. However, this technique does not improve SNR and contrast of the image because it only utilizes information that was already acquired. Since the B1 field was not homogeneous when the images were acquired, the flip angles and subsequent acquired signals are imprecise.
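Under this model, the correction itself is a per-voxel division, as the following small synthetic example shows (the bias-field shape, noise level, and uniform 'true' image are assumptions):

```python
import numpy as np

# Toy 2D example of intensity-inhomogeneity correction: v = u * b + noise,
# corrected by dividing by a (pre-acquired) bias-field map b.
rng = np.random.default_rng(4)
ny, nx = 64, 64
u = np.ones((ny, nx))                              # "true" uniform tissue intensity
yy, xx = np.mgrid[0:ny, 0:nx]
b = 0.6 + 0.8 * np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 40.0 ** 2))  # smooth bias field
v = u * b + 0.01 * rng.standard_normal((ny, nx))   # measured image

u_corrected = v / b                                # division by the IIH map
print("relative intensity spread before:", round(float(v.std() / v.mean()), 3))
print("relative intensity spread after: ", round(float(u_corrected.std() / u_corrected.mean()), 3))
```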
AI-based post-scan denoising systems have been demonstrated to improve image quality and morphometric analysis in brain scans. Post-scan image processing systems enable noise reduction while retaining contrast, and the resulting image enhancement allows shorter scan times, higher throughput, and potentially earlier detection.
B1 mapping techniques for image post-processing corrections
To correct RF inhomogeneity artifacts using post-processing corrections, there are a few methods to map the B1 field. Here is a short description of some common techniques.
Double angle method
This is a common and robust method that uses the results from two images acquired at flip angles of α and 2α. The B1 map is then constructed from the ratio of the signal intensities of these two images. This method, although robust and accurate, requires a long TR and long scan time; therefore, it is not optimal for imaging regions susceptible to motion.
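A worked NumPy example of the double angle calculation (the nominal flip angle and the voxel-wise B1 scales are illustrative, and relaxation effects are ignored by assuming a sufficiently long TR):

```python
import numpy as np

# Double angle method (per voxel): acquire S1 at nominal flip angle alpha and
# S2 at 2*alpha. Since S2/S1 = 2*cos(alpha_actual), the actual flip angle
# (and hence the relative B1) follows from the intensity ratio.
alpha_nominal = np.deg2rad(60.0)
true_b1_scale = np.array([0.8, 1.0, 1.2])          # illustrative voxel-wise B1

s1 = np.sin(true_b1_scale * alpha_nominal)         # ideal signal at alpha
s2 = np.sin(true_b1_scale * 2 * alpha_nominal)     # ideal signal at 2*alpha

alpha_actual = np.arccos(s2 / (2.0 * s1))
b1_map = alpha_actual / alpha_nominal
print("recovered relative B1:", np.round(b1_map, 2))
```

The recovered values match the assumed voxel-wise B1 scales, illustrating why the method is considered robust when the long-TR condition holds.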
Phase map method
Similar to the double angle method, the phase map method uses two images; however, this method relies on the accrual of phase to determine the real flip angle of each spin. After applying a 180 degree rotation about the x-axis followed by a 90 degree rotation about the y axis, the resulting phase is then used to map the B1 field. By obtaining two images and subtracting one from the other, any phase from B0 inhomogeneity can be removed and only phase accumulated by the inhomogeneous RF field will be mapped. This method can be used to map 3D volumes but requires a long scan time, making it unsuitable for some scanning requirements.
Dual Refocusing Echo Acquisition Mode (DREAM)
This method is a multislice B1 mapping technique. DREAM can be used to acquire a 2D B1 map in 130 ms, making it insensitive to motion and feasible for scans that require breath holds, such as cardiac imaging. The short acquisition also reduces effects of chemical shifts and susceptibility. Additionally, this method requires low SAR rates. Although not as accurate as the double angle method, DREAM achieves reliable B1 mapping during short acquisitions.
References
Magnetic resonance imaging | MRI artifact | [
"Chemistry"
] | 7,434 | [
"Nuclear magnetic resonance",
"Magnetic resonance imaging"
] |
69,229,577 | https://en.wikipedia.org/wiki/Double%20empathy%20problem | The theory of the double empathy problem is a psychological and sociological theory first coined in 2012 by Damian Milton, an autistic autism researcher. This theory proposes that many of the difficulties autistic individuals face when socializing with non-autistic individuals are due, in part, to a lack of mutual understanding between the two groups, meaning that most autistic people struggle to understand and empathize with non-autistic people, whereas most non-autistic people also struggle to understand and empathize with autistic people. This lack of mutual understanding may stem from bidirectional differences in dispositions (e.g., communication style, social-cognitive characteristics), and experiences between autistic and non-autistic individuals, as opposed to always being an inherent deficit.
Studies from the 2010s and 2020s have shown that most autistic individuals are able to interact effectively, communicate effectively, empathize well or build good rapport, and display social reciprocity with most other autistic individuals. A 2024 systematic review of 52 papers found that most autistic people have generally positive interpersonal relations and communication experiences when interacting with most autistic people, and autistic-autistic interactions were generally associated with better quality of life (e.g., mental health and emotional well-being) across various domains. This theory and subsequent findings challenge the commonly held belief that the social skills of all autistic individuals are inherently and universally impaired across contexts, as well as the theory of "mind-blindness" proposed by prominent autism researcher Simon Baron-Cohen in the mid-1990s, which suggested that empathy and theory of mind are universally impaired in autistic individuals.
The double empathy concept and related concepts such as bidirectional social interaction have been supported by or partially supported by a substantial number of studies in the 2010s and 2020s, with mostly consistent findings in mismatch effects as well as some supportive but also mixed findings in matching effects between autistic people. The theory and related concepts have the potential to shift goals of interventions and public psychoeducation or stigma reduction regarding autism. In recognition of the findings that support the double empathy theory, Baron-Cohen positively recognized the theory and related findings in multiple research articles and podcasts since the late 2010s.
History
Development and spread of mind-blindness theory
Earlier studies on autism regarding theory of mind and empathy had concluded that a lack of theory of mind was one of the primary symptoms of autism. The most popular of these studies were those led by Simon Baron-Cohen in the 1980s and 1990s, who used the term "mind-blindness" to describe his theory in an attempt to empirically explain the tendency of autistic people to avoid eye contact, proposing a homogeneous explanation of autism as due to either a lack of theory of mind or developmental delay in theory of mind in early childhood. Some have additionally described the supposed social impairment present in autistic people as "an extreme form of egocentrism with the resulting lack of consideration for others".
Mind-blindness implies an inability to make sense of and predict another person's behavior, and to attribute mental states such as knowledge, beliefs, desires, emotions, and intentions to oneself and others. The claim that autistic people lack theory of mind is taught across a wide range of psychology textbooks and promoted by over 75% of the top 500 scholarly articles indexed for "theory of mind" and "autism" on Google Scholar, serving as one of psychology's widely promoted topics throughout psychological literature, practice, and instruction. Mind-blindness has also been embraced by scholars in other disciplinary areas such as sociology, philosophy, economics, anthropology, robotics, and narratology.
Problems with earlier studies on theory of mind and empathy in autism
The mind-blindness hypothesis, in addition to being questioned shortly after its publication, has faced a great deal of criticism from the scientific community over the years, in response to the replication studies (mostly the false-belief tasks) that have failed to reveal significant differences in theory of mind between autistic and non-autistic participants, as well as the growing body of evidence for the high degree of heterogeneity in autistic brains at a neurobiological level.
There have been developments of new theory-of-mind measures when existing measures were perceived by some researchers as inadequate. There have been some successful replications demonstrating differences in theory of mind and empathy with some measures such as the Frith–Happé Animations Test, Baron-Cohen's "Reading the Mind in the Eyes" task, and self-report empathy questionnaires – which have been criticized for being vague and imprecise as well as not considering social interaction contexts, reference groups, and the substantially lowered social-desirability bias of autistic individuals. In addition, several independent teams have repetitively failed to replicate highly cited and widely taught findings with picture-sequencing tasks and false-belief tasks such as the Sally–Anne test. Such mixed and inconsistent findings with many different measures have raised doubts regarding the generalizability and validity of the mind-blindness theory of autism.
Furthermore, autism intervention research based on theory of mind has shown little efficacy, and theory-of-mind experiments typically fail to take into account the fact that autistic people have different sensory experiences, which vary between autistic individuals, than non-autistic people. Academics have also noted that many autistic children and adults pass some theory-of-mind tasks but performances vary substantially between diverse tasks and between autistic individuals; hence, Baron-Cohen's earlier repeated assertion of mind-blindness being a universal characteristic of autism across contexts has also been called into question by other researchers since the 1990s. While Baron-Cohen has revised his understanding, his well-powered and large-sample studies have found substantial heterogeneity in empathy and theory of mind among autistic people, with lower performances or scores in theory-of-mind and empathy tasks among autistic people on average, but also a substantial proportion (around 40–60%) of autistic people showing "unimpaired" or even above-average performances in some rather controversial theory-of-mind and empathy measures. Similar results have been consistently demonstrated by other research teams.
Additionally, it has been argued that many professionals and, likewise, parents seem to have neglected that reciprocity needs to be mutual and symmetrical. For example, John Constantino's Social Responsiveness Scale, a 2002 quantitative measure of social reciprocity in children which has since been used extensively in autism research, included an item asking whether the child "is regarded by other children as odd or weird", which, although it seems to indicate a lack of social or emotional reciprocity in the regarder, is used instead to indicate a lack of social or emotional reciprocity in the target child. Several other items in the questionnaire, such as the one that asks whether the child "is not well coordinated in physical activities", seem completely unrelated to reciprocity.
Counter-theory to mind-blindness
Around the early 2010s, academics began to suggest that some studies of theory-of-mind and empathy tests may have misinterpreted autistic people having difficulty understanding non-autistic or neurotypical people as being an intrinsic social deficit present in autistic individuals. They argued that it seemed more likely that autistic people were specifically having trouble understanding neurotypical people in some contexts, due to differences in experiences and social cognition between the two groups. The theory of the double empathy problem was coined in 2012 by Damian Milton as a counter-theory to mind-blindness in an effort to explain this phenomenon of mutual misunderstanding, defined as follows:The "double empathy problem": a disjuncture in reciprocity between two differently disposed social actors which becomes more marked the wider the disjuncture in dispositional perceptions of the lifeworld – perceived as a breach in the "natural attitude" of what constitutes "social reality" for "non-autistic spectrum" people and yet an everyday and often traumatic experience for "autistic people".The claim that autism is characterized by a lack of social or emotional reciprocity has become a truism in academia; for instance, in a 2004 research article examining a hypothesized autism susceptibility gene, the opening line simply stated, without any scientific citations or supporting data, that "impaired reciprocal social interaction is one of the core features of autism". The double empathy theory, subsequent findings, and findings in the broader theory of mind and empathy literature in the 21st century contest common assumptions about autistic people in the fields of psychology and psychiatry, which are often riddled with information regarding autism and theory of mind (e.g., autistic people are universally deficient in empathy or theory of mind) that is outdated, overgeneralized, empirically questionable with inconsistent findings, and potentially societally harmful, but still often assumed by some researchers, educators, students, and practitioners as factual.
While the concept of double empathy had existed in prior publications, Milton named and significantly expanded on it. Since 2015, there has been an increasing number of research studies, including experimental studies, qualitative research, and real-life social interaction studies, many of which are emerging under the banner of critical autism studies and neurodiversity paradigm, supporting the double empathy theory and the findings appear generally consistent.
The double empathy theory has been supported or positively recognized by various autism researchers, including Catherine Crompton, Morton Ann Gernsbacher, Baron-Cohen himself, Elizabeth Pellicano, and Sue Fletcher-Watson, the editor-in-chief of the academic journal Autism. The theory has also been approached by research projects in various disciplinary areas, including but not limited to psychology, sociology, philosophy, neuroscience, linguistics, film studies, and design.
Double empathy and bidirectional communication studies
Interpersonal rapport, empathy, and communication effectiveness
It has been suggested that non-autistic people tend to have a poor understanding of autistic people and lack emotional empathy for autistic people, just as autistic people may have a poor understanding of non-autistic people. Whilst autistic people sometimes have difficulties in understanding non-autistic people and struggle to socialize with non-autistic people, it is likely that most non-autistic people often hold negative stereotypes, views, and/or biases regarding autistic differences, and also struggle to understand autistic people's communication, emotions, and intentions, resulting in and contributing to this "double empathy problem".
Studies from the 2010s and 2020s that have used autistic-autistic pairs to test interpersonal rapport, empathy, and communication effectiveness in adults have shown that autistic adults generally perform better in empathy, rapport, and communication effectiveness when paired with other autistic adults, that interpersonal rapport may be stronger in autistic-autistic interactions than in those between autistic and neurotypical people, and that autistic people may be able to understand and predict each other's thoughts and motivations better than neurotypical people as well as possibly autistic close relatives.
The importance of social reciprocity
One major factor influencing communication effectiveness is social reciprocity. Research from the 1980s and 1990s has indicated that when professionals, peers, and parents are taught to act reciprocally to autistic children, non-autistic children are considerably more likely to reciprocate with autistic children, who end up becoming more responsive. Non-autistic children can demonstrate reciprocity via imitation, which improves social responsiveness in all children, including autistic children; when a random person imitates an autistic child engaging in object manipulation by manipulating a duplicate object in the same way that the child does, the child makes longer and more frequent eye contact with the person. Similarly, when mothers imitate their autistic children's manipulation of toys, the children not only gaze longer and more frequently at their mothers, but also engage in more exploratory and creative behavior with the toys, on top of showing considerably more positive affect.
In contrast, in a 1992 study on reciprocal interactions, non-autistic preschoolers, called "peer tutors", were taught to prompt for the verbal labels of preferred toys from autistic target children; the peer tutors were told to "wait for the target child to initiate a request for a toy", "ask the target child for the label of the toy", "give the toy to the target child when he labeled it", and "praise the correct answer". None of the autistic children maintained their initiation with the peer tutors even after the training sessions were completed, which was likely because their interaction was neither mutual nor symmetrical. When social interaction is neither mutual nor symmetrical between autistic and non-autistic peers, a double empathy problem occurs, which is likely exacerbated through professionals, peers, and parents neglecting the reciprocal nature of reciprocity.
Bullying and subsequent masking
Some researchers have argued that autistic people likely understand non-autistic people to a higher degree than vice versa, due to the frequency of masking – i.e., the conscious or subconscious suppression of autistic behaviors and the compensation of difficulties in social interaction by autistic people with the goal of being perceived as neurotypical. Masking begins at a young age as a coping strategy, partly to avoid harassment and bullying, which are highly common experiences for autistic children and adults. High rates of peer victimization are also seen in autistic children and adults.
Whilst many health professionals and researchers have argued from time to time that autism is characterized by a lack of social or emotional reciprocity, the bullying and victimization targeted at autistic people by non-autistic people, along with the problem of ableism in autism research, has been viewed as a demonstration of non-autistic people's lack of social or emotional reciprocity towards autistic people, further suggesting what Milton has described to be a "disjuncture in reciprocity" (i.e., the presence of a "double empathy gap") between autistic and non-autistic people.
Anthropomorphism and understanding for animals
An area of social-cognitive strength in autistic people centers upon anthropomorphism. A 2018 study has shown that autistic people are likely more prone to object personification, suggesting that autistic empathy may be more complex and all-encompassing, contrary to the popular belief that autistic people lack empathy. Whilst neurotypical participants have outperformed autistic participants in the Reading the Mind in the Eyes test designed by Baron-Cohen in 2001, autistic participants have outperformed neurotypical participants in a cartoon version of the said test in a 2022 study, supporting the view of social-cognitive differences rather than deficits in the autistic population.
Some autistic people also appear to possess a heightened understanding, empathy, and sensitivity towards animals, once again suggesting social-cognitive differences in autistic people, but not global deficits.
Autistic perspectives and dehumanizing research
Autistic theory of mind, argued to have facilitated the release of cognitive resources, is typically based on the use of rules and logic and may be modulated by differences in thinking. If autistic people were inherently poor at theory of mind and social communication, an interaction between a pair of autistic people would logically be more challenging than one between an autistic and neurotypical person. As a result, Milton has described the belief that autistic people lack theory of mind as a myth analogous to the now-discredited theory that vaccines cause autism.
Many autistic activists and a growing number of autism researchers have shown support for the double empathy concept, and have argued that some past studies and articles regarding theory of mind and empathy in autism (especially the universal core deficit version by Baron-Cohen from the 1980s to 2011) have served to stigmatize autistic people, place the responsibility for autistic-neurotypical misunderstandings solely on autistic people, and dehumanize autistic people by portraying them as unempathetic. Many autistic activists have advocated for the inclusion of autistic people in autism research, promoting the slogan "nothing about us without us". In addition, autistic individuals may tend to have a reliable and scientific understanding of autism that is also less stigmatizing, contrary to the implication that autistic people lack the ability to infer to their selves.
Research has shown that autistic people are more likely to be dehumanized by non-autistic people, and first-hand accounts of autism research, including autoethnographies, blogs, commentaries, and editorials, have described autism research to often be dehumanizing to autistic people. Furthermore, autistic people are said to be "less domesticated" at morphological, physiological, and behavioral levels, and have integrity equivalent to that of non-human animals. Autism has been described as an epidemic, and in some cases, lack of empathy is used to link autism with terrorism. Autistic people are also said to be an economic burden, and extensive arguments supporting the use of eugenics in autism have been published, with exceptions being made only for those who are economically productive and normative enough to not make others uncomfortable.
As a result of this dehumanization, the lack of understanding and resultant stigma and marginalization felt by autistic people in social settings may negatively impact upon their mental health, employment, accessibility to education and services, and experiences with the criminal justice system. Autistic people have increased premature mortality rates and one of the leading causes of death in autistic people is suicide, which is likely exacerbated by this stigma and marginalization. Additionally, many autistic people often feel trapped by the stereotypes this largely non-autistic society has of autism, and have reported changing their behavior (i.e., masking) as a result of those stereotypes. Because a lack of theory of mind is believed to impair autistic people's understanding of their selves and other people, the claim that autistic people lack theory of mind is seen to dispute their autonomy, devalue their self-determination, and undermine their credibility.
Limitations and future directions
The literature on double empathy is still relatively young, and the generalizability of double empathy and bidirectional interaction findings to younger autistic children as well as autistic people with an intellectual disability, speech-language impairment, and/or higher support needs is very uncertain, may be confounding, and will require further research. Another limitation is most studies on double empathy and bidirectional social interaction are based on western samples, and studies with non-western samples will be worthwhile.
Milton agrees that there currently remain large gaps in this area of research. The vast majority of studies on double empathy, bidirectional communication, and socialization so far have not included autistic children and autistic people who are nonverbal or have an intellectual disability. There exists a high degree of comorbidity between autism and intellectual disability; roughly 30% of autistic people have an intellectual disability, while just roughly 1–3% of the global population or lower has an intellectual disability. In addition, roughly 20–30% of autistic children are either nonverbal or minimally verbal. Glass & Yuill (2023) found support for the presence of similar or higher social synchrony between autistic pairs compared to non-autistic pairs under certain conditions, with participants including autistic children and autistic people who are nonverbal or minimally verbal.
Moreover, double empathy and bidirectional communication studies typically fail to take into account the vast differences in autism and factors like masking, which may possibly interfere with autistic people's ability to communicate and empathize with one another. Acknowledging these differences which may affect communication within and between autistic and non-autistic groups, Gillespie-Smith et al. (2024) suggested a need to (re)frame the double empathy problem to be understood as a "spectrum of understanding", which sees double empathy in the context of a continuum of neurocommunicative learning, situated between poles of understanding and misunderstanding. In this sense, the spectrum of understanding simply illustrates that as individuals learn more about each other from direct interaction, their relationships tend to deepen, their comprehension of each other increases, and they become more able to empathize with each other.
Conceptual replications and further studies on double empathy are needed in different groups, including siblings of autistic people, non-autistic pupils in schools including autistic peers, late-diagnosed autistic adults, parents of autistic children, and autism service providers.
Emphasizing that empathy and reciprocity are a "two-way street", Milton and many other researchers propose that further autism research should focus on bridging the double empathy gap by empowering autistic individuals, building rapport and appreciation for their worldview, educating non-autistic people about what being autistic means, and moving towards a more continuous understanding of neurodiversity. It has also been suggested that the medical model of autism – the traditional and dominant model of autism in which autism is viewed as a disorder and deficit – is problematic due to its approach being too narrow, individualistic, and deficit-based, as well as how its messaging could contribute to ableism, prejudice, and stigma towards autistic people, further widening this double empathy gap.
Triple empathy problem
Autistic individuals are more likely to face significant health disparities, including a higher prevalence of co-occurring health conditions and a lower life expectancy compared to their neurotypical peers, and thus are more likely to use emergency services. Despite increased awareness of these health inequities, many autistic people encounter substantial barriers when accessing healthcare services. Shaw et al. (2023) conducted a qualitative study involving 1,248 autistic adults to investigate these challenges, revealing a complex interplay of factors that contribute to adverse health outcomes. Key themes emerged from the participants' experiences, such as early barriers to care, communication mismatches, feelings of doubt from both patients and healthcare providers, a sense of helplessness and fear in navigating the system, and a tendency toward healthcare avoidance – each contributing to significant health risks.
Shaw et al. (2023) constructed a model illustrating a chronological journey that outlines how barriers to healthcare access can lead to detrimental health outcomes for autistic individuals. Their work emphasizes the necessity of amplifying autistic voices in discussions about healthcare and highlights the relevance of the double empathy problem within medical contexts, thereby proposing the concept of a "triple empathy problem".
This expanded framework, further elaborated by Josefson (2024), encompasses:
the difficulty neurotypical people have relating to or understanding the needs of neurodivergent people,
the difficulty neurodivergent people have relating to or understanding the needs of neurotypical people, and
the difficulty urban planners and other designers, whether of products, services, technology, places, systems or processes etc., have finding solutions that equitably balance the needs of all community members.
Triple empathy is associated with the concept and principles of universal design, which aims to create environments and services that are accessible and beneficial to everyone, regardless of their neurotype. Fostering neuroinclusive design not only accommodates but actively embraces the diverse perspectives and experiences of all individuals.
Quadruple empathy problem
The transition through menopause can be particularly difficult for autistic people, exacerbating existing communication barriers and experiences of misunderstanding in medical contexts. Participants from a study by Brady et al. (2024) described profound communication challenges that echoed their earlier experiences during puberty and menarche, periods in which they also struggled to articulate their needs and experiences due to their neurodivergent perspectives. Brady et al. (2024) coined the term "quadruple empathy problem" to not only reflect the challenges autistic individuals face in communicating their needs but also emphasize the impact of medical misogyny – i.e., systemic biases in healthcare that may dismiss or undermine the experiences of neurodivergent women, who may find themselves navigating a healthcare landscape lacking in appropriate levels of support and understanding, further leading to feelings of desperation and the need for self-advocacy, such as seeking private healthcare or educating medical personnel about their unique experiences. This research underscores the necessity for healthcare professionals to adopt a person-centered, autism-informed approach that respects the unique communication styles of autistic individuals and acknowledges the often-misunderstood symptoms associated with menopause.
See also
Discrimination against autistic people
Empathy gap
Epistemic injustice
Inclusion (disability rights)
Medical model of disability
Social model of disability
The Fox and the Stork
References
Further reading
External links
Communication
Empathy
Human communication
Neurodiversity
Psychological theories
Sociological theories
Theory of mind | Double empathy problem | [
"Biology"
] | 5,249 | [
"Human communication",
"Behavior",
"Human behavior"
] |
69,229,950 | https://en.wikipedia.org/wiki/Tramp%20species | In ecology, a tramp species is an organism that has been spread globally by human activities. The term was coined by William Morton Wheeler in the bulletin of the American Museum of Natural History in 1906, used to describe ants that “have made their way as well known tramps or stow-aways [sic] to many islands". The term has since widened to include non-ant organisms, but remains most popular in myrmecology. Tramp species have been noted in multiple phyla spanning both animal and plant kingdoms, including but not limited to arthropods, mollusca, bryophytes, and pteridophytes. The term "tramp species" was popularized and given a more set definition by Luc Passera in his chapter of David F William's 1994 book Exotic Ants: Biology, Impact, And Control Of Introduced Species.
Definition
Tramp species are organisms that have stable populations outside their native ranges. They are closely associated with human activities. They are disturbance specialists, and are characterized by their synanthropic associations with humans, as their primary mode of expansion is human-mediated dispersal. That being said, tramp species are not limited to anthropogenically disturbed habitats; they have the potential to invade pristine habitats, especially once established in a new area. For example, Anoplolepis gracilipes was able to invade undisturbed forest ecosystems in Australia after being introduced and having an established population in northeast Arnhem Land. It is important to note that while some tramp species are invasive, the majority of them are not. Some can exist alongside native species without competing with them, simply occupying unfilled niches, as is the case with some populations of Tapinoma melanocephalum and Monomorium pharaonis, which rarely interfere with native species outside human settlement areas.
Ants
Ants have a more rigid list of criteria to be considered "true" tramp species. The most cited body of work outlining these traits comes from Luc Passera. His primary and most important criterion is that the distribution of the species must be linked to human activities, which he refers to as "anthropophilic tendency". He also lists the following traits as being likely common to all tramp species: small size, monomorphism of worker ants (worker ants having only one phenotype), high rates of polygyny, unicoloniality, strong interspecific aggressiveness, worker ant sterility, and colony reproduction by budding. These traits may appear with more or less intensity among the species considered, and in fact, the literature does not currently require a tramp species to possess every single one of these attributes. Ant tramp species in particular can be ecological indicators of an ecosystem's susceptibility to invasion or of ecological instability.
Causes and distribution
All tramp species are distributed globally as a result of human transportation. As such, they are almost always present in urban or human-settled environments, and have colonizing mechanisms that are well adapted to human cohabitation, referred to as possessing "anthropogenically reinforced dispersal biology". The globalization of trade and travel has contributed significantly to the dispersal of tramp species worldwide. Trade activities involving the importation and exportation of cargo on ships (often containing plants, soil, wood and other biological media) are noted as an especially important method of introduction. These often-repeated introductions (as shipments will oftentimes come from the same place) contribute to fortifying the genetic variability and initial population sizes of newly transplanted tramp species, which facilitates their establishment in novel environments. After their human-mediated introductions, tramp species can also benefit from human disturbance to the environment. Anthropogenic forces (such as construction and agriculture) can dramatically impact local fauna and flora, weakening the environment and making the area more susceptible to the encroachment of tramp species. This phenomenon is noted as a particularly difficult issue in tropical Asia, where the monocropping practices of local rubber plantations have decimated indigenous species assemblages and habitat structures, allowing the establishment of many problematic tramp species. Another example is the Thousand Islands Archipelago in Indonesia, where the small tropical islands are especially vulnerable to human disturbance, which facilitated the establishment of multiple tramp species.
The range expansion of tramp ants is projected to increase with weather pattern changes due to climate change. As many tramp species are well adapted to disturbances in their native habitat, they are particularly resilient to large-scale, unpredictable weather events (such as floods, wildfires and monsoons), which are set to increase in frequency as anthropogenic activity continues to affect global systems.
Effects on local environments
Tramp species can have similar effects to invasive species, and in some literature the term "tramp" species is used as a synonym for invasive. As such, they can outcompete and displace local fauna, decreasing species richness. They can also have direct impacts on human health, as is the case with Solenopsis geminata and Pachycondyla sennaarensis. Both of these venomous species have been known to sting humans, oftentimes causing severe anaphylactic reactions; this has made them known public health hazards in the regions where they are found. Tramp species can also be nuisance pests, damaging housing structures and crops. However, it is important to note that tramp species are not always invasive, and can cohabit without harming local environments or species assemblages.
Control and eradication
As tramp species are so diverse in their ecology, there is no universal protocol to prevent their encroachment into new territories. However, there are certain strategies that can be employed to mitigate tramp species. In some environments, maintaining the diversity of local species assemblages can deter certain tramp species. Currently, there is a deficiency in our ability to identify potential new tramp species quickly - a phenomenon dubbed the "taxonomic impediment", which is a delay in identifying invasive species threats. As such, it is essential to improve identification tools for preventative action against tramp species. Interdepartmental cooperation for pest management can be very effective in tramp species management, as a collaborative effort between affected stakeholders increases the likelihood of success in mitigation. Direct pest management efforts have included baits with insect growth regulators to sterilize colonies, with varying degrees of success. One method that can be successful against urban infestations of tramp ants specifically (depending on their particular biology) in temperate zones is to shut off heat sources for two weeks or more, as many are heat-adapted species.
List of tramp species
Arthropods
Ants
Anoplolepis gracilipes
Brachyponera sennaarensis
Cardiocondyla emeryi
Cardiocondyla kagutsuchi
Cardiocondyla nuda
Cardiocondyla obscurior
Cardiocondyla wroughtonii
Hypoponera punctatissima
Iridomyrmex anceps
Lasius neglectus
Linepithema humile
Monomorium destructor
Monomorium floricola
Monomorium indicum
Monomorium monomorium
Monomorium pharaonis
Nylanderia spp.*
Paratrechina flavipes
Paratrechina jaegerskioeldi
Paratrechina longicornis
Pheidole fervens
Pheidole megacephala
Pheidole teneriffana
Solenopsis geminata
Solenopsis invicta
Tetramorium caespitum
Tetramorium bicarinatum
Tetramorium lanuginosum
Tetramorium pacificum
Tetramorium simillimum
Tapinoma melanocephalum
Tapinoma simrothi
Technomyrmex albipes
Technomyrmex brunneus
Trichomyrmex destructor
Wasmannia auropunctata
Millipedes
Chondromorpha xanthotricha
Glyphiulus granulatus
Orthomorpha coarctata
Oxidus gracilis
Pseudospirobolellus avernus
Trigoniulus corallinus
Silverfish
Ctenolepisma longicaudata
Termites
Cryptotermes sp.
Wasps
Calliscelio elegans
Platygastroidea superfamily
Mollusca
Land snails
Bradybaena similaris
Slugs
Deroceras panormitanum
Deroceras invadens
Plants
Bryophytes
Diplasiolejeunea ingekarolae
Daltonia marginata
Daltonia splachnoides
Pteridophytes
Nephrolepis biserrata
Williams and Lucky 2020 provide a thorough listing of all known Nylanderia species with established populations outside their native ranges.
See also
Lists of invasive species
Supertramp (ecology)
Climate change and invasive species
Attribution of recent climate change
References
Introduced species
Ecology terminology | Tramp species | [
"Biology"
] | 1,848 | [
"Ecology terminology"
] |
69,230,957 | https://en.wikipedia.org/wiki/Gallium%20%2868Ga%29%20gozetotide | {{DISPLAYTITLE:Gallium (68Ga) gozetotide}}
Gallium (68Ga) gozetotide or gallium (68Ga) PSMA-11, sold under the brand name Illuccix among others, is a radiopharmaceutical made of 68Ga conjugated to the prostate-specific membrane antigen (PSMA)-targeting ligand Glu-Urea-Lys(Ahx)-HBED-CC, used for imaging prostate cancer by positron emission tomography (PET). The PSMA-targeting ligand directs the radiolabeled imaging agent specifically towards prostate cancer lesions in men.
The most common side effects with gallium (68Ga)-radiolabelled gozetotide are tiredness, nausea (feeling sick), constipation and vomiting.
Gallium (68Ga) gozetotide was approved for medical use in the United States in December 2021, and in the European Union in December 2022. It was the first drug approved by the US Food and Drug Administration (FDA) for PET imaging of PSMA-positive lesions in men with prostate cancer.
Structure
Radiopharmaceuticals based on HBED are composed of three components: a chelator that has an HBED structure and two functional groups, a radiometal coordinated with the chelator, and a binding motif or pharmacophore (such as a peptide or antibody) that is conjugated to the chelator. One of the most popular HBED chelators is HBED-CC. This chelator can create stable complexes with trivalent gallium at ambient temperatures, and it attaches to bioactive molecules through its propionic acid moieties.
Medical uses
Gallium (68Ga) gozetotide is a radioactive diagnostic agent indicated for positron emission tomography (PET) of prostate-specific membrane antigen (PSMA)-positive lesions in men with prostate cancer.
Ga 68 PSMA-11 injection is used for PET imaging of prostate-specific membrane antigen (PSMA)-positive lesions in males with prostate cancer. It can be given to patients with suspected metastasis and to candidates for initial definitive therapy.
History
In the early 2000s, researchers began exploring the use of PSMA as a target for imaging and therapy. The first PSMA-targeted radiotracer was developed using a different radioactive element, technetium-99m. This radiotracer, called 99mTc-MIP-1404, showed promise in preclinical studies but did not perform well in clinical trials.
In 2011, researchers started investigating the use of gallium-68, a different radioactive element, as a more suitable alternative for PSMA-targeted radiotracers. In 2013, the first Ga-PSMA radiotracer was developed by researchers at DKFZ in Germany, and it showed promising results in early clinical studies.
Since then, Ga-PSMA has been extensively studied in clinical trials, and it has been found to be a highly effective imaging agent for detecting prostate cancer lesions. It is now widely used in clinical practice, particularly for patients with recurrent prostate cancer and those with high-risk disease.
Initially, gallium (68Ga) chloride solution injections were used for radiolabelling; in 2019, the European Pharmacopoeia mentioned gallium (68Ga) DOTATOC injection for radiolabelling and PET imaging.
Ga 68 PSMA-11 was co-developed by researchers at University of California, Los Angeles and University of California, San Francisco, who conducted a phase III clinical trial. In December 2020, the drug was first approved by the US Food and Drug Administration (FDA) for PET imaging.
Mechanism of action
Gallium (68Ga) gozetotide binds to prostate-specific membrane antigen (PSMA) and therefore to cells that express PSMA, including malignant prostate cancer cells. The radioactive isotope of gallium, 68Ga, decays by emitting β+ radiation and X-rays. This allows images to be recorded by positron emission tomography (PET) and CT scan.
Society and culture
Legal status
On 13 October 2022, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Locametz, intended for the diagnosis of prostate cancer. The applicant for this medicinal product is Novartis Europharm Limited. Locametz was approved for medical use in the European Union in December 2022.
Names
Gallium (68Ga) gozetotide is the international nonproprietary name (INN).
References
External links
Gallium compounds
Radiopharmaceuticals | Gallium (68Ga) gozetotide | [
"Chemistry"
] | 977 | [
"Chemicals in medicine",
"Radiopharmaceuticals",
"Medicinal radiochemistry"
] |
69,232,129 | https://en.wikipedia.org/wiki/Xcel-Arc | Xcel-Arc is a New Zealand-based welding company that is owned by Esseti NZ Ltd. It was founded in 1994 and provides welding machines across New Zealand. Today, it is one of the primary welding companies in New Zealand.
The company is headquartered in Palmerston North, Wellington, New Zealand.
History
Xcel-Arc Welding NZ was founded in 1994 in New Zealand. It provides plasma cutters, TIG, MIG, and arc welding machines, machine trolleys, and protective gear.
Xcel-Arc manufactures machines that comply with the Australian/New Zealand market standards AS/NZS 60974-1 and EN 50199.
References
External links
Xcel-Arc Welding NZ
Welding
Companies based in Wellington | Xcel-Arc | [
"Engineering"
] | 150 | [
"Welding",
"Mechanical engineering"
] |
69,232,536 | https://en.wikipedia.org/wiki/Intelligent%20automation | Intelligent automation (IA), or alternately intelligent process automation, is a software term that refers to a combination of artificial intelligence (AI) and robotic process automation (RPA). Companies use intelligent automation to cut costs and streamline tasks by using artificial-intelligence-powered robotic software to handle repetitive tasks. As it accumulates data, the system learns in an effort to improve its efficiency. Intelligent automation applications include, but are not limited to, pattern analysis, data assembly, and classification. The term is similar to hyperautomation, a concept identified by research group Gartner as being one of the top technology trends of 2020.
Technology
Intelligent automation applies the assembly line concept of breaking tasks into repetitive steps to improve business processes. Rather than having humans do each step, intelligent automation can replace steps with an intelligent software robot or bot, improving efficiency.
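As a purely illustrative sketch of this idea (not taken from any particular vendor or product; the step names and the trivial keyword rule standing in for a trained model are hypothetical), a business process can be expressed in Python as a pipeline of steps in which one repetitive step is delegated to a software bot:

```python
from typing import Dict, List

def classify_document(document: str) -> str:
    """Toy 'bot' step: a keyword rule stands in for a trained ML classifier."""
    return "invoice" if "invoice" in document.lower() else "other"

def route_document(label: str) -> str:
    """Repetitive routing step that previously needed manual handling."""
    return {"invoice": "accounts-payable", "other": "manual-review"}[label]

def run_pipeline(documents: List[str]) -> Dict[str, str]:
    """Run each document through the automated classify-then-route steps."""
    return {doc: route_document(classify_document(doc)) for doc in documents}

print(run_pipeline(["Invoice #123 for services", "Meeting notes from Monday"]))
```

In a real deployment the keyword rule would be replaced by a learned model that improves as more data accumulates, while the surrounding pipeline structure stays the same.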
Applications
The technology is used to process unstructured content. Common real-world applications include self-driving cars, self-checkouts at grocery stores, smart home assistants, and appliances. Businesses can apply data and machine learning to build predictive analytics that react to consumer behavior changes, or to implement RPA to improve manufacturing floor operations.
For example, the technology has also been used to automate the workflow behind distributing Covid-19 vaccines. Data provided by hospital systems’ electronic health records can be processed to identify and educate patients, and schedule vaccinations.
Intelligent Automation can provide real-time insights on profitability and efficiency. However in an April 2022 survey by Alchemmy, despite three quarters of businesses acknowledging the importance of Artificial Intelligence to their future development, just a quarter of business leaders (25%) considered Intelligent Automation a “game changer” in understanding current performance. 42% of CTOs see “shortage of talent” as the main obstacle to implementing Intelligent Automation in their business, while 36% of CEOs see ‘upskilling and professional development of existing workforce’ as the most significant adoption barrier.
IA is becoming increasingly accessible for firms of all sizes. With this in mind, it is expected to continue to grow rapidly in all industries. This technology has the potential to change the workforce. As it advances, it will be able to perform increasingly complex and difficult tasks. In addition, this may expose certain workforce issues as well as change how tasks are allocated.
Benefits
Streamline Processes
Repetitive manual tasks can put a strain on the workforce; these tasks can be automated to allow the workforce to work on more important matters that require human cognition. Intelligent automation can also be used to reduce human error in such tasks, which in turn increases proficiency. This allows the opportunity for firms to scale production without the traditional negative consequences such as reduced quality or increased risk.
Customer Service Improvement
Customer service can be improved drastically, which allows for a competitive advantage for the firm. IA utilizing chat features allows for instant, curated responses to customers. In addition, it can give updates to customers, make appointments, manage calls, and personalize campaigns.
Flexibility
Due to the wide range of applications, IA is useful across a variety of fields, technologies, projects and industries. In addition, IA can be integrated with current automated systems in place. This allows for optimized systems unique to each firm to best fit their individual needs.
Capabilities
Cognitive automation: Employs AI techniques to assist humans in decision-making and task completion
Natural language processing: Allows computers to automate knowledge work
Business process management: Enhances the consistency and agility of corporate operations
Process mining: Applies data mining methods to discover, analyze, and improve business processes
Intelligent document processing: Utilizes OCR and other advanced technologies to extract data from documents and convert it into structured, usable data
Computer vision: Allows computers to extract information from digital images, videos, and other visual inputs
Integration automation: Establishes a unified platform with automated workflows that integrate data, applications, and devices.
See also
Robotic process automation
Artificial intelligence
Automation
References
Business software
Automation software
Information economy
Machine learning | Intelligent automation | [
"Engineering"
] | 800 | [
"Machine learning",
"Automation",
"Control engineering",
"Automation software",
"Artificial intelligence engineering"
] |
69,232,707 | https://en.wikipedia.org/wiki/Scaled%20particle%20theory | The Scaled Particle Theory (SPT) is an equilibrium theory of hard-sphere fluids which gives an approximate expression for the equation of state of hard-sphere mixtures and for their thermodynamic properties such as the surface tension.
One-component case
Consider the one-component homogeneous hard-sphere fluid with molecule radius $a$. To obtain its equation of state in the form $P = P(\rho, T)$ (where $P$ is the pressure, $\rho$ is the density of the fluid and $T$ is the temperature) one can find the expression for the chemical potential $\mu(\rho, T)$ and then use the Gibbs–Duhem equation to express $P$ as a function of $\rho$.
The chemical potential of the fluid can be written as a sum of an ideal-gas contribution and an excess part: $\mu = \mu_{\mathrm{id}} + \mu_{\mathrm{ex}}$. The excess chemical potential is equivalent to the reversible work $W$ of inserting an additional molecule into the fluid. Note that inserting a spherical particle of radius $r$ is equivalent to creating a cavity of radius $r + a$ in the hard-sphere fluid. The SPT theory gives an approximate expression for this work $W(r)$. In case of inserting a molecule ($r = a$) it is
$\dfrac{W(a)}{k_B T} = -\ln(1-\eta) + \dfrac{6\eta}{1-\eta} + \dfrac{9\eta^2}{2(1-\eta)^2} + \dfrac{P}{\rho k_B T}\,\eta$,
where $\eta = \frac{4}{3}\pi a^3 \rho$ is the packing fraction, $k_B$ is the Boltzmann constant.
This leads to the equation of state
$\dfrac{P}{\rho k_B T} = \dfrac{1 + \eta + \eta^2}{(1-\eta)^3},$
which is equivalent to the compressibility equation of state of the Percus-Yevick theory.
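A minimal numerical sketch of these relations, assuming the standard SPT/Percus–Yevick expressions reconstructed above (the function names are illustrative):

```python
import math

def spt_compressibility(eta):
    """Compressibility factor Z = P / (rho * kB * T) from the SPT
    (Percus-Yevick compressibility) equation of state."""
    return (1.0 + eta + eta**2) / (1.0 - eta)**3

def spt_insertion_work(eta):
    """Reversible work of inserting one more molecule, W(a) / (kB * T),
    i.e. the excess chemical potential in units of kB*T."""
    z = spt_compressibility(eta)
    return (-math.log(1.0 - eta)
            + 6.0 * eta / (1.0 - eta)
            + 9.0 * eta**2 / (2.0 * (1.0 - eta)**2)
            + z * eta)

for eta in (0.1, 0.2, 0.3, 0.4):
    print(f"eta={eta:.1f}  Z={spt_compressibility(eta):6.3f}  "
          f"W/kT={spt_insertion_work(eta):6.3f}")
```

Differentiating the resulting chemical potential and applying the Gibbs–Duhem relation recovers the equation of state printed by `spt_compressibility`.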
References
Statistical mechanics | Scaled particle theory | [
"Physics"
] | 255 | [
"Statistical mechanics stubs",
"Statistical mechanics"
] |
69,233,992 | https://en.wikipedia.org/wiki/Mercedes-Benz%20M21%20engine | The Mercedes-Benz M21 engine is a naturally-aspirated, 2.0-liter, straight-6, internal combustion piston engine, designed, developed and produced by Mercedes-Benz; between 1933 and 1936.
M21 Engine
The side-valve six-cylinder engine had a capacity of 1,961 cc which produced a claimed maximum output of at 3,200 rpm. The engine shared its piston stroke length with the smaller 6-cylinder unit fitted in the manufacturer's W15 model, but for the W21 the bore was increased by to . The stated top speed was 98 km/h (61 mph) for the standard length and 95 km/h (59 mph) for the long bodied cars. Power from the engine passed to the rear wheels through a four-speed manual transmission in which the top gear was effectively an overdrive ratio. The top two ratios featured synchromesh. The brakes operated on all four wheels via a hydraulic linkage.
During the model's final year, Mercedes-Benz announced, in June 1936, the option of a more powerful 2,229 cc engine, which was seen as a necessary response to criticism of the car's leisurely performance in long bodied form.
Applications
Mercedes-Benz W21
References
Mercedes-Benz engines
Straight-six engines
Engines by model
Gasoline engines by model | Mercedes-Benz M21 engine | [
"Technology"
] | 272 | [
"Engines",
"Engines by model"
] |
69,234,027 | https://en.wikipedia.org/wiki/Mercedes-Benz%20M18%20engine | The Mercedes-Benz M18 engine is a naturally-aspirated, 2.9-liter, straight-6, internal combustion piston engine, designed, developed and produced by Mercedes-Benz; between 1933 and 1937.
M18 Engine
The six-cylinder 2,867 cc side-valve engine produced a maximum output of at 3,200 rpm. In 1935 the compression ratio was increased along with maximum power which was now given as . Power was delivered to the rear wheels via a four-speed manual transmission with synchromesh on the top two ratios.
Applications
Mercedes-Benz W18
References
Mercedes-Benz engines
Straight-six engines
Engines by model
Gasoline engines by model | Mercedes-Benz M18 engine | [
"Technology"
] | 138 | [
"Engines",
"Engines by model"
] |
69,234,064 | https://en.wikipedia.org/wiki/Mercedes-Benz%20M15%20engine | The Mercedes-Benz M15 engine is a naturally-aspirated, 1.7-liter, straight-6, internal combustion piston engine, designed, developed and produced by Mercedes-Benz; between 1931 and 1936.
M15 Engine
The car was powered by a six-cylinder 1,692 cc engine: maximum power was set at at 3,200 rpm. The engine featured central lubrication and the water-based cooling system for the engine employed both a pump and a thermostat. Power was transmitted to the rear wheels via what was in effect a four-speed manual transmission, on which the top gear operated as a form of overdrive. Third gear used the 1:1 ratio conventionally used by a top gear, and there was a fourth gear with a ratio of 1 : 0.73. Fuel economy was quoted as and top speed 90 km/h (56 mph), which combined to represent a competitive level of performance in the passenger car market of that time.
Applications
Mercedes-Benz W15
References
Mercedes-Benz engines
Straight-six engines
Engines by model
Gasoline engines by model | Mercedes-Benz M15 engine | [
"Technology"
] | 226 | [
"Engines",
"Engines by model"
] |
69,234,090 | https://en.wikipedia.org/wiki/Mercedes-Benz%20M11%20engine | The Mercedes-Benz M11 engine is a naturally-aspirated, 2.6-liter, straight-6, internal combustion piston engine, designed, developed and produced by Mercedes-Benz; between 1929 and 1935.
M11 Engine
The manufacturer applied the widely followed German naming conventions of the time. On the Mercedes-Benz 10/50 PS, the “10” defined the car's tax horsepower, used by the authorities to determine the level of annual car tax to be imposed on car owners. The “50” defined the manufacturer's claims regarding the car's actual power output as defined in metric horsepower. In Germany, tax horsepower, which had been defined by statute since 1906, was based on the dimensions of the cylinders in the engine.
Unlike the systems used elsewhere in Europe, the German tax horsepower calculation took account both of the cylinder bore and of the cylinder stroke, and there was therefore a direct linear relationship between engine size and tax horsepower. Reflecting the manufacturer's new naming strategy, the car was also sold as the Mercedes-Benz Typ Stuttgart, the Mercedes-Benz Typ 260 and as the Mercedes-Benz Typ Stuttgart 260.
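As a rough numerical sketch of that linear relationship (an assumption-laden illustration: the 0.30 · cylinders · bore² [cm] · stroke [m] rule, equivalent to roughly one tax PS per 261.8 cc of displacement for four-stroke engines, is the commonly cited post-1906 German formula, and the bore and stroke figures below are chosen only so the swept volume lands near the 2,581 cc quoted here, not taken from factory data):

```python
import math

def german_tax_ps(cylinders, bore_cm, stroke_m):
    """Commonly cited post-1906 German tax-horsepower rule for
    four-stroke engines: 0.30 * i * d^2 [cm^2] * s [m]."""
    return 0.30 * cylinders * bore_cm**2 * stroke_m

def displacement_cc(cylinders, bore_cm, stroke_m):
    """Swept volume in cubic centimetres."""
    return cylinders * (math.pi / 4.0) * bore_cm**2 * (stroke_m * 100.0)

# Illustrative six-cylinder dimensions (not factory figures).
cyl, bore, stroke = 6, 7.4, 0.10
print(round(displacement_cc(cyl, bore, stroke)))   # ~2581 cc
print(round(german_tax_ps(cyl, bore, stroke), 1))  # ~9.9, taxed as 10 PS
```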
The side-valve six-cylinder 2,581 cc engine delivered a maximum output of at 3,400 rpm which translated into a top speed of 90 km/h (56 mph). Power was transmitted to the rear wheels via a four-speed manual transmission, the fourth speed being effectively an overdrive ratio of 1 : 0.76 while the more conventional “top” 1 : 1 ratio was achieved by selecting third gear. The wheels were fixed to a rigid axle suspended from semi-elliptic leaf springs. The braking applied to all four wheels, mechanically controlled using rod linkages.
Applications
Mercedes-Benz W11
References
Mercedes-Benz engines
Straight-six engines
Engines by model
Gasoline engines by model | Mercedes-Benz M11 engine | [
"Technology"
] | 374 | [
"Engines",
"Engines by model"
] |
69,234,145 | https://en.wikipedia.org/wiki/Mercedes-Benz%20M10%20engine | The Mercedes-Benz M10 engine is a naturally-aspirated, 3.4-liter to 3.7-liter, straight-6, internal combustion piston engine, designed, developed and produced by Mercedes-Benz; between 1929 and 1933.
Applications
Mercedes-Benz W10
References
Mercedes-Benz engines
Straight-six engines
Engines by model
Gasoline engines by model | Mercedes-Benz M10 engine | [
"Technology"
] | 75 | [
"Engines",
"Engines by model"
] |
69,234,252 | https://en.wikipedia.org/wiki/Mercedes-Benz%20M06%20engine | The Mercedes-Benz M06 engine is a supercharged, 6.8-liter to 7.1-liter, straight-6, internal combustion piston engine, designed, developed and produced by Mercedes-Benz; between 1928 and 1934.
M06 engine
The M06 is a supercharged, single overhead camshaft, 7-litre straight-6 engine that produces . Depending on the state of tune, there is over 500 lb⋅ft of torque, which made the SSK the fastest car of its day. A clutch operates the supercharger, which is engaged by fully depressing the throttle pedal with an extra push, whereas letting off the throttle pedal disengages it.
Applications
Mercedes-Benz SSK
References
Mercedes-Benz engines
Straight-six engines
Engines by model
Gasoline engines by model | Mercedes-Benz M06 engine | [
"Technology"
] | 162 | [
"Engines",
"Engines by model"
] |
69,234,360 | https://en.wikipedia.org/wiki/Mercedes-Benz%20M09%20engine | The Mercedes-Benz M09 engine is a naturally-aspirated, 3.4-liter, straight-6, internal combustion piston engine, designed, developed and produced by Mercedes-Benz; between 1928 and 1929.
Applications
Mercedes-Benz W03
References
Mercedes-Benz engines
Straight-six engines
Engines by model
Gasoline engines by model | Mercedes-Benz M09 engine | [
"Technology"
] | 71 | [
"Engines",
"Engines by model"
] |
69,234,561 | https://en.wikipedia.org/wiki/Mercedes-Benz%20M04%20engine | The Mercedes-Benz M04 engine is a naturally-aspirated, 3.0-liter and 3.1-liter, straight-6, internal combustion piston engine, designed, developed and produced by Mercedes-Benz; between 1927 and 1928.
M04 engine
The side-valve six-cylinder 2,994 cc engine delivered a maximum output of , but now at the lower engine speed of 3,200 rpm. At the back, however, the final drive ratio was changed from 5.4:1 to 4.8:1, and the listed top speed went up to 108 km/h (67 mph).
Having raised the final drive and the top speed for 1927, the manufacturer now moved to offer a choice of ratios, either reducing it back to 5.4:1 or raising it further to 5.8:1. The former ratio was described as the “Flachland” (flat lands) version while the latter as the “Berg” (mountain) version. At the same time a small increase in the cylinder stroke accounted for an increase in overall engine capacity to 3,131 cc. Claimed maximum output was unchanged at , still at 3,200 rpm, although there was a measurable increase in torque.
Applications
Mercedes-Benz 12/55 hp Type 320 Sedan
References
Mercedes-Benz engines
Straight-six engines
Engines by model
Gasoline engines by model | Mercedes-Benz M04 engine | [
"Technology"
] | 279 | [
"Engines",
"Engines by model"
] |
69,234,796 | https://en.wikipedia.org/wiki/Nitter | Nitter is a discontinued free and open source alternative viewer for Twitter, focusing on privacy and performance.
Features
The user interface was designed to be minimalist and to resemble the classic Twitter desktop layout. Since the user could not log in to Twitter through Nitter, Nitter had no notifications, no home feed, and no ability to tweet. By default Nitter had no infinite scroll. Nitter had no ads or tracking and the timeline was in chronological order. Nitter relied on a glitch that allowed creating a large number of "guest accounts" using proxy servers in order to fetch content.
In addition to the official web instance, there are unofficial public web instances, as well as community-contributed mobile apps and browser extensions. Nitter was funded by donations as well as a grant from NLnet's NGI fund.
Discontinuation
Nitter was officially discontinued in February 2024. The developer had announced the project was "dead" after Twitter removed the guest account feature, on which Nitter relied, in January 2024. Some instances had previously stopped working some months before due to changes to the Twitter API. The developer stated that instances could still be self-hosted by having users supply their own accounts, at the risk of those accounts being banned.
See also
Invidious, a viewer for YouTube that inspired Nitter
References
External links
nitter.net - the official instance. Archived
Free software websites
Privacy
Software using the GNU Affero General Public License
Twitter services and applications
Websites
Discontinued software | Nitter | [
"Technology"
] | 302 | [
"Computing websites",
"Free software websites"
] |
69,234,832 | https://en.wikipedia.org/wiki/Mercedes-Benz%20M03%20engine | The Mercedes-Benz M03 engine is a naturally-aspirated, 3.0-liter, straight-6, internal combustion piston engine, designed, developed and produced by Mercedes-Benz; between 1926 and 1927.
M03 engine
The side-valve six-cylinder 2,968 cc engine delivered a maximum output of at 3,500 rpm which translated into a top speed of 100 km/h (62 mph). Power was transmitted via a four-speed manual transmission to the rear wheels which were fixed to a rigid axle suspended from semi-elliptic leaf springs. The braking applied to all four wheels, mechanically controlled using rod linkages.
Applications
Mercedes-Benz 12/55 hp Type 300 Sedan
References
Mercedes-Benz engines
Straight-six engines
Engines by model
Gasoline engines by model | Mercedes-Benz M03 engine | [
"Technology"
] | 162 | [
"Engines",
"Engines by model"
] |
69,234,883 | https://en.wikipedia.org/wiki/Mercedes-Benz%20M02%20engine | The Mercedes-Benz M02 engine is a naturally-aspirated, 2.0-liter, straight-6, internal combustion piston engine, designed, developed and produced by Mercedes-Benz; between 1926 and 1933.
M02 engine
The side-valve six-cylinder 1,988 cc engine delivered a maximum output of at 3,400 rpm, which translated into a top speed of 75 km/h (47 mph). Power was transmitted via a three-speed manual transmission to the rear wheels, which were fixed to a rigid axle suspended from semi-elliptic leaf springs. The braking applied to all four wheels, mechanically controlled using rod linkages.
Applications
Mercedes-Benz W02
References
Mercedes-Benz engines
Straight-six engines
Engines by model
Gasoline engines by model | Mercedes-Benz M02 engine | [
"Technology"
] | 160 | [
"Engines",
"Engines by model"
] |
69,235,103 | https://en.wikipedia.org/wiki/Daimler%20M9456%20engine | The Daimler-Mercedes M9456 engine is a supercharged and naturally-aspirated, 6.2-liter to 6.4-liter, straight-6, internal combustion piston engine, designed, developed and produced by Mercedes-Benz, in partnership with Daimler; between 1924 and 1929.
M9456 engine
The six-cylinder in-line 6240 cc engine featured an overhead camshaft, which at the time was an unusual feature, with “bevel linkage”. However, it was the switchable supercharger (”Kompressor”), adopted from the company's racing cars, that attracted most of the attention. With the device switched off, maximum claimed output was at 3,100 rpm: with the supercharger operating, maximum output rose to .
The top speed listed was 115 km/h (71 mph) or 120 km/h (75 mph) depending on which of the two offered final drive ratios was fitted.
From 1928 the Modell K received a still more powerful "Kompressor engine", although there was no change to the overall engine size. Stated power now increased to or, with the compressor switched on, . The official performance figures were unchanged.
Applications
Mercedes 24/100/140 PS
References
Mercedes-Benz engines
Straight-six engines
Engines by model
Gasoline engines by model | Daimler M9456 engine | [
"Technology"
] | 276 | [
"Engines",
"Engines by model"
] |
69,235,243 | https://en.wikipedia.org/wiki/Daimler%20M836%20engine | The Daimler-Mercedes M836 engine is a naturally-aspirated and supercharged, 3.9-liter to 4.0-liter, straight-6, internal combustion piston engine, designed, developed and produced by Mercedes-Benz, in partnership with Daimler; between 1924 and 1929.
M836 engine
The six-cylinder in-line 3920 cc engine featured an overhead camshaft, which at the time was an unusual feature, with “bevel linkage”. However, it was the switchable supercharger (”Kompressor”), adopted from the company's racing cars, that attracted most of the attention. With the device switched off, maximum claimed output was at 3,100 rpm: with the supercharger operating, maximum output rose to .
The top speed listed was 105 km/h (65 mph) or 112 km/h (70 mph) according to which of the two offered final drive ratios was fitted.
Applications
Mercedes 15/70/100 PS
References
Mercedes-Benz engines
Straight-six engines
Engines by model
Gasoline engines by model | Daimler M836 engine | [
"Technology"
] | 223 | [
"Engines",
"Engines by model"
] |
69,237,585 | https://en.wikipedia.org/wiki/Jenny%20Zhang%20%28chemist%29 | Jenny Zhenqi Zhang is a Chinese-Australian chemist and BBSRC David Phillips Research Fellow of the Department of Chemistry, University of Cambridge, where she is also a Fellow of Corpus Christi College (2019-present). She was awarded the 2020 RSC Felix Franks Biotechnology Medal for her research into re-wiring photosynthesis to provide sustainable fuel sources.
Early life and education
Zhang was born in China, and moved to Gosford on the Central Coast (New South Wales), Australia at age eight. She credits her mother's stories explaining the scientific basis of various phenomena with stimulating her interest in science. She moved to Sydney to attend the University of Sydney, where she completed a Bachelor of Science (Advanced) in 2007 and a PhD in Chemistry under the supervision of Professor Trevor Hambley in 2011. During her PhD, Zhang also briefly worked at the Hebrew University of Jerusalem.
Career and research
Zhang's doctoral research was in the area of bioinorganic chemistry, and she worked on the development of a platinum-based library of chemotherapeutic candidates featuring anthraquinone ligands and redox activity. This involved using a variety of imaging techniques (including those based on synchrotron radiation) to study the biological distributions and metabolism of the chemotherapeutics in 3D solid tumour models, and synthetic strategies to generate new examples of such complexes.
Zhang sought a change in research field following her PhD, and in 2013 she joined the group of Professor Erwin Reisner at the University of Cambridge as a postdoctoral fellow after receiving a Marie Skłodowska-Curie International Fellowship, also becoming a Research Associate of St John's College. This brought her into sustainability research, in particular artificial photosynthesis. Her postdoctoral research involved developing ways to wire oxidoreductases, especially photosystem II, to electrodes and to use photosynthesis to generate a sustainable biofuel.
In 2018, Zhang was awarded a BBSRC David Phillips Fellowship to start her own, independent research group in the Department of Chemistry at Cambridge. In her independent career, she has continued to work on the re-wiring of photosynthesis but now focuses on doing so in live cells. She also became a Fellow of Corpus Christi College, where she is now Director of Studies in Natural Sciences Chemistry. Zhang was recognised for her contributions to semi-artificial photosynthesis with the award of the Felix Franks Biotechnology Medal from the RSC in 2020.
References
Chinese emigrants to Australia
Australian expatriates in England
Australian people of Chinese descent
Australian chemists
Inorganic chemists
Bioinorganic chemists
Australian women chemists
University of Sydney alumni
Living people
Members of the University of Cambridge Department of Chemistry
Fellows of Corpus Christi College, Cambridge
Year of birth missing (living people)
Fellows of St John's College, Cambridge | Jenny Zhang (chemist) | [
"Chemistry"
] | 573 | [
"Inorganic chemists",
"Bioinorganic chemistry",
"Bioinorganic chemists"
] |
69,237,858 | https://en.wikipedia.org/wiki/Seaweed%20fertiliser | Seaweed fertiliser is organic fertilizer made from seaweed that is used in agriculture to increase soil fertility and plant growth. The use of seaweed fertilizer dates back to antiquity and has a broad array of benefits for the soils.
Seaweed fertilizer can be applied in a number of different forms, including refined liquid extracts and dried, pulverized organic material. Through its composition of various bioactive molecules, seaweed functions as a strong soil conditioner, bio-remediator, and biological pest control, with each seaweed phylum offering various benefits to soil and crop health. These benefits can include increased tolerance to abiotic stressors, improved soil texture and water retention, and reduced occurrence of diseases.
On a broader socio-ecological scale, seaweed aquaculture and fertilizer development have significant roles in biogeochemical nutrient cycling through carbon storage and the uptake of nitrogen and phosphorus. Seaweed fertilizer application to soils can also alter the structure and function of microbial communities. Seaweed aquaculture has the potential to yield ecosystem services by providing a source of nutrition to human communities and a mechanism for improving water quality in natural systems and aquaculture operations.
The rising popularity of organic farming practices is drawing increased attention towards the various applications of seaweed-derived fertilizers and soil additives. While the seaweed fertilizer industry is still in its infancy, it holds significant potential for sustainable economic development as well as the reduction of nutrient runoff in coastal systems. There are however ongoing challenges associated with the use and production of seaweed fertilizer including the spread of diseases and invasive species, the risk of heavy-metal accumulation, and the efficiency and refinement of production methods.
Nomenclature and taxonomy
“Seaweed" is one of the common names given to multicellular macroalgae, such as green algae (Chlorophyta), brown algae (Phaeophyceae), and red algae (Rhodophyta). The term, seaweed is sometimes used to refer to microalgae and plants as well. Seaweeds are typically benthic organisms which have a structure called a holdfast, that keeps them anchored to the sea floor; they also have a stipe, otherwise known as a stem, and blade-shaped foliage. Sargassum seaweed is one exception to this anatomy and function, as it does not attach to the benthic environment. The color of seaweeds generally follows depth/light, with green seaweeds, brown seaweeds, and red seaweeds corresponding to shallow, moderate, and deeper waters respectively; red seaweeds are sometimes found up to 30 meters in depth. The smallest seaweeds grow only a few millimeters in height, while the largest seaweeds can grow up to 50 meters in height. There are an estimated 1,800 green, 1,800 brown, and 6,200 red seaweed species in existence. Brown seaweeds are generally known as kelp, but are also known by other common names such as rockweed and wracks. Red seaweeds are the most diverse group of seaweed, and along with green seaweeds, are most closely related to terrestrial plants, whereas brown seaweeds are the most distantly related to terrestrial plants. Seaweeds are found extensively in shallow natural environments, and farmed both in the ocean and in land-based aquaculture operations. Most brown seaweeds that are found in the wild are from the genera Laminaria, Undaria, Hizikia, whereas most brown seaweeds that are farmed for uses such as fertilizer and heavy metal indication, are from the species Ascophyllum, Ecklonia, Fucus, Sargassum. Green seaweeds that are used as bioindicators, for heavy metal indication for example, are from the genera Ulva and Enteromorpha. Red seaweed from the genus Poryphora, is commonly used for human food.
History
The first written record of the agricultural use of seaweed comes from the ancient Greek and Roman civilizations in the 2nd century, where foraged beach castings were used to feed livestock and wrap plant roots for preservation. However, stable isotope analysis of prehistoric sheep teeth in Orkney indicates that early peoples used seaweed as livestock fodder over 5,000 years ago, and researchers speculate that foraged seaweed was also used as fertilizer because ashed remnants of seaweed were found in archeological sites. Such agricultural techniques might have been key to the survival of early settlements in Scotland.
Historical records and archaeological evidence of seaweed fertilizer use in the coastal Atlantic are vast and scattered, ranging from Scandinavia to Portugal, and from the neolithic period through the 20th century. Most details of seaweed fertilizer use come from the British Isles, Channel Islands, Normandy and Brittany (France), where a variety of application techniques were used over the centuries, and some continue to this day. Ireland has a long history (since the 12th century) of harvesting seaweed to fertilize nutrient-poor post-glacial soils, using composted manure as enrichment; the increased agricultural productivity allowed the Irish population to grow substantially. The Channel Islands (12th century) used a dried blend of red and brown seaweeds, called "Vraic" or "wrack", spread over potato fields during the winter months to enrich the soil before planting the crop in the spring. Similarly, coastal people in Normandy and Brittany have been collecting "wrack" using wood rakes since the neolithic period, though the fertilizer composition originally included all marine debris that washed ashore. In 17th–19th century Scotland, Fucus spp. were cultivated by placing rocky substrate in the intertidal zones to encourage seaweed settlement. The seaweed biomass was then used in composted trenches, where crops (potatoes, oats, wheat, onions) were grown directly in the sandy fertilizer mixture. This ‘lazy bed’ method afforded minimal crop rotation and allowed rugged landscapes and acidic soils to be farmed where plant growth was otherwise unsuitable. The high value of seaweed in these regions caused political disputes over harvesting rights, and in Ireland such rights were established before the country itself was. These early applications of seaweed fertilizer were limited to coastlines, where the macroalgae could be harvested from the intertidal zone or collected after a storm washed it to shore. However, dried wrack mixtures or ashed ‘fucus’ potash could be transported further inland because they weigh less than wet seaweed.
Seaweed fertilizer spread inland when a kelp industry developed in Scotland, Norway, and Brittany in the 18th and 19th century. The industry developed out of demand for ashed soda, or potash, which was used to create glass and soap, and led to shortages for agricultural applications in traditional coastal communities. Potash is a water-soluble potassium rich concentrate made from plant matter, so it was also exported as a fertilizer. Coastal communities in the seaweed industry both expanded and struggled to keep up with the demand. Early commercial kelp export in Scotland devastated traditional agriculture in the region because intensive labor was needed during the seaweed growing season to harvest and process the kelp, which led to a labor transition from farming to kelp processing. Additionally, exploitation of kelp resources for potash production left little kelp behind for local fertilizer and coastal land became more desirable than inland regions. The Scottish seaweed industry went through multiple boom and bust cycles, employing 10,000 families and producing 3,000 tonnes of ash per year during its peak. The export price of kelp ash dropped in 1822, leading to a sudden emigration from the area because the crop was no longer profitable enough to support such a large industry. Kelp exploitation and toxic ash processing caused ecological and economic damage in Orkney and left many people sick and blinded. The kelp industry picked up again for iodine production in 1845, and alginate (a thickening agent) production in the early 1900s, which reinvigorated kelp harvest.
Global production of seaweed fertilizer was largely phased out when chemical fertilizers were developed in the 1920s, due to their cheaper production cost. Chemical fertilizers revolutionized the agriculture industry and allowed the human population to grow far beyond the limits of traditional food production methods. Synthetic fertilizers are still the predominant global source for commercial agricultural applications due to their cheap cost of production and widespread access. However, small-scale organic farmers and coastal communities continued traditional seaweed techniques in regions with a rich seaweed history. The first industrial kelp liquid fertilizer, Maxicrop, was created by Reginald Milton in 1947. The creation of liquid fertilizer has allowed for more widespread application of seaweed-derived fertilizer to inland regions and sparked a growing agronomic interest in seaweed for a variety of agricultural applications, including foliage sprays, biostimulants, and soil conditioning. Interestingly, the historic rise of seaweed aquaculture did not align with fertilizer production, because the European countries that produce seaweed fertilizer have not developed a significant aquaculture industry; seaweed farming is also currently dominated by China and Indonesia, where the crop is grown for food and other lucrative uses.
Aquaculture
The development of modern seaweed mariculture/aquaculture has allowed the expansion of seaweed fertilizer research and improved processing methods since the 1950s. Seaweed has been cultivated in Asian countries for food production for centuries, but seaweed aquaculture is now growing rapidly across the world for specialty use in biofuel, agar, cosmetics, medicine, and bioplastics. The nascent agricultural seaweed sector, including animal feed, soil additives, and agrochemicals, makes up less than 1% of the overall global value of seaweed aquaculture. However, interest in agricultural applications of the crop has increased dramatically since 1950, as specialty agrochemical uses for seaweed materials have been demonstrated through scientific research. Increased concern over the depletion and degradation of marine resources in the past century, coupled with the threats of climate change, has increased global interest in sustainable solutions for blue economic development of the oceans. Seaweed aquaculture is promoted as a solution for expanding novel industry development and food security while simultaneously restoring damaged ecosystems. Unlike terrestrial crops, growing seaweed requires no land, feed, fertilizers, pesticides, or water resources. Different seaweeds also offer a variety of ecosystem services (discussed below), which contribute to the growing popularity of seaweed as a bioremediation crop. Fertilizer plays an important role in sustainable seaweed aquaculture development because seaweed farming can help alleviate the excess nutrient loading associated with terrestrial chemical fertilizer run-off, and applying organic seaweed fertilizer to soil closes the nutrient loop between land and sea. Additionally, seaweed fertilizer can be produced using by-products from other industries or raw materials that are unsuitable for human consumption, such as rotting or infected biomass or biowaste products from carrageenan processing methods. Seaweed aquaculture is also important for supporting sustainable growth of the seaweed fertilizer industry because it limits the potential for exploitation of native seaweed for commercial interests. However, the nascent seaweed aquaculture industry faces a number of challenges to sustainable development, as discussed below. Environmental impacts of seaweed harvest and production need to be carefully scrutinized to protect coastal communities and maintain the socioeconomic benefits of using seaweed resources in industry.
Ecosystem services
Seaweed mariculture, for purposes including fertilizer production, has the potential to improve environmental conditions in coastal habitats, especially with regard to toxic algal blooms: maricultured seaweeds take up the excess nutrients that result from runoff, thereby inhibiting the growth of toxic algal blooms that harm local ecosystems. Seaweed fertilizers can also be more biodegradable, less toxic, and less hazardous than chemical fertilizers, depending on the type of seaweed fertilizer.
Seaweeds are used in aquaculture operations to take up fish waste as nutrients and improve water quality parameters. Humans use seaweeds nutritionally as food, industrially for animal feed and plant fertilizer, and ecologically to improve environmental conditions. Seaweeds have been consumed by humans for centuries because they have excellent nutritional profiles, contain minerals, trace elements, amino acids, and vitamins, and are high in fiber and low in calories. Red seaweeds have the highest protein content and brown seaweeds have the lowest protein content. Of all the red seaweeds, Porphyra is the genus most frequently used for human consumption. Brown seaweeds are so plentiful that they are most used for industrial animal feeds and fertilizers. Furthermore, seaweeds are currently being investigated as a potential source of sustainable biofuel and as a potential component of wastewater treatment, because some species are able to absorb and remove heavy metals and other toxicants from water bodies, and they also generally serve as water quality indicators.
Ecosystem impacts
Any ecosystem impacts of using seaweed for plant and crop fertilizer are primarily due to how the seaweed is harvested. Large-scale, unsustainable seaweed farming can lead to the displacement and alteration of native habitats due to the presence of farming infrastructure in the water, and day-to-day anthropogenic operations in the area. Seaweed is currently harvested from farmed sources, wild sources, and from beach collection efforts. Harvesting wild seaweed will tend to have negative impacts on local ecosystems, especially if existing populations are overexploited and rendered unable to provide ecosystem services. There is also a risk that large, industrial scale seaweed monocultures will be established in natural benthic environments, leading to the competitive exclusion of native seaweeds and sea grasses, which inhabit the depths underneath seaweed farms. Furthermore, large, industrial scale seaweed farming can alter the natural benthic environment that they are established in, by altering environmental parameters such as light availability, the movement of water, sedimentation rates and nutrient levels, and due to the general, overall stress caused by anthropogenic factors.
Production and application methods
Brown seaweeds are most commonly used for fertilizer production, at present and historically. Seaweed fertilizer can be used as a crude addition to soil as mulch, composted to break down the hardy raw material, or dried and pulverized to make the nutrients more bioavailable to plant roots. Compost fertilization is a technique that any small-scale organic farm can readily use if they have access to seaweed, though extracts are more common for large-scale commercial applications. Commercial manufacturing processes are often more technical than traditional techniques using raw biomass and use different biochemical processes to concentrate and extract the most beneficial nutrients from seaweed.
A simple liquid fertilizer can be created by fermenting seaweed leaves in water, though the process is intensified and hastened industrially through heat and pressure. Other methods for liquid extraction include a soft-extraction with low temperature milling to suspend fine particles in water, heating the raw material with alkaline sodium or potassium to extract nutrients and the addition of enzymes to aid in biochemical decomposition.
Extraction of bioavailable nutrients from raw seaweed is achieved by breaking down the hardy cell walls through physical techniques, such as ultrasound extraction, boiling, or freeze-thaw. Biological fermentation techniques are also used to degrade the cells. Physical extraction techniques are often faster, but more expensive and result in poorer crop yield in trials. Since seaweed extract has chelating properties that maintain trace metal ions bioavailability to plants, additional micronutrients are often added to solution to increase the fertilization benefit to specific crops.
Organic fertilization techniques have lower environmental consequences in comparison to the production of artificial chemical fertilizers, because they use no harsh caustic or organic solvents to produce fertilizer and the seaweed raw material is a renewable resource, as opposed to mineral deposits and fossil fuels needed to synthesize chemical fertilizer.
Large-scale agricultural use of synthetic fertilizer depletes soil fertility and increases water hardness over time, so recent trends in agricultural development are following an organic approach to sustain food production through improved soil management and bio-fertilization techniques.
Seaweed extracts are bio-fertilizers that can also be used as biostimulants, which are applied to enhance nutrient efficiency and abiotic stress tolerance. New extraction technologies are being developed to improve efficiency and target the isolation of specific compounds for specialized applications of seaweed biostimulants, though specific extraction techniques are frequently trade secrets. Additionally, many liquid fertilizer extraction processes can complement other industrial uses for seaweed, such as carrageenan production, which increases the economic benefit of the same seaweed crop.
Nutrient cycling
To support a growing seaweed aquaculture industry many studies have evaluated the nutrient cycle dynamics of different seaweed species in addition to exploring co-production applications including bioremediation and carbon sequestration. Seaweeds can form highly productive communities in coastal regions, dominating the nutrient cycles within these ecosystems. As primary producers, seaweeds incorporate inorganic carbon, light, and nutrients (such as nitrogen and phosphorus), into biomass through photosynthesis. Harvesting seaweed from marine environments results in the net removal of these elements from these ecosystems in addition to the removal of heavy metals and contaminants.
For photosynthesis, seaweeds utilize both inorganic nitrogen, in the forms of nitrate (NO3−) and ammonium (NH4+), and organic nitrogen in the form of urea. Primary production using nitrate is generally considered new production because nitrate is externally supplied through upwelling and riverine input, and has often been converted from forms of nitrogen that are released by biological respiration. However, primary production using ammonium is denoted recycled production because ammonium is internally supplied through regeneration by heterotrophs within ecosystems. For example, the ammonium excreted by fish and invertebrates within the same coastal ecosystems as seaweeds can support seaweed production by providing a nitrogen source.
Phosphorus is supplied inorganically as phosphate (PO43-) and generally follows similar seasonal patterns to nitrate. In addition, seaweeds require inorganic carbon, typically supplied from the environment in the form of carbon dioxide (CO2) or bicarbonate (HCO3−).
Similar to other marine photosynthesizing organisms like phytoplankton, seaweeds also experience nutrient limitations impacting their ability to grow. Nitrogen is the most commonly found limiting nutrient for seaweed photosynthesis, although phosphorus has also been found to be limiting. The ratio of inorganic carbon, nitrogen, and phosphorus is also important to ensure balanced growth. Generally the N:P ratio for seaweeds is 30:1; however, the ratio can differ significantly among species and requires experimental testing to identify the specific ratio for a given species. Exploring the relationship between nutrient cycling and seaweed growth is vital to optimizing seaweed aquaculture and understanding the functions and benefits of seaweed applications, including its use as a fertilizer, bio-remediator, and in the blue economy.
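As a small arithmetic illustration of what a 30:1 ratio implies (a sketch only, and assuming the ratio is quoted on a molar basis, as such nutrient ratios usually are), the corresponding mass ratio can be computed from the atomic masses of nitrogen and phosphorus:

```python
# Convert a molar N:P ratio into an approximate mass ratio.
N_MOLAR_MASS = 14.007   # g/mol
P_MOLAR_MASS = 30.974   # g/mol

def np_mass_ratio(molar_ratio):
    """Mass of N per unit mass of P for a given molar N:P ratio."""
    return molar_ratio * N_MOLAR_MASS / P_MOLAR_MASS

print(round(np_mass_ratio(30), 1))  # ~13.6, i.e. ~13.6 g N taken up per g P
```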
Coastal eutrophication
A growing population and intensification of industry and agriculture have increased the volume of wastewater discharged into coastal marine ecosystems. These waters typically contain high concentrations of nitrogen and phosphorus, and relatively high heavy metal concentrations, leading to eutrophication of many coastal ecosystems. Eutrophication results from the excessive nutrient load within these ecosystems resulting from the pollution of waters entering the oceans from industry, animal feed, and synthetic fertilizers, and thus over-fertilizes these systems.
Eutrophication leads to high productivity in coastal systems, which can result in coastal hypoxia and ocean acidification, two major concerns for coastal ecosystems. A notable service of seaweed farming is its ability to act as a bio-remediator by taking up and removing the excess nutrients in coastal ecosystems, which are then returned to land through the application of seaweed fertilizer.
Brown algae, due in part to their large size, have been noted for their high productivity and corresponding high nutrient uptake in coastal ecosystems. Additionally, studies have focused on how brown algae growth can be optimized to increase biomass production and therefore increase the quantity of nutrients removed from these ecosystems. Studies have also explored the potential of brown algae to sequester large volumes of carbon (blue carbon).
Bio-remediation in eutrophic ecosystems
Seaweeds have received significant attention for their potential to mitigate eutrophication in coastal ecosystems through nutrient uptake during primary production in integrated multi-trophic aquaculture (IMTA). Bioremediation involves the use of biological organisms to lower the concentrations of nitrogen, phosphorus, and heavy metal concentrations in marine ecosystems. The bioremediation potential of seaweeds depends, in part, on their growth rate which is controlled by numerous factors including water movement, light, desiccation, temperature, salinity, life stage, and age class. It has also been proposed that in eutrophic ecosystems phosphorus can become limiting to seaweed growth due to the high N:P ratio of the wastewater entering these ecosystems. Bioremediation practices have been widely used due to their cost-effective ability to reduce excess nutrients in coastal ecosystems leading to a decrease in harmful algal blooms and an oxygenation of the water column. Seaweeds have also been studied for their potential use in the biosorption and accumulation of heavy metals in polluted waters, although the accumulation of heavy metals may impact algal growth.
Blue carbon
Blue carbon methods involve the use of marine ecosystems for carbon storage and burial. Seaweed aquaculture shows potential to act as a CO2 sink through the uptake of carbon during photosynthesis, the transformation of inorganic carbon into biomass, and ultimately the fixation of carbon that can later be exported and buried. Duarte et al. (2017) outline a potential strategy for a seaweed farming blue carbon initiative. However, the contribution of seaweed to blue carbon has faced controversy over whether seaweed can act as a net sink for atmospheric carbon. Krause-Jensen et al. (2018) discuss two main criteria for seaweed farming to be considered a blue carbon initiative: it must be extensive in both scale and sequestration rate, and the sequestration must be actionable, that is, manageable through human intervention. Seaweed farming, including the use of seaweed as fertilizer, could become an important contributor to climate mitigation strategies through carbon sequestration and storage.
Functions and benefits of seaweed fertilizer
Fertilization
Seaweed functions as an organic bio-fertilizer. Because seaweed is rich in micro- and macronutrients, humic acids, and phytohormones, it enhances soil fertility. In addition, seaweed-derived fertilizers contain polysaccharides, proteins, and fatty acids which improve the moisture and nutrient retention of soil, contributing to improved crop growth. Seaweed also contains more trace minerals than fertilizers produced from animal byproducts.
The application of seaweed fertilizers can also result in enhanced tolerance to abiotic stressors that generally inhibit crop growth and yield, such as low moisture, high salinity, and freezing temperatures. These stress tolerance benefits appear to be driven by physiological changes induced in crops by the seaweed, including improved energy storage, enhanced root morphology, and greater metabolic potential, enhancing the plant's ability to survive unfavorable conditions. Extracts of Kappaphycus alvarezii have also resulted in considerable reductions in electrolyte leakage, as well as enhanced chlorophyll and carotenoid production and higher water content. Research has also demonstrated that wheat plants treated with seaweed extracts accumulate key osmoprotectants such as proline, other amino acids, and total protein.
Foliar applications of seaweed fertilizer extract have been shown to improve the uptake of nitrogen, phosphorus, potassium, and sulfur in soybean (Glycine max). Research has also demonstrated that brown algae seaweed extracts can improve tomato plant growth, overall crop yield, and resistance to environmental stressors. Additional documented benefits of using seaweed as a fertilizer include reduced transplant shock, increased leaf surface area, and increased sugar content.
Soil conditioning
As a soil conditioner, seaweed fertilizer can improve the physical qualities of soil, such as aeration and water retention. Clay soils that lack organic matter and porosity benefit from the humic acid and soluble alginates found in seaweed. These compounds bind with metal ions in the soil, causing clay particles to aggregate into larger crumbs, thereby improving the soil's texture, aeration, and water retention. The degradation of alginates also supplements the soil with organic matter, enhancing its fertility.
In particular, brown seaweeds such as Sargassum are known to have valuable soil conditioning properties. This seaweed contains soluble alginates as well as alginic acid, which catalyzes the bacterial decomposition of organic matter. This process improves soil quality by enhancing populations of nitrogen-fixing bacteria and by supplementing the soil with additional conditioners through the waste products produced by these bacteria.
Bio-remediation of polluted soils
Seaweed functions as a bio-remediator through its adsorption of harmful pollutants. Functional groups on the algal surface, such as ester, hydroxyl, carbonyl, amino, sulfhydryl, and phosphate groups, drive the biosorption of heavy metal ions.
Seaweeds such as Gracilaria corticata var. cartecala and Grateloupia lithophila effectively remove a wide variety of heavy metals, including chromium(III) and chromium(VI), mercury(II), lead(II), and cadmium(II), from their environment. In addition, Ulva spp. and Gelidium spp. have been shown to enhance the degradation of DDT in polluted soils and may reduce its bioavailability. Although there is significant potential for seaweed to serve as a bio-remediator for polluted soils, more research is required to fully develop the mechanisms for this process in an agricultural context. In some cases heavy metals accumulated by seaweed fertilizer may transfer to crops, with significant implications for public health.
The application of biochar is another strategy that can remediate and enhance infertile soils. Seaweed can be transformed into biochar and used as a means of increasing the organic matter and nutrient content of the soil. Different types of seaweed yield biochars with distinct nutrient contents and properties; red seaweeds, for example, create biochar that is rich in potassium and sulfur and is more acidic than biochar generated from brown seaweeds. While this is a new field of research, current data show that targeted breeding of seaweeds may result in biochars tailored to different types of soil and crops.
Integrated pest management
The addition of seaweed to soil can increase crop health and resistance to diseases. Seaweeds contain a diverse array of bioactive molecules that can respond to diseases and pests, including steroids, terpenes, acetogenins, and amino acid-derived polymers. The application of seaweed extracts reduces the presence of harmful pests, including nematodes and insects. While the application of seaweed alone seems to reduce the harmful effects of nematode infestation, combining seaweed application with carbofuran, a chemical nematocide, appears to be most effective. In addition, several seaweed species, including Sargassum swartzii, Padina pavonica, and Caulerpa denticulata, appear to hinder the early growth and development of numerous detrimental insects.
Soil microbial response to seaweed fertilizer treatment
Shifts in bacterial and fungal communities in response to seaweed fertilizer treatment have only recently been studied. Soil microbial community composition and functionality are largely driven by underlying soil health and abiotic properties.
Many DNA sequencing and omics-based approaches, combined with greenhouse experiments, have been used to characterize microbial responses to seaweed fertilizer treatment on a wide variety of crops. Deep 16S ribosomal RNA (rRNA) amplicon sequencing of the bacteria found in the soils of tomato plots treated with a Sargassum horneri fermented seaweed fertilizer showed a large shift in alpha- and beta-diversity indices between untreated soils and soils after 60 days of treatment. This shift in community composition was correlated with a 1.48–1.83-times increase in tomato yield in treated soils. Though the dominant bacterial phyla remained similar between treatment groups, changes in the abundance of the class Bacilli and the family Micrococcaceae were noted. Enzyme assays also showed increases in protease, polyphenol oxidase, dehydrogenase, invertase, and urease activity, thought to be induced by the altered microbial community. Each of these microbial and enzymatic changes was associated with improved nutrient turnover and quality in fertilizer-treated soils.
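For readers unfamiliar with the diversity metrics mentioned above, the following minimal Python sketch shows how one common alpha-diversity measure, the Shannon index, is computed from taxon abundance counts. The counts are invented for illustration and are not data from the cited tomato study.

```python
import math

def shannon_index(counts):
    """Shannon alpha diversity H' = -sum(p_i * ln p_i) over taxa with nonzero counts."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

# Hypothetical taxon abundance counts for an untreated and a treated soil sample;
# these numbers are invented for illustration only.
untreated = [500, 300, 150, 40, 10]
treated = [320, 280, 180, 120, 60, 30, 10]

print(round(shannon_index(untreated), 3))  # lower H': less even, less diverse community
print(round(shannon_index(treated), 3))    # higher H': shift in alpha diversity
```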
To investigate interactions between plant growth-promoting rhizobacteria (PGPR) and seaweed-derived extract, Ngoroyemoto et al. treated Amaranthus hybridus with both Kelpak and PGPR and measured the impacts on plant growth. Treating plants with Kelpak together with the bacteria Pseudomonas fluorescens and Bacillus licheniformis decreased plant stress responses and increased production. These findings suggest that seaweed fertilizers can benefit crops when their application to soils favors the growth of PGPR.
Wang et al. found that apple seedlings treated with seaweed fertilizer differed markedly in fungal diversity and species richness when compared with untreated control groups. These findings were complemented by increases in soil quality and enzyme activities in treated soils, supporting the hypothesis that the fertilizer promoted the growth of plant-beneficial fungal species. Using 16S rRNA and fungal internal transcribed spacer (ITS) sequencing, Renaut et al. examined the effect of Ascophyllum nodosum extract treatment on the rhizospheres of pepper and tomato plants in greenhouses. They found that bacterial and fungal species composition and community structures differed with treatment. A rise in the abundance of certain amplicon sequence variants (ASVs) was also directly correlated with increases in plant health and growth. These ASVs included fungi in the family Microascaceae, the genus Mortierella, and several other uncultured taxa. A large number of bacterial ASVs were likewise positively correlated with growth in the same study, including members of Rhizobium, Sphingomonas, Sphingobium, and Bradyrhizobium.
Resistance to plant pathogens
Application of seaweed fertilizer may also increase resistance to plant pathogens. In greenhouse trials, Ali et al. tested the treatment of tomato and sweet pepper crops with Ascophyllum nodosum extract and found that it both increased plant health and reduced the incidence of plant pathogens. Further investigation showed that up-regulation of pathogen defense-related enzymes underlay the reduction of the pathogens Xanthomonas campestris pv. vesicatoria and Alternaria solani.
Chen et al. found that Ascophyllum nodosum treatment positively impacted the community composition of maize rhizospheres. This may have critical implications for plant health because the structure of rhizosphere microbial communities can aid in the resistance of plants to soil-borne pathogens.
Other pathogen reductions include the mitigation of carrot foliar fungal diseases following Ascophyllum nodosum treatment and inoculation with the fungal pathogens Alternaria radicina and Botrytis cinerea. Reduced disease severity was noted at 10 and 20 days post-inoculation in comparison with control plants, and the seaweed treatment was more effective at reducing disease pathology than salicylic acid, a known protector of plants against biotic and abiotic stresses. Islam et al. reported similar results when treating Arabidopsis thaliana with brown algal extracts followed by inoculation with the oomycete pathogen Phytophthora cinnamomi. Analysis of plant RNA transcripts showed that the seaweed extract primed A. thaliana to defend against the pathogen before inoculation, leading to increased host survival and decreased susceptibility to infection.
Fewer studies have analyzed the impact of seaweed fertilizer treatment on plant resistance to viral pathogens; however, the limited results available are promising. Green, brown, and red seaweeds contain polysaccharides that elicit pathogen response pathways in plants, priming defenses against viruses as well as bacteria and fungi. Specifically, defense enzymes including phenylalanine ammonia lyase and lipoxygenase are activated, contributing to viral defense.
Aqueous and ethanolic extracts from the brown alga Durvillaea antarctica were shown to decrease pathological symptoms of tobacco mosaic virus (TMV) in tobacco leaves. Another study on tobacco plants found that sulfated fucan oligosaccharides extracted from brown algae induced local and systemic acquired resistance to TMV. Taken together, these results indicate that the application of seaweed fertilizers has considerable potential to benefit agricultural crops, including resistance to bacterial, fungal, and viral plant pathogens.
References
Wikipedia Student Program
Organic fertilizers
Seaweeds | Seaweed fertiliser | [
"Biology"
] | 7,011 | [
"Seaweeds",
"Algae"
] |
69,239,975 | https://en.wikipedia.org/wiki/Impact%20of%20self-driving%20cars | The impact of self-driving cars is anticipated to be wide-ranging in many areas of daily life. Self-driving cars (also known as autonomous vehicles or AVs) have been the subject of significant research on their environmental, practical, and lifestyle consequences and their impacts remain debated.
Some experts claim substantial reductions in traffic collisions and the resulting severe injuries or deaths. United States government estimates suggest 94% of traffic collisions have humans as the final critical element in the crash, with one study estimating that converting 90% of cars on US roads to AVs would save 25,000 lives per year. Other experts claim that the number of human-error collisions is overestimated and that self-driving cars may actually increase collisions.
Self-driving cars are speculated to worsen air pollution, noise pollution, and sedentary lifestyles; to increase productivity and housing affordability and reclaim land used for parking; and to cause greater energy use, traffic congestion, and sprawl. The impact of self-driving cars on absolute levels of individual car use is not yet clear; other forms of self-driving vehicles, such as self-driving buses, may actually decrease car use and congestion.
AVs are anticipated to affect the healthcare, insurance, travel, and logistics fields. Auto insurance costs are expected to decrease, and the burden of cars on the healthcare system to be reduced. Self-driving cars are predicted to cause significant job losses in the transportation industry.
Automobile industry
A McKinsey report has forecast that AVs could reach $300 to $400 billion in revenue by 2035. The industry has attracted multiple car manufacturers, most notably General Motors' subsidiary Cruise and Tesla. Ford and Volkswagen invested billions in Argo AI but withdrew from the market by 2022, focusing instead on semi-autonomous driving (L2+ and L3 under the SAE classification). Notably, non-car manufacturers have also pursued self-driving cars, including Google subsidiary Waymo, among others.
To help reduce the possibility of safety issues, some companies have begun to open-source parts of their driverless systems. Udacity, for instance, is developing an open-source software stack, and other companies have taken similar approaches.
Public health
Car crash reduction
Proponents
Estimates of the number of crashes prevented by AVs vary widely. An NHTSA report in 2018 found that 94% of crashes had humans as the final causal step in a chain of events. One study claimed that if 90% of cars in the US became self-driving, an estimated 25,000 lives would be saved annually. Lives saved by averting automobile crashes in the US have been valued at more than $200 billion annually. Other studies claim self-driving cars could save 10 million lives worldwide per decade. Opponents argue that the number of human-driven crashes is taken out of context and that estimates of lives saved may not be accurate.
Driving safety experts predict that once driverless technology has been fully developed, traffic collisions (and resulting deaths and injuries and costs) caused by human error, such as delayed reaction time, tailgating, rubbernecking, and other forms of distracted or aggressive driving would be substantially reduced. Some experts advocate the idea of a "smart city" and claim data sharing infrastructure with AVs could further reduce crashes.
Opponents
Lack of data remains a key challenge in comparing fatalities per million miles between AVs and humans. One limited early study claimed a rate of 9.1 crashes per million miles for AVs, nearly double the rate for human driving, though the crashes were less serious than those involving human drivers. Ars Technica calculated 102 crashes over 6 million miles, but claimed the crashes were low-impact and that AVs were still safer than human drivers. Waymo claimed only 3 crashes with injuries over 7.1 million miles, nearly twice as safe as human drivers. As more cities give permission for AVs to operate, incidents and complaints have increased.
Opponents of AVs have argued that current self-driving technology fails to account for "edge cases", which may make the technology more dangerous than human driving. In 2017, driving experts were contacted by TheDrive.com, operated by Time magazine, to rank autopilot systems. None ranked any of the autopilot systems available at the time as safer than human driving. Factors that reduce safety may include unexpected interactions between humans and vehicle systems; complications due to technical limitations of the technologies; the effect of bugs that inevitably occur in complex interdependent software systems; sensor or data shortcomings; and compromise by malicious actors. Security problems include what an autonomous car might do if summoned to pick up the owner but another person attempts entry, what happens if someone tries to break into the car, and what happens if someone attacks the occupants, for example by exchanging gunfire.
One ethicist argued that autonomous vehicles requiring any human supervision would create complacency and would be immoral to deploy. Specifically, they argued humans are unlikely to effectively take over during a sudden software failure if an impending decision is required immediately. Research shows that drivers in automated cars react later when they have to intervene in a critical situation, compared to if they were driving manually.
Other public health impacts
According to a 2020 Annual Review of Public Health review of the literature, self-driving cars "could increase some health risks (such as air pollution, noise, and sedentarism); however, if properly regulated, AVs will likely reduce morbidity and mortality from motor vehicle crashes and may help reshape cities to promote healthy urban environments."
An unexpected disadvantage of the widespread acceptance of autonomous vehicles would be a reduction in the supply of organs for donation. In the US, for example, 13% of the organ donation supply comes from car crash victims.
Welfare
According to a 2020 study, self-driving cars will increase productivity and housing affordability, as well as reclaim land used for parking. However, self-driving cars will cause greater energy use, traffic congestion, and sprawl. Automated cars could reduce labor costs; relieve travelers from driving and navigation chores, thereby replacing behind-the-wheel commuting hours with more time for leisure or work; and remove constraints on occupants who cannot drive safely, such as those who are distracted, texting, intoxicated, prone to seizures, or otherwise impaired.
For the young, the elderly, people with disabilities, and low-income citizens, automated cars could provide enhanced mobility. The removal of the steering wheel—along with the remaining driver interface and the requirement for any occupant to assume a forward-facing position—would give the interior of the cabin greater ergonomic flexibility. Large vehicles, such as motorhomes, would attain appreciably enhanced ease of use.
The elderly and persons with disabilities (such as persons who are hearing-impaired, vision-impaired, mobility-impaired, or cognitively-impaired) are potential beneficiaries of adoption of autonomous vehicles; however, the extent to which such populations gain greater mobility from the adoption of AV technology depends on the specific designs and regulations adopted.
Children and teens, who are not able to drive a vehicle themselves, would also benefit from the introduction of autonomous cars for student transport. Daycares and schools could set up automated pick-up and drop-off systems by car in addition to walking, cycling, and busing, reducing reliance on parents and childcare workers.
The need for human action in driving will diminish and eventually vanish. Since current vehicles still require human action to some extent, the driving school industry will not be disrupted until the majority of autonomous transportation has switched to the emergent dominant design. It is plausible that in the distant future driving a vehicle will be considered a luxury, which implies an industry structure based on new entrants and a new market. Self-driving cars could also exacerbate existing mobility inequalities driven by the interests of car companies and technology companies, while drawing investment away from more equitable and sustainable mobility initiatives such as public transportation.
Urban planning
According to a Wonkblog reporter, if fully automated cars become commercially available, they have the potential to be a disruptive innovation with major implications for society. The likelihood of widespread adoption is still unclear, but if they are used on a wide scale, policymakers face a number of unresolved questions about their effects.
One fundamental question is about their effect on travel behavior. Some people believe that they will increase car ownership and car use because it will become easier to use them and they will ultimately be more useful. This may, in turn, encourage urban sprawl and ultimately total private vehicle use. Others argue that it will be easier to share cars and that this will thus discourage outright ownership and decrease total usage, and make cars more efficient forms of transportation in relation to the present situation.
Policy-makers will have to take a new look at how infrastructure is built and how money is allotted to build for automated vehicles. The need for traffic signals could potentially be reduced with the adoption of smart highways. Smart highways, together with technological advances implemented through policy change, may reduce dependence on oil imports because individual cars would spend less time on the road, which could affect energy policy. On the other hand, automated vehicles could increase the overall number of cars on the road, which could lead to greater dependence on oil imports if smart systems are not enough to curtail the impact of more vehicles. Given the uncertainty surrounding the future of automated vehicles, policymakers may want to plan by implementing infrastructure improvements that benefit both human drivers and automated vehicles. Caution is also needed regarding public transportation, whose use may be greatly reduced if infrastructure policy caters primarily to automated vehicles, resulting in job losses and increased unemployment.
Other disruptive effects will come from the use of automated vehicles to carry goods. Self-driving vans have the potential to make home deliveries significantly cheaper, transforming retail commerce and possibly making hypermarkets and supermarkets redundant.

The US Department of Transportation defines six levels of automation, starting at level zero, in which the human driver does everything, and ending with level five, in which the automated system performs all driving tasks. Under current law, manufacturers bear all responsibility for self-certifying vehicles for use on public roads. This means that, as long as a vehicle is compliant within the regulatory framework, there are currently no specific federal legal barriers in the US to a highly automated vehicle being offered for sale. Iyad Rahwan, an associate professor in the MIT Media Lab, said, "Most people want to live in a world where cars will minimize casualties, but everyone wants their own car to protect them at all costs." Furthermore, industry standards and best practices are still needed before such systems can be considered reasonably safe under real-world conditions.
Traffic
Additional advantages could include higher speed limits; smoother rides; increased roadway capacity; and minimized traffic congestion, due to the decreased need for safety gaps and the ability to travel safely at higher speeds. Currently, maximum controlled-access highway throughput or capacity according to the US Highway Capacity Manual is about 2,200 passenger vehicles per hour per lane, with about 5% of the available road space taken up by cars. One study estimated that automated cars could increase capacity by 273% (≈8,200 cars per hour per lane). The study also estimated that with 100% connected vehicles using vehicle-to-vehicle communication, capacity could reach 12,000 passenger vehicles per hour per lane (roughly 5.5 times the 2,200 pc/h baseline), traveling safely with only small following gaps between vehicles, whereas human drivers at highway speeds keep a much larger distance from the vehicle in front. These increases in highway capacity could have a significant impact on traffic congestion, particularly in urban areas, and could even effectively end highway congestion in some places. The ability of authorities to manage traffic flow would increase, given the extra data and the predictability of driving behavior, combined with less need for traffic police and even road signage.
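The percentage figures above follow directly from the 2,200 pc/h baseline; the short Python check below simply reproduces that arithmetic and is a reader aid rather than part of any cited study.

```python
# Reproduce the capacity-increase arithmetic from the estimates cited above.
baseline = 2200    # passenger vehicles per hour per lane (US Highway Capacity Manual)
automated = 8200   # estimated capacity with automated cars
connected = 12000  # estimated capacity with 100% vehicle-to-vehicle connectivity

def percent_increase(new, old):
    return (new - old) / old * 100

print(round(percent_increase(automated, baseline)))  # ~273% increase over the baseline
print(round(percent_increase(connected, baseline)))  # ~445% increase, i.e. ~5.5x baseline
```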
Insurance
Safer driving is expected to reduce the costs of vehicle insurance. The automobile insurance industry might suffer as the technology makes certain aspects of these occupations obsolete. Since fewer collisions mean less money spent on repair costs, the role of the insurance industry is likely to be altered as well. The increased safety of transport due to autonomous vehicles can be expected to reduce payouts for insurers, which is positive for the industry, but fewer payouts may also imply a drop in demand for insurance in general.
In order to accommodate such changes, the United Kingdom introduced the Automated and Electric Vehicles Act 2018. While Part 2 deals with electric vehicles, Part 1 covers insurance provisions for automated vehicles.
Labor market
Driving-related jobs
A direct impact of widespread adoption of automated vehicles is the loss of driving-related jobs in the road transport industry. There could be resistance from professional drivers and unions who are threatened by job losses. In addition, there could be job losses in public transit services and crash repair shops. A frequently cited paper by Michael Osborne and Carl Benedikt Frey found that automated cars would make many jobs redundant. The industry has, however, created thousands of jobs in low-income countries for workers who train autonomous systems.
Taxis
Given the ambiguous user preferences regarding personal ownership of autonomous vehicles, it is possible that the current trend toward mobility providers will continue to grow in popularity. Established providers such as Uber and Lyft already have a significant presence in the industry, and new entrants are likely to appear as business opportunities arise.
Energy and environmental impacts
Vehicle use
A review found that private autonomous vehicles may increase total travel, whereas autonomous buses may lead to reduced car use.
Vehicle automation can improve the fuel economy of a car by optimizing the drive cycle, as well as increasing speeds in congested traffic by an estimated 8%–13%. Reduced traffic congestion and the improvements in traffic flow due to widespread use of automated cars will translate into higher fuel efficiency, ranging from a 23%–39% increase, with the potential to increase further. Additionally, self-driving cars will be able to accelerate and brake more efficiently, meaning higher fuel economy from reducing the wasted energy typically associated with inefficient changes in speed. However, the improvement in vehicle energy efficiency does not necessarily translate into a net reduction in energy consumption and positive environmental outcomes.
It is expected that the convenience of automated vehicles will encourage consumers to travel more, and this induced demand may partially or fully offset the fuel efficiency improvement brought by automation. Alongside the induced demand, there may also be a reduction in the use of more sustainable modes, such as public or active transport. Overall, the consequences of vehicle automation for global energy demand and emissions are highly uncertain and depend heavily on the combined effect of changes in consumer behavior, policy intervention, technological progress, and vehicle technology.
Production
By reducing the labor and other costs of mobility as a service, automated cars could reduce the number of cars that are individually owned, replaced by taxi/pooling and other car-sharing services. This would also dramatically reduce the size of the automotive production industry, with corresponding environmental and economic effects.
Indirect effects
The lack of stressful driving, more productive time during the trip, and the potential savings in travel time and cost could become an incentive to live far away from cities, where housing is cheaper, and work in the city's core, thus increasing travel distances and inducing more urban sprawl, raising energy consumption and enlarging the carbon footprint of urban travel. There is also the risk that traffic congestion might increase, rather than decrease. Appropriate public policies and regulations, such as zoning, pricing, and urban design are required to avoid the negative impacts of increased suburbanization and longer distance travel.
Since many autonomous vehicles will rely on electricity to operate, demand for lithium batteries is expected to increase. Similarly, radar, sensors, lidar, and high-speed internet connectivity require additional auxiliary power, which manifests as a greater power draw from the battery. The larger battery requirement increases the needed supply of such batteries from the chemical industry. On the other hand, with the expected increase of battery-powered (autonomous) vehicles, the petroleum industry is expected to see a decline in demand. As this depends on the adoption rate of autonomous vehicles, it is unclear to what extent this will disrupt that industry. The transition from oil to electricity allows companies to explore whether there are business opportunities for them in the new energy ecosystem. In 2020, Mohan, Sripad, Vaishnav and Viswanathan at Carnegie Mellon University found that the electricity consumption of the automation technology, including sensors, computation, and internet access, as well as the increased drag from sensors, reduces the range of an automated electric vehicle by up to 15%, implying that the additional battery requirement might not be as large as previously assumed.
Self-parking and parking space
Self-parking
A study conducted by AAA Foundation for Traffic Safety found that drivers did not trust self-parking technology, even though the systems outperformed drivers with a backup camera. The study tested self-parking systems in a variety of vehicles (Lincoln MKC, Mercedes-Benz ML400 4Matic, Cadillac CTS-V Sport, BMW i3 and Jeep Cherokee Limited) and found that self-parking cars hit the curb 81% fewer times, used 47% fewer manoeuvres and parked 10% faster than drivers. Yet, only 25% of those surveyed said they would trust this technology.
Parking space
Manually driven vehicles are reported to be in use only 4–5% of the time, remaining parked and unused for the other 95–96% of the time. Autonomous taxis, by contrast, could be used continuously after reaching their destination. This could dramatically reduce the need for parking space. For example, a 2015 study found that 14% of the land in Los Angeles is used for parking alone. Combined with the potential reduced need for road space due to improved traffic flow, this could free up large amounts of land in urban areas, which could then be used for parks, recreational areas, and buildings, among other uses, making cities more livable. In addition, privately owned self-driving cars capable of self-parking would provide another advantage: the ability to drop off and pick up passengers even in places where parking is prohibited. This would benefit park and ride facilities.
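A rough, back-of-the-envelope sketch can show why continuously used autonomous taxis could shrink parking demand. In the Python example below, the 4–5% private-car utilization comes from the figure above, while the 50% shared-vehicle utilization and the fleet size are hypothetical assumptions chosen only for illustration.

```python
# Back-of-the-envelope sketch: how many continuously used shared vehicles could supply
# the same daily vehicle-hours as privately owned cars that sit parked most of the day.
# The 50% shared-vehicle utilization and the fleet size are hypothetical assumptions.
private_utilization = 0.05   # privately owned cars in use roughly 4-5% of the time
shared_utilization = 0.50    # assumed utilization of an autonomous taxi (hypothetical)
private_fleet = 1_000_000    # hypothetical city fleet size

vehicle_hours = private_fleet * 24 * private_utilization   # daily vehicle-hours of travel
shared_fleet = vehicle_hours / (24 * shared_utilization)

print(int(shared_fleet))  # 100000 shared vehicles would cover the same travel demand
print(f"{1 - shared_fleet / private_fleet:.0%} fewer vehicles needing parking")  # 90%
```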
Cybersecurity
Privacy
The vehicles' increased awareness could aid the police by reporting on illegal passenger behaviour, while possibly enabling other crimes, such as deliberately crashing into another vehicle or a pedestrian. However, this may also lead to much-expanded mass surveillance if there is wide access granted to third parties to the large data sets generated.
Privacy could be an issue when having the vehicle's location and position integrated into an interface that other people have access to. Moreover, they require a sensor-based infrastructure that would constitute an all-encompassing surveillance apparatus. This gives the car manufacturers and other companies the data needed to understand the user's lifestyle and personal preferences.
Terrorist scenarios
There is the risk of terrorist attacks by automotive hacking through the sharing of information through V2V (Vehicle to Vehicle) and V2I (Vehicle to Infrastructure) protocols. Self-driving cars could potentially be loaded with explosives and used as bombs. According to legislation of US lawmakers, autonomous and self-driving vehicles should be equipped with defences against hacking.
Car repair
As collisions become less likely and the risk of human error is reduced significantly, the repair industry will face an enormous reduction in the work required to repair car frames. Meanwhile, as the data generated by an autonomous vehicle is likely to predict when certain replaceable parts need maintenance, car owners and the repair industry will be able to proactively replace parts that will soon fail. This "asset efficiency service" would imply a productivity gain for the automotive repair industry.
Rescue, emergency response, and military
The technology used in autonomous driving can also save lives in other industries. The implementation of autonomous vehicles in rescue, emergency response, and military applications has already led to a decrease in deaths. Military personnel use autonomous vehicles to reach dangerous and remote places on earth to deliver fuel, food, and general supplies, and even to rescue people. A further implication of adopting autonomous vehicles could be a reduction in deployed personnel, leading to fewer injuries as the technology allows vehicles to become increasingly autonomous. Another future implication is the reduction of emergency drivers when autonomous vehicles are deployed as fire trucks or ambulances. An advantage could be the use of real-time traffic information and other generated data to determine and execute routes more efficiently than human drivers; the time savings can be invaluable in these situations.
Interior design and entertainment
With the driver decreasingly focused on operating the vehicle, the interior design and media-entertainment industries will have to reconsider what passengers of autonomous vehicles do while on the road. Vehicles will need to be redesigned, and possibly prepared for multipurpose usage. In practice, travellers will have more time for business and/or leisure. In both cases, this gives the media-entertainment industry increasing opportunities to compete for attention. Moreover, the advertising business will be able to provide location-based ads without risking driver safety.
Connected vehicle
All cars can benefit from information and connections, but autonomous cars "will be fully capable of operating without C-V2X." In addition, the entertainment industry mentioned earlier depends heavily on this network to be active in this market segment. This implies higher revenues for the telecommunications industry.
Hospitality industry and airlines
Driver interactions with the vehicle will become less common in the near future, and in the more distant future the responsibility will lie entirely with the vehicle. As indicated above, this will have implications for the entertainment and interior design industries. For roadside restaurants, the need for customers to stop driving and enter the restaurant will vanish, as the autonomous vehicle takes on a double function. Moreover, alongside the rise of disruptive platforms such as Airbnb that have shaken up the hotel industry, rapid developments in the autonomous vehicle industry may further affect hotels' customer bases. In the more distant future, motels might see a decrease in guests, since autonomous vehicles could be redesigned as fully equipped bedrooms. Improvements to vehicle interiors might additionally have implications for the airline industry: for relatively short-haul flights, waiting times at customs or the gate mean lost time and hassle for customers, and with the improved convenience of future car travel, customers might choose the car instead, eroding part of the airlines' customer base.
References
Technology assessments
self-driving | Impact of self-driving cars | [
"Technology",
"Engineering"
] | 4,673 | [
"Self-driving cars",
"Technology assessments",
"Impact of automation",
"Automation",
"Automotive engineering"
] |