| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
21,697,612 | https://en.wikipedia.org/wiki/Australian%20Integrated%20Forecast%20System | The Australian Integrated Forecast System (AIFS) is a UNIX- and Linux-based processing, display, analysis and communications system for meteorological data.
It incorporates facilities for the ingest and storage of meteorological and hydrological observations, which can be displayed, analysed and manipulated on screen. Tools are also provided for alerting, chart plotting and the preparation and dissemination of forecasts and warnings to the public.
AIFS is currently running on AIX, HP-UX and Linux platforms in Australia, Fiji, Indonesia and Malaysia.
Development began in 1991 as a replacement for the Automated Regional Operations System (AROS), built on Tandem NonStop architecture.
References
Meteorological instrumentation and equipment | Australian Integrated Forecast System | Technology,Engineering | 136 |
74,579,575 | https://en.wikipedia.org/wiki/Maui%20Nui%20Venison | Maui Nui Venison is a venison producer on the island of Maui, Hawaii. The company harvests axis deer, an invasive species in Hawaii, in order to balance the population, and sells the resulting meat. Its night harvesting and field processing system is unique in the world.
History
Maui Nui Venison was founded in 2015 by Jake and Ku‘ulani Muise to address the invasive axis deer problem on Maui by culling them and selling the meat to the public. Axis deer are native to the Indian subcontinent, and were brought to Hawaii in the 1860s, as a gift to the Hawaiian king. The deer are prolific breeders, one of the few deer species able to breed year-round, and have no predators on the island. In large numbers, they cause severe damage to the island's ecosystem. As of 2023, the axis deer population on Maui numbers 50,000 to 60,000, growing at about 30% annually.
Jake Muise was formerly axis deer coordinator for the Big Island Invasive Species Committee. The organization led eradication efforts on Hawaii Island, after the deer, populous on the islands of Molokai, Lānai and Maui, were illegally introduced to the main island in 2009. High tech tracking methods involving camera traps, forward-looking infrared (FLIR), and radio telemetry were used to aid in locating the animals.
In order to commercially hunt wild deer, Maui Nui Venison had to comply with US Department of Agriculture (USDA) regulations for hunting and processing, and animal and meat inspection. A hunting operation on Molokai, Molokai Wildlife Management, in 2007 became the first in Hawaii to obtain a USDA permit to cull axis deer and sell the USDA-certified meat.
Operations
The company uses FLIR equipment to track the deer at night, and kills, processes and stores the venison in a mobile facility. The intention is to allow the animals to roam wild and unstressed by the hunt, until the moment of death. Following USDA requirements, only single headshots are permitted. The hunters carry the carcasses on their backs to the processing facility to avoid ground contamination. Hunting is conducted on privately owned ranches, where the majority of the deer live. To meet US certification requirements, the harvest team is accompanied by a USDA inspector, and each carcass is inspected by a USDA veterinarian. The overall system makes it the only such field operation in the world.
Products
Maui Nui sells the entire animal online, as cuts of meat, bone broth, individual organs, ground meat, jerky, and pet treats. Products are shipped frozen and are available within the United States.
References
Hunting in the United States
Deer hunting
Maui
Invasive animal species in the United States
Ecological restoration
Meat processing in the United States
Meat companies of the United States
Food and drink companies based in Hawaii | Maui Nui Venison | Chemistry,Engineering | 589 |
11,207,764 | https://en.wikipedia.org/wiki/Glycerol%20%28data%20page%29 | This page provides supplementary chemical data on glycerol.
Material Safety Data Sheet
Handling this chemical may require notable safety precautions. It is highly recommended that you obtain the Material Safety Data Sheet (MSDS) for this chemical from a reliable source and follow its directions.
Structure and properties
Thermodynamic properties
Vapor pressure of liquid
Table data obtained from CRC Handbook of Chemistry and Physics, 44th ed.
Natural logarithm (ln) of glycerol vapor pressure. Uses the formula ln(P) = A·ln(T) + B/T + C + D·T², with P in kPa and T in kelvins, and coefficients A = -2.125867E+01, B = -1.672626E+04, C = 1.655099E+02, and D = 1.100480E-05 obtained from CHERIC
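As a rough illustration (not part of the original data page), the following Python sketch evaluates the correlation above at a given temperature. The kPa/kelvin units follow the assumed CHERIC form of the equation, and the function name and the sanity-check temperature are illustrative only.

```python
import math

# Coefficients for glycerol as listed above (assumed CHERIC form:
# ln(P) = A*ln(T) + B/T + C + D*T**2, with P in kPa and T in K).
A = -2.125867e+01
B = -1.672626e+04
C = 1.655099e+02
D = 1.100480e-05

def glycerol_vapor_pressure_kpa(temperature_k: float) -> float:
    """Return the estimated vapor pressure of glycerol in kPa."""
    ln_p = A * math.log(temperature_k) + B / temperature_k + C + D * temperature_k ** 2
    return math.exp(ln_p)

# Sanity check near the normal boiling point of glycerol (~290 degC, ~563 K):
# the result should be roughly atmospheric pressure (~101 kPa).
print(round(glycerol_vapor_pressure_kpa(563.15), 1))
```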
Freezing point of aqueous solutions
Table data obtained from Lange's Handbook of Chemistry, 10th ed. Specific gravity is at 15 °C, referenced to water at 15 °C.
See details on: Freezing Points of Glycerine-Water Solutions Dow Chemical
or Freezing Points of Glycerol and Its Aqueous Solutions.
Distillation data
Spectral data
References
Chemical data pages
Chemical data pages cleanup | Glycerol (data page) | Chemistry | 233 |
8,100,661 | https://en.wikipedia.org/wiki/School%20Astronomical%20Olympiad%20by%20Correspondence | The Russian Open School Astronomical Olympiad by Correspondence (ROSAOC) is an annual international astronomy competition for secondary school students. The Olympiad is conducted in a single theoretical stage by correspondence. The languages of the Olympiad are English and Russian.
The Russian Open School Astronomical Olympiad by Correspondence – 2008 started on December 16, 2007. The deadline for paper submission was February 11, 2008.
See also
International Astronomy Olympiad
External links
The official website
Science competitions | School Astronomical Olympiad by Correspondence | Technology | 86 |
14,231,149 | https://en.wikipedia.org/wiki/International%20Generic%20Sample%20Number | The International Generic Sample Number or IGSN is a persistent identifier for samples. As an active persistent identifier, it can be resolved through the Handle System. The system is used in production by the System for Earth Sample Registration (SESAR), Geoscience Australia, Commonwealth Scientific and Industrial Research Organisation Mineral Resources, Australian Research Data Commons (ARDC), University of Bremen MARUM, German Research Centre for Geosciences (GFZ), IFREMER Institut Français de Recherche pour l'Exploitation de la Mer, Korea Institute of Geoscience & Mineral Resources (KIGAM), and University of Kiel. Other organisations are preparing to introduce the IGSN.
The IGSN was developed as the International Geo Sample Number to provide a persistent, globally unique, web resolvable identifier for physical samples. IGSN is both a governance and technical system for assigning globally unique persistent identifiers to physical samples. Even though initially developed for samples in the geosciences, the application of IGSN can be and has already been expanded to other domains that rely on physical samples and collections. To take into account the expanded scope of the application of IGSN beyond the earth and environmental sciences, the IGSN Implementation Organization (IGSN e.V.) voted to change the name of the identifier to International Generic Sample Number (IGSN) and rename the organisation accordingly.
The IGSN preserves the identity of a sample even as it is moved from lab to lab and as data appear in different publications, thus eliminating ambiguity that stems from similar names for samples from the earth. The IGSN unique identifier allows researchers to track the analytical history of a sample and build on previously collected data as new techniques are developed. Additionally, the IGSN provides a link between disparate data generated by different investigators and published in different scientific articles.
In September 2021, the members of IGSN e.V. and DataCite agreed to enter a partnership. Under the partnership, DataCite will provide the IGSN ID registration services and supporting technology to enable the ongoing sustainability of the IGSN PID infrastructure. The IGSN e.V. will facilitate a Community of Communities to promote and support new research and innovation for standard methods of identifying, citing, and locating physical samples.
History
The IGSN was developed as part of SESAR with the support of the National Science Foundation at the Lamont–Doherty Earth Observatory. The project was initiated and managed by the Geoinformatics for Geochemistry Program under the direction of Kerstin Lehnert to address data curation obstacles such as different samples that share the same name, and samples that are renamed as they move between laboratories, thus generating analyses that are published under different aliases. As a result, metadata that ensure unique identification are often missing, which complicates future reuse of data from a sample, or of the sample itself. Sample databases, such as the SESAR database, are designed to address these issues.
At a workshop hosted at the San Diego Supercomputer Center in 2011, a group of experts met to discuss how to transition the IGSN project into a sustainable infrastructure. The group recommended opening the system to other IGSN registration agents, making it international and transferring the operation and governance of the IGSN to an independent body. This recommendation led to the foundation of the International Geo Sample Implementation Organization e.V. (IGSN e.V.) and the founding event was held at the American Geophysical Union Fall Meeting 2011 in San Francisco, California. The IGSN e.V. is an incorporated organisation under German law and is registered at the Magistrates Court in Potsdam, Germany.
Membership in the organisation is open only to institutions, not to individuals. At present, IGSN e.V. has 16 full members.
In 2018, the Alfred P. Sloan Foundation awarded Columbia University's Lamont–Doherty Earth Observatory a grant for a project to modernise the IGSN business model and system architecture. The funding from the Sloan Foundation will support a series of workshops at which international experts will come together to redesign the IGSN system and its management, allowing researchers worldwide to use the IGSN with confidence.
In September 2021, IGSN e.V. and DataCite entered a partnership under which DataCite will provide the IGSN ID registration services and supporting technology to enable the ongoing sustainability of the IGSN PID infrastructure. The IGSN e.V. will facilitate a Community of Communities to promote and support new research and innovation for standard methods of identifying, citing, and locating physical samples. The partnership allows IGSN to leverage DataCite DOI registration services and to focus community efforts on advocacy and expanding the global samples ecosystem.
IGSN and DataCite have a common purpose, and a closer relationship is expected to provide mutual benefit to their shared vision of connecting research and identifying knowledge. The partnership brings years of experience across both organizations and their communities to scale sample community engagement, develop sample identifier practice standards, and increase adoption globally. A study published in 2023 by Knowledge Exchange highlighted that IGSN IDs point to physical objects rather than to intellectual property or outcomes (as DOIs mostly do) or to their creators. The report also emphasized that the service itself and its organisational framework were developed bottom-up in a purely community-based effort.
Example
An example of a publication using live IGSNs can be found here:
Dere, A. L., T. S. White, R. H. April, B. Reynolds, T. E. Miller, E. P. Knapp, L. D. McKay, and S. L. Brantley (2013), Climate dependence of feldspar weathering in shale soils along a latitudinal gradient, Geochimica et Cosmochimica Acta, 122, 101–126, https://dx.doi.org/10.1016/j.gca.2013.08.001.
This paper contains several samples identified by IGSN. One of them is IGSN: 10.58052/SSH000SUA. Information about this sample can be obtained by resolving the IGSN by adding the URL of the resolver before the IGSN: https://doi.org/10.58052/SSH000SUA.
IGSN can be used to identify samples and sampling features, such as boreholes or outcrops. The IGSN 10.60510/ICDP5054ESYI201 identifies a core section from core 5054_1_A_658_Z (IGSN 10.60510/ICDP5054ECYD101) of the COSC Expedition of the International Continental Scientific Drilling Program. The corresponding drill hole (sampling feature) 5054_1_A is identified by IGSN 10.60510/ICDP5054EEW1001.
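As a minimal illustration of the resolution step described above (not part of the original article), the sketch below simply prepends the DOI proxy to an IGSN and follows the redirect to the allocating agent's landing page. The use of the Python requests library is an assumption for this example, not part of the IGSN specification.

```python
import requests  # third-party HTTP library (assumed available)

def resolve_igsn(igsn: str) -> str:
    """Resolve an IGSN by prepending the DOI proxy URL, as described above,
    and return the landing page it redirects to."""
    resolver_url = "https://doi.org/" + igsn
    # The Handle/DOI resolver answers with redirects that end at the
    # allocating agent's landing page for the sample.
    response = requests.get(resolver_url, allow_redirects=True, timeout=30)
    response.raise_for_status()
    return response.url

# Example sample from the publication cited above.
print(resolve_igsn("10.58052/SSH000SUA"))
```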
Sample Registration
Samples are registered through Allocating Agents. At present (November 2021), the following IGSN Allocating Agents register IGSNs:
System for Earth Sample Registration (SESAR)
Geoscience Australia
Commonwealth Scientific and Industrial Research Organisation Mineral Resources
Australian Research Data Commons (ARDC)
University of Bremen MARUM
German Research Centre for Geosciences (GFZ)
IFREMER Institut Français de Recherche pour l'Exploitation de la Mer
Korea Institute of Geoscience & Mineral Resources (KIGAM)
University of Kiel
To obtain an IGSN, users need to register a sample by submitting information about it to an IGSN Allocating Agent. Once logged in, users can:
register individual samples or batches
register sampling features
track relationships between samples and subsamples (e.g., bulk samples and mineral separates)
update information on registered samples
download QR code images for labelling purposes
See also
Digital Object Identifier
Handle System
DataCite
LSID
Observations and Measurements
References
Homepage of the International Geo Sample Number Implementation Organisation IGSN e.V.
IGSN Documentation, overview, statutes, syntax guidelines.
IGSN code repository for metadata schemas and software
SESAR
Geoinformatics for Geochemistry
Klump, J., Lehnert, K. A., Ulbricht, D., Devaraju, A., Elger, K., Fleischer, D., et al. (2021). Towards Globally Unique Identification of Physical Samples: Governance and Technical Implementation of the IGSN Global Sample Number. Data Science Journal, 20(33), 1–16. https://doi.org/10.5334/dsj-2021-033
Unique identifiers
Geochemistry | International Generic Sample Number | Chemistry | 1,819 |
45,444,629 | https://en.wikipedia.org/wiki/Apple%20car%20project | From 2014 until 2024, Apple undertook a research and development effort to develop an electric and self-driving car, codenamed "Project Titan". Apple never openly discussed any of its automotive research, but around 5,000 employees were reported to be working on the project. In May 2018, Apple reportedly partnered with Volkswagen to produce an autonomous employee shuttle van based on the T6 Transporter commercial vehicle platform. In August 2018, the BBC reported that Apple had 66 road-registered driverless cars, with 111 drivers registered to operate those cars. In 2020, it was believed that Apple was still working on self-driving-related hardware, software and services as a potential product, instead of actual Apple-branded cars. In December 2020, Reuters reported that Apple was planning on a possible launch date of 2024, but analyst Ming-Chi Kuo claimed it would not be launched before 2025 and might not be launched until 2028 or later.
In February 2024, Apple executives canceled their plans to release the autonomous electric vehicle, instead shifting the project's resources to the company's generative artificial intelligence efforts. The project had reportedly cost the company over $1 billion per year, with collaboration from other parts of Apple adding hundreds of millions of dollars in additional spending. Over 600 employees were laid off as a result of the cancellation of the project.
Car details
The car project cycled through multiple designs over the years. Teams at Apple outside the project itself were also involved in its development; engineers from the Apple silicon team were heavily involved in designing the processor used for the car's autonomous driving. At the time of cancellation, the chip was nearly finished and had the equivalent processing power of four M2 Ultra chips combined. The microkernel for the car was named "safetyOS".
Proposed collaborations and acquisitions
During the 2008–2010 automotive industry crisis, with car companies nearing collapse, Apple SVP Tony Fadell proposed to Jobs the idea of buying General Motors at a reduced price. The idea was abandoned partly because the company felt that it would be a bad look, and partly because of its focus on the iPhone.
In 2014, with renewed interest in the project, Apple's head of corporate development Adrian Perica met with Elon Musk several times with an interest in acquiring Tesla. Tim Cook, Apple's CEO, shut down these early negotiations, partly due to Apple's CFO (and former GM Europe CFO) Luca Maestri saying how difficult the car business was. Despite the failure, years later, then-hardware chief Dan Riccio and former Ford engineer and iPhone engineer Steve Zadesky returned to Musk to discuss ideas for a collaboration. A few more years later, as Tesla struggled to make its Model 3 sedan, Musk attempted to restart talks with Apple, but said Cook wouldn't meet.
Attempts to partner with Mercedes-Benz advanced somewhat further than those with Tesla. The plan was for Mercedes-Benz to manufacture the car and Apple to also provide Mercedes-Benz its self-driving platform and UI for other cars. Apple pulled out partly because it had confidence that it could successfully manufacture a car themselves, and partly over disagreements over controlling the user's experience and data. The talks lasted for more than a year.
The closest Apple came to acquiring a car company was in its talks with McLaren. Some executives hoped that Jony Ive would be closer to Apple with that acquisition, following his reduced involvement in the company. BMW and Canoo, among others, were also in exploratory talks for an acquisition. Apple also met with Nissan and BYD Auto. Apple was concerned that integrating an automaker would be a disaster internally. Apple briefly partnered with Magna Steyr, a maker of low-volume vehicles, for the project.
In 2018, Apple signed a deal with Volkswagen to make an autonomous shuttle for Apple employees at their new headquarters, Apple Park. Volkswagen's T6 transport vans were to be modified, keeping the chassis and wheels, but with replaced dashboards, seats, and other components. The deal, an interim effort, was shut down by Doug Field, the head of the project, who saw it as a distraction.
The Korea Economic Daily reported in 2021 that Hyundai was in early discussions with Apple to develop and produce self-driving electric cars jointly. Some weeks later, in late January, Apple announced some upper-level engineering changes, leading some Apple-watchers to speculate whether Dan Riccio's "new chapter at Apple" might indicate leadership of the Titan project (or something altogether unrelated, such as an augmented/virtual reality headset or deluxe noise-cancelling headphones). By early February, it appeared that Apple was close to a $3.59B deal with Hyundai to use its Kia Motors' West Point, Georgia manufacturing plant for the car, a fully autonomous machine without a driver's seat. However, in February 2021, Hyundai and Kia confirmed that they were not in talks with Apple to develop a car. Adding further credence to Apple's automotive aspirations, Business Insider Deutschland (Germany) reported that Apple had hired Porsche VP of Chassis Development, Dr. Manfred Harrer. After the Financial Times reported that Apple was talking to several Japanese car companies about the project following the Hyundai-Kia rumor, Nissan told Reuters that it was not in any such discussions. Subsequent speculation was that Apple was shopping around for lidar navigation sensor suppliers for the project.
History
2014–2015
The project was rumored to have been approved by Apple CEO Tim Cook in late 2014. For the project, Apple was rumored to have hired Johann Jungwirth, the former president and chief executive of Mercedes-Benz Research and Development North America, as well as at least one transmission engineer.
In February 2015, Apple board member Mickey Drexler stated that Apple co-founder and CEO Steve Jobs had plans to design and build a car and that discussions about the concept surfaced around the time that Tesla Motors debuted its first car in 2008. In May 2015, Apple investor Carl Icahn stated that he believed rumors that Apple would enter the automobile market in 2020, and that logically Apple would view this car as "the ultimate mobile device".
In August 2015, The Guardian reported that Apple was meeting with officials from GoMentum Station, a testing ground for connected and autonomous vehicles at the former Concord Naval Weapons Station in Concord, California. In September 2015, there were reports that Apple was meeting with self-driving car experts from the California Department of Motor Vehicles. According to The Wall Street Journal in September 2015, the vehicle would be a battery electric vehicle, initially lacking full autonomous driving capability, with a possible unveiling around 2019.
In October 2015, Tim Cook stated about the car industry: "It would seem like there will be massive change in that industry, massive change. You may not agree with that. That's what I think... We'll see what we do in the future. I do think that the industry is at an inflection point for massive change." Cook enumerated ways that the modern descendants of the Ford Model T would be shaken to the very chassis—the growing importance of software in the car of the future, the rise of autonomous vehicles, and the shift from an internal combustion engine to electrification.
In November 2015, various websites reported that suspected Apple front SixtyEight Research had attended an auto body conference in Europe. Also in November 2015, after unknown EV startup Faraday Future announced a $1 billion U.S. factory project, some speculated that it might be a front for Apple's secret car project. In late 2015, Apple contracted Torc Robotics to retrofit two Lexus SUVs with sensors in a project known internally as Baja.
2016
In 2016, Tesla Motors CEO Elon Musk stated that Apple would probably make a compelling electric car: "It's pretty hard to hide something if you hire over a thousand engineers to do it." In May 2016, reports indicated that Apple was interested in electric car charging stations.
The Wall Street Journal reported on July 25, 2016, that Apple had convinced retired senior hardware engineering executive Bob Mansfield to return and take over the Titan project. A few days later, on July 29, Bloomberg Technology reported that Apple had hired Dan Dodge, the founder and former chief executive officer of QNX, BlackBerry Ltd.’s automotive software division. According to Bloomberg, Dodge's hiring heralded a shift in emphasis at Apple's Project Titan, in which the company will prioritize creating software for autonomous vehicles. However, the story said that Apple would continue to develop a vehicle of its own. On September 9, The New York Times reported dozens of layoffs in an effort to reboot, presumably from a team still numbering around 1,000.
The following week, reports surfaced that Magna International, a contract vehicle manufacturer, had a small team working at Apple's Sunnyvale lab.
2017
After no new reports, car project news flared up again in mid-April 2017, as word spread that Apple was permitted to test autonomous vehicles on California roads. In mid-June, Tim Cook in an interview with Bloomberg TV said Apple was "focusing on autonomous systems" but not necessarily leading to an actual Apple car product, leaving speculation about Apple's role in the convergence of three disruptive "vectors of change": autonomous systems, electric vehicles and ride-sharing services.
In mid-August, various sources reported that the car project was focusing on autonomous systems, now expected to test its technology in the real world using a company-operated inter-campus shuttle service between the main Infinite Loop campus in Cupertino and various Silicon Valley offices, including the new Apple Park. Then at the end of August, around 17 former Titan team members, braking and suspension engineers with Detroit experience, were hired by autonomous vehicle startup Zoox.
Reports from October 2016 claimed that the Titan project had a 2017 deadline to determine its fate: prove its practicality and viability and set a final direction.
In November 2017, Apple employees Yin Zhou and Oncel Tuzel published a paper on VoxelNet, which uses a convolutional neural network to detect three-dimensional objects using lidar.
Transportation/tech website Jalopnik reported in late November that Apple was recruiting automotive test engineering and tech talent for autonomous systems work and appeared to be secretly leasing, via third parties, a former Fiat Chrysler proving grounds site in Surprise, Arizona (originally Wittman). Also in 2017, The New York Times suggested that Apple had stopped developing its self-driving car. In response to such reports, Apple CEO Tim Cook acknowledged publicly that year that the company was working on autonomous-car technology.
2018
In January 2018, the company registered 27 self-driving vehicles with California's Department of Motor Vehicles.
While Apple attempted to keep its autonomous vehicles plans secret, regulatory filings did provide evidence of their project and related activities. In September 2018, Apple held the third-highest number of California autonomous vehicle permits with 70, behind GM's Cruise (175) and Alphabet's Waymo (88).
On July 7, 2018, a former Apple employee was arrested by the FBI for allegedly stealing trade secrets about Apple's self-driving car project. He was charged by federal prosecutors. The criminal complaint against the former employee revealed that, at that time, Apple had still not openly discussed any of its self-driving research, and disclosed that around 5,000 employees were working on the project.
In August 2018, Doug Field, formerly senior vice president of engineering at Tesla, became the leader of Apple's Titan team.
On August 24, 2018, it was reported that one of Apple's self-driving cars had apparently been involved in a crash when it was rear-ended during road testing. The crash occurred while the car was at a stop, waiting to merge into traffic about 3.5 miles from Apple's headquarters in Cupertino, with no reported injuries. At the time, the BBC reported that Apple had 66 road-registered driverless cars, with 111 drivers registered to operate those cars.
In August 2018, there were reports about an Apple patent of a system that warns riders ahead of time about what an autonomous car would do, purportedly to alleviate the discomfort of surprise.
2019
In January 2019, Apple laid off more than 200 employees from their 'Project Titan' autonomous vehicle team.
In June 2019, Apple acquired autonomous vehicle startup Drive.ai.
2020
In early December, Bloomberg reported that Apple artificial intelligence lead John Giannandrea was overseeing Apple Car development, as prior lead Bob Mansfield had retired. A few weeks later, Reuters reported that Apple was working towards a possible launch date of 2024 according to two unnamed insiders.
2021
An industry source told The Korea Times that Apple was working in Korea to build its supply chain. Later in 2021, Apple was reportedly in talks with Toyota as well as Korean partners for production to commence in 2024.
After Doug Field departed the project and joined Ford, Kevin Lynch, the wearables chief at Apple, was appointed to lead the project.
2022–2024
Bloomberg reported that Apple had given up on building a fully self-driving car and was instead looking to bring to market a car capable of self-driving only on highways. Its price would be below $100,000. TrendForce reported that microLED would be used in the car. Apple had 66 road-registered driverless cars, with 152 drivers registered to operate those cars.
In January 2024, Bloomberg reports suggested that Apple further delayed the car's release date to 2028, significantly scaling down its plans for self-driving and instead focusing on basic driver-assistance features similar to existing electric vehicles.
On February 27, 2024, Apple executives made an internal announcement that the entire car project was being cancelled, with most resources moving to work on Apple's generative AI projects.
In April 2024, Apple laid off more than 600 employees in Santa Clara, California. Most of the offices impacted by the layoffs were previously linked to project Titan and one, 3250 Scott Blvd, code named Aria, was developing the microLED screens.
Purported employees and affiliates
Jamie Carlson, a former engineer on Tesla's Autopilot self-driving car program. After he left Tesla for Apple, he left Apple to work with Chinese automaker Nio on their NIO Pilot autonomous driving platform. Most recently he has returned to Apple special projects.
Megan McClain, a former Volkswagen AG engineer with expertise in automated driving.
Vinay Palakkode, a graduate researcher at Carnegie Mellon University, a hub of automated driving research.
Xianqiao Tong, an engineer who developed computer vision software for driver assistance systems at microchip maker Nvidia.
Paul Furgale, former deputy director of the Autonomous Systems Lab at the Swiss Federal Institute of Technology in Zurich.
Sanjai Massey, an engineer with experience in developing connected and automated vehicles at Ford and several suppliers.
Stefan Weber, a former Bosch engineer with experience in video-based driver assistance systems.
Lech Szumilas, a former Delphi research scientist with expertise in computer vision and object detection.
Anup Vader, formerly Caterpillar autonomous systems thermal engineer, who left Apple in April 2019 to join Zoox autonomous vehicle startup.
Doug Betts, former global quality leader at Fiat Chrysler.
Johann Jungwirth, former CEO of Mercedes-Benz Research & Development, North America, Inc. – left for VW in Nov. 2015.
Mujeeb Ijaz, a former Ford Motor Co. engineer who founded A123 Systems' Venture Technologies division, which focused on materials research, electrical battery cell product development and advanced concepts; he helped recruit four to five staff researchers from A123, a battery technology company.
Nancy Sun, formerly vice president of electrical engineering at electric motorcycle company Mission Motors in San Francisco.
Mark Sherwood, formerly director of powertrain systems engineering at Mission Motors.
Eyal Cohen, formerly vice president of software and electrical engineering at Mission Motors.
Jonathan Cohen, former director of Nvidia's deep learning software. Nvidia uses deep learning in its Nvidia Drive PX platform, which is used in driver assistance systems.
Chris Porritt – former Tesla vice president of vehicle engineering and former Aston Martin chief engineer.
Luigi Taraborrelli, a former Lamborghini executive for 20 years.
Alex Hitzinger is a German engineer who until March 31, 2016, was the technical director of Porsche's LMP1 project. He previously worked as Head of Advanced Technologies for the Red Bull and Toro Rosso Formula One teams. In January 2019 he left to head the technical VW commercial vehicles department.
Benjamin Lyon, sensor expert, manager and founding team member, who reported directly to Doug Field, left Apple for a chief engineer position at "satellite and space startup" Astra in Feb 2021.
Weibao Wang, software engineer indicted for theft and attempted theft of trade secrets.
Doug Field, VP of special projects at Apple and de facto head of the car project; he left to join Ford.
See also
Xiaomi SU7
References
Further reading
Apple Inc. hardware
Electric vehicles
Proposed vehicles
Unreleased products
Self-driving cars | Apple car project | Engineering | 3,495 |
35,932,439 | https://en.wikipedia.org/wiki/Advanced%20Technician%20in%20Aviation%20non%20civil%20servant | In France, the training of the Technicien supérieur de l'aviation (civilian) (TSA civilian, in English Advanced Technician in Aviation non civil servant) is provided by the École nationale de l'aviation civile (French civil aviation university).
History
The TSA civilian training was created in addition to that of the Technicien supérieur des études et de l'exploitation de l'aviation civile, which has been available since 1962.
Application
The competitive examination is organized each year for students holding a Baccalauréat. Five seats are available. After the application process, students are trained for two years at the École nationale de l'aviation civile (French civil aviation university) in Toulouse.
Job
TSA civilians can hold various jobs with airports, airlines or manufacturers, including:
Operational activities,
Operation of the airport area (use of airport runways, aircraft operating conditions),
Work in an operational department (weight and balance, flight planning, route calculation, fuel calculation) or in ground handling (passengers, cargo, etc.).
Training
The initial training of the Technicien supérieur des études et de l'exploitation de l'aviation civile was reformed in 2011. It is now called TSA, for Technicien supérieur de l'aviation, and has included two curricula since 2011:
TSA civilian training: students are admitted after a competitive examination or through Validation des Acquis de l'Expérience for a two-year training programme.
TSEEAC: after being admitted through the same competitive examination as TSA civilians, students attend the same two-year course, followed by one year of complementary training (in a dual education system) at the Directorate General for Civil Aviation.
Bibliography
Ariane Gilotte, Jean-Philippe Husson and Cyril Lazerge, 50 ans d'Énac au service de l'aviation, Édition S.E.E.P.P, 1999
See also
Technicien supérieur de l'aviation
Technicien supérieur des études et de l'exploitation de l'aviation civile
References
External links
TSA on ENAC website
Occupations in aviation
Technicians
École nationale de l'aviation civile
Aviation licenses and certifications | Advanced Technician in Aviation non civil servant | Engineering | 449 |
59,355,695 | https://en.wikipedia.org/wiki/Manx%20comet | A Manx comet is a class of rocky minor celestial bodies that have a long-period comet orbit. Unlike most bodies on a long-period comet orbit which typically sport long, bright tails, a Manx comet is tailless, more typical of an inner Solar System asteroid. The nickname comes from the Manx breed of tailless cat. Examples include C/2013 P2 (PANSTARRS), discovered on 4 August 2013, which has an orbital period greater than 51 million years, and C/2014 S3 (PANSTARRS), discovered on 22 September 2014, which is thought to originate from the Oort cloud and could help explain the formation of the Solar System.
References
External links
JPL Small-Body Database Browser: C/2013 P2 (PANSTARRS)
JPL Small-Body Database Browser: C/2014 S3 (PANSTARRS)
Comets
Oort cloud | Manx comet | Astronomy | 183 |
1,519,594 | https://en.wikipedia.org/wiki/Lebesgue%20point | In mathematics, given a locally Lebesgue integrable function $f$ on $\mathbb{R}^k$, a point $x$ in the domain of $f$ is a Lebesgue point if
$$\lim_{r \to 0^{+}} \frac{1}{|B(x,r)|} \int_{B(x,r)} |f(y)-f(x)| \,\mathrm{d}y = 0.$$
Here, $B(x,r)$ is a ball centered at $x$ with radius $r > 0$, and $|B(x,r)|$ is its Lebesgue measure. The Lebesgue points of $f$ are thus points where $f$ does not oscillate too much, in an average sense.
The Lebesgue differentiation theorem states that, given any $f \in L^1(\mathbb{R}^k)$, almost every $x$ is a Lebesgue point of $f$.
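As a small worked example (an illustration added here, not part of the original article), a point of continuity is always a Lebesgue point, whereas a jump discontinuity is not:

```latex
% Take f = \chi_{[0,\infty)} on \mathbb{R} and x = 0, so f(0) = 1.
% On B(0,r) = (-r, r), |f(y)-f(0)| equals 1 on (-r, 0) and 0 on [0, r), hence
\frac{1}{|B(0,r)|} \int_{B(0,r)} |f(y)-f(0)| \,\mathrm{d}y
   = \frac{1}{2r}\cdot r = \frac{1}{2}
   \qquad \text{for every } r > 0,
% so the limit is 1/2 \neq 0 and x = 0 is not a Lebesgue point of f.
% By contrast, if f is continuous at x, then |f(y)-f(x)| < \varepsilon on B(x,r)
% for all small r, so the averages tend to 0 and x is a Lebesgue point.
```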
References
Mathematical analysis | Lebesgue point | Mathematics | 97 |
8,275,832 | https://en.wikipedia.org/wiki/Paracellular%20transport | Paracellular transport refers to the transfer of substances across an epithelium by passing through the intercellular space between the cells. It is in contrast to transcellular transport, where the substances travel through the cell, passing through both the apical membrane and basolateral membrane.
The distinction has particular significance in renal physiology and intestinal physiology. Transcellular transport often involves energy expenditure whereas paracellular transport is unmediated and passive down a concentration gradient, or by osmosis (for water) and solvent drag for solutes. Paracellular transport also has the benefit that absorption rate is matched to load because it has no transporters that can be saturated.
In most mammals, intestinal absorption of nutrients is thought to be dominated by transcellular transport, e.g., glucose is primarily absorbed via the SGLT1 transporter and other glucose transporters. Paracellular absorption therefore plays only a minor role in glucose absorption, although there is evidence that paracellular pathways become more available when nutrients are present in the intestinal lumen. In contrast, small flying vertebrates (small birds and bats) rely on the paracellular pathway for the majority of glucose absorption in the intestine. This has been hypothesized to compensate for an evolutionary pressure to reduce mass in flying animals, which resulted in a reduction in intestine size and faster transit time of food through the gut.
Capillaries of the blood–brain barrier have only transcellular transport, in contrast with normal capillaries which have both transcellular and paracellular transport.
The paracellular pathway of transport is also important for the absorption of drugs in the gastrointestinal tract. The paracellular pathway allows the permeation of hydrophilic molecules that are not able to permeate through the lipid membrane by the transcellular pathway of absorption. This is particularly important for hydrophilic pharmaceuticals, which may not have affinity for membrane-bound transporters, and therefore may be excluded from the transcellular pathway.
The vast majority of drug molecules are transported through the transcellular pathway, and the few which rely on the paracellular pathway of transportation typically have a much lower bioavailability; for instance, levothyroxine has an oral bioavailability of 40 to 80%, and desmopressin of 0.16%.
Structure of paracellular channels
Some claudins form tight junction-associated pores that allow paracellular ion transport.
The tight junctions have a net negative charge, and are believed to preferentially transport positively charged molecules. Tight junctions in the intestinal epithelium are also known to be size-selective, such that large molecules (with molecular radii greater than about 4.5 Å) are excluded. Larger molecules may also pass the intestinal epithelium via the paracellular pathway, although at a much slower rate; the mechanism of this transport via a "leak" pathway is unknown but may involve transient breaks in the epithelial barrier.
Paracellular transport can be enhanced through the displacement of zona occludens proteins from the junctional complex by the use of permeation enhancers. Such enhancers include medium chain fatty acids (e.g. capric acid), chitosans, zona occludens toxin, etc.
References
Cell biology | Paracellular transport | Biology | 676 |
71,908,220 | https://en.wikipedia.org/wiki/Planetary%20habitability%20in%20the%20Solar%20System | Planetary habitability in the Solar System is the study of the possible existence of past or present extraterrestrial life on the celestial bodies of the Solar System. Whereas exoplanets are too far away and can only be studied by indirect means, the celestial bodies of the Solar System allow much more detailed study: direct telescope observation, space probes, rovers and even human spaceflight.
Aside from Earth, no planets in the solar system are known to harbor life. Mars, Europa, and Titan are considered to have once had or currently have conditions permitting the existence of life. Multiple rovers have been sent to Mars, while Europa Clipper is planned to reach Europa in 2030, and the Dragonfly space probe is planned to launch in 2027.
Outer space
The vacuum of outer space is a harsh environment. Besides the vacuum itself, temperatures are extremely low and there is a high amount of radiation from the Sun. Multicellular life cannot endure such conditions. Bacteria cannot thrive in the vacuum either, but may be able to survive under special circumstances. An experiment by microbiologist Akihiko Yamagishi, conducted at the International Space Station, exposed a group of bacteria to the vacuum, completely unprotected, for three years. The bacterium Deinococcus radiodurans survived the exposure; in earlier lab-controlled experiments, it had also survived radiation, vacuum, and low temperatures. The outer cells of the group died, but their remains shielded the cells on the inside, which were able to survive.
Those studies give credence to the theory of panspermia, which proposes that life may be moved across planets within meteorites. Yamagishi even proposed the term massapanspermia for cells moving across the space in clumps instead of rocks. However, astrobiologist Natalie Grefenstette considers that unprotected cell clumps would have no protection during the ejection from one planet and the re-entry into another one.
Mercury
According to NASA, Mercury is not a suitable planet for Earth-like life. It has a surface boundary exosphere instead of a layered atmosphere, extreme temperatures that range from 800 °F (430 °C) during the day to -290 °F (-180 °C) during the night, and high solar radiation. It is unlikely that any living beings can withstand those conditions. It is likewise unlikely that remains of ancient life will ever be found: if any type of life ever appeared on the planet, it would have gone extinct in a very short time. It is also suspected that most of the planetary surface was stripped away by a large impact, which would have also removed any life on the planet.
The spacecraft MESSENGER found evidence of water ice on Mercury, within permanently shadowed craters not reached by sunlight. As a result of the thin atmosphere, temperatures within them stay cold and there is very little sublimation. There may be scientific support, based on studies reported in March 2020, for considering that parts of Mercury may have hosted sub-surface volatiles. The geology of Mercury is generally considered to have been shaped by impact craters and by quakes caused by a large impact at the Caloris basin. The studies suggest, however, that the timescales required by that explanation are not consistent, and that instead sub-surface volatiles may have been heated and sublimated, causing the surface to fall apart. Those volatiles may have condensed in craters elsewhere on the planet, or been lost to space through the solar wind.
Venus
The surface of Venus is completely inhospitable for life. As a result of a runaway greenhouse effect Venus has a temperature of 900 degrees Fahrenheit (475 degrees Celsius), hot enough to melt lead. It is the hottest planet in the Solar System, even more than Mercury, despite being farther away from the Sun. Likewise, the atmosphere of Venus is almost completely carbon dioxide, and the atmospheric pressure is 90 times that of Earth. There is no significant temperature change during the night, and the low axial tilt, only 3.39 degrees with respect to the Sun, makes temperatures quite uniform across the planet and without noticeable seasons.
Venus likely had liquid water on its surface for at least a few million years after its formation. Venus Express detected that Venus loses oxygen and hydrogen to space, with escaping hydrogen outnumbering oxygen two to one, the same ratio as in water. The source could be Venusian water, which ultraviolet radiation from the Sun splits into its constituent elements. There is also deuterium in the planet's atmosphere, a heavy isotope of hydrogen that is less capable of escaping the planet's gravity. However, the water may have existed only in the atmosphere and never formed oceans. Astrobiologist David Grinspoon considers that although there is no proof of Venus having oceans, it is likely that it had them, as a result of similar processes to those that took place on Earth. He considers that those oceans may have lasted for 600 million years, and were lost 4 billion years ago. The growing scarcity of liquid water altered the carbon cycle, reducing carbon sequestration. With most carbon dioxide staying in the atmosphere for good, the greenhouse effect worsened even more.
Nevertheless, between the altitudes of 50 and 65 kilometers, the pressure and temperature are Earth-like, and it may accommodate thermoacidophilic extremophile microorganisms in the acidic upper layers of the Venusian atmosphere. According to this theory, life would have started in Venusian oceans when the planet was cooler, adapt to other environments as it did on Earth, and remain at the last habitable zone of the planet. The putative detection of an absorption line of phosphine in Venus's atmosphere, with no known pathway for abiotic production, led to speculation in September 2020 that there could be extant life currently present in the atmosphere. Later research attributed the spectroscopic signal that was interpreted as phosphine to sulfur dioxide, or found that in fact there was no absorption line.
Earth
Earth is the only celestial body known for sure to have generated living beings, and thus the only current example of a habitable planet. At a distance of 1 AU from the Sun, it is within the circumstellar habitable zone of the Solar system, which means it can have oceans of water in a liquid state. There also exist a great number of elements required by lifeforms, such as carbon, oxygen, nitrogen, hydrogen, and phosphorus. The Sun provides energy for most ecosystems on Earth, processed by vegetals with photosynthesis, but there are also ecosystems in the deep areas of the oceans that never receive sunlight and thrive on geothermal heat instead.
The atmosphere of Earth also plays an important role. The ozone layer protects the planet from the harmful radiations from the Sun, and free oxygen is abundant enough for the breathing needs of terrestrial life. Earth's magnetosphere, generated by its active core, is also important for the long-term habitability of Earth, as it prevents the solar winds from stripping the atmosphere out of the planet. The atmosphere is thick enough to generate atmospheric pressure at sea level that keeps water in a liquid state, but it is not strong enough to be harmful either.
There are further elements that benefited the presence of life, but it is not completely clear if life could have thrived or not without them. The planet is not tidally locked and the atmosphere allows the distribution of heat, so temperatures are largely uniform and without great swift changes. The bodies of water cover most of the world but still leave large landmasses and interact with rocks at the bottom. A nearby celestial body, the Moon, subjects the Earth to substantial but not catastrophic tidal forces.
Following a suggestion of Carl Sagan, the Galileo probe studied Earth from the distance, to study it in a way similar to the one we use to study other planets. The presence of life on Earth could be confirmed by the levels of oxygen and methane in the atmosphere, and the red edge was evidence of plants. It even detected a technosignature, strong radio waves that could not be caused by natural reasons.
The Moon
Despite its proximity to Earth, the Moon is mostly inhospitable to life. No native lunar life has been found, including any signs of life in the samples of Moon rocks and soil. In 2019, the Israeli craft Beresheet, carrying tardigrades, crash-landed on the Moon. While their "chances of survival" were "extremely high", it was the force of the crash, and not the Moon's environment, that likely killed them.
The atmosphere of the Moon is almost non-existent, there is no liquid water (although there is solid ice at some permanently shadowed craters), and no protection from the radiation of the Sun.
However, circumstances could have been different in the past. There are two possible time periods of habitability: right after the Moon's origin, and during a period of high volcanic activity. In the first case, it is debated how many volatiles would have survived in the debris disk, but it is thought that some water could have been retained owing to its difficulty diffusing in a silicate-dominated vapor. In the second case, extreme outgassing from lunar magma could have given the Moon an atmosphere of about 10 millibars. Although that is just 1% of the atmosphere of Earth, it is higher than on Mars and may have been enough to allow liquid surface water, such as in the theorized lunar magma ocean. This theory is supported by studies of lunar rocks and soil, which were more hydrated than expected. Studies of lunar volcanism also reveal water within the Moon, and suggest that the lunar mantle has a water content similar to Earth's upper mantle.
This may be confirmed by studies on the crust of the Moon that would suggest an old exposition to magma water. The early Moon may have also had its own magnetic field, deflecting solar winds. Life on the Moon may have been the result of a local process of abiogenesis, but also from panspermia from Earth.
Dirk Schulze-Makuch, professor of planetary science and astrobiology at the University of London, considers that those theories may be properly tested if a future expedition to the Moon seeks markers of life in lunar samples from the age of volcanic activity, and by testing the survival of microorganisms in simulated environments that imitate the Moon of that specific age.
Mars
Mars is the celestial body in the solar system with the most similarities to Earth. A Mars sol lasts almost the same as an Earth day, and its axial tilt gives it similar seasons. There is water on Mars, most of it frozen at the Martian polar ice caps, and some of it underground. However, there are many obstacles to its habitability. The surface temperature averages about -60 degrees Celsius (-80 degrees Fahrenheit). There are no permanent bodies of liquid water on the surface. The atmosphere is thin, and more than 96% of it is toxic carbon dioxide. Its atmospheric pressure is below 1% of that of Earth. Combined with its lack of a magnetosphere, Mars is open to harmful radiation from the Sun. Although no astronauts have set foot on Mars, the planet has been studied in great detail by rovers. So far, no native lifeforms have been found. The origin of the potential biosignature of methane observed in the atmosphere of Mars is unexplained, although hypotheses not involving life have been proposed.
It is thought, however, that those conditions may have been different in the past. Mars could have had bodies of water, a thicker atmosphere and a working magnetosphere, and may have been habitable then. The rover Opportunity first discovered evidence of such a wet past, but later studies found that the terrain studied by the rover had been in contact with sulfuric acid, not water. The Gale crater, on the other hand, has clay minerals that could only have formed in water with a neutral pH. For this reason, NASA selected it as the landing site for the Curiosity rover.
The crater Jezero is suspected of being the location of an ancient lake. For this reason NASA sent the Perseverance rover to investigate. Although no actual life has been found, the rocks may still contain fossil traces of ancient life, if the lake had any. It is also suggested that microscopic life may have escaped the worsening conditions of the surface by moving underground. An experiment simulated those conditions to check the reactions of lichen and found that it survived by finding refuge in rock cracks and soil gaps.
Although many geological studies suggest that Mars was habitable in the past, that does not necessarily mean that it was inhabited. Finding fossils of microscopic life of such distant times is an incredibly difficult task, even for Earth's earliest known life forms. Such fossils require a material capable to preserve cellular structures and survive degradational rock-forming and environmental processes. The knowledge of taphonomy for those cases is limited to the sparse fossils found so far, and are based on Earth's environment, which greatly differs from the Martian one.
Asteroid belt
Ceres
Ceres, the only dwarf planet in the asteroid belt, has a thin water-vapor atmosphere. The vapor is likely the result of impacts of meteorites containing ice, but there is hardly an atmosphere besides said vapor. Nevertheless, the presence of water has led to speculation that life may be possible there. It is even conjectured that Ceres could be the source of life on Earth via panspermia, as its small size would allow fragments of it to escape its gravity more easily. Although the dwarf planet might not have living things today, there could be signs it harbored life in the past.
The water in Ceres, however, is not liquid water on the surface. It comes frozen in meteorites and sublimates to vapor. The dwarf planet is out of the habitable zone, is too small to have sustained tectonic activity, and does not orbit a tidally disruptive body like the moons of the gas giants. However, studies by the Dawn space probe confirmed that Ceres has liquid salt-enriched water underground.
Jupiter
Carl Sagan and others in the 1960s and 1970s computed conditions for hypothetical microorganisms living in the atmosphere of Jupiter. The intense radiation and other conditions, however, do not appear to permit encapsulation and molecular biochemistry, so life there is thought unlikely. In addition, as a gas giant Jupiter has no surface, so any potential microorganisms would have to be airborne. Although there are some layers of the atmosphere that may be habitable, Jovian climate is in constant turbulence and those microorganisms would eventually be sucked into the deeper parts of Jupiter. In those areas atmospheric pressure is 1,000 times that of Earth, and temperatures can reach 10,000 degrees. However, it was discovered that the Great Red Spot contains water clouds. Astrophysicist Máté Ádámkovics said that "where there’s the potential for liquid water, the possibility of life cannot be completely ruled out. So, though it appears very unlikely, life on Jupiter is not beyond the range of our imaginations".
Callisto
Callisto has a thin atmosphere and a subsurface ocean, and may be a candidate for hosting life. It is more distant from the planet than the other Galilean moons, so the tidal forces are weaker, but it also receives less harmful radiation.
Europa
Europa may have a liquid ocean beneath its icy surface, which may be a habitable environment. This potential ocean was first noticed by the two Voyager spacecraft, and later backed by telescope studies from Earth. Current estimations consider that this ocean may contain twice the amount of water of all Earth's oceans combined, despite Europa's smaller size. The ice crust would be between 15 and 25 miles thick and may represent an obstacle to study this ocean, though it may be probed via possible eruption columns that reach outer space.
Life would need liquid water, a number of chemical elements, and a source of energy. Although Europa may have the first two elements, it is not confirmed if it has the three of them. A potential source of energy would be a hydrothermal vent, which has not been detected yet. Solar light is not considered a viable energy source, as it is too weak in the Jupiter system and would also have to cross the thick ice surface. Other proposed energy sources, although still speculative, are the Magnetosphere of Jupiter and kinetic energy.
Unlike the oceans of Earth, the oceans of Europa would be under a permanent thick ice layer, which may make water aeration difficult. Richard Greenberg of the University of Arizona considers that the ice layer would not be a homogeneous block, but the ice would be rather in a cycle renewing itself at the top and burying the surface ice deeper, which would eventually drop the surface ice into the lower side in contact with the ocean. This process would allow some air from the surface to eventually reach the ocean below. Greenberg considers that the first surface oxygen to reach the oceans would have done so after a couple of billion years, allowing life to emerge and develop defenses against oxidation. He also considers that, once the process started, the amount of oxygen would even allow the development of multicellular beings, and perhaps even sustain a population comparable to all the fishes of Earth.
On 11 December 2013, NASA reported the detection of "clay-like minerals" (specifically, phyllosilicates), often associated with organic materials, on the icy crust of Europa. The presence of the minerals may have been the result of a collision with an asteroid or comet, according to the scientists. The Europa Clipper, which would assess the habitability of Europa, launched in 2024 and is set to reach the moon in 2030. Europa's subsurface ocean is considered the best target for the discovery of life.
Ganymede
Ganymede, the largest moon in the Solar System, is the only moon known to have a magnetic field of its own. Its surface appears similar to those of Mercury and the Moon, and is likely as hostile to life as they are. It is suspected to have an ocean below the surface, where primitive life may be possible. This suspicion arises from the unusually high level of water vapor in Ganymede's thin atmosphere. The moon likely has several alternating layers of ice and liquid water, with a final liquid layer in contact with the mantle. The core, the likely cause of Ganymede's magnetic field, would have a temperature near 1600 K. This deep liquid layer is considered potentially habitable. The moon is set to be investigated by the European Space Agency's Jupiter Icy Moons Explorer, which was launched in 2023 and will reach the Jovian system in 2031.
Io
Of all the Galilean moons, Io is the closest to the planet. It is the most volcanically active body in the Solar System, a result of tidal forces from the planet and its elliptical orbit around it. Even so, the surface is cold: about −143 °C. The atmosphere is roughly 200 times thinner than Earth's atmosphere, the proximity of Jupiter exposes it to intense radiation, and it is completely devoid of water. However, it may have had water in the past, and perhaps lifeforms underground.
Saturn
Similarly to Jupiter, Saturn is not likely to host life. It is a gas giant, and the temperatures, pressures, and materials found in it are too hostile for life. The planet is mostly hydrogen and helium, with trace amounts of water ice. Temperatures near the surface are about −150 °C. The planet gets warmer toward the interior, but at the depth where water may be liquid the atmospheric pressure is too high.
Enceladus
Enceladus, the sixth-largest moon of Saturn, has some of the conditions for life, including geothermal activity and water vapor, as well as possible under-ice oceans heated by tidal effects. The Cassini–Huygens probe detected carbon, hydrogen, nitrogen and oxygen—all key elements for supporting life—during its 2005 flyby through one of Enceladus's geysers spewing ice and gas. The temperature and density of the plumes indicate a warmer, watery source beneath the surface. Of the bodies on which life is possible, Enceladus is the one from which living organisms could most easily spread to other bodies of the Solar System.
Mimas
Mimas, the seventh-largest moon of Saturn, is similar in size and orbital location to Enceladus. In 2024, based on orbital data from the Cassini–Huygens mission, Mimas was calculated to contain a large, tidally heated subsurface ocean starting roughly 20–30 km below its old, heavily cratered, and well-preserved surface, hinting at the potential for life.
Titan
Titan, the largest moon of Saturn, is the only known moon in the Solar System with a significant atmosphere. Data from the Cassini–Huygens mission refuted the hypothesis of a global hydrocarbon ocean, but later demonstrated the existence of liquid hydrocarbon lakes in the polar regions—the first stable bodies of surface liquid discovered outside Earth. Further data from Cassini have strengthened evidence that Titan likely harbors a layer of liquid water under its ice shell. Analysis of data from the mission has uncovered aspects of atmospheric chemistry near the surface that are consistent with—but do not prove—the hypothesis that organisms there, if present, could be consuming hydrogen, acetylene and ethane, and producing methane. NASA's Dragonfly mission, a VTOL-capable rotorcraft with a launch date set for 2027, is slated to land on Titan in the mid-2030s.
Uranus
The planet Uranus, an ice giant, is unlikely to be habitable. The local temperatures and pressures may be too extreme, and the materials too volatile. The only spacecraft to visit, and thus observe, Uranus and its moons in detail is Voyager 2 in 1986.
Uranian moons
The five major moons of Uranus, however, may have been home to tidally heated subsurface oceans at some point in their histories, based on observations of Ariel's and Miranda's variegated geology, combined with computer models of the four largest moons, with Titania, the largest, deemed the most likely.
Neptune
The planet Neptune, another ice giant explored by Voyager 2, is also unlikely to be habitable. The local temperatures and pressures may be too extreme, and the materials too volatile.
Triton
The moon Triton, however, has been shown to have cryovolcanism on its surface, as well as deposits of water ice and relatively young, smooth geology, raising the possibility of a subsurface ocean.
Pluto
The dwarf planet Pluto is too cold to sustain life on its surface. Its average surface temperature is −232 °C, and surface water exists only in a solid, rock-like state. The interior of Pluto may be warmer, however, and may contain a subsurface ocean, possibly maintained by geothermal activity. That, combined with Pluto's eccentric orbit, which sometimes brings it closer to the Sun, means there is a slight chance that the dwarf planet could contain life.
Kuiper belt
The dwarf planet Makemake is not habitable, due to its extremely low temperatures. The same goes for Haumea and Eris.
See also
Water on terrestrial planets of the Solar System
Bibliography
References
Extraterrestrial life
Planetary habitability
Solar System | Planetary habitability in the Solar System | Astronomy,Biology | 4,784 |
39,078,701 | https://en.wikipedia.org/wiki/Crack%20closure | Crack closure is a phenomenon in fatigue loading, where the opposing faces of a crack remain in contact even with an external load acting on the material. As the load is increased, a critical value will be reached at which time the crack becomes open. Crack closure occurs from the presence of material propping open the crack faces and can arise from many sources including plastic deformation or phase transformation during crack propagation, corrosion of crack surfaces, presence of fluids in the crack, or roughness at cracked surfaces.
Description
During cyclic loading, a crack will open and close causing the crack tip opening displacement (CTOD) to vary cyclically in phase with the applied force. If the loading cycle includes a period of negative force or stress ratio (i.e. R < 0), the CTOD will remain equal to zero as the crack faces are pressed together. However, it was discovered that the CTOD can also be zero at other times, even when the applied force is positive, preventing the stress intensity factor from reaching its minimum. Thus, the amplitude of the stress intensity factor range, also known as the crack tip driving force, is reduced relative to the case in which no closure occurs, thereby reducing the crack growth rate. The closure level decreases with increasing stress ratio, and above approximately R = 0.7 the crack faces do not contact and closure does not typically occur.
The applied load will generate a stress intensity factor at the crack tip, producing a crack tip opening displacement (CTOD). Crack growth is generally a function of the stress intensity factor range for an applied loading cycle,
ΔK = K_max − K_min.
However, crack closure occurs when the fracture surfaces are in contact below the opening-level stress intensity factor K_op even though the load is positive, allowing an effective stress intensity range to be defined as
ΔK_eff = K_max − K_op,
which is less than the nominal applied ΔK.
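As a hedged illustration of how the closure correction enters a growth-rate estimate (not a method taken from this article), the following Python sketch computes the nominal and effective stress intensity ranges for one loading cycle and applies each in a Paris-type law; the opening level K_op and the Paris constants C and m are hypothetical placeholder values.

```python
# Minimal sketch: effective stress intensity range with crack closure,
# fed into a Paris-type growth law. All numerical values are illustrative
# placeholders, not material data from the article.
def effective_delta_k(k_max, k_min, k_op):
    """Return (nominal dK, effective dK) for one loading cycle."""
    dk_nominal = k_max - k_min
    # Below the opening level K_op the crack is closed, so only the part
    # of the cycle above K_op drives crack growth.
    dk_eff = k_max - max(k_min, k_op)
    return dk_nominal, dk_eff

def paris_growth_rate(dk, C=1e-11, m=3.0):
    """Paris-type law da/dN = C * dK**m (units depend on C)."""
    return C * dk**m

k_max, k_min, k_op = 20.0, 2.0, 8.0   # MPa*sqrt(m), illustrative
dk_nom, dk_eff = effective_delta_k(k_max, k_min, k_op)
print(dk_nom, dk_eff)                                  # 18.0, 12.0
print(paris_growth_rate(dk_nom), paris_growth_rate(dk_eff))
```

With these placeholder numbers the closure correction roughly halves the predicted growth rate, illustrating why ignoring closure overestimates fatigue crack growth.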
History
The phenomenon of crack closure was first discovered by Elber in 1970. He observed that a contact between the fracture surfaces could take place even during cyclic tensile loading. The crack closure effect helps explain a wide range of fatigue data, and is especially important in the understanding of the effect of stress ratio (less closure at higher stress ratio) and short cracks (less closure than long cracks for the same cyclic stress intensity).
Crack closure mechanisms
Plasticity-induced crack closure
The phenomenon of plasticity-induced crack closure is associated with the development of residual plastically deformed material on the flanks of an advancing fatigue crack.
The degree of plasticity at the crack tip is influenced by the level of material constraint. The two extreme cases are:
Under plane stress conditions, the piece of material in the plastic zone is elongated, which is mainly balanced by an out-of-the-plane flow of the material. Hence, the plasticity-induced crack closure under plane stress conditions can be expressed as a consequence of the stretched material behind the crack tip, which can be considered as a wedge that is inserted in the crack and reduces the cyclic plastic deformation at the crack tip and hence the fatigue crack growth rate.
Under plane strain conditions and constant load amplitudes, there is no plastic wedge at large distances behind the crack tip. However, the material in the plastic wake is plastically deformed. It is plastically sheared; this shearing induces a rotation of the original piece of material, and as a consequence, a local wedge is formed in the vicinity of the crack tip.
Phase-transformation-induced crack closure
Deformation-induced martensitic transformation in the stress field of the crack tip is another possible reason to cause crack closure. It was first studied by Pineau and Pelloux and Hornbogen in metastable austenitic stainless steels. These steels transform from the austenitic to the martensitic lattice structure under sufficiently high deformation, which leads to an increase of the material volume ahead of the crack tip. Therefore, compression stresses are likely to arise as the crack surfaces contact each other. This transformation-induced closure is strongly influenced by the size and geometry of the test specimen and of the fatigue crack.
Oxide-induced crack closure
Oxide-induced closure occurs where rapid corrosion occurs during crack propagation. It is caused when the base material at the fracture surface is exposed to gaseous and aqueous atmospheres and becomes oxidized. Although the oxidized layer is normally very thin, under continuous and repetitive deformation, the contaminated layer and the base material experience repetitive breaking, exposing even more of the base material, and thus produce even more oxides. The oxidized volume grows and is typically larger than the volume of the base material around the crack surfaces. As such, the volume of the oxides can be interpreted as a wedge inserted into the crack, reducing the effective stress intensity range. Experiments have shown that oxide-induced crack closure occurs at both room and elevated temperature, and the oxide build-up is more noticeable at low R-ratios and low (near-threshold) crack growth rates.
Roughness-induced crack closure
Roughness induced closure occurs with Mode II or in-plane shear type of loading, which is due to the misfit of the rough fracture surfaces of the crack’s upper and lower parts. Due to the anisotropy and heterogeneity in the micro structure, out-of-plane deformation occurs locally when Mode II loading is applied, and thus microscopic roughness of fatigue fracture surfaces is present. As a result, these mismatch wedges come into contact during the fatigue loading process, resulting in crack closure. The misfit in the fracture surfaces also takes place in the far field of the crack, which can be explained by the asymmetric displacement and rotation of material.
Roughness-induced crack closure becomes significant when the roughness of the fracture surface is of the same order as the crack opening displacement. It is influenced by such factors as grain size, loading history, material mechanical properties, load ratio and specimen type.
References
Fracture mechanics | Crack closure | Materials_science,Engineering | 1,183 |
49,369,271 | https://en.wikipedia.org/wiki/Gestigon | Gestigon (stylized as gestigon) is a software development company founded in September 2011, to develop software for gesture control and body tracking based on 3D depth data.
Gestigon went on to develop augmented reality and automotive solutions for Audi, Renault and Volkswagen, as well as for AR/VR headsets.
In March 2017, Gestigon was acquired by Valeo, a French automotive supplier.
History
The company was founded by Sascha Klement, Erhardt Barth, and Thomas Martinetz.
Sascha Klement worked as a student assistant and Ph.D. student for the professors Thomas Martinetz and Erhardt Barth, who have been developing software solutions based on time-of-flight sensors at the University of Lübeck since 2002. Together they founded Gestigon in 2011 with seed-funding from High-Tech Gründerfonds, Mittelständische Beteiligungsgesellschaft Schleswig-Holstein and local business angels.
In March 2012, Moritz von Grotthuss joined the company as advisor and later rose to CEO. In the same month, Gestigon was 1 of 15 companies that received an Innovation Award at CeBIT 2012.
In January 2013, Gestigon participated at CES in Las Vegas and, later that year, also at TechCrunch Disrupt in New York City. The next year Visteon and Volkswagen used Gestigon's gestures solutions in their products presented at CES 2014. It won the CeBIT Innovation Award again in 2014.
Gestigon's technologies were featured by Audi at CES 2015 and CES 2016, and by Volkswagen and Infineon. Gestigon launched its virtual reality solution Carnival at TechCrunch Disrupt in San Francisco in September 2015, using an Oculus Rift and different depth sensors. The first demo using a mobile device was shown at CES in 2015. In 2015, Gestigon partnered with Inuitive, a developer of 3D computer vision and image processors, to create a VR unit. The system was presented at CES 2016 assembled on an Oculus Rift development kit.
In July 2015, Gestigon closed its Series A financing round, with nbr technology ventures GmbH, headed by Fabian von Kuenheim, as the primary investor, alongside High-Tech Gründerfonds and Vorwerk Direct Selling Ventures. Fabian von Kuenheim became chairman of the advisory board, which also included the German entrepreneur Holger G. Weiss and the French investor Gunnar Graef. In March 2017, Gestigon was acquired by the French automotive supplier Valeo. In the same month, Gestigon developed software that recognizes driving gestures.
Products
Gestigon develops software that works with 3D sensors to recognize human gestures, poses and biometrical features in real time, such as:
Gecko, a feature tracker that tracks an individual and measures their biometric features,
Flamenco, a piece of software for finger and hand gesture control,
Carnival SDK, software for augmented reality and virtual reality, which allows users to see and use their hands in virtual interfaces.
Gestigon's solutions are based on skeleton recognition. Their software recognizes body parts in 3D data to make the recognition faster and more accurate.
References
Middleware
Virtual reality companies
Augmented reality
Software companies of Germany
Companies based in Schleswig-Holstein
Gesture recognition
Software companies established in 2011
Companies based in Sunnyvale, California
German companies established in 2011 | Gestigon | Technology,Engineering | 686 |
68,518 | https://en.wikipedia.org/wiki/Chemisorption | Chemisorption is a kind of adsorption which involves a chemical reaction between the surface and the adsorbate. New chemical bonds are generated at the adsorbent surface. Examples include macroscopic phenomena that can be very obvious, like corrosion, and subtler effects associated with heterogeneous catalysis, where the catalyst and reactants are in different phases. The strong interaction between the adsorbate and the substrate surface creates new types of electronic bonds.
In contrast with chemisorption is physisorption, which leaves the chemical species of the adsorbate and surface intact. It is conventionally accepted that the energetic threshold separating the binding energy of "physisorption" from that of "chemisorption" is about 0.5 eV per adsorbed species.
Due to specificity, the nature of chemisorption can greatly differ, depending on the chemical identity and the surface structural properties.
The bond between the adsorbate and adsorbent in chemisorption is either ionic or covalent.
Uses
An important example of chemisorption is in heterogeneous catalysis which involves molecules reacting with each other via the formation of chemisorbed intermediates. After the chemisorbed species combine (by forming bonds with each other) the product desorbs from the surface.
Self-assembled monolayers
Self-assembled monolayers (SAMs) are formed by chemisorbing reactive reagents with metal surfaces. A famous example involves thiols (RS-H) adsorbing onto the surface of gold. This process forms strong Au-SR bonds and releases H2. The densely packed SR groups protect the surface.
Gas-surface chemisorption
Adsorption kinetics
As an instance of adsorption, chemisorption follows the adsorption process. The first stage is for the adsorbate particle to come into contact with the surface. The particle needs to be trapped onto the surface by not possessing enough energy to leave the gas-surface potential well. If it elastically collides with the surface, then it would return to the bulk gas. If it loses enough momentum through an inelastic collision, then it "sticks" onto the surface, forming a precursor state bonded to the surface by weak forces, similar to physisorption. The particle diffuses on the surface until it finds a deep chemisorption potential well. Then it reacts with the surface or simply desorbs after enough energy and time.
The reaction with the surface is dependent on the chemical species involved. Applying the Gibbs energy equation for reactions:
ΔG = ΔH − TΔS
General thermodynamics states that for spontaneous reactions at constant temperature and pressure, the change in free energy should be negative. Since a free gas particle becomes restrained to a surface, its entropy is lowered (unless the adsorbed species is highly mobile on the surface). This means that the enthalpy term must be negative, implying an exothermic reaction.
Physisorption is typically modeled with a Lennard-Jones potential and chemisorption with a Morse potential. There exists a crossover point between the physisorption and chemisorption potential curves, where a particle can transfer from the physisorbed precursor state to the chemisorbed state. The crossover can occur above or below the zero-energy line (depending on the parameters of the Morse potential), representing the presence or absence of an activation energy requirement. Most simple gases on clean metal surfaces lack the activation energy requirement.
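The one-dimensional picture described above can be sketched numerically. The following Python example is not from the source: all well depths, positions, and ranges are arbitrary illustrative values. It evaluates a shallow Lennard-Jones physisorption curve and a deeper Morse chemisorption curve as functions of distance from the surface and locates their crossover, whose energy relative to the zero line indicates whether an activation barrier exists.

```python
# Illustrative sketch of the physisorption/chemisorption crossover.
# Parameter values are arbitrary and chosen only so the curves cross.
import numpy as np

def lennard_jones(z, eps=0.05, sigma=3.0):        # shallow physisorption well (eV, angstrom)
    return 4 * eps * ((sigma / z)**12 - (sigma / z)**6)

def morse(z, D=2.0, a=3.0, z0=1.8):                # deep chemisorption well (eV, 1/angstrom, angstrom)
    return D * (1.0 - np.exp(-a * (z - z0)))**2 - D

z = np.linspace(1.0, 6.0, 5000)                    # distance from the surface
diff = lennard_jones(z) - morse(z)
idx = np.where(np.sign(diff[:-1]) != np.sign(diff[1:]))[0]
for i in idx:
    print(f"crossover near z = {z[i]:.2f} A, energy = {lennard_jones(z[i]):.3f} eV")
# A crossover energy below zero corresponds to non-activated chemisorption;
# a crossover above zero represents an activation energy requirement.
```

With these illustrative parameters the crossover lies below the zero-energy line, the non-activated case that the text notes is typical for simple gases on clean metal surfaces.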
Modeling
For experimental setups of chemisorption, the amount of adsorption of a particular system is quantified by a sticking probability value.
However, chemisorption is very difficult to theorize. A multidimensional potential energy surface (PES) derived from effective medium theory is used to describe the effect of the surface on adsorption, but only certain parts of it are used depending on what is to be studied. A simple example of a PES takes the total energy as a function of the nuclear positions:
E({R_i}) = E_el({R_i}) + V_ion–ion({R_i}),
where E_el is the energy eigenvalue of the Schrödinger equation for the electronic degrees of freedom and V_ion–ion is the ion–ion interaction. This expression is without translational energy, rotational energy, vibrational excitations, and other such considerations.
There exist several models to describe surface reactions: the Langmuir–Hinshelwood mechanism in which both reacting species are adsorbed, and the Eley–Rideal mechanism in which one is adsorbed and the other reacts with it.
Real systems have many irregularities, making theoretical calculations more difficult:
Solid surfaces are not necessarily at equilibrium.
They may be perturbed and irregular, with defects and similar imperfections.
Distribution of adsorption energies and odd adsorption sites.
Bonds formed between the adsorbates.
Compared to physisorption where adsorbates are simply sitting on the surface, the adsorbates can change the surface, along with its structure. The structure can go through relaxation, where the first few layers change interplanar distances without changing the surface structure, or reconstruction where the surface structure is changed. A direct transition from physisorption to chemisorption has been observed by attaching a CO molecule to the tip of an atomic force microscope and measuring its interaction with a single iron atom.
For example, oxygen can form very strong bonds (~4 eV) with metals, such as Cu(110). This comes with the breaking apart of surface bonds in forming surface-adsorbate bonds. A large restructuring occurs through a missing-row reconstruction.
Dissociative chemisorption
A particular kind of gas-surface chemisorption is the dissociation of diatomic gas molecules, such as hydrogen, oxygen, and nitrogen. One model used to describe the process is precursor mediation. The incoming molecule is first adsorbed onto the surface into a precursor state. The molecule then diffuses across the surface to the chemisorption sites, where the molecular bond is broken in favor of new bonds to the surface. The energy to overcome the activation potential of dissociation usually comes from translational energy and vibrational energy.
An example is the hydrogen and copper system, one that has been studied many times over. It has a large activation energy of 0.35 – 0.85 eV. The vibrational excitation of the hydrogen molecule promotes dissociation on low index surfaces of copper.
See also
Adsorption
Physisorption
References
Bibliography
Physical chemistry
Catalysis | Chemisorption | Physics,Chemistry | 1,315 |
3,853,916 | https://en.wikipedia.org/wiki/Royal%20Institution%20of%20Naval%20Architects | The Royal Institution of Naval Architects (also known as RINA) is a professional institution and global governing body for naval architecture and maritime engineering. Members work in industry, academia, and maritime organisations worldwide, participating in the design, construction, repair, and operation of ships, boats, and marine structures in over 90 countries.
The Patron of the Institution was Queen Elizabeth II but is now King Charles III.
History
The Royal Institution of Naval Architects was founded in Britain in 1860 as The Institution of Naval Architects and was incorporated by Royal Charter in 1910 and 1960 to "advance the art and science of ship design."
Founding members included John Scott Russell, Edward Reed, Rev. Joseph Woolley, Nathaniel Barnaby, Frederick Kynaston Barnes, and John Penn.
On April 9, 1919, Blanche Thornycroft, Rachel Mary Parsons, and Eily Keary became the first women admitted into the institution.
Arms
Historical members
The following have been members of the society historically:
David Keith Brown (1928–2008)
Peter Du Cane CBE (1901–1984)
Sir John Isaac Thornycroft (1843–1928)
Bernard Waymouth (1824–1890)
Sir Eric Yarrow MBE (1920–2018)
See also
Royal Society of Arts
References
External links
The Royal Institution of Naval Architects
Organizations established in 1860
1860 establishments in the United Kingdom
ECUK Licensed Members
Marine engineering organizations | Royal Institution of Naval Architects | Engineering | 273 |
11,438,910 | https://en.wikipedia.org/wiki/Phloeospora%20multimaculans | Phloeospora multimaculans is a fungal plant pathogen infecting plane trees.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal tree pathogens and diseases
Mycosphaerellaceae
Fungus species | Phloeospora multimaculans | Biology | 48 |
25,351,840 | https://en.wikipedia.org/wiki/Fine%20structure%20genetics | Fine structure genetics encompasses a set of tools used to examine mutations not across an entire genome but isolated to specific pathways or regions of the genome. Ultimately, this more focused lens can lead to a more nuanced and interactive view of the function of a gene.
Regional Mutagenesis
Similar to forward genetics, regional mutagenesis seeks to saturate the target with insertions or point mutations, but instead of covering the entire genome, it saturates only a small portion of it. By limiting the region in focus, researchers are able to intensify the number of mutations within any genes or promoters in that region, often illuminating more complicated functions than could be identified with a broader focus. Furthermore, such mutations can show how the specific structure of that region of a chromosome affects expression levels and function.
Such mutations are introduced by the same means as in forward genetics, often through chemical induction or transposable element insertions. The creation of balancer chromosomes restricted to a small region of the genome can guarantee that mutations are isolated and reproduced only in that region.
Modifier Screens
When a gene is identified as affecting a specific phenotype, a modifier screen can be used to find genes that either enhance or suppress the phenotypic expression of the initial mutation. This is a powerful way of rapidly identifying many genes involved in the expression of a phenotype, but such screens can only say whether or not two genes interact, not what their exact functions are or how they relate. For instance, the product of the second gene may interact directly with that of the first gene, or it may act more distantly in the same pathway.
One of the major benefits of modifier screens is that they do not necessarily have to take place in the organism of interest. For instance, a gene that corresponds to an important phenotype in an organism in which mutagenesis screens are impractical (e.g. human beings) will often have a homologue in a model organism. In this case, the homologous gene can be knocked out, or the original gene can be ectopically expressed, in the model organism, at which point a screen for modifiers of the aberrant phenotype can take place.
Enhancer trapping
Enhancer trapping involves the insertion of a reporter gene, such as lac-Z or GFP, into the promoter region of a desired gene, so that whenever the gene is expressed, it can be monitored by the reporter, giving a specific spatial and temporal map of when and where the gene is expressed. This method again involves transposable element insertion, taking advantage of certain transposable elements that have a propensity to insert into promoter regions. This method is also advantageous because such insertions can be reversed.
A similar method can be used to study novel phenotypes created by tissue-specific gain of function or loss of function. In order to create a gain of function, the transposable element is inserted not just with a reporter gene but also with the GAL4 transcriptional activator. This line is then crossed with an organism carrying a gene fused to a GAL4-mediated promoter, so that any time the trapped promoter is turned on, it not only expresses its original gene but also drives expression of whichever gene the experimenter wishes to turn on. This is an easy way to ensure tissue- or time-specific expression of a gene where it is not usually expressed. Under a similar principle, the GAL4 transcriptional activator can be replaced with an RNAi construct for a specific gene, which can turn any promoter into a tissue-specific inhibitor of that gene.
Floxing
For a fuller explanation, see Cre-Lox Recombination
With a similar effect to the insertion of transposable elements carrying RNAi constructs, Cre-Lox recombination can be used to produce tissue-specific loss of function. It is particularly useful in dissecting the specific functions of genes that are essential in development, for which whole-organism knock-outs are lethal.
References
A Primer of Genome Science, Third Edition. Greg Gibson and Spencer V. Muse. 2009. Sinauer Press
Molecular genetics
Genomics | Fine structure genetics | Chemistry,Biology | 863 |
433,572 | https://en.wikipedia.org/wiki/Decision%20table | Decision tables are a concise visual representation for specifying which actions to perform depending on given conditions. Decision table is the term used for a Control table or State-transition table in the field of Business process modeling; they are usually formatted as the transpose of the way they are formatted in Software engineering.
Overview
Each decision corresponds to a variable, relation or predicate whose possible values are listed among the condition alternatives. Each action is a procedure or operation to perform, and the entries specify whether (or in what order) the action is to be performed for the set of condition alternatives the entry corresponds to.
To make them more concise, many decision tables include a don't-care symbol among their condition alternatives. This can be a hyphen or a blank, although using a blank is discouraged as it may merely indicate that the decision table has not been finished. One of the uses of decision tables is to reveal conditions under which certain input factors are irrelevant to the actions to be taken, allowing these input tests to be skipped and thereby streamlining decision-making procedures.
Aside from the basic four quadrant structure, decision tables vary widely in the way the condition alternatives and action entries are represented. Some decision tables use simple true/false values to represent the alternatives to a condition (similar to if-then-else), other tables may use numbered alternatives (similar to switch-case), and some tables even use fuzzy logic or probabilistic representations for condition alternatives. In a similar way, action entries can simply represent whether an action is to be performed (check the actions to perform), or in more advanced decision tables, the sequencing of actions to perform (number the actions to perform).
A decision table is considered balanced or complete if it includes every possible combination of input variables. In other words, balanced decision tables prescribe an action in every situation where the input variables are provided.
Example
The limited-entry decision table is the simplest to describe. The condition alternatives are simple Boolean values, and the action entries are check-marks, representing which of the actions in a given column are to be performed.
The following balanced decision table is an example in which a technical support company writes a decision table to enable technical support employees to efficiently diagnose printer problems based upon symptoms described to them over the phone from their clients.
This is just a simple example, and it does not necessarily correspond to the reality of printer troubleshooting. Even so, it demonstrates how decision tables can scale to several conditions with many possibilities.
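The table itself is not reproduced here. As a hedged sketch only, a balanced limited-entry table of this kind can be represented in Python as a mapping from condition tuples to action lists; the conditions, actions, and rules below are invented for illustration and are not the article's exact table.

```python
# Hedged sketch of a balanced, limited-entry decision table for printer
# troubleshooting. Every combination of the three conditions has a rule.
CONDITIONS = ("printer_does_not_print", "red_light_flashing", "printer_unrecognized")

# Each rule maps a tuple of True/False condition alternatives to suggested actions.
RULES = {
    (True,  True,  True ): ["Check the power cable", "Check the printer-computer cable",
                            "Ensure printer software is installed"],
    (True,  True,  False): ["Check/replace ink"],
    (True,  False, True ): ["Check the printer-computer cable",
                            "Ensure printer software is installed"],
    (True,  False, False): ["Check for paper jam"],
    (False, True,  True ): ["Ensure printer software is installed"],
    (False, True,  False): ["Check/replace ink"],
    (False, False, True ): ["Ensure printer software is installed"],
    (False, False, False): [],   # no action needed
}

def diagnose(**symptoms):
    key = tuple(bool(symptoms[c]) for c in CONDITIONS)
    return RULES[key]

print(diagnose(printer_does_not_print=True, red_light_flashing=False,
               printer_unrecognized=True))
```

Because every combination of condition alternatives appears in RULES, this sketch is "balanced" in the sense defined above: an action (possibly the empty list) is prescribed for every situation.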
Software engineering benefits
Decision tables, especially when coupled with the use of a domain-specific language, allow developers and policy experts to work from the same information, the decision tables themselves.
Tools to render nested if statements from traditional programming languages into decision tables can also be used as debugging tools.
Decision tables have proven to be easier to understand and review than code, and have been used extensively and successfully to produce specifications for complex systems.
History
In the 1960s and 1970s a range of "decision table based" languages such as Filetab were popular for business programming.
Program embedded decision tables
Decision tables can be, and often are, embedded within computer programs and used to "drive" the logic of the program. A simple example might be a lookup table containing a range of possible input values and a function pointer to the section of code to process that input.
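As a hedged sketch of this idea (the value ranges and handler names are invented for illustration, and Python functions stand in for the function pointers mentioned above), a program-embedded decision table can be a list of rows pairing an input range with the routine that handles it:

```python
# Illustrative program-embedded decision table: input ranges mapped to handler
# functions, so the table rather than nested if-statements drives the logic.
def handle_small(x):
    return f"small value {x}"

def handle_medium(x):
    return f"medium value {x}"

def handle_large(x):
    return f"large value {x}"

# Each row: (lower bound inclusive, upper bound exclusive, handler)
DISPATCH_TABLE = [
    (0,   10,     handle_small),
    (10,  100,    handle_medium),
    (100, 10**9,  handle_large),
]

def process(x):
    for low, high, handler in DISPATCH_TABLE:
        if low <= x < high:
            return handler(x)
    raise ValueError("no rule matches input")

print(process(42))   # medium value 42
```

Adding or changing a rule then means editing the table data rather than restructuring control flow, which is the main attraction of table-driven logic.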
Control tables
Multiple conditions can be coded for in similar manner to encapsulate the entire program logic in the form of an "executable" decision table or control table. There may be several such tables in practice, operating at different levels and often linked to each other (either by pointers or an index value).
Implementations
Filetab, originally from the NCC
DETAB/65, 1965, ACM
FORTAB from Rand in 1962, designed to be embedded in FORTRAN
A Ruby implementation exists using MapReduce to find the correct actions based on specific input values.
See also
Decision trees
Case based reasoning
Cause–effect graph
Dominance-based rough set approach
DRAKON
Karnaugh-Veitch diagram
Many-valued logic
Semantic decision table
Decision Model and Notation
Truth table
References
Further reading
Dwyer, B. and Hutchings, K. (1977) "Flowchart Optimisation in Cope, a Multi-Choice Decision Table" Aust. Comp. J. Vol. 9 No. 3 p. 92 (Sep. 1977).
Fisher, D.L. (1966) "Data, Documentation and Decision Tables" Comm ACM Vol. 9 No. 1 (Jan. 1966) p. 26–31.
General Electric Company (1962) GE-225 TABSOL reference manual and GF-224 TABSOL application manual CPB-l47B (June 1962).
Grindley, C.B.B. (1968) "The Use of Decision Tables within Systematics" Comp. J. Vol. 11 No. 2 p. 128 (Aug. 1968).
Jackson, M.A. (1975) Principles of Program Design Academic Press
Myers, H.J. (1972) "Compiling Optimised Code from Decision Tables" IBM J. Res. & Development (Sept. 1972) p. 489–503.
Pollack, S.L. (1962) "DETAB-X: An improved business-oriented computer language" Rand Corp. Memo RM-3273-PR (August 1962)
Schumacher, H. and Sevcik, K.C. (1976) "The Synthetic Approach to Decision Table Conversion" Comm. ACM Vol. 19 No. 6 (June 1976) p. 343–351
CSA, (1970): Z243.1–1970 for Decision Tables, Canadian Standards Association
Jorgensen, Paul C. (2009) Modeling Software Behavior: A Craftsman's Approach. Auerbach Publications, CRC Press. Chapter 5.
External links
RapidGen Software For Windows, Unix, Linux and OpenVMS versions of decision table based programming tools and compilers
LogicGem Software For Windows decision table processor for perfecting logic and business rules
LF-ET Software For Windows, Unix, Linux a decision table editor, program generator and test case generator
A Decision Table Example
Software testing
Decision analysis | Decision table | Engineering | 1,274 |
31,958,369 | https://en.wikipedia.org/wiki/MapR | MapR was a business software company headquartered in Santa Clara, California. MapR software provides access to a variety of data sources from a single computer cluster, including big data workloads such as Apache Hadoop and Apache Spark, a distributed file system, a multi-model database management system, and event stream processing, combining analytics in real-time with operational applications. Its technology runs on both commodity hardware and public cloud computing services. In August 2019, following financial difficulties, the technology and intellectual property of the company were sold to Hewlett Packard Enterprise.
Funding
MapR was privately held with original funding of $9 million from Lightspeed Venture Partners and New Enterprise Associates in 2009. MapR executives come from Google, Lightspeed Venture Partners, Informatica, EMC Corporation and Veoh. MapR had an additional round of funding led by Redpoint Ventures in August, 2011. A round in 2013 was led by Mayfield Fund that also included Greenspring Associates. In June 2014, MapR closed a $110 million financing round that was led by Google Capital. Qualcomm Ventures also participated, along with existing investors Lightspeed Venture Partners, Mayfield Fund, New Enterprise Associates and Redpoint Ventures.
In May 2019, the company announced that it would shut down if it was unable to find additional funding.
History
The company contributed to the Apache Hadoop projects HBase, Pig, Apache Hive, and Apache ZooKeeper.
MapR entered a technology licensing agreement with EMC Corporation in 2011, supporting an EMC-specific distribution of Apache Hadoop. MapR was selected by Amazon Web Services to provide an upgraded version of Amazon's Elastic MapReduce (EMR) service. MapR broke the minute sort speed record on Google's Compute platform.
See also
Apache Accumulo
Apache Software Foundation
Big data
Bigtable
Database-centric architecture
Hadoop
MapReduce
RainStor
References
Software companies based in the San Francisco Bay Area
Cloud infrastructure
Distributed file systems
Hadoop
Companies based in San Jose, California
Big data companies
Defunct software companies of the United States | MapR | Technology | 416 |
12,184,856 | https://en.wikipedia.org/wiki/Solid%20harmonics | In physics and mathematics, the solid harmonics are solutions of the Laplace equation in spherical polar coordinates, assumed to be (smooth) complex-valued functions of the position vector r. There are two kinds: the regular solid harmonics R_l^m(r), which are well-defined at the origin, and the irregular solid harmonics I_l^m(r), which are singular at the origin. Both sets of functions play an important role in potential theory, and are obtained by rescaling spherical harmonics appropriately:
R_l^m(r) = √(4π/(2l+1)) r^l Y_l^m(θ, φ)
I_l^m(r) = √(4π/(2l+1)) Y_l^m(θ, φ) / r^(l+1)
Derivation, relation to spherical harmonics
Introducing r, θ, and φ for the spherical polar coordinates of the 3-vector r, and assuming that Φ is a (smooth) complex-valued function, we can write the Laplace equation in the following form
∇²Φ(r) = (1/r) ∂²(rΦ)/∂r² − L²Φ/(ħ²r²) = 0,   r ≠ 0,
where L² is the square of the angular momentum operator,
L² = −ħ² [ (1/sin θ) ∂/∂θ (sin θ ∂/∂θ) + (1/sin²θ) ∂²/∂φ² ].
It is known that spherical harmonics Y_l^m are eigenfunctions of L²:
L² Y_l^m(θ, φ) = ħ² l(l+1) Y_l^m(θ, φ).
Substitution of Φ(r) = F(r) Y_l^m(θ, φ) into the Laplace equation gives, after dividing out the spherical harmonic function, the following radial equation and its general solution,
(1/r) d²(rF)/dr² − l(l+1) F/r² = 0,   with   F(r) = A r^l + B r^(−l−1).
The particular solutions of the total Laplace equation are regular solid harmonics:
R_l^m(r) = √(4π/(2l+1)) r^l Y_l^m(θ, φ),
and irregular solid harmonics:
I_l^m(r) = √(4π/(2l+1)) Y_l^m(θ, φ) / r^(l+1).
The regular solid harmonics correspond to harmonic homogeneous polynomials, i.e. homogeneous polynomials which are solutions to Laplace's equation.
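A minimal numerical sketch, assuming SciPy's sph_harm conventions (order m and degree l first, then the azimuthal angle, then the polar angle; newer SciPy releases expose an equivalent sph_harm_y), evaluates R_l^m and I_l^m at an arbitrary Cartesian point by rescaling the spherical harmonic exactly as in the definitions above. The point and indices are illustrative.

```python
# Evaluate regular and irregular solid harmonics at a Cartesian point by
# rescaling the spherical harmonic Y_l^m, following the definitions above.
import numpy as np
from scipy.special import sph_harm

def solid_harmonics(l, m, x, y, z):
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arccos(z / r)          # polar angle in [0, pi]
    phi = np.arctan2(y, x)            # azimuthal angle
    # SciPy's sph_harm takes (m, l, azimuth, polar)
    Y = sph_harm(m, l, phi, theta)
    scale = np.sqrt(4 * np.pi / (2 * l + 1))
    R = scale * r**l * Y              # regular solid harmonic, finite at the origin
    I = scale * Y / r**(l + 1)        # irregular solid harmonic, singular at the origin
    return R, I

# Example: R_2^1 and I_2^1 at the point (1.0, 0.5, 0.25)
R, I = solid_harmonics(2, 1, 1.0, 0.5, 0.25)
print(R, I)
```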
Racah's normalization
Racah's normalization (also known as Schmidt's semi-normalization) is applied to both functions
∫₀^π sin θ dθ ∫₀^{2π} dφ R_l^m(r)* R_l^m(r) = (4π/(2l+1)) r^(2l)
(and analogously for the irregular solid harmonic) instead of normalization to unity. This is convenient because in many applications the Racah normalization factor appears unchanged throughout the derivations.
Addition theorems
The translation of the regular solid harmonic gives a finite expansion,
where the Clebsch–Gordan coefficient is given by
The similar expansion for irregular solid harmonics gives an infinite series,
with . The quantity between pointed brackets is again a Clebsch-Gordan coefficient,
The addition theorems were proved in different manners by several authors.
Complex form
The regular solid harmonics are homogeneous, polynomial solutions to the Laplace equation . Separating the indeterminate and writing , the Laplace equation is easily seen to be equivalent to the recursion formula
so that any choice of polynomials of degree and of degree gives a solution to the equation. One particular basis of the space of homogeneous polynomials (in two variables) of degree is . Note that it is the (unique up to normalization) basis of eigenvectors of the rotation group : The rotation of the plane by acts as multiplication by on the basis vector .
If we combine the degree basis and the degree basis with the recursion formula, we obtain a basis of the space of harmonic, homogeneous polynomials (in three variables this time) of degree consisting of eigenvectors for (note that the recursion formula is compatible with the -action because the Laplace operator is rotationally invariant). These are the complex solid harmonics:
and in general
for .
Plugging in spherical coordinates , , and using one finds the usual relationship to spherical harmonics with a polynomial , which is (up to normalization) the associated Legendre polynomial, and so (again, up to the specific choice of normalization).
Real form
By a simple linear combination of solid harmonics of these functions are transformed into real functions, i.e. functions . The real regular solid harmonics, expressed in Cartesian coordinates, are real-valued homogeneous polynomials of order in x, y, z. The explicit form of these polynomials is of some importance. They appear, for example, in the form of spherical atomic orbitals and real multipole moments. The explicit Cartesian expression of the real regular harmonics will now be derived.
Linear combination
We write in agreement with the earlier definition
with
where is a Legendre polynomial of order .
The dependent phase is known as the Condon–Shortley phase.
The following expression defines the real regular solid harmonics:
and for :
Since the transformation is by a unitary matrix the normalization of the real and the complex solid harmonics is the same.
z-dependent part
Upon writing the -th derivative of the Legendre polynomial can be written as the following expansion in
with
Since it follows that this derivative, times an appropriate power of , is a simple polynomial in ,
(x,y)-dependent part
Consider next, recalling that and ,
Likewise
Further
and
In total
List of lowest functions
We list explicitly the lowest functions up to and including .
Here
The lowest functions and are:
References
Partial differential equations
Special hypergeometric functions
Atomic physics
Fourier analysis
Rotational symmetry | Solid harmonics | Physics,Chemistry | 893 |
14,830,046 | https://en.wikipedia.org/wiki/British%20Coal%20Utilisation%20Research%20Association | British Coal Utilisation Research Association (BCURA) was a non-profit association of industrial companies, incorporated 23 April 1938 and dissolved 24 February 2015.
History
It was founded in 1938, with an assured income of £25000 per year for five years, supplied by the Mining Association of Great Britain and a grant from the government Department of Scientific and Industrial Research, establishing a research station in West Brompton. It was formed from the research department of the Combustion (formerly Coal-burning) Appliance Manufacturer's Association becoming a separate entity. Laboratories were also later established in Leatherhead.
The first Director was John G. Bennett.
During the Second World War it developed small units for the manufacture of producer gas from coal, to be used in vehicles in place of petrol. A £1,000,000 five-year programme was also begun, with a view not only to the needs of wartime but also to those of industry afterwards, covering fuels and chemicals from coal and greater efficiency of domestic appliances.
Following the Nationalisation of the British Coal Industry in 1946 it continued as an independent body with the support of the National Coal Board in place of the Mining Association.
It developed the commercially successful Satchwell Automatic Controller for small-pipe heating systems.
As the National Coal Board became the dominant industrial member of the Association, the government decided to run down its grant from 1968, and the NCB took over BCURA as a subsidiary in 1971. However, the NCB felt BCURA's income from contracts was small compared with its running costs and decided to concentrate research at its own Coal Research Establishment at Stoke Orchard, closing BCURA in the same year. Some of the library stock is now in the North of England Institute of Mining and Mechanical Engineers collection at the Common Room of the Great North in Newcastle upon Tyne.
Notable People
Rosalind Franklin worked on porosity of coal during World War II.
Victor Goldschmidt lectured on rare elements in coal ash during World War II.
Marcello Pirani was scientific consultant during 1941—1947, concerned with carbonaceous materials resistant to high temperatures.
Meredith Thring was there from the outset.
The family of John G. Bennett maintain a website that contains information about him and BCURA.
Peter H. Given, Head of Organic Chemistry, went on to Pennsylvania State University, achieving distinction in U.S.
BCURA activities were the subject of a review published in Nature, Volume 153, Number 3873, p. 104 (22 January 1944).
References
External links
bcura.org
BUCURA Coal Bank
British research associations
Coal mining in the United Kingdom
Coal organizations
1938 establishments in the United Kingdom
Non-profit organisations based in the United Kingdom
Organisations based in Cheltenham
Scientific organisations based in the United Kingdom
Scientific organizations established in 1938
2015 disestablishments in the United Kingdom | British Coal Utilisation Research Association | Engineering | 558 |
1,014,792 | https://en.wikipedia.org/wiki/Key-based%20routing | Key-based routing (KBR) is a lookup method used in conjunction with distributed hash tables (DHTs) and certain other overlay networks. While DHTs provide a method to find a host responsible for a certain piece of data, KBR provides a method to find the closest host for that data, according to some defined metric. This may not necessarily be defined as physical distance, but rather the number of network hops.
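As a hedged illustration of "closest host according to some defined metric" (not a specification of any particular overlay), the Python sketch below uses the XOR distance metric popularized by Kademlia; the node IDs and the key are arbitrary example values.

```python
# Minimal sketch of key-based routing with an XOR distance metric,
# the metric used by Kademlia. Node IDs and the key are illustrative.
def xor_distance(a: int, b: int) -> int:
    return a ^ b

def closest_nodes(key: int, node_ids, k: int = 3):
    """Return the k node IDs closest to the key under the XOR metric."""
    return sorted(node_ids, key=lambda n: xor_distance(n, key))[:k]

nodes = [0b0001, 0b0110, 0b1010, 0b1100, 0b1111]
key = 0b1011
print(closest_nodes(key, nodes))   # nodes with the smallest XOR distance to the key
```

A real overlay would apply this selection iteratively across routing-table lookups on many hosts, but the core idea, ranking candidate nodes by a distance function over the key space, is the same.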
Key-based routing networks
Freenet
GNUnet
Kademlia
Onion routing
Garlic routing
See also
Public-key cryptography
Distributed Hash Table - Overlay Network
Anonymous P2P
References
Anonymity networks
Routing
File sharing networks
Distributed data storage
Network architecture
Cryptographic protocols
Key-based routing | Key-based routing | Technology,Engineering | 145 |
24,795,817 | https://en.wikipedia.org/wiki/List%20of%20architectural%20design%20competitions | This is a list of notable architectural design competitions worldwide.
Major architecture competitions by country
Australia
Flinders Street station, Melbourne – Fawcett and Ashworth, 1899 (17 entries)
Shrine of Remembrance, Melbourne – Phillip Hudson and James Wardrop, 1923 (83 entries; open to Australian and British architects only)
Shrine of Remembrance, Brisbane – Buchanan and Cowper, 1928
ANZAC War Memorial, Sydney – Charles Bruce Dellit, 1929 (117 entries)
Opera House, Sydney – Jørn Utzon, 1955 (233 entries)
Parliament House, Canberra – Romaldo Giurgola, 1978 (329 entries)
Federation Square, Melbourne – Lab Architecture Studio, 1997 (177 entries)
Flinders Street station renewal, Melbourne – Hassell + Herzog & de Meuron, 2013 (118 entries)
Austria
Vienna Ring Road – Ludwig Förster – Friedrich August von Stache – Eduard van der Nüll and August Sicard von Sicardsburg, 1858 (85 international participants)
Vienna State Opera – August Sicard von Sicardsburg and Eduard van der Nüll, 1860
Karlskirche, Vienna – Johann Bernhard Fischer von Erlach, 1713
Votivkirche, Vienna – Heinrich Ferstel, 1854 (75 international participants)
Austrian Postal Savings Bank, Vienna, 1903
City Hall, Innsbruck – Dominique Perrault, 1994
Brazil
City of Brasília – Oscar Niemeyer and Lúcio Costa, 1957 (47 final submissions). The goal was to build a new capital in 1000 days.
Canada
Between 1960 and 2000, close to 150 competitions had been held in Canada.
City Hall, Toronto – Viljo Revell, 1956 (500 entries)
University of Manitoba – Visionary (re)Generation, Winnipeg – Janet Roseberg & Studio Inc. with Cibinel Architects Ltd. and Landmark Planning & Design Inc., 2013 (45 international participants)
China
Beijing National Stadium – Herzog & de Meuron, 2001 (13 final submissions).
China Central Television Headquarters – Office for Metropolitan Architecture, 2002 (10 submissions)
Beijing National Aquatics Center – PTW Architects and Arup, 2003 (10 proposals)
Denmark
Royal Danish Library, Copenhagen – Schmidt Hammer Lassen, 1993 (179 entries)
Geo Centre Møns Klint, Møn Island – PLH Architects, 2002 (292 entries)
Egypt
Bibliotheca Alexandrina – Snøhetta, 1998 (523 entries)
Finland
Over the past 130 years, almost 2,000 architectural competitions have been held in Finland.
Central railway station, Helsinki – Eliel Saarinen, 1904 (21 entries)
Viipuri Library – Alvar Aalto, 1927
Paimio Sanatorium – Alvar Aalto, 1929
Säynätsalo Town Hall – Alvar Aalto, 1949
Kiasma Contemporary Art Museum, Helsinki – Steven Holl, 1992 (516 entries)
Guggenheim Helsinki Plan – 2014 (1,715 entries)
France
Opera Garnier, Paris – Charles Garnier, 1861 (171 participants)
Centre Georges Pompidou, Paris – Renzo Piano and Richard Rogers, 1971 (681 entries)
Arab World Institute, Paris – Jean Nouvel, 1981
Parc de la Villette, Paris – Bernard Tschumi, 1982 (471 entries)
La Grande Arche de la Défense, Paris – Johann Otto von Spreckelsen, 1982 (420 entries)
Cité de la Musique, Paris – Christian de Portzamparc, 1983
Opéra Bastille, Paris – Carlos Ott, 1983 (750 entries)
Carré d'Art, Nîmes – Norman Foster, 1984 (12 invited architects)
Opéra National de Lyon, Lyon – Jean Nouvel, 1986
Germany
Reichstag, Berlin, 1872 and 1882 (189 entries by German architects)
Central Station, Hamburg – Heinrich Reinhardt, 1900
House for an Art Lover, Darmstadt, 1901
Berliner Philharmonie, Berlin – Hans Scharoun, 1956–57 (14 invited architects)
Neue Staatsgalerie, Stuttgart – James Stirling, 1977
International Building Exhibition, Berlin – various architects for several projects, 1980–1987
Messeturm, Frankfurt am Main – Helmut Jahn, 1985
Jewish Museum, Berlin Daniel Libeskind, 1989
Commerzbank Tower, Frankfurt am Main – Norman Foster, 1991
Reichstag building, Berlin – Norman Foster, 1992
Central Station, Berlin – Gerkan, Marg and Partners, 1992
Olympic velodrome and swimming pool, Berlin – Dominique Perrault, 1992
Felix Nussbaum Museum, Osnabrück – Daniel Libeskind, 1995
French Embassy, Berlin – Christian de Portzamparc, 1997 (7 invited architects)
Phaeno Science Center, Wolfsburg – Zaha Hadid, 2000
BMW Welt, Munich – COOP HIMMELB(L)AU, 2001
BMW Werk, Leipzig – Zaha Hadid, 2002
Ireland
U2 Tower, Dublin, 2002 (not yet built)
Italy
Termini railway station, Rome, 1947
Japan
Memorial Cathedral for World Peace, Hiroshima, 1947 (177 designs, no winner)
Peace Memorial Museum, Hiroshima – Kenzo Tange, 1949
New National Theatre, Tokyo – Takahiro Yanagisawa, 1984
Tokyo Metropolitan Government Building, Tokyo – Kenzo Tange, 1985–1986
Kansai International Airport – Renzo Piano, 1988
Tokyo International Forum, Tokyo – Rafael Viñoly, 1987 (395 entries)
Lithuania
Vilnius Guggenheim Hermitage Museum – Zaha Hadid – scheduled for completion in 2011
Luxembourg
Philharmonie Luxembourg – Christian de Portzamparc, 1997
Mexico
Guggenheim Guadalajara, Guadalajara, Jalisco – Enrique Norten/TEN Arquitectos
Netherlands
Rijksmuseum, Amsterdam – Pierre Cuypers, 1863 and 1875
Beurs, Amsterdam – Hendrik Petrus Berlage, 1884
Peace Palace, The Hague – Louis M. Cordonnier, 1905
Amsterdam City Hall – Wilhelm Holzbauer, Cees Dam, B. Bijvoet and G.H.M. Holt, 1967 (804 entries)
The Hague City Hall – Richard Meier, 1986–1989
Netherlands Architecture Institute, Rotterdam – Jo Coenen, 1988 (6 submissions)
New Caledonia
Jean-Marie Tjibaou Cultural Centre, Nouméa – Renzo Piano, 1991
Norway
Oslo Central Station, Oslo – John Engh
Russia
Palace of Soviets, Moscow – Boris Iofan, 1931–1933, 160 architectural design entries (never built)
Commisariat for Heavy Industry, Moscow, 1934
Spain
Igualada Cemetery, Barcelona – Enric Miralles and Carme Pinós
Sweden
City Hall, Stockholm, 1907
Switzerland
Palace of Nations, Geneva, 1926, Henri Paul Nénot & Julien Flegenheimer; Carlo Broggi; Camille Lefèvre; Giuseppe Vago (377 entries)
United Kingdom
Crystal Palace, London – Joseph Paxton
Houses of Parliament, London – Charles Barry, 1836 (98 proposals)
Royal Courts of Justice, London – George Edmund Street, 1868 (11 competing architects)
Kelvingrove Art Gallery, Glasgow – John William Simpson and E J Milner Allen, 1891 (19 competing architects)
Liverpool Cathedral, Liverpool – Giles Gilbert Scott, 1902 (5 prequalified architects)
McLeod Centre, Iona for the Iona Community – Feilden Clegg Bradley
Manchester Art Gallery – Hopkins Architects, 1994 (132 entries)
Scottish Parliament building, Edinburgh – Enric Miralles, 1998 (5 prequalified architects)
National Assembly for Wales, Cardiff – Richard Rogers, 1998 (55 entries)
United States
White House, Washington DC – James Hoban, 1792 (9 entries)
33 Liberty Street, New York – York and Sawyer, 1919
Tribune Tower, Chicago – John Mead Howells and Raymond Hood, 1922 (260 entries)
Boston City Hall, Boston – Kallmann McKinnell & Knowles, 1962 (national, 256 entries)
McCormick Tribune Campus Center, Chicago – Rem Koolhaas, 1998
New York World Trade Center
2002 World Trade Center Master Design Contest – Daniel Libeskind (concept)
World Trade Center Site Memorial Competition – Michael Arad and Peter Walker
Visual and Performing Arts Library, Brooklyn, NY – Enrique Norten / TEN Arquitectos
References
Sources
De Jong, Cees and Mattie, Erik: Architectural Competitions 1792–1949, Taschen, 1997,
De Jong, Cees and Mattie, Erik: Architectural Competitions 1950-Today, Taschen, 1997,
External links
Visionary (re)Generation – Envisioning a Sustainable Campus Community in Winnipeg, Manitoba
Design competitions | List of architectural design competitions | Engineering | 1,719 |
54,022,162 | https://en.wikipedia.org/wiki/Droplet-based%20microfluidics | Droplet-based microfluidics manipulate discrete volumes of fluids in immiscible phases with low Reynolds number and laminar flow regimes. Interest in droplet-based microfluidics systems has been growing substantially in past decades. Microdroplets offer the feasibility of handling miniature volumes (μL to fL) of fluids conveniently, provide better mixing, encapsulation, sorting, sensing and are suitable for high throughput experiments. Two immiscible phases used for the droplet based systems are referred to as the continuous phase (medium in which droplets flow) and dispersed phase (the droplet phase).
Droplet formation methods
In order for droplet formation to occur, two immiscible phases, referred to as the continuous phase (medium in which droplets are generated) and dispersed phase (the droplet phase), must be used. The size of the generated droplets is mainly controlled by the flow rate ratio of the continuous phase and dispersed phase, interfacial tension between two phases, and the geometry of the channels used for droplet generation. Droplets can be formed both passively and actively. Active droplet formation (electric, magnetic, centrifugal) often uses similar devices to passive formation but requires an external energy input for droplet manipulation. Passive droplet formation tends to be more common than active as it produces similar results with simpler device designs. Generally, three types of microfluidic geometries are utilized for passive droplet generation: (i) cross-flowing, (ii) flow focusing, and (iii) co-flowing. Droplet-based microfluidics often operate under low Reynolds numbers to ensure laminar flow within the system. Droplet size is often quantified with coefficient of variation (CV) as a description of the standard deviation from the mean droplet size. Each of the listed methods provide a way to generate microfluidic droplets in a controllable and tunable manner with proper variable manipulation.
Cross-flowing droplet formation
Cross-flowing is a passive formation method that involves the continuous and aqueous phases running at an angle to each other. Most commonly, the channels are perpendicular in a T-shaped junction with the dispersed phase intersecting the continuous phase; other configurations such as a Y-junction are also possible. The dispersed phase extends into the continuous and is stretched until shear forces break off a droplet. In a T-junction, droplet size and formation rate are determined by the flow rate ratio and capillary number. The capillary number relates the viscosity of the continuous phase, the superficial velocity of the continuous phase, and the interfacial tension. Typically, the dispersed phase flow rate is slower than the continuous flow rate. T-junction formation can be further applied by adding additional channels, creating two T-junctions at one location. By adding channels, different dispersed phases can be added at the same point to create alternating droplets of different compositions. Droplet size, usually above 10 μm, is limited by the channel dimensions and often produces droplets with a CV of less than 2% with a rate of up to 7 kHz.
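As a hedged numerical sketch of the quantities mentioned above, the Python snippet below computes the capillary number of the continuous phase, Ca = μ_c·u_c/γ, and a simple squeezing-regime plug-length scaling for a T-junction, L/w ≈ 1 + α·(Q_d/Q_c); the fluid properties and the fitted, geometry-dependent constant α are illustrative assumptions, not values from this article.

```python
# Illustrative calculation of the capillary number and a T-junction
# plug-length scaling. All numerical values are placeholder assumptions.
def capillary_number(mu_c, u_c, gamma):
    """Ca = (continuous-phase viscosity * superficial velocity) / interfacial tension."""
    return mu_c * u_c / gamma

def plug_length_ratio(q_dispersed, q_continuous, alpha=1.0):
    """Dimensionless plug length L/w in the squeezing regime of a T-junction."""
    return 1.0 + alpha * (q_dispersed / q_continuous)

mu_c = 0.05      # Pa*s, continuous-phase viscosity (illustrative)
u_c = 0.01       # m/s, superficial velocity of the continuous phase (illustrative)
gamma = 0.005    # N/m, interfacial tension with surfactant (illustrative)

print(capillary_number(mu_c, u_c, gamma))                      # 0.1
print(plug_length_ratio(q_dispersed=1.0, q_continuous=5.0))    # 1.2
```

Lowering the flow-rate ratio Q_d/Q_c in this sketch shortens the predicted plug, consistent with the statement above that droplet size is controlled mainly by the flow-rate ratio, interfacial tension, and channel geometry.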
Flow focusing droplet formation
Flow focusing is a usually passive formation method that involves the dispersed phase flowing to meet the continuous phase, typically at an angle (nonparallel streams), then undergoing a constraint that creates a droplet. This constraint is generally a narrowing in the channel that creates the droplet through symmetric shearing, followed by a channel of equal or greater width. As with cross-flowing, the continuous phase flow rate is typically higher than the dispersed phase flow rate. Decreasing the flow of the continuous phase can increase the size of the droplets. Flow focusing can also be an active method, with the constraint point being adjustable using pneumatic side chambers controlled by compressed air. The movable chambers act to pinch the flow, deforming the stream and creating a droplet with a changeable driving frequency. Droplet size is usually around several hundred nanometers with a CV of less than 3% and a rate of up to several hundred Hz to tens of kHz.
Co-flowing droplet formation
Co-flowing is a passive droplet formation method where the dispersed phase channel is enclosed inside a continuous phase channel. At the end of the dispersed phase channel, the fluid is stretched until it breaks from shear forces and forms droplets either by dripping or jetting. Dripping occurs when capillary forces dominate the system and droplets are created at the channel endpoint. Jetting occurs, by widening or stretching, when the continuous phase is moving slower, creating a stream from the dispersed phase channel opening. Under the widening regime, the dispersed phase is moving faster than the continuous phase causing a deceleration of the dispersed phase, widening the droplet and increasing the diameter. Under the stretching regime, viscous drag dominates causing the stream to narrow creating a smaller droplet. The effect of the continuous phase flow rate on the droplet size depends on whether the system is in a stretching or widening regime thus different equations must be used to predict droplet size. Droplet size is usually around several hundred nanometers with a CV of less than 5% and a rate of up to tens of kHz.
Droplet manipulation
The benefits of microfluidics can be scaled up to higher throughput using larger channels to allow more droplets to pass or by increasing droplet size. Droplet size can be tuned by adjusting the flow rates of the continuous and dispersed phases, but droplet size is limited by the need to maintain the concentration, inter-analyte distances, and stability of microdroplets. Thus, increased channel size becomes attractive due to the ability to create and transport a large number of droplets, though dispersion and stability of droplets become a concern. Finally, thorough mixing of droplets to expose the greatest possible number of reagents is necessary to ensure the maximum amount of starting material reacts. This can be accomplished by using a winding (serpentine) channel to facilitate unsteady laminar flow within the droplets.
Surfactants
Surfactants play an important role in droplet-based microfluidics. The main purpose of using a surfactant is to reduce the interfacial tension between the dispersed phase (droplet phase, typically aqueous) and the continuous phase (carrier liquid, typically oil) by adsorbing at interfaces and preventing droplets from coalescing with each other, thereby stabilizing the droplets as an emulsion, which allows for longer storage times in delay-lines, reservoirs, or vials. Without surfactants, unstable emulsions eventually evolve into separate phases to reduce the overall energy of the system. Surface chemistry cannot be ignored in microfluidics, as interfacial tension becomes a major consideration among microscale droplets. Linas Mazutis and Andrew D. Griffiths presented a method that used surfactants to achieve selective and highly controllable coalescence without external manipulation. They manipulated the contact time and the interfacial surfactant coverage of a drop pair to control droplet fusion. The larger the difference in interfacial surfactant coverage between two droplets, the less likely coalescence is to occur. This method allowed researchers to add reagents to droplets in a different way and to study emulsification further.
Microfluidics is widely used for biochemical experiments, so it is important that surfactants are biocompatible when working with living cells and high-throughput analysis. Surfactants used in living-cell research devices should not interfere with biochemical reactions or cellular functions. Hydrocarbon oil is typically not used in cell-based microfluidic research because it is not compatible with cells and damages cell viability; it also extracts organic molecules from the aqueous phase. However, fluorosurfactants with fluorinated tails, for example, are used as compatible droplet emulsifiers that stabilize droplets containing cells without harming or altering the cells. Fluorosurfactants are soluble in fluorinated oil (the continuous phase) but insoluble in the aqueous phase, which results in a decrease of the aqueous–fluorous interfacial tension. For example, a triblock copolymer surfactant containing two perfluoropolyether (PFPE) tails and a polyethylene glycol (PEG) block head group is a fluorosurfactant with great biocompatibility and excellent droplet stability against coalescence. Another example is the fluorinated linear polyglycerols, which can be further functionalized on their tailored side-chains and are more customizable than the PEG-based copolymer. Surfactants can be purchased from many chemical companies, such as RainDance Technologies (now through BioRad) and Miller-Stephenson.
Physical considerations
Upon addition of surfactants or inorganic salts to a droplet-based microfluidic system, the interfacial tension of individual droplets within the microfluidic system is altered. These components allow the droplets to be used as microreactors for various procedural mechanisms. The relationship between the interfacial tension (γ), the concentration of dissociated surfactants/salts in the bulk droplet (C), the temperature (T), the Boltzmann constant (kB), and the concentration of dissociated surfactants/salts at the interface (the surface excess, Γ) is described by the Gibbs adsorption isotherm, which in simplified form can be written Γ = −(1/(kB·T))·(dγ/d ln C).
This isotherm reaffirms the notion that as the inorganic salt concentration increases, salts are depleted from the droplet interface (Γ < 0) and the interfacial tension of the droplet increases. This is contrasted by surfactants, which adsorb at the interface (Γ > 0) and lower the interfacial tension γ. At low surfactant concentrations, surface tension decreases according to the Gibbs adsorption isotherm until a certain concentration is reached, known as the critical micelle concentration (CMC), at which micelles begin to form. Upon reaching the CMC, the dissolved surfactant monomer concentration reaches a maximum, and additional surfactant molecules aggregate to form nanometer-sized micelles. Because of this potential for micelle formation, three steps can be considered when analyzing the adsorption of surfactants to the droplet's interface. First, surfactant molecules adsorb between the surface layer and the subsurface layer. Second, molecules exchange between the subsurface and the bulk solution. Third, the micelles relax as the equilibrium between free molecules and micelles is disturbed.
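To illustrate how the isotherm is applied in practice, the following sketch estimates the surface excess Γ by a finite difference over two hypothetical tensiometry measurements taken below the CMC; the concentration and tension values are invented for the example.

    # Minimal sketch: surface excess from the Gibbs adsorption isotherm,
    #   Gamma = -(1 / (kB * T)) * d(gamma) / d(ln C),
    # estimated by a finite difference over hypothetical tensiometry data below the CMC.
    import math

    kB = 1.380649e-23   # Boltzmann constant, J/K
    T = 298.15          # temperature, K

    # Hypothetical (surfactant concentration in mol/m^3, interfacial tension in N/m) pairs:
    c1, gamma1 = 0.10, 0.045
    c2, gamma2 = 0.20, 0.040

    d_gamma = gamma2 - gamma1
    d_lnC = math.log(c2) - math.log(c1)
    surface_excess = -(1 / (kB * T)) * (d_gamma / d_lnC)   # molecules per m^2

    print(f"Gamma ~ {surface_excess:.2e} molecules per m^2")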
The molecules making up each micelle are organized according to the solution they are suspended in, with the more soluble portions in contact with the solution and the less soluble portions of the molecule in contact with each other. Depending on the ratio of the volume of the polar heads to that of the nonpolar tails, various surfactants have been found to form larger aggregates: hollow, bi-layered structures known as vesicles. A notable surfactant that has been observed to form vesicles is AOT (dioctyl sulfosuccinate sodium salt). These micelle and vesicle carriers are comparatively recent additions to microfluidic systems; they have been utilized to transport agents within microfluidic systems, pointing toward future applications in microfluidic transport.
Reagent addition
Microscale reactions performed in droplet-based applications conserve reagents and reduce reaction time all at kilohertz rates. Reagent addition to droplet microreactors has been a focus of research due to the difficulty of achieving reproducible additions at kilohertz rates without droplet-to-droplet contamination.
Reagent coflow prior to droplet formation
Reagents can be added at the time of droplet formation through a "co-flow" geometry. Reagent streams are pumped in separate channels and join at the interface with a channel containing the continuous phase, which shears and creates droplets containing both reagents. By changing the flow rates in reagent channels, reagent ratios within a droplet can be controlled.
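As an illustration of the idea that reagent ratios follow the flow-rate ratio of the co-flowing streams, a minimal sketch is given below; the flow rates and stream names are hypothetical, and full mixing inside the droplet is assumed.

    # Minimal sketch: estimating reagent volume fractions in a droplet formed by
    # co-flowing two aqueous reagent streams. Flow rates are hypothetical.

    flow_rates_ul_per_min = {"reagent_A": 2.0, "reagent_B": 6.0}  # dispersed-phase streams

    total = sum(flow_rates_ul_per_min.values())
    fractions = {name: q / total for name, q in flow_rates_ul_per_min.items()}

    for name, frac in fractions.items():
        print(f"{name}: ~{frac:.0%} of each droplet's volume")
    # Assumes the streams merge just before droplet breakup and mix fully inside the droplet.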
Droplet fusion
The fusion of droplets with different contents can also be exploited for reagent addition. Electro-coalescence merges pairs of droplets by applying an electric field to temporarily destabilize the droplet-droplet interface to achieve reproducible droplet fusion in surfactant-stabilized emulsions. Electro-coalescence requires droplets (which are normally separated by the continuous phase) to come into contact. By manipulating droplet size in separate streams, differential flow of droplet sizes can bring droplets into contact before merging.
Another method for facilitating droplet fusion is acoustic tweezing. While droplets are flowing in microfluidic channels, they can be immobilised using an acoustic tweezer based on surface acoustic waves. Once a droplet is held with the acoustic tweezer, consecutive droplets collide into it and fusion takes place.
Injection of reagents into existing droplets
Reagent co-flow and droplet fusion methods are tied to droplet formation events, which limits downstream flexibility. To decouple reagent addition from droplet creation, a setup is used in which the reagent stream flows through a channel perpendicular to the droplet stream. A small volume of reagent is then injected into and merged with each droplet (plug) as it passes the channel. The reagent volume is controlled by the flow rate of the perpendicular reagent channel.
An early challenge for such systems was that reagent droplet merging was not reproducible for stable emulsions. By adapting the use of an actuated electric field into this geometry, Abate et al. achieved sub-picoliter control of reagent injection. This approach, termed picoinjection, controls injection volume through the reagent stream pressure and droplet velocity. Further work on this method has aimed to reduce pressure fluctuations that impede reproducible injections.
Injection of the pressurized aqueous fluid occurs when the electrodes are activated, creating an electric field that destabilizes the aqueous fluid/oil interface and triggers the injection. Key advantages of picoinjection include low inadvertent material transfer between droplets and maintenance of droplet compartmentalization through the injection. However, electrodes are often fabricated using metal solder, which can complicate construction of the microfluidic device through increased fabrication time as a result of a more intricate design. An alternative picoinjection method uses the injection reagent itself as the conductor of the electric field, where a voltage applied to the fluid stimulates injection. Such a method also allows for greater control of injection, as the applied voltage corresponds to the volume of reagent fluid injected.
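A back-of-the-envelope way to think about picoinjection volume is as the reagent flow delivered during the time a droplet spends passing the injector; the sketch below illustrates this simplification, with all numerical values hypothetical.

    # Minimal sketch: rough estimate of injected volume per droplet during picoinjection.
    # Values are hypothetical; the real injection volume also depends on reagent pressure
    # and interface destabilisation, as described in the text.

    reagent_flow_pl_per_s = 5000.0   # reagent stream flow rate (picolitres per second)
    droplet_frequency_hz = 1000.0    # droplets passing the injector per second

    injected_volume_pl = reagent_flow_pl_per_s / droplet_frequency_hz
    print(f"~{injected_volume_pl:.1f} pL injected per droplet")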
Droplet-to-droplet contamination is a challenge of many injection methods. To combat this, Doonan et al. developed a multifunctional K-channel, which flows reagent streams opposite the path of the droplet stream. Utilizing an interface between the two channels, injection is achieved similarly to picoinjection, but any bilateral contamination is washed away by the continuous reagent flow. Contamination is avoided at the expense of potentially wasting precious reagent.
Droplet incubation
In order to make droplet-based microfluidics a viable technique for carrying out chemical reactions or working with living cells on the microscale, it is necessary to implement methods allowing for droplet incubation. Chemical reactions often need time to occur, and living cells similarly require time to grow, multiply, and carry out metabolic processes. Droplet incubation can be accomplished either within the device itself (on-chip) or externally (off-chip), depending on the parameters of the system. Off-chip incubation is useful for incubation times of a day or more or for incubation of millions of droplets at a time. On-chip incubation allows for integration of droplet manipulation and detection steps in a single device.
Off-chip incubation
Droplets containing cells can be stored off-chip in PTFE tubing for up to several days while maintaining cell viability and allowing for reinjection onto another device for analysis. Evaporation of aqueous and oil-based fluids has been reported with droplet storage in PTFE tubing, so for storage longer than several days, glass capillaries are also used. Finally, following formation in a microfluidic device, droplets may also be guided through a system of capillaries and tubing leading to a syringe. Droplets can be incubated in the syringe and then directly injected onto another chip for further manipulation or detection and analysis.
On-chip incubation
Delay lines are used to incubate droplets on-chip. After formation, droplets can be introduced into a serpentine channel with length of up to a meter or more. Increasing the depth and width of the delay line channel (as compared to channels used to form and transport droplets) enables longer incubation times while minimizing channel back pressure. Because of the larger channel size, droplets fill up the delay line channel and incubate in the time it takes the droplets to traverse this channel.
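The residence (incubation) time in a delay line is set by the channel volume and the total volumetric flow rate, as described above; a minimal sketch with hypothetical channel dimensions and flow rate is shown below.

    # Minimal sketch: incubation time in a delay-line channel of rectangular cross-section.
    # Channel dimensions and flow rate are hypothetical placeholders.

    length_m = 1.0                     # delay-line length (m)
    width_m = 500e-6                   # channel width (m)
    depth_m = 500e-6                   # channel depth (m)
    total_flow_m3_per_s = 50e-9 / 60   # total flow (oil + droplets), here 50 uL/min

    channel_volume_m3 = length_m * width_m * depth_m
    residence_time_s = channel_volume_m3 / total_flow_m3_per_s
    print(f"approximate incubation time: {residence_time_s / 60:.1f} minutes")
    # Ignores velocity differences between droplets within the channel.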
Delay lines were originally designed for incubating droplets containing chemical reaction mixtures and were capable of achieving delay times of up to one hour. These devices make use of delay line channels tens of centimeters in length. Increasing the total length of the delay line channels to one or more meters made incubation times of 12 or more hours possible. Delay lines have been shown to maintain droplet stability for up to 3 days, and cell viability has been demonstrated using on-chip delay lines for up to 12 hours. Prior to the development of delay lines, on-chip incubation was performed by directing droplets into large reservoirs (several millimeters in both length and width), which offers high storage capacity and lower complexity of device construction and operation if precise time control of droplets is not required.
If it is important to have a uniform distribution of incubation times for the droplets, the delay line channel may contain regularly-spaced constrictions. Droplets flowing through a channel of uniform diameter travel at different speeds based on their radial position; droplets closer to the center of the channel move faster than those near the edges. By narrowing the channel width to a fraction of its original size, droplets with higher velocities are forced to equilibrate with slower-moving droplets because the constriction allows fewer droplets to pass through at a time. Another manipulation to the geometry of the delay line channel involves introducing turns to the droplets' trajectory. This increases the extent to which any reagents contained within the droplets are mixed via chaotic advection. For systems requiring the incubation of 100 to 1000 droplets, traps can be fabricated in the delay line channel that store droplets separately from one another. This provides for finer control and monitoring of individual droplets.
Magnetic droplets
The micro-magnetofluidic method is the control of magnetic fluids by an applied magnetic field on a microfluidic platform, offering wireless and programmable control of the magnetic droplets. Hence, the magnetic force can also be used to perform various logical operations, in addition to the hydrodynamic force and the surface tension force. The magnetic field strength, type of the magnetic field (gradient, uniform or rotating), magnetic susceptibility, interfacial tension, flow rates, and flow rate ratios determine the control of the droplets on a micro-magnetofluidic platform.
Magnetic droplets, in the context of droplet-based microfluidics, are microliter-size droplets that are either composed of ferrofluids or contain some magnetic component that allows manipulation via an applied magnetic field. Ferrofluids are homogeneous colloidal suspensions of magnetic nanoparticles in a liquid carrier. Two applications of magnetic droplets are the control and manipulation of microfluidic droplets in a microenvironment and the fabrication, transport, and utilization of nanomaterial constructs in the microdroplets. Manipulating magnetic droplets can be used to perform tasks such as arranging droplets into an ordered array for applications in cell culture studies, while the use of magnetic droplets for nanostructure fabrication can be used in drug delivery applications.
Magnetic droplets in non-traditional systems
In traditional droplet-based microfluidic systems, in which droplets travel through a channel filled with an immiscible oil that separates them, movement of the droplets is achieved through differences in pressure or surface tension. In non-traditional droplet-based microfluidic systems, such as those described here, other mechanisms of control are needed to manipulate the droplets. Application of a magnetic field to a microfluidic array containing magnetic droplets allows for easily achieved sorting and arrangement of the droplets into useful patterns and configurations. These types of manipulations can be achieved via static or dynamic application of a magnetic field, which allows a high degree of control over magnetic droplets. Characterization of the degree of control over magnetic droplets includes measurements of the magnetic susceptibility of the ferrofluid, measurement of the change in the droplet–substrate interface area in the presence of an applied magnetic field, and measurement of the "roll-off angle", the angle at which the droplet will move in the presence of a magnetic field when the surface is tilted. Interactions between the water droplet and the surface can be manipulated by adjusting the structure of the microfluidic system itself, for example by applying a magnetic field to iron-doped polydimethylsiloxane (PDMS), a common material for microfluidic devices.
In other non-traditional droplet-based microfluidic systems, magnetic microdroplets can be a facile means of fabricating and controlling micro- and nanomaterials, sometimes called "robots". These nanostructures are formed of magnetic nanoparticles in microdroplets that have been manipulated into specific structures by an applied magnetic field. Microhelices are a multifunctional application of this technology. Monodisperse droplets containing magnetic nanoparticles are generated and subjected to a magnetic field, which organizes the nanoparticles into a helical template that is fabricated in place through photoinduced polymerization. These microhelices were shown to be effective at clearing channels that were blocked with semi-solid composites of fats, oils, and proteins, such as those found in arteries. Microhelices and microparticle clusters in magnetic droplets have also been demonstrated as a means of transport for small (500 μm diameter) microparticles, showing applications in drug delivery as well. Non-spherical microstructures have also been fabricated using magnetic microfluidics, demonstrating the fine control that is available. Among the non-spherical microstructures to be fabricated were graphene oxide microcapsules that could be aspirated and reinflated using a micropipette, while also exhibiting photoresponsive and magnetoresponsive behavior. Microcapsules that respond to magnetic and photo stimuli, such as these constructed of graphene oxide, are useful for biomedical applications that require in vivo, contact-free manipulation of cellular structures such as stem cells.
Droplet sorting
Droplet sorting in microfluidics is an important technique, allowing discrimination based on factors ranging from droplet size to chemicals labeled with fluorescent tags within the droplet, building on the work done to sort cells in flow cytometry. Within the realm of droplet sorting there are two main types: bulk sorting, which uses either active or passive methods, and precise sorting, which relies mainly on active methods. Bulk sorting is applied to samples with a large number of droplets (>2000 s−1) that can be sorted based on intrinsic properties of the droplets (such as viscosity, density, etc.) without interrogating each droplet. Precise sorting, on the other hand, aims to separate droplets that meet certain criteria that are checked for each droplet.
Passive sorting is done through control of the microfluidic channel design, allowing discrimination based on droplet size. Size-based sorting relies on bifurcating junctions in the channel to divert the flow; droplets sort according to how they interact with the cross section of that flow (the shear rate), which relates directly to their size. Other passive methods include inertial sorting and microfiltration, each relying on physical properties of the droplet such as inertia and density. Active sorting uses additional devices attached to the microfluidic device to alter the path of a droplet during flow by controlling some aspect of the system, including thermal, magnetic, pneumatic, acoustic, hydrodynamic, and electric control. These controls are used to sort the droplets in response to some signal detected from the droplets, such as fluorescence intensity.
Precise sorting methods utilize these active sorting mechanisms by first making a decision (e.g., based on a fluorescence signal) about each droplet and then altering its flow with one of the aforementioned methods. A technique called fluorescence-activated droplet sorting (FADS) has been developed which utilizes electric field-induced active sorting with fluorescence detection to sort up to 2000 droplets per second. The method relies on, but is not limited to, enzymatic activity of compartmentalized target cells to activate a fluorogenic substrate within the droplet. When a fluorescing droplet is detected, two electrodes are switched on, applying a field to the droplet, which shifts its course into the selection channel, while non-fluorescing droplets flow through the main channel to waste. Other methods utilize different selection criteria, such as the absorbance or transmittance of the droplet, the number of encapsulated particles, or image recognition of cell shapes. Sorting can be done to improve encapsulation purity, an important factor for collecting sample for further experiments.
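To make the decide-then-actuate logic of fluorescence-activated sorting concrete, a highly simplified Python sketch is given below; the threshold, signal values, and function names are hypothetical and are not taken from any particular instrument or published device.

    # Minimal sketch of the decision logic in fluorescence-activated droplet sorting (FADS).
    # Signal values, the threshold, and the actuation function are hypothetical.

    FLUORESCENCE_THRESHOLD = 0.8   # arbitrary units; set from negative-control droplets

    def actuate_sorting_electrodes(pulse_ms: float = 0.5) -> None:
        """Stand-in for the hardware call that applies the deflecting electric field."""
        print(f"electrode pulse for {pulse_ms} ms -> droplet deflected to collection channel")

    def sort_droplet(fluorescence_signal: float) -> str:
        if fluorescence_signal >= FLUORESCENCE_THRESHOLD:
            actuate_sorting_electrodes()
            return "collected"
        return "waste"   # no field applied; droplet follows the default (waste) channel

    # Example stream of detected droplet signals (hypothetical):
    for signal in [0.05, 0.92, 0.10, 1.30, 0.40]:
        print(signal, "->", sort_droplet(signal))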
Key applications
Cell culture
One of the key advantages of droplet-based microfluidics is the ability to use droplets as incubators for single cells.
Devices capable of generating thousands of droplets per second open new ways to characterize cell populations, not only based on a specific marker measured at a specific time point but also based on cells' kinetic behavior such as protein secretion, enzyme activity or proliferation. Recently, a method was found to generate a stationary array of microscopic droplets for single-cell incubation that does not require the use of a surfactant.
Cell culture using droplet-based microfluidics
Droplet-based microfluidic systems provide an analytic platform that enables the isolation of single cells or groups of cells in droplets. This tool offers high-throughput for cell experiments since droplet-based microfluidic systems can generate thousands of samples (droplets) per second. Compared with cell culture in conventional microtiter plates, microdroplets from μL to pL volumes reduce the use of reagents and cells. Additionally, automated handling and continuous processing allow assays to be carried out more efficiently. The isolated environment in an encapsulated droplet helps analyze each individual cell population. High-throughput cell culture experiments, for example, testing the behavior of bacteria, finding rare cell types, directed evolution, and cell screening are suitable for using the droplet-based microfluidic techniques.
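Cell loading in droplets is commonly described by Poisson statistics: the average number of cells per droplet (λ) is set by the cell concentration and droplet volume, and the fractions of empty, single-cell, and multi-cell droplets follow from it. The sketch below is a generic illustration with hypothetical numbers, not a protocol from any cited work.

    # Minimal sketch: Poisson loading statistics for cell encapsulation in droplets.
    # Cell concentration and droplet volume are hypothetical.
    import math

    cells_per_ml = 1.0e6        # cell suspension concentration
    droplet_volume_pl = 100.0   # droplet volume in picolitres

    # Average cells per droplet: lambda = concentration * volume (1 pL = 1e-9 mL)
    lam = cells_per_ml * (droplet_volume_pl * 1e-9)

    def poisson(k: int, lam: float) -> float:
        return math.exp(-lam) * lam**k / math.factorial(k)

    print(f"lambda = {lam:.2f} cells per droplet")
    print(f"empty droplets:       {poisson(0, lam):.1%}")
    print(f"single-cell droplets: {poisson(1, lam):.1%}")
    print(f">1 cell droplets:     {1 - poisson(0, lam) - poisson(1, lam):.1%}")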
Materials, incubation and viability
Polydimethylsiloxane (PDMS) is the most common material used to fabricate microfluidic devices due to its low cost, ease of prototyping, and good gas permeability. When perfluorocarbon carrier oils, which also allow good gas permeability, are used as the continuous phase in a droplet-based microfluidic system for cell culture, some studies have found that cell viability (for example, of mammalian cells) is comparable to culture in flasks. To reach the required culture time, a reservoir or a delay line can be used. Using a reservoir allows long-term culture from several hours to several days, while a delay line is suitable for short-term culture of several minutes. Incubation is feasible both on-chip (a reservoir connected to the microfluidic system or delay lines) and off-chip (PTFE tubing isolated from the microfluidic system) after the droplets are formed. After incubation, droplets can be reinjected into a microfluidic device for analysis. There are also specially designed on-chip droplet storage systems for direct analysis, such as the "dropspot" device, which stores droplets in several array chambers and uses a microarray scanner for direct analysis.
Challenges
Cell culture using droplet-based microfluidics has created many opportunities for research that are inaccessible with conventional platforms, but it also poses many challenges. Some of the challenges of cell culture in droplet-based microfluidics are common to other microfluidic culture systems. First, nutrient consumption should be re-evaluated for a specific microfluidic system. For example, glucose consumption is sometimes increased in microfluidic systems (depending on the cell type). Medium turnover is sometimes faster than in macroscopic culture due to the reduced culture volumes, so the volume of medium used must be adjusted for each cell line and device. Second, cellular proliferation and behavior may differ between microfluidic systems; a determining factor is the ratio of culture surface area to medium volume, which varies from one device to another. One report found that proliferation was impaired in microchannels, and increased glucose or serum supplementation did not address the problem in that specific case. Third, pH regulation must be controlled. PDMS is more permeable to CO2 than to O2 or N2; thus, the dissolved gas level during incubation should be adjusted to reach the expected pH condition.
Biological macromolecule characterization
Protein crystallization
Droplet-based devices have also been used to investigate the conditions necessary for protein crystallization.
Droplet-based PCR
Polymerase chain reaction (PCR) has been a vital tool in genomics and biological endeavors since its inception, as it has greatly sped up the production and analysis of DNA samples for a wide range of applications. The technological advancement of microdroplet-scale PCR has enabled the construction of single-molecule PCR-on-a-chip devices. Early single-molecule DNA replication, including what occurs in microdroplet or emulsion PCR, was more difficult than larger-scale PCR, so much higher concentrations of components were usually used. However, fully optimized conditions have minimized this overload by ensuring that single molecules have an appropriate concentration of replication components distributed throughout the reaction cell. Non-droplet-based microfluidic PCR also faces challenges with reagent absorption into the device channels, but droplet-based systems lessen this problem through decreased channel contact.
Using water-in-oil systems, droplet PCR operates by assembling ingredients, forming droplets, combining droplets, thermocycling, and then processing the results, much like conventional PCR. This technique is capable of running in excess of 2 million PCR reactions, in addition to a 100,000-fold increase in the detection of wild-type alleles over mutant alleles. Droplet-based PCR greatly increases the multiplexing capabilities of conventional PCR, allowing for fast production of mutation libraries. Without proofreading, DNA replication is inherently somewhat error-prone, and by introducing error-prone polymerases, droplet-based PCR exploits this elevated mutation output to build a mutation library more quickly and efficiently than conventional approaches. This makes droplet-based PCR more attractive than slower, traditional PCR. In a similar application, highly multiplexed microdroplet PCR has been developed that allows for the screening of large numbers of target sequences, enabling applications such as bacterial identification. On-chip PCR allows for 15 × 15 multiplexing or more, meaning that multiple target DNA sequences can be run on the same device at the same time. This multiplexing was made possible with immobilized DNA primer fragments placed in the base of the individual wells of the chips.
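Quantification in digital droplet PCR typically relies on the inverse of the Poisson loading statistics discussed earlier: from the fraction of droplets that come up positive, the average number of template molecules per droplet is back-calculated. The following sketch is a generic illustration with hypothetical droplet counts and volume.

    # Minimal sketch: Poisson back-calculation used in digital droplet PCR.
    # Droplet counts and droplet volume are hypothetical.
    import math

    total_droplets = 20000
    positive_droplets = 2300       # droplets showing amplification
    droplet_volume_ul = 0.0009     # ~0.9 nL per droplet

    fraction_negative = 1 - positive_droplets / total_droplets
    copies_per_droplet = -math.log(fraction_negative)   # lambda from Poisson P(0) = e^-lambda
    copies_per_ul = copies_per_droplet / droplet_volume_ul

    print(f"lambda = {copies_per_droplet:.3f} copies/droplet")
    print(f"concentration ~ {copies_per_ul:.0f} copies/uL of the partitioned sample")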
Combining droplet-based PCR with polydimethylsiloxane (PDMS) devices has allowed for novel enhancements of droplet PCR as well as remedying some preexisting problems with droplet PCR including high liquid loss due to evaporation. Droplet-based PCR is highly sensitive to air bubbles as they create temperature differentials hindering DNA replication while also dislodging reagents from the replication chamber. Now, droplet-based PCR has been carried out in PDMS devices to transfer reagents into droplets through a PDMS layer in a more controlled manner that better maintains replication progress and stability than traditional valves. A recent droplet-PCR PDMS device allowed for higher accuracy and amplification of small copy numbers in comparison to traditional quantitative PCR experiments. This higher accuracy was due to surfactant-doped PDMS as well as a sandwiched glass-PDMS-glass device design. These device properties have allowed for more streamlined priming of DNA and less water evaporation during PCR cycling.
DNA sequencing
Multiple microfluidic systems, including droplet-based systems, have been used for DNA sequencing.
Directed evolution
Directed evolution of an enzyme is an iterative process of creating random genetic mutations, screening for a target phenotype, and selecting the most robust variant(s) for further modification. The ability to use directed evolution to optimize enzymes for biotechnological purposes is largely limited by the throughput of screening tools and methods and the simplicity of their use. Because of the iterative nature of directed evolution and the necessity for large libraries, directed evolution at the macroscale can be a costly endeavor. As such, performing experiments at the microscale through droplet-based microfluidics provides a significantly cheaper alternative to macroscopic equivalents. Various approaches put the cost of directed evolution by droplet microfluidics under $40 for a screen of a 10^6–10^7-member gene library, while the corresponding macroscale experiment is priced at approximately $15 million. Additionally, with sorting rates that range from 300 to 2000 droplets per second, droplet-based microfluidics provides a platform for significantly accelerated library screening, such that gene libraries of 10^7 members can be sorted well within a day. Droplet-based microfluidic devices make directed evolution accessible and cost effective.
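To put the quoted sorting rates in perspective, a simple calculation of the time needed to screen a library at a given droplet sorting rate is sketched below; the oversampling factor is a hypothetical choice, and the rates are taken from the range quoted above.

    # Minimal sketch: time to screen a mutant library by droplet sorting.
    # The sorting rate and oversampling factor are hypothetical examples.

    library_size = 1e7     # distinct variants (10^7, the upper end of the range quoted above)
    oversampling = 3       # droplets screened per variant to cover Poisson loading
    sort_rate_hz = 1000    # droplets sorted per second (within the 300-2000 range quoted)

    droplets_to_screen = library_size * oversampling
    hours = droplets_to_screen / sort_rate_hz / 3600
    print(f"~{hours:.1f} hours to screen the library")   # ~8.3 h at these settings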
Many different approaches to device construction for droplet-based microfluidics have been developed for directed evolution in order to have the capacity to screen a vast variety of different proteins, pathways, and genomes. One method of feeding libraries into the microfluidic device uses single-cell encapsulation, in which droplets contain at most one cell each. This avoids confounding results that could be generated by having multiple cells, and consequently multiple genotypes, in a single droplet, while maximizing the efficiency of resource consumption. This method enables the detection of secreted proteins and proteins on the cell membrane. The addition of a cell lysate to the droplets, which breaks down the cell membrane so that intracellular species are freely available within the droplet, expands the capabilities of the single-cell encapsulation method to the analysis of intracellular proteins. The library can also be made entirely in vitro (i.e., not in its biological/cellular context) such that the content of the droplet is exclusively a mutated DNA strand. The in vitro system requires PCR and the use of in vitro transcription and translation (IVTT) systems to generate the desired protein in the droplet for analysis. Sorting of droplets for directed evolution is primarily done by fluorescence detection (e.g., fluorescence-activated droplet sorting (FADS)); however, recent developments in absorbance-based sorting methods, known as absorbance-activated droplet sorting (AADS), have expanded the diversity of substrates that can undergo directed evolution through a droplet-based microfluidic device. Recently, sorting capability has even been expanded to the detection of NADPH levels and has been used to create higher-activity NADP-dependent oxidoreductases. Ultimately, the potential for different methods of droplet creation and analysis in directed-evolution droplet-based microfluidic devices allows for a variability that facilitates a large population of potential candidates for directed evolution.
As a method of protein engineering, directed evolution has many applications in fields ranging from the development of drugs and vaccines to the synthesis of food and chemicals. A microfluidic device was developed to identify improved enzyme-production hosts (i.e., cell factories) that can be employed industrially in various fields. An artificial aldolase was further enhanced 30-fold using droplet-based microfluidics so that its activity resembled that of naturally occurring proteins. More recently, the creation of functional oxidases has been enabled by a novel microfluidic device created by Debon et al. The droplet-based microfluidic approach to directed evolution has great potential for the development of a myriad of novel proteins.
Recent advances
Since the early 2000s, advances in droplet-based microfluidics have made it a powerful technique for conducting directed evolution campaigns. Early developments in bulk production of single-emulsions (SEs; e.g. "water-in-oil" droplets) and double-emulsions (DEs; e.g. "water-in-oil-in-water" droplets) were followed by innovations in on-chip formation and sorting of SEs and DEs, which allow for greater ease and throughput of directed evolution experiments on microfluidic chips.
An essential component of directed evolution is the maintenance of the linkage between enzymatic genotypes and phenotypes. The ability to form DEs on-chip and subsequently sort them using fluorescence-activated cell sorting (FACS) pushed the field forward. In 2013, Yan et al. showed the use of FACS to sort DEs. In 2014, Zinchenko et al. published a system to formulate monodisperse DEs and to sort and quantitatively analyze them using a commercially available flow cytometer. The authors demonstrated the power of their system by enriching an active wild-type arylsulfatase from populations of 0.1% and 0.01% active cells by 800- to 2500-fold, respectively. In 2016, Larsen et al. developed a fluorescence-based optical sorting system to monitor polymerase activity inside a microfluidic device. Using their system, Larsen and colleagues showed approximately 1200-fold enrichment of an engineered polymerase. After one round of selection of an alpha-L-threofuranosyl nucleic acid (TNA) polymerase, they demonstrated roughly 14-fold improvement in activity and >99% correct placement of residues in a growing polymer. In 2017, S. S. Terekhov et al. developed monodisperse microfluidic double water-in-oil-in-water emulsion (MDE) sorting, which they combined with FACS followed by liquid chromatography-mass spectrometry (LC-MS) and next-generation sequencing (NGS). The authors demonstrated high-sensitivity sorting of enzymatically active yeast cells from non-active cells using fluorescence. Further, they showed the ability of their MDE-FACS system to interrogate interactions between target and effector cells within droplets without interference from other yeast and bacterial cells.
Rather than developing new platforms, some groups have focused on the optimization of existing methods, tools, and platforms to simplify them and improve their ease of use by non-experts. In 2017, Sukovitch et al. created a system to produce monodisperse (approximately equal size) DEs by cutting out the coating process required for DE chips. Various groups have altered surfactant types and concentrations to simplify reagent delivery in SEs and DEs. In 2018, Ma et al. presented a dual-channel microfluidic droplet screening system (DMDS). The system uses fluorogenic tags to sort SEs by two different properties of a target enzyme at the same time. Using DMDS, Ma and coworkers directed the evolution of a highly enantioselective esterase using multiple enzymatic properties. In 2020, Brower et al. demonstrated DE sorting and isolation on-chip followed by FACS, which allows for high sorting throughput of encapsulated mammalian cells, from which genetic material can later be extracted.
While fluorogenic labeling is a powerful tool for tracking and sorting, it is not always compatible with droplet-based microfluidic systems and experimental design. New label-free and non-fluorescence-based detection techniques have recently been reported. In 2016, Gielen et al. published an absorbance-activated droplet sorting (AADS) microfluidic device and demonstrated its functionality by directing the evolution of a phenylalanine dehydrogenase. In 2016, Sun et al. demonstrated the use of SE droplets and high-throughput MS to screen enzyme activators and inhibitors by screening a transaminase library. In 2019, Pan et al. showed sorting of droplets by interfacial tensions, which are affected by droplet content. In 2020, Haidas et al. presented a microfluidic approach that uses both matrix-assisted laser desorption ionization mass spectrometry (MALDI-MS) and fluorescence microscopy, which the authors used to measure the concentration and activity of phytase, respectively, in yeast cells. In 2020, Holland-Moritz et al. published their mass activated droplet sorting (MADS) method, which integrates MS analysis with fluorescence-activated droplet sorting (FADS). Using this method, droplets are split and analyzed separately by both MS and FADS. The power of this method was demonstrated by screening the activity of a transaminase library expressed in vitro.
Experts anticipate that future microfluidic-based innovations for directed evolution campaigns will be driven in the commercial space, resulting in more simple and less expensive methods and tools that can be applied to biotechnologically-relevant enzymes.
Chemical synthesis
Droplet-based microfluidics has become an important tool in chemical synthesis due to several attractive features. Microscale reactions allow for cost reduction through the use of small reagent volumes, rapid reactions on the order of milliseconds, and efficient heat transfer, which leads to environmental benefits because the amount of energy consumed per unit temperature rise can be extremely small. The degree of control over local conditions within the devices often makes it possible to select one product over another with high precision. With high product selectivity and small volumes of reagents and reaction environments come less stringent reaction clean-up and a smaller footprint. Microdispersed droplets created by droplet-based chemistry are capable of acting as environments in which chemical reactions occur and as reagent carriers in the process of generating complex nanostructures. Droplets can also be transformed into cell-like structures that can be used to mimic human biological components and processes.
As a method of chemical synthesis, droplets in microfluidic devices act as individual reaction chambers protected from contamination through device fouling by the continuous phase. Benefits of synthesis using this regime (compared to batch processes) include high throughput, continuous experiments, low waste, portability, and a high degree of synthetic control. Some examples of possible syntheses are the creation of semiconductor microspheres and nanoparticles. Chemical detection is integrated into the device to ensure careful monitoring of reactions; NMR spectroscopy, microscopy, electrochemical detection, and chemiluminescent detection are used. Often, measurements are taken at different points along the microfluidic device to monitor the progress of the reaction.
Increased rate of reactions using microdroplets is seen in the aldol reaction of silyl enol ethers and aldehydes. Using a droplet-based microfluidic device, reaction times were shortened to twenty minutes versus the twenty-four hours required for a batch process. Other experimenters were able to show a high selectivity of cis-stilbene to the thermodynamically favored trans-stilbene compared to the batch reaction, showing the high degree of control afforded by microreactor droplets. This stereocontrol is beneficial to the pharmaceutical industry. For instance, L-Methotrexate, a drug used in chemotherapy, is more readily absorbed than the D isomer.
Liquid crystal microcapsules
Liquid crystals have been a point of interest for over five decades because of their anisotropic properties. Microfluidic devices can be used for the synthesis of confined cholesteric liquid crystals through the layering of multiple shell-like layers consisting of varying oil-in-water and water-in-oil emulsions. The construction of encapsulated liquid crystals through the microfluidic flow of liquid crystal droplets in an immiscible oil allows for monodisperse emulsion-layer thickness and composition unattainable before the advent of microfluidic techniques. The advent of liquid crystal technology has led to advancements in optical displays used both in research and in consumer products, and more recent discoveries have opened the door to applications such as photon upconversion and optical sensors for biological analytes.
Microparticle and nanoparticle synthesis
Advanced particles and particle-based materials, such as polymer particles, microcapsules, nanocrystals, and photonic crystal clusters or beads can be synthesized with the assistance of droplet-based microfluidics. Nanoparticles, such as colloidal CdS and CdS/CdSe core-shell nanoparticles, can also be synthesized through multiple steps on a millisecond time scale in a microfluidic droplet-based system.
Nanoparticles, microparticles and colloidal clusters in microfluidic devices are useful for functions such as drug delivery. The first particles incorporated in droplet-based systems were silica gels in the micrometer size range in order to test their applications in the manufacturing of displays and optical coatings. Mixing solid particles with aqueous microdroplets requires changes to microfluidic channels such as additional reagent mixes and choice of specific materials such as silica or polymers that do not interfere with the channels and any bioactive substances the droplets contain.
The synthesis of copolymers requires milling macroscopic molecules to microparticles with porous, irregular surfaces using organic solvents and emulsification techniques. These droplets preloaded with microparticles can also be quickly processed using UV irradiation. Characterization of these microparticles and nanoparticles involves microimaging for analyzing the structure and the identification of the macroscopic material being milled. Techniques such as the controlled encapsulation of individual gas bubbles to create hollow nanoparticles for synthesizing microbubbles with specific contents are vital for drug delivery systems. Both silica and titanium-based microparticles are used as durable shells after using gas to increase the flow velocity of the aqueous phase. A higher flow velocity allows greater control over the thickness of the aqueous shells. The emerging versatility of nanoparticles can be seen in the delivery of particle-loaded microdroplets being utilized in depot injections for drug delivery rather than the typical approach of injecting drugs intravenously. This is possible due to the low thickness of the shells which typically are in the range of 1 to 50 μm.
More recent advancements in microfluidic particle synthesis have allowed the production of nanometer-sized particles from biologically derived polymers. Using specific flow-focusing multiphase designs that control flow rate and temperature, nanoparticle size can be controlled along with the concentration and configuration of the droplets. Another technique for creating particle-loaded microdroplets is the use of lipid-hydrogel nanoparticles that can be manipulated into more narrowly shaped droplets, which is useful when soft or brittle materials must be used. These soft materials are especially important in the production of powders. Recent advancements on the nanoscale, such as devices that fabricate both spherical and non-spherical droplets that are ultrafast and homogeneously mixed, are being produced for large-scale production of powdered particles in industrial applications.
Monodispersed nanoparticles are also of great interest in catalyst fabrication. The efficiency of many heterogeneous catalytic systems relies on the high surface area of transition metal particles. Microfluidic techniques have been used to fabricate gold nanoparticles through the interfacial interaction of droplets containing gold chloride, hexane, and a reducing agent with a surrounding aqueous phase. This process can also control both the size and shape of nanoparticles/nanosheets with precision and high throughput compared to other methods such as physical vapor deposition.
The use of droplets containing various materials such as silica or transition metals such as gold flowed through an immiscible oil phase has been shown to be effective in controlling both size of nanoparticles as well as pore size, which allows for design of efficient absorptive gas capture devices and heterogeneous catalysts. Monodispersed nanoparticles of gold and silver have been synthesized using gold and silver chloride droplets dosed with a reducing agent to cleave metal-ligand bonds, leading to the agglomeration of monodispersed metal nanoparticles which can be easily filtered out of solution.
Gel particle synthesis
The synthesis of gel particles, also known as hydrogels, microgels, and nanogels, has been an area of interest for researchers and industries alike for the last several decades. A microfluidic approach to synthesizing these hydrogel particles is a useful tool due to high throughput, monodispersity of particles, and cost reduction through the use of small reagent volumes. One of the key challenges early on in the field of gels was forming monodisperse particles. Initially, polymerization-based techniques were used to form bulk microparticles that were polydisperse in size. These techniques generally centered on using an aqueous solution that was mixed vigorously to create emulsions. Eventually a technique was developed to create monodisperse biodegradable microgels by making O/W emulsions in an in-line droplet-generating channel geometry. This junction geometry, accompanied by a surfactant-laden continuous phase, was responsible for creating microgels made from poly-dex-HEMA. Other device geometries, including T-junction style formation, are also viable and have been used to make silica-based gels.
Once these methods were established, efforts focused on adding functionality to these particles. Examples include bacteria-encapsulated particles, drug- or protein-encapsulated particles, and magnetic gel particles. Inserting these functional components into the gel structure can be as simple as integrating the component into the dispersed phase. In some cases, certain device geometries are preferred; for example, a flow-focusing junction was used to encapsulate bacteria in agarose microparticles. Multiple emulsions are of interest for pharmaceutical and cosmetic applications and are formed using two consecutive flow-focusing junctions. More complicated particles can also be synthesized, such as Janus particles, which have surfaces with two or more distinct physical properties.
Some examples of the increasing application of gel particles include drug delivery, biomedical applications, and tissue engineering, and many of these applications require monodisperse particles where a microfluidics-based approach is preferred. Bulk emulsification methods are still relevant, though, since not all applications require uniform microparticles. The future of microfluidic synthesis of gels may lie in developing techniques to create bulk amounts of these uniform particles in order to make them more commercially/industrially available.
Recent developments in droplet microfluidics have also allowed for in situ synthesis of hydrogel fibers containing aqueous droplets with controlled morphology. Hydrogel fibers provide an intriguing option for biocompatible material for drug delivery and bioprinting of materials that can mimic the behavior of an extracellular matrix. This microfluidic method differs from the traditional wet-spinning synthesis route through the use of aqueous droplets in an immiscible oil stream rather than the extrusion of a bulk solution of the same composition mixed off site. The ability to control the size, flow rate, and composition of droplets provides an option to fine tune the morphology of fibers to fit a specific use in bioanalysis and emulation of anatomical functions.
Extraction and phase transfer using droplet microfluidics
Liquid-liquid extraction is a method used to separate an analyte from a complex mixture; with this method, compounds separate based on their relative solubility in different immiscible liquid phases. To overcome some of the disadvantages associated with common bench-top methods, such as the shake-flask method, microfluidic liquid-liquid extraction methods have been employed. Microfluidic droplet-based systems have demonstrated the capability to manipulate discrete volumes of fluids in immiscible phases with low Reynolds numbers and laminar flow regimes. Microscale methods reduce the time required, reduce sample and reagent volumes, and allow for automation and integration. In some studies, the performance of droplet-based microfluidic extraction compares closely with the shake-flask method. One study compared the shake-flask and microfluidic liquid-liquid extraction methods for 26 compounds and found a close correlation between the values obtained (R² = 0.994).
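Shake-flask and microfluidic extraction experiments of this kind are usually reported as partition coefficients (log P or log D), i.e. the logarithm of the ratio of analyte concentration in the organic phase to that in the aqueous phase; a minimal sketch with hypothetical concentrations is given below.

    # Minimal sketch: partition coefficient from measured phase concentrations.
    # Concentration values are hypothetical example data.
    import math

    conc_organic_um = 85.0   # analyte concentration in the organic (e.g. octanol) phase, uM
    conc_aqueous_um = 5.0    # analyte concentration in the aqueous phase, uM

    log_p = math.log10(conc_organic_um / conc_aqueous_um)
    print(f"log P = {log_p:.2f}")   # ~1.23 for these example values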
It has also been demonstrated that microfluidic liquid-liquid extraction devices can be integrated with other instruments for detection of the extracted analytes. For example, microfluidic extraction could be used to extract an analyte initially in an aqueous phase, such as cocaine in saliva, and then be interfaced with on-chip IR spectroscopy for detection. Microfluidic liquid-liquid extraction has been shown to be advantageous in numerous applications, such as pharmacokinetic drug studies where only small cell numbers are needed, and in additional studies where smaller reagent volumes are required.
Droplet detection
Separation methods
Droplet-based microfluidic systems can be coupled to separation methods for specific tasks. Common separation techniques coupled to droplet-based microfluidic systems include high-performance liquid chromatography (HPLC) and electrophoresis.
High-performance liquid chromatography
Many forms of chromatography, including high-performance liquid chromatography (HPLC), nanoflow ultra-performance liquid chromatography (nano-UPLC or nano-LC), and 2-dimensional capillary flow chromatography (capillary LC), have been integrated into the field of droplet-based microfluidics. On the microscale, chemical separation techniques like HPLC can be used in both biological and chemical analysis. Within the field of microfluidics, these techniques have been applied to microfluidic systems at three different stages in the microfluidic process. Off-chip HPLC columns are used to separate analytes before feeding them into a microfluidic device for fractionation and analysis. HPLC columns can also be built directly into microfluidic lab-chips creating monolithic hybrid devices capable of chemical separation as well as droplet formation and manipulation. Additionally, HPLC is used at the tail end of droplet-based microfluidic chemistry as a way to purify, analyze, and quantify the products of an experiment.
Droplet-based microfluidic devices coupled to HPLC have high detection sensitivity, use low volumes of reagents, have short analysis times, and exhibit minimal cross-contamination of analytes, which make them efficient in many respects. However, there are still problems associated with microscale chromatography, such as dispersion of separated bands, diffusion, and "dead volume" in channels after separation. One way to bypass these issues is the use of droplets to compartmentalize separation bands, which combats diffusion and the loss of separated analytes. In early attempts to integrate chromatography with droplet microfluidics, the lower flow rates and pressures required for 2-D capillary LC provided less of an obstacle to overcome in combining these technologies and made it possible to couple multiple 2-D separation techniques into one device (i.e. HPLC x LC, LC x LC, and HPLC x HPLC). HPLC autosamplers feeding into microfluidic devices have taken advantage of the dispersion occurring between separation and droplet formation to feed gradient pulses of analytes into microfluidic devices, where the production of thousands of picoliter droplets captures unique analyte concentrations. Similar approaches have used the withdrawal capabilities of a syringe pump to align the relatively high flow rates necessary for HPLC with the lower flow rates of the continuous medium common in microfluidic devices. The development of nano-LC, or nano-UPLC, has provided another opportunity for coupling with microfluidic devices, such that large droplet libraries can be formed with multiple dimensions of information stored in each droplet. Instead of identifying peaks and storing them as a single sample, as in standard LC, these droplet libraries allow the specific concentration of the analyte to be retained along with its identity. Moreover, the ability to perform high-frequency fractionation immediately from the eluent of a nano-LC column has greatly increased peak resolution and improved the overall separation quality compared to continuous-flow nano-LC devices.
An HPLC column was first built directly into a microfluidic device by using TPE instead of PDMS for the device fabrication. The additional strength of TPE made it capable of supporting the higher pressures needed for HPLC such that a single, microfluidic lab-chip could perform chemical separation, fractionation, and further droplet manipulation. In order to increase the quality of chromatographic output, sturdier devices made of glass have shown the ability to withstand far greater pressure than TPE. Achieving these higher pressures to increase the degree of separation and eliminating all dead volumes through immediate droplet formation has shown the potential for droplet microfluidics to expand and improve the capabilities of HPLC separations.
Electrophoresis
Capillary electrophoresis (CE) and microcapillary gel electrophoresis (μCGE) are well-recognized microchip electrophoresis (MCE) methods that can provide numerous analytical advantages including high resolution, high sensitivity, and effective coupling to mass spectrometry (MS). Microchip electrophoresis can be applied generally as a method for high-throughput screening processes that help discover and evaluate drugs. Using MCE, specifically CE, microcapillary gel electrophoresis (μCGE) devices are created to perform high-number DNA sample processing, which makes it a good candidate for DNA analysis. μCGE devices are also practical for separation purposes because they use online separation, characterization, encapsulation, and selection of differing analytes originating from a composite sample. All of these advantages of MCE methods translate to microfluidic devices. The reason MCE methods are coupled to droplet-based microfluidic devices is because of the ability to analyze samples on the nanoliter scale. Using MCE methods on a small scale reduces cost and reagent use. Similarly to HPLC, fluorescence based detection techniques are used for capillary electrophoresis, which make these methods practical and can be applied to fields such as biotechnology, analytical chemistry, and drug development. These MCE and other electrophoresis based methods began to develop once capillary electrophoresis gained popularity in the 1980s and gained even more attention in the early 1990s, as it was reviewed nearly 80 times by the year 1992.
Mass spectrometry
Mass spectrometry (MS) is a near-universal detection technique that is recognized throughout the world as the gold standard for identification of many compounds. MS is an analytical technique in which chemical species are ionized and sorted before detection, and the resulting mass spectrum is used to identify the ions' parent molecules. This makes MS, unlike other detection techniques (such as fluorescence), label-free; i.e., there is no need to bind additional ligands or groups to the molecule of interest in order to receive a signal and identify the compound.
There are many cases in which other spectroscopic methods, such as nuclear magnetic resonance (NMR), fluorescence, infrared, or Raman, are not viable as standalone methods due to the particular chemical composition of the droplets. Often, these droplets are sensitive to fluorescent labels, or contain species that are otherwise indeterminately similar, where MS may be employed along with other methods to characterize a specific analyte of interest. However, MS has only recently (in the past decade) gained popularity as a detection method for droplet-based microfluidics (and microfluidics as a whole) due to challenges associated with coupling mass spectrometers with these miniaturized devices. The difficulty of separation and purification at these scales makes entirely microfluidic systems coupled to mass spectrometry ideal in the fields of proteomics, enzyme kinetics, drug discovery, and newborn disease screening. The two primary methods of ionization for mass analysis used in droplet-based microfluidics today are matrix-assisted laser desorption/ionization (MALDI) and electrospray ionization (ESI). Additional methods for coupling, such as (but not limited to) surface acoustic wave nebulization (SAWN), and paper-spray ionization onto miniaturized MS, are being developed as well.
Electrospray ionization
One complication posed by the coupling of MS to droplet-based microfluidics is that the dispersed samples are produced at low flow rates compared to traditional MS-injection techniques. ESI is able to easily accept these low flow rates and is now commonly exploited for on-line microfluidic analysis. ESI and MALDI offer a high-throughput answer to the problem of label-free droplet detection, but ESI requires less intensive sample preparation and fabrication elements that are scalable to the microfluidic device scale. ESI involves the application of a high voltage to a carrier stream of analyte-containing droplets, which aerosolizes the stream, followed by detection at a potential-differentiated analyser region. The carrier fluid within a droplet-based microfluidic device, typically an oil, is often an obstacle within ESI. The oil, when part of the flow of droplets going into an ESI-MS instrument, can cause a constant background voltage interfering with the detection of sample droplets. This background interference can be rectified by changing the oil used as a carrier fluid and by adjusting the voltage used for the electrospray.
Droplet size, Taylor cone shape, and flow rate can be controlled by varying the potential differential and the temperature of a drying (to evaporate analyte-surrounding solvent) stream of gas (usually nitrogen). Because ESI allows for online droplet detection, other problems posed by segmented or off-chip detection based systems can be solved, such as the minimizing of sample (droplet) dilution, which is especially critical to microfluidic droplet detection where analyte samples are already diluted to the lowest experimentally relevant concentration.
Matrix-assisted laser desorption/ionization
MALDI is typified by the use of an ultraviolet (UV) laser to trigger ablation of analyte species that are mixed with a matrix of crystallized molecules with high optical absorption. The ions within the resulting ablated gasses are then protonated or deprotonated before acceleration into a mass spectrometer. The primary advantages of MALDI detection over ESI in microfluidic devices are that MALDI allows for much easier multiplexing, which even further increases the device's overall throughput, as well as less reliance on moving parts, and the absence of Taylor cone stability problems posed by microfluidic-scale flow rates. The speed of MALDI detection, along with the scale of microfluidic droplets, allows for improvements upon macro-scale techniques in both throughput and time-of-flight (TOF) resolution. Where typical MS detection setups often utilize separation techniques such as chromatography, MALDI setups require a sufficiently purified sample to be mixed with pre-determined organic matrices, suited for the specific sample, prior to detection. MALDI matrix composition must be tuned to produce appropriate fragmentation and ablation of analytes.
One method to obtain a purified sample from droplet-based microfluidics is to end the microfluidic channel onto a MALDI plate, with aqueous droplets forming on hydrophilic regions on the plate. Solvent and carrier fluid are then allowed to evaporate, leaving behind only the dried droplets of the sample of interest, after which the MALDI matrix is applied to the dried droplets. This sample preparation has notable limitations and complications, which are not currently overcome for all types of samples. Additionally, MALDI matrices are preferentially in much higher concentrations than the analyte sample, which allows for microfluidic droplet transportation to be incorporated into online MALDI matrix production. Due to the low number of known matrices and trial and error nature of finding appropriate new matrix compositions, this can be the determining factor in the use of other forms of spectroscopy over MALDI.
Raman spectroscopy
Raman spectroscopy is a spectroscopic technique that provides non-destructive analysis capable of identifying components within mixtures with chemical specificity without complex sample preparation. Raman spectroscopy relies on photon scattering following visible light radiation, where the shift in photon energies corresponds to information about the system's vibrational modes and their frequencies. Upon obtaining vibrational mode frequencies, qualitative classifications about the system can be both made and reinforced.
Raman spectroscopy works well in parallel with microfluidic devices for many qualitative biological applications. For some applications, Raman spectroscopy is preferred over other detection methods such as infrared (IR) spectroscopy as water has a strong interference signal with IR but not with Raman. Likewise, methods such as high-performance liquid chromatography (HPLC), nuclear magnetic resonance (NMR), mass spectrometry (MS), or gas chromatography (GC) are also not ideal as these methods require larger sample sizes. Since microfluidics enables experiments with small volumes (including analysis of single cells or few cells), Raman is a leading microfluidic detection method. Specifically, Raman integration with microfluidic devices has strong applications in systems where lipid identification is necessary, common in biofuel research. For example, a lipid fluorescent assay is not selective enough and thus cannot identify molecular differences the way Raman can through molecular vibrations. Raman, when coupled with microfluidic devices, can also monitor fluid mixing and trapping of liquids and can also detect solid and gas phases within microfluidic platforms, an ability that is applicable to the study of gas-liquid solubility.
Raman spectroscopy in microfluidic devices is applied and detected using either integrated fiber optics within a microfluidic chip or by placing the device on a Raman microscope. Furthermore, some microfluidic systems utilize metallic colloids or nanoparticles within solution to capitalize on surface-enhanced Raman spectroscopy (SERS). SERS can improve Raman scattering by up to a factor of 10¹¹ by forming charge-transfer complexes on the surfaces. It follows that these devices are commonly fabricated out of nanoporous polycarbonate membranes, allowing for easy coating of nanoparticles. However, if fabricated out of polydimethylsiloxane (PDMS), signal interference with the Raman spectrum can occur. PDMS generates a strong Raman signal which can easily overpower and interfere with the desired signal. A common solution for this is fabricating the microfluidic device such that a confocal pinhole can be used for the Raman laser. Typical confocal Raman microscopy allows for spectroscopic information from small focal volumes less than 1 micron cubed, and thus smaller than the microfluidic channel dimensions. Raman signal is inherently weak; therefore, for short detection times at small sample volumes in microfluidic devices, signal amplification is utilized. Multi-photon Raman spectroscopy, such as stimulated Raman spectroscopy (SRS) or coherent anti-Stokes Raman spectroscopy (CARS), helps enhance signals from substances in microfluidic devices.
For droplet-based microfluidics, Raman detection provides online analysis of multiple analytes within droplets or continuous phase. Raman signal is sensitive to concentration changes, therefore solubility and mixing kinetics of a droplet-based microfluidic system can be detected using Raman. Considerations include the refractive index difference at the interface of the droplet and continuous phase, as well as between fluid and channel connections.
Fluorescent detection
Fluorescence spectroscopy is one of the most common droplet detection techniques. It provides a rapid response, and, for applicable analytes, it has a strong signal. The use of fluorescence spectroscopy in microfluidics follows a similar format to most other fluorescent analytical techniques. A light source is used to excite analyte molecules in the sample, after which the analyte fluoresces, and the fluorescence response is the measured output. Cameras can be used to capture the fluorescence signal of the droplets, and filters are often used to filter out scattered excitation light. In microfluidic droplet detection, the experimental setup of a fluorescence instrument can vary greatly. A common setup in fluorescent droplet detection is with the use of an epifluorescence microscope. This sometimes utilizes a confocal geometry, which can vary depending on experimental needs. For example, Jeffries et al. reported success with exploring an orthogonal confocal geometry, as opposed to a standard epi geometry. However, other setups for fluorescence detection have been explored, as epifluorescence microscopes can be expensive and difficult to upkeep. Cole et al. have proposed and tested an experimental setup with fiber optics to conduct fluorescence analysis of microfluidic droplets.
Fluorescence detection of droplets has a number of advantages. First, it can accommodate a large and fast throughput. Analysis of thousands of samples can be conducted in a short period of time, which is advantageous for the analysis of a large number of samples. Another advantage is the accuracy of the method. In an analysis performed by Li et al., it was found that use of fluorescence detection techniques yielded 100% detection accuracy in 13 of 15 collected images. The remaining two had relative errors around 6%. Another advantage of fluorescence detection is that it allows for quantitative analysis of droplet spacing in a sample. This is done by use of temporal measurements and the flow velocity of the analyte. The time spacing between signals allows for calculation of droplet spacing. Further fluorescence analysis of microfluidic droplet samples can be used to measure the fluorescent lifetime of samples, providing additional information that is not obtainable from fluorescence intensity measurements alone.
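The droplet-spacing calculation described above is straightforward to automate. The following is a minimal sketch, not taken from the cited studies, of turning droplet arrival times extracted from a fluorescence trace into physical spacing; the function name, threshold, sampling rate, flow velocity, and synthetic trace are all illustrative assumptions.

```python
import numpy as np

def droplet_spacing_from_trace(signal, sample_rate_hz, flow_velocity_um_s, threshold):
    """Estimate droplet spacing from a fluorescence intensity trace.

    Each droplet passing the detection spot produces a burst of fluorescence;
    the time between successive bursts multiplied by the flow velocity gives
    the centre-to-centre droplet spacing.
    """
    above = signal > threshold                              # True while a droplet is in the detection spot
    rising = np.flatnonzero(~above[:-1] & above[1:]) + 1    # indices where a burst begins
    arrival_times_s = rising / sample_rate_hz
    dt_s = np.diff(arrival_times_s)                         # time spacing between droplets
    spacing_um = dt_s * flow_velocity_um_s                  # convert to physical spacing
    return arrival_times_s, spacing_um

# Synthetic example: 1 ms wide bursts every 10 ms, sampled at 50 kHz, at a flow of 2 mm/s.
rng = np.random.default_rng(0)
t = np.arange(0, 0.1, 1 / 50_000)
trace = 0.05 * rng.random(t.size)
trace[(t * 1000) % 10 < 1] += 1.0
_, spacing = droplet_spacing_from_trace(trace, 50_000, 2_000.0, 0.5)
print(spacing)   # roughly 20 µm between droplet centres
```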
The applications of fluorescence detection are varied, with many of its uses centered in biological applications. Frenz et al. utilized fluorescence detection of droplets to examine enzyme kinetics. For this experiment, β-lactamase interacted with fluorocillin, a fluorogenic substrate. Fluorescence of the droplets was measured at multiple time intervals to examine the change with time. This detection method goes beyond biological applications, though, and allows for the physical study of droplet formation and evolution. For example, Sakai et al. used fluorescence detection to monitor droplet size. This was done by collecting fluorescence data to calculate the concentration of a fluorescent dye within a single droplet, thus allowing size growth to be monitored. The use of fluorescence detection techniques can be expanded into applications beyond data collection; a widely used method of cell and droplet sorting in microfluidics is fluorescence-activated sorting, where droplets are sorted into different channels or collection outlets based on their fluorescence intensity.
Fluorescent quantum dots have been used to develop biosensing platforms and drug delivery in microfluidic devices. Quantum dots are useful due to their small size, precise excitation wavelength, and high quantum yield. These are advantages over traditional dyes which may interfere with the activity of the studied compound. However, the bulk creation and conjugation of quantum dots to molecules of interest remains a challenge. Microfluidic devices that conjugate nucleotides with quantum dots have been designed to solve this issue by significantly reducing the conjugation time from two days to minutes. DNA-quantum dot conjugates are of importance to detect complementary DNA and miRNA in biological systems.
Electrochemical detection
Electrochemical detection serves as an inexpensive alternative to not only measure chemical composition in certain cases, but also droplet length, frequency, conductivity, and velocity at high speeds, usually while taking up very little space on the chip. The method was first discussed in Luo et al. wherein the team was able to successfully measure the size and ion concentration in picoliter droplets containing dissolved NaCl ions. It is usually performed with a set or series of microelectrodes which measure the perturbations of current, with smaller drops giving smaller perturbations and larger drops giving longer curves. The number of perturbations in the current can also indicate the frequency of the droplets passing the electrode, as a way to determine the rate of droplets as well. Several different compounds have been suggested for use within the electrodes, as accurate, precise, and significant readings can be difficult to obtain at the microscale. These compounds range from carbon paste electrodes that are applied directly to the chip, to platinum black electrodeposited on platinum wire in tandem with a silver chloride on silver microelectrode to increase activity and surface area.
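As a rough illustration of how droplet frequency and relative size can be read off such a current trace, the sketch below thresholds the deviation of the current from its baseline and counts the resulting perturbations, using each perturbation's duration as a proxy for droplet length; the names, the thresholding scheme, and the choice of proxy are illustrative assumptions rather than the method of any particular study.

```python
import numpy as np

def analyze_current_trace(current, sample_rate_hz, baseline, threshold):
    """Count droplet-induced perturbations in a chronoamperometric current trace.

    Each droplet passing over the microelectrodes perturbs the measured current;
    the number of perturbations per unit time gives the droplet frequency, and
    longer droplets cover the electrodes for longer, giving longer deviations.
    """
    current = np.asarray(current, dtype=float)
    perturbed = np.abs(current - baseline) > threshold
    edges = np.diff(perturbed.astype(int))
    starts = np.flatnonzero(edges == 1) + 1    # a perturbation begins
    ends = np.flatnonzero(edges == -1) + 1     # a perturbation ends
    if ends.size and starts.size and ends[0] < starts[0]:
        ends = ends[1:]                        # trace started mid-perturbation; drop the orphan end
    n = min(starts.size, ends.size)            # keep only complete perturbations
    durations_s = (ends[:n] - starts[:n]) / sample_rate_hz
    frequency_hz = n / (current.size / sample_rate_hz)
    return frequency_hz, durations_s
```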
As for chemical composition, readings are achieved through chronoamperometric analysis of electro-active compounds within the droplets as stated above. The potential varies depending on the electrically active ions, dissolved sodium and chlorine ions in this experiment, and their concentrations within each droplet. Another group demonstrated, with a series of controls, that mixed droplet composition involving potassium iodide was detected accurately on the time scale of seconds with optimal voltage, velocity, and pH ranges. In addition to this, a more unusual approach is developing within chronoamperometric readings, where magneto-fluidic systems have been created and the potential readings are measured in otherwise electro-inactive fluids by the dissolution of magnetic microparticles into the reagent. This method is enhanced in a digital microfluidic (DMF) setting, where gold and silver electrodes in conjunction with dissolved magnetic microparticles in the fluids replaced the typical fluorescence-based detection of droplets in the immunoassay of biomarker analytes.
The above experiment by Shamsi et al. alludes to the main use for electrochemical detection in microfluidics: biosensing for various measurements such as enzyme kinetics and biological assays of many other types of cells. Increased control over the system is needed for these processes, as with increasing flow rate, enzyme detection decreases. As an enzymatic reaction progresses, however, the amperometric reading evolves as well, allowing for rapid monitoring of the kinetics. Also, specific surfactants can lack biocompatibility with the system, affecting the enzyme and skewing detection. The reaches of this application have even had effects in aquaculture and economics, as electrochemical sensing has been used to test the freshness of fish rapidly. Use of this detection method is primarily found in electrowetting-on-dielectric DMFs, where the sensing electrode apparatus can be reconfigurable and has a longer lifetime while still producing accurate results.
References
Cell culture techniques
Microfluidics | Droplet-based microfluidics | Chemistry,Materials_science,Biology | 15,959 |
11,465,653 | https://en.wikipedia.org/wiki/Physopella%20ampelopsidis | Physopella ampelopsidis is a plant pathogen.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Pucciniales
Fungi described in 1958
Fungus species | Physopella ampelopsidis | Biology | 44 |
70,084,116 | https://en.wikipedia.org/wiki/The%20Bottle%20Houses | The Bottle Houses in Prince Edward Island (PEI), also known as the symphony of color and light, were built by Edouard Arsenault, beginning in 1980. They are made of thousands of recycled colored glass bottles and are a popular tourist site in PEI. Bottle wall construction is the process of building a structure, usually housing, with glass or plastic bottles and a binding material. This sustainable building type helps reduce the chances of bottles being dumped at landfills and promotes reuse of "waste" material. The bottle houses on the Island offer many benefits to locals and visitors, such as their aesthetic style, low cost of production, renewable-resource architecture, and sustainability.
Introduction
Architecture today pushes the notion of creating buildings that are sustainable and resilient, whether it be by reducing the non-renewable resources needed to keep the building running, using passive architectural methods to reduce the use of electricity and fuel, or by conserving water. Several rating and certification programs, such as LEED, BREEAM, and Energy Star, have been put in place to promote sustainability and hold designers accountable for the long-term goal of sustaining the planet for future generations. The Bottle Houses on Prince Edward Island (P.E.I.), designed by Edouard Arsenault starting in 1980, portray sustainable architecture in a unique way. They promote the use of recycled materials, glass bottles in this case, to create three resilient structures that not only provide shelter but also euphoric moments for the people of the community and the tourists who visit the spaces.
History
Bottle wall construction is the process of building a structure, usually housing, with glass or plastic bottles and binding material. This sustainable building type helps reduce the chances of bottles being dumped at landfills and promotes the reuse of "waste" material. Its benefits range from aesthetic style and low cost of production to renewable architecture and sustainability, carrying out the three steps of effective waste management: reduce, reuse, and recycle. Bottle house construction has existed since ancient Rome; an early example in the United States was constructed by William F. Peck in Tonopah, Nevada. At the time these houses were not constructed for aesthetic purposes like they are today; they were appreciated for their ability to reduce the load on the upper levels of houses and reduce the amount of concrete needed. The building was made with 10,000 bottles of J. Hostetter's Stomach Bitters, medicine bottles that were initially reused by being refilled with glop and resold by shady dealers. The demand for this type of vernacular architecture arose due to the short supply of construction materials and the rising population levels in the area. Peck took it upon himself to construct these buildings to tackle the housing demand-supply issue in the town, using mortar as a binding element, lime as an interior finish and the bottles as insulators to keep the homes warm in the harsh winter.
Bottle House, Prince Edward Island
The inspiration for the Bottle Houses, P.E.I. was taken from another glass house in British Columbia. Edouard Arsenault eventually built three structures, which have become a tourist landmark for the locals in the Cape Egmont community. He sourced his bottles from various trash sites around him and even gained his community's support, as residents began to donate their bottles to help his project. The structures were built from the ground up and bound with cement to form the shape of houses. As light shines through them they create kaleidoscopic patterns and stained-glass colors on the floor, creating a magical and euphoric feeling for the visitors that enter the space, which is why the site is named "the symphony of color and light".
Life of Edouard Arsenault
Edouard Arsenault, the architect of the bottle village, was born in 1914 to Emmanuel and Roseline Arsenault. He spent most of his time in Cap-Egmont before he moved to the United Kingdom to serve in the Second World War. He started off as a fisherman, a protégé under his father, and then began to repair and build boats. In 1948 he married Rosina Leclerc, who later bore his first two children, Yvette and Rejanee. The family lived in the Cap-Egmont lighthouse, where he served as the resident lighthouse keeper until the lighthouse became automated and they moved onto the grounds of the current bottle houses in P.E.I., where Rosina bore two more children, Maurice and Pierre. During his retirement from being the lighthouse keeper he had the vision to create the bottle houses, which consist of a six-gabled house, a tavern and a chapel; the chapel was only completed after his death.
The glass village
He received the inspiration to design the houses after receiving a postcard from his daughter in 1979 of a glass castle in British Columbia which she had visited. He began to collect over 25,000 recycled bottles from the community and dumps to build the spaces. After he retrieved them, he spent the winter in the basement cleaning the bottles, removing the labels, and planning the project, which he began to build in 1980 at the age of 66. The bottle house village comprises a six-gabled house, tavern, chapel, gift shop and a replica of the lighthouse he worked in. The gabled house was built using about 12,000 bottles that form its three main sections. Arsenault carefully picked the size and colors of the bottles to create unique patterns on the building's façade as well as in the rooms when light shines through them. He built it by cementing 300 to 400 bottles per row, using about 85 bags of cement as the binding material, over a six-month period before it was opened to the public in 1981. He then designed the hexagon-shaped tavern in 1982, which used only 8,000 bottles; harsh winter conditions made it necessary to rebuild the tavern in 1993, although the original roof and central cylinder were salvaged. The third building, the chapel, was built in 1983 with 10,000 bottles. At sunset, colorful streams of light shine in from behind the altar, creating a feeling of peace for guests as they take in the atmosphere of the site.
The garden and site
The buildings sit around several Acadian gardens, trees, a pond, and bottle tree structures. It also sits by a wood carved structure of a woman's face, a local gift shop, and a miniature replica of the lighthouse Arsenault tended to before his retirement. Apart from constructing the buildings Arsenault carried out the gardening and landscaping on the site. He spent his time after retiring planting trees on the site, laying out the stonework and designing the flower beds. His Acadian roots prompted his commitment to developing the Cap-Egmont, Evangeline area; he dedicated his retirement to designing artefacts that made his home community special and unique, "radiating a symphony of sunlight-powered colors in a tranquil garden setting with a goldfish pond and a fountain" to create a space for tourists to enjoy nature and the serene sounds of the water. The views captured from the site are of the ocean, and a mini replica of the Cape-Egmont Lighthouse that he tended.
Construction process
The bottles were sourced from various trash sites around the town and from the community, as residents began to donate their own bottles to help the project. The structures were built from the ground up, bound with cement to form the shape of houses. As light shines through them they create kaleidoscopic patterns and stained-glass colors on the cement floor, creating a magical and euphoric feeling for the visitors that enter the space, which is why the site is named "the symphony of color and light".
Advantages of vernacular architecture
The bottle wall or bottle house technique provides various advantages for the glass houses: sustainability, aesthetics, cost-effective waste management, and bullet resistance. In terms of aesthetics, bottle house construction is beneficial for a small community like the Cape-Egmont community; it becomes a unique attraction which brings in tourists and improves the local economy. Its structural components consist of the binder, typically a mortar or clay, and glass bottles, making the walls structurally sound, stable, and able to resist bullet shots. In terms of cost-effective waste management, millions of water and wine bottles are discarded into landfills yearly, so reusing this material turns discarded bottles into eco-bricks, which saves cost and reduces the energy consumed in producing concrete. Arsenault's artefacts brought life and personality to the community, bringing its locals and visitors together to witness a unique and significant landmark, the bottle house village. The buildings are not only effective for their sustainable qualities but so is the site as a whole, which promotes healthy living within nature as well as taking care of the environment and the community.
References
External links
Wikipedia Student Program
Bottle houses
Roadside attractions in Canada
Acadian culture
Buildings and structures in Prince County, Prince Edward Island
1980 establishments in Canada | The Bottle Houses | Engineering | 1,811 |
790,061 | https://en.wikipedia.org/wiki/Centimetre%20or%20millimetre%20of%20water | A centimetre or millimetre of water (US spelling centimeter or millimeter of water) are less commonly used measures of pressure based on the pressure head of water.
Centimetre of water
A centimetre of water is a unit of pressure. It may be defined as the pressure exerted by a column of water of 1 cm in height at 4 °C (temperature of maximum density) at the standard acceleration of gravity, so that 1 cmH2O = 999.9720 kg/m3 × 9.80665 m/s2 × 1 cm ≈ 98.0638 Pa, but conventionally a nominal maximum water density of 1000 kg/m3 is used, giving 98.0665 Pa.
The centimetre of water unit is frequently used to measure the central venous pressure, the intracranial pressure while sampling cerebrospinal fluid, as well as determining pressures during mechanical ventilation or in water supply networks (then usually in metres water column). It is also a common unit of pressure in the speech sciences. This unit is commonly used to specify the pressure to which a CPAP machine is set after a polysomnogram.
1 cmH2O = 98.0665 pascals
 = 0.01 metre water (mH2O), metre water column (m wc) or metre water gauge (m wg)
 = 10 mm wg
 = 0.980665 mbar or hPa
 ≈ 0.3937 inH2O
 ≈ 0.000967841 atm
 ≈ 0.735559 torr
 ≈ 0.735559 mm Hg
 ≈ 0.028959 inHg
 ≈ 0.0142233 psi
Millimetre of water
Millimetre of water (US spelling millimeter of water) is a unit of pressure. It may be defined as the pressure exerted by a column of water of 1 mm in height at 4 °C (temperature of maximum density) at the standard acceleration of gravity, so that 1 mmH2O = 999.9720 kg/m3 × 9.80665 m/s2 × 1 mm ≈ 9.8064 Pa, but conventionally a nominal maximum water density of 1000 kg/m3 is used, giving 9.80665 Pa.
1 mmH2O = 9.80665 pascals
 = 0.001 metre water (mH2O), metre water column (m wc) or metre water gauge (m wg)
 = 0.1 cm wg
 = 0.0980665 mbar or hPa
 ≈ 0.03937 inH2O
 ≈ 0.0000967841 atm
 ≈ 0.0735559 torr
 ≈ 0.0735559 mmHg
 ≈ 0.0028959 inHg
 ≈ 0.00142233 psi
In limited and largely historic contexts it may vary with temperature, using the equation:
P = ρ·g·h/1000,
where
P: pressure in Pa
ρ: density of water (conventionally 1000 kg/m3 at 4 °C)
g: acceleration due to gravity (conventionally 9.80665 m/s2 but sometimes locally determined)
h: water height in millimetres.
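A minimal sketch of this conversion in code, using the conventional density and standard gravity given above (the function and variable names are illustrative):

```python
WATER_DENSITY = 1000.0      # kg/m^3, conventional nominal maximum density of water
STANDARD_GRAVITY = 9.80665  # m/s^2, standard acceleration of gravity

def mm_water_to_pascal(h_mm, density=WATER_DENSITY, g=STANDARD_GRAVITY):
    """Pressure in pascals exerted by a water column h_mm millimetres high."""
    return density * g * h_mm / 1000.0  # divided by 1000 because h is given in mm

print(mm_water_to_pascal(1))    # 9.80665 Pa  (1 mmH2O)
print(mm_water_to_pascal(10))   # 98.0665 Pa  (1 cmH2O)
```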
The unit is often used to describe how much water rainwear or other outerwear can take or how much water a tent can resist without leaking.
See also
Inch of mercury
Inch of water
Millimetre of mercury
References
External links
Pressure conversion calculator at Cornell University website
Units of pressure | Centimetre or millimetre of water | Mathematics | 641 |
1,904,332 | https://en.wikipedia.org/wiki/Brassica%20rapa | Brassica rapa is a plant species that has been widely cultivated into many forms, including the turnip (a root vegetable), komatsuna, napa cabbage, bomdong, bok choy, and rapini.
Brassica rapa subsp. oleifera is an oilseed commonly known as turnip rape, field mustard, bird's rape, and keblock. Rapeseed oil is a general term for oil from some Brassica species. Food grade oil made from the seed of low-erucic acid Canadian-developed strains is also called canola oil, while non-food oil is called colza oil. Canola oil can be sourced from Brassica rapa and Brassica napus, which are commonly grown in Canada, and Brassica juncea, which is less common.
History
The geographic and genetic origins of B. rapa have been difficult to identify due to its long history of human cultivation. It is found in most parts of the world, and has returned to the wild many times as a feral plant or weed.
Genetic sequencing and environmental modelling have indicated that ancestral B. rapa likely originated 4000 to 6000 years ago in the Hindu Kush area of Central Asia, and had three sets of chromosomes, providing the genetic potential for a diversity of form, flavour, and growth. Domestication has produced modern vegetables and oil-seed crops, all with two sets of chromosomes.
Oilseed subspecies (subsp. oleifera) of Brassica rapa may have been domesticated several times from the Mediterranean to India, starting as early as 2000 BC. There are descriptions of B. rapa vegetables in Indian and Chinese documents from around 1000 BC.
Edible turnips were possibly first cultivated in northern Europe, and were an important food in ancient Rome. The turnip then spread east to China, and reached Japan by 700 AD.
In the 18th century, the turnip and the oilseed-producing variants were thought to be different species by Carl Linnaeus, who named them B. rapa and B. campestris. Twentieth-century taxonomists found that the plants were cross fertile and thus belonged to the same species. Since the turnip had been named first by Linnaeus, the name Brassica rapa was adopted.
Uses
Many butterflies, including the small white, feed from and pollinate the B. rapa flowers.
The young leaves are a common leaf vegetable and can be eaten raw; older leaves are typically cooked. The taproot and seeds can also be eaten raw, although the seeds contain an oil that can cause irritation for some people.
Cultivars
References
External links
PROTA (Plant Resources of Tropical Africa) database record on Brassica rapa L.
rapa
Leaf vegetables
Plants described in 1753
Taxa named by Carl Linnaeus
Root vegetables
Space-flown life | Brassica rapa | Biology | 572 |
58,447,785 | https://en.wikipedia.org/wiki/Aspergillus%20robustus | Aspergillus robustus is a species of fungus in the genus Aspergillus. It has phototropic conidiophores. The species was first described in 1978. The genome of A. robustus was sequenced in 2016 as part of the Aspergillus whole-genome sequencing project, a project dedicated to performing whole-genome sequencing of all members of the Aspergillus genus. The genome assembly size was 33.14 Mbp.
Growth and morphology
Aspergillus robustus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
robustus
Fungi described in 1978
Fungus species | Aspergillus robustus | Biology | 163 |
20,652,422 | https://en.wikipedia.org/wiki/Yakov%20Eliashberg | Yakov Matveevich Eliashberg (also Yasha Eliashberg; born 11 December 1946) is an American mathematician who was born in Leningrad, USSR.
Education and career
Eliashberg received his PhD, entitled Surgery of Singularities of Smooth Mappings, from Leningrad University in 1972, under the direction of Vladimir Rokhlin.
Due to the growing anti-Semitism in the Soviet Union, from 1972 to 1979 he had to work at the Syktyvkar State University in the isolated Komi Republic. In 1980 Eliashberg returned to Leningrad and applied for a visa, but his request was denied and he became a refusenik until 1987. He was cut off from mathematical life and prevented from working in academia, but due to a friend's intercession, he managed to secure a job in industry as the head of a computer software group.
In 1988 Eliashberg managed to move to the United States, and since 1989 he has been Herald L. and Caroline L. Ritch Professor of Mathematics at Stanford University. Between 2001 and 2002 he was Distinguished Visiting Professor at the Institute for Advanced Study.
Awards
Eliashberg received the "Young Mathematician" Prize from the Leningrad Mathematical Society in 1972. He was an invited speaker at the International Congress of Mathematicians in 1986, 1998 and 2006 (plenary lecture). In 1995 he was a recipient of the Guggenheim Fellowship.
In 2001 Eliashberg was awarded the Oswald Veblen Prize in Geometry from the AMS for his work in symplectic and contact topology, in particular for his proof of the symplectic rigidity and the development of 3-dimensional contact topology.
In 2002 Eliashberg was elected to the National Academy of Sciences of the US and in 2012 he became a fellow of the American Mathematical Society. He also was a member of the Selection Committee in mathematical sciences of the Shaw Prize. He received a Doctorat Honoris Causa from the ENS Lyon in 2009 and from the University of Uppsala in 2017.
In 2013 Eliashberg shared with Helmut Hofer the Heinz Hopf Prize from the ETH, Zurich, for their pioneering research in symplectic topology. In 2016 Yakov Eliashberg was awarded the Crafoord Prize in Mathematics from the Swedish Academy of Sciences for the development of contact and symplectic topology and groundbreaking discoveries of rigidity and flexibility phenomena.
In 2020 he received the Wolf Prize in Mathematics (jointly with Simon K. Donaldson). He was elected to the American Academy of Arts and Sciences in 2021. For 2023 he was awarded the BBVA Foundation Frontiers of Knowledge Award in Basic Sciences (jointly with Claire Voisin).
Research
Eliashberg's research interests are in differential topology, especially in symplectic and contact topology.
In the 1980s he developed a combinatorial technique which he used to prove that the group of symplectomorphisms is C0-closed in the diffeomorphism group. This fundamental result, also proved in a different way by Gromov, is now called the Eliashberg-Gromov theorem, and is one of the first manifestations of symplectic rigidity.
In 1990 he discovered a complete topological characterization of Stein manifolds of complex dimension greater than 2.
Eliashberg classified contact structures into "tight" and "overtwisted" ones. Using this dichotomy, he gave the complete classification of contact structures on the 3-sphere. Together with Thurston, he developed the theory of confoliations, which unifies foliations and contact structures.
Eliashberg worked on various aspects of the h-principle, introduced by Mikhail Gromov, and he wrote in 2002 an introductory book on the subject.
Together with Givental and Hofer, Eliashberg pioneered the foundations of symplectic field theory.
He supervised 41 PhD students as of 2022.
Major publications
Books
Eliashberg, Yakov M.; Thurston, William P. Confoliations. University Lecture Series, 13. American Mathematical Society, Providence, RI, 1998. x+66 pp.
Eliashberg, Y.; Mishachev, N. Introduction to the h-principle. Graduate Studies in Mathematics, 48. American Mathematical Society, Providence, RI, 2002. xviii+206 pp.
Cieliebak, Kai; Eliashberg, Yakov. From Stein to Weinstein and back. Symplectic geometry of affine complex manifolds. American Mathematical Society Colloquium Publications, 59. American Mathematical Society, Providence, RI, 2012. xii+364 pp.
References
1946 births
Living people
Mathematicians from Saint Petersburg
Jewish Russian scientists
Saint Petersburg State University alumni
Members of the United States National Academy of Sciences
Fellows of the American Academy of Arts and Sciences
Fellows of the American Mathematical Society
Topologists
Stanford University Department of Mathematics faculty | Yakov Eliashberg | Mathematics | 986 |
11,155,427 | https://en.wikipedia.org/wiki/De%20Re%20Atari | De Re Atari (Latin for "All About Atari"), subtitled A Guide to Effective Programming, is a book written by Atari, Inc. employees in 1981 and published by the Atari Program Exchange in 1982 as an unbound, shrink-wrapped set of three-holed punched pages. It was one of the few non-software products sold by APX. Targeted at developers, it documents the advanced features of the Atari 8-bit computers and includes ideas for how to use them in applications. The information in the book was not available in a single, collected source at the time of publication.
The content of De Re Atari was serialized in BYTE beginning in 1981, prior to the book's publication. The release of Atari 8-bit technical details through the magazine and book quickly resulted in other sources being published, such as COMPUTE!'s First Book of Atari Graphics (1982).
Atari published official documentation for the hardware and a source listing of the operating system the same year, 1982, but they were not as easily obtainable as De Re Atari and tutorials in magazines such as COMPUTE!. Following the closure of the Atari Program Exchange in late 1984, De Re Atari went out of print.
Background
Atari at first did not disclose technical information on its computers, except to software developers who agreed to keep it secret. De Re Atari ("All About Atari") was sold through the Atari Program Exchange mail-order catalog, which described the book as "everything you want to know about the Atari ... but were afraid to ask" and a resource for "professional programmers" and "advanced hobbyists who understand Atari BASIC and assembly language".
An article on Player/Missile Graphics by De Re Atari coauthor Chris Crawford appeared in Compute! in 1981. Another article by Crawford and Lane Winner appeared in the same month in BYTE. De Re Atari was serialized in BYTE in 1981 and 1982 in ten articles.
De Re Atari, and its 1981-82 serialization in BYTE, were the first public, official publication of Atari 8-bit technical information. It was based on Atari's documentation written in 1979-80 for third-party developers under non-disclosure agreements. Individual chapters are devoted to making use of the features of the platform: ANTIC and display lists, color registers, redefined character sets, player/missile graphics, the vertical blank interrupt and display list interrupts (a.k.a. raster interrupts), fine scrolling, and sound. Additional chapters cover the operating system, Atari DOS, Atari BASIC, and designing intuitive human interfaces.
Lead author Chris Crawford used many of these features in the computer wargame Eastern Front (1941) released in 1981. Another of the book's authors, Jim Dunion, used custom display lists in the DDT 6502 debugger to produce a partitioned, IDE-like display. DDT was later incorporated into the MAC/65 assembler.
Reception
De Re Atari was successful; the manager of APX later said that it and Eastern Front "paid the bills, i.e. were our biggest sellers". Mapping the Atari described De Re Atari as "an arcane, but indispensable reference to the Atari's operations and some of its most impressive aspects". The Addison-Wesley Book of Atari Software 1984 stated that the book had "a wealth of information, but tends to be obscure and includes numerous errors".
References
External links
De Re Atari online
1982 non-fiction books
Computer books
Atari 8-bit computers | De Re Atari | Technology | 712 |
1,411,415 | https://en.wikipedia.org/wiki/Homophily | Homophily is a concept in sociology describing the tendency of individuals to associate and bond with similar others, as in the proverb "birds of a feather flock together". The presence of homophily has been discovered in a vast array of network studies: over 100 studies have observed homophily in some form or another, and they establish that similarity is associated with connection. The categories on which homophily occurs include age, gender, class, and organizational role.
The opposite of homophily is heterophily or intermingling. Individuals in homophilic relationships share common characteristics (beliefs, values, education, etc.) that make communication and relationship formation easier. Homophily between mated pairs in animals has been extensively studied in the field of evolutionary biology, where it is known as assortative mating. Homophily between mated pairs is common within natural animal mating populations.
Homophily has a variety of consequences for social and economic outcomes.
Types and dimensions
Baseline vs. inbreeding
To test the relevance of homophily, researchers have distinguished between two types:
Baseline homophily: simply the amount of homophily that would be expected by chance given an existing uneven distribution of people with varying characteristics; and
Inbreeding homophily: the amount of homophily over and above this expected value, typically due to personal preferences and choices.
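The distinction can be made concrete with a simple index. The sketch below follows one common formulation from the homophily literature, in which baseline homophily is taken to be the group's population share and inbreeding homophily is the observed excess rescaled to the unit interval; this particular formula, and all names in the code, are illustrative assumptions rather than definitions given in this article.

```python
def inbreeding_homophily(same_type_ties, total_ties, group_share):
    """Observed homophily in excess of the chance (baseline) level.

    H  = fraction of a group's ties that stay within the group (observed),
    w  = the group's population share (the baseline expected under random mixing),
    IH = (H - w) / (1 - w), which is 0 for purely baseline mixing and 1 for
         a group whose members only form ties with each other.
    """
    H = same_type_ties / total_ties
    w = group_share
    return (H - w) / (1.0 - w)

# A group making up 20% of the population whose members form 60% of their
# ties within the group shows inbreeding homophily of 0.5.
print(inbreeding_homophily(60, 100, 0.20))   # 0.5
```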
Status vs. value
In their original formulation of homophily, Paul Lazarsfeld and Robert K. Merton (1954) distinguished between status homophily and value homophily; individuals with similar social status characteristics were more likely to associate with each other than by chance:
Status homophily: includes both society-ascribed characteristics (e.g. race, ethnicity, sex, and age) and acquired characteristics (e.g., religion, occupation, behavior patterns, and education).
Value homophily: involves association with others who have similar values, attitudes, and beliefs, regardless of differences in status characteristics.
Dimensions
Race and ethnicity
Social networks in the United States today are strongly divided by race and ethnicity, which account for a large proportion of inbreeding homophily (though classification by these criteria can be problematic in sociology due to fuzzy boundaries and different definitions of race).
Smaller groups have lower diversity simply due to the number of members. This tends to give racial and ethnic minority groups a higher baseline homophily. Race and ethnicity also correlates with educational attainment and occupation, which further increase baseline homophily.
Sex and gender
In terms of sex and gender, baseline homophily in networks is relatively low compared to race and ethnicity. In this form of homophily, men and women frequently live together and both form large populations that are normally equal in size. It is also common to find higher levels of gender homophily among school students. Most sex homophily is a result of inbreeding homophily.
Age
Most age homophily is of the baseline type. An interesting pattern of inbreeding age homophily for groups of different ages was found by Marsden (1988). It indicated a strong relationship between someone's age and the social distance to other people with regard to confiding in someone. For example, the larger the age gap someone had, the smaller the chances that younger people would confide in them to "discuss important matters."
Religion
Homophily based on religion is due to both baseline and inbreeding homophily. Those that belong to the same religion are more likely to exhibit acts of service and aid to one another, such as loaning money, giving therapeutic counseling, and other forms of help during moments of emergency. Parents have been shown to have higher levels of religious homophily than nonparents, which supports the notion that religious institutions are sought out for the benefit of children.
Education, occupation and social class
Family of birth accounts for considerable baseline homophily with respect to education, occupation, and social class. In terms of education, there is a divide among those who have a college education and those who do not. Another major distinction can be seen between those with white collar occupations and blue collar occupations.
Interests
Homophily occurs within groups of people that have similar interests as well. We enjoy interacting more with individuals who share similarities with us, so we tend to actively seek out these connections. Additionally, as more users begin to rely on the Internet to find like minded communities for themselves, many examples of niches within social media sites have begun appearing to account for this need. This response has led to the popularity of sites like Reddit in the 2010s, advertising itself as a "home to thousands of communities... and authentic human interaction."
Social media
As social networks are largely divided by race, social-networking websites like Facebook also foster homophilic atmospheres. When a Facebook user 'likes' or interacts with an article or post of a certain ideology, Facebook continues to show that user posts of that similar ideology (which Facebook believes they will be drawn to). In a research article, McPherson, Smith-Lovin, and Cook (2003) write that homogeneous personal networks result in limited "social worlds in a way that has powerful implications for the information they receive, the attitudes they form, and the interactions they experience." This homophily can foster divides and echo chambers on social networking sites, where people of similar ideologies only interact with each other.
Causes and effects
Causes
Geography: Baseline homophily often arises when the people who are located nearby also have similar characteristics. People are more likely to have contact with those who are geographically closer than those who are distant. Technology such as the telephone, e-mail, and social networks have reduced but not eliminated this effect.
Family ties: These ties decay slowly, but familial ties, specifically that of domestic partners, fulfill many requisites that generate homophily. Family relationships are generally close and keep frequent contact though they may be at great geographic distances. Ideas that may get lost in other relational contexts, will often instead lead to actions in this setting.
Organizations: School, work, and volunteer activities provide the great majority of non-family ties. Many friendships, confiding relations, and social support ties are formed within voluntary groups. The social homogeneity of most organizations creates a strong baseline homophily in networks that are formed there.
Isomorphic sources: The connections between people who occupy equivalent roles will induce homophily in the system of network ties. This is common in three domains: workplace (e.g., all heads of HR departments will tend to associate with other HR heads), family (e.g., mothers tend to associate with other mothers), and informal networks.
Cognitive processes: People who have demographic similarity tend to own shared knowledge, and therefore they have a greater ease of communication and share cultural tastes, which can also generate homophily.
Effects
According to one study, perception of interpersonal similarity improves coordination and increase the expected payoff of interactions, above and beyond the effect of merely "liking others." Another study claims that homophily produces tolerance and cooperation in social spaces. However, homophilic patterns can also restrict access to information or inclusion for minorities.
Nowadays, the restrictive patterns of homophily can be widely seen within social media. This selectiveness within social media networks can be traced back to the origins of Facebook and the transition of users from MySpace to Facebook in the early 2000s. One 2011 study of this shift in a network's user base found that this perception of homophily impacted many individuals' preference for one site over another. Most users chose to be more active on the site their friends were on. However, along with the complexities of belongingness, people of similar ages, economic class, and prospective futures (higher education and/or career plans) shared similar reasons for favoring one social media platform. The different features of homophily affected their outlook of each respective site.
The effects of homophily on the diffusion of information and behaviors are also complex. Some studies have claimed that homophily facilitates access information, the diffusion of innovations and behaviors, and the formation of social norms. Other studies, however, highlight mechanisms through which homophily can maintain disagreement, exacerbate polarization of opinions, lead to self segregation between groups, and slow the formation of an overall consensus.
As online users have a degree of power to form and dictate the environment, the effects of homophily continue to persist. On Twitter, terms such as "stan Twitter", "Black Twitter", or "local Twitter" have also been created and popularized by users to separate themselves based on specific dimensions.
Homophily is a cause of homogamy—marriage between people with similar characteristics. Homophily is a fertility factor; an increased fertility is seen in people with a tendency to seek acquaintance among those with common characteristics. Governmental family policies have a decreased influence on fertility rates in such populations.
See also
Groupthink
Echo chamber (media)
References
Interpersonal relationships
Sociological terminology | Homophily | Biology | 1,846 |
29,353,834 | https://en.wikipedia.org/wiki/Artin%20algebra | In algebra, an Artin algebra is an algebra Λ over a commutative Artin ring R that is a finitely generated R-module. They are named after Emil Artin.
Every Artin algebra is an Artin ring.
Dual and transpose
There are several different dualities taking finitely generated modules over Λ to modules over the opposite algebra Λop.
If M is a left Λ-module then the right Λ-module M* is defined to be HomΛ(M,Λ).
The dual D(M) of a left Λ-module M is the right Λ-module D(M) = HomR(M,J), where J is the dualizing module of R, equal to the sum of the injective envelopes of the non-isomorphic simple R-modules or equivalently the injective envelope of R/rad R. The dual of a left module over Λ does not depend on the choice of R (up to isomorphism).
The transpose Tr(M) of a left Λ-module M is a right Λ-module defined to be the cokernel of the map Q* → P*, where P → Q → M → 0 is a minimal projective presentation of M.
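In the notation of this section, the dual and the transpose fit into a single exact sequence obtained by applying (−)* to a minimal projective presentation of M; the following display is a standard way of summarizing the construction, not additional content from the cited literature.

```latex
% Apply (-)^* = Hom_Lambda(-, Lambda) to a minimal projective presentation of M:
P \xrightarrow{\; f \;} Q \longrightarrow M \longrightarrow 0
\qquad\rightsquigarrow\qquad
0 \longrightarrow M^{*} \longrightarrow Q^{*} \xrightarrow{\; f^{*} \;} P^{*} \longrightarrow \operatorname{Tr}(M) \longrightarrow 0
```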
References
Ring theory | Artin algebra | Mathematics | 262 |
9,552,145 | https://en.wikipedia.org/wiki/1%20%2B%202%20%2B%204%20%2B%208%20%2B%20%E2%8B%AF | In mathematics, 1 + 2 + 4 + 8 + ⋯ is the infinite series whose terms are the successive powers of two. As a geometric series, it is characterized by its first term, 1, and its common ratio, 2. As a series of real numbers it diverges to infinity, so the sum of this series is infinity.
However, it can be manipulated to yield a number of mathematically interesting results. For example, many summation methods are used in mathematics to assign numerical values even to a divergent series. For example, the Ramanujan summation of this series is −1, which is the limit of the series using the 2-adic metric.
Summation
The partial sums of 1 + 2 + 4 + 8 + ⋯ are 1, 3, 7, 15, …; since these diverge to infinity, so does the series.
It is written as
1 + 2 + 4 + 8 + ⋯ = 2^0 + 2^1 + 2^2 + 2^3 + ⋯
Therefore, any totally regular summation method gives a sum of infinity, including the Cesàro sum and Abel sum. On the other hand, there is at least one generally useful method that sums 1 + 2 + 4 + 8 + ⋯ to the finite value of −1. The associated power series
f(x) = 1 + 2x + 4x² + 8x³ + ⋯ + 2^k x^k + ⋯ = 1/(1 − 2x)
has a radius of convergence around 0 of only 1/2, so it does not converge at x = 1. Nonetheless, the so-defined function f has a unique analytic continuation to the complex plane with the point x = 1/2 deleted, and it is given by the same rule f(x) = 1/(1 − 2x). Since f(1) = −1, the original series 1 + 2 + 4 + 8 + ⋯ is said to be summable (E) to −1, and −1 is the (E) sum of the series. (The notation is due to G. H. Hardy in reference to Leonhard Euler's approach to divergent series.)
An almost identical approach (the one taken by Euler himself) is to consider the power series whose coefficients are all 1, that is,
1 + y + y² + y³ + ⋯ = 1/(1 − y)
and plugging in y = 2. These two series are related by the substitution y = 2x.
The fact that (E) summation assigns a finite value to 1 + 2 + 4 + 8 + ⋯ shows that the general method is not totally regular. On the other hand, it possesses some other desirable qualities for a summation method, including stability and linearity. These latter two axioms actually force the sum to be −1, since they make the following manipulation valid:
s = 1 + 2 + 4 + 8 + ⋯ = 1 + 2(1 + 2 + 4 + ⋯) = 1 + 2s, so s = −1.
In a useful sense, s = ∞ is a root of the equation s = 1 + 2s. (For example, ∞ is one of the two fixed points of the Möbius transformation z → 1 + 2z on the Riemann sphere.) If some summation method is known to return an ordinary number for s, that is, not ∞, then it is easily determined. In this case s may be subtracted from both sides of the equation, yielding 0 = 1 + s, so s = −1.
The above manipulation might be called on to produce −1 outside the context of a sufficiently powerful summation procedure. For the most well-known and straightforward sum concepts, including the fundamental convergent one, it is absurd that a series of positive terms could have a negative value. A similar phenomenon occurs with the divergent geometric series 1 − 1 + 1 − 1 + ⋯ (Grandi's series), where a series of integers appears to have the non-integer sum 1/2. These examples illustrate the potential danger in applying similar arguments to the series implied by such recurring decimals as 0.111... and most notably 0.999.... The arguments are ultimately justified for these convergent series, implying that 0.111... = 1/9 and 0.999... = 1, but the underlying proofs demand careful thinking about the interpretation of endless sums.
It is also possible to view this series as convergent in a number system different from the real numbers, namely, the 2-adic numbers. As a series of 2-adic numbers this series converges to the same sum, −1, as was derived above by analytic continuation.
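The 2-adic statement can be verified directly from the partial sums:

```latex
% The nth partial sum is s_n = 1 + 2 + 4 + \cdots + 2^n = 2^{n+1} - 1, so
s_n - (-1) = 2^{n+1},
\qquad
\lvert s_n - (-1) \rvert_2 = \lvert 2^{n+1} \rvert_2 = 2^{-(n+1)} \to 0,
% hence the partial sums converge to -1 in the 2-adic metric.
```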
See also
1 − 1 + 2 − 6 + 24 − 120 + ⋯
1 − 1 + 1 − 1 + ⋯ (Grandi's series)
1 + 1 + 1 + 1 + ⋯
1 − 2 + 3 − 4 + ⋯
1 + 2 + 3 + 4 + ⋯
1 − 2 + 4 − 8 + ⋯
Two's complement, a data convention for representing negative numbers where −1 is represented as if it were .
Notes
References
Further reading
Binary arithmetic
Divergent series
Geometric series
P-adic numbers | 1 + 2 + 4 + 8 + ⋯ | Mathematics | 789 |
16,174,329 | https://en.wikipedia.org/wiki/Gene%20Ball | Gene Ball is a computer science researcher and computer programmer.
Ball obtained a bachelor's degree from the University of Oklahoma, and attended graduate school at the University of Rochester, completing a master's degree and finishing his doctorate in 1982. While at Rochester, he met Rick Rashid, and together they created Alto Trek, one of the earlier networked multiplayer computer games.
In 1979, along with Rashid, Ball worked as a researcher at Carnegie Mellon University. In 1983, he left academia for two years, spending 1983 and 1984 designing software at Formative Technologies. In 1985, he became an assistant professor at the University of Delaware at Newark.
From 1991 until 2001 he was a researcher at Microsoft, leading the Persona Project, which focused on developing lifelike computer characters that could conversationally interact with users.
References
Contributor biography in Emotions in Humans and Artifacts, by Robert Trappl, Paolo Petta, and Sabine Payr. .
University of Delaware faculty
Microsoft employees
University of Rochester alumni
University of Oklahoma alumni
Carnegie Mellon University faculty
Living people
Year of birth missing (living people) | Gene Ball | Technology | 213 |
1,463,904 | https://en.wikipedia.org/wiki/Nitroamine | In organic and inorganic chemistry, nitroamines or nitramides are chemical compounds with the general chemical structure . They consist of a nitro group () bonded to the nitrogen of an amine. The R groups can be any group, typically hydrogen (e.g., methylnitroamine ) and organyl (e.g., diethylnitroamine ). An example of inorganic nitroamine is chloronitroamine, . The parent inorganic compound, where both R substituents are hydrogen, is nitramide or nitroamine, .
N-Nitroaniline rearranges in the presence of acid to give 2-nitroaniline.
References
Further reading
Functional groups | Nitroamine | Chemistry | 149 |
25,312,729 | https://en.wikipedia.org/wiki/Institute%20for%20Transportation%20and%20Development%20Policy | The Institute for Transportation and Development Policy (ITDP) is a non-governmental non-profit organization that focuses on developing bus rapid transit (BRT) systems, promoting biking, walking, and non-motorized transport, and improving private bus operators margins. Other programs include parking reform, traffic demand management, and global climate and transport policy. According to its mission statement, ITDP is committed to "promoting sustainable and equitable transportation worldwide."
In addition to its role supporting and consulting local governmental efforts to develop more sustainable transportation, ITDP publishes the magazine Sustainable Transport annually, produces the BRT Standard and other research, and sits on the committee for the annual Sustainable Transport Award.
Overview
ITDP was founded in 1985 by Michael Replogle and other sustainable transport advocates in the United States to counteract the spread of costly and environmentally damaging car-centric urban development models, and to promote biking, walking, and public transit in transportation planning.
In its first ten years, ITDP worked to support and grow local bicycle industries in Haiti, Nicaragua, Mozambique, South Africa, and West Africa. By 1989, ITDP's Bikes Not Bombs campaign had shipped 10,000 second-hand bicycles to support health and education efforts in Nicaragua and used these to establish a bicycle assembly industry in that country. ITDP advocated for the redirection of lending activity by the World Bank and other multi-lateral institutions. Where these global institutions had an exclusive focus on road projects, ITDP worked to open up funding for multi-modal transport solutions. ITDP advocated for sustainable transport initiatives in U.S. transportation policy, influencing the 1991 Intermodal Surface Transportation Efficiency Act (ISTEA). Responding to ITDP pressure, the Peace Corps put its volunteers on bicycles rather than motorcycles.
In the early 1990s, ITDP helped establish the Transport Sector Task Force, an advisory panel to the US Treasury Department's Multi-lateral Development Bank liaison office, to comment on specific transport projects. In its 1994 study "Counting on Cars, Counting Out People" ITDP published a preliminary set of guidelines for reforming the World Bank Transport Sector economic appraisal to make it less biased in favor of motorways. The report's key recommendation that economic impacts on non-motorized road users be included in the appraisal has been incorporated into World Bank practice.
ITDP has offices in seven countries, with projects and relationships in over 100 cities worldwide.
In 2009, former mayor of Bogotá, Colombia Enrique Peñalosa, who was instrumental in the establishment of that city's TransMilenio BRT system, was elected as President of the Board of Directors of ITDP. Walter B. Hook served as the organization's executive director from 1993 to 2014. Heather Thompson is ITDP's interim CEO.
Key areas of operation
Public transport
ITDP works to encourage safe, modern, and efficient public transportation systems in cities worldwide. ITDP is currently active in a design and/or consulting capacity in the BRT programs of Ahmedabad, India; Dar es Salaam, Tanzania; Johannesburg, South Africa (Rea Vaya); Jakarta, Indonesia (TransJakarta); Guangzhou, Lanzhou, and Yichang, China; Mexico City, Mexico, Buenos Aires, Argentina, and more.
In June 2007, ITDP published the Bus Rapid Transit Planning Guide along with the United Nations Environment Programme, the Deutsche Gesellschaft für Technische Zusammenarbeit (GTZ), the Hewlett Foundation, and Viva. The guide draws from the extensive BRT design experience of Latin American transit planners, and aims to disseminate this information in the US and other countries around the world. The guide is currently available in English, Spanish, Portuguese, and Chinese, and is free for download in .pdf format from the ITDP website.
In addition, ITDP developed the BRT Standard, a design guide and rating system for Bus Rapid Transit systems around the world. The Standard establishes a common definition for BRT and identifies BRT best practices, as well as functioning as a scoring system to allow BRT corridors to be evaluated and recognized for their superior design and management aspects. It uses a systematic method of evaluating BRT systems, rating their quality as "gold, silver, or bronze". Some systems that had been branded as BRT failed to meet even minimal standards distinguishing BRT from regular urban bus service. A related report on BRT in the US noted that "Some American systems reviewed had so few essential characteristics that calling them a BRT system at all does a disservice to efforts to gain broader adoption of BRT in the United States...These systems, with relatively few BRT characteristics, have helped confuse the American public about what exactly constitutes BRT."
Cycling and walking
ITDP encourages urban design that prioritizes human-powered forms of transportation, such as walking, cycling, and the use of rickshaws. Specifically, ITDP often works with cities to encourage car-free days and bike share programs, create safe streets for pedestrians and cyclists, and provide high-quality bicycles.
In December 2013 the ITDP published the Bike-Share Planning Guide with the aim "to bridge the divide between developing and developed countries' experiences with bike-share." The guide is expected to be useful for planning and implementing a bike sharing system regardless of the location, size, or density of the city. The guide is currently available only in English and is free for download in .pdf format from the ITDP website.
ITDP works with local governments on the expansion and design of bike lanes, and pedestrian networks throughout the city. In São Paulo, Brazil, ITDP assisted in the design of a pilot bicycle path in the neighborhood of Butantã. For the project, ITDP commissioned a report for a 58 kilometer feeder network, which will lead cyclists from adjacent streets and sidewalks to the bicycle path. The path will pass through a high-visibility corridor of the city, and if successfully implemented could be expanded to surrounding neighborhoods and throughout the city.
Past projects have included a redesign of India's traditional cycle rickshaw in collaboration with local experts, reducing the weight of the vehicles by 30% and adding a multi-gear system to increase efficiency; increasing Africa's cycling capacity while bolstering local industry through the establishment of the California Bike Coalition (CBC); and traffic impact and mitigation analysis along with outreach to local interest groups in the pedestrianization of Malioboro Road in Yogyakarta, Indonesia.
Sustainable urban development
ITDP works to integrate transport and smart urban design to help remake cities and suburbs into livable spaces that foster economic opportunities, encourage low carbon lifestyles, and attract residents. This is done through designing environments for cycling and walking, fostering the development of pedestrian and transit based real estate development, and creating policies that help turn cultural and physical spaces into economic assets.
In November 2013 ITDP published "The TOD Standard" which elaborates in eight key principles for guiding the implementation of transit-oriented development (TOD). The guide is available for download in .pdf format from the ITDP website.
ITDP has initiated and advised on a number of projects in cities around the world. ITDP worked with the Mexico City government to provide technical support for the revitalization of Mexico City's Historic Center. ITDP managed the planning and implementation efforts of the revitalization, in addition to promoting street maintenance and cleanliness, supplementation of public security, and the management and control of parking and street vending activity in the area. ITDP claims that this reorientation of the Historic Center towards pedestrian and transit oriented development will reverse decades of deterioration, attract tourism and investment, and improve air quality in the notoriously polluted city. Additionally, ITDP participated as part of the team developing Mexico City's Bicycle Master Plan to design routes that connect to the Historic Center, further integrating multi-modal development of the area.
International sustainable transport policy
ITDP co-founded the Partnership on Sustainable Low Carbon Transport (SLoCaT) in 2009 and through that helped secure in 2012 a $175 billion 10-year commitment from the world's largest multilateral development banks to support sustainable transport, with annual reporting and monitoring. Working with SLoCaT, ITDP helped mainstream sustainable transport strategies in the United Nations' post-2015 development agenda and in discussions of climate change mitigation strategies in the run-up to the 2015 global climate summit in Paris (COP 21). With the University of California Davis, ITDP in 2014 published A Global High Shift Scenario: Impacts and Potential for More Public Transport, Walking, and Cycling With Lower Car Use, showing how a shift in transport funding to support alternatives to more driving could save over $100 trillion cumulatively for consumers and governments by 2050 while cutting cumulative climate change pollution from urban transport by 25% and improving equity of access to opportunities for the poor. This is available on the ITDP website.
See also
BRT creep
BRT Standard
Enrique Peñalosa
References
External links
Sustainable transport
Transportation organizations based in the United States
Sustainable urban planning
Urban planning
Bus rapid transit
Environmental organizations based in New York City
International environmental organizations
Public transport advocacy organizations
Organizations based in New York City
Organizations established in 1985
1985 establishments in New York (state) | Institute for Transportation and Development Policy | Physics,Engineering | 1,877 |
36,142,813 | https://en.wikipedia.org/wiki/PD%20144418 | PD 144418 or 1,2,3,6-tetrahydro-5-[3-(4-methylphenyl)-5-isoxazolyl]-1-propylpyridine is a potent and selective ligand for the sigma-1 receptor, with a reported binding affinity of Ki = , and 17,212 times selectivity over the sigma-2 receptor.
References
Tetrahydropyridines
Ligands (biochemistry) | PD 144418 | Chemistry | 99 |
29,419,224 | https://en.wikipedia.org/wiki/Planetrees%20%28Hadrian%27s%20Wall%20section%29 | Planetrees is an extant section of Hadrian's Wall named after the farm located around to the west. The surviving section is in length.
William Hutton intervention
It is said that this section was saved by the intervention of William Hutton in 1801. During his visit to Hadrian's Wall, he encountered a workman robbing stone from this stretch for use in the construction of a nearby farmhouse, and persuaded the workman to desist:
At the twentieth-mile stone, I should have seen a piece of Severus's Wall seven feet and a half high, and two hundred and twenty-four yards long: a sight not to be found in the whole line. But the proprietor, Henry Tulip, Esq. is now taking it down, to erect a farm-house with the materials. Ninety-five yards are already destroyed, and the stones, fit for building removed. . . . I desired the servant with whom I conversed, "to give my compliments to Mr. Tulip, and request him to desist, or he would wound the whole body of Antiquaries. As he was putting an end to the most noble monument of Antiquity in the whole Island, they would feel every stroke. If the Wall was of no estimation, he must have a mean opinion of me, who would travel six hundred miles to see it; and if it was, he could never merit my thanks for destroying it."
Construction
The eastern part of the surviving section (approximately in length) is built as Broad Wall, 10 Roman Feet wide. The western part (approximately in length) is built as Narrow Wall, 8 Roman Feet wide, yet still on foundations clearly installed for a Broad Wall. The join is archaeologically interesting, as it suggests a change in plan during construction.
Also present on the narrow wall stretch is a culvert. The stones forming the culvert cross the whole width of the foundation (built for the broad wall), suggesting that the culverts were constructed at the same time as the foundations, rather than along with the curtain wall.
Excavations and investigations
1989 - English Heritage Field Investigation on 7 September, under the Hadrian's Wall Project. It was noted that this stretch demonstrates a change from Narrow Wall, wide [west part], to Broad Wall, wide [east end].
Monument Records
Public Access
Access is on foot from the Hadrian's Wall National Trail.
References
Bibliography
Hadrian's Wall
Wall, Northumberland | Planetrees (Hadrian's Wall section) | Engineering | 502 |
16,928,506 | https://en.wikipedia.org/wiki/Visual%20servoing | Visual servoing, also known as vision-based robot control and abbreviated VS, is a technique which uses feedback information extracted from a vision sensor (visual feedback) to control the motion of a robot. One of the earliest papers to discuss visual servoing came from the SRI International Labs in 1979.
Visual servoing taxonomy
There are two fundamental configurations of the robot end-effector (hand) and the camera:
Eye-in-hand, or end-point open-loop control, where the camera is attached to the moving hand and observing the relative position of the target.
Eye-to-hand, or end-point closed-loop control, where the camera is fixed in the world and observing the target and the motion of the hand.
Visual Servoing control techniques are broadly classified into the following types:
Image-based (IBVS)
Position/pose-based (PBVS)
Hybrid approach
IBVS was proposed by Weiss and Sanderson. The control law is based on the error between current and desired features on the image plane, and does not involve any estimate of the pose of the target. The features may be the coordinates of visual features, lines or moments of regions. IBVS has difficulties with motions involving very large rotations, an issue which has come to be called camera retreat. A minimal sketch of this control law is given below.
PBVS is a model-based technique (with a single camera). This is because the pose of the object of interest is estimated with respect to the camera and then a command is issued to the robot controller, which in turn controls the robot. In this case the image features are extracted as well, but are additionally used to estimate 3D information (pose of the object in Cartesian space), hence it is servoing in 3D.
Hybrid approaches use some combination of the 2D and 3D servoing. There have been a few different approaches to hybrid servoing
2-1/2-D Servoing
Motion partition-based
Partitioned DOF Based
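The classical image-based control law can be written v = −λ L̂⁺ e, where e = s − s* is the error between current and desired image features and L̂⁺ is the pseudo-inverse of an estimate of the interaction matrix. The sketch below (in Python) follows the point-feature formulation given in the visual servo control tutorials listed under External links; the gain, point coordinates and depth values are illustrative assumptions rather than part of any particular system, and a real controller adds feature tracking, depth estimation and joint-level control on top of this.

    import numpy as np

    def interaction_matrix(x, y, Z):
        # Interaction (image Jacobian) matrix for one normalized image
        # point (x, y) at estimated depth Z, relating the feature velocity
        # to the 6-DOF camera velocity screw.
        return np.array([
            [-1.0 / Z, 0.0,      x / Z, x * y,       -(1.0 + x * x), y],
            [0.0,     -1.0 / Z,  y / Z, 1.0 + y * y, -x * y,        -x],
        ])

    def ibvs_velocity(current, desired, depths, gain=0.5):
        # Stack one 2x6 block per tracked point, then apply
        # v = -gain * pinv(L) @ e to obtain the camera velocity screw.
        L = np.vstack([interaction_matrix(x, y, Z)
                       for (x, y), Z in zip(current, depths)])
        e = (np.asarray(current) - np.asarray(desired)).reshape(-1)
        return -gain * np.linalg.pinv(L) @ e

    # Illustrative values: four normalized image points, their desired
    # positions, and rough depth estimates (in metres).
    current = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
    desired = [(0.12, 0.08), (-0.08, 0.08), (-0.08, -0.12), (0.12, -0.12)]
    depths = [1.0, 1.0, 1.0, 1.0]
    print(ibvs_velocity(current, desired, depths))

The six returned components are the translational and rotational velocities of the camera, which in a dynamic look-and-move configuration would be passed as a command to the robot controller.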
Survey
The following description of the prior work is divided into three parts:
Survey of existing visual servoing methods.
Various features used and their impacts on visual servoing.
Error and stability analysis of visual servoing schemes.
Survey of existing visual servoing methods
Visual servo systems, also called servoing, have been around since the early 1980s, although the term visual servo itself was only coined in 1987. Visual servoing is, in essence, a method for robot control where the sensor used is a camera (visual sensor). Servoing consists primarily of two techniques: one involves using information from the image to directly control the degrees of freedom (DOF) of the robot, and is thus referred to as image-based visual servoing (IBVS); the other involves the geometric interpretation of the information extracted from the camera, such as estimating the pose of the target and the parameters of the camera (assuming some basic model of the target is known). Other servoing classifications exist based on the variations in each component of a servoing system, e.g. the location of the camera: the two kinds are eye-in-hand and hand–eye configurations. Based on the control loop, the two kinds are end-point open-loop and end-point closed-loop. Based on whether the control is applied to the joints (or DOF) directly or as a position command to a robot controller, the two types are direct servoing and dynamic look-and-move.
In one of the earliest works, the authors proposed a hierarchical visual servo scheme applied to image-based servoing. The technique relies on the assumption that a good set of features can be extracted from the object of interest (e.g. edges, corners and centroids) and used as a partial model along with global models of the scene and robot. The control strategy was applied to simulations of two- and three-DOF robot arms. Feddema et al. introduced the idea of generating the task trajectory with respect to the feature velocity. This ensures that the sensors are not rendered ineffective (stopping the feedback) for any of the robot motions. The authors assume that the objects are known a priori (e.g. as a CAD model) and that all the features can be extracted from the object.
The work by Espiau et al. discusses some of the basic questions in visual servoing. The discussion concentrates on modeling of the interaction matrix, the camera, and visual features (points, lines, etc.). In other work, an adaptive servoing system was proposed with a look-and-move servoing architecture. The method used optical flow along with SSD to provide a confidence metric, and a stochastic controller with Kalman filtering for the control scheme. The system assumes (in the examples) that the plane of the camera and the plane of the features are parallel. A related work discusses an approach to velocity control using the Jacobian relationship ṡ = Jv̇. In addition the author uses Kalman filtering, assuming that the extracted positions of the target have inherent errors (sensor errors). A model of the target velocity is developed and used as a feed-forward input in the control loop. It also mentions the importance of looking into kinematic discrepancy, dynamic effects, repeatability, settling-time oscillations and lag in response.
Corke poses a set of very critical questions on visual servoing and tries to elaborate on their implications. The paper primarily focuses on the dynamics of visual servoing. The author tries to address problems like lag and stability, while also discussing feed-forward paths in the control loop. The paper also seeks justification for trajectory generation, the methodology of axis control, and the development of performance metrics. Chaumette provides good insight into the two major problems with IBVS: first, servoing to a local minimum, and second, reaching a Jacobian singularity. The author shows that image points alone do not make good features due to the occurrence of singularities. The paper continues by discussing possible additional checks to prevent singularities, namely condition numbers of J_s and Ĵ⁺_s, and checks of the null space of Ĵ_s and J^T_s. One main point that the author highlights is the relation between local minima and unrealizable image feature motions.
Over the years many hybrid techniques have been developed. These involve computing partial or complete pose from epipolar geometry using multiple views or multiple cameras; the values are obtained by direct estimation or through a learning or statistical scheme. Others have used a switching approach that changes between image-based and position-based control based on a Lyapunov function. The early hybrid techniques that used a combination of image-based and pose-based (2D and 3D information) approaches for servoing required either a full or partial model of the object in order to extract the pose information, and used a variety of techniques to extract the motion information from the image. One approach used an affine motion model computed from the image motion, in addition to a rough polyhedral CAD model, to extract the object pose with respect to the camera in order to servo onto the object (along the lines of PBVS).
2-1/2-D visual servoing, developed by Malis et al., is a well-known technique that organizes the information required for servoing in a way that decouples rotations and translations. The papers assume that the desired pose is known a priori. The rotational information is obtained from a partial pose estimate, a homography (essentially 3D information), giving an axis of rotation and an angle (by computing the eigenvalues and eigenvectors of the homography). The translational information is obtained from the image directly by tracking a set of feature points, the only conditions being that the tracked feature points never leave the field of view and that a depth estimate be predetermined by some off-line technique. 2-1/2-D servoing has been shown to be more stable than the techniques that preceded it. Another interesting observation with this formulation is the authors' claim that the visual Jacobian has no singularities during the motions.
The hybrid technique developed by Corke and Hutchinson, popularly called the partitioned approach, partitions the visual (or image) Jacobian into motions (both rotations and translations) relating to the X and Y axes and motions related to the Z axis. The technique breaks out the columns of the visual Jacobian that correspond to Z-axis translation and rotation (namely, the third and sixth columns). The partitioned approach is shown to handle the Chaumette conundrum discussed above. This technique requires a good depth estimate in order to function properly.
Another hybrid approach splits the servoing task into two, namely a main and a secondary task. The main task is to keep the features of interest within the field of view, while the secondary task is to mark a fixation point and use it as a reference to bring the camera to the desired pose. The technique does need a depth estimate from an off-line procedure. The paper discusses two examples in which depth estimates are obtained from robot odometry and by assuming that all features lie on a plane. The secondary task is achieved using the notion of parallax. The features that are tracked are chosen by an initialization performed on the first frame, and are typically points.
Further work discusses two aspects of visual servoing: feature modeling and model-based tracking. The primary assumption made is that a 3D model of the object is available. The authors highlight the notion that ideal features should be chosen such that the DOF of motion can be decoupled by a linear relation. The authors also introduce an estimate of the target velocity into the interaction matrix to improve tracking performance. The results are compared to well-known servoing techniques, including cases where occlusions occur.
Various features used and their impacts on visual servoing
This section discusses work in the field of visual servoing from the point of view of the features used. Most of the work has used image points as visual features; the standard formulation of the interaction matrix assumes that points in the image are used to represent the target. There is, however, a body of work that deviates from the use of points and uses feature regions, lines, image moments and moment invariants.
In one work, the authors discuss affine-based tracking of image features. The image features are chosen based on a discrepancy measure derived from the deformation that the features undergo. The features used were texture patches. One of the key points of the paper was that it highlighted the need to look at features as a means of improving visual servoing.
Other authors look into the choice of image features (the same question was also discussed in the context of tracking). The effect of the choice of image features on the control law is discussed with respect to just the depth axis. The authors consider the distance between feature points and the area of an object as features. These features are used in the control law with slightly different forms to highlight the effects on performance. It was noted that better performance was achieved when the servo error was proportional to the change along the depth axis.
One of the early discussions of the use of moments provides a new, albeit complicated, formulation of the interaction matrix using the velocity of the moments in the image. Even though moments are used, they are moments of the small change in the location of contour points, obtained with the use of Green's theorem. The paper also tries to determine the set of features (on a plane) suitable for a 6 DOF robot.
Later work discusses the use of image moments to formulate the visual Jacobian. This formulation allows for decoupling of the DOF based on the type of moments chosen. The simple case of this formulation is notionally similar to 2-1/2-D servoing. The time variation of the moments (ṁ_ij) is determined using the motion between two images and Green's theorem. The relation between ṁ_ij and the velocity screw (v) is given as ṁ_ij = L_m_ij v. This technique avoids camera calibration by assuming that the objects are planar and using a depth estimate. The technique works well in the planar case but tends to be complicated in the general case. The basic idea builds on earlier moment-based work.
Moment invariants have also been used, the key idea being to find the feature vector that decouples all the DOF of motion. One observation made was that centralized moments are invariant under 2D translations; a complicated polynomial form is developed for 2D rotations. The technique follows teaching-by-showing, hence requiring the values of the desired depth and object area (assuming that the planes of the camera and object are parallel, and that the object is planar). Other parts of the feature vector are the invariants R3 and R4. The authors claim that occlusions can be handled.
Subsequent papers build on this work, the major difference being that the task is broken into two (in the case where the features are not parallel to the camera plane): a virtual rotation is performed to bring the features parallel to the camera plane. A later paper consolidates the work done by the authors on image moments.
Error and stability analysis of visual servoing schemes
Espiau showed from purely experimental work that image-based visual servoing (IBVS) is robust to calibration errors. The author used a camera with no explicit calibration along with point matching and without pose estimation. The paper looks at the effect of errors and uncertainty on the terms of the interaction matrix from an experimental approach. The targets used were points and were assumed to be planar. A similar study carried out an experimental evaluation of a few uncalibrated visual servo systems that were popular in the 1990s. The major outcome was experimental evidence of the effectiveness of visual servo control over conventional control methods.
Kyrki et al. analyze servoing errors for position-based and 2-1/2-D visual servoing. The technique involves determining the error in extracting the image position and propagating it to pose estimation and servoing control. Points from the image are mapped a priori to points in the world to obtain a mapping (which is basically the homography, although not explicitly stated in the paper). This mapping is broken down into pure rotations and translations. Pose estimation is performed using standard techniques from computer vision. Pixel errors are transformed to the pose and then propagated to the controller. An observation from the analysis shows that errors in the image plane are proportional to the depth, and that error along the depth axis is proportional to the square of the depth.
Measurement errors in visual servoing have been looked into extensively. Most error functions relate to two aspects of visual servoing: one is the steady-state error (once servoed), and the other is the stability of the control loop. Other servoing errors that have been of interest are those that arise from pose estimation and camera calibration. Some authors extend earlier work by considering global stability in the presence of intrinsic and extrinsic calibration errors; others provide an approach to bound the task-function tracking error. Another line of work uses the teaching-by-showing visual servoing technique, where the desired pose is known a priori and the robot is moved from a given pose; its main aim is to determine the upper bound on the positioning error due to image noise using a convex-optimization technique. There is also a discussion of stability analysis with respect to the uncertainty in depth estimates, which concludes with the observation that for unknown target geometry a more accurate depth estimate is required in order to limit the error.
Many visual servoing techniques implicitly assume that only one object is present in the image and that the relevant feature for tracking, along with the area of the object, is available. Most techniques require either a partial pose estimate or a precise depth estimate for the current and desired poses.
Software
Matlab toolbox for visual servoing.
Java-based visual servoing simulator.
ViSP (which stands for "Visual Servoing Platform") is a modular software library that allows fast development of visual servoing applications.
See also
Robotics
Robot
Computer Vision
Machine Vision
Robot control
References
External links
S. A. Hutchinson, G. D. Hager, and P. I. Corke. A tutorial on visual servo control. IEEE Trans. Robot. Automat., 12(5):651—670, Oct. 1996.
F. Chaumette, S. Hutchinson. Visual Servo Control, Part I: Basic Approaches. IEEE Robotics and Automation Magazine, 13(4):82-90, December 2006.
F. Chaumette, S. Hutchinson. Visual Servo Control, Part II: Advanced Approaches. IEEE Robotics and Automation Magazine, 14(1):109-118, March 2007.
Notes from IROS 2004 tutorial on advanced visual servoing.
Springer Handbook of Robotics Chapter 24: Visual Servoing and Visual Tracking (François Chaumette, Seth Hutchinson)
UW-Madison, Robotics and Intelligent Systems Lab
INRIA Lagadic research group
Johns Hopkins University, LIMBS Laboratory
University of Siena, SIRSLab Vision & Robotics Group
Tohoku University, Intelligent Control Systems Laboratory
INRIA Arobas research group
LASMEA, Rosace group
UIUC, Beckman Institute
Robotic sensing
Computer vision
Robot control
Articles containing video clips | Visual servoing | Engineering | 3,608 |
77,567,287 | https://en.wikipedia.org/wiki/Lichenostigma%20svandae | Lichenostigma svandae is a species of lichenicolous (lichen-dwelling) fungus in the family Phaeococcomycetaceae. It was described as a new species in 2007 by Jan Vondrák and Jaroslav Šoun. The authors collected the type specimen from a limestone hill in the protected area Karadagskyj Zapovednik (Feodosia, Crimea) at an elevation of about ; it was growing on the thalli and apothecia (fruiting bodies) of the crustose lichen species Acarospora cervina, which was itself growing on sun-exposed limestone rock. The species epithet honours bus driver Jaroslav Švanda, who drove the bus to the excursion where the type was collected.
References
Arthoniomycetes
Fungus species
Fungi described in 2007
Fungi of Europe
Lichenicolous fungi
Taxa named by Jan Vondrák | Lichenostigma svandae | Biology | 186 |
17,728,011 | https://en.wikipedia.org/wiki/Vine%E2%80%93Matthews%E2%80%93Morley%20hypothesis | The Vine–Matthews–Morley hypothesis, also known as the Morley–Vine–Matthews hypothesis, was the first key scientific test of the seafloor spreading theory of continental drift and plate tectonics. Its key impact was that it allowed the rates of plate motions at mid-ocean ridges to be computed. It states that the Earth's oceanic crust acts as a recorder of reversals in the geomagnetic field direction as seafloor spreading takes place.
History
Harry Hess proposed the seafloor spreading hypothesis in 1960 (published in 1962); the term "spreading of the seafloor" was introduced by geophysicist Robert S. Dietz in 1961. According to Hess, seafloor was created at mid-oceanic ridges by the convection of the Earth's mantle, pushing and spreading the older crust away from the ridge. Geophysicist Frederick John Vine and the Canadian geologist Lawrence W. Morley independently realized that if Hess's seafloor spreading theory was correct, then the rocks surrounding the mid-oceanic ridges should show symmetric patterns of magnetization reversals using newly collected magnetic surveys. Both of Morley's letters to Nature (February 1963) and Journal of Geophysical Research (April 1963) were rejected, hence Vine and his PhD adviser at Cambridge University, Drummond Hoyle Matthews, were first to publish the theory in September 1963. Some colleagues were skeptical of the hypothesis because of the numerous assumptions made—seafloor spreading, geomagnetic reversals, and remanent magnetism—all hypotheses that were still not widely accepted. The Vine–Matthews–Morley hypothesis describes the magnetic reversals of oceanic crust. Further evidence for this hypothesis came from Allan V. Cox and colleagues (1964) when they measured the remanent magnetization of lavas from land sites. Walter C. Pitman and J. R. Heirtzler offered further evidence with a remarkably symmetric magnetic anomaly profile from the Pacific-Antarctic Ridge.
Marine magnetic anomalies
The Vine–Matthews–Morley hypothesis correlates the symmetric magnetic patterns seen on the seafloor with geomagnetic field reversals. At mid-ocean ridges, new crust is created by the injection, extrusion, and solidification of magma. After the magma has cooled through the Curie point, ferromagnetism becomes possible and the magnetization direction of magnetic minerals in the newly formed crust orients parallel to the current background geomagnetic field vector. Once fully cooled, these directions are locked into the crust and it becomes permanently magnetized. Lithospheric creation at the ridge is considered continuous and symmetrical as the new crust intrudes into the diverging plate boundary. The old crust moves laterally and equally on either side of the ridge. Therefore, as geomagnetic reversals occur, the crust on either side of the ridge will contain a record of remanent normal (parallel) or reversed (antiparallel) magnetizations in comparison to the current geomagnetic field. A magnetometer towed above (near bottom, sea surface, or airborne) the seafloor will record positive (high) or negative (low) magnetic anomalies when over crust magnetized in the normal or reversed direction. The ridge crest is analogous to a "twin-headed tape recorder", recording the Earth's magnetic history.
Typically there are positive magnetic anomalies over normally magnetized crust and negative anomalies over reversed crust. Local anomalies with a short wavelength also exist, but are considered to be correlated with bathymetry. Magnetic anomalies over mid-ocean ridges are most apparent at high magnetic latitudes, over north-south trending ridges at all latitudes away from the magnetic equator, and east-west trending spreading ridges at the magnetic equator.
The intensity of the remanent magnetization in the crust is greater than the induced magnetization. Consequently, the shape and amplitude of the magnetic anomaly are controlled predominantly by the primary remanent vector in the crust. In addition, where the anomaly is measured on Earth affects its shape when measured with a magnetometer. This is because the field vector generated by the magnetized crust and the direction of the Earth's magnetic field vector are both measured by the magnetometers used in marine surveys. Because the Earth's field vector is much stronger than the anomaly field, a modern magnetometer measures the sum of the Earth's field and the component of the anomaly field in the direction of the Earth's field.
Sections of crust magnetized at high latitudes have magnetic vectors that dip steeply downward in a normal geomagnetic field. However, close to the magnetic south pole, magnetic vectors are inclined steeply upwards in a normal geomagnetic field. Therefore, in both these cases the anomalies are positive. At the equator the Earth's field vector is horizontal, so that crust magnetized there will also align horizontally. Here, the orientation of the spreading ridge affects the anomaly shape and amplitude. The component of the vector that affects the anomaly is at a maximum when the ridge is aligned east-west and the magnetic profile crossing is north-south.
Impact
The hypothesis links seafloor spreading and geomagnetic reversals in a powerful manner, with each expanding knowledge of the other. Early in the history of investigating the hypothesis only a short record of geomagnetic field reversals was available for studies of rocks on land. This was sufficient to allow computing of spreading rates over the last 700,000 years on many mid-ocean ridges by locating the closest reversed crust boundary to the crest of a mid-ocean ridge. Marine magnetic anomalies were found later to span the vast flanks of the ridges. Drillcores into the crust on these ridge flanks allowed dating of the early and of the older anomalies. This in turn allowed design of a predicted geomagnetic time scale. With time, investigations married land and marine data to produce an accurate geomagnetic reversal time scale for almost 200 million years.
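As a worked illustration of that computation (the 14 km distance below is a hypothetical figure; the age is the roughly 700,000-year reversal record mentioned above): if the nearest reversed-crust boundary lies 14 km from the ridge crest, the half-spreading rate is

v ≈ 14 km / 0.7 Myr = 20 km/Myr = 20 mm per year,

and the full opening rate across the ridge is about twice that, roughly 40 mm per year.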
See also
Edward Bullard
Drummond Matthews
Walter C. Pitman III
Frederick Vine
Geodynamo
Lamont–Doherty Earth Observatory
References
External links
Geophysics
History of Earth science
Plate tectonics
Geology theories | Vine–Matthews–Morley hypothesis | Physics | 1,273 |
64,747,458 | https://en.wikipedia.org/wiki/WASP-46 | WASP-46 is a G-type main-sequence star about away. The star is older than the Sun and is strongly depleted in heavy elements compared to the Sun, having just 45% of the solar abundance. Despite its advanced age, the star is rotating rapidly, being spun up by the tides raised by a giant planet on a close orbit.
The star displays an excess ultraviolet emission associated with starspot activity, and is suspected to be surrounded by a dust and debris disk.
Planetary system
In 2011 a transiting hot superjovian planet, WASP-46b, was detected. The planet's equilibrium temperature is . The dayside temperature measured in 2014 is much higher at , indicating a very poor heat redistribution across the planet. A re-measurement of the dayside planetary temperature in 2020 resulted in a lower value of 1870 K.
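For context, the equilibrium temperature quoted for hot Jupiters such as WASP-46b is normally computed from the standard expression (given here in general form, not with values specific to this system):

T_eq = T_* × sqrt(R_* / 2a) × (1 − A_B)^(1/4),

where T_* and R_* are the stellar effective temperature and radius, a is the orbital semi-major axis, and A_B is the Bond albedo; this assumes the absorbed heat is redistributed over the whole planet. In the opposite limit of no redistribution, the dayside effective temperature can exceed T_eq by a factor of up to about (8/3)^(1/4) ≈ 1.28, which is why a measured dayside temperature well above T_eq indicates poor heat transport to the night side.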
In 2017, a search for transit-timing variations of WASP-46b yielded zero results, thus ruling out the existence of additional gas giants in the system. The orbital decay of WASP-46b was also not detected.
References
Indus (constellation)
G-type main-sequence stars
Planetary systems with one confirmed planet
Planetary transit variables
J21145687-5552184
101
231663901
46 | WASP-46 | Astronomy | 252 |
77,480,034 | https://en.wikipedia.org/wiki/Rubbens%20%28distillery%29 | Rubbens is a Belgian company that distils jenever. For most of its history it has been a family business, owned by the Rubbens family and located in Zele. The company buildings in Zele have been abandoned since 2014. The company is meanwhile active at a new location in Wichelen.
History
In 1817 Melchior Singeleyn founded an agricultural distillery in Zele, East Flanders. Originally, it was a farm that processed its surplus grain into alcohol. In 1872, under the leadership of Charles (Karel) Rubbens, agricultural activities were phased out in favour of an expansion of the distillery. In 1877 new buildings, with an industrial steam boiler and a round brick chimney, were constructed for the storage and processing of larger quantities of grain into jenever. Charles Rubbens subsequently gave his name to the company. In 1880, the distillery employed three people. Charles Rubbens regularly expanded the company’s land holding, via purchases and inheritance.
When Charles died in 1910, his widow Dymphna Callebaut managed the farm and the distillery until 1911, when sons Jean and Benoit Rubbens took over. Benoit was responsible for administration and sales. Jean, a graduate agricultural engineer, took care of the technical side. After the disruption of World War I, they modernised the company which was then known as Rubbens bros Liqueur Distillery. In 1920 the company employed 5 people.
During World War II, the rationing of raw materials forced the distillery to operate at a reduced rate. After the war, management passed to Jean's daughters, Elisabeth and Martha Rubbens. Together with their husbands they further expanded the distillery. In 1950, new warehouses were added and the mechanisation of production was improved.
In 2008, the family sold Rubbens Distillery to farmer Dirk Beck from Sint-Gillis-Waas. Between 2009 and 2014, to expand its production capacity, the distillery moved to Wichelen, to the industrial site of former steel drawing mill N.V. Produrac. Dirk Beck turned the activities of the distillery and the farm into a cycle: the grain produced by the farm was sent to the distillery for alcohol production and the spent grain was subsequently used as animal feed. In 2021 his son Hendrik Beck became co-manager of the company.
Production
The first production run was grain jenever. It was sold to outlets such as shops and inns in white stoneware jars and 30 and 50 litre oak casks.
Under the leadership of Jean and Benoit Rubbens, the company experienced its first major boom when it started to produce 'Vieux-Système' jenever, from 1911 onwards. Production was subsequently greatly diversified to include, among other things, liqueurs. Alcohol production was eventually phased out and more use was made of imported alcohol and self-produced distillates.
In 2021, Rubbens' range included more than 120 products under different brand names, from jenever and gin to aperitifs, liqueurs, absinthe and even non-alcoholic syrups. Beer production also commenced in that year.
Buildings
After the company moved, the original distillery, which was built in 1872 on the corner of Langemuntstraat and Oudburgstraat, was converted into an apartment block. Its original façade, a familiar sight in the centre of Zele, was retained. As well as 25 apartments and 4 family homes, a commercial building was also built. The project was called De Stokerij (The Distillery). The façade of the owners’ mansion, with the year 1817 above the door, was listed as architectural heritage. That protection was lifted in 2023.
Trivia
The Rubbens Distillery logo included a seahorse. It is said that this is because the hippocampus – the part of the brain affected by alcohol – is shaped like that bony little fish.
Distillery Rubbens promoted its products with, among other things, enamel signs. In 1949, it ordered a series of six enamel signs from Émaillerie Belge, in an edition of 55 copies of each. Many of these have been preserved by collectors.
References
Distilleries
Wichelen | Rubbens (distillery) | Chemistry | 857 |
7,023,290 | https://en.wikipedia.org/wiki/Mark%20Barr | James Mark McGinnis Barr (18 May 1871 – 15 December 1950) was an electrical engineer, physicist, inventor, and polymath known for proposing the standard notation for the golden ratio. Born in America, but with English citizenship, Barr lived in both London and New York City at different times of his life.
Though remembered primarily for his contributions to abstract mathematics, Barr put much of his efforts over the years into the design of machines, including calculating machines. He won a gold medal at the 1900 Paris Exposition Universelle for an extremely accurate engraving machine.
Life
Barr was born in Pennsylvania, the son of Charles B. Barr and Ann M'Ginnis.
He was educated in London, then worked for the Westinghouse Electric Company in Pittsburgh from 1887 to 1890. He started there as a draughtsman before becoming a laboratory assistant, and later an erection engineer. For two years in the early 1890s, he worked in New York City at the journal Electrical World as an assistant editor, at the same time studying chemistry at the New York City College of Technology, and by 1900, he had worked with both Nikola Tesla and Mihajlo Pupin in New York. However, he was known among acquaintances for his low opinion of Thomas Edison. Returning to London in 1892, he studied physics and electrical engineering at the City and Guilds of London Technical College for three years.
From 1896 to 1900, he worked for Linotype in England, and from 1900 to 1904, he worked as a technical advisor to Trevor Williams in London.
Beginning in 1902, he was elected to the Small Screw Gauge Committee of the British Association for the Advancement of Science. The committee was set up to put into practice the system of British Association screw threads, which had been settled on but not implemented in 1884. More broadly, it was tasked with considering "the whole question of standardisation of engineering materials, tools, and machinery".
In January 1916, Barr was given charge of a school for machinists in London, intended to supply workers to a nearby factory for machine guns for the war effort; the school closed that June, as the factory was unable to take on the new workers at the expected rate.
In the early 1920s, Barr was a frequent visitor to Alfred North Whitehead in Chelsea, London, but by 1924, he had moved back to New York.
Hamlin Garland writes that, "after thirty years in London", Barr returned to America "in order that his young sons might become citizens". Garland quotes Barr as saying that, for him, "to abandon America would be an act of treason".
In 1924, Harvard University invited Whitehead to join its faculty, with the financial backing of Henry Osborn Taylor. Barr, a friend of both Whitehead and Taylor, served as an intermediary in the preparations for this move.
Whitehead, in subsequent letters to his son North in 1924 and 1925, writes of Barr's struggles to sell the design for one of his calculating machines to an unnamed large American company. In the 1925 letter, Whitehead writes that Barr's son Stephen was staying with him while Barr and his wife Mabel visited Elyria, Ohio, to oversee a test build of the device. However, by 1927, Barr and Whitehead had fallen out, Whitehead writing to North (amid much complaint about Barr's character) that he was "very doubtful whether he will keep his post at the business school here";
Barr was a "research assistant in finance" at Harvard Business School around this time.
Barr joined the Century Association in 1925, and in his later life it "became practically his home". He died in The Bronx in 1950.
Contributions
Machining
At Linotype, Barr improved punch-cutting machines by substituting ball bearings for oil lubrication to achieve a more precise fit, and using tractrix-shaped sleeves to distribute wear uniformly.
In an 1896 publication in The Electrical Review on calculating the dimensions of a ball race, Barr credits the bicycle industry for stimulating development of the perfectly spherical steel balls needed in this application.
The punch-cutters he worked on were, essentially, pantographs that could engrave copies of given shapes (the outlines of letters or characters) as three-dimensional objects at a much smaller scale (the punches used to shape each letter in hot metal typesetting).
Between 1900 and 1902, with Linotype managers Arthur Pollen and William Henry Lock, Barr also designed pantographs operating on a very different scale, calculating aim for naval artillery based on the positions, headings, and speeds of the firing ship and its target.
Golden ratio
Barr was a friend of William Schooling, and worked with him in exploiting the properties of the golden ratio to develop arithmetic algorithms suitable for mechanical calculators.
According to Theodore Andrea Cook, Barr gave the golden ratio the name of phi (ϕ). Cook wrote that Barr chose ϕ by analogy to the use of π for the ratio of a circle's circumference to its diameter, and because it is the first Greek letter in the name of the ancient sculptor Phidias. Although Martin Gardner later wrote that Phidias was chosen because he was "believed to have used the golden proportion frequently in his sculpture", Barr himself denied this, writing in his paper "Parameters of beauty" that he doubted Phidias used the golden ratio. Schooling communicated some of his discoveries with Barr to Cook after seeing an article by Cook about phyllotaxis, the arrangement of leaves on a plant stem, which often approximates the golden ratio.
Schooling published his work with Barr later, in 1915, employing the same notation.
Barr also published a related work in The Sketch in around 1913, generalizing the Fibonacci numbers to higher-order recurrences.
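The kind of relation being exploited can be illustrated as follows; this is a generic property of the golden ratio rather than a reconstruction of Barr and Schooling's specific algorithms. Since ϕ satisfies ϕ² = ϕ + 1, every power of ϕ collapses to a two-term integer combination,

ϕ^n = F_n ϕ + F_(n−1),

where F_n are the Fibonacci numbers (F_1 = F_2 = 1), and ratios of successive Fibonacci numbers converge to ϕ ≈ 1.6180339887.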
Other inventions and discoveries
Around 1910, Barr built a lighting apparatus for painter William Nicholson, using filters and reflectors to mix different types of light to produce an "artificial reproduction of daylight".
In 1914, as an expert in electricity, he took part in an investigation of psychic phenomena involving Polish medium Stanisława Tomczyk by the Society for Psychical Research; however, the results were inconclusive.
At some point prior to 1916, Barr was a participant in a business venture to make synthetic rubber from turpentine by a bacterial process. However, after much effort in relocating the bacterium after exhausting the original supply (a barrel of vinegar from New Jersey), the process ended up being less cost-effective than natural rubber, and the business failed.
With Edward George Boulenger of the London Zoo, he built a timer-operated electromechanical rat trap.
In preparation for a diving expedition to Haiti by William Beebe and the New York Zoological Society in early 1927, in which he participated as "physicist, master electrician, and philosopher", Barr helped develop an underwater telephone allowing divers to talk to a support boat, and a brass underwater housing for a motion picture camera.
Selected publications
References
Golden ratio
1871 births
1950 deaths
Scientists from Pennsylvania
American electrical engineers
19th-century American inventors
20th-century American inventors
English electrical engineers
English inventors
Engineers from Pennsylvania | Mark Barr | Mathematics | 1,436 |
3,351,671 | https://en.wikipedia.org/wiki/Collidinium%20p-toluenesulfonate | Collidinium p-toluenesulfonate or CPTS is a salt between p-toluenesulfonic acid and collidine (2,4,6-trimethylpyridine). It is used as a mild glycosylation catalyst in chemistry.
References
Pyridines
Salts | Collidinium p-toluenesulfonate | Chemistry | 67 |
4,597,574 | https://en.wikipedia.org/wiki/Effective%20potential | The effective potential (also known as effective potential energy) combines multiple, perhaps opposing, effects into a single potential. In its basic form, it is the sum of the 'opposing' centrifugal potential energy with the potential energy of a dynamical system. It may be used to determine the orbits of planets (both Newtonian and relativistic) and to perform semi-classical atomic calculations, and often allows problems to be reduced to fewer dimensions.
Definition
The basic form of the effective potential U_eff is defined as

U_eff(r) = L² / (2μr²) + U(r),

where
L is the angular momentum,
r is the distance between the two masses,
μ is the reduced mass of the two bodies (approximately equal to the mass of the orbiting body if one mass is much larger than the other); and
U(r) is the general form of the potential.
The effective force, then, is the negative gradient of the effective potential:

F_eff = −∇U_eff(r) = (L² / (μr³) − dU/dr) r̂,

where r̂ denotes a unit vector in the radial direction.
Important properties
There are many useful features of the effective potential, such as
To find the radius of a circular orbit, simply minimize the effective potential with respect to r, or equivalently set the net force to zero and then solve for the radius r_0:

dU_eff/dr = 0 at r = r_0.

After solving for r_0, plug this back into U_eff to find the value of the effective potential at the extremum.
A circular orbit may be either stable or unstable. If it is unstable, a small perturbation could destabilize the orbit, but a stable orbit would return to equilibrium. To determine the stability of a circular orbit, determine the concavity of the effective potential. If the concavity is positive, the orbit is stable:

d²U_eff/dr² > 0.

The frequency of small oscillations, using basic Hamiltonian analysis, is

ω = sqrt(U_eff'' / μ),

where the double prime indicates the second derivative of the effective potential with respect to r, evaluated at the minimum.
Gravitational potential
Consider a particle of mass m orbiting a much heavier object of mass M. Assume Newtonian mechanics, which is both classical and non-relativistic. The conservation of energy and angular momentum give two constants E and L, which have values

E = (1/2) m (ṙ² + r² φ̇²) − G M m / r,
L = m r² φ̇,

when the motion of the larger mass is negligible. In these expressions,
ṙ is the derivative of r with respect to time,
φ̇ is the angular velocity of mass m,
G is the gravitational constant,
E is the total energy, and
L is the angular momentum.
Only two variables are needed, since the motion occurs in a plane. Substituting the second expression into the first and rearranging gives

(1/2) m ṙ² = E − L² / (2 m r²) + G M m / r = E − U_eff(r),

where

U_eff(r) = L² / (2 m r²) − G M m / r

is the effective potential. The original two-variable problem has been reduced to a one-variable problem. For many applications the effective potential can be treated exactly like the potential energy of a one-dimensional system: for instance, an energy diagram using the effective potential determines turning points and locations of stable and unstable equilibria. A similar method may be used in other applications, for instance determining orbits in a general relativistic Schwarzschild metric.
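As a worked example of the circular-orbit properties listed above, applied to this gravitational effective potential: setting the derivative to zero,

dU_eff/dr = −L² / (m r³) + G M m / r² = 0, which gives r_0 = L² / (G M m²),

and the second derivative at that radius,

U_eff''(r_0) = 3L² / (m r_0⁴) − 2 G M m / r_0³ = G M m / r_0³ > 0,

is positive, so the circular orbit at r_0 is stable. The small-oscillation frequency ω = sqrt(U_eff''(r_0)/m) = sqrt(G M / r_0³) coincides with the orbital angular frequency, as expected from Kepler's third law (here m plays the role of the reduced mass, since M is much larger than m).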
Effective potentials are widely used in various condensed matter subfields, e.g. the Gauss-core potential (Likos 2002, Baeurle 2004) and the screened Coulomb potential (Likos 2001).
See also
Geopotential
Notes
References
Further reading
.
Mechanics
Potentials | Effective potential | Physics,Engineering | 644 |
48,799,229 | https://en.wikipedia.org/wiki/Physica%20speculatio | Physica speculatio is a text of scientific character written by Alonso de la Vera Cruz in 1557 in the capital of New Spain. It was the first published work on the American continent that specifically addressed the study of physics, and was written to teach the students of the Real University of Mexico.
It introduced the main theoretical concepts of geocentric astronomy and referenced the heliocentric model.
Fray Alonso de la Vera Cruz published in the capital of New Spain a Course of Arts, consisting of three volumes in Latin. The first appeared in 1553 under the title Recognitio Summularum, whose purpose was to help the students of the Real University of Mexico understand philosophy by means of formal logic. A year later came the second, called Dialectica Resolutio, which was a continuation of the previous volume. The last was Physica speculatio.
Four editions were produced; the last three, abbreviated versions of the Mexican one, were intended for the use of students at Salamanca.
Subjects
The Physica speculatio has as its object the study or "investigation" (speculatio) and the general exposition of subjects of the physics of nature (Physica), treated by fray Alonso de la Vera Cruz essentially from the philosophical perspective characteristic of Aristotle and traditional in the Middle Ages.
In what can be considered the first part, it addresses the subjects treated by Aristotle in the eight books of the Physics, such as the essence of the physical or natural being, motion and the infinite, extension, the continuum, space, time, the first mover, etc. The second part treats the generation and corruption of living beings, mixed and composite beings, the primary qualities, and the elements and their properties. In the third part it expounds the doctrines on meteors, discussing the stars and their influence on humans, the three regions of the air or atmosphere, comets, the tides, lightning, and many other atmospheric phenomena. The fourth part fray Alonso devotes to commenting on Aristotle's books De Anima. The Physica speculatio ends with some reflections on Aristotle's treatise De Caelo.
Formal characteristics
It consists of 400 printed pages, set in two columns, which amount to some 900 sheets in modern transcription and nearly 1,200 in Spanish translation.
It contains the following writings:
Eight books of physics;
On Generation and Corruption;
On the meteors;
On the soul;
On the sky.
These are titled exactly like the corresponding Aristotelian works.
It also contains, as an appendix, the Tractatus de Sphera written by the Italian mathematician and astronomer Campanus of Novara in the 13th century and printed for the first time in 1518.
Structure and form
The main divisions into books are in general those of the corresponding works of Aristotle.
Each book is divided into Speculations (particular studies), which can be understood as chapters.
The text is presented according to the scholastic method, first proposing the opinions or negative affirmations contrary to the thesis it will sustain, and afterwards the positive ones, with their foundations and explanations.
References
1557 books
16th-century books in Latin
Mexican documents
History of science
Astronomy education
Physics books | Physica speculatio | Astronomy,Technology | 685 |
55,181,911 | https://en.wikipedia.org/wiki/Identity%20provider%20%28SAML%29 | A SAML identity provider is a system entity that issues authentication assertions in conjunction with a single sign-on (SSO) profile of the Security Assertion Markup Language (SAML).
In the SAML domain model, a SAML authority is any system entity that issues SAML assertions. Two important examples of SAML authorities are the authentication authority and the attribute authority.
Definition
A SAML authentication authority is a system entity that produces SAML authentication assertions. Likewise a SAML attribute authority is a system entity that produces SAML attribute assertions.
A SAML authentication authority that participates in one or more SSO Profiles of SAML is called a SAML identity provider (or simply identity provider if the domain is understood). For example, an authentication authority that participates in SAML Web Browser SSO is an identity provider that performs the following essential tasks:
receives a SAML authentication request from a relying party via a web browser
authenticates the browser user principal
responds to the relying party with a SAML authentication assertion for the principal
In the previous example, the relying party that receives and accepts the authentication assertion is called a SAML service provider.
A given SAML identity provider is described by an <md:IDPSSODescriptor> element defined by the SAML metadata schema. Likewise, a SAML service provider is described by an <md:SPSSODescriptor> metadata element.
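As an informal illustration (not part of the SAML specifications), the following minimal Python sketch uses only the standard library to tell these two roles apart by checking a metadata file for the descriptor elements above; the file name is hypothetical, and the sketch assumes a document whose root is a single EntityDescriptor.

```python
# Minimal sketch: classify a SAML entity by the role descriptors in its metadata.
# Assumes the metadata file's root element is a single <md:EntityDescriptor>.
import xml.etree.ElementTree as ET

MD_NS = "urn:oasis:names:tc:SAML:2.0:metadata"  # SAML V2.0 metadata namespace

def entity_roles(metadata_file):
    root = ET.parse(metadata_file).getroot()
    roles = []
    if root.find(f"{{{MD_NS}}}IDPSSODescriptor") is not None:
        roles.append("identity provider")
    if root.find(f"{{{MD_NS}}}SPSSODescriptor") is not None:
        roles.append("service provider")
    return root.get("entityID"), roles

# Hypothetical usage:
print(entity_roles("idp-metadata.xml"))
```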
In addition to an authentication assertion, a SAML identity provider may also include an attribute assertion in the response. In that case, the identity provider functions as both an authentication authority and an attribute authority.
See also
Identity provider
Security Assertion Markup Language (SAML)
SAML service provider
SAML-based products and services
References
XML-based standards
Federated identity | Identity provider (SAML) | Technology | 361 |
11,569,840 | https://en.wikipedia.org/wiki/Gnomonia%20comari | Gnomonia comari is a fungus found on overwintered leaves of Comarum palustre L. (Rosaceae). The fungus has recently been referred to Gnomoniopsis comari (Karst.) Sogonov, comb. nov. It occurs in Europe (Finland, Germany, Switzerland). It causes a disease of strawberry.
See also
List of strawberry diseases
References
Fungal plant pathogens and diseases
Fungal strawberry diseases
Gnomoniaceae
Fungi described in 1873
Fungus species | Gnomonia comari | Biology | 100 |
14,918,355 | https://en.wikipedia.org/wiki/Treo%2090 | The Treo 90 is a Palm OS PDA developed by Handspring. It was released on May 28, 2002. The Treo 90 was the only Treo model produced without an integrated cellular phone. When first released it was the smallest Palm OS device on the market.
Design
The Treo 90 features a 12-bit standard (160x160) resolution color super-twisted nematic display, LED backlighting, a miniature QWERTY keyboard that replaces the usual Graffiti on other PDAs, an infrared port and an SD card slot.
ROM update
The original Palm OS (4.1H) lacks SDIO support and reportedly has trouble formatting 128 megabyte SanDisk cards. The Treo 90 Updater addresses this problem and makes some other fixes. The update is not a software patch but is burned into ROM.
The ROM 4.1H3 update allows the use of SD Cards up to 1GB (confirmed).
Additionally the new SDIO capability allows users to expand the device's features with two expansion cards: the Palm Bluetooth card, which allows the Treo 90 to access the Internet, email and messages wirelessly with a Bluetooth-enabled mobile phone and MARGI Systems Presenter-and-Go which connects the Treo 90 directly to digital LCD projectors or other VGA devices to show business presentations stored on the Treo in full color.
See also
List of Palm OS Devices
Palm Treo
Palm OS
References
External links
Official Palm site
Palm OS devices
68k-based mobile devices | Treo 90 | Technology | 311 |
15,209,792 | https://en.wikipedia.org/wiki/Collusion%20Syndicate | The Collusion Syndicate, formerly the Collusion Group and sometimes spelled Collu5ion, C0llu5i0n or C011u5i0n, was a Computer Security and Internet Politics Special Interest Group (SIG) founded in 1995 and effectively disbanded around 2002.
Collusion Group
The Collusion Group was founded in 1995 by technologist Tex Mignog, aka the TexorcisT (sic) in Dallas, Texas before moving the headquarters to Austin, Texas in 1997. Founding members included individuals that all operated anonymously using hacker pseudonyms (called "handles" or "nyms") including the TexorcisT, Progress, Sfear (sic), Anormal, StripE (sic) and Elvirus. The membership of this organization grew to an estimated 30+ by 1999 and was not localized to its headquarters in Austin, Texas, with members in other states, countries and continents. The group made numerous open appearances at computer security events such as H.O.P.E. and DefCon and was often quoted by the media on computer related security, political and cultural issues.
The group was well known for its online publication, www.Collusion.org and also founded and financed other events such as the "irQconflict", the largest seasonal computer gaming tournament in the South-Central US.
The group was often interviewed with regard to Internet security issues by reporters for a variety of media outlets, some examples being KVUE News, the Austin American Statesman and Washington Post, and The New York Times.
www.Collusion.org
The Collusion Syndicate began publishing articles on its site, www.Collusion.org, in 1997, written by its members and by others who submitted articles, its stated mission being to "Learn all that is Learnable".
This site won awards including a Best of Austin in 2000 by the Austin Chronicle where the site was described as "an edgy cabal of net-savvy punks and vinyl-scratching, video-gaming malcontents, laying it down in no uncertain terms with a lot of dark backgrounds and urban-toothed graphics and in-your-face-yo rants."
Collusion Syndicate research on SIPRNet has been referenced by the SANS Institute.
Xchicago has published tips by Collusion's founder, TexorcisT, on Google Hacking.
The group's work and research is referenced in many books, including Steal This Computer Book 4.0: What They Won't Tell You about the Internet, Mac OS X Maximum Security and Anarchitexts: Voices from the Global Digital Resistance.
The group may have been tied to Assassination Politics as evidenced by declassified documents.
Notable Inventions and Actions
AnonyMailer
1995 - An application developed to point out security issues with the Simple Mail Transfer Protocol.
Port-A-LAN
1998 - The Port-A-LAN is described as a "LAN-in-a-Box" and was designed to facilitate quick network deployments, using Cat 3 50-pin telco cable and break-out "harmonicas" to deploy a 160-node network at a previously unwired location in less than one hour. (Developed prior to the advent of WiFi popularity.)
irQconflict
1998-2001 - The Collusion Syndicate hosted the irQconflict, the largest seasonal computer gaming tournament in the South-Central US. These events were different in that they were very large by LAN party standards (100-200 gamers) and included a rave-like atmosphere with DJs, club lighting and projectors showing computer animation and machinima. They took place in various venues in Austin, Texas, utilized Port-A-LAN technology and, due to their size, required the re-engineering of the venues' electrical wiring. These events drew attendance from all over Texas and surrounding states.
The Collusion Group took the show on the road in 1999, taking the irQconflict to DefCon 7, and in 2000 was invited to stage the event in conjunction with SXSW Interactive and the COnduit 2K electronic film festival, where some machinima films chose to debut during the gaming.
Virtual Sit-ins
1999 - The Collusion Syndicate promoted Virtual Sit-ins, which are manual DDoS attacks in which hundreds of protesters attempt to overload the servers of the organization they are protesting by repeatedly requesting data by hand.
SecurityTraq credits this site as providing an early introduction to the concept of Hacktivism, and the group is referenced in The Internet and Democracy, a paper by Roger Clarke prepared for IPAA/NOIE and included in a NOIE publication in September 2004.
Their explanation of Hacktivism was published in the Hacktivist and credited in the Asheville Global Report as recently as 2007.
Electric Dog
2000 - The Electric Dog is a remote control wireless camera robot created to demonstrate the practicality of simple robotics in routine office technical repair.
See also
2600 The Hacker Quarterly
Phrack
Legion of Doom
Chaos Computer Club
Cult of the Dead Cow
l0pht
Crypto-anarchism
Culture jamming
E-democracy
Hacker culture
Hacker ethic
Internet activism
References
External links
Collusion E-zine
1995 establishments in Texas
Computing culture
Politics and technology | Collusion Syndicate | Technology | 1,060 |
45,716,348 | https://en.wikipedia.org/wiki/Igor%20Hawryszkiewycz | Igor Titus Hawryszkiewycz (born c. 1948) is an American computer scientist, organizational theorist, and Professor at the School of Systems, Management and Leadership of the University of Technology, Sydney, known for his work in the field of database systems, systems analysis, and knowledge management.
Biography
Hawryszkiewycz obtained his PhD in computer science from the Massachusetts Institute of Technology in 1973 with a thesis entitled "Semantics of data base systems", developed within Project MAC.
Hawryszkiewycz started his academic career as a lecturer in information systems at the University of Canberra in 1975. Since 1986 he has been Professor at the University of Technology, Sydney and Head of its Department of Information Systems. From 1989 to 1993 he also directed its Key Center of Advanced Computing Sciences.
Hawryszkiewycz's research interest is focussed on "developing design thinking environments to provide business solutions in complex environments by integrating processes, knowledge, and social networking... [and] facilitating agility and evolution of business systems through collaboration."
Selected publications
I.T. Hawryszkiewycz. Semantics of data base systems. PhD thesis, Massachusetts Institute of Technology, Cambridge (1973)
Hawryszkiewycz, Igor Titus. Database analysis and design. Chicago, IL: Science Research Associates, 1984.
Hawryszkiewycz, Igor T. Introduction to systems analysis and design. Prentice Hall PTR, 1994.
Hawryszkiewycz, Igor. Knowledge Management: Organizing Knowledge Based Enterprises. Palgrave Macmillan (2009).
Articles, a selection
Hawryszkiewycz, Igor. "A metamodel for modeling collaborative system s." (2005).
References
External links
Igor Hawryszkiewycz | University of Technology, Sydney
1940s births
Living people
American business theorists
American computer scientists
MIT School of Engineering alumni
Academic staff of the University of Technology Sydney | Igor Hawryszkiewycz | Technology | 410 |
23,693,736 | https://en.wikipedia.org/wiki/Neomogroside | Neomogroside is a cucurbitane glycoside isolated from the fruit of Siraitia grosvenorii.
See also
Mogroside
Siamenoside
References
External links
Triterpene glycosides
Sugar substitutes | Neomogroside | Chemistry | 53 |
11,422,391 | https://en.wikipedia.org/wiki/U98%20small%20nucleolar%20RNA | U98 small nucleolar RNA is a non-coding RNA (ncRNA) molecule which functions in the biogenesis (modification) of other small nuclear RNAs (snRNAs). This type of modifying RNA is located in the nucleolus of the eukaryotic cell, which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and is also often referred to as a "guide" RNA.
U98 belongs to the H/ACA box class of snoRNAs, which are thought to guide the sites of modification of uridines to pseudouridines; the target for this family is unknown.
The mouse homologue was cloned and is called MBII-367.
References
External links
Small nuclear RNA | U98 small nucleolar RNA | Chemistry | 167 |
19,777,655 | https://en.wikipedia.org/wiki/List%20of%20elements%20by%20atomic%20properties | This is a list of chemical elements and their atomic properties, ordered by atomic number (Z).
Since valence electrons are not clearly defined for the d-block and f-block elements, there being no clear point at which further ionisation becomes unprofitable, a purely formal definition as the number of electrons in the outermost shell has been used.
Table
[*] a few atomic radii are calculated, not experimental
[—] a long dash marks properties for which there is no data available
[ ] a blank marks properties for which no data has been found
References
Atomic Number | List of elements by atomic properties | Chemistry | 121 |
32,393,699 | https://en.wikipedia.org/wiki/Mechanically%20stimulated%20gas%20emission | Mechanically stimulated gas emission (MSGE) is a complex phenomenon embracing various physical and chemical processes occurring on the surface and in the bulk of a solid under applied mechanical stress and resulting in emission of gases. MSGE is a part of a more general phenomenon of mechanically stimulated neutral emission. MSGE experiments are often performed in ultra-high vacuum.
Phenomenology
The specific characteristic of MSGE as compared with MSNE is that the emitted neutral particles are limited to gas molecules. MSGE is the opposite of mechanically stimulated gas absorption, which usually occurs under fretting corrosion of metals, exposure to gases at high pressures, etc.
There are three main sources of MSGE:
I. Gas molecules adsorbed on the surface of a solid
IIa. Gases dissolved in the material bulk
IIb. Gases occluded or trapped in micro- and nanovoids, discontinuities and on defects in the material bulk
III. Gases generated as a result of mechanical activation of chemical reactions.
Generally, for producing MSGE, the mechanical action on the solid can be of any type, including tension, compression, torsion, shearing, rubbing, fretting, rolling, indentation, etc. In previous studies carried out by various groups it was found that MSGE is associated mainly with plastic deformation, fracture, wear and other irreversible modifications of a solid. Under elastic deformation MSGE is almost negligible and has been observed only just below the elastic limit, due to possible microplastic deformation.
In accordance with the main sources, the emitted gases usually contain hydrogen (source type IIa), argon (for coatings obtained using PVD in Ar plasma - source type IIb), methane (source type III), water (source type I and/or III), and carbon mono- and dioxide (source type I/III).
Knowledge of the mechanisms of MSGE is still limited. On the basis of the experimental findings, it has been speculated that the following processes may be related to MSGE:
Transport of gas atoms by moving dislocations
Gas diffusion in the bulk driven by gradient of mechanical stress
Phase transformation induced by deformation
Removal of oxide and other surface layers, which prevent exit of dissolved atoms on the surface
Extension of free surface
Thermal effect seems to be irrelevant to the gas emission under light load conditions.
Terminology
The emerging character of this interdisciplinary branch of science is reflected in a lack of established terminology. Different terms and definitions are used by different authors depending on the main approach taken (chemical, physical, mechanical, vacuum science, etc.), the specific gas emission mechanism (desorption, emanation, emission, etc.) and the type of mechanical activation (friction, traction, etc.):
Mechanically stimulated outgassing (MSO)
Tribodesorption
Triboemission,
Fractoemission
Atomic and Molecular emission
Outgassing stimulated by friction
Outgassing stimulated by deformation
Desorption (tribodesorption, fractodesorption, etc.) refers to release of gases dissolved in the bulk and adsorbed on the surface. Therefore, desorption is only one of the contributing processes to MSGE. Outgassing is a technical term usually utilized in vacuum science. Thus, the term "gas emission" embraces various processes, reflects the physical nature of this complex phenomenon and is preferable for use in scientific publications.
Experimental observations
Due to the low emission rate, experiments should be performed in ultrahigh vacuum (UHV). In some studies the materials were previously doped with tritium; the MSGE rate was then measured by the radioactivity released from the material under applied mechanical stress.
See also
Mechanochemistry
References
Materials science
Physical chemistry
Gases
Hydrogen | Mechanically stimulated gas emission | Physics,Chemistry,Materials_science,Engineering | 751 |
70,012,002 | https://en.wikipedia.org/wiki/Gromia%20appendiculariae | Gromia appendiculariae is a unicellular, and parasitic, organism in the genus Gromia, which closely resembles Gromia sphaerica.
A specimen of G. appendiculariae was discovered as a parasite attached to the tail of a species of Oikopleura.
References
Amoeboids
Parasitic eukaryotes
Parasites of animals
Rhizaria species
Protists described in 1908 | Gromia appendiculariae | Biology | 88 |
58,925,146 | https://en.wikipedia.org/wiki/Donald%20Nicklin | Donald Nicklin AO (1934-2007) was an Australian chemical engineer and academic.
Early life
Donald James Nicklin was born 20 December 1934 in Home Hill, Queensland. His father, James Nicklin was a mechanical and electrical engineer who worked at a number of sugar mills. His grandfather, Reuben Nicklin was a merchant in the Coorparoo area. His cousin George Frances Reuben Nicklin was a Premier of Queensland. Donald Nicklin attended Buranda Boys State School and Brisbane Grammar School, where he was Dux in his final year. He won a scholarship to attend the University of Queensland in 1952 and graduated with his Bachelor of Applied Science degree with First Class Honours in Industrial Chemistry in 1957. Nicklin also won a University Gold Medal in the same year. He completed a Bachelor of Science (Mathematics) degree in 1959. Nicklin won a Shell Scholarship to undertake studies towards a PhD in chemical engineering from the University of Cambridge in 1961. He won the Junior Moulton Medal from the Institution of Chemical Engineers. Nicklin commenced work with the DuPont company in Canada following graduation from Cambridge.
Career
Nicklin moved to the U.S. branch of the DuPont group and worked on the group that developed Lycra, undertaking analyses of the process of spinning the fabric. He returned to Australia in 1965, where he was appointed Senior Lecturer in the Department of Chemical Engineering at the University of Queensland. He was appointed Professor in 1969 and was Head of the Department from 1969-1980. One of Nicklin’s students, Andrew Liveris, would also pursue a career with the DuPont company. Nicklin took his Bachelor of Economics degree from UQ in 1973. He was Dean of the Faculty of Engineering between 1976-1979. He was Pro Vice Chancellor for Physical Sciences between 1983-1992. He retired from the University in 1993 and continued his work in industry and academia for many subsequent years.
He published over 50 papers and several patents.
Personal life
Nicklin died after a short illness on 29 October 2007. He married occupational therapist Joanna Wilson of Sydney in 1958 and they had six children.
Awards and memberships
Fellow of the Australian Academy of Technological Sciences and Engineering
1987 - Chemeca medal from the Chemical College of the Institution of Engineers, Australia
Member - Prime Minister’s Science, Engineering and Innovation Council – 6 years
Member – AusIndustry’s Industry Research and Development Board
Chair, Centre for Mining Technology and Equipment
Director, Ticor
Chair, Austa Energy Corporation
Chair, Board of Trustees, Brisbane Grammar School
Member, Sugar Research Institute
Officer of the Order of Australia, 1996
Legacy
The Institution of Chemical Engineers created a Nicklin Medal in his honour in 2008. A building was named in his honour at UQ.
References
Australian chemical engineers
Officers of the Order of Australia
1934 births
2007 deaths
People from North Queensland
People educated at Brisbane Grammar School
University of Queensland alumni
Chemical engineering academics | Donald Nicklin | Chemistry | 579 |
13,535,375 | https://en.wikipedia.org/wiki/Mass%20spectrometry%20imaging | Mass spectrometry imaging (MSI) is a technique used in mass spectrometry to visualize the spatial distribution of molecules, as biomarkers, metabolites, peptides or proteins by their molecular masses. After collecting a mass spectrum at one spot, the sample is moved to reach another region, and so on, until the entire sample is scanned. By choosing a peak in the resulting spectra that corresponds to the compound of interest, the MS data is used to map its distribution across the sample. This results in pictures of the spatially resolved distribution of a compound pixel by pixel. Each data set contains a veritable gallery of pictures because any peak in each spectrum can be spatially mapped. Despite the fact that MSI has been generally considered a qualitative method, the signal generated by this technique is proportional to the relative abundance of the analyte. Therefore, quantification is possible, when its challenges are overcome. Although widely used traditional methodologies like radiochemistry and immunohistochemistry achieve the same goal as MSI, they are limited in their abilities to analyze multiple samples at once, and can prove to be lacking if researchers do not have prior knowledge of the samples being studied. Most common ionization technologies in the field of MSI are DESI imaging, MALDI imaging, secondary ion mass spectrometry imaging (SIMS imaging) and Nanoscale SIMS (NanoSIMS).
History
More than 50 years ago, MSI was introduced using secondary ion mass spectrometry (SIMS) to study semiconductor surfaces by Castaing and Slodzian. However, it was the pioneering work of Richard Caprioli and colleagues in the late 1990s, demonstrating how matrix-assisted laser desorption/ionization (MALDI) could be applied to visualize large biomolecules (as proteins and lipids) in cells and tissue to reveal the function of these molecules and how function is changed by diseases like cancer, which led to the widespread use of MSI. Nowadays, different ionization techniques have been used, including SIMS, MALDI and desorption electrospray ionization (DESI), as well as other technologies. Still, MALDI is the current dominant technology with regard to clinical and biological applications of MSI.
Operation principle
MSI is based on resolving the spatial distribution of analytes across the sample. Therefore, the operation principle depends on the technique that is used to obtain the spatial information. The two techniques used in MSI are the microprobe and the microscope.
Microprobe
This technique uses a focused ionization beam to analyze a specific region of the sample by generating a mass spectrum. The mass spectrum is stored along with the spatial coordinates where the measurement took place. Then, a new region is selected and analyzed by moving the sample or the ionization beam. These steps are repeated until the entire sample has been scanned. By combining all individual mass spectra, a distribution map of intensities as a function of x and y locations can be plotted. As a result, reconstructed molecular images of the sample are obtained.
Microscope
In this technique, a 2D position-sensitive detector is used to measure the spatial origin of the ions generated at the sample surface through the ion optics of the instrument. The resolution of the spatial information will depend on the magnification of the microscope, the quality of the ion optics and the sensitivity of the detector. A new region still needs to be scanned, but the number of positions is drastically reduced. The limitation of this mode is the finite depth of vision present with all microscopes.
Ion source dependence
The ionization techniques available for MSI are suited to different applications. Some of the criteria for choosing the ionization method are the sample preparation requirements and the parameters of the measurement, such as resolution, mass range and sensitivity. Based on that, the most commonly used ionization methods are MALDI, SIMS and DESI, which are described below. Other, less common techniques include laser ablation electrospray ionization (LAESI), laser-ablation inductively coupled plasma (LA-ICP) and nanospray desorption electrospray ionization (nano-DESI).
SIMS and NanoSIMS imaging
Secondary ion mass spectrometry (SIMS) is used to analyze solid surfaces and thin films by sputtering the surface with a focused primary ion beam and collecting and analyzing ejected secondary ions. There are many different sources for a primary ion beam. However, the primary ion beam must contain ions that are at the higher end of the energy scale. Some common sources are: Cs+, O2+, O, Ar+ and Ga+. SIMS imaging is performed in a manner similar to electron microscopy; the primary ion beam is emitted across the sample while secondary mass spectra are recorded. SIMS proves to be advantageous in providing the highest image resolution, but only over small sample areas. Moreover, this technique is widely regarded as one of the most sensitive forms of mass spectrometry, as it can detect elements in concentrations as small as 10¹²-10¹⁶ atoms per cubic centimeter.
Multiplexed ion beam imaging (MIBI) is a SIMS method that uses metal isotope labeled antibodies to label compounds in biological samples.
Developments within SIMS: Some chemical modifications have been made within SIMS to increase the efficiency of the process. There are currently two separate techniques being used to increase the overall efficiency by increasing the sensitivity of SIMS measurements. The first is matrix-enhanced SIMS (ME-SIMS), which uses the same sample preparation as MALDI and so simulates the chemical ionization properties of MALDI. ME-SIMS does not sample nearly as much material; however, if the analyte being tested has a low mass it can produce a spectrum similar in appearance to a MALDI spectrum. ME-SIMS has been so effective that it has been able to detect low-mass chemicals at subcellular levels, which was not possible prior to the development of the technique. The second technique is called sample metallization (Meta-SIMS), the addition of gold or silver to the sample. This forms a layer of gold or silver around the sample, normally no more than 1-3 nm thick. Using this technique has resulted in an increase in sensitivity for larger-mass samples. The metallic layer also converts insulating samples into conducting samples, so charge compensation within SIMS experiments is no longer required.
Subcellular (50 nm) resolution is enabled by NanoSIMS allowing for absolute quantitative analysis at the organelle level.
MALDI imaging
Matrix-assisted laser desorption ionization can be used as a mass spectrometry imaging technique for relatively large molecules. It has recently been shown that the most effective type of matrix to use is an ionic matrix for MALDI imaging of tissue. In this version of the technique the sample, typically a thin tissue section, is moved in two dimensions while the mass spectrum is recorded. Although MALDI has the benefit of being able to record the spatial distribution of larger molecules, it comes at the cost of lower resolution than the SIMS technique. The limit for the lateral resolution for most of the modern instruments using MALDI is 20 μm. MALDI experiments commonly use either an Nd:YAG (355 nm) or N2 (337 nm) laser for ionization.
Pharmacodynamics and toxicodynamics in tissue have been studied by MALDI imaging.
DESI imaging
Desorption electrospray ionization is a less destructive technique, which combines simplicity with rapid analysis of the sample. The sample is sprayed with an electrically charged solvent mist at an angle that causes the ionization and desorption of various molecular species. Then, two-dimensional maps of the abundance of the selected ions on the surface of the sample, in relation to their spatial distribution, are generated. This technique is applicable to solid, liquid, frozen and gaseous samples. Moreover, DESI allows analyzing a wide range of organic and biological compounds, such as animal and plant tissues and cell culture samples, without complex sample preparation. Although this technique has the poorest resolution of the three, it can create high-quality images from large-area scans, such as whole-body section scanning.
Comparison between the ionization techniques
Combination of various MSI techniques and other imaging techniques
Combining various MSI techniques can be beneficial, since each particular technique has its own advantages. For example, when information on both proteins and lipids is needed in the same tissue section, DESI can be performed to analyze the lipids, followed by MALDI to obtain information about the peptides, and finally a stain (haematoxylin and eosin) can be applied for medical diagnosis of the structural characteristics of the tissue. Among combinations of MSI with other imaging techniques, fluorescence staining with MSI and magnetic resonance imaging (MRI) with MSI can be highlighted. Fluorescence staining can give information on the presence of certain proteins involved in a process inside a tissue, while MSI may give information about the molecular changes presented in that process. Combining both techniques, multimodal pictures or even 3D images of the distribution of different molecules can be generated. In turn, MRI with MSI combines the continuous 3D representation of the MRI image with a detailed structural representation using molecular information from MSI. Even though MSI itself can generate 3D images, the picture is only part of the reality due to the depth limitation of the analysis, while MRI provides, for example, detailed organ shape with additional anatomical information. This coupled technique can be beneficial for precise cancer diagnosis and for neurosurgery.
Data processing
Standard data format for mass spectrometry imaging datasets
The imzML format was proposed to exchange data in a standardized XML file based on the mzML format. Several imaging MS software tools support it. The advantage of this format is the flexibility to exchange data between different instruments and data analysis software.
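As a rough illustration of what reading such a file can look like in practice (a hedged sketch, not the reference implementation of any of the tools listed below), the following Python snippet assumes the third-party pyimzML parser and builds a single ion image; the file name, target m/z and tolerance are illustrative only.

```python
# Sketch: build one ion image from an imzML dataset using the pyimzML parser.
# All names and numbers below are illustrative assumptions.
import numpy as np
from pyimzml.ImzMLParser import ImzMLParser

parser = ImzMLParser("tissue_section.imzML")
target_mz, tol = 760.585, 0.25            # hypothetical ion and +/- m/z window

# imzML pixel coordinates are 1-based (x, y, z) tuples.
xs = [c[0] for c in parser.coordinates]
ys = [c[1] for c in parser.coordinates]
image = np.zeros((max(ys), max(xs)))

for idx, (x, y, z) in enumerate(parser.coordinates):
    mzs, intensities = parser.getspectrum(idx)
    mzs = np.asarray(mzs)
    in_window = (mzs > target_mz - tol) & (mzs < target_mz + tol)
    image[y - 1, x - 1] = np.asarray(intensities)[in_window].sum()

np.save("ion_image.npy", image)           # pixel-by-pixel map for this m/z
```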
Software
There are many free software packages available for visualization and mining of imaging mass spectrometry data. Converters from Thermo Fisher format, Analyze format, GRD format and Bruker format to imzML format were developed by the Computis project. Some software modules are also available for viewing mass spectrometry images in imzML format: Biomap (Novartis, free), Datacube Explorer (AMOLF, free), EasyMSI (CEA), Mirion (JLU), MSiReader (NCSU, free) and SpectralAnalysis.
For processing .imzML files with the free statistical and graphics language R, a collection of R scripts is available, which permits parallel-processing of large files on a local computer, a remote cluster or on the Amazon cloud.
Another free statistical package for processing imzML and Analyze 7.5 data in R exists, Cardinal.
SPUTNIK is an R package containing various filters to remove peaks characterized by an uncorrelated spatial distribution with the sample location or spatial randomness.
Applications
A remarkable ability of MSI is to determine the localization of biomolecules in tissues even when there is no prior information about them. This feature has made MSI a unique tool for clinical and pharmacological research. It provides information about biomolecular changes related to diseases by tracking proteins, lipids, and cell metabolism. For example, identifying biomarkers by MSI can support detailed cancer diagnosis. In addition, low-cost imaging for pharmaceutical studies can be acquired, such as images of molecular signatures that would be indicative of treatment response for a specific drug or of the effectiveness of a particular drug delivery method.
Ion colocalization has been studied as a way to infer local interactions between biomolecules. Similarly to colocalization in microscopy imaging, correlation has been used to quantify the similarity between ion images and generate network models.
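A minimal sketch of one such similarity measure is given below: it computes the Pearson correlation between the pixel intensities of two ion images of the same shape (for instance, arrays produced as in the imzML sketch above); the demo arrays here are synthetic.

```python
# Sketch: quantify colocalization of two ion images by Pearson correlation.
import numpy as np

def ion_colocalization(image_a, image_b):
    a = np.asarray(image_a, dtype=float).ravel()
    b = np.asarray(image_b, dtype=float).ravel()
    return float(np.corrcoef(a, b)[0, 1])   # 1.0 means identical spatial pattern

rng = np.random.default_rng(0)
demo = rng.random((64, 64))                    # synthetic stand-in for an ion image
print(ion_colocalization(demo, demo * 2 + 1))  # linearly related images -> 1.0
```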
Advantages, challenges and limitations
The main advantage of MSI for studying the location and distribution of molecules within tissue is that this analysis can provide greater selectivity, more information or more accuracy than other techniques. Moreover, this tool requires less investment of time and resources for similar results. The table below shows a comparison of advantages and disadvantages of some available techniques, including MSI, in relation to drug distribution analysis.
Notes
Further reading
"Imaging Trace Metals in Biological Systems" pp 81–134 in "Metals, Microbes and Minerals: The Biogeochemical Side of Life" (2021) pp xiv + 341. Authors Yu, Jyao; Harankhedkar, Shefali; Nabatilan, Arielle; Fahrni, Christopher; Walter de Gruyter, Berlin.
Editors Kroneck, Peter M.H. and Sosa Torres, Martha.
DOI 10.1515/9783110589771-004
References
Mass spectrometry | Mass spectrometry imaging | Physics,Chemistry | 2,625 |
12,703,710 | https://en.wikipedia.org/wiki/Gas%20composition | The gas composition of any gas can be characterised by listing the pure substances it contains, and stating for each substance its proportion of the gas mixture's molecule count.
Gas composition of air
To give a familiar example, air has a composition of:
Nitrogen (N2) 78.084%
Oxygen (O2) 20.9476%
Argon (Ar) 0.934%
Carbon dioxide (CO2) 0.0314%
Standard Dry Air is the agreed-upon gas composition for air from which all water vapour has been removed. There are various standards bodies which publish documents that define a dry air gas composition. Each standard provides a list of constituent concentrations, a gas density at standard conditions and a molar mass.
It is extremely unlikely that the actual composition of any specific sample of air will completely agree with any definition for standard dry air. While the various definitions for standard dry air all attempt to provide realistic information about the constituents of air, the definitions are important in and of themselves because they establish a standard which can be cited in legal contracts and publications documenting measurement calculation methodologies or equations of state.
The standards below are two examples of commonly used and cited publications that provide a composition for standard dry air:
ISO TR 29922-2017 provides a definition for standard dry air which specifies an air molar mass of 28.965 46 ± 0.000 17 kg·kmol⁻¹.
GPA 2145:2009 is published by the Gas Processors Association. It provides a molar mass for air of 28.9625 g/mol, and provides a composition for standard dry air as a footnote.
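As a back-of-the-envelope check (not any standard's reference calculation), the mean molar mass of dry air can be reproduced from the mole fractions listed above and rounded textbook molar masses for the constituents, as in the Python sketch below.

```python
# Rough check of the mean molar mass of dry air from the mole fractions above.
# Constituent molar masses are rounded textbook values in g/mol.
fractions  = {"N2": 0.78084, "O2": 0.209476, "Ar": 0.00934, "CO2": 0.000314}
molar_mass = {"N2": 28.013,  "O2": 31.999,   "Ar": 39.948,  "CO2": 44.010}

mean = sum(fractions[gas] * molar_mass[gas] for gas in fractions)
print(f"approximate molar mass of dry air: {mean:.3f} g/mol")
# prints roughly 28.96 g/mol, close to the published figures quoted above
```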
References
Gases | Gas composition | Physics,Chemistry | 317 |
28,116,091 | https://en.wikipedia.org/wiki/World%20Architecture%20Survey | The World Architecture Survey was conducted in 2010 by Vanity Fair, to determine the most important works of contemporary architecture. 52 leading architects, teachers, and critics, including several Pritzker Prize winners and deans of major architecture schools were asked for their opinion.
The survey asked two questions:
What are the five most important buildings, bridges, or monuments constructed since 1980?
What is the greatest work of architecture thus far in the 21st century?
While the range of responses was very broad, more than half of the experts surveyed named the Guggenheim Museum Bilbao by Frank Gehry as one of the most important works since 1980. The Beijing National Stadium (Bird’s Nest stadium) in Beijing by Herzog and de Meuron was the building most often cited, by seven respondents, as the most significant structure of the 21st century so far. Counted by architect, works by Frank Gehry received the most votes, followed by those of Rem Koolhaas. The result of the survey led Vanity Fair to label Gehry as "the most important architect of our age".
Results
Most important works since 1980
The respondents named a total of 132 different structures when asked to indicate the five most important buildings, monuments, and bridges completed since 1980. The top 21 were:
Guggenheim Museum Bilbao (completed 1997) in Bilbao, Spain by Frank Gehry (28 votes)
Menil Collection (1987) in Houston, Texas by Renzo Piano (10 votes)
Thermal Baths of Vals (1996) in Vals, Switzerland by Peter Zumthor (9 votes)
Hong Kong Shanghai Bank (HSBC) Building (1985) in Hong Kong by Norman Foster (7 votes)
Tied (6 votes):
Seattle Central Library (2004) in Seattle by Rem Koolhaas
Sendai Mediatheque (2001) in Sendai, Japan by Toyo Ito
Neue Staatsgalerie (1984) in Stuttgart, Germany by James Stirling
Church of the Light (1989) in Osaka, Japan by Tadao Ando
Vietnam Veterans Memorial (1982) in Washington, D.C. by Maya Lin (5 votes)
Tied (4 votes):
Millau Viaduct (2004) in France by Norman Foster
Jewish Museum, Berlin (1998) in Berlin by Daniel Libeskind
Tied (3 votes):
Lloyd’s Building (1984) in London by Richard Rogers
Beijing National Stadium (2008) in Beijing by Jacques Herzog and Pierre de Meuron
CCTV Building (under construction) in Beijing by Rem Koolhaas
Casa da Musica (2005) in Porto, Portugal by Rem Koolhaas
Cartier Foundation (1994) in Paris by Jean Nouvel
BMW Welt (2007) in Munich by COOP Himmelblau
Addition to the Nelson-Atkins Museum (2007) in Kansas City, Missouri by Steven Holl
Cooper Union building (2009) in New York by Thom Mayne
Parc de la Villette (1984) in Paris by Bernard Tschumi
Yokohama International Passenger Terminal (2002) at Ōsanbashi Pier in Yokohama, Japan by Foreign Office Architects
Saint-Pierre church, Firminy (2006) in Firminy, France by Le Corbusier (2 votes)
Most significant work of the 21st century
The buildings most often named as the greatest work of architecture thus far in the 21st century were:
Beijing National Stadium by Herzog and de Meuron (7 votes)
Saint-Pierre, Firminy by Le Corbusier (4 votes)
Seattle Central Library by Rem Koolhaas (3 votes)
CCTV Headquarters by Rem Koolhaas (2 votes)
Tied with one vote each: Sendai Mediatheque (Toyo Ito), Millau Viaduct (Norman Foster), Casa da Musica (Rem Koolhaas), Cartier Foundation (Jean Nouvel), BMW Welt (COOP Himmelblau)
Criticism
Writing for the Chicago Tribune, Blair Kamin criticized the "self-aggrandizing" survey for not including any green buildings. In response, Lance Hosey of Architect magazine conducted an alternate survey of leading green building experts and found that no buildings appeared on both lists, suggesting that standards of "good design" and "green design" are misaligned. Commentators also noted that several of the architects surveyed (but not Gehry) "perhaps took the magazine’s title a little too seriously" and voted for their own buildings.
Participants
The following people replied to the survey:
Stan Allen
Tadao Ando
George Baird
Deborah Berke
David Chipperfield
Neil Denari
Hank Dittmar
Roger Duffy
Peter Eisenman
Martin Filler
Norman Foster
Kenneth B. Frampton
Frank Gehry
Richard Gluckman
Paul Goldberger
Michael Graves
Zaha Hadid
Hugh Hardy
Steven Holl
Hans Hollein
Michael Holzer
Michael Jemtrud
Charles Jencks
Leon Krier
Daniel Libeskind
Thom Mayne
Richard Meier
José Rafael Moneo
Eric Owen Moss
Mohsen Mostafavi
Victoria Newhouse
Jean Nouvel
Richard Olcott
John Pawson
Cesar Pelli
James Stewart Polshek
Christian de Portzamparc
Antoine Predock
Wolf D. Prix
Jaquelin T. Robertson
Richard Rogers
Joseph Rykwert
Ricardo Scofidio
Annabelle Selldorf
Robert Siegel
John Silber
Brett Steele
Bernard Tschumi
Renzo Piano
Ben van Berkel
Anthony Vidler
Rafael Viñoly
Tod Williams and Billie Tsien
See also
Architectural icon
References
Architecture lists
Vanity Fair (magazine)
2010 | World Architecture Survey | Engineering | 1,109 |
67,354,487 | https://en.wikipedia.org/wiki/Park%20of%20Generous%20Souls | The Park of Generous Souls () is a park in Zvolen, Slovakia dedicated to Slovak citizens who helped save Jews during the Holocaust.
References
Zvolen
Rescue of Jews during the Holocaust
Holocaust memorials | Park of Generous Souls | Biology | 42 |
6,434,550 | https://en.wikipedia.org/wiki/Eulogia | The term eulogia (, eulogía), Greek for "a blessing", has been applied in ecclesiastical usage to "a blessed object". It was occasionally used in early times to signify the Holy Eucharist, and in this sense is especially frequent in the writings of St. Cyril of Alexandria. The origin of this use is doubtless to be found in the words of St. Paul (1 Corinthians 10:16); to poterion tes eulogias ho eulogoumen. But the more general use is for such objects as bread, wine etc., which it was customary to distribute after the celebration of the Divine Mysteries. Bread so blessed, we learn from St. Augustine (De pecat. merit., ii, 26), was customarily distributed in his time to catechumens, and he even gives it the name of sacramentum, as having received the formal blessing of the Church: "Quod acceperunt catechumeni, quamvis non sit corpus Christi, sanctum tamen est, et sanctius quam cibi quibus alimur, quoniam sacramentum est" (What the catechumens receive, though it is not the Body of Christ, is holy — holier, indeed, than our ordinary food, since it is a sacramentum). For the extension of this custom in later ages, see Antidoron; Sacramental bread.
The word eulogia has a special use in connection with monastic life. In the Benedictine Rule monks are forbidden to receive "litteras, eulogias, vel quaelibet munuscula" without the abbot's leave. Here the word may be used in the sense of blessed bread only, but it seems to have a wider signification, and to designate any kind of present. There was a custom in monasteries of distributing in the refectories, after Mass, the eulogiae of bread blessed at the Mass.
Sources
See also
Eulogy (disambiguation)
Eucharist
Religious objects | Eulogia | Physics | 429 |
75,298,411 | https://en.wikipedia.org/wiki/Bluetooth%20Low%20Energy%20denial%20of%20service%20attacks | The Bluetooth Low Energy denial of service attacks are a series of denial-of-service attacks against mobile phones and iPads via Bluetooth Low Energy that can make it difficult to use them.
iPhone and iPad attacks
DEFCON proof of concept attack
At DEF CON 31 in 2023, a demonstration was given using equipment made with a Raspberry Pi, a Bluetooth adapter and a couple of antennas. This attack used Bluetooth advertising packets, hence did not require pairing. The demonstration version claimed to be an Apple TV and affected iOS 16.
Flipper Zero attack
This attack also uses Bluetooth advertising packets to repeatedly send notification signals to iPhones and iPads running iOS 17. It uses a Flipper Zero running third-party Xtreme firmware. It functions even when the device is in airplane mode, and can only be avoided by disabling Bluetooth from the device's Settings app.
The attack can cause the device to crash. It also affects iOS 17.1.
The release of iOS 17.2 made devices more resistant to the attack, reducing the flood of popup messages.
An app to perform these attacks was written for Android.
Interference with a medical device
An attendee of Midwest FurFest 2023 tweeted that the Android device they used to control their insulin pump had been crashed by a BLE attack and that if they hadn't been able to fix it they would have had to go to a hospital.
Wall of Flippers
The Wall of Flippers project provides a Python script that can scan for BTLE attacks. It can run on Linux or Microsoft Windows.
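As a purely defensive illustration of the kind of heuristic such a scanner can apply (a self-contained toy, not the Wall of Flippers code, with packet capture itself out of scope), the sketch below flags any source address whose advertisement rate exceeds an assumed threshold within a sliding time window; the list of (timestamp, address) pairs is assumed to come from whatever BLE scanner is available.

```python
# Toy heuristic: flag BLE advertising floods from (timestamp_seconds, address) pairs.
from collections import defaultdict, deque

WINDOW = 10.0      # seconds of history to keep per source address
THRESHOLD = 50     # advertisements per window considered suspicious (assumed)

def flag_floods(events):
    recent = defaultdict(deque)
    flagged = set()
    for ts, addr in sorted(events):
        q = recent[addr]
        q.append(ts)
        while q and ts - q[0] > WINDOW:
            q.popleft()                 # drop packets older than the window
        if len(q) > THRESHOLD:
            flagged.add(addr)
    return flagged

# Synthetic example: 200 advertisements from one address within 10 seconds.
print(flag_floods([(i * 0.05, "AA:BB:CC:DD:EE:FF") for i in range(200)]))
```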
Android attack
The Flipper Zero version of the attack has been adapted to attack Android and Microsoft Windows systems.
References
Bluetooth
Denial-of-service attacks
Hacking in the 2020s | Bluetooth Low Energy denial of service attacks | Technology | 361 |
76,989,062 | https://en.wikipedia.org/wiki/Journal%20of%20Micro/Nanopatterning%2C%20Materials%2C%20and%20Metrology | Journal of Micro/Nanopatterning, Materials, and Metrology is a peer-reviewed scientific journal published quarterly by SPIE. It covers science, development, and practice of micro and nanofabrication processes and metrology. Established in 2002 under the name Journal of Microlithography, Microfabrication, and Microsystems, it was subsequently retitled to Journal of Micro/Nanolithography, MEMS, and MOEMS in 2007. The journal title was changed to its current name in 2021.
The editor-in-chief of the journal is Harry Levinson (HJL Lithography).
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2022 impact factor of 2.3.
References
External links
Quarterly journals
SPIE academic journals
Academic journals established in 2002
English-language journals
Mechanical engineering journals
Semiconductor journals
Materials science journals | Journal of Micro/Nanopatterning, Materials, and Metrology | Materials_science,Engineering | 189 |
48,725,243 | https://en.wikipedia.org/wiki/Mlecchita%20vikalpa | Mlecchita Vikalpa is one of the 64 arts listed in Vatsyayana's Kamasutra, translated into English as "the art of understanding writing in cypher, and the writing of words in a peculiar way". The list appears in Chapter 3 of Part I of Kamasutra and Mlecchita Vikalpa appears as the 44th item in the list.
Introduction
Mlecchita Vikalpa is the art of secret writing and secret communications. In The Codebreakers, a 1967 book by David Kahn about the history of cryptography, the reference to Mlecchita Vikalpa in Kamasutra is cited as proof of the prevalence of cryptographic methods in ancient India. Though Kamasutra does not have details of the methods by which people of that time practiced this particular form of art, later commentators of Kamasutra have described several methods. For example, Yasodhara in his Jayamangala commentary on Kamasutra gives descriptions of methods known by the names Kautilya and Muladeviya. The ciphers described in the Jayamangala commentary are substitution ciphers: in Kautiliyam the letter substitutions are based on phonetic relations, and Muladeviya is a simplified version of Kautiliyam. There are also references to other methods for secret communications like Gudhayojya, Gudhapada and Gudhavarna. Some modern writers on cryptography have christened the ciphers alluded to in the Kamasutra as Kamasutra cipher or Vatsyayana cipher.
The exact date of the composition of Kamasutra has not been fixed. It is supposed that Vatsyayana must have lived between the first and sixth centuries AD. However, the date of the Jayamangla commentary has been fixed as between the tenth and thirteenth centuries CE.
Kautiliya
This is a Mlecchita named after Kautilya, the author of the ancient Indian political treatise, the Arthashastra. In this system, the short and long vowels, the anusvara and the spirants are interchanged for the consonants and the conjunct consonants. The following table shows the substitutions used in the Kautiliyam cipher. The characters not listed in the table are left unchanged.
{| class="wikitable"
|-
|a || ā || i || ī || u || ū || ṛ || ṝ || ḷ || ḹ|| e || ai || o || au || ṃ|| ḩ || ñ|| ś || ṣ || s || i || r || l || u
|-
| kh || g || gh ||ṅ || ch || j || jh || ñ || ṭh || ḍ || ḍh || ṇ || th || d || dh || n || ph || b || bh || m || y || r || l || v
|}
There is a simplified form of this scheme known by the name Durbodha.
Muladeviya
Another form of secret writing mentioned in Yasodhara's commentary on Kamasutra is known by the name Muladeviya. It existed both in spoken and in written form; in the written form it is called Gudhalekhya. This form of secret communication was used by kings' spies as well as by traders in various geographical locations in India, and it has also been popular among thieves and robbers. However, there were variations in the actual scheme across the various geographical areas. For example, in the erstwhile Travancore Kingdom, spread over a part of the present-day Kerala State in India, it was practiced under the name Mulabhadra with some changes from the scheme described by Yashodhara.
The cipher alphabet of Muladeviya consists of the reciprocal one specified in the table below.
{| class="wikitable"
|-
| a || kh || gh || c || t || ñ || n || r || l || y
|-
| k || g || ṅ || ṭ || p || ṇ || m || ṣ || s || ś
|}
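The reciprocal table above lends itself to a very small computational sketch. The following Python snippet (a toy illustration working on romanized, pre-tokenized syllables, not a historically faithful reconstruction) applies the substitution; because the table is reciprocal, the same function both enciphers and deciphers.

```python
# Toy sketch of the Muladeviya reciprocal substitution on romanized tokens.
PAIRS = [("a", "k"), ("kh", "g"), ("gh", "ṅ"), ("c", "ṭ"), ("t", "p"),
         ("ñ", "ṇ"), ("n", "m"), ("r", "ṣ"), ("l", "s"), ("y", "ś")]

TABLE = {}
for x, y in PAIRS:          # build the reciprocal lookup in both directions
    TABLE[x] = y
    TABLE[y] = x

def muladeviya(tokens):
    """Encipher or decipher romanized tokens (unknown tokens pass through)."""
    return [TABLE.get(tok, tok) for tok in tokens]

plain = ["r", "a", "j", "a"]      # hypothetical sample word, already tokenized
cipher = muladeviya(plain)        # -> ['ṣ', 'k', 'j', 'k']
print(cipher)
print(muladeviya(cipher))         # reciprocal: round-trips back to the plaintext
```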
The great Indian epic Mahabharata contains an incident involving the use of this type of secret talking. Duryodhana was planning to burn Pandavas alive and had made arrangements to send Pandavas to Varanavata. Vidura resorted to secret talk to warn Yudhishthira about the dangers in front of everybody present. Only Yudhishthira could understand the secret message. None others even suspected that it was a warning.
Gudhayojya
This is an elementary and trivial method for obscuring the true content of spoken messages and it is popular as a language game among children. The idea is to add some unnecessary letters chosen randomly to the beginning or to the end of every word in a sentence. For example, to obscure the sentence "will visit you tonight" one may add the letters "dis" at the beginning of every word and convey the message as "diswill disvisit disyou distonight" the real content of which may not be intelligible to the uninitiated if pronounced rapidly.
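A toy sketch of this word game, using the "dis" padding from the example above, is shown below; it is an illustration only, since the historical practice was spoken and applied to Indian languages rather than English.

```python
# Toy sketch of Gudhayojya: prefix a nonsense syllable to every word.
def gudhayojya(sentence, pad="dis"):
    return " ".join(pad + word for word in sentence.split())

def un_gudhayojya(sentence, pad="dis"):
    return " ".join(w[len(pad):] if w.startswith(pad) else w
                    for w in sentence.split())

print(gudhayojya("will visit you tonight"))
# -> "diswill disvisit disyou distonight"
print(un_gudhayojya("diswill disvisit disyou distonight"))
```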
See also
Mulabhadra
Pig Latin
References
History of cryptography
Military communications
Classical ciphers
Kamashastra | Mlecchita vikalpa | Engineering | 1,152 |
76,916,306 | https://en.wikipedia.org/wiki/IC%204481 | IC 4481 is a type SBbc barred spiral galaxy located in Boötes. Its redshift is 0.110727, meaning IC 4481 is located 1.49 billion light-years away from Earth. It is one of the furthest objects in the Index Catalogue and has an apparent dimension of 0.30 x 0.2 arcmin. IC 4481 was discovered on May 10, 1904, by Royal Harwood Frost, who found it "faint, very small, round and diffuse".
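For a rough sense of how a redshift translates into a distance of this order, the sketch below applies the simple Hubble-law approximation d ≈ cz/H0 with an assumed H0 of 70 km/s/Mpc; the article's 1.49 billion light-years is a light-travel-time figure from the source, so it differs somewhat from this naive estimate.

```python
# Naive Hubble-law distance estimate for IC 4481 (assumes H0 = 70 km/s/Mpc).
c = 299_792.458            # speed of light, km/s
H0 = 70.0                  # assumed Hubble constant, km/s/Mpc
z = 0.110727               # redshift quoted above

d_mpc = c * z / H0                       # distance estimate in megaparsecs
d_gly = d_mpc * 3.2616e6 / 1e9           # convert Mpc to billions of light-years
print(f"{d_mpc:.0f} Mpc  ~  {d_gly:.2f} billion light-years")
# prints roughly 474 Mpc, about 1.5 billion light-years, the same order as quoted
```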
See also
List of the most distant astronomical objects
References
4481
Barred spiral galaxies
1501729
Boötes
Discoveries by Royal Harwood Frost
Astronomical objects discovered in 1904 | IC 4481 | Astronomy | 140 |
65,770,382 | https://en.wikipedia.org/wiki/Palanka%20%28fortification%29 | A palanka (), also known as parkan in Southern Hungary and palanga, was a wooden fortification used by the Ottoman Empire extensively in certain regions of Southeast Europe, including Hungary, the Balkans and the Black Sea coast against rival states, especially the Archduchy of Austria and the Kingdom of Hungary. Such wooden forts could be built and expanded quickly, and usually contained a small garrison. These fortifications varied in size and shape but were primarily constructed of palisades. Palankas could be adjacent to a town and later they could be replaced by a more formidable stone fortress as in the case of Uyvar. Palankas could also be built as an extension of the main fortress. Many Ottoman forts were a mixture of palanka type fortifications and stonework. Evliya Çelebi describes the word palanka also as a technique of timber masonry.
Some palankas developed into larger settlements, and the word palanga has also been used to describe rural settlements originating from palankas in Erzincan, Eastern Anatolia.
Etymology
The word comes from Hungarian , which itself comes from Middle Latin meaning log, which is derived from Ancient Greek or (, ) also meaning log.
Architecture
A typical palanka had a rectangular plan, and its entrance could be guarded by a watchtower called ağaçtan lonca köşkü. The walls of a palanka could be made of a single palisade or of two rows of stockade, creating a gap in between which was filled with earth, possibly taken from the ditch dug around the fortification, called şarampa, thus creating a protected walkway. The inner and outer palisades were held together by transverse beams, whose ends were fixed to the outer walls by wooden pins, to counter the pressure of the earth filling. In order to increase resistance against cannon fire, the wooden walls could be strengthened by applying mortar in a technique called horasani palanka. After that, military buildings such as bastions on which cannons were placed, towers and barracks, and civilian buildings such as inns, marketplaces, mosques and cisterns could be added. Lastly, a stockade could be constructed around the palanka as a secondary fortification.
Characteristics
Palankas were the basis of the Ottoman frontier defence system in Europe; their purpose was to protect military and riverine routes of strategic value, and the travellers passing along them, against plunderers. These routes connected palankas, leading to the creation of a defence network. They also allowed effective communication between strategic areas. When the Ottomans reached the limit of their conquests in Europe, they used these structures to stabilize the frontier.
Although palankas were not indestructible on their own, they were interconnected structures, and if an army too strong to resist attacked one, the forces of the other palankas would come to its aid. The wooden walls of palankas were difficult to ignite since they were filled with earth, and the stakes used to build them were damp. Most of the troops in palankas were azaps, and a palanka functioning on the frontier could have a higher ratio of cavalry troops compared to a fortress defended by cannons.
Palankas showed similarities to the Roman limes system. In the pre-Ottoman period, fortifications often already existed where palankas were constructed, and after the conquests these fortifications were rebuilt with distinctive Ottoman characteristics. Due to their makeshift character few palankas survive today, but research shows that this kind of structure was in use between the 14th and the late 19th century.
Havale
The havale, the type of fortification from which the palanka was derived, acted as a base for troops and artillery during sieges of the early Ottoman era. The 15th-century Ottoman historian Aşıkpaşazade mentions that this kind of fortress was built during the Siege of Bursa (1326). Havale-type forts were also built during the Siege of Sivrihisar in Karaman, and in Giurgiu during the campaign to Hungary (1435–36) by Murad II.
Gallery
Related towns
Serbia
Bačka Palanka
Smederevska Palanka
Bela Palanka
Brza Palanka
Banatska Palanka
Macedonia
Kriva Palanka
References
Bibliography
Fortifications | Palanka (fortification) | Engineering | 867 |
73,069,613 | https://en.wikipedia.org/wiki/Nonprofit%20Adopt%20a%20Star | Nonprofit Adopt a Star is a charitable fundraising program operated by White Dwarf Research Corporation, a 501c3 nonprofit organization based in Golden, Colorado USA. The program features the targets of NASA space telescopes that are searching for planets around other stars, and it uses the proceeds to support research by an international team of astronomers known as the Kepler/TESS Asteroseismic Science Consortium.
Supporters of the program receive a personalized “Certificate of Adoption” by email, and their selected star is updated in a public database, ensuring that each star can only be adopted once. The database shows an image of the star in Google Sky, along with the constellation name and coordinates, a link to a star chart, and a link to additional information about the star from the SIMBAD astronomical database.
History
The program was started in January 2008 by American astronomer Travis Metcalfe, and was originally known as "The Pale Blue Dot Project".
The original database only included stars observed by NASA’s Kepler space telescope, which operated from 2009 to 2013. After losing the ability to point at the original star field, the mission was renamed K2 in 2014 and observed a series of star fields near the ecliptic before running out of fuel in 2018. The launch of NASA’s Transiting Exoplanet Survey Satellite (TESS) in 2018 expanded the database to include bright stars in every constellation.
Proceeds from the program have supported several research projects of the international team, including characterization of the smallest known planet around Kepler-37 and the oldest known planetary system around Kepler-444, both discovered by the Kepler mission.
The phrase "Adopt a Star" is registered as a charitable fundraising service with the U.S. Patent and Trademark Office, but trademark infringement has continued by several for-profit companies.
In popular culture
In August 2009, the estate of Carl Sagan threatened legal action after a news article noted that the project was called Pale Blue Dot to echo the popular astronomer's description of the Earth as seen from space.
In July 2014, Ukrainian astronomers adopted a star with a disparaging nickname for Russian president Vladimir Putin and the insult went viral on social media.
In May 2022, Gucci adopted a star for each of the guests at their space-themed Cosmogonie fashion show, held at the Castel del Monte in Italy.
References
Non-profit corporations
Astronomy organizations
Research institutes | Nonprofit Adopt a Star | Astronomy | 479 |
59,409,397 | https://en.wikipedia.org/wiki/State%20Institute%20for%20Drug%20Control | The State Institute for Drug Control () is a Czech government agency responsible for regulation of the safe production of pharmaceuticals in the country, clinical evaluation of medicines and for monitoring the advertising and marketing of both medicines and medical devices. Its powers stem from the Act on Public Health Insurance (Act No. 48/1997 Coll.).
Only part of its operating costs are directly funded. It is largely self-financing through charges for its services.
The SÚKL sets maximum ex-factory prices for reimbursement based on 195 reference groups of therapeutically interchangeable products of similar clinical efficacy. More than 20% of the health budget is spent on medication and medical devices. From January 2019 it took responsibility for regulating the reimbursement of consumer medical devices prescribed as part of outpatient care.
It allocates codes and names of medications for use in the electronic prescribing system.
It has been criticised by KOPAC, the Patient Association for Cannabis Treatment, for failing to ensure a supply of Czech medical cannabis, with the result that patients have to pay inflated prices for imported supplies.
References
Pharmacy organizations
Medical and health organizations based in the Czech Republic
National agencies for drug regulation
1952 establishments in Czechoslovakia
Organizations established in 1952 | State Institute for Drug Control | Chemistry | 244 |
56,633,693 | https://en.wikipedia.org/wiki/Wendy%20Taylor%20%28physicist%29 | Wendy Taylor is an Experimental Particle Physicist at York University and a former Canada Research Chair. She is the lead for York University's ATLAS experiment group at CERN.
Education
Taylor graduated from the University of British Columbia with a Bachelor of Science in Physics in 1991. As an undergraduate, she worked at TRIUMF on rare kaon decays. She completed her graduate studies at the University of Toronto, where she earned a PhD under the supervision of Pekka Sinervo in 1999 for work on the fragmentation properties of the bottom quark. As a postdoctoral fellow at Stony Brook University, she worked on Fermilab's D0 experiment, building electronics to detect bottom quark particles in real time.
Research
Taylor's research focuses on the search for the magnetic monopole, using the ATLAS detector. Her lab concentrated on the development of firmware for the transition radiation tracker within the ATLAS experiment. She is motivated by predictions from Grand Unified Theory, the observation of quantised charge, and the potential to restore symmetry to Maxwell's equations.
Taylor spent five years working at Fermilab's Tevatron particle accelerator and was concerned when it lost government funding in 2011. While working at the Tevatron, Taylor identified CP violation in the decay of bottom quarks, which could contribute to the dominance of matter in the universe. The rate at which she detected CP violation was two orders of magnitude larger than that predicted by the Standard Model of particle physics.
She joined York University in 2004, where she was one of two women in the department. She held a Canada Research Chair between 2004 and 2014.
Taylor is a member of the American Physical Society and the Particle Physics Division of the Canadian Association of Physicists.
References
Living people
Particle physicists
Experimental physicists
Canadian physicists
University of British Columbia Faculty of Science alumni
University of Toronto alumni
Canadian women physicists
Academic staff of York University
Year of birth missing (living people)
People associated with CERN
Canada Research Chairs | Wendy Taylor (physicist) | Physics | 405 |
33,312,827 | https://en.wikipedia.org/wiki/Glycoside%20hydrolase%20family%2089 | In molecular biology, glycoside hydrolase family 89 is a family of glycoside hydrolases.
Glycoside hydrolases are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and also discussed at CAZypedia, an online encyclopedia of carbohydrate active enzymes.
Glycoside hydrolase family 89 (CAZy GH_89) includes enzymes with α-N-acetylglucosaminidase activity. The enzyme consists of three structural domains: the N-terminal domain has an alpha-beta fold, the central domain has a TIM barrel fold, and the C-terminal domain has an all-alpha helical fold.
Alpha-N-acetylglucosaminidase is a lysosomal enzyme required for the stepwise degradation of heparan sulphate. Mutations in the alpha-N-acetylglucosaminidase (NAGLU) gene can lead to Mucopolysaccharidosis type IIIB (MPS IIIB, or Sanfilippo syndrome type B), characterised by neurological dysfunction but relatively mild somatic manifestations.
References
EC 3.2.1
Glycoside hydrolase families
Protein families | Glycoside hydrolase family 89 | Biology | 326 |
3,293,217 | https://en.wikipedia.org/wiki/Sudan%20stain | Sudan stains and Sudan dyes are synthetic organic compounds that are used as dyes for various plastics (plastic colorants) and are also used to stain sudanophilic biological samples, usually lipids. Sudan II, Sudan III, Sudan IV, Oil Red O, and Sudan Black B are important members of this class of compounds (see images below).
Staining
Sudan dyes have a high affinity for fats and are therefore used to demonstrate triglycerides, lipids, and lipoproteins. Alcoholic solutions of Sudan dyes are usually used, although pyridine solutions can be used in some situations as well.
The Sudan stain test is often used to determine the level of fecal fat in order to diagnose steatorrhea. A small sample is dissolved in water or saline, glacial acetic acid is added to hydrolyze the insoluble salts of fatty acids, a few drops of an alcoholic solution of Sudan III are added, and the sample is spread on a microscope slide and heated to boiling twice. Normally a stool sample should show only a few drops of red-orange stained fat under the microscope. The method is only semiquantitative but, due to its simplicity, it is used for screening.
Dyeing
Since they are characteristically oil- and fat-soluble, Sudan dyes are also useful for dyeing plastics and fabrics. Sudan dyes I–IV and Sudan Red G consist of arylazo-substituted naphthols. Such compounds are known to exist as a pair of tautomers.
Examples
Safety
Some spices exported from Asia have been adulterated with Sudan dyes, especially Sudan I and Sudan III, to enhance their colors. This finding has led to controversy because some Sudan dyes are carcinogenic in rats.
References
Staining
Stool tests
2-Naphthols
Azo dyes
Food colorings
IARC Group 3 carcinogens | Sudan stain | Chemistry,Biology | 387 |
56,866,568 | https://en.wikipedia.org/wiki/Saphenofemoral%20junction | The sapheno-femoral junction (SFJ) is located at the saphenous opening within the groin and formed by the meeting of the great saphenous vein (GSV), common femoral vein and the superficial inguinal veins (confluens venosus subinguinalis). It is one of the distinctive points where a superficial vein meets a deep vein and at which incompetent valves may occur.
Structure
The SFJ can be located in the groin crease, or in a 3 × 3 cm region situated up to 4 cm to the side of and up to 3 cm below the pubic tubercle. It is nearer to the pubic tubercle in younger and thinner subjects.
The GSV has two valves near the SFJ. One is a terminal valve about 1–2 mm from the opening into the femoral vein and the other is about 2 cm away.
References
Angiology
Anatomy
Veins | Saphenofemoral junction | Biology | 193 |
12,799,505 | https://en.wikipedia.org/wiki/GYRO | GYRO is a computational plasma physics code developed and maintained at General Atomics. It solves the 5-D coupled gyrokinetic-Maxwell equations using a combination of finite difference, finite element and spectral methods. Given plasma equilibrium data, GYRO can determine the rate of turbulent transport of particles, momentum and energy.
External links
GYRO Homepage at General Atomics
Computational physics
Physics software
Plasma theory and modeling
Tokamaks | GYRO | Physics | 92 |
36,747,016 | https://en.wikipedia.org/wiki/Miner%27s%20cap | The miner's cap () is part of the traditional miner's costume. It consists of a white material (linen) and served in the Middle Ages to protect the miner when descending below ground (unter Tage). Later it was replaced by the miner's hat (Fahrhut or Schachthut), from which the leather cap or helmet were developed and subsequently today's mining helmets.
See also
Miner's habit
Mooskappe - miner's cap from the Harz Mountains
Literature
Caps
Miners' clothing
Mining culture and traditions | Miner's cap | Engineering | 118 |
70,878,793 | https://en.wikipedia.org/wiki/NGC%204868 | NGC 4868 is an unbarred spiral galaxy located about 240 million light-years away in the constellation Canes Venatici. It was discovered by William Herschel on March 17, 1787. A 2002 study suggests that a quasar may exist within NGC 4868.
See also
List of galaxies
New General Catalogue
References
4868
Canes Venatici
Unbarred spiral galaxies | NGC 4868 | Astronomy | 82 |
26,859,186 | https://en.wikipedia.org/wiki/Graphalloy | Graphalloy is the trademark for a group of metal-impregnated graphite materials. The materials are commonly used for self-lubricating plain bearings or electrical contacts. They are proprietary materials owned by the Graphite Metallizing Corp. based in Yonkers, New York, USA.
Construction
When the metal is impregnated into the graphite it forms long continuous filaments. These filaments give the material its ductility, strength, and heat-dissipation properties.
Types
There are many types of Graphalloy because the graphite can be impregnated with many different metals.
Applications
Graphalloy is used in applications where high or low temperatures are encountered, where grease or oil is not feasible, where expulsion of wear particles is prohibited, or in dusty, submerged, or corrosive environments. It is non-corrosive in gasoline, jet fuel, solvents, bleaches, caustics, dyes, liquefied gases, acids, and many more chemicals. It is not used in highly abrasive applications. Common applications include bushings/bearings for pumps, bleaching and washing tanks, ovens, industrial dryers, steam turbines, kilns, and cryogenics.
It is also used as a bearing material in applications where electrical conduction is necessary, and where high-frequency current would degrade ball or needle bearings. Examples of applications include packaging machines, radar joints, and welding equipment.
References
External links
Composite materials
Bearings (mechanical) | Graphalloy | Physics | 310 |
1,357,514 | https://en.wikipedia.org/wiki/Recombinant%20DNA | Recombinant DNA (rDNA) molecules are DNA molecules formed by laboratory methods of genetic recombination (such as molecular cloning) that bring together genetic material from multiple sources, creating sequences that would not otherwise be found in the genome.
Recombinant DNA is the general name for a piece of DNA that has been created by combining two or more fragments from different sources. Recombinant DNA is possible because DNA molecules from all organisms share the same chemical structure, differing only in the nucleotide sequence. Recombinant DNA molecules are sometimes called chimeric DNA because they can be made of material from two different species like the mythical chimera. rDNA technology uses palindromic sequences and leads to the production of sticky and blunt ends.
The DNA sequences used in the construction of recombinant DNA molecules can originate from any species. For example, plant DNA can be joined to bacterial DNA, or human DNA can be joined with fungal DNA. In addition, DNA sequences that do not occur anywhere in nature can be created by the chemical synthesis of DNA and incorporated into recombinant DNA molecules. Using recombinant DNA technology and synthetic DNA, any DNA sequence can be created and introduced into living organisms.
Proteins that can result from the expression of recombinant DNA within living cells are termed recombinant proteins. When recombinant DNA encoding a protein is introduced into a host organism, the recombinant protein is not necessarily produced. Expression of foreign proteins requires the use of specialized expression vectors and often necessitates significant restructuring of the foreign coding sequences.
Recombinant DNA differs from genetic recombination in that the former results from artificial methods while the latter is a normal biological process that results in the remixing of existing DNA sequences in essentially all organisms.
Production
Molecular cloning is the laboratory process used to produce recombinant DNA. It is one of two most widely used methods, along with polymerase chain reaction (PCR), used to direct the replication of any specific DNA sequence chosen by the experimentalist. There are two fundamental differences between the methods. One is that molecular cloning involves replication of the DNA within a living cell, while PCR replicates DNA in the test tube, free of living cells. The other difference is that cloning involves cutting and pasting DNA sequences, while PCR amplifies by copying an existing sequence.
Formation of recombinant DNA requires a cloning vector, a DNA molecule that replicates within a living cell. Vectors are generally derived from plasmids or viruses, and represent relatively small segments of DNA that contain necessary genetic signals for replication, as well as additional elements for convenience in inserting foreign DNA, identifying cells that contain recombinant DNA, and, where appropriate, expressing the foreign DNA. The choice of vector for molecular cloning depends on the choice of host organism, the size of the DNA to be cloned, and whether and how the foreign DNA is to be expressed. The DNA segments can be combined by using a variety of methods, such as restriction enzyme/ligase cloning or Gibson assembly.
In standard cloning protocols, the cloning of any DNA fragment essentially involves seven steps: (1) Choice of host organism and cloning vector, (2) Preparation of vector DNA, (3) Preparation of DNA to be cloned, (4) Creation of recombinant DNA, (5) Introduction of recombinant DNA into the host organism, (6) Selection of organisms containing recombinant DNA, and (7) Screening for clones with desired DNA inserts and biological properties.
These steps are described in some detail in a related article (molecular cloning).
DNA expression
DNA expression requires the transfection of suitable host cells. Typically, either bacterial, yeast, insect, or mammalian cells (such as Human Embryonic Kidney cells or CHO cells) are used as host cells.
Following transplantation into the host organism, the foreign DNA contained within the recombinant DNA construct may or may not be expressed. That is, the DNA may simply be replicated without expression, or it may be transcribed and translated and a recombinant protein is produced. Generally speaking, expression of a foreign gene requires restructuring the gene to include sequences that are required for producing an mRNA molecule that can be used by the host's translational apparatus (e.g. promoter, translational initiation signal, and transcriptional terminator). Specific changes to the host organism may be made to improve expression of the ectopic gene. In addition, changes may be needed to the coding sequences as well, to optimize translation, make the protein soluble, direct the recombinant protein to the proper cellular or extracellular location, and stabilize the protein from degradation.
Properties of organisms containing recombinant DNA
In most cases, organisms containing recombinant DNA have apparently normal phenotypes. That is, their appearance, behavior and metabolism are usually unchanged, and the only way to demonstrate the presence of recombinant sequences is to examine the DNA itself, typically using a polymerase chain reaction (PCR) test. Significant exceptions exist, and are discussed below.
If the rDNA sequences encode a gene that is expressed, then the presence of RNA and/or protein products of the recombinant gene can be detected, typically using RT-PCR or western hybridization methods. Gross phenotypic changes are not the norm, unless the recombinant gene has been chosen and modified so as to generate biological activity in the host organism. Additional phenotypes that are encountered include toxicity to the host organism induced by the recombinant gene product, especially if it is over-expressed or expressed within inappropriate cells or tissues.
In some cases, recombinant DNA can have deleterious effects even if it is not expressed. One mechanism by which this happens is insertional inactivation, in which the rDNA becomes inserted into a host cell's gene. In some cases, researchers use this phenomenon to "knock out" genes to determine their biological function and importance. Another mechanism by which rDNA insertion into chromosomal DNA can affect gene expression is by inappropriate activation of previously unexpressed host cell genes. This can happen, for example, when a recombinant DNA fragment containing an active promoter becomes located next to a previously silent host cell gene, or when a host cell gene that functions to restrain gene expression undergoes insertional inactivation by recombinant DNA.
Applications of recombinant DNA
Recombinant DNA is widely used in biotechnology, medicine and research. Today, recombinant proteins and other products that result from the use of DNA technology are found in essentially every pharmacy, physician or veterinarian office, medical testing laboratory, and biological research laboratory. In addition, organisms that have been manipulated using recombinant DNA technology, as well as products derived from those organisms, have found their way into many farms, supermarkets, home medicine cabinets, and even pet shops, such as those that sell GloFish and other genetically modified animals.
The most common application of recombinant DNA is in basic research, in which the technology is important to most current work in the biological and biomedical sciences. Recombinant DNA is used to identify, map and sequence genes, and to determine their function. rDNA probes are employed in analyzing gene expression within individual cells, and throughout the tissues of whole organisms. Recombinant proteins are widely used as reagents in laboratory experiments and to generate antibody probes for examining protein synthesis within cells and organisms.
Many additional practical applications of recombinant DNA are found in industry, food production, human and veterinary medicine, agriculture, and bioengineering. Some specific examples are identified below.
Recombinant chymosin
Found in rennet, chymosin is the enzyme responsible for hydrolysis of κ-casein to produce para-κ-casein and glycomacropeptide, which is the first step in formation of cheese, and subsequently curd, and whey. It was the first genetically engineered food additive used commercially. Traditionally, processors obtained chymosin from rennet, a preparation derived from the fourth stomach of milk-fed calves. Scientists engineered a non-pathogenic strain (K-12) of E. coli bacteria for large-scale laboratory production of the enzyme. This microbiologically produced recombinant enzyme, identical structurally to the calf derived enzyme, costs less and is produced in abundant quantities. Today about 60% of U.S. hard cheese is made with genetically engineered chymosin. In 1990, FDA granted chymosin "generally recognized as safe" (GRAS) status based on data showing that the enzyme was safe.
Recombinant human insulin
Recombinant human insulin has almost completely replaced insulin obtained from animal sources (e.g. pigs and cattle) for the treatment of type 1 diabetes. A variety of different recombinant insulin preparations are in widespread use. Recombinant insulin is synthesized by inserting the human insulin gene into E. coli or yeast (Saccharomyces cerevisiae), which then produces insulin for human use. Insulin produced by E. coli requires further post-translational modifications (e.g. glycosylation), whereas yeasts are able to perform these modifications themselves by virtue of being more complex host organisms. An advantage of recombinant human insulin is that, unlike animal-sourced insulin, it does not stimulate an immune response in patients even after chronic use.
Recombinant human growth hormone (HGH, somatotropin)
Administered to patients whose pituitary glands generate insufficient quantities to support normal growth and development. Before recombinant HGH became available, HGH for therapeutic use was obtained from pituitary glands of cadavers. This unsafe practice led to some patients developing Creutzfeldt–Jakob disease. Recombinant HGH eliminated this problem, and is now used therapeutically. It has also been misused as a performance-enhancing drug by athletes and others.
Recombinant blood clotting factor VIII
It is the recombinant form of factor VIII, a blood-clotting protein that is administered to patients with the bleeding disorder hemophilia, who are unable to produce factor VIII in quantities sufficient to support normal blood coagulation. Before the development of recombinant factor VIII, the protein was obtained by processing large quantities of human blood from multiple donors, which carried a very high risk of transmission of blood borne infectious diseases, for example HIV and hepatitis B.
Recombinant hepatitis B vaccine
Hepatitis B infection can be successfully controlled through the use of a recombinant subunit hepatitis B vaccine, which contains a form of the hepatitis B virus surface antigen that is produced in yeast cells. The development of the recombinant subunit vaccine was an important and necessary development because hepatitis B virus, unlike other common viruses such as polio virus, cannot be grown in vitro.
Recombinant antibodies
Recombinant antibodies (rAbs) are produced in vitro by the means of expression systems based on mammalian cells. Their monospecific binding to a specific epitope makes rAbs eligible not only for research purposes, but also as therapy options against certain cancer types, infections and autoimmune diseases.
Diagnosis of HIV infection
Each of the three widely used methods for diagnosing HIV infection has been developed using recombinant DNA. The antibody test (ELISA or western blot) uses a recombinant HIV protein to test for the presence of antibodies that the body has produced in response to an HIV infection. The DNA test looks for the presence of HIV genetic material using reverse transcription polymerase chain reaction (RT-PCR). Development of the RT-PCR test was made possible by the molecular cloning and sequence analysis of HIV genomes.
Golden rice
Golden rice is a recombinant variety of rice that has been engineered to express the enzymes responsible for β-carotene biosynthesis. This variety of rice holds substantial promise for reducing the incidence of vitamin A deficiency in the world's population. Golden rice is not currently in use, pending the resolution of regulatory and intellectual property issues.
Herbicide-resistant crops
Commercial varieties of important agricultural crops (including soy, maize/corn, sorghum, canola, alfalfa and cotton) have been developed that incorporate a recombinant gene that results in resistance to the herbicide glyphosate (trade name Roundup), and simplifies weed control by glyphosate application. These crops are in common commercial use in several countries.
Insect-resistant crops
Bacillus thuringiensis is a bacterium that naturally produces a protein (Bt toxin) with insecticidal properties. The bacterium has been applied to crops as an insect-control strategy for many years, and this practice has been widely adopted in agriculture and gardening. Recently, plants have been developed that express a recombinant form of the bacterial protein, which may effectively control some insect predators. Environmental issues associated with the use of these transgenic crops have not been fully resolved.
History
The idea of recombinant DNA was first proposed by Peter Lobban, a graduate student of Prof. Dale Kaiser in the Biochemistry Department at Stanford University Medical School. The first publications describing the successful production and intracellular replication of recombinant DNA appeared in 1972 and 1973, from Stanford and UCSF. In 1980 Paul Berg, a professor in the Biochemistry Department at Stanford and an author on one of the first papers was awarded the Nobel Prize in Chemistry for his work on nucleic acids "with particular regard to recombinant DNA". Werner Arber, Hamilton Smith, and Daniel Nathans shared the 1978 Nobel Prize in Physiology or Medicine for the discovery of restriction endonucleases which enhanced the techniques of rDNA technology.
Stanford University applied for a U.S. patent on recombinant DNA on November 4, 1974, listing the inventors as Herbert W. Boyer (professor at the University of California, San Francisco) and Stanley N. Cohen (professor at Stanford University); this patent, U.S. 4,237,224A, was awarded on December 2, 1980. The first licensed drug generated using recombinant DNA technology was human insulin, developed by Genentech and licensed by Eli Lilly and Company.
Controversy
Scientists associated with the initial development of recombinant DNA methods recognized that the potential existed for organisms containing recombinant DNA to have undesirable or dangerous properties. At the 1975 Asilomar Conference on Recombinant DNA, these concerns were discussed and a voluntary moratorium on recombinant DNA research was initiated for experiments that were considered particularly risky. This moratorium was widely observed until the US National Institutes of Health developed and issued formal guidelines for rDNA work. Today, recombinant DNA molecules and recombinant proteins are usually not regarded as dangerous. However, concerns remain about some organisms that express recombinant DNA, particularly when they leave the laboratory and are introduced into the environment or food chain. These concerns are discussed in the articles on genetically modified organisms and genetically modified food controversies. Furthermore, there are concerns about the by-products in biopharmaceutical production, where recombinant DNA result in specific protein products. The major by-product, termed host cell protein, comes from the host expression system and poses a threat to the patient's health and the overall environment.
See also
Asilomar conference on recombinant DNA
Genetic engineering
Genetically modified organism
Recombinant virus
Vector DNA
Biomolecular engineering
Recombinant DNA technology
Host cell protein
T7 expression system
References
Further reading
The Eighth Day of Creation: Makers of the Revolution in Biology. Touchstone Books. 2nd edition: Cold Spring Harbor Laboratory Press, 1996 paperback.
Micklas, David. 2003. DNA Science: A First Course. Cold Spring Harbor Press.
Rasmussen, Nicolas, Gene Jockeys: Life Science and the rise of Biotech Enterprise, Johns Hopkins University Press, (Baltimore), 2014.
Rosenfeld, Israel. 2010. DNA: A Graphic Guide to the Molecule that Shook the World. Columbia University Press.
Schultz, Mark and Zander Cannon. 2009. The Stuff of Life: A Graphic Guide to Genetics and DNA. Hill and Wang.
Watson, James. 2004. DNA: The Secret of Life. Random House.
External links
Recombinant DNA fact sheet (from University of New Hampshire)
Plasmids in Yeasts (Fact sheet from San Diego State University)
Recombinant DNA research at UCSF and commercial application at Genentech Edited transcript of 1994 interview with Herbert W. Boyer, Living history project. Oral history.
Recombinant Protein Purification Principles and Methods Handbook
Massachusetts Institute of Technology, Oral History Program, Oral History Collection on the Recombinant DNA Controversy, MC-0100. Massachusetts Institute of Technology, Department of Distinctive Collections, Cambridge, Massachusetts
American inventions
Biopharmaceuticals
Genetics techniques
Molecular genetics
Molecular biology
Synthetic biology
1972 in biotechnology | Recombinant DNA | Chemistry,Engineering,Biology | 3,568 |
42,335,455 | https://en.wikipedia.org/wiki/Biomedical%20spectroscopy | Biomedical spectroscopy is a multidisciplinary research field involving spectroscopic tools for applications in the field of biomedical science. Vibrational spectroscopy such as Raman or infrared spectroscopy is used to determine the chemical composition of a material based on detection of vibrational modes of constituent molecules. Some spectroscopic methods are routinely used in clinical settings for diagnosis of disease; an example is Magnetic resonance imaging (MRI). Fourier transform infrared (FTIR) spectroscopic imaging is a form of chemical imaging for which the contrast is provided by composition of the material.
One example is NOCISCAN, described as the first evidence-supported SaaS platform to leverage MR spectroscopy to noninvasively help physicians distinguish between painful and nonpainful discs in the spine.
References
Spectroscopy | Biomedical spectroscopy | Physics,Chemistry | 153 |
39,641,467 | https://en.wikipedia.org/wiki/Impossible.com | Impossible is an innovation group and incubator. It started as a gift economy platform created by Lily Cole in 2013, and since then has expanded to other areas, mainly design and technology. Impossible claim to be working on client projects with potentially far-reaching impacts.
Impossible People
Impossible People (previously Impossible.com) is an altruism-based mobile app which invites people to give their services and skills away to help others. Created by Lily Cole, the app allows users to post something they would like to do or need so that others can grant their wish. In May 2013, Cole presented the app's beta in conjunction and with the support of Wikipedia co-founder Jimmy Wales at a special event at Cambridge University. It is the first Yunus social business in the UK. The project became open source in March 2017.
Funding and support
In the past, the Impossible.com gift economy project received a grant of £200,000 from the Cabinet Office’s Innovation in Giving fund. Other investors include Lily Cole herself and boyfriend and Impossible's co-founder, Kwame Ferreira. Donations of services from Muhammad Yunus, Brian Boylan, chairman of Wolff Olins, Tea Uglow, creative director for Google’s Creative Lab, office space and "angel investor" role from Jimmy Wales, and legal services from Herbert Smith Freehills bolstered the social network.
References
External links
Industrial design | Impossible.com | Engineering | 286 |
44,746,274 | https://en.wikipedia.org/wiki/DNA%20phenotyping | DNA phenotyping is the process of predicting an organism's phenotype using only genetic information collected from genotyping or DNA sequencing. This term, also known as molecular photofitting, is primarily used to refer to the prediction of a person's physical appearance and/or biogeographic ancestry for forensic purposes.
DNA phenotyping uses many of the same scientific methods as those being used for genetically informed personalized medicine, in which drug responsiveness (pharmacogenomics) and medical outcomes are predicted from a patient's genetic information. Significant genetic variants associated with a particular trait are discovered using a genome-wide association study (GWAS) approach, in which hundreds of thousands or millions of single-nucleotide polymorphisms (SNPs) are tested for their association with each trait of interest. Predictive modeling is then used to build a mathematical model for making trait predictions about new subjects.
Predicted phenotypes
Human phenotypes are predicted from DNA using direct or indirect methods. With direct methods, genetic variants mechanistically linked with variable expression of the relevant phenotypes are measured and used with appropriate statistical methodologies to infer trait value. With indirect methods, variants associated with genetic component(s) of ancestry that correlate with the phenotype of interest, such as Ancestry Informative Markers, are measured and used with appropriate statistical methodologies to infer trait value. The direct method is always preferable, for obvious reasons, but depending on the genetic architecture of the phenotype, is not always possible.
Biogeographic ancestry determination methods have been highly developed within the genetics community, as it is a key GWAS quality control step. These approaches typically use genome-wide human genetic clustering and/or principal component analysis to compare new subjects to curated individuals with known ancestry, such as the International HapMap Project or the 1000 Genomes Project. Another approach is to assay ancestry informative markers (AIMs), SNPs that vary in frequency between the major human populations.
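As a schematic illustration of this kind of analysis, the Python sketch below projects genotype vectors onto the top principal components of a reference panel and assigns each new sample to the nearest reference-population centroid. The reference data here are randomly generated placeholders standing in for curated panels such as HapMap or the 1000 Genomes Project; real ancestry inference involves far more markers and more careful statistical modelling.

    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder reference panel: 3 populations x 50 individuals x 200 SNPs,
    # with genotypes coded as 0, 1 or 2 copies of the alternate allele.
    populations = ["POP_A", "POP_B", "POP_C"]
    freqs = rng.uniform(0.05, 0.95, size=(3, 200))    # per-population allele frequencies
    reference = np.concatenate(
        [rng.binomial(2, f, size=(50, 200)) for f in freqs]
    )
    labels = np.repeat(populations, 50)

    # Principal component analysis on the centred reference genotypes.
    mean = reference.mean(axis=0)
    centred = reference - mean
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    components = vt[:2]                               # top two principal components
    ref_pcs = centred @ components.T

    # Centroid of each reference population in PC space.
    centroids = {p: ref_pcs[labels == p].mean(axis=0) for p in populations}

    def assign_ancestry(genotypes):
        """Project a genotype vector onto the reference PCs and return the
        nearest population centroid (a crude stand-in for ancestry inference)."""
        pcs = (genotypes - mean) @ components.T
        return min(centroids, key=lambda p: np.linalg.norm(pcs - centroids[p]))

    if __name__ == "__main__":
        new_sample = rng.binomial(2, freqs[1])        # drawn from population B
        print(assign_ancestry(new_sample))            # expected: POP_B

In practice, ancestry assignment is probabilistic rather than a hard nearest-centroid decision, but the projection of new samples into a reference PC space is the common core of the approaches described above.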
As early as 2004, evidence was compiled showing that the bulk of phenotypic variation in human iris color could be attributed to polymorphisms in the OCA2 gene. This paper, and the work it cited, laid the foundation for the inference of human iris color from DNA, first carried out on basic level by DNAPrint Genomics Beginning in 2009, academic groups developed and reported on more accurate predictive models for eye color and, more recently, hair color in the European population.
More recently, companies such as Parabon NanoLabs and Identitas have begun offering forensic DNA phenotyping services for U.S. and international law enforcement. However, the science behind the commercial services offered by Parabon NanoLabs has been criticized as it has not been subjected to scrutiny in peer-reviewed scientific publications. It has been suggested that it is not known "whether their ability to estimate a face’s appearance is better than chance, or if it’s an approximation based on what we know about ancestry”.
DNA phenotyping is often referred to as a "biologic witness," a play on the term eye-witness. Just as an eye-witness may describe the appearance of a person of interest, the DNA left at a crime scene can be used to discover the physical appearance of the person who left it. This allows DNA phenotyping to be used as an investigative tool to help guide the police when searching for suspects. DNA phenotyping can be particularly helpful in cold cases, where there may not be a current lead. However, it is not a method used to help incarcerate suspects, as more traditional forensic measures are better suited for this.
Pigmentation Prediction
One online tool available to the public and law enforcement is the HIrisPlex-S Webtool. This system uses SNPs that are linked to human pigmentation to predict an individual's phenotype. Using the multiplex assay described in three separate papers, the genotype for 41 different SNPs can be generated, which are linked to hair, eye and skin color in humans.
The genotype can then be entered into the HIrisPlex-S Webtool to generate the most probable phenotype of an individual based on their genetic information.
This tool originally started as the IrisPlex System, consisting of six SNPs linked to eye color (rs12913832, rs1800407, rs12896399, rs16891982, rs1393350 and rs12203592). The addition of 18 SNPs linked to both hair and eye color led to the updated HIrisPlex System (rs312262906, rs11547464, rs885479, rs1805008, rs1805005, rs1805006, rs1805007, rs1805009, rs201326893, rs2228479, rs1110400, rs28777, rs12821256, rs4959270, rs1042602, rs2402130, rs2378249 and rs683). Another assay was developed using 17 SNPs involved in skin pigmentation to create the current HIrisPlex-S System (rs3114908, rs1800414, rs10756819, rs2238289, rs17128291, rs6497292, rs1129038, rs1667394, rs1126809, rs1470608, rs1426654, rs6119471, rs1545397, rs6059655, rs12441727, rs3212355 and rs8051733).
The predictions for eye pigmentation are Blue, Intermediate and Brown. There are two categories for hair pigmentation: color (Blond, Brown, Red and Black) and shade (light and dark). The predictions for skin pigmentation are Very Pale, Pale, Intermediate, Dark and Dark to Black. Unlike eye and hair predictions where only the highest probability is used to make a prediction, the top two highest probabilities for skin color are used to account for tanning ability and other variations.
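To illustrate the general shape of such a prediction, the following Python sketch maps SNP genotypes (coded as effect-allele counts) to category probabilities with a multinomial-logistic-style model and reports the highest-probability eye color. The SNPs named below are real IrisPlex markers mentioned above, but the weights and intercepts are invented placeholders, not the published HIrisPlex-S parameters; the sketch only demonstrates the genotype-to-probability workflow.

    import math

    # Hypothetical per-SNP weights for three eye-color categories; these
    # coefficients are invented placeholders, not HIrisPlex-S model values.
    WEIGHTS = {
        "rs12913832": {"Blue": 1.8, "Intermediate": 0.4, "Brown": -1.5},
        "rs1800407":  {"Blue": 0.3, "Intermediate": 0.6, "Brown": -0.2},
        "rs16891982": {"Blue": 0.5, "Intermediate": 0.1, "Brown": -0.6},
    }
    INTERCEPTS = {"Blue": -0.5, "Intermediate": -1.0, "Brown": 0.0}

    def predict_eye_color(genotype):
        """Return (best_category, probabilities) from effect-allele counts per SNP."""
        scores = {}
        for category in INTERCEPTS:
            s = INTERCEPTS[category]
            for snp, count in genotype.items():
                s += WEIGHTS[snp][category] * count
            scores[category] = s
        # A softmax turns the linear scores into probabilities that sum to 1.
        z = max(scores.values())
        exp_scores = {c: math.exp(v - z) for c, v in scores.items()}
        total = sum(exp_scores.values())
        probs = {c: v / total for c, v in exp_scores.items()}
        return max(probs, key=probs.get), probs

    if __name__ == "__main__":
        sample = {"rs12913832": 2, "rs1800407": 0, "rs16891982": 1}
        best, probs = predict_eye_color(sample)
        print(best, {c: round(p, 3) for c, p in probs.items()})

For skin color predictions, the same probabilities would simply be ranked and the two highest-probability categories reported, as described above.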
Genes responsible for facial features
In 2018, researchers identified 15 loci containing genes responsible for facial features.
Differences from DNA profiling
Traditional DNA profiling, sometimes referred to as DNA fingerprinting, uses DNA as a biometric identifier. Like an iris scan or fingerprint, a DNA profile can uniquely identify an individual with very high accuracy. For forensic purposes, this means that investigators must have already identified and obtained DNA from a potentially matching individual. DNA phenotyping is used when investigators need to narrow the pool of possible individuals or identify unknown remains by learning about the person's ancestry and appearance. When the suspected individual is identified, traditional DNA profiling can be used to prove a match, provided there is a reference sample that can be used for comparison.
Published DNA phenotyping composites
On 9 January 2015, the fourth anniversary of the murders of Candra Alston and her three-year-old daughter Malaysia Boykin, police in Columbia, South Carolina, issued a press release containing what is thought to be the first composite image in forensic history to be published entirely on the basis of a DNA sample. The image, produced by Parabon NanoLabs with the company's Snapshot DNA Phenotyping System, consists of a digital mesh of predicted face morphology overlaid with textures representing predicted eye color, hair color and skin color. Kenneth Canzater Jr. was charged with the murders in 2017.
On 30 June 2015, NBC Nightly News featured a DNA phenotyping composite, also produced by Parabon, of a suspect in the 1988 murder of April Tinsley near Fort Wayne, Indiana. The television segment also included a composite of national news correspondent Kate Snow, which was produced using DNA extracted from the rim of a water bottle that the network submitted to Parabon for a blinded test of the company's Snapshot™ DNA Phenotyping Service. Snow's identity and her use of the bottle were revealed only after the composite had been produced. In 2018 John D. Miller was charged with the murder.
Sheriff Tony Mancuso of the Calcasieu Parish Sheriff's Office in Lake Charles, Louisiana, held a press conference on 1 September 2015 to announce the release of a Parabon Snapshot composite for a suspect in the 2009 murder of Sierra Bouzigard in Moss Bluff, Louisiana. The investigation had previously focused on a group of Hispanic males with whom Bouzigard was last seen. Snapshot analysis indicates the suspect is predominantly European, with fair skin, green or possibly blue eyes and brown or black hair. Sheriff Mancuso told the media, “This totally redirects our whole investigation and will move this case in a new direction.” Blake A. Russell was charged with the murder in 2017.
Florida police chiefs from Miami Beach, Miami, Coral Gables and Miami-Dade jointly released a Snapshot composite of the “Serial Creeper” on 10 September 2015. For more than a year, the perpetrator has been spying on and sexually terrorizing women, and police believe he is connected to at least 15 crimes, possibly as many as 40. In a Miami Beach attack on 18 August 2015, which was first reported to the public on 23 September 2015, the perpetrator spoke in Spanish and told his victim he was from Cuba. Consistent with this claim, Snapshot had previously determined that the subject is Latino, with European, Native American, and African ancestry, an admixture most similar to that found in Latino individuals from the Caribbean and Northern South America.
On 2 February 2016, the Anne Arundel County Maryland Police Department released what is believed to be the first published composite created by combining DNA phenotyping and forensic facial reconstruction from a victim's skull. The victim's body which had suffered severe upper body trauma was found on 23 April 1985 in a metal trash container at the construction site of the Marley Station Mall in Glen Burnie, MD. Police initially estimated the homicide occurred approximately five months before the body was discovered. Later the date of death was changed to about 1963. Thom Shaw, an IAI-certified forensic artist at Parabon NanoLabs, performed the physical facial reconstruction and the digital adaptation of a Snapshot composite to reflect details gleaned from the victim's facial morphology. In 2019, with the help of Parabon and genetic genealogy, the body was identified as Roger Kelso, born in Fort Wayne, Indiana in 1943. The murderer was not identified.
Police in Tacoma, Washington, disclosed Parabon Snapshot reports to the public on 6 April 2016 for two male suspects believed to be individually responsible for the deaths of Michella Welch (age 12) and Jennifer Bastian (age 13), both abducted from Tacoma's North End area in 1986, just four months apart. Investigators long believed one person committed both crimes because of their many similarities. However, 2016 DNA testing proved two individuals were separately involved. Snapshot descriptions of the two killers were released to aid the public in generating new leads for the investigations. In 2018 Gary Charles Hartman and Robert D. Washburn were charged with the murders of the two girls. In 2019 Washington State passed a law called "Jennifer and Michella's law" named after the two murdered girls. This law allowed police to take DNA samples from people convicted of indecent exposure and from dead sex offenders.
Also on 6 April 2016, police in Athens Ohio released a Snapshot composite of an active sexual predator linked to at least three attacks, the most recent in December 2015 near Ohio University.
On 15 April 2016, the Hallandale Beach Florida Police Department released a Snapshot composite of a suspect believed to be responsible for the murders of Toronto residents David “Donny” Pichosky and Rochelle Wise. It was the first time a Snapshot composite of a female was released to the public.
On 21 April 2016, police in Windsor, Canada, released a Snapshot composite of the suspect responsible for the abduction and murder of Ljubica Topic in 1971. It was the first public release of a Snapshot composite outside of the United States and, at the time, the oldest case to which the technology had been applied.
On 11 May, the Loudoun County Sheriff's Office in Virginia released a Snapshot composite of a suspect responsible for abducting and sexually assaulting a 9-year-old girl in 1987.
On 16 May 2016, eve of the third anniversary of veteran John “Jack” Fay's murder, the Warwick Rhode Island Police Department released a Snapshot composite produced using DNA taken from a hammer found near the crime scene. Police hoped the composite would generate fresh leads in a case that may have involved multiple assailants.
On 3 May 2017 Idaho Falls, Idaho Police released a DNA phenotype composite sketch from DNA found at the murder scene of Angie Dodge on 13 June 1996. Police hoped the widespread distribution of the composite sketch would generate new leads on the suspect. Excerpt from the Idaho Falls Police Department press release: "The crime scene and evidence collected at the scene, including the collection and extraction of one major and two minor DNA profiles, indicates that there was more than one individual involved in the death of Angie Dodge. With current technologies, the major profile collected is the only viable DNA sample that can be used to make an identification." Christopher Tapp was released in 2017 after spending 20 years in jail for taking part in the rape and murder of Angie Dodge, although his DNA did not match DNA at the crime scene. In May 2019 Brian Leigh Dripps confessed to the murder of Dodge after Idaho Falls police charged him; Dripps's DNA matched DNA left at the crime scene. Parabon NanoLabs had helped investigate this case using DNA genetic genealogy and GEDmatch.
See also
DNA
Phenotype
Genotyping
Genome-wide association study
Single-nucleotide polymorphisms
Predictive modeling
References
External links
Parabon NanoLabs
Identitas
DNA
Forensic genetics
DNA profiling techniques | DNA phenotyping | Biology | 2,937 |
227,109 | https://en.wikipedia.org/wiki/Lamer | Lamer is a jargon or slang name originally applied in cracker and phreaker culture to someone who did not really understand what they were doing. Today it is also loosely applied by IRC, BBS, demosceners, and online gaming users to anyone perceived to be contemptible. In general, the term has come to describe someone who is willfully ignorant of how things work. It is derived from the word "lame".
A lamer is sometimes understood to be the antithesis of a hacker. While a hacker strives to understand the mechanisms behind what they use, even when such extended knowledge would have no practical value, a lamer only cares to learn the bare minimum necessary to operate the device in the way originally intended.
Origin
At least one example of the term "lamer" to mean "a dull, stupid, inept, or contemptible person" appeared as early as 1961. It was popularized among Amiga crackers of the mid-1980s by "Lamer Exterminator", a notable Amiga virus, which gradually corrupted non-write-protected floppy disks with bad sectors. The bad sectors, when examined, were overwritten with repetitions of the string "LAMER!".
In phreak culture, a lamer is one who scams codes from others and lacks understanding of the fundamental concepts. In warez culture, where the ability to distribute cracked commercial software within days of (or before) release to the commercial market is much esteemed, the lamer might try to upload garbage, shareware, or outdated releases.
See also
Luser
Noob
Script kiddie
References
External links
Definition of lame - Merriam-Webster Online Dictionary
Definition of Lamer - Jargon File - Origination of term lamer.
Pejorative terms for people
Computer jargon | Lamer | Technology | 369 |
63,376,504 | https://en.wikipedia.org/wiki/Introduction%20to%20the%20Theory%20of%20Error-Correcting%20Codes | Introduction to the Theory of Error-Correcting Codes is a textbook on error-correcting codes, by Vera Pless. It was published in 1982 by John Wiley & Sons, with a second edition in 1989 and a third in 1998. The Basic Library List Committee of the Mathematical Association of America has rated the book as essential for inclusion in undergraduate mathematics libraries.
Topics
This book is mainly centered around algebraic and combinatorial techniques for designing and using error-correcting linear block codes. It differs from previous works in this area in its reduction of each result to its mathematical foundations, and its clear exposition of how the results follow from these foundations.
The first two of its ten chapters present background and introductory material, including Hamming distance, decoding methods including maximum likelihood and syndromes, sphere packing and the Hamming bound, the Singleton bound, and the Gilbert–Varshamov bound, and the Hamming(7,4) code. They also include brief discussions of additional material not covered in more detail later, including information theory, convolutional codes, and burst error-correcting codes. Chapter 3 presents a BCH code over a particular finite field, and Chapter 4 develops the theory of finite fields more generally.
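As a concrete illustration of the introductory material in the first two chapters, the Python sketch below encodes 4-bit messages with the Hamming(7,4) code and corrects any single-bit error by syndrome decoding. The generator and parity-check matrices used are one standard systematic choice, not necessarily the presentation used in the book.

    import itertools

    # Generator matrix G (4 x 7) and parity-check matrix H (3 x 7) for one
    # standard systematic Hamming(7,4) code; all arithmetic is modulo 2.
    G = [
        [1, 0, 0, 0, 1, 1, 0],
        [0, 1, 0, 0, 1, 0, 1],
        [0, 0, 1, 0, 0, 1, 1],
        [0, 0, 0, 1, 1, 1, 1],
    ]
    H = [
        [1, 1, 0, 1, 1, 0, 0],
        [1, 0, 1, 1, 0, 1, 0],
        [0, 1, 1, 1, 0, 0, 1],
    ]

    def encode(message):
        """Encode a 4-bit message as the 7-bit codeword m * G (mod 2)."""
        return [sum(m * g for m, g in zip(message, column)) % 2
                for column in zip(*G)]

    def syndrome(word):
        """Compute the 3-bit syndrome H * w (mod 2) of a received 7-bit word."""
        return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

    def decode(word):
        """Correct at most one bit error, then return the four message bits."""
        s = syndrome(word)
        if any(s):
            # A nonzero syndrome equals the column of H at the error position.
            for j in range(7):
                if [H[i][j] for i in range(3)] == s:
                    word = word[:]
                    word[j] ^= 1
                    break
        return word[:4]

    if __name__ == "__main__":
        for message in itertools.product([0, 1], repeat=4):
            codeword = encode(list(message))
            for j in range(7):               # flip each single bit in turn
                corrupted = codeword[:]
                corrupted[j] ^= 1
                assert decode(corrupted) == list(message)
        print("all single-bit errors corrected")

The test loop exhaustively flips each bit of every codeword and checks that the original message is recovered, which is exactly the single-error-correcting property guaranteed by the code's minimum Hamming distance of 3.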
Chapter 5 studies cyclic codes and Chapter 6 studies a special case of cyclic codes, the quadratic residue codes. Chapter 7 returns to BCH codes. After these discussions of specific codes, the next chapter concerns enumerator polynomials, including the MacWilliams identities, Pless's own power moment identities, and the Gleason polynomials.
The final two chapters connect this material to the theory of combinatorial designs and the design of experiments, and include material on the Assmus–Mattson theorem, the Witt design, the binary Golay codes, and the ternary Golay codes.
The second edition adds material on BCH codes, Reed–Solomon error correction, Reed–Muller codes, decoding Golay codes, and "a new, simple combinatorial proof of the MacWilliams identities".
As well as correcting some errors and adding more exercises, the third edition includes new material on connections between greedily constructed lexicographic codes and combinatorial game theory, the Griesmer bound, non-linear codes, and the Gray images of codes.
Audience and reception
This book is written as a textbook for advanced undergraduates; reviewer H. N. calls it "a leisurely introduction to the field which is at the same time mathematically rigorous". It includes over 250 problems, and can be read by mathematically-inclined students with only a background in linear algebra (provided in an appendix) and with no prior knowledge of coding theory.
Reviewer Ian F. Blake complained that the first edition omitted some topics necessary for engineers, including algebraic decoding, Goppa codes, Reed–Solomon error correction, and performance analysis, making this more appropriate for mathematics courses, but he suggests that it could still be used as the basis of an engineering course by replacing the last two chapters with this material, and overall he calls the book "a delightful little monograph". Reviewer John Baylis adds that "for clearly exhibiting coding theory as a showpiece of applied modern algebra I haven't seen any to beat this one".
Related reading
Other books in this area include The Theory of Error-Correcting Codes (1977) by Jessie MacWilliams and Neil Sloane, and A First Course in Coding Theory (1988) by Raymond Hill.
References
External links
Introduction to the Theory of Error-Correcting Codes (2nd ed.) on the Internet Archive
Error detection and correction
Mathematics textbooks
1982 non-fiction books
1989 non-fiction books
1998 non-fiction books | Introduction to the Theory of Error-Correcting Codes | Engineering | 748 |
2,840,305 | https://en.wikipedia.org/wiki/Computer-assisted%20proof | A computer-assisted proof is a mathematical proof that has been at least partially generated by computer.
Most computer-aided proofs to date have been implementations of large proofs-by-exhaustion of a mathematical theorem. The idea is to use a computer program to perform lengthy computations, and to provide a proof that the result of these computations implies the given theorem. In 1976, the four color theorem was the first major theorem to be verified using a computer program.
Attempts have also been made in the area of artificial intelligence research to create smaller, explicit, new proofs of mathematical theorems from the bottom up using automated reasoning techniques such as heuristic search. Such automated theorem provers have proved a number of new results and found new proofs for known theorems. Additionally, interactive proof assistants allow mathematicians to develop human-readable proofs which are nonetheless formally verified for correctness. Since these proofs are generally human-surveyable (albeit with difficulty, as with the proof of the Robbins conjecture) they do not share the controversial implications of computer-aided proofs-by-exhaustion.
Methods
One method for using computers in mathematical proofs is by means of so-called validated numerics or rigorous numerics. This means computing numerically yet with mathematical rigour. One uses set-valued arithmetic in order to ensure that the set-valued output of a numerical program encloses the solution of the original mathematical problem. This is done by controlling, enclosing and propagating round-off and truncation errors using, for example, interval arithmetic. More precisely, one reduces the computation to a sequence of elementary operations, say (+, −, ×, ÷). In a computer, the result of each elementary operation is rounded off to the computer precision. However, one can construct an interval given by upper and lower bounds on the result of an elementary operation. Then one proceeds by replacing numbers with intervals and performing elementary operations between such intervals of representable numbers.
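The Python sketch below illustrates the idea at a toy scale: decimal constants are enclosed in small intervals, each elementary operation returns an interval guaranteed to contain the exact result, and outward rounding (via math.nextafter, available in Python 3.9 and later) accounts for floating-point round-off. It is a simplified illustration, not a production validated-numerics library, which would typically control the hardware rounding mode directly.

    import math

    def next_down(x):
        """Largest float strictly below x (used for outward rounding)."""
        return math.nextafter(x, -math.inf)

    def next_up(x):
        """Smallest float strictly above x."""
        return math.nextafter(x, math.inf)

    class Interval:
        """A closed interval [lo, hi] guaranteed to enclose an unknown real number."""
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi

        def __add__(self, other):
            # Round the lower bound down and the upper bound up so that the
            # exact sum is guaranteed to lie inside the result.
            return Interval(next_down(self.lo + other.lo), next_up(self.hi + other.hi))

        def __sub__(self, other):
            return Interval(next_down(self.lo - other.hi), next_up(self.hi - other.lo))

        def __mul__(self, other):
            products = [self.lo * other.lo, self.lo * other.hi,
                        self.hi * other.lo, self.hi * other.hi]
            return Interval(next_down(min(products)), next_up(max(products)))

        def __repr__(self):
            return f"[{self.lo!r}, {self.hi!r}]"

    def enclose(x):
        """Interval one ulp wide on each side of x, enclosing a decimal constant
        (such as 0.1) that may not be exactly representable in binary."""
        return Interval(next_down(x), next_up(x))

    if __name__ == "__main__":
        # Enclose the exact value of 0.1 + 0.2 - 0.3 (which is 0), even though
        # the ordinary floating-point computation gives a small nonzero number.
        x = enclose(0.1) + enclose(0.2) - enclose(0.3)
        print(x)
        print(x.lo <= 0.0 <= x.hi)   # True: the enclosure contains the exact answer

Running the example shows that the computed enclosure of 0.1 + 0.2 − 0.3 contains the exact answer 0, even though the ordinary floating-point result is not exactly zero.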
Philosophical objections
Computer-assisted proofs are the subject of some controversy in the mathematical world, with Thomas Tymoczko first to articulate objections. Those who adhere to Tymoczko's arguments believe that lengthy computer-assisted proofs are not, in some sense, 'real' mathematical proofs because they involve so many logical steps that they are not practically verifiable by human beings, and that mathematicians are effectively being asked to replace logical deduction from assumed axioms with trust in an empirical computational process, which is potentially affected by errors in the computer program, as well as defects in the runtime environment and hardware.
Other mathematicians believe that lengthy computer-assisted proofs should be regarded as calculations, rather than proofs: the proof algorithm itself should be proved valid, so that its use can then be regarded as a mere "verification". Arguments that computer-assisted proofs are subject to errors in their source programs, compilers, and hardware can be resolved by providing a formal proof of correctness for the computer program (an approach which was successfully applied to the four color theorem in 2005) as well as replicating the result using different programming languages, different compilers, and different computer hardware.
Another possible way of verifying computer-aided proofs is to generate their reasoning steps in a machine readable form, and then use a proof checker program to demonstrate their correctness. Since validating a given proof is much easier than finding a proof, the checker program is simpler than the original assistant program, and it is correspondingly easier to gain confidence into its correctness. However, this approach of using a computer program to prove the output of another program correct does not appeal to computer proof skeptics, who see it as adding another layer of complexity without addressing the perceived need for human understanding.
Another argument against computer-aided proofs is that they lack mathematical elegance—that they provide no insights or new and useful concepts. In fact, this is an argument that could be advanced against any lengthy proof by exhaustion.
An additional philosophical issue raised by computer-aided proofs is whether they make mathematics into a quasi-empirical science, where the scientific method becomes more important than the application of pure reason in the area of abstract mathematical concepts. This directly relates to the argument within mathematics as to whether mathematics is based on ideas, or "merely" an exercise in formal symbol manipulation. It also raises the question whether, if according to the Platonist view, all possible mathematical objects in some sense "already exist", whether computer-aided mathematics is an observational science like astronomy, rather than an experimental one like physics or chemistry. This controversy within mathematics is occurring at the same time as questions are being asked in the physics community about whether twenty-first century theoretical physics is becoming too mathematical, and leaving behind its experimental roots.
The emerging field of experimental mathematics is confronting this debate head-on by focusing on numerical experiments as its main tool for mathematical exploration.
Theorems proved with the help of computer programs
Inclusion in this list does not imply that a formal computer-checked proof exists, but rather, that a computer program has been involved in some way. See the main articles for details.
See also
References
Further reading
External links
Argument technology
Automated theorem proving
Computer-assisted proofs
Formal methods
Numerical analysis
Philosophy of mathematics | Computer-assisted proof | Mathematics,Engineering | 1,067 |
27,545,816 | https://en.wikipedia.org/wiki/Degeneracy%20%28graph%20theory%29 | In graph theory, a k-degenerate graph is an undirected graph in which every subgraph has at least one vertex of degree at most k: that is, some vertex in the subgraph touches k or fewer of the subgraph's edges. The degeneracy of a graph is the smallest value of k for which it is k-degenerate. The degeneracy of a graph is a measure of how sparse it is, and is within a constant factor of other sparsity measures such as the arboricity of a graph.
Degeneracy is also known as the k-core number, width, and linkage, and is essentially the same as the coloring number or Szekeres–Wilf number (named after George Szekeres and Herbert Wilf). k-degenerate graphs have also been called k-inductive graphs. The degeneracy of a graph may be computed in linear time by an algorithm that repeatedly removes minimum-degree vertices. The connected components that are left after all vertices of degree less than k have been (repeatedly) removed are called the k-cores of the graph and the degeneracy of a graph is the largest value k such that it has a k-core.
Examples
Every finite forest has either an isolated vertex (incident to no edges) or a leaf vertex (incident to exactly one edge); therefore, trees and forests are 1-degenerate graphs. Every 1-degenerate graph is a forest.
Every finite planar graph has a vertex of degree five or less; therefore, every planar graph is 5-degenerate, and the degeneracy of any planar graph is at most five. Similarly, every outerplanar graph has degeneracy at most two, and the Apollonian networks have degeneracy three.
The Barabási–Albert model for generating random scale-free networks is parameterized by a number m such that each vertex that is added to the graph is connected to m previously added vertices. It follows that any subgraph of a network formed in this way has a vertex of degree at most m (the last vertex in the subgraph to have been added to the graph) and Barabási–Albert networks are automatically m-degenerate.
Every k-regular graph has degeneracy exactly k. More strongly, the degeneracy of a graph equals its maximum vertex degree if and only if at least one of the connected components of the graph is regular of maximum degree. For all other graphs, the degeneracy is strictly less than the maximum degree.
Definitions and equivalences
The coloring number of a graph G was defined by Erdős and Hajnal to be the least κ for which there exists an ordering of the vertices of G in which each vertex has fewer than κ neighbors that are earlier in the ordering. It should be distinguished from the chromatic number of G, the minimum number of colors needed to color the vertices so that no two adjacent vertices have the same color; the ordering which determines the coloring number provides an order to color the vertices of G with the coloring number, but in general the chromatic number may be smaller.
The degeneracy of a graph G was defined by Lick and White as the least k such that every induced subgraph of G contains a vertex with k or fewer neighbors. The definition would be the same if arbitrary subgraphs are allowed in place of induced subgraphs, as a non-induced subgraph can only have vertex degrees that are smaller than or equal to the vertex degrees in the subgraph induced by the same vertex set.
The two concepts of coloring number and degeneracy are equivalent: in any finite graph the degeneracy is just one less than the coloring number. For, if a graph has an ordering with coloring number κ then in each subgraph H the vertex that belongs to H and is last in the ordering has at most κ − 1 neighbors in H. In the other direction, if G is k-degenerate, then an ordering with coloring number k + 1 can be obtained by repeatedly finding a vertex v with at most k neighbors, removing v from the graph, ordering the remaining vertices, and adding v to the end of the order.
A third, equivalent formulation is that G is k-degenerate (or has coloring number at most k + 1) if and only if the edges of G can be oriented to form a directed acyclic graph with outdegree at most k. Such an orientation can be formed by orienting each edge towards the earlier of its two endpoints in a coloring number ordering. In the other direction, if an orientation with outdegree k is given, an ordering with coloring number k + 1 can be obtained as a topological ordering of the resulting directed acyclic graph.
k-Cores
A k-core of a graph G is a maximal connected subgraph of G in which all vertices have degree at least k. Equivalently, it is one of the connected components of the subgraph of G formed by repeatedly deleting all vertices of degree less than k. If a non-empty k-core exists, then, clearly, G has degeneracy at least k, and the degeneracy of G is the largest k for which G has a k-core.
A vertex has coreness k if it belongs to a k-core but not to any (k + 1)-core.
The concept of a k-core was introduced to study the clustering structure of social networks and to describe the evolution of random graphs. It has also been applied in bioinformatics, network visualization, and resilience of networks in ecology. A survey of the topic, covering the main concepts, important algorithmic techniques as well as some application domains, may be found in the literature.
Bootstrap percolation is a random process studied as an epidemic model and as a model for fault tolerance for distributed computing. It consists of selecting a random subset of active cells from a lattice or other space, and then considering the k-core of the induced subgraph of this subset.
Algorithms
Matula and Beck outline an algorithm to derive the degeneracy ordering of a graph G = (V, E) with vertex set V and edge set E in time O(|V| + |E|) and O(|V|) words of space, by storing vertices in a degree-indexed bucket queue and repeatedly removing the vertex with the smallest degree. The degeneracy k is given by the highest degree of any vertex at the time of its removal.
In more detail, the algorithm proceeds as follows:
Initialize an output list L.
Compute a number dv for each vertex v in G, the number of neighbors of v that are not already in L. Initially, these numbers are just the degrees of the vertices.
Initialize an array D such that D[i] contains a list of the vertices v that are not already in L for which dv = i.
Initialize k to 0.
Repeat n times:
Scan the array cells D[0], D[1], ... until finding an i for which D[i] is nonempty.
Set k to max(k, i).
Select a vertex v from D[i]. Add v to the beginning of L and remove it from D[i].
For each neighbor w of v not already in L, subtract one from dw and move w to the cell of D corresponding to the new value of dw.
At the end of the algorithm, any vertex L[i] will have at most k edges to the vertices L[1], ..., L[i − 1]. The d-cores of G are the subgraphs that are induced by the vertices L[1], ..., L[i], where i is the first vertex with degree at least d at the time it is added to L.
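A minimal sketch of this bucket-queue algorithm in Python (the function name, variable names, and the example graph are my own; inserting at the front of a Python list is not O(1), so a deque or an append-and-reverse would be needed to preserve the linear running time that this readable version sacrifices):

```python
def degeneracy_ordering(adj):
    """Return (degeneracy k, ordering L) for an undirected graph.

    `adj` maps each vertex to the set of its neighbours. Vertices are
    repeatedly removed in order of smallest remaining degree; the largest
    degree seen at removal time is the degeneracy.
    """
    d = {v: len(nbrs) for v, nbrs in adj.items()}       # remaining degrees
    max_deg = max(d.values(), default=0)
    D = [set() for _ in range(max_deg + 1)]             # degree-indexed buckets
    for v, dv in d.items():
        D[dv].add(v)

    L, in_L, k = [], set(), 0
    for _ in range(len(adj)):
        i = next(i for i in range(max_deg + 1) if D[i])  # smallest nonempty bucket
        k = max(k, i)
        v = D[i].pop()
        L.insert(0, v)            # add v to the *beginning* of L, as in the text
        in_L.add(v)
        for w in adj[v]:
            if w not in in_L:
                D[d[w]].remove(w)  # move w down one bucket
                d[w] -= 1
                D[d[w]].add(w)
    return k, L

# Example: a 4-cycle (1-2-3-4) with a pendant vertex 5 has degeneracy 2.
g = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3, 5}, 5: {4}}
print(degeneracy_ordering(g))
```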
Relation to other graph parameters
If a graph G is oriented acyclically with outdegree k, then its edges may be partitioned into k forests by choosing one forest for each outgoing edge of each node. Thus, the arboricity of G is at most equal to its degeneracy. In the other direction, an n-vertex graph that can be partitioned into k forests has at most k(n − 1) edges and therefore has a vertex of degree at most 2k− 1 – thus, the degeneracy is less than twice the arboricity. One may also compute in polynomial time an orientation of a graph that minimizes the outdegree but is not required to be acyclic. The edges of a graph with such an orientation may be partitioned in the same way into k pseudoforests, and conversely any partition of a graph's edges into k pseudoforests leads to an outdegree-k orientation (by choosing an outdegree-1 orientation for each pseudoforest), so the minimum outdegree of such an orientation is the pseudoarboricity, which again is at most equal to the degeneracy. The thickness is also within a constant factor of the arboricity, and therefore also of the degeneracy.
A k-degenerate graph has chromatic number at most k + 1; this is proved by a simple induction on the number of vertices, exactly like the proof of the six-color theorem for planar graphs. Since the chromatic number is an upper bound on the order of the maximum clique, the latter invariant is also at most degeneracy plus one. By using a greedy coloring algorithm on an ordering with optimal coloring number, one can color a k-degenerate graph using at most k + 1 colors.
A k-vertex-connected graph is a graph that cannot be partitioned into more than one component by the removal of fewer than k vertices, or equivalently a graph in which each pair of vertices can be connected by k vertex-disjoint paths. Since these paths must leave the two vertices of the pair via disjoint edges, a k-vertex-connected graph must have degeneracy at least k. Concepts related to k-cores but based on vertex connectivity have been studied in social network theory under the name of structural cohesion.
If a graph has treewidth or pathwidth at most k, then it is a subgraph of a chordal graph which has a perfect elimination ordering in which each vertex has at most k earlier neighbors. Therefore, the degeneracy is at most equal to the treewidth and at most equal to the pathwidth. However, there exist graphs with bounded degeneracy and unbounded treewidth, such as the grid graphs.
The Burr–Erdős conjecture relates the degeneracy of a graph G to the Ramsey number of G, the least n such that any two-edge-coloring of an n-vertex complete graph must contain a monochromatic copy of G. Specifically, the conjecture is that for any fixed value of k, the Ramsey number of k-degenerate graphs grows linearly in the number of vertices of the graphs. The conjecture was proven by Choongbum Lee.
Any n-vertex graph with degeneracy d has at most (n − d)3^(d/3) maximal cliques whenever d ≥ 3 and n ≥ d + 3, so the class of graphs with bounded degeneracy is said to have few cliques.
Infinite graphs
Although concepts of degeneracy and coloring number are frequently considered in the context of finite graphs, the original motivation of Erdős and Hajnal was the theory of infinite graphs. For an infinite graph G, one may define the coloring number analogously to the definition for finite graphs, as the smallest cardinal number α such that there exists a well-ordering of the vertices of G in which each vertex has fewer than α neighbors that are earlier in the ordering. The inequality between coloring and chromatic numbers holds also in this infinite setting; Erdős and Hajnal state that, at the time of publication of their paper, it was already well known.
The degeneracy of random subsets of infinite lattices has been studied under the name of bootstrap percolation.
See also
Graph theory
Network science
Percolation Theory
Core–periphery structure
Cereceda's conjecture
Notes
References
Graph invariants
Graph algorithms | Degeneracy (graph theory) | Mathematics | 2,454 |
72,049,053 | https://en.wikipedia.org/wiki/TUM%20School%20of%20Computation%2C%20Information%20and%20Technology | The TUM School of Computation, Information and Technology (CIT) is a school of the Technical University of Munich, established in 2022 by the merger of three former departments. As of 2022, it is structured into the Department of Mathematics, the Department of Computer Engineering, the Department of Computer Science, and the Department of Electrical Engineering.
Department of Mathematics
The Department of Mathematics (MATH) is located at the Garching campus.
History
Mathematics was taught from the beginning at the Polytechnische Schule in München and the later Technische Hochschule München. Otto Hesse was the department's first professor for calculus, analytical geometry and analytical mechanics. Over the years, several institutes for mathematics were formed.
In 1974, the Institute of Geometry was merged with the Institute of Mathematics to form the Department of Mathematics, and informatics, which had been part of the Institute of Mathematics, became a separate department.
Research Groups
As of 2022, the research groups at the department are:
Algebra
Analysis
Analysis and Modelling
Applied Numerical Analysis, Optimization and Data Analysis
Biostatistics
Discrete Optimization
Dynamic Systems
Geometry and Topology
Mathematical Finance
Mathematical Optimization
Mathematical Physics
Mathematical Modeling of Biological Systems
Numerical Mathematics
Numerical Methods for Plasma Physics
Optimal Control
Probability Theory
Scientific Computing
Statistics
Department of Computer Science
The Department of Computer Science (CS) is located at the Garching campus.
History
The first courses in computer science at the Technical University of Munich were offered in 1967 at the Department of Mathematics, when Friedrich L. Bauer introduced a two-semester lecture titled Information Processing. In 1968, Klaus Samelson started offering a second lecture cycle titled Introduction to Informatics. By 1992, the computer science department had separated from the Department of Mathematics to form an independent Department of Informatics.
In 2002, the department relocated from its old campus in the Munich city center to the new building on the Garching campus.
In 2017, the Department celebrated 50 Years of Informatics Munich with a series of lectures and ceremonies, together with the Ludwig Maximilian University of Munich and the Bundeswehr University Munich.
Chairs
As of 2022, the department consists of the following chairs:
AI in Healthcare and Medicine
Algorithmic Game Theory
Algorithms and Complexity
Application and Middleware Systems
Augmented Reality
Bioinformatics
Computational Imaging and AI in Medicine
Computational Molecular Medicine
Computer Aided Medical Procedures
Computer Graphics and Visualization
Computer Vision and AI
Cyber Trust
Data Analytics and Machine Learning
Data Science and Engineering
Database Systems
Decision Science & Systems
Dynamic Vision and Learning
Efficient Algorithms
Engineering Software for Decentralized Systems
Ethics in Systems Design and Machine Learning
Formal Languages, Compiler & Software Construction
Formal Methods for Software Reliability
Hardware-aware Algorithms and Software for HPC
Information Systems & Business Process Management
Law and Security of Digitization
Legal Tech
Logic and Verification
Machine Learning of 3D Scene Geometry
Physics-based Simulation
Quantum Computing
Scientific Computing
Software & Systems Engineering
Software Engineering
Software Engineering for Business Information Systems
Theoretical Computer Science
Theoretical Foundations of AI
Visual Computing
Notable people
Seven faculty members of the Department of Informatics have been awarded the Gottfried Wilhelm Leibniz Prize, one of the highest endowed research prizes in Germany with a maximum of €2.5 million per award:
2020 – Thomas Neumann
2016 – Daniel Cremers
2008 – Susanne Albers
1997 – Ernst Mayr
1995 – Gerd Hirzinger
1994 – Manfred Broy
1991 –
Friedrich L. Bauer was awarded the 1988 IEEE Computer Society Computer Pioneer Award for inventing the stack data structure. Gerd Hirzinger was awarded the 2005 IEEE Robotics and Automation Society Pioneer Award. and Burkhard Rost were awarded the Alexander von Humboldt Professorship in 2011 and 2008, respectively. Rudolf Bayer was known for inventing the B-tree and Red–black tree.
Department of Electrical Engineering
The Department of Electrical Engineering (EE) is located at the Munich campus.
History
The first lectures in the field of electricity at the Polytechnische Schule München were given as early as 1876 by the physicist Wilhelm von Bezold. Over the years, as the field of electrical engineering became increasingly important, a separate department for electrical engineering emerged within the mechanical engineering department. In 1967, the department was renamed the Faculty of Mechanical and Electrical Engineering, and six electrical engineering departments were permanently established.
In April 1974, the new TUM Department of Electrical and Computer Engineering was formally established. The department is still located on the Munich campus, but a new building is under construction on the Garching campus and the department is expected to move by 2025.
Professorships
As of 2022, the department consists of the following chairs and professorships:
Biomedical Electronics
Circuit Design
Computational Photonics
Control and Manipulation of Microscale Living Objects
Environmental Sensing and Modeling
High Frequency Engineering
Hybrid Electronic Systems
Measurement Systems and Sensor Technology
Micro- and Nanosystems Technology
Microwave Engineering
Molecular Electronics
Nano and Microrobotics
Nano and Quantum Sensors
Neuroelectronics
Physics of Electrotechnology
Quantum Electronics and Computer Engineering
Semiconductor Technology
Simulation of Nanosystems for Energy Conversion
Department of Computer Engineering
The Department of Computer Engineering was separated from the former Department of Electrical and Computer Engineering as the result of merger into the School of Computation, Information and Technology.
Professorships
As of 2022, the department consists of the following chairs and professorships:
Architecture of Parallel and Distributed Systems
Audio Information Processing
Automatic Control Engineering
Bio-inspired Information Processing
Coding and Cryptography
Communications Engineering
Communication Networks
Computer Architecture & Operating Systems
Computer Architecture and Parallel Systems
Connected Mobility
Cognitive Systems
Cyber Physical Systems
Data Processing
Electronic Design Automation
Embedded Systems and Internet of Things
Healthcare and Rehabilitation Robotics
Human-Machine Communication
Information-oriented Control
Integrated Systems
Line Transmission Technology
Machine Learning for Robotics
Machine Learning in Engineering
Machine Vision and Perception
Media Technology
Network Architectures and Services
Neuroengineering Materials
Real-Time Computer Systems
Robotics Science and System Intelligence
Robotics, AI and realtime systems
Security in Information Technology
Sensor-based Robot Systems and Intelligent Assistance Systems
Signal Processing Methods
Theoretical Information Technology
Building
The Department of Computer Science shares a building with the Department of Mathematics.
In the building, two massive parabolic slides run from the fourth floor to the ground floor. Their shape corresponds to a parabolic equation and is supposed to represent the "connection of science and art".
Rankings
The Department of Computer Science has been consistently rated the top computer science department in Germany by major rankings. Globally, it ranks No. 29 (QS), No. 10 (THE), and within No. 51-75 (ARWU). In the 2020 national CHE University Ranking, the department is among the top rated departments for computer science and business informatics, being rated in the top group for the majority of criteria.
The Department of Mathematics has been rated as one of the top mathematics departments in Germany, ranking 43rd in the world and 2nd in Germany (after the University of Bonn) in the QS World University Rankings, and within No. 51-75 in the Academic Ranking of World Universities. In Statistics & Operational Research, QS ranks TUM first in Germany and 28th in the world.
The Departments of Electrical and Computer Engineering are leading in Germany. In Electrical & Electronic Engineering, TUM is rated 18th worldwide by QS and 22nd by ARWU. In engineering as a whole, TUM is ranked 20th globally and 1st nationally in the Times Higher Education World University Rankings.
See also
Summer School Marktoberdorf
References
External links
2022 establishments in Germany
Universities and colleges established in 2022
Computer science departments
Electrical and computer engineering departments
Schools of mathematics | TUM School of Computation, Information and Technology | Engineering | 1,481 |
15,611,629 | https://en.wikipedia.org/wiki/Alpine%20Wall | The Alpine Wall (Vallo Alpino) was an Italian system of fortifications along Italy's northern frontier. Built in the years leading up to World War II at the direction of Italian dictator Benito Mussolini, the defensive line faced France, Switzerland, Austria, and Yugoslavia. It was defended by the "Guardia alla Frontiera" (GaF), Italian special troops.
Characteristics
The Alpine line was similar in concept to other fortifications of the same era, including the Maginot Line of France, the Siegfried Line of Germany, and the National Redoubt of Switzerland.
Italy's land frontiers were in most places mountainous and easily defended, but in the years leading up to World War II, Italy's relations with its neighbours were uneasy. Even in its dealings with its German ally, Italy was concerned about German ambitions towards the province of South Tyrol, inhabited by a German majority.
Due to the rugged nature of the Alpine frontier, defences were confined to passes and observation posts in accessible locations.
History
Work on the Alpine Wall began in 1931, intended to cover an arc from the Mediterranean coast at Ventimiglia in the west to Fiume on the Adriatic coast in the east. Three zones were designated at increasing distances from the frontier:
"Zone of Security": Initial contact with the enemy.
"Zone of Resistance": Heavier fortifications capable of resistance in isolation.
"Zone of Alignment": Assembly area for counterattack, into which the enemy was to be directed.
Three types of fortifications were provided:
"Type A": The largest fortifications, generally built into mountainsides.
"Type B": Smaller point-defence fortifications.
"Type C": Widely distributed shelters and rallying points
The work, which was carried out in secrecy using Italian labor, was a significant economic burden, resulting in 208 installations with 647 machine guns and fifty artillery pieces. Construction continued until 1942; a report by General Vittorio Ambrosio on 3 October 1942 recorded that 1,475 bunkers had been completed and 450 more were under construction. The forts were armed with a mixture of new weaponry and older equipment from World War I. Provisions were made to deal with the use of poison gas. Much of the armour was obtained from Germany in compensation for Italian military ventures on behalf of the Axis.
The Alpine Wall during World War II
Little use was made of the Alpine Wall during World War II. During the Italian invasion of France in 1940, some western forts such as Fort Chaberton exchanged fire with their French counterparts of the Alpine Line. Chaberton was hit by French 280mm field mortars and suffered disabling damage. In addition, some Alpine Wall fortifications were used defensively by Italian and German forces during the Second Battle of the Alps in 1944.
After World War II
At the end of the conflict, some of the western fortifications were destroyed. In the east, former Austro-Hungarian territories that Italy had acquired with the Treaty of London of 1915 were awarded to Yugoslavia in 1947. Consequently, the entire eastern part of the pre-World War II fortifications ended up on the Yugoslav side and is now on Slovenian territory. The 1947 Paris Peace Treaty forbade the construction or expansion of fortifications within twenty kilometers of the border.
However, with Italy's membership in NATO, construction began on a new defensive line from Austria to the Adriatic along the Yugoslav border, following the Natisone and Tagliamento rivers. The new line used tank turrets in a manner similar to German defences during the previous conflict, allowing 360-degree traverse and a high rate of fire. As late as 1976 this system was still considered useful in any type of conflict short of a nuclear war.
Abandonment
The end of the Cold War brought an end to the usefulness of the Alpine Wall. The emplacements were partially stripped and sealed in 1991-1992. Only some active fortifications have been preserved.
Arrangement
The fortifications were primarily constructed in the flanking heights of the valleys, with works within the valleys only where they were sufficiently wide. Anti-tank guns, artillery and machine guns were trained on prepared fields of fire, with observation stations at higher points. Shelters for infantry were located rearwards. A system of communications links and roads, or for higher locations, ropeways were provided for communication and supply.
Fortifications
The individual fortifications were typically built in rock on valley sides. Where this was not suitable, concrete was used for protection, with a minimum of openings and three to five metres of concrete thickness. Combat blocks were to the front, with ammunition rooms behind. Underground galleries connected the combat blocks and their support areas, such as the utility rooms, barracks, storage and command centres, with the main entry farthest to the rear. Combat areas were isolated from the rest of the structure by gas-tight doorways. Units built after 1939 were designed to operate independently, cut off from utilities and supplies.
Fortifications were camouflaged so that they appeared to blend with the surroundings, whether doors or embrasures were open or closed. Emergency escape routes were also provided.
Armament
Armament typically included an anti-tank gun and a number of machine guns. Post-war units used tank turrets.
Usual armament included:
Machine gun, Fiat 35 in casemate or metal turret
Machine gun, Breda 30 and Breda M37, for defense of entries to the fortifications
Gun 57/43 RM mod. 887 on naval mounts
Gun 75/43, ball mount in 10 cm steel slab
Gun 47/32
Mortar 81mm mod. 35
Flamethrower
Fortifications were usually surrounded by minefields and barbed wire. Where feasible, an anti-tank ditch was provided.
Guardia alla Frontiera
The Vallo Alpino was mainly defended by the 21,000-strong "Guardia alla Frontiera" (G.A.F.), a special Italian corps created in 1937. It defended the northern Italian frontiers along the so-called "Vallo Alpino Occidentale" (facing France), "Vallo Alpino Settentrionale" (facing Switzerland and Austria) and "Vallo Alpino Orientale" (facing Yugoslavia).
See also
Atlantic Wall
Czechoslovak border fortifications
Notes
Bibliography
Marco Boglione, Le Strade dei Cannoni Blu Edizioni, Torino 2005,
Kauffmann, J.E., Jurga, Robert M. Fortress Europe: European Fortifications of World War II, 1999.
Alessandro Bernasconi; Giovanni Muran. Le fortificazioni del Vallo Alpino Littorio in Alto Adige Trento, editore Temi [maggio 1999], 328 pagine.
Alessandro Bernasconi; Giovanni Muran. Il testimone di cemento - Le fortificazioni del "Vallo Alpino Littorio" in Cadore, Carnia e Tarvisiano, Udine editore La Nuova Base Editrice [maggio 2009], 498 pagine + CD con allegati storici e tecnici.
Malte Koenig, Vallo del littorio. Die italienischen Verteidigungsanlagen an der Nordfront, in fortifikation. Fachblatt des Studienkreises für Internationales Festungs-, Militaer- und Schutzbauwesen 22 (2008), pp. 87–92.
Malte Koenig, Kooperation als Machtkampf. Das faschistische Achsenbuendnis Berlin-Rom im Krieg 1940/41, Cologne 2007, pp. 238–249.
Josef Urthaler; Christina Niederkofler; Andrea Pozza. Bunker 2a ed., editore Athesia [2005], 2006, 244 pagine.
External links
The Underground Fortifications of the Alpine Wall
Bunkermuzeum
Le fortificazioni del Vallo Alpino in provincia di Cuneo
http://www.vecio.it
Eastern Vallo Alpino
Vallo alpino del Littorio nelle attuali Slovenia e Croazia (1920-1943)
Endangered Italian fortifications in actual Slovenia
World War II defensive lines
Italian fascist architecture
World War II sites in Italy
Yugoslavia in World War II
World War II sites in Slovenia
National redoubts | Alpine Wall | Engineering | 1,678 |
403,852 | https://en.wikipedia.org/wiki/Agreed%20Measures%20for%20the%20Conservation%20of%20Antarctic%20Fauna%20and%20Flora | The Agreed Measures for the Conservation of Antarctic Fauna and Flora is a set of environmental protection measures which were accepted at the third Antarctic Treaty Consultative Meeting in Brussels in 1964. The Agreed Measures were formally in force as part of the Antarctic Treaty System from 1982 to 2011, when they were withdrawn as the principles were now entirely superseded by later agreements such as the 1991 Protocol on Environmental Protection to the Antarctic Treaty. The Agreed Measures were adopted in order to further international collaboration within the administration of the Antarctic Treaty System and promote the protection of natural Antarctic ecological systems while enabling scientific study and exploration.
The Agreed Measures were the first attempts under the Treaty to prioritise wildlife conservation and environmental protection. This was needed due to increasing human interest in exploration, science, and fishing, which had put pressure on natural flora and fauna. They proved successful, and led the way for more stringent environmental protection in future.
History
Antarctic interests in the late 1940s were increasing, with nations fighting over territory in the Antarctic Peninsula region. Fear of open conflict from these nations, as well as fear of Antarctica becoming involved in the Cold War between the United States and the Soviet Union, led to the first discussions of Antarctic diplomacy and treaties. This led to the negotiations of the Antarctic Treaty in 1959 in which the International Geophysical Year Antarctic Program met to discuss scientific papers from 12 participating nations, regarding Antarctic science and research. The 12 nations in attendance were also members of the Scientific Committee on Antarctic Research (SCAR) which was founded one year prior in 1958. SCAR was formulated as an international association of biologists and other scientists interested in Antarctic research, and included Argentina, Australia, Belgium, Chile, France, Japan, New Zealand, Norway, South Africa, United Kingdom, United States, and USSR. The formation of SCAR and the Antarctic Treaty enabled scientists to advocate for conservation efforts and policy in Antarctica, leading to the first discussions of establishing the Agreed Measures for the Conservation of Antarctic Fauna and Flora.
The International Geophysical Year Antarctic Program was the beginning of concerns for Antarctic wildlife, as the geophysical scientists' efforts to explore Antarctica proved to be inadvertently harming Antarctic flora and fauna. Biologists were calling for awareness that Antarctica was not a lifeless tundra, but in fact had wildlife that was extremely vulnerable to human interference. SCAR secretary Gordon Robin published a paper for fellow scientist Robert Carrick in the SCAR Bulletin to bring further awareness to the requirement of conservation in Antarctica. Carrick, along with other prominent scientists, William J. L. Sladen, Robert Falla, Carl Eklund, Jean Prevost and Robert Cushman Murphy to name a few, were among the loudest contributors to SCAR's position of conservation.
As these scientists had all specialised in the area of birds, their first action towards Antarctic conservation occurred at the 1960 International Council for Bird Preservation in which they called specifically for protection of Antarctic birds. After this, SCAR continued to have a large voice in advocating Antarctic conservation, with Robert Carrick speaking at the Fourth SCAR meeting in 1960 to address specific reasons why conservation was necessary as well as providing recommendations for legislation. Following these meetings, SCAR supplied the parties of the Antarctic Treaty with their report, and it was from there that the first talks of the Agreed Measures began amongst Antarctic Treaty System members.
Negotiations
In January 1960, the U.S. representative for the Antarctic Treaty, Paul Daniels, asked that conservation be formally discussed at the first Antarctic Treaty Consultative Meeting. After this, US participation declined, and the British government was the only strong advocate for conservation. Brian Roberts from the Foreign Office began calling for a separate Convention for Antarctic conservation. At the first Antarctic Treaty Consultative Meeting at Canberra in 1961, parties agreed that some form of conservation effort was required, and implemented Recommendation I-VIII; a very broad set of interim guidelines which incorporated much of the SCAR report. Following this meeting, Roberts continued pushing for a formal agreement and drafted a full Convention to present to the other parties. On 6 June 1963 all the parties convened to discuss three position papers for conservation: the British draft, and the responses to said draft by Chile and the Soviet Union. In September 1963, the US representatives released their own draft titled the "Agreed Measures" rather than a Convention, and incorporated the preamble from the Soviets and much of the British draft with small changes in specific terminology. The US argued that a Measures would be better than a separate Convention as the Measures would fall under the authority of the Antarctic Treaty and share the same administrations.
At the third Antarctic Treaty Consultative Meeting at Brussels in June 1964, the Agreed Measures were passed as Recommendation VIII. Despite this, it took 18 years before they were effective in 1982 after Japan was the final country to sign them. The 12 countries required to sign for the measures to be effective, were the same 12 who had formulated SCAR; Argentina, Australia, Belgium, Chile, France, Japan, New Zealand, Norway, South Africa, United Kingdom, United States, and the Soviet Union. During the 18 years interim, parties behaved as though the measures were in force, with all of Antarctica being considered a "Special Conservation Area".
Summary of the Agreed Measures
The Agreed Measures for the Conservation of Antarctic Fauna and Flora consisted of fourteen articles, of which four were simply formalities. The Measures applied to land areas south of latitude 60°S which fell under the jurisdiction of the Antarctic Treaty. In Article I it was explicitly stated that this was with the exception of high seas areas which remain under international law. The first article also included provisions to ensure all Annexes to the Agreed Measures were considered a part of the measures themselves. A participating government could only be exempt from these Measures in "extreme circumstances", such as involving the potential loss of human life or an event which may jeopardise the welfare of large vessels such as ships and aircraft. The Agreed Measures strictly prohibited attempted killing, harming, or capturing of native mammals or birds without permit in Article VI, and in Article VIII it prohibited driving of vehicles and collecting native flora without permit. In both cases, a permit would only be supplied with "compelling scientific purpose" and assurance that ecology would not be endangered by these actions. The permits for these activities had specific terms and were only provided to a participating government in the case of limited food quantities for humans or dogs, for scientific research or to provide specimens for education. Article VI also instructed that permits had to be restricted in number by participating governments to ensure that native species were not killed more than can be compensated naturally in the year.
The Agreed Measures also set out to prevent harmful interference of native conditions in Article VII, and provided a detailed list of activities deemed harmful. These activities included eliciting loud sounds near wildlife, flying aircraft too close to wildlife, allowing dogs to run free, and excessive human disturbance during breeding periods. Article VI also stated that participating governments must take appropriate actions to prevent pollution of waters. Most notably, the Agreed Measures designated all applied areas as "Specially Protected Areas" in Article VIII, to emphasise the vulnerability of native Antarctic flora and fauna. In addition, the introduction of non-indigenous flora and fauna was prohibited in Article IX unless supported by a permit and was a species approved by Annex C. This did not include flora and fauna imported for the use of food, as long as it did not threaten Antarctic ecosystems. This section also highlighted the role of each participating government in preventing introduction of disease, with Annex D citing a list of precautions to prevent this. The Agreed Measures also established a framework for participating governments to communicate and share data on native Antarctic bird and mammal species in Article XII. This included information on how many numbers of each species had been killed or captured annually under a permit for use as food, or scientific study. This communication ensured transparency and allowed participating governments to determine the level of protection each native species required to protect and preserve Antarctic flora and fauna.
Ratification
The convention was ratified both by members whose ratification was required for entry into force as by others. A list is shown:
Other Agreements
Convention for the Conservation of Antarctic Seals
The Agreed Measures for the Conservation of Antarctic Fauna and Flora, only covered land areas south of latitude 60°S, and thus there was no measure in place for protection on the sea or floating ice. This was despite the efforts of Robert Carrick and the Australian party, who advocated strongly for this to be included in the Agreed Measures, to protect animals who spend most of their lives on pack ice or in the seas surrounding Antarctica. This issue was rectified by the signing of the Convention for the Conservation of Antarctic Seals in 1972, and was the first treaty in the wake of the Agreed Measures.
Convention on the Conservation of Antarctic Marine Living Resources
In 1975 at the Eighth Antarctic Treaty Consultative Meeting, they adopted Recommendation VIII-10 to protect marine life, which were excluded from the scope of the Agreed Measures. This issue had become increasingly urgent due to extensive fishing practices and overfishing of Antarctic krill which had become popular in the late 1960s to mid-1970s. In 1978 they held a Conference on the Conservation of Antarctic Marine Living Resources which resulted in the signing of the Convention for the Conservation of Antarctic Marine Living Resources (CCAMLR) in 1980. This was the world's first conservation agreement which protected the ecosystem (marine life) rather than an individual species such as seals.
Bilateral Treaties
Many updated measures were put in to place addressing similar issues of the Agreed Measures at Antarctic Treaty Consultative Meetings such as Article 3.2 and The Annex II to the Protocol on Environmental Protection to the Antarctic Treaty. The Protocol, otherwise known as the "Madrid Protocol" was set into effect in 1998, and prohibited mining or mineral resource activity in Antarctica. Article 3.2 is similar to the Agreed Measures as it prevents harm to natural Antarctic wildlife. Annex II also relates to the Agreed Measures by banning harmful interference and introduction of parasites or foreign species without permit as well as defining Specially Protected Areas and Species. Annex V of the Environmental Protocol followed that of the Agreed Measures, by designating "Specially Protected Areas" in Antarctica, and was adopted separately in 1991 and in force from 2002 onwards. In contrast to the Agreed Measures, Annex V also designated marine areas to be included within the scope of "Antarctic Specially Protected Areas". In fact, Annex V added several different layers of protection for Antarctic land areas, by introducing "Antarctic Specially Protected Areas", "Antarctic Specially Managed Areas" and "Historic Sites and Monuments". As with the Agreed Measures, "Antarctic Specially Protected Areas", was defined by Annex V of the Environmental Protocol as an area protected to maintain ecological, scientific, historic and aesthetic features and requires a permit to enter. "Antarctic Specially Managed Area" was defined as an area in which activities may be conducted and does not require a permit to enter, however parties must continue to minimise their ecological impact and avoid conflicts between participating governments. Lastly, Antarctic "Historic Sites and Monuments" were defined as areas of significant historic relevance and can be proposed by any participating government.
The Agreed Measures also focused significantly on prohibiting harmful human interference, and several other agreements have since been adopted to manage human disturbance. These include the 1994 Recommendation XVIII-1: Guidance for Visitors to the Antarctic as well as the 2004 Guidelines for the Operation of Aircraft Near Concentrations of Birds in Antarctica. Recommendation XVIII-1 provided the main regulations for tourists and expeditions to the Antarctic and required report submissions for their visits. The Recommendation explicitly stated prohibited activities for tourists in order to prevent harmful interference with wildlife, as well as guidelines for respecting protected areas, and scientific research facilities and equipment. The guidelines also included provisions to prevent human waste, pollution and defacement of property including engraving or painting on natural rocks. The Guidelines for the Operation of Aircraft Near Concentrations of Birds in Antarctica followed the legislation provided by the Agreed Measures in terms of prohibiting aircraft near natural wildlife to prevent disruption. The Guidelines were adopted by the Antarctic Treaty Consultative Parties in 2004, and listed specific regulations to protect wildlife by discouraging aircraft from flying at low altitude above ground level, landing close to bird colonies, hovering or making repeated passes over wildlife, and flying too close to the coastline.
See also
Antarctic and Southern Ocean Coalition (ASOC)
Antarctic Specially Protected Area (ASPA)
Antarctic Specially Managed Area (ASMA)
Antarctic Treaty System
Multilateral treaty
National Antarctic Program
Category: Outposts of Antarctica
Research stations in Antarctica
International Council for Science (ICSU)
International Geophysical Year (IGY)
International Polar Year (IPY)
References
External links
Full text of document
1982 in Antarctica
Antarctica agreements
Cold War treaties
Environmental treaties
Treaties concluded in 1964
Treaties entered into force in 1982
1982 in the environment
Environment of Antarctica
Treaties of Argentina
Treaties of Australia
Treaties of Belgium
Treaties of Brazil
Treaties of Chile
Treaties of the People's Republic of China
Treaties of France
Treaties of West Germany
Treaties of India
Treaties of Italy
Treaties of Japan
Treaties of South Korea
Treaties of New Zealand
Treaties of Norway
Treaties of the Polish People's Republic
Treaties of the Soviet Union
Treaties of South Africa
Treaties of Spain
Treaties of the United Kingdom
Treaties of the United States
Treaties of Uruguay
Animal treaties
Biota of Antarctica | Agreed Measures for the Conservation of Antarctic Fauna and Flora | Biology | 2,628 |
2,515,425 | https://en.wikipedia.org/wiki/Von%20Neumann%20entropy | In physics, the von Neumann entropy, named after John von Neumann, is a measure of the statistical uncertainty within a description of a quantum system. It extends the concept of Gibbs entropy from classical statistical mechanics to quantum statistical mechanics, and it is the quantum counterpart of the Shannon entropy from classical information theory. For a quantum-mechanical system described by a density matrix ρ, the von Neumann entropy is
S = −tr(ρ ln ρ),
where tr denotes the trace and ln denotes the matrix version of the natural logarithm. If the density matrix ρ is written in a basis of its eigenvectors |1⟩, |2⟩, |3⟩, … as
ρ = Σ_j η_j |j⟩⟨j|,
then the von Neumann entropy is merely
S = −Σ_j η_j ln η_j.
In this form, S can be seen as the Shannon entropy of the eigenvalues η_j, reinterpreted as probabilities.
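The eigenvalue form lends itself to a direct numerical check. The following sketch (an illustration added here, not part of the original text; it assumes NumPy and natural logarithms, so entropies are in nats) computes S(ρ) from the spectrum of ρ:

```python
import numpy as np

def von_neumann_entropy(rho, tol=1e-12):
    """S(rho) = -tr(rho ln rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)   # rho is Hermitian
    evals = evals[evals > tol]        # 0 * ln 0 is taken to be 0
    return float(-np.sum(evals * np.log(evals)))

pure = np.array([[1.0, 0.0], [0.0, 0.0]])   # a pure qubit state
mixed = np.eye(2) / 2                       # maximally mixed qubit
print(von_neumann_entropy(pure))    # 0.0
print(von_neumann_entropy(mixed))   # ln 2 ≈ 0.693
```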
The von Neumann entropy and quantities based upon it are widely used in the study of quantum entanglement.
Fundamentals
In quantum mechanics, probabilities for the outcomes of experiments made upon a system are calculated from the quantum state describing that system. Each physical system is associated with a vector space, or more specifically a Hilbert space. The dimension of the Hilbert space may be infinite, as it is for the space of square-integrable functions on a line, which is used to define the quantum physics of a continuous degree of freedom. Alternatively, the Hilbert space may be finite-dimensional, as occurs for spin degrees of freedom. A density operator, the mathematical representation of a quantum state, is a positive semi-definite, self-adjoint operator of trace one acting on the Hilbert space of the system. A density operator that is a rank-1 projection is known as a pure quantum state, and all quantum states that are not pure are designated mixed. Pure states are also known as wavefunctions. Assigning a pure state to a quantum system implies certainty about the outcome of some measurement on that system (i.e., P(x) = 1 for some outcome x). The state space of a quantum system is the set of all states, pure and mixed, that can be assigned to it. For any system, the state space is a convex set: Any mixed state can be written as a convex combination of pure states, though not in a unique way. The von Neumann entropy quantifies the extent to which a state is mixed.
The prototypical example of a finite-dimensional Hilbert space is a qubit, a quantum system whose Hilbert space is 2-dimensional. An arbitrary state for a qubit can be written as a linear combination of the Pauli matrices σ_x, σ_y, σ_z, which together with the identity matrix provide a basis for 2 × 2 self-adjoint matrices:
ρ = ½ (I + r_x σ_x + r_y σ_y + r_z σ_z),
where the real numbers (r_x, r_y, r_z) are the coordinates of a point within the unit ball. The von Neumann entropy vanishes when ρ is a pure state, i.e., when the point (r_x, r_y, r_z) lies on the surface of the unit ball, and it attains its maximum value ln 2 when ρ is the maximally mixed state, which is given by ρ = I/2.
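Spelled out as a worked formula (added here for concreteness; it follows directly from the parametrization above): the eigenvalues of ρ are (1 ± r)/2 with r = (r_x² + r_y² + r_z²)^(1/2), so
S(ρ) = −((1 + r)/2) ln((1 + r)/2) − ((1 − r)/2) ln((1 − r)/2),
which vanishes at r = 1 (a pure state, on the surface of the ball) and reaches its maximum ln 2 at r = 0 (the maximally mixed state ρ = I/2).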
Properties
Some properties of the von Neumann entropy:
S(ρ) is zero if and only if ρ represents a pure state.
S(ρ) is maximal and equal to ln N for a maximally mixed state, N being the dimension of the Hilbert space.
S(ρ) is invariant under changes in the basis of ρ, that is, S(ρ) = S(UρU†), with U a unitary transformation.
S(ρ) is concave, that is, given a collection of positive numbers λ_i which sum to unity (Σ_i λ_i = 1) and density operators ρ_i, we have
S(Σ_i λ_i ρ_i) ≥ Σ_i λ_i S(ρ_i).
S(ρ) is additive for independent systems. Given two density matrices ρ_A, ρ_B describing independent systems A and B, we have
S(ρ_A ⊗ ρ_B) = S(ρ_A) + S(ρ_B).
S(ρ) is strongly subadditive. That is, for any three systems A, B, and C:
S(ρ_ABC) + S(ρ_B) ≤ S(ρ_AB) + S(ρ_BC).
This automatically means that S(ρ) is subadditive:
S(ρ_AC) ≤ S(ρ_A) + S(ρ_C).
Below, the concept of subadditivity is discussed, followed by its generalization to strong subadditivity.
Subadditivity
If ρ_A and ρ_B are the reduced density matrices of the general state ρ_AB, then
|S(ρ_A) − S(ρ_B)| ≤ S(ρ_AB) ≤ S(ρ_A) + S(ρ_B).
The right hand inequality is known as subadditivity, and the left is sometimes known as the triangle inequality. While in Shannon's theory the entropy of a composite system can never be lower than the entropy of any of its parts, in quantum theory this is not the case; i.e., it is possible that S(ρ_AB) = 0, while S(ρ_A) = S(ρ_B) > 0. This is expressed by saying that the Shannon entropy is monotonic but the von Neumann entropy is not. For example, take the Bell state of two spin-1/2 particles:
|ψ⟩ = (|↑↓⟩ + |↓↑⟩)/√2.
This is a pure state with zero entropy, but each spin has maximum entropy when considered individually, because its reduced density matrix is the maximally mixed state. This indicates that it is an entangled state; the use of entropy as an entanglement measure is discussed further below.
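A short numerical illustration of this example (an added sketch, assuming NumPy and natural logarithms; the helper S is the same eigenvalue-based computation sketched earlier):

```python
import numpy as np

def S(rho, tol=1e-12):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > tol]
    return float(-np.sum(ev * np.log(ev)))

# Bell state (|01> + |10>)/sqrt(2) of two qubits, as a vector in C^4.
psi = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)
rho_AB = np.outer(psi, psi)                      # joint (pure) state

# Partial trace over the second qubit: reshape to (2,2,2,2), sum matching indices.
rho_A = np.einsum('ijkj->ik', rho_AB.reshape(2, 2, 2, 2))

print(S(rho_AB))   # 0.0           (pure joint state)
print(S(rho_A))    # ln 2 ≈ 0.693  (maximally mixed reduced state)
```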
Strong subadditivity
The von Neumann entropy is also strongly subadditive. Given three Hilbert spaces, A, B, C,
S(ρ_ABC) + S(ρ_B) ≤ S(ρ_AB) + S(ρ_BC).
By using the proof technique that establishes the left side of the triangle inequality above, one can show that the strong subadditivity inequality is equivalent to the following inequality:
S(ρ_A) + S(ρ_C) ≤ S(ρ_AB) + S(ρ_BC),
where ρ_AB, etc. are the reduced density matrices of a density matrix ρ_ABC. If we apply ordinary subadditivity to the left side of this inequality, we then find
S(ρ_AC) ≤ S(ρ_AB) + S(ρ_BC).
By symmetry, for any tripartite state ρ_ABC, each of the three numbers S(ρ_AB), S(ρ_BC), S(ρ_AC) is less than or equal to the sum of the other two.
Minimum Shannon entropy
Given a quantum state and a specification of a quantum measurement, we can calculate the probabilities for the different possible results of that measurement, and thus we can find the Shannon entropy of that probability distribution. A quantum measurement can be specified mathematically as a positive operator valued measure, or POVM. In the simplest case, a system with a finite-dimensional Hilbert space and measurement with a finite number of outcomes, a POVM is a set of positive semi-definite matrices {F_i} on the Hilbert space that sum to the identity matrix,
Σ_i F_i = I.
The POVM element F_i is associated with the measurement outcome i, such that the probability of obtaining it when making a measurement on the quantum state ρ is given by
p(i) = tr(ρ F_i).
A POVM is rank-1 if all of the elements are proportional to rank-1 projection operators. The von Neumann entropy is the minimum achievable Shannon entropy, where the minimization is taken over all rank-1 POVMs.
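The following sketch (added for illustration, assuming NumPy; the state and bases are chosen arbitrarily) checks this for projective, rank-1 measurements on a qubit: measuring in the eigenbasis of ρ reproduces S(ρ), while measuring in another basis yields a larger Shannon entropy.

```python
import numpy as np

rho = np.array([[0.9, 0.0], [0.0, 0.1]])   # diagonal, so its eigenbasis is the computational basis

def shannon_from_basis(rho, basis):
    """Shannon entropy of outcome probabilities p_i = <i|rho|i> for an orthonormal basis."""
    p = np.array([np.real(v.conj() @ rho @ v) for v in basis])
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

z_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]                    # eigenbasis of rho
x_basis = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]

print(shannon_from_basis(rho, z_basis))   # ≈ 0.325, equal to S(rho)
print(shannon_from_basis(rho, x_basis))   # ln 2 ≈ 0.693, strictly larger
```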
Holevo χ quantity
If ρ_i are density operators and λ_i is a collection of positive numbers which sum to unity (Σ_i λ_i = 1), then
ρ = Σ_i λ_i ρ_i
is a valid density operator, and the difference between its von Neumann entropy and the weighted average of the entropies of the ρ_i is bounded by the Shannon entropy of the λ_i:
S(Σ_i λ_i ρ_i) − Σ_i λ_i S(ρ_i) ≤ −Σ_i λ_i ln λ_i.
Equality is attained when the supports of the ρ_i – the spaces spanned by their eigenvectors corresponding to nonzero eigenvalues – are orthogonal. The difference on the left-hand side of this inequality is known as the Holevo χ quantity and also appears in Holevo's theorem, an important result in quantum information theory.
Change under time evolution
Unitary
The time evolution of an isolated system is described by a unitary operator:
ρ → UρU†.
Unitary evolution takes pure states into pure states, and it leaves the von Neumann entropy unchanged. This follows from the fact that the entropy of UρU† is a function of the eigenvalues of ρ.
Measurement
A measurement upon a quantum system will generally bring about a change of the quantum state of that system. Writing a POVM does not provide the complete information necessary to describe this state-change process. To remedy this, further information is specified by decomposing each POVM element F_i into a product:
F_i = A_i† A_i.
The Kraus operators A_i, named for Karl Kraus, provide a specification of the state-change process. They are not necessarily self-adjoint, but the products A_i† A_i are. If upon performing the measurement the outcome i is obtained, then the initial state ρ is updated to
ρ → A_i ρ A_i† / tr(A_i ρ A_i†).
An important special case is the Lüders rule, named for Gerhart Lüders. If the POVM elements are projection operators Π_i, then the Kraus operators can be taken to be the projectors themselves: A_i = Π_i.
If the initial state is pure, and the projectors have rank 1, they can be written as projectors onto the vectors |ψ⟩ and |i⟩, respectively. The formula simplifies thus to
ρ = |ψ⟩⟨ψ| → |i⟩⟨i|.
We can define a linear, trace-preserving, completely positive map, by summing over all the possible post-measurement states of a POVM without the normalisation:
ρ → Σ_i A_i ρ A_i†.
It is an example of a quantum channel, and can be interpreted as expressing how a quantum state changes if a measurement is performed but the result of that measurement is lost. Channels defined by projective measurements can never decrease the von Neumann entropy; they leave the entropy unchanged only if they do not change the density matrix. A quantum channel will increase or leave constant the von Neumann entropy of every input state if and only if the channel is unital, i.e., if it leaves fixed the maximally mixed state. An example of a channel that decreases the von Neumann entropy is the amplitude damping channel for a qubit, which sends all mixed states towards a pure state.
Thermodynamic meaning
The quantum version of the canonical distribution, the Gibbs states, are found by maximizing the von Neumann entropy under the constraint that the expected value of the Hamiltonian is fixed. A Gibbs state is a density operator with the same eigenvectors as the Hamiltonian, and its eigenvalues are
η_j = exp(−E_j / k_B T) / Z,
where T is the temperature, k_B is the Boltzmann constant, E_j are the energy eigenvalues of the Hamiltonian, and Z is the partition function. The von Neumann entropy of a Gibbs state is, up to a factor k_B, the thermodynamic entropy.
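A brief numerical sketch of this construction (added for illustration; it assumes NumPy, an arbitrarily chosen two-level Hamiltonian, and units with k_B = 1, so the von Neumann entropy equals the thermodynamic entropy directly):

```python
import numpy as np

def gibbs_state(H, T):
    """rho = exp(-H/T) / Z for Hamiltonian H at temperature T (units with k_B = 1)."""
    E, U = np.linalg.eigh(H)
    w = np.exp(-E / T)
    w /= w.sum()                        # Boltzmann weights; the normalization is Z
    return U @ np.diag(w) @ U.conj().T

def S(rho, tol=1e-12):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > tol]
    return float(-np.sum(ev * np.log(ev)))

H = np.diag([0.0, 1.0])                 # two-level system with unit energy gap
for T in (0.1, 1.0, 10.0):
    print(T, S(gibbs_state(H, T)))      # entropy grows from ~0 towards ln 2 as T increases
```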
Generalizations and derived quantities
Conditional entropy
Let ρ_AB be a joint state for the bipartite quantum system AB. Then the conditional von Neumann entropy S(A|B) is the difference between the entropy of ρ_AB and the entropy of the marginal state ρ_B for subsystem B alone:
S(A|B) = S(ρ_AB) − S(ρ_B).
This is bounded above by S(ρ_A). In other words, conditioning the description of subsystem A upon subsystem B cannot increase the entropy associated with A.
Quantum mutual information can be defined as the difference between the entropy of the joint state and the total entropy of the marginals:
I(A:B) = S(ρ_A) + S(ρ_B) − S(ρ_AB),
which can also be expressed in terms of conditional entropy:
I(A:B) = S(ρ_A) − S(A|B) = S(ρ_B) − S(B|A).
Relative entropy
Let ρ and σ be two density operators in the same state space. The relative entropy is defined to be
S(ρ ‖ σ) = tr(ρ ln ρ) − tr(ρ ln σ).
The relative entropy is always greater than or equal to zero; it equals zero if and only if ρ = σ. Unlike the von Neumann entropy itself, the relative entropy is monotonic, in that it decreases (or remains constant) when part of a system is traced over:
S(ρ_A ‖ σ_A) ≤ S(ρ_AB ‖ σ_AB).
Entanglement measures
Just as energy is a resource that facilitates mechanical operations, entanglement is a resource that facilitates performing tasks that involve communication and computation. The mathematical definition of entanglement can be paraphrased as saying that maximal knowledge about the whole of a system does not imply maximal knowledge about the individual parts of that system. If the quantum state that describes a pair of particles is entangled, then the results of measurements upon one half of the pair can be strongly correlated with the results of measurements upon the other. However, entanglement is not the same as "correlation" as understood in classical probability theory and in daily life. Instead, entanglement can be thought of as potential correlation that can be used to generate actual correlation in an appropriate experiment. The state of a composite system is always expressible as a sum, or superposition, of products of states of local constituents; it is entangled if this sum cannot be written as a single product term. Entropy provides one tool that can be used to quantify entanglement. If the overall system is described by a pure state, the entropy of one subsystem can be used to measure its degree of entanglement with the other subsystems. For bipartite pure states, the von Neumann entropy of reduced states is the unique measure of entanglement in the sense that it is the only function on the family of states that satisfies certain axioms required of an entanglement measure. It is thus known as the entanglement entropy.
It is a classical result that the Shannon entropy achieves its maximum at, and only at, the uniform probability distribution {1/n, ..., 1/n}. Therefore, a bipartite pure state ρ_AB is said to be a maximally entangled state if the reduced state of each subsystem of ρ_AB is the diagonal matrix
diag(1/n, ..., 1/n).
For mixed states, the reduced von Neumann entropy is not the only reasonable entanglement measure. Some of the other measures are also entropic in character. For example, the relative entropy of entanglement is given by minimizing the relative entropy between a given state and the set of nonentangled, or separable, states. The entanglement of formation is defined by minimizing, over all possible ways of writing ρ_AB as a convex combination of pure states, the average entanglement entropy of those pure states. The squashed entanglement is based on the idea of extending a bipartite state ρ_AB to a state ρ_ABE describing a larger system, such that the partial trace of ρ_ABE over E yields ρ_AB. One then finds the infimum of the quantity
½ [S(ρ_AE) + S(ρ_BE) − S(ρ_ABE) − S(ρ_E)]
over all possible choices of ρ_ABE.
Quantum Rényi entropies
Just as the Shannon entropy function is one member of the broader family of classical Rényi entropies, so too can the von Neumann entropy be generalized to the quantum Rényi entropies:
S_α(ρ) = (1 / (1 − α)) ln tr(ρ^α),   for α ≥ 0, α ≠ 1.
In the limit that α → 1, this recovers the von Neumann entropy. The quantum Rényi entropies are all additive for product states, and for any α, the Rényi entropy vanishes for pure states and is maximized by the maximally mixed state. For any given state ρ, S_α(ρ) is a continuous, nonincreasing function of the parameter α. A weak version of subadditivity can be proven:
S_α(ρ_A) − S_0(ρ_B) ≤ S_α(ρ_AB) ≤ S_α(ρ_A) + S_0(ρ_B).
Here, S_0 is the quantum version of the Hartley entropy, i.e., the logarithm of the rank of the density matrix.
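A short numerical check of the α → 1 limit (an added sketch, assuming NumPy; the state and α values are chosen arbitrarily):

```python
import numpy as np

def renyi_entropy(rho, alpha, tol=1e-12):
    """S_alpha(rho) = ln(tr rho^alpha) / (1 - alpha), for alpha != 1."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > tol]
    return float(np.log(np.sum(ev ** alpha)) / (1.0 - alpha))

p = np.array([0.7, 0.2, 0.1])
rho = np.diag(p)
for alpha in (0.5, 0.99, 1.01, 2.0):
    print(alpha, renyi_entropy(rho, alpha))

# As alpha -> 1 the values approach the von Neumann entropy of the same state:
print(-np.sum(p * np.log(p)))   # ≈ 0.802
```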
History
The density matrix was introduced, with different motivations, by von Neumann and by Lev Landau. The motivation that inspired Landau was the impossibility of describing a subsystem of a composite quantum system by a state vector. On the other hand, von Neumann introduced the density matrix in order to develop both quantum statistical mechanics and a theory of quantum measurements. He introduced the expression now known as von Neumann entropy by arguing that a probabilistic combination of pure states is analogous to a mixture of ideal gases. Von Neumann first published on the topic in 1927. His argument was built upon earlier work by Albert Einstein and Leo Szilard.
Max Delbrück and Gert Molière proved the concavity and subadditivity properties of the von Neumann entropy in 1936. Quantum relative entropy was introduced by Hisaharu Umegaki in 1962. The subadditivity and triangle inequalities were proved in 1970 by Huzihiro Araki and Elliott H. Lieb. Strong subadditivity is a more difficult theorem. It was conjectured by Oscar Lanford and Derek Robinson in 1968. Lieb and Mary Beth Ruskai proved the theorem in 1973, using a matrix inequality proved earlier by Lieb.
References
Quantum mechanical entropy
John von Neumann | Von Neumann entropy | Physics | 3,018 |
52,624,843 | https://en.wikipedia.org/wiki/ERB-196 | ERB-196, also known as WAY-202196, is a synthetic nonsteroidal estrogen that acts as a highly selective agonist of the ERβ. It possesses 78-fold selectivity for the ERβ over the ERα. The drug was under development by Wyeth for the treatment of inflammation and sepsis starting in 2004 but development was discontinued by 2011.
See also
8β-VE2
Diarylpropionitrile
FERb 033
Prinaberel
WAY-166818
WAY-200070
WAY-214156
References
External links
ERB-196 - AdisInsight
Hydroxyarenes
Fluoroarenes
Naphthalenes
Nitriles
Selective ERβ agonists
Synthetic estrogens | ERB-196 | Chemistry | 153 |
32,015,767 | https://en.wikipedia.org/wiki/German%20Chemical%20Society | The German Chemical Society () is a learned society and professional association founded in 1949 to represent the interests of German chemists in local, national and international contexts. GDCh "brings together people working in chemistry and the molecular sciences and supports their striving for positive, sustainable scientific advance – for the good of humankind and the environment, and a future worth living for."
History
The earliest precursor of today's GDCh was the German Chemical Society (Deutsche Chemische Gesellschaft, DChG). Adolf von Baeyer was prominent among the German chemists who established DChG in 1867, and August Wilhelm von Hofmann was the first president. This society was modeled after the British Chemical Society, which was the precursor of the Royal Society of Chemistry. Like its British counterpart, DChG sought to foster the communication of new ideas and facts throughout Germany and across international borders.
In 1949, the current organization was created by a merger of the German Chemical Society (DChG) and the Association of German Chemists (Verein Deutscher Chemiker, VDCh).
Honorary Members of the GDCh have included Otto Hahn, Robert B. Woodward, Jean-Marie Lehn, George Olah and other eminent scientists.
Activities
Scientific publications of the society include , Angewandte Chemie, Chemistry: A European Journal, European Journal of Inorganic Chemistry, European Journal of Organic Chemistry, ChemPhysChem, ChemSusChem, ChemBioChem, ChemMedChem, ChemCatChem, ChemistryViews, Chemie Ingenieur Technik and Chemie in unserer Zeit.
In the 21st century, the society has become a member of ChemPubSoc Europe, which is an organization of 16 European chemical societies. This European consortium was established in the late 1990s as many chemical journals owned by national chemical societies were amalgamated.
Prizes and awards
The society acknowledges individual achievement with prizes and awards, including medals originally conferred by the predecessor organizations DChG and VDCh:
Hofmann Medal (Hofmann Denkmünze), first awarded to Henri Moissan, 1903
Liebig Medal (Liebig Denkmünze), first awarded to Adolf von Baeyer, 1903
Gmelin-Beilstein Medal (Gmelin-Beilstein Denkmünze), first awarded to Paul Walden and Maximilian Pflücke, 1954
Hermann Staudinger Prize (Hermann-Staudinger-Preis), first awarded to Werner Kern and Günter Victor Schulz in 1971.
Meyer-Galow Award For Business Chemistry (Der Meyer-Galow-Preis für Wirtschaftschemie), first awarded to Susanne Röhrig, 2012.
See also
List of chemistry societies
Royal Society of Chemistry, 1841
Société Chimique de France, 1857
American Chemical Society, 1876
Chemical Society of Japan, 1878
Notes
External links
Gesellschaft Deutscher Chemiker; English website
Organizations established in 1867
Chemistry education
1949 establishments in Germany
Scientific organizations established in 1949
Society of German Chemists
Scientific societies based in Germany | German Chemical Society | Chemistry | 615 |
54,706,443 | https://en.wikipedia.org/wiki/Gun%20Violence%20Archive | Gun Violence Archive (GVA) is an American nonprofit group with an accompanying website and social media delivery platforms which seeks to catalog every incident of gun violence in the United States. It was founded by Michael Klein and Mark Bryant. Klein is the founder of Sunlight Foundation, and Bryant is a retired systems analyst.
History
GVA was established in 2013; its data collection began in 2014 and is ongoing. It provides gun violence data and statistics. Perceived gaps in both CDC and FBI data, as well as the lag in their release, are among the reasons GVA offers independent data collection. The GVA typically publishes incidents in its database within 3 days, whereas government agencies like the FBI may take months or even years.
GVA maintains a database of known shootings in the United States, coming from law enforcement, media and government sources in all 50 states.
See also
Firearm death rates in the United States by state
References
External links
Gun violence in the United States
Internet properties established in 2013
Online databases
Gun violence
Gun politics
Violence | Gun Violence Archive | Biology | 205 |
61,820,316 | https://en.wikipedia.org/wiki/Archa%20%28document%20store%29 | An archa or arca (plural archae) was a mediaeval document repository, such as a chest, associated with the financial records of Jews in England at the time.
According to Jewish Communities and Records, UK, the archa was "an official chest, provided with three locks and seals, in which a counterpart of all deeds and contracts involving Jews was to be deposited in order to preserve the records." Similarly, The Jewish Encyclopedia of 1906 describes an archa as a "repository in which chirographs and other deeds were preserved."
Worcester and Winchester were two of the 26 Jewish centres of the time to have archae. The introduction of archae in Worcester was part of the reorganization of English Jewry ordered by King Richard I in light of the massacres of Jews that took place in 1189-1190 at, and shortly following, his coronation. These massacres resulted in a heavy loss of Crown revenue, partly because the mobs destroyed Jewish financial records in order to conceal evidence of debts owed to the Jews. The archae were intended to safeguard the royal rights in case of future disorder. All Jewish possessions and credits were to be registered and several cities were designated as centres for all Jewish business operations and registration of Jewish financial transactions. In each centre, a bureau was set up consisting of two reputable Jews and two Christian clerks, under the supervision of a representative of the newly established central authority that became known as the Exchequer of the Jews.
See also
References
Further reading
Scott, K. (1950) "The Jewish Arcae", in: The Cambridge Law Journal, 10:446–455. Cambridge: Cambridge University Press .
Information management
Accounting source documents
Archives in England
Medieval English Jews | Archa (document store) | Technology | 355 |
1,491,215 | https://en.wikipedia.org/wiki/Cajun%20Dart | Cajun Dart is the designation of an American sounding rocket. The Cajun Dart was used 87 times between 1964 and 1970. The Cajun rocket motor was developed from Deacon.
Staged on top of a Nike rocket, it was part of the Nike-Cajun sounding rocket; it was also used as part of the Terasca three-stage rocket.
Specs
Takeoff thrust: 36 kN
Maximum flight height: 74 km
Takeoff weight: 100 kg
Diameter: 0.17 m
Length: 4.10 m
References
https://web.archive.org/web/20100102071657/http://www.astronautix.com/lvs/cajun.htm
Sounding rockets of the United States | Cajun Dart | Astronomy | 149 |
1,864,484 | https://en.wikipedia.org/wiki/Enumerator%20polynomial | In coding theory, the weight enumerator polynomial of a binary linear code specifies the number of words of each possible Hamming weight.
Let C ⊆ GF(2)^n be a binary linear code of length n. The weight distribution is the sequence of numbers A_t = #{c ∈ C : w(c) = t}, giving the number of codewords c in C having weight t as t ranges from 0 to n. The weight enumerator is the bivariate polynomial W(C; x, y) = Σ_t A_t x^(n−t) y^t, where the sum runs over t from 0 to n.
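As a small worked example (the three-bit repetition code is used purely as an illustration):

```latex
% Weight enumerator of the [3,1] binary repetition code C = {000, 111}
\[
  A_{0} = 1, \quad A_{1} = A_{2} = 0, \quad A_{3} = 1,
  \qquad
  W(C; x, y) = \sum_{t=0}^{3} A_{t}\, x^{3-t} y^{t} = x^{3} + y^{3}.
\]
```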
Basic properties
MacWilliams identity
Denote the dual code of C by C⊥ = {x ∈ GF(2)^n : ⟨x, c⟩ = 0 for all c ∈ C} (where ⟨x, c⟩ denotes the vector dot product, taken over GF(2)). The MacWilliams identity states that W(C⊥; x, y) = (1/|C|) W(C; x + y, x − y).
The identity is named after Jessie MacWilliams.
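A brute-force numerical sketch that builds a small code from a generator matrix, computes the weight distributions of the code and of its dual, and checks the MacWilliams identity at a sample point (a NumPy sketch; the generator matrix of the [3,1] repetition code is an illustrative assumption):

```python
import itertools
import numpy as np

def codewords(G):
    """All codewords of the binary linear code generated by the rows of G (arithmetic mod 2)."""
    k = G.shape[0]
    return {tuple(int(v) for v in np.mod(np.array(m) @ G, 2))
            for m in itertools.product([0, 1], repeat=k)}

def weight_distribution(code, n):
    """A_t = number of codewords of Hamming weight t, for t = 0, ..., n."""
    A = [0] * (n + 1)
    for c in code:
        A[sum(c)] += 1
    return A

def dual_code(code, n):
    """Brute-force dual: all length-n binary vectors orthogonal (mod 2) to every codeword."""
    return {x for x in itertools.product([0, 1], repeat=n)
            if all(sum(a * b for a, b in zip(x, c)) % 2 == 0 for c in code)}

def enumerator(A, x, y):
    """Evaluate W(C; x, y) = sum over t of A_t * x^(n-t) * y^t at numeric x, y."""
    n = len(A) - 1
    return sum(A[t] * x ** (n - t) * y ** t for t in range(n + 1))

# Illustrative example: the [3,1] binary repetition code.
G = np.array([[1, 1, 1]])
n = G.shape[1]
C = codewords(G)
C_dual = dual_code(C, n)

A = weight_distribution(C, n)            # [1, 0, 0, 1] -> W(C; x, y) = x^3 + y^3
A_dual = weight_distribution(C_dual, n)  # [1, 0, 3, 0] -> x^3 + 3*x*y^2

# MacWilliams identity, checked numerically at (x, y) = (2, 3): both sides equal 62.
x, y = 2.0, 3.0
print(enumerator(A_dual, x, y), enumerator(A, x + y, x - y) / len(C))
```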
Distance enumerator
The distance distribution or inner distribution of a code C of size M and length n is the sequence of numbers A_i = (1/M) #{(c1, c2) ∈ C × C : d(c1, c2) = i}, where i ranges from 0 to n. The distance enumerator polynomial is A(C; x, y) = Σ_i A_i x^(n−i) y^i,
and when C is linear this is equal to the weight enumerator.
The outer distribution of C is the 2^n-by-(n + 1) matrix B with rows indexed by elements of GF(2)^n and columns indexed by the integers 0, ..., n, and entries B_{x,i} = #{c ∈ C : d(c, x) = i}.
The sum of the rows of B is M times the inner distribution vector (A_0, ..., A_n).
A code C is regular if the rows of B corresponding to the codewords of C are all equal.
References
Chapters 3.5 and 4.3.
Coding theory
Error detection and correction
Mathematical identities
Polynomials | Enumerator polynomial | Mathematics,Engineering | 285 |
760,328 | https://en.wikipedia.org/wiki/Lenka%20Kotkov%C3%A1 | Lenka Kotková (née Šarounová; born 26 July 1973) is a Czech astronomer and a discoverer of minor planets.
She works at Observatoř Ondřejov (Ondřejov Observatory), located near Prague. Besides numerous main-belt asteroids she also discovered Mars-crosser asteroid 9671 Hemera and Hilda family asteroid 21804 Václavneumann.
Lenka Kotková studied meteorology at the Faculty of Mathematics and Physics of Charles University in Prague. Her tasks at the Astronomical Institute of the Czech Academy of Sciences (AV ČR) in Ondřejov are primarily the development of databases, spectroscopic and photometric observation, and data processing. During her work at the department of interplanetary matter, her main role was the observation of near-Earth asteroids; together with Petr Pravec and Peter Kušnirák she identified a large proportion of the known binary asteroids. In the same period she discovered or co-discovered over one hundred asteroids. At present, Lenka Kotková works in the stellar department as an observer with the two-metre telescope at Ondřejov.
In the year 2000 she received the Zdeněk Kvíz Award of the Czech Astronomical Society for significant work in the research of variable stars.
The asteroid 10390 Lenka, discovered by her colleagues Petr Pravec and Marek Wolf in 1997, is named after her. The asteroid 60001 Adélka, discovered by her in 1999, is named after her daughter, while 7897 Bohuška, discovered by her in 1995, is named after her mother.
List of discovered minor planets
References
1973 births
Czech astronomers
Discoverers of asteroids
Living people
Women astronomers
People from Prague-West District
Charles University alumni | Lenka Kotková | Astronomy | 354 |
32,573,740 | https://en.wikipedia.org/wiki/Hautus%20lemma | In control theory and in particular when studying the properties of a linear time-invariant system in state space form, the Hautus lemma (after Malo L. J. Hautus), also commonly known as the Popov-Belevitch-Hautus test or PBH test, can prove to be a powerful tool.
A special case of this result appeared first in 1963 in a paper by Elmer G. Gilbert, and was later expanded to the current PBH test with contributions by Vasile M. Popov in 1966, Vitold Belevitch in 1968, and Malo Hautus in 1969, who emphasized its applicability in proving results for linear time-invariant systems.
Statement
There exist multiple forms of the lemma:
Hautus Lemma for controllability
The Hautus lemma for controllability says that given a square n × n matrix A and an n × m matrix B, the following are equivalent:
The pair (A, B) is controllable
For all complex numbers λ it holds that rank([λI − A, B]) = n
For all complex numbers λ that are eigenvalues of A it holds that rank([λI − A, B]) = n (a numerical sketch of this rank test is given after the statements below)
Hautus Lemma for stabilizability
The Hautus lemma for stabilizability says that given a square n × n matrix A and an n × m matrix B, the following are equivalent:
The pair (A, B) is stabilizable
For all complex numbers λ that are eigenvalues of A and for which Re(λ) ≥ 0, it holds that rank([λI − A, B]) = n
Hautus Lemma for observability
The Hautus lemma for observability says that given a square n × n matrix A and an m × n matrix C, the following are equivalent:
The pair (A, C) is observable.
For all complex numbers λ it holds that rank([λI − A; C]) = n, where [λI − A; C] denotes the matrix formed by stacking λI − A on top of C
For all complex numbers λ that are eigenvalues of A it holds that rank([λI − A; C]) = n
Hautus Lemma for detectability
The Hautus lemma for detectability says that given a square n × n matrix A and an m × n matrix C, the following are equivalent:
The pair (A, C) is detectable
For all complex numbers λ that are eigenvalues of A and for which Re(λ) ≥ 0, it holds that rank([λI − A; C]) = n
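A minimal numerical sketch of the eigenvalue rank test for controllability (a NumPy sketch; the double-integrator matrices below are illustrative assumptions, not taken from the sources above):

```python
import numpy as np

def is_controllable_pbh(A, B, tol=1e-9):
    """PBH test: (A, B) is controllable iff rank([lam*I - A, B]) = n
    for every eigenvalue lam of A."""
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        M = np.hstack([lam * np.eye(n) - A, B])
        if np.linalg.matrix_rank(M, tol=tol) < n:
            return False               # lam is an uncontrollable mode
    return True

# Illustrative 2-state example (a double integrator) with a single input.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B_good = np.array([[0.0],
                   [1.0]])             # input drives the second state: controllable
B_bad = np.array([[1.0],
                  [0.0]])              # input never reaches the second state: uncontrollable

print(is_controllable_pbh(A, B_good))  # True
print(is_controllable_pbh(A, B_bad))   # False
```

The observability and detectability versions follow the same pattern, with the rank check applied to the stacked matrix [λI − A; C] instead.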
References
Notes
Control theory
Lemmas | Hautus lemma | Mathematics | 371 |
44,499,171 | https://en.wikipedia.org/wiki/Suillellus%20comptus | Suillellus comptus is a species of bolete fungus found in Europe. Originally described as a species of Boletus in 1993, it was transferred to Suillellus in 2014.
References
External links
comptus
Fungi described in 1993
Fungi of Europe
Fungus species | Suillellus comptus | Biology | 56 |