id: int64 (580 to 79M)
url: string (31 to 175 characters)
text: string (9 to 245k characters)
source: string (1 to 109 characters)
categories: string (160 classes)
token_count: int64 (3 to 51.8k)
77,962,594
https://en.wikipedia.org/wiki/Mucorrhea
Mucorrhea or mucorrhoea is discharge of mucus, especially when excessive. The term may refer to mucous rectal discharge or to the emission of a large amount of mucus through the feces. The term mucorrhea or cervical mucorrhea is also used in gynecology and refers to increased cervical discharge at ovulation. Causes Simple traces of mucus are not an expression of any pathology, since they are normal physiology, while an excessive quantity of the substance could simply arise from excessive stimulation of the anus. Excessive emission of mucus without defecation can indicate anal lesions, even of tumor origin, or pathologies such as colitis, ulcerative colitis, intestinal dysbiosis, gonorrhea, food intolerances, chronic constipation, etc. If, however, it is present at the time of defecation, it could be a symptom of internal lesions. In this case, in addition to tumor masses, there may be chronic inflammatory diseases, or the mucus could be due to constipation, hemorrhoids, anal fissures, mucorectal prolapses, or rectocele. A study of 31 patients shows that the majority of patients with solitary rectal ulcer syndrome present with mucorrhoea. A case study of a 36-year-old woman with solitary rectal ulcer syndrome also reports that the patient presented with mucorrhea. Irritable bowel syndrome (IBS) patients may also present with mucorrhoea. See also Anal canal Ulcerative colitis Diarrhea Spinnbarkeit References Bibliography Medical signs Defecation
Mucorrhea
Biology
356
20,612,434
https://en.wikipedia.org/wiki/Adafenoxate
Adafenoxate is a compound related to centrophenoxine that has been found to act as a nootropic in rats. Synthesis Adafenoxate can be prepared starting with 4-chlorophenoxyacetic acid (pCPA), which is converted to its acid chloride to give 4-chlorophenoxyacetyl chloride (1). Esterification with 2-(1-adamantylamino)ethanol (2) gives adafenoxate (3). Alternatively, the final step can be accomplished via Fischer–Speier esterification using a Dean–Stark trap. References Adamantanes 4-Chlorophenyl compounds Cholinergics Amines Nootropics Phenoxyacetic acids
Adafenoxate
Chemistry
161
4,550,091
https://en.wikipedia.org/wiki/Arp%20107
Arp 107 is a pair of interacting galaxies (designated separately as UGC 5984 and MCG+05-26-025) located about 450 million light-years away in the constellation Leo Minor. The galaxies are in the process of colliding and merging. Characteristics Arp 107 is made of two separate galaxies. The larger galaxy to the left is PGC 32620, and the smaller galaxy to the right is PGC 32628 (as depicted in the Hubble image). The two galaxies differ in type: one is a spiral galaxy while the other is an elliptical galaxy, and they are connected by a bridge and tidal tail made of dust and gas. The nucleus of PGC 32620 is active, and it is classified as a type 2 Seyfert galaxy. Additionally, the galaxy is depicted as having a ring-like appearance. The most likely scenario for this appearance of PGC 32620 is that the elliptical galaxy penetrated through its disk, causing it to become semi-annular with a single large spiral arm protruding out. This spiral arm, in turn, branches out into a tidal arm, where star-forming regions with both old and young stellar populations are present. See also Antennae Galaxies Arp 299 References External links SIMBAD: VV 233 -- Seyfert 2 Galaxy Interacting galaxies Leo Minor 107 05984
Arp 107
Astronomy
278
71,702,453
https://en.wikipedia.org/wiki/C13H16FNO
The molecular formula C13H16FNO (molar mass: 221.28 g/mol) may refer to: 2-Fluorodeschloroketamine 3-Fluorodeschloroketamine
C13H16FNO
Chemistry
62
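As a quick check on the molar mass quoted in the C13H16FNO entry above, the value can be recomputed from standard atomic weights. The following Python snippet is a minimal illustrative sketch and is not part of the dataset; the atomic weights are assumed (rounded standard values), and the element counts are read directly off the formula C13H16FNO.

# Recompute the molar mass of C13H16FNO from standard atomic weights.
# Minimal sketch; the atomic weights below are rounded standard values (assumed).
atomic_weight = {"C": 12.011, "H": 1.008, "F": 18.998403, "N": 14.007, "O": 15.999}

# Element counts read off the formula C13H16FNO.
composition = {"C": 13, "H": 16, "F": 1, "N": 1, "O": 1}

molar_mass = sum(atomic_weight[el] * count for el, count in composition.items())
print(f"{molar_mass:.2f} g/mol")  # prints 221.28 g/mol, matching the value quoted in the entry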
426,865
https://en.wikipedia.org/wiki/United%20Airlines%20Flight%20232
United Airlines Flight 232 was a regularly scheduled United Airlines flight from Stapleton International Airport in Denver to O'Hare International Airport in Chicago, continuing to Philadelphia International Airport. On July 19, 1989, the DC-10 (registered as N1819U) serving the flight crash-landed at Sioux Gateway Airport in Sioux City, Iowa, after suffering a catastrophic failure of its tail-mounted engine due to an unnoticed manufacturing defect in the engine's fan disk, which resulted in the loss of all flight controls. Of the 296 passengers and crew on board, 112 died during the accident, while 184 people survived. Thirteen of the passengers were uninjured. It was the deadliest single-aircraft accident in the history of United Airlines. Despite the fatalities, the accident is considered a good example of successful crew resource management. A majority of those aboard survived; experienced test pilots in simulators were unable to reproduce a survivable landing. It has been termed "The Impossible Landing" as it is considered one of the most impressive landings ever performed in the history of aviation. Aircraft The airplane, a McDonnell Douglas DC-10-10 (registration ), was delivered in 1971 and owned by United Airlines since then. Before departure on the flight from Denver on July 19, 1989, the airplane had been operated for a total of 43,401 hours and 16,997 cycles (takeoff-landing pairs). The airplane was powered by three CF6-6D high bypass-ratio turbofan engines produced by General Electric Aircraft Engines (GEAE). The aircraft's No. 2 (tail-mounted) engine had accumulated 42,436 hours and 16,899 cycles of operating time immediately prior to the accident flight. The DC-10 used three independent hydraulic systems, each powered by one of the aircraft's three engines, to power movement of the aircraft's flight controls. In the event of loss of engine power or primary pump failure, a ram air turbine could provide emergency electrical power for electrically powered auxiliary pumps. These systems were designed to be redundant, such that if two hydraulic systems were inoperable, the one remaining hydraulic system would still permit the full operation and control of the airplane. However, at least one hydraulic system must have fluid present and the ability to hold fluid pressure to control the aircraft. Like other widebody transport aircraft of the time, the DC-10 was not designed to revert to unassisted manual control in the event of total hydraulic failure. The DC-10's hydraulic system was designed and demonstrated to the Federal Aviation Administration (FAA) as compliant with regulations that "no single [engine] failure or malfunction or probable combination of failures will jeopardize the safe operation of the airplane..." Crew Flight 232's captain, Alfred C. "Al" Haynes, 57, was hired by United Airlines in 1956. He was highly experienced and had 29,967 hours of total flight time with United, of which 7,190 were in the DC-10. Haynes' co-pilot was First Officer William R. "Bill" Records, 48. He estimated that he had approximately 20,000 hours of total flight time. He was hired first by National Airlines in 1969. He worked subsequently for Pan American World Airways. He was hired by United in 1985, and had accrued 665 hours as a DC-10 first officer while at United. Flight Engineer Dudley J. Dvorak, 51, was hired by United Airlines in 1986. He estimated that he had about 15,000 hours of total flying time. 
While working for United, he had accumulated 1,903 hours as a flight engineer in the Boeing 727 and 33 hours as a flight engineer in the DC-10. Dennis E. Fitch, nicknamed "Denny", 46, a training check airman aboard Flight 232 as a passenger, was hired by United in 1968. He estimated that, prior to working for United, he had accrued at least 1,400 hours of flight time with the Air National Guard, with a total flight time around 23,000 hours. His total DC-10 time with United was 2,987 hours, including 1,943 hours accrued as a flight engineer, 965 hours as a first officer, and 79 hours as a captain. Eight flight attendants were also aboard the flight. Accident Takeoff and engine failure Flight 232 lifted off from Stapleton International Airport in Denver at 14:09 Central Daylight Time, en route to O'Hare International Airport in Chicago with continuing service to Philadelphia. At 15:16, while the airplane was making a slight right turn at its cruising altitude of , the fan disk of its tail-mounted General Electric CF6-6 engine disintegrated explosively. The uncontained failure resulted in the engine's fan disk departing the aircraft, tearing out components including parts of the No. 2 hydraulic system and supply hoses in the process; these were later found near Alta, Iowa. Engine debris penetrated the aircraft's tail section in numerous places, including the horizontal stabilizer, severing the No. 1 and No. 3 hydraulic system lines where they passed through the horizontal stabilizer. The pilots felt a jolt, and the autopilot disengaged. As First Officer Records took hold of his control column, Captain Haynes concentrated on the tail engine, the instruments for which indicated it was malfunctioning; he found its throttle and fuel supply controls jammed. At Dvorak's suggestion, a valve for fuel to the tail engine was shut off. This part of the emergency took 14 seconds. Attempts to control the plane Meanwhile, Records found that the airplane did not respond to his control column. Even with the control column turned all the way to the left, commanding maximum left aileron, and pulled all the way back, commanding maximum up elevator (inputs that would never be used together in normal flight), the aircraft was banking to the right with the nose dropping. Haynes attempted to level the aircraft with his own control column, then both Haynes and Records tried using their control columns together, but the aircraft still did not respond. Afraid the aircraft would roll into a completely inverted position (an unrecoverable situation), the crew reduced the left wing-mounted engine to idle and applied maximum power to the right engine. This caused the airplane to level slowly. While Haynes and Records performed the engine shutdown checklist for the failed engine, Dvorak observed that the gauges for fluid pressure and quantity in all three hydraulic systems were indicating zero. The loss of all hydraulic fluid meant that control surfaces were inoperative. The flight crew deployed the DC-10's air-driven generator in an attempt to restore hydraulic power by powering the auxiliary hydraulic pumps, but this was unsuccessful. The crew contacted United Airlines maintenance personnel via radio, but were told that the possibility of a total loss of hydraulics in a DC-10 was considered so remote that no procedure had been established for such an event. The airplane was tending to pull right and oscillated slowly vertically in a phugoid cycle, characteristic of planes in which control surface command is lost. 
With each iteration of the cycle, the aircraft lost about of altitude. Fitch, an experienced United Airlines captain and DC-10 flight instructor, was among the passengers and volunteered to assist. The message was relayed by senior flight attendant Jan Brown Lohr to the flight crew, who invited Fitch into the cockpit; he began assisting at about 15:29. Haynes asked Fitch to observe the ailerons through the passenger cabin windows to see if control inputs were having any effect. Fitch reported that the ailerons were not moving at all. Nonetheless, the crew continued to manipulate their control columns for the remainder of the flight, hoping for at least some effect. Haynes then asked Fitch to take control of the throttles so that Haynes could concentrate on his control column. With one throttle in each hand, Fitch was able to mitigate the phugoid cycle and make rough steering adjustments. Air traffic control (ATC) was contacted and an emergency landing at nearby Sioux Gateway Airport was organized. The conversation was recorded by the airplane's cockpit voice recorder (CVR), including: Sioux City Approach: "United Two Thirty-Two Heavy, the wind's currently three six zero at one one; three sixty at eleven. You're cleared to land on any runway." Haynes: "[laughter] Roger. [laughter] You want to be particular and make it a runway, huh?" ATC asked the crew to make a left turn to keep them clear of the city: Haynes: "Whatever you do, keep us away from the city." Haynes later noted, "We were too busy [to be scared]. You must maintain your composure in the airplane, or you will die. You learn that from your first day flying." Crash landing As the crew began to prepare for arrival at Sioux Gateway Airport, they questioned whether they should deploy the landing gear or belly-land the aircraft with the gear retracted. They decided that having the landing gear down would provide some shock absorption on impact. The complete hydraulic failure left the landing gear lowering mechanism inoperative. Two options were available to the flight crew. The DC-10 is designed so that if hydraulic pressure to the landing gear is lost, the gear will fall down slightly and rest on the landing gear doors. Placing the regular landing gear handle in the down position will unlock the doors mechanically, and the doors and landing gear will then fall down into place and lock due to gravity. An alternative system is also available using a lever in the cockpit floor to cause the landing gear to fall into position. This lever has the added benefit of unlocking the outboard ailerons, which are not used in high-speed flight and are locked in a neutral position. The crew hoped that there might be some trapped hydraulic fluid in the outboard ailerons and that they might regain some use of flight controls by unlocking them. They elected to extend the gear with the alternative system. Although the gear deployed successfully, no change of the controllability of the aircraft resulted. Landing was originally planned for Runway 31. Difficulties in controlling the aircraft made alignment with the runway almost impossible. While dumping some of the excess fuel, the airplane executed a series of mostly right-hand turns (turning the airplane in this direction was easier) with the intention of aligning with Runway 31. When they finished they were instead aligned with the closed Runway 22, and had little ability to maneuver. Runway 22 had been closed permanently a year earlier. 
Fire trucks had been placed on Runway 22, anticipating a landing on nearby Runway 31, so all the vehicles were quickly moved out of the way before the airplane touched down. ATC also advised that a four-lane Interstate highway ran north and south just east of the airport, which they could land on if they did not think they could make the runway. Captain Haynes replied that they were passing over the interstate at that time and they would try for the runway instead. Fitch continued to control the aircraft's descent by adjusting engine thrust. With the loss of all hydraulics, the flaps could not be extended, and since flaps control both the minimum required forward speed and sink rate, the crew was unable to control either the airspeed or the sink rate. At final approach, the aircraft's forward speed was and it had a sink rate of , while a safe landing would require and . Moments before landing, the roll to the right suddenly worsened significantly and the aircraft began to pitch forward into a dive; Fitch realized this and pushed both throttles to full power in a desperate, last-ditch attempt to level the plane. It was now 16:00. The CVR recorded these final moments: Records: "Close 'em off." Haynes: "Left turn, close 'em off." Records: "Pull 'em all off." Fitch: "Nah, I can't pull 'em off or we'll lose it, that's what's turning ya." Records: "Okay." Fitch: "Back, Al!" Haynes: "Left, left throttle, left, left, left, left, left, left, left, left, left, left, left!" Ground Proximity Warning System: "Whoop whoop pull up. Whoop whoop pull up. Whoop whoop pull up." Haynes: "Everybody stay in brace!" GPWS: "Whoop whoop pull up." Haynes: "God!" [Sound of impact] End of recording. The engines were not able to respond to Fitch's controls in time to stop the roll, and the airplane struck the ground with its right wing, spilling fuel which ignited immediately. The tail section broke off from the force of the impact, and the rest of the aircraft bounced several times, shedding the landing gear and engine nacelles and breaking the fuselage into several main pieces. At final impact, the right wing was torn off and the main part of the aircraft skidded sideways, rolled over onto its back, and slid to a stop upside-down in a corn field to the right of Runway 22. Witnesses reported that the aircraft "cartwheeled" end-over-end, but the investigation did not confirm this. The reports were due to misinterpretation of the video of the crash that showed the flaming right wing tumbling end-over-end and the intact left wing, still attached to the fuselage, rolling up and over as the fuselage flipped over. Injuries and deaths Of the 296 people aboard, 112 died in the accident. Most were killed by injuries sustained during the multiple impacts, but 35 people in the middle fuselage section directly above the fuel tanks died from smoke inhalation in the post-crash fire. Of those, 24 had no traumatic blunt-force injuries. The majority of the 184 survivors were seated behind first class and ahead of the wings. Many passengers were able to walk out through the ruptures to the structure. Of all of the passengers: 35 died because of smoke inhalation (none were in first class). 76 died for reasons other than smoke inhalation (17 in first class). One died a month after the crash. 47 were injured seriously (eight in first class). 125 had minor injuries (one in first class). 13 had no injuries (none in first class). The passengers who died for reasons other than smoke inhalation were seated in rows 1–4, 24–25, and 28–38. 
Passengers who died because of smoke inhalation were seated in rows 14, 16, and 22–30. The person assigned to seat 20H moved to an unknown seat and died of smoke inhalation. One crash survivor died one month after the accident; he was classified according to NTSB regulations as a survivor with serious injuries. Fifty-two children, including four "lap children" without their own seats, were aboard the flight because of a United Airlines promotion for "Children's Day". Eleven children, including one lap child, died. Many of the children were traveling alone. Rescuers did not identify the debris that was the remains of the cockpit, with the four crew members alive inside, until 35 minutes after the crash. All four recovered from their injuries and eventually returned to flight duty. Investigation The rear engine's fan disk and blade assembly (about across) could not be located at the accident scene despite an extensive search. The engine's manufacturer, General Electric, offered rewards of $50,000 for the disk and $1,000 for each fan blade. Three months after the crash, a farmer discovered most of the fan disk, with several blades still attached, in her cornfield, thereby qualifying her for a reward, as a General Electric lawyer confirmed. The rest of the fan disk and most of the additional blades were later found nearby. The NTSB determined that the probable cause of this accident was the inadequate consideration given to human factors, and limitations of the inspection and quality control procedures used by United Airlines' engine overhaul facility. These resulted in the failure to detect a fatigue crack originating from a previously undetected metallurgical defect located in a critical area of the titanium-alloy stage-1 fan disk that was manufactured by General Electric Aircraft Engines. The uncontained manner in which the engine failed resulted in high-speed metal fragments being hurled from the engine; these fragments penetrated the hydraulic lines of all three independent hydraulic systems aboard the aircraft, which rapidly lost their hydraulic fluid. The subsequent catastrophic disintegration of the disk resulted in the liberation of debris in a pattern of distribution and with energy levels that exceeded the level of protection provided by design features of the hydraulic systems that operate the DC-10's flight controls; the flight crew lost its ability to operate nearly all of them. Despite these losses, the crew was able to attain and then maintain limited control by using the throttles to adjust thrust from the remaining wing-mounted engines. By using each engine independently, the crew made rough steering adjustments, and by using the engines together they were able to roughly adjust altitude. The crew guided the crippled jet to Sioux Gateway Airport and aligned it for landing on one of the runways. Without the use of flaps and slats, they were unable to slow for landing, and were forced to attempt landing at a very high ground speed. The aircraft also landed at an extremely high rate of descent because of the inability to flare (reduce the rate of descent before touchdown by increasing pitch). As a result, upon touchdown the aircraft broke apart, rolled over, and caught fire. The largest section came to rest in a cornfield next to the runway. Despite the ferocity of the accident, 184 (62.2%) passengers and crew survived, owing to a variety of factors including the relatively controlled manner of the crash and the early notification of emergency services. 
Failed component The investigation, while praising the actions of the flight crew for saving lives, later identified the cause of the accident as a failure of United Airlines' maintenance processes and personnel to detect an existing fatigue crack. Post-crash analysis of the crack surfaces showed the presence of a penetrating fluorescent dye used to detect cracks during maintenance. The presence of the dye indicated that the crack was present and should have been detected at a prior inspection. The detection failure arose from poor attention to human factors in United Airlines' specification of maintenance processes. Investigators discovered an impurity and fatigue crack in the disk. Titanium reacts with air when melted, which creates impurities that can initiate fatigue cracks like that found in the crash disk. To prevent this, the ingot that would become the fan disk was formed using a "double vacuum" process: the raw materials were melted together in a vacuum, allowed to cool and solidify, then melted in a vacuum once more. After the double vacuum process, the ingot was shaped into a billet, a sausage-like form about 16 inches in diameter, and tested using ultrasound to look for defects. Defects were located and the ingot was processed further to remove them, but some nitrogen contamination remained. GE later added a third vacuum-melting stage as a result of its investigation into failing rotating titanium engine parts. The contamination caused what is known as a hard alpha inclusion, where a contaminant particle in a metal alloy causes the metal around it to become brittle. The brittle titanium around the impurity then cracked during forging and fell out during final machining, leaving a cavity with microscopic cracks at the edges. For the next 18 years, the crack grew slightly each time the engine was powered up and brought to operating temperature. Eventually, the crack broke open, causing the disk to fail. The origins of the crash disk are uncertain because of significant irregularities and gaps, noted in the NTSB report, in the manufacturing records of GE Aircraft Engines (GEAE) and its suppliers. Records found after the accident indicated that two rough-machined forgings having the serial number of the crash disk had been routed through GEAE manufacturing. Records indicated that Alcoa supplied GE with TIMET titanium forgings for one disk with the serial number of the crash disk. Some records show that this disk "was rejected for an unsatisfactory ultrasonic indication", that an outside laboratory performed an ultrasound inspection of this disk, that this disk was subsequently returned to GE, and that this disk should have been scrapped. The FAA report stated, "There is no record of warranty claim by GEAE for defective material and no record of any credit for GEAE processed by Alcoa or TIMET". GE records of the second disk having the serial number of the crash disk indicate that it was made with an RMI Titanium Company titanium billet supplied by Alcoa. Research of GE's records showed no other titanium parts were manufactured at GE from this RMI titanium billet during the period of 1969 to 1990. GE records indicate that final finishing and inspection of the crash disk were completed on December 11, 1971. Alcoa records indicate that this RMI titanium billet was first cut in 1972 and that all forgings made from this material were for airframe parts. 
If the Alcoa records were accurate, the RMI titanium could not have been used to manufacture the crash disk, indicating that the initially rejected TIMET disk with "an unsatisfactory ultrasonic indication" was the crash disk. CF6 engines like the one containing the crash disk were used to power many civilian and military aircraft at the time of the crash. Due to concerns that the accident could recur, a large number of in-service disks were examined by ultrasound for indications of defects. The fan disks on at least two other engines were found to have defects like that of the crash disk. Prioritizing and streamlining inspections of the many suspect engines would have been aided by determining the titanium source of the crash disk. Chemical analyses of the crash disk intended to determine its source were inconclusive. The NTSB report stated that if examined disks were not from the same source, "the records on a large number of GEAE disks are suspect. It also means that any AD (Airworthiness Directive) action that is based on the serial number of a disk could fail to have its intended effect because suspect disks could remain in service." Influence on the industry The NTSB investigation, after reconstructions of the accident in flight simulators, deemed that training for such an event involved too many factors to be practical. While some degree of control was possible, no precision could be achieved, and a landing under these conditions was stated to be "a highly random event". Expert United and McDonnell Douglas pilots were unable to reproduce a survivable landing; according to a United pilot who flew with Fitch, "Most of the simulations never even made it close to the ground". The NTSB stated that "under the circumstances the UAL (United Airlines) flight crew performance was highly commendable and greatly exceeded reasonable expectations." At the time of the crash, McDonnell Douglas had ended production of DC-10s, with the last of these being delivered to Nigeria Airways during the summer of 1989. The last passenger version of the DC-10 flew in 2014, although freighter versions continued to operate until late 2022. Because this type of aircraft control (with loss of control surfaces) is difficult for humans to achieve, some researchers have attempted to integrate this control ability into the computers of fly-by-wire aircraft. Early attempts to add this ability to real airplanes were not very successful; the software was based on experiments performed in flight simulators where jet engines are usually modeled as "perfect" devices with exactly the same thrust on each engine, a linear relationship between throttle setting and thrust, and instantaneous response to input. Later, computer models were updated to account for these factors, and aircraft such as the F-15 STOL/MTD have been flown successfully with this software installed. Titanium processing The manufacturing process for titanium was changed to eliminate the type of gaseous anomaly that served as the starting point for the crack. Newer batches of titanium use much higher melting temperatures and a "triple vacuum" process in an attempt to eliminate such impurities (triple melt VAR). Aircraft designs Newer designs such as the McDonnell Douglas MD-11 have incorporated hydraulic fuses to isolate a punctured section and prevent a total loss of hydraulic fluid. 
After the United 232 accident, such fuses were installed in the number three hydraulic system in the area below the number two engine on all DC-10 aircraft to ensure sufficient control capability remained if all three hydraulic system lines should be damaged in the tail area. Although elevator and rudder control would be lost, the aircrew would still be able to control the aircraft's pitch (up and down) with stabilizer trim, and would be able to control roll (left and right) with some of the aircraft's ailerons and spoilers. Although not ideal, the system provides greater control than was available to the crew of United 232. Losing all three hydraulic systems remained possible if serious damage occurred elsewhere, as nearly happened to a cargo DC-10-40F in April 2002 during takeoff in San Salvador when a main-gear tire exploded after running over a lost thrust reverser cascade. The extensive damage in the left wing caused total loss of pressure from the number-one and the number-two hydraulic systems. The number-three system was dented but not penetrated. The NTSB then recommended that the FAA "Require adequate protection of DC-10 hydraulic system components in the wing area from tire fragments" by better shielding or adding fuses in this area. Restraints for children Of the four children deemed too young to require seats of their own ("lap children"), one died from smoke inhalation. The NTSB added a safety recommendation to the FAA on its "List of Most Wanted Safety Improvements" in May 1999 suggesting a requirement for children younger than two years old to be restrained safely; the recommendation was removed in November 2006. The accident prompted a campaign, led by United Flight 232's senior flight attendant, Jan Brown Lohr, for all children to have seats on aircraft. The argument against requiring seats on aircraft for children younger than age two is that the higher cost to a family of having to buy a seat for the child would motivate more families to drive instead of fly, incurring the much greater risk of driving (see Epidemiology of motor vehicle collisions). The FAA estimates that a regulation requiring all children to have a seat would result, for every one child's life saved on an aircraft, in 60 people dying in highway accidents. Though it is no longer on the "most wanted" list, providing aircraft restraints for children younger than age two is still recommended practice by the NTSB and FAA, though it is not required by the FAA as of May 2016. The NTSB asked the International Civil Aviation Organization to make this a requirement in September 2013. Crew resource management The accident has since become an example of successful crew resource management (CRM). For much of aviation's history, the captain was considered the final authority, and crews were expected to respect the captain's expertise without question. This began to change during the 1970s, especially after the Tenerife airport disaster in 1977 and the crash of United Airlines Flight 173 outside Portland, Oregon in 1978. CRM, while still considering the captain as the final authority, instructs crew members to speak up when they detect a problem, and instructs captains to listen to crew concerns. United Airlines instituted a CRM class during the early 1980s. The NTSB later credited this training as valuable for the success of United 232's crew in handling their emergency. The FAA made CRM mandatory after the accident. Factors contributing to survival rate Of the 296 people aboard, 112 were killed and 184 survived. 
Haynes later identified three factors relating to the time of day that increased the survival rate: The accident occurred during daylight hours in good weather; The accident occurred as a shift change was occurring at both a regional trauma center and a regional burn center in Sioux City, allowing for more medical personnel to treat the injured; The accident occurred when the Iowa Air National Guard was on duty at Sioux Gateway Airport, allowing for 285 trained personnel to assist with triage and evacuation of the injured. "Had any of those things not been there," Haynes said, "I'm sure the fatality rate would have been a lot higher." Haynes also credited CRM as being one of the factors that saved his own life, and many others. When Haynes died in August 2019, United Airlines issued a statement thanking him for "his exceptional efforts aboard Flight UA232". As with the Eastern Air Lines Flight 401 crash of a similarly sized Lockheed L-1011 in 1972, the relatively shallow angle of descent likely played a large part in the relatively high survival rate. The National Transportation Safety Board concluded that under the circumstances, "a safe landing was virtually impossible". Notable people onboard Victims John Kenneth Stille – chemist Jay Ramsdell – commissioner of the Continental Basketball Association Survivors Spencer Bailey – writer, editor, and journalist Al Haynes – aircraft captain on Flight 232 Helen Young Hayes – investment fund manager Michael R. Matz – Olympian and racehorse trainer Jerry Schemmel – former radio broadcaster of the Denver Nuggets and Colorado Rockies Pete Wernick – banjo player and member of American bluegrass ensemble Hot Rize, who managed to resume performing two days after the crash Depictions The accident was the subject of an 11th-season episode of the documentary series Mayday (also known as Air Crash Investigation), titled "Impossible Landing". The episode featured interviews with survivors and showed actual footage of the crash. The accident was the subject of the 1992 television movie A Thousand Heroes, also known as Crash Landing: The Rescue of Flight 232. The episode "Engineering Disasters" (season 6, episode 18) of Modern Marvels featured the crash. The accident was featured in an episode of Seconds from Disaster (S2E7 9/13/05 "Crash Landing in/at Sioux City") on the National Geographic Channel and MSNBC Investigates on the MSNBC news channel. The History Channel distributed a documentary named Shockwave; a portion of Episode 7 (originally aired January 25, 2008) detailed the events of the crash. The episode "A Wing and a Prayer" of Survival in the Sky (UK title: Black Box) featured the accident. The Biography Channel series I Survived... explained in detail the events of the crash through passenger Jerry Schemmel, flight attendant Jan Brown Lohr, and pilot Alfred Haynes. The episode "Crisis in the Cockpit" (Season 2, Episode 1) of Why Planes Crash on The Weather Channel featured the accident. The 1999 play Charlie Victor Romeo (made into a film in 2013) dramatically reenacted the incident using transcripts from the cockpit voice recorder (CVR). The 1991 novel Cold Fire, by Dean Koontz, includes a fictional crash based on Flight 232. The 1993 film Fearless portrayed a fictional plane crash based in part on the crash of Flight 232. In 2016, The House Theatre of Chicago produced United Flight 232. The play was a new work directed and adapted by Vanessa Stalling and based on the book Flight 232 by Laurence Gonzales. 
Surviving crew members attended the play in April 2016, and the production was subsequently nominated for six Equity Jeff Awards, winning two. In 2021, the accident was covered in episode 5 of the UK TV series Plane Crash Recreated. Survivor accounts Dennis Fitch described his experiences in Errol Morris's television show First Person, episode "Leaving the Earth". Martha Conant told her story of survival to her daughter-in-law, Brittany Conant, on "StoryCorps" during NPR's Morning Edition of January 11, 2008. Flight 232: A Story of Disaster and Survival by Laurence Gonzales (2014, W. W. Norton & Company). Miracle in the Cornfield – an inside survivor narrative by Joseph Trombello (1999, PrintSource Plus, Appleton, WI). When the World Breaks Your Heart: Spiritual Ways of Living With Tragedy by Gregory S. Clapper, a chaplain in the National Guard who relates the stories of some of the survivors he aided in the aftermath of the crash (1999; 2016, Wipf and Stock). Chosen to Live: The Inspiring Story of Flight 232 Survivor Jerry Schemmel by Jerry Schemmel with Kevin Simpson (Victory Pub. Co., 1996). Spencer Bailey discussed his experiences on the Time Sensitive podcast, in a 2019 interview with Andrew Zuckerman. Flight 232 Memorial The Flight 232 Memorial was built along the Missouri River in Sioux City, Iowa, to commemorate the heroism of the flight crew and the rescue efforts the Sioux City community undertook after the crash. It features a statue of Iowa National Guard Lt. Col. Dennis Nielsen from a news photo that was taken that day while he was carrying a three-year-old to safety. Similar accidents The odds against all three hydraulic systems failing simultaneously had previously been calculated to be as high as a billion to one. Yet such calculations assume that multiple failures must have independent causes, an unrealistic assumption, and similar flight control failures have indeed occurred: In 1971, a Boeing 747, operating as Pan Am 845, struck approach light structures for the reciprocal runway as it lifted off the runway at San Francisco Airport. Major damage to the belly and landing gear resulted, which caused the loss of hydraulic fluid from three of its four flight control systems. The fluid which remained in the fourth system gave the captain very limited control of some of the spoilers, ailerons, and one inboard elevator. That was sufficient to circle the plane while fuel was dumped and then to make a hard landing. There were no fatalities, but there were some injuries. In 1981, a Lockheed L-1011, operating as Eastern Air Lines Flight 935, suffered a similar failure of its tail-mounted number two engine. The shrapnel from that engine inflicted damage on all four of its hydraulic systems, which were also close together in the tail structure. Fluid was lost in three of the four systems. The fourth hydraulic system was struck by shrapnel, but not punctured. The hydraulic pressure remaining in that fourth system enabled the captain to land the plane safely with some limited use of the outboard spoilers, the inboard ailerons, and the horizontal stabilizer, plus differential engine power of the remaining two engines. There were no injuries. On August 12, 1985, Japan Air Lines Flight 123, a Boeing 747-146SR, suffered a rupture of the pressure bulkhead in its tail section, caused by an undetected faulty repair to the rear bulkhead after a tailstrike seven years earlier. 
Pressurized air subsequently rushed out of the bulkhead and blew off the plane's vertical stabilizer, also severing all four of its hydraulic control systems. The pilots were able to keep the plane airborne for 32 minutes using differential engine power, but without any hydraulics or the stabilizing force of the vertical stabilizer, the plane eventually crashed in mountainous terrain. There were only 4 survivors among the 524 on board. This accident is the deadliest single-aircraft accident in history. In 1994, RA85656, a Tupolev Tu-154 operating as Baikal Airlines Flight 130, crashed near Irkutsk shortly after departing from Irkutsk Airport, Russia. Damage to the starter caused a fire in engine number two (located in the rear fuselage). High temperatures during the fire destroyed the tanks and pipes of all three hydraulic systems. The crew lost control of the aircraft. The out-of-control plane, at a speed of 275 knots, hit the ground at a dairy farm and burned. All 124 passengers and crew, as well as a dairyman on the ground, died. In 2003, OO-DLL, a DHL Airbus A300, was struck by a surface-to-air missile shortly after departing from Baghdad International Airport, Iraq. The missile struck the port-side wing, rupturing a fuel tank and causing the loss of all three hydraulic systems. With the flight controls disabled, the crew used differential thrust to execute a safe landing at Baghdad. The disintegration of a turbine disc, leading to loss of control, was a direct cause of two major aircraft disasters in Poland: On March 14, 1980, LOT Polish Airlines Flight 007, an Ilyushin Il-62, attempted a go-around when the crew experienced troubles with a gear indicator. When thrust was applied, the low-pressure turbine disc in engine number 2 disintegrated because of material fatigue; parts of the disc damaged engines number 1 and 3 and severed control pushers for both horizontal and vertical stabilizers. After 26 seconds of uncontrolled descent, the aircraft crashed, killing all 87 people on board. On May 9, 1987, improperly assembled bearings in Il-62M engine number 2 on LOT Polish Airlines Flight 5055 overheated and exploded during cruise over the village of Lipinki, causing the shaft to break in two; this caused the low-pressure turbine disc to spin to enormous speeds and disintegrate, damaging engine number 1 and cutting the control pushers. The crew managed to return to Warsaw, using nothing but trim tabs to control the crippled aircraft, but on the final approach, the trim controlling links burned and the crew completely lost control over the aircraft. Soon after, it crashed on the outskirts of Warsaw; all 183 on board died. Had the plane stayed airborne for 40 seconds more, it would have been able to reach the runway. In contrast to deploying landing gear: On August 15, 2019, Ural Airlines Flight 178, an Airbus A321, encountered a flock of seagulls resulting in a bird strike that caused fires in both CFM56-5 engines just after takeoff from Zhukovsky International Airport, Moscow, Russia. Due to a failure to follow standard operating procedures, the plane was forced to land in a corn field with the landing gear raised. The pilots claimed they intentionally landed with the landing gear up, though the CVR recording revealed no discussion about this. Everyone on board the flight survived. 74 people were injured, none severely. 
See also List of aircraft accidents and incidents resulting in at least 50 fatalities Miracle on the Hudson Notes References External links NTSB Accident report of United Airlines Flight 232 Alternate link at Embry-Riddle Aeronautical University Cockpit voice-recorder transcript (pdf) (NB contains error) A talk given by the pilot describing the crash at NASA Dryden in 1991 Siouxland Chamber Of Commerce: Remembering Flight 232 (Picture of memorial depicting Lt. Colonel Dennis Nielsen carrying Spencer Bailey) "17th Anniversary Tribute of Flight 232" News report with video of crash landing of Flight 232, ABC News, July 19, 1989 Pre-crash photos from Airliners.net Martha Conant tells her story of surviving the crash. – 1992 TV movie Errol Morris' First Person (one hour documentary video, accident recounting by Denny Fitch) Aviation accidents and incidents in the United States in 1989 1989 in Iowa Airliner accidents and incidents in Iowa Airliner accidents and incidents caused by mechanical failure Airliner accidents and incidents caused by engine failure Sioux City, Iowa 232 Accidents and incidents involving the McDonnell Douglas DC-10 July 1989 events in the United States Airliner accidents and incidents involving uncontained engine failure Aviation accidents and incidents caused by loss of control Aviation accidents and incidents in 1989 Filmed deaths during aviation accidents and incidents
United Airlines Flight 232
Materials_science
8,095
2,935,783
https://en.wikipedia.org/wiki/John%20N.%20Warfield
John Nelson Warfield (November 21, 1925 – November 17, 2009) was an American systems scientist who was professor and director of the Institute for Advanced Study in the Integrative Sciences (IASIS) at George Mason University, and president of the Systems, Man, and Cybernetics Society. Biography Warfield was born November 21, 1925, grew up in Missouri, and studied at the University of Missouri in Columbia. Originally he majored in chemistry and minored in mathematics, but his studies were interrupted by World War II. After basic training in the U.S. Army Infantry, the Army put him in a specialized training program to study electrical engineering, which he found very interesting, especially electronics and communications. After the war he completed his original undergraduate program and continued on to get advanced degrees in electrical engineering. He received the Bachelor of Arts in 1948, Bachelor of Science in Electrical Engineering in 1948, and Master of Science in Electrical Engineering in 1949 from the University of Missouri, Columbia, Missouri. He received the Doctor of Philosophy degree from Purdue University, West Lafayette, Indiana, in 1952. His major was electrical engineering with a specialty in communications engineering. He gained about 10 years of industrial experience with the firms Wilcox Electric Company, Battelle Memorial Institute, and Burroughs Corporation. His industrial experience included theoretical and experimental research, electronic development, and reliability testing of navigational equipment for jet aircraft. His longest service in this group was with the Battelle Memorial Institute from 1968 to 1974, where he held the title Senior Research Leader. At Battelle, and later at Virginia and George Mason universities, he developed the sociotechnology of interpretive structural modeling (ISM) and developed interactive management in collaboration with Alexander Christakis from 1979 until 1989. He was elected President of the Systems, Man, and Cybernetics Society of the Institute of Electrical and Electronics Engineers (IEEE), and of the International Society for the Systems Sciences (formerly called the Society for General Systems Research). He served as Editor of the IEEE Transactions on Systems, Man, and Cybernetics from 1968 to 1971, and as founding Editor-in-Chief of the Pergamon journal Systems Research during the period 1981–1990. Warfield was a member of the Academic Committee of the International Encyclopedia of Systems and Cybernetics. He was a Life Fellow of the Institute of Electrical and Electronics Engineers and held that organization's Centennial Medal. He was a member of the Association for Integrative Studies and served on the Board of Governors of the International Society for Panetics. Warfield was awarded the Joseph G. Wohl Award for Career Achievement at the 2006 annual meeting of the IEEE Systems, Man, and Cybernetics Society. This is the highest award given by the society, and is not awarded every year. He was recognized for his contributions to systems engineering concepts, methodology, design, education and management. Warfield was also awarded the IEEE Third Millennium Medal. Publications Warfield was the author of more than 10 books and 100 papers. His books: 1958. Synthesis of Linear Communications Networks. With G. E. Knausenberger, New York: McGraw-Hill. 1959. Introduction to Electronic Analog Computers. Englewood Cliffs: Prentice-Hall. 1963. Principles of Logic Design. Boston: Ginn and Company. 1976. 
Societal Systems: Planning, Policy, and Complexity. New York: Wiley Interscience. 1990. A Science of Generic Design: Managing Complexity through Systems Design. Ames, IA: Iowa State University Press 1994. 1994. A Handbook of Interactive Management. With Roxana Cárdenas, Ames, IA: Iowa State University Press 1994. 2002. Understanding Complexity: Thought and Behavior. AJAR Publishing Company, Palm Harbor, FL. 2003. The Mathematics of Structure. AJAR Publishing Company, Palm Harbor, FL. 2006. An Introduction to Systems Sciences. World Scientific, Singapore Articles, papers and monographs, a selection: 1956. Systems Engineering. United States Department of Commerce PB111801. 1957. "How to Improve Systems Engineering". Aeronautical Engineering Review, 16(7), July, 1957, 50–51. 1969. "What is System Planning?". With R.W. House, in: Automatica Vol. 5, 1969, pp. 151–157. 1972. A Unified Systems Engineering Concept. With J. D. Hill, et al., Columbus: Battelle Memorial Institute, Monograph No. 1, June, 1972. 1974. Structuring Complex Systems. Columbus: Battelle Memorial Institute Monograph No. 4, April, 1974 1987. "Dimensionality" with Alexander Christakis Systems Research 4, pp. 127–137 2003. "A Proposal for Systems Science". In: Systems Research and Behavioral Science, Vol. 20 (2003), pp. 507–520. 2003. "Autobiographical Retrospectives: Discovering Systems Science". In: International Journal of General Systems, December 2003 Vol 32 (6), pp. 525–563. See also One-page management system References External links : Warfield site maintained by the Warfield IP Trust, includes free downloadable ISM software J.N. Warfield: short resume John N. Warfield: background International Encyclopedia of Systems and Cybernetics Panetics Society Interview with Peirce, Foucault and Hayek about Ralph Siu Can Panetics Become a Science? Second Interview with Foucault, Hayek, and Peirce, Interviews by John N. Warfield Conflicting Values and Perceptions and Panetics by John N. Warfield John N. Warfield Digital Archive Obituary from TimesDaily The Passing of Dr. John N. Warfield, 84 Warfield in the Quergeist collection John Warfield Papers John 1925 births 2009 deaths Systems engineers American systems scientists George Mason University faculty IEEE Centennial Medal laureates Presidents of the International Society for the Systems Sciences
John N. Warfield
Engineering
1,205
156,766
https://en.wikipedia.org/wiki/Professional%20development
Professional development, also known as professional education, is learning that leads to or emphasizes education in a specific professional career field or builds practical job applicable skills emphasizing praxis in addition to the transferable skills and theoretical academic knowledge found in traditional liberal arts and pure sciences education. It is used to earn or maintain professional credentials such as professional certifications or academic degrees through formal coursework at institutions known as professional schools, or attending conferences and informal learning opportunities to strengthen or gain new skills. Professional education has been described as intensive and collaborative, ideally incorporating an evaluative stage. There is a variety of approaches to professional development or professional education, including consultation, coaching, communities of practice, lesson study, case study, capstone project, mentoring, reflective supervision and technical assistance. Participants A wide variety of people, such as teachers, military officers and non-commissioned officers, health care professionals, architects, lawyers, accountants and engineers engage in professional development. Individuals may participate in professional development because of an interest in lifelong learning, a sense of moral obligation, to maintain and improve professional competence, to enhance career progression, to keep abreast of new technology and practices, or to comply with professional regulatory requirements. In the training of school staff in the United States, "[t]he need for professional development ... came to the forefront in the 1960s". Many American states have professional development requirements for school teachers. For example, Arkansas teachers must complete 60 hours of documented professional development activities annually. Professional development credits are named differently from state to state. For example, teachers in Indiana are required to earn 90 Continuing Renewal Units (CRUs) per year; in Massachusetts, teachers need 150 Professional Development Points (PDPs); and in Georgia, teachers must earn 10 Professional Learning Units (PLUs). American and Canadian nurses, as well as those in the United Kingdom, have to participate in formal and informal professional development (earning credit based on attendance of education that has been accredited by a regulatory agency) in order to maintain professional registration. Approaches In a broad sense, professional development may include formal types of vocational education, typically post-secondary or poly-technical training leading to qualification or credential required to obtain or retain employment. Professional development may also come in the form of pre-service or in-service professional development programs. These programs may be formal, or informal, group or individualized. Individuals may pursue professional development independently, or programs may be offered by human resource departments. Professional development on the job may develop or enhance process skills, sometimes referred to as leadership skills, as well as task skills. Some examples for process skills are 'effectiveness skills', 'team functioning skills', and 'systems thinking skills'. 
Professional development opportunities can range from a single workshop to a semester-long academic course, to services offered by a medley of different professional development providers and varying widely with respect to the philosophy, content, and format of the learning experiences. Some examples of approaches to professional development include: Case Study Method – The case method is a teaching approach that consists in presenting the students with a case, putting them in the role of a decision maker facing a problem – See Case method. Consultation – to assist an individual or group of individuals to clarify and address immediate concerns by following a systematic problem-solving process. Coaching – to enhance a person's competencies in a specific skill area by providing a process of observation, reflection, and action. Communities of Practice – to improve professional practice by engaging in shared inquiry and learning with people who have a common goal Lesson Study – to solve practical dilemmas related to intervention or instruction through participation with other professionals in systematically examining practice Mentoring – to promote an individual's awareness and refinement of his or her own professional development by providing and recommending structured opportunities for reflection and observation Reflective Supervision – to support, develop, and ultimately evaluate the performance of employees through a process of inquiry that encourages their understanding and articulation of the rationale for their own practices Technical Assistance – to assist individuals and their organization to improve by offering resources and information, supporting networking and change efforts. The World Bank's 2019 World Development Report on the future of work argues that professional development opportunities for those both in and out of work, such as flexible learning opportunities at universities and adult learning programs, enable labor markets to adjust to the future of work. Initial Initial professional development (IPD) is defined as "a period of development during which an individual acquires a level of competence necessary in order to operate as an autonomous professional". Professional associations may recognise the successful completion of IPD by the award of chartered or similar status. Examples of professional bodies that require IPD prior to the award of professional status are the Institute of Mathematics and its Applications, the Institution of Structural Engineers, and the Institution of Occupational Safety and Health. Continuing Continuing professional development (CPD) or continuing professional education (CPE) is continuing education to maintain knowledge and skills. Most professions have CPD obligations. Examples are the Royal Institution of Chartered Surveyors, American Academy of Financial Management, safety professionals with the International Institute of Risk & Safety Management (IIRSM) or the Institution of Occupational Safety and Health (IOSH), and medical and legal professionals, who are subject to continuing medical education or continuing legal education requirements, which vary by jurisdiction. CPD authorities in the United Kingdom include the CPD Standards Office who work in partnership with the CPD Institute, and also the CPD Certification Service. For example, CPD by the Institute of Highway Engineers is approved by the CPD Standards Office, and CPD by the Chartered Institution of Highways and Transportation is approved by the CPD Certification Service. 
A systematic review published in 2019 by the Campbell Collaboration found little evidence of the effectiveness of continuing professional development (CPD). See also References External links Personal development Vocational education Professional ethics
Professional development
Biology
1,181
13,931,339
https://en.wikipedia.org/wiki/DnaE
DnaE, the product of the dnaE gene, is the catalytic α subunit of DNA polymerase III and provides the DNA polymerase activity of the holoenzyme. The enzyme is found only in prokaryotes. References Bacterial proteins DNA replication
DnaE
Chemistry,Biology
46
8,999,947
https://en.wikipedia.org/wiki/Tiffany%20glass
Tiffany glass refers to the many and varied types of glass developed and produced from 1878 to 1929–1930 at the Tiffany Studios in New York City, by Louis Comfort Tiffany and a team of other designers, including Clara Driscoll, Agnes F. Northrop, and Frederick Wilson. In 1865, Tiffany traveled to Europe, and in London he visited the Victoria and Albert Museum, whose extensive collection of Roman and Syrian glass made a deep impression on him. He admired the coloration of medieval glass and was convinced that the quality of contemporary glass could be improved upon, because the production of art glass in America at this time was not close to what Europeans were creating. In his own words, "Rich tones are due in part to the use of pot metal full of impurities, and in part to the uneven thickness of the glass, but still more because the glass maker of that day abstained from the use of paint". Tiffany was an interior designer, and in 1878 his interest turned toward the creation of stained glass; he opened his own studio and glass foundry because he was unable to find the types of glass that he desired for interior decoration. His inventiveness both as a designer of windows and as a producer of the material with which to create them was to become renowned. Tiffany wanted the glass itself to transmit texture and rich colors, and he developed a type of glass he called "Favrile". Tiffany Studios The favrile, or "fabrile", glass was manufactured at the Tiffany factory located at 96–18 43rd Avenue in the Corona section of Queens from 1901 to 1932. Today the Louis Tiffany School, New York City's P.S. (public school) 110Q, stands on the old site. Closing The closing of the factory has been a matter of some controversy. Tiffany's glass fell out of favor in the 1910s, and by the 1920s a foundry had been installed for a separate bronze company. Tiffany's leadership and talent, as well as his father's money and old firm, allowed Tiffany to relaunch Tiffany Studios as part of a marketing strategy intended to keep the business thriving. In 1932, Tiffany Studios filed for bankruptcy. Ownership of the complex passed back to the original owners of the factory — the Roman Bronze Works — which had served as a subcontractor to Tiffany for many years. John Polachek, founder of the General Bronze Corporation, who had worked at the Tiffany Studios earlier, purchased the Roman Bronze Works (the old Tiffany Studios). General Bronze, formed through the merger of his own companies and Tiffany's Corona factory, then became the largest bronze fabricator in New York City. Louis Tiffany subsequently died in 1933. Types Opalescent glass The term "opalescent glass" is commonly used to describe glass in which more than one color is present, fused during manufacture, as opposed to flashed glass, in which two colors may be laminated, or silver stained glass, where a solution of silver nitrate is superficially applied, turning red glass to orange and blue glass to green. Some opalescent glass was used by several stained glass studios in England from the 1860s and 1870s onwards, notably Heaton, Butler and Bayne. Its use became increasingly common. Opalescent glass is the basis for the range of glasses created by Tiffany. Opalescent glass comes in three main types. The first type is exemplified by blue-tinged semi-opaque or clear glass with milky opalescence in the center, seen in creations by Lalique, Sabino, and Jobling's. This effect is achieved through slower cooling, causing crystallization. 
The glass glows golden when backlit and a beautiful blue when front-lit. Many French companies in the 1920s and 1930s, such as Lalique and Sabino, produced opalescent art deco pieces. The second type features a milky white edge or raised pattern on colored pressed glass. Reheating sections during the cooling process turns them white, creating a decorative effect. This method was employed by various companies, including Barolac in Bohemia, Joblings in England, and Val St Lambert in Belgium. The third type involves hand-blown glass with two layers, containing heat-reactive components like bone ash. The glass is blown into a mold with a raised pattern, and reheating turns the heat-sensitive glass milky white, creating a contrasting silhouette against the clear background (for more information ). Favrile glass Tiffany patented Favrile glass in 1892. Favrile glass often has a distinctive characteristic that is common in some glass from Classical antiquity: it possesses a superficial iridescence. This iridescence causes the surface to shimmer, but also causes a degree of opacity. This iridescent effect of the glass was obtained by mixing different colors of glass together while hot. Streamer glass Streamer glass refers to a sheet of glass with a pattern of glass strings affixed to its surface. Tiffany made use of such textured glass to represent, for example, twigs, branches and grass. Streamers are prepared from very hot molten glass, gathered at the end of a punty (pontil) that is rapidly swung back and forth and stretched into long, thin strings that rapidly cool and harden. These hand-stretched streamers are pressed on the molten surface of sheet glass during the rolling process, and become permanently fused. Fracture glass Fracture glass refers to a sheet of glass with a pattern of irregularly shaped, thin glass wafers affixed to its surface. Tiffany made use of such textured glass to represent, for example, foliage seen from a distance. The irregular glass wafers, called fractures, are prepared from very hot, colored molten glass, gathered at the end of a blowpipe. A large bubble is forcefully blown until the walls of the bubble rapidly stretch, cool and harden. The resulting glass bubble has paper-thin walls and is immediately shattered into shards. These hand blown shards are pressed on the surface of the molten glass sheet during the rolling process, to which they become permanently fused. Fracture-streamer glass Fracture-streamer glass refers to a sheet of glass with a pattern of glass strings, and irregularly shaped, thin glass wafers, affixed to its surface. Tiffany made use of such textured glass to represent, for example, twigs, branches and grass, and distant foliage. The process is as above except that both streamers and fractures are applied to sheet glass during the rolling process. Ring mottle glass Ring mottle glass refers to sheet glass with a pronounced mottle created by localized, heat-treated opacification and crystal-growth dynamics. Ring mottle glass was invented by Tiffany in the early 20th century. Tiffany's distinctive style exploited glass containing a variety of motifs such as those found in ring mottle glass, and he relied minimally on painted details. When Tiffany Studio closed in 1929–1930, the secret formula for making ring mottle glass was forgotten and lost. Ring mottle glass was re-discovered in the late sixties by Eric Lovell of Uroboros Glass. 
Traditionally used for organic details on leaves and other natural elements, ring mottles also find a place in contemporary work when abstract patterns are desired. Ripple glass Ripple glass refers to textured glass with marked surface waves. Tiffany made use of such textured glass to represent, for example, water or leaf veins. The texture is created during the glass sheet-forming process. A sheet is formed from molten glass with a roller that spins on itself while travelling forward. Normally the roller spins at the same speed as its own forward motion, much like a steam roller flattening tarmac, and the resulting sheet has a smooth surface. In the manufacture of rippled glass, the roller spins faster than its own forward motion. The rippled effect is retained as the glass cools. Drapery glass Drapery glass refers to a sheet of heavily folded glass that suggests fabric folds. Tiffany made abundant use of drapery glass in ecclesiastical stained glass windows to add a 3-dimensional effect to flowing robes and angel wings, and to imitate the natural coarseness of magnolia petals. The making of drapery glass requires skill and experience. A small diameter hand-held roller is manipulated forcefully over a sheet of molten glass to produce heavy ripples, while folding and creasing the entire sheet. The ripples become rigid and permanent as the glass cools. Each sheet produced from this artisanal process is unique. Cutting techniques In order to cut streamer, fracture or ripple glass, the sheet may be scored on the side without streamers, fractures or ripples with a carbide glass cutter, and broken at the score line with breaker-grozier pliers. In order to cut drapery glass, the sheet may be placed on styrofoam, scored with a carbide glass cutter, and broken at the score line with breaker-grozier pliers, but a bandsaw or ringsaw are the preferred. Locations and collections Stained glass in situ Canada Ontario London – St Paul's Cathedral, four windows, two signed by Tiffany Quebec Montreal – Montreal Museum of Fine Arts, Bourgie Pavilion (formerly Erskine and American United Church), twenty windows signed by Tiffany Mexico Mexico City – Palacio de Bellas Artes Scotland Aberdeenshire – St Peter's Kirk, Fyvie Dunfermline – Dunfermline Abbey Edinburgh – Parish Church of Saint Cuthbert United States Alabama Mobile – Christ Church Cathedral Arizona Douglas – Gadsden Hotel California Vallejo – St. Peter's Chapel, Mare Island, 25 windows by Tiffany Colorado Colorado Springs – First United Methodist Church Connecticut Southport Pequot Library Association Hartford First Church of Christ and Ancient Burial Ground Mark Twain House New London St. James Episcopal Church New Haven – Center Church on the Green Trinity Lutheran Church Florida St. Augustine – Flagler College Georgia Atlanta – All Saints' Episcopal Church Jekyll Island – Faith Chapel Macon – St. Paul's Episcopal Church Savannah – Gryphon Tea Room Thomasville – St. Thomas Episcopal Church Illinois Chicago – Macy's on State Street, formerly Marshall Field's Second Presbyterian Church on South Michigan Avenue Chicago Cultural Center Springfield – First Presbyterian Church Tinley Park – St. Andrew's Anglican Church Indiana Indianapolis – Second Presbyterian Church Richmond – Reid Center, formerly Reid Memorial Presbyterian Church Iowa Dubuque – St. 
Luke's United Methodist Church Kansas Topeka – First Presbyterian Church Kentucky Covington – Trinity Episcopal Church Louisiana New Orleans – Tulane University Maine Portland – Masonic Temple Maryland Baltimore – Brown Memorial Presbyterian Church Massachusetts Boston – Arlington Street Church Church of the Covenant Wellesley – Houghton Memorial Chapel at Wellesley College Nantucket – St. Pauls Episcopal Church Michigan Ann Arbor – Unitarian Universalist Church (Hobbs & Black) Newberry Hall (Kelsey Museum of Archeology) Grand Rapids – Ladies Literary Club Temple Emanuel Marquette – The Resurrection Window, Morgan Chapel, St. Paul's Episcopal Church Minnesota Stillwater – Episcopal Church of the Ascension Mississippi University – Ventress Hall at The University of Mississippi Tribute to the University Greys Missouri Kansas City – St. Mary's Episcopal Church Kirkwood – Grace Episcopal Church New Hampshire Bretton Woods – Mount Washington Hotel New Jersey Hackensack – Second Reformed Church Maplewood – Morrow Memorial United Methodist Church New Brunswick – Kirkpatrick Chapel at Rutgers, The State University of New Jersey New York Albany – First Presbyterian Church of Albany Albion – Pullman Memorial Universalist Church Auburn – Willard Chapel Bath – First Presbyterian Church Beacon – St. Andrew's Church Briarcliff Manor – Congregational Church Buffalo – St. Paul's Episcopal Cathedral Irvington – Irvington Presbyterian Church Irvington Town Hall – Clock face and reading room Lockport – First Presbyterian Church New York City – Brooklyn – Brown Memorial Baptist Church and church house Flatbush Reformed Church and church house First Unitarian Congregational Society and Rev. Donald McKinney chapel Manhattan – Grand Central Terminal – clock face on south facade West End Collegiate Church, West End Avenue St. Michael's Church, New York City, Amsterdam Avenue at 99th Street Holy Trinity Lutheran Church Roslyn – Trinity Episcopal Church Roxbury – Jay Gould Memorial Reformed Church Saugerties – St. Mary of the Snow, 36 Cedar Street Troy – Troy Public Library St. Joseph's Catholic Church – St. Paul's Episcopal Church Tuxedo Park – St. Mary's-in-Tuxedo Episcopal Church Garden City – St Paul's School, endangered glass Washingtonville – Moffat Library Ohio Cleveland – Wade Memorial Chapel in Lake View Cemetery Dayton – Westminster Presbyterian Church, 125 N. Wilkinson Street Historic Woodland Cemetery & Arboretum, 118 Woodland Avenue Pennsylvania Altoona – St. Lukes Episcopal Church Brownsville – Christ Church Erie – Cathedral of St. Paul First Presbyterian Church Franklin – St. John's Episcopal Church Franklin – Christ's Church Kittanning – Grace Presbyterian Church Lancaster – First Presbyterian Church Lewistown – St. Mark's Episcopal Church First United Methodist Church Montgomery Township – Robert Kennedy Memorial Presbyterian Church New Castle – St. Jude's Episcopal Church, formerly known as Trinity Episcopal Church Philadelphia – Calvary Center for Culture and Community Church of the Holy Trinity First Presbyterian Church St. Stephen's Episcopal Church Tenth Presbyterian Church Pittsburgh – Calvary United Methodist Church Emmanuel Episcopal Church Shadyside Presbyterian Church First Presbyterian Church Third Presbyterian Church St. Andrews Episcopal Church Sewickley – First Presbyterian Church St. Stephen's Episcopal Church Sharon – Buhl Mausoleum Titusville – St. James Memorial Episcopal Church Uniontown – Trinity United Presbyterian Church St. Peter's Anglican Church Whitemarsh Township – St. 
Thomas' Church Williamsport – Christ Community Worship Center, formerly the Presbyterian Church of the Covenant Tennessee Chattanooga – Saints Peter and Paul Basilica Memphis – Grace-St. Luke's Episcopal Church Texas Galveston – Trinity Episcopal Church Houston – Christ Church Cathedral Utah Salt Lake City – Salt Lake Temple St. Mark's Episcopal Cathedral Vermont St. Johnsbury – Grace United Methodist Church Virginia Newport News – St. Paul's Episcopal Church Norfolk – St. Paul's Episcopal Church Richmond – Congregation Beth Ahabah Petersburg – Blandford Church Staunton – Trinity Episcopal Church Washington Seattle – Pierre P. Ferry House Wisconsin Menomonie – Mabel Tainter Memorial Building Milwaukee – Charles Allis Art Museum Milwaukee – St. Paul's Episcopal Church Oshkosh – Oshkosh Public Museum Museums United Kingdom England Haworth Art Gallery, Accrington United States Florida Charles Hosmer Morse Museum of American Art, Winter Park Illinois Art Institute of Chicago, Chicago Halim Time and Glass Museum, Evanston Louisiana Newcomb Art Museum, Tulane University, New Orleans Michigan Ella Sharp Museum of Art and History, Jackson Meadow Brook Hall, Rochester University of Michigan Museum of Art, Ann Arbor New York Brooklyn Museum, Brooklyn, New York City Metropolitan Museum of Art, Manhattan, New York City Neustadt Collection of Tiffany Glass, Queens Museum, Queens, New York City New-York Historical Society, Manhattan, New York City Texas Dallas Museum of Art, Dallas Museum of Fine Arts, Houston, Houston Virginia Virginia Museum of Fine Arts, Richmond Wisconsin Charles Allis Art Museum, Milwaukee See also Tiffany lamp Stained glass The Sunset Scene (Glass Window) Stained glass windows of Chartres Cathedral Tiffany & Co. Louis Comfort Tiffany References Informational notes Citations Further reading External links Publications and ephemeral materials from Tiffany Studios, Tiffany Glass & Decorating Company, Tiffany and Company, and the Louis Comfort Tiffany Foundation – held by the Metropolitan Museum of Art Architectural elements Glass types Glass art Glass trademarks and brands Stained glass Tiffany Studios Corona, Queens
Tiffany glass
Technology,Engineering
3,156
69,989,959
https://en.wikipedia.org/wiki/Lichexanthone
Lichexanthone is an organic compound in the structural class of chemicals known as xanthones. Lichexanthone was first isolated and identified by Japanese chemists from a species of leafy lichen in the 1940s. The compound is known to occur in many lichens, and it is important in the taxonomy of species in several genera, such as Pertusaria and Pyxine. More than a dozen lichen species have a variation of the word lichexanthone incorporated as part of their binomial name. The presence of lichexanthone in lichens causes them to fluoresce a greenish-yellow colour under long-wavelength UV light; this feature is used to help identify some species. Lichexanthone is also found in several plants (many are from the families Annonaceae and Rutaceae), and some species of fungi that do not form lichens. In lichens, the biosynthesis of lichexanthone occurs through a set of enzymatic reactions that start with the molecule acetyl-CoA and sequentially add successive units, forming a longer chain that is cyclized into a double-ring structure. Although it has been suggested that lichexanthone functions in nature as a photoprotectant—protecting resident algal populations (photobionts) in lichens from high-intensity solar radiation—its complete ecological function is not fully understood. Some biological activities of lichexanthone that have been demonstrated in the laboratory include antibacterial, larvicidal, and sperm motility-enhancing activities. Many lichexanthone derivatives are known, some produced naturally in lichens, and others created synthetically; like lichexanthone, some of these derivatives are also biologically active. History Lichexanthone was first reported by Japanese chemists Yasuhiko Asahina and Hisasi Nogami in 1942. They isolated the lichen product from Parmelia formosana (known today as Hypotrachyna osseoalba), a lichen that is widespread in Asia. Another early publication described its isolation from Parmelia quercina (now Parmelina quercina). Lichexanthone was the first xanthone to be reported from lichens, and it was given its name by Asahina and Nogami for this reason. Asahina and Nogami used a chemical method called potash fusion (decomposition with a hot solution of the strong base potassium hydroxide) on lichexanthone to produce orcinol. The earliest syntheses of lichexanthone used orsellinic aldehyde and phloroglucinol as starting reactants in the Tanase method. This method, one of six standard ways of synthesising xanthone derivatives, enables the creation of partially methylated polyhydroxyxanthones. In the reaction, the two substrates, in the presence of hydrochloric acid and acetic acid, produce a fluorone derivative that is subsequently reduced to give a xanthene derivative, which, after subsequent methylation and oxidation, leads to a xanthone with three methoxy groups. Afterwards, one of the methoxy groups is demethylated to yield lichexanthone. A simpler synthesis, starting from everninic acid (2-hydroxy-4-methoxy-6-methylbenzoic acid) and phloroglucinol, was proposed in 1956. These early syntheses also helped to confirm the structure of lichexanthone before spectral methods of analysis were widely available. In 1977, Harris and Hay proposed a biogenetically modelled synthesis of lichexanthone starting from the polycarbonyl compound 3,5,7,9,11,13-hexaoxotetradecanoic acid. 
In this synthesis, an aldol cyclization between positions 8 and 13 followed by a Claisen cyclization between positions 1 and 6 leads to the formation of a group of compounds that includes lichexanthone. Properties Lichexanthone is a member of the class of chemical compounds called xanthones. Specifically, it is a 9H-xanthen-9-one substituted by a hydroxy group at position 1, a methyl group at position 8 and methoxy groups at positions 3 and 6. Its IUPAC name is 1-hydroxy-3,6-dimethoxy-8-methyl-9H-xanthen-9-one. Lichexanthone's molecular formula is C16H14O5; it has a molecular mass of 286.27 grams per mole. In its purified crystalline form, it exists as long yellow prisms with a melting point of . Its crystal structure is part of the monoclinic crystal system, in the space group called P21/c. An ethanolic solution of lichexanthone reacts with iron(III) chloride to produce a purple colour; an acetic acid solution containing lichexanthone will emit a greenish fluorescence after adding a drop of concentrated sulfuric acid. The presence of the compound in lichens causes them to fluoresce yellow under long-wavelength UV light, a property that is used as a tool in lichen species identification. The mass spectrum of lichexanthone was reported in 1968. It features a strong parent peak at m/z (mass-to-charge ratio) of 286, and weaker-intensity rearrangement peaks at 257, 243, and 200. A 2009 study on the electrochemical reduction of the compound used techniques such as cyclic voltammetry with rotating disc and rotating ring electrodes, and controlled-potential electrolysis to characterise the reduction mechanism of lichexanthone, and to better understand the nature of its chemical reactivity. The complete proton nuclear magnetic resonance (1H NMR) and carbon-13 nuclear magnetic resonance (13C NMR) spectral assignments for lichexanthone were reported in 2010, as well as its crystal structure determined using X-ray diffraction. Biological activities Various biological activities of lichexanthone, studied using in vitro experiments, have been recorded in the scientific literature. The antimicrobial activity of the bark-dwelling lichen Marcelaria benguelensis is largely attributed to the presence of lichexanthone. Chemically unmodified lichexanthone has weak antimycobacterial activity against Mycobacterium tuberculosis and M. aurum. However, a dihydropyrane derivative of lichexanthone had antimycobacterial activity similar to that of drugs commonly used to treat tuberculosis. Lichexanthone has a strong antibacterial effect towards Bacillus subtilis, and also inhibits the growth of methicillin-resistant Staphylococcus aureus. In contrast, no antiparasitic activity was detected against either Plasmodium falciparum or Trypanosoma brucei, nor did it have any cytotoxic activity against a variety of cancer cell lines. In laboratory tests, the presence of lichexanthone enhances the motility of human sperm; there are only a few compounds known to have this effect. The chemical also has larvicidal activity against second-instar larvae of the mosquito Aedes aegypti, a vector of the Dengue virus. Biosynthesis In lichens, biosynthesis of lichexanthone occurs through the acetate-malonate metabolic pathway, which uses acetyl coenzyme A as a precursor. In this pathway, polyketides are created by the sequential reactions of a variety of polyketide synthases. These enzymes control a number of enzymatic reactions through several coordinated active sites on a large multienzyme protein complex. 
The structure of lichen xanthones is derived by linear condensation of seven acetate and malonate units with one orsellinic acid-type cyclisation. The two rings are joined by a ketonic carbon and by an ether-oxygen arising from cyclodehydration (i.e., a dehydration reaction leading to the formation of a cyclic compound). The exact mechanism is not known, but this ring closure might proceed through a benzophenone intermediate that could dehydrate to yield the central pyrone core of lichexanthone. A standardized high-performance liquid chromatography (HPLC) assay has been described to identify many lichen-derived substances, including lichexanthone and many other xanthones; because many xanthone isomers have different retention times, this technique can be used to identify complex mixtures of structurally similar derivatives. The technique was later refined to couple the HPLC output with a photodiode array detector to screen for xanthones based on their specific ultraviolet–visible spectra. In this way, lichexanthone is detected by monitoring its retention time, and verifying the presence of three peaks representing wavelengths of maximum absorption (λmax) at 208, 242, and 310 nm. Occurrence Although first isolated from foliose (leafy) Parmelia species, lichexanthone has since been found in a wide variety of lichens. For example, in the foliose genus Hypotrachyna, it is found in about a dozen species; when present, it usually completely replaces other cortical substances common in that genus, like atranorin and usnic acid. The presence or absence of lichexanthone is a character used in classifying species of the predominantly tropical genus Pyxine; of about 70 species in the genus, 20 contain lichexanthone. This represents the largest group of foliose lichens with the compound, as it is generally restricted to some groups of tropical crustose lichens, chiefly pyrenocarps and Graphidaceae. The large genus Pertusaria relies heavily on thallus chemistry to distinguish and classify species, some of which differ only in the presence or absence of a single secondary chemical. Lichexanthone, norlichexanthone, and their chlorinated derivatives are common in this genus. Although normally considered a secondary metabolite of lichens, lichexanthone has also been isolated from several plants, listed here organized by family: Annonaceae: Annona muricata, Guatteria blepharophylla, Rollinia leptopetala Clusiaceae: Garcinia forbesii Euphorbiaceae: Croton cuneatus Gentianaceae: Anthocleista djalonensis Hypericaceae: Vismia baccifera var. dealbata Meliaceae: Trichilia rubescens Melastomataceae: Henriettella fascicularis Olacaceae: Minquartia guianensis Polygonaceae: Ruprechtia tangarana Rutaceae: Clausena excavata, Feroniella lucida, Zanthoxylum microcarpum, Z. valens, Z. setulosum, Z. tetraspermum Sapindaceae: Cupania cinerea Lichexanthone has also been reported to occur in the bark of Faramea cyanea, although in that case it was suspected to have originated from a lichen growing on the bark. Additionally, two non-lichenised fungus species, Penicillium persicinum and Penicillium vulpinum, can synthesize lichexanthone. Xanthones are known to have strong UV-absorbing properties. In experiments using laboratory-grown mycobionts from the lichen Haematomma fluorescens, the synthesis of lichexanthone was induced when young mycelia were exposed to long-wavelength UV light (365 nm) for three to four hours every week over a time span of three to four months. 
In the natural lichen, the compound is present in both the outer cortical layer of the thallus and in the exciple (rim) of the ascomata. Lichexanthone may function as a light filter to protect the UV-sensitive algal layer in lichens from high-intensity solar radiation. The presence of the photoprotective chemical in the cortex may allow these lichens to survive in otherwise inhospitable habitats, such as on exposed trees in tropical areas or high mountains. It has been pointed out, however, that lichexanthone is also found in lichens living in less stressed environments, and in species from families where cortical substances are rare. In some instances, similar or related species exist that lack cortical substances entirely, suggesting that the actual ecological function of lichexanthone is not fully understood. Related compounds Norlichexanthone (1,3,6-trihydroxy-8-methylxanthone) differs from lichexanthone in having hydroxy rather than methoxy groups at positions 3 and 6. In a related derivative, 1,6-dihydroxy-3-methoxy-8-methylxanthen-9-one, the methoxy at position 6 of lichexanthone is replaced with a hydroxy group. Dozens of chlorinated lichexanthone derivatives have been reported, some isolated from a variety of lichen species, and some produced synthetically. These derivatives are variously mono-, bi-, or trichlorinated, with the chlorines at positions 2, 4, 5, and 7. As of 2016, 62 molecules with the lichexanthone scaffold had been described, and another eight derivatives were considered "putative", thought to exist in nature but not yet discovered in lichens. The effects of chlorine substituents on some structural and electronic properties of lichexanthones have been studied with quantum mechanical theory, to better understand features such as intramolecular interactions, the aromaticity of the three rings, interactions between ionic and halogen bonds, and the binding energies of complexes formed between lichexanthone, the magnesium ion (Mg2+), and NH3. A series of lichexanthone derivatives were synthesized and assessed for antimycobacterial activity against Mycobacterium tuberculosis. These derivatives consisted of ω-bromo and ω-aminoalkoxylxanthones; lichexanthone and several derivatives were found to have weak antimycobacterial activity. According to the authors, this chemometrics approach was useful for correlating structural and chemical features with in vitro antimycobacterial activity among the group of ω-aminoalkoxylxanthones. Eponyms Some authors have explicitly named lichexanthone in the specific epithets of their published lichen species, thereby acknowledging the presence of this compound as an important taxonomic characteristic. These eponyms are listed here; all of these species occur in Brazil: Parmotrema lichexanthonicum Lecanora lichexanthona Crypthonia lichexanthonica Cryptothecia lichexanthonica Buellia lichexanthonica Chiodecton lichexanthonicum Enterographa lichexanthonica Cladonia lichexanthonica Pertusaria lichexanthofarinosa Pertusaria lichexanthoimmersa Pertusaria lichexanthoverrucosa Diorygma isidiolichexanthonicum Caprettia lichexanthotricha Lecanora lichexanthoxylina Lepra lichexanthonorstictica – named for both lichexanthone and norstictic acid Aggregatorygma lichexanthonicum Allographa lichexanthonica Ocellularia fuscolichexanthonica Ocellularia lichexanthocavata In the case of Crypthonia, Chiodecton, Cladonia, and Caprettia, the listed species are the only members of those genera that contain lichexanthone. 
References Xanthones Methoxy compounds Lichen products Ketones Xanthonoids
Lichexanthone
Chemistry
3,435
43,676,277
https://en.wikipedia.org/wiki/Cartesian%20monoidal%20category
In mathematics, specifically in the field known as category theory, a monoidal category where the monoidal ("tensor") product is the categorical product is called a cartesian monoidal category. Any category with finite products (a "finite product category") can be thought of as a cartesian monoidal category. In any cartesian monoidal category, the terminal object is the monoidal unit. Dually, a monoidal finite coproduct category, with the monoidal structure given by the coproduct and the unit given by the initial object, is called a cocartesian monoidal category, and any finite coproduct category can be thought of as a cocartesian monoidal category. Cartesian categories with an internal Hom functor that is an adjoint functor to the product are called Cartesian closed categories. Properties Cartesian monoidal categories have a number of special and important properties, such as the existence of diagonal maps Δx : x → x ⊗ x and augmentations ex : x → I for any object x. In applications to computer science we can think of Δ as "duplicating data" and e as "deleting data". These maps make any object into a comonoid. In fact, any object in a cartesian monoidal category becomes a comonoid in a unique way. Examples Cartesian monoidal categories: Set, the category of sets with the singleton set serving as the unit. Cat, the bicategory of small categories with the product category, where the category with one object and only its identity map is the unit. Cocartesian monoidal categories: Vect, the category of vector spaces over a given field, can be made cocartesian monoidal with the monoidal product given by the direct sum of vector spaces and the trivial vector space as unit. Ab, the category of abelian groups, with the direct sum of abelian groups as monoidal product and the trivial group as unit. More generally, the category R-Mod of (left) modules over a ring R (commutative or not) becomes a cocartesian monoidal category with the direct sum of modules as tensor product and the trivial module as unit. In each of these categories of modules equipped with a cocartesian monoidal structure, finite products and coproducts coincide (in the sense that the product and coproduct of finitely many objects are isomorphic). More formally, if f : X1 ∐ ... ∐ Xn → X1 × ... × Xn is the "canonical" map from the n-ary coproduct of objects Xj to their product, for a natural number n, and the map f is an isomorphism, then a biproduct for the objects Xj is an object X isomorphic to X1 × ... × Xn, together with maps ij : Xj → X and pj : X → Xj such that the pair (X, {ij}) is a coproduct diagram for the objects Xj, the pair (X, {pj}) is a product diagram for the objects Xj, and pj ∘ ij = idXj. If, in addition, the category in question has a zero object, so that for any objects A and B there is a unique map 0A,B : A → 0 → B, it often follows that pk ∘ ij = δkj, the Kronecker delta, where we interpret 0 and 1 as the zero map and the identity map of the objects Xj and Xk, respectively. See pre-additive category for more. See also Cartesian closed category References Category theory Monoidal categories
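The "duplicating data" and "deleting data" reading of the diagonal and augmentation maps can be made concrete in a few lines of code. The sketch below is an illustration added here (not part of the article); it is written in Python and models Set, the prototypical cartesian monoidal category, with the empty tuple standing in for the one-element terminal set:

    def diagonal(x):        # Δ_x : x → x ⊗ x, "duplicating data"
        return (x, x)

    def augmentation(_x):   # e_x : x → I, "deleting data"; () models the unit I
        return ()

    # One comonoid counit law, checked pointwise: applying Δ and then deleting
    # the second copy returns the original element up to the isomorphism x ≅ x ⊗ I.
    a, b = diagonal(3)
    assert (a, augmentation(b)) == (3, ())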
Cartesian monoidal category
Mathematics
766
6,365,212
https://en.wikipedia.org/wiki/Payload%20specialist
A payload specialist (PS) was an individual selected and trained by commercial or research organizations for flights of a specific payload on a NASA Space Shuttle mission. People assigned as payload specialists included individuals selected by the research community, a company or consortium flying a commercial payload aboard the spacecraft, and non-NASA astronauts designated by international partners. The term refers both to the individual and to the position on the Shuttle crew. History The National Aeronautics and Space Act of 1958 states that NASA should provide the "widest practicable and appropriate dissemination of information concerning its activities and the results thereof". The Naugle panel of 1982 concluded that carrying civilians—those not part of the NASA Astronaut Corps—on the Space Shuttle was part of "the purpose of adding to the public's understanding of space flight". Payload specialists usually flew a single specific mission. Chosen outside the standard NASA mission specialist selection process, they were exempt from certain NASA requirements, such as those concerning color vision. Roger K. Crouch and Ulf Merbold are examples of those who flew in space despite not meeting NASA physical requirements; the agency's director of crew training, Jim Bilodeau, said in April 1981 "we'll be able to take everybody but the walking wounded". Payload specialists were not required to be United States citizens, but had to be approved by NASA and undergo rigorous but shorter training. In contrast, a Space Shuttle mission specialist was selected as a NASA astronaut first and then assigned to a mission. Payload specialists on early missions were technical experts who accompanied specific payloads, such as a commercial or scientific satellite. On Spacelab and other missions with science components, payload specialists were scientists with expertise in specific experiments. The term also applied to representatives from partner nations who were given the opportunity of a first flight on board the Space Shuttle (such as Saudi Arabia and Mexico), to Congressmen, and to the Teacher in Space program. Under Secretary of the Air Force Edward C. Aldridge Jr. was offered a seat to improve relations between NASA and the United States Air Force. NASA also expected to fly "citizen astronauts", ordinary Americans who could describe space to others. In August 1984 President Ronald Reagan announced the Teacher in Space Project, the first such program. NASA expected to fly reporters (Journalist in Space Project), entertainers, and creative types later. NASA categorized full-time international astronauts as payload specialists unless they received NASA mission specialist training, which some did. Bilodeau estimated that payload specialists would receive a couple of hundred hours of training over four or five weeks. International or scientific payload specialists were generally assigned a back-up who trained alongside the primary payload specialist and would replace them in the event of illness or other disability. Both primary and backup payload specialists received mission-specific and general training. Michael Lampton estimated that about 20% of his training was general, including firefighter school, capsule communicator duty, and use of Personal Egress Air Packs and the space toilet. He described training for Spacelab 1 as "going back to graduate school but majoring in everything"; as the first mission it tested Spacelab's versatility in "medical, metallurgical, remote sensing, astronomy, microgravity, lots more". 
Payload specialists operated experiments, and participated in experiments needing human subjects. Charles D. Walker recalled that Senator Jake Garn "and I were the obvious subjects" for Rhea Seddon's echocardiograph on STS-51-D. "We really didn't have much of a choice in whether we were going to be subjects or not. 'You're a payload specialist; you’re going to be a subject.'" Besides his own electrophoresis work, Walker operated an unrelated experiment for the University of Alabama Birmingham, and helped build homemade repair tools for a satellite launched on the mission. Payload specialists were flown from 1983 (STS-9) to 2003 (STS-107). The last flown payload specialist was the first Israeli astronaut, Ilan Ramon, who was killed in the Columbia disaster on mission STS-107 with the rest of the crew. Criticism Within NASA, Johnson Space Center (JSC) controlled crewed spaceflight by selecting professional, full-time astronauts. The payload specialist program gave Marshall Space Flight Center (MSFC)—which supervised Spacelab, including a contracted European Space Agency-chosen payload specialist—control as well, causing conflicts. JSC director Chris Kraft and members of the NASA astronaut corps believed that mission specialists—many with doctoral degrees or other scientific background, and all with full-time astronaut training—could operate all experiments. Rick Chappell, chief scientist of MSFC, believed that the scientific community insisted on its own scientists being able to operate experiments in exchange for support of the Space Shuttle program. While mission specialists could operate most experiments, "Since we could take passengers, why not take at least a couple of passengers who had spent their whole careers doing the kind of research they were going to do in space?" he said. During the Space Shuttle design process, some said that crews should be no larger than four people; both for safety, and because a commander, pilot, mission specialist, and payload specialist were sufficient for any mission. NASA expected to fly more payload specialists so it designed a larger vehicle. Only NASA astronauts piloted the space shuttle, but mission specialist astronauts worried about competing with American and international payload specialists for very limited flight opportunities. In 1984 about 45 mission specialists competed for about 15 seats on the five shuttle flights. Only three payload specialists flew that year, but in 1985 eight of nine shuttle flights carried 15 payload specialists (Walker flying twice), no doubt angering mission specialists. Some payload specialists like Walker and Byron Lichtenberg were rejected as full-time astronauts but flew as payload specialists before many selected as such, and some may have flown without understanding the level of danger. Many astronauts worried that without years of training together they would not be able to trust payload specialists in an emergency; Henry Hartsfield described their concern as "If you had a problem on orbit, am I going to have to babysit this person?" NASA's preference for its own training caused the agency to offer some international payload specialists the opportunity to become mission specialists, the first being Claude Nicollier. 
Those skeptical of the payload specialist program were less critical of scientists and experts like Walker than of non-expert passengers ("part-timers", according to Mike Mullane, who called the program public relations-driven and immoral in Riding Rockets) like Garn, US Representative Bill Nelson, and other civilians such as Teacher in Space Christa McAuliffe. They saw Senator John Glenn as a passenger even though he was a former Mercury Seven astronaut. A 1986 post-Challenger article in The Washington Post reviewed the issue, reporting that as far back as 1982, NASA was concerned with finding reasonable justifications for flying civilians on the Shuttle as directed by the Reagan administration. The article says that "A review of records and interviews with past and present NASA and government officials shows the civilian program's controversial background, with different groups pushing for different approaches". The article concludes with: Payload specialists were aware of full-time astronauts' dislike of the program. Garn advised STS-51-D colleague Jeffrey A. Hoffman not to play poker because, the astronaut quoted, "It took you a while to disguise your initial skepticism about this whole thing". Merbold said that at JSC he was treated as an intruder. Once payload specialists were assigned to a mission, however, full-time astronauts treated them respectfully and often began long-term friendships. Mullane became less critical of them after his first mission; he and Hartsfield approved of Walker, as did Hoffman of Garn after STS-51-D. List of all payload specialists Backup payload specialists The following is a list of people who were named as backup (also known as alternate) payload specialists. These people typically received the same training as the "prime" crew payload specialist, but did not fly on the mission. However, many would go on to fly on other missions as "prime" crew payload or mission specialists. Other statistics Multiple flights Nationalities Payload specialists who later trained as mission specialists All were international astronauts. Marc Garneau – mission specialist on STS-77, STS-97 Mamoru Mohri – mission specialist on STS-99 Steven MacLean – mission specialist on STS-115 Hans Schlegel – mission specialist on STS-122 Umberto Guidoni – mission specialist on STS-100 Robert Thirsk – flight engineer on Soyuz TMA-15 and Expedition 20 Bjarni Tryggvason – completed training, retired in June 2008 without flying again See also List of human spaceflights List of Space Shuttle missions List of Space Shuttle crews Shenzhou 16 – the first mission in the China Manned Space Program to include the position of payload specialist Gui Haichao – the first payload specialist and the first civilian taikonaut in that program Notes References External links Biographies of Payload Specialists on the NASA web site. Astronauts
Payload specialist
Biology
1,836
44,272,100
https://en.wikipedia.org/wiki/Discovery%20and%20development%20of%20gliflozins
Gliflozins are a class of drugs used in the treatment of type 2 diabetes (T2D). They act by inhibiting sodium/glucose cotransporter 2 (SGLT-2), and are therefore also called SGLT-2 inhibitors. Their efficacy depends on renal excretion: by promoting glucosuria, they prevent filtered glucose from returning to the blood circulation. The mechanism of action is insulin independent. Three drugs have been approved by the Food and Drug Administration (FDA) in the United States: dapagliflozin, canagliflozin, and empagliflozin. Canagliflozin was the first SGLT-2 inhibitor approved by the FDA, in March 2013; dapagliflozin and empagliflozin were approved in 2014. Introduction Role of kidneys in glucose homeostasis There are at least four members of the SLC-5 gene family, which are secondary active glucose transporters. The sodium/glucose transporter proteins SGLT-1 and SGLT-2 are the two principal members of the family. These two members are found in the kidneys, among other transporters, and are the main co-transporters there involved in handling blood sugar. They play a role in renal glucose reabsorption and in intestinal glucose absorption. Blood glucose is freely filtered by the glomeruli, and SGLT-1 and SGLT-2 reabsorb glucose in the kidneys and return it to the circulation. SGLT-2 is responsible for 90% of the reabsorption and SGLT-1 for the other 10%. SGLT-2 protein Sodium/glucose co-transporter (SGLT) proteins are bound to the cell membrane and transport glucose through the membrane into the cells, against the concentration gradient of glucose. This is done by using the sodium gradient produced by sodium/potassium ATPase pumps, so glucose and sodium are transported into the cells at the same time. Since the transport is against the gradient, it requires energy to work. SGLT proteins drive glucose reabsorption from the glomerular filtrate, independently of insulin. SGLT-2 is a member of the glucose transporter family and is a low-affinity, high-capacity glucose transporter. SGLT-2 is mainly expressed in the S-1 and S-2 segments of the proximal renal tubules, where the majority of filtered glucose is absorbed. SGLT-2 has a role in the regulation of glucose and is responsible for most glucose reabsorption in the kidneys. In diabetes, extracellular glucose concentration increases, and this high glucose level leads to upregulation of SGLT-2, leading in turn to more absorption of glucose in the kidneys. These effects maintain hyperglycemia. Because sodium is absorbed at the same time as glucose via SGLT-2, the upregulation of SGLT-2 probably contributes to the development or maintenance of hypertension. In a study in which rats were given either ramipril or losartan, levels of SGLT-2 protein and mRNA were significantly reduced. Hypertension is a common problem in patients with diabetes, so this may be relevant to the disease. Drugs that inhibit sodium/glucose cotransporter 2 inhibit renal glucose reabsorption, which leads to enhanced urinary glucose excretion and lower blood glucose. They work independently of insulin and can reduce glucose levels without causing hypoglycemia or weight gain. Discovery Medieval physicians routinely tasted urine and wrote discourses on their observations. Which physician originally thought that diabetes mellitus was a renal disorder because of glucose discharged in urine is apparently now lost to history. The discovery of insulin eventually led to a diabetes management focus on the pancreas. 
Therapeutic strategies for diabetes have traditionally focused on enhancing endogenous insulin secretion and on improving insulin sensitivity. Over the previous decade, the role of the kidney in the development and maintenance of high glucose levels was examined, and this work led to the development of drugs that inhibit the sodium/glucose transporter 2 protein. Every day approximately 180 grams of glucose are filtered through the glomeruli and lost into the primary urine in healthy adults, but more than 90% of the glucose that is initially filtered is reabsorbed by a high-capacity system controlled by SGLT-2 in the early convoluted segment of the proximal tubules. Almost all remaining filtered glucose is reabsorbed by sodium/glucose transporter 1, so under normal circumstances almost all filtered glucose is reabsorbed and less than 100 mg of glucose finds its way into the urine of non-diabetic individuals. Phlorizin Phlorizin is a compound that has been known for over a century. It is a naturally occurring botanical glucoside that produces renal glucosuria and blocks intestinal glucose absorption through inhibition of the sodium/glucose symporters located in the proximal renal tubule and the mucosa of the small intestine. Phlorizin was first isolated in 1835 and was subsequently found to be a potent but rather non-selective inhibitor of both SGLT-1 and SGLT-2 proteins. Phlorizin seemed to have very interesting properties, and the results of animal studies were encouraging: it improved insulin sensitivity, and in diabetic rat models it seemed to increase urinary glucose excretion while plasma glucose concentrations normalized without hypoglycemia. Unfortunately, in spite of these properties, phlorizin was not suitable for clinical development for several reasons. Phlorizin has very poor oral bioavailability, as it is broken down in the gastrointestinal tract, so it has to be given parenterally. Phloretin, the active metabolite of phlorizin, is a potent inhibitor of facilitative glucose transporters, and phlorizin seems to lead to serious adverse events in the gastrointestinal tract such as diarrhea and dehydration. For these reasons, phlorizin was never pursued in humans. Although phlorizin was not suitable for further clinical trials, it served an important role in the development of SGLT-2 inhibitors. It served as a basis for the identification of SGLT inhibitors with improved safety and tolerability profiles. For example, these SGLT inhibitors are not associated with gastrointestinal adverse events, and their bioavailability is much greater. Inhibition of SGLT-2 results in better control of glucose levels, lower insulin, lower blood pressure and uric acid levels, and increased calorie wasting. Some data support the hypothesis that SGLT-2 inhibition may have direct renoprotective effects. These include actions to attenuate the tubular hypertrophy and hyperfiltration associated with diabetes and to reduce the tubular toxicity of glucose. Inhibition of SGLT-2 following treatment with dapagliflozin reduces the capacity for tubular glucose reabsorption by approximately 30–50%. Drug development Phlorizin consists of a glucose moiety and two aromatic rings (the aglycone moiety) joined by an alkyl spacer. Initially, phlorizin was isolated for the treatment of fever and infectious diseases, particularly malaria. 
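The figure of roughly 180 grams of glucose filtered per day quoted earlier in this section can be reproduced with simple arithmetic. The short sketch below is an illustration added here (not from the article), using typical textbook values for glomerular filtration rate and plasma glucose:

    gfr_l_per_day = 180.0          # glomerular filtration rate ≈ 125 mL/min ≈ 180 L/day
    plasma_glucose_g_per_l = 1.0   # ≈ 100 mg/dL in a non-diabetic adult
    filtered = gfr_l_per_day * plasma_glucose_g_per_l   # ≈ 180 g of glucose per day
    via_sglt2 = 0.90 * filtered                          # SGLT-2 handles ~90% of reabsorption
    print(filtered, via_sglt2)                           # -> 180.0 162.0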
According to Michael Nauck and colleagues, studies made on phlorizin in the 1950s showed that it could block sugar transport in the kidney, the small intestine, and a few other tissues. In the early 1990s, sodium/glucose cotransporter 2 was fully characterized, so the mechanism of phlorizin became of real interest. Later studies showed that the sugar-blocking effects of phlorizin were due to inhibition of the sodium/glucose cotransporter proteins. Most of the reported SGLT-2 inhibitors are glucoside analogs that can be traced to the o-aryl glucosides found in nature. The problem with using o-glucosides as SGLT-2 inhibitors is their instability, which can be traced to degradation by β-glucosidase in the small intestine. Because of that, o-glucosides given orally have to be prodrug esters. These prodrugs go through changes in the body leading to a carbon–carbon bond between the glucose and the aglycone moiety, so c-glucosides are formed from the o-glucosides. C-glucosides have a different pharmacokinetic profile than o-glucosides (e.g. half-life and duration of action) and are not degraded by β-glucosidase. The first discovered c-glucoside was the drug dapagliflozin. Dapagliflozin was the first highly selective SGLT-2 inhibitor approved by the European Medicines Agency. All SGLT-2 inhibitors in clinical development are prodrugs that have to be converted to their active 'A' form for activity. T-1095 Because phlorizin is a nonselective inhibitor with poor oral bioavailability, a phlorizin derivative called T-1095 was synthesised. T-1095 is a methyl carbonate prodrug that is absorbed into the circulation when given orally and is rapidly converted in the liver to the active metabolite T-1095A. By inhibiting SGLT-1 and SGLT-2, it increased urinary glucose excretion in diabetic animals. T-1095 did not proceed in clinical development, probably because of its inhibition of SGLT-1; non-selective SGLT inhibitors may also block glucose transporter 1 (GLUT-1). Because 90% of filtered glucose is reabsorbed through SGLT-2, research has focused specifically on SGLT-2. Inhibition of SGLT-1 may also lead to the genetic disease glucose-galactose malabsorption, which is characterized by severe diarrhea. ISIS 388626 According to preliminary findings on a novel method of SGLT-2 inhibition, the antisense oligonucleotide ISIS 388626 improved plasma glucose in rodents and dogs by reducing SGLT-2 mRNA expression in the proximal renal tubules by up to 80% when given once a week. It did not affect SGLT-1. Results of a study on long-term use of ISIS 388626 in non-human primates showed a more than 1000-fold increase in glucosuria without any associated hypoglycemia. This increase in glucosuria can be attributed to a dose-dependent reduction in the expression of SGLT-2, with the highest dose leading to a more than 75% reduction. In 2011, Ionis Pharmaceuticals initiated a clinical phase 1 study with ISIS-SGLT-2RX, a 12-nucleotide antisense oligonucleotide. Results from this study were published in 2017, and the treatment was "associated with unexpected renal effects". The authors concluded that "Before the concept of antisense-mediated blocking of SGLT2 with ISIS 388626 can be explored further, more preclinical data are needed to justify further investigations." 
Activity of SGLT-2 inhibitors in glycemic control Michael Nauck recounts that meta-analyses of studies on the activity of SGLT-2 inhibitors in glycemic control in type 2 diabetes mellitus patients show improvement in the control of glucose when compared with placebo, metformin, sulfonylureas, thiazolidinediones, insulin, and more. HbA1c was examined after SGLT-2 inhibitors were given alone (as monotherapy) and as an add-on therapy to the other diabetes medicines. The SGLT-2 inhibitors examined were dapagliflozin, canagliflozin, and others in the same drug class. The meta-analyses were compiled from studies ranging in duration from a few weeks up to more than 100 weeks. In summary, 10 mg of dapagliflozin showed a greater effect than placebo on the control of glucose when given for 24 weeks. When used as an add-on therapy to metformin, 10 mg dapagliflozin was not inferior to glipizide after 52 weeks of use, and 10 mg of dapagliflozin was likewise not inferior to metformin when both medicines were given as monotherapy for 24 weeks. The meta-analysis of canagliflozin showed that, compared with placebo, canagliflozin improved HbA1c. Meta-analysis studies also showed that 10 mg and 25 mg of empagliflozin improved HbA1c compared with placebo. Structure-activity relationship (SAR) The aglycones of both phlorizin and dapagliflozin have weak inhibitory effects on SGLT-1 and SGLT-2. Two synergistic forces are involved in the binding of inhibitors to SGLTs. Different sugars attached to the aglycone will change its orientation in the access vestibule, because one of the forces involved in binding is the binding of the sugar to the glucose site. The other force is the binding of the aglycone, which affects the binding affinity of the entire inhibitor. The discovery of T-1095 led to an investigation of how to enhance potency, selectivity, and oral bioavailability by adding various substituents to the glycoside core. One example is the change from o-glycosides to c-glycosides by creating a carbon–carbon bond between the glucose and the aglycone moiety. C-glucosides are more stable than o-glucosides, which leads to a modified half-life and duration of action. These modifications have also led to greater specificity for SGLT-2. C-glucosides that have a heterocyclic ring at the distal or proximal ring are better overall in terms of anti-diabetic effect and physicochemical features. A c-glucoside bearing a thiazole at the distal ring of canagliflozin has shown good physicochemical properties that can support clinical development, while still having the same anti-diabetic activity as dapagliflozin, as shown in tables 1 and 2. Song and colleagues prepared the thiazole compound starting from a carboxylic acid; from there, three steps gave a dapagliflozin-like compound bearing a thiazole ring. The inhibitory effects of the compounds on SGLT-2 were also tested by Song and colleagues. In tables 1, 2, and 3, the IC50 value changes depending on which group occupies the distal ring position, which substituent is in the C-4 position of the proximal phenyl ring, and how the thiazole ring is attached. Different compounds at the ring position gave quite different IC50 values in vitro. For example, there was a large difference between an n-pentyl group (IC50 = 13.3 nM), n-butyl (IC50 = 119 nM), phenyl with 2-furyl (IC50 = 0.720) or 3-thiophenyl (IC50 = 0.772). 
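To compare the potencies quoted above on a single scale, medicinal chemists often convert IC50 values to pIC50, the negative base-10 logarithm of the molar IC50, where a larger number means a more potent inhibitor. The snippet below is an illustration added here, not from the article, and it assumes that the two unitless IC50 values are also in nanomolar like the others:

    import math

    ic50_nM = {"n-pentyl": 13.3, "n-butyl": 119.0,
               "phenyl with 2-furyl": 0.720, "3-thiophenyl": 0.772}
    pic50 = {name: -math.log10(value * 1e-9) for name, value in ic50_nM.items()}
    for name in sorted(pic50, key=pic50.get, reverse=True):
        print(f"{name}: IC50 = {ic50_nM[name]} nM, pIC50 = {pic50[name]:.2f}")
    # The 2-furyl and 3-thiophenyl analogues come out around pIC50 9.1,
    # versus roughly 7.9 for n-pentyl and 6.9 for n-butyl.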
As seen in table 1, the in vitro activity varies depending on which group is bonded to the distal ring (given that a Cl atom occupies the C-4 position of the proximal phenyl ring). Table 1: Differences in in vitro activity depending on which group is bonded to the distal ring. *comparator to ethyl group (IC50 = 16.7 nM) In table 2, the in vitro activity changes depending on the substituent in the C-4 position of the proximal phenyl ring (X). Small substituents such as a methyl group or halogen atoms in the C-4 position gave IC50 values ranging from 0.72 to 36.7 nM (given that phenyl with 2-furyl is in the distal ring position). Table 2: Differences in in vitro activity depending on which substituent is in the C-4 position of the proximal phenyl ring. Table 3: Difference in the IC50 value depending on how the thiazole ring is connected (nothing else is changed in the structure; X = Cl, R = phenyl with 2-furyl). See also Sodium-glucose transport proteins SLC5A2 SGLT1 SGLT2 Dapagliflozin Empagliflozin Canagliflozin Ipragliflozin References Gliflozins
Discovery and development of gliflozins
Chemistry,Biology
3,527
45,001,671
https://en.wikipedia.org/wiki/Cyanoform
Cyanoform (tricyanomethane) is an organic compound with the chemical formula HC(CN)3. It is a colorless liquid. It is a cyanocarbon and derivative of methane with three cyano groups. For many years, chemists had been unable to isolate this compound as a neat, free acid. However, in September 2015, a successful isolation was reported. Properties Dilute solutions of this acid, as well as its salts, have long been well known. Cyanoform ranks as one of the most acidic of the carbon acids, with an estimated pKa of −5.1 in water and a measured pKa of 5.1 in acetonitrile. The reaction of sulfuric acid with sodium tricyanomethanide in water (a reaction first tried by H. Schmidtmann in 1896 with inconclusive results) is reported to result in the formation of hydronium tricyanomethanide or the formation of (Z)-3-amino-2-cyano-3-hydroxyacrylamide, depending on the precise conditions. The reaction of HCl gas with sodium tricyanomethanide dissolved in THF is reported to yield 1-chloro-1-amino-2,2-dicyanoethylene and its tautomer. Isolation In September 2015 cyanoform was successfully isolated by a team of scientists at Ludwig Maximilian University of Munich. The team found that cyanoform is stable only at temperatures below −40 °C, contrary to earlier reports that it was stable at room temperature. The isolation confirmed that cyanoform is a colorless liquid. References Organic acids Nitriles Substances discovered in the 2010s
Cyanoform
Chemistry
347
28,224,443
https://en.wikipedia.org/wiki/Toggle%20bolt
A toggle bolt, also known as a butterfly anchor, is a fastener for hanging objects on hollow walls such as drywall. Toggle bolts have wings that open inside a hollow wall, bracing against it to hold the fastener securely. The wings, once fully opened, greatly expand the surface area making contact with the back of the hollow wall. This ultimately spreads out the weight of the secured item, increasing the weight that can be secured compared to a regular bolt. See also Molly (fastener) References Fasteners Wall anchors
Toggle bolt
Engineering
110
4,634,262
https://en.wikipedia.org/wiki/Exec%20%28system%20call%29
In computing, exec is a functionality of an operating system that runs an executable file in the context of an already existing process, replacing the previous executable. This act is also referred to as an overlay. It is especially important in Unix-like systems, although it also exists elsewhere. As no new process is created, the process identifier (PID) does not change, but the machine code, data, heap, and stack of the process are replaced by those of the new program. The exec call is available for many programming languages, including compilable languages and some scripting languages. In OS command interpreters, the exec built-in command replaces the shell process with the specified program. Nomenclature Interfaces to exec and its implementations vary. Depending on programming language it may be accessible via one or more functions, and depending on operating system it may be represented with one or more actual system calls. For this reason exec is sometimes described as a collection of functions. Standard names of such functions in C are execl, execle, execlp, execv, execve, and execvp (see below), but not "exec" itself. The Linux kernel has one corresponding system call named "execve", whereas all aforementioned functions are user-space wrappers around it. Higher-level languages usually provide one call named exec. Unix, POSIX, and other multitasking systems C language prototypes The POSIX standard declares exec functions in the unistd.h header file, in the C language. The same functions are declared in process.h for DOS (see below), OS/2, and Microsoft Windows. Some implementations provide these functions named with a leading underscore (e.g. _execl). The base of each is exec (execute), followed by one or more letters: e – An array of pointers to environment variables is explicitly passed to the new process image. l – Command-line arguments are passed individually (a list) to the function. p – Uses the PATH environment variable to find the file named in the file argument to be executed. v – Command-line arguments are passed to the function as an array (vector) of pointers. path The path argument specifies the path name of the file to execute as the new process image. Arguments beginning at arg0 are pointers to arguments to be passed to the new process image. The argv value is an array of pointers to arguments. arg0 The first argument arg0 should be the name of the executable file. Usually it is the same value as the path argument. Some programs may incorrectly rely on this argument providing the location of the executable, but there is no guarantee of this nor is it standardized across platforms. envp Argument envp is an array of pointers to environment settings. The exec calls whose names end with an e alter the environment for the new process image by passing a list of environment settings through the envp argument. This argument is an array of character pointers; each element (except for the final element) points to a null-terminated string defining an environment variable. Each null-terminated string has the form: name=value where name is the environment variable name, and value is the value of that variable. The final element of the envp array must be null. In the execl, execlp, execv, and execvp calls, the new process image inherits the current environment variables. Effects A file descriptor open when an exec call is made remains open in the new process image, unless it was fcntl'ed with FD_CLOEXEC or opened with O_CLOEXEC (the latter was introduced in POSIX.1-2001). This aspect is used to specify the standard streams (stdin, stdout and stderr) of the new program. 
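The fork–exec idiom that these functions support can be illustrated with a short, self-contained C sketch; the program being run ("ls" found via PATH lookup) and its argument list are arbitrary examples chosen here, not taken from the text:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>   /* waitpid */
#include <unistd.h>     /* fork, execvp */

int main(void)
{
    pid_t pid = fork();               /* create a new process */
    if (pid == -1) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {
        /* Child: overlay this process image with "ls -l".
         * argv must be NULL-terminated; argv[0] is conventionally
         * the program name. */
        char *argv[] = { "ls", "-l", NULL };
        execvp("ls", argv);           /* PATH lookup ("p"), vector arguments ("v") */
        /* Only reached if the overlay failed; the PID is unchanged. */
        perror("execvp");
        _exit(127);
    }
    /* Parent: the child's PID is unchanged across exec, so it can be waited on. */
    int status;
    waitpid(pid, &status, 0);
    printf("child exited with status %d\n", WEXITSTATUS(status));
    return EXIT_SUCCESS;
}
```

Because the child's open file descriptors (including stdin, stdout and stderr) survive the overlay unless marked close-on-exec, the new program inherits the same standard streams, as described above.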
A successful overlay destroys the previous memory address space of the process, and all of its memory areas that were not shared are reclaimed by the operating system. Consequently, all of its data that were not passed to the new program, or otherwise saved, become lost. Return value A successful exec replaces the current process image, so it cannot return anything to the program that made the call. Processes do have an exit status, but that value is collected by the parent process. If an exec function does return to the calling program, an error has occurred, the return value is −1, and errno is set to an error code indicating the cause of the failure. DOS operating systems DOS is not a multitasking operating system, but replacing the previous executable image has great merit there due to harsh primary memory limitations and lack of virtual memory. The same API is used for overlaying programs in DOS and it has effects similar to those on POSIX systems. MS-DOS exec functions always load the new program into memory as if the "maximum allocation" in the program's executable file header were set to the default value 0xFFFF. The EXEHDR utility can be used to change the maximum allocation field of a program. However, if this is done and the program is invoked with one of the exec functions, the program might behave differently from a program invoked directly from the operating-system command line or with one of the spawn functions (see below). Command interpreters Many Unix shells also offer a builtin exec command that replaces the shell process with the specified program. Wrapper scripts often use this command to run a program (either directly or through an interpreter or virtual machine) after setting environment variables or other configuration. By using exec, the resources used by the shell program do not need to stay in use after the program is started. The exec command can also perform redirection. In some shells it is even possible to use the exec command for redirection only, without making an actual overlay. Alternatives The traditional Unix system does not have the functionality to create a new process running a new executable program in one step, which explains the importance of exec for Unix programming. Other systems may use spawn as the main tool for running executables. Its result is equivalent to the fork–exec sequence of Unix-like systems. POSIX supports the posix_spawn routines as an optional extension that usually is implemented using vfork. Other systems OS/360 and successors include a system call XCTL (transfer control) that performs a similar function to exec. See also Chain loading, overlaying in system programming exit (system call), terminate a process fork (system call), make a new process (but with the same executable) clone(), the way to create new threads PATH (variable), related to semantics of the file argument References External links Process (computing) POSIX Process.h Unix SUS2008 utilities System calls
Exec (system call)
Technology
1,391
11,917,317
https://en.wikipedia.org/wiki/Camp%20bed
A camp bed is a narrow, light-weight bed, often made of sturdy cloth stretched over a folding frame. The term camp bed is common in the United Kingdom, but in North America they are often referred to as cots. Camp beds are used by the military in temporary camps and in emergency situations where large numbers of people are in need of housing after disasters. They are also used for recreational purposes, such as overnight camping trips. Ancient history It is believed that King Tutankhamun, who reigned in Egypt from approximately 1332 to 1323 BC, may have had the first camping bed. When Tutankhamun's tomb was opened in 1922, a room full of furniture was found to contain a three-section camping bed that folded up into a Z shape. Though the king, who had a clubfoot, may never have taken part in long-distance explorations, the elaborate folding bed suggests he had an interest in camping and hunting. 18th- and early 19th-century history The New-York Historical Society owns a camp bed thought to have been used by General George Washington during the American Revolutionary War, including during the hard winter at Valley Forge. It is made in three sections, with each section consisting of a wood frame stretched with canvas, supported by an X-shaped wooden base with iron mounts. According to the donor, Washington gave the camp bed to his recording secretary, Richard Varick, at the close of the war. It was passed down through Varick's descendants until it was donated to the Historical Society in 1871. Napoleon Bonaparte and his high-ranking officers used camp beds with a frame of gilt copper. The bed's six legs had wheels, and its vertical poles could support a canopy. Striped twill was attached to the frame by means of hooks in the copper frame. Napoleon died in such a camp bed on 5 May 1821, on the island of Saint Helena. Gallery See also Camping chair Stretcher Wall bed References External links Video of George Washington's military camp bed Beds Portable furniture
Camp bed
Biology
413
7,764,195
https://en.wikipedia.org/wiki/Rademacher%27s%20theorem
In mathematical analysis, Rademacher's theorem, named after Hans Rademacher, states the following: If U is an open subset of R^n and f : U → R^m is Lipschitz continuous, then f is differentiable almost everywhere in U; that is, the points in U at which f is not differentiable form a set of Lebesgue measure zero. Differentiability here refers to infinitesimal approximability by a linear map, which in particular asserts the existence of the coordinate-wise partial derivatives. Sketch of proof The one-dimensional case of Rademacher's theorem is a standard result in introductory texts on measure-theoretic analysis. In this context, it is natural to prove the more general statement that any single-variable function of bounded variation is differentiable almost everywhere. (This one-dimensional generalization of Rademacher's theorem fails to extend to higher dimensions.) One of the standard proofs of the general Rademacher theorem was found by Charles Morrey. In the following, let f denote a Lipschitz-continuous function on R^n. The first step of the proof is to show that, for any fixed unit vector v, the v-directional derivative of f exists almost everywhere. This is a consequence of a special case of the Fubini theorem: a measurable set in R^n has Lebesgue measure zero if its restriction to every line parallel to v has (one-dimensional) Lebesgue measure zero. Considering in particular the set in R^n where the v-directional derivative of f fails to exist (which must be proved to be measurable), the latter condition is met due to the one-dimensional case of Rademacher's theorem. The second step of Morrey's proof establishes the linear dependence of the v-directional derivative of f upon v. This is based upon an integration-by-parts identity that relates difference quotients of f, integrated against a smooth compactly supported test function, to difference quotients of the test function itself. Using the Lipschitz assumption on f, the dominated convergence theorem can be applied to replace the two difference quotients in this identity by the corresponding v-directional derivatives. Then, based upon the known linear dependence of the v-directional derivative of the smooth test function upon v, the same can be proved of f via the fundamental lemma of calculus of variations. At this point in the proof, the gradient ∇f (defined as the n-tuple of partial derivatives) is guaranteed to exist almost everywhere; for each unit vector v, the dot product ∇f · v equals the v-directional derivative almost everywhere (although perhaps on a smaller set). Hence, for any countable collection of unit vectors v1, v2, ..., there is a single set E of measure zero such that the gradient and each vi-directional derivative exist everywhere on the complement of E, and are linked by the dot product. By selecting the vi to be dense in the unit sphere, it is possible to use the Lipschitz condition to prove the existence of every directional derivative everywhere on the complement of E, together with its representation as the dot product of the gradient with the direction. Morrey's proof can also be put into the context of generalized derivatives. Another proof, also via a reduction to the one-dimensional case, uses the technology of approximate limits. Applications Rademacher's theorem can be used to prove that, for any p, the Sobolev space W^{1,p} is preserved under a bi-Lipschitz transformation of the domain, with the chain rule holding in its standard form. With appropriate modification, this also extends to the more general Sobolev spaces W^{k,p}. Rademacher's theorem is also significant in the study of geometric measure theory and rectifiable sets, as it allows the analysis of first-order differential geometry, specifically tangent planes and normal vectors. 
Higher-order concepts such as curvature remain more subtle, since their usual definitions require more differentiability than is achieved by the Rademacher theorem. In the presence of convexity, second-order differentiability is achieved by the Alexandrov theorem, the proof of which can be modeled on that of the Rademacher theorem. In some special cases, the Rademacher theorem is even used as part of the proof. Generalizations Alberto Calderón proved the more general fact that if Ω is an open bounded set in R^n then every function in the Sobolev space W^{1,p}(Ω) is differentiable almost everywhere, provided that p > n. Calderón's theorem is a relatively direct corollary of the Lebesgue differentiation theorem and Sobolev embedding theorem. Rademacher's theorem is a special case, due to the fact that any Lipschitz function on Ω is an element of the space W^{1,∞}(Ω). There is a version of Rademacher's theorem that holds for Lipschitz functions from a Euclidean space into an arbitrary metric space in terms of metric differentials instead of the usual derivative. See also Pansu derivative References Sources External links (Rademacher's theorem with a proof is on page 18 and further.) Lipschitz maps Theorems in measure theory
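As a compact formal restatement of the theorem sketched above, the almost-everywhere differentiability can be written as follows; the symbols U, f, the null set N and the total derivative Df are my notation chosen to match the sketch, not taken verbatim from the article:

```latex
% Rademacher's theorem, stated formally.
% U \subseteq \mathbb{R}^n open, f : U \to \mathbb{R}^m Lipschitz.
\[
  \exists\, N \subset U \text{ with } \lambda^n(N) = 0 \ \text{ such that }\
  \forall\, x_0 \in U \setminus N : \quad
  \lim_{y \to x_0}
  \frac{\bigl| f(y) - f(x_0) - Df(x_0)\,(y - x_0) \bigr|}{\lvert y - x_0 \rvert} = 0 ,
\]
% where Df(x_0) is the m-by-n matrix of partial derivatives (the total
% derivative) and \lambda^n denotes n-dimensional Lebesgue measure.
```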
Rademacher's theorem
Mathematics
976
49,021,419
https://en.wikipedia.org/wiki/PET%20response%20criteria%20in%20solid%20tumors
PET response criteria in solid tumors (PERCIST) is a set of rules that define when tumors in cancer patients improve ("respond"), stay the same ("stabilize"), or worsen ("progress") during treatment, using positron emission tomography (PET). The criteria were published in May 2009 in the Journal of Nuclear Medicine (JNM). A pooled analysis from 2016 concluded that its application may give rather different results from RECIST, and might be a more suitable tool for understanding tumor response to treatment. Details Complete metabolic response (CMR) Complete resolution of 18F-FDG uptake within the measurable target lesion so that it is less than mean liver activity and at the level of surrounding background blood pool activity. Disappearance of all other lesions to background blood pool levels. No new suspicious 18F-FDG avid lesions. If there is progression by RECIST, it must be verified with follow-up. Partial metabolic response (PMR) Reduction of a minimum of 30% in target measurable tumor 18F-FDG SUL peak, with an absolute drop in SUL of at least 0.8 SUL units. No increase >30% of SUL or size in all other lesions. No new lesions. Stable metabolic disease (SMD) Not CMR, PMR, or progressive metabolic disease (PMD). No new lesions. Progressive metabolic disease (PMD) >30% increase in 18F-FDG SUL peak, with a >0.8 SUL unit increase in tumor SUL peak from the baseline scan, in a pattern typical of tumor and not of infection/treatment effect. or Visible increase in the extent of 18F-FDG tumor uptake. or New 18F-FDG avid lesions which are typical of cancer and not related to treatment effect or infection. See also Response evaluation criteria in solid tumors References Cancer research Nuclear medicine PET radiotracers Positron emission tomography
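To make the SULpeak thresholds above concrete, the following is a minimal C sketch of how a single target lesion could be classified from its baseline and follow-up SULpeak values alone. The function name and example numbers are mine, and the simplification is deliberate: it ignores the CMR criteria, new lesions, and non-target lesions, all of which PERCIST also requires.

```c
#include <stdio.h>

typedef enum { PMR, PMD, SMD } MetabolicResponse;

/* Classify one target lesion by PERCIST-style SULpeak thresholds:
 * - PMR: at least a 30% decrease AND an absolute drop of at least 0.8 SUL units
 * - PMD: more than a 30% increase AND an absolute rise of more than 0.8 SUL units
 * - otherwise SMD (CMR and new-lesion rules are not modeled here). */
MetabolicResponse classify_lesion(double baseline_sul, double followup_sul)
{
    double delta = followup_sul - baseline_sul;
    double percent = 100.0 * delta / baseline_sul;

    if (percent <= -30.0 && delta <= -0.8)
        return PMR;
    if (percent > 30.0 && delta > 0.8)
        return PMD;
    return SMD;
}

int main(void)
{
    /* Example values are illustrative only. */
    double baseline = 8.0, followup = 4.9;
    static const char *names[] = { "partial metabolic response",
                                   "progressive metabolic disease",
                                   "stable metabolic disease" };
    printf("Lesion: %.1f -> %.1f SULpeak: %s\n",
           baseline, followup, names[classify_lesion(baseline, followup)]);
    return 0;
}
```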
PET response criteria in solid tumors
Physics,Chemistry
392
72,524,757
https://en.wikipedia.org/wiki/Hybridization%20in%20perennial%20plants
Hybridization, when new offspring arise from crosses between individuals of the same or different species, results in the assemblage of diverse genetic material and can act as a stimulus for evolution. Hybrid species are often more vigorous than, and genetically differentiated from, their ancestors. There are two main forms of hybridization: natural hybridization, which occurs in an uncontrolled environment, and artificial hybridization (breeding), which is carried out primarily for agricultural purposes. Types There are mainly two types of hybridization: interspecific and intraspecific. Interspecific hybridization is the mating process between two different species. Intraspecific hybridization is the mating process within a species, often between genetically distinct lineages. Hybridization sometimes results in introgression, which can occur in response to habitat disturbance that puts plant species into contact with each other. Introgression is gene transfer among taxa and is a result of hybridization, followed by repeated backcrossing with parental individuals. Introgressive hybridization occurs often in plants, and results in increased genetic variation, which can facilitate rapid response to climate change. Hybridization in perennial plant systems Hybridization is considered to be an evolutionary catalyst capable of generating novel genotypes or phenotypes in a single generation. It can also happen with morphologically dissimilar but closely related species (for example, Helianthus giganteus, the giant sunflower). In plants, hybridization frequently generates speciation events and commonly produces polyploid species. Polyploidy is also a significant factor in understanding hybridization events (for example, an F1 hybrid of Jatropha curcas × Ricinus communis), because these polyploids tend to have an advantage in the early stages of adaptation due to their expanded genomes. As a result, hybridization can be a powerful driver for improving agricultural crops, but can also facilitate unwanted species invasions (e.g., annual sunflower). While hybridization in perennial plants can occur naturally, for example as the result of cross breeding with wild type relatives near agricultural fields, intentional hybridization in perennial crops has also been of recent interest in agriculture. While hybridization and breeding methods have produced successful crop species, declining yield is a major challenge. Thus, further research is needed to leverage hybridization in perennial crop systems to produce sustainable and high-yielding crops. Some methods that are currently being explored include applying modern genotyping, phenotyping, and speed breeding techniques. When crosses in the laboratory are difficult, researchers can study hybrid zones that arise naturally in the field. For efforts to leverage hybridization to improve perennial crops to be successful, there need to be continued efforts toward building a broad collection of crop wild relatives, sequencing the genomes of related species, creating and phenotyping desired hybrid populations, and developing a network of genotype–phenotype associations that can feed phenotypes into crop breeding pipelines. Hybridization among perennials is also of interest because perennials may hybridize naturally or artificially with annual crops. In one of the most dietarily and economically significant examples, Dewey (1984) finds that a perennial Agropyron has hybridized with hexaploid wheat. 
Dewey finds that an ancient hybridization event contributed significantly to the modern hexaploid multi-genome; like all other currently grown Triticeae crops, wheat itself is an annual. References Perennials Hybridization
Hybridization in perennial plants
Biology
697
1,813,514
https://en.wikipedia.org/wiki/Free-fall%20time
The free-fall time is the characteristic time that would take a body to collapse under its own gravitational attraction, if no other forces existed to oppose the collapse. As such, it plays a fundamental role in setting the timescale for a wide variety of astrophysical processes—from star formation to helioseismology to supernovae—in which gravity plays a dominant role. Derivation Infall to a point source of gravity It is relatively simple to derive the free-fall time by applying Kepler's Third Law of planetary motion to a degenerate elliptic orbit. Consider a point mass m at distance R from a point source of mass M which falls radially inward to it. (Crucially, Kepler's Third Law depends only on the semi-major axis of the orbit, and does not depend on the eccentricity.) A purely radial trajectory is an example of a degenerate ellipse with an eccentricity of 1 and semi-major axis R/2. Therefore, the time it would take a body to fall inward, turn around, and return to its original position is the same as the period of a circular orbit of radius R/2, by Kepler's Third Law: t_orbit = 2π sqrt((R/2)^3 / (G M)). To see that the semi-major axis is R/2, we must examine properties of orbits as they become increasingly elliptical. Kepler's First Law states that an orbit is an ellipse with the center of mass as one focus. In the case of a very small mass m falling toward a very large mass M, the center of mass is within the larger mass. The focus of an ellipse is increasingly off-center with increasing ellipticity. In the limiting case of a degenerate ellipse with an eccentricity of 1, the largest diameter of the orbit extends from the initial position of the infalling object to the point source of mass M. In other words, the ellipse becomes a line of length R. The semi-major axis is half the width of the ellipse along the long axis, which in the degenerate case becomes R/2. If the free-falling body completed a full orbit, it would begin at distance R from the point source mass M, fall inward until it reached that point source, then return to its original position. In real systems, the point source mass isn't truly a point source and the infalling body eventually collides with some surface. Thus, it only completes half the orbit. But the orbit is symmetrical so the free-fall time is half the period: t_ff = (1/2) t_orbit = (π/2) sqrt(R^3 / (2 G M)). (This formula also follows from the formula for the falling time as a function of position.) For example, the time for an object in the orbit of the Earth around the Sun with period T = 1 year to fall into the Sun if it were suddenly stopped in orbit would be t_ff = T / (4 sqrt(2)) ≈ 0.177 years. This is about 64.6 days. Infall of a spherically-symmetric distribution of mass Now, consider a case where the mass M is not a point mass, but is distributed in a spherically-symmetric distribution about the center, with an average mass density of ρ, where the volume of a sphere of radius R is V = (4/3) π R^3. Let us assume that the only force acting is gravity. Then, as first demonstrated by Newton, and can easily be demonstrated using the divergence theorem, the acceleration of gravity at any given distance R from the center of the sphere depends only upon the total mass contained within R. The consequence of this result is that if one imagined breaking the sphere up into a series of concentric shells, each shell would collapse only subsequent to the shells interior to it, and no shells cross during collapse. As a result, the free-fall time of a test particle at R can be expressed solely in terms of the total mass M interior to it. In terms of the average density ρ interior to R, the free-fall time is t_ff = sqrt(3π / (32 G ρ)) ≈ 0.5427 / sqrt(G ρ), where the latter expression is in SI units. 
This result is exactly the same as from the previous section when M = (4/3) π R^3 ρ. Applications The free-fall time is a very useful estimate of the relevant timescale for a number of astrophysical processes. To get a sense of its application, we may write t_ff = sqrt(3π / (32 G ρ)), which for a body of mean density 1 g/cm3 gives a free-fall time of roughly 35 minutes. Comparison For an object falling from infinity in a capture orbit, the time it takes from a given position to fall to the central point mass is the same as the free-fall time, except for a constant factor. References Galactic dynamics Binney, James; Tremaine, Scott. Princeton University Press, 1987. Celestial mechanics Falling
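As a quick check of the two numerical values quoted in this article (64.6 days and roughly 35 minutes), the arithmetic can be written out explicitly; the constants substituted below (G and the length of the year) are standard values inserted by me, not part of the original text:

```latex
% Free fall from Earth's orbit to the Sun: half the period of an orbit
% with semi-major axis a = (1 AU)/2.
\[
  t_{\mathrm{ff}} \;=\; \frac{T}{4\sqrt{2}}
  \;=\; \frac{365.25\ \mathrm{d}}{4\sqrt{2}}
  \;\approx\; 64.6\ \mathrm{d}.
\]
% Homogeneous body of density rho = 1 g/cm^3 = 1000 kg/m^3:
\[
  t_{\mathrm{ff}} \;=\; \sqrt{\frac{3\pi}{32\,G\rho}}
  \;=\; \sqrt{\frac{3\pi}{32 \times 6.674\times 10^{-11} \times 1000}}\ \mathrm{s}
  \;\approx\; 2.1\times 10^{3}\ \mathrm{s}
  \;\approx\; 35\ \mathrm{min}.
\]
```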
Free-fall time
Physics
905
33,797,591
https://en.wikipedia.org/wiki/MIT150
The MIT150 is a list published by the Boston Globe, in honor of the 150th anniversary of the Massachusetts Institute of Technology (MIT) in 2011, listing 150 of the most significant innovators, inventions or ideas from MIT, its alumni, faculty, and related people and organizations in the 150 year history of the institute. The top 30 innovators and inventions on the list are: Tim Berners-Lee, inventor of the World Wide Web Eric Lander, team leader for sequencing one-third of the Human Genome William Shockley, inventor of the solid-state transistor Ray Tomlinson, inventor of the "@" symbol use in email addresses Phillip A. Sharp, founder of Biogen Idec Ken Olsen and Harlan Anderson, founders of Digital Equipment Corp. Helen Greiner and Colin Angle, founders of iRobot Corp. Ellen Swallow Richards, nutrition expert, and the first woman admitted to MIT Amar Bose, founder of Bose Corporation Ivan Getting, founder of Aerospace Corp., co-inventor of GPS Salvador Luria, father of modern biology Joseph Jacobson, co-founder of E Ink Dan Bricklin, Bob Frankston, inventors of VisiCalc Brewster Kahle, founder of the Internet Archive Daniel Lewin, F. Thomson Leighton, co-founders of Akamai Vannevar Bush, science advisor to President Franklin D. Roosevelt, founder of Raytheon, father of the National Science Foundation Pietro Belluschi, dean of the MIT School of Architecture and Planning Ron Rivest, Adi Shamir, Leonard Adleman, inventors of RSA cryptography Charles Draper, inventor of the first inertial guidance system Herbert Kalmus, Daniel Comstock, cofounders of Technicolor John Dorrance, inventor of Campbell Soup David Baltimore, Nobel laureate Robert Weinberg, cofounder of the Whitehead Institute William Thompson Sedgwick, founder of the Harvard School of Public Health Alfred P. Sloan, CEO of General Motors William Hewlett, cofounder of Hewlett Packard Marc Raibert, founder of Boston Dynamics and creator of BigDog Hugh Herr, founder of iWalk and head of the Biomechatronics research group at the MIT Media Lab Hoyt C. Hottel, oil industry pioneer Robert Swanson, cofounder of Genentech See also List of awards for contributions to culture Massachusetts Institute of Technology References External links MIT150 special from the Boston Globe Business and industry awards Science and technology awards Academic awards Awards established in 2011 Invention awards Massachusetts_Institute_of_Technology
MIT150
Technology
525
33,069,429
https://en.wikipedia.org/wiki/Hyeon%20Taeghwan
Taeghwan Hyeon (; born in 1964) is a South Korean chemist. He is SNU distinguished professor in the School of Chemical and Biological Engineering at Seoul National University, director of Center for Nanoparticle Research of Institute for Basic Science (IBS), and an associate editor of the Journal of the American Chemical Society. Hyeon is recognized by his pioneering work in chemical synthesis of uniformly sized nanocrystals and various applications of functional nanomaterials. In 2011, he was listed as the 37th most cited chemist and the 19th in materials science among “Top 100 Chemists” of the decade by UNESCO&IUPAC. He has published over 350 papers in prominent international journals with more than 70,000 citations and an h-index of 137. Since 2014, he has been listed as a Highly Cited Researcher in chemistry and materials science by Clarivate Analytics and became a Clarivate Citation Laureate in 2020. Biography Hyeon was born in Dalseong County, Daegu, South Korea. He received his B.A. degree in 1987 and M.S. in 1989 from Chemistry Department of Seoul National University, and Ph.D. in inorganic chemistry from University of Illinois at Urbana-Champaign in 1996 under the supervision of Kenneth S. Suslick. At Illinois Hyeon studied sonochemical synthesis of nanostructured catalytic and magnetic materials. From June 1996 to July 1997, he was a postdoctoral research associate in the Wolfgang M. H. Sachtler group at Northwestern University. He joined the faculty of the School of Chemical and Biological Engineering at Seoul National University in 1997. Career Hyeon is a leading scientist in the area of synthesis, assembly, and biomedical applications of uniformly sized nanoparticles. In particular, his research group developed a new generalized synthetic strategy, called "heat-up process", for producing uniform-sized nanoparticles of many transition metals and oxides without a size selection process. With this simple and inexpensive method, his group went on to design and fabricate multifunctional nanostructured materials for biomedical applications. Hyeon developed a new T1 MRI contrast agent using biocompatible manganese oxide (MnO) nanoparticles, exhibiting detailed anatomic structures of mouse brain. His group reported on the fabrication of monodisperse magnetite nanoparticles immobilized with uniform pore-sized mesoporous silica spheres for simultaneous MRI, fluorescence imaging, and drug delivery. The first demonstration of high-resolution in vivo three-photon imaging using biocompatible and bright Mn2+ doped ZnS nanocrystals was published in 2013. Uniformly sized iron oxide nanoclusters could be successfully used as T1 MR contrast agent for high-resolution MR angiography of macaque monkeys. His research interests also includes engineering the architecture of nanomaterials and utilizing them in lithium-ion batteries, fuel cell electrocatalysts, solar cells, and thermoelectrics. The group reported in 2013 the first demonstration of galvanic replacement reactions in metal oxide nanocrystals, and were able to synthesize hollow nanocrystals of various multimetallic oxides including Mn3O4/γ-Fe2O3. (Ref) He has delivered more than 30 invited lectures in conferences sponsored by the Materials Research Society, American Chemical Society, and Gordon Research Conferences, and more than 20 invited lectures at UC-Berkeley, Stanford, Harvard, MIT, Cornell, and Columbia. 
Honors and awards 2024: International Member, National Academy of Engineering 2023: International Fellow of Royal Swedish Academy of Engineering Sciences 2022: National Academy of Engineering of Korea Award 2020: Korea's Top 5 Bio-Field Research Results and News 2020: Clarivate Citation Laureate 2017: SNU Distinguished Professor 2016: Top Scientist and Technologist Award of Korea, () 2016: IUVSTA Prize for Technology (International Union for Vacuum Science, Technique and Applications) 2014–: Highly Cited Researcher, Clarivate Analytics, chemistry (2014–2019), materials science (2014–2019) 2013: Fellow of Materials Research Society 2012: Member of National Academy of Engineering of Korea 2012: Ho-Am Prize in Engineering, Samsung Hoam Foundation 2011: 100 Leaders in Korea, The Dong-A Ilbo 2000–2010: "Top 100 Chemists" of the decade, Thomson Reuters (2011) 2010: Member of Korean Academy of Science and Technology 2010: SNU Distinguished Fellow 2008: POSCO TJ Park Prize, POSCO TJ Park Foundation 2007: Shinyang Science Award, Shinyang Foundation 2006: Fellow of Royal Society of Chemistry, UK 2005: Excellent Researcher Award, Division of Inorganic Chemistry of the Korean Chemical Society 2005: The 4th DuPont Science and Technology Award, DuPont Korea 2002: Scientist of the Month Award, Ministry of Science and Technology, Korea 2002: 5th Korean Young Scientist Award, Awarded to one researcher in a given field per every other year by the President of South Korea 2001: Korean Chemical Society-Wiley Young Chemist Award, Korean Chemical Society 2001: Young Scientist Award, Korean Academy of Science and Technology 1996: T. S. Piper Award, University of Illinois at Urbana-Champaign, Inorganic Chemistry 1993–1996:University of Illinois Chemistry Department Fellowship 1991–1996: Korean Government Overseas Fellowship References External links Hyeon Research Group Google Scholar User Profile A Master of Nanoscience Moves Closer to the Top South Korean scientists Living people Nanotechnologists Seoul National University alumni University of Illinois Urbana-Champaign alumni South Korean expatriates in the United States People from Daegu 1964 births Institute for Basic Science South Korean chemists Recipients of the Ho-Am Prize in Engineering Academic staff of Seoul National University POSCO TJ Park Prize
Hyeon Taeghwan
Materials_science
1,181
53,147,574
https://en.wikipedia.org/wiki/Bacterial%20recombination
Bacterial recombination is a type of genetic recombination in bacteria characterized by DNA transfer from one organism called donor to another organism as recipient. This process occurs in three main ways: Transformation, the uptake of exogenous DNA from the surrounding environment. Transduction, the virus-mediated transfer of DNA between bacteria. Conjugation, the transfer of DNA from one bacterium to another via cell-to-cell contact. The final result of conjugation, transduction, and/or transformation is the production of genetic recombinants, individuals that carry not only the genes they inherited from their parent cells but also the genes introduced to their genomes by conjugation, transduction, and/or transformation. Recombination in bacteria is ordinarily catalyzed by a RecA type of recombinase. These recombinases promote repair of DNA damages by homologous recombination. The ability to undergo natural transformation is present in at least 67 bacterial species. Natural transformation is common among pathogenic bacterial species. In some cases, the DNA repair capability provided by recombination during transformation facilitates survival of the infecting bacterial pathogen. Bacterial transformation is carried out by numerous interacting bacterial gene products. Evolution Evolution in bacteria was previously viewed as a result of mutation or genetic drift. Today, genetic exchange, or gene transfer is viewed as a major driving force in the evolution of prokaryotes. This driving force has been widely studied in organisms like E. coli. Bacteria reproduces asexually, where daughter cells are clones of the parent. This clonal nature leads to random mutations that occur during DNA replication that potentially helps bacteria evolve. It was originally thought that only accumulated mutations helped bacteria evolve. In contrast, bacteria also import genes in a process called homologous recombination, first discovered by the observation of mosaic genes at loci encoding antibiotic resistance. The discovery of homologous recombination has made an impact on the understanding of bacterial evolution. The importance of evolution in bacterial recombination is its adaptivity. For example, bacterial recombination has been shown to promote the transfer of multi drug resistance genes via homologous recombination that goes beyond levels purely obtained by mutation. Mechanisms of bacterial recombination Bacterial recombination undergoes various different processes. The processes include: transformation, transduction, conjugation and homologous recombination. Homologous recombination relies on cDNA transferring genetic material. Complementary DNA sequences transport genetic material in the identical homologous chromosomes. The paternal and maternal paired chromosomes will align in order for the DNA sequences to undergo the process of crossing over. Transformation involves the uptake of exogenous DNA from the encircling environment. DNA fragments from a degraded bacterium will transfer into the surrounding, competent bacterium resulting in an exchange of DNA from the recipient. Transduction is associated with viral-mediated vectors transferring DNA material from one bacterium to another within the genome. Bacterial DNA is placed into the bacteriophage genome via bacterial transduction. In bacterial conjugation, DNA is transferred via cell-to-cell communication. Cell-to-cell communication may involve plasmids that allow for the transfer of DNA into another neighboring cell. 
The neighboring cells absorb the F-plasmid (fertility plasmid: inherited genetic material that can also be present in the chromosome). The recipient and donor cell come into contact during an F-plasmid transfer. The cells undergo horizontal gene transfer, in which the genetic material is transferred. Mechanisms for double-stranded breaks The RecBCD pathway in homologous recombination repairs double-strand breaks in degraded DNA in bacteria. Base pairs attached to the DNA strands undergo an exchange at a Holliday junction. In the second step of bacterial recombination, branch migration, the base pairs of the homologous DNA strands are continuously interchanged at a Holliday junction. This results in the formation of two DNA duplexes. The RecBCD pathway exhibits helicase activity, unzipping the DNA duplex, and stops when it reaches the nucleotide sequence 5′-GCTGGTGG-3′. This nucleotide sequence is known as the Chi site. The RecBCD enzyme changes its activity after reaching the Chi site. The RecF pathway repairs the degradation of the DNA strands. See also Genetic recombination References Gene expression Modification of genetic information
Bacterial recombination
Chemistry,Biology
929
6,564,813
https://en.wikipedia.org/wiki/CD3%20%28immunology%29
CD3 (cluster of differentiation 3) is a protein complex and T cell co-receptor that is involved in activating both the cytotoxic T cell (CD8+ naive T cells) and T helper cells (CD4+ naive T cells). It is composed of four distinct chains. In mammals, the complex contains a CD3γ chain, a CD3δ chain, and two CD3ε chains. These chains associate with the T-cell receptor (TCR) and the CD3-zeta (ζ-chain) to generate an activation signal in T lymphocytes. The TCR, CD3-zeta, and the other CD3 molecules together constitute the TCR complex. Structure The CD3γ, CD3δ, and CD3ε chains are highly related cell-surface proteins of the immunoglobulin superfamily containing a single extracellular immunoglobulin domain. A structure of the extracellular and transmembrane regions of the CD3γε/CD3δε/CD3ζζ/TCRαβ complex was solved with CryoEM, showing for the first time how the CD3 transmembrane regions enclose the TCR transmembrane regions in an open barrel. Containing aspartate residues, the transmembrane region of the CD3 chains is negatively charged, a characteristic that allows these chains to associate with the positively charged TCR chains. The intracellular tails of the CD3γ, CD3ε, and CD3δ molecules each contain a single conserved motif known as an immunoreceptor tyrosine-based activation motif, or ITAM for short, which is essential for the signaling capacity of the TCR. The intracellular tail of CD3ζ contains 3 ITAM motifs. Regulation Phosphorylation of the ITAM on CD3 renders the CD3 chain capable of binding an enzyme called ZAP70 (zeta associated protein), a kinase that is important in the signaling cascade of the T cell. As a drug target Immunosuppressant Because CD3 is required for T-cell activation, drugs (often monoclonal antibodies) that target it are being investigated as immunosuppressant therapies (e.g., otelixizumab, teplizumab) for type 1 diabetes and other autoimmune diseases. Cancer immunotherapy New anticancer drug treatments are being developed based upon the CD3 T cell co-receptor, with molecules being designed for altering the co-stimulatory signal to help get the T-cell to recognize the cancer cell and become fully activated. Cancers that possess the B7-H3 immunoregulatory checkpoint receptor on the tumor cell have been one such target in clinical trials. This B7-H3 protein is expressed on cancer cell for several types of cancer. Often, the drug will contain two domains, one binding the T-cell's CD3 and the other targeting and binding cancer cells. Immunohistochemistry CD3 is initially expressed in the cytoplasm of pro-thymocytes, the stem cells from which T-cells arise in the thymus. The pro-thymocytes differentiate into common thymocytes, and then into medullary thymocytes, and it is at this latter stage that CD3 antigen begins to migrate to the cell membrane. The antigen is found bound to the membranes of all mature T-cells, and in virtually no other cell type, although it does appear to be present in small amounts in Purkinje cells. This high specificity, combined with the presence of CD3 at all stages of T-cell development, makes it a useful immunohistochemical marker for T cells in tissue sections. The antigen remains present in almost all T-cell lymphomas and leukaemias, and can therefore be used to distinguish them from superficially similar B-cell and myeloid neoplasms. References Further reading External links Mouse CD Antigen Chart Human CD Antigen Chart Clusters of differentiation T cells Immunology
CD3 (immunology)
Biology
853
113,021
https://en.wikipedia.org/wiki/Intrusion%20detection%20system
An intrusion detection system (IDS) is a device or software application that monitors a network or systems for malicious activity or policy violations. Any intrusion activity or violation is typically either reported to an administrator or collected centrally using a security information and event management (SIEM) system. A SIEM system combines outputs from multiple sources and uses alarm filtering techniques to distinguish malicious activity from false alarms. IDS types range in scope from single computers to large networks. The most common classifications are network intrusion detection systems (NIDS) and host-based intrusion detection systems (HIDS). A system that monitors important operating system files is an example of an HIDS, while a system that analyzes incoming network traffic is an example of an NIDS. It is also possible to classify IDS by detection approach. The most well-known variants are signature-based detection (recognizing bad patterns, such as exploitation attempts) and anomaly-based detection (detecting deviations from a model of "good" traffic, which often relies on machine learning). Another common variant is reputation-based detection (recognizing the potential threat according to the reputation scores). Some IDS products have the ability to respond to detected intrusions. Systems with response capabilities are typically referred to as an intrusion prevention system (IPS). Intrusion detection systems can also serve specific purposes by augmenting them with custom tools, such as using a honeypot to attract and characterize malicious traffic. Comparison with firewalls Although they both relate to network security, an IDS differs from a firewall in that a conventional network firewall (distinct from a next-generation firewall) uses a static set of rules to permit or deny network connections. It implicitly prevents intrusions, assuming an appropriate set of rules have been defined. Essentially, firewalls limit access between networks to prevent intrusion and do not signal an attack from inside the network. An IDS describes a suspected intrusion once it has taken place and signals an alarm. An IDS also watches for attacks that originate from within a system. This is traditionally achieved by examining network communications, identifying heuristics and patterns (often known as signatures) of common computer attacks, and taking action to alert operators. A system that terminates connections is called an intrusion prevention system, and performs access control like an application layer firewall. Intrusion detection category IDS can be classified by where detection takes place (network or host) or the detection method that is employed (signature or anomaly-based). Analyzed activity Network intrusion detection systems Network intrusion detection systems (NIDS) are placed at a strategic point or points within the network to monitor traffic to and from all devices on the network. It performs an analysis of passing traffic on the entire subnet, and matches the traffic that is passed on the subnets to the library of known attacks. Once an attack is identified, or abnormal behavior is sensed, the alert can be sent to the administrator. NIDS function to safeguard every device and the entire network from unauthorized access. An example of an NIDS would be installing it on the subnet where firewalls are located in order to see if someone is trying to break into the firewall. 
Ideally one would scan all inbound and outbound traffic, however doing so might create a bottleneck that would impair the overall speed of the network. OPNET and NetSim are commonly used tools for simulating network intrusion detection systems. NID Systems are also capable of comparing signatures for similar packets to link and drop harmful detected packets which have a signature matching the records in the NIDS. When we classify the design of the NIDS according to the system interactivity property, there are two types: on-line and off-line NIDS, often referred to as inline and tap mode, respectively. On-line NIDS deals with the network in real time. It analyses the Ethernet packets and applies some rules, to decide if it is an attack or not. Off-line NIDS deals with stored data and passes it through some processes to decide if it is an attack or not. NIDS can be also combined with other technologies to increase detection and prediction rates. Artificial Neural Network (ANN) based IDS are capable of analyzing huge volumes of data due to the hidden layers and non-linear modeling, however this process requires time due its complex structure. This allows IDS to more efficiently recognize intrusion patterns. Neural networks assist IDS in predicting attacks by learning from mistakes; ANN based IDS help develop an early warning system, based on two layers. The first layer accepts single values, while the second layer takes the first's layers output as input; the cycle repeats and allows the system to automatically recognize new unforeseen patterns in the network. This system can average 99.9% detection and classification rate, based on research results of 24 network attacks, divided in four categories: DOS, Probe, Remote-to-Local, and user-to-root. Host intrusion detection systems Host intrusion detection systems (HIDS) run on individual hosts or devices on the network. A HIDS monitors the inbound and outbound packets from the device only and will alert the user or administrator if suspicious activity is detected. It takes a snapshot of existing system files and matches it to the previous snapshot. If the critical system files were modified or deleted, an alert is sent to the administrator to investigate. An example of HIDS usage can be seen on mission critical machines, which are not expected to change their configurations. Detection method Signature-based Signature-based IDS is the detection of attacks by looking for specific patterns, such as byte sequences in network traffic, or known malicious instruction sequences used by malware. This terminology originates from anti-virus software, which refers to these detected patterns as signatures. Although signature-based IDS can easily detect known attacks, it is difficult to detect new attacks, for which no pattern is available. In signature-based IDS, the signatures are released by a vendor for all its products. On-time updating of the IDS with the signature is a key aspect. Anomaly-based Anomaly-based intrusion detection systems were primarily introduced to detect unknown attacks, in part due to the rapid development of malware. The basic approach is to use machine learning to create a model of trustworthy activity, and then compare new behavior against this model. Since these models can be trained according to the applications and hardware configurations, machine learning based method has a better generalized property in comparison to traditional signature-based IDS. 
Although this approach enables the detection of previously unknown attacks, it may suffer from false positives: previously unknown legitimate activity may also be classified as malicious. Most existing IDSs suffer from a time-consuming detection process that degrades their performance. Efficient feature selection algorithms make the classification process used in detection more reliable. New types of what could be called anomaly-based intrusion detection systems are being viewed by Gartner as User and Entity Behavior Analytics (UEBA) (an evolution of the user behavior analytics category) and network traffic analysis (NTA). In particular, NTA deals with malicious insiders as well as targeted external attacks that have compromised a user machine or account. Gartner has noted that some organizations have opted for NTA over more traditional IDS. Intrusion prevention Some systems may attempt to stop an intrusion attempt but this is neither required nor expected of a monitoring system. Intrusion detection and prevention systems (IDPS) are primarily focused on identifying possible incidents, logging information about them, and reporting attempts. In addition, organizations use IDPS for other purposes, such as identifying problems with security policies, documenting existing threats and deterring individuals from violating security policies. IDPS have become a necessary addition to the security infrastructure of nearly every organization. IDPS typically record information related to observed events, notify security administrators of important observed events and produce reports. Many IDPS can also respond to a detected threat by attempting to prevent it from succeeding. They use several response techniques, which involve the IDPS stopping the attack itself, changing the security environment (e.g. reconfiguring a firewall) or changing the attack's content. Intrusion prevention systems (IPS), also known as intrusion detection and prevention systems (IDPS), are network security appliances that monitor network or system activities for malicious activity. The main functions of intrusion prevention systems are to identify malicious activity, log information about this activity, report it and attempt to block or stop it. Intrusion prevention systems are considered extensions of intrusion detection systems because they both monitor network traffic and/or system activities for malicious activity. The main difference is that, unlike intrusion detection systems, intrusion prevention systems are placed in-line and are able to actively prevent or block intrusions that are detected. IPS can take such actions as sending an alarm, dropping detected malicious packets, resetting a connection or blocking traffic from the offending IP address. An IPS also can correct errors, defragment packet streams, mitigate TCP sequencing issues, and clean up unwanted transport and network layer options. Classification Intrusion prevention systems can be classified into four different types: Network-based intrusion prevention system (NIPS): monitors the entire network for suspicious traffic by analyzing protocol activity. Wireless intrusion prevention system (WIPS): monitors a wireless network for suspicious traffic by analyzing wireless networking protocols. Network behavior analysis (NBA): examines network traffic to identify threats that generate unusual traffic flows, such as distributed denial of service (DDoS) attacks, certain forms of malware and policy violations.
Host-based intrusion prevention system (HIPS): an installed software package which monitors a single host for suspicious activity by analyzing events occurring within that host. Detection methods The majority of intrusion prevention systems utilize one of three detection methods: signature-based, statistical anomaly-based, and stateful protocol analysis. Signature-based detection: Signature-based IDS monitors packets in the Network and compares with pre-configured and pre-determined attack patterns known as signatures. While it is the simplest and most effective method, it fails to detect unknown attacks and variants of known attacks. Statistical anomaly-based detection: An IDS which is anomaly-based will monitor network traffic and compare it against an established baseline. The baseline will identify what is "normal" for that network – what sort of bandwidth is generally used and what protocols are used. It may however, raise a False Positive alarm for legitimate use of bandwidth if the baselines are not intelligently configured. Ensemble models that use Matthews correlation co-efficient to identify unauthorized network traffic have obtained 99.73% accuracy. Stateful protocol analysis detection: This method identifies deviations of protocol states by comparing observed events with "pre-determined profiles of generally accepted definitions of benign activity". While it is capable of knowing and tracing the protocol states, it requires significant resources. Placement The correct placement of intrusion detection systems is critical and varies depending on the network. The most common placement is behind the firewall, on the edge of a network. This practice provides the IDS with high visibility of traffic entering your network and will not receive any traffic between users on the network. The edge of the network is the point in which a network connects to the extranet. Another practice that can be accomplished if more resources are available is a strategy where a technician will place their first IDS at the point of highest visibility and depending on resource availability will place another at the next highest point, continuing that process until all points of the network are covered. If an IDS is placed beyond a network's firewall, its main purpose would be to defend against noise from the internet but, more importantly, defend against common attacks, such as port scans and network mapper. An IDS in this position would monitor layers 4 through 7 of the OSI model and would be signature-based. This is a very useful practice, because rather than showing actual breaches into the network that made it through the firewall, attempted breaches will be shown which reduces the amount of false positives. The IDS in this position also assists in decreasing the amount of time it takes to discover successful attacks against a network. Sometimes an IDS with more advanced features will be integrated with a firewall in order to be able to intercept sophisticated attacks entering the network. Examples of advanced features would include multiple security contexts in the routing level and bridging mode. All of this in turn potentially reduces cost and operational complexity. Another option for IDS placement is within the actual network. These will reveal attacks or suspicious activity within the network. Ignoring the security within a network can cause many problems, it will either allow users to bring about security risks or allow an attacker who has already broken into the network to roam around freely. 
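To make the signature-based detection described under the detection methods above concrete, here is a minimal C sketch of naive byte-pattern matching against a captured payload. The example signature (a run of 0x90 bytes) and the helper names are illustrative assumptions of this sketch; production engines such as those discussed in this article use far more sophisticated multi-pattern, protocol-aware matching.

```c
#include <stdio.h>
#include <string.h>

/* One illustrative signature: a byte pattern that, if seen anywhere in a
 * payload, triggers an alert. Real rule sets carry thousands of patterns. */
struct signature {
    const char          *name;
    const unsigned char *bytes;
    size_t               len;
};

/* Return 1 if the pattern occurs in the payload (naive O(n*m) scan). */
static int matches(const unsigned char *payload, size_t n,
                   const struct signature *sig)
{
    if (sig->len == 0 || sig->len > n)
        return 0;
    for (size_t i = 0; i + sig->len <= n; i++)
        if (memcmp(payload + i, sig->bytes, sig->len) == 0)
            return 1;
    return 0;
}

int main(void)
{
    static const unsigned char pat[] = { 0x90, 0x90, 0x90, 0x90 }; /* example pattern */
    struct signature sig = { "example-nop-run", pat, sizeof pat };

    /* A stand-in for a captured packet payload. */
    unsigned char payload[] = "GET /index.html\x90\x90\x90\x90 HTTP/1.0";

    if (matches(payload, sizeof payload - 1, &sig))
        printf("ALERT: signature '%s' matched\n", sig.name);
    else
        printf("no match\n");
    return 0;
}
```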
Strong intranet security makes it difficult even for hackers who are already within the network to maneuver around and escalate their privileges. Limitations Noise can severely limit an intrusion detection system's effectiveness. Bad packets generated from software bugs, corrupt DNS data, and local packets that escaped can create a significantly high false-alarm rate. It is not uncommon for the number of real attacks to be far below the number of false alarms; indeed, real attacks are often so heavily outnumbered by false alarms that they are missed or ignored. Many attacks are geared for specific versions of software that are usually outdated. A constantly changing library of signatures is needed to mitigate threats. Outdated signature databases can leave the IDS vulnerable to newer strategies. For signature-based IDS, there will be a lag between a new threat discovery and its signature being applied to the IDS. During this lag time, the IDS will be unable to identify the threat. It cannot compensate for weak identification and authentication mechanisms or for weaknesses in network protocols. When an attacker gains access due to weak authentication mechanisms, the IDS cannot prevent the adversary from misusing that access. Encrypted packets are not processed by most intrusion detection devices. Therefore, an encrypted packet can allow an intrusion into the network that remains undiscovered until more significant network intrusions have occurred. Intrusion detection software provides information based on the network address that is associated with the IP packet that is sent into the network. This is beneficial if the network address contained in the IP packet is accurate. However, the address that is contained in the IP packet could be faked or scrambled. Due to the nature of NIDS systems, and the need for them to analyse protocols as they are captured, NIDS systems can be susceptible to the same protocol-based attacks to which network hosts may be vulnerable. Invalid data and TCP/IP stack attacks may cause a NIDS to crash. The security measures of cloud computing often do not consider the variation in users' privacy needs; they provide the same security mechanism for all users, whether they are companies or individual persons. Evasion techniques Attackers use a number of techniques; the following are considered 'simple' measures which can be taken to evade an IDS: Fragmentation: by sending fragmented packets, the attacker will be under the radar and can easily bypass the detection system's ability to detect the attack signature. Avoiding defaults: The TCP port utilised by a protocol does not always provide an indication of the protocol which is being transported. For example, an IDS may expect to detect a trojan on port 12345. If an attacker had reconfigured it to use a different port, the IDS may not be able to detect the presence of the trojan. Coordinated, low-bandwidth attacks: coordinating a scan among numerous attackers (or agents) and allocating different ports or hosts to different attackers makes it difficult for the IDS to correlate the captured packets and deduce that a network scan is in progress. Address spoofing/proxying: attackers can increase the difficulty of a security administrator's ability to determine the source of the attack by using poorly secured or incorrectly configured proxy servers to bounce an attack. If the source is spoofed and bounced by a server, it makes it very difficult for the IDS to detect the origin of the attack.
Pattern change evasion: IDS generally rely on 'pattern matching' to detect an attack. By changing the data used in the attack slightly, it may be possible to evade detection. For example, an (IMAP) server may be vulnerable to a buffer overflow, and an IDS is able to detect the attack signature of 10 common attack tools. By modifying the payload sent by the tool, so that it does not resemble the data that the IDS expects, it may be possible to evade detection. Development The earliest preliminary IDS concept was delineated in 1980 by James Anderson at the National Security Agency and consisted of a set of tools intended to help administrators review audit trails. User access logs, file access logs, and system event logs are examples of audit trails. Fred Cohen noted in 1987 that it is impossible to detect an intrusion in every case, and that the resources needed to detect intrusions grow with the amount of usage. Dorothy E. Denning, assisted by Peter G. Neumann, published a model of an IDS in 1986 that formed the basis for many systems today. Her model used statistics for anomaly detection, and resulted in an early IDS at SRI International named the Intrusion Detection Expert System (IDES), which ran on Sun workstations and could consider both user and network level data. IDES had a dual approach with a rule-based Expert System to detect known types of intrusions plus a statistical anomaly detection component based on profiles of users, host systems, and target systems. The author of "IDES: An Intelligent System for Detecting Intruders", Teresa F. Lunt, proposed adding an artificial neural network as a third component. She said all three components could then report to a resolver. SRI followed IDES in 1993 with the Next-generation Intrusion Detection Expert System (NIDES). The Multics intrusion detection and alerting system (MIDAS), an expert system using P-BEST and Lisp, was developed in 1988 based on the work of Denning and Neumann. Haystack was also developed in that year using statistics to reduce audit trails. In 1986 the National Security Agency started an IDS research transfer program under Rebecca Bace. Bace later published the seminal text on the subject, Intrusion Detection, in 2000. Wisdom & Sense (W&S) was a statistics-based anomaly detector developed in 1989 at the Los Alamos National Laboratory. W&S created rules based on statistical analysis, and then used those rules for anomaly detection. In 1990, the Time-based Inductive Machine (TIM) did anomaly detection using inductive learning of sequential user patterns in Common Lisp on a VAX 3500 computer. The Network Security Monitor (NSM) performed masking on access matrices for anomaly detection on a Sun-3/50 workstation. The Information Security Officer's Assistant (ISOA) was a 1990 prototype that considered a variety of strategies including statistics, a profile checker, and an expert system. ComputerWatch at AT&T Bell Labs used statistics and rules for audit data reduction and intrusion detection. Then, in 1991, researchers at the University of California, Davis created a prototype Distributed Intrusion Detection System (DIDS), which was also an expert system. The Network Anomaly Detection and Intrusion Reporter (NADIR), also in 1991, was a prototype IDS developed at the Los Alamos National Laboratory's Integrated Computing Network (ICN), and was heavily influenced by the work of Denning and Lunt. NADIR used a statistics-based anomaly detector and an expert system. 
The Lawrence Berkeley National Laboratory announced Bro in 1998, which used its own rule language for packet analysis from libpcap data. Network Flight Recorder (NFR) in 1999 also used libpcap. APE was developed as a packet sniffer, also using libpcap, in November 1998, and was renamed Snort one month later. Snort has since become the world's most widely used IDS/IPS system, with over 300,000 active users. It can monitor both local systems and remote capture points using the TZSP protocol. The Audit Data Analysis and Mining (ADAM) IDS in 2001 used tcpdump to build profiles of rules for classifications. In 2003, Yongguang Zhang and Wenke Lee argued for the importance of IDS in networks with mobile nodes. In 2015, Viegas and his colleagues proposed an anomaly-based intrusion detection engine aimed at System-on-Chip (SoC) platforms for applications such as the Internet of Things (IoT). The proposal applies machine learning for anomaly detection, providing an energy-efficient implementation of Decision Tree, Naive Bayes, and k-Nearest Neighbors classifiers on an Atom CPU, together with a hardware-friendly implementation on an FPGA. In the literature, this was the first work to implement each classifier equivalently in software and hardware and to measure the energy consumption of both. It was also the first to measure the energy consumption of extracting each feature used for network packet classification, again in both software and hardware. See also Application protocol-based intrusion detection system (APIDS) Artificial immune system Bypass switch Denial-of-service attack DNS analytics Extrusion detection Intrusion Detection Message Exchange Format Protocol-based intrusion detection system (PIDS) Real-time adaptive security Security management ShieldsUp Software-defined protection References Further reading Al_Ibaisi, T., Abu-Dalhoum, A. E.-L., Al-Rawi, M., Alfonseca, M., & Ortega, A. (n.d.). Network Intrusion Detection Using Genetic Algorithm to find Best DNA Signature. http://www.wseas.us/e-library/transactions/systems/2008/27-535.pdf Ibaisi, T. A., Kuhn, S., Kaiiali, M., & Kazim, M. (2023). Network Intrusion Detection Based on Amino Acid Sequence Structure Using Machine Learning. Electronics, 12(20), 4294. https://doi.org/10.3390/electronics12204294 External links Common vulnerabilities and exposures (CVE) by product NIST SP 800-83, Guide to Malware Incident Prevention and Handling NIST SP 800-94, Guide to Intrusion Detection and Prevention Systems (IDPS) Study by Gartner "Magic Quadrant for Network Intrusion Prevention System Appliances" Computer network security System administration
Intrusion detection system
Technology,Engineering
4,611
63,195,300
https://en.wikipedia.org/wiki/Computing%20the%20Continuous%20Discretely
Computing the Continuous Discretely: Integer-Point Enumeration in Polyhedra is an undergraduate-level textbook in geometry, on the interplay between the volume of convex polytopes and the number of lattice points they contain. It was written by Matthias Beck and Sinai Robins, and published in 2007 by Springer-Verlag in their Undergraduate Texts in Mathematics series (Vol. 154). A second edition was published in 2015, and a German translation of the first edition by Kord Eickmeyer, Das Kontinuum diskret berechnen, was published by Springer in 2008. Topics The book begins with a motivating problem, the coin problem of determining which amounts of money can be represented (and what is the largest non-representable amount of money) for a given system of coin values. Other topics touched on include face lattices of polytopes and the Dehn–Sommerville equations relating numbers of faces; Pick's theorem and the Ehrhart polynomials, both of which relate lattice counting to volume; generating functions, Fourier transforms, and Dedekind sums, different ways of encoding sequences of numbers into mathematical objects; Green's theorem and its discretization; Bernoulli polynomials; the Euler–Maclaurin formula for the difference between a sum and the corresponding integral; special polytopes including zonotopes, the Birkhoff polytope, and permutohedra; and the enumeration of magic squares. In this way, the topics of the book connect together geometry, number theory, and combinatorics. Audience and reception This book is written at an undergraduate level, and provides many exercises, making it suitable as an undergraduate textbook. Little mathematical background is assumed, except for some complex analysis towards the end of the book. The book also includes open problems, of more interest to researchers in these topics. As reviewer Darren Glass writes, "Even people who are familiar with the material would almost certainly learn something from the clear and engaging exposition that these two authors use." Reviewer Margaret Bayer calls the book "coherent and tightly developed ... accessible and engaging", and reviewer Oleg Karpenkov calls it "outstanding". See also List of books about polyhedra References Polytopes Lattice points Volume Mathematics textbooks 2007 non-fiction books 2015 non-fiction books Springer Science+Business Media books
Computing the Continuous Discretely
Physics,Mathematics
483
14,551,989
https://en.wikipedia.org/wiki/GPRC5D
G-protein coupled receptor family C group 5 member D is a protein that in humans is encoded by the GPRC5D gene. GPRC5D is a class C orphan G protein-coupled receptor predominantly expressed in multiple myeloma cells and hard keratinized tissues, with low expression in normal human tissues, rendering it an appealing therapeutic target in multiple myeloma. Structure Structural analysis of the complex between GPRC5D and talquetamab, a bispecific antibody for the treatment of multiple myeloma, has revealed that GPRC5D exists as a dimer. GPRC5D forms a symmetric dimer via TM4 and TM4/TM5 interactions. The study further demonstrated that only one talquetamab molecule can bind to the dimeric form of GPRC5D. The talquetamab Fab recognizes the ECLs and TM3/5/7 of one GPRC5D protomer via six CDRs. Function The protein encoded by this gene is a member of the G protein-coupled receptor family; however, the specific function of this gene has not yet been determined. See also Retinoic acid-inducible orphan G protein-coupled receptor References Further reading G protein-coupled receptors
GPRC5D
Chemistry
259
1,056,276
https://en.wikipedia.org/wiki/NEC%20SX-6
The SX-6 is a NEC SX supercomputer built by NEC Corporation that debuted in 2001; the SX-6 was sold under license by Cray Inc. in the U.S. Each SX-6 single-node system contains up to eight vector processors, which share up to 64 GB of computer memory. The SX-6 processor is a single chip implementation containing a vector processor unit and a scalar processor fabricated in a 0.15 μm CMOS process with copper interconnects, whereas the SX-5 was a multi-chip implementation. The Earth Simulator is based on the SX-6 architecture. The vector processor is made up of eight vector pipeline units each with seventy-two 256-word vector registers. The vector unit performs add/shift, multiply, divide and logical operations. The scalar unit is 64 bits wide and contains a 64 KB cache. The scalar unit can decode, issue and complete four instructions per clock cycle. Branch prediction and speculative execution is supported. A multi-node system is configured by interconnecting up to 128 single-node systems via a high-speed, low-latency IXS (Internode Crossbar Switch). The peak performance of the SX-6 series vector processors is 8 GFLOPS. Thus a single-node system provides a peak performance of 64 GFLOPS, while a multi-node system provides up to 8 TFLOPS of peak floating-point performance. The SX-6 uses SUPER-UX, a Unix-like operating system developed by NEC. A SAN-based global file system (NEC's GFS) is available for a multinode installation. The default batch processing system is NQSII, but open source batch systems such as Sun Grid Engine are also supported. See also SUPER-UX NEC SX Earth Simulator NEC Corporation References External links SX-6 Specifications Scalable Vector Supercomputer - SX Series Downloads Sx-6 Vector supercomputers
NEC SX-6
Technology
418
14,677,231
https://en.wikipedia.org/wiki/Insert%20%28molecular%20biology%29
In molecular biology, an insert is a piece of DNA that is inserted into a larger DNA vector by a recombinant DNA technique, such as ligation or recombination. This allows it to be multiplied, selected, further manipulated or expressed in a host organism. Inserts can range from physical nucleotide additions made with a laboratory technique to the introduction of artificial structures into a molecule via mutagenic chemicals, such as ethidium bromide, or crystals. Inserts into the genome of an organism normally occur due to natural causes. These causes include environmental conditions and intracellular processes. Environmental causes range from exposure to radiation, such as ultraviolet light, to mutagenic chemicals or DNA viruses. Intracellular inserts can occur through heritable changes in parent cells or errors in DNA replication or DNA repair. Gene insertion techniques can be used to introduce characteristic mutations into an organism in order to obtain a desired phenotype. A gene insert can be expressed in a wide variety of ways. These variants can range from the loss or gain of protein function to changes in physical traits, e.g., hair or eye color. Changes in expression are typically aimed either at a gain of protein function for regulation or at the termination of a cellular function for the prevention of disease. The result of a variation depends on where in the genome the addition, or mutation, is located. The aim is to learn, understand, and possibly predict the expression of genetic material in organisms using physical and chemical analysis. The results of genetic mutations, or inserts, can be observed with techniques such as DNA sequencing, gel electrophoresis, immunoassays, or microscopy. History The field has expanded significantly since the 1973 publication by biochemists Stanley N. Cohen and Herbert W. Boyer, who used E. coli bacteria to learn how to cut DNA fragments, rejoin different fragments, and insert the new genes. The field has since grown tremendously in terms of precision and accuracy. Computers and technology have made it easier to narrow error and expand understanding in this field; their high capacity for data and calculation has made processing large volumes of information tractable, for example in ChIP and gene sequencing. Techniques and protocols Homology-directed repair (HDR) is a technique that repairs breaks or lesions in DNA molecules. The most common technique for adding inserts to desired sequences is the use of homologous recombination. This technique has a specific requirement: the insert can only be added after it has been introduced into the nucleus of the cell, and it is incorporated into the genome mostly during the G2 and S phases of the cell cycle. CRISPR gene editing CRISPR gene editing is based on clustered regularly interspaced short palindromic repeats (CRISPR) and Cas9, an enzyme that uses guide sequences to help control, cleave, and separate specific DNA sequences that are complementary to a CRISPR sequence. These sequences and enzymes were originally derived from bacteriophages. The importance of this technique in the field of genetic engineering is that it enables highly precise, targeted gene editing, and its cost is low compared to other tools. Inserting DNA sequences into an organism with it is easy and fast, although expression issues can arise in more complex organisms.
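As a purely computational toy, the notion of an insert placed at a specific recognition site can be sketched with simple string manipulation. The sequences and the cut-site rule below are invented for illustration and do not model the real biochemistry of Cas9 cleavage or cellular DNA repair.

```python
# Toy illustration (not real bioinformatics): placing a DNA "insert" into a
# plasmid-like sequence at the position matched by a guide sequence.
# All sequences below are invented for demonstration.

def insert_at_guide(genome: str, guide: str, insert: str) -> str:
    """Return a copy of `genome` with `insert` placed at the modeled cut site.

    The cut is modeled (very crudely) as occurring immediately after the
    guide-matching site; real Cas9 cuts near a PAM sequence, and the outcome
    depends on the cell's DNA-repair machinery.
    """
    site = genome.find(guide)
    if site == -1:
        raise ValueError("guide sequence not found in target")
    cut = site + len(guide)
    return genome[:cut] + insert + genome[cut:]

if __name__ == "__main__":
    plasmid = "ATGGCGTACGATCGATTACGCGTTAGC"   # hypothetical vector backbone
    guide = "GATCGATT"                        # hypothetical target site
    payload = "GAATTCAAAGGG"                  # hypothetical insert (e.g. a small tag)
    edited = insert_at_guide(plasmid, guide, payload)
    print(edited)
    assert payload in edited and len(edited) == len(plasmid) + len(payload)
```

Running the example simply confirms that the edited sequence contains the payload and is longer than the original by exactly the insert length.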
Transcription activator-like effector nuclease Transcription activator-like effector nucleases (TALENs) are a set of restriction enzymes that can be engineered to cut out desired DNA sequences. These enzymes are mostly used in combination with CRISPR-Cas9, zinc finger nucleases, or HDR, mainly because they have the precision to cut and separate the desired sequence within a gene. Zinc finger nuclease Zinc finger nucleases are genetically engineered enzymes created by fusing a zinc finger DNA-binding domain to a DNA-cleavage domain. These are also combined with CRISPR-Cas9 or TALENs to obtain sequence-specific additions, or deletions, within the genome of more complex cells and organisms. Gene gun The gene gun, also known as a biolistic particle delivery system, is used to deliver transgenes, proteins, or RNA into the cell. It uses a micro-projectile delivery system that shoots particles of a heavy metal, coated with the DNA of interest, into cells at high speed. The genetic material penetrates the cells and delivers its contents across the target area. The use of micro-projectile delivery systems is a technique known as biolistics. References Molecular biology
Insert (molecular biology)
Chemistry,Biology
939
41,053
https://en.wikipedia.org/wiki/Distortion-limited%20operation
In telecommunications, distortion-limited operation is the condition prevailing when distortion of a received signal, rather than its attenuated amplitude (or power), limits performance under stated operational conditions and limits. Note: Distortion-limited operation is reached when the system distorts the shape of the waveform beyond specified limits. For linear systems, distortion-limited operation is equivalent to bandwidth-limited operation. References Telecommunications engineering
Distortion-limited operation
Engineering
84
52,452,475
https://en.wikipedia.org/wiki/NGC%206638
NGC 6638 is a globular cluster in the constellation Sagittarius. It is magnitude 9.5 and diameter 2 arc minutes, class VI. It is a half degree east of Lambda Sagittarii. It is a member of the Milky Way. The globular cluster was discovered in 1784 by the astronomer William Herschel with his 18.7-inch telescope and the discovery was later entered in the New General Catalogue. References Robert Burnham, Jr, Burnham's Celestial Handbook: An observer's guide to the universe beyond the solar system, vol 3, p. 1557 External links NGC 6638 Globular clusters Sagittarius (constellation) 6638
NGC 6638
Astronomy
141
31,136,418
https://en.wikipedia.org/wiki/Octaazacubane
Octaazacubane is a hypothetical explosive allotrope of nitrogen with formula N8, whose molecules have eight atoms arranged into a cube. (By comparison, nitrogen usually occurs as the diatomic molecule N2.) It can be regarded as a cubane-type cluster, where all eight corners are nitrogen atoms bonded along the edges. It is predicted to be a metastable molecule, in which despite the thermodynamic instability caused by bond strain, and the high energy of the N–N single bonds, the molecule remains kinetically stable for reasons of orbital symmetry. Explosive and fuel Octaazacubane is predicted to have an energy density (assuming decomposition into N2) of 22.9 MJ/kg, which is over 5 times the standard value of TNT. It has therefore been proposed (along with other exotic nitrogen allotropes) as an explosive, and as a component of high performance rocket fuel. Its velocity of detonation is predicted to be 15,000 m/s, much (48.5%) more than octanitrocubane, the fastest known nonnuclear explosive. A prediction for cubic gauche nitrogen energy density is 33 MJ/kg, exceeding octaazacubane by 44%, though a more recent one is of 10.22 MJ/kg, making it less than half of octaazacubane. See also Tetranitrogen (Nitrogen allotrope with formula N4) Hexazine (Nitrogen allotrope with formula N6) Pentazole 1,1′-Azobis-1,2,3-triazole 1-Diazidocarbamoyl-5-azidotetrazole References External links Explosive chemicals Hypothetical chemical compounds Allotropes of nitrogen
Octaazacubane
Chemistry
373
55,598,789
https://en.wikipedia.org/wiki/NGC%204491
NGC 4491 is a dwarf barred spiral galaxy located about 55 million light-years away in the constellation Virgo. NGC 4491 was discovered by astronomer William Herschel on March 15, 1784. NGC 4491 is located in a subgroup of the Virgo Cluster centered on Messier 87 known as the Virgo A subgroup. Tidal interactions NGC 4491 is a strongly barred galaxy. The bar may have grown from the tidal influence of other galaxies in the Virgo Cluster. Possible Seyfert activity The infrared-radio properties of NGC 4491 possibly suggest the presence of an AGN in the galaxy. However, spectral analysis of the galaxy does not support this view since emission lines are absent or very weak and narrow. See also List of NGC objects (4001–5000) Dwarf galaxy References External links Dwarf spiral galaxies Virgo (constellation) Barred spiral galaxies 4491 41376 7657 Astronomical objects discovered in 1784 Virgo Cluster
NGC 4491
Astronomy
195
997,476
https://en.wikipedia.org/wiki/Night%20sky
The night sky is the nighttime appearance of celestial objects like stars, planets, and the Moon, which are visible in a clear sky between sunset and sunrise, when the Sun is below the horizon. Natural light sources in a night sky include moonlight, starlight, and airglow, depending on location and timing. Aurorae light up the skies above the polar circles. Occasionally, a large coronal mass ejection from the Sun or simply high levels of solar wind may extend the phenomenon toward the Equator. The night sky and studies of it have a historical place in both ancient and modern cultures. In the past, for instance, farmers have used the status of the night sky as a calendar to determine when to plant crops. Many cultures have drawn constellations between stars in the sky, using them in association with legends and mythology about their deities. The history of astrology has generally been based on the belief that relationships between heavenly bodies influence or explain events on Earth. The scientific study of objects in the night sky takes place in the context of observational astronomy. Visibility of celestial objects in the night sky is affected by light pollution. The presence of the Moon in the night sky has historically hindered astronomical observation by increasing the amount of sky brightness. With the advent of artificial light sources, however, light pollution has been a growing problem for viewing the night sky. Optical filters and modifications to light fixtures can help to alleviate this problem, but for optimal views, both professional and amateur astronomers seek locations far from urban skyglow. Brightness The fact that the sky is not completely dark at night, even in the absence of moonlight and city lights, can be easily observed, since if the sky were absolutely dark, one would not be able to see the silhouette of an object against the sky. The intensity of the sky brightness varies greatly over the day and the primary cause differs as well. During daytime when the Sun is above the horizon direct scattering of sunlight (Rayleigh scattering) is the overwhelmingly dominant source of light. In twilight, the period of time between sunset and sunrise, the situation is more complicated and a further differentiation is required. Twilight is divided in three segments according to how far the Sun is below the horizon in segments of 6°. After sunset the civil twilight sets in, and ends when the Sun drops more than 6° below the horizon. This is followed by the nautical twilight, when the Sun reaches heights of −6° and −12°, after which comes the astronomical twilight defined as the period from −12° to −18°. When the Sun drops more than 18° below the horizon, the sky generally attains its minimum brightness. Several sources can be identified as the source of the intrinsic brightness of the sky, namely airglow, indirect scattering of sunlight, scattering of starlight, and artificial light pollution. Visual presentation Depending on local sky cloud cover, pollution, humidity, and light pollution levels, the stars visible to the unaided naked eye appear as hundreds, thousands or tens of thousands of white pinpoints of light in an otherwise near black sky together with some faint nebulae or clouds of light. In ancient times the stars were often assumed to be equidistant on a dome above the Earth because they are much too far away for stereopsis to offer any depth cues. 
Visible stars range in color from blue (hot) to red (cold), but with such small points of faint light, most look white because they stimulate the rod cells without triggering the cone cells. If it is particularly dark and a particularly faint celestial object is of interest, averted vision may be helpful. The stars of the night sky cannot be counted unaided because they are so numerous and there is no way to track which have been counted and which have not. Further complicating the count, fainter stars may appear and disappear depending on exactly where the observer is looking. The result is an impression of an extraordinarily vast star field. Because stargazing is best done from a dark place away from city lights, dark adaptation is important to achieve and maintain. It takes several minutes for eyes to adjust to the darkness necessary for seeing the most stars, and surroundings on the ground are hard to discern. A red flashlight can be used to illuminate star charts and telescope parts without undoing the dark adaptation. Constellations Star charts are produced to aid stargazers in identifying constellations and other celestial objects. Constellations are prominent because their stars tend to be brighter than other nearby stars in the sky. Different cultures have created different groupings of constellations based on differing interpretations of the more-or-less random patterns of dots in the sky. Constellations were identified without regard to distance to each star, but instead as if they were all dots on a dome. Orion is among the most prominent and recognizable constellations. The Big Dipper (which has a wide variety of other names) is helpful for navigation in the northern hemisphere because it points to Polaris, the north star. The pole stars are special because they are approximately in line with the Earth's axis of rotation so they appear to stay in one place while the other stars rotate around them through the course of a night (or a year). Planets Planets, named for the Greek word for 'wanderer', process through the starfield a little each day, executing loops with time scales dependent on the length of the planet's year or orbital period around the Sun. Planets, to the naked eye, appear as points of light in the sky with variable brightness. Planets shine due to sunlight reflecting or scattering from the planets' surface or atmosphere. Thus, the relative Sun-planet-Earth positions determine the planet's brightness. With a telescope or good binoculars, the planets appear as discs demonstrating finite size, and it is possible to observe orbiting moons which cast shadows onto the host planet's surface. Venus is the most prominent planet, often called the "morning star" or "evening star" because it is brighter than the stars and often the only "star" visible near sunrise or sunset, depending on its location in its orbit. Because of its brightness, Venus can sometimes be seen after sunrise. Mercury, Mars, Jupiter and Saturn are also visible to the naked eye in the night sky. The Moon The Moon appears as a grey disc in the sky with cratering visible to the naked eye. It spans, depending on its exact location, 29–33 arcminutes – which is about the size of a thumbnail at arm's length, and is readily identified. Over 29.53 days on average, the moon goes through a full cycle of lunar phases. People can generally identify phases within a few days by looking at the Moon. Unlike stars and most planets, the light reflected from the Moon is bright enough to be seen during the day. 
Some of the most spectacular moons come during the full moon phase near sunset or sunrise. The Moon on the horizon benefits from the Moon illusion which makes it appear larger. The Sun's light reflected from the Moon traveling through the atmosphere also appears to color the Moon orange and/or red. Comets Comets come to the night sky only rarely. Comets are illuminated by the Sun, and their tails extend away from the Sun. A comet with a visible tail is quite unusual – a great comet appears about once a decade. They tend to be visible only shortly before sunrise or after sunset because those are the times they are close enough to the Sun to show a tail. Clouds Clouds obscure the view of other objects in the sky, though varying thicknesses of cloud cover have differing effects. A very thin cirrus cloud in front of the moon might produce a rainbow-colored ring around the moon. Stars and planets are too small or dim to take on this effect and are instead only dimmed (often to the point of invisibility). Thicker cloud cover obscures celestial objects entirely, making the sky black or reflecting city lights back down. Clouds are often close enough to afford some depth perception, though they are hard to see without moonlight or light pollution. Other objects On clear dark nights in unpolluted areas, when the Moon appears thin or below the horizon, the Milky Way, a band of what looks like white dust, can be seen. The Magellanic Clouds of the southern sky are easily mistaken for Earth-based clouds (hence the name) but are in fact collections of stars found outside the Milky Way known as dwarf galaxies. Zodiacal light is a glow that appears near the points where the Sun rises and sets, and is caused by sunlight interacting with interplanetary dust. Gegenschein is a faint bright spot in the night sky centered at the antisolar point, caused by the backscatter of sunlight by interplanetary dust. Shortly after sunset and before sunrise, artificial satellites often look like stars – similar in brightness and size – but move relatively quickly. Those that fly in low Earth orbit cross the sky in a couple of minutes. Some satellites, including space debris, appear to blink or have a periodic fluctuation in brightness because they are rotating. Satellite flares can appear brighter than Venus, with notable examples including the International Space Station (ISS) and Iridium satellites. Meteors streak across the sky infrequently. During a meteor shower, they may average one a minute at irregular intervals, but otherwise their appearance is a random surprise. The occasional meteor will make a bright, fleeting streak across the sky, and they can be very bright in comparison to the night sky. Aircraft are also visible at night, distinguishable at a distance from other objects because their navigation lights blink. Sky map Future and past Besides the changing positions of Solar System objects as they and the Earth orbit the Sun – and, in the case of the Moon, as it orbits the Earth and appears gradually smaller as its orbit expands – the night sky also changes over the years as stars show proper motion, change brightness because they are variable stars or because their distance increases, or are altered by other celestial events such as supernovae. Over a timescale of tens of billions of years, the night sky in the Local Group will change significantly when the Andromeda Galaxy and the Milky Way coalesce into a single elliptical galaxy.
See also Amateur astronomy Asterism (astronomy) Astrology Astronomical object Constellation Earth's shadow Olbers' paradox Planetarium References External links A virtual panorama of winter night. Pokljuka, Slovenia. Burger.si. Accessed 28 February 2011. Observational astronomy Articles containing video clips Sky Sky
Night sky
Astronomy
2,145
25,313,746
https://en.wikipedia.org/wiki/Certified%20professional%20broadcast%20engineer
Certified Professional Broadcast Engineer (CPBE) is a title granted to an individual who already holds an SBE Senior Broadcast Engineer certification or registered as a professional electrical engineer and also successfully meets the experience and reference requirements of the certification. The certification is regulated by the Society of Broadcast Engineers (SBE). The CPBE title is protected by copyright laws. Individuals who use the title without consent from the Society of Broadcast Engineers could face legal action. The SBE certifications were created to recognize individuals who practice in career fields which are not regulated by state licensing or Professional Engineering programs. Broadcast Engineering is regulated at the national level and not by individual states. External links Certified Professional Broadcast Engineer (CPBE) Requirements & Application SBE Official Website See also List of post-nominal letters References Broadcast engineering Professional certification in engineering
Certified professional broadcast engineer
Engineering
163
1,007,331
https://en.wikipedia.org/wiki/Astaxanthin
Astaxanthin is a keto-carotenoid within a group of chemical compounds known as carotenones or terpenes. Astaxanthin is a metabolite of zeaxanthin and canthaxanthin, containing both hydroxyl and ketone functional groups. It is a lipid-soluble pigment with red coloring properties, which result from the extended chain of conjugated (alternating double and single) double bonds at the center of the compound. The presence of the hydroxyl functional groups and the hydrophobic hydrocarbons render the molecule amphiphilic. Astaxanthin is produced naturally in the freshwater microalgae Haematococcus pluvialis, the yeast fungus Xanthophyllomyces dendrorhous (also known as Phaffia rhodozyma) and the bacteria Paracoccus carotinifaciens. When the algae are stressed by lack of nutrients, increased salinity, or excessive sunshine, they create astaxanthin. Animals who feed on the algae, such as salmon, red trout, red sea bream, flamingos, and crustaceans (shrimp, krill, crab, lobster, and crayfish), subsequently reflect the red-orange astaxanthin pigmentation. Astaxanthin is used as a dietary supplement for human, animal, and aquaculture consumption. Astaxanthin from algae, synthetic and bacterial sources is generally recognized as safe in the United States. The US Food and Drug Administration has approved astaxanthin as a food coloring (or color additive) for specific uses in animal and fish foods. The European Commission considers it as a food dye with E number E161j. The European Food Safety Authority has set an Acceptable Daily Intake of 0.2 mg per kg body weight, as of 2019. As a food color additive, astaxanthin and astaxanthin dimethyldisuccinate are restricted for use in Salmonid fish feed only. Natural sources Astaxanthin is present in most red-coloured aquatic organisms. The content varies from species to species, but also from individual to individual as it is highly dependent on diet and living conditions. Astaxanthin and other chemically related asta-carotenoids have also been found in a number of lichen species of the Arctic zone. The primary natural sources for industrial production of astaxanthin comprise the following: Euphausia pacifica (Pacific krill) Euphausia superba (Antarctic krill) Haematococcus pluvialis (algae) Pandalus borealis (Arctic shrimp) Astaxanthin concentrations in nature are approximately: Algae are the primary natural source of astaxanthin in the aquatic food chain. The microalgae Haematococcus pluvialis contains high levels of astaxanthin (about 3.8% of dry weight), and is the primary industrial source of natural astaxanthin. In shellfish, astaxanthin is almost exclusively concentrated in the shells, with only low amounts in the flesh itself, and most of it only becomes visible during cooking as the pigment separates from the denatured proteins that otherwise bind it. Astaxanthin is extracted from Euphausia superba (Antarctic krill) and from shrimp processing waste. Biosynthesis Astaxanthin biosynthesis starts with three molecules of isopentenyl pyrophosphate (IPP) and one molecule of dimethylallyl pyrophosphate (DMAPP) that are combined by IPP isomerase and converted to geranylgeranyl pyrophosphate (GGPP) by GGPP synthase. Two molecules of GGPP are then coupled by phytoene synthase to form phytoene. Next, phytoene desaturase creates four double bonds in the phytoene molecule to form lycopene. 
After desaturation, lycopene cyclase first forms γ-carotene by converting one of the acyclic ψ ends of the lycopene into a β-ring, then subsequently converts the other to form β-carotene. From β-carotene, hydroxylases are responsible for the inclusion of two 3-hydroxy groups, and ketolases for the addition of two 4-keto groups, forming multiple intermediate molecules until the final molecule, astaxanthin, is obtained. Synthetic sources The synthesis of astaxanthin was first described in 1975. Nearly all commercially available astaxanthin for aquaculture is produced synthetically, with an annual market of about $1 billion in 2019. An efficient synthesis from isophorone, cis-3-methyl-2-penten-4-yn-1-ol and a symmetrical C10-dialdehyde has been discovered and is used in industrial production. It combines these chemicals by an ethynylation and then a Wittig reaction. Two equivalents of the proper ylide combined with the proper dialdehyde in a solvent of methanol, ethanol, or a mixture of the two, give astaxanthin in up to 88% yield. Metabolic engineering The cost of astaxanthin extraction, the high market price, and the lack of efficient fermentation production systems, combined with the intricacies of chemical synthesis, discourage its commercial development. The metabolic engineering of bacteria (Escherichia coli) enables efficient astaxanthin production from beta-carotene via either zeaxanthin or canthaxanthin. Structure Stereoisomers In addition to structural isomeric configurations, astaxanthin also contains two chiral centers, at the 3- and 3′-positions, resulting in three unique stereoisomers: (3R,3′R), meso (3R,3′S), and (3S,3′S). While all three stereoisomers are present in nature, their relative distribution varies considerably from one organism to another. Synthetic astaxanthin contains a mixture of all three stereoisomers, in approximately 1:2:1 proportions. Esterification Astaxanthin exists in two predominant forms, non-esterified (yeast, synthetic) or esterified (algal) with fatty acid moieties of various lengths, whose composition is influenced by the source organism as well as growth conditions. The astaxanthin fed to salmon to enhance flesh coloration is in the non-esterified form. The preponderance of evidence supports a de-esterification of fatty acids from the astaxanthin molecule in the intestine prior to, or concomitant with, absorption, resulting in the circulation and tissue deposition of non-esterified astaxanthin. The European Food Safety Authority (EFSA) published a scientific opinion on a similar xanthophyll carotenoid, lutein, stating that "following passage through the gastrointestinal tract and/or uptake lutein esters are hydrolyzed to form free lutein again". While it can be assumed that non-esterified astaxanthin would be more bioavailable than esterified astaxanthin due to the extra enzymatic steps in the intestine needed to hydrolyse the fatty acid components, several studies suggest that bioavailability is more dependent on formulation than configuration. Uses Astaxanthin is used as a dietary supplement, and as a feed supplement and food colorant for salmon, crabs, shrimp, chickens and egg production. For seafood and animals The primary use of synthetic astaxanthin today is as an animal feed additive to impart coloration, including to farm-raised salmon and chicken egg yolks. Synthetic carotenoid pigments colored yellow, red or orange represent about 15–25% of the cost of production of commercial salmon feed.
In the 21st century, most commercial astaxanthin for aquaculture is produced synthetically. Class action lawsuits were filed against some major grocery store chains for not clearly labeling the astaxanthin-treated salmon as "color added". The chains followed up quickly by labeling all such salmon as "color added". Litigation persisted with the suit for damages, but a Seattle judge dismissed the case, ruling that enforcement of the applicable food laws was up to government and not individuals. Dietary supplement The primary human application for astaxanthin is as a dietary supplement, and it remains under preliminary research. In 2020, the European Food Safety Authority reported that an intake of 8 mg astaxanthin per day from food supplements is safe for adults. Role in the food chain Lobsters, shrimp, and some crabs turn red when cooked because the astaxanthin, which was bound to the protein in the shell, becomes free as the protein denatures and unwinds. The freed pigment is thus available to absorb light and produce the red color. Regulations In April 2009, the United States Food and Drug Administration approved astaxanthin as an additive for fish feed only as a component of a stabilized color additive mixture. Color additive mixtures for fish feed made with astaxanthin may contain only those diluents that are suitable. The color additives astaxanthin, ultramarine blue, canthaxanthin, synthetic iron oxide, dried algae meal, Tagetes meal and extract, and corn endosperm oil are approved for specific uses in animal foods. Haematococcus algae meal (21 CFR 73.185) and Phaffia yeast (21 CFR 73.355) for use in fish feed to color salmonoids were added in 2000. In the European Union, astaxanthin-containing food supplements derived from sources that have no history of use as a source of food in Europe, fall under the remit of the Novel Food legislation, EC (No.) 258/97. Since 1997, there have been five novel food applications concerning products that contain astaxanthin extracted from these novel sources. In each case, these applications have been simplified or substantial equivalence applications, because astaxanthin is recognised as a food component in the EU diet. References Articles containing video clips Carotenoids Cyclohexenes Food colorings Secondary alcohols Tetraterpenes
Astaxanthin
Biology
2,170
16,052,279
https://en.wikipedia.org/wiki/Bioprecipitation
Bioprecipitation is the concept of rain-making bacteria, proposed by David Sands of Montana State University in the 1970s. This is precipitation that is beneficial for microbial and plant growth: a feedback cycle beginning with land plants generating small airborne particles called aerosols, which contain microorganisms that influence the formation of clouds through their ice-nucleation properties. The formation of ice in clouds is required for snow and most rainfall. Dust and soot particles can serve as ice nuclei, but biological ice nuclei are capable of catalyzing freezing at much warmer temperatures. The ice-nucleating bacteria currently known are mostly plant pathogens. Recent research suggests that bacteria may be present in clouds as part of an evolved process of dispersal. Ice-nucleating proteins derived from ice-nucleating bacteria are used for snowmaking. A symbiotic relationship between sulphate-reducing, lead-reducing, sulphur-oxidizing, and denitrifying bacteria was found to be responsible for biotransformation and bioprecipitation. Plant pathogens Most known ice-nucleating bacteria are plant pathogens. These pathogens can cause freezing injury in plants. In the United States alone, it has been estimated that frost accounts for approximately $1 billion in crop damage each year. The ice-minus variant of P. syringae is a mutant lacking the gene responsible for ice-nucleating surface protein production. This lack of surface protein provides a less favorable environment for ice formation. Both strains of P. syringae occur naturally, but recombinant DNA technology has allowed for the synthetic removal or alteration of specific genes, enabling the creation of the ice-minus strain. The introduction of an ice-minus strain of P. syringae to the surface of plants would create competition between the strains. Should the ice-minus strain win out, the ice nucleation provided by P. syringae would no longer be present, lowering the level of frost development on plant surfaces at the normal freezing temperature of water (0 °C). Dispersal of bacteria through rainfall Bacteria present in clouds may have evolved to use rainfall as a means of dispersing themselves into the environment. The bacteria are found in snow, soils and seedlings in locations such as Antarctica, the Yukon Territory of Canada and the French Alps, according to Brent Christner, a microbiologist at Louisiana State University. It has been suggested that the bacteria are part of a constant feedback loop between terrestrial ecosystems and clouds. According to Christner, these bacteria may rely on rainfall to spread to new habitats, in much the same way as plants rely on windblown pollen grains, and this could be a key element of the bacterial life cycle. Snowmaking Certain species of bacteria and fungi are known to act as efficient biological ice nuclei at temperatures between −10 and 0 °C. Without ice-nucleating agents, water must be cooled to about −40 °C before it freezes spontaneously, whereas ice-nucleating bacteria can catalyze freezing at temperatures as warm as −1 °C. Even after the death of the bacteria, their ice-nucleating glycoproteins continue to promote ice crystallization by mimicking the structure of ice at the nucleation sites and acting as a template for the formation of the ice lattice. Many ski resorts use a commercially available freeze-dried preparation of ice-nucleating proteins derived from the bacterium species Pseudomonas syringae to make snow in a snowgun.
Pseudomonas syringae is a well-studied plant pathogen whose infections can cause significant crop losses, and studying this pathogen helps in understanding the plant immune system. See also Pseudomonas syringae Ice-minus bacteria Aeroplankton References Bacteria Weather modification
Bioprecipitation
Engineering,Biology
771
20,565,432
https://en.wikipedia.org/wiki/SB-204741
SB-204741 (N-(1-methyl-1H-indol-5-yl)-N′-(3-methylisothiazol-5-yl)urea; CAS 152239-46-8; C14H14N4OS) is a drug which acts as a potent and selective antagonist at the serotonin 5-HT2B receptor, with around 135-fold selectivity over the closely related 5-HT2C receptor, and even higher selectivity over the 5-HT2A receptor and other targets. It is used in scientific research for investigating the functions of the 5-HT2B receptor. References 5-HT2B antagonists Isothiazoles Ureas Indoles
SB-204741
Chemistry
525
11,460,098
https://en.wikipedia.org/wiki/Cochliobolus%20hawaiiensis
Cochliobolus hawaiiensis is a fungal plant pathogen. References External links Index Fungorum USDA ARS Fungal Database Fungal plant pathogens and diseases Cochliobolus Fungi described in 1955 Fungus species
Cochliobolus hawaiiensis
Biology
44
38,193,389
https://en.wikipedia.org/wiki/NU%20Pavonis
NU Pavonis (N-U, not "nu") is a variable star in the southern constellation of Pavo. With a nominal apparent visual magnitude of 4.95, it is a faint star but visible to the naked eye. The distance to NU Pav, as determined from its annual parallax shift of as seen from Earth's orbit, is around 460 light years. It is moving closer with a heliocentric radial velocity of −10 km/s. This is an aging red giant with a stellar classification of M6 III, currently on the asymptotic giant branch. Peter M. Corben listed HR 7625 as a possible variable star in 1971. It was given its variable star designation, NU Pavonis, in 1973. It is a semiregular variable star of sub-type SRb that ranges in magnitude from 4.91 down to 5.26 with a period of 60 days. The star has expanded to 204 times the Sun's radius and is radiating 7,412 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 3,516 K. Far-ultraviolet emission has been detected from these coordinates, which may be coming from a companion star. References M-type giants Semiregular variable stars Asymptotic-giant-branch stars Pavo (constellation) Durchmusterung objects 189124 098608 7625 Pavonis, NU
NU Pavonis
Astronomy
301
42,986
https://en.wikipedia.org/wiki/Alternating%20current
Alternating current (AC) is an electric current that periodically reverses direction and changes its magnitude continuously with time, in contrast to direct current (DC), which flows only in one direction. Alternating current is the form in which electric power is delivered to businesses and residences, and it is the form of electrical energy that consumers typically use when they plug kitchen appliances, televisions, fans and electric lamps into a wall socket. The abbreviations AC and DC are often used to mean simply alternating and direct, respectively, as when they modify current or voltage. The usual waveform of alternating current in most electric power circuits is a sine wave, whose positive half-period corresponds with positive direction of the current and vice versa (the full period is called a cycle). "Alternating current" most commonly refers to power distribution, but a wide range of other applications are technically alternating current although it is less common to describe them by that term. In many applications, like guitar amplifiers, different waveforms are used, such as triangular waves or square waves. Audio and radio signals carried on electrical wires are also examples of alternating current. These types of alternating current carry information such as sound (audio) or images (video) sometimes carried by modulation of an AC carrier signal. These currents typically alternate at higher frequencies than those used in power transmission. Transmission, distribution, and domestic power supply Electrical energy is distributed as alternating current because AC voltage may be increased or decreased with a transformer. This allows the power to be transmitted through power lines efficiently at high voltage, which reduces the energy lost as heat due to resistance of the wire, and transformed to a lower, safer voltage for use. Use of a higher voltage leads to significantly more efficient transmission of power. The power losses (P_w) in the wire are a product of the square of the current (I) and the resistance (R) of the wire, described by the formula P_w = I²R. This means that when transmitting a fixed power on a given wire, if the current is halved (i.e. the voltage is doubled), the power loss due to the wire's resistance will be reduced to one quarter. The power transmitted is equal to the product of the current and the voltage (assuming no phase difference); that is, P_t = IV. Consequently, power transmitted at a higher voltage requires less loss-producing current than for the same power at a lower voltage. Power is often transmitted at hundreds of kilovolts on pylons, and transformed down to tens of kilovolts to be transmitted on lower level lines, and finally transformed down to 100 V – 240 V for domestic use. High voltages have disadvantages, such as the increased insulation required, and generally increased difficulty in their safe handling. In a power plant, energy is generated at a convenient voltage for the design of a generator, and then stepped up to a high voltage for transmission. Near the loads, the transmission voltage is stepped down to the voltages used by equipment. Consumer voltages vary somewhat depending on the country and size of load, but generally motors and lighting are built to use up to a few hundred volts between phases. The voltage delivered to equipment such as lighting and motor loads is standardized, with an allowable range of voltage over which equipment is expected to operate.
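As a quick numerical check of the loss formula above, the following Python sketch computes the resistive line loss for the same delivered power at two transmission voltages. The power, voltage, and resistance values are arbitrary illustrative figures rather than data for any real grid.

```python
# Worked example (illustrative numbers only): transmitting the same power at a
# higher voltage reduces the current and hence the I^2·R loss in the line.

def line_loss(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """Resistive loss P_w = I^2·R for a line carrying `power_w` at `voltage_v`."""
    current = power_w / voltage_v          # I = P / V (no phase difference assumed)
    return current ** 2 * resistance_ohm   # P_w = I^2 · R

if __name__ == "__main__":
    P = 1_000_000.0   # 1 MW delivered (assumed)
    R = 10.0          # ohms of line resistance (assumed)
    for V in (10_000.0, 20_000.0):
        print(f"{V/1000:.0f} kV -> loss {line_loss(P, V, R)/1000:.0f} kW")
    # 10 kV -> 100 kW of loss, 20 kV -> 25 kW of loss.
```

Doubling the voltage halves the current, so the loss falls to one quarter, which is exactly the behaviour described above.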
Standard power utilization voltages and percentage tolerance vary in the different mains power systems found in the world. High-voltage direct-current (HVDC) electric power transmission systems have become more viable as technology has provided efficient means of changing the voltage of DC power. Transmission with high voltage direct current was not feasible in the early days of electric power transmission, as there was then no economically viable way to step the voltage of DC down for end user applications such as lighting incandescent bulbs. Three-phase electrical generation is very common. The simplest way is to use three separate coils in the generator stator, physically offset by an angle of 120° (one-third of a complete 360° phase) to each other. Three current waveforms are produced that are equal in magnitude and 120° out of phase to each other. If coils are added opposite to these (60° spacing), they generate the same phases with reverse polarity and so can be simply wired together. In practice, higher pole orders are commonly used. For example, a 12-pole machine would have 36 coils (10° spacing). The advantage is that lower rotational speeds can be used to generate the same frequency. For example, a 2-pole machine running at 3600 rpm and a 12-pole machine running at 600 rpm produce the same frequency; the lower speed is preferable for larger machines. If the load on a three-phase system is balanced equally among the phases, no current flows through the neutral point. Even in the worst-case unbalanced (linear) load, the neutral current will not exceed the highest of the phase currents. Non-linear loads (e.g. the switch-mode power supplies widely used) may require an oversized neutral bus and neutral conductor in the upstream distribution panel to handle harmonics. Harmonics can cause neutral conductor current levels to exceed that of one or all phase conductors. For three-phase at utilization voltages a four-wire system is often used. When stepping down three-phase, a transformer with a Delta (3-wire) primary and a Star (4-wire, center-earthed) secondary is often used so there is no need for a neutral on the supply side. For smaller customers (just how small varies by country and age of the installation) only a single phase and neutral, or two phases and neutral, are taken to the property. For larger installations, all three phases and neutral are taken to the main distribution panel. From the three-phase main panel, both single and three-phase circuits may lead off. Three-wire single-phase systems, with a single center-tapped transformer giving two live conductors, is a common distribution scheme for residential and small commercial buildings in North America. This arrangement is sometimes incorrectly referred to as two phase. A similar method is used for a different reason on construction sites in the UK. Small power tools and lighting are supposed to be supplied by a local center-tapped transformer with a voltage of 55 V between each power conductor and earth. This significantly reduces the risk of electric shock in the event that one of the live conductors becomes exposed through an equipment fault whilst still allowing a reasonable voltage of 110 V between the two conductors for running the tools. An additional wire, called the bond (or earth) wire, is often connected between non-current-carrying metal enclosures and earth ground. 
This conductor provides protection from electric shock due to accidental contact of circuit conductors with the metal chassis of portable appliances and tools. Bonding all non-current-carrying metal parts into one complete system ensures there is always a low electrical impedance path to ground sufficient to carry any fault current for as long as it takes for the system to clear the fault. This low impedance path allows the maximum amount of fault current, causing the overcurrent protection device (breakers, fuses) to trip or burn out as quickly as possible, bringing the electrical system to a safe state. All bond wires are bonded to ground at the main service panel, as is the neutral/identified conductor if present. AC power supply frequencies The frequency of the electrical system varies by country and sometimes within a country; most electric power is generated at either 50 or 60 Hertz. Some countries have a mixture of 50 Hz and 60 Hz supplies, notably electricity power transmission in Japan. Low frequency A low frequency eases the design of electric motors, particularly for hoisting, crushing and rolling applications, and commutator-type traction motors for applications such as railways. However, low frequency also causes noticeable flicker in arc lamps and incandescent light bulbs. The use of lower frequencies also provided the advantage of lower transmission losses, which are proportional to frequency. The original Niagara Falls generators were built to produce 25 Hz power, as a compromise between low frequency for traction and heavy induction motors, while still allowing incandescent lighting to operate (although with noticeable flicker). Most of the 25 Hz residential and commercial customers for Niagara Falls power were converted to 60 Hz by the late 1950s, although some 25 Hz industrial customers still existed as of the start of the 21st century. 16.7 Hz power (formerly 16 2/3 Hz) is still used in some European rail systems, such as in Austria, Germany, Norway, Sweden and Switzerland. High frequency Off-shore, military, textile industry, marine, aircraft, and spacecraft applications sometimes use 400 Hz, for benefits of reduced weight of apparatus or higher motor speeds. Computer mainframe systems were often powered by 400 Hz or 415 Hz for benefits of ripple reduction while using smaller internal AC to DC conversion units. Effects at high frequencies A direct current flows uniformly throughout the cross-section of a homogeneous electrically conducting wire. An alternating current of any frequency is forced away from the wire's center, toward its outer surface. This is because an alternating current (which is the result of the acceleration of electric charge) creates electromagnetic waves (a phenomenon known as electromagnetic radiation). Electric conductors are not conducive to electromagnetic waves (a perfect electric conductor prohibits all electromagnetic waves within its boundary), so a wire that is made of a non-perfect conductor (a conductor with finite, rather than infinite, electrical conductivity) pushes the alternating current, along with their associated electromagnetic fields, away from the wire's center. The phenomenon of alternating current being pushed away from the center of the conductor is called skin effect, and a direct current does not exhibit this effect, since a direct current does not create electromagnetic waves. 
At very high frequencies, the current no longer flows in the wire, but effectively flows on the surface of the wire, within a thickness of a few skin depths. The skin depth is the thickness at which the current density is reduced by 63%. Even at relatively low frequencies used for power transmission (50 Hz – 60 Hz), non-uniform distribution of current still occurs in sufficiently thick conductors. For example, the skin depth of a copper conductor is approximately 8.57 mm at 60 Hz, so high-current conductors are usually hollow to reduce their mass and cost. This tendency of alternating current to flow predominantly in the periphery of conductors reduces the effective cross-section of the conductor. This increases the effective AC resistance of the conductor since resistance is inversely proportional to the cross-sectional area. A conductor's AC resistance is higher than its DC resistance, causing a higher energy loss due to ohmic heating (also called I2R loss). Techniques for reducing AC resistance For low to medium frequencies, conductors can be divided into stranded wires, each insulated from the others, with the relative positions of individual strands specially arranged within the conductor bundle. Wire constructed using this technique is called Litz wire. This measure helps to partially mitigate skin effect by forcing more equal current throughout the total cross section of the stranded conductors. Litz wire is used for making high-Q inductors, reducing losses in flexible conductors carrying very high currents at lower frequencies, and in the windings of devices carrying higher radio frequency current (up to hundreds of kilohertz), such as switch-mode power supplies and radio frequency transformers. Techniques for reducing radiation loss As written above, an alternating current is made of electric charge under periodic acceleration, which causes radiation of electromagnetic waves. Energy that is radiated is lost. Depending on the frequency, different techniques are used to minimize the loss due to radiation. Twisted pairs At frequencies up to about 1 GHz, pairs of wires are twisted together in a cable, forming a twisted pair. This reduces losses from electromagnetic radiation and inductive coupling. A twisted pair must be used with a balanced signaling system so that the two wires carry equal but opposite currents. Each wire in a twisted pair radiates a signal, but it is effectively canceled by radiation from the other wire, resulting in almost no radiation loss. Coaxial cables Coaxial cables are commonly used at audio frequencies and above for convenience. A coaxial cable has a conductive wire inside a conductive tube, separated by a dielectric layer. The current flowing on the surface of the inner conductor is equal and opposite to the current flowing on the inner surface of the outer tube. The electromagnetic field is thus completely contained within the tube, and (ideally) no energy is lost to radiation or coupling outside the tube. Coaxial cables have acceptably small losses for frequencies up to about 5 GHz. For microwave frequencies greater than 5 GHz, the losses (due mainly to the dielectric separating the inner and outer tubes being a non-ideal insulator) become too large, making waveguides a more efficient medium for transmitting energy. Coaxial cables often use a perforated dielectric layer to separate the inner and outer conductors in order to minimize the power dissipated by the dielectric. 
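The skin depth quoted earlier for copper at 60 Hz can be reproduced from the standard formula for a good conductor, where the depth at which the current density falls to 1/e of its surface value is the square root of the resistivity divided by pi times frequency times permeability. The sketch below is a rough check only; the copper resistivity and the use of the free-space permeability are assumptions, and the exact figure depends on the resistivity value chosen.

```python
import math

MU_0 = 4 * math.pi * 1e-7      # permeability of free space, H/m
RHO_CU = 1.72e-8               # assumed copper resistivity, ohm-metres (approximate room-temperature value)

def skin_depth(freq_hz, resistivity=RHO_CU, mu=MU_0):
    """Skin depth: depth at which the current density falls to 1/e of its surface value."""
    return math.sqrt(resistivity / (math.pi * freq_hz * mu))

for f in (50, 60, 400, 10e3, 1e6):
    print(f"{f:>9.0f} Hz: skin depth = {skin_depth(f)*1e3:6.2f} mm")
```

At 60 Hz the result is roughly 8.5 mm, consistent with the figure quoted above; at radio frequencies the depth shrinks to fractions of a millimetre, which is why stranded Litz wire or hollow conductors are preferred.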
Waveguides Waveguides are similar to coaxial cables, as both consist of tubes, with the biggest difference being that waveguides have no inner conductor. Waveguides can have any arbitrary cross section, but rectangular cross sections are the most common. Because waveguides do not have an inner conductor to carry a return current, waveguides cannot deliver energy by means of an electric current, but rather by means of a guided electromagnetic field. Although surface currents do flow on the inner walls of the waveguides, those surface currents do not carry power. Power is carried by the guided electromagnetic fields. The surface currents are set up by the guided electromagnetic fields and have the effect of keeping the fields inside the waveguide and preventing leakage of the fields to the space outside the waveguide. Waveguides have dimensions comparable to the wavelength of the alternating current to be transmitted, so they are feasible only at microwave frequencies. In addition to this mechanical feasibility, electrical resistance of the non-ideal metals forming the walls of the waveguide causes dissipation of power (surface currents flowing on lossy conductors dissipate power). At higher frequencies, the power lost to this dissipation becomes unacceptably large. Fiber optics At frequencies greater than 200 GHz, waveguide dimensions become impractically small, and the ohmic losses in the waveguide walls become large. Instead, fiber optics, which are a form of dielectric waveguides, can be used. For such frequencies, the concepts of voltages and currents are no longer used. Formulation Alternating currents are accompanied (or caused) by alternating voltages. An AC voltage v can be described mathematically as a function of time by the following equation: $v(t) = V_\text{peak} \sin(\omega t)$, where $V_\text{peak}$ is the peak voltage (unit: volt) and $\omega$ is the angular frequency (unit: radians per second). The angular frequency is related to the physical frequency, $f$ (unit: hertz), which represents the number of cycles per second, by the equation $\omega = 2\pi f$; $t$ is the time (unit: second). The peak-to-peak value of an AC voltage is defined as the difference between its positive peak and its negative peak. Since the maximum value of $\sin(\omega t)$ is +1 and the minimum value is −1, an AC voltage swings between $+V_\text{peak}$ and $-V_\text{peak}$. The peak-to-peak voltage, usually written as $V_\text{pp}$ or $V_\text{P-P}$, is therefore $2V_\text{peak}$. Root mean square voltage Below, a sinusoidal AC waveform (with no DC component) is assumed. The RMS voltage is the square root of the mean over one cycle of the square of the instantaneous voltage; for a sine wave this gives $V_\text{rms} = V_\text{peak}/\sqrt{2}$. Power The relationship between voltage and the power delivered is $p(t) = v^2(t)/R$, where $R$ represents a load resistance. Rather than using instantaneous power, $p(t)$, it is more practical to use a time-averaged power (where the averaging is performed over any integer number of cycles). Therefore, AC voltage is often expressed as a root mean square (RMS) value, written as $V_\text{rms}$, because $P_\text{time averaged} = V_\text{rms}^2/R$. Power oscillation Because $p(t) = v^2(t)/R = \frac{V_\text{peak}^2}{2R}\bigl(1-\cos(2\omega t)\bigr)$ for a resistive load, the instantaneous power is never negative and pulsates like a full-wave rectified sine wave, with a fundamental frequency double that of the voltage. Examples of alternating current To illustrate these concepts, consider a 230 V AC mains supply used in many countries around the world. It is so called because its root mean square value is 230 V. This means that the time-averaged power delivered is equivalent to the power delivered by a DC voltage of 230 V. To determine the peak voltage (amplitude), we can rearrange the above equation to $V_\text{peak} = \sqrt{2}\,V_\text{rms}$. For 230 V AC, the peak voltage is therefore $230\text{ V} \times \sqrt{2}$, which is about 325 V, and the peak instantaneous power into a resistive load is $V_\text{peak} I_\text{peak} = 2 \times 230\text{ V} \times I_\text{rms}$, that is $460\text{ V} \times I_\text{rms}$, twice the time-averaged power. 
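The mains-voltage figures above can be checked numerically. The short sketch below is illustrative only: it evaluates the standard sine-wave relationships for an assumed 230 V RMS, 50 Hz supply into an assumed 10 ohm resistive load.

```python
import numpy as np

V_RMS = 230.0                          # assumed RMS mains voltage
V_PEAK = np.sqrt(2) * V_RMS            # peak voltage, about 325 V

t = np.linspace(0, 0.02, 20_000)       # one cycle at an assumed 50 Hz
v = V_PEAK * np.sin(2 * np.pi * 50 * t)

print(f"peak voltage        : {V_PEAK:.1f} V")
print(f"peak-to-peak voltage: {2 * V_PEAK:.1f} V")
print(f"numerical RMS       : {np.sqrt(np.mean(v**2)):.1f} V")   # recovers ~230 V

R = 10.0                               # assumed resistive load, ohms
p = v**2 / R                           # instantaneous power
print(f"average power       : {p.mean():.0f} W")   # V_rms^2 / R, about 5290 W
print(f"peak power          : {p.max():.0f} W")    # twice the average, about 10580 W
```

Plotting p against t would also show the doubled frequency of the power waveform noted above.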
During the course of one voltage cycle (two cycles of the instantaneous power, since the power waveform repeats at twice the voltage frequency), the voltage rises from zero to +325 V while the power rises from zero to its peak, and both fall back through zero. The voltage then reverses direction, reaching −325 V, while the power, which never goes negative for a resistive load, rises again to its peak before both return to zero. Information transmission Alternating current is used to transmit information, as in the cases of telephone and cable television. Information signals are carried over a wide range of AC frequencies. POTS telephone signals have a frequency of about 3 kHz, close to the baseband audio frequency. Cable television and other cable-transmitted information currents may alternate at frequencies of tens to thousands of megahertz. These frequencies are similar to the electromagnetic wave frequencies often used to transmit the same types of information over the air. History The first alternator to produce alternating current was an electric generator based on Michael Faraday's principles constructed by the French instrument maker Hippolyte Pixii in 1832. Pixii later added a commutator to his device to produce the (then) more commonly used direct current. The earliest recorded practical application of alternating current is by Guillaume Duchenne, inventor and developer of electrotherapy. In 1855, he announced that AC was superior to direct current for electrotherapeutic triggering of muscle contractions. Alternating current technology was developed further by the Hungarian Ganz Works company (1870s), and in the 1880s: Sebastian Ziani de Ferranti, Lucien Gaulard, and Galileo Ferraris. In 1876, Russian engineer Pavel Yablochkov invented a lighting system where sets of induction coils were installed along a high-voltage AC line. Instead of changing voltage, the primary windings transferred power to the secondary windings which were connected to one or several electric candles (arc lamps) of his own design, an arrangement used to keep the failure of one lamp from disabling the entire circuit. In 1878, the Ganz factory, Budapest, Hungary, began manufacturing equipment for electric lighting and, by 1883, had installed over fifty systems in Austria-Hungary. Their AC systems used arc and incandescent lamps, generators, and other equipment. Transformers The development of the alternating current transformer to change voltage from low to high level and back allowed generation and consumption at low voltages and transmission, over great distances, at high voltage, with savings in the cost of conductors and energy losses. A bipolar open-core power transformer developed by Lucien Gaulard and John Dixon Gibbs was demonstrated in London in 1881, and attracted the interest of Westinghouse. They exhibited an AC system powering arc and incandescent lights installed along five railway stations of the Metropolitan Railway in London, and a single-phase multiple-user AC distribution system in Turin in 1884. These early induction coils with open magnetic circuits were inefficient at transferring power to loads. Until about 1880, the paradigm for AC power transmission from a high voltage supply to a low voltage load was a series circuit. Open-core transformers with a ratio near 1:1 were connected with their primaries in series to allow use of a high voltage for transmission while presenting a low voltage to the lamps. The inherent flaw in this method was that turning off a single lamp (or other electric device) affected the voltage supplied to all others on the same circuit. 
Many adjustable transformer designs were introduced to compensate for this problematic characteristic of the series circuit, including those employing methods of adjusting the core or bypassing the magnetic flux around part of a coil. The direct current systems did not have these drawbacks, giving it significant advantages over early AC systems. In the UK, Sebastian de Ferranti, who had been developing AC generators and transformers in London since 1882, redesigned the AC system at the Grosvenor Gallery power station in 1886 for the London Electric Supply Corporation (LESCo) including alternators of his own design and open core transformer designs with serial connections for utilization loads - similar to Gaulard and Gibbs. In 1890, he designed their power station at Deptford and converted the Grosvenor Gallery station across the Thames into an electrical substation, showing the way to integrate older plants into a universal AC supply system. In the autumn of 1884, Károly Zipernowsky, Ottó Bláthy and Miksa Déri (ZBD), three engineers associated with the Ganz Works of Budapest, determined that open-core devices were impractical, as they were incapable of reliably regulating voltage. Bláthy had suggested the use of closed cores, Zipernowsky had suggested the use of parallel shunt connections, and Déri had performed the experiments; In their joint 1885 patent applications for novel transformers (later called ZBD transformers), they described two designs with closed magnetic circuits where copper windings were either wound around a ring core of iron wires or else surrounded by a core of iron wires. In both designs, the magnetic flux linking the primary and secondary windings traveled almost entirely within the confines of the iron core, with no intentional path through air (see toroidal cores). The new transformers were 3.4 times more efficient than the open-core bipolar devices of Gaulard and Gibbs. The Ganz factory in 1884 shipped the world's first five high-efficiency AC transformers. This first unit had been manufactured to the following specifications: 1,400 W, 40 Hz, 120:72 V, 11.6:19.4 A, ratio 1.67:1, one-phase, shell form. The ZBD patents included two other major interrelated innovations: one concerning the use of parallel connected, instead of series connected, utilization loads, the other concerning the ability to have high turns ratio transformers such that the supply network voltage could be much higher (initially 140 to 2000 V) than the voltage of utilization loads (100 V initially preferred). When employed in parallel connected electric distribution systems, closed-core transformers finally made it technically and economically feasible to provide electric power for lighting in homes, businesses and public spaces. The other essential milestone was the introduction of 'voltage source, voltage intensive' (VSVI) systems' by the invention of constant voltage generators in 1885. In early 1885, the three engineers also eliminated the problem of eddy current losses with the invention of the lamination of electromagnetic cores. Ottó Bláthy also invented the first AC electricity meter. Adoption The AC power system was developed and adopted rapidly after 1886. In March of that year, Westinghouse engineer William Stanley, designing a system based on the Gaulard and Gibbs transformer, demonstrated a lighting system in Great Barrington: A Siemens generator's voltage of 500 volts was converted into 3000 volts, and then the voltage was stepped down to 500 volts by six Westinghouse transformers. 
With this setup, the Westinghouse company successfully powered thirty 100-volt incandescent bulbs in twenty shops along the main street of Great Barrington. By the fall of that year Ganz engineers installed a ZBD transformer power system with AC generators in Rome. Based on Stanley's success, the new Westinghouse Electric went on to develop alternating current (AC) electric infrastructure throughout the United States. The spread of Westinghouse and other AC systems triggered a push back in late 1887 by Thomas Edison (a proponent of direct current), who attempted to discredit alternating current as too dangerous in a public campaign called the "war of the currents". In 1888, alternating current systems gained further viability with the introduction of a functional AC motor, something these systems had lacked up till then. The design, an induction motor, was independently invented by Galileo Ferraris and Nikola Tesla (with Tesla's design being licensed by Westinghouse in the US). This design was independently further developed into the modern practical three-phase form by Mikhail Dolivo-Dobrovolsky and Charles Eugene Lancelot Brown in Germany on one side, and Jonas Wenström in Sweden on the other, though Brown favored the two-phase system. The Ames Hydroelectric Generating Plant, constructed in 1890, was among the first hydroelectric alternating current power plants. A long-distance transmission of single-phase electricity from a hydroelectric generating plant in Oregon at Willamette Falls sent power fourteen miles downriver to downtown Portland for street lighting in 1890. In 1891, another transmission system was installed in Telluride Colorado. The first three-phase system was established in 1891 in Frankfurt, Germany. The Tivoli–Rome transmission was completed in 1892. The San Antonio Canyon Generator was the third commercial single-phase hydroelectric AC power plant in the United States to provide long-distance electricity. It was completed on December 31, 1892, by Almarian William Decker to provide power to the city of Pomona, California, which was 14 miles away. Meanwhile, the possibility of transferring electrical power from a waterfall at a distance was explored at the Grängesberg mine in Sweden. A fall at Hällsjön, Smedjebackens kommun, where a small iron work had been located, was selected. In 1893, a three-phase system was used to transfer 400 horsepower a distance of , becoming the first commercial application. In 1893, Westinghouse built an alternating current system for the Chicago World Exposition. In 1893, Decker designed the first American commercial three-phase power plant using alternating current—the hydroelectric Mill Creek No. 1 Hydroelectric Plant near Redlands, California. Decker's design incorporated 10 kV three-phase transmission and established the standards for the complete system of generation, transmission and motors used in USA today. The original Niagara Falls Adams Power Plant with three two-phase generators was put into operation in August 1895, but was connected to the remote transmission system only in 1896. The Jaruga Hydroelectric Power Plant in Croatia was set in operation two days later, on 28 August 1895. Its generator (42 Hz, 240 kW) was made and installed by the Hungarian company Ganz, while the transmission line from the power plant to the City of Šibenik was long, and the municipal distribution grid 3000 V/110 V included six transforming stations. 
Alternating current circuit theory developed rapidly in the latter part of the 19th and early 20th century. Notable contributors to the theoretical basis of alternating current calculations include Charles Steinmetz, Oliver Heaviside, and many others. Calculations in unbalanced three-phase systems were simplified by the symmetrical components methods discussed by Charles LeGeyt Fortescue in 1918. See also AC power Electrical wiring Heavy-duty power plugs Hertz Leading and lagging current Mains electricity by country AC power plugs and sockets Utility frequency War of the currents AC/DC receiver design References Further reading Willam A. Meyers, History and Reflections on the Way Things Were: Mill Creek Power Plant – Making History with AC, IEEE Power Engineering Review, February 1997, pp. 22–24 External links "AC/DC: What's the Difference?". Edison's Miracle of Light, American Experience. (PBS) "AC/DC: Inside the AC Generator ". Edison's Miracle of Light, American Experience. (PBS) Professor Mark Csele's tour of the 25 Hz Rankine generating station Blalock, Thomas J., "The Frequency Changer Era: Interconnecting Systems of Varying Cycles". The history of various frequencies and interconversion schemes in the US at the beginning of the 20th century AC Power History and Timeline Electrical engineering Electric current Electric power AC power
Alternating current
Physics,Engineering
5,720
16,758,771
https://en.wikipedia.org/wiki/Koebe%20quarter%20theorem
In complex analysis, a branch of mathematics, the Koebe 1/4 theorem states the following: Koebe Quarter Theorem. The image of an injective analytic function $f:\mathbf{D}\to\mathbb{C}$ from the unit disk $\mathbf{D}$ onto a subset of the complex plane contains the disk whose center is $f(0)$ and whose radius is $|f'(0)|/4$. The theorem is named after Paul Koebe, who conjectured the result in 1907. The theorem was proven by Ludwig Bieberbach in 1916. The example of the Koebe function shows that the constant $1/4$ in the theorem cannot be improved (increased). A related result is the Schwarz lemma, and a notion related to both is conformal radius. Grönwall's area theorem Suppose that $g(z) = z + \sum_{n\ge 1} b_n z^{-n}$ is univalent in $|z| > 1$. Then $\sum_{n\ge 1} n|b_n|^2 \le 1.$ In fact, if $r > 1$, the complement of the image of the disk $|z| > r$ is a bounded domain $X(r)$. Its area is given by $\pi\left(r^2 - \sum_{n\ge 1} n|b_n|^2 r^{-2n}\right).$ Since the area is positive, the result follows by letting $r$ decrease to $1$. The above proof shows equality holds if and only if the complement of the image of $g$ has zero area, i.e. Lebesgue measure zero. This result was proved in 1914 by the Swedish mathematician Thomas Hakon Grönwall. Koebe function The Koebe function is defined by $f(z) = \frac{z}{(1-z)^2} = \sum_{n\ge 1} n z^n.$ Application of the theorem to this function shows that the constant $1/4$ in the theorem cannot be improved, as the image domain $f(\mathbf{D})$ does not contain the point $z = -1/4$ and so cannot contain any disk centred at $0$ with radius larger than $1/4$. The rotated Koebe function is $f_\alpha(z) = \frac{z}{(1-\alpha z)^2} = \sum_{n\ge 1} n\alpha^{n-1} z^n$ with $\alpha$ a complex number of absolute value $1$. The Koebe function and its rotations are schlicht: that is, univalent (analytic and one-to-one) and satisfying $f(0) = 0$ and $f'(0) = 1$. Bieberbach's coefficient inequality for univalent functions Let $f(z) = z + \sum_{n\ge 2} a_n z^n$ be univalent in $|z| < 1$. Then $|a_2| \le 2.$ This follows by applying Gronwall's area theorem to the odd univalent function $f(z^{-2})^{-1/2} = z - \tfrac{a_2}{2} z^{-1} + \cdots.$ Equality holds if and only if $f$ is a rotated Koebe function. This result was proved by Ludwig Bieberbach in 1916 and provided the basis for his celebrated conjecture that $|a_n| \le n$, proved in 1985 by Louis de Branges. Proof of quarter theorem Applying an affine map, it can be assumed that $f(0) = 0$ and $f'(0) = 1$, so that $f(z) = z + a_2 z^2 + \cdots.$ In particular, the coefficient inequality gives that $|a_2| \le 2$. If $w$ is not in $f(\mathbf{D})$, then $h(z) = \frac{w f(z)}{w - f(z)} = z + \left(a_2 + \tfrac{1}{w}\right) z^2 + \cdots$ is univalent in $|z| < 1$. Applying the coefficient inequality to $f$ and $h$ gives $\left|\tfrac{1}{w}\right| \le |a_2| + \left|a_2 + \tfrac{1}{w}\right| \le 4,$ so that $|w| \ge \tfrac{1}{4}.$ Koebe distortion theorem The Koebe distortion theorem gives a series of bounds for a univalent function and its derivative. It is a direct consequence of Bieberbach's inequality for the second coefficient and the Koebe quarter theorem. Let $f$ be a univalent function on $|z| < 1$ normalized so that $f(0) = 0$ and $f'(0) = 1$ and let $r = |z| < 1$. Then $\frac{r}{(1+r)^2} \le |f(z)| \le \frac{r}{(1-r)^2},$ $\frac{1-r}{(1+r)^3} \le |f'(z)| \le \frac{1+r}{(1-r)^3},$ $\frac{1-r}{1+r} \le \left|\frac{z f'(z)}{f(z)}\right| \le \frac{1+r}{1-r},$ with equality if and only if $f$ is a Koebe function Notes References External links Koebe 1/4 theorem at PlanetMath Theorems in complex analysis
Koebe quarter theorem
Mathematics
559
4,562,875
https://en.wikipedia.org/wiki/Motion%20planning
Motion planning, also path planning (also known as the navigation problem or the piano mover's problem) is a computational problem to find a sequence of valid configurations that moves the object from the source to destination. The term is used in computational geometry, computer animation, robotics and computer games. For example, consider navigating a mobile robot inside a building to a distant waypoint. It should execute this task while avoiding walls and not falling down stairs. A motion planning algorithm would take a description of these tasks as input, and produce the speed and turning commands sent to the robot's wheels. Motion planning algorithms might address robots with a larger number of joints (e.g., industrial manipulators), more complex tasks (e.g. manipulation of objects), different constraints (e.g., a car that can only drive forward), and uncertainty (e.g. imperfect models of the environment or robot). Motion planning has several robotics applications, such as autonomy, automation, and robot design in CAD software, as well as applications in other fields, such as animating digital characters, video game, architectural design, robotic surgery, and the study of biological molecules. Concepts A basic motion planning problem is to compute a continuous path that connects a start configuration S and a goal configuration G, while avoiding collision with known obstacles. The robot and obstacle geometry is described in a 2D or 3D workspace, while the motion is represented as a path in (possibly higher-dimensional) configuration space. Configuration space A configuration describes the pose of the robot, and the configuration space C is the set of all possible configurations. For example: If the robot is a single point (zero-sized) translating in a 2-dimensional plane (the workspace), C is a plane, and a configuration can be represented using two parameters (x, y). If the robot is a 2D shape that can translate and rotate, the workspace is still 2-dimensional. However, C is the special Euclidean group SE(2) = R2 SO(2) (where SO(2) is the special orthogonal group of 2D rotations), and a configuration can be represented using 3 parameters (x, y, θ). If the robot is a solid 3D shape that can translate and rotate, the workspace is 3-dimensional, but C is the special Euclidean group SE(3) = R3 SO(3), and a configuration requires 6 parameters: (x, y, z) for translation, and Euler angles (α, β, γ). If the robot is a fixed-base manipulator with N revolute joints (and no closed-loops), C is N-dimensional. Free space The set of configurations that avoids collision with obstacles is called the free space Cfree. The complement of Cfree in C is called the obstacle or forbidden region. Often, it is prohibitively difficult to explicitly compute the shape of Cfree. However, testing whether a given configuration is in Cfree is efficient. First, forward kinematics determine the position of the robot's geometry, and collision detection tests if the robot's geometry collides with the environment's geometry. Target space Target space is a subspace of free space which denotes where we want the robot to move to. In global motion planning, target space is observable by the robot's sensors. However, in local motion planning, the robot cannot observe the target space in some states. To solve this problem, the robot goes through several virtual target spaces, each of which is located within the observable area (around the robot). A virtual target space is called a sub-goal. 
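Testing membership in Cfree, as described above, requires only forward kinematics plus collision detection. The sketch below is a minimal illustration for a point robot translating in the plane among circular obstacles; the obstacle list, robot model and step size are assumed for the example and are not part of any standard library.

```python
import math

# Assumed workspace: circular obstacles given as (cx, cy, radius).
OBSTACLES = [(2.0, 2.0, 1.0), (5.0, 1.0, 0.8)]

def is_free(q, obstacles=OBSTACLES):
    """Return True if configuration q = (x, y) of a point robot lies in C_free."""
    x, y = q
    return all(math.hypot(x - cx, y - cy) > r for cx, cy, r in obstacles)

def segment_free(q1, q2, step=0.05):
    """Approximate collision check of the straight segment between two configurations."""
    n = max(1, int(math.dist(q1, q2) / step))
    return all(
        is_free((q1[0] + (q2[0] - q1[0]) * i / n,
                 q1[1] + (q2[1] - q1[1]) * i / n))
        for i in range(n + 1)
    )

print(is_free((0.0, 0.0)), is_free((2.0, 2.5)))   # True, False (second point is inside an obstacle)
print(segment_free((0.0, 0.0), (6.0, 3.0)))       # False: the segment crosses an obstacle
```

For a rigid body that can also rotate, the configuration would carry a third component for the heading angle, and is_free would test the transformed robot geometry rather than a single point.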
Obstacle space Obstacle space is a space that the robot can not move to. Obstacle space is not opposite of free space. Algorithms Low-dimensional problems can be solved with grid-based algorithms that overlay a grid on top of configuration space, or geometric algorithms that compute the shape and connectivity of Cfree. Exact motion planning for high-dimensional systems under complex constraints is computationally intractable. Potential-field algorithms are efficient, but fall prey to local minima (an exception is the harmonic potential fields). Sampling-based algorithms avoid the problem of local minima, and solve many problems quite quickly. They are unable to determine that no path exists, but they have a probability of failure that decreases to zero as more time is spent. Sampling-based algorithms are currently considered state-of-the-art for motion planning in high-dimensional spaces, and have been applied to problems which have dozens or even hundreds of dimensions (robotic manipulators, biological molecules, animated digital characters, and legged robots). Grid-based search Grid-based approaches overlay a grid on configuration space and assume each configuration is identified with a grid point. At each grid point, the robot is allowed to move to adjacent grid points as long as the line between them is completely contained within Cfree (this is tested with collision detection). This discretizes the set of actions, and search algorithms (like A*) are used to find a path from the start to the goal. These approaches require setting a grid resolution. Search is faster with coarser grids, but the algorithm will fail to find paths through narrow portions of Cfree. Furthermore, the number of points on the grid grows exponentially in the configuration space dimension, which make them inappropriate for high-dimensional problems. Traditional grid-based approaches produce paths whose heading changes are constrained to multiples of a given base angle, often resulting in suboptimal paths. Any-angle path planning approaches find shorter paths by propagating information along grid edges (to search fast) without constraining their paths to grid edges (to find short paths). Grid-based approaches often need to search repeatedly, for example, when the knowledge of the robot about the configuration space changes or the configuration space itself changes during path following. Incremental heuristic search algorithms replan fast by using experience with the previous similar path-planning problems to speed up their search for the current one. Interval-based search These approaches are similar to grid-based search approaches except that they generate a paving covering entirely the configuration space instead of a grid. The paving is decomposed into two subpavings X−,X+ made with boxes such that X− ⊂ Cfree ⊂ X+. Characterizing Cfree amounts to solve a set inversion problem. Interval analysis could thus be used when Cfree cannot be described by linear inequalities in order to have a guaranteed enclosure. The robot is thus allowed to move freely in X−, and cannot go outside X+. To both subpavings, a neighbor graph is built and paths can be found using algorithms such as Dijkstra or A*. When a path is feasible in X−, it is also feasible in Cfree. When no path exists in X+ from one initial configuration to the goal, we have the guarantee that no feasible path exists in Cfree. 
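Returning to the grid-based search described above, its simplest form reduces to running A* over the free cells of an occupancy grid. The sketch below is illustrative only, not an optimized planner; the small hand-written grid and the choice of 4-connected moves with a Manhattan-distance heuristic are assumptions made for the example.

```python
import heapq

# Assumed occupancy grid: 0 = free cell, 1 = obstacle cell.
GRID = [
    [0, 0, 0, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def astar(grid, start, goal):
    """4-connected A* with Manhattan-distance heuristic; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    open_set = [(h(start), 0, start, None)]      # (f, g, cell, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:                    # already expanded with a better cost
            continue
        came_from[cell] = parent
        if cell == goal:                         # reconstruct the path back to the start
            path = [cell]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cell))
    return None                                  # goal unreachable on this grid

print(astar(GRID, (0, 0), (4, 4)))
```

Refining the grid lets such a planner thread narrower gaps, but the number of cells grows exponentially with the configuration space dimension, as discussed above.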
As for the grid-based approach, the interval approach is inappropriate for high-dimensional problems, due to the fact that the number of boxes to be generated grows exponentially with respect to the dimension of configuration space. An illustration is provided by the three figures on the right where a hook with two degrees of freedom has to move from the left to the right, avoiding two horizontal small segments. Nicolas Delanoue has shown that the decomposition with subpavings using interval analysis also makes it possible to characterize the topology of Cfree such as counting its number of connected components. Geometric algorithms Point robots among polygonal obstacles Visibility graph Cell decomposition Voronoi diagram Translating objects among obstacles Minkowski sum Finding the way out of a building farthest ray trace Given a bundle of rays around the current position attributed with their length hitting a wall, the robot moves into the direction of the longest ray unless a door is identified. Such an algorithm was used for modeling emergency egress from buildings. Artificial potential fields One approach is to treat the robot's configuration as a point in a potential field that combines attraction to the goal, and repulsion from obstacles. The resulting trajectory is output as the path. This approach has advantages in that the trajectory is produced with little computation. However, they can become trapped in local minima of the potential field and fail to find a path, or can find a non-optimal path. The artificial potential fields can be treated as continuum equations similar to electrostatic potential fields (treating the robot like a point charge), or motion through the field can be discretized using a set of linguistic rules. A navigation function or a probabilistic navigation function are sorts of artificial potential functions which have the quality of not having minimum points except the target point. Sampling-based algorithms Sampling-based algorithms represent the configuration space with a roadmap of sampled configurations. A basic algorithm samples N configurations in C, and retains those in Cfree to use as milestones. A roadmap is then constructed that connects two milestones P and Q if the line segment PQ is completely in Cfree. Again, collision detection is used to test inclusion in Cfree. To find a path that connects S and G, they are added to the roadmap. If a path in the roadmap links S and G, the planner succeeds, and returns that path. If not, the reason is not definitive: either there is no path in Cfree, or the planner did not sample enough milestones. These algorithms work well for high-dimensional configuration spaces, because unlike combinatorial algorithms, their running time is not (explicitly) exponentially dependent on the dimension of C. They are also (generally) substantially easier to implement. They are probabilistically complete, meaning the probability that they will produce a solution approaches 1 as more time is spent. However, they cannot determine if no solution exists. Given basic visibility conditions on Cfree, it has been proven that as the number of configurations N grows higher, the probability that the above algorithm finds a solution approaches 1 exponentially. Visibility is not explicitly dependent on the dimension of C; it is possible to have a high-dimensional space with "good" visibility or a low-dimensional space with "poor" visibility. 
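The basic sampling-based roadmap construction described above can be sketched directly. The helper functions is_free and segment_free are assumed to exist (for example, as in the earlier point-robot sketch), and the sampling bounds, number of milestones and connection radius are illustrative choices rather than recommended values.

```python
import math
import random

def build_roadmap(n_samples, radius, bounds, is_free, segment_free):
    """Sample n_samples free configurations as milestones and connect pairs closer
    than `radius` whose joining straight segment is collision-free."""
    (xmin, xmax), (ymin, ymax) = bounds
    milestones = []
    while len(milestones) < n_samples:
        q = (random.uniform(xmin, xmax), random.uniform(ymin, ymax))
        if is_free(q):
            milestones.append(q)
    edges = {i: [] for i in range(len(milestones))}
    for i, p in enumerate(milestones):
        for j in range(i + 1, len(milestones)):
            q = milestones[j]
            if math.dist(p, q) < radius and segment_free(p, q):
                edges[i].append(j)
                edges[j].append(i)
    return milestones, edges
```

A query then adds the start and goal configurations as extra milestones, connects them to nearby nodes in the same way, and runs Dijkstra or A* over the resulting graph.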
The experimental success of sample-based methods suggests that most commonly seen spaces have good visibility. There are many variants of this basic scheme: It is typically much faster to only test segments between nearby pairs of milestones, rather than all pairs. Nonuniform sampling distributions attempt to place more milestones in areas that improve the connectivity of the roadmap. Quasirandom samples typically produce a better covering of configuration space than pseudorandom ones, though some recent work argues that the effect of the source of randomness is minimal compared to the effect of the sampling distribution. Employs local-sampling by performing a directional Markov chain Monte Carlo random walk with some local proposal distribution. It is possible to substantially reduce the number of milestones needed to solve a given problem by allowing curved eye sights (for example by crawling on the obstacles that block the way between two milestones). If only one or a few planning queries are needed, it is not always necessary to construct a roadmap of the entire space. Tree-growing variants are typically faster for this case (single-query planning). Roadmaps are still useful if many queries are to be made on the same space (multi-query planning) List of notable algorithms A* D* Rapidly-exploring random tree Probabilistic roadmap Completeness and performance A motion planner is said to be complete if the planner in finite time either produces a solution or correctly reports that there is none. Most complete algorithms are geometry-based. The performance of a complete planner is assessed by its computational complexity. When proving this property mathematically, one has to make sure, that it happens in finite time and not just in the asymptotic limit. This is especially problematic, if there occur infinite sequences (that converge only in the limiting case) during a specific proving technique, since then, theoretically, the algorithm will never stop. Intuitive "tricks" (often based on induction) are typically mistakenly thought to converge, which they do only for the infinite limit. In other words, the solution exists, but the planner will never report it. This property therefore is related to Turing completeness and serves in most cases as a theoretical underpinning/guidance. Planners based on a brute force approach are always complete, but are only realizable for finite and discrete setups. In practice, the termination of the algorithm can always be guaranteed by using a counter, that allows only for a maximum number of iterations and then always stops with or without solution. For realtime systems, this is typically achieved by using a watchdog timer, that will simply kill the process. The watchdog has to be independent of all processes (typically realized by low level interrupt routines). The asymptotic case described in the previous paragraph, however, will not be reached in this way. It will report the best one it has found so far (which is better than nothing) or none, but cannot correctly report that there is none. All realizations including a watchdog are always incomplete (except all cases can be evaluated in finite time). Completeness can only be provided by a very rigorous mathematical correctness proof (often aided by tools and graph based methods) and should only be done by specialized experts if the application includes safety content. On the other hand, disproving completeness is easy, since one just needs to find one infinite loop or one wrong result returned. 
Formal Verification/Correctness of algorithms is a research field on its own. The correct setup of these test cases is a highly sophisticated task. Resolution completeness is the property that the planner is guaranteed to find a path if the resolution of an underlying grid is fine enough. Most resolution complete planners are grid-based or interval-based. The computational complexity of resolution complete planners is dependent on the number of points in the underlying grid, which is O(1/hd), where h is the resolution (the length of one side of a grid cell) and d is the configuration space dimension. Probabilistic completeness is the property that as more "work" is performed, the probability that the planner fails to find a path, if one exists, asymptotically approaches zero. Several sample-based methods are probabilistically complete. The performance of a probabilistically complete planner is measured by the rate of convergence. For practical applications, one usually uses this property, since it allows setting up the time-out for the watchdog based on an average convergence time. Incomplete planners do not always produce a feasible path when one exists (see first paragraph). Sometimes incomplete planners do work well in practice, since they always stop after a guarantied time and allow other routines to take over. Problem variants Many algorithms have been developed to handle variants of this basic problem. Differential constraints Holonomic Manipulator arms (with dynamics) Nonholonomic Drones Cars Unicycles Planes Acceleration bounded systems Moving obstacles (time cannot go backward) Bevel-tip steerable needle Differential drive robots Optimality constraints Hybrid systems Hybrid systems are those that mix discrete and continuous behavior. Examples of such systems are: Robotic manipulation Mechanical assembly Legged robot locomotion Reconfigurable robots Uncertainty Motion uncertainty Missing information Active sensing Sensorless planning Networked control systems Environmental constraints Maps of dynamics Applications Robot navigation Automation The driverless car Robotic surgery Digital character animation Protein folding Safety and accessibility in computer-aided architectural design See also Moving sofa problem - mathematical problem of finding the largest two-dimensional shape that can be maneuvered around a corner Gimbal lock – similar traditional issue in mechanical engineering Kinodynamic planning Mountain climbing problem OMPL - The Open Motion Planning Library Pathfinding Pebble motion problems – multi-robot motion planning Shortest path problem Velocity obstacle References Further reading Planning Algorithms, Steven M. LaValle, 2006, Cambridge University Press, . Principles of Robot Motion: Theory, Algorithms, and Implementation, H. Choset, W. Burgard, S. Hutchinson, G. Kantor, L. E. Kavraki, K. Lynch, and S. Thrun, MIT Press, April 2005. Chapter 13: Robot Motion Planning: pp. 267–290. External links "Open Robotics Automation Virtual Environment", http://openrave.org/ Jean-Claude Latombe talks about his work with robots and motion planning, 5 April 2000 "Open Motion Planning Library (OMPL)", http://ompl.kavrakilab.org "Motion Strategy Library", http://msl.cs.uiuc.edu/msl/ "Motion Planning Kit", https://ai.stanford.edu/~mitul/mpk "Simox", http://simox.sourceforge.net "Robot Motion Planning and Control", http://www.laas.fr/%7Ejpl/book.html Robot kinematics Theoretical computer science Automated planning and scheduling
Motion planning
Mathematics,Engineering
3,521
32,591,880
https://en.wikipedia.org/wiki/Dara%20Khosrowshahi
Dara Khosrowshahi (, ; born May 28, 1969) is an Iranian and American business executive who is the chief executive officer of Uber. He was previously CEO of Expedia Group, a company that owns several travel fare aggregators. He is on the board of directors of BET.com and Hotels.com, and previously served on the board of The New York Times Company. Early life and education Khosrowshahi was born in 1969 in Iran into a prominent, wealthy family and grew up in a mansion on his family's compound. He is the youngest of three children born to Lili and Asghar (Gary) Khosrowshahi. His family founded the Alborz Investment Company, a diversified conglomerate involved in pharmaceuticals, chemicals, food, distribution, packaging, trading, and services. In 1978, just before the Iranian Revolution, his family was targeted for its wealth and his mother decided to leave everything behind and flee the country. Their company was later nationalized. His family first fled to southern France. They were planning to come back to Iran upon the political climate improving, but when that did not occur and the subsequent Iran-Iraq war started, they immigrated to the United States, eventually moving in with one of his uncles in Tarrytown, New York. Khosrowshahi's mother had very little money to support her children, and having never worked before in Iran, began working full time to contribute towards her son's education. In 1982, when Khosrowshahi was 13 years old, his father went to Iran to care for his grandfather. The Iranian government subsequently barred his father from leaving the country for 6 years, thus Khosrowshahi spent his teenage years without seeing his father. In 1987, he graduated from the Hackley School, a private university-preparatory school in Tarrytown. In 1991, he graduated with a B.S. in electrical and electronics engineering from Brown University, where he was a member of the social fraternity Sigma Chi. Career In 1991, Khosrowshahi joined Allen & Company, an investment bank, as an analyst. While still a junior employee at the firm, when Khosrowshahi's boss fell sick one day, Khosrowshahi was thus tasked with explaining the numerical figures of a major company deal to Barry Diller. The chance meeting with the billionaire thereafter made a deep impression on Khosrowshahi. In 1998, he left Allen & Company to work for Barry Diller, first at Diller's USA Networks, where he held the positions of Senior Vice President for Strategic Planning and then president, and later as chief financial officer of IAC, another company controlled by Diller. In 2001, IAC purchased Expedia, and in August 2005, Khosrowshahi became Expedia's chief executive officer. Ten years later, in 2015, Expedia gave him $90 million in stock options as part of a long-term employment agreement, conditioned on him staying with the company until 2020. In June 2013, he received a Pacific Northwest Entrepreneur of the Year award from Ernst & Young. In 2016, he was one of the highest paid CEOs in the United States. During his tenure as CEO of Expedia, "the gross value of its hotel and other travel bookings more than quadrupled and its pre-tax earnings more than doubled." Under Khosrowshahi, Expedia extended its presence to more than 60 countries and acquired Travelocity, Orbitz, and HomeAway. Khosrowshahi was not considering a career move, and initially when approached by a headhunter refused to apply as Uber CEO, but Spotify co-founder Daniel Ek persuaded him during their meetings in 2017. 
In August 2017, Khosrowshahi became the CEO of Uber, succeeding co-founder and billionaire Travis Kalanick. He was initially viewed as a "dark horse" candidate in case the initial frontrunners, General Electric's Jeff Immelt and Hewlett Packard Enterprise's Meg Whitman, fell through. However, when Immelt flubbed his presentation, Immelt's initial supporters threw their backing to Khosrowshahi. This included Kalanick, even though Khosrowshahi had made clear that under his watch, Kalanick would have no role in Uber's daily operations; as he put it in one of his slides, "there cannot be two CEOs." After several deadlocked votes, Benchmark, a venture capital firm that had helped lead the effort to push out Kalanick, promised to drop a lawsuit against Kalanick if it named Whitman as CEO. Several of the directors read the announcement as blackmail. One of Whitman's supporters switched his vote to Khosrowshahi, breaking the deadlock and making him Uber's second full-time CEO. He forfeited his un-vested stock options of Expedia, then worth $184 million, but Uber reportedly paid him over $200 million to take the CEO position. He is on Uber's board of directors. Khosrowshahi's main task was to clean up the image of a company that had become one of the most despised in the country, in part due to revelations about Uber's corporate culture. He replaced Kalanick's once-inviolable 14 values, which contained such items as "super pumped" and "always be hustlin'," with eight values focusing on "customer obsession". At all of his public appearances after taking over, Khosrowshahi stressed the message, "We do the right thing. Period." In May 2019, Khosrowshahi led Uber in its initial public offering, which he addressed with employees in a company-wide letter. In 2022, Khosrowshahi’s total compensation from Uber rose 22% to $24.3 million. In 2023, Khosrowshahi's total compensation from Uber was $24.2 million, representing a CEO-to-median worker pay ratio of 292-to-1. Khosrowshahi's net worth is estimated to be at least $170 million as of June 2023. Political activity Khosrowshahi is an outspoken critic of the immigration policy of Donald Trump. In 2016, he donated to the Hillary Victory Fund, Washington Democratic Senator Patty Murray, and the Democratic National Committee. He also donated to Utah Republican Senator Mike Lee, a supporter of libertarianism. In November 2019, Khosrowshahi caused controversy in an interview with Axios on HBO when he compared the assassination of Jamal Khashoggi to the death of Elaine Herzberg by an Uber self-driving car in 2018. He called them both "mistakes" that can "be forgiven". The Saudi government is an investor in Uber and has representation on its board of directors. Personal life Khosrowshahi has two children from a first marriage; a son, Alex and a daughter, Chloe. On December 12, 2012, Khosrowshahi married Sydney Shapiro, a former preschool teacher and actress. He praised his wife for wearing a Slayer t-shirt to the wedding, which was held in Las Vegas. The couple has twin sons, Hayes Epic and Hugo Gubrit, both diagnosed with Autism Spectrum Disorder (ASD), and Khosrowshahi has appeared as a guest speaker for Autism Partnership Foundation. Kaveh Khosrowshahi, Dara's brother, is currently managing director at investment firm Allen & Company. Mehrad Khosrowshahi, Dara's other brother, is managing partner of the boutique consulting firm Confida Inc. Their uncle, Hassan Khosrowshahi, fled Iran due to the Iranian Revolution and is now a billionaire. 
A cousin, Amir, co-founded Nervana Systems, which was acquired by Intel in 2016 for $408 million. Another cousin, Golnar, founded Reservoir Media in 2007 as a music publishing company. Khosrowshahi is on the list of "Prominent Iranian-Americans" published by the U.S. Virtual Embassy Iran. See also List of Iranian Americans Timeline of Uber References External links Dara Khosrowshahi on Uber Uber gave CEO Dara Khosrowshahi $45 million in total pay last year, but it paid its COO even more, Troy Wolverton, Business Insider India, 12 April 2019 Living people 1969 births Businesspeople from Tehran Iranian emigrants to the United States Exiles of the Iranian revolution in the United States Brown University School of Engineering alumni 21st-century Iranian businesspeople American chief executives Directors of Uber Expedia Group people Businesspeople of Iranian descent Hackley School alumni 21st-century American businesspeople
Dara Khosrowshahi
Technology
1,804
20,449,334
https://en.wikipedia.org/wiki/Schedule%20chicken
Schedule chicken is a concept described in project management and software development circles. The condition occurs when two or more parties working towards a common goal all claim to be holding to their original schedules for delivering their part of the work, even after they know those schedules are impossible to meet. Each party hopes the other will be the first to have their failure exposed and thus take all of the blame for the larger project being delayed. This pretense continues from one project checkpoint to the next, possibly lasting right up until the functionality is actually due. The practice of schedule chicken often results in contagious schedule slips due to inter-team dependencies and is difficult to identify and resolve, as it is in the best interest of each team not to be the first bearer of bad news. The psychological drivers underlying the "schedule chicken" behavior are related to the Hawk-Dove or Snowdrift model of conflict used by players in game theory. The term derives from the game of chicken played between drivers, as depicted in the movie Rebel Without a Cause, in which two drivers race their hot-rods towards a cliff edge. The first driver to jump out of the car is labeled a "chicken," while the one closest to the edge wins bragging rights. References Further reading Schedule (project management)
Schedule chicken
Physics
264
15,235,961
https://en.wikipedia.org/wiki/LtrA
LtrA is an open reading frame found in the Lactococcus lactis group II introns LtrB. It is an intron-encoded protein, which consists of three subdomains: a reverse-transcriptase/maturase, DNA endonuclease, and DNA/RNA binding domain. LtrA helps to capture and stabilize the catalytically active conformation of the LtrB group II intron RNA. It also functions in group II intron retrohoming. References Prokaryote genes
LtrA
Chemistry,Biology
107
17,083
https://en.wikipedia.org/wiki/Keykode
Keykode (also written as either KeyKode or KeyCode) is an Eastman Kodak Company advancement on edge numbers, which are letters, numbers and symbols placed at regular intervals along the edge of 35 mm and 16 mm film to allow for frame-by-frame specific identification. It was introduced in 1990. Keykode is a variation of timecode used in the post-production process which is designed to uniquely identify film frames in a film stock. Edge numbers Edge numbers (also called key numbers or footage numbers) are a series of numbers with key lettering printed along the edge of a 35 mm negative at intervals of one foot (16 frames or 64 perforations) and on a 16 mm negative at intervals of six inches (twenty frames). The numbers are placed on the negative at the time of manufacturing by one of two methods: Latent image exposes the edge of the film while it passes through the perforation machine. This method is primarily used for color negative films. Visible ink is sometimes used to imprint on the edge of the film – again in manufacturing – at the time of perforations. The ink, which is not affected by photographic chemicals, is normally printed onto the base surface of the film. The numbers are visible on both the raw stock (unexposed) and processed (exposed and developed) film. This method is primarily used for black & white negative film. The edge numbers serve a number of purposes. Every key frame is numbered with a multi-digit identifier that may be referred to later. In addition, a date of manufacturing is imprinted, then the type of emulsion and the batch number. This information is transferred from the negative (visible once developed) to the positive prints. The print may be edited and handled while the original negative remains safely untouched. When the film editing is complete, the edge numbers on the final cut film correspond back to their identical frames on the original negative so that a conform edit can be made of the original negative to match the work print. Laboratories can also imprint their own edge numbers on the processed film negative or print to identify the film for their own means. This is normally done in yellow ink. A common workflow for film editing involves edge-coding printed film simultaneously with the film's synchronized audio track, on 35mm magnetic film, so that a foot of film and its synchronized audio have identical edge numbers. Eastman Kodak began using latent image edge numbering on their manufactured 35mm raw film stocks in 1919. Keykode With the popularity of telecine transfers and video edits, Kodak invented a machine readable edge number that could be recorded via computer, read by the editing computer and automatically produce a "cut list" from the video edit of the film. To do this, Kodak utilized the USS-128 barcode alongside the human-readable edge numbers. They also improved the quality and readability of the human-readable information to make it easier to identify. The Keykode consists of 12 characters in human-readable form followed by the same information in barcode form. Keykode is a form of metadata identifier for film negatives. Keykode deciphered An example Keykode: KU 22 9611 1802+02.3 The first two letters in the Keykode are the manufacturer code (E and K both stand for Kodak, F stands for Fuji, etc.) and the stock identifier, respectively (in this case Kodak's U standing for 5279 emulsion); each manufacturer has different stocks' naming convention for their emulsion codes. 
The next six numbers in the Keykode (usually split in 2+4 digits) are the identification number for that roll of film. On Kodak film stocks, it remains consistent for the entire roll. Fuji Stocks will increment this number when the frame number advances past "9999". Computers read the (optional) frame offset (marked every four perforations on actual film by a single "-" dash) by adding digits to the Keykode after the plus sign. In this case, a frame offset of two frames (with respect to the film foot) is specified. The number of frames within a film foot depends on both the film width and the frame pulldown itself, and can also be uneven within the same roll, but rather repeat periodically (like in the 35mm 3perf. pulldown). The last (optional), dot-separated number is the perforation offset which, if preceded by a frame offset like in the above example, is a bias within the just-specified frame; otherwise (as interpreted by most DI software) this considered to be an offset within the whole film foot. EASTMAN 5279 167 3301 122 KD These numbers are consistent for a whole batch of film and may not change in many rolls. EASTMAN is the film manufacturer, 5279 is the stock type identifier. The next three numbers (167) is the emulsion batch number. The next series of four digits (3301) is the roll and part code, followed by the printer identification number that made the Keykode (122) and finally a two letter date designation (KD). In this case, KD=1997. See also 35 mm film Color motion picture film Film stock List of motion picture film stocks Film base Timecode References Kodak Motion Picture Film (H1) (4th ed). Eastman Kodak Company. The Kodak Worldwide Student Program Student Filmmaker's Handbook: Motion Picture and Television Imaging (H-19) (1991) (2nd Ed). Eastman Kodak Company. Konigsberg, Ira (1987). The Complete Film Dictionary Meridan PAL books. . External links Audiovisual introductions in 1990 Film production Film editing Film and video terminology Metadata Kodak Photographic film markings
Keykode
Technology
1,199
55,731,874
https://en.wikipedia.org/wiki/Complex%20Wishart%20distribution
In statistics, the complex Wishart distribution is a complex version of the Wishart distribution. It is the distribution of times the sample Hermitian covariance matrix of zero-mean independent Gaussian random variables. It has support for Hermitian positive definite matrices. The complex Wishart distribution is the density of a complex-valued sample covariance matrix. Let where each is an independent column p-vector of random complex Gaussian zero-mean samples and is an Hermitian (complex conjugate) transpose. If the covariance of G is then where is the complex central Wishart distribution with n degrees of freedom and mean value, or scale matrix, M. where is the complex multivariate Gamma function. Using the trace rotation rule we also get which is quite close to the complex multivariate pdf of G itself. The elements of G conventionally have circular symmetry such that . Inverse Complex Wishart The distribution of the inverse complex Wishart distribution of according to Goodman, Shaman is where . If derived via a matrix inversion mapping, the result depends on the complex Jacobian determinant Goodman and others discuss such complex Jacobians. Eigenvalues The probability distribution of the eigenvalues of the complex Hermitian Wishart distribution are given by, for example, James and Edelman. For a matrix with degrees of freedom we have where Note however that Edelman uses the "mathematical" definition of a complex normal variable where iid X and Y each have unit variance and the variance of . For the definition more common in engineering circles, with X and Y each having 0.5 variance, the eigenvalues are reduced by a factor of 2. While this expression gives little insight, there are approximations for marginal eigenvalue distributions. From Edelman we have that if S is a sample from the complex Wishart distribution with such that then in the limit the distribution of eigenvalues converges in probability to the Marchenko–Pastur distribution function This distribution becomes identical to the real Wishart case, by replacing by , on account of the doubled sample variance, so in the case , the pdf reduces to the real Wishart one: A special case is or, if a Var(Z) = 1 convention is used then . The Wigner semicircle distribution arises by making the change of variable in the latter and selecting the sign of y randomly yielding pdf In place of the definition of the Wishart sample matrix above, , we can define a Gaussian ensemble such that S is the matrix product . The real non-negative eigenvalues of S are then the modulus-squared singular values of the ensemble and the moduli of the latter have a quarter-circle distribution. In the case such that then is rank deficient with at least null eigenvalues. However the singular values of are invariant under transposition so, redefining , then has a complex Wishart distribution, has full rank almost certainly, and eigenvalue distributions can be obtained from in lieu, using all the previous equations. In cases where the columns of are not linearly independent and remains singular, a QR decomposition can be used to reduce G to a product like such that is upper triangular with full rank and has further reduced dimensionality. The eigenvalues are of practical significance in radio communications theory since they define the Shannon channel capacity of a MIMO wireless channel which, to first approximation, is modeled as a zero-mean complex Gaussian ensemble. 
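The displayed equations in this copy of the article were lost in extraction. As a partial reconstruction, the LaTeX block below restates the complex central Wishart density in the form it is usually quoted (following Goodman's 1963 treatment); it is supplied as a reference point and is not a verbatim restoration of the original formulas.

```latex
% Complex central Wishart density for S = \sum_{i=1}^{n} G_i G_i^H,
% where S is p x p Hermitian positive definite, n >= p is the number of
% degrees of freedom and M is the (Hermitian positive definite) scale matrix.
\[
  f_S(S) \;=\;
  \frac{\det(S)^{\,n-p}\,\exp\!\left(-\operatorname{tr}\!\left(M^{-1}S\right)\right)}
       {\det(M)^{\,n}\;\widetilde{\Gamma}_p(n)},
  \qquad
  \widetilde{\Gamma}_p(n) \;=\; \pi^{\,p(p-1)/2}\prod_{j=1}^{p}\Gamma(n-j+1),
\]
% with \widetilde{\Gamma}_p the complex multivariate gamma function mentioned in the text.
```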
References Continuous distributions Multivariate continuous distributions Covariance and correlation Random matrices Conjugate prior distributions Exponential family distributions Complex distributions
Complex Wishart distribution
Physics,Mathematics
737
1,972,580
https://en.wikipedia.org/wiki/Phototropin
Phototropins are blue light photoreceptor proteins (more specifically, flavoproteins) that mediate phototropism responses across many species of algae, fungi and higher plants. Phototropins can be found throughout the leaves of a plant. Along with cryptochromes and phytochromes they allow plants to respond and alter their growth in response to the light environment. When phototropins are hit with blue light, they induce a signal transduction pathway that alters the plant cells' functions in different ways. Phototropins are part of the phototropic sensory system in plants that causes various environmental responses in plants. Phototropins specifically will cause stems to bend towards light and stomata to open. In addition phototropins mediate the first changes in stem elongation in blue light prior to cryptochrome activation. Phototropins are also required for blue light mediated transcript destabilization of specific mRNAs in the cell. Phototropins also regulate the movement of chloroplasts within the cell, notably chloroplast avoidance. It was thought that this avoidance serves a protective function to avoid damage from intense light, however an alternate study argues that the avoidance response is primarily to increase light penetration into deeper mesophyll layers in high light conditions. Phototropins may also be important for the opening of stomata. Enzyme activity Phototropins have two distinct light, oxygen, or voltage regulated domains (LOV1, LOV2) that each bind flavin mononucleotide (FMN). The FMN is noncovalently bound to a LOV domain in the dark, but becomes covalently linked upon exposure to suitable light. The formation of the bond is reversible once light is no longer present. The forward reaction with light is not dependent on temperature, though low temperatures give increased stability of the covalent linkage, leading to a slower reversal reaction. Light excitation will lead to a conformational change within the protein, which allows for kinase activity. There is also evidence to suggest that phototropins undergo autophosphorylation at various sites across the enzyme. Phototropins trigger signaling responses within the cell, but it is unknown which proteins are phosphorylated by phototropins, or exactly how the autophosphorylation events play a role in signaling. Phototropins are typically found on the plasma membrane, but some phototropins have been found in substantial quantities on chloroplast membranes. One study found that phototropins on the plasma membrane play a role in phototropism, leaf flattening, stomatal opening, and chloroplast movements, while phototropins on the chloroplasts only partially affected stomatal opening and chloroplast movement, suggesting that the location of the protein in the cell may also play a role in its signaling function. References Other sources Sensory receptors Signal transduction Biological pigments Integral membrane proteins Molecular biology Plant physiology EC 2.7.11
Phototropin
Chemistry,Biology
632
32,813,994
https://en.wikipedia.org/wiki/Stanislav%20Petrov
Stanislav Yevgrafovich Petrov (; 7 September 1939 – 19 May 2017) was a lieutenant colonel of the Soviet Air Defence Forces who played a key role in the 1983 Soviet nuclear false alarm incident. On 26 September 1983, three weeks after the Soviet military had shot down Korean Air Lines Flight 007, Petrov was the duty officer at the command center for the Oko nuclear early-warning system when the system reported that a missile had been launched from the United States, followed by up to four more. Petrov judged the reports to be a false alarm. His subsequent decision to disobey orders, against Soviet military protocol, is credited with having prevented an erroneous retaliatory nuclear attack on the United States and its NATO allies that would have likely resulted in a large-scale nuclear war. An investigation later confirmed that the Soviet satellite warning system had indeed malfunctioned. Because of his decision not to launch a retaliatory nuclear strike amid this incident, Petrov is often credited as having "saved the world". Early life and military career Petrov was born on 7 September 1939 to a Russian family near Vladivostok. His father, Yevgraf, flew fighter aircraft during World War II. His mother was a nurse. Petrov enrolled at the Kiev Military Aviation Engineering Academy of the Soviet Air Forces, and after graduating in 1972 he joined the Soviet Air Defence Forces. In the early 1970s, he was assigned to the organization that oversaw the new early warning system intended to detect ballistic missile attacks from NATO countries. Petrov was married to Raisa, and had a son, Dmitri, and a daughter, Yelena. His wife died of cancer in 1997. Incident On 26 September 1983, during the Cold War, the Soviet nuclear early warning system Oko reported the launch of one intercontinental ballistic missile with four more missiles behind it from the United States. Petrov, suspecting a false alarm, decided to wait for a confirmation that never came. According to the Permanent Mission of the Russian Federation to the UN, nuclear retaliation requires that multiple sources confirm an attack. In any case, the incident exposed a serious flaw in the Soviet early warning system. Had Petrov reported incoming American missiles, his superiors might have launched an assault against the United States, precipitating a corresponding nuclear response from the United States. Petrov declared the system's indication a false alarm. Later, it was apparent that he was right: no missiles were approaching and the computer detection system was malfunctioning. It was subsequently determined that the false alarm had been created by a rare alignment of sunlight on high-altitude clouds above North Dakota and the Molniya orbits of the satellites, an error later corrected by cross-referencing a geostationary satellite. Petrov later indicated that the influences on his decision included that he had been told a U.S. strike would be all-out, so five missiles seemed an illogical start; that the launch detection system was new and, in his view, not yet wholly trustworthy; that the message passed through 30 layers of verification too quickly; and that ground radar failed to pick up corroborating evidence, even after minutes of delay. However, in a 2013 interview, Petrov said at the time he was never sure that the alarm was erroneous. He felt that his civilian training helped him make the right decision. 
He said that his colleagues were all professional soldiers with purely military training and, following instructions, would have reported a missile launch if they had been on his shift. Petrov later said "I had obviously never imagined that I would ever face that situation. It was the first and, as far as I know, also the last time that such a thing had happened, except for simulated practice scenarios." Significance In a later interview, Petrov stated that the famous red button was never made operational, as military psychologists did not want to put the decision of initiating a nuclear war into the hands of one single person. There is some confusion as to precisely what Petrov's military role was in this incident. Petrov, as an individual, was not in a position where he could have single-handedly launched any of the Soviet missile arsenal. His sole duty was to monitor satellite surveillance equipment and report missile attack warnings up the chain of command; top Soviet leadership would have decided whether to launch a retaliatory attack against the West. But Petrov's role was crucial in providing information to make that decision. According to Bruce G. Blair, a Cold War nuclear strategies expert and nuclear disarmament advocate, formerly with the Center for Defense Information, "The top leadership, given only a couple of minutes to decide, told that an attack had been launched, would make a decision to retaliate." In contrast, nuclear security scholar Pavel Podvig argues that, while Petrov did the right thing, "there were at least three assessment and decision-making layers above the command center of the army that operated the satellites", so that Petrov's report would not have directly led to a nuclear launch. In addition, he states that, even if the US strike was deemed to be real, the USSR would only have commenced its own strike after actual nuclear explosions on its territory. In 2006, when Petrov was first honored for his actions at the United Nations, the Permanent Mission of the Russian Federation to the United Nations issued a press release contending that a single person could not have started or prevented a nuclear war, stating in part, "Under no circumstances a decision to use nuclear weapons could be made or even considered in the Soviet Union or in the United States on the basis of data from a single source or a system. For this to happen, a confirmation is necessary from several systems: ground-based radars, early warning satellites, intelligence reports, etc." But nuclear security expert Bruce G. Blair has said that at that time, the U.S.–Soviet relationship had deteriorated to the point where "the Soviet Union as a system—not just the Kremlin, not just Andropov, not just the KGB—but as a system, was geared to expect an attack and to retaliate very quickly to it. It was on hair-trigger alert. It was very nervous and prone to mistakes and accidents. The false alarm that happened on Petrov's watch could not have come at a more dangerous, intense phase in US–Soviet relations." At that time, according to Oleg Kalugin, a former KGB chief of foreign counterintelligence, "The danger was in the Soviet leadership thinking, 'The Americans may attack, so we better attack first.'" Aftermath Petrov underwent intense questioning by his superiors about his judgment. Initially, he was praised for his decision. 
General Yury Votintsev, then commander of the Soviet Air Defense's Missile Defense Units, who was the first to hear Petrov's report of the incident (and the first to reveal it to the public in the 1990s), states that Petrov's "correct actions" were "duly noted". Petrov himself states he was initially praised by Votintsev and promised a reward, but recalls that he was also reprimanded for improper filing of paperwork because he had not described the incident in the war diary. Petrov has said that he was neither rewarded nor punished for his actions. According to Petrov, he received no reward because the incident and other bugs found in the missile detection system embarrassed his superiors and the scientists who were responsible for it, so that if he had been officially rewarded, they would have had to be punished. He was reassigned to a less sensitive post, took early retirement (although he emphasized that he was not "forced out" of the army), and suffered a nervous breakdown. The incident became known publicly in 1998 upon the publication of Votintsev's memoirs. Widespread media reports since then have increased public awareness of Petrov's actions. Later career After leaving the military in 1984, Petrov was hired at the same research institute that had developed the Soviet Union's early warning system. He later retired so he could care for his wife after she was diagnosed with cancer. During a visit to the United States for the filming of the documentary The Man Who Saved the World, Petrov toured the Minuteman Missile National Historic Site in May 2007 and commented, "I would never have imagined being able to visit one of the enemy's securest sites." Petrov died on 19 May 2017 from pneumonia, though it was not widely reported until September. He was 77. Awards and commendations On 21 May 2004, the San Francisco-based Association of World Citizens gave Petrov its World Citizen Award along with a trophy and $1,000 "in recognition of the part he played in averting a catastrophe." In January 2006, Petrov travelled to the United States where he was honored in a meeting at the United Nations in New York City. There the Association of World Citizens presented him with a second special World Citizen Award. The next day, he met American journalist Walter Cronkite at his CBS office in New York City. That interview, in addition to other highlights of Petrov's trip to the United States, was filmed for The Man Who Saved the World, a narrative feature and documentary film, directed by Peter Anthony of Denmark. It premiered in October 2014 at the Woodstock Film Festival in Woodstock, New York, winning "Honorable Mention: Audience Award Winner for Best Narrative Feature" and "Honorable Mention: James Lyons Award for Best Editing of a Narrative Feature." Various internet communities commemorate 26 September as Stanislav Petrov day, following Eliezer Yudkowsky's blog post highlighting the story: "Wherever you are, whatever you're doing, take a minute to not destroy the world.". For his actions in averting a potential nuclear war in 1983, Petrov received the Dresden Peace Prize in Dresden, Germany, on 17 February 2013. The award included €25,000. On 24 February 2012, he was given the 2011 German Media Award, presented to him at a ceremony in Baden-Baden, Germany. On 26 September 2018, he was posthumously honored in New York with the $50,000 Future of Life Award. 
At a ceremony at the National Museum of Mathematics in New York, former United Nations Secretary General Ban Ki-Moon said: "It is hard to imagine anything more devastating for humanity than all-out nuclear war between Russia and the United States. Yet this might have occurred by accident on September 26, 1983, were it not for the wise decisions of Stanislav Yevgrafovich Petrov. For this, he deserves humanity's profound gratitude. Let us resolve to work together to realize a world free from fear of nuclear weapons, remembering the courageous judgement of Stanislav Petrov." As Petrov had died, the award was collected by his daughter, Elena. Petrov's son Dmitri missed his flight to New York because the US embassy delayed his visa. Petrov said he did not know whether he should have regarded himself as a hero for what he did that day. In an interview for the film The Man Who Saved the World, Petrov says, "All that happened didn't matter to me—it was my job. I was simply doing my job, and I was the right person at the right time, that's all. My late wife for 10 years knew nothing about it. 'So what did you do?' she asked me. 'Nothing. I did nothing.'" The story of the nuclear incident is portrayed in the novel La redención del camarada Petrov by Argentinian writer Eduardo Sguiglia (Edhasa, 2023). See also Vasily Arkhipov – a Soviet naval officer who refused to launch a nuclear torpedo during the 1962 Cuban Missile Crisis List of nuclear close calls References External links BrightStarSound.com a tribute website, multiple pages with photos and reprints of various articles about Petrov Nuclear War: Minuteman 1939 births 2017 deaths People from Vladivostok Soviet Air Defence Force officers Military personnel from Vladivostok Cold War military history of the Soviet Union Deterrence theory during the Cold War Nuclear warfare Deaths from pneumonia in Russia War scare
Stanislav Petrov
Chemistry
2,507
7,321,584
https://en.wikipedia.org/wiki/Polyhydroxyethylmethacrylate
Poly(2-hydroxyethyl methacrylate) (pHEMA) is a polymer that forms a hydrogel in water. For intraocular lens (IOL) materials, PHEMA hydrogels have been synthesized by solution polymerization of 2-hydroxyethyl methacrylate (HEMA), with ammonium persulfate and sodium pyrosulfite (APS/SMBS) as the initiator system and triethyleneglycol dimethacrylate (TEGDMA) as the cross-linking additive. It was invented by Drahoslav Lim and Otto Wichterle for biological use. Together they succeeded in preparing a cross-linked gel which absorbed up to 40% water, exhibited suitable mechanical properties and was transparent. They patented this material in 1953. Applications Contact lenses In 1959, this material was first used as an optical implant. Wichterle thought pHEMA might be a suitable material for a contact lens and gained his first patent for soft contact lenses. By late 1961, he succeeded in producing the first four pHEMA hydrogel contact lenses on a home-made apparatus. Copolymers of pHEMA are still widely used today. Poly-HEMA functions as a hydrogel by rotating around its central carbon. In air, the non-polar methyl side turns outward, making the material brittle and easy to grind into the correct lens shape. In water, the polar hydroxyethyl side turns outward and the material becomes flexible. Pure pHEMA yields lenses that are too thick for sufficient oxygen to diffuse through, so all contact lenses that are pHEMA based are manufactured with copolymers that make the gel thinner and increase its water of hydration. These copolymer hydrogel lenses are often suffixed "-filcon", such as Methafilcon, which is a copolymer of hydroxyethyl methacrylate and methyl methacrylate. Another copolymer hydrogel lens, called Polymacon, is a copolymer of hydroxyethyl methacrylate and ethylene glycol dimethacrylate. Cell culture pHEMA is commonly used to coat cell culture flasks in order to prevent cell adhesion and induce spheroid formation, particularly in cancer research. Older alternatives to pHEMA include agar and agarose gels. References Plastics Acrylate polymers Czech inventions
Polyhydroxyethylmethacrylate
Physics
519
2,928,775
https://en.wikipedia.org/wiki/Hofstadter%27s%20butterfly
In condensed matter physics, Hofstadter's butterfly is a graph of the spectral properties of non-interacting two-dimensional electrons in a perpendicular magnetic field in a lattice. The fractal, self-similar nature of the spectrum was discovered in the 1976 Ph.D. work of Douglas Hofstadter and is one of the early examples of modern scientific data visualization. The name reflects the fact that, as Hofstadter wrote, "the large gaps [in the graph] form a very striking pattern somewhat resembling a butterfly." The Hofstadter butterfly plays an important role in the theory of the integer quantum Hall effect and the theory of topological quantum numbers. History The first mathematical description of electrons on a 2D lattice, acted on by a perpendicular homogeneous magnetic field, was studied by Rudolf Peierls and his student R. G. Harper in the 1950s. Hofstadter first described the structure in 1976 in an article on the energy levels of Bloch electrons in perpendicular magnetic fields. It gives a graphical representation of the spectrum of Harper's equation at different frequencies. One key aspect of the mathematical structure of this spectrum – the splitting of energy bands for a specific value of the magnetic field, along a single dimension (energy) – had been previously mentioned in passing by Soviet physicist Mark Azbel in 1964 (in a paper cited by Hofstadter), but Hofstadter greatly expanded upon that work by plotting all values of the magnetic field against all energy values, creating the two-dimensional plot that first revealed the spectrum's uniquely recursive geometric properties. Written while Hofstadter was at the University of Oregon, his paper was influential in directing further research. It predicted on theoretical grounds that the allowed energy level values of an electron in a two-dimensional square lattice, as a function of a magnetic field applied perpendicularly to the system, formed what is now known as a fractal set. That is, the distribution of energy levels for small-scale changes in the applied magnetic field recursively repeats patterns seen in the large-scale structure. "Gplot", as Hofstadter called the figure, was described as a recursive structure in his 1976 article in Physical Review B, written before Benoit Mandelbrot's newly coined word "fractal" was introduced in an English text. Hofstadter also discusses the figure in his 1979 book Gödel, Escher, Bach. The structure became generally known as "Hofstadter's butterfly". David J. Thouless and his team discovered that the butterfly's wings are characterized by Chern integers, which provide a way to calculate the Hall conductance in Hofstadter's model. Confirmation In 1997 the Hofstadter butterfly was reproduced in experiments with a microwave guide equipped with an array of scatterers. The similarity between the mathematical description of the microwave guide with scatterers and Bloch's waves in the magnetic field allowed the reproduction of the Hofstadter butterfly for periodic sequences of the scatterers. In 2001, Christian Albrecht, Klaus von Klitzing, and coworkers realized an experimental setup to test Thouless et al.'s predictions about Hofstadter's butterfly with a two-dimensional electron gas in a superlattice potential. In 2013, three separate groups of researchers independently reported evidence of the Hofstadter butterfly spectrum in graphene devices fabricated on hexagonal boron nitride substrates. 
In this instance the butterfly spectrum results from the interplay between the applied magnetic field and the large-scale moiré pattern that develops when the graphene lattice is oriented with near zero-angle mismatch to the boron nitride. In September 2017, John Martinis's group at Google, in collaboration with the Angelakis group at CQT Singapore, published results from a simulation of 2D electrons in a perpendicular magnetic field using interacting photons in 9 superconducting qubits. The simulation recovered Hofstadter's butterfly, as expected. In 2021 the butterfly was observed in twisted bilayer graphene at the second magic angle. Theoretical model In his original paper, Hofstadter considers the following derivation: a charged quantum particle in a two-dimensional square lattice, with a lattice spacing , is described by a periodic Schrödinger equation, under a perpendicular static homogeneous magnetic field restricted to a single Bloch band. For a 2D square lattice, the tight binding energy dispersion relation is , where is the energy function, is the crystal momentum, and is an empirical parameter. The magnetic field , where the magnetic vector potential, can be taken into account by using Peierls substitution, replacing the crystal momentum with the canonical momentum , where is the particle momentum operator and is the charge of the particle ( for the electron, is the elementary charge). For convenience we choose the gauge . Using that is the translation operator, so that , where and is the particle's two-dimensional wave function. One can use as an effective Hamiltonian to obtain the following time-independent Schrödinger equation: Considering that the particle can only hop between points in the lattice, we write , where are integers. Hofstadter makes the following ansatz: , where depends on the energy, in order to obtain Harper's equation (also known as almost Mathieu operator for ): where and , is proportional to the magnetic flux through a lattice cell and is the magnetic flux quantum. The flux ratio can also be expressed in terms of the magnetic length , such that . Hofstadter's butterfly is the resulting plot of as a function of the flux ratio , where is the set of all possible that are a solution to Harper's equation. Solutions to Harper's equation and Wannier treatment Due to the cosine function's properties, the pattern is periodic on with period 1 (it repeats for each quantum flux per unit cell). The graph in the region of between 0 and 1 has reflection symmetry in the lines and . Note that is necessarily bounded between -4 and 4. Harper's equation has the particular property that the solutions depend on the rationality of . By imposing periodicity over , one can show that if (a rational number), where and are distinct prime numbers, there are exactly energy bands. For large , the energy bands converge to thin energy bands corresponding to the Landau levels. Gregory Wannier showed that by taking into account the density of states, one can obtain a Diophantine equation that describes the system, as where where and are integers, and is the density of states at a given . Here counts the number of states up to the Fermi energy, and corresponds to the levels of the completely filled band (from to ). This equation characterizes all the solutions of Harper's equation. Most importantly, one can derive that when is an irrational number, there are infinitely many solution for . 
The union of all forms a self-similar fractal that is discontinuous between rational and irrational values of . This discontinuity is nonphysical, and continuity is recovered for a finite uncertainty in or for lattices of finite size. The scale at which the butterfly can be resolved in a real experiment depends on the system's specific conditions. Phase diagram, conductance and topology The phase diagram of electrons in a two-dimensional square lattice, as a function of a perpendicular magnetic field, chemical potential and temperature, has infinitely many phases. Thouless and coworkers showed that each phase is characterized by an integral Hall conductance, where all integer values are allowed. These integers are known as Chern numbers. See also Aubry–André model References Fractals Condensed matter physics 1976 introductions Hall effect
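The equations in the theoretical-model section above were also stripped during extraction. Harper's equation is conventionally written g(m+1) + g(m-1) + 2 cos(2*pi*m*alpha - nu) g(m) = epsilon g(m), and the butterfly is traced out by collecting the allowed epsilon for rational flux ratios alpha = p/q. The sketch below is an illustrative numerical recipe for doing exactly that, not code from the original article; sampling only two phases nu and stopping at q = 39 are arbitrary choices that give a coarse but recognizable plot.

```python
from math import gcd
import numpy as np
import matplotlib.pyplot as plt

def harper_spectrum(p: int, q: int, phases=(0.0, np.pi)):
    """Eigenvalues of the q x q Harper matrix for flux ratio alpha = p/q.

    The matrix has 2*cos(2*pi*alpha*m - nu) on the diagonal and hopping 1 on
    the off-diagonals (with periodic wrap-around); sampling a couple of
    phases nu is enough to sprinkle points through each band.
    """
    alpha = p / q
    energies = []
    for nu in phases:
        h = np.zeros((q, q))
        for m in range(q):
            h[m, m] = 2.0 * np.cos(2.0 * np.pi * alpha * m - nu)
            h[m, (m + 1) % q] = 1.0
            h[(m + 1) % q, m] = 1.0
        energies.extend(np.linalg.eigvalsh(h))
    return energies

if __name__ == "__main__":
    xs, ys = [], []
    for q in range(2, 40):                  # denominators of the flux ratio
        for p in range(1, q):
            if gcd(p, q) != 1:
                continue
            for e in harper_spectrum(p, q):
                xs.append(p / q)
                ys.append(e)
    plt.plot(xs, ys, ",k")                  # one dot per (flux ratio, energy) pair
    plt.xlabel("flux ratio p/q")
    plt.ylabel("energy (units of the hopping amplitude)")
    plt.show()
```

Increasing the maximum denominator and sampling more phases fills in the bands more densely, at the cost of runtime.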
Hofstadter's butterfly
Physics,Chemistry,Materials_science,Mathematics,Engineering
1,586
4,386,199
https://en.wikipedia.org/wiki/Industrial%20style
Industrial style or industrial chic refers to an aesthetic trend in interior design that takes cues from old factories and industrial spaces that in recent years have been converted to lofts and other living spaces. Components of industrial style include weathered wood, building systems, exposed brick, industrial lighting fixtures and concrete. This aesthetic became popular in the late 2000s and remained popular in the 2010s. Industrial style can also be seen in the use of unexpected materials used in building. Shipping containers are now being used in architecture for homes and commercial spaces. The Industrial style of design is most commonly found in urban areas including cities and lofts. These are prime locations because they provide almost a blank space for homeowners to get started with a fresh canvas. These locations also contain some of the key elements used to achieve this style of design including exposed bricks and pipes, concrete flooring, and large open windows. These elements help give the space a “warehouse” feel which is the ultimate goal of this style of design. This style incorporates raw materials to give the space an unfinished feel. To achieve an industrial feel, a natural color palette is most commonly used. A mix of grays, neutrals and rustic colors can be seen in these spaces. These simple colors allow for the use of furniture and other accessories to help liven up the room. Also, having the walls a neutral color allows for open areas like lofts to feel bigger and more connected while giving furniture the opportunity to help create a natural flow of the room. Large sectionals are a staple item in any industrial style room. This is because of their ability to help close off larger spaces and help divide up the living areas. This is important because spaces like lofts tend to be very open. In order to create the illusion of multiple rooms, a sectional can help block the flow and define a separate living area. As far as lighting goes, floor lamps are trending. Any light fixture with metal finishes fits right into this style. Large open windows also help bring natural light into the space which can be very beneficial for smaller spaces. Overhead light fixtures can also give the area an industrial ambiance, especially in the kitchen. To tie into the industrial theme, many homeowners resort to a kitchen island. These islands tend to be made of reclaimed wood or other earthy materials. A kitchen island can also contribute in separating a big room and providing a defined kitchen area. They can be paired with barstools that are made of wood or contain metal finishes. Open faced shelving and storage are big hits when it comes to an industrial kitchen. Free standing metal racks can also provide extra storage and can be beneficial in smaller rooms. If they have wheels, they can multitask. For example, low shelving on wheels can serve as a computer desk one day; the next day it can stand-in as a bar cart. Exposed overhead beams, brick and concrete are notable accents to the kitchen along with darker cabinets and shelving. Lighter colored floors or polished concrete are ways to incorporate this style into any kitchen. To modernize this rather rustic look, decorative tiles look great in the kitchen. Tile as a backsplash can help create a modern twist and help liven up the space. See also High-tech architecture Revivalism (architecture) Rustic modern Shabby chic References Further reading Architectural design + Interior design
Industrial style
Engineering
676
46,838,084
https://en.wikipedia.org/wiki/Prevotella%20bivia
Prevotella bivia is a species of bacteria in the genus Prevotella. It is gram-negative. It is one cause of pelvic inflammatory disease. Other Prevotella spp. are members of the oral and vaginal microbiota, and are recovered from anaerobic infections of the respiratory tract. These infections include aspiration pneumonia, lung abscess, pulmonary empyema, and chronic otitis media and sinusitis. Other species have been isolated from abscesses and burns in the vicinity of the mouth, bites, paronychia, urinary tract infection, brain abscesses, osteomyelitis, and bacteremia associated with upper respiratory tract infections. Prevotella spp. predominate in periodontal disease and periodontal abscesses. The genus also includes gut bacteria. Prevotella species dominate with high fiber, plant-based diets. Human Prevotella spp. have been compared genetically with species derived from different body sites of humans. References External links Type strain of Prevotella bivia at BacDive - the Bacterial Diversity Metadatabase Bacteroidia Gut flora bacteria Infections with a predominantly sexual mode of transmission Abdominal pain Sexually transmitted diseases and infections Bacterial diseases
Prevotella bivia
Biology
257
41,343,013
https://en.wikipedia.org/wiki/UNC-5
UNC-5 is a receptor for netrins including UNC-6. Netrins are a class of proteins involved in axon guidance. UNC-5 uses repulsion to direct axons while the other netrin receptor UNC-40 attracts axons to the source of netrin production. Discovery of netrins The term netrin was first used in a study done in 1990 in Caenorhabditis elegans and was called UNC-6. Studies performed on rodents in 1994 determined that netrins are vital guidance cues. The vertebrate orthologue of UNC-6, netrin-1, was determined to be a key guidance cue for axons moving toward the ventral midline in the rodent embryo spinal cord. Netrin-1 has been identified as a critical component of embryonic development with functions in axon guidance, cell migration, morphogenesis and angiogenesis. The most recent studies have found that there are 5 types of netrins expressed in animals. Ectopic expression of UNC-5 can result in short or long range repulsion. Axon guidance The guidance of axons to their targets in the developing nervous system is believed to involve diffusible chemotropic factors secreted by target cells. Floor plate cells at the ventral midline of the spinal cord secrete a diffusible factor or factors that promotes the outgrowth of spinal commissural axons and attracts these axons in vitro. Recent studies indicate that several axon guidance mechanisms are highly conserved in all animals, whereas others, though still conserved in a general sense, show strong evolutionary divergence at a detailed mechanistic level. Expression of UNC-6 netrin and its receptor UNC-5 is required for guiding pioneering axons and migrating cells in C. elegans. Netrins are axon guidance molecules that transmit their activity through two different receptors. The function of UNC-5 is to repel axons while the other receptor UNC-40 (or Deleted in Colorectal Cancer) attracts axons to the source of UNC-6 production. Methods such as antibody staining, transgene expression and microarray analysis have confirmed that UNC-5 is expressed in DA9 motor neurons. Eight pairs of chemosensory neurons in Caenorhabditis elegans take up fluorescein dyes entering through the chemosensory organs. When filled with dye, the processes and cell bodies of these neurons can be examined in live animals by fluorescence microscopy. Using this technique five genes were identified: unc-33, unc-44, unc-51, unc-76, and unc-106. These genes were found to affect the growth of the amphid and phasmid axons in mutants. Cell migration There are three phases in hermaphrodite distal tip cell migration in Caenorhabditis elegans, which are distinguished by the orientation of their movements, alternating between the anteroposterior and dorsoventral axes. Experimentation has shown that UNC-5 expression is coincident with the second migration phase and that premature expression will result in turning in a UNC-6 dependent manner. This also demonstrates that the mechanism regulating UNC-5 is critical for responsiveness to the UNC-6 netrin guidance cue. Although it normally guides axons along the dorsoventral axis, UNC-40 can be co-opted with SAX-3 to affect cell migrations along the anteroposterior axis. VAB-8 protein is identified as an upstream regulator for UNC-40 and identifies the mechanism for polarity in axon and cell migration. Formation Growth An experiment was performed to determine if UNC-5 is required for localization of presynaptic components in DA9.
When testing the effect of unc-5::intron::unc-5 transgene on a mislocalization defect in UNC-5 mutant animals at 25 °C a significant rescue of the mislocalization defect was observed. In mutant animals, ventral and dorsal migrations are disrupted but longitudinal movements are unaffected. They discovered that this rescue does not occur at 16 °C because the transgene fails to produce UNC-5 at that temperature. This is relevant because is shows that the mislocalization defect is due to a change in temperature at the L4 larval stage which occurs after DA9 is fully developed. This suggests that UNC-5 is only required for the early outgrowth phase to guide axons. UNC-5 presents a novel function in maintaining polarized localization of GFP::RAB-3 independently of early polarization and guidance. When testing directly for whether UNC-6 netrin provides information for localization of presynaptic components an interesting discovery was made. The egl-20::unc-6 transgene creates an enlarged asynaptic zone of the DA9 dorsal axon. They further observed that the enlarged asynaptic domain is restored partly in UNC-5 which demonstrates that UNC-5 acts cell autonomously in DA9 in order to mediate ectopic UNC-6 exclusion of presynaptic components. The UNC-6 gradient is high ventrally and low dorsally and encompasses the dendrite and ventral axon of DA9. UNC-6 was recently found to cause the initial polarization of the C. elegans hermaphrodite specific neuronal cell body. The findings of this experiment suggest that UNC-6 and UNC-5 coordinate two different functions in DA9 and that the netrin is expressed after axon guidance is complete. Extracellular cues such as Wnt fibroblast growth factor can promote synapse formation, contradicting the traditional view of synapse formation from contact between synaptic partners to trigger the assembly of synaptic components. Inhibitory factors such as UNC-5 play essential roles in the formation and maintenance of synaptic components. Adult expression In a study done in rat spinal cords, increased netrin-1, UNC-5 homologue levels were observed compared to lower levels measured in the embryo. From this study multiple mRNA transcripts were detected by northern blot analysis. This finding suggests that netrin receptors could be encoded by alternatively spliced mRNAs. During embryonic development only one splice variant is detected while there are two in the adult model. The results of these findings suggest that UNC-5 homologues make up a primary method of netrin-1 signal transduction in the adult spinal cord. This shows that netrin-1 plays a major role in the adult brain and has the potential for therapeutic applications. Plasticity Similar to growth cone guidance, synapse formation is cued by UNC-5 through a UNC-6 gradient that repels the dorsal axon migration. Dendritic filopodia extend from the dendritic shaft during synaptogenesis and appear as though they are reaching out for a presynaptic axon. Despite the appearance of attaching to an axon, cell signaling is still required for complete synaptic formation. An experiment was performed to determine the role of UNC-5 in axonal growth after spinal cord injury. The netrin is expressed by neurons in the corticospinal and rubrospinal projections, and by intrinsic neurons of the spinal cord both before and after the injury. When testing in vitro UNC-5 receptor bodies are taken from the spinal cord to neutralize netrin-1 in myelin. This increases the neurite outgrowth from UNC-5 expressing spinal motor neurons. 
UNC-129 UNC-129 is a ligand in the transforming growth factor family in C. elegans which encodes transforming growth factor β (TGF-β). Like UNC-6 it guides pioneer axons along the dorsoventral axis of C. elegans. TGF-β is expressed only in dorsal rows of body wall muscles and not ventral. Ectotopic expression of UNC-129 from the muscle results in disrupted growth cone and cell migrations. This shows that UNC-129 is responsible for mediating expression of dorsoventral polarity required for axon guidance. Recent findings have shown that UNC-129 is also responsible for long range repulsive guidance of UNC-6. This mechanism enhances UNC-40 signaling while inhibiting UNC-5 alone. This causes an increase in sensitivity in growth cones to UNC-6 as they travel up the UNC-129 gradient. UNC-129 mediates expression of dorsoventral polarity information required for axon guidance and guided cell migrations in Caenorhabditis elegans. Dendritic self avoidance Recently it was found that dendrites do not overlap and actively avoid each other because cell specific membrane proteins trigger mutual repulsion. In the absence of UNC-6 signaling however, dendrites failed to repel each other. This finding supports the idea that UNC-6 is critical for axon and dendritic guidance in the developmental stage. It is also known that self avoidance requires UNC-6 but not a UNC-6 graded signal. A ventral to dorsal UNC-6 gradient is not required for expression and dendritic self avoidance is independent of such a gradient. UNC-6 that binds to UNC-40 takes on different properties and functions as a short range guidance cue. Vertebrate laminins Netrins share the same terminal structure with vertebrate laminins but appear minimally related. The basement membrane assembly across species, Vertebrate laminin-1 (α1β1γ1) and laminin-10 (α5β1γ1), like the two Caenorhabditis elegans laminins, are embryonically expressed and are essential for basement membrane assembly. During the basement assembly process laminins anchor to the cell surface through their G domains after polymerizing through their LN domains. Netrins are involved in heterotropic LN domain interactions during this process which suggests that although similar in structure, the functions of the two families are different. Applications Tumorigenesis Netrin-1 and its receptors DCC and UNC-5 show a new mechanism for induction or suppression regulation of apoptosis. Evidence shows that this signaling pathway in humans is frequently inactivated. During the last 15 years, controversial data has failed to firmly establish whether DCC is indeed a tumour suppressor gene. However, the recent observations that DCC triggers cell death and is a receptor for netrin-1, a molecule recently implicated in colorectal tumorigenesis. The established role of DCC and netrin-1 during organization of the spinal cord could be viewed as a further challenge to the position that DCC inactivation might play a significant role in tumorigenesis. Recent observations on DCC's functions in intracellular signaling have renewed interest in the potential contribution of DCC inactivation to cancer. Data shows that, when engaged by netrin ligands, DCC may activate downstream signaling pathways and in settings where netrin is absent or at low levels, DCC can promote apoptosis. The binding of netrin-1 to its receptors inhibits the tumor suppressor p53 dependent apoptosis. 
Such receptors share the property of inducing apoptosis in the absence of ligand, hence creating a cellular state of dependence on the ligand. Thus, netrin-1 may not only be a chemotropic factor for neurons but also a survival factor. This discovery shows that netrin-1 receptor pathways play an important role in tumorigenesis. Schwann cells A study was performed to determine the effect of netrin-1 on schwann cell proliferation. Unc5b is the sole receptor expressed in RT4 schwannoma cells and adult primary Schwann cells, and netrin-1 and Unc5b are found to be expressed in the injured sciatic nerve. It was also found that the netrin-1-induced Schwann cell proliferation was blocked by the specific inhibition of Unc5b expression with RNAi. These data suggests that netrin-1 could be an endogenous trophic factor for Schwann cells in the injured peripheral nerves. See also Netrin Neural development Axon guidance Pioneer axon Neural development in humans Human brain development timeline References Further reading Receptors Single-pass transmembrane proteins
UNC-5
Chemistry
2,604
24,436,099
https://en.wikipedia.org/wiki/C23H26N2O2
The molecular formula C23H26N2O2 (molar mass: 362.465 g/mol, exact mass: 362.1994 u) may refer to: Dexetimide Solifenacin Molecular formulas
C23H26N2O2
Physics,Chemistry
66
13,507,959
https://en.wikipedia.org/wiki/Pfitzinger%20reaction
The Pfitzinger reaction (also known as the Pfitzinger-Borsche reaction) is the chemical reaction of isatin with base and a carbonyl compound to yield substituted quinoline-4-carboxylic acids. Several reviews have been published. Reaction mechanism The reaction of isatin with a base such as potassium hydroxide hydrolyses the amide bond to give the keto-acid 2. This intermediate can be isolated, but is typically not. A ketone (or aldehyde) will react with the aniline to give the imine (3) and the enamine (4). The enamine will cyclize and dehydrate to give the desired quinoline (5). Variations Halberkann variant Reaction of N-acyl isatins with base gives 2-hydroxy-quinoline-4-carboxylic acids. See also Camps quinoline synthesis Friedländer synthesis Niementowski quinazoline synthesis Doebner reaction Talnetant, Cinchocaine References Carbon-carbon bond forming reactions Condensation reactions Quinoline forming reactions Ring expansion reactions Name reactions
Pfitzinger reaction
Chemistry
240
22,101,925
https://en.wikipedia.org/wiki/Collaborative%20search%20engine
Collaborative search engines (CSE) are web search engines and enterprise searches within company intranets that let users combine their efforts in information retrieval (IR) activities, share information resources collaboratively using knowledge tags, and allow experts to guide less experienced people through their searches. Collaboration partners do so by providing query terms, collective tagging, adding comments or opinions, rating search results, and links clicked of former (successful) IR activities to users having the same or a related information need. Models of collaboration Collaborative search engines can be classified along several dimensions: intent (explicit and implicit) and synchronization, depth of mediation, task vs. trait, division of labor, and sharing of knowledge. Explicit vs. implicit collaboration Implicit collaboration characterizes Collaborative filtering and recommendation systems in which the system infers similar information needs. I-Spy, Jumper 2.0, Seeks, the Community Search Assistant, the CSE of Burghardt et al., and the works of Longo et al. all represent examples of implicit collaboration. Systems that fall under this category identify similar users, queries and links clicked automatically, and recommend related queries and links to the searchers. Explicit collaboration means that users share an agreed-upon information need and work together toward that goal. For example, in a chat-like application, query terms and links clicked are automatically exchanged. The most prominent example of this class is SearchTogether published in 2007. SearchTogether offers an interface that combines search results from standard search engines and a chat to exchange queries and links. PlayByPlay takes a step further to support general purpose collaborative browsing tasks with an instant messaging functionality. Reddy et al. follow a similar approach and compares two implementations of their CSE called MUSE and MUST. Reddy et al. focus on the role of communication required for efficient CSEs. Cerciamo supports explicit collaboration by allowing one person to concentrate on finding promising groups of documents while having the other person make in-depth judgments of relevance on documents found by the first person. However, in Papagelis et al. terms are used differently: they combine explicitly shared links and implicitly collected browsing histories of users to a hybrid CSE. Community of practice Recent work in collaborative filtering and information retrieval has shown that sharing of search experiences among users having similar interests, typically called a community of practice or community of interest, reduces the effort put in by a given user in retrieving the exact information of interest. Collaborative search deployed within a community of practice deploys novel techniques for exploiting context during search by indexing and ranking search results based on the learned preferences of a community of users. The users benefit by sharing information, experiences and awareness to personalize result-lists to reflect the preferences of the community as a whole. The community representing a group of users who share common interests, similar professions. The best known example is the open-source project ApexKB (previously known as Jumper 2.0). Depth of mediation The depth of mediation refers to the degree that the CSE mediates search. 
SearchTogether is an example of UI-level mediation: users exchange query results and judgments of relevance, but the system does not distinguish among users when they run queries. PlayByPlay is another example of UI-level mediation where all users have full and equal access to the instant messaging functionality without the system's coordination. Cerchiamo and recommendation systems such as I-Spy keep track of each person's search activity independently and use that information to affect their search results. These are examples of deeper algorithmic mediation. Task vs. trait This model classifies people's membership in groups based on the task at hand vs. long-term interests; these may be correlated with explicit and implicit collaboration. Platforms and modalities CSE systems started off on the desktop end, with the earliest ones being extensions or modifications to existing web browsers. GroupWeb is a desktop web browser that offers a shared visual workspace for a group of users. SearchTogether is a desktop application that combines search results from standard search engines and a chat interface for users to exchange queries and links. CoSense supports sensemaking tasks in collaborative Web search by offering rich and interactive presentations of a group's search activities. With the prevalence of mobile phones and tablets, CSEs are also taking advantage of these additional device modalities. CoSearch is a system that supports co-located collaborative web search by leveraging extra mobile phones and mice. PlayByPlay also supports collaborative browsing between mobile and desktop users. Synchronous vs. asynchronous collaboration Synchronous collaboration model enables different users to work toward the same goal together simultaneously, with each individual user having access to one another's progress in real-time. A typical example of the synchronous collaboration model is GroupWeb, where users are made aware of what others are doing through features such as synchronous scrolling with pages, telepointers for enacting gestures, and group annotations that are attached to web pages. Asynchronous collaboration models offer more flexibility toward when different users' different search processes are carried out while reducing the cognitive effort for later users to consume and build upon previous users' search results. SearchTogether, for example, supports asynchronous collaboration functionalities by persisting previous users' chat logs, search queries, and web browsing histories so that the later users could quickly bring themselves up to speed. Applications of collaborative search engines The applications of CSEs are well-explored in both the academic community and industry. For example, GroupWeb was used as a presentation tool for real-time distance education and conferences. ClassSearch is deployed in middle-school classroom sessions to facilitate collaborative search activities in classrooms and study the space of co-located search pedagogies. Privacy-aware collaborative search engines Search terms and links clicked that are shared among users reveal their interests, habits, social relations and intentions. In other words, CSEs put the privacy of the users at risk. Studies have shown that CSEs increase efficiency. Unfortunately, by the lack of privacy enhancing technologies, a privacy aware user who wants to benefit from a CSE has to disclose their entire search log. 
(Note, even when explicitly sharing queries and links clicked, the whole (former) log is disclosed to any user that joins a search session). Thus, sophisticated mechanisms that allow on a more fine grained level which information is disclosed to whom are desirable. As CSEs are a new technology just entering the market, identifying user privacy preferences and integrating Privacy enhancing technologies (PETs) into collaborative search are in conflict. On the one hand, PETs have to meet user preferences, on the other hand, one cannot identify these preferences without using a CSE, i.e., implementing PETs into CSEs. Today, the only work addressing this problem comes from Burghardt et al. They implemented a CSE with experts from the information system domain and derived the scope of possible privacy preferences in a user study with these experts. Results show that users define preferences referring to (i) their current context (e.g., being at work), (ii) the query content (e.g., users exclude topics from sharing), (iii) time constraints (e.g., do not publish the query X hours after the query has been issued, do not store longer than X days, do not share between working time), and that users intensively use the option to (iv) distinguish between different social groups when sharing information. Further, users require (v) anonymization and (vi) define reciprocal constraints, i.e., they refer to the behavior of other users, e.g., if a user would have shared the same query in turn. References Information retrieval systems
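To make the implicit-collaboration model described earlier more concrete, the toy sketch below recommends past queries whose clicked results overlap with the current query's. It is only an illustration: the Jaccard similarity measure, the threshold, and all names and URLs are assumptions of this sketch, not the algorithms used by I-Spy, Jumper 2.0, SearchTogether or any other system cited in the article.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two sets of clicked URLs."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

def related_queries(log: dict[str, set], query: str, threshold: float = 0.3) -> list[str]:
    """Recommend past queries whose clicked results overlap with the given query's.

    `log` maps each past query to the set of result URLs users clicked for it.
    """
    clicks = log.get(query, set())
    scored = [(jaccard(clicks, other_clicks), other)
              for other, other_clicks in log.items() if other != query]
    return [q for score, q in sorted(scored, reverse=True) if score >= threshold]

if __name__ == "__main__":
    # Entirely made-up search log for demonstration purposes.
    search_log = {
        "film edge numbers": {"example.org/keykode", "example.org/edge-codes"},
        "keykode barcode":   {"example.org/keykode", "example.org/uss-128"},
        "wishart matrix":    {"example.org/wishart"},
    }
    print(related_queries(search_log, "keykode barcode"))
    # -> ['film edge numbers']
```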
Collaborative search engine
Technology
1,610
40,532,454
https://en.wikipedia.org/wiki/Cologne%20sewerage%20system
The sewerage system of Cologne is part of the water infrastructure serving Cologne, Germany. Originally built by the Roman Empire in the 1st century, the city's sewer system was modernised in the late 19th century. Parts of the subterranean network are opened for public tours, and the unusual Chandelier Hall () hosts jazz and classical music performances. History The first sewers in Cologne were built by the Romans in the 1st century, and there was little change for 1,800 years. As the population of the city was rapidly increasing throughout the 19th century, it became apparent that the existing sewerage system was unable to cope with the volume of waste that was being produced. Raw sewage was directed to the Rhine river, causing significant problems with disease and odor. English poet Samuel Taylor Coleridge wrote in 1828 that the city had "two and seventy stenches, all well defined, and several stinks!" Paris, London, and other large cities saw an investment in their sewerage system during the 1850s. The people of Cologne had to wait until 1890 for modern sewers to finally open in their city, led by architects Johann Stübben and Carl Steuernagel. By 1900 the boroughs of Deutz, Nippes, and Ehrenfeld were all connected to the system. A mechanised waste water plant opened in 1905 and five purification plants now filter the water before releasing it into the Rhine. By 1933 the length of the system measured , and by 2011 it had expanded to . Notable features and tourism The sewers are opened to the public seven times each year, once a month from March to September, giving the public the opportunity to tour the subterranean network. Tours begin underneath the Neustadt-Nord district in the Regenentlastungbauwerk (storm-water overflow structure), a former harbour created during French occupation of the city. Part of the old Roman sewer system is preserved and features in tours. Sections of these old constructions were used for some time as cellars and, during World War II, as air-raid shelters. An unusual feature of the system is the Kronleuchtersaal (Chandelier Hall). In order to impress German Emperor Wilhelm II chandeliers were installed in the ceiling, though he was unable to attend the opening ceremony. In 1990 a single electric chandelier was installed. The room has hosted jazz and classical music concerts to audiences of up to 50 people. A stone plaque in the room records the names of the architects and Wilhelm von Becker, the then-mayor of Cologne. The area is listed as being protected. References External links Buildings and structures in Cologne Sewerage Tourist attractions in Cologne
Cologne sewerage system
Chemistry,Engineering,Environmental_science
541
11,546,148
https://en.wikipedia.org/wiki/Growth%20hormone%20secretagogue%20receptor
Growth hormone secretagogue receptor (GHS-R), also known as ghrelin receptor, is a G protein-coupled receptor that binds growth hormone secretagogues (GHSs), such as ghrelin, the "hunger hormone". The role of GHS-R is thought to be in regulating energy homeostasis and body weight. In the brain, GHS-Rs are most highly expressed in the hypothalamus, specifically the ventromedial nucleus and arcuate nucleus. GHS-Rs are also expressed in other areas of the brain, including the ventral tegmental area, hippocampus, and substantia nigra. Outside the central nervous system, GHS-Rs are also found in the liver, in skeletal muscle, and even in the heart. Structure Two identified transcript variants are expressed in several tissues and are evolutionarily conserved in fish and swine. One transcript, 1a, excises an intron and encodes the functional protein; this protein is the receptor for the ghrelin ligand and defines a neuroendocrine pathway for growth hormone release. The second transcript (1b) retains the intron and does not function as a receptor for ghrelin; however, it may function to attenuate activity of isoform 1a. GHS-R1a is a member of the G-protein-coupled receptor (GPCR) family. Previous studies have shown that GPCRs can form heterodimers, or functional receptor pairs, with other types of GPCRs. Various studies suggest that GHS-R1a specifically forms dimers with the following hormone and neurotransmitter receptors: somatostatin receptor 5, dopamine receptor type 2 (DRD2), melanocortin-3 receptor (MC3R), and serotonin receptor type 2C (5-HT2c receptor). See "Function" section below for details on the purported functions of these heterodimers. Function Growth hormone release The binding of ghrelin to GHS-R1a in pituitary cells stimulates the secretion, but not the synthesis, of growth hormone (GH) by the pituitary gland. Constitutive activity One important feature of GHS-R1a is that there is still some activity in the receptor even when it is not actively being stimulated. This is called constitutive activity, and it means that the receptor is always "on" unless acted on by an inverse agonist. This constitutive activity seems to provide a tonic signal required for the development of normal height, probably through an effect on the GH axis. In fact, some GHS-R1a genetic variations, caused by single nucleotide polymorphisms (SNPs), have been found to be associated with hereditary obesity and others with hereditary short stature. It was also found that, when GHS-R1a constitutive activity was diminished, there were decreases in the level of the hunger-inducing hormone neuropeptide Y (NPY) as well as in food intake and body weight. Intracellular signaling mechanisms When the growth hormone secretagogue receptor is activated, a variety of different intracellular signaling cascades can result, depending on the cell type in which the receptor is expressed. These intracellular signaling cascades include the mitogen-activated protein kinase (MAPK), protein kinase A (PKA), protein kinase B (PKB, also known as AKT), and AMP-activated protein kinase (AMPK) cascades. Behavioral reinforcement of food intake It is well characterized that activating the growth hormone secretagogue receptor with ghrelin induces an orexigenic state, or general feeling of hunger. However, ghrelin may also play a role in behavioral reinforcement.
Studies in animal models, found that food intake increased when ghrelin was specifically administered to just the ventral tegmental area (VTA), a brain area that uses dopamine signaling to reinforce behavior. In fact, the more ghrelin administered, the more food the rodent consumed. This is called a dose-dependent effect. Building on this, it was found that there are growth hormone secretagogue receptors in the VTA and that ghrelin acts on the VTA through these receptors. Current studies, furthermore, suggest that the VTA may contain dimers of GHS-R1a and dopamine receptor type 2 (DRD2). If these two receptors do indeed form dimers, this would somehow link ghrelin signaling to dopaminergic signaling. Enhancement of learning and memory The growth hormone secretagogue receptor may also be linked to learning and memory. First of all, the receptor is found in the hippocampus, the brain region responsible for long-term memory. Second, it was found that specifically activating the receptor in just the hippocampus increased both long-term potentiation (LTP) and dendritic spine density, two cellular phenomena thought to be involved in learning. Third, short-term calorie restriction, defined as a 30% reduction in caloric intake for two weeks, which naturally increases ghrelin levels and thus activates the receptor, was found to increase both performance on spatial learning tasks as well as neurogenesis in the adult hippocampus. Selective ligands A range of selective ligands for the GHS-R receptor are now available and are being developed for several clinical applications. GHS-R agonists have appetite-stimulating and growth hormone-releasing effects, and are likely to be useful for the treatment of muscle wasting and frailty associated with old-age and degenerative diseases. On the other hand, GHS-R antagonists have anorectic effects and are likely to be useful for the treatment of obesity. Agonists Adenosine (increases hunger-related signaling, but does not promote GH secretion) Alexamorelin Anamorelin Capromorelin CP-464709 Cortistatin-14 Examorelin (hexarelin) Ghrelin (lenomorelin) GHRP-1 GHRP-3 GHRP-4 GHRP-5 GHRP-6 Ibutamoren (MK-677) Ipamorelin L-692,585 LY-426410 LY-444711 Macimorelin Pralmorelin (GHRP-2) Relamorelin SM-130,686 Tabimorelin Ulimorelin Antagonists A-778,193 PF-5190457 References Further reading External links Ghrelin at Colorado State University G protein-coupled receptors
Growth hormone secretagogue receptor
Chemistry
1,406
14,579,013
https://en.wikipedia.org/wiki/Fitted%20carpet
Fitted carpet, also wall-to-wall carpet, is a carpet intended to cover a floor entirely. Carpet over 4 meters in length is usually installed with the use of a power-stretcher (tubed or tubeless). Fitted carpets were originally woven to the dimensions of the specific area they were covering. They were later made in smaller strips, around the time stair carpet became popular, and woven at the site of the job by the carpet fitter. These carpets were then held in place with individually nailed tacks driven through the carpet around the perimeter and occasionally small rings in the carpet which were folded over. The introduction of tack strip, "tackless strip", "gripper strip", or "Smoothedge" simplified the installation of wall-to-wall carpeting, increasing the neatness of the finish at the wall. Because gripper strips are essentially the same thickness as underlay, using gripper strips yields a level edge, whereas tacking gives an uneven edge. There are three types of carpet: loop pile, cut pile, and structured carpet, which combines the first two. Very popular in the sixties thanks to their colorful prints, fitted carpets mostly took on a decorative appearance inside houses. History Thomas Sheraton wrote in 1806 that "since the introduction of carpets, fitted all over the floor of a room, the nicety of flooring anciently practiced in the best houses, is now laid aside". Fitted carpets, assembled from strips, had become popular by the second half of the 18th century, remaining so until the 1870s when loose carpets and varnished hardwood became the fashion. One of the most famous carpets in history was given by Louis XVI to George Washington. It was woven for the banquet room of Mount Vernon, where it can still be admired today. In the early twentieth century, a new manufacturing method called "tufting" revolutionized the carpeting industry. Invented in Dalton, Georgia, it quickly replaced the traditional method of weaving. The pile yarns are stitched through a textile backing, which is then coated on the underside. From 1930, the mechanization of tufting favored its development. It now represents 51% of total production, while it amounted to only 10% in the 1950s. Fabrication Tufted carpet Tufting is the most common manufacturing technique. It involves punching tufts of yarn into a textile backing, in a manner similar to a sewing machine. The carpet is then fitted with a backing (woven, jute, plastic or cotton) glued to the back of the tufts. This technique makes possible the production of cut pile, loop pile, or structured carpets. Woven carpet Weaving is one of the oldest manufacturing processes. The carpet is woven, like a rug, on a traditional weaving loom; the top and the back of the carpet are made simultaneously. Needled carpet Starting from several superposed layers of fibers, the needling technique consists of entangling the fibers together through the use of special needles. The carpets obtained are very hard-wearing but intended for temporary use, since they do not have the comfort of woven and tufted carpets. Fibers The different fibers constitute the carpet's pile. They have a direct impact on the physical properties of the floor covering, such as resistance and longevity. There are three types of fiber: animal (wool), vegetable (seagrass, coir, sisal), and synthetic (polyamide or polypropylene). Wool was used for weaving carpets more than five centuries B.C., before being predominantly used in the manufacture of raw carpets.
However, synthetic fibers are predominantly used nowadays. References External links Floors Rugs and carpets
Fitted carpet
Engineering
733
549,480
https://en.wikipedia.org/wiki/Dragon%27s%20blood
Dragon's blood is a bright red resin which is obtained from different species of a number of distinct plant genera: Calamus spp. (previously Daemonorops) also including Calamus rotang, Croton, Dracaena and Pterocarpus. The red resin has been in continuous use since ancient times as varnish, medicine, incense, pigment, and dye. Name and source A great degree of confusion existed for the ancients in regard to the source and identity of dragon's blood. Some medieval encyclopedias claimed its source as the literal blood of elephants and dragons who had perished in mortal combat. The resin of Dracaena species, "true" dragon's blood, and the very poisonous mineral cinnabar (mercury sulfide) were often confused by the ancient Romans. In ancient China, little or no distinction was made among the types of dragon's blood from the different species. Both Dracaena and Calamus resins are still often marketed today as dragon's blood, with little or no distinction being made between the plant sources; however, the resin obtained from Calamus has become the most commonly sold type in modern times, often in the form of large balls of resin. Resins that come from different species and different continents have been given the name “dragon's blood,” but their purity, appearance, and chemical properties are highly varied. Voyagers to the Canary Islands in the 15th century obtained dragon's blood as dried garnet-red drops from Dracaena draco, a tree native to the Canary Islands and Morocco. The resin is exuded from its wounded trunk or branches. Dragon's blood is also obtained by the same method from the closely related Dracaena cinnabari, which is endemic to the island of Socotra. This resin was traded to ancient Europe via the Incense Road. Dragon's blood resin is also produced from the rattan palms of the genus Calamus of the Indonesian islands and known there as jernang or djernang. It is gathered by breaking off the layer of red resin encasing the unripe fruit of the rattan. The collected resin is then rolled into solid balls before being sold. The red latex of the Sangre de Drago (called Sangre de Grado in Peru), from any of seven species of Croton native to Peru, Bolivia, Ecuador and Brazil, has purported wound-healing and antioxidant properties, and has been used for centuries by native people. The species are: Croton draconoides, Croton palanostigma, Croton perpecosus, Croton rimbachii, Croton sampatik, Croton erythrochilus, and Croton lechleri. Visual characteristics In his study of artists' pigments, the chemist George Field described dragon's blood as “a warm semi-transparent, rather dull, red colour, which is deepened by impure air, and darkened by light.” History and uses The dragon's blood known to the ancient Romans was mostly collected from D. cinnabari, and is mentioned in the 1st century Periplus Maris Erythraei (xxx.10.17) as one of the products of Socotra. Socotra had been an important trading centre since at least the time of the Ptolemies. Dragon's blood was used as a dye, painting pigment, and medicine (respiratory and gastrointestinal problems) in the Mediterranean basin, and was held by early Greeks, Romans, and Arabs to have medicinal properties. Dioscorides and other early Greek writers described its medicinal uses. A notable occurrence of dragon's blood red in art is in Giotto's Pentecost.
In this painting, it is believed that the pigment used in the orange-red flames over the Apostles' heads is dragon's blood. Locals on Socotra island use the Dracaena resin as a sort of cure-all, using it for such things as general wound healing, a coagulant (though this is ill-advised with commercial products, as the Calamus species acts as an anti-coagulant and it is usually unknown what species the dragon's blood came from), curing diarrhea, lowering fevers, dysentery diseases, taken internally for ulcers in the mouth, throat, intestines and stomach, as well as an antiviral for respiratory viruses, stomach viruses and for skin disorders such as eczema. It was also used in medieval ritual magic and alchemy. Dragon's blood of both Dracaena draco (commonly referred to as the Draconis Palm) and Dracaena cinnabari were used as a source of varnish for 18th century Italian violinmakers. There was also an 18th-century recipe for toothpaste that contained dragon's blood. Dragon's blood from both Calamus were used for ceremonies in India. Sometimes Dracaena resin, but more often Calamus resin, was used in China as red varnish for wooden furniture. It was also used to colour the surface of writing paper for banners and posters, used especially for weddings and for Chinese New Year. Dragon's blood incense is also occasionally sold as "red rock opium" to unsuspecting would-be drug buyers. It actually contains no opiates, and has only slight psychoactive effects, if any at all. Thaspine from the Dragon's Blood of the species Croton lechleri has possible use as a cancer drug. Today, dragon's blood from a South American plant can be bought in health food stores. According to Pliny the Elder, dragon's blood was used by artists in antiquity. Painters continued to use it in the creation of flesh tones during the 17th century. By the 19th century, publications on artists' materials indicate that it was most useful as a varnish, not as pigment for painting. In 1835, George Field stated that dragon's blood is “unsatisfactory for painting.” However, the pigment was used to prepare the color known as "Chinese orange." Today, dragon's blood has a variety of uses. Outside of it being a pigment in paintings and colors, it is still used as a varnish for violins, in photoengraving, as a medicine, as an incense resin, and as a body oil. The occurrence of bitter taste masking compounds in dragon's blood from Daemonorops draco indicates the relevance of the species for use in food, beverage, and pharmaceutical industries. Safety A study on oral toxicity of the DC resin methanol extract taken from the perennial tree Dracaena cinnabari was performed on female Sprague Dawley rats in February 2018. Acute and sub-acute oral toxicity tests found that the extract could be tolerated up to 2,000 mg/kg body weight. List of botanical sources Calamus rotang L. Calamus draco Willd. (synonyms include Daemonorops draco, D. rubra) Calamus didymophyllus Becc. Ridl. (synonyms include Daemonorops motleyi, D. didymophylla) Croton draconoides Müll. Arg. Croton draco Schltdl. & Cham. Croton lechleri Müll. Arg. Croton erythrochilus Müll. Arg. Croton palanostigma Klotzsch Croton perspeciosus Croizat Croton rimbachii Croizat Croton sampatik Müll. Arg. Croton urucurana Baill. Croton xalapensis Kunth Dracaena cinnabari Balf.f. Dracaena cochinchinensis hort. ex Baker Dracaena draco (L.) L. Pterocarpus officinalis Jacq. 
See also Crofelemer, derived from the South American tree Croton lechleri, unrelated to Dracaena and the rattan palms (the genus Calamus) Footnotes Further reading Incense material Biological pigments Organic pigments Resins Magic substances Traditional medicine Blood in culture
Dragon's blood
Physics,Biology
1,746
40,920,328
https://en.wikipedia.org/wiki/17%20Cygni
17 Cygni is the Flamsteed designation for a multiple star system in the northern constellation of Cygnus. It has an apparent visual magnitude of 5.00, so, according to the Bortle scale, it is visible from suburban skies at night. Measurements of the annual parallax find a shift of 0.0477″, which is equivalent to a distance of around 68 light-years from the Sun. It has a relatively high proper motion, traversing the celestial sphere at the rate of /year. This system consists of two visual binary systems that were discovered by John Herschel in the 1820s. Components A and B form a bright, wide pair with an angular separation of 26.0 arcseconds and an estimated orbital period of ~6,200 years. The faint, close system consists of components F and G with a separation of 2.6 arcseconds and a period of 238 years. The two binaries form a hierarchical system with a separation of about 800 arcseconds and an orbital period of 3.7 million years or more. At an angular separation of 791.40 arcseconds is a proper motion companion with a classification of M0.4, indicating this is a red dwarf star. At the estimated distance of the pair, this is equal to a projected separation of 16,320 AU. Although the CCDM lists four other companions, these are not associated with the system. The stellar classification of the primary star, component A, is F7 V, which means it is a main sequence star like the Sun. The star has 1.24 times the mass of the Sun and 1.54 times the Sun's radius. It is some 2.8 billion years old and shines with 3.66 times the Sun's luminosity. The effective temperature of the stellar atmosphere is 6,455 K, giving it the yellow-white hued glow of an F-type star. References F-type main-sequence stars M-type main-sequence stars Binary stars Cygnus (constellation) 7534 Durchmusterung objects Cygni, 17 9670 097295 187013
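A minimal worked conversion of the quoted parallax into a distance, assuming the standard parallax relation (distance in parsecs equals the reciprocal of the parallax in arcseconds) and 1 pc ≈ 3.26 light-years:

d = \frac{1}{p} = \frac{1}{0.0477''} \approx 21.0\ \text{pc} \approx 21.0 \times 3.26\ \text{ly} \approx 68\ \text{ly}

which is consistent with the distance quoted above for the measured parallax of 0.0477″.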
17 Cygni
Astronomy
438
4,772,849
https://en.wikipedia.org/wiki/Thrustmaster
Thrustmaster is an American designer, developer and manufacturer of joysticks, game controllers, and steering wheels for PCs and video gaming consoles. It has licensing agreements with third-party brands such as Airbus, Boeing, Ferrari, Gran Turismo and the U.S. Air Force, as well as licensing some products under Sony's PlayStation and Microsoft's Xbox licenses. History Norm Winningstad helped found Thrustmaster in 1990 in Hillsboro, Oregon. By early 1991 the company began advertising the Thrustmaster Weapons Control System in computer magazines. It worked mainly on developing flight controls for simulation on IBM-compatible computers. The company has utilized the HOTAS system for use in computer flight simulation and has modeled some controllers after the flight controls of real aircraft. The company made its name making expensive but high-quality HOTAS controllers in the mid-1990s. By 1995, its sales had grown to $15 million, and then to $25 million by 1998. In July 1999, the gaming peripherals operations and brand name were acquired from Thrustmaster for $15 million by the Guillemot Corporation Group of France (which also bought Hercules Computer Technology that same year and merged the two companies into a company called Hercules Thrustmaster, with headquarters in Carentoir, France, while keeping the two brands separate). The new Thrustmaster company gradually extended the product portfolio beyond flight simulation to other simulation peripherals for PC, PlayStation and Xbox consoles: racing simulation products such as the T-GT, TS-XW, TX-RW, T300, T150, T80, TH8A gearbox, TSSH Handbrake, BT LED display, and wheel add-ons; gamepads such as the GPX, Score-A, T-Wireless, Dual Analog, eSwap, and DualShock controller; and gaming headphones such as the Y-C300 CPX, T.Assault Six, T.Racing Scuderia Ferrari Edition, and T.Flight U.S. Air Force Edition headsets. In 2019, Thrustmaster's turnover was €59 million (US$66 million). HOTAS Cougar Formerly one of their most expensive joysticks was the HOTAS Cougar, a close but not exact reproduction of both the throttle and stick used in the real F-16 block 52 fighter aircraft. The product features all-metal construction and numerous programming possibilities but is hampered by low-quality potentiometers, leading to a thriving replacement industry. Some of the devices have had reported quality problems, including play in the centering springs and the tendency of the speedbrake switch to break due to a manufacturing defect (this has been fixed on later serial numbers). Many independent companies have produced replacement components for the Cougar to address these issues. These include redesigned gimbals that center more firmly, contactless potentiometers to replace worn originals, and even several force-controlled mods that make the stick sense pressure without moving (similar to an F-16 stick). Besides fixing complaints with the original product, these aftermarket parts have the potential to extend the life of the Cougar well past the time when Thrustmaster stops supporting it, but usually at double, even triple, the price of the original purchase. However, the market for such mods tends to be limited, and many customers keep their Cougars as they came from the factory. The HOTAS Cougar was replaced by the HOTAS Warthog in 2010, which replicates the flight controls used in the A-10 Thunderbolt II, using Hall effect sensors for the joystick and throttle axes instead of potentiometers. Ferrari partnership In 1999, Thrustmaster made their first-ever replica wheel of the Ferrari 360 Modena.
This was followed in 2002 by a wheel inspired by seven-time World Champion Michael Schumacher. Three years later, Thrustmaster came out with the Enzo racing wheel. The next wheel, the Ferrari Wireless GT Cockpit 430 Scuderia Edition, came out in May 2010. In July of that same year the Cockpit was named "Product of the Month" and crowned "#1 Racing Wheel" for July/August by Spanish magazine Playmania. Then in August 2011, the 458 Italia wheel was released, marking the first time a wheel was licensed by Microsoft. After that, at the 2011 Italian Grand Prix in Monza, Italy, they unveiled new products under the Ferrari license. The two products were the Ferrari F1 Wheel Integral T500 and the Ferrari F1 Wheel Add-on. Later in 2011, Thrustmaster and Ferrari made two gamepads in the colors of the Ferrari 150th Italia. They were the F1 Wireless Gamepad Ferrari 150th Italia Alonso Edition and the F1 Dual Analog Gamepad Ferrari 150th Italia Exclusive Edition. 2013 saw the release of the TX Racing Wheel 458 Italia Edition with brushless motors and magnetic sensors. 2014 saw the release of the most affordable wheel to ever have an official Microsoft license, at just under US$100: the Ferrari 458 Spider Racing Wheel. 2015 saw the release of the T150 Ferrari Wheel Force Feedback. 2018 saw the last Ferrari product with Thrustmaster for now, a bundle of the T.Racing Scuderia Ferrari Edition headset and the 599xx Evo wheel. In 2021, Thrustmaster unveiled a sim racing replica of the Ferrari SF1000 wheel. Thrustmaster Civil Aviation In 2020, Thrustmaster launched the Thrustmaster Civil Aviation (TCA) line with the TCA Sidestick Airbus Edition, a 1:1 replica of the sidestick on an Airbus A320, followed by the miniaturized TCA Quadrant Airbus Edition, which replicates the throttle, and finally the TCA Throttle Quadrant Add-On Airbus Edition, which replicates the flaps, speed brake, landing gear, and parking brake controls, among others. The sidestick and throttle quadrant were sold together as the TCA Officer Pack Airbus Edition, and with the add-on as the TCA Captains Pack Airbus Edition in 2021. In 2021, the Airbus products were followed by the TCA Yoke Pack Boeing Edition, which includes a replica of the yoke on a Boeing 787 and a three-axis quadrant that can be configured with flaps, throttle, and spoilers; both were eventually made available separately. Hybrid racing wheels The company is well known for its racing steering wheel controllers using hybrid gear- and belt-driven mechanics. This type of controller is a middle ground between the two competing technologies. 2015 saw the release of the T150 RS racing wheel with a 1080-degree angle of rotation and potentiometer sensors. 2021 saw the release of the T248 racing wheel with a 1080-degree angle of rotation and magnetic sensors; a 3-pedal T3PM pedal unit was included with the T248 wheel kit. The T3PM unit features an adjustable brake pedal. 2022 saw the release of the T128 racing wheel with a 900-degree angle of rotation and magnetic sensors; an entry-level 2-pedal T2PM pedal unit was included. This was created as the budget version of the T248 racing wheel. The T2PM unit is not adjustable and is small, but it has two holes for hex bolts, making mounting possible. 2023 saw the release of the T818 direct-drive wheel base. This is just the base, with no pedals or wheel. Products Flying Driving Software support Linux kernel support for various steering wheel models was added in March 2021.
References External links Computing input devices Game controllers Companies based in Hillsboro, Oregon Electronics companies established in 1990 1990 establishments in Oregon Computer companies of the United States Computer peripheral companies Computer hardware companies
Thrustmaster
Technology
1,559
6,240,358
https://en.wikipedia.org/wiki/Dual%20process%20theory
In psychology, a dual process theory provides an account of how thought can arise in two different ways, or as a result of two different processes. Often, the two processes consist of an implicit (automatic), unconscious process and an explicit (controlled), conscious process. Verbalized explicit processes or attitudes and actions may change with persuasion or education; though implicit process or attitudes usually take a long amount of time to change with the forming of new habits. Dual process theories can be found in social, personality, cognitive, and clinical psychology. It has also been linked with economics via prospect theory and behavioral economics, and increasingly in sociology through cultural analysis. History The foundations of dual process theory are probably ancient. Spinoza (1632-1677) distinguished between the passions and reason. William James (1842-1910) believed that there were two different kinds of thinking: associative and true reasoning. James theorized that empirical thought was used for things like art and design work. For James, images and thoughts would come to mind of past experiences, providing ideas of comparison or abstractions. He claimed that associative knowledge was only from past experiences describing it as "only reproductive". James believed that true reasoning could enable overcoming “unprecedented situations” just as a map could enable navigating past obstacles. There are various dual process theories that were produced after William James's work. Dual process models are very common in the study of social psychological variables, such as attitude change. Examples include Petty and Cacioppo's elaboration likelihood model (explained below) and Chaiken's heuristic systematic model. According to these models, persuasion may occur after either intense scrutiny or extremely superficial thinking. In cognitive psychology, attention and working memory have also been conceptualized as relying on two distinct processes. Whether the focus be on social psychology or cognitive psychology, there are many examples of dual process theories produced throughout the past. The following just show a glimpse into the variety that can be found. Peter Wason and Jonathan St B. T. Evans suggested dual process theory in 1974. In Evans' later theory, there are two distinct types of processes: heuristic processes and analytic processes. He suggested that during heuristic processes, an individual chooses which information is relevant to the current situation. Relevant information is then processed further whereas irrelevant information is not. Following the heuristic processes come analytic processes. During analytic processes, the relevant information that is chosen during the heuristic processes is then used to make judgments about the situation. Richard E. Petty and John Cacioppo proposed a dual process theory focused in the field of social psychology in 1986. Their theory is called the elaboration likelihood model of persuasion. In their theory, there are two different routes to persuasion in making decisions. The first route is known as the central route and this takes place when a person is thinking carefully about a situation, elaborating on the information they are given, and creating an argument. This route occurs when an individual's motivation and ability are high. The second route is known as the peripheral route and this takes place when a person is not thinking carefully about a situation and uses shortcuts to make judgments. 
This route occurs when an individual's motivation or ability are low. Steven Sloman produced another interpretation on dual processing in 1996. He believed that associative reasoning takes stimuli and divides it into logical clusters of information based on statistical regularity. He proposed that how you associate is directly proportional to the similarity of past experiences, relying on temporal and similarity relations to determine reasoning rather than an underlying mechanical structure. The other reasoning process in Sloman's opinion was of the Rule-based system. The system functioned on logical structure and variables based upon rule systems to come to conclusions different from that of the associative system. He also believed that the Rule-based system had control over the associative system, though it could only suppress it. This interpretation corresponds well to earlier work on computational models of dual processes of reasoning. Daniel Kahneman provided further interpretation by differentiating the two styles of processing more, calling them intuition and reasoning in 2003. Intuition (or system 1), similar to associative reasoning, was determined to be fast and automatic, usually with strong emotional bonds included in the reasoning process. Kahneman said that this kind of reasoning was based on formed habits and very difficult to change or manipulate. Reasoning (or system 2) was slower and much more volatile, being subject to conscious judgments and attitudes. Fritz Strack and Roland Deutsch proposed another dual process theory focused in the field of social psychology in 2004. According to their model, there are two separate systems: the reflective system and the impulsive system. In the reflective system, decisions are made using knowledge and the information that is coming in from the situation is processed. On the other hand, in the impulsive system, decisions are made using schemes and there is little or no thought required. Theories Dual process learning model Ron Sun proposed a dual-process model of learning (both implicit learning and explicit learning). The model (named CLARION) re-interpreted voluminous behavioral data in psychological studies of implicit learning and skill acquisition in general. The resulting theory is two-level and interactive, based on the idea of the interaction of one-shot explicit rule learning (i.e., explicit learning) and gradual implicit tuning through reinforcement (i.e. implicit learning), and it accounts for many previously unexplained cognitive data and phenomena based on the interaction of implicit and explicit learning. The Dual Process Learning model can be applied to a group-learning environment. This is called The Dual Objective Model of Cooperative Learning and it requires a group practice that consists of both cognitive and affective skills among the team. It involves active participation by the teacher to monitor the group throughout its entirety until the product has been successfully completed. The teacher focuses on the effectiveness of cognitive and affective practices within the group's cooperative learning environment. The instructor acts as an aide to the group by encouraging their positive affective behavior and ideas. In addition, the teacher remains, continually watching for improvement in the group's development of the product and interactions amongst the students. The teacher will interject to give feedback on ways the students can better contribute affectively or cognitively to the group as a whole. 
The goal is to foster a sense of community amongst the group while creating a proficient product that is a culmination of each student's unique ideas. Dual coding Using a somewhat different approach, Allan Paivio has developed a dual-coding theory of information processing. According to this model, cognition involves the coordinated activity of two independent, but connected systems, a nonverbal system and a verbal system that is specialized to deal with language. The nonverbal system is hypothesized to have developed earlier in evolution. Both systems rely on different areas of the brain. Paivio has reported evidence that nonverbal, visual images are processed more efficiently and are approximately twice as memorable. Additionally, the verbal and nonverbal systems are additive, so one can improve memory by using both types of information during learning. This additive dual coding claim is compatible with evidence that verbalized thinking does not necessarily overcome common faulty intuitions or heuristics, such as studies showing that thinking aloud during heuristics and biases tests did not necessarily improve performance on the test. Dual-process accounts of reasoning Background Dual-process accounts of reasoning postulate that there are two systems or minds in one brain. A current theory is that there are two cognitive systems underlying thinking and reasoning and that these different systems were developed through evolution. These systems are often referred to as "implicit" and "explicit" or by the more neutral "System 1" and "System 2", as coined by Keith Stanovich and Richard West. The systems have multiple names by which they can be called, as well as many different properties. System 1 John Bargh reconceptualized the notion of an automatic process by breaking down the term "automatic" into four components: awareness, intentionality, efficiency, and controllability. One way for a process to be labeled as automatic is for the person to be unaware of it. There are three ways in which a person may be unaware of a mental process: they can be unaware of the presence of the stimulus (subliminal), how the stimulus is categorized or interpreted (unaware of the activation of stereotype or trait constructs), or the effect the stimulus has on the person's judgments or actions (misattribution). Another way for a mental process to be labeled as automatic is for it to be unintentional. Intentionality refers to the conscious "start up" of a process. An automatic process may begin without the person consciously willing it to start. The third component of automaticity is efficiency. Efficiency refers to the amount of cognitive resources required for a process. An automatic process is efficient because it requires few resources. The fourth component is controllability, referring to the person's conscious ability to stop a process. An automatic process is uncontrollable, meaning that the process will run until completion and the person will not be able to stop it. Bargh conceptualized automaticity as a component view (any combination awareness, intention, efficiency, and control) as opposed to the historical concept of automaticity as an all-or-none dichotomy. One takeaway from the psychological research on dual process theory is that our System 1 (intuition) is more accurate in areas where we’ve gathered a lot of data with reliable and fast feedback, like social dynamics, or even cognitive domains in which we've become expert or even merely familiar. 
System 2 in humans System 2 is evolutionarily recent and speculated to be specific to humans. It is also known as the explicit system, the rule-based system, the rational system, or the analytic system. It performs the slower and more sequential kind of thinking. It is domain-general and performed in the central working memory system. Because of this, it has a limited capacity and is slower than System 1, and it correlates with general intelligence. It is known as the rational system because it reasons according to logical standards. Some overall properties associated with System 2 are that it is rule-based, analytic, controlled, demanding of cognitive capacity, and slow. Social psychology Dual process theory has an impact on social psychology in such domains as stereotyping, categorization, and judgment. In particular, the study of automaticity and of implicit processes in dual process theories has the most influence on a person's perception. People usually perceive other people's information and categorize them by age, gender, race, or role. According to Neuberg and Fiske (1987), a perceiver who receives a good amount of information about the target person will then use their formal mental category (unconscious) as a basis for judging the person. When the perceiver is distracted, the perceiver has to pay more attention to target information (conscious). Categorization is the basic process of stereotyping in which people are categorized into social groups that have specific stereotypes associated with them. It can retrieve judgments of people automatically, without subjective intention or effort. Attitudes can also be activated spontaneously by the object. John Bargh's study offered an alternative view, holding that essentially all attitudes, even weak ones, are capable of automatic activation. Whether the attitude is formed automatically or operates with effort and control, it can still bias further processing of information about the object and direct the perceivers' actions with regard to the target. According to Shelly Chaiken, heuristic processing is the activation and application of judgmental rules, and heuristics are presumed to be learned and stored in memory. It is used when people make decisions based on accessible rules such as "experts are always right" (System 1), whereas systematic processing involves effortful scrutiny of all the relevant information and requires cognitive thinking (System 2). Heuristic and systematic processing in turn influence the domains of attitude change and social influence. Unconscious thought theory is the counterintuitive and contested view that the unconscious mind is adapted to highly complex decision making. Where most dual system models define complex reasoning as the domain of effortful conscious thought, UTT argues complex issues are best dealt with unconsciously. Stereotyping Dual process models of stereotyping propose that when we perceive an individual, salient stereotypes pertaining to them are activated automatically. These activated representations will then guide behavior if no other motivation or cognition takes place. However, controlled cognitive processes can inhibit the use of stereotypes when the motivation and cognitive resources to do so are present. Devine (1989) provided evidence for the dual process theory of stereotyping in a series of three studies. Study 1 found that prejudice (according to the Modern Racism Scale) was unrelated to knowledge of cultural stereotypes of African Americans.
Study 2 showed that subjects used automatically activated stereotypes in judgments regardless of prejudice level (personal belief). Participants were primed with stereotype relevant or non-relevant words and then asked to give hostility ratings of a target with an unspecified race who was performing ambiguously hostile behaviors. Regardless of prejudice level, participants who were primed with more stereotype-relevant words gave higher hostility ratings to the ambiguous target. Study 3 investigated whether people can control stereotype use by activating personal beliefs. Low-prejudice participants asked to list African Americans listed more positive examples than did those high in prejudice. Terror management theory and the dual process model According to psychologists Pyszczynski, Greenberg, & Solomon, the dual process model, in relation to terror management theory, identifies two systems by which the brain manages fear of death: distal and proximal. Distal defenses fall under the system 1 category because it is unconscious whereas proximal defenses fall under the system 2 category because it operates with conscious thought. However, recent work by the ManyLabs project has shown that the mortality salience effect (e.g., reflecting on one's own death encouraging a greater defense of one's own worldview) has failed to replicate (ManyLabs attempt to replicate a seminal theoretical finding across multiple laboratories—in this case some of these labs included input from the original terror management theorists.) Dual process and habituation Habituation can be described as decreased response to a repeated stimulus. According to Groves and Thompson, the process of habituation also mimics a dual process. The dual process theory of behavioral habituation relies on two underlying (non-behavioral) processes; depression and facilitation with the relative strength of one over the other determining whether or not habituation or sensitization is seen in the behavior. Habituation weakens the intensity of a repeated stimulus over time subconsciously. As a result, a person will give the stimulus less conscious attention over time. Conversely, sensitization subconsciously strengthens a stimulus over time, giving the stimulus more conscious attention. Though these two systems are not both conscious, they interact to help people understand their surroundings by strengthening some stimuli and diminishing others. Dual process and steering cognition According to Walker, system 1 functions as a serial cognitive steering processor for system 2, rather than a parallel system. In large-scale repeated studies with school students, Walker tested how students adjusted their imagined self-operation in different curriculum subjects of maths, science and English. He showed that students consistently adjust the biases of their heuristic self-representation to specific states for the different curriculum subjects. The model of cognitive steering proposes that, in order to process epistemically varied environmental data, a heuristic orientation system is required to align varied, incoming environmental data with existing neural algorithmic processes. The brain's associative simulation capacity, centered around the imagination, plays an integrator role to perform this function. Evidence for early-stage concept formation and future self-operation within the hippocampus supports the model. 
In the cognitive steering model, a conscious state emerges from effortful associative simulation, required to align novel data accurately with remote memory, via later algorithmic processes. By contrast, fast unconscious automaticity is constituted by unregulated simulatory biases, which induce errors in subsequent algorithmic processes. The phrase ‘rubbish in, rubbish out' is used to explain errorful heuristic processing: errors will always occur if the accuracy of initial retrieval and location of data is poorly self-regulated. Application in economic behavior According to Alos-Ferrer and Strack the dual-process theory has relevance in economic decision-making through the multiple-selves model, in which one person's self-concept is composed of multiple selves depending on the context. An example of this is someone who as a student is hard working and intelligent, but as a sibling is caring and supportive. Decision-making involves the use of both automatic and controlled processes, but also depends on the person and situation, and given a person's experiences and current situation the decision process may differ. Given that there are two decision processes with differing goals one is more likely to be more useful in particular situations. For example, a person is presented with a decision involving a selfish but rational motive and a social motive. Depending on the individual one of the motives will be more appealing than the other, but depending on the situation the preference for one motive or the other may change. Using the dual-process theory it is important to consider whether one motive is more automatic than the other, and in this particular case the automaticity would depend on the individual and their experiences. A selfish person may choose the selfish motive with more automaticity than a non-selfish person, and yet a controlled process may still outweigh this based on external factors such as the situation, monetary gains, or societal pressure. Although there is likely to be a stable preference for which motive one will select based on the individual it is important to remember that external factors will influence the decision. Dual process theory also provides a different source of behavioral heterogeneity in economics. It is mostly assumed within economics that this heterogeneity comes from differences in taste and rationality, while dual process theory indicates necessary considerations of which processes are automated and how these different processes may interact within decision making. Moral psychology Moral judgments are said to be explained in part by dual process theory. In moral dilemmas we are presented us with two morally unpalatable options. For example, should we sacrifice one life in order to save many lives or just let many lives be lost? Consider a historical example: should we authorize the use of force against other nations in order to prevent "any future acts of international terrorism" or should we take a more pacifist approach to foreign lives and risk the possibility of terrorist attack? Dual process theorists have argued that sacrificing something of moral value in order to prevent a worse outcome (often called the "utilitarian" option) involves more reflective reasoning than the more pacifist (also known as the "deontological" option). 
However, some evidence suggests that this is not always the case, that reflection can sometimes increase harm-rejection responses, and that reflection correlates with both the sacrificial and pacifist (but not more anti-social) responses. So some have proposed that tendencies toward sacrificing for the greater good or toward pacifism are better explained by factors besides the two processes proposed by dual process theorists. Religiosity Various studies have found that performance on tests designed to require System 2 thinking (a.k.a., reflection tests) can predict differences in philosophical tendencies, including religiosity (i.e., the degree to which one reports being involved in organized religion). This "analytic atheist" effect has even been found among samples of people that include academic philosophers. Nonetheless, some studies detect this correlation between atheism and reflective, System 2 thinking in only some of the countries that they study, suggesting that it is not just intuitive and reflective thinking that predict variance in religiosity, but also cultural differences. Evidence Belief bias effect A belief bias is the tendency to judge the strength of arguments based on the plausibility of their conclusion rather than how strongly they support that conclusion. Some evidence suggests that this bias results from competition between logical (System 2) and belief-based (System 1) processes during evaluation of arguments. Studies on belief-bias effect were first designed by Jonathan Evans to create a conflict between logical reasoning and prior knowledge about the truth of conclusions. Participants are asked to evaluate syllogisms that are: valid arguments with believable conclusions, valid arguments with unbelievable conclusions, invalid arguments with believable conclusions, and invalid arguments with unbelievable conclusions. Participants are told to only agree with conclusions that logically follow from the premises given. The results suggest when the conclusion is believable, people erroneously accept invalid conclusions as valid more often than invalid arguments are accepted which support unpalatable conclusions. This is taken to suggest that System 1 beliefs are interfering with the logic of System 2. Tests with working memory De Neys conducted a study that manipulated working memory capacity while answering syllogistic problems. This was done by burdening executive processes with secondary tasks. Results showed that when System 1 triggered the correct response, the distractor task had no effect on the production of a correct answer which supports the fact that System 1 is automatic and works independently of working memory, but when belief-bias was present (System 1 belief-based response was different from the logically correct System 2 response) the participants performance was impeded by the decreased availability of working memory. This falls in accordance with the knowledge about System 1 and System 2 of the dual-process accounts of reasoning because System 1 was shown to work independent of working memory, and System 2 was impeded due to a lack of working memory space so System 1 took over which resulted in a belief-bias. fMRI studies Vinod Goel and others produced neuropsychological evidence for dual-process accounts of reasoning using fMRI studies. They provided evidence that anatomically distinct parts of the brain were responsible for the two different kinds of reasoning. 
They found that content-based reasoning caused left temporal hemisphere activation whereas abstract formal problem reasoning activated the parietal system. They concluded that different kinds of reasoning, depending on the semantic content, activated one of two different systems in the brain. A similar study incorporated fMRI during a belief-bias test. They found that different mental processes were competing for control of the response to the problems given in the belief-bias test. The prefrontal cortex was critical in detecting and resolving conflicts, which are characteristic of System 2, and had already been associated with that System 2. The ventral medial prefrontal cortex, known to be associated with the more intuitive or heuristic responses of System 1, was the area in competition with the prefrontal cortex. Near-infrared spectroscopy Tsujii and Watanabe did a follow-up study to Goel and Dolan's fMRI experiment. They examined the neural correlates on the inferior frontal cortex (IFC) activity in belief-bias reasoning using near-infrared spectroscopy (NIRS). Subjects performed a syllogistic reasoning task, using congruent and incongruent syllogisms, while attending to an attention-demanding secondary task. The interest of the researchers was in how the secondary-tasks changed the activity of the IFC during congruent and incongruent reasoning processes. The results showed that the participants performed better in the congruent test than in the incongruent test (evidence for belief bias); the high demand secondary test impaired the incongruent reasoning more than it impaired the congruent reasoning. NIRS results showed that the right IFC was activated more during incongruent trials. Participants with enhanced right IFC activity performed better on the incongruent reasoning than those with decreased right IFC activity. This study provided some evidence to enhance the fMRI results that the right IFC, specifically, is critical in resolving conflicting reasoning, but that it is also attention-demanding; its effectiveness decreases with loss of attention. The loss of effectiveness in System 2 following loss of attention makes the automatic heuristic System 1 take over, which results in belief bias. Matching bias Matching bias is a non-logical heuristic. The matching bias is described as a tendency to use lexical content matching of the statement about which one is reasoning, to be seen as relevant information and do the opposite as well, ignore relevant information that doesn't match. It mostly affects problems with abstract content. It doesn't involve prior knowledge and beliefs but it is still seen as a System 1 heuristic that competes with the logical System 2. The Wason selection task provides evidence for the matching bias. The test is designed as a measure of a person's logical thinking ability. Performance on the Wason Selection Task is sensitive to the content and context with which it is presented. If you introduce a negative component into the conditional statement of the Wason Selection Task, e.g. 'If there is an A one side of the card then there is not a 3 on the other side', there is a strong tendency to choose cards that match the items in the negative condition to test, regardless of their logical status. Changing the test to be a test of following rules rather than truth and falsity is another condition where the participants will ignore the logic because they will simply follow the rule, e.g. 
changing the test to be a test of a police officer looking for underaged drinkers. The original task is more difficult because it requires explicit and abstract logical thought from System 2, and the police officer test is cued by relevant prior knowledge from System 1. Studies have shown that you can train people to inhibit matching bias which provides neuropsychological evidence for the dual-process theory of reasoning. When you compare trials before and after the training there is evidence for a forward shift in activated brain area. Pre-test results showed activation in locations along the ventral pathway and post-test results showed activation around the ventro-medial prefrontal cortex and anterior cingulate. Matching bias has also been shown to generalise to syllogistic reasoning. Evolution Dual-process theorists claim that System 2, a general purpose reasoning system, evolved late and worked alongside the older autonomous sub-systems of System 1. The success of Homo sapiens lends evidence to their higher cognitive abilities above other hominids. Mithen theorizes that the increase in cognitive ability occurred 50,000 years ago when representational art, imagery, and the design of tools and artefacts are first documented. She hypothesizes that this change was due to the adaptation of System 2. Most evolutionary psychologists do not agree with dual-process theorists. They claim that the mind is modular, and domain-specific, thus they disagree with the theory of the general reasoning ability of System 2. They have difficulty agreeing that there are two distinct ways of reasoning and that one is evolutionarily old, and the other is new. To ease this discomfort, the theory is that once System 2 evolved, it became a 'long leash' system without much genetic control which allowed humans to pursue their individual goals. Issues with the dual-process account of reasoning The dual-process account of reasoning is an old theory, as noted above. But according to Evans it has adapted itself from the old, logicist paradigm, to the new theories that apply to other kinds of reasoning as well. And the theory seems more influential now than in the past which is questionable. Evans outlined 5 "fallacies": All dual-process theories are essentially the same. There is a tendency to assume all theories that propose two modes or styles of thinking are related and so they end up all lumped under the umbrella term of "dual-process theories". There are just two systems underlying System 1 and System 2 processing. There are clearly more than just two cognitive systems underlying people's performance on dual-processing tasks. Hence the change to theorizing that processing is done in two minds that have different evolutionary histories and that each have multiple sub-systems. System 1 processes are responsible for cognitive biases; System 2 processes are responsible for normatively correct responding. Both System 1 and System 2 processing can lead to normative answers and both can involve cognitive biases. System 1 processing is contextualised while System 2 processing is abstract. Recent research has found that beliefs and context can influence System 2 processing as well as System 1. Fast processing indicates the use of System 1 rather than System 2 processes. Just because a processing is fast does not mean it is done by System 1. Experience and different heuristics can influence System 2 processing to go faster. 
Another argument against dual-process accounts of reasoning, outlined by Osman, is that the proposed dichotomy of System 1 and System 2 does not adequately accommodate the range of processes accomplished. Moshman proposed that there should be four possible types of processing as opposed to two. They would be implicit heuristic processing, implicit rule-based processing, explicit heuristic processing, and explicit rule-based processing. Another fine-grained division is as follows: implicit action-centered processes, implicit non-action-centered processes, explicit action-centered processes, and explicit non-action-centered processes (that is, a four-way division reflecting both the implicit-explicit distinction and the procedural-declarative distinction). In response to the question of whether there are dichotomous processing types, many have instead proposed a single-system framework that incorporates a continuum between implicit and explicit processes. Alternative model The dynamic graded continuum (DGC), originally proposed by Cleeremans and Jiménez, is an alternative single-system framework to the dual-process account of reasoning. It has not been accepted as better than the dual-process theory; it is instead usually used as a comparison against which the dual-process model can be evaluated. The DGC proposes that differences in representation generate variation in forms of reasoning without assuming a multiple-system framework. It describes how graded properties of the representations that are generated while reasoning result in the different types of reasoning. It separates terms like implicit and automatic processing, whereas the dual-process model uses the terms interchangeably to refer to the whole of System 1. Instead, the DGC uses a continuum of reasoning that moves from implicit, to explicit, to automatic. Fuzzy-trace theory According to Charles Brainerd and Valerie Reyna's fuzzy-trace theory of memory and reasoning, people have two memory representations: verbatim and gist. Verbatim is memory for surface information (e.g. the words in this sentence) whereas gist is memory for semantic information (e.g. the meaning of this sentence). This dual-process theory posits that we encode, store, retrieve, and forget the information in these two traces of memory separately and completely independently of each other. Furthermore, the two memory traces decay at different rates: verbatim decays quickly, while gist lasts longer. In terms of reasoning, fuzzy-trace theory posits that as we mature, we increasingly rely more on gist information than on verbatim information. Evidence for this lies in framing experiments where framing effects become stronger when verbatim information (percentages) is replaced with gist descriptions. Other experiments rule out predictions of prospect theory (extended and original) as well as other current theories of judgment and decision making. See also References External links Laboratory for Rational Decision Making, Cornell University Cognition Cognitive psychology Psychological theories
Dual process theory
Biology
6,474
41,682,723
https://en.wikipedia.org/wiki/Comparison%20of%20CPU%20microarchitectures
The following is a comparison of CPU microarchitectures. See also Processor design Comparison of instruction set architectures Notes References Computer architecture CPU microarchitectures
Comparison of CPU microarchitectures
Technology,Engineering
35
3,208,076
https://en.wikipedia.org/wiki/Journal%20of%20Medicinal%20Chemistry
The Journal of Medicinal Chemistry is a biweekly peer-reviewed medical journal covering research in medicinal chemistry. It is published by the American Chemical Society. It was established in 1959 as the Journal of Medicinal and Pharmaceutical Chemistry and obtained its current name in 1963. Philip S. Portoghese served as editor-in-chief from 1972 to 2011. In 2012, Gunda Georg (University of Minnesota) and Shaomeng Wang (University of Michigan) succeeded Portoghese (University of Minnesota). In 2021, Craig W. Lindsley (Vanderbilt University) became editor-in-chief. According to the Journal Citation Reports, the journal has a 2022 impact factor of 7.3. See also ACS Medicinal Chemistry Letters References External links American Chemical Society academic journals Medicinal chemistry journals Biweekly journals Academic journals established in 1959 English-language journals
Journal of Medicinal Chemistry
Chemistry
173
1,863,001
https://en.wikipedia.org/wiki/Bone%20china
Bone china is a type of vitreous, translucent pottery, the raw materials for which include bone ash, feldspathic material and kaolin. It has been defined as "ware with a translucent body" containing a minimum of 30% of phosphate derived from calcined animal bone or calcium phosphate. Bone china is amongst the strongest of whiteware ceramics, and is known for its high levels of whiteness and translucency. Its high strength allows it to be produced in thinner cross-sections than other types of whiteware. Like stoneware, it is vitrified, but is translucent due to differing mineral properties. In the mid-18th century, English potters had not succeeded in making hard-paste porcelain (as made in East Asia and Meissen porcelain), but found bone ash a useful addition to their soft-paste porcelain mixtures. This became standard at the Bow porcelain factory in London (operating from around 1747), and spread to some other English factories. The modern product was developed by the Staffordshire potter Josiah Spode in the early 1790s. Spode included kaolin, so his formula, sometimes called "Staffordshire bone-porcelain", was effectively hard-paste, but stronger, and versions were adopted by all the major English factories by around 1815. From its initial development up to the latter part of the 20th century, bone china was almost exclusively an English product, with production being very largely localised in Stoke-on-Trent. Most major English firms made or still make it, including Spode, Royal Worcester, Royal Crown Derby, Royal Doulton, Wedgwood, and Mintons. In the 20th century it began to be made elsewhere, including in Russia, China, and Japan. China is now the world's largest manufacturer. In the UK, references to "china" or "porcelain" can refer to bone china, and "English porcelain" has been used as a term for it, both in the UK and around the world. Osborne, Harold (ed), The Oxford Companion to the Decorative Arts, p. 130, 1975, OUP; Faulkner, Charles H., "The Ramseys at Swan Pond: The Archaeology and History of an East Tennessee Farm", p. 96, 2008, Univ. of Tennessee Press, 9781572336094; Lawrence, Susan, "Archaeologies of the British: Explorations of Identity in the United Kingdom and Its Colonies 1600-1945", p. 196, 2013, Routledge, 781136801921 History The first development of what would become known as bone china was made by Thomas Frye at his Bow porcelain factory near Bow in East London in 1748. His factory was located very close to the cattle markets and slaughterhouses of London and Essex, and hence had easy access to animal bones. Frye used up to 45% bone ash in his formulation to create what he called "fine porcelain"."Science Of Early English Porcelain." I.C. Freestone. Sixth Conference and Exhibition of the European Ceramic Society. Vol.1 Brighton, 20–24 June 1999, p.11-17 Later, Josiah Spode in Stoke-on-Trent further developed the concept between 1789 and 1793, introducing his "Stoke China" in 1796. He died suddenly the following year, and his son Josiah Spode II quickly rechristened the ware "bone china". Among his developments was abandoning Frye's procedure of calcining the bone together with some of the other raw body materials, in favour of calcining just the bone. Bone china quickly proved to be highly popular, leading to its production by other English pottery manufacturers. Both Spode's formulation and his business were successful: his formulation of 6 parts bone ash, 4 parts china stone and 3.5 parts kaolin remains the basis for all bone china. 
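As an arithmetic aside, Spode's formulation quoted above can be converted from parts to approximate percentages. The short Python sketch below is illustrative only; it assumes the parts are by weight, which the text does not state explicitly.

```python
# Convert Spode's bone china formulation (parts, as quoted above) to
# approximate percentages.  Assumes the parts are measured by weight.
formulation_parts = {
    "bone ash": 6.0,
    "china stone": 4.0,
    "kaolin": 3.5,
}

total = sum(formulation_parts.values())
for ingredient, parts in formulation_parts.items():
    print(f"{ingredient}: {100 * parts / total:.1f}%")

# Expected output (approximate):
#   bone ash: 44.4%
#   china stone: 29.6%
#   kaolin: 25.9%
```

For comparison, the Production section below quotes a traditional formulation of about 25% kaolin, 25% china stone and 50% bone ash.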
It was only in 2009 that his company, Spode, went into receivership, before eventually being purchased by Portmeirion Pottery. Production Raw materials The traditional formulation for bone china is about 25% kaolin, 25% China stone and 50% bone ash. The bone ash that is used in bone china has traditionally been made from cattle bones that have a lower iron content. These bones are crushed before being degelatinised and then calcined at around 1,000 °C to produce bone ash. The ash is milled to a fine particle size. The kaolin component of the body is needed to give the unfired body plasticity, which allows articles to be shaped. This mixture is then fired at around 1,200 °C. The raw materials for bone china are comparatively expensive, and the production is labour-intensive, which is why bone china maintains a luxury status and high pricing. The use of hydroxyapatite compounds, derived from rock sources, rather than bone ash has seen increased use since the 1990s. If used appropriately, the resultant ceramic material conforms to accepted definitions of bone china, and the properties and appearance are indistinguishable from those using naturally derived bone ash. Bones Of Contention. Asian Ceramics. April 2004. Replacing Bone Ash In China. D. Gratton. Journal Of The Canadian Ceramics Society 65. No.4. 1996 Mineralogy Bone china consists of two crystalline phases, anorthite (CaAl2Si2O8) and β-tricalcium phosphate/whitlockite (Ca3(PO4)2), embedded in a substantial amount of glass. Production locations For almost 200 years from its development, bone china was almost exclusively produced in the UK; it was ignored by most European and Asian countries already making porcelain. During the middle part of the 20th century manufacturers in other countries began production, with the first successful ones outside the UK being Japan's Noritake, Nikko and Narumi. Skeletons In The Cupboard. Asian Ceramics. February 2013. Lenox was the only major manufacturer of bone china in the United States, and supplied Presidential dinner services to the White House. The factory closed in March 2020. In the Soviet Union, the bone china recipe was reinvented at the Lomonosov Porcelain Factory, with production starting in May 1969. The Soviet bone china was said to be thinner and whiter than the British one thanks to a particular firing regimen, which earned the recipe's developers USSR State Prizes. Bone china is still produced in Russia at the same factory, which has since reverted to its former name, the Imperial Porcelain Factory. In more recent years, production in China has expanded considerably, and the country is now the biggest producer of bone china in the world. Other countries producing considerable amounts of bone china are Bangladesh, India, Indonesia, Sri Lanka and Thailand. Rajasthan had become a hub for bone china in India, with production in the state totalling 16–17 tonnes per day in 2003. From the start of the first factory, Bengal Potteries, in 1964, bone china output from Indian factories had risen to 10,000 tonnes per year by 2009. Cultural issues In the 21st century, so-called Islamic or halal bone china has been developed using bone ash from halal animals. The Use of Ceramic Product Derived From Non-ḥalal Animal Bone: Is It Permissible From the Perspective of Islamic Law?. Mohd Mahyeddin Mohd Salleh et al. International Journal of Asian Social Science, 2017, 7(3): 192–198 Due to the use of animal bones in the production of bone china, vegetarians and vegans may avoid using or purchasing it. 
Porcelain manufactured without animal bones is sometimes called vegan porcelain. References External links Ceramic materials Porcelain Pottery
Bone china
Engineering
1,558
46,548,346
https://en.wikipedia.org/wiki/Trumpler%2015
Trumpler 15 is an open cluster in the constellation Carina that lies on the outskirts of the Carina Nebula. Estimated ages of the stars in Trumpler 15 suggest that the cluster is slightly older than its sibling clusters Trumpler 14 and 16. References External links Carina Nebula Open clusters Carina (constellation) Trumpler catalog Star-forming regions
Trumpler 15
Astronomy
71
58,674,848
https://en.wikipedia.org/wiki/Ann%20Tsukamoto
Ann S. Tsukamoto Weissman (born July 6, 1952) is an Asian American stem cell researcher and inventor. In 1991, she co-patented a process that allowed human stem cells to be isolated, and demonstrated their potential in treating patients with metastatic breast cancer. Tsukamoto's research and contributions in the medical field have led to groundbreaking advancements in stem cell research, especially in understanding the blood systems of cancer patients. Her work has shown potential treatments for cancers and neurological disorders, for which there were previously thought to be none. Career Ann Tsukamoto was born in California on July 6, 1952. She completed her bachelor's degree at the University of California, San Diego and her Ph.D. in immunology and microbiology at the University of California, Los Angeles. Tsukamoto did most of her postdoctoral work at the University of California, San Francisco. Here, she worked on the wnt-1 gene and developed a transgenic model for breast cancer. Wnt-1 was later discovered to be a key player in the stem cell self-renewal pathway. She worked at the biotech company SyStemix from 1989 to 1997, where she co-discovered the human hematopoietic stem cell (hHSC) and played a leading role in the launch of the clinical research program for this cell. The purified hHSC was shown to be cancer-free when isolated from the cancer-contaminated hematopoietic mobilized blood of patients with disseminated cancer, and it successfully regenerated the patients' blood-forming system after myeloablative chemotherapy. Tsukamoto joined StemCells Inc. in 1998, where she has held several leadership roles overseeing the isolation and application of human neural and liver stem cells for various diseases. She led the scientific team that discovered the human central nervous system stem cell and identified a second candidate stem cell for the liver. Under her guidance, the human neural stem cell transitioned into early clinical development for all three components of the central nervous system: the brain, spinal cord, and eye. The biological potential and activity of these cells were demonstrated in some patients, mirroring the results observed in preclinical rodent studies. As of 2017, Tsukamoto is an inventor on seven issued U.S. patents, six of which are related to the human hematopoietic stem cell. By 2021, she had reached a total of 13 patents. References Living people 20th-century American women scientists 21st-century American women scientists 1952 births Stem cell researchers 20th-century American biologists 21st-century American biologists American women biologists University of California, San Diego alumni University of California, Los Angeles alumni University of California, San Francisco alumni 21st-century American inventors American women inventors
Ann Tsukamoto
Biology
557
43,730,809
https://en.wikipedia.org/wiki/Byers%E2%80%93Yang%20theorem
In quantum mechanics, the Byers–Yang theorem states that all physical properties of a doubly connected system (an annulus) enclosing a magnetic flux through the opening are periodic in the flux with period hc/e (the magnetic flux quantum). The theorem was first stated and proven by Nina Byers and Chen-Ning Yang (1961), and further developed by Felix Bloch (1970). Proof An enclosed flux corresponds to a vector potential inside the annulus whose line integral along any path that circulates around the opening once equals the enclosed flux. One can try to eliminate this vector potential by a gauge transformation of the wave function of the electrons. The gauge-transformed wave function satisfies the same Schrödinger equation as the original wave function, but with a different magnetic vector potential. It is assumed that the electrons experience zero magnetic field at all points inside the annulus, the field being nonzero only within the opening (where there are no electrons). It is then always possible to find a gauge function that cancels the vector potential inside the annulus, so one would conclude that the system with enclosed flux is equivalent to a system with zero enclosed flux. However, for an arbitrary flux the gauge-transformed wave function is no longer single-valued: its phase changes by 2π times the ratio of the enclosed flux to hc/e whenever one of the electron coordinates is moved once around the ring back to its starting point. The requirement of a single-valued wave function therefore restricts the gauge transformation to fluxes that are an integer multiple of hc/e. Systems that enclose fluxes differing by a multiple of hc/e are equivalent. Applications An overview of physical effects governed by the Byers–Yang theorem is given by Yoseph Imry. These include the Aharonov–Bohm effect, persistent current in normal metals, and flux quantization in superconductors. References Theorems in quantum mechanics
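The proof above can be written compactly in standard notation. The following LaTeX sketch uses conventional symbols and Gaussian units as assumptions of this summary (Φ for the enclosed flux, χ for the gauge function, Φ₀ = hc/e); sign conventions for the electron charge vary between texts.

```latex
% Gauge-transformation argument behind the Byers–Yang theorem (Gaussian units).
% Assumed symbols: \Phi = enclosed flux, \chi = gauge function, \Phi_0 = hc/e.
\begin{align}
  \oint_C \mathbf{A}\cdot d\mathbf{l} &= \Phi ,\\
  \psi'(\mathbf{r}_1,\dots,\mathbf{r}_N) &=
      \exp\!\Big(\tfrac{ie}{\hbar c}\sum_{j=1}^{N}\chi(\mathbf{r}_j)\Big)\,
      \psi(\mathbf{r}_1,\dots,\mathbf{r}_N),
  \qquad \mathbf{A}' = \mathbf{A} - \nabla\chi ,\\
  \chi(\text{after one circuit}) - \chi(\text{before}) &= \Phi
  \;\Longrightarrow\;
  \Delta\big(\text{phase of }\psi'\big) = \frac{e\Phi}{\hbar c}
      = 2\pi\,\frac{\Phi}{\Phi_0},
  \qquad \Phi_0 = \frac{hc}{e}.
\end{align}
% Single-valuedness of \psi' requires \Phi to be an integer multiple of \Phi_0,
% so all physical properties are periodic in \Phi with period \Phi_0.
```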
Byers–Yang theorem
Physics,Mathematics
368
5,512,894
https://en.wikipedia.org/wiki/Kronecker%20limit%20formula
In mathematics, the classical Kronecker limit formula describes the constant term at s = 1 of a real analytic Eisenstein series (or Epstein zeta function) in terms of the Dedekind eta function. There are many generalizations of it to more complicated Eisenstein series. It is named for Leopold Kronecker. First Kronecker limit formula The (first) Kronecker limit formula gives the constant term in the expansion about s = 1 of E(τ,s), where E(τ,s) is the real analytic Eisenstein series, given by a lattice sum for Re(s) > 1 and by analytic continuation for other values of the complex number s; γ is the Euler–Mascheroni constant; τ = x + iy with y > 0; and η(τ), with q = e^(2πiτ), is the Dedekind eta function. The Eisenstein series has a pole at s = 1 of residue π, and the (first) Kronecker limit formula gives the constant term of the Laurent series at this pole. This formula has an interpretation in terms of the spectral geometry of the elliptic curve associated to the lattice: it expresses the zeta-regularized determinant of the Laplace operator associated to the flat metric in terms of the Dedekind eta function. This formula has been used in string theory for the one-loop computation in Polyakov's perturbative approach. Second Kronecker limit formula The second Kronecker limit formula evaluates a twisted Eisenstein series at s = 1, where u and v are real and not both integers; here q = e^(2πiτ) and q_a = e^(2πiaτ), p = e^(2πiz) and p_a = e^(2πiaz), the series converges for Re(s) > 1, and it is defined by analytic continuation for other values of the complex number s. See also Herglotz–Zagier function References Serge Lang, Elliptic functions. C. L. Siegel, Lectures on advanced analytic number theory, Tata Institute, 1961. External links Chapter0.pdf Theorems in analytic number theory Modular forms
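The explicit statement of the first limit formula is standard; the LaTeX sketch below records it in the conventional normalization (the notation E(τ,s), γ, η is assumed here and may differ in detail from the source's).

```latex
% First Kronecker limit formula in the usual normalization.
% E(\tau,s) is the real-analytic Eisenstein series, \gamma the
% Euler–Mascheroni constant, \eta the Dedekind eta function, \tau = x+iy, y>0.
\begin{align}
  E(\tau,s) &= \sum_{(m,n)\neq(0,0)} \frac{y^{s}}{|m\tau+n|^{2s}},
      \qquad \operatorname{Re}(s) > 1,\\
  E(\tau,s) &= \frac{\pi}{s-1}
      + 2\pi\Big(\gamma - \log 2
      - \log\!\big(\sqrt{y}\,|\eta(\tau)|^{2}\big)\Big)
      + O(s-1) \quad \text{as } s \to 1 .
\end{align}
```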
Kronecker limit formula
Mathematics
402
4,746,766
https://en.wikipedia.org/wiki/Process
A process is a series or set of activities that interact to produce a result; it may occur once-only or be recurrent or periodic. Things called a process include: Business and management Business process, activities that produce a specific service or product for customers Business process modeling, activity of representing processes of an enterprise in order to deliver improvements Manufacturing process management, a collection of technologies and methods used to define how products are to be manufactured. Process architecture, structural design of processes, applies to fields such as computers, business processes, logistics, project management Process area, related processes within an area which together satisfies an important goal for improvements within that area Process costing, a cost allocation procedure of managerial accounting Process management (project management), a systematic series of activities directed towards planning, monitoring the performance and causing an end result in engineering activities, business process, manufacturing processes or project management Process-based management, is a management approach that views a business as a collection of processes Law Due process, the concept that governments must respect the rule of law Legal process, the proceedings and records of a legal case Service of process, the procedure of giving official notice of a legal proceeding Science and technology The general concept of the scientific process, see scientific method Process theory, the scientific study of processes Industrial processes, consists of the purposeful sequencing of tasks that combine resources to produce a desired output Biology and psychology Process (anatomy), a projection or outgrowth of tissue from a larger body Biological process, a process of a living organism Cognitive process, such as attention, memory, language use, reasoning, and problem solving Mental process, a function or processes of the mind Neuronal process, also neurite, a projection from the cell body of a neuron Chemistry Chemical process, a method or means of changing one or more chemicals or chemical compounds Unit process, a step in manufacturing in which chemical reaction takes place Computing Process (computing), a computer program, or running a program concurrently with other programs Child process, created by another process Parent process Process management (computing), an integral part of any modern-day operating system (OS) Processing (programming language), an open-source language and integrated development environment Mathematics In probability theory: Branching process, a Markov process that models a population Diffusion process, a solution to a stochastic differential equation Empirical process, a stochastic process that describes the proportion of objects in a system in a given state Lévy process, a stochastic process with independent, stationary increments Poisson process, a point process consisting of randomly located points on some underlying space Predictable process, a stochastic process whose value is knowable Stochastic process, a random process, as opposed to a deterministic process Wiener process, a continuous-time stochastic process Process calculus, a diverse family of related approaches for formally modeling concurrent systems Process function, a mathematical concept used in thermodynamics Thermodynamics Process function, a mathematical concept used in thermodynamics Thermodynamic process, the energetic evolution of a thermodynamic system Adiabatic process, which 
proceeds without transfer of heat or matter between a system and its surroundings Isenthalpic process, in which enthalpy stays constant Isobaric process, in which the pressure stays constant Isochoric process, in which volume stays constant Isothermal process, in which temperature stays constant Polytropic process, which obeys the equation pV^n = constant Quasistatic process, which occurs infinitely slowly, as an approximation Other uses The Process, a concept in the film 3% Food processing, transformation of raw ingredients, by physical or chemical means into food Language processing in the brain Natural language processing Praxis (process), in philosophy, the process by which a theory or skill is enacted or realized Process (engineering), set of interrelated tasks that transform inputs into outputs Process philosophy, which regards change as the cornerstone of reality Process thinking, a philosophy that focuses on present circumstances Writing process, a concept in writing and composition studies Work in process, goods that are partially completed within a company, awaiting finalization for sale. External links Business process Business process management Process engineering Industrial processes Technology-related lists Legal procedure Biological processes Chemical processes Process (computing)
Process
Chemistry,Engineering,Biology
858
2,864,985
https://en.wikipedia.org/wiki/Staggered%20fermion
In lattice field theory, staggered fermions (also known as Kogut–Susskind fermions) are a fermion discretization that reduces the number of fermion doublers from sixteen to four. They are one of the fastest lattice fermions when it comes to simulations and they also possess some nice features such as a remnant chiral symmetry, making them very popular in lattice QCD calculations. Staggered fermions were first formulated by John Kogut and Leonard Susskind in 1975 and were later found to be equivalent to the discretized version of the Dirac–Kähler fermion. Constructing staggered fermions Single-component basis The naively discretized Dirac action in Euclidean spacetime with lattice spacing and Dirac fields at every lattice point, indexed by , takes the form Staggered fermions are constructed from this by performing the staggered transformation into a new basis of fields defined by Since Dirac matrices square to the identity, this position-dependent transformation mixes the fermion spin components in a way that repeats itself every two lattice spacings. Its effect is to diagonalize the action in the spinor indices, meaning that the action ends up splitting into four distinct parts, one for each Dirac spinor component. Denoting one of those components by , which is a Grassmann variable with no spin structure, the other three components can be dropped, yielding the single-component staggered action where are unit vectors in the direction and the staggered sign function is given by . The staggered transformation is part of a larger class of transformations satisfying . Together with a necessary consistency condition on the plaquettes, all these transformations are equivalent to the staggered transformation. Due to fermion doubling, the original naive action described sixteen fermions, but having discarded three of the four copies this new action describes only four. Spin-taste basis To explicitly show that the single-component staggered fermion action describes four Dirac fermions requires blocking the lattice into hypercubes and reinterpreting the Grassmann fields at the sixteen hypercube sites as the sixteen degrees of freedom of the four fermions. In analogy to the usage of flavour in particle physics, these four fermions are referred to as different tastes of fermions. The blocked lattice sites are indexed by while for each of these the internal hypercube sites are indexed by , whose vector components are either zero or one. In this notation the original lattice vector is written as . The matrices are used to define the spin-taste basis of staggered fermions The taste index runs over the four tastes while the spin index runs over the four spin components. This change of basis turns the one-component action on the lattice with spacing into the spin-taste action with an effective lattice spacing of given by Here and are shorthand for the symmetrically discretized derivative and Laplacian, respectively. Meanwhile, the tensor notation separates out the spin and taste matrices as . Since the kinetic and mass terms are diagonal in the taste indices, the action describes four degenerate Dirac fermions. These interact together in what are known as taste-mixing interactions through the second term, which is an irrelevant dimension-five operator that vanishes in the continuum limit. This action is very similar to the action constructed using four Wilson fermions with the only difference being in the second term tensor structure, which for Wilson fermions is spin and taste diagonal . 
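For reference, a hedged sketch of the single-component construction in conventional notation is given below; the symbols χ, η_μ, and the normalizations are assumptions of this sketch and need not match those of the original presentation.

```latex
% Naive Dirac action, staggered transformation, and single-component action
% (standard conventions: Euclidean spacetime, lattice spacing a, site index n,
% Hermitian Euclidean gamma matrices).
\begin{align}
  S_{\text{naive}} &= a^{4}\sum_{n}\bar\psi(n)\Big[
      \sum_{\mu}\gamma_{\mu}\,\frac{\psi(n+\hat\mu)-\psi(n-\hat\mu)}{2a}
      + m\,\psi(n)\Big],\\
  \psi(n) &= \gamma_{1}^{n_{1}}\gamma_{2}^{n_{2}}\gamma_{3}^{n_{3}}\gamma_{4}^{n_{4}}\,\chi(n),
  \qquad
  \bar\psi(n) = \bar\chi(n)\,\gamma_{4}^{n_{4}}\gamma_{3}^{n_{3}}\gamma_{2}^{n_{2}}\gamma_{1}^{n_{1}},\\
  S_{\text{stag}} &= a^{4}\sum_{n}\bar\chi(n)\Big[
      \sum_{\mu}\eta_{\mu}(n)\,\frac{\chi(n+\hat\mu)-\chi(n-\hat\mu)}{2a}
      + m\,\chi(n)\Big],
  \qquad
  \eta_{\mu}(n) = (-1)^{n_{1}+\cdots+n_{\mu-1}} .
\end{align}
% The matrix factors cancel in every bilinear, leaving only the site-dependent
% signs \eta_\mu(n); dropping three of the four identical components of \chi
% gives the single-component staggered action.
```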
A key property of staggered fermions, not shared by some other lattice fermions such as Wilson fermions, is that they have a remnant chiral symmetry in the massless limit. The remnant symmetry is described in the spin-taste basis by The presence of this remnant symmetry makes staggered fermions especially useful for certain applications since they can describe spontaneous symmetry breaking and anomalies. The symmetry also protects massless fermions from gaining a mass upon renormalization. Staggered fermions are gauged in the one-component action by inserting link fields into the action to make it gauge invariant in the same way that this is done for the naive Dirac lattice action. This approach cannot be implemented in the spin-taste action directly. Instead, the interacting single-component action must be used together with a modified spin-taste basis where Wilson lines are inserted between the different lattice points within the hypercube to ensure gauge invariance. The resulting action cannot be expressed in a closed form but can be expanded out in powers of the lattice spacing, leading to the usual interacting Dirac action for four fermions, together with an infinite series of irrelevant fermion bilinear operators that vanish in the continuum limit. Momentum-space staggered fermions Staggered fermions can also be formulated in momentum space by transforming the single-component action into Fourier space and splitting up the Brillouin zone into sixteen blocks. Shifting these to the origin yields sixteen copies of the single-component fermion whose momenta extend over half the Brillouin zone range . These can be grouped into a matrix which upon a unitary transformation and a momentum rescaling, to ensure that the momenta again range over the full Brillouin range, gives the momentum-space staggered fermion action This can be transformed back into position space through an inverse Fourier transformation. In contrast to the spin-taste action, this action does not mix the taste components together, seemingly giving an action that fully separates out the four fermions. It therefore has a full chiral symmetry group. This is, however, only achieved at the expense of locality, where now the position-space Dirac operator connects lattice points that are arbitrarily far apart, rather than ones restricted to a hypercube. This conclusion is also seen in the propagator, which is discontinuous at the Brillouin zone edges. The momentum-space and position-space formulations differ because they use a different definition of taste, whereby the momentum-space definition does not correspond to the local definition in position space. These two definitions only become equivalent in the continuum limit. Chiral symmetry is maintained despite the possibility of simulating a single momentum-space fermion because locality was one of the assumptions of the Nielsen–Ninomiya theorem determining whether a theory experiences fermion doubling. The loss of locality makes this formulation hard to use for simulations. Simulating staggered fermions The main issue with simulating staggered fermions is that the different tastes mix together due to the taste-mixing term. If there were no mixing between tastes, lattice simulations could easily untangle the different contributions from the different tastes to end up with the results for processes involving a single fermion. Instead, the taste mixing introduces discretization errors that are hard to account for. 
Initially these discretization errors, of order , were unusually large compared to other lattice fermions, making staggered fermions unpopular for simulations. The main method to reduce these errors is to perform Symanzik improvement, whereby irrelevant operators are added to the action with their coefficients fine-tuned to cancel discretization errors. The first such action was the ASQTAD action, with this being improved after analyzing one-loop taste-exchange interactions to further eliminate errors using link-field smearing. This resulted in the highly improved staggered quark (HISQ) action, and it forms the basis of modern staggered fermion simulations. Since simulations are done using the single-component action, simulating staggered fermions is very fast as this requires simulating only single-component Grassmann variables rather than four-component spinors. The main code and gauge ensembles used for staggered fermions come from the MILC collaboration. An advantage of staggered fermions over some other lattice fermions is that the remnant chiral symmetry protects simulations from exceptional configurations, which are gauge field configurations that lead to small eigenvalues of the Dirac operator, making numerical inversion difficult. Staggered fermions are protected from this because their Dirac operator is anti-hermitian, so its eigenvalues come in complex conjugate pairs for real mass. This ensures that the Dirac determinant is real and positive for non-zero masses. Negative or imaginary determinants are problematic during Markov chain Monte Carlo simulations as the determinant is present in the probability weight. Fourth-root trick In the continuum limit the staggered fermion Dirac operator reduces to a four-fold copy of the continuum Dirac operator, so its eigenvalues are four-fold degenerate. This degeneracy is broken by taste mixing at non-zero lattice spacings, although simulations show that the eigenvalues are still roughly clustered in groups of four. This motivates the fourth-root trick, where a single fermion is simulated by replacing the staggered Dirac operator determinant by its fourth root in the partition function. The resulting fermion is called a rooted staggered fermion and it is used in most staggered fermion simulations, including by the MILC collaboration. The theoretical problem in using rooted staggered fermions is that it is unclear whether they give the correct continuum limit, that is, whether rooting changes the universality class of the theory. If it does, then there is no reason to suppose that rooted staggered fermions are any good at describing the continuum field theory. The universality class is generally determined by the dimensionality of the theory and by what symmetries it satisfies. The problem with rooted staggered fermions is that they can only be described by a nonlocal action, for which the universality classification no longer applies. As nonlocality implies a violation of unitarity, rooted staggered fermions are also non-physical at non-zero lattice spacings, although this is not a problem if the nonlocality vanishes in the continuum. It has been found that under reasonable assumptions, the fourth-root trick does define a renormalizable theory that at all orders in perturbation theory reproduces a local, unitary theory with the correct number of light quarks in the continuum. 
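Schematically, the fourth-root substitution described above can be written as follows; the notation (U for the gauge links, S_g for the gauge action, D_st for the staggered Dirac operator) is assumed for this sketch.

```latex
% Schematic rooted staggered partition function (assumed notation).
% One staggered field normally contributes det(D_st + m), describing four
% degenerate tastes; taking the fourth root is intended to keep a single taste.
\begin{equation}
  Z_{\text{rooted}} \;=\; \int \mathcal{D}U \;
      \Big[\det\!\big(D_{\text{st}}[U] + m\big)\Big]^{1/4}\,
      e^{-S_{g}[U]} .
\end{equation}
```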
It remains an open question whether this is also true non-perturbatively; however, theoretical arguments and numerical comparisons to other lattice fermions indicate that rooted staggered fermions do belong to the correct universality class. See also Lattice model Statistical field theory References Lattice field theory Fermions
Staggered fermion
Physics,Materials_science
2,119
53,941,967
https://en.wikipedia.org/wiki/Drug%20recycling
Drug recycling, also referred to as medication redispensing or medication re-use, is the idea that health care organizations or patients with unused drugs can transfer them in a safe and appropriate way to another patient in need. The purpose of such a program is reducing medication waste, thereby saving healthcare costs, expanding the availability of medications and alleviating the environmental burden of medication. The debate Despite the need for waste-preventive measures, the debate over drug recycling programs is ongoing. It is traditional to expect that consumers get prescription drugs from a pharmacy and that the pharmacy gets its drugs from a trusted source, such as a manufacturer or wholesaler. In a drug recycling program, consumers would access drugs through a less standardized supply chain. Consequently, concerns about the quality of the recycled drugs arise. However, in a regulated process, monitored by specialized pharmacies or medical organizations, these uncertainties can be overcome. For example, monitoring the storage conditions of medication, including temperature, light, humidity and agitation, can contribute to regulation of the quality of recycled drugs. For this purpose, pharmaceutical packaging could be upgraded with sensing technologies that can also be designed to detect counterfeits. Such packaging requires an initial investment, but this can be compensated by the potential cost savings obtained from a drug recycling program. Accordingly, drug recycling seems economically viable for expensive drugs, such as HIV post-exposure prophylaxis medication. Donating practices In some countries, drug recycling programs operate successfully by donating unused drugs to the less fortunate. In the United States, drug recycling programs exist locally. As of 2010, Canada had fewer drug recycling programs than the United States. These programs occur in specific pharmacies only, since these pharmacies are prepared to address the special requirements of participating in a recycling program. Usually, drug returns happen without financial compensation. In Greece, the organization GIVMED operates in drug recycling and has saved over half a million euros by recycling almost 60,000 drug packages since 2016. However, in other countries, such as Canada, implementation of drug recycling programs is limited. Other initiatives focus on donating drugs to third-world countries. However, this is accompanied by ethical constraints due to uncertainties in quality, as well as practical constraints, since the drugs are made only temporarily available and local needs are not necessarily addressed. The World Health Organization provided guidelines on appropriate drug donation, thereby discouraging donation practices that do not consider recipients' needs, government policies, effective coordination or quality standards. Towards redispensing as standard of care Alternatively, drug recycling programs could be established as routine clinical practice with the aim of reducing the economic and environmental burden of medication waste. Still, for general implementation of drug recycling programs, clear professional guidelines are required. Research could provide the rationale for these guidelines. For example, research has shown that a majority of patients are willing to use recycled drugs if the quality is maintained, and has explored the requirements for a drug recycling program perceived by stakeholders, including the general public, pharmacists, and policy-makers. 
One can assume that implementing drug recycling as routine clinical practice is only attractive from an economic perspective if the savings exceed the operational pharmacy costs. For this purpose, research should assess the feasibility of drug recycling. In the Netherlands, redispensing of unused oral anticancer drugs is currently being tested in routine clinical practice to determine the cost savings of a quality-controlled process. These data could help policy-makers to prioritize drug recycling on their agenda, thereby facilitating guidelines for general implementation of drug recycling. References External links Guidance Document: Best Management Practices for Unused Pharmaceuticals at Health Care Facilities, a 2010 publication from the United States Environmental Protection Agency Recycling by product Medical waste
Drug recycling
Biology
745
1,809,113
https://en.wikipedia.org/wiki/Comparative%20biology
Comparative biology uses natural variation and disparity to understand the patterns of life at all levels, from genes to communities, and the critical role of organisms in ecosystems. Comparative biology is a cross-lineage approach to understanding the phylogenetic history of individuals or higher taxa and the mechanisms and patterns that drive it. Comparative biology encompasses Evolutionary Biology, Systematics, Neontology, Paleontology, Ethology, Anthropology, and Biogeography as well as historical approaches to Developmental biology, Genomics, Physiology, Ecology and many other areas of the biological sciences. The comparative approach also has numerous applications in human health, genetics, biomedicine, and conservation biology. The biological relationships (phylogenies, pedigrees) are important for comparative analyses and are usually represented by a phylogenetic tree or cladogram to differentiate those features with single origins (homology) from those with multiple origins (homoplasy). See also Cladistics Comparative Anatomy Evolution Evolutionary Biology Systematics Bioinformatics Neontology Paleontology Phylogenetics Genomics References Evolutionary biology Comparisons
Comparative biology
Biology
215
9,913,028
https://en.wikipedia.org/wiki/Annihilation%20radiation
Annihilation radiation is a term used in gamma spectroscopy for the photon radiation produced when a particle and its antiparticle collide and annihilate. Most commonly, this refers to 511-keV photons produced by an electron interacting with a positron. These photons are frequently referred to as gamma rays, despite having their origin outside the nucleus, due to unclear distinctions between types of photon radiation. Positively charged electrons (positrons) are emitted from the nucleus as it undergoes β+ decay. The positron travels a short distance (a few millimeters), depositing any excess energy before it combines with a free electron. The mass of the e- and e+ is completely converted into two photons with an energy of 511 keV each. These annihilation photons are emitted in opposite directions, 180° apart. This is the basis for PET scanners in a process called coincidence counting. Annihilation radiation is not monoenergetic, unlike gamma rays produced by radioactive decay. The production mechanism of annihilation radiation introduces Doppler broadening. The annihilation peak produced in a photon spectrum by annihilation radiation therefore has a higher full width at half maximum (FWHM) than decay-generated gamma-ray peaks in the spectrum. The difference is more apparent with high-resolution detectors, such as germanium detectors, than with low-resolution detectors such as sodium iodide detectors. Because of their well-defined energy (511 keV) and characteristic, Doppler-broadened shape, annihilation radiation can often be useful in defining the energy calibration of a gamma-ray spectrum. References Antimatter Gamma rays
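The 511 keV figure quoted above follows directly from the electron rest energy; a short worked equation (rounded constants, annihilation essentially at rest assumed) is given below.

```latex
% Each photon carries the electron rest energy: for an e+ e- pair annihilating
% essentially at rest, the total rest energy 2 m_e c^2 is shared equally
% between the two back-to-back photons (constants rounded).
\begin{equation}
  E_{\gamma} \;=\; \frac{2\,m_{e}c^{2}}{2} \;=\; m_{e}c^{2}
  \;\approx\; (9.11\times10^{-31}\,\text{kg})(2.998\times10^{8}\,\text{m/s})^{2}
  \;\approx\; 8.19\times10^{-14}\,\text{J}
  \;\approx\; 0.511\,\text{MeV}.
\end{equation}
```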
Annihilation radiation
Physics
344
20,927,937
https://en.wikipedia.org/wiki/Translational%20research
Translational research (also called translation research, translational science, or, when the context is clear, simply translation) is research aimed at translating (converting) results in basic research into results that directly benefit humans. The term is used in science and technology, especially in biology and medical science. As such, translational research forms a subset of applied research. The term has been used most commonly in life sciences and biotechnology, but applies across the spectrum of science and humanities. In the context of biomedicine, translational research is also known as bench to bedside. In the field of education, it is defined as research which translates concepts to classroom practice. Critics of translational medical research (to the exclusion of more basic research) point to examples of important drugs that arose from fortuitous discoveries in the course of basic research such as penicillin and benzodiazepines. Other problems have stemmed from the widespread irreproducibility thought to exist in translational research literature. Although translational research is relatively new, there are now several major research centers focused on it. In the U.S., the National Institutes of Health has implemented a major national initiative to leverage existing academic health center infrastructure through the Clinical and Translational Science Awards. Furthermore, some universities acknowledge translational research as its own field in which to study for a PhD or graduate certificate. Definitions Translational research is aimed at solving particular problems; the term has been used most commonly in life sciences and biotechnology, but applies across the spectrum of science and humanities. In the field of education, it is defined for school-based education by the Education Futures Collaboration (www.meshguides.org) as research which translates concepts to classroom practice. Examples of translational research are commonly found in education subject association journals and in the MESHGuides, which have been designed for this purpose. In bioscience, translational research is a term often used interchangeably with translational medicine or translational science or bench to bedside. The adjective "translational" refers to the "translation" (the term derives from the Latin for "carrying over") of basic scientific findings in a laboratory setting into potential treatments for disease. Biomedical translational research adopts a scientific investigation or enquiry into a given problem facing medical and health practices: it aims to "translate" findings in fundamental research into practice. In the field of biomedicine, it is often called "translational medicine", defined by the European Society for Translational Medicine (EUSTM) as "an interdisciplinary branch of the biomedical field supported by three main pillars: benchside, bedside and community", from laboratory experiments through clinical trials, to therapies, to point-of-care patient applications. The end point of translational research in medicine is the production of a promising new treatment that can be used clinically. Translational research arose in response to the long time often needed to turn a discovery made in basic medical research into practical use within a health system. It is for this reason that translational research is considered more effective when carried out in dedicated university science departments or in isolated, dedicated research centers. 
Since 2009, the field has had specialized journals, the American Journal of Translational Research and Translational Research, dedicated to translational research and its findings. Translational research in biomedicine is broken down into different stages. In a two-stage model, T1 research refers to the "bench-to-bedside" enterprise of translating knowledge from the basic sciences into the development of new treatments, and T2 research refers to translating the findings from clinical trials into everyday practice, although this model actually refers to the two "roadblocks" T1 and T2. Waldman et al. propose a scheme going from T0 to T5. T0 is laboratory (before human) research. In T1-translation, new laboratory discoveries are first translated to human application, which includes phase I and II clinical trials. In T2-translation, candidate health applications progress through clinical development to engender the evidence base for integration into clinical practice guidelines. This includes phase III clinical trials. In T3-translation, dissemination into community practices happens. T4-translation seeks to (1) advance scientific knowledge to paradigms of disease prevention, and (2) move health practices established in T3 into population health impact. Finally, T5-translation focuses on improving the wellness of populations by reforming suboptimal social structures. Comparison to basic research or applied research Basic research is the systematic study directed toward greater knowledge or understanding of the fundamental aspects of phenomena and is performed without thought of practical ends. It results in general knowledge and understanding of nature and its laws. For instance, basic biomedical research focuses on studies of disease processes using, for example, cell cultures or animal models without consideration of the potential utility of that information. Applied research is a form of systematic inquiry involving the practical application of science. It accesses and uses the research communities' accumulated theories, knowledge, methods, and techniques for a specific, often state-, business-, or client-driven purpose. Translational research forms a subset of applied research. In the life sciences, this was evidenced by a citation pattern between the applied and basic sides in cancer research that appeared around 2000. In fields such as psychology, translational research is seen as a bridge between applied research and basic research. The field of psychology defines translational research as the use of basic research to develop and test applications, such as treatment. Challenges and criticisms Critics of translational medical research (to the exclusion of more basic research) point to examples of important drugs that arose from fortuitous discoveries in the course of basic research such as penicillin and benzodiazepines, and the importance of basic research in improving our understanding of basic biological facts (e.g. the function and structure of DNA) that go on to transform applied medical research. Examples of failed translational research in the pharmaceutical industry include the failure of anti-Aβ therapeutics in Alzheimer's disease. Other problems have stemmed from the widespread irreproducibility thought to exist in translational research literature. 
Translational research facilities in life sciences In the U.S., the National Institutes of Health has implemented a major national initiative to leverage existing academic health center infrastructure through the Clinical and Translational Science Awards. The National Center for Advancing Translational Sciences (NCATS) was established on December 23, 2011. Although translational research is relatively new, it is being recognized and embraced globally. Some major centers for translational research include: About 60 hubs of the Clinical and Translational Science Awards program. Texas Medical Center, Houston, Texas, United States Translational Research Institute (Australia), Brisbane, Queensland, Australia. University of Rochester, Rochester, New York, United States has a dedicated Clinical and Translational Science Institute Stanford University Medical Center, Stanford, California, United States. Translational Genomics Research Institute, Phoenix, Arizona, United States. Maine Medical Center in Portland, Maine, United States has a dedicated translational research institute. Scripps Research Institute, Florida, United States, has a dedicated translational research institute. UC Davis Clinical and Translational Science Center, Sacramento, California Clinical and Translational Science Institute, University of Pittsburgh, Pittsburgh, Pennsylvania Weill Cornell Medicine has a Clinical and Translational Science Center. Hansjörg Wyss Institute for Biologically Inspired Engineering at Harvard University in Boston, Massachusetts, United States. Additionally, translational research is now acknowledged by some universities as a dedicated field in which to study for a PhD or graduate certificate, in a medical context. These institutions currently include Monash University in Victoria, Australia; the University of Queensland Diamantina Institute in Brisbane, Australia; Duke University in Durham, North Carolina; Creighton University in Omaha, Nebraska; Emory University in Atlanta, Georgia; and The George Washington University in Washington, D.C. Industry and academic interactions to promote translational science initiatives have been carried out by various global centers such as the European Commission, GlaxoSmithKline and the Novartis Institute for Biomedical Research. See also Biological engineering Clinical and Translational Science (journal) Clinical trials Implementation research Personalized medicine Systems biology Translational research informatics Research practice gap (Knowledge transfer) References External links Translational Research Institute NIH Roadmap American Journal of Translational Research Center for Comparative Medicine and Translational Research OSCAT2012: Conference on translational medicine Medical research Research Research Nursing research
Translational research
Biology
1,728
1,375,635
https://en.wikipedia.org/wiki/Adenylate%20kinase
Adenylate kinase (EC 2.7.4.3) (also known as ADK or myokinase) is a phosphotransferase enzyme that catalyzes the interconversion of the various adenosine phosphates (ATP, ADP, and AMP). By constantly monitoring phosphate nucleotide levels inside the cell, ADK plays an important role in cellular energy homeostasis. Substrate and products The reaction catalyzed is: ATP + AMP ⇔ 2 ADP The equilibrium constant varies with conditions, but it is close to 1. Thus, ΔG° for this reaction is close to zero. In muscle from a variety of species of vertebrates and invertebrates, the concentration of ATP is typically 7–10 times that of ADP, and usually greater than 100 times that of AMP. The rate of oxidative phosphorylation is controlled by the availability of ADP. Thus, the mitochondrion attempts to keep ATP levels high due to the combined action of adenylate kinase and the controls on oxidative phosphorylation. Isozymes To date there have been nine human ADK protein isoforms identified. While some of these are ubiquitous throughout the body, some are localized into specific tissues. For example, ADK7 and ADK8 are both only found in the cytosol of cells, and ADK7 is found in skeletal muscle whereas ADK8 is not. Not only do the locations of the various isoforms within the cell vary, but the binding of substrate to the enzyme and the kinetics of the phosphoryl transfer are different as well. ADK1, the most abundant cytosolic ADK isozyme, has a Km about a thousand times higher than the Km of ADK7 and 8, indicating a much weaker binding of ADK1 to AMP. Sub-cellular localization of the ADK enzymes is done by including a targeting sequence in the protein. Each isoform also has a different preference for NTPs. Some will only use ATP, whereas others will accept GTP, UTP, and CTP as the phosphoryl carrier. Some of these isoforms prefer other NTPs entirely. There is a mitochondrial GTP:AMP phosphotransferase, also specific for the phosphorylation of AMP, that can only use GTP or ITP as the phosphoryl donor. ADK has also been identified in different bacterial species and in yeast. Two further enzymes are known to be related to the ADK family, namely yeast uridine monophosphokinase and slime mold UMP-CMP kinase. Some residues are conserved across these isoforms, indicating how essential they are for catalysis. One of the most conserved areas includes an Arg residue, whose modification inactivates the enzyme, together with an Asp that resides in the catalytic cleft of the enzyme and participates in a salt bridge. Subfamilies Adenylate kinase, subfamily UMP-CMP kinase Adenylate kinase, isozyme 1 Mechanism Phosphoryl transfer only occurs on closing of the 'open lid'. This causes an exclusion of water molecules that brings the substrates in proximity to each other, lowering the energy barrier for the nucleophilic attack by the α-phosphoryl of AMP on the γ-phosphoryl group of ATP, resulting in formation of ADP by transfer of the γ-phosphoryl group to AMP. In the crystal structure of the ADK enzyme from E. coli with the inhibitor Ap5A, the Arg88 residue binds the Ap5A at the α-phosphate group. It has been shown that the mutation R88G results in a 99% loss of catalytic activity of this enzyme, suggesting that this residue is intimately involved in the phosphoryl transfer. Another highly conserved residue is Arg119, which lies in the adenosine-binding region of the ADK and acts to sandwich the adenine in the active site. 
It has been suggested that the promiscuity of these enzymes in accepting other NTPs is due to these relatively inconsequential interactions of the base in the ATP binding pocket. A network of positive, conserved residues (Lys13, Arg123, Arg156, and Arg167 in ADK from E. coli) stabilizes the buildup of negative charge on the phosphoryl group during the transfer. Two distal aspartate residues bind to the arginine network, causing the enzyme to fold and reducing its flexibility. A magnesium cofactor is also required, essential for increasing the electrophilicity of the phosphate on AMP, though this magnesium ion is only held in the active pocket by electrostatic interactions and dissociates easily. Structure Flexibility and plasticity allow proteins to bind to ligands, form oligomers, aggregate, and perform mechanical work. Large conformational changes in proteins play an important role in cellular signaling. Adenylate kinase is a signal-transducing protein; thus, the balance between conformations regulates protein activity. ADK has a locally unfolded state that becomes depopulated upon binding. A 2007 study by Whitford et al. shows the conformations of ADK when binding with ATP or AMP. The study shows that there are three relevant conformations or structures of ADK: CORE, Open, and Closed. In ADK, there are two small domains called the LID and NMP. ATP binds in the pocket formed by the LID and CORE domains. AMP binds in the pocket formed by the NMP and CORE domains. The Whitford study also reported findings that show that localized regions of a protein unfold during conformational transitions. This mechanism reduces the strain and enhances catalytic efficiency. Local unfolding is the result of competing strain energies in the protein. The local (thermodynamic) stability of the substrate-binding domains ATPlid and AMPlid has been shown to be significantly lower when compared with the CORE domain in ADK from E. coli. Furthermore, it has been shown that the two subdomains (ATPlid and AMPlid) can fold and unfold in a "non-cooperative manner." Binding of the substrates causes preference for 'closed' conformations amongst those that are sampled by ADK. These 'closed' conformations are hypothesized to help with removal of water from the active site to avoid wasteful hydrolysis of ATP in addition to helping optimize alignment of substrates for phosphoryl transfer. Furthermore, it has been shown that the apoenzyme will still sample the 'closed' conformations of the ATPlid and AMPlid domains in the absence of substrates. When comparing the rate of opening of the enzyme (which allows for product release) and the rate of closing that accompanies substrate binding, closing was found to be the slower process. Function Metabolic monitoring The ability for a cell to dynamically measure energetic levels provides it with a method to monitor metabolic processes. By continually monitoring and altering the levels of ATP and the other adenyl phosphates (ADP and AMP), adenylate kinase is an important regulator of energy expenditure at the cellular level. As energy levels change under different metabolic stresses, adenylate kinase is then able to generate AMP, which itself acts as a signaling molecule in further signaling cascades. This generated AMP can, for example, stimulate various AMP-dependent receptors such as those involved in glycolytic pathways, K-ATP channels, and 5' AMP-activated protein kinase (AMPK). 
Common factors that influence adenine nucleotide levels, and therefore ADK activity are exercise, stress, changes in hormone levels, and diet. It facilitates decoding of cellular information by catalyzing nucleotide exchange in the intimate “sensing zone” of metabolic sensors. ADK shuttle Adenylate kinase is present in mitochondrial and myofibrillar compartments in the cell, and it makes two high-energy phosphoryls (β and γ) of ATP available to be transferred between adenine nucleotide molecules. In essence, adenylate kinase shuttles ATP to sites of high energy consumption and removes the AMP generated over the course of those reactions. These sequential phosphotransfer relays ultimately result in propagation of the phosphoryl groups along collections of ADK molecules. This process can be thought of as a bucket brigade of ADK molecules that results in changes in local intracellular metabolic flux without apparent global changes in metabolite concentrations. This process is extremely important for overall homeostasis of the cell. Disease relevance Nucleoside diphosphate kinase deficiency Nucleoside diphosphate (NDP) kinase catalyzes in vivo ATP-dependent synthesis of ribo- and deoxyribonucleoside triphosphates. In mutated Escherichia coli that had a disrupted nucleoside diphosphate kinase, adenylate kinase performed dual enzymatic functions. ADK complements nucleoside diphosphate kinase deficiency. AK1 and post-ischemic coronary reflow Knock out of AK1 disrupts the synchrony between inorganic phosphate and turnover at ATP-consuming sites and ATP synthesis sites. This reduces the energetic signal communication in the post-ischemic heart and precipitates inadequate coronary reflow following ischemia-reperfusion. ADK2 deficiency Adenylate Kinase 2 (AK2) deficiency in humans causes hematopoietic defects associated with sensorineural deafness. Reticular dysgenesis is an autosomal recessive form of human combined immunodeficiency. It is also characterized by an impaired lymphoid maturation and early differentiation arrest in the myeloid lineage. AK2 deficiency results in absent or a large decrease in the expression of proteins. AK2 is specifically expressed in the stria vascularis of the inner ear which indicates why individuals with an AK2 deficiency will have sensorineural deafness. Structural adaptations AK1 genetic ablation decreases tolerance to metabolic stress. AK1 deficiency induces fiber-type specific variation in groups of transcripts in glycolysis and mitochondrial metabolism. This supports muscle energy metabolism. Plastidial ADK deficiency in Arabidopsis thaliana Enhanced growth and elevated photosynthetic amino acid is associated with plastidial adenylate kinase deficiency in Arabidopsis thaliana. References External links Cellular respiration EC 2.7.4
Adenylate kinase
Chemistry,Biology
2,189
3,517,589
https://en.wikipedia.org/wiki/Knowledge%20collection%20from%20volunteer%20contributors
Knowledge collection from volunteer contributors (KCVC) is a subfield of knowledge acquisition within artificial intelligence which attempts to drive down the cost of acquiring the knowledge required to support automated reasoning by having the public enter knowledge in computer processable form over the internet. KCVC might be regarded as similar in spirit to Wikipedia, although the intended audience, artificial intelligence systems, differs. History The 2005 AAAI Spring Symposium on Knowledge Collection from Volunteer Contributors (KCVC05) may have been the first research meeting on this topic. The first large-scale KCVC project was probably the Open Mind Common Sense (OMCS) project, initiated by Push Singh and Marvin Minsky at the MIT Media Lab. In this project, volunteers entered words or simple sentences in English in response to prompts or images. Although the resulting knowledge is not formally represented, it is provided to researchers with parses and other meta-information intended to increase its utility. Later, this group released ConceptNet, which embedded the knowledge contained in the OpenMind Common Sense database as a semantic network. In late 2005, Cycorp released a KCVC system called FACTory that attempts to acquire knowledge in a form directly usable for automated reasoning. It automatically generates questions in English from an underlying predicate calculus representation of candidate assertions produced by automated reading of web pages, by reviewing information previously entered directly in logical form, and by analogy and abduction. References External links Open Mind Project Open Mind Common Sense ISI's Learner Cycorp's FACTory Knowledge engineering
Knowledge collection from volunteer contributors
Technology,Engineering
304
188,379
https://en.wikipedia.org/wiki/Operant%20conditioning%20chamber
An operant conditioning chamber (also known as a Skinner box) is a laboratory apparatus used to study animal behavior. The operant conditioning chamber was created by B. F. Skinner while he was a graduate student at Harvard University. The chamber can be used to study both operant conditioning and classical conditioning. Skinner created the operant conditioning chamber as a variation of the puzzle box originally created by Edward Thorndike. While Skinner's early studies were done using rats, he later moved on to study pigeons. The operant conditioning chamber may be used to observe or manipulate behaviour. An animal is placed in the box where it must learn to activate levers or respond to light or sound stimuli for reward. The reward may be food or the removal of noxious stimuli such as a loud alarm. The chamber is used to test specific hypotheses in a controlled setting. Name Skinner was noted to have expressed his distaste for becoming an eponym. It is believed that psychologist Clark Hull and his Yale students coined the expression "Skinner box". Skinner said that he did not use the term himself; he went so far as to ask Howard Hunt to use "lever box" instead of "Skinner box" in a published document. History In 1898, American psychologist, Edward Thorndike proposed the 'law of effect', which formed the basis of operant conditioning. Thorndike conducted experiments to discover how cats learn new behaviors. His work involved monitoring cats as they attempted to escape from puzzle boxes. The puzzle box trapped the animals until they moved a lever or performed an action which triggered their release. Thorndike ran several trials and recorded the time it took for them to perform the actions necessary to escape. He discovered that the cats seemed to learn from a trial-and-error process rather than insightful inspections of their environment. The animals learned that their actions led to an effect, and the type of effect influenced whether the behavior would be repeated. Thorndike's 'law of effect' contained the core elements of what would become known as operant conditioning. B. F. Skinner expanded upon Thorndike's existing work. Skinner theorized that if a behavior is followed by a reward, that behavior is more likely to be repeated, but added that if it is followed by some sort of punishment, it is less likely to be repeated. He introduced the word reinforcement into Thorndike's law of effect. Through his experiments, Skinner discovered the law of operant learning which included extinction, punishment and generalization. Skinner designed the operant conditioning chamber to allow for specific hypothesis testing and behavioural observation. He wanted to create a way to observe animals in a more controlled setting as observation of behaviour in nature can be unpredictable. Purpose An operant conditioning chamber allows researchers to study animal behaviour and response to conditioning. They do this by teaching an animal to perform certain actions (like pressing a lever) in response to specific stimuli. When the correct action is performed the animal receives positive reinforcement in the form of food or other reward. In some cases, the chamber may deliver positive punishment to discourage incorrect responses. For example, researchers have tested certain invertebrates' reaction to operant conditioning using a "heat box". The box has two walls used for manipulation; one wall can undergo temperature change while the other cannot. 
As soon as the invertebrate crosses over to the side which can undergo temperature change, the researcher will increase the temperature. Eventually, the invertebrate will be conditioned to stay on the side that does not undergo a temperature change. After conditioning, even when the temperature is turned to its lowest setting, the invertebrate will avoid that side of the box. Skinner's pigeon studies involved a series of levers. When the lever was pressed, the pigeon would receive a food reward. This was made more complex as researchers studied animal learning behaviours. A pigeon would be placed in the conditioning chamber and another one would be placed in an adjacent box separated by a plexiglass wall. The pigeon in the chamber would learn to press the lever to receive food as the other pigeon watched. The pigeons would then be switched, and researchers would observe them for signs of cultural learning. Structure The outside shell of an operant conditioning chamber is a large box big enough to easily accommodate the animal being used as a subject. Commonly used animals include rodents (usually lab rats), pigeons, and primates. The chamber is often sound-proof and light-proof to avoid distracting stimuli. Operant conditioning chambers have at least one response mechanism that can automatically detect the occurrence of a behavioral response or action (i.e., pecking, pressing, pushing, etc.). This may be a lever or series of lights which the animal will respond to in the presence of a stimulus. Typical mechanisms for primates and rats are response levers; if the subject presses the lever, the opposite end closes a switch that is monitored by a computer or other programmed device. Typical mechanisms for pigeons and other birds are response keys with a switch that closes if the bird pecks at the key with sufficient force. The other minimal requirement of an operant conditioning chamber is that it has a means of delivering a primary reinforcer such as a food reward. A simple configuration, such as one response mechanism and one feeder, may be used to investigate a variety of psychological phenomena. Modern operant conditioning chambers may have multiple mechanisms, such as several response levers, two or more feeders, and a variety of devices capable of generating different stimuli including lights, sounds, music, figures, and drawings. Some configurations use an LCD panel for the computer generation of a variety of visual stimuli or a set of LED lights to create patterns to be replicated. Some operant conditioning chambers can also have electrified nets or floors so that shocks can be given to the animals as a positive punishment, or lights of different colors that give information about when the food is available as a positive reinforcement. Research impact Operant conditioning chambers have become common in a variety of research disciplines, especially animal learning. The chamber's design allows for easy monitoring of the animal and provides a space to manipulate certain behaviours. This controlled environment may allow for research and experimentation which cannot be performed in the field. There are a variety of applications for operant conditioning. For instance, the shaping of a child's behavior is influenced by compliments, comments, approval, and disapproval of that behavior. An important factor of operant conditioning is its ability to explain learning in real-life situations.
From an early age, parents nurture their children's behavior by using reward and praise following an achievement (crawling or taking a first step) which reinforces such behavior. When a child misbehaves, punishment in the form of verbal discouragement or the removal of privileges are used to discourage them from repeating their actions. Skinner's studies on animals and their behavior laid the framework needed for similar studies on human subjects. Based on his work, developmental psychologists were able to study the effect of positive and negative reinforcement. Skinner found that the environment influenced behavior and when that environment is manipulated, behaviour will change. From this, developmental psychologists proposed theories on operant learning in children. That research was applied to education and the treatment of illness in young children. Skinner's theory of operant conditioning played a key role in helping psychologists understand how behavior is learned. It explains why reinforcement can be used so effectively in the learning process, and how schedules of reinforcement can affect the outcome of conditioning. Commercial applications Slot machines, online games, and dating apps are examples where sophisticated operant schedules of reinforcement are used to reinforce certain behaviors. Gamification, the technique of using game design elements in non-game contexts, has also been described as using operant conditioning and other behaviorist techniques to encourage desired user behaviors. See also Behaviorism Radical behaviorism Operant conditioning Punishment (psychology) Reinforcement Synchronicity References External links B.F. Skinner Foundation Laboratory equipment Behaviorism Behavioral neuroscience Learning Animal testing techniques
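The control logic described in the Structure section (a lever closes a switch that a computer monitors, and a feeder delivers a reinforcer) can be illustrated with a small simulation. This is a hypothetical sketch added here for illustration only; the fixed-ratio schedule, probabilities, and names are invented and do not describe any specific commercial apparatus.

```python
# Hypothetical sketch of an operant chamber controller: count switch closures
# (lever presses) and deliver a food reinforcer on a fixed-ratio schedule.
import random

FIXED_RATIO = 5          # reinforce every 5th lever press (illustrative value)
TRIALS = 40              # simulated time steps
PRESS_PROBABILITY = 0.6  # chance the simulated animal presses the lever in a step

presses, reinforcers = 0, 0
for step in range(TRIALS):
    lever_pressed = random.random() < PRESS_PROBABILITY   # switch closure event
    if lever_pressed:
        presses += 1
        if presses % FIXED_RATIO == 0:
            reinforcers += 1                               # activate the feeder
            print(f"step {step:2d}: reinforcer delivered (press #{presses})")

print(f"{presses} presses, {reinforcers} reinforcers on an FR-{FIXED_RATIO} schedule")
```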
Operant conditioning chamber
Chemistry,Biology
1,626
57,875,269
https://en.wikipedia.org/wiki/Daniel%20Tordera
Daniel Antonio Tordera Salvador (born 17 January 1986) is a Spanish chemist, materials scientist and writer. He is currently an Assistant Professor in the Physical Chemistry Department at the University of Valencia. Early life and education Tordera was born in Valencia, where he studied Chemistry at the University of Valencia. He graduated first in his class in 2009. He then moved to Strasbourg, where he earned his materials science engineering degree at the École européenne de chimie, polymères et matériaux. He earned his PhD at the University of Valencia in 2014, working on light-emitting electrochemical cells. He worked as a postdoctoral researcher at the University of Linköping, studying photonics and plasmonic materials. He was then a researcher at Holst Centre (TNO) in Eindhoven working on organic photodetectors, a position which he held until October 2020, when he accepted a position as Assistant Professor at the University of Valencia. Research Tordera joined the University of Valencia in 2009, where he worked on light-emitting electrochemical cells (LECs). He determined the elusive operational mechanism of LECs and dramatically increased the performance of these devices. He was a visiting scientist at the University of California, Santa Barbara, where he applied his knowledge of conjugated polyelectrolytes. He then started a company with the aim of commercializing products based on his findings. He joined the University of Linköping to research the optical and thermal properties of plasmonic nanoholes. He worked on the development of a plasmonic thermoelectric device, a photoconductive paper and a plasmonic display, among others. He then joined Holst Centre, leading a team working on near-infrared organic photodetectors. There he created the first large-area thin-film vein detector and contributed to the progress of photodetectors in the fields of biometrics and healthcare. He has published over 50 papers in peer-reviewed scientific journals and is the inventor of 4 patents. Awards 2020 - Society for Information Display Distinguished Paper. 2015 - Nanomatmol Award: Best PhD in Nanotechnology and Molecular Materials in Spain by the Spanish Royal Society of Chemistry. 2015 - Outstanding Doctorate Award by the Universidad de Valencia. 2012 - European Materials Research Society Spring Meeting Young Scientist Award. 2010 - Award “Suschem Young Chemistry Researchers” for the Highest Academic Achievement in Spain by the Spanish Royal Society of Chemistry Personal life Daniel Tordera considers himself a politically active person. He developed a satirical political video game, Pedro Sanchez Simulator, that was played by more than 50,000 people in less than a month and gained a lot of media attention. He enjoys writing, photography and making music. He was one of the 10 finalists for the 2018 Planeta Prize for his novel "El Arte de la Fuga". References External links Google Scholar. 1986 births Living people Materials scientists and engineers University of Valencia alumni Spanish chemists People from Valencia
Daniel Tordera
Materials_science,Engineering
611
24,747,714
https://en.wikipedia.org/wiki/Dynamic%20Business%20Modeling
Dynamic Business Modeling ("DBM") describes the ability to automate business models within an open framework. The independent analyst firm Gartner has recently called Dynamic Business Modeling "critical for BSS solutions to succeed". Dynamic Business Modeling is based on principles wherein the business logic of an application is managed independently from the application servers that automate the services and processes defined in the business logic. Business modeling and integration (which itself is defined as part of the business model) are defined in a business logic layer, allowing underlying application servers to be business logic agnostic and therefore need no business driven customization. DBM applied correctly should reduce both the cost and risk in the initial implementation and its future evolution of systems. Previous generations of IT systems (from 1990 to approximately 2001) were designed to address specific business models and regulatory practices and no value was given to logic–infrastructure segregation. These systems provided value by automating predefined business models (commonly referred to as "off-the-shelf"). As a result, they implicitly drove business strategy where DBM states that they should be driven by it. By being "predefined" they do not: openly incorporate business changes in the business landscape of an industry leverage potential business models that new technologies allow Dynamic Business modeling is suited for open automation of strategy-driven business models. By removing the need for customization of core application servers it is postulated as more cost efficient, rapidly deployed and evolveable. Dynamic Business Modeling was initially described (though applied much earlier in practice) by Doug Zone at MetraTech Corp. in reference to the billing segment of the enterprise software market. "Service Oriented Applications" (also known as "service based applications") coined by IBM describe potential methodologies to achieve DBM. Technical definition Dynamic Business Modeling is defined as the automation of Enterprise Business Models based on the principle that the model's underlying business processes and business services need to be dynamically and openly definable and re-definable. Business definition Dynamic Business Modeling is defined as the enabler of a strategic advantage achieved by focused differentiation in any aspect of business (from marketing to finance to operations). This differentiation is achieved through how business is conducted: openly and dynamically defining the business model. Capital investment – human, physical and intellectual – must be aimed at allowing the definition of the business model to be dynamic. Dynamic Business Modeling recognises that businesses dynamically evolve, re-inventing their (business) models to achieve strategic advantage. DBM posits that the role of enterprise software (CRM, billing, ERP) is to dynamically automate and advance the business processes and services that lie behind these Business models. History The term was first used to describe the architecture of MetraNet, a charging, billing, settlement and customer care from MetraTech Corp. Core principles Business strategy drives selection of business models. These business models drive the design of underlying processes and services. Business Analysis is critical: Any number of models can address a strategic imperative. 
But the best models, services and processes will exploit existing business capabilities (human, IT and physical), the areas where change is possible and the areas where investment will make the most change possible at the lowest cost. Enterprise software automates these services and processes. DBM enables change: Strategic adeptness requires tuning and/or the re-definition of the present business model. The business must begin with the principle that allows rapid tuning and/or re-definition of the underlying services and processes. This must apply at human and technological levels. Key success criteria Open modeling capabilities: Dynamic Business Modeling requires IT architecture and enterprise applications that automate the business model – not just a business model. Ease of modeling: Definition and automation of new and evolved business services and processes must be accessible at the business analysis level. Ideally the model and its services and processes are defined and stored in open, business-analyst-oriented data (for example, metadata). Open integration: Dynamic Business Modeling must work with processes and services (both automated and human) that are NOT dynamic. These fixed constraints are not external to the new business model but are part of its fabric. IT architecture and enterprise applications must be able to incorporate, embed and/or build upon these existing processes and services. Robustness: Regardless of the dynamism of the business model, the automated and human-based business processes and services must have all the robustness of long-standing static processes and services. Dynamic IT automation must have a full audit capability, reprocessing ability and standards compliance (e.g. PCI). Perpetual dynamism: Automation is never finished. Processes and services change and are added constantly. IT architecture and enterprise applications must be designed to prevent "lock down", where the service and process automation on "Day One" is so tightly coupled that only minor evolution is economic. SOA principles of openness and loose coupling must be applied inside business applications. Best practice DBM is service-based: The application should be based on the principle that processes and integration can be de-constructed internally into services. Services and processes are loosely coupled: Changing one should not impact the others. Services and process definitions are open: And accessible to a business analyst. Ideally definitions are kept in metadata. Application servers must be free of embedded business logic: For services, processes, data, workflows alike. Dynamic documentation is a feature: As the model evolves the documentation must evolve as well. The application should allow the business analyst to document at service level and then generate a cohesive document that encompasses the entire model. Business Analyst Interface is friendly and flexible: The application must provide a way to put the definition of services and processes in analyst terms – using universal concepts such as flows and tables. The interface should encourage documentation, warn on inconsistencies, and allow testing. Graphics References http://ralyx.inria.fr/2008/Raweb/triskell/triskell.pdf External links Gartner Dataquest Insight: Telecommunications BSS Software Solutions Can Help Other Industries Improve Efficiency Information systems Business software
Dynamic Business Modeling
Technology
1,237
11,818,111
https://en.wikipedia.org/wiki/Mycosphaerella%20eumusae
Mycosphaerella eumusae is a fungal disease of banana (Musa spp.), causing Eumusae leaf spot. Its symptoms are similar to black leaf streak (Black Sigatoka). M. eumusae is the predominant Mycosphaerella of banana in mainland Malaysia and in Thailand, and is present in Mauritius and Nigeria. Septoria eumusae is an anamorph of Mycosphaerella eumusae. References See also List of Mycosphaerella species eumusae Fungal plant pathogens and diseases Fungi described in 2000 Fungus species
Mycosphaerella eumusae
Biology
121
9,390,286
https://en.wikipedia.org/wiki/Perkins%204.236
The Perkins 4.236 is a diesel engine manufactured by Perkins Engines. First produced in 1964, over 70,000 units were produced in the first three years, and production increased to 60,000 units per annum. The engine was both innovative (using direct injection) and reliable, becoming a worldwide sales success over several decades. The Perkins 4.236 is rated at ASE (DIN), and is widely used in Massey Ferguson tractors, as well as other well-known industrial and agricultural machines, e.g. Clark, Manitou, JCB, Landini and Vermeer. The designation "4.236" The designation 4.236 arose as follows: "4" represents four cylinders, "236" represents , which is the total displacement of the engine. This logic can be used for most Perkins engine designations. Bore and stroke , for an overall displacement of . Applications The Massey Ferguson tractors that were originally fitted with this engine are: 168S, 175, 175S, (174 - Romanian model). Later came the 261, 265, 275, 365, 375, 384S. Volvo Trucks used this engine in their Snabbe and Trygge trucks beginning in 1967; they called it the D39. A now defunct American car manufacturer, Checker Motors Corporation of Kalamazoo, Mich., offered the 4.236 in their Checker Marathon, as an option in 1969 only. The Dodge 50 Series also received this engine, from July 1979 until July 1987 as the 4.236, and between July 1986 and July 1987 in turbocharged T38 specification. It was also fitted as an option for Renault 50 Series vehicles. In Brazil, the locally developed Puma trucks received the Perkins 4.236 engine, with a maximum of DIN. Brazilian versions of the Chevrolet C/K series also relied on the Perkins 4.236 throughout the 1980s as their only diesel option. The Vermeer BC1250 brush chipper used this engine until the BC1250A replaced it. The BC1250A used the turbocharged version of the same engine. In the Republic of Korea, Hyundai Motor Company produced this engine under license from Perkins from 1977 to 1981; it was fitted to the Hyundai Bison truck (HD3000, HD5000) and designated the 'HD4236'. Long-term liveaboard sailors Bill & Laurel Cooper installed three Perkins 4.236 engines with three screws and stern gear into their 88' schooner-rigged Dutch barge, Hosanna. Having three engines (using just one on a calm canal, but engaging the other two in fast rivers or for manoeuvring) was still cheaper than having an equivalent single engine such as a Cummins or Volvo. Perkins Tightening Torques for 4.236 Specification Idle speed: 750 rpm, Rated speed: 2,000 rpm, Max. torque at 1,300 rpm Early models were fitted with a Lucas M50 electric starter and a Lucas dynamo charger. See also List of Perkins engines References Dodge 50 website Perkins engines Diesel engines by model Automobile engines Straight-four engines
Perkins 4.236
Technology
621
33,214,072
https://en.wikipedia.org/wiki/John%20Morton%20%28zoologist%29
John Edward Morton (18 July 1923 – 6 March 2011) was a biologist, scholar, theologian, and conservationist from New Zealand. Trained at Auckland University College and the University of London, he became the author of numerous books, papers, and newspaper columns. Morton researched New Zealand's ecology and marine life, and was a marine zoologist. He was also the presenter of the imported nature and science television programme, Our World. Early life Born in Morrinsville on 18 July 1923, Morton was the son of Ronald Bampton Morton. He was educated at Morrinsville District High School, and went on to study zoology at Auckland University College, graduating with the degree of MSc with first-class honours in 1948. In 1952 he completed his PhD, followed in 1959 by a DSc, at the University of London. During this time he was also a lecturer in the zoology department at the same university. Career On his return from London in the early 1960s, he became the first person to be appointed to the chair of the School of Zoology and Biological Sciences at the University of Auckland, a position he held from 1959 to 1988. He was considered at this time one of New Zealand's most talented up-and-coming academics, and was later regarded by many as one of New Zealand's greatest marine biologists. His teaching style and influence have been well-documented in A History of Biology at Auckland University 1883–1983. He believed in "humanising" complex scientific issues, and presenting them in laymen's language. Morton was also regarded as one of New Zealand's leading Christian academics and believed in a unified view of science and religion. He told The New Zealand Herald upon his retirement in 1988 that "I find that my scientific work has confirmed my Christian convictions. To me biology and theology complement each other." In his 1984 book Redeeming Creation he acknowledged the influence of the French palaeontologist Teilhard de Chardin in forming the teleological view he expounded in his academic life. Morton did much for conservation in New Zealand. In 1975, he was a leader in the establishment of New Zealand's first marine reserve, Cape Rodney-Okakari Point Marine Reserve (which is near Cape Rodney and Leigh and includes Te Hāwere-a-Maki / Goat Island). He led the conservation movement to a series of victories in the 1970s and 1980s, which saved the last of New Zealand's mainland native forests, Pureora, Whirinaki, Waitututu and South Westland from logging. He served on the Auckland Regional Authority from 1971 to 1974 for Takapuna, losing his re-election bid after switching his party affiliation to Labour. In 1989 he became a founding member of the New Labour Party, which in 1991 formed a coalition with other parties called the Alliance. Notable students of Morton's include Patricia Bergquist and John Croxall. Morton died at his home in Auckland on 6 March 2011. Honours and awards Fellow of the Royal Society of New Zealand (1969) Honorary Fellow of Linnean Society of London (HonFLS) Companion of the Queen's Service Order for public services in the 1986 Queen's Birthday Honours Winner of Wattie Book of the Year 1968, for The New Zealand sea shore, together with Michael C. Miller In 1965, malacologist Winston Ponder named the gastropod species Eatoniella mortoni after Morton. Selected bibliography Seashore ecology of New Zealand and the Pacific. John Edward Morton, Bruce William Hayward. Bateman, 2004. , . The shore ecology of Upolu – Western Samoa. Issue 31 of Leigh Lab. bulletin. 
John Edward Morton, Andrew Jeffs, Leigh Marine Laboratory. University of Auckland, 1993. Shore life between Fundy tides. John Edward Morton, J. C. Roff, Mary Beverley-Burton. Canadian Scholars Press, 1991. The shore ecology of the tropical Pacific. John Edward Morton. Unesco Regional Office for Science and Technology for South-East Asia, 1990. Christ, creation, and the environment. John Edward Morton. Anglican Communications, 1989. , . Marine molluscs: Opisthobranchia, Part 2. Richard Carden Willan, John Edward Morton, John Walsby, Leigh Marine Laboratory, University of Auckland, 1984. The sea shore ecology of Hong Kong. Brian Morton, John Edward Morton. The University of Hong Kong, 1983. . Marine molluscs: Amphineura, archaeogastropoda & pulmonata, Part 1. Issue 4 of Leigh Lab. bulletin. John Walsby, John Edward Morton, Leigh Marine Laboratory, University of Auckland, 1982. Molluscs. John Edward Morton. Hutchinson University Library, 1979. Seacoast in the seventies: the future of the New Zealand shoreline. John Edward Morton, David A. Thom, Ronald Harry Locker. Hodder and Stoughton, 1973. Man, science and God. John Edward Morton. Collins, 1972. The New Zealand sea shore. John Edward Morton, Michael C. Miller. Collins, 1968. References 1923 births 2011 deaths Alliance (New Zealand political party) politicians Alumni of the University of London Auckland regional councillors New Zealand Christian writers Companions of the Queen's Service Order Fellows of the Linnean Society of London Fellows of the Royal Society of New Zealand Marine zoologists NewLabour Party (New Zealand) politicians 20th-century New Zealand biologists New Zealand Labour Party politicians Theistic evolutionists University of Auckland alumni Academic staff of the University of Auckland Writers about religion and science People from Morrinsville People educated at Morrinsville College
John Morton (zoologist)
Biology
1,139
36,017,435
https://en.wikipedia.org/wiki/Anonymous%20blog
An anonymous blog is a blog without any acknowledged author or contributor. Anonymous bloggers may achieve anonymity through the simple use of a pseudonym, or through more sophisticated techniques such as layered encryption routing, manipulation of post dates, or posting only from publicly accessible computers. Motivations for posting anonymously include a desire for privacy or fear of retribution by an employer (e.g., in whistleblower cases), a government (in countries that monitor or censor online communication), or another group. Deanonymizing techniques Fundamentally, deanonymization can be divided into two categories: Social correlation compares known details about a person's life with the contents of an anonymous blog to look for similarities. If the author does not attempt to conceal their identity, social correlation is a very straightforward procedure: a simple correlation between the "anonymous" blogger's name, profession, lifestyle, etc., and the known person. Even if an author generally attempts to conceal their identity (by not providing their name, location, etc.), the blog can be deanonymized by correlating seemingly innocuous, general details. Technical identification determines the author's identity through the blog's technical details. In extreme cases, technical identification entails looking at the server logs, the Internet provider logs, and payment information associated with the domain name. These techniques may be used together. The order of techniques employed typically escalates from the social correlation techniques, which do not require the compliance of any outside authorities (e.g., Internet providers, server providers, etc.), to more technical identification. Types Just as a blog can be on any subject, so can an anonymous blog. Most fall into the following major categories: Political: A commentary on the political situation within a country, where being open may risk prosecution. Anonymous blogging can also add power to a political debate, such as in 2008 when blogger Eduwonkette, later revealed as Columbia University sociology graduate student Jennifer Jennings, successfully questioned New York Mayor Michael Bloomberg's takeover of New York schools. Revolutionary and counter-revolutionary: These can either be inspiring activity or counter activity, often against a violent state apparatus. For example, Salam Pax, the Baghdad blogger, wrote for The Guardian newspaper under a pseudonym that he could shed only when Saddam Hussein no longer ruled in Iraq. Similar bloggers appeared during the Arab Spring. Dissident: Dissident blogs may document life under an oppressive or secretive regime, while not actively promoting or inspiring revolutionary or counter-revolutionary action. Mosul Eye, which has described life under ISIL occupation in Mosul, Iraq, has been called one of the few reliable sources of information on life inside the city since it began in June 2014. Religious: Views and comments about religious view points and issues, perhaps questioning some written standpoints. Whistleblower: The whistleblower blog is a modern-day twist on the classical "insider spotting illegality" theme. This can cover all sectors or issues. Among the most notable is that by the Irish Red Cross head of the international department Noel Wardick, who highlighted that €162,000 in donations to the 2004 Indian Ocean earthquake and tsunami had sat in an account for over three years. 
After spending over €140,000 on private investigators and legal expenses to find the whistle blower, including court orders to obtain Wardick's identity from UPC and Google, the IRC disciplined and later dismissed Wardick. In 2010, an internal enquiry into Wardick's allegations found other such bank accounts, and proposals to overhaul the IRC's management were discussed in the Dáil on 15 December. Questions were answered by Tony Killeen, then the Minister of Defence. Wardick later successfully sued the IRC for unfair dismissal. Company insider: A company employee or insider reports on company operations and issues from within the organisation. The most famous is probably the Dooce.com blogger Heather Armstrong, who was fired for writing satirical accounts of her experiences at a dot-com startup on her personal blog, dooce.com. Community pressure: Written by a citizen of an area, on a particular subject, to bring about a change. In 2007, reporter and blogger Mike Stark came out in support of anonymous blogger Spocko, who was trying to bring what he called "violent commentary" on San Francisco area radio station KSFO to the attention of its advertisers. Experience/Customer Service: Most experience blogs focus on personal insights or views of customer service, frequently with dissatisfaction. Most anonymous experience blogs are written anonymously as they allow the customer/user to keep experiencing and using the service, and reporting/blogging, while nudging at a defined and appropriate level against the target organisation. Among these are Sarah Wu's/Mrs Q. "Fed Up With Lunch" blog, a chronicle of her experience as an adult eating Chicago area high school lunch every day for a year, which has now been turned into a book. Personal: The personal blog strays into personal life in ways that allow more risk taking and open in terms of detail. Hence, many of these blogs are sexual in nature, although many also exist for those with health problems and disabilities and how they see the world and cope with its challenges. Some of the latest personal blogs are seen by many as extended group therapy, covering issues including weight loss. Recently, anonymous blogging has moved into a more aggressive and active style, with organized crime groups such as the Mafia using anonymous blogs against mayors and local administrators in Italy. How online identity is determined IP addresses An IP address is a unique numerical label assigned to a computer connected to a computer network that uses the Internet Protocol for communication. The most popular implementation of the Internet Protocol would be the Internet (capitalized, to differentiate it from smaller internetworks). Internet Service Providers (ISPs) are allocated chunks of IP addresses by a Regional Internet registry, which they then assign to customers. However, ISPs do not have enough addresses to give the customers their own address. Instead, DHCP is used; a customer's device (typically a modem or router) is assigned an IP address from a pool of available addresses. It keeps that address for a certain amount of time (e.g., two weeks). If the device is still active at the end of the lease, it can renew its connection and keep the same IP address. Otherwise, the IP address is collected and added to the pool to be redistributed. Thus, IP addresses provide regional information (through Regional Internet registries) and, if the ISP has logs, specific customer information. 
While this does not prove that a specific person was the originator of a blog post (it could have been someone else using that customer's Internet, after all), it provides powerful circumstantial evidence. Word and character frequency analysis Character frequency analysis takes advantage of the fact that all individuals have a different vocabulary: if there is a large body of data that can be tied to an individual (for example, a public figure with an official blog), statistical analysis can be applied to both this body of data and an anonymous blog to see how similar they are. In this way, anonymous bloggers can tentatively be deanonymized. This is known as stylometry; adversarial stylometry is the study of techniques for resisting such stylistic identification. See also Anonymous P2P Anonymous web browsing List of anonymously published works Citizen journalism Mix network Anonymous remailer Tor (anonymity network) I2P References External links Computer Law and Security Report Volume 22 Issue 2, Pages 127-136 blogs, Lies and the Doocing by Sylvia Kierkegaard (2006) Legal Guide for bloggers by the Electronic Frontier Foundation Blog Personal Blogging Internet terminology Non-fiction genres Internet privacy Anonymity
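The character and word frequency analysis described above can be illustrated with a minimal sketch. This is an assumption-laden toy added for illustration: real stylometric attribution uses much larger reference corpora, richer features (function words, character n-grams) and statistical classifiers, whereas the snippet below only compares normalized word-frequency vectors with cosine similarity, and the two sample strings are placeholders.

```python
# Toy word-frequency comparison in the spirit of the stylometry discussion above.
from collections import Counter
import math
import re

def word_frequencies(text):
    """Normalized word frequencies for a piece of text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def cosine_similarity(freq_a, freq_b):
    """Cosine similarity between two sparse frequency vectors."""
    shared = set(freq_a) & set(freq_b)
    dot = sum(freq_a[w] * freq_b[w] for w in shared)
    norm_a = math.sqrt(sum(v * v for v in freq_a.values()))
    norm_b = math.sqrt(sum(v * v for v in freq_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

known_writing = "Placeholder sample of text from a named author's public writing."
anonymous_post = "Placeholder sample of text taken from the anonymous blog."

score = cosine_similarity(word_frequencies(known_writing),
                          word_frequencies(anonymous_post))
print(f"vocabulary similarity: {score:.2f}")   # higher = more similar word usage
```

A high similarity score is only circumstantial, much like the IP-address evidence discussed earlier; in practice it would be combined with other correlations before drawing any conclusion.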
Anonymous blog
Technology
1,606
38,218,032
https://en.wikipedia.org/wiki/Space%20mapping
The space mapping methodology for modeling and design optimization of engineering systems was first discovered by John Bandler in 1993. It uses relevant existing knowledge to speed up model generation and design optimization of a system. The knowledge is updated with new validation information from the system when available. Concept The space mapping methodology employs a "quasi-global" formulation that intelligently links companion "coarse" (ideal or low-fidelity) and "fine" (practical or high-fidelity) models of different complexities. In engineering design, space mapping aligns a very fast coarse model with the expensive-to-compute fine model so as to avoid direct expensive optimization of the fine model. The alignment can be done either off-line (model enhancement) or on-the-fly with surrogate updates (e.g., aggressive space mapping). Methodology At the core of the process is a pair of models: one very accurate but too expensive to use directly with a conventional optimization routine, and one significantly less expensive and, accordingly, less accurate. The latter (fast model) is usually referred to as the "coarse" model (coarse space). The former (slow model) is usually referred to as the "fine" model. A validation space ("reality") represents the fine model, for example, a high-fidelity physics model. The optimization space, where conventional optimization is carried out, incorporates the coarse model (or surrogate model), for example, the low-fidelity physics or "knowledge" model. In a space-mapping design optimization phase, there is a prediction or "execution" step, where the results of an optimized "mapped coarse model" (updated surrogate) are assigned to the fine model for validation. After the validation process, if the design specifications are not satisfied, relevant data is transferred to the optimization space ("feedback"), where the mapping-augmented coarse model or surrogate is updated (enhanced, realigned with the fine model) through an iterative optimization process termed "parameter extraction". The mapping formulation itself incorporates "intuition", part of the engineer's so-called "feel" for a problem. In particular, the Aggressive Space Mapping (ASM) process displays key characteristics of cognition (an expert's approach to a problem), and is often illustrated in simple cognitive terms. Development Following John Bandler's concept in 1993, algorithms have utilized Broyden updates (aggressive space mapping), trust regions, and artificial neural networks. Developments include implicit space mapping, in which we allow preassigned parameters not used in the optimization process to change in the coarse model, and output space mapping, where a transformation is applied to the response of the model. A 2004 paper reviews the state of the art after the first ten years of development and implementation. Tuning space mapping utilizes a so-called tuning model—constructed invasively from the fine model—as well as a calibration process that translates the adjustment of the optimized tuning model parameters into relevant updates of the design variables. The space mapping concept has been extended to neural-based space mapping for large-signal statistical modeling of nonlinear microwave devices. Space mapping is supported by sound convergence theory and is related to the defect-correction approach. A 2016 state-of-the-art review is devoted to aggressive space mapping. It spans two decades of development and engineering applications. 
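As a rough illustration of the aggressive space mapping loop described above (coarse-model optimization, parameter extraction, and a Broyden-updated quasi-Newton step), the sketch below uses two toy one-peak "models". It is not drawn from the cited literature; the model functions, starting points and iteration counts are invented for illustration, and in a real design the fine model would be an expensive electromagnetic or multiphysics simulation.

```python
# Illustrative aggressive space mapping (ASM) sketch with toy fine/coarse models.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 1.0, 50)                 # sweep variable (e.g. frequency)

def fine_model(x):                            # "expensive", accurate model (toy)
    return x[1] * np.exp(-(t - x[0])**2 / 0.02)

def coarse_model(x):                          # cheap, shifted/scaled surrogate (toy)
    return 1.1 * x[1] * np.exp(-(t - x[0] - 0.05)**2 / 0.02)

target = fine_model(np.array([0.55, 1.0]))    # desired response

def fit(model, response, x0):
    """Least-squares alignment of a model's output to a given response."""
    obj = lambda x: np.sum((model(x) - response)**2)
    return minimize(obj, x0, method="Nelder-Mead").x

# Step 1: optimize the cheap coarse model against the design target.
x_c_star = fit(coarse_model, target, np.array([0.5, 1.0]))

# Step 2: ASM iteration -- drive the parameter-extraction result toward x_c_star.
x_f = x_c_star.copy()                         # initial fine-model design
B = np.eye(2)                                 # Broyden estimate of the mapping Jacobian
for _ in range(10):
    g = fit(coarse_model, fine_model(x_f), x_f) - x_c_star   # parameter extraction misalignment
    if np.linalg.norm(g) < 1e-6:
        break
    h = -np.linalg.solve(B, g)                # quasi-Newton step in fine space
    x_f = x_f + h
    g_new = fit(coarse_model, fine_model(x_f), x_f) - x_c_star
    B = B + np.outer(g_new - g - B @ h, h) / (h @ h)          # Broyden update

print("fine-model design:", x_f)
```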
A comprehensive 2021 review paper discusses space mapping in the context of radio frequency and microwave design optimization; in the context of engineering surrogate model, feature-based and cognition-driven design; and in the context of machine learning, intuition, and human intelligence. The space mapping methodology can also be used to solve inverse problems. Proven techniques include the Linear Inverse Space Mapping (LISM) algorithm, as well as the Space Mapping with Inverse Difference (SM-ID) method. Category Space mapping optimization belongs to the class of surrogate-based optimization methods, that is to say, optimization methods that rely on a surrogate model. Applications The space mapping technique has been applied in a variety of disciplines including microwave and electromagnetic design, civil and mechanical applications, aerospace engineering, and biomedical research. Some examples: Optimizing aircraft wing curvature Automotive crashworthiness design EEG source analysis Handset antenna optimization Design centering of microwave circuits Design of electric machines using multi-physical modeling Control of partial differential equations. Voice coil actuator design Reconstruction of local magnetic properties Structural optimization Design of microwave filters and multiplexers Optimization of delay structures Power electronics Signal integrity Civil engineering Simulators Various simulators can be involved in a space mapping optimization and modeling processes. In the microwave and radio frequency (RF) area Keysight ADS Keysight Momentum Ansys HFSS CST Microwave Studio FEKO Sonnet em Conferences Three international workshops have focused significantly on the art, the science and the technology of space mapping. First International Workshop on Surrogate Modelling and Space Mapping for Engineering Optimization (Lyngby, Denmark, Nov. 2000) Second International Workshop on Surrogate Modelling and Space Mapping for Engineering Optimization (Lyngby, Denmark, Nov. 2006) Third International Workshop on Surrogate Modelling and Space Mapping for Engineering Optimization (Reykjavik, Iceland, Aug. 2012) Terminology There is a wide spectrum of terminology associated with space mapping: ideal model, coarse model, coarse space, fine model, companion model, cheap model, expensive model, surrogate model, low fidelity (resolution) model, high fidelity (resolution) model, empirical model, simplified physics model, physics-based model, quasi-global model, physically expressive model, device under test, electromagnetics-based model, simulation model, computational model, tuning model, calibration model, surrogate model, surrogate update, mapped coarse model, surrogate optimization, parameter extraction, target response, optimization space, validation space, neuro-space mapping, implicit space mapping, output space mapping, port tuning, predistortion (of design specifications), manifold mapping, defect correction, model management, multi-fidelity models, variable fidelity/variable complexity, multigrid method, coarse grid, fine grid, surrogate-driven, simulation-driven, model-driven, feature-based modeling. See also References Electromagnetic radiation Optimization algorithms and methods Microwave technology Mathematical modeling
Space mapping
Physics,Mathematics
1,271
42,807,835
https://en.wikipedia.org/wiki/Coexistence%20theory
Coexistence theory is a framework to understand how competitor traits can maintain species diversity and stave off competitive exclusion even among similar species living in ecologically similar environments. Coexistence theory explains the stable coexistence of species as an interaction between two opposing forces: fitness differences between species, which should drive the best-adapted species to exclude others within a particular ecological niche, and stabilizing mechanisms, which maintain diversity via niche differentiation. For many species to be stabilized in a community, population growth must be negative density-dependent, i.e. each species tends to increase in abundance when its population declines to low density. In such communities, any species that becomes rare will experience positive growth, pushing its population to recover and making local extinction unlikely. As the population of one species declines, individuals of that species tend to compete predominantly with individuals of other species. Thus, the tendency of a population to recover as it declines in density reflects reduced intraspecific competition (within-species) relative to interspecific competition (between-species), the signature of niche differentiation (see Lotka-Volterra competition). Types of coexistence mechanisms Two qualitatively different processes can help species to coexist: a reduction in average fitness differences between species or an increase in niche differentiation between species. These two factors have been termed equalizing and stabilizing mechanisms, respectively. For species to coexist, any fitness differences that are not reduced by equalizing mechanisms must be overcome by stabilizing mechanisms. Equalizing mechanisms Equalizing mechanisms reduce fitness differences between species. As the name implies, these processes push the competitive abilities of multiple species closer together. Equalizing mechanisms affect interspecific competition (the competition between individuals of different species). For example, when multiple species compete for the same resource, competitive ability is determined by the minimum level of resources a species needs to maintain itself (known as an R*, or equilibrium resource density). Thus, the species with the lowest R* is the best competitor and excludes all other species in the absence of any niche differentiation. Any factor that reduces the difference in R* between species (like increased harvest of the dominant competitor) is classified as an equalizing mechanism. Environmental variation (which is the focus of the Intermediate Disturbance Hypothesis) can be considered an equalizing mechanism. Since the fitness of a given species is intrinsically tied to a specific environment, when that environment is disturbed (e.g. through storms, fires, volcanic eruptions, etc.) some species may lose components of their competitive advantage which were useful in the previous version of the environment. Stabilizing mechanisms Stabilizing mechanisms promote coexistence by concentrating intraspecific competition relative to interspecific competition. In other words, these mechanisms "encourage" an individual to compete more with other individuals of its own species, rather than with individuals of other species. Resource partitioning (a type of niche differentiation) is a stabilizing mechanism because interspecific competition is reduced when different species primarily compete for different resources.
Similarly, if species are differently affected by environmental variation (e.g., soil type, rainfall timing, etc.), this can create a stabilizing mechanism (see the storage effect). Stabilizing mechanisms increase the low-density growth rate of all species. Chesson's categories of stabilizing mechanisms In 1994, Chesson proposed that all stabilizing mechanisms could be categorized into four categories. These mechanisms are not mutually exclusive, and it is possible for all four to operate in any environment at a given time. Variation-independent mechanisms (also called fluctuation-independent mechanisms) are any stabilizing mechanism that functions within a local place and time. Resource partitioning, predator partitioning, and frequency-dependent predation are three classic examples of variation-independent mechanisms. When a species is at very low density, individuals gain an advantage, because they are less constrained by competition across the landscape. For example, under frequency-dependent predation, a species is less likely to be consumed by predators when they are very rare. The storage effect occurs when species are affected differently by environmental variation in space or time. For example, coral reef fishes have different reproductive rates in different years, plants grow differently in different soil types, and desert annual plants germinate at different rates in different years. When a species is at low density, individuals gain an advantage because they experience less competition in times or locations that they grow best. For example, if annual plants germinate in different years, then when it is a good year to germinate, species will be competing predominately with members of the same species. Thus, if a species becomes rare, individuals will experience little competition when they germinate whereas they would experience high competition if they were abundant. For the storage effect to function, species must be able to "store" the benefits of a productive time period or area, and use it to survive during less productive times or areas. This can occur, for example, if species have a long-lived adult stage, a seed bank or diapause stage, or if they are spread out over the environment. A fitness-density covariance occurs when species are spread out non-uniformly across the landscape. Most often, it occurs when species are found in different areas. For example, mosquitoes often lay eggs in different locations, and plants who partition habitat are often found predominately where they grow best. Species can gain two possible advantages by becoming very rare. First, because they are physically separated from other species, they mainly compete with members of the same species (and thus experience less competition when they become very rare). Second, species are often more able to concentrate in favorable habitat as their densities decline. For example, if individuals are territorial, then members of an abundant species may not have access to ideal habitat; however, when that species becomes very rare, then there may be enough ideal habitat for all of the few remaining individuals. The Janzen-Connell hypothesis is an excellent example of a stabilizing mechanism that operates (in part) through fitness-density covariance. Relative nonlinearity occurs when species benefit in different ways from variation in competitive factors. For example, two species might coexist if one can grow better when resources are rare, and the other grows better when resources are abundant. 
Species will be able to coexist if the species which benefits from variation in resources tends to reduce variation in resources. For example, a species which can rapidly consume excess resources tends to quickly reduce the level of excess resources favoring the other species, whereas a species which grows better when resources are rare is more likely to cause fluctuations in resource density favoring the other species. Quantifying stabilizing mechanisms A general way of measuring the effect of stabilizing mechanisms is by calculating the growth rate of species i in a community as where: is the long-term average growth rate of the species i when at low density. Because species are limited from growing indefinitely, viable populations have an average long-term growth rate of zero. Therefore, species at low-density can increase in abundance when their long-term average growth rate is positive. is a species-specific factor that reflects how quickly species i responds to a change in competition. For example, species with faster generation times may respond more quickly to a change in resource density than longer lived species. In an extreme scenario, if ants and elephants were to compete for the same resources, elephant population sizes would change much more slowly to changes in resource density than would ant populations. is the difference between the fitness of species i when compared to the average fitness of the community excluding species i. In the absence of any stabilizing mechanisms, species i will only have a positive growth rate if its fitness is above its average competitor, i.e. where this value is greater than zero. measures the effect of all stabilizing mechanisms acting within this community. Example calculation: Species competing for resource In 2008 Chesson and Kuang showed how to calculate fitness differences and stabilizing mechanisms when species compete for shared resources and competitors. Each species j captures resource type l at a species-specific rate, cjl. Each unit of resource captured contributes to species growth by value vl. Each consumer requires resources for the metabolic maintenance at rate μi. In conjunction, consumer growth is decreased by attack from predators. Each predator species m attacks species j at rate ajm. Given predation and resource capture, the density of species i, Ni, grows at rate where l sums over resource types and m sums over all predator species. Each resource type exhibits logistic growth with intrinsic rate of increase, rRl, and carrying capacity, KRl = 1/αRl, such that growth rate of resource l is Similarly, each predator species m exhibits logistic growth in the absence of the prey of interest with intrinsic growth rate rPm and carrying capacity KPm = 1/αPm. The growth rate of a predator species is also increased by consuming prey species where again the attack rate of predator species m on prey j is ajm. Each unit of prey has a value to predator growth rate of w. Given these two sources of predator growth, the density of predator m, Pm, has a per-capita growth rate where the summation terms is contributions to growth from consumption over all j focal species. The system of equations describes a model of trophic interactions between three sets of species: focal species, their resources, and their predators. 
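The display equations referred to in this section did not survive extraction. The block below is a hedged LaTeX reconstruction assembled from the definitions given in the surrounding prose together with the commonly published forms; the exact normalizations of the average fitness κ_j and niche overlap ρ used by Chesson and Kuang (2008) are omitted, and the consumption term in the resource equation is implied rather than stated in the text, so treat this as an illustration rather than a verbatim restoration.

```latex
% Partitioned low-density growth rate of species i (one common way to write it):
\bar{r}_i = b_i \left( f_i - \bar{f} + A \right)
% \bar{r}_i: long-term average growth rate of species i at low density
% b_i:       species-specific speed of response to competition
% f_i - \bar{f}: fitness of species i relative to the community average
% A:         combined effect of all stabilizing mechanisms

% Per-capita dynamics of a focal consumer j, resource l and predator m,
% following the rates named in the prose (c_{jl}, v_l, \mu_j, a_{jm}, r_{Rl},
% \alpha_{Rl}, r_{Pm}, \alpha_{Pm}, w):
\frac{1}{N_j}\frac{dN_j}{dt} = \sum_l v_l\, c_{jl}\, R_l \;-\; \sum_m a_{jm} P_m \;-\; \mu_j
\frac{1}{R_l}\frac{dR_l}{dt} = r_{Rl}\,(1 - \alpha_{Rl} R_l) \;-\; \sum_j c_{jl} N_j
\frac{1}{P_m}\frac{dP_m}{dt} = r_{Pm}\,(1 - \alpha_{Pm} P_m) \;+\; w \sum_j a_{jm} N_j

% Standard two-species coexistence criterion in terms of niche overlap \rho and
% average fitnesses \kappa_1, \kappa_2 (the criterion referred to below):
\rho \;<\; \frac{\kappa_1}{\kappa_2} \;<\; \frac{1}{\rho}
```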
Given this model, the average fitness of a species j and its sensitivity to competition and predation can both be calculated. The average fitness of a species takes into account growth based on resource capture and predation, as well as how much resource and predator densities change from interactions with the focal species. The amount of niche overlap, ρ, between two competitors i and j represents the degree to which resource consumption and predator attack are linearly related between the two competing species. This model's conditions for coexistence can be directly related to the general coexistence criterion: intraspecific competition, αjj, must be greater than interspecific competition, αij. Direct expressions for the intraspecific and interspecific competition coefficients can be derived from the interaction between shared predators and resources. Thus, the species coexist when intraspecific competition is greater than interspecific competition, which for two species leads to the coexistence criterion that the ratio of the two species' average fitnesses must lie between ρ and 1/ρ. Notice that, in the absence of any niche differences (i.e. ρ = 1), species cannot coexist. Empirical evidence A 2012 study reviewed different approaches which tested coexistence theory, and identified three main ways to separate the contributions of stabilizing and equalizing mechanisms within a community. These are: Experimental manipulations, which involve determining the effect of relative fitness or stabilizing mechanisms by manipulating resources or competitive advantages. Trait-Phylogeny-Environment relationships, in which the phylogeny of members of a set of communities can be tested for evidence of trait clustering, which would suggest that certain traits are important (and perhaps necessary) to thrive in that environment, or trait overdispersion, which would suggest a high ability of species to exclude close relatives. Such tests have been widely used, although they have also been criticized as simplistic and flawed. Demographic analyses, which can be used to recognize frequency- or density-dependent processes simply by measuring the number and per-capita growth rates of species in natural communities over time. If such processes are operating, the per-capita growth rate would vary with the number of individuals in the species comprising the community. A 2010 review argued that an invasion analysis should be used as the critical test of coexistence. In an invasion analysis, one species (termed the "invader") is removed from the community, and then reintroduced at a very low density. If the invader shows positive population growth, then it cannot be excluded from the community. If every species has a positive growth rate as an invader, then those species can stably coexist. An invasion analysis could be performed using experimental manipulation, or by parameterizing a mathematical model. The authors argued that in the absence of a full-scale invasion analysis, studies could show some evidence for coexistence by showing that a trade-off produced negative density-dependence at the population level. The authors reviewed 323 papers (from 1972 to May 2009), and claimed that only 10 of them met the above criteria (7 performing an invasion analysis, and 3 showing some negative density-dependence). However, an important caveat is that invasion analysis may not always be sufficient for identifying stable coexistence. For example, priority effects or Allee effects may prevent species from successfully invading a community from low density even if they could persist stably at a higher density.
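As a concrete illustration of such an invasion analysis, the following is a minimal Python sketch (not drawn from the studies cited here): it checks mutual invasibility in a simple two-species Lotka-Volterra competition model. The parameter values, function names, and the niche-overlap expression (the standard Lotka-Volterra form rather than the consumer-resource expression discussed above) are illustrative assumptions.

import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical Lotka-Volterra competition parameters (illustrative only).
r = np.array([1.0, 0.8])               # intrinsic growth rates
alpha = np.array([[1.0, 0.6],          # alpha[i, j]: per-capita effect of species j on species i
                  [0.7, 1.2]])

def lv(t, N):
    # Per-capita growth declines linearly with the weighted density of competitors.
    return r * N * (1.0 - alpha @ N)

def invasion_growth_rate(invader):
    # Grow the resident alone to its equilibrium, then measure the invader's
    # per-capita growth rate when (re)introduced at effectively zero density.
    resident = 1 - invader
    N0 = np.zeros(2)
    N0[resident] = 0.01
    sol = solve_ivp(lv, (0.0, 500.0), N0, rtol=1e-8, atol=1e-10)
    N_res = sol.y[resident, -1]
    return r[invader] * (1.0 - alpha[invader, resident] * N_res)

# Mutual invasibility: both species must grow when rare against the other resident.
growth_rates = [invasion_growth_rate(i) for i in (0, 1)]
rho = np.sqrt(alpha[0, 1] * alpha[1, 0] / (alpha[0, 0] * alpha[1, 1]))  # niche overlap
print("invasion growth rates:", growth_rates)
print("niche overlap rho:", rho)
print("stable coexistence predicted:", all(g > 0 for g in growth_rates))

With these hypothetical parameters, intraspecific competition exceeds interspecific competition for both species, so both invasion growth rates come out positive and stable coexistence is predicted.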
Conversely, higher-order interactions in communities with many species can lead to complex dynamics following an initially successful invasion, potentially preventing the invader from persisting stably in the long term. For example, an invader that can only persist when a particular resident species is present at high density could alter community structure following invasion such that the resident species' density declines or it goes locally extinct, thereby preventing the invader from successfully establishing in the long term. Neutral theory and coexistence theory The 2001 neutral theory by Stephen P. Hubbell attempts to model biodiversity through a migration-speciation-extinction balance, rather than through selection. It assumes that all members within a guild are inherently the same, and that changes in population density are a result of random births and deaths. Particular species are lost stochastically through a random walk process, but species richness is maintained via speciation or external migration. Neutral theory can be seen as a particular case of coexistence theory: it represents an environment where stabilizing mechanisms are absent and there are no differences in average fitness among species. It has been hotly debated how close real communities are to neutrality. Few studies have attempted to measure fitness differences and stabilizing mechanisms in plant communities, for example in 2009 or in 2015. These communities appear to be far from neutral, and in some cases, stabilizing effects greatly outweigh fitness differences. Cultural coexistence theory Cultural Coexistence Theory (CCT), also called Social-ecological Coexistence Theory, expands on coexistence theory to explain how groups of people with shared interests in natural resources (e.g., a fishery) can come to coexist sustainably. Cultural Coexistence Theory draws on work by anthropologists such as Frederik Barth and John Bennett, both of whom studied the interactions among culture groups on shared landscapes. In addition to the core ecological concepts described above, which CCT summarizes as limited similarity, limited competition, and resilience, CCT argues the following features are essential for cultural coexistence: Adaptability describes the ability of people to respond to change or surprise. It is essential to CCT because it helps capture the importance of human agency. Pluralism describes a situation in which people value cultural diversity and recognize the fundamental rights of people not like them to live in the same places and access shared resources. Equity as used in CCT describes whether social institutions exist that ensure that people's basic human rights, including the ability to meet basic needs, are protected, and whether people are protected from being marginalized in society. Cultural Coexistence Theory fits within the broader areas of sustainability science, common-pool resources theory, and conflict theory. References Ecology Ecological theories Community ecology Theoretical ecology
Coexistence theory
Biology
3,209
3,032,314
https://en.wikipedia.org/wiki/History%20of%20the%20camera
The history of the camera began even before the introduction of photography. Cameras evolved from the camera obscura through many generations of photographic technology (daguerreotypes, calotypes, dry plates, film) to the modern day with digital cameras and camera phones. Camera obscura (pre-17th century) The camera obscura (from the Latin for 'dark room') is a natural optical phenomenon and precursor of the photographic camera. It projects an inverted image (flipped left to right and upside down) of a scene from the other side of a screen or wall through a small aperture onto a surface opposite the opening. The earliest documented explanation of this principle comes from the Chinese philosopher Mozi, who correctly argued that the inversion of the camera obscura image is a result of light traveling in straight lines from its source. From around 1550, lenses were used in the openings of walls or closed window shutters in dark rooms to project images, aiding in drawing. By the late 17th century, portable camera obscura devices in tents and boxes had come into use as drawing tools. The images produced by these early cameras could only be preserved by manually tracing them, as no photographic processes had been invented yet. The first cameras were large enough to accommodate one or more people, and over time they evolved into increasingly compact models. By the time of Niépce, portable box camera obscurae suitable for photography were widely available. In 1685, Johann Zahn envisioned a camera that was small and portable enough for practical photography, but it took nearly 150 years for such an application to become possible. Ibn al-Haytham (c. 965–1040), an Arab physicist also known as Alhazen, made significant contributions to the understanding of the camera obscura, conducting experiments with light in a darkened room with a small opening. He is often credited with the invention of the pinhole camera. He also provided the first correct analysis of the camera obscura, offering the first geometrical and quantitative descriptions of the phenomenon, and was the first to use a screen in a dark room so that an image projected through a hole in the surface could be viewed on it. He was the first to understand the relationship between the focal point and the pinhole, and was the pioneer of early afterimage experiments. The work of Ibn al-Haytham on optics, circulated through Latin translations, played a significant role in inspiring notable individuals such as Witelo, John Peckham, Roger Bacon, Leonardo da Vinci, René Descartes, and Johannes Kepler. Early photographic camera (18th–19th centuries) Before the development of the photographic camera, it had been known for hundreds of years that some substances, such as silver salts, darkened when exposed to sunlight. In a series of experiments published in 1727, the German scientist Johann Heinrich Schulze demonstrated that the darkening of the salts was due to light alone, and not influenced by heat or exposure to air. The Swedish chemist Carl Wilhelm Scheele showed in 1777 that silver chloride was especially susceptible to darkening from light exposure, and that once darkened, it becomes insoluble in an ammonia solution. The first person to use this chemistry to create images was Thomas Wedgwood.
To create images, Wedgwood placed items, such as leaves and insect wings, on ceramic pots coated with silver nitrate, and exposed the set-up to light. These images were not permanent, however, as Wedgwood did not employ a fixing mechanism. He ultimately failed in his goal of using the process to fix images created by a camera obscura. The first permanent photograph of a camera image was made in 1826 by Nicéphore Niépce using a sliding wooden box camera made by Charles and Vincent Chevalier in Paris. Niépce had been experimenting with ways to fix the images of a camera obscura since 1816. The photograph Niépce succeeded in creating shows the view from his window. It was made using an 8-hour exposure on pewter coated with bitumen. Niépce called his process "heliography". Niépce corresponded with the inventor Louis Daguerre, and the pair entered into a partnership to improve the heliographic process. Niépce had experimented further with other chemicals to improve contrast in his heliographs. Daguerre contributed an improved camera obscura design, but the partnership ended when Niépce died in 1833. Daguerre succeeded in developing a high-contrast and extremely sharp image by exposing a plate coated with silver iodide and then exposing this plate to mercury vapor. By 1837, he was able to fix the images with a common salt solution. He called this process the daguerreotype, and tried unsuccessfully for a couple of years to commercialize it. Eventually, with the help of the scientist and politician François Arago, the French government acquired Daguerre's process for public release. In exchange, pensions were provided to Daguerre as well as to Niépce's son, Isidore. In the 1830s, the English scientist William Henry Fox Talbot independently invented a process to capture camera images using silver salts. Although dismayed that Daguerre had beaten him to the announcement of photography, he submitted a pamphlet to the Royal Institution entitled Some Account of the Art of Photogenic Drawing on 31 January 1839, which was the first published description of photography. Within two years, Talbot developed a two-step process for creating photographs on paper, which he called calotypes. The calotype process was the first to utilize negative printing, which reverses all values in the reproduction process: black shows up as white and vice versa. Negative printing allows, in principle, an unlimited number of positive prints to be made from the original negative. The calotype process also introduced the ability for a printmaker to alter the resulting image through retouching of the negative. Calotypes were never as popular or widespread as daguerreotypes, owing mainly to the fact that the latter produced sharper details. However, because daguerreotypes only produce a direct positive print, no duplicates can be made. It is the two-step negative/positive process that formed the basis for modern photography. The first photographic camera developed for commercial manufacture was a daguerreotype camera, built by Alphonse Giroux in 1839. Giroux signed a contract with Daguerre and Isidore Niépce to produce the cameras in France, with each device and accessories costing 400 francs. The camera was a double-box design, with a landscape lens fitted to the outer box, and a holder for a ground glass focusing screen and image plate on the inner box. By sliding the inner box, objects at various distances could be brought to as sharp a focus as desired.
After a satisfactory image had been focused on the screen, the screen was replaced with a sensitized plate. A knurled wheel controlled a copper flap in front of the lens, which functioned as a shutter. The early daguerreotype cameras required long exposure times, which in 1839 could be from 5 to 30 minutes. After the introduction of the Giroux daguerreotype camera, other manufacturers quickly produced improved variations. Charles Chevalier, who had earlier provided Niépce with lenses, created in 1841 a double-box camera using a half-sized plate for imaging. Chevalier's camera had a hinged bed, allowing for half of the bed to fold onto the back of the nested box. In addition to having increased portability, the camera had a faster lens, bringing exposure times down to 3 minutes, and a prism at the front of the lens, which allowed the image to be laterally correct. Another French design emerged in 1841, created by Marc Antoine Gaudin. The Nouvel Appareil Gaudin camera had a metal disc with three differently-sized holes mounted on the front of the lens. Rotating to a different hole effectively provided variable f-stops, allowing different amounts of light into the camera. Instead of using nested boxes to focus, the Gaudin camera used nested brass tubes. In Germany, Peter Friedrich Voigtländer designed an all-metal camera with a conical shape that produced circular pictures of about 3 inches in diameter. The distinguishing characteristic of the Voigtländer camera was its use of a lens designed by Joseph Petzval. The Petzval lens was nearly 30 times faster than any other lens of the period, and was the first to be made specifically for portraiture. Its design was the most widely used for portraits until Carl Zeiss introduced the anastigmat lens in 1889. Within a decade of being introduced in America, 3 general forms of camera were in popular use: the American- or chamfered-box camera, the Robert's-type camera or "Boston box", and the Lewis-type camera. The American-box camera had beveled edges at the front and rear, and an opening in the rear where the formed image could be viewed on ground glass. The top of the camera had hinged doors for placing photographic plates. Inside there was one available slot for distant objects, and another slot in the back for close-ups. The lens was focused either by sliding or with a rack and pinion mechanism. The Robert's-type cameras were similar to the American-box, except for having a knob-fronted worm gear on the front of the camera, which moved the back box for focusing. Many Robert's-type cameras allowed focusing directly on the lens mount. The third popular daguerreotype camera in America was the Lewis-type, introduced in 1851, which utilized a bellows for focusing. The main body of the Lewis-type camera was mounted on the front box, but the rear section was slotted into the bed for easy sliding. Once focused, a set screw was tightened to hold the rear section in place. Having the bellows in the middle of the body facilitated making a second, in-camera copy of the original image. Daguerreotype cameras formed images on silvered copper plates and images were only able to develop with mercury vapor. The earliest daguerreotype cameras required several minutes to half an hour to expose images on the plates. By 1840, exposure times were reduced to just a few seconds owing to improvements in the chemical preparation and development processes, and to advances in lens design. 
American daguerreotypists introduced manufactured plates in mass production, and plate sizes became internationally standardized: whole plate (6.5 × 8.5 inches), three-quarter plate (5.5 × 7 1/8 inches), half plate (4.5 × 5.5 inches), quarter plate (3.25 × 4.25 inches), sixth plate (2.75 × 3.25 inches), and ninth plate (2 × 2.5 inches). Plates were often cut to fit cases and jewelry with circular and oval shapes. Larger plates were produced, with sizes such as 9 × 13 inches ("double-whole" plate), or 13.5 × 16.5 inches (Southworth & Hawes' plate). The collodion wet plate process that gradually replaced the daguerreotype during the 1850s required photographers to coat and sensitize thin glass or iron plates shortly before use and expose them in the camera while still wet. Early wet plate cameras were very simple and little different from daguerreotype cameras, but more sophisticated designs eventually appeared. The Dubroni of 1864 allowed the sensitizing and developing of the plates to be carried out inside the camera itself rather than in a separate darkroom. Other cameras were fitted with multiple lenses for photographing several small portraits on a single larger plate, useful when making cartes de visite. It was during the wet plate era that the use of bellows for focusing became widespread, making the bulkier and less easily adjusted nested box design obsolete. For many years, exposure times were long enough that the photographer simply removed the lens cap, counted off the number of seconds (or minutes) estimated to be required by the lighting conditions, then replaced the cap. As more sensitive photographic materials became available, cameras began to incorporate mechanical shutter mechanisms that allowed very short and accurately timed exposures to be made. The use of photographic film, pioneered by George Eastman from 1885 and described in more detail below, also made possible the capture of motion (cinematography), establishing the movie industry by the end of the 19th century. Early fixed images The first partially successful photograph of a camera image was made in approximately 1816 by Nicéphore Niépce, using a very small camera of his own making and a piece of paper coated with silver chloride, which darkened where it was exposed to light. No means of removing the remaining unaffected silver chloride was known to Niépce, so the photograph was not permanent, eventually becoming entirely darkened by the overall exposure to light necessary for viewing it. In the mid-1820s, Niépce used a wooden box camera made by Parisian opticians Charles and Vincent Chevalier to experiment with photography on surfaces thinly coated with Bitumen of Judea. The bitumen slowly hardened in the brightest areas of the image. The unhardened bitumen was then dissolved away. One of those photographs has survived.
Daguerreotypes and calotypes After Niépce's death in 1833, his partner Louis Daguerre continued to experiment and by 1837 had created the first practical photographic process, which he named the daguerreotype and publicly unveiled in 1839. Daguerre treated a silver-plated sheet of copper with iodine vapor to give it a coating of light-sensitive silver iodide. After exposure in the camera, the image was developed by mercury vapor and fixed with a strong solution of ordinary salt (sodium chloride). Henry Fox Talbot perfected a different process, the calotype, in 1840. As commercialized, both processes used very simple cameras consisting of two nested boxes. The rear box had a removable ground glass screen and could slide in and out to adjust the focus. After focusing, the ground glass was replaced with a light-tight holder containing the sensitized plate or paper and the lens was capped. Then the photographer opened the front cover of the holder, uncapped the lens, and counted off as many minutes as the lighting conditions seemed to require before replacing the cap and closing the holder. Despite this mechanical simplicity, high-quality achromatic lenses were standard. Dry plates Collodion dry plates had been available since 1857, thanks to the work of Désiré van Monckhoven, but it was not until the invention of the gelatin dry plate in 1871 by Richard Leach Maddox that the wet plate process could be rivaled in quality and speed. The 1878 discovery that heat-ripening a gelatin emulsion greatly increased its sensitivity finally made so-called "instantaneous" snapshot exposures practical. For the first time, a tripod or other support was no longer an absolute necessity. With daylight and a fast plate or film, a small camera could be hand-held while taking the picture. The ranks of amateur photographers swelled and informal "candid" portraits became popular. There was a proliferation of camera designs, from single- and twin-lens reflexes to large and bulky field cameras, simple box cameras, and even "detective cameras" disguised as pocket watches, hats, or other objects. The short exposure times that made candid photography possible also necessitated another innovation, the mechanical shutter. The very first shutters were separate accessories, though built-in shutters were common by the end of the 19th century. Invention of photographic film The use of photographic film was pioneered by George Eastman, who started manufacturing paper film in 1885 before switching to celluloid in 1888–1889. His first camera, which he called the "Kodak", was first offered for sale in 1888. It was a very simple box camera with a fixed-focus lens and single shutter speed, which along with its relatively low price appealed to the average consumer. The Kodak came pre-loaded with enough film for 100 exposures and needed to be sent back to the factory for processing and reloading when the roll was finished. By the end of the 19th century Eastman had expanded his lineup to several models including both box and folding cameras. In 1900, Eastman took mass-market photography one step further with the Brownie, a simple and very inexpensive box camera that introduced the concept of the snapshot. The Brownie was extremely popular and various models remained on sale until the 1960s. Film also allowed the movie camera to develop from an expensive toy to a practical commercial tool. 
Despite the advances in low-cost photography made possible by Eastman, plate cameras still offered higher-quality prints and remained popular well into the 20th century. To compete with rollfilm cameras, which offered a larger number of exposures per loading, many inexpensive plate cameras from this era were equipped with magazines to hold several plates at once. Special backs for plate cameras allowing them to use film packs or rollfilm were also available, as were backs that enabled rollfilm cameras to use plates. Except for a few special types such as Schmidt cameras, most professional astrographs continued to use plates until the end of the 20th century when electronic photography replaced them. 35 mm A number of manufacturers started to use 35 mm film for still photography between 1905 and 1913. The first 35 mm cameras available to the public, and the first to reach significant numbers in sales, were the Tourist Multiple, in 1913, and the Simplex, in 1914. Oskar Barnack, who was in charge of research and development at Leitz, decided to investigate using 35 mm cine film for still cameras while attempting to build a compact camera capable of making high-quality enlargements. He built his prototype 35 mm camera (Ur-Leica) around 1913, though further development was delayed for several years by World War I. It was not until after World War I that Leitz commercialized its first 35 mm cameras. Leitz test-marketed the design between 1923 and 1924, receiving enough positive feedback that the camera was put into production as the Leica I (for Leitz camera) in 1925. The Leica's immediate popularity spawned a number of competitors, most notably the Contax (introduced in 1932), and cemented the position of 35 mm as the format of choice for high-end compact cameras. Kodak got into the market with the Retina I in 1934, which introduced the 135 cartridge used in all modern 35 mm cameras. Although the Retina was comparatively inexpensive, 35 mm cameras were still out of reach for most people and rollfilm remained the format of choice for mass-market cameras. This changed in 1936 with the introduction of the inexpensive Argus A and to an even greater extent in 1939 with the arrival of the immensely popular Argus C3. Although the cheapest cameras still used rollfilm, 35 mm film had come to dominate the market by the time the C3 was discontinued in 1966. The fledgling Japanese camera industry began to take off in 1936 with the Canon 35 mm rangefinder, an improved version of the 1933 Kwanon prototype. Japanese cameras would begin to become popular in the West after Korean War veterans and soldiers stationed in Japan brought them back to the United States and elsewhere. TLRs and SLRs The first practical reflex camera was the Franke & Heidecke Rolleiflex medium format TLR of 1928. Though both single- and twin-lens reflex cameras had been available for decades, they were too bulky to achieve much popularity. The Rolleiflex, however, was sufficiently compact to achieve widespread popularity, and the medium-format TLR design became popular for both high- and low-end cameras. A similar revolution in SLR design began in 1933 with the introduction of the Ihagee Exakta, a compact SLR which used 127 rollfilm. This was followed three years later by the first Western SLR to use 135 film (otherwise known as 35 mm film), the Kine Exakta (the world's first true 35 mm SLR was the Soviet "Sport" camera, marketed several months before the Kine Exakta, though the "Sport" used its own film cartridge).
The 35 mm SLR design gained immediate popularity and there was an explosion of new models and innovative features after World War II. There were also a few 35 mm TLRs, the best-known of which was the Contaflex of 1935, but for the most part these met with little success. The first major post-war SLR innovation was the eye-level viewfinder, which first appeared on the Hungarian Duflex in 1947 and was refined in 1948 with the Contax S, the first camera to use a pentaprism. Prior to this, all SLRs were equipped with waist-level focusing screens. The Duflex was also the first SLR with an instant-return mirror, which prevented the viewfinder from being blacked out after each exposure. This same time period also saw the introduction of the Hasselblad 1600F, which set the standard for medium format SLRs for decades. In 1952 the Asahi Optical Company (which later became well known for its Pentax cameras) introduced the first Japanese SLR using 135 film, the Asahiflex. Several other Japanese camera makers also entered the SLR market in the 1950s, including Canon, Yashica, and Nikon. Nikon's entry, the Nikon F, had a full line of interchangeable components and accessories and is generally regarded as the first Japanese system camera. It was the F, along with the earlier S series of rangefinder cameras, that helped establish Nikon's reputation as a maker of professional-quality equipment and one of the world's best known brands. Instant cameras While conventional cameras were becoming more refined and sophisticated, an entirely new type of camera appeared on the market in 1949. This was the Polaroid Model 95, the world's first viable instant-picture camera. Known as a Land Camera after its inventor, Edwin Land, it was followed by a long line of Polaroid models; the inexpensive Swinger of 1965 was a huge success and remains one of the top-selling cameras of all time. Automation In 1936, Albert Einstein and Gustav Bucky designed one of the first automatic cameras, which used an electric eye to determine aperture and exposure. The first production camera to feature automatic exposure was the selenium light meter-equipped, fully automatic Super Kodak Six-20 of 1938, but its extremely high price (for the time) of $225 kept it from achieving any degree of success. By the 1960s, however, low-cost electronic components were commonplace and cameras equipped with light meters and automatic exposure systems became increasingly widespread. The next technological advance came in 1960, when the German Mec 16 SB subminiature became the first camera to place the light meter behind the lens for more accurate metering. However, through-the-lens metering ultimately became a feature more commonly found on SLRs than other types of camera; the first SLR equipped with a TTL system was the Topcon RE Super of 1962. Digital cameras Digital cameras differ from their analog predecessors primarily in that they do not use film, but capture and save photographs on digital memory cards or internal storage instead. Their low operating costs have relegated chemical cameras to niche markets. Digital cameras now include wireless communication capabilities (for example Wi-Fi or Bluetooth) to transfer, print, or share photos, and are commonly found on mobile phones. Digital imaging technology The first semiconductor image sensor was the CCD, invented by Willard S. Boyle and George E. Smith at Bell Labs in 1969. While researching MOS technology, they realized that an electric charge was analogous to the magnetic bubble and that it could be stored on a tiny MOS capacitor.
As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next. The CCD is a semiconductor circuit that was later used in the first digital video cameras for television broadcasting. The NMOS active-pixel sensor (APS) was invented by Olympus in Japan during the mid-1980s. This was enabled by advances in MOS semiconductor device fabrication, with MOSFET scaling reaching smaller micron and then sub-micron levels. The NMOS APS was fabricated by Tsutomu Nakamura's team at Olympus in 1985. The CMOS active-pixel sensor (CMOS sensor) was later developed by Eric Fossum's team at the NASA Jet Propulsion Laboratory in 1993. Early digital camera prototypes The concept of digitizing images on scanners, and the concept of digitizing video signals, predate the concept of making still pictures by digitizing signals from an array of discrete sensor elements. Early spy satellites used the extremely complex and expensive method of de-orbit and airborne retrieval of film canisters. Technology was pushed to skip these steps through the use of in-satellite developing and electronic scanning of the film for direct transmission to the ground. The amount of film was still a major limitation, and this was overcome and greatly simplified by the push to develop an electronic image capturing array that could be used instead of film. The first electronic imaging satellite was the KH-11 launched by the NRO in late 1976. It had a charge-coupled device (CCD) array with a resolution of (0.64 megapixels). At Philips Labs in New York, Edward Stupp, Pieter Cath and Zsolt Szilagyi filed for a patent on "All Solid State Radiation Imagers" on 6 September 1968 and constructed a flat-screen target for receiving and storing an optical image on a matrix composed of an array of photodiodes connected to a capacitor to form an array of two terminal devices connected in rows and columns. Their US patent was granted on 10 November 1970. Texas Instruments engineer Willis Adcock designed a filmless camera that was not digital and applied for a patent in 1972, but it is not known whether it was ever built. The Cromemco Cyclops, introduced as a hobbyist construction project in 1975, was the first digital camera to be interfaced to a microcomputer. Its image sensor was a modified metal–oxide–semiconductor (MOS) dynamic RAM (DRAM) memory chip. The first recorded attempt at building a self-contained digital camera was in 1975 by Steven Sasson, an engineer at Eastman Kodak. It used the then-new solid-state CCD image sensor chips developed by Fairchild Semiconductor in 1973. The camera weighed 8 pounds (3.6 kg), recorded black-and-white images to a compact cassette tape, had a resolution of 0.01 megapixels (10,000 pixels), and took 23 seconds to capture its first image in December 1975. The prototype camera was a technical exercise, not intended for production. Analog electronic cameras Handheld electronic cameras, in the sense of a device meant to be carried and used as a handheld film camera, appeared in 1981 with the demonstration of the Sony Mavica (Magnetic Video Camera). This is not to be confused with the later cameras by Sony that also bore the Mavica name. This was an analog camera, in that it recorded pixel signals continuously, as videotape machines did, without converting them to discrete levels; it recorded television-like signals to a 2 × 2 inch "video floppy". 
In essence, it was a video movie camera that recorded single frames, 50 per disk in field mode, and 25 per disk in frame mode. The image quality was considered equal to that of then-current televisions. Analog electronic cameras do not appear to have reached the market until 1986 with the Canon RC-701. Canon demonstrated a prototype of this model at the 1984 Summer Olympics, printing the images in the Yomiuri Shimbun, a Japanese newspaper. In the United States, the first publication to use these cameras for real reportage was USA Today, in its coverage of World Series baseball. Several factors held back the widespread adoption of analog cameras: the cost (upwards of $20,000), poor image quality compared to film, and the lack of quality affordable printers. Capturing and printing an image originally required access to equipment such as a frame grabber, which was beyond the reach of the average consumer. The "video floppy" disks later had several reader devices available for viewing on a screen but were never standardized as a computer drive. The early adopters tended to be in the news media, where the cost was negated by the utility and the ability to transmit images by telephone lines. The poor image quality was offset by the low resolution of newspaper graphics. This capability to transmit images without a satellite link was useful during the 1989 Tiananmen Square protests and the first Gulf War in 1991. US government agencies also took a strong interest in the still video concept, notably the US Navy for use as a real-time air-to-sea surveillance system. The first analog electronic camera marketed to consumers may have been the Casio VS-101 in 1987. A notable analog camera produced the same year was the Nikon QV-1000C, designed as a press camera and not offered for sale to general users, which sold only a few hundred units. It recorded images in greyscale, and the quality in newspaper print was equal to that of film cameras. In appearance it closely resembled a modern digital single-lens reflex camera. Images were stored on video floppy disks. Silicon Film, a proposed digital sensor cartridge for film cameras that would allow 35 mm cameras to take digital photographs without modification, was announced in late 1998. Silicon Film was to work as a roll of 35 mm film, with a 1.3 megapixel sensor behind the lens and a battery and storage unit fitting in the film holder in the camera. The product, which was never released, became increasingly obsolete due to improvements in digital camera technology and affordability. Silicon Film's parent company filed for bankruptcy in 2001. Early true digital cameras By the late 1970s, the technology required to produce truly commercial digital cameras existed. The first true portable digital camera that recorded images as a computerized file was likely the Fuji DS-1P of 1988, which recorded to a 2 MB SRAM (static RAM) memory card that used a battery to keep the data in memory. This camera was never marketed to the public. The first digital camera of any kind ever sold commercially was possibly the MegaVision Tessera in 1987, though little documentation of its sale is known. The first portable digital camera that was actually marketed commercially was sold in December 1989 in Japan, the DS-X by Fuji. The first commercially available portable digital camera in the United States was the Dycam Model 1, first shipped in November 1990. It was originally a commercial failure because it was black-and-white, low in resolution, and cost nearly $1,000.
It later saw modest success when it was re-sold as the Logitech Fotoman in 1992. It used a CCD image sensor, stored pictures digitally, and connected directly to a computer for download. Digital SLRs (DSLRs) Nikon had been interested in digital photography since the mid-1980s. In July 1986, at Photokina, Nikon introduced an operational prototype of the first SLR-type digital camera (Still Video Camera), manufactured by Panasonic. The Nikon SVC was built around a 2/3-inch charge-coupled device sensor of 300,000 pixels. The storage medium, a magnetic floppy disk inside the camera, allowed the recording of 25 or 50 black-and-white images, depending on the resolution. In 1991, Kodak brought to market the Kodak DCS (Kodak Digital Camera System), the beginning of a long line of professional Kodak DCS SLR cameras that were based in part on film bodies, often Nikons. The Kodak DCS was the first commercially available digital SLR (DSLR). It used a 1.3 megapixel sensor, had a bulky external digital storage system, and was priced at $13,000. Upon the arrival of the Kodak DCS-200, the original Kodak DCS was dubbed the Kodak DCS-100. The move to digital formats was helped by the formation of the first JPEG and MPEG standards in 1988, which allowed image and video files to be compressed for storage. The first consumer camera with a liquid crystal display on the back was the Casio QV-10, developed by a team led by Hiroyuki Suetaka in 1995. The first camera to use CompactFlash was the Kodak DC-25 in 1996. The first camera that offered the ability to record video clips may have been the Ricoh RDC-1 in 1995. In 1995 Minolta introduced the RD-175, which was based on the Minolta 500si SLR with a splitter and three independent CCDs. This combination delivered 1.75M pixels. The benefit of using an SLR base was the ability to use any existing Minolta AF mount lens. 1999 saw the introduction of the Nikon D1, a 2.74 megapixel camera that was the first digital SLR developed entirely from the ground up by a major manufacturer, and at a cost of under $6,000 at introduction was affordable for professional photographers and high-end consumers. This camera also used Nikon F-mount lenses, which meant film photographers could use many of the same lenses they already owned. Digital camera sales continued to flourish, driven by technology advances. The digital market segmented into different categories: compact digital still cameras, bridge cameras, mirrorless compacts, and digital SLRs. Since 2003, digital cameras have outsold film cameras. Kodak announced in January 2004 that it would no longer sell Kodak-branded film cameras in the developed world, and in 2012 the company filed for bankruptcy after struggling to adapt to the changing industry. Camera phones The first commercial camera phone was the Kyocera Visual Phone VP-210, released in Japan in May 1999. It was called a "mobile videophone" at the time, and had a 110,000-pixel front-facing camera. It stored up to 20 JPEG digital images, which could be sent over e-mail, or the phone could send up to two images per second over Japan's Personal Handy-phone System (PHS) cellular network. The Samsung SCH-V200, released in South Korea in June 2000, was also one of the first phones with a built-in camera. It had a TFT liquid-crystal display (LCD) and stored up to 20 digital photos at 350,000-pixel resolution. However, it could not send the resulting image over the telephone function, but required a computer connection to access the photos.
The first mass-market camera phone was the J-SH04, a Sharp J-Phone model sold in Japan in November 2000. It could instantly transmit pictures via cell phone telecommunication. One of the major technology advances was the development of CMOS sensors, which helped drive sensor costs low enough to enable the widespread adoption of camera phones. Smartphones now routinely include high resolution digital cameras. See also History of photography Photographic lens design Movie camera List of photographs considered the most important References External links The Digital Camera Museum, with history section Cameras Camera Camera
History of the camera
Technology
7,369
56,096,452
https://en.wikipedia.org/wiki/Myo-Inositol%20trispyrophosphate
myo-Inositol trispyrophosphate (ITPP) is an inositol phosphate, a pyrophosphate, a drug candidate, and a putative performance-enhancing substance, which exerts its biological effects by increasing tissue oxygenation. Chemistry ITPP is a pyrophosphate derivative of phytic acid with the molecular formula C6H12O21P6. Biological effects ITPP is a membrane-permeant allosteric regulator of hemoglobin that mildly reduces its oxygen-binding affinity, which shifts the oxygen-hemoglobin dissociation curve to the right and thereby increases oxygen release from the blood into tissue. Phytic acid, in contrast, is not membrane-permeant due to its charge distribution. Rodent studies in vivo demonstrated increased tissue oxygenation and dose-dependent increases in endurance during physical exercise, in both healthy mice and transgenic mice expressing a heart failure phenotype. The substance is believed to have a high potential for use in athletic doping, and liquid chromatography–mass spectrometry tests have been developed to detect ITPP in urine. Its use as a performance-enhancing substance in horse racing has also been suspected, and similar tests have been developed for horses. ITPP has been studied for potential adjuvant use in the treatment of cancer in conjunction with chemotherapy, due to its effects in reducing tissue hypoxia. Human clinical trials were registered in 2014 under the compound number OXY111A. The substance has also been examined in the context of other illnesses involving hypoxia, such as cardiovascular disease and dementia. See also Phytic acid Inositol Inositol phosphate Inositol trisphosphate myo-Inositol References Phospholipids Inositol Signal transduction
Myo-Inositol trispyrophosphate
Chemistry,Biology
380
17,608,453
https://en.wikipedia.org/wiki/Danofloxacin
Danofloxacin is a fluoroquinolone antibiotic used in veterinary medicine. References Fluoroquinolone antibiotics Cyclopropyl compounds Veterinary drugs Carboxylic acids
Danofloxacin
Chemistry
41
2,338,670
https://en.wikipedia.org/wiki/Yoshi%20Sodeoka
Yoshi Sodeoka is a Japanese-born multimedia artist and musician renowned for his exploration of video, gifs, print and NFTs (non-fungible tokens). Trained as an oil painter from the age of 5 and guitarist from the age of 13, Sodeoka's early immersion in traditional art informs his approach to digital expression. His career has spanned three decades. Biography Originally hailing from Yokohama, Japan, Sodeoka relocated to New York in the 1990s, enrolling at Pratt Institute. He has called New York home ever since. Sodeoka's neo-psychedelic style is a direct reflection of his deep-rooted passion for music, drawing inspiration from genres such as noise, punk, and metal. His immersive artwork often integrates digital video feedback, footage sampling, found online imagery and experimental audio soundscapes. Sodeoka has collaborated on video projects with notable musical artists including Metallica, Psychic TV, Tame Impala, OPN, Beck, The Presets, and Max Cooper. He has created editorial illustrations for publications like The New York Times, Wired, The Atlantic, and MIT Technology Review. And Sodeoka's artwork has been featured in ad campaigns from brands like Adidas, Nike, Apple and Samsung. Sodeoka's digital artwork has been showcased in venues like the Centre Pompidou, the Cleveland Museum of Art, Deitch Projects, La Gaîté Lyrique, the Museum of the Moving Image and Laforet Museum Harajuku. Sodeoka's work is included in the permanent collections of the Museum of the Moving Image in New York City and the San Francisco Museum of Modern Art. In 1995, Sodeoka was the founding art director of Word Magazine, one of the earliest ezines on the web. Notable Projects Prototype #31: C404.40.40.31 (2001) Released in 2001, "Prototype #31: C404.40.40.31" is a 31-minute audio/video DVD incorporating animated graphics layered on video and TV footage set to an original 31-minute experimental electronic noise music score composed by Sodeoka. "Prototype #31: C404.40.40.31" made its debut as an installation at "Digital Dumbo," a four-day digital art festival held in September 2001. ASCII BUSH (2004) ' Originally showcased at Turbulence.org, "ASCII BUSH" presents an ASCII video rendition of two State of the Union addresses—one delivered by George W. Bush on January 12, 2003, just before the onset of the Iraq War, and the other by his father, George H.W. Bush, on March 6, 1991, shortly after Operation Desert Storm. By repurposing the "debris" of contemporary political discourse, Sodeoka invites viewers to engage critically with the underlying messages and symbolism embedded within these speeches. Noise Driven Ambient Audio And Visuals DVD (2005) Released in 2005, "Noise Driven Ambient Audio And Visuals" showcases Sodeoka's video feedback loops, video flicker induction, and integration of visual and audio static from analog sources. With over 50 minutes of content, the DVD features contributions from guest directors including Associates In Science, Lew Baldwin, Day-Dream, Jonathan Turner, and WeWorkForThem. Video Metal DVD-R (2009) Released in 2009, "Video Metal" is presented on DVD-r in NTSC format, region-free. The 5" DVD-r is wrapped in a hand-screen-printed 20" x 15", 4-color poster, folded and housed in a 5" x 6" resealable poly bag with an outside sticker. The edition was limited to 100 units. Published by Table of Contents, "Video Metal" was reviewed by Neural on April 22, 2010, and described as a "real visual and sound trip." 
#46 — 35.23N 139.30E [FAC 3097] E5150xx - Digital/Analog Intermix (2012) Expanding upon his 2004 Turbulence.org commission, Prototype #44, Net Pirate Number Station, Sodeoka's 2012 work, #46 — 35.23N 139.30E [FAC 3097] E5150xx - Digital/Analog Intermix, explores themes of telecommunication technologies and espionages. The project documents a psychedelic virtual journey to the Fukaya Communication Site, also known as Naval Transmitter Station Totsuka, located near Sodeoka's childhood home in Yokohama, Japan. Historically classified as a US army jurisdictional area with strict prohibitions against non-American citizens, the site holds a mystique as a center of espionage activities involving Russia and North Korea. Surrounding the site, stories abound of missing local women and warnings from adults to children to avoid the area. Music Video Projects In addition to his own multimedia art projects, Yoshi Sodeoka has been commissioned to create music videos for a diverse array of musical artists. Notable music video projects include: The Presets "Youth in Trouble" (2012) Tame Impala "Elephant" (2012) Yeasayer "PSCYVOTV" (2012) — Eleven videos were created for each song of the album "Fragrant World" by Yeasayer MYMK "Drag" (2015) Digitalism - "Utopia" (2016) Mark Stoermer "39 Steps (Shannoncut Remix)" (2017) Oliver Coates "Norrin Radd Dreaming" (2018) Oneohtrix Point Never, "Magic OPN" (promo videos produced with Robert Beatty, 2020) Max Cooper "Spike" (2020) Genghis Tron "Pyrocene" (2021) Metallica "You Must Burn" (co-directed with Tim Saccenti, 2023) Illustrations and Animations for Publications In addition to his work in the digital domain, Sodeoka has contributed art to print publications like The New York Times, Wired Magazine, Harper's Magazine, The Atlantic, MIT Technology Review, The Intercept, and Bloomberg Businessweek. Employing animated formats such as GIF or MP4 loops, Sodeoka creates dynamic artwork that transcends the conventional static image, offering an immersive visual experience to readers. Notable editorial projects include: The New York Times, Addicted to Distraction (2015) The New York Times Sunday Review, Do You Believe in God, or Is That a Software Glitch (2016) The WIRED Magazine, How One Woman's Digital Life Was Weaponized Against Her (2017) MIT Technology Review, Paying with Your Face (2017) The WIRED Magazine The Dawn of Twitter and the Age of Awareness (2018) The New York Times, "Are We Living in a Computer Simulation? Let’s Not Find Out" (2019) The Atlantic, How to Put Out Democracy’s Dumpster Fire (2021) Propublica, Why It’s Hard to Sanction Ransomware Groups (2022) The New Yorker, Cory Doctorow Wants You to Know What Computers Can and Can’t Do (2022) New York Times, We Study Virus Evolution. Here’s Where We Think the Coronavirus Is Going. (2022) The New York Times, Starfield’s 1,000 Planets May Be One Giant Leap for Game Design (2023) Bloomberg Businessweek, AI Stocks Supercharge a Tech Bull Market—and Maybe a Bubble (2023) Prism Break - Ambient Swim on HBO Max (2021) Created as part of the Adult Swim Festival 2021 in collaboration with RVNG INTL, "Prism Break - Ambient Swim" premiered on HBO Max on November 13, 2021. The project consists of a series of immersive videos designed to transport viewers to otherworldly realms of sound and color. Wetware - Bacteria NFT (2021) Released on the Foundation.app platform, "Wetware - Bacteria" is an NFT project drawing inspiration from the notion of an organic, self-organizing computer driven by virtual bacteria. 
Wind Flags #4 Mural at Memorial Sloan Kettering Cancer Center (2023) Commissioned in 2023, "Wind Flags #4" is a mural for the lobby of the newly renovated Memorial Sloan Kettering Cancer Center. Collaborating with the MSK Computational Oncology Research team, Sodeoka composed the image from stills of his algorithmically generated videos, resembling oscillation patterns in the Belousov-Zhabotinsky reaction. Spanning 8 x 20 feet, the mural's organic and computer-generated forms serve as a vibrant centerpiece, offering solace and inspiration to patients, visitors, and staff alike at the Memorial Sloan Kettering Cancer Center. The Flood NFT (2024) "The Flood" is a 2024 NFT series hosted on Verse.works, exploring the intricacies of arachnid behavior through code-based simulations. The project meticulously examines over 50 randomized parameters to portray predator-prey interactions within a digital ecosystem. Viewers are immersed in a world where algorithmically guided predators mirror spider hunting tactics while delicate prey respond with nuanced evasive maneuvers. The project presents two narratives: "The Flood: Orchestrated," characterized by meticulously scripted interactions, and "The Flood: Chaos," embracing spontaneity and unpredictability. Undervolt & Co. Undervolt & Co. was a video art collective founded in 2013 by Yoshi Sodeoka. Johnny Woods, Nicholas O'Brien and Rea McNamara later joined as collaborators. The project aimed to plunge viewers into an abstract audio-visual world as they browsed exclusive videos uploaded to the label's website and online shop. The work ranged in style from psychedelic to 1980s glitch to near-overwhelming, saturated color. Among the first wave of artists to join the label were Jennifer Juniper Stratford, Jimmy Joe Roche, Spectral Net (a group comprising Birch Cooper, Brenna Murphy, Sabrina Ratté, and Roger Tellier-Craig), Johnny Woods, Cristopher Cichocki and Yoshi Sodeoka. The collective continued to add new artists over time, aiming to represent a diverse range of voices within the video art community, including Robert Beatty, Andrew Benson, Peter Burr, Camilla Padgitt-Cole, Scot Cotterell, Di-Andre Caprice Davis, e*rock, Extreme Animals, Adam Ferriss, Carrie Gates, Faith Holland, Jodie Mack, Rea McNamara, A. Bill Miller, MSHR, Nicholas O'Brien, Eva Papamargariti, Suzy Poling, Javier Galán Rico, Rick Silva, Leigh Silverblatt, Ryoya Usuha, and Giselle Zatonyl. Undervolt & Co. aspired to be more than just a label; it sought to establish itself as an unparalleled database of video art. The project's archives were crucial in centralizing the world of video artists. The collective's commitment to inclusivity and accessibility was reflected in its mass distribution model, which aimed to democratize the consumption of video art. By providing a centralized hub for video artists to share their work, Undervolt & Co. fostered collaboration and dialogue within the global video art community. Undervolt & Co. participated in festivals such as Moogfest 2016, the Aurora Festival 2015 in Dallas, Texas, Solstice at The Cleveland Museum of Art in 2018 and 2019, and Times Square Arts: Interference AV in 2018.
References External links Yoshi Sodeoka's portfolio site Interview with Yoshi Sodeoka: Infinite Cycles - Sedition IN DIGITAL: BEHIND THE SCREEN WITH YOSHI SODEOKA Yoshi Sodeoka Interview by Mâché Studio FACT Magazine: Shiva Feshareki & Yoshi Sodeoka share spirallic collaboration, Vapour It's Nice That: Digital artist Yoshi Sodeoka’s work cannot be categorized Massage Magazine: Interview with Yoshi Sodeoka (In Japanese) Pen Magazine: Yoshi Sodeoka’s Digital Distortions Redefine Mag: Yoshi Sodeoka Video Artist Interview: Psychedelic Apocalypse in the Digital Realm Niio Editorial: Yoshi Sodeoka: human audio visualizer The Interface is the Massage: Yoshi Sodeoka by Curt Cloninger Multimedia artists Japanese musicians
Yoshi Sodeoka
Technology
2,511
10,209,776
https://en.wikipedia.org/wiki/Energy%20applications%20of%20nanotechnology
As the world's energy demand continues to grow, the development of more efficient and sustainable technologies for generating and storing energy is becoming increasingly important. According to Dr. Wade Adams from Rice University, energy will be the most pressing problem facing humanity in the next 50 years, and nanotechnology has the potential to help solve this issue. Nanotechnology, a relatively new field of science and engineering, has shown promise to have a significant impact on the energy industry. Nanotechnology is defined as any technology that contains particles with one dimension under 100 nanometers in length. For scale, a single virus particle is about 100 nanometers wide. People in the fields of science and engineering have already begun developing ways of utilizing nanotechnology for the development of consumer products. Benefits already observed from the design of these products are an increased efficiency of lighting and heating, increased electrical storage capacity, and a decrease in the amount of pollution from the use of energy. Benefits such as these make the investment of capital in the research and development of nanotechnology a top priority. Commonly used nanomaterials in energy An important sub-field of nanotechnology related to energy is nanofabrication, the process of designing and creating devices on the nanoscale. The ability to create devices smaller than 100 nanometers opens many doors for the development of new ways to capture, store, and transfer energy. Improvements in the precision of nanofabrication technologies are critical to solving many energy-related problems that the world is currently facing. Graphene-based materials There is enormous interest in the use of graphene-based materials for energy storage. Research on the use of graphene for energy storage began only recently, but the related research is growing rapidly. Graphene recently emerged as a promising material for energy storage because of several properties, such as low weight, chemical inertness and low price. Graphene is an allotrope of carbon that exists as a two-dimensional sheet of carbon atoms organized in a hexagonal lattice. A family of graphene-related materials, called "graphenes" by the research community, consists of structural or chemical derivatives of graphene. The most important chemically derived graphene is graphene oxide (defined as a single layer of graphite oxide; graphite oxide can be obtained by reacting graphite with strong oxidizers, for example a mixture of sulfuric acid, sodium nitrate, and potassium permanganate), which is usually prepared from graphite by oxidation to graphite oxide and subsequent exfoliation. The properties of graphene depend greatly on the method of fabrication. For example, reduction of graphene oxide to graphene results in a graphene structure that is also one atom thick but contains a high concentration of defects, such as nanoholes and Stone–Wales defects. Moreover, carbon materials, which have relatively high electrical conductivity and variable structures, are extensively used in the modification of sulfur. Sulfur–carbon composites with diverse structures have been synthesized and have exhibited remarkably improved electrochemical performance compared with pure sulfur, which is crucial for battery design. Graphene has great potential in the modification of a sulfur cathode for high-performance Li-S batteries, an application that has been broadly investigated in recent years.
Silicon-based nano semiconductors Silicon-based nano semiconductors find their most useful application in solar energy and have been studied extensively at institutions such as Kyoto University. They utilize silicon nanoparticles in order to absorb a greater range of wavelengths from the electromagnetic spectrum. This can be done by placing many identical, equally spaced silicon rods on the surface; the rod height and spacing have to be optimized to achieve the best results. This arrangement of silicon particles allows solar energy to be reabsorbed by many different particles, exciting electrons and resulting in much of the energy being converted to heat. The heat can then be converted to electricity. Researchers from Kyoto University have shown that these nano-scale semiconductors can increase efficiency by at least 40% compared to conventional solar cells. Nanocellulose-based materials Cellulose is the most abundant natural polymer on Earth. Currently, nanocellulose-based mesoporous structures, flexible thin films, fibers, and networks are being developed and used in photovoltaic (PV) devices, energy storage systems, mechanical energy harvesters, and catalyst components. Including nanocellulose in these energy-related devices greatly raises the proportion of eco-friendly materials and is very promising for addressing the associated environmental concerns. Furthermore, cellulose offers the promise of low cost and large-scale production. Nanostructures in energy One-dimensional nanomaterials One-dimensional nanostructures have shown promise for increasing the energy density, safety, and cycle life of energy storage systems, an area in need of improvement for Li-ion batteries. These nanostructures are mainly used in battery electrodes because of their shorter bi-continuous ion and electron transport pathways, which result in higher battery performance. Additionally, 1D nanostructures are capable of increasing charge storage through double-layer formation and can also be used in supercapacitors because of their fast pseudocapacitive surface redox processes. Novel designs and controllable synthesis routes for these materials are expected to be developed much further. 1D nanomaterials are also environmentally friendly and cost-effective. Two-dimensional nanomaterials The most important feature of two-dimensional nanomaterials is that their properties can be precisely controlled. This means that 2D nanomaterials can be easily modified and engineered at the level of their nanostructure. The interlayer space can also be manipulated, even for non-layered materials, forming so-called 2D nanofluidic channels. 2D nanomaterials can also be engineered into porous structures that enable facile charge and mass transport for energy storage and catalytic applications. 2D nanomaterials also present a few challenges. Modifying the properties of these materials can have side effects: qualities such as activity and structural stability can be compromised when the materials are engineered. For example, creating defects can increase the number of active sites for higher catalytic performance, but side reactions may also occur, which could damage the catalyst's structure. Another example is that interlayer expansion can lower the ion diffusion barrier in the catalytic reaction, but it can also lower structural stability. Because of this, there is a tradeoff between performance and stability. A second issue is consistency in design methods. 
For example, heterostructures are the main catalyst structures in interlayer-space and energy storage devices, but the catalytic reaction and charge storage mechanisms in these structures are not yet well understood. A deeper understanding of 2D nanomaterial design is required, because fundamental knowledge will lead to consistent and efficient methods of designing these structures. A third challenge is the practical application of these technologies. There is a huge difference between lab-scale and industry-scale applications of 2D nanomaterials due to their intrinsic instability during storage and processing. For example, porous 2D nanomaterial structures have low packing densities, which makes them difficult to pack into dense films. New processes are still being developed for the application of these materials on an industrial scale. Applications Lithium-sulfur based high-performance batteries The Li-ion battery is currently one of the most popular electrochemical energy storage systems and has been widely used in areas from portable electronics to electric vehicles. However, the gravimetric energy density of Li-ion batteries is limited and lower than that of fossil fuels. The lithium-sulfur (Li-S) battery, which has a much higher energy density than the Li-ion battery, has been attracting worldwide attention in recent years. A group of researchers, supported by the National Natural Science Foundation of China (Grant Nos. 21371176 and 21201173) and the Ningbo Science and Technology Innovation Team (Grant No. 2012B82001), has developed a nanostructure-based lithium-sulfur battery consisting of graphene/sulfur/carbon nano-composite multilayer structures. Nanomodification of sulfur can increase the electrical conductivity of the battery and improve electron transport in the sulfur cathode. A graphene/sulfur/carbon nanocomposite with a multilayer structure (G/S/C), in which nanosized sulfur is layered on both sides of chemically reduced graphene sheets and covered with amorphous carbon layers, can be designed and successfully prepared. This structure simultaneously achieves high conductivity and surface protection of the sulfur, and thus gives rise to excellent charge/discharge performance. The G/S/C composite shows promising characteristics as a high-performance cathode material for Li-S batteries. Nanomaterials in solar cells Engineered nanomaterials are key building blocks of current-generation solar cells. Today's best solar cells have layers of several different semiconductors stacked together to absorb light at different energies but still only manage to use approximately 40% of the Sun's energy. Commercially available solar cells have much lower efficiencies (15–20%). Nanostructuring has been used to improve the efficiencies of established photovoltaic (PV) technologies, for example by improving current collection in amorphous silicon devices, plasmonic enhancement in dye-sensitized solar cells, and light trapping in crystalline silicon. Furthermore, nanotechnology could help increase the efficiency of light conversion by utilizing the flexible bandgaps of nanomaterials, or by controlling the directivity and photon escape probability of photovoltaic devices (see the sketch below for the band-gap effect). Titanium dioxide (TiO2) is one of the most widely investigated metal oxides for use in PV cells over the past few decades because of its low cost, environmental benignity, plentiful polymorphs, good stability, and excellent electronic and optical properties. 
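The band-gap point above can be made concrete with a one-line relation: a semiconductor only absorbs photons whose energy exceeds its band gap, and the corresponding absorption cutoff wavelength is λ ≈ 1240 nm·eV / E_g. The sketch below applies this relation to a few illustrative band-gap values; the specific numbers are assumptions chosen for illustration, not figures taken from the studies discussed in this article.

```python
# Convert a semiconductor band gap (eV) into its absorption cutoff wavelength (nm).
# lambda = h*c / E  ~=  1239.84 nm*eV / E.

HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def cutoff_wavelength_nm(band_gap_ev: float) -> float:
    """Longest wavelength a material with the given band gap can absorb."""
    return HC_EV_NM / band_gap_ev

if __name__ == "__main__":
    # Illustrative band gaps (assumed values, chosen only to show the trend):
    examples = {"wide-gap oxide (~3.2 eV)": 3.2,
                "bulk silicon (~1.1 eV)": 1.1,
                "narrow-gap absorber (~0.7 eV)": 0.7}
    for name, eg in examples.items():
        print(f"{name}: cutoff ~ {cutoff_wavelength_nm(eg):.0f} nm")
    # A ~3.2 eV gap cuts off near 390 nm (ultraviolet), while narrowing the gap
    # pushes absorption into the visible and near-infrared, letting a cell
    # harvest a larger fraction of the solar spectrum.
```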
However, their performance is greatly limited by the properties of the TiO2 materials themselves. One limitation is the wide band gap, which makes TiO2 sensitive only to ultraviolet (UV) light, which accounts for less than 5% of the solar spectrum. Recently, core–shell structured nanomaterials have attracted a great deal of attention as they represent the integration of individual components into a functional system, showing improved physical and chemical properties (e.g., stability, non-toxicity, dispersibility, multi-functionality) that are unavailable from the isolated components. For TiO2 nanomaterials, this core–shell structured design provides a promising way to overcome their disadvantages, resulting in improved performance. Compared to TiO2 alone, core–shell structured TiO2 composites show tunable optical and electrical properties, and even new functions, which originate from the unique core–shell structure. Nanoparticle fuel additives Nanomaterials can be used in a variety of ways to reduce energy consumption. Nanoparticle fuel additives can also be of great use in reducing carbon emissions and increasing the efficiency of combustion fuels. Cerium oxide nanoparticles have been shown to be very good at catalyzing the decomposition of unburnt hydrocarbons and other small particle emissions, due to their high surface-area-to-volume ratio, as well as lowering the pressure within the combustion chamber of engines to increase engine efficiency and curb NOx emissions. The addition of carbon nanoparticles has also successfully increased the burning rate and ignition delay of jet fuel. Iron nanoparticle additives to biodiesel and diesel fuels have also shown, in one study, a decrease in fuel consumption and in volumetric emissions of hydrocarbons by 3–6%, carbon monoxide by 6–12%, and nitrogen oxides by 4–11%. Environmental and health impacts of fuel additives While nanomaterials can increase the energy efficiency of fuels in several ways, a drawback of their use lies in the effect of nanoparticles on the environment. With cerium oxide nanoparticle additives in fuel, trace amounts of these toxic particles can be emitted in the exhaust. Cerium oxide additives in diesel fuel have been shown to cause lung inflammation and increased bronchoalveolar lavage fluid in rats. This is concerning, especially in areas with high road traffic, where these particles are likely to accumulate and cause adverse health effects. Naturally occurring nanoparticles created by the incomplete combustion of diesel fuels are also large contributors to the toxicity of diesel fumes. More research needs to be conducted to determine whether the addition of artificial nanoparticles to fuels decreases the net amount of toxic particle emissions due to combustion. Economic benefits The relatively recent shift toward using nanotechnology for the capture, transfer, and storage of energy has had, and will continue to have, many positive economic impacts on society. The control over materials that nanotechnology offers scientists and engineers of consumer products is one of its most important aspects and allows for efficiency improvements in a variety of products. More efficient capture and storage of energy through nanotechnology may lead to decreased energy costs in the future, as the preparation of nanomaterials becomes less expensive with further development. A major issue with current energy generation is the generation of waste heat as a by-product of combustion. 
A common example of this is the internal combustion engine, which loses about 64% of the energy from gasoline as heat; an improvement here alone could have a significant economic impact. However, improving the internal combustion engine in this respect has proven to be extremely difficult without sacrificing performance. Improving the efficiency of fuel cells through nanotechnology appears more plausible, using molecularly tailored catalysts, polymer membranes, and improved fuel storage. For a fuel cell to operate, particularly one of the hydrogen variant, a noble-metal catalyst (usually platinum, which is very expensive) is needed to separate the electrons from the protons of the hydrogen atoms. However, catalysts of this type are extremely sensitive to carbon monoxide reactions. To combat this, alcohol or hydrocarbon compounds are used to lower the carbon monoxide concentration in the system. Using nanotechnology, catalysts can be designed through nanofabrication to limit incomplete combustion and thus decrease the amount of carbon monoxide, improving the efficiency of the process. See also Nanotechnology Energy Fuel cell References Nanotechnology Energy technology
Energy applications of nanotechnology
Materials_science,Engineering
2,962
70,805,540
https://en.wikipedia.org/wiki/Chiang%20Ti%20Ming
Chiang Ti Ming (; 27 July 1976 – 6 January 2007) was a Malaysian Chinese particle physicist and child prodigy. He was the youngest student to be admitted to the California Institute of Technology. Biography Chiang Ti Ming was a native of Seremban, Malaysia. He was tested to have an IQ of 148 as a child. He displayed an exceptional ability in science and languages in childhood, writing poetry in Chinese and English that expressed his awe of science and eagerness to explore it. In 1988, he made national headlines when he skipped from Form 1 to Form 6 and was preparing to enter university in the US to study physics or computer science. He soon went to INTI International University College to take classes and prepare for university admission. Dr. Lee Fah Onn, the president of the college, said he was very special as he was able to understand abstract ideas. In 1989, at age 13, he was admitted to the second year of the four-year physics degree programme at the California Institute of Technology, setting a record as the youngest student ever to enter the prestigious university. Unable to obtain a Malaysian government scholarship, Chiang was sponsored by private organisations and the Malaysian Chinese community. During his undergraduate years, Chiang's results were among the top five percent of students, and he was the youngest student ever to receive the Undergraduate Students Merit Award two years in a row. He was a member of Tau Beta Pi. He earned his B.S. degree with honours in 1992. The same year, Chiang was admitted to Cornell University to study for a Ph.D. in physics. He earned his doctorate in string theory in 1998 under the guidance of Brian Greene. He then went to the mathematics department of Harvard University to do postdoctoral research with Shing-Tung Yau. His mentor Yau said that while Chiang had done good work and written papers well, he had trouble interacting with people and had poor living skills. Around this time, Chiang began to show signs of mental illness. He returned to Malaysia in 2001, and in 2002 he was admitted to a hospital in Kuala Lumpur for treatment of depression and withdrawal symptoms. According to a statement made by his father that year, Chiang had become withdrawn because he had been too young to adapt to the environment and the work pressure of American society after receiving his Ph.D., and because he was viewed differently by others; he was therefore taken back home for the sake of his health. His father also asked the media to stop giving him attention. Chiang refused to eat or drink and would not speak for long periods of time. His life had to be sustained by medication. His condition worsened on 5 January 2007. He was rushed to Tuanku Ja'afar Hospital and died the following day. His death was caused by neurogenic sepsis, a rare complication of diabetes. Chiang was survived by his parents and a younger sister. Another younger sister had drowned in a swimming pool in 1993, aged four. Publications References External links Malaysian physicists Malaysian people of Chinese descent Particle physicists California Institute of Technology alumni Cornell University alumni
Chiang Ti Ming
Physics
621
8,722,775
https://en.wikipedia.org/wiki/Systematic%20code
In coding theory, a systematic code is any error-correcting code in which the input data are embedded in the encoded output. Conversely, in a non-systematic code the output does not contain the input symbols. Systematic codes have the advantage that the parity data can simply be appended to the source block, and receivers do not need to recover the original source symbols if received correctly – this is useful, for example, if error-correction coding is combined with a hash function for quickly determining the correctness of the received source symbols, or in cases where errors occur as erasures and a received symbol is thus always correct. Furthermore, for engineering purposes such as synchronization and monitoring, it is desirable to get reasonably good estimates of the received source symbols without going through the lengthy decoding process, which may be carried out at a remote site at a later time. Properties Every non-systematic linear code can be transformed into a systematic code with essentially the same properties (i.e., minimum distance). Because of the advantages cited above, linear error-correcting codes are therefore generally implemented as systematic codes. However, for certain decoding algorithms such as sequential decoding or maximum-likelihood decoding, a non-systematic structure can increase performance in terms of undetected decoding error probability when the minimum free distance of the code is larger. For a systematic linear code, the generator matrix, G, can always be written as G = [I_k | P], where I_k is the identity matrix of size k and P is the parity portion of the matrix. Examples Checksums and hash functions, combined with the input data, can be viewed as systematic error-detecting codes. Linear codes are usually implemented as systematic error-correcting codes (e.g., Reed-Solomon codes in CDs). Convolutional codes are implemented as either systematic or non-systematic codes. Non-systematic convolutional codes can provide better performance under maximum-likelihood (Viterbi) decoding. In DVB-H, for additional error protection and power efficiency for mobile receivers, a systematic Reed-Solomon code is employed as an erasure code over packets within a data burst, where each packet is protected with a CRC: data in verified packets count as correctly received symbols, and if all are received correctly, evaluation of the additional parity data can be omitted, and receiver devices can switch off reception until the start of the next burst. Fountain codes may be either systematic or non-systematic: as they do not exhibit a fixed code rate, the source symbols make up a diminishing fraction of the possible output set. Notes References Coding theory
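To make the G = [I_k | P] form concrete, here is a minimal sketch of systematic encoding over GF(2), using the parity portion of the standard (7,4) Hamming code as an example; the specific matrix is the common textbook choice and is assumed here purely for illustration.

```python
# Minimal sketch of systematic encoding over GF(2).
# With G = [I_k | P], the first k codeword bits are the message itself, so a receiver
# can read the data directly and only needs the parity bits for error checking.

# Parity portion P of the (7,4) Hamming code (a standard textbook choice).
P = [
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 1],
]

def encode_systematic(message):
    """Return the codeword [message | message · P mod 2] for a k-bit message."""
    k, r = len(P), len(P[0])
    assert len(message) == k, "message length must equal k"
    parity = [sum(message[i] * P[i][j] for i in range(k)) % 2 for j in range(r)]
    return list(message) + parity

if __name__ == "__main__":
    msg = [1, 0, 1, 1]
    codeword = encode_systematic(msg)
    print(codeword)                    # [1, 0, 1, 1, 0, 1, 0]
    assert codeword[:len(msg)] == msg  # systematic property: input embedded in output
```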
Systematic code
Mathematics
519
42,732,399
https://en.wikipedia.org/wiki/Thermal%20history%20coating
A thermal history coating (THC) is a robust coating containing various non-toxic chemical compounds whose crystal structures irreversibly change at high temperatures. This allows temperature measurements and thermal analysis to be performed on intricate and inaccessible components that operate in harsh environments. Like thermal barrier coatings, THCs provide protection from intense heat to the surfaces on which they are applied. THCs provide accurate temperature measurements in the range of 900 °C to 1400 °C, with an accuracy of ±10 °C. Application of THCs THCs are applied by atmospheric plasma spraying, which is a thermal spraying technique. This ensures that the coatings are robust enough to allow long lifetimes in harsh environments, such as on jet engine components, which experience temperatures in excess of 1000 °C and angular velocities of up to 10,000 rpm (revolutions per minute). Temperature Measurement Phosphorescent Properties THCs are composed of phosphor materials, whose luminescent characteristics are temperature- and duration-dependent. Phosphor thermometry is the measurement technique used for determining the past temperatures of THCs, whereby the luminescent characteristics of the coatings are exploited and matched to calibration tables. Instrumentation The phosphorescence of THCs is excited by use of an external light source such as a laser pen. An optical system then collects a reflected light signal, whose characteristics provide information on the crystal structure of the THC. Crystal structure properties are then converted into the temperatures that the coating previously experienced. This allows point measurements to be made across the coated surfaces of components and allows thermal analysis to be carried out. Applications R&D THCs are used in high-temperature applications where temperature knowledge is essential in research and development programmes, for example in identifying hot spots, which could lead to structural damage of components. Warranty As THCs provide historic temperature information, they can be used as warranty tools where certain components, such as valves or particular engine or machinery components, must not exceed certain temperatures. Other High-Temperature Detection Technologies Thermocouple Thermocrystal Pyrometer References Materials science Thermal protection
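As a rough illustration of the calibration-table matching described above, the sketch below interpolates a measured phosphorescence characteristic (here a decay lifetime) against a calibration curve to recover the peak temperature a coating was exposed to. The lifetime values, temperatures, and monotonic relationship assumed here are hypothetical placeholders for illustration, not data for any real THC formulation.

```python
# Hypothetical sketch: estimate the peak temperature a coating experienced by
# interpolating a measured phosphorescence decay lifetime against a calibration table.
# All numbers below are invented placeholders used only to show the procedure.

# Calibration table: (decay lifetime in microseconds, peak temperature in °C),
# assuming lifetime decreases monotonically as past temperature increases.
CALIBRATION = [
    (120.0, 900.0),
    (80.0, 1000.0),
    (45.0, 1100.0),
    (22.0, 1200.0),
    (9.0, 1300.0),
    (3.5, 1400.0),
]

def temperature_from_lifetime(lifetime_us: float) -> float:
    """Linearly interpolate the calibration table to estimate peak temperature (°C)."""
    if lifetime_us >= CALIBRATION[0][0]:
        return CALIBRATION[0][1]
    if lifetime_us <= CALIBRATION[-1][0]:
        return CALIBRATION[-1][1]
    # Find the bracketing calibration points and interpolate between them.
    for (lt_hi, t_lo), (lt_lo, t_hi) in zip(CALIBRATION, CALIBRATION[1:]):
        if lt_lo <= lifetime_us <= lt_hi:
            frac = (lt_hi - lifetime_us) / (lt_hi - lt_lo)
            return t_lo + frac * (t_hi - t_lo)
    raise ValueError("lifetime outside calibration range")

if __name__ == "__main__":
    print(f"Estimated peak temperature: {temperature_from_lifetime(30.0):.0f} °C")
```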
Thermal history coating
Physics,Materials_science,Engineering
441
91,440
https://en.wikipedia.org/wiki/John%20C.%20Fr%C3%A9mont
Major-General John Charles Frémont (January 21, 1813July 13, 1890) was a United States Army officer, explorer, and politician. He was a United States senator from California and was the first Republican nominee for president of the U.S. in 1856 and founder of the California Republican Party when he was nominated. He lost the election to Democrat James Buchanan when the vote was split by the Know Nothings. A native of Georgia, he attended the College of Charleston for two years until he was expelled after irregular attendance. He opposed slavery. In the 1840s, he led five expeditions into the western states. During the Mexican–American War, he was a major in the U.S. Army and took control of a portion of California north of San Francisco from the short-lived California Republic in 1846. During this time, he led several massacres against indigenous peoples in California as part of the California genocide. Frémont was court-martialed and convicted of mutiny and insubordination after a conflict over who was the rightful military governor of California. His sentence was commuted, and he was reinstated by President James K. Polk, but Frémont resigned from the Army. Afterwards, he settled in California at Monterey while buying cheap land in the Sierra foothills. Gold was found on his Mariposa ranch, and Frémont became a wealthy man during the California Gold Rush. He became one of the first two U.S. senators elected from the new state of California in 1850. At the beginning of the American Civil War in 1861, he was given command of the Department of the West by President Abraham Lincoln. Frémont had successes during his brief tenure there, though he ran his department autocratically and made hasty decisions without consulting President Lincoln or Army headquarters. He issued an unauthorized emancipation edict and was relieved of his command for insubordination by Lincoln. After a brief service tenure in the Mountain Department in 1862, Frémont resided in New York, retiring from the army in 1864. He was nominated for president in 1864 by the Radical Democracy Party, a breakaway faction of abolitionist Republicans, but he withdrew before the election. After the Civil War, he lost much of his wealth in the unsuccessful Pacific Railroad in 1866, and he lost more in the Panic of 1873. Frémont served as Governor of the Arizona Territory from 1878 to 1881. After his resignation as governor, he retired from politics and died destitute in New York City in 1890. Historians portray Frémont as controversial, impetuous, and contradictory. Some scholars regard him as a military hero of significant accomplishment, while others view him as a failure who repeatedly defeated his own best interests. The keys to Frémont's character and personality, several historians argue, lie in his having been born "illegitimate" (to unwed parents) and in his drive for success, need for self-justification, and passive-aggressive behavior. His biographer Allan Nevins wrote that Frémont lived a dramatic life of remarkable successes and dismal failures. Early life, education, and early career John Charles Frémont was born on January 21, 1813, the son of Charles Frémon, a French-Canadian immigrant school-teacher, and Anne Beverley Whiting, the youngest daughter of socially prominent Virginia planter Col. Thomas Whiting. At age 17, Anne married Major John Pryor, a wealthy Richmond resident in his early 60s. In 1810, Pryor hired Frémon to tutor his young wife Anne. 
Pryor confronted Anne when he found out she was having an affair with Frémon. Anne and Frémon fled to Williamsburg on July 10, 1811, later settling in Norfolk, Virginia, taking with them household slaves Anne had inherited. The couple later settled in Savannah, Georgia, where she gave birth to their son Frémont out of wedlock. Pryor published a divorce petition in the Virginia Patriot and charged that his wife had "for some time past indulged in criminal intercourse". When the Virginia House of Delegates refused the divorce petition, it was impossible for the couple to marry. In Savannah, Anne took in boarders while Frémon taught French and dancing. Their domestic slave, Black Hannah, helped raise young John. On December 8, 1818, Frémont's father died in Norfolk, Virginia, leaving Anne a widow to take care of John and several young children alone on a limited inherited income. Anne and her family moved to Charleston, South Carolina. Frémont, knowing his origins and coming from relatively modest means, grew up a proud, reserved, restless loner who although self-disciplined, was ready to prove himself and unwilling to play by the rules. The young Frémont was considered to be "precious, handsome, and daring," having the ability of obtaining protectors. A lawyer, John W. Mitchell, provided for Frémont's early education whereupon Frémont in May 1829 entered Charleston College, teaching at intervals in the countryside, but was expelled for irregular attendance in 1831. Frémont, however, had been grounded in mathematics and natural sciences. Frémont attracted the attention of eminent South Carolina politician Joel R. Poinsett, an Andrew Jackson supporter, who secured Frémont an appointment as a teacher of mathematics aboard the sloop USS Natchez, sailing the South American seas in 1833. Frémont resigned from the navy and was appointed second lieutenant in the U.S. Topographical Corps, surveying a route for the Charleston, Louisville, and Cincinnati railroad. Working in the Carolina mountains, Frémont desired to become an explorer. Between 1837 and 1838, Frémont's desire for exploration increased while in Georgia on reconnaissance to prepare for the removal of Cherokee Indians. When Poinsett became Secretary of War, he arranged for Frémont to assist French explorer and scientist Joseph Nicollet in exploring the lands between the Mississippi and Missouri rivers. Frémont became a first rate topographer, trained in astronomy, and geology, describing fauna, flora, soil, and water resources. Gaining valuable western frontier experience Frémont met Henry Sibley, Joseph Renville, J.B. Faribault, Étienne Provost, and the Sioux nation. Marriage and senatorial patronage Frémont's exploration work with Nicollet brought him in contact with Senator Thomas Hart Benton of Missouri, powerful chairman of the Senate Committee on Military Affairs. Benton invited Frémont to his Washington home where he met Benton's 16-year-old daughter Jessie Benton. A romance blossomed between the two; however, Benton was initially against it because Frémont was not considered upper society. In 1841, Frémont (age 28) and Jessie eloped and were married by a Catholic priest. Initially Benton was furious at their marriage, but in time, because he loved his daughter, he accepted their marriage and became Frémont's patron. Benton, Democratic Party leader for more than 30 years in the Senate, championed the expansionist movement, a political cause that became known as Manifest Destiny. 
The expansionists believed that the North American continent, from one end to the other, north and south, east and west, should belong to the citizens of the U.S. They believed it was the nation's destiny to control the continent. This movement became a crusade for politicians such as Benton and his new son-in-law. Benton pushed appropriations through Congress for national surveys of the Oregon Trail, the Oregon Country, the Great Basin, and the Sierra Nevada Mountains to California. Through his power and influence, Senator Benton obtained for Frémont the leadership, funding, and patronage of three expeditions. Frémont's explorations The opening of the American West began in 1804, when the Lewis and Clark Expedition (led by Meriwether Lewis and William Clark) started exploration of the new Louisiana Purchase territory to find a northwest passage up the Missouri River to the Pacific Ocean. President Thomas Jefferson had envisioned a Western empire, and also sent the Pike Expedition under Zebulon Pike to explore the southwest. American and European fur trappers, including Peter Skene Ogden and Jedediah Smith, explored much of the American West in the 1820s. Frémont, who would later be known as The Pathfinder, carried on this tradition of Western overland exploration, building on and adding to the work of earlier pathfinders to expand knowledge of the American West. Frémont's talent lay in his scientific documentation, publications, and maps based on his expeditions, making the American West accessible to many Americans. Beginning in 1842, Frémont led five western expeditions; however, between the third and fourth expeditions, Frémont's career took a fateful turn because of the Mexican–American War. Frémont's initial explorations – specifically, his timely scientific reports (co-authored by his wife Jessie) and their romantic writing style – encouraged Americans to travel West. A series of seven maps produced from his findings, published by the Senate in 1846, served as a guide for thousands of American emigrants, depicting the entire length of the Oregon Trail. First expedition (1842) When Nicollet was too ill to continue any further explorations, Frémont was chosen to be his successor. His first important expedition was planned by Benton, Senator Lewis Linn, and other Westerners interested in acquiring the Oregon Territory. The scientific expedition started in the summer of 1842 and was to explore the Wind River of the Rocky Mountains, examine the Oregon Trail through the South Pass, report on the rivers and the fertility of the lands, find optimal sites for forts, and describe the mountains beyond in Wyoming. By a chance meeting, Frémont was able to gain the valuable assistance of mountain man and guide Kit Carson. Frémont and his party of 25 men, including Carson, embarked from the Kansas River on June 15, 1842, following the Platte River to the South Pass, and starting from the Green River he explored the Wind River Range. Frémont climbed Frémont's Peak, planted an American flag, and claimed the Rocky Mountains and the West for the United States. On Frémont's return trip, he and his party carelessly rafted the swollen Platte River, losing much of their equipment. His five-month exploration, however, was a success, and he returned to Washington in October. 
Frémont and his wife Jessie wrote a Report of the Exploring Expedition to the Rocky Mountains (1843), which was printed in newspapers across the country; the public embraced his vision of the west not as a place of danger but as wide open and inviting lands to be settled. Second expedition (1843–1844) Frémont's successful first expedition led quickly to a second; it began in the summer of 1843. The more ambitious goal this time was to map and describe the second half of the Oregon Trail, find an alternate route to the South Pass, and push westward toward the Pacific Ocean on the Columbia River in Oregon Country. Frémont and his almost 40 well-equipped men left the Missouri River in May after he controversially obtained a 12-pound howitzer cannon in St. Louis. Frémont invited Carson on the second expedition, due to his proven skills, and he joined Frémont's party on the Arkansas River. Unable to find a new route through Colorado to the South Pass, Frémont took to the regular Oregon Trail, passing the main body of the great migration of 1843. His party stopped to explore the northern part of the Great Salt Lake, then traveled by way of Fort Hall and Fort Boise to Marcus Whitman's mission on the Walla Walla River at the Columbia River in Oregon. Frémont's endurance, energy, and resourcefulness over the long journey were remarkable. Traveling west along the Columbia, they came within sight of the Cascade Range peaks and saw Mount St. Helens and Mount Hood. Reaching the Dalles on November 5, Frémont left his party and traveled to the Hudson's Bay Company Fort Vancouver for supplies. Rather than turning around and heading back to St. Louis, Frémont resolved to explore the Great Basin between the Rockies and the Sierras and advance Benton's dream of acquiring the West for the United States. Frémont and his party turned south along the eastern flank of the Cascades to Pyramid Lake, which he named. Staying on the eastern side of the Sierra Nevada mountain range, they went on south as far as present-day Minden, Nevada, reaching the Carson River on January 18, 1844. From there Frémont turned west into the cold and snowy Sierra Nevada, becoming one of the first Americans to see Lake Tahoe. Carson successfully led Frémont's party through a new pass over the high Sierras, which Frémont named Carson Pass in his honor. Frémont and his party then descended the American River to Sutter's Fort (Spanish: Nueva Helvetia) at present-day Sacramento, California, in early March. Captain John Sutter, a Swiss-Mexican (and later American by treaty) immigrant and founder of the fort, received Frémont gladly and refitted his expedition party. While at Sutter's Fort, Frémont talked to American settlers, who were growing numerous, and found that Mexican authority over California was very weak. Leaving Sutter's Fort, Frémont and his men headed south along the eastern edge of the San Joaquin Valley, crossed Tehachapi Pass and Antelope Valley, struck the Spanish Trail at present-day Victorville, California, and then traveled northeast through present-day Las Vegas, through Utah, and back to the South Pass. Exploring the Great Basin, Frémont verified that all the land (centered on modern-day Nevada between Reno and Salt Lake City) was an endorheic basin, without any outlet rivers flowing towards the sea. The finding contributed greatly to a better understanding of North American geography, and disproved a longstanding legend of a "Buenaventura River" that flowed out of the Great Basin across the Sierra Nevada. 
After exploring Utah Lake, Frémont traveled by way of the Pueblo until he reached Bent's Fort on the Arkansas River. In August 1844, Frémont and his party finally arrived back in St. Louis, ending the journey that lasted over one year. His wife Jessie and Frémont returned to Washington, where the two wrote a second report, scientific in detail, showing the Oregon Trail was not difficult to travel and that the Northwest had fertile land. The Senate and House each ordered the printing of 10,000 copies to be distributed to the press and public, used to promote the cause of national expansion. Third expedition (1845) With the backdrop of an impending war with Mexico, after James K. Polk had been elected president, Benton quickly organized a third expedition for Frémont. The plan for Frémont under the War Department was to survey the central Rockies, the Great Salt Lake region, and part of the Sierra Nevada. Back in St. Louis, Frémont organized an armed surveying expedition of 60 men, with Carson as a guide, and two distinguished scouts, Joseph Walker and Alexis Godey. Working with Benton and Secretary of Navy George Bancroft, Frémont was secretly told that if war started with Mexico he was to turn his scientific expedition into a military force. President Polk, who had met with Frémont at a cabinet meeting, was set on taking California. Frémont desired to conquer California for its beauty and wealth, and would later explain his very controversial conduct there. On June 1, 1845, Frémont and his armed expedition party left St. Louis having the immediate goal to locate the source of the Arkansas River, on the east side of the Rocky Mountains. Frémont and his party struck west by way of Bent's Fort, The Great Salt Lake, and the "Hastings Cut-Off". When Frémont reached the Ogden River, which he renamed the Humboldt, he divided his party in two to double his geographic information. Upon reaching the Arkansas River, Frémont suddenly made a blazing trail through Nevada straight to California, having a rendezvous with his men from the split party at Walker Lake in west-central Nevada. Attacks against Native Americans in California and Oregon Country (1845–1846) Taking 16 men, Frémont split his party again, arriving at Sutter's Fort in the Sacramento Valley on December 9. Frémont promptly sought to stir up patriotic enthusiasm among the American settlers there. He promised that if war with Mexico started, his military force would protect the settlers. Frémont went to Monterey, California, to talk with the American consul, Thomas O. Larkin, and Mexican commandant Jose Castro, under the pretext of gaining fuller supplies. In February 1846, Frémont reunited with 45 men of his expedition party near Mission San José, giving the United States a relatively strong military presence in California. Castro and Mexican officials were suspicious of Frémont and he was ordered to leave the country. Frémont and his men withdrew and camped near the summit of what is now named Fremont Peak. Frémont raised the United States Flag in defiance of Mexican authority. After a four-day standoff and Castro having a superior number of Mexican troops, Frémont and his men went north to Oregon, bringing about the Sacramento River massacre along the way. Estimates of the casualties vary. Expedition members Thomas E. Breckenridge and Thomas S. 
Martin claim the number of Native Americans killed as "120–150" and "over 175" respectively, but the eyewitness Tustin claimed that at least 600–700 Native Americans were killed on land, with another 200 or more dying in the water. There are no records of any expedition members being killed or wounded in the massacre. Kit Carson, one of the mounted attackers, later stated, "It was a perfect butchery." Frémont and his men eventually made their way to camp at Klamath Lake, killing Native Americans on sight as they went. On May 8, Frémont was overtaken by Lieutenant Archibald Gillespie from Washington, who gave him copies of dispatches he had previously given to Larkin. Gillespie conveyed to Frémont secret instructions from Benton and Buchanan justifying aggressive action and told him that a declaration of war with Mexico was imminent. On May 9, 1846, Native Americans ambushed his expedition party in retaliation for numerous killings of Native Americans that Frémont's men had engaged in along the trail, killing three members of Frémont's party in their sleep, including a Native American who was traveling with Frémont. Frémont retaliated by attacking a Klamath fishing village named Dokdokwas the following day in the Klamath Lake massacre, although the people living there might not have been involved in the first action. The village was at the junction of the Williamson River and Klamath Lake. On May 12, 1846, the Frémont group completely destroyed it, killing at least fourteen people. Frémont believed that the British were responsible for arming and encouraging the Native Americans to attack his party. Afterward, Carson was nearly killed by a Klamath warrior. As Carson's gun misfired, the warrior drew to shoot a poison arrow; however, Frémont, seeing that Carson was in danger, trampled the warrior with his horse. Carson felt that he owed Frémont his life. A few weeks later, Frémont and his armed militia returned to California. Mexican–American War (1846–1848) Having reentered Mexican California and headed south, Frémont and his army expedition stopped off at Peter Lassen's Rancho Bosquejo on May 24, 1846. Frémont learned from Lassen that the USS Portsmouth, commanded by John B. Montgomery, was anchored at Sausalito. Frémont sent Lt. Gillespie to Montgomery and requested supplies including 8,000 percussion caps, rifle lead, one keg of powder, and food provisions, intending to head back to St. Louis. On May 31, Frémont made his camp on the Bear and Feather rivers north of Sutter's Fort, where American immigrants ready for revolt against Mexican authority joined his party. From there he made another attack on local Native Americans in a rancheria (see Sutter Buttes massacre). In early June, believing war with Mexico to be a virtual certainty, Frémont joined the Sacramento Valley insurgents in a "silent partnership", rather than head back to St. Louis as originally planned. On June 10, instigated by Frémont, four men from Frémont's party and 10 rebel volunteers seized 170 horses intended for Castro's army and returned them to Frémont's camp. According to historian H. H. Bancroft, Frémont incited the American settlers indirectly and "guardedly" to revolt. On June 14, 34 armed rebels independently captured Sonoma, the largest settlement in northern California, and forced the surrender of Colonel Mariano Vallejo, taking him and three others prisoner. 
The following day, the rebelling Americans, who were called Osos (Spanish for "bears") by the residents of Sonoma, amidst a brandy-filled party, hoisted a roughly sewn flag and formed the Bear Flag Republic, electing William Ide as their leader. The four prisoners were then taken to Frémont's camp. On June 15, the prisoners and escorts arrived at Frémont's new camp on the American River, but Frémont publicly denied responsibility for the raid. The escorts then removed the prisoners south to Sutter's Fort, where they were imprisoned by Sutter under Frémont's orders. It was at this time that Frémont began signing letters as "Military Commander of U.S. Forces in California". On June 24, Frémont and his men, upon hearing that the Californio (a person of Spanish or Mexican descent) Juan N. Padilla had captured, tortured, killed, and mutilated the bodies of two Osos and held others prisoner, rode to Sonoma, arriving on June 25. On June 26, Frémont, his own men, Lieutenant Henry Ford, and a detachment of Osos, totaling 125 men, rode south to San Rafael, searching for Captain Joaquin de la Torre and his lancers, rumored to have been ordered by Castro to attack Sonoma, but were unable to find them. On June 28, General Castro, on the other side of San Francisco Bay, sent a row boat across to Point San Pablo on the shores of San Rafael with a message for de la Torre. Kit Carson, Granville Swift, and Sam Neal rode to the beach to intercept the three unarmed men who came ashore, including Don José Berreyesa and the 20-year-old de Haro twin brothers Ramon and Francisco, sons of Don Francisco de Haro. The three were murdered in cold blood. Exactly who committed the murders is a point of controversy, but later accounts point to Carson acting at the behest, if not the order, of Frémont. On July 1, Commodore John D. Sloat, commanding the U.S. Navy's Pacific Squadron, sailed into Monterey harbor with orders to seize San Francisco Bay and blockade the other California ports upon learning "without a doubt" that war had been declared. On July 5, Sloat received a message from Montgomery reporting the events in Sonoma and Frémont's involvement. Believing Frémont to be acting on orders from Washington, Sloat began to carry out his orders. Early on July 7, 225 sailors and marines from the United States Navy frigate USS Savannah and the two sloops USS Cyane and USS Levant landed and captured Monterey without a shot being fired and raised the flag of the United States. Commodore Sloat had his proclamation read and posted in English and Spanish: "... henceforth California would be a portion of the United States." On July 10, Frémont received a message from Montgomery that the U.S. Navy had occupied Monterey and Yerba Buena. Two days later, Frémont received a letter from Sloat describing the capture of Monterey and ordering Frémont to bring at least 100 armed men to Monterey; Frémont would bring 160 men. On July 15, Commodore Robert F. Stockton arrived in Monterey to replace the 65-year-old Sloat in command of the Pacific Squadron. Sloat named Stockton commander-in-chief of all land forces in California. On July 19, Frémont's party entered Monterey, where he met with Sloat on board the Savannah. When Sloat learned that Frémont had acted on his own authority (thus raising doubt about a war declaration), he retired to his cabin. 
On July 23, Stockton mustered Frémont's party and the former Bear Flaggers into military service as the "Naval Battalion of Mounted Volunteer Riflemen", with Frémont appointed major in command of the California Battalion, which he had helped form with his survey crew and volunteers from the Bear Flag Republic, now totaling 428 men. Stockton incorporated the California Battalion into the U.S. military, giving them soldiers' pay. Frémont and about 160 of his troops went by ship to San Diego, and with Stockton's marines took Los Angeles on August 13. Frémont afterwards went north to recruit more Californians into his battalion. In late 1846, under orders from Stockton, Frémont led a military expedition of 300 men to capture Santa Barbara. In September, Mexican Californians unwilling to be ruled by the United States, under José María Flores, fought back and retook Los Angeles, driving out the Americans. In December 1846, U.S. Brigadier General Stephen W. Kearny arrived in California under "orders from President Polk", having taken New Mexico and then marched on to California, where, "Should you conquer and take possession of California, you will establish a civil government." Kearny had earlier trimmed his forces from 300 to 100 dragoons based upon Kit Carson's dispatches, which Carson was carrying to Washington and which stated that Stockton and Frémont had successfully taken control of California. Unknown to Carson at this time, the Californians had revolted, which led Kearny into a disastrous attack on waiting Mexican lancers at the Battle of San Pasqual, where he lost 19 men and was himself seriously wounded by a lance. He was later reinforced when Stockton sent troops to drive off Pio Pico and his forces. It was at this time that a dispute began between Stockton and Kearny over who had control of the military, but the two managed to work together to stop the Los Angeles uprising. Frémont led his unit over the Santa Ynez Mountains at San Marcos Pass in a rainstorm on the night of December 24, 1846. Despite losing many of his horses, mules, and cannons, which slid down the muddy slopes during the rainy night, his men regrouped in the foothills (behind what is today Rancho Del Ciervo) the next morning, and captured the Presidio of Santa Barbara and the town without bloodshed. A few days later, Frémont led his men southeast towards Los Angeles. Frémont accepted Andrés Pico's surrender upon the signing of the Treaty of Cahuenga on January 13, 1847, which terminated the war in upper California. It was at this time that Kearny ordered Frémont to join his military dragoons, but Frémont refused, believing he was under the authority of Stockton. Court martial and resignation On January 16, 1847, Commodore Stockton appointed Frémont military governor of California following the Treaty of Cahuenga, and then left Los Angeles. Frémont functioned for a few weeks without controversy, but he had little money to administer his duties as governor. Previously, unknown to Stockton and Frémont, the Navy Department had sent orders for Sloat and his successors to establish military rule over California. These orders, however, postdated Kearny's orders to establish military control over California. Kearny did not have the troop strength to enforce those orders, and was forced to rely on Stockton's Marines and Frémont's California Battalion until army reinforcements arrived. On February 13, specific orders were sent from Washington through Commanding General Winfield Scott giving Kearny the authority to be military governor of California. 
Kearny, however, did not directly inform Frémont of these orders from Scott. Kearny ordered that Frémont's California Battalion be enlisted into the U.S. Army and Frémont bring his battalion archives to Kearny's headquarters in Monterey. Frémont delayed obeying these orders, hoping Washington would send instructions for Frémont to be military governor. Also, the California Battalion refused to join the U.S. Army. Frémont gave orders for the California Battalion not to surrender arms, rode to Monterey to talk to Kearny, and told Kearny he would obey orders. Kearny sent Col. Richard B. Mason, who was to succeed Kearny as military governor of California, to Los Angeles, both to inspect troops and to give Frémont further orders. Frémont and Mason, however, were at odds with each other and Frémont challenged Mason to a duel. After an arrangement to postpone the duel, Kearny rode to Los Angeles and refused Frémont's request to join troops in Mexico. Ordered to march with Kearny's army back east, Frémont was arrested on August 22, 1847, when they arrived at Fort Leavenworth. He was charged with mutiny, disobedience of orders, assumption of powers, and several other military offenses. Ordered by Kearny to report to the adjutant general in Washington to stand for court-martial, Frémont was found innocent of mutiny, but was convicted on January 31, 1848, of disobedience toward a superior officer and military misconduct. While approving the court's decision, President James K. Polk quickly commuted Frémont's sentence of dishonorable discharge and reinstated him into the Army, due to his war services. Polk felt that Frémont was guilty of disobeying orders and misconduct, but he did not believe Frémont was guilty of mutiny. Additionally, Polk wished to placate Thomas Hart Benton, a powerful senator and Frémont's father-in-law, who felt that Frémont was innocent. Frémont, only gaining a partial pardon from Polk, resigned his commission in protest and settled in California. Despite the court-martial, Frémont remained popular among the American public. Historians are divided in their opinions on this period of Frémont's career. Mary Lee Spence and Donald Jackson, editors of a large collection of letters by Fremont and others dating from this period, concluded that "...in the California episode, Frémont was as often right as wrong. And even a cursory investigation of the court-martial record produces one undeniable conclusion: neither side in the controversy acquitted itself with distinction." Allan Nevins states that Kearny: was a stern-tempered soldier who made few friends and many enemies – who has been justly characterized by the most careful historian of the period, Justin H. Smith, as "grasping, jealous, domineering, and harsh." Possessing these traits, feeling his pride stung by his defeat at San Pasqual, and anxious to assert his authority, he was no sooner in Los Angeles than he quarreled bitterly with Stockton; and Frémont was not only at once involved in this quarrel, but inherited the whole burden of it as soon as Stockton left the country. Theodore Grivas wrote that "It does not seem quite clear how Frémont, an army officer, could have imagined that a naval officer [Stockton] could have protected him from a charge of insubordination toward his superior officer [Kearny]". Grivas goes on to say, however, that "This conflict between Kearny, Stockton, and Frémont perhaps could have been averted had methods of communication been what they are today." 
Fourth expedition (1848–1849) Intent on restoring his honor and explorer reputation after his court martial, in 1848 Frémont and his father-in-law Sen. Benton developed a plan to advance their vision of Manifest Destiny. With a keen interest in the potential of railroads, Sen. Benton had sought support from the Senate for a railroad connecting St. Louis to San Francisco along the 38th parallel, the latitude which both cities approximately share. After Benton failed to secure federal funding, Frémont secured private funding. In October 1848 he embarked with 35 men up the Missouri, Kansas and Arkansas rivers to explore the terrain. The artists and brothers Edward Kern and Richard Kern, and their brother Benjamin Kern, were part of the expedition, but Frémont was unable to obtain the valued services of Kit Carson as guide, as in his previous expeditions. On his party's reaching Bent's Fort, he was strongly advised by most of the trappers against continuing the journey. Already a foot of snow was on the ground at Bent's Fort, and the winter in the mountains promised to be especially snowy. Part of Frémont's purpose was to demonstrate that a 38th parallel railroad would be practical year-round. At Bent's Fort, he engaged "Uncle Dick" Wootton as guide, and at what is now Pueblo, Colorado, he hired the eccentric Old Bill Williams and moved on. Had Frémont continued up the Arkansas, he might have succeeded. On November 25 at what is now Florence, Colorado, he turned sharply south. By the time his party crossed the Sangre de Cristo Range via Mosca Pass, they had already experienced days of bitter cold, blinding snow and difficult travel. Some of the party, including the guide Wootton, had already turned back, concluding that further travel would be impossible. Benjamin Kern and "Old Bill" Williams were killed by Ute warriors while retracing the expedition trail to look for gear and survivors. Although the passes through the Sangre de Cristo had proven too steep for a railroad, Frémont pressed on. From this point the party might still have succeeded had they gone up the Rio Grande to its source, or gone by a more northerly route, but the route they took brought them to the very top of Mesa Mountain. By December 12, on Boot Mountain, it took ninety minutes to progress three hundred yards. Mules began dying, and by December 20 only 59 animals remained alive. It was not until December 22 that Frémont acknowledged that the party needed to regroup and be resupplied. They began to make their way to Taos in the New Mexico Territory. By the time the last surviving member of the expedition made it to Taos on February 12, 1849, 10 of the party had died and been eaten by the survivors. Except for the efforts of member Alexis Godey, another 15 would have been lost. After recuperating in Taos, Frémont and only a few of the men left for California via an established southern trade route. Edward and Richard Kern joined J.H. Simpson's military reconnaissance expedition to the Navajos in 1849, and gave the American public some of its earliest authentic graphic images of the people and landscape of Arizona, New Mexico, and southern Colorado, with views of Canyon de Chelly, Chaco Canyon, and El Morro (Inscription Rock). In 1850, Frémont was awarded the Patron's Medal by the Royal Geographical Society for his various exploratory efforts. Rancho Las Mariposas On February 10, 1847, Frémont purchased a 70-square-mile parcel of land in the Sierra foothills through land speculator Thomas Larkin for $3,000. 
Known as Las Mariposas (Spanish for "The Butterflies"), an allusion to the great number of Monarch butterflies found there, the land had previously been owned by former California governor Juan Bautista Alvarado and his wife Martina Caston de Alvarado. Frémont had hoped that Las Mariposas was near San Francisco or Monterey, but was disappointed to learn that it was further inland, near Yosemite, on the Miwok Indians' hunting and gathering grounds. After his court martial in 1848, Frémont moved to Las Mariposas and became a rancher, borrowing money from his father-in-law Benton and Senator John Dix to construct a house, corral, and barn. Frémont ordered a sawmill and had it shipped by the Aspinwall steamer Fredonia to Las Mariposas. Frémont was informed by Sonora Mexicans that gold had been discovered on his property. Frémont was instantly a wealthy man; a five-mile quartz vein produced hundreds of pounds of placer gold each month. In 1851 Hiland Hall, a former Governor of Vermont, was appointed chairman of the federal commission created to settle Mexican land titles in California; he traveled to San Francisco to begin his work, and his son-in-law Trenor W. Park traveled with him. Frémont hired Park as a managing partner to oversee the day-to-day activities of the estate, and Mexican laborers to wash out the gold on his property in exchange for a percentage of the profits. Frémont acquired large landholdings in San Francisco, and while developing his Las Mariposas gold ranch, he lived a wealthy lifestyle in Monterey. Legal issues, however, soon mounted over property and mineral rights. Disputes erupted as squatters moved on Frémont's Las Mariposas land mining for gold. There was question whether the three mining districts on the land were public domain, while the Merced Mining Company was actively mining on Frémont's property. Since Alvarado had purchased Las Mariposas on a "floating grant", the property borders were not precisely defined by the Mexican government. Alvarado's ownership of the land was legally contested since Alvarado never actually settled on the property as required by Mexican law. All of these matters lingered and were argued in court for many years until the Supreme Court finally ruled in Frémont's favor in 1856. Although Frémont's legal victory allowed him to keep his wealth, it created lingering animosity among his neighbors. During the late 1850s, Frederick H. Billings, a partner in the Halleck, Peachy & Billings law firm that employed Park, partnered with Frémont in several successful business ventures. Billings later embarked on several trips to Europe in an unsuccessful effort to sell Frémont's Mariposa mine shares. At the start of the American Civil War, Billings acted as Frémont's agent when Frémont took the initiative to purchase arms in England for use by Union troops. U.S. Senator from California (1850–1851) On November 13, 1849, General Bennet C. Riley, without Washington approval, called for a state election to ratify the new California State constitution. On December 20, the California legislature voted to seat two senators to represent the state in the Senate. The front-runner was Frémont, a Free Soil Democrat, known for being a western hero, and regarded by many as an innocent victim of an unjustified court-martial. The other candidates were T. Butler King, a Whig, and William Gwin, a Democrat. 
Frémont easily won the first Senate seat with 29 of 41 votes, and Gwin, who had Southern backing, won the second seat with 24 of 41 votes. By a random draw of straws, Gwin won the longer Senate term while Frémont won the shorter one. In Washington, Frémont, whose California ranch had been purchased from a Mexican land grantee, supported an unsuccessful law that would have rubber-stamped Mexican land grants, and another law that prevented foreign workers from owning gold claims (Frémont's ranch was in gold country), derisively called "Frémont's Gold Bill". Frémont voted against harsh penalties for those who assisted runaway slaves and he was in favor of abolishing the slave trade in the District of Columbia. Democratic pro-slavery opponents of Frémont, called the Chivs, strongly opposed Frémont's re-election, and endorsed Solomon Heydenfeldt. Rushing back to California in hopes of thwarting the Chivs, Frémont started his own election newspaper, the San Jose Daily Argus; it was to no avail, and he was unable to get enough votes for re-election to the Senate. Neither Heydenfeldt nor King, who ran against Frémont a second time, was able to obtain a majority of votes, leaving Gwin as California's lone senator. Frémont's term lasted 175 days from September 10, 1850, to March 3, 1851, and he served only 21 working days in Washington in the Senate. Pro-slavery John B. Weller, supported by the Chivs, was elected one year later to the vacant Senate seat previously held by Frémont. Fifth expedition (1853–1854) In the fall of 1853, Frémont embarked on another expedition to identify a viable route for a transcontinental railroad along the 38th parallel. The party journeyed between Missouri and San Francisco, California, over a combination of known trails and unexplored terrain. A primary objective was to pass through the Rocky Mountains and Sierra Nevada Mountains during winter to document the amount of snow and the feasibility of winter rail passage along the route. His photographer (daguerreotypist) was Solomon Nunes Carvalho. Frémont followed the Santa Fe Trail, passing Bent's Fort before heading west and entering the San Luis Valley of Colorado in December. The party then followed the North Branch of the Old Spanish Trail, crossing the Continental Divide at Cochetopa Pass and continuing west into central Utah. But following the trail was made difficult by snow cover. On occasion, they were able to detect evidence of Captain John Gunnison's expedition, which had followed the North Branch just months before. Weeks of snow and bitter cold took their toll and slowed progress. Nonessential equipment was abandoned and one man died before the struggling party reached the Mormon settlement of Parowan in southwestern Utah on February 8, 1854. After spending two weeks in Parowan to regain strength, the party continued across the Great Basin and entered the Owens Valley near present-day Big Pine, California, on the eastern flank of the Sierra Nevada Mountains. Frémont journeyed south before crossing the Sierra Nevada and entering the Kern River drainage, which he and his party then followed west into the San Joaquin Valley. Frémont arrived in San Francisco on April 16, 1854. Having completed a winter passage across the mountainous West, Frémont was optimistic that a railroad along the 38th parallel was viable and that winter travel along the line would be possible through the Rocky Mountains. 
Republican Party presidential candidate (1856) In 1856, Frémont (age 43) became the first presidential candidate of the newly formed Republican Party. The Republicans, whose party had been established in 1854, were united in their opposition to the Pierce Administration and the spread of slavery into the West. Initially, Frémont was asked to be the Democratic candidate by former Virginia Governor John B. Floyd and the powerful Preston family. Frémont announced that he was for Free Soil Kansas and was against the enforcement of the 1850 Fugitive Slave Law. However, Republican leaders Nathaniel P. Banks, Henry Wilson, and John Bigelow persuaded Frémont to join their party. Seeking a united front and a fresh face for the party, the Republicans nominated Frémont for president over other candidates, with the conservative William L. Dayton of New Jersey for vice president, at their June 1856 convention held in Philadelphia. The Republican campaign used the slogan "Free Soil, Free Men, and Frémont" to crusade for free farms (homesteads) and against the Slave Power. Frémont, popularly known as The Pathfinder, had voter appeal and remained the symbol of the Republican Party. The Democratic Party nominated James Buchanan. Frémont's wife Jessie, Bigelow, and Isaac Sherman ran Frémont's campaign. As the daughter of a senator, Jessie had been raised in Washington, and she understood politics better than Frémont. Many treated Jessie as an equal political professional, while Frémont was treated as an amateur. She received far more popular attention than was usual for a potential First Lady, and Republicans celebrated her participation in the campaign, calling her Our Jessie. Jessie and the Republican propaganda machine ran a strong campaign, but she was unable to get her powerful father, Senator Benton, to support Frémont. While praising Frémont, Benton announced his support for Buchanan. Frémont, along with the other presidential candidates, did not actively participate in the campaign, and he mostly stayed home at 56 West Street, in New York City. This practice was typical in presidential campaigns of the 19th century. To win the presidency, the Republicans concentrated on four swing states: Pennsylvania, New Jersey, Indiana, and Illinois. Republican luminaries were sent out decrying the Democratic Party's attachment to slavery and its support of the repeal of the Missouri Compromise. The experienced Democrats, knowing the Republican strategy, also targeted these states, running a rough media campaign, while illegally naturalizing thousands of alien immigrants in Pennsylvania. The campaign was particularly abusive, as the Democrats attacked Frémont's illegitimate birth and alleged that Frémont was Catholic. In a counter-crusade against the Republicans, the Democrats ridiculed Frémont's military record and warned that his victory would bring civil war. Much of the private rhetoric of the campaign focused on unfounded rumors regarding Frémont: talk of him as president taking charge of a large army that would support slave insurrections, the likelihood of widespread lynchings of slaves, and whispered hope among slaves for freedom and political equality. Frémont's campaign was headquartered near his home (St. George) next to the Clifton ferry landing. Many campaign rallies were held on the lawn, now the corner of Greenfield and Bay Street. Frémont was defeated, having placed second to James Buchanan in a three-way election; he did not carry his home state of California. 
Frémont received 114 electoral votes to Buchanan's 174. Millard Fillmore ran as a third-party candidate representing the American (Know Nothing) Party. In the popular vote on November 4, 1856, Buchanan received 1,836,072 votes to Frémont's 1,342,345. Frémont carried 11 states, and Buchanan carried 19. The Democrats were better organized, while the Republicans had to operate on limited funding. After the campaign, Frémont returned to California and devoted himself to his mining business on the Mariposa gold estate, estimated by some to be valued at ten million dollars. Frémont's title to the Mariposa land had been confirmed by the U.S. Supreme Court in 1856. American Civil War At the start of the Civil War, Frémont was touring Europe in an attempt to find financial backers for his California Las Mariposas estate ranch. President Abraham Lincoln wanted to appoint Frémont as the American minister to France, thereby taking advantage of his French ancestry and the popularity in Europe of his anti-slavery positions. However, Secretary of State William Henry Seward objected to Frémont's radicalism, and the appointment was not made. Instead, Lincoln appointed Frémont a major general in the Union Army on May 15, 1861. He arrived in Boston from England on June 27, 1861, and Lincoln named him commander of the Department of the West on July 1, 1861. The Western Department included Illinois and the area west of the Mississippi River to the Rocky Mountains. After Frémont arrived in Washington, D.C., he conferred with Lincoln and Commanding General Winfield Scott and drew up a plan to clear all Confederates out of Missouri, then mount a general campaign down the Mississippi and advance on Memphis. According to Frémont, while they talked on the steps of the White House portico, Lincoln gave him carte blanche authority to conduct his campaign and to use his own judgment. Frémont's main goal as commander of the Western Armies was to protect Cairo, Illinois, at all costs in order for the Union Army to move southward on the Mississippi River. Both Frémont and his subordinate, General John Pope, believed that Ulysses S. Grant was the fighting general needed to secure Missouri from the Confederates. Frémont also had to contend with the hard-driving Union General Nathaniel Lyon, whose irregular war policy disturbed the complex loyalties of Missouri. Department of the West (1861) Command and duties On July 25, 1861, Frémont arrived in St. Louis and formally took command of a Department of the West that was in crisis. Frémont was forty-eight years old, grey-haired and considered handsome. He brought with him a great reputation as "the Pathfinder of the West", earned over eleven years of topographical service, and he was focused on driving the Confederate forces from Missouri. Frémont had to organize an army in a slave state that was largely disloyal, with only a limited number of Union soldiers, supplies, and arms. Guerilla warfare was breaking out, and two Confederate armies were planning to capture Springfield and invade Illinois to take Cairo. Frémont's duties upon taking command of the Western Department were broad, his resources were limited, and the secession crisis in Missouri appeared to be uncontrollable. Frémont was responsible for safeguarding Missouri and all of the Northwest. His mission was to organize, equip, and lead the Union Army down the Mississippi River, reopen commerce, and break off the Western part of the Confederacy. 
Frémont was given only 23,000 men, whose volunteer 3-month enlistments were about to expire. Western governors sent more troops to Frémont, but he did not have any weapons with which to arm them. There were no uniforms or military equipment either, and the soldiers were subject to food rationing, poor transportation, and lack of pay. Frémont's intelligence was also faulty, leading him to believe the Missouri state militia and the Confederate forces were twice as numerous as they actually were. Blair feud and corruption charges Frémont's arrival brought an aristocratic air that raised eyebrows and general disapproval among the people of St. Louis. Soon after he came into command, Frémont became involved in a political feud with Frank Blair, a member of the powerful Blair family and brother of Lincoln's postmaster general, Montgomery Blair. To gain control of Missouri politics, Blair complained to Washington that Frémont was "extravagant" and that his command was brimming with a "horde of pirates" who were defrauding the army. This caused Lincoln to send Adjutant General Lorenzo Thomas to check on Frémont; Thomas reported that Frémont was incompetent and had made questionable army purchases. The imbroglio became a national scandal, and Frémont was unable to keep a handle on supply affairs. A Congressional subcommittee investigation headed by Elihu B. Washburne and a later Commission on War Claims investigation into the entire Western Department confirmed that many of Blair's charges were true. Frémont ran his headquarters in St. Louis "like a European autocrat", a style perhaps acquired during a sojourn through France before his appointment by President Lincoln. In St. Louis, Frémont rented a lavish mansion for $6,000 a year, paid for by the government, and surrounded himself with Hungarian and Italian guards in brassy uniforms. Frémont additionally set up a headquarters bodyguard of 300 Kentucky men, chosen for their uniform physical attributes. Frémont had surrounded himself with California associates, who made huge profits by securing army contracts without the competitive bidding required by federal law. One Californian contracted for the construction of 38 mortar boats for $8,250 apiece, almost double what they were worth. Another Californian, a personal friend of Frémont's with no construction experience, received a contract worth $191,000 to build a series of forts, which should have cost one third less. Frémont's favorite sellers received "the most stupendous contracts" for railroad cars, horses, Army mules, tents, and other equipment, most of them of shoddy quality. A rumor spread in Washington that Frémont was planning to start his own republic or empire in the West. Frémont's supply operation, headed by Major Justus McKinstry, also came under scrutiny for graft and profiteering. Frémont's biographer Nevins stresses that much of Frémont's trouble stemmed from the fact that the newly created Western Department was without organization, war materials, and trained recruits, while waste and corruption were endemic in the War Department under Lincoln's first secretary of war, Simon Cameron. Confederate capture of Springfield Earlier, in May, a tough, impetuous Regular Army captain, Nathaniel Lyon, exercising irregular authority, had led troops who captured a legal contingent of Missouri state militia camped in a Saint Louis suburb; during the capture, civilians were killed. 
Missouri had not officially seceded from the Union when Lyon was promoted to brigadier general by President Abraham Lincoln and appointed temporary commander of the Department of the West. Lyon, who believed a show of force would keep Missouri in the Union, effectively declared war on secession-minded Missouri governor Claiborne Jackson, who was driven by Lyon to the Ozarks. Lyon occupied Jefferson City, the state capital, and installed a pro-Union state government. However, Lyon became trapped at Springfield with only 6,000 men (including Union Colonel Franz Sigel and his German corps). A primary concern for Frémont, after he assumed command, was the protection of Cairo, a Union-occupied city on the Mississippi River, vital to the security of the Union Army's western war effort. It contained too few troops to defend against a Confederate attack. Compared to the Confederates, Frémont's forces were dispersed and disorganized. Frémont ordered Lyon to retreat from Springfield and fall back to Rolla, while Frémont sent reinforcement troops to Cairo rather than to Lyon, who had requested more troops. Frémont believed, with some justification, that the Confederates were planning to attack Cairo. Lyon, however, hastily chose to attack Confederate General Sterling Price at the Battle of Wilson's Creek, rather than retreat. During the battle Lyon was shot through the heart and died instantly. The Union line broke, and, as at the First Battle of Bull Run in the east, the Confederates won the battle and captured Springfield, opening western Missouri to Confederate advance. Frémont was severely criticized for the defeat and for Lyon's death, having sent troops to reinforce Cairo rather than to help Lyon's depleted forces south of Springfield. Response to Confederate threat Responding as best he could to the Confederate and state militia threat, Frémont raised volunteer troops, purchased weapons and equipment on the open market, and sent his wife, Jessie, to Washington, D.C., where she lobbied President Lincoln for more reinforcements. While commanding the Department of the West, Frémont was looking for a brigadier general to command a post at Cairo. At first Frémont was going to appoint John Pope, but upon the recommendation of Major McKinstry, he interviewed the unobtrusive Brigadier General Ulysses S. Grant. Grant had a reputation for being a "drifter and a drunkard" in the Old Army, but Frémont judged Grant independently. Frémont concluded that Grant was an "unassuming character not given to self elation, of dogged persistence, of iron will". Frémont chose Grant and appointed him commander of the Cairo post at the end of August 1861. Grant was sent to Ironton, with 3,000 untrained troops, to stop a potential Confederate attack led by Confederate General William J. Hardee. Immediately thereafter, a week after the Battle of Wilson's Creek, Frémont sent Grant to Jefferson City to keep it safe from a potential attack by Confederate General Price. Grant got the situation at Jefferson City under control, drilling and disciplining troops, improving supply lines, and deploying troops on the outskirts of the city. The city was kept safe as Price and his troops, badly battered from the Battle of Wilson's Creek, retreated. With Price retreating, Frémont became more aggressive and went on the offensive. Frémont knew the key to victory in the West was capturing control of the Mississippi River for the Union forces. 
Frémont decided to meet Confederate General Leonidas Polk head-on to control the trunk of the Mississippi. In a turning point of the Civil War, on August 27, 1861, Frémont gave Ulysses S. Grant field command of a combined Union offensive whose goals were to capture Memphis, Vicksburg, and New Orleans and to keep Missouri and Illinois safe from Confederate attack. On August 30, Grant assumed charge of the Union Army on the Mississippi. With Frémont's approval, Grant proceeded to capture Paducah, Kentucky, without firing a shot, after Polk had violated Kentucky neutrality and had captured Columbus. The result was that the Kentucky legislature voted to remain in the Union. Recaptured Springfield Desiring to regain the upper hand and make up for Union losses at the Battle of Wilson's Creek and the occupation of Lexington, Frémont and about 40,000 troops set out to regain Springfield. On October 25, 1861, Frémont's forces, led by Major Charles Zagonyi, won the First Battle of Springfield. This was the first and only Union victory in the West for the year 1861. On November 1, Frémont ordered Grant to make a demonstration against Belmont, a steamboat landing across the river from Columbus, in an effort to drive Confederate General Price from Missouri. Grant had earlier requested to attack Columbus, but Frémont had overruled Grant's initiative. Emancipation edict controversy Frémont came under increasing pressure for decisive action, as Confederates controlled half of Missouri, Confederate troops under Price and McCulloch remained ready to strike, and rebel guerillas were wreaking havoc, wrecking trains, cutting telegraph lines, burning bridges, raiding farms, and attacking Union posts. Confederate sympathies in the strongly slave-holding counties needed to be reduced or broken up. Confederate warfare was causing thousands of Union loyalists to take refuge, destitute, in Illinois, Iowa, and Kansas. Radicals in his camp and his wife Jessie urged Frémont to free the slaves of known Confederate supporters. They argued that these men were in rebellion and no longer protected by the Constitution, and that it was legal to confiscate rebel property, including their slaves. At dawn on August 30, 1861, Frémont, without notifying President Lincoln, issued a proclamation putting Missouri under martial law. The edict declared that civilians taken in arms against the United States would be subject to court martial and execution, that the property of those who aided secessionists would be confiscated, and that the slaves of all rebels were immediately emancipated. This last clause caused much concern. Kentucky was still "neutral", and Unionists there feared Frémont's action would sway opinion toward secession. One group in Louisville implored President Abraham Lincoln's friend Joshua Speed to tell Lincoln: "[T]here is not a day to lose in disavowing emancipation or Kentucky is gone over the mill dam." Lincoln, fearing that Frémont's emancipation order would tip Missouri (and other border states) to secession, asked Frémont to revise the order. Frémont refused to do so, and sent his wife to plead his case. President Lincoln told Jessie that Frémont "should never have dragged the Negro into the war". When Frémont remained obdurate, Lincoln publicly revoked the emancipation clause of the proclamation on September 11. Frémont's abolitionist allies attacked Lincoln for this, creating more bad feeling. Meanwhile, the War Department compiled a report on Frémont's misconduct as commander in Missouri. 
This included the arrest of Frank Blair, which ended Frémont's alliance with the Blair family, who had backed him for the presidential nomination in 1856. Finally Lincoln decided Frémont had to go. He issued an order removing Frémont from command of the Western Department, which was hand-delivered to him by Lincoln's friend Leonard Swett on November 2. Lincoln's actions prompted much hostility among Radical Republicans throughout the North, including from old friends like Senator Orville Browning. Lincoln himself later privately stated his sympathy for Frémont, noting that the first reformer in some area often overreaches and fails, but he continued to insist that Frémont had exceeded his authority and endangered the Union cause. Mountain Department (1862) After being dismissed by Lincoln, Frémont left Springfield and returned to St. Louis. Outwardly, Frémont expressed joy at being free from the cares of duty, but inwardly he smoldered with anger, believing that the Republicans were running an incompetent war and that the Blairs, acting from malicious motives, were responsible for what he considered his unjustified firing by Lincoln. More humiliations followed: Frémont's Zagonyi Guard was mustered out of the Army without pay, and all the contracts he had made were suspended pending approval from Washington. Pressure soon mounted among Radicals and Frémont supporters for his reinstatement to a command in the Army. In March 1862, Lincoln placed Frémont in command of the Mountain Department, which was responsible for parts of western Virginia, eastern Tennessee and eastern Kentucky, although he had clearly lost trust in the Pathfinder. Battles of Cross Keys and Port Republic Frémont's army, along with the armies of two other generals, Nathaniel P. Banks and Irvin McDowell, was charged with protecting the Shenandoah Valley and Washington, D.C. Rather than placing these armies under one command, Lincoln and Secretary of War Edwin Stanton micromanaged their movements. Confederate General Stonewall Jackson took advantage of this divided command and systematically attacked each Union army, spreading fear in Washington, D.C., and taking spoils and thousands of prisoners. Early in June 1862 Frémont pursued Jackson for eight days, finally engaging part of Jackson's force, led by Richard S. Ewell, at the Battle of Cross Keys. Frémont commanded 10,500 Union troops while Ewell commanded about 5,000 Confederate troops. Frémont had moved down the Valley Pike from the northwest through Harrisonburg to Cross Keys, while Union Brigadier General James Shields closed in from the northeast, hoping to entrap Jackson's forces. Ewell, who was in charge of defending Jackson's western flank, established strong defensive positions. On June 8, 1862, at 10:00 am, Frémont's infantry, composed of German immigrants, advanced on the Confederate line, opening the Battle of Cross Keys, and slowly pushed back the Confederate advance. The 15th Alabama Infantry held off Frémont's attack for a half hour, followed by a long range artillery duel. The Confederates, reinforced by the 44th Virginia regiment, beat back several Union assaults. Frémont launched a major attack, but the Confederates held their fire until the German Union soldiers were up close, releasing a devastating volley that repelled the Union assault. Frémont withdrew, declining to launch a second assault, and the Confederates gained the territory previously occupied by the Union Army. 
Screening Frémont's army with a holding brigade, Ewell's men, on orders from Jackson, retreated to Port Republic. At the Battle of Port Republic the following day, Frémont attacked Jackson's rear flank using artillery, but did not launch a major assault. By that afternoon Jackson had put his army in motion toward Brown's Gap, beyond the reach of Frémont's artillery. Jackson and his army managed to slip out of the Shenandoah Valley and rejoin Robert E. Lee in Richmond. Lincoln ordered Shields and Frémont to withdraw from the Shenandoah Valley. Frémont was criticized for being late in linking up with McDowell at Strasburg and allowing Jackson's army to escape. Army of Virginia, New York, and resignation (1862–1864) When the Army of Virginia was created on June 26, 1862, to include General Frémont's corps with John Pope in command, Frémont declined to serve on the grounds that he was senior to Pope, and for personal reasons. He went to New York City, where he remained throughout the war, expecting to receive another command, but none was forthcoming. In 1863, African Americans in Poughkeepsie, New York, tried to raise "a 10,000-man all-Black army to be known as the 'Fremont Legion.' It would be commanded by General John C. Frémont, a hero to many African Americans because of his August 1861 unilateral order freeing slaves in Missouri.... Ultimately, nothing came of the Fremont Legion proposal." Recognizing that he would not be able to contribute further to the Union Army's efforts, Frémont resigned his commission in June 1864. Presidential candidate Radical Democracy Party (1864) In 1860 the Republicans had nominated Abraham Lincoln for president; he won the presidency and then ran for re-election in 1864. The Radical Republicans, a group of hard-line abolitionists, were upset with Lincoln's positions on the issues of slavery and postwar reconciliation with the southern states. These radicals had bitterly resented Lincoln's dismissal of Frémont in 1861 over his emancipation edict in St. Louis. On May 31, 1864, the short-lived Radical Democracy Party nominated Frémont (age 51) for president in Cleveland. Frémont was supported by Radical Republicans, immigrants from western Germany, and War Democrats. The fissure divided the Republican Party into two factions: the anti-Lincoln Radical Republicans, who nominated Frémont, and the pro-Lincoln Republicans. On September 22, 1862, Lincoln had issued his own Emancipation Proclamation, effective January 1, 1863, which "forever" freed slaves in Southern states fighting under the Confederacy. Frémont reluctantly withdrew from the election on September 22, 1864; the following day, in a prearranged compromise, Lincoln removed the more conservative Montgomery Blair from his cabinet. Rancho Pocaho In 1864, the Frémonts purchased an estate ranch in present-day Sleepy Hollow, New York, from the newspaper publisher James Watson Webb. They named it Pocaho, an Indian name. For Jessie it was a chance to recapture some of the charm and isolation of living in the countryside, now that John had retired from politics. The house, now at 7 Pokahoe Drive in Sleepy Hollow, is currently a private residence. Later life, Arizona territorial governor, and death The state of Missouri took possession of the Pacific Railroad in February 1866, when the company defaulted on its interest payment. In June 1866 the state conveyed the company to Frémont in a private sale. 
He reorganized its assets as the Southwest Pacific Railroad in August, but less than a year later (June 1867), the railroad was repossessed by the state after Frémont was unable to pay the second installment of the purchase price. The Panic of 1873, caused by overspeculation in the railroad industry, and the depression that followed, wiped out much of Frémont's remaining wealth. Their financial straits required the Frémonts to sell Pocaho in 1875, and to move back to New York City. Frémont was appointed governor of the Arizona Territory by President Rutherford B. Hayes and served from 1878 to 1881. He spent little time in Arizona, and was asked to resume his duties in person or resign; Frémont chose resignation. Destitute, the family depended on the publication earnings of his wife Jessie. Frémont lived on Staten Island in retirement. In April 1890, he was reappointed as a major general and then added to the Army's retired list, an action taken to ease his financial condition by enabling him to qualify for a pension. On Sunday, July 13, 1890, at the age of 77, Frémont died of peritonitis at his residence at 49 West Twenty-fifth Street in New York. His death was unexpected and his brief illness was not generally known. On Tuesday, July 8, Frémont had been affected by the heat of a particularly hot summer day. On Wednesday he came down with a chill and was confined to his bedroom. His symptoms progressed to peritonitis (an abdominal infection), which caused his death. At the time he died, Frémont was popularly known as the "Pathfinder of the Rocky Mountains". Initially interred at Trinity Church Cemetery, he was reinterred in Rockland Cemetery in Sparkill, New York, on March 17, 1891. Upon Frémont's death, his wife Jessie received a Civil War pension of $2,000 a year. Historical reputation Frémont's legacy has been shrouded in considerable polarizing controversy. He played a major role in opening up the American West to settlement by white American pioneers, and did so in large part by ordering and engaging in attacks on Native Americans that killed indigenous men, women, and children, driving them from their land. Over the course of his career, Frémont became a major advocate of what would later be referred to as Native American genocide, and his actions within California positioned him as a key player in the waging of the California genocide. In contrast, during his lifetime Frémont was widely considered an American hero, with the public nicknaming him "The Pathfinder". His reliable accounts, including published maps, narrations, and scientific documentation of his expeditions, guided American emigrants overland into the West starting in the mid-1840s. Many contemporary Americans believed that Frémont's arrest and court-martial by Kearny during the Mexican–American War were unjustified. During the Civil War, Frémont's victory over the Confederates at Springfield was the only successful Union battle in the Western Department in 1861. Frémont's reputation, however, was damaged after he was relieved of command by Lincoln for insubordination. After he left the Mountain Department in 1862, Frémont's active service in the war virtually ended. Frémont's 1861 promotion of Ulysses S. Grant, going against the grain of Army gossip, was fruitful; Grant went on to become the greatest Union general. Frémont invested heavily in the railroad industry, but the Panic of 1873 wiped out his fortune, and he thereafter appeared tired and aged. 
Frémont is remembered for planting the American flag atop a peak in the Rocky Mountains during his first expedition, symbolically claiming the West for the United States. For his botanical records and information collected on his explorations, many plants are named in honor of Frémont. A large sculpture of Frémont is displayed at Pathfinder Regional Park near Florence, Colorado. In his memoirs, Frémont coined the phrase "Golden Gate" for the strait between Marin County and San Francisco County. Frémont's biographer Allan Nevins said there were two fascinating things about Frémont. The first was the "unfailing drama of his life; a life wrought out of the fiercest tempests and most radiant bursts of sunshine". The second was the puzzle posed by his dramatic career: "How could the man who sometimes succeeded so dazzlingly at other times fail so abysmally?" Nevins attributed Frémont's psychological problems in part to the impulsiveness and brilliancy he inherited from his "emotionally and ill-balanced" parents. Nevins said Frémont was encouraged by his parents to heighten his inherited self-reliant, heedless, and adventuresome traits and that he lacked the discipline his passionate spirit and quick mind most needed. Concerning Frémont's tenure as commander of the West, Lincoln thought Frémont was personally honest, but his "cardinal mistake" was that "he isolates himself, and allows nobody to see him; and by which he does not know what is going on in the very matter he is dealing with." Many historians agree with Lincoln's assessment. According to Rebecca Solnit, the notorious murders of the Californios Berryessa and his two nephews on the shores of San Rafael, ordered by Frémont during the Bear Flag Revolt on June 28, 1846, highlighted a dubious path to California's statehood. Solnit wrote that Frémont's unpopularity in California as the Republican candidate in the presidential election of 1856, and his loss of the state, were in part due to this incident. Although their killings are not disputed, the events surrounding their deaths remain controversial. Frémont and his men may have been taking revenge for the killing of two Osos by Californios. Frémont may have mistaken the de Haro brothers for soldiers, while one account contends that the murders represented the racism of the white Osos. Berryessa and his two nephews may have been considered Native Americans by European Americans, and received harsher treatment from Frémont and Carson. Family The Frémonts were the parents of five children: Elizabeth Benton "Lily" Frémont was born in Washington, D.C., on November 15, 1842. She died in Los Angeles on May 28, 1919. Benton Frémont was born in Washington on July 24, 1848; he died in St. Louis before he was a year old. John Charles Frémont Jr. was born in San Francisco on April 19, 1851. He served in the United States Navy from 1868 to 1911, and attained the rank of rear admiral. He served as commander of the monitor USS Florida (1903–05), naval attaché to Paris and St. Petersburg (1906–08), commander of the battleship USS Mississippi (1908–09) and, finally, as commandant of the Boston Navy Yard (1909–11). He died in Boston, Massachusetts, on March 7, 1911. Anne Beverly Frémont was born in France on February 1, 1853, and died five months later. Francis Preston Frémont was born on May 17, 1855. He died in Cuba in September 1931. Plant eponyms Places and organizations named in commemoration Frémont is commemorated by many places and other things named in his honor. 
Places US counties: Fremont County, Colorado Fremont County, Idaho Fremont County, Iowa Fremont County, Wyoming Cities and towns Fremont, California (the largest city that bears his name) Fremont, Indiana Fremont, Iowa Fremont, Michigan Fremont, Minnesota and Fremont Township, Minnesota Fremont, Nebraska Fremont, New Hampshire Fremont, Steuben County, New York Fremont, Sullivan County, New York Fremont, Ohio Fremont, Utah Fremont, Clark County, Wisconsin Fremont (village) and Fremont (town) Waupaca County, Wisconsin Also Fremont, Seattle, a neighborhood established by migrants from Fremont, Nebraska. Fort Fremont, South Carolina – one of two surviving coastal fortifications in the United States from the Spanish–American War era Geographical features Fremont Peak (Wyoming) in the Wind River Mountains Fremont Peak (California) in Monterey County and San Benito County, California Fremont Peak (Arizona) in the San Francisco Peaks Fremont Pass (Colorado), a pass over the Continental Divide near the headwaters of the Arkansas River Fremont Island in the Great Salt Lake Fremont Canyon on the North Platte River in Wyoming Pathfinder Reservoir on the North Platte, just upstream from Fremont Canyon Fremont River (Utah), a tributary of the Colorado River Other Fremont–Winema National Forest in Oregon The John C. Fremont Trail (the path of Fremont's march into Santa Barbara, California in December 1846) Fremont Campground in the Los Padres National Forest Fremont Bridge (Portland, Oregon) Fremont Street (Las Vegas, Nevada) Fremont Ave in Staten Island, NY Fremont Ave in Sunnyvale, CA Organizations Hospitals John C. Fremont Hospital, Mariposa, California (where Frémont and his wife lived during the Gold Rush) Fremont Hospital, Yuba City, California Libraries John C. Fremont Branch Library on Melrose Avenue in Los Angeles. John C. Fremont Library in Florence, Colorado Schools and school districts Other commemorations The prehistoric Fremont culture, first discovered near the Fremont River The United States honored Frémont in 1898 with a commemorative stamp as part of the Trans-Mississippi Issue. The SS John C. Fremont, laid down on 24 May 1941 and launched on 27 September 1941, was the first Liberty ship delivered by a West Coast shipyard. It struck a mine in Manila harbor in 1945. The Fremont Cannon, the "largest and most expensive trophy in college football is a replica of a cannon that accompanied Captain John C. Frémont on his expedition through Oregon, Nevada and California in 1843–44". The annual game between the University of Nevada, Reno and the University of Nevada, Las Vegas is for possession of it. The Fremont monument in Joaquin Miller Park in Oakland, California, marking the spot of his first view of the San Francisco Bay. The Pathfinder Chorus, a barbershop chorus in Fremont, Nebraska. The Fremont Pathfinders Artillery Battery, an American Civil War reenactment group from Fremont, Nebraska. The U.S. Army's (now inactive) 8th Infantry Division (Mechanized) is called the Pathfinder Division, after Frémont. The gold arrow on the 8th ID crest is called the "Arrow of General Frémont". The 8th Division was based at Camp Fremont in Menlo Park, California during World War I. In 2000, Frémont was inducted into the Hall of Great Westerners of the National Cowboy & Western Heritage Museum. In 2013, the Georgia Historical Society erected a historical marker at the birthplace of native son John C. Frémont in Savannah, Georgia. 
In 2006, the society published an award-winning website titled The 1856 Handbook, which documented Frémont's first presidential campaign and the world in which it was embedded. Gallery See also List of people pardoned or granted clemency by the president of the United States Notes References Citations Works cited Lincoln: A Life of Purpose and Power (2006). Alfred A. Knopf. New York and London: G.P. Putnam's Sons; PDF versions of the three volumes of this work are available for download, and Volume 2 is at the Illinois Digital Environment for Access to Learning and Scholarship (IDEALS), accessed September 2018. Guelzo, Allen C. (2004). Lincoln's Emancipation Proclamation: The End of Slavery in America. New York: Simon & Schuster. Further reading Bashford, Herbert and Wagner, Harr. A Man Unafraid: The Story of John Charles Frémont (Harr Wagner, San Francisco, 1927) Bicknell, John. Lincoln's Pathfinder: John C. Frémont and the Violent Election of 1856 (2017), popular history of the 1856 election from Frémont's perspective. 355 pp. Brandon, William. The Men and the Mountain (1955). An account of Frémont's failed fourth expedition. Chaffin, Tom. Pathfinder: John Charles Frémont and the Course of American Empire, New York: Hill and Wang, 2002 Denton, Sally. Passion and Principle, John and Jessie Fremont, The Couple whose Power, Politics, and Love Shaped Nineteenth-Century America, New York: Bloomsbury, 2007 Eyre, Alice. The Famous Fremonts and Their America, Boston: The Christopher Publishing House, 1948. Fleek, Sherman L. "The Kearny/Stockton/Frémont Feud: The Mormon Battalion's Most Significant Contribution in California." Journal of Mormon History 37.3 (2011): 229–57. online Gano, Geneva M. "At the Frontier of Precision and Persuasion: The Convergence of Natural Philosophy and National Philosophy in John C. Fremont's 1842, 1843–44 Report and Map," ATQ ("The American Transcendental Quarterly"), September 2004, Vol. 18, #3, pp. 131–54. Goetzmann, William H. Army Exploration in the American West 1803–1863 (Yale University Press, 1959; University of Nebraska Press, 1979) Goodwin, Cardinal. John Charles Frémont: An Explanation of His Career (Stanford University Press, 1930) Harvey, Miles. The Island of Lost Maps: A True Story of Cartographic Crime, Random House, 2000. Herr, Pamela. Jessie Benton Frémont: American Woman of the 19th Century (1987), biography of his wife Inskeep, Steve. Imperfect Union: How Jessie and John Frémont Mapped the West, Invented Celebrity, and Helped Cause the Civil War (Penguin Press, 2020) Menard, Andrew. Sight Unseen: How Frémont's First Expedition Changed the American Landscape (University of Nebraska Press, 2012) 249 pp. Miller, David. "Heroes of American Empire: John C. Frémont, Kit Carson, and the Culture of Imperialism, 1842–1898", Dissertation Abstracts International, 2008, Vol. 68 Issue 10, p. 4447 Nevins, Allan. Frémont: The West's Greatest Adventurer Being a Biography from certain hitherto unpublished sources of General John C. Frémont Together with His Wife Jessie Benton Frémont and some account of the period of expansion which found a brilliant leader in The Pathfinder (two volumes) (Harper & Brothers, 1928) (revised in 1939 and 1955 as Frémont: Pathmarker of the West) Roberts, David (2001). A Newer World: Kit Carson, John C. Frémont, and the Claiming of the American West, New York: Touchstone Rolle, Andrew F. (1991). John Charles Frémont: Character as Destiny. University of Oklahoma Press. Tompkins, Walker A. Santa Barbara, Past and Present. 
Tecolote Books, Santa Barbara, CA, 1975. Yanoff, Stephen G. (2024). Wonder of the West: The Adventurous Life of John C. Frémont. Bloomington, Indiana: AuthorHouse. Primary sources Charles Wentworth Upham, Life, Explorations and Public Services of John Charles Fremont (Ticknor and Fields, Boston, 1856). Horace Greeley, Life of Col. Fremont (Greeley and M'Elrath, New York, 1856). This 32-page pamphlet does not identify its author, but Greeley's company published it. John Bigelow, Memoir of the Life and Public Services of John Charles Fremont (Derby & Jackson, New York, 1856). Samuel M. Smucker, The Life of Col. John Charles Fremont, and His Narrative of Explorations and Adventures in Kansas, Nebraska, Oregon and California: The Memoir by Samuel M. Smucker, A.M. (Miller, Orton & Mulligan, New York and Auburn, 1856). Harper's Weekly political cartoon, "That's What's the Trouble with John C."; Fremont's 1864 challenge to Lincoln's re-nomination. David H. Miller and Mark J. Stegmaier, James F. Milligan: His Journal of Fremont's Fifth Expedition, 1853–1854; His Adventurous Life on Land and Sea, Arthur H. Clark Co., 1988. 300 pp. External links Finding Frémont Exhibit Des Chutes Historical Museum in Bend, Oregon 2015 Oil Portrait of John Charles Frémont, 1878–1882 Territorial Governor of Arizona Mr. Lincoln and Freedom: John C. Frémont Retrieved on 2009-05-01 The Generals of the American Civil War – Pictures of John Charles Frémont Guide to the Frémont Family Papers at The Bancroft Library Memoirs of My Life: including in the narrative five journeys of western explorations during the years 1842, 1843–4, 1845–6–7, 1848–9, 1853–4, by John C. Frémont Address of welcome to General John C. Fremont, governor of Arizona territory, upon the occasion of his reception by his associates of the Association Pioneers of the Territorial Days of California, at their headquarters, Sturtevant House, New York, on ... August 1, 1878 "Las Mariposas" Photos of Frémont's Mariposa gold estate taken in 1860. PDF Birthplace of John C. Frémont historical marker – Georgia Historical Society Fremont's Travels 1838–1854 Map Portrait of John Charles Fremont by Bass Otis, at University of Michigan Museum of Art 19th-century American military personnel 19th-century American scientists 19th-century American Episcopalians 19th-century American explorers American explorers of North America Explorers of California Explorers of Oregon American abolitionists American botanists American military personnel of the Mexican–American War American people of French-Canadian descent American surveyors American taxonomists Arizona Republicans Botanists with author abbreviations Governors of Arizona Territory 19th-century California politicians Democratic Party United States senators from California California Democrats California Republicans Commanders of the California Republic Perpetrators of the California genocide People of the Conquest of California People of the California Gold Rush History of Mariposa County, California People from Mariposa County, California Military personnel from California People of California in the American Civil War People from Sleepy Hollow, New York People from St. George, Staten Island New York (state) Republicans Radical Republicans People pardoned by James K. 
Polk Recipients of the Pour le Mérite (civil class) Republican Party (United States) presidential nominees Candidates in the 1856 United States presidential election Union army generals United States Army Corps of Topographical Engineers United States Army personnel who were court-martialed United States military governors of California Deaths from peritonitis 1813 births 1890 deaths 19th-century United States senators
John C. Frémont
Engineering
16,875
10,787,476
https://en.wikipedia.org/wiki/NGC%207052
NGC 7052 is an elliptical galaxy in the constellation Vulpecula. The galaxy harbours a supermassive black hole with mass c. 220–630 million solar masses in its nucleus. References External links Elliptical galaxies Vulpecula 7052 11718 66537
NGC 7052
Astronomy
57