Tellurium tetrafluoride , TeF 4 , is a stable, white, hygroscopic crystalline solid and is one of two fluorides of tellurium . The other binary fluoride is tellurium hexafluoride . [ 1 ] The widely reported Te 2 F 10 has been shown to be F 5 TeOTeF 5 . [ 1 ] There are other tellurium compounds that contain fluorine, but only the two mentioned contain solely tellurium and fluorine. Tellurium difluoride, TeF 2 , and ditellurium difluoride, Te 2 F 2 , are not known. [ 1 ]
Tellurium tetrafluoride can be prepared by the reaction of sulfur tetrafluoride with tellurium dioxide: TeO 2 + 2 SF 4 → TeF 4 + 2 SOF 2
It is also prepared by reacting nitryl fluoride with tellurium or from the elements at 0 °C or by reacting selenium tetrafluoride with tellurium dioxide at 80 °C. Fluorine in nitrogen can react with TeCl 2 or TeBr 2 to form TeF 4 . PbF 2 will also fluorinate tellurium to TeF 4 .
Tellurium tetrafluoride will react with water or silica and forms tellurium oxides. Copper , silver , gold or nickel will react with tellurium tetrafluoride at 185 °C. It does not react with platinum . It is soluble in SbF 5 and will precipitate out the complex TeF 4 SbF 5 .
Tellurium tetrafluoride melts at 130 °C and decomposes to tellurium hexafluoride at 194 °C. In the solid phase, it consists of infinite chains of TeF 3 F 2/2 in an octahedral geometry. A lone pair of electrons occupies the sixth position. | https://en.wikipedia.org/wiki/TeF4 |
Tellurium hexafluoride is the inorganic compound of tellurium and fluorine with the chemical formula TeF 6 . It is a colorless and highly toxic gas with an unpleasant odor. [ 4 ]
Tellurium hexafluoride can be prepared by treating tellurium with fluorine gas at 150 °C. [ 4 ] [ 5 ] It can also be prepared by fluorination of TeO 3 with bromine trifluoride . Upon heating, TeF 4 disproportionates to give TeF 6 and Te. [ citation needed ]
Tellurium hexafluoride is a highly symmetric octahedral molecule. Its physical properties resemble those of the hexafluorides of sulfur and selenium . It is less volatile , however, due to the increase in polarizability . At temperatures below −38 °C, tellurium hexafluoride condenses to a volatile white solid.
Tellurium hexafluoride is much more chemically reactive than SF 6 . [ 6 ] For example, TeF 6 slowly hydrolyzes to Te(OH) 6 : TeF 6 + 6 H 2 O → Te(OH) 6 + 6 HF
Treatment of tellurium hexafluoride with tetramethylammonium fluoride (Me 4 NF) gives, sequentially, the hepta- and octafluorides: TeF 6 + Me 4 NF → Me 4 N[TeF 7 ] ; Me 4 N[TeF 7 ] + Me 4 NF → (Me 4 N) 2 [TeF 8 ] | https://en.wikipedia.org/wiki/TeF6 |
Tellurium iodide may refer to: | https://en.wikipedia.org/wiki/TeI2 |
Tellurium tetraiodide ( Te I 4 ) is an inorganic chemical compound . It has a tetrameric structure which is different from the tetrameric solid forms of TeCl 4 and TeBr 4 . [ 2 ] In TeI 4 the Te atoms are octahedrally coordinated and edges of the octahedra are shared. [ 2 ]
Tellurium tetraiodide can be prepared by reacting Te and iodomethane , CH 3 I. [ 2 ] In the vapour TeI 4 dissociates: [ 3 ] TeI 4 ⇌ TeI 2 + I 2
It can be also obtained by reacting telluric acid with hydrogen iodide . [ 4 ]
It can also be obtained by reacting the elements, which can also produce tellurium diiodide and tellurium monoiodide , depending on the reaction conditions: [ 5 ] Te + 2 I 2 → TeI 4
Tellurium tetraiodide is an iron-gray solid that decomposes slowly in cold water and quickly in warm water to form tellurium dioxide and hydrogen iodide . [ 6 ] It is stable even in moist air and decomposes when heated, releasing iodine. It is soluble in hydriodic acid to form H[TeI 5 ] and it is slightly soluble in acetone . [ 4 ]
Tellurium tetraiodide is a conductor when molten, dissociating into the ions TeI 3 + and I − . In solvents with donor properties, such as acetonitrile (CH 3 CN), ionic complexes are formed which make the solution conducting: [ 3 ]
Five modifications of tellurium tetraiodide are known, all of which are composed of tetrameric molecules. [ 7 ] The δ form is the most thermodynamically stable form. It, like the α, β and γ forms, is structurally derived from the ε form. | https://en.wikipedia.org/wiki/TeI4 |
Tellurium dioxide (TeO 2 ) is a solid oxide of tellurium . It is encountered in two different forms, the yellow orthorhombic mineral tellurite , β-TeO 2 , and the synthetic, colourless tetragonal (paratellurite), α-TeO 2 . [ 2 ] Most of the information regarding reaction chemistry has been obtained in studies involving paratellurite, α-TeO 2 . [ 3 ]
Paratellurite, α-TeO 2 , is produced by reacting tellurium with O 2 : [ 2 ] Te + O 2 → TeO 2
An alternative preparation is to dehydrate tellurous acid, H 2 TeO 3 , or to thermally decompose basic tellurium nitrate , Te 2 O 4 ·HNO 3 , above 400 °C. [ 2 ]
The longitudinal speed of sound in tellurium dioxide is 4,260 metres per second (14,000 ft/s) at around room temperature. [ 4 ]
TeO 2 is barely soluble in water and soluble in strong acids and alkali metal hydroxides . [ 5 ] It is an amphoteric substance and can therefore act either as an acid or as a base, depending on the solution it is in. [ 6 ] It reacts with acids to make tellurium salts and with bases to make tellurites . It can be oxidized to telluric acid or tellurates .
The tellurite ion is kinetically inert, but TeO 2 equivalents will oxidize thioates in acid to the diacyl disulfide. [ 7 ]
Paratellurite, α-TeO 2 , converts at high pressure into the β- (tellurite) form. [ 8 ] Both the α- (paratellurite) and β- (tellurite) forms contain four-coordinate Te with the oxygen atoms at four of the corners of a trigonal bipyramid. In paratellurite all vertices are shared to give a rutile -like structure, where the O-Te-O bond angle is 140°. In tellurite, pairs of trigonal-bipyramidal TeO 4 units sharing an edge in turn share vertices to form a layer. [ 8 ] The shortest Te-Te distance in tellurite is 317 pm, compared to 374 pm in paratellurite. [ 8 ] Similar Te 2 O 6 units are found in the mineral denningite . [ 8 ]
TeO 2 melts at 732.6 °C, forming a red liquid. [ 9 ] The structures of the liquid, and of the glass which can be formed from it with sufficiently rapid cooling, are also based on approximately four-coordinate Te. However, compared to the crystalline forms, the liquid and glass appear to incorporate short-range disorder (a variety of coordination geometries), which marks TeO 2 glass as distinct from the canonical single-oxide glass-formers such as SiO 2 , which share the same short-range order with their parent liquids. [ 10 ]
It is used as an acousto-optic material. [ 4 ]
Tellurium dioxide is also a reluctant glass former: it will form a glass under suitable cooling conditions, [ 11 ] or with addition of a small molar fraction of a second compound such as an oxide or halide. TeO 2 glasses have high refractive indices and transmit into the mid- infrared part of the electromagnetic spectrum ; they are therefore of technological interest for optical waveguides . Tellurite glasses have also been shown to exhibit Raman gain up to 30 times that of silica , useful in optical fibre amplification. [ 12 ]
TeO 2 is a possible teratogen . [ 13 ]
Exposure to tellurium compounds produces a garlic -like odour on the breath, caused by the formation of dimethyl telluride . [ 14 ] | https://en.wikipedia.org/wiki/TeO2 |
Tellurium trioxide ( Te O 3 ) is an inorganic chemical compound of tellurium and oxygen . In this compound, tellurium is in the +6 oxidation state .
There are two forms, yellow-red α-TeO 3 and grey, rhombohedral, β-TeO 3 which is less reactive. [ 1 ] α-TeO 3 has a structure similar to FeF 3 with octahedral TeO 6 units that share all vertices. [ 2 ]
α-TeO 3 can be prepared by heating orthotelluric acid , Te(OH) 6 , at over 300 °C. [ 1 ] The β-TeO 3 form can be prepared by heating α-TeO 3 in a sealed tube with O 2 and H 2 SO 4 . α-TeO 3 is unreactive to water but is a powerful oxidising agent when heated. [ 2 ] With alkalis it forms tellurates . [ 2 ] α-TeO 3 when heated loses oxygen to form firstly Te 2 O 5 and then TeO 2 . [ 1 ]
| https://en.wikipedia.org/wiki/TeO3 |
Tea tree oil , also known as melaleuca oil , is an essential oil with a fresh, camphoraceous odour and a colour that ranges from pale yellow to nearly colourless and clear. [ 1 ] [ 2 ] It is derived from the leaves of the tea tree, Melaleuca alternifolia , native to southeast Queensland and the northeast coast of New South Wales , Australia. The oil comprises many constituent chemicals, and its composition changes if it is exposed to air and oxidises . Commercial use of tea tree oil began in the 1920s, pioneered by the entrepreneur Arthur Penfold .
There is little evidence for the effectiveness of tea tree oil in treating mite-infected crusting of eyelids , [ 3 ] although some claims of efficacy exist. [ 4 ] [ 5 ] In traditional medicine , it may be applied topically in low concentrations for skin diseases, although there is little evidence for efficacy. [ 2 ] [ 6 ] [ 7 ] [ 8 ]
Tea tree oil is neither a patented product nor an approved drug in the United States, although it has been used in skin care products [ 2 ] [ 8 ] and is approved as a complementary medicine for aromatherapy in Australia. [ 9 ] It is poisonous if consumed by mouth and is unsafe for children. [ 10 ]
Although tea tree oil is claimed to be useful for treating dandruff , acne , lice , herpes , insect bites , scabies , and skin fungal or bacterial infections, [ 8 ] [ 11 ] insufficient evidence exists to support any of these claims due to the limited quality of research. [ 2 ] [ 7 ] [ 12 ] A 2015 Cochrane review of acne complementary therapies found a single low-quality trial showing benefit on skin lesions compared to placebo . [ 13 ] Tea tree oil was also used during World War II to treat skin lesions of munitions factory workers. [ 2 ]
According to the Committee on Herbal Medicinal Products (CHMP) of the European Medicines Agency , traditional usage suggests that tea tree oil is a possible treatment for "small, superficial wounds, insect bites, and small boils" and that it may reduce itching in minor cases of athlete's foot. The CHMP states that tea tree oil products should not be used on people under 12 years of age. [ 14 ]
Tea tree oil is not recommended for treating nail fungus , because it has not been proven effective. [ 15 ] It is not recommended for treating head lice in children because its effectiveness and safety have not been established and it could cause skin irritation or allergic reactions . [ 16 ] [ 17 ] As of 2020 [update] , there is uncertainty regarding the effectiveness of 5–50% tea tree oil as a treatment for demodex mite infestations, although products claiming efficacy exist. [ 18 ]
Tea tree oil is highly toxic when ingested orally. [ 2 ] [ 7 ] [ 19 ] [ 12 ] It may cause drowsiness, confusion, hallucinations, coma, unsteadiness, weakness, vomiting, diarrhoea, nausea, blood-cell abnormalities, and severe rashes. It should be kept away from pets and children. [ 12 ] It should not be used in or around the mouth. [ 2 ] [ 7 ] [ 10 ]
Application of tea tree oil to the skin can cause an allergic reaction in some, [ 2 ] the potential for which increases as the oil ages and its chemical composition changes. [ 20 ] Adverse effects include skin irritation, allergic contact dermatitis, systemic contact dermatitis , linear immunoglobulin A disease , erythema multiforme -like reactions, and systemic hypersensitivity reactions. [ 11 ] [ 21 ] Allergic reactions may be due to the various oxidation products that are formed by exposure of the oil to light and air. [ 21 ] [ 22 ] Consequently, oxidised tea tree oil should not be used. [ 23 ]
In Australia, tea tree oil is one of the many essential oils causing poisoning, mostly of children. From 2014 to 2018, 749 cases were reported in New South Wales, accounting for 17% of essential oil poisoning incidents. [ 24 ]
Tea tree oil potentially poses a risk for causing abnormal breast enlargement in men [ 25 ] [ 26 ] and prepubertal children. [ 27 ] [ 28 ] A 2018 study by the National Institute of Environmental Health Sciences found four of the constituent chemicals ( eucalyptol , 4-terpineol , dipentene , and alpha-terpineol ) are endocrine disruptors , raising concerns of potential environmental health impacts from the oil. [ 29 ]
In dogs and cats, death [ 30 ] [ 31 ] or transient signs of toxicity (lasting two to three days), such as lethargy, weakness, incoordination, and muscle tremors, have been reported after external application at high doses. [ 32 ]
As a test of toxicity by oral intake, the median lethal dose (LD 50 ) in rats is 1.9–2.4 ml/kg. [ 33 ]
Tea tree oil is defined by the International Standard ISO 4730 ("Oil of Melaleuca , terpinen-4-ol type"): terpinen-4-ol, γ- terpinene , and α-terpinene make up about 70% to 90% of the whole oil, while p -cymene , terpinolene, α-terpineol, and α-pinene collectively account for some 15% of the oil. [ 1 ] [ 6 ] [ 8 ] The oil has been described as colourless to pale yellow, [ 1 ] [ 2 ] having a fresh, camphor -like smell. [ 34 ]
Tea tree oil products contain various phytochemicals , among which terpinen-4-ol is the major component. [ 1 ] [ 2 ] [ 6 ] Adverse reactions diminish with lower eucalyptol content. [ 11 ]
The name "tea tree" is used for several plants, mostly from Australia and New Zealand , from the family Myrtaceae related to the myrtle . The use of the name probably originated from Captain James Cook 's description of one of these shrubs that he used to make an infusion to drink in place of tea . [ 35 ]
The commercial tea tree oil industry originated in the 1920s when Australian chemist Arthur Penfold investigated the business potential of a number of native extracted oils; he reported that tea tree oil had promise, as it exhibited antiseptic properties. [ 33 ]
Tea tree oil was first extracted from Melaleuca alternifolia in Australia, and this species remains the most important commercially. In the 1970s and 1980s, commercial plantations began to produce large quantities of tea tree oil from M. alternifolia . Many of these plantations are located in New South Wales. [ 33 ] Since the 1970s and 80s, the industry has expanded to include several other species for their extracted oil: Melaleuca armillaris and Melaleuca styphelioides in Tunisia and Egypt; Melaleuca leucadendra in Egypt, Malaysia, and Vietnam; Melaleuca acuminata in Tunisia; Melaleuca ericifolia in Egypt; and Melaleuca quinquenervia in the United States (considered an invasive species in Florida [ 36 ] ).
Similar oils can also be produced by water distillation from Melaleuca linariifolia and Melaleuca dissitiflora . [ 37 ] Whereas the availability and nonproprietary nature of tea tree oil would make it – if proved effective – particularly well-suited to a disease such as scabies that affects poor people disproportionately, those same characteristics diminish corporate interest in its development and validation. [ 8 ] | https://en.wikipedia.org/wiki/Tea_tree_oil |
A teachable moment , in education, is the time at which learning a particular topic or idea becomes possible or easiest.
The concept was popularized by Robert Havighurst in his 1952 book, Human Development and Education. In the context of education theory , Havighurst explained,
The concept pre-dates Havighurst's book, as does the use of the phrase, [ 2 ] but he is credited with popularizing it. [ 3 ]
The phrase sometimes denotes not a developmental stage, but rather "that moment when a unique, high interest situation arises that lends itself to discussion of a particular topic." [ 4 ] It implies "personal engagement" with issues and problems. [ 5 ]
These moments can (and often do) come when least expected. Teachers and parents alike can benefit from the use of teachable moments.
In July 2009, Harvard professor Henry Louis Gates was arrested at his home; the incident garnered media attention throughout the United States . The mayor of Cambridge, E. Denise Simmons , said that she hoped that the result would be a "teachable moment". [ 6 ] U.S. President Barack Obama expressed the same:
"My hope is, is that as a consequence of this event this ends up being what's called a 'teachable moment', where all of us instead of pumping up the volume spend a little more time listening to each other and try to focus on how we can generally improve relations between police officers and minority communities, and that instead of flinging accusations we can all be a little more reflective in terms of what we can do to contribute to more unity." [ 7 ]
Obama's use of the phrase attracted considerable comment in the American media and blogosphere. Gates himself echoed the same theme, stating, "I told the President that my entire career as an educator has been devoted to racial healing and improved race relations in this country. I am determined that this be a teaching moment." [ 8 ]
On July 4, 2011, Glyn Davis, vice-chancellor of the University of Melbourne , used the term in an article [ 9 ] in Campus Review , describing the Australian Higher Education Base Funding Review as a rare opportunity to educate a wider public about how public tertiary education is supported. Davis argued that "We (Australian Universities) must show why Australia's public universities returned to the community, many times over, the money spent providing higher education," and that this constituted a teachable moment. [ 9 ]
During the 2025 Canadian federal election , it was revealed that Paul Chiang , a Liberal MP, remarked that Conservative candidate Joe Tay should be reported to the Toronto Chinese Consulate in exchange for a monetary bounty placed by the Hong Kong police. He later apologized. [ 10 ] [ 11 ] Liberal Party leader and Prime Minister Mark Carney viewed Chiang's apology as a "teachable moment" by saying it "underscores the respect with which we treat human rights in this country – the differences between Canadian society and other countries." [ 12 ] [ 13 ] On March 31, 2025, Chiang announced that he would withdraw from the 2025 election after the Royal Canadian Mounted Police opened an investigation into his comment. [ 14 ] | https://en.wikipedia.org/wiki/Teachable_moment |
Quantum mechanics is a difficult subject to teach due to its counterintuitive nature. [ 1 ] As the subject is now offered by advanced secondary schools, educators have applied scientific methodology to the process of teaching quantum mechanics , in order to identify common misconceptions and ways of improving students' understanding.
Students' misconceptions range from fully classical physics thinking, through mixed models, to quasi-quantum ideas. [ 1 ] For example, if the concept that quantum mechanics does not describe a path for electrons or photons is misunderstood, students may believe that they follow specific trajectories (classical), sinusoidal paths (mixed), or are simultaneously waves and particles (quasi-quantum: "in which students understand that quantum objects can behave as both particles and waves, but still have difficulty describing events in a nondeterministic way"). Among the concepts most often misunderstood are:
Issues also arise from misunderstanding classical concepts related to quantum concepts, such as the difference between light energy and light intensity.
Quantum mechanics can be taught with a focus on different interpretations, different models, or via mathematical techniques. Studies have shown that focus on non-mathematical concepts can lead to adequate understanding. [ 6 ]
Despite the fundamental impossibility of directly viewing quantum states, multimedia visualizations are an important tool in education.
Interactive media provides an alternative experience beyond everyday personal experience as a tool for understanding quantum mechanics. [ 2 ] Among the multimedia sites that have been studied with positive results are QuVis [ 7 ] and PhET . [ 8 ]
Introducing history as part of the process of teaching quantum mechanics sets up a potential conflict of goals: accurate history or pedagogical clarity. [ 9 ] Studies have shown that teaching through history helps students recognize that the counterintuitive issues are fundamental rather than simply something they don't understand. Specifically, discussing the historical debates on quantum concepts drives home the idea that the quantum world differs from the classical one. [ 2 ] Discussing the philosophy of science introduces the idea that language derived from everyday experience limits our ability to describe quantum phenomena.
Mohan [ 10 ] analyzes two widely used representative quantum mechanics textbooks against the learning challenges reported by Krijtenburg-Lewerissa [ 1 ] and others. Both texts adopt language ('waves' and 'particles') familiar to students in other contexts without directly exploring the significant shifts in meaning required by quantum mechanics. Mohan attributes some of the learning challenges to this unexplored application of inappropriate language.
N. David Mermin reports that an unconventional strategy based on abstract but simple math concepts is sufficient to teach quantum mechanics to students interested in quantum computing applications rather than physics. [ 11 ] Many of the issues that confound students of physics do not apply to this case, and the mathematical background of quantum computing resembles the background already taught in computer science . Mermin develops notation and operations with classical bits, then introduces quantum bits as superpositions of two classical states. He never needs to discuss even the Planck constant , which he suggests is important for quantum computer hardware but not software.
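A minimal sketch of this style of introduction, in Python with NumPy (the variable names and the choice of the Hadamard gate are illustrative assumptions, not Mermin's own notation):

```python
import numpy as np

# Classical bits as basis vectors -- the Mermin-style starting point.
zero = np.array([1.0, 0.0])  # classical bit 0
one = np.array([0.0, 1.0])   # classical bit 1

# A qubit is a normalized superposition of the two classical states.
# The Hadamard gate H sends the bit-0 state to an equal superposition.
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)
qubit = H @ zero  # amplitudes (1/sqrt(2), 1/sqrt(2))

# Measurement probabilities are squared amplitude magnitudes.
# Note that no Planck constant appears anywhere in this description.
probs = np.abs(qubit) ** 2
print(probs)  # -> [0.5 0.5]
```

Everything here is linear algebra over two-component vectors, which is the point of the approach: the formalism can be taught without any physics prerequisites.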
Philipp Bitzenbauer engages students through simple but intrinsically quantum single-photon experiments. [ 12 ] The approach avoids the ambiguous classical-versus-quantum character of photons in optical interference experiments like the double slit. Students exposed to quantum mechanics in this way avoid developing the misconceptions apparent among students in the control group. | https://en.wikipedia.org/wiki/Teaching_quantum_mechanics |
TeamNote is a mobile-first business communication and collaboration software developed by the Hong Kong –based technology company TeamNote Limited. TeamNote is a product that is provided as a white label solution to corporations and deployed in a private cloud or an on-premises server. It allows users to send text messages and voice messages , share images, documents, user locations, and other content. It is not available for download on the iOS App Store or in Google Play . TeamNote adds new users by sending out invitation links or by manual deployment. [ 1 ]
TeamNote offers standardized communication features, customizable workflow modules and system integration . The primary features of TeamNote are instant messaging, including text and voice, individual and group chat modes, and news announcements organized by top management. It also offers GPS location tracking, polling or voting, task assignments, photo reporting, sales reporting in chat rooms, and sharing of training manuals. In addition, TeamNote has customized features such as form filling, HR tasks, job dispatch, and duty rosters.
TeamNote provides Android and iOS mobile apps for end users , and a web portal for web clients, including end users and superusers .
TeamNote offers a subscription business model and has claimed to charge US$5 per user per month. The fee is adjusted accordingly for additional features. [ 2 ] A custom rate can apply if deeper integration is required. [ 3 ]
TeamNote started research and development in 2012 under its then-parent company Apptask Limited, a project-led mobile application development company. TeamNote was originally developed for a Hong Kong local real estate conglomerate as a customized corporate communication app, which inspired its founder Roy Law and the team to develop TeamNote as a product. [ 3 ]
TeamNote Limited was founded as an independent company in July 2013, after spinning off from its now-sister company Apptask Limited, [ 2 ] and TeamNote was officially launched to the Asian market in the first quarter of 2014. In January 2015, TeamNote Limited was shortlisted as a startup for Y Combinator 's three-month accelerator programme, received US$120,000 in seed money, and later raised an angel round of approximately US$1 million. TeamNote announced its global launch during an interview with TechCrunch in March 2015. [ 4 ]
The original TeamNote app focused on secure messaging. This included password-protected conversations, the ability to send a message out to a group and get private replies, and even a feature to make sensitive messages disappear after a specific expiration date. As it expanded, the application gained features for managing shifts for workers in the field, who can send messages and photographs related to their work back to their company's home base to complete tasks. There are also mobile training modules, letting teams quickly bring new field workers up to speed without making them sit down and watch an entire training session. [ 4 ]
In 2014, TeamNote won the Red Herring (magazine) Asia Top 100 Technology Award in Hong Kong and the Global Top 100 Technology Award in Los Angeles . In 2015, TeamNote won the Best Mobile Apps Grand Award and the Best Mobile Apps (Business and Enterprise Solution) Gold Award at the Hong Kong ICT Awards . TeamNote also won a merit award at the Asia Pacific ICT Alliance Awards (APICTA). | https://en.wikipedia.org/wiki/TeamNote |
Teamwire is a technology start-up based in Munich, Germany , that was originally called grouptime GmbH. The company focuses on mobile messaging apps and secure communications for regulated industries. The core product is Teamwire, an encrypted instant messaging app for enterprises and the public sector. In addition to text messaging, users can send photos, videos, locations, voice messages and files with Teamwire.
The company was founded in August 2010 in Munich (Germany) by Tobias Stepan. In September 2011 the grouptime app was officially launched for devices with Apple 's iOS . [ 1 ] [ 2 ] In 2012 several updates of the app were released to improve group messaging and sharing of digital content.
In March 2014 grouptime launched a secure enterprise messaging app called Teamwire to simplify and improve internal communication. [ 3 ] The idea was to provide a messenger for businesses, to replace email as the dominant channel for team communication. By mid-2014 grouptime had abandoned its consumer product and focused completely on the enterprise messaging market. In January 2015, in addition to the German cloud, Teamwire also became available as an on-premises and private-cloud deployment in order to fulfill the strong data protection requirements of enterprises, large corporations and the public sector. [ 4 ] In March 2016 Teamwire became a cross-platform enterprise messaging app with the release of apps for desktop platforms such as Windows, Mac and Linux, in addition to the existing iOS and Android apps. [ 5 ]
In January 2017 the largest German financial services group deployed Teamwire as a secure messenger for the bank. [ 6 ] In May 2017 it became public that the police in the state of Bavaria in Germany uses Teamwire as a secure alternative to WhatsApp . [ 7 ] [ 8 ]
| https://en.wikipedia.org/wiki/Teamwire |
The teapot effect , also known as dribbling , is a fluid dynamics phenomenon that occurs when a liquid being poured from a container runs down the spout or the body of the vessel instead of flowing out in an arc. [ 1 ]
Markus Reiner coined the term "teapot effect" in 1956 to describe the tendency of liquid to dribble down the side of a vessel while pouring. [ 2 ] [ 3 ] Reiner received his PhD at TU Wien in 1913 and made significant contributions to the development of the study of flow behavior known as rheology . [ 1 ] Reiner believed the teapot effect could be explained by Bernoulli's principle , which states that an increase in the speed of a fluid is always accompanied by a decrease in its pressure. When tea is poured from a teapot, the liquid's speed increases as it flows through the narrowing spout. This decrease in pressure was what Reiner thought to cause the liquid to dribble down the side of the pot. [ 4 ] [ 3 ] However, a 2021 study found the primary cause of the phenomenon to be an interaction of inertia and capillary forces . [ 3 ] The study found that the smaller the angle between the container wall and the liquid surface, the more the teapot effect is slowed down. [ 5 ]
Around 1950, researchers from the Technion Institute in Haifa (Israel) and from New York University tried to explain this effect scientifically. [ 6 ] In fact, there are two phenomena that contribute to this effect: on the one hand, the Bernoulli equation is used to explain it, on the other hand, the adhesion between the liquid and the spout material is also important.
According to the Bernoulli explanation, the liquid is pressed against the inner edge of the spout when pouring out, because the pressure conditions at the end, the edge, change significantly; the surrounding air pressure pushes the liquid towards the spout. With a suitable pot geometry (or a sufficiently high pouring speed), the liquid can be prevented from clinging to the spout and thus triggering the teapot effect. Laws of hydrodynamics (flow theory) describe this situation; the relevant ones are explained in the following sections.
Since adhesion also plays a role, the material of the spout or the type of liquid (water, alcohol or oil, for example) is also relevant for the occurrence of the teapot effect.
The Coandă effect is sometimes mentioned in this context, [ 7 ] [ 8 ] [ 9 ] [ 10 ] but it is rarely cited in the scientific literature [ 8 ] and is therefore not precisely defined. Several different phenomena often seem to be conflated under this one term.
In hydrodynamics, the behavior of flowing liquids is illustrated by flow lines. They run in the same direction as the flow itself. If the outflowing liquid hits an edge, the flow is compressed into a smaller cross-section. The flow does not break off, because the mass flow rate remains constant regardless of where an imaginary cross-section (perpendicular to the flow) is located: the same amount of mass must flow in through one cross-sectional area as flows out of another. One can conclude from this, and also observe in reality, that the flow accelerates at bottlenecks and the streamlines are bundled. This situation is described by the continuity equation for non-turbulent flows.
But what happens to the pressure conditions in the flow if the flow speed changes? The scientist Daniel Bernoulli dealt with this question in the first half of the 18th century. Based on the considerations of continuity mentioned above, and incorporating the conservation of energy, he linked the two quantities of pressure and speed. The core statement of the Bernoulli equation is that the pressure in a liquid falls where the velocity increases (and vice versa). (Figure: flow according to Bernoulli and Venturi.)
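In symbols, for steady incompressible flow (a standard textbook form of the two relations, not reproduced from this article):

```latex
% Continuity: a narrowing cross-section A forces the speed v up.
A_1 v_1 = A_2 v_2
% Bernoulli, along a streamline: where v rises, p must fall.
p + \tfrac{1}{2}\rho v^{2} + \rho g h = \text{const.}
```

Where the cross-section A narrows, the speed v must rise, and Bernoulli's equation then forces the pressure p to fall; this is the pressure difference invoked in the next paragraph.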
The pressure in the flow is reduced at the edge of the spout. However, since the air pressure on the outside of the flow is the same everywhere, there is a pressure difference that pushes the liquid to the edge. Depending on the materials used, the outside of the spout is wetted during the flow process. At this point, additional interfacial forces occur : the liquid runs as a narrow trickle along the spout and the body of the pot until it detaches from the underside.
The unwanted teapot effect only occurs when pouring slowly and carefully. [ 6 ] In fast pouring, the liquid flows out of the spout in an arc without dribbling, since it is given a relatively high velocity with which it moves away from the edge (see Torricelli's outflow velocity). The pressure difference resulting from the Bernoulli equation is then not sufficient to influence the flow to such an extent that the liquid is pushed around the edge of the spout.
Since the flow conditions can be described mathematically, a critical outflow velocity can also be defined. If the pour falls below it, the liquid flows down the pot; it dribbles. Theoretically, this speed could be precisely calculated for a specific pot geometry, the current air pressure, the fill level of the pot, the spout material, the viscosity of the liquid and the pouring angle. Since, apart from the fill level, most of these influencing variables cannot be changed (at least not sufficiently precisely in practice), the only way to avoid the teapot effect is usually to choose a suitable geometry for the pot.
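A rough numerical sketch of this reasoning in Python, using only the Torricelli relation mentioned above; the critical velocity below is a made-up placeholder, since the real value depends on all the factors just listed:

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def torricelli_velocity(head_m: float) -> float:
    """Ideal outflow speed v = sqrt(2*g*h) under a liquid head of h metres."""
    return math.sqrt(2.0 * g * head_m)

# Hypothetical critical velocity below which the pour dribbles
# (illustrative assumption only; depends on pot geometry, wetting, etc.).
V_CRITICAL = 0.5  # m/s

for head in (0.002, 0.02, 0.1):  # liquid level above the spout, metres
    v = torricelli_velocity(head)
    verdict = "dribbles" if v < V_CRITICAL else "clean arc"
    print(f"head {head * 100:4.1f} cm -> v = {v:.2f} m/s -> {verdict}")
```

The sketch reproduces the qualitative behaviour described above: a nearly empty pot (small head) yields a low outflow speed and a dribbling pour, while a higher liquid level gives a clean arc.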
Another phenomenon is the reduction in air pressure between the spout and the jet of liquid due to the entrainment of gas molecules (a one-sided water-jet-pump effect), so that the air pressure on the opposite side would push the jet of liquid towards the spout side. However, under the conditions usually prevailing when pouring tea, this effect hardly appears.
A good jug should, regardless of fashion, have a spout with a tear-off edge (i.e. no rounded edge) to make it more difficult for the liquid to run around the edge. More importantly, the spout should first lead upwards (regardless of the position in which the jug is held). As a result, the liquid would be forced to flow upwards after going around the edge of the spout when pouring, which is prevented by gravity. The flow can thus resist wetting even when pouring slowly, and the liquid does not reach the downwardly inclined part of the spout or the body of the jug.
The image on the right shows three vessels in the front row with poor pouring behavior. Even in a horizontal position, that is, standing on the table, the bottom edges of their spouts do not point upwards. [ 6 ] Behind them are four vessels with good flow characteristics resulting from well-formed tips. Here, the liquid rises at the lower edge of the spout at an angle of less than 45°. [ 6 ] In part, this only becomes apparent when one considers the normal maximum fill level: the glass carafe on the far right, for example, appears at first glance to be a poor pourer because of its slender neck. However, since such vessels are generally filled at most up to the edge of the round part of the flask, an advantageous rise at the neck is obtained when pouring. (Image caption: upward angle for the liquid when pouring.) With the two lower jugs on the right, the high position of the spout (above the maximum filling level) means that the vessel has to be tilted quite a bit before pouring, so that the liquid is pushed upwards directly after the edge (against gravity).
To avoid the teapot effect, the pot can be filled less, so that a larger tilting angle is necessary from the start. However, the effect, and the ideal filling level, again depend on the pot geometry.
The teapot effect does not occur with bottles, because the slender neck of the bottle always points upwards when pouring; the current would therefore have to "flow uphill" a long way. [ 6 ] Bottle-like containers are therefore often used for liquid chemicals in the laboratory. Certain materials are also used there to prevent dripping: for example glass, which can be easily shaped or even ground to create the sharpest possible edges, or Teflon, which reduces the adhesion effect described above. | https://en.wikipedia.org/wiki/Teapot_effect |
Tear gas , also known as a lachrymatory agent or lachrymator (from Latin lacrima ' tear ' ), sometimes colloquially known as " mace " after the early commercial self-defense spray , is a chemical weapon that stimulates the nerves of the lacrimal gland in the eye to produce tears. In addition, it can cause severe eye and respiratory pain, skin irritation, bleeding, and blindness. Common lachrymators both currently and formerly used as tear gas include pepper spray (OC gas), PAVA spray ( nonivamide ), CS gas , CR gas , CN gas (phenacyl chloride), bromoacetone , xylyl bromide , chloropicrin (PS gas) and Mace (a branded mixture).
While lachrymatory agents are commonly deployed for riot control by law enforcement and military personnel, their use in warfare is prohibited by various international treaties. [ NB 1 ] During World War I , increasingly toxic and deadly lachrymatory agents were used.
The short and long-term effects of tear gas are not well studied. The published peer-reviewed literature consists of lower-quality evidence that does not establish causality. [ 1 ] Exposure to tear gas agents may produce numerous short-term and long-term health effects, including development of respiratory illnesses, severe eye injuries and diseases (such as traumatic optic neuropathy, keratitis, glaucoma, and cataracts), dermatitis, damage to the cardiovascular and gastrointestinal systems, and death, especially in cases with exposure to high concentrations of tear gas or application of the tear gases in enclosed spaces. [ 2 ]
Tear gas generally consists of aerosolized solid or liquid compounds ( bromoacetone or xylyl bromide ), not gas. [ 2 ] Tear gas works by irritating mucous membranes in the eyes, nose, mouth and lungs. It causes crying, coughing, difficulty breathing, pain in the eyes, and temporary blindness. With CS gas , symptoms of irritation typically appear after 20 to 60 seconds of exposure [ 3 ] and commonly resolve within 30 minutes of leaving (or being removed from) the area.
As with all non-lethal or less-lethal weapons , there is a risk of serious permanent injury or death when tear gas is used. [ 1 ] [ 4 ] [ 5 ] [ 2 ] This includes risks from being hit by tear gas cartridges that may cause severe bruising, loss of eyesight, or skull fracture, resulting in immediate death. [ 6 ] A case of serious vascular injury from tear gas shells has also been reported from Iran, with high rates of associated nerve injury (44%) and amputation (17%), [ 7 ] as well as instances of head injuries in young people. [ 8 ] Novel findings suggest that menstrual changes are one of the most commonly reported health issues in women. [ 1 ]
While the medical consequences of the gases themselves are typically limited to minor skin inflammation , delayed complications are also possible. People with pre-existing respiratory conditions such as asthma are particularly at risk. They are likely to need medical attention [ 3 ] and may sometimes require hospitalization or even ventilation support . [ 9 ] Skin exposure to CS may cause chemical burns [ 10 ] [ 1 ] or induce allergic contact dermatitis . [ 3 ] [ 11 ] When people are hit at close range or are severely exposed, eye injuries involving scarring of the cornea can lead to a permanent loss in visual acuity . [ 12 ] Frequent or high levels of exposure carry increased risks of respiratory illness. [ 2 ]
Venezuelan chemist Mónica Kräuter studied thousands of tear gas canisters fired by Venezuelan authorities since 2014. She concluded that the majority of canisters used the main component CS gas , but that 72% of the tear gas used was expired. She noted that expired tear gas "breaks down into cyanide oxide, phosgenes and nitrogens that are extremely dangerous". [ 13 ]
In the 2019–20 Chilean protests various people have had complete and permanent loss of vision in one or both eyes as result of the impact of tear gas grenades. [ 14 ] [ 15 ] [ 16 ]
The majority (2116; 93.8%) of protestors who reported exposure to tear gas during the 2020 protests in Portland, Oregon (US) reported physical (2114; 93.7%) or psychological (1635; 72.4%) health issues experienced immediately after (2105; 93.3%) or in the days following (1944; 86.1%) the exposure. A majority (1233; 54.6%) of those respondents also reported receiving or planning to seek medical or mental healthcare for their tear gas-related health issues. [ 1 ] Health issues associated with exposure to tear gas thus often require medical attention. [ 1 ]
TRPA1 ion channels expressed on nociceptors have been implicated as the site of action for CS gas , CR gas , CN gas (phenacyl chloride), chloropicrin and bromoacetone in rodent models. [ 17 ] [ 18 ]
During World War I , various forms of tear gas were used in combat, and tear gas was the most common form of chemical weapon used. None of the belligerents believed that the use of irritant gases violated the Hague Convention of 1899, which prohibited the use of "poison or poisoned weapons" in warfare. After 1914, during which only tear gas was used, the use of chemical weapons escalated to lethal gases.
The US Chemical Warfare Service developed tear gas grenades for use in riot control in 1919. [ 19 ]
Use of tear gas in interstate warfare, as with all other chemical weapons , was prohibited by the Geneva Protocol of 1925: it prohibited the use of "asphyxiating gas, or any other kind of gas, liquids, substances or similar materials", a treaty that most states have signed. Police and civilian self-defense use is not banned in the same manner. [ 20 ]
Tear gas was used in combat by Italy in the Second Italo-Ethiopian War , by Japan in the Second Sino-Japanese War , by Spain in the Rif War , by the United States in the Vietnam War , and by Israel in the Israel–Palestine conflict . [ 21 ] [ 22 ]
Tear gas exposure is an element of military training programs, typically as a means of improving trainees' tolerance to tear gas and encouraging confidence in the ability of their issued protective equipment to prevent chemical weapons exposure. [ 23 ] [ 24 ] [ 25 ]
Certain lachrymatory agents, most notably tear gas, are often used by police to force compliance. [ 5 ] In some countries (e.g., Finland, Australia, and the United States), another common substance is mace . The self-defense weapon form of mace is based on pepper spray, which comes in small spray cans. Versions including CS are manufactured for police use. [ 26 ] Xylyl bromide, CN and CS are the oldest of these agents. CS is the most widely used; CN has the most recorded toxicity. [ 3 ]
Typical manufacturer warnings on tear gas cartridges state "Danger: Do not fire directly at person(s). Severe injury or death may result." [ 27 ] Tear gas guns do not have a manual setting to adjust the range of fire. The only way to adjust the projectile's range is to aim towards the ground at the correct angle. Incorrect aim will send the capsules away from the targets, causing risk for non-targets instead. [ 28 ]
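As a rough illustration of why the aiming angle is the range control, here is the drag-free projectile formula in Python; real cartridges experience substantial air resistance, and the muzzle velocity is an assumed placeholder, so the numbers are only indicative:

```python
import math

g = 9.81         # m/s^2
MUZZLE_V = 60.0  # m/s -- illustrative assumption, not a real specification

def ideal_range(angle_deg: float) -> float:
    """Drag-free projectile range R = v^2 * sin(2*theta) / g."""
    return MUZZLE_V ** 2 * math.sin(math.radians(2.0 * angle_deg)) / g

for angle in (10, 25, 45):  # launch angle above the horizontal, degrees
    print(f"launch angle {angle:2d} deg -> range ~ {ideal_range(angle):3.0f} m")
```

With a fixed muzzle velocity, the only free parameter is the angle, which is why operators shorten the range by aiming towards the ground.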
A variety of protective equipment may be used, including gas masks and respirators . In riot control situations, protesters sometimes use equipment (aside from simple rags or clothing over the mouth) such as swimming goggles and adapted water bottles, as well as covering as much skin as possible. [ 29 ] [ 30 ] [ 31 ]
Activists in United States, the Czech Republic, Venezuela and Turkey have reported using antacid solutions such as Maalox diluted with water to repel effects of tear gas attacks, [ 32 ] [ 33 ] [ 34 ] with Venezuelan chemist Mónica Kräuter recommending the usage of diluted antacids as well as baking soda . [ 35 ] There have also been reports of these antacids being helpful for tear gas, [ 36 ] and for capsaicin-induced skin pain. [ 37 ]
During the 2019 Hong Kong protests , frontline protesters became adept at extinguishing tear gas: they formed special teams that sprang into action as soon as it was fired. These individuals generally wore protective clothing, including heat-proof gloves, or covered their arms and legs with cling film to prevent the painful skin irritation. Canisters were sometimes picked up and lobbed back at police or extinguished straight away with water, or neutralized using objects such as traffic cones. They shared information about models of 3M respirator filters which had been found to be most effective against tear gas, and where those models could be purchased. Other volunteers carried saline solutions to rinse the eyes of those affected. [ 38 ] Similarly, Chilean protesters of Primera Línea had specialized individuals collecting and extinguishing the tear gas grenades. Others acted as tear gas medics, and another group, the "shield-bearers," protected the protesters from the direct physical impact of the grenades. [ 39 ]
There is no specific antidote to common tear gases. [ 3 ] [ 40 ] At the first sign of exposure or potential exposure, masks are applied when available. People are removed from the affected area when possible. [ 41 ] [ 42 ] Immediate removal of contact lenses has also been recommended, as they can retain particles. [ 42 ] [ 40 ]
Decontamination is by physical or mechanical removal (brushing, washing, rinsing) of solid or liquid agents. Water may transiently exacerbate the pain caused by CS gas and pepper spray but is still effective, although fat-containing oils or soaps may be more effective against pepper spray. Eyes are decontaminated by copious flushing with sterile water or saline or (with OC) open-eye exposure to wind from a fan. Referral to an ophthalmologist is needed if slit-lamp examination shows impaction of solid particles of agent. [ 3 ] [ 41 ] [ 43 ] Blowing the nose to get rid of the chemicals is recommended, as is avoiding rubbing of the eyes. [ 31 ] There are reports that water may increase pain from CS gas, but the balance of limited evidence currently suggests water or saline are the best options. [ 40 ] [ 36 ] [ 44 ] Some evidence suggests that Diphoterine , a hypertonic amphoteric salt solution, a first aid product for chemical splashes, may help with ocular burns or chemicals in the eye. [ 43 ] [ 45 ]
Bathing and washing the body vigorously with soap and water can remove particles that adhere to the skin. Clothes, shoes and accessories that come into contact with vapors must be washed well since all untreated particles can remain active for up to a week. [ 46 ] Some advocate using fans or hair dryers to evaporate the spray, but this has not been shown to be better than washing out the eyes and it may spread contamination. [ 40 ]
Anticholinergics can work like some antihistamines , as they reduce lacrimation and decrease salivation, acting as an antisialagogue , and help with overall nose discomfort, as they are used to treat allergic reactions in the nose (e.g., itching, runny nose, and sneezing). [ citation needed ]
Oral analgesics may help relieve eye pain. [ 40 ]
Most effects resulting from riot-control agents are transient and do not require treatment beyond decontamination, and most patients do not need observation beyond 4 hours. However, patients should be instructed to return if they develop effects such as blistering or delayed-onset shortness of breath. [ 41 ]
Vinegar, petroleum jelly , milk and lemon juice solutions have also been used by activists. [ 47 ] [ 48 ] [ 49 ] [ 50 ] It is unclear how effective these remedies are. In particular, vinegar itself can burn the eyes, and prolonged inhalation can also irritate the airways. [ 51 ] Vegetable oil and vinegar have been reported as helping relieve burning caused by pepper spray. [ 42 ] Kräuter suggests the use of baking soda or toothpaste, stating that they trap the inhalable particles emanating from the gas near the airways. [ 35 ] A small trial of baby shampoo for washing out the eyes did not show any benefit. [ 40 ]
| https://en.wikipedia.org/wiki/Tear_gas |
Tear resistance (or tear strength ) is a measure of how well a material can withstand the effects of tearing . [ 1 ] It is a useful engineering measurement for a wide variety of materials by many different test methods .
For example, with rubber , tear resistance measures how the test specimen resists the growth of any cuts when under tension ; it is usually expressed in kN / m . [ 2 ] Tear resistance can be gauged via the same ASTM D 412 apparatus used to measure tensile strength , modulus and elongation . ASTM D 624 can be applied to measure the resistance to the formation of a tear (tear initiation) and the resistance to the expansion of a tear (tear propagation). Regardless of which of these two is being measured, the sample is held between two holders and a uniform pulling force is applied until the aforementioned deformation occurs. Tear resistance is then calculated by dividing the force applied by the thickness of the material. [ 2 ]
Materials with low tear resistance sometimes have poor resistance to abrasion and when damaged will quickly fail (this includes hard materials, since hardness is not related to tear resistance). [ 2 ]
Substances with high tear resistance include epichlorohydrin , natural rubber and polyurethane . In contrast, materials such as silicone and fluorosilicone have low tear resistance. [ 2 ]
The ratio of tear resistance to the yield strength is called the tear-yield ratio. It is a measure of notch toughness . [ 3 ]
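A minimal sketch of the two calculations described above, in Python; the specimen numbers are illustrative assumptions, not values from any standard:

```python
def tear_resistance_kn_per_m(force_n: float, thickness_mm: float) -> float:
    """Tear resistance = applied force / specimen thickness (ASTM D 624 style)."""
    return (force_n / 1000.0) / (thickness_mm / 1000.0)  # N -> kN, mm -> m

# Illustrative rubber specimen: 40 N pulling force, 2 mm thick.
print(tear_resistance_kn_per_m(40.0, 2.0))  # -> 20.0 kN/m

# Tear-yield ratio, assuming tear strength and yield strength are given
# in the same stress units so that the ratio is dimensionless.
tear_strength = 310.0   # MPa, illustrative
yield_strength = 280.0  # MPa, illustrative
print(round(tear_strength / yield_strength, 2))  # -> 1.11
```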
| https://en.wikipedia.org/wiki/Tear_resistance |
The teardrop tattoo or tear tattoo is a symbolic tattoo of a tear that is placed underneath the eye . The teardrop is one of the most widely recognised prison tattoos [ 1 ] and has various meanings.
It can signify that the wearer has spent time in prison, [ 2 ] [ 3 ] or more specifically that the wearer was raped while incarcerated and tattooed by the rapist as a "property" mark and for humiliation, since facial tattoos cannot be concealed. [ 4 ] [ 5 ] [ 6 ] [ 7 ]
The tattoo is sometimes worn by the female companions of prisoners in solidarity with their loved ones. [ 8 ] Amy Winehouse had a teardrop drawn on her face in eyeliner after her husband Blake entered the Pentonville prison hospital following a suspected drug overdose. [ 9 ]
It can acknowledge the loss of a friend or family member: Basketball player Amar'e Stoudemire has had a teardrop tattoo since 2012 honouring his older brother Hazell Jr., who died in a car accident. [ 10 ]
In West Coast United States gang culture , the tattoo may signify that the wearer has killed someone, [ 11 ] [ 12 ] and in some of those circles the tattoo's meaning can change: an empty outline means that the wearer attempted murder .
Sometimes the exact meaning of the tattoo is known only by the wearer [ 12 ] [ 13 ] as in the case of Portuguese footballer Ricardo Quaresma , who has never explained his teardrop tattoos. [ 14 ] | https://en.wikipedia.org/wiki/Teardrop_tattoo |
Tebbe's reagent is the organometallic compound with the formula (C 5 H 5 ) 2 TiCH 2 ClAl(CH 3 ) 2 . It is used in the methylidenation of carbonyl compounds; that is, it converts organic compounds containing the R 2 C=O group into the related R 2 C=CH 2 derivative. [ 1 ] It is a red solid that is pyrophoric in air, and thus is typically handled with air-free techniques . It was originally synthesized by Fred Tebbe at DuPont Central Research .
Tebbe's reagent contains two tetrahedral metal centers linked by a pair of bridging ligands . The titanium has two cyclopentadienyl ( [C 5 H 5 ] − , or Cp) rings and the aluminium has two methyl groups. The titanium and aluminium atoms are linked together by both a methylene bridge (-CH 2 -) and a bridging chloride in a nearly square-planar Ti–CH 2 –Al–Cl arrangement. [ 2 ] The Tebbe reagent was the first reported compound where a methylene bridge connects a transition metal (Ti) and a main group metal (Al). [ 3 ]
The Tebbe reagent is synthesized from titanocene dichloride and trimethylaluminium in toluene solution: [ 3 ] [ 4 ] Cp 2 TiCl 2 + 2 Al(CH 3 ) 3 → Cp 2 TiCH 2 AlCl(CH 3 ) 2 + CH 4 + Al(CH 3 ) 2 Cl
After about 3 days, the product is obtained after recrystallization to remove Al(CH 3 ) 2 Cl. [ 3 ] Although syntheses using the isolated Tebbe reagent give a cleaner product, successful procedures using the reagent "in situ" have been reported. [ 5 ] [ 6 ] Instead of isolating the Tebbe reagent, the solution is merely cooled in an ice bath or dry ice bath before adding the starting material.
An alternative but less convenient synthesis entails the use of dimethyltitanocene (Petasis reagent): [ 7 ]
One drawback to this method, aside from requiring Cp 2 Ti(CH 3 ) 2 , is the difficulty of separating product from unreacted starting reagent.
Tebbe's reagent itself does not react with carbonyl compounds, but must first be treated with a mild Lewis base , such as pyridine , which generates the active Schrock carbene .
As with the Wittig reagent, the reactivity appears to be driven by the high oxophilicity of Ti(IV). The Schrock carbene ( 1 ) reacts with carbonyl compounds ( 2 ) to give a postulated oxatitanacyclobutane intermediate ( 3 ). This cyclic intermediate has never been directly isolated, presumably because it breaks down immediately to produce the desired alkene ( 5 ).
The Tebbe reagent is used in organic synthesis for carbonyl methylidenation. [ 8 ] [ 9 ] [ 10 ] This conversion can also be effected using the Wittig reaction , although the Tebbe reagent is more efficient especially for sterically encumbered carbonyls. Furthermore, the Tebbe reagent is less basic than the Wittig reagent and does not give the β-elimination products.
Methylidenation reactions also occur for aldehydes as well as esters , lactones and amides . The Tebbe reagent converts esters and lactones to enol ethers and amides to enamines. In compounds containing both ketone and ester groups, the ketone selectively reacts in the presence of one equivalent of the Tebbe reagent.
The Tebbe reagent methylidenates carbonyls without racemizing a chiral α carbon. For this reason, the Tebbe reagent has found applications in reactions of sugars where maintenance of stereochemistry can be critical. [ 11 ]
The Tebbe reagent reacts with acid chlorides to form titanium enolates by replacing Cl − .
It is possible to modify Tebbe's reagent through the use of different ligands. This can alter the reactivity of the complex, allowing for a broader range of reactions. For example, cyclopropanation can be achieved using a chlorinated analogue. [ 12 ] | https://en.wikipedia.org/wiki/Tebbe's_reagent |
TechRadar is an online technology publication owned by Future plc . It has editorial teams in the United States , United Kingdom , and Australia that provide news and reviews of tech products and gadgets. It was launched in 2008 [ 1 ] [ 2 ] and expanded to the US in January 2012. [ 3 ] It further expanded to Australia in October 2012. [ 4 ] It was the largest UK consumer technology news and review site as of 2013. [ 5 ]
TechRadar also has licensed versions in Italy, Spain, Germany, France, Norway, Sweden, Denmark, Finland, the Netherlands and Belgium. The Indian and Middle East versions of the site closed in October 2022. It also has two spin-off sites, TechRadar Pro and TechRadar Gaming.
TechRadar is owned by Future plc , [ 6 ] the sixth-largest publisher in the United Kingdom . In Q4 2017, TechRadar entered the top 100 [ 7 ] of Similarweb 's US Media Publications Rankings as the 93rd biggest media site in the United States.
In 2023, TechRadar underwent a significant redesign, which the company described as a relaunch. [ 8 ] The redesign aimed to enhance user navigation, with a shift from story-type to product category-based navigation.
Marc McLaren is the global editor-in-chief and Lance Ulanoff [ 9 ] is editor-at-large .
Previous editors-in-chief include Paul Douglas, [ 10 ] Gareth Beavis, [ 11 ] Darren Murph, [ 12 ] Patrick Goss [ 13 ] and Marc Chacksfield. [ 14 ]
As of February 2025, the TechRadar masthead lists 40 staff members, not including subbrands TechRadar Pro and TechRadar Gaming. [ 15 ]
TechRadar Pro , an arm of the main site, is a B2B -focused property with an emphasis on small business. The subbrand "acts as a complementary source of information targeted specifically at businesses and decision makers," the company says. [ 16 ] Désiré Athow is managing editor of the subbrand and is part of a nine-person staff. [ 15 ]
The newest brand extension – TechRadar Gaming, or TRG – was launched 17 December 2021 [ 17 ] and aims to "sit at the intersection of hardware and gaming, leveraging strengths of existing brands to bring the best experience to gaming audience." The company described a related hiring spree for the site as "the biggest investment in gaming in a decade." Rob Dwiar is managing editor for TRG and is part of a three-person team. [ 15 ] | https://en.wikipedia.org/wiki/TechRadar |
TechRepublic is an online trade publication and social community for IT professionals, providing advice on best practices and tools for the needs of IT decision-makers. [ 1 ] [ 2 ]
It was founded in 1997 in Louisville, Kentucky , by Tom Cottingham and Kim Spalding, [ 3 ] and debuted as a website in May 1999. [ 4 ]
The site was purchased by CNET Networks in 2001 for $23 million. [ 5 ] TechRepublic was a part of the Red Ventures business portfolio alongside ZDNet , CNET , GameSpot , and Metacritic .
On August 9, 2021, a Nashville-based technology marketing company, TechnologyAdvice, announced the acquisition of TechRepublic. [ 6 ]
| https://en.wikipedia.org/wiki/TechRepublic |
A tech camp is a summer camp which focuses on technology education, sometimes referred to as a computer camp . These camps often include programs such as video game design , robotics , and programming . [ 1 ] Such camps first appeared in the United States in the late 1970s; the first, National Computer Camps, was established in 1977. [ 2 ] | https://en.wikipedia.org/wiki/Tech_camp |
Technetium-99 ( 99 Tc ) is an isotope of technetium that decays with a half-life of 211,000 years to stable ruthenium-99 , emitting beta particles , but no gamma rays. It is the most significant long-lived fission product of uranium fission, producing the largest fraction of the total long-lived radiation emissions of nuclear waste . Technetium-99 has a fission product yield of 6.0507% for thermal neutron fission of uranium-235 .
The metastable technetium-99m ( 99m Tc) is a short-lived (half-life about 6 hours) nuclear isomer used in nuclear medicine , produced from molybdenum-99. It decays by isomeric transition to technetium-99, a desirable characteristic, since the very long half-life and type of decay of technetium-99 imposes little further radiation burden on the body.
The weak beta emission is stopped by the walls of laboratory glassware. Soft X-rays are emitted when the beta particles are stopped, but as long as the body is kept more than 30 cm away these should pose no problem. The primary hazard when working with technetium is inhalation of dust; such radioactive contamination in the lungs can pose a significant cancer risk. [ citation needed ]
Due to its high fission yield, relatively long half-life, and mobility in the environment, technetium-99 is one of the more significant components of nuclear waste. Measured in becquerels per amount of spent fuel, it is the dominant producer of radiation in the period from about 10^4 to 10^6 years after the creation of the nuclear waste. [ 2 ] The next-shortest-lived fission product is samarium-151 with a half-life of 90 years, though a number of actinides produced by neutron capture have half-lives in the intermediate range.
An estimated 160 TBq (about 250 kg) of technetium-99 was released into the environment up to 1994 by atmospheric nuclear tests. [ 2 ] The amount of technetium-99 from civilian nuclear power released into the environment up to 1986 is estimated to be on the order of 1000 TBq (about 1600 kg), primarily by outdated methods of nuclear fuel reprocessing ; most of this was discharged into the sea. In recent years, reprocessing methods have improved to reduce emissions, but as of 2005 the primary release of technetium-99 into the environment is by the Sellafield plant, which released an estimated 550 TBq (about 900 kg) from 1995 to 1999 into the Irish Sea . From 2000 onwards the amount has been limited by regulation to 90 TBq (about 140 kg) per year. [ 3 ]
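These activity-to-mass conversions can be checked from the specific activity of technetium-99; a worked estimate, taking t½ = 211,000 yr ≈ 6.66 × 10^12 s and M = 99 g/mol:

$$a = \frac{\ln 2}{t_{1/2}}\cdot\frac{N_A}{M} = \frac{0.693}{6.66\times 10^{12}\,\mathrm{s}}\cdot\frac{6.022\times 10^{23}\,\mathrm{mol^{-1}}}{99\,\mathrm{g\,mol^{-1}}}\approx 6.3\times 10^{8}\,\mathrm{Bq/g},$$

so 160 TBq ÷ 6.3 × 10^8 Bq/g ≈ 2.5 × 10^5 g ≈ 250 kg, consistent with the figures quoted above.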
The long half-life of technetium-99 and its ability to form an anionic species make it (along with 129 I ) a major concern when considering long-term disposal of high-level radioactive waste . [ citation needed ] Many of the processes designed to remove fission products from medium-active process streams in reprocessing plants are designed to remove cationic species like caesium (e.g., 137 Cs , 134 Cs ) and strontium (e.g., 90 Sr ). Hence the pertechnetate escapes through these treatment processes. Current disposal options favor burial in geologically stable rock. The primary danger with such a course is that the waste is likely to come into contact with water, which could leach radioactive contamination into the environment. The natural cation-exchange capacity of soils tends to immobilize plutonium , uranium , and caesium cations. However, the anion-exchange capacity is usually much smaller, so minerals are less likely to adsorb the pertechnetate and iodide anions, leaving them mobile in the soil. For this reason, the environmental chemistry of technetium is an active area of research.
Several methods have been proposed for technetium-99 separation, including crystallization, [ 4 ] [ 5 ] liquid-liquid extraction, [ 6 ] [ 7 ] [ 8 ] molecular recognition methods, [ 9 ] volatilization, and others.
In 2012 the crystalline compound Notre Dame Thorium Borate-1 (NDTB-1) was presented by researchers at the University of Notre Dame. It can be tailored to safely absorb radioactive ions from nuclear waste streams. Once captured, the radioactive ions can then be exchanged for higher-charged species of a similar size, recycling the material for re-use. In laboratory tests, NDTB-1 crystals removed approximately 96% of technetium-99. [ 10 ] [ 11 ]
An alternative disposal method, transmutation , has been demonstrated at CERN and NIIAR for technetium-99. This transmutation process bombards the technetium ( 99 Tc as a metal target [ 12 ] [ 13 ] ) with neutrons , forming the short-lived 100 Tc (half-life 16 seconds), which decays by beta decay to stable ruthenium ( 100 Ru ). [ 14 ] Given the relatively high market value of ruthenium [ 15 ] and the particularly undesirable properties of technetium, this type of nuclear transmutation appears particularly promising. [ 16 ] | https://en.wikipedia.org/wiki/Technetium-99 |
Technetium-99m ( 99m Tc) is a metastable nuclear isomer of technetium-99 (itself an isotope of technetium ), symbolized as 99m Tc, that is used in tens of millions of medical diagnostic procedures annually, making it the most commonly used medical radioisotope in the world.
Technetium-99m is used as a radioactive tracer and can be detected in the body by medical equipment ( gamma cameras ). It is well suited to the role, because it emits readily detectable gamma rays with a photon energy of 140 keV (these 8.8 pm photons are about the same wavelength as emitted by conventional X-ray diagnostic equipment) and its half-life for gamma emission is 6.0058 hours (meaning 93.7% of it decays to 99 Tc in 24 hours). The relatively "short" physical half-life of the isotope and its biological half-life of 1 day (in terms of human activity and metabolism) allows for scanning procedures which collect data rapidly but keep total patient radiation exposure low. The same characteristics make the isotope unsuitable for therapeutic use.
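The 93.7% figure follows directly from the exponential decay law applied over 24 hours:

$$1 - 2^{-24/6.0058} = 1 - 2^{-3.996} \approx 1 - 0.063 = 0.937.$$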
Technetium-99m was discovered as a product of cyclotron bombardment of molybdenum . This procedure produced molybdenum-99 , a radionuclide with a longer half-life (2.75 days), which decays to 99m Tc. This longer half-life allows 99 Mo to be shipped to medical facilities, where 99m Tc is extracted from the sample as it is produced. In turn, 99 Mo is usually created commercially by fission of highly enriched uranium in a small number of research and material-testing nuclear reactors in several countries.
In 1938, Emilio Segrè and Glenn T. Seaborg isolated for the first time the metastable isotope technetium-99m, after bombarding natural molybdenum with 8 MeV deuterons in the 37-inch (940 mm) cyclotron of Ernest Orlando Lawrence 's Radiation laboratory . [ 2 ] In 1970 Seaborg explained that: [ 3 ]
we discovered an isotope of great scientific interest, because it decayed by means of an isomeric transition with emission of a line spectrum of electrons coming from an almost completely internally converted gamma ray transition. [actually, only 12% of the decays are by internal conversion] (...) This was a form of radioactive decay which had never been observed before this time. Segrè and I were able to show that this radioactive isotope of the element with the atomic number 43 decayed with a half-life of 6.6 h [later updated to 6.0 h] and that it was the daughter of a 67-h [later updated to 66 h] molybdenum parent radioactivity. This chain of decay was later shown to have the mass number 99, and (...) the 6.6-h activity acquired the designation 'technetium-99m.
Later in 1940, Emilio Segrè and Chien-Shiung Wu published experimental results of an analysis of fission products of uranium-235, including molybdenum-99, and detected the presence of an isomer of element 43 with a 6-hour half-life, later labelled as technetium-99m. [ 4 ] [ 5 ]
99m Tc remained a scientific curiosity until the 1950s when Powell Richards realized the potential of technetium-99m as a medical radiotracer and promoted its use among the medical community. While Richards was in charge of the radioisotope production at the Hot Lab Division of the Brookhaven National Laboratory , Walter Tucker and Margaret Greene were working on how to improve the separation process purity of the short-lived eluted daughter product iodine-132 from its parent, tellurium-132 (with a half life of 3.2 days), produced in the Brookhaven Graphite Research Reactor. [ 6 ] They detected a trace contaminant which proved to be 99m Tc, which was coming from 99 Mo and was following tellurium in the chemistry of the separation process for other fission products. Based on the similarities between the chemistry of the tellurium-iodine parent-daughter pair, Tucker and Greene developed the first technetium-99m generator in 1958. [ 7 ] [ 8 ] It was not until 1960 that Richards became the first to suggest the idea of using technetium as a medical tracer. [ 9 ] [ 10 ] [ 11 ] [ 12 ]
The first US publication to report on medical scanning of 99m Tc appeared in August 1963. [ 13 ] [ 14 ] Sorensen and Archambault demonstrated that intravenously injected carrier-free 99 Mo selectively and efficiently concentrated in the liver, becoming an internal generator of 99m Tc. After build-up of 99m Tc, they could visualize the liver using the 140 keV gamma ray emission.
The production and medical use of 99m Tc rapidly expanded across the world in the 1960s, benefiting from the development and continuous improvements of the gamma cameras . [ citation needed ]
Between 1963 and 1966, numerous scientific studies demonstrated the use of 99m Tc as radiotracer or diagnostic tool. [ 15 ] [ 16 ] [ 17 ] [ 18 ] As a consequence the demand for 99m Tc grew exponentially and by 1966, Brookhaven National Laboratory was unable to cope with the demand. Production and distribution of 99m Tc generators were transferred to private companies. "TechneKow-CS generator" , the first commercial 99m Tc generator, was produced by Nuclear Consultants, Inc. (St. Louis, Missouri) and Union Carbide Nuclear Corporation (Tuxedo, New York). [ 19 ] [ 20 ] From 1967 to 1984, 99 Mo was produced for Mallinckrodt Nuclear Company at the Missouri University Research Reactor (MURR). [ citation needed ]
Union Carbide actively developed a process, from 1968 to 1972 at the Cintichem facility (formerly the Union Carbide Research Center built in the Sterling forest in Tuxedo, New York, 41°14′6.88″N 74°12′50.78″W), to produce and separate useful isotopes such as 99 Mo from the mixed fission products resulting from the irradiation of highly enriched uranium (HEU) targets in nuclear reactors. [ 21 ] The Cintichem process originally used 93% highly enriched U-235 deposited as UO 2 on the inside of a cylindrical target. [ 22 ] [ 23 ]
At the end of the 1970s, 200,000 Ci (7.4 × 10^15 Bq) of total fission product radiation were extracted weekly from 20 to 30 reactor-bombarded HEU capsules, using the so-called "Cintichem [chemical isolation] process". [ 24 ] The research facility with its 1961 5-MW pool-type research reactor was later sold to Hoffman-LaRoche and became Cintichem Inc. [ 25 ] In 1980, Cintichem, Inc. began the production/isolation of 99 Mo in its reactor, and became the sole U.S. producer of 99 Mo during the 1980s. However, in 1989, Cintichem detected an underground leak of radioactive products that led to the reactor shutdown and decommissioning, putting an end to the commercial production of 99 Mo in the USA. [ 26 ]
The production of 99 Mo started in Canada in the early 1970s and was shifted to the NRU reactor in the mid-1970s. [ 27 ] By 1978 the reactor provided technetium-99m in large enough quantities that were processed by AECL's radiochemical division, which was privatized in 1988 as Nordion, now MDS Nordion . [ 28 ] In the 1990s a substitution for the aging NRU reactor for production of radioisotopes was planned. The Multipurpose Applied Physics Lattice Experiment (MAPLE) was designed as a dedicated isotope-production facility. Initially, two identical MAPLE reactors were to be built at Chalk River Laboratories , each capable of supplying 100% of the world's medical isotope demand. However, problems with the MAPLE 1 reactor, most notably a positive power coefficient of reactivity , led to the cancellation of the project in 2008.
The first commercial 99m Tc generators were produced in Argentina in 1967, with 99 Mo produced in the CNEA 's RA-1 Enrico Fermi reactor. [ 29 ] [ 30 ] Besides its domestic market CNEA supplies 99 Mo to some South American countries. [ 31 ]
In 1967, the first 99m Tc procedures were carried out in Auckland , New Zealand . [ 32 ] 99 Mo was initially supplied by Amersham, UK, then by the Australian Nuclear Science and Technology Organisation ( ANSTO ) in Lucas Heights, Australia. [ 33 ]
In May 1963, Scheer and Maier-Borst were the first to introduce the use of 99m Tc for medical applications. [ 13 ] [ 34 ] In 1968, Philips-Duphar (later Mallinckrodt, today Covidien ) marketed the first technetium-99m generator produced in Europe and distributed from Petten, the Netherlands. [ citation needed ]
Global shortages of technetium-99m emerged in the late 2000s because two aging nuclear reactors ( NRU and HFR ) that provided about two-thirds of the world's supply of molybdenum-99, which itself has a half-life of only 66 hours, were shut down repeatedly for extended maintenance periods. [ 35 ] [ 36 ] [ 37 ] In May 2009, Atomic Energy of Canada Limited announced the detection of a small leak of heavy water in the NRU reactor, which remained out of service until completion of the repairs in August 2010. [ citation needed ]
After the observation of gas bubble jets released from one of the deformations of the primary cooling water circuit in August 2008, the HFR reactor was stopped for a thorough safety investigation. In February 2009, NRG received a temporary licence to operate HFR only when necessary for medical radioisotope production. HFR stopped for repairs at the beginning of 2010 and was restarted in September 2010. [ 38 ]
Two replacement Canadian reactors (see MAPLE Reactor ) constructed in the 1990s were closed before beginning operation, for safety reasons. [ 35 ] [ 39 ] A construction permit for a new production facility to be built in Columbia, MO was issued in May 2018. [ 40 ]
Technetium-99m is a metastable nuclear isomer , as indicated by the "m" after its mass number 99. This means it is a nuclide in an excited (metastable) state that lasts much longer than is typical. The nucleus will eventually relax (i.e., de-excite) to its ground state through the emission of gamma rays or internal conversion electrons . Both of these decay modes rearrange the nucleons without transmuting the technetium into another element. [ citation needed ]
99m Tc decays mainly by gamma emission, slightly less than 88% of the time ( 99m Tc → 99 Tc + γ). About 98.6% of these gamma decays result in 140.5 keV gamma rays and the remaining 1.4% are to gammas of a slightly higher energy at 142.6 keV. These are the radiations that are picked up by a gamma camera when 99m Tc is used as a radioactive tracer for medical imaging . The remaining approximately 12% of 99m Tc decays are by means of internal conversion , resulting in ejection of high-speed internal conversion electrons in several sharp peaks (as is typical of electrons from this type of decay), also at about 140 keV ( 99m Tc → 99 Tc + + e − ). These conversion electrons will ionize the surrounding matter like beta radiation electrons would do, contributing along with the 140.5 keV and 142.6 keV gammas to the total deposited dose . [ citation needed ]
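Multiplying the branching fractions quoted above gives the absolute intensity of the main imaging photon:

$$P(140.5\,\mathrm{keV}) \approx 0.88 \times 0.986 \approx 0.87,$$

i.e., roughly 87 of every 100 decays yield a detectable 140.5 keV photon.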
Pure gamma emission is the desirable decay mode for medical imaging because other particles deposit more energy in the patient body ( radiation dose ) than in the camera. Metastable isomeric transition is the only nuclear decay mode that approaches pure gamma emission. [ citation needed ]
99m Tc's half-life of 6.0058 hours is considerably longer (by 14 orders of magnitude, at least) than most nuclear isomers, though not unique. This is still a short half-life relative to many other known modes of radioactive decay and it is in the middle of the range of half lives for radiopharmaceuticals used for medical imaging . [ citation needed ]
After gamma emission or internal conversion, the resulting ground-state technetium-99 then decays with a half-life of 211,000 years to stable ruthenium-99 . This process emits soft beta radiation without a gamma. Such low radioactivity from the daughter product(s) is a desirable feature for radiopharmaceuticals. [ citation needed ]
The parent nuclide of 99m Tc, 99 Mo, is mainly extracted for medical purposes from the fission products created in neutron-irradiated uranium-235 targets, the majority of which is produced in five nuclear research reactors around the world using highly enriched uranium (HEU) targets. [ 41 ] [ 42 ] Smaller amounts of 99 Mo are produced from low-enriched uranium in at least three reactors.
Production of 99 Mo by neutron activation of natural molybdenum, or molybdenum enriched in 98 Mo, [ 46 ] is another, currently smaller, route of production. [ 47 ]
The feasibility of 99m Tc production with the 22-MeV-proton bombardment of a 100 Mo target in medical cyclotrons was demonstrated in 1971. [ 48 ] The recent shortages of 99m Tc reignited interest in the production of "instant" 99m Tc by proton bombardment of isotopically enriched 100 Mo targets (>99.5%) following the reaction 100 Mo(p,2n) 99m Tc. [ 49 ] Canada is commissioning such cyclotrons, designed by Advanced Cyclotron Systems , for 99m Tc production at the University of Alberta and the Université de Sherbrooke , and is planning others at the University of British Columbia , TRIUMF , University of Saskatchewan and Lakehead University . [ 50 ] [ 51 ] [ 52 ]
A particular drawback of cyclotron production via (p,2n) on 100 Mo is the significant co-production of 99g Tc. The preferential in-growth of this nuclide occurs because the reaction cross-section leading to the ground state is almost five times higher at its maximum than that leading to the metastable state at the same energy. Depending on the time required to process the target material and recover the 99m Tc, the amount of 99m Tc relative to 99g Tc will continue to decrease, in turn reducing the specific activity of the 99m Tc available. It has been reported that in-growth of 99g Tc as well as the presence of other Tc isotopes can negatively affect subsequent labelling and/or imaging; [ 53 ] however, the use of high-purity 100 Mo targets, specified proton beam energies, and appropriate time of use have been shown to be sufficient for yielding 99m Tc from a cyclotron comparable to that from a commercial generator. [ 54 ] [ 55 ] Liquid metal molybdenum-containing targets have been proposed that would aid in streamlined processing, ensuring better production yields. [ 56 ] A particular problem associated with the continued reuse of recycled, enriched 100 Mo targets is unavoidable transmutation of the target, as other Mo isotopes are generated during irradiation and cannot be easily removed post-processing. [ citation needed ]
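The loss of isomeric purity with processing time can be made explicit. Since 99m Tc decays into 99g Tc while the total number of technetium atoms is essentially conserved over a few hours, the metastable fraction of the technetium falls from its end-of-bombardment value f₀ as

$$f(t) = f_0 \, 2^{-t/6.0058\,\mathrm{h}},$$

so each 6 hours spent on target processing and transport roughly halves the 99m Tc share of the technetium present (a sketch that neglects the comparatively negligible decay of 99g Tc over such times).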
Other particle accelerator-based isotope production techniques have been investigated. The supply disruptions of 99 Mo in the late 2000s and the ageing of the producing nuclear reactors forced the industry to look into alternative methods of production. [ 57 ] The use of cyclotrons or electron accelerators to produce 99 Mo from 100 Mo via (p,pn) [ 58 ] [ 59 ] [ 60 ] or (γ,n) [ 61 ] reactions, respectively, has been further investigated. The (n,2n) reaction on 100 Mo yields a higher reaction cross-section for high-energy neutrons than that of (n,γ) on 98 Mo with thermal neutrons. [ 62 ] In particular, this method requires accelerators that generate fast-neutron spectra, such as ones using D-T [ 63 ] or other fusion-based reactions, [ 64 ] or high-energy spallation or knock-out reactions. [ 65 ] A disadvantage of these techniques is the necessity for enriched 100 Mo targets, which are significantly more expensive than natural isotopic targets and typically require recycling of the material, which can be costly, time-consuming, and arduous. [ 66 ] [ 67 ]
Technetium-99m's short half-life of 6 hours makes storage impossible and would make transport very expensive. Instead, its parent nuclide 99 Mo is supplied to hospitals after its extraction from the neutron-irradiated uranium targets and its purification in dedicated processing facilities. [ notes 1 ] [ 69 ] It is shipped by specialised radiopharmaceutical companies in the form of technetium-99m generators worldwide or directly distributed to the local market. The generators, colloquially known as moly cows, are devices designed to provide radiation shielding for transport and to minimize the extraction work done at the medical facility. A typical dose rate at 1 metre from a 99m Tc generator is 20–50 μSv/h during transport. [ 70 ] These generators' output declines with time, and they must be replaced weekly, since the half-life of 99 Mo is still only 66 hours.
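The weekly replacement schedule follows from the parent half-life: after one week (168 h) the surviving 99 Mo fraction is

$$2^{-168/66} \approx 0.17,$$

so a generator retains only about one-sixth of its initial parent activity.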
Molybdenum-99 spontaneously decays to excited states of 99 Tc through beta decay . Over 87% of the decays lead to the 142 keV excited state of 99m Tc. A β − electron and an electron antineutrino ( ν e ) are emitted in the process ( 99 Mo → 99m Tc + β − + ν e ). The β − electrons are easily shielded for transport, and 99m Tc generators are only minor radiation hazards, mostly due to secondary X-rays produced by the electrons (also known as bremsstrahlung ).
At the hospital, the 99m Tc that forms through 99 Mo decay is chemically extracted from the technetium-99m generator. Most commercial 99 Mo/ 99m Tc generators use column chromatography , in which 99 Mo in the form of water-soluble molybdate, MoO 4 2− is adsorbed onto acid alumina (Al 2 O 3 ). When the 99 Mo decays, it forms pertechnetate TcO 4 − , which, because of its single charge, is less tightly bound to the alumina. Pulling normal saline solution through the column of immobilized 99 MoO 4 2− elutes the soluble 99m TcO 4 − , resulting in a saline solution containing the 99m Tc as the dissolved sodium salt of the pertechnetate . One technetium-99m generator, holding only a few micrograms of 99 Mo, can potentially diagnose 10,000 patients [ citation needed ] because it will be producing 99m Tc strongly for over a week.
Technetium exits the generator in the form of the pertechnetate ion, TcO 4 − , in which the oxidation state of Tc is +7. This is directly suitable for medical applications only in bone scans (it is taken up by osteoblasts) and some thyroid scans (it is taken up in place of iodine by normal thyroid tissues). In other types of scans relying on 99m Tc, a reducing agent is first added to the pertechnetate solution to bring the oxidation state of the technetium down to +3 or +4; a ligand is then added to form a coordination complex . The ligand is chosen to have an affinity for the specific organ to be targeted. For example, the exametazime complex of Tc in oxidation state +3 is able to cross the blood–brain barrier and flow through the vessels in the brain for cerebral blood flow imaging. Other ligands include sestamibi for myocardial perfusion imaging and mercaptoacetyl triglycine for the MAG3 scan to measure renal function. [ 71 ]
In 1970, Eckelman and Richards presented the first "kit" containing all the ingredients required to release the 99m Tc, "milked" from the generator, in the chemical form to be administered to the patient. [ 71 ] [ 72 ] [ 73 ] [ 74 ]
Technetium-99m is used in 20 million diagnostic nuclear medical procedures every year. Approximately 85% of diagnostic imaging procedures in nuclear medicine use this isotope as radioactive tracer . Klaus Schwochau's book Technetium lists 31 radiopharmaceuticals based on 99m Tc for imaging and functional studies of the brain , myocardium , thyroid , lungs , liver , gallbladder , kidneys , skeleton , blood , and tumors . [ 75 ] A more recent review is also available. [ 76 ]
Depending on the procedure, the 99m Tc is tagged (or bound to) a pharmaceutical that transports it to its required location. For example, when 99m Tc is chemically bound to exametazime (HMPAO), the drug is able to cross the blood–brain barrier and flow through the vessels in the brain for cerebral blood-flow imaging. This combination is also used for labeling white blood cells ( 99m Tc labeled WBC ) to visualize sites of infection. 99m Tc sestamibi is used for myocardial perfusion imaging, which shows how well the blood flows through the heart. Imaging to measure renal function is done by attaching 99m Tc to mercaptoacetyl triglycine ( MAG3 ); this procedure is known as a MAG3 scan .
Technetium-99m (Tc-99m) can be readily detected in the body by medical equipment because it emits 140.5 keV gamma rays (about the same wavelength as emitted by conventional X-ray diagnostic equipment), and its half-life for gamma emission is six hours (meaning 94% of it decays to 99 Tc in 24 hours). In addition, it emits virtually no beta radiation, keeping the radiation dose low. Its decay product, 99 Tc, has a relatively long half-life (211,000 years) and emits little radiation. The short physical half-life of 99m Tc and its biological half-life of 1 day, together with its other favourable properties, allow scanning procedures to collect data rapidly and keep total patient radiation exposure low. Chemically, technetium-99m is selectively concentrated in the stomach, thyroid, and salivary glands, and excluded from cerebrospinal fluid ; combining it with perchlorate abolishes this selectivity. [ 77 ]
Diagnostic treatment involving technetium-99m will result in radiation exposure to technicians, patients, and passers-by. Typical quantities of technetium administered for immunoscintigraphy tests, such as SPECT tests, range from 400 to 1,100 MBq (11 to 30 mCi ) for adults. [ 78 ] [ 79 ] These doses result in radiation exposures to the patient of around 10 mSv (1,000 mrem ), the equivalent of about 500 chest X-ray exposures. [ 80 ] This level of radiation exposure is estimated by the linear no-threshold model to carry a 1 in 1,000 lifetime risk of developing a solid cancer or leukemia in the patient. [ 81 ] The risk is higher in younger patients, and lower in older ones. [ 82 ] Unlike a chest X-ray, the radiation source is inside the patient and will be carried around for a few days, exposing others to second-hand radiation. A spouse who stays constantly by the side of the patient through this time might receive one-thousandth of the patient's radiation dose this way.
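The 500-fold comparison implies the commonly used reference dose of about 0.02 mSv for a single chest film:

$$\frac{10\,\mathrm{mSv}}{500} = 0.02\,\mathrm{mSv}.$$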
The short half-life of the isotope allows for scanning procedures that collect data rapidly. The isotope also has a very low energy for a gamma emitter: its ~140 keV gamma energy makes it safer for use because of the substantially reduced ionization compared with other gamma emitters. The energy of gammas from 99m Tc is about the same as the radiation from a commercial diagnostic X-ray machine, although the number of gammas emitted results in radiation doses more comparable to X-ray studies like computed tomography .
Technetium-99m has several features that make it safer than other possible isotopes. Its gamma decay mode can be easily detected by a camera, allowing the use of smaller quantities. And because technetium-99m has a short half-life, its quick decay into the far less radioactive technetium-99 results in relatively low total radiation dose to the patient per unit of initial activity after administration, as compared with other radioisotopes. In the form administered in these medical tests (usually pertechnetate), technetium-99m and technetium-99 are eliminated from the body within a few days. [ citation needed ]
Single-photon emission computed tomography (SPECT) is a nuclear medicine imaging technique using gamma rays. It may be used with any gamma-emitting isotope, including 99m Tc. In the use of technetium-99m, the radioisotope is administered to the patient and the escaping gamma rays are incident upon a moving gamma camera which computes and processes the image. To acquire SPECT images, the gamma camera is rotated around the patient. Projections are acquired at defined points during the rotation, typically every three to six degrees. In most cases, a full 360° rotation is used to obtain an optimal reconstruction. The time taken to obtain each projection is also variable, but 15–20 seconds are typical. This gives a total scan time of 15–20 minutes.
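The quoted total corresponds to the coarser sampling on a single-detector system; finer 3° sampling would roughly double it, and multi-head cameras shorten it proportionally:

$$\frac{360^\circ}{6^\circ} = 60\ \text{projections},\qquad 60 \times (15\text{--}20\,\mathrm{s}) = 900\text{--}1200\,\mathrm{s} \approx 15\text{--}20\,\mathrm{min}.$$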
The technetium-99m radioisotope is used predominantly in bone and brain scans. For bone scans , the pertechnetate ion is used directly, as it is taken up by osteoblasts attempting to heal a skeletal injury, or (in some cases) as a reaction of these cells to a tumor (either primary or metastatic) in the bone. In brain scanning, 99m Tc is attached to the chelating agent HMPAO to create technetium ( 99m Tc) exametazime , an agent which localizes in the brain according to regional blood flow, making it useful for the detection of stroke and dementing illnesses that decrease regional brain flow and metabolism.
Most recently, technetium-99m scintigraphy has been combined with CT coregistration technology to produce SPECT/CT scans. These employ the same radioligands and have the same uses as SPECT scanning, but are able to provide even finer 3-D localization of high-uptake tissues, in cases where finer resolution is needed. An example is the sestamibi parathyroid scan which is performed using the 99m Tc radioligand sestamibi , and can be done in either SPECT or SPECT/CT machines.
The nuclear medicine technique commonly called the bone scan usually uses 99m Tc. It is not to be confused with the "bone density scan", DEXA , which is a low-exposure X-ray test measuring bone density to look for osteoporosis and other diseases where bones lose mass without rebuilding activity. The nuclear medicine technique is sensitive to areas of unusual bone rebuilding activity, since the radiopharmaceutical is taken up by osteoblast cells which build bone. The technique therefore is sensitive to fractures and bone reaction to bone tumors, including metastases. For a bone scan, the patient is injected with a small amount of radioactive material, such as 700–1,100 MBq (19–30 mCi) of 99m Tc-medronic acid and then scanned with a gamma camera . Medronic acid is a phosphate derivative which can exchange places with bone phosphate in regions of active bone growth, so anchoring the radioisotope to that specific region. To view small lesions (less than 1 centimetre (0.39 in)) especially in the spine, the SPECT imaging technique may be required, but currently in the United States, most insurance companies require separate authorization for SPECT imaging.
Myocardial perfusion imaging (MPI) is a form of functional cardiac imaging, used for the diagnosis of ischemic heart disease . The underlying principle is that, under conditions of stress, diseased myocardium receives less blood flow than normal myocardium. MPI is one of several types of cardiac stress test . As a nuclear stress test , the average radiation exposure is 9.4 mSv, which when compared with a typical two-view chest X-ray (0.1 mSv) is equivalent to about 94 chest X-rays. [ 83 ]
Several radiopharmaceuticals and radionuclides may be used for this, each giving different information. In myocardial perfusion scans using 99m Tc, the radiopharmaceuticals 99m Tc- tetrofosmin (Myoview, GE Healthcare ) or 99m Tc- sestamibi (Cardiolite, Bristol-Myers Squibb ) are used. Following this, myocardial stress is induced, either by exercise or pharmacologically with adenosine , dobutamine or dipyridamole (Persantine), which increase the heart rate, or by regadenoson (Lexiscan), a vasodilator. ( Aminophylline can be used to reverse the effects of dipyridamole and regadenoson.) Scanning may then be performed with a conventional gamma camera, or with SPECT/CT.
In cardiac ventriculography , a radionuclide, usually 99m Tc, is injected, and the heart is imaged to evaluate the flow through it, in order to assess coronary artery disease , valvular heart disease , congenital heart diseases , cardiomyopathy , and other cardiac disorders . As a nuclear stress test , the average radiation exposure is 9.4 mSv, which when compared with a typical two-view chest X-ray (0.1 mSv) is equivalent to about 94 chest X-rays. [ 83 ] [ 84 ] It exposes patients to less radiation than comparable chest X-ray studies. [ 84 ]
Usually the gamma-emitting tracer used in functional brain imaging is 99m Tc-HMPAO (hexamethylpropylene amine oxime, exametazime ). The similar 99m Tc-EC tracer may also be used. These molecules are preferentially distributed to regions of high brain blood flow, and act to assess brain metabolism regionally, in an attempt to diagnose and differentiate the different causal pathologies of dementia . When used with the 3-D SPECT technique, they compete with brain FDG-PET scans and fMRI brain scans as techniques to map the regional metabolic rate of brain tissue.
The radioactive properties of 99m Tc can be used to identify the predominant lymph nodes draining a cancer, such as breast cancer or malignant melanoma . This is usually performed at the time of biopsy or resection . 99m Tc-labelled filtered sulfur colloid or Technetium (99mTc) tilmanocept are injected intradermally around the intended biopsy site. The general location of the sentinel node is determined with the use of a handheld scanner with a gamma-sensor probe that detects the technetium-99m–labeled tracer that was previously injected around the biopsy site. An injection of Methylene blue or isosulfan blue is done at the same time to dye any draining nodes visibly blue. An incision is then made over the area of highest radionuclide accumulation, and the sentinel node is identified within the incision by inspection; the isosulfan blue dye will usually stain any lymph nodes blue that are draining from the area around the tumor. [ 85 ]
Immunoscintigraphy incorporates 99m Tc into a monoclonal antibody , an immune system protein , capable of binding to cancer cells. A few hours after injection, medical equipment is used to detect the gamma rays emitted by the 99m Tc; higher concentrations indicate where the tumor is. This technique is particularly useful for detecting hard-to-find cancers, such as those affecting the intestines . These modified antibodies are sold by the German company Hoechst (now part of Sanofi-Aventis ) under the name Scintimun . [ 86 ]
When 99m Tc is combined with a tin compound, it binds to red blood cells and can therefore be used to map circulatory system disorders. It is commonly used to detect gastrointestinal bleeding sites as well as ejection fraction , heart wall motion abnormalities, abnormal shunting, and to perform ventriculography .
A pyrophosphate ion with 99m Tc adheres to calcium deposits in damaged heart muscle, making it useful to gauge damage after a heart attack . [ citation needed ]
The sulfur colloid of 99m Tc is scavenged by the spleen , making it possible to image the structure of the spleen. [ 87 ]
Pertechnetate is actively accumulated and secreted by the mucoid cells of the gastric mucosa ; [ 88 ] therefore, 99m Tc-pertechnetate is injected into the body when looking for ectopic gastric tissue, as is found in a Meckel's diverticulum , in Meckel's scans. [ 89 ]
Carbon inhalation aerosol labeled with technetium-99m (Technegas) is indicated for the visualization of pulmonary ventilation and the evaluation of pulmonary embolism. [ 90 ] [ 91 ] [ 92 ] | https://en.wikipedia.org/wiki/Technetium-99m |
A technetium-99m generator , or colloquially a technetium cow or moly cow , is a device used to extract the metastable isotope 99m Tc of technetium from a decaying sample of molybdenum-99 . 99 Mo has a half-life of 66 hours [ 1 ] and can be easily transported over long distances to hospitals where its decay product technetium-99m (with a half-life of only 6 hours, inconvenient for transport) is extracted and used for a variety of nuclear medicine diagnostic procedures , where its short half-life is very useful.
99 Mo can be obtained by the neutron activation (n,γ reaction) of 98 Mo in a high- neutron-flux reactor. However, the most frequently used method is through fission of uranium -235 in a nuclear reactor . While most reactors currently engaged in 99 Mo production use highly enriched uranium-235 targets, proliferation concerns have prompted some producers to transition to low-enriched uranium targets. [ 2 ] The target is irradiated with neutrons to form 99 Mo as a fission product (with 6.1% yield ). [ 3 ] Molybdenum-99 is then separated from unreacted uranium and other fission products in a hot cell . [ 4 ]
99m Tc remained a scientific curiosity until the 1950s when Powell Richards realized the potential of technetium-99m as a medical radiotracer and promoted its use among the medical community. [ 5 ] While Richards was in charge of the radioisotope production at the Hot Lab Division of the Brookhaven National Laboratory , Walter Tucker and Margaret Greene were working on how to improve the separation process purity of the short-lived eluted daughter product iodine-132 from tellurium-132 , its 3.2-days parent, produced in the Brookhaven Graphite Research Reactor. [ 6 ] They detected a trace contaminant which proved to be 99m Tc, which was coming from 99 Mo and was following tellurium in the chemistry of the separation process for other fission products. Based on the similarities between the chemistry of the tellurium-iodine parent-daughter pair, Tucker and Greene developed the first technetium-99m generator in 1958. [ 7 ] [ 8 ] It was not until 1960 that Richards became the first to suggest the idea of using technetium as a medical tracer. [ 9 ] [ 10 ] [ 11 ] [ 12 ]
Technetium-99m's short half-life of 6 hours makes long-term storage impossible. Transport of 99m Tc from the limited number of production sites to radiopharmacies (for manufacture of specific radiopharmaceuticals ) and other end users would be complicated by the need to significantly overproduce to have sufficient remaining activity after long journeys. Instead, the longer-lived parent nuclide 99 Mo can be supplied to radiopharmacies in a generator, after its extraction from the neutron -irradiated uranium targets and its purification in dedicated processing facilities. [ 13 ] Radiopharmacies may be hospital-based or stand-alone facilities, and in many cases will subsequently distribute 99m Tc radiopharmaceuticals to regional nuclear medicine departments. Development in direct production of 99m Tc, without first producing the parent 99 Mo, precludes the use of generators; however, this is uncommon and relies on suitable production facilities close to radiopharmacies. [ 14 ]
Generators provide radiation shielding for transport and minimize the extraction work done at the medical facility. A typical dose rate at 1 metre from a 99m Tc generator is 20–50 μSv/h during transport. [ 15 ]
These generators' output declines with time, and they must be replaced weekly, since the half-life of 99 Mo is only 66 hours. Since the half-life of the parent nuclide ( 99 Mo) is much longer than that of the daughter nuclide ( 99m Tc), 50% of the equilibrium activity is reached within one daughter half-life, and 75% within two daughter half-lives. Hence, removing the daughter nuclide ( elution process) from the generator ("milking" the cow) is reasonably done as often as every 6 hours in a 99 Mo/ 99m Tc generator. [ 16 ]
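A minimal numerical sketch of this in-growth arithmetic, assuming simple two-member Bateman kinetics (the fraction of 99 Mo decays that bypasses the metastable state cancels out of the fraction-of-equilibrium ratio):

```python
import math

# Half-lives in hours, as quoted in the text.
T_MO99 = 66.0      # parent Mo-99
T_TC99M = 6.0058   # daughter Tc-99m
lam_mo = math.log(2) / T_MO99
lam_tc = math.log(2) / T_TC99M

# After a complete elution, the daughter activity approaches transient
# equilibrium with the decaying parent; from the Bateman equation the
# fraction of the equilibrium activity reached after time t is:
#   f(t) = 1 - exp(-(lam_tc - lam_mo) * t)
for n in (1, 2, 3):
    t = n * T_TC99M
    f = 1.0 - math.exp(-(lam_tc - lam_mo) * t)
    print(f"after {n} x 6 h (~{t:.0f} h): {f:.0%} of equilibrium")
# Output: ~47%, ~72%, ~85% -- consistent with the ~50%/75% rule of
# thumb above (which ignores the slow decay of the Mo-99 parent).
```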
Most commercial 99 Mo/ 99m Tc generators use column chromatography , in which 99 Mo in the form of molybdate , MoO 4 2− is adsorbed onto acid alumina (Al 2 O 3 ). When the 99 Mo decays it forms pertechnetate TcO 4 − , which, because of its single charge, is less tightly bound to the alumina. Pouring normal saline solution through the column of immobilized 99 Mo elutes the soluble 99m Tc, resulting in a saline solution containing the 99m Tc as pertechnetate, with sodium as the counterion .
The solution of sodium pertechnetate may then be added in an appropriate concentration to the pharmaceutical kit to be used, or sodium pertechnetate can be used directly without pharmaceutical tagging for specific procedures requiring only the 99m TcO 4 − as the primary radiopharmaceutical . A large percentage of the 99m Tc generated by a 99 Mo/ 99m Tc generator is produced in the first 3 parent half-lives, or approximately one week. Hence, clinical nuclear medicine units purchase at least one such generator per week or order several in a staggered fashion. [ 17 ]
When the generator is left unused, 99 Mo decays to 99m Tc, which in turn decays to 99 Tc. The half-life of 99 Tc is far longer than that of its metastable isomer, so the ratio of 99 Tc to 99m Tc increases over time. Both isomers are carried over in the elution process and react equally well with the ligand, but the 99 Tc is an impurity useless for imaging (and cannot be separated).
The generator is washed of 99 Tc and 99m Tc at the end of the manufacturing process of the generator, but the ratio of 99 Tc to 99m Tc then builds up again during transport or any other period when the generator is left unused. The first few elutions will have reduced effectiveness because of this high ratio. [ 18 ] | https://en.wikipedia.org/wiki/Technetium-99m_generator |
Technetium ( 99m Tc) arcitumomab was a drug used for the diagnostic imaging of colorectal cancers , marketed by Immunomedics . [ 1 ] It consisted of the Fab' fragment of a monoclonal antibody (arcitumomab, trade name CEA-Scan ) and a radionuclide , technetium-99m .
CEA-Scan was approved by the European Medicines Agency (EMA) in October 1996 for imaging in the case of metastases and/or recurrence in patients suffering from colon or rectum cancer. Under the same decision, it was also approved for use in patients suspected to have colon or rectal carcinoma recurrence and/or metastasis in association with rising blood CEA levels. [ 2 ]
Technetium ( 99m Tc) arcitumomab is an immunoconjugate . Arcitumomab is a Fab' fragment of IMMU-4, a murine IgG1 monoclonal antibody extracted from the ascites of mice. The enzyme pepsin cleaves the F(ab') 2 fragment off the antibody. From this, the Fab' fragment is prepared by mild reduction .
Before application, arcitumomab is reconstituted with a solution of the radioactive agent sodium pertechnetate ( 99m Tc) from a technetium generator . [ 1 ]
Arcitumomab recognizes carcinoembryonic antigen (CEA), an antigen over- expressed in 95% of colorectal cancers. [ 3 ] Consequently, the antibody accumulates in such tumours together with the radioisotope, which emits photons . Via single photon emission computed tomography (SPECT), high-resolution images showing localisation, remission or progression, and metastases of the tumour can be obtained. [ 1 ] [ 4 ]
Technetium ( 99m Tc) arcitumomab is contraindicated for patients with known allergies or hypersensitivity to mouse proteins, as well as during pregnancy. Women should pause breast feeding for 24 hours after application of the drug. [ 1 ]
Only mild and transient side effects have been observed, mostly immunological reactions like eosinophilia , itching and fever. Some patients develop human anti-mouse antibodies , so there is the theoretical possibility of anaphylactic reactions . High doses of IMMU-4 (up to 20-fold the diagnostic arcitumomab dose) have not led to any serious events. One patient has been reported to have developed a grand mal seizure after application. [ 1 ]
Radioactivity can lead to radiation poisoning . Since the dose of an arcitumomab application is only about 10 mSv , [ 1 ] such an overdose is unlikely.
In August 2005, the marketing company Immunomedics voluntarily decided to withdraw the product from the market.
In September 2005, EMA accepted the decision and CEA-Scan was removed from the market. [ 5 ] | https://en.wikipedia.org/wiki/Technetium_(99mTc)_arcitumomab |
Technetium ( 99m Tc) exametazime is a radiopharmaceutical sold under the trade name Ceretec , and is used by nuclear medicine physicians for the detection of altered regional cerebral perfusion in stroke [ 1 ] and other cerebrovascular diseases. It can also be used for the labelling of leukocytes to localise intra- abdominal infections [ 2 ] and inflammatory bowel disease . [ 3 ] Exametazime (the part without technetium) is sometimes referred to as hexamethylpropylene amine oxime or HMPAO , although correct chemical names are: [ 4 ]
The drug consists of exametazime as a chelating agent for the radioisotope technetium-99m . Both enantiomeric forms of exametazime are used—the drug is racemic . [ 5 ] The third stereoisomer of this structure, the meso form , is not included. | https://en.wikipedia.org/wiki/Technetium_(99mTc)_exametazime |
Technetium ( 99m Tc) fanolesomab (trade name NeutroSpec , manufactured by Palatin Technologies) is a mouse monoclonal antibody formerly used to aid in the diagnosis of appendicitis . It is labeled with a radioisotope , technetium-99m ( 99m Tc).
NeutroSpec was approved by the U.S. Food and Drug Administration (FDA) in June 2004 for imaging of patients with symptoms of appendicitis. It consisted of an intact murine (mouse) IgM monoclonal antibody against human CD15 , labeled with technetium-99m so as to be visible on a gamma camera image. Since anti-CD15 antibodies bind selectively to white blood cells such as neutrophils , it could be used to localize the site of an infection.
The FDA received reports from Palatin of 2 deaths and 15 life-threatening adverse events in patients who had received NeutroSpec.
These events occurred within minutes of administration of NeutroSpec and included shortness of breath , low blood pressure , and cardiopulmonary arrest . Affected patients required resuscitation with intravenous fluids, blood pressure support, and oxygen. Most, but not all, of the patients who experienced these events had existing cardiac and/or pulmonary conditions that may have placed them at higher risk for these adverse events. A review of all post-marketing reports showed an additional 46 patients who experienced adverse events that were similar but less severe. All of the reactions occurred immediately after NeutroSpec was administered. [ 1 ]
Marketing of the product was suspended in December 2005.
| https://en.wikipedia.org/wiki/Technetium_(99mTc)_fanolesomab |
Technetium ( 99m Tc) nofetumomab merpentan (trade name Verluma ) is a mouse monoclonal antibody derivative used in the diagnosis of lung cancer , [ 1 ] gastrointestinal , breast , ovary , pancreas , kidney , cervix , and bladder carcinoma . [ 2 ] The antibody part, nofetumomab , is attached to the chelator merpentan , which links it to the radioisotope technetium-99m ( 99m Tc). [ 3 ]
Nofetumomab is an antibody fragment that recognises the pancarcinoma glycoprotein antigen EpCAM [ 4 ] and/or CD20 / MS4A1 . [ 5 ]
It is the Fab part of murine MAb NR-LU-10 . [ 6 ]
The chelator part, merpentan, is a phenthioate ligand: 2,3,5,6-tetrafluorophenyl-4,5-bis-5-[1-ethoxyethyl]-thioacetoamidopentanoate. [ 6 ]
Phenthioate is also the name of an insecticide (Cidial), O,O-dimethyl S-(carbethoxyphenylmethyl) dithiophosphate. [ 7 ]
| https://en.wikipedia.org/wiki/Technetium_(99mTc)_nofetumomab_merpentan |
Technetium ( 99m Tc) sulesomab (trade name LeukoScan ) is a radiopharmaceutical composed of an anti-human mouse monoclonal antibody [ 1 ] that targets the granulocyte-associated NCA-90 cell antigen and a conjugated technetium-99m radionuclide . After intravenous administration, LeukoScan enables sensitive and specific whole-body measurement of granulocyte infiltration and activation by gamma camera imaging of 99m Tc-antibody-bound cells. [ 2 ] Total clearance of LeukoScan from blood samples after administration and imaging has been reported at 48-hour time points, indicating limited retention of the agent in circulation. [ 3 ]
It is approved in European markets for the imaging of infections and inflammations in patients with suspected osteomyelitis [ 4 ] [ 5 ] but has not secured FDA approval for use in American markets. [ 6 ] In addition to approved uses, LeukoScan is currently being investigated for other diagnostic purposes such as the detection of soft tissue infections, malignant external otitis and prosthetic joint infection. [ 7 ] [ 8 ] [ 9 ] However, the future clinical and investigational use of this agent may be limited, as sale of the agent by the parent company Immunomedics was discontinued in 2018. [ 10 ]
| https://en.wikipedia.org/wiki/Technetium_(99mTc)_sulesomab |
Technetium ( 99m Tc) votumumab (trade name HumaSPECT ) is a human monoclonal antibody labelled with the radionuclide technetium-99m . [ 1 ] [ 2 ] It was developed for the detection of colorectal tumors , but has never been marketed. [ 3 ]
The target of votumumab is CTAA16.88, a complex of cytokeratin polypeptides in the molecular weight range of 35 to 43 kDa , which is expressed in colorectal tumors. [ 4 ]
| https://en.wikipedia.org/wiki/Technetium_(99mTc)_votumumab |
Technical Guidance WM2 : Hazardous Waste: Interpretation of the definition and classification of hazardous waste [ 1 ] is a guidance document developed and jointly published by the English Environment Agency , Natural Resources Wales , Scottish Environment Protection Agency and the Northern Ireland Environment Agency to provide guidance on the assessment and classification of hazardous waste based on the revised Waste Framework Directive [ 2 ] definition of hazardous waste. Waste producers, consultants, contractors and waste management companies use the guidance to a) identify the correct waste code for their waste and b) determine whether the waste is hazardous or not based on its chemical composition.
The revised Waste Framework Directive [ 2 ] (rWFD) is the primary legislative framework for the collection, transport, recovery and disposal of waste across Europe. It uses a waste hierarchy to define a priority order for waste prevention , legislation and policy. WM2 follows the European wide definition of hazardous waste defined by the rWFD as a waste which displays one or more of the fifteen hazard properties listed in Annex III of the rWFD.
Dangerous substances are substances that possess one or more of the 68 Risk Phrases described in the Dangerous Substances Directive (67/548/EEC) .
The rWFD also refers to the list of wastes known as the European Waste Catalogue (EWC). [ 3 ] The EWC contains 846 six-digit waste codes arranged in 20 chapters, where each chapter is based on a generic industry or process that generated the waste or upon the type of waste. The EWC differentiates between hazardous and non-hazardous wastes by identifying hazardous waste entries with an asterisk. An example pair from "Chapter 17 Construction and Demolition Wastes (including excavated soil from contaminated sites)" is 17 05 03* (soil and stones containing dangerous substances) and its equivalent non-hazardous entry 17 05 04 (soil and stones other than those mentioned in 17 05 03), with the * indicating the hazardous entry.
Regulation (EC) No 1272/2008, the Classification, Labelling and Packaging of Substances Regulation ( CLP Regulation ) [ 4 ] was published in December 2008. This directly acting regulation amends and repeals both the Dangerous Substances Directive (67/548/EEC) and the Dangerous Preparations Directive (1999/45/EC) and amends part of the REACH regulation ( Registration, Evaluation, Authorisation and Restriction of Chemicals Regulation (EC) No 1907/2006). Depending on whether substances or mixtures (preparations) are involved, these changes take place between 2009 and 2015. [This also means that the CHIP 4 Regulation 2009 [ 5 ] will need to be repealed in 2015 as it enacts the two European directives mentioned above.]
Of particular relevance to waste classification and WM2 is Table 3.2 of Annex VI of the CLP. This table is the primary data source for the risk phrases and other attributes for more than 4000 substances.
Note that since the CLP Regulation was first published, it has been amended by six Adaptations to Technical Progress (ATPs). [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] These amendments have particular impact on the substance data managed by Table 3.2.
WM2 is a detailed technical guide for the classification and assessment of wastes that may or may not be hazardous. It provides a step-by-step process to determine whether a waste is hazardous, along with more detailed guidance on assessing the chemical analysis (composition) of the waste. The basic steps are:
Step 1 : Is the waste "directive waste" or required to be assessed due to domestic legislative provision?
Step 2 : How is the waste coded and classified in the LoW?
Step 3 : Are the substances in the waste known or can they be determined?
Step 4 : Are there dangerous substances in the waste?
Step 5 : Does the waste possess any of the hazard properties H1 to H15?
Waste containing dangerous substances may be hazardous if the concentrations of those substances are above specified thresholds. This is checked by comparing the concentration of each dangerous substance in the waste against the threshold concentration for each hazard property, summing the concentrations of substances where a hazard property is assessed additively.
The various calculations required for assessing the different hazard properties are detailed in Appendix C of WM2; this appendix contains 15 separate sections, one for each hazard property.
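As an illustration of how such an assessment can be mechanised, the sketch below sums the concentrations of substances attracting each hazard property and compares the totals against thresholds. The hazard codes, threshold values, and waste composition are hypothetical placeholders, not values from WM2; real assessments must follow Appendix C, where some properties use per-substance cut-offs rather than additive totals.

```python
# Hypothetical thresholds (% w/w) per hazard property -- illustrative only,
# not the values from WM2 Appendix C.
HAZARD_THRESHOLDS_PCT = {
    "H5 Harmful": 25.0,
    "H6 Toxic": 3.0,
    "H8 Corrosive": 5.0,
}

# (substance, concentration % w/w, hazard properties its risk phrases attract)
waste_composition = [
    ("substance A", 2.0, ["H5 Harmful"]),
    ("substance B", 1.5, ["H6 Toxic"]),
    ("substance C", 2.2, ["H6 Toxic", "H8 Corrosive"]),
]

def classify(composition):
    """Additive assessment: sum concentrations per hazard property and
    flag the waste as hazardous if any total meets its threshold."""
    totals = {}
    for _name, pct, hazards in composition:
        for h in hazards:
            totals[h] = totals.get(h, 0.0) + pct
    exceeded = [h for h, c in totals.items()
                if c >= HAZARD_THRESHOLDS_PCT[h]]
    return ("hazardous" if exceeded else "non-hazardous"), totals, exceeded

verdict, totals, exceeded = classify(waste_composition)
print(totals)             # {'H5 Harmful': 2.0, 'H6 Toxic': 3.7, 'H8 Corrosive': 2.2}
print(verdict, exceeded)  # hazardous ['H6 Toxic']  (3.7% >= 3.0% threshold)
```

Commercial classification tools, discussed below, essentially automate this bookkeeping across the full set of hazard properties and threshold rules.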
While classifications can be done by hand or via custom-built spreadsheets, these approaches are time-consuming and difficult to maintain and audit. The introduction of commercial software in 2010 allowed users to concentrate on what is in the waste rather than on how to carry out the calculations.
After the waste has been classified as either hazardous or non-hazardous, it can be assessed with respect to disposal. One of the routes for disposal is to landfill.
Under the Landfill Directive (1999/31/EC), [ 12 ] landfills are classified according to whether they can accept hazardous, non-hazardous or inert wastes and wastes can only be accepted at a landfill if they meet the waste acceptance criteria (WAC) for that class of landfill.
To assess against a landfill's WAC, representative samples of the waste are sent to a laboratory for WAC testing (leachate tests). The analysis is needed to demonstrate that, as a hazardous waste, a stable non-reactive hazardous waste, or an inert waste, any load of waste meets the appropriate landfill waste acceptance criteria. The landfill WAC are maximum limits which must not be exceeded and should be viewed as treatment specifications for landfill. | https://en.wikipedia.org/wiki/Technical_Guidance_WM2 |
The Technical Service Council was set up to combat the " brain drain " of Canadian engineers to the United States , when over 20% of the graduating classes were emigrating. Ireland , India , New Zealand and even Switzerland have had similar problems.
In 1927, Canadian industry financed the council, whose directors concluded that a non-profit employment service that was free to graduates might minimize emigration. The service survived the Depression, played a part in recruiting scientists and engineers for war work, pioneered outplacement and expanded to include other professional occupations. It financed major studies of the supply of and demand for engineers and offered free job-hunting courses to professionals.
Although started in Toronto , the Council eventually had offices in Montreal , Winnipeg , Calgary , Edmonton and Vancouver before becoming bankrupt in 1994. It may have reduced the brain drain during its first 20 or 25 years, but it is not possible to judge its later record.
In the 1920s over 20% of Canadian engineering graduates emigrated to the United States. [ 1 ] [ 2 ] At that time, jobs in the U.S. were both much more numerous and more varied than in Canada. Meanwhile, the number of graduates soared, Canadian employers were unconvinced of the value of engineering degrees and new graduates complained of the lack of jobs.
Robert A. Bryce, president of Macassa Mines Ltd. and Prof. H.E.T. Haultain of the University of Toronto resolved to act. In April 1927 they and Rev. Canon H.J. Cody, chairman of the board of governors of the University of Toronto, invited the chief executives of major firms to a dinner at the National Club in Toronto. After hearing how the loss of talent could hamper industry, each of the 12 executives promised $1,000 to fund a non-profit organization to combat the "brain drain". The brain drain, the selling of science to employers and Canadian nationalism were tightly intertwined ideas. The firm was called the Technical Service Council. [ 3 ]
Rolsa Eric Smythe was hired to run the council. Appropriately, he was a Canadian engineer who had been working in Detroit. [ 3 ]
After a study of placement operations in other countries and consultation with employers, the directors decided that engineers would not respond to urges to stay in Canada. Instead the Technical Service Council would find jobs for them by operating a free (to graduates) placement service. [ 2 ] Employers would be invited to donate to the service, although later some companies used the service without contributing.
The objectives were: To retain for Canada young Canadians educated along technical and scientific lines; to bring graduates of universities and technical institutions into practical contact with Canadian industry; to submit to universities the recommendations of industry concerning scientific courses and to aid industry in technical and scientific employment problems. [ 4 ]
A small office was opened in Toronto in 1928 with $30,000 "seed money" from 30 firms to finance a three-year experiment. [ 5 ] Between July and December, 159 job hunters registered, 185 jobs were listed by employers and 81 engineers were placed by the staff of two. [ 6 ]
The Great Depression soon arrived, wiping out many jobs. Some graduates were placed in welcome but undemanding jobs, such as streetcar conductor. Raising money was difficult, and the Council survived only because of grants from the government of Ontario in 1932-34 and, at times, Smythe's forgoing of his salary. It was decided to ask those who had found jobs through the TSC to make donations. This produced some money, but the organization had a hand-to-mouth existence until 1957, apart from the World War II years. [ 3 ]
By June 30, 1933, over 1,180 personnel had been placed, 110 of whom were repatriated Canadians. Expenses for the first five years of operations were $44,988. [ 7 ] In 1933, 111 men and women were placed by the council's staff of two. [ 2 ]
Even then it was clear that engineers needed business knowledge. The Council persuaded the University of Western Ontario to offer a diploma course in management for engineers. [ 2 ] At the time, such a course was novel, if not unique. In 1951 numerous employers and graduates in ceramic engineering were surveyed on behalf of the University of Saskatchewan to estimate future demand. Some time later a similar survey was made for the University of Toronto. As a result of these studies, both universities discontinued their ceramic engineering programs. [ 8 ]
By 1938, in response to employers' demand for "one-stop service", the Council expanded to include executives, accountants, marketing, production and personnel staff. [ 2 ] A year later, the economy had improved, but the council's placements were mainly in Ontario and Quebec , where Canada's industry was concentrated.
Job vacancies soared with the start of World War II. Shipyards, steel mills, armaments and munitions factories, aircraft manufacturers and construction companies urgently needed engineers. Few engineers even considered emigrating to the United States, both for patriotic reasons and because of the plethora of jobs at home.
The Technical Service Council was the only placement service allowed to operate during the war. [ 2 ] Its bank of professionals was such an important national resource that 15 recruiters from Defence Industries Ltd., the major munitions manufacturer, were loaned to the council. [ 2 ]
After the war, veterans were entitled to free university tuition. Therefore, record numbers of engineers were graduated in 1949 and 1950. [ 9 ] [ 10 ] Graduates of Western and Maritime universities, both in areas with limited industry, greatly outnumbered local vacancies. Many engineers moved to Ontario, Quebec and the United States. About 2,500 professional men emigrated to the United States in 1950 alone. [ 11 ] Nevertheless, one study showed that the exodus of technically trained graduates dropped from 27% of the graduating classes in 1927 to under 10% in 1951. [ 12 ]
Pioneering work was done on group interviews and recruitment advertising in 1950–52. [ 13 ] [ 14 ] The latter study showed how employers could increase response to their ads. The Federal government engaged the council to write a handbook on the job market for immigrants while the Ontario Government asked the council to appraise opportunities for prospective immigrants from Great Britain .
Canadian industry contributed more than $300,000 to the Council between 1927 and 1953. [ 5 ] During the same period, employers listed 16,533 job vacancies. [ 4 ] Some 6,817 men with special training were placed in key positions in business and industry. [ 5 ] The Council registered and interviewed 24,607 men with higher education. [ 5 ] The qualifications of each were carefully cross-indexed and maintained for employers. [ 5 ] An additional 100,000 individuals were interviewed to assess qualifications and give free vocational advice. The average cost per placement rose from $50 to $100 between 1948 and 1954. [ 5 ]
Between 1951 and 1956, 3,072 engineers, equivalent to 31% of graduating classes in engineering, emigrated to the United States. [ 15 ] They could have staffed the largest missile centre in the Western World. [ 15 ] In 1951 the equivalent of 11% of the graduating classes in engineering left for the United States. In 1956, as immigrants were less likely to be drafted, the percentage had soared to 46%. [ 15 ]
In 1957 the Council almost collapsed, but it was revived by new management who increased placement fees.
Shortages of engineers and scientists in Canada often coincided with equally acute shortages in the United States. American companies then recruited actively in Canada, as they did following the 1959 cancellation of Canada's much-vaunted Avro Arrow jet fighter. In addition, Canadians completing post-graduate training in the U.S. often found getting a job locally easier than searching for one in distant Canada. [ 16 ]
In 1962 a branch in Montreal called Technical Service Council/Le Conseil de Placement Professionnel was opened. It was followed by others in Winnipeg , Calgary , Edmonton and Vancouver . [ 2 ]
The council was one of the pioneers of outplacement (then called relocation counselling) in Canada. [ 2 ] [ 13 ] [ 17 ] Its first contract in 1970 eventually developed into a significant activity. In addition to individual counselling, free office services and other benefits, clients were given How to Job Hunt Effectively, a substantial handbook and workbook. [ 18 ] The book was also available to the public, and over 5,000 copies were sold.
From 1967 regular one-day employment interviewing courses for line managers were run in major cities. Students received written critiques of their practice interviews with actors.
By 1971 out-of-work university graduates were so numerous that free "How to Job Hunt" courses were held in several cities. [ 2 ] As another public service, over $200,000 was spent researching and publishing ten-year forecasts of the supply of and demand for engineering graduates in 1975 and again in 1988. Both studies were intended to improve understanding of the job market and candidate mobility, and to help minimize "mismatch". [ 18 ] They were provided free to Canadian universities and sold at below cost to employers. [ 2 ]
In the same year, an executive search division, Bryce, Haultain & Associates, was opened and named after two of the council's co-founders. [ 2 ]
By 1976 the council had placed over 16,000 men and women. [ 2 ] An equal number were estimated to have rejected job offers from the council's client companies. Studies showed that 25% of job listings were never filled from any source. Employers' reasons included budget cuts, inability to find someone who filled the job specifications, candidates' high asking salaries, reorganizations and a belated realization that existing staff could do the job.
In 1976, 573 firms were members. [ 2 ] Annual membership fees were mainly $100 to $500, depending upon company size and usage. Placement fees were kept low in order to attract job listings: the greater the choice of vacancies, the more likely candidates were to stay in Canada. Placement fees for member companies were 4% to 5% of the placement's annual income. [ 2 ] Commercial employment agencies charged 20% to 30%. [ 2 ]
Over 17,000 engineers and scientists emigrated from Canada to the United States between 1960 and 1979. The number of engineers emigrating declined from 1,209 in 1967 to only 289 in 1977, and the number of chemists emigrating dropped from 156 to 58 during the same period. [ 19 ] However, engineers and scientists emigrating increased from 727 in 1982 to 1,433 in 1985. [ 16 ]
Active job listings reached 4,328 in June 1981, an astonishing figure for such a clearing house. [ 20 ] Orders plummeted when Prime Minister Pierre Trudeau 's highly controversial National Energy Program took effect. About half the council's staff was laid off.
Nevertheless, between 1928 and 1988, over 46,000 men and women had received job offers from about 1,700 of the council's employer clients. [ 18 ]
Frequent dramatic swings in the job market caused the council to build a financial reserve equal to twice its annual operating expenses. [ 2 ] The reserve was over three times expenses in December 1991, but the council was declared bankrupt in September 1994. [ 21 ]
From 1928 to 1939, job vacancies were mainly advertised locally so job hunters had difficulty learning of distant jobs. The Maritimes and West had so little industry that their substantial engineering graduating classes had to seek positions elsewhere. Employers seldom sought professionals through the Federal employment service while universities had tiny or non-existent placement services. This lack of job information made the council's numerous industrial contacts especially important to job hunters. The exodus of technically trained Canadians is said to have dropped from 27% of graduating classes in 1927 to under 10% in 1951 [ 12 ] and 5% in 1967. [ 22 ]
Any evaluation of the later years is difficult. The number of job vacancies and of job hunters both increased, but supply and demand were often out of sync, encouraging emigration. Universities devoted more resources to placing their graduates, but often gave little attention to experienced graduates. Commercial employment agencies expanded, but few lasted five years because of the erratic job market. Eventually, one and then another national newspaper spread news of distant vacancies. Although graduates had better information than ever, the council was still busy. It is impossible to judge the council's impact on the "brain drain" in those later years.
The Ministry of State, Science and Technology asked the council to study the feasibility of a National Register of Canadians in research-oriented occupations who are working or studying out of the country. [ 16 ] The study found that 65% of employers contacted had a strong or moderate interest in a register. It was estimated that only one or two per cent of candidates would find jobs through the register. Neither the register nor a free handbook on job hunting in Canada would get at the reasons why many Canadians do not return: a perceived lack of opportunities in their specialty and lack of research support in Canada. [ 16 ]
The study noted that efforts by the Association of Medical Colleges of Canada and the Association of Universities and Colleges of Canada to recruit Canadians in the U.S. had failed. In 1986 twenty British firms advertised for British-trained engineers in North America . The ads produced 6,500 replies and about 1,800 job offers. Only 89 offers were accepted at what was considered an uneconomical cost. [ 16 ]
16. Cuddihy, Basil Robert. "How to Give Phased-out Managers a New Chance", Harvard Business Review , Vol. 54, No. 4, Jul.-Aug. 1974. Neither the Technical Service Council nor the other consultant used is named. (This entry replaces No. 16 above.)
22. Technical Service Council advertisement in Ontario Technologist and about six other Canadian technical publications. Summer, 1981. | https://en.wikipedia.org/wiki/Technical_Service_Council |
In telecommunications , a technical control facility ( TCF ) is defined by US Federal Standard 1037C as a telecommunications facility , or a designated and specially configured part thereof, that:
| https://en.wikipedia.org/wiki/Technical_control_facility |
A technical data management system ( TDMS ) is a document management system (DMS) pertaining to the management of technical and engineering drawings and documents. Often the data are contained in 'records' of various forms, such as on paper, microfilms or digital media. Hence technical data management is also concerned with record management involving technical data. Technical document management systems are used within large organisations with large-scale projects involving engineering. For example, a TDMS can be used for integrated steel plants (ISP), automobile factories, aerospace facilities, infrastructure companies, city corporations, research organisations, etc. In such organisations, technical archives or technical documentation centres are created as central facilities for effective management of technical data and records.
TDMS functions are similar in concept to conventional archive functions, except that the archived materials in this case are essentially engineering drawings, survey maps, technical specifications , plant and equipment data sheets, feasibility reports, project reports, operation and maintenance manuals, standards, etc.
Document registration, indexing, repository management, reprography, etc. are parts of TDMS. Various kinds of sophisticated technologies, such as document scanners, microfilming and digitization camera units, wide format printers, digital plotters and software, are available, making TDMS functions easier than in the past.
Technical data refers to both scientific and technical information recorded and presented in any form or manner (excluding financial and management information). [ 1 ] A Technical Data Management System is created within an organisation for archiving and sharing information such as technical specifications , datasheets and drawings. Similar to other types of data management system, a Technical Data Management System consists of the four crucial constituents described below.
Data plans (long-term or short-term) are constructed as the first essential step of a proper and complete TDMS. A data plan is created ultimately to help with the three other constituents: data acquisition, data management and data sharing. A proper data plan should not exceed two pages and should address the following basics: [ 2 ]
Raw data is collected from the organisation's primary sites through the use of modern technologies. [ 4 ]
The data collected is then transferred to technical data centres for data management.
After data acquisition, the data is sorted: useful data is archived while unwanted data is disposed of. When managing and archiving data, several features of the data are considered. [ 5 ]
Archived and managed data are accessible to authorised parties. A proper and complete TDMS should share data to a suitable extent, under suitable security, in order to achieve optimal usage of data within the organisation. Easy access allows data to be reused by other researchers, which enhances further research processes. Data is often referenced in other tests and technical specifications , where new analysis is generated, managed and archived again. As a result, data flows within the organisation under effective management through the use of the TDMS. [ 6 ]
There are strengths and weaknesses in using technical data management systems (TDMS) to archive data. Some of the advantages and disadvantages are listed below. [ 7 ] [ 8 ] [ 9 ]
Since the TDMS is integrated into the organisation's systems, whenever workers create data files ( SolidWorks , AutoCAD , Microsoft Word , etc.), they can also archive and manage data, linking what they need to their current work; at the same time, they can update the archives with useful data. This speeds up working processes and makes them more efficient.
All data files are centralized, hence internal and external data leakages are less likely to happen, and the data flow is more closely monitored. As a result, data in the organisation is more secure.
Since the data files are centralized and the data flow within the organisation increases, researchers and workers within the organisation are able to work on joint projects. More complex tasks can be performed for higher yields.
TDMS is compatible with many data formats, from basic formats such as Microsoft Word documents to complex ones such as voice data. This enhances the quality of management of the archived data.
Implementing TDMS in an organisation's systems involves monetary costs, and maintenance consumes both money and human resources. These resources carry opportunity costs, as they could be utilized elsewhere.
Since TDMS manages and centralizes all the data the organisation processes, it links the working processes of the whole organisation together, but it also increases the vulnerability of the organisation's data network. If the TDMS is not stable enough, or if it is exposed to hacker and virus attacks, the organisation's data flow might shut down completely, affecting work on an organisation-wide scale and reducing stability.
Test engineers and researchers face great challenges in turning complex test results and simulation data into information that firms can use. These challenges are listed below. [ 10 ]
Many organisations still apply conventional file management systems because of the difficulty of building a proper and complete archive for data management.
The first approach is the simple file-folder system. It is inefficient, as workers and researchers have to go through numerous layers of systems and folders manually to reach the target data. Moreover, the target data may comprise files in different formats, and these files may not be stored on the same machine. Files are also easily lost if renamed or moved to another location.
The second approach is conventional databases such as Oracle. These databases enable easy search and access of data. However, a great drawback is that considerable effort is required to prepare and model the data. Large-scale projects incur large monetary costs, and extra IT staff must be employed to constantly operate, expand and maintain an inflexible system that is customised for specific tasks rather than all tasks. In the long term, it is not cost-effective.
TDMS is developed on three principles: flexible and organized file storage, a self-scaling hybrid data index, and an interactive post-processing environment. In practice, the system consists mainly of three components: data files with essential and relevant metadata ; data finders for organizing and managing data regardless of file format; and software for searching, analyzing and reporting. With metadata attached to original data files, the data finder can identify related data files during searches even if they are in different file formats. TDMS hence allows researchers to search for data much as they browse the Internet. Unlike conventional databases, it can also adapt to changes and update itself accordingly.
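To make the metadata-plus-data-finder idea concrete, here is a minimal, purely illustrative sketch: the class name, tags and file paths are invented, and a production TDMS would persist a far richer index rather than hold one in memory.

```cpp
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

// Toy "data finder": each archived file carries free-form metadata tags,
// and lookups work on the tags regardless of the underlying file format.
class MetadataIndex {
    std::map<std::string, std::set<std::string>> byTag_;  // tag -> file paths
public:
    void add(const std::string& path, const std::vector<std::string>& tags) {
        for (const auto& tag : tags) byTag_[tag].insert(path);
    }
    std::set<std::string> find(const std::string& tag) const {
        auto it = byTag_.find(tag);
        return it == byTag_.end() ? std::set<std::string>{} : it->second;
    }
};

int main() {
    MetadataIndex index;
    // Hypothetical archive entries in different formats.
    index.add("plans/boiler.dwg", {"boiler", "drawing", "unit-2"});
    index.add("specs/boiler.pdf", {"boiler", "specification"});
    for (const auto& path : index.find("boiler"))
        std::cout << path << '\n';  // finds both files despite the formats
}
```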
Complex organizations may need large amounts of technical information, which can be distributed among several independent archives. Existing approaches span from "no integration" to "strong integration", that is based on a common database or product model. The so-called weak information systems (WIS) [ 11 ] lie somewhere in the middle. Their basic concept is to add to the pre-existing information a new layer of multiple partial models of products and processes, so that it is possible to reuse existing databases, to reduce the development from scratch, and to provide evolutionary paths relevant for the development of the WIS. Each partial model may include specific knowledge and it acts as a way to structure and access the information according to a specific user view.
The comparison between strong and weak information systems may be summarized as follows:
The architecture of a weak information system is composed of:
The integration layer comprises the following sub-layers:
In some countries, such as the US, record and document management are considered vital functions, and much stress is placed on the management of technical archives. Records and documents in the public domain are governed by appropriate laws. [ 12 ] However, this has not been so in many underdeveloped and developing nations . For example, India enacted the Public Records Act [ 13 ] in 1993, yet many in the country are not aware of the law's existence or its importance.
Technical Data Management Systems (TDMS) are widely applied across the globe, in different sectors. Some of the examples are listed below.
Data management solutions are tools and technologies that organizations use to manage their data. These solutions can include a wide range of different tools and technologies, such as databases and data warehouses, data integration and ETL (extract, transform, load) tools, data governance and quality tools, and data visualization and reporting tools. Data management solutions can help organizations store, organize, and manage their data in a more effective and efficient manner. They can also help to improve the accuracy and reliability of the data that is used to make important decisions and enable organizations to gain insights from their data more easily. | https://en.wikipedia.org/wiki/Technical_data_management_system |
Windows Vista (formerly codenamed Windows "Longhorn") has many significant new features compared with previous Microsoft Windows versions, covering most aspects of the operating system.
In addition to the new user interface, security capabilities, and developer technologies, several major components of the core operating system were redesigned, most notably the audio, print, display, and networking subsystems; while the results of this work will be visible to software developers, end-users will only see what appear to be evolutionary changes in the user interface.
As part of the redesign of the networking architecture, IPv6 has been incorporated into the operating system, and a number of performance improvements have been introduced, such as TCP window scaling . Prior versions of Windows typically needed third-party wireless networking software to work properly; this is no longer the case with Windows Vista, as it includes comprehensive wireless networking support.
For graphics, Windows Vista introduces a new display driver model as well as major revisions to Direct3D . The new display driver model facilitates the new Desktop Window Manager , which provides the tearing -free desktop and special effects that are the cornerstones of the Windows Aero graphical user interface . The new display driver model is also able to offload rudimentary tasks to the GPU , allow users to install drivers without requiring a system reboot, and seamlessly recover from rare driver errors due to illegal application behavior.
At the core of the operating system, many improvements have been made to the memory manager, process scheduler, heap manager, and I/O scheduler . A Kernel Transaction Manager has been implemented that can be used by data persistence services to enable atomic transactions . The service is being used to give applications the ability to work with the file system and registry using atomic transaction operations.
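For instance, a file write performed under a Kernel Transaction Manager transaction via Transactional NTFS might look like the following sketch; the path and transaction description are placeholders, and error handling is omitted.

```cpp
#include <windows.h>
#include <ktmw32.h>   // Kernel Transaction Manager; link with KtmW32.lib

int main() {
    // Create a kernel transaction, write a file inside it, then commit:
    // the file change becomes visible atomically, or not at all.
    wchar_t description[] = L"demo transaction";  // placeholder text
    HANDLE tx = CreateTransaction(nullptr, nullptr, 0, 0, 0, 0, description);

    // Placeholder path; CreateFileTransactedW ties the handle to tx.
    HANDLE file = CreateFileTransactedW(
        L"C:\\temp\\demo.txt", GENERIC_WRITE, 0, nullptr, CREATE_ALWAYS,
        FILE_ATTRIBUTE_NORMAL, nullptr, tx, nullptr, nullptr);

    DWORD written = 0;
    WriteFile(file, "hello", 5, &written, nullptr);
    CloseHandle(file);

    CommitTransaction(tx);  // or RollbackTransaction(tx) to undo the write
    CloseHandle(tx);
}
```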
Windows Vista features a completely re-written audio stack designed to provide low-latency 32-bit floating point audio, higher-quality digital signal processing, bit-for-bit sample level accuracy, up to 144 dB of dynamic range and new audio APIs created by a team including Steve Ball and Larry Osterman. [ 1 ] [ 2 ] The new audio stack runs at user level, thus reducing impact on system stability. Also, the new Universal Audio Architecture (UAA) model has been introduced, replacing WDM audio, which allows compliant audio hardware to automatically work under Windows without needing device drivers from the audio hardware vendor.
There are three major APIs in the Windows Vista audio architecture:
Applications communicate with the audio driver through Sessions , and these Sessions are programmed through the Windows Audio Session API (WASAPI) . In general, WASAPI operates in two modes. In exclusive mode (also called DMA mode ), unmixed audio streams are rendered directly to the audio adapter: no other application's audio will play, and signal processing has no effect. Exclusive mode is useful for applications that demand the least amount of intermediate processing of the audio data or that want to output compressed audio data such as Dolby Digital , DTS or WMA Pro over S/PDIF . WASAPI exclusive mode is similar in function to kernel streaming, but no kernel-mode programming is required. In shared mode , audio streams are rendered by the application and optionally have per-stream audio effects known as Local Effects (LFX) applied (such as per-session volume control). The streams are then mixed by the global audio engine, where a set of global audio effects (GFX) may be applied, and are finally rendered on the audio device.
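As a rough illustration of how an application opens a shared-mode Session through WASAPI, consider the sketch below. Error handling is omitted, the one-second buffer duration is an arbitrary illustrative choice, and a real renderer would go on to feed samples through IAudioRenderClient.

```cpp
#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>

int main() {
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);

    // Find the default render (playback) endpoint.
    IMMDeviceEnumerator* enumerator = nullptr;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void**)&enumerator);
    IMMDevice* device = nullptr;
    enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &device);

    // Activate an audio client and open a shared-mode stream: the stream
    // is mixed by the global audio engine rather than owning the device.
    IAudioClient* client = nullptr;
    device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr,
                     (void**)&client);
    WAVEFORMATEX* mixFormat = nullptr;
    client->GetMixFormat(&mixFormat);  // format of the engine's shared mix
    client->Initialize(AUDCLNT_SHAREMODE_SHARED, 0,
                       10000000 /* 1 s in 100-ns units */, 0,
                       mixFormat, nullptr);
    client->Start();
    // ... feed samples via IAudioRenderClient here ...
    client->Stop();

    CoTaskMemFree(mixFormat);
    client->Release();
    device->Release();
    enumerator->Release();
    CoUninitialize();
}
```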
After passing through WASAPI, all host-based audio processing, including custom audio processing, can take place. Host-based processing modules are referred to as Audio Processing Objects , or APOs . All these components operate in user mode, only the audio driver runs in kernel mode.
The Windows Kernel Mixer ( KMixer ) is completely gone. DirectSound and MME are emulated as Session instances rather than being directly connected to the audio driver. This does have the effect of preventing DirectSound from being hardware-accelerated, and completely removes support for DirectSound3D and EAX extensions , [ 4 ] however APIs such as ASIO and OpenAL are not affected.
Windows Vista also includes a new Multimedia Class Scheduler Service (MMCSS) that allows multimedia applications to register their time-critical processing to run at an elevated thread priority, thus ensuring prioritized access to CPU resources for time-sensitive DSP processing and mixing tasks.
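A playback thread might register with MMCSS roughly as in this sketch; "Pro Audio" is one of the standard MMCSS task names, and the processing section is schematic.

```cpp
#include <windows.h>
#include <avrt.h>   // MMCSS registration functions; link with avrt.lib

DWORD WINAPI AudioThread(LPVOID) {
    // Register this thread with MMCSS under the "Pro Audio" task so the
    // scheduler boosts it for time-sensitive mixing/DSP work.
    DWORD taskIndex = 0;
    HANDLE mmcss = AvSetMmThreadCharacteristicsW(L"Pro Audio", &taskIndex);

    // ... do time-critical audio processing here ...

    if (mmcss) AvRevertMmThreadCharacteristics(mmcss);  // deregister
    return 0;
}
```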
For audio professionals, a new WaveRT port driver has been introduced that strives to achieve real-time performance by using the multimedia class scheduler and supports audio applications that reduce the latency of audio streams. All the existing audio APIs have been re-plumbed and emulated to use these APIs internally; all audio goes through these three APIs, so most applications "just work".
A fault in the MME WaveIn/WaveOut emulation was introduced in Windows Vista: if sample rate conversion is needed, audible noise is sometimes introduced, such as when playing audio in a web browser that uses these APIs. This is because the internal resampler, which is no longer configurable, defaults to linear interpolation, which was the lowest-quality conversion mode that could be set in previous versions of Windows. The resampler can be set to a high-quality mode via a hotfix for Windows 7 and Windows Server 2008 R2 only. [ 5 ] [ 6 ]
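Linear interpolation simply blends the two nearest input samples for each output sample, which is cheap but introduces audible aliasing; below is a purely illustrative sketch, not Vista's actual resampler code.

```cpp
#include <vector>

// Naive linear-interpolation resampler: for each output sample, blend the
// two nearest input samples. Cheap, but it suppresses imaging/aliasing
// poorly -- hence the audible noise described above.
std::vector<float> resampleLinear(const std::vector<float>& in,
                                  double inRate, double outRate) {
    std::vector<float> out;
    const double step = inRate / outRate;  // input samples per output sample
    for (double pos = 0.0; pos + 1.0 < in.size(); pos += step) {
        const size_t i = static_cast<size_t>(pos);
        const float frac = static_cast<float>(pos - i);
        out.push_back(in[i] * (1.0f - frac) + in[i + 1] * frac);
    }
    return out;
}
```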
New digital signal processing functionalities such as Room Correction , Bass Management , Loudness Equalization and Speaker Fill have been introduced. These adapt and modify an audio signal to take best advantage of the speaker configuration a given system has. Windows Vista also includes the ability to calibrate speakers to a given room's acoustics automatically using a software wizard. [ 7 ]
Windows Vista also includes the ability for audio drivers to include custom DSP effects, which are presented to the user through user-mode System Effect Audio Processing Objects (sAPOs). [ 8 ] These sAPOs are also reusable by third-party software.
Windows Vista builds on the Universal Audio Architecture, a new class driver definition that aims to reduce the need for third-party drivers, and to increase the overall stability and reliability of audio in Windows.
Microsoft has also included a new high quality voice capture DirectX Media Object (DMO) as part of DirectShow that allows voice capture applications such as instant messengers and speech recognition applications to apply Acoustic Echo Cancellation and microphone array processing to speech signals. [ 16 ]
Windows Vista is the first Windows operating system to include fully integrated support for speech recognition . Under Windows 2000 and XP, Speech Recognition was installed with Office 2003, or was included in Windows XP Tablet PC Edition.
A brief speech-driven tutorial is included to help familiarize users with speech recognition commands. Training can also be completed to improve the accuracy of speech recognition.
Windows Vista includes speech recognition for 8 languages at release time: English (U.S. and British), Spanish, German, French, Japanese and Chinese (traditional and simplified). Support for additional languages is planned for post-release.
Speech recognition in Vista utilizes version 5.3 of the Microsoft Speech API [ 17 ] (SAPI) and version 8 of the Speech Recognizer.
Speech synthesis was first introduced in Windows with Windows 2000 , but it has been significantly enhanced for Windows Vista (code name Mulan ). The old voice, Microsoft Sam , has been replaced with two new, more natural sounding voices of generally greater intelligibility: Anna and Lili , the latter of which is capable of speaking Chinese. The screen-reader Narrator which uses these voices has also been updated. Microsoft Agent and other text to speech applications now use the newer SAPI 5 voices. [ 18 ]
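A minimal SAPI 5 call that speaks a string through the default voice (Microsoft Anna on an English-language Vista system) might look like this sketch; error handling is mostly omitted.

```cpp
#include <windows.h>
#include <sapi.h>

int main() {
    CoInitialize(nullptr);
    ISpVoice* voice = nullptr;
    // Create the shared SAPI voice object and speak synchronously.
    if (SUCCEEDED(CoCreateInstance(CLSID_SpVoice, nullptr, CLSCTX_ALL,
                                   IID_ISpVoice, (void**)&voice))) {
        voice->Speak(L"Hello from Windows Vista.", SPF_DEFAULT, nullptr);
        voice->Release();
    }
    CoUninitialize();
}
```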
Windows Vista includes a redesigned print architecture, [ 19 ] built around Windows Presentation Foundation . It provides high-fidelity color printing through improved use of color management , removes limitations of the current GDI -based print subsystem, enhances support for printing advanced effects such as gradients, transparencies, etc., and for color laser printers through the use of XML Paper Specification (XPS).
The print subsystem in Windows Vista implements the new XPS print path as well as the legacy GDI print path for legacy support. Windows Vista transparently makes use of the XPS print path for those printers that support it, otherwise using the GDI print path. On documents with intensive graphics, XPS printers are expected to produce much greater quality prints than GDI printers.
In a networked environment with a print server running Windows Vista, documents will be rendered on the client machine, [ 20 ] rather than on the server, using a feature known as Client Side Rendering . The rendered intermediate form will just be transferred to the server to be printed without additional processing, making print servers more scalable by offloading rendering computation to clients.
The XPS Print Path introduced in Windows Vista supports high quality 16-bit color printing. [ 21 ] The XPS print path uses XML Paper Specification (XPS) as the print spooler file format, that serves as the page description language (PDL) for printers. The XPS spooler format is the intended replacement for the Enhanced Metafile (EMF) format which is the print spooler format in the Graphics Device Interface (GDI) print path. [ 22 ] XPS is an XML -based (more specifically XAML -based) color-managed device and resolution independent vector-based paged document format which encapsulates an exact representation of the actual printed output. XPS documents are packed in a ZIP container along with text, fonts, raster images, 2D vector graphics and DRM information. For printers supporting XPS, this eliminates an intermediate conversion to a printer-specific language, increasing the reliability and fidelity of the printed output. Microsoft claims that major printer vendors are planning to release printers with built-in XPS support and that this will provide better fidelity to the original document. [ 23 ]
At the core of the XPS print path is XPSDrv, the XPS-based printer driver which includes the filter pipeline. It contains a set of filters which are print processing modules and an XML-based configuration file to describe how the filters are loaded. Filters receive the spool file data as input, perform document processing, rendering and PDL post-processing, and then output PDL data for the printer to consume. Filters can perform a single function such as watermarking a page or doing color transformations or they can perform several print processing functions on specific document parts individually or collectively and then convert the spool file to the page description language supported by the printer.
Windows Vista also provides improved color support through the Windows Color System for higher color precision and dynamic range. It also supports the CMYK colorspace and multiple ink systems for higher print fidelity. The print subsystem also supports named colors, simplifying color definition for images transmitted to printers supporting those colors.
The XPS print path can automatically calibrate color profile settings with those being used by the display subsystem. Conversely, XPS print drivers can express the configurable capabilities of the printer, by virtue of the XPS PrintCapabilities class , to enable more fine-grained control of print settings, tuned to the individual printing device.
Applications which use the Windows Presentation Foundation for the display elements can directly print to the XPS print path without the need for image or colorspace conversion. The XPS format used in the spool file, represents advanced graphics effects such as 3D images, glow effects, and gradients as Windows Presentation Foundation primitives, which are processed by the printer drivers without rasterization , preventing rendering artifacts and reducing computational load. When the legacy GDI Print Path is used, the XPS spool file is used for processing before it is converted to a GDI image to minimize the processing done at raster level.
Print schemas provide an XML-based format for expressing and organizing a large set of properties that describe either a job format or print capabilities in a hierarchically structured manner. Print schemas are intended to address the problems associated with internal communication between the components of the print subsystem, and external communication between the print subsystem and applications.
Windows Vista contains a new networking stack, which brings large improvements in all areas of network-related functionality. [ 24 ] It includes a native implementation of IPv6 , as well as a complete overhaul of IPv4 . IPv6 is now supported by all networking components, services, and the user interface. In IPv6 mode, Windows Vista can use the Link Local Multicast Name Resolution ( LLMNR ) protocol to resolve names of local hosts on a network which does not have a DNS server running. The new TCP/IP stack uses a new method to store configuration settings that enables more dynamic control and does not require a computer restart after settings are changed. The new stack is also based on a strong host model and features an infrastructure to enable more modular components that can be dynamically inserted and removed.
The user interface for configuring, troubleshooting and working with network connections has changed significantly from prior versions of Windows as well. Users can make use of the new "Network Center" to see the status of their network connections, and to access every aspect of configuration. The network can be browsed using Network Explorer , which replaces Windows XP's " My Network Places ". Network Explorer items can be a shared device such as a scanner, or a file share. Network Location Awareness uniquely identifies each network and exposes the network's attributes and connectivity type. Windows Vista graphically presents how different devices are connected over a network in the Network Map view, using the LLTD protocol. In addition, the Network Map uses LLTD to determine connectivity information and media type (wired or wireless). Any device can implement LLTD to appear on the Network Map with an icon representing the device, allowing users one-click access to the device's user interface. When LLTD is invoked, it provides metadata about the device that contains static or state information, such as the MAC address , IPv4/IPv6 address, signal strength etc.
Support for wireless networks is built into the network stack itself, and does not emulate wired connections, as was the case with previous versions of Windows. This allows implementation of wireless-specific features such as larger frame sizes and optimized error recovery procedures. Windows Vista uses various techniques like Receive Window Auto-scaling, Explicit Congestion Notification , TCP Chimney offload and Compound TCP to improve networking performance. Quality of service (QoS) policies can be used to prioritize network traffic, with traffic shaping available to all applications, even those that do not explicitly use QoS APIs. Windows Vista includes in-built support for peer-to-peer networks and SMB 2.0. For improved network security, support for 256-bit and 384-bit Diffie-Hellman (DH) algorithms, as well as for 128-bit, 192-bit and 256-bit Advanced Encryption Standard (AES), is included in the network stack itself, and IPsec is integrated with Windows Firewall .
Windows Vista introduces an overhaul of the previous Windows NT operating system loader architecture NTLDR . Used by versions of Windows NT since its inception with Windows NT 3.1 , NTLDR has been completely replaced with a new architecture designed to address modern firmware technologies such as the Unified Extensible Firmware Interface . [ 36 ] [ 37 ] The new architecture introduces a firmware-independent data store and is backward compatible with previous versions of the Windows operating system. [ 37 ]
Windows Vista introduces an improved driver model, Windows Driver Foundation which is an opt-in framework to replace the older Windows Driver Model . It includes:
Windows Vista includes the following changes and enhancements in processor power management : [ 54 ]
Windows Vista is the first client version of Windows to ship with the .NET Framework. The .NET Framework is a set of managed code APIs that is slated to succeed Win32 . The Win32 API is also present in Windows Vista, but does not give direct access to all the new functionality introduced with the .NET Framework. In addition, .NET Framework is intended to give programmers easier access to the functionality present in Windows itself.
.NET Framework 3.0 includes APIs such as ADO.NET , ASP.NET , Windows Forms , among others, and adds four core frameworks to the .NET Framework:
Windows Presentation Foundation (codenamed Avalon) is the overhaul of the graphical subsystem in Windows and the flagship resolution independent API for 2D and 3D graphics , raster and vector graphics ( XAML ), fixed and adaptive documents ( XPS ), advanced typography , animation ( XAML ), data binding, audio and video in Windows Vista . WPF enables richer control, design, and development of the visual aspects of Windows programs. Based on DirectX, it renders all graphics using Direct3D . Routing the graphics through Direct3D allows Windows to offload graphics tasks to the GPU , reducing the workload on the computer's CPU . This capability is used by the Desktop Window Manager to make the desktop, all windows and all other shell elements into 3D surfaces. WPF applications can be deployed on the desktop or hosted in a web browser ( XBAP ).
The 3D capabilities in WPF are limited compared to what's available in Direct3D. However, WPF provides tighter integration with other features like user interface (UI), documents, and media. This makes it possible to have 3D UI, 3D documents, and 3D media. A set of built-in controls is provided as part of WPF, containing items such as button, menu, and list box controls. WPF provides the ability to perform control composition, where a control can contain any other control or layout. WPF also has a built-in set of data services to enable application developers to bind data to the controls. Images are supported using the Windows Imaging Component. For media, WPF supports any audio and video formats which Windows Media Player can play. In addition, WPF supports time-based animations , in contrast to the frame-based approach. This delinks the speed of the animation from how slow or fast the system is performing. Text is anti-aliased and rendered using ClearType .
WPF uses Extensible Application Markup Language ( XAML ), which is a variant of XML , intended for use in developing user interfaces. Using XAML to develop user interfaces also allows for separation of model and view. In XAML, every element maps onto a class in the underlying API, and the attributes are set as properties on the instantiated classes. All elements of WPF may also be coded in a .NET language such as C#. The XAML code is ultimately compiled into a managed assembly in the same way all .NET languages are, which means that the use of XAML for development does not incur a performance cost.
Windows Communication Foundation (codenamed Indigo) is a new communication subsystem that enables applications to communicate, whether on one machine or across multiple machines connected by a network. The WCF programming model unifies Web Services, .NET Remoting, Distributed Transactions, and Message Queues into a single service-oriented architecture model for distributed computing , in which a server exposes a service via an interface, defined using XML , to which clients connect. WCF runs in a sandbox and provides the enhanced security model that all .NET applications provide.
WCF is capable of using SOAP for communication between two processes, thereby making WCF based applications interoperable with any other process that communicates via SOAP. When a WCF process communicates with a non-WCF process, XML based encoding is used for the SOAP messages but when it communicates with another WCF process, the SOAP messages are encoded in an optimized binary format, to optimize the communication. Both the encodings conform to the data structure of the SOAP format, called Infoset.
Windows Vista also incorporates Microsoft Message Queuing 4.0 (MSMQ) [ 62 ] that supports subqueues, poison messages (messages which continually fail to be processed correctly by the receiver), and transactional receives of messages from a remote queue.
Windows Workflow Foundation is a Microsoft technology for defining, executing and managing workflows . This technology is part of .NET Framework 3.0 and therefore targeted primarily for the Windows Vista operating system. The Windows Workflow Foundation runtime components provide common facilities for running and managing the workflows and can be hosted in any CLR application domain.
Workflows comprise 'activities'. Developers can write their own domain-specific activities and then use them in workflows. Windows Workflow Foundation also provides a set of general-purpose 'activities' that cover several control flow constructs. It also includes a visual workflow designer. The workflow designer can be used within Visual Studio 2005, including integration with the Visual Studio project system and debugger.
Windows CardSpace (codenamed InfoCard), a part of .NET Framework 3.0, is an implementation of Identity Metasystem, which centralizes acquiring, usage and management of digital identity. A digital identity is represented as logical Security Tokens , that each consist of one or more Claims , which provide information about different aspects of the identity, such as name, address etc.
Any identity system centers around three entities — the User who is to be identified, an Identity Provider who provides identifying information regarding the User , and Relying Party who uses the identity to authenticate the user. An Identity Provider may be a service like Active Directory , or even the user who provides an authentication password, or biometric authentication data.
A Relying Party issues a request to an application for an identity, by means of a Policy that states what Claims it needs and what will be the physical representation of the security token. The application then passes on the request to Windows CardSpace, which then contacts a suitable Identity Provider and retrieves the Identity . It then provides the application with the Identity along with information on how to use it.
Windows CardSpace also keeps track of all Identities used, and represents them as visually identifiable virtual cards, accessible to the user from a centralized location. Whenever an application requests any identity, Windows CardSpace informs the user which identity is being used and requires confirmation before it provides the requestor with the identity.
Windows CardSpace presents an API that allows any application to use Windows CardSpace to handle authentication tasks. Similarly, the API allows Identity Providers to hook up with Windows CardSpace. To any Relying Party , it appears as a service which provides authentication credentials.
Media Foundation is a set of COM -based APIs to handle audio and video playback that provides DirectX Video Acceleration 2.0 and better resilience to CPU, I/O, and memory stress for glitch-free low-latency playback of audio and video. It also enables high color spaces through the multimedia processing pipeline. DirectShow and Windows Media SDK will be gradually deprecated in future versions.
The Windows Vista Instant Search index can also be accessed programmatically using both managed and native code. [ 63 ] Native code connects to the index catalog by using a Data Source Object retrieved from the Windows Vista shell's Indexing Service OLE DB provider. Managed code uses the MSIDXS ADO.NET provider with the index catalog name. A catalog on a remote machine can also be specified using a UNC path. The criteria for the search are specified using a SQL -like syntax.
The default catalog is called SystemIndex and it stores all the properties of indexed items with a predefined naming pattern. For example, the name and location of documents in the system are exposed as a table with the column names System.ItemName and System.ItemURL respectively. [ 64 ] An SQL query can refer directly to these tables and index catalogues and use the MSIDXS provider to run queries against them. The search index can also be used via OLE DB , using the CollatorDSO provider. [ 65 ] However, the OLE DB provider is read-only, supporting only SELECT and GROUP ON SQL statements.
The Windows Search API can also be used to convert a search query written using Advanced Query Syntax (or Natural Query Syntax , the natural language version of AQS) to SQL queries. It exposes the GenerateSQLFromUserQuery method of the ISearchQueryHelper interface. [ 66 ] Searches can also be performed using the search-ms: protocol , a pseudo-protocol that lets searches be exposed as a URI . It contains all the operators and search terms specified in AQS and can refer to saved search folders as well. When such a URI is activated, Windows Search, which is registered as a handler for the protocol, parses the URI to extract the parameters and performs the search.
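For instance, translating an AQS query into the SQL actually run against the index via ISearchQueryHelper might look like this sketch (error handling omitted; the AQS query string is an arbitrary example):

```cpp
#include <windows.h>
#include <searchapi.h>
#include <cstdio>

int main() {
    CoInitialize(nullptr);
    ISearchManager* manager = nullptr;
    CoCreateInstance(__uuidof(CSearchManager), nullptr, CLSCTX_SERVER,
                     IID_PPV_ARGS(&manager));

    // Open the default catalog and obtain a query helper for it.
    ISearchCatalogManager* catalog = nullptr;
    manager->GetCatalog(L"SystemIndex", &catalog);
    ISearchQueryHelper* helper = nullptr;
    catalog->GetQueryHelper(&helper);

    // Convert an AQS query into the SQL run against the index.
    LPWSTR sql = nullptr;
    helper->GenerateSQLFromUserQuery(L"kind:document report", &sql);
    wprintf(L"%s\n", sql);  // e.g. SELECT ... FROM SystemIndex WHERE ...

    CoTaskMemFree(sql);
    helper->Release();
    catalog->Release();
    manager->Release();
    CoUninitialize();
}
```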
Winsock Kernel (WSK) is a new transport-independent kernel-mode Network Programming Interface (NPI) that provides TDI client developers with a sockets-like programming model similar to the one supported in user-mode Winsock . While most of the same sockets programming concepts exist as in user-mode Winsock, such as socket creation, bind, connect, accept, send and receive, Winsock Kernel is a completely new programming interface with unique characteristics such as asynchronous I/O that uses IRPs and event callbacks to enhance performance. TDI is supported in Windows Vista for backward compatibility.
Windows Vista includes a specialized QoS API called qWave ( Quality Windows Audio/Video Experience ), [ 67 ] which is a pre-configured quality of service module for time-dependent multimedia data, such as audio or video streams. qWave uses different packet priority schemes for real-time flows (such as multimedia packets) and best-effort flows (such as file downloads or e-mails) to ensure that real-time data incurs as little delay as possible, while providing a high-quality channel for other data packets.
Windows Filtering Platform allows external applications to access and hook into the packet processing pipeline of the networking subsystem.
Windows Vista features an update to the Microsoft Crypto API known as Cryptography API: Next Generation (CNG). CNG is an extensible, user mode and kernel mode API that includes support for Elliptic curve cryptography and a number of newer algorithms that are part of the National Security Agency (NSA) Suite B . It also integrates with the smart card subsystem by including a Base CSP module which encapsulates the smart card API so that developers do not have to write complex CSPs . | https://en.wikipedia.org/wiki/Technical_features_new_to_Windows_Vista |
In engineering , technical peer review is a well-defined review process for finding and correcting defects, conducted by a team of peers with assigned roles. Technical peer reviews are carried out by peers representing the areas of the life cycle affected by the material being reviewed (usually limited to six or fewer people). Technical peer reviews are held within development phases, between milestone reviews, on completed products, or on completed portions of products. [ 1 ] A technical peer review may also be called an engineering peer review , a product peer review , a peer review/inspection or an inspection.
The purpose of a technical peer review is to remove defects as early as possible in the development process. By removing defects at their origin (e.g., requirements and design documents, test plans and procedures, software code, etc.), technical peer reviews prevent defects from propagating through multiple phases and work products and reduce the overall amount of rework necessary on projects. Improved team efficiency is a side effect (e.g., by improving team communication, integrating the viewpoints of various engineering specialty disciplines, more quickly bringing new members up to speed, and educating project members about effective development practices).
In CMMI , peer reviews are used as a principal means of verification in the Verification process area and as an objective evaluation method in the Process and Product Quality Assurance process area. The results of technical peer reviews can be reported at milestone reviews.
Peer reviews are distinct from management reviews, which are conducted by management representatives rather than by colleagues and for management and control purposes rather than for technical evaluation. This is especially true of line managers of the author or other participants in the review. A policy of encouraging management to stay out of peer reviews encourages the peer review team to concentrate on the product being reviewed and not on the people or personalities involved.
They are also distinct from software audit reviews , which are conducted by personnel external to the project, to evaluate compliance with specifications, standards, contractual agreements, or other criteria. A software peer review is a type of technical peer review. The IEEE defines formal structures, roles, and processes for software peer reviews. [ 2 ]
There are two philosophies about the vested interest of the inspectors in the product under review. On one hand, project personnel who have a vested interest in the work product under review have the most knowledge of the product and are motivated to find and fix defects. On the other hand, personnel from outside the project who do not have a vested interest in the work product bring objectivity and a fresh viewpoint to the technical peer review team.
Each inspector is invited to disclose vested interests to the rest of the technical peer review panel so the moderator can exercise sound judgement in evaluating the inspector's inputs. | https://en.wikipedia.org/wiki/Technical_peer_review |
A technical standard is an established norm or requirement for a repeatable technical task which is applied to a common and repeated use of rules, conditions, guidelines or characteristics for products or related processes and production methods, and related management systems practices. A technical standard includes definition of terms; classification of components; delineation of procedures; specification of dimensions, materials, performance, designs, or operations; measurement of quality and quantity in describing materials, processes, products, systems, services, or practices; test methods and sampling procedures; or descriptions of fit and measurements of size or strength. [ 1 ]
It is usually a formal document that establishes uniform engineering or technical criteria, methods, processes, and practices. In contrast, a custom, convention, company product, corporate standard, and so forth that becomes generally accepted and dominant is often called a de facto standard.
A technical standard may be developed privately or unilaterally, for example by a corporation, regulatory body, military, etc. Standards can also be developed by groups such as trade unions and trade associations. Standards organizations often have more diverse input and usually develop voluntary standards: these might become mandatory if adopted by a government (i.e., through legislation), business contract, etc.
The standardization process may be by edict or may involve the formal consensus of technical experts.
The primary types of technical standards are:
Technical standards are defined [ 4 ] as:
Technical standards may exist as:
When a geographically defined community must solve a community-wide coordination problem , it can adopt an existing standard or produce a new one. The main geographic levels are local, national, regional, and international.
Establishing national, regional, or international standards is one way of overcoming technical barriers in inter-local or inter-regional commerce caused by differences among technical regulations and standards developed independently and separately by each locality, local standards organisation , or local company. Technical barriers arise when different groups come together, each with a large user base, doing some well established thing that between them is mutually incompatible. To further support the prevention of such barriers, the WTO Technical Barriers to Trade (TBT) Committee published the "Six Principles" guiding members in the development of international standards. [ 7 ]
The existence of a published standard does not imply that it is always useful or correct. For example, if an item complies with a certain standard, there is not necessarily assurance that it is fit for any particular use. The people who use the item or service (engineers, trade unions, etc.) or specify it (building codes, government, industry, etc.) have the responsibility to consider the available standards, specify the correct one, enforce compliance, and use the item correctly. Validation of suitability is necessary.
Standards are often reviewed, revised, and updated on a regular basis. It is critical that the most current version of a published standard be used or referenced. The originator or standards-writing body often lists the current versions on its website.
In social sciences , including economics , a standard is useful if it is a solution to a coordination problem :
it emerges from situations in which all parties realize mutual gains, but only by making mutually consistent decisions.
Examples :
Private standards are developed by private entities such as companies, non-governmental organizations or private sector multi-stakeholder initiatives, also referred to as multistakeholder governance . Not all technical standards are created equal: private standards are typically developed through a non-consensus process, in contrast to voluntary consensus standards. This is explained in the paper International standards and private standards . [ 8 ]
The International Trade Centre published a literature review series with technical papers on the impacts of private standards [ 9 ] [ 10 ] [ 11 ] [ 12 ] and the Food and Agriculture Organization (FAO) published a number of papers on the proliferation of private food safety standards in the agri-food industry, mostly driven by standard harmonization under the multistakeholder governance of the Global Food Safety Initiative (GFSI). [ 13 ] [ 14 ] [ 15 ] [ 16 ] Because private standards are developed through a non-consensus process, they cannot adhere to the TBT Committee's Six Principles for the development of international standards; given the resulting concerns about private standards and technical barriers to trade (TBT), the WTO does not rule out the possibility that the actions of private standard-setting bodies may be subject to WTO law. [ 17 ] [ 18 ]
BSI Group compared private food safety standards with "plugs and sockets", explaining that the food sector is full of "confusion and complexity". Also, "the multiplicity of standards and assurance schemes has created a fragmented and inefficient supply chain structure imposing unnecessary costs on businesses that have no choice but to pass on to consumers". [ 19 ] BSI provides examples of other sectors working with a single international standard : ISO 9001 (quality), ISO 14001 (environment), ISO 45001 (occupational health and safety), ISO 27001 (information security) and ISO 22301 (business continuity). Another example of a sector working with a single international standard is ISO 13485 (medical devices), which is adopted by the International Medical Device Regulators Forum (IMDRF).
In 2020, Fairtrade International , and in 2021, Programme for the Endorsement of Forest Certification (PEFC) issued position statements [ 20 ] [ 21 ] defending their use of private standards in response to reports from The Institute for Multi-Stakeholder Initiative Integrity (MSI Integrity) [ 22 ] and Greenpeace. [ 23 ]
Private standards typically require a financial contribution, in the form of an annual fee, from the organizations that adopt the standard. Corporations are encouraged to join the board of governance of the standard owner, [ 24 ] which enables reciprocity: corporations gain influence over the requirements in the standard, and in return the same corporations promote the standard in their supply chains, which generates revenue and profit for the standard owner. Financial incentives with private standards can result in a perverse incentive , where some private standards are created solely with the intent of generating money. BRCGS, as scheme owner of private standards, was acquired in 2016 by LGC Ltd, which was owned by the private equity company Kohlberg Kravis Roberts . [ 25 ] This acquisition triggered substantial increases in BRCGS annual fees. [ 26 ] In 2019, LGC Ltd was sold to the private equity companies Cinven and Astorg. [ 27 ] | https://en.wikipedia.org/wiki/Technical_standard
A technical writer is a professional communicator whose task is to convey complex information in simple terms to an audience of the general public or a very select group of readers. Technical writers research and create information through a variety of delivery media (electronic, printed, audio-visual, and even touch). [ 1 ] Example types of information include online help , manuals, white papers , design specifications , project plans, and software test plans. With the rise of e-learning , technical writers are increasingly hired to develop online training material.
According to the Society for Technical Communication (STC): [ 2 ]
Technical writing is sometimes defined as simplifying the complex. Inherent in such a concise and deceptively simple definition is a whole range of skills and characteristics that address nearly every field of human endeavor at some level. A significant subset of the broader field of technical communication, technical writing involves communicating complex information to those who need it to accomplish some task or goal.
In other words, technical writers take advanced technical concepts and communicate them as clearly, accurately, and comprehensively as possible to their intended audience, ensuring that the work is accessible to its users.
Kurt Vonnegut described technical writers as: [ 3 ]
...trained to reveal almost nothing about themselves in their writing. This makes them freaks in the world of writers, since almost all of the other ink-stained wretches in that world reveal a lot about themselves to the reader.
Engineers, scientists, and other professionals may also be involved in technical writing ( developmental editing , proofreading , etc.), but are more likely to employ professional technical writers to develop, edit and format material, and follow established review procedures as a means of delivering information to their audiences.
According to the Society for Technical Communication (STC), the professions of technical communication and technical writing were first referenced around World War I , [ 2 ] when technical documents became a necessity for military purposes. The job title emerged in the US during World War II, [ 4 ] although it was not until 1951 that the first "Help Wanted: Technical Writer" ad was published. [ 5 ] In fact, the title "Technical Writer" was not added to the US Bureau of Labor Statistics' Occupational Employment Handbook until 2010. [ 6 ] During the 1940s and 50s, technical communicators and writers were hired to produce documentation for the military, often including detailed instructions on new weaponry. Other technical communicators and writers were involved in developing documentation for new technologies that were developed around this time. According to O'Hara: [ 7 ]
War was the most important driver of scientific and technological advance. The U.S. Army Medical Corps battled malaria in the jungles of Panama, the Chemical Corps pushed chemical advances in explosives and poisonous gases (and defenses against them), the Manhattan District of the Corps of Engineers literally made quantum leaps in the understanding of physics, and the Air Corps pioneered aviation design.
Since the early days of the profession, technical writers have worked in teams alongside other technical writers. To this day, most organizations employ a team to produce and edit technical writing for an assigned product or service. As members of a team, technical writers work independently to research their assignments. Regular one-on-one meetings with Subject Matter Experts (SMEs) and internal research references (e.g., mechanical drawings, specifications, BOMs, datasheets, etc.) provide the technical writer with the necessary checks to ensure a document's accuracy. Once the accuracy of a document has been reviewed and approved by the assigned SME, technical writers rely on their writing team to provide peer reviews. The peer review focuses exclusively on content format, style, and grammar standardization. The goal of the team's peer reviews is to ensure that an organization's technical writing "speaks with one voice".
During World War II, one of the most important characteristics for technical writers was their ability to follow stringent government specifications for documents. [ 7 ] After the war, the rise of new technology, such as the computer, allowed technical writers to work in other areas, producing [ 7 ] "user manuals, quick reference guides, hardware installation manuals, and cheat sheets." After the war (1953–1961), technical communicators (including technical writers) became interested in "professionalizing" their field. [ 6 ] According to Malone, [ 6 ] technical communicators/writers did so by creating professional organizations, cultivating a "specialized body of knowledge" for the profession, imposing ethical standards on technical communicators, initiating a conversation about certifying practitioners in the field, and working to accredit education programs in the field.
The profession has continued to grow—according to O'Hara, the writing/editing profession, including technical writers, experienced a 22% increase in positions between the years 1994 and 2005. [ 7 ] Modern day technical writers work in a variety of contexts. Many technical writers work remotely using VPN or communicate with their team via videotelephony platforms such as Skype or Zoom . Other technical writers work in an office, but share content with their team through complex content management systems that store documents online. Technical writers may work on government reports, internal documentation, instructions for technical equipment, embedded help within software or systems, or other technical documents. As technology continues to advance, the array of possibilities for technical writers will continue to expand. Many technical writers are responsible for creating technical documentation for mobile applications or help documentation built within mobile or web applications. They may be responsible for creating content that will only be viewed on a hand-held device; much of their work will never be published in a printed booklet like technical documentation of the past.
Historically, technical writers, or technical and professional communicators, have been concerned with writing and communication. However, recently user experience (UX) design has become more prominent in technical and professional communications as companies look to develop content for a wide range of audiences and experiences. [ 8 ]
The User Experience Professionals Association defines UX as “Every aspect of the user’s interaction with a product, service, or company that make up the user’s perception of the whole.” [ 9 ] Therefore, “user experience design as a discipline is concerned with all the elements that together make up that interface, including layout, visual design, text, brand, sound, and interaction." [ 9 ]
It is now an expectation that technical communication skills should be coupled with UX design. As Verhulsdonck, Howard, and Tham state “...it is not enough to write good content. According to industry expectations, next to writing good content, it is now also crucial to design good experiences around that content." [ 8 ] Technical communicators must now consider different platforms such as social media and apps, as well as different channels like web and mobile. [ 8 ]
As Redish explains, a technical communications professional no longer writes content but “writes around the interface” itself, as the user experience surrounding content is developed. This includes usable content customized to specific user needs that addresses user emotions, feelings, and thoughts across different channels in a UX ecology. [ 10 ] [ 8 ]
Lauer and Brumberger further assert, “…UX is a natural extension of the work that technical communicators already do, especially in the modern technological context of responsive design, in which content is deployed across a wide range of interfaces and environments." [ 11 ]
UX design is a product of both technical communication and the user identity. Effective UX design is configured to maximize usability according to unique user backgrounds, in a process called design ethnography. [ 12 ] Design ethnography closely analyzes user culture through interviews and usability tests, in which the technical writer directly immerses themself in the user environment and gathers UX information from local users.
In addition to solid research, language, writing, and revision skills, a technical writer may have skills in:
A technical writer may apply their skills in the production of non-technical content, for example, writing high-level consumer information. Usually, a technical writer is not a subject-matter expert (SME), but interviews SMEs and conducts the research necessary to write and compile technically accurate content. Technical writers complete both primary and secondary research to fully understand the topic. [ citation needed ]
Proficient technical writers have the ability to create, assimilate, and convey technical material in a concise and effective manner. They may specialize in a particular area but must have a good understanding of the products they describe. [ 14 ] For example, API writers primarily work on API documents, while other technical writers specialize in electronic commerce , manufacturing, scientific, or medical material. [ 14 ]
Technical writers gather information from many sources. Their information sources are usually scattered throughout an organization, which can range from developers to marketing departments.
According to Markel, [ 15 ] useful technical documents are measured by eight characteristics: "honesty, clarity, accuracy, comprehensiveness, accessibility, conciseness, professional appearance, and correctness." Technical writers are focused on using their careful research to create effective documents that meet these eight characteristics.
To create effective technical documentation, the writer must analyze three elements that comprise the rhetorical situation of a particular project: audience, purpose, and context. [ 16 ] These are followed by document design, which determines what the reader sees.
Technical writers strive to simplify complex concepts or processes to maximize reader comprehension. The final goal of a particular document is to help readers find what they need, understand what they find, and use what they understand appropriately. [ 17 ] To reach this goal, technical writers must understand how their audiences use and read documentation. An audience analysis at the outset of a document project helps define what an audience for a particular document requires.
When analyzing an audience the technical writer typically asks: [ 17 ]
Accurate audience analysis provides a set of guidelines that shape document content, design and presentation (online help system, interactive website, manual, etc.), and tone and knowledge level.
A technical writer analyzes the purpose (or function) of a communication to understand what a document must accomplish. Determining if a communication aims to persuade readers to “think or act a certain way, enable them to perform a task, help them understand something, change their attitude,” [ 16 ] etc., guides the technical writer on how to format their communication, and the kind of communication they choose (online help system, white paper, proposal, etc.).
Context is the physical and temporal circumstances in which readers use communication—for example: at their office desks, in a manufacturing plant, during the slow summer months, or in the middle of a company crisis. [ 16 ] Understanding the context of a situation tells the technical writer how readers use communication. This knowledge significantly influences how the writer formats communication. For example, if the document is a quick troubleshooting guide to the controls on a small watercraft, the writer may have the pages laminated to increase usable life.
Once the above information has been gathered, the document is designed for optimal readability and usability. According to one expert, technical writers use six design strategies to plan and create technical communication: arrangement, emphasis, clarity, conciseness, tone, and ethos. [ 16 ]
Technical writers normally possess a mixture of technical and writing abilities. They typically have a degree or certification in a technical field, but may have one in journalism, business, or other fields. Many technical writers switch from another field, such as journalism—or a technical field such as engineering or science, often after learning important additional skills through technical communications classes. [ 18 ]
To create a technical document, a technical writer must understand the subject, purpose, and audience. They gather information by studying existing material, interviewing SMEs, and often actually using the product. They study the audience to learn their needs and technical understanding level.
A technical publication's development life cycle typically consists of five phases, coordinated with the overall product development plan: [ 19 ]
The document development life cycle typically consists of six phases (this varies from organization to organization, depending on how it works).
This is similar to the software development life cycle.
Well-written technical documents usually follow formal standards or guidelines. Technical documentation comes in many styles and formats, depending on the medium and subject area. Printed and online documentation may differ in various ways, but still adhere to largely identical guidelines for prose, information structure, and layout. Usually, technical writers follow formatting conventions described in a standard style guide . In the US, technical writers typically use The Associated Press Stylebook or the Chicago Manual of Style (CMOS). Many companies have internal corporate style guides that cover specific corporate issues such as logo use, branding, and other aspects of corporate style. The Microsoft Manual of Style for Technical Publications is typical of these.
Engineering projects, particularly defense or aerospace-related projects, often follow national and international documentation standards—such as ATA100 for civil aircraft or S1000D for civil and defense platforms.
Technical writers often work as part of a writing or project development team. Typically, the writer finishes a draft and passes it to one or more SMEs who conduct a technical review to verify accuracy and completeness. Another writer or editor may perform an editorial review that checks conformance to styles, grammar, and readability. This person may request clarification or make suggestions. In some cases, the writer or others test the document on audience members to make usability improvements. Final production typically follows an inspection checklist to ensure the quality and uniformity of the published product. [ 20 ]
The physical working environment of most company-employed technical writers typically entails an open office with desktop computers and individual desks. A technical writer's workspace is largely dependent on their industry. A 2018 Intercom census of mostly American technical communicators showed that the majority of respondents worked in technology and IT. [ 21 ] Prevalence of various industries in technical writing is correlated to geographic location, and the industries that are most common in certain regions of the world. A study of technical communication careers in Europe showed that the majority of technical communicators work in IT.
In the wake of the stay-at-home guidance issued by the World Health Organization in March 2020, due to the COVID-19 pandemic, employees around the world experienced a shift in work environment from in-person to remote and/or virtual. As of 2023, after social distancing policies were loosened, many organizations have decided to maintain the option for employees to work remotely. In the particular case of professional technical writers, this change forces an alternative approach to communication with subject matter experts, colleagues, and project managers who are directly involved in the technical communication process. Employees who work remotely typically rely on virtual, at times asynchronous, communication with collaborators, and spend working hours either at home or in an isolated office. [ 22 ]
There is no single standard career path for technical writers, but they may move into project management, overseeing other writers. A writer may advance to a senior technical writer position, handling complex projects or a small team of writers and editors. In larger groups, a documentation manager might handle multiple projects and teams.
Technical writers may also gain expertise in a particular technical domain and branch into related forms, such as software quality analysis or business analysis. A technical writer who becomes a subject matter expert in a field may transition from technical writing to work in that field. Technical writers commonly produce training for the technologies they document—including classroom guides and e-learning—and some transition to specialize as professional trainers and instructional designers.
Technical writers with strong writing skills can join printed media or electronic media companies, potentially providing an opportunity to make more money or obtain improved working conditions.
In April 2021, the U.S. Department of Labor expected technical writer employment to grow seven percent from 2019 to 2029, slightly faster than the average for all occupations. It expects job opportunities, especially for applicants with technical skills, to be good. The BLS also noted that the expansion of "scientific and technical products" and the need for technical writers to work in "Web-based product support" will drive increasing demand. [ 23 ]
As of May 2022, the average annual pay for a freelance technical writer in the United States is $70,191 according to ZipRecruiter . [ 24 ]
Technical writers can have various job titles, including technical communicator, information developer , technical content developer or technical documentation specialist . In the United Kingdom and some other countries, a technical writer is often called a technical author or knowledge author . | https://en.wikipedia.org/wiki/Technical_writer |
Since haematopoietic stem cells cannot be isolated as a pure population, it is not possible to identify them under a microscope. [ citation needed ] Therefore, there are many techniques to isolate haematopoietic stem cells (HSCs). HSCs can be identified or isolated by the use of flow cytometry , where a combination of several different cell surface markers is used to separate the rare HSCs from the surrounding blood cells. HSCs lack expression of mature blood cell markers and are thus called Lin−. Lack of expression of lineage markers is used in combination with detection of several positive cell-surface markers to isolate HSCs. In addition, HSCs are characterized by their small size and low staining with vital dyes such as rhodamine 123 (rhodamine lo ) or Hoechst 33342 (side population).
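In computational terms, marker-based identification of this kind reduces to a set of boolean gates applied to per-cell intensity data. Below is a minimal sketch in Python, assuming a hypothetical table of fluorescence intensities; the abbreviated marker panel, column names, and the single positivity threshold are illustrative assumptions, not a validated gating strategy.

```python
import pandas as pd

# Hypothetical per-cell fluorescence intensities. Real cytometry data are
# compensated and transformed first, and a full lineage (Lin) panel
# contains many more markers than the two shown here.
events = pd.DataFrame({
    "CD3":  [0.1, 5.2, 0.2, 0.3],   # lineage marker (T cells)
    "CD19": [0.2, 0.1, 6.0, 0.2],   # lineage marker (B cells)
    "CD34": [7.5, 0.3, 0.4, 6.8],   # positive marker for human HSCs
    "CD38": [0.2, 4.1, 3.3, 0.1],   # low/negative on primitive HSCs
})

lineage_markers = ["CD3", "CD19"]
positive = 1.0                      # assumed positivity threshold

# Lin-: negative for every mature-lineage marker in the panel
lin_negative = (events[lineage_markers] < positive).all(axis=1)

# Combine the negative gate with positive/negative surface markers
# to select Lin- CD34+ CD38- events as candidate HSCs.
hsc_gate = lin_negative & (events["CD34"] > positive) & (events["CD38"] < positive)
print(events[hsc_gate])
```

The same gating logic, applied event by event in a sorter's electronics, is what allows fluorescence-activated cell sorting to physically separate the selected cells.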
CD34+ cells can be isolated from peripheral blood samples by four different techniques.
The classical marker of human HSCs is CD34 , first described independently by Civin et al. and Tindle et al. [ 1 ] [ 2 ] [ 3 ] [ 4 ] It is used to isolate HSCs for the reconstitution of patients who are haematologically incompetent as a result of chemotherapy or disease.
Many markers belong to the cluster of differentiation series, like: CD34 , CD38 , CD90 , CD133 , CD105 , CD45 , and also c-kit – the receptor for stem cell factor .
There are many differences between the human and murine hematopoietic cell markers for the commonly accepted type of hematopoietic stem cells. [ 5 ]
However, these marker combinations, though they have become popular, do not cover all stem cells. In fact, even in humans, there are hematopoietic stem cells that are CD34 − / CD38 − . [ 6 ] [ 7 ] Some later studies also suggested that the earliest stem cells may lack c-kit on the cell surface. [ 8 ] For human HSCs, the use of CD133 was a step forward, as both CD34 + and CD34 − HSCs are CD133 + .
Traditional purification methods used to yield a reasonable purity level of mouse hematopoietic stem cells generally require a large battery (~10–12) of markers, most of which are surrogate markers with little functional significance; they therefore overlap only partially with the true stem cell population and sometimes capture other closely related cells that are not stem cells. Also, some of these markers (e.g., Thy1 ) are not conserved across mouse species, and the use of markers like CD34 − for HSC purification requires mice to be at least 8 weeks old.
Alternative methods that could give rise to a similar or better harvest of stem cells are an active area of research and are presently [ when? ] emerging. One such method uses a signature of SLAM family cell surface molecules. The SLAM ( Signaling lymphocyte activation molecule ) family is a group of more than 10 molecules whose genes are located mostly tandemly in a single locus on chromosome 1 (mouse), all belonging to a subset of the immunoglobulin gene superfamily, and originally thought to be involved in T-cell stimulation. This family includes CD48 , CD150 , CD244 , etc., CD150 being the founding member, and, thus, also known as slamF1, i.e., SLAM family member 1.
The signature SLAM codes for the hemopoietic hierarchy are: HSC, CD150 + CD48 − CD244 − ; multipotent progenitor (MPP), CD150 − CD48 − CD244 + ; and lineage-restricted progenitor (LRP), CD150 − CD48 + CD244 + .
For HSCs, CD150 + CD48 − was sufficient, instead of CD150 + CD48 − CD244 − , because CD48 is a ligand for CD244, and both would be positive only in the activated lineage-restricted progenitors. This code seems more efficient than the more tedious earlier set of many markers, and it is also conserved across mouse strains; however, recent work has shown that this method excludes a large number of HSCs and includes an equally large number of non-stem cells. [ 9 ] [ 10 ] CD150 + CD48 − gave stem cell purity comparable to Thy1 lo SCA-1 + lin − c-kit + in mice. [ 11 ]
Irving Weissman 's group at Stanford University was the first to isolate mouse hematopoietic stem cells in 1986 [ 12 ] [ 13 ] and was also the first to work out the markers that distinguish the self-renewal-capable mouse long-term (LT-HSC) and short-term (ST-HSC) hematopoietic stem cells from the multipotent progenitors (MPP, with low or no self-renewal capability; the later the developmental stage of MPP, the lesser the self-renewal ability and the more of some of the markers like CD4 and CD135 ): | https://en.wikipedia.org/wiki/Techniques_to_isolate_haematopoietic_stem_cells
Techno-economic assessment or techno-economic analysis (abbreviated TEA) is a method of analyzing the economic performance of an industrial process, product, or service. The methodology originates from earlier work on combining technical, economic and risk assessments for chemical production processes. [ 1 ] It typically uses software modeling to estimate capital cost, operating cost, and revenue based on technical and financial input parameters. [ 2 ] One desired outcome is to summarize results in a concise and visually coherent form, using visualization tools such as tornado diagrams and sensitivity analysis graphs.
At present, TEA is most commonly used to analyze technologies in the chemical , bioprocess , petroleum , energy , and similar industries. This article focuses on these areas of application.
TEA can be used for studying new technologies or optimizing existing ones. Ideally, a techno-economic model represents the best current understanding of the system being modeled. The following are examples of typical uses.
Techno-economic analyses are usually performed using a techno-economic model. A techno-economic model is an integrated process and cost model . It combines elements of process design , process modeling , equipment sizing, capital cost estimation, and operating cost estimation.
To begin with, the system is defined in the form of a process flow diagram (PFD). A typical PFD shows major equipment and material streams. The term ‘material stream’ refers to liquids, solids, or gases entering or exiting the system, or flowing from one piece of equipment to another.
The process model uses engineering and material balance calculations to more fully characterize the system being analyzed. The results are often summarized in the form of a material balance table or stream table, which corresponds to the PFD.
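As a toy illustration of the material balance underlying a stream table, the sketch below checks that the flows around a hypothetical single unit close at steady state; the stream names and flow rates are invented for the example.

```python
# Hypothetical stream table (kg/h) around one piece of equipment.
inlets  = {"feed": 1000.0, "solvent": 250.0}
outlets = {"product": 900.0, "waste": 350.0}

total_in  = sum(inlets.values())
total_out = sum(outlets.values())

# At steady state, total mass in must equal total mass out (within a
# numerical tolerance); otherwise the stream table is inconsistent.
imbalance = total_in - total_out
assert abs(imbalance) < 1e-6, f"stream table does not close: {imbalance:+.3f} kg/h"
print(f"in = {total_in:.0f} kg/h, out = {total_out:.0f} kg/h; balance closes")
```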
The output from the process model is used to:
Capital costs are typically estimated using a major equipment factored approach. [ 3 ] [ 4 ] First, the purchase cost for each piece of equipment is estimated from the results of the equipment sizing calculations, often using power law scaling relationships. [ 2 ] Next, the balance of the capital costs are estimated by applying multiplying factors based on similar systems. [ 5 ]
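A minimal sketch of this factored approach is given below; the base cost, equipment sizes, scaling exponent, and installation factor are all illustrative assumptions (real values come from published correlations and depend on the equipment and process type).

```python
def scaled_equipment_cost(base_cost, base_size, new_size, exponent=0.6):
    """Power-law scaling of purchased equipment cost.

    The 0.6 default is the commonly cited "six-tenths rule"; actual
    exponents are equipment-specific.
    """
    return base_cost * (new_size / base_size) ** exponent

# Hypothetical: a 20 m3 vessel cost $500,000; estimate a 50 m3 unit.
purchase_cost = scaled_equipment_cost(500_000, 20, 50)

# Factored estimate: multiply total purchased-equipment cost by an
# overall installation ("Lang") factor to approximate total capital.
# The factor 4.7 is illustrative only.
lang_factor = 4.7
total_capital = lang_factor * purchase_cost
print(f"purchased: ${purchase_cost:,.0f}; installed capital: ${total_capital:,.0f}")
```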
Typical operating costs include raw materials, operating labor, waste treatment and disposal, utilities, and overhead . Raw material and waste treatment costs are estimated by applying prices to raw material and waste flow rates from the process model. Similarly, utility costs are estimated by applying prices to the utility rates from equipment sizing. [ 5 ]
Operating labor can be estimated based on equipment size, quantity, and type. Overhead is typically estimated by applying heuristic factors to capital costs and operating labor. [ 5 ]
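The sketch below rolls these components into a total annual operating cost; all flow rates, prices, the headcount, and the overhead factor are invented for illustration.

```python
# Quantities would come from the process model and equipment sizing;
# prices from market data. All numbers here are made up.
raw_materials = {"feedstock": (8_000_000, 0.45)}    # (kg/yr, $/kg)
utilities     = {"electricity": (2_500_000, 0.08)}  # (kWh/yr, $/kWh)
waste         = {"aqueous_waste": (900_000, 0.02)}  # (kg/yr, $/kg)

variable_cost = sum(
    quantity * price
    for source in (raw_materials, utilities, waste)
    for quantity, price in source.values()
)

operating_labor = 4 * 60_000        # assumed 4 operators at $60,000/yr
overhead = 0.6 * operating_labor    # heuristic overhead factor

total_operating_cost = variable_cost + operating_labor + overhead
print(f"total operating cost: ${total_operating_cost:,.0f}/yr")
```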
Techno-economic models may also include a discounted cash flow analysis to calculate metrics like net present value and internal rate of return. A cash flow analysis will typically incorporate financial parameters like taxes and discount rates.
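A minimal discounted-cash-flow sketch follows; the cash-flow series and discount rate are illustrative, and the internal rate of return is located with a simple bisection search rather than a financial library.

```python
# Year 0 holds the (negative) capital outlay; later entries are annual
# net cash flows. All values are illustrative.
cash_flows = [-1_000_000, 250_000, 300_000, 350_000, 400_000, 400_000]

def npv(rate, flows):
    """Net present value of a cash-flow series at a given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

print(f"NPV at 10%: ${npv(0.10, cash_flows):,.0f}")

# IRR is the discount rate at which NPV equals zero. NPV is monotone
# decreasing in the rate for this sign pattern, so bisection works.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if npv(mid, cash_flows) > 0 else (lo, mid)
print(f"IRR ~ {lo:.2%}")
```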
TEA is typically performed using one of two platforms: spreadsheet software, like Microsoft Excel, or a process simulator , like AVEVA Process Simulation, Aspen, or SuperPro Designer; other options include integrated tools such as thecubeSphere and open-source software such as the Python-based BioSTEAM. [ 6 ] In general, these platforms use the methodology described above.
Spreadsheet modeling is often preferred for early-stage technologies and startups since it tends to offer greater flexibility, accessibility, and transparency. Process simulators, on the other hand, offer more powerful process simulation capabilities, greater standardization, and integrated cost-estimation modules.
More recently, researchers have demonstrated that machine learning models can be trained on simulation outputs to produce so-called surrogate models capable of predicting costs, mass balances, and energy balances. [ 7 ]
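A sketch of the surrogate-model idea is shown below, with a cheap stand-in function in place of a real flowsheet simulator; the regressor choice, input ranges, and cost function are arbitrary assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def simulate_cost(capacity, temperature):
    # Stand-in for an expensive process simulation run.
    return 1e5 * capacity**0.6 + 500.0 * temperature

# Sample the input space and run the "simulator" once per sample.
X = rng.uniform(low=[1.0, 300.0], high=[100.0, 600.0], size=(200, 2))
y = np.array([simulate_cost(c, t) for c, t in X])

# Train the surrogate on the (input, output) pairs.
surrogate = GradientBoostingRegressor().fit(X, y)

# Later predictions are near-instant, enabling cheap sensitivity
# or Monte Carlo studies over the input space.
print(surrogate.predict([[40.0, 450.0]]))
```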
Assuming a complete process design, the major equipment factored approach that is often used in TEA has an expected accuracy of -30% to +50%. [ 4 ] In the early stages of development, however, the process design is often incomplete or inaccurate, so the error bounds are often considerably larger. Examples of how uncertainty is managed in process modeling and economic analysis of early stage technologies can be found for materials used in long duration energy storage and hydrogen storage. [ 8 ] [ 9 ]
| https://en.wikipedia.org/wiki/Techno-economic_assessment
Applied mathematics is the application of mathematical methods by different fields such as physics , engineering , medicine , biology , finance , business , computer science , and industry . Thus, applied mathematics is a combination of mathematical science and specialized knowledge. The term "applied mathematics" also describes the professional specialty in which mathematicians work on practical problems by formulating and studying mathematical models .
In the past, practical applications have motivated the development of mathematical theories, which then became the subject of study in pure mathematics where abstract concepts are studied for their own sake. The activity of applied mathematics is thus intimately connected with research in pure mathematics.
Historically, applied mathematics consisted principally of applied analysis , most notably differential equations ; approximation theory (broadly construed, to include representations , asymptotic methods, variational methods , and numerical analysis ); and applied probability . These areas of mathematics related directly to the development of Newtonian physics , and in fact, the distinction between mathematicians and physicists was not sharply drawn before the mid-19th century. This history left a pedagogical legacy in the United States: until the early 20th century, subjects such as classical mechanics were often taught in applied mathematics departments at American universities rather than in physics departments, and fluid mechanics may still be taught in applied mathematics departments. [ 1 ] Engineering and computer science departments have traditionally made use of applied mathematics.
As time passed, applied mathematics grew alongside the advancement of science and technology. In the modern era, its application in fields such as science, economics, and technology became deeper and more pervasive, and the development of computers and other technologies enabled more detailed study and application of mathematical concepts in various fields.
Today, applied mathematics continues to be crucial for societal and technological advancement. It guides the development of new technologies and economic progress, and addresses challenges in various scientific fields and industries. Its history demonstrates the continuing importance of mathematics in human progress.
Today, the term "applied mathematics" is used in a broader sense. It includes the classical areas noted above as well as other areas that have become increasingly important in applications. Even fields such as number theory that are part of pure mathematics are now important in applications (such as cryptography ), though they are not generally considered to be part of the field of applied mathematics per se .
There is no consensus as to what the various branches of applied mathematics are. Such categorizations are made difficult by the way mathematics and science change over time, and also by the way universities organize departments, courses, and degrees.
Many mathematicians distinguish between "applied mathematics", which is concerned with mathematical methods, and the "applications of mathematics" within science and engineering. A biologist using a population model and applying known mathematics would not be doing applied mathematics, but rather using it; however, mathematical biologists have posed problems that have stimulated the growth of pure mathematics. Mathematicians such as Poincaré and Arnold deny the existence of "applied mathematics" and claim that there are only "applications of mathematics." Similarly, non-mathematicians blend applied mathematics and applications of mathematics. The use and development of mathematics to solve industrial problems is also called "industrial mathematics". [ 2 ]
The success of modern numerical mathematical methods and software has led to the emergence of computational mathematics , computational science , and computational engineering , which use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary.
Sometimes, the term applicable mathematics is used to distinguish between the traditional applied mathematics that developed alongside physics and the many areas of mathematics that are applicable to real-world problems today, although there is no consensus as to a precise definition. [ 3 ]
Mathematicians often distinguish between "applied mathematics" on the one hand, and the "applications of mathematics" or "applicable mathematics" both within and outside of science and engineering, on the other. [ 3 ] Some mathematicians emphasize the term applicable mathematics to separate or delineate the traditional applied areas from new applications arising from fields that were previously seen as pure mathematics. [ 4 ] For example, from this viewpoint, an ecologist or geographer using population models and applying known mathematics would not be doing applied, but rather applicable, mathematics. Such descriptions can lead to applicable mathematics being seen as a collection of mathematical methods such as real analysis , linear algebra , mathematical modelling , optimisation , combinatorics , probability and statistics , which are useful in areas outside traditional mathematics and not specific to mathematical physics .
Other authors prefer describing applicable mathematics as a union of "new" mathematical applications with the traditional fields of applied mathematics. [ 4 ] [ 5 ] [ 6 ] With this outlook, the terms applied mathematics and applicable mathematics are thus interchangeable.
Historically, mathematics was most important in the natural sciences and engineering . However, since World War II , fields outside the physical sciences have spawned the creation of new areas of mathematics, such as game theory and social choice theory , which grew out of economic considerations. Further, the utilization and development of mathematical methods expanded into other areas leading to the creation of new fields such as mathematical finance and data science .
The advent of the computer has enabled new applications: studying and using the new computer technology itself ( computer science ) to study problems arising in other areas of science (computational science) as well as the mathematics of computation (for example, theoretical computer science , computer algebra , [ 7 ] [ 8 ] [ 9 ] [ 10 ] numerical analysis [ 11 ] [ 12 ] [ 13 ] [ 14 ] ). Statistics is probably the most widespread mathematical science used in the social sciences .
Academic institutions are not consistent in the way they group and label courses, programs, and degrees in applied mathematics. At some schools, there is a single mathematics department, whereas others have separate departments for Applied Mathematics and (Pure) Mathematics. It is very common for Statistics departments to be separated at schools with graduate programs, but many undergraduate-only institutions include statistics under the mathematics department.
Many applied mathematics programs (as opposed to departments) consist primarily of cross-listed courses and jointly appointed faculty in departments representing applications. Some Ph.D. programs in applied mathematics require little or no coursework outside mathematics, while others require substantial coursework in a specific area of application. In some respects this difference reflects the distinction between "application of mathematics" and "applied mathematics".
Some universities in the U.K . host departments of Applied Mathematics and Theoretical Physics , [ 15 ] [ 16 ] [ 17 ] but it is now much less common to have separate departments of pure and applied mathematics. A notable exception to this is the Department of Applied Mathematics and Theoretical Physics at the University of Cambridge , housing the Lucasian Professor of Mathematics whose past holders include Isaac Newton , Charles Babbage , James Lighthill , Paul Dirac , and Stephen Hawking .
Schools with separate applied mathematics departments range from Brown University , which has a large Division of Applied Mathematics that offers degrees through the doctorate , to Santa Clara University , which offers only the M.S. in applied mathematics. [ 20 ] Research universities dividing their mathematics department into pure and applied sections include MIT . Students in such programs also learn another skill (computer science, engineering, physics, pure math, etc.) to supplement their applied math skills.
Applied mathematics is associated with the following mathematical sciences:
Mathematics is used in all branches of engineering and has subsequently developed into distinct specialties within the engineering profession.
For example, continuum mechanics is foundational to civil , mechanical and aerospace engineering, with courses in solid mechanics and fluid mechanics being important components of the engineering curriculum. Continuum mechanics is also an important branch of mathematics in its own right. It has served as the inspiration for a vast range of difficult research questions for mathematicians involved in the analysis of partial differential equations , differential geometry and the calculus of variations . Perhaps the most well-known mathematical problem posed by a continuum mechanical system is the question of Navier-Stokes existence and smoothness . Prominent career mathematicians, rather than engineers, who have contributed to the mathematics of continuum mechanics include Clifford Truesdell , Walter Noll , Andrey Kolmogorov and George Batchelor .
An essential discipline for many fields in engineering is that of control engineering . The associated mathematical theory of this specialism is control theory , a branch of applied mathematics that builds on the mathematics of dynamical systems . Control theory has played a significant enabling role in modern technology, serving a foundational role in electrical , mechanical and aerospace engineering. Like continuum mechanics, control theory has also become a field of mathematical research in its own right, with mathematicians such as Aleksandr Lyapunov , Norbert Wiener , Lev Pontryagin and Fields Medallist Pierre-Louis Lions contributing to its foundations.
Scientific computing includes applied mathematics (especially numerical analysis [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 21 ] ), computing science (especially high-performance computing [ 22 ] [ 23 ] ), and mathematical modelling in a scientific discipline.
Computer science relies on logic , algebra , discrete mathematics such as graph theory , [ 24 ] [ 25 ] and combinatorics .
Operations research [ 26 ] and management science are often taught in faculties of engineering, business, and public policy.
Applied mathematics has substantial overlap with the discipline of statistics. Statistical theorists study and improve statistical procedures with mathematics, and statistical research often raises mathematical questions. Statistical theory relies on probability and decision theory , and makes extensive use of scientific computing, analysis, and optimization ; for the design of experiments , statisticians use algebra and combinatorial design . Applied mathematicians and statisticians often work in a department of mathematical sciences (particularly at colleges and small universities).
Actuarial science applies probability, statistics, and economic theory to assess risk in insurance, finance and other industries and professions. [ 27 ]
Mathematical economics is the application of mathematical methods to represent theories and analyze problems in economics. [ 28 ] [ 29 ] [ 30 ] The applied methods usually refer to nontrivial mathematical techniques or approaches. Mathematical economics is based on statistics, probability, mathematical programming (as well as other computational methods ), operations research, game theory, and some methods from mathematical analysis. In this regard, it resembles (but is distinct from) financial mathematics , another part of applied mathematics. [ 31 ]
According to the Mathematics Subject Classification (MSC), mathematical economics falls into the Applied mathematics/other classification of category 91 ("Game theory, economics, social and behavioral sciences"), with MSC2010 classifications for ' Game theory ' at codes 91Axx and for 'Mathematical economics' at codes 91Bxx.
The line between applied mathematics and specific areas of application is often blurred. Many universities teach mathematical and statistical courses outside the respective departments, in departments and areas including business , engineering , physics , chemistry , psychology , biology , computer science , scientific computation , information theory , and mathematical physics .
The Society for Industrial and Applied Mathematics is an international applied mathematics organization. As of 2024, the society has 14,000 individual members. [ 32 ] The American Mathematical Society has its Applied Mathematics Group. [ 33 ] | https://en.wikipedia.org/wiki/Techno-mathematics
Techno-progressivism , or tech-progressivism , [ 1 ] is a stance of active support for the convergence of technological change and social change . Techno-progressives argue that technological developments can be profoundly empowering and emancipatory when they are regulated by legitimate democratic and accountable authorities to ensure that their costs , risks and benefits are all fairly shared by the actual stakeholders to those developments. [ 2 ] [ 3 ] [ self-published source? ] One of the first mentions of techno-progressivism appeared within extropian jargon in 1999, defining it as the removal of "all political, cultural, biological, and psychological limits to self-actualization and self-realization". [ 4 ]
Techno-progressivism maintains that accounts of progress should focus on scientific and technical dimensions, as well as ethical and social ones. For most techno-progressive perspectives, then, the growth of scientific knowledge or the accumulation of technological powers will not represent the achievement of proper progress unless and until it is accompanied by a just distribution of the costs, risks, and benefits of these new knowledges and capacities. At the same time, for most techno-progressive critics and advocates , the achievement of better democracy , greater fairness , less violence, and a wider rights culture are all desirable, but inadequate in themselves to confront the quandaries of contemporary technological societies unless and until they are accompanied by progress in science and technology to support and implement these values. [ 3 ] [ self-published source? ]
Strong techno-progressive positions include support for the civil right of a person to either maintain or modify his or her own mind and body , on his or her own terms, through informed, consensual recourse to, or refusal of, available therapeutic or enabling biomedical technology . [ 5 ] [ better source needed ]
During the November 2014 Transvision Conference , many of the leading transhumanist organizations signed the Technoprogressive Declaration. The Declaration stated the values of technoprogressivism. [ 6 ]
Technocritic Dale Carrico, who has used "techno-progressive" as a shorthand to describe progressive politics that emphasize technoscientific issues, [ 16 ] has expressed concern that some " transhumanists " are using the term to describe themselves, with the consequence of possibly misleading the public regarding their actual cultural, social and political views, which may or may not be compatible with critical techno-progressivism. [ 17 ] [ self-published source? ] | https://en.wikipedia.org/wiki/Techno-progressivism |
Technocracy is a form of government in which decision-makers appoint knowledge experts in specific domains to provide them with advice and guidance in various areas of their policy-making responsibilities. Technocracy follows largely in the tradition of other meritocratic theories and works best when the state exerts strong control over social and economic issues.
This system is sometimes presented as explicitly contrasting with representative democracy , the notion that elected representatives should be the primary decision-makers in government, [ 1 ] despite the fact that technocracy does not imply eliminating elected representatives. In a technocracy, decision-makers rely on individuals and institutions possessing specialized knowledge and data-based evidence rather than advisors with political affiliations or loyalty.
The term technocracy was initially used to signify the application of the scientific method to solving social problems. In its most extreme form, technocracy is an entire government running as a technical or engineering problem and is mostly hypothetical . In more practical use, technocracy is any portion of a bureaucracy run by technologists . A government in which elected officials appoint experts and professionals to administer individual government functions, and recommend legislation, can be considered technocratic. [ 2 ] [ 3 ] Some uses of the word refer to a form of meritocracy , where the most suitable are placed in charge, ostensibly without the influence of special interest groups. [ 4 ] Critics have suggested that a "technocratic divide" challenges more participatory models of democracy, describing these divides as "efficacy gaps that persist between governing bodies employing technocratic principles and members of the general public aiming to contribute to government decision making". [ 5 ]
The term technocracy is derived from the Greek words τέχνη, tekhne meaning skill and κράτος, kratos meaning power , as in governance , or rule . William Henry Smyth, a California engineer, is usually credited with inventing the word technocracy in 1919 to describe "the rule of the people made effective through the agency of their servants, the scientists and engineers", although the word had been used before on several occasions. [ 4 ] [ 6 ] [ 7 ] Smyth used the term Technocracy in his 1919 article "'Technocracy'—Ways and Means to Gain Industrial Democracy" in the journal Industrial Management (57). [ 8 ] Smyth's usage referred to Industrial democracy : a movement to integrate workers into decision-making through existing firms or revolution. [ 8 ]
In the 1930s, through the influence of Howard Scott and the technocracy movement he founded, the term technocracy came to mean 'government by technical decision making', using an energy metric of value. It was based on organising and directing economic activity within a geographical region nearly self-sufficient in resources and with a highly developed technology. Scott proposed that money be replaced by energy certificates denominated in units such as ergs or joules , equivalent in total amount to an appropriate national net energy budget, and then distributed equally among the North American population, according to resource availability. [ 9 ] [ 1 ]
The derivative term technocrat is found in common usage. The word technocrat can refer to someone exercising governmental authority because of their knowledge, [ 10 ] "a member of a powerful technical elite", or "someone who advocates the supremacy of technical experts". [ 11 ] [ 2 ] [ 3 ] McDonnell and Valbruzzi define a prime minister or minister as a technocrat if "at the time of their appointment to government, they: have never held public office under the banner of a political party; are not a formal member of any party; and are said to possess recognized non-party political expertise which is directly relevant to the role occupied in government". [ 12 ] In Russia, the President of Russia often nominates individuals with technical expertise and no prior political experience as core ministers; such appointees are termed "technocrats". [ 13 ] [ 14 ]
Before the term technocracy was coined, technocratic or quasi-technocratic ideas involving governance by technical experts were promoted by various individuals, most notably early socialist theorists such as Henri de Saint-Simon . This was expressed by the belief in state ownership over the economy, with the state's function being transformed from pure philosophical rule over men into a scientific administration of things and a direction of production processes under scientific management. [ 15 ] According to Daniel Bell :
"St.-Simon's vision of industrial society, a vision of pure technocracy, was a system of planning and rational order in which society would specify its needs and organize the factors of production to achieve them." [ 16 ]
Citing the ideas of St.-Simon, Bell concludes that the "administration of things" by rational judgment is the hallmark of technocracy. [ 16 ]
Alexander Bogdanov , a Russian scientist and social theorist, also anticipated a conception of technocratic process. Both Bogdanov's fiction and his political writings, which were highly influential, suggest that he was concerned that a coming revolution against capitalism could lead to a technocratic society. [ 17 ] [ 18 ] : 114
From 1913 until 1922, Bogdanov immersed himself in writing a lengthy philosophical treatise of original ideas, Tectology: Universal Organization Science . Tectology anticipated many basic ideas of systems analysis , later explored by cybernetics . In Tectology , Bogdanov proposed unifying all social, biological, and physical sciences by considering them as systems of relationships and seeking organizational principles that underlie all systems.
Arguably, the Platonic idea of philosopher-kings represents a sort of technocracy in which the state is run by those with specialist knowledge, in this case, knowledge of the Good rather than scientific knowledge. [ citation needed ] The Platonic claim is that those who best understand goodness should be empowered to lead the state, as they would lead it toward the path of happiness. Whilst knowledge of the Good differs from knowledge of science, rulers are here appointed based on a certain grasp of technical skill rather than democratic mandate.
Technocrats are individuals with technical training and occupations who perceive many important societal problems as being solvable with the applied use of technology and related applications. The administrative scientist Gunnar K. A. Njalsson theorizes that technocrats are primarily driven by their cognitive "problem-solution mindsets" and only in part by particular occupational group interests. Their activities and the increasing success of their ideas are thought to be a crucial factor behind the modern spread of technology and the largely ideological concept of the " information society ". Technocrats may be distinguished from " econocrats " and " bureaucrats " whose problem-solution mindsets differ from those of the technocrats. [ 19 ]
The former government of the Soviet Union has been referred to as a technocracy. [ 20 ] Soviet leaders like Leonid Brezhnev often had a technical background. In 1986, 89% of Politburo members were engineers. [ 20 ]
Many previous leaders of the Chinese Communist Party had backgrounds in engineering and practical sciences. According to surveys of municipal governments of cities with a population of 1 million or more in China , it has been found that over 80% of government personnel had a technical education. [ 21 ] [ 22 ] Under the five-year plans of the People's Republic of China, projects such as the National Trunk Highway System , the China high-speed rail system , and the Three Gorges Dam have been completed. [ 23 ] [ page needed ] During China's 20th National Congress , a class of technocrats in finance and economics were replaced in favor of high-expertise technocrats. [ 24 ] [ 25 ]
In 2013, a European Union library briefing on its legislative structure referred to the Commission as a "technocratic authority", holding a "legislative monopoly" over the EU lawmaking process. [ 26 ] The briefing suggests that this system, which elevates the European Parliament to a vetoing and amending body, was "originally rooted in the mistrust of the political process in post-war Europe". This system is unusual since the Commission's sole right of legislative initiative is a power usually associated with Parliaments.
Several governments in European parliamentary democracies have been labelled 'technocratic' based on the participation of unelected experts ('technocrats') in prominent positions. [ 2 ] Since the 1990s, Italy has had several such governments (in Italian, governo tecnico ) in times of economic or political crisis, [ 27 ] [ 28 ] including the formation in which economist Mario Monti presided over a cabinet of unelected professionals . [ 29 ] [ 30 ] The term 'technocratic' has been applied to governments where a cabinet of elected professional politicians is led by an unelected prime minister, such as in the cases of the 2011–2012 Greek government led by economist Lucas Papademos and the Czech Republic's 2009–2010 caretaker government presided over by the state's chief statistician, Jan Fischer . [ 3 ] [ 31 ] In December 2013, in the framework of the national dialogue facilitated by the Tunisian National Dialogue Quartet , political parties in Tunisia agreed to install a technocratic government led by Mehdi Jomaa . [ 32 ]
The Syrian Salvation Government , the predecessor to the Syrian transitional government , [ 33 ] was characterized by observers as an authoritarian technocracy. [ 34 ] [ 35 ] [ 36 ] [ 37 ] : 34
The article "Technocrats: Minds Like Machines" [ 3 ] states that Singapore is perhaps the best advertisement for technocracy: the political and expert components of the governing system there seem to have merged completely. This was underlined in a 1993 article in Wired by Sandy Sandfort, [ 38 ] where he describes the information technology system of the island highly effective even during the early days.
Following Samuel Haber, [ 39 ] Donald Stabile argues that engineers were faced with a conflict between physical efficiency and cost efficiency in the new corporate capitalist enterprises of the late nineteenth-century United States . Because of their perceptions of market demand, the profit-conscious, non-technical managers of firms where the engineers work often impose limits on the projects that engineers desire to undertake.
The prices of all inputs vary with market forces, thereby upsetting the engineer's careful calculations. As a result, the engineer loses control over projects and must continually revise plans. To maintain control over projects, the engineer must attempt to control these outside variables and transform them into constant factors. [ 40 ]
The American economist and sociologist Thorstein Veblen was an early advocate of technocracy and was involved in the Technical Alliance , as were Howard Scott and M. King Hubbert (the latter of whom later developed the theory of peak oil ). Veblen believed technological developments would eventually lead to a socialistic reorganization of economic affairs. Veblen saw socialism as one intermediate phase in an ongoing evolutionary process in society that would be brought about by the natural decay of the business enterprise system and the rise of the engineers. [ 41 ] Daniel Bell sees an affinity between Veblen and the Technocracy movement . [ 42 ]
In 1932, Howard Scott and Marion King Hubbert founded Technocracy Incorporated and proposed that money be replaced by energy certificates. The group argued that apolitical, rational engineers should be vested with the authority to guide an economy into a thermodynamically balanced load of production and consumption, thereby doing away with unemployment and debt . [ 1 ]
The technocracy movement was briefly popular in the US in the early 1930s during the Great Depression . By the mid-1930s, interest in the movement was declining. Some historians have attributed the decline to the rise of Roosevelt's New Deal . [ 43 ] [ 44 ]
Historian William E. Akin rejects this conclusion. Instead, Akin argues that the movement declined in the mid-1930s due to the technocrats' failure to devise a 'viable political theory for achieving change'. [ 45 ] Akin postulates that many technocrats remained vocal, dissatisfied, and often sympathetic to anti-New Deal third-party efforts. [ 46 ]
Critics have suggested that a "technocratic divide" exists between a governing body controlled to varying extents by technocrats and members of the general public. [ 5 ] Technocratic divides are "efficacy gaps that persist between governing bodies employing technocratic principles and members of the general public aiming to contribute to government decision making." [ 5 ] Technocracy privileges the opinions and viewpoints of technical experts, exalting them into a kind of aristocracy while marginalizing the opinions and viewpoints of the general public. [ 47 ] [ 48 ]
As major multinational technology corporations (e.g., FAANG ) grow their market capitalizations and customer counts, critiques of technocratic government in the 21st century see its manifestation in American politics not as an "authoritarian nightmare of oppression and violence" but rather as an éminence grise : a democratic cabal directed by Mark Zuckerberg and the entire cohort of " Big Tech " executives. [ 49 ] [ 50 ] In his 1982 Technology and Culture journal article, "The Technocratic Image and the Theory of Technocracy", John G. Gunnell writes: "...politics is increasingly subject to the influence of technological change", with specific reference to the advent of The Long Boom and the genesis of the Internet , following the 1973–1975 recession . [ 51 ] [ 52 ] Gunnell goes on to outline three levels of analysis that delineate technology's political influence:
In each of the three analytical levels, Gunnell foretells technology's infiltration of political processes and suggests that the entanglement of the two (i.e. technology and politics) will inevitably produce power concentrations around those with advanced technological training, namely the technocrats. [ 51 ] Forty years after the publication of Gunnell's writings, technology and government have become, for better or for worse, increasingly intertwined. [ 54 ] [ 55 ] [ 56 ] Facebook can be considered a technocratic microcosm, a "technocratic nation-state" with a cyberspatial population that surpasses any terrestrial nation. [ 57 ] In a broader sense, critics fear that the rise of social media networks (e.g. Twitter , YouTube , Instagram , Pinterest ), coupled with the "decline in mainstream engagement", exposes the "networked young citizen" to inconspicuous coercion and indoctrination by algorithmic mechanisms, and, less insidiously, to the persuasion of particular candidates based predominantly on "Social Media engagement". [ 58 ] [ 59 ] [ 60 ]
In a 2022 article published in Boston Review , political scientist Matthew Cole highlights two problems with technocracy: that it creates "unjust concentrations of power" and that the concept itself is poorly defined. [ 61 ] With respect to the first point, Cole argues that technocracy excludes citizens from policy-making processes while advantaging elites. With respect to the second, he argues that the value of expertise is overestimated in technocratic systems, and points to an alternative concept of "smart democracy" which enlists the knowledge of ordinary citizens. | https://en.wikipedia.org/wiki/Technocracy |
Technocriticism is a branch of critical theory devoted to the study of technological change .
Technocriticism treats technological transformation as historically specific changes in personal and social practices of research , invention , regulation , distribution , promotion , appropriation , use, and discourse , rather than as an autonomous or socially indifferent accumulation of useful inventions, or as an uncritical narrative of linear " progress ", " development " or " innovation ".
Technocriticism studies these personal and social practices in their changing practical and cultural significance. It documents and analyzes both their private and public uses, and often devotes special attention to the relations among these different uses and dimensions. Recurring themes in technocritical discourse include the deconstruction of essentialist concepts such as " health ", " human ", " nature " or " norm ".
Technocritical theory can be either "descriptive" or "prescriptive" in tone. Descriptive forms of technocriticism include some scholarship in the history of technology , science and technology studies , cyberculture studies and philosophy of technology . More prescriptive forms of technocriticism can be found in the various branches of technoethics , for example, media criticism , infoethics , bioethics , neuroethics , roboethics , nanoethics , existential risk assessment and some versions of environmental ethics and environmental design theory.
Figures engaged in technocritical scholarship and theory include Donna Haraway and Bruno Latour (who work in the closely related field of science studies ), N. Katherine Hayles (who works in the field of Literature and Science ), Phil Agre and Mark Poster (the latter of whom works in intellectual history ), Marshall McLuhan and Friedrich Kittler (who work in the closely related field of media studies ), Susan Squier and Richard Doyle (who work in the closely related field of medical sociology ), and Hannah Arendt , Walter Benjamin , Martin Heidegger , and Michel Foucault (who sometimes wrote about the philosophy of technology ). Technocriticism can be juxtaposed with a number of other innovative interdisciplinary areas of scholarship which have surfaced in recent years such as technoscience and technoethics .
Technological applications of superconductivity include:
The biggest application for superconductivity is in producing the large-volume, stable, and high-intensity magnetic fields required for magnetic resonance imaging (MRI) and nuclear magnetic resonance (NMR). This represents a multi-billion-US$ market for companies such as Oxford Instruments and Siemens . The magnets typically use low-temperature superconductors (LTS) because high-temperature superconductors are not yet cheap enough to cost-effectively deliver the high, stable, and large-volume fields required, notwithstanding the need to cool LTS instruments to liquid helium temperatures. Superconductors are also used in high field scientific magnets.
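The connection between magnet field strength and NMR/MRI operating frequency is the Larmor relation f = γB/2π. The following minimal sketch computes proton resonance frequencies for some representative superconducting magnet strengths; the field values chosen here are illustrative and not drawn from the text:

```python
# Minimal sketch: proton Larmor frequencies for typical superconducting
# magnet field strengths used in MRI and NMR.
GAMMA_BAR_PROTON_MHZ_PER_T = 42.577  # proton gyromagnetic ratio / 2*pi

# Illustrative field strengths (tesla): clinical MRI (1.5 T, 3 T) and
# a high-field NMR research magnet (23.5 T, a "1 GHz" spectrometer).
for b_field in (1.5, 3.0, 23.5):
    freq_mhz = GAMMA_BAR_PROTON_MHZ_PER_T * b_field
    print(f"B = {b_field:5.1f} T  ->  proton resonance ~ {freq_mhz:7.1f} MHz")
```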
Particle accelerators such as the Large Hadron Collider can include many high-field electromagnets requiring large quantities of LTS. Constructing the LHC magnets required more than 28 percent of the world's niobium-titanium wire production for five years, with large quantities of NbTi also used in the magnets for the LHC's huge experiment detectors. [ 2 ]
Conventional fusion machines (JET, ST-40, NSTX-U and MAST) use blocks of copper. This limits their fields to 1–3 tesla. Several superconducting fusion machines are planned for the 2024–2026 timeframe. These include ITER , ARC and the next version of ST-40 . The addition of high-temperature superconductors should yield an order-of-magnitude improvement in fields (10–13 tesla) for a new generation of tokamaks. [ 3 ]
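To see roughly why the jump from 1–3 T to 10–13 T matters, note that magnetic pressure scales as B²/2μ₀ and, at fixed plasma beta, tokamak fusion power density scales roughly as B⁴. The sketch below is a back-of-the-envelope comparison under those standard scaling assumptions, not a statement about any particular machine:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def magnetic_pressure_mpa(b_tesla: float) -> float:
    """Magnetic pressure B^2 / (2*mu_0), converted to megapascals."""
    return b_tesla**2 / (2 * MU_0) / 1e6

for b in (3.0, 13.0):
    print(f"B = {b:4.1f} T: magnetic pressure ~ {magnetic_pressure_mpa(b):6.1f} MPa")

# At fixed plasma beta, fusion power density scales roughly as B^4, so a
# ~4x field increase gives a ~(13/3)^4 ~ 350x gain in this figure of merit.
print(f"relative power-density figure of merit: {(13 / 3)**4:.0f}x")
```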
The commercial applications so far for high-temperature superconductors (HTS) have been limited by other properties of the materials discovered thus far. HTS require only liquid nitrogen , not liquid helium , to cool to superconducting temperatures. However, currently known high-temperature superconductors are brittle ceramics that are expensive to manufacture and not easily formed into wires or other useful shapes. [ 4 ] Therefore, the applications for HTS have been where it has some other intrinsic advantage, e.g. in:
HTS has application in scientific and industrial magnets, including use in NMR and MRI systems. Commercial systems are now available in each category. [ 5 ]
One intrinsic attribute of HTS is that it can withstand much higher magnetic fields than LTS, so HTS conductors at liquid helium temperatures are being explored for very high-field inserts inside LTS magnets.
Promising future industrial and commercial HTS applications include induction heaters , transformers , fault current limiters , power storage , motors and generators , fusion reactors (see ITER ) and magnetic levitation devices.
Early applications will be those where the benefit of smaller size, lower weight or the ability to rapidly switch current (fault current limiters) outweighs the added cost. In the longer term, as conductor prices fall, HTS systems should be competitive in a much wider range of applications on energy-efficiency grounds alone. (For a relatively technical and US-centric view of the state of play of HTS technology in power systems and the development status of Generation 2 conductor, see Superconductivity for Electric Systems 2008 US DOE Annual Peer Review .)
The Holbrook Superconductor Project , also known as the LIPA project, was a project to design and build the world's first production superconducting transmission power cable. The cable was commissioned in late June 2008 by the Long Island Power Authority (LIPA) and was in operation for two years. The suburban Long Island electrical substation is fed by a 2,000-foot (600 m) underground cable system which consists of about 99 miles (159 km) of high-temperature superconductor wire manufactured by American Superconductor , chilled to −371 °F (−223.9 °C; 49.3 K) with liquid nitrogen , [ dubious – discuss ] greatly reducing the cost required to deliver additional power. [ 6 ] In addition, the installation of the cable bypassed strict regulations for overhead power lines, and offered a solution to the public's concerns [ which? ] about overhead power lines. [ 7 ] [ failed verification ]
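The quoted operating temperature can be cross-checked with a simple Fahrenheit-to-kelvin conversion; the sketch below reproduces the paragraph's figures and compares them with the boiling point of liquid nitrogen (about 77 K at atmospheric pressure), the discrepancy flagged by the dubious tag above:

```python
def fahrenheit_to_kelvin(f: float) -> float:
    """Convert degrees Fahrenheit to kelvin."""
    return (f + 459.67) * 5.0 / 9.0

t_quoted_f = -371.0
t_quoted_k = fahrenheit_to_kelvin(t_quoted_f)
print(f"{t_quoted_f} degF = {t_quoted_k:.1f} K "
      f"({t_quoted_k - 273.15:.1f} degC)")

# Liquid nitrogen boils at about 77.4 K at 1 atm, well above the ~49 K
# quoted in the text, hence the "dubious" tag on that figure.
print(f"LN2 boiling point: 77.4 K ~ {77.4 * 9 / 5 - 459.67:.1f} degF")
```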
The Tres Amigas Project was proposed in 2009 as an electrical HVDC interconnector between the Eastern Interconnection , the Western Interconnection and Texas Interconnection . [ 8 ] It was proposed to be a multi-mile, triangular pathway of superconducting electric cables, capable of transferring five gigawatts of power between the three U.S. power grids. The project lapsed in 2015 when the Eastern Interconnect withdrew from the project. Construction was never begun. [ 9 ]
Essen, Germany, has the world's longest superconducting power cable in production, at 1 kilometer. It is a 10 kV, liquid-nitrogen-cooled cable. The cable is smaller than an equivalent 110 kV conventional cable, and the lower voltage has the additional benefit of smaller transformers. [ 10 ] [ 11 ]
In 2020, an aluminium plant in Voerde , Germany, announced plans to use superconductors for cables carrying 200 kA, citing lower volume and material demand as advantages. [ 12 ] [ 13 ]
Magnesium diboride is a much cheaper superconductor than either BSCCO or YBCO in terms of cost per current-carrying capacity per length (cost/(kA·m)), in the same ballpark as LTS, and on this basis many manufactured wires are already cheaper than copper. Furthermore, MgB 2 superconducts at temperatures higher than LTS (its critical temperature is 39 K, compared with less than 10 K for NbTi and 18.3 K for Nb 3 Sn), introducing the possibility of using it at 10–20 K in cryogen-free magnets or perhaps eventually in liquid hydrogen. [ citation needed ] However MgB 2 is limited in the magnetic field it can tolerate at these higher temperatures, so further research is required to demonstrate its competitiveness in higher-field applications.
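The cost/(kA·m) figure of merit simply divides a conductor's price per metre by the current (in kiloamperes) it can carry. The sketch below shows how the comparison is made; the prices and critical currents in it are hypothetical placeholders, since the text gives no numbers:

```python
# Minimal sketch of the cost/(kA*m) figure of merit for superconducting wire.
# All prices and critical currents below are hypothetical placeholders; the
# article does not supply figures.
conductors = {
    # name: (price per metre in $, critical current in amperes)
    "MgB2 (hypothetical)": (2.0, 200.0),
    "NbTi (hypothetical)": (1.0, 250.0),
    "YBCO (hypothetical)": (20.0, 300.0),
}

for name, (price_per_m, i_c_amps) in conductors.items():
    cost_per_ka_m = price_per_m / (i_c_amps / 1000.0)  # $ per kA per metre
    print(f"{name:22s}: {cost_per_ka_m:6.1f} $/(kA*m)")
```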
Exposing superconducting materials to a brief magnetic field can trap the field for use in machines such as generators. In some applications they could replace traditional permanent magnets. [ 14 ] [ 15 ] [ 16 ] | https://en.wikipedia.org/wiki/Technological_applications_of_superconductivity |
Technological convergence is the tendency for technologies that were originally unrelated to become more closely integrated and even unified as they develop and advance. For example, watches , telephones , television , computers , and social media platforms began as separate and mostly unrelated technologies, but have converged in many ways into an interrelated telecommunication, media, and technology industry.
"Convergence is a deep integration of knowledge, tools, and all relevant activities of human activity for a common goal, to allow society to answer new questions to change the respective physical or social ecosystem. Such changes in the respective ecosystem open new trends, pathways, and opportunities in the following divergent phase of the process". [ 1 ] [ 2 ]
Siddhartha Menon defines convergence as integration and digitalization. Integration, here, is defined as "a process of transformation measured by the degree to which diverse media such as phone, data broadcast and information technology infrastructures are combined into a single seamless all-purpose network architecture platform". [ 3 ] Digitalization is not so much defined by its physical infrastructure as by the content or the medium. Jan van Dijk suggests that "digitalization means breaking down signals into bytes consisting of ones and zeros". [ 4 ] [ page needed ] [ 5 ]
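Van Dijk's "breaking down signals into bytes" can be illustrated with a toy digitization step: sampling a continuous waveform and quantizing each sample to an 8-bit value. The snippet below is a minimal sketch of that idea, not a description of any particular media system:

```python
import math

# Toy digitalization: sample a 1 Hz analog waveform and quantize each
# sample into an 8-bit byte (values 0-255), i.e. "ones and zeros".
SAMPLE_RATE = 8          # samples per second (deliberately tiny)
DURATION_S = 1.0

samples = []
for n in range(int(SAMPLE_RATE * DURATION_S)):
    t = n / SAMPLE_RATE
    analog = math.sin(2 * math.pi * t)      # continuous signal in [-1, 1]
    byte = round((analog + 1) / 2 * 255)    # quantize to 0..255
    samples.append(byte)

print(samples)                               # e.g. [128, 218, 255, ...]
print(" ".join(f"{b:08b}" for b in samples)) # the same bytes as bits
```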
Convergence is defined by Blackman (1998) as a trend in the evolution of technology services and industry structures. [ 6 ] Convergence is later defined more specifically as the coming together of telecommunications, computing and broadcasting into a single digital bit-stream. [ 7 ] [ 8 ]
Mueller argues against the claim that convergence is really a takeover of all forms of media by one technology: digital computers. [ 9 ] [ page needed ]
Some acronyms for converging scientific or technological fields include:
A 2010 citation analysis of patent data shows that biomedical devices are strongly connected to computing and mobile telecommunications, and that molecular bioengineering is strongly connected to several IT fields. [ 15 ] : 447
Bioconvergence is the integration of biology with engineering. [ 16 ] Possible areas of bioconvergence include: [ 16 ] [ 17 ]
Digital convergence is the inclination for various digital innovations and media to become more similar with time. It enables the convergence of access devices and content, as well as industry participants' operations and strategy. [ 18 ] In this way, technological convergence creates opportunities, particularly in product development and growth strategies for digital product companies. [ 18 ] The same can be said of individual content creators, such as vloggers on YouTube . The convergence in this example is demonstrated by the involvement of the Internet, home devices such as a smart television and camera, the YouTube application, and digital content. In this setup, there are the so-called "spokes", [ 19 ] the devices that connect to a central hub (such as a PC or smart TV). Here, the Internet serves as the intermediary, particularly through its interactive tools and social networking, to create unique mixes of products and services via horizontal integration. [ 18 ]
The above example highlights how digital convergence encompasses three phenomena:
Another example is the convergence of different types of digital content. According to Harry Strasser, former CTO of Siemens , "[digital convergence will substantially impact people's lifestyle and work style]". [ 21 ] [ verification needed ]
The functions of the cellphone change as technology converges. Because of technological advancement, a cellphone functions as more than just a phone: it can also contain an Internet connection, video players, MP3 players, gaming, and a camera. Their areas of use have increased over time, partly substituting for other devices.
A mobile convergence device is one that, if connected to a keyboard, monitor, and mouse, can run applications as a desktop computer would. [ 22 ] [ 23 ] [ 24 ] Convergent operating systems include the Linux operating systems Ubuntu Touch , [ 25 ] Plasma Mobile [ 26 ] and PureOS . [ 27 ]
Convergence can also refer to being able to run the same app across different devices and being able to develop apps for different devices (such as smartphones, TVs and desktop computers) at once, with the same code base. [ 28 ] [ 26 ] This can be done via Linux applications that adapt to the device they are being used on [ 26 ] [ 29 ] [ 30 ] (including native apps designed for such via frameworks like Kirigami) [ 31 ] [ 32 ] or by the use of multi-platform frameworks like the Quasar framework that use tools such as Apache Cordova , Electron and Capacitor , which can increase the userbase, the pace and ease of development and the number of reached platforms while decreasing development costs. [ 33 ] [ 34 ] [ 35 ]
The role of the Internet has changed from its original use as a communication tool to easier and faster access to information and services, mainly through a broadband connection. The television, radio, and newspapers were the world's media for accessing news and entertainment; now, all three media have converged into one, and people all over the world can read and hear news and other information on the Internet. The convergence of the Internet and conventional TV became popular in the 2010s, through Smart TV , also sometimes referred to as "Connected TV" or "Hybrid TV", (not to be confused with IPTV , Internet TV , or with Web TV ). Smart TV is used to describe the current trend of integration of the Internet and Web 2.0 features into modern television sets and set-top boxes , as well as the technological convergence between computers and these television sets or set-top boxes. These new devices most often also have a much higher focus on online interactive media , Internet TV , over-the-top content , as well as on-demand streaming media , and less focus on traditional broadcast media like previous generations of television sets and set-top boxes always have had. [ 36 ]
The integration of social movements in cyberspace is one of the potential strategies that social movements can use in the age of media convergence. Because of the neutrality of the Internet and the end-to-end design , the power structure of the Internet was designed to avoid discrimination between applications. Mexico's Zapatista campaign for land rights was one of the most influential cases of the information age; Manuel Castells defines the Zapatistas as "the first informational guerrilla movement". [ 37 ] The Zapatista uprising had been marginalized by the popular press. The Zapatistas were able to construct a grassroots, decentralized social movement by using the Internet. The Zapatista effect, observed by Cleaver, [ 38 ] continues to organize social movements on a global scale. A sophisticated webmetric analysis, which maps the links between different websites and seeks to identify important nodal points in a network, demonstrates that the Zapatista cause binds together hundreds of global NGOs. [ 39 ] The majority of the social movements organized by the Zapatistas target their campaigns against global neoliberalism. [ 40 ] A successful social movement needs not only online support but also protest on the street. Papic wrote, "Social Media Alone Do Not Instigate Revolutions", discussing how the use of social media in social movements needs good organization both online and offline. [ 41 ]
Media technological convergence is the tendency that as technology changes , different technological systems sometimes evolve toward performing similar tasks. It is the interlinking of computing and other information technologies, media content, media companies, and communication networks that have arisen as the result of the evolution and popularization of the Internet as well as the activities, products, and services that have emerged in the digital media space.
Generally, media convergence refers to the merging of both old and new media and can be seen as a product, a system, or a process. Jenkins states that convergence is "the flow of content across multiple media platforms, the cooperation between multiple media industries, and the migratory behaviour of media audiences who would go almost anywhere in search of the kinds of entertainment experiences they wanted". [ 42 ] According to Jenkins, there are five areas of convergence: technological, economic, social or organic, cultural, and global. [ 43 ] Media convergence is not just a technological shift or a technological process; it also includes shifts within the industrial, cultural, and social paradigms that encourage the consumer to seek out new information. Convergence, simply put, is how individual consumers interact with others on a social level and use various media platforms to create new experiences, new forms of media and content that connect us socially, and not just to other consumers, but to the corporate producers of media in ways that have not been as readily accessible in the past. However, Lugmayr and Dal Zotto argued that media convergence takes place on the technology, content, consumer, business model, and management levels. [ 44 ] They argue that media convergence is a matter of evolution and can be described through the triadic phenomena of convergence, divergence, and coexistence. Today's digital media ecosystems coexist: for example, mobile app stores provide vendor lock-in into particular ecosystems; some technology platforms are converging under one technology, due, for example, to the use of common communication protocols, as in digital TV; and other media are diverging, as media content offerings increasingly specialize, providing space for niche media. [ 45 ]
Closely linked to the multilevel process of media convergence are also several developments in different areas of the media and communication sector, which are also summarized under the term of media deconvergence . Many experts [ who? ] view this as simply being the tip of the iceberg, as all facets of institutional activity and social life such as business, government, art, journalism, health, and education, are increasingly being carried out in these digital media spaces across a growing network of information and communication technology devices. Also included in this topic is the basis of computer networks, wherein many different operating systems are able to communicate via different protocols .
Convergent services, such as VoIP , IPTV , Smart TV , and others, tend to replace the older technologies and thus can disrupt markets . IP-based convergence is inevitable and will result in new services and new demand in the market. [ 46 ] When old technologies converge into the publicly owned commons, IP-based services become access-independent or less access-dependent, whereas the old services are access-dependent. [ 47 ]
Advances in technology bring the ability for technological convergence that Rheingold believes can alter the "social-side effects," in that "the virtual, social, and physical world are colliding, merging, and coordinating." [ 48 ] It was predicted in the late 1980s, [ 49 ] around the time that CD-ROM was becoming commonplace, that a digital revolution would take place, and that old media would be pushed to one side by new media . Broadcasting is increasingly being replaced by the Internet, enabling consumers all over the world the freedom to access their preferred media content more easily and at a more available rate than ever before.
However, when the dot-com bubble of the 1990s suddenly popped, that poured cold water over the talk of such a digital revolution. [ 50 ] In today's society, the idea of media convergence has once again emerged as a key point of reference as newer as well as established media companies attempt to visualize the future of the entertainment industry. If this revolutionary digital paradigm shift presumed that old media would be increasingly replaced by new media, the convergence paradigm that is currently emerging suggests that new and old media would interact in more complex ways than previously predicted. The paradigm shift that followed the digital revolution assumed that new media was going to change everything. When the dot-com market crashed, there was a tendency to imagine that nothing had changed. The real truth lay somewhere in between as there were so many aspects of the current media environment to take into consideration. Many industry leaders are increasingly reverting to media convergence as a way of making sense in an era of disorientating change. In that respect, media convergence in theory is essentially an old concept taking on a new meaning. Media convergence, in reality, is more than just a shift in technology. It alters relationships between industries, technologies, audiences, genres and markets. Media convergence changes the rationality media industries operate in, and the way that media consumers process news and entertainment. Media convergence is essentially a process and not an outcome, so no single black box controls the flow of media. With the proliferation of different media channels and increasing portability of new telecommunications and computing technologies, we have entered into an era where media constantly surrounds us. [ 51 ]
Media convergence requires that media companies rethink existing assumptions about media from the consumer's point of view, as these affect marketing and programming decisions. Media producers must respond to newly empowered consumers. Conversely, it would seem that hardware is instead diverging whilst media content is converging. Media has developed into brands that can offer content in a number of forms. Two examples of this are Star Wars and The Matrix . Both are films, but are also books, video games, cartoons, and action figures. Branding encourages expansion of one concept, rather than the creation of new ideas. [ 52 ] In contrast, hardware has diversified to accommodate media convergence. Hardware must be specific to each function. While most scholars argue that the flow of cross-media content is accelerating, [ 53 ] O'Donnell suggests that, especially between films and video games, the semblance of media convergence is misunderstood by people outside the media production industry. Media conglomerates continue to sell the same storyline in different media. For example, Batman appears in comics, films, anime, and games. However, the data to create the image of Batman in each medium is created individually by different teams of creators. The same character and the same visual effects appear repeatedly in different media because of the media industry's pursuit of synergy, making them as similar as possible. In addition, convergence does not happen when a game is produced for two different consoles. Nothing flows between the two consoles, because it is faster for the industry to create each game from scratch. [ 54 ]
One of the more interesting new media journalism forms is virtual reality. Reuters, a major international news service, has created and staffed a news "island" in the popular online virtual reality environment Second Life . Open to anyone, Second Life has emerged as a compelling 3D virtual reality for millions of citizens around the world who have created avatars (virtual representations of themselves) to populate and live in an altered state where personal flight is a reality, altered egos can flourish, and real money ( US$ 1,296,257 were spent during the 24 hours concluding at 10:19 a.m. eastern time January 7, 2008) can be made without ever setting foot into the real world. The Reuters Island in Second Life is a virtual version of the Reuters real-world news service but covering the domain of Second Life for the citizens of Second Life (numbering 11,807,742 residents as of January 5, 2008). [ 55 ]
Media convergence in the digital era means the changes that are taking place with older forms of media and media companies. Media convergence has two roles: the first is the technological merging of different media channels – for example, magazines, radio programs, TV shows, and movies are now available on the Internet through laptops, iPads, and smartphones. As discussed in Media Culture (by Campbell), convergence of technology is not new. It has been going on since the late 1920s. An example is RCA, the Radio Corporation of America, which purchased the Victor Talking Machine Company and introduced machines that could receive radio and play recorded music. Next came the TV, and radio lost some of its appeal as people started watching television, which has both talking and music as well as visuals. As technology advances, media convergence changes to keep up. The second definition of media convergence Campbell discusses is cross-platform consolidation by media companies. This usually involves consolidating various media holdings, such as cable, phone, television (over the air, satellite, cable) and Internet access under one corporate umbrella. This is not so that the consumer has more media choices; it is for the benefit of the company, to cut down on costs and maximize profits. [ 56 ] As stated in the article Convergence Culture and Media Work by Mark Deuze, "the convergence of production and consumption of media across companies, channels, genres, and technologies is an expression of the convergence of all aspects of everyday life: work and play, the local and the global, self and social identity." [ 57 ]
Communication networks were designed to carry different types of information independently. The older media, such as television and radio, are broadcasting networks with passive audiences. Convergence of telecommunication technology permits the manipulation of all forms of information, voice, data, and video. Telecommunication has changed from a world of scarcity to one of seemingly limitless capacity. Consequently, the possibility of audience interactivity morphs the passive audience into an engaged audience. [ 6 ] The historical roots of convergence can be traced back to the emergence of mobile telephony and the Internet , although the term properly applies only from the point in marketing history when fixed and mobile telephony began to be offered by operators as joined products. Fixed and mobile operators were, for most of the 1990s, independent companies. Even when the same organization marketed both products, these were sold and serviced independently.
In the 1990s, an implicit and often explicit assumption was that new media was going to replace the old media and Internet was going to replace broadcasting. In Nicholas Negroponte's Being Digital , Negroponte predicts the collapse of broadcast networks in favor of an era of narrow-casting. He also suggests that no government regulation can shatter the media conglomerate . "The monolithic empires of mass media are dissolving into an array of cottage industries... Media barons of today will be grasping to hold onto their centralized empires tomorrow.... The combined forces of technology and human nature will ultimately take a stronger hand in plurality than any laws Congress can invent." [ 58 ] The new media companies claimed that the old media would be absorbed fully and completely into the orbit of the emerging technologies. George Gilder dismisses such claims saying, [ clarification needed ] "The computer industry is converging with the television industry in the same sense that the automobile converged with the horse, the TV converged with the nickelodeon, the word-processing program converged with the typewriter, the CAD program converged with the drafting board, and digital desktop publishing converged with the Linotype machine and the letterpress." Gilder believes that computers had come not to transform mass culture but to destroy it.
Media companies put media convergence back on their agenda after the dot-com bubble burst. In 1994, Knight Ridder promulgated the concept of portable magazines, newspapers, and books: "Within news corporations it became increasingly obvious that an editorial model based on mere replication in the Internet of contents that had previously been written for print newspapers, radio, or television was no longer sufficient." [ 59 ] The rise of digital communication in the late 20th century has made it possible for media organizations (or individuals) to deliver text, audio, and video material over the same wired, wireless, or fiber-optic connections. At the same time, it inspired some media organizations to explore multimedia delivery of information. This digital convergence of news media, in particular, was called "Mediamorphosis" by researcher Roger Fidler in his 1997 book by that name. [ 60 ] Today, we are surrounded by a multi-level convergent media world where all modes of communication and information are continually reforming to adapt to the enduring demands of technologies, "changing the way we create, consume, learn and interact with each other". [ 61 ]
Henry Jenkins determines convergence culture to be the flow of content across multiple media platforms, the cooperation between multiple media industries, and the migratory behavior of media audiences who will go almost anywhere in search of the kinds of entertainment experiences they want. Convergence culture is an important factor in transmedia storytelling . Convergence culture introduces new stories and arguments from one form of media into many. Transmedia storytelling is defined by Jenkins as a process "where integral elements of a fiction get dispersed systematically across multiple delivery channels for the purpose of creating a unified and coordinated entertainment experience. Ideally, each medium makes its own unique contribution to the unfolding of the story". [ 62 ] For instance, The Matrix starts as a film , which is followed by two other instalments, but in a convergence culture it is not constrained to that form. It becomes a story not only told in the movies but in animated shorts , video games and comic books, three different media platforms. Online, a wiki is created to keep track of the story's expanding canon. Fan films, discussion forums, and social media pages also form, expanding The Matrix to different online platforms. Convergence culture took what started as a film and expanded it across almost every type of media. [ 63 ] In another example, "Bert is Evil" images of Bert alongside Bin Laden appeared in CNN coverage of anti-American protests following September 11; the association of Bert with Bin Laden links back to Ignacio's Photoshop project, made for fun. [ 64 ]
Convergence culture is a part of participatory culture . Because average people can now access their interests on many types of media, they can also have more of a say. Fans and consumers are able to participate in the creation and circulation of new content. Some companies take advantage of this and search for feedback from their customers through social media and sharing sites such as YouTube . Besides marketing and entertainment, convergence culture has also affected the way we interact with news and information. We can access news on multiple levels of media, from the radio, TV, and newspapers to the Internet. The Internet allows more people to be able to report the news through independent broadcasts and therefore allows a multitude of perspectives to be put forward and accessed by people in many different areas. Convergence allows news to be gathered on a much larger scale. For instance, photographs were taken of torture at Abu Ghraib . These photos were shared and eventually posted on the Internet. This led to the breaking of a news story in newspapers, on TV, and on the Internet. [ 63 ]
Media scholar Henry Jenkins has described the media convergence with participatory culture as:
...a "catalyst" for amateur digital film-making and what this case study suggests about the future directions popular culture may take. Star Wars fan films represent the intersection of two significant cultural trends—the corporate movement towards media convergence and the unleashing of significant new tools, which enable the grassroots archiving, annotation, appropriation, and recirculation of media content. These fan films build on long-standing practices of the fan community but they also reflect the influence of this changed technological environment that has dramatically lowered the costs of film production and distribution. [ 65 ]
Some media observers expect that we will eventually access all media content through one device, or "black box". [ 66 ] As such, media business practice has been to identify the next "black box" to invest in and provide media for. This has caused a number of problems. Firstly, as "black boxes" are invented and abandoned, the individual is left with numerous devices that can perform the same task, rather than one dedicated to each task. For example, one may own both a computer and a video games console and subsequently own two DVD players. This is contrary to the streamlined goal of the "black box" theory, and instead creates clutter. [ 67 ] Secondly, technological convergence tends to be experimental in nature. This has led to consumers owning technologies with additional functions that are harder, if not impractical, to use rather than one specific device. With a combined television–microwave unit, for instance, many people would only watch the TV for the duration of the meal's cooking time, or whilst in the kitchen, but would not use the microwave as the household TV. These examples show that in many cases technological convergence is unnecessary or unneeded.
Furthermore, although consumers primarily use a specialized media device for their needs, other "black box" devices that perform the same task can be used to suit their current situation. As a 2002 Cheskin Research report explained: "...Your email needs and expectations are different whether you're at home, work, school, commuting, the airport, etc., and these different devices are designed to suit your needs for accessing content depending on where you are – your situated context." Despite the creation of "black boxes" intended to perform all tasks, the trend is to use devices that can suit the consumer's physical position. [ 68 ] Due to the variable utility of portable technology, convergence occurs in high-end mobile devices. They incorporate multimedia services, GPS, Internet access, and mobile telephony into a single device, heralding the rise of what has been termed the "smartphone", a device designed to remove the need to carry multiple devices. Convergence of media occurs when multiple products come together to form one product with the advantages of all of them, also known as the black box. This idea of one technology, described by Henry Jenkins , has come to be seen as a fallacy because of the inability to actually put all technical pieces into one. For example, while people can have email and Internet on their phone, they still want full computers with Internet and email in addition. Mobile phones are a good example, in that they incorporate digital cameras , MP3 players, voice recorders , and other devices. For the consumer, it means more features in less space; for media conglomerates it means remaining competitive.
However, convergence has a downside. Particularly in initial forms, converged devices are frequently less functional and reliable than their component parts (e.g., a mobile phone's web browser may not render some web pages correctly, due to not supporting certain rendering methods, such as the iPhone browser not supporting Flash content). As the number of functions in a single device escalates, the ability of that device to serve its original function decreases. [ 61 ] As Rheingold asserts, technological convergence holds immense potential for the "improvement of life and liberty in some ways and (could) degrade it in others". [ 48 ] He believes the same technology has the potential to be "used as both a weapon of social control and a means of resistance". [ 48 ] Since technology has evolved in the past ten years or so, companies are beginning to converge technologies to create demand for new products. This includes phone companies integrating 3G and 4G on their phones. In the mid 20th century, television converged the technologies of movies and radio, and television is now being converged with the mobile phone industry and the Internet. Phone calls are also being made with the use of personal computers. Converging technologies combine multiple technologies into one. Newer mobile phones feature cameras, and can hold images, videos, music, and other media. Manufacturers now integrate more advanced features, such as video recording, GPS receivers, data storage, and security mechanisms into the traditional cellphone.
Telecommunications convergence or network convergence describes emerging telecommunications technologies and network architecture used to migrate multiple communications services into a single network. [ 69 ] Specifically, this involves the converging of previously distinct media such as telephony and data communications into common interfaces on single devices; most smartphones, for example, can make phone calls and search the web. [ citation needed ]
Combination services include those that integrate SMS with voice, such as voice SMS. Providers include Bubble Motion, Jott, Kirusa, and SpinVox. Several operators have launched services that combine SMS with mobile instant messaging (MIM) and presence. Text-to-landline services also exist, where subscribers can send text messages to any landline phone and are charged at standard rates. The text messages are converted into spoken language. This service has been popular in America, where fixed and mobile numbers are similar. Inbound SMS has been converging to enable reception of different formats (SMS, voice, MMS, etc.). In April 2008, O2 UK launched voice-enabled shortcodes, adding voice functionality to the five-digit codes already used for SMS. This type of convergence is helpful for media companies, broadcasters, enterprises, call centres and help desks who need to develop a consistent contact strategy with the consumer. Because SMS is very popular today, it became relevant to include text messaging as a contact possibility for consumers. To avoid having multiple numbers (one for voice calls, another one for SMS), a simple way is to merge the reception of both formats under one number. This means that a consumer can text or call one number and be sure that the message will be received. [ citation needed ]
"Mobile service provisions" refers not only to the ability to purchase mobile phone services, but the ability to wirelessly access everything: voice, Internet, audio, and video. Advancements in WiMAX and other leading edge technologies provide the ability to transfer information over a wireless link at a variety of speeds, distances, and non-line-of-sight conditions. [ citation needed ]
Multi-play is a marketing term describing the provision of different telecommunication services, such as Internet access , television, telephone, and mobile phone service, by organizations that traditionally only offered one or two of these services. Multi-play is a catch-all phrase; usually, the terms triple play (voice, video and data) or quadruple play (voice, video, data and wireless) are used to describe a more specific meaning. A dual play service is a marketing term for the provisioning of the two services: it can be high-speed Internet ( digital subscriber line ) and telephone service over a single broadband connection in the case of phone companies, or high-speed Internet ( cable modem ) and TV service over a single broadband connection in the case of cable TV companies. The convergence can also concern the underlying communication infrastructure . An example of this is a triple play service, where communication services are packaged allowing consumers to purchase TV, Internet, and telephony in one subscription. The broadband cable market is transforming as pay-TV providers move aggressively into what was once considered the telco space. Meanwhile, customer expectations have risen as consumer and business customers alike seek rich content, multi-use devices, networked products and converged services including on-demand video, digital TV, high speed Internet, VoIP, and wireless applications. It is uncharted territory for most broadband companies. [ citation needed ]
A quadruple play service combines the triple play service of broadband Internet access, television, and telephone with wireless service provisions. [ 70 ] A quadruple play service may be formed through either the co-ownership of a wireless carrier by a provider of triple play services, [ 71 ] [ 72 ] or the establishment of a mobile virtual network operator (MVNO) in partnership with an existing incumbent (such as Comcast 's Xfinity Mobile , which operates on the Verizon network)—a turnkey option that relieves the provider from needing to acquire or construct its own network. [ 73 ]
Early in the 21st century, home LAN convergence so rapidly integrated home routers , wireless access points , and DSL modems that users were hard put to identify the resulting box they used to connect their computers to their Internet service. A general term for such a combined device is a residential gateway . [ citation needed ]
The U.S. Federal Communications Commission (FCC) has not been able to decide how to regulate VoIP (Internet telephony) because the convergent technology is still growing and changing. In addition, given this growth, the FCC has been tentative in setting regulations on VoIP in order to promote competition in the telecommunication industry. [ 74 ] There is no clear line between telecommunication services and information services because of the growth of the new convergent media. Historically, telecommunication is subject to state regulation. The state of California was concerned that the increasing popularity of Internet telephony would eventually obliterate funding for the Universal Service Fund . [ 75 ] Some states attempt to assert their traditional role of common carrier oversight onto this new technology. [ 76 ] Meisel and Needles (2005) suggest that decisions by the FCC, federal courts, and state regulatory bodies on access line charges will directly impact the speed at which the Internet telephony market grows. [ 77 ] On one hand, the FCC is hesitant to regulate convergent technology because VoIP differs in its features from old telecommunications, and there is not yet a fixed model on which to build legislation. On the other hand, regulation is needed because services over the Internet might quickly replace telecommunication services, which would affect the entire economy.
Convergence has also raised several debates about classification of certain telecommunications services. As the lines between data transmission and voice and media transmission are eroded, regulators are faced with the task of how best to classify the converging segments of the telecommunication sector. Traditionally, telecommunication regulation has focused on the operation of physical infrastructure, networks, and access to networks. No content is regulated in telecommunication because the content is considered private. In contrast, film and television are regulated by content. The rating system regulates distribution to the audience. Self-regulation is promoted by the industry. Bogle senior persuaded the entire industry to pay a 0.1 percent levy on all advertising, and the money was used to give authority to the Advertising Standards Authority, which keeps the government from setting legislation for the media industry. [ 78 ]
The premise of regulating the new media, with its two-way communication, concerns the change from old media to new media. Each medium has different features and characteristics. First, the Internet, the new medium, manipulates all forms of information – voice, data, and video. Second, the old regulations on old media, such as radio and television, emphasized the scarcity of channels. The Internet, on the other hand, has seemingly limitless capacity, due to its end-to-end design. Third, two-way communication encourages interactivity between content producers and audiences.
"...Fundamental basis for classification, therefore, is to consider the need for regulation in terms of either market failure or in the public interests"(Blackman). [ 6 ] The Electronic Frontier Foundation , founded in 1990, is a non profit organization that defends free speech, privacy, innovation, and consumer rights. [ 79 ] The Digital Millennium Copyright Act regulates and protect the digital content producers and consumers. [ citation needed ]
Network neutrality is an issue. Wu and Lessig set out two reasons for network neutrality: firstly, by removing the risk of future discrimination, it incentivizes people to invest more in the development of broadband applications; secondly, it enables fair competition between applications without network bias. [ 80 ] The two reasons also coincide with the FCC's interest in stimulating investment and enhancing innovation in broadband technology and services. [ 81 ] Despite regulatory efforts at deregulation, privatization, and liberalization, the infrastructure barrier has been a negative factor in achieving effective competition. Kim et al. argue that IP dissociates the telephony application from the infrastructure and that Internet telephony is at the forefront of such dissociation. [ 82 ] The neutrality of the network is very important for fair competition. [ 83 ] [ page needed ] As former FCC Chairman Michael Copps put it: "From its inception, the Internet was designed, as those present during the course of its creating will tell you, to prevent government or a corporation or anyone else from controlling it. It was designed to defeat discrimination against users, ideas and technologies". [ 84 ] For these reasons, Shin concludes that regulators should regulate applications and infrastructure separately.
The layered model was first proposed by Solum and Chung, Sicker, and Nakahata. Sicker, Werbach, and Witt have supported using a layered model to regulate the telecommunications industry with the emergence of convergence services. Many researchers take different layered approaches, but they all agree that the emergence of convergent technology will create challenges and ambiguities for regulation. [ 46 ] The key point of the layered model is that it reflects the reality of network architecture and current business models. [ 85 ] [ page needed ] The layered model consists of:
Shin combines the layered model and network neutrality as the principle to regulate the convergent media industry. [ 46 ]
Medical applications of robotics have become increasingly prominent in the robotics literature. [ 86 ]
The use of robots in service sectors is much less than the use of robots in manufacturing. [ 86 ] | https://en.wikipedia.org/wiki/Technological_convergence |
Technological determinism is a reductionist theory that assumes that a society's technology progresses by following its own internal logic of efficiency , while determining the development of the social structure and cultural values . [ 1 ] The term is believed to have originated with Thorstein Veblen (1857–1929), an American sociologist and economist . The most radical technological determinist in the United States in the 20th century was most likely Clarence Ayres , who was a follower of Thorstein Veblen as well as John Dewey . William Ogburn was also known for his radical technological determinism and his theory of cultural lag .
The origins of technological determinism as a formal concept are often traced to Thorstein Veblen (1857–1929), an influential American sociologist and economist. Veblen, known for his work on social and economic issues, introduced ideas that portrayed technology as a powerful, autonomous force capable of shaping societal norms and structures. He argued that the development and use of machinery exerted an independent influence on human thought and behavior, notably asserting that "the machine throws out anthropomorphic habits of thought.” [ 2 ] [ 3 ] This notion laid the foundation for technological determinism by suggesting that technology inherently transforms society by reshaping patterns of thought and behavior .
During Veblen's time, rapid industrialization and advancements in technology were radically altering American society . Innovations in manufacturing and transportation , such as the assembly line and railroads , demonstrated technology’s potential to reshape economic and social structures. These changes helped popularize the idea that technology could independently drive societal evolution , creating the conditions for Veblen's ideas to resonate widely. [ 4 ]
Although Veblen is credited with coining the core ideas behind technological determinism, the influence of Karl Marx on these ideas is also significant. Marx argued that technology drives historical change by shaping the "material base" of society. For instance, he suggested that the railway in colonial India would challenge and erode the caste system by introducing new economic activities and altering social hierarchies. [ 5 ] [ 6 ] Later, Clarence Ayres , a 20th-century economist inspired by Veblen, expanded on these ideas by introducing the concept of "technological drag." According to Ayres, technology progresses as a dynamic, self-generating force, while traditional institutions often lag, resisting the transformative potential of technological change. Ayres’ theory further solidified technological determinism, emphasizing the inevitable clash between technological progress and social conservatism . [ 7 ] [ 8 ]
Technological determinism seeks to show technical developments, media, or technology as a whole, as the key mover in history and social change. [ 9 ] It is a theory subscribed to by "hyperglobalists" who claim that as a consequence of the wide availability of technology, accelerated globalization is inevitable. Therefore, technological development and innovation become the principal motor of social, economic or political change. [ 10 ]
Strict adherents to technological determinism do not believe the influence of technology differs based on how much a technology is or can be used. Instead of considering technology as part of a larger spectrum of human activity, technological determinism sees technology as the basis for all human activity.
Technological determinism has been summarized as 'The belief in technology as a key governing force in society ...' ( Merritt Roe Smith ) and 'The idea that technological development determines social change ...' (Bruce Bimber). On this view, technology changes the way people think and how they interact with others, a position that can be described as '...a three-word logical proposition: "Technology determines history"' ( Rosalind H. Williams ). It is, '... the belief that social progress is driven by technological innovation, which in turn follows an "inevitable" course.' [ 11 ] This 'idea of progress' or 'doctrine of progress' is centred on the idea that social problems can be solved by technological advancement, and that this is the way society moves forward. Technological determinists believe that "'You can't stop progress', implying that we are unable to control technology" ( Lelia Green ). This suggests that we are somewhat powerless, and that society allows technology to drive social changes because "societies fail to be aware of the alternatives to the values embedded in it [technology]" ( Merritt Roe Smith ).
Technological determinism has been defined as an approach that identifies technology, or technological advances, as the central causal element in processes of social change. [ 12 ] Key notions of this theory are separated into two parts. The first is that the development of a technology may proceed separately from social and political factors, arising from "the ways of inventors, engineers, and designers following an internal, technical logic that has nothing to do with social relationships". [ 13 ] The second is that as a technology is stabilized, its design tends to dictate users' behaviors, consequently resulting in social change; hence the claim that "technological progress equals social progress". [ 13 ]
As technology changes, the ways in which it is utilized and incorporated into the daily lives of individuals within a culture consequently affect the ways of living, highlighting how technology ultimately determines societal growth through its influence on relations and ways of living within a culture. To illustrate, "the invention of the wheel revolutionized human mobility, allowing humans to travel greater distances and carry greater loads with them". [ 14 ] This technological advancement also leads to interactions between different cultural groups, advanced trade, and thus impacts the size and relations both within and between different networks. Other examples include the invention of language, expanding modes of communication between individuals, the introduction of bookkeeping and written documentation, impacting the circulation of knowledge, and having streamlined effects on the socioeconomic and political systems as a whole. As Dusek (2006) notes, "culture and society cannot affect the direction of technology…[and] as technology develops and changes, the institutions in the rest of society change, as does the art and religion of a society." [ 15 ] Thus, technological determinism dictates that technological advances and social relations are inevitably tied, with the change of either affecting the other by consequence of normalization. [ 16 ]
This stance, however, ignores the social and cultural circumstances in which the technology was developed. Sociologist Claude Fischer (1992) characterized the most prominent forms of technological determinism as "billiard ball" approaches, in which technology is seen as an external force introduced into a social situation, producing a series of ricochet effects. [ 17 ]
Rather than acknowledging that a society or culture interacts with and even shapes the technologies that are used, a technological determinist view holds that "the uses made of technology are largely determined by the structure of the technology itself, that is, that its functions follow from its form" ( Neil Postman ). However, this is not the sole view of technological determinism. Smith and Marx's (1998) [ 18 ] notion of "hard" determinism states that once a technology is introduced into a culture, what follows is the inevitable development of that technology. In this view, the role of "agency (the power to affect change) is imputed on the technology itself, or some of its intrinsic attributes; thus the invention of technology leads to a situation of inescapable necessity."
The other view follows what Smith and Marx (1998) [ 18 ] term "soft" determinism, where the development of technology also depends on its social context, which affects how it is adopted into a culture, "and, if the technology is adopted, the social context will have important effects on how the technology is used and thus on its ultimate impact". [ 16 ]
For example, we could examine the spread of mass-produced knowledge through the role of the printing press in the Protestant Reformation. Because of the urgency from the Protestant side to get the reform off the ground before the church could react, "early Lutheran leaders, led by Luther himself, wrote thousands of anti-papal pamphlets in the Reformation's first decades and these works spread rapidly through reprinting in various print shops throughout central Europe". [ 19 ] As such, the urgency of the socio-political context to utilize such technology at the beginning of its invention caused its fast adoption and normalization into European culture. We could view its uses in its popularization – for political propaganda purposes – in line with the continued traditions of newspapers in modern times, as well as newly adopted uses for other printed text, adapting to changes in the social context such as an emphasis on leisure activities like reading. This follows the soft deterministic view because the technological invention – the printing press – was quickly adopted because of the socio-political context and, because of its fast integration into society, has impacted and continues to impact how society operates.
In examining determinism , “hard determinism” can be contrasted with “soft determinism”. A compatibilist says that it is possible for free will and determinism to exist in the world together, while an incompatibilist would say that they cannot and there must be one or the other. Those who support determinism can be further divided.
“Hard determinists” would view technology as developing independently from social concerns. They would say that technology creates a set of powerful forces acting to regulate our social activity and its meaning. According to this view of determinism, we organize ourselves to meet the needs of technology, and the outcome of this organization is beyond our control; we do not have the freedom to make a choice regarding the outcome (autonomous technology). The 20th-century French philosopher and social theorist Jacques Ellul could be said to be a hard determinist and proponent of autonomous technique (technology). In his 1954 work The Technological Society , Ellul essentially posits that technology, by virtue of its power through efficiency, determines which social aspects are best suited for its own development through a process of natural selection. A social system whose values, morals, philosophy, etc. are most conducive to the advancement of technology enhances its power and spreads at the expense of those social systems whose values, morals, and philosophy are less promoting of technology. While geography, climate, and other "natural" factors largely determined the parameters of social conditions for most of human history, technology has recently become the dominant objective and determining factor, largely due to forces unleashed by the Industrial Revolution.
“Soft determinism”, as the name suggests, is a more passive view of the way technology interacts with socio-political situations. Soft determinists still subscribe to the view that technology is the guiding force in our evolution, but would maintain that we have a chance to make decisions regarding the outcomes of a situation. This is not to say that free will exists, but that the possibility exists for us to roll the dice and see what the outcome is. A slightly different variant of soft determinism is the 1922 technology-driven theory of social change proposed by William Fielding Ogburn , in which society must adjust to the consequences of major inventions, but often does so only after a period of cultural lag . [ 20 ]
Skepticism about technological determinism emerged alongside increased pessimism about techno-science in the mid-20th century, in particular around the use of nuclear energy in the production of nuclear weapons , Nazi human experimentation during World War II , and the problems of economic development in the Third World . As a direct consequence, desire for greater control of the course of development of technology gave rise to disenchantment with the model of technological determinism in academia.
Modern theorists of technology and society no longer consider technological determinism to be a very accurate view of the way in which we interact with technology, even though determinist assumptions and language fairly saturate the writings of many boosters of technology, the business pages of many popular magazines, and much reporting on technology [ citation needed ] . Instead, research in science and technology studies , social construction of technology and related fields have emphasized more nuanced views that resist easy causal formulations. They emphasize that "The relationship between technology and society cannot be reduced to a simplistic cause-and-effect formula. It is, rather, an 'intertwining'", whereby technology does not determine but "operates, and are operated upon in a complex social field" (Murphie and Potts).
Timothy Snyder approached technological determinism through his concept of the 'politics of inevitability'. [ 21 ] Utilized by politicians, this concept promises society that the future will be only more of the present, and in doing so removes responsibility. It can be applied to free markets, the development of nation states, and technological progress.
In his article "Subversive Rationalization: Technology, Power and Democracy with Technology," Andrew Feenberg argues that technological determinism is not a very well founded concept by illustrating that two of the founding theses of determinism are easily questionable and in doing so calls for what he calls democratic rationalization ( Feenberg 210–212).
Prominent opposition to technologically determinist thinking has emerged within work on the social construction of technology (SCOT). SCOT research, such as that of Mackenzie and Wajcman (1997), argues that the path of innovation and its social consequences are strongly, if not entirely, shaped by society itself through the influence of culture, politics, economic arrangements, regulatory mechanisms and the like. In its strongest form, verging on social determinism , "What matters is not the technology itself, but the social or economic system in which it is embedded" ( Langdon Winner ).
In his influential but contested (see Woolgar and Cooper, 1999) article "Do Artifacts Have Politics?", Langdon Winner illustrates not a form of determinism but the various sources of the politics of technologies. Those politics can stem from the intentions of the designer and the culture of the society in which a technology emerges, or from the technology itself, a "practical necessity" for it to function. For instance, New York City urban planner Robert Moses is purported to have built Long Island's parkway overpasses too low for buses to pass in order to keep minorities away from the island's beaches, an example of externally inscribed politics. On the other hand, an authoritarian command-and-control structure is a practical necessity of a nuclear power plant if radioactive waste is not to fall into the wrong hands. As such, Winner succumbs to neither technological determinism nor social determinism. The source of a technology's politics is determined only by carefully examining its features and history.
Although "The deterministic model of technology is widely propagated in society" (Sarah Miller), it has also been widely questioned by scholars. Lelia Green explains that, "When technology was perceived as being outside society, it made sense to talk about technology as neutral". Yet, this idea fails to take into account that culture is not fixed and society is dynamic. When "Technology is implicated in social processes, there is nothing neutral about society" ( Lelia Green ). This confirms one of the major problems with "technological determinism and the resulting denial of human responsibility for change. There is a loss of human involvement that shape technology and society" (Sarah Miller).
Another conflicting idea is that of technological somnambulism , a term coined by Winner in his essay "Technology as Forms of Life". Winner wonders whether or not we are simply sleepwalking through our existence with little concern or knowledge as to how we truly interact with technology. In this view, it is still possible for us to wake up and once again take control of the direction in which we are traveling (Winner 104). However, it requires society to adopt Ralph Schroeder 's claim that, "users don't just passively consume technology, but actively transform it". [ 22 ]
In opposition to technological determinism are those who subscribe to the belief of social determinism and postmodernism . Social determinists believe that social circumstances alone select which technologies are adopted, with the result that no technology can be considered "inevitable" solely on its own merits. Technology and culture are not neutral and when knowledge comes into the equation, technology becomes implicated in social processes. The knowledge of how to create, enhance, and use technology is socially bound knowledge. Postmodernists take another view, suggesting that what is right or wrong is dependent on circumstance. They believe technological change can have implications on the past, present and future. [ 23 ] While they believe technological change is influenced by changes in government policy, society and culture, they consider the notion of change to be a paradox, since change is constant.
Media and cultural studies theorist Brian Winston , in response to technological determinism, developed a model for the emergence of new technologies which is centered on the Law of the suppression of radical potential . In two of his books – Technologies of Seeing: Photography, Cinematography and Television (1997) and Media Technology and Society (1998) – Winston applied this model to show how technologies evolve over time, and how their 'invention' is mediated and controlled by society and societal factors which suppress the radical potential of a given technology.
Some interpret Karl Marx as advocating technological determinism, with such statements as "The Handmill gives you society with the feudal lord: the steam-mill , society with the industrial capitalist" ( The Poverty of Philosophy, 1847), but others argue that Marx was not a determinist. [ 24 ]
Technological determinist Walter J. Ong reviews the societal transition from an oral culture to a written culture in his work Orality and Literacy: The Technologizing of the Word (1982). He asserts that this particular development is attributable to the use of new technologies of literacy (particularly print and writing) to communicate thoughts which could previously only be verbalized. He furthers this argument by claiming that writing is purely context dependent, as it is a "secondary modelling system" (8). Reliant upon the earlier primary system of spoken language, writing manipulates the potential of language, as it depends purely upon the visual sense to communicate the intended information. Furthermore, because the rather stagnant technology of literacy distinctly limits the usage and influence of knowledge, it unquestionably affects the evolution of society. In fact, Ong asserts that "more than any other single invention, writing has transformed human consciousness" (Ong 1982: 78).
Media determinism is a form of technological determinism, a philosophical and sociological position which posits the power of the media to impact society. [ 25 ] Two foundational media determinists are the Canadian scholars Harold Innis and Marshall McLuhan . One of the best examples of technological determinism in media theory is Marshall McLuhan's theory " the medium is the message " and the ideas of his mentor Harold Adams Innis. Both these Canadian theorists saw media as the essence of civilization. The association of different media with particular mental consequences by McLuhan and others can be seen as related to technological determinism; it is this variety of determinism that is referred to as media determinism. According to McLuhan, there is an association between communications media/technology and language; similarly, Benjamin Lee Whorf argues that language shapes our perception and thinking ( linguistic determinism ). For McLuhan, media is a more powerful and explicit determinant than is the more general concept of language. McLuhan was not necessarily a hard determinist. As a more moderate version of media determinism, he proposed that our use of particular media may have subtle influences on us, but more importantly, it is the social context of use that is crucial. [ 26 ] See also Media ecology .
Media determinism is a form of the popular dominant theory of the relationship between technology and society . In a determinist view, technology takes on an active life of its own and is seen as a driver of social phenomena. Innis believed that the social, cultural, political, and economic developments of each historical period can be related directly to the technology of the means of mass communication of that period. In this sense, like Dr. Frankenstein's monster, technology itself appears to be alive, or at least capable of shaping human behavior. [ 27 ] However, it has been increasingly subject to critical review by scholars. For example, the scholar Raymond Williams criticizes media determinism and instead believes social movements define technological and media processes. [ 28 ] With regard to communications media, audience determinism is a viewpoint opposed to media determinism: instead of media being presented as doing things to people, the stress is on the way people do things with media. Individuals need to be aware that the term "deterministic" is a negative one for many social scientists and modern sociologists; in particular, they often use the word as a term of abuse. [ 29 ] | https://en.wikipedia.org/wiki/Technological_determinism |
Technological innovation is an extended concept of innovation . While innovation is a rather well-defined concept, it carries a broad meaning for many people, and especially many different understandings in the academic and business worlds. [ 1 ]
Innovation refers to developing new services and products for the marketplace or the public that fulfill unaddressed needs or solve problems that were not solved in the past. Technological innovation, however, focuses on the technological aspects of a product or service rather than covering the entire organization's business model . It is important to clarify that innovation is not only driven by technology , but can also be driven by various other factors, including market demand , social and environmental factors , and process improvements.
Technological innovation is the process whereby an organization (or a group of people working outside a structured organization) embarks on a journey in which the importance of technology as a source of innovation has been identified as a critical success factor for increased market competitiveness. [ 2 ] The wording "technological innovation" is preferred to "technology innovation". "Technology innovation" gives a sense of working on technology for the sake of technology. "Technological innovation" better reflects the business consideration of improving business value by working on technological aspects of the product or services. These advancements yield improvements for the businesses that adopt the new technology. Moreover, in the vast majority of products and services there is not one unique technology at the heart of the system. It is the combination, integration, and interaction of different technologies that make the product or service successful.
If the process of technological innovation is formalized (typically within an organization: a company, a public body , a think tank , a university, etc.) it can be referred to as technological innovation management (or Technology Innovation Management - TIM). The "management" aspect refers to the inputs, outputs and constraints under which a "manager" or team of "managers" governs the process of technological innovation in a way that aligns with company strategy. In a context where technological innovation is not to be guided along known paths within the organization, the wording and concept of technological innovation leadership is preferred. On many occasions, especially in start-ups and new ventures, technological innovation is performed in an unknown context: the boundaries and constraints of the technology at work are not precisely known. Hence it requires leaders, not managers, to give the vision and coach the team to explore the unknown parts of the technology.
Technological innovation affects companies' stock prices. New inventions can make work in the market easier to perform, and investors see bigger returns on investments in companies whose new technology has changed the market. Companies that cannot keep up with the pace of change and adapt to disruptive innovation, however, often find themselves floundering. [ 3 ] As new innovations add to a company's value, profits increase, which in turn raises the company's stock price.
The stock market is a way for companies to raise money for production or operations by selling shares of stock in the company. [ 4 ] With the newly raised money, companies can invest in new advancements which will bring more profits in the future.
Although companies often adopt technological innovations, some decide not to, which leads to major gaps between what is the new "normal" and what used to be "old-fashioned". Innovations benefit the companies that adopt them but leave those that do not adapt outpaced. Companies that do not respond to market changes brought about by innovation tend to miss out on opportunities, which can end up ruining the company . [ 5 ]
Technological innovation:
How can a society plan and protect its future amid constantly developing technological innovations? [ 6 ] | https://en.wikipedia.org/wiki/Technological_innovation |
A technological revolution is a period in which one or more technologies are replaced by another new technology in a short amount of time. It is a time of accelerated technological progress characterized by innovations whose rapid application and diffusion typically cause an abrupt change in society.
A technological revolution may involve material or ideological changes caused by the introduction of a device or system. It may potentially impact business management, education, social interactions, finance and research methodology, and is not limited to technical aspects. It has been shown to increase productivity and efficiency . A technological revolution often significantly changes the material conditions of human existence and has been seen to reshape culture. [ 1 ]
A technological revolution can be distinguished from a random collection of technology systems by two features:
1. A strong interconnectedness and interdependence of the participating systems in their technologies and markets.
2. A potential capacity to greatly affect the rest of the economy (and eventually society). [ 2 ]
On the other hand, negative consequences have also been attributed to technological revolutions. For example, the use of coal as an energy source has negative environmental impacts, including contributing to climate change and the increase of greenhouse gases [ 3 ] in the atmosphere, and technological revolutions have caused technological unemployment . Joseph Schumpeter described this contradictory nature of technological revolution as creative destruction . [ 4 ] The concept of technological revolution is based on the idea that technological progress is not linear but undulatory . Technological revolution can be:
The concept of universal technological revolutions is a "contributing factor in the Neo-Schumpeterian theory of long economic waves/cycles", [ 5 ] according to Carlota Perez , Tessaleno Devezas , Daniel Šmihula and others.
Some examples of technological revolutions were the Neolithic Revolution , the Industrial Revolution in the mid 1800s, the scientific-technical revolution about 1950–1960, and the Digital Revolution . The distinction between universal technological revolutions and singular revolutions has been debated. One universal technological revolution may be composed of several sectoral technological revolutions (such as in science , industry , or transport ).
There are several universal technological revolutions during the modern era in Western culture : [ 6 ]
Comparable periods of well-defined technological revolutions in the pre-modern era are seen as highly speculative. [ 7 ] One such example is an attempt by Daniel Šmihula to suggest a timeline of technological revolutions in pre-modern Europe : [ 8 ]
Each revolution comprises the following engines for growth:
Technological revolutions have historically been seen to focus on cost reduction. For instance, the accessibility of coal at a low cost during the Industrial Revolution allowed for steam engines, which led to the production of iron railways , and the progress of the internet was driven by inexpensive microelectronics for computer development. [ citation needed ] A combination of low-cost inputs and new infrastructures is at the core of each revolution and underlies its all-pervasive impact. [ 9 ]
Since 2000, there has been speculation about a new technological revolution which would focus on the fields of nanotechnologies , alternative fuel and energy systems , biotechnologies , genetic engineering , new materials technologies and so on. [ 10 ]
The Second Machine Age is the term adopted in a 2014 book by Erik Brynjolfsson and Andrew McAfee . The industrial development plan of Germany began promoting the term Industry 4.0 . In 2019, at the World Economic Forum meeting in Davos , Japan promoted another round of advancements called Society 5.0 . [ 11 ] [ 12 ]
The phrase Fourth Industrial Revolution was first introduced by Klaus Schwab , the executive chairman of the World Economic Forum , in a 2015 article in Foreign Affairs . [ 13 ] Following the publication of the article, the theme of the World Economic Forum Annual Meeting 2016 in Davos-Klosters, Switzerland was "Mastering the Fourth Industrial Revolution". On October 10, 2016, the Forum announced the opening of its Centre for the Fourth Industrial Revolution in San Francisco . [ 14 ] According to Schwab, fourth-era technologies include those that combine hardware, software , and biology ( cyber-physical systems ), [ 15 ] and will put an emphasis on advances in communication and connectivity . Schwab expects this era to be marked by breakthroughs in emerging technologies in fields such as robotics , artificial intelligence , nanotechnology , quantum computing , biotechnology , the internet of things , the industrial internet of things (IIoT) , decentralized consensus, fifth-generation wireless technologies (5G) , 3D printing and fully autonomous vehicles . [ 16 ]
Jeremy Rifkin includes technologies like 5G , autonomous vehicles, Internet of Things , and renewable energy in the Third Industrial Revolution. [ 17 ]
Some economists do not think that technological growth will continue to the same degree it has in the past. Robert J. Gordon holds the view that today's inventions are not as radical as electricity and the internal combustion engine were. He believes that modern technology is not as innovative as others claim, and is far from creating a revolution. [ 18 ] | https://en.wikipedia.org/wiki/Technological_revolution |
Technological somnambulism is a concept used in the philosophy of technology . The term was used by Langdon Winner in his essay "Technology as Forms of Life". Winner puts forth the idea that we are simply in a state of sleepwalking in our mediations with technology . This sleepwalking has several causes. One of the primary causes is the way we view technology as tools, something that can be put down and picked up again. Because we view these objects as something we can easily separate ourselves from, we fail to look at the long-term implications of using them. A second factor is the separation between those who make the technology and those who use it. This division results in little thought or research going into the effects of using and developing that technology. The third and most important idea is the way in which technology seems to create new worlds in which we live. These worlds are created by the restructuring of the common and seemingly everyday things around us. In most situations the changes take place with little attention or care from us because we are more focused on the menial aspects of the technology (Winner 105–107). [ 1 ]
The concept can be found in the earlier work of Marshall McLuhan , cf. Understanding Media , where he refers to a comment made by David Sarnoff, expressing a socially deterministic view of "value-free" technology whose value is solely defined by its usage, as representing "...the voice of the current somnambulism". [ 2 ] Given that this piece by McLuhan has become standard reading in media theory, it is reasonable to suspect that Winner encountered the concept there or elsewhere and then went on to develop it further. [ 3 ] [ 4 ] [ 5 ]
| https://en.wikipedia.org/wiki/Technological_somnambulism |
Technological transitions (TT) can best be described as a collection of theories regarding how technological innovations occur, the driving forces behind them, and how they are incorporated into society. [ 1 ] TT draws on a number of fields, including history of science , technology studies, and evolutionary economics . Alongside the technological advancement, TT considers wider societal changes such as "user practices, regulation, industrial networks (supply, production, distribution), infrastructure, and symbolic meaning or culture". [ 2 ] Hughes [ 3 ] refers to the 'seamless web' where physical artifacts, organizations, scientific communities, and social practices combine. A technological transition occurs when there is a major shift in these socio-technical configurations. [ 2 ] [ 4 ]
Work on technological transitions draws on a number of fields including history of science , technology studies, and evolutionary economics . [ 2 ] The focus of evolutionary economics is on economic change, but technological change as a driver of it has been considered in the literature. [ 5 ] Joseph Schumpeter , in his classic Theory of Economic Development , [ 6 ] placed the emphasis on non-economic forces as the driver for growth. The human actor, the entrepreneur, is seen as the cause of economic development, which occurs as a cyclical process. Schumpeter proposed that radical innovations were the catalyst for Kondratiev cycles.
The Russian economist Kondratiev [ 7 ] proposed that economic growth operates in boom and bust cycles of approximately 50-year periods. These cycles are characterised by periods of expansion, stagnation and recession. The period of expansion is associated with the introduction of a new technology, e.g. steam power or the microprocessor. At the time of publication, Kondratiev considered that two cycles had occurred in the nineteenth century and a third was beginning at the turn of the twentieth. Modern writers, such as Freeman and Perez, [ 8 ] outlined five cycles in the modern age:
Freeman and Perez [ 8 ] proposed that each cycle consists of pervasive technologies, their modes of production and the economic structures that support them. Terming these 'techno-economic paradigms', they suggest that the shift from one paradigm to another is the result of emergent new technologies.
Following the recent economic crisis , authors such as Moody and Nogrady [ 9 ] have suggested that a new cycle is emerging from the old, centred on the use of sustainable technologies in a resource-depleted world.
Thomas Kuhn [ 10 ] described how a paradigm shift is a wholesale shift in the basic understanding of a scientific theory. Examples in science include the change of thought from miasma to germ theory as a cause of disease. Building on this work, Giovanni Dosi [ 11 ] developed the concepts of 'technical paradigms' and 'technological trajectories'. In considering how engineers work, the technical paradigm is an outlook on the technological problem, a definition of what the problems and solutions are. It charts the idea of specific progress. By identifying the problems to be solved, the paradigm exerts an influence on technological change. The pattern of problem-solving activity and the direction of progress is the technological trajectory. In similar fashion, Nelson and Winter [ 12 ] [ 13 ] defined the concept of the 'technological regime', which directs technological change through the beliefs of engineers about which problems to solve. The work of the actors and organisations is the result of organisational and cognitive routines which determine search behaviour. This places boundaries on technological change and also sets trajectories (directions) within those boundaries.
Recently, the scope of academic sustainability discourse and investigative focus has broadened beyond the study of technological products, innovations and subsequent transitions. [ 14 ] Much of the literature now examines technological artefacts and innovations through the wider scope of socio-technical systems. [ 15 ] It has been argued that this contemporary framework has emerged in response to both an increased understanding of the urgency of environmental problems and the recognition that more substantive transitions are required across multiple interdependent systems to mitigate impacts. [ 16 ]
The technological transitions framework does acknowledge the co-evolution and mutual unfolding of societal change alongside technological innovation. However, the socio-technical transitions framework considers a more encompassing view of the interdependent links that technology maintains with systems that both generate the need for new innovations and ultimately produce and maintain them. [ 17 ] More specifically, the systems that comprise the socio-technical paradigm include technology, supply networks, infrastructure, maintenance networks, regulation, cultural meaning as well as user practices and markets. [ 18 ] As such, socio-technical transitions can be defined as the multi-dimensional shift from one socio-technical system to another involving changes in both technological and social systems that are intrinsically linked in a feedback loop. [ 14 ] Generally speaking, socio-technical transitions are a slow process as technological innovation tends to occur incrementally along fixed trajectories due to the rigidity of economic, social, cultural, infrastructural and regulative norms. [ 19 ] This is referred to as path dependency, creating technological 'lock-ins' which prevent innovation that disrupts the status quo. [ 20 ] Therefore, the breakthrough and dissemination of technological innovations is dependent on more than their respective benefits, providing an insight into the complexity of the forces and multiple dimensions at play.
The multi-level perspective (MLP) is an analytical tool that attempts to deal with this complexity and resistance to change. Focussing on the dynamics of wider transitionary developments as opposed to discrete technological innovations, the MLP concerns itself with socio-technical system transformations, particularly transitions towards sustainability and resilience. [ 21 ] As the name implies, the MLP posits three analytical and heuristic levels on which processes interact and align to result in socio-technical system transformations: landscape (macro-level), regimes (meso-level) and niches (micro-level). [ 22 ] Firstly, the regime level represents the current structures and practices characterised by dominant rules, institutions and technologies that are self-reinforcing. [ 23 ] The socio-technical regime is dynamically stable in the sense that innovation still transpires, albeit incrementally and along a predictable trajectory. [ 14 ] This makes the regime 'locked-in' and resistant to both technological and social transitions. [ 24 ] Secondly, the landscape level is defined as the exogenous, broader contextual developments in deep-seated cultural patterns, macro-economics, macro-politics and spatial structures, potentially arising from shocks associated with wars, economic crises, natural disasters and political upheaval. [ 25 ] Additionally, landscapes are beyond the direct influence of actors, yet stimulate and exert pressure on them at the regime and niche levels. Finally, the niche is defined as the "locus for radical innovations", where dedicated actors nurture the development of technological novelties. [ 26 ] Shielded from market and regulatory influences, the niche fosters innovations that differ fundamentally from the prevailing regime and usually require landscape developments that open windows of opportunity at the regime level. [ 19 ] Therefore, the MLP attributes socio-technical transitions to the interaction of stabilising forces at the regime level with destabilising forces from both the landscape and niche levels. [ 20 ]
Due to the systems approach inherent in the MLP, analysis can be approached from different disciplinary perspectives according to their respective ontologies and priorities. From an urban planning perspective, the framework could be used to pinpoint the barriers and drivers associated with low-carbon transport systems so as to better target policy efforts. [ 27 ] From an urban mobility perspective, the landscape level is currently subject to both stabilising and destabilising pressures. Namely, peak oil, public concern surrounding inaction on climate change mitigation, and information technologies that digitise daily life (e.g. tele-commuting) destabilise the landscape and the automobility regime. [ 28 ] Conversely, the landscape level is solidified by stabilising forces such as cultural preferences for private ownership, timesaving, autonomy and privacy, as well as car-favouring urban fabric and infrastructure. [ 29 ] This is further enhanced by universal pressures of globalisation, which presupposes urban mobility to increase flows of goods and people. [ 28 ]
This tension between stabilising and destabilising forces is mirrored in the prevailing automobility regime. The regime is stabilised by persistent investment in road projects, lifestyle norms and consumer preferences that perpetuate car use, and resistance to major change by vested actors such as transport planners, policy makers and industry actors (e.g. car manufacturers). [ 29 ] Despite this stability, shifts in the landscape have allowed "cracks" in the regime, such as traffic management policy (traffic calming, parking restrictions, etc.), diminishing policy commitment to the regime, and industry actors proclaiming awareness of landscape pressures associated with climate change. [ 30 ]
In these contexts, niche socio-technical innovations that challenge the assumptions and norms of the regime have emerged, mainly in the form of local policy and infrastructure initiatives on a city scale. For example, intermodal travel in the form of bus/bike-rail integration schemes and bike rental/sharing have been trialled in many cities globally. [ 29 ] Also, niche sustainable urban planning concepts such as compact cities, smart growth and transit-oriented development have modestly emerged into sustainable mobility discourse. [ 29 ] However, the persistence of the automobility regime, due to the general stability of the landscape, has resulted in limited, small-scale implementations of these niche innovations. [ 29 ] As such, prevailing user preferences and cultural values at the landscape level appear to be a major barrier to socio-technical transitions in transport systems, as they stabilise the automobility regime and prevent niche innovations from gaining a foothold.
The nature of transitions varies and the differing qualities result in multiple pathways occurring. Geels and Schot [ 31 ] defined five transition paths:
Six characteristics of technological transitions have been identified: [ 1 ] [ 32 ]
Transitions are co-evolutionary and multi-dimensional. Technological developments occur intertwined with societal needs, wants and uses. A technology is adopted and diffused based on this interplay between innovation and societal requirements. Co-evolution has different aspects. As well as the co-evolution of technology and society, aspects between science, technology, users and culture have been considered. [ 5 ]
Multiple actors are involved. Scientific and engineering communities are central to the development of a technology, but a wide range of actors are involved in a transition. These can include organisations, policy-makers, government, NGOs, special interest groups and others.
Transitions occur at multiple levels. As shown in the MLP, transitions occur through the interplay of processes at different levels.
Transitions are a long-term process. Complete system change takes time and can be decades in the making. Case studies show them to be between 40 and 90 years. [ 33 ]
Transitions are radical. For a true transition to occur, the technology has to be a radical innovation.
Change is non-linear. The rate of change varies over time. For example, the pace of change may be slow in the gestation period (at the niche level) but much more rapid when a breakthrough is occurring.
Diffusion of an innovation is the concept of how it is picked up by society, at what rate and why. [ 34 ] The diffusion of a technological innovation into society can be considered in distinct phases. [ 35 ] Pre-development is the gestation period where the new technology has yet to make an impact. Take-off is when the process of a system shift is beginning. A breakthrough is occurring when fundamental changes are occurring in existing structures through the interplay of economic, social and cultural forces. Once the rate of change has decreased and a new balance is achieved, stabilization is said to have occurred. A full transition involves an overhaul of existing rules and change of beliefs which takes time, typically spanning at least a generation. [ 35 ] This process can be speeded up through seismic, unforeseen events such as war or economic strife.
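These phases are often visualised against an S-shaped cumulative adoption curve; the sketch below shows one way to map them onto a logistic curve. This is an illustration only: the logistic form is borrowed from the wider diffusion-of-innovations literature, and the midpoint, rate, and phase timings are invented parameters, not figures from the transitions literature.

```python
# Hedged illustration: cumulative diffusion modelled as a logistic S-curve, with
# the four phases mapped loosely onto regions of the curve. Parameters are invented.
import math

def adoption_share(t: float, midpoint: float = 25.0, rate: float = 0.25) -> float:
    """Cumulative share of eventual adopters at time t (years) under a logistic model."""
    return 1.0 / (1.0 + math.exp(-rate * (t - midpoint)))

for t, phase in [(5, "pre-development"), (18, "take-off"),
                 (25, "breakthrough"), (45, "stabilization")]:
    print(f"t={t:2d}y ({phase:15s}): {adoption_share(t):6.1%} adopted")
```

On these assumed parameters the curve is nearly flat in pre-development (under 1% adoption), rises steeply around the breakthrough at the midpoint, and flattens again as stabilization is reached.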
Geels [ 5 ] proposed a similar four-phase approach which draws on the multi-level perspective (MLP) developed by Dutch scholars. Phase one sees the emergence of a novelty, born from the existing regime. Development then occurs in the niche level at phase two. As before, breakthrough then occurs at phase three. In the parlance of the MLP the new technology, having been developed at the niche level, is in competition with the established regime. To break through and achieve wide diffusion, external factors – 'windows of opportunity' – are required.
A number of possible circumstances can act as windows of opportunity for the diffusion of new technologies:
Alongside external influences, internal drivers catalyse diffusion. [ 5 ] These include economic factors such as the price-performance ratio. Socio-technical perspectives focus on the links between disparate social and technological elements. [ 36 ] Following the breakthrough, the final phases see the new technology supersede the old.
The study of technological transitions has an impact beyond academic interest. The transitions referred to in the literature may relate to historic processes, such as the transportation transitions studied by Geels, but system changes are required to achieve a safe transition to a low-carbon economy . [ 1 ] [ 5 ] Current structural problems are apparent in a range of sectors. [ 5 ] Dependency on oil is problematic in the energy sector due to availability, access and contribution to greenhouse gas (GHG) emissions. Transportation is a major user of energy, causing significant emissions of GHGs. Food production will need to keep pace with an ever-growing world population while overcoming challenges presented by global warming and transportation issues. Incremental change has provided some improvements, but a more radical transition is required to achieve a more sustainable future.
Developed from the work on technological transitions is the field of transition management, an attempt to shape the direction of change of complex socio-technical systems towards more sustainable patterns. [ 1 ] Whereas work on technological transitions is largely based on historic processes, proponents of transition management seek to actively steer transitions in progress.
Genus and Coles [ 33 ] outlined a number of criticisms of the analysis of technological transitions, in particular when using the MLP. Empirical research on technological transitions occurring now has been limited, with the focus on historic transitions. Depending on the perspective taken, transition case studies could be presented as having occurred along a different transition path from the one shown. For example, the bicycle could be considered an intermediate transport technology between the horse and the car; judged over a different, shorter time-frame, it could appear a transition in its own right. Determining the nature of a transition is problematic: when it started and ended, or whether one occurred in the sense of a radical innovation displacing an existing socio-technical regime. The perception of time casts doubt on whether a transition has occurred; if viewed over a long enough period, even inert regimes may demonstrate radical change in the end. The MLP has also been criticised by scholars studying sustainability transitions using Social Practice Theories. [ 37 ] | https://en.wikipedia.org/wiki/Technological_transitions |
Technology For All is a nonprofit organization based in Houston , Texas . Founded in 1997 by local entrepreneurs, Technology For All serves community-based organizations (such as development centers, YMCAs, and local schools) with computer technology, training, and other digital incentives “to empower under-resourced communities through the tools of technology.” Through the National Telecommunications and Information Administration 's Broadband Technology Opportunities Program grant, Technology For All (TFA) currently hosts 19 public computer centers.
Technology For All was formed in 1997 as a response to a perceived lack of digital inclusion in historically low-income areas. In 1998, it received the M.D. Anderson Foundation's first $50,000 grant to help build a community technology center at the M.D. Anderson YMCA . [ 1 ] According to its website, TFA has created 180 community technology centers in the United States since its inception, all partnered with community-serving organizations.
When the Reliant Astrodome sheltered Hurricane Katrina refugees in 2005, TFA coordinated a lab with 40 computers and other free supplies. [ 2 ]
Technology For All divides its goals into three priorities: community technology center support and development, technology research and innovation, and technology services. [ 3 ]
TFA and Rice University operate the TFA-Wireless project, which provides free high-speed wireless Internet to Pecan Park, Houston . [ 4 ] In 2011, they installed the first residential deployment of Super Wi-Fi , which uses longer wavelengths to penetrate typical wireless barriers. [ 5 ]
Texas Connects Coalition (TXC2) is a partnership between TFA, Austin FreeNet (AFN), and the Metropolitan Austin Interactive Network (MAIN). It was recently awarded a Broadband Technology Opportunities Program (BTOP) grant valued at over nine million dollars. [ 6 ] The grant is provided by the National Telecommunications and Information Administration and funded by the American Recovery and Reinvestment Act of 2009 . The coalition is summarized as a “comprehensive … initiative significantly expanding broadband public computer center capacity ... across Texas.” [ 7 ] With the grant, TXC2 plans to install and maintain 70 public computer centers throughout Austin , San Antonio , Houston , and the Brazos Valley to "provide computer access, technical support, digital literacy, workforce development and other services to low-income and vulnerable populations." [ 8 ]
Through the BTOP grant, TFA aims to extend its network of public computer centers to 19. [ 9 ] Each lab is partnered with organizations in historically underprivileged neighborhoods, such as Eastside University Village Community Learning Center in Third Ward and the Spring Branch Family Development Center. Each center provides public computers, printers, and Internet access, plus a trainer to manage the center and teach various computer literacy courses. [ 10 ]
According to their website, TFA operates open labs in these community spaces, located in the following super neighborhoods:
TFA also has rural sites and sites in San Antonio which it manages under the name TFA-Rural Texas San Antonio (TFA-RTSA), listed on the TXC2 website ( http://txc2.org/?page_id=100 ). TXC2 is a coalition including TFA, TFA-RTSA and Austin FreeNet. The TFA-RTSA sites include: | https://en.wikipedia.org/wiki/Technology_For_All |
The technology acceptance model ( TAM ) is an information systems theory that models how users come to accept and use a technology .
Actual system use is the end-point at which people use the technology. Behavioral intention is a factor that leads people to use the technology. Behavioral intention (BI) is influenced by attitude (A), which is the general impression of the technology.
The model suggests that when users are presented with a new technology, a number of factors influence their decision about how and when they will use it, notably:
External variables, such as social influence, are important factors in determining attitude. When the TAM factors are in place, people will have the attitude and intention to use the technology. However, perceptions may vary with characteristics such as age and gender, because everyone is different.
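To make the chain of constructs concrete, the sketch below simulates survey-style data and recovers the TAM paths with ordinary least squares. It is a minimal illustration only: the construct scales, path weights, noise levels, and sample size are invented for the example and are not estimates from Davis's studies.

```python
# Hedged sketch of a TAM-style path analysis on synthetic survey data.
# All coefficients below are illustrative assumptions, not empirical TAM estimates.
import numpy as np

rng = np.random.default_rng(0)
n = 500
peou = rng.normal(4.5, 1.0, n)                 # perceived ease of use (Likert-like scale)
pu = 0.5 * peou + rng.normal(2.0, 0.8, n)      # PEOU partly drives perceived usefulness
attitude = 0.6 * pu + 0.3 * peou + rng.normal(0.0, 0.5, n)
bi = 0.7 * attitude + 0.2 * pu + rng.normal(0.0, 0.5, n)  # behavioral intention

def ols(y, *xs):
    """Ordinary least squares with an intercept; returns the coefficient vector."""
    X = np.column_stack([np.ones_like(y), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

print("attitude ~ PU + PEOU:", ols(attitude, pu, peou)[1:])  # recovers ~0.6 and ~0.3
print("BI ~ attitude + PU:  ", ols(bi, attitude, pu)[1:])    # recovers ~0.7 and ~0.2
```

Real TAM studies estimate these paths from validated questionnaire items rather than simulated scores, typically using regression or structural equation modelling.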
The TAM has been continuously studied and expanded—the two major upgrades being the TAM 2 [ 2 ] [ 3 ] and the unified theory of acceptance and use of technology (or UTAUT). [ 4 ] A TAM 3 has also been proposed in the context of e-commerce with an inclusion of the effects of trust and perceived risk on system use. [ 5 ]
TAM is one of the most influential extensions of Ajzen and Fishbein's theory of reasoned action (TRA) in the literature. Davis's technology acceptance model (Davis, 1989; Davis, Bagozzi, & Warshaw, 1989) is the most widely applied model of users' acceptance and usage of technology (Venkatesh, 2000). It was developed by Fred Davis and Richard Bagozzi . [ 1 ] [ 6 ] [ 7 ] TAM replaces many of TRA's attitude measures with the two technology acceptance measures: ease of use and usefulness . TRA and TAM, both of which have strong behavioural elements, assume that when someone forms an intention to act, they will be free to act without limitation. In the real world there are many constraints, such as limited freedom to act. [ 6 ]
Bagozzi, Davis and Warshaw say:
Because new technologies such as personal computers are complex and an element of uncertainty exists in the minds of decision makers with respect to the successful adoption of them, people form attitudes and intentions toward trying to learn to use the new technology prior to initiating efforts directed at using. Attitudes towards usage and intentions to use may be ill-formed or lacking in conviction or else may occur only after preliminary strivings to learn to use the technology evolve. Thus, actual usage may not be a direct or immediate consequence of such attitudes and intentions. [ 6 ]
Earlier research on the diffusion of innovations also suggested a prominent role for perceived ease of use. Tornatzky and Klein [ 8 ] analysed the adoption of innovations, finding that compatibility, relative advantage, and complexity had the most significant relationships with adoption across a broad range of innovation types. Eason studied perceived usefulness in terms of a fit between systems, tasks and job profiles, using the term "task fit" to describe the metric. [ 9 ] Legris, Ingham and Collerette suggest that TAM must be extended to include variables that account for change processes, and that this could be achieved through adoption of the innovation model into TAM. [ 10 ]
Several researchers have replicated Davis's original study [ 1 ] to provide empirical evidence on the relationships that exist between usefulness, ease of use and system use. [ 11 ] Much attention has focused on testing the robustness and validity of the questionnaire instrument used by Davis. Adams et al. [ 12 ] replicated the work of Davis [ 1 ] to demonstrate the validity and reliability of his instrument and his measurement scales. They also extended it to different settings and, using two different samples, demonstrated the internal consistency and replication reliability of the two scales. Hendrickson et al. found high reliability and good test-retest reliability. [ 13 ] Szajna found that the instrument had predictive validity for intent to use, self-reported usage and attitude toward use. [ 14 ] The sum of this research has confirmed the validity of the Davis instrument and supported its use with different populations of users and different software choices.
Segars and Grover [ 15 ] re-examined Adams et al.'s [ 12 ] replication of the Davis work. They were critical of the measurement model used, and postulated a different model based on three constructs: usefulness, effectiveness, and ease of use. These findings do not yet seem to have been replicated. However, some aspects of these findings were tested and supported by Workman [ 16 ] by separating the dependent variable into information use versus technology use.
Mark Keil and his colleagues have developed (or, perhaps, popularised) Davis's model into what they call the Usefulness/ EOU Grid , which is a 2×2 grid where each quadrant represents a different combination of the two attributes. In the context of software use, this provides a mechanism for discussing the current mix of usefulness and EOU for particular software packages, and for plotting a different course if a different mix is desired, such as the introduction of even more powerful software. [ 17 ] The TAM model has been used in most technological and geographic contexts. One of these contexts is health care, which is growing rapidly. [ 18 ]
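As a toy illustration of how such a grid can be operationalised, the snippet below classifies a software package into one of the four quadrants. The 1-7 scale and the midpoint threshold are assumptions made for the example, not values prescribed by Keil et al.

```python
# Hypothetical sketch of the Usefulness/EOU Grid: a 2x2 classification of software
# by its perceived usefulness and perceived ease of use (EOU) scores.

def eou_grid_quadrant(usefulness: float, ease_of_use: float,
                      threshold: float = 4.0) -> str:
    """Place a package in a quadrant; scores assumed to lie on a 1-7 scale."""
    u = "high usefulness" if usefulness >= threshold else "low usefulness"
    e = "high EOU" if ease_of_use >= threshold else "low EOU"
    return f"{u} / {e}"

# A powerful but hard-to-learn package lands in the quadrant suggesting UI work:
print(eou_grid_quadrant(usefulness=6.2, ease_of_use=3.1))  # high usefulness / low EOU
```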
Saravanos et al. [ 19 ] extended the TAM model to incorporate emotion and the effect that may play on the behavioral intention to accept a technology. Specifically, they looked at warm-glow.
Venkatesh and Davis extended the original TAM model to explain perceived usefulness and usage intentions in terms of social influence (subjective norms, voluntariness, image) and cognitive instrumental processes (job relevance, output quality, result demonstrability, perceived ease of use). The extended model, referred to as TAM2, was tested in both voluntary and mandatory settings. The results strongly supported TAM2. [ 2 ]
In an attempt to integrate the main competing user acceptance models, Venkatesh et al. formulated the unified theory of acceptance and use of technology (UTAUT). This model was found to outperform each of the individual models (adjusted R² of 69 percent). [ 4 ] UTAUT has been adopted by some recent studies in healthcare. [ 22 ]
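As a hedged illustration of the adjusted R² statistic itself (the sample size and predictor count below are invented for the example and are not taken from the UTAUT study), the standard adjustment penalizes R² for the number of predictors:

```python
# Sketch: computing adjusted R^2 from ordinary R^2.
# n (observations) and p (predictors) are illustrative values only.

def adjusted_r_squared(r2: float, n: int, p: int) -> float:
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1)."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

print(round(adjusted_r_squared(r2=0.70, n=215, p=4), 3))  # -> 0.694
```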
In addition, Jun et al. consider the technology acceptance model essential for analyzing the factors that affect customers' behavior towards online food delivery services. It is also a widely adopted theoretical model for demonstrating the acceptance of new technology fields. The foundation of TAM is a series of concepts that clarify and predict people's behaviors from their beliefs, attitudes, and behavioral intentions. In TAM, perceived ease of use and perceived usefulness, considered general beliefs, play a more vital role than salient beliefs in attitudes toward using a particular technology. [ 23 ]
TAM has been widely criticised, despite its frequent use, leading the original proposers to attempt to redefine it several times. Criticisms of TAM as a "theory" include its questionable heuristic value, limited explanatory and predictive power, triviality, and lack of any practical value. [ 30 ] Benbasat and Barki suggest that TAM "has diverted researchers' attention away from other important research issues and has created an illusion of progress in knowledge accumulation. Furthermore, the independent attempts by several researchers to expand TAM in order to adapt it to the constantly changing IT environments has lead [ sic ] to a state of theoretical chaos and confusion". [ 31 ] In general, TAM focuses on the individual 'user' of a computer, with the concept of 'perceived usefulness', extended to bring in more and more factors to explain how a user 'perceives' 'usefulness'; it ignores the essentially social processes of IS development and implementation, the question of whether more technology is actually better, and the social consequences of IS use. Lunceford argues that the framework of perceived usefulness and ease of use overlooks other issues, such as cost and structural imperatives that force users into adopting the technology. [ 32 ] For a recent analysis and critique of TAM, see Bagozzi. [ 33 ]
Legris et al. [ 34 ] claim that, together, TAM and TAM2 account for only 40% of a technological system's use.
Perceived ease of use is less likely to be a determinant of attitude and usage intention according to studies of telemedicine, [ 35 ] mobile commerce, [ 36 ] and online banking. [ 37 ] | https://en.wikipedia.org/wiki/Technology_acceptance_model |
The technology adoption lifecycle is a sociological model that describes the adoption or acceptance of a new product or innovation, according to the demographic and psychological characteristics of defined adopter groups. The process of adoption over time is typically illustrated as a classical normal distribution or "bell curve". The model calls the first group of people to use a new product " innovators ", followed by " early adopters ". Next come the "early majority" and "late majority", and the last group to eventually adopt a product are called "laggards" or "phobics". For example, a phobic may only use a cloud service when it is the only remaining method of performing a required task, but the phobic may not have an in-depth technical knowledge of how to use the service.
The demographic and psychological (or " psychographic ") profiles of each adoption group were originally specified by agricultural researchers in 1956: [ 1 ]
The model has subsequently been adapted for many areas of technology adoption in the late 20th century, for example in the spread of policy innovations among U.S. states. [ 2 ]
The model has spawned a range of adaptations that extend the concept or apply it to specific domains of interest.
In his book Crossing the Chasm , Geoffrey Moore proposes a variation of the original lifecycle. He suggests that for discontinuous innovations, which may result in a Foster disruption based on an s-curve , [ 3 ] there is a gap or chasm between the first two adopter groups (innovators/early adopters), and the vertical markets.
Disruption as the term is used today is of the Clayton M. Christensen variety. These disruptions are not s-curve based.
In educational technology , Lindy McKeown has provided a similar model (a pencil metaphor [ 4 ] ) describing the Information and Communications Technology uptake in education.
In medical sociology , Carl May has proposed normalization process theory that shows how technologies become embedded and integrated in health care and other kinds of organization.
Wenger, White and Smith, in their book Digital habitats: Stewarding technology for communities , talk of technology stewards: people with sufficient understanding of the technology available and the technological needs of a community to steward the community through the technology adoption process. [ 5 ]
Rayna and Striukova (2009) propose that the choice of initial market segment has crucial importance for crossing the chasm, as adoption in this segment can lead to a cascade of adoption in the other segments. This initial market segment has, at the same time, to contain a large proportion of visionaries, to be small enough for adoption to be observed from within the segment and from other segments, and to be sufficiently connected with other segments. If this is the case, the adoption in the first segment will progressively cascade into the adjacent segments, thereby triggering the adoption by the mass-market. [ 6 ]
Stephen L. Parente (1995) implemented a Markov Chain to model economic growth across different countries given different technological barriers. [ 7 ]
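A minimal sketch of this style of model is shown below: a small Markov chain whose states are levels of technology and whose transition probabilities stand in for adoption barriers. The three states and the matrix values are invented for illustration and are not Parente's actual parameters.

```python
# Hypothetical Markov-chain sketch of technology adoption under barriers.
import random

STATES = ["low tech", "medium tech", "high tech"]
# P[i][j]: probability of moving from state i to state j in one period.
# An economy with high barriers would have smaller upward probabilities.
P = [
    [0.90, 0.10, 0.00],
    [0.05, 0.85, 0.10],
    [0.00, 0.05, 0.95],
]

def simulate(start: int, periods: int, rng: random.Random) -> list:
    state, path = start, []
    for _ in range(periods):
        state = rng.choices(range(len(STATES)), weights=P[state])[0]
        path.append(STATES[state])
    return path

print(simulate(start=0, periods=12, rng=random.Random(42)))
```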
In product marketing , Warren Schirtzinger proposed an expansion of the original lifecycle (the Customer Alignment Lifecycle [ 8 ] ) which describes the configuration of five different business disciplines that follow the sequence of technology adoption.
One way to model product adoption [ 9 ] is to understand that people's behaviors are influenced by their peers and how widespread they think a particular action is. For many format-dependent technologies, people have a non-zero payoff for adopting the same technology as their closest friends or colleagues. If two users both adopt product A, they might get a payoff a > 0; if they adopt product B, they get b > 0. But if one adopts A and the other adopts B, they both get a payoff of 0.
A threshold can be set for each user to adopt a product. Say that a node v in a graph has d neighbors: then v will adopt product A if the fraction p of its neighbors that have adopted A is greater than or equal to some threshold. For example, if v's threshold is 2/3, and only one of its two neighbors adopts product A, then v will not adopt A. Using this model, we can deterministically model product adoption on sample networks.
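The threshold itself can be derived from the payoffs above: if a fraction p of v's d neighbors adopt A, choosing A yields p·d·a while choosing B yields (1−p)·d·b, so A is the better response exactly when p ≥ b/(a+b). The following Python sketch implements this deterministic cascade on a small, invented example network:

```python
# Deterministic threshold cascade for product adoption on a network.
# The graph and seed set are invented; the threshold q = b / (a + b)
# follows from the coordination-game payoffs described above.

def cascade(graph: dict, seeds: set, a: float, b: float) -> set:
    q = b / (a + b)  # adopt A when the adopting fraction reaches q
    adopters = set(seeds)
    changed = True
    while changed:
        changed = False
        for v, nbrs in graph.items():
            if v in adopters:
                continue
            if sum(n in adopters for n in nbrs) / len(nbrs) >= q:
                adopters.add(v)
                changed = True
    return adopters

# Example: with a = 2, b = 1 (threshold 1/3), A spreads from the seeds
# {1, 2} to the whole network.
g = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4]}
print(sorted(cascade(g, seeds={1, 2}, a=2.0, b=1.0)))  # -> [1, 2, 3, 4, 5]
```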
The technology adoption lifecycle is a sociological model that is an extension of an earlier model called the diffusion process , which was originally published in 1956 by George M. Beal and Joe M. Bohlen. [ 1 ] This article did not acknowledge the contributions of Beal's Ph.D. student Everett M. Rogers; however Beal, Bohlen and Rogers soon co-authored a scholarly article on their methodology. [ 10 ] This research built on prior work by Neal C. Gross and Bryce Ryan. [ 11 ] [ 12 ] [ 13 ]
Rogers generalized the diffusion process to innovations outside the agricultural sector of the midwestern USA, and successfully popularized his generalizations in his widely acclaimed 1962 book Diffusion of Innovations [ 14 ] (now in its fifth edition). | https://en.wikipedia.org/wiki/Technology_adoption_life_cycle |
Technology, society and life or technology and culture refers to the inter-dependency, co-dependence , co-influence, and co-production of technology and society upon one another. Evidence for this synergy has been found since humanity first started using simple tools. The inter-relationship has continued as modern technologies such as the printing press and computers have helped shape society. The first scientific approach to this relationship occurred with the development of tektology , the "science of organization", in early twentieth century Imperial Russia . [ 1 ] In modern academia, the interdisciplinary study of the mutual impacts of science, technology, and society, is called science and technology studies .
The simplest form of technology is the development and use of basic tools . The prehistoric discovery of how to control fire and the later Neolithic Revolution increased the available sources of food, and the invention of the wheel helped humans to travel in and control their environment. Developments in historic times have lessened physical barriers to communication and allowed humans to interact freely on a global scale, such as the printing press , telephone , and Internet .
Technology has developed advanced economies , such as the modern global economy , and has led to the rise of a leisure class . Many technological processes produce by-products known as pollution , and deplete natural resources to the detriment of Earth's environment . Innovations influence the values of society and raise new questions in the ethics of technology . Examples include the rise of the notion of efficiency in terms of human productivity , and the challenges of bioethics .
Philosophical debates have arisen over the use of technology, with disagreements over whether technology improves the human condition or worsens it. Neo-Luddism , anarcho-primitivism , and similar reactionary movements criticize the pervasiveness of technology, arguing that it harms the environment and alienates people. However, proponents of ideologies such as transhumanism and techno-progressivism view continued technological progress as beneficial to society and the human condition .
The importance of stone tools , circa 2.5 million years ago, is considered fundamental to human development in the hunting hypothesis . [ citation needed ]
Primatologist Richard Wrangham theorizes that the control of fire by early humans and the associated development of cooking was the spark that radically changed human evolution. [ 2 ] Texts such as Guns, Germs, and Steel suggest that early advances in plant agriculture and husbandry fundamentally shifted the way that collective groups of individuals, and eventually societies, developed.
Technology has taken a large role in society and day-to-day life. As societies learn more about the development of a technology, they become better able to take advantage of it. Once an innovation reaches a certain point after it has been presented and promoted, the technology becomes part of the society. The use of technology in education provides students with technology literacy, information literacy, capacity for life-long learning, and other skills necessary for the 21st century workplace. [ 3 ] Digital technology has entered nearly every process and activity of the social system ; in effect, it has constructed a second, worldwide communication system in addition to the original. [ 4 ]
A 1982 study by The New York Times described a technology assessment study by the Institute for the Future , "peering into the future of an electronic world." The study focused on the emerging videotex industry, formed by the marriage of two older technologies, communications and computing. It estimated that 40 percent of American households would have two-way videotex service by the end of the century. By comparison, it had taken television 16 years to penetrate 90 percent of households from the time commercial service began.
The creation of computers enabled an entirely better approach to transmitting and storing data . Digital technology became commonly used for downloading music and watching movies at home, whether on DVD or purchased online.
Digital music records are not quite the same as traditional recording media: digital recordings are reproducible, portable, and essentially free to copy. [ 5 ]
Around the globe, many schools have implemented educational technology in primary schools, universities and colleges. According to the statistics, in the early 1990s the use of the Internet in schools was, on average, 2–3%. [ citation needed ] By the end of the 1990s the figure had risen rapidly to 60%, and by 2008 nearly 100% of schools used the Internet in educational settings. According to ISTE researchers, technological improvements can lead to numerous achievements in classrooms: e-learning systems, student collaboration on project-based learning, and the development of technological skills for the future all increase student motivation. [ citation needed ]
Although these previous examples show only a few of the positive aspects of technology in society, there are negative side effects as well. [ 6 ] Within this virtual realm, social media platforms such as Instagram , Facebook , and Snapchat have altered the way Generation Y culture understands the world and thus how its members view themselves. In recent years, there has been more research on the development of social media depression in users of sites like these. "Facebook Depression" is when users are so affected by their friends' posts and lives that their own jealousy depletes their sense of self-worth. They compare themselves to the posts made by their peers and feel unworthy or monotonous because they feel like their lives are not nearly as exciting as the lives of others. [ 3 ]
Technology has a serious effect on youth's health. The overuse of technology is said to be associated with sleep deprivation which is linked to obesity and poor academic performance in the lives of adolescents. [ 7 ]
In ancient history, economics began when spontaneous exchange of goods and services was replaced over time by deliberate trade structures. Makers of arrowheads, for example, might have realized they could do better by concentrating on making arrowheads and barter for other needs. Regardless of goods and services bartered, some amount of technology was involved—if no more than in the making of shell and bead jewelry. Even the shaman's potions and sacred objects can be said to have involved some technology. So, from the very beginnings, technology can be said to have spurred the development of more elaborate economies. Technology is seen as primary source in economic development. [ 8 ]
Technology advancement and economic growth are related to each other. The level of technology is important in determining economic growth, and it is the technological process that keeps the economy moving.
In the modern world, superior technologies, resources, geography, and history give rise to robust economies; and in a well-functioning, robust economy, economic excess naturally flows into greater use of technology. Moreover, because technology is such an inseparable part of human society, especially in its economic aspects, funding sources for (new) technological endeavors are virtually illimitable. However, while in the beginning, technological investment involved little more than the time, efforts, and skills of one or a few men, today, such investment may involve the collective labor and skills of many millions.
Most recently, because of the COVID-19 pandemic , the proportion of firms employing advanced digital technology in their operations expanded dramatically. It was found that firms that adopted technology were better prepared to deal with the pandemic's disruptions. Adaptation strategies in the form of remote working, 3D printing, and the use of big data analytics and AI to plan activities to adapt to the pandemic were able to ensure positive job growth. [ 9 ] [ 10 ] [ 11 ]
Consequently, the sources of funding for large technological efforts have dramatically narrowed, since few have ready access to the collective labor of a whole society, or even a large part. It is conventional to divide up funding sources into governmental (involving whole, or nearly whole, social enterprises ) and private (involving more limited, but generally more sharply focused) business or individual enterprises.
The government is a major contributor to the development of new technology in many ways. In the United States alone, many government agencies specifically invest billions of dollars in new technology.
In 1980, the UK government invested just over six million pounds in a four-year program, later extended to six years, called the Microelectronics Education Programme (MEP), which was intended to give every school in Britain at least one computer, software, training materials, and extensive teacher training. Similar programs have been instituted by governments around the world.
Technology has frequently been driven by the military, with many modern applications developed for the military before they were adapted for civilian use. However, this has always been a two-way flow, with industry often developing and adopting a technology only later adopted by the military.
Entire government agencies are specifically dedicated to research, such as America's National Science Foundation , the United Kingdom's scientific research institutes , and America's Small Business Innovation Research program. Many other government agencies dedicate a major portion of their budget to research and development.
Research and development is one of the smallest areas of investments made by corporations toward new and innovative technology. [ citation needed ]
Many foundations and other nonprofit organizations contribute to the development of technology. In the OECD , about two-thirds of research and development in scientific and technical fields is carried out by industry, and 20 percent and 10 percent, respectively, by universities and government. But in poorer countries such as Portugal and Mexico the industry contribution is significantly less. The U.S. government spends more than other countries on military research and development, although the proportion has fallen from about 30 percent in the 1980s to less than 10 percent. [ 12 ]
The founding of Kickstarter in 2009 allowed individuals to receive funding via crowdsourcing for many technology-related products, including both new physical creations and documentaries, films, and web-series that focus on technology management . This circumvents the corporate or government oversight most inventors and artists struggle against, but leaves the accountability of the project completely with the individual receiving the funds.
The relationship between science and technology can be complex. Science may drive technological development, by generating demand for new instruments to address a scientific question, or by illustrating technical possibilities previously unconsidered. An environment of encouraged science will also produce scientists, engineers, and technical schools, which in turn encourage the innovation and entrepreneurship capable of taking advantage of the existing science. In fact, it is recognized that "innovators, like scientists, do require access to technical information and ideas" and "must know enough to recognize useful knowledge when they see it." [ 13 ] Science spillover also contributes to greater technological diffusion. [ 14 ] Having a strong policy contributing to basic science allows a country access to a strong knowledge base that will leave it "ready to exploit unforeseen developments in technology" [ 15 ] when needed in times of crisis.
For most of human history, technological improvements were arrived at by chance, trial and error, or spontaneous inspiration. Stokes referred to these innovators as " 'improvers of technology'…who knew no science and would not have been helped by it if they had." [ 15 ] This idea is supported by Diamond who further indicated that these individuals are "more likely to achieve a breakthrough if [they do] not hold the currently dominant theory in too high regard." [ 16 ] Research and development directed towards immediate technical application is a relatively recent occurrence, arising with the Industrial Revolution and becoming commonplace in the 20th century. In addition, there are examples of economies that do not emphasize science research that have been shown to be technological leaders despite this. For example, the United States relied on the scientific output of Europe in the early 20th century, though it was regarded as a leader in innovation. Another example is the technological advancement of Japan in the latter part of the same century, which emphasized more applied science (directly applicable to technology). [ 15 ]
Though the link between science and technology needs further clarification, what is known is that a society must have sufficient building blocks to encourage this link. A nation without emphasis on science is likely to eventually stagnate technologically and risk losing competitive advantage. The most critical areas of focus for policymakers are: discouraging excessive protections on job security, which lead to less mobility of the workforce; [ 17 ] encouraging the reliable availability of sufficient low-cost capital for investment in R&D through favorable economic and tax policies; [ 18 ] and supporting higher education in the sciences to produce scientists and engineers. [ 18 ]
The implementation of technology influences the values of a society by changing expectations and realities. The implementation of technology is also influenced by values. There are (at least) three major, interrelated values that inform, and are informed by, technological innovations:
Technology often enables organizational and bureaucratic group structures that otherwise and heretofore were simply not possible. Examples of this might include:
Technology enables greater knowledge of international issues, values, and cultures. Due mostly to mass transportation and mass media, the world seems to be a much smaller place, due to the following: [ 21 ]
Technology can provide understanding of and appreciation for the world around us, enable sustainability and improve environmental conditions but also degrade the environment and facilitate unsustainability.
Some polities may conclude that certain technologies' environmental detriments and other risks outweigh their benefits, especially if or once substitutive technologies have been or can be invented, leading to directed technological phase-outs such as the fossil fuel phase-out and the nuclear fission power phase-out .
Most modern technological processes produce unwanted byproducts in addition to the desired products, which are known as waste and pollution . While material waste is often re-used in industrial processes , many processes lead to a release into the environment with negative environmental side effects, such as pollution and lack of sustainability.
Some technologies are designed specifically with the environment in mind, but most are designed first for financial or economic effects such as the free market's profit motive . [ 22 ] The effects of a specific technology are often not only dependent on how it is used – e.g. its usage context – but also predetermined by the technology's design or characteristics, as in the theory of " the medium is the message ", which relates to media technologies in particular. In many cases, such predetermined or built-in implications may vary depending on contextual contemporary conditions such as human biology, international relations and socioeconomics. However, many technologies may be harmful to the environment only when used in specific contexts or for specific purposes that do not necessarily result from the nature of the technology.
Historically, from the perspective of economic agent-centered responsibility, an increased value placed on healthy environments and more efficient productive processes – as of 2021 still largely theoretic and informal – may be the result of an increase in the wealth of society. Once people are able to provide for their basic needs , they can not only afford more environmentally destructive products and services, but may also put effort – motivated, for example, by individual morality – into valuing less tangible goods such as clean air and water, provided that information about products, alternatives, consequences and services is adequate.
From the perspective of systems science and cybernetics , economies are systems in which economic actors and sectors make decisions based upon a range of system-internal factors and structures. Other outcomes would be the result of other architectures – or system-level configurations of the existing designs – which are considered possible in the sense that they could be modeled, tested, assessed in advance, developed and studied.
The effects of technology on the environment are both obvious and subtle. The more obvious effects include the depletion of nonrenewable natural resources (such as petroleum, coal, ores, and precious metals), and the added pollution of air , water, and land. The more subtle effects may include long-term effects (e.g. global warming , deforestation , natural habitat destruction , and coastal wetland loss.)
Each wave of technology creates a set of waste previously unknown by humans: toxic waste , radioactive waste , electronic waste , plastic waste , and space waste .
Electronic waste creates direct environmental impacts through the production and maintenance of the infrastructure necessary for using technology, and indirect impacts by breaking down barriers to global interaction through the use of information and communications technology. [ 23 ] Certain usages of information technology and infrastructure maintenance consume energy that contributes to global warming . This includes software designs such as international cryptocurrencies [ 24 ] and most hardware powered by nonrenewable sources.
One of the main problems is the lack of societal decision-making processes – such as the contemporary economy and politics – that would lead to the sufficient and expedient implementation of existing and potential efficient ways to remove, recycle and prevent these pollutants on a large scale.
Digital technologies, however, are important in achieving the green transition and specifically, the SDGs and European Green Deal 's environmental targets. Emerging digital technologies, if correctly applied, have the potential to play a critical role in addressing environmental issues. A few examples are: smart city mobility, precision agriculture , sustainable supply chains, environmental monitoring , and catastrophe prediction. [ 25 ] [ 26 ]
Society also controls technology through the choices it makes. These choices not only include consumer demands; they also include:
According to Williams and Edge, [ 27 ] the construction and shaping of technology includes the concept of choice (and not necessarily conscious choice). Choice is inherent in both the design of individual artifacts and systems, and in the making of those artifacts and systems.
The idea here is that a single technology may not emerge from the unfolding of a predetermined logic or a single determinant; instead, technology could be a garden of forking paths, with different paths potentially leading to different technological outcomes. This is a position that has been developed in detail by Judy Wajcman . Therefore, choices could have differing implications for society and for particular social groups.
In one line of thought, technology develops autonomously, in other words, technology seems to feed on itself, moving forward with a force irresistible by humans. To these individuals, technology is "inherently dynamic and self-augmenting." [ 28 ]
Jacques Ellul is one proponent of the irresistibleness of technology to humans. He espouses the idea that humanity cannot resist the temptation of expanding our knowledge and our technological abilities. However, he does not believe that this seeming autonomy of technology is inherent; rather, the perceived autonomy exists because humans do not adequately consider the responsibility that is inherent in technological processes.
Langdon Winner critiques the idea that technological evolution is essentially beyond the control of individuals or society in his book Autonomous Technology. He argues instead that the apparent autonomy of technology is a result of "technological somnambulism," the tendency of people to uncritically and unreflectively embrace and utilize new technologies without regard for their broader social and political effects.
In 1980, Mike Cooley published a critique of the automation and computerisation of engineering work under the title "Architect or Bee? The human/technology relationship". The title alludes to a comparison made by Karl Marx , on the issue of the creative achievements of human imaginative power. [ 29 ] According to Cooley, "Scientific and technological developments have invariably proved to be double-edged. They produced the beauty of Venice and the hideousness of Chernobyl; the caring therapies of Rontgen's X-rays and the destruction of Hiroshima." [ 30 ]
Individuals rely on governmental assistance to control the side effects and negative consequences of technology.
Recently, the social shaping of technology has had new influence in the fields of e-science and e-social science in the United Kingdom, which has made centers focusing on the social shaping of science and technology a central part of their funding programs. | https://en.wikipedia.org/wiki/Technology_and_society |
Technology assessment ( TA , German : Technikfolgenabschätzung , French : Évaluation des choix scientifiques et technologiques ) is a practical process of determining the value of a new or emerging technology in and of itself or against existing technologies. [ 1 ] This is a means of assessing and rating the new technology from the time when it was first developed to the time when it is potentially accepted by the public and authorities for further use. In essence, TA could be defined as "a form of policy research that examines short- and long term consequences (for example, societal, economic, ethical, legal) of the application of technology." [ 2 ]
TA is the study and evaluation of new technologies. It is a way of trying to forecast and prepare for upcoming technological advancements and their repercussions for society, and then to make decisions based on those judgments. It is based on the conviction that new developments within, and discoveries by, the scientific community are relevant for the world at large rather than just for the scientific experts themselves, and that technological progress can never be free of ethical implications. Technology assessment was initially practiced in the 1960s in the United States, where it focused on analyzing the significance of "supersonic transportation, pollution of the environment and ethics of genetic screening." [ 3 ]
Also, technology assessment recognizes the fact that scientists normally are not trained ethicists themselves and accordingly ought to be very careful when passing ethical judgement on their own, or their colleagues', new findings, projects, or work in progress. TA is a very broad phenomenon which also includes aspects such as "diffusion of technology (and technology transfer), factors leading to rapid acceptance of new technology, and the role of technology and society." [ 3 ]
Technology assessment assumes a global perspective and is future-oriented, not anti-technological. TA considers its task as an interdisciplinary approach to solving already existing problems and preventing potential damage caused by the uncritical application and the commercialization of new technologies.
Therefore, any results of technology assessment studies must be published, and particular consideration must be given to communication with political decision-makers.
An important problem concerning technology assessment is the so-called Collingridge dilemma : on the one hand, impacts of new technologies cannot be easily predicted until the technology is extensively developed and widely used; on the other hand, control or change of a technology is difficult as soon as it is widely used. The dilemma emphasizes that technologies in their early stage are unpredictable with regard to their implications, yet tough to regulate or control once widely accepted by society; shaping or directing a technology in a desired direction then becomes difficult for authorities. Several approaches have been put in place to tackle this dilemma, one of the most common being "anticipation." In this approach, authorities and assessors "anticipate ethical impacts of a technology ("technomoral scenarios"), being too speculative to be reliable, or on ethically regulating technological developments ("sociotechnical experiments"), discarding anticipation of the future implications." [ 4 ]
Technology assessments, which are a form of cost–benefit analysis , are a medium for decision makers to evaluate and analyze solutions with regard to a particular technology, and to choose the best possible option that is cost-effective and complies with regulatory and budgetary requirements. However, they are difficult if not impossible to carry out in an objective manner, since subjective decisions and value judgments have to be made regarding a number of complex issues such as (a) the boundaries of the analysis (i.e., what costs are internalized and externalized), (b) the selection of appropriate indicators of potential positive and negative consequences of the new technology, (c) the monetization of non-market values, and (d) a wide range of ethical perspectives. [ 5 ] Consequently, most technology assessments are neither objective nor value-neutral exercises but instead are greatly influenced and biased by the values of the most powerful stakeholders, which are in many cases the developers and proponents (i.e., corporations and governments) of new technologies under consideration. In the most extreme view, as expressed by Ian Barbour in Technology, Environment, and Human Values , technology assessment is "a one-sided apology for contemporary technology by people with a stake in its continuation." [ 6 ]
Overall, technology assessment is a very broad field which reaches beyond purely technological and industrial phenomena. It handles the assessment of effects, consequences, and risks of a technology, but also serves a forecasting function, looking into the projection of opportunities and skill development as an input into strategic planning. [ 7 ] Some of the major fields of TA are: information technology, hydrogen technologies , nuclear technology , molecular nanotechnology , pharmacology , organ transplants , gene technology , artificial intelligence , the Internet and many more.
The following types of concepts of TA are those that are most visible and practiced. There are, however, a number of further TA forms that are only proposed as concepts in the literature or are the label used by a particular TA institution. [ 8 ]
Many TA institutions are members of the European Parliamentary Technology Assessment (EPTA) network, some are working for the STOA panel of the European Parliament and formed the European Technology Assessment Group (ETAG). | https://en.wikipedia.org/wiki/Technology_assessment |
Technology education [ 1 ] is the study of technology , in which students "learn about the processes and knowledge related to technology". [ 2 ] As a field of study, it covers the human's ability to shape and change the physical world to meet needs, by manipulating materials and tools with techniques. It addresses the disconnect between wide usage and the lack of knowledge about technical components of technologies used and how to fix them. [ 3 ] This emergent discipline seeks to contribute to the learners' overall scientific and technological literacy , [ 4 ] and technacy .
Technology education should not be confused with educational technology . Educational technology focuses on a more narrow subset of technology use that revolves around the use of technology in and for education as opposed to technology education's focus on technology's use in general. [ 5 ]
Technology education is an offshoot of the Industrial Arts tradition in the United States and the Craft teaching or vocational education in other countries. [ 4 ] In 1980, through what was called the "Futuring Project", the name of " industrial arts education " was changed to be "technology education" in New York State ; the goal of this movement was to increase students' technological literacy. [ 6 ] Since the nature of technology education is significantly different from its predecessor, Industrial Arts teachers underwent inservice education in the mid-1980s while a Technology Training Network was also established by the New York State Education Department (NYSED). [ 4 ]
In Sweden, technology as a new subject emerged from the tradition of crafts subjects while in countries like Taiwan and Australia, its elements are discernible in historical vocational programs. [ 7 ]
In the 21st century, Mars suit design was utilized as a topic for technology education. [ 8 ]
TeachThought, a private entity, described technology education as being in a "status of childhood and bold experimentation." [ 9 ] A survey of teachers across the United States by an independent market research company found that 86 percent of teacher-respondents agree that technology must be used in the classroom, 96 percent say it promotes student engagement, and 89 percent agree technology improves student outcomes. [ 10 ] Technology is present in many education systems. As of July 2018, American public schools provide one desktop computer for every five students and spend over $3 billion annually on digital content. [ 11 ] In school year 2015–2016, the government conducted more state-standardized testing for elementary and middle levels through digital platforms than through the traditional pen and paper method. [ 12 ]
The digital revolution offers fresh learning prospects: students can learn online even when they are not inside the classroom. Advancement in technology entails new approaches to combining present and future technological improvements and incorporating these innovations into the public education system. [ 13 ] With technology incorporated into everyday learning, a new environment of personalized and blended learning emerges: students are able to complete work based on their own needs, enjoy the versatility of individualized study, and experience an evolved overall learning experience. The technology space in education is huge, and it advances and changes rapidly. [ 14 ] In the United Kingdom, computer technology helped elevate standards in different schools to confront various challenges. [ 15 ] The UK adopted the "Flipped Classroom" concept after it became popular in the United States; the idea is to reverse conventional teaching methods by delivering instruction online and outside of traditional classrooms. [ 16 ]
In Europe, the European Commission espoused a Digital Education Plan in January 2018. The program consists of 11 initiatives that support utilization of technology and digital capabilities in education development. [ 17 ] The Commission also adopted an action plan called the Staff Working Document [ 18 ] which details its strategy in implementing digital education. This plan includes three priorities formulating measures to assist European Union member-states to tackle all related concerns. [ 19 ] The whole framework will support the European Qualifications Framework for Lifelong Learning [ 20 ] and European Classification of Skills, Competences, Qualifications, and Occupations. [ 21 ]
In East Asia, the World Bank and South Korea's Ministry of Education, Science, and Technology co-sponsored a yearly two-day international symposium, [ 22 ] held in October 2017, to support education and ICT concerns for industry practitioners and senior policymakers. Participants plan and discuss issues in the use of new technologies for schools within the region. [ 23 ] | https://en.wikipedia.org/wiki/Technology_education |
The use of electronic and communication technologies as a therapeutic aid to healthcare practices is commonly referred to as telemedicine [ 1 ] or eHealth . [ 2 ] [ 3 ] [ 4 ] The use of such technologies as a supplement to mainstream therapies for mental disorders is an emerging mental health treatment field which, it is argued, could improve the accessibility, effectiveness and affordability of mental health care. [ 5 ] [ 6 ] Mental health technologies used by professionals as an adjunct to mainstream clinical practices include email , SMS , virtual reality , computer programs , blogs , social networks , the telephone , video conferencing , computer games , instant messaging and podcasts . [ 7 ] [ page needed ]
Traditional methods of helping people with a mental health problem have been to use approaches such as medication, counselling, cognitive behavioral therapy (CBT), exercise and a healthy diet. New technology can also be used in conjunction with traditional methods.
TED speaker Jane McGonigal's website Games For Change includes a health category, which presents many mental health improving and education games. Additionally, her own game, Super Better for PC, [ 8 ] IOS [ 9 ] and Android [ 10 ] is also meant for mental health improvement.
Rizzo et al. [ 11 ] have used virtual reality (VR) (simulated real environments through digital media) to successfully treat post-traumatic stress disorder (PTSD). The VR system offers a sense of realism in a safe environment. By gradually exposing the person to their fear with a Virtual Environment the patient becomes accustomed to the trigger of their problem to an extent that it no longer becomes an issue. This form of treatment has also been applied to other mental health problems such as phobias (where anxiety is triggered by a certain situation). For example, fear of flying or arachnophobia (fear of spiders). Computer games have also been used to provide therapy for adolescents. [ 12 ] Many adolescents are reluctant to have therapy and a computer game is a fun, anonymous and accessible way to receive therapeutic advice. An example of a computer game that provides such therapy is SPARX , which has notably been shown to be about as effective as face-to-face therapy in a clinical trial. [ 13 ]
Relatively new technology such as mobile phones have also been used to help people with mental health problems by providing timely information. [ 5 ] [ 14 ]
As technology improves, it may soon be possible for mobile phones or other devices to sense when people are changing state (e.g. entering a manic or a deeply depressed phase), for instance by noticing a change in voice pattern or usage frequency, or facial tension. It may also become possible to measure physical evidence of levels of distress and suffering, such as changes in hormones or adrenalin in blood, and changes in brain activity. Apps may also be able to predict high stress situations, based on location, time, activity (e.g. purchasing of alcohol) and nearby presence of high risk people. The technology could then send calming messages to patients, automatically alert carers and even automatically administer meds. [ 15 ]
Different technologies have been used in the mental health field over the past 30 years. "Mobile devices like cell phones, smartphones, and tablets are giving the public, doctors, and researchers new ways to access help, monitor progress, and increase understanding of mental wellbeing. New technology can also be packaged into an extremely sophisticated app for smartphones or tablets. Such apps might use the device's built-in sensors to collect information on a user's typical behavior patterns. If the app detects a change in behavior, it may provide a signal that help is needed before a crisis occurs" (Technology and the Future of Mental Health Treatment, n.d.). This connects to Quan-Haase's work on surveillance: a mobile app that knows a person's behavior holds private information about the people who use it, and those people are in effect being watched by the app's creator or company. The functional view argues that societies, in order to operate effectively, require some element of security and safety; to achieve these goals, personal information in surveillance differs only in degree, not in kind. "This form of surveillance is harmless since third-party companies are primarily interested in aggregate data and will use this information for the purpose of developing and marketing better products, which will benefit consumers in the long run". [ 16 ] (Quan-Haase, 2016, p. 222-223). Mental health apps also have advantages such as convenience, lower cost, and 24-hour availability.
Technology companies are developing mobile-based artificial intelligence chatbot applications that use evidence-based techniques, such as cognitive behavioral therapy (CBT), to provide early intervention to support mental health and emotional well-being challenges. [ 17 ] Artificial intelligence (AI) text-based conversational applications delivered securely and privately over mobile devices have the ability to scale globally and offer contextual and always-available support. A recent real world data evaluation study, [ 18 ] published in the open access journal JMIR mHealth & uHealth, that used an AI-based emotionally intelligent mobile chatbot app, Wysa, identified a significantly higher average improvement in symptoms of depression and a higher proportion of positive in-app experience among the more engaged users of the app as compared to the less engaged users.
On 15 June 2020, the Food and Drug Administration approved the first video game treatment, a game for children aged 8–12 with certain types of ADHD called EndeavorRx . It can be downloaded with a prescription onto a mobile device, and is intended for use in tandem with other treatments. Patients play it for 30 minutes a day, 5 days a week, over a month-long treatment plan. [ 19 ]
The development of mobile phone apps using cognitive behavioral therapy (CBT) is a growing research area. [ 20 ] Building on the idea of CBT apps, self-rated mental health (SRMH) assessments can be implemented in these apps and used as information before seeing a professional. Recent research on self-rated mental health involves surveys that ask respondents to rate their overall mental or emotional health from poor to excellent. [ 21 ] The research on SRMH showed that 62% of people with a mental health problem rated themselves as having positive mental health. Respondents who rated their mental health as good, when compared to those with poor mental health, had 30% lower odds of having a mental health problem at follow-up. This research showed that, even without treatment, people with a mental health problem did better if they perceived their mental health positively by reporting good overall mental or emotional health. [ 21 ]
While studies have investigated the clinical efficacy of remote-, internet- and chatbot-based therapy, there are other factors, such as enjoyment and smoothness, that are important for evaluating therapy sessions. Research published in 2019 reported a comparative study of therapy sessions following the interaction of 10 participants with human therapists versus a chatbot (simulated using a Wizard of Oz protocol), finding evidence to suggest that when compared against a human therapist control, participants find chatbot-provided therapy less useful, less enjoyable, and their conversations less smooth (a key dimension of a positively-regarded therapy session). [ 22 ]
A study suggests that combining cognitive behavioral therapy (CBT) with SlowMo, an app that helps people notice their "unhelpful fast-thinking" might be more effective for treating paranoia in people with psychosis than CBT alone. [ 23 ] [ 24 ]
From an economic perspective, digital interventions for mental health conditions seem to be cost-effective compared to no intervention or non-therapeutic responses such as monitoring. However, when compared to in-person therapy or medication, their added value is currently uncertain. [ 25 ]
There is uncertainty around the ethical and legal implications of digital technologies in the mental health context, including the use of artificial intelligence, machine learning, deep learning, and other forms of automation. Ethical and legal issues tend to not be explicitly addressed in empirical studies on algorithmic and data-driven technologies in mental health initiatives. [ 26 ] Concerns have been raised about the near-complete lack of involvement of mental health service users, the scant consideration of algorithmic accountability , and the potential for overmedicalization and techno-solutionism. [ 26 ] | https://en.wikipedia.org/wiki/Technology_in_mental_disorder_treatment |
The technology life cycle ( TLC ) describes the commercial gain of a product, from the expense of the research and development phase through the financial return during its "vital life". Some technologies, such as steel, paper or cement manufacturing, have a long lifespan (with minor variations in technology incorporated with time) while in other cases, such as electronic or pharmaceutical products, the lifespan may be quite short. [ 1 ]
The TLC associated with a product or technological service is different from product life-cycle (PLC) dealt with in product life-cycle management . The latter is concerned with the life of a product in the marketplace with respect to timing of introduction, marketing measures, and business costs. The technology underlying the product (for example, that of a uniquely flavoured tea) may be quite marginal but the process of creating and managing its life as a branded product will be very different.
The technology life cycle is concerned with the time and cost of developing the technology, the timeline of recovering cost, and modes of making the technology yield a profit proportionate to the costs and risks involved. The TLC may, further, be protected during its cycle with patents and trademarks seeking to lengthen the cycle and to maximize the profit from it.
The product of the technology may be a commodity such as polyethylene plastic or a sophisticated product like the integrated circuits used in a smartphone .
The development of a competitive product or process can have a major effect on the lifespan of the technology, shortening it. Equally, the loss of intellectual property rights through litigation or loss of its secret elements (if any) through leakage also works to reduce a technology's lifespan. Thus, it is apparent that the management of the TLC is an important aspect of technology development.
Most new technologies follow a similar technology maturity life cycle describing the technological maturity of a product. This is not similar to a product life cycle, but applies to an entire technology, or a generation of a technology.
Technology adoption is the most common phenomenon driving the evolution of industries along the industry life cycle. Industries begin by expanding new uses of resources and end by exhausting the efficiency of those processes, producing gains that are at first easier and larger but that become exhaustingly more difficult over time as the technology matures .
The Soviet economist Nikolai Kondratiev was the first to observe the technology life cycle in his book The Major Economic Cycles (1925). [ 2 ] [ 3 ] [ 4 ] Today, these cycles are called Kondratiev waves , the predecessor of the TLC. The TLC is composed of four phases: research and development, ascent, maturity, and decline.
The shape of the technology life cycle is often referred to as an S-curve. [ 5 ]
There is usually technology hype at the introduction of any new technology, but only after some time has passed can it be judged as mere hype or justified true acclaim.
Because of the logistic curve nature of technology adoption, it is difficult to see in the early stages whether the hype is excessive.
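This difficulty can be made concrete: far below saturation, a logistic adoption curve is numerically almost indistinguishable from unbounded exponential growth. The following sketch, with invented parameter values, compares the two.

```python
# Sketch: a logistic curve vs. its early-stage exponential approximation.
# All parameter values are illustrative assumptions.
import math

def logistic(t, capacity=1.0, rate=1.0, midpoint=10.0):
    return capacity / (1.0 + math.exp(-rate * (t - midpoint)))

def early_exponential(t, rate=1.0, midpoint=10.0):
    # Limit of the logistic for t far below its midpoint.
    return math.exp(rate * (t - midpoint))

for t in range(0, 6):
    print(t, round(logistic(t), 5), round(early_exponential(t), 5))
# For small t the two columns agree closely; the curves only diverge
# as adoption approaches saturation.
```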
Similarly, in the later stages, the opposite mistakes can be made relating to the possibilities of technology maturity and market saturation .
The technology adoption life cycle typically occurs in an S curve, as modelled in diffusion of innovations theory. This is because customers respond to new products in different ways. Diffusion of innovations theory, pioneered by Everett Rogers , posits that people have different levels of readiness for adopting new innovations and that the characteristics of a product affect overall adoption. Rogers classified individuals into five groups: innovators, early adopters, early majority, late majority, and laggards. In terms of the S curve, innovators occupy 2.5%, early adopters 13.5%, early majority 34%, late majority 34%, and laggards 16%.
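The familiar percentages come from slicing a normal ("bell curve") distribution of adoption times at one and two standard deviations around the mean; a short standard-library sketch reproduces them:

```python
# Rogers's adopter categories as slices of a standard normal distribution.
from statistics import NormalDist

nd = NormalDist()  # mean 0, standard deviation 1
cuts = [
    ("innovators", float("-inf"), -2.0),
    ("early adopters", -2.0, -1.0),
    ("early majority", -1.0, 0.0),
    ("late majority", 0.0, 1.0),
    ("laggards", 1.0, float("inf")),
]
for name, lo, hi in cuts:
    share = nd.cdf(hi) - nd.cdf(lo)
    print(f"{name:>14}: {100 * share:.1f}%")
# Prints roughly 2.3 / 13.6 / 34.1 / 34.1 / 15.9 percent - the familiar
# 2.5 / 13.5 / 34 / 34 / 16 figures after rounding.
```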
The four stages of the technology life cycle – research and development, ascent, maturity, and decline – are discussed in turn below. [ 6 ]
Large corporations develop technology for their own benefit and not with the objective of licensing. The tendency to license out technology only appears when there is a threat to the life of the TLC (business gain) as discussed later. [ 7 ]
There are always smaller firms ( SMEs ) that are inadequately positioned to finance the development of innovative R&D in the post-research and early technology phases. By sharing incipient technology under certain conditions, substantial risk financing can come from third parties. This is a form of quasi-licensing which takes different formats. Even large corporates may not wish to bear all costs of development in areas of significant and high risk (e.g. aircraft development) and may seek means of spreading it to the stage that proof-of-concept is obtained.
In the case of small and medium firms, entities such as venture capitalists or business angels, can enter the scene and help to materialize technologies. Venture capitalists accept both the costs and uncertainties of R&D, and that of market acceptance, in reward for high returns when the technology proves itself. Apart from finance, they may provide networking, management and marketing support. Venture capital connotes financial as well as human capital.
Larger firms may opt for Joint R&D or work in a consortium for the early phase of development. Such vehicles are called strategic alliances – strategic partnerships.
With both venture capital funding and strategic (research) alliances, when business gains begin to neutralize development costs (the TLC crosses the X-axis), the ownership of the technology starts to undergo change.
In the case of smaller firms, venture capitalists help clients enter the stock market for obtaining substantially larger funds for development, maturation of technology, product promotion and to meet marketing costs. A major route is through initial public offering (IPO) which invites risk funding by the public for potential high gain. At the same time, the IPOs enable venture capitalists to attempt to recover expenditures already incurred by them through part sale of the stock pre-allotted to them (subsequent to the listing of the stock on the stock exchange). When the IPO is fully subscribed, the assisted enterprise becomes a corporation and can more easily obtain bank loans, etc. if needed.
Strategic alliance partners, allied on research, pursue separate paths of development with the incipient technology of common origin but pool their accomplishments through instruments such as 'cross-licensing'. Generally, contractual provisions among the members of the consortium allow a member to exercise the option of independent pursuit after joint consultation; in which case the optee owns all subsequent development.
The ascent stage of the technology usually refers to some point above Point A in the TLC diagram, but it actually commences when the R&D portion of the TLC curve inflects (the cash flow simply remains negative and unremunerative up to Point A). The ascent is the strongest phase of the TLC because it is here that the technology is superior to alternatives and can command premium profit or gain. The slope and duration of the ascent depend on competing technologies entering the domain, although they may not be as successful in that period. Strongly patented technology extends the duration period.
The TLC begins to flatten out (the region shown as M) when equivalent or challenging technologies come into the competitive space and begin to eat away market share.
Until this stage is reached, the technology-owning firm tends to enjoy its profitability exclusively, preferring not to license the technology. If an overseas opportunity does present itself, the firm will usually prefer to set up a controlled subsidiary rather than license a third party.
The maturity phase of the technology is a period of stable and remunerative income, and its competitive viability can persist over the larger timeframe marked by its 'vital life'. However, there may be a tendency to license out the technology to third parties during this stage, to lower the risk of declining profitability (or competitiveness) and to expand financial opportunity.
Exercising this option is, generally, inferior to seeking participatory exploitation; in other words, engagement in a joint venture , typically in regions where the technology would still be in its ascent phase , such as a developing country. In addition to providing financial opportunity, this allows the technology owner a degree of control over the technology's use. Gain flows from two streams: investment-based income and royalty income. Further, such a strategy enhances the vital life of the technology.
After reaching a point such as D in the above diagram, the earnings from the technology begin to decline rather rapidly. To prolong the life cycle, owners of technology might try to license it out at some point L, when it can still be attractive to firms in other markets. This then traces the lengthening path LL'. Further, since the decline is the result of rising competing technologies in this space, licensees may be attracted to the generally lower cost of the older technology (compared with what prevailed during its vital life).
Licenses obtained in this phase are 'straight licenses'. They are free of direct control from the owner of the technology (as would otherwise apply, say, in the case of a joint-venture). Further, there may be fewer restrictions placed on the licensee in the employment of the technology.
The utility and viability, and thus the cost, of straight licenses depend on the estimated 'balance life' of the technology. For instance, should the key patent on the technology have expired, or be about to expire, the residual viability of the technology may be limited, although balance life may be governed by other criteria such as know-how, which can have a longer life if properly protected.
The licensee has no way of knowing the stage at which the prime, and competing, technologies are on their TLCs . It would be evident to competing licensor firms, and to the originator, from the growth, saturation or decline of the profitability of their operations.
The licensee may, however, be able to approximate the stage by vigorously negotiating with the licensor and competitors to determine costs and licensing terms. A lower cost, or easier terms, may imply a declining technology.
In any case, access to technology in the decline phase is a large risk that the licensee accepts. (In a joint venture this risk is substantially reduced because the licensor shares it.) Sometimes, financial guarantees from the licensor may reduce such risk and can be negotiated.
There are instances when, even though the technology has declined into a mere technique, it may still contain important knowledge or experience that the licensee firm cannot learn without help from the originator. This is often the form that technical service and technical assistance contracts take (encountered often in developing-country contracts). Alternatively, consulting agencies may fill this role.
According to the Encyclopedia of Earth , "In the simplest formulation, innovation can be thought of as being composed of research, development, demonstration, and deployment." [ 8 ]
Technology development cycle describes the process of a new technology through the stages of technological maturity: | https://en.wikipedia.org/wiki/Technology_life_cycle |
Technology management refers to the integrated planning, design, optimization, operation and control of technological products, processes and services, in order to manage the use of technology for human advantage. It contains a number of management disciplines that allow organizations to manage their technological fundamentals to benefit their customers. The role of the technology management function in an organization is to understand the value of certain technology for the organization and for the customer, and to identify when it is better to invest in technology development and when to withdraw.
Typical concepts used in technology management include technology strategy, technology forecasting, technology roadmapping, and technology portfolio management.
In the United States, Technology Management was deemed an emerging field of study by the Department of Education and received a new Classification of Instructional Program (CIP) code in 2020. [ 3 ] The Association of Technology, Management, and Applied Engineering (ATMAE) accredits collegiate programs in technology management. An instructor or graduate of a technology management program may choose to become a Certified Technology Manager (CTM) by sitting an exam administered by ATMAE covering production planning & control, safety, quality, and management/supervision. The ATMAE program accreditation is recognized by the Council for Higher Education Accreditation (CHEA) for accrediting associate, baccalaureate, and master's degree technology management programs. [ 4 ] | https://en.wikipedia.org/wiki/Technology_management |
Tech mining or technology mining refers to applying text mining methods to technical documents. For patent analysis purposes, it is named ' patent mining '. Porter, as one of the pioneers in technology mining, defined ‘tech mining’ in his book [ 1 ] as follows: “the application of text mining tools to science and technology information, informed by understanding of technological innovation processes.” Therefore, tech mining has two significant characteristics: 1) it uses ‘text mining tools’, and 2) it applies these tools to ‘technology management’. Also, technology mining can be considered one of the branches of technology intelligence.
Technology mining has many applications, including R&D portfolio selection, R&D project initiation, new product development , strategic technology planning, technology roadmapping , etc. [ 2 ] A tech miner should communicate closely with target users to learn what technological issues they have and how they want to address them. The numbers of published papers and citations in the technology mining area have grown rapidly; there was a jump in the number of publications after 2005 and a sharp rise in the number of citations after 2012. [ 3 ]
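As a minimal illustration of the idea, the sketch below counts technology-related terms across a handful of patent abstracts. The abstracts, the term list, and the scoring are all hypothetical; real tech-mining tools operate on large patent and publication databases with far richer analytics.

```python
# Toy illustration of tech mining: term-frequency profiling of patent
# abstracts. The abstracts and term list are hypothetical examples only.

from collections import Counter
import re

abstracts = [
    "A lithium-ion battery electrode with a silicon anode coating.",
    "Method for recycling lithium from spent battery cells.",
    "Solid-state battery separator using a ceramic electrolyte.",
]

terms_of_interest = {"lithium", "battery", "anode", "electrolyte", "silicon"}

def term_profile(texts: list[str]) -> Counter:
    """Count occurrences of the terms of interest across all texts."""
    counts: Counter = Counter()
    for text in texts:
        for token in re.findall(r"[a-z]+", text.lower()):
            if token in terms_of_interest:
                counts[token] += 1
    return counts

print(term_profile(abstracts).most_common())
# e.g. [('battery', 3), ('lithium', 2), ('silicon', 1), ...]
```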
| https://en.wikipedia.org/wiki/Technology_mining |
Since the late 20th century, the Metropolitan Transportation Authority has started several projects to maintain and improve the New York City Subway . Some of these projects, such as subway line automation , proposed platform screen doors , the FASTRACK maintenance program, and infrastructural improvements proposed in 2015–2019 Capital Program, contribute toward improving the system's efficiency. Others, such as train-arrival "countdown clocks", "Help Point" station intercoms, "On the Go! Travel Station" passenger kiosks, wireless and cellular network connections in stations, MetroCard fare payment alternatives, and digital ads, are meant to benefit individual passengers. Yet others, including the various methods of subway construction, do not directly impact the passenger interface, but are used to make subway operations efficient.
In the mid-1990s, it started converting the BMT Canarsie Line to use communications-based train control , using a moving block signal system that allowed more trains to use the tracks and thus increasing passenger capacity. After the Canarsie Line tests were successful, the MTA expanded the automation program in the 2000s and 2010s to include other lines. This led to a 2017 proposal to install platform screen doors in one Canarsie Line station. Additionally, as part of another program called FASTRACK, the MTA started closing certain lines during weekday nights in 2012, with each of the lines closing overnight for a week in order to allow workers to clean these lines without being hindered by train movements. The program was expanded beyond Manhattan the next year after observing the increased efficiency of the FASTRACK program compared to previous service diversions. In 2015, the MTA announced a wide-ranging improvement program as part of the 2015–2019 Capital Program. Thirty stations would be extensively rebuilt under the Enhanced Station Initiative , and new R211 subway cars would be able to fit more passengers.
The MTA has also started some projects to improve passenger amenities. It added train arrival "countdown clocks" to most A Division (numbered route ) stations and the BMT Canarsie Line ( L train) by late 2011, allowing passengers on these routes to see train arrival times using real-time data. A similar countdown-clock project for the B Division (lettered routes) and the IRT Flushing Line was deferred until 2016, when a new Bluetooth -based clock system was tested successfully. Beginning in 2011, the MTA installed "Help Point" intercoms in all stations to aid with emergency calls and station agent assistance. Interactive touchscreen kiosks, which give station advisories, itineraries, and timetables, were installed starting in 2011. Cellular phone and wireless data service in stations, first installed in 2011 as part of a pilot program, was expanded systemwide due to positive passenger feedback. Additionally, credit-card trials at several subway stations in 2006 and 2010 led to proposals for OMNY , a contactless payment system to replace the aging MetroCard system used to pay fares on MTA-operated transportation. Finally, in 2017, the MTA started installing digital advertisements in trains and stations.
When the IRT subway debuted in 1904, [ 1 ] [ 2 ] the typical tunnel construction method was cut-and-cover . [ 3 ] [ 4 ] The street was torn up to dig the tunnel below before being rebuilt from above. [ 3 ] [ 4 ] Traffic on the street above would be interrupted due to the digging up of the street. [ 5 ] Temporary steel and wooden bridges carried surface traffic above the construction. [ 6 ] The 7,700 workers who built the original subway lines were mostly immigrants living in Manhattan. [ 7 ] [ 8 ]
Contractors in this type of construction faced many obstacles, both natural and man-made. They had to deal with rock formations and groundwater, which required pumps. Twelve miles (19 km) of sewers, as well as water and gas mains, electric conduits, and New York City steam system pipes, had to be rerouted. Street railways had to be torn up to allow the work. The foundations of tall buildings often ran near the subway construction, and in some cases needed underpinning to ensure stability. [ 9 ]
This method worked well for digging soft dirt and gravel near the street surface. [ 3 ] However, tunneling shields were required for deeper sections, such as the Harlem and East River tunnels, which used cast-iron tubes. The segments between 33rd and 42nd Streets under Park Avenue , between 116th Street and 120th Street under Broadway, and between 145th Street and Dyckman Street (Fort George) under Broadway and Saint Nicholas Avenue, as well as the tunnel from 96th Street to Central Park North–110th Street & Lenox Avenue , used either rock tunnels or concrete-lined tunnels. [ 3 ] [ 4 ]
About 40% of the subway system runs on surface or elevated tracks, including steel or cast iron elevated structures , concrete viaducts , embankments , open cuts and surface routes. [ 10 ] All of these construction methods are completely grade-separated from road and pedestrian crossings, and most crossings of two subway tracks are grade-separated with flying junctions . The only level junctions between two lines in regular revenue service are the 142nd Street junction [ 11 ] and the Myrtle Avenue junction . [ 12 ] [ 13 ]
More recent projects use tunnel boring machines , which minimize disruption at street level and avoid already existing utilities, but increase cost. [ 14 ] Examples of such projects include the extension of the IRT Flushing Line [ 15 ] [ 16 ] [ 17 ] [ 18 ] and the IND Second Avenue Line . [ 19 ]
The MTA has plans to upgrade much of New York City Subway system from a fixed block signaling system to one with communications-based train control (CBTC) technology, which will control the speed and starting and stopping of subway trains. The CBTC system is mostly automated and uses a moving block system – which reduces headways between trains, increases train frequencies and capacities, and relays the trains' positions to a control room – rather than a fixed block system. This will require new rolling stock to be built for the subway system, as only newer trains can use CBTC systems. [ 20 ] [ 21 ]
Trains using CBTC locate themselves by measuring their distance past fixed transponders installed between the rails. Trains equipped with CBTC have a transponder interrogator antenna beneath each carriage, which communicates with the fixed trackside transponders and reports the train's location to a wayside Zone Controller via radio. The Zone Controller then issues Movement Authorities to the trains. This technology upgrade will allow trains to be operated at closer distances, slightly increasing capacity; will allow the MTA to keep track of trains in real time and provide more information to the public regarding train arrivals and delays; and will obviate the need for complex interlocking towers. [ 22 ] The trains are also equipped with computers inside the cab so that the conductor can monitor the train's speed and relative location. [ 23 ]
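The moving-block principle can be made concrete with a small sketch. The following is an illustration only, not the MTA's or Siemens's actual Trainguard MT logic; the class names, distances, and the 50 m safety margin are all hypothetical.

```python
# Minimal sketch of moving-block movement-authority logic, assuming trains
# locate themselves by dead reckoning from the last fixed transponder passed.

from dataclasses import dataclass

@dataclass
class Train:
    train_id: str
    last_transponder_pos: float  # position (m) of last transponder passed
    odometer_since: float        # distance (m) measured since that transponder
    length: float                # train length (m)

    @property
    def front(self) -> float:
        # Position is the transponder location plus measured travel beyond it.
        return self.last_transponder_pos + self.odometer_since

    @property
    def rear(self) -> float:
        return self.front - self.length

class ZoneController:
    """Receives radio position reports and issues movement authorities."""

    SAFETY_MARGIN = 50.0  # metres kept clear behind the leader (hypothetical)

    def __init__(self) -> None:
        self.positions: dict[str, Train] = {}

    def report(self, train: Train) -> None:
        self.positions[train.train_id] = train

    def movement_authority(self, train_id: str) -> float:
        """Furthest point the train may advance: the rear of the nearest
        train ahead, minus a safety margin (the moving block)."""
        me = self.positions[train_id]
        ahead = [t.rear for t in self.positions.values()
                 if t.train_id != train_id and t.rear > me.front]
        if not ahead:
            return float("inf")  # no train ahead in this zone
        return min(ahead) - self.SAFETY_MARGIN

zc = ZoneController()
zc.report(Train("A", last_transponder_pos=1000.0, odometer_since=180.0, length=184.0))
zc.report(Train("B", last_transponder_pos=0.0, odometer_since=350.0, length=184.0))
print(zc.movement_authority("B"))  # train B may advance to 946.0 m
```

Because the authority limit moves continuously with the leading train, following trains can close up far more tightly than under fixed blocks, which is where the headway and capacity gains described above come from.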
The BMT Canarsie Line ( L service) was the first line to implement the automated technology, using Siemens 's Trainguard MT CBTC system, as it is a self-contained line with none of the route interlining seen elsewhere in the system. [ 24 ] The CBTC project was first proposed in 1994 and approved by the MTA in 1997. [ 23 ] Installation of the signal system began in 2000. Initial testing began in 2004, [ 25 ] and installation was mostly completed by December 2006, with all CBTC-equipped R143 subway cars in service by that date. [ 24 ] Due to an unexpected ridership increase on the Canarsie Line, the MTA ordered more R160 cars, which were put into service in 2010. This enabled the agency to operate up to 26 trains per hour, up from the May 2007 service level of 15 trains per hour, an achievement that would not have been possible without the CBTC technology or a redesign of the previous automatic block signal system. [ 24 ] The R143s and R160s both use Trainguard MT CBTC , supplied by Siemens. [ 26 ]
The next line to have CBTC installed was the pre-existing IRT Flushing Line and its western extension opened in 2015 (served by the 7 and <7> trains). The Flushing Line was chosen for the second implementation of CBTC because it is also a self-contained line with no direct connections to other subway lines currently in use. The 2010–2014 capital budget provided funding for CBTC installation on the Flushing Line, with scheduled installation originally set for completion in 2016. [ 27 ] The R188 cars were ordered in 2010 to equip the line with compatible rolling stock. [ 28 ] This order consists of new cars and retrofits of existing R142A cars for CBTC. [ 29 ] However, the CBTC retrofit date was later pushed back to 2017 [ 30 ] or 2018. [ 31 ] The installation is being done by Thales Group . [ 32 ]
Siemens and Thales successfully conducted tests on one of the IND Culver Line 's tracks to determine if their CBTC systems were compatible, thus allowing installation of CBTC on the rest of the B Division . [ 33 ] In 2016, Siemens and Thales were awarded a contract to install CBTC on the IND Queens Boulevard Line from 50th Street/8th Avenue and 47th–50th Streets–Rockefeller Center to Kew Gardens–Union Turnpike . [ 26 ] Planning for phase one started in 2015 and was complete by February 2016, with major engineering work following in November 2016. [ 34 ] [ 35 ] Funding for CBTC on the IND Eighth Avenue Line from 59th Street–Columbus Circle to High Street is also provided in the 2015–2019 Capital Program, along with the modernization of interlockings at 30th and 42nd Streets. [ 36 ] The local tracks of the IND Culver Line would also get CBTC as part of the 2015–2019 Capital Program, as well as the entire line between Church Avenue and West Eighth Street–New York Aquarium , with three interlockings to be upgraded on that stretch. [ 36 ]
As of 2014 , the MTA projects that 355 miles (571 km) of track will receive CBTC signals by 2029, including most of the IND, as well as the IRT Lexington Avenue Line and the BMT Broadway Line . [ 20 ] The MTA is also planning to install CBTC equipment on the IND Crosstown Line , the BMT Fourth Avenue Line and the BMT Brighton Line before 2025. [ 21 ]
Additionally, the New York City Subway uses a system known as Automatic Train Supervision (ATS) for dispatching and train routing on the A Division [ 37 ] (the Flushing Line and the trains used on the 7 and <7> services do not have ATS). [ 37 ] ATS allows dispatchers in the Operations Control Center (OCC) to see where trains are in real time, and whether each individual train is running early or late. [ 37 ] Dispatchers can hold trains for connections, re-route trains, or short-turn trains to provide better service when a disruption causes delays. [ 37 ]
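As a rough illustration of the kind of decision support ATS provides, the sketch below compares a train's actual time against its schedule and applies a simple hold-for-connection rule. The thresholds and rules are hypothetical, not NYCT dispatching policy.

```python
# Illustrative sketch of ATS-style schedule adherence and a
# hold-for-connection decision; figures are hypothetical.

from datetime import datetime, timedelta

def adherence(scheduled: datetime, actual: datetime) -> timedelta:
    """Positive result = running late; negative = running early."""
    return actual - scheduled

def should_hold_for_connection(departure: datetime,
                               connecting_arrival: datetime,
                               max_hold: timedelta = timedelta(minutes=2)) -> bool:
    """Hold a train if a connecting train arrives within the hold window."""
    wait = connecting_arrival - departure
    return timedelta(0) < wait <= max_hold

now = datetime(2017, 3, 1, 8, 30)
print(adherence(scheduled=datetime(2017, 3, 1, 8, 28), actual=now))  # 2 min late
print(should_hold_for_connection(departure=now,
                                 connecting_arrival=now + timedelta(seconds=90)))  # True
```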
In 2017, the MTA started testing ultra-wideband radio-enabled train signaling on the IND Culver Line . The ultra-wideband train signals would be able to carry more data wirelessly in a manner similar to CBTC, but can be installed faster. The ultra-wideband signals would have the added benefit of allowing passengers to use cellphones while between stations, instead of the current setup (see Technology of the New York City Subway § Cellular phone and wireless data ) that only provides cellphone signals within the stations. [ 38 ] [ 39 ]
The MTA has long been reluctant to install platform screen doors in the subway system, though it had been considering such an idea since the 1980s. [ 40 ] Originally, it was planned to install platform doors in several stations along the Second Avenue Subway and on the 7 Subway Extension , but their installation presented substantial technical challenges, as there are different placements of doors on New York City Subway rolling stock . [ 40 ] [ 41 ] The platform-door proposal was scrapped in 2012 because of high installation and maintenance costs; rolling stock door placement; the need to provide a suitable signal interface between the train and platform; and the potential delay in operations that would result from the operation of such doors. [ 41 ]
The MTA is also interested in retrofitting platform screen doors on the Canarsie Line , along the L train, and on the IRT Flushing Line , along the 7 and <7> trains. However, it is unlikely that the entire New York City Subway system will be retrofitted with platform screen doors or automatic platform gates [ 42 ] due to, again, the varying placements of doors on rolling stock. [ 43 ] Following a series of incidents during one week in November 2016, in which three people were injured or killed after being pushed onto the tracks, the MTA started to consider installing platform edge doors for the 42nd Street Shuttle . [ 44 ] By 2017, a pilot program for platform screen door technology was underway at the Pelham Parkway station in the Bronx. [ 45 ]
The MTA conducted an internal study of the system in 2019 to determine whether platform screen doors could be installed at each station. The MTA concluded that only 128 stations, or 27 percent of the network, could theoretically be fitted with platform screen doors. Of those, only 41 stations could theoretically receive such doors in 2019, due to mismatches in door positions between different rolling stock, [ note 1 ] and it would take ten years to achieve a uniform door position among all rolling stock. [ 46 ] [ 47 ] Of the infeasible stations, 154 stops could not receive platform doors because the resulting platform would be too narrow under the ADA, while 100 stops (mostly above ground) had precast concrete platforms that would not be able to support the weight of the doors. The MTA claimed the remaining stations could not be refitted because of persistent fleet alignment issues; [ note 2 ] columns that were too close to the platform edge; an inaccessible platform; insufficient space for a platform-door equipment room; and, in one case ( 14th Street-Union Square on the Lexington Avenue Line ), gap fillers . [ 46 ]
In October 2017, it was announced that as part of a pilot program , the Canarsie Line's Third Avenue station was planned to be refitted with platform screen doors while the 14th Street Tunnel was rebuilt from April 2019 to March 2020. This was possible as a result of the L train's automated train operation . The MTA would have used the results of the pilot to determine the feasibility of adding such doors citywide. [ 48 ] [ 49 ] The PSDs would have been approximately 54 in (140 cm) high and would have been coordinated with the location of the subway car doors when a train was in the station. [ 50 ] To ensure that the subway cars were precisely lined up with the doors, a wayside-only berthing system would have been installed. Emergency egress gates would have been installed between the regular doors to allow people to exit in an emergency. The platform edges and topping would have been removed and replaced so that they aligned with the sills of the train doors and complied with the Americans with Disabilities Act of 1990 . To ensure that people did not get trapped between the subway car doors and the PSDs, sensors and CCTV cameras would have been installed, with monitors at the center and front of the platforms visible to the train operator and conductor. [ 51 ] In June 2018, the $30 million for the platform edge door pilot program was diverted to another project, and the pilot program was postponed until sufficient funding could be found. [ 52 ] Stations constructed as part of the Second Avenue Subway's Phase 2 may receive platform screen doors depending on the results of studies being conducted for their installation elsewhere. [ 53 ] : 15
The MTA announced another PSD pilot program at three stations in February 2022: the 7 and <7> trains' platform at Times Square ; the E train's platform at Sutphin Boulevard–Archer Avenue–JFK Airport ; and the Third Avenue station. [ 54 ] [ 55 ] The announcement came after several people had been shoved onto tracks, including one incident that led to the death of Michelle Go at the Times Square station. [ 55 ]
On July 13, 2022, the MTA released a request for proposals for a design-build contract to install PSDs at the three pilot stations. [ 56 ] To ensure the maintenance of the PSDs, there will be a separate long-term maintenance contract. The platforms at the stations will be rebuilt to support the weight of the PSDs, including the replacement of concrete and rubbing boards, the repositioning of tactile tiles, and steel reinforcement. Wayside-only berthing systems will be installed, with stopping locations at Times Square and Third Avenue being synchronized with the existing CBTC signal system. At these two stations, existing track will be replaced. To ensure riders can exit trains in the case of an emergency, emergency exit doors with push bars will be installed in the three stations, and to prevent riders from being trapped between the PSDs and train doors, door entrapment sensors will be installed. A PSD storage room and a PSD control room will be constructed in each station. [ 57 ] The doors are planned to be installed starting in December 2023 at a cost of $6 million. [ 58 ] Designs for the platform doors were being finalized by June 2023. [ 59 ] [ 60 ]
In 2023, short barriers were installed at the centers of the platforms at 57th Street , Bedford Avenue , and Crescent Street to reduce the probability of passengers being pushed into the tracks. [ 61 ] In 2024, the MTA announced that it would install low platform fences at four stations (including 191st Street and Clark Street ) to reduce the likelihood of passengers falling onto the tracks. [ 62 ] [ 63 ] The barriers consist of low yellow fences, spaced along the length of the platform; there are no sliding platform screen doors between the barriers. [ 62 ] [ 64 ] The barriers have since been installed at additional stations, including Fifth Avenue , Flushing–Main Street , and multiple BMT Canarsie Line stations. [ 65 ]
All subway trains have been air-conditioned since 1993, but most stations do not have any form of air conditioning. [ 66 ] Seven of the New York City Subway's 472 stations contain artificial air-conditioning systems. The air-cooling systems are mostly located in subway stations that were built in the 21st century. In August 2006, the MTA revealed that all new subway stations would be outfitted with air-cooling systems to reduce the temperature along platforms by as much as 10 °F (6 °C). [ 67 ] [ 68 ] The stations with artificial cooling systems are the Grand Central–42nd Street station on the 4 , 5 , 6 , and <6> trains; [ 69 ] the Cortlandt Street and South Ferry stations on the 1 train, which both replaced older stations; [ 70 ] [ 69 ] the 34th Street–Hudson Yards station on the 7 and <7> trains; [ 71 ] three stations on the Second Avenue Subway ; [ 68 ] the Lexington Avenue–63rd Street station ; and the Cortlandt Street station on the N , R , and W trains. [ 72 ] Fans are used at five additional stations, all on IRT lines. [ 72 ]
The leader of MTA's construction department said in 2022 that it was not feasible to install air conditioning in most older stations. [ 66 ] This is both because of the high power requirements for the air-cooling systems and because the presence of ventilation grates in older stations would reduce the efficiency of an air conditioning system. The Grand Central–42nd Street station is a major exception, since there is a large cooling plant for Grand Central Terminal immediately above the platforms that are air-conditioned; the plant was installed in 2000. According to The New York Times , it would cost $4.8 billion to install air-conditioning units in all other below-ground stations. [ 66 ] In September 2023, the MTA began studying the feasibility of installing air conditioning in other stations. [ 73 ] [ 74 ]
In January 2012, [ 75 ] [ 76 ] the MTA introduced a new maintenance program, FASTRACK, to speed up repair work. This program involves a more drastic approach than previous construction, and completely shuts down a major portion of a line for overnight work on four consecutive weeknights from 10 p.m. to 6 a.m. [ 77 ] According to the MTA, this new program proved much more efficient and quicker than regular service changes, especially because it happened at night and not the weekend, when most transit closures had occurred before. [ 78 ] In 2012 the program only closed lines in Midtown and Lower Manhattan, [ 79 ] [ note 3 ] but due to the success of the program, the MTA decided to expand it to the outer boroughs as well. [ 80 ] In 2013, FASTRACK was expanded to other corridors requiring minimal shuttle buses [ 81 ] [ note 4 ] and in 2014 to even more locations. [ 82 ] There were corridors scheduled for 2014 during 24 weeks of the year, [ note 5 ] 12 corridors scheduled during 22 weeks in 2015, [ 83 ] and 13 corridors scheduled during 21 weeks in 2016. [ 84 ]
As part of an $836 million program to resolve the subway's 2017–2021 state of emergency , MTA Chairman Joe Lhota announced the expansion of the FASTRACK program in order to fix critical infrastructure faster. [ 85 ] [ 86 ] [ 87 ]
The 2015–2019 MTA Capital Plan included funds for the Enhanced Station Initiative (ESI), under which thirty-three stations in all five boroughs would undergo a complete overhaul and would be entirely closed for up to 6 months at a time. [ 88 ] [ 89 ] The 34th Street–Penn Station stops on the IRT Broadway–Seventh Avenue Line and IND Eighth Avenue Line were added to the plan later, but would not be entirely closed due to their key location. The 30 original stations as part of the ESI would be rebuilt for $881 million, the two Penn Station stops would be rebuilt for $40 million, and the Richmond Valley stop on the Staten Island Railway would be rebuilt for $15 million. [ 90 ] Five stations on the Metro-North Railroad were added to the plan in December 2017, [ 91 ] as were sixteen stations on the Long Island Rail Road , which were proposed in several phases. [ 96 ]
Updates included cellular service, Wi-Fi, charging stations, interactive service advisories and maps, improved signage, strip maps for the subway routes, subway countdown clocks, service alerts, On-The-Go Informational Dashboards, neighborhood maps, new art, and improved station lighting. [ 97 ] [ 98 ] [ note 6 ] Cables and conduits were decluttered, simplifying the stations' wiring. The stations also included glass barriers near fare control areas (rather than the metal fences that separate the paid and unpaid areas of the stations), as well as new tiled floors that are easy to clean. [ 98 ] Concrete repairs, new platform edges, waterproofing, most tile patching, and structural steel repairs got the stations into states of good repair. [ 100 ] Passenger amenities included next-train countdown clocks and neighborhood wayfinding maps at the exterior of each entrance; digital maps, MetroCard vending machines, and station agent booths situated in a central location in the mezzanine; and digital next-train information and service change notices at platform level. [ 101 ] One additional station, Richmond Valley of the Staten Island Railway , was also overhauled, without being closed. [ 100 ]
The renovations were done in several stages called "packages", which allowed contractors to renovate three to five stations in a given area simultaneously. The first four packages were completed in 23 months, by early 2019. [ 100 ] The first package consisted of the Prospect Avenue , 53rd Street and Bay Ridge Avenue stations along the BMT Fourth Avenue Line in Brooklyn, for which the contract was awarded on November 30, 2016. [ 102 ] From March to June 2017, these stations closed for construction, [ 103 ] reopening from September to November 2017. [ 104 ] The second group of stations, comprising the 30th Avenue , Broadway , 36th Avenue , and 39th Avenue stations on the BMT Astoria Line in Queens, was awarded on April 14, 2017, to Skanska USA , [ 105 ] and entailed renovating these stations on a staggered schedule from October 2017 to February 2019. [ 106 ] [ 107 ] Originally, this package entailed renovating one platform at a time since the stations are all consecutive, unlike in other packages, [ 108 ] but the plan was later amended so two sets of two non-consecutive stations would be completely closed at once. [ 106 ]
The third package of stations was on the IND Eighth Avenue Line in Manhattan. The 163rd Street , 110th Street , 86th Street , and 72nd Street stations were included as part of an amendment to the Capital Program. [ 109 ] The New York City Transit and Bus Committee officially recommended that the MTA Board award the $111 million contract for Package 3 to ECCO III Enterprises in October 2017. [ 110 ] These stations were closed on a staggered schedule between March and June 2018, and reopened between September and November 2018. [ 111 ] The fourth package of stations consisted of stations in midtown Manhattan, and included the 34th Street–Penn Station stops on the IND Eighth Avenue Line and the IRT Broadway–Seventh Avenue Line , 57th Street and 23rd Street on the IND Sixth Avenue Line , and 28th Street on the IRT Lexington Avenue Line . [ 100 ] These stations, except the two 34th Street–Penn Station stops, were closed between July and December 2018. [ 112 ] The fifth and final package for the New York City Subway included the remaining three stations in upper Manhattan and the southwest Bronx: 145th Street on the IRT Lenox Avenue Line , and 167th Street and 174th–175th Streets on the IND Concourse Line . It was originally the eighth of eight planned packages. [ 100 ] The 145th Street station was closed between July and November 2018, while the Concourse Line stations were closed from August 2018 to December 2018. [ 113 ] An additional package included the Metro-North Railroad stations at White Plains , Harlem–125th Street , Crestwood , Port Chester , and Riverdale . [ 91 ]
The ESI program formerly contained thirteen more stations in three packages numbered 5 through 7, but these were deferred to the 2020–2024 Capital Program due to a lack of funding. [ 114 ] The fifth package of stations would have been in northern and eastern Brooklyn, along with Richmond Valley of the SIR. This package would have included Flushing Avenue and Classon Avenue on the IND Crosstown Line , and Van Siclen Avenue , Kingston–Throop Avenues , and Clinton–Washington Avenues on the IND Fulton Street Line . [ 100 ] The sixth package would have included stations in the eastern and northern Bronx, comprising Pelham Parkway on the IRT Dyre Avenue Line , as well as Third Avenue–138th Street , Brook Avenue , East 149th Street , and Westchester Square–East Tremont Avenue on the IRT Pelham Line . [ 100 ] The seventh package would have included three stations on the IND Queens Boulevard Line : Northern Boulevard , 67th Avenue , and Parsons Boulevard . [ 100 ]
In July 2017, after Package 1 had been assigned, [ 102 ] the nonprofit Citizens Budget Commission released a study critical of the plan. In the study, the CBC noted that the 30 original stations constituted only 8% of weekday boardings, and that none of them were on the list of the 25 most-used stations in 2016. [ 115 ] [ 116 ] Compared with stations that would only be "renewed" under this Capital Plan, i.e. with less comprehensive improvements performed under partial closures, the average ESI station could be 2 to 2.5 times as expensive as the average non-ESI station. [ 115 ] The CBC wrote that the MTA had added $857 million to the ESI's original $64 million in funding, and that the cost of extensive renovations offset the savings afforded by using design–build contracts for ESI projects. [ 115 ] The ESI program has also been criticized for the full station closures that it entails, which force riders to walk to the next station and add extra time to their commutes. Some transit advocates have also pointed out that the Enhanced Station Initiative does not include improvements, such as elevators, that would make the stations compliant with the Americans with Disabilities Act of 1990 . [ 117 ]
In January 2018, the NYCT and Bus Committee recommended that Judlau Contracting receive the $125 million contract for Package 4 and that Citnalta-Forte receive the $125 million contract for Package 8. [ 118 ] However, the MTA Board temporarily deferred the vote for these packages after city representatives refused to vote to award the contracts, citing the high cost and relatively low importance of the program. Some executives had pointed out that improving subway service was more important than renovating stations that were used by relatively few people. [ 119 ] [ 120 ] In response, MTA Chairman Joe Lhota said that these stations had been selected because ESI was a "pilot" program, and thus, the renovations would be tested on smaller stations first. [ 121 ] NYCT Chairman Andy Byford looked over the list of ESI stations and concluded that the list was suitable because these stations were in need of structural improvements. He said that the MTA's decision to not add elevators was reasonable because the work involved would have delayed many of the projects for several years, and in some cases, other nearby stops already had or were getting elevators. [ 122 ] The ESI packages were put back for a vote in February, and the two contracts were ultimately approved, with three city representatives dissenting. [ 123 ] [ 122 ]
In April 2018, Lhota announced that cost overruns had forced the MTA to reduce the number of subway stations included in the program from 33 stations to 19. The 19 subway stations still part of the program include those in Packages 1, 2, 3, 4, and 8, although the Staten Island Railway's Richmond Valley station from package 5 would still be included. Most of the $936 million allocated to the ESI was already used for the 19 stations underway. During the work, contractors had discovered additional infrastructure issues that had to be dealt with. In total, the work on the 19 subway stations will cost $850 million. The remaining $86 million will be used for subway accessibility projects. The 13 stations without funding will be pushed back to the 2020–2024 Capital Plan. [ 114 ]
Minor component work, such as station signage, tiling, and lighting, would also be performed at over 170 other stations as part of the plan. [ 88 ] The MTA would also begin designing OMNY , a new contactless fare payment system to replace the MetroCard (see § Contactless fare trials ). [ 124 ] [ 125 ]
In addition, at least 1,025 R211 subway cars are expected to be ordered under the plan. The R211s would include 58-inch (150 cm) wide doors, wider than the current MTA standard of 50 inches (130 cm), which is projected to reduce station dwell time by 32%. The new cars will have Wi-Fi installed (see § Cellular phone and wireless data ), USB chargers, digital advertisements , digital customer information displays, illuminated door opening alerts, and security cameras , [ 126 ] [ 97 ] [ 98 ] unlike the current New Technology Trains , which lack these features. [ 127 ] Some lines, like the IND Eighth Avenue Line , would get communications-based train control as part of a larger plan to automate the system. [ 128 ] These measures are all projected to help reduce overcrowding on the subway, which is prevalent. [ 97 ] [ 98 ]
In 2003, the MTA signed a $160 million contract with Siemens Transportation Systems to install digital real-time message boards (officially Public Address Customer Information Screens , or PA/CIS [ 129 ] ) at 158 of its IRT stations to display the number of minutes until the arrival of the next trains. [ 130 ] Payments to the company were stopped in May 2006 following many technical problems and delays, [ 131 ] and the MTA started to look for alternative suppliers and technologies. [ 130 ] In January 2007, Siemens announced that the issues had been resolved and that screens would start appearing at 158 stations by the end of the year. [ 132 ] In 2008, the system-wide roll-out was pushed back again, to 2011, with the MTA citing technical problems. [ 133 ] [ 134 ]
A simpler in-house system developed by the MTA for the L train was operational by early 2009, [ 130 ] [ 135 ] and the first three displays of the larger Siemens system became operational at stations on the IRT Pelham Line ( 6 and <6> trains) in the Bronx in December 2009. [ 136 ] Siemens signs were in operation in 110 A Division stations by March 2011 [ 137 ] [ 138 ] [ 139 ] [ 140 ] [ 141 ] [ 142 ] and in 153 IRT mainline and 24 Canarsie Line stations by late 2011. [ 129 ] Simpler countdown clocks , which only announce the track on which the train is arriving and the number of stops the train is from the station, are used at 40 stations. These include 13 stations on the IND Queens Boulevard Line , [ 129 ] 19 stations on the IND Eighth Avenue Line (including four that also have next-train displays that show this information), [ 129 ] [ 143 ] three stations on the BMT Broadway Line , [ 129 ] and five stations on the BMT Astoria Line ; [ 144 ] however, the clocks on the Broadway and Astoria Lines were not in use as of 2016 . [ 129 ] The announcements are voiced by former radio traffic reporter Bernie Wagenblast [ 145 ] and Carolyn Hopkins . [ 146 ]
In 2012, real-time station information for the "mainline" IRT, comprising all the IRT services except the 7 train, was made available to third party developers via an API , through MTA's Subway Time mobile app and as open data . [ 147 ] In early 2014, data for the L train were also given to developers. [ 148 ] Displays at 5 IRT Dyre Avenue Line stations were the last in the mainline A Division to be added, as a result of signal modernizations for IRT Dyre Avenue Line stations. [ 149 ]
Displays at 267 B Division stations were funded as part of the 2015–2019 capital program . [ 150 ] Upon the October 2015 approval of funding for the 2015–2019 capital program, full installation of the countdown clocks was deferred to beyond 2020, with 323 out of 472 stations [ note 7 ] having countdown clocks by then. [ 151 ] This was attributed to the rate of installation of Wi-Fi and 3G systems in subway stations, which, among other things, makes countdown clocks viable. [ 152 ] The B , D , and N were expected to get countdown clocks in 2016; the B and D would get the PA/CIS along their shared IND Concourse Line stations, the D along the BMT West End Line , and the N along the BMT Sea Beach Line . [ 152 ] [ 153 ] Meanwhile, the IRT Flushing Line ( 7 and <7> ) was to get the clocks in 2018, a delay from an earlier announced date of 2016. [ 152 ]
In August 2016, a 90-day testing period began for updated countdown clocks at eight BMT Broadway Line stations on the N , Q , R , and W services. The clocks feature new LCD screens, as opposed to the old LED screens. The new countdown clocks show the date and time, current weather, next trains, advertisements, other media, and service changes, unlike the old countdown clocks, which can only show the date and time and the next train arrivals. The LCD clocks also use data from Bluetooth receivers installed at the end of each platform in the stations, which connect with Bluetooth receivers installed on the first and last cars of every train. If the test was successful, the remaining 269 B Division stations would receive the new LCD countdown clocks. [ 154 ] The MTA was able to speed up the test by using Bluetooth receivers and the wireless data network in stations. Unlike the countdown clocks on the numbered lines, this system calculates when a train will pull into its next stop based on when trains enter and leave the stations. [ 155 ] The new Bluetooth clocks performed accurately 97% of the time. [ 156 ]
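A minimal sketch of that estimation approach appears below, assuming hypothetical station names and running times; the actual system's data handling is not public at this level of detail. The idea is simply that a train is detected as it enters and leaves a station, so its arrival at the next stop is projected from the detected departure time plus a typical running time for that leg.

```python
# Rough sketch of the arrival-estimation idea behind the Bluetooth-based
# B Division clocks. Station names and running times are hypothetical.

from datetime import datetime, timedelta

# Hypothetical average running times between adjacent stations, in seconds.
RUNNING_TIME = {("49 St", "57 St"): 90, ("57 St", "5 Av/59 St"): 120}

def eta_next_station(departed_at: datetime, leg: tuple[str, str]) -> datetime:
    """Project arrival at the next stop from the detected departure time."""
    return departed_at + timedelta(seconds=RUNNING_TIME[leg])

left_49th = datetime(2016, 8, 15, 9, 0, 0)  # last car cleared the platform
print(eta_next_station(left_49th, ("49 St", "57 St")))  # 2016-08-15 09:01:30
```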
In November 2016, the MTA declared the Broadway Line countdown clock test successful. All B Division stations would get countdown clocks by March 2018 (several years ahead of schedule), using the same Bluetooth technology as the clocks in the Broadway Line stations. The countdown clocks would use either existing and new Siemens tricolor LED displays, like the ones on the A Division and across scattered parts of the B Division, or new multicolor LCD displays, like the ones on the Broadway Line. [ 157 ] The R was the first mainline B Division route to receive countdown clocks along its entire length, in July 2017. Under the MTA's rollout schedule released in July 2017, the countdown clocks on other routes would be enabled in stages through December 2017, [ 158 ] [ 156 ] including on the L train, where the existing LED clocks would be upgraded to use the new LCD displays. [ 156 ] All of the countdown-clock data for the B Division services would also be available in the MTA's Subway Time app, in addition to the data for the A Division and L services that were already included in the app prior to the test. [ 157 ]
The countdown clocks for the rest of the B Division were to be installed as part of the Integrated Service Information and Management – B Division (ISIM-B) project, which would upgrade signal towers and connect track circuits to a central database. [ 159 ] The project was called the Beacon Train Arrival System, and all 268 underground stations would have it installed by the end of 2017. [ 102 ] In each of the remaining 269 stations without countdown clocks, there would be two displays for each platform, as well as a single display installed just outside fare control. The system would cost around $31.7 million to install, plus $5 million in annual maintenance costs. [ 157 ] Since the clocks rely on the Transit Wireless Wi-Fi network, installation of each set of displays would cost $211,000 at every aboveground station (which did not have Transit Wireless as of 2016 ) and $54,000 at every underground station with Transit Wireless. The MTA would upgrade the aboveground stations so they could also get Wi-Fi capabilities. [ 157 ]
As the first batch of Bluetooth-enabled B Division countdown clocks was installed in September 2017, there were some passenger complaints about the location of the clocks. Although the MTA places the clocks at the middle of each platform, as well as offers train arrival data on its Subway Time app, riders noted that these clocks were not always placed near locations where the riders would actually wait, such as the stairs to the platforms or the station entrances. Sometimes, the clocks were hidden behind signs or located far away from the station entrances. [ 160 ] [ 161 ] Riders also reported instances where the clocks froze, displayed the wrong information, projected wildly fluctuating arrival times, or failed to display upcoming trains. [ 162 ] [ 163 ] All of the system's 472 stations had countdown clocks by New Year's Day 2018. The last route to get countdown clocks was the 7, which received Bluetooth-enabled clocks in December 2017 because of issues with the installation of communications-based train control on the Flushing Line. [ 164 ] By the 2020s, the countdown clocks were also being used to display advertisements; the MTA earned $170 million a year from these advertisements. [ 165 ]
Access to the paid area is by turnstile . Starting in 1992, MetroCards made by Cubic Transportation Systems replaced the subway tokens that had been used as the subway's form of fare payment from the 1950s on; by 2003, the MetroCard was the exclusive method of fare payment systemwide. [ 166 ] Since then, there have been programs to replace the MetroCard itself. In the first program, introduced in early 2006, the MTA signed a deal with MasterCard to test out a new radio-frequency identification card payment scheme. [ 167 ] Customers had to sign up at a special MasterCard website and use a MasterCard PayPass credit or debit card/tag to participate. [ 168 ] Originally scheduled to end in December 2006, the trial was extended into 2007 due to "overwhelming positive response". [ 169 ] In light of the success of the first PayPass pilot project in 2006, another trial was started by the MTA. This one started on June 1, 2010, and ended on November 30, 2010. For the first two months, customers could use only a MasterCard PayPass debit or credit card. [ 170 ] [ 171 ] [ 172 ] [ 173 ] The trial also marked the debut of riders using a Visa payWave debit or credit card to enter the system, starting on August 1, 2010. [ 174 ] The trial continued for six months. [ 175 ] [ 176 ]
In 2016, the MTA announced that it would begin designing a new contactless fare payment system to replace the MetroCard. [ 124 ] The system would probably use phone- and bank card-based payment systems like Apple Pay and Android Pay . [ 125 ] On October 23, 2017, it was announced that the MetroCard would be phased out and replaced by OMNY , a contactless fare payment system also by Cubic, with fare payment being made using Apple Pay , Google Wallet , debit/credit cards with near-field communication enabled, or radio-frequency identification cards. [ 177 ] [ 178 ] The OMNY system was rolled out starting in 2019, though support of the MetroCard is slated to remain until 2025. [ 178 ] The fare system was criticized because the new turnstiles could be hacked, thereby leaving credit card and phone information vulnerable to theft. [ 179 ] [ 180 ]
The New York City Subway primarily employs two types of turnstiles : a waist-high turnstile, and a full-height turnstile known as a High Entry-Exit Turnstile (HEET). The waist-high turnstiles, the most prominent in the system, were installed beginning in 1993 along with the implementation of MetroCard, though they originally accepted tokens. [ 181 ] They are manufactured in Tennessee by Cubic Corporation . Some of the waist-high turnstiles date to the late 20th century, when tokens were used to pay fares; as such, they still have token-return compartments. [ 182 ] The waist-high turnstiles are vulnerable to a practice called "back-cocking", in which people entering the system can partially rotate the turnstile as if they were exiting, then slip through the side of the turnstile without paying. [ 183 ] [ 184 ]
The newer HEETs resemble several older turnstiles of that design informally called "iron maidens", and are prevalent at subway entrances without token booths to discourage fare evasion. [ 185 ] Both turnstiles are stainless steel and are bidirectional, allowing passengers to enter with fare payment and to exit. A third older type of turnstile, the High Exit Turnstile (HET), is a black-painted unidirectional iron maiden and only turns in the exiting direction. [ 185 ] Entrance is also available via Service Entry gates or AutoGates, which cater primarily to handicapped passengers [ 186 ] [ 187 ] [ 188 ] or passengers with large items such as strollers and luggage. These gates double as pushbar Emergency Exits, though they are often used for regular exiting in crowded stations. [ 189 ]
The MTA set up another technology pilot project called "Help Point" in April 2011. Help Point, a new digital-audio communications system, was designed for use in an emergency or to obtain subway information for travel directions. [ 204 ] The top button is labeled red for emergencies and connects to the Rail Control Center. The bottom button is labeled green and connects to an MTA station agent for any inquiries. All units are equipped with a microphone and speaker, [ 205 ] and can optionally be installed with a camera. [ 206 ] Also, the test units were equipped for the hearing impaired (under ADA compliance). [ 207 ]
The two subway stations that were part of this trial were on the IRT Lexington Avenue Line : the 23rd Street and Brooklyn Bridge–City Hall stations. The Help Points at the Brooklyn Bridge–City Hall station were wireless, while those at the 23rd Street station were hard-wired, to test which type of transmission worked best in the subway. [ 208 ] [ 209 ] [ 210 ]
After the Help Point test was successfully completed, the MTA started to install Help Points in all 472 subway stations to replace the existing Customer Assistance Intercom (CAI) units. [ 208 ] The help points were installed in 166 stations by 2014, [ 211 ] at which time the remaining stations were scheduled to have Help Points by the end of 2019. [ 212 ] The Help Point installation timeline was later accelerated to the end of 2017, [ 125 ] and Boyce Technologies was hired to install the devices. [ 213 ] The MTA finished installing over 3,000 Help Points in 2018 for a total of $252.7 million. [ 214 ] A 2024 audit of the Help Points found that, during a six-month period in 2023, half of emergency calls made through Help Points were prank calls and that more than 1,000 emergency calls made through Help Points were not answered. [ 214 ] [ 215 ]
On September 19, 2011, the MTA set up another pilot project, an online, interactive touchscreen computer program called "On the Go! Travel Station" (OTG). It lists any planned work or service changes occurring on the subway as well as information to help travelers find landmarks or locales near the stations with an OTG outlet, with advertisements as well. The first station to test this new technology was Bowling Green on the IRT Lexington Avenue Line . [ 216 ] Other stations scheduled to participate in this program were Penn Station (with the LIRR ), Grand Central Terminal (with Metro-North ), Atlantic Avenue–Barclays Center in Brooklyn , and Jackson Heights–Roosevelt Avenue/74th Street–Broadway in Queens . [ 217 ] [ 218 ]
New and existing On the Go! kiosks were to receive an interface overhaul as a result of the MTA's partnership with Control Group , a technology and design consultancy firm. Control Group added route lookups, countdowns to train arrivals, and service alerts. Between 47 and 90 interactive wayfinding kiosks were scheduled to be deployed in 2013. [ 219 ] As of January 2016 , there were 155 kiosks at 31 stations. [ 220 ] At the completion of Phase 2, a total of 380 kiosks were to be installed. [ 102 ] By 2020, these had been supplanted by digital screens systemwide . [ 221 ] [ 222 ]
In 2005, Transit Wireless , a BAI Communications majority-owned company, was formed to compete for the MTA's request for proposals for a wireless network in the subway system. The MTA ultimately awarded the contract for building and operating the network to Transit Wireless. [ 223 ] The New York City Subway began to provide underground cellular phone service, with voice and data, and free Wi-Fi to passengers in 2011 at six stations in Chelsea, Manhattan . The new network was installed and owned by Transit Wireless as part of the company's $200 million investment. [ 224 ] The company expanded the services to 30 more stations in 2013 [ 225 ] [ 226 ] and signed an agreement with all four major wireless network operators ( Verizon Wireless , AT&T , Sprint , and T-Mobile ) to allow their cellular phone customers to use its network. The MTA and Transit Wireless split the fees received from those wireless carriers for the usage of the network. [ 227 ] The Wi-Fi service, which operates using antennae, [ 228 ] is operated by Boingo Wireless . [ 229 ]
Transit Wireless expected to provide service to the remaining 241 underground stations by 2017. The next 40 key stations (11 in midtown Manhattan and 29 in Queens) had antennas in service by March 2014. [ 217 ] [ 224 ] [ 230 ] The wireless service for these 40 underground stations was completed by October 2014. [ 224 ] Phase 3 of the project was completed in March 2015 and added service to the Flushing–Main Street station in Queens, as well as stations in Lower Manhattan , West Harlem , and Washington Heights . [ 231 ] Phase 4 of the project covered 20 underground stations in the Bronx and 17 in Upper Manhattan; this phase, completed in November 2015, provided service to major stations such as Lexington Avenue–53rd Street , Lexington Avenue–59th Street , 149th Street–Grand Concourse , and 125th Street . [ 224 ] [ 232 ] Because Governor Andrew Cuomo had implemented a timeline for accelerated implementation of in-station wireless service, phases 6 and 7 of the Transit Wireless network build-out would connect the 90 remaining Brooklyn and Manhattan underground stations by early 2017, about one year ahead of the original completion date of 2018. [ 224 ] [ 231 ]
In late December 2016, it was expected that all stations would have wireless service by the final day of that year. [ 233 ] However, Governor Cuomo later announced that by January 9, 2017, cellular connectivity and wireless service would be available in all underground stations except four. (As of March 2020 , this was not yet the case.) These stations were the new South Ferry station and the three stations on the BMT Fourth Avenue Line (Prospect Avenue, 53rd Street, and Bay Ridge Avenue) that would have wireless installed as part of their Enhanced Station renovations. Cellular connectivity was completed one year early. [ 234 ] [ 235 ] [ 236 ] The entire project was completed for $300 million, with Transit Wireless sharing revenues derived from the network's service with the MTA. The partnership between Transit Wireless and the MTA is for 27 years. [ 235 ] [ 236 ] Wi-Fi and cellular service are currently available in all underground stations except Pelham Parkway on the Dyre Avenue Line. [ 237 ]
In June 2016, the MTA began installing Wi-Fi in subway cars as well. Wireless service was installed on four R160 subway cars assigned to the Jamaica Yard , then tested along the all-underground E route; in-car Wi-Fi was expanded to 20 R160s on the E route by September. [ 238 ] However, this pilot program was not advertised to passengers. In addition, the wireless service was not working all the time; one passenger described the signal on board the trains as spotty, and only really available on the platforms. [ 239 ] At the time, the MTA was not planning to retrofit subway tunnels with wireless service. [ 239 ] Still, this in-car Wi-Fi pilot program is part of the wider program to install Wi-Fi in underground stations and onboard newer MTA buses . [ 238 ] [ 240 ] Future subway cars, like the R211 , will also include Wi-Fi upon their delivery. [ 126 ] [ 241 ] [ 242 ]
In 2017, the MTA partnered with New York City's public libraries, New York State, and Transit Wireless to create Subway Library, a service that allows users to choose from a selection of e-books to read for free when connected to Transit Wireless Wi-Fi. [ 243 ] [ 244 ]
Despite the rollout of Wi-Fi at almost all underground stations, wireless and cellular data are generally not available in the tunnels between stations. [ 223 ] [ 234 ] In early 2018, the MTA began testing Wi-Fi in the 42nd Street Shuttle tunnels. [ 223 ] During the 14th Street Tunnel shutdown in 2019–2020, the MTA added cellular service to the portion of the 14th Street Tunnel that travels under the East River . [ 245 ] In July 2022, MTA officials announced plans to add cellular service to 418 miles (673 km) of tunnels across the network. In addition, internet service would be added to the 191 subway stations above ground level and the 21 stations in the Staten Island Railway network. [ 246 ] [ 247 ] The same month, the MTA awarded a $600 million contract to Transit Wireless for the installation of cellular and Wi-Fi equipment at these stations and in subway tunnels. [ 245 ] [ 248 ] In September 2024, the 42nd Street Shuttle tunnels became the first in the system to be fully equipped with 5G cell service. [ 249 ] [ 250 ]
The first major wave of digital advertisements in the subway was introduced with the deployment of the On the Go! Travel Station in 2011. [ 218 ] From 2016 on, the LCD countdown clocks provided another way to show advertisements to passengers. [ 154 ]
In September 2017, the MTA announced plans to add 31,000 digital advertising screens in 5,134 cars, as well as 9,500 extra screens in stations, far more than what the clocks or travel stations could provide. The advertising screens were installed by Outfront Media from 2019 to 2022. [ 251 ] There would eventually be 50,000 screens systemwide; the screens would also show service information. Prior to the announcement, most of the few digital advertising displays in use systemwide had been used to advertise the Second Avenue Subway's opening earlier that year. [ 252 ] In 2020, the MTA started displaying real-time service metrics on the screens, such as service changes and dynamic transfer information. [ 221 ] [ 222 ] By then, the subway system had 5,000 such screens, with another 9,000 to be installed by September 2021 at a cost of $100 million. [ 222 ] The screens cost $800 million in total. [ 251 ] [ 253 ]
The displays were vulnerable to vandalism. The MTA received over 600 reports of shattered or cracked digital screens between August 2020 and April 2023, although the website thecity.nyc implied that the actual number of damaged screens could be much higher. [ 254 ] | https://en.wikipedia.org/wiki/Technology_of_the_New_York_City_Subway |
Technology readiness levels ( TRLs ) are a method for estimating the maturity of technologies during the acquisition phase of a program. TRLs enable consistent and uniform discussions of technical maturity across different types of technology. [ 1 ] TRL is determined during a technology readiness assessment ( TRA ) that examines program concepts, technology requirements, and demonstrated technology capabilities. TRLs are based on a scale from 1 to 9 with 9 being the most mature technology. [ 1 ]
TRL was developed at NASA during the 1970s. The US Department of Defense has used the scale for procurement since the early 2000s. By 2008 the scale was also in use at the European Space Agency (ESA). [ 2 ] The European Commission advised EU-funded research and innovation projects to adopt the scale in 2010, [ 1 ] and TRLs were consequently used in 2014 in the EU Horizon 2020 program . In 2013, the TRL scale was further codified by the International Organization for Standardization (ISO) with the publication of the ISO 16290:2013 standard. [ 1 ]
A comprehensive approach and discussion of TRLs has been published by the European Association of Research and Technology Organisations (EARTO). [ 3 ] Extensive criticism of the adoption of TRL scale by the European Union was published in The Innovation Journal, stating that the "concreteness and sophistication of the TRL scale gradually diminished as its usage spread outside its original context (space programs)". [ 1 ]
A Technology Readiness Level Calculator was developed by the United States Air Force . [ 6 ] This tool is a standard set of questions implemented in Microsoft Excel that produces a graphical display of the TRLs achieved. This tool is intended to provide a snapshot of technology maturity at a given point in time. [ 7 ]
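To make the gating logic concrete, here is a minimal sketch of how such a calculator can score achieved maturity. The per-level summaries paraphrase the standard NASA-style TRL definitions, and the single pass/fail question per level is an illustrative simplification of the Air Force tool, not its actual question set:

```python
# Minimal sketch of question-gated TRL scoring (illustrative, not the
# actual Air Force question set). The achieved TRL is the highest level
# whose criteria, and those of every level below it, are all met.

LEVEL_SUMMARY = {
    1: "Basic principles observed and reported",
    2: "Technology concept and/or application formulated",
    3: "Analytical and experimental proof of concept",
    4: "Component validation in a laboratory environment",
    5: "Component validation in a relevant environment",
    6: "System/subsystem model demonstrated in a relevant environment",
    7: "System prototype demonstrated in an operational environment",
    8: "Actual system completed and qualified through test",
    9: "Actual system proven through successful mission operations",
}

def achieved_trl(level_passed: dict[int, bool]) -> int:
    """Return the highest TRL reached without any gap at a lower level."""
    achieved = 0
    for level in range(1, 10):
        if level_passed.get(level, False):
            achieved = level
        else:
            break  # an unmet lower level caps the achieved TRL
    return achieved

# Example: criteria met through proof of concept; level 5 evidence alone
# does not help because level 4 is unmet.
answers = {1: True, 2: True, 3: True, 4: False, 5: True}
trl = achieved_trl(answers)
print(f"Achieved TRL {trl}: {LEVEL_SUMMARY[trl]}")  # Achieved TRL 3: ...
```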
The Defense Acquisition University (DAU) Decision Point (DP) Tool, originally named the Technology Program Management Model (TPMM), was developed by the United States Army [ 8 ] and later adopted by the DAU. The DP/TPMM is a TRL-gated, high-fidelity activity model that provides a flexible management tool to assist technology managers in planning, managing, and assessing their technologies for successful technology transition. The model provides a core set of activities, including systems engineering and program management tasks, that are tailored to the technology development and management goals. This approach is comprehensive, yet it consolidates the complex activities that are relevant to the development and transition of a specific technology program into one integrated model. [ 9 ]
The primary purpose of using technology readiness levels is to help management in making decisions concerning the development and transitioning of technology. It is one of several tools that are needed to manage the progress of research and development activity within an organization. [ 10 ]
Among the advantages of TRLs: [ 11 ]
Some of the characteristics of TRLs that limit their utility: [ 11 ]
TRL models tend to disregard negative and obsolescence factors. There have been suggestions made for incorporating such factors into assessments. [ 12 ]
For complex technologies that incorporate various development stages, a more detailed scheme called the Technology Readiness Pathway Matrix has been developed, going from basic units to applications in society. This tool aims to show that the readiness level of a technology rests not on a linear process but on a more complex pathway through its application in society. [ 13 ]
Technology readiness levels were conceived at NASA in 1974 and formally defined in 1989. The original definition included seven levels, but in the 1990s NASA adopted the nine-level scale that subsequently gained widespread acceptance. [ 14 ]
Original NASA TRL Definitions (1989) [ 15 ]
The TRL methodology was originated by Stan Sadin at NASA Headquarters in 1974. [ 14 ] Ray Chase was then the JPL Propulsion Division representative on the Jupiter Orbiter design team. At the suggestion of Stan Sadin, Chase used this methodology to assess the technology readiness of the proposed JPL Jupiter Orbiter spacecraft design. [ citation needed ] Later Chase spent a year at NASA Headquarters helping Sadin institutionalize the TRL methodology. Chase joined ANSER in 1978, where he used the TRL methodology to evaluate the technology readiness of proposed Air Force development programs. He published several articles during the 1980s and 90s on reusable launch vehicles utilizing the TRL methodology. [ 16 ]
These articles documented an expanded version of the methodology that included design tools, test facilities, and manufacturing readiness on the Air Force Have Not program. [ citation needed ] The Have Not program manager, Greg Jenkins, and Ray Chase published the expanded version of the TRL methodology, which included design and manufacturing. [ citation needed ] Leon McKinney and Chase used the expanded version to assess the technology readiness of the ANSER team's Highly Reusable Space Transportation (HRST) concept. [ 17 ] ANSER also created an adapted version of the TRL methodology for proposed Homeland Security Agency programs. [ 18 ]
The United States Air Force adopted the use of technology readiness levels in the 1990s. [ citation needed ]
In 1995, John C. Mankins , NASA, wrote a paper that discussed NASA's use of TRL, extended the scale, and proposed expanded descriptions for each TRL. [ 1 ] In 1999, the United States General Accounting Office produced an influential report [ 19 ] that examined the differences in technology transition between the DOD and private industry. It concluded that the DOD takes greater risks and attempts to transition emerging technologies at lesser degrees of maturity than does private industry. The GAO concluded that use of immature technology increased overall program risk. The GAO recommended that the DOD make wider use of technology readiness levels as a means of assessing technology maturity prior to transition. [ 20 ]
In 2001, the Deputy Under Secretary of Defense for Science and Technology issued a memorandum that endorsed use of TRLs in new major programs. Guidance for assessing technology maturity was incorporated into the Defense Acquisition Guidebook . [ 21 ] Subsequently, the DOD developed detailed guidance for using TRLs in the 2003 DOD Technology Readiness Assessment Deskbook.
Because of their relevance to habitation, Habitation Readiness Levels (HRLs) were created by a group of NASA engineers (Jan Connolly, Kathy Daues, Robert Howard, and Larry Toups) to address habitability requirements and design aspects, in correlation with already established and widely used standards from different agencies, including NASA's TRLs. [ 22 ] [ 23 ]
More recently, Ali Abbas, professor of chemical engineering and Associate Dean of Research at the University of Sydney, and Mobin Nomvar, a chemical engineer and commercialisation specialist, developed the Commercial Readiness Level (CRL), a nine-point scale synchronised with TRL as part of a critical innovation path, intended to rapidly assess and refine innovation projects so as to ensure market adoption and avoid failure. [ 24 ]
The European Space Agency [ 1 ] adopted the TRL scale in the mid-2000s. Its handbook [ 2 ] closely follows the NASA definition of TRLs. In 2022, the ESA TRL Calculator was released to the public. The universal usage of TRL in EU policy was proposed in the final report of the first High Level Expert Group on Key Enabling Technologies, [ 25 ] and it was implemented in the subsequent EU framework program, Horizon 2020 (H2020), running from 2014 to 2020. [ 1 ] This extended TRLs beyond space and weapons programs to everything from nanotechnology to informatics and communication technology. | https://en.wikipedia.org/wiki/Technology_readiness_level
Technology scouting is an element of technology management in which
It is a starting point of a long term and interactive matching process between external technologies and internal requirements of an existing organization for strategic purposes. [ 3 ] This matching may also be aided by technology roadmapping . [ 4 ] Technology scouting is also known to be part of competitive intelligence , which firms apply as a tool of competitive strategy. [ 5 ] It can also be regarded as a method of technology forecasting [ 6 ] or in the broader context also an element of corporate foresight . [ 7 ] Technology scouting may also be applied as an element of an open innovation approach. [ 8 ] [ 9 ] Technology scouting is seen as an essential element of a modern technology management system. [ 10 ]
The technology scout is either an employee of the company or an external consultant who engages in boundary-spanning processes to tap into novel knowledge and span internal boundaries. [ 11 ] They may be assigned part-time or full-time to the scouting task. The desired characteristics of a technology scout are similar to those associated with the technological gatekeeper: a lateral thinker , knowledgeable in science and technology, respected inside the company, cross-disciplinary in orientation, and imaginative. [ 1 ] Technology scouts often also play a vital role in a formalised technology foresight process. [ 12 ]
Documented case studies include: | https://en.wikipedia.org/wiki/Technology_scouting |
An IT specialist , computer professional , or an IT professional may be:
Job titles for a computer professional include: | https://en.wikipedia.org/wiki/Technology_specialist |
Technology trajectory refers to a single branch in the evolution of a technological design of a product or service, with nodes representing separate designs. Because a trajectory traces a single branch, the development of new technologies is expected to build on earlier uses and to open the way for future technologies, new ideas, and further research.
It also can be defined as the paths by which innovations in a given field occur.
Movement along a technology trajectory is associated with research and development. Due to the institutionalization of ideas, markets, and professions, technology development can get 'stuck' (locked in) within one trajectory, leaving firms and engineers unable to adapt to ideas and innovation from outside. Studies of trajectories therefore address three questions: (1) when a technology will lock in to a trajectory, (2) when a technology may break out of lock-in, and (3) when competing technologies may co-exist in balance. [ 1 ] A lock-in occurs when a technology developing along a certain trajectory becomes stuck there because of surrounding circumstances; not all trajectories are permanently locked in. [ 1 ] The trajectory of increasing resource use offers an example. In 1929, a USGS researcher, seeking to ensure that enough materials and technological advances would be available for post-war metal production, identified four factors on which metal production depends: geology, technology, economics, and politics. Technical factors also shape mining, treatment, and refining. "The history of sulfur extraction and production technology also reflects continuous improvement upon processes developed from other industries to meet changing materials use requirements and societal needs". [ 2 ] Sulfur is extracted from deposits deep underground or underwater. The Clean Air Act of 1970 set rules for recovering sulfur from oil refining, the processing of sulfide ores, and even combustion for electricity generation, which required new technologies to be developed to comply with the act.
The continuous improvement of sulfur extraction shows how this technological trajectory has developed over the years.
Technology trajectories concern not just firms and engineers but also healthcare, schools, everyday life, and much more. The concept also raises the question of whether innovations become integrated into systems nationally, regionally, or sectorally, which in turn prompts consideration of environmental issues and of how technology trajectories affect social structures. Because technology is now all around us, a deliberate trajectory of where we want to advance is needed to keep taking technology beyond our imagination. Technology shapes how we learn, gather information, move forward, and change; it functions much like policy, in that it tells us how things are to be done and makes some ways of doing things more rational and practical than others. | https://en.wikipedia.org/wiki/Technology_trajectory
Technology transfer in computer science refers to the transfer of technology developed in computer science or applied computing research, from universities and governments to the private sector . These technologies may be abstract, such as algorithms and data structures , or concrete, such as open source software packages.
Notable examples of technology transfer in computer science include:
[Flattened table of notable examples; the technology-name column did not survive extraction. Recoverable entries list fields (the Internet, the World Wide Web, scientific computing, numerical computing, information retrieval, algorithms, object-oriented programming), transfer dates (1992 interconnection, 1994 Netscape, [ 5 ] 2011 incorporation, open source from 2001 [ 2 ]), and transfer mechanisms (the 1992 law permitting commercial interconnection, a consortium to create recommended standards, and freeware).] | https://en.wikipedia.org/wiki/Technology_transfer_in_computer_science
The technology-organization-environment framework , also known as the TOE framework , is a theoretical framework that explains technology adoption in organizations; it describes how the process of adopting and implementing technological innovations is influenced by the technological context, organizational context, and environmental context. Louis G. Tornatzky and Mitchell Fleischer published the model in 1990. [ 1 ]
Numerous application examples of the TOE framework have been summarized by Oliveira and Martins (2011). [ 2 ]
As Awa, Ojiabo & Orokor (2017) reiterated, [ 3 ] the TOE framework is for organizational-level analysis. The framework focuses on higher-level attributes (i.e., the technological, organizational, and environmental contexts) instead of the detailed behaviors of individuals in the organization. To understand technology adoption at the individual level, behavioral models such as the theory of reasoned action , the theory of planned behavior , and the technology acceptance model should be applied. While this division into organization-level and individual-level theories is generally accepted, it also raises the difficulty of how to investigate the higher-level attributes: information can only be obtained from individuals in the target organization and is hence inevitably biased by individuals' viewpoints. Li (2020) has demonstrated a rough equivalence of behavioral models and the TOE framework once individual perception is taken into account. [ 4 ]
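As an illustration of what "organization-level attributes" means in practice, the sketch below shows how a study might operationalize the three TOE contexts as variables. The factor names and the equal-weight scoring are hypothetical choices made for illustration, not part of Tornatzky and Fleischer's model:

```python
# Illustrative only: the TOE framework is a qualitative lens, and the factor
# names and equal weighting below are hypothetical, chosen to show how the
# three contexts are operationalized as organization-level variables.
from dataclasses import dataclass

@dataclass
class TOEProfile:
    # Technological context (attributes of the innovation and installed base)
    relative_advantage: float      # perceived benefit, scaled 0..1
    compatibility: float           # fit with existing systems, 0..1
    # Organizational context (attributes of the adopting firm)
    top_management_support: float
    slack_resources: float
    # Environmental context (attributes of the firm's surroundings)
    competitive_pressure: float
    regulatory_support: float

def adoption_propensity(p: TOEProfile) -> float:
    """Naive equal-weight score in [0, 1]; real studies estimate weights
    statistically (e.g., via regression or structural equation modeling)."""
    factors = (p.relative_advantage, p.compatibility, p.top_management_support,
               p.slack_resources, p.competitive_pressure, p.regulatory_support)
    return sum(factors) / len(factors)

firm = TOEProfile(0.8, 0.6, 0.9, 0.4, 0.7, 0.5)
print(f"Adoption propensity: {adoption_propensity(firm):.2f}")  # 0.65
```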
Despite the TOE framework having been widely used, it has undergone limited theoretical development since its introduction. [ 5 ] According to Zhu and Kraemer (2005), [ 6 ] the reason for the lack of development is that the TOE framework is "too generic" and offers a high degree of freedom to vary factors and measures, so there is little need to change the theory itself. Another important reason, according to Baker (2012), is that the theory aligns "too well" with other technology adoption theories and does not offer competing explanations. [ 5 ] Thus, there is very little impetus to modify the framework. | https://en.wikipedia.org/wiki/Technology–organization–environment_framework
Technomimetics are molecular systems that can mimic man-made devices. The term was first introduced in 1997. [ 1 ] The current set of technomimetic molecules [ 2 ] includes motors, [ 3 ] rotors, [ 4 ] gears, [ 5 ] gyroscopes, [ 6 ] tweezers, [ 7 ] and other molecular devices. [ 8 ] Technomimetics [ 9 ] can be considered as the essential components of molecular machines and have the primary use in molecular nanotechnology . | https://en.wikipedia.org/wiki/Technomimetics
Technorealism is an attempt to expand the middle ground between techno-utopianism and Neo-Luddism by assessing the social and political implications of technologies so that people might all have more control over the shape of their future . [ 1 ] One account states that technorealism emerged in the early 1990s and was introduced by Douglas Rushkoff and Andrew Shapiro . The Technorealism manifesto, which described the term as a new generation of cultural criticism, stated that the goal was not to promote or dismiss technology but to understand it so that its application could be aligned with basic human values. [ 2 ] Technorealism suggests that a technology, however revolutionary it may seem, remains a continuation of similar revolutions throughout human history. [ 3 ]
The technorealist approach involves a continuous critical examination of how technologies might help or hinder people in the struggle to improve the quality of their lives, their communities, and their economic, social, and political structures. [ 4 ] In addition, instead of policy wonks, experts, and the elite, it is the technology critic who takes center stage in the discourse on technology policy issues. [ 1 ]
Although technorealism began with a focus on U.S.-based concerns about information technology , it has evolved into an international intellectual movement with a variety of interests such as biotechnology and nanotechnology . [ 5 ] | https://en.wikipedia.org/wiki/Technorealism |
In common usage, technoscience refers to the entire long-standing and global human activity of technology , combined with the relatively recent scientific method that occurred primarily in Europe during the 17th and 18th centuries . Technoscience is the study of how humans interact with technology using the scientific method. [ 1 ] Technoscience thus comprises the history of human application of technology and modern scientific methods, ranging from the early development of basic technologies for hunting , agriculture , or husbandry (e.g. the well, the bow, the plow, the harness) and all the way through atomic applications, biotechnology , robotics , and computer sciences . This more common and comprehensive usage of the term technoscience can be found in general textbooks and lectures concerning the history of science.
The relationship with the history of science is important to this subject and is also underestimated, for example by more modern sociologists of science. It is worth emphasising the links that exist between books on the history of science and technology and the study of the relationship between science and technology within a framework of social developments. We [ who? ] must always consider the generational leap between historical periods and scientific discoveries, the construction of machines, and the creation of tools in relation to the technological changes that occur in very specific situations. [ 2 ]
An alternate, more narrow usage occurs in some philosophical science and technology studies . In this usage, technoscience refers specifically to the technological and social context of science . Technoscience recognises that scientific knowledge is not only socially coded and historically situated but sustained and made durable by material (non-human) networks . Technoscience states that the fields of science and technology are linked and grow together, and scientific knowledge requires an infrastructure of technology in order to remain stationary or move forward.
The latter, philosophic use of the term technoscience was popularized by French philosopher Gaston Bachelard in 1953. [ 3 ] [ 4 ] [ 5 ] It was popularized in the French-speaking world by Belgian philosopher Gilbert Hottois in the late 1970s and early 1980s, and entered English academic usage in 1987 with Bruno Latour 's book Science in Action . [ 6 ]
In translating the concept to English, Latour also combined several arguments about technoscience that had circulated separately within science and technology studies (STS) before into a comprehensive framework:
We [ who? ] look at the concept of technoscience by considering three levels: a descriptive-analytic level, a deconstructivist level, and a visionary level.
On a descriptive-analytic level, technoscientific studies examine the decisive role of science and technology in how knowledge is being developed. What is the role played by large research labs in which experiments on organisms are undertaken, when it comes to a certain way of looking at the things surrounding us? To what extent do such investigations, experiments and insights shape views of 'nature' and of human bodies? How do these insights link to the concept of living organisms as biofacts ? To what extent do such insights inform technological innovation ? Can the laboratory be understood as a metaphor for social structures in their entirety?
On a deconstructive level, theoretical work is being undertaken on technoscience to address scientific practices critically, e.g. by Bruno Latour ( sociology ), by Donna Haraway ( history of science ), and by Karen Barad ( theoretical physics ). It is pointed out that scientific descriptions may be only allegedly objective; that descriptions are of a performative character, and that there are ways to de-mystify them. Likewise, new forms of representing those involved in research are being sought.
On a visionary level, the concept of technoscience comprises a number of social, literary, artistic and material technologies from western cultures in the third millennium. This is undertaken in order to focus on the interplay of hitherto separated areas and to question traditional boundary-drawing: this concerns the boundaries drawn between scientific disciplines as well as those commonly upheld for instance between research, technology, the arts and politics. One aim is to broaden the term ' technology ' (which by the Greek etymology of ' techné ' connotes all of the following: arts, handicraft, and skill) so as to negotiate possibilities of participation in the production of knowledge and to reflect on strategic alliances. Technoscience can be juxtaposed with a number of other innovative interdisciplinary areas of scholarship which have surfaced in these recent years such as technoetic , technoethics and technocriticism .
As with any subject, technoscience exists within a broader social context that must be considered. Science and technology studies researcher Sergio Sismondo argues, "Neither the technical vision nor the social vision will come into being without the other, though with enough concerted effort both may be brought into being together". [ 7 ] Despite the frequent separation between innovators and consumers, Sismondo argues that the development of technologies, though stimulated by technoscientific themes, is an inherently social process.
Technoscience is so deeply embedded in people's everyday lives that its developments exist outside a space for critical thought and evaluation, argues Daniel Lee Kleinman (2005). Those who do attempt to question the perception of progress as being only a matter of more technology are often seen as champions of technological stagnation. The exception to this mentality is when a development is seen as threatening to human or environmental well-being. This holds true for the popular opposition to GMO crops, where questioning the validity of monopolized farming and patented genetics was simply not enough to rouse awareness. [ 8 ]
Science and technology are tools that continually change social structures and behaviors. Technoscience can be viewed as a form of government or having the power of government because of its impact on society. The impact extends to public health, safety, the environment, and beyond. [ 9 ] Innovations create fundamental changes and drastically change the way people live. For example, C-SPAN and social media give American voters a near real-time view of Congress . This has allowed journalists and the people to hold their elected officials accountable in new ways.
Chlorine chemists and their scientific knowledge helped set the agenda for many environmental problems: PCBs in the Hudson River are polychlorinated biphenyls ; [ 10 ] DDT , dieldrin , and aldrin are chlorinated pesticides ; CFCs that deplete the ozone layer are chlorofluorocarbons . Industry actually manufactured the chemicals and consumers purchased them. Therefore, one can determine that chemists are not the sole cause of these issues, but they are not blameless. [ 11 ] | https://en.wikipedia.org/wiki/Technoscience
Technosignature or technomarker is any measurable property or effect that provides scientific evidence of past or present technology . [ 1 ] [ 2 ] Technosignatures are analogous to biosignatures , which signal the presence of life, whether intelligent or not. [ 1 ] [ 3 ] Some authors prefer to exclude radio transmissions from the definition, [ 4 ] but such restrictive usage is not widespread. Jill Tarter has proposed that the search for extraterrestrial intelligence (SETI) be renamed "the search for technosignatures". [ 1 ] Various types of technosignatures, such as radiation leakage from megascale astroengineering installations such as Dyson spheres , the light from an extraterrestrial ecumenopolis , or Shkadov thrusters with the power to alter the orbits of stars around the Galactic Center , may be detectable with hypertelescopes . Some examples of technosignatures are described in Paul Davies 's 2010 book The Eerie Silence , although the terms "technosignature" and "technomarker" do not appear in the book.
In February 2023, astronomers reported, after scanning 820 stars, the detection of 8 possible technosignatures for follow-up studies. [ 5 ]
A Dyson sphere , constructed by life forms dwelling in proximity to a Sun-like star , would cause an increase in the amount of infrared radiation in the star system's emitted spectrum. Hence, Freeman Dyson selected the title "Search for Artificial Stellar Sources of Infrared Radiation" for his 1960 paper on the subject. [ 6 ] SETI has adopted these assumptions in its search, looking for such "infrared heavy" spectra from solar analogs . Since 2005, Fermilab has conducted an ongoing survey for such spectra, analyzing data from the Infrared Astronomical Satellite . [ 7 ] [ 8 ]
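The underlying physics is simple to illustrate: a shell that absorbs starlight must re-radiate the same power as low-temperature thermal emission, so the system's spectrum gains a strong mid-infrared component. The sketch below compares Planck spectral radiance for a Sun-like photosphere and a hypothetical shell; the ~5800 K and ~300 K temperatures are illustrative assumptions, not values from any survey:

```python
# Toy illustration of the infrared excess a Dyson sphere would add: compare
# blackbody spectra of a Sun-like photosphere (~5800 K) with the waste heat
# of a hypothetical ~300 K (roughly habitable-temperature) shell.
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck(wavelength_m: float, temp_k: float) -> float:
    """Spectral radiance B(lambda, T) in W sr^-1 m^-3."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return a / (math.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0)

for um in (0.5, 2.0, 10.0, 25.0):  # visible through mid-infrared
    lam = um * 1e-6
    print(f"{um:5.1f} um  star: {planck(lam, 5800):.3e}   shell: {planck(lam, 300):.3e}")

# By Wien's displacement law, a 300 K shell peaks near 2898/300 ~ 9.7 um,
# which is why Dyson proposed searching for "infrared heavy" spectra.
```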
Identifying one of the many infra-red sources as a Dyson sphere would require improved techniques for discriminating between a Dyson sphere and natural sources. [ 9 ] Fermilab discovered 17 "ambiguous" candidates, of which four have been named "amusing but still questionable". [ 10 ] Other searches also resulted in several candidates, which remain unconfirmed. [ 7 ] In October 2012, astronomer Geoff Marcy , one of the pioneers of the search for extrasolar planets , was given a research grant to search data from the Kepler telescope, with the aim of detecting possible signs of Dyson spheres. [ 11 ]
Shkadov thrusters, with the hypothetical ability to change the orbital paths of stars in order to avoid various dangers to life such as cold molecular clouds or cometary impacts , would also be detectable in a similar fashion to the transiting extrasolar planets searched by Kepler . Unlike planets, though, the thrusters would appear to abruptly stop over the surface of a star rather than crossing it completely, revealing their technological origin. [ 12 ] In addition, evidence of targeted extrasolar asteroid mining may also reveal extraterrestrial intelligence (ETI). [ 13 ] Furthermore, it has been suggested that information could be hidden within the transit signatures of other planets. [ 14 ] Advanced civilizations could "cloak their presence, or deliberately broadcast it, through controlled laser emission". [ 15 ] Other characteristics proposed as potential technosignatures (or starting points for detection of clearer signatures) include peculiar orbital periods such as arranging planets in prime number patterns. [ 16 ] [ 17 ] [ 18 ] Coronal and chromospheric activity on stars might be altered. [ 19 ] Extraterrestrial civilizations may use free-floating planets ( rogue planets ) for interstellar transportation with a number of proposed possible technosignatures. [ 20 ]
A study suggests that if ETs exist, they may have established communications network(s) and may already have probes in the Solar System whose communication may be detectable. [ 21 ] Studies by John Gertz suggest flyby (scout) [ 22 ] probes might intermittently surveil nascent planetary systems and permanent probes would communicate with a home base, potentially using triggers and conditions such as detection of electromagnetic leakage or biosignatures. [ 23 ] They also suggest several strategies to detecting local ET probes [ 24 ] such as detecting emitted optical messages. [ 25 ] He also finds that due to interstellar networks of communications nodes, the search for deliberate interstellar signals – as is common in SETI [ 26 ] – may be futile. [ 27 ] The architecture may consist of nodes separated by sub-light-year distances and strung out between neighboring stars. [ 28 ] It may also contain pulsars as beacons [ 29 ] or nodes whose beams are modulated by mechanisms that could be searched for. [ 30 ] Moreover, a study suggests prior searches wouldn't have detected cost-effective electromagnetic signal beacons. [ 31 ]
Various astronomers, including Avi Loeb of the Harvard–Smithsonian Center for Astrophysics , Edwin L. Turner of Princeton University , and Thomas Beatty of the University of Wisconsin , have proposed that artificial light from extraterrestrial planets, such as that originating from cities, industries, and transport networks, could be detected and would signal the presence of an advanced civilization.
Light and heat detected from planets must be distinguished from natural sources to conclusively prove the existence of intelligent life on a planet. [ 4 ] For example, NASA's 2012 Black Marble experiment showed that significant stable light and heat sources on Earth, such as chronic wildfires in arid Western Australia , originate from uninhabited areas and are naturally occurring. [ 32 ]
Spectroscopic observations of exoplanet nightsides would be able to identify artificial lighting via its distinct spectroscopic signature. Work by astronomer Thomas Beatty has shown that the spectrally concentrated emission from sodium street lights would be distinguishable from natural sources using proposed next generation space telescopes. The proposed LUVOIR A may be able to detect city lights twelve times those of Earth on Proxima b in 300 hours. [ 33 ]
Atmospheric analysis of planetary atmospheres, as is already done on various Solar System bodies and in a rudimentary fashion on several hot Jupiter extrasolar planets, may reveal the presence of chemicals produced by technological civilizations. [ 35 ] [ 36 ] For example, atmospheric emissions from human technology use on Earth, including nitrogen dioxide and chlorofluorocarbons , are detectable from space. [ 37 ] Artificial air pollution may therefore be detectable on extrasolar planets and on Earth via "atmospheric SETI" – including NO 2 pollution levels and with telescopic technology close to today. [ 38 ] [ 39 ] [ 40 ] [ 41 ] [ excessive citations ] Such technosignatures may consist not of the detection of the level of one specific chemical but simultaneous detections of levels of multiple specific chemicals in atmospheres. [ 42 ]
However, there remains a possibility of mis-detection; for example, the atmosphere of Titan has detectable signatures of complex chemicals that are similar to what on Earth are industrial pollutants, though not the byproduct of civilisation. [ 43 ] Some SETI scientists have proposed searching for artificial atmospheres created by planetary engineering to produce habitable environments for colonisation by an ETI. [ 36 ]
Interstellar spacecraft may be detectable from hundreds to thousands of light-years away through various forms of radiation, such as the photons emitted by an antimatter rocket or cyclotron radiation from the interaction of a magnetic sail with the interstellar medium . Such a signal would be easily distinguishable from a natural signal and could hence firmly establish the existence of extraterrestrial life, were it to be detected. [ 44 ] In addition, smaller Bracewell probes within the Solar System itself may also be detectable by means of optical or radio searches. [ 45 ] [ 46 ] Self-replicating spacecraft or their communications networks could potentially be detectable within our Solar system or in nearby star-based systems, [ 47 ] if they are located there. [ 48 ] Such technologies or their footprints could be in Earth's orbit, on the Moon or on the Earth.
A less advanced technology, and one closer to humanity's current technological level, is the Clarke Exobelt proposed by Astrophysicist Hector Socas-Navarro of the Instituto de Astrofisica de Canarias . [ 49 ] This hypothetical belt would be formed by all the artificial satellites occupying geostationary / geosynchronous orbits around an exoplanet . From early simulations it appeared that a very dense satellite belt, requiring only a moderately more-advanced civilization than ours, would be detectable with existing technology in the light curves from transiting exoplanets, [ 50 ] but subsequent analysis has questioned this result, suggesting that exobelts detectable by current and upcoming missions will be very rare. [ 51 ]
It has been suggested that once extraterrestrials arrive "at a new home, such life will almost certainly create technosignatures (because it used technology to get there), and some fraction of them may also eventually give rise to a new biosphere". [ 52 ] Microorganism DNA may have been used for self-replicating messages. [ 53 ] [ additional citation(s) needed ] See also: DNA digital data storage
Low- or high-albedo installations such as solar panels may also be detectable, although the difficulty of distinguishing artificial megastructures from high- and low-albedo natural features (e.g., bright ice caps) may make this unfeasible. [ 26 ]
One of the first attempts to search for Dyson Spheres was made by Vyacheslav Slysh from the Russian Space Research Institute in Moscow in 1985 using data from the Infrared Astronomical Satellite (IRAS) . [ 55 ]
Another search for technosignatures, c. 2001 , involved an analysis of data from the Compton Gamma Ray Observatory for traces of anti-matter, which, besides one "intriguing spectrum probably not related to SETI", came up empty. [ 56 ]
In a 2005 paper, Luc Arnold proposed a means of detecting planetary-sized artifacts from their distinctive transit light-curve signature. He showed that such a technosignature was within the reach of space missions aimed at detecting exoplanets by the transit method , as the Corot and Kepler projects were at that time. [ 61 ] The principle of the detection remains applicable for future exoplanet missions. [ 62 ] [ 63 ] [ 64 ]
In 2012, a trio of astronomers led by Jason Wright started a two-year search for Dyson Spheres, aided by grants from the Templeton Foundation . [ 65 ]
In 2013, Geoff Marcy received funding to use data from the Kepler Telescope to search for Dyson Spheres and interstellar communication using lasers, [ 66 ] and Lucianne Walkowicz received funding to detect artificial signatures in stellar photometry. [ 67 ]
Starting in 2016, astronomer Jean-Luc Margot of UCLA has been searching for technosignatures with large radio telescopes. [ 2 ]
In 2016, it was proposed that vanishing stars are a plausible technosignature. [ 68 ] A pilot project searching for vanishing stars was carried out, finding one candidate object. In 2019, the Vanishing & Appearing Sources during a Century of Observations (VASCO) project [ 69 ] began more general searches for vanishing and appearing stars, and other astrophysical transients. [ 68 ] They identified 100 red transients of "most likely natural origin" while analyzing 15% of the image data. In 2020, the VASCO collaboration started a citizen science project, vetting images of many thousands of candidate objects. [ 70 ] The citizen science project is carried out in close collaboration with schools and amateur associations, mainly in African countries. [ 71 ] The VASCO project has been referred to as "Perhaps the most general artefact search to date". [ 72 ] In 2021, VASCO's principal investigator Beatriz Villarroel received a L'Oreal-Unesco prize in Sweden for the project. [ 73 ] In June 2021, the collaboration published the discovery of nine light sources seemingly appearing and vanishing simultaneously on archival plates taken in 1950. [ 74 ] Villarroel's team also found three 16th-magnitude stars which had vanished on plates exposed within one hour of each other on 19 July 1952. [ 75 ]
In June 2020, NASA awarded its first SETI -specific grant in three decades. The grant funds the first NASA-funded search for technosignatures other than radio waves from advanced extraterrestrial civilizations, including the creation and population of an online technosignature library . [ 76 ] [ 77 ] [ 78 ] A 2021 scientific review produced by the NASA-co-sponsored online workshop TechnoClimes 2020 classified possible optimal mission concepts for the search for technosignatures. It evaluates signatures based on a metric of how far humanity is from the capacity to develop each signature's required technology – a comparison with contemporary human technology footprints – together with associated methods of detection and the ancillary benefits such searches offer to other areas of astronomy. The study's conclusions include a robust rationale for organizing missions to search for artifacts – including probes – within the Solar System. [ 79 ] [ 54 ]
In 2021, astronomers proposed a sequence of "verification checks for narrowband technosignature signals" after concluding that technosignature candidate BLC1 could be the result of a form of local radiofrequency interference . [ 80 ]
It has been suggested that observatories on the Moon could be more successful. [ 81 ] [ 82 ] In 2022, scientists provided an overview of the capabilities of ongoing, recent, past, planned and proposed missions and observatories for detecting various alien technosignatures. [ 83 ] [ 84 ]
Steven J. Dick states that there generally are no principles for dealing with successful SETI detections. Detections of technosignatures may have ethical implications, such as conveying information related to astroethics [ 85 ] and related machine ethics (e.g., concerning machines' applied ethical values ), or may include information about alien societies, histories, or fates , which may vary depending on the type, prevalence, and form of the detected signature's technology. Moreover, various types of information about detected technosignatures, and their distribution or dissemination, may have varying implications that may also depend on time and context. | https://en.wikipedia.org/wiki/Technosignature
Techreport (formerly "The Tech Report") is one of the oldest hardware, news, and tech review sites. [ 1 ] Techreport specialized in hardware and produced quarterly system build guides at various price points, [ 2 ] and occasional price vs. performance scatter plots . [ 3 ] It has an online community and used to have an active podcast . Some of the site's investigative articles regarding hardware benchmarking have been cited by other technology news sites like Anandtech [ 4 ] and PC World . [ 5 ]
The site went through an ownership change and major redesign in the middle of 2019 after which the site's focus and content went through significant changes, no longer specializing in hardware or producing any system guides or podcasts and no longer being focused on computer technology.
Tech Report was founded by Scott Wasson, a Harvard Divinity School graduate, and Andy Brown. [ 1 ] Both started by writing at Ars Technica in 1998. The two later decided to launch their website. The site eventually grew into a business enterprise with multiple full-time staff members.
Tech Report was originally located at tech-report.com in 1999. The site was moved to techreport.com in 2003. [ 6 ]
On August 20, 2007, a beta for a new site design was posted in the forums for review by the user community. It was later moved to live.
On January 1, 2011, the new site design, TR 3.0, rolled out. It offered a completely new layout, two user-switchable color schemes (blue and white), and a reduced mobile-device format.
On December 2, 2015, Scott Wasson, the founder and Editor-In-Chief stepped down as he accepted a role in AMD 's graphics division. [ 7 ] [ 8 ] Wasson subsequently sold the company in March 2018 to Adam Eiberger, the Tech Report's business manager. [ 9 ]
On December 21, 2018 Jeff Kampman stepped down as Editor-In-Chief. [ 10 ] The site was then sold to investors John Rampton and John Rall, and Renee Johnson took over as Editor-in-Chief.
On July 7, 2019, coinciding with the release of AMD's Ryzen 3 CPUs and Navi GPUs, a site redesign was launched, moving from the Tech Report's former custom CMS and functionality to a WordPress template. On July 9, Johnson posted an introduction to the design. [ 11 ] The redesign was met with criticism from users. [ 12 ] In August of the same year, TechReport's senior editing team experienced a series of changes, accompanied by a shift in the site's direction, with hardware reviews and system guides no longer covered to the same extent.
TechReport was one of the first sites, in 2007, to document and benchmark the flaw in the translation lookaside buffer (TLB) of AMD Phenom CPUs. Despite claims by AMD that the initial BIOS fix would result in only a 10% performance decrease, TechReport's benchmarks revealed that its impact was much more severe: up to nearly 20% on average, with some applications, such as Firefox, experiencing a performance decrease of 57% in tests. [ 13 ] The site was also the first to report that AMD had stopped shipments of processors due to this bug. [ 14 ] [ 15 ]
On September 8, 2011, Scott Wasson posted an article titled "Inside the Second: A New Look at Game Benchmarking." It showed gamers that frames per second (FPS) is not the only thing that matters in "smooth" gameplay; frame latency plays a big part as well. [ 16 ] This innovative benchmarking method was later mentioned and acknowledged by other publications such as Anandtech, which described it as "a revolution in the 3D game benchmarking scene", [ 4 ] [ 17 ] and Overclockers. [ 18 ]
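The core insight is easy to reproduce: two benchmark runs can have identical average FPS while feeling very different, because the average hides latency spikes. The sketch below, using made-up frame-time data, contrasts average FPS with a high-percentile frame time of the kind the article popularized:

```python
# Sketch of the idea behind "Inside the Second": identical average FPS can
# mask stutter that a frame-time percentile exposes. The frame times below
# are fabricated illustrative data, not measurements from the article.

def fps_and_percentile(frame_times_ms, pct=99):
    """Return (average FPS, pct-th percentile frame time in ms)."""
    avg_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))
    ordered = sorted(frame_times_ms)
    idx = min(len(ordered) - 1, round(pct / 100 * (len(ordered) - 1)))
    return avg_fps, ordered[idx]

smooth = [17.0] * 100                   # steady ~59 FPS, no spikes
spiky = [12.0] * 90 + [62.0] * 10       # same 17 ms average, periodic hitches

for name, run in (("smooth", smooth), ("spiky", spiky)):
    fps, p99 = fps_and_percentile(run)
    print(f"{name}: {fps:.1f} avg FPS, 99th-percentile frame time {p99:.0f} ms")
# Both runs report ~58.8 average FPS, but the spiky run's 62 ms hitches
# (vs. 17 ms) are exactly what frame-latency analysis was designed to catch.
```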
In 2013, TechReport started an experiment using several SSD drives to determine how many writes they could endure. The test lasted for more than 18 months before all drives used in it failed, having endured much larger amounts of written data than the manufacturers' ratings, [ 19 ] [ 5 ] and even prompting one manufacturer, Samsung, to release a humorous music video dedicated to the test. [ 20 ]
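The arithmetic behind such endurance testing is straightforward. The sketch below uses hypothetical rating and workload figures (not the specs of the tested drives) to show why rated endurance far exceeds typical consumer use, and why the drives' actual limits had to be found by writing continuously:

```python
# Back-of-the-envelope SSD endurance arithmetic; the rating and workload
# numbers are hypothetical examples, not the tested drives' specifications.
rated_endurance_tb = 200    # manufacturer rating in terabytes written (TBW)
daily_writes_gb = 20        # a fairly heavy consumer workload

days_at_rating = rated_endurance_tb * 1000 / daily_writes_gb
print(f"Rated endurance lasts ~{days_at_rating / 365:.0f} years "
      f"at {daily_writes_gb} GB/day")  # ~27 years

# The experiment found drives absorbing writes well past their ratings
# before failing, which is why the test ran continuously for 18+ months.
```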
A large portion of the main page was dedicated to "News" and "Blog" entries. Among the news entries were "Shortbread" posts, which offered a summary breakdown of reviews and news from other sites. Featured articles were often reviews of newly released PC hardware that had been tested by the site's editors and judged on several metrics, including performance and value compared to other available hardware.
Adapting to the general trend toward more digestible content, The Tech Report launched its podcast on February 9, 2008, hosted by Jordan Drake. While the schedule varied, it provided a casual but quite in-depth look back at the topics that made news, from a panel of the site staff. After 2015, episodes were released irregularly, frequently discussing the release of a new microarchitecture with David Kanter of Real World Technologies. [ 21 ] The last episode was made in January 2018.
Tech Report has a phpBB -styled forum that is unrestricted in read-only form and open to the public for contribution via simple registration. The forum is primarily structured around computer technology and related topics, but debates also range from politics and religion in the "opt-in only" R&P forum to general random chatter in the Back Porch. Contributors to the website also have access to a restricted forum called the Smoky Back Room. Registered users may respond to news topics and other entries posted on the front page in an isolated threaded comments section that automatically attaches to each new entry. Although access to the main page comments is linked to the user database, the discussions are logged separately from the forum area of the site and are not counted toward the user forum statistics.
As of 2023, Techreport forums have been taken offline due to outdated forum software. [ 22 ] | https://en.wikipedia.org/wiki/Techreport |
Techron is a patented fuel additive developed by Chevron Corporation and sold in its fuel operations (including Texaco and Caltex ). It contains polyether amine (PEA) and polybutene amine (PBA), which are detergent additives purported to dissolve deposits in automotive engines and prevent them from building up. [ 1 ] Chevron released Techron as an additive in 1981, and began including it in all of their gasoline products in 1995. It is still available as a concentrate today. [ 2 ] [ 3 ]
The Chevron Cars that debuted in 1995 were used to advertise the additive.
Techron consists of five components: [ 4 ] [ failed verification ]
"Techroline" was the predecessor to Techron. The company claimed it could control combustion-chamber deposits in cars, [ 5 ] as well as keep their fuel-intake systems clean. [ 6 ] | https://en.wikipedia.org/wiki/Techron |
The expression " tech–industrial complex " describes the relationship between a country's tech industry and its influence on the concentration of wealth , censorship or manipulation of algorithms to push an agenda, spread of misinformation and disinformation via social media and artificial intelligence , and public policy . The expression is used to describe Big Tech , Silicon Valley , and the largest IT companies in the world. The term is related to the military–industrial complex , and has been used to describe the United States Armed Forces and its adoption of AI-enabled weapons systems. [ 1 ] [ 2 ] [ 3 ] The expression was popularized after a warning of the relationship's detrimental effects, in the farewell address of U.S. President Joe Biden on January 15, 2025. [ 4 ] [ 5 ]
U.S. President Joe Biden used the term in his Farewell Address to the Nation on January 15, 2025: [ 4 ] [ 5 ] [ 6 ]
Today, an oligarchy is taking shape in America of extreme wealth, power, and influence that literally threatens our entire democracy, our basic rights and freedoms, and a fair shot for everyone to get ahead...
We see the consequences all across America. And we've seen it before, more than a century ago. But the American people stood up to the robber barons back then and busted the trusts .
It's also clear that American leadership in technology is unparalleled — an unparalleled source of innovation that can transform lives. We see the same dangers of the concentration of technology, power, and wealth.
You know, in his farewell address, President Eisenhower spoke of the dangers of the military-industrial complex. He warned us then about, and I quote, "the potential for the disastrous rise of misplaced power," end of quote.
Six decades later, I'm equally concerned about the potential rise of a tech-industrial complex that could pose real dangers for our country as well.
Americans are being buried under an avalanche of misinformation and disinformation enabling the abuse of power. The free press is crumbling. Editors are disappearing. Social media is giving up on fact-checking . The truth is smothered by lies told for power and for profit.
We must hold the social platforms accountable to protect our children, our families, and our very democracy from the abuse of power.
Meanwhile, artificial intelligence is the most consequential technology of our time — perhaps of all time. Nothing offers more profound possibilities and risks for our economy and our security, our society, for humanity. [emphasis added]
The term was first used in U.S. President Joe Biden's farewell address, and alluded to Dwight D. Eisenhower 's warning of the military–industrial complex and what Politico described as "echoing Roosevelt's language in calling out the "robber barons" of a new dystopian Gilded Age ". Since Elon Musk purchased X , there have been widespread allegations that the social media company has been manipulating its algorithm to promote right-wing content as well as suppress left-wing content. A Biden aide demurred when asked if Biden was referring to Elon Musk , but said that the billionaire "was certainly an example of one". [ 7 ] The comments came amidst large financial donations by tech leaders to Donald Trump's second presidential inauguration and amidst actions seen as deferential to the president-elect. They also came amidst surging stock prices of " The Magnificent Seven ", seven tech companies whose combined value rose 46% in 2024, vastly beating the S&P 500 share index. [ 8 ] Other tech leaders described as part of the tech–industrial complex included Mark Zuckerberg , Jeff Bezos , Satya Nadella , Sundar Pichai , Shou Zi Chew , Tim Cook , and Vivek Ramaswamy . [ 8 ] [ 7 ] | https://en.wikipedia.org/wiki/Tech–industrial_complex
Tecnoquímicas is a Colombian company, founded in 1934, that manufactures pharmaceutical products, multivitamin supplements, hygiene products, and home care products. Its headquarters are located in Cali, Colombia.
As of 2023, it represented 14% of the Colombian pharmaceutical market. [ 3 ]
Tecnoquímicas was founded in Colombia by Francisco Barberi in 1934. Initially it was called the "Colombia Sales Company", and after a merger with Laboratorios Fixalia the name was changed to "Tecnoquímicas" in 1957. [ 4 ]
It has two industrial plants: one in Cali, for pharmaceutical products, and one in Villa Rica, where diapers are manufactured. | https://en.wikipedia.org/wiki/Tecnoquímicas
Tectin is an organic substance secreted by certain ciliates . [ 1 ] [ 2 ] [ 3 ] [ 4 ] Tectin may form an adhesive stalk, disc or other sticky secretion. Tectin may also form a gelatinous envelope or membrane enclosing some ciliates as a protective capsule or lorica . Tectin is also called pseudo chitin . Granules or rods (called protrichocysts ) in the pellicle of some ciliates are also thought to be involved in tectin secretion. | https://en.wikipedia.org/wiki/Tectin_(secretion)
TectoRNAs are modular RNA units able to self-assemble into larger nanostructures in a programmable fashion. They are generated by rational design through an approach called RNA architectonics, which makes use of RNA structural modules identified in natural (or sometimes artificial) RNA molecules to form pre-defined 3D structures spontaneously. [ 1 ]
RNA's capacity for catalysis and non-canonical base pairing makes it an attractive biomolecule for design. By applying knowledge from computational modeling and biochemical characterization, RNA can be shaped into defined geometries and made to perform various functions. As such, tectoRNAs can also carry functions for building large functional nanostructures, with applications in synthetic biology and nanotechnology.
Nadrian Seeman was the first one who proposed that DNA could be used as material for generating nanoscopic self-assembling structures. [ 2 ] This concept was extended to RNA by Jaeger and collaborators in 2000 by taking advantage of the concept of RNA tectonics initially proposed by Jaeger and Westhof and collaborators in 1996. [ 3 ] [ 4 ]
Designing a tectoRNA requires deep knowledge of RNA tertiary structure. The rational design of tectoRNAs is based on known X-ray and NMR structures. TectoRNAs can be seen as analogous to words: by using the natural syntax of RNA structural motifs, all kinds of thermodynamically stable shapes can be rationally designed and synthesized. Sequences specifying stable, recurrent, and modular structural motifs, e.g. GNRA tetraloops, kissing loops, kink turns, A-minor interactions, etc., can be encoded within tectoRNAs to control their geometry and self-assembly into nanostructures. However, tectoRNAs can also incorporate flexible junctions and RNA modules (or RNA aptamers ) responsive to ligands. [ 5 ]
Nowadays, extensive databases and powerful algorithms are useful tools for designing tectoRNA sequences. The folding of tectoRNAs is optimized by minimizing the free energy and maximizing thermodynamic stability. The RNA sequences are mainly transcribed in vitro , and the folding conditions are also important: Mg 2+ and other salts must be added to the solution, with concentrations carefully controlled so that the RNA folds properly. The expected folding and self-assembly properties are characterized by a wide range of biochemical tools. Native polyacrylamide gel electrophoresis (PAGE) is used to measure the K d of self-assembled tectoRNAs. Temperature gradient gel electrophoresis (TGGE) is applied to characterize the thermodynamic stability of nanostructures. Chemical probing, like DMS probing , indirectly reveals the folding of the RNA structure. Atomic force microscopy (AFM) , transmission electron microscopy (TEM) , and cryo-EM are powerful techniques that give a direct view of what RNA nanostructures look like. So far, delicate structures like squares and hearts have been successfully demonstrated in different studies. [ 1 ]
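For the sequence-design step, secondary-structure prediction tools are commonly used to check that a candidate folds as intended before tertiary motifs are considered. Below is a minimal sketch using the ViennaRNA package's Python bindings; the sequence is a made-up fragment, and note that minimum-free-energy folding addresses only secondary structure, not the tertiary motifs central to tectoRNA design:

```python
# Minimal sketch of the secondary-structure check in tectoRNA sequence
# design, using the ViennaRNA Python bindings (pip package "ViennaRNA").
# MFE folding covers only secondary structure; tertiary motifs such as
# kissing loops or A-minor interactions require 3D modeling on top.
import RNA

candidate = "GGGAAACGGUACUUCGGUACCUUUCCC"   # hypothetical tectoRNA fragment
structure, mfe = RNA.fold(candidate)        # dot-bracket string, kcal/mol

print(structure)                            # e.g. "(((....)))"-style notation
print(f"MFE: {mfe:.2f} kcal/mol")           # more negative = more stable fold
```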
TectoRNAs are the basic self-assembling unit in RNA architectonics. In RNA architectonics, the sequence length of a tectoRNA is usually less than 200 nucleotides (nts). TectoRNAs typically originate from single-stranded RNA molecules and, once folded, act like LEGO bricks to build up higher-order architectures. They can be synthesized, folded and self-assembled into multimeric nanostructures during transcription under isothermal conditions. [ 6 ] As such, the RNA architectonics approach can be seen as RNA modular origami. The approach was extended to the synthesis of larger self-assembling units of more than 400 nts. [ 7 ] More recently, RNA origami was extended to the design of long single-stranded RNA sequences able to fold into large pre-defined nanostructures. [ 5 ] [ 8 ] Hence, RNA modular origami (originally called RNA architectonics), RNA origami and RNA single-stranded origami all originate from the same concept, in which RNA sequences can be designed to self-fold and assemble into predefined shapes. Note that conceptually, DNA single-stranded origami is more closely related to RNA origami than to DNA origami. [ 9 ]
Though RNA nanotechnology is still a burgeoning field, tectoRNAs and the resulting nanostructures have already been shown to be useful in nanomedicine, nanotechnology, and synthetic biology. This includes the development of programmable nano-scaffolds and nano-particles for the delivery of RNA therapeutics. [ 10 ] [ 11 ] As such, RNA nanoparticles , like hexagonal nanorings, can be used as delivery vehicles carrying therapeutic RNA to target cells. It is also possible to incorporate modified nucleotides within tectoRNAs in order to increase their chemical stability and resistance to degradation. Yet the full potential of tectoRNAs and the resulting nanostructures for recruiting proteins and ligands remains largely unexplored. | https://en.wikipedia.org/wiki/TectoRNA |
Ted Ellis (born 1963) is an American artist and former environmental chemist . Ellis is best known for his African-American themed art and styles which blend elements of folk art , naturalism and impressionism . His personal rendition of Barack Obama in acrylic , Obama, the 44th President , was presented in honor of the 2009 Presidential Inauguration .
Ellis' art business has sold over 1.75 million fine art products, and he works with a number of prominent corporations. He is also known for his community work, especially advancing arts in children's education.
Born and raised in New Orleans , Louisiana , to a professional-musician father and a housewife mother, [ 5 ] [ 6 ] Ted Ellis showed his earliest hints of artistic talent at five years old. [ 2 ] [ 7 ] Ellis' first attempt at art was a third-grade freehand sketch of a dog from Archie Comics , which he drew so accurately that friends and family believed it had been traced. [ 5 ] Growing up, one of his favorite characters to draw was Wile E. Coyote , and as an adult he continues to find comic books "refreshing". [ 5 ] [ 8 ]
When he was old enough, Ted would ride the bus alone to downtown New Orleans so as to be exposed to and spend time with the area artists. [ 4 ] He and his friends would in their spare time compete with one another to see who could draw the best designs, and Ted continued developing his art skills throughout primary school despite only receiving "satisfactory" marks in art class. In elementary school he attended a summer program at the New Orleans Center for Creative Arts , and later attended an after-school program at Lawless High School . [ 5 ] Ellis says that he knew he wanted to be an artist in the seventh grade , [ 2 ] and credits his teacher in that class for keeping him focused. [ 4 ]
Ted worked for a time with charcoal and pastel before settling on oil and acrylic . [ 2 ] He took art classes during high school and enrolled in four months of private art lessons, but is otherwise self-taught . [ 4 ] Ellis followed advice from Anna Torregano, his mentor, friend and high school art teacher, and his parents, all of whom advised him to pursue an academic career so as not to become a " starving artist ". [ 1 ] [ 5 ] [ 9 ] [ 10 ] His mother especially stressed university and earning a professional degree. [ 5 ] [ 11 ]
Ted Ellis earned a B.Sc. in chemistry at Dillard University on a United States Army ROTC scholarship as well as an academic scholarship , and went on to be commissioned as a second lieutenant in the United States Army 's Field Artillery Branch . [ 4 ] [ 5 ] [ 6 ] [ 9 ] Ellis spent the next ten years working in the field of chemistry, eight of which were as an environmental chemist at Rollins Engineering Services . [ 4 ] [ 5 ] [ 6 ] [ 8 ] [ 12 ]
Ellis has lived in Louisiana and California , and currently resides in Friendswood, Texas with his wife, Erania. They have a daughter, Chaney, and a son, Tanner. [ 5 ] [ 7 ] [ unreliable source? ] [ 9 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ]
Chaney is an aspiring rap artist and has produced a positive, parent-friendly CD about school and drugs titled Off the Chain . [ 9 ]
Ted Ellis maintained a passion for art that preceded his professional art career; he painted throughout his time in the Army and as an environmental chemist , generally working out of a studio in his garage. [ 5 ] His first commissions were produced for two co-workers at Rollins. They had wanted to purchase the piece that he was then working on, but he refused and instead offered to paint them two similar pieces, which they purchased for $40. [ 9 ]
When he first got started, Ellis passed on an opportunity to do work for the J. C. Penney catalog because he was too busy, but ultimately found success in a similar publication. [ 6 ]
Ellis published his first prints through Market Arts ' Dan Rose in Houston, [ 5 ] but his art career took off when he noticed that his wife's Avon boutique magazine, targeted at African-Americans , lacked any art. He sent Avon a proposal, which they accepted, and through the magazine he sold 42,610 signed prints of Thee Baptism . Since he was still working as an engineer at the time, he autographed the tens of thousands of prints during his lunch breaks. [ 5 ] [ 9 ] [ 12 ]
After quitting his job as an engineer in 1996, [ 6 ] Ellis competed against 500 others for, and won, a 1998 Walt Disney Studios commission for art in honor of Black History Month . The piece was used in the 1999 celebration at Epcot Center and appeared on T-shirts, souvenir-mugs, and posters. [ 4 ] [ 11 ] [ 12 ]
An entrepreneurial-minded Ted Ellis has marketed and sold his art throughout his career. [ 4 ] He was already monetizing his creative talents in high school, when he and his classmates sold their custom-designed T-shirts, first to their school's juniors and seniors , and then throughout the school district . [ 10 ]
In building his art business over two decades, Ellis engaged in fact-finding missions in search of financial patrons and customers at art festivals, conventions, reunions and libraries, as well as local businesses. Ellis incorporated his business in 1991. [ 10 ] He credits his time at Rollins for teaching him that "if you have a quality product and a good form of distribution, you'll succeed". [ 5 ]
When he first got started, Ellis quickly realized that talent alone was not enough, after he had to approach 30–40 galleries before being picked up by two, one of which closed down. He says that "it's a lot of marketing, planning, exhibiting and a lot of rejection". [ 6 ]
Ted's wife Erania, a loan officer, manages his business, "T. Ellis Art, Incorporated", out of a League City, Texas studio. [ 5 ] Ellis has sold more than 1.75 million fine art products across the country through direct sales, art galleries, catalog outlets, fine art dealers, and licensing, and has marketed community partnership opportunities meant to educate and empower communities by offering maximum returns on minimal investments. [ 4 ] [ 8 ] [ 10 ] In 2005 he signed for representation with art licensing agency "Alaska Momma" with the intent of opening new merchandising avenues in home décor, furnishings, calendars, apparel and stationery. [ 13 ]
Ellis has affiliated with and had art commissioned by corporations like Walt Disney Studios , Minute Maid , Coca-Cola , Marathon Oil , ExxonMobil , State Farm Insurance , Merck Pharmaceutical , J. C. Penney , Southland Corporation , Avon Products , Philip Morris USA and Integrity Music . [ 4 ] [ 8 ] [ 9 ] [ 10 ] [ 15 ]
Ellis' works have been sold through Army and Air Force Exchange Service catalogs and were available exclusively both through Avon's African American Boutique as well as their core brochures. [ 5 ]
One of his first major sales was of an original depicting a God-like surgeon in an operating-room, sold to the surgeon. [ 5 ] Ellis' art has sold for prices ranging from $750 to $30,000. [ 9 ] [ 14 ]
Despite his success, while Ellis initially had hoped to build his business up to Fortune 500 stature, he now finds satisfaction in "helping others through art". [ 9 ]
Ted Ellis and his wife are both natives of New Orleans, and much of his art along with his passion for art are inspired by the vibrant city. As a young man, he would search the colorful French Quarter for subjects to paint. [ 8 ] In the aftermath of Hurricane Katrina and the devastation of parts of the city, the city's role in his art drastically changed so as to reflect the story of hope and rebirth that he saw in the disaster. [ 13 ]
On the night before the storm hit Louisiana, the Ellis home in Texas was a refuge for 10 New Orleans families, 50 people in all. After the storm, Ellis helped fly home friends stranded outside New Orleans, and he organized colleagues in the art community behind the relief effort. [ 16 ]
Ellis was allowed to enter the city two weeks after the flood waters subsided in order to survey the damage to his mother's home in the Lower Ninth Ward and salvage her possessions. [ 11 ] [ 13 ] While travelling among the destroyed houses and deserted city, Ellis witnessed a lone man repairing his home's roof. The contrast resonated with Ellis, and he memorialized the hope he saw in the man's actions through his piece Surviving Katrina . [ 11 ] The scene is of rising floodwater that traps a family on their house's roof while the father holds up the Flag of the United States , a flag that to Ellis symbolizes the need for the nation to come together to aid those affected by the storm. [ 8 ] [ 13 ]
In Life Begins Anew , a father holds a baby above floodwaters while another man reaches out to take the child. Ellis describes the scene as symbolizing the promise of a new beginning for those who survived Katrina and its aftermath. [ 11 ] [ 13 ]
As its title indicates, the Katrina: The Hope, Healing and Rebirth of New Orleans collection was for Ellis about showing the power of art to assist the healing process: "The largest piece I did is about how life begins anew and how a person can find hope even after such devastation. I want this work to be uplifting, to be a fresh breath of life for the community." [ 16 ]
One of Ted Ellis' more famous works is an abstract depiction of Barack Obama 's signature 'hope' pose. Ellis painted the portrait in honor of Obama's 2009 Presidential inauguration . In Obama, the 44th President , Ellis uses red, blue, yellow, and green acrylic paint to portray Obama as someone who unites people across lines of color, ethnicity, and religion.
The piece was presented at a January 19, 2009, gala held by the National Black Chamber of Commerce and the National Newspaper Publishers Association Foundation at the Embassy of France in Washington, D.C. The proceeds from the autographed prints sold at the event supported NNPA Foundation and the Howard University School of Communications Building Fund (NNPA Media Wing). [ 1 ] [ 8 ] [ 12 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ]
In 2009, Ted Ellis produced an exhibit focusing on the theme of African-American history in light of slavery and emancipation . The exhibit, American Slavery: The Reason Why We're Here , tied into the Juneteenth commemoration of slavery's abolition in the State of Texas. It was located at, and included tours of, the historic residence of horticulturist Henry Stringfellow , an innovator in organic gardening who was enlightened in how he employed freedmen . Ellis began the series of works at a 2006 exhibition at the same house. [ 20 ] [ 21 ]
The exhibit's more than 20 paintings were painted with brush and fingers, and Ellis sometimes added a collage of documents. The exhibit included images of the transportation of slaves, the industry of slavery in crop production, and the abolition of slavery. [ 20 ]
The painting Free At Last includes depictions of Buffalo soldiers , Harriet Tubman , the year "1865" , a mighty oak, and in the background, their heads bowed in prayer, the figures of the current owner and restorer of the house, Sam Collins III, and his wife and children. The exhibit also displayed Ellis' famous depiction of Barack Obama, Obama, the 44th President . [ 20 ]
Ellis' art was featured at a 2011 exhibition at the Rosa Parks Library and Museum . The Museum, at Troy University in Montgomery, Alabama , hosted the exhibit, called Our History, Heritage, and Culture: An American Story, the Art of Ted Ellis , as part of its celebration of Black History Month . [ 22 ]
T. Ellis has been pictorially documenting African-American lifestyle, history and culture for thirty years. Ellis' paintings are in the permanent collections of the DuSable museum , the Charles Wright museum , the McKenna museum, the Free People of Color Museum, and the Amistad Research Center . The City of Selma, Alabama, commissioned T. Ellis as the official artist for the 50th anniversary of the civil rights march known as "Bloody Sunday". [ 23 ] The City and County of Galveston, Texas, recognized T. Ellis for the 150th anniversary of Juneteenth. The Juneteenth Freedom Project was exhibited at the U.S. Capitol in Washington, DC, in the U.S. Senate Rotunda and the House of Representatives' Rayburn Building. [ 24 ] President Barack Obama and First Lady Michelle Obama have thanked T. Ellis for his art and giving.
T. Ellis' painting of the Tuskegee Airmen, "The Lonely Angels", was signed by all the Tuskegee Airmen in attendance to receive their Congressional Gold Medal from the President. [ 25 ] President George W. Bush and Speaker of the House of Representatives Nancy Pelosi stood amidst 300 Tuskegee Airmen during a photo opportunity on Thursday, March 29, 2007, in Statuary Hall at the U.S. Capitol.
Ted Ellis views education as one of his primary missions, and he is involved with several educational initiatives. [ 8 ] [ 9 ] He has run a number of art workshops with children, including drawing and touching up the school mural which local sixth-graders paint every year, [ 26 ] and joining with his wife Erania to illustrate while she reads aloud to children from books about topics like the Buffalo Soldiers . When working with children with autism, he employs a strategy of engaging the children's creativity in a non-judgemental setting where there is no wrong way for them to express themselves; he hopes to put their art on display at the Houston Children's Museum . [ 12 ]
He has partnered with the Tom Joyner Foundation to fund-raise for students, while "Art with a Purpose", his own nonprofit program, was awarded a federal grant to help disadvantaged students. [ 19 ]
Ellis serves together with Gregory Michael Carter as an artist-in-residence for an arts enrichment program at a Galveston, Texas charter school , "Ambassadors Preparatory Academy". The program, called "Ambassadors for Art", is led by school administrators and members of the community through the Gulf Coast Apollo Chapter of nonprofit volunteer service organization The Links, Incorporated . [ 27 ] Ambassadors for Art also took part in painting a bust of US President Barack Obama . The bust was one of 45 painted by artists nationwide which were displayed in Detroit 's Museum of African American History , part of a project supported by the Smithsonian Institution . [ 28 ]
He has also worked with other arts education programs like the Peoria Public Schools ' "Artreach", in the framework of which he donated five original pieces to the schools and District 150 Foundation. [ 29 ]
Since October 2023, Ellis has been the Director of Florida State University 's Civil Rights Institute , which works to connect with students and promote the legacy of the U.S. Civil Rights Movement . The institute collaborates with numerous departments and organizations within FSU and throughout Florida in order to preserve the oral traditions of previously underrepresented people(s), working to promote racial equity in local communities. [ 30 ] [ 31 ]
Ted Ellis has over his career created art in a number of styles and has incorporated several primary influences.
African American history and African American culture play central roles in Ellis' art, which regularly features themes like Buffalo Soldiers , cotton fields, and Jazz music ; [ 14 ] his Jazz works especially lean towards the impressionistic . Other common themes range from fishermen, to religious scenes [ 15 ] as in Thee Baptism , My Father's Baptism , and Deacon's Door , to African ethnic scenes like Afrimage and Ashanti . [ 7 ]
The Church-related themes in his art he attributes to formative experiences with his mother at their local house of worship, "Beulah Land Church". [ 9 ]
While he is best known for his ethnic art, many of Ellis' pieces are landscapes , seascapes , and portraits . [ 7 ]
Ted Ellis is a self-taught artist . He has described his style at times as "conventional realism", [ 7 ] as a bold blend of realism and impressionism , [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 8 ] and as impressionist and naturalist , an old masters ' style that is sometimes figurative and sometimes folk art . "I try to capture the essence in one stroke," he said. [ 4 ]
Ellis has also coined the term "Tedism" to describe his style. "Tedism" blends impressionism, soul , and folk stories to create representational pieces. [ 10 ] [ 19 ]
Ted Ellis considers himself to be a social, political and spiritual artist as well as educator and "creative historian". "I paint subjects that are representative of the many facets of American life as I know it" says Ellis. "I like to think of myself as a creative historian. I was put here to record history...all aspects of American culture and heritage. My sole purpose has always been to educate through my art". [ 1 ] [ 10 ] He draws inspiration from people's memories of family or from history, and says that he's "a history buff at heart. I read a lot, and I have more books than paint." [ 19 ]
As an artist who focuses on African American history, Ellis intentionally does not approach its difficult chapters from a position of pain: "I have to be careful what I do [...] there's a power to art. I don't want to be from an angry position. That's out there, and maybe they do that to shake you up and make you think. I want to be one step ahead. When you're talking about healing, you're in the right zone." As such, his images of slavery and historical Southern life focus on positive values such as family, character, and church. [ 11 ]
Ellis attributes his love of art, and much of his art's influence, to New Orleans and its rich culture of creativity. He has compared the city to an incubator for young talent. The city offered Ted art clubs and opportunities to design murals for school and create signs for special events. New Orleans gave him access to art classes, summer art programs, and the vibrant Jackson Square , where he could talk with and get to know many artists. [ 10 ]
Ellis' favorite artists include Henry Ossawa Tanner , Edward Bannister , Jacob Lawrence , Samella Lewis , and John T. Scott , while his own favorite among his paintings is Sunday Worship . [ 4 ]
Ted Ellis and his artwork have been widely recognized and honored in many venues.
Ellis's art was featured in 1992 and 1993 for Black History Month at the Irving Arts Center in Irving, Texas , near Dallas . [ 2 ] [ 3 ] The 1993 exhibit, Realism, Symbolism, and Abstraction: Images from the African American Experience , jointly featured artist Albert M. Shaw . [ 7 ] [ 32 ]
Ellis was selected to create Walt Disney World's 1998 commemorative Black History poster. He was also the official artist for the 50th anniversary of Bloody Sunday for the city of Selma, Alabama , [ 23 ] and served as art ambassador for the City and County of Galveston , Texas, celebrating the 150th anniversary of Juneteenth with a showcase at the U.S. Capitol in Washington DC . [ 24 ]
The Ivory Coast II was featured on children's television show Barney & Friends . In 1997 his African-Americans in Law was unveiled and displayed in New Orleans City Hall . [ 4 ]
He was named Black Heritage Artist of the Year 1998 at Baltimore's Black Heritage Visual Arts Expo and painted the presentation poster for event sponsor PepsiCola . [ 4 ] [ 6 ] [ 15 ] [ 33 ]
Ellis was the Heritage Arts Festival 's winner of the Palette Award for Impressionism in 2002. [ 15 ]
In 2005, he was named "Entrepreneur of the Year" by the National Black Chamber of Commerce . [ 15 ]
Amistad Research Center at Tulane University in 2005 recognized Ted Ellis as a historical artist [ 15 ] [ 27 ] and mounted a week-long exhibition of his paintings in the Audubon Zoo 's Tropical Bird House. The exhibit, Reflections of African American Culture: Paintings by Ted Ellis , was co-hosted by the Audubon Nature Institute . A reception held in Uptown New Orleans ' Audubon Tea Room included an exhibition of Ted Ellis' works entitled A Celebration of African American Art and Culture: Paintings by Ted Ellis . Ellis donated two of the exhibition's pieces to the center: We Are Americans (2002) and The Struggle Continues (2003). He also chose the center to be the repository for his papers, and he donates a piece of art yearly to his collection there. [ 15 ] [ 34 ]
Ellis was featured in a Houston City Council meeting chaired by mayor Bill White , where council member Sue Lovell declared May 23, 2006 "Ted Ellis Day", citing Ellis' contributions as a leading African American contemporary artist who captures American culture in his art. [ 35 ] The Austin, Texas -based George Washington Carver Museum and Cultural Center hosted Ellis' exhibition Say My Name in 2006.
In 2007, Ellis' work was exhibited at "Embrace: the Fine Art Fair of the National Black Arts Festival ". [ 14 ] In February of that year he was profiled in the "11th Annual Citywide African American Art Exhibition" at the Houston Museum of Fine Arts . He was a 2007 Honoree of the National Black MBA Association .
Several of Ellis' works are in the permanent collections of the DuSable Museum of African American History in Chicago, the Charles H. Wright Museum of African American History in Detroit, Walt Disney Studios , the McKenna Museum of African American Art and the Free People of Color Museum in New Orleans, and the Amistad Research Center at Tulane University. [ 15 ] [ 19 ]
A "Ted Ellis Day" was declared for February 21, 2009, by W. Wesley Perry , the mayor of Midland, Texas . In 2009, Houston Citizens Chamber of Commerce awarded Ted Ellis one of four 15th Annual African-American Business Achievement Pinnacle Awards for his success in his art business. [ 36 ]
In 2010, New Orleans African American Museum recognized Ellis as an Art Ambassador and hosted an exhibition of his work, "Sumpt'n to See, Native Son Comes Home". [ 15 ] [ 27 ] [ 37 ] [ 38 ]
Ellis has been recognized in proclamations by Governor of Texas Rick Perry , Texas State Representative Sylvester Turner , Louisiana Lt. Gov. Mitch Landrieu , as well as Baton Rouge mayor Melvin Holden .
Louisiana State Senate resolution number 88, brought by Senator Karen Carter Peterson in 2012, also commends and recognizes Ellis for his accomplishments and contributions.
He has been featured at the New Orleans Jazz Festival , won Grand Prize Best of Show and Patrons Award at Fairhope, Alabama 's Fairhope Arts Festival , and was honored as the "Official Artist" of the 2006 Essence Music Festival . [ 39 ] He was also featured at the Bayou City Art Festival [ 14 ] and the Charleston Annual Fine Arts Weekend in 2005. [ 15 ]
Ellis is a frequent speaker at Avon President's Luncheon and other corporate functions. [ 5 ]
Ted Ellis' art has been purchased by celebrities like Bryant Gumbel , Angela Bassett , Johnnie Cochran , [ 11 ] Blair Underwood , [ 6 ] Susan L. Taylor , Joyce M. Roche , Spike Lee and Brad Pitt . [ citation needed ] In 2017, T. Ellis received a proclamation from the Senate of the State of Texas for his exhibition "Pride, Dignity and Courage: Celebrating African-American History and Culture".
Ted Ellis is involved with various causes and charitable organizations including United Way , ICLS , African American Visual Arts Association , [ 27 ] Jack and Jill of America Inc. , the United Negro College Fund , Heritage Christian Academy, and various public school districts. He was the featured artist of Big Brothers Big Sisters 2012 "Houston's Big Black Tie Ball" fundraiser gala and is a partner of the Houston Child Protective Services Black History Program. [ citation needed ]
In a column published in Images Magazine , Ted Ellis calls on the younger generations of black artists to recognize the hardship faced, and the effort put in, by the previous generation of black artists, which paved the way for the newer generation to succeed with far less difficulty. [ 4 ] [ 40 ]
He says African-American art is still not within the mainstream, and that, despite being the most financially minded generation of black artists yet, the current generation still earns less than it should. Ellis further assigns partial responsibility to an academic world that he sees as not paying sufficient attention to the genre. [ 10 ]
He believes strongly that African-Americans should value their art on their terms, looking past aesthetics, and that only thus will the genre grow and be better appreciated. [ 4 ] [ 10 ] | https://en.wikipedia.org/wiki/Ted_Ellis_(artist) |
Theo Willem Jan Marie Janssen (13 August 1936 – 29 September 2017), better known as Ted Janssen , was a Dutch physicist and Full Professor of Theoretical Physics at the Radboud University Nijmegen . Together with Pim de Wolff and Aloysio Janner, he was one of the founding fathers of the N-dimensional superspace approach in crystal structure analysis for the description of quasiperiodic crystals and modulated structures. [ 1 ] [ 2 ] For this work he received the Aminoff Prize of the Royal Swedish Academy of Sciences (together with de Wolff and Janner) in 1998 and the Ewald Prize of the International Union of Crystallography (with Janner) in 2014. These achievements were the fruit of his unique talent, which combined a deep knowledge of physics with a rigorous mathematical approach. Their theoretical description of the structure and symmetry of incommensurate crystals using higher-dimensional superspace groups also covered the quasicrystals that were discovered in 1982 by Dan Shechtman , who received the Nobel Prize in Chemistry in 2011. The Royal Swedish Academy of Sciences explicitly mentioned their work on this occasion. [ 3 ] [ 4 ]
Ted Janssen was born on August 13, 1936, in Vught , near 's-Hertogenbosch in the Netherlands . Already as a young boy he was fascinated by the sciences. He built radios, set up a chemistry lab in the attic of his parental home, was an avid bird watcher and he built his own telescopes. He remembered high school as ‘not very inspiring’ and he passed all exams without much effort, but viewed it as a time that truly formed him. Instead of spending time on homework he studied the history and philosophy of science and was very interested in astronomy and astrophysics .
During his high school years he also developed a deep appreciation of literature and music. Later he added the visual arts, ballet, and architecture to that list. The enjoyment of the arts was vital to Ted; he called them essential components of life. He started playing the piano, harpsichord and cello in his early twenties: too late to become an accomplished musician, but it brought him great joy.
In 1954 he started college in Utrecht , studying mathematics and physics with minors in chemistry and astronomy. He again showed his interest in a wide variety of topics by attending lectures in ethics, philosophy, music and sculpture. After his candidate degree he concentrated on theoretical physics, but always included a deep understanding of mathematics in his work.
After studying theoretical physics at Utrecht University , Ted graduated under Leon van Hove with his doctoral dissertation 'The classical limit of quantum mechanical diagram expansions', and he was offered the opportunity to present it at an international conference in Utrecht on ' Many-body Problems '. No fewer than six Nobel laureates (Yang, Lee, Prigogine , Anderson, Cooper and Schrieffer ) were in the audience for Ted's first presentation, which also led to his first publication: 'On the classical limit of the diagram expansion in quantum statistics' by T.W.J.M. Janssen. [ 5 ] All his later publications were published as T. Janssen or Ted Janssen.
After his doctoral exam in 1960 Ted worked for several years with professors Theo Ruijgrok, Tini Veltman , and John Tjon in Utrecht. Earlier he developed a friendship with co-student and co-worker Geert Fast. Geert’s promotor van Hove moved from Utrecht to CERN in Geneva and Geert asked Ted to keep an eye on his little sister, Loes Fast, who was studying veterinary medicine in Utrecht. Ted quickly developed strong feelings for Loes and in 1965 they got married.
In 1965, he became the first PhD student of Aloysio Janner at the Catholic University Nijmegen and started on the work that resulted in his PhD thesis, Crystallographic Groups in Space and Time, in 1968, thereby already providing the theoretical basis of what would become the superspace approach.
After completing his PhD, Ted Janssen took a position in Nijmegen at the department of Theoretical Solid State Physics . He was immediately given teaching responsibilities. In the years that followed Ted taught many classes, including electrodynamics, classical mechanics, quantum mechanics, complex functions, crystallographic groups, group theory for physicists, chaos theory , soft modes and solid state physics.
Ted was always interested in international collaboration and taught ' crystallographic groups ' for 6 months in Leuven in 1969. In 1971 Ted accepted an invitation from professor Baltensberger to come to the ETH in Zürich for one year. Baltensberger organized weekly meetings between theoretical and experimental physicists, and ever since Ted made it a habit to bring theoretical and experimental physicists together on a regular basis.
Back in Nijmegen, Ted was promoted to associate professor in 1972, and he continued working with Aloysio Janner and Li Ching Chen on the space-time symmetry of electromagnetic fields and, independently, on PUA (projective unitary/anti-unitary) representations. In 1972 Aloysio and Ted also started their long collaboration with Pim de Wolff . Together with Aloysio Janner and Pim de Wolff he was one of the founders of the higher-dimensional superspace approach in crystal structure analysis for the description of quasiperiodic crystals and modulated structures. This collaboration and its results received international recognition in 1998 with the Aminoff Prize from the Royal Swedish Academy of Sciences . The award ceremony was followed by a symposium, at which the speakers were Aloysio Janner , Ted Janssen, Gervais Chapuis , Mike Glazer , Borje Johansson , Sander van Smaalen , Vaclav Petricek and Reine Wallenberg .
In 1973 and 1975 Ted and Aloysio organized conferences on ‘Group Theoretical Methods in Physics’ in Nijmegen. These are small conferences that attract both mathematicians and physicists. The series still exists. In 1993 Ted was appointed as professor at Utrecht University and in 1994 he took Aloysio’s position in Nijmegen after Aloysio retired. Also in 1994 Ted organized the conference Dyproso 1994 (Dynamic Properties of Solids) in Lunteren.
In 1987 Ted joined the board of the EMF (European Meeting on Ferroelectricity) and a few years later also that of the IMF (International Meeting on Ferroelectricity). Ted organized EMF-8 in 1995 in Nijmegen. In 1997 he joined the board of Aperiodic (Modulated Structures, Polytypes and Quasicrystals) and he organized Aperiodic-2000, again in Nijmegen. Ted was also a board member of the ICQ, the NVK ( Nederlandse Vereniging voor Kristallografie – Dutch Union of Crystallography), LOTN (Collaboration of Dutch Institutes for Theoretical Physics), and the Dutch organization for Fundamental Research in Solid State Physics.
Ted attended many conferences and was often traveling. In the earlier years his wife Loes worked as a veterinarian and took care of the children, but once all the children had left the house Loes joined Ted on many of his travels. Ted spent time as visiting lecturer or professor in Leuven (1969), Zürich (1971-1972), Dijon (1987), Paris, Orsay, Palaiseau (1992), Gif-sur-Yvette (1993), Grenoble (1986 and 1990), Marseille (2001), Nagoya (1992), Lausanne (2003), Beer Sheva (2003) and Sendai (2004-2005 and 2013). [ 6 ] [ 7 ]
In 2014 Aloysio and Ted received a second award, the Ewald Prize , [ 8 ] one of the most prestigious prizes in crystallography, from the International Union of Crystallography during the IUCr conference in Montreal.
Ted Janssen died in Groesbeek , Netherlands , on September 29, 2017, after a short and devastating struggle with leukemia . [ 9 ] He nevertheless worked until his last day, finishing his edits for the second edition of the book "Aperiodic structures: from modulated structures to quasicrystals ", [ 10 ] which was published in 2018. | https://en.wikipedia.org/wiki/Ted_Janssen |
Teeny Ted from Turnip Town (2007), published by Robert Chaplin , is certified by Guinness World Records as the world's smallest reproduction of a printed book. [ 1 ] The book was produced in the Nano Imaging Laboratory at Simon Fraser University in Vancouver, British Columbia , Canada, with the assistance of SFU scientists Li Yang and Karen Kavanagh . [ 2 ]
The book's size is 0.07 mm x 0.10 mm. The letters are carved into 30 microtablets on a polished piece of single crystalline silicon , using a focused- gallium - ion beam with a minimum diameter of 7 nanometers (this was compared to the head of a pin at 2 mm, 2,000,000 nm, across). The book has its own ISBN, 978-1-894897-17-4 . [ 2 ]
The story was written by Malcolm Douglas Chaplin and is "a fable about Teeny Ted's victory in the turnip contest at the annual county fair." [ 2 ]
The book has been published in a limited edition of 100 copies by the laboratory and requires a scanning electron microscope to read the text.
In December 2012, a Library Edition of the book was published with a full title of Teeny Ted from Turnip Town & the Tale of Scale: A Scientific Book of Word Puzzles and an ISBN 978-1-894897-36-5 . On the title page it is referred to as the "Large Print Edition of the World's Smallest Book". The book was published using funds from a successful Kickstarter campaign with contributors' names shown on the dust jacket. [ 3 ] | https://en.wikipedia.org/wiki/Teeny_Ted_from_Turnip_Town |
In mathematics, the Teichmüller–Tukey lemma (sometimes named just Tukey's lemma ), named after John Tukey and Oswald Teichmüller , is a lemma that states that every nonempty collection of finite character has a maximal element with respect to inclusion . Over Zermelo–Fraenkel set theory , the Teichmüller–Tukey lemma is equivalent to the axiom of choice , and therefore to the well-ordering theorem , Zorn's lemma , and the Hausdorff maximal principle . [ 1 ]
A family of sets $\mathcal{F}$ is of finite character provided that, for each set $X$, $X \in \mathcal{F}$ if and only if every finite subset of $X$ belongs to $\mathcal{F}$.
Let $Z$ be a set and let $\mathcal{F} \subseteq \mathcal{P}(Z)$. If $\mathcal{F}$ is of finite character and $X \in \mathcal{F}$, then there is a maximal $Y \in \mathcal{F}$ (according to the inclusion relation) such that $X \subseteq Y$. [ 2 ]
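Spelled out in symbols, the finite-character hypothesis and the maximality conclusion read as follows (a restatement of the statement above, not an additional assumption):

```latex
% Finite character of \mathcal{F}:
\forall X \;\bigl( X \in \mathcal{F} \iff
  \forall E \subseteq X \; ( E \text{ finite} \Rightarrow E \in \mathcal{F} ) \bigr)

% Conclusion: every member of \mathcal{F} extends to a maximal member:
\forall X \in \mathcal{F} \;\exists Y \in \mathcal{F} \;
  \bigl( X \subseteq Y \;\wedge\; \forall Z \in \mathcal{F}\,
    ( Y \subseteq Z \Rightarrow Z = Y ) \bigr)
```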
In linear algebra , the lemma may be used to show the existence of a basis . Let $V$ be a vector space and consider the collection $\mathcal{F}$ of linearly independent sets of vectors. This collection is of finite character, since any linear dependence relation involves only finitely many vectors, so a set is linearly independent exactly when all of its finite subsets are. Thus a maximal linearly independent set exists, which must then span $V$ and be a basis for $V$. | https://en.wikipedia.org/wiki/Teichmüller–Tukey_lemma |
Teichoic acids ( cf. Greek τεῖχος, teīkhos , "wall", to be specific a fortification wall, as opposed to τοῖχος, toīkhos , a regular wall) [ 1 ] are bacterial copolymers [ 2 ] of glycerol phosphate or ribitol phosphate and carbohydrates linked via phosphodiester bonds .
Teichoic acids are found within the cell wall of most Gram-positive bacteria, such as species in the genera Staphylococcus , Streptococcus , Bacillus , Clostridium , Corynebacterium , and Listeria , and appear to extend to the surface of the peptidoglycan layer. They can be covalently linked to N -acetylmuramic acid or to a terminal D - alanine in the tetrapeptide crosslinkage between N -acetylmuramic acid units of the peptidoglycan layer, or they can be anchored in the cytoplasmic membrane with a lipid anchor.
Teichoic acids that are anchored to the lipid membrane are referred to as lipoteichoic acids (LTAs), whereas teichoic acids that are covalently bound to peptidoglycan are referred to as wall teichoic acids (WTA). [ 3 ]
The most common wall teichoic acid structure is a ManNAc(β1→4)GlcNAc disaccharide with one to three glycerol phosphates attached to the C4 hydroxyl of the ManNAc residue, followed by a long chain of glycerol- or ribitol-phosphate repeats. [ 3 ] Variation comes in the long-chain tail, generally as sugar subunits attached to the sides or the body of the repeats. Four types of WTA repeats had been named as of 2013. [ 4 ]
Lipoteichoic acids follow a similar pattern of putting most variation in the repeats, although the set of enzymes used is different, at least in the case of Type I LTA. The repeats are anchored to the membrane via a (di)glucosyl-diacylglycerol (Glc (2) DAG) anchor. Type IV LTA from Streptococcus pneumoniae represents a special case where both types intersect: after the tail is synthesized on an undecaprenyl phosphate (C 55 -P) intermediate "head", different TagU/LCP (LytR-CpsA-Psr) family enzymes attach it either to the wall, to form a WTA, or to the GlcDAG anchor. [ 5 ]
The main function of teichoic acids is to provide flexibility to the cell wall by attracting cations such as calcium and potassium. Teichoic acids can be substituted with D -alanine ester residues [ 6 ] or D - glucosamine , [ 7 ] giving the molecule zwitterionic properties. [ 8 ] These zwitterionic teichoic acids are suspected ligands for toll-like receptors 2 and 4. Teichoic acids also assist in the regulation of cell growth by limiting the ability of autolysins to break the β(1-4) bond between N -acetylglucosamine and N -acetylmuramic acid.
Lipoteichoic acids may also act as receptor molecules for some Gram-positive bacteriophages , although this has not yet been conclusively demonstrated. [ 9 ] Lipoteichoic acid is an acidic polymer and contributes negative charge to the cell wall.
Enzymes involved in the biosynthesis of WTAs have been named TarO, TarA, TarB, TarF, TarK, and TarL. [ 3 ]
Following the synthesis, the ATP-binding cassette transporters ( teichoic-acid-transporting ATPase ) TarGH ( P42953 , P42954 ) flip the cytoplasmic complex to the external surface of the inner membrane. The redundant TagTUV enzymes link this product to the cell wall. [ 4 ] The enzymes TarI ( Q8RKI9 ) and TarJ ( Q8RKJ0 ) are responsible for producing the substrates that lead to the polymer tail. Many of these proteins are located in a conserved gene cluster. [ 3 ]
Later studies (2013) identified a few more enzymes that attach unique sugars to the WTA repeat units. A set of enzymes and transporters named DltABCE, which adds alanines to both wall and lipoteichoic acids, was also found. [ 4 ]
Note that the set of genes is named "Tag" (teichoic acid glycerol) instead of "Tar" (teichoic acid ribitol) in B. subtilis 168, which lacks the TarK/TarL enzymes. TarB/F/L/K all bear some similarity to each other and belong to the same family ( InterPro : IPR007554 ). [ 3 ] Some linked UniProt entries are in fact the "Tag" ortholog, as they are better annotated (because strain 168/BACSU is the main model strain). The "similarity search" may be used to access the genes in the Tar-producing B. subtilis W23 (BACPZ).
Targeting these biosynthesis pathways was proposed in 2004. [ 3 ] A further review in 2013 gave more specific parts of the pathways to inhibit, given newer knowledge. [ 4 ] | https://en.wikipedia.org/wiki/Teichoic_acid |
The Teknomo–Fernandez algorithm (TF algorithm) is an efficient algorithm for generating the background image of a given video sequence.
By assuming that the background image is shown in the majority of the video, the algorithm is able to generate a good background image of a video in $O(R)$ time using only a small number of binary and Boolean bit operations, which require little memory and rely on operators built into many programming languages such as C , C++ , and Java . [ 1 ] [ 2 ] [ 3 ]
People tracking from videos usually involves some form of background subtraction to segment foreground from background. Once foreground images are extracted, the desired algorithms (such as those for motion tracking , object tracking , and facial recognition ) may be executed using these images. [ 1 ] [ 3 ]
However, background subtraction requires that the background image already be available and, unfortunately, this is not always the case. Traditionally, the background image is searched for manually or automatically among the video images taken when no objects are present. More recently, automatic background generation through object detection , median filtering , medoid filtering , approximated median filtering , linear predictive filters , non-parametric models , Kalman filters , and adaptive smoothing has been suggested; however, most of these methods have high computational complexity and are resource-intensive. [ 1 ] [ 4 ]
The Teknomo–Fernandez algorithm is also an automatic background generation algorithm. Its advantage, however, is its computational speed of only $O(R)$ time, depending on the resolution $R$ of an image, and its accuracy, gained within a manageable number of frames. At least three frames from a video are needed to produce the background image, assuming that for every pixel position the background occurs in the majority of the frames. Furthermore, it can be performed for both grayscale and colored videos. [ 1 ]
Generally, the algorithm will work whenever the following single important assumption holds:
For each pixel position, the majority of the pixel values in the entire video contain the pixel value of the actual background image (at that position). [ 1 ]
As long as each part of the background is shown in the majority of the video, the entire background image need not appear in any single frame; the algorithm is still expected to work accurately. [ 1 ]
At the first level, three frames are selected at random from the image sequence and combined into a background estimate by taking, at every bit position, the modal (majority) bit of the three frames. This yields a better background image at the second level. The procedure is repeated until the desired level $L$ is reached. [ 1 ]
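The per-bit modal value of three binary images can be computed with three ANDs and two ORs. The following is a minimal sketch of this combination step and of the level-by-level scheme, not the authors' reference implementation; the per-level frame counts and random-sampling details are simplifying assumptions.

```python
import numpy as np

def majority3(f1, f2, f3):
    """Per-bit modal (majority) value of three same-shape uint8 frames,
    using only Boolean bit operations: an output bit is set iff it is
    set in at least two of the three inputs."""
    return (f1 & f2) | (f2 & f3) | (f1 & f3)

def tf_background(frames, levels=6, seed=0):
    """Combine random triples of frames level by level and return the
    final background estimate."""
    rng = np.random.default_rng(seed)
    current = list(frames)
    for _ in range(levels):
        current = [
            majority3(*(current[i] for i in rng.integers(0, len(current), 3)))
            for _ in range(max(1, len(current) // 3))
        ]
    return current[0]
```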
At level $\ell$, the probability $p_\ell$ that the predicted modal bit is the actual modal bit satisfies $p_\ell = p_{\ell-1}^3 + 3\,p_{\ell-1}^2\,(1 - p_{\ell-1})$.
Iterating this recurrence for several initial probabilities shows that, even if the modal bit at the considered position occurs in only a low 60% of the frames, the probability of accurate modal-bit determination already exceeds 99% at 6 levels. [ 1 ]
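The figure for the 60% case can be reproduced directly from the recurrence above; this numerical check is an illustration, not part of the original paper.

```python
def modal_bit_probability(p0, levels):
    """Iterate the recurrence p <- p^3 + 3 * p^2 * (1 - p)."""
    p = p0
    for _ in range(levels):
        p = p**3 + 3 * p**2 * (1 - p)
    return p

print(modal_bit_probability(0.6, 6))  # ~0.9976, i.e. above 99%
```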
The space requirement of the Teknomo–Fernandez algorithm is $O(RF + R \cdot 3^L)$, depending on the resolution $R$ of the image, the number $F$ of frames in the video, and the desired number $L$ of levels. However, the fact that $L$ will probably not exceed 6 reduces the space complexity to $O(RF)$. [ 1 ]
The entire algorithm runs in $O(R)$ time, depending only on the resolution of the image. Computing the modal bit for each bit position can be done in $O(1)$ time, while computing the resulting image from three given images can be done in $O(R)$ time. The number of images to be processed across $L$ levels is $O(3^L)$; however, since $L \leq 6$, this is actually $O(1)$, and thus the algorithm runs in $O(R)$. [ 1 ]
A variant of the Teknomo–Fernandez algorithm that incorporates the Monte Carlo method, named CRF, has been developed. Two different configurations of CRF were implemented: CRF9,2 and CRF81,1. Experiments on some colored video sequences showed that the CRF configurations outperform the TF algorithm in terms of accuracy. However, the TF algorithm remains more efficient in terms of processing time. [ 5 ] | https://en.wikipedia.org/wiki/Teknomo–Fernandez_algorithm |
The Tektronix 4105 was a video terminal introduced by Tektronix in 1983. It could be used as a conventional text terminal supporting the ANSI escape codes of the VT102 or the VT52 , as well as a graphics terminal using Tektronix's own 4010-series vector graphics . In graphics mode the resolution was relatively limited, at 480 by 360 pixels, but the 4105 added a wide variety of new commands to the original 4010 set, including up to eight colors on the screen. The color commands would become a standard in their own right, and are supported by most terminal emulators that support the Tek 4010 series.
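For illustration, the sketch below emits a vector in the classic 4010 four-byte coordinate encoding that the 4105 inherits; this encoding is accepted by common Tek emulators (for example xterm's Tek mode), while the 4105-specific color commands are not shown here and would require the 4100-series documentation.

```python
GS, US = b"\x1d", b"\x1f"  # enter / leave Tek vector-graphics mode

def tek_xy(x, y):
    """Encode a 10-bit (x, y) pair as the classic 4010 byte sequence
    HiY, LoY, HiX, LoX (coordinates range over 0..1023)."""
    return bytes([
        0x20 | ((y >> 5) & 0x1F),  # HiY: top 5 bits of y
        0x60 | (y & 0x1F),         # LoY: bottom 5 bits of y
        0x20 | ((x >> 5) & 0x1F),  # HiX: top 5 bits of x
        0x40 | (x & 0x1F),         # LoX: bottom 5 bits of x
    ])

# After GS, the first coordinate is a dark (pen-up) move; each further
# coordinate draws a vector from the previous position.
frame = GS + tek_xy(100, 100) + tek_xy(400, 300) + US
```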
| https://en.wikipedia.org/wiki/Tektronix_4105 |
The Tel-musici was an early entertainment innovation which used telephone lines to transmit phonograph recordings to individual households. Subscribers called a central "music room" to request selections, which they listened to at home over specially designed loudspeakers called "magnaphones". The service later incorporated live programs, expanding its operations more along the lines of a general " telephone newspaper ".
A Tel-musici company was incorporated in Delaware in 1908, and the service began operation in Wilmington the next year. However, although there were plans to expand throughout the United States, only this single location ever became operational, until it ceased operations around 1914.
The primary individual behind the Tel-musici was inventor George R. Webb. In January 1908, while soliciting investors, he arranged a demonstration of the concept at a Baltimore hotel, where listeners telephoned a remote location with their requests, which were played back as "10 cents' worth of Lohengrin " or "a quarter's worth of ragtime" to the assembled participants. [ 2 ] Shortly thereafter, a Tel-musici company with a capitalization of $10,000 was incorporated in the state of Delaware by "a number of Baltimorians". [ 3 ]
In 1909 an operating Tel-musici system was established in Wilmington, Delaware , with George Webb as the company president, and J. J. Comer the general manager. The music rooms' musical library was described as comprehensive and "embracing a complete line of all the latest productions". The charge was three cents for each requested standard tune, and seven cents for grand opera. Subscribers were required to guarantee purchases totaling $18 per year. Provisions were also made for transmitting a general program in lieu of individual requests. [ 1 ]
The promoters hoped to interest local telephone companies in installing their own Tel-musici operations. The Wilmington operation was later taken over by the Wilmington and Philadelphia Traction Co., which operated a Wilmington telephone franchise, and an advertisement for a Tel-musici "dance music program" appeared as late as 1914. [ 4 ] However, it does not appear that any additional installations became operational.
In 1912, George Webb began promoting the similarly conceived Magnaphone system, established in New York City, which was intended to transmit recordings and other audio offerings to subscribers for eight dollars a month. [ 5 ] The New York Magnaphone and Music Company was granted a twenty-five-year franchise for operations "in the Borough of Manhattan and that part of the Borough of The Bronx west of the Bronx River"; however, the franchise was never built. J. J. Comer would later participate, in conjunction with the Automatic Electric Company of Chicago, in the development of the Musolaphone system, which briefly operated in southside Chicago and transmitted live news and entertainment to subscribing homes and businesses over telephone lines. [ 6 ] | https://en.wikipedia.org/wiki/Tel-musici |
Telecanthus , or dystopia canthorum , refers to increased distance between the inner corners of the eyelids (medial canthi ), while the inter-pupillary distance is normal. This is in contrast to hypertelorism , in which the distance between the whole eyes is increased. [ 1 ] Telecanthus and hypertelorism are each associated with multiple congenital disorders.
The distance between the inner corners of the eyelids is called the intercanthal distance. In most people, the intercanthal distance is equal to the width of each eye (the distance between the inner and outer corners of each eye). The average interpupillary distance is 60–62 millimeters (mm), which corresponds to an intercanthal distance of approximately 30–31 mm. [ 2 ]
Traumatic telecanthus refers to telecanthus resulting from traumatic injury to the nasal- orbital - ethmoid (NOE) complex. [ 1 ] The diagnosis of traumatic telecanthus requires a measurement in excess of those normative values. The pathology can be either unilateral or bilateral, with the former more difficult to measure. [ 2 ]
Telecanthus is often associated with many congenital disorders. Congenital disorders such as Down syndrome , fetal alcohol syndrome , cri du chat syndrome , Klinefelter syndrome , Turner syndrome , Ehlers–Danlos syndrome , Waardenburg syndrome [ 3 ] often present with prominent epicanthal folds , and if these folds are nasal (as they most commonly are) they will cause telecanthus. [ citation needed ]
Telecanthus comes from the Greek τῆλε ( tele , "far") and the latinized form of the Greek κάνθος ( kánthos , "corner of the eyelid"). Dystopia canthorum comes from the Greek δυσ- ( dus -, "bad") and τόπος ( tópos , "place"), plus the latinized Greek κάνθος adapted to Latin morphology as canthorum ("of the canthi"). [ citation needed ] | https://en.wikipedia.org/wiki/Telecanthus |
A telechelic polymer or oligomer is a prepolymer capable of entering into further polymerization or other reactions through its reactive end-groups. [ 1 ] It can be used for example to synthesize block copolymers .
By definition, a telechelic polymer is a di-end-functional polymer in which both ends possess the same functionality. [ 2 ] Where the chain ends are not of the same functionality, the polymer is simply termed a di-end-functional polymer.
All polymers resulting from living polymerization are end-functional but may not necessarily be telechelic. [ 2 ]
Telechelic polymers with different numbers of reactive end-groups can be termed, according to the number of end-groups, "hemi-" (one), "di-" (two), and "tri-telechelic" (three) polymers. A polymer presenting many end-groups is called "polytelechelic". [ 3 ]
Telechelic polymers such as polymeric diols and epoxy prepolymers can be used to prepare polymers by step-growth polymerization .
Other examples of telechelic polymers are the halato-telechelic polymers or halatopolymers. [ 4 ] The end-groups of these polymers are ionic or ionizable like carboxylate or quaternary ammonium groups.
Telechelic polymers can be synthesized by different polymerization mechanisms. From vinyl monomers, synthetic strategies include controlled radical polymerization and anionic polymerization. In the case of olefins, which are difficult to functionalize, recent advances in insertion polymerization and post-polymerization functionalization can be used to produce telechelic polyolefins. [ 5 ] [ 6 ]
Telechelic polymers are important in the preparation of block copolymers, acting as building blocks for the structural design of these copolymers. In particular, ABA triblock copolymers have received much industrial interest for the development of thermoplastic elastomers . [ 7 ]
| https://en.wikipedia.org/wiki/Telechelic_polymer |
The Telecom Infra Project ( TIP ) was formed in 2016 as an engineering-focused, collaborative methodology for building and deploying global telecom network infrastructure, with the goal of enabling global access for all. [ 1 ]
TIP is jointly steered by its group of founding tech and telecom companies, which forms its board of directors, and is chaired by Vodafone's Head of Network Strategy and Architecture, Yago Tenorio. Member companies host technology incubator labs and accelerators, and TIP hosts an annual infrastructure conference, TIP Summit, [ 2 ] which was renamed FYUZ and hosted in Madrid in October 2022.
The organization adopts transparency of process and collaboration in the development of new technologies, [ 3 ] by its more than 500 participating member organizations, including operators, suppliers, developers, integrators, startups and other entities, [ 4 ] that participate in various TIP project groups. Projects employ current case studies to evolve telecom equipment and software into more flexible, agile, and interoperable forms. [ 5 ] [ 6 ] [ 7 ]
With telecom technology disaggregated into Access, Backhaul, and Core & Management, each project group focused on one of these three specific network areas; past and present projects span all three. [ 8 ] [ 9 ]
Various TIP member companies provide dedicated space for its project groups as "TIP Community Labs," facilitating collaborative projects between member companies in the development of telecom infrastructure. [ 15 ] As of 2020, TIP has 14 labs in 8 countries around the world, including Spain, Italy, the USA, Indonesia, the UK, Japan, Germany, and Brazil.
TIP Ecosystem Acceleration Centers (TEACs) are global technology innovation centers sponsored by one or more member organizations that connect startups to venture capitalists. TEACs are hosted in Seoul , Berlin , Paris and the UK . [ 16 ] [ 17 ] | https://en.wikipedia.org/wiki/Telecom_Infra_Project |
A telecom network protocol analyzer is a protocol analyzer used to analyze switching and telecommunications signaling protocols between different nodes in PSTN or mobile telephone networks , such as 2G or 3G GSM networks, CDMA networks, WiMAX , and so on.
In a mobile telecommunication network it can analyze the traffic between MSC and BSC , BSC and BTS , MSC and HLR , MSC and VLR , VLR and HLR, and so on.
Protocol analyzers are mainly used for performance measurement and troubleshooting . These devices connect to the network to calculate key performance indicators, in order to monitor the network and speed up troubleshooting activities.
| https://en.wikipedia.org/wiki/Telecom_network_protocol_analyzer |
TIMS , or Telecommunication Instructional Modeling System , is an electronic device invented by Tim Hooper and developed by Australian engineering company Emona Instruments that is used as a telecommunications trainer in educational settings and universities. [ 1 ] [ 2 ] [ 3 ]
TIMS was designed at the University of New South Wales by Tim Hooper in 1971. It was developed to run student experiments for electrical engineering communications courses. [ 4 ] [ 5 ] Hooper’s concept was developed into the current TIMS model in the late 1980s. [ 6 ] [ 7 ] In 1986, the project won a competition organized by Electronics Australia for development work using the Texas Instruments TMS320 . [ 8 ] [ 9 ] Emona Instruments also received an award for TIMS at the fifth Secrets of Australian ICT Innovation Competition. [ 9 ]
TIMS uses a block diagram-based interface for experiments in the classroom. It can model mathematical equations to simulate electric signals, or it can use block diagrams to simulate telecommunications systems. [ 4 ] [ 7 ] [ 10 ] It uses a different hardware card to represent functions for each block of the diagram. [ 11 ]
TIMS consists of a server, a chassis, and boards that can emulate the configurations of a telecommunications system. [ 12 ] It uses electronic circuits as modules to simulate the components of analog and digital communications systems . [ 13 ] [ 14 ] The modules can perform different functions such as signal generation, signal processing , signal measurement, and digital signal processing . [ 10 ] [ 13 ]
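The following sketch is a loose software analogy of this block-diagram approach, assuming nothing about TIMS's actual internals: each "module" is a small function, and an amplitude-modulated signal is built by patching a generator, an adder and a multiplier together, much as hardware modules are patched on the chassis.

```python
# Minimal block-diagram sketch: modules as functions, patched together
# to form an AM transmitter. Module names and parameters are illustrative.

import numpy as np

fs = 100_000                       # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)     # 10 ms of "experiment time"

def generator(freq, amplitude=1.0):    # oscillator module
    return amplitude * np.sin(2 * np.pi * freq * t)

def adder(a, b):                       # two-input adder module
    return a + b

def multiplier(a, b):                  # four-quadrant multiplier module
    return a * b

message = generator(1_000)             # 1 kHz message
carrier = generator(10_000)            # 10 kHz carrier
am = multiplier(adder(1.0, 0.5 * message), carrier)   # (1 + m(t)) * c(t)

print(am[:5])
```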
The block diagram approach to modeling the mathematics of a telecommunication system has also been ported to a purely software environment, where the blocks are patched together onscreen to mimic the hardware implementation but are driven by a simulation engine (known as TutorTIMS). [ 15 ] [ 16 ]
It can be used by multiple students at once across the internet or a LAN via a browser-based client screen. This utilises a statistical time-division multiplexing architecture in the control unit. The method is applied to both telecommunications and electronics laboratories (known as netCIRCUITlabs). [ 15 ] [ 16 ] [ 17 ] | https://en.wikipedia.org/wiki/Telecommunication_Instructional_Modeling_System |
Telecommunications billing is the group of processes by which communications service providers collect consumption data, calculate charging and billing information, produce bills for customers, process payments and manage debt collection. [ 1 ] [ 2 ]
A telecommunications billing system is an enterprise application software designed to support the telecommunications billing processes.
Telecommunications billing is a significant component of any commercial communications service provider regardless of specialization: telephone , mobile wireless communication , VoIP companies, mobile virtual network operators , internet service providers , transit traffic companies, cable television and satellite TV companies could not operate without billing, because it realizes the economic value of their business.
Billing functions can be grouped into three areas: operations , information management , and financial management . In the broad sense, when billing and revenue management ( BRM ) is considered as a single process bundle, further functional areas can be picked out: revenue assurance , profitability management , and fraud management .
The operations area includes functions for capturing usage records (depending on the industry these can be call detail records , charging data records or network traffic measurement data; in some cases usage data is prepared by a telecommunications mediation system ), rating consumption (determining the factors significant for further calculation, for example total call time in each tariff zone, the count of short messages , or the traffic summary in gigabytes), applying prices, tariffs, discounts and taxes, compiling charges for each customer account, rendering bills, managing bill delivery, applying adjustments, and maintaining customer accounts. [ 3 ]
The implementation of operations functions can vary significantly depending on the communications type and payment model. In particular, for prepaid customers billing must be realized continuously (in near real-time terms, also called hot billing ), and when a lower threshold amount on the account is reached, the system can automatically limit service. In the postpaid service model there is no vital requirement to decrease the balance of a customer account in real time; in this case charging is scheduled to run less often, usually once per month.
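A minimal sketch of the rating and hot-billing logic described above follows. The tariff values, record fields and cutoff threshold are invented for illustration; real BRM systems are far more elaborate.

```python
# Hypothetical rating engine: price usage records against a tariff and,
# for a prepaid account, decrement the balance and limit service when it
# falls below a threshold ("hot billing").

TARIFF = {"voice_per_min": 0.05, "sms_each": 0.02, "data_per_mb": 0.01}

def rate_record(record):
    """Return the charge for one CDR-like usage record."""
    if record["type"] == "voice":
        return record["minutes"] * TARIFF["voice_per_min"]
    if record["type"] == "sms":
        return TARIFF["sms_each"]
    return record["megabytes"] * TARIFF["data_per_mb"]

def charge_prepaid(balance, records, cutoff=0.50):
    """Apply each charge as it arrives, as a prepaid system must."""
    for rec in records:
        balance -= rate_record(rec)
        if balance < cutoff:          # lower threshold reached
            return balance, "service limited"
    return balance, "active"

usage = [{"type": "voice", "minutes": 12},
         {"type": "sms"},
         {"type": "data", "megabytes": 40}]
print(charge_prepaid(5.00, usage))    # (3.98, 'active')
```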
The information management area unites functions responsible for supporting customer information, product and service data, and pricing models, including their possible combinations, as well as billing configuration data such as billing cycle schedules, event triggers, bill delivery channels, audit settings, and data archiving parameters. [ 4 ] Customer information is often integrated with a customer relationship management system; collaboration with the customer can be a function of the information management area of the billing system or can be allocated completely to the CRM. [ 5 ]
The financial management area covers functions for payment tracking and processing, mapping the correspondence between payments and consumed services, managing credits and debt collection, and calculating company taxes. [ 6 ]
Communications service providers that operate multiple services in multiple modes tend to integrate all charges into one bill and unify customer management in one system. The term convergent billing system refers to such a solution: one that can maintain a single customer account and produce a single bill for all services (for example, public switched telephone network , cable TV and cable internet services for one customer) regardless of the payment method (prepaid or postpaid). [ 7 ] [ 8 ]
The global market for packaged telecommunications billing systems was estimated at $6 billion in 2007 and forecast to grow to $7.2 billion by 2012. [ 9 ] Market shares by application as of 2007 were as follows:
As of 2010, market shares of billing systems by vendor were as follows: | https://en.wikipedia.org/wiki/Telecommunications_billing |
Telecommunications engineering is a subfield of electronics engineering which seeks to design and devise systems of communication at a distance. [ 1 ] [ 2 ] The work ranges from basic circuit design to strategic mass developments. A telecommunication engineer is responsible for designing and overseeing the installation of telecommunications equipment and facilities, such as complex electronic switching systems , plain old telephone service facilities, optical fiber cabling, IP networks , and microwave transmission systems. Telecommunications engineering also overlaps with broadcast engineering .
Telecommunication is a diverse field of engineering connected to electronic , civil and systems engineering . [ 1 ] Ultimately, telecom engineers are responsible for providing high-speed data transmission services. They use a variety of equipment and transport media to design the telecom network infrastructure; the most common media used by wired telecommunications today are twisted pair , coaxial cables , and optical fibers . Telecommunications engineers also provide solutions revolving around wireless modes of communication and information transfer, such as wireless telephony services, radio and satellite communications , internet , Wi-Fi and broadband technologies.
Telecommunication systems are generally designed by telecommunication engineers, a discipline that sprang from technological improvements in the telegraph industry in the late 19th century and the radio and telephone industries in the early 20th century. Today, telecommunication is widespread and devices that assist the process, such as the television, radio and telephone, are common in many parts of the world. There are also many networks that connect these devices, including computer networks, the public switched telephone network (PSTN), [ citation needed ] radio networks, and television networks. Computer communication across the Internet is one of many examples of telecommunication. [ citation needed ] Telecommunication plays a vital role in the world economy, and the telecommunication industry's revenue has been placed at just under 3% of the gross world product. [ citation needed ]
Samuel Morse independently developed a version of the electrical telegraph that he unsuccessfully demonstrated on 2 September 1837. Soon after he was joined by Alfred Vail who developed the register — a telegraph terminal that integrated a logging device for recording messages to paper tape. This was demonstrated successfully over three miles (five kilometres) on 6 January 1838 and eventually over forty miles (sixty-four kilometres) between Washington, D.C. and Baltimore on 24 May 1844. The patented invention proved lucrative and by 1851 telegraph lines in the United States spanned over 20,000 miles (32,000 kilometres). [ 3 ]
The first successful transatlantic telegraph cable was completed on 27 July 1866, allowing transatlantic telecommunication for the first time. Earlier transatlantic cables installed in 1857 and 1858 only operated for a few days or weeks before they failed. [ 4 ] The international use of the telegraph has sometimes been dubbed the " Victorian Internet ". [ 5 ]
The first commercial telephone services were set up in 1878 and 1879 on both sides of the Atlantic in the cities of New Haven and London . Alexander Graham Bell held the master patent for the telephone that was needed for such services in both countries. The technology grew quickly from this point, with inter-city lines being built and telephone exchanges in every major city of the United States by the mid-1880s. [ 6 ] [ 7 ] [ 8 ] Despite this, transatlantic voice communication remained impossible for customers until January 7, 1927, when a connection was established using radio. However no cable connection existed until TAT-1 was inaugurated on September 25, 1956, providing 36 telephone circuits. [ 9 ]
In 1880, Bell and co-inventor Charles Sumner Tainter conducted the world's first wireless telephone call via modulated lightbeams projected by photophones . The scientific principles of their invention would not be utilized for several decades, when they were first deployed in military and fiber-optic communications .
Over several years starting in 1894, the Italian inventor Guglielmo Marconi built the first complete, commercially successful wireless telegraphy system based on airborne electromagnetic waves ( radio transmission ). [ 10 ] In December 1901, he went on to establish wireless communication between Britain and Newfoundland, earning him the Nobel Prize in Physics in 1909 (which he shared with Karl Braun ). [ 11 ] In 1900, Reginald Fessenden was able to wirelessly transmit a human voice. On March 25, 1925, Scottish inventor John Logie Baird publicly demonstrated the transmission of moving silhouette pictures at the London department store Selfridges . In October 1925, Baird was successful in obtaining moving pictures with halftone shades, which were by most accounts the first true television pictures. [ 12 ] This led to a public demonstration of the improved device on 26 January 1926, again at Selfridges. Baird's first devices relied upon the Nipkow disk and thus became known as mechanical television . They formed the basis of semi-experimental broadcasts done by the British Broadcasting Corporation beginning September 30, 1929.
The first U.S. satellite to relay communications was Project SCORE in 1958, which used a tape recorder to store and forward voice messages. It was used to send a Christmas greeting to the world from U.S. President Dwight D. Eisenhower . In 1960 NASA launched an Echo satellite ; the 100-foot (30 m) aluminized PET film balloon served as a passive reflector for radio communications. Courier 1B , built by Philco , also launched in 1960, was the world's first active repeater satellite. Satellites are now used for many applications, including GPS, television, internet and telephone services.
Telstar was the first active, direct relay commercial communications satellite . Belonging to AT&T as part of a multi-national agreement between AT&T, Bell Telephone Laboratories , NASA, the British General Post Office , and the French National PTT (Post Office) to develop satellite communications, it was launched by NASA from Cape Canaveral on July 10, 1962, the first privately sponsored space launch. Relay 1 was launched on December 13, 1962, and became the first satellite to broadcast across the Pacific on November 22, 1963. [ 13 ]
The first and historically most important application for communication satellites was in intercontinental long distance telephony . The fixed Public Switched Telephone Network relays telephone calls from land line telephones to an earth station , where they are then transmitted to a receiving satellite dish via a geostationary satellite in Earth orbit. Improvements in submarine communications cables , through the use of fiber-optics , caused some decline in the use of satellites for fixed telephony in the late 20th century, but they still exclusively service remote islands such as Ascension Island , Saint Helena , Diego Garcia , and Easter Island , where no submarine cables are in service. There are also some continents and some regions of countries where landline telecommunications are rare to nonexistent, for example Antarctica , plus large regions of Australia, South America, Africa, Northern Canada, China, Russia and Greenland .
After commercial long distance telephone service was established via communication satellites, a host of other commercial telecommunications were also adapted to similar satellites starting in 1979, including mobile satellite phones , satellite radio , satellite television and satellite Internet access . The earliest adaptation for most such services occurred in the 1990s as the pricing for commercial satellite transponder channels continued to drop significantly.
On 11 September 1940, George Stibitz was able to transmit problems using teleprinter to his Complex Number Calculator in New York and receive the computed results back at Dartmouth College in New Hampshire . [ 14 ] This configuration of a centralized computer or mainframe computer with remote "dumb terminals" remained popular throughout the 1950s and into the 1960s. However, it was not until the 1960s that researchers started to investigate packet switching — a technology that allows chunks of data to be sent between different computers without first passing through a centralized mainframe. A four-node network emerged on 5 December 1969. This network soon became the ARPANET , which by 1981 would consist of 213 nodes. [ 15 ]
ARPANET's development centered around the Request for Comment process and on 7 April 1969, RFC 1 was published. This process is important because ARPANET would eventually merge with other networks to form the Internet, and many of the communication protocols that the Internet relies upon today were specified through the Request for Comment process. In September 1981, RFC 791 introduced the Internet Protocol version 4 (IPv4) and RFC 793 introduced the Transmission Control Protocol (TCP) — thus creating the TCP/IP protocol that much of the Internet relies upon today.
Optical fiber can be used as a medium for telecommunication and computer networking because it is flexible and can be bundled into cables. It is especially advantageous for long-distance communications, because light propagates through the fiber with little attenuation compared to electrical cables. This allows long distances to be spanned with few repeaters .
In 1966 Charles K. Kao and George Hockham proposed optical fibers at STC Laboratories (STL) at Harlow , England, when they showed that the losses of 1000 dB/km in existing glass (compared to 5–10 dB/km in coaxial cable) were due to contaminants, which could potentially be removed.
Optical fiber was successfully developed in 1970 by Corning Glass Works , with attenuation low enough for communication purposes (about 20 dB /km), and at the same time GaAs (Gallium arsenide) semiconductor lasers were developed that were compact and therefore suitable for transmitting light through fiber optic cables for long distances.
After a period of research starting in 1975, the first commercial fiber-optic communications system was developed, which operated at a wavelength around 0.8 μm and used GaAs semiconductor lasers. This first-generation system operated at a bit rate of 45 Mbit/s with repeater spacing of up to 10 km. Soon after, on 22 April 1977, General Telephone and Electronics sent the first live telephone traffic through fiber optics at a 6 Mbit/s throughput in Long Beach, California.
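The quoted repeater spacing can be sanity-checked with simple loss-budget arithmetic. The figures below (a 20 dB power budget and 2 dB/km fiber loss) are assumptions chosen for illustration, not documented parameters of those early systems.

```python
# Back-of-the-envelope repeater spacing from an assumed loss budget.

attenuation_db_per_km = 2.0   # assumed first-generation fiber loss at 0.8 um
power_budget_db = 20.0        # assumed allowable loss between repeaters

max_span_km = power_budget_db / attenuation_db_per_km
print(f"maximum repeater spacing ~ {max_span_km:.0f} km")   # ~10 km
```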
The first wide area network fibre optic cable system in the world seems to have been installed by Rediffusion in Hastings, East Sussex, UK in 1978. The cables were placed in ducting throughout the town and served over 1000 subscribers. They were used at that time for the transmission of television channels that were otherwise unavailable because of local reception problems.
The first transatlantic telephone cable to use optical fiber was TAT-8 . It went into operation in 1988.
In the late 1990s through 2000, industry promoters and research companies such as KMI and RHK predicted massive increases in demand for communications bandwidth due to increased use of the Internet and the commercialization of various bandwidth-intensive consumer services such as video on demand . Internet Protocol data traffic was increasing exponentially, at a faster rate than integrated circuit complexity had increased under Moore's Law . [ 16 ]
The transmitter (information source) takes information and converts it to a signal for transmission. In electronics and telecommunications, a transmitter or radio transmitter is an electronic device which, with the aid of an antenna , produces radio waves . In addition to their use in broadcasting , transmitters are necessary components of many electronic devices that communicate by radio , such as cell phones .
The transmission medium is the physical channel over which the signal is transmitted. For example, the transmission medium for sounds is usually air, but solids and liquids may also act as transmission media for sound. Many transmission media are used as communications channels . One of the most common physical media used in networking is copper wire , which can carry signals over long distances using relatively low amounts of power. Another example of a physical medium is optical fiber , which has emerged as the most commonly used transmission medium for long-distance communications. Optical fiber is a thin strand of glass that guides light along its length.
The absence of a material medium in vacuum may also constitute a transmission medium for electromagnetic waves such as light and radio waves .
The receiver ( information sink ) receives the signal and converts it back into the required information. In radio communications , a radio receiver is an electronic device that receives radio waves and converts the information carried by them to a usable form. It is used with an antenna . The information produced by the receiver may be in the form of sound (an audio signal ), images (a video signal ) or digital data . [ 17 ]
Wired communications make use of underground communications cables (less often, overhead lines), electronic signal amplifiers (repeaters) inserted into connecting cables at specified points, and terminal apparatus of various types, depending on the type of wired communications used. [ 18 ]
Wireless communication involves the transmission of information over a distance without the help of wires, cables or any other form of electrical conductor. [ 19 ] Wireless operations permit services, such as long-range communications, that are impossible or impractical to implement with the use of wires. The term is commonly used in the telecommunications industry to refer to telecommunications systems (e.g. radio transmitters and receivers, remote controls etc.) which use some form of energy (e.g. radio waves , acoustic energy, etc.) to transfer information without the use of wires. [ 20 ] Information is transferred in this manner over both short and long distances. [ citation needed ]
A telecom equipment engineer is an electronics engineer that designs equipment such as routers, switches, multiplexers, and other specialized computer/electronics equipment designed to be used in the telecommunication network infrastructure.
A network engineer is a computer engineer who is in charge of designing, deploying and maintaining computer networks. In addition, they oversee network operations from a network operations center , design backbone infrastructure, and supervise interconnections in data centers .
A central-office engineer is responsible for designing and overseeing the implementation of telecommunications equipment in a central office (CO for short), also referred to as a wire center or telephone exchange . [ 21 ] A CO engineer is responsible for integrating new technology into the existing network, assigning the equipment's location in the wire center, and providing power, clocking (for digital equipment), and alarm monitoring facilities for the new equipment. The CO engineer is also responsible for providing more power, clocking, and alarm monitoring facilities if there are currently not enough available to support the new equipment being installed. Finally, the CO engineer is responsible for designing how the massive amounts of cable will be distributed to various equipment and wiring frames throughout the wire center and overseeing the installation and turn-up of all new equipment.
As structural engineers , CO engineers are responsible for the structural design and placement of racking and bays for the equipment to be installed in as well as for the plant to be placed on.
As electrical engineers , CO engineers are responsible for the resistance , capacitance , and inductance (RCL) design of all new plant to ensure telephone service is clear and crisp and data service is clean as well as reliable. Attenuation or gradual loss in intensity [ citation needed ] and loop loss calculations are required to determine cable length and size required to provide the service called for. In addition, power requirements have to be calculated and provided to power any electronic equipment being placed in the wire center.
Overall, CO engineers have seen new challenges emerging in the CO environment. With the advent of Data Centers, Internet Protocol (IP) facilities, cellular radio sites, and other emerging-technology equipment environments within telecommunication networks, it is important that a consistent set of established practices or requirements be implemented.
Installation suppliers or their sub-contractors are expected to provide requirements with their products, features, or services. These services might be associated with the installation of new or expanded equipment, as well as the removal of existing equipment. [ 22 ] [ 23 ]
Several other factors must be considered such as:
Outside plant (OSP) engineers are also often called field engineers, because they frequently spend much time in the field taking notes about the civil environment, aerial, above ground, and below ground. [ citation needed ] OSP engineers are responsible for taking plant (copper, fiber, etc.) from a wire center to a distribution point or destination point directly. If a distribution point design is used, then a cross-connect box is placed in a strategic location to feed a determined distribution area.
The cross-connect box, also known as a serving area interface , is then installed to allow connections to be made more easily from the wire center to the destination point, and it ties up fewer facilities by not requiring dedicated facilities from the wire center to every destination point. The plant is then taken directly to its destination point or to another small closure called a terminal, where access can also be gained to the plant if necessary. These access points are preferred as they allow faster repair times for customers and save telephone operating companies large amounts of money.
The plant facilities can be delivered via underground facilities, either direct buried or through conduit or in some cases laid under water, via aerial facilities such as telephone or power poles, or via microwave radio signals for long distances where either of the other two methods is too costly.
As structural engineers , OSP engineers are responsible for the structural design and placement of cellular towers and telephone poles as well as calculating pole capabilities of existing telephone or power poles onto which new plant is being added. Structural calculations are required when boring under heavy traffic areas such as highways or when attaching to other structures such as bridges. Shoring also has to be taken into consideration for larger trenches or pits. Conduit structures often include encasements of slurry that needs to be designed to support the structure and withstand the environment around it (soil type, high traffic areas, etc.).
As electrical engineers , OSP engineers are responsible for the resistance, capacitance, and inductance (RCL) design of all new plant to ensure telephone service is clear and crisp and data service is clean as well as reliable. Attenuation, or gradual loss in intensity, [ citation needed ] and loop loss calculations are required to determine the cable length and size required to provide the service called for. In addition, power requirements have to be calculated and provided to power any electronic equipment being placed in the field. Ground potential has to be taken into consideration when placing equipment, facilities, and plant in the field to account for lightning strikes, high voltage intercept from improperly grounded or broken power company facilities, and various sources of electromagnetic interference.
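As a hedged illustration of such a loop-loss check, the sketch below multiplies an assumed per-kilometre attenuation for each wire gauge by the loop length and compares the result against an assumed maximum allowed loss. None of these numbers are engineering standards; they only show the shape of the calculation.

```python
# Illustrative loop-loss check: is a proposed copper loop within budget?
# Attenuation values and the 8 dB limit are invented for the example.

ATTENUATION_DB_PER_KM = {"26 AWG": 1.61, "24 AWG": 1.27, "22 AWG": 1.08}

def loop_loss(gauge, length_km, max_loss_db=8.0):
    loss = ATTENUATION_DB_PER_KM[gauge] * length_km
    return loss, loss <= max_loss_db

for gauge in ATTENUATION_DB_PER_KM:
    loss, ok = loop_loss(gauge, 5.5)
    print(f"{gauge}: {loss:.1f} dB {'OK' if ok else 'too long'}")
```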
As civil engineers , OSP engineers are responsible for drafting plans, either by hand or using computer-aided design (CAD) software, for how telecom plant facilities will be placed. Often, when working with municipalities, trenching or boring permits are required, and drawings must be made for these. These drawings often include about 70% of the detailed information required to pave a road or add a turn lane to an existing street. Structural calculations are required when boring under heavy traffic areas such as highways or when attaching to other structures such as bridges. As civil engineers, telecom engineers provide the modern communications backbone for all technological communications distributed throughout civilizations today.
Unique to telecom engineering is the use of air-core cable, which requires an extensive network of air-handling equipment: compressors, manifolds, regulators and hundreds of miles of air pipe per system, all connected to pressurized splice cases and designed to pressurize this special form of copper cable to keep moisture out and provide a clean signal to the customer.
As a political and social ambassador, the OSP engineer is a telephone operating company's face and voice to the local authorities and other utilities. OSP engineers often meet with municipalities, construction companies and other utility companies to address their concerns and educate them about how the telephone utility works and operates. [ citation needed ] Additionally, the OSP engineer has to secure real estate in which to place outside facilities, such as an easement to place a cross-connect box. | https://en.wikipedia.org/wiki/Telecommunications_engineering |
In a telecommunications network , a link is a communication channel that connects two or more devices for the purpose of data transmission . The link may be a dedicated physical link or a virtual circuit that uses one or more physical links or shares a physical link with other telecommunications links.
A telecommunications link is generally based on one of several types of information transmission paths such as those provided by communication satellites , terrestrial radio communications infrastructure and computer networks to connect two or more points.
The term link is widely used in computer networking to refer to the communications facilities that connect nodes of a network. [ 1 ]
Sometimes the communications facilities that provide the communication channel that constitutes a link are also included in the definition of link .
A point-to-point link is a dedicated link that connects exactly two communication facilities (e.g., two nodes of a network, an intercom station at an entryway with a single internal intercom station, a radio path between two points, etc.).
Broadcast links connect two or more nodes and support broadcast transmission , where one node can transmit so that all other nodes can receive the same transmission. Classic Ethernet is an example.
A multipoint link, also known as a multidrop link, is a link that connects two or more nodes. Also known as general topology networks, these include ATM and Frame Relay links, as well as X.25 networks when used as links for a network-layer protocol like IP .
Unlike broadcast links, multipoint links provide no mechanism to efficiently send a single message to all other nodes without copying and retransmitting the message.
A point-to-multipoint link (or simply a multipoint ) is a specific type of multipoint link which consists of a central connection endpoint (CE) that is connected to multiple peripheral CEs. All of the peripheral CEs receive any transmission of data that originates from the central CE while any transmission of data that originates from any of the peripheral CEs is only received by the central CE.
Links are often referred to by terms that refer to the ownership or accessibility of the link.
A forward link is the link from a fixed location (e.g., a base station ) to a mobile user. If the link includes a communications relay satellite , the forward link will consist of both an uplink (base station to satellite) and a downlink (satellite to mobile user). [ 2 ]
The reverse link (sometimes called a return channel ) is the link from a mobile user to a fixed base station.
If the link includes a communications relay satellite , the reverse link will consist of both an uplink (mobile station to satellite) and a downlink (satellite to base station) which together constitute a half hop . | https://en.wikipedia.org/wiki/Telecommunications_link |
A telecommunications network is a group of nodes interconnected by telecommunications links that are used to exchange messages between the nodes. The links may use a variety of technologies based on the methodologies of circuit switching , message switching , or packet switching , to pass messages and signals.
Multiple nodes may cooperate to pass a message from an originating node to the destination node via multiple network hops. For this routing function, each node in the network is assigned a network address for identification and for locating it on the network. The collection of addresses in the network is called the address space of the network.
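A toy illustration of this multi-hop delivery follows, with an invented topology: each node is known by an address, and a breadth-first search finds a shortest hop sequence from the originating node to the destination.

```python
# Toy multi-hop routing over an invented topology: addresses are node
# names, links is the adjacency map, and BFS returns a shortest path.

from collections import deque

links = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B", "E"],
         "D": ["B", "E"], "E": ["C", "D"]}

def route(src, dst):
    """Return one shortest node-to-node path (list of addresses)."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

print(route("A", "E"))   # e.g. ['A', 'B', 'C', 'E']
```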
Examples of telecommunications networks include computer networks , the Internet , the public switched telephone network (PSTN), the global Telex network, the aeronautical ACARS network, [ 1 ] and the wireless radio networks of cell phone telecommunication providers.
In general, every telecommunications network conceptually consists of three parts, or planes (so called because they can be thought of as being, and often are, separate overlay networks ):
Data networks are used extensively throughout the world for communication between individuals and organizations . Data networks can be connected to allow users seamless access to resources that are hosted outside of the particular provider they are connected to. The Internet is the best example of the internetworking of many data networks from different organizations.
Terminals attached to IP networks like the Internet are addressed using IP addresses . Protocols of the Internet protocol suite (TCP/IP) provide the control and routing of messages across the IP data network. There are many different network structures that IP can be used across to efficiently route messages, for example:
There are three features that differentiate MANs from LANs or WANs:
Data center networks also rely heavily on TCP/IP for communication across machines. They connect thousands of servers, are designed to be highly robust, and provide low latency and high bandwidth. Data center network topology plays a significant role in determining the level of failure resiliency, ease of incremental expansion, communication bandwidth, and latency. [ 3 ]
In analogy to the improvements in the speed and capacity of digital computers provided by advances in semiconductor technology, expressed in the doubling of transistor density roughly every two years and described empirically by Moore's law , the capacity and speed of telecommunications networks have followed similar advances, for similar reasons. In telecommunication, this is expressed in Edholm's law , proposed by and named after Phil Edholm in 2004. [ 4 ] This empirical law holds that the bandwidth of telecommunication networks doubles every 18 months, which has proven to be true since the 1970s. [ 4 ] [ 5 ] The trend is evident in the Internet , [ 4 ] cellular (mobile), wireless and wired local area networks (LANs), and personal area networks . [ 5 ] This development is a consequence of rapid advances in the development of metal-oxide-semiconductor technology . [ 6 ]
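Applying the stated 18-month doubling period to an assumed starting bandwidth shows the compounding this law implies; the 1 Mbit/s starting point is arbitrary.

```python
# Edholm's law arithmetic: bandwidth doubling every 18 months.

def bandwidth_after(years, start_bps, doubling_months=18):
    return start_bps * 2 ** (years * 12 / doubling_months)

for years in (0, 3, 6, 9):
    print(years, "years:", f"{bandwidth_after(years, 1e6):.3g} bps")
# 3 years = 2 doublings -> 4 Mbit/s; 9 years -> 64 Mbit/s
```
| https://en.wikipedia.org/wiki/Telecommunications_network |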
Telecommunications systems management ( Telecomm or TSM for short, also Telecommunication systems , Telecommunications management , Network management ) is an interdisciplinary area of study offered at some universities to fill the need for a liaison between the technical aspect and the business aspect of telecommunications . At Murray State University it has been regarded as a half-and-half program, half business and half networking classes with the option to specialize in certain aspects in the field.
| https://en.wikipedia.org/wiki/Telecommunications_systems_management |
A telecompressor or focal reducer is an optical element used to reduce focal length , increase lens speed , and in some instances improve optical transfer function (OTF) performance. It is also widely known under the name “Speed Booster”, which is the commercial name of a line of telecompressors by the manufacturer Metabones. Popular applications include photography , videography , and astrophotography . In astrophotography, these qualities are most desirable when taking pictures of nearby large objects, such as nebulae . The effects and uses of the telecompressor are largely opposite to those of the teleconverter or Barlow lens . A combined system of a lens and a focal reducer has smaller back focus than the lens alone; this places restrictions on lenses and cameras that focal reducer might be used with.
Lens adapters that include telecompressors are useful with digital mirrorless cameras. [ 1 ] By combining a telecompressor within a lens adapter, mirrorless cameras can use the lenses of both digital single-lens reflex cameras (DSLRs) and film-based SLR ( Single-lens reflex cameras ).
For a refractor telescope or simple camera lens, the new effective focal length f n is given by: [ 2 ] [ 3 ]

f n = f o × (1 − d / f r )

where f o = original focal length of telescope, d = distance from telecompressor to image plane, and f r = focal length of telecompressor.
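A quick numerical check of this thin-lens relation; the focal lengths and spacing below are arbitrary example values.

```python
# Focal reducer arithmetic: f_n = f_o * (1 - d / f_r).

def effective_focal_length(f_o, d, f_r):
    """New effective focal length with a telecompressor in the light path."""
    return f_o * (1 - d / f_r)

f_n = effective_focal_length(f_o=2000, d=85, f_r=240)   # millimetres
print(f"f_n = {f_n:.0f} mm, reduction factor = {f_n / 2000:.2f}")  # ~0.65x
```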
For a reflecting telescope , the calculation is the same. However, since the telecompressor increases the field of view, there could be vignetting in the image, depending on the sizes of the secondary mirror and the telescope tube. [ 4 ]
For a catadioptric system that has a combination of mirror and lens, the determination of reduction is more complicated because the telescope has a variable focal length : the imaging plane can move along the axis of the imaging system. As the addition of the telecompressor increases the necessary back focus, the original focal length increases by a certain amount, and this new focal length is then used in the above formula.
Telecompressors were used in early digital SLR systems like the Minolta RD-175 and the Nikon E series . The technology of the time used relatively small sensor sizes , so lenses designed for 35 mm film could not be used with their native field of view without additional optics used. Implementing a telecompressor helped to mitigate these limitations. One effect of a telecompressor is that it reduces the diameter of the image circle , which means that a lens meant for a larger format can be used on a smaller sensor, partially making up for the latter's crop factor . [ 5 ]
| https://en.wikipedia.org/wiki/Telecompressor |
Teledeltos paper is an electrically conductive paper. It is formed by a coating of carbon on one side of a sheet of paper , giving one black and one white side. Western Union developed Teledeltos paper in the late 1940s (several decades after it was already in use for mathematical modelling) for use in spark printer based fax machines and chart recorders . [ 1 ]
Teledeltos paper has several uses within engineering that are far removed from its original use in spark printers. Many of these use the paper to model the distribution of electric potential and other scalar fields .
Teledeltos provides a sheet of uniform isotropic resistivity . As it is inexpensive and easily cut to shape, it may be used to make resistors of any shape needed. The paper backing is an insulator. These shapes are usually made to represent or model real-world examples of two-dimensional scalar fields , such as an electric field, or other fields following the same linear distribution rules.
The resistivity of Teledeltos is around 6 kilohms / square. [ 2 ] [ i ] This is low enough that it may be used with safe low voltages, yet high enough that the currents remain low, avoiding problems with contact resistance .
Connections are made to the paper by applying areas of silver-loaded conductive paint and attaching wires to these areas, often with spring clips. [ 2 ] [ 3 ] Each painted area has a sufficiently low resistivity (relative to the carbon) that it may be assumed to be at a constant voltage. With the voltages applied, the current flow through the sheet emulates the field distribution. Voltages may be measured within the sheet by applying a voltmeter probe (relative to a known electrode), or current flows may be measured. As the sheet's resistivity is constant, the simplest way to measure a current flow is to use a small two-probe voltmeter to measure the voltage difference between the probes. As their spacing and the resistivity are known, the resistance between them, and thus (by Ohm's law ) the current density, may be determined.
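Worked through in code with example values: for a sheet of resistance R_s ohms per square, a strip of length s and width w between the probes has resistance R_s·s/w, so the surface current density I/w equals V/(R_s·s) and the unknown width cancels. The probe spacing and measured voltage below are arbitrary.

```python
# Two-probe current-density measurement on a resistive sheet.

R_s = 6000.0      # ohms per square (the typical Teledeltos figure cited above)
s = 0.01          # probe spacing along the current flow, metres (example)
V = 0.3           # measured voltage difference, volts (example)

J = V / (R_s * s)             # surface current density, amps per metre
print(f"surface current density = {J * 1000:.1f} mA/m")   # 5.0 mA/m
```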
A sheet that is large in comparison to the experimental area is usually sufficient for modeling an infinite field. [ 3 ]
Although the modelling of electric fields is itself directly useful in some fields such as thermionic valve design, [ 4 ] the main practical use of the broader technique is for modelling fields of other quantities. This technique may be applied to any field that follows the same linear rules as Ohm's law for bulk resistivity. This includes heat flow, some optics and some aspects of Newtonian mechanics. It is not usually applicable to fluid dynamics, owing to viscosity and compressibility effects, or to high-intensity optics where non-linear effects become apparent. It may be applicable to some mechanical problems involving homogeneous and isotropic materials such as metals, but not to composites.
Before the use of Teledeltos, a similar technique had been used for modelling gas flows, where a shallow tray of copper sulphate solution was used as the medium, with copper electrodes at each side. Barriers within the model could be sculpted from wax. Being a liquid, this was far less convenient. Stanley Hooker describes its use pre-war, although he also notes that compressibility effects could be modelled in this way, by sculpting the base of the tank to give additional depth and thus conductivity locally. [ 5 ]
One of the most important applications is thermal modelling: voltage is the analog of temperature, and current flow that of heat flow . If the boundaries of a heatsink model are both painted with conductive paint to form two separate electrodes, each may be held at a voltage to represent the temperature of some internal heat source (such as a microprocessor chip) and the external ambient temperature. Potentials within the heatsink represent internal temperatures, and current flows represent heat flow. In many cases the internal heat source may be modelled with a constant current source rather than a voltage, giving a better analogy of power loss as heat than assuming a simple constant temperature. If the external airflow is restricted, the 'ambient' electrode may be subdivided and each section connected to a common voltage supply through a resistor or current limiter, representing the proportionate or maximum heat flow capacity of that airstream.
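The analogy can be written down directly: the sketch below solves a series junction-to-air stack with the electrical form of Ohm's law. The thermal resistance values and the power figure are invented for illustration.

```python
# Thermal-electrical analogy: temperature <-> voltage, heat flow <-> current,
# thermal resistance (K/W) <-> electrical resistance. Values are invented.

power_w = 15.0              # heat source, analogous to a current source
ambient_c = 25.0            # ambient temperature, analogous to ground

# series thermal resistances, junction -> case -> heatsink -> air (K/W)
stack = {"junction-case": 0.8, "case-sink": 0.3, "sink-air": 2.0}

temp = ambient_c + power_w * sum(stack.values())   # V = I * R analogue
print(f"junction temperature ~ {temp:.1f} C")      # 25 + 15*3.1 = 71.5 C
```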
As heatsinks are commonly manufactured from extruded aluminium sections , the two-dimensional paper is not usually a serious limitation. In some cases, such as pistons for internal combustion engines, three-dimensional modelling may be required. This has been performed, in a manner analogous to Teledeltos paper, by using volume tanks of a conductive electrolyte. [ 6 ]
This thermal modelling technique is useful in many branches of mechanical engineering such as heatsink or radiator design and die casting . [ 7 ]
The development of computational modelling and finite element analysis has reduced the use of Teledeltos, such that the technique is now obscure and the materials can be hard to obtain. [ 2 ] Its use is still highly valuable in teaching, as the technique gives a very obvious method for measuring fields and offers immediate feedback as the shape of an experimental setup is changed, encouraging a more fundamental understanding. [ 3 ] [ 4 ]
Teledeltos can also be used to make sensors , either directly as an embedded resistive element or indirectly, as part of their design process.
A piece of Teledeltos with conductive electrodes at each end makes a simple resistor . Its resistance is slightly sensitive to applied mechanical strain by bending or compression, but the paper substrate is not robust enough to make a reliable sensor for long-term use.
A more common resistive sensor is in the form of a potentiometer . A long, thin resistor with an applied voltage may have a conductive probe slid along its surface. The voltage at the probe depends on its position between the two end contacts. Such a sensor may form the keyboard for a simple electronic musical instrument like a Tannerin or Stylophone .
A similar linear sensor uses two strips of Teledeltos placed face to face. Pressure on the back of one (finger pressure is enough) presses the two conductive faces together to form a lower-resistance contact. This may be used in similar potentiometric fashion to the conductive probe, but without requiring the special probe. This may be used as a classroom demonstration of another electronic musical instrument with a ribbon controller keyboard, such as the Monotron . If crossed electrodes are used on each piece of Teledeltos, a two-dimensional resistive touchpad may be demonstrated.
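The arithmetic underlying both potentiometric arrangements is just a voltage divider; the supply voltage and strip dimensions below are arbitrary.

```python
# Potentiometric position sensing: with V volts across a strip of length L,
# a contact at position x reads V * x / L. Example values only.

def probe_voltage(v_supply, position, length):
    return v_supply * position / length

print(probe_voltage(v_supply=5.0, position=0.12, length=0.30))  # 2.0 V
```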
Although Teledeltos is not used to manufacture capacitive sensors, its field modelling abilities also allow it to be used to determine the capacitance of arbitrarily shaped electrodes during sensor design. [ 2 ] | https://en.wikipedia.org/wiki/Teledeltos |
Teleforce is a proposed defensive weapon by Nikola Tesla that would accelerate pellets or slugs of material to a high velocity inside a vacuum chamber via electrostatic repulsion and then fire them out of aimed nozzles at intended targets. Tesla claimed to have conceived of it after studying the Van de Graaff generator . [ 1 ] [ 2 ] Tesla described the weapon as usable against ground-based infantry or for anti-aircraft purposes. [ 3 ] [ 4 ]
Tesla described Teleforce 's operation in 1934, specifying its superiority to the death rays believed to exist at the time:
My apparatus projects particles which may be relatively large or of microscopic dimensions, enabling us to convey to a small area at a great distance trillions of times more energy than is possible with rays of any kind. Many thousands of horsepower can thus be transmitted by a stream thinner than a hair, so that nothing can resist. [ 5 ] The nozzle would send concentrated beams of particles through the free air, of such tremendous energy that they will bring down a fleet of 10,000 enemy airplanes at a distance of 200 miles from a defending nation's border and will cause armies to drop dead in their tracks. [ 3 ] [ 4 ]
In a letter that was written to J. P. Morgan, Jr. on November 29, 1934, Tesla described the weapon:
I have made recent discoveries of inestimable value... The flying machine has completely demoralized the world, so much that in some cities, as London and Paris, people are in mortal fear from aerial bombing. The new means I have perfected afford absolute protection against this and other forms of attack. ... These new discoveries, which I have carried out experimentally on a limited scale, have created a profound impression. One of the most pressing problems seems to be the protection of London and I am writing to some influential friends in England hoping that my plan will be adopted without delay. The Russians are very anxious to render their borders safe against Japanese invasion and I have made them a proposal which is being seriously considered. [ 6 ]
In 1937, Tesla wrote a treatise, " The Art of Projecting Concentrated Non-dispersive Energy through the Natural Media ", [ 7 ] concerning charged particle beam weapons. [ 8 ] Tesla published the document in an attempt to expound on the technical description of a "superweapon that would put an end to all war." This treatise is currently in the Nikola Tesla Museum archive in Belgrade . It describes an open-ended vacuum tube with a gas jet seal that allows particles to exit, a method of charging particles to millions of volts, and a method of creating and directing non-dispersive particle streams (through electrostatic repulsion). [ 8 ]
Teleforce was mentioned publicly in the New York Sun and The New York Times on July 11, 1934. [ 9 ] [ 10 ] The press called it a "peace ray" or death ray . [ 11 ] [ 12 ] The idea of a "death ray" was a misunderstanding in regard to Tesla's term when he referred to his invention as a "death beam" so Tesla went on to explain that "this invention of mine does not contemplate the use of any so-called 'death rays.' Rays are not applicable because they cannot be produced in requisite quantities and diminish rapidly in intensity with distance. All the energy of New York City (approximately two million horsepower) transformed into rays and projected twenty miles, could not kill a human being, because, according to a well known law of physics, it would disperse to such an extent as to be ineffectual. My apparatus projects particles ..." [ 5 ]
What set Tesla's proposal apart from the usual run of fantasy "death rays" was a unique vacuum chamber with one end open to the atmosphere. Tesla devised a unique vacuum seal by directing a high-velocity air stream at the tip of his gun to maintain "high vacua". The necessary pumping action would be accomplished with a large Tesla turbine . [ 13 ]
In total, the components and methods included:
It has been said that the charged particles would self-focus via " gas focusing ". [ citation needed ] In 1940, Tesla estimated that each station would cost no more than $2,000,000 and could have been constructed in a few months. [ citation needed ]
After Tesla died, in a box purported to contain a part of Tesla's "death ray" apparatus, John G. Trump found a 45-year-old multidecade resistance box . [ 15 ]
By November 1934, Tesla was attempting (unsuccessfully) to obtain funding from J. P. Morgan's son, Jack Morgan . [ 16 ] The idea of Tesla possibly having a new type of weapon and, further, his offer to give it to the League of Nations as a way to prevent future war were seen together as an alarming security threat by one US diplomat – a view not shared by his government. [ 17 ] In 1935, the Soviet Union , through the US Amtorg Trading Corporation , an alleged [ clarification needed ] Soviet-arms front in New York City, paid Tesla $25,000 for detailed plans, specifications, and complete information on the method and apparatus, but it is unclear whether a physical device was ever produced. [ 18 ] [ 13 ] Tesla also attempted to get funding for his device in 1937, sending a paper ("New Art of Projecting Concentrated Non-Dispersive Energy Through Natural Media") outlining his plans to the governments of the United States, the United Kingdom, France, Canada, and Yugoslavia. [ 13 ] The United Kingdom considered Tesla's offer to sell the device to them for $30 million, maybe with the idea that even hinting they had a super weapon would be a deterrent to Adolf Hitler , but by 1938 they had dropped all interest. [ 18 ]
During this period, Tesla claimed that efforts had been made to steal the invention, saying that his room had been entered and that his papers had been scrutinized, but that the thieves or spies had left empty-handed. He said that there was no danger that his invention could be stolen, for he had at no time committed any part of it to paper; the blueprint for the Teleforce weapon was all in his mind. [ 19 ]
At his birthday press conference in 1937, Tesla was asked about his weapon, and he made the claim, "But it is not an experiment... I have built, demonstrated and used it. Only a little time will pass before I can give it to the world." [ 2 ] At the 1940 birthday press conference, 84-year-old Tesla offered to develop his weapon for the US, but there was no interest in his offer. [ 20 ] | https://en.wikipedia.org/wiki/Teleforce |
Telematics 2.0 is the name for the Internet of things -based telematics technology for the automotive industry . [ 1 ] Telematics 2.0 utilises smartphone-based sensors rather than the black box devices used in the traditional pay as you drive insurance industry. Telematics 2.0 solutions reached the consumer market in 2012–2013, with solutions offered by leading auto insurers such as AIG and AssetFENCE in the US. [ 2 ] [ 3 ]
A commercial pilot of SafeDrive was run in 2013 in Sweden, where the shortcomings of the smartphone as a measurement probe were handled by digital signal processing. [ 4 ]
| https://en.wikipedia.org/wiki/Telematics_2.0 |
In mycology , the terms teleomorph , anamorph , and holomorph apply to portions of the life cycles of fungi in the phyla Ascomycota and Basidiomycota :
The terms were introduced in 1981 to simplify discussion of the procedures of the then-existing dual-naming system, which (1) permitted anamorphs to have their own separate names but (2) treated teleomorphic names as having precedence for use as the holomorphic name. The Melbourne Code removed these provisions and allows all names to compete on an equal footing for priority as the correct name of a fungus, and hence no longer uses the term holomorph.
Fungi are classified primarily based on the structures associated with sexual reproduction , which tend to be evolutionarily conserved. However, many fungi reproduce only asexually, and cannot easily be classified based on sexual characteristics; some produce both asexual and sexual states. These species are often members of the Ascomycota , but a few of them belong to the Basidiomycota . Even among fungi that reproduce both sexually and asexually, often only one method of reproduction can be observed at a specific point in time or under specific conditions. Additionally, fungi typically grow in mixed colonies and sporulate amongst each other. These facts have made it very difficult to link the various states of the same fungus.
Fungi that are not known to produce a teleomorph were historically placed into an artificial phylum , the " Deuteromycota ," also known as " fungi imperfecti ," simply for convenience. Some workers hold that this is an obsolete concept, and that molecular phylogeny allows accurate placement of species which are known from only part of their life cycle. Others retain the term "deuteromycetes," but give it a lowercase "d" and no taxonomic rank. [ 1 ]
Historically, Article 59 of the International Code of Botanical Nomenclature permitted mycologists to give asexually reproducing fungi (anamorphs) separate names from their sexual states (teleomorphs). [ 2 ] This practice was discontinued as of 1 January 2013. [ 3 ]
The dual naming system can be confusing. [ 4 ] However, it is essential for workers in plant pathology, mold identification, medical mycology, and food microbiology, fields in which asexually reproducing fungi are commonly encountered. [ clarification needed ]
The separate naming of anamorphs of fungi with a pleomorphic life cycle has been an issue of debate since the phenomenon was recognized in the mid-19th century, [ 3 ] even before the first international rules for botanical nomenclature were issued in 1867. [ 3 ] Special provisions are to be found in the earliest Codes , which were then modified several times, and often substantially. [ 3 ] The rules were updated regularly and became increasingly complex, and by the mid-1970s they were being interpreted in different ways by different mycologists – even ones working on the same genus. [ 3 ] Following intensive discussions under the auspices of the International Mycological Association , drastic changes were made at the International Botanical Congress in 1981 to clarify and simplify the procedures – and the new terms anamorph, teleomorph, and holomorph entered general use. [ 3 ] An unfortunate effect of the simplification was that many name changes had to be made, including for some well-known and economically important species; at that date, the conservation of species names was not allowed under the Code . [ 3 ]
Unforeseen in the 1970s, when the 1981 provisions were crafted, was the impact of molecular systematics . [ 3 ] A decade later, it was starting to become obvious that fungi with no known sexual stage could confidently be placed in genera which were typified by species in which the sexual stage was known. [ 3 ] This possibility of abandoning the dual nomenclatural system was debated at subsequent International Mycological Congresses and on other occasions, and the need for change was increasingly recognized. [ 3 ] [ 5 ] At the International Botanical Congress in Vienna in 2005, some minor modifications were made which allowed anamorph-typified names to be epitypified by material showing the sexual stage when it was discovered, and for that anamorph name to continue to be used. [ 3 ]
The 1995 edition of the influential Ainsworth and Bisby’s Dictionary of the Fungi sought to replace the term anamorph with mitosporic fungus and teleomorph with meiosporic fungus , based on the idea that the fundamental distinction is whether mitosis or meiosis preceded sporulation. This is a controversial choice because it is not clear that the morphological differences which traditionally define anamorphs and teleomorphs line up completely with sexual practices, or whether those sexual practices are sufficiently well understood in some cases. [ 1 ]
The Vienna Congress (2005) established a Special Committee to investigate the issue further, but it was unable to reach a consensus. [ 3 ] Matters were becoming increasingly desperate as mycologists using molecular phylogenetic approaches started to ignore the provisions, or interpret them in different ways. [ 3 ]
The International Botanical Congress in Melbourne in July 2011 made a change in the International Code of Nomenclature for algae, fungi, and plants and adopted the principle "one fungus, one name". [ 3 ] After 1 January 2013, one fungus can only have one name; the system of permitting separate names to be used for anamorphs then ended. [ 3 ] This means that all legitimate names proposed for a species, regardless of what stage they are typified by, can serve as the correct name for that species. [ 3 ]
Since the Brussels Congress in 1910, there has been provision for a separate name (or names) for the asexual (anamorph) state (or states) of fungi with a pleomorphic life cycle from that applicable to the sexual (teleomorph) state and to the whole fungus. The Brussels Rules (Briquet, Règles Int. Nomencl. Bot., ed. 2. 1912) specified that names given to states other than the sexual one (the “perfect state”) “have only a temporary value”, apparently anticipating a time when they would no longer be needed. At the Melbourne Congress, it was decided that this time had come – but not through disuse as may have been envisaged in Brussels. Throughout the various changes since 1912 to the rules on names of fungi with a pleomorphic life cycle, one element has remained constant: the correct name for the taxon in all its morphs (the holomorph) was the earliest applicable to the sexual state (the teleomorph). In Melbourne, this restriction was overturned and it was decided that all legitimate fungal names were to be treated equally for the purposes of establishing priority, regardless of the life history stage of the type. As a consequence the Melbourne Congress also approved additional special provisions for the conservation and rejection of fungal names to mitigate the nomenclatural disruption that would otherwise arise. [ 6 ]
All names now compete on an equal footing for priority . [ 3 ] In order not to render illegitimate the names that had been introduced in the past for separate morphs, it was agreed that these should not be treated as superfluous alternative names in the sense of the Code . [ 3 ] It was further decided that no anamorph-typified name should be taken up to displace a widely used teleomorph-typified name without the case's having been considered by the General Committee established by the Congress. [ 3 ] Recognizing that there were cases in some groups of fungi where there could be many names that might merit formal retention or rejection, a new provision [ 3 ] was introduced: Lists of names can be submitted to the General Committee and, after due scrutiny, names accepted on those lists are to be treated as conserved over competing synonyms (and listed as Appendices to the Code ). [ 3 ] Lichen -forming fungi (but not lichenicolous fungi ) had always been excluded from the provisions permitting dual nomenclature. [ 3 ]
The provisions are adopted in the Melbourne Code of 2012 as a modification to the existing Article 59. [ 7 ] In the Shenzhen Code of 2018, a new chapter F "Names of organisms treated as fungi" was added, collecting all fungus-specific provisions including the original Article 59 into this chapter. As of April 2025, the latest revision of this part is the San Juan Chapter F of 2019, published as an addendum of the Shenzhen Code of 2018. [ 8 ]
The problem of choosing one name among many remains to be examined for many large, agriculturally or medically-important genera like Aspergillus and Fusarium . Articles have been published on such specific genera to propose ways to define them under the newer rules. [ 9 ] [ 10 ]
This article incorporates CC-BY-3.0 text from the reference [ 3 ] | https://en.wikipedia.org/wiki/Teleomorph,_anamorph_and_holomorph |