Cockade of Italy
https://en.wikipedia.org/wiki/Cockade%20of%20Italy
The cockade of Italy is the national ornament of Italy, obtained by folding a green, white and red ribbon into a rosette using a pleating technique. It is one of the national symbols of Italy and is composed of the three colours of the Italian flag, with the green in the centre, the white immediately outside and the red on the edge. The cockade, a revolutionary symbol, was the protagonist of the uprisings that characterized the Italian unification, pinned in its tricolour form on the jackets or hats of many of the patriots of this period of Italian history, during which the Italian Peninsula achieved its national unity, culminating on 17 March 1861 with the proclamation of the Kingdom of Italy. On 14 June 1848, it replaced the azure cockade on the uniforms of some departments of the Royal Sardinian Army (which became the Royal Italian Army in 1861), while on 1 January 1948, with the birth of the Italian Republic, it replaced the azure cockade as the national ornament.

The Italian tricolour cockade appeared for the first time in Genoa on 21 August 1789, and with it the three Italian national colours. Seven years later, on 11 October 1796, the first tricolour military banner was adopted by the Lombard Legion in Milan, and on 7 January 1797 the flag of Italy was born, becoming for the first time the national flag of a sovereign Italian state, the Cispadane Republic.

The Italian tricolour cockade is one of the symbols of the Italian Air Force and is widely used on all Italian state aircraft, not only military ones. The cockade is the basis of the parade frieze of the Bersaglieri, cavalry regiments, Carabinieri and Guardia di Finanza, and a fabric reproduction of it is sewn on the shirts of the sports teams holding the Coppa Italia trophies contested in various national team sports. It is traditional for the holders of the most important offices of the State, excluding the President of the Italian Republic, to wear a tricolour cockade pinned to the jacket during the military parade of the Festa della Repubblica, which is celebrated every 2 June.

Colour position

The Italian tricolour cockade, by convention, has the green in the centre, the white in an intermediate position and the red on the periphery. This custom derives from one of the conceptual characteristics of cockades, which can be imagined as flags rolled around the flagpole and seen from above. In the case of the Italian tricolour cockade, the green is located in the centre because in the flag of Italy this colour is the one closest to the flagpole.

The tricolour cockades with red and green in the opposite positions are those of Iran and Suriname. The national ornaments of Bulgaria and the Maldives, starting from the centre, are arranged white, green and red, while that of Madagascar, starting from the centre, is arranged white, red and green. The Hungarian cockade has the same arrangement of colours as the Italian tricolour cockade; the belief that it has the colour positions reversed, like the Iranian and Surinamese ones, is an urban legend. Other cockades identical to the Italian one, even in the arrangement of the colours, are the national ornaments of Burundi, Mexico, Lebanon, Seychelles, Algeria and Turkmenistan.

History

The premises

The first cockades were introduced in Europe in the 15th century. The armies of the European states used them to signal the nationality of their soldiers and to discern allies from enemies.
These first cockades were inspired by the distinctive coloured bands and ribbons that were used in the Late Middle Ages by knights, both in war and in tournaments, and which had the same purpose, namely to distinguish the opponent from the fellow soldier. The Italian tricolour cockade, which later became the revolutionary symbol par excellence during the insurrectional uprisings of the 18th and 19th centuries, was often worn by the patriots who participated in the uprisings that marked the Italian unification, the season of social ferment that led to the political and administrative unity of the Italian peninsula in the 19th century; for this reason it is considered one of the national symbols of Italy. The main characteristic of the Italian tricolour cockade, like that of the similar ornaments made in the same period in other countries, was that it was clearly visible, unequivocally identifying the political ideas of the person who wore it, while being, in case of need, easier to hide than, for example, a flag.

The Italian tricolour cockade was inspired by the French tricolour cockade, just as the flag of Italy was inspired by the French flag, introduced by the French Revolution in the autumn of 1790 on French Navy warships. Other European tricolour national flags were also inspired by the French flag because they too were linked to the ideals of the revolution. The French tricolour cockade originated during the Revolution and over time became one of the symbols of change. Later, the meaning of change assigned to the French tricolour cockade crossed the Alps and arrived in Italy together with the use of the cockade and all the values of the French Revolution, which were propagated by early Jacobinism, including the ideals of social renewal that underlay the Declaration of the Rights of Man and of the Citizen of 1789. These values subsequently became political as well, with the first patriotic ferments directed at national self-determination, which later led, on the Italian peninsula, to the Italian unification.

The French tricolour cockade was born on 12 July 1789, two days before the storming of the Bastille, when the revolutionary journalist Camille Desmoulins, while haranguing the Parisian crowd to revolt, asked the protesters what colour to adopt as the symbol of the French Revolution, proposing green, the colour of hope, or the blue of the American Revolution, symbol of freedom and democracy. The protesters replied "The green! The green! We want green cockades!" Desmoulins then seized a green leaf from the ground and pinned it to his hat as a distinctive sign of the revolutionaries. Green was immediately abandoned in the early French cockade in favour of blue and red, the ancient colours of Paris, because green was also the colour of the king's brother, the Count of Artois, who became monarch after the First Restoration under the name Charles X of France. The French tricolour cockade was then completed on 17 July 1789 with the addition of white, the colour of the House of Bourbon, in deference to King Louis XVI of France, who was still ruling despite the violent revolts that raged in the country: the French monarchy was in fact abolished later, on 10 August 1792.
The birth of the Italian national colours

The leaves used as the first cockades

The first sporadic demonstrations in favour of the ideals of the French Revolution by the Italian population took place in August 1789, with the organization of protests in various places on the Italian peninsula, especially in the Papal States. The rioters in these early uprisings had makeshift cockades made of green leaves pinned on their clothes, in imitation of the similar protests that had taken place in France at the dawn of the revolution. The use of cockades during the protests that took place in Italy was not an isolated case. It is documented that on 12 November 1789 the Prussian government forbade the Westphalian population to use cockades because they were viewed with suspicion, given their meaning closely linked to the protest movements that were flaring up in France; their use therefore went beyond the French borders and spread gradually across Europe. This happened partly because the gazettes printed in various European countries gave ample prominence to the fact that the cockade had become, in France, one of the most important symbols of the insurrectional uprisings and of the people's struggle against the absolutist regime that ruled at the time.

As for the Italian uprisings, noteworthy were the revolts that took place in Fano and Velletri just before 16 August, in Rome between 16 and 28 August, and in Frascati just before 30 August, all within the Papal States. In Rome, in particular, cockades formed from laurel leaves were pinned on hats. The rioters demanded the lowering of the price of basic necessities, threatening, if the authorities refused to satisfy these requests, to unleash riots comparable to the violent Parisian protests. The Milanese gazette Staffetta di Sciaffusa defined the protests in the Papal States as "[a] dance of green cockades" in an article published on 16 August 1789. From September 1789 there is no further news of leaf cockades in the Italian riots; they were replaced by cockades of green fabric.

The first Italian tricolour cockade

During the first weeks of the revolutionary season, it remained a common belief in Italy that the green, white and red flag was the flag waved by the French rioters. The Italian insurgents therefore used these colours in simple imitation of the protests that were taking place in France, which were aimed, in both nations, at the same objectives, namely achieving better living conditions and obtaining the civil and political rights that had always been denied by the absolutist regimes. The Italian gazettes of the time had in fact created confusion about the events of the French riots, in particular by omitting the replacement of green with blue and red and thus reporting the erroneous news that the French tricolour was green, white and red. The error about the colours of the French cockade took root among the demonstrators because the newspapers did not correct it promptly, even though about 80 newspapers were printed in Italy at the time, five of them in Milan alone. The news published was, at the beginning, also contradictory: the Milanese gazette La Staffetta di Sciaffusa, for example, reported that the green French cockade made up of leaves had been replaced, the next day, by a red and white cockade (instead of blue and red).
Even on the subsequent and definitive French blue, white and red cockade, which was created on 17 July, the newspapers sowed confusion, reporting, as in the case of Il Corriere di Gabinetto, that it was only red and blue or, as in La Gazzetta Enciclopedica di Milano, that it was white and pink. More precise information, subsequently reported by all the Italian newspapers, correctly stated that the French cockade had three colours, but their shades were still erroneously cited as green, white and red.

The first documented trace of the use of the green, white and red cockade, which however does not specify the arrangement of the colours on the ornament, is dated 21 August 1789. In the historical archives of the Republic of Genoa it is reported that eyewitnesses had seen some demonstrators wandering around the city with "the new French white, red and green cockade introduced recently in Paris". The use of the term "new cockade" indicates awareness of the passage from the French makeshift cockades made of leaves to those in two and subsequently three colours, even though the real chromatic composition was still unknown. The use of the cockade was viewed with suspicion and aversion by the Genoese state authorities, since it recalled the social impulses that were beginning to spread in Europe; the popular ferments frequently had rebellious and destabilizing connotations. The Italian flag was therefore born as a form of popular protest against the absolutist regimes that ruled the peninsula at the time, and not as a patriotic manifestation of Italianness, given that the birth of the national awareness that later led to the unification of Italy was still far off.

It cannot be excluded that the green, white and red cockade, with its erroneous use of green instead of blue, an inaccuracy perhaps caused by the previous use of green leaves, was born before 21 August, and in a city other than Genoa. The revolutionary ferments of the French events probably arrived in Italy before that date, although no documented trace of an earlier realization of the tricolour cockade has yet been found. Written evidence proves that the first revolutionary uprisings in Italy took place in August in the Papal States, but the sources relating to these events do not mention tricolour cockades, only ornaments composed of leaves. Finally, when the correct information on the chromatic composition of the French cockade arrived in Italy, the Italian Jacobins decided to keep green instead of blue because it represented nature, and therefore metaphorically also natural rights, that is, equality and freedom, both principles dear to them. Although the green, white and red tricolour, when introduced, had a simply imitative value, it was taken as a symbol of the Italian homeland during the popular uprisings of the early 19th century.

The tricolour cockade becomes one of the national symbols of Italy

The adoption of the green, white and red cockade by the Italian patriots was not immediate or univocal. Other appearances, still sporadic, of cockades alternative to the Genoese one of 1789 took place the following year, when red and white cockades appeared in the Grand Duchy of Tuscany, and again in 1792 in Porto Maurizio, in the Republic of Genoa, once more red and white. The first appearance of the Italian tricolour cockade abroad took place in 1791 in Toulon, France, brought by some Genoese sailors.
Later the green, white and red cockade spread ever more widely, gradually becoming the only ornament used by the rioters in Italy. The patriots began to call it the "Italian cockade", making it one of the symbols of the country. Once the error of the gazettes about the colours of the French tricolour cockade had been clarified, the combination took on a character of its own: green, white and red were adopted by the Italian patriots as one of the most important symbols of the insurrectional and political struggle aimed at national unity, taking the name of "Italian tricolour". The green, white and red tricolour thus acquired a strong patriotic value, becoming one of the symbols of national awareness, a change that gradually led it to enter the collective imagination of the Italians.

The use of the Italian tricolour was not limited to the cockade. Born on 21 August 1789, the cockade preceded by seven years the first tricolour war flag, which was chosen by the Lombard Legion on 11 October 1796 and is associated with the first official approval of the Italian national colours by an authority, in this case Napoleon; and by eight years the adoption of the flag of Italy, which was born on 7 January 1797, when it first assumed the role of national flag of a sovereign Italian state, the Cispadane Republic. The subsequent adoption of the green, white and red tricolour by the Italian patriots was immediate, unambiguous and devoid of political contrasts. In France the opposite happened, since the French tricolour was taken as a symbol first by the Republicans and then by the Bonapartists, in antagonism with the Monarchists and the Catholics, who had the royal white flag with the fleur-de-lis of France as their reference flag.

The cockade of the Bologna revolt

Historically notable, given the judicial process and the clamour that followed, were the tricolour cockades made in 1794 by two students of the University of Bologna, Luigi Zamboni from Bologna and Giovanni Battista De Rolandis from Asti, who placed themselves at the head of an insurrectional attempt to free Bologna from papal rule. In addition to the two students, the group included two medical doctors, Antonio Succi and Angelo Sassoli, who later betrayed the patriots by reporting everything to the papal police, and four other people (Giuseppe Rizzoli, known as Dozza, Camillo Tomesani, Antonio Forni Mago Sabino and Camillo Galli). Luigi Zamboni had previously expressed the desire to create a tricolour flag that would become the flag of a united Italy. During this attempted revolt, which took place between 13 and 14 November 1794 (or, according to other sources, 13 December 1794), the demonstrators led by De Rolandis and Zamboni flaunted a red and white cockade (red and white also being the colours of the municipal coat of arms of Bologna) with a green lining. These cockades, made by Zamboni's parents, had green in the centre, white immediately outside and red on the edge. During the recruitment work, De Rolandis and Zamboni managed to convince 30 people to participate in their attempt at insurrection. To carry out the attempted revolt, the two purchased firearms that later proved to be of poor quality. The goal was to distribute a leaflet intended to rouse Bologna and Castel Bolognese to revolt, but the proclamation had no effect whatsoever.
After failing to raise the city, the revolutionaries tried to take refuge in the Grand Duchy of Tuscany, but the local police first captured them in Covigliaio and then handed them over to the papal authorities. After the capture of the fugitives, the papal authorities launched a prosecution for fomenting armed treasonous conspiracy throughout the state before the Inquisition of Bologna. The trial involved all the participants in the insurrectional attempt, Zamboni's family and the Succi brothers. Zamboni was found dead on 18 August 1795 in a cell nicknamed "Inferno" ("Hell"), which he shared with two common criminals; he was probably killed by them on the orders of the police, or perhaps took his own life after an unsuccessful escape attempt. De Rolandis was publicly executed on 26 April 1796, after interrogations preceded and followed by ferocious torture. Zamboni's father died at almost 80 years of age after suffering terrible torture, while his mother was first whipped through the streets of Bologna and then sentenced to life imprisonment. The other defendants, who had received minor penalties, were freed shortly thereafter by the French, who in the meantime had invaded Emilia, driving out the papal authorities. The bodies of De Rolandis and Zamboni were then solemnly buried in Bologna in the Giardino della Montagnola on the direct order of Napoleon, before being dispersed in 1799 with the arrival of the Austrians. The historic cockade, which is owned by the De Rolandis family, was for some time exhibited in the National Museum of the Italian Risorgimento in Turin. In 2006, during some renovations, it was transferred to the European Student Museum of the University of Bologna, where it is still preserved.

Free use during the Napoleonic era

The tricolour cockade reappeared, after the events of Bologna, during Napoleon's entry into Milan on 15 May 1796. These cockades, with the typical circular shape, had red on the outside, green in an intermediate position and white in the centre. These ornaments were worn by the rioters even during the religious ceremonies officiated inside Milan Cathedral in thanksgiving for the arrival of Napoleon, who was seen, at least initially, as a liberator. The tricolour cockade then became one of the official symbols of the Milanese National Guard, which was founded on 20 November 1796, and spread elsewhere along the Italian peninsula. The tricolour cockade was particularly linked to the Jacobin movement, which made it one of its most important symbols.

On the occasion of the first adoption of the green, white and red flag by a sovereign Italian state, the Cispadane Republic, which is dated 7 January 1797 and was decreed by an assembly held in a hall of the town hall of Reggio Emilia, it was decided that the tricolour cockade, also considered one of the official symbols of the newborn Napoleonic state, should be worn by all citizens. On that occasion, Giuseppe Compagnoni, who is celebrated as the "father of the Italian flag", proposed the adoption of the Italian flag and cockade. In Bergamo, civilians were obliged to wear a tricolour cockade pinned to their clothes, an obligation that was also sanctioned, on 13 May 1797, in Modena and Reggio Emilia. Even without obligations imposed by the authorities, the cockade spread more and more among the population, who wore it with pride, laying the foundations, together with other factors, for the Italian unification.
By decree of 18 May 1797, the Provisional Municipality of Venice noted that "the nation had adopted...the tricolour cockade green, white, and red" and adopted it for its own use as well. On 29 June 1797, with the merger of the Cispadane Republic and the Transpadane Republic, the Cisalpine Republic was born, a pro-French state that extended over Lombardy and part of Emilia and Romagna and had Milan as its capital. The event, which took place at the lazaretto of Milan, was celebrated with a riot of flags and tricolour cockades.

Its use during the Italian unification

The first riots

With the fall of Napoleon and the restoration of the absolutist monarchical regimes, the national colours of Italy, and with them the tricolour cockade, went underground, becoming the symbol of the patriotic ferments that began to spread in Italy and of the efforts of the Italian people towards freedom and independence. The social ferments that led to the birth of Italian patriotism originated in the Napoleonic era, during which the ideals of the French Revolution spread, including the concept of the self-determination of peoples. Although the pre-Napoleonic regimes had been restored, liberal ideas often fed the will of the people to free themselves from foreign domination by constituting a unitary and independent state. In the Italian case, the demand for greater civil and political rights on the part of the population did not stop with the reconstitution of the absolutist states, and it resurfaced in the uprisings that would characterize the 19th century.

The use of the tricolour cockade, together with that of the green, white and red flag, was forbidden by the Austrians in the Kingdom of Lombardy–Venetia under penalty of death. The purpose of this provision, quoting the textual words of Emperor Franz Joseph I of Austria, was to "make people forget that they are Italian". The tricolour cockade appeared for the first time after the Napoleonic era during the uprisings of 1820–21 in the Kingdom of the Two Sicilies, pinned on the hats or clothes of Italian patriots; its reappearance was therefore still sporadic and limited to a specific territory. The tricolour cockade appeared again, pinned on the clothing of Italian patriots, during the revolts of 1830–31, which took place mainly in the Papal States, the Duchy of Modena and Reggio and the Duchy of Parma and Piacenza, and in which there was a profusion of tricolour handkerchiefs and cockades. Also in this case, its appearance was limited to some states of the Italian peninsula. In this context, in 1820, on the occasion of the solemn celebrations linked to the granting of the constitution by Ferdinand I of the Two Sicilies, the members of the royal family wore tricolour cockades.

The uprisings of 1820–21 had their greatest consequences in the Kingdom of Piedmont-Sardinia, where the uprisings were led for a short period by Charles Albert of Piedmont-Sardinia, who had not yet become king, and in the Kingdom of the Two Sicilies. In the latter, in particular, the Sicilian Parliament was reopened and the Neapolitan Parliament was convened for the first time. If the riots of the 14th and 15th centuries had been driven by humanism, with all that this entailed, including the link with classicism, the patriotic revolts of the 19th century, with their ideas of independence and freedom and their iconic symbols, among which were the cockades, were instead inspired by Romanticism.
The revolutions of 1848

Tricolour cockades continued to be protagonists, pinned on the chests or hats of patriots, in the popular uprisings that followed, such as the Five Days of Milan (18–22 March 1848), during which they were widely diffused among the insurgents, many of whom were religious; the Milanese clergy actively supported the patriotic demands of their faithful. In this context, on 23 March 1848, the king of Piedmont-Sardinia, Charles Albert, issued a proclamation with decisive political connotations, with which the Sardinian sovereign assured the provisional government of Milan, formed following the Five Days, that his troops, ready to come to its aid, would use the Italian tricolour as a war flag. The Milanese then welcomed Charles Albert and his troops with a profusion of flags and tricolour cockades.

On 14 June 1848, a circular from the Ministry of War of the Kingdom of Piedmont-Sardinia decreed the replacement of the Savoy blue cockade, in all military contexts in which it was used, with the tricolour cockade. The blue cockade had until then been placed on the hat of the uniform of the Carabinieri, on the frieze of the Bersaglieri caps and on the headgear of the cavalry regiments. On the hat of the Carabinieri the blue cockade had been present since the foundation of the corps in 1814; for the cavalry its introduction dates to 1843, and for the Bersaglieri to 1836.

In the institutional context, the blue cockade had a different fate. The Statuto Albertino of the Kingdom of Piedmont-Sardinia, which was promulgated on 4 March 1848 by Charles Albert (hence the name) and later became the fundamental law of the Kingdom of Italy, provided in article 77 that the blue cockade alone was the national one. This article remained in force until 1 January 1948, when the Albertine Statute was replaced by the Constitution of the Italian Republic, which sanctioned the use of the tricolour cockade in all official seats of the Republic.

During the revolutions of 1848 the tricolour cockade appeared, pinned on the hats or clothes of Italian patriots, in all the Italian pre-unification states: the Kingdom of Piedmont-Sardinia, the Kingdom of Lombardy–Venetia, the Kingdom of the Two Sicilies, the Papal States, the Grand Duchy of Tuscany, the Duchy of Parma and Piacenza and the Duchy of Modena and Reggio. The tricolour cockade was among the symbols most frowned upon by the authorities: Charles II, Duke of Parma, for example, forbade its use in his duchy, although he was not among the most reactionary sovereigns (so much so that he granted relative freedom of the press). In the official context, the cockade became one of the official symbols of the Kingdom of Sicily, a state which became independent from the Bourbon kingdom during the Sicilian revolution of 1848.

The unification of the Italian peninsula

During the Second Italian War of Independence, the territories gradually conquered by Victor Emmanuel II of Piedmont-Sardinia and Napoleon III of France acclaimed the two sovereigns as liberators, waving green, white and red flags and wearing tricolour cockades.
The regions preparing to ask for annexation to the Kingdom of Piedmont-Sardinia through the plebiscites of the unification of Italy also expressed their desire to be part of a united Italy with the waving of flags and the wearing of cockades on their clothes. Tricolour cockades were also present during the Expedition of the Thousand (1860), appearing on the jackets of the Sicilians who gradually swelled the ranks of the Garibaldians. In particular, they made their debut shortly before Giuseppe Garibaldi's conquest of Palermo, and then followed the Hero of the Two Worlds in his victorious campaign in the Kingdom of the Two Sicilies. Tricolour cockades were given to the inhabitants of the Kingdom of the Two Sicilies just before each insurrectional movement, so that they would have a distinctive sign with an unequivocal meaning. They were also pinned on the caps of the official uniform of the public-order corps established by Giuseppe Garibaldi in the lands that were progressively conquered.

Tricolour cockades were made by some Milanese patriots, led by Laura Solera Mantegazza, to finance the Expedition of the Thousand. Each tricolour cockade, which was sold for one lira, was associated with a numbered ticket bearing on the front the effigy of Giuseppe Garibaldi, the Italian tricolour and the words "Soccorso a Garibaldi" ("Aid to Garibaldi"), and on the back the words "Soccorso alla Sicilia" ("Aid to Sicily"). In all, 24,442 cockades were sold, a result below expectations, perhaps due to an unfounded rumour spread among the supporting population that part of the profit from the sale of the cockades would go to Giuseppe Mazzini, a patriot disliked by some of the Milanese.

The use of tricolour cockades continued even after the conquests of the Italian unification ended. In the territories then subject to plebiscites, even after the popular consultation, the use of green, white and red ornaments pinned on clothes and caps was very common. On 17 March 1861 came the proclamation of the Kingdom of Italy, the formal act that sanctioned, with a normative act of the Kingdom of Piedmont-Sardinia, the birth of the unified Kingdom of Italy.

Subsequent uses

Aeronautical and military field

After the Italian unification, the tricolour cockade continued to be used in the military field on the parade headdresses of the aforementioned departments of the Italian armed forces and was also introduced in the aeronautical field. After the entry of the Kingdom of Italy into the First World War, the Italian Supreme Military Command realized the inadequacy of the markings previously used on Italian aircraft, and therefore ordered the vertical empennage to be painted with the tricolour and the undersides of the wings with green, white and red sections for the recognition of nationality. Much more often, however, the central section was not painted white, remaining the colour of the canvas. As a further mark, the tricolour cockade, in the roundel version with red on the outside, white in the middle and green in the centre, was established on 21 December 1917, placed on the sides of the fuselage and above the upper wing. In the following period, tricolour cockades appeared with a green perimeter and a red central disc, the position of the colours inverted compared to the conventional arrangement, following complaints from the Allies aimed at avoiding confusion with the cockades used on the aircraft of the British Royal Flying Corps and of the French Aéronautique Militaire, which operated in the same theatre of war.
Aircraft purchased from France, however, often kept for practicality their roundels with red on the outside, simply superimposing green on the central blue, the reverse of the arrangement on nationally produced airplanes. The Italian tricolour cockade was used, discontinuously, until 1927, when it was replaced by a cockade depicting the fasces, one of the most identifying symbols of fascism. In the aeronautical field, the tricolour cockade with red on the outside and green in the centre returned to use, unchanged, in 1943, during the Second World War, on the occasion of the establishment of the Italian Co-belligerent Air Force. After the fall of fascism, all the symbols linked to it, including the fasces, disappeared immediately. The tricolour cockade, since then widely used on all Italian state aircraft, not only military ones, is still today one of the symbols of the Italian Air Force. In 1991, the low-visibility tricolour cockade was introduced, characterized by a white band narrower than the other two.

Also in the military field, the tricolour cockade has been the basis of the parade frieze of the Bersaglieri, the cavalry regiments and the Carabinieri since 14 June 1848, when it replaced the Italian blue cockade in this role. The Guardia di Finanza was founded in 1862, after the 1848 change of cockade, and has therefore always had the tricolour cockade as the basis of its frieze.

Institutional context

It is traditional for the holders of the most important offices of the Italian State to wear a tricolour cockade pinned to the jacket during the military parade of the Festa della Repubblica, celebrated every 2 June.

Sport

In Italian sport, the tricolour cockade became the distinctive symbol of success in the national cups starting in the 1950s; the cockade is sewn on the jersey of the team holding the trophy for the following season. The Italian tricolour cockade made its debut in football in the 1958–59 season on Lazio jerseys. In football, starting from the 1985–86 season, the cockade used for the teams holding the Coppa Italia underwent a change: the version with the inverted colours began to be used, that is, with the green outside and the red in the centre. From the 2006–07 season the conventional typology was restored, the one with red on the outside and green in the centre. In football, the cockade is also a symbol, again in the roundel shape, of victories in the Coppa Italia Serie D, in the Coppa Italia Dilettanti and, with green on the outside and red on the inside, in the Coppa Italia Serie C.

The cockade of Italy in music

A famous song written by Francesco Dall'Ongaro and set to music by Luigi Gordigiani was dedicated to the tricolour cockade.

Italian azure cockade

The Italian azure cockade was one of the representative ornaments of Italy, obtained by circularly pleating an azure ribbon. Derived from Savoy blue, the colour of the Italian royal family from 1861 to 1946, the azure cockade remained officially in use until 1 January 1948, when the constitution of the Italian Republic came into force, after which it was replaced, in all official offices, by the Italian tricolour cockade. The azure cockade originated at least in the 17th century, as evidenced by some documents which confirm its presence on military uniforms in use at the time of Victor Amadeus II of Sardinia. Other sources testify to its use also in the 18th century.
The Albertine Statute of the Kingdom of Piedmont-Sardinia, which was promulgated on 4 March 1848 and later became the fundamental law of the Kingdom of Italy, provided that the azure cockade was the only national one. In this way azure, the historical colour of the Kingdom of Piedmont-Sardinia and, before it, of the Duchy of Savoy, was kept alongside the tricolour cockade born in 1789, which was instead very common among the population. On 14 June 1848, during the First Italian War of Independence, a circular from the Ministry of War decreed the replacement of the azure cockade, which until then had been placed on the hat of the uniform of the Carabinieri, with "the cockade to the three Italian national colours in accordance with the established models". This was not an exception: the tricolour cockade similarly replaced the azure one, for example, on the frieze of the Bersaglieri caps and on the headdresses of the soldiers of the cavalry regiments. On the hat of the Carabinieri the azure cockade had been present since the founding of the corps in 1814, while for the cavalry its introduction can be ascribed to 1843.

The azure cockade was nevertheless still used during the Sardinian campaign in central Italy in 1860, the siege of Gaeta (also dated 1860), the repression of post-unification brigandage (1860–70) and the Third Italian War of Independence (1866), in all cases pinned on the uniforms of the generals and officers of the Royal Italian Army. The blue cockade was officially in use until 1 January 1948, when the Constitution of the Italian Republic came into force, being replaced, in all official locations, by the Italian tricolour cockade.

Historical evolution of the cockade of Italy (galleries): in the institutional context; in the military field; in the aeronautical field; in the sports field.

See also: Flag of Italy, National symbols of Italy, Tricolour Day
Xenobot
https://en.wikipedia.org/wiki/Xenobot
Xenobots, named after the African clawed frog (Xenopus laevis), are synthetic lifeforms that are designed by computers to perform some desired function and built by combining different biological tissues. There is debate among scientists whether xenobots are robots, organisms, or something else entirely.

Existing xenobots

The first xenobots were built by Douglas Blackiston according to blueprints generated by an AI program, which was developed by Sam Kriegman. Xenobots built to date have been less than a millimeter wide and composed of just two things: skin cells and heart muscle cells, both derived from stem cells harvested from early (blastula stage) frog embryos. The skin cells provide rigid support, and the heart cells act as small motors, contracting and expanding in volume to propel the xenobot forward. The shape of a xenobot's body, and its distribution of skin and heart cells, are automatically designed in simulation to perform a specific task, using a process of trial and error (an evolutionary algorithm); a toy sketch of such a design loop appears at the end of this article. Xenobots have been designed to walk, swim, push pellets, carry payloads, and work together in a swarm to aggregate debris scattered along the surface of their dish into neat piles. They can survive for weeks without food and heal themselves after lacerations.

Other kinds of motors and sensors have been incorporated into xenobots. Instead of heart muscle, xenobots can grow patches of cilia and use them as small oars for swimming. However, cilia-driven xenobot locomotion is currently less controllable than cardiac-driven locomotion. An RNA molecule can also be introduced to xenobots to give them molecular memory: if exposed to a specific kind of light during their behavior, they will glow a prespecified color when viewed under a fluorescence microscope.

Xenobots can also self-replicate: they can gather loose cells in their environment and form them into new xenobots with the same capability.

Potential applications

Currently, xenobots are primarily used as a scientific tool to understand how cells cooperate to build complex bodies during morphogenesis. However, the behavior and biocompatibility of current xenobots suggest several potential applications to which they may be put in the future.

Xenobots are composed solely of frog cells, making them biodegradable and environmentally friendly robots. Unlike traditional technologies, xenobots do not generate pollution or require external energy inputs during their life cycle. They move using energy from fat and protein naturally stored in their tissue, which lasts about a week, at which point they simply turn into dead skin cells. Additionally, since swarms of xenobots tend to work together to push microscopic pellets in their dish into central piles, it has been speculated that future xenobots might be able to find and aggregate tiny bits of ocean-polluting microplastics into a large ball of plastic that a traditional boat or drone could gather and bring to a recycling center.

In future clinical applications, such as targeted drug delivery, xenobots could be made from a human patient's own cells, which would virtually eliminate the immune-response challenges inherent in other kinds of micro-robotic delivery systems. Such xenobots could potentially be used to scrape plaque from arteries and, with additional cell types and bioengineering, locate and treat disease.
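To make the trial-and-error design process referenced above concrete, here is a toy sketch of an evolutionary loop over candidate body plans. It is only an illustration under assumed details: the real pipeline evaluated designs in a voxel-based physics simulator, whereas the grid representation, mutation rate, and displacement proxy below are invented for the example.

```python
# Toy sketch of an evolutionary design loop: candidate "bodies" are small
# grids whose cells are either skin (0) or heart muscle (1), scored by a
# stand-in fitness function. Grid size, population size, mutation rate and
# the fitness proxy are illustrative assumptions, not values from the paper.
import random

GRID = 8            # body plans are GRID x GRID arrays of cell types
POP, GENS = 20, 50  # population size and number of generations

def random_body():
    return [[random.randint(0, 1) for _ in range(GRID)] for _ in range(GRID)]

def fitness(body):
    # Stand-in for simulated locomotion: reward muscle concentrated low in
    # the body (a crude proxy for cells that push against the dish floor).
    return sum(cell * row for row, cells in enumerate(body) for cell in cells)

def mutate(body, rate=0.05):
    # Flip each cell type with a small probability, returning a new body.
    return [[1 - c if random.random() < rate else c for c in row] for row in body]

population = [random_body() for _ in range(POP)]
for gen in range(GENS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP // 2]                   # truncation selection
    population = survivors + [mutate(b) for b in survivors]
print("best fitness:", fitness(max(population, key=fitness)))
```

In the actual work, the fitness call is replaced by a full physics simulation of the candidate body, which is what makes the search expensive and the surviving designs non-obvious.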
See also: Artificial life, Hybrot

External links: a webpage summarizing and linking to all of the xenobot papers; the Xenobot Lab website; "These Researchers Used A.I. to Design a Completely New 'Animal Robot'" from Scientific American
Dependency network (graphical model)
https://en.wikipedia.org/wiki/Dependency%20network%20%28graphical%20model%29
Dependency networks (DNs) are graphical models, similar to Markov networks, wherein each vertex (node) corresponds to a random variable and each edge captures dependencies among variables. Unlike Bayesian networks, DNs may contain cycles. Each node is associated with a conditional probability table, which determines the realization of the random variable given its parents.

Markov blanket

In a Bayesian network, the Markov blanket of a node is the set of parents and children of that node, together with the children's parents. The values of the parents and children of a node evidently give information about that node; however, its children's parents also have to be included in the Markov blanket, because they can be used to explain away the node in question. In a Markov random field, the Markov blanket of a node is simply its adjacent (neighbouring) nodes. In a dependency network, the Markov blanket of a node is simply the set of its parents.

Dependency networks versus Bayesian networks

Dependency networks have advantages and disadvantages with respect to Bayesian networks. In particular, they are easier to parameterize from data, as there are efficient algorithms for learning both the structure and the probabilities of a dependency network from data. Such algorithms are not available for Bayesian networks, for which the problem of determining the optimal structure is NP-hard. Nonetheless, a dependency network may be more difficult to construct using a knowledge-based approach driven by expert knowledge.

Dependency networks versus Markov networks

Consistent dependency networks and Markov networks have the same representational power. Nonetheless, it is possible to construct non-consistent dependency networks, i.e., dependency networks for which there is no compatible valid joint probability distribution. Markov networks, in contrast, are always consistent.

Definition

A consistent dependency network for a set of random variables X = (X_1, ..., X_n) with joint distribution p(x) is a pair (G, P), where G is a cyclic directed graph in which each node corresponds to a variable in X, and P is a set of conditional probability distributions. The parents of node X_i, denoted Pa_i, correspond to those variables that satisfy the independence relationship

    p(x_i | x_1, ..., x_{i-1}, x_{i+1}, ..., x_n) = p(x_i | pa_i).

The dependency network is consistent in the sense that each local distribution p(x_i | pa_i) can be obtained from the joint distribution p(x). Dependency networks learned from data sets with large sample sizes will almost always be consistent. A non-consistent network is a network for which there is no joint probability distribution compatible with the pair (G, P); in that case, there is no joint probability distribution that satisfies the independence relationships subsumed by the pair.

Structure and parameter learning

Two important tasks in a dependency network are to learn its structure and its probabilities from data. Essentially, the learning algorithm consists of independently performing a probabilistic regression or classification for each variable in the domain. This follows from the observation that the local distribution for variable X_i in a dependency network is the conditional distribution p(x_i | pa_i), which can be estimated by any number of classification or regression techniques, such as methods using a probabilistic decision tree, a neural network or a probabilistic support-vector machine. Hence, for each variable X_i in the domain X, we independently estimate its local distribution from data using a classification algorithm, possibly with a distinct method for each variable.
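As an illustration of this per-variable estimation, here is a minimal sketch that fits one probabilistic classifier per variable, using all the other variables as inputs. The binary data, the scikit-learn decision-tree estimator and the helper name are assumptions made for the example, not details prescribed by the original formulation.

```python
# Minimal sketch of dependency-network parameter learning: fit one
# probabilistic classifier per variable, conditioning on all other variables.
# Assumptions (not from the source): binary variables in a NumPy array and
# scikit-learn decision trees as the local estimators.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def learn_dependency_network(data: np.ndarray, max_depth: int = 3):
    """data: (n_samples, n_vars) array; returns one classifier per variable."""
    n_vars = data.shape[1]
    local_models = []
    for i in range(n_vars):
        X = np.delete(data, i, axis=1)   # all variables except X_i
        y = data[:, i]                   # target variable X_i
        clf = DecisionTreeClassifier(max_depth=max_depth).fit(X, y)
        local_models.append(clf)         # estimates p(x_i | rest)
    return local_models

# Usage: sample 500 rows of 4 binary variables and learn the local models.
rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(500, 4))
models = learn_dependency_network(data)
print(models[0].predict_proba(data[:1, 1:]))  # p(x_0 | x_1, x_2, x_3)
```

Note that each classifier is fit independently, which is exactly why learning is cheap compared with Bayesian-network structure search, and also why the resulting network is not guaranteed to be consistent.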
Here, we briefly show how probabilistic decision trees are used to estimate the local distributions. For each variable X_i in X, a probabilistic decision tree is learned in which X_i is the target variable and the remaining variables are the inputs. To learn a decision tree structure for X_i, the search algorithm begins with a singleton root node without children. Then, each leaf node in the tree is replaced with a binary split on some input variable, until no more replacements increase the score of the tree.

Probabilistic inference

Probabilistic inference is the task of answering probabilistic queries of the form p(y | z), given a graphical model for X, where Y (the 'target' variables) and Z (the 'input' or evidence variables) are disjoint subsets of X. One alternative for performing probabilistic inference is Gibbs sampling. A naive approach uses an ordered Gibbs sampler, an important difficulty of which is that if either p(y | z) or p(z) is small, then many iterations are required for an accurate probability estimate. Another approach for estimating p(y | z) when p(z) is small is to use a modified ordered Gibbs sampler, in which the evidence Z = z is held fixed during sampling.

It may also happen that the evidence z is rare, e.g. when Z contains many variables. In that case, the law of total probability along with the independencies encoded in a dependency network can be used to decompose the inference task into a set of inference tasks on single variables. This approach comes with the advantage that some terms may be obtained by direct lookup, thereby avoiding some Gibbs sampling. The algorithm below can be used to obtain p(y | z) for particular instances y and z, where Y and Z are disjoint subsets; a code sketch of the modified ordered Gibbs sampler it relies on is given at the end of this article.

Algorithm 1:
    U := Y    (* the unprocessed variables *)
    P := Z    (* the processed and conditioning variables *)
    p := z    (* the values for P *)
    while U is not empty:
        choose X_i in U such that X_i has no more parents in U than any other variable in U
        if all the parents of X_i are in P:
            obtain p(x_i | p) by direct lookup in the local distribution of X_i
        else:
            use a modified ordered Gibbs sampler to determine p(x_i | p)
        U := U \ {X_i};  P := P ∪ {X_i};  p := p ∪ {x_i}
    return the product of the conditionals p(x_i | p)

Applications

In addition to the applications to probabilistic inference, the following applications fall in the category of collaborative filtering (CF), the task of predicting preferences. Dependency networks are a natural model class on which to base CF predictions, since an algorithm for this task only needs an estimate of each local conditional distribution to produce recommendations; in particular, these estimates may be obtained by a direct lookup in a dependency network. Examples include:

predicting what movies a person will like based on his or her ratings of movies seen;
predicting what web pages a person will access based on his or her history on the site;
predicting what news stories a person is interested in based on other stories he or she has read;
predicting what product a person will buy based on products he or she has already purchased and/or dropped into his or her shopping basket.

Another class of useful applications for dependency networks is related to data visualization, that is, the visualization of predictive relationships.

See also: Relational dependency network
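As referenced above, here is a minimal sketch of the modified ordered Gibbs sampler, reusing the local_models from the learning sketch earlier: evidence variables are held fixed while each remaining variable is resampled from its estimated local distribution. The iteration counts, function name and the assumption that both classes of each binary variable were seen in training are illustrative choices, not prescriptions from the literature.

```python
# Sketch of a modified ordered Gibbs sampler for a dependency network:
# evidence variables stay fixed while the others are resampled in turn from
# their local conditional distributions. Burn-in and iteration counts are
# illustrative assumptions only.
import numpy as np

def gibbs_estimate(local_models, n_vars, evidence, target,
                   n_iter=2000, burn_in=200, seed=0):
    """Estimate p(X_target = 1 | evidence), with evidence = {index: value}."""
    rng = np.random.default_rng(seed)
    state = np.array([evidence.get(i, int(rng.integers(0, 2)))
                      for i in range(n_vars)])
    hits, count = 0, 0
    for t in range(n_iter):
        for i in range(n_vars):
            if i in evidence:
                continue  # modified sampler: evidence is never resampled
            x_rest = np.delete(state, i).reshape(1, -1)
            # Probability of class 1 (assumes both classes seen in training).
            p1 = local_models[i].predict_proba(x_rest)[0][-1]
            state[i] = rng.random() < p1
        if t >= burn_in:
            hits += state[target]
            count += 1
    return hits / count

# Usage with the models learned earlier: p(x_0 = 1 | x_2 = 1, x_3 = 0).
# print(gibbs_estimate(models, 4, evidence={2: 1, 3: 0}, target=0))
```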
Log 9 Materials
https://en.wikipedia.org/wiki/Log%209%20Materials
Log9 Materials is an Indian nanotechnology company, headquartered in Bangalore, operating in the areas of sustainable energy and filtration. With 16 patents around graphene, Log9 Materials has developed aluminium–air batteries and aluminium fuel cells for both mobility and stationary energy applications. Log9 was awarded "Most Innovative Technology Company of 2018" by the Department of Science and Technology (India), Government of India.

History

Log9 was founded by Akshay Singhal along with Kartik Hajela in 2015 and has acquired 16 patents in graphene synthesis and graphene products. It is the first start-up to be incubated by IIT Roorkee in its business incubator TIDES. In 2017, Log9 secured its first round of funding, led by Gems Partners, a micro venture capital fund, to establish its own research and development centre in Bangalore, and tied up with the Indian Institute of Science to build products jointly using the latter's analytical and research capabilities. The company has set up a subsidiary in Mumbai by the name of Log9 Spill Containment, a graphene-based product development company that specializes in oil and chemical spill containment solutions. Also in 2017, the Indian defence sector tied up with Log9 for the deployment of nanotechnology. In 2022, Log9 developed indigenous LFP and LTO batteries in its R&D facility. In 2023, Log9 launched India's first commercial Li-ion cell manufacturing facility at its campus in Jakkur, Bengaluru. In April 2024, Log9 received BIS certification for its LTO batteries. In September 2024, Musashi Seimitsu Industry Co., Ltd. of Japan announced a strategic partnership with Log9 Materials; the partnership aims to transform the electric vehicle (EV) market by combining Musashi's high-performance e-Axle system with Log9's battery technology, creating an integrated powertrain solution tailored specifically for electric two-wheelers and three-wheelers.

Funding

In 2019, Log9 Materials raised $3.5 million in Series A funding led by Sequoia Capital and Exfinity Venture Partners.

Awards

2018: Awarded "Most Innovative Technology Company of 2018" by the Department of Science and Technology (India)

External links: Official website
Pyridine alkaloids
https://en.wikipedia.org/wiki/Pyridine%20alkaloids
Pyridine alkaloids are a class of alkaloids, nitrogen-containing chemical compounds widely found in plants, that contain a pyridine ring. Examples include nicotine and anabasine, which are found in plants of the genus Nicotiana, including tobacco. Alkaloids with a pyridine partial structure are usually further subdivided according to their occurrence and their biogenetic origin. The most important examples of pyridine alkaloids are nicotine and anabasine, found in tobacco, the areca alkaloids in betel, and ricinine in castor oil.
Pentamethylarsenic
https://en.wikipedia.org/wiki/Pentamethylarsenic
Pentamethylarsenic (or pentamethylarsorane) is an organometallic compound containing five methyl groups bound to an arsenic atom, with formula As(CH3)5. It is an example of a hypervalent compound. The molecular shape is trigonal bipyramidal.

History

The first claim to have made pentamethylarsenic was in 1862, in a reaction of tetramethylarsonium iodide with dimethylzinc by A. Cahours. For many years all attempts to reproduce this proved fruitless, so the reported preparation is considered not to have been genuine. The compound was actually first made by Karl-Heinz Mitschke and Hubert Schmidbaur in 1973.

Production

Trimethylarsine is chlorinated to trimethylarsine dichloride, which then reacts with methyllithium to yield pentamethylarsenic.

As(CH3)3 + Cl2 → As(CH3)3Cl2
As(CH3)3Cl2 + 2 LiCH3 → As(CH3)5 + 2 LiCl

Side products include As(CH3)4Cl and As(CH3)3=CH2. Pentamethylarsenic is not produced by biological organisms.

Properties

Pentamethylarsenic smells the same as pentamethylantimony, but is otherwise distinct. The bond lengths in the molecule are 1.975 Å for the three equatorial As−C bonds and 2.073 Å for the two axial As−C bonds. The infrared spectrum of pentamethylarsenic shows strong bands at 582 and 358 cm−1 due to axial C−As vibration, and weaker bands at 265 and 297 cm−1 due to equatorial C−As vibration. The Raman spectrum shows strong features at 519, 388 and 113 cm−1, and weak lines at 570 and 300 cm−1.

Reactions

Pentamethylarsenic reacts slowly with weak acids. With water it forms tetramethylarsonium hydroxide, As(CH3)4OH, and trimethylarsine oxide, As(CH3)3O. With methanol, tetramethylmethoxyarsorane, As(CH3)4OCH3, is produced. Hydrogen halides react to form tetramethylarsonium halide salts. When pentamethylarsenic is heated to 100 °C it decomposes, forming trimethylarsine, methane and ethylene. When trimethylindium reacts with pentamethylarsenic in benzene solution, a salt precipitates: tetramethylarsenic(V) tetramethylindate(III).
Pentamethylantimony
https://en.wikipedia.org/wiki/Pentamethylantimony
Pentamethylantimony (or pentamethylstiborane) is an organometallic compound containing five methyl groups bound to an antimony atom, with formula Sb(CH3)5. It is an example of a hypervalent compound. The molecular shape is trigonal bipyramidal. Some other antimony(V) organometallic compounds include pentapropynylantimony (Sb(CCCH3)5) and pentaphenylantimony (Sb(C6H5)5). The other known pentamethyl pnictides are pentamethylbismuth and pentamethylarsenic.

Production

Pentamethylantimony can be made by reacting Sb(CH3)3Br2 with two equivalents of methyllithium. Another production route is to convert trimethylstibine to trimethylantimony dichloride and then replace the chlorine with methyl groups using methyllithium.

Sb(CH3)3 + Cl2 → Sb(CH3)3Cl2
Sb(CH3)3Cl2 + 2 LiCH3 → Sb(CH3)5 + 2 LiCl

Properties

Pentamethylantimony is colourless. At −143 °C it crystallizes in the orthorhombic system with space group Ccmm. The unit cell dimensions are a = 6.630 Å, b = 11.004 Å and c = 11.090 Å, with four formula units per unit cell; the unit cell volume is 809.1 Å3. The trigonal bipyramidal shape has three equatorial positions for carbon and two axial positions at the apices of the pyramids. The length of the antimony–carbon bond is around 214 pm for the equatorial methyl groups and 222 pm for the axial positions. The bond angles are 120° for ∠C−Sb−C across the equator and 90° for ∠C−Sb−C between equator and axis. The molecules rapidly exchange carbon atom positions, so that in the NMR spectrum, even as low as −100 °C, only one kind of hydrogen environment is observed.

Pentamethylantimony is more stable than pentamethylbismuth because the decomposition product trimethylbismuth is lower in energy, its non-bonding electron pair being more stabilized owing to the f-electrons and the lanthanoid contraction; trimethylantimony is higher in energy, so less energy is released in the decomposition of pentamethylantimony. Pentamethylantimony can be stored as a liquid in clean glass at room temperature. It melts at −19 °C. Although it decomposes when boiling is attempted, and can explode, it has a high vapour pressure of 8 mmHg at 25 °C. There are two absorption bands in the ultraviolet, at 2380 and 2500 Å.

Reactions

Pentamethylantimony reacts with methyllithium in tetrahydrofuran to yield a colourless lithium hexamethylantimonate:

Sb(CH3)5 + LiCH3 → Li(thf)Sb(CH3)6

Pentamethylantimony reacts with silsesquioxanes to yield tetramethylstibonium silsesquioxanes; e.g. (cyclo-C6H11)7Si7O9(OH)3 yields (cyclo-C6H11)7Si7O9(OSb(CH3)4)3. The reaction happens quickly when there are more than two OH groups. Phosphonic and phosphinic acids combine with pentamethylantimony, eliminating methane, to yield compounds such as (CH3)4SbOP(O)Ph2, (CH3)4SbOP(O)(OH)Ph and (CH3)4SbOP(O)(OH)3. Stannocene, Sn(C5H5)2, combines with pentamethylantimony to produce bis(tetramethylstibonium) tetracyclopentadienylstannate, [(CH3)4Sb]2Sn(C5H5)4.

Pentamethylantimony reacts with many very weak acids to form a tetramethylstibonium salt or a tetramethylstibonium derivative of the acid. Such acids include water (H2O), alcohols, thiols, phenol, carboxylic acids, hydrogen fluoride, thiocyanic acid, hydrazoic acid, difluorophosphoric acid, thiophosphinic acids and alkylsilols. With halogens, pentamethylantimony has one or two methyl groups replaced by halogen atoms. Lewis acids also react to form tetramethylstibonium salts, including [(CH3)4Sb]TlBr4 and [(CH3)4Sb][CH3SbCl5]. Pentamethylantimony reacts with the surface of silica to coat it with Si−O−Sb(CH3)4 groups; above 250 °C this coating decomposes to Sb(CH3) and leaves methyl groups attached to the silica surface.
Pentamethylantimony
[ "Physics", "Chemistry" ]
1,102
[ "Molecules", "Hypervalent molecules", "Matter" ]
54,305,958
https://en.wikipedia.org/wiki/Pentamethylbismuth
Pentamethylbismuth (or pentamethylbismuthorane) is an organometallic compound containing five methyl groups bound to a bismuth atom, with formula Bi(CH3)5. It is an example of a hypervalent compound. The molecular shape is trigonal bipyramidal. Production Pentamethylbismuth is produced in a two-step process. First, trimethylbismuth is reacted with sulfuryl chloride to yield trimethylbismuth dichloride, which is then reacted with two equivalents of methyllithium dissolved in ether. The blue solution is cooled to −110 °C to precipitate the solid product. Bi(CH3)3 + SO2Cl2 → Bi(CH3)3Cl2 + SO2 Bi(CH3)3Cl2 + 2LiCH3 → Bi(CH3)5 + 2LiCl Properties At −110 °C, Bi(CH3)5 is a blue-violet solid. The methyl groups are arranged in a trigonal bipyramid, and the bismuth-methyl bond lengths are all the same. However, the molecule is not rigid, as can be determined from the nuclear magnetic resonance spectrum, which shows that all methyl groups are equivalent. It is stable as a solid, but in the gas phase, when heated, or in solution it decomposes to trimethylbismuth. The colour is unusual, as bismuth compounds and other hypervalent pnictide compounds are generally colourless. Calculations show that the colour is due to a HOMO-LUMO transition. The HOMO is ligand-based, whereas the LUMO is modified by relativistically stabilised bismuth 6s orbitals. Reactions If excess methyllithium is used in production, an orange hexamethylbismuthate salt, LiBi(CH3)6, is formed. References Further reading Organobismuth compounds Hypervalent molecules Methyl complexes
Pentamethylbismuth
[ "Physics", "Chemistry" ]
409
[ "Molecules", "Hypervalent molecules", "Matter" ]
54,309,186
https://en.wikipedia.org/wiki/Pentamethylcyclopentadienyl%20rhodium%20dichloride%20dimer
Pentamethylcyclopentadienyl rhodium dichloride dimer is an organometallic compound with the formula [(C5(CH3)5)RhCl2]2, commonly abbreviated [Cp*RhCl2]2. This dark red, air-stable, diamagnetic solid is a reagent in organometallic chemistry. Structure and preparation The compound has idealized C2h symmetry. Each metal centre is pseudo-octahedral. The compound is prepared by the reaction of rhodium trichloride trihydrate and pentamethylcyclopentadiene in hot methanol, from which the product precipitates. It was first prepared by the reaction of hydrated rhodium trichloride, RhCl3(H2O)3, with hexamethyl Dewar benzene. The hydrohalic acid necessary for the ring-contraction rearrangement is generated in situ in methanolic solutions of the rhodium salt, and the second step has been carried out separately, confirming this mechanistic description. The reaction occurs with the formation of 1,1-dimethoxyethane, CH3CH(OCH3)2, and hexamethylbenzene is produced by a side reaction. This rhodium(III) dimer can be reduced with zinc in the presence of CO to produce the rhodium(I) complex [Cp*Rh(CO)2]. Reactions Reductive carbonylation gives [Cp*Rh(CO)2]. The Rh-μ-Cl bonds are labile and cleave en route to a variety of adducts of the general formula Cp*RhCl2L. Treatment with silver ions in polar coordinating solvents causes precipitation of silver(I) chloride, leaving a solution containing dications of the form [Cp*RhL3]2+ (L = H2O, MeCN). The chemistry is similar to that of the analogous pentamethylcyclopentadienyl iridium dichloride dimer. Further reading (early literature) References Organorhodium compounds Dimers (chemistry) Pentamethylcyclopentadienyl complexes Chloro complexes Rhodium(III) compounds
Pentamethylcyclopentadienyl rhodium dichloride dimer
[ "Chemistry", "Materials_science" ]
501
[ "Dimers (chemistry)", "Polymer chemistry" ]
54,311,087
https://en.wikipedia.org/wiki/Yttrium%20phosphide
Yttrium phosphide is an inorganic compound of yttrium and phosphorus with the chemical formula YP. The compound may also be classified as yttrium(III) phosphide. Synthesis It can be prepared by heating (500–1000 °C) the pure elements in a vacuum: Y + P → YP Properties Yttrium phosphide forms cubic crystals. Uses Yttrium phosphide is a semiconductor used in laser diodes, and in high-power and high-frequency applications. References Phosphides Yttrium compounds Rock salt crystal structure
Yttrium phosphide
[ "Chemistry" ]
109
[ "Inorganic compounds", "Inorganic compound stubs" ]
54,311,175
https://en.wikipedia.org/wiki/Zinc%20transporter%20ZIP9
Zinc transporter ZIP9, also known as Zrt- and Irt-like protein 9 (ZIP9) and solute carrier family 39 member 9, is a protein that in humans is encoded by the SLC39A9 gene. This protein, the ninth of the 14 ZIP family proteins, is a membrane androgen receptor (mAR) coupled to G proteins and is also classified as a zinc transporter protein. ZIP family proteins transport zinc from the extracellular environment into cells through the cell membrane. Classification and nomenclature Mammalian cells have two major groups of zinc transporter proteins: those that export zinc from the cytoplasm to the extracellular space (efflux), which are called ZnT (SLC30 family) proteins, and ZIP (SLC39 family) proteins, whose function is in the opposite direction (influx). ZIP family proteins are named Zrt- and Irt-like proteins because of their similarities to Zrt and Irt proteins, which are respectively zinc- and iron-regulated transporter proteins in yeast and Arabidopsis that were discovered earlier than the ZIP and ZnT proteins. The ZIP family consists of four subfamilies (I, II, LIV-1, and gufA), and ZIP9 is the only member of subfamily I. Isoforms ZIP9 is present as three different isoforms in human cells. The canonical isoform of this protein has a length of 307 amino acids. In the second isoform, amino acids 135-157 are missing, so its length is reduced to 284 amino acids. In the third isoform the amino acids 233-307 are missing, so the isoform has only 232 amino acids. Additionally, the last amino acid of isoform 3, which is usually serine, is replaced with aspartic acid. Discovery The ZIP9 membrane androgen receptor was first discovered in Atlantic croaker (Micropogonias undulatus) brain, ovary and testicular tissues and named "AR2" in 1999, together with another androgen receptor which was found only in brain tissue and was named "AR1" at that time. AR1 and AR2 were first thought to be nuclear androgen receptors (nAR); however, further studies of their biochemical and functional features in 2003 showed that they were involved in non-genomic mechanisms in the plasma membrane of cells and were membrane androgen receptors. In 2005, the similarities between the nucleotide and amino acid sequences of AR2 and ZIP family proteins were discovered in other vertebrates, suggesting that AR2 belongs to this family of proteins. A study in 2014 cloned and expressed a particular cDNA from female Atlantic croaker ovaries, which encoded a protein showing the characteristics of the canonical isoform of ZIP9, as a novel membrane androgen receptor (mAR). Structure Unlike other ZIP subfamilies, which consist of 8 transmembrane (TM) domains with an extracellular C-terminus, ZIP9 consists of a 7 TM structure with an intracellular C-terminus. ZIP9 is shorter than other ZIP proteins, having only about 307 amino acids within its structure; however, like other ZIP proteins, it contains histidine-rich clusters between its domains III and IV, within the intracellular loop. ZIP9 and other ZIP proteins have polar or charged amino acids in their TM domains, which probably play important roles in forming ion transfer channels and therefore in importing zinc ions into the cytoplasm. Location, expression and function ZIP9 transports zinc ions into the cytosol and its gene is expressed in almost every tissue of the human body.
The sub-cellular locations of ZIP9 are the plasma membrane, nucleus, endoplasmic reticulum and mitochondrial membrane. One of the responsibilities of ZIP9 is the homeostasis of zinc in the secretory pathway, during which this protein stays within the trans-Golgi network regardless of changes in zinc concentrations. ZIP9 is the only ZIP protein that signals through G protein binding, and pharmaceutical agents decrease its ligand binding once ZIP9 is uncoupled from G proteins. ZIP9 is also the only member of the ZIP family with mAR characteristics. Ligands Testosterone has high affinity for ZIP9, with a Kd of 14 nM, and acts as an agonist of the receptor. In contrast, the other endogenous androgens dihydrotestosterone (DHT) and androstenedione show low affinity for the receptor, with less than 1% of that of testosterone, although DHT is still effective in activating the receptor at sufficiently high concentrations. Moreover, the synthetic androgens mibolerone and metribolone (R-1881), the endogenous androgen 11-ketotestosterone, and the other steroid hormones estradiol and cortisol are all ineffective competitors for the receptor. Since mibolerone and metribolone bind to and activate the nuclear androgen receptor (AR) but not ZIP9, they could potentially be employed to differentiate between AR- and ZIP9-mediated responses to testosterone. The nonsteroidal antiandrogen bicalutamide has been identified as an antagonist of ZIP9. Clinical significance Zinc homeostasis is very important in human health, because zinc is present in the structure of some proteins, such as zinc-dependent metalloenzymes and zinc-finger-containing transcription factors. In addition, zinc is involved in signalling for cell growth, proliferation, division and apoptosis. As a result, any dysfunction of zinc transporter proteins can be harmful to cells, and some such dysfunctions are associated with different cancers, diabetes and inflammation. For instance, through activation of ZIP9, testosterone has been found to increase intracellular zinc levels in breast cancer, prostate cancer, and ovarian follicle cells and to induce apoptosis in these cells, an action which may be mediated partially or fully by increased zinc concentrations. Gene mutations Mutations in the SLC39A9 gene can occur due to genetic deletion of the q24.1-24.3 band of base pairs within human chromosome 14. This interstitial deletion removes the SLC39A9 gene along with 18 other genes found close to it on chromosome 14. Although specific gene-associated diseases have not been determined, the deletion of this band causes conditions such as congenital heart defects, mild intellectual disability and brachydactyly, and all patients with the band deletion had hypertelorism and a broad nasal bridge. Patient-specific clinical issues included ectopic organs, undescended testes (also called cryptorchidism), and malrotation of the small intestine. Deletion mutations involving the SLC39A9 gene have also been reported in 23 cases of patients with circulation-related cancers such as B-cell lymphoma and B-cell chronic lymphocytic leukaemia (CLL). Chimeric genes are a result of faulty DNA replication, and arise when two or more coding sequences of the same or different chromosomes combine to produce a single new gene. SLC39A9 forms a chimeric gene product with a gene called PLEKHD1, which codes for an intracellular protein found within the cerebellum.
A study done in Seattle, USA, established that the fusion protein product of the SLC39A9-PLEKHD1 gene was present in 124 cases of schizophrenia and was closely related to the pathophysiology of the disease. The fusion protein had features from both parent genes and also possessed the ability to interact with cellular signalling pathways involving kinases such as Akt and Erk, leading to their increased phosphorylation within the brain and a consequent onset of schizophrenia. The SLC39A9 gene also forms a fusion transcript with another gene called MAP3K9, which encodes the MAP3 kinase enzyme. This SLC39A9-MAP3K9 fusion gene occurs recurrently in breast cancers, as demonstrated by a study done on 120 primary breast cancer samples from Korean women in 2015. Cancer Breast and prostate A study in 2014 elucidated the intermediary role of ZIP9 in human breast and prostate cancer, as it induced apoptosis in the presence of testosterone in breast and prostate cancer cells. Unlike ZIP1, 2 and 3, ZIP9 mRNA expression was increased in malignant biopsy cells from human prostate and breast cancers, probably because cells that divide rapidly require more zinc. Brain Treatment of glioblastoma cells with TPEN showed that upregulation of ZIP9 in glioblastoma cells enhances cell migration in brain cancer by influencing p53 and GSK-3β, as well as ERK and AKT signalling pathways, via phosphorylation following activation of B-cell receptors. Diabetes Zinc must be constantly supplied to pancreatic β-cells for them to function normally and maintain glycaemic control. The insulin secretory pathway in humans is highly dependent on zinc activity. The cells lose many zinc ions during the secretion of insulin and need to receive more zinc, and expression of ZIP9 mRNA increases during this process. As a result, ZIP9, which is involved in importing zinc into cells, is potentially a target for future therapeutic studies regarding type 2 diabetes. See also GPRC6A Ion transporter Membrane androgen receptor Zinc transporter protein Atlantic croaker GPCR References G protein-coupled receptors Solute carrier family
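To put the binding affinities quoted above in context, a simple single-site equilibrium model relates fractional receptor occupancy to ligand concentration and Kd via θ = [L] / ([L] + Kd). The minimal sketch below (plain Python; the testosterone concentrations chosen are arbitrary illustrative values, not from the cited studies) shows why a Kd of 14 nM means ZIP9 is substantially occupied at low-nanomolar ligand levels:

```python
# Fractional occupancy of a single-site receptor at equilibrium:
#   theta = [L] / ([L] + Kd)
# Kd = 14 nM for testosterone at ZIP9, per the text above.
KD_NM = 14.0

def occupancy(ligand_nm: float, kd_nm: float = KD_NM) -> float:
    return ligand_nm / (ligand_nm + kd_nm)

# Illustrative ligand concentrations (nM); chosen for demonstration only.
for conc in (1.0, 14.0, 100.0):
    print(f"[L] = {conc:6.1f} nM -> occupancy = {occupancy(conc):.2f}")
# At [L] = Kd the receptor is, by definition, half occupied.
```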
Zinc transporter ZIP9
[ "Chemistry" ]
1,997
[ "G protein-coupled receptors", "Signal transduction" ]
57,617,730
https://en.wikipedia.org/wiki/Carbon-11-choline
Carbon-11 choline is a radiotracer used in medical imaging. Because choline is involved in biological processes, its concentration is altered in several diseases, which led to the development of medical imaging techniques to monitor it. When radiolabeled with 11CH3, choline is a useful tracer in PET imaging. Carbon-11 is radioactive with a half-life of 20.38 minutes. By monitoring the gamma radiation resulting from the decay of carbon-11, the uptake, distribution, and retention of carbon-11 choline can be monitored. Specific applications One of the first uses of carbon-11 choline in PET imaging examined Alzheimer's disease patients. Choline is the precursor to the neurotransmitter acetylcholine, whose cholinergic activity is impaired in many neurodegenerative diseases including Alzheimer's. While there was uptake of the tracer in the brain, no pharmacokinetic pattern was found. Carbon-11 choline has found more success in cancer imaging. Choline is a precursor for the synthesis of phospholipids. When a cell is about to divide, it synthesizes these phospholipids to generate enough material to build the cell membranes of the two daughter cells. Thus it was hypothesized that highly proliferative tumors would take up more choline than the surrounding healthy tissue. This was first tested in brain tumors after successful demonstration of choline uptake in the brain. It was found that these brain tumors had over ten times the uptake of carbon-11 choline of the surrounding brain tissue. Furthermore, because of the low choline uptake in healthy brain tissue, carbon-11 choline was found to be a superior PET tracer to fluorine-18 fludeoxyglucose (FDG) for delineating brain tumors. Carbon-11 choline has also been used to detect tumors in the colon and esophagus and lung metastases. Prostate cancer is another disease where carbon-11 choline PET imaging has found success. As with the brain, there is too much signal from the surrounding tissue, especially the bladder, to accurately measure tumor uptake with fluorine-18 FDG. While it was shown carbon-11 choline could be used to detect the initiation of prostate cancer, its value was found in detecting prostate cancer recurrence, when it is most deadly. In 2012, the U.S. Food and Drug Administration approved carbon-11 choline as an imaging agent to be used during a PET scan to detect recurrent prostate cancer. References Quaternary ammonium compounds 3D nuclear medical imaging Medical physics
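The short 20.38-minute half-life quoted above directly constrains how the tracer is used: activity falls by half every 20.38 minutes, following N(t) = N0 · 2^(−t/T½). A minimal sketch of that arithmetic (plain Python; the elapsed times are arbitrary illustrative values):

```python
# Radioactive decay of carbon-11: N(t) = N0 * 2 ** (-t / T_HALF)
T_HALF_MIN = 20.38  # half-life of carbon-11 in minutes, from the text above

def fraction_remaining(t_min: float) -> float:
    return 2.0 ** (-t_min / T_HALF_MIN)

# Illustrative elapsed times between tracer synthesis and scan (minutes).
for t in (10, 20.38, 60, 120):
    print(f"t = {t:6.2f} min -> {fraction_remaining(t):.3f} of activity left")
# After 2 hours (~5.9 half-lives), under 2% of the activity remains, which is
# why carbon-11 tracers must be produced close to where they are used.
```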
Carbon-11-choline
[ "Physics" ]
551
[ "Antimatter", "Applied and interdisciplinary physics", "Positron emission tomography", "Medical physics", "Matter" ]
57,619,067
https://en.wikipedia.org/wiki/Printed%20electronic%20circuit
A printed electronic circuit (PEC) was an ancestor of the hybrid integrated circuit (IC). PECs were common in tube (valve) equipment from the 1940s through the 1970s. Brands Couplate was the Centralab trademark, whilst Sprague called them BulPlates. Aerovox used the generic term PEC. Difference from hybrid integrated circuits PECs contained only resistors and capacitors, arranged in circuits to simplify the construction of tube equipment. Their voltage ratings were suitable for tubes. Later hybrid ICs contained transistors, and often monolithic integrated circuits; their voltage ratings were suitable for the transistors they contained. References Electronic circuits
Printed electronic circuit
[ "Engineering" ]
136
[ "Electronic engineering", "Electronic circuits" ]
57,621,327
https://en.wikipedia.org/wiki/Radiation%20efficiency
In antenna theory, radiation efficiency is a measure of how well a radio antenna converts the radio-frequency power accepted at its terminals into radiated power. Likewise, in a receiving antenna it describes the proportion of the radio wave's power intercepted by the antenna which is actually delivered as an electrical signal. It is not to be confused with antenna efficiency, which applies to aperture antennas such as a parabolic reflector or phased array, or antenna/aperture illumination efficiency, which relates the maximum directivity of an antenna/aperture to its standard directivity. Definition Radiation efficiency is defined as "The ratio of the total power radiated by an antenna to the net power accepted by the antenna from the connected transmitter." It is sometimes expressed as a percentage (less than 100), and is frequency dependent. It can also be described in decibels. The gain of an antenna is the directivity multiplied by the radiation efficiency. Thus, we have G = e·D, where G is the gain of the antenna in a specified direction, e is the radiation efficiency, and D is the directivity of the antenna in the specified direction. For wire antennas, which have a defined radiation resistance, the radiation efficiency is the ratio of the radiation resistance to the total resistance of the antenna, e = Rr / (Rr + Rloss), where the total resistance includes ground loss (see below) and conductor resistance. In practical cases the resistive loss in any tuning and/or matching network is often included, although network loss is strictly not a property of the antenna. For other types of antenna the radiation efficiency is less easy to calculate and is usually determined by measurements. Radiation efficiency of an antenna or antenna array having several ports In the case of an antenna or antenna array having multiple ports, the radiation efficiency depends on the excitation. More precisely, the radiation efficiency depends on the relative phases and the relative amplitudes of the signals applied to the different ports. This dependence is always present, but it is easier to interpret in the case where the interactions between the ports are sufficiently small. These interactions may be large in many actual configurations, for instance in an antenna array built in a mobile phone to provide spatial diversity and/or spatial multiplexing. In this context, it is possible to define one efficiency metric as the minimum radiation efficiency over all possible excitations, and another as the maximum radiation efficiency over all possible excitations. Using the minimum radiation efficiency as a design parameter is particularly relevant to a multiport antenna array intended for MIMO transmission with spatial multiplexing, while using the maximum radiation efficiency as a design parameter is particularly relevant to a multiport antenna array intended for beamforming in a single direction or over a small solid angle. Measurement of the radiation efficiency Measurements of the radiation efficiency are difficult. Classical techniques include the ″Wheeler method″ (also referred to as the ″Wheeler cap method″) and the ″Q factor method″. The Wheeler method uses two impedance measurements, one of which is made with the antenna located in a metallic box (the cap). Unfortunately, the presence of the cap is likely to significantly modify the current distribution on the antenna, so the resulting accuracy is difficult to determine.
The Q factor method does not use a metallic enclosure, but it is based on the assumption that the Q factor of an ideal antenna is known, the ideal antenna being identical to the actual antenna except that the conductors have perfect conductivity and any dielectrics have zero loss. Thus, the Q factor method is only semi-experimental, because it relies on a theoretical computation using an assumed geometry of the actual antenna. Its accuracy is also difficult to determine. Other radiation efficiency measurement techniques include: the pattern integration method, which requires gain measurements over many directions and two polarizations; and reverberation chamber techniques, which utilize a mode-stirred reverberation chamber. Ohmic and ground loss The loss of radio-frequency power to heat can be subdivided in many different ways, depending on the number of significantly lossy objects electrically coupled to the antenna, and on the level of detail desired. Typically the simplest is to consider two types of loss: ohmic loss and ground loss. When discussed as distinct from ground loss, the term ohmic loss refers to the heat-producing resistance to the flow of radio current in the conductors of the antenna, their electrical connections, and possibly loss in the antenna's feed cable. Because of the skin effect, resistance to radio-frequency current is generally much higher than direct-current resistance. For vertical monopoles and other antennas placed near the ground, ground loss occurs due to the electrical resistance encountered by radio-frequency fields and currents passing through the soil in the vicinity of the antenna, as well as ohmic resistance in metal objects in the antenna's surroundings (such as its mast or stalk), in its ground plane / counterpoise, and in electrical and mechanical bonding connections. For antennas mounted a few wavelengths above the earth on a non-conducting, radio-transparent mast, ground losses are small enough compared to conductor losses that they can be ignored. Footnotes References Engineering ratios Antennas (radio) Radio electronics
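A minimal numerical sketch of the resistance-ratio definition given above (plain Python; the resistance values are arbitrary illustrative numbers, not measurements of any particular antenna):

```python
import math

# Radiation efficiency of a wire antenna from its resistances:
#   e = R_rad / (R_rad + R_ohmic + R_ground)
# and gain from directivity: G = e * D.
def radiation_efficiency(r_rad: float, r_ohmic: float, r_ground: float) -> float:
    return r_rad / (r_rad + r_ohmic + r_ground)

# Illustrative values for a short vertical monopole: 4 ohms of radiation
# resistance, 1 ohm of conductor loss, 15 ohms of ground loss.
e = radiation_efficiency(4.0, 1.0, 15.0)
print(f"efficiency = {e:.2f} ({10 * math.log10(e):.1f} dB)")

# With directivity D = 3 (about 4.8 dBi), the realized gain is:
D = 3.0
print(f"gain = {e * D:.2f} ({10 * math.log10(e * D):.1f} dBi)")
```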
Radiation efficiency
[ "Mathematics", "Engineering" ]
1,036
[ "Radio electronics", "Quantity", "Metrics", "Engineering ratios" ]
57,622,987
https://en.wikipedia.org/wiki/Akamptisomer
An akamptisomer is a type of conformational isomer characterized by a hindered inversion of a bond angle. It was first discovered in 2018 in a series of bridged porphyrin molecules. References Stereochemistry Isomerism
Akamptisomer
[ "Physics", "Chemistry" ]
51
[ "Stereochemistry", "Space", "Stereochemistry stubs", "Isomerism", "nan", "Spacetime" ]
57,623,219
https://en.wikipedia.org/wiki/Hypercolor%20%28physics%29
In particle physics, hypercolor is a hypothetical attractive force that binds prequarks together by the exchange of hypergluons, analogous to the exchange of gluons by the color force, which binds quarks together. See also Technicolor (physics) References Quantum chromodynamics
Hypercolor (physics)
[ "Physics" ]
64
[ "Particle physics stubs", "Particle physics" ]
57,625,144
https://en.wikipedia.org/wiki/Adamic%E2%80%93Adar%20index
The Adamic–Adar index is a measure introduced in 2003 by Lada Adamic and Eytan Adar to predict links in a social network, according to the amount of shared links between two nodes. It is defined as the sum of the inverse logarithmic degree centrality of the neighbours shared by the two nodes: A(x, y) = Σ_{u ∈ N(x) ∩ N(y)} 1 / log |N(u)|, where N(u) is the set of nodes adjacent to u. The definition is based on the concept that common elements with very large neighbourhoods are less significant when predicting a connection between two nodes compared with elements shared between a small number of nodes. References Further reading Data mining Index numbers Similarity measures
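A short, self-contained sketch of the computation (plain Python on an adjacency-set representation; the toy graph is an arbitrary example):

```python
import math

# Adamic-Adar index: A(x, y) = sum over shared neighbours u of 1 / log |N(u)|.
def adamic_adar(graph: dict, x, y) -> float:
    shared = graph[x] & graph[y]
    # Neighbours of degree 1 are skipped: log(1) = 0 would divide by zero.
    return sum(1.0 / math.log(len(graph[u])) for u in shared if len(graph[u]) > 1)

# Toy undirected graph as adjacency sets (illustrative only).
graph = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c"},
}
# Shared neighbours of b and d are a and c, each of degree 3,
# so A(b, d) = 2 / log(3).
print(adamic_adar(graph, "b", "d"))  # ~1.820
```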
Adamic–Adar index
[ "Physics", "Mathematics" ]
122
[ "Physical quantities", "Distance", "Mathematical objects", "Similarity measures", "Index numbers", "Numbers" ]
68,560,555
https://en.wikipedia.org/wiki/Anila%20Paparisto
Anila Paparisto is an entomologist and taxonomist from Albania, who was appointed in 2021 as Vice Rector for Teaching at the University of Tirana. She is also Professor in Invertebrate Zoology and Teaching Didactics there. Her career at the university began in 1994, and in 2011 she was promoted to professor. Her research has focussed on invasive species in Albania, in particular in riverine environments. She is a member of the Academy of Sciences of Albania. She is a board member of the Quality Assurance Agency in Higher Education in Albania. Awards In 2002 Paparisto was awarded a fellowship from the L'Oréal-UNESCO For Women in Science Awards for her work in molecular biology. References Year of birth missing (living people) Living people Albanian scientists Women entomologists Women biologists Molecular biologists Academic staff of the University of Tirana L'Oréal-UNESCO Awards for Women in Science fellows
Anila Paparisto
[ "Chemistry" ]
187
[ "Biochemists", "Molecular biology", "Molecular biologists" ]
68,562,033
https://en.wikipedia.org/wiki/Y-cruncher
y-cruncher is a computer program for calculating mathematical constants to arbitrarily high precision (limited only by computing time and available storage space). It was originally developed to calculate the Euler–Mascheroni constant γ; the y in the name is derived from the Greek letter gamma. Since 2010, y-cruncher has been used for all record calculations of the number π and other constants. The software is downloadable from the developers' website for Microsoft Windows and Linux. It does not have a graphical interface, but works on the command line. Calculation options are selected or entered via a text menu, and the results are saved as a file. One popular use of y-cruncher is running hardware benchmarks to measure the performance of computer systems; an example of such a benchmark platform is HWBOT. y-cruncher can also be used for stress tests, as the computations it performs are sensitive to RAM errors, and the program can automatically detect such errors. Development Alexander J. Yee started developing a Java library for arbitrary-precision arithmetic called "BigNumber" while in high school. With this, he and his roommate Raymond Chan set the world record on 8 December 2006 for the most calculated decimal places of the Euler–Mascheroni constant, with 116,580,041 decimal places. In January 2009, they broke their own record and calculated 14,922,244,782 decimal places. At this point, the program was renamed "y-cruncher" and ported to C and C++. In the aftermath, Shigeru Kondo used y-cruncher to calculate π to 5 trillion digits on 2 August 2010. The next year, Yee and Kondo calculated 10 trillion decimal places, breaking the then-standing world record for decimal places of π. After that, Yee decided to completely overhaul the program and rewrite it from scratch in version v0.6.1. This enabled determining π to 12.1 trillion digits in just 94 days, compared to the 371 days spent on the previous record. Properties y-cruncher has the following characteristic properties: multithreading; vector instruction sets (see SIMD); swapping; use of multiple hard drives (in RAID); automatic detection and correction of minor arithmetic errors; processor-specific optimization. Calculations Since 2009, most of the world record-level calculations of mathematical constants have been performed with y-cruncher. The technical challenge no longer lies in the calculation itself, but in providing an environment that enables a comparatively efficient execution. Purpose The tool can serve several purposes. On the one hand, it allows the capabilities of CPUs and RAM to be determined and compared with other models. On the other hand, these hardware components can also be tested for stability and error susceptibility through stress testing. An alternative program for this is Prime95. The advantage of the program lies in the fact that (partial) calculations can be carried out on an old Pentium PC, an up-to-date workstation, and theoretically even on supercomputers, without measured performance falling off the measurement scale (or complex benchmarks becoming incompatible due to new hardware and interfaces). Setting new computing records also represents a contemporary feasibility study and can serve as an indicator of improvement in computer performance over time when performed regularly and with similar parameters.
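y-cruncher itself is closed-source and its internals are far more sophisticated, but a minimal sketch can illustrate the kind of arbitrary-precision series computation involved in π records. The following is a straightforward term-by-term Chudnovsky-series implementation in Python, given for illustration only; y-cruncher's actual algorithms (binary splitting, heavily optimized big-integer arithmetic) are not shown here:

```python
from decimal import Decimal, getcontext

def chudnovsky_pi(digits: int) -> Decimal:
    """Compute pi via the Chudnovsky series; each term adds ~14.18 digits."""
    getcontext().prec = digits + 10  # guard digits
    C = 426880 * Decimal(10005).sqrt()
    M, L, X, K = 1, 13591409, 1, 6
    S = Decimal(L)
    for i in range(1, digits // 14 + 2):
        M = M * (K**3 - 16 * K) // i**3   # exact integer recurrence
        L += 545140134
        X *= -262537412640768000
        S += Decimal(M * L) / X
        K += 12
    return +(C / S)  # unary plus rounds to the working precision

print(chudnovsky_pi(50))
# 3.141592653589793238462643383279502884197169399375...
```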
See also Super PI – a program designed solely for computing digits of π. Prime95 – a program for searching for prime numbers. References Mathematical software Benchmarks (computing) Pi-related software
Y-cruncher
[ "Mathematics", "Technology" ]
708
[ "Pi-related software", "Computing comparisons", "Computer performance", "Benchmarks (computing)", "Pi", "Mathematical software" ]
52,991,019
https://en.wikipedia.org/wiki/SPT-100
SPT-100 is a Hall-effect ion thruster, part of the SPT family of thrusters. SPT stands for Stationary Plasma Thruster. It creates a stream of electrically charged xenon ions accelerated by an electric field and confined by a magnetic field. The thruster is manufactured by the Russian company OKB Fakel, and was first launched on board the Gals-1 satellite in 1994. In 2003, Fakel debuted a second generation of the thruster, called SPT-100B, and in 2011 it presented further upgrades in SPT-100M prototypes. As of 2011, SPT-100 thrusters were used in 18 Russian and 14 foreign spacecraft, including IPSTAR-II, Telstar-8, and the Ekspress A and AM constellations. Specifications See also PPS-1350 SPT-140 References External links Stationary plasma thrusters (PDF) Ion engines Spacecraft propulsion Spacecraft components
SPT-100
[ "Physics", "Chemistry" ]
190
[ "Ions", "Ion engines", "Matter" ]
52,994,301
https://en.wikipedia.org/wiki/Solvophoresis
Solvophoresis is the spontaneous motion of dispersed particles in a mixed solvent, induced by a gradient of solvent concentration. Solvophoresis was experimentally established by Marek Kosmulski and Egon Matijevic. It is similar to diffusiophoresis. References Colloidal chemistry
Solvophoresis
[ "Chemistry" ]
69
[ "Colloidal chemistry", "Surface science", "Physical chemistry stubs", "Colloids" ]
51,447,977
https://en.wikipedia.org/wiki/%CE%92-Hydroxy%20%CE%B2-methylbutyryl-CoA
β-Hydroxy β-methylbutyryl-coenzyme A (HMB-CoA), also known as 3-hydroxyisovaleryl-CoA, is a metabolite of L-leucine that is produced in the human body. Its immediate precursors are β-hydroxy β-methylbutyric acid (HMB) and β-methylcrotonoyl-CoA (MC-CoA). It can be metabolized into HMB, MC-CoA, and HMG-CoA in humans. Metabolic pathway Notes References Biomolecules Metabolism Thioesters of coenzyme A
Β-Hydroxy β-methylbutyryl-CoA
[ "Chemistry", "Biology" ]
132
[ "Natural products", "Biotechnology stubs", "Organic compounds", "Biochemistry stubs", "Cellular processes", "Structural biology", "Biomolecules", "Biochemistry", "Metabolism", "Molecular biology" ]
51,454,694
https://en.wikipedia.org/wiki/Germanium-tin
Germanium-tin is an alloy of the elements germanium and tin, both located in group 14 of the periodic table. It is thermodynamically stable over only a small composition range. Despite this limitation, it has useful properties for band gap and strain engineering of silicon-integrated optoelectronic and microelectronic semiconductor devices. Synthesis Germanium-tin alloys must be kinetically stabilized in order to prevent decomposition. Therefore, low-temperature molecular beam epitaxy or chemical vapor deposition techniques are typically used for their synthesis. Microelectronic applications Germanium-tin alloys have higher carrier mobilities than either silicon or germanium. Therefore, it has been proposed that they can be used as a channel material in high-speed metal-oxide-semiconductor field-effect transistors. In addition, the alloys' larger lattice constant relative to germanium makes it possible to use them as stressors to enhance the carrier mobility of germanium-channel transistors. Optoelectronic applications At a Sn content beyond approximately 9%, germanium-tin alloys become direct-gap semiconductors with efficient light emission suitable for the fabrication of lasers. Since the constituent elements are chemically compatible with silicon, it is possible to integrate such lasers directly onto silicon microelectronic devices, enabling on-chip optical communication. This is still an active research area, but germanium-tin lasers operating at low temperatures have already been demonstrated. In addition, germanium-tin light-emitting diodes operating at room temperature have also been reported. References Germanium Tin alloys
Germanium-tin
[ "Chemistry" ]
324
[ "Tin alloys", "Alloys" ]
51,454,896
https://en.wikipedia.org/wiki/Quantitative%20systems%20pharmacology
Quantitative systems pharmacology (QSP) is a discipline within biomedical research that uses mathematical computer models to characterize biological systems, disease processes and drug pharmacology. QSP can be viewed as a sub-discipline of pharmacometrics that focuses on modeling the mechanisms of drug pharmacokinetics (PK), pharmacodynamics (PD), and disease processes from a systems pharmacology point of view. QSP models are typically defined by systems of ordinary differential equations (ODEs) that depict the dynamical properties of the interaction between the drug and the biological system. QSP can be used to generate biological/pharmacological hypotheses in silico to aid in the design of in vitro or in vivo non-clinical and clinical experiments. This can help to guide biomedical experiments so that they yield more meaningful data. QSP is increasingly being used for this purpose in pharmaceutical research and development to help guide the discovery and development of new therapies. QSP has been used by the FDA in a clinical pharmacology review. Origin QSP emerged as a discipline through two workshops held at the National Institutes of Health (NIH) in 2008 and 2010, with the goal of merging systems biology and pharmacology. The workshops outlined a need for a mathematical discipline to aid in translational medicine. QSP proposed integrating concepts, methods, and investigators from computational biology, systems biology, and biological engineering into pharmacology. A review of the history and future of QSP identified areas where it has advanced the understanding of drug mechanisms, supported preclinical-to-clinical translation, and in general aided drug development. The FDA has included QSP as a component of the Model-Informed Drug Development Program. References External links QSP Special Interest Group at ISoP QSP at Simulations Plus QSP at Certara eBook: The Emerging Discipline of Quantitative Systems Pharmacology The UK QSP Network Pharmacology
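As a toy illustration of the ODE-based modeling described above, the sketch below integrates a one-compartment pharmacokinetic model with first-order absorption and elimination (plain Python with an explicit Euler step; all parameter values are illustrative assumptions, and a real QSP model is far richer than this):

```python
import numpy as np

def simulate_pk(dose_mg=100.0, ka=1.0, ke=0.2, vd_l=50.0,
                t_end_h=24.0, dt_h=0.01):
    """One-compartment PK model: dGut/dt = -ka*Gut,
    dCentral/dt = ka*Gut - ke*Central; concentration = Central / Vd."""
    n = int(t_end_h / dt_h)
    gut, central = dose_mg, 0.0
    times = np.arange(1, n + 1) * dt_h
    conc = np.empty(n)
    for i in range(n):
        dgut = -ka * gut
        dcentral = ka * gut - ke * central
        gut += dgut * dt_h          # explicit Euler step
        central += dcentral * dt_h
        conc[i] = central / vd_l    # plasma concentration, mg/L
    return times, conc

t, c = simulate_pk()
print(f"peak ~{c.max():.2f} mg/L at t ~{t[c.argmax()]:.1f} h")
```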
Quantitative systems pharmacology
[ "Chemistry" ]
408
[ "Pharmacology", "Medicinal chemistry" ]
41,398,498
https://en.wikipedia.org/wiki/Filtered%20Rayleigh%20scattering
Filtered Rayleigh scattering (FRS) is a diagnostic technique which measures velocity, temperature, and pressure by determining the Doppler shift, total intensity, and spectral line shape of laser-induced Rayleigh-Brillouin scattering. References Scattering, absorption and radiative transfer (optics) Visibility Light
Filtered Rayleigh scattering
[ "Physics", "Chemistry", "Mathematics" ]
60
[ "Visibility", "Physical phenomena", " absorption and radiative transfer (optics)", "Physical quantities", "Spectrum (physical sciences)", "Quantity", "Scattering stubs", "Electromagnetic spectrum", "Waves", "Scattering", "Light", "Wikipedia categories named after physical quantities" ]
41,401,861
https://en.wikipedia.org/wiki/AMSilk
AMSilk is an industrial supplier of synthetic silk biopolymers. The polymers are biocompatible and breathable. The company was founded in 2008 and has its headquarters at Campus Neuried in Munich. AMSilk is an industrial biotechnology company with a proprietary production process for their silk materials. AMSilk produces a lightweight material trademarked as Biosteel, created from recombinant spider silk, which was used by Adidas to create a biodegradable running shoe. Jens Klein, former CEO of AMSilk, said during an interview that the biodegradable material can help reduce the amount of waste that has to be burned or pollutes the environment. AMSilk is also developing breast implants made of biodegradable spider silk in collaboration with the German company Polytech. History AMSilk was founded in 2008 by Lin Römer and Professor Thomas Scheibel in Planegg, Germany, with the aim of becoming the world's first industrial supplier of synthetic silk biopolymers. In 2011, the company partnered with the Fraunhofer Institute for Applied Polymer Research (IAP) to develop a new spin process for the AMSilk spider silk proteins. In 2015, AMSilk began producing Biosteel® Fibre made from 100% silk proteins based on natural spider silk. Then, in November 2016, the company used its Biosteel® Fibre to collaborate with Adidas to create the ‘Futurecraft Biofabric’ shoe prototype. The Biosteel® Yarn fibre-based shoe is 100% biodegradable and is designed to replicate spider silk. In April 2017, AMSilk announced its partnership with Gruschwitz Textilwerke. In 2019, Swiss cosmetics manufacturer Givaudan acquired the cosmetics arm of AMSilk to expand the use of spider silk technology in cosmetic products. In May 2021, the company secured a EUR 29 Million Series C fundraising. In April 2023 AMSilk raised an additional €25 million to accelerate industrial scale-up and expand commercial operations. In February 2023, Evonik Industries signed a contract with AMSilk to supply industrial quantities of protein products made from the fermentation of renewable raw materials. In 2023, AMSilk partnered with Brain Biotech, a company that develops and manufactures bio-based products for industry, to develop bio-based protein fibres for the textile industry. Founding and Development AMSilk has developed a range of vegan silk biopolymers designed for application in various medical devices, focusing on enhancing the bio-compatibility of medical implants. In 2017, AMSilk was named one of the 50 most innovative companies in the world by the German edition of MIT Technology Review. In 2018, AMSilk signed a deal with Airbus to develop a spider silk-based material for lightweight, high-performance planes. The collaboration aimed to launch the first prototype composite material in 2019. Within the same year, the company partnered with Polytech Health & Aesthetics, the leading manufacturer of silicone implants, to begin a clinical trial of silk-coated implants on a handful of patients in Austria. Headquarters AMSilk is currently located at Campus Neuried in Munich, Germany, after relocating in October 2022. Products and services AMSilk partnered with Swiss watchmaker Omega SA in 2019 to make the Nato watch strap, which blends polyamide and Biosteel. In January 2022, Mercedes-Benz partnered with AMSilk to develop sustainable door pulls using Biosteel fibre on its VISION EQXX concept electric car. 
Since announcing its partnership with Airbus in 2018, AMSilk has worked on developing silk-reinforced polymers as a substitute for carbon-fiber-reinforced polymers (CFRPs). Environmental impact AMSilk has worked with fashion brands to create sustainable alternatives using Biosteel® Fiber, a biosynthetic silk made by adding silk genes into bacteria through biofermentation. This material has been used in collaborations with Omega and Adidas for a watch strap and the "Futurecraft" shoe. AMSilk's Biosteel® Fiber, used in these collaborations, is notable for its biodegradability, breaking down in seawater and on land within a few months. Utilizing a bio-fabrication process that reprograms microorganisms based on spider DNA, the company produces this silk-like material at scale using bacteria and natural fermentation. In November 2022, AMSilk participated as one of 100 renowned companies in the VISION 2045 Summit held alongside the United Nations Climate Change Conference (COP27). References External links AMSilk Homepage Materials science organizations Natural materials Polyamides Silk Spider anatomy
AMSilk
[ "Physics", "Materials_science", "Engineering" ]
962
[ "Natural materials", "Materials science", "Materials", "Materials science organizations", "Matter" ]
59,237,500
https://en.wikipedia.org/wiki/Simon%20Devitt
Simon John Devitt (born 17 July 1981) is an Australian theoretical quantum physicist who has worked on large-scale quantum computing architectures, quantum network systems design, quantum programming development and quantum error correction. In 2022 he was appointed as a member of Australia's National Quantum Advisory Committee. Education Devitt received his BSc (Hons) in Physics from the University of Melbourne in 2004. He completed his PhD in physics under Lloyd Hollenberg at the Center for Quantum Computation (CQCT) at the University of Melbourne in 2008, with a thesis entitled Quantum information engineering: concepts to quantum technologies. During his PhD, Devitt was awarded the Rae and Edith Bennett Travelling Scholarship at the Faculty of Mathematics, University of Cambridge, where he worked within the Centre for Quantum Computation, headed by Artur Ekert. Career and research Following his PhD, Devitt did postdoctoral research at the Japanese National Institute of Informatics in the group of Kae Nemoto, where he was promoted to assistant professor in 2011. Later, in 2014, he took a position as associate professor in physics at Ochanomizu University at the Leading Graduate School Promotion Center. In 2015 he took up a position as senior research scientist at the Japanese national laboratory Riken, in the Superconducting Quantum Simulation Research Team, headed by Jaw-Shen Tsai. In 2017, he returned to Australia, where he was appointed research fellow for the Australian Research Council Centre of Excellence for Engineered Quantum Systems (EQUS) at Macquarie University, and in 2018 he was appointed as lecturer in quantum architectures at the Centre for Quantum Software and Information (QSI) at the University of Technology Sydney. In 2020 he was awarded the inaugural Warren prize by the Royal Society of New South Wales for his service to quantum computing development, and in 2021 he was elected fellow of the Royal Society of New South Wales and the Australian Institute of Physics. In 2022 Devitt was appointed associate professor and research director of the Centre for Quantum Software and Information at UTS. Devitt's research has focused on the design of practical large-scale systems architectures for quantum computing and communications systems. He published the first architecture, in an atom-optical system, that utilised techniques in topological quantum error correction that could be conceptually scaled to an arbitrary number of encoded qubits. In 2014, in collaboration with NTT Communications and TU Wien, he developed a design for a scalable system using the nitrogen-vacancy center, and in 2017 he developed a large-scale system design for ion trap quantum computing in collaboration with the University of Sussex. Devitt has also worked on the development of scalable quantum networks, developing designs for what are now known as 2nd and 3rd generation quantum repeaters and inventing, with scientists in Japan and Australia, a quantum version of sneakernets. Devitt's recent work has focused largely on developing a software framework for large-scale, error-corrected machines, including methods to map high-level quantum circuits to machine-level instructions and ways to optimise these error-corrected circuits to reduce the resource load on quantum computing hardware. In 2016, he established, with Jared Cole of RMIT University, the first consultancy specialising in quantum technology, which became a founding member of the Spanish-based industry group, the Quantum World Association (QWA).
He has worked with and advised several companies and government agencies worldwide on quantum technology development, is regularly featured in the popular press, and comments for outlets such as New Scientist and MIT Technology Review on developments in quantum technology research. In 2016, Devitt created the Meet the meQuanics podcast, which he hosts, where scientists, industry leaders and students discuss issues related to the new quantum technology sector. References External links Simon Devitt: Home page. Faculty of Engineering and IT, University of Technology, Sydney. h-bar: Quantum Consultants. 1981 births Living people Scientists from Adelaide Academic staff of the University of Technology Sydney University of Melbourne alumni Academic staff of Ochanomizu University Quantum physicists 21st-century Australian physicists Theoretical physicists
Simon Devitt
[ "Physics" ]
841
[ "Theoretical physics", "Quantum physicists", "Theoretical physicists", "Quantum mechanics" ]
59,248,142
https://en.wikipedia.org/wiki/SimThyr
SimThyr is a free continuous dynamic simulation program for the pituitary-thyroid feedback control system. The open-source program is based on a nonlinear model of thyroid homeostasis. In addition to simulations in the time domain, the software supports various methods of sensitivity analysis. Its simulation engine is multi-threaded and supports multiple processor cores. SimThyr provides a GUI, which allows for visualising time series, modifying constant structure parameters of the feedback loop (e.g. for simulation of certain diseases), storing parameter sets as XML files (referred to as "scenarios" in the software) and exporting results of simulations in various formats that are suitable for statistical software. SimThyr is intended for both educational purposes and in silico research. Mathematical model The underlying model of thyroid homeostasis is based on fundamental biochemical, physiological and pharmacological principles, e.g. Michaelis-Menten kinetics, non-competitive inhibition and empirically justified kinetic parameters. The model has been validated in healthy controls and in cohorts of patients with hypothyroidism and thyrotoxicosis. Scientific uses Multiple studies have employed SimThyr for in silico research on the control of thyroid function. The original version was developed to check hypotheses about the generation of pulsatile TSH release. Later, expanded versions of the software were used to develop the hypothesis of the TSH-T3 shunt in the hypothalamus-pituitary-thyroid axis, to assess the validity of calculated parameters of thyroid homeostasis (including SPINA-GT and SPINA-GD) and to study allostatic mechanisms leading to non-thyroidal illness syndrome. SimThyr was also used to show that the release rate of thyrotropin is controlled by multiple factors other than T4 and that the relation between free T4 and TSH may be different in euthyroidism, hypothyroidism and thyrotoxicosis. Public perception, reception and discussion of the software SimThyr is free and open-source software. This ensures that the source code is available, which facilitates scientific discussion and reviewing of the underlying model. Additionally, the fact that it is freely available may result in economic benefits. The software provides an editor that enables users to modify most structure parameters of the information processing structure. This functionality facilitates the simulation of several functional diseases of the thyroid and the pituitary gland. Parameter sets may be stored as MIRIAM- and MIASE-compliant XML files. On the other hand, the complexity of the user interface and the lack of the ability to model treatment effects have been criticized. See also Hypothalamic–pituitary–thyroid axis Thyroid function tests References External links of the SimThyr project Curated information at Zenodo Curated information at SciCrunch Free science software Free biosimulation software Medical simulation Free software programmed in Pascal Scientific simulation software Science software for macOS Science software for Windows Mathematical and theoretical biology Computational biology Cross-platform software Biomedical cybernetics Simulation software Human homeostasis Thyroid homeostasis
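As a toy illustration of the kinetic building blocks mentioned above (Michaelis-Menten kinetics and non-competitive inhibition), the following sketch is written in Python for illustration only; all parameter values are arbitrary assumptions, and SimThyr's actual model is a far richer nonlinear feedback system:

```python
def mm_rate(s: float, vmax: float, km: float) -> float:
    """Michaelis-Menten kinetics: v = Vmax * S / (Km + S)."""
    return vmax * s / (km + s)

def mm_rate_noncompetitive(s: float, i: float, vmax: float,
                           km: float, ki: float) -> float:
    """Non-competitive inhibition lowers the apparent Vmax by 1/(1 + I/Ki)."""
    return (vmax / (1.0 + i / ki)) * s / (km + s)

# Example: a secretion rate that saturates with stimulus S and is damped by
# an inhibitor I, the qualitative shape behind a negative feedback loop.
for s in (0.5, 1.0, 2.0, 4.0):
    print(f"S={s}: uninhibited={mm_rate(s, 1.0, 1.0):.3f}, "
          f"inhibited={mm_rate_noncompetitive(s, 2.0, 1.0, 1.0, 1.0):.3f}")
```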
SimThyr
[ "Mathematics", "Biology" ]
640
[ "Mathematical and theoretical biology", "Human homeostasis", "Applied mathematics", "Computational biology", "Homeostasis" ]
59,254,904
https://en.wikipedia.org/wiki/G.%20Peter%20Lepage
G. Peter Lepage (born 13 April 1952) is a Canadian American theoretical physicist and an academic administrator. He was the Harold Tanner Dean of the College of Arts and Sciences at Cornell University from 2003 to 2013. Early life and education Gerard Peter Lepage was born in Canada in 1952. Lepage studied at McGill University, graduating with a bachelor's degree in honours physics in 1972, and at the University of Cambridge, earning a master's degree (M.A.St., Part III of the Mathematical Tripos) in 1973. In 1978, he received his PhD in theoretical physics from Stanford University. Academic career Lepage was a research associate at the Stanford Linear Accelerator Center in 1978. He was a postdoctoral research associate at the Laboratory of Nuclear Studies, Cornell University, from 1978 to 1980. In 1980, he joined the physics faculty at Cornell University, where he became a professor. He received academic tenure in 1984, after only four years on the university faculty. From 1999 to 2003, he was the chair of Cornell's physics department. He was appointed the Harold Tanner Dean of the College of Arts and Sciences, serving from 2003 to 2013. He is a Fellow of the American Academy of Arts and Sciences and a Fellow of the American Physical Society. He was previously an Alfred P. Sloan Fellow (1983–85; 1990) and a John Simon Guggenheim Fellow (1996–97). Since 2012 he has been a member of the National Science Board. G. Peter Lepage has been a visiting scholar at a number of institutions: the Institute for Advanced Study, Princeton; the Department of Applied Mathematics and Theoretical Physics, Cambridge; the University of California Institute of Theoretical Physics, Santa Barbara; the Fermi National Accelerator Center near Chicago; and the Institute for Nuclear Theory, Seattle. He was on the editorial boards of Physical Review D and Physical Review Letters and received the Outstanding Referee Award from the APS in 2009. He has served on the scientific program committees for the Stanford Linear Accelerator Center, the DOE-NSF National Computational Infrastructure for Lattice Gauge Theory, the NSF's Institute for Nuclear Theory in Seattle, the International Particle Data Group, and the NSF's Institute for Theoretical Physics in Santa Barbara. He was the co-chair of the working group of President Obama's Council of Advisors on Science and Technology (PCAST) on STEM teaching at colleges and universities, which in 2012 produced the acclaimed report, “Engage to Excel: Producing One Million Additional College Graduates with Degrees in Science, Technology, Engineering, and Mathematics.” He has served on the technical advisory committee for the Association of American Universities' Undergraduate STEM Education Initiative, and is vice chair of the National Science Board's Committee on Education and Human Resources. He is also involved in innovations in pedagogy, especially physics education at all levels. He spearheads the Active Learning Initiative (ALI) in Cornell's College of Arts and Sciences, a five-year pilot project, funded by 1987 alumni Alex and Laura Hanson, used to enhance strategies for interactive classroom learning using emerging technologies. Research In the late 1970s and early 1980s, he was known for his research with Stanley Brodsky on quantum chromodynamics (QCD) and the perturbation theory of scattering processes. His research focuses on high-precision calculations, using renormalization techniques and effective field theory.
These methods are applied to QCD, atomic physics, computational quantum field theory, condensed matter physics, nuclear physics (the few-body problem), systems of heavy quarks, and exclusive scattering processes with high momentum transfer. His research also covers high-performance computing (HPC), i.e. large-scale numerical simulations of non-perturbative lattice QCD. These have led to calculations of quantities such as quark, gluon and hadron masses, coupling constants and mixing angles in the Standard Model, and the magnetic moment of the muon, and have made it possible to determine the QCD contributions needed for precision tests of the Standard Model (so that they can be distinguished from possible contributions of new physics beyond the Standard Model). Such quantities describe the inner structure of protons, neutrons and other sub-nuclear particles. His research resulted in the VEGAS algorithm, an adaptive method for reducing error in Monte Carlo simulations in interaction physics by using a known or approximate probability distribution function. In 2016, Lepage received the J. J. Sakurai Prize from the American Physical Society for “innovative applications of quantum field theory in elementary particle physics, in particular for the justification of the theory of exclusive processes, the development of nonrelativistic effective field theories and the determination of parameters of the standard model with lattice theories.” He has authored more than 250 scientific publications. In 2002, together with fellow academics Carolyn (Biddy) Martin and Mohsen Mostafavi, he co-edited a book on the future and relevance of the humanities, “Do the Humanities Have to Be Useful?” Personal life G. Peter Lepage is married to Deborah O'Connor and they have three sons: Michael, Daniel and Matthew. O'Connor studied pharmacology at Stanford, worked in biochemistry at Cornell and served on the Ithaca City School District Board of Education. References 1952 births Living people 20th-century American physicists Alumni of the University of Cambridge American academic administrators Canadian particle physicists 20th-century Canadian physicists 21st-century Canadian physicists Cornell University faculty Fellows of the American Academy of Arts and Sciences Fellows of the American Physical Society McGill University Faculty of Science alumni Particle physicists Quantum computing Quantum physicists Sloan Research Fellows Stanford University alumni Theoretical physicists J. J. Sakurai Prize for Theoretical Particle Physics recipients
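To illustrate the principle behind VEGAS, the sketch below compares plain Monte Carlo integration with importance sampling on a sharply peaked one-dimensional integrand. This is only a hand-rolled demonstration of the variance-reduction idea, with an arbitrary test integrand and sampling density; Lepage's actual algorithm adaptively builds its sampling distribution from the integrand itself over many iterations:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Sharply peaked integrand on [0, 1]; exact integral ~ sqrt(pi/50) ~ 0.2507.
f = lambda x: np.exp(-50.0 * (x - 0.5) ** 2)

# Plain Monte Carlo: uniform samples.
plain = f(rng.uniform(0.0, 1.0, N))

# Importance sampling: draw from a Gaussian peaked where f is large and
# weight each sample by f(x) / p(x). Samples outside [0, 1] contribute 0.
sigma = 0.12
x = rng.normal(0.5, sigma, N)
p = np.exp(-((x - 0.5) ** 2) / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
weighted = np.where((x >= 0.0) & (x <= 1.0), f(x) / p, 0.0)

for name, est in (("plain MC", plain), ("importance sampling", weighted)):
    err = est.std(ddof=1) / np.sqrt(N)
    print(f"{name:20s}: {est.mean():.5f} +/- {err:.5f}")
# The importance-sampled estimate shows a much smaller statistical error for
# the same number of samples, which is the effect VEGAS automates.
```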
G. Peter Lepage
[ "Physics" ]
1,134
[ "Theoretical physicists", "Theoretical physics", "Quantum physicists", "Quantum mechanics", "Particle physics", "Particle physicists" ]
55,890,020
https://en.wikipedia.org/wiki/Niles%20Firebrick
Niles Firebrick was manufactured by the Niles Fire Brick Company from its creation in 1872 by John Rhys Thomas until the company was sold in 1953; it was completely shut down in 1960. Capital to establish the company was provided by Lizzie B. Ward to construct a small plant across from the Old Ward Mill, which was run by her husband James Ward. Thomas immigrated in 1868 from Carmarthenshire in Wales with his wife and son W. Aubrey Thomas, who served as secretary of the company until he was appointed as a representative to the U.S. Congress in 1904. The company was managed by another son, Thomas E. Thomas, after J.R. Thomas died unexpectedly in 1898. The Thomases returned the favor of their original capitalization by purchasing an iron blast furnace from James Ward when he went bankrupt in 1879. Using their knowledge of firebrick they were able to make this small furnace profitable. Later they used it to showcase the value of adding hot blast to a furnace using 3 ovens packed full of firebrick. The furnace was managed by another son, John Morgan Thomas. Firebrick was invented in 1822 by William Weston Young in the Vale of Neath in Wales, the county just east of Llanelli, where the Thomas family lived before emigrating to Niles. It is recorded that firebrick was made in the Llanelli area in 1870, but the market was highly cyclical and it was difficult to make a living at it. From 1937 to 1941 the company worked to prevent the United Brick Workers Union (CIO) from organizing the workers, in preference for an independent union favored by management. The CIO union prevailed. In spite of this episode the company had good relations with its employees and tried to keep them employed during economic downturns. The "Clingans" mentioned in that referenced interview were Margaret Thomas Clingan, a daughter, and John Rhys Thomas Clingan, a grandson, who took over management of the company when T.E. Thomas died in 1920. Patrick J. Sheehan worked various jobs at the Niles Fire Brick Company from age 13 up until 1897, when he was appointed superintendent of the plant. When Sheehan started with the company it occupied a plant covering a floor space of 3,600 square feet, with two kilns, and the output was 640,000 bricks per year. The plant was moved to Langley Street eighteen months afterward, and the output increased to 1,200,000. The Langley Street works was constantly added to each year, until the output was 6 million, and in 1905 they built the "Falcon" plant on the site formerly occupied by the Langley Street plant, which doubled production to 12 million per year. By 1955 the output was 25 million. The work of molding and firing brick was highly labor-intensive. Immigrants from Southern states and European countries, especially Italy, were sought to perform labor under difficult working conditions. An article in the March–April issue of "The Niles Register" of the Niles Historical Society discusses the history of the headquarters of the company at 216 Langley Street, with a pattern shop in the back where skilled workers created the molds for custom bricks ordered by the mills in the 1902–1912 period. After that the pattern shop was used by the Sons of Italy and later by the Bagnoli-Irpino Club. This was a result of the large percentage of immigrants from the Bagnoli-Irpino area in Italy. One of the founders of the club was Lawrence Pallante, an early immigrant from that area and presumably an ancestor of the author of the referenced articles. Immigration from that area began in 1880 and extended to about 1960.
References Refractory materials Silicates Bricks Niles, Ohio
Niles Firebrick
[ "Physics" ]
746
[ "Refractory materials", "Materials", "Matter" ]
44,267,928
https://en.wikipedia.org/wiki/Turbine%E2%80%93electric%20powertrain
A turbine–electric transmission system includes a turboshaft gas turbine connected to an electrical generator, creating electricity that powers electric traction motors. No clutch is required. Turbine–electric transmissions are used to drive both gas turbine locomotives (rarely) and warships.

Locomotive applications

A handful of experimental locomotives from the 1930s and 1940s used gas turbines as prime movers. These turbines were based on stationary practice, with single large reverse-flow combustors and heat exchangers, and used low-cost heavy oil bunker fuel. In the 1960s the idea re-emerged, drawing on lightweight engines developed for helicopters and using lighter kerosene fuels. As these turbines were compact and lightweight, the vehicles were produced as railcars rather than separate locomotives.

Naval applications

Turboelectric powertrains are a subset of what is referred to in marine nomenclature as integrated electric propulsion (IEP), where generated power is converted into electricity before being used to power propellers or pump-jets. Power can be provided by diesel engines, nuclear reactors, or gas turbines, in which case it is called turboelectric propulsion. As gas and steam turbines are most efficient at thousands of revolutions per minute, purely mechanical systems require extensive, and often heavy, reduction gearing when lower shaft speeds are needed. This is especially important on warships, as they often require high electrical power independent of travel speed, as well as the ability to cruise efficiently at low speed while retaining the ability to perform less efficient sprints. For that reason warships often use combined power systems in which an efficient prime mover, such as a diesel engine or a small gas turbine, is used for cruising, while large gas turbines can be activated for high speed. When such a system uses gearboxes and clutches to accomplish a mechanical combination of power, it is referred to as CODOG (combined diesel or gas) or COGAG (combined gas and gas) respectively. This further increases the complexity and size of the mechanical power transmission.

Integrated electric propulsion systems offer the ability to simplify such arrangements by combining power electrically instead of mechanically. By discarding mechanical power transmission these systems can improve efficiency by allowing each system to operate at its most efficient speed, improve reliability by cutting down on the number of components, and simplify ship layout, since without the need for direct mechanical linkages to the propellers the engines can be placed optimally. And while turboelectric systems are often heavy compared to simple mechanical systems, they are similar in weight to the complex mechanical systems used to link different engines while also generating electrical power.

An extension of the standard turboelectric propulsion scheme is COGES, or combined gas–electric and steam. In COGES a gas-turbine–electric primary transmission is used with a heat-recovery boiler in the exhaust flow to generate steam that drives a steam turbine, which also generates electricity. The system is thus even more efficient, as it converts what would normally be rejected as waste heat into useful power. 
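The efficiency argument for COGES can be illustrated with a back-of-envelope sketch; all component efficiencies below are assumed round numbers for demonstration, not figures from this article:

```python
# Illustrative COGES efficiency estimate. All component efficiencies are
# assumed round numbers for demonstration, not figures from any source.

def coges_efficiency(eta_gt: float, eta_hrsg: float, eta_st: float) -> float:
    """Combined efficiency of a gas turbine whose exhaust heat feeds a
    heat-recovery boiler driving a steam turbine (COGES).

    eta_gt   -- gas turbine electrical efficiency
    eta_hrsg -- fraction of the turbine's rejected heat recovered as steam
    eta_st   -- steam turbine electrical efficiency
    """
    waste_heat = 1.0 - eta_gt                 # fraction of fuel energy rejected
    steam_power = waste_heat * eta_hrsg * eta_st
    return eta_gt + steam_power

# Example: a 38%-efficient gas turbine with 60% heat recovery and a
# 30%-efficient steam cycle yields roughly 49% combined efficiency.
print(f"{coges_efficiency(0.38, 0.60, 0.30):.1%}")
```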
See also Diesel–electric powertrain Gas turbine Gas turbine–electric locomotive Integrated electric propulsion Turbine Turboshaft References Further reading – an article on the many uses of gas turbines Electric power Engine technology Gas turbine locomotives Gas turbine technology Marine propulsion Turbo generators
Turbine–electric powertrain
[ "Physics", "Technology", "Engineering" ]
635
[ "Physical quantities", "Engines", "Engine technology", "Power (physics)", "Electric power", "Marine engineering", "Electrical engineering", "Marine propulsion" ]
44,270,518
https://en.wikipedia.org/wiki/RB-64
RB-64 is a semi-synthetic derivative of salvinorin A. It is an irreversible agonist, with a reactive thiocyanate group that forms a covalent bond to the κ-opioid receptor (KOR), resulting in very high potency. It is functionally selective, activating G proteins more potently than β-arrestin-2. RB-64 has a bias factor of up to 96 and is analgesic, with fewer of the side effects associated with unbiased KOR agonists. The analgesia is long-lasting. Compared with unbiased agonists, RB-64 evokes considerably less receptor internalization. See also Herkinorin Salvinorin B methoxymethyl ether Salvinorin A Nalfurafine References Synthetic opioids Kappa-opioid receptor agonists Kappa-opioid receptor antagonists Thiocyanates Biased ligands Dissociative drugs Isochromenes Esters Methyl esters 3-Furyl compounds
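A bias factor such as the one quoted above is commonly derived from operational-model transduction coefficients, log(τ/KA), compared between signaling pathways. The sketch below illustrates only the arithmetic, with hypothetical numbers rather than measured values for RB-64:

```python
# How a ligand "bias factor" is commonly computed from transduction
# coefficients log(tau/KA) of the operational model. The numbers below are
# hypothetical placeholders, not measured values for RB-64.

def bias_factor(log_tc_test_g, log_tc_ref_g, log_tc_test_arr, log_tc_ref_arr):
    # Normalize each pathway to a reference (unbiased) agonist ...
    d_log_g = log_tc_test_g - log_tc_ref_g
    d_log_arr = log_tc_test_arr - log_tc_ref_arr
    # ... then compare pathways; 10**(delta-delta-log) is the bias factor.
    return 10 ** (d_log_g - d_log_arr)

# Hypothetical example: the test ligand gains 1.5 log units on the G protein
# pathway and loses 0.5 on the arrestin pathway relative to the reference,
# giving a bias factor of 100.
print(bias_factor(7.5, 6.0, 5.5, 6.0))
```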
RB-64
[ "Chemistry" ]
216
[ "Pharmacology", "Esters", "Functional groups", "Medicinal chemistry stubs", "Signal transduction", "Biased ligands", "Organic compounds", "Thiocyanates", "Pharmacology stubs" ]
44,271,020
https://en.wikipedia.org/wiki/Togo%20Triangle
The Togo Triangle is an offshore market for stolen oil off the coast of Nigeria and Togo near the Niger Delta. The Triangle has been compared to an "open-air drug market" for trade in illegal crude oil, noted for the presence of pirates. References Petroleum industry Economy of Nigeria Economy of Togo Black markets Conflicts in Nigeria
Togo Triangle
[ "Chemistry" ]
66
[ "Chemical process engineering", "Petroleum", "Petroleum industry" ]
44,272,100
https://en.wikipedia.org/wiki/Discovery%20and%20development%20of%20gliflozins
Gliflozins are a class of drugs used in the treatment of type 2 diabetes (T2D). They act by inhibiting sodium/glucose cotransporter 2 (SGLT-2), and are therefore also called SGLT-2 inhibitors. Their efficacy depends on renal excretion: they prevent glucose from returning to the blood circulation by promoting glucosuria. The mechanism of action is insulin independent. Three drugs have been approved by the Food and Drug Administration (FDA) in the United States: dapagliflozin, canagliflozin and empagliflozin. Canagliflozin was the first SGLT-2 inhibitor approved by the FDA, in March 2013. Dapagliflozin and empagliflozin were approved in 2014.

Introduction

Role of kidneys in glucose homeostasis

There are at least four members of the SLC-5 gene family, which are secondary active glucose transporters. The sodium/glucose transporter proteins SGLT-1 and SGLT-2 are the two principal members of the family. These two members are found in the kidneys, among other transporters, and are the main co-transporters there related to blood sugar. They play a role in renal glucose reabsorption and in intestinal glucose absorption. Blood glucose is freely filtered by the glomeruli, and SGLT-1 and SGLT-2 reabsorb glucose in the kidneys and return it to the circulation. SGLT-2 is responsible for 90% of the reabsorption and SGLT-1 for the other 10%.

SGLT-2 protein

Sodium/glucose co-transporter (SGLT) proteins are bound to the cell membrane and transport glucose through the membrane into the cells, against the concentration gradient of glucose. This is done by using the sodium gradient produced by sodium/potassium ATPase pumps, so as glucose is transported into the cells, sodium is too. Since transport is against the gradient, it requires energy. SGLT proteins carry out glucose reabsorption from the glomerular filtrate, independent of insulin. SGLT-2 is a member of the glucose transporter family and is a low-affinity, high-capacity glucose transporter. SGLT-2 is mainly expressed in the S-1 and S-2 segments of the proximal renal tubules, where the majority of filtered glucose is absorbed. SGLT-2 has a role in the regulation of glucose and is responsible for most glucose reabsorption in the kidneys. In diabetes, extracellular glucose concentration increases, and this high glucose level leads to upregulation of SGLT-2, which in turn increases absorption of glucose in the kidneys. These effects help maintain hyperglycemia. Because sodium is absorbed at the same time as glucose via SGLT-2, the upregulation of SGLT-2 probably contributes to the development or maintenance of hypertension. In a study where rats were given either ramipril or losartan, levels of SGLT-2 protein and mRNA were significantly reduced. Since hypertension is a common problem in patients with diabetes, this may be relevant to the disease. Drugs that inhibit sodium/glucose cotransporter 2 block renal glucose reabsorption, which leads to enhanced urinary glucose excretion and lower blood glucose. They work independently of insulin and can reduce glucose levels without causing hypoglycemia or weight gain.

Discovery

Medieval physicians routinely tasted urine and wrote discourses on their observations. Which physician originally thought that diabetes mellitus was a renal disorder, because of the glucose discharged in urine, is apparently now lost to history. The discovery of insulin eventually led to a diabetes management focus on the pancreas. 
Therapeutic strategies for diabetes have traditionally focused on enhancing endogenous insulin secretion and on improving insulin sensitivity. In the past decade the role of the kidney in the development and maintenance of high glucose levels has been examined, and this led to the development of drugs that inhibit the sodium/glucose transporter 2 protein. Every day approximately 180 grams of glucose are filtered through the glomeruli into the primary urine in healthy adults, but more than 90% of the glucose that is initially filtered is reabsorbed by a high-capacity system controlled by SGLT-2 in the early convoluted segment of the proximal tubules. Almost all remaining filtered glucose is reabsorbed by sodium/glucose transporter 1, so under normal circumstances almost all filtered glucose is reabsorbed and less than 100 mg of glucose finds its way into the urine of non-diabetic individuals.

Phlorizin

Phlorizin is a compound that has been known for over a century. It is a naturally occurring botanical glucoside that produces renal glucosuria and blocks intestinal glucose absorption through inhibition of the sodium/glucose symporters located in the proximal renal tubule and the mucosa of the small intestine. Phlorizin was first isolated in 1835 and was subsequently found to be a potent but rather non-selective inhibitor of both the SGLT-1 and SGLT-2 proteins. Phlorizin had very interesting properties and the results of animal studies were encouraging: it improved insulin sensitivity, and in diabetic rat models it increased urinary glucose excretion and normalized plasma glucose concentrations without causing hypoglycemia. Unfortunately, in spite of these properties, phlorizin was not suitable for clinical development for several reasons. Phlorizin has very poor oral bioavailability, as it is broken down in the gastrointestinal tract, so it has to be given parenterally. Phloretin, the active metabolite of phlorizin, is a potent inhibitor of facilitative glucose transporters, and phlorizin seems to cause serious adverse events in the gastrointestinal tract such as diarrhea and dehydration. For these reasons, phlorizin was never pursued in humans. Although phlorizin was not suitable for further clinical trials, it played an important role in the development of SGLT-2 inhibitors, serving as a basis for the recognition of SGLT inhibitors with improved safety and tolerability profiles. For example, the later SGLT inhibitors are not associated with gastrointestinal adverse events and their bioavailability is much greater. Inhibition of SGLT-2 results in better control of glucose levels, lower insulin, blood pressure and uric acid levels, and increased calorie wasting. Some data support the hypothesis that SGLT-2 inhibition may have direct renoprotective effects, including actions to attenuate the tubular hypertrophy and hyperfiltration associated with diabetes and to reduce the tubular toxicity of glucose. Inhibition of SGLT-2 following treatment with dapagliflozin reduces the capacity for tubular glucose reabsorption by approximately 30–50%.

Drug development

Phlorizin consists of a glucose moiety and two aromatic rings (the aglycone moiety) joined by an alkyl spacer. Initially, phlorizin was isolated for the treatment of fever and infectious diseases, particularly malaria. 
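The renal glucose arithmetic above can be made concrete with a short sketch. The GFR and plasma glucose figures below are assumed typical values for illustration, not numbers taken from the cited studies:

```python
# Back-of-envelope renal glucose arithmetic. GFR and plasma glucose are
# assumed typical figures, not values from the article.

gfr_l_per_day = 180           # glomerular filtration rate, litres/day
plasma_glucose_g_per_l = 1.0  # ~5.5 mmol/L expressed in g/L

filtered = gfr_l_per_day * plasma_glucose_g_per_l  # ~180 g/day, as in the text

sglt2_share, sglt1_share = 0.90, 0.10  # reabsorption split cited in the text
print(f"Reabsorbed via SGLT-2: {filtered * sglt2_share:.0f} g/day")
print(f"Reabsorbed via SGLT-1: {filtered * sglt1_share:.0f} g/day")

# SGLT-2 inhibition reduces tubular reabsorption capacity by ~30-50% (text),
# so on the order of 50-90 g of glucose per day is excreted instead.
for reduction in (0.30, 0.50):
    print(f"{reduction:.0%} capacity loss -> ~{filtered * reduction:.0f} g/day excreted")
```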
According to Michael Nauck and colleagues, studies in the 1950s showed that phlorizin could block sugar transport in the kidney, small intestine, and a few other tissues. In the early 1990s sodium/glucose cotransporter 2 was fully characterized, so the mechanism of phlorizin became of real interest. Later studies attributed the sugar-blocking effects of phlorizin to inhibition of the sodium/glucose cotransporter proteins. Most of the reported SGLT-2 inhibitors are glucoside analogs that can be traced to the o-aryl glucosides found in nature. The problem with using o-glucosides as SGLT-2 inhibitors is instability, traceable to degradation by β-glucosidase in the small intestine. Because of that, o-glucosides given orally have to be prodrug esters. Replacing the oxygen linkage with a carbon–carbon bond between the glucose and the aglycone moiety gives c-glucosides. C-glucosides have a different pharmacokinetic profile than o-glucosides (e.g. half-life and duration of action) and are not degraded by β-glucosidase. The first c-glucoside discovered was the drug dapagliflozin. Dapagliflozin was the first highly selective SGLT-2 inhibitor approved by the European Medicines Agency. The o-glucoside SGLT-2 inhibitors that entered clinical development are prodrugs that have to be converted to their active "A" form for activity.

T-1095

Because phlorizin is a nonselective inhibitor with poor oral bioavailability, a phlorizin derivative called T-1095 was synthesised. T-1095 is a methyl carbonate prodrug that is absorbed into the circulation when given orally and is rapidly converted in the liver to the active metabolite T-1095A. By inhibiting SGLT-1 and SGLT-2 it increased urinary glucose excretion in diabetic animals. T-1095 did not proceed in clinical development, probably because of its inhibition of SGLT-1; non-selective SGLT inhibitors may also block glucose transporter 1 (GLUT-1). Because 90% of filtered glucose is reabsorbed through SGLT-2, research has focused specifically on SGLT-2. Inhibition of SGLT-1 may also mimic the genetic disease glucose-galactose malabsorption, which is characterized by severe diarrhea.

ISIS 388626

According to preliminary findings on a novel method of SGLT-2 inhibition, the antisense oligonucleotide ISIS 388626 improved plasma glucose in rodents and dogs by reducing SGLT-2 mRNA expression in the proximal renal tubules by up to 80% when given once a week. It did not affect SGLT-1. A study of long-term use of ISIS 388626 in non-human primates observed a more than 1000-fold increase in glucosuria without any associated hypoglycemia. This increase in glucosuria can be attributed to a dose-dependent reduction in the expression of SGLT-2, where the highest dose led to a more than 75% reduction. In 2011, Ionis Pharmaceuticals initiated a clinical phase 1 study with ISIS-SGLT-2RX, a 12-nucleotide antisense oligonucleotide. Results from this study were published in 2017, and the treatment was "associated with unexpected renal effects". The authors concluded that "Before the concept of antisense-mediated blocking of SGLT2 with ISIS 388626 can be explored further, more preclinical data are needed to justify further investigations." 
Activity of SGLT-2 inhibitors in glycemic control

Michael Nauck recounts that meta-analyses of studies of SGLT-2 inhibitor activity in glycemic control in type 2 diabetes mellitus patients show improvement in glucose control compared with placebo, metformin, sulfonylureas, thiazolidinediones, insulin and others. HbA1c was examined after SGLT-2 inhibitors were given alone (as monotherapy) and as add-on therapy to the other diabetes medicines. The SGLT-2 inhibitors examined were dapagliflozin, canagliflozin and others in the same drug class. The meta-analyses drew on studies ranging in duration from a few weeks to more than 100 weeks. In summary, 10 mg of dapagliflozin controlled glucose better than placebo when given for 24 weeks. As add-on therapy to metformin, 10 mg of dapagliflozin was not inferior to glipizide after 52 weeks of use, nor was it inferior to metformin when both medicines were given as monotherapy for 24 weeks. Meta-analysis of canagliflozin showed that, compared with placebo, canagliflozin improved HbA1c. Meta-analysis studies also showed that 10 mg and 25 mg of empagliflozin improved HbA1c compared with placebo.

Structure-activity relationship (SAR)

The aglycones of both phlorizin and dapagliflozin have weak inhibitory effects on SGLT-1 and SGLT-2. Two synergistic forces are involved in the binding of inhibitors to SGLTs. One of the forces is the binding of the sugar to the glucose site, so different sugars will affect and change the orientation of the aglycone in the access vestibule. The other force is the binding of the aglycone itself, which affects the binding affinity of the entire inhibitor. The discovery of T-1095 led to an investigation of how to enhance potency, selectivity and oral bioavailability by adding various substituents to the glycoside core. One example is the change from o-glycosides to c-glycosides by creating a carbon–carbon bond between the glucose and the aglycone moiety. C-glucosides are more stable than o-glucosides, which leads to a modified half-life and duration of action. These modifications have also led to greater specificity for SGLT-2. C-glucosides that carry a heterocyclic ring at the distal or proximal ring position show better anti-diabetic effect and physicochemical features overall. A c-glucoside bearing a thiazole at the distal ring has shown physicochemical properties good enough for clinical development while retaining the same anti-diabetic activity as dapagliflozin, as shown in tables 1 and 2. Song and colleagues prepared the thiazole compound starting from a carboxylic acid; from there, three steps gave a dapagliflozin-like compound with a thiazole ring. Song and colleagues tested the compounds' inhibitory effects on SGLT-2. In tables 1, 2 and 3, the IC50 value changes depending on the substituent at the ring position, the substituent in the C-4 region of the proximal phenyl ring, and the orientation of the thiazole ring. In vitro, the IC50 varied widely with the ring substituent: for example, an n-pentyl group gave IC50 = 13.3 nM, n-butyl 119 nM, phenyl with 2-furyl 0.720 nM and phenyl with 3-thiophenyl 0.772 nM. 
As seen in table 1, the in vitro activity varies depending on which group is bonded to the distal ring (given that the C-4 region of the proximal phenyl ring carries a Cl atom).

Table 1: Differences in in vitro activity depending on which group is bonded to the distal ring. (*Comparator: ethyl group, IC50 = 16.7 nM.)

In table 2, the in vitro activity changes depending on the group in the C-4 region of the proximal phenyl ring (X). Small methyl groups or halogen atoms in the C-4 position gave IC50 values ranging from 0.72 to 36.7 nM (given that phenyl with 2-furyl is in the ring position).

Table 2: Differences in in vitro activity depending on the group in the C-4 region of the proximal phenyl ring.

Table 3: Differences in the IC50 value depending on the orientation of the thiazole ring (nothing else changed in the structure; X = Cl, R = phenyl with 2-furyl).

See also Sodium-glucose transport proteins SLC5A2 SGLT1 SGLT2 Dapagliflozin Empagliflozin Canagliflozin Ipragliflozin References Gliflozins
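The quoted IC50 values can be put side by side to show the fold-differences in potency; a small sketch, assuming all values are in nM:

```python
# Potency comparison for the ring substituents quoted above. IC50 values are
# from the text and assumed to be in nM throughout.

ic50_nm = {
    "n-pentyl": 13.3,
    "n-butyl": 119.0,
    "phenyl + 2-furyl": 0.720,
    "phenyl + 3-thiophenyl": 0.772,
}

best = min(ic50_nm, key=ic50_nm.get)  # lowest IC50 = most potent
for name, ic50 in sorted(ic50_nm.items(), key=lambda kv: kv[1]):
    fold = ic50 / ic50_nm[best]
    print(f"{name:>22}: IC50 = {ic50:7.3f} nM ({fold:6.1f}x vs {best})")
```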
Discovery and development of gliflozins
[ "Chemistry", "Biology" ]
3,527
[ "Life sciences industry", "Medicinal chemistry", "Drug discovery" ]
47,369,663
https://en.wikipedia.org/wiki/Flail%20space%20model
The flail space model (FSM) is a model of how a car passenger moves in a vehicle that collides with a roadside feature such as a guardrail or a crash cushion. Its principal purpose is to assess the potential risk of harm to the hypothetical occupant as he or she impacts the interior of the passenger compartment and, ultimately, the efficacy of an experimental roadside feature undergoing full-scale vehicle crash testing. The FSM eliminates the complexity and expense of using instrumented anthropometric dummies during the crash test experiments. Furthermore, while crash test dummies were developed to model collisions between vehicles, they are not accurate when used for the sorts of collision angles that occur when a vehicle collides with a roadside feature; by contrast, the FSM was designed for such collisions. History The FSM is based on research performed at Southwest Research Institute in 1980 and published in 1981 in the paper entitled "Collision Risk Assessment Based on Occupant Flail-Space Model" by Jarvis D. Michie. The FSM (coined by Michie) was accepted by the highway community and published as a key part of the "Recommended Procedures for the Safety Evaluation of Highway Appurtenances" published in 1981 in National Cooperative Highway Research Program (NCHRP) Report 230. In 1993, the NCHRP Report was updated and presented as NCHRP Report 350; in this research effort performed by the Texas Transportation Research Institute, the FSM was reexamined and was unmodified in the new publication. In 2004, Douglas Gabauer further examined the efficacy of the FSM in his PhD thesis. The American Association of State Highway and Transportation Officials (AASHTO) retained the FSM as the method of assessing the risk of harm to vehicle occupants in the 2009 "Manual for Assessing Safety Hardware" that replaced NCHRP Report 350, stating that the FSM had "served its intended purpose well". Details The FSM hypothesis divides the collision into two stages. In stage one, the unrestrained occupant is propelled forward and sideways in the compartment space due to vehicle collision accelerations and then impacts one or more surfaces (including the steering wheel) with velocity "V". According to the model, the vehicle (instead of the occupant) is the object that is accelerating. The occupant experiences no injury-producing force prior to contact with the compartment surfaces. In stage two, the occupant is assumed to remain in contact with the compartment surface and experiences the same accelerations as the vehicle for the rest of the collision. The occupant may sustain injury at the end of stage one based on the velocity of impact with the compartment surfaces and due to vehicle accelerations during stage two. The occupant impact velocity and acceleration are computed from the vehicle collision acceleration history and the compartment geometry. Finally, the hypothetical occupant impact velocity and acceleration are then compared to threshold values of human tolerance to these forces. References Scientific models Applied mathematics Mathematical modeling Transport safety
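A minimal numerical sketch of the stage-one calculation is given below. It assumes a longitudinal flail distance of 0.6 m, a value commonly quoted in the roadside-safety literature, and a constant deceleration pulse; both are illustrative assumptions rather than details drawn from the publications above.

```python
# Stage-one flail-space sketch: integrate the vehicle's deceleration to find
# when the free-flying occupant has closed the assumed 0.6 m longitudinal
# flail distance; the occupant-vehicle relative velocity at that instant is
# the occupant impact velocity (OIV).

def occupant_impact_velocity(accel_history, dt, flail_distance=0.6):
    """accel_history: vehicle acceleration samples in m/s^2 (negative =
    braking); dt: sample spacing in s. Returns OIV in m/s, or None if the
    occupant never closes the flail distance."""
    rel_v = 0.0  # occupant velocity relative to the vehicle
    rel_x = 0.0  # occupant displacement relative to the compartment
    for a in accel_history:
        rel_v -= a * dt   # the occupant is unaccelerated, so relative v grows
        rel_x += rel_v * dt
        if rel_x >= flail_distance:
            return rel_v
    return None

# Example: a constant 15 g deceleration pulse sampled at 1 kHz for 1 s.
g = 9.81
pulse = [-15 * g] * 1000
print(f"OIV = {occupant_impact_velocity(pulse, 0.001):.1f} m/s")  # ~13 m/s
```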
Flail space model
[ "Physics", "Mathematics" ]
627
[ "Mathematical modeling", "Applied mathematics", "Transport safety", "Physical systems", "Transport" ]
47,372,547
https://en.wikipedia.org/wiki/Truncated%20Newton%20method
Truncated Newton methods, which originated in a paper by Ron Dembo and Trond Steihaug and are also known as Hessian-free optimization, are a family of optimization algorithms designed for optimizing non-linear functions with large numbers of independent variables. A truncated Newton method consists of repeated application of an iterative optimization algorithm to approximately solve Newton's equations, to determine an update to the function's parameters. The inner solver is truncated, i.e., run for only a limited number of iterations. It follows that, for truncated Newton methods to work, the inner solver needs to produce a good approximation in a finite number of iterations; the conjugate gradient method has been suggested and evaluated as a candidate inner loop. Another prerequisite is good preconditioning for the inner algorithm. References Further reading Optimization algorithms and methods
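A minimal sketch of the idea follows. It uses conjugate gradient as the inner solver and approximates Hessian-vector products by finite differences of the gradient (one common "Hessian-free" choice); the iteration counts are arbitrary, and a production implementation would add a line search and preconditioning:

```python
import numpy as np

# Truncated Newton sketch: the Newton system H p = -g is solved only
# approximately by a few conjugate-gradient iterations, which access H solely
# through Hessian-vector products (here finite differences of the gradient).

def truncated_newton(f_grad, x0, outer_iters=50, cg_iters=10, tol=1e-8):
    x = x0.astype(float)
    for _ in range(outer_iters):
        g = f_grad(x)
        if np.linalg.norm(g) < tol:
            break

        def hess_vec(v, eps=1e-6):
            # H v ~ (grad(x + eps v) - grad(x)) / eps
            return (f_grad(x + eps * v) - g) / eps

        # Truncated CG on H p = -g, stopped after cg_iters iterations.
        p = np.zeros_like(x)
        r = -g.copy()        # residual of H p = -g at p = 0
        d = r.copy()
        for _ in range(cg_iters):
            Hd = hess_vec(d)
            curv = d @ Hd
            if curv <= 0:    # non-positive curvature: keep progress so far
                break
            alpha = (r @ r) / curv
            p += alpha * d
            r_new = r - alpha * Hd
            if np.linalg.norm(r_new) < tol:
                break
            d = r_new + ((r_new @ r_new) / (r @ r)) * d
            r = r_new
        if not p.any():
            p = -g           # steepest descent if CG made no step
        x = x + p            # full step; a line search would improve robustness
    return x

# Usage: minimize the Rosenbrock function from a standard starting point.
def rosen_grad(x):
    return np.array([-400*x[0]*(x[1]-x[0]**2) - 2*(1-x[0]),
                     200*(x[1]-x[0]**2)])

print(truncated_newton(rosen_grad, np.array([-1.2, 1.0])))  # -> ~[1, 1]
```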
Truncated Newton method
[ "Mathematics" ]
175
[ "Applied mathematics", "Applied mathematics stubs" ]
67,170,495
https://en.wikipedia.org/wiki/Tsou%20plot
The Tsou plot is a graphical method of determining the number and nature of the functional groups on an enzyme that are necessary for its catalytic activity.

Theory of Tsou's method

Tsou Chen-Lu analysed the relationship between the functional groups of enzymes that are necessary for their biological activity and the loss of activity that occurs when the enzyme is treated with an irreversible inhibitor. Suppose that there are $n$ groups on each monomeric enzyme molecule that react equally fast with the modifying agent, and that $i$ of these are essential for catalytic activity. After modification of an average of $m$ groups on each molecule, the probability that any particular group has been modified is $m/n$, and the probability that it remains unmodified is $1 - m/n$. For the enzyme molecule to retain activity, all of its $i$ essential groups must remain unmodified, for which the probability is $(1 - m/n)^i$. The fraction of activity $a$ remaining after modification of $m$ groups per molecule must therefore be $a = (1 - m/n)^i$, and so $a^{1/i} = 1 - m/n$. This means that a plot of $a^{1/i}$ against $m$ should be a straight line. As the value of $i$ is initially unknown, one must draw plots with values 1, 2, 3, etc. to see which one gives a straight line. There are various complications with this analysis, as discussed in the original paper and, more recently, in a textbook.

Experimental applications

Despite the possible objections, the Tsou plot gave clear results when applied by Paterson and Knowles to the inactivation of pepsin by trimethyloxonium tetrafluoroborate (Meerwein's reagent), a reagent that modifies carboxyl residues in proteins. They were able to deduce from this experiment that three non-essential groups are modified without loss of activity, followed by two essential groups; two because assuming $i = 2$ yielded a straight line in the plot, whereas values of 1 and 3 yielded curves, with a total of 13 residues modified, as illustrated in the figure. Tsou's plot has also given good results with other systems, such as the type I dehydroquinase from Salmonella typhi, for which modification of just one essential group by diethyl pyrocarbonate was sufficient to inactivate the enzyme.

Alternative approach to the same question

A little before Tsou published his paper, William Ray and Daniel Koshland had described a different way of investigating the number and nature of groups on an enzyme essential for activity. Their method depends on kinetic measurements, and cannot be used, therefore, in cases where the modification is too fast for such measurements, such as the case of pepsin discussed above, but it complements Tsou's approach in useful ways. Suppose that an enzyme has two groups $X$ and $Y$ that are both essential for the catalytic activity, so if either is lost the catalytic activity is also lost. If $X$ is converted to an inactive form in a first-order reaction with rate constant $k_X$, and $Y$ is inactivated in a first-order reaction with a different rate constant $k_Y$, then the remaining activity $A$ after time $t$ obeys an equation of the following form: $A = A_0 e^{-(k_X + k_Y)t}$, in which $A_0$ is the value of $A$ when $t = 0$ and $k_X + k_Y$ is the observed first-order rate constant for inactivation, the sum of the rate constants for the separate reactions. The equation can be extended in an obvious way to the case where more than two groups are essential. Ray and Koshland also described the properties to be observed when not all of the modified groups are essential.

References Plots (graphics) Enzyme kinetics
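A small numerical sketch of this graphical test follows; the data are synthetic, generated under the model above with assumed values $n = 5$ and $i = 2$:

```python
import numpy as np

# Tsou's graphical test: given remaining activity a measured after modifying
# an average of m groups per molecule, plot a**(1/i) against m for trial
# values i = 1, 2, 3 and see which choice gives a straight line.

n_groups, i_true = 5, 2
m = np.linspace(0, 3, 13)                # modified groups per molecule
activity = (1 - m / n_groups) ** i_true  # a = (1 - m/n)**i, synthetic data

for i_trial in (1, 2, 3):
    y = activity ** (1.0 / i_trial)
    # Linearity check: residuals of the best least-squares straight line.
    coeffs = np.polyfit(m, y, 1)
    resid = y - np.polyval(coeffs, m)
    print(f"i = {i_trial}: rms deviation from a line = {np.std(resid):.2e}")
# Only the trial value i = 2 gives (to rounding) a perfect straight line.
```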
Tsou plot
[ "Chemistry" ]
683
[ "Chemical kinetics", "Enzyme kinetics" ]
67,171,353
https://en.wikipedia.org/wiki/The%20Top%20100%20Drugs
The Top 100 Drugs: Clinical Pharmacology and Practical Prescribing is a pocket-size medical manual focusing on the medicines most commonly prescribed by the British National Health Service (NHS). It was first published by Churchill Livingstone, Elsevier, in 2014, revised in a second edition in 2018, and again in a third edition in 2022. It is authored by four clinical pharmacologists from St George's Hospital, London: Andrew Hitchings, Dagan Lonsdale, Daniel Burrage and Emma Baker. The drugs are described in alphabetical order, with each drug or drug class on a double page. Each is explained in terms of clinical pharmacology and practical prescribing. Intravenous fluids are dealt with later in the book, followed by a self-assessment. The book received a review in Pulse, and in 2018 it was listed as an essential reference book for junior doctors by the Pharmaceutical Journal.

Development and publication

The Top 100 Drugs is a medical manual that aims to reduce risks in prescribing. It includes a list of medicines commonly prescribed by the British NHS, for undergraduate and postgraduate medical education in the UK. It was first published as an e-book by Churchill Livingstone, Elsevier, in 2014. A second edition was published in 2018. The first edition was based on the 100 drugs most frequently prescribed by the NHS in 2006–2009, first described in the British Journal of Clinical Pharmacology in 2011 by Emma Baker, who identified the drugs as they appear in the British National Formulary (BNF). The book is authored by Baker and three other clinical pharmacologists from St George's Hospital; Andrew Hitchings, Dagan Lonsdale and Daniel Burrage, and takes into account suggestions from junior doctors. The list was revised in 2015, using data from a larger dataset to check that no significant changes had occurred, and formed the basis of the second edition, in which 11 drugs were replaced and the number of self-assessment questions doubled. A third edition was released in 2022.

Content

The book has 325 pages and is small enough in height and width to fit in a pocket. There is a contents page, followed by a list of abbreviations and an introduction. The introduction states how the most frequently prescribed drugs in primary and secondary care were identified. Each drug or class of drugs is listed in alphabetical order, displayed on a double page and explained in two sections, clinical pharmacology and practical prescribing. These are then divided into:

Common indications: the conditions in which the drug is used.
Mechanism of action: the way the drug works.
Important adverse effects: side effects.
Warnings: cautions and reasons why the drug should not be used.
Important interactions: effects with other drugs.
Prescription: dose and route of administration of the drug.
Administration: how the drug is given.
Communication: information required by patients.
Monitoring: checks needed for each drug.
Cost: mostly highlighting whether a drug is expensive or inexpensive.
Clinical tips: a useful fact from the authors' experience.

The pharmacology of a drug or drug class is presented with guidance on prescribing. A drug can also be located by organ system or by clinical indication. Intravenous fluids are dealt with towards the end of the book, followed by a self-assessment and an index. Unlike the original list, the second includes the newer diabetic drugs, blood thinners and anti-epileptics such as levetiracetam. 
Reception

In 2014, the book received a review from a general practitioner in Pulse, in which the reviewer felt it was aimed at those unfamiliar with prescribing, but useful as an aid to revising drugs. It was mentioned in the International Journal of Clinical Skills, and in 2018 the Pharmaceutical Journal listed the book in their "nine essential resources for preregistration trainees". References 2014 non-fiction books 2018 in medicine Medical books Medical manuals Pharmacology literature Elsevier books British non-fiction books
The Top 100 Drugs
[ "Chemistry" ]
824
[ "Pharmacology", "Pharmacology literature" ]
67,172,201
https://en.wikipedia.org/wiki/Boundary%20%28graph%20theory%29
In graph theory, the outer boundary of a subset $S$ of the vertices of a graph $G$ is the set of vertices of $G$ that are adjacent to vertices in $S$ but are not in $S$ themselves. The inner boundary is the set of vertices in $S$ that have a neighbor outside $S$. The edge boundary is the set of edges with one endpoint in the inner boundary and one endpoint in the outer boundary. These boundaries and their sizes are particularly relevant for isoperimetric problems in graphs, separator theorems, minimum cuts, expander graphs, and percolation theory. References Graph theory
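A short sketch computing the three boundaries for a small graph; the adjacency-dictionary representation and the function name are ad hoc choices for illustration:

```python
# Compute the outer boundary, inner boundary, and edge boundary of a vertex
# subset S in a graph given as an adjacency dict {vertex: set of neighbors}.

def boundaries(adj, S):
    S = set(S)
    outer = {v for u in S for v in adj[u] if v not in S}   # neighbors of S outside S
    inner = {u for u in S if any(v not in S for v in adj[u])}  # S-vertices touching outside
    edge = {(u, v) for u in inner for v in adj[u] if v in outer}
    return outer, inner, edge

# Example: a path graph 1-2-3-4 with S = {1, 2}.
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(boundaries(adj, {1, 2}))
# -> outer boundary {3}, inner boundary {2}, edge boundary {(2, 3)}
```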
Boundary (graph theory)
[ "Mathematics" ]
115
[ "Graph theory stubs", "Mathematical relations", "Graph theory objects", "Graph theory" ]
67,174,955
https://en.wikipedia.org/wiki/Rhodium%28III%29%20bromide
Rhodium(III) bromide refers to inorganic compounds of the formula RhBr3(H2O)n where n = 0 or approximately three. Both forms are brown solids. The hydrate is soluble in water and lower alcohols. It is used to prepare rhodium bromide complexes. Rhodium bromides are similar to the chlorides, but have attracted little academic or commercial attention. Structure Rhodium(III) bromide adopts the aluminium chloride crystal structure. Reactions Rhodium(III) bromide is a starting material for the synthesis of other rhodium halides. For example, it reacts with bromine trifluoride to form rhodium(IV) fluoride and with aqueous potassium iodide to form rhodium(III) iodide. Like most other rhodium trihalides, anhydrous RhBr3 is insoluble in water. The dihydrate RhBr3·2H2O forms when rhodium metal reacts with hydrochloric acid and bromine. References Rhodium(III) compounds Bromides Platinum group halides
Rhodium(III) bromide
[ "Chemistry" ]
244
[ "Bromides", "Salts" ]
49,115,662
https://en.wikipedia.org/wiki/Pan-assay%20interference%20compounds
Pan-assay interference compounds (PAINS) are chemical compounds that often give false positive results in high-throughput screens. PAINS tend to react nonspecifically with numerous biological targets rather than specifically affecting one desired target. A number of disruptive functional groups are shared by many PAINS. While a number of filters have been proposed and are used in virtual screening and computer-aided drug design, the accuracy of filters with regard to compounds they flag and don't flag has been criticized. Common PAINS include toxoflavin, isothiazolones, hydroxyphenyl hydrazones, curcumin, phenol-sulfonamides, rhodanines, enones, quinones, and catechols. See also Drug discovery References Further reading BadApple database Medicinal chemistry
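In practice, PAINS screening is often done with substructure filter catalogs. The sketch below uses RDKit's built-in PAINS catalog; this is one common implementation rather than the only one, and the example molecule (a benzylidene rhodanine) is assumed, not guaranteed, to trigger an ene-rhodanine alert:

```python
# Flagging PAINS with RDKit's built-in filter catalog (assumes RDKit is
# installed; API per rdkit.Chem.FilterCatalog).

from rdkit import Chem
from rdkit.Chem.FilterCatalog import FilterCatalog, FilterCatalogParams

params = FilterCatalogParams()
params.AddCatalog(FilterCatalogParams.FilterCatalogs.PAINS)  # PAINS A+B+C
catalog = FilterCatalog(params)

def flag_pains(smiles: str):
    """Return the names of PAINS substructure alerts matched by a molecule."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None  # unparsable SMILES
    return [entry.GetDescription() for entry in catalog.GetMatches(mol)]

# 5-benzylidene rhodanine: expected to match an ene-rhodanine alert, one of
# the rhodanine-class PAINS mentioned above.
print(flag_pains("O=C1NC(=S)SC1=Cc1ccccc1"))
```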
Pan-assay interference compounds
[ "Chemistry", "Biology" ]
168
[ "Pharmacology", "Medicinal chemistry stubs", "Medicinal chemistry", "nan", "Biochemistry", "Pharmacology stubs" ]
49,117,938
https://en.wikipedia.org/wiki/CO-0.40-0.22
CO-0.40-0.22 is a high-velocity compact gas cloud near the centre of the Milky Way. It is 200 light years away from the centre, in the central molecular zone. The cloud is elliptical in shape. The spread of velocities in the gas, termed the velocity dispersion, is unusually high at 100 km/s. The velocity dispersion was once thought to be due to an intermediate-mass black hole (IMBH) with a mass of about 100,000 solar masses. However, observations with the Atacama Large Millimeter/submillimeter Array suggested evidence for a cloud-cloud collision instead. Subsequent theoretical studies of the gas cloud and nearby IMBH candidates have re-opened the possibility, though no observational evidence for the existence of an IMBH has been reported. The molecular cloud has a mass of 4,000 solar masses. It is located at −0.40°, −0.22° galactic longitude and latitude. The cloud is 0.2° away from Sgr C to the galactic southeast. The gas is moving away from Earth at speeds ranging from 20 to 120 km/s. The spectral lines of carbon monoxide reveal that the gas is dense, warm and fairly opaque. The gas cloud includes carbon monoxide and hydrogen cyanide molecules. Other molecules detected via microwave spectroscopy include cyanoacetylene, cyclopropenylidene, methanol, silicon monoxide, sulfur monoxide, carbon monosulfide, thioformaldehyde, hydrogen isocyanide, formamide, and the ions H2N+ and HCO+. The name followed the precedent set by CO-0.02-0.02, which is another high-velocity compact cloud in the central molecular zone. Another example of this naming convention is CO–0.30–0.07. References Molecular clouds Sagittarius (constellation) Intermediate-mass black holes
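For scale, the kind of virial estimate that links a velocity dispersion to an enclosed gravitating mass can be sketched as follows; the characteristic radius is an assumed round figure for illustration, not a value from the observations described above:

```python
# Order-of-magnitude virial mass for a cloud with velocity dispersion sigma
# bound within radius R: M ~ sigma**2 * R / G. The radius is an assumed
# illustrative figure, not taken from the article.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
PC = 3.086e16      # parsec, m

sigma = 100e3      # velocity dispersion from the text, m/s
radius = 0.3 * PC  # assumed characteristic radius

m_virial = sigma**2 * radius / G
print(f"~{m_virial / M_SUN:.1e} solar masses")
# Of order 1e5-1e6 solar masses, comparable to the IMBH mass once proposed.
```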
CO-0.40-0.22
[ "Physics", "Astronomy" ]
400
[ "Black holes", "Unsolved problems in physics", "Intermediate-mass black holes", "Constellations", "Sagittarius (constellation)" ]
49,118,961
https://en.wikipedia.org/wiki/Genetic%20significant%20dose
Genetic significant dose (GSD), or genetically significant dose, was initially defined by the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) in 1958. It represents an estimate of the genetic significance of gonad radiation doses. The annual GSD is calculated by weighting the individual gonad doses received during ionizing imaging by the number of individuals examined, and accounting for the expected number of offspring of each individual. References Medical imaging Radiation health effects
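One much-simplified reading of this definition can be sketched as a weighted average; the grouping and numbers below are hypothetical, and the full UNSCEAR formulation is more detailed:

```python
# Simplified GSD sketch: average the gonad dose over the whole population,
# weighting each person by their expected number of future children.
# Group names and numbers are hypothetical placeholders.

def genetically_significant_dose(groups):
    """groups: iterable of (n_people, gonad_dose_mGy, expected_children).
    Unexamined people appear as zero-dose entries."""
    weighted_dose = sum(n * d * c for n, d, c in groups)
    total_weight = sum(n * c for n, d, c in groups)
    return weighted_dose / total_weight

population = [
    (1_000, 0.5, 2.0),    # examined adults of reproductive age
    (200, 0.2, 0.1),      # examined older patients, few expected children
    (98_800, 0.0, 1.9),   # rest of the population, no examinations
]
print(f"GSD ~ {genetically_significant_dose(population):.4f} mGy")
```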
Genetic significant dose
[ "Physics", "Chemistry", "Materials_science" ]
93
[ "Radiation health effects", "Nuclear chemistry stubs", "Nuclear and atomic physics stubs", "Nuclear physics", "Radiation effects", "Radioactivity" ]
49,122,599
https://en.wikipedia.org/wiki/Consumption%20map
A consumption map or efficiency map is a chart that displays the brake-specific fuel consumption of an internal combustion engine at a given rotational speed and mean effective pressure, in grams per kilowatt-hour (g/kWh). The map covers every possible operating condition combining rotational speed and mean effective pressure. The contour lines show brake-specific fuel consumption, indicating the areas of the speed/load regime where an engine is more or less efficient. A given power output P (proportional to the product of rotational speed and mean effective pressure) is reached at multiple locations on the map that differ in the amount of fuel consumed. Automatic transmissions are therefore designed to keep the engine at the speed with the lowest possible fuel consumption for a given power output under standard driving conditions. Overall thermal efficiency depends on the fuel used; diesel and gasoline engines can reach 210 g/kWh, corresponding to about 40% efficiency. Natural gas engines can reach a consumption of about 200 g/kWh. Average fuel consumption values are 160–180 g/kWh for slow two-stroke diesel cargo ship engines using fuel oil, reaching up to 55% efficiency at 300 rpm; 195–210 g/kWh for turbodiesel passenger cars; 195–225 g/kWh for trucks; and 250–350 g/kWh for naturally aspirated Otto cycle gasoline passenger cars. Literature (German) Richard van Basshuysen, Fred Schäfer: Handbuch Verbrennungsmotor; 3. Auflage; 2005; Vieweg Verlag References Internal combustion engine Engine technology Thermodynamics Diagrams
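The stated link between 210 g/kWh and about 40% efficiency follows from converting brake-specific fuel consumption with the fuel's lower heating value. A sketch, with heating values taken as assumed typical figures:

```python
# Conversion between brake-specific fuel consumption and overall efficiency:
# eta = (3.6 MJ per kWh of output) / (BSFC * lower heating value of the fuel).
# The heating values below are assumed typical figures, not from the article.

LHV_MJ_PER_KG = {"diesel": 42.6, "gasoline": 42.9, "natural gas": 47.1}

def efficiency_from_bsfc(bsfc_g_per_kwh: float, fuel: str) -> float:
    energy_in_mj = bsfc_g_per_kwh / 1000 * LHV_MJ_PER_KG[fuel]  # fuel energy per kWh
    return 3.6 / energy_in_mj

# 210 g/kWh on diesel corresponds to roughly 40% efficiency, as in the text.
print(f"{efficiency_from_bsfc(210, 'diesel'):.1%}")
```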
Consumption map
[ "Physics", "Chemistry", "Mathematics", "Technology", "Engineering" ]
318
[ "Internal combustion engine", "Engines", "Combustion engineering", "Engine technology", "Thermodynamics", "Dynamical systems" ]
49,127,301
https://en.wikipedia.org/wiki/MuSIASEM
MuSIASEM or Multi-Scale Integrated Analysis of Societal and Ecosystem Metabolism is a method of accounting used to analyse socio-ecosystems and to simulate possible patterns of development. It is based on maintaining coherence across scales and different dimensions (e.g. economic, demographic, energetic) of quantitative assessments generated using different metrics.

History

MuSIASEM is designed to detect and analyze patterns in the societal use of resources, making a distinction between: the internal end uses (who is using which resources, how much, how, and why); the resulting internal environmental pressures associated with the various end uses, allowing an analysis of the impacts they create on the environment; and the level of externalization through trade of both the requirements of additional end uses and the resulting environmental pressures, which are moved to the social-ecological systems producing the imports. The ability to integrate quantitative assessments across dimensions and scales makes MuSIASEM particularly suited to different types of sustainability analysis: the nexus between food, energy, water and land uses; urban metabolism; waste metabolism; tourism metabolism; and rural development. The approach was created around 1997 by Mario Giampietro and Kozo Mayumi, and has been developed since then by the members of the IASTE (Integrated Assessment: Sociology, Technology and the Environment) group at the Institute of Environmental Science and Technology of the Autonomous University of Barcelona and its external collaborators. The purpose of MuSIASEM is to characterize the metabolic patterns of socio-ecological systems: how and why humans use resources, and how this use depends on and affects the stability of the ecosystems embedding the society. This integrated approach allows a quantitative implementation of the DPSIR framework (Drivers, Pressures, States, Impacts and Responses) and application as a decision support tool. Different alternatives in the option space can be checked in terms of feasibility (compatibility with processes outside human control), viability (compatibility with processes under human control) and desirability (compatibility with normative values and institutions). The original version of the accounting scheme has been improved using theoretical concepts from complex systems theory, leading to MuSIASEM version 2.0, tested in several case studies.

Applications

MuSIASEM accounting has been used for the integrated assessment of agricultural systems, biofuels, nuclear power, energetics, sustainability of water use, mining, urban waste management systems, and urban metabolism in developing countries. Moreover, the methodology has been applied to assess societal metabolism at the municipal, regional (rural Laos, Catalonia, China, Europe, Galapagos Islands), national, and supranational scale. An application of MuSIASEM to the nexus between natural resources is presented in the book Resource Accounting for Sustainability: The Nexus between Energy, Food, Water and Land Use. This work has been tested in collaboration with FAO. The Ecuadorian National Secretariat for Development and Planning (SENPLADES) has included the MuSIASEM approach in the training of its personnel. Finally, several master's courses on the application of the approach to energy systems have been developed at various Southern African universities under the Participia project. MuSIASEM has been applied to the analysis of Shanghai's urban metabolism. 
See also Anthropogenic metabolism Industrial metabolism Material flow analysis Social metabolism Urban metabolism References External links Institute of Environmental Science and Technology (ICTA) at the Universitat Autònoma de Barcelona. MAGIC Nexus: an EU H2020 project applying the MuSIASEM approach. The Nexus between Energy, Food, Land Use, and Water: Application of a Multi-Scale Integrated Approach. About MuSIASEM rationale and methodology. The Sustainability Sudoku: Simplified application of the MuSIASEM approach to the energy-food-land nexus for didactic purposes. The Participia Home Page: Participatory Integrated Assessment of Energy Systems to Promote Energy Access and Efficiency. Presentation of the MuSIASEM 2.0 approach, August 2020 report from the MAGIC Nexus project. Scientific models Ecological economics Industrial ecology Natural resources Energy economics Water management Systems ecology Environmental science Environmental social science Systems theory Water and the environment
MuSIASEM
[ "Chemistry", "Engineering", "Environmental_science" ]
833
[ "Energy economics", "Industrial engineering", "nan", "Environmental engineering", "Industrial ecology", "Environmental social science", "Systems ecology" ]
60,730,016
https://en.wikipedia.org/wiki/Table%20of%20specific%20heat%20capacities
The table of specific heat capacities gives the volumetric heat capacity as well as the specific heat capacity of some substances and engineering materials, and (when applicable) the molar heat capacity. Generally, the most notable constant parameter is the volumetric heat capacity (at least for solids), which for most solids is around the value of 3 megajoules per cubic metre per kelvin. Note that the especially high molar values, as for paraffin, gasoline, water and ammonia, result from calculating specific heats in terms of moles of molecules. If specific heat is expressed per mole of atoms for these substances, none of the constant-volume values exceed, to any large extent, the theoretical Dulong–Petit limit of 25 J⋅mol−1⋅K−1 = 3 R per mole of atoms (see the last column of this table). For example, paraffin has very large molecules and thus a high heat capacity per mole, but as a substance it does not have remarkable heat capacity in terms of volume, mass, or atom-mol (which is just 1.41 R per mole of atoms, or less than half of most solids, in terms of heat capacity per atom). The Dulong–Petit limit also explains why dense substances such as lead, which have very heavy atoms, rank very low in mass heat capacity. In the last column, major departures of solids at standard temperatures from the Dulong–Petit law value of 3 R are usually due to low atomic weight plus high bond strength (as in diamond), causing some vibration modes to have too much energy to be available to store thermal energy at the measured temperature. For gases, departure from 3 R per mole of atoms is generally due to two factors: (1) failure of the higher quantum-energy-spaced vibration modes in gas molecules to be excited at room temperature, and (2) loss of potential energy degrees of freedom for small gas molecules, simply because most of their atoms are not bonded maximally in space to other atoms, as happens in many solids.

A: Assuming an altitude of 194 metres above mean sea level (the worldwide median altitude of human habitation), an indoor temperature of 23 °C, a dewpoint of 9 °C (40.85% relative humidity), and 760 mmHg sea level–corrected barometric pressure (molar water vapor content = 1.16%). B: Calculated values. *Derived data by calculation; this is for water-rich tissues such as brain. The whole-body average figure for mammals is approximately 2.9 J⋅cm−3⋅K−1.

Mass heat capacity of building materials (usually of interest to builders and solar designers)

Human body

The specific heat of the human body calculated from the measured values of individual tissues is 2.98 kJ⋅kg−1⋅°C−1. This is 17% lower than the earlier, more widely used value of 3.47 kJ⋅kg−1⋅°C−1, which was based on non-measured values. The contribution of muscle to the specific heat of the body is approximately 47%, and the contribution of fat and skin is approximately 24%. The specific heat of tissues ranges from ~0.7 kJ⋅kg−1⋅°C−1 for tooth (enamel) to 4.2 kJ⋅kg−1⋅°C−1 for eye (sclera). See also List of thermal conductivities References Heat conduction
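The per-mole-of-atoms comparison in the text can be checked with a few lines; water is used as a worked example, with standard property values:

```python
# Convert a mass-specific heat to a per-mole-of-atoms value and compare with
# the Dulong-Petit limit of 3R. Water property values are standard figures.

R = 8.314               # gas constant, J/(mol K)

c_water = 4.18          # specific heat of liquid water, J/(g K)
molar_mass = 18.0       # g/mol
atoms_per_molecule = 3  # H2O

c_molar = c_water * molar_mass          # ~75 J/(mol K) per mole of molecules
c_atom_mol = c_molar / atoms_per_molecule
print(f"{c_atom_mol:.1f} J/(mol K) per mole of atoms = {c_atom_mol / R:.2f} R")
# -> ~25 J/(mol K), close to the Dulong-Petit value of 3R ~ 24.9 J/(mol K)
```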
Table of specific heat capacities
[ "Physics", "Chemistry" ]
681
[ "Heat conduction", "Thermodynamics" ]
60,731,950
https://en.wikipedia.org/wiki/Gravity%20filtration
Gravity filtration is a method of filtering impurities from solutions by using gravity to pull liquid through a filter. The two main kinds of filtration used in laboratories are gravity and vacuum/suction filtration. Gravity filtration is often used in chemical laboratories to filter precipitates from precipitation reactions, as well as drying agents, unwanted side products, or remaining reactants. While it can also be used to separate out solid products, vacuum filtration is more commonly used for this purpose.

Method

The process of removing suspended matter involves two steps: transport and attachment. Transport occurs as particles move through the filter paper; attachment occurs as the impurity is trapped in the filter. Gravity filtration is an easy way to remove solid impurities or a precipitate from an organic liquid, and it can collect any insoluble solid.

History

Early in human history, people obtained clear water from muddy rivers or lakes by digging holes in sandy banks to a depth below the waterline of the river or lake. The sand filtered the water and clear water filled the hole; this method was later used in cities to purify urban water. In farming, people used gravity filtration to let water from higher areas flow to lower areas through filters. In this way, sand and small stones filtered out impurities, producing clear water. In Asia, people pumped water from wells and put it into a jar with a small hole at the bottom. The jar was filled with small stones and the hole covered with layers of gauze.

Classic methods

Filtration is commonly used to filter solutions containing solids or insoluble precipitates. The solution is poured through a piece of filter paper folded into a cone in a glass funnel. Solids (or flocs) remain on the filter paper while the filtered solution is caught by a flask under the funnel. If a large volume of solution is filtered, the filter paper will need to be changed in order to prevent clogging.

Experimental error

In many laboratories, gravity filtration is used to filter out solids to determine reaction yield. Several experimental errors need to be taken into account. Some precipitated solid remains on the filter paper or in the funnel; in this case, a gap appears between the actual product yield and the measured yield. If the precipitated solid is not dried thoroughly, excess fluid influences the experimental results, and the measured yield of precipitate may appear larger than the theoretical yield. Incorrect use of filter paper may influence the filtration, and damaging the filter paper can allow small bits of precipitated solid to pass through the filter.

Industry examples

Seawater: A variety of filtration operations have been tested with seawater containing high concentrations of dissolved dimethylsulfoniopropionate.

Conventional: These filters involve three stages: flocculation, clarification and filtration. Typical rapid gravity filters use filter tanks made of coated or stainless steel or aluminum. Influent flows fall through the filter and are captured by the underdrain. The filter media removes particles from the water; it usually has three layers: anthracite coal, silica sand and gravel. This approach is effective for removing impurities and uses less cleaning time, lowering cost.

Microorganisms: One project aimed to remove parasites and other contaminants such as lead, using multi-hole filters with diameters that allow water to flow by gravity.

Gravity media filters: These filters are used for industrial applications. The filter lets the fluid stream pass through the media, which retains impurities. It can support large volumes. Some gravity filter systems in the chemical industry can remove chlorine and other organics, or remove iron, heavy sediments and sand.

Gravity filter (liquid): Liquid is removed from a gas stream by coalescers in a single stage. The elements enter with the flow and then pass through the distributor. This is a primary separation device that can remove particles and then coalesce them in the cartridges in an inside-to-out direction. The liquids pass through the structure of the filter and then drain from the vessel.

Model SK: This filter is an open sand filter system used for water treatment in low-budget settings. It suits a variety of pressure-controlled backwashing arrangements. It is an automatic gravity filter that uses pressure differences and backwashes the system with an injector. The system has no controls, no moving parts and no pumps. The backwashing water is held in a tank below the filter.

See also Separation process Separation of mixtures Microfiltration Ultrafiltration Nanofiltration Reverse osmosis Crossflow filtration Sieve Sieve analysis Reference list Analytical chemistry Filters Filtration Laboratory techniques
Gravity filtration
[ "Chemistry", "Engineering" ]
968
[ "Separation processes", "Chemical equipment", "Filters", "Filtration", "nan" ]
42,828,447
https://en.wikipedia.org/wiki/Selig%20Hecht
Selig Hecht (February 8, 1892 – September 18, 1947) was an American physiologist who studied the photochemistry of photoreceptor cells.

Biography

Hecht was born into a Jewish family in Glogau, then in the German Empire (now Głogów, Poland), the son of Mandel Hecht and Mirel Mresse. The family migrated to the United States in 1898, settling in New York City. In June 1917 Hecht received his Ph.D. and married Celia Huebschmann. Their daughter Maressa was born in 1924. He became professor of biophysics at Columbia University in 1928.

Work

Hecht began his study of light sensitivity with clams (Mya arenaria) and insects. His specialty was photochemistry, the kinetics of the reactions initiated by light in the receptors. He made contributions to the knowledge of dark adaptation, visual acuity, brightness discrimination, color vision, and the mechanism of the visual threshold. He spent time as a post-doctoral researcher in the group of Edward Charles Cyril Baly at the University of Liverpool, UK. Baly was a pioneer in the application of spectroscopy in chemistry, and Hecht took this further by applying it to biological molecules. Hecht's role in showing the protein character of rhodopsin was recounted by historians of protein science: "Identification of visual purple as a protein of high molecular weight ...[came] from the work of Selig Hecht at Columbia University in New York, begun in 1937. Ultracentrifugation was one of the methods he used for characterization and this produced an added dividend, demonstrating that the complex absorption of the 'pigment' (suggesting the possibility of many components) sedimented in toto with the protein. By this time the carotenoid prosthetic group had been discovered as the source of colour by George Wald, and Hecht pointed out that this meant that the protein had to be a conjugated protein, with the chromophore firmly attached." According to biographer Pirenne, Hecht was a "brilliant lecturer and expositor." Pirenne continues, "The lack of synthesis discernible in present-day knowledge and teaching perturbed him, and he took an active interest in all the human implications of science. He dealt with persons and ideas on the basis of their intrinsic worth,..." In 1941, The Optical Society of America awarded him the Frederic Ives Medal, the Society's highest honor.

Explaining the atom

When World War II ended with the use of atomic weapons that had been developed in secret by the Manhattan Project, Hecht was concerned that the American public was uninformed about the development of this new source of energy. He wrote a book, Explaining the Atom (1947), to educate the public. He wrote, "So long as one supposes this business is mysterious and secret, one cannot have a just evaluation of our possessions and security. Only by understanding the basis and development of atomic energy can one judge the legislation and foreign policy that concern it." In a 1947 review in the New York Times, Stephen Wheeler wrote that it was "by all odds the best book on atomic energy so far to be published for the ordinary reader." Similarly, James J. Jelinek wrote that it was an "invaluable contribution to the layman." He credits Hecht with "conveying to the layman the intellectual drama" of the development, and asserts that the book is "profoundly provocative in its political and sociological implications." After Hecht died, a second edition was issued in 1959 by Eugene Rabinowitch. Both editions were recommended by George Gamow.

Selected publications

Hecht, S. (1937) "Rods, cones, and the chemical basis of vision", Physiological Reviews 17: 239–289.
Hecht, S. & Pickels, E.G. (1938) "The sedimentation constant of visual purple", Proceedings of the National Academy of Sciences of the USA 24: 172–176.

References 1892 births 1947 deaths American physiologists Columbia University faculty Emigrants from the German Empire to the United States Jewish American scientists Jewish biologists Photochemists City College of New York alumni Harvard University alumni Creighton University faculty
Selig Hecht
[ "Chemistry" ]
881
[ "Photochemists", "Physical chemists" ]
42,831,000
https://en.wikipedia.org/wiki/EF50
In the field of electronics, the EF50 is an early all-glass wideband remote cutoff pentode designed in 1938 by Philips. It was a landmark in the development of vacuum tube technology, departing from construction techniques that were largely unchanged from light bulb designs. Initially used in television receivers, it quickly gained a vital role in British radar, and great efforts were made to secure a continuing supply of the device as Holland fell in World War II. The EF50 tube is a 9-pin Loctal-socket device with short internal wires to nine short chromium-iron pins. The short wiring was key to making it suitable for Very High Frequency (VHF) use. History Early tube construction Early vacuum tubes were built using light bulb techniques, which had been highly automated by the 1920s. In a standard light bulb of the era, the tungsten filament was supported on two metal rods, which were fastened together by inserting them into a glass tube and then heating the glass and squeezing it flat with the rods inside. The resulting support was known as the "glass pinch". The pinch was then inserted into a larger glass envelope, the bulb itself, welded, and then fitted with a metal cap for the electrical connections. For vacuum tube use, little was changed, with the various internal components supported on rods which passed through the pinch. As tubes grew in complexity, the number of leads also grew. Since light bulb sizes were standardized, all of these had to pass through the same pinch, which placed them increasingly close to each other. This led to increased capacitance, which limited the tube's ability to work at high frequencies. To address this, to some degree at least, it became somewhat common to attach the control grid leads to a metal button at the top of the tube rather than the bottom, but this made construction much more complex, as well as making connections in radio sets more difficult, as they could no longer be on a single circuit board. VHF experiments Through the early 1930s, a number of companies experimented with metal tubes, using a variety of sealing methods. These worked well, but tended to be rather large and were never able to be successfully mass-produced at low cost. RCA continued experiments with all-glass tubes and introduced their "acorn" (or "door knob") tubes late in 1934. These were essentially two half-tubes that were assembled separately, carefully folded together, and then sealed along the centerline. Despite using low-cost materials and construction, the manual assembly led to high costs. In Germany, Telefunken introduced the "Stahlröhre" (~steel tube), which had issues of its own. Philips had been working from 1934 to 1935 on an alternative that would solve the problems of the other base designs, in a system that could be produced cheaply and in large quantities. A presentation by M.J.O. Strutt from the tube development group at Philips Research at the first "Internationale Fernseh-Tagung in Zürich" (international television conference in Zürich) described their work in September 1938. A few months later, Professor J.L.H. Jonker, who had a leading role in the development of the EF50, published an internal Philips Research Technical Note, titled "New radio Tube Constructions". Jonker's role was confirmed decades later by Th. P. Tromp, head of radio-valve manufacturing and production: "Prof. Dr. Jonker (head of development lab of electronic valves in the mid-thirties) was the originator of the EF50. This development started as early as 1934–1935. 
It was, indeed, developed in view of possible television application." Their first attempts faced problems due to the mechanical loads of the connection pins. If they used leads that were strong enough to be pushed into a conventional socket, these were large enough that the holes in the glass plate greatly reduced the plate's physical strength, and cracking was a serious problem. Thinner wires would solve this problem, but these proved difficult to connect to in the socket, and the tubes tended to disconnect when jolted. The solution was to use bent pins, which exited the bottom of the tube and were then bent through a 90 degree arc toward the center of the tube's base. These were used with a special socket; when pressed in and rotated slightly, the pins locked into place. With this problem solved, the team then turned to consider whether the top control grid connection could be eliminated, as it had been in the RCA acorns. This was easy enough to do electrically, but Philips had already taken to using the metal cap on the electrode as a convenient place to hide the gas evacuation tube, used during the final steps of construction. They developed a way to weld the tube into the base plate instead of the top of the tube, but this left the tube projecting from the bottom, where it could be easily snapped off. The solution to this was a metal shell that was fit onto the bottom of the tube at the end of construction, covering the evacuation tube while allowing the connection pins to project through holes. This was known as "the metal trouser". Television requirements Pye Ltd., a leading British electronics firm of the time, had pioneered television receiver design, and in the late 1930s wanted to market receivers that would allow reception further and further from the single Alexandra Palace television transmitter. In particular, they wanted to be able to receive these transmissions at their Cambridge factories. They initially turned to their subsidiaries, Cathodeon and Hi-Vac, but they were not capable of producing much of an improvement. They turned to Mullard, who turned to their Philips managers in Eindhoven. With some tweaking from Baden John Edwards and Donald Jackson from Pye (for example the metal shield), the final EF50 pentode was produced and used in Pye's 45 MHz TRF design, creating a receiver able to receive transmissions at up to five times the distance of competing designs. Radar uses While Pye was working on their television systems, top secret work on radar was being carried out at Bawdsey Manor. As part of this research, a team under Edward George Bowen was developing a receiver that was small and light enough to be used on aircraft. Their original design was based on a television chassis from EMI using RCA acorn valves. Only one set was available, and it was almost lost in an accident, so Bowen was eager to find additional receivers. When the war began in the summer of 1939, all work on civilian television was suspended. This left Pye with many completed chassis and no way to sell them. Edward Victor Appleton, who had been the thesis advisor for both Bowen and Harold Pye, mentioned these surplus chassis to Bowen and suggested he try them. Bowen contacted Pye and found that "scores and scores" of completed chassis were available. When tested, they were found to completely outperform the EMI model. 
Operational requirements, mostly the size of the dipole antennas suitable for external mounting on an aircraft, demanded that short wavelengths be used, and the team had already selected 200 MHz as the basic operational frequency. Like the earlier EMI model, the Pye receiver was then adapted from the BBC 45 MHz standard to 200 MHz by adding a single step-down stage in front of an otherwise unmodified Pye chassis. The resulting "Pye strip" became the basis for many UK radar designs of the era, including AI Mk. IV and ASV Mk. II. Flight from Holland Because the EF50 had to come from Holland, yet was vital for the RDF (radar) effort, great efforts were made to secure a continuing supply as the risk of Holland being overrun increased. Mullard in England did not have the ability to manufacture the special glass base, for example. Just before Germany invaded Holland, a truckload of 25,000 complete EF50s and many more of their special bases were successfully sent to England. The entire EF50 production line was hurriedly relocated to Britain. On 13 May 1940, the day before the Germans flattened Rotterdam, members of the Philips family escaped together with the Dutch government on the British destroyer HMS Windsor, taking with them a small wooden box containing the industrial diamonds that were to be used to make the dies needed to make the fine tungsten wires in the valves. Characteristics Equivalents To meet great wartime demand, the EF50 was also made by Marconi-Osram (with the name Z90) and Cossor (their version named 63SPT) in the United Kingdom, as well as by Mullard (who were effectively using the Philips production line after it was moved from Holland). Versions were also made in Canada by Rogers Vacuum Tube Company and in the United States by Sylvania Electric Products. British military (Ministry of Aircraft Production Specification) and U.S. JAN type numbers assigned to the EF50 include: ARP35 (Army Receiving Pentode 35), VR91 (original A.M. name), CV1091 (from 1943), ZA3058 (Army), ZC1051 or ZC/10E/92 (Army), and 10E/92 (Air Ministry). The tube was also assigned the GPO (PO) VT-207 type number, VT-250, and CV1578. Valves of similar characteristics were produced with different bases, for example the later EF42 and the 9-pin miniature (B9A) EF80. References See also Mullard EF50 data sheet Vacuum tubes Philips products
EF50
[ "Physics" ]
1,956
[ "Vacuum tubes", "Vacuum", "Matter" ]
42,836,025
https://en.wikipedia.org/wiki/Fostemsavir
Fostemsavir, sold under the brand name Rukobia, is an antiretroviral medication for adults living with HIV/AIDS who have tried multiple HIV medications and whose HIV infection cannot be successfully treated with other therapies because of resistance, intolerance or safety considerations. The most common adverse reaction is nausea. Severe adverse reactions included elevations in liver enzymes among participants also infected with hepatitis B or C virus, and changes in the immune system (immune reconstitution syndrome). Fostemsavir is an HIV entry inhibitor and a prodrug of temsavir (BMS-626529). Fostemsavir is a human immunodeficiency virus type 1 (HIV-1) gp120-directed attachment inhibitor. It was approved for medical use in the United States in July 2020, and in the European Union in February 2021. The U.S. Food and Drug Administration (FDA) considers it to be a first-in-class medication. Medical uses Fostemsavir, in combination with other antiretrovirals, is indicated for the treatment of HIV-1 infection in heavily treatment-experienced adults with multidrug-resistant HIV-1 infection failing their current antiretroviral regimen due to resistance, intolerance, or safety considerations. Adverse effects Fostemsavir may cause a serious condition called immune reconstitution syndrome, similar to other approved drugs for treatment of HIV-1 infection. This condition can happen at the beginning of HIV-1 treatment, when the immune system may get stronger and begin to fight infections that have been hidden in the body for a long time. Other serious side effects include heart rhythm problems due to prolongation of the heart's electrical activity (QT prolongation) and an increase of liver enzymes in patients with hepatitis B or C virus co-infection. History It was under development by ViiV Healthcare / GlaxoSmithKline for use in the treatment of HIV infection. When the active form of fostemsavir binds to the gp120 protein of the virus, it prevents initial viral attachment to the host CD4+ T cell and entry into the host immune cell; its mechanism of action is a first for HIV drugs. Because it targets a different step of the viral lifecycle, it offers promise for individuals with virus that has become highly resistant to other HIV drugs. Since the function of gp120 that this drug inhibits is highly conserved, the drug is unlikely to promote resistance to itself. Investigators found that enfuvirtide-resistant and ibalizumab-resistant HIV envelopes remained susceptible to fostemsavir. Conversely, fostemsavir-resistant HIV remained susceptible to all the entry inhibitors. Furthermore, HIV isolates that do not require the CD4 receptor for cell entry were also susceptible to fostemsavir, and the virus did not escape the attachment inhibitor by becoming CD4-independent. Prior in vitro studies showed that fostemsavir inhibits both CCR5-tropic and CXCR4-tropic HIV. Fostemsavir was approved for medical use in the United States in July 2020. The safety and efficacy of fostemsavir, taken twice daily by mouth, were evaluated in a clinical trial of 371 heavily treatment-experienced adult participants who continued to have high levels of virus (HIV-RNA) in their blood despite being on antiretroviral drugs. Two hundred seventy-two participants were treated in the main trial arm, and an additional 99 participants received fostemsavir in a different arm of the trial. 
Most participants had been treated for HIV for more than 15 years (71 percent), had been exposed to five or more different HIV treatment regimens before entering the trial (85 percent) and/or had a history of AIDS (86 percent). Participants in the main cohort of the trial received either fostemsavir or a placebo twice daily for eight days, in addition to their failing antiretroviral regimen. On the eighth day, participants treated with fostemsavir had a significantly greater decrease in levels of HIV-RNA in their blood compared to those taking the placebo. After the eighth day, all participants received fostemsavir with other antiretroviral drugs. After 24 weeks of fostemsavir plus other antiretroviral drugs, 53 percent of participants achieved HIV RNA suppression, where levels of HIV were low enough to be considered undetectable. After 96 weeks, 60 percent of participants continued to have HIV RNA suppression. The clinical trial (NCT02362503) was conducted at 108 sites in 23 countries in North America, South America, Europe, Australia, Taiwan and South Africa. The U.S. Food and Drug Administration (FDA) granted the application for fostemsavir fast track, priority review, and breakthrough therapy designations. The FDA granted the approval of Rukobia to ViiV Healthcare. On 10 December 2020, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Rukobia, intended for the treatment of multi-drug resistant HIV-1 infection. The applicant for this medicinal product is ViiV Healthcare B.V. Fostemsavir was approved for medical use in the European Union in February 2021. References Further reading External links Drugs developed by GSK plc Entry inhibitors Organophosphates N-benzoylpiperazines Prodrugs Pyrrolopyridines Triazoles
Fostemsavir
[ "Chemistry" ]
1,173
[ "Chemicals in medicine", "Prodrugs" ]
71,495,032
https://en.wikipedia.org/wiki/Faltings%27%20annihilator%20theorem
In abstract algebra (specifically commutative ring theory), Faltings' annihilator theorem states: given a finitely generated module M over a Noetherian commutative ring A and ideals I, J, the following are equivalent: \( \operatorname{depth} M_{\mathfrak{p}} + \operatorname{ht}\bigl((I + \mathfrak{p})/\mathfrak{p}\bigr) \ge n \) for every prime ideal \( \mathfrak{p} \in \operatorname{Spec}(A) \setminus V(J) \); there is an ideal \( \mathfrak{b} \) in A with \( \sqrt{\mathfrak{b}} \supseteq J \) such that \( \mathfrak{b} \) annihilates the local cohomologies \( H^i_I(M) \) for all \( i < n \); provided either A has a dualizing complex or is a quotient of a regular ring. The theorem was first proved by Faltings in 1981. References Abstract algebra Commutative algebra
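The theorem is often stated compactly as an equality of two invariants. The following is a sketch of that restatement under the notation used in Brodmann and Sharp's textbook Local Cohomology (the symbols f and λ are theirs, not this article's):

```latex
% Compact form of Faltings' annihilator theorem (notation as in
% Brodmann & Sharp, "Local Cohomology"; an assumption, not quoted above):
\[
  f_I^J(M) \;=\; \lambda_I^J(M),
\]
% where the J-finiteness dimension of M relative to I is
\[
  f_I^J(M) \;=\; \inf\bigl\{\, i \;:\; J \not\subseteq \sqrt{\,0 :_A H^i_I(M)\,} \,\bigr\},
\]
% and the lambda-invariant is
\[
  \lambda_I^J(M) \;=\; \inf\bigl\{\, \operatorname{depth} M_{\mathfrak{p}}
    + \operatorname{ht}\bigl((I+\mathfrak{p})/\mathfrak{p}\bigr)
    \;:\; \mathfrak{p} \in \operatorname{Spec}(A) \setminus V(J) \,\bigr\}.
\]
```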
Faltings' annihilator theorem
[ "Mathematics" ]
112
[ "Mathematical theorems", "Theorems in algebra", "Algebra", "Fields of abstract algebra", "Mathematical problems", "Commutative algebra", "Abstract algebra" ]
71,496,340
https://en.wikipedia.org/wiki/Wang%20algebra
In algebra and network theory, a Wang algebra is a commutative algebra W, over a field or (more generally) a commutative unital ring, in which two additional rules hold: (Rule i) for all elements x of W, x + x = 0 (universal additive nilpotency of degree 1); (Rule ii) for all elements x of W, xx = 0 (universal multiplicative nilpotency of degree 1). History and applications Rules (i) and (ii) were originally published by K. T. Wang (Wang Ki-Tung, 王 季同) in 1934 as part of a method for analyzing electrical networks. From 1935 to 1940, several Chinese electrical engineering researchers published papers on the method. The original Wang algebra is the Grassmann algebra over the finite field of integers mod 2. At the 57th annual meeting of the American Mathematical Society, held on December 27–29, 1950, Raoul Bott and Richard Duffin introduced the concept of a Wang algebra in their abstract (number 144t) The Wang algebra of networks. They gave an interpretation of the Wang algebra as a particular type of Grassmann algebra mod 2. In 1969 Wai-Kai Chen used the Wang algebra formulation to give a unification of several different techniques for generating the trees of a graph. The Wang algebra formulation has been used to systematically generate King-Altman directed graph patterns. Such patterns are useful in deriving rate equations in the theory of enzyme kinetics. According to Guo Jinhai, professor in the Institute for the History of Natural Sciences of the Chinese Academy of Sciences, Wang Ki Tung's pioneering method of analyzing electrical networks significantly promoted electrical engineering not only in China but in the rest of the world; the Wang algebra formulation is useful in electrical networks for solving problems involving topological methods, graph theory, and Hamiltonian cycles. Wang Algebra and the Spanning Trees of a Graph The Wang Rules for Finding all Spanning Trees of a Graph G: (1) For each node, write the sum of all the edge-labels that meet that node. (2) Leave out one node and take the product of the sums of labels for all the remaining nodes. (3) Expand the product in (2) using the Wang algebra. (4) The terms in the sum of the expansion obtained in (3) are in one-to-one correspondence with the spanning trees in the graph (see the code sketch below). References Commutative algebra Electrical engineering Network theory Ring theory
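The four rules above translate almost directly into code. Below is a minimal Python sketch, assuming a graph given as a dictionary from edge labels to node pairs (the function names and the triangle example are illustrative, not from the source): a Wang-algebra element is represented as a set of monomials, each a frozenset of edge labels, so that x + x = 0 becomes symmetric difference and xx = 0 discards any product with a repeated label.

```python
# Minimal sketch of the Wang rules for spanning-tree enumeration.
# Representation: a Wang-algebra element is a set of monomials; each monomial
# is a frozenset of edge labels. Addition is symmetric difference (x + x = 0);
# a product with a repeated edge label vanishes (xx = 0).

def wang_multiply(p, q):
    """Multiply two Wang-algebra elements given as sets of monomials."""
    result = set()
    for a in p:
        for b in q:
            if a & b:            # repeated label: term vanishes (xx = 0)
                continue
            result ^= {a | b}    # symmetric difference: duplicates cancel (x + x = 0)
    return result

def spanning_trees(nodes, edges):
    """edges maps an edge label to its pair of endpoints; returns the spanning
    trees of the graph as sets of edge labels, via the Wang rules."""
    # Rule 1: for each node, the sum of the labels of its incident edges.
    star = {v: {frozenset([e]) for e, ends in edges.items() if v in ends}
            for v in nodes}
    # Rule 2: leave out one node and multiply the sums for the remaining nodes.
    remaining = list(nodes)[1:]
    product = star[remaining[0]]
    for v in remaining[1:]:
        product = wang_multiply(product, star[v])   # Rule 3: expand as we go
    # Rule 4: the surviving monomials are exactly the spanning trees.
    return [set(m) for m in product]

# Example: the triangle graph has three spanning trees (drop any one edge).
edges = {"e1": (1, 2), "e2": (2, 3), "e3": (1, 3)}
print(spanning_trees([1, 2, 3], edges))  # e.g. [{'e1', 'e2'}, {'e1', 'e3'}, {'e2', 'e3'}]
```

For the triangle, the product (e1 + e2)(e2 + e3) expands to e1e2 + e1e3 + e2e3, since e2e2 vanishes; these three monomials are exactly the spanning trees.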
Wang algebra
[ "Mathematics", "Engineering" ]
477
[ "Graph theory stubs", "Algebra stubs", "Ring theory", "Graph theory", "Network theory", "Fields of abstract algebra", "Mathematical relations", "Electrical engineering", "Commutative algebra", "Algebra" ]
71,498,717
https://en.wikipedia.org/wiki/Coherent%20microwave%20scattering
Coherent microwave scattering is a diagnostic technique used in the characterization of classical microplasmas. In this technique, the plasma to be studied is irradiated with a microwave field whose wavelength is long relative to the characteristic spatial dimensions of the plasma. For plasmas whose skin depth is sufficiently large compared to their size, the target is periodically polarized in a uniform fashion, and the scattered field can be measured and analyzed. In this case, the emitted radiation resembles that of a short dipole, predominantly determined by electron contributions rather than ions. The scattering is correspondingly referred to as constructive elastic. Various properties can be derived from the measured radiation, such as total electron numbers, electron number densities (if the plasma volume is known), local magnetic fields through magnetically-induced depolarization, and electron collision frequencies for momentum transfer through the scattered phase. Notable advantages of the technique include a high sensitivity, ease of calibration using a dielectric scattering sample, good temporal resolution, low shot noise, non-intrusive probing, species-selectivity when coupled with resonance-enhanced multiphoton ionization (REMPI), single-shot acquisition, and the capability of time-gating due to continuous scanning. History Initially devised by Mikhail Shneider and Richard Miles at Princeton University, coherent microwave scattering has become a valuable technique in applications ranging from photoionization and electron-loss rate measurements to trace species detection, gaseous mixture and reaction characterization, molecular spectroscopy, electron propulsion device characterization, standoff measurement of electron collision frequencies for momentum transfer through the scattered phase, and standoff measurement of local vector magnetic fields through magnetically-induced depolarization. Scattering regimes For the simplest embodiment of linearly-polarized microwave scattering in the absence of magnetic depolarization, three regimes may arise depending on the correlation between scatterers. The Thomson regime refers to free plasma electrons oscillating in phase with the incident microwave field. The total scattering cross-section of an independent electron then coincides with the classical Thomson cross-section and is independent of the microwave wavelength λ. Second, Shneider-Miles scattering (SM, often referred to as collisional scattering) refers to collision-dominated electron motion with displacement oscillations shifted 90 degrees with respect to the irradiating field. The total scattering cross-section correspondingly exhibits an ω² dependence, a regime made possible only through interparticle interactions. Finally, the Rayleigh scattering regime, associated with restoring-force-dominated electron motion, can be observed; it shares an ω⁴ dependence with its volumetric-polarizability optical counterpart. In this case the "scattering particle" refers to the entire plasma object. As such, plasma expansion may cause a transition towards Mie scattering. Note that the Rayleigh regime here refers to small-particle ω⁴ scattering, rather than the even broader small-dipole approximation of the radiation. References Spectroscopy
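The three scalings can be recovered from the standard driven, damped (Lorentz) oscillator model of electron motion, in which the cross-section relative to the Thomson value goes as σ ∝ ω⁴/((ω₀² − ω²)² + ν²ω²), with ω₀ the restoring-force frequency and ν the collision frequency. The Python sketch below is an illustration under that textbook model; the parameter values are arbitrary assumptions, not taken from the references.

```python
# Minimal sketch, assuming the standard Lorentz (driven, damped) oscillator
# model of a plasma electron; parameter values are illustrative assumptions.
# Relative cross-section: sigma(w) ∝ w**4 / ((w0**2 - w**2)**2 + (nu*w)**2).

def relative_cross_section(w, w0, nu):
    """Scattering cross-section in units of the Thomson cross-section."""
    return w**4 / ((w0**2 - w**2) ** 2 + (nu * w) ** 2)

w = 1.0  # drive (microwave) angular frequency, arbitrary units
print(relative_cross_section(w, w0=0.0, nu=1e-3))  # ~1: Thomson (free, collisionless)
print(relative_cross_section(w, w0=0.0, nu=1e3))   # ~(w/nu)**2: collisional, w**2 scaling
print(relative_cross_section(w, w0=1e3, nu=1e-3))  # ~(w/w0)**4: Rayleigh, w**4 scaling
```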
Coherent microwave scattering
[ "Physics", "Chemistry" ]
591
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
71,498,836
https://en.wikipedia.org/wiki/Deep%20Space%20Transport%20LLC
Deep Space Transport LLC is a joint venture that is set to provide launch services for the Space Launch System rocket. The joint venture consists of Boeing, the prime contractor for the Space Launch System core stage and the Exploration Upper Stage that will be used on Space Launch System missions, and Northrop Grumman, the prime contractor for the Space Launch System's solid rocket boosters. Deep Space Transport LLC would be responsible for producing hardware and services for up to 10 Artemis launches beginning with the Artemis V mission, and up to 10 launches for other NASA missions. NASA expects to procure at least one flight per year to the Moon or other deep-space destinations. See also United Space Alliance, a similar entity for streamlining Space Shuttle contracts that operated from September 1996 (partnership between Rockwell International and Lockheed Martin) United Launch Alliance, a joint venture between Boeing and Lockheed Martin to operate launch vehicles References Space Space missions Space Shuttle program NASA programs Boeing
Deep Space Transport LLC
[ "Physics", "Mathematics" ]
189
[ "Spacetime", "Space", "Geometry" ]
54,313,516
https://en.wikipedia.org/wiki/Heavy%20quark%20effective%20theory
In quantum chromodynamics, heavy quark effective theory (HQET) is an effective field theory describing the physics of heavy quarks, that is, quarks of mass far greater than the QCD scale. It is used in studying the properties of hadrons containing a single charm or bottom quark. The effective theory was formalised in 1990 by Howard Georgi, Estia Eichten and Christopher Hill, building upon the works of Nathan Isgur and Mark Wise, Voloshin and Shifman, and others. Quantum chromodynamics (QCD) is the theory of the strong force, through which quarks and gluons interact. HQET is the limit of QCD with the quark mass taken to infinity while its four-velocity is held fixed. This approximation enables non-perturbative (in the strong interaction coupling) treatment of quarks that are much heavier than the QCD mass scale. This mass scale is of order 200 MeV. Hence the heavy quarks include the charm, bottom and top quarks, whereas the up, down and strange quarks are considered light. Since the top quark is extremely short-lived, only the charm and bottom quarks are of significant interest to HQET, of which only the latter has a mass sufficiently high that the effective theory can be applied without major perturbative corrections. References Further reading Quantum chromodynamics
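A worked equation makes the heavy-mass limit concrete. The following is the textbook form of the HQET Lagrangian (a standard statement, not quoted from this article): h_v is the heavy-quark field at fixed four-velocity v, D is the QCD covariant derivative, and the 1/m_Q terms are the leading corrections.

```latex
% Leading-order HQET Lagrangian plus first 1/m_Q corrections (textbook form):
\[
  \mathcal{L}_{\mathrm{HQET}}
   = \bar h_v \,(i v\!\cdot\! D)\, h_v
   + \frac{1}{2 m_Q}\,\bar h_v (i D_\perp)^2 h_v
   + \frac{g}{4 m_Q}\,\bar h_v\, \sigma_{\mu\nu} G^{\mu\nu} h_v
   + \mathcal{O}\!\left(1/m_Q^2\right),
\]
% where D_\perp^\mu = D^\mu - v^\mu (v\cdot D). In the m_Q -> infinity limit only
% the first term survives; it is independent of the heavy quark's mass and spin,
% which is the origin of heavy-quark symmetry.
```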
Heavy quark effective theory
[ "Physics" ]
295
[ "Particle physics stubs", "Particle physics" ]
54,316,088
https://en.wikipedia.org/wiki/Non-relativistic%20spacetime
In physics, a non-relativistic spacetime is any mathematical model that fuses n-dimensional space and m-dimensional time into a single continuum other than the (3+1) model used in relativity theory. In the sense used in this article, a spacetime is deemed "non-relativistic" if (a) it deviates from (3+1) dimensionality, even if the postulates of special or general relativity are otherwise satisfied, or if (b) it does not obey the postulates of special or general relativity, regardless of the model's dimensionality. Introduction There are many reasons why spacetimes may be studied that do not satisfy relativistic postulates and/or that deviate from the apparent (3+1) dimensionality of the known universe. Galilean/Newtonian spacetime The classic example of a non-relativistic spacetime is the spacetime of Galileo and Newton. It is the spacetime of everyday "common sense". Galilean/Newtonian spacetime assumes that space is Euclidean (i.e. "flat"), and that time has a constant rate of passage that is independent of the state of motion of an observer, or indeed of anything external. Newtonian mechanics takes place within the context of Galilean/Newtonian spacetime. For a vast set of problems, the results of computations using Newtonian mechanics are only imperceptibly different from computations using a relativistic model. Since computations using Newtonian mechanics are considerably simpler than those using relativistic mechanics, and correspond better to intuition, most everyday mechanics problems are solved using Newtonian mechanics. Model systems Efforts since 1930 to develop a consistent quantum theory of gravity have not yet produced more than tentative results. The study of quantum gravity is difficult for multiple reasons. Technically, general relativity is a complex, nonlinear theory. Very few problems of significant interest admit of analytical solution, and numerical solutions in the strong-field realm can require immense amounts of supercomputer time. Conceptual issues present an even greater difficulty, since general relativity states that gravity is a consequence of the geometry of spacetime. To produce a quantum theory of gravity would therefore require quantizing the basic units of measurement themselves: space and time. A completed theory of quantum gravity would undoubtedly present a visualization of the Universe unlike any that has hitherto been imagined. One promising research approach is to explore the features of simplified models of quantum gravity that present fewer technical difficulties while retaining the fundamental conceptual features of the full-fledged model. In particular, general relativity in reduced dimensions (2+1) retains the same basic structure of the full (3+1) theory, but is technically far simpler. Multiple research groups have adopted this approach to studying quantum gravity. "New physics" theories The idea that relativistic theory could be usefully extended with the introduction of extra dimensions originated with Nordström's 1914 modification of his previous 1912 and 1913 theories of gravitation. In this modification, he added an additional dimension, resulting in a 5-dimensional vector theory. Kaluza–Klein theory (1921) was an attempt to unify relativity theory with electromagnetism. Although at first enthusiastically welcomed by physicists such as Einstein, Kaluza–Klein theory was too beset with inconsistencies to be a viable theory. 
Various superstring theories have effective low-energy limits that correspond to classical spacetimes with alternate dimensionalities than the apparent dimensionality of the observed universe. It has been argued that all but the (3+1)-dimensional worlds represent dead worlds with no observers. Therefore, on the basis of anthropic arguments, it would be predicted that the observed universe should be one of (3+1) spacetime. Space and time may not be fundamental properties, but rather may represent emergent phenomena whose origins lie in quantum entanglement. It has occasionally been wondered whether it is possible to derive sensible laws of physics in a universe with more than one time dimension. Early attempts at constructing spacetimes with extra timelike dimensions inevitably met with issues such as causality violation and so could be immediately rejected, but it is now known that viable frameworks exist for such spacetimes that can be correlated with general relativity and the Standard Model, and which make predictions of new phenomena that are within the range of experimental access. Possible observational evidence Observed high values of the cosmological constant may imply kinematics significantly different from relativistic kinematics. A deviation from relativistic kinematics would have significant cosmological implications in regard to such puzzles as the "missing mass" problem. To date, general relativity has satisfied all experimental tests. However, proposals that may lead to a quantum theory of gravity (such as string theory and loop quantum gravity) generically predict violations of the weak equivalence principle in the 10⁻¹³ to 10⁻¹⁸ range. Currently envisioned tests of the weak equivalence principle are approaching a degree of sensitivity such that non-discovery of a violation would be just as profound a result as discovery of a violation. Non-discovery of equivalence principle violation in this range would suggest that gravity is so fundamentally different from other forces as to require a major reevaluation of current attempts to unify gravity with the other forces of nature. A positive detection, on the other hand, would provide a major guidepost towards unification. Condensed matter physics Research on condensed matter has spawned a two-way relationship between spacetime physics and condensed matter physics: On the one hand, spacetime approaches have been used to investigate certain condensed matter phenomena. For example, spacetimes with local non-relativistic symmetries capable of supporting massive matter fields have been investigated. This approach has been used to investigate the details of matter couplings, transport phenomena, and the thermodynamics of non-relativistic fluids. On the other hand, condensed matter systems can be used to mimic certain aspects of general relativity. Although intrinsically non-relativistic, these systems provide models of curved-spacetime quantum field theory that are experimentally accessible. These include acoustical models in flowing fluids, Bose–Einstein condensate systems, and quasiparticles in moving superfluids, such as the quasiparticles and domain walls of the A-phase of superfluid ³He. Examples of model systems Examples of "new physics" theories Examples of possible observational evidence Examples in condensed matter physics Further reading Debono, I. and G. F. Smoot. General Relativity and Cosmology: Unsolved Questions and Future Directions See also Non-relativistic gravitational fields References Theory of relativity
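The contrast drawn in the Galilean/Newtonian section above can be made precise with the two standard transformation laws (a textbook comparison, not drawn from this article): the Galilean boost leaves time absolute, while the Lorentz boost mixes space and time.

```latex
% Boost with velocity v along x. Galilean (non-relativistic) transformation:
\[
  x' = x - vt, \qquad t' = t .
\]
% Lorentz (special-relativistic) transformation, with \gamma = 1/\sqrt{1 - v^2/c^2}:
\[
  x' = \gamma\,(x - vt), \qquad t' = \gamma\!\left(t - \frac{v x}{c^2}\right).
\]
% The Galilean form is recovered in the limit v << c, where \gamma -> 1 and
% the vx/c^2 term becomes negligible.
```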
Non-relativistic spacetime
[ "Physics" ]
1,374
[ "Theory of relativity" ]
54,318,480
https://en.wikipedia.org/wiki/Haifukiho
Haifuki-ho (灰吹法; literally "ash-blowing method"), also known as the lead-silver separation method (Korean: 연은분리법; Hanja: 鉛銀分離法), is a method of silver refining developed in the Joseon dynasty of Korea in the 16th century that spread to China and feudal Japan. The industrial process involved cupellation, and was a contributing factor to the large amount of silver traditionally exported by Japan. History In 1526 Kamiya Jutei, a wealthy merchant from Hakata, founded the Iwami Ginzan Silver Mine in Ōda. Seeking to increase silver production, in 1533 he introduced to the mine a Korean method of silver refining, which became the Hai-Fuki-Ho method. Two technicians, Keiju (慶寿; Korean: 경수; Revised Romanization: Gyeongsu) and Sotan (宗丹; Korean: 종단; Revised Romanization: Jongdan), were invited to Japan to teach the technique. Historians have compared the Hai-Fuki-Ho method to the medieval European method of silver smelting. Under the Hai-Fuki-Ho method, silver-containing copper ore would be cast-smelted with lead, then allowed to cool. The silver in the copper ore would bind to the lead, creating a single mixture. This mixture would then be heated so that the lead melted and separated out of the copper, taking the bonded silver with it. The silver-rich lead would then be treated with an oxidizing airflow to separate the silver. This was akin to a liquation method. The high-purity silver produced by the Hai-Fuki-Ho method was highly desired by foreign merchants. In addition, the process allowed greater amounts of silver to be produced by Japanese mines, which had more efficient refining processes than their competitors. By the 16th century, Japanese mines were producing up to one third of the world's silver. The Hai-Fuki-Ho method was eventually replaced by more modern methods of silver refining. References Metallurgical processes Archaeometallurgy
Haifukiho
[ "Chemistry", "Materials_science" ]
454
[ "Metallurgical processes", "Archaeometallurgy", "Metallurgy" ]
54,321,473
https://en.wikipedia.org/wiki/Chalconatronite
Chalconatronite is a carbonate mineral and rare secondary copper mineral that contains copper, sodium, carbon, oxygen, and hydrogen; its chemical formula is Na₂Cu(CO₃)₂·3H₂O. Chalconatronite is partially soluble in water, where it only decomposes, but it is soluble in cold dilute acids. The name comes from the mineral's constituents: copper ("chalcos" in Greek) and natron, a naturally occurring sodium carbonate. The mineral is thought to be formed by water carrying alkali carbonates (possibly from soil) reacting with bronze. Similar minerals include malachite, azurite, and other copper carbonates. Chalconatronite has also been found and recorded in Australia, Germany, and Colorado. Bronze Disease Most chalconatronite has formed on bronze and silver objects that have been treated with either sodium sesquicarbonate or sodium cyanide to prevent corrosion and bronze disease. The mineral has also been proven to form on the surface of copper artifacts after being treated with aqueous sodium carbonate. This formation by using sodium sesquicarbonate is considered undesirable by many antique collectors, as the mineral changes the patinas of copper artifacts. When the mineral forms, it can replace copper salts within the patina, and turn the color from a rich green to a blue-green or even black. Historical Occurrence The mineral was recorded in 1955 on three bronze artifacts from ancient Egypt, which were being held in the Fogg Art Museum at Harvard. Chalconatronite was found inside of two bronze figures (one depicting a seated Sekhmet, and another depicting a group of cats and kittens) from around the late Nubian Dynasty or early Saite Period. Another chalconatronite specimen was found under a bronze censer from the late Coptic Period. The chalconatronite found on the censer formed over cuprite and some atacamite crystals, which are associated minerals. Chalconatronite was also found on iron and copper Roman armor in 1982 at a site in Chester, England. Some of the mineral was found on a copper pin in St. Mark's Basilica, Venice, and in two different Mayan paintings. Along with pseudomalachite, chalconatronite was found on an illuminated manuscript from the sixteenth century. Synthetic chalconatronite may have been made in ancient China as a form of pigment, named "synthetic malachite". It was made by taking copper oxide and boiling it with white alum in a "sufficient amount of water". After the result was cooled, a natron solution would be added to precipitate a synthetic form of chalconatronite, as sodium copper carbonate. See also Atacamite Cuprite Patina Botallackite Bronze disease References Carbonate minerals Copper(II) minerals Corrosion Minerals
Chalconatronite
[ "Chemistry", "Materials_science" ]
601
[ "Metallurgy", "Electrochemistry", "Materials degradation", "Corrosion" ]
54,323,077
https://en.wikipedia.org/wiki/LeDock
LeDock is a molecular docking software, designed for protein-ligand interactions, that is compatible with Linux, macOS, and Windows. The software can run as a standalone programme or from Jupyter Notebook. It supports the Tripos Mol2 file format. Methodology LeDock utilizes a combined simulated annealing and genetic algorithm approach to dock ligands into protein targets. The software employs a knowledge-based scoring scheme derived from extensive prospective virtual screening campaigns. It is categorized as a flexible docking method. Performance In a study involving 2,002 protein-ligand complexes, LeDock demonstrated a notable level of accuracy in predicting molecular poses. The Linux version contains command line tools to run automated virtual screening of large molecular libraries in the cloud. In a performance evaluation of ten docking programs, LeDock demonstrated strong sampling power when compared against other commercial and academic alternatives. According to a review from 2017, LeDock was noted for its effectiveness in sampling ligand conformational space, identifying near-native binding poses, and having a flexible docking protocol. See also Drug design Macromolecular docking Molecular mechanics Molecular modelling Protein structure Protein design List of software for molecular mechanics modeling List of protein-ligand docking software Molecular design software Lead Finder Virtual screening Scoring functions for docking References External links Official website Medicinal chemistry Drug discovery Molecular modelling software
LeDock
[ "Chemistry", "Biology" ]
286
[ "Molecular modelling software", "Computational chemistry software", "Drug discovery", "Life sciences industry", "Molecular modelling", "nan", "Medicinal chemistry", "Biochemistry" ]
54,325,825
https://en.wikipedia.org/wiki/Esso%20Research%20Centre
The Esso Research Centre was a research centre in Oxfordshire. History The site was Esso's main European technical centre for fuels and lubricants. The site was extended in 1957. Operations ceased in the early 2000s. Structure The site housed the staff of Esso Research, around 500 scientists and engineers. Function It conducted research into fuel and lubricant chemistry. Location It was situated on the western side of the A4130 (the original A34 trunk route) on Milton Hill, above Steventon, Oxfordshire. On disposal the site was split in two, between the headquarters of Infineum and the Milton Hill Business and Technology Centre. By 2018 the site had been cleared. External links Photos of pump/water tower in 2018 and 2013. References Engineering research institutes Earth science research institutes History of the petroleum industry in the United Kingdom Petroleum organizations Research institutes in Oxfordshire Vale of White Horse Energy research institutes 2000s disestablishments in England ExxonMobil buildings and structures
Esso Research Centre
[ "Chemistry", "Engineering" ]
196
[ "Engineering research institutes", "Petroleum", "Petroleum organizations", "Energy research institutes", "Energy organizations" ]
52,999,026
https://en.wikipedia.org/wiki/Junction-mediating%20and%20regulatory%20protein
Junction-mediating and regulatory protein, or JMY, is a 110 kDa protein that interacts with p300 and has a role in regulating p53 activity. Additionally, JMY is a member of the WASp family of actin nucleators. References Proteins
Junction-mediating and regulatory protein
[ "Chemistry" ]
57
[ "Biomolecules by chemical classification", "Protein stubs", "Biochemistry stubs", "Molecular biology", "Proteins" ]
53,002,210
https://en.wikipedia.org/wiki/Bouncing%20ball
The physics of a bouncing ball concerns the physical behaviour of bouncing balls, particularly their motion before, during, and after impact against the surface of another body. Several aspects of a bouncing ball's behaviour serve as an introduction to mechanics in high school or undergraduate level physics courses. However, the exact modelling of the behaviour is complex and of interest in sports engineering. The motion of a ball is generally described by projectile motion (which can be affected by gravity, drag, the Magnus effect, and buoyancy), while its impact is usually characterized through the coefficient of restitution (which can be affected by the nature of the ball, the nature of the impacting surface, the impact velocity, rotation, and local conditions such as temperature and pressure). To ensure fair play, many sports governing bodies set limits on the bounciness of their ball and forbid tampering with the ball's aerodynamic properties. The bounciness of balls has been a feature of sports as ancient as the Mesoamerican ballgame. Forces during flight and effect on motion The motion of a bouncing ball obeys projectile motion. Many forces act on a real ball, namely the gravitational force (FG), the drag force due to air resistance (FD), the Magnus force due to the ball's spin (FM), and the buoyant force (FB). In general, one has to use Newton's second law taking all forces into account to analyze the ball's motion: \( m\,\frac{\mathrm{d}^2\mathbf{r}}{\mathrm{d}t^2} = \mathbf{F}_G + \mathbf{F}_D + \mathbf{F}_M + \mathbf{F}_B, \) where m is the ball's mass. Here, a, v, r represent the ball's acceleration, velocity, and position over time t. Gravity The gravitational force is directed downwards and is equal to \( F_G = mg, \) where m is the mass of the ball, and g is the gravitational acceleration, which on Earth varies between about 9.78 m/s² and 9.83 m/s². Because the other forces are usually small, the motion is often idealized as being only under the influence of gravity. If only the force of gravity acts on the ball, the mechanical energy will be conserved during its flight. In this idealized case, the equations of motion are given by \( \mathbf{a} = -g\,\hat{\mathbf{y}}, \qquad \mathbf{v} = \mathbf{v}_0 - gt\,\hat{\mathbf{y}}, \qquad \mathbf{r} = \mathbf{r}_0 + \mathbf{v}_0 t - \tfrac{1}{2}gt^2\,\hat{\mathbf{y}}, \) where a, v, and r denote the acceleration, velocity, and position of the ball, and v0 and r0 are the initial velocity and position of the ball, respectively. More specifically, if the ball is bounced at an angle θ with the ground, the motion in the x- and y-axes (representing horizontal and vertical motion, respectively) is described by \( v_x = v_0\cos\theta, \quad x = v_0 t\cos\theta; \qquad v_y = v_0\sin\theta - gt, \quad y = v_0 t\sin\theta - \tfrac{1}{2}gt^2. \) The equations imply that the maximum height (H) and range (R) and time of flight (T) of a ball bouncing on a flat surface are given by \( H = \frac{v_0^2\sin^2\theta}{2g}, \qquad R = \frac{v_0^2\sin 2\theta}{g}, \qquad T = \frac{2v_0\sin\theta}{g}. \) Further refinements to the motion of the ball can be made by taking into account air resistance (and related effects such as drag and wind), the Magnus effect, and buoyancy. Because lighter balls accelerate more readily, their motion tends to be affected more by such forces. Drag Air flow around the ball can be either laminar or turbulent depending on the Reynolds number (Re), defined as: \( \mathrm{Re} = \frac{\rho v D}{\mu}, \) where ρ is the density of air, μ the dynamic viscosity of air, D the diameter of the ball, and v the velocity of the ball through air. At a temperature of 20 °C, ρ ≈ 1.2 kg/m³ and μ ≈ 1.8×10⁻⁵ Pa·s. If the Reynolds number is very low (Re < 1), the drag force on the ball is described by Stokes' law: \( F_D = 6\pi\mu r v, \) where r is the radius of the ball. This force acts in opposition to the ball's direction (in the direction of \( -\hat{\mathbf{v}} \)). For most sports balls, however, the Reynolds number will be between 10⁴ and 10⁵ and Stokes' law does not apply. 
At these higher values of the Reynolds number, the drag force on the ball is instead described by the drag equation: \( F_D = \tfrac{1}{2}\rho C_d A v^2, \) where Cd is the drag coefficient, and A the cross-sectional area of the ball. Drag will cause the ball to lose mechanical energy during its flight, and will reduce its range and height, while crosswinds will deflect it from its original path. Both effects have to be taken into account by players in sports such as golf. Magnus effect The spin of the ball will affect its trajectory through the Magnus effect. According to the Kutta–Joukowski theorem, for a spinning sphere with an inviscid flow of air, the Magnus force is equal to \( \mathbf{F}_M = \tfrac{16}{3}\pi^2 r^3 \rho\, \boldsymbol{\omega}\times\mathbf{v}, \) where r is the radius of the ball, ω the angular velocity (or spin rate) of the ball, ρ the density of air, and v the velocity of the ball relative to air. This force is directed perpendicular to the motion and perpendicular to the axis of rotation (in the direction of \( \boldsymbol{\omega}\times\mathbf{v} \)). The force is directed upwards for backspin and downwards for topspin. In reality, flow is never inviscid, and the Magnus lift is better described by \( F_M = \tfrac{1}{2}\rho C_L A v^2, \) where ρ is the density of air, CL the lift coefficient, A the cross-sectional area of the ball, and v the velocity of the ball relative to air. The lift coefficient is a complex factor which depends amongst other things on the ratio rω/v, the Reynolds number, and surface roughness. In certain conditions, the lift coefficient can even be negative, changing the direction of the Magnus force (reverse Magnus effect). In sports like tennis or volleyball, the player can use the Magnus effect to control the ball's trajectory (e.g. via topspin or backspin) during flight. In golf, the effect is responsible for slicing and hooking, which are usually a detriment to the golfer, but it also helps with increasing the range of a drive and other shots. In baseball, pitchers use the effect to create curveballs and other special pitches. Ball tampering is often illegal, and is often at the centre of cricket controversies such as the one between England and Pakistan in August 2006. In baseball, the term 'spitball' refers to the illegal coating of the ball with spit or other substances to alter the aerodynamics of the ball. Buoyancy Any object immersed in a fluid such as water or air will experience an upwards buoyancy. According to Archimedes' principle, this buoyant force is equal to the weight of the fluid displaced by the object. In the case of a sphere, this force is equal to \( F_B = \tfrac{4}{3}\pi r^3 \rho g. \) The buoyant force is usually small compared to the drag and Magnus forces and can often be neglected. However, in the case of a basketball, the buoyant force can amount to about 1.5% of the ball's weight. Since buoyancy is directed upwards, it will act to increase the range and height of the ball. Impact When a ball impacts a surface, the surface recoils and vibrates, as does the ball, creating both sound and heat, and the ball loses kinetic energy. Additionally, the impact can impart some rotation to the ball, transferring some of its translational kinetic energy into rotational kinetic energy. This energy loss is usually characterized (indirectly) through the coefficient of restitution (or COR, denoted e): \( e = \frac{\lvert v_f - u_f \rvert}{\lvert v_i - u_i \rvert}, \) where vf and vi are the final and initial velocities of the ball, and uf and ui are the final and initial velocities of the impacting surface, respectively. 
In the specific case where a ball impacts on an immovable surface, the COR simplifies to \( e = \frac{v_f}{v_i}. \) For a ball dropped against a floor, the COR will therefore vary between 0 (no bounce, total loss of energy) and 1 (perfectly bouncy, no energy loss). A COR value below 0 or above 1 is theoretically possible, but would indicate that the ball went through the surface (e < 0), or that the surface was not "relaxed" when the ball impacted it (e > 1), like in the case of a ball landing on a spring-loaded platform. To analyze the vertical and horizontal components of the motion, the COR is sometimes split up into a normal COR (ey), and tangential COR (ex), defined as \( e_y = \frac{\lvert v_{yf} - u_{yf} \rvert}{\lvert v_{yi} - u_{yi} \rvert}, \qquad e_x = \frac{\lvert (v_{xf} - r\omega_f) - (u_{xf} - R\Omega_f) \rvert}{\lvert (v_{xi} - r\omega_i) - (u_{xi} - R\Omega_i) \rvert}, \) where r and ω denote the radius and angular velocity of the ball, while R and Ω denote the radius and angular velocity of the impacting surface (such as a baseball bat). In particular rω is the tangential velocity of the ball's surface, while RΩ is the tangential velocity of the impacting surface. These are especially of interest when the ball impacts the surface at an oblique angle, or when rotation is involved. For a straight drop on the ground with no rotation, with only the force of gravity acting on the ball, the COR can be related to several other quantities by: \( e = \frac{v_f}{v_i} = \sqrt{\frac{K_f}{K_i}} = \sqrt{\frac{U_f}{U_i}} = \sqrt{\frac{H_f}{H_i}} = \frac{T_f}{T_i}. \) Here, K and U denote the kinetic and potential energy of the ball, H is the maximum height of the ball, and T is the time of flight of the ball. The 'i' and 'f' subscript refer to the initial (before impact) and final (after impact) states of the ball. Likewise, the energy loss at impact can be related to the COR by \( \frac{\Delta K}{K_i} = 1 - e^2. \) The COR of a ball can be affected by several things, mainly: the nature of the impacting surface (e.g. grass, concrete, wire mesh); the material of the ball (e.g. leather, rubber, plastic); the pressure inside the ball (if hollow); the amount of rotation induced in the ball at impact; and the impact velocity. External conditions such as temperature can change the properties of the impacting surface or of the ball, making them either more flexible or more rigid. This will, in turn, affect the COR. In general, the ball will deform more at higher impact velocities and will accordingly lose more of its energy, decreasing its COR. 
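For a quick numerical illustration of the relations above, consider the following minimal Python sketch (the drop height and COR value are illustrative assumptions, not taken from the article): since e = √(H_f/H_i) for a straight drop, successive rebound heights under a constant COR decay geometrically as H_n = H_0·e^(2n).

```python
# Minimal sketch: rebound heights for a ball dropped from h0 with constant
# COR e, using e = sqrt(H_f / H_i)  =>  H_n = h0 * e**(2 * n).
# The values below are illustrative, not taken from the article.

def rebound_heights(h0, e, n_bounces):
    """Return the heights reached after each of the first n_bounces impacts."""
    return [h0 * e ** (2 * n) for n in range(1, n_bounces + 1)]

h0 = 1.8   # drop height in metres (a FIBA-style test height)
e = 0.76   # a plausible basketball COR
for n, h in enumerate(rebound_heights(h0, e, 5), start=1):
    print(f"bounce {n}: {h:.3f} m")
# The first bounce lands near 1.04 m, consistent with the FIBA
# 1035-1085 mm rebound window quoted later in the article.
```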
It is also possible for the ball to start spinning in the opposite direction, and even bounce backwards. If a ball is propelled forward with topspin, the translational and rotational friction will act in opposite directions. What exactly happens depends on which of the two components dominates. If the ball is spinning much more rapidly than it was moving, rotational friction will dominate. The ball's angular velocity will be reduced after impact, but its horizontal velocity will be increased. The ball will be propelled forward but will not exceed its original height, and will keep spinning in the same direction. If the ball is moving much more rapidly than it was spinning, translational friction will dominate. The ball's angular velocity will be increased after impact, but its horizontal velocity will be decreased. The ball will not exceed its original height and will keep spinning in the same direction. If the surface is inclined by some amount θ, the entire diagram would be rotated by θ, but the force of gravity would remain pointing downwards (forming an angle θ with the surface). Gravity would then have a component parallel to the surface, which would contribute to friction, and thus contribute to rotation. In racquet sports such as table tennis or racquetball, skilled players will use spin (including sidespin) to suddenly alter the ball's direction when it impacts a surface, such as the ground or their opponent's racquet. Similarly, in cricket, there are various methods of spin bowling that can make the ball deviate significantly off the pitch. Non-spherical balls The bounce of an oval-shaped ball (such as those used in gridiron football or rugby football) is in general much less predictable than the bounce of a spherical ball. Depending on the ball's alignment at impact, the normal force can act ahead or behind the centre of mass of the ball, and friction from the ground will depend on the alignment of the ball, as well as its rotation, spin, and impact velocity. Where the forces act with respect to the centre of mass of the ball changes as the ball rolls on the ground, and all forces can exert a torque on the ball, including the normal force and the force of gravity. This can cause the ball to bounce forward, bounce back, or sideways. Because it is possible to transfer some rotational kinetic energy into translational kinetic energy, it is even possible for the COR to be greater than 1, or for the forward velocity of the ball to increase upon impact. Multiple stacked balls A popular demonstration involves the bounce of multiple stacked balls. If a tennis ball is stacked on top of a basketball, and the two of them are dropped at the same time, the tennis ball will bounce much higher than it would have if dropped on its own, even exceeding its original release height. The result is surprising as it apparently violates conservation of energy. However, upon closer inspection, the basketball does not bounce as high as it would have if the tennis ball had not been on top of it, and transferred some of its energy into the tennis ball, propelling it to a greater height. The usual explanation involves considering two separate impacts: the basketball impacting with the floor, and then the basketball impacting with the tennis ball. Assuming perfectly elastic collisions, the basketball impacting the floor at 1 m/s would rebound at 1 m/s. 
The tennis ball going at 1 m/s would then have a relative impact velocity of 2 m/s, which means it would rebound at 2 m/s relative to the basketball, or 3 m/s relative to the floor, and triple its rebound velocity compared to impacting the floor on its own. This implies that the ball would bounce to 9 times its original height. In reality, due to inelastic collisions, the tennis ball will increase its velocity and rebound height by a smaller factor, but still will bounce faster and higher than it would have on its own. While the assumption of separate impacts is not actually valid (the balls remain in close contact with each other during most of the impact), this model will nonetheless reproduce experimental results with good agreement, and is often used to understand more complex phenomena such as the core collapse of supernovae, or gravitational slingshot manoeuvres. Sport regulations Several sports governing bodies regulate the bounciness of a ball through various ways, some direct, some indirect. AFL: Regulates the gauge pressure of the football to be between 62 kPa and 76 kPa. FIBA: Regulates the gauge pressure so the basketball bounces between 1035 mm and 1085 mm (bottom of the ball) when it is dropped from a height of 1800 mm (bottom of the ball). This corresponds to a COR between 0.758 and 0.776. FIFA: Regulates the gauge pressure of the soccer ball to be between 0.6 and 1.1 atmospheres at sea level (61 to 111 kPa). FIVB: Regulates the gauge pressure of the volleyball to be between 0.300 kg/cm² and 0.325 kg/cm² (29.4 to 31.9 kPa) for indoor volleyball, and 0.175 kg/cm² to 0.225 kg/cm² (17.2 to 22.1 kPa) for beach volleyball. ITF: Regulates the height of the tennis ball bounce when dropped on a "smooth, rigid and horizontal block of high mass". Different types of ball are allowed for different types of surfaces. When dropped from a height of 100 inches (254 cm), the bounce must be 54–60 in (137–152 cm) for Type 1 balls, 53–58 in (135–147 cm) for Type 2 and Type 3 balls, and 48–53 in (122–135 cm) for High Altitude balls. This roughly corresponds to a COR of 0.735–0.775 (Type 1 ball), 0.728–0.762 (Type 2 & 3 balls), and 0.693–0.728 (High Altitude balls) when dropped on the testing surface. ITTF: Regulates the playing surface so that the table tennis ball bounces approximately 23 cm when dropped from a height of 30 cm. This roughly corresponds to a COR of about 0.876 against the playing surface. NBA: Regulates the gauge pressure of the basketball to be between 7.5 and 8.5 psi (51.7 to 58.6 kPa). NFL: Regulates the gauge pressure of the American football to be between 12.5 and 13.5 psi (86 to 93 kPa). R&A/USGA: Limits the COR of the golf ball directly, which should not exceed 0.83 against a golf club. The pressure of an American football was at the center of the deflategate controversy. Some sports do not regulate the bouncing properties of balls directly, but instead specify a construction method. In baseball, the introduction of a cork-based ball helped to end the dead-ball era and trigger the live-ball era. See also Bouncy ball List of ball games Quantum bouncing ball Notes References Further reading Balls Sports rules and regulations Classical mechanics Kinematics Dynamical systems Motion (physics)
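The two-collision argument for the stacked balls is easy to check numerically. Below is a minimal Python sketch (the masses and speeds are illustrative assumptions): it treats the bounce as two sequential one-dimensional elastic collisions, floor-basketball and then basketball-tennis ball, using the standard elastic-collision formula.

```python
# Minimal sketch of the stacked-ball demonstration as two sequential,
# perfectly elastic 1-D collisions. Masses and speeds are illustrative.

def elastic_collision(m1, v1, m2, v2):
    """Final velocities of two bodies after a 1-D perfectly elastic collision."""
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1f, v2f

m_basket, m_tennis = 0.62, 0.057   # masses in kg (typical values)
v = 1.0                            # both balls falling at 1 m/s

# 1) The basketball bounces elastically off the floor: -1 m/s -> +1 m/s.
v_basket = +v
# 2) The rising basketball (+1 m/s) meets the falling tennis ball (-1 m/s).
_, v_tennis_f = elastic_collision(m_basket, v_basket, m_tennis, -v)
print(f"tennis ball rebound: {v_tennis_f:.2f} m/s")   # ~2.66 m/s
# In the limit m_tennis << m_basket this approaches the ideal 3 m/s of the
# text's argument, i.e. up to 9x the original rebound height (h ∝ v²).
```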
Bouncing ball
[ "Physics", "Mathematics", "Technology" ]
3,618
[ "Machines", "Kinematics", "Physical phenomena", "Classical mechanics", "Physical systems", "Motion (physics)", "Space", "Mechanics", "Spacetime", "Dynamical systems" ]
57,632,169
https://en.wikipedia.org/wiki/H3K9me2
H3K9me2 is an epigenetic modification to the DNA packaging protein Histone H3. It is a mark that indicates the di-methylation at the 9th lysine residue of the histone H3 protein. H3K9me2 is strongly associated with transcriptional repression. H3K9me2 levels are higher at silent compared to active genes in a 10kb region surrounding the transcriptional start site. H3K9me2 represses gene expression both passively, by prohibiting acetylation and therefore binding of RNA polymerase or its regulatory factors, and actively, by recruiting transcriptional repressors. H3K9me2 has also been found in megabase blocks, termed Large Organised Chromatin K9 domains (LOCKs), which are primarily located within gene-sparse regions but also encompass genic and intergenic intervals. Its synthesis is catalyzed by G9a, G9a-like protein, and PRDM2. H3K9me2 can be removed by a wide range of histone lysine demethylases (KDMs) including KDM1, KDM3, KDM4 and KDM7 family members. H3K9me2 is important for various biological processes including cell lineage commitment, the reprogramming of somatic cells to induced pluripotent stem cells, regulation of the inflammatory response, and addiction to drug use. Nomenclature H3K9me2 indicates dimethylation of lysine 9 on histone H3 protein subunit: Lysine methylation This diagram shows the progressive methylation of a lysine residue. The di-methylation (third from left) denotes the methylation present in H3K9me2. Understanding histone modifications The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as histones. The complexes formed by the looping of the DNA are known as chromatin. The basic structural unit of chromatin is the nucleosome: this consists of the core octamer of histones (H2A, H2B, H3 and H4) as well as a linker histone and about 180 base pairs of DNA. These core histones are rich in lysine and arginine residues. The carboxyl (C) terminal end of these histones contributes to histone-histone interactions, as well as histone-DNA interactions. The amino (N) terminal charged tails are the site of the post-translational modifications, such as the one seen in H3K9me2. Epigenetic implications The post-translational modification of histone tails by either histone modifying complexes or chromatin remodelling complexes is interpreted by the cell and leads to complex, combinatorial transcriptional output. It is thought that a histone code dictates the expression of genes by a complex interaction between the histones in a particular region. The current understanding and interpretation of histones comes from two large scale projects: ENCODE and the Epigenomic roadmap. The purpose of the epigenomic study was to investigate epigenetic changes across the entire genome. This led to chromatin states which define genomic regions by grouping the interactions of different proteins and/or histone modifications together. Chromatin states were investigated in Drosophila cells by looking at the binding location of proteins in the genome. Use of chromatin immunoprecipitation (ChIP)-sequencing revealed regions in the genome characterised by different banding. Different developmental stages were profiled in Drosophila as well, with an emphasis placed on the relevance of histone modifications. A look into the data obtained led to the definition of chromatin states based on histone modifications. Certain modifications were mapped and enrichment was seen to localize in certain genomic regions.
Five core histone modifications were found, with each respective one being linked to various cell functions: H3K4me3 (promoters), H3K4me1 (primed enhancers), H3K36me3 (gene bodies), H3K27me3 (polycomb repression) and H3K9me3 (heterochromatin), with H3K9me2 additionally marking facultative heterochromatin. The human genome was annotated with chromatin states. These annotated states can be used as new ways to annotate a genome independently of the underlying genome sequence. This independence from the DNA sequence enforces the epigenetic nature of histone modifications. Chromatin states are also useful in identifying regulatory elements that have no defined sequence, such as enhancers. This additional level of annotation allows for a deeper understanding of cell specific gene regulation. Clinical significance Addiction Chronic addictive drug exposure results in ΔFosB-mediated repression of G9a and reduced H3K9 dimethylation in the nucleus accumbens, which in turn causes dendritic arborization, altered synaptic protein expression, and increased drug-seeking behavior. In contrast, accumbal G9a hyperexpression results in markedly increased H3K9 dimethylation and blocks the induction of this neural and behavioral plasticity by chronic drug use, which occurs via H3K9me2-mediated repression of transcription factors for ΔFosB and H3K9me2-mediated repression of various ΔFosB transcriptional targets (e.g., CDK5). Due to the involvement of H3K9me2 in these feedback loops and the central pathophysiological role of ΔFosB overexpression as the mechanistic trigger for addiction, the reduction of accumbal H3K9me2 following repeated drug exposure directly mediates the development of drug addictions. Friedreich's ataxia R-loops are found with the H3K9me2 mark at FXN in Friedreich's ataxia cells. Cardiovascular disease H3K9me2 is present at a subset of cardiovascular disease-associated gene promoters in vascular smooth muscle cells to block binding of NFκB and AP-1 (activator protein-1) transcription factors. Reduced levels of H3K9me2 have been observed in vascular smooth muscle cells from human atherosclerotic lesions compared to healthy aortic tissue in patients. Vascular smooth muscle cells from diabetic patients display reduced levels of H3K9me2 compared to non-diabetic controls; it has therefore been suggested that dysregulation of H3K9me2 might underlie the vascular complications associated with diabetes. Loss of H3K9me2 in vascular smooth muscle cells exacerbates upregulation of a subset of cardiovascular disease-associated genes in vascular disease models. Methods Histone modifications, including H3K9me2, can be detected using a variety of methods: Chromatin Immunoprecipitation Sequencing (ChIP-sequencing) measures the amount of DNA enrichment once bound to a targeted protein and immunoprecipitated. It is well optimised and is used in vivo to reveal DNA-protein binding occurring in cells. ChIP-Seq can be used to identify and quantify various DNA fragments for different histone modifications along a genomic region. CUT&RUN (Cleavage Under Targets and Release Using Nuclease). In CUT&RUN, targeted DNA-protein complexes are isolated directly from the cell nucleus rather than following a precipitation step. To perform CUT&RUN, a specific antibody to the DNA-binding protein of interest and ProtA-MNase is added to permeabilised cells. MNase is tethered to the protein of interest through the ProtA-antibody interaction and MNase cleaves the surrounding, unprotected DNA to release protein-DNA complexes, which can then be isolated and sequenced.
CUT&RUN is reported to give a much higher signal-to-noise ratio compared to traditional ChIP. CUT&RUN therefore requires one tenth of the sequencing depth of ChIP and permits genomic mapping of histone modifications and transcription factors using extremely low cell numbers. Modification-specific intracellular antibody probes. Sensitive fluorescent genetically encoded histone modification-specific intracellular antibody (mintbody) probes can be used to monitor changes in histone modifications in living cells. See also Histone methylation Histone methyltransferase Methyllysine References Epigenetics Post-translational modification
H3K9me2
[ "Chemistry" ]
1,751
[ "Post-translational modification", "Gene expression", "Biochemical reactions" ]
57,639,032
https://en.wikipedia.org/wiki/Dihydromaltophilin
Dihydromaltophilin, or heat stable anti-fungal factor (HSAF), is a secondary metabolite of bacteria in the genera Streptomyces and Lysobacter. HSAF is a polycyclic tetramate lactam containing a single tetramic acid unit and a 5,5,6-tricyclic system. HSAF has been shown to have anti-fungal activity mediated through the disruption of a ceramide synthase that is unique to fungi. Biosynthesis The backbone of HSAF is formed through a hybrid PKS-NRPS cluster containing one nonribosomal peptide synthetase (NRPS) module and one polyketide synthase (PKS) module. The single PKS module functions in a non-canonical fashion in that it is an iterative type I PKS responsible for the generation of the two unique polyketides needed in the backbone of HSAF, using malonyl-CoA as both the starter and extender unit, while the NRPS module is responsible for the linking of the polyketides to an L-ornithine unit and the initial cyclization to create the tetramate backbone. The coding region related to HSAF production contains a PKS-NRPS with a total of 9 domains (KS-AT-DH-KR-ACP-C-A-PCP-TE), and a cascade of FAD-dependent redox reactions (OX1-OX4) flanking the PKS-NRPS cluster is proposed to be responsible for formation of the 5,5,6-tricyclic system. There are additional coding regions for a putative regulator, an arginase for L-ornithine production from arginine, and a transporter, which flank the PKS-NRPS. References Polyketides
Dihydromaltophilin
[ "Chemistry" ]
391
[ "Biomolecules by chemical classification", "Natural products", "Polyketides" ]
70,058,377
https://en.wikipedia.org/wiki/Bebtelovimab
Bebtelovimab is a monoclonal antibody developed by AbCellera and Eli Lilly as a treatment for COVID-19. Possible side effects include itching, rash, infusion-related reactions, nausea and vomiting. Bebtelovimab works by binding to the spike protein of the virus that causes COVID-19, similar to other monoclonal antibodies that have been authorized for the treatment of high-risk people with mild to moderate COVID-19 and have shown a benefit in reducing the risk of hospitalization or death. Bebtelovimab is a neutralizing human immunoglobulin G1 (IgG1) monoclonal antibody, isolated from a patient who had recovered from coronavirus disease 2019 (COVID-19), directed against the spike (S) protein of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), that can potentially be used for immunization against COVID-19. As of late 2022, bebtelovimab is not authorized for emergency use in the US because it is not expected to neutralize Omicron subvariants BQ.1 and BQ.1.1. Medical uses Bebtelovimab was granted an emergency use authorization (EUA) by the US Food and Drug Administration (FDA) in February 2022; the FDA revoked the authorization in November 2022. The EUA for bebtelovimab was for the treatment of mild to moderate COVID-19 in people aged 12 years and older weighing at least 40 kg with a positive COVID-19 test, and who are at high risk for progression to severe COVID-19, including hospitalization or death, and for whom alternative COVID-19 treatment options approved or authorized by the FDA are not accessible or clinically appropriate. Bebtelovimab is not authorized for people who are hospitalized due to COVID-19 or require oxygen therapy due to COVID-19. Treatment with bebtelovimab has not been studied in people hospitalized due to COVID-19. Bebtelovimab is not expected to neutralize Omicron subvariants BQ.1 and BQ.1.1. History Bebtelovimab emerged from a collaboration between Eli Lilly and AbCellera. Bebtelovimab was discovered by AbCellera and the National Institute of Allergy and Infectious Diseases (NIAID) Vaccine Research Center. Society and culture Legal status Bebtelovimab was authorized for medical use in the United States via an emergency use authorization in February 2022. As of late 2022, bebtelovimab is not authorized for emergency use in the US because it is not expected to neutralize Omicron subvariants BQ.1 and BQ.1.1. Eli Lilly and its authorized distributors have paused commercial distribution of bebtelovimab until further notice by the U.S. Food and Drug Administration (FDA). Names Bebtelovimab is the proposed international nonproprietary name (pINN). References External links Antiviral drugs Experimental monoclonal antibodies COVID-19 drug development
Bebtelovimab
[ "Chemistry", "Biology" ]
657
[ "Antiviral drugs", "COVID-19 drug development", "Biocides", "Drug discovery" ]
62,844,572
https://en.wikipedia.org/wiki/Journal%20of%20Computing%20in%20Civil%20Engineering
The Journal of Computing in Civil Engineering is a bimonthly peer-reviewed scientific journal published by the American Society of Civil Engineers. It covers research specific to computing as it relates to civil engineering. Abstracting and indexing The journal is abstracted and indexed in Ei Compendex, Science Citation Index Expanded, ProQuest databases, Civil Engineering Database, Inspec, Scopus, and EBSCO databases. References External links Civil engineering journals American Society of Civil Engineers academic journals Academic journals established in 1987
Journal of Computing in Civil Engineering
[ "Engineering" ]
105
[ "Civil engineering journals", "Civil engineering" ]
62,847,188
https://en.wikipedia.org/wiki/Sijbren%20Otto
Sijbren Otto (born 3 August 1971 in Groningen) is Professor of Systems chemistry at the Stratingh Institute for Chemistry, University of Groningen. Career Otto studied chemistry at the University of Groningen and in 1994, he received his Master's degree, focusing on physical organic chemistry and biochemistry, with the distinction cum laude. In 1998, he obtained his PhD, again with the distinction cum laude, under his supervisor Prof. Jan B.F.N. Engberts for his thesis entitled Catalysis of Diels-Alder reactions in water. After his subsequent research in both the United States (in 1998, with Prof. Steven L. Regen) at Lehigh University and in the United Kingdom (first with Prof. Jeremy K.M. Sanders and then, from 2001 onwards, as a Royal Society University Research Fellow, both at the University of Cambridge), he was appointed assistant professor at the University of Groningen in 2009. In 2011, he was promoted to associate professor and in 2016, to full professor. From 2014 to 2019, he coordinated the master's degree programme in chemistry. Alongside his work at the university, Otto is also one of the six principal investigators of the Dutch national Gravitation programme for functional molecular systems (FMS; €26 million, over 10 years, 2013–2023). The ambition of this programme is to gain control over molecular self-assembly. With this technology, nanomotors could be made, for example, or biomaterials to repair damaged bodily tissues. Otto was the lead applicant and chair of the European Cooperation in Science & Technology (COST) Action CM1304 (Emergence and Evolution of Complex Chemical Systems), which united more than 95 European research groups. He is the chair of the Gordon Research Conference on Systems Chemistry 2020 and is editor-in-chief of the Journal of Systems Chemistry. Otto is a member of the Royal Dutch Chemical Society (KNCV), fellow of the Royal Society of Chemistry and member of the American Chemical Society. He is a member of the steering committee of the Origins Center, a Dutch research platform for scientists who are involved in the key questions of the Dutch Research Agenda on the origin, evolution and future of life on Earth and in the universe. Otto is active on several fronts in both the Netherlands and abroad. Otto was elected a member of the Royal Netherlands Academy of Arts and Sciences in 2020. Research The research conducted by Otto and his research group is focused on various fields, varying from the origin of life (self-replicating systems and the Darwinian evolution thereof) to materials chemistry (self-synthesizing fibres, hydrogels and nanoparticle surfaces). Specific interests include self-replicating molecules, foldamers, catalysis, molecular recognition of biomolecules and self-synthesizing materials (materials of which the self-assembly drives the synthesis of the molecules that assemble). The complex chemical mixtures that are designed, made and researched often display new properties that are relevant to understanding how new traits are able to arise in nature. The final goal of all of this research is the de novo synthesis of new forms of life via the integration of self-replicating systems with metabolism and compartmentalization. His 114 publications have been cited a total of 8,873 times by other scientists. His h-index is 51. Grants and prizes 1999 Marie Curie Fellowship, University of Cambridge, United Kingdom. 2000 Junior Research Fellowship, Wolfson College, Cambridge, United Kingdom.
2001 Royal Society University Research Fellowship, University of Cambridge, United Kingdom. 2011 ERC Starting Grant (subsidy) from the European Research Council for research into self-replication in dynamic molecular networks. 2013 Vici grant from the Dutch Research Council (NWO) for research into the Darwinian evolution of molecules. (Vici grants are intended for excellent senior researchers who can demonstrably develop their own innovative research lines and who are suitable for coaching early-career researchers.) 2013 Appointed as Fellow of the Royal Society of Chemistry. 2013 Visiting professor, University of Strasbourg, France. 2017 ERC Advanced Grant (subsidy) from the European Research Council. 2018 Visiting professor, Ludwig Maximilian University of Munich, Germany. 2018 Supramolecular chemistry prize of the Royal Society of Chemistry. 2020 Member of the Royal Netherlands Academy of Arts and Sciences. 2023 ERC Synergy Grant (subsidy) from the European Research Council. References External links Sijbren Otto's Staff page (University of Groningen website) An overview of Sijbren Otto's work on making life in the lab Overview of Otto's publications Website of the Otto Research Group Can we make life in the lab? Presentation by Sijbren Otto (YouTube) Self Replication: How molecules can make copies of themselves (YouTube) 1971 births Living people 21st-century Dutch chemists Members of the Royal Netherlands Academy of Arts and Sciences Organic chemists University of Groningen alumni Academic staff of the University of Groningen Scientists from Groningen (city)
Sijbren Otto
[ "Chemistry" ]
1,023
[ "Organic chemists" ]
62,853,988
https://en.wikipedia.org/wiki/Intermodal%20railfreight%20in%20Great%20Britain
Intermodal railfreight in Great Britain is a way of transporting containers between ports, inland ports and terminals in England, Scotland and Wales, by using rail to do so. Although the modern network was started by British Rail in the 1960s, the use of containers that could be swapped between different modes of transport goes back to the days of the London, Midland & Scottish Railway. The transport of containers from ship to rail is classified by the UK government as Lo-Lo traffic (lift-on, lift-off). Volumes of intermodal traffic in the United Kingdom have been rising since 1998, with an expectation of further growth in the years ahead; by 2017, railfreight was moving one in four of the containers that entered the United Kingdom. However, the movement of containers through the Channel Tunnel has been labelled as disappointing, though this traffic has suffered myriad problems, such as migrant incursions and safety issues. Since privatisation of the railways in the 1990s, the market has grown from one initial operator (Freightliner) to four main operators (Freightliner, DB Cargo, Direct Rail Services and GB Railfreight), although other entrants have tried to run intermodal trains. Many of the older terminals opened by British Rail have closed down, with the focus now on strategic rail freight interchanges (SRFIs), which serve a wider area or region with good onward road, or water, transport links. History As a transfer container service, Freightliner was set up by British Rail as a separate company, with the first train running in November 1965. It was one of the reformative ideas put forward under the aegis of Richard Beeching as part of the rationalisation of the railway network in the 1960s. The idea of trains moving containers pre-dated the Beeching cuts, with some suggestions being put forward in the 1950s when the railway was under the control of the British Transport Commission. In the 1950s, British Rail ran a Condor service (an Anglo-Scottish container train that ran on two-axle wagons). The first service of Condor containers ran in March 1959, consisting of roller-bearing flat wagons that containers could be moved on and off with ease. Even further back, the swapping of containers between modes of transport was utilised in the 19th century, when wooden containers were used, but after the railways were grouped in 1921, the London Midland & Scottish Railway (LMS) introduced this type of system with steel and aluminium containers. Initially, the new Freightliner service was intended for the domestic movement of freight in containers between points in Great Britain, with 16 terminals in operation in 1968, and Southampton and Tilbury under construction. However, in 1968 a London to Paris working was started which relied upon the Dover to Dunkerque train ferry, and by 1969, the service was linked into ports with a short-sea and a deep-sea service to other countries. By the end of the 1960s, liner trains (unitised transport) were carrying a substantial tonnage per year, and this average had grown further by the end of 1978. In 1969, British Rail transferred ownership of Freightliner to the National Freight Corporation, but with BR supplying the wagons and locomotives. It was returned to BR in 1978. By 1981, Freightliner was operating to 43 terminals, 25 of their own and 18 privately used locations. In 1982, the Port of Felixstowe was despatching three daily freight trains with containers on. In 1983, a second terminal opened (Felixstowe North), and between the two terminals, the amount of containers transhipped to and from rail was about 80,000 per year (20%).
When a third terminal was opened in 2013 (named Felixstowe North, with the previous one being renamed Felixstowe Central), the port was handling over 4 million TEUs (twenty-foot equivalent units) annually, with 36 daily departures carrying containers. In 1986 and 1987, several terminals were closed, including four in Scotland (Aberdeen, Clydeport [Greenock], Dundee and Edinburgh), despite the potential for long-distance services from these terminals. British Rail deemed it more efficient to load containers at Coatbridge near Glasgow, and use electric traction south on the West Coast Main Line. Before the closures, Freightliner operated 35 terminals, including ports, compared with 19 under privatisation. In 1988, Freightliner, Speedlink and Railfreight International were amalgamated into one entity by British Rail, called Railfreight Distribution. A large section of the business that these three separate arms dealt with was loss-making, and the combined efforts were a way in which it was hoped to turn the businesses around. In 1992, it was assessed that Freightliner was making a 50% loss on its £70 million turnover, and the business was only serving nine locations. One of the problems causing this was that the deep-sea nature of the traffic carried was increasingly geared to larger containers, which required gauge enhancement or specially adapted wagons to be carried on the British railway system. The advent of the Channel Tunnel opening led to a resurgence in container traffic terminals being opened. These were separated into sites away from the main railfreight business as operated between UK terminals and deep-sea ports such as Southampton and Felixstowe. New European freight terminals were built at Trafford Park in Manchester, Wakefield in West Yorkshire and Willesden in North West London. After this, the intermodal services in Britain could be subdivided into three streams: traffic to and from ports, Channel Tunnel traffic and domestic flows, of which much Anglo-Scottish traffic falls into the latter. This is a complete modal shift from the domestic nature of the Freightliner network as instigated in the mid 1960s, which initially envisaged domestic traffic dominating the market. One suggestion for the change in traffic origin has been that containers entering ports have a lower transport cost, as they only need onward road transport to their final destination, as opposed to the domestic traffic which needs to be road-hauled, railed and then road-hauled again. The opening of Daventry International Rail Freight Terminal (DIRFT) in July 1997 heralded another new venture into the intermodal business. The site is located on the Northampton Loop of the West Coast Main Line, and close to the M1 motorway and the A45 road. The land had been designated as a "motorway orientated growth point" in 1978, and so was ideally situated for this type of interchange and delivery point for intermodal traffic. In 1997, services through the Channel Tunnel operated between Birmingham Landor Street, Daventry, Mossend, Seaforth, Trafford Park, Wakefield and Willesden in the United Kingdom, and terminals in Europe (Avignon, Barcelona, Lyon, Melzo, Metz, Muizen, Novara, Oleggio, Paris, Perpignan, Rogoredo and Turin). Even so, the volumes of intermodal traffic (and other commodities) shifted by railfreight through the Channel Tunnel have been low compared with forecasted freight volumes.
Some of the problems have been physical, such as migrants using the services to cross (and at one point invading the railway yard at Frethun); other problems have been strikes by French workers and fires in the tunnel, which hampered the pathing of trains through it. Binliners and other traffic Binliners are so named because they carry waste traffic in containers on the same type of wagons used to carry (freight)liner trains (binliner being a portmanteau of bin and liner, echoing the term bin liner). The carrying of waste on the railway network used to involve slow moving wagons, but in the 1970s, terminals began opening which would take compacted waste in containers direct to a landfill site. Whilst this traffic is not routinely grouped under the intermodal umbrella, its use of containers makes it an intermodal railfreight service, even if no onward road transport was used at the destination. Most binliners would run as block trains, but occasional special traffics would be railed to their final destination via the wagonload network, such as spent shot blast from Falmouth to Brindle Heath in Greater Manchester. Most destinations were former quarrying or mining operations that had applications to take landfill. The main sites were at Forders in Bedfordshire, Calvert in Buckinghamshire, Appleford in Oxfordshire, Roxby Gullet in Lincolnshire and Appley Bridge in Greater Manchester. The main authorities using these sites were Greater London for Forders and Calvert, Avon for Calvert and Appleford, with Greater Manchester utilising first Appley Bridge, then Roxby when Appley Bridge was full. A similar operation was used on the Powderhall Branch in Edinburgh, which used to take compacted waste to exhausted quarry workings at the cement works at Oxwellmains in the Scottish Borders. As an adaptation of the binliner trains, a landfill tax introduced in the 2010s prompted some authorities to send their waste to be burnt in an energy from waste (EfW) plant. Merseyside waste is burnt at the Wilton EfW plant, and some waste from London (loaded at Brentford) is burnt at the Severnside EfW plant. Other commodities have been sent via containers, such as desulphogypsum from power stations to gypsum processing plants; however, the containers are used solely for this purpose and not used as a generic swap container service available for different goods. Containers are used on the desulphogypsum traffic as the material is sticky, so the use of hopper wagons would not work, and the use of tippler wagons would have been more expensive. Rail versus other modes In many areas of freight transport, rail loses out to road (or water transport), typically in smaller consists, which has led to the demise of the wagonload network in Great Britain due to the small tonnages involved. Many containers are transferred between ports in Britain by water transport, mostly at sea using coastal shipping, but some on the canal or river systems. In 2018, the movement of Ro-Ro shipping traffic (roll-on, roll-off, as distinct from the lift-on, lift-off Lo-Lo transfers between ship and land) equated to 3.3 billion tonne kilometres in and around the United Kingdom. Even so, one in four containers that enter the United Kingdom is then transported onwards, in whole or in part, by railfreight. Where rail transport has been beneficial, it has been over long distances such as Felixstowe to Coatbridge (Glasgow).
Short distance flows are deemed uneconomic unless they can either be back filled, or be given a guaranteed full load on each train. An example of this was the Wilton to Doncaster Railport service in the 1990s/early 2000s, which carried containerised chemicals over a comparatively short distance. A similar service operates between Tees Dock and Doncaster iPort; with such a short out and back run, the train and locomotive can be utilised twice in one day, making greater use of the resources. A service between Grangemouth on the Firth of Forth and Elderslie in Renfrewshire likewise travelled only a short distance in each direction. Whilst it normally loaded to 100% going eastbound (from Elderslie), it was only very lightly loaded westbound (from Grangemouth). However, its ability to deliver containers the short distance and avoid the congested M8, M80, M876 and M9 motorways meant that it afforded customers a better transit time. The wagons and locomotive were used on additional freight services in between its intermodal runs. The movement of railfreight is measured in net tonne kilometres (NTK). Between 1975 and 1995, the NTK for intermodal traffic steadily decreased from 3.1 billion to 2.3 billion. Post 1996 (privatisation of the railfreight companies), this has seen a steady rise. Operational enhancements Constraints on the movement of containers across the UK rail network have been the loading gauge of the railway lines themselves, with most lines being able to accommodate standard containers. Only a few lines can handle the larger containers, which has led to some lines being adapted to accept the larger gauge, while other routes have used 'pocket' wagons, where the container sits lower down in the wagon. Due to the steady year-on-year increase of intermodal traffic volumes, Network Rail, the owner and infrastructure manager of the UK rail network, has undertaken a series of schemes to allow easier pathing and the removal of gauge restrictions on core routes across the network. Additionally, due to the increase in billion tonne kilometres travelled, and intermodal slowly gaining a larger market share of railfreight tonnage moved, there have been several key network enhancement operations to enable smoother running of intermodal trains. Outside of the development of SRFIs and general improvements in terminals and ports, the key programmes are listed below. 2000 (onwards) - Felixstowe branch line - a programme of engineering works to improve pathing availability on what was largely a single-track branch line in the early days of privatisation. The 2019 engineering works saw a new, longer passing loop installed. The electrification of the line between Felixstowe and Ipswich is designated as a "priority route(s) to support electrification of railfreight services." 2004 - Ipswich tunnel enhancement - work to lower the floor of the tunnel, thus allowing larger containers to be carried through the tunnel. This was part of a wider £30 million Strategic Rail Authority programme to enhance the gauge between Felixstowe and Birmingham via London. 2010–2011 - Gauge enhancement on the route between Southampton and the West Midlands. 2012 - Nuneaton North Chord - previous to the chord being built, freight trains had to cross the tracks on the flat at Nuneaton station. The new chord allows northbound trains to access the down line without conflicting movements of other trains on the busy West Coast Main Line.
2014 - Ipswich Chord - a new chord going from east to north allowing trains to access the Peterborough line from the Port of Felixstowe (and vice versa) without having to reverse in Ipswich yard. 2021 - Werrington Dive Under - works undertaken at Werrington Junction north of Peterborough to allow freight trains to access/egress the Lincoln line and the March line without conflicting with fast passenger trains on the East Coast Main Line. 2030 (possible) - a gauge enhancement of the Northallerton–Eaglescliffe line to bring that line, the Stillington line and beyond to W12 gauge clearance. Network Rail have other schemes in the proposal category that can affect intermodal traffic. One of these is known as the Castlefield corridor, a section of track between Castlefield Junction in West Manchester and Manchester Piccadilly railway station. Both Trafford Park intermodal terminals have east facing connections that lead onto the Castlefield corridor, and so trains must traverse the bottleneck through Deansgate and Manchester Oxford Road. After the Ordsall Chord opened in West Manchester, more trains were diverted to go through this bottleneck, causing delays and cancellations, with Network Rail going so far as to label the stretch of line as "congested infrastructure". Some suggestions have been to have a west facing connection to the intermodal terminals so that they can access the West Coast Main Line via a new curve in the Warrington area. Another proposal, put forward by Railfuture, is to relocate the Manchester intermodal terminals on the old Carrington Branch, therefore freeing up paths through Castlefield for passenger trains, or to add flexibility to the operational capacity of the corridor. Open terminals Mothballed/unused terminals The following are not in use as intermodal terminals at present, but remain connected to the national network. Most will still be in use for rail business, but not handling containers. Closed terminals This section relates to former terminals which had dedicated services, and infrastructure such as gantry cranes, which have now closed. It does not include terminals such as those at ports which operated a service previously. For example, in the early part of the 2000s, containers of car parts were transferred from Avonmouth to Tyne Dock for Nissan. Both these freight terminals still operate, but not necessarily in an intermodal capacity. Future and proposed sites There are proposals to also open SRFIs (Strategic Rail Freight Interchanges) at Skypark in Devon, Parkside in Lancashire, Etwall in Derbyshire, Burbage, Peterborough and SIFE (Slough International Freight Exchange) with a connection on the Colnbrook branch. Operators Intermodal trains were operated by British Rail from its inception until privatisation in 1996. Immediately after privatisation, the main company providing intermodal services was Freightliner, though EWS carried containers on their Enterprise wagonload service, and had started an initial service between Harwich and Doncaster to rival services run by Freightliner from Felixstowe. Later, other operators took on their own services, oftentimes running to their own unique locations, though with the gradual increase in Strategic Rail Freight Interchanges (SRFIs), many operators would rail containers to the same destinations from the same point of origin. DIRFT, which opened in 1997, had ten departures daily operated by Freightliner, DB Cargo (previously EWS), and Direct Rail Services.
Five of those trains went to Scotland, running to the loading points of each company: typically Coatbridge for Freightliner, Mossend for DB Cargo, and either Mossend, Elderslie or Grangemouth for DRS. Advenza Freight 2004–2009 British Rail 1967–1997 Direct Rail Services 2001–present DB Cargo 1996–present Fastline Freight 2006–2010 Freightliner 1995–present GB Railfreight 2002–present See also Rail freight in Great Britain Notes References Sources External links British Rail freight services Freight transport
Intermodal railfreight in Great Britain
[ "Physics" ]
3,648
[ "Physical systems", "Transport", "Intermodal transport" ]
62,854,237
https://en.wikipedia.org/wiki/Symmetry%20energy
In nuclear physics, the symmetry energy describes how the binding energy per nucleon of nuclear matter varies with its neutron-to-proton ratio, as a function of baryon density. Symmetry energy is an important parameter in the equation of state describing the nuclear structure of heavy nuclei and neutron stars. References Nuclear physics
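For concreteness, the symmetry energy S(ρ) is conventionally defined through the standard quadratic expansion of the energy per nucleon in the isospin asymmetry; this is a textbook parametrization added here for illustration, not a result specific to any single reference:

```latex
% Energy per nucleon of nuclear matter at baryon density \rho and
% isospin asymmetry \delta = (\rho_n - \rho_p)/(\rho_n + \rho_p):
\begin{align}
  E(\rho,\delta) &\simeq E(\rho,0) + S(\rho)\,\delta^{2} + \mathcal{O}(\delta^{4}), \\
  S(\rho) &= \frac{1}{2}\left.
    \frac{\partial^{2} E(\rho,\delta)}{\partial \delta^{2}}\right|_{\delta=0}.
\end{align}
% S(\rho) is the symmetry energy: the energetic cost of making the matter
% neutron-rich (\delta \to 1 corresponds to pure neutron matter).
```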
Symmetry energy
[ "Physics" ]
65
[ "Nuclear and atomic physics stubs", "Nuclear physics" ]
68,571,110
https://en.wikipedia.org/wiki/Psychoplastogen
Psychoplastogens are a group of small molecule drugs that produce rapid and sustained effects on neuronal structure and function, intended to manifest therapeutic benefit after a single administration. Several existing psychoplastogens have been identified and their therapeutic effects demonstrated; several are presently at various stages of development as medications, including ketamine, MDMA, scopolamine, and the serotonergic psychedelics, including LSD, psilocin (the active metabolite of psilocybin), DMT, and 5-MeO-DMT. Compounds of this sort are being explored as therapeutics for a variety of brain disorders including depression, addiction, and PTSD. The ability to rapidly promote neuronal changes via mechanisms of neuroplasticity was recently discovered as the common therapeutic activity and mechanism of action. Etymology and nomenclature The term psychoplastogen comes from the Greek roots psych- (mind), -plast- (molded), and -gen (producing) and covers a variety of chemotypes and receptor targets. It was coined by David E. Olson in collaboration with Valentina Popescu, both at the University of California, Davis. The term neuroplastogen is sometimes used as a synonym for psychoplastogen, especially when speaking to the biological substrate rather than the therapeutic. Chemistry Psychoplastogens come in a variety of chemotypes and chemical families, but, by definition, are small-molecule drugs. Ketamine has been described as "the prototypical psychoplastogen". Pharmacology Psychoplastogens exert their effects by promoting structural and functional neural plasticity through diverse targets including, but not limited to, 5-HT2A, NMDA, and muscarinic receptors. Some are biased agonists. While each compound may have a different receptor binding profile, signaling appears to converge at the tyrosine kinase B (TrkB) and mammalian target of rapamycin (mTOR) pathways. Convergence at TrkB and mTOR parallels that of traditional antidepressants with known efficacies, but with more rapid onset. Due to their rapid and sustained effects, psychoplastogens could potentially be dosed intermittently. In addition to the neuroplasticity effects, these compounds can have other epiphenomena including sedation, dissociation, and hallucinations. Psychedelics show complex effects on neuroplasticity and can both promote and inhibit neuroplasticity depending on the circumstances. Single doses of DMT, 5-MeO-DMT, psilocybin, and DOI have been found to produce robust and long-lasting increases in neuroplasticity in animals. Likewise, repeated doses of LSD for 7 days increased neuroplasticity. However, chronic intermittent administration of DMT for several weeks resulted in dendritic spine retraction, suggesting physiological homeostatic compensation in response to overstimulation. In addition, DOI has been found to decrease brain-derived neurotrophic factor (BDNF) levels in the hippocampus. The effects of psychedelics on neuroplasticity appear to be dependent on serotonin 5-HT2A receptor activation, as they are abolished in 5-HT2A receptor knockout mice. Non-hallucinogenic serotonin 5-HT2A receptor agonists, like tabernanthalog and lisuride, have also been found to increase neuroplasticity, and to a magnitude comparable to psychedelics. In terms of neurogenesis, DOI and LSD showed no impact on hippocampal neurogenesis, while psilocybin and 25I-NBOMe decreased hippocampal neurogenesis. 5-MeO-DMT, however, has been found to increase hippocampal neurogenesis, and this could be blocked by sigma-1 (σ1) receptor antagonists.
Approved medical uses Several psychoplastogens have either been approved or are in development for the treatment of a variety of brain disorders associated with neuronal atrophy, where neuroplasticity can elicit beneficial effects. Esketamine, sold under the brand name Spravato and produced by Janssen Pharmaceuticals, was approved by the FDA in March 2019 for the treatment of treatment-resistant depression (TRD) and suicidal ideation. As of 2022, it is the only psychoplastogen approved in the US for the treatment of a neuropsychiatric disorder. Esketamine is the S(+) enantiomer of ketamine and functions as an NMDA receptor antagonist. Clinical development Other psychoplastogens that are being investigated in the clinic include: MDMA-assisted psychotherapy is being investigated for treatment of PTSD. A recent placebo-controlled Phase 3 trial found that 67% of participants in the MDMA+therapy group no longer met the diagnostic criteria for PTSD, whereas 32% of those in the placebo+therapy group no longer met the PTSD threshold. MDMA-assisted psychotherapy is also currently in Phase 2 trials for eating disorders, anxiety associated with life-threatening illness, and social anxiety in autistic adults. Psilocybin, a compound in psilocybin mushrooms that serves as a prodrug for psilocin, is currently being investigated in clinical trials of hallucinogen-assisted therapy for a variety of neuropsychiatric disorders. To date, studies have explored the utility of psilocybin in a variety of diseases, including TRD, smoking addiction, and anxiety and depression in people with cancer diagnoses. LSD is being tested in Phase 2 trials for cluster headaches and anxiety. DMT is being studied for depression. 5-MeO-DMT is being studied for depression and eating disorders. Ibogaine and noribogaine are being studied for addiction. List of known psychoplastogens Substituted tryptamines: psilocin (including psilocybin and psilacetin), DMT, 5-MeO-DMT Ergolines: LSD, lisuride Substituted phenethylamines: DOI, MDMA and mescaline Dissociatives: ketamine (including esketamine, arketamine) Iboga-derivatives: ibogaine, noribogaine, tabernanthine and tabernanthalog AAZ-A-154 Scopolamine Rapastinel Tropoflavin (7,8-DHF) (including R7, R13) LY-341495 Isoflurane See also Ariadne (drug) Neuroplasticity Notes References Neuropharmacology
Psychoplastogen
[ "Chemistry" ]
1,402
[ "Pharmacology", "Neuropharmacology" ]
68,572,267
https://en.wikipedia.org/wiki/Method%20of%20moments%20%28electromagnetics%29
The method of moments (MoM), also known as the moment method and method of weighted residuals, is a numerical method in computational electromagnetics. It is used in computer programs that simulate the interaction of electromagnetic fields such as radio waves with matter, for example antenna simulation programs like NEC that calculate the radiation pattern of an antenna. Generally being a frequency-domain method, it involves the projection of an integral equation into a system of linear equations by the application of appropriate boundary conditions. This is done by using discrete meshes as in finite difference and finite element methods, often for the surface. The solutions are represented with the linear combination of pre-defined basis functions; generally, the coefficients of these basis functions are the sought unknowns. Green's functions and the Galerkin method play a central role in the method of moments. For many applications, the method of moments is identical to the boundary element method. It is one of the most common methods in microwave and antenna engineering. History Development of the boundary element method and other similar methods for different engineering applications is associated with the advent of digital computing in the 1960s. Prior to this, variational methods were applied to engineering problems at microwave frequencies by the time of World War II. While Julian Schwinger and Nathan Marcuvitz respectively compiled these works into lecture notes and textbooks, Victor Rumsey formulated these methods into the "reaction concept" in 1954. The concept was later shown to be equivalent to the Galerkin method. In the late 1950s, an early version of the method of moments was introduced by Yuen Lo at a course on mathematical methods in electromagnetic theory at the University of Illinois. In the 1960s, early research work on the method was published by Kenneth Mei, Jean van Bladel and Jack Richmond. In the same decade, the systematic theory for the method of moments in electromagnetics was largely formalized by Roger Harrington. While the term "the method of moments" was coined earlier by Leonid Kantorovich and Gleb Akilov for analogous numerical applications, Harrington adapted the term for the electromagnetic formulation. Harrington published the seminal textbook Field Computation by Moment Methods on the moment method in 1968. The development of the method and its applications in radar and antenna engineering attracted interest; MoM research was subsequently supported by the United States government. The method was further popularized by the introduction of generalized antenna modeling codes such as the Numerical Electromagnetics Code, which was released into the public domain by the United States government in the late 1980s. In the 1990s, the introduction of fast multipole and multilevel fast multipole methods enabled efficient MoM solutions to problems with millions of unknowns. Being one of the most common simulation techniques in RF and microwave engineering, the method of moments forms the basis of many commercial design software such as FEKO. Many non-commercial and public domain codes of different sophistications are also available. In addition to its use in electrical engineering, the method of moments has been applied to light scattering and plasmonic problems. Background Basic concepts An inhomogeneous integral equation can be expressed as $L(f) = g$, where $L$ denotes a linear operator, $g$ denotes the known forcing function and $f$ denotes the unknown function.
$f$ can be approximated by a finite number of basis functions ($f_n$): $$f \approx \sum_{n=1}^{N} a_n f_n.$$ By linearity, substitution of this expression into the equation yields: $$\sum_{n=1}^{N} a_n L(f_n) \approx g.$$ We can also define a residual for this expression, which denotes the difference between the actual and the approximate solution: $$R = g - \sum_{n=1}^{N} a_n L(f_n).$$ The aim of the method of moments is to minimize this residual, which can be done by using appropriate weighting or testing functions $w_m$, hence the name method of weighted residuals. After the determination of a suitable inner product $\langle \cdot, \cdot \rangle$ for the problem, the expression then becomes: $$\sum_{n=1}^{N} a_n \langle w_m, L(f_n) \rangle = \langle w_m, g \rangle, \qquad m = 1, \dots, N.$$ Thus, the expression can be represented in the matrix form: $$[Z_{mn}][a_n] = [g_m], \qquad Z_{mn} = \langle w_m, L(f_n) \rangle, \quad g_m = \langle w_m, g \rangle.$$ The resulting matrix $[Z_{mn}]$ is often referred to as the impedance matrix. The coefficients of the basis functions can be obtained through inverting the matrix. For large matrices with a large number of unknowns, iterative methods such as the conjugate gradient method can be used for acceleration. The actual field distributions can be obtained from the coefficients and the associated integrals. The interactions between each basis function in MoM are ensured by the Green's function of the system. Basis and testing functions Different basis functions can be chosen to model the expected behavior of the unknown function in the domain; these functions can either be subsectional or global. The choice of the Dirac delta function as basis function is known as point-matching or collocation. This corresponds to enforcing the boundary conditions on discrete points and is often used to obtain approximate solutions when the inner product operation is cumbersome to perform. Other subsectional basis functions include pulse, piecewise triangular, piecewise sinusoidal and rooftop functions. Triangular patches, introduced by S. Rao, D. Wilton and A. Glisson in 1982, are known as RWG basis functions and are widely used in MoM. Characteristic basis functions were also introduced to accelerate computation and reduce the matrix equation. The testing and basis functions are often chosen to be the same; this is known as the Galerkin method. Depending on the application and studied structure, the testing and basis functions should be chosen appropriately to ensure convergence and accuracy, as well as to prevent possible high order algebraic singularities. Integral equations Depending on the application and sought variables, different integral or integro-differential equations are used in MoM. Radiation and scattering by thin wire structures, such as many types of antennas, can be modeled by specialized equations. For surface problems, common integral equation formulations include the electric field integral equation (EFIE), the magnetic field integral equation (MFIE) and the mixed-potential integral equation (MPIE). Thin-wire equations As many antenna structures can be approximated as wires, thin wire equations are of interest in MoM applications. Two commonly used thin-wire equations are the Pocklington and Hallén integro-differential equations. Pocklington's equation precedes the computational techniques, having been introduced in 1897 by Henry Cabourn Pocklington. For a linear wire that is centered on the origin and aligned with the z-axis, the equation can be written as: $$\int_{-l/2}^{l/2} I_z(z') \left( \frac{\partial^2}{\partial z^2} + k^2 \right) G(z,z')\, \mathrm{d}z' = -j\omega\varepsilon_0\, E_z^{\mathrm{inc}}(z)$$ where $l$ and $a$ denote the total length and thickness, respectively, and $G(z,z') = e^{-jkR}/(4\pi R)$ with $R = \sqrt{a^2 + (z-z')^2}$ is the Green's function for free space. The equation can be generalized to different excitation schemes, including magnetic frills. Hallén integral equation, published by E.
Hallén in 1938, can be given as: $$\int_{-l/2}^{l/2} I_z(z')\, G(z,z')\, \mathrm{d}z' = -\frac{j}{\eta} \left[ C_1 \cos(kz) + \frac{V_0}{2} \sin(k|z|) \right]$$ where $V_0$ is the feed voltage and $C_1$ is a constant fixed by the end condition $I_z(\pm l/2) = 0$. This equation, despite being better behaved than Pocklington's equation, is generally restricted to the delta-gap voltage excitations at the antenna feed point, which can be represented as an impressed electric field. Electric field integral equation (EFIE) The general form of the electric field integral equation (EFIE) can be written as: $$\hat{t} \cdot \mathbf{E}^{\mathrm{inc}}(\mathbf{r}) = \frac{j\eta}{k}\, \hat{t} \cdot \left[ k^{2} \int_{S} \mathbf{J}(\mathbf{r}')\, G(\mathbf{r},\mathbf{r}')\, \mathrm{d}S' + \nabla \int_{S} \left( \nabla' \cdot \mathbf{J}(\mathbf{r}') \right) G(\mathbf{r},\mathbf{r}')\, \mathrm{d}S' \right]$$ where $\mathbf{E}^{\mathrm{inc}}$ is the incident or impressed electric field, $\hat{t}$ is a unit vector tangential to the surface and $\mathbf{J}$ is the induced surface current. $G(\mathbf{r},\mathbf{r}')$ is the Green's function for the Helmholtz equation and $\eta$ represents the wave impedance. The boundary conditions are met at a defined PEC surface. EFIE is a Fredholm integral equation of the first kind. Magnetic field integral equation (MFIE) Another commonly used integral equation in MoM is the magnetic field integral equation (MFIE), which can be written as: $$\frac{\mathbf{J}(\mathbf{r})}{2} - \hat{n} \times \mathrm{PV}\!\int_{S} \nabla G(\mathbf{r},\mathbf{r}') \times \mathbf{J}(\mathbf{r}')\, \mathrm{d}S' = \hat{n} \times \mathbf{H}^{\mathrm{inc}}(\mathbf{r})$$ where $\hat{n}$ is the outward surface normal and PV denotes a principal-value integral. MFIE is often formulated to be a Fredholm integral equation of the second kind and is generally well-posed. Nevertheless, the formulation necessitates the use of closed surfaces, which limits its applications. Other formulations Many different surface and volume integral formulations for MoM exist. In many cases, EFIEs are converted to mixed potential integral equations (MPIE) through the use of the Lorenz gauge condition; this aims to reduce the orders of singularities through the use of magnetic vector and scalar electric potentials. In order to bypass the internal resonance problem in dielectric scattering calculations, combined-field integral equation (CFIE) and Poggio–Miller–Chang–Harrington–Wu–Tsai (PMCHWT) formulations are also used. Another approach, the volumetric integral equation, necessitates the discretization of the volume elements and is often computationally expensive. MoM can also be integrated with physical optics theory and the finite element method. Green's functions An appropriate Green's function for the studied structure must be known to formulate MoM matrices: the automatic incorporation of the radiation condition into the Green's function makes MoM particularly useful for radiation and scattering problems. Even though the Green's function can be derived in closed form for very simple cases, more complex structures necessitate numerical derivation of these functions. Full wave analysis of planarly-stratified structures in particular, such as microstrips or patch antennas, necessitates the derivation of Green's functions that are peculiar to these geometries. This can be achieved by two different methods. In the first method, known as the spectral-domain approach, the inner products and convolution operation for MoM matrix entries are evaluated in the Fourier space with analytically-derived spectral-domain Green's functions through Parseval's theorem. The other approach is based on the use of spatial-domain Green's functions. This involves the inverse Hankel transform of the spectral-domain Green's function, which is defined on the Sommerfeld integration path. Nevertheless, this integral cannot be evaluated analytically, and its numerical evaluation is often computationally expensive due to the oscillatory kernels and slowly-converging nature of the integral. Common approaches for evaluating these integrals include tail extrapolation approaches such as the weighted-averages method. Other approaches include the approximation of the integral kernel.
Following the extraction of quasi-static and surface pole components, these integrals can be approximated as closed-form complex exponentials through Prony's method or the generalized pencil-of-function method; thus, the spatial Green's functions can be derived through the use of appropriate identities such as the Sommerfeld identity. This method is known in the computational electromagnetics literature as the discrete complex image method (DCIM), since the Green's function is effectively approximated with a discrete number of image dipoles that are located within a complex distance from the origin. The associated Green's functions are referred to as closed-form Green's functions. The method has also been extended for cylindrically-layered structures. The rational-function fitting method, as well as its combinations with DCIM, can also be used to approximate closed-form Green's functions. Alternatively, the closed-form Green's function can be evaluated through the method of steepest descent. For periodic structures such as phased arrays and frequency selective surfaces, series acceleration methods such as Kummer's transformation and Ewald summation are often used to accelerate the computation of the periodic Green's function. See also Boundary element method Characteristic mode analysis Discrete dipole approximation Fast multipole method Finite element method Multilevel fast multipole method Notes References Bibliography Computational electromagnetics Numerical differential equations
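As an illustration of the discretization steps described in the basic-concepts section above (pulse basis functions with point matching, followed by inversion of the impedance matrix), the following minimal Python sketch solves a classic electrostatic MoM problem: the charge distribution on a thin straight wire held at 1 V. The problem choice, dimensions and variable names are illustrative assumptions, not taken from any particular code:

```python
import numpy as np

# MoM with pulse basis functions and point matching (collocation):
# find the line charge density on a thin wire of length L and radius a
# held at a potential of 1 V, then sum the charge to get the capacitance.
eps0 = 8.854e-12
L, a, N = 1.0, 1e-3, 51            # wire length (m), radius (m), segments
dz = L / N
z = (np.arange(N) + 0.5) * dz      # match points at the segment centres

# Z[m, n] = potential at z[m] due to unit charge density on segment n.
# The integral of 1/R over a segment has a closed form in arcsinh; R
# includes the wire radius a, the usual thin-wire way to avoid the
# self-term singularity.
d = z[:, None] - z[None, :]
Z = (np.arcsinh((d + dz / 2) / a)
     - np.arcsinh((d - dz / 2) / a)) / (4 * np.pi * eps0)

rho = np.linalg.solve(Z, np.ones(N))   # enforce 1 V at every match point
Q = rho.sum() * dz                     # total charge on the wire
print("capacitance ≈", Q, "F")         # on the order of 10 pF here
```

The charge density returned by this sketch rises towards the wire ends, the expected edge behaviour, and refining N shows the convergence trade-off discussed above for subsectional basis functions.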
Method of moments (electromagnetics)
[ "Physics" ]
2,209
[ "Computational electromagnetics", "Computational physics" ]
68,577,665
https://en.wikipedia.org/wiki/Lak%20wettability%20index
In petroleum engineering, the Lak wettability index is a quantitative indicator to measure the wettability of rocks from relative permeability data. This index is based on a combination of Craig's first rule and the modified Craig's second rule. Index values near -1 and 1 represent strongly oil-wet and strongly water-wet rocks, respectively. The index is computed from the following quantities: the water relative permeability measured at residual oil saturation; the water saturation at the intersection point of the water and oil relative permeability curves (fraction); the residual oil saturation (fraction); the irreducible water saturation (fraction); and a reference crossover saturation (fraction), together with two constant coefficients whose values depend on whether the crossover saturation lies above or below the reference crossover saturation. To use the index, relative permeability is defined as the effective permeability divided by the oil permeability measured at irreducible water saturation. Craig's triple rules of thumb Craig proposed three rules of thumb for the interpretation of wettability from relative permeability curves. These rules are based on the value of interstitial water saturation, the water saturation at the crossover point of the relative permeability curves (i.e., where the relative permeabilities are equal to each other), and the normalized water permeability at residual oil saturation (i.e., normalized by the oil permeability at interstitial water saturation). According to Craig's first rule of thumb, in water-wet rocks the relative permeability to water at residual oil saturation is generally less than 30%, whereas in oil-wet systems this is greater than 50% and approaching 100%. The second rule of thumb considers a system as water-wet if the saturation at the crossover point of the relative permeability curves is greater than a water saturation of 50%, otherwise oil-wet. The third rule of thumb states that in a water-wet rock the value of interstitial water saturation is usually greater than 20 to 25% pore volume, whereas this is generally less than 15% pore volume (frequently less than 10%) for an oil-wet porous medium. Modified Craig's second rule In 2021, Abouzar Mirzaei-Paiaman investigated the validity of Craig's rules of thumb and showed that while the third rule is generally unreliable, the first rule is suitable. Moreover, he showed that the second rule needed a modification. He pointed out that using 50% water saturation as a reference value in Craig's second rule is unrealistic. That author defined a reference crossover saturation (RCS). According to the modified Craig's second rule, the crossover point of the relative permeability curves lies to the right of the RCS in water-wet rocks, whereas for oil-wet systems, the crossover point is expected to be located at the left of the RCS. Modified Lak wettability index A modified Lak wettability index also exists, based on the areas below the water and oil relative permeability curves; its values near -1 and 1 again represent strongly oil-wet and strongly water-wet rocks, respectively, and it is computed from the area under the oil relative permeability curve and the area under the water relative permeability curve. See also Wetting Amott test Relative permeability TEM-function USBM wettability index References Petroleum geology Surface science Fluid mechanics
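Craig's three rules of thumb are fully specified in the text above and are straightforward to encode. The following Python sketch is a hypothetical helper (the function name, the handling of the indeterminate bands, and the example values are mine, not from Craig or Mirzaei-Paiaman); it returns one verdict per rule and leaves a rule undecided when the input falls between the water-wet and oil-wet cutoffs:

```python
def craig_wettability(krw_at_sor, crossover_sw, swi):
    """Classify wettability using Craig's three rules of thumb.

    krw_at_sor   -- water relative permeability at residual oil saturation,
                    normalised by oil permeability at irreducible water
                    saturation (fraction)
    crossover_sw -- water saturation where the oil and water relative
                    permeability curves intersect (fraction)
    swi          -- interstitial (irreducible) water saturation (fraction)

    Returns a dict with one verdict per rule; None marks the band between
    the quoted water-wet and oil-wet cutoffs, where the rule is silent.
    """
    rule1 = ("water-wet" if krw_at_sor < 0.30
             else "oil-wet" if krw_at_sor > 0.50 else None)
    rule2 = "water-wet" if crossover_sw > 0.50 else "oil-wet"
    rule3 = ("water-wet" if swi > 0.20
             else "oil-wet" if swi < 0.15 else None)
    return {"rule 1": rule1, "rule 2": rule2, "rule 3": rule3}

print(craig_wettability(krw_at_sor=0.22, crossover_sw=0.55, swi=0.28))
# -> {'rule 1': 'water-wet', 'rule 2': 'water-wet', 'rule 3': 'water-wet'}
```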
Lak wettability index
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
719
[ "Surface science", "Petroleum", "Civil engineering", "Condensed matter physics", "Petroleum geology", "Fluid mechanics" ]
68,580,282
https://en.wikipedia.org/wiki/Time%20in%20Madagascar
Time in Madagascar is given by a single time zone, officially denoted as East Africa Time (EAT; UTC+03:00). Madagascar does not observe daylight saving time.
IANA time zone database
In the IANA time zone database, Madagascar is given one zone in the file zone.tab – Indian/Antananarivo, which is an alias of Africa/Nairobi. "MG" refers to the country's ISO 3166-1 alpha-2 country code. The data for Madagascar comes directly from zone.tab of the IANA time zone database, whose entries list the country code, representative coordinates and the zone name.
See also
Time in Africa
List of time zones by country
List of UTC time offsets
References
External links
Current time in Madagascar at Time.is
Time in Madagascar at TimeAndDate.com
Time by country Geography of Madagascar Time in Africa
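As a small illustration, the zone can be resolved with Python's standard library; nothing here is specific to the article beyond the zone name itself.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library since Python 3.9

# "Indian/Antananarivo" is resolved through the IANA database; since the
# zone is an alias of Africa/Nairobi, both names yield EAT (UTC+03:00).
now = datetime.now(ZoneInfo("Indian/Antananarivo"))
print(now.tzname(), now.utcoffset())  # EAT 3:00:00, with no DST adjustment
```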
Time in Madagascar
[ "Physics" ]
173
[ "Spacetime", "Physical quantities", "Time", "Time by country" ]
68,581,298
https://en.wikipedia.org/wiki/Julie%20Schoenung
Julie Mae Schoenung is an American materials scientist who is a professor at the University of California, Irvine. She is co-director of the University of California Toxic Substances Research and Teaching Program Lead Campus in Green Materials. Her research considers trimodal composites and green engineering. She was elected a Fellow of The Minerals, Metals & Materials Society in 2021.
Early life and education
Schoenung studied materials science as an undergraduate at the University of Illinois Urbana-Champaign. She moved to the Massachusetts Institute of Technology for graduate studies, earning a master's degree in 1985 and a PhD in 1987. Her doctoral research considered an economic assessment of ceramics for automotive engines. After earning her doctorate Schoenung moved to California, joining California State Polytechnic University in 1989.
Research and career
Schoenung later moved to the University of California, Davis. She was appointed to the faculty at the University of California, Irvine in 2015. She is interested in nanostructured materials and green engineering processes. To generate nanostructures in functional materials, Schoenung makes use of cryomilling. Cryomilling can improve the oxidation behaviour of thermal barrier coatings and can be used to generate boron carbide reinforced aluminium nanocomposites. Green engineering processes are safer for the environment; they are less energy demanding, generate less pollution and do not release toxic chemicals. In particular, Schoenung is interested in the problem of electronic waste and the infrastructure required for e-waste recycling. Her research considers the factors that surround decision making in materials selection, with a particular focus on sustainability. She combines life-cycle assessment with management theory and environmental economics. In 2008, Schoenung was appointed to the Green Ribbon Science Panel, a group of researchers appointed by Arnold Schwarzenegger to protect Californians from toxic chemicals.
Awards and honors
2012 Elected Fellow of ASM International
2016 Acta Materialia, Inc. Hollomon Award for Materials & Society
2016 Elected Fellow of the Alpha Sigma Mu International Professional Honor Society
2017 Materials Science & Engineering: A Innovation in Research Award
2018 ASM International Edward DeMille Campbell Memorial Lectureship
2018 Elected Fellow of the American Ceramic Society
2021 The Minerals, Metals & Materials Society Fellow Award
Selected publications
References
Living people American materials scientists Women materials scientists and engineers University of California, Irvine faculty Year of birth missing (living people) Place of birth missing (living people) University of Illinois Urbana-Champaign alumni University of California, Davis faculty Massachusetts Institute of Technology alumni California State Polytechnic University, Pomona faculty Fellows of the American Ceramic Society 21st-century American women scientists 21st-century American scientists Fellows of the Minerals, Metals & Materials Society
Julie Schoenung
[ "Materials_science", "Technology" ]
540
[ "Women materials scientists and engineers", "Materials scientists and engineers", "Women in science and technology" ]
68,581,533
https://en.wikipedia.org/wiki/Afghan%20Girls%20Robotics%20Team
The Afghan Girls Robotics Team, also known as the Afghan Dreamers, is an all-girl robotics team from Herat, Afghanistan, founded through the Digital Citizen Fund (DCF) in 2017 by Roya Mahboob and Alireza Mehraban. It is made up of girls between ages 12 and 18 and their mentors. Several members of the team were relocated to Qatar and Mexico following the fall of Kabul in August 2021. A documentary film featuring members of the team, titled Afghan Dreamers, was released by MTV Documentary Films in 2023.
Origins
The Afghan Girls Robotics Team was co-founded in 2017 by Roya Mahboob, who is their coach, mentor and sponsor, and founder of the Digital Citizen Fund (DCF), which is the parent organization for the team. Dean Kamen was planning a 2017 competition in the United States and had recruited Mahboob to form a team from Afghanistan. Out of 150 girls, 12 were selected for the first team. Before parts were sent by Kamen, they trained in the basement of the home of Mahboob's parents, with scrap metal and without safety equipment, under the guidance of their coach, Mahboob's brother Alireza Mehraban, who is also a co-founder of the team.
2017 and 2018
In 2017, six members of the Afghan Girls Robotics Team traveled to the United States to participate in the international FIRST Global Challenge robotics competition. Their visas were rejected twice after they made two journeys from Herat to Kabul through Taliban-controlled areas, before officials in the United States government intervened to allow them to enter the United States. Customs officials also detained their robotics kits, which left them two weeks to construct their robot, unlike some teams that had more time. They were awarded a silver medal for Courageous Achievement. One week after they returned home from the competition, the father of team captain Fatemah Qaderyan, Mohammad Asif Qaderyan, was killed in a suicide bombing. After their United States visas expired, the team participated in competitions in Estonia and Istanbul. Three of the 12 members participated in the 2017 Entrepreneurial Challenge at the Robotex festival in Estonia, and won the competition for their solar-powered robot designed to assist farmers. In 2018, the team trained in Canada, and continued to travel in the United States for months and participate in competitions.
2019
The Afghan Girls Robotics Team had aspirations to develop a science and technology school for girls in Afghanistan. Roya Mahboob interfaced with the School of Engineering and Applied Sciences (SEAS), the School of Architecture, and the Whitney and Betty MacMillan Center for International and Area Studies at Yale University to design the infrastructure for what they named The Dreamer Institute.
2020
In March 2020, the governor of Herat at the time, in response to the COVID-19 pandemic in Afghanistan and a scarcity of ventilators, sought help with the design of low-cost ventilators, and the Afghan Girls Robotics Team was one of six teams contacted by the government. Using a design from the Massachusetts Institute of Technology and with guidance from MIT engineers and Douglas Chin, a surgeon in California, the team developed a prototype with Toyota Corolla parts and a chain drive from a Honda motorcycle. UNICEF also supported the team with the acquisition of necessary parts during the three months they spent building the prototype, which was completed in July 2020. Their design costs around $500, compared to around $50,000 for a conventional ventilator.
In December 2020, Minister of Industry and Commerce Nizar Ahmad Ghoryani donated funding and obtained land for a factory to produce the ventilators. Under the direction of their mentor Roya Mahboob, the Afghan Dreamers also designed a UVC robot for sanitization and a spray robot for disinfection, both of which were approved by the Ministry of Health for production.
2021
In early August 2021, Somaya Faruqi, former captain of the team, was quoted by Public Radio International about the future of Afghanistan, stating, "We don't support any group over another but for us what's important is that we be able to continue our work. Women in Afghanistan have made a lot of progress over the past two decades and this progress must be respected." On August 17, 2021, the Afghan Girls Robotics Team and their coaches were reported to be attempting to evacuate but unable to obtain a flight out of Afghanistan, and a lawyer appealed to Canada for assistance regarding the evacuation of the team members. As of August 19, 2021, nine members of the team and their coaches had evacuated to Qatar. The founder of the team, Roya Mahboob, and DCF board member Elizabeth Schaeffer Brown were previously in contact with the Qatari government to assist the team members in their evacuation from Afghanistan. By August 25, 2021, some members had arrived in Mexico. Saghar, a team member who evacuated to Mexico, said in an interview with The Associated Press, "We wanted to continue the path that we started to continue to go for our achievements and to go for having our dreams through reality. So that's why we decided to leave Afghanistan and go for somewhere safe." The members who left Afghanistan participated in an online robotics competition in September and plan to continue their education. A documentary film titled Afghan Dreamers, produced by Beth Murphy and directed by David Greenwald, was in post-production when the team began to evacuate.
2022
The Afghan Dreamers were involved in a training program at the Texas A&M University at Qatar's STEM Hub.
2023
The Afghan Girls Robotics Team had a booth at the 5th UN Conference on the Least Developed Countries, where they displayed some of the robots the team had constructed.
Afghan Dreamers documentary
The Afghan Dreamers documentary from MTV Documentary Films premiered in May 2023 on Paramount+. The film was directed by David Greenwald and produced by David Cowan and Beth Murphy. In a review for Screen Daily, Wendy Ide wrote, "This film, with its likeable cast of girl nerds and positive message, should enjoy a warm reception on the festival circuit, and will be of particular interest to events seeking to showcase women's stories from around the world. It also serves as a timely cautionary tale – a case study on just how quickly the rights and the opportunities of women can be curtailed, at the behest of the men in power."
Honors and awards
2017 Silver medal for Courageous Achievement at the FIRST Global Challenge, science and technology
2017 Benefiting Humanity in AI Award at World Summit AI
2017 Winner, Entrepreneurship Challenge at Robotex in Estonia
2018 Permission to Dream Award, Raw Film Festival
2018 Conrad Innovation Challenge, Raw Film Festival
2018 Rookie All Star – District Championship, Canada
2018 Asia Game Changer Award Honoree
2019 Inspiring in Engineering Award – FIRST Detroit World Championship
2019 Asia Game Changer Award of California
2019 Safety Award – FIRST Global, Dubai
2021 Forbes 30 Under 30 Asia
2022 World Championships, Geneva, Switzerland
References
External links
Official Afghan Dreamers documentary website
A day of pride for Afghan girl grads amid growing threats (PBS NewsHour, January 5, 2016)
Women in Afghanistan Women in engineering 21st-century Afghan women 21st-century Afghan people Robotics Student robotics competitions Women in science and technology Afghan refugees
Afghan Girls Robotics Team
[ "Technology", "Engineering" ]
1,482
[ "Robotics", "Women in science and technology", "Automation" ]
55,896,388
https://en.wikipedia.org/wiki/Hexahydroxytriphenylene
Hexahydroxytriphenylene (HHTP) is any of a set of organic compounds consisting of a polycyclic aromatic hydrocarbon core—triphenylene—with six hydroxy group substituents attached to the rings. These compounds have found use as components of two-dimensional polymers. The first covalent organic framework used this chemical as a monomer building block. It can also be used for self-assembling metal–organic frameworks. References Polycyclic aromatic compounds Polyphenols Porous polymer monomers
Hexahydroxytriphenylene
[ "Chemistry", "Materials_science" ]
117
[ "Porous polymers", "Porous polymer monomers" ]
55,900,182
https://en.wikipedia.org/wiki/The%20University%20of%20Sydney%20Nano%20Institute
The University of Sydney Nano Institute (Sydney Nano) is a multidisciplinary research institute at the University of Sydney in Camperdown, Sydney, Australia. It focuses on multidisciplinary research in nanoscale science and technology. It is one of ten multidisciplinary research institutes at the University of Sydney, along with the Charles Perkins Centre and the Brain and Mind Centre.
Location and facilities
Sydney Nano is headquartered at the Sydney Nanoscience Hub, which was built for nanoscience research and opened in 2015 on the University of Sydney's Camperdown/Darlington campus.
History
Sydney Nano was originally launched in April 2016 as the Australian Institute for Nanoscale Science and Technology (AINST). The institute was renamed The University of Sydney Nano Institute in November 2017. In July 2017, the University of Sydney announced a multi-year partnership with Microsoft to conduct research into quantum computing and the official establishment of Microsoft Quantum - Sydney at the Sydney Nanoscience Hub. In March 2018, the New South Wales Government provided an A$500,000 grant to set up the Sydney Quantum Academy to strengthen postgraduate research and training in quantum computing. The academy is led by the University of Sydney in partnership with Macquarie University, the University of New South Wales and the University of Technology Sydney.
Directors
Sydney Nano was jointly led by three interim directors, Thomas Maschmeyer, Simon Ringer and Zdenka Kuncic, who oversaw the launch period of the institute from March 2016. Susan Pond was appointed to the directorship in February 2017, for a period of 12 months. Ben Eggleton served as director from May 2018 to December 2022, when Alice Motion was appointed interim director for six months.
References
Nano Institute Nanotechnology institutions 2016 establishments in Australia
The University of Sydney Nano Institute
[ "Materials_science" ]
360
[ "Nanotechnology", "Nanotechnology institutions" ]
55,900,328
https://en.wikipedia.org/wiki/Demonstration%20and%20Shakedown%20Operation
A Demonstration and Shakedown Operation (DASO) is a series of missile tests conducted by the United States Navy and the Royal Navy. These tests are employed to validate a submarine-launched ballistic missile (SLBM) weapon system and to ensure a submarine crew's readiness to use that system. A shakedown operation usually occurs after a refueling and overhaul process or after construction of a new submarine. Testing of missile systems allows collection of flight data and examination of submarine launch platforms. The first DASO test occurred on July 20, 1960, aboard the USS George Washington, using the Polaris A-1. Modern tests use the UGM-133 Trident II, launched from an Ohio-class submarine. References Aerospace engineering Product testing Ballistic missile submarines
Demonstration and Shakedown Operation
[ "Engineering" ]
144
[ "Aerospace engineering" ]
55,903,753
https://en.wikipedia.org/wiki/Lakhori%20bricks
Lakhori bricks (also Badshahi bricks, Kakaiya bricks, Lakhauri bricks) are flat, thin, red burnt-clay bricks, originating from Lahore, Pakistan, that became an increasingly popular element of Mughal architecture during the reign of Shah Jahan, and remained so until the early 20th century, when lakhori bricks and the similar Nanak Shahi bricks were replaced by the larger standard 9"x4"x3" bricks, called ghumma bricks, introduced in colonial British India. Several still-surviving famous 17th- to 19th-century structures of Mughal India, characterized by jharokhas, jalis, fluted sandstone columns, ornamental gateways and grand cusped-arch entrances, are made of lakhori bricks, including fort palaces (such as the Red Fort), protective bastions and pavilions (as seen in the Bawana Zail fortress), havelis (such as Bagore-ki-Haveli, Chunnamal Haveli, Ghalib ki Haveli, Dharampura Haveli and Hemu's Haveli), temples and gurudwaras (such as in Maharaja Patiala's Bahadurgarh Fort), mosques and tombs (such as Mehram Serai and Teele Wali Masjid), water wells and baoli stepwells (such as Choro Ki Baoli), bridges (such as the Mughal bridge at Karnal), Kos minar roadside milestones (such as at Palwal along the Grand Trunk Road) and other notable structures.
Origin
The exact origin of lakhori bricks is not confirmed; in particular, it is not known whether they existed before becoming more prevalent in use during Mughal India. Before their rise to frequent use, Indian architecture primarily used the trabeated post-and-lintel (point and slot) gravity-based technique of shaping large stones to fit into each other, which required no mortar. Lakhori bricks became more popular during the Mughal period, starting from Shah Jahan's reign, mainly because their small shape and slim size made it easy to create the intricate patterns required by typical elements of Mughal architecture such as arches, jalis, jharokhas, mouldings, cornices and cladding.
Regional, socio-strata and dimensional variations
The slim and compact lakhori bricks became popular across the pan-Indian-subcontinental Mughal Empire, especially in North India, resulting in several variations in their dimensions, as well as variations arising from the use of lower-strength local soil by poor people and higher-strength clay by affluent people. Restoration architect and author Anil Laul reasons that poor people used local soil to bake slimmer bricks using locally available, cheaper dung cakes as fuel, while richer people used higher-end, thicker and bigger bricks made of higher-strength clay baked in kilns using more expensive, less readily available coal; both methods yielded bricks of similar strength but different proportions at different economic strata.
Lakhori bricks versus Nanakshahi bricks
Owing to a lack of understanding, contemporary writers sometimes confuse lakhori bricks with other similar but distinct regional variants. For example, some writers use "Lakhori bricks and Nanak Shahi bricks", implying two different things, while others use "Lakhori bricks or Nanak Shahi bricks", inadvertently implying either the same or two different things; when these terms are mentioned casually and interchangeably, it creates confusion as to whether they are the same.
Lakhori bricks were used by the Mughal Empire, which spanned the Indian subcontinent, whereas Nanak Shahi bricks were used mainly across the Sikh Empire, spread across the Punjab region in the north-west of the Indian subcontinent, at a time when the Sikhs were in conflict with the Mughal Empire owing to the religious persecution of Sikhs by the Mughal rulers. Coins struck by Sikh rulers between 1764 and 1777 CE were called "Gobind Shahi" coins (bearing an inscription in the name of Guru Gobind Singh), and coins struck from 1777 onward were called "Nanak Shahi" coins (bearing an inscription in the name of Guru Nanak). A similar distinction applies to the Nanak Shahi bricks of the Sikh Empire: lakhori and Nanak Shahi bricks are two similar but different types of brick, separated by regional variation as well as political circumstance. Closely related, similar things may be considered separate, while considerably different things may be considered the same, in both cases for social, political and religious contextual reasons: for example, the closely related, mutually intelligible Sanskritised-Hindustani language Hindi and the Arabised-Hindustani language Urdu are favored as separate languages by Hindus and Muslims respectively, in the context of the Hindu-Muslim conflict that resulted in the Partition of India, whereas mutually unintelligible speech varieties that differ considerably in structure, such as Moroccan Arabic, Yemeni Arabic and Lebanese Arabic, are considered the same language owing to the pan-Islamist religious movement. Mughal-era lakhori bricks predate Nanak Shahi bricks, as seen in the Bahadurgarh Fort of Patiala, which was built by the Mughal Nawab Saif Khan in 1658 CE using earlier-era lakhori bricks and, nearly 180 years later, was renovated using later-era Nanak Shahi bricks and renamed in honor of Guru Tegh Bahadur (who stayed at this fort for three months and nine days before leaving for Delhi, where he was executed by Aurangzeb in 1675 CE) by the Maharaja of Patiala, Karam Singh, in 1837 CE. Since the timelines of the Mughal Empire and the Sikh Empire overlapped, both lakhori bricks and Nanak Shahi bricks were in use around the same time in their respective dominions. Restoration architect and author Anil Laul clarifies: "We, therefore, had slim bricks known as the Lakhori and Nanakshahi bricks in India and the slim Roman bricks or their equivalents for many other parts of the world."
Mortar recipe
Lakhori bricks were used to construct structures with crushed-brick and lime mortar, and walls were usually plastered with lime mortar. The concrete mixture of that era was a preparation of lime, surki (trass), jaggery and bael fruit (wood apple) pulp; some recipes used as many as 23 ingredients, including a paste of urad dal (vigna mungo pulse).
References
Works cited
External links
Lakhori brick rampart of Bavana Fortress of Zail (administrative unit) of Jat chiefs
Haveli Dharampura built with lakhori bricks has a restaurant named "lakhori"
Rajput architecture Indian architectural history Mughal architecture elements Building materials
Lakhori bricks
[ "Physics", "Engineering" ]
1,361
[ "Building engineering", "Construction", "Materials", "Building materials", "Matter", "Architecture" ]
55,904,512
https://en.wikipedia.org/wiki/Topological%20geometry
Topological geometry deals with incidence structures consisting of a point set $P$ and a family $\mathfrak{L}$ of subsets of $P$ called lines or circles etc. such that both $P$ and $\mathfrak{L}$ carry a topology and all geometric operations like joining points by a line or intersecting lines are continuous. As in the case of topological groups, many deeper results require the point space to be (locally) compact and connected. This generalizes the observation that the line joining two distinct points in the Euclidean plane depends continuously on the pair of points and the intersection point of two lines is a continuous function of these lines.
Linear geometries
Linear geometries are incidence structures in which any two distinct points $p$ and $q$ are joined by a unique line $pq$. Such geometries are called topological if $pq$ depends continuously on the pair $(p, q)$ with respect to given topologies on the point set and the line set. The dual of a linear geometry is obtained by interchanging the roles of points and lines. A survey of linear topological geometries is given in Chapter 23 of the Handbook of incidence geometry. The most extensively investigated topological linear geometries are those which are also dual topological linear geometries. Such geometries are known as topological projective planes.
History
A systematic study of these planes began in 1954 with a paper by Skornyakov. Earlier, the topological properties of the real plane had been introduced via ordering relations on the affine lines, see, e.g., Hilbert, Coxeter, and O. Wyler. The completeness of the ordering is equivalent to local compactness and implies that the affine lines are homeomorphic to $\mathbb{R}$ and that the point space is connected. Note that the rational numbers do not suffice to describe our intuitive notions of plane geometry and that some extension of the rational field is necessary. In fact, the equation $x^2 + y^2 = 3$ for a circle has no rational solution.
Topological projective planes
The approach to the topological properties of projective planes via ordering relations is not possible, however, for the planes coordinatized by the complex numbers, the quaternions or the octonion algebra. The point spaces as well as the line spaces of these classical planes (over the real numbers, the complex numbers, the quaternions, and the octonions) are compact manifolds of dimension $2d$, where $d \in \{1, 2, 4, 8\}$ is the dimension of the coordinate domain over $\mathbb{R}$.
Topological dimension
The notion of the dimension of a topological space plays a prominent rôle in the study of topological, in particular of compact connected planes. For a normal space $X$, the dimension $\dim X$ can be characterized as follows: If $S^n$ denotes the $n$-sphere, then $\dim X \le n$ if, and only if, for every closed subspace $A \subseteq X$ each continuous map $f\colon A \to S^n$ has a continuous extension $X \to S^n$. For details and other definitions of a dimension see and the references given there, in particular Engelking or Fedorchuk.
2-dimensional planes
The lines of a compact topological plane with a 2-dimensional point space form a family of curves homeomorphic to a circle, and this fact characterizes these planes among the topological projective planes. Equivalently, the point space is a surface. Early examples not isomorphic to the classical real plane have been given by Hilbert and Moulton. The continuity properties of these examples were not considered explicitly at that time; they may have been taken for granted. Hilbert's construction can be modified to obtain uncountably many pairwise non-isomorphic 2-dimensional compact planes.
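For orientation, the dimensions involved can be summarized as follows (standard facts about the classical planes; the letter $d$ is introduced only for this summary):
$$\dim P_2(\mathbb{F}) = 2d, \qquad \text{lines of } P_2(\mathbb{F}) \approx S^{d}, \qquad d = \dim_{\mathbb{R}} \mathbb{F} \in \{1, 2, 4, 8\}, \quad \mathbb{F} \in \{\mathbb{R}, \mathbb{C}, \mathbb{H}, \mathbb{O}\}.$$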
The traditional way to distinguish the classical real plane from the other 2-dimensional planes is by the validity of Desargues's theorem or the theorem of Pappos (see, e.g., Pickert for a discussion of these two configuration theorems). The latter is known to imply the former (Hessenberg). The theorem of Desargues expresses a kind of homogeneity of the plane. In general, it holds in a projective plane if, and only if, the plane can be coordinatized by a (not necessarily commutative) field; hence it implies that the group of automorphisms is transitive on the set of quadrangles (4 points, no 3 of which are collinear). In the present setting, a much weaker homogeneity condition characterizes the classical plane:
Theorem. If the automorphism group of a 2-dimensional compact plane is transitive on the point set (or the line set), then it has a compact subgroup which is even transitive on the set of flags (= incident point-line pairs), and the plane is classical.
The automorphism group $\Sigma$ of a 2-dimensional compact plane, taken with the topology of uniform convergence on the point space, is a locally compact group of dimension at most 8, in fact even a Lie group. All 2-dimensional planes such that $\dim \Sigma \ge 3$ can be described explicitly; those with $\dim \Sigma = 4$ are exactly the Moulton planes, and the classical plane is the only 2-dimensional plane with $\dim \Sigma > 4$; see also.
Compact connected planes
The results on 2-dimensional planes have been extended to compact planes of higher dimension. This is possible due to the following basic theorem:
Topology of compact planes. If the dimension of the point space $P$ of a compact connected projective plane is finite, then $\dim P = 2m$ with $m \in \{1, 2, 4, 8\}$. Moreover, each line is a homotopy sphere of dimension $m$, see or.
Special aspects of 4-dimensional planes are treated in, more recent results can be found in. The lines of a 4-dimensional compact plane are homeomorphic to the 2-sphere; in the cases $m = 4$ and $m = 8$ the lines are not known to be manifolds, but in all examples which have been found so far the lines are spheres. A subplane $\mathcal{B}$ of a projective plane $\mathcal{P}$ is said to be a Baer subplane, if each point of $\mathcal{P}$ is incident with a line of $\mathcal{B}$ and each line of $\mathcal{P}$ contains a point of $\mathcal{B}$. A closed subplane $\mathcal{B}$ is a Baer subplane of a compact connected plane $\mathcal{P}$ if, and only if, the point space of $\mathcal{B}$ and a line of $\mathcal{P}$ have the same dimension. Hence the lines of an 8-dimensional plane are homeomorphic to a sphere if the plane has a closed Baer subplane.
Homogeneous planes. If $\mathcal{P}$ is a compact connected projective plane and if its automorphism group $\Sigma$ is transitive on the point set of $\mathcal{P}$, then $\Sigma$ has a flag-transitive compact subgroup $\Phi$ and $\mathcal{P}$ is classical, see or. In fact, $\Phi$ is an elliptic motion group.
Let $\mathcal{P}$ be a compact plane of dimension $2m$, $m \in \{2, 4, 8\}$, and write $\Sigma = \operatorname{Aut} \mathcal{P}$. If $\dim \Sigma$ is sufficiently large, then $\mathcal{P}$ is classical, and $\Sigma$ is a simple Lie group of dimension 16, 35 or 78, respectively. All planes whose automorphism group has dimension close to this bound are known explicitly; in the largest case they are exactly the projective closures of the affine planes coordinatized by a so-called mutation of the octonion algebra $\mathbb{O}$, where the new multiplication is obtained from the octonion product by means of a fixed real parameter. Vast families of planes with a group of large dimension have been discovered systematically starting from assumptions about their automorphism groups, see, e.g.,. Many of them are projective closures of translation planes (affine planes admitting a sharply transitive group of automorphisms mapping each line to a parallel), cf.; see also for more recent results in the cases $m = 4$ and $m = 8$.
Compact projective spaces
Subplanes of projective spaces of geometrical dimension at least 3 are necessarily Desarguesian, see §1 or §16 or.
Therefore, all compact connected projective spaces can be coordinatized by the real or complex numbers or the quaternion field.
Stable planes
The classical non-euclidean hyperbolic plane can be represented by the intersections of the straight lines in the real plane with an open circular disk. More generally, open (convex) parts of the classical affine planes are typical stable planes. A survey of these geometries can be found in, for the 2-dimensional case see also.
Precisely, a stable plane $(P, \mathfrak{L})$ is a topological linear geometry such that $P$ is a locally compact space of positive finite dimension, each line $L \in \mathfrak{L}$ is a closed subset of $P$ and $\mathfrak{L}$ is a Hausdorff space, the set of pairs of intersecting lines is an open subspace of $\mathfrak{L} \times \mathfrak{L}$ (stability), and the map $(K, L) \mapsto K \cap L$ is continuous. Note that stability excludes geometries like the 3-dimensional affine space over $\mathbb{R}$.
A stable plane is a projective plane if, and only if, $P$ is compact. As in the case of projective planes, line pencils are compact and homotopy equivalent to a sphere of dimension $m$, and $\dim P = 2m$ with $m \in \{1, 2, 4, 8\}$, see or. Moreover, the point space $P$ is locally contractible.
Compact groups of (proper) stable planes are rather small. Let $\Phi_m$ denote a maximal compact subgroup of the automorphism group of the classical $2m$-dimensional projective plane. Then the following theorem holds: If a $2m$-dimensional stable plane admits a compact group of automorphisms whose dimension comes close to $\dim \Phi_m$, then the plane is the classical projective plane, see.
Flag-homogeneous stable planes. Let $\mathcal{M}$ be a stable plane. If the automorphism group of $\mathcal{M}$ is flag-transitive, then $\mathcal{M}$ is a classical projective or affine plane, or $\mathcal{M}$ is isomorphic to the interior of the absolute sphere of the hyperbolic polarity of a classical plane; see.
In contrast to the projective case, there is an abundance of point-homogeneous stable planes, among them vast classes of translation planes, see and.
Symmetric planes
Affine translation planes have the following property: There exists a point transitive closed subgroup $\Delta$ of the automorphism group which contains a unique reflection at some point and hence at each point. More generally, a symmetric plane is a stable plane satisfying the aforementioned condition; see, cf. for a survey of these geometries. By Corollary 5.5, the group $\Delta$ is a Lie group and the point space is a manifold. It follows that a symmetric plane is a symmetric space. By means of the Lie theory of symmetric spaces, all symmetric planes with a point set of dimension 2 or 4 have been classified. They are either translation planes or they are determined by a Hermitian form. An easy example is the real hyperbolic plane.
Circle geometries
Classical models are given by the plane sections of a quadratic surface $Q$ in real projective 3-space; if $Q$ is a sphere, the geometry is called a Möbius plane. The plane sections of a ruled surface (one-sheeted hyperboloid) yield the classical Minkowski plane, cf. for generalizations. If $Q$ is an elliptic cone without its vertex, the geometry is called a Laguerre plane. Collectively these planes are sometimes referred to as Benz planes. A topological Benz plane is classical, if each point has a neighbourhood which is isomorphic to some open piece of the corresponding classical Benz plane.
Möbius planes
Möbius planes consist of a family of circles, which are topological 1-spheres, on the 2-sphere $S^2$ such that for each point the derived structure is a topological affine plane. In particular, any 3 distinct points are joined by a unique circle. The circle space is then homeomorphic to real projective 3-space with one point deleted. A large class of examples is given by the plane sections of an egg-like surface in real 3-space.
Homogeneous Möbius planes
If the automorphism group $\Sigma$ of a Möbius plane is transitive on the point set or on the set of circles, or if $\dim \Sigma$ is sufficiently large, then the plane is classical, see. In contrast to compact projective planes, there are no topological Möbius planes with circles of dimension greater than 1, in particular no compact Möbius planes with a 4-dimensional point space. All 2-dimensional Möbius planes whose automorphism group has sufficiently large dimension can be described explicitly.
Laguerre planes
The classical model of a Laguerre plane consists of a circular cylindrical surface $C$ in real 3-space as point set and the compact plane sections of $C$ as circles. Pairs of points which are not joined by a circle are called parallel. Let $\Pi$ denote a class of parallel points. Then $C \setminus \Pi$ is a plane homeomorphic to $\mathbb{R}^2$, and the circles can be represented in this plane by parabolas of the form $y = a x^2 + b x + c$ with $a \ne 0$. In an analogous way, the classical 4-dimensional Laguerre plane is related to the geometry of complex quadratic polynomials. In general, the axioms of a locally compact connected Laguerre plane require that the derived planes embed into compact projective planes of finite dimension. A circle not passing through the point of derivation induces an oval in the derived projective plane. By or, circles are homeomorphic to spheres of dimension 1 or 2. Hence the point space of a locally compact connected Laguerre plane is homeomorphic to a cylinder or it is a 4-dimensional manifold, cf. A large class of 2-dimensional examples, called ovoidal Laguerre planes, is given by the plane sections of a cylinder in real 3-space whose base is an oval in the plane.
The automorphism group of a 2-dimensional Laguerre plane is a Lie group with respect to the topology of uniform convergence on compact subsets of the point space; furthermore, this group has dimension at most 7. All automorphisms of a Laguerre plane which fix each parallel class form a normal subgroup, the kernel of the full automorphism group. The 2-dimensional Laguerre planes whose automorphism group has sufficiently large dimension are exactly the ovoidal planes over proper skew parabolae. The classical 2-dimensional Laguerre plane is the only one whose automorphism group attains the maximal dimension, see, cf. also.
Homogeneous Laguerre planes
If the automorphism group of a 2-dimensional Laguerre plane is transitive on the set of parallel classes, and if the kernel is transitive on the set of circles, then the plane is classical, see 2.1,2. However, transitivity of the automorphism group on the set of circles does not suffice to characterize the classical model among the 2-dimensional Laguerre planes.
Minkowski planes
The classical model of a Minkowski plane has the torus $S^1 \times S^1$ as point space; circles are the graphs of real fractional linear maps on $S^1 = \mathbb{R} \cup \{\infty\}$. As with Laguerre planes, the point space of a locally compact connected Minkowski plane is 2- or 4-dimensional; the point space is then homeomorphic to a torus or to $S^2 \times S^2$, see.
Homogeneous Minkowski planes
If the automorphism group of a 4-dimensional Minkowski plane is flag-transitive, then the plane is classical. The automorphism group of a 2-dimensional Minkowski plane is a Lie group of dimension at most 6. All 2-dimensional Minkowski planes whose automorphism group has sufficiently large dimension can be described explicitly. The classical 2-dimensional Minkowski plane is the only one whose automorphism group attains the maximal dimension, see.
Notes References Topology Incidence geometry
Topological geometry
[ "Physics", "Mathematics" ]
2,811
[ "Combinatorics", "Topology", "Space", "Geometry", "Spacetime", "Incidence geometry" ]
51,457,085
https://en.wikipedia.org/wiki/%C5%A0varc%E2%80%93Milnor%20lemma
In the mathematical subject of geometric group theory, the Švarc–Milnor lemma (sometimes also called the Milnor–Švarc lemma, with both variants also sometimes spelling Švarc as Schwarz) is a statement which says that a group $G$, equipped with a "nice" discrete isometric action on a metric space $X$, is quasi-isometric to $X$. This result goes back, in different form, before the notion of quasi-isometry was formally introduced, to the work of Albert S. Schwarz (1955) and John Milnor (1968). Pierre de la Harpe called the Švarc–Milnor lemma "the fundamental observation in geometric group theory" because of its importance for the subject. Occasionally the name "fundamental observation in geometric group theory" is now used for this statement, instead of calling it the Švarc–Milnor lemma; see, for example, Theorem 8.2 in the book of Farb and Margalit.
Precise statement
Several minor variations of the statement of the lemma exist in the literature. Here we follow the version given in the book of Bridson and Haefliger (see Proposition 8.19 on p. 140 there).
Let $G$ be a group acting by isometries on a proper length space $X$ such that the action is properly discontinuous and cocompact. Then the group $G$ is finitely generated, and for every finite generating set $S$ of $G$ and every point $x_0 \in X$ the orbit map $g \mapsto g x_0$ is a quasi-isometry. Here $d_S$ is the word metric on $G$ corresponding to $S$.
Sometimes a properly discontinuous cocompact isometric action of a group $G$ on a proper geodesic metric space $X$ is called a geometric action.
Explanation of the terms
Recall that a metric space $X$ is proper if every closed ball in $X$ is compact. An action of $G$ on $X$ is properly discontinuous if for every compact $K \subseteq X$ the set $\{g \in G : gK \cap K \ne \varnothing\}$ is finite. The action of $G$ on $X$ is cocompact if the quotient space $X/G$, equipped with the quotient topology, is compact. Under the other assumptions of the Švarc–Milnor lemma, the cocompactness condition is equivalent to the existence of a closed ball $B$ in $X$ such that $\bigcup_{g \in G} gB = X$.
Examples of applications of the Švarc–Milnor lemma
For Examples 1 through 5 below see pp. 89–90 in the book of de la Harpe. Example 6 is the starting point of part of the paper of Richard Schwartz.
For every $n \ge 1$ the group $\mathbb{Z}^n$ is quasi-isometric to the Euclidean space $\mathbb{R}^n$.
If $\Sigma$ is a closed connected oriented surface of negative Euler characteristic then the fundamental group $\pi_1(\Sigma)$ is quasi-isometric to the hyperbolic plane $\mathbb{H}^2$.
If $M$ is a closed connected smooth manifold with a smooth Riemannian metric $g$ then $\pi_1(M)$ is quasi-isometric to $(\widetilde{M}, d_{\widetilde{g}})$, where $\widetilde{M}$ is the universal cover of $M$, where $\widetilde{g}$ is the pull-back of $g$ to $\widetilde{M}$, and where $d_{\widetilde{g}}$ is the path metric on $\widetilde{M}$ defined by the Riemannian metric $\widetilde{g}$.
If $G$ is a connected finite-dimensional Lie group equipped with a left-invariant Riemannian metric and the corresponding path metric, and if $\Gamma \le G$ is a uniform lattice, then $\Gamma$ is quasi-isometric to $G$.
If $M$ is a closed hyperbolic 3-manifold, then $\pi_1(M)$ is quasi-isometric to $\mathbb{H}^3$.
If $M$ is a complete finite volume hyperbolic 3-manifold with cusps, then $\Gamma = \pi_1(M)$ is quasi-isometric to $\mathbb{H}^3 \setminus \mathcal{B}$, where $\mathcal{B}$ is a certain $\Gamma$-invariant collection of horoballs, and where $\mathbb{H}^3 \setminus \mathcal{B}$ is equipped with the induced path metric.
References Geometric group theory Metric geometry
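Unpacking the conclusion, with the constants named only for this illustration: the orbit map is a quasi-isometry in the standard sense, i.e. there exist $\lambda \ge 1$ and $\varepsilon, C \ge 0$ such that
$$\frac{1}{\lambda}\, d_S(g, h) - \varepsilon \;\le\; d_X(g x_0, h x_0) \;\le\; \lambda\, d_S(g, h) + \varepsilon \qquad \text{for all } g, h \in G,$$
and every point of $X$ lies within distance $C$ of the orbit $G x_0$.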
Švarc–Milnor lemma
[ "Physics" ]
693
[ "Geometric group theory", "Group actions", "Symmetry" ]
51,462,681
https://en.wikipedia.org/wiki/Objective%20vision
Objective Vision (Object Oriented Visionary) is a project mainly aimed at real-time computer vision and the simulation of the vision of living creatures. It has three sections: an open-source library of programming functions for use inside projects; a virtual laboratory where scholars can test the application of the functions directly, and via command-line code for external and instant access; and a research section consisting of papers and libraries to expand the scientific grounding of the work.
Background
The process used in the OVC libraries is the same as what happens when a living creature sees a picture, and it is designed to give researchers the closest available simulation of how the brain's visual cortex perceives pictures. The OVC was designed to work as a simulated visual cortex, which has a critical job in processing and classifying objects, making it easier to work with pictures and with graphical perception and processing. Computer programmers have had much success building machines that emulate activities such as playing chess or solving algebra equations, but how the visual system of living beings works as a whole is still a riddle. The project simulates the visual system from the point where signals are converted into an image (in effect, edges and colors) to the recognition of shapes, in order to find a relation between the brain's information and the image. The Objective Vision system concentrates on separable sections of the image; this separation gives the system excellent processing results, because the system does not waste much time on processing insignificant sections and signals. In the project this operation is called objective processing, and because the project's mission is focused on simulating human vision, the developer refers to it as Objective Vision.
History
Objective Vision is a human (natural) vision simulation project developed by Michael Bidollahkhany. Following an explosion of interest, the early decades of the 21st century were characterized by the maturing of the field and the significant growth of active applications; simulation of visual systems, vision-based autonomous vehicle guidance, medical imaging (2D and 3D) and automatic surveillance are the most rapidly developing areas. This progress can be seen in an increasing number of software and hardware products on the market, as well as in a number of digital image processing software packages and APIs, and in machine vision courses offered at universities worldwide. The OVC project was therefore released as a research software project in 2016. One of the important parts of this project was the O.V.C. (Objective Vision Class library), designed to enable companies and scientists to use the brain's most likely functionalities as vision libraries, simplifying and accelerating the development of image processing algorithms. The project started under the MIT copyright license, but since 2018 the project has continued on a closed basis, at its sponsors' request.
The Algorithm
As the developers claim, the algorithm used in the class library and developer's kit of the project has been developed based on the natural visual system, and the functionalities (image processing, optimization, labeling, etc.) are mostly upgraded versions of closely related techniques.
Suppose that we have a picture of a jungle, or of somewhere else. With this library a developer will be able to manipulate not only the pixels of the image for data extraction but also, automatically, depending on which algorithm is used and on the image quality, a list of objects, the corresponding pixels and whatever data the project needs, said the developer in his lecture, answering how the algorithm works.
Viewpoint
For a long time, digital image processing and storage actually meant processing just pixels; this project tries to present a new kind of image processing and even storage, called "objective vision" or "object-oriented visionary". The project officially launched in May 2016, with the aim of making more adaptation between computer vision (including visionary, digital image processing, discernment and even perception) and the human visual system. About the development of the project, Michael Bidollahkhany said: "...so we decided to research on Human Vision System, besides we worked on Artificial Retinal image processing and new visionary optimization unit (Presented at Istanbul Technical University Conference (Turkey 2015-2016)) and grew our research to Visionary CORTEX of Brain".
Applications
The OVC application areas include:
2D and 3D feature toolkits
Egomotion estimation
Human–computer interaction (HCI)
Mobile robotics
Motion understanding
Object identification
Segmentation and recognition
Stereopsis (stereo vision: depth perception from two cameras)
Structure from motion (SFM)
Motion tracking
Programming language
In the initial release of the Objective Visionary Project, the algorithm was written in C++ and C#, and the virtual laboratory was developed in C# and Delphi. According to the developer's most recent lecture, since the second release the complete algorithm has been re-written in C#, based on .NET Core 1.0, to make it easier to work on different operating systems.
See also
Human visual system model
Visual system
Machine vision
Image processing
OpenCV
References Machine vision Image processing
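The project's own API is not documented in this article, so the "list of objects instead of pixels" idea can only be illustrated with generic tools. Below is a minimal sketch using OpenCV; every name in it is an assumption of this example, and none of it is the OVC library itself.

```python
import cv2

def extract_objects(path, thresh=127):
    """Return a list of detected objects (contour, bounding box, area)
    rather than raw pixels -- the 'objective processing' idea in miniature."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Binarize, then keep only the outer contours of connected regions.
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Each detected region becomes an "object" record the caller can query,
    # so downstream code reasons about objects instead of pixel arrays.
    return [{"contour": c,
             "bbox": cv2.boundingRect(c),
             "area": cv2.contourArea(c)} for c in contours]

objects = extract_objects("jungle.jpg")  # hypothetical input image
print(len(objects), "objects found")
```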
Objective vision
[ "Engineering" ]
989
[ "Machine vision", "Robotics engineering" ]
59,268,121
https://en.wikipedia.org/wiki/Magneto-electric%20spin-orbit
Magneto-electric spin-orbit (MESO) is a technology for constructing scalable integrated circuits, proposed by Intel, that works on a different operating principle than CMOS devices such as MOSFETs but is compatible with CMOS device manufacturing techniques and machinery. MESO devices operate by coupling the magnetoelectric effect with spin-orbit coupling. Specifically, the magnetoelectric effect induces a change in magnetization within the device in response to an induced electric field, which is then read out by the spin-orbit coupling component, which converts it into an electric charge. This mechanism is analogous to how a CMOS device operates, with the source, gate and drain electrodes working together to form a logic gate. As of 2020, the technology is under development by Intel and the University of California, Berkeley. The first experiment, conducted in 2020 at nanoGUNE, showed that spin-orbit coupling could be used to implement MESO.
Performance
Before the introduction of MESO, Intel evaluated 17 different device architectures for beyond-CMOS scaling, which aims to circumvent scaling challenges present with CMOS devices such as the MOSFETs used in integrated circuits. For testing, these architectures were made with production processes compatible with those used for CMOS devices, since some CMOS devices are still necessary for interfacing with other circuits and for providing the clock signal of an integrated circuit, and for reusing existing production equipment: tunneling FETs, graphene p-n junctions, ITFETs, BisFET, spinFETs, all spin logic, spin torque oscillators, domain wall logic, spin torque majority, spin torque triad, spin wave devices, nano magnet logic, charge spin logic, piezo FETs, MITFETs, FeFETs and negative capacitance FETs were tested, and it was found that none offered both improved performance characteristics and lower power consumption compared with CMOS. According to VentureBeat, simulations showed that, on a 32-bit ALU, MESO devices offer both higher performance (processing speed in TOPS per cm2) and lower power density than CMOS HP devices, which had the highest performance among all other devices except MESO. Compared to CMOS, MESO circuits can require less energy for switching, can have a lower operating voltage, feature a higher integration density, and possess non-volatility, which allows for ultra-low standby power consumption; furthermore, the energy required to switch MESO devices scales down cubically with every miniaturization by a factor of two of the device. These features make MESO attractive for replacing CMOS devices in the design of future logic gates and circuits in integrated circuits, as it can help increase their performance and lower their power consumption. There is a major challenge in the magnetoelectric (ME) writing process regarding the necessary materials. In recent years, great efforts have been made in the scientific community to make magnetoelectric effects work in nanostructures (thin films). The main issue is that when a ferroelectric material is transferred to a thin film it loses its ferroelectric (FE) properties, making it even more difficult to achieve high-efficiency coupling of the ferroelectric and ferromagnetic components (the ME effect) in nanometer-size systems. References Spintronics
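A quick way to read the cubic-scaling claim above (the symbol $\ell$ for the linear feature size is introduced only for this illustration): if $E_{\text{switch}} \propto \ell^{3}$, then each halving of the feature size cuts the switching energy eightfold,
$$E_{\text{switch}}(\ell/2) = \frac{1}{2^{3}}\, E_{\text{switch}}(\ell) = \frac{E_{\text{switch}}(\ell)}{8}.$$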
Magneto-electric spin-orbit
[ "Physics", "Materials_science" ]
657
[ "Spintronics", "Condensed matter physics" ]
59,279,900
https://en.wikipedia.org/wiki/Spanish%20Astrobiology%20Center
The Spanish Astrobiology Center (CAB) is a state-run institute in Spain dedicated to astrobiology research; it is part of the National Institute of Aerospace Technology (INTA) as well as the Spanish National Research Council (CSIC). It was created in 1999 and is affiliated with the NASA Astrobiology Institute. Its main objective is "understanding life as a consequence of the evolution of the matter and energy in the Universe."
History
The foundation of Spain's Astrobiology Center (CAB) had its beginnings in 1998, when a group of Spanish scientists led by Juan Pérez-Mercader presented a proposal of affiliation to the newly created NASA Astrobiology Institute (NAI). The affiliation was accepted and the center was officially created on 19 November 1999. It operated from offices at the National Institute of Aerospace Technology (INTA) until it moved to its own building, inaugurated in January 2003.
Organization
The Astrobiology Center is based in Madrid, Spain; its director is Víctor Parro García, and the vice-director is Francisco Najarro. The center is organized into several research and support units, some of which are associated with Spanish universities, including the University of Valladolid and the Autonomous University of Madrid. The center is part of the National Institute of Aerospace Technology (INTA) as well as the Spanish National Research Council (CSIC). The center is structured in several departments: the Astrophysics Department, the Molecular Evolution Department, the Planetary Science and Habitability Department and the Advanced Instrumentation Department, as well as several support units.
Research
CAB has contributed to NASA's mission to better characterize and find conditions for life in the Universe, and has prioritized research on Martian weather and on the endurance of some extremophile microorganisms. CAB has developed instruments for multiple missions:
Rover Environmental Monitoring Station (REMS) for the Curiosity rover
Temperature and Winds for InSight (TWINS) on the InSight mission
MEDA (Mars Environmental Dynamics Analyzer), which rides on NASA's Perseverance rover, launched in 2020
Raman Laser Spectrometer (RLS), for detecting minerals and potential biological pigments, for the European Space Agency's Rosalind Franklin rover, to be launched in 2022
CAB is also developing a life detector called the Signs Of LIfe Detector (SOLID) to be potentially flown on a future mission.
References
External links
Official website
Official flier of the Astrobiology Center (in English)
Science and technology in Spain Government of Spain 1999 establishments in Spain Research institutes in the Community of Madrid Astrobiology Origin of life Instituto Nacional de Técnica Aeroespacial
Spanish Astrobiology Center
[ "Astronomy", "Biology" ]
526
[ "Origin of life", "Speculative evolution", "Astrobiology", "Biological hypotheses", "Astronomical sub-disciplines" ]
59,284,284
https://en.wikipedia.org/wiki/Hazards%20of%20synthetic%20biology
The hazards of synthetic biology include biosafety hazards to workers and the public, biosecurity hazards stemming from deliberate engineering of organisms to cause harm, and hazards to the environment. The biosafety hazards are similar to those for existing fields of biotechnology, mainly exposure to pathogens and toxic chemicals; however, novel synthetic organisms may have novel risks. For biosecurity, there is concern that synthetic or redesigned organisms could theoretically be used for bioterrorism. Potential biosecurity risks include recreating known pathogens from scratch, engineering existing pathogens to be more dangerous, and engineering microbes to produce harmful biochemicals. Lastly, environmental hazards include adverse effects on biodiversity and ecosystem services, including potential changes to land use resulting from agricultural use of synthetic organisms. In general, existing hazard controls, risk assessment methodologies, and regulations developed for traditional genetically modified organisms (GMOs) also apply to synthetic organisms. "Extrinsic" biocontainment methods used in laboratories include biosafety cabinets and gloveboxes, as well as personal protective equipment. In agriculture, they include isolation distances and pollen barriers, similar to methods for biocontainment of GMOs. Synthetic organisms might potentially offer increased hazard control because they can be engineered with "intrinsic" biocontainment methods that limit their growth in an uncontained environment, or prevent horizontal gene transfer to natural organisms. Examples of intrinsic biocontainment include auxotrophy, biological kill switches, inability of the organism to replicate or to pass synthetic genes to offspring, and the use of xenobiological organisms using alternative biochemistry, for example using artificial xeno nucleic acids (XNA) instead of DNA. Existing risk analysis systems for GMOs are generally applicable to synthetic organisms, although there may be difficulties for an organisms built "bottom-up" from individual genetic sequences. Synthetic biology generally falls under existing regulations for GMOs and biotechnology in general, as well as any regulations that exist for downstream commercial products, although there are generally no regulations in any jurisdiction that are specific to synthetic biology. Background Synthetic biology is an outgrowth of biotechnology distinguished by the use of biological pathways or organisms not found in nature. This contrasts with "traditional" genetically modified organisms created by transferring existing genes from one cell type to another. Major goals of synthetic biology include re-designing genes, cells, or organisms for gene therapy; development of minimal cells and artificial protocells; and development of organisms based on alternative biochemistry. This work has been driven by the development of genome synthesis and editing tools, as well as pools of standardized synthetic biological circuits with defined functions. The availability of these tools has spurred the expansion of a do-it-yourself biology movement. Synthetic biology has potential commercial applications in energy, agriculture, medicine, and the production of chemicals including pharmaceuticals. Biosynthetic applications are often distinguished as either for "contained use" within laboratories and manufacturing facilities, or for "intentional release" outside of the laboratory for medical, veterinary, cosmetic, or agricultural applications. 
As synthetic biology applications become increasingly used in industry, the number and variety of workers exposed to synthetic biology risk is expected to increase. Hazards Biosafety Biosafety hazards to workers from synthetic biology are similar to those in existing fields of biotechnology, mainly exposure to pathogens and toxic chemicals used in a laboratory or industrial setting. These include hazardous chemicals; biological hazards including organisms, prions, and biologically-derived toxins; physical hazards such as ergonomic hazards, radiation, and noise hazards; and additional hazards of injury from autoclaves, centrifuges, compressed gas, cryogens, and electrical hazards. Novel protocells or xenobiological organisms, as well as gene editing of higher animals, may have novel biosafety hazards that affect their risk assessment. As of 2018, most laboratory biosafety guidance is based on preventing exposure to existing rather than new pathogens. Lentiviral vectors derived from the HIV-1 virus are widely used in gene therapy due to their unique ability to infect both dividing and non-dividing cells, but unintentional exposure of workers could lead to cancer and other diseases. In the case of an unintentional exposure, antiretroviral drugs can be used as post-exposure prophylaxis. Given the overlap between synthetic biology and the do-it-yourself biology movement, concerns have been raised that its practitioners may not abide by risk assessment and biosafety practices required of professionals, although it has been suggested that an informal code of ethics exists that recognizes health risks and other adverse outcomes. Biosecurity The rise of synthetic biology has also spurred biosecurity concerns that synthetic or redesigned organisms could be engineered for bioterrorism. This is considered possible but unlikely given the resources needed to perform this kind of research. However, synthetic biology could expand the group of people with relevant capabilities, and reduce the amount of time needed to develop them. A 2018 National Academies of Sciences, Engineering, and Medicine (NASEM) report identified three capabilities as being of greatest concern. The first is the recreation of known pathogens from scratch, for example using genome synthesis to recreate historical viruses such as the Spanish Flu virus or polio virus. Current technology allows genome synthesis for almost any mammalian virus, the sequences of known human viruses are publicly available, and the procedure has relatively low cost and requires access to basic laboratory equipment. However, the pathogens would have known properties and could be mitigated by standard public health measures, and could be partially prevented by screening of commercially produced DNA molecules. In contrast to viruses, creating existing bacteria or completely novel pathogens from scratch was not yet possible as of 2018, and was considered a low risk. Another capability of concern cited by NASEM is engineering existing pathogens to be more dangerous. This includes altering the targeted host or tissue, as well as enhancing the pathogen's replication, virulence, transmissibility, or stability; or its ability to produce toxins, reactivate from a dormant state, evade natural or vaccine-induced immunity, or evade detection. The NASEM considered engineered bacteria to be a higher risk than viruses because they are easier to manipulate and their genomes are more stable over time. 
A final capability of concern cited by NASEM is engineering microbes to produce harmful biochemicals. Metabolic engineering of microorganisms is a well-established field that has targeted production of fuels, chemicals, food ingredients, and pharmaceuticals, but it could be used to produce toxins, antimetabolites, controlled substances, explosives, or chemical weapons. This was considered to be a higher risk for naturally occurring substances than for artificial ones. There is also the possibility of novel threats that were considered lower risks by NASEM due to their technical challenges. Delivering an engineered organism into the human microbiome faces the challenges of delivery and persistence in the microbiome, though such an attack would be difficult to detect and mitigate. Pathogens engineered to alter the human immune system by causing immunodeficiency, hyperreactivity, or autoimmunity, or to directly alter the human genome, were also considered lower-risk due to extreme technical challenges. Environmental Environmental hazards include toxicity to animals and plants, as well as adverse effects on biodiversity and ecosystem services. For example, a toxin engineered into a plant to resist specific insect pests may also affect other invertebrates. Some highly speculative hazards include engineered organisms becoming invasive and outcompeting natural ones, and horizontal gene transfer from engineered to natural organisms. Gene drives to suppress disease vectors may inadvertently affect the target species' fitness and alter ecosystem balance. In addition, synthetic biology could lead to land-use changes, such as non-food synthetic organisms displacing other agricultural uses or wild land. It could also cause products to be produced by non-agricultural means or through large-scale commercial farming, which could economically outcompete small-scale farmers. Finally, there is a risk that conservation methods based on synthetic biology, such as de-extinction, may reduce support for traditional conservation efforts. Hazard controls Extrinsic Extrinsic biocontainment encompasses physical containment through engineering controls such as biosafety cabinets and gloveboxes, as well as personal protective equipment including gloves, coats, gowns, shoe covers, boots, respirators, face shields, safety glasses, and goggles. In addition, facilities used for synthetic biology may include decontamination areas, specialized ventilation and air treatment systems, and separation of laboratory work areas from public access. These procedures are common to all microbiological laboratories. In agriculture, extrinsic biocontainment methods include maintaining isolation distances and physical pollen barriers to prevent modified organisms from fertilizing wild-type plants, as well as sowing modified and wild-type seed at different times so that their flowering periods do not overlap. Intrinsic Intrinsic biocontainment is the proactive design of functionalities or deficiencies into organisms and systems to reduce their hazards. It is unique to engineered organisms such as GMOs and synthetic organisms, and is an example of hazard substitution and of prevention through design. Intrinsic biocontainment can have many goals, including controlling growth in the laboratory or after an unintentional release, preventing horizontal gene transfer to natural cells, preventing use for bioterrorism, or protecting the intellectual property of the organism's designers. 
There has been concern that existing genetic safeguards are not reliable enough due to the organism's ability to lose them through mutation. However, they may be useful in combination with other hazard controls, and may provide enhanced protections relative to GMOs. Many approaches fall under the umbrella of intrinsic biocontainment. Auxotrophy is the inability of an organism to synthesize a particular compound required for its growth, meaning that the organism cannot survive unless the compound is provided to it. A kill switch is a pathway, triggered by a signal from humans, that initiates cell death. Inability of the organism to replicate is another such method. Methods specific to plants include cytoplasmic male sterility, where viable pollen cannot be produced; and transplastomic plants where modifications are made only to the chloroplast DNA, which is not incorporated into pollen. Methods specific to viral vectors include splitting key components between multiple plasmids, omitting accessory proteins related to the wild-type virus' function as a pathogen but not as a vector, and the use of self-inactivating vectors. It has been speculated that xenobiology, the use of alternative biochemistry that differs from natural DNA and proteins, may enable novel intrinsic biocontainment methods that are not possible with traditional GMOs. This would involve engineering organisms that use artificial xeno nucleic acids (XNA) instead of DNA and RNA, or that have an altered or expanded genetic code. These would be theoretically incapable of horizontal gene transfer to natural cells. There is speculation that these methods may have lower failure rates than traditional methods. Risk assessment While the hazards of synthetic biology are similar to those of existing biotechnology, risk assessment procedures may differ given the rapidity with which new components and organisms are generated. Existing risk analysis systems for GMOs are also applicable for synthetic organisms, and workplace health surveillance can be used to enhance risk assessment. However, there may be difficulties in risk assessment for an organism built "bottom-up" from individual genetic sequences rather than from a donor organism with known traits. Synthetic organisms also may not be included in preexisting classifications of microorganisms into risk groups. An additional challenge is that synthetic biology engages a wide range of disciplines outside of biology, whose practitioners may be unfamiliar with microbiological risk assessment. For biosecurity, risk assessment includes evaluating the ease of use by potential actors; its efficacy as a weapon; practical requirements such as access to expertise and resources; and the capability to prevent, anticipate, and respond to an attack. For environmental hazards, risk assessments and field trials of synthetic biology applications are most effective when they include metrics on non-target organisms and ecosystem functions. Some researchers have suggested that traditional life-cycle assessment methods may be insufficient because, unlike with traditional industries, the boundary between industry and the environment is blurred, and materials have an information-rich description that cannot be described only by their chemical formula. Regulation International Several treaties contain provisions which apply to synthetic biology. 
These include the Convention on Biological Diversity, Cartagena Protocol on Biosafety, Nagoya–Kuala Lumpur Supplementary Protocol on Liability, Biological Weapons Convention, and Australia Group Guidelines. United States In general, the United States relies on the regulatory frameworks established for chemicals and pharmaceuticals to regulate synthetic biology, mainly the Toxic Substances Control Act of 1976 as updated by the Frank R. Lautenberg Chemical Safety for the 21st Century Act, as well as the Federal Food, Drug, and Cosmetic Act. The biosafety concerns about synthetic biology and its gene-editing tools are similar to the concerns lodged about recombinant DNA technology when it emerged in the mid-1970s. The recommendations of the 1975 Asilomar Conference on Recombinant DNA formed the basis for the U.S. National Institutes of Health (NIH) guidelines, which were updated in 2013 to address organisms and viruses containing synthetic nucleic acid molecules. The NIH Guidelines for Research Involving Recombinant or Synthetic Nucleic Acid Molecules are the most comprehensive resource for synthetic biology safety. Although they are only binding on recipients of NIH funding, other government and private funders sometimes require their use, and they are often voluntarily implemented by others. In addition, the 2010 NIH Screening Framework Guidance for Providers of Synthetic Double-Stranded DNA provides voluntary guidelines for vendors of synthetic DNA to verify the identity and affiliation of buyers, and screen for sequences of concern. The Occupational Safety and Health Administration (OSHA) regulates the health and safety of workers, including those involved in synthetic biology. In the mid-1980s, OSHA maintained that the general duty clause and existing regulatory standards were sufficient to protect biotechnology workers. The Environmental Protection Agency, Department of Agriculture Animal and Plant Health Inspection Service, and Food and Drug Administration regulate the commercial production and use of genetically modified organisms. The Department of Commerce Bureau of Industry and Security has authority over dual-use technology, and synthetic biology falls under select agent rules. Other countries In the European Union, synthetic biology is governed by Directives 2001/18/EC on the intentional release of GMOs, and 2009/41/EC on the contained use of genetically modified micro-organisms, as well as Directive 2000/54/EC on biological agents in the workplace. As of 2012, neither the European Community nor any member state had specific legislation on synthetic biology. In the United Kingdom, the Genetically Modified Organisms (Contained Use) Regulations 2000 and subsequent updates are the main law relevant to synthetic biology. China had not developed regulations specific to synthetic biology as of 2012, relying on regulations developed for GMOs. Singapore relies on its Biosafety Guidelines for GMOs, Biological Agents and Toxins Act, and the Workplace Safety and Health Act. See also Biosafety level Regulation of genetic engineering Biocontainment of genetically modified organisms References Synthetic biology Occupational hazards Biological contamination Regulation of biotechnologies
Hazards of synthetic biology
[ "Engineering", "Biology" ]
3,117
[ "Synthetic biology", "Biotechnology law", "Biological engineering", "Regulation of biotechnologies", "Bioinformatics", "Molecular genetics" ]
47,378,041
https://en.wikipedia.org/wiki/Synthetic%20ribosome
Synthetic ribosomes are artificial molecular machines that can synthesize peptides in a sequence-specific manner. David Alan Leigh's lab built a synthetic ribosome using a chemical structure based on a rotaxane. The Cédric Orelle research group created ribosomes with tethered and inseparable subunits (or Ribo-T). References Synthetic biology
Synthetic ribosome
[ "Engineering", "Biology" ]
78
[ "Synthetic biology", "Biological engineering", "Molecular genetics", "Bioinformatics" ]
44,276,996
https://en.wikipedia.org/wiki/Scarborough%20criterion
The Scarborough criterion is a condition used to ensure the convergence of iterative methods for solving systems of linear equations. Introduction Analytical solutions for certain systems of equations can be difficult or impossible to obtain. A well-known example is the Navier-Stokes equations describing the flow of Newtonian fluids. Solutions of such equations can be obtained numerically, at discrete points of the solution domain (e.g. at discrete time points and points in space). Numerical solutions based on the integration of the equations at discrete control volumes of the solution domain (for example the Finite Volume Method) result in a system of algebraic equations, one for each nodal point (corresponding to a particular control volume). These algebraic equations are usually referred to as discretised equations. The Scarborough criterion, formulated by Scarborough (1958), can be expressed in terms of the values of the coefficients of the discretised equations:

$$\frac{\sum |a_{nb}|}{|a'_P|} \begin{cases} \leq 1 & \text{at all nodes} \\ < 1 & \text{at one node at least} \end{cases}$$

Here $a'_P$ is the net coefficient of the central node $P$, and the summation in the numerator is taken over all the neighbouring nodes $nb$. For a one-, two- and three-dimensional problem there will be two (east & west), four (east, west, south & north), and six (east, west, south, north, top & bottom) neighbours for each node, respectively. Comments This is a sufficient condition, not a necessary one. This means that we can get convergence, even if, at times, we violate the criterion. The satisfaction of this criterion ensures that the equations will be converged by at least one iterative method. Gauss–Seidel method If the Scarborough criterion is not satisfied, then the Gauss–Seidel iterative procedure is not guaranteed to converge to a solution. Since the criterion is sufficient but not necessary, satisfying it guarantees that the equations will be converged by at least one iterative method. The finite volume method uses this criterion for obtaining a convergent solution and implementing boundary conditions. Diagonal dominance If the differencing scheme produces coefficients that satisfy the above criterion, the resulting matrix of coefficients is diagonally dominant. To achieve diagonal dominance we need large values of the net coefficient, so the linearisation practice of source terms should ensure that $S_P$ is always negative. If this is the case, $-S_P$ is always positive and adds to $a_P$. Diagonal dominance is a desirable feature for satisfying the boundedness criterion. This states that in the absence of sources the internal nodal values of the property $\phi$ should be bounded by its boundary values. Hence in a steady state conduction problem without sources and with boundary temperatures of 500 °C and 200 °C all interior values of T should be less than 500 °C and greater than 200 °C. See also Computational fluid dynamics Linear equation References External links Introduction to Computational Fluid Dynamics and Principles of Conservation - video lecture Overview of Numerical Methods Implementation of BC in FVM Computational fluid dynamics Numerical analysis Applied mathematics Functional analysis Convergence (mathematics)
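Once the discretised equations have been assembled into a coefficient matrix, the criterion can be checked mechanically. The following is a minimal sketch, not from the source: the function name is ours, and it assumes the net central coefficients sit on the diagonal and the neighbour coefficients fill the off-diagonals.

```python
import numpy as np

def scarborough_satisfied(A):
    """Check the Scarborough criterion for a coefficient matrix A.

    Diagonal entries are taken as the net central coefficients a'_P,
    off-diagonal entries as the neighbour coefficients a_nb.
    """
    diag = np.abs(np.diag(A))
    neigh = np.sum(np.abs(A), axis=1) - diag   # sum of |a_nb| per node
    ratios = neigh / diag
    # <= 1 at all nodes, and strictly < 1 at one node at least
    return bool(np.all(ratios <= 1) and np.any(ratios < 1))

# 1-D steady diffusion on 4 interior nodes with fixed boundary values:
A = np.array([[ 2, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  2]], dtype=float)
print(scarborough_satisfied(A))  # True: the boundary rows give ratios < 1
```

The boundary rows are what supply the strict inequality here, which mirrors the remark above that the finite volume method uses the criterion when implementing boundary conditions.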
Scarborough criterion
[ "Physics", "Chemistry", "Mathematics" ]
598
[ "Sequences and series", "Functions and mappings", "Convergence (mathematics)", "Functional analysis", "Mathematical structures", "Computational fluid dynamics", "Applied mathematics", "Mathematical objects", "Computational mathematics", "Computational physics", "Mathematical relations", "Numer...
44,277,627
https://en.wikipedia.org/wiki/High%20energy%20density%20physics
High-energy-density physics (HEDP) is a subfield of physics intersecting condensed matter physics, nuclear physics, astrophysics and plasma physics. It has been defined as the physics of matter and radiation at energy densities in excess of about 100 GJ/m3, equivalent to pressures of about 1 Mbar (or roughly 1 million times atmospheric pressure). Definition High energy density (HED) science includes the study of condensed matter at densities common to the deep interiors of giant planets, and hot plasmas typical of stellar interiors. This multidisciplinary field provides a foundation for understanding a wide variety of astrophysical observations and understanding and ultimately controlling the fusion regime. Specifically, thermonuclear ignition by inertial confinement in the laboratory – as well as the transition from planets to brown dwarfs and stars in nature – takes place via the HED regime. A wide variety of new and emerging experimental capabilities (National Ignition Facility (NIF), Jupiter Laser Facility (JLF), etc.) together with the push towards Exascale Computing help make this new scientific frontier rich with discovery. The HED domain is often defined by an energy density (units of pressure) above 1 Mbar = 100 GPa ≈ 1 million atmospheres. This is comparable to the energy density of a chemical bond such as in a water molecule. Thus at 1 Mbar, chemistry as we know it changes. Experiments at NIF now routinely probe matter at 100 Mbar. At these "atomic pressure" conditions the energy density is comparable to that of the inner core electrons, so the atoms themselves change. The dense HED regime includes highly degenerate matter, with interatomic spacing less than the de Broglie wavelength. This is similar to the quantum regime achieved at low temperatures (e.g. Bose–Einstein condensation); however, unlike the low temperature analog, this HED regime simultaneously probes interatomic separations less than the Bohr radius. This opens an entirely new quantum mechanical domain, where core electrons - not just valence electrons - determine material properties and gives rise to core-electron chemistry and a new structural complexity in solids. Potential exotic electronic, mechanical, and structural behavior of such matter includes room temperature superconductivity, high-density electrides, first order fluid-fluid transitions, and new insulator-metal transitions. Such matter is likely quite common throughout the universe, existing in the more than 1000 recently discovered exoplanets. Importance HED conditions at higher temperatures are important to the birth and death of stars and controlling thermonuclear fusion in the laboratory. Take as an example the birth and cooling of a neutron star. The central part of a star, ~8-20 times the mass of the Sun, fuses its way to iron and cannot go further since iron has the highest binding energy per nucleon of any element. As the iron core accumulates to ~1.4 solar masses, electron degeneracy pressure can no longer withstand gravity and the core collapses. Initially the star cools by the rapid emission of neutrinos. The outer Fe surface layer (~10⁹ K) gives rise to spontaneous pair production, then reaches a temperature where the radiation pressure is comparable to the thermal pressure and where thermal pressure is comparable to coulomb interactions. Recent discoveries include metallic fluid hydrogen and superionic water. See also High energy physics References Nuclear physics Astrophysics Plasma theory and modeling
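The equivalence of the two thresholds quoted above can be checked directly, since an energy density carries the units of pressure (1 J/m³ = 1 Pa):

$$100\ \mathrm{GJ/m^3} = 10^{11}\ \mathrm{J/m^3} = 10^{11}\ \mathrm{Pa} = 1\ \mathrm{Mbar} \approx 10^{6}\ \mathrm{atm}$$

using $1\ \mathrm{bar} = 10^{5}\ \mathrm{Pa}$ and $1\ \mathrm{atm} \approx 1.013 \times 10^{5}\ \mathrm{Pa}$.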
High energy density physics
[ "Physics", "Astronomy" ]
705
[ "Nuclear physics", "Plasma physics", "Astrophysics", "Plasma theory and modeling", "Astronomical sub-disciplines" ]
44,280,482
https://en.wikipedia.org/wiki/Five%20Years
Five Years may refer to: Music Albums Five Years (1969–1973), a 2015 compilation album by David Bowie Five Years: Singles, a 2001 compilation album by Takako Matsu 5 Years (album), a 2010 album by Kaela Kimura Songs "Five Years" (David Bowie song), a song by David Bowie from the 1972 album Ziggy Stardust "5 Years" (Björk song), a song by Björk from the 1997 album Homogenic "Five Years", a song by Bo Burnham from the 2022 album The Inside Outtakes Other Five Years (book), a 1966 autobiographical collection of American social critic Paul Goodman's notebooks Five Years, a 2019 series by Terry Moore See also "Five Long Years", a 1952 song by blues vocalist/pianist Eddie Boyd Five-year plan (disambiguation) Lustrum, a term for a five-year period in Ancient Rome. Units of time
Five Years
[ "Physics", "Mathematics" ]
197
[ "Physical quantities", "Time", "Units of time", "Quantity", "Spacetime", "Units of measurement" ]
44,281,046
https://en.wikipedia.org/wiki/Error%20analysis%20%28mathematics%29
In mathematics, error analysis is the study of kind and quantity of error, or uncertainty, that may be present in the solution to a problem. This issue is particularly prominent in applied areas such as numerical analysis and statistics. Error analysis in numerical modeling In numerical simulation or modeling of real systems, error analysis is concerned with the changes in the output of the model as the parameters to the model vary about a mean. For instance, in a system modeled as a function of two variables $z = f(x, y)$, error analysis deals with the propagation of the numerical errors in $x$ and $y$ (around mean values $\bar{x}$ and $\bar{y}$) to error in $z$ (around a mean $\bar{z}$). In numerical analysis, error analysis comprises both forward error analysis and backward error analysis. Forward error analysis Forward error analysis involves the analysis of a function $z' = f'(a_0, a_1, \dots, a_n)$ which is an approximation (usually a finite polynomial) to a function $z = f(a_0, a_1, \dots, a_n)$, to determine the bounds on the error in the approximation; i.e., to find $\epsilon$ such that $0 \le |z - z'| \le \epsilon$. The evaluation of forward errors is desired in validated numerics. Backward error analysis Backward error analysis involves the analysis of the approximation function $z' = f'(a_0, a_1, \dots, a_n)$, to determine the bounds on the parameters $a_i' = a_i \pm \epsilon_i$ such that the result $z' = f(a_0', a_1', \dots, a_n')$. Backward error analysis, the theory of which was developed and popularized by James H. Wilkinson, can be used to establish that an algorithm implementing a numerical function is numerically stable. The basic approach is to show that although the calculated result, due to roundoff errors, will not be exactly correct, it is the exact solution to a nearby problem with slightly perturbed input data. If the perturbation required is small, on the order of the uncertainty in the input data, then the results are in some sense as accurate as the data "deserves". The algorithm is then defined as backward stable. Stability is a measure of the sensitivity to rounding errors of a given numerical procedure; by contrast, the condition number of a function for a given problem indicates the inherent sensitivity of the function to small perturbations in its input and is independent of the implementation used to solve the problem. Applications Global positioning system The analysis of errors computed using the global positioning system is important for understanding how GPS works, and for knowing what magnitude errors should be expected. The Global Positioning System makes corrections for receiver clock errors and other effects but there are still residual errors which are not corrected. The Global Positioning System (GPS) was created by the United States Department of Defense (DOD) in the 1970s. It has come to be widely used for navigation both by the U.S. military and the general public. Molecular dynamics simulation In molecular dynamics (MD) simulations, there are errors due to inadequate sampling of the phase space or infrequently occurring events; these lead to the statistical error due to random fluctuation in the measurements. For a series of $M$ measurements of a fluctuating property $A$, the mean value is:

$$\langle A \rangle = \frac{1}{M} \sum_{\mu=1}^{M} A_{\mu}$$

When these $M$ measurements are independent, the variance of the mean $\langle A \rangle$ is:

$$\sigma^{2}(\langle A \rangle) = \frac{1}{M} \sigma^{2}(A)$$

but in most MD simulations, there is correlation between the quantity $A$ at different times, so the variance of the mean $\langle A \rangle$ will be underestimated as the effective number of independent measurements is actually less than $M$. In such situations we rewrite the variance as:

$$\sigma^{2}(\langle A \rangle) = \frac{1}{M} \sigma^{2}(A) \left[ 1 + 2 \sum_{\mu} \left( 1 - \frac{\mu}{M} \right) \phi_{\mu} \right]$$

where $\phi_{\mu}$ is the autocorrelation function defined by

$$\phi_{\mu} = \frac{\langle A_{\mu} A_{0} \rangle - \langle A \rangle^{2}}{\langle A^{2} \rangle - \langle A \rangle^{2}}$$

We can then use the autocorrelation function to estimate the error bar. Luckily, we have a much simpler method based on block averaging. 
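The block-averaging idea can be sketched in a few lines. This is a minimal illustration on a synthetic correlated series (an AR(1) process standing in for an MD time series); the function name and parameters are ours, not from any particular MD package.

```python
import numpy as np

def block_average_error(samples, n_blocks=20):
    """Estimate the standard error of the mean of correlated samples
    by averaging over non-overlapping blocks.

    Blocks much longer than the correlation time are approximately
    independent, so the spread of the block means gives a far less
    biased error estimate than the naive std/sqrt(N).
    """
    m = len(samples) // n_blocks            # samples per block
    trimmed = samples[:m * n_blocks]        # drop the remainder
    block_means = trimmed.reshape(n_blocks, m).mean(axis=1)
    return block_means.std(ddof=1) / np.sqrt(n_blocks)

# Correlated toy data: x[i] depends strongly on x[i-1]
rng = np.random.default_rng(0)
x = np.zeros(50_000)
for i in range(1, len(x)):
    x[i] = 0.95 * x[i - 1] + rng.normal()

print(block_average_error(x))              # honest error bar
print(x.std(ddof=1) / np.sqrt(len(x)))     # naive estimate, much too small
```

The comparison at the end shows exactly the underestimation described above: treating the correlated samples as independent makes the error bar look several times smaller than it really is.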
Scientific data verification Measurements generally have a small amount of error, and repeated measurements of the same item will generally result in slight differences in readings. These differences can be analyzed, and follow certain known mathematical and statistical properties. Should a set of data appear to be too faithful to the hypothesis, i.e., the amount of error that would normally be in such measurements does not appear, a conclusion can be drawn that the data may have been forged. Error analysis alone is typically not sufficient to prove that data have been falsified or fabricated, but it may provide the supporting evidence necessary to confirm suspicions of misconduct. See also Error analysis (linguistics) Error bar Errors and residuals in statistics Propagation of uncertainty Validated numerics References External links All about error analysis. Numerical analysis Error
Error analysis (mathematics)
[ "Mathematics" ]
821
[ "Computational mathematics", "Mathematical relations", "Approximations", "Numerical analysis" ]
67,182,867
https://en.wikipedia.org/wiki/Directional%20component%20analysis
Directional component analysis (DCA) is a statistical method used in climate science for identifying representative patterns of variability in space-time data-sets such as historical climate observations, weather prediction ensembles or climate ensembles. The first DCA pattern is a pattern of weather or climate variability that is both likely to occur (measured using likelihood) and has a large impact (for a specified linear impact function, and given certain mathematical conditions: see below). The first DCA pattern contrasts with the first PCA pattern, which is likely to occur, but may not have a large impact, and with a pattern derived from the gradient of the impact function, which has a large impact, but may not be likely to occur. DCA differs from other pattern identification methods used in climate research, such as EOFs, rotated EOFs and extended EOFs in that it takes into account an external vector, the gradient of the impact. DCA provides a way to reduce large ensembles from weather forecasts or climate models to just two patterns. The first pattern is the ensemble mean, and the second pattern is the DCA pattern, which represents variability around the ensemble mean in a way that takes impact into account. DCA contrasts with other methods that have been proposed for the reduction of ensembles in that it takes impact into account in addition to the structure of the ensemble. Overview Inputs DCA is calculated from two inputs: a multivariate dataset of weather or climate data, such as historical climate observations, or a weather or climate ensemble a linear impact function. The linear impact function is a function which defines a level of impact for every spatial pattern in the weather or climate data as a weighted sum of the values at different locations in the spatial pattern. An example is the mean value across the spatial pattern. The linear impact function can be generated as the first term in the multivariate Taylor series of a non-linear impact function. Formula Consider a space-time data set $X$, containing individual spatial pattern vectors $x_i$, where the individual patterns are each considered as single samples from a multivariate normal distribution with mean zero and covariance matrix $C$. We define a linear impact function of a spatial pattern $x$ as $e = r^{t}x$, where $r$ is a vector of spatial weights. The first DCA pattern is given in terms of the covariance matrix $C$ and the weights $r$ by the proportional expression $d_1 \propto Cr$. The pattern can then be normalized to any length as required. Properties If the weather or climate data is elliptically distributed (e.g., is distributed as a multivariate normal distribution or a multivariate t-distribution) then the first DCA pattern (DCA1) is defined as the spatial pattern with the following mathematical properties: DCA1 maximises probability density for a given value of impact DCA1 maximises impact for a given value of probability density DCA1 maximises the product of impact and probability density DCA1 is the conditional expectation, conditional on exceeding a certain level of impact DCA1 is the impact-weighted ensemble mean Any modification of DCA1 will lead to a pattern that is either less extreme, or has a lower probability density. Rainfall Example For instance, in a rainfall anomaly dataset, using an impact metric defined as the total rainfall anomaly, the first DCA pattern is the spatial pattern that has the highest probability density for a given total rainfall anomaly. 
If the given total rainfall anomaly is chosen to have a large value, then this pattern combines being extreme in terms of the metric (i.e., representing large amounts of total rainfall) with being likely in terms of the pattern, and so is well suited as a representative extreme pattern. Comparison with PCA The main differences between Principal component analysis (PCA) and DCA are PCA is a function of just the covariance matrix, and the first PCA pattern is defined so as to maximise explained variance DCA is a function of the covariance matrix and a vector direction (the gradient of the impact function), and the first DCA pattern is defined so as to maximise probability density for a given value of the impact metric As a result, for unit vector spatial patterns: The first PCA spatial pattern always corresponds to a higher explained variance, but has a lower value of the impact metric (e.g., the total rainfall anomaly), except in degenerate cases The first DCA spatial pattern always corresponds to a higher value of the impact metric, but has a lower value of the explained variance, except in degenerate cases The degenerate cases occur when the PCA and DCA patterns are equal. Also, given the first PCA pattern, the DCA pattern can be scaled so that: The scaled DCA pattern has the same probability density as the first PCA pattern, but higher impact, or The scaled DCA pattern has the same impact as the first PCA pattern, but higher probability density. Two Dimensional Example Figure 1 gives an example, which can be understood as follows: The two axes represent anomalies of annual mean rainfall at two locations, with the highest total rainfall anomaly values towards the top right corner of the diagram The joint variability of the rainfall anomalies at the two locations is assumed to follow a bivariate normal distribution The ellipse shows a single contour of probability density from this bivariate normal, with higher values inside the ellipse The red dot at the centre of the ellipse shows zero rainfall anomalies at both locations The blue parallel-line arrow shows the principal axis of the ellipse, which is also the first PCA spatial pattern vector In this case, the PCA pattern is scaled so that it touches the ellipse The diagonal straight line shows a line of constant positive total rainfall anomaly, assumed to be at some fairly extreme level The red dotted-line arrow shows the first DCA pattern, which points towards the point at which the diagonal line is tangent to the ellipse In this case, the DCA pattern is scaled so that it touches the ellipse From this diagram, the DCA pattern can be seen to possess the following properties: Of all the points on the diagonal line, it is the one with the highest probability density Of all the points on the ellipse, it is the one with the highest total rainfall anomaly It has the same probability density as the PCA pattern, but represents higher total rainfall (i.e., points further towards the top right hand corner of the diagram) Any change of the DCA pattern will reduce either the probability density (if it moves out of the ellipse) or reduce the total rainfall anomaly (if it moves along or into the ellipse) In this case the total rainfall anomaly of the PCA pattern is quite small, because of anticorrelations between the rainfall anomalies at the two locations. As a result, the first PCA pattern is not a good representative example of a pattern with large total rainfall anomaly, while the first DCA pattern is. 
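To make the two-dimensional picture concrete, here is a minimal numerical sketch; the covariance matrix and weights are invented illustrative values, not from any cited data set. It builds the first PCA pattern as the leading eigenvector of $C$, the first DCA pattern from the proportional expression $Cr$, scales both to the same probability density contour, and confirms that the DCA pattern carries the larger impact.

```python
import numpy as np

# Toy 2-D covariance: anticorrelated rainfall anomalies at two locations
C = np.array([[1.0, -0.5],
              [-0.5, 1.5]])
r = np.array([1.0, 1.0])        # impact weights: total rainfall anomaly

# First PCA pattern: leading eigenvector of the covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)
pca = eigvecs[:, np.argmax(eigvals)]
if r @ pca < 0:                 # eigenvector sign is arbitrary; pick +impact
    pca = -pca

# First DCA pattern: proportional to C r, normalised to unit length
dca = C @ r
dca /= np.linalg.norm(dca)

# Scale both patterns onto the same probability density contour
# (equal Mahalanobis radius x^T C^{-1} x = 1, i.e. the ellipse above)
Cinv = np.linalg.inv(C)
scale = lambda x: x / np.sqrt(x @ Cinv @ x)
pca_s, dca_s = scale(pca), scale(dca)

print("impact of PCA pattern:", r @ pca_s)
print("impact of DCA pattern:", r @ dca_s)   # larger, as the text asserts
```

With these anticorrelated toy values the PCA pattern's total-anomaly impact is small, while the DCA pattern's is several times larger at the same probability density, which is exactly the situation drawn in the figure described above.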
In $n$ dimensions the ellipse becomes an ellipsoid, the diagonal line becomes an $(n-1)$-dimensional plane, and the PCA and DCA patterns are vectors in $n$ dimensions. Applications Application to Climate Variability DCA has been applied to the CRU data-set of historical rainfall variability in order to understand the most likely patterns of rainfall extremes in the US and China. Application to Ensemble Weather Forecasts DCA has been applied to ECMWF medium-range weather forecast ensembles in order to identify the most likely patterns of extreme temperatures in the ensemble forecast. Application to Ensemble Climate Model Projections DCA has been applied to ensemble climate model projections in order to identify the most likely patterns of extreme future rainfall. Derivation of the First DCA Pattern Consider a space-time data-set $X$, containing individual spatial pattern vectors $x_i$, where the individual patterns are each considered as single samples from a multivariate normal distribution with mean zero and covariance matrix $C$. As a function of $x$, the log probability density is proportional to $-\tfrac{1}{2}x^{t}C^{-1}x$. We define a linear impact function of a spatial pattern as $e = r^{t}x$, where $r$ is a vector of spatial weights. We then seek to find the spatial pattern that maximises the probability density for a given value of the linear impact function. This is equivalent to finding the spatial pattern that maximises the log probability density for a given value of the linear impact function, which is slightly easier to solve. This is a constrained maximisation problem, and can be solved using the method of Lagrange multipliers. The Lagrangian function is given by

$$L = -\tfrac{1}{2}x^{t}C^{-1}x + \lambda \left( r^{t}x - e \right)$$

Differentiating by $x$ and setting to zero gives the solution

$$x \propto Cr$$

Normalising so that $x$ is a unit vector gives

$$x = \frac{Cr}{\sqrt{(Cr)^{t}(Cr)}}$$

This is the first DCA pattern. Subsequent patterns can be derived which are orthogonal to the first, to form an orthonormal set and a method for matrix factorisation. References Climate and weather statistics Numerical climate and weather models Data analysis Multivariate statistics Climate
Directional component analysis
[ "Physics" ]
1,794
[ "Weather", "Physical phenomena", "Climate and weather statistics" ]
67,190,786
https://en.wikipedia.org/wiki/Austrian%20Centre%20for%20Electron%20Microscopy%20and%20Nanoanalysis
The Austrian Centre for Electron Microscopy and Nanoanalysis (short: FELMI-ZFE) is a cooperation between the Institute of Electron Microscopy and Nanoanalysis (FELMI) of the Graz University of Technology (TUG) and the Graz Centre of Electron Microscopy (ZFE), which is a member of Austrian Cooperative Research (ACR) and run by the non-profit association for the promotion of electron microscopy. It is located at the “Neue Technik Steyrergasse” campus in Graz. The FELMI-ZFE offers both research and services to interested partners from academia and industry, using advanced electron microscopic methods for both structural and chemical characterization. History The acquisition process for the first electron microscope of the Graz University of Technology was started by a donation from industry in 1949. The next year a research group, headed by Fritz Grasenick, was established. Finally, the first electron microscope (“Übermikroskop UEM100” by Siemens & Halske) was bought in 1951 and the opening ceremony was attended by Ernst Ruska, Werner Glaser and Otto Wolf. Despite the Graz University of Technology providing rooms and infrastructure, from the very beginning supplementary income from research services provided for industry was necessary to help cover the high operating and investment costs. Due to the growing interest in measurements and the high utilization of the instrument, the group soon needed to expand its personnel and acquire new microscopes. In order to concentrate all funding sources, the non-profit association for the promotion of electron microscopy (Verein zur Förderung der Elektronenmikroskopie und Feinstrukturforschung) was founded in 1959 under the direction of the governor of Styria, Josef Krainer senior. The Graz Centre of Electron Microscopy (ZFE) was attached to this non-profit. The combined institutions grew over the years, and the combined role of the head of the university institute and the head of the ZFE in one person played a crucial role in the development of a tight interconnection between fundamental research and application. In 2011 the most expensive and most ambitious acquisition to date became possible: a STEM that was at the time unique worldwide. With the ASTEM (Austrian Scanning Transmission Electron Microscope), magnification of more than one million became possible, enabling atomic resolution. In addition, with an investment volume of 4.5 million euros, the ASTEM was one of the largest scientific infrastructure investments in Austria. Organizational structure, Research & Services Approximately 50 people work at the FELMI-ZFE, with the number varying somewhat due to dissertations and research projects. In addition, roughly 300 scientists visit the FELMI-ZFE each year. International collaboration There are standing collaborations with approximately 30 research institutes and 140 companies. In addition, since the establishment of the ASTEM, the FELMI-ZFE has been part of ESTEEM3 (Enabling Science and Technology through European Electron Microscopy), which is a network of 14 electron microscopy institutes in Europe. 
Research & Services Five groups work on four main topics of research: Nanoanalytics of materials Functional nanostructuring 3D and in situ measurements Polymers and biological materials Instruments Scanning electron microscopy (SEM) Transmission electron microscopy (TEM) Infrared and Raman microscopy (IR/Raman) Focused ion beam microscopy (FIB) Atomic force microscopy (AFM) X-ray diffraction (XRD) Sample preparation Teaching and Education In the academic year 2019/2020 approximately 600 students attended lectures and lab exercises of the Institute of Electron Microscopy and Nanoanalysis. The courses offered cover the topics of fundamental physics, material analysis, electron microscopy and nano-manufacturing. In addition, apprenticeships in both laboratory technology and media technology are offered. External links Webpage of the FELMI-ZFE References Graz University of Technology Electron microscopy
Austrian Centre for Electron Microscopy and Nanoanalysis
[ "Chemistry" ]
789
[ "Electron", "Electron microscopy", "Microscopy" ]
67,192,205
https://en.wikipedia.org/wiki/Teck%20Cominco%20smelter
The Teck Cominco smelter, also known as the Teck Cominco Lead-Zinc Smelter, Cominco Smelter, and Trail smelter, located in Trail, British Columbia, Canada, is the largest integrated lead-zinc smelter of its kind in the world. It is situated just north of the border between British Columbia, Canada and Washington, in the United States, on the Columbia River. It is owned and operated by Vancouver, British Columbia-based Teck Cominco Metals Ltd—renamed Teck Resources. Since 1896, there has been a copper and gold smelting operation in the area. The original company, Consolidated Mining and Smelting Company of Canada, was founded in 1906 through a merger of several entities then under the control of the Canadian Pacific Railway (CPR). In July 2001, Cominco and Teck Resources merged, and in 2008, Teck Cominco renamed itself as Teck. By 2018, the Teck Cominco smelter complex had been in operation for over a century. It provided 1,400 jobs in 2018, making it the largest employer in the small city of Trail, with a population of 7,800. In 2017, the smelter produced more than 230,000 tons of zinc, which is used in rustproofing both iron and steel. Teck reported that they had invested CA$525 million in the late 2010s to "improve efficiency and performance at its Trail Operations" and that they intended to invest a further CA$150 million. The Trail Operations contributed CA$169 million to Teck Resources' CA$3.3-billion gross profit in 2017. Overview The original Trail smelter, serving the nearby Rossland mines, was founded by the American mining engineer F. Augustus Heinze (1869 – 1914), who had already built a smelter in Butte, Montana. In 1896, Heinze initially incorporated his smelting and mining company in the United States and then in Canada. Within a period of four years, Heinze owned the "smelter, mining interests, railway lines, railway charters, and associated land grants." Walter Hull Aldridge (b. 1867), an American mining and metallurgical engineer, took a position with the president of the Canadian Pacific Railway (CPR), Sir William Van Horne, to negotiate a deal with Heinze. Under Aldridge's direction, the CPR's mining interests were incorporated under the name of the Consolidated Mining & Smelting Company, then known as the Consolidated or CM&S. At that time, Consolidated "controlled many of British Columbia's largest lead, silver, gold and copper mines, as well as the large reduction works at Trail." In 1910, CM&S anticipated the decline of its Rossland mines and purchased the lead-zinc ore-rich Sullivan Mine. At that time, it was difficult to smelt ore from the Sullivan mine because of the presence of iron sulphide. Randolphe 'Ralph' William Diamond, a metallurgist from Ontario hired by Consolidated, developed the process known as differential flotation, which separated minerals by letting them "float" by "sticking to bubbles formed in certain mixtures of chemicals and oils". This ground-breaking technology increased production at the Sullivan Mine, making it profitable for decades. It required a "long-term stable workforce", not just itinerant workers; mining towns grew around the mines and smelter. While 1924 was a peak year in terms of production, by 1927 sulphur dioxide (SO2) emissions from the smelter had contaminated the vegetation and the land of the Columbia River valley in Washington State. Damages were estimated at $350,000 by the International Joint Commission in 1927. 
In 1934, Cominco had initiated heavy water research at the smelter but it did not gain momentum until the outbreak of World War II. During the war, the Allies cooperated in researching nuclear fission with the goal of developing an atomic bomb. New research had revealed that heavy water could slow down the uranium neutron, making a chain reaction possible. Under the tenure of Selwyn G. Blaylock as Cominco's president, the smelter was upgraded as part of the Manhattan Project's heavy water production program, under the code name P-9 Project. Princeton University physicist Hugh S. Taylor, who was in charge of United States Office of Scientific Research and Development (OSRD) research on heavy water, gave Cominco $20,000 towards the upgrade modifications. Cominco produced heavy water for the United States from 1942 until 1956. In the 1950s, a hydroelectric dam—the Waneta Dam—was built south of Trail on the Pend D’Oreille River, which provided inexpensive electricity to the smelter. For decades the smelter provided well-paying employment for people who had only a high school education. Intergenerational families worked at the smelter and the company became Trail's "economic and cultural centre." In the spring of 2017, Teck Resources announced that it was considering a CA$1.2-billion deal to sell its Waneta Dam to BC Hydro. At the time, union members who worked at Teck were concerned about the smelter's future. Teck had expanded its operations worldwide, and the Trail operations only contributed CA$92 million of Teck's CA$3.3-billion gross profit in 2017. Notes See also Teck Resources Trail Smelter dispute References Lead and zinc mines in Canada Zinc smelters Metallurgical processes Water pollution in the United States Trail, British Columbia Teck Resources Environmental racism
Teck Cominco smelter
[ "Chemistry", "Materials_science" ]
1,166
[ "Metallurgical processes", "Metallurgy" ]
67,195,296
https://en.wikipedia.org/wiki/Platinum-195%20nuclear%20magnetic%20resonance
Platinum-195 nuclear magnetic resonance spectroscopy (platinum NMR or 195Pt NMR) is a spectroscopic technique which is used for the detection and characterisation of platinum compounds. The sensitivity of the technique and therefore its diagnostic utility have increased significantly starting from the 1970s, with 195Pt NMR nowadays considered the method of choice for structural elucidation of Pt species in solution. Examples of compounds routinely characterised with the method include platinum clusters and organoplatinum species such as PtII-based antitumour agents. Additional applications of 195Pt NMR include kinetic and mechanistic studies or investigations on drug binding. 195Pt magnetic properties Among the naturally occurring isotopes of platinum, 195Pt is the most abundant (33.8%) and the only one with non-zero spin I=1/2. The magnetic properties of the nucleus are considered favourable; the high natural abundance coupled with a medium gyromagnetic ratio (5.768×10⁷ rad T⁻¹ s⁻¹) result in good 195Pt NMR signal receptivity, 19 times that of 13C (but still only 0.0034 times that of 1H). The resonance frequency (relative to a 100 MHz 1H NMR instrument) is approximately 21.4 MHz, close to the 13C resonance at 25.1 MHz. Chemical shifts The chemical shifts of 195Pt nuclei span a very large range of over 13000 ppm (cf. with ~300 ppm range for 13C). The NMR signals are also very sharp and highly sensitive to the platinum chemical environment (oxidation state, ligand identity and field strength, coordination number, etc.). Therefore, substituting even very similar ligands can result in shift changes in the order of hundreds of ppm which stand out on the spectrum and are easily monitored. The reference compound typically chosen for 195Pt NMR experiments is 1.2 M sodium hexachloroplatinate(IV) (Na2PtCl6) in D2O; this platinum(IV) complex is preferred due to its commercial availability, chemical stability, lower price relative to other platinum compounds, and high solubility which enables spectrum recording within minutes. Less soluble ionic platinum complexes have spectrum recording times of about an hour, whereas the borderline insoluble neutral complexes may require overnight measurements. The high sensitivity of the experiment means that contributions from different chlorine isotopes in the reference compound or other species can be resolved at high magnetic field strengths, giving a ±5 ppm uncertainty in reported shift values (which is, however, negligible in view of the 13000 ppm overall range). Couplings Coupling of 195Pt to 1H, 13C, 31P, 19F or 15N has been reported through one up to four bonds (1J to 4J) and is commonly studied to provide additional structural information for platinum complexes. The ~34% abundance of 195Pt (with the remaining 66% of natural Pt being NMR-inactive) means that this coupling appears in the respective 1H/31P/15N/13C NMR spectra as satellite peaks (cf. 13C satellites) which, for example, result in 17:66:17 patterns for singlets. The trans influence in 16 e− square planar PtII complexes has been studied by comparing the magnitude of coupling constants in the cis- and trans- isomers. Complicated homonuclear couplings ranging from 60 to 9000 Hz for 1J(195Pt–195Pt) are of interest in the context of platinum cluster compounds. References Nuclear magnetic resonance Platinum
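The 17:66:17 satellite pattern follows directly from the isotopic abundance quoted above; as a short arithmetic check, with $a \approx 0.338$ for the fraction of NMR-active 195Pt:

$$\frac{a}{2} : (1 - a) : \frac{a}{2} \;=\; 0.169 : 0.662 : 0.169 \;\approx\; 17 : 66 : 17$$

The NMR-inactive fraction of platinum gives the central line, while the coupled, NMR-active fraction splits evenly between the two satellites.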
Platinum-195 nuclear magnetic resonance
[ "Physics", "Chemistry" ]
739
[ "Nuclear magnetic resonance", "Nuclear physics" ]
67,197,507
https://en.wikipedia.org/wiki/Gouy%E2%80%93Stodola%20theorem
In thermodynamics and thermal physics, the Gouy-Stodola theorem is an important theorem for the quantification of irreversibilities in an open system, and aids in the exergy analysis of thermodynamic processes. It asserts that the rate at which work is lost during a process, or at which exergy is destroyed, is proportional to the rate at which entropy is generated, and that the proportionality coefficient is the temperature of the ambient heat reservoir. In the literature, the theorem often appears in a slightly modified form, changing the proportionality coefficient. The theorem is named jointly after the French physicist Georges Gouy and Slovak physicist Aurel Stodola, who demonstrated the theorem in 1889 and 1905 respectively. Gouy used it while working on exergy and utilisable energy, and Stodola while working on steam and gas engines. Overview The Gouy-Stodola theorem is often applied upon an open thermodynamic system, which can exchange heat with some thermal reservoirs. It holds both for systems which cannot exchange mass, and systems which mass can enter and leave. Observe such a system as it is going through some process. It is in contact with multiple reservoirs, of which one, that at temperature $T_0$, is the environment reservoir. During the process, the system produces work and generates entropy. Under these conditions, the theorem has two general forms. Work form The reversible work $W_{rev}$ is the maximal useful work which can be obtained, and can only be fully utilized in an ideal reversible process. An irreversible process produces some work $W$, which is less than $W_{rev}$. The lost work is then $W_{lost} = W_{rev} - W$; in other words, $W_{lost}$ is the work which was lost or not exploited during the process due to irreversibilities. In terms of lost work, the theorem generally states

$$\dot{W}_{lost} = T_0 \dot{S}_{gen}$$

where $\dot{W}_{lost}$ is the rate at which work is lost, and $\dot{S}_{gen}$ is the rate at which entropy is generated. Time derivatives are denoted by dots. The theorem, as stated above, holds only for the entire thermodynamic universe - the system along with its surroundings, together:

$$\dot{W}_{lost,tot} = T_0 \dot{S}_{gen,tot}$$

where the index "tot" denotes the total quantities produced within or by the entire universe. Note that $W_{lost}$ is a relative quantity, in that it is measured in relation to a specific thermal reservoir. In the above equations, $W_{lost}$ is defined in reference to the environment reservoir, at $T_0$. When comparing the actual process to an ideal, reversible process between the same endpoints (in order to evaluate $W_{rev}$, so as to find the value of $W_{lost}$), only the heat interaction with the reference reservoir is allowed to vary. The heat interactions between the system and other reservoirs are kept the same. So, if a different reference reservoir, at temperature $T'$, is chosen, the theorem would read $\dot{W}'_{lost} = T' \dot{S}_{gen}$, where this time $\dot{W}'_{lost}$ is in relation to $T'$, and in the corresponding reversible process, only the heat interaction with the reservoir at $T'$ is different. By integrating over the lifetime of the process, the theorem can also be expressed in terms of final quantities, rather than rates: $W_{lost} = T_0 S_{gen}$. Adiabatic case The theorem also holds for adiabatic processes. That is, for closed systems, which are not in thermal contact with any heat reservoirs. Similarly to the non-adiabatic case, the lost work is measured relative to some reference reservoir. Even though the process itself is adiabatic, the corresponding reversible process may not be, and might require heat exchange with the reference reservoir. 
Thus, this can be thought of as a special case of the above statement of the theorem - an adiabatic process is one for which the heat interactions with all reservoirs are zero, and in the reversible process, only the heat interaction with the reference thermal reservoir may be different. The adiabatic case of the theorem holds also for the other formulation of the theorem, presented below. Exergy form The exergy of the system is the maximal amount of useful work that the system can generate, during a process which brings it to equilibrium with its environment, or the amount of energy available. During an irreversible process, such as heat exchanges with reservoirs, exergy is destroyed. Generally, the theorem states that

$$\dot{X}_{dest} = T_0 \dot{S}_{gen}$$

where $\dot{X}_{dest}$ is the rate at which exergy is destroyed, and $\dot{S}_{gen}$ is the rate at which entropy is generated. As above, time derivatives are denoted by dots. Unlike the lost work formulation, this version of the theorem holds for both the system (the control volume) and for its surroundings (the environment and the thermal reservoirs) separately:

$$\dot{X}_{dest,sys} = T_0 \dot{S}_{gen,sys}$$

and

$$\dot{X}_{dest,surr} = T_0 \dot{S}_{gen,surr}$$

where the index "sys" denotes quantities produced within or by the system itself, and "surr" within or by the surroundings. Therefore, summing these two forms, the theorem also holds for the thermodynamic universe as a whole:

$$\dot{X}_{dest,tot} = T_0 \dot{S}_{gen,tot}$$

where the index "tot" denotes the total quantities of the entire universe. Thus, the exergy formulation of the theorem is less limited, as it can be applied on different regions separately. Nevertheless, the work form is used more often. The proof of the theorem, in both forms, uses the first law of thermodynamics, writing out the heat, work, and energy terms in the relevant regions, and comparing them. Modified coefficient and effective temperature In many cases, it is preferable to use a slightly modified version of the Gouy-Stodola theorem in work form, where $T_0$ is replaced by some effective temperature. When this is done, it often enlarges the scope of the theorem, and adapts it to be applicable to more systems or situations. For example, the corrections elaborated below are only necessary when the system exchanges heat with more than one reservoir - if it exchanges heat only at the environmental temperature $T_0$, the simple form above holds true. Additionally, modifications may change the reversible process to which the real process is compared in calculating $W_{lost}$. The modified theorem then reads

$$\dot{W}_{lost} = T_{eff} \dot{S}_{gen}$$

where $T_{eff}$ is the effective temperature. For a flow process, let $s_1$ denote the specific entropy (entropy per unit mass) at the inlet, where mass flows in, and $s_2$ the specific entropy at the outlet, where mass flows out. Similarly, denote the specific enthalpies by $h_1$ and $h_2$. The inlet and outlet, in this case, function as initial and final states of a process: mass enters the system at an initial state (the inlet, indexed "1"), undergoes some process, and then leaves at a final state (the outlet, indexed "2"). This process is then compared to a reversible process, with the same initial state, but with a (possibly) different final state. The theoretical specific entropy and enthalpy after this ideal, isentropic process are given by $s_{2s}$ and $h_{2s}$, respectively. When the actual process is compared to this theoretical reversible process and $\dot{S}_{gen}$ is evaluated, the proper effective temperature is given by

$$T_{eff} = \frac{h_2 - h_{2s}}{s_2 - s_{2s}}$$

In general, $T_{eff}$ lies somewhere in between the final temperature in the actual process, $T_2$, and the final temperature in the theoretical reversible process, $T_{2s}$. This equation above can sometimes be simplified. 
If both the pressure and the specific heat capacity remain constant, then the changes in enthalpy and entropy can be written in terms of the temperatures,

$$h_2 - h_{2s} = c_p \left( T_2 - T_{2s} \right)$$

and

$$s_2 - s_{2s} = c_p \ln\left(\frac{T_2}{T_{2s}}\right)$$

so that the effective temperature becomes $T_{eff} = (T_2 - T_{2s}) / \ln(T_2 / T_{2s})$. However, it is important to note that this version of the theorem does not relate the same exact values that the original theorem does. Specifically, in comparing the actual process to a reversible one, the modified version allows the final state to be different between the two. This is in contrast to the original version, wherein the reversible process is constructed to match, so that the final states are the same. Applications In general, the Gouy-Stodola theorem is used to quantify irreversibilities in a system and to perform exergy analysis. That is, it allows one to take a thermodynamic system and better understand how inefficient it is (energy-wise), how much work is lost, how much room there is for improvement and where. The second law of thermodynamics states, in essence, that the entropy of a system only increases. Over time, thermodynamic systems tend to gain entropy and lose energy (in approaching equilibrium): thus, the entropy is "somehow" related to how much exergy or potential for useful work a system has. The Gouy-Stodola theorem provides a concrete link. For the most part, this is how the theorem is used - to find and quantify inefficiencies in a system. Flow processes A flow process is a type of thermodynamic process, where matter flows in and out of an open system called the control volume. Such a process may be steady, meaning that the matter and energy flowing into and out of the system are constant through time. It can also be unsteady, or transient, meaning that the flows may change and differ at different times. Many proofs of the theorem demonstrate it specifically for flow systems. Thus, the theorem is particularly useful in performing exergy analysis on such systems. Vapor compression and absorption The Gouy-Stodola theorem is often applied to refrigeration cycles. These are thermodynamic cycles or mechanical systems where external work can be used to move heat from low temperature sources to high temperature sinks, or vice versa. Specifically, the theorem is useful in analyzing vapor compression and vapor absorption refrigeration cycles. The theorem can help identify which components of a system have major irreversibilities, and how much exergy they destroy. It can be used to find at which temperatures the performance is optimal, or what size system should be constructed. Overall, that is, the Gouy-Stodola theorem is a tool to find and quantify inefficiencies in a system, and can point to how to minimize them - this is the goal of exergy analysis. When the theorem is used for these purposes, it is usually applied in its modified form. In ecology Macroscopically, the theorem may be useful environmentally, in ecophysics. An ecosystem is a complex system, where many factors and components interact, some biotic and some abiotic. The Gouy-Stodola theorem can find how much entropy is generated by each part of the system, or how much work is lost. Where there is human interference in an ecosystem, whether the ecosystem continues to exist or is lost may depend on how many irreversibilities it can support. The amount of entropy which is generated or the amount of work the system can perform may vary. 
Hence, two different states (for example, a healthy forest versus one which has undergone significant deforestation) of the same ecosystem may be compared in terms of entropy generation, and this may be used to evaluate the sustainability of the ecosystem under human interference. In biology The theorem is also useful on a more microscopic scale, in biology. Living systems, such as cells, can be analyzed thermodynamically. They are rather complex systems, where many energy transformations occur, and they often waste heat. Hence, the Gouy-Stodola theorem may be useful, in certain situations, to perform exergy analysis on such systems. In particular, it may help to highlight differences between healthy and diseased cells. Generally, the theorem may find applications in fields of biomedicine, or where biology and physics cross over, such as biochemical engineering thermodynamics. As a variational principle A variational principle in physics, such as the principle of least action or Fermat's principle in optics, allows one to describe the system in a global manner and to solve it using the calculus of variations. In thermodynamics, such a principle would allow a Lagrangian formulation. The Gouy-Stodola theorem can be used as the basis for such a variational principle, in thermodynamics. It has been proven to satisfy the necessary conditions. This is fundamentally different from most of the theorem's other uses - here, it isn't being applied in order to locate components with irreversibilities or loss of exergy, but rather helps give some more general information about the system. References Thermodynamics
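To make the modified work form concrete, here is a small numeric sketch; the temperatures, heat capacity, and mass flow rate are invented illustrative values for an adiabatic flow device with constant $c_p$, not taken from any cited source.

```python
import math

# Illustrative adiabatic flow process with constant heat capacity
cp   = 1.005e3   # J/(kg K), specific heat at constant pressure
T2   = 420.0     # K, actual outlet temperature
T2s  = 380.0     # K, outlet temperature of the ideal isentropic process
mdot = 2.0       # kg/s, mass flow rate

# Specific entropy generated (s2s = s1 for the isentropic reference)
# and the enthalpy difference versus the ideal process
s_gen = cp * math.log(T2 / T2s)     # J/(kg K)
dh    = cp * (T2 - T2s)             # J/kg, work not extracted

# Effective temperature and the modified Gouy-Stodola theorem
T_eff = dh / s_gen                  # = (T2 - T2s) / ln(T2/T2s)
W_lost_rate = mdot * T_eff * s_gen  # W; equals mdot*dh by construction

print(f"T_eff = {T_eff:.1f} K")                  # lies between T2s and T2
print(f"lost work = {W_lost_rate / 1e3:.1f} kW")
```

Running this gives an effective temperature of about 400 K, between the actual and isentropic outlet temperatures as the text states, and a lost-work rate of about 80 kW for these assumed values.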
Gouy–Stodola theorem
[ "Physics", "Chemistry", "Mathematics" ]
2,479
[ "Thermodynamics", "Dynamical systems" ]
53,010,729
https://en.wikipedia.org/wiki/Mirror%20life
Mirror life (also called mirror-image life) is a hypothetical form of life with mirror-reflected molecular building blocks. The possibility of mirror life was first discussed by Louis Pasteur. This alternative life form has never been discovered in nature, but efforts to build a mirror-image version of biology's molecular machinery are underway. In December 2024, a broad coalition of scientists, including leading synthetic biology researchers and Nobel laureates, warned that the creation of mirror life, including mirror bacteria, could cause "unprecedented and irreversible harm" to human health and ecosystems worldwide. Its potential to escape immune defenses and invade natural ecosystems might lead to "pervasive lethal infections in a substantial fraction of plant and animal species, including humans." Given these risks, the scientists concluded that mirror organisms should not be created without compelling evidence of safety.
Homochirality
Many of the essential molecules for life on Earth can exist in two mirror-image forms, often called "left-handed" and "right-handed", where handedness refers to the direction in which a pure solution of the molecule rotates plane-polarized light; living organisms, however, do not use both. RNA and DNA contain only right-handed sugars, and proteins are exclusively composed of left-handed amino acids, although many bacteria and fungi are able to synthesise non-ribosomal peptides containing right-handed amino acids, as the example of peptidoglycan synthesis shows. This phenomenon is known as homochirality. It is not known whether homochirality emerged before or after life, whether the building blocks of life must have this particular chirality, or indeed whether life needs to be homochiral. Protein chains built from amino acids of mixed chirality tend not to fold or function well, but mirror-image proteins have been constructed that function identically, only on substrates of opposite handedness.
The concept
Advances in synthetic biology, such as the synthesis of viruses since 2002, of partially synthetic bacteria in 2010, and of synthetic ribosomes in 2013, may lead to the possibility of fully synthesizing a living cell from small molecules, which could enable the synthesis of mirror cells from mirrored versions (enantiomers) of life's building-block molecules. Some proteins have been synthesized in mirror-image versions, including a polymerase in 2016. Reconstructing regular lifeforms in mirror-image form, using the mirror-image (chiral) reflection of their cellular components, could be achieved by substituting left-handed amino acids with right-handed ones, in order to create mirror reflections of proteins, and likewise substituting right-handed nucleic acids with left-handed ones. Because the phospholipids of cell membranes are also chiral, American geneticist George Church proposed using an achiral fatty acid instead of mirror-image phospholipids for the membrane. The electromagnetic force, which governs chemistry, is unchanged under such a reflection (P-symmetry). There is a small alteration of the weak interaction under reflection, which can produce very small corrections that theoretically favor the natural enantiomers of amino acids and sugars, but it is unknown whether this effect is large enough to affect the functionality of mirror biomolecules or to explain homochirality in nature.
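At the level of bookkeeping, the substitution described above is an inversion of the chirality of every building block. The toy sketch below is an illustration only, not from the article: the residues and tags are invented, and the real task is a hard chemistry problem rather than a string transformation.

```python
# Toy model (illustration only) of the substitution described above:
# a "mirror" polymer is built by inverting the chirality tag of every
# building block. Real mirror-life synthesis is a chemistry problem;
# this only models the bookkeeping.

def mirror(polymer: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Swap the L/D chirality tag on each (residue, chirality) pair."""
    flip = {"L": "D", "D": "L"}
    return [(residue, flip[tag]) for residue, tag in polymer]

# A natural peptide fragment: exclusively left-handed (L) amino acids
peptide = [("Ala", "L"), ("Ser", "L"), ("Leu", "L")]
print(mirror(peptide))  # [('Ala', 'D'), ('Ser', 'D'), ('Leu', 'D')]
```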
Mirror animals would need to feed on reflected food, produced by reflected plants. Mirror viruses would not be able to attack natural cells, just as natural viruses would not be able to attack mirror cells. Mirror life presents potential dangers. For example, a chiral-mirror version of cyanobacteria, which needs only achiral nutrients and light for photosynthesis, could take over Earth's ecosystem due to a lack of natural enemies, disturbing the bottom of the food chain by producing mirror versions of the required sugars. Some bacteria can digest L-glucose; exceptions like this would give some rare lifeforms an unanticipated advantage.
Direct applications
A direct application of mirror-chiral organisms could be the mass production of enantiomers (mirror images) of molecules produced by normal life:
Enantiopure drugs: some pharmaceuticals have shown different activity depending on the enantiomeric form.
Aptamers (L-ribonucleic acid aptamers): "That makes mirror-image biochemistry a potentially lucrative business. One company that hopes so is Noxxon Pharma in Berlin. It uses laborious chemical synthesis to make mirror-image forms of short strands of DNA or RNA called aptamers, which bind to therapeutic targets such as proteins in the body to block their activity. The firm has several mirror-aptamer candidates in human trials for diseases including cancer; the idea is that their efficacy might be improved because they aren't degraded by the body's enzymes. A process to replicate mirror-image DNA could offer a much easier route to making the aptamers, says Sven Klussmann, Noxxon Pharma's chief scientific officer."
L-Glucose: the enantiomer of standard glucose. Tests showed that it tastes like standard sugar but is not metabolized the same way; however, it was never marketed due to excessive manufacturing costs. More recent research allows cheap production with high yields, though the authors state that it is not usable as a sweetener due to laxative effects.
In fiction
The creation of a mirror human is the basis of the 1950 short story "Technical Error" by Arthur C. Clarke. In this story, a physical accident transforms a person into his mirror image, speculatively explained by travel through a fourth physical dimension. H. G. Wells' The Plattner Story (1896) is based on a similar idea. In the 1970 Star Trek novel Spock Must Die! by James Blish, the science officer of the USS Enterprise is replicated in mirror form by a transporter mishap. He locks himself in the sick bay, where he is able to synthesize mirror forms of the basic nutrients needed for his survival. An alien machine that reverses chirality, and a blood-symbiont that functions properly only in one chirality, are central to Roger Zelazny's 1976 novel Doorways in the Sand. On the titular planet of Sheri S. Tepper's 1989 novel Grass, some lifeforms have evolved to use the right-handed isomer of alanine. In the Mass Effect series, the chirality of amino acids in foodstuffs is discussed often in both dialogue and encyclopedia files. In the 2014 science fiction novel Cibola Burn by James S. A. Corey, the planet Ilus has indigenous life with partially mirrored chirality. This renders human colonists unable to digest native flora and fauna and greatly complicates conventional farming; consequently, the colonists have to rely upon hydroponic farming and food importation. In the 2017 Daniel Suarez novel Change Agent, an antagonist, Otto, nicknamed the "Mirror Man", is revealed to be a genetically engineered mirror human. He views other humans with disdain and causes them to feel an inexplicable repulsion by his very presence.
In Ryan North's 2023 run on Fantastic Four, the concept is used as an existential threat to the human population.
See also
Xenobiology
Mirror matter – a hypothetical form of matter that interacts only weakly with normal matter, which could form mirror planets, potentially inhabited by mirror-matter life.
References
Chirality Synthetic biology Hypothetical life forms
Mirror life
[ "Physics", "Chemistry", "Engineering", "Biology" ]
1,526
[ "Synthetic biology", "Pharmacology", "Biological engineering", "Origin of life", "Biochemistry", "Hypothetical life forms", "Stereochemistry", "Chirality", "Bioinformatics", "Molecular genetics", "Asymmetry", "Biological hypotheses", "Symmetry" ]
53,014,205
https://en.wikipedia.org/wiki/SMIM23
SMIM23, or Small Integral Membrane Protein 23, is a protein that in humans is encoded by the SMIM23 (also known as C5orf50) gene. The longer mRNA isoform is 519 nucleotides long and translates to a protein of 172 amino acids. Researchers have recently identified that this gene, along with a few others, could play a role in how facial morphology arises in humans.
Gene
SMIM23 is a protein-encoding gene. Basic information about its aliases and chromosome location is given in the table. The schematic of the chromosome helps to visualize the location of the gene.
mRNA
The gene has two splice isoforms (isoforms X1 and X2) and three exon/exon boundaries, indicating four exons (nucleotides 1-105, 106-157, 158-225, and 226-519).
Protein
Physical features
SMIM23 notably has a transmembrane domain. The predicted isoelectric point of the unmodified/unprocessed protein in mice is 5.779, while the transmembrane region alone in humans has an isoelectric point of 5.928. The protein appears to be rich in leucine and glutamic acid, though not at unusually high levels, and is relatively poor in all other amino acids besides alanine, serine, and glutamine. The region underlined in the conceptual translation was predicted to be an involucrin repeat.
Post-translational modifications
The transmembrane region is 1,674.2 daltons, while the whole protein is 20,008.51 Da; this is very similar to the UniProt prediction of 20.025 kDa. Antibody kits were investigated to examine banding patterns and weight changes that may occur after translation. The C5orf50 polyclonal antibody from ThermoFisher Scientific shows a Western blot banding pattern at 40 kDa, which suggests a significant amount of post-translational modification through the addition of large components. There are many phosphorylation sites along the sequence, including two protein kinase C phosphorylation sites, a cAMP- and cGMP-dependent protein kinase phosphorylation site, and a tyrosine kinase phosphorylation site. There is also a confidently predicted potential C-terminal GPI-modification site.
Secondary structure
There are two stretches of alpha helices, from amino acids 33 to 49 and 89 to 136, based on evidence from various secondary-structure prediction programs; the most informative of the programs investigated was PELE on Biology Workbench. The predicted 3D protein structure resembles a series of helices, similar to what was predicted by other programs.
Subcellular localization
This human integral membrane protein is predicted to be found in the endoplasmic reticulum. The same kind of localization analysis in other species returned conflicting results, with many programs predicting the protein to be present in the cytosol. This raises the possibility of incorrect naming, i.e. the protein may not be an integral membrane protein given the other predicted locations; such a conclusion would require further information.
Expression
There is not enough consensus on where in the body SMIM23 is expressed. Databases indicate expression mainly in the testes, but this may be due to a lack of data.
Regulation of expression
The promoter region of SMIM23 is approximately 1,192 nucleotides long, with various predicted transcription factor binding sites. At the level of mRNA secondary structure, regulation may involve a predicted stem-loop in the 5' UTR with a few areas of conservation across species.
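Predictions like the isoelectric point and molecular weight figures quoted above are typically obtained from standard tools. A minimal sketch of that kind of calculation with Biopython follows; the sequence is a placeholder, not the real SMIM23 sequence, so the printed values are only illustrative.

```python
# Minimal sketch of computing the kinds of physical-feature predictions
# discussed above (isoelectric point, molecular weight) with Biopython.
# The sequence below is a placeholder, NOT the real SMIM23 sequence.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

placeholder_seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSALE"

analysis = ProteinAnalysis(placeholder_seq)
print(f"Predicted isoelectric point: {analysis.isoelectric_point():.3f}")
print(f"Predicted molecular weight:  {analysis.molecular_weight():.2f} Da")
```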
Function and clinical significance
Novel research has suggested that how face shape arises in individuals may be influenced by a set of genes, and this set includes SMIM23. Though the paper refers to the gene by an alias (C5orf50), the scientists gathered a list of five genes that likely determine facial shape, specifically in people of European descent. These findings are supported by phenotype replication for each specific gene and by statistical analysis. Consistent with findings elsewhere, the article notes that SMIM23 likely codes for an uncharacterized transmembrane protein. There have also been studies suggesting that a set of genes including SMIM23 may influence human height. Furthermore, a great deal of research is being done on chromosome 5 in general to understand the roles of certain genes on it, including SMIM23; this could one day provide insight into the gene’s specific roles on the chromosome itself.
Interacting proteins
The following proteins are predicted to interact with SMIM23. Cilia and Flagella Associated Protein 43, also known as CFAP43 or WDR96, is the most confident of the predicted functional partners and is a tryptophan-aspartic acid (WD) repeat domain protein. SFR1 (SWI5-dependent recombination repair 1) is a component of the SWI5-SFR1 complex, which is required for double-strand break repair via homologous recombination. COL17A1 is collagen, specifically type XVII, alpha 1, and may play a role in overall protein structure. PRDM16 binds to DNA and acts as a transcriptional regulator; it functions in the differentiation between white and brown adipose tissue, and it can also be a repressor of transforming growth factor-beta signaling.
Homology and evolution
There are no known paralogs. There are over 100 known orthologs, which range from primates to small ground animals. From these investigations and from sequence similarity, the ortholog space can be characterized. The closest relatives to humans with the SMIM23 gene are primates, so two types of monkeys were picked, which diverged around 29.4 million years ago and have sequence similarities in the high 70s (percent). Slightly more distant relatives with the gene come from a wide variety of animals, from horses to sea mammals to bats and more, all of which have similarities between 62% and 69%. Lastly, some distantly related orthologs were included, such as the Tasmanian devil and various scavenger animals, which have similarities between 40% and 61%. It is interesting to see how some portions are still highly conserved (see the conceptual translation above); the most interesting motif is tryptophan 124, leucine 125, and aspartic acid 126. Lastly, BLAST returned a protein family of unknown function: two small conserved sequences (LEQ and DLE) are part of the DUF4635 motif. Though not completely conserved in the alignments done with SMIM23, these were labeled in the conceptual translation.
Orthologs
The protein was not found in bacteria, archaea, protists, plants, fungi, invertebrates, reptiles, or birds; all of the orthologs found were mammals. An unrooted phylogenetic tree of SMIM23 was created with a few close, moderately related, and distant orthologs (listed in the table). Here, the larger the distance (line length), the longer the time to the last common ancestor. Sequence identity refers to exact amino acid matches, while similarity also counts conservative amino acid substitutions.
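The distinction between the two statistics can be made concrete with a short function. The sketch below is illustrative only, not the authors' pipeline: the similarity groups are one common simplified convention (real analyses use substitution matrices such as BLOSUM62), and the aligned fragments are invented, not real SMIM23 sequences.

```python
# Illustrative sketch of the two alignment statistics defined above:
# identity counts exact matches; similarity also counts conservative
# substitutions. The groups below are a simplified convention.
SIMILAR_GROUPS = [set("ILVM"), set("FWY"), set("KRH"), set("DE"),
                  set("ST"), set("NQ"), set("AG")]

def identity_and_similarity(seq_a: str, seq_b: str) -> tuple[float, float]:
    """Percent identity and percent similarity of two pre-aligned sequences."""
    assert len(seq_a) == len(seq_b), "sequences must be pre-aligned"
    n = len(seq_a)
    ident = sum(a == b for a, b in zip(seq_a, seq_b))
    simil = sum(a == b or any(a in g and b in g for g in SIMILAR_GROUPS)
                for a, b in zip(seq_a, seq_b))
    return 100 * ident / n, 100 * simil / n

# Toy aligned fragments (hypothetical, not real SMIM23 orthologs):
print(identity_and_similarity("WLDAKQE", "WLEAKRD"))  # ~(57.1, 85.7)
```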
References
Suggested reading
Liu F, van der Lijn F, Schurmann C, Zhu G, Chakravarty MM, Hysi PG, et al. (2012). A Genome-Wide Association Study Identifies Five Loci Influencing Facial Morphology in Europeans. PLoS Genet 8(9): e1002932. https://doi.org/10.1371/journal.pgen.1002932
Lowe JK, Maller JB, Pe'er I, Neale BM, Salit J, Kenny EE, et al. (2009). Genome-Wide Association Studies in an Isolated Founder Population from the Pacific Island of Kosrae. PLoS Genet 5(2): e1000365. https://doi.org/10.1371/journal.pgen.1000365
Greliche N, Germain M, Lambert J-C, et al. (2013). A genome-wide search for common SNP x SNP interactions on the risk of venous thrombosis. BMC Medical Genetics 14:36.
Schmutz J, et al. (2004). The DNA sequence and comparative analysis of human chromosome 5. Nature 431(7006): 268-274. https://dx.doi.org/10.1038/nature02919
Lango Allen H, Estrada K, Lettre G, et al. (2010). Hundreds of variants clustered in genomic loci and biological pathways affect human height. Nature 467(7317): 832-838.
Rose JE, Behm FM, Drgon T, Johnson C, Uhl GR (2010). Personalized Smoking Cessation: Interactions between Nicotine Dose, Dependence and Quit-Success Genotype Score. Molecular Medicine 16(7-8): 247-253.
Proteins Genes Integral membrane proteins
SMIM23
[ "Chemistry" ]
1,870
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
64,227,074
https://en.wikipedia.org/wiki/Hatice%20Altug
Hatice Altug (born 1978) is a Turkish physicist and professor in the Bioengineering Department and head of the Bio-nanophotonic Systems laboratory at École Polytechnique Fédérale de Lausanne (EPFL), in Switzerland. Her research focuses on nanophotonics for biosensing and surface-enhanced spectroscopy, integrated with microfluidics and nanofabrication, to obtain high-sensitivity, label-free characterization of biological material. She has developed a low-cost biosensor that allows the identification of viruses such as Ebola, works in difficult settings, and is therefore particularly useful in pandemics. Altug is a recipient of the United States Presidential Early Career Award for Scientists and Engineers and The Optical Society of America's Adolph Lomb Medal. She has also received a European Research Council Consolidator Award, an Office of Naval Research Young Investigator Award, a National Science Foundation CAREER Award, and Popular Science magazine's Brilliant 10 Award. She is a Fellow of the Optical Society of America.
Education
Altug, who was born in the Karamanlı district of Burdur in 1978, completed her high school education in 1996 at Antalya Anatolian High School in Antalya, Turkey. She received her B.Sc. degree in physics in 2000 from Bilkent University (Ankara, Turkey), having been awarded a full scholarship there. In 2007, she was awarded a PhD in applied physics from Stanford University (California, U.S.), under the supervision of Professor Jelena Vučković. During her education at Stanford University, she worked on laser systems and optical instruments.
Career
Altug completed a postdoctoral fellowship at the Center for Engineering in Medicine at Harvard Medical School. From 2007 until 2013, she was first an assistant and later an associate professor of Electrical and Computer Engineering at Boston University. In 2010, she was awarded the Faculty Early Career Development (CAREER) award by the National Science Foundation. Altug disseminated her findings to the public through Boston’s Museum of Science, local educational programs such as Boston Upward Bound Math and Science, and Boston University’s Summer Challenge program on engineering. At the College of Engineering, she added experimental modules to courses relating to nanotechnology. She was also named one of Popular Science’s “Brilliant 10,” a group of researchers under 40 who made transformational contributions to their fields during 2010. In 2011, the IEEE Photonics Society named Altug winner of its Young Investigator Award, which recognizes individuals who make outstanding technical contributions to the field of photonics prior to their 35th birthday. She was honored for her groundbreaking achievements in confining and manipulating light at the nanoscale to dramatically improve biosensing capabilities. Altug was recognized with OSA’s Adolph Lomb Medal in 2012 “for breakthrough contributions on integrated optical nano-biosensor and nanospectroscopy technologies based on nanoplasmonics, nanofluidics, and novel nanofabrication.” She was also named by President Obama, among 94 researchers, as a recipient of the 2011 Presidential Early Career Awards for Scientists and Engineers (PECASE), the highest honor bestowed by the United States government on science and engineering professionals in the early stages of their independent research careers. As well as attending the White House ceremony, awardees receive a research grant lasting up to five years.
She received the award for leading the development of a biosensor that uses tiny crystals to manipulate light to detect a virus, a protein, or a cancer cell in a drop of blood. In 2013, Altug joined the École Polytechnique Fédérale de Lausanne, where she became a full professor in 2020. In 2019, she was awarded an ERC Proof of Concept Grant by the European Research Council for her project “Portable Infrared Biochemical Sensor Enabled by Pixelated Dielectric Metasurfaces.”
Awards and honors
2021 Fellow of Optica for "pioneering contributions to nano-optics, manipulation of light on-chip, the development of innovative nanobiosensors and sensing techniques, and exemplary contributions to the scientific community and Optica."
2020 European Physical Society Emmy Noether Distinction for Women in Physics
2019 ERC Proof of Concept Grant
2012 Optical Society's Adolph Lomb Medal
2011 Presidential Early Career Awards for Scientists and Engineers
2011 IEEE Photonics Society Young Investigator Award
2010 National Science Foundation Faculty Early Career Development (CAREER) award
References
External links
Fellows of Optica (society) 1978 births Living people Women in optics Optical engineers Turkish women physicists Bilkent University alumni Stanford University alumni Boston University faculty Academic staff of the École Polytechnique Fédérale de Lausanne Metamaterials scientists Biomedical engineers Turkish nanotechnologists People from Burdur Province Turkish expatriates in the United States Turkish expatriates in Switzerland Turkish electrical engineers Electrical engineering academics 21st-century physicists Recipients of the Presidential Early Career Award for Scientists and Engineers 21st-century women physicists
Hatice Altug
[ "Materials_science" ]
994
[ "Metamaterials scientists", "Metamaterials" ]
64,236,931
https://en.wikipedia.org/wiki/MIF4GD
MIF4GD, or MIF4G domain-containing protein, is a protein that in humans is encoded by the MIF4GD gene. It is also known as SLIP1, SLBP (stem-loop binding protein)-interacting protein 1, AD023, and MIFD. MIF4GD is expressed ubiquitously in humans and has been found to be involved in activating proteins for histone mRNA translation, in the alternative splicing and translation of mRNAs, and in the regulation of cell proliferation.
Gene
The MIF4GD gene is located in humans on the minus strand of chromosome 17q25.1 and spans 5.0 kb, from base 75,266,228 to base 75,271,292.
mRNA
There are 11 alternatively spliced mRNA transcripts and 3 unspliced mRNA transcripts that can be transcribed from this gene, which together include 7 possible exons and 11 distinct introns.
Protein
There are 10 viable isoforms of the MIF4G domain-containing protein. The longest is isoform 1, which is 263 amino acids long; the most common, however, is isoform 4, which consists of 6 exons and is 222 amino acids in length.
Features
MIF4G domain-containing protein isoform 1 has a predicted molecular weight of 30.1 kDa and a predicted isoelectric point of 5.2, indicating that it is an acidic protein. It has a normal ratio of each amino acid when compared to the average human protein. Additionally, MIF4GD is expected to form 11 alpha helices.
Sub-cellular localization
Searches of MIF4GD antibodies showed that MIF4GD is present in the cytoplasm and nucleoli of cells. Additionally, several bioinformatic programs predict that human MIF4GD, as well as several of its orthologs, is present in the cytoplasm, nucleus, and mitochondria of cells.
Post-translational modifications
Due to its presumed localization in the cytoplasm, it is predicted that MIF4GD could be phosphorylated, acetylated, ubiquitinated, or sumoylated. Additionally, MIF4GD is predicted to contain a "YinOYang" site at S61, which may be either O-GlcNAcylated or phosphorylated at different times for regulatory purposes. It is not likely that the MIF4GD protein is lipid-linked or glycosylated.
MIF4G domain
The MIF4GD protein contains an MIF4G domain, which is named after the middle domain of eukaryotic initiation factor 4G (eIF4G). The MIF4G domain of the MIF4GD protein has a molecular weight of 17.0 kDa and a predicted isoelectric point of 5.7. Like the entire protein, it contains normal ratios of each amino acid relative to a reference set of human proteins; however, it contains fewer negatively charged and more positively charged amino acids than the protein as a whole. The MIF4G domain is predicted to contain many alpha helices and is thought to contain alpha-helical repeats.
Expression and regulation
MIF4GD is found only in animals and is expressed ubiquitously in the body, though it has been found to be expressed at a somewhat higher rate in lymph nodes, bone marrow, and testes. MIF4GD is expressed at an average rate 1.7 times higher than the average gene. The promoter region of MIF4GD is approximately 1,137 base pairs long and is predicted to interact with various transcription factors. The 5' untranslated region of MIF4GD mRNA transcripts is relatively short, at around 137 nucleotides, and is predicted to form stem-loops and interior loops to which RNA-binding proteins may bind. The 3' untranslated region is longer, at approximately 510 nucleotides.
The 3' UTR is also predicted to form stem-loops, interior loops, and bulge loops, as well as more complex secondary structures, and is predicted to bind RNA-binding proteins and miRNAs at or near these sites.
Interactants
MIF4GD has been experimentally shown to bind various other proteins, many of which play a role in the alternative splicing of pre-mRNAs and the translation of mRNAs into proteins. It is also known to interact with eukaryotic translation initiation factors, RNA, and DNA to form a translation initiation complex. Some of the most notable proteins that interact with MIF4GD are:
ATP-dependent RNA helicases DDX19A and DDX19B, which are involved in mRNA export from the nucleus and in helicase activity, facilitating the dissociation of nuclear mRNA-binding proteins and their replacement with cytoplasmic mRNA-binding proteins.
Cap binding complex dependent translation initiation factor, or CTIF, a paralog of MIF4GD. CTIF binds cotranscriptionally to the cap end of the nascent mRNA and is involved in the simultaneous editing and translation of mRNA that happens directly after export from the nucleus.
Histone RNA hairpin-binding protein, or SLBP, which is involved in histone pre-mRNA processing and the movement of mRNAs from the nucleus to the cytoplasm of cells.
Supervillin, or SVIL, a peripheral membrane protein that forms a high-affinity link between the actin cytoskeleton and the membrane and contributes to myogenic membrane structure and differentiation. Supervillin also regulates cell spreading and motility during the cell cycle.
MIF4GD has also been verified by two-hybrid bait-prey experiments to interact with NSP7ab, or non-structural protein 7, of SARS-CoV.
Function and clinical significance
MIF4GD has several known functions, including the activation of proteins that bind histone mRNAs for translation and the binding of mRNAs for alternative splicing and translation into proteins. Additionally, down-regulation of the SLIP1/MIF4GD gene and the corresponding protein results in a reduced rate of histone mRNA translation and reduced cell viability. It is therefore speculated to be needed in eukaryotic cells for protein production and cell proliferation. MIF4GD has been shown to bind and stabilize p27kip1, which plays an important role in regulating the cell cycle and in cancer progression. When bound to MIF4GD, the stabilized protein suppresses phosphorylation by CDK2 at T187, which controls the amount of cell proliferation in hepatocellular carcinoma (HCC). Regulation of this interaction is being studied as a potential therapeutic treatment for patients with hepatocellular carcinoma. This provides further evidence that MIF4GD helps regulate cell proliferation, and it suggests that MIF4GD may play a role in the immune response.
Sequence homology and evolutionary history
MIF4GD is found in Animalia and first appeared in Porifera, which diverged from Homo sapiens around 777 million years ago. Relative to humans, this gene is highly conserved (>80% identity and >90% similarity) in mammals and reptiles, moderately conserved (>70% identity and >85% similarity) in other chordates, and shows low conservation (15-25% identity and 25-40% similarity) in the rest of Animalia. MIF4GD is not present in Trichoplax, fungi, plants, protists, archaea, or bacteria.
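As a small illustration (not part of the original analysis), the conservation tiers just described can be expressed as a simple classifier. The thresholds below are those stated in the text; the example identity values are invented.

```python
# Tiny illustration of the conservation tiers stated above, applied to a
# percent-identity value. Thresholds follow the text; the example values
# are invented, not real ortholog data.
def conservation_tier(percent_identity: float) -> str:
    """Classify an ortholog by percent identity to the human protein."""
    if percent_identity > 80:
        return "highly conserved"
    if percent_identity > 70:
        return "moderately conserved"
    if 15 <= percent_identity <= 25:
        return "low conservation"
    return "outside the ranges given in the text"

for pid in (92.0, 74.5, 20.0):
    print(f"{pid}% identity -> {conservation_tier(pid)}")
```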
Orthologs
There are currently 310 known and sequenced MIF4GD orthologs found in Animalia. A select number of these orthologs have been analyzed for estimated time of divergence (in millions of years), amino acid sequence identity to humans, and amino acid sequence similarity to humans. The results are shown in the table below:
Paralogs
MIF4GD has two known paralogs, PAIP1 and CTIF. Both have moderate to low conservation relative to MIF4GD, with less than 15% identity and between 20% and 25% similarity. However, both of these genes are predicted to have diverged before the evolution of the orthologs, and they scored E-values of nearly zero, indicating a significant relationship with MIF4GD. MIF4GD is a slowly evolving gene, with an approximate average of 75 amino acid changes per hundred amino acids per million years. Multiple sequence alignments of human MIF4GD and its orthologs showed two amino acids conserved throughout all sequences: Gly200 and Glu241.
References
Proteins
MIF4GD
[ "Chemistry" ]
1,884
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
42,840,382
https://en.wikipedia.org/wiki/SYSACCO
SYSACCO is a Syrian-Saudi chemical company with headquarters in Aleppo, Syria. Its chemical plant, with an area of 142,000 m2, is located east of Aleppo. SYSACCO operates Syria’s only chlorine manufacturing plant.
Products
The company mostly produces water-sterilization products for the local market:
Sodium hydroxide, also known as caustic soda.
Chlorine gas cylinders.
Sodium hypochlorite, the sodium salt of hypochlorous acid.
Hydrochloric acid, a clear, colorless, highly pungent solution of hydrogen chloride in water.
References
External links
www.sysacco.net
Chemical plants
SYSACCO
[ "Chemistry" ]
137
[ "Chemical process engineering", "Chemical plants" ]