Otto Marseus van Schrieck (ca. 1613, in Nijmegen – buried 22 June 1678, in Amsterdam) was a painter in the Dutch Golden Age. He is best known for his paintings of forest flora and fauna. Biography Marseus van Schrieck spent the years 1652–1657 in Rome and Florence with the painters Matthias Withoos and Willem van Aelst, and there joined the Dutch Guild of Artists. Later he worked at the court of the Grand Duke of Tuscany and traveled throughout England and France, after which he settled in Amsterdam. On 25 April 1664, he married Margarita Gysels (the daughter of the engraver Cornelius Gysels). Arnold Houbraken's biography of Otto mentions that he joined the Bentvueghels in Rome and was nicknamed the snuffelaer, or "ferreter", because he was always in the garden hunting for details. Houbraken quotes Otto's wife, who went on to outlive two further husbands and was still alive when he wrote the book; according to her, Marseus van Schrieck kept snakes and lizards in a shed at the back of his house, and also on a piece of land outside the city that was walled in for this purpose. Works Many of his paintings are dark studies of plants, often with lizards at the base and insects on the leaves and branches. According to the Netherlands Institute for Art History, his followers were Willem van Aelst, Anthonie van Borssom, Elias van den Broeck, J Falk, Carl Wilhelm de Hamilton, Trajan Hughes, Nicolaes Lachtropius, Jacob Marrel, Abraham Mignon, Rachel Ruysch, Christiaen Striep, Isac Vromans, Matthias Withoos, and Pieter Withoos. A notable omission from this list is Alida Withoos – daughter of Matthias and sister of Pieter – who collaborated with him on paintings of the garden of Vijverhof for Agnes Block's collection of garden albums. 
References Otto Marcelis biography in De groote schouburgh der Nederlantsche konstschilders en schilderessen (1718) by Arnold Houbraken, courtesy of the Digital library for Dutch literature External links Artcyclopedia entry with many links Works and literature on PubHist 1619 births 1678 deaths Dutch Golden Age painters Dutch male painters Animal artists Artists from Nijmegen Members of the Bentvueghels
Łęki is a village in the administrative district of Gmina Łososina Dolna, within Nowy Sącz County, Lesser Poland Voivodeship, in southern Poland. It lies approximately north-west of Łososina Dolna, north of Nowy Sącz, and south-east of the regional capital Kraków. References Villages in Nowy Sącz County
```tsx
import * as React from 'react';
import { useAnimationFrame } from '@fluentui/react-utilities';

export const Default = () => {
  const [setAnimationFrame, clearAnimationFrame] = useAnimationFrame();
  const [visible, setVisible] = React.useState(false);

  React.useEffect(() => {
    setAnimationFrame(() => setVisible(true));
    return () => clearAnimationFrame();
  }, [setAnimationFrame]);

  return visible ? <div>Test the renderization</div> : null;
};
```
Bootlegs & B-Sides is a compilation album by the rap group Luniz. It was originally released in 1994 as an EP titled Formally Known as the LuniTunes, a nod to the group's original name (Warner Bros. reportedly threatened a lawsuit if they did not change the name because of its similarity to the Looney Tunes cartoon brand). The EP was an underground hit throughout Oakland and Richmond, earning the Luniz a very loyal fanbase prior to the 1995 release of their first LP, Operation Stackola. The album was re-released in the summer of 1997 under the title Bootlegs & B-Sides to build a buzz for the anticipated release of their second album, Lunitik Muzik. Track listing "Scandalous" - featuring Dru Down and Suga-T - 4:58 "Doin' Dirt" - featuring Dru Down - 4:17 "Dirty Raps" - 4:17 "Scope" - 4:33 "Just a Freak" - featuring Knucklehead - 4:45 "Stupid" - 4:38 Luniz albums 1997 compilation albums B-side compilation albums
```html <html lang="en"> <head> <title>Bug Reporting - Untitled</title> <meta http-equiv="Content-Type" content="text/html"> <meta name="description" content="Untitled"> <meta name="generator" content="makeinfo 4.8"> <link title="Top" rel="start" href="index.html#Top"> <link rel="up" href="Reporting-Bugs.html#Reporting-Bugs" title="Reporting Bugs"> <link rel="prev" href="Bug-Criteria.html#Bug-Criteria" title="Bug Criteria"> <link href="path_to_url" rel="generator-home" title="Texinfo Homepage"> <!-- This file documents the GNU linker LD (GNU Binutils) version 2.26. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License or any later version published by the Free Software Foundation; with no Invariant Sections, with no Front-Cover Texts, and with no Back-Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License". --> <meta http-equiv="Content-Style-Type" content="text/css"> <style type="text/css"><!-- pre.display { font-family:inherit } pre.format { font-family:inherit } pre.smalldisplay { font-family:inherit; font-size:smaller } pre.smallformat { font-family:inherit; font-size:smaller } pre.smallexample { font-size:smaller } pre.smalllisp { font-size:smaller } span.sc { font-variant:small-caps } span.roman { font-family:serif; font-weight:normal; } span.sansserif { font-family:sans-serif; font-weight:normal; } --></style> </head> <body> <div class="node"> <p> <a name="Bug-Reporting"></a> Previous:&nbsp;<a rel="previous" accesskey="p" href="Bug-Criteria.html#Bug-Criteria">Bug Criteria</a>, Up:&nbsp;<a rel="up" accesskey="u" href="Reporting-Bugs.html#Reporting-Bugs">Reporting Bugs</a> <hr> </div> <h3 class="section">6.2 How to Report Bugs</h3> <p><a name="index-bug-reports-764"></a><a name="index-g_t_0040command_007bld_007d-bugs_002c-reporting-765"></a> A number of companies and individuals offer support for <span class="sc">gnu</span> products. 
If you obtained <samp><span class="command">ld</span></samp> from a support organization, we recommend you contact that organization first. <p>You can find contact information for many support companies and individuals in the file <samp><span class="file">etc/SERVICE</span></samp> in the <span class="sc">gnu</span> Emacs distribution. <p>Otherwise, send bug reports for <samp><span class="command">ld</span></samp> to <a href="path_to_url">path_to_url</a>. <p>The fundamental principle of reporting bugs usefully is this: <strong>report all the facts</strong>. If you are not sure whether to state a fact or leave it out, state it! <p>Often people omit facts because they think they know what causes the problem and assume that some details do not matter. Thus, you might assume that the name of a symbol you use in an example does not matter. Well, probably it does not, but one cannot be sure. Perhaps the bug is a stray memory reference which happens to fetch from the location where that name is stored in memory; perhaps, if the name were different, the contents of that location would fool the linker into doing the right thing despite the bug. Play it safe and give a specific, complete example. That is the easiest thing for you to do, and the most helpful. <p>Keep in mind that the purpose of a bug report is to enable us to fix the bug if it is new to us. Therefore, always write your bug reports on the assumption that the bug has not been reported previously. <p>Sometimes people give a few sketchy facts and ask, &ldquo;Does this ring a bell?&rdquo; This cannot help us fix a bug, so it is basically useless. We respond by asking for enough details to enable us to investigate. You might as well expedite matters by sending them to begin with. <p>To enable us to fix the bug, you should include all these things: <ul> <li>The version of <samp><span class="command">ld</span></samp>. 
<samp><span class="command">ld</span></samp> announces it if you start it with the `<samp><span class="samp">--version</span></samp>' argument. <p>Without this, we will not know whether there is any point in looking for the bug in the current version of <samp><span class="command">ld</span></samp>. <li>Any patches you may have applied to the <samp><span class="command">ld</span></samp> source, including any patches made to the <code>BFD</code> library. <li>The type of machine you are using, and the operating system name and version number. <li>What compiler (and its version) was used to compile <samp><span class="command">ld</span></samp>&mdash;e.g. &ldquo;<code>gcc-2.7</code>&rdquo;. <li>The command arguments you gave the linker to link your example and observe the bug. To guarantee you will not omit something important, list them all. A copy of the Makefile (or the output from make) is sufficient. <p>If we were to try to guess the arguments, we would probably guess wrong and then we might not encounter the bug. <li>A complete input file, or set of input files, that will reproduce the bug. It is generally most helpful to send the actual object files provided that they are reasonably small. Say no more than 10K. For bigger files you can either make them available by FTP or HTTP or else state that you are willing to send the object file(s) to whomever requests them. (Note - your email will be going to a mailing list, so we do not want to clog it up with large attachments). But small attachments are best. <p>If the source files were assembled using <code>gas</code> or compiled using <code>gcc</code>, then it may be OK to send the source files rather than the object files. In this case, be sure to say exactly what version of <code>gas</code> or <code>gcc</code> was used to produce the object files. Also say how <code>gas</code> or <code>gcc</code> were configured. <li>A description of what behavior you observe that you believe is incorrect. 
For example, &ldquo;It gets a fatal signal.&rdquo; <p>Of course, if the bug is that <samp><span class="command">ld</span></samp> gets a fatal signal, then we will certainly notice it. But if the bug is incorrect output, we might not notice unless it is glaringly wrong. You might as well not give us a chance to make a mistake. <p>Even if the problem you experience is a fatal signal, you should still say so explicitly. Suppose something strange is going on, such as, your copy of <samp><span class="command">ld</span></samp> is out of sync, or you have encountered a bug in the C library on your system. (This has happened!) Your copy might crash and ours would not. If you told us to expect a crash, then when ours fails to crash, we would know that the bug was not happening for us. If you had not told us to expect a crash, then we would not be able to draw any conclusion from our observations. <li>If you wish to suggest changes to the <samp><span class="command">ld</span></samp> source, send us context diffs, as generated by <code>diff</code> with the `<samp><span class="samp">-u</span></samp>', `<samp><span class="samp">-c</span></samp>', or `<samp><span class="samp">-p</span></samp>' option. Always send diffs from the old file to the new file. If you even discuss something in the <samp><span class="command">ld</span></samp> source, refer to it by context, not by line number. <p>The line numbers in our development sources will not match those in your sources. Your line numbers would convey no useful information to us. </ul> <p>Here are some things that are not necessary: <ul> <li>A description of the envelope of the bug. <p>Often people who encounter a bug spend a lot of time investigating which changes to the input file will make the bug go away and which changes will not affect it. 
<p>This is often time consuming and not very useful, because the way we will find the bug is by running a single example under the debugger with breakpoints, not by pure deduction from a series of examples. We recommend that you save your time for something else. <p>Of course, if you can find a simpler example to report <em>instead</em> of the original one, that is a convenience for us. Errors in the output will be easier to spot, running under the debugger will take less time, and so on. <p>However, simplification is not vital; if you do not want to do this, report the bug anyway and send us the entire test case you used. <li>A patch for the bug. <p>A patch for the bug does help us if it is a good one. But do not omit the necessary information, such as the test case, on the assumption that a patch is all we need. We might see problems with your patch and decide to fix the problem another way, or we might not understand it at all. <p>Sometimes with a program as complicated as <samp><span class="command">ld</span></samp> it is very hard to construct an example that will make the program follow a certain path through the code. If you do not send us the example, we will not be able to construct one, so we will not be able to verify that the bug is fixed. <p>And if we cannot understand what bug you are trying to fix, or why your patch should be an improvement, we will not install it. A test case will help us to understand. <li>A guess about what the bug is or what it depends on. <p>Such guesses are usually wrong. Even we cannot guess right about such things without first using the debugger to find the facts. </ul> </body></html> ```
Ramin Mehmanparast is an Iranian diplomat, the former spokesman of the Iranian foreign ministry and the former ambassador to Poland and Lithuania. He was the deputy Minister of Foreign Affairs, the head of the "Center for Public Diplomacy & Media" and the spokesperson of the Ministry of Foreign Affairs from 31 October 2009 until his resignation on 11 May 2013. He was also the ambassador of Iran to Thailand from 1996 to 2000 and to Kazakhstan from 2004 to 2009. After the December 2012 Sandy Hook shooting in the United States, Mehmanparast expressed his condolences to the American people and the victims' families. Mehmanparast registered as a candidate for the 2013 presidential election on 11 May 2013 and left his position as spokesman of the Iranian Foreign Ministry the same day. He withdrew his candidacy on 19 May 2013. Marriage He married Iranian actress Maryam Kavyani in October 2018. References External links "Ramin Mehmanparast became the spokesman of the Ministry of Foreign Affairs" – Mardomsalari Newspaper Living people Iranian diplomats Spokespersons for the Ministry of Foreign Affairs of Iran 1960 births
```tsx
import dayjs from 'dayjs';
import Assignees from '../Assignees';
import Details from '../Details';
import DueDateLabel from '../DueDateLabel';
import Labels from '../label/Labels';
import EditForm from '../../containers/editForm/EditForm';
import { ItemDate } from '../../styles/common';
import {
  LastUpdate,
  Left,
  PriceContainer,
  ColumnChild,
  LabelColumn,
  StageColumn
} from '../../styles/item';
import { IOptions } from '../../types';
import { __ } from '@erxes/ui/src/utils';
import React from 'react';
import PriorityIndicator from '../editForm/PriorityIndicator';
import { IDeal } from '../../../deals/types';

type Props = {
  stageId?: string;
  onClick?: () => void;
  item: IDeal;
  isFormVisible?: boolean;
  options: IOptions;
  groupType?: string;
};

class ListItemRow extends React.PureComponent<Props> {
  renderDate(date) {
    if (!date) {
      return null;
    }

    return <ItemDate>{dayjs(date).format('lll')}</ItemDate>;
  }

  renderForm = () => {
    const { item, isFormVisible, stageId } = this.props;

    if (!isFormVisible) {
      return null;
    }

    return (
      <EditForm
        {...this.props}
        stageId={stageId || item.stageId}
        itemId={item._id}
        hideHeader={true}
        isPopupVisible={isFormVisible}
      />
    );
  };

  renderStage = () => {
    const { item, groupType } = this.props;
    const { labels, stage } = item;

    if (groupType === 'stage') {
      return (
        <LabelColumn>
          {this.checkNull(labels.length > 0, <Labels labels={labels} />)}
        </LabelColumn>
      );
    }

    return (
      <StageColumn>
        <span>{stage ? stage.name : '-'}</span>
      </StageColumn>
    );
  };

  renderPriority = () => {
    const { item, groupType } = this.props;
    const { priority, labels } = item;

    if (groupType === 'priority') {
      return (
        <LabelColumn>
          <Labels labels={labels} />
        </LabelColumn>
      );
    }

    return (
      <td>
        {priority ? (
          <PriorityIndicator isFullBackground={true} value={priority} />
        ) : (
          '-'
        )}
      </td>
    );
  };

  checkNull = (statement: boolean, Component: React.ReactNode) => {
    if (statement) {
      return Component;
    }
    return '-';
  };

  render() {
    const { item, onClick, groupType, options } = this.props;
    const {
      customers,
      companies,
      closeDate,
      isComplete,
      labels,
      assignedUsers,
      products
    } = item;

    return (
      <>
        <tr onClick={onClick} key={item._id} style={{ cursor: 'pointer' }}>
          <ColumnChild>
            <h5>{item.name}</h5>
            <LastUpdate>
              {__('Last updated')}: {this.renderDate(item.modifiedAt)}
            </LastUpdate>
          </ColumnChild>
          {this.renderStage()}
          {(groupType === 'assignee' || groupType === 'dueDate') && (
            <LabelColumn>
              {this.checkNull(labels.length > 0, <Labels labels={labels} />)}
            </LabelColumn>
          )}
          {this.renderPriority()}
          <td>
            {this.checkNull(
              Boolean(closeDate || isComplete),
              <DueDateLabel closeDate={closeDate} isComplete={isComplete} />
            )}
          </td>
          {groupType !== 'assignee' && (
            <td>
              {this.checkNull(
                assignedUsers.length > 0,
                <PriceContainer>
                  <Left>
                    <Assignees users={assignedUsers} />
                  </Left>
                </PriceContainer>
              )}
            </td>
          )}
          {options.type === 'deal' && (
            <td>
              {this.checkNull(
                products && products.length > 0,
                <Details color="#63D2D6" items={products || []} />
              )}
            </td>
          )}
          <td>
            {this.checkNull(
              customers.length > 0,
              <Details color="#F7CE53" items={customers || []} />
            )}
          </td>
          <ColumnChild>
            {this.checkNull(
              companies.length > 0,
              <Details color="#EA475D" items={companies || []} />
            )}
          </ColumnChild>
        </tr>
        {this.renderForm()}
      </>
    );
  }
}

export default ListItemRow;
```
```yaml
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: nodejs16.x
      Handler: index.handler
      InlineCode: |
        exports.handler = async (event) => {
          console.log(event);
        };
  MyQueue:
    Type: AWS::SQS::Queue
    Connectors:
      MyConnector:
        Properties:
          Destination:
            Id: MyFunction
          Permissions:
            - Read
            - Write
  MyEventSourceMapping:
    DependsOn: MyQueueMyConnector
    Type: AWS::Lambda::EventSourceMapping
    Properties:
      FunctionName: !Ref MyFunction
      EventSourceArn: !GetAtt MyQueue.Arn
```
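In this AWS SAM template, the embedded `Connectors` block expands to a generated resource (logical ID `MyQueueMyConnector`: source ID plus connector ID) carrying the permissions that let the function read from and write to the queue; the `DependsOn` on the event source mapping ensures those permissions exist before polling starts. A sketch of the handler on the receiving side (the `Records` shape is the standard SQS-to-Lambda payload; the interface names here are illustrative, not part of the template):

```typescript
// Hypothetical typed version of the inline handler above, once the event
// source mapping starts delivering SQS messages in batches.
interface SqsRecord {
  messageId: string;
  body: string;
}

interface SqsEvent {
  Records: SqsRecord[];
}

export const handler = async (event: SqsEvent): Promise<string[]> => {
  // A real handler would process each message; here we just collect bodies.
  return event.Records.map(record => record.body);
};
```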
Dimitra Pavlou (born 21 April 2004) is a Greek tennis player. Pavlou has a career-high singles ranking of world No. 448, achieved on 9 October 2023, and a career-high WTA doubles ranking of No. 537, reached on the same date. So far, she has won five doubles titles on the ITF Circuit. Pavlou has also represented Greece in the Fed Cup, where she has a win–loss record of 0–2. ITF Circuit finals Singles: 2 titles, 1 runner-up Doubles: 7 (5 titles, 2 runners-up) References External links 2004 births Living people Greek female tennis players Sportspeople from Athens
Geoff Wragg (9 January 1930 – 15 September 2017) was a Thoroughbred horse trainer who trained champion horses such as Teenoso and Pentire. He was the son of former jockey and trainer Harry Wragg, from whom he took over the licence at Abington Place, Newmarket, in 1983 upon his father's retirement. Wragg retired in 2008 after 25 years of training and sold Abington Place to Sheikh Mohammed bin Khalifa Al Maktoum the following spring. He relocated to Yorkshire, the birthplace of his late father. He died in 2017. Racing family Wragg's father, Harry, was an extremely successful jockey and trainer, and the pair were renowned as the first to trial electronic timing equipment on the gallops and to weigh their horses. Harry's riding career was studded with success, including wins in all five domestic Classics – a feat he almost repeated as a trainer, with only The Oaks eluding him (he trained the 1974 runner-up Furioso, who, ironically, became the dam of Teenoso). Harry retired in 1982, leaving Geoff to train Teenoso to Classic glory at Epsom the following June. Harry's brothers were the jockeys Arthur Jr and Sam. Geoff had two siblings: his brother Peter was a successful bloodstock agent until his death in February 2004, and his sister Susan was married to top jockey Manny Mercer until his untimely death in September 1959. Geoff's retirement in 2008 brought to an end a long and hugely successful association between the Wragg name and horse racing. Classic success Wragg enjoyed Classic success in his very first season as a trainer when Teenoso won The Derby under Lester Piggott in 1983. The closest Wragg would come to replicating that win came some 23 years later, when the unconsidered 66/1 chance Dragon Dancer came within a short head of causing one of the biggest upsets in the race's history, narrowly losing a four-way battle to the line to Sir Percy. 
Rather ironically, Wragg had trained the temperamental dam of the winner, and both he and his father also trained several of the extended family, the most notable member being Teenoso. His 2001 contender, Asian Heights, well fancied after his last-to-first win in the Predominate Stakes at Goodwood, was cruelly robbed of his chance of running in the Classic after splitting a pastern just over a week before the big race. He recovered to win at Group 3/Listed level, but injuries continued to blight him and his career somewhat fizzled out. Away from the Derby, Wragg failed to win another Classic in the UK, though his talented filly Marling landed the 1992 Irish 1,000 Guineas at The Curragh. Red Glow was made favourite for the 1988 Epsom Derby, but the colt was a notoriously tricky hold-up ride and found plenty of trouble in running before finishing well to take fourth behind Kahyasi. He never scaled the heights his impressive win in the Dante Stakes the previous month had promised. Successes Other notable horses trained by Wragg include Arcadian Heights, Most Welcome, Owington, First Island, First Trump, Pentire, Island House, Cassandra Go, Asian Heights and 2006 Derby runner-up Dragon Dancer. Wragg was noted for targeting meetings like Chester's May Meeting and Glorious Goodwood with a great deal of success, most notably in handicaps with unexposed, improving three-year-olds. He also had a great knack for getting the best out of the fillies he trained, most notably the top-class Marling, Coronation Stakes winners Balisada and Rebecca Sharp, and the smart Danceabout. 
Pentire Arguably the best horse Wragg trained – apart from Teenoso, who won the Epsom Derby in 1983 and returned from a stress fracture to win the Grand Prix de Saint-Cloud and beat a star-studded field in the King George VI and Queen Elizabeth Stakes in 1984 – was the top-class middle-distance colt Pentire, who on the basis of his relatively ordinary two-year-old form was not considered for the 1995 Epsom Derby. However, the colt thrived as a three-year-old, winning three Derby trials at Sandown, Chester and Goodwood, clearly suited by the extra winter's maturity and the greater test of stamina in his second season. Pentire subsequently finished half a length behind Lammtarra, winner of The Derby the previous month, in the King George VI and Queen Elizabeth Stakes at Ascot in 1995, prompting further speculation that he would have challenged Lammtarra at Epsom had he run, particularly as jockey Michael Hills seemed to go too soon at Ascot on a mount who, as was widely recognised, possessed the greater turn of foot of the two colts. Wragg also ran Pentire in the King Edward VII Stakes at Royal Ascot, which he won comfortably from future Ascot Gold Cup winner Classic Cliche, and the horse was kept in training as a four-year-old, a decision justified when Pentire won the King George VI and Queen Elizabeth Stakes in 1996. He was subsequently sold to stand as a stallion in Japan, and enjoyed a good amount of success when later standing in New Zealand. He died in November 2017. Owners Among his main band of owners were Anthony Oppenheimer, Far East businessman John Pearce, and Mollers Racing, formed after the deaths of brothers Eric and Ralph 'Budgie' Moller, who left behind a trust fund to keep their famous chocolate and gold silks in the game. 
Mollers Racing's horses were mainly purchased by bloodstock agent John Ferguson, following the sale of its breeding establishment, White Lodge Stud, to Sheikh Mohammed. Notable purchases included First Island, Pentire and Swallow Flight. Mollers Racing Wragg's patient approach was richly rewarded with both Island House and Swallow Flight, neither horse showing more than useful form until their four-year-old campaigns. Swallow Flight ended his three-year-old season rated 104, having progressed through the handicap ranks and into Listed company, but the son of Bluebird excelled in his third season as a four-year-old, winning Listed events at Windsor and Goodwood, with a third-place finish in the Group 2 Queen Anne Stakes sandwiched between those two successes. He repeated his Listed success at Windsor the following season and finally made the breakthrough at pattern level when winning the Group 2 Attheraces Mile at Sandown in April 2002. His last appearance on a racecourse came in July of that year, when he turned in a lacklustre performance to come home last of four in the Scottish Classic at Ayr. He went on to stand as a stallion, enjoying very minor success. Island House's progress was rather more gradual: still a maiden in the autumn of his three-year-old campaign, he broke his duck in a classified event at Pontefract in September 1999 and followed up in a handicap at Ayr a month later. His four-year-old campaign saw continued progress, a comeback success in a conditions event at Newmarket being followed by back-to-back victories in Listed events at Goodwood and Kempton. He went on to record a further five wins at that level and, in April 2001, landed his one and only win at pattern level in the Group 3 Gordon Richards Stakes at Sandown. 
Following that victory, he was infamously denied another when jockey Darryll Holland eased up prematurely with the race in the bag in the Huxley Stakes at Chester, favourite Adilabad collaring the son of Grand Lodge on the line and Holland earning a 14-day suspension. He covered a small number of mares upon retirement. Another standout performer was Autumn Glory, a son of Charnwood Forest who again did not show his best until he was four. A rare debut winner for the stable at Leicester in May 2003, he was seen just twice more that season, with limited success. However, he burst onto the scene as a four-year-old with impressive wins in the Spring Mile at Doncaster and the Hambleton Rated Stakes at York in the spring of 2004, and he went on to establish himself as a high-class performer thereafter – especially when conditions were on the softer side – scoring three times at Group 3 level. His career was cut short by injury. Ivy Creek was a son of Gulch who won the first two races of his career, and he was desperately unlucky not to maintain his unbeaten record in the 2006 Dee Stakes at Chester, short of room at a vital stage and only just failing to reel in Art Deco by a neck. He proved disappointing when favourite for the Hampton Court Stakes at Royal Ascot the following month, but fulfilled that early potential the following summer with a pair of victories at Listed level at Goodwood and Pontefract. He placed at Group 3 level soon after, but had his career cut tragically short when he broke a leg in the Listed Buckhound Stakes at Ascot in May of the following year. The silks remained in use until 2013 despite Wragg's retirement, with near-neighbour and fellow trainer Chris Wall subsequently housing a small number of Moller horses, including the reasonably useful middle-distance and staying handicapper Snow Hill. 
With the remaining horses sold in the autumn of that year, and the funds needed to remain competitive deemed unavailable, it is highly unlikely the famous chocolate and gold silks will ever be seen on a racecourse again. Last winner Wragg's last winner was Convallaria on 19 November 2008 at Kempton, the Cape Cross filly winning a low-grade 0–55 handicap for one of Wragg's original owners, Daphne Lilley. In fact, she was his penultimate runner, with that honour, perhaps fittingly, falling to one of his stalwarts of latter years, the smart all-weather performer Grand Passion, though he could only manage a ninth-place finish in the Listed Churchill Stakes at Lingfield. Grand Passion went on to be trained by Chris Wall but never recaptured his top form and was retired in October 2009 after a spate of low-key efforts. Death Geoff Wragg died at Newmarket on 15 September 2017 at the age of 87. Major wins Great Britain Ascot Gold Cup – (1) – Arcadian Heights (1994) Cheveley Park Stakes – (1) – Marling (1991) Child Stakes – (1) – Inchmurrin (1988) Cork and Orrery Stakes – (1) – Owington (1994) Coronation Stakes – (3) – Marling (1992), Rebecca Sharp (1997), Balisada (1999) Derby – (1) – Teenoso (1983) July Cup – (1) – Owington (1994) King George VI and Queen Elizabeth Stakes – (2) – Teenoso (1984), Pentire (1996) King's Stand Stakes – (1) – Cassandra Go (2001) Lockinge Stakes – (2) – Most Welcome (1989), First Island (1997) Middle Park Stakes – (1) – First Trump (1993) Nassau Stakes – (1) – Ela Romara (1988) Prince of Wales's Stakes – (1) – First Island (1996) Queen Anne Stakes – (1) – Nicolotte (1995) Sun Chariot Stakes – (2) – Braiswick (1989), Danceabout (2000) Sussex Stakes – (2) – Marling (1992), First Island (1996) Canada E. P. 
Taylor Stakes – (1) – Braiswick (1989) France Grand Prix de Saint-Cloud – (1) – Teenoso (1984) Prix d'Ispahan – (1) – Sasuru (1997) Hong Kong Hong Kong Cup – (1) – First Island (1996) Ireland Irish 1,000 Guineas – (1) – Marling (1992) Irish Champion Stakes – (1) – Pentire (1995) Italy Premio Vittorio di Capua – (1) – Nicolotte (1995) References NTRA.com McGrath, J. A., "Geoff Wragg retires with just one regret – failing to land the Oaks", The Daily Telegraph, https://www.telegraph.co.uk/sport/horseracing/2633950/Geoff-Wragg-retires-with-just-one-regret-failing-to-land-the-Oaks-Horse-Racing.html 1930 births 2017 deaths British racehorse trainers People from Newmarket, Suffolk
```css
/* Hugo Blox color theme: TEAL */
:root {
  --color-primary-50: 240 253 250;
  --color-primary-100: 204 251 241;
  --color-primary-200: 153 246 228;
  --color-primary-300: 94 234 212;
  --color-primary-400: 45 212 191;
  --color-primary-500: 20 184 166;
  --color-primary-600: 13 148 136;
  --color-primary-700: 15 118 110;
  --color-primary-800: 17 94 89;
  --color-primary-900: 19 78 74;
  --color-primary-950: 4 47 46;
}
```
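The values in this theme are bare space-separated RGB channels rather than complete colors – the Tailwind-style convention that lets consumers apply an alpha at the point of use, e.g. `rgb(var(--color-primary-500) / 0.5)` (an assumption about how the theme is consumed, inferred from the triplet format). A small sketch of the same composition in TypeScript (the `rgbFromTriplet` helper name is illustrative):

```typescript
// Compose a CSS color string from a "R G B" channel triplet, optionally with
// an alpha channel - mirroring rgb(var(--token) / alpha) in a stylesheet.
function rgbFromTriplet(triplet: string, alpha?: number): string {
  const [r, g, b] = triplet.trim().split(/\s+/).map(Number);
  return alpha === undefined
    ? `rgb(${r} ${g} ${b})`
    : `rgb(${r} ${g} ${b} / ${alpha})`;
}

// e.g. for the teal --color-primary-500 triplet above:
// rgbFromTriplet('20 184 166')      -> "rgb(20 184 166)"
// rgbFromTriplet('20 184 166', 0.5) -> "rgb(20 184 166 / 0.5)"
```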
John Clark Murray (19 March 1836 – 20 November 1917) was a Scottish philosopher and professor. He held the Chair of Mental and Moral Philosophy at Queen's University from 1862 to 1872, and at McGill University from 1872 until 1903. During his academic career, Murray became the first professor at Queen's to offer courses to women; however, his advocacy for equality caused unrest among the male professors. He was married to Margaret Polson Murray, who founded the Imperial Order Daughters of the Empire. Early life Murray was born on 19 March 1836 in Scotland. He attended Paisley Grammar School in Renfrewshire, Scotland, and was educated at the University of Glasgow and the University of Edinburgh. Career After further study at Heidelberg University and the University of Göttingen, Murray was appointed professor of philosophy and Chair of Mental and Moral Philosophy at Queen's College, Kingston, in Canada. In 1869, he became the first professor at Queen's to offer courses to women, nearly a decade before the University of Toronto followed suit. Murray stayed at Queen's until 1872, when he accepted a position at McGill University as its Frothingham Professor of Mental and Moral Philosophy. Succeeding the retiring William Turnbull Leach, Murray was the only philosophy professor at the university until 1886. In recognition of his academic achievements, Murray received an honorary LL.D. from the University of Glasgow. At McGill, Murray was not deterred from continuing to advocate for women to attend university, despite pushback from fellow professors. He also lectured at the Montreal Ladies' Educational Association, the Kingston Ladies Educational Association, the Glenmore Summer School of Philosophy, the Cooper Union and the People's Institute in New York City, and the Presbyterian College of Montreal. His continued advocacy caused problems between him and McGill Principal John William Dawson, which forced Murray to retire from teaching in 1903. 
Their confrontations came to a head at a women's graduation ceremony, where Murray spoke favourably of including women in men's spaces at McGill. Dawson described Murray's comments as "subversive of the morals and discipline of the university". Murray was also one of the original members of the Royal Society of Canada. Personal life Murray and his wife Margaret Polson Murray had four daughters and one son together. Selected publications A Handbook of Psychology (1885) The industrial kingdom of God (written in 1887, published in 1982) Introduction to Psychology (1904) Further reading A Victorian Frame of Mind: The Thought of John Clark Murray by Charles Nicholas Terpstra (1983) References 1836 births 1917 deaths Fellows of the Royal Society of Canada Academic staff of Queen's University at Kingston Academic staff of McGill University Canadian philosophers Scottish philosophers
The Sommerfeld model can refer to: Bohr–Sommerfeld model Drude–Sommerfeld model
The stone loach (Barbatula barbatula) is a European species of fresh water ray-finned fish in the family Nemacheilidae. It is one of nineteen species in the genus Barbatula. Stone loaches live amongst the gravel and stones of fast flowing water where they can search for food. The most distinctive feature of this small fish is the presence of barbels around the bottom jaw, which they use to detect their invertebrate prey. The body is a mixture of brown, green and yellow. Description The stone loach is a small, slender bottom-dwelling fish that can grow to a length of , but typically is around . Its eyes are situated high on its head and it has three pairs of short barbels on its lower jaw below its mouth. It has a rounded body that is not much laterally flattened and is a little less deep in the body than the spined loach (Cobitis taenia) and lacks that fish's spines beneath the eye. It has rounded dorsal and caudal fins with their tips slightly notched, but the spined loach has even more rounded fins. The general colour of this fish is yellowish-brown with blotches and vertical bands of darker colour. An indistinct dark line runs from the snout to the eye. The fins are brownish with faint dark banding. Distribution and habitat The stone loach is a common species and is found over most of Europe in suitable clear rivers and streams with gravel and sandy bottoms. It is present in upland areas, also chalk streams, lakes and reservoirs as long as the water is well-oxygenated. These fish sometimes venture into estuaries but not into brackish water. They live on the bottom, often partly buried, and they are particularly active at night when they rootle among the sand and gravel for the small invertebrates on which they feed. 
It is found in the Baltic states, Eastern Europe, Austria, Belgium, Bulgaria, the Czech Republic, Denmark, Finland, France, Germany, Hungary, Ireland, Italy, Liechtenstein, Luxembourg, Moldova, the Netherlands, Poland, Romania, Serbia and Montenegro, Slovakia, Slovenia, Spain, Sweden, Switzerland, and the United Kingdom. It has been extirpated from Greece. Biology The larvae of stone loaches are benthic; they and small juveniles prefer sandy substrates with a slower current, moving on to gravel bottoms and faster currents as they grow. As adults they prey on relatively large benthic invertebrates such as gammarids, chironomids and other insect larvae. The fish normally feeds at night, using the barbels around its mouth to detect its prey. Stone loaches are tolerant of moderate organic pollution and stream canalization, but they are highly sensitive to heavy metal pollution, chemical pollution and low oxygen levels, which means that the presence of stone loaches in a river is an indicator of good water quality. They are short-lived fish, normally living to age 3–4 years, with 5 years being exceptional. Stone loaches breed over gravel or sand or among aquatic vegetation. In streams with low productivity spawning may be annual, but where the water has higher productivity there may be multiple spawning events within a season. The females release eggs in open water, often close to the surface. The eggs drift and adhere to different substrates and are often covered by sand or detritus. A female stone loach may spawn each day for short periods. In Great Britain spawning lasts from April to August and the females may lay as many as 10,000 eggs. References External links Barbatula Freshwater fish of Europe Fish of Europe Fish described in 1758 Taxa named by Carl Linnaeus Taxonomy articles created by Polbot
```go
/*
path_to_url

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package openstack

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	cinder "github.com/gophercloud/gophercloud/openstack/blockstorage/v3/volumes"
	az "github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/availabilityzones"
	"github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/keypairs"
	"github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/servergroups"
	"github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/volumeattach"
	"github.com/gophercloud/gophercloud/openstack/compute/v2/flavors"
	"github.com/gophercloud/gophercloud/openstack/compute/v2/servers"
	"github.com/gophercloud/gophercloud/openstack/dns/v2/recordsets"
	"github.com/gophercloud/gophercloud/openstack/dns/v2/zones"
	"github.com/gophercloud/gophercloud/openstack/imageservice/v2/images"
	"github.com/gophercloud/gophercloud/openstack/loadbalancer/v2/listeners"
	"github.com/gophercloud/gophercloud/openstack/loadbalancer/v2/loadbalancers"
	"github.com/gophercloud/gophercloud/openstack/loadbalancer/v2/monitors"
	v2pools "github.com/gophercloud/gophercloud/openstack/loadbalancer/v2/pools"
	l3floatingip "github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/layer3/floatingips"
	"github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/layer3/routers"
	sg "github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/security/groups"
	sgr "github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/security/rules"
	"github.com/gophercloud/gophercloud/openstack/networking/v2/networks"
	"github.com/gophercloud/gophercloud/openstack/networking/v2/ports"
	"github.com/gophercloud/gophercloud/openstack/networking/v2/subnets"
	v1 "k8s.io/api/core/v1"

	"k8s.io/kops/cloudmock/openstack/mockblockstorage"
	"k8s.io/kops/cloudmock/openstack/mockcompute"
	"k8s.io/kops/cloudmock/openstack/mockdns"
	"k8s.io/kops/cloudmock/openstack/mockimage"
	"k8s.io/kops/cloudmock/openstack/mockloadbalancer"
	"k8s.io/kops/cloudmock/openstack/mocknetworking"
	"k8s.io/kops/dnsprovider/pkg/dnsprovider"
	dnsproviderdesignate "k8s.io/kops/dnsprovider/pkg/dnsprovider/providers/openstack/designate"
	"k8s.io/kops/pkg/apis/kops"
	"k8s.io/kops/pkg/cloudinstances"
	"k8s.io/kops/upup/pkg/fi"
)

// MockCloud is an in-memory implementation of the kops OpenStack cloud,
// backed by mock service clients for each OpenStack API.
type MockCloud struct {
	MockCinderClient  *mockblockstorage.MockClient
	MockNeutronClient *mocknetworking.MockClient
	MockNovaClient    *mockcompute.MockClient
	MockDNSClient     *mockdns.MockClient
	MockLBClient      *mockloadbalancer.MockClient
	MockImageClient   *mockimage.MockClient
	region            string
	tags              map[string]string
	useOctavia        bool
	zones             []string
	extNetworkName    *string
	extSubnetName     *string
	floatingSubnet    *string
}

func InstallMockOpenstackCloud(region string) *MockCloud {
	i := BuildMockOpenstackCloud(region)
	openstackCloudInstances[region] = i
	return i
}

func BuildMockOpenstackCloud(region string) *MockCloud {
	return &MockCloud{region: region}
}

var _ fi.Cloud = (*MockCloud)(nil)

// Service-client accessors: each tags its mock client with a
// service-specific User-Agent prefix.

func (c *MockCloud) ComputeClient() *gophercloud.ServiceClient {
	client := c.MockNovaClient.ServiceClient()
	client.UserAgent.Prepend("compute")
	return client
}

func (c *MockCloud) BlockStorageClient() *gophercloud.ServiceClient {
	client := c.MockCinderClient.ServiceClient()
	client.UserAgent.Prepend("blockstorage")
	return client
}

func (c *MockCloud) NetworkingClient() *gophercloud.ServiceClient {
	client := c.MockNeutronClient.ServiceClient()
	client.UserAgent.Prepend("networking")
	return client
}

func (c *MockCloud) LoadBalancerClient() *gophercloud.ServiceClient {
	client := c.MockLBClient.ServiceClient()
	client.UserAgent.Prepend("loadbalancer")
	return client
}

func (c *MockCloud) DNSClient() *gophercloud.ServiceClient {
	client := c.MockDNSClient.ServiceClient()
	client.UserAgent.Prepend("dns")
	return client
}

func (c *MockCloud) ImageClient() *gophercloud.ServiceClient {
	client := c.MockImageClient.ServiceClient()
	client.UserAgent.Prepend("image")
	return client
}

func (c *MockCloud) DeleteGroup(g *cloudinstances.CloudInstanceGroup) error { return deleteGroup(c, g) }

func (c *MockCloud) DeleteInstance(i *cloudinstances.CloudInstance) error { return deleteInstance(c, i) }

func (c *MockCloud) DeregisterInstance(i *cloudinstances.CloudInstance) error {
	return deregisterInstance(c, i.ID)
}

func (c *MockCloud) DetachInstance(i *cloudinstances.CloudInstance) error { return detachInstance(c, i) }

func (c *MockCloud) GetCloudGroups(cluster *kops.Cluster, instancegroups []*kops.InstanceGroup, warnUnmatched bool, nodes []v1.Node) (map[string]*cloudinstances.CloudInstanceGroup, error) {
	return getCloudGroups(c, cluster, instancegroups, warnUnmatched, nodes)
}

func (c *MockCloud) ProviderID() kops.CloudProviderID { return kops.CloudProviderOpenstack }

func (c *MockCloud) DNS() (dnsprovider.Interface, error) {
	if c.MockDNSClient == nil {
		return nil, fmt.Errorf("MockDNS not set")
	}
	return dnsproviderdesignate.New(c.DNSClient()), nil
}

func (c *MockCloud) FindVPCInfo(id string) (*fi.VPCInfo, error) { return findVPCInfo(c, id, c.zones) }

func (c *MockCloud) Region() string { return c.region }

func (c *MockCloud) AppendTag(resource string, id string, tag string) error {
	return appendTag(c, resource, id, tag)
}

func (c *MockCloud) AssociateToPool(server *servers.Server, poolID string, opts v2pools.CreateMemberOpts) (association *v2pools.Member, err error) {
	return associateToPool(c, server, poolID, opts)
}

func (c *MockCloud) AttachVolume(serverID string, opts volumeattach.CreateOpts) (attachment *volumeattach.VolumeAttachment, err error) {
	return attachVolume(c, serverID, opts)
}

func (c *MockCloud) CreateInstance(opt servers.CreateOptsBuilder, portID string) (*servers.Server, error) {
	return createInstance(c, opt, portID)
}

func (c *MockCloud) CreateKeypair(opt keypairs.CreateOptsBuilder) (*keypairs.KeyPair, error) {
	return createKeypair(c, opt)
}

func (c *MockCloud) CreateL3FloatingIP(opts l3floatingip.CreateOpts) (fip *l3floatingip.FloatingIP, err error) {
	return createL3FloatingIP(c, opts)
}

func (c *MockCloud) CreateLB(opt loadbalancers.CreateOptsBuilder) (*loadbalancers.LoadBalancer, error) {
	return createLB(c, opt)
}

func (c *MockCloud) CreateListener(opts listeners.CreateOpts) (listener *listeners.Listener, err error) {
	return createListener(c, opts)
}

func (c *MockCloud) CreateNetwork(opt networks.CreateOptsBuilder) (*networks.Network, error) {
	return createNetwork(c, opt)
}

func (c *MockCloud) CreatePool(opts v2pools.CreateOpts) (pool *v2pools.Pool, err error) {
	return createPool(c, opts)
}

func (c *MockCloud) CreatePoolMonitor(opts monitors.CreateOpts) (*monitors.Monitor, error) {
	return createPoolMonitor(c, opts)
}

func (c *MockCloud) CreatePort(opt ports.CreateOptsBuilder) (*ports.Port, error) {
	return createPort(c, opt)
}

func (c *MockCloud) CreateRouter(opt routers.CreateOptsBuilder) (*routers.Router, error) {
	return createRouter(c, opt)
}

func (c *MockCloud) CreateRouterInterface(routerID string, opt routers.AddInterfaceOptsBuilder) (*routers.InterfaceInfo, error) {
	return createRouterInterface(c, routerID, opt)
}

func (c *MockCloud) CreateSecurityGroup(opt sg.CreateOptsBuilder) (*sg.SecGroup, error) {
	return createSecurityGroup(c, opt)
}

func (c *MockCloud) CreateSecurityGroupRule(opt sgr.CreateOptsBuilder) (*sgr.SecGroupRule, error) {
	return createSecurityGroupRule(c, opt)
}

func (c *MockCloud) CreateServerGroup(opt servergroups.CreateOptsBuilder) (*servergroups.ServerGroup, error) {
	return createServerGroup(c, opt)
}

func (c *MockCloud) CreateSubnet(opt subnets.CreateOptsBuilder) (*subnets.Subnet, error) {
	return createSubnet(c, opt)
}

func (c *MockCloud) CreateVolume(opt cinder.CreateOptsBuilder) (*cinder.Volume, error) {
	return createVolume(c, opt)
}

func (c *MockCloud) DefaultInstanceType(cluster *kops.Cluster, ig *kops.InstanceGroup) (string, error) {
	return defaultInstanceType(c, cluster, ig)
}

func (c *MockCloud) DeleteFloatingIP(id string) (err error) { return deleteFloatingIP(c, id) }

func (c *MockCloud) DeleteInstanceWithID(instanceID string) error {
	return deleteInstanceWithID(c, instanceID)
}

func (c *MockCloud) DeleteKeyPair(name string) error { return deleteKeyPair(c, name) }

func (c *MockCloud) DeleteL3FloatingIP(id string) (err error) { return deleteL3FloatingIP(c, id) }

func (c *MockCloud) DeleteLB(lbID string, opts loadbalancers.DeleteOpts) error {
	return deleteLB(c, lbID, opts)
}

func (c *MockCloud) DeleteListener(listenerID string) error { return deleteListener(c, listenerID) }

func (c *MockCloud) DeleteMonitor(monitorID string) error { return deleteMonitor(c, monitorID) }

func (c *MockCloud) DeleteNetwork(networkID string) error { return deleteNetwork(c, networkID) }

func (c *MockCloud) DeletePool(poolID string) error { return deletePool(c, poolID) }

func (c *MockCloud) DeletePort(portID string) error { return deletePort(c, portID) }

func (c *MockCloud) DeleteRouter(routerID string) error { return deleteRouter(c, routerID) }

func (c *MockCloud) DeleteSecurityGroup(sgID string) error { return deleteSecurityGroup(c, sgID) }

func (c *MockCloud) DeleteSecurityGroupRule(ruleID string) error {
	return deleteSecurityGroupRule(c, ruleID)
}

func (c *MockCloud) DeleteRouterInterface(routerID string, opt routers.RemoveInterfaceOptsBuilder) error {
	return deleteRouterInterface(c, routerID, opt)
}

func (c *MockCloud) DeleteServerGroup(groupID string) error { return deleteServerGroup(c, groupID) }

func (c *MockCloud) DeleteSubnet(subnetID string) error { return deleteSubnet(c, subnetID) }

func (c *MockCloud) DeleteTag(resource string, id string, tag string) error {
	return deleteTag(c, resource, id, tag)
}

func (c *MockCloud) DeleteVolume(volumeID string) error { return deleteVolume(c, volumeID) }

func (c *MockCloud) FindClusterStatus(cluster *kops.Cluster) (*kops.ClusterStatus, error) {
	return findClusterStatus(c, cluster)
}

func (c *MockCloud) FindNetworkBySubnetID(subnetID string) (*networks.Network, error) {
	return findNetworkBySubnetID(c, subnetID)
}

func (c *MockCloud) GetApiIngressStatus(cluster *kops.Cluster) ([]fi.ApiIngressStatus, error) {
	return getApiIngressStatus(c, cluster)
}

func (c *MockCloud) GetCloudTags() map[string]string { return c.tags }

func (c *MockCloud) GetExternalNetwork() (net *networks.Network, err error) {
	return getExternalNetwork(c, *c.extNetworkName)
}

func (c *MockCloud) GetExternalSubnet() (subnet *subnets.Subnet, err error) {
	return getExternalSubnet(c, c.extSubnetName)
}

func (c *MockCloud) GetL3FloatingIP(id string) (fip *l3floatingip.FloatingIP, err error) {
	return getL3FloatingIP(c, id)
}

func (c *MockCloud) GetImage(name string) (*images.Image, error) { return getImage(c, name) }

func (c *MockCloud) GetFlavor(name string) (*flavors.Flavor, error) { return getFlavor(c, name) }

func (c *MockCloud) GetInstance(id string) (*servers.Server, error) { return getInstance(c, id) }

func (c *MockCloud) GetKeypair(name string) (*keypairs.KeyPair, error) { return getKeypair(c, name) }

func (c *MockCloud) GetLB(loadbalancerID string) (lb *loadbalancers.LoadBalancer, err error) {
	return getLB(c, loadbalancerID)
}

func (c *MockCloud) GetNetwork(id string) (*networks.Network, error) { return getNetwork(c, id) }

func (c *MockCloud) GetLBFloatingSubnet() (subnet *subnets.Subnet, err error) {
	return getLBFloatingSubnet(c, c.floatingSubnet)
}

func (c *MockCloud) GetPool(poolID string) (pool *v2pools.Pool, err error) { return getPool(c, poolID) }

func (c *MockCloud) GetPoolMember(poolID string, memberID string) (member *v2pools.Member, err error) {
	return getPoolMember(c, poolID, memberID)
}

func (c *MockCloud) GetPort(id string) (*ports.Port, error) { return getPort(c, id) }

func (c *MockCloud) UpdatePort(id string, opt ports.UpdateOptsBuilder) (*ports.Port, error) {
	return updatePort(c, id, opt)
}

func (c *MockCloud) GetStorageAZFromCompute(computeAZ string) (*az.AvailabilityZone, error) {
	return getStorageAZFromCompute(c, computeAZ)
}

func (c *MockCloud) GetSubnet(subnetID string) (*subnets.Subnet, error) { return getSubnet(c, subnetID) }

func (c *MockCloud) ListAvailabilityZones(serviceClient *gophercloud.ServiceClient) (azList []az.AvailabilityZone, err error) {
	return listAvailabilityZones(c, serviceClient)
}

func (c *MockCloud) ListDNSZones(opt zones.ListOptsBuilder) ([]zones.Zone, error) {
	return listDNSZones(c, opt)
}

func (c *MockCloud) ListDNSRecordsets(zoneID string, opt recordsets.ListOptsBuilder) ([]recordsets.RecordSet, error) {
	return listDNSRecordsets(c, zoneID, opt)
}

func (c *MockCloud) DeleteDNSRecordset(zoneID string, rrsetID string) error {
	return deleteDNSRecordset(c, zoneID, rrsetID)
}

func (c *MockCloud) ListInstances(opt servers.ListOptsBuilder) ([]servers.Server, error) {
	return listInstances(c, opt)
}

func (c *MockCloud) ListKeypairs() ([]keypairs.KeyPair, error) { return listKeypairs(c) }

func (c *MockCloud) ListL3FloatingIPs(opts l3floatingip.ListOpts) (fips []l3floatingip.FloatingIP, err error) {
	return listL3FloatingIPs(c, opts)
}

func (c *MockCloud) UpdateMemberInPool(poolID string, memberID string, opts v2pools.UpdateMemberOptsBuilder) (*v2pools.Member, error) {
	return updateMemberInPool(c, poolID, memberID, opts)
}

func (c *MockCloud) GetLBStats(loadbalancerID string) (*loadbalancers.Stats, error) {
	return getLBStats(c, loadbalancerID)
}

func (c *MockCloud) ListPoolMembers(poolID string, opts v2pools.ListMembersOpts) ([]v2pools.Member, error) {
	return listPoolMembers(c, poolID, opts)
}

func (c *MockCloud) ListLBs(opt loadbalancers.ListOptsBuilder) (lbs []loadbalancers.LoadBalancer, err error) {
	return listLBs(c, opt)
}

func (c *MockCloud) ListListeners(opts listeners.ListOpts) (listenerList []listeners.Listener, err error) {
	return listListeners(c, opts)
}

func (c *MockCloud) ListMonitors(opts monitors.ListOpts) (monitorList []monitors.Monitor, err error) {
	return listMonitors(c, opts)
}

func (c *MockCloud) ListNetworks(opt networks.ListOptsBuilder) ([]networks.Network, error) {
	return listNetworks(c, opt)
}

func (c *MockCloud) ListPools(opts v2pools.ListOpts) (poolList []v2pools.Pool, err error) {
	return listPools(c, opts)
}

func (c *MockCloud) ListPorts(opt ports.ListOptsBuilder) ([]ports.Port, error) { return listPorts(c, opt) }

func (c *MockCloud) ListRouters(opt routers.ListOpts) ([]routers.Router, error) {
	return listRouters(c, opt)
}

func (c *MockCloud) ListSecurityGroups(opt sg.ListOpts) ([]sg.SecGroup, error) {
	return listSecurityGroups(c, opt)
}

func (c *MockCloud) ListSecurityGroupRules(opt sgr.ListOpts) ([]sgr.SecGroupRule, error) {
	return listSecurityGroupRules(c, opt)
}

func (c *MockCloud) ListServerFloatingIPs(instanceID string) ([]*string, error) {
	return listServerFloatingIPs(c, instanceID, true)
}

func (c *MockCloud) ListServerGroups(opts servergroups.ListOptsBuilder) ([]servergroups.ServerGroup, error) {
	return listServerGroups(c, opts)
}

func (c *MockCloud) ListSubnets(opt subnets.ListOptsBuilder) ([]subnets.Subnet, error) {
	return listSubnets(c, opt)
}

func (c *MockCloud) ListVolumes(opt cinder.ListOptsBuilder) ([]cinder.Volume, error) {
	return listVolumes(c, opt)
}

func (c *MockCloud) SetExternalNetwork(name *string) { c.extNetworkName = name }

func (c *MockCloud) SetExternalSubnet(name *string) { c.extSubnetName = name }

func (c *MockCloud) SetLBFloatingSubnet(name *string) { c.floatingSubnet = name }

func (c *MockCloud) SetVolumeTags(id string, tags map[string]string) error {
	return setVolumeTags(c, id, tags)
}

func (c *MockCloud) UseOctavia() bool { return c.useOctavia }

func (c *MockCloud) UseZones(zones []string) { c.zones = zones }

func (c *MockCloud) UseLoadBalancerVIPACL() (bool, error) { return true, nil }
```
Długie is a village in the administrative district of Gmina Wólka, within Lublin County, Lublin Voivodeship, in eastern Poland. It lies approximately east of Jakubowice Murowane (the gmina seat) and north-east of the regional capital Lublin. References Villages in Lublin County
```javascript
import React from 'react'
import Head from 'next/head'

const Page = () => {
  return (
    <>
      <Head>
        <link href="path_to_url" rel="stylesheet" />
      </Head>
      <div>Hi!</div>
    </>
  )
}

export default Page
```
Michigan Technological University's Winter Carnival is an annual celebration that takes place every winter hosted by Michigan Technological University in Houghton, Michigan. It is a time to celebrate the large amounts of snowfall Michigan's Keweenaw Peninsula receives each winter. Winter Carnival is characterized by snow statues, outdoor games, and many student activities. February 2022 marked the 100th anniversary of Winter Carnival. In 1958, Blue Key began the tradition of selecting a yearly theme for the events, including motion pictures, historical events, comics, music and more. Snow statue designs are inspired by the theme. Eventually, a logo contest was incorporated with cash prizes. References Carnival in the United States Winter festivals in the United States Festivals in Michigan Recurring events established in 1922 Outdoor sculptures in Michigan Buildings and structures made of snow or ice Tourist attractions in Houghton County, Michigan Michigan Technological University Michigan Technological University Winter Carnival Michigan-related lists
Dehkaran (, also Romanized as Dehkarān and Deh Karān; also known as Dehgarān) is a village in Dowlatabad Rural District, in the Central District of Jiroft County, Kerman Province, Iran. At the 2006 census, its population was 23, in 5 families. References Populated places in Jiroft County
Rahat Faiq Jamali (; born 10 March 1965) is a Pakistani politician who was a Member of the Provincial Assembly of Balochistan from June 2013 to May 2018. Early life and education Jamali was born on 10 March 1965 in Usta Mohammad, Balochistan, Pakistan. She holds a Master of Arts in Urdu from the University of Balochistan. She is a sister of Jan Mohammad Jamali. Political career She was elected to the Provincial Assembly of Balochistan as a candidate of the Pakistan Muslim League (N) from constituency PB-26 Jaffarabad-II in the 2013 Pakistani general election, securing 12,521 votes. In September 2017, she was appointed Provincial Minister of Balochistan for Labour in the cabinet of Chief Minister Nawab Sanaullah Khan Zehri, where she remained until resigning in January 2018. References Living people Pakistan Muslim League (N) politicians 1965 births Balochistan MPAs 2013–2018 Women provincial ministers of Balochistan 21st-century Pakistani women politicians
Cerhonice is a municipality and village in Písek District in the South Bohemian Region of the Czech Republic. It has about 100 inhabitants. Cerhonice lies approximately north-west of Písek, north-west of České Budějovice, and south of Prague. Administrative parts The village of Obora u Cerhonic is an administrative part of Cerhonice. References Villages in Písek District
The Outies, formally known as the Out & Equal Workplace Awards, is an annual awards gala hosted by Out & Equal Workplace Advocates. The Outies honor individuals and organizations that are leaders in advancing equality for lesbian, gay, bisexual, and transgender (LGBT) employees in America's workplaces. Through these awards, Out & Equal provides the business and LGBT communities with examples of innovative approaches and proven successes to help create safe and equitable workplaces. The awards are presented annually at the Out & Equal Workplace Summit, a nationwide conference addressing LGBT issues in the workplace. Outie Awards are given in five different categories, with two recognizing individuals and three recognizing organizations. To win an Outie, recipients must have taken significant action to create more equitable workplaces for members of the LGBT community. Award categories The Workplace Excellence Award recognizes any employer that has an historic and ongoing commitment to pursuing and executing workplace equality for LGBTQ employees in their own workplace. This employer has a history of continually raising the bar of workplace equality for others to follow. The Trailblazer Award recognizes an LGBT person who has made a significant contribution to advancing workplace equality. This individual's activities will have made a marked improvement in their own workplace and/or have contributed to equality nationally. The Champion Award recognizes any LGBTQ person or ally who has made a significant contribution to advancing workplace equality. The Champion Award winner will have shown a unique commitment to LGBTQ workplace rights and will have used their talents to further that cause, even if at some risk. The LGBT ERG of the Year Award recognizes a particular employee resource group (ERG), sometimes referred to as a business group or network, that has a proven track record of success in advocating for LGBTQ equal rights in its own workplace.
The Regional Affiliate of the Year Award recognizes an Out & Equal regional affiliate that has demonstrated commitment to the Out & Equal mission through exceptional programming and sound organizational practices. The Significant Achievement Award recognizes any employer that has made significant strides in the past year in advancing a fair and equitable workplace for its LGBTQ employees, such as: announcing domestic partner health insurance, including gender identity diversity training, or initiating a unique general advertising campaign that includes LGBTQ people. Selection process All Outie nominations are read by the Out & Equal awards committee and judging panel, which is made up of a diverse cross-section of leaders in the movement for LGBT workplace equality. Awards Committee members and judges change each year and are expected to opt out of voting if their own affiliations may bias their votes. Nominees are evaluated on originality, duplicability of initiatives, leadership, results, and other criteria. Organizations are evaluated based on the aforementioned criteria, plus the degree to which they have incorporated the 20 Steps to an Out & Equal Workplace. The previous year's awardees are not considered for the same award for the following two years. Additionally, companies or organizations may only submit one nomination in the organization award categories and one nomination in the individual award categories, for a maximum possible total of two nominations from any one company or organization. Outie Award winners Workplace Excellence Award Recognizes any employer that has a longstanding commitment to pursuing and executing workplace equality for LGBT employees in their own workplace. This employer has a history of continually raising the bar of workplace equality for others to follow. 
2017 - Bank of America 2016 - Dell 2015 - The Walt Disney Company 2014 – Chevron 2013 – Dow Chemical Company 2012 – Google 2011 – Accenture 2010 – IBM 2009 – Sun Microsystems 2008 – PepsiCo 2007 - Wells Fargo 2006 – JPMorgan Chase 2005 - Citigroup 2004 - Kaiser Permanente and Pacific Gas & Electric Company (Tied) 2003 - NCR Corporation 2002 - American Airlines 2001 – IBM 2000 – Kodak Trailblazer Award Recognizes an LGBT person who has made a significant contribution to advancing workplace equality. This individual's activities will have made a marked improvement in their own workplace and/or have contributed to equality nationally. 2014 – Greer Puckett, Northrop Grumman 2012 – Lance Freedman, Lockheed Martin 2011 – Claudia Woody, IBM 2010 – Bill Hendrix, Dow Chemical Company 2009 - Richard Clark, Accenture 2008 - Chris Crespo, Ernst & Young 2007 - Dr. Judy Lively, Kaiser Permanente 2006 – Emily Jones, Kodak 2005 - Leslie (Les) Hohman, General Motors PLUS 2004 - Robert Burrell, Ford 2003 - Wesley Combs, Witeck-Combs Communications 2002 - Dr. Louise Young, Raytheon 2001 - Mary Ann Horton, Avaya 2000 - Tom Ammiano, Leslie Katz and Susan Leal, San Francisco Supervisors Champion Award The Champion Award recognizes any LGBTQ+ person or ally who has made a significant contribution to advancing workplace equality. The Champion Award winner will have shown a unique commitment to LGBTQ+ workplace rights and will have used their talents to further that cause, even if at some risk. 2018 - Jennifer J. Henderson, Capital One 2017 - Ramkrishna Sinha, Intel Corporation 2016 - Thomas Vilsack, USDA Secretary 2015 - Howard Ungerleider, Dow Chemical Company 2014 – Vijay Anand, Intuit 2013 – Cathy Bessant, Bank of America 2012 – Harry van Dorenmalen, IBM 2011 – Dr. Sophie Vandebroek, Xerox 2010 – Mark Bertolini, Aetna Healthcare 2009 - Randy Kammer, Blue Cross and Blue Shield of Florida 2008 - William C.
Thompson, Jr., City of New York 2007 - Ana Duarte McCarthy, Citi 2006 – Deborah Dagit, Merck 2005 - June R. Cohen, DuPont 2004 - Laura Brooks, Kodak 2003 - Judy Boyette, University of California 2002 - William Perez, CEO of SC Johnson & Son 2001 - Cathy Brill & Lisa Vitale, Kodak 2000 - Ethel Batten, Lucent LGBT Employee Resource Group of the Year Recognizes a particular employee resource group (ERG), sometimes referred to as a business group or network that has a proven track record of success in advocating for LGBT equal rights in its own workplace. 2018 - LGBT Alliance (Cracker Barrel) 2017 - IC Pride – (US Intelligence Community) 2016 - PRIDE Team Network – (Wells Fargo) 2015 - Open & Out (Johnson & Johnson) 2014 – Citi PRIDE Network 2013 – PRIDE (Lockheed Martin) & LGBTA Business Council (Target) 2012 – OutServe (United States Department of Defense) 2011 – LGBT Pride Resource Group, Bank of America 2010 – The Clorox Company 2009 - (Tie) General Motors' People Like Us & US Department of State and USAID's Gays and Lesbians in Foreign Affairs Agencies 2008 - Hewlett-Packard's HP PRIDE 2007 - Nike's GLBT & Friends Network 2006 – GLEAM, Microsoft 2005 - Chevron Lesbian & Gay Employee Association (CLGEA) 2004 - Lambda Network at Kodak 2003 – GLBC at SC Johnson 2002 - SEA Shell at Shell Oil Company 2001 - Pride at Walt Disney 2000 - League of AT&T Regional Affiliate of the Year Award Recognizes an Out & Equal regional affiliate that has demonstrated commitment to the Out & Equal mission through exceptional programming and sound organizational practices. 
2015 - San Francisco 2014 – Chicagoland 2013 – Seattle 2012 – New York Finger Lakes 2011 – Houston 2010 – Dallas–Fort Worth Significant Achievement 2011 – Google 2010 – Dow Chemical Company 2009 – Salt Lake City Corporation 2008 – Goldman Sachs 2007 - Ernst & Young 2006 – PricewaterhouseCoopers 2005 - IBM 2004 - Hewlett Packard 2003 - Chubb 2002 – JPMorgan Chase 2001 - Motorola 2000 - Ford Selisse Berry Leadership Award Recognizes an exceptional individual whose visionary leadership, tireless efforts, and remarkable accomplishments have been a critical contribution toward achieving LGBT workplace equality. In addition to leading change in the world of employment, this leader inspires countless individuals to champion workplace equality for all inclusive of sexual orientation, gender identity, expression, or characteristics. 2013 – Kevin Jones 2011 – Brian McNaught 2008 – Selisse Berry, Founding Executive Director of Out & Equal LGBT Workplace Equality Pioneer 2007 - Dr. Franklin E. Kameny References American awards LGBT-related awards Business and industry awards Awards established in 2000
```csharp
/****************************************************************************
 *
 * path_to_url
 * path_to_url
 * path_to_url
 ****************************************************************************/

using System.Collections.Generic;

namespace QFramework
{
    // Root of a code-generation tree: generating it generates each child in order.
    public class RootCode : ICodeScope
    {
        private List<ICode> mCodes = new List<ICode>();

        public List<ICode> Codes
        {
            get { return mCodes; }
            set { mCodes = value; }
        }

        public void Gen(ICodeWriter writer)
        {
            foreach (var code in Codes)
            {
                code.Gen(writer);
            }
        }
    }
}
```
```c
/* Global common subexpression elimination/Partial redundancy elimination
   and global constant/copy propagation for GNU compiler.
   Copyright (C) Free Software Foundation, Inc.

This file is part of GCC.

GCC is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 2, or (at your option) any later
version.

GCC is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
for more details.

You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING.  If not, write to the Free
Software Foundation, 59 Temple Place - Suite 330, Boston, MA
02111-1307, USA.  */

/* TODO
   - reordering of memory allocation and freeing to be more space efficient
   - do rough calc of how many regs are needed in each block, and a rough
     calc of how many regs are available in each class and use that to
     throttle back the code in cases where RTX_COST is minimal.
   - a store to the same address as a load does not kill the load if the
     source of the store is also the destination of the load.  Handling this
     allows more load motion, particularly out of loops.
   - ability to realloc sbitmap vectors would allow one initial computation
     of reg_set_in_block with only subsequent additions, rather than
     recomputing it for each pass
*/

/* References searched while implementing this.

   Compilers Principles, Techniques and Tools
   Aho, Sethi, Ullman
   Addison-Wesley, 1988

   Global Optimization by Suppression of Partial Redundancies
   E. Morel, C. Renvoise
   Communications of the ACM, Vol. 22, Num. 2, Feb. 1979

   A Portable Machine-Independent Global Optimizer - Design and Measurements
   Frederick Chow
   Stanford Ph.D. thesis, Dec. 1983

   A Fast Algorithm for Code Movement Optimization
   D.M. Dhamdhere
   SIGPLAN Notices, Vol. 23, Num. 10, Oct. 1988

   A Solution to a Problem with Morel and Renvoise's
   Global Optimization by Suppression of Partial Redundancies
   K-H Drechsler, M.P. Stadel
   ACM TOPLAS, Vol. 10, Num. 4, Oct. 1988

   Practical Adaptation of the Global Optimization
   Algorithm of Morel and Renvoise
   D.M. Dhamdhere
   ACM TOPLAS, Vol. 13, Num. 2, Apr. 1991

   Efficiently Computing Static Single Assignment Form and the Control
   Dependence Graph
   R. Cytron, J. Ferrante, B.K. Rosen, M.N. Wegman, and F.K. Zadeck
   ACM TOPLAS, Vol. 13, Num. 4, Oct. 1991

   Lazy Code Motion
   J. Knoop, O. Ruthing, B. Steffen
   ACM SIGPLAN Notices Vol. 27, Num. 7, Jul. 1992, '92 Conference on PLDI

   What's In a Region?  Or Computing Control Dependence Regions in
   Near-Linear Time for Reducible Flow Control
   Thomas Ball
   ACM Letters on Programming Languages and Systems,
   Vol. 2, Num. 1-4, Mar-Dec 1993

   An Efficient Representation for Sparse Sets
   Preston Briggs, Linda Torczon
   ACM Letters on Programming Languages and Systems,
   Vol. 2, Num. 1-4, Mar-Dec 1993

   A Variation of Knoop, Ruthing, and Steffen's Lazy Code Motion
   K-H Drechsler, M.P. Stadel
   ACM SIGPLAN Notices, Vol. 28, Num. 5, May 1993

   Partial Dead Code Elimination
   J. Knoop, O. Ruthing, B. Steffen
   ACM SIGPLAN Notices, Vol. 29, Num. 6, Jun. 1994

   Effective Partial Redundancy Elimination
   P. Briggs, K.D. Cooper
   ACM SIGPLAN Notices, Vol. 29, Num. 6, Jun. 1994

   The Program Structure Tree: Computing Control Regions in Linear Time
   R. Johnson, D. Pearson, K. Pingali
   ACM SIGPLAN Notices, Vol. 29, Num. 6, Jun. 1994

   Optimal Code Motion: Theory and Practice
   J. Knoop, O. Ruthing, B. Steffen
   ACM TOPLAS, Vol. 16, Num. 4, Jul. 1994

   The power of assignment motion
   J. Knoop, O. Ruthing, B. Steffen
   ACM SIGPLAN Notices Vol. 30, Num. 6, Jun. 1995, '95 Conference on PLDI

   Global code motion / global value numbering
   C. Click
   ACM SIGPLAN Notices Vol. 30, Num. 6, Jun. 1995, '95 Conference on PLDI

   Value Driven Redundancy Elimination
   L.T. Simpson
   Rice University Ph.D. thesis, Apr. 1996

   Value Numbering
   L.T. Simpson
   Massively Scalar Compiler Project, Rice University, Sep. 1996

   High Performance Compilers for Parallel Computing
   Michael Wolfe
   Addison-Wesley, 1996

   Advanced Compiler Design and Implementation
   Steven Muchnick
   Morgan Kaufmann, 1997

   Building an Optimizing Compiler
   Robert Morgan
   Digital Press, 1998

   People wishing to speed up the code here should read:

     Elimination Algorithms for Data Flow Analysis
     B.G. Ryder, M.C. Paull
     ACM Computing Surveys, Vol. 18, Num. 3, Sep. 1986

     How to Analyze Large Programs Efficiently and Informatively
     D.M. Dhamdhere, B.K. Rosen, F.K. Zadeck
     ACM SIGPLAN Notices Vol. 27, Num. 7, Jul. 1992, '92 Conference on PLDI

   People wishing to do something different can find various possibilities
   in the above papers and elsewhere.
*/

#include "config.h"
#include "system.h"
#include "toplev.h"
#include "rtl.h"
#include "tm_p.h"
#include "regs.h"
#include "hard-reg-set.h"
#include "flags.h"
#include "real.h"
#include "insn-config.h"
#include "recog.h"
#include "basic-block.h"
#include "output.h"
#include "function.h"
#include "expr.h"
#include "except.h"
#include "ggc.h"
#include "params.h"
#include "cselib.h"
#include "obstack.h"

/* Propagate flow information through back edges and thus enable PRE's
   moving loop invariant calculations out of loops.

   Originally this tended to create worse overall code, but several
   improvements during the development of PRE seem to have made following
   back edges generally a win.

   Note much of the loop invariant code motion done here would normally
   be done by loop.c, which has more heuristics for when to move invariants
   out of loops.  At some point we might need to move some of those
   heuristics into gcse.c.  */

/* We support GCSE via Partial Redundancy Elimination.  PRE optimizations
   are a superset of those done by GCSE.

   We perform the following steps:

   1) Compute basic block information.

   2) Compute table of places where registers are set.

   3) Perform copy/constant propagation.

   4) Perform global cse.

   5) Perform another pass of copy/constant propagation.
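   As a schematic illustration of partial redundancy (an assumed example,
   not taken from the text above): in

	if (cond)
	  x = a + b;
	...
	y = a + b;

   the evaluation of a + b at y is partially redundant; it is available
   on the path through the `if' arm but not on the path around it.  PRE
   inserts a computation of a + b on the path where it is missing, copies
   each result to a new pseudo-reg, and replaces the evaluation at y with
   that pseudo, making the redundancy full and hence deletable.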
Two passes of copy/constant propagation are done because the first one enables more GCSE and the second one helps to clean up the copies that GCSE creates. This is needed more for PRE than for Classic because Classic GCSE will try to use an existing register containing the common subexpression rather than create a new one. This is harder to do for PRE because of the code motion (which Classic GCSE doesn't do). Expressions we are interested in GCSE-ing are of the form (set (pseudo-reg) (expression)). Function want_to_gcse_p says what these are. PRE handles moving invariant expressions out of loops (by treating them as partially redundant). Eventually it would be nice to replace cse.c/gcse.c with SSA (static single assignment) based GVN (global value numbering). L. T. Simpson's paper (Rice University) on value numbering is a useful reference for this. ********************** We used to support multiple passes but there are diminishing returns in doing so. The first pass usually makes 90% of the changes that are doable. A second pass can make a few more changes made possible by the first pass. Experiments show any further passes don't make enough changes to justify the expense. A study of spec92 using an unlimited number of passes: [1 pass] = 1208 substitutions, [2] = 577, [3] = 202, [4] = 192, [5] = 83, [6] = 34, [7] = 17, [8] = 9, [9] = 4, [10] = 4, [11] = 2, [12] = 2, [13] = 1, [15] = 1, [16] = 2, [41] = 1 It was found doing copy propagation between each pass enables further substitutions. PRE is quite expensive in complicated functions because the DFA can take awhile to converge. Hence we only perform one pass. The parameter max-gcse-passes can be modified if one wants to experiment. ********************** The steps for PRE are: 1) Build the hash table of expressions we wish to GCSE (expr_hash_table). 2) Perform the data flow analysis for PRE. 
3) Delete the redundant instructions 4) Insert the required copies [if any] that make the partially redundant instructions fully redundant. 5) For other reaching expressions, insert an instruction to copy the value to a newly created pseudo that will reach the redundant instruction. The deletion is done first so that when we do insertions we know which pseudo reg to use. Various papers have argued that PRE DFA is expensive (O(n^2)) and others argue it is not. The number of iterations for the algorithm to converge is typically 2-4 so I don't view it as that expensive (relatively speaking). PRE GCSE depends heavily on the second CSE pass to clean up the copies we create. To make an expression reach the place where it's redundant, the result of the expression is copied to a new register, and the redundant expression is deleted by replacing it with this new register. Classic GCSE doesn't have this problem as much as it computes the reaching defs of each register in each block and thus can try to use an existing register. ********************** A fair bit of simplicity is created by creating small functions for simple tasks, even when the function is only called in one place. This may measurably slow things down [or may not] by creating more function call overhead than is necessary. The source is laid out so that it's trivial to make the affected functions inline so that one can measure what speed up, if any, can be achieved, and maybe later when things settle things can be rearranged. Help stamp out big monolithic functions! */ /* GCSE global vars. */ /* -dG dump file. */ static FILE *gcse_file; /* Note whether or not we should run jump optimization after gcse. We want to do this for two cases. * If we changed any jumps via cprop. * If we added any labels via edge splitting. */ static int run_jump_opt_after_gcse; /* Bitmaps are normally not included in debugging dumps. However it's useful to be able to print them from GDB. 
We could create special functions for this, but it's simpler to just allow passing stderr to the dump_foo fns. Since stderr can be a macro, we store a copy here. */ static FILE *debug_stderr; /* An obstack for our working variables. */ static struct obstack gcse_obstack; /* Nonzero for each mode that supports (set (reg) (reg)). This is trivially true for integer and floating point values. It may or may not be true for condition codes. */ static char can_copy_p[(int) NUM_MACHINE_MODES]; /* Nonzero if can_copy_p has been initialized. */ static int can_copy_init_p; struct reg_use {rtx reg_rtx; }; /* Hash table of expressions. */ struct expr { /* The expression (SET_SRC for expressions, PATTERN for assignments). */ rtx expr; /* Index in the available expression bitmaps. */ int bitmap_index; /* Next entry with the same hash. */ struct expr *next_same_hash; /* List of anticipatable occurrences in basic blocks in the function. An "anticipatable occurrence" is one that is the first occurrence in the basic block, the operands are not modified in the basic block prior to the occurrence and the output is not used between the start of the block and the occurrence. */ struct occr *antic_occr; /* List of available occurrence in basic blocks in the function. An "available occurrence" is one that is the last occurrence in the basic block and the operands are not modified by following statements in the basic block [including this insn]. */ struct occr *avail_occr; /* Non-null if the computation is PRE redundant. The value is the newly created pseudo-reg to record a copy of the expression in all the places that reach the redundant copy. */ rtx reaching_reg; }; /* Occurrence of an expression. There is one per basic block. If a pattern appears more than once the last appearance is used [or first for anticipatable expressions]. */ struct occr { /* Next occurrence of this expression. */ struct occr *next; /* The insn that computes the expression. 
*/ rtx insn; /* Nonzero if this [anticipatable] occurrence has been deleted. */ char deleted_p; /* Nonzero if this [available] occurrence has been copied to reaching_reg. */ /* ??? This is mutually exclusive with deleted_p, so they could share the same byte. */ char copied_p; }; /* Expression and copy propagation hash tables. Each hash table is an array of buckets. ??? It is known that if it were an array of entries, structure elements `next_same_hash' and `bitmap_index' wouldn't be necessary. However, it is not clear whether in the final analysis a sufficient amount of memory would be saved as the size of the available expression bitmaps would be larger [one could build a mapping table without holes afterwards though]. Someday I'll perform the computation and figure it out. */ struct hash_table { /* The table itself. This is an array of `expr_hash_table_size' elements. */ struct expr **table; /* Size of the hash table, in elements. */ unsigned int size; /* Number of hash table elements. */ unsigned int n_elems; /* Whether the table is expression of copy propagation one. */ int set_p; }; /* Expression hash table. */ static struct hash_table expr_hash_table; /* Copy propagation hash table. */ static struct hash_table set_hash_table; /* Mapping of uids to cuids. Only real insns get cuids. */ static int *uid_cuid; /* Highest UID in UID_CUID. */ static int max_uid; /* Get the cuid of an insn. */ #ifdef ENABLE_CHECKING #define INSN_CUID(INSN) (INSN_UID (INSN) > max_uid ? (abort (), 0) : uid_cuid[INSN_UID (INSN)]) #else #define INSN_CUID(INSN) (uid_cuid[INSN_UID (INSN)]) #endif /* Number of cuids. */ static int max_cuid; /* Mapping of cuids to insns. */ static rtx *cuid_insn; /* Get insn from cuid. */ #define CUID_INSN(CUID) (cuid_insn[CUID]) /* Maximum register number in function prior to doing gcse + 1. Registers created during this pass have regno >= max_gcse_regno. This is named with "gcse" to not collide with global of same name. 
*/ static unsigned int max_gcse_regno; /* Table of registers that are modified. For each register, each element is a list of places where the pseudo-reg is set. For simplicity, GCSE is done on sets of pseudo-regs only. PRE GCSE only requires knowledge of which blocks kill which regs [and thus could use a bitmap instead of the lists `reg_set_table' uses]. `reg_set_table' and could be turned into an array of bitmaps (num-bbs x num-regs) [however perhaps it may be useful to keep the data as is]. One advantage of recording things this way is that `reg_set_table' is fairly sparse with respect to pseudo regs but for hard regs could be fairly dense [relatively speaking]. And recording sets of pseudo-regs in lists speeds up functions like compute_transp since in the case of pseudo-regs we only need to iterate over the number of times a pseudo-reg is set, not over the number of basic blocks [clearly there is a bit of a slow down in the cases where a pseudo is set more than once in a block, however it is believed that the net effect is to speed things up]. This isn't done for hard-regs because recording call-clobbered hard-regs in `reg_set_table' at each function call can consume a fair bit of memory, and iterating over hard-regs stored this way in compute_transp will be more expensive. */ typedef struct reg_set { /* The next setting of this register. */ struct reg_set *next; /* The insn where it was set. */ rtx insn; } reg_set; static reg_set **reg_set_table; /* Size of `reg_set_table'. The table starts out at max_gcse_regno + slop, and is enlarged as necessary. */ static int reg_set_table_size; /* Amount to grow `reg_set_table' by when it's full. */ #define REG_SET_TABLE_SLOP 100 /* This is a list of expressions which are MEMs and will be used by load or store motion. Load motion tracks MEMs which aren't killed by anything except itself. (ie, loads and stores to a single location). We can then allow movement of these MEM refs with a little special allowance. 
(all stores copy the same value to the reaching reg used for the loads). This means all values used to store into memory must have no side effects so we can re-issue the setter value. Store Motion uses this structure as an expression table to track stores which look interesting, and might be moveable towards the exit block. */ struct ls_expr { struct expr * expr; /* Gcse expression reference for LM. */ rtx pattern; /* Pattern of this mem. */ rtx loads; /* INSN list of loads seen. */ rtx stores; /* INSN list of stores seen. */ struct ls_expr * next; /* Next in the list. */ int invalid; /* Invalid for some reason. */ int index; /* If it maps to a bitmap index. */ int hash_index; /* Index when in a hash table. */ rtx reaching_reg; /* Register to use when re-writing. */ }; /* Head of the list of load/store memory refs. */ static struct ls_expr * pre_ldst_mems = NULL; /* Bitmap containing one bit for each register in the program. Used when performing GCSE to track which registers have been set since the start of the basic block. */ static regset reg_set_bitmap; /* For each block, a bitmap of registers set in the block. This is used by expr_killed_p and compute_transp. It is computed during hash table computation and not by compute_sets as it includes registers added since the last pass (or between cprop and gcse) and it's currently not easy to realloc sbitmap vectors. */ static sbitmap *reg_set_in_block; /* Array, indexed by basic block number for a list of insns which modify memory within that block. */ static rtx * modify_mem_list; bitmap modify_mem_list_set; /* This array parallels modify_mem_list, but is kept canonicalized. */ static rtx * canon_modify_mem_list; bitmap canon_modify_mem_list_set; /* Various variables for statistics gathering. */ /* Memory used in a pass. This isn't intended to be absolutely precise. Its intent is only to keep an eye on memory usage. */ static int bytes_used; /* GCSE substitutions made. 
*/ static int gcse_subst_count; /* Number of copy instructions created. */ static int gcse_create_count; /* Number of constants propagated. */ static int const_prop_count; /* Number of copys propagated. */ static int copy_prop_count; /* These variables are used by classic GCSE. Normally they'd be defined a bit later, but `rd_gen' needs to be declared sooner. */ /* Each block has a bitmap of each type. The length of each blocks bitmap is: max_cuid - for reaching definitions n_exprs - for available expressions Thus we view the bitmaps as 2 dimensional arrays. i.e. rd_kill[block_num][cuid_num] ae_kill[block_num][expr_num] */ /* For reaching defs */ static sbitmap *rd_kill, *rd_gen, *reaching_defs, *rd_out; /* for available exprs */ static sbitmap *ae_kill, *ae_gen, *ae_in, *ae_out; /* Objects of this type are passed around by the null-pointer check removal routines. */ struct null_pointer_info { /* The basic block being processed. */ basic_block current_block; /* The first register to be handled in this pass. */ unsigned int min_reg; /* One greater than the last register to be handled in this pass. 
*/ unsigned int max_reg; sbitmap *nonnull_local; sbitmap *nonnull_killed; }; static void compute_can_copy PARAMS ((void)); static char *gmalloc PARAMS ((unsigned int)); static char *grealloc PARAMS ((char *, unsigned int)); static char *gcse_alloc PARAMS ((unsigned long)); static void alloc_gcse_mem PARAMS ((rtx)); static void free_gcse_mem PARAMS ((void)); static void alloc_reg_set_mem PARAMS ((int)); static void free_reg_set_mem PARAMS ((void)); static int get_bitmap_width PARAMS ((int, int, int)); static void record_one_set PARAMS ((int, rtx)); static void record_set_info PARAMS ((rtx, rtx, void *)); static void compute_sets PARAMS ((rtx)); static void hash_scan_insn PARAMS ((rtx, struct hash_table *, int)); static void hash_scan_set PARAMS ((rtx, rtx, struct hash_table *)); static void hash_scan_clobber PARAMS ((rtx, rtx, struct hash_table *)); static void hash_scan_call PARAMS ((rtx, rtx, struct hash_table *)); static int want_to_gcse_p PARAMS ((rtx)); static int oprs_unchanged_p PARAMS ((rtx, rtx, int)); static int oprs_anticipatable_p PARAMS ((rtx, rtx)); static int oprs_available_p PARAMS ((rtx, rtx)); static void insert_expr_in_table PARAMS ((rtx, enum machine_mode, rtx, int, int, struct hash_table *)); static void insert_set_in_table PARAMS ((rtx, rtx, struct hash_table *)); static unsigned int hash_expr PARAMS ((rtx, enum machine_mode, int *, int)); static unsigned int hash_expr_1 PARAMS ((rtx, enum machine_mode, int *)); static unsigned int hash_string_1 PARAMS ((const char *)); static unsigned int hash_set PARAMS ((int, int)); static int expr_equiv_p PARAMS ((rtx, rtx)); static void record_last_reg_set_info PARAMS ((rtx, int)); static void record_last_mem_set_info PARAMS ((rtx)); static void record_last_set_info PARAMS ((rtx, rtx, void *)); static void compute_hash_table PARAMS ((struct hash_table *)); static void alloc_hash_table PARAMS ((int, struct hash_table *, int)); static void free_hash_table PARAMS ((struct hash_table *)); static void 
compute_hash_table_work PARAMS ((struct hash_table *)); static void dump_hash_table PARAMS ((FILE *, const char *, struct hash_table *)); static struct expr *lookup_expr PARAMS ((rtx, struct hash_table *)); static struct expr *lookup_set PARAMS ((unsigned int, rtx, struct hash_table *)); static struct expr *next_set PARAMS ((unsigned int, struct expr *)); static void reset_opr_set_tables PARAMS ((void)); static int oprs_not_set_p PARAMS ((rtx, rtx)); static void mark_call PARAMS ((rtx)); static void mark_set PARAMS ((rtx, rtx)); static void mark_clobber PARAMS ((rtx, rtx)); static void mark_oprs_set PARAMS ((rtx)); static void alloc_cprop_mem PARAMS ((int, int)); static void free_cprop_mem PARAMS ((void)); static void compute_transp PARAMS ((rtx, int, sbitmap *, int)); static void compute_transpout PARAMS ((void)); static void compute_local_properties PARAMS ((sbitmap *, sbitmap *, sbitmap *, struct hash_table *)); static void compute_cprop_data PARAMS ((void)); static void find_used_regs PARAMS ((rtx *, void *)); static int try_replace_reg PARAMS ((rtx, rtx, rtx)); static struct expr *find_avail_set PARAMS ((int, rtx)); static int cprop_jump PARAMS ((basic_block, rtx, rtx, rtx, rtx)); static void mems_conflict_for_gcse_p PARAMS ((rtx, rtx, void *)); static int load_killed_in_block_p PARAMS ((basic_block, int, rtx, int)); static void canon_list_insert PARAMS ((rtx, rtx, void *)); static int cprop_insn PARAMS ((rtx, int)); static int cprop PARAMS ((int)); static int one_cprop_pass PARAMS ((int, int)); static bool constprop_register PARAMS ((rtx, rtx, rtx, int)); static struct expr *find_bypass_set PARAMS ((int, int)); static bool reg_killed_on_edge PARAMS ((rtx, edge)); static int bypass_block PARAMS ((basic_block, rtx, rtx)); static int bypass_conditional_jumps PARAMS ((void)); static void alloc_pre_mem PARAMS ((int, int)); static void free_pre_mem PARAMS ((void)); static void compute_pre_data PARAMS ((void)); static int pre_expr_reaches_here_p PARAMS 
((basic_block, struct expr *, basic_block)); static void insert_insn_end_bb PARAMS ((struct expr *, basic_block, int)); static void pre_insert_copy_insn PARAMS ((struct expr *, rtx)); static void pre_insert_copies PARAMS ((void)); static int pre_delete PARAMS ((void)); static int pre_gcse PARAMS ((void)); static int one_pre_gcse_pass PARAMS ((int)); static void add_label_notes PARAMS ((rtx, rtx)); static void alloc_code_hoist_mem PARAMS ((int, int)); static void free_code_hoist_mem PARAMS ((void)); static void compute_code_hoist_vbeinout PARAMS ((void)); static void compute_code_hoist_data PARAMS ((void)); static int hoist_expr_reaches_here_p PARAMS ((basic_block, int, basic_block, char *)); static void hoist_code PARAMS ((void)); static int one_code_hoisting_pass PARAMS ((void)); static void alloc_rd_mem PARAMS ((int, int)); static void free_rd_mem PARAMS ((void)); static void handle_rd_kill_set PARAMS ((rtx, int, basic_block)); static void compute_kill_rd PARAMS ((void)); static void compute_rd PARAMS ((void)); static void alloc_avail_expr_mem PARAMS ((int, int)); static void free_avail_expr_mem PARAMS ((void)); static void compute_ae_gen PARAMS ((struct hash_table *)); static int expr_killed_p PARAMS ((rtx, basic_block)); static void compute_ae_kill PARAMS ((sbitmap *, sbitmap *, struct hash_table *)); static int expr_reaches_here_p PARAMS ((struct occr *, struct expr *, basic_block, int)); static rtx computing_insn PARAMS ((struct expr *, rtx)); static int def_reaches_here_p PARAMS ((rtx, rtx)); static int can_disregard_other_sets PARAMS ((struct reg_set **, rtx, int)); static int handle_avail_expr PARAMS ((rtx, struct expr *)); static int classic_gcse PARAMS ((void)); static int one_classic_gcse_pass PARAMS ((int)); static void invalidate_nonnull_info PARAMS ((rtx, rtx, void *)); static int delete_null_pointer_checks_1 PARAMS ((unsigned int *, sbitmap *, sbitmap *, struct null_pointer_info *)); static rtx process_insert_insn PARAMS ((struct expr *)); static 
int pre_edge_insert PARAMS ((struct edge_list *, struct expr **)); static int expr_reaches_here_p_work PARAMS ((struct occr *, struct expr *, basic_block, int, char *)); static int pre_expr_reaches_here_p_work PARAMS ((basic_block, struct expr *, basic_block, char *)); static struct ls_expr * ldst_entry PARAMS ((rtx)); static void free_ldst_entry PARAMS ((struct ls_expr *)); static void free_ldst_mems PARAMS ((void)); static void print_ldst_list PARAMS ((FILE *)); static struct ls_expr * find_rtx_in_ldst PARAMS ((rtx)); static int enumerate_ldsts PARAMS ((void)); static inline struct ls_expr * first_ls_expr PARAMS ((void)); static inline struct ls_expr * next_ls_expr PARAMS ((struct ls_expr *)); static int simple_mem PARAMS ((rtx)); static void invalidate_any_buried_refs PARAMS ((rtx)); static void compute_ld_motion_mems PARAMS ((void)); static void trim_ld_motion_mems PARAMS ((void)); static void update_ld_motion_stores PARAMS ((struct expr *)); static void reg_set_info PARAMS ((rtx, rtx, void *)); static int store_ops_ok PARAMS ((rtx, basic_block)); static void find_moveable_store PARAMS ((rtx)); static int compute_store_table PARAMS ((void)); static int load_kills_store PARAMS ((rtx, rtx)); static int find_loads PARAMS ((rtx, rtx)); static int store_killed_in_insn PARAMS ((rtx, rtx)); static int store_killed_after PARAMS ((rtx, rtx, basic_block)); static int store_killed_before PARAMS ((rtx, rtx, basic_block)); static void build_store_vectors PARAMS ((void)); static void insert_insn_start_bb PARAMS ((rtx, basic_block)); static int insert_store PARAMS ((struct ls_expr *, edge)); static void replace_store_insn PARAMS ((rtx, rtx, basic_block)); static void delete_store PARAMS ((struct ls_expr *, basic_block)); static void free_store_memory PARAMS ((void)); static void store_motion PARAMS ((void)); static void free_insn_expr_list_list PARAMS ((rtx *)); static void clear_modify_mem_tables PARAMS ((void)); static void free_modify_mem_tables PARAMS ((void)); static rtx 
gcse_emit_move_after PARAMS ((rtx, rtx, rtx)); static void local_cprop_find_used_regs PARAMS ((rtx *, void *)); static bool do_local_cprop PARAMS ((rtx, rtx, int, rtx*)); static bool adjust_libcall_notes PARAMS ((rtx, rtx, rtx, rtx*)); static void local_cprop_pass PARAMS ((int)); /* Entry point for global common subexpression elimination. F is the first instruction in the function. */ int gcse_main (f, file) rtx f; FILE *file; { int changed, pass; /* Bytes used at start of pass. */ int initial_bytes_used; /* Maximum number of bytes used by a pass. */ int max_pass_bytes; /* Point to release obstack data from for each pass. */ char *gcse_obstack_bottom; /* Insertion of instructions on edges can create new basic blocks; we need the original basic block count so that we can properly deallocate arrays sized on the number of basic blocks originally in the cfg. */ int orig_bb_count; /* We do not construct an accurate cfg in functions which call setjmp, so just punt to be safe. */ if (current_function_calls_setjmp) return 0; /* Assume that we do not need to run jump optimizations after gcse. */ run_jump_opt_after_gcse = 0; /* For calling dump_foo fns from gdb. */ debug_stderr = stderr; gcse_file = file; /* Identify the basic block information for this function, including successors and predecessors. */ max_gcse_regno = max_reg_num (); if (file) dump_flow_info (file); orig_bb_count = n_basic_blocks; /* Return if there's nothing to do. */ if (n_basic_blocks <= 1) return 0; /* Trying to perform global optimizations on flow graphs which have a high connectivity will take a long time and is unlikely to be particularly useful. In normal circumstances a cfg should have about twice as many edges as blocks. But we do not want to punish small functions which have a couple switch statements. So we require a relatively large number of basic blocks and the ratio of edges to blocks to be high. 
*/ if (n_basic_blocks > 1000 && n_edges / n_basic_blocks >= 20) { if (warn_disabled_optimization) warning ("GCSE disabled: %d > 1000 basic blocks and %d >= 20 edges/basic block", n_basic_blocks, n_edges / n_basic_blocks); return 0; } /* If allocating memory for the cprop bitmap would take up too much storage it's better just to disable the optimization. */ if ((n_basic_blocks * SBITMAP_SET_SIZE (max_gcse_regno) * sizeof (SBITMAP_ELT_TYPE)) > MAX_GCSE_MEMORY) { if (warn_disabled_optimization) warning ("GCSE disabled: %d basic blocks and %d registers", n_basic_blocks, max_gcse_regno); return 0; } /* See what modes support reg/reg copy operations. */ if (! can_copy_init_p) { compute_can_copy (); can_copy_init_p = 1; } gcc_obstack_init (&gcse_obstack); bytes_used = 0; /* We need alias. */ init_alias_analysis (); /* Record where pseudo-registers are set. This data is kept accurate during each pass. ??? We could also record hard-reg information here [since it's unchanging], however it is currently done during hash table computation. It may be tempting to compute MEM set information here too, but MEM sets will be subject to code motion one day and thus we need to compute information about memory sets when we build the hash tables. */ alloc_reg_set_mem (max_gcse_regno); compute_sets (f); pass = 0; initial_bytes_used = bytes_used; max_pass_bytes = 0; gcse_obstack_bottom = gcse_alloc (1); changed = 1; while (changed && pass < MAX_GCSE_PASSES) { changed = 0; if (file) fprintf (file, "GCSE pass %d\n\n", pass + 1); /* Initialize bytes_used to the space for the pred/succ lists, and the reg_set_table data. */ bytes_used = initial_bytes_used; /* Each pass may create new registers, so recalculate each time. */ max_gcse_regno = max_reg_num (); alloc_gcse_mem (f); /* Don't allow constant propagation to modify jumps during this pass. 
*/ changed = one_cprop_pass (pass + 1, 0); if (optimize_size) changed |= one_classic_gcse_pass (pass + 1); else { changed |= one_pre_gcse_pass (pass + 1); /* We may have just created new basic blocks. Release and recompute various things which are sized on the number of basic blocks. */ if (changed) { free_modify_mem_tables (); modify_mem_list = (rtx *) gmalloc (last_basic_block * sizeof (rtx)); canon_modify_mem_list = (rtx *) gmalloc (last_basic_block * sizeof (rtx)); memset ((char *) modify_mem_list, 0, last_basic_block * sizeof (rtx)); memset ((char *) canon_modify_mem_list, 0, last_basic_block * sizeof (rtx)); orig_bb_count = n_basic_blocks; } free_reg_set_mem (); alloc_reg_set_mem (max_reg_num ()); compute_sets (f); run_jump_opt_after_gcse = 1; } if (max_pass_bytes < bytes_used) max_pass_bytes = bytes_used; /* Free up memory, then reallocate for code hoisting. We can not re-use the existing allocated memory because the tables will not have info for the insns or registers created by partial redundancy elimination. */ free_gcse_mem (); /* It does not make sense to run code hoisting unless we optimizing for code size -- it rarely makes programs faster, and can make them bigger if we did partial redundancy elimination (when optimizing for space, we use a classic gcse algorithm instead of partial redundancy algorithms). */ if (optimize_size) { max_gcse_regno = max_reg_num (); alloc_gcse_mem (f); changed |= one_code_hoisting_pass (); free_gcse_mem (); if (max_pass_bytes < bytes_used) max_pass_bytes = bytes_used; } if (file) { fprintf (file, "\n"); fflush (file); } obstack_free (&gcse_obstack, gcse_obstack_bottom); pass++; } /* Do one last pass of copy propagation, including cprop into conditional jumps. */ max_gcse_regno = max_reg_num (); alloc_gcse_mem (f); /* This time, go ahead and allow cprop to alter jumps. 
*/ one_cprop_pass (pass + 1, 1); free_gcse_mem (); if (file) { fprintf (file, "GCSE of %s: %d basic blocks, ", current_function_name, n_basic_blocks); fprintf (file, "%d pass%s, %d bytes\n\n", pass, pass > 1 ? "es" : "", max_pass_bytes); } obstack_free (&gcse_obstack, NULL); free_reg_set_mem (); /* We are finished with alias. */ end_alias_analysis (); allocate_reg_info (max_reg_num (), FALSE, FALSE); /* Store motion disabled until it is fixed. */ if (0 && !optimize_size && flag_gcse_sm) store_motion (); /* Record where pseudo-registers are set. */ return run_jump_opt_after_gcse; } /* Misc. utilities. */ /* Compute which modes support reg/reg copy operations. */ static void compute_can_copy () { int i;
#ifndef AVOID_CCMODE_COPIES
rtx reg, insn;
#endif
memset (can_copy_p, 0, NUM_MACHINE_MODES); start_sequence (); for (i = 0; i < NUM_MACHINE_MODES; i++) if (GET_MODE_CLASS (i) == MODE_CC) {
#ifdef AVOID_CCMODE_COPIES
can_copy_p[i] = 0;
#else
reg = gen_rtx_REG ((enum machine_mode) i, LAST_VIRTUAL_REGISTER + 1); insn = emit_insn (gen_rtx_SET (VOIDmode, reg, reg)); if (recog (PATTERN (insn), insn, NULL) >= 0) can_copy_p[i] = 1;
#endif
} else can_copy_p[i] = 1; end_sequence (); } /* Cover function to xmalloc to record bytes allocated. */ static char * gmalloc (size) unsigned int size; { bytes_used += size; return xmalloc (size); } /* Cover function to xrealloc. We don't record the additional size since we don't know it. It won't affect memory usage stats much anyway. */ static char * grealloc (ptr, size) char *ptr; unsigned int size; { return xrealloc (ptr, size); } /* Cover function to obstack_alloc. */ static char * gcse_alloc (size) unsigned long size; { bytes_used += size; return (char *) obstack_alloc (&gcse_obstack, size); } /* Allocate memory for the cuid mapping array, and reg/memory set tracking tables. This is called at the start of each pass. 
*/ static void alloc_gcse_mem (f) rtx f; { int i, n; rtx insn; /* Find the largest UID and create a mapping from UIDs to CUIDs. CUIDs are like UIDs except they increase monotonically, have no gaps, and only apply to real insns. */ max_uid = get_max_uid (); n = (max_uid + 1) * sizeof (int); uid_cuid = (int *) gmalloc (n); memset ((char *) uid_cuid, 0, n); for (insn = f, i = 0; insn; insn = NEXT_INSN (insn)) { if (INSN_P (insn)) uid_cuid[INSN_UID (insn)] = i++; else uid_cuid[INSN_UID (insn)] = i; } /* Create a table mapping cuids to insns. */ max_cuid = i; n = (max_cuid + 1) * sizeof (rtx); cuid_insn = (rtx *) gmalloc (n); memset ((char *) cuid_insn, 0, n); for (insn = f, i = 0; insn; insn = NEXT_INSN (insn)) if (INSN_P (insn)) CUID_INSN (i++) = insn; /* Allocate vars to track sets of regs. */ reg_set_bitmap = BITMAP_XMALLOC (); /* Allocate vars to track sets of regs, memory per block. */ reg_set_in_block = (sbitmap *) sbitmap_vector_alloc (last_basic_block, max_gcse_regno); /* Allocate array to keep a list of insns which modify memory in each basic block. */ modify_mem_list = (rtx *) gmalloc (last_basic_block * sizeof (rtx)); canon_modify_mem_list = (rtx *) gmalloc (last_basic_block * sizeof (rtx)); memset ((char *) modify_mem_list, 0, last_basic_block * sizeof (rtx)); memset ((char *) canon_modify_mem_list, 0, last_basic_block * sizeof (rtx)); modify_mem_list_set = BITMAP_XMALLOC (); canon_modify_mem_list_set = BITMAP_XMALLOC (); } /* Free memory allocated by alloc_gcse_mem. */ static void free_gcse_mem () { free (uid_cuid); free (cuid_insn); BITMAP_XFREE (reg_set_bitmap); sbitmap_vector_free (reg_set_in_block); free_modify_mem_tables (); BITMAP_XFREE (modify_mem_list_set); BITMAP_XFREE (canon_modify_mem_list_set); } /* Many of the global optimization algorithms work by solving dataflow equations for various expressions. Initially, some local value is computed for each expression in each block. 
Then, the values across the various blocks are combined (by following flow graph edges) to arrive at global values. Conceptually, each set of equations is independent. We may therefore solve all the equations in parallel, solve them one at a time, or pick any intermediate approach. When you're going to need N two-dimensional bitmaps, each X (say, the number of blocks) by Y (say, the number of expressions), call this function. It's not important what X and Y represent; only that Y correspond to the things that can be done in parallel. This function will return an appropriate chunking factor C; you should solve C sets of equations in parallel. By going through this function, we can easily trade space against time; by solving fewer equations in parallel we use less space. */ static int get_bitmap_width (n, x, y) int n; int x; int y; { /* It's not really worth figuring out *exactly* how much memory will be used by a particular choice. The important thing is to get something approximately right. */ size_t max_bitmap_memory = 10 * 1024 * 1024; /* The number of bytes we'd use for a single column of minimum width. */ size_t column_size = n * x * sizeof (SBITMAP_ELT_TYPE); /* Often, it's reasonable just to solve all the equations in parallel. */ if (column_size * SBITMAP_SET_SIZE (y) <= max_bitmap_memory) return y; /* Otherwise, pick the largest width we can, without going over the limit. */ return SBITMAP_ELT_BITS * ((max_bitmap_memory + column_size - 1) / column_size); } /* Compute the local properties of each recorded expression. Local properties are those that are defined by the block, irrespective of other blocks. An expression is transparent in a block if its operands are not modified in the block. An expression is computed (locally available) in a block if it is computed at least once and expression would contain the same value if the computation was moved to the end of the block. 
An expression is locally anticipatable in a block if it is computed at least once and the expression would contain the same value if the computation was moved to the beginning of the block. We call this routine for cprop, pre and code hoisting. They all compute basically the same information and thus can easily share this code. TRANSP, COMP, and ANTLOC are destination sbitmaps for recording local properties. If NULL, then it is not necessary to compute or record that particular property. TABLE controls which hash table to look at. If it is the set hash table, additionally, TRANSP is computed as ~TRANSP, since this is really cprop's ABSALTERED. */ static void compute_local_properties (transp, comp, antloc, table) sbitmap *transp; sbitmap *comp; sbitmap *antloc; struct hash_table *table; { unsigned int i; /* Initialize any bitmaps that were passed in. */ if (transp) { if (table->set_p) sbitmap_vector_zero (transp, last_basic_block); else sbitmap_vector_ones (transp, last_basic_block); } if (comp) sbitmap_vector_zero (comp, last_basic_block); if (antloc) sbitmap_vector_zero (antloc, last_basic_block); for (i = 0; i < table->size; i++) { struct expr *expr; for (expr = table->table[i]; expr != NULL; expr = expr->next_same_hash) { int indx = expr->bitmap_index; struct occr *occr; /* The expression is transparent in this block if it is not killed. We start by assuming all are transparent [none are killed], and then reset the bits for those that are. */ if (transp) compute_transp (expr->expr, indx, transp, table->set_p); /* The occurrences recorded in antic_occr are exactly those that we want to set to nonzero in ANTLOC. */ if (antloc) for (occr = expr->antic_occr; occr != NULL; occr = occr->next) { SET_BIT (antloc[BLOCK_NUM (occr->insn)], indx); /* While we're scanning the table, this is a good place to initialize this. */ occr->deleted_p = 0; } /* The occurrences recorded in avail_occr are exactly those that we want to set to nonzero in COMP. 
*/ if (comp) for (occr = expr->avail_occr; occr != NULL; occr = occr->next) { SET_BIT (comp[BLOCK_NUM (occr->insn)], indx); /* While we're scanning the table, this is a good place to initialize this. */ occr->copied_p = 0; } /* While we're scanning the table, this is a good place to initialize this. */ expr->reaching_reg = 0; } } } /* Register set information. `reg_set_table' records where each register is set or otherwise modified. */ static struct obstack reg_set_obstack; static void alloc_reg_set_mem (n_regs) int n_regs; { unsigned int n; reg_set_table_size = n_regs + REG_SET_TABLE_SLOP; n = reg_set_table_size * sizeof (struct reg_set *); reg_set_table = (struct reg_set **) gmalloc (n); memset ((char *) reg_set_table, 0, n); gcc_obstack_init (&reg_set_obstack); } static void free_reg_set_mem () { free (reg_set_table); obstack_free (&reg_set_obstack, NULL); } /* Record REGNO in the reg_set table. */ static void record_one_set (regno, insn) int regno; rtx insn; { /* Allocate a new reg_set element and link it onto the list. */ struct reg_set *new_reg_info; /* If the table isn't big enough, enlarge it. */ if (regno >= reg_set_table_size) { int new_size = regno + REG_SET_TABLE_SLOP; reg_set_table = (struct reg_set **) grealloc ((char *) reg_set_table, new_size * sizeof (struct reg_set *)); memset ((char *) (reg_set_table + reg_set_table_size), 0, (new_size - reg_set_table_size) * sizeof (struct reg_set *)); reg_set_table_size = new_size; } new_reg_info = (struct reg_set *) obstack_alloc (&reg_set_obstack, sizeof (struct reg_set)); bytes_used += sizeof (struct reg_set); new_reg_info->insn = insn; new_reg_info->next = reg_set_table[regno]; reg_set_table[regno] = new_reg_info; } /* Called from compute_sets via note_stores to handle one SET or CLOBBER in an insn. The DATA is really the instruction in which the SET is occurring. 
*/ static void record_set_info (dest, setter, data) rtx dest, setter ATTRIBUTE_UNUSED; void *data; { rtx record_set_insn = (rtx) data; if (GET_CODE (dest) == REG && REGNO (dest) >= FIRST_PSEUDO_REGISTER) record_one_set (REGNO (dest), record_set_insn); } /* Scan the function and record each set of each pseudo-register. This is called once, at the start of the gcse pass. See the comments for `reg_set_table' for further documentation. */ static void compute_sets (f) rtx f; { rtx insn; for (insn = f; insn != 0; insn = NEXT_INSN (insn)) if (INSN_P (insn)) note_stores (PATTERN (insn), record_set_info, insn); } /* Hash table support. */ struct reg_avail_info { basic_block last_bb; int first_set; int last_set; }; static struct reg_avail_info *reg_avail_info; static basic_block current_bb; /* See whether X, the source of a set, is something we want to consider for GCSE. */ static GTY(()) rtx test_insn; static int want_to_gcse_p (x) rtx x; { int num_clobbers = 0; int icode; switch (GET_CODE (x)) { case REG: case SUBREG: case CONST_INT: case CONST_DOUBLE: case CONST_VECTOR: case CALL: return 0; default: break; } /* If this is a valid operand, we are OK. If it's VOIDmode, we aren't. */ if (general_operand (x, GET_MODE (x))) return 1; else if (GET_MODE (x) == VOIDmode) return 0; /* Otherwise, check if we can make a valid insn from it. First initialize our test insn if we haven't already. */ if (test_insn == 0) { test_insn = make_insn_raw (gen_rtx_SET (VOIDmode, gen_rtx_REG (word_mode, FIRST_PSEUDO_REGISTER * 2), const0_rtx)); NEXT_INSN (test_insn) = PREV_INSN (test_insn) = 0; } /* Now make an insn like the one we would make when GCSE'ing and see if valid. */ PUT_MODE (SET_DEST (PATTERN (test_insn)), GET_MODE (x)); SET_SRC (PATTERN (test_insn)) = x; return ((icode = recog (PATTERN (test_insn), test_insn, &num_clobbers)) >= 0 && (num_clobbers == 0 || ! 
added_clobbers_hard_reg_p (icode))); } /* Return nonzero if the operands of expression X are unchanged from the start of INSN's basic block up to but not including INSN (if AVAIL_P == 0), or from INSN to the end of INSN's basic block (if AVAIL_P != 0). */ static int oprs_unchanged_p (x, insn, avail_p) rtx x, insn; int avail_p; { int i, j; enum rtx_code code; const char *fmt; if (x == 0) return 1; code = GET_CODE (x); switch (code) { case REG: { struct reg_avail_info *info = &reg_avail_info[REGNO (x)]; if (info->last_bb != current_bb) return 1; if (avail_p) return info->last_set < INSN_CUID (insn); else return info->first_set >= INSN_CUID (insn); } case MEM: if (load_killed_in_block_p (current_bb, INSN_CUID (insn), x, avail_p)) return 0; else return oprs_unchanged_p (XEXP (x, 0), insn, avail_p); case PRE_DEC: case PRE_INC: case POST_DEC: case POST_INC: case PRE_MODIFY: case POST_MODIFY: return 0; case PC: case CC0: /*FIXME*/ case CONST: case CONST_INT: case CONST_DOUBLE: case CONST_VECTOR: case SYMBOL_REF: case LABEL_REF: case ADDR_VEC: case ADDR_DIFF_VEC: return 1; default: break; } for (i = GET_RTX_LENGTH (code) - 1, fmt = GET_RTX_FORMAT (code); i >= 0; i--) { if (fmt[i] == 'e') { /* If we are about to do the last recursive call needed at this level, change it into iteration. This function is called enough to be worth it. */ if (i == 0) return oprs_unchanged_p (XEXP (x, i), insn, avail_p); else if (! oprs_unchanged_p (XEXP (x, i), insn, avail_p)) return 0; } else if (fmt[i] == 'E') for (j = 0; j < XVECLEN (x, i); j++) if (! oprs_unchanged_p (XVECEXP (x, i, j), insn, avail_p)) return 0; } return 1; } /* Used for communication between mems_conflict_for_gcse_p and load_killed_in_block_p. Nonzero if mems_conflict_for_gcse_p finds a conflict between two memory references. */ static int gcse_mems_conflict_p; /* Used for communication between mems_conflict_for_gcse_p and load_killed_in_block_p. 
A memory reference for a load instruction, mems_conflict_for_gcse_p will see if a memory store conflicts with this memory load. */ static rtx gcse_mem_operand; /* DEST is the output of an instruction. If it is a memory reference, and possibly conflicts with the load found in gcse_mem_operand, then set gcse_mems_conflict_p to a nonzero value. */ static void mems_conflict_for_gcse_p (dest, setter, data) rtx dest, setter ATTRIBUTE_UNUSED; void *data ATTRIBUTE_UNUSED; { while (GET_CODE (dest) == SUBREG || GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT || GET_CODE (dest) == STRICT_LOW_PART) dest = XEXP (dest, 0); /* If DEST is not a MEM, then it will not conflict with the load. Note that function calls are assumed to clobber memory, but are handled elsewhere. */ if (GET_CODE (dest) != MEM) return; /* If we are setting a MEM in our list of specially recognized MEMs, don't mark as killed this time. */ if (dest == gcse_mem_operand && pre_ldst_mems != NULL) { if (!find_rtx_in_ldst (dest)) gcse_mems_conflict_p = 1; return; } if (true_dependence (dest, GET_MODE (dest), gcse_mem_operand, rtx_addr_varies_p)) gcse_mems_conflict_p = 1; } /* Return nonzero if the expression in X (a memory reference) is killed in block BB before or after the insn with the CUID in UID_LIMIT. AVAIL_P is nonzero for kills after UID_LIMIT, and zero for kills before UID_LIMIT. To check the entire block, set UID_LIMIT to max_uid + 1 and AVAIL_P to 0. */ static int load_killed_in_block_p (bb, uid_limit, x, avail_p) basic_block bb; int uid_limit; rtx x; int avail_p; { rtx list_entry = modify_mem_list[bb->index]; while (list_entry) { rtx setter; /* Ignore entries in the list that do not apply. */ if ((avail_p && INSN_CUID (XEXP (list_entry, 0)) < uid_limit) || (! avail_p && INSN_CUID (XEXP (list_entry, 0)) > uid_limit)) { list_entry = XEXP (list_entry, 1); continue; } setter = XEXP (list_entry, 0); /* If SETTER is a call everything is clobbered. 
Note that calls to pure functions are never put on the list, so we need not worry about them. */ if (GET_CODE (setter) == CALL_INSN) return 1; /* SETTER must be an INSN of some kind that sets memory. Call note_stores to examine each hunk of memory that is modified. The note_stores interface is pretty limited, so we have to communicate via global variables. Yuk. */ gcse_mem_operand = x; gcse_mems_conflict_p = 0; note_stores (PATTERN (setter), mems_conflict_for_gcse_p, NULL); if (gcse_mems_conflict_p) return 1; list_entry = XEXP (list_entry, 1); } return 0; } /* Return nonzero if the operands of expression X are unchanged from the start of INSN's basic block up to but not including INSN. */ static int oprs_anticipatable_p (x, insn) rtx x, insn; { return oprs_unchanged_p (x, insn, 0); } /* Return nonzero if the operands of expression X are unchanged from INSN to the end of INSN's basic block. */ static int oprs_available_p (x, insn) rtx x, insn; { return oprs_unchanged_p (x, insn, 1); } /* Hash expression X. MODE is only used if X is a CONST_INT. DO_NOT_RECORD_P is a boolean indicating if a volatile operand is found or if the expression contains something we don't want to insert in the table. ??? One might want to merge this with canon_hash. Later. */ static unsigned int hash_expr (x, mode, do_not_record_p, hash_table_size) rtx x; enum machine_mode mode; int *do_not_record_p; int hash_table_size; { unsigned int hash; *do_not_record_p = 0; hash = hash_expr_1 (x, mode, do_not_record_p); return hash % hash_table_size; } /* Hash a string. Just add its bytes up. */ static inline unsigned hash_string_1 (ps) const char *ps; { unsigned hash = 0; const unsigned char *p = (const unsigned char *) ps; if (p) while (*p) hash += *p++; return hash; } /* Subroutine of hash_expr to do the actual work. 
*/ static unsigned int hash_expr_1 (x, mode, do_not_record_p) rtx x; enum machine_mode mode; int *do_not_record_p; { int i, j; unsigned hash = 0; enum rtx_code code; const char *fmt; /* Used to turn recursion into iteration. We can't rely on GCC's tail-recursion elimination since we need to keep accumulating values in HASH. */ if (x == 0) return hash; repeat: code = GET_CODE (x); switch (code) { case REG: hash += ((unsigned int) REG << 7) + REGNO (x); return hash; case CONST_INT: hash += (((unsigned int) CONST_INT << 7) + (unsigned int) mode + (unsigned int) INTVAL (x)); return hash; case CONST_DOUBLE: /* This is like the general case, except that it only counts the integers representing the constant. */ hash += (unsigned int) code + (unsigned int) GET_MODE (x); if (GET_MODE (x) != VOIDmode) for (i = 2; i < GET_RTX_LENGTH (CONST_DOUBLE); i++) hash += (unsigned int) XWINT (x, i); else hash += ((unsigned int) CONST_DOUBLE_LOW (x) + (unsigned int) CONST_DOUBLE_HIGH (x)); return hash; case CONST_VECTOR: { int units; rtx elt; units = CONST_VECTOR_NUNITS (x); for (i = 0; i < units; ++i) { elt = CONST_VECTOR_ELT (x, i); hash += hash_expr_1 (elt, GET_MODE (elt), do_not_record_p); } return hash; } /* Assume there is only one rtx object for any given label. */ case LABEL_REF: /* We don't hash on the address of the CODE_LABEL to avoid bootstrap differences and differences between each stage's debugging dumps. */ hash += (((unsigned int) LABEL_REF << 7) + CODE_LABEL_NUMBER (XEXP (x, 0))); return hash; case SYMBOL_REF: { /* Don't hash on the symbol's address to avoid bootstrap differences. Different hash values may cause expressions to be recorded in different orders and thus different registers to be used in the final assembler. This also avoids differences in the dump files between various stages. */ unsigned int h = 0; const unsigned char *p = (const unsigned char *) XSTR (x, 0); while (*p) h += (h << 7) + *p++; /* ??? 
revisit */ hash += ((unsigned int) SYMBOL_REF << 7) + h; return hash; } case MEM: if (MEM_VOLATILE_P (x)) { *do_not_record_p = 1; return 0; } hash += (unsigned int) MEM; /* We used alias set for hashing, but this is not good, since the alias set may differ in -fprofile-arcs and -fbranch-probabilities compilation causing the profiles to fail to match. */ x = XEXP (x, 0); goto repeat; case PRE_DEC: case PRE_INC: case POST_DEC: case POST_INC: case PC: case CC0: case CALL: case UNSPEC_VOLATILE: *do_not_record_p = 1; return 0; case ASM_OPERANDS: if (MEM_VOLATILE_P (x)) { *do_not_record_p = 1; return 0; } else { /* We don't want to take the filename and line into account. */ hash += (unsigned) code + (unsigned) GET_MODE (x) + hash_string_1 (ASM_OPERANDS_TEMPLATE (x)) + hash_string_1 (ASM_OPERANDS_OUTPUT_CONSTRAINT (x)) + (unsigned) ASM_OPERANDS_OUTPUT_IDX (x); if (ASM_OPERANDS_INPUT_LENGTH (x)) { for (i = 1; i < ASM_OPERANDS_INPUT_LENGTH (x); i++) { hash += (hash_expr_1 (ASM_OPERANDS_INPUT (x, i), GET_MODE (ASM_OPERANDS_INPUT (x, i)), do_not_record_p) + hash_string_1 (ASM_OPERANDS_INPUT_CONSTRAINT (x, i))); } hash += hash_string_1 (ASM_OPERANDS_INPUT_CONSTRAINT (x, 0)); x = ASM_OPERANDS_INPUT (x, 0); mode = GET_MODE (x); goto repeat; } return hash; } default: break; } hash += (unsigned) code + (unsigned) GET_MODE (x); for (i = GET_RTX_LENGTH (code) - 1, fmt = GET_RTX_FORMAT (code); i >= 0; i--) { if (fmt[i] == 'e') { /* If we are about to do the last recursive call needed at this level, change it into iteration. This function is called enough to be worth it. 
*/ if (i == 0) { x = XEXP (x, i); goto repeat; } hash += hash_expr_1 (XEXP (x, i), 0, do_not_record_p); if (*do_not_record_p) return 0; } else if (fmt[i] == 'E') for (j = 0; j < XVECLEN (x, i); j++) { hash += hash_expr_1 (XVECEXP (x, i, j), 0, do_not_record_p); if (*do_not_record_p) return 0; } else if (fmt[i] == 's') hash += hash_string_1 (XSTR (x, i)); else if (fmt[i] == 'i') hash += (unsigned int) XINT (x, i); else abort (); } return hash; } /* Hash a set of register REGNO. Sets are hashed on the register that is set. This simplifies the PRE copy propagation code. ??? May need to make things more elaborate. Later, as necessary. */ static unsigned int hash_set (regno, hash_table_size) int regno; int hash_table_size; { unsigned int hash; hash = regno; return hash % hash_table_size; } /* Return nonzero if exp1 is equivalent to exp2. ??? Borrowed from cse.c. Might want to remerge with cse.c. Later. */ static int expr_equiv_p (x, y) rtx x, y; { int i, j; enum rtx_code code; const char *fmt; if (x == y) return 1; if (x == 0 || y == 0) return x == y; code = GET_CODE (x); if (code != GET_CODE (y)) return 0; /* (MULT:SI x y) and (MULT:HI x y) are NOT equivalent. */ if (GET_MODE (x) != GET_MODE (y)) return 0; switch (code) { case PC: case CC0: return x == y; case CONST_INT: return INTVAL (x) == INTVAL (y); case LABEL_REF: return XEXP (x, 0) == XEXP (y, 0); case SYMBOL_REF: return XSTR (x, 0) == XSTR (y, 0); case REG: return REGNO (x) == REGNO (y); case MEM: /* Can't merge two expressions in different alias sets, since we can decide that the expression is transparent in a block when it isn't, due to it being set with the different alias set. */ if (MEM_ALIAS_SET (x) != MEM_ALIAS_SET (y)) return 0; break; /* For commutative operations, check both orders. 
*/ case PLUS: case MULT: case AND: case IOR: case XOR: case NE: case EQ: return ((expr_equiv_p (XEXP (x, 0), XEXP (y, 0)) && expr_equiv_p (XEXP (x, 1), XEXP (y, 1))) || (expr_equiv_p (XEXP (x, 0), XEXP (y, 1)) && expr_equiv_p (XEXP (x, 1), XEXP (y, 0)))); case ASM_OPERANDS: /* We don't use the generic code below because we want to disregard filename and line numbers. */ /* A volatile asm isn't equivalent to any other. */ if (MEM_VOLATILE_P (x) || MEM_VOLATILE_P (y)) return 0; if (GET_MODE (x) != GET_MODE (y) || strcmp (ASM_OPERANDS_TEMPLATE (x), ASM_OPERANDS_TEMPLATE (y)) || strcmp (ASM_OPERANDS_OUTPUT_CONSTRAINT (x), ASM_OPERANDS_OUTPUT_CONSTRAINT (y)) || ASM_OPERANDS_OUTPUT_IDX (x) != ASM_OPERANDS_OUTPUT_IDX (y) || ASM_OPERANDS_INPUT_LENGTH (x) != ASM_OPERANDS_INPUT_LENGTH (y)) return 0; if (ASM_OPERANDS_INPUT_LENGTH (x)) { for (i = ASM_OPERANDS_INPUT_LENGTH (x) - 1; i >= 0; i--) if (! expr_equiv_p (ASM_OPERANDS_INPUT (x, i), ASM_OPERANDS_INPUT (y, i)) || strcmp (ASM_OPERANDS_INPUT_CONSTRAINT (x, i), ASM_OPERANDS_INPUT_CONSTRAINT (y, i))) return 0; } return 1; default: break; } /* Compare the elements. If any pair of corresponding elements fail to match, return 0 for the whole thing. */ fmt = GET_RTX_FORMAT (code); for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--) { switch (fmt[i]) { case 'e': if (! expr_equiv_p (XEXP (x, i), XEXP (y, i))) return 0; break; case 'E': if (XVECLEN (x, i) != XVECLEN (y, i)) return 0; for (j = 0; j < XVECLEN (x, i); j++) if (! expr_equiv_p (XVECEXP (x, i, j), XVECEXP (y, i, j))) return 0; break; case 's': if (strcmp (XSTR (x, i), XSTR (y, i))) return 0; break; case 'i': if (XINT (x, i) != XINT (y, i)) return 0; break; case 'w': if (XWINT (x, i) != XWINT (y, i)) return 0; break; case '0': break; default: abort (); } } return 1; } /* Insert expression X in INSN in the hash TABLE. If it is already present, record it as the last occurrence in INSN's basic block. MODE is the mode of the value X is being stored into. 
It is only used if X is a CONST_INT. ANTIC_P is nonzero if X is an anticipatable expression. AVAIL_P is nonzero if X is an available expression. */ static void insert_expr_in_table (x, mode, insn, antic_p, avail_p, table) rtx x; enum machine_mode mode; rtx insn; int antic_p, avail_p; struct hash_table *table; { int found, do_not_record_p; unsigned int hash; struct expr *cur_expr, *last_expr = NULL; struct occr *antic_occr, *avail_occr; struct occr *last_occr = NULL; hash = hash_expr (x, mode, &do_not_record_p, table->size); /* Do not insert expression in table if it contains volatile operands, or if hash_expr determines the expression is something we don't want to or can't handle. */ if (do_not_record_p) return; cur_expr = table->table[hash]; found = 0; while (cur_expr && 0 == (found = expr_equiv_p (cur_expr->expr, x))) { /* If the expression isn't found, save a pointer to the end of the list. */ last_expr = cur_expr; cur_expr = cur_expr->next_same_hash; } if (! found) { cur_expr = (struct expr *) gcse_alloc (sizeof (struct expr)); bytes_used += sizeof (struct expr); if (table->table[hash] == NULL) /* This is the first pattern that hashed to this index. */ table->table[hash] = cur_expr; else /* Add EXPR to end of this hash chain. */ last_expr->next_same_hash = cur_expr; /* Set the fields of the expr element. */ cur_expr->expr = x; cur_expr->bitmap_index = table->n_elems++; cur_expr->next_same_hash = NULL; cur_expr->antic_occr = NULL; cur_expr->avail_occr = NULL; } /* Now record the occurrence(s). */ if (antic_p) { antic_occr = cur_expr->antic_occr; /* Search for another occurrence in the same basic block. */ while (antic_occr && BLOCK_NUM (antic_occr->insn) != BLOCK_NUM (insn)) { /* If an occurrence isn't found, save a pointer to the end of the list. */ last_occr = antic_occr; antic_occr = antic_occr->next; } if (antic_occr) /* Found another instance of the expression in the same basic block. Prefer the currently recorded one. 
We want the first one in the block and the block is scanned from start to end. */ ; /* nothing to do */ else { /* First occurrence of this expression in this basic block. */ antic_occr = (struct occr *) gcse_alloc (sizeof (struct occr)); bytes_used += sizeof (struct occr); /* First occurrence of this expression in any block? */ if (cur_expr->antic_occr == NULL) cur_expr->antic_occr = antic_occr; else last_occr->next = antic_occr; antic_occr->insn = insn; antic_occr->next = NULL; } } if (avail_p) { avail_occr = cur_expr->avail_occr; /* Search for another occurrence in the same basic block. */ while (avail_occr && BLOCK_NUM (avail_occr->insn) != BLOCK_NUM (insn)) { /* If an occurrence isn't found, save a pointer to the end of the list. */ last_occr = avail_occr; avail_occr = avail_occr->next; } if (avail_occr) /* Found another instance of the expression in the same basic block. Prefer this occurrence to the currently recorded one. We want the last one in the block and the block is scanned from start to end. */ avail_occr->insn = insn; else { /* First occurrence of this expression in this basic block. */ avail_occr = (struct occr *) gcse_alloc (sizeof (struct occr)); bytes_used += sizeof (struct occr); /* First occurrence of this expression in any block? */ if (cur_expr->avail_occr == NULL) cur_expr->avail_occr = avail_occr; else last_occr->next = avail_occr; avail_occr->insn = insn; avail_occr->next = NULL; } } } /* Insert pattern X in INSN in the hash table. X is a SET of a reg to either another reg or a constant. If it is already present, record it as the last occurrence in INSN's basic block. 
*/ static void insert_set_in_table (x, insn, table) rtx x; rtx insn; struct hash_table *table; { int found; unsigned int hash; struct expr *cur_expr, *last_expr = NULL; struct occr *cur_occr, *last_occr = NULL; if (GET_CODE (x) != SET || GET_CODE (SET_DEST (x)) != REG) abort (); hash = hash_set (REGNO (SET_DEST (x)), table->size); cur_expr = table->table[hash]; found = 0; while (cur_expr && 0 == (found = expr_equiv_p (cur_expr->expr, x))) { /* If the expression isn't found, save a pointer to the end of the list. */ last_expr = cur_expr; cur_expr = cur_expr->next_same_hash; } if (! found) { cur_expr = (struct expr *) gcse_alloc (sizeof (struct expr)); bytes_used += sizeof (struct expr); if (table->table[hash] == NULL) /* This is the first pattern that hashed to this index. */ table->table[hash] = cur_expr; else /* Add EXPR to end of this hash chain. */ last_expr->next_same_hash = cur_expr; /* Set the fields of the expr element. We must copy X because it can be modified when copy propagation is performed on its operands. */ cur_expr->expr = copy_rtx (x); cur_expr->bitmap_index = table->n_elems++; cur_expr->next_same_hash = NULL; cur_expr->antic_occr = NULL; cur_expr->avail_occr = NULL; } /* Now record the occurrence. */ cur_occr = cur_expr->avail_occr; /* Search for another occurrence in the same basic block. */ while (cur_occr && BLOCK_NUM (cur_occr->insn) != BLOCK_NUM (insn)) { /* If an occurrence isn't found, save a pointer to the end of the list. */ last_occr = cur_occr; cur_occr = cur_occr->next; } if (cur_occr) /* Found another instance of the expression in the same basic block. Prefer this occurrence to the currently recorded one. We want the last one in the block and the block is scanned from start to end. */ cur_occr->insn = insn; else { /* First occurrence of this expression in this basic block. */ cur_occr = (struct occr *) gcse_alloc (sizeof (struct occr)); bytes_used += sizeof (struct occr); /* First occurrence of this expression in any block? 
*/ if (cur_expr->avail_occr == NULL) cur_expr->avail_occr = cur_occr; else last_occr->next = cur_occr; cur_occr->insn = insn; cur_occr->next = NULL; } } /* Scan pattern PAT of INSN and add an entry to the hash TABLE (set or expression one). */ static void hash_scan_set (pat, insn, table) rtx pat, insn; struct hash_table *table; { rtx src = SET_SRC (pat); rtx dest = SET_DEST (pat); rtx note; if (GET_CODE (src) == CALL) hash_scan_call (src, insn, table); else if (GET_CODE (dest) == REG) { unsigned int regno = REGNO (dest); rtx tmp; /* If this is a single set and we are doing constant propagation, see if a REG_NOTE shows this equivalent to a constant. */ if (table->set_p && (note = find_reg_equal_equiv_note (insn)) != 0 && CONSTANT_P (XEXP (note, 0))) src = XEXP (note, 0), pat = gen_rtx_SET (VOIDmode, dest, src); /* Only record sets of pseudo-regs in the hash table. */ if (! table->set_p && regno >= FIRST_PSEUDO_REGISTER /* Don't GCSE something if we can't do a reg/reg copy. */ && can_copy_p [GET_MODE (dest)] /* GCSE commonly inserts an instruction after the insn. We can't do that easily for EH_REGION notes so disable GCSE on these for now. */ && !find_reg_note (insn, REG_EH_REGION, NULL_RTX) /* Is SET_SRC something we want to gcse? */ && want_to_gcse_p (src) /* Don't CSE a nop. */ && ! set_noop_p (pat) /* Don't GCSE if it has an attached REG_EQUIV note. At this point only function parameters should have REG_EQUIV notes, and if the argument slot is used somewhere explicitly, it means the address of the parameter has been taken, so we should not extend the lifetime of the pseudo. */ && ((note = find_reg_note (insn, REG_EQUIV, NULL_RTX)) == 0 || GET_CODE (XEXP (note, 0)) != MEM)) { /* An expression is not anticipatable if its operands are modified before this insn or if this is not the only SET in this insn. */ int antic_p = oprs_anticipatable_p (src, insn) && single_set (insn); /* An expression is not available if its operands are subsequently modified, including this insn. 
It's also not available if this is a branch, because we can't insert a set after the branch. */ int avail_p = (oprs_available_p (src, insn) && ! JUMP_P (insn)); insert_expr_in_table (src, GET_MODE (dest), insn, antic_p, avail_p, table); } /* Record sets for constant/copy propagation. */ else if (table->set_p && regno >= FIRST_PSEUDO_REGISTER && ((GET_CODE (src) == REG && REGNO (src) >= FIRST_PSEUDO_REGISTER && can_copy_p [GET_MODE (dest)] && REGNO (src) != regno) || CONSTANT_P (src)) /* A copy is not available if its src or dest is subsequently modified. Here we want to search from INSN+1 on, but oprs_available_p searches from INSN on. */ && (insn == BLOCK_END (BLOCK_NUM (insn)) || ((tmp = next_nonnote_insn (insn)) != NULL_RTX && oprs_available_p (pat, tmp)))) insert_set_in_table (pat, insn, table); } } static void hash_scan_clobber (x, insn, table) rtx x ATTRIBUTE_UNUSED, insn ATTRIBUTE_UNUSED; struct hash_table *table ATTRIBUTE_UNUSED; { /* Currently nothing to do. */ } static void hash_scan_call (x, insn, table) rtx x ATTRIBUTE_UNUSED, insn ATTRIBUTE_UNUSED; struct hash_table *table ATTRIBUTE_UNUSED; { /* Currently nothing to do. */ } /* Process INSN and add hash table entries as appropriate. Only available expressions that set a single pseudo-reg are recorded. Single sets in a PARALLEL could be handled, but it's an extra complication that isn't dealt with right now. The trick is handling the CLOBBERs that are also in the PARALLEL. Later. If SET_P is nonzero, this is for the assignment hash table, otherwise it is for the expression hash table. If IN_LIBCALL_BLOCK nonzero, we are in a libcall block, and should not record any expressions. */ static void hash_scan_insn (insn, table, in_libcall_block) rtx insn; struct hash_table *table; int in_libcall_block; { rtx pat = PATTERN (insn); int i; if (in_libcall_block) return; /* Pick out the sets of INSN and for other forms of instructions record what's been modified. 
*/ if (GET_CODE (pat) == SET) hash_scan_set (pat, insn, table); else if (GET_CODE (pat) == PARALLEL) for (i = 0; i < XVECLEN (pat, 0); i++) { rtx x = XVECEXP (pat, 0, i); if (GET_CODE (x) == SET) hash_scan_set (x, insn, table); else if (GET_CODE (x) == CLOBBER) hash_scan_clobber (x, insn, table); else if (GET_CODE (x) == CALL) hash_scan_call (x, insn, table); } else if (GET_CODE (pat) == CLOBBER) hash_scan_clobber (pat, insn, table); else if (GET_CODE (pat) == CALL) hash_scan_call (pat, insn, table); } static void dump_hash_table (file, name, table) FILE *file; const char *name; struct hash_table *table; { int i; /* Flattened out table, so it's printed in proper order. */ struct expr **flat_table; unsigned int *hash_val; struct expr *expr; flat_table = (struct expr **) xcalloc (table->n_elems, sizeof (struct expr *)); hash_val = (unsigned int *) xmalloc (table->n_elems * sizeof (unsigned int)); for (i = 0; i < (int) table->size; i++) for (expr = table->table[i]; expr != NULL; expr = expr->next_same_hash) { flat_table[expr->bitmap_index] = expr; hash_val[expr->bitmap_index] = i; } fprintf (file, "%s hash table (%d buckets, %d entries)\n", name, table->size, table->n_elems); for (i = 0; i < (int) table->n_elems; i++) if (flat_table[i] != 0) { expr = flat_table[i]; fprintf (file, "Index %d (hash value %d)\n ", expr->bitmap_index, hash_val[i]); print_rtl (file, expr->expr); fprintf (file, "\n"); } fprintf (file, "\n"); free (flat_table); free (hash_val); } /* Record register first/last/block set information for REGNO in INSN. first_set records the first place in the block where the register is set and is used to compute "anticipatability". last_set records the last place in the block where the register is set and is used to compute "availability". last_bb records the block for which first_set and last_set are valid, as a quick test to invalidate them. reg_set_in_block records whether the register is set in the block and is used to compute "transparency". 
*/ static void record_last_reg_set_info (insn, regno) rtx insn; int regno; { struct reg_avail_info *info = &reg_avail_info[regno]; int cuid = INSN_CUID (insn); info->last_set = cuid; if (info->last_bb != current_bb) { info->last_bb = current_bb; info->first_set = cuid; SET_BIT (reg_set_in_block[current_bb->index], regno); } } /* Record all of the canonicalized MEMs of record_last_mem_set_info's insn. Note we store a pair of elements in the list, so they have to be taken off pairwise. */ static void canon_list_insert (dest, unused1, v_insn) rtx dest ATTRIBUTE_UNUSED; rtx unused1 ATTRIBUTE_UNUSED; void * v_insn; { rtx dest_addr, insn; int bb; while (GET_CODE (dest) == SUBREG || GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT || GET_CODE (dest) == STRICT_LOW_PART) dest = XEXP (dest, 0); /* If DEST is not a MEM, then it will not conflict with a load. Note that function calls are assumed to clobber memory, but are handled elsewhere. */ if (GET_CODE (dest) != MEM) return; dest_addr = get_addr (XEXP (dest, 0)); dest_addr = canon_rtx (dest_addr); insn = (rtx) v_insn; bb = BLOCK_NUM (insn); canon_modify_mem_list[bb] = alloc_EXPR_LIST (VOIDmode, dest_addr, canon_modify_mem_list[bb]); canon_modify_mem_list[bb] = alloc_EXPR_LIST (VOIDmode, dest, canon_modify_mem_list[bb]); bitmap_set_bit (canon_modify_mem_list_set, bb); } /* Record memory modification information for INSN. We do not actually care about the memory location(s) that are set, or even how they are set (consider a CALL_INSN). We merely need to record which insns modify memory. */ static void record_last_mem_set_info (insn) rtx insn; { int bb = BLOCK_NUM (insn); /* load_killed_in_block_p will handle the case of calls clobbering everything. */ modify_mem_list[bb] = alloc_INSN_LIST (insn, modify_mem_list[bb]); bitmap_set_bit (modify_mem_list_set, bb); if (GET_CODE (insn) == CALL_INSN) { /* Note that traversals of this loop (other than for free-ing) will break after encountering a CALL_INSN. 
So, there's no need to insert a pair of items, as canon_list_insert does. */ canon_modify_mem_list[bb] = alloc_INSN_LIST (insn, canon_modify_mem_list[bb]); bitmap_set_bit (canon_modify_mem_list_set, bb); } else note_stores (PATTERN (insn), canon_list_insert, (void*) insn); } /* Called from compute_hash_table via note_stores to handle one SET or CLOBBER in an insn. DATA is really the instruction in which the SET is taking place. */ static void record_last_set_info (dest, setter, data) rtx dest, setter ATTRIBUTE_UNUSED; void *data; { rtx last_set_insn = (rtx) data; if (GET_CODE (dest) == SUBREG) dest = SUBREG_REG (dest); if (GET_CODE (dest) == REG) record_last_reg_set_info (last_set_insn, REGNO (dest)); else if (GET_CODE (dest) == MEM /* Ignore pushes, they clobber nothing. */ && ! push_operand (dest, GET_MODE (dest))) record_last_mem_set_info (last_set_insn); } /* Top level function to create an expression or assignment hash table. Expression entries are placed in the hash table if - they are of the form (set (pseudo-reg) src), - src is something we want to perform GCSE on, - none of the operands are subsequently modified in the block Assignment entries are placed in the hash table if - they are of the form (set (pseudo-reg) src), - src is something we want to perform const/copy propagation on, - none of the operands or target are subsequently modified in the block Currently src must be a pseudo-reg or a const_int. F is the first insn. TABLE is the table computed. */ static void compute_hash_table_work (table) struct hash_table *table; { unsigned int i; /* While we compute the hash table we also compute a bit array of which registers are set in which blocks. ??? This isn't needed during const/copy propagation, but it's cheap to compute. Later. */ sbitmap_vector_zero (reg_set_in_block, last_basic_block); /* re-Cache any INSN_LIST nodes we have allocated. */ clear_modify_mem_tables (); /* Some working arrays used to track first and last set in each block. 
*/ reg_avail_info = (struct reg_avail_info*) gmalloc (max_gcse_regno * sizeof (struct reg_avail_info)); for (i = 0; i < max_gcse_regno; ++i) reg_avail_info[i].last_bb = NULL; FOR_EACH_BB (current_bb) { rtx insn; unsigned int regno; int in_libcall_block; /* First pass over the instructions records information used to determine when registers and memory are first and last set. ??? hard-reg reg_set_in_block computation could be moved to compute_sets since they currently don't change. */ for (insn = current_bb->head; insn && insn != NEXT_INSN (current_bb->end); insn = NEXT_INSN (insn)) { if (! INSN_P (insn)) continue; if (GET_CODE (insn) == CALL_INSN) { bool clobbers_all = false; #ifdef NON_SAVING_SETJMP if (NON_SAVING_SETJMP && find_reg_note (insn, REG_SETJMP, NULL_RTX)) clobbers_all = true; #endif for (regno = 0; regno < FIRST_PSEUDO_REGISTER; regno++) if (clobbers_all || TEST_HARD_REG_BIT (regs_invalidated_by_call, regno)) record_last_reg_set_info (insn, regno); mark_call (insn); } note_stores (PATTERN (insn), record_last_set_info, insn); } /* The next pass builds the hash table. */ for (insn = current_bb->head, in_libcall_block = 0; insn && insn != NEXT_INSN (current_bb->end); insn = NEXT_INSN (insn)) if (INSN_P (insn)) { if (find_reg_note (insn, REG_LIBCALL, NULL_RTX)) in_libcall_block = 1; else if (table->set_p && find_reg_note (insn, REG_RETVAL, NULL_RTX)) in_libcall_block = 0; hash_scan_insn (insn, table, in_libcall_block); if (!table->set_p && find_reg_note (insn, REG_RETVAL, NULL_RTX)) in_libcall_block = 0; } } free (reg_avail_info); reg_avail_info = NULL; } /* Allocate space for the set/expr hash TABLE. N_INSNS is the number of instructions in the function. It is used to determine the number of buckets to use. SET_P determines whether set or expression table will be created. 
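   As a worked example of the sizing below (illustrative, not part of the
   original comment): with N_INSNS = 1000 the table gets 1000/4 = 250
   buckets, which the `|= 1' then forces odd, giving 251; with N_INSNS = 32
   the computed 8 is raised to the minimum of 11, which is already odd.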
*/ static void alloc_hash_table (n_insns, table, set_p) int n_insns; struct hash_table *table; int set_p; { int n; table->size = n_insns / 4; if (table->size < 11) table->size = 11; /* Attempt to maintain efficient use of hash table. Making it an odd number is simplest for now. ??? Later take some measurements. */ table->size |= 1; n = table->size * sizeof (struct expr *); table->table = (struct expr **) gmalloc (n); table->set_p = set_p; } /* Free things allocated by alloc_hash_table. */ static void free_hash_table (table) struct hash_table *table; { free (table->table); } /* Compute the hash TABLE for doing copy/const propagation or expression hash table. */ static void compute_hash_table (table) struct hash_table *table; { /* Initialize count of number of entries in hash table. */ table->n_elems = 0; memset ((char *) table->table, 0, table->size * sizeof (struct expr *)); compute_hash_table_work (table); } /* Expression tracking support. */ /* Lookup pattern PAT in the expression TABLE. The result is a pointer to the table entry, or NULL if not found. */ static struct expr * lookup_expr (pat, table) rtx pat; struct hash_table *table; { int do_not_record_p; unsigned int hash = hash_expr (pat, GET_MODE (pat), &do_not_record_p, table->size); struct expr *expr; if (do_not_record_p) return NULL; expr = table->table[hash]; while (expr && ! expr_equiv_p (expr->expr, pat)) expr = expr->next_same_hash; return expr; } /* Lookup REGNO in the set TABLE. If PAT is non-NULL look for the entry that matches it, otherwise return the first entry for REGNO. The result is a pointer to the table entry, or NULL if not found. */ static struct expr * lookup_set (regno, pat, table) unsigned int regno; rtx pat; struct hash_table *table; { unsigned int hash = hash_set (regno, table->size); struct expr *expr; expr = table->table[hash]; if (pat) { while (expr && ! 
expr_equiv_p (expr->expr, pat)) expr = expr->next_same_hash; } else { while (expr && REGNO (SET_DEST (expr->expr)) != regno) expr = expr->next_same_hash; } return expr; } /* Return the next entry for REGNO in list EXPR. */ static struct expr * next_set (regno, expr) unsigned int regno; struct expr *expr; { do expr = expr->next_same_hash; while (expr && REGNO (SET_DEST (expr->expr)) != regno); return expr; } /* Like free_INSN_LIST_list or free_EXPR_LIST_list, except that the node types may be mixed. */ static void free_insn_expr_list_list (listp) rtx *listp; { rtx list, next; for (list = *listp; list ; list = next) { next = XEXP (list, 1); if (GET_CODE (list) == EXPR_LIST) free_EXPR_LIST_node (list); else free_INSN_LIST_node (list); } *listp = NULL; } /* Clear canon_modify_mem_list and modify_mem_list tables. */ static void clear_modify_mem_tables () { int i; EXECUTE_IF_SET_IN_BITMAP (modify_mem_list_set, 0, i, free_INSN_LIST_list (modify_mem_list + i)); bitmap_clear (modify_mem_list_set); EXECUTE_IF_SET_IN_BITMAP (canon_modify_mem_list_set, 0, i, free_insn_expr_list_list (canon_modify_mem_list + i)); bitmap_clear (canon_modify_mem_list_set); } /* Release memory used by modify_mem_list_set and canon_modify_mem_list_set. */ static void free_modify_mem_tables () { clear_modify_mem_tables (); free (modify_mem_list); free (canon_modify_mem_list); modify_mem_list = 0; canon_modify_mem_list = 0; } /* Reset tables used to keep track of what's still available [since the start of the block]. */ static void reset_opr_set_tables () { /* Maintain a bitmap of which regs have been set since beginning of the block. */ CLEAR_REG_SET (reg_set_bitmap); /* Also keep a record of the last instruction to modify memory. For now this is very trivial, we only record whether any memory location has been modified. */ clear_modify_mem_tables (); } /* Return nonzero if the operands of X are not set before INSN in INSN's basic block. 
*/ static int oprs_not_set_p (x, insn) rtx x, insn; { int i, j; enum rtx_code code; const char *fmt; if (x == 0) return 1; code = GET_CODE (x); switch (code) { case PC: case CC0: case CONST: case CONST_INT: case CONST_DOUBLE: case CONST_VECTOR: case SYMBOL_REF: case LABEL_REF: case ADDR_VEC: case ADDR_DIFF_VEC: return 1; case MEM: if (load_killed_in_block_p (BLOCK_FOR_INSN (insn), INSN_CUID (insn), x, 0)) return 0; else return oprs_not_set_p (XEXP (x, 0), insn); case REG: return ! REGNO_REG_SET_P (reg_set_bitmap, REGNO (x)); default: break; } for (i = GET_RTX_LENGTH (code) - 1, fmt = GET_RTX_FORMAT (code); i >= 0; i--) { if (fmt[i] == 'e') { /* If we are about to do the last recursive call needed at this level, change it into iteration. This function is called enough to be worth it. */ if (i == 0) return oprs_not_set_p (XEXP (x, i), insn); if (! oprs_not_set_p (XEXP (x, i), insn)) return 0; } else if (fmt[i] == 'E') for (j = 0; j < XVECLEN (x, i); j++) if (! oprs_not_set_p (XVECEXP (x, i, j), insn)) return 0; } return 1; } /* Mark things set by a CALL. */ static void mark_call (insn) rtx insn; { if (! CONST_OR_PURE_CALL_P (insn)) record_last_mem_set_info (insn); } /* Mark things set by a SET. */ static void mark_set (pat, insn) rtx pat, insn; { rtx dest = SET_DEST (pat); while (GET_CODE (dest) == SUBREG || GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT || GET_CODE (dest) == STRICT_LOW_PART) dest = XEXP (dest, 0); if (GET_CODE (dest) == REG) SET_REGNO_REG_SET (reg_set_bitmap, REGNO (dest)); else if (GET_CODE (dest) == MEM) record_last_mem_set_info (insn); if (GET_CODE (SET_SRC (pat)) == CALL) mark_call (insn); } /* Record things set by a CLOBBER. 
*/ static void mark_clobber (pat, insn) rtx pat, insn; { rtx clob = XEXP (pat, 0); while (GET_CODE (clob) == SUBREG || GET_CODE (clob) == STRICT_LOW_PART) clob = XEXP (clob, 0); if (GET_CODE (clob) == REG) SET_REGNO_REG_SET (reg_set_bitmap, REGNO (clob)); else record_last_mem_set_info (insn); } /* Record things set by INSN. This data is used by oprs_not_set_p. */ static void mark_oprs_set (insn) rtx insn; { rtx pat = PATTERN (insn); int i; if (GET_CODE (pat) == SET) mark_set (pat, insn); else if (GET_CODE (pat) == PARALLEL) for (i = 0; i < XVECLEN (pat, 0); i++) { rtx x = XVECEXP (pat, 0, i); if (GET_CODE (x) == SET) mark_set (x, insn); else if (GET_CODE (x) == CLOBBER) mark_clobber (x, insn); else if (GET_CODE (x) == CALL) mark_call (insn); } else if (GET_CODE (pat) == CLOBBER) mark_clobber (pat, insn); else if (GET_CODE (pat) == CALL) mark_call (insn); } /* Classic GCSE reaching definition support. */ /* Allocate reaching def variables. */ static void alloc_rd_mem (n_blocks, n_insns) int n_blocks, n_insns; { rd_kill = (sbitmap *) sbitmap_vector_alloc (n_blocks, n_insns); sbitmap_vector_zero (rd_kill, n_blocks); rd_gen = (sbitmap *) sbitmap_vector_alloc (n_blocks, n_insns); sbitmap_vector_zero (rd_gen, n_blocks); reaching_defs = (sbitmap *) sbitmap_vector_alloc (n_blocks, n_insns); sbitmap_vector_zero (reaching_defs, n_blocks); rd_out = (sbitmap *) sbitmap_vector_alloc (n_blocks, n_insns); sbitmap_vector_zero (rd_out, n_blocks); } /* Free reaching def variables. */ static void free_rd_mem () { sbitmap_vector_free (rd_kill); sbitmap_vector_free (rd_gen); sbitmap_vector_free (reaching_defs); sbitmap_vector_free (rd_out); } /* Add INSN to the kills of BB. REGNO, set in BB, is killed by INSN. 
*/ static void handle_rd_kill_set (insn, regno, bb) rtx insn; int regno; basic_block bb; { struct reg_set *this_reg; for (this_reg = reg_set_table[regno]; this_reg; this_reg = this_reg->next) if (BLOCK_NUM (this_reg->insn) != BLOCK_NUM (insn)) SET_BIT (rd_kill[bb->index], INSN_CUID (this_reg->insn)); } /* Compute the set of kills for reaching definitions. */ static void compute_kill_rd () { int cuid; unsigned int regno; int i; basic_block bb; /* For each block For each set bit in `gen' of the block (i.e. each insn which generates a definition in the block) Call the reg set by the insn corresponding to that bit regx Look at the linked list starting at reg_set_table[regx] For each setting of regx in the linked list, which is not in this block Set the bit in `kill' corresponding to that insn. */ FOR_EACH_BB (bb) for (cuid = 0; cuid < max_cuid; cuid++) if (TEST_BIT (rd_gen[bb->index], cuid)) { rtx insn = CUID_INSN (cuid); rtx pat = PATTERN (insn); if (GET_CODE (insn) == CALL_INSN) { for (regno = 0; regno < FIRST_PSEUDO_REGISTER; regno++) if (TEST_HARD_REG_BIT (regs_invalidated_by_call, regno)) handle_rd_kill_set (insn, regno, bb); } if (GET_CODE (pat) == PARALLEL) { for (i = XVECLEN (pat, 0) - 1; i >= 0; i--) { enum rtx_code code = GET_CODE (XVECEXP (pat, 0, i)); if ((code == SET || code == CLOBBER) && GET_CODE (XEXP (XVECEXP (pat, 0, i), 0)) == REG) handle_rd_kill_set (insn, REGNO (XEXP (XVECEXP (pat, 0, i), 0)), bb); } } else if (GET_CODE (pat) == SET && GET_CODE (SET_DEST (pat)) == REG) /* Each setting of this register outside of this block must be marked in the set of kills in this block. */ handle_rd_kill_set (insn, REGNO (SET_DEST (pat)), bb); } } /* Compute the reaching definitions as in Compilers: Principles, Techniques, and Tools. Aho, Sethi, Ullman, Chapter 10. It is the same algorithm as used for computing available expressions but applied to the gens and kills of reaching definitions.
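   As a sketch of the system being solved (a summary, not part of the
   original comment), with gen[b] and kill[b] as computed above:
       in[b]  = union over each predecessor p of b of out[p]
       out[b] = gen[b] U (in[b] - kill[b])
   The loop below iterates these equations to a fixed point: reaching_defs[]
   plays the role of in[], rd_out[] plays the role of out[], and
   sbitmap_union_of_diff_cg computes gen U (in - kill) while reporting
   whether the result changed.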
*/ static void compute_rd () { int changed, passes; basic_block bb; FOR_EACH_BB (bb) sbitmap_copy (rd_out[bb->index] /*dst*/, rd_gen[bb->index] /*src*/); passes = 0; changed = 1; while (changed) { changed = 0; FOR_EACH_BB (bb) { sbitmap_union_of_preds (reaching_defs[bb->index], rd_out, bb->index); changed |= sbitmap_union_of_diff_cg (rd_out[bb->index], rd_gen[bb->index], reaching_defs[bb->index], rd_kill[bb->index]); } passes++; } if (gcse_file) fprintf (gcse_file, "reaching def computation: %d passes\n", passes); } /* Classic GCSE available expression support. */ /* Allocate memory for available expression computation. */ static void alloc_avail_expr_mem (n_blocks, n_exprs) int n_blocks, n_exprs; { ae_kill = (sbitmap *) sbitmap_vector_alloc (n_blocks, n_exprs); sbitmap_vector_zero (ae_kill, n_blocks); ae_gen = (sbitmap *) sbitmap_vector_alloc (n_blocks, n_exprs); sbitmap_vector_zero (ae_gen, n_blocks); ae_in = (sbitmap *) sbitmap_vector_alloc (n_blocks, n_exprs); sbitmap_vector_zero (ae_in, n_blocks); ae_out = (sbitmap *) sbitmap_vector_alloc (n_blocks, n_exprs); sbitmap_vector_zero (ae_out, n_blocks); } static void free_avail_expr_mem () { sbitmap_vector_free (ae_kill); sbitmap_vector_free (ae_gen); sbitmap_vector_free (ae_in); sbitmap_vector_free (ae_out); } /* Compute the set of available expressions generated in each basic block. */ static void compute_ae_gen (expr_hash_table) struct hash_table *expr_hash_table; { unsigned int i; struct expr *expr; struct occr *occr; /* For each recorded occurrence of each expression, set ae_gen[bb][expr]. This is all we have to do because an expression is not recorded if it is not available, and the only expressions we want to work with are the ones that are recorded. 
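   (For reference, a sketch that is not part of the original comment: the
   global availability computed later by compute_available solves
       avin[b]  = intersection over each predecessor p of b of avout[p]
       avout[b] = ae_gen[b] U (avin[b] - ae_kill[b])
   so availability uses intersection over predecessors where reaching
   definitions used union.)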
*/ for (i = 0; i < expr_hash_table->size; i++) for (expr = expr_hash_table->table[i]; expr != 0; expr = expr->next_same_hash) for (occr = expr->avail_occr; occr != 0; occr = occr->next) SET_BIT (ae_gen[BLOCK_NUM (occr->insn)], expr->bitmap_index); } /* Return nonzero if expression X is killed in BB. */ static int expr_killed_p (x, bb) rtx x; basic_block bb; { int i, j; enum rtx_code code; const char *fmt; if (x == 0) return 1; code = GET_CODE (x); switch (code) { case REG: return TEST_BIT (reg_set_in_block[bb->index], REGNO (x)); case MEM: if (load_killed_in_block_p (bb, get_max_uid () + 1, x, 0)) return 1; else return expr_killed_p (XEXP (x, 0), bb); case PC: case CC0: /*FIXME*/ case CONST: case CONST_INT: case CONST_DOUBLE: case CONST_VECTOR: case SYMBOL_REF: case LABEL_REF: case ADDR_VEC: case ADDR_DIFF_VEC: return 0; default: break; } for (i = GET_RTX_LENGTH (code) - 1, fmt = GET_RTX_FORMAT (code); i >= 0; i--) { if (fmt[i] == 'e') { /* If we are about to do the last recursive call needed at this level, change it into iteration. This function is called enough to be worth it. */ if (i == 0) return expr_killed_p (XEXP (x, i), bb); else if (expr_killed_p (XEXP (x, i), bb)) return 1; } else if (fmt[i] == 'E') for (j = 0; j < XVECLEN (x, i); j++) if (expr_killed_p (XVECEXP (x, i, j), bb)) return 1; } return 0; } /* Compute the set of available expressions killed in each basic block. */ static void compute_ae_kill (ae_gen, ae_kill, expr_hash_table) sbitmap *ae_gen, *ae_kill; struct hash_table *expr_hash_table; { basic_block bb; unsigned int i; struct expr *expr; FOR_EACH_BB (bb) for (i = 0; i < expr_hash_table->size; i++) for (expr = expr_hash_table->table[i]; expr; expr = expr->next_same_hash) { /* Skip EXPR if generated in this block. */ if (TEST_BIT (ae_gen[bb->index], expr->bitmap_index)) continue; if (expr_killed_p (expr->expr, bb)) SET_BIT (ae_kill[bb->index], expr->bitmap_index); } } /* Actually perform the Classic GCSE optimizations. 
*/ /* Return nonzero if occurrence OCCR of expression EXPR reaches block BB. CHECK_SELF_LOOP is nonzero if we should consider a block reaching itself as a positive reach. We want to do this when there are two computations of the expression in the block. VISITED is a pointer to a working buffer for tracking which BB's have been visited. It is NULL for the top-level call. We treat reaching expressions that go through blocks containing the same reaching expression as "not reaching". E.g. if EXPR is generated in blocks 2 and 3, INSN is in block 4, and 2->3->4, we treat the expression in block 2 as not reaching. The intent is to improve the probability of finding only one reaching expression and to reduce register lifetimes by picking the closest such expression. */ static int expr_reaches_here_p_work (occr, expr, bb, check_self_loop, visited) struct occr *occr; struct expr *expr; basic_block bb; int check_self_loop; char *visited; { edge pred; for (pred = bb->pred; pred != NULL; pred = pred->pred_next) { basic_block pred_bb = pred->src; if (visited[pred_bb->index]) /* This predecessor has already been visited. Nothing to do. */ ; else if (pred_bb == bb) { /* BB loops on itself. */ if (check_self_loop && TEST_BIT (ae_gen[pred_bb->index], expr->bitmap_index) && BLOCK_NUM (occr->insn) == pred_bb->index) return 1; visited[pred_bb->index] = 1; } /* Ignore this predecessor if it kills the expression. */ else if (TEST_BIT (ae_kill[pred_bb->index], expr->bitmap_index)) visited[pred_bb->index] = 1; /* Does this predecessor generate this expression? */ else if (TEST_BIT (ae_gen[pred_bb->index], expr->bitmap_index)) { /* Is this the occurrence we're looking for? Note that there's only one generating occurrence per block so we just need to check the block number. */ if (BLOCK_NUM (occr->insn) == pred_bb->index) return 1; visited[pred_bb->index] = 1; } /* Neither gen nor kill. 
*/ else { visited[pred_bb->index] = 1; if (expr_reaches_here_p_work (occr, expr, pred_bb, check_self_loop, visited)) return 1; } } /* All paths have been checked. */ return 0; } /* This wrapper for expr_reaches_here_p_work() is to ensure that any memory allocated for that function is returned. */ static int expr_reaches_here_p (occr, expr, bb, check_self_loop) struct occr *occr; struct expr *expr; basic_block bb; int check_self_loop; { int rval; char *visited = (char *) xcalloc (last_basic_block, 1); rval = expr_reaches_here_p_work (occr, expr, bb, check_self_loop, visited); free (visited); return rval; } /* Return the instruction that computes EXPR that reaches INSN's basic block. If there is more than one such instruction, return NULL. Called only by handle_avail_expr. */ static rtx computing_insn (expr, insn) struct expr *expr; rtx insn; { basic_block bb = BLOCK_FOR_INSN (insn); if (expr->avail_occr->next == NULL) { if (BLOCK_FOR_INSN (expr->avail_occr->insn) == bb) /* The available expression is actually itself (i.e. a loop in the flow graph) so do nothing. */ return NULL; /* (FIXME) Case that we found a pattern that was created by a substitution that took place. */ return expr->avail_occr->insn; } else { /* Pattern is computed more than once. Search backwards from this insn to see how many of these computations actually reach this insn. */ struct occr *occr; rtx insn_computes_expr = NULL; int can_reach = 0; for (occr = expr->avail_occr; occr != NULL; occr = occr->next) { if (BLOCK_FOR_INSN (occr->insn) == bb) { /* The expression is generated in this block. The only time we care about this is when the expression is generated later in the block [and thus there's a loop]. We let the normal cse pass handle the other cases. 
*/ if (INSN_CUID (insn) < INSN_CUID (occr->insn) && expr_reaches_here_p (occr, expr, bb, 1)) { can_reach++; if (can_reach > 1) return NULL; insn_computes_expr = occr->insn; } } else if (expr_reaches_here_p (occr, expr, bb, 0)) { can_reach++; if (can_reach > 1) return NULL; insn_computes_expr = occr->insn; } } if (insn_computes_expr == NULL) abort (); return insn_computes_expr; } } /* Return nonzero if the definition in DEF_INSN can reach INSN. Only called by can_disregard_other_sets. */ static int def_reaches_here_p (insn, def_insn) rtx insn, def_insn; { rtx reg; if (TEST_BIT (reaching_defs[BLOCK_NUM (insn)], INSN_CUID (def_insn))) return 1; if (BLOCK_NUM (insn) == BLOCK_NUM (def_insn)) { if (INSN_CUID (def_insn) < INSN_CUID (insn)) { if (GET_CODE (PATTERN (def_insn)) == PARALLEL) return 1; else if (GET_CODE (PATTERN (def_insn)) == CLOBBER) reg = XEXP (PATTERN (def_insn), 0); else if (GET_CODE (PATTERN (def_insn)) == SET) reg = SET_DEST (PATTERN (def_insn)); else abort (); return ! reg_set_between_p (reg, NEXT_INSN (def_insn), insn); } else return 0; } return 0; } /* Return nonzero if *ADDR_THIS_REG can only have one value at INSN. The value returned is the number of definitions that reach INSN. Returning a value of zero means that [maybe] more than one definition reaches INSN and the caller can't perform whatever optimization it is trying. i.e. it is always safe to return zero. */ static int can_disregard_other_sets (addr_this_reg, insn, for_combine) struct reg_set **addr_this_reg; rtx insn; int for_combine; { int number_of_reaching_defs = 0; struct reg_set *this_reg; for (this_reg = *addr_this_reg; this_reg != 0; this_reg = this_reg->next) if (def_reaches_here_p (insn, this_reg->insn)) { number_of_reaching_defs++; /* Ignore parallels for now. */ if (GET_CODE (PATTERN (this_reg->insn)) == PARALLEL) return 0; if (!for_combine && (GET_CODE (PATTERN (this_reg->insn)) == CLOBBER || ! 
rtx_equal_p (SET_SRC (PATTERN (this_reg->insn)), SET_SRC (PATTERN (insn))))) /* A setting of the reg to a different value reaches INSN. */ return 0; if (number_of_reaching_defs > 1) { /* If in this setting the value the register is being set to is equal to the previous value the register was set to and this setting reaches the insn we are trying to do the substitution on then we are ok. */ if (GET_CODE (PATTERN (this_reg->insn)) == CLOBBER) return 0; else if (! rtx_equal_p (SET_SRC (PATTERN (this_reg->insn)), SET_SRC (PATTERN (insn)))) return 0; } *addr_this_reg = this_reg; } return number_of_reaching_defs; } /* Expression computed by insn is available and the substitution is legal, so try to perform the substitution. The result is nonzero if any changes were made. */ static int handle_avail_expr (insn, expr) rtx insn; struct expr *expr; { rtx pat, insn_computes_expr, expr_set; rtx to; struct reg_set *this_reg; int found_setting, use_src; int changed = 0; /* We only handle the case where one computation of the expression reaches this instruction. */ insn_computes_expr = computing_insn (expr, insn); if (insn_computes_expr == NULL) return 0; expr_set = single_set (insn_computes_expr); if (!expr_set) abort (); found_setting = 0; use_src = 0; /* At this point we know only one computation of EXPR outside of this block reaches this insn. Now try to find a register that the expression is computed into. */ if (GET_CODE (SET_SRC (expr_set)) == REG) { /* This is the case when the available expression that reaches here has already been handled as an available expression. */ unsigned int regnum_for_replacing = REGNO (SET_SRC (expr_set)); /* If the register was created by GCSE we can't use `reg_set_table', however we know it's set only once. */ if (regnum_for_replacing >= max_gcse_regno /* If the register the expression is computed into is set only once, or only one set reaches this insn, we can use it. 
|| (((this_reg = reg_set_table[regnum_for_replacing]), this_reg->next == NULL) || can_disregard_other_sets (&this_reg, insn, 0))) { use_src = 1; found_setting = 1; } } if (!found_setting) { unsigned int regnum_for_replacing = REGNO (SET_DEST (expr_set)); /* This shouldn't happen. */ if (regnum_for_replacing >= max_gcse_regno) abort (); this_reg = reg_set_table[regnum_for_replacing]; /* If the register the expression is computed into is set only once, or only one set reaches this insn, use it. */ if (this_reg->next == NULL || can_disregard_other_sets (&this_reg, insn, 0)) found_setting = 1; } if (found_setting) { pat = PATTERN (insn); if (use_src) to = SET_SRC (expr_set); else to = SET_DEST (expr_set); changed = validate_change (insn, &SET_SRC (pat), to, 0); /* We should be able to ignore the return code from validate_change but to play it safe we check. */ if (changed) { gcse_subst_count++; if (gcse_file != NULL) { fprintf (gcse_file, "GCSE: Replacing the source in insn %d with", INSN_UID (insn)); fprintf (gcse_file, " reg %d %s insn %d\n", REGNO (to), use_src ? "from" : "set in", INSN_UID (insn_computes_expr)); } } } /* The register that the expr is computed into is set more than once. */ else if (1 /*expensive_op(this_pattern->op) && do_expensive_gcse)*/) { /* Insert an insn after the insn that computes the expression, copying the reg set by it into a new pseudo register; call this new register REGN. From there until the end of the basic block, or until the reg is set again, replace all uses of the reg with REGN. */ rtx new_insn; to = gen_reg_rtx (GET_MODE (SET_DEST (expr_set))); /* Generate the new insn. */ /* ??? If the change fails, we return 0, even though we created an insn. I think this is ok. */ new_insn = emit_insn_after (gen_rtx_SET (VOIDmode, to, SET_DEST (expr_set)), insn_computes_expr); /* Keep register set table up to date.
*/ record_one_set (REGNO (to), new_insn); gcse_create_count++; if (gcse_file != NULL) { fprintf (gcse_file, "GCSE: Creating insn %d to copy value of reg %d", INSN_UID (NEXT_INSN (insn_computes_expr)), REGNO (SET_SRC (PATTERN (NEXT_INSN (insn_computes_expr))))); fprintf (gcse_file, ", computed in insn %d,\n", INSN_UID (insn_computes_expr)); fprintf (gcse_file, " into newly allocated reg %d\n", REGNO (to)); } pat = PATTERN (insn); /* Do register replacement for INSN. */ changed = validate_change (insn, &SET_SRC (pat), SET_DEST (PATTERN (NEXT_INSN (insn_computes_expr))), 0); /* We should be able to ignore the return code from validate_change but to play it safe we check. */ if (changed) { gcse_subst_count++; if (gcse_file != NULL) { fprintf (gcse_file, "GCSE: Replacing the source in insn %d with reg %d ", INSN_UID (insn), REGNO (SET_DEST (PATTERN (NEXT_INSN (insn_computes_expr))))); fprintf (gcse_file, "set in insn %d\n", INSN_UID (insn_computes_expr)); } } } return changed; } /* Perform classic GCSE. This is called by one_classic_gcse_pass after all the dataflow analysis has been done. The result is nonzero if a change was made. */ static int classic_gcse () { int changed; rtx insn; basic_block bb; /* Note we start at block 1. */ if (ENTRY_BLOCK_PTR->next_bb == EXIT_BLOCK_PTR) return 0; changed = 0; FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR->next_bb->next_bb, EXIT_BLOCK_PTR, next_bb) { /* Reset tables used to keep track of what's still valid [since the start of the block]. */ reset_opr_set_tables (); for (insn = bb->head; insn != NULL && insn != NEXT_INSN (bb->end); insn = NEXT_INSN (insn)) { /* Is insn of form (set (pseudo-reg) ...)? */ if (GET_CODE (insn) == INSN && GET_CODE (PATTERN (insn)) == SET && GET_CODE (SET_DEST (PATTERN (insn))) == REG && REGNO (SET_DEST (PATTERN (insn))) >= FIRST_PSEUDO_REGISTER) { rtx pat = PATTERN (insn); rtx src = SET_SRC (pat); struct expr *expr; if (want_to_gcse_p (src) /* Is the expression recorded? 
*/ && ((expr = lookup_expr (src, &expr_hash_table)) != NULL) /* Is the expression available [at the start of the block]? */ && TEST_BIT (ae_in[bb->index], expr->bitmap_index) /* Are the operands unchanged since the start of the block? */ && oprs_not_set_p (src, insn)) changed |= handle_avail_expr (insn, expr); } /* Keep track of everything modified by this insn. */ /* ??? Need to be careful w.r.t. mods done to INSN. */ if (INSN_P (insn)) mark_oprs_set (insn); } } return changed; } /* Top level routine to perform one classic GCSE pass. Return nonzero if a change was made. */ static int one_classic_gcse_pass (pass) int pass; { int changed = 0; gcse_subst_count = 0; gcse_create_count = 0; alloc_hash_table (max_cuid, &expr_hash_table, 0); alloc_rd_mem (last_basic_block, max_cuid); compute_hash_table (&expr_hash_table); if (gcse_file) dump_hash_table (gcse_file, "Expression", &expr_hash_table); if (expr_hash_table.n_elems > 0) { compute_kill_rd (); compute_rd (); alloc_avail_expr_mem (last_basic_block, expr_hash_table.n_elems); compute_ae_gen (&expr_hash_table); compute_ae_kill (ae_gen, ae_kill, &expr_hash_table); compute_available (ae_gen, ae_kill, ae_out, ae_in); changed = classic_gcse (); free_avail_expr_mem (); } free_rd_mem (); free_hash_table (&expr_hash_table); if (gcse_file) { fprintf (gcse_file, "\n"); fprintf (gcse_file, "GCSE of %s, pass %d: %d bytes needed, %d substs,", current_function_name, pass, bytes_used, gcse_subst_count); fprintf (gcse_file, "%d insns created\n", gcse_create_count); } return changed; } /* Compute copy/constant propagation working variables. */ /* Local properties of assignments. */ static sbitmap *cprop_pavloc; static sbitmap *cprop_absaltered; /* Global properties of assignments (computed from the local properties). */ static sbitmap *cprop_avin; static sbitmap *cprop_avout; /* Allocate vars used for copy/const propagation. N_BLOCKS is the number of basic blocks. N_SETS is the number of sets. 
*/ static void alloc_cprop_mem (n_blocks, n_sets) int n_blocks, n_sets; { cprop_pavloc = sbitmap_vector_alloc (n_blocks, n_sets); cprop_absaltered = sbitmap_vector_alloc (n_blocks, n_sets); cprop_avin = sbitmap_vector_alloc (n_blocks, n_sets); cprop_avout = sbitmap_vector_alloc (n_blocks, n_sets); } /* Free vars used by copy/const propagation. */ static void free_cprop_mem () { sbitmap_vector_free (cprop_pavloc); sbitmap_vector_free (cprop_absaltered); sbitmap_vector_free (cprop_avin); sbitmap_vector_free (cprop_avout); } /* For each block, compute whether X is transparent. X is either an expression or an assignment [though we don't care which, for this context an assignment is treated as an expression]. For each block where an element of X is modified, set (SET_P == 1) or reset (SET_P == 0) the INDX bit in BMAP. */ static void compute_transp (x, indx, bmap, set_p) rtx x; int indx; sbitmap *bmap; int set_p; { int i, j; basic_block bb; enum rtx_code code; reg_set *r; const char *fmt; /* repeat is used to turn tail-recursion into iteration since GCC can't do it when there's no return value. 
*/ repeat: if (x == 0) return; code = GET_CODE (x); switch (code) { case REG: if (set_p) { if (REGNO (x) < FIRST_PSEUDO_REGISTER) { FOR_EACH_BB (bb) if (TEST_BIT (reg_set_in_block[bb->index], REGNO (x))) SET_BIT (bmap[bb->index], indx); } else { for (r = reg_set_table[REGNO (x)]; r != NULL; r = r->next) SET_BIT (bmap[BLOCK_NUM (r->insn)], indx); } } else { if (REGNO (x) < FIRST_PSEUDO_REGISTER) { FOR_EACH_BB (bb) if (TEST_BIT (reg_set_in_block[bb->index], REGNO (x))) RESET_BIT (bmap[bb->index], indx); } else { for (r = reg_set_table[REGNO (x)]; r != NULL; r = r->next) RESET_BIT (bmap[BLOCK_NUM (r->insn)], indx); } } return; case MEM: FOR_EACH_BB (bb) { rtx list_entry = canon_modify_mem_list[bb->index]; while (list_entry) { rtx dest, dest_addr; if (GET_CODE (XEXP (list_entry, 0)) == CALL_INSN) { if (set_p) SET_BIT (bmap[bb->index], indx); else RESET_BIT (bmap[bb->index], indx); break; } /* LIST_ENTRY must be an INSN of some kind that sets memory. Examine each hunk of memory that is modified. */ dest = XEXP (list_entry, 0); list_entry = XEXP (list_entry, 1); dest_addr = XEXP (list_entry, 0); if (canon_true_dependence (dest, GET_MODE (dest), dest_addr, x, rtx_addr_varies_p)) { if (set_p) SET_BIT (bmap[bb->index], indx); else RESET_BIT (bmap[bb->index], indx); break; } list_entry = XEXP (list_entry, 1); } } x = XEXP (x, 0); goto repeat; case PC: case CC0: /*FIXME*/ case CONST: case CONST_INT: case CONST_DOUBLE: case CONST_VECTOR: case SYMBOL_REF: case LABEL_REF: case ADDR_VEC: case ADDR_DIFF_VEC: return; default: break; } for (i = GET_RTX_LENGTH (code) - 1, fmt = GET_RTX_FORMAT (code); i >= 0; i--) { if (fmt[i] == 'e') { /* If we are about to do the last recursive call needed at this level, change it into iteration. This function is called enough to be worth it. 
*/ if (i == 0) { x = XEXP (x, i); goto repeat; } compute_transp (XEXP (x, i), indx, bmap, set_p); } else if (fmt[i] == 'E') for (j = 0; j < XVECLEN (x, i); j++) compute_transp (XVECEXP (x, i, j), indx, bmap, set_p); } } /* Top level routine to do the dataflow analysis needed by copy/const propagation. */ static void compute_cprop_data () { compute_local_properties (cprop_absaltered, cprop_pavloc, NULL, &set_hash_table); compute_available (cprop_pavloc, cprop_absaltered, cprop_avout, cprop_avin); } /* Copy/constant propagation. */ /* Maximum number of register uses in an insn that we handle. */ #define MAX_USES 8 /* Table of uses found in an insn. Allocated statically to avoid alloc/free complexity and overhead. */ static struct reg_use reg_use_table[MAX_USES]; /* Index into `reg_use_table' while building it. */ static int reg_use_count; /* Set up a list of register numbers used in INSN. The found uses are stored in `reg_use_table'. `reg_use_count' is initialized to zero before entry, and contains the number of uses in the table upon exit. ??? If a register appears multiple times we will record it multiple times. This doesn't hurt anything but it will slow things down. */ static void find_used_regs (xptr, data) rtx *xptr; void *data ATTRIBUTE_UNUSED; { int i, j; enum rtx_code code; const char *fmt; rtx x = *xptr; /* repeat is used to turn tail-recursion into iteration since GCC can't do it when there's no return value. */ repeat: if (x == 0) return; code = GET_CODE (x); if (REG_P (x)) { if (reg_use_count == MAX_USES) return; reg_use_table[reg_use_count].reg_rtx = x; reg_use_count++; } /* Recursively scan the operands of this expression. */ for (i = GET_RTX_LENGTH (code) - 1, fmt = GET_RTX_FORMAT (code); i >= 0; i--) { if (fmt[i] == 'e') { /* If we are about to do the last recursive call needed at this level, change it into iteration. This function is called enough to be worth it. 
*/ if (i == 0) { x = XEXP (x, 0); goto repeat; } find_used_regs (&XEXP (x, i), data); } else if (fmt[i] == 'E') for (j = 0; j < XVECLEN (x, i); j++) find_used_regs (&XVECEXP (x, i, j), data); } } /* Try to replace all non-SET_DEST occurrences of FROM in INSN with TO. Returns nonzero if successful. */ static int try_replace_reg (from, to, insn) rtx from, to, insn; { rtx note = find_reg_equal_equiv_note (insn); rtx src = 0; int success = 0; rtx set = single_set (insn); validate_replace_src_group (from, to, insn); if (num_changes_pending () && apply_change_group ()) success = 1; /* Try to simplify SET_SRC if we have substituted a constant. */ if (success && set && CONSTANT_P (to)) { src = simplify_rtx (SET_SRC (set)); if (src) validate_change (insn, &SET_SRC (set), src, 0); } if (!success && set && reg_mentioned_p (from, SET_SRC (set))) { /* If above failed and this is a single set, try to simplify the source of the set given our substitution. We could perhaps try this for multiple SETs, but it probably won't buy us anything. */ src = simplify_replace_rtx (SET_SRC (set), from, to); if (!rtx_equal_p (src, SET_SRC (set)) && validate_change (insn, &SET_SRC (set), src, 0)) success = 1; /* If we've failed to do replacement, have a single SET, don't already have a note, and have no special SET, add a REG_EQUAL note to not lose information. */ if (!success && note == 0 && set != 0 && GET_CODE (XEXP (set, 0)) != ZERO_EXTRACT && GET_CODE (XEXP (set, 0)) != SIGN_EXTRACT) note = set_unique_reg_note (insn, REG_EQUAL, copy_rtx (src)); } /* If there is already a NOTE, update the expression in it with our replacement. */ else if (note != 0) XEXP (note, 0) = simplify_replace_rtx (XEXP (note, 0), from, to); /* REG_EQUAL may get simplified into a register. We don't allow that. Remove that note. This code ought not to happen, because previous code ought to synthesize a reg-reg move, but be on the safe side.
*/ if (note && REG_P (XEXP (note, 0))) remove_note (insn, note); return success; } /* Find a set of REGNOs that are available on entry to INSN's block. Returns NULL if no such set is found. */ static struct expr * find_avail_set (regno, insn) int regno; rtx insn; { /* SET1 contains the last set found that can be returned to the caller for use in a substitution. */ struct expr *set1 = 0; /* Loops are not possible here. To get a loop we would need two sets available at the start of the block containing INSN. I.e. we would need two sets like this available at the start of the block: (set (reg X) (reg Y)) (set (reg Y) (reg X)) This cannot happen since the set of (reg Y) would have killed the set of (reg X) making it unavailable at the start of this block. */ while (1) { rtx src; struct expr *set = lookup_set (regno, NULL_RTX, &set_hash_table); /* Find a set that is available at the start of the block which contains INSN. */ while (set) { if (TEST_BIT (cprop_avin[BLOCK_NUM (insn)], set->bitmap_index)) break; set = next_set (regno, set); } /* If no available set was found we've reached the end of the (possibly empty) copy chain. */ if (set == 0) break; if (GET_CODE (set->expr) != SET) abort (); src = SET_SRC (set->expr); /* We know the set is available. Now check that SRC is ANTLOC (i.e. none of the source operands have changed since the start of the block). If the source operand changed, we may still use it for the next iteration of this loop, but we may not use it for substitutions. */ if (CONSTANT_P (src) || oprs_not_set_p (src, insn)) set1 = set; /* If the source of the set is anything except a register, then we have reached the end of the copy chain. */ if (GET_CODE (src) != REG) break; /* Follow the copy chain, i.e. start another iteration of the loop and see if we have an available copy into SRC. */ regno = REGNO (src); } /* SET1 holds the last set that was available and anticipatable at INSN.
*/ return set1; } /* Subroutine of cprop_insn that tries to propagate constants into JUMP_INSNS. JUMP must be a conditional jump. If SETCC is non-NULL it is the instruction that immediately precedes JUMP, and must be a single SET of a register. FROM is what we will try to replace, SRC is the constant we will try to substitute for it. Returns nonzero if a change was made. */ static int cprop_jump (bb, setcc, jump, from, src) basic_block bb; rtx setcc; rtx jump; rtx from; rtx src; { rtx new; rtx set = pc_set (jump); rtx set_src = SET_SRC (set); /* First substitute in the INSN condition as the SET_SRC of the JUMP, then substitute that given values in this expanded JUMP. */ if (setcc != NULL_RTX && !modified_between_p (from, setcc, jump) && !modified_between_p (src, setcc, jump)) { rtx setcc_set = single_set (setcc); set_src = simplify_replace_rtx (set_src, SET_DEST (setcc_set), SET_SRC (setcc_set)); } else setcc = NULL_RTX; new = simplify_replace_rtx (set_src, from, src); /* If no simplification can be made, then try the next register. */ if (rtx_equal_p (new, SET_SRC (set))) return 0; /* If this is now a no-op, delete it; otherwise this must be a valid insn. */ if (new == pc_rtx) delete_insn (jump); else { /* Ensure the value computed inside the jump insn is equivalent to the one computed by setcc. */ if (setcc && modified_in_p (new, setcc)) return 0; if (! validate_change (jump, &SET_SRC (set), new, 0)) return 0; /* If this has turned into an unconditional jump, then put a barrier after it so that the unreachable code will be deleted. */ if (GET_CODE (SET_SRC (set)) == LABEL_REF) emit_barrier_after (jump); } #ifdef HAVE_cc0 /* Delete the cc0 setter.
*/ if (setcc != NULL && CC0_P (SET_DEST (single_set (setcc)))) delete_insn (setcc); #endif run_jump_opt_after_gcse = 1; const_prop_count++; if (gcse_file != NULL) { fprintf (gcse_file, "CONST-PROP: Replacing reg %d in jump_insn %d with constant ", REGNO (from), INSN_UID (jump)); print_rtl (gcse_file, src); fprintf (gcse_file, "\n"); } purge_dead_edges (bb); return 1; } static bool constprop_register (insn, from, to, alter_jumps) rtx insn; rtx from; rtx to; int alter_jumps; { rtx sset; /* Check for reg or cc0 setting instructions followed by conditional branch instructions first. */ if (alter_jumps && (sset = single_set (insn)) != NULL && NEXT_INSN (insn) && any_condjump_p (NEXT_INSN (insn)) && onlyjump_p (NEXT_INSN (insn))) { rtx dest = SET_DEST (sset); if ((REG_P (dest) || CC0_P (dest)) && cprop_jump (BLOCK_FOR_INSN (insn), insn, NEXT_INSN (insn), from, to)) return 1; } /* Handle normal insns next. */ if (GET_CODE (insn) == INSN && try_replace_reg (from, to, insn)) return 1; /* Try to propagate a CONST_INT into a conditional jump. We're pretty specific about what we will handle in this code, we can extend this as necessary over time. Right now the insn in question must look like (set (pc) (if_then_else ...)) */ else if (alter_jumps && any_condjump_p (insn) && onlyjump_p (insn)) return cprop_jump (BLOCK_FOR_INSN (insn), NULL, insn, from, to); return 0; } /* Perform constant and copy propagation on INSN. The result is nonzero if a change was made. */ static int cprop_insn (insn, alter_jumps) rtx insn; int alter_jumps; { struct reg_use *reg_used; int changed = 0; rtx note; if (!INSN_P (insn)) return 0; reg_use_count = 0; note_uses (&PATTERN (insn), find_used_regs, NULL); note = find_reg_equal_equiv_note (insn); /* We may win even when propagating constants into notes. 
*/ if (note) find_used_regs (&XEXP (note, 0), NULL); for (reg_used = &reg_use_table[0]; reg_use_count > 0; reg_used++, reg_use_count--) { unsigned int regno = REGNO (reg_used->reg_rtx); rtx pat, src; struct expr *set; /* Ignore registers created by GCSE. We do this because ... */ if (regno >= max_gcse_regno) continue; /* If the register has already been set in this block, there's nothing we can do. */ if (! oprs_not_set_p (reg_used->reg_rtx, insn)) continue; /* Find an assignment that sets reg_used and is available at the start of the block. */ set = find_avail_set (regno, insn); if (! set || set->expr->volatil) continue; pat = set->expr; /* ??? We might be able to handle PARALLELs. Later. */ if (GET_CODE (pat) != SET) abort (); src = SET_SRC (pat); /* Constant propagation. */ if (CONSTANT_P (src)) { if (constprop_register (insn, reg_used->reg_rtx, src, alter_jumps)) { changed = 1; const_prop_count++; if (gcse_file != NULL) { fprintf (gcse_file, "GLOBAL CONST-PROP: Replacing reg %d in ", regno); fprintf (gcse_file, "insn %d with constant ", INSN_UID (insn)); print_rtl (gcse_file, src); fprintf (gcse_file, "\n"); } if (INSN_DELETED_P (insn)) return 1; } } else if (GET_CODE (src) == REG && REGNO (src) >= FIRST_PSEUDO_REGISTER && REGNO (src) != regno) { if (try_replace_reg (reg_used->reg_rtx, src, insn)) { changed = 1; copy_prop_count++; if (gcse_file != NULL) { fprintf (gcse_file, "GLOBAL COPY-PROP: Replacing reg %d in insn %d", regno, INSN_UID (insn)); fprintf (gcse_file, " with reg %d\n", REGNO (src)); } /* The original insn setting reg_used may or may not now be deletable. We leave the deletion to flow. */ /* FIXME: If it turns out that the insn isn't deletable, then we may have unnecessarily extended register lifetimes and made things worse. */ } } } return changed; } /* Like find_used_regs, but avoid recording uses that appear in input-output contexts such as zero_extract or pre_dec. 
This restricts the cases we consider to those for which local cprop can legitimately make replacements. */ static void local_cprop_find_used_regs (xptr, data) rtx *xptr; void *data; { rtx x = *xptr; if (x == 0) return; switch (GET_CODE (x)) { case ZERO_EXTRACT: case SIGN_EXTRACT: case STRICT_LOW_PART: return; case PRE_DEC: case PRE_INC: case POST_DEC: case POST_INC: case PRE_MODIFY: case POST_MODIFY: /* Can only legitimately appear this early in the context of stack pushes for function arguments, but handle all of the codes nonetheless. */ return; case SUBREG: /* Setting a subreg of a register larger than word_mode leaves the non-written words unchanged. */ if (GET_MODE_BITSIZE (GET_MODE (SUBREG_REG (x))) > BITS_PER_WORD) return; break; default: break; } find_used_regs (xptr, data); } /* LIBCALL_SP is a zero-terminated array of insns at the end of a libcall; their REG_EQUAL notes need updating. */ static bool do_local_cprop (x, insn, alter_jumps, libcall_sp) rtx x; rtx insn; int alter_jumps; rtx *libcall_sp; { rtx newreg = NULL, newcnst = NULL; /* Rule out USE instructions and ASM statements as we don't want to change the hard registers mentioned. */ if (GET_CODE (x) == REG && (REGNO (x) >= FIRST_PSEUDO_REGISTER || (GET_CODE (PATTERN (insn)) != USE && asm_noperands (PATTERN (insn)) < 0))) { cselib_val *val = cselib_lookup (x, GET_MODE (x), 0); struct elt_loc_list *l; if (!val) return false; for (l = val->locs; l; l = l->next) { rtx this_rtx = l->loc; rtx note; if (l->in_libcall) continue; if (CONSTANT_P (this_rtx)) newcnst = this_rtx; if (REG_P (this_rtx) && REGNO (this_rtx) >= FIRST_PSEUDO_REGISTER /* Don't copy propagate if it has an attached REG_EQUIV note. At this point only function parameters should have REG_EQUIV notes, and if the argument slot is used somewhere explicitly, it means the address of the parameter has been taken, so we should not extend the lifetime of the pseudo.
*/ && (!(note = find_reg_note (l->setting_insn, REG_EQUIV, NULL_RTX)) || GET_CODE (XEXP (note, 0)) != MEM)) newreg = this_rtx; } if (newcnst && constprop_register (insn, x, newcnst, alter_jumps)) { /* If we find a case where we can't fix the retval REG_EQUAL notes to match the new register, we either have to abandon this replacement or fix delete_trivially_dead_insns to preserve the setting insn, or make it delete the REG_EQUAL note, and fix up all passes that require the REG_EQUAL note there. */ if (!adjust_libcall_notes (x, newcnst, insn, libcall_sp)) abort (); if (gcse_file != NULL) { fprintf (gcse_file, "LOCAL CONST-PROP: Replacing reg %d in ", REGNO (x)); fprintf (gcse_file, "insn %d with constant ", INSN_UID (insn)); print_rtl (gcse_file, newcnst); fprintf (gcse_file, "\n"); } const_prop_count++; return true; } else if (newreg && newreg != x && try_replace_reg (x, newreg, insn)) { adjust_libcall_notes (x, newreg, insn, libcall_sp); if (gcse_file != NULL) { fprintf (gcse_file, "LOCAL COPY-PROP: Replacing reg %d in insn %d", REGNO (x), INSN_UID (insn)); fprintf (gcse_file, " with reg %d\n", REGNO (newreg)); } copy_prop_count++; return true; } } return false; } /* LIBCALL_SP is a zero-terminated array of insns at the end of a libcall; their REG_EQUAL notes need updating to reflect that OLDREG has been replaced with NEWVAL in INSN. Return true if all substitutions could be made. */ static bool adjust_libcall_notes (oldreg, newval, insn, libcall_sp) rtx oldreg, newval, insn, *libcall_sp; { rtx end; while ((end = *libcall_sp++)) { rtx note = find_reg_equal_equiv_note (end); if (! note) continue; if (REG_P (newval)) { if (reg_set_between_p (newval, PREV_INSN (insn), end)) { do { note = find_reg_equal_equiv_note (end); if (!
note) continue; if (reg_mentioned_p (newval, XEXP (note, 0))) return false; } while ((end = *libcall_sp++)); return true; } } XEXP (note, 0) = replace_rtx (XEXP (note, 0), oldreg, newval); insn = end; } return true; } #define MAX_NESTED_LIBCALLS 9 static void local_cprop_pass (alter_jumps) int alter_jumps; { rtx insn; struct reg_use *reg_used; rtx libcall_stack[MAX_NESTED_LIBCALLS + 1], *libcall_sp; bool changed = false; cselib_init (); libcall_sp = &libcall_stack[MAX_NESTED_LIBCALLS]; *libcall_sp = 0; for (insn = get_insns (); insn; insn = NEXT_INSN (insn)) { if (INSN_P (insn)) { rtx note = find_reg_note (insn, REG_LIBCALL, NULL_RTX); if (note) { if (libcall_sp == libcall_stack) abort (); *--libcall_sp = XEXP (note, 0); } note = find_reg_note (insn, REG_RETVAL, NULL_RTX); if (note) libcall_sp++; note = find_reg_equal_equiv_note (insn); do { reg_use_count = 0; note_uses (&PATTERN (insn), local_cprop_find_used_regs, NULL); if (note) local_cprop_find_used_regs (&XEXP (note, 0), NULL); for (reg_used = &reg_use_table[0]; reg_use_count > 0; reg_used++, reg_use_count--) if (do_local_cprop (reg_used->reg_rtx, insn, alter_jumps, libcall_sp)) { changed = true; break; } if (INSN_DELETED_P (insn)) break; } while (reg_use_count); } cselib_process_insn (insn); } cselib_finish (); /* Global analysis may get into infinite loops for unreachable blocks. */ if (changed && alter_jumps) { delete_unreachable_blocks (); free_reg_set_mem (); alloc_reg_set_mem (max_reg_num ()); compute_sets (get_insns ()); } } /* Forward propagate copies. This includes copies and constants. Return nonzero if a change was made. */ static int cprop (alter_jumps) int alter_jumps; { int changed; basic_block bb; rtx insn; /* Note we start at block 1. 
*/ if (ENTRY_BLOCK_PTR->next_bb == EXIT_BLOCK_PTR) { if (gcse_file != NULL) fprintf (gcse_file, "\n"); return 0; } changed = 0; FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR->next_bb->next_bb, EXIT_BLOCK_PTR, next_bb) { /* Reset tables used to keep track of what's still valid [since the start of the block]. */ reset_opr_set_tables (); for (insn = bb->head; insn != NULL && insn != NEXT_INSN (bb->end); insn = NEXT_INSN (insn)) if (INSN_P (insn)) { changed |= cprop_insn (insn, alter_jumps); /* Keep track of everything modified by this insn. */ /* ??? Need to be careful w.r.t. mods done to INSN. Don't call mark_oprs_set if we turned the insn into a NOTE. */ if (GET_CODE (insn) != NOTE) mark_oprs_set (insn); } } if (gcse_file != NULL) fprintf (gcse_file, "\n"); return changed; } /* Perform one copy/constant propagation pass. F is the first insn in the function. PASS is the pass count. */ static int one_cprop_pass (pass, alter_jumps) int pass; int alter_jumps; { int changed = 0; const_prop_count = 0; copy_prop_count = 0; local_cprop_pass (alter_jumps); alloc_hash_table (max_cuid, &set_hash_table, 1); compute_hash_table (&set_hash_table); if (gcse_file) dump_hash_table (gcse_file, "SET", &set_hash_table); if (set_hash_table.n_elems > 0) { alloc_cprop_mem (last_basic_block, set_hash_table.n_elems); compute_cprop_data (); changed = cprop (alter_jumps); if (alter_jumps) changed |= bypass_conditional_jumps (); free_cprop_mem (); } free_hash_table (&set_hash_table); if (gcse_file) { fprintf (gcse_file, "CPROP of %s, pass %d: %d bytes needed, ", current_function_name, pass, bytes_used); fprintf (gcse_file, "%d const props, %d copy props\n\n", const_prop_count, copy_prop_count); } /* Global analysis may get into infinite loops for unreachable blocks. */ if (changed && alter_jumps) delete_unreachable_blocks (); return changed; } /* Bypass conditional jumps. */ /* Find a set of REGNO to a constant that is available at the end of basic block BB. Returns NULL if no such set is found. 
Based heavily upon find_avail_set. */ static struct expr * find_bypass_set (regno, bb) int regno; int bb; { struct expr *result = 0; for (;;) { rtx src; struct expr *set = lookup_set (regno, NULL_RTX, &set_hash_table); while (set) { if (TEST_BIT (cprop_avout[bb], set->bitmap_index)) break; set = next_set (regno, set); } if (set == 0) break; if (GET_CODE (set->expr) != SET) abort (); src = SET_SRC (set->expr); if (CONSTANT_P (src)) result = set; if (GET_CODE (src) != REG) break; regno = REGNO (src); } return result; } /* Subroutine of bypass_block that checks whether a pseudo is killed by any of the instructions inserted on an edge. Jump bypassing places condition code setters on CFG edges using insert_insn_on_edge. This function is required to check that our data flow analysis is still valid prior to commit_edge_insertions. */ static bool reg_killed_on_edge (reg, e) rtx reg; edge e; { rtx insn; for (insn = e->insns; insn; insn = NEXT_INSN (insn)) if (INSN_P (insn) && reg_set_p (reg, insn)) return true; return false; } /* Subroutine of bypass_conditional_jumps that attempts to bypass the given basic block BB which has more than one predecessor. If not NULL, SETCC is the first instruction of BB, which is immediately followed by JUMP_INSN JUMP. Otherwise, SETCC is NULL, and JUMP is the first insn of BB. Returns nonzero if a change was made. During the jump bypassing pass, we may place copies of SETCC instructions on CFG edges. The following routine must be careful to pay attention to these inserted insns when performing its transformations. */ static int bypass_block (bb, setcc, jump) basic_block bb; rtx setcc, jump; { rtx insn, note; edge e, enext, edest; int i, change; insn = (setcc != NULL) ? setcc : jump; /* Determine set of register uses in INSN.
*/ reg_use_count = 0; note_uses (&PATTERN (insn), find_used_regs, NULL); note = find_reg_equal_equiv_note (insn); if (note) find_used_regs (&XEXP (note, 0), NULL); change = 0; for (e = bb->pred; e; e = enext) { enext = e->pred_next; for (i = 0; i < reg_use_count; i++) { struct reg_use *reg_used = &reg_use_table[i]; unsigned int regno = REGNO (reg_used->reg_rtx); basic_block dest, old_dest; struct expr *set; rtx src, new; if (regno >= max_gcse_regno) continue; set = find_bypass_set (regno, e->src->index); if (! set) continue; /* Check the data flow is valid after edge insertions. */ if (e->insns && reg_killed_on_edge (reg_used->reg_rtx, e)) continue; src = SET_SRC (pc_set (jump)); if (setcc != NULL) src = simplify_replace_rtx (src, SET_DEST (PATTERN (setcc)), SET_SRC (PATTERN (setcc))); new = simplify_replace_rtx (src, reg_used->reg_rtx, SET_SRC (set->expr)); /* Jump bypassing may have already placed instructions on edges of the CFG. We can't bypass an outgoing edge that has instructions associated with it, as these insns won't get executed if the incoming edge is redirected. */ if (new == pc_rtx) { edest = FALLTHRU_EDGE (bb); dest = edest->insns ? NULL : edest->dest; } else if (GET_CODE (new) == LABEL_REF) { dest = BLOCK_FOR_INSN (XEXP (new, 0)); /* Don't bypass edges containing instructions. */ for (edest = bb->succ; edest; edest = edest->succ_next) if (edest->dest == dest && edest->insns) { dest = NULL; break; } } else dest = NULL; /* Once basic block indices are stable, we should be able to use redirect_edge_and_branch_force instead. */ old_dest = e->dest; if (dest != NULL && dest != old_dest && redirect_edge_and_branch (e, dest)) { /* Copy the register setter to the redirected edge. Don't copy CC0 setters, as CC0 is dead after jump. 
*/ if (setcc) { rtx pat = PATTERN (setcc); if (!CC0_P (SET_DEST (pat))) insert_insn_on_edge (copy_insn (pat), e); } if (gcse_file != NULL) { fprintf (gcse_file, "JUMP-BYPASS: Proved reg %d in jump_insn %d equals constant ", regno, INSN_UID (jump)); print_rtl (gcse_file, SET_SRC (set->expr)); fprintf (gcse_file, "\nBypass edge from %d->%d to %d\n", e->src->index, old_dest->index, dest->index); } change = 1; break; } } } return change; } /* Find basic blocks with more than one predecessor that only contain a single conditional jump. If the result of the comparison is known at compile-time from any incoming edge, redirect that edge to the appropriate target. Returns nonzero if a change was made. */ static int bypass_conditional_jumps () { basic_block bb; int changed; rtx setcc; rtx insn; rtx dest; /* Note we start at block 1. */ if (ENTRY_BLOCK_PTR->next_bb == EXIT_BLOCK_PTR) return 0; changed = 0; FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR->next_bb->next_bb, EXIT_BLOCK_PTR, next_bb) { /* Check for more than one predecessor. */ if (bb->pred && bb->pred->pred_next) { setcc = NULL_RTX; for (insn = bb->head; insn != NULL && insn != NEXT_INSN (bb->end); insn = NEXT_INSN (insn)) if (GET_CODE (insn) == INSN) { if (setcc) break; if (GET_CODE (PATTERN (insn)) != SET) break; dest = SET_DEST (PATTERN (insn)); if (REG_P (dest) || CC0_P (dest)) setcc = insn; else break; } else if (GET_CODE (insn) == JUMP_INSN) { if (any_condjump_p (insn) && onlyjump_p (insn)) changed |= bypass_block (bb, setcc, insn); break; } else if (INSN_P (insn)) break; } } /* If we bypassed any register setting insns, we inserted a copy on the redirected edge. These need to be committed. */ if (changed) commit_edge_insertions(); return changed; } /* Compute PRE+LCM working variables. */ /* Local properties of expressions. */ /* Nonzero for expressions that are transparent in the block. */ static sbitmap *transp; /* Nonzero for expressions that are transparent at the end of the block.
This is only zero for expressions killed by an abnormal critical edge created by a call. */ static sbitmap *transpout; /* Nonzero for expressions that are computed (available) in the block. */ static sbitmap *comp; /* Nonzero for expressions that are locally anticipatable in the block. */ static sbitmap *antloc; /* Nonzero for expressions where this block is an optimal computation point. */ static sbitmap *pre_optimal; /* Nonzero for expressions which are redundant in a particular block. */ static sbitmap *pre_redundant; /* Nonzero for expressions which should be inserted on a specific edge. */ static sbitmap *pre_insert_map; /* Nonzero for expressions which should be deleted in a specific block. */ static sbitmap *pre_delete_map; /* Contains the edge_list returned by pre_edge_lcm. */ static struct edge_list *edge_list; /* Redundant insns. */ static sbitmap pre_redundant_insns; /* Allocate vars used for PRE analysis. */ static void alloc_pre_mem (n_blocks, n_exprs) int n_blocks, n_exprs; { transp = sbitmap_vector_alloc (n_blocks, n_exprs); comp = sbitmap_vector_alloc (n_blocks, n_exprs); antloc = sbitmap_vector_alloc (n_blocks, n_exprs); pre_optimal = NULL; pre_redundant = NULL; pre_insert_map = NULL; pre_delete_map = NULL; ae_in = NULL; ae_out = NULL; ae_kill = sbitmap_vector_alloc (n_blocks, n_exprs); /* pre_insert and pre_delete are allocated later. */ } /* Free vars used for PRE analysis. */ static void free_pre_mem () { sbitmap_vector_free (transp); sbitmap_vector_free (comp); /* ANTLOC and AE_KILL are freed just after pre_lcm finishes.
*/ if (pre_optimal) sbitmap_vector_free (pre_optimal); if (pre_redundant) sbitmap_vector_free (pre_redundant); if (pre_insert_map) sbitmap_vector_free (pre_insert_map); if (pre_delete_map) sbitmap_vector_free (pre_delete_map); if (ae_in) sbitmap_vector_free (ae_in); if (ae_out) sbitmap_vector_free (ae_out); transp = comp = NULL; pre_optimal = pre_redundant = pre_insert_map = pre_delete_map = NULL; ae_in = ae_out = NULL; } /* Top level routine to do the dataflow analysis needed by PRE. */ static void compute_pre_data () { sbitmap trapping_expr; basic_block bb; unsigned int ui; compute_local_properties (transp, comp, antloc, &expr_hash_table); sbitmap_vector_zero (ae_kill, last_basic_block); /* Collect expressions which might trap. */ trapping_expr = sbitmap_alloc (expr_hash_table.n_elems); sbitmap_zero (trapping_expr); for (ui = 0; ui < expr_hash_table.size; ui++) { struct expr *e; for (e = expr_hash_table.table[ui]; e != NULL; e = e->next_same_hash) if (may_trap_p (e->expr)) SET_BIT (trapping_expr, e->bitmap_index); } /* Compute ae_kill for each basic block using: ~(TRANSP | COMP) This is significantly faster than compute_ae_kill. */ FOR_EACH_BB (bb) { edge e; /* If the current block is the destination of an abnormal edge, we kill all trapping expressions because we won't be able to properly place the instruction on the edge. So make them neither anticipatable nor transparent. This is fairly conservative. 
*/ for (e = bb->pred; e ; e = e->pred_next) if (e->flags & EDGE_ABNORMAL) { sbitmap_difference (antloc[bb->index], antloc[bb->index], trapping_expr); sbitmap_difference (transp[bb->index], transp[bb->index], trapping_expr); break; } sbitmap_a_or_b (ae_kill[bb->index], transp[bb->index], comp[bb->index]); sbitmap_not (ae_kill[bb->index], ae_kill[bb->index]); } edge_list = pre_edge_lcm (gcse_file, expr_hash_table.n_elems, transp, comp, antloc, ae_kill, &pre_insert_map, &pre_delete_map); sbitmap_vector_free (antloc); antloc = NULL; sbitmap_vector_free (ae_kill); ae_kill = NULL; sbitmap_free (trapping_expr); } /* PRE utilities */ /* Return nonzero if an occurrence of expression EXPR in OCCR_BB would reach block BB. VISITED is a pointer to a working buffer for tracking which BB's have been visited. It is NULL for the top-level call. We treat reaching expressions that go through blocks containing the same reaching expression as "not reaching". E.g. if EXPR is generated in blocks 2 and 3, INSN is in block 4, and 2->3->4, we treat the expression in block 2 as not reaching. The intent is to improve the probability of finding only one reaching expression and to reduce register lifetimes by picking the closest such expression. */ static int pre_expr_reaches_here_p_work (occr_bb, expr, bb, visited) basic_block occr_bb; struct expr *expr; basic_block bb; char *visited; { edge pred; for (pred = bb->pred; pred != NULL; pred = pred->pred_next) { basic_block pred_bb = pred->src; if (pred->src == ENTRY_BLOCK_PTR /* Has this predecessor already been visited? */ || visited[pred_bb->index]) ;/* Nothing to do. */ /* Does this predecessor generate this expression? */ else if (TEST_BIT (comp[pred_bb->index], expr->bitmap_index)) { /* Is this the occurrence we're looking for? Note that there's only one generating occurrence per block so we just need to check the block number.
*/ if (occr_bb == pred_bb) return 1; visited[pred_bb->index] = 1; } /* Ignore this predecessor if it kills the expression. */ else if (! TEST_BIT (transp[pred_bb->index], expr->bitmap_index)) visited[pred_bb->index] = 1; /* Neither gen nor kill. */ else { visited[pred_bb->index] = 1; if (pre_expr_reaches_here_p_work (occr_bb, expr, pred_bb, visited)) return 1; } } /* All paths have been checked. */ return 0; } /* The wrapper for pre_expr_reaches_here_p_work that ensures that any memory allocated for that function is returned. */ static int pre_expr_reaches_here_p (occr_bb, expr, bb) basic_block occr_bb; struct expr *expr; basic_block bb; { int rval; char *visited = (char *) xcalloc (last_basic_block, 1); rval = pre_expr_reaches_here_p_work (occr_bb, expr, bb, visited); free (visited); return rval; } /* Given an expr, generate RTL which we can insert at the end of a BB, or on an edge. Set the block number of any insns generated to the value of BB. */ static rtx process_insert_insn (expr) struct expr *expr; { rtx reg = expr->reaching_reg; rtx exp = copy_rtx (expr->expr); rtx pat; start_sequence (); /* If the expression is something that's an operand, like a constant, just copy it to a register. */ if (general_operand (exp, GET_MODE (reg))) emit_move_insn (reg, exp); /* Otherwise, make a new insn to compute this expression and make sure the insn will be recognized (this also adds any needed CLOBBERs). Copy the expression to make sure we don't have any sharing issues. */ else if (insn_invalid_p (emit_insn (gen_rtx_SET (VOIDmode, reg, exp)))) abort (); pat = get_insns (); end_sequence (); return pat; } /* Add EXPR to the end of basic block BB. This is used by both PRE and code hoisting. For PRE, we want to verify that the expr is either transparent or locally anticipatable in the target block. This check makes no sense for code hoisting.
*/ static void insert_insn_end_bb (expr, bb, pre) struct expr *expr; basic_block bb; int pre; { rtx insn = bb->end; rtx new_insn; rtx reg = expr->reaching_reg; int regno = REGNO (reg); rtx pat, pat_end; pat = process_insert_insn (expr); if (pat == NULL_RTX || ! INSN_P (pat)) abort (); pat_end = pat; while (NEXT_INSN (pat_end) != NULL_RTX) pat_end = NEXT_INSN (pat_end); /* If the last insn is a jump, insert EXPR in front [taking care to handle cc0, etc. properly]. Similarly, we need to take care of trapping instructions in the presence of non-call exceptions. */ if (GET_CODE (insn) == JUMP_INSN || (GET_CODE (insn) == INSN && (bb->succ->succ_next || (bb->succ->flags & EDGE_ABNORMAL)))) { #ifdef HAVE_cc0 rtx note; #endif /* It should always be the case that we can put these instructions anywhere in the basic block when performing PRE optimizations. Check this. */ if (GET_CODE (insn) == INSN && pre && !TEST_BIT (antloc[bb->index], expr->bitmap_index) && !TEST_BIT (transp[bb->index], expr->bitmap_index)) abort (); /* If this is a jump table, then we can't insert stuff here. Since we know the previous real insn must be the tablejump, we insert the new instruction just before the tablejump. */ if (GET_CODE (PATTERN (insn)) == ADDR_VEC || GET_CODE (PATTERN (insn)) == ADDR_DIFF_VEC) insn = prev_real_insn (insn); #ifdef HAVE_cc0 /* FIXME: 'twould be nice to call prev_cc0_setter here but it aborts if cc0 isn't set. */ note = find_reg_note (insn, REG_CC_SETTER, NULL_RTX); if (note) insn = XEXP (note, 0); else { rtx maybe_cc0_setter = prev_nonnote_insn (insn); if (maybe_cc0_setter && INSN_P (maybe_cc0_setter) && sets_cc0_p (PATTERN (maybe_cc0_setter))) insn = maybe_cc0_setter; } #endif /* FIXME: What if something in cc0/jump uses value set in new insn? */ new_insn = emit_insn_before (pat, insn); } /* Likewise if the last insn is a call, as will happen in the presence of exception handling. 
*/ else if (GET_CODE (insn) == CALL_INSN && (bb->succ->succ_next || (bb->succ->flags & EDGE_ABNORMAL))) { /* Keeping in mind SMALL_REGISTER_CLASSES and parameters in registers, we search backward and place the instructions before the first parameter is loaded. Do this for everyone for consistency and a presumption that we'll get better code elsewhere as well. It should always be the case that we can put these instructions anywhere in the basic block when performing PRE optimizations. Check this. */ if (pre && !TEST_BIT (antloc[bb->index], expr->bitmap_index) && !TEST_BIT (transp[bb->index], expr->bitmap_index)) abort (); /* Since different machines initialize their parameter registers in different orders, assume nothing. Collect the set of all parameter registers. */ insn = find_first_parameter_load (insn, bb->head); /* If we found all the parameter loads, then we want to insert before the first parameter load. If we did not find all the parameter loads, then we might have stopped on the head of the block, which could be a CODE_LABEL. If we inserted before the CODE_LABEL, then we would be putting the insn in the wrong basic block. In that case, put the insn after the CODE_LABEL. Also, respect NOTE_INSN_BASIC_BLOCK. */ while (GET_CODE (insn) == CODE_LABEL || NOTE_INSN_BASIC_BLOCK_P (insn)) insn = NEXT_INSN (insn); new_insn = emit_insn_before (pat, insn); } else new_insn = emit_insn_after (pat, insn); while (1) { if (INSN_P (pat)) { add_label_notes (PATTERN (pat), new_insn); note_stores (PATTERN (pat), record_set_info, pat); } if (pat == pat_end) break; pat = NEXT_INSN (pat); } gcse_create_count++; if (gcse_file) { fprintf (gcse_file, "PRE/HOIST: end of bb %d, insn %d, ", bb->index, INSN_UID (new_insn)); fprintf (gcse_file, "copying expression %d to reg %d\n", expr->bitmap_index, regno); } } /* Insert partially redundant expressions on edges in the CFG to make the expressions fully redundant. 
*/ static int pre_edge_insert (edge_list, index_map) struct edge_list *edge_list; struct expr **index_map; { int e, i, j, num_edges, set_size, did_insert = 0; sbitmap *inserted; /* Where PRE_INSERT_MAP is nonzero, we add the expression on that edge if it reaches any of the deleted expressions. */ set_size = pre_insert_map[0]->size; num_edges = NUM_EDGES (edge_list); inserted = sbitmap_vector_alloc (num_edges, expr_hash_table.n_elems); sbitmap_vector_zero (inserted, num_edges); for (e = 0; e < num_edges; e++) { int indx; basic_block bb = INDEX_EDGE_PRED_BB (edge_list, e); for (i = indx = 0; i < set_size; i++, indx += SBITMAP_ELT_BITS) { SBITMAP_ELT_TYPE insert = pre_insert_map[e]->elms[i]; for (j = indx; insert && j < (int) expr_hash_table.n_elems; j++, insert >>= 1) if ((insert & 1) != 0 && index_map[j]->reaching_reg != NULL_RTX) { struct expr *expr = index_map[j]; struct occr *occr; /* Now look at each deleted occurrence of this expression. */ for (occr = expr->antic_occr; occr != NULL; occr = occr->next) { if (! occr->deleted_p) continue; /* Insert this expression on this edge if it would reach the deleted occurrence in BB. */ if (!TEST_BIT (inserted[e], j)) { rtx insn; edge eg = INDEX_EDGE (edge_list, e); /* We can't insert anything on an abnormal and critical edge, so we insert the insn at the end of the previous block. There are several alternatives detailed in Morgan's book P277 (sec 10.5) for handling this situation. This one is easiest for now. 
*/ if ((eg->flags & EDGE_ABNORMAL) == EDGE_ABNORMAL) insert_insn_end_bb (index_map[j], bb, 0); else { insn = process_insert_insn (index_map[j]); insert_insn_on_edge (insn, eg); } if (gcse_file) { fprintf (gcse_file, "PRE/HOIST: edge (%d,%d), ", bb->index, INDEX_EDGE_SUCC_BB (edge_list, e)->index); fprintf (gcse_file, "copy expression %d\n", expr->bitmap_index); } update_ld_motion_stores (expr); SET_BIT (inserted[e], j); did_insert = 1; gcse_create_count++; } } } } } sbitmap_vector_free (inserted); return did_insert; } /* Copy the result of INSN to REG. INDX is the expression number. */ static void pre_insert_copy_insn (expr, insn) struct expr *expr; rtx insn; { rtx reg = expr->reaching_reg; int regno = REGNO (reg); int indx = expr->bitmap_index; rtx pat = PATTERN (insn); rtx set, new_insn; int i; /* This block matches the logic in hash_scan_insn. */ if (GET_CODE (pat) == SET) set = pat; else if (GET_CODE (pat) == PARALLEL) { /* Search through the parallel looking for the set whose source was the expression that we're interested in. */ set = NULL_RTX; for (i = 0; i < XVECLEN (pat, 0); i++) { rtx x = XVECEXP (pat, 0, i); if (GET_CODE (x) == SET && expr_equiv_p (SET_SRC (x), expr->expr)) { set = x; break; } } if (! set) abort (); } else abort (); new_insn = gen_move_insn (reg, copy_rtx (SET_DEST (set))); new_insn = emit_insn_after (new_insn, insn); /* Keep register set table up to date. */ record_one_set (regno, new_insn); gcse_create_count++; if (gcse_file) fprintf (gcse_file, "PRE: bb %d, insn %d, copy expression %d in insn %d to reg %d\n", BLOCK_NUM (insn), INSN_UID (new_insn), indx, INSN_UID (insn), regno); update_ld_motion_stores (expr); } /* Copy available expressions that reach the redundant expression to `reaching_reg'. */ static void pre_insert_copies () { unsigned int i; struct expr *expr; struct occr *occr; struct occr *avail; /* For each available expression in the table, copy the result to `reaching_reg' if the expression reaches a deleted one. ??? 
The current algorithm is rather brute force. Need to do some profiling. */ for (i = 0; i < expr_hash_table.size; i++) for (expr = expr_hash_table.table[i]; expr != NULL; expr = expr->next_same_hash) { /* If the basic block isn't reachable, PPOUT will be TRUE. However, we don't want to insert a copy here because the expression may not really be redundant. So only insert an insn if the expression was deleted. This test also avoids further processing if the expression wasn't deleted anywhere. */ if (expr->reaching_reg == NULL) continue; for (occr = expr->antic_occr; occr != NULL; occr = occr->next) { if (! occr->deleted_p) continue; for (avail = expr->avail_occr; avail != NULL; avail = avail->next) { rtx insn = avail->insn; /* No need to handle this one if handled already. */ if (avail->copied_p) continue; /* Don't handle this one if it's a redundant one. */ if (TEST_BIT (pre_redundant_insns, INSN_CUID (insn))) continue; /* Or if the expression doesn't reach the deleted one. */ if (! pre_expr_reaches_here_p (BLOCK_FOR_INSN (avail->insn), expr, BLOCK_FOR_INSN (occr->insn))) continue; /* Copy the result of avail to reaching_reg. */ pre_insert_copy_insn (expr, insn); avail->copied_p = 1; } } } } /* Emit move from SRC to DEST noting the equivalence with expression computed in INSN. */ static rtx gcse_emit_move_after (src, dest, insn) rtx src, dest, insn; { rtx new; rtx set = single_set (insn), set2; rtx note; rtx eqv; /* This should never fail since we're creating a reg->reg copy we've verified to be valid. */ new = emit_insn_after (gen_move_insn (dest, src), insn); /* Note the equivalence for local CSE pass. */ set2 = single_set (new); if (!set2 || !rtx_equal_p (SET_DEST (set2), dest)) return new; if ((note = find_reg_equal_equiv_note (insn))) eqv = XEXP (note, 0); else eqv = SET_SRC (set); set_unique_reg_note (new, REG_EQUAL, copy_insn_1 (eqv)); return new; } /* Delete redundant computations. 
Deletion is done by changing the insn to copy the `reaching_reg' of the expression into the result of the SET. It is left to later passes (cprop, cse2, flow, combine, regmove) to propagate the copy or eliminate it. Returns nonzero if a change is made. */ static int pre_delete () { unsigned int i; int changed; struct expr *expr; struct occr *occr; changed = 0; for (i = 0; i < expr_hash_table.size; i++) for (expr = expr_hash_table.table[i]; expr != NULL; expr = expr->next_same_hash) { int indx = expr->bitmap_index; /* We only need to search antic_occr since we require ANTLOC != 0. */ for (occr = expr->antic_occr; occr != NULL; occr = occr->next) { rtx insn = occr->insn; rtx set; basic_block bb = BLOCK_FOR_INSN (insn); if (TEST_BIT (pre_delete_map[bb->index], indx)) { set = single_set (insn); if (! set) abort (); /* Create a pseudo-reg to store the result of reaching expressions into. Get the mode for the new pseudo from the mode of the original destination pseudo. */ if (expr->reaching_reg == NULL) expr->reaching_reg = gen_reg_rtx (GET_MODE (SET_DEST (set))); gcse_emit_move_after (expr->reaching_reg, SET_DEST (set), insn); delete_insn (insn); occr->deleted_p = 1; SET_BIT (pre_redundant_insns, INSN_CUID (insn)); changed = 1; gcse_subst_count++; if (gcse_file) { fprintf (gcse_file, "PRE: redundant insn %d (expression %d) in ", INSN_UID (insn), indx); fprintf (gcse_file, "bb %d, reaching reg is %d\n", bb->index, REGNO (expr->reaching_reg)); } } } } return changed; } /* Perform GCSE optimizations using PRE. This is called by one_pre_gcse_pass after all the dataflow analysis has been done. This is based on the original Morel-Renvoise paper, Fred Chow's thesis, and lazy code motion from Knoop, Ruthing and Steffen as described in Advanced Compiler Design and Implementation. ??? A new pseudo reg is created to hold the reaching expression. The nice thing about the classical approach is that it would try to use an existing reg. 
If the register can't be adequately optimized [i.e. we introduce reload problems], one could add a pass here to propagate the new register through the block. ??? We don't handle single sets in PARALLELs because we're [currently] not able to copy the rest of the parallel when we insert copies to create full redundancies from partial redundancies. However, there's no reason why we can't handle PARALLELs in the cases where there are no partial redundancies. */ static int pre_gcse () { unsigned int i; int did_insert, changed; struct expr **index_map; struct expr *expr; /* Compute a mapping from expression number (`bitmap_index') to hash table entry. */ index_map = (struct expr **) xcalloc (expr_hash_table.n_elems, sizeof (struct expr *)); for (i = 0; i < expr_hash_table.size; i++) for (expr = expr_hash_table.table[i]; expr != NULL; expr = expr->next_same_hash) index_map[expr->bitmap_index] = expr; /* Reset bitmap used to track which insns are redundant. */ pre_redundant_insns = sbitmap_alloc (max_cuid); sbitmap_zero (pre_redundant_insns); /* Delete the redundant insns first so that - we know what register to use for the new insns and for the other ones with reaching expressions - we know which insns are redundant when we go to create copies */ changed = pre_delete (); did_insert = pre_edge_insert (edge_list, index_map); /* In other places with reaching expressions, copy the expression to the specially allocated pseudo-reg that reaches the redundant expr. */ pre_insert_copies (); if (did_insert) { commit_edge_insertions (); changed = 1; } free (index_map); sbitmap_free (pre_redundant_insns); return changed; } /* Top level routine to perform one PRE GCSE pass. Return nonzero if a change was made. 
*/ static int one_pre_gcse_pass (pass) int pass; { int changed = 0; gcse_subst_count = 0; gcse_create_count = 0; alloc_hash_table (max_cuid, &expr_hash_table, 0); add_noreturn_fake_exit_edges (); if (flag_gcse_lm) compute_ld_motion_mems (); compute_hash_table (&expr_hash_table); trim_ld_motion_mems (); if (gcse_file) dump_hash_table (gcse_file, "Expression", &expr_hash_table); if (expr_hash_table.n_elems > 0) { alloc_pre_mem (last_basic_block, expr_hash_table.n_elems); compute_pre_data (); changed |= pre_gcse (); free_edge_list (edge_list); free_pre_mem (); } free_ldst_mems (); remove_fake_edges (); free_hash_table (&expr_hash_table); if (gcse_file) { fprintf (gcse_file, "\nPRE GCSE of %s, pass %d: %d bytes needed, ", current_function_name, pass, bytes_used); fprintf (gcse_file, "%d substs, %d insns created\n", gcse_subst_count, gcse_create_count); } return changed; } /* If X contains any LABEL_REF's, add REG_LABEL notes for them to INSN. If notes are added to an insn which references a CODE_LABEL, the LABEL_NUSES count is incremented. We have to add REG_LABEL notes, because the following loop optimization pass requires them. */ /* ??? This is very similar to the loop.c add_label_notes function. We could probably share code here. */ /* ??? If there was a jump optimization pass after gcse and before loop, then we would not need to do this here, because jump would add the necessary REG_LABEL notes. */ static void add_label_notes (x, insn) rtx x; rtx insn; { enum rtx_code code = GET_CODE (x); int i, j; const char *fmt; if (code == LABEL_REF && !LABEL_REF_NONLOCAL_P (x)) { /* This code used to ignore labels that referred to dispatch tables to avoid flow generating (slightly) worse code. We no longer ignore such label references (see LABEL_REF handling in mark_jump_label for additional information). 
*/ REG_NOTES (insn) = gen_rtx_INSN_LIST (REG_LABEL, XEXP (x, 0), REG_NOTES (insn)); if (LABEL_P (XEXP (x, 0))) LABEL_NUSES (XEXP (x, 0))++; return; } for (i = GET_RTX_LENGTH (code) - 1, fmt = GET_RTX_FORMAT (code); i >= 0; i--) { if (fmt[i] == 'e') add_label_notes (XEXP (x, i), insn); else if (fmt[i] == 'E') for (j = XVECLEN (x, i) - 1; j >= 0; j--) add_label_notes (XVECEXP (x, i, j), insn); } } /* Compute transparent outgoing information for each block. An expression is transparent to an edge unless it is killed by the edge itself. This can only happen with abnormal control flow, when the edge is traversed through a call. This happens with non-local labels and exceptions. This would not be necessary if we split the edge. While this is normally impossible for abnormal critical edges, with some effort it should be possible with exception handling, since we still have control over which handler should be invoked. But due to increased EH table sizes, this may not be worthwhile. */ static void compute_transpout () { basic_block bb; unsigned int i; struct expr *expr; sbitmap_vector_ones (transpout, last_basic_block); FOR_EACH_BB (bb) { /* Note that flow inserted a nop at the end of basic blocks that end in call instructions for reasons other than abnormal control flow. */ if (GET_CODE (bb->end) != CALL_INSN) continue; for (i = 0; i < expr_hash_table.size; i++) for (expr = expr_hash_table.table[i]; expr ; expr = expr->next_same_hash) if (GET_CODE (expr->expr) == MEM) { if (GET_CODE (XEXP (expr->expr, 0)) == SYMBOL_REF && CONSTANT_POOL_ADDRESS_P (XEXP (expr->expr, 0))) continue; /* ??? Optimally, we would use interprocedural alias analysis to determine if this mem is actually killed by this call. */ RESET_BIT (transpout[bb->index], expr->bitmap_index); } } } /* Removal of useless null pointer checks */ /* Called via note_stores. X is set by SETTER. If X is a register we must invalidate nonnull_local and set nonnull_killed. DATA is really a `null_pointer_info *'. 
We ignore hard registers. */ static void invalidate_nonnull_info (x, setter, data) rtx x; rtx setter ATTRIBUTE_UNUSED; void *data; { unsigned int regno; struct null_pointer_info *npi = (struct null_pointer_info *) data; while (GET_CODE (x) == SUBREG) x = SUBREG_REG (x); /* Ignore anything that is not a register or is a hard register. */ if (GET_CODE (x) != REG || REGNO (x) < npi->min_reg || REGNO (x) >= npi->max_reg) return; regno = REGNO (x) - npi->min_reg; RESET_BIT (npi->nonnull_local[npi->current_block->index], regno); SET_BIT (npi->nonnull_killed[npi->current_block->index], regno); } /* Do null-pointer check elimination for the registers indicated in NPI. NONNULL_AVIN and NONNULL_AVOUT are pre-allocated sbitmaps; they are not our responsibility to free. */ static int delete_null_pointer_checks_1 (block_reg, nonnull_avin, nonnull_avout, npi) unsigned int *block_reg; sbitmap *nonnull_avin; sbitmap *nonnull_avout; struct null_pointer_info *npi; { basic_block bb, current_block; sbitmap *nonnull_local = npi->nonnull_local; sbitmap *nonnull_killed = npi->nonnull_killed; int something_changed = 0; /* Compute local properties, nonnull and killed. A register will have the nonnull property if at the end of the current block its value is known to be nonnull. The killed property indicates that somewhere in the block any information we had about the register is killed. Note that a register can have both properties in a single block. That indicates that it's killed, then later in the block a new value is computed. */ sbitmap_vector_zero (nonnull_local, last_basic_block); sbitmap_vector_zero (nonnull_killed, last_basic_block); FOR_EACH_BB (current_block) { rtx insn, stop_insn; /* Set the current block for invalidate_nonnull_info. */ npi->current_block = current_block; /* Scan each insn in the basic block looking for memory references and register sets. 
*/ stop_insn = NEXT_INSN (current_block->end); for (insn = current_block->head; insn != stop_insn; insn = NEXT_INSN (insn)) { rtx set; rtx reg; /* Ignore anything that is not a normal insn. */ if (! INSN_P (insn)) continue; /* Basically ignore anything that is not a simple SET. We do have to make sure to invalidate nonnull_local and set nonnull_killed for such insns though. */ set = single_set (insn); if (!set) { note_stores (PATTERN (insn), invalidate_nonnull_info, npi); continue; } /* See if we've got a usable memory load. We handle it first in case it uses its address register as a dest (which kills the nonnull property). */ if (GET_CODE (SET_SRC (set)) == MEM && GET_CODE ((reg = XEXP (SET_SRC (set), 0))) == REG && REGNO (reg) >= npi->min_reg && REGNO (reg) < npi->max_reg) SET_BIT (nonnull_local[current_block->index], REGNO (reg) - npi->min_reg); /* Now invalidate stuff clobbered by this insn. */ note_stores (PATTERN (insn), invalidate_nonnull_info, npi); /* And handle stores; we do these last since any sets in INSN can not kill the nonnull property if it is derived from a MEM appearing in a SET_DEST. */ if (GET_CODE (SET_DEST (set)) == MEM && GET_CODE ((reg = XEXP (SET_DEST (set), 0))) == REG && REGNO (reg) >= npi->min_reg && REGNO (reg) < npi->max_reg) SET_BIT (nonnull_local[current_block->index], REGNO (reg) - npi->min_reg); } } /* Now compute global properties based on the local properties. This is a classic global availability algorithm. */ compute_available (nonnull_local, nonnull_killed, nonnull_avout, nonnull_avin); /* Now look at each bb and see if it ends with a compare of a value against zero. */ FOR_EACH_BB (bb) { rtx last_insn = bb->end; rtx condition, earliest; int compare_and_branch; /* Since MIN_REG is always at least FIRST_PSEUDO_REGISTER, and since BLOCK_REG[BB] is zero if this block did not end with a comparison against zero, this condition works. 
*/ if (block_reg[bb->index] < npi->min_reg || block_reg[bb->index] >= npi->max_reg) continue; /* LAST_INSN is a conditional jump. Get its condition. */ condition = get_condition (last_insn, &earliest); /* If we can't determine the condition then skip. */ if (! condition) continue; /* Is the register known to have a nonzero value? */ if (!TEST_BIT (nonnull_avout[bb->index], block_reg[bb->index] - npi->min_reg)) continue; /* Try to compute whether the compare/branch at the loop end is one or two instructions. */ if (earliest == last_insn) compare_and_branch = 1; else if (earliest == prev_nonnote_insn (last_insn)) compare_and_branch = 2; else continue; /* We know the register in this comparison is nonnull at exit from this block. We can optimize this comparison. */ if (GET_CODE (condition) == NE) { rtx new_jump; new_jump = emit_jump_insn_after (gen_jump (JUMP_LABEL (last_insn)), last_insn); JUMP_LABEL (new_jump) = JUMP_LABEL (last_insn); LABEL_NUSES (JUMP_LABEL (new_jump))++; emit_barrier_after (new_jump); } something_changed = 1; delete_insn (last_insn); if (compare_and_branch == 2) delete_insn (earliest); purge_dead_edges (bb); /* Don't check this block again. (Note that BLOCK_END is invalid here; we deleted the last instruction in the block.) */ block_reg[bb->index] = 0; } return something_changed; } /* Find EQ/NE comparisons against zero which can be (indirectly) evaluated at compile time. This is conceptually similar to global constant/copy propagation and classic global CSE (it even uses the same dataflow equations as cprop). If a register is used as memory address with the form (mem (reg)), then we know that REG can not be zero at that point in the program. Any instruction which sets REG "kills" this property. So, if every path leading to a conditional branch has an available memory reference of that form, then we know the register can not have the value zero at the conditional branch. 
So we merely need to compute the local properties and propagate that data around the cfg, then optimize where possible. We run this pass two times: once before CSE, then again after CSE. This has proven to be the most profitable approach. It is rare for new optimization opportunities of this nature to appear after the first CSE pass. This could probably be integrated with global cprop with a little work. */ int delete_null_pointer_checks (f) rtx f ATTRIBUTE_UNUSED; { sbitmap *nonnull_avin, *nonnull_avout; unsigned int *block_reg; basic_block bb; int reg; int regs_per_pass; int max_reg; struct null_pointer_info npi; int something_changed = 0; /* If we have only a single block, then there's nothing to do. */ if (n_basic_blocks <= 1) return 0; /* Trying to perform global optimizations on flow graphs which have a high connectivity will take a long time and is unlikely to be particularly useful. In normal circumstances a cfg should have about twice as many edges as blocks. But we do not want to punish small functions which have a couple switch statements. So we require a relatively large number of basic blocks and the ratio of edges to blocks to be high. */ if (n_basic_blocks > 1000 && n_edges / n_basic_blocks >= 20) return 0; /* We need four bitmaps, each with a bit for each register in each basic block. */ max_reg = max_reg_num (); regs_per_pass = get_bitmap_width (4, last_basic_block, max_reg); /* Allocate bitmaps to hold local and global properties. */ npi.nonnull_local = sbitmap_vector_alloc (last_basic_block, regs_per_pass); npi.nonnull_killed = sbitmap_vector_alloc (last_basic_block, regs_per_pass); nonnull_avin = sbitmap_vector_alloc (last_basic_block, regs_per_pass); nonnull_avout = sbitmap_vector_alloc (last_basic_block, regs_per_pass); /* Go through the basic blocks, seeing whether or not each block ends with a conditional branch whose condition is a comparison against zero. Record the register compared in BLOCK_REG. 
*/ block_reg = (unsigned int *) xcalloc (last_basic_block, sizeof (int)); FOR_EACH_BB (bb) { rtx last_insn = bb->end; rtx condition, earliest, reg; /* We only want conditional branches. */ if (GET_CODE (last_insn) != JUMP_INSN || !any_condjump_p (last_insn) || !onlyjump_p (last_insn)) continue; /* LAST_INSN is a conditional jump. Get its condition. */ condition = get_condition (last_insn, &earliest); /* If we were unable to get the condition, or it is not an equality comparison against zero then there's nothing we can do. */ if (!condition || (GET_CODE (condition) != NE && GET_CODE (condition) != EQ) || GET_CODE (XEXP (condition, 1)) != CONST_INT || (XEXP (condition, 1) != CONST0_RTX (GET_MODE (XEXP (condition, 0))))) continue; /* We must be checking a register against zero. */ reg = XEXP (condition, 0); if (GET_CODE (reg) != REG) continue; block_reg[bb->index] = REGNO (reg); } /* Go through the algorithm for each block of registers. */ for (reg = FIRST_PSEUDO_REGISTER; reg < max_reg; reg += regs_per_pass) { npi.min_reg = reg; npi.max_reg = MIN (reg + regs_per_pass, max_reg); something_changed |= delete_null_pointer_checks_1 (block_reg, nonnull_avin, nonnull_avout, &npi); } /* Free the table of registers compared at the end of every block. */ free (block_reg); /* Free bitmaps. */ sbitmap_vector_free (npi.nonnull_local); sbitmap_vector_free (npi.nonnull_killed); sbitmap_vector_free (nonnull_avin); sbitmap_vector_free (nonnull_avout); return something_changed; } /* Code Hoisting variables and subroutines. */ /* Very busy expressions. */ static sbitmap *hoist_vbein; static sbitmap *hoist_vbeout; /* Hoistable expressions. */ static sbitmap *hoist_exprs; /* Dominator bitmaps. */ dominance_info dominators; /* ??? We could compute post dominators and run this algorithm in reverse to perform tail merging, doing so would probably be more effective than the tail merging code in jump.c. It's unclear if tail merging could be run in parallel with code hoisting. 
It would be nice. */ /* Allocate vars used for code hoisting analysis. */ static void alloc_code_hoist_mem (n_blocks, n_exprs) int n_blocks, n_exprs; { antloc = sbitmap_vector_alloc (n_blocks, n_exprs); transp = sbitmap_vector_alloc (n_blocks, n_exprs); comp = sbitmap_vector_alloc (n_blocks, n_exprs); hoist_vbein = sbitmap_vector_alloc (n_blocks, n_exprs); hoist_vbeout = sbitmap_vector_alloc (n_blocks, n_exprs); hoist_exprs = sbitmap_vector_alloc (n_blocks, n_exprs); transpout = sbitmap_vector_alloc (n_blocks, n_exprs); } /* Free vars used for code hoisting analysis. */ static void free_code_hoist_mem () { sbitmap_vector_free (antloc); sbitmap_vector_free (transp); sbitmap_vector_free (comp); sbitmap_vector_free (hoist_vbein); sbitmap_vector_free (hoist_vbeout); sbitmap_vector_free (hoist_exprs); sbitmap_vector_free (transpout); free_dominance_info (dominators); } /* Compute the very busy expressions at entry/exit from each block. An expression is very busy if all paths from a given point compute the expression. */ static void compute_code_hoist_vbeinout () { int changed, passes; basic_block bb; sbitmap_vector_zero (hoist_vbeout, last_basic_block); sbitmap_vector_zero (hoist_vbein, last_basic_block); passes = 0; changed = 1; while (changed) { changed = 0; /* We scan the blocks in the reverse order to speed up the convergence. */ FOR_EACH_BB_REVERSE (bb) { changed |= sbitmap_a_or_b_and_c_cg (hoist_vbein[bb->index], antloc[bb->index], hoist_vbeout[bb->index], transp[bb->index]); if (bb->next_bb != EXIT_BLOCK_PTR) sbitmap_intersection_of_succs (hoist_vbeout[bb->index], hoist_vbein, bb->index); } passes++; } if (gcse_file) fprintf (gcse_file, "hoisting vbeinout computation: %d passes\n", passes); } /* Top level routine to do the dataflow analysis needed by code hoisting. 
*/ static void compute_code_hoist_data () { compute_local_properties (transp, comp, antloc, &expr_hash_table); compute_transpout (); compute_code_hoist_vbeinout (); dominators = calculate_dominance_info (CDI_DOMINATORS); if (gcse_file) fprintf (gcse_file, "\n"); } /* Determine if the expression identified by EXPR_INDEX would reach BB unimpaired if it was placed at the end of EXPR_BB. It's unclear exactly what Muchnick meant by "unimpaired". It seems to me that the expression must either be computed or transparent in *every* block in the path(s) from EXPR_BB to BB. Any other definition would allow the expression to be hoisted out of loops, even if the expression wasn't a loop invariant. Contrast this to reachability for PRE where an expression is considered reachable if *any* path reaches instead of *all* paths. */ static int hoist_expr_reaches_here_p (expr_bb, expr_index, bb, visited) basic_block expr_bb; int expr_index; basic_block bb; char *visited; { edge pred; int visited_allocated_locally = 0; if (visited == NULL) { visited_allocated_locally = 1; visited = xcalloc (last_basic_block, 1); } for (pred = bb->pred; pred != NULL; pred = pred->pred_next) { basic_block pred_bb = pred->src; if (pred->src == ENTRY_BLOCK_PTR) break; else if (pred_bb == expr_bb) continue; else if (visited[pred_bb->index]) continue; /* Does this predecessor generate this expression? */ else if (TEST_BIT (comp[pred_bb->index], expr_index)) break; else if (! TEST_BIT (transp[pred_bb->index], expr_index)) break; /* Not killed. */ else { visited[pred_bb->index] = 1; if (! hoist_expr_reaches_here_p (expr_bb, expr_index, pred_bb, visited)) break; } } if (visited_allocated_locally) free (visited); return (pred == NULL); } /* Actually perform code hoisting. 
*/ static void hoist_code () { basic_block bb, dominated; basic_block *domby; unsigned int domby_len; unsigned int i, j; struct expr **index_map; struct expr *expr; sbitmap_vector_zero (hoist_exprs, last_basic_block); /* Compute a mapping from expression number (`bitmap_index') to hash table entry. */ index_map = (struct expr **) xcalloc (expr_hash_table.n_elems, sizeof (struct expr *)); for (i = 0; i < expr_hash_table.size; i++) for (expr = expr_hash_table.table[i]; expr != NULL; expr = expr->next_same_hash) index_map[expr->bitmap_index] = expr; /* Walk over each basic block looking for potentially hoistable expressions; nothing gets hoisted from the entry block. */ FOR_EACH_BB (bb) { int found = 0; int insn_inserted_p; domby_len = get_dominated_by (dominators, bb, &domby); /* Examine each expression that is very busy at the exit of this block. These are the potentially hoistable expressions. */ for (i = 0; i < hoist_vbeout[bb->index]->n_bits; i++) { int hoistable = 0; if (TEST_BIT (hoist_vbeout[bb->index], i) && TEST_BIT (transpout[bb->index], i)) { /* We've found a potentially hoistable expression, now we look at every block BB dominates to see if it computes the expression. */ for (j = 0; j < domby_len; j++) { dominated = domby[j]; /* Ignore self dominance. */ if (bb == dominated) continue; /* We've found a dominated block, now see if it computes the busy expression and whether or not moving that expression to the "beginning" of that block is safe. */ if (!TEST_BIT (antloc[dominated->index], i)) continue; /* Note if the expression would reach the dominated block unimpaired if it was placed at the end of BB. Keep track of how many times this expression is hoistable from a dominated block into BB. */ if (hoist_expr_reaches_here_p (bb, i, dominated, NULL)) hoistable++; } /* If we found more than one hoistable occurrence of this expression, then note it in the bitmap of expressions to hoist. 
It makes no sense to hoist things which are computed in only one BB, and doing so tends to pessimize register allocation. One could increase this value to try harder to avoid any possible code expansion due to register allocation issues; however experiments have shown that the vast majority of hoistable expressions are only movable from two successors, so raising this threshold is likely to nullify any benefit we get from code hoisting. */ if (hoistable > 1) { SET_BIT (hoist_exprs[bb->index], i); found = 1; } } } /* If we found nothing to hoist, then quit now. */ if (! found) { free (domby); continue; } /* Loop over all the hoistable expressions. */ for (i = 0; i < hoist_exprs[bb->index]->n_bits; i++) { /* We want to insert the expression into BB only once, so note when we've inserted it. */ insn_inserted_p = 0; /* These tests should be the same as the tests above. */ if (TEST_BIT (hoist_vbeout[bb->index], i)) { /* We've found a potentially hoistable expression, now we look at every block BB dominates to see if it computes the expression. */ for (j = 0; j < domby_len; j++) { dominated = domby[j]; /* Ignore self dominance. */ if (bb == dominated) continue; /* We've found a dominated block, now see if it computes the busy expression and whether or not moving that expression to the "beginning" of that block is safe. */ if (!TEST_BIT (antloc[dominated->index], i)) continue; /* The expression is computed in the dominated block and it would be safe to compute it at the start of the dominated block. Now we have to determine if the expression would reach the dominated block if it was placed at the end of BB. */ if (hoist_expr_reaches_here_p (bb, i, dominated, NULL)) { struct expr *expr = index_map[i]; struct occr *occr = expr->antic_occr; rtx insn; rtx set; /* Find the right occurrence of this expression. Test OCCR for NULL before dereferencing it. */ while (occr && BLOCK_FOR_INSN (occr->insn) != dominated) occr = occr->next; /* Should never happen. 
*/ if (!occr) abort (); insn = occr->insn; set = single_set (insn); if (! set) abort (); /* Create a pseudo-reg to store the result of reaching expressions into. Get the mode for the new pseudo from the mode of the original destination pseudo. */ if (expr->reaching_reg == NULL) expr->reaching_reg = gen_reg_rtx (GET_MODE (SET_DEST (set))); gcse_emit_move_after (expr->reaching_reg, SET_DEST (set), insn); delete_insn (insn); occr->deleted_p = 1; if (!insn_inserted_p) { insert_insn_end_bb (index_map[i], bb, 0); insn_inserted_p = 1; } } } } } free (domby); } free (index_map); } /* Top level routine to perform one code hoisting (aka unification) pass. Return nonzero if a change was made. */ static int one_code_hoisting_pass () { int changed = 0; alloc_hash_table (max_cuid, &expr_hash_table, 0); compute_hash_table (&expr_hash_table); if (gcse_file) dump_hash_table (gcse_file, "Code Hoisting Expressions", &expr_hash_table); if (expr_hash_table.n_elems > 0) { alloc_code_hoist_mem (last_basic_block, expr_hash_table.n_elems); compute_code_hoist_data (); hoist_code (); free_code_hoist_mem (); } free_hash_table (&expr_hash_table); return changed; } /* Here we provide the things required to do store motion towards the exit. In order for this to be effective, gcse also needed to be taught how to move a load when it is killed only by a store to itself. int i; float a[10]; void foo(float scale) { for (i=0; i<10; i++) a[i] *= scale; } 'i' is both loaded and stored to in the loop. Normally, gcse cannot move the load out since it's live around the loop, and stored at the bottom of the loop. The 'Load Motion' referred to and implemented in this file is an enhancement to gcse which, when using edge based lcm, recognizes this situation and allows gcse to move the load out of the loop. Once gcse has hoisted the load, store motion can then push this load towards the exit, and we end up with no loads or stores of 'i' in the loop. */ /* This will search the ldst list for a matching expression. 
If it doesn't find one, we create one and initialize it. */ static struct ls_expr * ldst_entry (x) rtx x; { struct ls_expr * ptr; for (ptr = first_ls_expr(); ptr != NULL; ptr = next_ls_expr (ptr)) if (expr_equiv_p (ptr->pattern, x)) break; if (!ptr) { ptr = (struct ls_expr *) xmalloc (sizeof (struct ls_expr)); ptr->next = pre_ldst_mems; ptr->expr = NULL; ptr->pattern = x; ptr->loads = NULL_RTX; ptr->stores = NULL_RTX; ptr->reaching_reg = NULL_RTX; ptr->invalid = 0; ptr->index = 0; ptr->hash_index = 0; pre_ldst_mems = ptr; } return ptr; } /* Free up an individual ldst entry. */ static void free_ldst_entry (ptr) struct ls_expr * ptr; { free_INSN_LIST_list (& ptr->loads); free_INSN_LIST_list (& ptr->stores); free (ptr); } /* Free up all memory associated with the ldst list. */ static void free_ldst_mems () { while (pre_ldst_mems) { struct ls_expr * tmp = pre_ldst_mems; pre_ldst_mems = pre_ldst_mems->next; free_ldst_entry (tmp); } pre_ldst_mems = NULL; } /* Dump debugging info about the ldst list. */ static void print_ldst_list (file) FILE * file; { struct ls_expr * ptr; fprintf (file, "LDST list: \n"); for (ptr = first_ls_expr(); ptr != NULL; ptr = next_ls_expr (ptr)) { fprintf (file, " Pattern (%3d): ", ptr->index); print_rtl (file, ptr->pattern); fprintf (file, "\n Loads : "); if (ptr->loads) print_rtl (file, ptr->loads); else fprintf (file, "(nil)"); fprintf (file, "\n Stores : "); if (ptr->stores) print_rtl (file, ptr->stores); else fprintf (file, "(nil)"); fprintf (file, "\n\n"); } fprintf (file, "\n"); } /* Return the valid entry in the list of ldst only expressions whose pattern matches X, or NULL if there is none. */ static struct ls_expr * find_rtx_in_ldst (x) rtx x; { struct ls_expr * ptr; for (ptr = pre_ldst_mems; ptr != NULL; ptr = ptr->next) if (expr_equiv_p (ptr->pattern, x) && ! ptr->invalid) return ptr; return NULL; } /* Assign each element of the list of mems a monotonically increasing value. 
*/ static int enumerate_ldsts () { struct ls_expr * ptr; int n = 0; for (ptr = pre_ldst_mems; ptr != NULL; ptr = ptr->next) ptr->index = n++; return n; } /* Return first item in the list. */ static inline struct ls_expr * first_ls_expr () { return pre_ldst_mems; } /* Return the next item in the list after the specified one. */ static inline struct ls_expr * next_ls_expr (ptr) struct ls_expr * ptr; { return ptr->next; } /* Load Motion for loads which only kill themselves. */ /* Return true if x is a simple MEM operation, with no registers or side effects. These are the types of loads we consider for the ld_motion list, otherwise we let the usual aliasing take care of it. */ static int simple_mem (x) rtx x; { if (GET_CODE (x) != MEM) return 0; if (MEM_VOLATILE_P (x)) return 0; if (GET_MODE (x) == BLKmode) return 0; if (!rtx_varies_p (XEXP (x, 0), 0)) return 1; return 0; } /* Make sure there isn't a buried reference in this pattern anywhere. If there is, invalidate the entry for it since we're not capable of fixing it up just yet. We have to be sure we know about ALL loads since the aliasing code will allow all entries in the ld_motion list to not-alias itself. If we miss a load, we will get the wrong value since gcse might common it and we won't know to fix it up. */ static void invalidate_any_buried_refs (x) rtx x; { const char * fmt; int i, j; struct ls_expr * ptr; /* Invalidate it in the list. */ if (GET_CODE (x) == MEM && simple_mem (x)) { ptr = ldst_entry (x); ptr->invalid = 1; } /* Recursively process the insn. */ fmt = GET_RTX_FORMAT (GET_CODE (x)); for (i = GET_RTX_LENGTH (GET_CODE (x)) - 1; i >= 0; i--) { if (fmt[i] == 'e') invalidate_any_buried_refs (XEXP (x, i)); else if (fmt[i] == 'E') for (j = XVECLEN (x, i) - 1; j >= 0; j--) invalidate_any_buried_refs (XVECEXP (x, i, j)); } } /* Find all the 'simple' MEMs which are used in LOADs and STORES. 
Simple being defined as MEM loads and stores to symbols, with no side effects and no registers in the expression. Any uses/defs which don't match these criteria are invalidated and trimmed out later. */ static void compute_ld_motion_mems () { struct ls_expr * ptr; basic_block bb; rtx insn; pre_ldst_mems = NULL; FOR_EACH_BB (bb) { for (insn = bb->head; insn && insn != NEXT_INSN (bb->end); insn = NEXT_INSN (insn)) { if (GET_RTX_CLASS (GET_CODE (insn)) == 'i') { if (GET_CODE (PATTERN (insn)) == SET) { rtx src = SET_SRC (PATTERN (insn)); rtx dest = SET_DEST (PATTERN (insn)); /* Check for a simple LOAD... */ if (GET_CODE (src) == MEM && simple_mem (src)) { ptr = ldst_entry (src); if (GET_CODE (dest) == REG) ptr->loads = alloc_INSN_LIST (insn, ptr->loads); else ptr->invalid = 1; } else { /* Make sure there isn't a buried load somewhere. */ invalidate_any_buried_refs (src); } /* Check for stores. Don't worry about aliased ones, they will block any movement we might do later. We only care about this exact pattern since those are the only circumstances in which we will ignore the aliasing info. */ if (GET_CODE (dest) == MEM && simple_mem (dest)) { ptr = ldst_entry (dest); if (GET_CODE (src) != MEM && GET_CODE (src) != ASM_OPERANDS) ptr->stores = alloc_INSN_LIST (insn, ptr->stores); else ptr->invalid = 1; } } else invalidate_any_buried_refs (PATTERN (insn)); } } } } /* Remove any references that have been either invalidated or are not in the expression list for pre gcse. */ static void trim_ld_motion_mems () { struct ls_expr * last = NULL; struct ls_expr * ptr = first_ls_expr (); while (ptr != NULL) { int del = ptr->invalid; struct expr * expr = NULL; /* Delete if entry has been made invalid. */ if (!del) { unsigned int i; del = 1; /* Delete if we cannot find this mem in the expression list. 
*/ for (i = 0; i < expr_hash_table.size && del; i++) { for (expr = expr_hash_table.table[i]; expr != NULL; expr = expr->next_same_hash) if (expr_equiv_p (expr->expr, ptr->pattern)) { del = 0; break; } } } if (del) { if (last != NULL) { last->next = ptr->next; free_ldst_entry (ptr); ptr = last->next; } else { pre_ldst_mems = pre_ldst_mems->next; free_ldst_entry (ptr); ptr = pre_ldst_mems; } } else { /* Set the expression field if we are keeping it. */ last = ptr; ptr->expr = expr; ptr = ptr->next; } } /* Show the world what we've found. */ if (gcse_file && pre_ldst_mems != NULL) print_ldst_list (gcse_file); } /* This routine will take an expression which we are replacing with a reaching register, and update any stores that are needed if that expression is in the ld_motion list. Stores are updated by copying their SRC to the reaching register, and then storing the reaching register into the store location. This keeps the correct value in the reaching register for the loads. */ static void update_ld_motion_stores (expr) struct expr * expr; { struct ls_expr * mem_ptr; if ((mem_ptr = find_rtx_in_ldst (expr->expr))) { /* We can try to find just the REACHED stores, but it shouldn't matter to set the reaching reg everywhere... some might be dead and should be eliminated later. */ /* We replace SET mem = expr with SET reg = expr SET mem = reg , where reg is the reaching reg used in the load. */ rtx list = mem_ptr->stores; for ( ; list != NULL_RTX; list = XEXP (list, 1)) { rtx insn = XEXP (list, 0); rtx pat = PATTERN (insn); rtx src = SET_SRC (pat); rtx reg = expr->reaching_reg; rtx copy, new; /* If we've already copied it, continue. 
*/ if (expr->reaching_reg == src) continue; if (gcse_file) { fprintf (gcse_file, "PRE: store updated with reaching reg "); print_rtl (gcse_file, expr->reaching_reg); fprintf (gcse_file, ":\n "); print_inline_rtx (gcse_file, insn, 8); fprintf (gcse_file, "\n"); } copy = gen_move_insn (reg, SET_SRC (pat)); new = emit_insn_before (copy, insn); record_one_set (REGNO (reg), new); SET_SRC (pat) = reg; /* un-recognize this pattern since it's probably different now. */ INSN_CODE (insn) = -1; gcse_create_count++; } } } /* Store motion code. */ /* This is used to communicate the target bitvector we want to use in the reg_set_info routine when called via the note_stores mechanism. */ static sbitmap * regvec; /* Used in computing the reverse edge graph bit vectors. */ static sbitmap * st_antloc; /* Global holding the number of store expressions we are dealing with. */ static int num_stores; /* Checks to see if we need to mark a register set. Called from note_stores. */ static void reg_set_info (dest, setter, data) rtx dest, setter ATTRIBUTE_UNUSED; void * data ATTRIBUTE_UNUSED; { if (GET_CODE (dest) == SUBREG) dest = SUBREG_REG (dest); if (GET_CODE (dest) == REG) SET_BIT (*regvec, REGNO (dest)); } /* Return nonzero if the register operands of expression X are killed anywhere in basic block BB. */ static int store_ops_ok (x, bb) rtx x; basic_block bb; { int i; enum rtx_code code; const char * fmt; /* Repeat is used to turn tail-recursion into iteration. */ repeat: if (x == 0) return 1; code = GET_CODE (x); switch (code) { case REG: /* If a reg has changed after us in this block, the operand has been killed. 
*/ return TEST_BIT (reg_set_in_block[bb->index], REGNO (x)); case MEM: x = XEXP (x, 0); goto repeat; case PRE_DEC: case PRE_INC: case POST_DEC: case POST_INC: return 0; case PC: case CC0: /*FIXME*/ case CONST: case CONST_INT: case CONST_DOUBLE: case CONST_VECTOR: case SYMBOL_REF: case LABEL_REF: case ADDR_VEC: case ADDR_DIFF_VEC: return 1; default: break; } i = GET_RTX_LENGTH (code) - 1; fmt = GET_RTX_FORMAT (code); for (; i >= 0; i--) { if (fmt[i] == 'e') { rtx tem = XEXP (x, i); /* If we are about to do the last recursive call needed at this level, change it into iteration. This function is called enough to be worth it. */ if (i == 0) { x = tem; goto repeat; } if (! store_ops_ok (tem, bb)) return 0; } else if (fmt[i] == 'E') { int j; for (j = 0; j < XVECLEN (x, i); j++) { if (! store_ops_ok (XVECEXP (x, i, j), bb)) return 0; } } } return 1; } /* Determine whether insn is MEM store pattern that we will consider moving. */ static void find_moveable_store (insn) rtx insn; { struct ls_expr * ptr; rtx dest = PATTERN (insn); if (GET_CODE (dest) != SET || GET_CODE (SET_SRC (dest)) == ASM_OPERANDS) return; dest = SET_DEST (dest); if (GET_CODE (dest) != MEM || MEM_VOLATILE_P (dest) || GET_MODE (dest) == BLKmode) return; if (GET_CODE (XEXP (dest, 0)) != SYMBOL_REF) return; if (rtx_varies_p (XEXP (dest, 0), 0)) return; ptr = ldst_entry (dest); ptr->stores = alloc_INSN_LIST (insn, ptr->stores); } /* Perform store motion. Much like gcse, except we move expressions the other way by looking at the flowgraph in reverse. */ static int compute_store_table () { int ret; basic_block bb; unsigned regno; rtx insn, pat; max_gcse_regno = max_reg_num (); reg_set_in_block = (sbitmap *) sbitmap_vector_alloc (last_basic_block, max_gcse_regno); sbitmap_vector_zero (reg_set_in_block, last_basic_block); pre_ldst_mems = 0; /* Find all the stores we care about. 
*/ FOR_EACH_BB (bb) { regvec = & (reg_set_in_block[bb->index]); for (insn = bb->end; insn && insn != PREV_INSN (bb->head); insn = PREV_INSN (insn)) { /* Ignore anything that is not a normal insn. */ if (! INSN_P (insn)) continue; if (GET_CODE (insn) == CALL_INSN) { bool clobbers_all = false; #ifdef NON_SAVING_SETJMP if (NON_SAVING_SETJMP && find_reg_note (insn, REG_SETJMP, NULL_RTX)) clobbers_all = true; #endif for (regno = 0; regno < FIRST_PSEUDO_REGISTER; regno++) if (clobbers_all || TEST_HARD_REG_BIT (regs_invalidated_by_call, regno)) SET_BIT (reg_set_in_block[bb->index], regno); } pat = PATTERN (insn); note_stores (pat, reg_set_info, NULL); /* Now that we've marked regs, look for stores. */ if (GET_CODE (pat) == SET) find_moveable_store (insn); } } ret = enumerate_ldsts (); if (gcse_file) { fprintf (gcse_file, "Store Motion Expressions.\n"); print_ldst_list (gcse_file); } return ret; } /* Check to see if the load X is aliased with STORE_PATTERN. */ static int load_kills_store (x, store_pattern) rtx x, store_pattern; { if (true_dependence (x, GET_MODE (x), store_pattern, rtx_addr_varies_p)) return 1; return 0; } /* Go through the entire insn X, looking for any loads which might alias STORE_PATTERN. Return 1 if found. */ static int find_loads (x, store_pattern) rtx x, store_pattern; { const char * fmt; int i, j; int ret = 0; if (!x) return 0; if (GET_CODE (x) == SET) x = SET_SRC (x); if (GET_CODE (x) == MEM) { if (load_kills_store (x, store_pattern)) return 1; } /* Recursively process the insn. */ fmt = GET_RTX_FORMAT (GET_CODE (x)); for (i = GET_RTX_LENGTH (GET_CODE (x)) - 1; i >= 0 && !ret; i--) { if (fmt[i] == 'e') ret |= find_loads (XEXP (x, i), store_pattern); else if (fmt[i] == 'E') for (j = XVECLEN (x, i) - 1; j >= 0; j--) ret |= find_loads (XVECEXP (x, i, j), store_pattern); } return ret; } /* Check if INSN kills the store pattern X (is aliased with it). Return 1 if it does. 
*/ static int store_killed_in_insn (x, insn) rtx x, insn; { if (GET_RTX_CLASS (GET_CODE (insn)) != 'i') return 0; if (GET_CODE (insn) == CALL_INSN) { /* A normal or pure call might read from pattern, but a const call will not. */ return ! CONST_OR_PURE_CALL_P (insn) || pure_call_p (insn); } if (GET_CODE (PATTERN (insn)) == SET) { rtx pat = PATTERN (insn); /* Check for memory stores to aliased objects. */ if (GET_CODE (SET_DEST (pat)) == MEM && !expr_equiv_p (SET_DEST (pat), x)) /* Pretend it's a load and check for aliasing. */ if (find_loads (SET_DEST (pat), x)) return 1; return find_loads (SET_SRC (pat), x); } else return find_loads (PATTERN (insn), x); } /* Returns 1 if the expression X is loaded or clobbered on or after INSN within basic block BB. */ static int store_killed_after (x, insn, bb) rtx x, insn; basic_block bb; { rtx last = bb->end; if (insn == last) return 0; /* Check if the register operands of the store are OK in this block. Note that if registers are changed ANYWHERE in the block, we'll decide we can't move it, regardless of whether it changed above or below the store. This could be improved by checking the register operands while looking for aliasing in each insn. */ if (!store_ops_ok (XEXP (x, 0), bb)) return 1; for ( ; insn && insn != NEXT_INSN (last); insn = NEXT_INSN (insn)) if (store_killed_in_insn (x, insn)) return 1; return 0; } /* Returns 1 if the expression X is loaded or clobbered on or before INSN within basic block BB. */ static int store_killed_before (x, insn, bb) rtx x, insn; basic_block bb; { rtx first = bb->head; if (insn == first) return store_killed_in_insn (x, insn); /* Check if the register operands of the store are OK in this block. Note that if registers are changed ANYWHERE in the block, we'll decide we can't move it, regardless of whether it changed above or below the store. This could be improved by checking the register operands while looking for aliasing in each insn. 
*/ if (!store_ops_ok (XEXP (x, 0), bb)) return 1; for ( ; insn && insn != PREV_INSN (first); insn = PREV_INSN (insn)) if (store_killed_in_insn (x, insn)) return 1; return 0; } #define ANTIC_STORE_LIST(x) ((x)->loads) #define AVAIL_STORE_LIST(x) ((x)->stores) /* Given the table of available store insns at the end of blocks, determine which ones are not killed by aliasing, and generate the appropriate vectors for gen and killed. */ static void build_store_vectors () { basic_block bb, b; rtx insn, st; struct ls_expr * ptr; /* Build the gen_vector. This is any store in the table which is not killed by aliasing later in its block. */ ae_gen = (sbitmap *) sbitmap_vector_alloc (last_basic_block, num_stores); sbitmap_vector_zero (ae_gen, last_basic_block); st_antloc = (sbitmap *) sbitmap_vector_alloc (last_basic_block, num_stores); sbitmap_vector_zero (st_antloc, last_basic_block); for (ptr = first_ls_expr (); ptr != NULL; ptr = next_ls_expr (ptr)) { /* Put all the stores into either the antic list, or the avail list, or both. */ rtx store_list = ptr->stores; ptr->stores = NULL_RTX; for (st = store_list; st != NULL; st = XEXP (st, 1)) { insn = XEXP (st, 0); bb = BLOCK_FOR_INSN (insn); if (!store_killed_after (ptr->pattern, insn, bb)) { /* If we've already seen an available expression in this block, we can delete the one we saw already (it occurs earlier in the block) and replace it with this one. We'll copy the old SRC expression to an unused register in case there are any side effects. */ if (TEST_BIT (ae_gen[bb->index], ptr->index)) { /* Find previous store. 
*/ rtx st; for (st = AVAIL_STORE_LIST (ptr); st ; st = XEXP (st, 1)) if (BLOCK_FOR_INSN (XEXP (st, 0)) == bb) break; if (st) { rtx r = gen_reg_rtx (GET_MODE (ptr->pattern)); if (gcse_file) fprintf (gcse_file, "Removing redundant store:\n"); replace_store_insn (r, XEXP (st, 0), bb); XEXP (st, 0) = insn; continue; } } SET_BIT (ae_gen[bb->index], ptr->index); AVAIL_STORE_LIST (ptr) = alloc_INSN_LIST (insn, AVAIL_STORE_LIST (ptr)); } if (!store_killed_before (ptr->pattern, insn, bb)) { SET_BIT (st_antloc[BLOCK_NUM (insn)], ptr->index); ANTIC_STORE_LIST (ptr) = alloc_INSN_LIST (insn, ANTIC_STORE_LIST (ptr)); } } /* Free the original list of store insns. */ free_INSN_LIST_list (&store_list); } ae_kill = (sbitmap *) sbitmap_vector_alloc (last_basic_block, num_stores); sbitmap_vector_zero (ae_kill, last_basic_block); transp = (sbitmap *) sbitmap_vector_alloc (last_basic_block, num_stores); sbitmap_vector_zero (transp, last_basic_block); for (ptr = first_ls_expr (); ptr != NULL; ptr = next_ls_expr (ptr)) FOR_EACH_BB (b) { if (store_killed_after (ptr->pattern, b->head, b)) { /* The anticipatable expression is not killed if it's gen'd. */ /* We leave this check out for now. If we have a code sequence in a block which looks like: ST MEMa = x L y = MEMa ST MEMa = z We should flag this as having an ANTIC expression, NOT transparent, NOT killed, and AVAIL. Unfortunately, since we haven't re-written all loads to use the reaching reg, we'll end up doing an incorrect Load in the middle here if we push the store down. It happens in gcc.c-torture/execute/960311-1.c with -O3. If we always kill it in this case, we'll sometimes do unnecessary work, but it shouldn't actually hurt anything. if (!TEST_BIT (ae_gen[b], ptr->index)). */ SET_BIT (ae_kill[b->index], ptr->index); } else SET_BIT (transp[b->index], ptr->index); } /* Any block with no exits calls some non-returning function, so we better mark the store killed here, or we might not store to it at all. 
If we knew it was abort, we wouldn't have to store, but we don't know that for sure. */ if (gcse_file) { fprintf (gcse_file, "ST_avail and ST_antic (shown under loads..)\n"); print_ldst_list (gcse_file); dump_sbitmap_vector (gcse_file, "st_antloc", "", st_antloc, last_basic_block); dump_sbitmap_vector (gcse_file, "st_kill", "", ae_kill, last_basic_block); dump_sbitmap_vector (gcse_file, "Transpt", "", transp, last_basic_block); dump_sbitmap_vector (gcse_file, "st_avloc", "", ae_gen, last_basic_block); } } /* Insert an instruction at the beginning of a basic block, and update the BLOCK_HEAD if needed. */ static void insert_insn_start_bb (insn, bb) rtx insn; basic_block bb; { /* Insert at start of successor block. */ rtx prev = PREV_INSN (bb->head); rtx before = bb->head; while (before != 0) { if (GET_CODE (before) != CODE_LABEL && (GET_CODE (before) != NOTE || NOTE_LINE_NUMBER (before) != NOTE_INSN_BASIC_BLOCK)) break; prev = before; if (prev == bb->end) break; before = NEXT_INSN (before); } insn = emit_insn_after (insn, prev); if (gcse_file) { fprintf (gcse_file, "STORE_MOTION insert store at start of BB %d:\n", bb->index); print_inline_rtx (gcse_file, insn, 6); fprintf (gcse_file, "\n"); } } /* This routine will insert a store on an edge. EXPR is the ldst entry for the memory reference, and E is the edge to insert it on. Returns nonzero if an edge insertion was performed. */ static int insert_store (expr, e) struct ls_expr * expr; edge e; { rtx reg, insn; basic_block bb; edge tmp; /* We did all the deletes before this insert, so if we didn't delete a store, then we haven't set the reaching reg yet either. */ if (expr->reaching_reg == NULL_RTX) return 0; reg = expr->reaching_reg; insn = gen_move_insn (expr->pattern, reg); /* If we are inserting this expression on ALL predecessor edges of a BB, insert it at the start of the BB, and reset the insert bits on the other edges so we don't try to insert it on the other edges. 
*/ bb = e->dest; for (tmp = e->dest->pred; tmp ; tmp = tmp->pred_next) { int index = EDGE_INDEX (edge_list, tmp->src, tmp->dest); if (index == EDGE_INDEX_NO_EDGE) abort (); if (! TEST_BIT (pre_insert_map[index], expr->index)) break; } /* If tmp is NULL, we found an insertion on every edge, blank the insertion vector for these edges, and insert at the start of the BB. */ if (!tmp && bb != EXIT_BLOCK_PTR) { for (tmp = e->dest->pred; tmp ; tmp = tmp->pred_next) { int index = EDGE_INDEX (edge_list, tmp->src, tmp->dest); RESET_BIT (pre_insert_map[index], expr->index); } insert_insn_start_bb (insn, bb); return 0; } /* We can't insert on this edge, so we'll insert at the head of the successors block. See Morgan, sec 10.5. */ if ((e->flags & EDGE_ABNORMAL) == EDGE_ABNORMAL) { insert_insn_start_bb (insn, bb); return 0; } insert_insn_on_edge (insn, e); if (gcse_file) { fprintf (gcse_file, "STORE_MOTION insert insn on edge (%d, %d):\n", e->src->index, e->dest->index); print_inline_rtx (gcse_file, insn, 6); fprintf (gcse_file, "\n"); } return 1; } /* This routine will replace a store with a SET to a specified register. */ static void replace_store_insn (reg, del, bb) rtx reg, del; basic_block bb; { rtx insn; insn = gen_move_insn (reg, SET_SRC (PATTERN (del))); insn = emit_insn_after (insn, del); if (gcse_file) { fprintf (gcse_file, "STORE_MOTION delete insn in BB %d:\n ", bb->index); print_inline_rtx (gcse_file, del, 6); fprintf (gcse_file, "\nSTORE MOTION replaced with insn:\n "); print_inline_rtx (gcse_file, insn, 6); fprintf (gcse_file, "\n"); } delete_insn (del); } /* Delete a store, but copy the value that would have been stored into the reaching_reg for later storing. */ static void delete_store (expr, bb) struct ls_expr * expr; basic_block bb; { rtx reg, i, del; if (expr->reaching_reg == NULL_RTX) expr->reaching_reg = gen_reg_rtx (GET_MODE (expr->pattern)); /* If there is more than 1 store, the earlier ones will be dead, but it doesn't hurt to replace them here. 
*/ reg = expr->reaching_reg; for (i = AVAIL_STORE_LIST (expr); i; i = XEXP (i, 1)) { del = XEXP (i, 0); if (BLOCK_FOR_INSN (del) == bb) { /* We know there is only one since we deleted redundant ones during the available computation. */ replace_store_insn (reg, del, bb); break; } } } /* Free memory used by store motion. */ static void free_store_memory () { free_ldst_mems (); if (ae_gen) sbitmap_vector_free (ae_gen); if (ae_kill) sbitmap_vector_free (ae_kill); if (transp) sbitmap_vector_free (transp); if (st_antloc) sbitmap_vector_free (st_antloc); if (pre_insert_map) sbitmap_vector_free (pre_insert_map); if (pre_delete_map) sbitmap_vector_free (pre_delete_map); if (reg_set_in_block) sbitmap_vector_free (reg_set_in_block); ae_gen = ae_kill = transp = st_antloc = NULL; pre_insert_map = pre_delete_map = reg_set_in_block = NULL; } /* Perform store motion. Much like gcse, except we move expressions the other way by looking at the flowgraph in reverse. */ static void store_motion () { basic_block bb; int x; struct ls_expr * ptr; int update_flow = 0; if (gcse_file) { fprintf (gcse_file, "before store motion\n"); print_rtl (gcse_file, get_insns ()); } init_alias_analysis (); /* Find all the stores that are live to the end of their block. */ num_stores = compute_store_table (); if (num_stores == 0) { sbitmap_vector_free (reg_set_in_block); end_alias_analysis (); return; } /* Now compute what's actually available to move. */ add_noreturn_fake_exit_edges (); build_store_vectors (); edge_list = pre_edge_rev_lcm (gcse_file, num_stores, transp, ae_gen, st_antloc, ae_kill, &pre_insert_map, &pre_delete_map); /* Now we want to insert the new stores which are going to be needed. 
*/ for (ptr = first_ls_expr (); ptr != NULL; ptr = next_ls_expr (ptr)) { FOR_EACH_BB (bb) if (TEST_BIT (pre_delete_map[bb->index], ptr->index)) delete_store (ptr, bb); for (x = 0; x < NUM_EDGES (edge_list); x++) if (TEST_BIT (pre_insert_map[x], ptr->index)) update_flow |= insert_store (ptr, INDEX_EDGE (edge_list, x)); } if (update_flow) commit_edge_insertions (); free_store_memory (); free_edge_list (edge_list); remove_fake_edges (); end_alias_analysis (); } #include "gt-gcse.h" ```
Bellevue is an inner city neighbourhood of Johannesburg, Gauteng Province, South Africa. It lies close to the Johannesburg CBD and is surrounded by Yeoville and Observatory. It is located in Region F of the City of Johannesburg Metropolitan Municipality. It shares the same praise and notoriety as its surrounding neighbourhood Yeoville. Because of their close proximity, shared demographics and shared notoriety, the neighbourhood today is usually called Yeoville, and the whole area, combining Bellevue, Bellevue East and Yeoville, is now known as Greater Yeoville. It is home to the Yeoville Hotel. History The suburb is situated on part of an old Witwatersrand farm called Doornfontein. It was established in 1890 and is named either after the land developer, the Bellevue Township Syndicate, or for its view of the city to the west and of the Magaliesberg mountain range to the north. Culture Rockey Street is known for its lively bars and clubs and is a hotspot for black nightlife, although it has a bad reputation for public alcohol consumption and illegal street dealings in guns and drugs. The neighbourhood shares all its characteristics (both positive and negative) with the surrounding and adjacent Yeoville. References Johannesburg Region F
```c /* * */ #ifndef _ISOAL_TEST_COMMON_H_ #define _ISOAL_TEST_COMMON_H_ #define TEST_RX_PDU_PAYLOAD_MAX (40) #define TEST_RX_PDU_SIZE (TEST_RX_PDU_PAYLOAD_MAX + 2) #define TEST_RX_SDU_FRAG_PAYLOAD_MAX (100) #define TEST_TX_PDU_PAYLOAD_MAX (40) #define TEST_TX_PDU_SIZE (TEST_TX_PDU_PAYLOAD_MAX + 2) #define TEST_TX_SDU_FRAG_PAYLOAD_MAX (100) #define LLID_TO_STR(llid) (llid == PDU_BIS_LLID_COMPLETE_END ? "COMPLETE_END" : \ (llid == PDU_BIS_LLID_START_CONTINUE ? "START_CONT" : \ (llid == PDU_BIS_LLID_FRAMED ? "FRAMED" : \ (llid == PDU_BIS_LLID_CTRL ? "CTRL" : "?????")))) #define DU_ERR_TO_STR(err) (err == 1 ? "Bit Errors" : \ (err == 2 ? "Data Lost" : \ (err == 0 ? "OK" : "Undefined!"))) #define STATE_TO_STR(s) (s == BT_ISO_SINGLE ? "SINGLE" : \ (s == BT_ISO_START ? "START" : \ (s == BT_ISO_CONT ? "CONT" : \ (s == BT_ISO_END ? "END" : "???")))) #define ROLE_TO_STR(s) \ ((s) == ISOAL_ROLE_BROADCAST_SOURCE ? "Broadcast Source" : \ ((s) == ISOAL_ROLE_BROADCAST_SINK ? "Broadcast Sink" : \ ((s) == ISOAL_ROLE_PERIPHERAL ? "Peripheral" : \ ((s) == ISOAL_ROLE_CENTRAL ? "Central" : "Undefined")))) #define FSM_TO_STR(s) (s == ISOAL_START ? "START" : \ (s == ISOAL_CONTINUE ? "CONTINUE" : \ (s == ISOAL_ERR_SPOOL ? 
"ERR SPOOL" : "???"))) #if defined(ISOAL_CONFIG_BUFFER_RX_SDUS_ENABLE) #define COLLATED_RX_SDU_INFO(_non_buf, _buf) (_buf) #else #define COLLATED_RX_SDU_INFO(_non_buf, _buf) (_non_buf) #endif /* ISOAL_CONFIG_BUFFER_RX_SDUS_ENABLE */ /* Maximum PDU payload for given number of PDUs */ #define MAX_FRAMED_PDU_PAYLOAD(_pdus) \ (TEST_TX_PDU_PAYLOAD_MAX * _pdus) - \ ((PDU_ISO_SEG_HDR_SIZE * _pdus) + PDU_ISO_SEG_TIMEOFFSET_SIZE) struct rx_pdu_meta_buffer { struct isoal_pdu_rx pdu_meta; struct node_rx_iso_meta meta; uint8_t pdu[TEST_RX_PDU_SIZE]; }; struct rx_sdu_frag_buffer { uint16_t write_loc; uint8_t sdu[TEST_RX_SDU_FRAG_PAYLOAD_MAX]; }; struct tx_pdu_meta_buffer { struct node_tx_iso node_tx; union{ struct pdu_iso pdu; uint8_t pdu_payload[TEST_TX_PDU_PAYLOAD_MAX]; }; }; struct tx_sdu_frag_buffer { struct isoal_sdu_tx sdu_tx; uint8_t sdu_payload[TEST_TX_SDU_FRAG_PAYLOAD_MAX]; }; extern void isoal_test_init_rx_pdu_buffer(struct rx_pdu_meta_buffer *buf); extern void isoal_test_init_rx_sdu_buffer(struct rx_sdu_frag_buffer *buf); extern void isoal_test_create_unframed_pdu(uint8_t llid, uint8_t *dataptr, uint8_t length, uint64_t payload_number, uint32_t timestamp, uint8_t status, struct isoal_pdu_rx *pdu_meta); extern uint16_t isoal_test_insert_segment(bool sc, bool cmplt, uint32_t time_offset, uint8_t *dataptr, uint8_t length, struct isoal_pdu_rx *pdu_meta); extern void isoal_test_create_framed_pdu_base(uint64_t payload_number, uint32_t timestamp, uint8_t status, struct isoal_pdu_rx *pdu_meta); extern uint16_t isoal_test_add_framed_pdu_single(uint8_t *dataptr, uint8_t length, uint32_t time_offset, struct isoal_pdu_rx *pdu_meta); extern uint16_t isoal_test_add_framed_pdu_start(uint8_t *dataptr, uint8_t length, uint32_t time_offset, struct isoal_pdu_rx *pdu_meta); extern uint16_t isoal_test_add_framed_pdu_cont(uint8_t *dataptr, uint8_t length, struct isoal_pdu_rx *pdu_meta); extern uint16_t isoal_test_add_framed_pdu_end(uint8_t *dataptr, uint8_t length, struct isoal_pdu_rx 
*pdu_meta); extern void isoal_test_init_tx_pdu_buffer(struct tx_pdu_meta_buffer *buf); extern void isoal_test_init_tx_sdu_buffer(struct tx_sdu_frag_buffer *buf); extern void init_test_data_buffer(uint8_t *buf, uint16_t size); #endif /* _ISOAL_TEST_COMMON_H_ */ ```
Get Over Yourself may refer to: "Get Over Yourself" (Eden's Crush song) "Get Over Yourself" (SHeDAISY song)
```c++ // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. // Tested by search_test.cc. // // Prog::SearchNFA, an NFA search. // This is an actual NFA like the theorists talk about, // not the pseudo-NFA found in backtracking regexp implementations. // // IMPLEMENTATION // // This algorithm is a variant of one that appeared in Rob Pike's sam editor, // which is a variant of the one described in Thompson's 1968 CACM paper. // See path_to_url~rsc/regexp/ for various history. The main feature // over the DFA implementation is that it tracks submatch boundaries. // // When the choice of submatch boundaries is ambiguous, this particular // implementation makes the same choices that traditional backtracking // implementations (in particular, Perl and PCRE) do. // Note that unlike in Perl and PCRE, this algorithm *cannot* take exponential // time in the length of the input. // // Like Thompson's original machine and like the DFA implementation, this // implementation notices a match only once it is one byte past it. #include <stdio.h> #include <string.h> #include <algorithm> #include <deque> #include <string> #include <utility> #include <vector> #include "util/logging.h" #include "util/strutil.h" #include "re2/pod_array.h" #include "re2/prog.h" #include "re2/regexp.h" #include "re2/sparse_array.h" #include "re2/sparse_set.h" namespace re2 { static const bool ExtraDebug = false; class NFA { public: NFA(Prog* prog); ~NFA(); // Searches for a matching string. // * If anchored is true, only considers matches starting at offset. // Otherwise finds leftmost match at or after offset. // * If longest is true, returns the longest match starting // at the chosen start point. Otherwise returns the so-called // left-biased match, the one traditional backtracking engines // (like Perl and PCRE) find. // Records submatch boundaries in submatch[1..nsubmatch-1]. // Submatch[0] is the entire match. 
When there is a choice in // which text matches each subexpression, the submatch boundaries // are chosen to match what a backtracking implementation would choose. bool Search(const StringPiece& text, const StringPiece& context, bool anchored, bool longest, StringPiece* submatch, int nsubmatch); private: struct Thread { union { int ref; Thread* next; // when on free list }; const char** capture; }; // State for explicit stack in AddToThreadq. struct AddState { int id; // Inst to process Thread* t; // if not null, set t0 = t before processing id }; // Threadq is a list of threads. The list is sorted by the order // in which Perl would explore that particular state -- the earlier // choices appear earlier in the list. typedef SparseArray<Thread*> Threadq; inline Thread* AllocThread(); inline Thread* Incref(Thread* t); inline void Decref(Thread* t); // Follows all empty arrows from id0 and enqueues all the states reached. // Enqueues only the ByteRange instructions that match byte c. // context is used (with p) for evaluating empty-width specials. // p is the current input position, and t0 is the current thread. void AddToThreadq(Threadq* q, int id0, int c, const StringPiece& context, const char* p, Thread* t0); // Run runq on byte c, appending new states to nextq. // Updates matched_ and match_ as new, better matches are found. // context is used (with p) for evaluating empty-width specials. // p is the position of byte c in the input string for AddToThreadq; // p-1 will be used when processing Match instructions. // Frees all the threads on runq. // If there is a shortcut to the end, returns that shortcut. int Step(Threadq* runq, Threadq* nextq, int c, const StringPiece& context, const char* p); // Returns text version of capture information, for debugging. 
std::string FormatCapture(const char** capture); void CopyCapture(const char** dst, const char** src) { memmove(dst, src, ncapture_*sizeof src[0]); } Prog* prog_; // underlying program int start_; // start instruction in program int ncapture_; // number of submatches to track bool longest_; // whether searching for longest match bool endmatch_; // whether match must end at text.end() const char* btext_; // beginning of text (for FormatSubmatch) const char* etext_; // end of text (for endmatch_) Threadq q0_, q1_; // pre-allocated for Search. PODArray<AddState> stack_; // pre-allocated for AddToThreadq std::deque<Thread> arena_; // thread arena Thread* freelist_; // thread freelist const char** match_; // best match so far bool matched_; // any match so far? NFA(const NFA&) = delete; NFA& operator=(const NFA&) = delete; }; NFA::NFA(Prog* prog) { prog_ = prog; start_ = prog_->start(); ncapture_ = 0; longest_ = false; endmatch_ = false; btext_ = NULL; etext_ = NULL; q0_.resize(prog_->size()); q1_.resize(prog_->size()); // See NFA::AddToThreadq() for why this is so. int nstack = 2*prog_->inst_count(kInstCapture) + prog_->inst_count(kInstEmptyWidth) + prog_->inst_count(kInstNop) + 1; // + 1 for start inst stack_ = PODArray<AddState>(nstack); freelist_ = NULL; match_ = NULL; matched_ = false; } NFA::~NFA() { delete[] match_; for (const Thread& t : arena_) delete[] t.capture; } NFA::Thread* NFA::AllocThread() { Thread* t = freelist_; if (t != NULL) { freelist_ = t->next; t->ref = 1; // We don't need to touch t->capture because // the caller will immediately overwrite it. 
return t; } arena_.emplace_back(); t = &arena_.back(); t->ref = 1; t->capture = new const char*[ncapture_]; return t; } NFA::Thread* NFA::Incref(Thread* t) { DCHECK(t != NULL); t->ref++; return t; } void NFA::Decref(Thread* t) { DCHECK(t != NULL); t->ref--; if (t->ref > 0) return; DCHECK_EQ(t->ref, 0); t->next = freelist_; freelist_ = t; } // Follows all empty arrows from id0 and enqueues all the states reached. // Enqueues only the ByteRange instructions that match byte c. // context is used (with p) for evaluating empty-width specials. // p is the current input position, and t0 is the current thread. void NFA::AddToThreadq(Threadq* q, int id0, int c, const StringPiece& context, const char* p, Thread* t0) { if (id0 == 0) return; // Use stack_ to hold our stack of instructions yet to process. // It was preallocated as follows: // two entries per Capture; // one entry per EmptyWidth; and // one entry per Nop. // This reflects the maximum number of stack pushes that each can // perform. (Each instruction can be processed at most once.) AddState* stk = stack_.data(); int nstk = 0; stk[nstk++] = {id0, NULL}; while (nstk > 0) { DCHECK_LE(nstk, stack_.size()); AddState a = stk[--nstk]; Loop: if (a.t != NULL) { // t0 was a thread that we allocated and copied in order to // record the capture, so we must now decref it. Decref(t0); t0 = a.t; } int id = a.id; if (id == 0) continue; if (q->has_index(id)) { if (ExtraDebug) fprintf(stderr, " [%d%s]\n", id, FormatCapture(t0->capture).c_str()); continue; } // Create entry in q no matter what. We might fill it in below, // or we might not. Even if not, it is necessary to have it, // so that we don't revisit id0 during the recursion. q->set_new(id, NULL); Thread** tp = &q->get_existing(id); int j; Thread* t; Prog::Inst* ip = prog_->inst(id); switch (ip->opcode()) { default: LOG(DFATAL) << "unhandled " << ip->opcode() << " in AddToThreadq"; break; case kInstFail: break; case kInstAltMatch: // Save state; will pick up at next byte. 
t = Incref(t0); *tp = t; DCHECK(!ip->last()); a = {id+1, NULL}; goto Loop; case kInstNop: if (!ip->last()) stk[nstk++] = {id+1, NULL}; // Continue on. a = {ip->out(), NULL}; goto Loop; case kInstCapture: if (!ip->last()) stk[nstk++] = {id+1, NULL}; if ((j=ip->cap()) < ncapture_) { // Push a dummy whose only job is to restore t0 // once we finish exploring this possibility. stk[nstk++] = {0, t0}; // Record capture. t = AllocThread(); CopyCapture(t->capture, t0->capture); t->capture[j] = p; t0 = t; } a = {ip->out(), NULL}; goto Loop; case kInstByteRange: if (!ip->Matches(c)) goto Next; // Save state; will pick up at next byte. t = Incref(t0); *tp = t; if (ExtraDebug) fprintf(stderr, " + %d%s\n", id, FormatCapture(t0->capture).c_str()); if (ip->hint() == 0) break; a = {id+ip->hint(), NULL}; goto Loop; case kInstMatch: // Save state; will pick up at next byte. t = Incref(t0); *tp = t; if (ExtraDebug) fprintf(stderr, " ! %d%s\n", id, FormatCapture(t0->capture).c_str()); Next: if (ip->last()) break; a = {id+1, NULL}; goto Loop; case kInstEmptyWidth: if (!ip->last()) stk[nstk++] = {id+1, NULL}; // Continue on if we have all the right flag bits. if (ip->empty() & ~Prog::EmptyFlags(context, p)) break; a = {ip->out(), NULL}; goto Loop; } } } // Run runq on byte c, appending new states to nextq. // Updates matched_ and match_ as new, better matches are found. // context is used (with p) for evaluating empty-width specials. // p is the position of byte c in the input string for AddToThreadq; // p-1 will be used when processing Match instructions. // Frees all the threads on runq. // If there is a shortcut to the end, returns that shortcut. int NFA::Step(Threadq* runq, Threadq* nextq, int c, const StringPiece& context, const char* p) { nextq->clear(); for (Threadq::iterator i = runq->begin(); i != runq->end(); ++i) { Thread* t = i->value(); if (t == NULL) continue; if (longest_) { // Can skip any threads started after our current best match. 
if (matched_ && match_[0] < t->capture[0]) { Decref(t); continue; } } int id = i->index(); Prog::Inst* ip = prog_->inst(id); switch (ip->opcode()) { default: // Should only see the values handled below. LOG(DFATAL) << "Unhandled " << ip->opcode() << " in step"; break; case kInstByteRange: AddToThreadq(nextq, ip->out(), c, context, p, t); break; case kInstAltMatch: if (i != runq->begin()) break; // The match is ours if we want it. if (ip->greedy(prog_) || longest_) { CopyCapture(match_, t->capture); matched_ = true; Decref(t); for (++i; i != runq->end(); ++i) { if (i->value() != NULL) Decref(i->value()); } runq->clear(); if (ip->greedy(prog_)) return ip->out1(); return ip->out(); } break; case kInstMatch: { // Avoid invoking undefined behavior (arithmetic on a null pointer) // by storing p instead of p-1. (What would the latter even mean?!) // This complements the special case in NFA::Search(). if (p == NULL) { CopyCapture(match_, t->capture); match_[1] = p; matched_ = true; break; } if (endmatch_ && p-1 != etext_) break; if (longest_) { // Leftmost-longest mode: save this match only if // it is either farther to the left or at the same // point but longer than an existing match. if (!matched_ || t->capture[0] < match_[0] || (t->capture[0] == match_[0] && p-1 > match_[1])) { CopyCapture(match_, t->capture); match_[1] = p-1; matched_ = true; } } else { // Leftmost-biased mode: this match is by definition // better than what we've already found (see next line). CopyCapture(match_, t->capture); match_[1] = p-1; matched_ = true; // Cut off the threads that can only find matches // worse than the one we just found: don't run the // rest of the current Threadq. 
Decref(t); for (++i; i != runq->end(); ++i) { if (i->value() != NULL) Decref(i->value()); } runq->clear(); return 0; } break; } } Decref(t); } runq->clear(); return 0; } std::string NFA::FormatCapture(const char** capture) { std::string s; for (int i = 0; i < ncapture_; i+=2) { if (capture[i] == NULL) s += "(?,?)"; else if (capture[i+1] == NULL) s += StringPrintf("(%td,?)", capture[i] - btext_); else s += StringPrintf("(%td,%td)", capture[i] - btext_, capture[i+1] - btext_); } return s; } bool NFA::Search(const StringPiece& text, const StringPiece& const_context, bool anchored, bool longest, StringPiece* submatch, int nsubmatch) { if (start_ == 0) return false; StringPiece context = const_context; if (context.data() == NULL) context = text; // Sanity check: make sure that text lies within context. if (text.begin() < context.begin() || text.end() > context.end()) { LOG(DFATAL) << "context does not contain text"; return false; } if (prog_->anchor_start() && context.begin() != text.begin()) return false; if (prog_->anchor_end() && context.end() != text.end()) return false; anchored |= prog_->anchor_start(); if (prog_->anchor_end()) { longest = true; endmatch_ = true; } if (nsubmatch < 0) { LOG(DFATAL) << "Bad args: nsubmatch=" << nsubmatch; return false; } // Save search parameters. ncapture_ = 2*nsubmatch; longest_ = longest; if (nsubmatch == 0) { // We need to maintain match[0], both to distinguish the // longest match (if longest is true) and also to tell // whether we've seen any matches at all. ncapture_ = 2; } match_ = new const char*[ncapture_]; memset(match_, 0, ncapture_*sizeof match_[0]); matched_ = false; // For debugging prints. btext_ = context.data(); // For convenience. etext_ = text.data() + text.size(); if (ExtraDebug) fprintf(stderr, "NFA::Search %s (context: %s) anchored=%d longest=%d\n", std::string(text).c_str(), std::string(context).c_str(), anchored, longest); // Set up search. 
Threadq* runq = &q0_; Threadq* nextq = &q1_; runq->clear(); nextq->clear(); // Loop over the text, stepping the machine. for (const char* p = text.data();; p++) { if (ExtraDebug) { int c = 0; if (p == btext_) c = '^'; else if (p > etext_) c = '$'; else if (p < etext_) c = p[0] & 0xFF; fprintf(stderr, "%c:", c); for (Threadq::iterator i = runq->begin(); i != runq->end(); ++i) { Thread* t = i->value(); if (t == NULL) continue; fprintf(stderr, " %d%s", i->index(), FormatCapture(t->capture).c_str()); } fprintf(stderr, "\n"); } // This is a no-op the first time around the loop because runq is empty. int id = Step(runq, nextq, p < etext_ ? p[0] & 0xFF : -1, context, p); DCHECK_EQ(runq->size(), 0); using std::swap; swap(nextq, runq); nextq->clear(); if (id != 0) { // We're done: full match ahead. p = etext_; for (;;) { Prog::Inst* ip = prog_->inst(id); switch (ip->opcode()) { default: LOG(DFATAL) << "Unexpected opcode in short circuit: " << ip->opcode(); break; case kInstCapture: if (ip->cap() < ncapture_) match_[ip->cap()] = p; id = ip->out(); continue; case kInstNop: id = ip->out(); continue; case kInstMatch: match_[1] = p; matched_ = true; break; } break; } break; } if (p > etext_) break; // Start a new thread if there have not been any matches. // (No point in starting a new thread if there have been // matches, since it would be to the right of the match // we already found.) if (!matched_ && (!anchored || p == text.data())) { // Try to use prefix accel (e.g. memchr) to skip ahead. // The search must be unanchored and there must be zero // possible matches already. if (!anchored && runq->size() == 0 && p < etext_ && prog_->can_prefix_accel()) { p = reinterpret_cast<const char*>(prog_->PrefixAccel(p, etext_ - p)); if (p == NULL) p = etext_; } Thread* t = AllocThread(); CopyCapture(t->capture, match_); t->capture[0] = p; AddToThreadq(runq, start_, p < etext_ ? p[0] & 0xFF : -1, context, p, t); Decref(t); } // If all the threads have died, stop early. 
if (runq->size() == 0) { if (ExtraDebug) fprintf(stderr, "dead\n"); break; } // Avoid invoking undefined behavior (arithmetic on a null pointer) // by simply not continuing the loop. // This complements the special case in NFA::Step(). if (p == NULL) { (void) Step(runq, nextq, -1, context, p); DCHECK_EQ(runq->size(), 0); using std::swap; swap(nextq, runq); nextq->clear(); break; } } for (Threadq::iterator i = runq->begin(); i != runq->end(); ++i) { if (i->value() != NULL) Decref(i->value()); } if (matched_) { for (int i = 0; i < nsubmatch; i++) submatch[i] = StringPiece(match_[2 * i], static_cast<size_t>(match_[2 * i + 1] - match_[2 * i])); if (ExtraDebug) fprintf(stderr, "match (%td,%td)\n", match_[0] - btext_, match_[1] - btext_); return true; } return false; } bool Prog::SearchNFA(const StringPiece& text, const StringPiece& context, Anchor anchor, MatchKind kind, StringPiece* match, int nmatch) { if (ExtraDebug) Dump(); NFA nfa(this); StringPiece sp; if (kind == kFullMatch) { anchor = kAnchored; if (nmatch == 0) { match = &sp; nmatch = 1; } } if (!nfa.Search(text, context, anchor == kAnchored, kind != kFirstMatch, match, nmatch)) return false; if (kind == kFullMatch && match[0].end() != text.end()) return false; return true; } // For each instruction i in the program reachable from the start, compute the // number of instructions reachable from i by following only empty transitions // and record that count as fanout[i]. // // fanout holds the results and is also the work queue for the outer iteration. // reachable holds the reached nodes for the inner iteration. 
void Prog::Fanout(SparseArray<int>* fanout) { DCHECK_EQ(fanout->max_size(), size()); SparseSet reachable(size()); fanout->clear(); fanout->set_new(start(), 0); for (SparseArray<int>::iterator i = fanout->begin(); i != fanout->end(); ++i) { int* count = &i->value(); reachable.clear(); reachable.insert(i->index()); for (SparseSet::iterator j = reachable.begin(); j != reachable.end(); ++j) { int id = *j; Prog::Inst* ip = inst(id); switch (ip->opcode()) { default: LOG(DFATAL) << "unhandled " << ip->opcode() << " in Prog::Fanout()"; break; case kInstByteRange: if (!ip->last()) reachable.insert(id+1); (*count)++; if (!fanout->has_index(ip->out())) { fanout->set_new(ip->out(), 0); } break; case kInstAltMatch: DCHECK(!ip->last()); reachable.insert(id+1); break; case kInstCapture: case kInstEmptyWidth: case kInstNop: if (!ip->last()) reachable.insert(id+1); reachable.insert(ip->out()); break; case kInstMatch: if (!ip->last()) reachable.insert(id+1); break; case kInstFail: break; } } } } } // namespace re2 ```
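The header comment above describes the key idea behind Thompson's construction: instead of backtracking through alternatives one at a time, the engine advances a *set* of NFA states in lockstep over the input, which is why the search cannot take exponential time in the input length. The following is a minimal, self-contained sketch of that state-set technique, not RE2 code: it handles only literal characters, `.`, and a postfix `*`, and omits submatch tracking entirely. All names here are illustrative.

```cpp
#include <set>
#include <string>
#include <vector>

// One pattern atom: a literal character (or '.') with an optional '*'.
struct Atom { char ch; bool star; };

static std::vector<Atom> parse(const std::string& pat) {
  std::vector<Atom> atoms;
  for (size_t i = 0; i < pat.size(); i++) {
    Atom a{pat[i], false};
    if (i + 1 < pat.size() && pat[i + 1] == '*') { a.star = true; i++; }
    atoms.push_back(a);
  }
  return atoms;
}

// Epsilon closure: a starred atom may be skipped without consuming input,
// so from state s we also reach every state past a run of starred atoms.
static void add_state(const std::vector<Atom>& atoms,
                      std::set<size_t>& states, size_t s) {
  states.insert(s);
  while (s < atoms.size() && atoms[s].star) {
    s++;
    states.insert(s);
  }
}

// Anchored match of text against pat. State i means "i atoms consumed";
// the accepting state is atoms.size(). Runs in O(|text| * |pat|) time,
// in contrast to a backtracking implementation.
bool nfa_full_match(const std::string& pat, const std::string& text) {
  std::vector<Atom> atoms = parse(pat);
  std::set<size_t> cur;
  add_state(atoms, cur, 0);
  for (char c : text) {
    std::set<size_t> next;
    for (size_t s : cur) {
      if (s < atoms.size() && (atoms[s].ch == '.' || atoms[s].ch == c)) {
        add_state(atoms, next, s + 1);             // consume c, advance
        if (atoms[s].star) add_state(atoms, next, s);  // stay on starred atom
      }
    }
    cur.swap(next);
    if (cur.empty()) return false;  // all threads died
  }
  return cur.count(atoms.size()) > 0;
}
```

RE2's `AddToThreadq` plays the role of `add_state` here (following empty arrows), and `Step` plays the role of the per-byte loop; the real implementation additionally carries a `Thread` with capture pointers through every state so submatch boundaries survive the simulation.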
The Indo-Abrahamic Alliance, sometimes known as the Indo-Abrahamic Bloc, the Middle East Quad, the Western Quad, the West Asian Quad, or I2U2, is a geostrategic term coined by the foreign policy thinker and grand strategist Mohammed Soliman in a long essay for the Middle East Institute. The term refers to the growing convergence of geopolitical interests among India, Israel, and the United Arab Emirates, which would create a regional bloc, eventually including Egypt and Saudi Arabia, that would fill the gap left by a future US withdrawal from the Middle East and serve as a counterbalance to Turkey and Iran. The Biden administration later adopted Soliman's Indo-Abrahamic concept by launching the I2U2 Group in October 2021, which was followed by a leaders-level summit in July 2022. Background Soliman argues that regional peace and stability in West Asia are not guaranteed by the military presence of the United States but by a balance of power that will eventually moderate the ambitions of rising states in the region, namely Iran and Turkey. His concept fundamentally reframes the Middle East's geography, moving away from the Arab world as a synonym for the Middle East toward a West Asian geography stretching from Egypt to India. Soliman's concept builds on the normalization of Israel's relations with the UAE and Bahrain under the Washington-sponsored Abraham Accords and a perceived rise of an "Indo-Abrahamic" transregional order. Four months after Soliman's essay, US Secretary of State Antony Blinken held a first-of-its-kind summit with his counterparts from the UAE, India, and Israel to deepen their four-way connections. One year after Soliman's proposal, the White House stated that President Biden would attend a virtual summit with the leaders of India, Israel, and the UAE in June 2022, while in Israel. 
The role of the United States In his 2021 essay for Hindustan Times, Soliman concludes that: Unlike NATO in Europe or the Quad in the Indo-Pacific, there is no security architecture in the Middle East that could collectively address the challenges facing the region, in the absence of Washington, which has always been the primary security guarantor and regional convener. Now, the broader Middle East is facing a new reality, a different one where Washington is pivoting away— for real this time— from the Middle East and wants to focus its limited resources and political will on another strategic theater— the Indo-Pacific, where China is Washington’s biggest threat. Whether this pivot succeeds is partly dependent on building a regional security architecture for the Middle East that tackles the region’s challenges without the need for a unilateral U.S. military presence. The UAE and Israel are capitalizing on India’s centrality in the Indo-Pacific strategy and Washington’s traditional convener role in the Middle East to build closer ties with both countries. Soliman argues that the Abraham Accords allow regional actors to better respond to events such as the rise of China. The Indo-Abrahamic bloc would allow Washington to do less in the Middle East while still keeping its focus on the Indo-Pacific. Soliman also predicts that the new Indo-Abrahamic format could eventually empower regional powers to coordinate among themselves on common threats and challenges, from cyber to 5G and from missile defense to maritime security. Formalizing the Indo-Abrahamic Soliman advocated building the Indo-Abrahamic alliance from the bottom up, through issue-based working groups focused on critical areas such as space, drones, data security, 5G, cybersecurity, missile defense, and maritime security in the Indian Ocean, the Gulf, and the Mediterranean Sea. 
The US could also utilize its status as a global power to bring Arab, Asian, and European allies into these working groups. Due to their security capabilities and strategic interests in West Asia and the Indo-Pacific, Egypt, France, Japan, and Korea are the most suitable US partners to join the working groups. The aim of the working groups, and of including multi-theatre US allies, is to synchronize the work streams among American allies and partners in the region and, eventually, to serve as a test run for a Washington-backed, bottom-up, internationalized security architecture in the region. Other proposed Indo-Abrahamic states Egypt In his review of Soliman's Indo-Abrahamic concept, Raja Mohan argued for the inclusion of Egypt because of its location: "at the cusp of the Mediterranean - Europe, Africa, and Asia, Egypt is the center and heart of the Greater Middle East." In a follow-up essay, Soliman likewise argued that Egypt must be included. Saudi Arabia In his essay for the Middle East Institute, Soliman posed the question of Saudi Arabia's possible participation in the Indo-Abrahamic Alliance: Another critical challenge for the Indo-Abrahamic alliance is where Saudi Arabia — the heartland of Islam and the biggest Arab economy — stands in relation to the emerging geopolitical bloc. Riyadh has nurtured good relations with Tel Aviv and New Delhi and may look to this grouping as a strategic opportunity in the long run. Japan Soliman furthered his Indo-Abrahamic construct in an essay for Foreign Policy by advocating for an active Japanese role in the broader Middle East region. During this next phase of its Middle East engagement, Japan has a direct stake in—and is uniquely suited for—helping the region adapt to its changing geopolitical landscape. 
Multilateral, issue-based working groups, such as one focused on open RAN technologies, could facilitate a broader transregional strategic dialogue that harnesses the Middle East’s access to capital alongside the Indo-Pacific’s innovative potential to usher in a new era of stability and prosperity. See also Abraham Accords Major non-NATO ally Quadrilateral Security Dialogue 2+2 Ministerial Dialogue C5+1 References 2021 in international relations 2021 establishments in India 2021 establishments in Israel 2021 establishments in the United Arab Emirates 2021 establishments in the United States 2021 essays Politics of Southeast Asia Diplomatic conferences Geopolitical rivalry International relations terminology Foreign policy doctrines Foreign relations of India Foreign relations of Israel Foreign relations of the United Arab Emirates Foreign relations of the United States India–Israel relations India–United Arab Emirates relations India–United States relations Israel–United Arab Emirates relations Israel–United States relations United Arab Emirates–United States relations
```c++ // Use, modification and distribution is subject to the Boost Software // path_to_url // Authors: Matthias Troyer // Douglas Gregor /** @file skeleton_and_content.hpp * * This header provides facilities that allow the structure of data * types (called the "skeleton") to be transmitted and received * separately from the content stored in those data types. These * facilities are useful when the data in a stable data structure * (e.g., a mesh or a graph) will need to be transmitted * repeatedly. In this case, transmitting the skeleton only once * saves both communication effort (it need not be sent again) and * local computation (serialization need only be performed once for * the content). */ #ifndef BOOST_MPI_SKELETON_AND_CONTENT_HPP #define BOOST_MPI_SKELETON_AND_CONTENT_HPP #include <boost/mpi/config.hpp> #include <boost/archive/detail/auto_link_archive.hpp> #include <boost/mpi/packed_iarchive.hpp> #include <boost/mpi/packed_oarchive.hpp> #include <boost/mpi/detail/forward_skeleton_iarchive.hpp> #include <boost/mpi/detail/forward_skeleton_oarchive.hpp> #include <boost/mpi/detail/ignore_iprimitive.hpp> #include <boost/mpi/detail/ignore_oprimitive.hpp> #include <boost/shared_ptr.hpp> #include <boost/archive/detail/register_archive.hpp> namespace boost { namespace mpi { /** * @brief A proxy that requests that the skeleton of an object be * transmitted. * * The @c skeleton_proxy is a lightweight proxy object used to * indicate that the skeleton of an object, not the object itself, * should be transmitted. It can be used with the @c send and @c recv * operations of communicators or the @c broadcast collective. When a * @c skeleton_proxy is sent, Boost.MPI generates a description * containing the structure of the stored object. When that skeleton * is received, the receiving object is reshaped to match the * structure. 
Once the skeleton of an object has been transmitted, its * @c content can be transmitted separately (often several times) * without changing the structure of the object. */ template <class T> struct BOOST_MPI_DECL skeleton_proxy { /** * Constructs a @c skeleton_proxy that references object @p x. * * @param x the object whose structure will be transmitted or * altered. */ skeleton_proxy(T& x) : object(x) {} T& object; }; /** * @brief Create a skeleton proxy object. * * This routine creates an instance of the skeleton_proxy class. It * will typically be used when calling @c send, @c recv, or @c * broadcast, to indicate that only the skeleton (structure) of an * object should be transmitted and not its contents. * * @param x the object whose structure will be transmitted. * * @returns a skeleton_proxy object referencing @p x */ template <class T> inline const skeleton_proxy<T> skeleton(T& x) { return skeleton_proxy<T>(x); } namespace detail { /// @brief a class holding an MPI datatype /// INTERNAL ONLY /// the type is freed upon destruction class BOOST_MPI_DECL mpi_datatype_holder : public boost::noncopyable { public: mpi_datatype_holder() : is_committed(false) {} mpi_datatype_holder(MPI_Datatype t, bool committed = true) : d(t) , is_committed(committed) {} void commit() { BOOST_MPI_CHECK_RESULT(MPI_Type_commit,(&d)); is_committed=true; } MPI_Datatype get_mpi_datatype() const { return d; } ~mpi_datatype_holder() { int finalized=0; BOOST_MPI_CHECK_RESULT(MPI_Finalized,(&finalized)); if (!finalized && is_committed) BOOST_MPI_CHECK_RESULT(MPI_Type_free,(&d)); } private: MPI_Datatype d; bool is_committed; }; } // end namespace detail /** @brief A proxy object that transfers the content of an object * without its structure. * * The @c content class indicates that Boost.MPI should transmit or * receive the content of an object, but without any information * about the structure of the object. 
It is only meaningful to * transmit the content of an object after the receiver has already * received the skeleton for the same object. * * Most users will not use @c content objects directly. Rather, they * will invoke @c send, @c recv, or @c broadcast operations using @c * get_content(). */ class BOOST_MPI_DECL content { public: /** * Constructs an empty @c content object. This object will not be * useful for any Boost.MPI operations until it is reassigned. */ content() {} /** * This routine initializes the @c content object with an MPI data * type that refers to the content of an object without its structure. * * @param d the MPI data type referring to the content of the object. * * @param committed @c true indicates that @c MPI_Type_commit has * already been executed for the data type @p d. */ content(MPI_Datatype d, bool committed=true) : holder(new detail::mpi_datatype_holder(d,committed)) {} /** * Replace the MPI data type referencing the content of an object. * * @param d the new MPI data type referring to the content of the * object. * * @returns *this */ const content& operator=(MPI_Datatype d) { holder.reset(new detail::mpi_datatype_holder(d)); return *this; } /** * Retrieve the MPI data type that refers to the content of the * object. * * @returns the MPI data type, which should only be transmitted or * received using @c MPI_BOTTOM as the address. */ MPI_Datatype get_mpi_datatype() const { return holder->get_mpi_datatype(); } /** * Commit the MPI data type referring to the content of the * object. */ void commit() { holder->commit(); } private: boost::shared_ptr<detail::mpi_datatype_holder> holder; }; /** @brief Returns the content of an object, suitable for transmission * via Boost.MPI. * * The function creates an absolute MPI datatype for the object, * where all offsets are counted from the address 0 (a.k.a. @c * MPI_BOTTOM) instead of the address @c &x of the object. 
This * allows the creation of MPI data types for complex data structures * containing pointers, such as linked lists or trees. * * The disadvantage, compared to relative MPI data types, is that for * each object a new MPI data type has to be created. * * The contents of an object can only be transmitted when the * receiver already has an object with the same structure or shape as * the sender. To accomplish this, first transmit the skeleton of the * object using, e.g., @c skeleton() or @c skeleton_proxy. * * The type @c T has to allow creation of an absolute MPI data type * (content). * * @param x the object for which the content will be transmitted. * * @returns the content of the object @p x, which can be used for * transmission via @c send, @c recv, or @c broadcast. */ template <class T> const content get_content(const T& x); /** @brief An archiver that reconstructs a data structure based on the * binary skeleton stored in a buffer. * * The @c packed_skeleton_iarchive class is an Archiver (as in the * Boost.Serialization library) that can construct the shape of a * data structure based on a binary skeleton stored in a buffer. The * @c packed_skeleton_iarchive is typically used by the receiver of a * skeleton, to prepare a data structure that will eventually receive * content separately. * * Users will not generally need to use @c packed_skeleton_iarchive * directly. Instead, use @c skeleton or @c get_skeleton. */ class BOOST_MPI_DECL packed_skeleton_iarchive : public detail::ignore_iprimitive, public detail::forward_skeleton_iarchive<packed_skeleton_iarchive,packed_iarchive> { public: /** * Construct a @c packed_skeleton_iarchive for the given * communicator. * * @param comm The communicator over which this archive will be * transmitted. * * @param flags Control the serialization of the skeleton. Refer to * the Boost.Serialization documentation before changing the * default flags. 
*/ packed_skeleton_iarchive(MPI_Comm const & comm, unsigned int flags = boost::archive::no_header) : detail::forward_skeleton_iarchive<packed_skeleton_iarchive,packed_iarchive>(skeleton_archive_) , skeleton_archive_(comm,flags) {} /** * Construct a @c packed_skeleton_iarchive that unpacks a skeleton * from the given @p archive. * * @param archive the archive from which the skeleton will be * unpacked. * */ explicit packed_skeleton_iarchive(packed_iarchive & archive) : detail::forward_skeleton_iarchive<packed_skeleton_iarchive,packed_iarchive>(archive) , skeleton_archive_(MPI_COMM_WORLD, boost::archive::no_header) {} /** * Retrieve the archive corresponding to this skeleton. */ const packed_iarchive& get_skeleton() const { return this->implementation_archive; } /** * Retrieve the archive corresponding to this skeleton. */ packed_iarchive& get_skeleton() { return this->implementation_archive; } private: /// Store the actual archive that holds the structure, unless the /// user overrides this with their own archive. packed_iarchive skeleton_archive_; }; /** @brief An archiver that records the binary skeleton of a data * structure into a buffer. * * The @c packed_skeleton_oarchive class is an Archiver (as in the * Boost.Serialization library) that can record the shape of a data * structure (called the "skeleton") into a binary representation * stored in a buffer. The @c packed_skeleton_oarchive is typically * used by the sender of a skeleton, to pack the skeleton of a data * structure for transmission separately from the content. * * Users will not generally need to use @c packed_skeleton_oarchive * directly. Instead, use @c skeleton or @c get_skeleton. */ class BOOST_MPI_DECL packed_skeleton_oarchive : public detail::ignore_oprimitive, public detail::forward_skeleton_oarchive<packed_skeleton_oarchive,packed_oarchive> { public: /** * Construct a @c packed_skeleton_oarchive for the given * communicator. 
* * @param comm The communicator over which this archive will be * transmitted. * * @param flags Control the serialization of the skeleton. Refer to * the Boost.Serialization documentation before changing the * default flags. */ packed_skeleton_oarchive(MPI_Comm const & comm, unsigned int flags = boost::archive::no_header) : detail::forward_skeleton_oarchive<packed_skeleton_oarchive,packed_oarchive>(skeleton_archive_) , skeleton_archive_(comm,flags) {} /** * Construct a @c packed_skeleton_oarchive that packs a skeleton * into the given @p archive. * * @param archive the archive to which the skeleton will be packed. * */ explicit packed_skeleton_oarchive(packed_oarchive & archive) : detail::forward_skeleton_oarchive<packed_skeleton_oarchive,packed_oarchive>(archive) , skeleton_archive_(MPI_COMM_WORLD, boost::archive::no_header) {} /** * Retrieve the archive corresponding to this skeleton. */ const packed_oarchive& get_skeleton() const { return this->implementation_archive; } private: /// Store the actual archive that holds the structure. packed_oarchive skeleton_archive_; }; namespace detail { typedef boost::mpi::detail::forward_skeleton_oarchive<boost::mpi::packed_skeleton_oarchive,boost::mpi::packed_oarchive> type1; typedef boost::mpi::detail::forward_skeleton_iarchive<boost::mpi::packed_skeleton_iarchive,boost::mpi::packed_iarchive> type2; } } } // end namespace boost::mpi #include <boost/mpi/detail/content_oarchive.hpp> // For any headers that have provided declarations based on forward // declarations of the contents of this header, include definitions // for those declarations. This means that the inclusion of // skeleton_and_content.hpp enables the use of skeleton/content // transmission throughout the library. 
#ifdef BOOST_MPI_BROADCAST_HPP # include <boost/mpi/detail/broadcast_sc.hpp> #endif #ifdef BOOST_MPI_COMMUNICATOR_HPP # include <boost/mpi/detail/communicator_sc.hpp> #endif // required by export BOOST_SERIALIZATION_REGISTER_ARCHIVE(boost::mpi::packed_skeleton_oarchive) BOOST_SERIALIZATION_REGISTER_ARCHIVE(boost::mpi::packed_skeleton_iarchive) BOOST_SERIALIZATION_REGISTER_ARCHIVE(boost::mpi::detail::type1) BOOST_SERIALIZATION_REGISTER_ARCHIVE(boost::mpi::detail::type2) BOOST_SERIALIZATION_USE_ARRAY_OPTIMIZATION(boost::mpi::packed_skeleton_oarchive) BOOST_SERIALIZATION_USE_ARRAY_OPTIMIZATION(boost::mpi::packed_skeleton_iarchive) #endif // BOOST_MPI_SKELETON_AND_CONTENT_HPP ```
The Program is the second album by Marion, released in 1998 on London Records, and produced by Johnny Marr, former guitarist of the Smiths. The album did not chart in the UK. An expanded two-CD set was released by Demon Music Group on 16 September 2016. Background and recording To promote This World and Body, Marion embarked on an 18-month tour around the world. After its conclusion, they found it difficult to finish songs for their next album, owing to fatigue and a lack of time to work on them. Their manager Joe Moss, who had worked with the Smiths, told them that former Smiths guitarist Johnny Marr had become a fan of theirs after seeing them at a festival in Europe. After Marr expressed an interest in working with the band, Moss recommended that he visit them at their rehearsal space in Manchester to hear the new material they were working on. Marr was initially only going to offer his opinion; within one afternoon, however, he and the band had worked on all the songs they had up to that point. The Program was recorded at Revolution Studios in Manchester in early 1997 with Johnny Marr as producer and James Spencer as engineer. The majority of the recordings were mixed at RAK Studios in London by Tim Palmer, save for "Miyako Hideaway" and "Sparkle", which were mixed by Marr and Spencer. Harding recalled that it was the first album Marr had sung on, providing "T. Rex cat-style" backing vocals to "The Program", alongside several guitar riffs throughout it. Marr had given Harding a copy of Low (1977) by David Bowie; Harding was so enthusiastic about it that he "forced [Marr] to pepper the record with classic Eno-esque Moog sounds". Harding felt this gave the band an air of "maturity and richness" that was absent from their debut. He mentioned that there was interference from their record label, who would contact them each week and ask them to write something similar to whichever track had become a hit that week. 
One such instance, Harding recalled, was being asked to make something similar to "You're Gorgeous" (1996) by Babybird. On another occasion, he was asked to come up with a new chorus section for a song "with no real reason behind the request"; he relented, and the reworked chorus went into "The Powder Room Plan". Release By the time the album was finished, London Records had lost all interest in promoting the band, which led the band to buy the album back from the label. They planned to sell it to one of several interested labels in the United States. Though London agreed to this arrangement, the label ended up releasing The Program in Japan without any promotion or press, aware that they would recoup the cost from that country alone. Harding said it "confused and angered us", as it forced their fans to pay roughly quadruple the price, about £40, to import it. The album was eventually released by London Records in the UK in September 1998. Track listing All songs by Jaime Harding, Tony Grantham and Phil Cunningham, except "Miyako Hideaway" by those three and Johnny Marr. All lyrics by Harding. "The Smile" – 4:14 "Miyako Hideaway" (full length mix) – 4:55 "Sparkle" – 4:12 "Is That So?" – 4:33 "What Are We Waiting For?" – 6:30 "Strangers" – 3:47 "The Powder Room Plan" – 3:47 "The Program" – 5:35 "All of These Days" – 3:23 "Comeback" – 6:05 Personnel Personnel per deluxe edition booklet. 
Marion Jaime Harding – vocals Phil Cunningham – guitar Tony Grantham – guitar Nick Gilbert – bass Murad Mousa – drums Additional musicians Johnny Marr – guitar, keyboards, backing vocals (track 8) Ged Lynch – percussion James Brown – piano (track 10) Production and design Johnny Marr – producer, mix engineer (tracks 2 and 3) James Spencer – engineer, mix engineer (tracks 2 and 3) Tim Palmer – mix engineer (all except tracks 2 and 3) Justin Richards – recording assistant Marion – sleeve design Ian Tilt – band photography Merton Gauster – neon photography References Citations Sources Marion (band) albums 1998 albums London Records albums
Sukladhwaja (also Chilarai) (1510–1577 AD) was the third son of Biswa Singha, founder of the Koch dynasty in the Kamata kingdom, and the younger brother of Nara Narayan, the second king of the Koch dynasty of the Kamata kingdom in the 16th century. He was Nara Narayan's commander-in-chief and the chief minister (Dewan) of the kingdom. He got the name Chilarai because, as a general, he executed troop movements as fast as a chila (a kite or eagle). Biography Chilaray was the third son of Biswa Singha (1515–1540). It was only due to his royal patronage that Sankardeva was able to establish the ekasarana-namadharma in Assam and bring about his cultural renaissance. Several rulers, namely the then king of Manipur and the Khasi tribal chief (Viryyavanta), submitted to Chilaray. Chilaray and his army also vanquished and killed the Jaintia king, as well as the kings of Tippera (Tripura) and Sylhet. Chilaray is said to have never committed brutalities against unarmed common people, and even those kings who surrendered were treated with respect. He was harsh to kings and soldiers who refused to surrender, but neither he nor his brother ever annexed territories or oppressed the common people; they only collected tributes from the vanquished kings. They even treated enemy prisoners kindly and gave them land grants to settle. The duo (Chilaray and Nara Narayan) turned towards Bengal, but unforeseen circumstances led to Chilaray's capture by the Afghan Sultan Sulaiman Khan Karrani, while Naranarayan retreated to his capital. Much of the Koch kingdom was then captured by the Afghans. Chilaray died in 1577 of smallpox on the bank of the Ganges. Bir Chilaray Divas The birth anniversary of Mahabir Chilaray has been organised annually by the Government of Assam since 2005, and the Government has also declared the day a state holiday. It is celebrated on the Purnima (full moon) of the month of Phagun in the Hindu calendar. Bir Chilaray award The Directorate of Cultural Affairs, Government of Assam, instituted these awards in 2005. 
They comprise a shawl, a citation, and a cash award of Rs. 100,000. Notes References 16th-century Indian people People from Assam
```php <?php declare(strict_types=1); /** * Update feeder loop. * * This file is part of MadelineProto. * MadelineProto is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. * If not, see <path_to_url * * @author Daniil Gentili <daniil@daniil.it> * @copyright 2016-2023 Daniil Gentili <daniil@daniil.it> * @license path_to_url AGPLv3 * @link path_to_url MadelineProto documentation */ namespace danog\MadelineProto\Loop\Update; use danog\Loop\Loop; use danog\MadelineProto\AsyncTools; use danog\MadelineProto\Logger; use danog\MadelineProto\Loop\InternalLoop; use danog\MadelineProto\MTProto; use danog\MadelineProto\MTProtoTools\UpdatesState; /** * Update feed loop. * * @internal * * @author Daniil Gentili <daniil@daniil.it> */ final class FeedLoop extends Loop { use InternalLoop { __construct as private init; } /** * Main loop ID. */ public const GENERIC = 0; /** * Incoming updates array. */ private array $incomingUpdates = []; /** * Parsed updates array. */ private array $parsedUpdates = []; /** * Update loop. */ private ?UpdateLoop $updater = null; /** * Update state. */ private ?UpdatesState $state = null; /** * Constructor. */ public function __construct(MTProto $API, private int $channelId = 0) { $this->init($API); } public function __sleep(): array { return ['incomingUpdates', 'parsedUpdates', 'updater', 'API', 'state', 'channelId']; } /** * Main loop. */ public function loop(): ?float { if (!$this->isLoggedIn()) { return self::PAUSE; } $this->updater = $this->API->updaters[$this->channelId]; $this->state = $this->channelId === self::GENERIC ? 
$this->API->loadUpdateState() : $this->API->loadChannelState($this->channelId); $this->API->logger("Resumed {$this}"); while ($this->incomingUpdates) { $updates = $this->incomingUpdates; $this->incomingUpdates = []; $this->parse($updates); $updates = null; } while ($this->parsedUpdates) { $parsedUpdates = $this->parsedUpdates; $this->parsedUpdates = []; foreach ($parsedUpdates as $update) { try { $this->API->saveUpdate($update); } catch (\Throwable $e) { AsyncTools::rethrow($e); } } $parsedUpdates = null; } return self::PAUSE; } public function parse(array $updates): void { reset($updates); while ($updates) { $key = key($updates); $update = $updates[$key]; unset($updates[$key]); if ($update['_'] === 'updateChannelTooLong') { $this->API->logger('Got channel too long update, getting difference...', Logger::VERBOSE); $this->updater->resume(); continue; } if (isset($update['pts'], $update['pts_count'])) { $logger = function ($msg) use ($update): void { $pts_count = $update['pts_count']; $mid = $update['message']['id'] ?? '-'; $mypts = $this->state->pts(); $computed = $mypts + $pts_count; $this->API->logger("{$msg}. 
My pts: {$mypts}, remote pts: {$update['pts']}, computed pts: {$computed}, msg id: {$mid}, channel id: {$this->channelId}", Logger::ULTRA_VERBOSE); }; $result = $this->state->checkPts($update); if ($result < 0) { $logger('PTS duplicate'); continue; } if ($result > 0) { $logger('PTS hole'); //$this->updater->setLimit($this->state->pts() + $result); $this->updater->resume(); //$updates = array_merge($this->incomingUpdates, $updates); //$this->incomingUpdates = []; continue; } if (isset($update['message']['id'], $update['message']['peer_id']) && !\in_array($update['_'], ['updateEditMessage', 'updateEditChannelMessage', 'updateMessageID'], true)) { if (!$this->API->checkMsgId($update['message'])) { $logger('MSGID duplicate'); continue; } } $logger('PTS OK'); $this->state->pts($update['pts']); } $this->parsedUpdates[] = $update; } } public function feed(array $updates) { $result = []; foreach ($updates as $update) { $result[$this->feedSingle($update)] = true; } return $result; } public function feedSingle(array $update) { $channelId = self::GENERIC; switch ($update['_']) { case 'updateNewChannelMessage': case 'updateEditChannelMessage': $channelId = $update['message']['peer_id']; break; case 'updateChannelWebPage': case 'updateDeleteChannelMessages': $channelId = $update['channel_id']; break; case 'updateChannelTooLong': case 'updateChannel': $channelId = $update['channel_id'] ?? 
self::GENERIC; if (!isset($update['pts'])) { $update['pts'] = 1; } break; } if ($channelId && !$this->API->getChannelStates()->has($channelId)) { $this->API->loadChannelState($channelId, $update); if (!isset($this->API->feeders[$channelId])) { $this->API->feeders[$channelId] = new self($this->API, $channelId); } if (!isset($this->API->updaters[$channelId])) { $this->API->updaters[$channelId] = new UpdateLoop($this->API, $channelId); } $this->API->feeders[$channelId]->start(); $this->API->updaters[$channelId]->start(); } switch ($update['_']) { case 'updateNewMessage': case 'updateEditMessage': case 'updateNewChannelMessage': case 'updateEditChannelMessage': $to = false; $from = false; $via_bot = false; $entities = false; if ($update['message']['_'] !== 'messageEmpty' && ( ( $from = isset($update['message']['from_id']) && !($this->API->peerIsset($update['message']['from_id'])) ) || ( $to = !($this->API->peerIsset($update['message']['peer_id'])) ) || ( $via_bot = isset($update['message']['via_bot_id']) && !($this->API->peerIsset($update['message']['via_bot_id'])) ) || ( $entities = isset($update['message']['entities']) && !($this->API->entitiesPeerIsset($update['message']['entities'])) ) ) ) { $log = ''; if ($from) { $from_id = $this->API->getIdInternal($update['message']['from_id']); $log .= "from_id {$from_id}, "; } if ($to) { $log .= 'peer_id '.json_encode($update['message']['peer_id']).', '; } if ($via_bot) { $log .= "via_bot {$update['message']['via_bot_id']}, "; } if ($entities) { $log .= 'entities '.json_encode($update['message']['entities']).', '; } $this->API->logger("Not enough data: for message update {$log}, getting difference...", Logger::VERBOSE); $update = ['_' => 'updateChannelTooLong']; if ($channelId && $to) { $channelId = self::GENERIC; } } break; default: if ($channelId && !$this->API->peerIsset($channelId)) { $this->API->logger('Skipping update, I do not have the channel id '.$channelId, Logger::ERROR); return false; } break; } if ($channelId !== 
$this->channelId) { if (isset($this->API->feeders[$channelId])) { return $this->API->feeders[$channelId]->feedSingle($update); } elseif ($this->channelId) { return $this->API->feeders[self::GENERIC]->feedSingle($update); } } $this->API->logger('Was fed an update of type '.$update['_']." in {$this}...", Logger::ULTRA_VERBOSE); if ($update['_'] === 'updateLoginToken') { $this->API->saveUpdate($update); return $this->channelId; } $this->incomingUpdates[] = $update; return $this->channelId; } public function saveMessages($messages): void { foreach ($messages as $message) { if (!$this->API->checkMsgId($message)) { $this->API->logger("MSGID duplicate ({$message['id']}) in {$this}"); continue; } if ($message['_'] !== 'messageEmpty') { $this->API->logger('Getdiff fed me message of type '.$message['_']." in {$this}...", Logger::VERBOSE); } $this->parsedUpdates[] = ['_' => $this->channelId === self::GENERIC ? 'updateNewMessage' : 'updateNewChannelMessage', 'message' => $message, 'pts' => -1, 'pts_count' => -1]; } } public function __toString(): string { return !$this->channelId ? 'update feed loop generic' : "update feed loop channel {$this->channelId}"; } } ```
```c++ // This file is part of libigl, a simple c++ geometry processing library. // // // v. 2.0. If a copy of the MPL was not distributed with this file, You can // obtain one at path_to_url #include "insert_into_cdt.h" #include <CGAL/Point_3.h> #include <CGAL/Segment_3.h> #include <CGAL/Triangle_3.h> template <typename Kernel> IGL_INLINE void igl::copyleft::cgal::insert_into_cdt( const CGAL::Object & obj, const CGAL::Plane_3<Kernel> & P, CGAL::Constrained_triangulation_plus_2< CGAL::Constrained_Delaunay_triangulation_2< Kernel, CGAL::Triangulation_data_structure_2< CGAL::Triangulation_vertex_base_2<Kernel>, CGAL::Constrained_triangulation_face_base_2< Kernel> >, CGAL::Exact_intersections_tag > > & cdt) { typedef CGAL::Point_3<Kernel> Point_3; typedef CGAL::Segment_3<Kernel> Segment_3; typedef CGAL::Triangle_3<Kernel> Triangle_3; if(const Segment_3 *iseg = CGAL::object_cast<Segment_3 >(&obj)) { // Add segment constraint cdt.insert_constraint( P.to_2d(iseg->vertex(0)),P.to_2d(iseg->vertex(1))); }else if(const Point_3 *ipoint = CGAL::object_cast<Point_3 >(&obj)) { // Add point cdt.insert(P.to_2d(*ipoint)); } else if(const Triangle_3 *itri = CGAL::object_cast<Triangle_3 >(&obj)) { // Add 3 segment constraints cdt.insert_constraint( P.to_2d(itri->vertex(0)),P.to_2d(itri->vertex(1))); cdt.insert_constraint( P.to_2d(itri->vertex(1)),P.to_2d(itri->vertex(2))); cdt.insert_constraint( P.to_2d(itri->vertex(2)),P.to_2d(itri->vertex(0))); } else if(const std::vector<Point_3 > *polyp = CGAL::object_cast< std::vector<Point_3 > >(&obj)) { const std::vector<Point_3 > & poly = *polyp; const size_t m = poly.size(); assert(m>=2); for(size_t p = 0;p<m;p++) { const size_t np = (p+1)%m; cdt.insert_constraint(P.to_2d(poly[p]),P.to_2d(poly[np])); } }else { throw std::runtime_error("Unknown intersection object!"); } } #ifdef IGL_STATIC_LIBRARY // Explicit template instantiation #include <CGAL/Exact_predicates_exact_constructions_kernel.h> template void 
igl::copyleft::cgal::insert_into_cdt<CGAL::Epick>(CGAL::Object const&, CGAL::Plane_3<CGAL::Epick> const&, CGAL::Constrained_triangulation_plus_2<CGAL::Constrained_Delaunay_triangulation_2<CGAL::Epick, CGAL::Triangulation_data_structure_2<CGAL::Triangulation_vertex_base_2<CGAL::Epick, CGAL::Triangulation_ds_vertex_base_2<void> >, CGAL::Constrained_triangulation_face_base_2<CGAL::Epick, CGAL::Triangulation_face_base_2<CGAL::Epick, CGAL::Triangulation_ds_face_base_2<void> > > >, CGAL::Exact_intersections_tag> >&); template void igl::copyleft::cgal::insert_into_cdt<CGAL::Epeck>(CGAL::Object const&, CGAL::Plane_3<CGAL::Epeck> const&, CGAL::Constrained_triangulation_plus_2<CGAL::Constrained_Delaunay_triangulation_2<CGAL::Epeck, CGAL::Triangulation_data_structure_2<CGAL::Triangulation_vertex_base_2<CGAL::Epeck, CGAL::Triangulation_ds_vertex_base_2<void> >, CGAL::Constrained_triangulation_face_base_2<CGAL::Epeck, CGAL::Triangulation_face_base_2<CGAL::Epeck, CGAL::Triangulation_ds_face_base_2<void> > > >, CGAL::Exact_intersections_tag> >&); #endif ```
Vladimir Nikolaevich Putyatov (, born 24 December 1945) is a Russian former volleyball player who competed for the Soviet Union in the 1972 Summer Olympics. In 1972 he was part of the Soviet team which won the bronze medal in the Olympic tournament. He played six matches. External links 1945 births Living people Soviet men's volleyball players Olympic volleyball players for the Soviet Union Volleyball players at the 1972 Summer Olympics Olympic bronze medalists for the Soviet Union Olympic medalists in volleyball Russian men's volleyball players Medalists at the 1972 Summer Olympics
```sqlpl /* src/test/modules/test_extensions/test_ext5--1.0.sql */ -- complain if script is sourced in psql, rather than via CREATE EXTENSION \echo Use "CREATE EXTENSION test_ext5" to load this file. \quit ```
Cyperus congensis is a species of sedge that is endemic to parts of western and central Africa. The species was first formally described by the botanist Charles Baron Clarke in 1896. See also List of Cyperus species References congensis Taxa named by Charles Baron Clarke Plants described in 1896 Flora of Togo Flora of Sierra Leone Flora of Benin Flora of Gabon Flora of Ivory Coast Flora of Liberia Flora of Senegal
Christopher Stephens (born 27 March 1975) is a Welsh rugby union player who earned two caps for the national team. He is also known for an on-the-pitch incident in which he punched the opposition player Ioan Bebb; Stephens was fined after admitting grievous bodily harm, and the incident ended Bebb's career. Stephens also punched Charles Riechelmann in a later match. References 1975 births Living people Welsh rugby union players Wales international rugby union players
```yaml flags: - key: flag1 name: flag1 type: "VARIANT_FLAG_TYPE" description: description enabled: true variants: - key: variant1 name: variant1 description: variant description default: true attachment: pi: 3.141 happy: true name: Niels answer: everything: 42 list: - 1 - 0 - 2 object: currency: USD value: 42.99 rules: - segment: segment1 rank: 1 distributions: - variant: variant1 rollout: 100 - key: flag2 name: flag2 type: "BOOLEAN_FLAG_TYPE" description: a boolean flag enabled: false rollouts: - description: enabled for internal users segment: key: internal_users value: true - description: enabled for 50% threshold: percentage: 50 value: true segments: - key: segment1 name: segment1 match_type: "ANY_MATCH_TYPE" description: description constraints: - type: STRING_COMPARISON_TYPE property: fizz operator: neq value: buzz ```
```hcl output "aws_region" { description = "The AWS region's name." value = var.aws_region } output "env" { description = "The randomized environment's name." value = "${var.env}-${random_id.env.hex}" } ```
```c++ ///|/ ///|/ PrusaSlicer is released under the terms of the AGPLv3 or higher ///|/ #ifndef slic3r_InstanceCheck_hpp_ #define slic3r_InstanceCheck_hpp_ #include "Event.hpp" #if _WIN32 #include <windows.h> #endif //_WIN32 #include <string> #include <boost/filesystem.hpp> #if __linux__ #include <boost/thread.hpp> #include <mutex> #include <condition_variable> #endif // __linux__ namespace Slic3r { // checks for other running instances and sends them argv, // if there is --single-instance argument or AppConfig is set to single_instance=1 // returns true if this instance should terminate bool instance_check(int argc, char** argv, bool app_config_single_instance); #if __APPLE__ // apple implementation of inner functions of instance_check // in InstanceCheckMac.mm void send_message_mac(const std::string& msg, const std::string& version); void send_message_mac_closing(const std::string& msg, const std::string& version); bool unlock_lockfile(const std::string& name, const std::string& path); #endif //__APPLE__ namespace GUI { class MainFrame; #if __linux__ #define BACKGROUND_MESSAGE_LISTENER #endif // __linux__ using LoadFromOtherInstanceEvent = Event<std::vector<boost::filesystem::path>>; using StartDownloadOtherInstanceEvent = Event<std::vector<std::string>>; using LoginOtherInstanceEvent = Event<std::string>; wxDECLARE_EVENT(EVT_LOAD_MODEL_OTHER_INSTANCE, LoadFromOtherInstanceEvent); wxDECLARE_EVENT(EVT_START_DOWNLOAD_OTHER_INSTANCE, StartDownloadOtherInstanceEvent); wxDECLARE_EVENT(EVT_LOGIN_OTHER_INSTANCE, LoginOtherInstanceEvent); using InstanceGoToFrontEvent = SimpleEvent; wxDECLARE_EVENT(EVT_INSTANCE_GO_TO_FRONT, InstanceGoToFrontEvent); class OtherInstanceMessageHandler { public: OtherInstanceMessageHandler() = default; OtherInstanceMessageHandler(OtherInstanceMessageHandler const&) = delete; void operator=(OtherInstanceMessageHandler const&) = delete; ~OtherInstanceMessageHandler() { assert(!m_initialized); } // inits listening, on each platform different. 
On linux starts background thread void init(wxEvtHandler* callback_evt_handler); // stops listening, on linux stops the background thread void shutdown(MainFrame* main_frame); //finds paths to models in message(= command line arguments, first should be prusaSlicer executable) //and sends them to plater via LoadFromOtherInstanceEvent //security of messages: from message all existing paths are processed to load model // win32 - anybody who has hwnd can send message. // mac - anybody who posts notification with name:@"OtherPrusaSlicerTerminating" // linux - introspectable on dbus void handle_message(const std::string& message); #ifdef __APPLE__ // Message from other instance that it deleted its lockfile - the first instance to get it will create its own. void handle_message_other_closed(); #endif //__APPLE__ #ifdef _WIN32 static void init_windows_properties(MainFrame* main_frame, size_t instance_hash); void update_windows_properties(MainFrame* main_frame); #endif //WIN32 private: bool m_initialized { false }; wxEvtHandler* m_callback_evt_handler { nullptr }; #ifdef BACKGROUND_MESSAGE_LISTENER //worker thread to listen to incoming dbus communication boost::thread m_thread; std::condition_variable m_thread_stop_condition; mutable std::mutex m_thread_stop_mutex; bool m_stop{ false }; bool m_start{ true }; // background thread method void listen(); #endif //BACKGROUND_MESSAGE_LISTENER #if __APPLE__ //implemented at InstanceCheckMac.mm void register_for_messages(const std::string &version_hash); void unregister_for_messages(); // Opaque pointer to RemovableDriveManagerMM void* m_impl_osx; public: void bring_instance_forward(); #endif //__APPLE__ }; } // namespace GUI } // namespace Slic3r #endif // slic3r_InstanceCheck_hpp_ ```
In mathematics and especially number theory, the sum of reciprocals generally is computed for the reciprocals of some or all of the positive integers (counting numbers)—that is, it is generally the sum of unit fractions. If infinitely many numbers have their reciprocals summed, generally the terms are given in a certain sequence and the first n of them are summed, then one more is included to give the sum of the first n+1 of them, etc. If only finitely many numbers are included, the key issue is usually to find a simple expression for the value of the sum, or to require the sum to be less than a certain value, or to determine whether the sum is ever an integer. For an infinite series of reciprocals, the issues are twofold: First, does the sequence of sums diverge—that is, does it eventually exceed any given number—or does it converge, meaning there is some number that it gets arbitrarily close to without ever exceeding it? (A set of positive integers is said to be large if the sum of its reciprocals diverges, and small if it converges.) Second, if it converges, what is a simple expression for the value it converges to, is that value rational or irrational, and is that value algebraic or transcendental? Finitely many terms The harmonic mean of a set of positive integers is the number of numbers times the reciprocal of the sum of their reciprocals. The optic equation requires the sum of the reciprocals of two positive integers a and b to equal the reciprocal of a third positive integer c. All solutions are given by a = mn + m², b = mn + n², c = mn. This equation appears in various contexts in elementary geometry. The Fermat–Catalan conjecture concerns a certain Diophantine equation, equating the sum of two terms, each a positive integer raised to a positive integer power, to a third term that is also a positive integer raised to a positive integer power (with the base integers having no prime factor in common). 
The conjecture asks whether the equation has only finitely many solutions in which the sum of the reciprocals of the three exponents in the equation is less than 1. The purpose of this restriction is to preclude the known infinitude of solutions in which two exponents are 2 and the other exponent is any even number. The n-th harmonic number, which is the sum of the reciprocals of the first n positive integers, is never an integer except for the case n = 1. Moreover, József Kürschák proved in 1918 that the sum of the reciprocals of consecutive natural numbers (whether starting from 1 or not) is never an integer. The sum of the reciprocals of the first n primes is not an integer for any n. There are 14 distinct combinations of four integers such that the sum of their reciprocals is 1, of which six use four distinct integers and eight repeat at least one integer. An Egyptian fraction is the sum of a finite number of reciprocals of positive integers. According to the proof of the Erdős–Graham problem, if the set of integers greater than one is partitioned into finitely many subsets, then one of the subsets can be used to form an Egyptian fraction representation of 1. The Erdős–Straus conjecture states that for all integers n ≥ 2, the rational number 4/n can be expressed as the sum of three reciprocals of positive integers. The Fermat quotient with base 2, which is (2^(p−1) − 1)/p for odd prime p, when expressed mod p and multiplied by −2, equals the sum of the reciprocals mod p of the numbers lying in the first half of the range {1, p − 1}. In any triangle, the sum of the reciprocals of the altitudes equals the reciprocal of the radius of the incircle (regardless of whether or not they are integers). In a right triangle, the sum of the reciprocals of the squares of the altitudes from the legs (equivalently, of the squares of the legs themselves) equals the reciprocal of the square of the altitude from the hypotenuse (the inverse Pythagorean theorem). 
This holds whether or not the numbers are integers; there is a formula (see here) that generates all integer cases. A triangle not necessarily in the Euclidean plane can be specified as having angles π/p, π/q, and π/r. Then the triangle is in Euclidean space if the sum of the reciprocals of p, q, and r equals 1, spherical space if that sum is greater than 1, and hyperbolic space if the sum is less than 1. A harmonic divisor number is a positive integer whose divisors have a harmonic mean that is an integer. The first five of these are 1, 6, 28, 140, and 270. It is not known whether any harmonic divisor numbers (besides 1) are odd, but there are no odd ones less than 10^24. The sum of the reciprocals of the divisors of a perfect number is 2. When eight points are distributed on the surface of a sphere with the aim of maximizing the distance between them in some sense, the resulting shape corresponds to a square antiprism. Specific methods of distributing the points include, for example, minimizing the sum of all reciprocals of squares of distances between points. Infinitely many terms Convergent series A sum-free sequence of increasing positive integers is one for which no number is the sum of any subset of the previous ones. The sum of the reciprocals of the numbers in any sum-free sequence is less than  . The sum of the reciprocals of the heptagonal numbers converges to a known value that is not only irrational but also transcendental, and for which there exists a complicated formula. The sum of the reciprocals of the twin primes, of which there may be finitely many or infinitely many, is known to be finite and is called Brun's constant, approximately 1.902. The reciprocal of five conventionally appears twice in the sum. The sum of the reciprocals of the Proth primes, of which there may be finitely many or infinitely many, is known to be finite. The prime quadruplets are pairs of twin primes with only one odd number between them. 
The sum of the reciprocals of the numbers in prime quadruplets is approximately 0.8706. The sum of the reciprocals of the perfect powers (including duplicates) is 1. The sum of the reciprocals of the perfect powers (excluding duplicates) is approximately  . The sum of the reciprocals of the powers n^n is approximately equal to 1.2913, and is exactly equal to the definite integral of x^(−x) from 0 to 1. This identity was discovered by Johann Bernoulli in 1697, and is now known as one of the two Sophomore's dream identities. The Goldbach–Euler theorem states that the sum of the reciprocals of the numbers that are 1 less than a perfect power (excluding duplicates) is 1. The sum of the reciprocals of all the non-zero triangular numbers is 2. The reciprocal Fibonacci constant is the sum of the reciprocals of the Fibonacci numbers, which is known to be finite and irrational and approximately equal to 3.3599. For other finite sums of subsets of the reciprocals of Fibonacci numbers, see here. An exponential factorial is an operation recursively defined as a(1) = 1 and a(n) = n^a(n−1). For example, a(4) = 4^(3^(2^1)) = 4^9 = 262144, where the exponents are evaluated from the top down. The sum of the reciprocals of the exponential factorials from 1 onward is approximately 1.6111 and is transcendental. A "powerful number" is a positive integer for which every prime appearing in its prime factorization appears there at least twice. The sum of the reciprocals of the powerful numbers is close to 1.9436. The reciprocals of the factorials sum to the transcendental number e (one of two constants called "Euler's number"). The sum of the reciprocals of the square numbers (the Basel problem) is the transcendental number π²/6, or ζ(2), where ζ is the Riemann zeta function. The sum of the reciprocals of the cubes of positive integers is called Apéry's constant ζ(3), and equals approximately 1.2021. This number is irrational, but it is not known whether or not it is transcendental. The reciprocals of the non-negative integer powers of 2 sum to 2. 
This is a particular case of the sum of the reciprocals of any geometric series where the first term and the common ratio are positive integers. If the first term is a and the common ratio is r, then the sum is r/(a(r − 1)). The Kempner series is the sum of the reciprocals of all positive integers not containing the digit "9" in their decimal expansion. Unlike the harmonic series, which does not exclude those numbers, this series converges, specifically to approximately 22.92. A palindromic number is one that remains the same when its digits are reversed. The sum of the reciprocals of the palindromic numbers converges to approximately  . A pentatope number is a number in the fifth cell of any row of Pascal's triangle starting with the five-term row 1 4 6 4 1. The sum of the reciprocals of the pentatope numbers is 4/3. Sylvester's sequence is an integer sequence in which each member of the sequence is the product of the previous members, plus one. The first few terms of the sequence are 2, 3, 7, 43, and 1807. The sum of the reciprocals of the numbers in Sylvester's sequence converges to 1. The Riemann zeta function is a function of a complex variable s that analytically continues the sum of the infinite series Σ 1/n^s to an analytic function on the entire complex plane except for s = 1, where it has a pole. This series converges if and only if the real part of s is greater than 1. The sum of the reciprocals of all the Fermat numbers (numbers of the form 2^(2^n) + 1) is irrational. The sum of the reciprocals of the pronic numbers (products of two consecutive integers) (excluding 0) is 1 (see Telescoping series). Divergent series The n-th partial sum of the harmonic series, which is the sum of the reciprocals of the first n positive integers, diverges as n goes to infinity, albeit extremely slowly: the sum of the first 10^43 terms is less than 100. The difference between the cumulative sum and the natural logarithm of n converges to the Euler–Mascheroni constant, commonly denoted as γ, which is approximately 0.5772. The sum of the reciprocals of the primes diverges. 
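The slow logarithmic growth of the harmonic partial sums, and their drift toward the Euler–Mascheroni constant, are easy to check numerically. The following Python sketch is illustrative only (the helper name `harmonic` is not from any cited source):

```python
import math

def harmonic(n):
    """Return H_n = 1 + 1/2 + ... + 1/n, the n-th harmonic number."""
    return sum(1.0 / k for k in range(1, n + 1))

# The partial sums diverge, but only logarithmically: H_n is close to
# ln(n) + gamma, where gamma ~ 0.5772 is the Euler-Mascheroni constant.
for n in (10, 1_000, 100_000):
    print(n, round(harmonic(n), 4), round(harmonic(n) - math.log(n), 4))
```

For n = 100,000 the difference H_n − ln(n) already agrees with γ ≈ 0.5772 to about five decimal places, while H_n itself is still only around 12.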
The strong form of Dirichlet's theorem on arithmetic progressions implies that the sum of the reciprocals of the primes of the form 4n + 1 is divergent. Similarly, the sum of the reciprocals of the primes of the form 4n + 3 is divergent. By Fermat's theorem on sums of two squares, it follows that the sum of the reciprocals of numbers of the form a² + b², where a and b are non-negative integers, not both equal to 0, diverges, with or without repetition. If (a_n) is any ascending sequence of positive integers with the property that there exists a constant k such that a_n ≤ kn for all n, then the sum of the reciprocals diverges. The Erdős conjecture on arithmetic progressions states that if the sum of the reciprocals of the members of a set A of positive integers diverges, then A contains arithmetic progressions of any length, however great. The conjecture remains unproven. See also Large set Sum of squares Sums of powers References Mathematics-related lists Diophantine equations Sequences and series
The Judicial Conference of the State of New York is an institution of the New York State Unified Court System responsible for surveying current practice in the administration of the state's courts, compiling statistics, and suggesting legislation and regulations. Its members include the Chief Judge of the New York Court of Appeals and judges from the New York Supreme Court, Appellate Division. History It was created in 1955 and codified at New York Judiciary Law article 7-A (§§ 214, 214-A). It is the successor body of the Judicial Council of the State of New York, which was abolished with the repeal of article 2-A of the Judiciary Law in Laws of 1955, ch. 869. That body was formed for the purpose of surveying current practice in the administration of the State's courts, compiling statistics, and suggesting legislation. References New York (state) state courts Court administration
```javascript
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

// Flags: --allow-natives-syntax

// Each generator pops the next element off the shared array and resumes it,
// so running the last generator builds a deep chain of nested activations.
function* generator(a) {
  a.pop().next();
}

function prepareGenerators(n) {
  var a = [{ next: () => 0 }];
  for (var i = 0; i < n; ++i) {
    a.push(generator(a));
  }
  return a;
}

// A short chain completes normally.
var gens1 = prepareGenerators(10);
assertDoesNotThrow(() => gens1.pop().next());

%OptimizeFunctionOnNextCall(generator);

// A very deep chain overflows the stack, which surfaces as a RangeError.
var gens2 = prepareGenerators(200000);
assertThrows(() => gens2.pop().next(), RangeError);
```
The Returned () is a French supernatural drama television series created by Fabrice Gobert, based on the 2004 French film They Came Back (Les Revenants), directed by Robin Campillo. The series debuted on 26 November 2012 on Canal+ and completed its first season, consisting of eight episodes, on 17 December. In 2013, the first season won an International Emmy for Best Drama Series. The second season, also comprising eight episodes, premiered on 28 September 2015 on Canal+. It premiered in the UK on 16 October 2015 on More4, and in the US on 31 October 2015 on SundanceTV. Premise In a small French mountain town many dead people reappear apparently alive and normal, including teenage school bus crash victim Camille, suicidal bridegroom Simon, a small boy called "Victor" who was murdered by burglars, and serial killer Serge. While they try to resume their lives, strange phenomena take place: recurring power outages; a mysterious lowering of the local reservoir's water level, revealing the presence of dead animals and a church steeple; and the appearance of strange marks on the bodies of the living and the dead. 
Cast and characters Main characters The "Returned" Yara Pilartz as Camille Séguret, 15-year-old girl killed in a bus accident four years earlier, and younger Léna Swann Nambotin as Victor/Louis Lewanski, 8-year-old boy who died 35 years prior Pierre Perrier as Simon Delaître, 23 years old, died the day of his wedding to Adèle a decade earlier Guillaume Gouix as Serge Garrel, a serial killer who was killed by his brother Toni seven years earlier Ana Girardot as Lucy Clarsen, murdered waitress Laetitia de Fombelle as Viviane Costa, 45 years old, wife of Michel Costa, died 34 years earlier Ernst Umhauer as Virgil, teenager murdered 35 years ago (season 2) Armande Boulanger as Audrey Sabatini, daughter of Sandrine and Yan, classmate of Camille who died in the same bus accident (season 2) Thomas Doret as Esteban Koretzky, died in the bus accident (season 2) Mélodie Richard as Esther, a young woman killed seven years before (season 2) Michaël Abiteboul as Milan Garrel, father of Serge and Toni, who died 35 years earlier (season 2) Vladimir Consigny as Morgane, boyfriend/client of Lucy's, died 35 years earlier Others Jenna Thiam as Léna Séguret, Camille's twin sister, now 19 Anne Consigny as Claire Séguret, mother of Léna and Camille Frédéric Pierrot as Jérôme Séguret, father of Léna and Camille, now separated from Claire Jean-François Sivadier as Pierre Grégory Gadebois as Toni Garrel, brother of Serge Clotilde Hesme as Adèle Werther, Simon's former fiancée Brune Martin as Chloé Delaitre, 9, daughter of Adèle Werther and Simon Delaitre Céline Sallette as Julie Meyer, a doctor who finds and cares for "Victor" Claude Leveque as Michel Costa, elderly widower of Viviane Costa, who is attended to by Julie Meyer Samir Guesmi as Thomas Mézache Alix Poisson as Laure Valère Constance Dollé as Sandrine Sabatini Jérôme Kircher as Father Jean-François Laurent Lucas as Berg Recurring characters Bertrand Constant as Bruno Matila Malliarakis as Frédéric Guillaume Marquet as Alcide 
Franck Adrien as Yan Sabatini Carole Franck as Mademoiselle Payet Laurent Capelluto as the army major Alice Butaud as Madame Lewanski Pauline Parigot as Ophélie Nicolas Wanczycki as Lieutenant Janvier Aurélien Recoing as Etienne Berg Episodes Season 1 (2012) Season 2 (2015) Production The series was shot in Haute-Savoie, mainly in the city of Annecy, and in Seynod, Menthon-Saint-Bernard, Poisy, Cran-Gevrier, Sévrier, Annecy-le-Vieux, Veyrier-du-Lac, and Semnoz. The dam, which plays an important role, is the Barrage de Tignes. The first season of the series was filmed in April and May 2012. It was directed by Fabrice Gobert and Frédéric Mermoud. The second season of the series was directed by Fabrice Gobert and Frédéric Goupil. It was originally to be filmed in February and March 2014, for screening from November 2014. However, delays in the writing process pushed filming back until the second half of 2014, and broadcast began in 2015. The synopsis for the second season was released in August 2014, which confirmed there would be a total of eight episodes. Filming for the second season ended on 7 April 2015. Music The series' music was composed by the Scottish post-rock band Mogwai. The band's guitarist John Cummings said in an interview with The Quietus, "They wanted us to start writing it before they started filming it. They described it as inspiring them, they wanted some kind of musical mood in place before they started, so we were working a bit dry at first ... we'd (only) seen the first couple of scripts in English". The band released a four-track sampler of the music (Les Revenants EP) on 17 December 2012, the day of the showing of the final episode. A full-length soundtrack album, Les Revenants, was released on 25 February 2013. In August 2014, it was confirmed that Mogwai would also write the soundtrack for the second season. Reception The series has been critically acclaimed. 
On Rotten Tomatoes, the first season holds an approval rating of 100% with an average score of 9.2 out of 10 based on 39 reviews and a critics' consensus of, "A pleasant change from typically gory zombie shows, The Returned is a must-see oddity that's both smart and sure to disturb". The second season of the show holds a rating of 95%, with an average score of 9.1 out of 10 based on 20 reviews and a consensus of, "After a long wait, The Returned is back with more of the chilling, deliberate, and masterful storytelling that made season one a spooky success". Le Monde said the series marked a resurgence in the fantasy genre with the dead appearing out of nowhere, trying to regain their lives where they left off. Libération said the series recalled the atmosphere of Twin Peaks by David Lynch. In France, viewing figures averaged 1.4 million over the eight episodes, on Canal+. For its American showing, the series received a 92 out of 100 rating from Metacritic, which averages critics' reviews, based on 28 reviews. The second season received an 82 out of 100 rating based on 11 reviews. During a visit to Paris, Stephen King remarked on being a big fan of the show and later tweeted about it. Accolades In 2013, for the 41st International Emmy Award, The Returned won for Best Drama Series. For the 18th Satellite Awards, it received a nomination for Best Television Series or Miniseries, Genre. In 2014, it was awarded with a Peabody Award. It received a nomination for Outstanding Achievement in Movies, Miniseries and Specials for the 2014 TCA Awards. Fabien Adda and Fabrice Gobert received a nomination for Best Screenplay for the episode "The Horde", for the 2013 Bram Stoker Awards. International broadcasts Season 1 was broadcast in the United Kingdom from 9 June 2013 on Channel 4. It was the first "fully subtitled drama" on the channel in more than 20 years and was screened in French, with English subtitles. 
First announced under the English name Rebound, the title was amended to The Returned prior to broadcast. The channel made a feature of the subtitles by broadcasting a specially commissioned advertisement break in French with English subtitles. In the United States, SundanceTV began broadcasting the series' first season on 31 October 2013, before picking up the second season on 11 January 2014. The series began airing in Australia on SBS Two on 11 February 2014. In Canada, the series debuted on 26 April 2014 on Space. In advance of the second-season premiere, the first two episodes of the season received an advance preview screening at the 2015 Toronto International Film Festival, as part of the festival's new Primetime platform of selected television projects. The second season premiered on 16 October 2015 on More4 in the United Kingdom and on 31 October 2015 in the U.S. on SundanceTV. International ratings Season 1 Season 2 Home media release The first season was released on DVD in France on 20 December 2012 and in the UK on 9 September 2013. In the United States, the first season was released on both DVD and Blu-ray on 11 February 2014 and the second season on 17 December 2019. Adaptations In May 2013, it was revealed that an English-language adaptation was in development by Paul Abbott and FremantleMedia, with the working title They Came Back. In September 2013, it was revealed that Abbott was no longer involved with the project and that A&E would develop it. In April 2014, A&E ordered 10 episodes with Carlton Cuse and Raelle Tucker as executive producers. The series premiered on 9 March 2015 and was cancelled after one season. 
References External links 2010s French drama television series 2012 French television series debuts 2015 French television series endings French horror fiction television series French-language television shows Nonlinear narrative television series Peabody Award-winning television programs Serial drama television series Live action television shows based on films Television shows set in France Zombies in television Canal+ original programming French supernatural television series
Cannabis tea (also known as weed tea, pot tea, ganja tea or a cannabis decoction) is a cannabis-infused drink prepared by steeping various parts of the cannabis plant in hot or cold water. Cannabis tea is commonly recognized as an alternative form of preparation and consumption of the cannabis plant, more popularly known as marijuana, pot, or weed. This plant has long been recognized as an herbal medicine employed by health professionals worldwide to ease symptoms of disease, as well as a psychoactive drug used recreationally and in spiritual traditions. Though less commonly practiced than popular methods like smoking or consuming edibles, drinking cannabis tea can produce comparable physical and mental therapeutic effects. Such effects are largely attributed to the THC and CBD content of the tea, levels of which depend heavily on individual preparation techniques involving volume, amount of cannabis, and boiling time. Like those other forms of administration, tea preparation requires heating the cannabis before use. Because this method of consumption is relatively uncommon in modern times (in contrast to historical use), and because of the varying legality of cannabis around the world, research on the composition of cannabis tea is limited and based broadly on what is known of cannabis botanically. Composition According to a 2007 study published in the Journal of Ethnopharmacology, the composition of cannabis tea is affected by criteria including, but not limited to, the duration of time over which the cannabis is steeped, the volume of tea prepared, and the period of time for which the tea is stored before consumption. The study notes that levels of THC and THCA affect the variability of composition by changing the bioactivity of the beverage; cannabis teas containing fewer bioactive cannabinoids, "based on HPLC peak area", will therefore demonstrate varying compositions.
Preparation According to a recent study on cannabinoid concentration and stability in preparations of cannabis oil and tea, a boiling period of fifteen minutes was found to be sufficient in order to reach the highest concentrations of cannabinoids in tea solutions. However, preparation of cannabis oil in the study was found to ensure a higher stability of cannabinoids than that which was found in preparation of cannabis tea. To produce psychoactive effects, cannabis used in tea must first be decarboxylated. As with regular tea, spices are often added. Typically, the tea is allowed to simmer for 5-10 minutes. Folk medicine Cannabis tea was used for centuries in folk medicine as it was thought to be a treatment for rheumatism, cystitis, and various urinary diseases. Legal status Cannabis tea is controlled as a derivative of cannabis in most countries as is required of countries whose governments are party to the United Nations' Single Convention on Narcotic Drugs. However, similar to the regulation surrounding alcohol content of kombucha, there are some forms of cannabis tea with cannabis levels considered to be highly undetectable. These variations of the drink do not contain the psychoactive cannabinoid known as THC (delta-9-tetrahydrocannabinol) and, instead, contain the non-psychoactive cannabinoids cannabidiol (CBD) or cannabinol (CBN)—both of which tend to go undetected in cannabis use/intoxication drug tests. As such, the legal status of cannabis tea is largely dependent on its composition and preparation. United States Cannabis tea is scheduled at the federal level in the United States by nature of being a derivative of Cannabis sativa, and it is therefore illegal to possess, buy, and sell. 
Due to variances in statewide laws, and the reluctance of the federal government to overrule the states, however, the federal legislation has little impact on nationwide use, and is "generally applied only against persons who possess, cultivate, or distribute large quantities of cannabis". As such, regulation of recreational and/or medicinal growth and use on an individual level is not the responsibility of the federal government. Colorado law In Colorado, for medical purposes, cannabis tea is a "Medical Marijuana Infused Product" which is "a product infused with medical marijuana that is intended for use or consumption other than by smoking, including edible products, ointments, and tinctures. These products, when manufactured or sold by a licensed medical marijuana center or a medical marijuana-infused product manufacturer, shall not be considered a food or drug for the purposes of the "Colorado Food and Drug Act", part 4 of article 5 of title 25, C.R.S." Colorado currently stands as one of 33 states that have laws legalizing marijuana as of 2018. Adverse effects Although not as widely published as the beneficial, therapeutic effects of cannabis tea, adverse effects of consumption have been found to exist, in addition to known adverse effects of cannabis use in general. Based upon the findings of select studies, it appears as though such effects occur mainly as a result of unconventional methods or dosage used when interacting with the decoction. Ancient childbirth According to a short communication published in the Journal of Ethnopharmacology, based on the research of Zias et al. regarding cannabis use in ancient childbirth, cannabis is said to have been used to form a decoction with other specific medicinal herbs to terminate a pregnancy in its second to third month. The following is a list of plants used in the specific decoction for the termination, as mentioned by the midwives interviewed for the study: "C. sativa L./Cannabaceae; Atropa baetica Wilk./Solanaceae; Nerium oleander L./Apocynaceae; Ruta montana L./Rutaceae; Peganum harmala L./Zygophyllaceae; Agave americana L./Amaryllidaceae and Urginea maritima L./Liliaceae". References Cannabis culture Tea Cold drinks Herbal tea Hot drinks
3-Epi-6-deoxocathasterone 23-monooxygenase (cytochrome P450 90C1, CYP90D1, CYP90C1) is an enzyme with systematic name 3-epi-6-deoxocathasterone,NADPH:oxygen oxidoreductase (C-23-hydroxylating). This enzyme catalyses the following chemical reactions:
(1) 3-epi-6-deoxocathasterone + NADPH + H+ + O2 ⇌ 6-deoxotyphasterol + NADP+ + H2O
(2) (22S,24R)-22-hydroxy-5alpha-ergostan-3-one + NADPH + H+ + O2 ⇌ 3-dehydro-6-deoxoteasterone + NADP+ + H2O
3-Epi-6-deoxocathasterone 23-monooxygenase takes part in brassinosteroid biosynthesis. References External links EC 1.14.13
```assembly ;****************************************************************************** ;* 32 point SSE-optimized DCT transform ;* ;* This file is part of FFmpeg. ;* ;* FFmpeg is free software; you can redistribute it and/or ;* modify it under the terms of the GNU Lesser General Public ;* ;* FFmpeg is distributed in the hope that it will be useful, ;* but WITHOUT ANY WARRANTY; without even the implied warranty of ;* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU ;* ;* You should have received a copy of the GNU Lesser General Public ;* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA ;****************************************************************************** %include "libavutil/x86/x86util.asm" SECTION_RODATA 32 align 32 ps_cos_vec: dd 0.500603, 0.505471, 0.515447, 0.531043 dd 0.553104, 0.582935, 0.622504, 0.674808 dd -10.190008, -3.407609, -2.057781, -1.484165 dd -1.169440, -0.972568, -0.839350, -0.744536 dd 0.502419, 0.522499, 0.566944, 0.646822 dd 0.788155, 1.060678, 1.722447, 5.101149 dd 0.509796, 0.601345, 0.899976, 2.562916 dd 0.509796, 0.601345, 0.899976, 2.562916 dd 1.000000, 1.000000, 1.306563, 0.541196 dd 1.000000, 1.000000, 1.306563, 0.541196 dd 1.000000, 0.707107, 1.000000, -0.707107 dd 1.000000, 0.707107, 1.000000, -0.707107 dd 0.707107, 0.707107, 0.707107, 0.707107 align 32 ps_p1p1m1m1: dd 0, 0, 0x80000000, 0x80000000, 0, 0, 0x80000000, 0x80000000 %macro BUTTERFLY 4 subps %4, %1, %2 addps %2, %2, %1 mulps %1, %4, %3 %endmacro %macro BUTTERFLY0 5 %if cpuflag(sse2) && notcpuflag(avx) pshufd %4, %1, %5 xorps %1, %2 addps %1, %4 mulps %1, %3 %else shufps %4, %1, %1, %5 xorps %1, %1, %2 addps %4, %4, %1 mulps %1, %4, %3 %endif %endmacro %macro BUTTERFLY2 4 BUTTERFLY0 %1, %2, %3, %4, 0x1b %endmacro %macro BUTTERFLY3 4 BUTTERFLY0 %1, %2, %3, %4, 0xb1 %endmacro %macro BUTTERFLY3V 5 movaps m%5, m%1 addps m%1, m%2 subps m%5, m%2 SWAP %2, %5 mulps m%2, [ps_cos_vec+192] movaps m%5, m%3 addps m%3, m%4 subps m%4, 
m%5 mulps m%4, [ps_cos_vec+192] %endmacro %macro PASS6_AND_PERMUTE 0 mov tmpd, [outq+4] movss m7, [outq+72] addss m7, [outq+76] movss m3, [outq+56] addss m3, [outq+60] addss m4, m3 movss m2, [outq+52] addss m2, m3 movss m3, [outq+104] addss m3, [outq+108] addss m1, m3 addss m5, m4 movss [outq+ 16], m1 movss m1, [outq+100] addss m1, m3 movss m3, [outq+40] movss [outq+ 48], m1 addss m3, [outq+44] movss m1, [outq+100] addss m4, m3 addss m3, m2 addss m1, [outq+108] movss [outq+ 40], m3 addss m2, [outq+36] movss m3, [outq+8] movss [outq+ 56], m2 addss m3, [outq+12] movss [outq+ 32], m3 movss m3, [outq+80] movss [outq+ 8], m5 movss [outq+ 80], m1 movss m2, [outq+52] movss m5, [outq+120] addss m5, [outq+124] movss m1, [outq+64] addss m2, [outq+60] addss m0, m5 addss m5, [outq+116] mov [outq+64], tmpd addss m6, m0 addss m1, m6 mov tmpd, [outq+12] mov [outq+ 96], tmpd movss [outq+ 4], m1 movss m1, [outq+24] movss [outq+ 24], m4 movss m4, [outq+88] addss m4, [outq+92] addss m3, m4 addss m4, [outq+84] mov tmpd, [outq+108] addss m1, [outq+28] addss m0, m1 addss m1, m5 addss m6, m3 addss m3, m0 addss m0, m7 addss m5, [outq+20] addss m7, m1 movss [outq+ 12], m6 mov [outq+112], tmpd movss m6, [outq+28] movss [outq+ 28], m0 movss m0, [outq+36] movss [outq+ 36], m7 addss m1, m4 movss m7, [outq+116] addss m0, m2 addss m7, [outq+124] movss [outq+ 72], m0 movss m0, [outq+44] addss m2, m0 movss [outq+ 44], m1 movss [outq+ 88], m2 addss m0, [outq+60] mov tmpd, [outq+60] mov [outq+120], tmpd movss [outq+104], m0 addss m4, m5 addss m5, [outq+68] movss [outq+52], m4 movss [outq+60], m5 movss m4, [outq+68] movss m5, [outq+20] movss [outq+ 20], m3 addss m5, m7 addss m7, m6 addss m4, m5 movss m2, [outq+84] addss m2, [outq+92] addss m5, m2 movss [outq+ 68], m4 addss m2, m7 movss m4, [outq+76] movss [outq+ 84], m2 movss [outq+ 76], m5 addss m7, m4 addss m6, [outq+124] addss m4, m6 addss m6, [outq+92] movss [outq+100], m4 movss [outq+108], m6 movss m6, [outq+92] movss [outq+92], m7 addss m6, 
[outq+124] movss [outq+116], m6 %endmacro INIT_YMM avx SECTION .text %if HAVE_AVX_EXTERNAL ; void ff_dct32_float_avx(FFTSample *out, const FFTSample *in) cglobal dct32_float, 2,3,8, out, in, tmp ; pass 1 vmovaps m4, [inq+0] vinsertf128 m5, m5, [inq+96], 1 vinsertf128 m5, m5, [inq+112], 0 vshufps m5, m5, m5, 0x1b BUTTERFLY m4, m5, [ps_cos_vec], m6 vmovaps m2, [inq+64] vinsertf128 m6, m6, [inq+32], 1 vinsertf128 m6, m6, [inq+48], 0 vshufps m6, m6, m6, 0x1b BUTTERFLY m2, m6, [ps_cos_vec+32], m0 ; pass 2 BUTTERFLY m5, m6, [ps_cos_vec+64], m0 BUTTERFLY m4, m2, [ps_cos_vec+64], m7 ; pass 3 vperm2f128 m3, m6, m4, 0x31 vperm2f128 m1, m6, m4, 0x20 vshufps m3, m3, m3, 0x1b BUTTERFLY m1, m3, [ps_cos_vec+96], m6 vperm2f128 m4, m5, m2, 0x20 vperm2f128 m5, m5, m2, 0x31 vshufps m5, m5, m5, 0x1b BUTTERFLY m4, m5, [ps_cos_vec+96], m6 ; pass 4 vmovaps m6, [ps_p1p1m1m1+0] vmovaps m2, [ps_cos_vec+128] BUTTERFLY2 m5, m6, m2, m7 BUTTERFLY2 m4, m6, m2, m7 BUTTERFLY2 m1, m6, m2, m7 BUTTERFLY2 m3, m6, m2, m7 ; pass 5 vshufps m6, m6, m6, 0xcc vmovaps m2, [ps_cos_vec+160] BUTTERFLY3 m5, m6, m2, m7 BUTTERFLY3 m4, m6, m2, m7 BUTTERFLY3 m1, m6, m2, m7 BUTTERFLY3 m3, m6, m2, m7 vperm2f128 m6, m3, m3, 0x31 vmovaps [outq], m3 vextractf128 [outq+64], m5, 1 vextractf128 [outq+32], m5, 0 vextractf128 [outq+80], m4, 1 vextractf128 [outq+48], m4, 0 vperm2f128 m0, m1, m1, 0x31 vmovaps [outq+96], m1 vzeroupper ; pass 6, no SIMD... 
INIT_XMM PASS6_AND_PERMUTE RET %endif %if ARCH_X86_64 %define SPILL SWAP %define UNSPILL SWAP %macro PASS5 0 nop ; FIXME code alignment SWAP 5, 8 SWAP 4, 12 SWAP 6, 14 SWAP 7, 13 SWAP 0, 15 PERMUTE 9,10, 10,12, 11,14, 12,9, 13,11, 14,13 TRANSPOSE4x4PS 8, 9, 10, 11, 0 BUTTERFLY3V 8, 9, 10, 11, 0 addps m10, m11 TRANSPOSE4x4PS 12, 13, 14, 15, 0 BUTTERFLY3V 12, 13, 14, 15, 0 addps m14, m15 addps m12, m14 addps m14, m13 addps m13, m15 %endmacro %macro PASS6 0 SWAP 9, 12 SWAP 11, 14 movss [outq+0x00], m8 pshuflw m0, m8, 0xe movss [outq+0x10], m9 pshuflw m1, m9, 0xe movss [outq+0x20], m10 pshuflw m2, m10, 0xe movss [outq+0x30], m11 pshuflw m3, m11, 0xe movss [outq+0x40], m12 pshuflw m4, m12, 0xe movss [outq+0x50], m13 pshuflw m5, m13, 0xe movss [outq+0x60], m14 pshuflw m6, m14, 0xe movaps [outq+0x70], m15 pshuflw m7, m15, 0xe addss m0, m1 addss m1, m2 movss [outq+0x08], m0 addss m2, m3 movss [outq+0x18], m1 addss m3, m4 movss [outq+0x28], m2 addss m4, m5 movss [outq+0x38], m3 addss m5, m6 movss [outq+0x48], m4 addss m6, m7 movss [outq+0x58], m5 movss [outq+0x68], m6 movss [outq+0x78], m7 PERMUTE 1,8, 3,9, 5,10, 7,11, 9,12, 11,13, 13,14, 8,1, 10,3, 12,5, 14,7 movhlps m0, m1 pshufd m1, m1, 3 SWAP 0, 2, 4, 6, 8, 10, 12, 14 SWAP 1, 3, 5, 7, 9, 11, 13, 15 %rep 7 movhlps m0, m1 pshufd m1, m1, 3 addss m15, m1 SWAP 0, 2, 4, 6, 8, 10, 12, 14 SWAP 1, 3, 5, 7, 9, 11, 13, 15 %endrep %assign i 4 %rep 15 addss m0, m1 movss [outq+i], m0 SWAP 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 %assign i i+8 %endrep %endmacro %else ; ARCH_X86_32 %macro SPILL 2 ; xmm#, mempos movaps [outq+(%2-8)*16], m%1 %endmacro %macro UNSPILL 2 movaps m%1, [outq+(%2-8)*16] %endmacro %define PASS6 PASS6_AND_PERMUTE %macro PASS5 0 movaps m2, [ps_cos_vec+160] shufps m3, m3, 0xcc BUTTERFLY3 m5, m3, m2, m1 SPILL 5, 8 UNSPILL 1, 9 BUTTERFLY3 m1, m3, m2, m5 SPILL 1, 14 BUTTERFLY3 m4, m3, m2, m5 SPILL 4, 12 BUTTERFLY3 m7, m3, m2, m5 SPILL 7, 13 UNSPILL 5, 10 BUTTERFLY3 m5, m3, m2, m7 SPILL 5, 10 UNSPILL 4, 11 
BUTTERFLY3 m4, m3, m2, m7 SPILL 4, 11 BUTTERFLY3 m6, m3, m2, m7 SPILL 6, 9 BUTTERFLY3 m0, m3, m2, m7 SPILL 0, 15 %endmacro %endif ; void ff_dct32_float_sse(FFTSample *out, const FFTSample *in) %macro DCT32_FUNC 0 cglobal dct32_float, 2, 3, 16, out, in, tmp ; pass 1 movaps m0, [inq+0] LOAD_INV m1, [inq+112] BUTTERFLY m0, m1, [ps_cos_vec], m3 movaps m7, [inq+64] LOAD_INV m4, [inq+48] BUTTERFLY m7, m4, [ps_cos_vec+32], m3 ; pass 2 movaps m2, [ps_cos_vec+64] BUTTERFLY m1, m4, m2, m3 SPILL 1, 11 SPILL 4, 8 ; pass 1 movaps m1, [inq+16] LOAD_INV m6, [inq+96] BUTTERFLY m1, m6, [ps_cos_vec+16], m3 movaps m4, [inq+80] LOAD_INV m5, [inq+32] BUTTERFLY m4, m5, [ps_cos_vec+48], m3 ; pass 2 BUTTERFLY m0, m7, m2, m3 movaps m2, [ps_cos_vec+80] BUTTERFLY m6, m5, m2, m3 BUTTERFLY m1, m4, m2, m3 ; pass 3 movaps m2, [ps_cos_vec+96] shufps m1, m1, 0x1b BUTTERFLY m0, m1, m2, m3 SPILL 0, 15 SPILL 1, 14 UNSPILL 0, 8 shufps m5, m5, 0x1b BUTTERFLY m0, m5, m2, m3 UNSPILL 1, 11 shufps m6, m6, 0x1b BUTTERFLY m1, m6, m2, m3 SPILL 1, 11 shufps m4, m4, 0x1b BUTTERFLY m7, m4, m2, m3 ; pass 4 movaps m3, [ps_p1p1m1m1+0] movaps m2, [ps_cos_vec+128] BUTTERFLY2 m5, m3, m2, m1 BUTTERFLY2 m0, m3, m2, m1 SPILL 0, 9 BUTTERFLY2 m6, m3, m2, m1 SPILL 6, 10 UNSPILL 0, 11 BUTTERFLY2 m0, m3, m2, m1 SPILL 0, 11 BUTTERFLY2 m4, m3, m2, m1 BUTTERFLY2 m7, m3, m2, m1 UNSPILL 6, 14 BUTTERFLY2 m6, m3, m2, m1 UNSPILL 0, 15 BUTTERFLY2 m0, m3, m2, m1 PASS5 PASS6 RET %endmacro %macro LOAD_INV 2 %if cpuflag(sse2) pshufd %1, %2, 0x1b %elif cpuflag(sse) movaps %1, %2 shufps %1, %1, 0x1b %endif %endmacro %if ARCH_X86_32 INIT_XMM sse DCT32_FUNC %endif INIT_XMM sse2 DCT32_FUNC ```
```php
<?php
/*
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not
 * use this file except in compliance with the License. You may obtain a copy of
 * the License at
 *
 *   path_to_url
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 * License for the specific language governing permissions and limitations under
 * the License.
 */

namespace Google\Service\Aiplatform;

class GoogleCloudAiplatformV1SampledShapleyAttribution extends \Google\Model
{
  /**
   * @var int
   */
  public $pathCount;

  /**
   * @param int
   */
  public function setPathCount($pathCount)
  {
    $this->pathCount = $pathCount;
  }
  /**
   * @return int
   */
  public function getPathCount()
  {
    return $this->pathCount;
  }
}

// Adding a class alias for backwards compatibility with the previous class name.
class_alias(GoogleCloudAiplatformV1SampledShapleyAttribution::class, 'your_sha256_hashttribution');
```
```objective-c
#pragma once

BOOL FGetResourceString(int32 isz, __out_ecount(cchMax) OLECHAR *psz, int cchMax);
BSTR BstrGetResourceString(int32 isz);
```
```java /* * * * path_to_url * * Unless required by applicable law or agreed to in writing, * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * specific language governing permissions and limitations */ package org.ballerinalang.test.runtime.api; import io.ballerina.runtime.api.Module; import io.ballerina.runtime.api.creators.ValueCreator; import io.ballerina.runtime.api.utils.StringUtils; import io.ballerina.runtime.api.values.BMap; import io.ballerina.runtime.api.values.BString; import io.ballerina.runtime.internal.scheduling.Scheduler; import org.ballerinalang.test.BCompileUtil; import org.ballerinalang.test.BRunUtil; import org.ballerinalang.test.CompileResult; import org.testng.Assert; import org.testng.annotations.DataProvider; import org.testng.annotations.Test; import java.util.concurrent.atomic.AtomicReference; /** * Test cases for runtime api. * * @since 2.0.0 */ public class RuntimeAPITest { @Test(dataProvider = "packageNameProvider") public void testRuntimeAPIs(String packageName) { CompileResult result = BCompileUtil.compile("test-src/runtime/api/" + packageName); BRunUtil.invoke(result, "main"); } @DataProvider public Object[] packageNameProvider() { return new String[]{ "values", "errors", "types", "invalid_values", "async", "utils", "identifier_utils", "environment", "stream", "json" }; } @Test public void testRecordNoStrandDefaultValue() { CompileResult strandResult = BCompileUtil.compile("test-src/runtime/api/no_strand"); final Scheduler scheduler = new Scheduler(false); AtomicReference<Throwable> exceptionRef = new AtomicReference<>(); Thread thread1 = new Thread(() -> BRunUtil.runOnSchedule(strandResult, "main", scheduler)); Thread thread2 = new Thread(() -> { try { Thread.sleep(1000); BMap<BString, Object> recordValue = ValueCreator.createRecordValue(new Module("testorg", "no_strand", "1"), "MutualSslHandshake"); Assert.assertEquals(recordValue.getType().getName(), "MutualSslHandshake"); 
Assert.assertEquals(recordValue.get(StringUtils.fromString("status")), StringUtils.fromString("passed")); Assert.assertNull(recordValue.get(StringUtils.fromString("base64EncodedCert"))); } catch (Throwable e) { exceptionRef.set(e); } finally { scheduler.poison(); } }); try { thread1.start(); thread2.start(); thread1.join(); thread2.join(); Throwable storedException = exceptionRef.get(); if (storedException != null) { throw new AssertionError("Test failed due to an exception in a thread", storedException); } } catch (InterruptedException e) { throw new RuntimeException("Error while invoking function 'main'", e); } } @Test public void testRuntimeManagementAPI() { CompileResult strandResult = BCompileUtil.compileWithoutInitInvocation("test-src/runtime/api/runtime_mgt"); BRunUtil.runMain(strandResult); } } ```
Pitman is a surname. Notable people with the surname include: Almira Hollander Pitman (1854–1939), American suffragist Andrew Pitman, Australian climate scientist Benjamin Pitman, promoter of Pitman's shorthand in the United States Benjamin Pitman (Hawaii judge), New England and Hawaiian businessman Bill Pitman, American guitarist and session musician Brett Pitman, professional footballer for Bournemouth, Bristol City, Ipswich Town and Portsmouth Charles Pitman (game warden) (1890–1975), British herpetologist and ornithologist Charles H. Pitman (born 1935), American Marine Corps general Chris Pitman, keyboardist for the U.S. band Guns N' Roses E.J.G. Pitman, statistician noted for the Pitman–Koopman–Darmois theorem concerning exponential families of probability distributions Frederick Pitman, British Olympic rower Frederick I Pitman, British rower and Boat Race umpire (father of Frederick) Gayle E. Pitman, American author Henry Alfred Pitman (1808–1908), English physician Henry Hoʻolulu Pitman (1845–1863), American Civil War soldier of Native Hawaiian ancestry Herbert Pitman, third officer on the RMS Titanic Isaac Pitman, inventor of Pitman Shorthand Jacob Pitman, brother of Isaac, brought shorthand and Swedenborgianism to South Australia James Pitman, inventor of the Initial Teaching Alphabet Jenny Pitman, British racehorse trainer and author Kent Pitman, expert on the Lisp programming language Marie J. Davis Pitman (1850–1888), American author who used the pen-name "Margery Deane" Mary Pitman Ailau, Hawaiian noblewoman MC Pitman, East Midlands English rapper Primrose Pitman (1902–1998), English artist Robert Carter Pitman, 19th century Massachusetts legislator and author Rose M. M. Pitman (1868–1947), English illustrator Walter Pitman, educator and former politician in Ontario, Canada Walter C. Pitman, III, geophysicist and professor emeritus at Columbia University Occupational surnames
```c++ #pragma once #include "stringbase.h" #include "integerbase.h" #include "floatbase.h" #include <vespa/searchlib/common/rankedhit.h> //TODO: This one should go. // using search::AttributeVector; //your_sha256_hash------------- class AttrVector { public: template <bool MULTI> struct Features { using EnumType = uint32_t; static bool IsMultiValue() { return MULTI; } }; }; namespace search { template <typename B> class NumericDirectAttribute : public B { private: using EnumHandle = typename B::EnumHandle; NumericDirectAttribute(const NumericDirectAttribute &); NumericDirectAttribute & operator=(const NumericDirectAttribute &); typename B::BaseType getFromEnum(EnumHandle e) const override { return _data[e]; } protected: using BaseType = typename B::BaseType; using DocId = typename B::DocId; using Change = typename B::Change; using largeint_t = typename B::largeint_t; using Config = typename B::Config; NumericDirectAttribute(const std::string & baseFileName, const Config & c); ~NumericDirectAttribute() override; bool findEnum(BaseType value, EnumHandle & e) const override; void onCommit() override; void onUpdateStat() override { } bool addDoc(DocId & ) override; std::vector<BaseType> _data; std::vector<uint32_t> _idx; }; } template <typename F, typename B> class NumericDirectAttrVector : public search::NumericDirectAttribute<B> { protected: using DocId = typename B::DocId; using NumDirectAttrVec = NumericDirectAttrVector<F, B>; private: using largeint_t = typename B::largeint_t; public: NumericDirectAttrVector(const std::string & baseFileName); NumericDirectAttrVector(const std::string & baseFileName, const AttributeVector::Config & c); largeint_t getInt(DocId doc) const override { return static_cast<largeint_t>(getHelper(doc, 0)); } double getFloat(DocId doc) const override { return getHelper(doc, 0); } uint32_t get(DocId doc, largeint_t * v, uint32_t sz) const override { return getAllHelper<largeint_t, largeint_t>(doc, v, sz); } uint32_t get(DocId doc, 
double * v, uint32_t sz) const override { return getAllHelper<double, double>(doc, v, sz); } private: using EnumHandle = typename B::EnumHandle; using BaseType = typename B::BaseType; using Weighted = typename B::Weighted; using WeightedEnum = typename B::WeightedEnum; using WeightedInt = typename B::WeightedInt; using WeightedFloat = typename B::WeightedFloat; BaseType get(DocId doc) const override { return getHelper(doc, 0); } EnumHandle getEnum(DocId doc) const override { return getEnumHelper(doc, 0); } uint32_t get(DocId doc, EnumHandle * e, uint32_t sz) const override { return getAllEnumHelper(doc, e, sz); } uint32_t getValueCount(DocId doc) const override { return getValueCountHelper(doc); } uint32_t getValueCountHelper(DocId doc) const { if (F::IsMultiValue()) { return this->_idx[doc+1] - this->_idx[doc]; } else { return 1; } } EnumHandle getEnumHelper(DocId doc, int idx) const { (void) doc; (void) idx; return uint32_t(-1); } BaseType getHelper(DocId doc, int idx) const { if (F::IsMultiValue()) { return this->_data[this->_idx[doc] + idx]; } else { return this->_data[doc]; } } template <typename T, typename C> uint32_t getAllHelper(DocId doc, T * v, uint32_t sz) const { uint32_t available(getValueCountHelper(doc)); uint32_t num2Read(std::min(available, sz)); for (uint32_t i(0); i < num2Read; i++) { v[i] = T(static_cast<C>(getHelper(doc, i))); } return available; } template <typename T> uint32_t getAllEnumHelper(DocId doc, T * v, uint32_t sz) const { uint32_t available(getValueCountHelper(doc)); uint32_t num2Read(std::min(available, sz)); for (uint32_t i(0); i < num2Read; i++) { v[i] = T(getEnumHelper(doc, i)); } return available; } uint32_t get(DocId doc, WeightedEnum * v, uint32_t sz) const override { return getAllEnumHelper(doc, v, sz); } uint32_t get(DocId doc, WeightedInt * v, uint32_t sz) const override { return getAllHelper<WeightedInt, largeint_t>(doc, v, sz); } uint32_t get(DocId doc, WeightedFloat * v, uint32_t sz) const override { return 
getAllHelper<WeightedFloat, double>(doc, v, sz); } template <bool asc> long on_serialize_for_sort(DocId doc, void* serTo, long available) const; long onSerializeForAscendingSort(DocId doc, void* serTo, long available, const search::common::BlobConverter* bc) const override; long onSerializeForDescendingSort(DocId doc, void* serTo, long available, const search::common::BlobConverter* bc) const override; }; //your_sha256_hash------------- namespace search { class StringDirectAttribute : public StringAttribute { private: StringDirectAttribute(const StringDirectAttribute &); StringDirectAttribute & operator=(const StringDirectAttribute &); const char * getFromEnum(EnumHandle e) const override { return &_buffer[e]; } const char * getStringFromEnum(EnumHandle e) const override { return &_buffer[e]; } std::unique_ptr<attribute::SearchContext> getSearch(QueryTermSimpleUP term, const attribute::SearchContextParams & params) const override; protected: StringDirectAttribute(const std::string & baseFileName, const Config & c); ~StringDirectAttribute() override; bool findEnum(const char * value, EnumHandle & e) const override; std::vector<EnumHandle> findFoldedEnums(const char *) const override; void onCommit() override; void onUpdateStat() override { } bool addDoc(DocId & ) override; protected: std::vector<char> _buffer; OffsetVector _offsets; std::vector<uint32_t> _idx; }; } template <typename F> class StringDirectAttrVector : public search::StringDirectAttribute { public: StringDirectAttrVector(const std::string & baseFileName); StringDirectAttrVector(const std::string & baseFileName, const Config & c); uint32_t get(DocId doc, const char ** v, uint32_t sz) const override { return getAllHelper(doc, v, sz); } const char * get(DocId doc) const override { return getHelper(doc, 0); } private: uint32_t get(DocId doc, std::string * v, uint32_t sz) const override { return getAllHelper(doc, v, sz); } uint32_t get(DocId doc, EnumHandle * e, uint32_t sz) const override { return 
getAllEnumHelper(doc, e, sz); } EnumHandle getEnum(DocId doc) const override { return getEnumHelper(doc, 0); } uint32_t getValueCount(DocId doc) const override { return getValueCountHelper(doc); } uint32_t get(DocId doc, WeightedEnum * e, uint32_t sz) const override { return getAllEnumHelper(doc, e, sz); } uint32_t get(DocId doc, WeightedString * v, uint32_t sz) const override { return getAllHelper(doc, v, sz); } uint32_t get(DocId doc, WeightedConstChar * v, uint32_t sz) const override { return getAllHelper(doc, v, sz); } uint32_t getValueCountHelper(DocId doc) const { if (F::IsMultiValue()) { return this->_idx[doc+1] - this->_idx[doc]; } else { return 1; } } EnumHandle getEnumHelper(DocId doc, int idx) const { if (F::IsMultiValue()) { return this->_offsets[this->_idx[doc] + idx]; } else { return this->_offsets[doc]; } return uint32_t(-1); } const char *getHelper(DocId doc, int idx) const { if (F::IsMultiValue()) { return & this->_buffer[this->_offsets[this->_idx[doc] + idx]]; } else if (idx == 0) { return & this->_buffer[this->_offsets[doc]]; } return NULL; } template <typename T> uint32_t getAllHelper(DocId doc, T * v, uint32_t sz) const { uint32_t available(getValueCountHelper(doc)); uint32_t num2Read(std::min(available, sz)); for (uint32_t i(0); i < num2Read; i++) { v[i] = T(getHelper(doc, i)); } return available; } template <typename T> uint32_t getAllEnumHelper(DocId doc, T * v, uint32_t sz) const { uint32_t available(getValueCountHelper(doc)); uint32_t num2Read(std::min(available, sz)); for (uint32_t i(0); i < num2Read; i++) { v[i] = T(getEnumHelper(doc, i)); } return available; } long on_serialize_for_sort(DocId doc, void* serTo, long available, const search::common::BlobConverter* bc, bool asc) const; long onSerializeForAscendingSort(DocId doc, void* serTo, long available, const search::common::BlobConverter* bc) const override; long onSerializeForDescendingSort(DocId doc, void* serTo, long available, const search::common::BlobConverter* bc) const 
override; }; ```
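The single- and multi-value branches of `getValueCountHelper` and `getHelper` above rely on a prefix-offset layout: `_idx[doc]` and `_idx[doc+1]` bracket the slice of the flat `_data` array that belongs to document `doc`. A minimal, language-neutral sketch of that layout in Python (the names and data are illustrative, not Vespa APIs):

```python
# Sketch of the multi-value layout used above: idx[doc] .. idx[doc+1]
# delimits the values of document `doc` inside the flat data array.
data = [10, 20, 30, 40, 50, 60]   # flat value storage (like _data)
idx = [0, 2, 2, 5, 6]             # per-document offsets (like _idx), len = ndocs + 1

def value_count(doc):
    """Mirror of getValueCountHelper for the multi-value case."""
    return idx[doc + 1] - idx[doc]

def get(doc, i):
    """Mirror of getHelper: the i-th value of document `doc`."""
    return data[idx[doc] + i]

def get_all(doc, buf_size):
    """Mirror of getAllHelper: copy at most buf_size values,
    but report how many values actually exist."""
    available = value_count(doc)
    return available, [get(doc, i) for i in range(min(available, buf_size))]
```

Note the `getAllHelper` convention mirrored here: the return value is the number of values *available*, which can exceed the number actually copied, so callers can detect a too-small buffer and retry.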
```c++ /** * @file Statement.cpp * @ingroup SQLiteCpp * @brief A prepared SQLite Statement is a compiled SQL query ready to be executed, pointing to a row of results. * * * or copy at path_to_url */ #include <SQLiteCpp/Statement.h> #include <SQLiteCpp/Database.h> #include <SQLiteCpp/Column.h> #include <SQLiteCpp/Assertion.h> #include <SQLiteCpp/Exception.h> #include <sqlite3.h> namespace SQLite { // Compile and register the SQL query for the provided SQLite Database Connection Statement::Statement(Database &aDatabase, const char* apQuery) : mQuery(apQuery), mStmtPtr(aDatabase.mpSQLite, mQuery), // prepare the SQL query, and ref count (needs Database friendship) mColumnCount(0), mbHasRow(false), mbDone(false) { mColumnCount = sqlite3_column_count(mStmtPtr); } // Compile and register the SQL query for the provided SQLite Database Connection Statement::Statement(Database &aDatabase, const std::string& aQuery) : mQuery(aQuery), mStmtPtr(aDatabase.mpSQLite, mQuery), // prepare the SQL query, and ref count (needs Database friendship) mColumnCount(0), mbHasRow(false), mbDone(false) { mColumnCount = sqlite3_column_count(mStmtPtr); } // Finalize and unregister the SQL query from the SQLite Database Connection. Statement::~Statement() { // the finalization will be done by the destructor of the last shared pointer } // Reset the statement to make it ready for a new execution (see also #clearBindings() below) void Statement::reset() { const int ret = tryReset(); check(ret); } int Statement::tryReset() noexcept { mbHasRow = false; mbDone = false; return sqlite3_reset(mStmtPtr); } // Clears away all the bindings of a prepared statement (can be associated with #reset() above). 
void Statement::clearBindings() { const int ret = sqlite3_clear_bindings(mStmtPtr); check(ret); } // Bind an int value to a parameter "?", "?NNN", ":VVV", "@VVV" or "$VVV" in the SQL prepared statement void Statement::bind(const int aIndex, const int aValue) { const int ret = sqlite3_bind_int(mStmtPtr, aIndex, aValue); check(ret); } // Bind a 32bits unsigned int value to a parameter "?", "?NNN", ":VVV", "@VVV" or "$VVV" in the SQL prepared statement void Statement::bind(const int aIndex, const unsigned aValue) { const int ret = sqlite3_bind_int64(mStmtPtr, aIndex, aValue); check(ret); } // Bind a 64bits int value to a parameter "?", "?NNN", ":VVV", "@VVV" or "$VVV" in the SQL prepared statement void Statement::bind(const int aIndex, const long long aValue) { const int ret = sqlite3_bind_int64(mStmtPtr, aIndex, aValue); check(ret); } // Bind a double (64bits float) value to a parameter "?", "?NNN", ":VVV", "@VVV" or "$VVV" in the SQL prepared statement void Statement::bind(const int aIndex, const double aValue) { const int ret = sqlite3_bind_double(mStmtPtr, aIndex, aValue); check(ret); } // Bind a string value to a parameter "?", "?NNN", ":VVV", "@VVV" or "$VVV" in the SQL prepared statement void Statement::bind(const int aIndex, const std::string& aValue) { const int ret = sqlite3_bind_text(mStmtPtr, aIndex, aValue.c_str(), static_cast<int>(aValue.size()), SQLITE_TRANSIENT); check(ret); } // Bind a text value to a parameter "?", "?NNN", ":VVV", "@VVV" or "$VVV" in the SQL prepared statement void Statement::bind(const int aIndex, const char* apValue) { const int ret = sqlite3_bind_text(mStmtPtr, aIndex, apValue, -1, SQLITE_TRANSIENT); check(ret); } // Bind a binary blob value to a parameter "?", "?NNN", ":VVV", "@VVV" or "$VVV" in the SQL prepared statement void Statement::bind(const int aIndex, const void* apValue, const int aSize) { const int ret = sqlite3_bind_blob(mStmtPtr, aIndex, apValue, aSize, SQLITE_TRANSIENT); check(ret); } // Bind a string value to a 
parameter "?", "?NNN", ":VVV", "@VVV" or "$VVV" in the SQL prepared statement void Statement::bindNoCopy(const int aIndex, const std::string& aValue) { const int ret = sqlite3_bind_text(mStmtPtr, aIndex, aValue.c_str(), static_cast<int>(aValue.size()), SQLITE_STATIC); check(ret); } // Bind a text value to a parameter "?", "?NNN", ":VVV", "@VVV" or "$VVV" in the SQL prepared statement void Statement::bindNoCopy(const int aIndex, const char* apValue) { const int ret = sqlite3_bind_text(mStmtPtr, aIndex, apValue, -1, SQLITE_STATIC); check(ret); } // Bind a binary blob value to a parameter "?", "?NNN", ":VVV", "@VVV" or "$VVV" in the SQL prepared statement void Statement::bindNoCopy(const int aIndex, const void* apValue, const int aSize) { const int ret = sqlite3_bind_blob(mStmtPtr, aIndex, apValue, aSize, SQLITE_STATIC); check(ret); } // Bind a NULL value to a parameter "?", "?NNN", ":VVV", "@VVV" or "$VVV" in the SQL prepared statement void Statement::bind(const int aIndex) { const int ret = sqlite3_bind_null(mStmtPtr, aIndex); check(ret); } // Bind an int value to a parameter "?NNN", ":VVV", "@VVV" or "$VVV" in the SQL prepared statement void Statement::bind(const char* apName, const int aValue) { const int index = sqlite3_bind_parameter_index(mStmtPtr, apName); const int ret = sqlite3_bind_int(mStmtPtr, index, aValue); check(ret); } // Bind a 32bits unsigned int value to a parameter "?NNN", ":VVV", "@VVV" or "$VVV" in the SQL prepared statement void Statement::bind(const char* apName, const unsigned aValue) { const int index = sqlite3_bind_parameter_index(mStmtPtr, apName); const int ret = sqlite3_bind_int64(mStmtPtr, index, aValue); check(ret); } // Bind a 64bits int value to a parameter "?NNN", ":VVV", "@VVV" or "$VVV" in the SQL prepared statement void Statement::bind(const char* apName, const long long aValue) { const int index = sqlite3_bind_parameter_index(mStmtPtr, apName); const int ret = sqlite3_bind_int64(mStmtPtr, index, aValue); check(ret); } // Bind a 
double (64bits float) value to a parameter "?NNN", ":VVV", "@VVV" or "$VVV" in the SQL prepared statement void Statement::bind(const char* apName, const double aValue) { const int index = sqlite3_bind_parameter_index(mStmtPtr, apName); const int ret = sqlite3_bind_double(mStmtPtr, index, aValue); check(ret); } // Bind a string value to a parameter "?NNN", ":VVV", "@VVV" or "$VVV" in the SQL prepared statement void Statement::bind(const char* apName, const std::string& aValue) { const int index = sqlite3_bind_parameter_index(mStmtPtr, apName); const int ret = sqlite3_bind_text(mStmtPtr, index, aValue.c_str(), static_cast<int>(aValue.size()), SQLITE_TRANSIENT); check(ret); } // Bind a text value to a parameter "?NNN", ":VVV", "@VVV" or "$VVV" in the SQL prepared statement void Statement::bind(const char* apName, const char* apValue) { const int index = sqlite3_bind_parameter_index(mStmtPtr, apName); const int ret = sqlite3_bind_text(mStmtPtr, index, apValue, -1, SQLITE_TRANSIENT); check(ret); } // Bind a binary blob value to a parameter "?NNN", ":VVV", "@VVV" or "$VVV" in the SQL prepared statement void Statement::bind(const char* apName, const void* apValue, const int aSize) { const int index = sqlite3_bind_parameter_index(mStmtPtr, apName); const int ret = sqlite3_bind_blob(mStmtPtr, index, apValue, aSize, SQLITE_TRANSIENT); check(ret); } // Bind a string value to a parameter "?NNN", ":VVV", "@VVV" or "$VVV" in the SQL prepared statement void Statement::bindNoCopy(const char* apName, const std::string& aValue) { const int index = sqlite3_bind_parameter_index(mStmtPtr, apName); const int ret = sqlite3_bind_text(mStmtPtr, index, aValue.c_str(), static_cast<int>(aValue.size()), SQLITE_STATIC); check(ret); } // Bind a text value to a parameter "?NNN", ":VVV", "@VVV" or "$VVV" in the SQL prepared statement void Statement::bindNoCopy(const char* apName, const char* apValue) { const int index = sqlite3_bind_parameter_index(mStmtPtr, apName); const int ret = 
sqlite3_bind_text(mStmtPtr, index, apValue, -1, SQLITE_STATIC); check(ret); } // Bind a binary blob value to a parameter "?NNN", ":VVV", "@VVV" or "$VVV" in the SQL prepared statement void Statement::bindNoCopy(const char* apName, const void* apValue, const int aSize) { const int index = sqlite3_bind_parameter_index(mStmtPtr, apName); const int ret = sqlite3_bind_blob(mStmtPtr, index, apValue, aSize, SQLITE_STATIC); check(ret); } // Bind a NULL value to a parameter "?NNN", ":VVV", "@VVV" or "$VVV" in the SQL prepared statement void Statement::bind(const char* apName) { const int index = sqlite3_bind_parameter_index(mStmtPtr, apName); const int ret = sqlite3_bind_null(mStmtPtr, index); check(ret); } // Execute a step of the query to fetch one row of results bool Statement::executeStep() { const int ret = tryExecuteStep(); if ((SQLITE_ROW != ret) && (SQLITE_DONE != ret)) // on row or no (more) row ready, else it's a problem { throw SQLite::Exception(mStmtPtr, ret); } return mbHasRow; // true only if one row is accessible by getColumn(N) } // Execute a one-step query with no expected result int Statement::exec() { const int ret = tryExecuteStep(); if (SQLITE_DONE != ret) // the statement has finished executing successfully { if (SQLITE_ROW == ret) { throw SQLite::Exception("exec() does not expect results. Use executeStep."); } else { throw SQLite::Exception(mStmtPtr, ret); } } // Return the number of rows modified by those SQL statements (INSERT, UPDATE or DELETE) return sqlite3_changes(mStmtPtr); } int Statement::tryExecuteStep() noexcept { if (false == mbDone) { const int ret = sqlite3_step(mStmtPtr); if (SQLITE_ROW == ret) // one row is ready : call getColumn(N) to access it { mbHasRow = true; } else if (SQLITE_DONE == ret) // no (more) row ready : the query has finished executing { mbHasRow = false; mbDone = true; } else { mbHasRow = false; mbDone = false; } return ret; } else { // Statement needs to be reset! 
return SQLITE_MISUSE; } } // Return a copy of the column data specified by its index starting at 0 // (use the Column copy-constructor) Column Statement::getColumn(const int aIndex) { checkRow(); checkIndex(aIndex); // Share the Statement Object handle with the new Column created return Column(mStmtPtr, aIndex); } // Return a copy of the column data specified by its column name starting at 0 // (use the Column copy-constructor) Column Statement::getColumn(const char* apName) { checkRow(); const int index = getColumnIndex(apName); // Share the Statement Object handle with the new Column created return Column(mStmtPtr, index); } // Test if the column is NULL bool Statement::isColumnNull(const int aIndex) const { checkRow(); checkIndex(aIndex); return (SQLITE_NULL == sqlite3_column_type(mStmtPtr, aIndex)); } bool Statement::isColumnNull(const char* apName) const { checkRow(); const int index = getColumnIndex(apName); return (SQLITE_NULL == sqlite3_column_type(mStmtPtr, index)); } // Return the name assigned to the specified result column (potentially aliased) const char* Statement::getColumnName(const int aIndex) const { checkIndex(aIndex); return sqlite3_column_name(mStmtPtr, aIndex); } #ifdef SQLITE_ENABLE_COLUMN_METADATA // Return the name assigned to the specified result column (potentially aliased) const char* Statement::getColumnOriginName(const int aIndex) const { checkIndex(aIndex); return sqlite3_column_origin_name(mStmtPtr, aIndex); } #endif // Return the index of the specified (potentially aliased) column name int Statement::getColumnIndex(const char* apName) const { // Build the map of column index by name on first call if (mColumnNames.empty()) { for (int i = 0; i < mColumnCount; ++i) { const char* pName = sqlite3_column_name(mStmtPtr, i); mColumnNames[pName] = i; } } const TColumnNames::const_iterator iIndex = mColumnNames.find(apName); if (iIndex == mColumnNames.end()) { throw SQLite::Exception("Unknown column name."); } return (*iIndex).second; } // 
Return the numeric result code for the most recent failed API call (if any). int Statement::getErrorCode() const noexcept // nothrow { return sqlite3_errcode(mStmtPtr); } // Return the extended numeric result code for the most recent failed API call (if any). int Statement::getExtendedErrorCode() const noexcept // nothrow { return sqlite3_extended_errcode(mStmtPtr); } // Return UTF-8 encoded English language explanation of the most recent failed API call (if any). const char* Statement::getErrorMsg() const noexcept // nothrow { return sqlite3_errmsg(mStmtPtr); } //////////////////////////////////////////////////////////////////////////////// // Internal class : shared pointer to the sqlite3_stmt SQLite Statement Object //////////////////////////////////////////////////////////////////////////////// /** * @brief Prepare the statement and initialize its reference counter * * @param[in] apSQLite The sqlite3 database connection * @param[in] aQuery The SQL query string to prepare */ Statement::Ptr::Ptr(sqlite3* apSQLite, std::string& aQuery) : mpSQLite(apSQLite), mpStmt(NULL), mpRefCount(NULL) { const int ret = sqlite3_prepare_v2(apSQLite, aQuery.c_str(), static_cast<int>(aQuery.size()), &mpStmt, NULL); if (SQLITE_OK != ret) { throw SQLite::Exception(apSQLite, ret); } // Initialize the reference counter of the sqlite3_stmt : // used to share the mStmtPtr between Statement and Column objects; // This is needed to enable Column objects to live longer than the Statement object they refer to. 
mpRefCount = new unsigned int(1); // NOLINT(readability/casting) } /** * @brief Copy constructor increments the ref counter * * @param[in] aPtr Pointer to copy */ Statement::Ptr::Ptr(const Statement::Ptr& aPtr) : mpSQLite(aPtr.mpSQLite), mpStmt(aPtr.mpStmt), mpRefCount(aPtr.mpRefCount) { assert(NULL != mpRefCount); assert(0 != *mpRefCount); // Increment the reference counter of the sqlite3_stmt, // asking not to finalize the sqlite3_stmt during the lifetime of the new object ++(*mpRefCount); } /** * @brief Decrement the ref counter and finalize the sqlite3_stmt when it reaches 0 */ Statement::Ptr::~Ptr() { assert(NULL != mpRefCount); assert(0 != *mpRefCount); // Decrement and check the reference counter of the sqlite3_stmt --(*mpRefCount); if (0 == *mpRefCount) { // If count reaches zero, finalize the sqlite3_stmt, as no Statement nor Column object uses it anymore. // No need to check the return code, as it is the same as the last statement evaluation. sqlite3_finalize(mpStmt); // and delete the reference counter delete mpRefCount; mpRefCount = NULL; mpStmt = NULL; } // else, the finalization will be done later, by the last object } } // namespace SQLite ```
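The `Statement::Ptr` class above implements manual shared ownership: copies share one heap-allocated counter, and the underlying `sqlite3_stmt` is finalized only when the last copy is destroyed. A language-neutral sketch of that scheme in Python (the class and method names are illustrative, not the SQLiteCpp API):

```python
# Sketch of the manual reference-counting scheme used by Statement::Ptr:
# copies share one counter; the handle is "finalized" only at count zero.
class Handle:
    """Stands in for the sqlite3_stmt handle; records finalization."""
    def __init__(self):
        self.finalized = False

class Ptr:
    def __init__(self, handle):
        self.handle = handle
        self.refcount = [1]          # shared mutable counter, like mpRefCount

    def copy(self):
        """Mirrors the copy constructor: share the handle, bump the counter."""
        p = Ptr.__new__(Ptr)
        p.handle = self.handle
        p.refcount = self.refcount
        self.refcount[0] += 1
        return p

    def release(self):
        """Mirrors the destructor: finalize the handle at count zero."""
        self.refcount[0] -= 1
        if self.refcount[0] == 0:
            self.handle.finalized = True   # like sqlite3_finalize(mpStmt)
```

This is why a `Column` returned by `getColumn()` can safely outlive its `Statement`: both hold a `Ptr` copy, so the statement handle survives until the last holder goes away.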
In skydiving, an automatic activation device (AAD) is a dead man's switch consisting of an electronic-pyrotechnic or mechanical device that automatically opens the main or reserve parachute container at a preset altitude or after a preset time. AADs are typically used to open the reserve parachute container at a preset altitude if the descent rate exceeds a preset activation speed. This indicates that the user has not opened their parachute, or that the parachute is malfunctioning and is not slowing the descent rate sufficiently. The older style mechanical AADs are falling out of fashion in favour of newer style electronic-pyrotechnic models. These newer models have been proven more reliable as their built-in computers allow for better estimation of altitude and vertical speed. Electronic AADs typically employ a small pyrotechnic charge to sever the reserve container closing loop, allowing the spring-loaded reserve pilot chute to deploy. Examples Examples of specific AADs are: Safety AADs can malfunction and deploy the reserve parachute when the firing parameters have not been met. This will result in either a premature reserve deployment if it happens prior to main deployment, or in both parachutes being deployed if it happens after main deployment. A premature reserve deployment can be dangerous if it happens while exiting the aircraft, in close proximity to other skydivers in freefall, or if the skydiver is falling faster than the safe deployment speed, which can result in catastrophic equipment failure and injury or even death of the jumper. A deployment of both canopies could result in an entanglement between the two canopies. Undesired AAD activations can also occur due to user error. This can happen if the skydiver deploys the main canopy too low, and the AAD activates while the main is deploying, resulting in both parachutes being deployed. 
It can also happen if the AAD is not calibrated to the correct ground level, either due to turning the AAD on at a location with a different elevation than the airport, or entering an incorrect altitude offset (a feature that is normally used to compensate for a landing zone that is at a different elevation than the airport). Some models of AAD carry a risk of deploying the reserve inside the aircraft in cases of sudden aircraft depressurization, or during a rapid descent when landing with the aircraft. The risk of an AAD malfunction is far smaller than the risk of a situation in which the AAD can save somebody's life. For this reason, many countries (such as Denmark) require AADs for all skydivers and jumps. In countries where AADs are not legally mandated (such as the US), many drop zones still require all jumpers to use AADs. Others require all student jumpers to use them even if licensed jumpers are not. Possible issues regarding the contained explosives HADOPAD radar actuator High-Altitude Delayed-Opening Parachute Actuating Device, also called HADOPAD, was a radar actuator used as a component in a delayed opening aerial-delivery system. The system was developed by the Harry Diamond Laboratories in the mid-1960s, which later became a part of Army Research Laboratory. The device, based on radar principles, opened a main recovery parachute at either of two preset heights (1,000 or 1,700 ft.) above the ground. The air delivery system consisted of the cargo package, two parachutes (drogue and main), and the radar actuator. The radar was designed to determine when the cargo reached a preset altitude, generating a firing signal which actuated a mechanism releasing the main parachute at that time. 
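The firing condition described above — activation only when the jumper is below a preset altitude above the calibrated ground level and still descending faster than the preset activation speed — can be sketched as follows. All thresholds, names, and numbers here are illustrative, not those of any real AAD:

```python
# Illustrative AAD firing logic; the thresholds are made up, not taken
# from any real device. Altitudes are metres above calibrated ground level.
ACTIVATION_ALTITUDE_M = 225   # preset activation altitude (illustrative)
ACTIVATION_SPEED_MS = 35      # preset descent-rate threshold (illustrative)

def should_fire(pressure_altitude_m, ground_offset_m, descent_rate_ms):
    """Fire the reserve cutter only when still falling fast below the preset height.

    ground_offset_m models the ground-level calibration described above:
    an incorrect offset shifts the height at which the device fires.
    """
    altitude_above_ground = pressure_altitude_m - ground_offset_m
    return (altitude_above_ground <= ACTIVATION_ALTITUDE_M
            and descent_rate_ms >= ACTIVATION_SPEED_MS)
```

The two-part condition captures both failure modes discussed above: a slow descent under a good canopy never fires regardless of altitude, while a miscalibrated ground offset shifts the effective firing altitude.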
See also Adrian Nicholas – a noted skydiver who died from an AAD activation (by reaching the speeds required to activate his AAD under a high performance canopy) References External links Airtec - Cypres and Cypres 2 FXC Corporation - FXC 12000 and Astra Advanced Aerospace Designs - Vigil ParachuteHistory.com - Hi Tek 8000 ParachuteHistory.com - Sentinel Parachuting
Meet the Lees (originally titled Fortune Cookies) is an upcoming British family comedy film written and directed by Brenda Lee in her feature film debut. Cast Production Brenda Lee's Fortune Cookies, inspired by her own 1990s upbringing and her family's takeout restaurant, was originally in development in 2011 and 2012 with Reelscape Films and a different cast including Elaine Tan and David Yip, but the project did not come to fruition at the time. Casting began in October 2019. Produced by Screen Northants with support from BBC Children in Need, principal photography took place on location in Northampton in early 2020. Filming locations included Golden Hill in Kingsthorpe among other Chinese takeaway restaurants, the Guild Hall, the Racecourse, Royal & Derngate, The Deco, and Northampton International Academy. Release A teaser was first revealed in August 2020. Preview screenings were held at Cineworld Sixfields in Northampton on 31 January 2022 and at the Prince Charles Cinema in central London on 1 February. The film is being distributed by Phoenix Worldwide Entertainment. References External links Upcoming films British Chinese films British comedy films Entertainment in Northampton Films postponed due to the COVID-19 pandemic Films set in 1997 Films shot in England
The Eisenhower Medical Center (EMC) is a not-for-profit hospital based in Rancho Mirage, California, serving the Coachella Valley region of Southeastern California. It was named one of the top one hundred hospitals in the United States in 2005. History Named for President Dwight D. Eisenhower, the hospital credits its initial creation to two events in 1966, when entertainer Bob Hope was asked to lend his name to a charity golf tournament and to serve on the board of the hospital that would be built from the tournament's proceeds. The original land was donated by Bob and Dolores Hope, and both helped raise private funds for the hospital's construction. Construction began in 1969; the groundbreaking ceremony was attended by President Richard Nixon, Vice President Spiro Agnew, Governor Ronald Reagan, and entertainers Bob Hope, Frank Sinatra, Bing Crosby, Gene Autry, and Lucille Ball. The main Eisenhower hospital, designed by Edward Durell Stone, opened in November 1971, containing 289 beds. Among the early trustees were actress Martha Hyer (the wife of film producer Hal B. Wallis) and Roy W. Hill. The three original medical buildings were named for local philanthropists Mr. and Mrs. Walter Probst, Mr. and Mrs. Peter Kiewit, and Mrs. Hazel Wright. Philanthropists Walter and Leonore Annenberg donated funds to establish the Annenberg Center for Health Sciences. A $212.5 million, four-story addition to the hospital, the Walter and Leonore Annenberg Pavilion, opened for patient care in November 2010. Lee Annenberg donated over $100 million to Campaign Eisenhower, Phase II. Other institutions on the campus include the Barbara Sinatra Children's Center and the Dolores Hope Outpatient Care Center. Dolores Hope served as president, chairman of the board, and chairman emeritus from 1968 onward, and participated in every major decision regarding the hospital until her death in 2011. By 1990, the hospital's 3000th open heart surgical procedure had been performed. 
In 2019, the hospital was recognized as one of the top five in the Riverside County–San Bernardino metro area. Figures from the entertainment industry have been involved with fundraising for the hospital throughout its history; as the area is home to many from the entertainment industry, those notable figures have also received care at the hospital. In January 2006, President Gerald Ford was admitted to EMC for sixteen days for treatment of pneumonia. Upon Ford's death on December 26, 2006, his body was taken to Eisenhower Medical Center. His wife, Betty Ford, died there in 2011. References External links Eisenhower Medical Center Hospital buildings completed in 1971 Hospitals in Riverside County, California 1971 establishments in California Charities based in California Rancho Mirage, California
```elm
module Internationalization.Types exposing (..)


type alias TranslationSet =
    { english : String
    , portuguese : String
    }


type Language
    = English
    | Portuguese


type TranslationId
    = About
    | AboutJarbas
    | AboutSerenata
    | SearchFieldsetReimbursement
    | SearchFieldsetCongressperson
    | FieldsetSummary
    | FieldsetTrip
    | FieldsetReimbursement
    | FieldsetCongressperson
    | FieldsetCongresspersonProfile
    | FieldsetCompanyDetails
    | FieldsetCurrencyDetails
    | FieldsetCurrencyDetailsLink
    | FieldYear
    | FieldDocumentId
    | FieldApplicantId
    | FieldTotalValue
    | FieldTotalNetValue
    | FieldNumbers
    | FieldCongresspersonId
    | FieldCongressperson
    | FieldCongresspersonName
    | FieldCongresspersonDocument
    | FieldState
    | FieldParty
    | FieldTermId
    | FieldTerm
    | FieldSubquotaNumber
    | FieldSubquotaDescription
    | FieldSubquotaGroupId
    | FieldSubquotaGroupDescription
    | FieldCompany
    | FieldCnpjCpf
    | FieldDocumentType
    | FieldDocumentNumber
    | FieldDocumentValue
    | FieldIssueDate
    | FieldIssueDateStart
    | FieldIssueDateEnd
    | FieldIssueDateValidation
    | FieldClaimDate
    | FieldMonth
    | FieldRemarkValue
    | FieldInstallment
    | FieldBatchNumber
    | FieldPassenger
    | FieldLegOfTheTrip
    | FieldProbability
    | FieldSuspicions
    | FieldEmpty
    | ReimbursementSource
    | ReimbursementChamberOfDeputies
    | ReceiptFetch
    | ReceiptAvailable
    | ReceiptNotAvailable
    | RosiesTweet
    | Map
    | CompanyCNPJ
    | CompanyTradeName
    | CompanyName
    | CompanyOpeningDate
    | CompanyLegalEntity
    | CompanyType
    | CompanyStatus
    | CompanySituation
    | CompanySituationReason
    | CompanySituationDate
    | CompanySpecialSituation
    | CompanySpecialSituationDate
    | CompanyResponsibleFederativeEntity
    | CompanyAddress
    | CompanyNumber
    | CompanyAdditionalAddressDetails
    | CompanyNeighborhood
    | CompanyZipCode
    | CompanyCity
    | CompanyState
    | CompanyEmail
    | CompanyPhone
    | CompanyLastUpdated
    | CompanyMainActivity
    | CompanySecondaryActivity
    | CompanySource
    | CompanyFederalRevenue
    | ResultTitleSingular
    | ResultTitlePlural
    | ReimbursementTitle
    | Search
    | NewSearch
    | Loading
    | PaginationPage
    | PaginationOf
    | ReimbursementNotFound
    | SameDayTitle
    | SameSubquotaTitle
    | BrazilianCurrency String
    | ThousandSeparator
    | DecimalSeparator
    | Suspicion String
    | DocumentType Int
```
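The Elm module above only declares the translation types; a minimal sketch of the same lookup pattern in Python (hypothetical names and message table, not part of the original project) shows how a `TranslationSet` record maps a translation identifier to a language-specific string:

```python
from dataclasses import dataclass
from enum import Enum, auto


class Language(Enum):
    # Mirrors the Elm `Language` union type.
    ENGLISH = auto()
    PORTUGUESE = auto()


@dataclass(frozen=True)
class TranslationSet:
    """One message in every supported language (mirrors the Elm record)."""
    english: str
    portuguese: str

    def for_language(self, language: Language) -> str:
        # Pick the field that corresponds to the requested language.
        return self.english if language is Language.ENGLISH else self.portuguese


# Hypothetical message table keyed by a translation identifier string
# (the Elm version uses the `TranslationId` union type instead).
TRANSLATIONS = {
    "Search": TranslationSet(english="Search", portuguese="Buscar"),
    "NewSearch": TranslationSet(english="New search", portuguese="Nova busca"),
}


def translate(language: Language, translation_id: str) -> str:
    """Look up one message in the table for the given language."""
    return TRANSLATIONS[translation_id].for_language(language)
```

In the Elm app the same job is done by a `translate : Language -> TranslationId -> String` function over a case expression; the parameterized constructors (`BrazilianCurrency String`, `DocumentType Int`) would become functions taking an extra argument.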
```python
# Unless required by applicable law or agreed to in writing, software
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# ==============================================================================
"""RNN helpers for TensorFlow models.

@@bidirectional_dynamic_rnn
@@dynamic_rnn
@@raw_rnn
@@static_rnn
@@static_state_saving_rnn
@@static_bidirectional_rnn
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.framework import tensor_shape
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import rnn_cell_impl
from tensorflow.python.ops import tensor_array_ops
from tensorflow.python.ops import variable_scope as vs
from tensorflow.python.util import nest


# pylint: disable=protected-access
_concat = rnn_cell_impl._concat
_like_rnncell = rnn_cell_impl._like_rnncell
# pylint: enable=protected-access


def _transpose_batch_time(x):
  """Transpose the batch and time dimensions of a Tensor.

  Retains as much of the static shape information as possible.

  Args:
    x: A tensor of rank 2 or higher.

  Returns:
    x transposed along the first two dimensions.

  Raises:
    ValueError: if `x` is rank 1 or lower.
  """
  x_static_shape = x.get_shape()
  if x_static_shape.ndims is not None and x_static_shape.ndims < 2:
    raise ValueError(
        "Expected input tensor %s to have rank at least 2, but saw shape: %s" %
        (x, x_static_shape))
  x_rank = array_ops.rank(x)
  x_t = array_ops.transpose(
      x, array_ops.concat(
          ([1, 0], math_ops.range(2, x_rank)), axis=0))
  x_t.set_shape(
      tensor_shape.TensorShape([
          x_static_shape[1].value, x_static_shape[0].value
      ]).concatenate(x_static_shape[2:]))
  return x_t


def _infer_state_dtype(explicit_dtype, state):
  """Infer the dtype of an RNN state.

  Args:
    explicit_dtype: explicitly declared dtype or None.
    state: RNN's hidden state. Must be a Tensor or a nested iterable containing
      Tensors.

  Returns:
    dtype: inferred dtype of hidden state.

  Raises:
    ValueError: if `state` has heterogeneous dtypes or is empty.
  """
  if explicit_dtype is not None:
    return explicit_dtype
  elif nest.is_sequence(state):
    inferred_dtypes = [element.dtype for element in nest.flatten(state)]
    if not inferred_dtypes:
      raise ValueError("Unable to infer dtype from empty state.")
    all_same = all([x == inferred_dtypes[0] for x in inferred_dtypes])
    if not all_same:
      raise ValueError(
          "State has tensors of different inferred_dtypes. Unable to infer a "
          "single representative dtype.")
    return inferred_dtypes[0]
  else:
    return state.dtype


# pylint: disable=unused-argument
def _rnn_step(
    time, sequence_length, min_sequence_length, max_sequence_length,
    zero_output, state, call_cell, state_size, skip_conditionals=False):
  """Calculate one step of a dynamic RNN minibatch.

  Returns an (output, state) pair conditioned on the sequence_lengths.
  When skip_conditionals=False, the pseudocode is something like:

  if t >= max_sequence_length:
    return (zero_output, state)
  if t < min_sequence_length:
    return call_cell()

  # Selectively output zeros or output, old state or new state depending
  # on if we've finished calculating each row.
  new_output, new_state = call_cell()
  final_output = np.vstack([
    zero_output if time >= sequence_lengths[r] else new_output_r
    for r, new_output_r in enumerate(new_output)
  ])
  final_state = np.vstack([
    state[r] if time >= sequence_lengths[r] else new_state_r
    for r, new_state_r in enumerate(new_state)
  ])
  return (final_output, final_state)

  Args:
    time: Python int, the current time step
    sequence_length: int32 `Tensor` vector of size [batch_size]
    min_sequence_length: int32 `Tensor` scalar, min of sequence_length
    max_sequence_length: int32 `Tensor` scalar, max of sequence_length
    zero_output: `Tensor` vector of shape [output_size]
    state: Either a single `Tensor` matrix of shape `[batch_size, state_size]`,
      or a list/tuple of such tensors.
    call_cell: lambda returning tuple of (new_output, new_state) where
      new_output is a `Tensor` matrix of shape `[batch_size, output_size]`.
      new_state is a `Tensor` matrix of shape `[batch_size, state_size]`.
    state_size: The `cell.state_size` associated with the state.
    skip_conditionals: Python bool, whether to skip using the conditional
      calculations.  This is useful for `dynamic_rnn`, where the input tensor
      matches `max_sequence_length`, and using conditionals just slows
      everything down.

  Returns:
    A tuple of (`final_output`, `final_state`) as given by the pseudocode above:
      final_output is a `Tensor` matrix of shape [batch_size, output_size]
      final_state is either a single `Tensor` matrix, or a tuple of such
        matrices (matching length and shapes of input `state`).

  Raises:
    ValueError: If the cell returns a state tuple whose length does not match
      that returned by `state_size`.
  """
  # Convert state to a list for ease of use
  flat_state = nest.flatten(state)
  flat_zero_output = nest.flatten(zero_output)

  def _copy_one_through(output, new_output):
    copy_cond = (time >= sequence_length)
    with ops.colocate_with(new_output):
      return array_ops.where(copy_cond, output, new_output)

  def _copy_some_through(flat_new_output, flat_new_state):
    # Use broadcasting select to determine which values should get
    # the previous state & zero output, and which values should get
    # a calculated state & output.
    flat_new_output = [
        _copy_one_through(zero_output, new_output)
        for zero_output, new_output in zip(flat_zero_output, flat_new_output)]
    flat_new_state = [
        _copy_one_through(state, new_state)
        for state, new_state in zip(flat_state, flat_new_state)]
    return flat_new_output + flat_new_state

  def _maybe_copy_some_through():
    """Run RNN step.  Pass through either no or some past state."""
    new_output, new_state = call_cell()

    nest.assert_same_structure(state, new_state)

    flat_new_state = nest.flatten(new_state)
    flat_new_output = nest.flatten(new_output)
    return control_flow_ops.cond(
        # if t < min_seq_len: calculate and return everything
        time < min_sequence_length, lambda: flat_new_output + flat_new_state,
        # else copy some of it through
        lambda: _copy_some_through(flat_new_output, flat_new_state))

  # TODO(ebrevdo): skipping these conditionals may cause a slowdown,
  # but benefits from removing cond() and its gradient.  We should
  # profile with and without this switch here.
  if skip_conditionals:
    # Instead of using conditionals, perform the selective copy at all time
    # steps.  This is faster when max_seq_len is equal to the number of unrolls
    # (which is typical for dynamic_rnn).
    new_output, new_state = call_cell()
    nest.assert_same_structure(state, new_state)
    new_state = nest.flatten(new_state)
    new_output = nest.flatten(new_output)
    final_output_and_state = _copy_some_through(new_output, new_state)
  else:
    empty_update = lambda: flat_zero_output + flat_state
    final_output_and_state = control_flow_ops.cond(
        # if t >= max_seq_len: copy all state through, output zeros
        time >= max_sequence_length, empty_update,
        # otherwise calculation is required: copy some or all of it through
        _maybe_copy_some_through)

  if len(final_output_and_state) != len(flat_zero_output) + len(flat_state):
    raise ValueError("Internal error: state and output were not concatenated "
                     "correctly.")
  final_output = final_output_and_state[:len(flat_zero_output)]
  final_state = final_output_and_state[len(flat_zero_output):]

  for output, flat_output in zip(final_output, flat_zero_output):
    output.set_shape(flat_output.get_shape())
  for substate, flat_substate in zip(final_state, flat_state):
    substate.set_shape(flat_substate.get_shape())

  final_output = nest.pack_sequence_as(
      structure=zero_output, flat_sequence=final_output)
  final_state = nest.pack_sequence_as(
      structure=state, flat_sequence=final_state)

  return final_output, final_state


def _reverse_seq(input_seq, lengths):
  """Reverse a list of Tensors up to specified lengths.

  Args:
    input_seq: Sequence of seq_len tensors of dimension (batch_size, n_features)
      or nested tuples of tensors.
    lengths: A `Tensor` of dimension batch_size, containing lengths for each
      sequence in the batch. If "None" is specified, simply reverses the list.
  Returns:
    time-reversed sequence
  """
  if lengths is None:
    return list(reversed(input_seq))

  flat_input_seq = tuple(nest.flatten(input_) for input_ in input_seq)

  flat_results = [[] for _ in range(len(input_seq))]
  for sequence in zip(*flat_input_seq):
    input_shape = tensor_shape.unknown_shape(
        ndims=sequence[0].get_shape().ndims)
    for input_ in sequence:
      input_shape.merge_with(input_.get_shape())
      input_.set_shape(input_shape)

    # Join into (time, batch_size, depth)
    s_joined = array_ops.stack(sequence)

    # TODO(schuster, ebrevdo): Remove cast when reverse_sequence takes int32
    if lengths is not None:
      lengths = math_ops.to_int64(lengths)

    # Reverse along dimension 0
    s_reversed = array_ops.reverse_sequence(s_joined, lengths, 0, 1)
    # Split again into list
    result = array_ops.unstack(s_reversed)
    for r, flat_result in zip(result, flat_results):
      r.set_shape(input_shape)
      flat_result.append(r)

  results = [nest.pack_sequence_as(structure=input_, flat_sequence=flat_result)
             for input_, flat_result in zip(input_seq, flat_results)]
  return results


def bidirectional_dynamic_rnn(cell_fw, cell_bw, inputs, sequence_length=None,
                              initial_state_fw=None, initial_state_bw=None,
                              dtype=None, parallel_iterations=None,
                              swap_memory=False, time_major=False, scope=None):
  """Creates a dynamic version of bidirectional recurrent neural network.

  Takes input and builds independent forward and backward RNNs. The input_size
  of forward and backward cell must match. The initial state for both directions
  is zero by default (but can be set optionally) and no intermediate states are
  ever returned -- the network is fully unrolled for the given (passed in)
  length(s) of the sequence(s) or completely unrolled if length(s) is not
  given.

  Args:
    cell_fw: An instance of RNNCell, to be used for forward direction.
    cell_bw: An instance of RNNCell, to be used for backward direction.
    inputs: The RNN inputs.
      If time_major == False (default), this must be a tensor of shape:
        `[batch_size, max_time, ...]`, or a nested tuple of such elements.
      If time_major == True, this must be a tensor of shape:
        `[max_time, batch_size, ...]`, or a nested tuple of such elements.
    sequence_length: (optional) An int32/int64 vector, size `[batch_size]`,
      containing the actual lengths for each of the sequences in the batch.
      If not provided, all batch entries are assumed to be full sequences; and
      time reversal is applied from time `0` to `max_time` for each sequence.
    initial_state_fw: (optional) An initial state for the forward RNN.
      This must be a tensor of appropriate type and shape
      `[batch_size, cell_fw.state_size]`.
      If `cell_fw.state_size` is a tuple, this should be a tuple of
      tensors having shapes `[batch_size, s] for s in cell_fw.state_size`.
    initial_state_bw: (optional) Same as for `initial_state_fw`, but using
      the corresponding properties of `cell_bw`.
    dtype: (optional) The data type for the initial states and expected output.
      Required if initial_states are not provided or RNN states have a
      heterogeneous dtype.
    parallel_iterations: (Default: 32).  The number of iterations to run in
      parallel.  Those operations which do not have any temporal dependency
      and can be run in parallel, will be.  This parameter trades off
      time for space.  Values >> 1 use more memory but take less time,
      while smaller values use less memory but computations take longer.
    swap_memory: Transparently swap the tensors produced in forward inference
      but needed for back prop from GPU to CPU.  This allows training RNNs
      which would typically not fit on a single GPU, with very minimal (or no)
      performance penalty.
    time_major: The shape format of the `inputs` and `outputs` Tensors.
      If true, these `Tensors` must be shaped `[max_time, batch_size, depth]`.
      If false, these `Tensors` must be shaped `[batch_size, max_time, depth]`.
      Using `time_major = True` is a bit more efficient because it avoids
      transposes at the beginning and end of the RNN calculation.  However,
      most TensorFlow data is batch-major, so by default this function
      accepts input and emits output in batch-major form.
    scope: VariableScope for the created subgraph; defaults to
      "bidirectional_rnn"

  Returns:
    A tuple (outputs, output_states) where:
      outputs: A tuple (output_fw, output_bw) containing the forward and
        the backward rnn output `Tensor`.
        If time_major == False (default),
          output_fw will be a `Tensor` shaped:
          `[batch_size, max_time, cell_fw.output_size]`
          and output_bw will be a `Tensor` shaped:
          `[batch_size, max_time, cell_bw.output_size]`.
        If time_major == True,
          output_fw will be a `Tensor` shaped:
          `[max_time, batch_size, cell_fw.output_size]`
          and output_bw will be a `Tensor` shaped:
          `[max_time, batch_size, cell_bw.output_size]`.
        It returns a tuple instead of a single concatenated `Tensor`, unlike
        in the `bidirectional_rnn`. If the concatenated one is preferred,
        the forward and backward outputs can be concatenated as
        `tf.concat(outputs, 2)`.
      output_states: A tuple (output_state_fw, output_state_bw) containing
        the forward and the backward final states of bidirectional rnn.

  Raises:
    TypeError: If `cell_fw` or `cell_bw` is not an instance of `RNNCell`.
  """
  if not _like_rnncell(cell_fw):
    raise TypeError("cell_fw must be an instance of RNNCell")
  if not _like_rnncell(cell_bw):
    raise TypeError("cell_bw must be an instance of RNNCell")

  with vs.variable_scope(scope or "bidirectional_rnn"):
    # Forward direction
    with vs.variable_scope("fw") as fw_scope:
      output_fw, output_state_fw = dynamic_rnn(
          cell=cell_fw, inputs=inputs, sequence_length=sequence_length,
          initial_state=initial_state_fw, dtype=dtype,
          parallel_iterations=parallel_iterations, swap_memory=swap_memory,
          time_major=time_major, scope=fw_scope)

    # Backward direction
    if not time_major:
      time_dim = 1
      batch_dim = 0
    else:
      time_dim = 0
      batch_dim = 1

    def _reverse(input_, seq_lengths, seq_dim, batch_dim):
      if seq_lengths is not None:
        return array_ops.reverse_sequence(
            input=input_, seq_lengths=seq_lengths,
            seq_dim=seq_dim, batch_dim=batch_dim)
      else:
        return array_ops.reverse(input_, axis=[seq_dim])

    with vs.variable_scope("bw") as bw_scope:
      inputs_reverse = _reverse(
          inputs, seq_lengths=sequence_length,
          seq_dim=time_dim, batch_dim=batch_dim)
      tmp, output_state_bw = dynamic_rnn(
          cell=cell_bw, inputs=inputs_reverse, sequence_length=sequence_length,
          initial_state=initial_state_bw, dtype=dtype,
          parallel_iterations=parallel_iterations, swap_memory=swap_memory,
          time_major=time_major, scope=bw_scope)

  output_bw = _reverse(
      tmp, seq_lengths=sequence_length,
      seq_dim=time_dim, batch_dim=batch_dim)

  outputs = (output_fw, output_bw)
  output_states = (output_state_fw, output_state_bw)

  return (outputs, output_states)


def dynamic_rnn(cell, inputs, sequence_length=None, initial_state=None,
                dtype=None, parallel_iterations=None, swap_memory=False,
                time_major=False, scope=None):
  """Creates a recurrent neural network specified by RNNCell `cell`.

  Performs fully dynamic unrolling of `inputs`.

  `Inputs` may be a single `Tensor` where the maximum time is either the first
  or second dimension (see the parameter `time_major`).
  Alternatively, it may be a (possibly nested) tuple of Tensors, each of them
  having matching batch and time dimensions.
  The corresponding output is either a single `Tensor` having the same number
  of time steps and batch size, or a (possibly nested) tuple of such tensors,
  matching the nested structure of `cell.output_size`.

  The parameter `sequence_length` is optional and is used to copy-through
  state and zero-out outputs when past a batch element's sequence length.
  So it's more for correctness than performance.

  Args:
    cell: An instance of RNNCell.
    inputs: The RNN inputs.
      If `time_major == False` (default), this must be a `Tensor` of shape:
        `[batch_size, max_time, ...]`, or a nested tuple of such elements.
      If `time_major == True`, this must be a `Tensor` of shape:
        `[max_time, batch_size, ...]`, or a nested tuple of such elements.
      This may also be a (possibly nested) tuple of Tensors satisfying
      this property.  The first two dimensions must match across all the
      inputs, but otherwise the ranks and other shape components may differ.
      In this case, input to `cell` at each time-step will replicate the
      structure of these tuples, except for the time dimension (from which the
      time is taken).
      The input to `cell` at each time step will be a `Tensor` or (possibly
      nested) tuple of Tensors each with dimensions `[batch_size, ...]`.
    sequence_length: (optional) An int32/int64 vector sized `[batch_size]`.
    initial_state: (optional) An initial state for the RNN.
      If `cell.state_size` is an integer, this must be
      a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`.
      If `cell.state_size` is a tuple, this should be a tuple of
      tensors having shapes `[batch_size, s] for s in cell.state_size`.
    dtype: (optional) The data type for the initial state and expected output.
      Required if initial_state is not provided or RNN state has a
      heterogeneous dtype.
    parallel_iterations: (Default: 32).  The number of iterations to run in
      parallel.
      Those operations which do not have any temporal dependency
      and can be run in parallel, will be.  This parameter trades off
      time for space.  Values >> 1 use more memory but take less time,
      while smaller values use less memory but computations take longer.
    swap_memory: Transparently swap the tensors produced in forward inference
      but needed for back prop from GPU to CPU.  This allows training RNNs
      which would typically not fit on a single GPU, with very minimal (or no)
      performance penalty.
    time_major: The shape format of the `inputs` and `outputs` Tensors.
      If true, these `Tensors` must be shaped `[max_time, batch_size, depth]`.
      If false, these `Tensors` must be shaped `[batch_size, max_time, depth]`.
      Using `time_major = True` is a bit more efficient because it avoids
      transposes at the beginning and end of the RNN calculation.  However,
      most TensorFlow data is batch-major, so by default this function
      accepts input and emits output in batch-major form.
    scope: VariableScope for the created subgraph; defaults to "rnn".

  Returns:
    A pair (outputs, state) where:

      outputs: The RNN output `Tensor`.

        If time_major == False (default), this will be a `Tensor` shaped:
          `[batch_size, max_time, cell.output_size]`.

        If time_major == True, this will be a `Tensor` shaped:
          `[max_time, batch_size, cell.output_size]`.

        Note, if `cell.output_size` is a (possibly nested) tuple of integers
        or `TensorShape` objects, then `outputs` will be a tuple having the
        same structure as `cell.output_size`, containing Tensors having shapes
        corresponding to the shape data in `cell.output_size`.

      state: The final state.  If `cell.state_size` is an int, this
        will be shaped `[batch_size, cell.state_size]`.  If it is a
        `TensorShape`, this will be shaped `[batch_size] + cell.state_size`.
        If it is a (possibly nested) tuple of ints or `TensorShape`, this will
        be a tuple having the corresponding shapes.

  Raises:
    TypeError: If `cell` is not an instance of RNNCell.
    ValueError: If inputs is None or an empty list.
  """
  if not _like_rnncell(cell):
    raise TypeError("cell must be an instance of RNNCell")

  # By default, time_major==False and inputs are batch-major: shaped
  #   [batch, time, depth]
  # For internal calculations, we transpose to [time, batch, depth]
  flat_input = nest.flatten(inputs)

  if not time_major:
    # (B,T,D) => (T,B,D)
    flat_input = [ops.convert_to_tensor(input_) for input_ in flat_input]
    flat_input = tuple(_transpose_batch_time(input_) for input_ in flat_input)

  parallel_iterations = parallel_iterations or 32

  if sequence_length is not None:
    sequence_length = math_ops.to_int32(sequence_length)
    if sequence_length.get_shape().ndims not in (None, 1):
      raise ValueError(
          "sequence_length must be a vector of length batch_size, "
          "but saw shape: %s" % sequence_length.get_shape())
    sequence_length = array_ops.identity(  # Just to find it in the graph.
        sequence_length, name="sequence_length")

  # Create a new scope in which the caching device is either
  # determined by the parent scope, or is set to place the cached
  # Variable using the same placement as for the rest of the RNN.
  with vs.variable_scope(scope or "rnn") as varscope:
    if varscope.caching_device is None:
      varscope.set_caching_device(lambda op: op.device)
    input_shape = tuple(array_ops.shape(input_) for input_ in flat_input)
    batch_size = input_shape[0][1]

    for input_ in input_shape:
      if input_[1].get_shape() != batch_size.get_shape():
        raise ValueError("All inputs should have the same batch size")

    if initial_state is not None:
      state = initial_state
    else:
      if not dtype:
        raise ValueError("If there is no initial_state, you must give a dtype.")
      state = cell.zero_state(batch_size, dtype)

    def _assert_has_shape(x, shape):
      x_shape = array_ops.shape(x)
      packed_shape = array_ops.stack(shape)
      return control_flow_ops.Assert(
          math_ops.reduce_all(math_ops.equal(x_shape, packed_shape)),
          ["Expected shape for Tensor %s is " % x.name,
           packed_shape, " but saw shape: ", x_shape])

    if sequence_length is not None:
      # Perform some shape validation
      with ops.control_dependencies(
          [_assert_has_shape(sequence_length, [batch_size])]):
        sequence_length = array_ops.identity(
            sequence_length, name="CheckSeqLen")

    inputs = nest.pack_sequence_as(structure=inputs, flat_sequence=flat_input)

    (outputs, final_state) = _dynamic_rnn_loop(
        cell,
        inputs,
        state,
        parallel_iterations=parallel_iterations,
        swap_memory=swap_memory,
        sequence_length=sequence_length,
        dtype=dtype)

    # Outputs of _dynamic_rnn_loop are always shaped [time, batch, depth].
    # If we are performing batch-major calculations, transpose output back
    # to shape [batch, time, depth]
    if not time_major:
      # (T,B,D) => (B,T,D)
      outputs = nest.map_structure(_transpose_batch_time, outputs)

    return (outputs, final_state)


def _dynamic_rnn_loop(cell,
                      inputs,
                      initial_state,
                      parallel_iterations,
                      swap_memory,
                      sequence_length=None,
                      dtype=None):
  """Internal implementation of Dynamic RNN.

  Args:
    cell: An instance of RNNCell.
    inputs: A `Tensor` of shape [time, batch_size, input_size], or a nested
      tuple of such elements.
    initial_state: A `Tensor` of shape `[batch_size, state_size]`, or if
      `cell.state_size` is a tuple, then this should be a tuple of
      tensors having shapes `[batch_size, s] for s in cell.state_size`.
    parallel_iterations: Positive Python int.
    swap_memory: A Python boolean
    sequence_length: (optional) An `int32` `Tensor` of shape [batch_size].
    dtype: (optional) Expected dtype of output. If not specified, inferred from
      initial_state.

  Returns:
    Tuple `(final_outputs, final_state)`.
    final_outputs:
      A `Tensor` of shape `[time, batch_size, cell.output_size]`.  If
      `cell.output_size` is a (possibly nested) tuple of ints or `TensorShape`
      objects, then this returns a (possibly nested) tuple of Tensors matching
      the corresponding shapes.
    final_state:
      A `Tensor`, or possibly nested tuple of Tensors, matching in length
      and shapes to `initial_state`.

  Raises:
    ValueError: If the input depth cannot be inferred via shape inference
      from the inputs.
  """
  state = initial_state
  assert isinstance(parallel_iterations, int), "parallel_iterations must be int"

  state_size = cell.state_size

  flat_input = nest.flatten(inputs)
  flat_output_size = nest.flatten(cell.output_size)

  # Construct an initial output
  input_shape = array_ops.shape(flat_input[0])
  time_steps = input_shape[0]
  batch_size = input_shape[1]

  inputs_got_shape = tuple(input_.get_shape().with_rank_at_least(3)
                           for input_ in flat_input)

  const_time_steps, const_batch_size = inputs_got_shape[0].as_list()[:2]

  for shape in inputs_got_shape:
    if not shape[2:].is_fully_defined():
      raise ValueError(
          "Input size (depth of inputs) must be accessible via shape inference,"
          " but saw value None.")
    got_time_steps = shape[0].value
    got_batch_size = shape[1].value
    if const_time_steps != got_time_steps:
      raise ValueError(
          "Time steps is not the same for all the elements in the input in a "
          "batch.")
    if const_batch_size != got_batch_size:
      raise ValueError(
          "Batch_size is not the same for all the elements in the input.")

  # Prepare dynamic conditional copying of state & output
  def _create_zero_arrays(size):
    size = _concat(batch_size, size)
    return array_ops.zeros(
        array_ops.stack(size), _infer_state_dtype(dtype, state))

  flat_zero_output = tuple(_create_zero_arrays(output)
                           for output in flat_output_size)
  zero_output = nest.pack_sequence_as(structure=cell.output_size,
                                      flat_sequence=flat_zero_output)

  if sequence_length is not None:
    min_sequence_length = math_ops.reduce_min(sequence_length)
    max_sequence_length = math_ops.reduce_max(sequence_length)

  time = array_ops.constant(0, dtype=dtypes.int32, name="time")

  with ops.name_scope("dynamic_rnn") as scope:
    base_name = scope

  def _create_ta(name, dtype):
    return tensor_array_ops.TensorArray(dtype=dtype,
                                        size=time_steps,
                                        tensor_array_name=base_name + name)

  output_ta = tuple(_create_ta("output_%d" % i,
                               _infer_state_dtype(dtype, state))
                    for i in range(len(flat_output_size)))
  input_ta = tuple(_create_ta("input_%d" % i, flat_input[i].dtype)
                   for i in range(len(flat_input)))

  input_ta = tuple(ta.unstack(input_)
                   for ta, input_ in zip(input_ta, flat_input))

  def _time_step(time, output_ta_t, state):
    """Take a time step of the dynamic RNN.

    Args:
      time: int32 scalar Tensor.
      output_ta_t: List of `TensorArray`s that represent the output.
      state: nested tuple of vector tensors that represent the state.

    Returns:
      The tuple (time + 1, output_ta_t with updated flow, new_state).
    """
    input_t = tuple(ta.read(time) for ta in input_ta)
    # Restore some shape information
    for input_, shape in zip(input_t, inputs_got_shape):
      input_.set_shape(shape[1:])

    input_t = nest.pack_sequence_as(structure=inputs, flat_sequence=input_t)
    call_cell = lambda: cell(input_t, state)

    if sequence_length is not None:
      (output, new_state) = _rnn_step(
          time=time,
          sequence_length=sequence_length,
          min_sequence_length=min_sequence_length,
          max_sequence_length=max_sequence_length,
          zero_output=zero_output,
          state=state,
          call_cell=call_cell,
          state_size=state_size,
          skip_conditionals=True)
    else:
      (output, new_state) = call_cell()

    # Pack state if using state tuples
    output = nest.flatten(output)

    output_ta_t = tuple(
        ta.write(time, out) for ta, out in zip(output_ta_t, output))

    return (time + 1, output_ta_t, new_state)

  _, output_final_ta, final_state = control_flow_ops.while_loop(
      cond=lambda time, *_: time < time_steps,
      body=_time_step,
      loop_vars=(time, output_ta, state),
      parallel_iterations=parallel_iterations,
      swap_memory=swap_memory)

  # Unpack final output if not using output tuples.
  final_outputs = tuple(ta.stack() for ta in output_final_ta)

  # Restore some shape information
  for output, output_size in zip(final_outputs, flat_output_size):
    shape = _concat(
        [const_time_steps, const_batch_size], output_size, static=True)
    output.set_shape(shape)

  final_outputs = nest.pack_sequence_as(
      structure=cell.output_size, flat_sequence=final_outputs)

  return (final_outputs, final_state)


def raw_rnn(cell, loop_fn,
            parallel_iterations=None, swap_memory=False, scope=None):
  """Creates an `RNN` specified by RNNCell `cell` and loop function `loop_fn`.

  **NOTE: This method is still in testing, and the API may change.**

  This function is a more primitive version of `dynamic_rnn` that provides
  more direct access to the inputs each iteration.  It also provides more
  control over when to start and finish reading the sequence, and
  what to emit for the output.
  For example, it can be used to implement the dynamic decoder of a seq2seq
  model.

  Instead of working with `Tensor` objects, most operations work with
  `TensorArray` objects directly.

  The operation of `raw_rnn`, in pseudo-code, is basically the following:

  ```python
  time = tf.constant(0, dtype=tf.int32)
  (finished, next_input, initial_state, _, loop_state) = loop_fn(
      time=time, cell_output=None, cell_state=None, loop_state=None)
  emit_ta = TensorArray(dynamic_size=True, dtype=initial_state.dtype)
  state = initial_state
  while not all(finished):
    (output, cell_state) = cell(next_input, state)
    (next_finished, next_input, next_state, emit, loop_state) = loop_fn(
        time=time + 1, cell_output=output, cell_state=cell_state,
        loop_state=loop_state)
    # Emit zeros and copy forward state for minibatch entries that are finished.
    state = tf.where(finished, state, next_state)
    emit = tf.where(finished, tf.zeros_like(emit), emit)
    emit_ta = emit_ta.write(time, emit)
    # If any new minibatch entries are marked as finished, mark these.
    finished = tf.logical_or(finished, next_finished)
    time += 1
  return (emit_ta, state, loop_state)
  ```

  with the additional properties that output and state may be (possibly nested)
  tuples, as determined by `cell.output_size` and `cell.state_size`, and
  as a result the final `state` and `emit_ta` may themselves be tuples.
  A simple implementation of `dynamic_rnn` via `raw_rnn` looks like this:

  ```python
  inputs = tf.placeholder(shape=(max_time, batch_size, input_depth),
                          dtype=tf.float32)
  sequence_length = tf.placeholder(shape=(batch_size,), dtype=tf.int32)
  inputs_ta = tf.TensorArray(dtype=tf.float32, size=max_time)
  inputs_ta = inputs_ta.unstack(inputs)

  cell = tf.contrib.rnn.LSTMCell(num_units)

  def loop_fn(time, cell_output, cell_state, loop_state):
    emit_output = cell_output  # == None for time == 0
    if cell_output is None:  # time == 0
      next_cell_state = cell.zero_state(batch_size, tf.float32)
    else:
      next_cell_state = cell_state
    elements_finished = (time >= sequence_length)
    finished = tf.reduce_all(elements_finished)
    next_input = tf.cond(
        finished,
        lambda: tf.zeros([batch_size, input_depth], dtype=tf.float32),
        lambda: inputs_ta.read(time))
    next_loop_state = None
    return (elements_finished, next_input, next_cell_state,
            emit_output, next_loop_state)

  outputs_ta, final_state, _ = raw_rnn(cell, loop_fn)
  outputs = outputs_ta.stack()
  ```

  Args:
    cell: An instance of RNNCell.
    loop_fn: A callable that takes inputs
      `(time, cell_output, cell_state, loop_state)`
      and returns the tuple
      `(finished, next_input, next_cell_state, emit_output, next_loop_state)`.
      Here `time` is an int32 scalar `Tensor`, `cell_output` is a
      `Tensor` or (possibly nested) tuple of tensors as determined by
      `cell.output_size`, and `cell_state` is a `Tensor`
      or (possibly nested) tuple of tensors, as determined by the `loop_fn`
      on its first call (and should match `cell.state_size`).
      The outputs are: `finished`, a boolean `Tensor` of
      shape `[batch_size]`, `next_input`: the next input to feed to `cell`,
      `next_cell_state`: the next state to feed to `cell`,
      and `emit_output`: the output to store for this iteration.

      Note that `emit_output` should be a `Tensor` or (possibly nested)
      tuple of tensors with shapes and structure matching `cell.output_size`
      and `cell_output` above.
      The parameter `cell_state` and output `next_cell_state` may be either
      a single or (possibly nested) tuple of tensors.  The parameter
      `loop_state` and output `next_loop_state` may be either a single or
      (possibly nested) tuple of `Tensor` and `TensorArray` objects.  This
      last parameter may be ignored by `loop_fn` and the return value may be
      `None`.  If it is not `None`, then the `loop_state` will be propagated
      through the RNN loop, for use purely by `loop_fn` to keep track of its
      own state.  The `next_loop_state` parameter returned may be `None`.

      The first call to `loop_fn` will be `time = 0`, `cell_output = None`,
      `cell_state = None`, and `loop_state = None`.  For this call:
      The `next_cell_state` value should be the value with which to initialize
      the cell's state.  It may be a final state from a previous RNN or it
      may be the output of `cell.zero_state()`.  It should be a
      (possibly nested) tuple structure of tensors.
      If `cell.state_size` is an integer, this must be
      a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`.
      If `cell.state_size` is a `TensorShape`, this must be a `Tensor` of
      appropriate type and shape `[batch_size] + cell.state_size`.
      If `cell.state_size` is a (possibly nested) tuple of ints or
      `TensorShape`, this will be a tuple having the corresponding shapes.
      The `emit_output` value may be either `None` or a (possibly nested)
      tuple structure of tensors, e.g.,
      `(tf.zeros(shape_0, dtype=dtype_0), tf.zeros(shape_1, dtype=dtype_1))`.
      If this first `emit_output` return value is `None`,
      then the `emit_ta` result of `raw_rnn` will have the same structure and
      dtypes as `cell.output_size`.  Otherwise `emit_ta` will have the same
      structure, shapes (prepended with a `batch_size` dimension), and dtypes
      as `emit_output`.  The actual values returned for `emit_output` at this
      initializing call are ignored.  Note, this emit structure must be
      consistent across all time steps.

    parallel_iterations: (Default: 32).  The number of iterations to run in
      parallel.
Those operations which do not have any temporal dependency and can be run in parallel, will be. This parameter trades off time for space. Values >> 1 use more memory but take less time, while smaller values use less memory but computations take longer. swap_memory: Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs which would typically not fit on a single GPU, with very minimal (or no) performance penalty. scope: VariableScope for the created subgraph; defaults to "rnn". Returns: A tuple `(emit_ta, final_state, final_loop_state)` where: `emit_ta`: The RNN output `TensorArray`. If `loop_fn` returns a (possibly nested) set of Tensors for `emit_output` during initialization, (inputs `time = 0`, `cell_output = None`, and `loop_state = None`), then `emit_ta` will have the same structure, dtypes, and shapes as `emit_output` instead. If `loop_fn` returns `emit_output = None` during this call, the structure of `cell.output_size` is used: If `cell.output_size` is a (possibly nested) tuple of integers or `TensorShape` objects, then `emit_ta` will be a tuple having the same structure as `cell.output_size`, containing TensorArrays whose elements' shapes correspond to the shape data in `cell.output_size`. `final_state`: The final cell state. If `cell.state_size` is an int, this will be shaped `[batch_size, cell.state_size]`. If it is a `TensorShape`, this will be shaped `[batch_size] + cell.state_size`. If it is a (possibly nested) tuple of ints or `TensorShape`, this will be a tuple having the corresponding shapes. `final_loop_state`: The final loop state as returned by `loop_fn`. Raises: TypeError: If `cell` is not an instance of RNNCell, or `loop_fn` is not a `callable`. 
""" if not _like_rnncell(cell): raise TypeError("cell must be an instance of RNNCell") if not callable(loop_fn): raise TypeError("loop_fn must be a callable") parallel_iterations = parallel_iterations or 32 # Create a new scope in which the caching device is either # determined by the parent scope, or is set to place the cached # Variable using the same placement as for the rest of the RNN. with vs.variable_scope(scope or "rnn") as varscope: if varscope.caching_device is None: varscope.set_caching_device(lambda op: op.device) time = constant_op.constant(0, dtype=dtypes.int32) (elements_finished, next_input, initial_state, emit_structure, init_loop_state) = loop_fn( time, None, None, None) # time, cell_output, cell_state, loop_state flat_input = nest.flatten(next_input) # Need a surrogate loop state for the while_loop if none is available. loop_state = (init_loop_state if init_loop_state is not None else constant_op.constant(0, dtype=dtypes.int32)) input_shape = [input_.get_shape() for input_ in flat_input] static_batch_size = input_shape[0][0] for input_shape_i in input_shape: # Static verification that batch sizes all match static_batch_size.merge_with(input_shape_i[0]) batch_size = static_batch_size.value if batch_size is None: batch_size = array_ops.shape(flat_input[0])[0] nest.assert_same_structure(initial_state, cell.state_size) state = initial_state flat_state = nest.flatten(state) flat_state = [ops.convert_to_tensor(s) for s in flat_state] state = nest.pack_sequence_as(structure=state, flat_sequence=flat_state) if emit_structure is not None: flat_emit_structure = nest.flatten(emit_structure) flat_emit_size = [emit.get_shape() for emit in flat_emit_structure] flat_emit_dtypes = [emit.dtype for emit in flat_emit_structure] else: emit_structure = cell.output_size flat_emit_size = nest.flatten(emit_structure) flat_emit_dtypes = [flat_state[0].dtype] * len(flat_emit_size) flat_emit_ta = [ tensor_array_ops.TensorArray( dtype=dtype_i, dynamic_size=True, size=0, 
name="rnn_output_%d" % i) for i, dtype_i in enumerate(flat_emit_dtypes)] emit_ta = nest.pack_sequence_as(structure=emit_structure, flat_sequence=flat_emit_ta) flat_zero_emit = [ array_ops.zeros(_concat(batch_size, size_i), dtype_i) for size_i, dtype_i in zip(flat_emit_size, flat_emit_dtypes)] zero_emit = nest.pack_sequence_as(structure=emit_structure, flat_sequence=flat_zero_emit) def condition(unused_time, elements_finished, *_): return math_ops.logical_not(math_ops.reduce_all(elements_finished)) def body(time, elements_finished, current_input, emit_ta, state, loop_state): """Internal while loop body for raw_rnn. Args: time: time scalar. elements_finished: batch-size vector. current_input: possibly nested tuple of input tensors. emit_ta: possibly nested tuple of output TensorArrays. state: possibly nested tuple of state tensors. loop_state: possibly nested tuple of loop state tensors. Returns: Tuple having the same size as Args but with updated values. """ (next_output, cell_state) = cell(current_input, state) nest.assert_same_structure(state, cell_state) nest.assert_same_structure(cell.output_size, next_output) next_time = time + 1 (next_finished, next_input, next_state, emit_output, next_loop_state) = loop_fn( next_time, next_output, cell_state, loop_state) nest.assert_same_structure(state, next_state) nest.assert_same_structure(current_input, next_input) nest.assert_same_structure(emit_ta, emit_output) # If loop_fn returns None for next_loop_state, just reuse the # previous one. 
loop_state = loop_state if next_loop_state is None else next_loop_state def _copy_some_through(current, candidate): """Copy some tensors through via array_ops.where.""" def copy_fn(cur_i, cand_i): with ops.colocate_with(cand_i): return array_ops.where(elements_finished, cur_i, cand_i) return nest.map_structure(copy_fn, current, candidate) emit_output = _copy_some_through(zero_emit, emit_output) next_state = _copy_some_through(state, next_state) emit_ta = nest.map_structure( lambda ta, emit: ta.write(time, emit), emit_ta, emit_output) elements_finished = math_ops.logical_or(elements_finished, next_finished) return (next_time, elements_finished, next_input, emit_ta, next_state, loop_state) returned = control_flow_ops.while_loop( condition, body, loop_vars=[ time, elements_finished, next_input, emit_ta, state, loop_state], parallel_iterations=parallel_iterations, swap_memory=swap_memory) (emit_ta, final_state, final_loop_state) = returned[-3:] if init_loop_state is None: final_loop_state = None return (emit_ta, final_state, final_loop_state) def static_rnn(cell, inputs, initial_state=None, dtype=None, sequence_length=None, scope=None): """Creates a recurrent neural network specified by RNNCell `cell`. The simplest form of RNN network generated is: ```python state = cell.zero_state(...) outputs = [] for input_ in inputs: output, state = cell(input_, state) outputs.append(output) return (outputs, state) ``` However, a few other options are available: An initial state can be provided. If the sequence_length vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and properly propagates the state at an example's sequence length to the final state output. The dynamic calculation performed is, at time `t` for batch row `b`, ```python (output, state)(b, t) = (t >= sequence_length(b)) ? 
(zeros(cell.output_size), states(b, sequence_length(b) - 1)) : cell(input(b, t), state(b, t - 1)) ``` Args: cell: An instance of RNNCell. inputs: A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`, or a nested tuple of such elements. initial_state: (optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`. dtype: (optional) The data type for the initial state and expected output. Required if initial_state is not provided or RNN state has a heterogeneous dtype. sequence_length: Specifies the length of each sequence in inputs. An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`. scope: VariableScope for the created subgraph; defaults to "rnn". Returns: A pair (outputs, state) where: - outputs is a length T list of outputs (one for each input), or a nested tuple of such elements. - state is the final state Raises: TypeError: If `cell` is not an instance of RNNCell. ValueError: If `inputs` is `None` or an empty list, or if the input depth (column size) cannot be inferred from inputs via shape inference. """ if not _like_rnncell(cell): raise TypeError("cell must be an instance of RNNCell") if not nest.is_sequence(inputs): raise TypeError("inputs must be a sequence") if not inputs: raise ValueError("inputs must not be empty") outputs = [] # Create a new scope in which the caching device is either # determined by the parent scope, or is set to place the cached # Variable using the same placement as for the rest of the RNN. 
with vs.variable_scope(scope or "rnn") as varscope: if varscope.caching_device is None: varscope.set_caching_device(lambda op: op.device) # Obtain the first sequence of the input first_input = inputs while nest.is_sequence(first_input): first_input = first_input[0] # Temporarily avoid EmbeddingWrapper and seq2seq badness # TODO(lukaszkaiser): remove EmbeddingWrapper if first_input.get_shape().ndims != 1: input_shape = first_input.get_shape().with_rank_at_least(2) fixed_batch_size = input_shape[0] flat_inputs = nest.flatten(inputs) for flat_input in flat_inputs: input_shape = flat_input.get_shape().with_rank_at_least(2) batch_size, input_size = input_shape[0], input_shape[1:] fixed_batch_size.merge_with(batch_size) for i, size in enumerate(input_size): if size.value is None: raise ValueError( "Input size (dimension %d of inputs) must be accessible via " "shape inference, but saw value None." % i) else: fixed_batch_size = first_input.get_shape().with_rank_at_least(1)[0] if fixed_batch_size.value: batch_size = fixed_batch_size.value else: batch_size = array_ops.shape(first_input)[0] if initial_state is not None: state = initial_state else: if not dtype: raise ValueError("If no initial_state is provided, " "dtype must be specified") state = cell.zero_state(batch_size, dtype) if sequence_length is not None: # Prepare variables sequence_length = ops.convert_to_tensor( sequence_length, name="sequence_length") if sequence_length.get_shape().ndims not in (None, 1): raise ValueError( "sequence_length must be a vector of length batch_size") def _create_zero_output(output_size): # convert int to TensorShape if necessary size = _concat(batch_size, output_size) output = array_ops.zeros( array_ops.stack(size), _infer_state_dtype(dtype, state)) shape = _concat(fixed_batch_size.value, output_size, static=True) output.set_shape(tensor_shape.TensorShape(shape)) return output output_size = cell.output_size flat_output_size = nest.flatten(output_size) flat_zero_output = tuple( 
_create_zero_output(size) for size in flat_output_size) zero_output = nest.pack_sequence_as( structure=output_size, flat_sequence=flat_zero_output) sequence_length = math_ops.to_int32(sequence_length) min_sequence_length = math_ops.reduce_min(sequence_length) max_sequence_length = math_ops.reduce_max(sequence_length) for time, input_ in enumerate(inputs): if time > 0: varscope.reuse_variables() # pylint: disable=cell-var-from-loop call_cell = lambda: cell(input_, state) # pylint: enable=cell-var-from-loop if sequence_length is not None: (output, state) = _rnn_step( time=time, sequence_length=sequence_length, min_sequence_length=min_sequence_length, max_sequence_length=max_sequence_length, zero_output=zero_output, state=state, call_cell=call_cell, state_size=cell.state_size) else: (output, state) = call_cell() outputs.append(output) return (outputs, state) def static_state_saving_rnn(cell, inputs, state_saver, state_name, sequence_length=None, scope=None): """RNN that accepts a state saver for time-truncated RNN calculation. Args: cell: An instance of `RNNCell`. inputs: A length T list of inputs, each a `Tensor` of shape `[batch_size, input_size]`. state_saver: A state saver object with methods `state` and `save_state`. state_name: Python string or tuple of strings. The name to use with the state_saver. If the cell returns tuples of states (i.e., `cell.state_size` is a tuple) then `state_name` should be a tuple of strings having the same length as `cell.state_size`. Otherwise it should be a single string. sequence_length: (optional) An int32/int64 vector size [batch_size]. See the documentation for rnn() for more details about sequence_length. scope: VariableScope for the created subgraph; defaults to "rnn". Returns: A pair (outputs, state) where: outputs is a length T list of outputs (one for each input) states is the final state Raises: TypeError: If `cell` is not an instance of RNNCell. 
ValueError: If `inputs` is `None` or an empty list, or if the arity and type of `state_name` does not match that of `cell.state_size`. """ state_size = cell.state_size state_is_tuple = nest.is_sequence(state_size) state_name_tuple = nest.is_sequence(state_name) if state_is_tuple != state_name_tuple: raise ValueError("state_name should be the same type as cell.state_size. " "state_name: %s, cell.state_size: %s" % (str(state_name), str(state_size))) if state_is_tuple: state_name_flat = nest.flatten(state_name) state_size_flat = nest.flatten(state_size) if len(state_name_flat) != len(state_size_flat): raise ValueError("#elems(state_name) != #elems(state_size): %d vs. %d" % (len(state_name_flat), len(state_size_flat))) initial_state = nest.pack_sequence_as( structure=state_size, flat_sequence=[state_saver.state(s) for s in state_name_flat]) else: initial_state = state_saver.state(state_name) (outputs, state) = static_rnn( cell, inputs, initial_state=initial_state, sequence_length=sequence_length, scope=scope) if state_is_tuple: flat_state = nest.flatten(state) state_name = nest.flatten(state_name) save_state = [ state_saver.save_state(name, substate) for name, substate in zip(state_name, flat_state) ] else: save_state = [state_saver.save_state(state_name, state)] with ops.control_dependencies(save_state): last_output = outputs[-1] flat_last_output = nest.flatten(last_output) flat_last_output = [ array_ops.identity(output) for output in flat_last_output ] outputs[-1] = nest.pack_sequence_as( structure=last_output, flat_sequence=flat_last_output) return (outputs, state) def static_bidirectional_rnn(cell_fw, cell_bw, inputs, initial_state_fw=None, initial_state_bw=None, dtype=None, sequence_length=None, scope=None): """Creates a bidirectional recurrent neural network. 
Similar to the unidirectional case above (rnn) but takes input and builds independent forward and backward RNNs with the final forward and backward outputs depth-concatenated, such that the output will have the format [time][batch][cell_fw.output_size + cell_bw.output_size]. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given. Args: cell_fw: An instance of RNNCell, to be used for forward direction. cell_bw: An instance of RNNCell, to be used for backward direction. inputs: A length T list of inputs, each a tensor of shape [batch_size, input_size], or a nested tuple of such elements. initial_state_fw: (optional) An initial state for the forward RNN. This must be a tensor of appropriate type and shape `[batch_size, cell_fw.state_size]`. If `cell_fw.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell_fw.state_size`. initial_state_bw: (optional) Same as for `initial_state_fw`, but using the corresponding properties of `cell_bw`. dtype: (optional) The data type for the initial state. Required if either of the initial states are not provided. sequence_length: (optional) An int32/int64 vector, size `[batch_size]`, containing the actual lengths for each of the sequences. scope: VariableScope for the created subgraph; defaults to "bidirectional_rnn" Returns: A tuple (outputs, output_state_fw, output_state_bw) where: outputs is a length `T` list of outputs (one for each input), which are depth-concatenated forward and backward outputs. output_state_fw is the final state of the forward rnn. output_state_bw is the final state of the backward rnn. Raises: TypeError: If `cell_fw` or `cell_bw` is not an instance of `RNNCell`. 
ValueError: If inputs is None or an empty list. """ if not _like_rnncell(cell_fw): raise TypeError("cell_fw must be an instance of RNNCell") if not _like_rnncell(cell_bw): raise TypeError("cell_bw must be an instance of RNNCell") if not nest.is_sequence(inputs): raise TypeError("inputs must be a sequence") if not inputs: raise ValueError("inputs must not be empty") with vs.variable_scope(scope or "bidirectional_rnn"): # Forward direction with vs.variable_scope("fw") as fw_scope: output_fw, output_state_fw = static_rnn( cell_fw, inputs, initial_state_fw, dtype, sequence_length, scope=fw_scope) # Backward direction with vs.variable_scope("bw") as bw_scope: reversed_inputs = _reverse_seq(inputs, sequence_length) tmp, output_state_bw = static_rnn( cell_bw, reversed_inputs, initial_state_bw, dtype, sequence_length, scope=bw_scope) output_bw = _reverse_seq(tmp, sequence_length) # Concat each of the forward/backward outputs flat_output_fw = nest.flatten(output_fw) flat_output_bw = nest.flatten(output_bw) flat_outputs = tuple( array_ops.concat([fw, bw], 1) for fw, bw in zip(flat_output_fw, flat_output_bw)) outputs = nest.pack_sequence_as( structure=output_fw, flat_sequence=flat_outputs) return (outputs, output_state_fw, output_state_bw) ```
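The `static_rnn` unrolling and its `sequence_length` copy-through behavior can be illustrated without TensorFlow. The sketch below is an assumption-laden toy: `ToyCell` and `toy_static_rnn` are hypothetical stand-ins (not part of the TF API) that use plain Python lists in place of tensors, but they mirror the documented semantics — past an element's sequence length, the output is zeroed and the state is copied through unchanged.

```python
# Toy sketch of static_rnn's unrolling semantics (no TensorFlow).
# ToyCell is a hypothetical stand-in for an RNNCell: its state is a
# per-batch-element running sum, and its output equals its new state.

class ToyCell:
    state_size = 1
    output_size = 1

    def zero_state(self, batch_size):
        return [0.0] * batch_size

    def __call__(self, input_, state):
        new_state = [s + x for s, x in zip(state, input_)]
        return new_state, new_state  # (output, new_state)


def toy_static_rnn(cell, inputs, sequence_length=None):
    """Unroll `cell` over `inputs`, a length-T list of batch vectors.

    Past an element's sequence_length, emit zeros and copy the state
    through unchanged, mirroring static_rnn's dynamic calculation.
    """
    batch_size = len(inputs[0])
    state = cell.zero_state(batch_size)
    outputs = []
    for t, input_ in enumerate(inputs):
        new_output, new_state = cell(input_, state)
        if sequence_length is not None:
            finished = [t >= n for n in sequence_length]
            new_output = [0.0 if f else o
                          for f, o in zip(finished, new_output)]
            new_state = [s if f else ns
                         for f, s, ns in zip(finished, state, new_state)]
        outputs.append(new_output)
        state = new_state
    return outputs, state


# Batch of 2 sequences, T=3; the second sequence is only 2 steps long.
inputs = [[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]]
outputs, final_state = toy_static_rnn(ToyCell(), inputs,
                                      sequence_length=[3, 2])
print(outputs)      # [[1.0, 10.0], [3.0, 30.0], [6.0, 0.0]]
print(final_state)  # [6.0, 30.0]
```

Note how the second batch element's final state stays at 30.0 (its state at step `sequence_length - 1`) while its step-2 output is zeroed, which is exactly the `(zeros(...), states(b, sequence_length(b) - 1))` branch of the docstring's dynamic-calculation formula.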
Anjani Thomas (born July 10, 1959) is an American singer-songwriter and pianist, best known for her work with singer-songwriter Leonard Cohen, as well as Carl Anderson, Frank Gambale, and Stanley Clarke. She became a solo artist in 2000. Life Anjani was born in Honolulu, Hawaii, where she trained in guitar, piano and voice. She attended Berklee College of Music for a year, then moved to New York City to pursue a music career. She performed in jazz clubs before meeting producer John Lissauer, who hired her to provide backup vocals on Leonard Cohen's influential song "Hallelujah" from Various Positions. Anjani went on to tour with Cohen in 1985 as his keyboardist and backup vocalist, and worked with Cohen for many years after, lending her talents to I'm Your Man, The Future, Dear Heather and Old Ideas. Anjani launched a solo career with Anjani in 2000, followed by The Sacred Names in 2001 – an ode to the Aramaic, Greek and Hebrew names of God. In 2006, Cohen contributed lyrics and production talents to Anjani's music and arrangements for Blue Alert on Columbia Records. The song "Blue Alert" was used in a 2007 Old Navy TV spot. In 2011, Anjani began working on a follow-up to Blue Alert, which continued the collaboration with Leonard Cohen on three new songs. The new record, I Came to Love, was released in download-only form in July 2014.
Discography Anjani (Little Fountain Music, 2000) The Sacred Names (Little Fountain Music, 2001) Blue Alert (Columbia, 2006) I Came to Love (2014) References External links Official site "Trusting the Force: Into the Heart of Blue Alert" interview at Music Box, April 2007 "The Story of C" at Leonard Cohen's site 1959 births Living people Musicians from Honolulu Writers from Honolulu 20th-century American women singers 20th-century American pianists American women singer-songwriters American women jazz singers American jazz singers American jazz pianists American women pianists 20th-century American singer-songwriters 21st-century American women musicians Singer-songwriters from Hawaii
Caspar Schamberger (1 September 1623 in Leipzig, Germany – 8 April 1706) was a German surgeon. His name represents the first school of Western medicine in Japan and the beginning of rangaku, or Dutch studies. Schamberger grew up in war-torn Saxony. In 1637 he started studying surgery under the master surgeon of the surgeons' guild in his native town of Leipzig. Three years later he finished his education and started traveling through Northern Germany, Denmark, Sweden, and the Netherlands. In 1643 he joined the Dutch East India Company (VOC), signing a contract for four years of service. Schamberger left Europe in the same year aboard the Eiland Mauritius, but the ship was wrecked four months later near the Cape of Good Hope. In July 1644 Schamberger finally arrived in Batavia, the administrative center of the expanding Dutch colonial empire. For the next few years he worked as a ship's surgeon, visiting Portuguese Goa, Ceylon, Gamron and Kismis (Persia), before returning to Batavia in 1646. In summer 1649 he arrived in Nagasaki and began his service at Dejima, the Dutch trading post in Japan. Later that year he traveled to Edo as a member of a special embassy that was dispatched due to seriously strained Dutch-Japanese relations. Because of the serious illness of shogun Tokugawa Iemitsu, their audience was postponed several times. During that time Schamberger attracted the attention of Imperial commissioner Inoue Masashige, who was responsible for the internal security of the empire and its relations with the VOC. Inoue, who had a keen interest in useful Western know-how, introduced Schamberger to feudal lords, and Schamberger started to look after high-ranking patients. His treatment must have been quite successful.
When the Dutch envoy finally returned to Nagasaki in spring 1650, four Europeans were requested to stay in Edo to give further instruction: Schamberger (surgery), Willem Bijlevelt (mathematics), the Swedish corporal Juriaen Schedel (mortar shooting), and Schedel's assistant Jan Smidt. After an exceptionally long stay in Edo the four went back to Nagasaki in October 1650. But Schamberger had to return shortly after, participating in the annual journey of the Dutch trading post chief to the court. This time too he was called to the residences of high-ranking officials. In April 1651, the Dutch entourage left for Nagasaki again. That November, Schamberger's service at Dejima ended, and he returned to Batavia. His interpreter Inomata Dembei, following orders from the governor of Nagasaki, had to draw up an extensive report on Schamberger's surgical art. This report, together with the satisfaction and continued interest among high-ranking officials and feudal lords, led to the birth of the so-called "Caspar-style surgery" (kasuparu-ryû geka), the first Western-style school of medicine inspired by a surgeon stationed at Dejima. In 1655, Schamberger returned to the Netherlands, traveling back to Leipzig a few weeks later. In 1658 he acquired citizenship in Leipzig and started a new career as a merchant. He married three times: Elisabeth Rost in 1659, Regina Maria Conrad in 1662, and Euphrosine Kleinau in 1685. In 1667 his son Johann Christian Schamberg was born. Johann later became a Professor of Medicine at Leipzig University and was elected its president twice. One of his greatest achievements was the foundation of the "New Anatomical Theatre". In 1686, Schamberger published an extensive description of three illustrations depicting a great variety of people, exotic fruits, coins, animals, and artifacts he had observed all over "East India". It is dedicated to Johann Georg III, Duke of Saxony. Only one copy of this private print is preserved.
In 1706, Schamberger died, followed shortly after by his son. Schamberger's name stands for the beginning of a lasting interest in Western-style medicine that gradually led to the rise of the so-called Dutch Studies (rangaku) in early modern Japan. Works Dem Durchlauchtigsten Großmächtigen Fürsten und Herrn Herrn Johann Georgen dem Dritten Hertzogen zu Sachsen [...] Dreyer in unterthänigkeit offerirten Schildereyen Der Ost Indischen und angräntzenden Königreichen in Zwölff=jähriger Reise observirte Vornehmste Seltenheiten betreffende Kurtze Erläuterung In Eil entworffen Von Caspar Schambergern Bürgern und Handelsmann in Leipzig. Daselbst gedruckt durch Christoph Fleischern Anno 1686. References Reiner H. Hesselink: Prisoners from Nambu: Reality and Make-Believe in Seventeenth-Century Japanese Diplomacy. University of Hawaii Press, 2002. Wolfgang Michel: Von Leipzig nach Japan - Der Chirurg und Handelsmann Caspar Schamberger. Iudicium, München 1999. () Wolfgang Michel: «Der Ost-Indischen und angrenzenden Königreiche, vornehmste Seltenheiten betreffende kurze Erläuterung» - Neue Funde zum Leben und Werk des Leipziger Chirurgen und Handelsmanns Caspar Schamberger (1623–1706). Kyushu University, The Faculty of Languages and Cultures Library No 1. Fukuoka: Hana-Shoin 2010. () (pdf file: Kyushu University Institutional Repository) Wolfgang Michel: Medicine and Allied Sciences in the Cultural Exchange between Japan and Europe in the Seventeenth Century. In: Hans Dieter Ölschleger (ed.): Theories and Methods in Japanese Studies: Current State & Future Developments - Papers in Honor of Josef Kreiner. Vandenhoeck & Ruprecht Unipress, Göttingen, 2007, pp. 285–302. (pdf file: Kyushu University Repository)
1623 births 1706 deaths German surgeons Rangaku Science and technology in Japan 17th-century German physicians Dutch East India Company people 17th-century German writers 17th-century German male writers
The Independence Stadium (Stadium Merdeka) is a stadium in Kuala Lumpur, Malaysia. It is known as the site of the formal declaration of independence of the Federation of Malaya on 31 August 1957. The stadium is also the site of the proclamation of Malaysia on 16 September 1963. Currently owned by Permodalan Nasional Berhad (PNB), the stadium has a lower and an upper terrace with a total capacity of 25,000, as well as 14 tunnel entrances, a covered stand, 50 turnstiles and four floodlight towers. The stadium was designed by American architect Stanley Jewkes under the instruction of the first Prime Minister of Malaysia, Tunku Abdul Rahman. Upon its completion, the stadium held the world records for the tallest prestressed floodlight towers and the biggest cantilever shell roofs, and it was the largest stadium in Southeast Asia at the time. The stadium was the principal venue in Kuala Lumpur for celebrations and sporting events until 1998, when the National Stadium was built for the 16th Commonwealth Games. Prior to that, the stadium was the home ground of the Malaysian national football team. The stadium witnessed the historic qualifying match for the 1980 Olympic Games, the last time the national football team qualified for the Olympics; however, due to the boycott led by the United States, the country did not participate in the final tournament. The stadium was also the venue for the Merdeka Tournament until 1995, and it hosted three of the five Southeast Asian Games held in Kuala Lumpur. In 1975 the stadium hosted the fight between the legendary boxer Muhammad Ali and British boxer Joe Bugner, prior to the Thrilla in Manila, as well as the Hockey World Cup final between Pakistan and India. The stadium is currently a national heritage building.
In 2008, the Independence Stadium received the UNESCO Asia-Pacific Award for Excellence for Heritage Conservation owing to its cultural significance and its embodiment of a unique independence declaration event. History Background Since the 1930s, the Football Association of Selangor (F.A.S.), commonly referred to as Selangor F.C., had been urging the government to build a professional football stadium. The request was turned down, as the MAHA Stadium, the first stadium of Selangor F.C., built in collaboration with the Malayan Agri-Horticultural Association (MAHA), still stood at Jalan Ampang at that time. However, the MAHA Stadium was destroyed by the Japanese army in World War II. After the war, the F.A.S. and the Football Association of Malaya (FAM) stepped up their efforts to get a new stadium, as the MAHA Stadium in Jalan Ampang was now unusable. After Tunku Abdul Rahman was elected president of the two associations in 1951, both associations fought hard to have a first-class stadium built. In 1952, an ad-hoc committee was formed by the Kuala Lumpur Municipal Commissioners to study the proposal, and a report was released three months later. Several proposals were also brought before the Federal Legislative Council on this matter, including by Tunku himself, but they were blocked by the council. After the Alliance Party won the first general election in Malaya, Tunku, who was now the Chief Minister, started an advisory committee led by E.M. McDonald to study the possibility of building a stadium. On 4 June 1956, a total of 160 proposal plans were submitted to the government. On 2 May 1956, Tunku and McDonald had started looking for suitable sites for the stadium; one of the first places they visited was the Chin Woo Stadium. While standing on the tower of the stadium, Tunku saw a few athletes practicing near the Coronation Park, and asked, "Don't you think it would make an ideal spot for Stadium Merdeka?"
Although McDonald was concerned about the traffic congestion that might arise in the future, Tunku insisted that it was the perfect spot for the country's first stadium. The site was a Chinese cemetery before it became the oldest golf course in Kuala Lumpur, which had been abandoned since 1921. The site was later called "Coronation Park" when George VI was crowned King of the United Kingdom. Before it was decided to build a stadium on the site, the Royal Malaysia Police had planned to build several quarters there. The uneven ground of the site meant that excavation work had to be carried out before construction could begin. The construction of the stadium would also mean that a small part of the school grounds of the Victoria Institution would be acquired. Despite McDonald's efforts to persuade Tunku to choose another site for the stadium, Tunku insisted on building the stadium there. On 11 July, Tunku brought the matter before the Legislative Council and obtained its approval. Four days later, the project was transferred to the Malayan Public Works Department. Construction The stadium was constructed from 25 September 1956 to 21 August 1957, and was designed by the then Director of the Public Works Department, Stanley Edward Jewkes. Several engineers such as Lee Kwok Thye, Chan Sai Soo and Peter Low were also involved in the project. The cornerstone of the stadium was laid by Tunku himself on 15 February 1957. Due to budget constraints, most of the construction materials were sourced locally, which meant that imported materials such as structural steel had to be avoided. To ensure that the stadium would be finished in time, the design was done by the "fast-track" method, meaning that as soon as each element of the design was finished, it was immediately constructed. The stadium was constructed as an earthed amphitheatre, meaning that part of the stadium is below ground level.
The excavated soil was then transferred to the site of Masjid Negara, which was originally a valley and subject to flooding. By the time the earthworks and excavation were completed, the design of the terrace seating had already been done, and its construction began immediately. At the same time, the design of the covered stands, the upper terraces and the stairs was carried on by the architects. Two contractors were involved in the construction: Lim Quee built the main covered stands, while Boon & Cheah were responsible for the terraces and the tunnel entrances. Besides designing the stadium, Stanley Jewkes was also responsible for the traffic planning around the stadium. Other than Jewkes, architect Edgar Green was also involved in designing interior facilities such as the toilets and the canteen facilities of the restaurant. The stadium held two world records upon its completion: the tallest prestressed floodlight towers, at 120 feet, and the biggest cantilever shell roofs. The floodlight towers, constructed from Hume culvert pipes, were also the first prestressed towers in the world made from precast culvert pipe units. Another interesting feat accomplished at the time was that all four towers were erected without using a crane. The shell roof for the grandstand, made of concrete, was chosen as it was both economical and aesthetically pleasing. Although the strength of the cantilever roofs was tested before the ceremony, Jewkes was concerned that the roof might be unable to withstand the vibrations caused by the firing of the cannons during the ceremony; in the event, no damage occurred and the ceremony went well. Engineer Lee Kwok Thye credited the Kongsi women, also known as Lai Sui Mui, for their role in the construction. The women were responsible for carrying buckets of concrete from the ground up to the structures being constructed, where the concrete was poured into the formwork.
Opening and the declaration of independence The stadium was completed on 21 August 1957, and the opening ceremony was held on 30 August 1957, a day before the country declared independence. At the time of its completion, it was the largest stadium in Southeast Asia. The opening ceremony was officiated by Tunku Abdul Rahman and witnessed by over 15,000 spectators, including foreign athletes. It was also Tunku himself who had laid the foundation stone on 15 February 1957. The ceremony included a mass drill performance by 1,000 students. On 31 August 1957, power was transferred from the British Empire to the newly independent Malayan government. More than 20,000 people crowded into the stadium, which had been built specifically for this occasion. The ceremony was attended by Prince Henry, Duke of Gloucester, representing the Queen of the United Kingdom, the Malay rulers of the nine states, the last High Commissioner of Malaya Sir Donald MacGillivray, foreign dignitaries, members of the federal cabinet and Tunku Abdul Rahman himself. Following the handover of the instrument of independence from Prince Henry to Tunku, the prime minister read out the Declaration of Independence, followed by his iconic seven shouts of "Merdeka". The national anthem was then sung for the first time by a multiracial choir led by Tony Fonseka, while the national flag was raised by Oliver Cuthbert Samuel. The ceremony continued with an azan call and a thanksgiving prayer, as well as a gun salute. A mass drill was also performed by students at the event. Declaration of Malaysia On 16 September 1963, the stadium was the site of the proclamation of the formation of the Federation of Malaysia. The event was witnessed by more than 30,000 spectators and attended by the Yang di-Pertuan Agong, the Malay rulers, the Governors of Penang, Malacca, Singapore, Sarawak and Sabah, as well as cabinet members, foreign diplomats and invited guests. 
The Proclamation of Malaysia, handed over by the Yang di-Pertuan Agong, was read out by the Prime Minister, Tunku Abdul Rahman. He then shouted "Merdeka" seven times, which was echoed by the crowd. This was followed by the playing of the nobat orchestra and the national anthem, performed by the Royal Malaysia Police Band, and then by a 101-gun salute fired by the Federation artillery. The event ended with a prayer by the Mufti of Negeri Sembilan, Ahmad Mohammad Said. Plans for demolition The stadium's role as the principal venue for celebrations and sporting events in Kuala Lumpur was taken over by the National Stadium, built in the mid-1990s. The stadium and its land were given to United Engineers Malaysia (UEM), which had intended to redevelop the land into a RM1 billion entertainment and office complex. However, the company did not proceed with the redevelopment due to public outcry and financial difficulties brought on by the late-1990s Asian economic crisis. The stadium is now owned by Permodalan Nasional Berhad (PNB). Several options were suggested following PNB's acquisition of the site, such as redeveloping the stadium for smaller sporting activities, building a sports museum at the site, or relocating the stadium elsewhere. Nonetheless, the stadium remains a venue for sporting events and concerts to this day. Renovations and restoration The stadium has been through several renovations. The first was in 1974, when concrete upper tiers were added to increase the stadium's capacity to 32,800 seats; the project cost about RM 4.5 million. In 1983, the stadium's floodlights were replaced to make colour television transmission possible. The seating capacity was further increased in early 1986 with the addition of upper tiers rising into the airspace above the north, east and south terraces. 
Prior to the 1989 SEA Games, the grandstand was modified and the Games' torch platform, which involved a set of grand steps leading up to the torch, was built. The renovation, which cost RM 5.3 million, also included the laying of new tracks, repairs to the roofs, the enclosing of sections of seating and the repainting of the seating terraces so that the stadium was ready for the Games. In 2007, the stadium underwent massive renovations to restore its 1957 look. The 45,000-capacity stadium was reduced to 20,000 seats, which meant that several of the upper terrace blocks built over the years were demolished. The entire stadium was to be returned to the state it was in when Tunku proclaimed independence, which included the word "Merdeka" written in the stadium and the original seating arrangements of the Malay Rulers, the Queen's representatives and officers. The paintwork, main pavilion, two VIP rooms and the changing rooms were to be restored to their original state as well. The project, which cost RM2 million, was led by PNB. Merdeka PNB 118 In December 2009, it was announced that PNB would build a 100-storey skyscraper on the site between Independence Stadium and Stadium Negara. The project was officially launched by the then Prime Minister Najib Razak in September 2016. Formerly named Warisan Merdeka, the project was estimated to be finished by 2021. The tower, when completed, would be the second tallest building in the world and the tallest in Southeast Asia. It would include 83 levels of office space and 16 levels of luxury hotel, with the remaining floors occupied by an observation deck, restaurants, a sky lobby, a podium and amenities. The project would also include a shopping mall and residential areas. The tower was built on Tunku Abdul Rahman Park (also known as Merdeka Park), which had been laid out alongside Independence Stadium. The move was criticized as the park was supposed to act as a heritage buffer zone. 
The park had also been a recreational space for Kuala Lumpur residents for generations. The project may also worsen traffic congestion in the area, and there was concern that nearby schools might be affected and forced to relocate. Sporting events Football Prior to the completion of the National Stadium, Independence Stadium was the home ground of both the national football team (1957–1998) and Selangor FA (1957–1994), and was used temporarily by Kuala Lumpur FA in 1997. It was also the venue for the annual Independence Football Tournament and most of the finals of the Malaysia Cup. The stadium hosted all the football matches of the 1965 SEAP Games, 1971 SEAP Games and the 1977 SEA Games, as well as the finals of the 1989 SEA Games. The first match at the stadium was the opening match of the 1957 Independence Football Tournament, on 31 August 1957, between the Hong Kong League XI and Cambodia. The Hong Kong League XI became the first team to win at Independence Stadium, beating Cambodia 6–2; the first goal was scored by Law Kwok-tai. The next day, the Malayan national team played its first game at the stadium in a match against Burma, which finished 5–2. The national team won its first Merdeka Cup in 1958 in a match against South Vietnam. The first Malaya Cup final held at the stadium was played on October 19, 1957, between Selangor and Perak. Perak won the game 3–2, becoming the first club to win a final at Independence Stadium. Perak also won the first Malaysia Cup at Independence Stadium after the cup was renamed in 1967. The stadium went on to host 36 Malaysia Cup finals until the 1990s. The stadium also witnessed the first match played by the newly formed Malaysia football team, a combination of Malayan and Singaporean players (Singapore left in 1965 after its separation from Malaysia). 
The match took place on 8 August 1963 (although the federation only came into existence on 16 September 1963) in the first round of the 1963 Merdeka Tournament against Japan. The team was defeated 4–3. The first South East Asia Peninsular Games football match held at Independence Stadium was the opening match between Thailand and South Vietnam on December 15, which Thailand won 2–1. The stadium hosted the rest of the matches as well as the final, held on December 22, which ended in a tie between Burma and Thailand. In 1989, the Malaysian national football team won their fourth SEA Games gold medal, the first at the stadium. The first Olympic qualification match held at the stadium was the preliminary-round match between Malaysia and Thailand on 12 October 1964, which ended in a draw. In 1980, the stadium was the venue for the Olympic qualifying tournament. On 25 March, the stadium witnessed Malaysia's qualification for the 1980 Olympic Games: the national team beat South Korea 2–1, qualifying for the Olympic Games for the second time. However, due to the boycott led by the United States, the country did not participate in the final tournament. The first FIFA qualification match at the stadium was between Malaysia and South Korea on March 10, 1985. Following the completion of the Shah Alam Stadium in 1994, both Selangor FA and the Malaysian national team moved to the newly built stadium. The national team then moved to the National Stadium after its completion in 1998. The Malaysia Cup final returned to the stadium in 1999, for the first time since 1993, with Brunei beating Sarawak 2–1. The stadium has not hosted a Malaysia Cup final since. In February 2015, Kuala Lumpur FA returned to Independence Stadium for the first time in 17 years for the team's opening Premier League match of the season against Sabah. 
The last international match played at the stadium saw the Malaysian team draw 1–1 with Cambodia in October 2001. Multi-sport events The stadium was the venue of the Merdeka Games, held to commemorate the independence of Malaya. Several events, including the Pestabola Merdeka, were held from 30 August to 8 September 1957. Besides football, the stadium held cycling, athletics and hockey competitions as part of the Games. A similar event was held in 1963 when the Federation of Malaysia was formed. Operated by Perbadanan Stadium Merdeka (1963–1998), the stadium also hosted four of the six SEA Games held in Kuala Lumpur. The stadium first hosted the Southeast Asian Games (then known as the Southeast Asian Peninsular Games) in 1965. Malaysia had originally been scheduled to host the Games in 1967, but the Games were brought forward after the original host, Laos, opted out due to financial difficulties. The stadium was the venue for the opening and closing ceremonies, as well as the athletics, football and cycling events. It went on to host the 1971, 1977 and 1989 editions. The stadium also hosted the first SUKMA Games in 1986 and the second SUKMA Games two years later. Other sports In 1975, the stadium hosted the third Men's Hockey World Cup, from 1 to 15 March 1975. India won its only Hockey World Cup after beating Pakistan 2–1. The 1975 edition also saw the Malaysian national team's best performance, a fourth-place finish. The event was witnessed by over 50,000 spectators, despite the stadium having only 45,000 seats. The stadium also held the fight between Muhammad Ali and Joe Bugner on 1 July 1975, staged as an exhibition bout as part of Ali's Far East tour. The match was held prior to the famous Thrilla in Manila three months later. 
About 20,000 spectators witnessed the fight in the stadium, including the Yang di-Pertuan Agong, the Prime Minister, several kings and governors, as well as Joe Frazier, Ali's former adversary. Ali won the fight 73–67, 73–65 and 72–65 after the mandatory 15 rounds. In athletics, the stadium hosted the 1991 Asian Athletics Championships, held from 19 to 23 October. Aside from that, the stadium was also regularly used for national championships. Other events Concerts Independence Stadium has also hosted major concerts. Uriah Heep held its first Malaysian concert at the stadium on 19 October 1983. Michael Jackson's HIStory World Tour filled the stadium to capacity: Jackson performed two sold-out concerts on 27 and 29 October 1996, in front of 55,000 people each night. Linkin Park performed at the stadium on their Meteora World Tour on 15 October 2003; the concert was attended by over 28,000 people. Mariah Carey first performed at the stadium on 20 February 2004 as part of her Charmbracelet World Tour, and returned ten years later with The Elusive Chanteuse Show on 22 October 2014. Celine Dion performed on 13 April 2008 for a total audience of 48,000 as part of her Taking Chances World Tour. Avril Lavigne played her first show at the stadium on 29 August 2008, and returned for her Black Star Tour in 2012 and again in 2014 as part of The Avril Lavigne Tour. Justin Bieber performed at the stadium as part of his debut world tour on 21 April 2011. Other Western artists who have played the stadium include Jennifer Lopez, Cliff Richard, Scorpions, Metallica, My Chemical Romance and Bon Jovi. Taiwanese singer Jolin Tsai first performed at the stadium on her Myself World Tour on 11 June 2011, and returned for her Play World Tour on 16 July 2016. 
The following year, Mandopop singer Wang Leehom held his Music-Man Tour at the stadium on March 3; he returned on 16 March 2019 as part of his Descendants of the Dragon 2060 World Tour. Chinese singer Jay Chou first performed at the stadium in 2003 on The One World Tour, and again two years later on his Incomparable World Tour. His third appearance at the stadium was on February 23 as part of The World Tour, and he returned for his Invincible World Tour on August 6, 2016. K-pop group EXO played the stadium on 12 March 2016 as part of their Exo Planet #2 - The Exo'luxion World Tour, and returned on 18 March 2017 on their Exo Planet #3 - The Exo'rdium World Tour. Indian composer A.R. Rahman performed his A.R. Rahman Live in Concert on 14 May 2016 at the stadium. In the same year, South Korean group Big Bang held their MADE (V.I.P) Tour fan meeting at the stadium. G-Dragon performed at the stadium on his solo Act III: M.O.T.T.E World Tour on 17 September 2017. Malaysian singer Michael Wong held his Lonely Planet Concert Tour at the stadium on 10 November 2018, becoming the first local singer to hold a solo concert there. Other Asian singers that have performed at the stadium include Kelly Chen, Beyond, Faye Wong, Wonder Girls, Jacky Cheung and Mayday. Other shows held in the stadium include: Philiac Concert for Peace, May 2011 B.o.B, Far East Movement, Mizz Nina, Watsons Music Festival, 15 December 2012 Political demonstrations On 9 July 2011, protesters of the Bersih 2.0 rally marched to Independence Stadium. The decision was made after the organisers had consulted the Yang di-Pertuan Agong. On 12 January 2013, The People's Uprising rally was held in the stadium. World records The stadium witnessed the largest silat lesson in the world on 29 August 2015. The lesson drew 12,393 participants and was directed by Grandmaster YM Syeikh Dr. 
Md Radzi bin Hanafi, who is the Pewaris Mutlak Silat Cekak from Persekutuan Seni Silat Cekak Pusaka Ustaz Hanafi Malaysia. It was held in conjunction with that year's National Day celebration. Heritage conservation In February 2003, Independence Stadium was named a national heritage building. In 2007, Independence Stadium underwent restoration to its original 1957 condition as part of Malaysia's 50th-anniversary plans to relive the moment when Tunku Abdul Rahman proclaimed independence there. The restoration was completed by December 2009 and received the UNESCO Asia-Pacific 2008 Award of Excellence for Cultural Heritage Conservation. Transportation The stadium is served by the Maharajalela Monorail station, situated next to one of the stadium's west exits, between Tun Sambanthan station and Hang Tuah station. The stadium is also indirectly served by the Merdeka MRT station, situated between Pasar Seni MRT station and Bukit Bintang MRT station on the Sungai Buloh-Kajang Line. Although its name refers to the stadium, the station serves the adjacent Stadium Negara instead. The stadium can also be reached via the Ampang and Sri Petaling LRT lines by stopping at Plaza Rakyat LRT station. A 180-metre pedestrian linkway was built from the station to the Merdeka MRT station, which is just a few blocks away from the stadium. The walkway is air-conditioned and brightly lit, and travelators were installed for the comfort of passengers. The stadium can also be reached by bus: located near the stadium, the Pasar Seni bus hub is the terminating stop for a dozen bus lines in the Klang Valley. 
Gallery See also Merdeka 118 Stadium Negara Stanley Edward Jewkes National Stadium List of stadiums in Malaysia Malayan Declaration of Independence Notes References Further reading External links Merdeka 118 Precinct : Stadium Merdeka webpage Football venues in Malaysia Athletics (track and field) venues in Malaysia Multi-purpose stadiums in Malaysia Sports venues in Kuala Lumpur Sports venues completed in 1957 1957 establishments in British Malaya Southeast Asian Games stadiums Selangor F.C.
A bhikkhu (Pali: भिक्खु, Sanskrit: भिक्षु, bhikṣu) is an ordained male in Buddhist monasticism. Male and female monastics ("nun", bhikkhunī, Sanskrit bhikṣuṇī) are members of the Sangha (Buddhist community). The lives of all Buddhist monastics are governed by a set of rules called the prātimokṣa or pātimokkha. Their lifestyles are shaped to support their spiritual practice: to live a simple and meditative life and attain nirvana. A person under the age of 20 cannot be ordained as a bhikkhu or bhikkhuni but can be ordained as a śrāmaṇera or śrāmaṇērī. Definition Bhikkhu literally means "beggar" or "one who lives by alms". The historical Buddha, Prince Siddhartha, having abandoned a life of pleasure and status, lived as an alms mendicant as part of his śramaṇa lifestyle. Those of his more serious students who renounced their lives as householders and came to study full-time under his supervision also adopted this lifestyle. These full-time student members of the sangha became the community of ordained monastics who wandered from town to city throughout the year, living off alms and stopping in one place only for the Vassa, the rainy months of the monsoon season. In the Dhammapada commentary of Buddhaghoṣa, a bhikkhu is defined as "the person who sees danger (in samsara or the cycle of rebirth)" (Pāli: ikkhatīti: bhikkhu). He therefore seeks ordination to obtain release from it. The Dhammapada states: The Buddha accepted female bhikkhunis after his step-mother, Mahapajapati Gotami, organized a women's march to Vesāli, and he requested that she accept the Eight Garudhammas. Gotami agreed to accept the Eight Garudhammas and was accorded the status of the first bhikkhuni. Subsequent women had to undergo full ordination to become nuns. Historical terms in Western literature In English literature before the mid-20th century, Buddhist monks were often referred to by the term bonze, particularly when describing monks from East Asia and French Indochina. 
This term is derived from Portuguese and French; it is rare in modern literature. Buddhist monks were once called talapoy or talapoin. The talapoin is a monkey named after Buddhist monks, just as the capuchin monkey is named after the Order of Friars Minor Capuchin (who are also the origin of the word cappuccino). Ordination Theravada Theravada monasticism is organized around the guidelines found within a division of the Pāli Canon called the Vinaya Pitaka. Laypeople undergo ordination as a novitiate (śrāmaṇera or sāmanera) in a rite known as the "going forth" (Pali: pabbajja). Sāmaneras are subject to the Ten Precepts. From there, full ordination (Pali: upasampada) may take place. Bhikkhus are subject to a much longer set of rules known as the Pātimokkha (Theravada) or Prātimokṣa (Mahayana and Vajrayana). Mahayana In the Mahayana, monasticism is part of the system of "vows of individual liberation". These vows are taken by monks and nuns from the ordinary sangha in order to develop personal ethical discipline. In Mahayana and Vajrayana, the term "sangha" is, in principle, often understood to refer particularly to the aryasangha, the "community of the noble ones who have reached the first bhūmi". These, however, need not be monks and nuns. The vows of individual liberation are taken in four steps. A lay person may take the five upāsaka and upāsikā vows ("approaching virtue"). The next step is to enter the pabbajja or monastic way of life (Skt: pravrajyā), which includes wearing monk's or nun's robes. After that, one can become a samanera or samaneri, a "novice" (Skt. śrāmaṇera, śrāmaṇeri). The final step is to take all the vows of a bhikkhu or bhikkhuni, a "fully ordained monastic" (Sanskrit: bhikṣu, bhikṣuṇī). Monastics take their vows for life but can renounce them and return to non-monastic life, and may even take the vows again later. 
A person can take them up to three times or seven times in one life, depending on the particular practices of each school of discipline; after that, the sangha should not accept them again. In this way, Buddhism keeps the vows "clean". It is possible to keep them or to leave this lifestyle, but it is considered extremely negative to break these vows. In 9th-century Japan, the monk Saichō believed the 250 precepts were for the Śrāvakayāna and that ordination should instead use the Mahayana precepts of the Brahmajala Sutra. He stipulated that monastics remain on Mount Hiei for twelve years of isolated training and follow the major themes of the 250 precepts: celibacy, non-harming, no intoxicants, vegetarian eating and reducing labor for gain. After twelve years, monastics would then use the Vinaya precepts as a provisional or supplemental guideline to conduct themselves by when serving in non-monastic communities. Tendai monastics followed this practice. During Japan's Meiji Restoration in the 1870s, the government abolished celibacy and vegetarianism for Buddhist monastics in an effort to secularise them and promote the newly created State Shinto. Japanese Buddhists won the right to proselytize inside cities, ending a five-hundred-year ban on clergy members entering cities. Currently, priests (lay religious leaders) in Japan choose to observe vows as appropriate to their family situation; celibacy and other forms of abstinence are generally "at will" for varying periods of time. After the Japan–Korea Treaty of 1910, when Japan annexed Korea, Korean Buddhism underwent many changes. Nichiren and other Japanese schools began sending missionaries to Korea under Japanese rule, and new sects formed there, such as Won Buddhism. 
The Temple Ordinance of 1911 changed the traditional system whereby temples were run as a collective enterprise by the Sangha, replacing it with Japanese-style management practices in which temple abbots, appointed by the Governor-General of Korea, were given private ownership of temple property and the right to pass such property on by inheritance. More importantly, monks from pro-Japanese factions began to adopt Japanese practices, marrying and having children. In Korea, the practice of celibacy varies. The two sects of Korean Seon divided in 1970 over this issue; the Jogye Order is fully celibate, while the Taego Order has both celibate monastics and non-celibate Japanese-style priests. Vajrayana In Tibet, the upāsaka, pravrajyā and bhikṣu ordinations are usually taken at ages six, fourteen and twenty-one or older, respectively. Tibetan Vajrayana often calls ordained monks lamas. Additional vows in the Mahayana and Vajrayana traditions In Mahayana traditions, a bhikṣu may take additional vows not related to ordination, including the Bodhisattva vows, samaya vows and others, which are also open to laypersons in most instances. Robes The special dress of ordained people, referred to in English as robes, comes from the idea of wearing simple, durable protection for the body from weather and climate. In each tradition, there is uniformity in the color and style of dress. Color is often chosen due to the wider availability of certain pigments in a given geographical region. In Tibet and the Himalayan regions (Kashmir, Nepal and Bhutan), red is the preferred pigment used in dyeing the robes; in Myanmar, reddish brown; in India, Sri Lanka and South-East Asia, various shades of yellow, ochre and orange prevail; in China, Korea, Japan and Vietnam, gray or black is common. Monks often make their own robes from cloth that is donated to them. 
The robes of Tibetan novices and monks differ in various respects, especially in the application of "holes" in the dress of monks. Some monks tear their robes into pieces and then mend these pieces together again. Upāsakas cannot wear the "chö-göö", a yellow cloth worn during teachings by both novices and full monks. In observance of the Kathina Puja, a special Kathina robe is made in 24 hours from donations by lay supporters of a temple. The robe is donated to the temple or monastery, and the resident monks then select from their own number a single monk to receive this special robe. Gallery See also Bhante Sayadaw Ajahn Samanera Oshō Anagarika Bhikkhuni Unsui References Sources Further reading Inwood, Kristiaan. Bhikkhu, Disciple of the Buddha. Bangkok, Thailand: Thai Watana Panich, 1981. Revised edition. Bangkok: Orchid Press, 2005. External links The Buddhist Monk's Discipline Some Points Explained for Laypeople Thirty Years as a Western Buddhist Monk Buddhist titles Sanskrit words and phrases Titles and occupations in Hinduism
Vigrahapala was a 9th-century ruler of the Pala dynasty in the Bengal region of the Indian subcontinent. He was the sixth Pala emperor and reigned for a brief period before becoming an ascetic. Vigrahapala was a grandson of Dharmapala's younger brother Vakapala and the son of Jayapala. He was succeeded by his son, Narayanapala. Ancestry Historians previously believed that Shurapala and Vigrahapala were two names of the same person. However, the discovery of a copper plate in 1970 in the Mirzapur district conclusively established that the two were cousins. They either ruled simultaneously (perhaps over different territories) or in rapid succession. If they ruled in succession, it seems more likely that Shurapala preceded Vigrahapala, since Vigrahapala I and his descendants ruled in unbroken succession. Vigrahapala either dethroned Shurapala or replaced him peacefully in the absence of any direct heir to the throne. Information about him and his ancestors is found in the Bhagalpur copper-plate inscription of his son, Narayanapala. Reign Based on differing interpretations of the various epigraphs and historical records, historians have estimated Vigrahapala's reign differently. Vigrahapala was of peaceful disposition, and abdicated the throne in favour of his son Narayanapala. See also List of rulers of Bengal References Pala kings Year of birth unknown Year of death unknown
Kimberly Hyacinthe is a Canadian athlete specializing in sprinting events. She competed in the 200 meters at the 2011 World Championships in Athletics without advancing to the semifinals. Hyacinthe was born in Terrebonne, Quebec. In 2013, she won the gold medal in the 200 meters at the 2013 Summer Universiade. In July 2016 she was officially named to Canada's Olympic team. Personal life Born in Montreal, Quebec, Hyacinthe is of Haitian descent. Competition record Personal bests Outdoor 100 metres – 11.31 (+1.6) (Edmonton 2015) 200 metres – 22.78 (+1.6) (Kazan 2013) 400 metres – 55.71 (Montreal 2009) Indoor 60 metres – 7.29 (Montréal 2014) 200 metres – 23.79 (New York 2011) References External links 1989 births Living people Canadian female sprinters Black Canadian female track and field athletes Haitian Quebecers Canadian people of Haitian descent People from Terrebonne, Quebec Sportspeople from Lanaudière Sportspeople from Quebec Commonwealth Games competitors for Canada Athletes (track and field) at the 2014 Commonwealth Games Athletes (track and field) at the 2015 Pan American Games Pan American Games bronze medalists for Canada Pan American Games medalists in athletics (track and field) World Athletics Championships athletes for Canada Athletes (track and field) at the 2016 Summer Olympics Olympic track and field athletes for Canada Universiade medalists in athletics (track and field) FISU World University Games gold medalists for Canada Canadian Track and Field Championships winners Medalists at the 2009 Summer Universiade Medalists at the 2013 Summer Universiade Medalists at the 2015 Pan American Games
Daughter of the Tong is a 1939 crime film about a detective who goes up against the female leader of an Oriental crime ring. Plot Ralph Dickson is an FBI agent assigned to investigate the killing of a colleague. He is chosen to investigate due to his uncanny likeness to the presumed killer. Dickson goes undercover and learns the identity of the gang leader, Carney, who is also known as "the Illustrious One" and the "Daughter of the Tong". Carney stays holed up at the Oriental Hotel while her henchmen do her dirty work. Cast Evelyn Brent as Carney - The Illustrious One Grant Withers as Ralph Dickson Dorothy Short as Marion Morgan Dave O'Brien as Jerry Morgan Richard Loo as Wong, the hotel clerk Dirk Thane as Henchman Ward Harry Harvey as Harold 'Mugsy' Winthrop Budd Buster as 'Lefty' McMillan Robert Frazer as FBI Chief Williams Hal Taliaferro as FBI Agent Lawson Distributors Times Exchange (1939) (USA) (theatrical) Reel Media International (2004) (worldwide) (VHS) Alpha Video Distributors (June 28, 2005) (USA) (DVD) Mill Creek Entertainment (2007) (USA) (DVD) Reel Media International (2007) (non-USA) (all media) References External links 1939 films 1939 crime films American black-and-white films American crime films Films directed by Raymond K. Johnson Films with screenplays by George H. Plympton 1930s English-language films 1930s American films English-language crime films
```typescript
import { IYammerProvider } from '../yammer/IYammerProvider';

export interface IReactYammerApiProps {
  yammer: IYammerProvider;
  defaultSearchQuery: string;
  strings: IReactYammerApiStrings;
}
```
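The props interface above references `IYammerProvider` and `IReactYammerApiStrings` without showing their definitions, which live elsewhere in the project. A minimal sketch of how the props might be assembled, using hypothetical shapes for both referenced interfaces (the field names are illustrative assumptions, not the project's actual definitions):

```typescript
// Hypothetical shapes — the real IYammerProvider and strings interfaces
// are defined elsewhere in the project; these are illustrative only.
interface IYammerProvider {
  search(query: string): Promise<string[]>;
}

interface IReactYammerApiStrings {
  searchPlaceholder: string;
  noResults: string;
}

interface IReactYammerApiProps {
  yammer: IYammerProvider;
  defaultSearchQuery: string;
  strings: IReactYammerApiStrings;
}

// A mock provider satisfying the assumed IYammerProvider contract.
const mockProvider: IYammerProvider = {
  search: async (query) => [`Yammer result for "${query}"`],
};

// Assembling a props object a consuming component could receive.
const props: IReactYammerApiProps = {
  yammer: mockProvider,
  defaultSearchQuery: "announcements",
  strings: {
    searchPlaceholder: "Search Yammer…",
    noResults: "No results found",
  },
};
```

Keeping the provider behind an interface like this lets the component be tested with a mock instead of a live Yammer connection.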
```yaml id: WildFire Malware version: -1 name: WildFire Malware description: |- This playbook handles WildFire Malware alerts. It performs enrichment on the different alert entities and establishes a verdict. For a possible true positive alert, the playbook performs further investigation for related IOCs and executes a containment plan. starttaskid: "0" tasks: "0": id: "0" taskid: 11a57176-6631-4746-8d87-2c8d5ac617b2 type: start task: id: 11a57176-6631-4746-8d87-2c8d5ac617b2 version: -1 name: "" iscommand: false brand: "" description: '' nexttasks: '#none#': - "110" separatecontext: false view: |- { "position": { "x": -310, "y": -1360 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "3": id: "3" taskid: d5eba40e-7195-481c-88d7-af5f92d29173 type: condition task: id: d5eba40e-7195-481c-88d7-af5f92d29173 version: -1 name: Was the malware prevented? (blocked) description: Is there a WildFire Post-Detection alert? 
type: condition iscommand: false brand: "" nexttasks: '#default#': - "121" "Yes": - "36" separatecontext: false conditions: - label: "Yes" condition: - - operator: containsGeneral left: value: simple: alert.action iscontext: true right: value: simple: PREVENTED - operator: isEqualString left: value: simple: alert.action iscontext: true right: value: simple: BLOCKED view: |- { "position": { "x": 170, "y": -100 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "25": id: "25" taskid: a9d14fb6-20de-4c18-85cf-f1dd7ad1d84d type: title task: id: a9d14fb6-20de-4c18-85cf-f1dd7ad1d84d version: -1 name: Investigation type: title iscommand: false brand: "" description: '' nexttasks: '#none#': - "122" separatecontext: false view: |- { "position": { "x": -140, "y": 1950 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "33": id: "33" taskid: e249e98d-3fb1-4355-8097-9b47893a41ad type: title task: id: e249e98d-3fb1-4355-8097-9b47893a41ad version: -1 name: Pre-Investigation Containment type: title iscommand: false brand: "" description: '' nexttasks: '#none#': - "131" separatecontext: false view: |- { "position": { "x": 730, "y": 430 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "36": id: "36" taskid: 3b500480-3ba3-4d88-8c70-0755876824d7 type: condition task: id: 3b500480-3ba3-4d88-8c70-0755876824d7 version: -1 name: Check WildFire type description: "Check WildFire alert type." 
type: condition iscommand: false brand: "" nexttasks: '#default#': - "127" Malware: - "126" separatecontext: false conditions: - label: Malware condition: - - operator: isEqualString left: value: complex: root: WildFire.Verdicts accessor: VerdictDescription transformers: - operator: toLowerCase iscontext: true right: value: simple: malware view: |- { "position": { "x": 170, "y": 940 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "46": id: "46" taskid: 2d31a492-5195-4c55-89e1-5a3a4f72a469 type: title task: id: 2d31a492-5195-4c55-89e1-5a3a4f72a469 version: -1 name: False Positive Alert type: title iscommand: false brand: "" description: '' nexttasks: '#none#': - "100" separatecontext: false view: |- { "position": { "x": -1260, "y": -730 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "65": id: "65" taskid: 97186ec1-fbcf-4ac0-8d77-b0e6e1e16f02 type: condition task: id: 97186ec1-fbcf-4ac0-8d77-b0e6e1e16f02 version: -1 name: Should report alert to WildFire and handle as False Positive? description: "Should report alert to WildFire and handle as False Positive?" 
type: condition iscommand: false brand: "" nexttasks: '#default#': - "124" "yes": - "105" separatecontext: false conditions: - label: "yes" condition: - - operator: isEqualString left: value: complex: root: inputs.AutoMarkFP transformers: - operator: toLowerCase iscontext: true right: value: simple: "true" view: |- { "position": { "x": -1260, "y": -235 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "66": id: "66" taskid: 68e3dfd4-484f-41b7-854c-65ba29772bcc type: condition task: id: 68e3dfd4-484f-41b7-854c-65ba29772bcc version: -1 name: Manual - Review and handle alert description: "Manual - Review and handle alert." type: condition iscommand: false brand: "" nexttasks: '#default#': - "70" Allow list: - "68" Block list: - "69" separatecontext: false view: |- { "position": { "x": 730, "y": 1430 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "68": id: "68" taskid: 31672110-c4e9-406c-81cb-23406c8b6f0b type: regular task: id: 31672110-c4e9-406c-81cb-23406c8b6f0b version: -1 name: Add hash to Allowed List description: Adds requested files to allow list if they are not already on block list or allow list. script: '|||core-allowlist-files' type: regular iscommand: true brand: "" nexttasks: '#none#': - "70" scriptarguments: comment: simple: Added by Cortex XSIAM. 
hash_list: complex: root: inputs.sha256 separatecontext: false view: |- { "position": { "x": 1060, "y": 1610 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "69": id: "69" taskid: 4cc55fc7-ac15-48c1-8132-d49a04088e57 type: regular task: id: 4cc55fc7-ac15-48c1-8132-d49a04088e57 version: -1 name: Add hash to Blocked List description: Adds requested files to the block list if they are not already on the block list or allow list. script: '|||core-blocklist-files' type: regular iscommand: true brand: "" nexttasks: '#none#': - "70" scriptarguments: comment: simple: Added by Cortex XSIAM. hash_list: complex: root: inputs.sha256 separatecontext: false view: |- { "position": { "x": 410, "y": 1610 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "70": id: "70" taskid: 6a8efaa2-3376-45d1-8afe-c4800887730c type: condition task: id: 6a8efaa2-3376-45d1-8afe-c4800887730c version: -1 name: Should investigate further? description: "Should investigate further?" 
type: condition iscommand: false brand: "" nexttasks: '#default#': - "117" "yes": - "25" separatecontext: false view: |- { "position": { "x": 730, "y": 1780 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "84": id: "84" taskid: e6a5956d-d3e5-496b-8178-ee8e15e90f8e type: title task: id: e6a5956d-d3e5-496b-8178-ee8e15e90f8e version: -1 name: Done type: title iscommand: false brand: "" description: '' separatecontext: false view: |- { "position": { "x": -1260, "y": 3810 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "95": id: "95" taskid: b23bee88-5564-4714-8763-b4a58be43dc1 type: title task: id: b23bee88-5564-4714-8763-b4a58be43dc1 version: -1 name: Remediation type: title iscommand: false brand: "" description: '' nexttasks: '#none#': - "98" separatecontext: false view: |- { "position": { "x": -400, "y": 2650 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "98": id: "98" taskid: fe3720ee-c866-4565-8228-df2a038ebca7 type: playbook task: id: fe3720ee-c866-4565-8228-df2a038ebca7 version: -1 name: Containment Plan description: |- This playbook handles all the containment actions available with Cortex XSIAM, including the following tasks: * Isolate endpoint * Disable account * Quarantine file * Block indicators * Clear user session (currently, the playbook supports only Okta) Note: The playbook inputs enable manipulating the execution flow; read the input descriptions for details. 
playbookName: Containment Plan type: playbook iscommand: false brand: "" nexttasks: '#none#': - "117" scriptarguments: AutoContainment: complex: root: inputs.AutoContainment BlockIndicators: complex: root: inputs.BlockIndicators ClearUserSessions: simple: "False" EndpointID: complex: root: alert accessor: agentid FileContainment: complex: root: inputs.RelatedFileContainment FileHash: complex: root: alert transformers: - operator: If-Then-Else args: condition: value: simple: lhs!=rhs conditionB: {} conditionInBetween: {} else: value: simple: foundIncidents.CustomFields.initiatorsha256 iscontext: true equals: {} lhs: value: simple: alert.filesha256 iscontext: true lhsB: {} options: {} optionsB: {} rhs: {} rhsB: {} then: value: simple: foundIncidents.CustomFields.filesha256 iscontext: true FilePath: complex: root: alert transformers: - operator: If-Then-Else args: condition: value: simple: lhs!=rhs conditionB: {} conditionInBetween: {} else: value: simple: foundIncidents.CustomFields.initiatorpath iscontext: true equals: {} lhs: value: simple: alert.filesha256 iscontext: true lhsB: {} options: {} optionsB: {} rhs: {} rhsB: {} then: value: simple: foundIncidents.CustomFields.filepath iscontext: true FileRemediation: complex: root: inputs.FileRemediation HostContainment: complex: root: inputs.HostAutoContainment IAMUserDomain: simple: '' UserContainment: simple: "False" UserVerification: simple: "False" separatecontext: true loop: iscommand: false scriptArguments: BlockIndicators: simple: "True" ContainmentType: simple: Auto EndpointContainment: simple: "False" FileContainment: simple: "True" ScheduledTaskConatinment: simple: "True" UserContainment: simple: "True" exitCondition: "" wait: 1 max: 100 view: |- { "position": { "x": -400, "y": 2800 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "100": id: "100" taskid: ff411158-cffb-487c-8407-18ada6a4c1b4 
type: regular task: id: ff411158-cffb-487c-8407-18ada6a4c1b4 version: -1 name: 'WildFire report - Review identified characteristics' description: "WildFire report - Review identified characteristics" type: regular iscommand: false brand: "" nexttasks: '#none#': - "65" separatecontext: false view: |- { "position": { "x": -1260, "y": -415 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "105": id: "105" taskid: b14c75d0-887e-4c7b-81b7-3b63317b95db type: regular task: id: b14c75d0-887e-4c7b-81b7-3b63317b95db version: -1 name: Report False Positive to WildFire description: Reports the false positive to WildFire through Cortex XDR. script: '|||core-report-incorrect-wildfire' type: regular iscommand: true brand: "" nexttasks: '#none#': - "111" scriptarguments: email: complex: root: inputs.EmailAddress file_hash: complex: root: inputs.sha256 new_verdict: simple: "0" reason: simple: Marked as False Positive in a Cortex XSIAM investigation. separatecontext: false view: |- { "position": { "x": -1260, "y": 110 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "106": id: "106" taskid: 377be20d-9ffd-4b2d-81d1-11711fc1a769 type: condition task: id: 377be20d-9ffd-4b2d-81d1-11711fc1a769 version: -1 name: Check hash execution timestamp description: "Check hash execution timestamp." 
type: condition iscommand: false brand: "" nexttasks: '#default#': - "36" 24H: - "33" separatecontext: false conditions: - label: 24H condition: - - operator: greaterThan left: value: simple: alert.autime iscontext: true right: value: simple: TimeNowUnix iscontext: true view: |- { "position": { "x": 530, "y": 260 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "110": id: "110" taskid: 5b78e324-ed38-4740-8481-3c44a317b848 type: title task: id: 5b78e324-ed38-4740-8481-3c44a317b848 version: -1 name: Verdict type: title iscommand: false brand: "" description: '' nexttasks: '#none#': - "119" separatecontext: false view: |- { "position": { "x": -310, "y": -1210 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "111": id: "111" taskid: d9f34c1e-5a36-48e3-8b2c-29bbcd5d465e type: playbook task: id: d9f34c1e-5a36-48e3-8b2c-29bbcd5d465e version: -1 name: Handle False Positive Alerts description: | This playbook handles false positive alerts. 
playbookName: Handle False Positive Alerts type: playbook iscommand: false brand: "" nexttasks: '#none#': - "84" scriptarguments: FileSHA256: complex: root: inputs.sha256 ShouldCloseAutomatically: complex: root: inputs.ShouldCloseAutomatically alertName: complex: root: alert accessor: name sourceIP: complex: root: alert accessor: hostip username: complex: root: alert accessor: username separatecontext: true loop: iscommand: false exitCondition: "" wait: 1 max: 100 view: |- { "position": { "x": -1260, "y": 295 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "112": id: "112" taskid: 467cd753-37e5-4118-8962-2c0bfefedbb9 type: regular task: id: 467cd753-37e5-4118-8962-2c0bfefedbb9 version: -1 name: close alert description: Close the alert. script: Builtin|||closeInvestigation type: regular iscommand: true brand: Builtin nexttasks: '#none#': - "84" scriptarguments: closeReason: simple: Resolved - Threat Handled separatecontext: false view: |- { "position": { "x": 730, "y": 3640 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "113": id: "113" taskid: 7d84c915-3451-4325-809d-8ea91a0134c1 type: condition task: id: 7d84c915-3451-4325-809d-8ea91a0134c1 version: -1 name: Should restore affected endpoint? description: "Should restore affected endpoint?" 
type: condition iscommand: false brand: "" nexttasks: '#default#': - "114" "yes": - "116" separatecontext: false conditions: - label: "yes" condition: - - operator: isEqualString left: value: complex: root: inputs.AutoRecovery transformers: - operator: toLowerCase iscontext: true right: value: simple: "true" view: |- { "position": { "x": 730, "y": 3115 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "114": id: "114" taskid: 86efbb93-4ae8-4f28-87ed-f9651e61c920 type: condition task: id: 86efbb93-4ae8-4f28-87ed-f9651e61c920 version: -1 name: Should close alert automatically? description: "Should close alert automatically?" type: condition iscommand: false brand: "" nexttasks: '#default#': - "84" "yes": - "112" separatecontext: false conditions: - label: "yes" condition: - - operator: isEqualString left: value: complex: root: inputs.ShouldCloseAutomatically transformers: - operator: toLowerCase iscontext: true right: value: simple: "true" view: |- { "position": { "x": 730, "y": 3460 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "116": id: "116" taskid: bf231072-4b35-4c62-81c2-d23c8d52faf4 type: playbook task: id: bf231072-4b35-4c62-81c2-d23c8d52faf4 version: -1 name: Recovery Plan description: |- This playbook handles all the recovery actions available with Cortex XSIAM, including the following tasks: * Unisolate endpoint * Restore quarantined file Note: The playbook inputs enable manipulating the execution flow; read the input descriptions for details. 
playbookName: Recovery Plan type: playbook iscommand: false brand: "" nexttasks: '#none#': - "114" scriptarguments: FileHash: complex: root: inputs.sha256 endpointID: complex: root: alert accessor: agentid releaseFile: simple: "false" unIsolateEndpoint: simple: "true" separatecontext: true loop: iscommand: false exitCondition: "" wait: 1 max: 0 view: |- { "position": { "x": 1060, "y": 3290 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "117": id: "117" taskid: 32f5a428-81f0-4bba-84f5-c67f27f14312 type: title task: id: 32f5a428-81f0-4bba-84f5-c67f27f14312 version: -1 name: Recovery type: title iscommand: false brand: "" description: '' nexttasks: '#none#': - "113" separatecontext: false view: |- { "position": { "x": 730, "y": 2970 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "119": id: "119" taskid: 8808fc83-42e2-46ba-8e7f-fb563844a57e type: playbook task: id: 8808fc83-42e2-46ba-8e7f-fb563844a57e version: -1 name: Enrichment for Verdict description: This playbook checks prior alert closing reasons and performs enrichment and prevalence checks on different IOC types. It then returns the information needed to establish the alert's verdict. 
playbookName: Enrichment for Verdict type: playbook iscommand: false brand: "" nexttasks: '#none#': - "123" scriptarguments: CloseReason: simple: Resolved - False Positive,Resolved - Duplicate Incident,Resolved - Known Issue Domain: complex: root: alert accessor: domainname FileSHA256: complex: root: inputs.sha256 IP: complex: root: alert accessor: hostip URL: complex: root: alert accessor: url User: complex: root: alert accessor: username query: complex: root: inputs.Query threshold: simple: "5" separatecontext: true loop: iscommand: false exitCondition: "" wait: 1 max: 100 view: |- { "position": { "x": -310, "y": -1070 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "120": id: "120" taskid: 91549068-a885-437b-878f-f3b77bf310e7 type: playbook task: id: 91549068-a885-437b-878f-f3b77bf310e7 version: -1 name: Containment Plan description: |- This playbook handles all the containment actions available with Cortex XSIAM, including the following tasks: * Isolate endpoint * Disable account * Quarantine file * Block indicators * Clear user session (currently, the playbook supports only Okta) Note: The playbook inputs enable manipulating the execution flow; read the input descriptions for details. 
playbookName: Containment Plan type: playbook iscommand: false brand: "" nexttasks: '#none#': - "36" scriptarguments: AutoContainment: simple: "True" BlockIndicators: simple: "False" ClearUserSessions: simple: "False" EndpointID: complex: root: alert accessor: agentid FileContainment: complex: root: inputs.OriginalFileContainment FileHash: complex: root: inputs.sha256 FilePath: complex: root: alert accessor: filepath FileRemediation: complex: root: inputs.FileRemediation HostAutoContainment: simple: "False" IAMUserDomain: simple: '' UserContainment: simple: "False" separatecontext: true loop: iscommand: false exitCondition: "" wait: 1 max: 100 view: |- { "position": { "x": 730, "y": 770 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "121": id: "121" taskid: 7cae09e9-4e9f-4429-8bb9-0c8948b300a6 type: regular task: id: 7cae09e9-4e9f-4429-8bb9-0c8948b300a6 version: -1 name: Get time for the last day description: | Retrieves the current date and time. 
scriptName: GetTime type: regular iscommand: false brand: "" nexttasks: '#none#': - "106" scriptarguments: contextKey: simple: LastDay daysAgo: simple: "1" separatecontext: false view: |- { "position": { "x": 530, "y": 70 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "122": id: "122" taskid: 3b5bd7f4-a3b2-4482-83fd-1384d3752d89 type: playbook task: id: 3b5bd7f4-a3b2-4482-83fd-1384d3752d89 version: -1 name: Endpoint Investigation Plan description: |- This playbook handles all the endpoint investigation actions available with Cortex XSIAM, including the following tasks: * Pre-defined MITRE Tactics * Host fields (Host ID) * Attacker fields (Attacker IP, External host) * MITRE techniques * File hash (currently, the playbook supports only SHA256) Note: The playbook inputs enable manipulating the execution flow; read the input descriptions for details. playbookName: Endpoint Investigation Plan type: playbook iscommand: false brand: "" nexttasks: '#none#': - "133" scriptarguments: HuntCnCTechniques: simple: "True" HuntCollectionTechniques: simple: "True" HuntDefenseEvasionTechniques: simple: "True" HuntDiscoveryTechniques: simple: "True" HuntExecutionTechniques: simple: "True" HuntImpactTechniques: simple: "True" HuntInitialAccessTechniques: simple: "True" HuntLateralMovementTechniques: simple: "True" HuntPersistenceTechniques: simple: "True" HuntPrivilegeEscalationTechniques: simple: "True" HuntReconnaissanceTechniques: simple: "True" agentID: complex: root: alert accessor: agentid separatecontext: true loop: iscommand: false exitCondition: "" wait: 1 max: 100 view: |- { "position": { "x": -140, "y": 2100 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "123": id: "123" taskid: 790be732-ca45-4102-84d4-cab60749cf0d type: condition task: id: 
790be732-ca45-4102-84d4-cab60749cf0d version: -1 name: Establish verdict description: "Establish verdict for the alert." type: condition iscommand: false brand: "" nexttasks: '#default#': - "132" False Positive: - "46" Possible False Positive: - "129" separatecontext: false conditions: - label: False Positive condition: - - operator: isEqualString left: value: simple: PreviousVerdict iscontext: true right: value: simple: False Positive - label: Possible False Positive condition: - - operator: isNotEqualString left: value: simple: FileVerdict iscontext: true right: value: simple: Suspicious - - operator: containsGeneral left: value: complex: root: Core.AnalyticsPrevalence.Hash accessor: value iscontext: true right: value: simple: "true" ignorecase: true view: |- { "position": { "x": -310, "y": -900 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "124": id: "124" taskid: 4aff2497-2e70-4c17-8f49-761f71907f5f type: condition task: id: 4aff2497-2e70-4c17-8f49-761f71907f5f version: -1 name: Manual - Mark alert as False Positive? description: "Manual - Mark alert as False Positive?" 
type: condition iscommand: false brand: "" nexttasks: '#default#': - "25" "Yes": - "105" separatecontext: false view: |- { "position": { "x": -930, "y": -65 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "126": id: "126" taskid: 58d68deb-99a8-445e-89a4-ef6a3cc5f0bc type: title task: id: 58d68deb-99a8-445e-89a4-ef6a3cc5f0bc version: -1 name: Malware type: title iscommand: false brand: "" description: '' nexttasks: '#none#': - "25" separatecontext: false view: |- { "position": { "x": -140, "y": 1110 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "127": id: "127" taskid: bb4343c4-ee10-4a3c-8805-3a086cc29f9b type: title task: id: bb4343c4-ee10-4a3c-8805-3a086cc29f9b version: -1 name: Grayware and Phishing type: title iscommand: false brand: "" description: '' nexttasks: '#none#': - "128" separatecontext: false view: |- { "position": { "x": 430, "y": 1110 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "128": id: "128" taskid: be0aa583-fbf8-4740-864b-0f9dac027680 type: condition task: id: be0aa583-fbf8-4740-864b-0f9dac027680 version: -1 name: Should treat grayware and phishing as malware? description: "Should treat grayware and phishing as malware?" 
type: condition iscommand: false brand: "" nexttasks: '#default#': - "66" "yes": - "25" separatecontext: false conditions: - label: "yes" condition: - - operator: isEqualString left: value: complex: root: inputs.GraywarePhishingAsMalware transformers: - operator: toLowerCase iscontext: true right: value: simple: "true" view: |- { "position": { "x": 430, "y": 1260 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "129": id: "129" taskid: 50b6ea0b-3d7e-4427-8893-764c3a7afe27 type: title task: id: 50b6ea0b-3d7e-4427-8893-764c3a7afe27 version: -1 name: Possible False Positive type: title iscommand: false brand: "" description: '' nexttasks: '#none#': - "130" separatecontext: false view: |- { "position": { "x": -480, "y": -730 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "130": id: "130" taskid: 8c62b789-f058-48cf-8804-f805b639f731 type: condition task: id: 8c62b789-f058-48cf-8804-f805b639f731 version: -1 name: Manual Review - Should continue to investigate? description: "Manual Review - Should continue to investigate?" type: condition iscommand: false brand: "" nexttasks: '#default#': - "100" "Yes": - "36" separatecontext: false view: |- { "position": { "x": -480, "y": -590 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "131": id: "131" taskid: d5ab1ed4-53c0-4885-8e89-a0ccc84a057b type: condition task: id: d5ab1ed4-53c0-4885-8e89-a0ccc84a057b version: -1 name: Is auto-containment set to true? description: "Is auto-containment set to true?" 
type: condition iscommand: false brand: "" nexttasks: '#default#': - "36" "yes": - "120" separatecontext: false conditions: - label: "yes" condition: - - operator: isEqualString left: value: complex: root: inputs.AutoContainment transformers: - operator: toLowerCase iscontext: true right: value: simple: "true" view: |- { "position": { "x": 730, "y": 570 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "132": id: "132" taskid: 535e2207-e112-4fe9-8d43-9686af287db7 type: title task: id: 535e2207-e112-4fe9-8d43-9686af287db7 version: -1 name: Possible True Positive type: title iscommand: false brand: "" description: '' nexttasks: '#none#': - "134" separatecontext: false view: |- { "position": { "x": 170, "y": -730 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "133": id: "133" taskid: 10dae87a-97f4-43fd-8196-14386596e581 type: condition task: id: 10dae87a-97f4-43fd-8196-14386596e581 version: -1 name: Are there investigation findings? description: "Are there investigation findings?" 
type: condition iscommand: false brand: "" nexttasks: '#default#': - "117" "yes": - "135" separatecontext: false conditions: - label: "yes" condition: - - operator: isNotEmpty left: value: complex: root: foundIncidents iscontext: true view: |- { "position": { "x": -140, "y": 2280 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false continueonerrortype: "" "134": id: "134" taskid: 0d11e289-2dc4-4eba-8d92-08335800fcf8 type: regular task: id: 0d11e289-2dc4-4eba-8d92-08335800fcf8 version: -1 name: Set Alert Severity to High description: commands.local.cmd.set.parent.alert.field script: Builtin|||setParentIncidentFields type: regular iscommand: true brand: Builtin nexttasks: '#none#': - "136" scriptarguments: manual_severity: simple: high separatecontext: false continueonerrortype: "" view: |- { "position": { "x": 170, "y": -600 } } note: false timertriggers: [] ignoreworker: false skipunavailable: true quietmode: 2 isoversize: false isautoswitchedtoquietmode: false "135": id: "135" taskid: b40b3fef-fdd3-4523-80a3-671fcfd4d630 type: regular task: id: b40b3fef-fdd3-4523-80a3-671fcfd4d630 version: -1 name: Set Alert Severity to High description: commands.local.cmd.set.parent.alert.field script: Builtin|||setParentIncidentFields type: regular iscommand: true brand: Builtin nexttasks: '#none#': - "95" scriptarguments: manual_severity: simple: high separatecontext: false continueonerrortype: "" view: |- { "position": { "x": -400, "y": 2475 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 2 isoversize: false isautoswitchedtoquietmode: false "136": id: "136" taskid: fceea6e8-768a-4a0a-8103-24549dad7f2a type: condition task: id: fceea6e8-768a-4a0a-8103-24549dad7f2a version: -1 name: Should open a ticket automatically in a ticketing system? description: Checks whether to open a ticket automatically in a ticketing system. 
type: condition iscommand: false brand: "" nexttasks: '#default#': - "3" "yes": - "137" separatecontext: false conditions: - label: "yes" condition: - - operator: isEqualString left: value: complex: root: inputs.ShouldOpenTicket iscontext: true right: value: simple: "True" ignorecase: true continueonerrortype: "" view: |- { "position": { "x": 170, "y": -440 } } note: false timertriggers: [] ignoreworker: false skipunavailable: false quietmode: 0 isoversize: false isautoswitchedtoquietmode: false "137": id: "137" taskid: 587f0a09-53b6-48fc-89bd-a76903215367 type: playbook task: id: 587f0a09-53b6-48fc-89bd-a76903215367 version: -1 name: Ticket Management - Generic description: "`Ticket Management - Generic` allows you to open new tickets or update comments to the existing ticket in the following ticketing systems:\n-ServiceNow \n-Zendesk \nusing the following sub-playbooks:\n-`ServiceNow - Ticket Management`\n-`Zendesk - Ticket Management`\n" playbookName: Ticket Management - Generic type: playbook iscommand: false brand: "" nexttasks: '#none#': - "3" scriptarguments: CommentToAdd: complex: root: inputs.CommentToAdd ZendeskAssigne: complex: root: inputs.ZendeskAssigne ZendeskCollaborators: complex: root: inputs.ZendeskCollaborators ZendeskPriority: complex: root: inputs.ZendeskPriority ZendeskRequester: complex: root: inputs.ZendeskRequester ZendeskStatus: complex: root: inputs.ZendeskStatus ZendeskSubject: complex: root: inputs.ZendeskSubject ZendeskTags: complex: root: inputs.ZendeskTags ZendeskType: complex: root: inputs.ZendeskType addCommentPerEndpoint: complex: root: inputs.addCommentPerEndpoint description: complex: root: inputs.description serviceNowAssignmentGroup: complex: root: inputs.serviceNowAssignmentGroup serviceNowCategory: complex: root: inputs.serviceNowCategory serviceNowImpact: complex: root: inputs.serviceNowImpact serviceNowSeverity: complex: root: inputs.serviceNowSeverity serviceNowShortDescription: complex: root: 
inputs.serviceNowShortDescription serviceNowTicketType: complex: root: inputs.serviceNowTicketType serviceNowUrgency: complex: root: inputs.serviceNowUrgency separatecontext: true continueonerrortype: "" loop: iscommand: false exitCondition: "" wait: 1 max: 100 view: |- { "position": { "x": 460, "y": -270 } } note: false timertriggers: [] ignoreworker: false skipunavailable: true quietmode: 0 isoversize: false isautoswitchedtoquietmode: false view: |- { "linkLabelsPosition": { "106_36_#default#": 0.4, "113_116_yes": 0.42, "114_84_#default#": 0.17, "124_25_#default#": 0.5, "128_25_yes": 0.13, "128_66_#default#": 0.59, "130_36_Yes": 0.47, "133_117_#default#": 0.29, "36_126_Malware": 0.61, "36_127_#default#": 0.52, "3_36_Yes": 0.31, "65_105_yes": 0.49, "66_68_Allow list": 0.5, "66_69_Block list": 0.45, "66_70_#default#": 0.53, "70_25_yes": 0.23 }, "paper": { "dimensions": { "height": 5235, "width": 2700, "x": -1260, "y": -1360 } } } inputs: - key: sha256 value: complex: root: alert transformers: - operator: DT args: dt: value: simple: .=pickvalue(val);function pickvalue(x){if(x.initiatorsha256){return x.initiatorsha256} else {return x.filesha256}} required: false description: The SHA256 hash of the suspected file. The DT expression decides whether it is the initiator or the target file SHA256. playbookInputQuery: - key: GraywarePhishingAsMalware value: simple: "true" required: false description: Whether to treat grayware and phishing alerts as malware. playbookInputQuery: - key: AutoContainment value: simple: "true" required: false description: |- Whether to execute the containment plan (except isolation) automatically. The specific containment playbook inputs should also be set to 'True'. playbookInputQuery: - key: HostAutoContainment value: simple: "true" required: false description: Whether to automatically execute endpoint isolation in case there are investigation findings. 
playbookInputQuery: - key: BlockIndicators value: simple: "false" required: false description: Set to True if you want to block the indicators. playbookInputQuery: - key: OriginalFileContainment value: simple: "true" required: false description: Set to True if you want to quarantine the original malicious file. playbookInputQuery: - key: RelatedFileContainment value: simple: "true" required: false description: Set to True to quarantine the identified files found in the investigation. playbookInputQuery: - key: FileRemediation value: simple: Quarantine required: false description: "Choose 'Quarantine' or 'Delete' to avoid file remediation conflicts. \nFor example, choosing 'Quarantine' ignores the 'Delete file' task under the eradication playbook and executes only file quarantine." playbookInputQuery: - key: AutoMarkFP value: {} required: false description: Whether to automatically mark alerts that were found as benign by the 'Enrichment for Verdict' playbook and report false positive alerts to WildFire. True/False. playbookInputQuery: - key: EmailAddress value: {} required: false description: User's email address to use when reporting false positive alerts to WildFire. playbookInputQuery: - key: ShouldCloseAutomatically value: {} required: false description: Whether to automatically close the alert after investigation and remediation are finished. True/False. playbookInputQuery: - key: AutoRecovery value: {} required: false description: Whether to execute the Recovery playbook after the investigation and remediation are finished. True/False. 
playbookInputQuery: - key: Query value: complex: root: alert transformers: - operator: If-Then-Else args: condition: value: simple: lhs!=rhs conditionB: {} conditionInBetween: {} else: value: simple: ${alert= 'initiatorsha256:"' + val.initiatorsha256 + '" and sourceBrand:"' + val.sourceBrand + '" and name:"' + val.name + '"'} equals: {} lhs: value: simple: alert.filesha256 iscontext: true lhsB: {} options: {} optionsB: {} rhs: {} rhsB: {} then: value: simple: ${alert= '(filesha256:"' + val.filesha256 + '" and sourceBrand:"' + val.sourceBrand + '" and name:"' + val.name + '"'} required: false description: The query for searching previous alerts based on the file we want to respond to. Decided by the If-Then-Else expression wether it's the initiator or the target file. playbookInputQuery: - key: ShouldOpenTicket value: simple: "False" required: false description: Whether to open a ticket automatically in a ticketing system. (True/False). playbookInputQuery: - key: serviceNowShortDescription value: simple: XSIAM Incident ID - ${parentIncidentFields.incident_id} required: false description: A short description of the ticket. playbookInputQuery: - key: serviceNowImpact value: {} required: false description: The impact for the new ticket. Leave empty for ServiceNow default impact. playbookInputQuery: - key: serviceNowUrgency value: {} required: false description: The urgency of the new ticket. Leave empty for ServiceNow default urgency. playbookInputQuery: - key: serviceNowSeverity value: {} required: false description: The severity of the new ticket. Leave empty for ServiceNow default severity. playbookInputQuery: - key: serviceNowTicketType value: {} required: false description: The ServiceNow ticket type. Options are "incident", "problem", "change_request", "sc_request", "sc_task", or "sc_req_item". Default is "incident". playbookInputQuery: - key: serviceNowCategory value: {} required: false description: The category of the ServiceNow ticket. 
playbookInputQuery: - key: serviceNowAssignmentGroup value: {} required: false description: The group to which to assign the new ticket. playbookInputQuery: - key: ZendeskPriority value: {} required: false description: The urgency with which the ticket should be addressed. Allowed values are "urgent", "high", "normal", or "low". playbookInputQuery: - key: ZendeskRequester value: {} required: false description: The user who requested this ticket. playbookInputQuery: - key: ZendeskStatus value: {} required: false description: The state of the ticket. Allowed values are "new", "open", "pending", "hold", "solved", or "closed". playbookInputQuery: - key: ZendeskSubject value: simple: XSIAM Incident ID - ${parentIncidentFields.incident_id} required: false description: The value of the subject field for this ticket. playbookInputQuery: - key: ZendeskTags value: {} required: false description: The array of tags applied to this ticket. playbookInputQuery: - key: ZendeskType value: {} required: false description: The type of this ticket. Allowed values are "problem", "incident", "question", or "task". playbookInputQuery: - key: ZendeskAssigne value: {} required: false description: The agent currently assigned to the ticket. playbookInputQuery: - key: ZendeskCollaborators value: {} required: false description: The users currently CC'ed on the ticket. playbookInputQuery: - key: description value: simple: ${parentIncidentFields.description}. ${parentIncidentFields.xdr_url} required: false description: The ticket description. playbookInputQuery: - key: addCommentPerEndpoint value: simple: "True" required: false description: 'Whether to append a new comment to the ticket for each endpoint in the incident. Possible values: True/False.' playbookInputQuery: - key: CommentToAdd value: simple: '${alert.name}. Alert ID: ${alert.id}' required: false description: Comment for the ticket. 
playbookInputQuery: inputSections: - inputs: - EmailAddress - ShouldCloseAutomatically - AutoMarkFP name: Alert Management description: Alert management settings and data, including escalation processes, and user engagements. - inputs: - Query - sha256 name: Enrichment description: Enrichment settings and data, including assets and indicators enrichment using third-party enrichers. - inputs: - GraywarePhishingAsMalware name: Investigation description: Investigation settings and data, including any deep dive alert investigation and verdict determination. - inputs: - AutoContainment - BlockIndicators - HostAutoContainment - OriginalFileContainment - RelatedFileContainment - FileRemediation - AutoRecovery name: Remediation description: Remediation settings and data, including containment, eradication, and recovery. - inputs: - ShouldOpenTicket - serviceNowShortDescription - serviceNowImpact - serviceNowUrgency - serviceNowSeverity - serviceNowTicketType - serviceNowCategory - serviceNowAssignmentGroup - ZendeskPriority - ZendeskRequester - ZendeskStatus - ZendeskSubject - ZendeskTags - ZendeskType - ZendeskAssigne - ZendeskCollaborators - description - addCommentPerEndpoint - CommentToAdd name: Ticket Management description: Ticket management settings and data. outputSections: - outputs: [] name: General (Outputs group) description: Generic group for outputs outputs: [] tests: - Test Playbook - WildFire Malware marketplaces: ["marketplacev2"] fromversion: 6.6.0 contentitemexportablefields: contentitemfields: {} ```
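The two expression-heavy inputs above (`sha256` and `Query`) are easier to read outside the YAML. The following is a minimal JavaScript sketch of the logic they encode; the flat `alert` object, and the function names `pickSha256` and `buildQuery`, are assumptions for illustration, not part of the playbook's actual context structure.

```javascript
function pickSha256(alert) {
  // Mirrors the DT expression's pickvalue(): prefer the initiator process
  // hash, and fall back to the target file hash.
  return alert.initiatorsha256 ? alert.initiatorsha256 : alert.filesha256;
}

function buildQuery(alert) {
  // Mirrors the If-Then-Else transformer: the condition lhs!=rhs compares
  // alert.filesha256 against an empty rhs, so a present filesha256 selects
  // the "then" branch (its leading '(' is copied verbatim from the source).
  if (alert.filesha256) {
    return '(filesha256:"' + alert.filesha256 + '" and sourceBrand:"' +
      alert.sourceBrand + '" and name:"' + alert.name + '"';
  }
  return 'initiatorsha256:"' + alert.initiatorsha256 + '" and sourceBrand:"' +
    alert.sourceBrand + '" and name:"' + alert.name + '"';
}
```

In other words, the playbook responds to whichever hash the alert actually carries, and builds its previous-alerts search query from that same hash plus the alert's source brand and name.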
The Lego Movie is a 2014 animated adventure comedy film co-produced by Warner Animation Group, Village Roadshow Pictures, Lego System A/S, Vertigo Entertainment, and Lin Pictures, and distributed by Warner Bros. Pictures. It was written and directed by Phil Lord and Christopher Miller from a story they co-wrote with Dan and Kevin Hageman, based on the Lego line of construction toys. The film stars the voices of Chris Pratt, Will Ferrell, Elizabeth Banks, Will Arnett, Nick Offerman, Alison Brie, Charlie Day, Liam Neeson, and Morgan Freeman. A collaboration between production houses from the United States, Australia, and Denmark, its story focuses on Emmet Brickowski (Pratt), an ordinary Lego minifigure who helps a resistance movement stop a tyrannical businessman (Ferrell) from gluing everything in the Lego world into his vision of perfection.

Plans for a feature film based on Lego started in 2008 following a discussion between producers Dan Lin and Roy Lee, before Lin left Warner Bros. to form his own production company, Lin Pictures. By August 2009, it was announced that Dan and Kevin Hageman had begun writing the script. It was officially green-lit by Warner Bros. in November 2011 with a planned 2014 release date. Chris McKay was brought in to co-direct with Lord and Miller in 2011, and later became the film's animation supervisor. The film was strongly inspired by the visual aesthetic and style of brickfilms and qualities attributed to Lego Studios sets. While Lord and Miller wanted the film's animation to replicate a stop-motion film, everything was done through computer graphics, with the animation rigs following the same articulation limits actual Lego figures have. While primarily an animated film, it has several live-action scenes set in the real world.

Much of the cast signed on to voice the characters in 2012, including Pratt, Ferrell, Banks, Arnett, Freeman, and Brie, while the animation was provided by Animal Logic, which was expected to comprise 80% of the film. The film was dedicated to Kathleen Fleming, the former director of entertainment development of the Lego company, who had died in Cancún, Mexico, in April 2013.

The Lego Movie premiered in Los Angeles on February 1, 2014, and was released in the United States on February 7. It became a critical and commercial success, grossing $468.1 million worldwide against its $60–65 million budget, and received acclaim for its animation, story, humor, score, and acting. The National Board of Review selected The Lego Movie as one of the top films of the year. It garnered numerous accolades, including the Producers Guild of America Award for Best Animated Motion Picture and the American Cinema Editors Award for Best Edited Animated Feature Film, as well as a nomination for Best Original Song at the 87th Academy Awards. The Lego Movie is the first entry in what would become the franchise of the same name, which includes three more films—The Lego Batman Movie, The Lego Ninjago Movie (both 2017), and The Lego Movie 2: The Second Part (2019).

Plot

In the Lego universe, the wizard Vitruvius is blinded when he fails to protect a superweapon called the "Kragle", a misreading of Krazy Glue, from the evil Lord Business, but prophesies that a person called "The Special" will find the Piece of Resistance capable of stopping the Kragle. Lord Business claims Vitruvius made up the prophecy and kicks Vitruvius off a cliff.

Eight and a half years later, in Bricksburg, construction worker Emmet Brickowski comes across a beautiful woman searching for something at his construction site. Emmet falls into a pit and finds the Piece of Resistance. Compelled to touch it, Emmet experiences visions, including one of a giant called "The Man Upstairs", and passes out.
He awakens in the custody of Bad Cop, Business's lieutenant, with the Piece of Resistance attached to his back. Emmet learns of Business's plans to freeze the world with the Kragle; the Piece of Resistance is the glue tube's cap. The woman, nicknamed Wyldstyle, rescues Emmet, believing him to be the Special. They escape Bad Cop and travel to "The Old West", where they meet Vitruvius. He and Wyldstyle are Master Builders, capable of building anything without instruction manuals, who oppose Business's attempts to suppress their creativity. Though disappointed that Emmet is not a Master Builder, they are convinced of his potential when he recalls visions of "the Man Upstairs". Emmet, Wyldstyle, and Vitruvius evade Bad Cop's forces with the help of Wyldstyle's boyfriend, Batman, and escape to "Cloud Cuckoo Land", where all the Master Builders are in hiding. The Master Builders are unimpressed with Emmet's cowardice and refuse to help him fight Business. Bad Cop's forces attack and capture everyone except Emmet, Wyldstyle, Vitruvius, Batman, and fellow Master Builders MetalBeard, Unikitty, and Benny.

Emmet devises a heist to infiltrate Business's headquarters and disarm the Kragle. During the heist, Wyldstyle reveals to Emmet that her real name is Lucy. The heist nearly succeeds, but Emmet and his friends are captured and imprisoned. Lord Business decapitates Vitruvius and throws the Piece of Resistance into an abyss before arming a self-destruct device that will execute all the captured Master Builders. Before he dies, Vitruvius reveals that he made up the prophecy, but his spirit returns to tell Emmet that it is his self-belief that makes him the Special. Strapped to the self-destruct mechanism's battery, Emmet flings himself off the edge of the tower, disarming the mechanism and saving his friends and the Master Builders.
Inspired by Emmet's sacrifice, Lucy rallies the Lego people across the universe to use whatever creativity they have to build machines and weapons to fight Business's forces. The portal transports Emmet to the human world, where the events of his life are being played out in a basement by a young boy, Finn, on his father's Lego set. Finn's father — "The Man Upstairs" — chastises his son for creating hodgepodges of different playsets and begins to permanently glue his perceived "perfect" creations together. Realizing the danger, Emmet wills himself to move and gains Finn's attention. Finn returns Emmet and the Piece of Resistance to the set, where Emmet becomes a Master Builder and confronts Business.

In the human world, Finn's father looks at his son's creations and realizes that he was suppressing his son's creativity. Through a speech by Emmet, Finn tells his father that he is very special and has the power to change everything. Finn's father reconciles with his son, which plays out as Business reforming, capping the Kragle with the Piece of Resistance, and ungluing his victims with mineral spirits. After the world has been restored, Lucy and Emmet enter a relationship, with Batman's blessing. Finn's father grants permission for Finn's younger sister to play with the Lego sets as well, causing Duplo aliens to arrive in the Lego universe and threaten destruction.

Cast

Chris Pratt as Emmet Brickowski, an everyman and construction worker from Bricksburg who is initially mistaken for the Special.
Will Ferrell as Lord Business, an evil businessman who hates Master Builders and the tyrant of Bricksburg and the Lego universe, who is the company president of the Octan Corporation under the name President Business. Ferrell also plays "The Man Upstairs", a Lego collector and Finn's father in the live-action part of the film.
Morgan Freeman as Vitruvius, a blind and elderly wizard-like Master Builder.
Elizabeth Banks as Lucy / Wyldstyle, a "tough as nails" and tech-savvy Master Builder.
Will Arnett as Bruce Wayne / Batman, a DC Comics character who is one of the Master Builders, as well as Wyldstyle's boyfriend and an amateur musician.
Nick Offerman as MetalBeard, a pirate-like Master Builder seeking revenge on Lord Business for taking his body parts following an earlier encounter, causing him to remake his body from bricks.
Alison Brie as Princess Unikitty, a unicorn/cat hybrid-like Master Builder who lives in Cloud Cuckoo Land.
Charlie Day as Benny, a "1980-something space guy"-like Master Builder who is obsessed with building spaceships.
Liam Neeson as Bad Cop / Good Cop / Scribble Cop, a police officer with a two-sided head and a split personality who serves Lord Business as the commander of the Super Secret Police. The character's name and personality are based on the good cop, bad cop interrogation method, which is briefly shown in the film. Neeson also voices Pa Cop, a police officer who is Bad Cop/Good Cop's father and Ma Cop's husband.
Channing Tatum as Superman, a DC Comics character who is one of the Master Builders.
Jonah Hill as Green Lantern, a DC Comics character who is one of the Master Builders.
Cobie Smulders as Wonder Woman, a DC Comics character who is one of the Master Builders.
Jadon Sand as Finn, an eight-and-a-half-year-old boy who is the son of "The Man Upstairs" in the live-action part of the film.

Additionally, Anthony Daniels, Keith Ferguson, and Billy Dee Williams appear as protocol droid C-3PO and smugglers Han Solo and Lando Calrissian from the Star Wars franchise and the television series Robot Chicken.
Other appearances from licensed Lego iterations of franchises include Gandalf from the Lord of the Rings and the Hobbit franchises (voiced by Todd Hansen); Dumbledore from the Wizarding World franchise; The Flash and Aquaman from DC Comics; Milhouse from The Simpsons; Michelangelo from the Teenage Mutant Ninja Turtles franchise; and Speed Racer from the Lego tie-in sets released alongside the 2008 film adaptation of the eponymous animated television series. Shaquille O'Neal portrays a Lego version of himself who is a Master Builder alongside two generic members of the 2002 NBA All-Stars. Will Forte (credited as Orville Forte) portrays Abraham Lincoln (whom he had previously voiced on Clone High, another Lord/Miller production). Dave Franco, Jake Johnson, and Keegan-Michael Key portray Emmet's co-workers Wally, Barry, and Foreman Jim, respectively. Director Christopher Miller voices a TV announcer for the Octan comedy show Where Are My Pants?; his son Graham Miller voices the Duplo alien.

Production

Development

The development of The Lego Movie began in 2008, when Dan Lin and Roy Lee discussed it before Lin left Warner Bros. Pictures to form his own production company, Lin Pictures. Warner Bros. home entertainment executive Kevin Tsujihara, who had recognized the value of the Lego franchise by engineering the studio's purchase of Lego video game licensee Traveller's Tales in 2007, thought the success of the Lego-based video games indicated a Lego-based film was a good idea, and reportedly "championed" the development of the film. By August 2009, Dan and Kevin Hageman were writing a script described as an "action adventure set in a Lego world". In 2008, Lin visited The Lego Group's headquarters in Denmark to pitch his vision for the film, later recalling uncertainty among the executives: "They weren't rude or anything (…) but they didn't feel they needed a movie. They were already a very successful brand. Why take the risk?"

Nevertheless, Lego's vice president of licensing and entertainment Jill Wilfert responded positively to the Hagemans' treatment that Lin pitched. "Once we heard the pitch, how Dan felt he could bring the values of the brand to life, we started to think, 'This could be interesting.'" Cloudy with a Chance of Meatballs (2009) directors Phil Lord and Christopher Miller were in talks in June 2010 to write and direct the film. Warner Bros. green-lit the film by November 2011, with a planned 2014 release date. The Australian studio Animal Logic, the same studio that did the animation for other Warner Bros. films such as Happy Feet and Legend of the Guardians: The Owls of Ga'Hoole, was contracted to provide the animation, which was expected to comprise 80% of the film. By this time Chris McKay, the director of Robot Chicken, had also joined Lord and Miller to co-direct. McKay explained that his role was to supervise the production in Australia once Lord and Miller left to work on 22 Jump Street (2014). In March 2012, Lord and Miller revealed the film's working title, Lego: The Piece of Resistance, and a storyline.

Casting

By June 2012, Chris Pratt had been cast as the voice of Emmet, the lead Lego character, and Will Arnett as the voice of a Lego version of Batman; the role of Lego Superman was offered to Channing Tatum. By August 2012, Elizabeth Banks was hired to voice Lucy (later given the alias "Wyldstyle") and Morgan Freeman to voice Vitruvius, an old mystic. In November 2012, Alison Brie, Will Ferrell, Liam Neeson, and Nick Offerman signed on for roles. Brie voices Unikitty, a member of Emmet's team; Ferrell voices the antagonist President/Lord Business; Neeson voices Bad Cop/Good Cop; and Offerman voices MetalBeard, a pirate seeking revenge on Business. Warner Bros. already owned the film rights to intellectual properties from which key characters appear in the film (i.e.
DC Comics; Wizarding World), but the filmmakers still ran their depictions by other creatives; this included Christopher Nolan and Zack Snyder, who were directing The Dark Knight Rises (2012) and Man of Steel (2013) at the time of the film's production, as well as Harry Potter creator J.K. Rowling. Lord recalled that Superman was omitted for an extended period of time due to a lawsuit against Warner Bros. by the heirs of co-creator Jerry Siegel, before being reinserted at the last minute. The film also features Keith Ferguson, Billy Dee Williams, and Anthony Daniels reprising their roles as Lego iterations of the Star Wars characters Han Solo, Lando Calrissian, and C-3PO, respectively. Lin recalled the closure of their deal to feature the characters as hectic, as The Walt Disney Company announced its purchase of Lucasfilm a few weeks after the filmmakers had traveled there and received permission to include them.

Animation process

The Lego Movie was strongly inspired by the visual aesthetic and style of brickfilms and qualities attributed to Lego Studios sets. The film received a great deal of praise in the respective online communities from filmmakers and fans, who saw it as an appreciative nod to their work. In the film's live-action segment, Finn returns Emmet to the Lego world via an arts-and-crafts-covered tube labeled "Magic Portal", which production designer Grant Freckleton confirmed was a direct reference to Australian filmmaker Lindsay Fleay's 1989 animated short film The Magic Portal, which similarly incorporated live-action segments. Fleay went on to work at Animal Logic, though he left before production on The Lego Movie began.

Animal Logic tried to make the film's animation replicate a stop-motion film, although everything was done through computer graphics, with the animation rigs following the same articulation limits actual Lego figures have. The camera systems also tried to replicate live-action cinematography, including different lenses and a Steadicam simulator. The scenery was projected through The Lego Group's own Lego Digital Designer (created as part of Lego Design byME, with which people could design their own Lego models using LDD, then upload them to the Lego website, design their own box art, and order them for actual delivery), which, as CG supervisor Aidan Sarsfield detailed, "uses the official LEGO Brick Library and effectively simulates the connectivity of each of the bricks." The saved files were then converted for design and animation in Maya and XSI. At times the minifigures were even placed under microscopes to capture the seam lines, dirt, and grime in the digital textures. Benny the spaceman was based on the line of Lego space sets sold in the 1980s, and his design includes the broken helmet chin strap, a common defect of the space sets at that time. Miller's childhood Space Village playset was used in the film.

Post-production

The Lego Movie was the first theatrical feature film produced by the Warner Animation Group. The film's total cost, including production, prints, and advertising (P&A), was $100 million. Half of the film's cost was financed by Village Roadshow Pictures; it was the only film in the franchise that Village Roadshow was ever involved with. The rest was covered by Warner Bros., with RatPac-Dune Entertainment providing a smaller share as part of its multi-year financing agreement with Warner Bros. Initially Warner Bros. turned down Village Roadshow Pictures when it asked to invest in the film. However, Warner Bros. later changed its mind, reportedly due to a lack of confidence in the film, initially offering Village Roadshow Pictures the opportunity to finance 25% of the film, and later, an additional 25%.
Music

The film's original score was composed by Mark Mothersbaugh, who had previously worked with Lord and Miller on Cloudy with a Chance of Meatballs (2009) and 21 Jump Street (2012). The Lego Movie soundtrack contains the score as the majority of its tracks. Also included is the song "Everything Is Awesome", written by Shawn Patterson, Joshua Bartholomew, and Lisa Harriton, who also perform the song under the name Jo Li. The single, released on January 23, 2014, is performed by Tegan and Sara featuring The Lonely Island (Andy Samberg, Akiva Schaffer, and Jorma Taccone), who wrote the rap lyrics, and is played in the film's end credits. The soundtrack was released on February 4, 2014, by WaterTower Music.

Marketing and release

Lego released a number of building toy sets based on scenes from The Lego Movie. The Lego Movie premiered on February 1, 2014, at the Regency Village Theatre in Los Angeles. It was initially scheduled for release on February 28, but was later moved up to February 7. The film was released in Australia by Roadshow Films. Warner Home Video released The Lego Movie for digital download, and on DVD and Blu-ray, on June 17, 2014. At the same time, a special Blu-ray 3D "Everything is Awesome Edition" was released, which also includes an exclusive Vitruvius minifigure and a collectible 3D Emmet photo. Overall, The Lego Movie was the fourth best-selling film of 2014 on home video, after Frozen, The Hunger Games: Catching Fire, and Guardians of the Galaxy, selling 4.9 million units and earning a revenue of $105.2 million.

Reception

Box office

The Lego Movie grossed $257.8 million in the United States and Canada and $210.3 million in other territories, for a worldwide total of $468.1 million. Deadline Hollywood calculated the film's net profit as $229 million, accounting for production budgets, marketing, talent participations, and other costs; box office grosses and home media revenues placed it third on their list of 2014's "Most Valuable Blockbusters".

In the United States and Canada, The Lego Movie was released alongside The Monuments Men and Vampire Academy on February 7, 2014. It earned $17.2 million on its first day, including $425,000 from Thursday night previews. During its opening weekend, the film earned $69.1 million from 3,775 theaters. This made it the second-highest February opening weekend, behind The Passion of the Christ. The Lego Movie attracted a diverse audience: about 64 percent Caucasian, 16 percent Hispanic, 12 percent African-American, and 8 percent Asian, with 41 percent under 18 years of age. Its second-weekend earnings dropped by 28 percent to $49.8 million, followed by another $31.3 million in the third weekend. The Lego Movie completed its theatrical run in the United States and Canada on September 4, 2014.

Worldwide, The Lego Movie earned $69.1 million in its opening weekend in 34 markets. On its opening weekend elsewhere, the top countries were the United Kingdom ($13.4 million), Australia ($5.7 million), Russia ($3.9 million), Mexico ($3.8 million), and France ($3.1 million). The film had the strongest start for a non-sequel animated film in the United Kingdom, ahead of The Simpsons Movie and Up. It would remain the country's highest opening weekend for a 2014 film until it was surpassed by The Amazing Spider-Man 2 that spring. Its top international markets were the United Kingdom ($57 million), Australia ($20 million), and Germany ($13.1 million).

Critical response

The Lego Movie was met with universal acclaim. The critical consensus reads, "Boasting beautiful animation, a charming voice cast, laugh-a-minute gags, and a surprisingly thoughtful story, The Lego Movie is colorful fun for all ages." Audiences polled by CinemaScore gave the film an average grade of "A" on an A+ to F scale.
Michael Rechtshaffen of The Hollywood Reporter wrote, "Arriving at a time when feature animation was looking and feeling mighty anemic...The LEGO Movie shows 'em how it's done", with Peter Debruge of Variety adding that Lord and Miller "irreverently deconstruct the state of the modern blockbuster and deliver a smarter, more satisfying experience in its place, emerging with a fresh franchise for others to build upon". Susan Wloszczyna of RogerEbert.com gave the film four stars out of four, writing, "It still might be a 100-minute commercial, but at least it's a highly entertaining and, most surprisingly, a thoughtful one with in-jokes that snap, crackle and zoom by at warp speed." Tom Huddleston of Time Out said, "The script is witty, the satire surprisingly pointed, and the animation tactile and imaginative." Drew Hunt of the Chicago Reader said the filmmakers "fill the script with delightfully absurd one-liners and sharp pop culture references", with A. O. Scott of The New York Times noting that, "Pop-culture jokes ricochet off the heads of younger viewers to tickle the world-weary adults in the audience, with just enough sentimental goo applied at the end to unite the generations. Parents will dab their eyes while the kids roll theirs." Claudia Puig of USA Today called the film "a spirited romp through a world that looks distinctively familiar, and yet freshly inventive". Liam Lacey of The Globe and Mail asked, "Can a feature-length toy commercial also work as a decent kids' movie? The bombast of the G.I. Joe and Transformers franchises might suggest no, but after an uninspired year for animated movies, The Lego Movie is a 3-D animated film that connects." Joel Arnold of NPR acknowledged that the film "may be one giant advertisement, but all the way to its plastic-mat foundation, it's an earnest piece of work—a cash grab with a heart". Peter Travers of Rolling Stone called the film "sassy enough to shoot well-aimed darts at corporate branding". 
Michael O'Sullivan of The Washington Post said that, "While clearly filled with affection for—and marketing tie-ins to—the titular product that's front and center, it's also something of a sharp plastic brick flung in the eye of its corporate sponsor." Moira MacDonald of The Seattle Times, while generally positive, found "it falls apart a bit near the end". Alonso Duralde of The Wrap said the film "will doubtless tickle young fans of the toys. It's just too bad that a movie that encourages you to think for yourself doesn't follow its own advice."

The Lego Movie was included on a number of best-of lists. It was listed on many critics' top ten lists for 2014, ranking fifteenth. Several publications have listed it as one of the best animated films, including Insider, USA Today (2018), Rolling Stone (2019), Parade, Time Out New York, and Empire (all 2021). Filmmaker Edgar Wright and Time film critic Richard Corliss each named it one of their favorite films of 2014, and actress Tilda Swinton named it her favorite film of the year.

Other response

Conservative political commentator Glenn Beck praised the film for avoiding "the double meanings and adult humor I just hate". Oscar host Neil Patrick Harris referenced The Lego Movie not being nominated for Best Animated Feature, which many critics considered a snub, saying prior to the award's presentation, "If you're at the Oscar party with the guys who directed 'The Lego Movie,' now would be a great time to distract them." U.S. Senator Ron Johnson criticized the film's anti-corporate message, saying that it taught children that "government is good and business is bad", citing the villain's name of Lord Business. "That's done for a reason", Johnson told WisPolitics.com. "They're starting that propaganda, and it's insidious". The comments were criticized by many, and Russ Feingold brought up the comments on the campaign trail during his 2016 Senate bid against Johnson.

Accolades

At the 87th Academy Awards, The Lego Movie received a nomination for Best Original Song. The film's other nominations include six Annie Awards (winning one), a British Academy Film Award (which it won), two Critics' Choice Movie Awards (winning one), and a Golden Globe Award. The Lego Movie was named one of the ten best films of 2014 by the National Board of Review, where it also won Best Original Screenplay.

Other media

In 2014, an adventure video game, The Lego Movie Videogame, was released for multiple platforms. Lego Dimensions (2015) features characters from several media franchises, including The Lego Movie. The Lego Movie: 4D – A New Adventure is a 4-D film at Legoland Florida that has been in operation since 2016. Written and directed by Rob Schrab, the 12-minute attraction stars A. J. Locascio as Emmet, with Banks, Brie, Day, and Offerman reprising their respective roles, while Patton Oswalt plays President Business's brother, Risky Business. Barbie (2023) features Will Ferrell reprising his role as "The Man Upstairs" from The Lego Movie in a cameo, depicted as the CEO of Mattel and sporting the same clothes and hairstyle.

Follow-ups

Warner Bros. released two spin-offs in 2017: The Lego Batman Movie and The Lego Ninjago Movie. Both films are set in universes separate from that of The Lego Movie. The Lego Batman Movie was considered a success, while The Lego Ninjago Movie was a failure. A television series, Unikitty! (2017–2020), focuses on the eponymous character (Tara Strong) and her friends. The Lego Movie was followed by The Lego Movie 2: The Second Part in 2019. Following the financial failures of both The Lego Ninjago Movie and The Lego Movie 2, Universal Pictures entered a five-year film deal with The Lego Group.

Notes

References

External links

Official website at Lego.com
Official Warner Bros.
Site 2010s adventure comedy films 2010s American animated films 2010s animated superhero films 2010s English-language films 2014 3D films 2014 films 2014 animated films 2014 action comedy films 2014 computer-animated films 3D animated films 2010s superhero comedy films American 3D films American action comedy films American adventure comedy films American animated feature films American children's animated comic science fiction films American children's animated science fantasy films American computer-animated films American dystopian films American fantasy adventure films American films with live action and animation Animal Logic films Animated crossover films Annie Award-winning films Best Animated Feature BAFTA winners Best Animated Feature Broadcast Film Critics Association Award winners Films about father–son relationships Films about parallel universes Films about sentient toys Films about toys Films adapted into television shows Films based on toys Films directed by Phil Lord and Christopher Miller Films produced by Dan Lin Films produced by Roy Lee Films scored by Mark Mothersbaugh Films shot in Sydney Films with screenplays by Christopher Miller (filmmaker) Films with screenplays by Phil Lord Films with screenplays by The Hageman Brothers Movie Metafictional works Vertigo Entertainment films Village Roadshow Pictures animated films Warner Animation Group films Warner Bros. animated films Warner Bros. Animation animated films Postmodern films
James Alan Bouton (; March 8, 1939 – July 10, 2019) was an American professional baseball player. Bouton played in Major League Baseball (MLB) as a pitcher for the New York Yankees, Seattle Pilots, Houston Astros, and Atlanta Braves between 1962 and 1978. He was also a best-selling author, actor, activist, sportscaster and one of the creators of Big League Chew. Bouton played college baseball at Western Michigan University, before signing his first professional contract with the Yankees. He was a member of the 1962 World Series champions, appeared in the 1963 MLB All-Star Game, and won both of his starts in the 1964 World Series. Later in his career, he developed and threw a knuckleball. Bouton authored the 1970 baseball book Ball Four, which was a combination diary of his 1969 season and memoir of his years with the Yankees, Pilots, and Astros. Amateur and college career Bouton was born in Newark, New Jersey, the son of Gertrude (Vischer) and George Hempstead Bouton, an executive. He grew up as a fan of the New York Giants in Rochelle Park, New Jersey, where he lived until the age of 13. He lived with his family in Ridgewood, New Jersey until he was 15, when his family relocated to Homewood, Illinois. Bouton enrolled at Bloom High School, where he played for the school's baseball team. Bouton was nicknamed "Warm-Up Bouton" because he never got to play in a game, serving much of his time as a benchwarmer. Bloom's star pitcher at that time was Jerry Colangelo, who later would become owner of the Arizona Diamondbacks and Phoenix Suns. In summer leagues, Bouton did not throw particularly hard, but he got batters out by mixing conventional pitches with the knuckleball that he had experimented with since childhood. Bouton attended Western Michigan University, and pitched for the Western Michigan Broncos baseball team. He earned a scholarship for his second year. That summer, he played amateur baseball, catching the attention of scouts. 
Yankees scout Art Stewart signed Bouton for $30,000. Professional career Bouton signed with the Yankees as an amateur free agent in 1959. After playing in minor league baseball, Bouton started his major league career in 1962 with the Yankees, where his tenacity earned him the nickname "Bulldog." By this time, he had developed a formidable fastball. He also came to be known for his cap flying off his head at the completion of his delivery to the plate, as well as for his uniform number 56, a number usually assigned in spring training to players designated for the minor leagues. (Bouton later explained that he had been assigned the number in 1962 when he was promoted to the Yankees, and wanted to keep it as a reminder of how close he had come to not making the ball club. He wore number 56 throughout most of his major league career.) Bouton appeared in 36 games (16 starts) during the 1962 season, going 7–7 with two saves and a 3.99 ERA. He did not play in the Yankees' 1962 World Series victory over the San Francisco Giants, although he had originally been slated to start Game 7. When the game was postponed a day because of rain, Ralph Terry pitched instead. Bouton went 21–7 and 18–13 in the next two seasons, and appeared in the 1963 All-Star Game. In Game 3 of the 1963 World Series, Don Drysdale of the Los Angeles Dodgers pitched a three-hit shutout in a 1–0 victory, while Bouton gave up just four hits in seven innings for the Yankees. The only run scored in the first inning on a walk, wild pitch and single by Tommy Davis that bounced off the pitching mound. Bouton won both his starts in the 1964 World Series. He beat the St. Louis Cardinals 2–1 with a complete-game six-hitter on October 10 on a walk-off home run by Mickey Mantle, then won again on October 14 at Busch Stadium, 8–3, backed by another Mantle homer and a Joe Pepitone grand slam. He was 2–1 with a 1.48 ERA in three career World Series starts. 
Bouton's frequent use by the Yankees during these years (he led the league with 37 starts in 1964 in addition to pitching in that year's World Series) probably contributed to his subsequent arm troubles. In 1965, an arm injury slowed his fastball and ended his status as a pitching phenomenon. Relegated mostly to bullpen duty, Bouton began to throw the knuckleball again, in an effort to lengthen his career. He was 1–1 in 12 appearances when his contract was sold on June 15, 1968, by the Yankees to the Seattle Pilots before the expansion franchise ever played a game. He was assigned to the Seattle Angels for the remainder of the campaign.<ref>[https://news.google.com/newspapers?nid=1917&dat=19680617&id=Y8JGAAAAIBAJ&sjid=eukMAAAAIBAJ&pg=781,3640855 "Major League Teams Beat Clock with Last-Minute Trading Spurt," Schenectady (NY) Gazette, Monday, June 17, 1968.] Retrieved February 17, 2023.</ref> In October 1968, Bouton joined a committee of American sportsmen who traveled to the 1968 Summer Olympics, in Mexico City, to protest the involvement of apartheid South Africa. He was used almost exclusively out of the bullpen by the Pilots in 1969. On May 16, he pitched three hitless innings of relief without allowing a run against the Boston Red Sox at Fenway Park. The Pilots scored six in the top of the 11th inning to earn him the win, even though other Seattle relievers gave five runs back in the bottom of the 11th. Bouton earned another win in July against the Red Sox with 1 innings of relief, again not allowing a hit. Over 57 appearances with the Pilots, he compiled a 2–1 record with a 3.91 ERA. The Pilots traded him to the Houston Astros in late August, where Bouton was 0–2 with a 4.11 ERA in 16 appearances (one start). Ball Four Around 1968, sportswriter Leonard Shecter, who had befriended Bouton during his time with the Yankees, approached him with the idea of writing a season-long diary. 
Bouton agreed; he had taken some notes during the 1968 season with a similar goal. The diary that became Ball Four chronicled Bouton's experiences the next year with the Pilots. The diary also followed Bouton during his two-week stint with the triple-A Vancouver Mounties in April, and after his trade to the Houston Astros in late August. Ball Four was not the first baseball diary (Cincinnati Reds pitcher Jim Brosnan had written two such books), but it became more widely known and discussed than its predecessors. The book was a frank, insider's look at professional sports teams, covering the off-the-field side of baseball life, including petty jealousies, obscene jokes, drunken tomcatting of the players, and routine drug use, including by Bouton himself. Upon its publication, baseball commissioner Bowie Kuhn called Ball Four "detrimental to baseball", and tried to force Bouton to sign a statement saying that the book was completely fictional. Bouton, however, refused to deny any of Ball Four's revelations. Some teammates never forgave him for disclosing information given to him in confidence, and naming names. The book made Bouton unpopular with many players, coaches, and officials on other teams as well; he was informally blacklisted from baseball. Bouton's writings about Mickey Mantle's lifestyle were most notorious, though they comprise few pages of Ball Four and much of the material was complimentary. For example, when Bouton got his first shutout win as a Yankee, he describes Mantle laying a "red carpet" of white towels leading directly to Bouton's locker in his honor. The controversy and book sales enabled Bouton to write a sequel, I'm Glad You Didn't Take It Personally, in which he discussed both the controversies and reactions to Ball Four, and the end of his original pitching career and his transition to becoming a New York sportscaster. Retirement Bouton retired midway through the 1970 season, shortly after the Astros sent him down to the minor leagues. 
After a handful of unsatisfactory appearances, Bouton left baseball to become a local sports anchor for New York station WABC-TV, as part of Eyewitness News; he later held the same job for WCBS-TV. In 1973, Bouton published a collection of manager tales, including one by Bouton himself about Joe Schultz, his manager with the Seattle Pilots. Bouton also became an actor, playing the part of Terry Lennox in Robert Altman's The Long Goodbye (1973), plus the lead role of Jim Barton in the 1976 CBS television series Ball Four, which was loosely adapted from the book. The show was canceled after five episodes. Decades later, Bouton would also have a brief one-line cameo as a pitching coach in the 2010 James L. Brooks film How Do You Know. By the mid-1970s, a cult audience saw the book Ball Four as a candid and comic portrayal of the ups and downs of baseball life. Bouton went on the college lecture circuit, delivering humorous talks on his experiences. He authored a sequel, I'm Glad You Didn't Take It Personally, and later updated the original book with a new extended postscript that provided a ten-year update, dubbed Ball Five. Return Bouton launched his comeback bid with the Portland Mavericks of the Class A Northwest League in 1975, compiling a 5–1 record. He skipped the 1976 season to work on the TV series, but he returned to the diamond in 1977 when Bill Veeck signed him to a minor league contract with the Chicago White Sox. Bouton was winless for a White Sox farm club; a stint in the Mexican League and a return to Portland followed. In 1978, Ted Turner signed Bouton to a contract with the Atlanta Braves. After a successful season with the Double-A Savannah Braves of the Southern League, he was called up to join Atlanta's rotation in September, and compiled a 1–3 record with a 4.97 ERA in five starts. His winding return to the majors was chronicled in a book by sportswriter Terry Pluto, The Greatest Summer. 
Bouton also detailed his comeback in a 10th anniversary re-release of his first book, titled Ball Four Plus Ball Five, as well as adding a Ball Six, updating the stories of the players in Ball Four, for the 20th anniversary edition. All were included (in 2000) as Ball Four: The Final Pitch, along with a new coda that detailed the death of his daughter and his reconciliation with the Yankees. After his return to the majors, Bouton continued to pitch at the semi-pro level for a Bergen County, New Jersey team called the Emerson-Westwood Merchants, among other teams in the Metropolitan Baseball League in northern New Jersey, while living in Teaneck, New Jersey. Once his baseball career ended a second time, Bouton became one of the inventors of "Big League Chew", a shredded bubblegum designed to resemble chewing tobacco and sold in a tobacco-like pouch. He also co-authored Strike Zone (a baseball novel) and edited an anthology about managers, entitled I Managed Good, But Boy Did They Play Bad (published 1973). His most recent book is Foul Ball, a non-fiction account of his attempt to save Wahconah Park, a historic minor league baseball stadium in Pittsfield, Massachusetts. The book was released in 2003 and later updated in 2005. Although Bouton had never been officially declared persona non grata by the Yankees or any other team as a result of Ball Four's revelations, he was excluded from most baseball-related functions, including Old-Timers' Games. It was rumored that Mickey Mantle himself had told the Yankees that he would never attend an Old-Timers' Game to which Bouton was invited. Later, Mantle denied this charge during an answering-machine message to Bouton after Mantle's son Billy had died of cancer in 1994 – Mantle was acknowledging a condolence card Bouton had sent. 
On June 21, 1998 (Father's Day), Bouton's oldest son Michael wrote an open letter to the Yankees, published in The New York Times, describing the agony of his father following the August 1997 death of Michael's sister Laurie at age 31, and wishing that the Yankees would invite Bouton to their Old Timers Game on July 25. Michael noted Yogi Berra's decision not to participate in the game as long as George Steinbrenner was owner, but argued it was just as petty for Berra to spite Steinbrenner as it was for Steinbrenner to spite Bouton. Not long after, the Yankees elected to invite him to the Old Timers Game. On July 25, 1998, Bouton, sporting his familiar number 56, received a standing ovation when he took the mound at Yankee Stadium. Personal life Bouton and his first wife Bobbie had two children together, Michael and Laurie, and adopted a Korean orphan, Kyong Jo. Kyong Jo later changed his name to David. Bobbie and Bouton divorced in 1981. In 1983, Bouton's ex-wife teamed up with Nancy Marshall, the former wife of pitcher Mike Marshall, to write a tell-all book called Home Games. In response to the book's publication, Bouton commented: We all have the right to write about our lives, and she does, too. If the book is insightful, if it helps people, I may be applauding it. I'm sure most of the things she says are true. I smoked grass, I ran around, I found excuses to stay on the road. It got so bad that I smoked grass to numb myself. It took me a year to where my brain worked again. I no longer think of grass as harmless. We were in the death throes of a marriage. She should ask herself how did she not see these things. In 1997, Laurie was killed in a car accident at age 31. Bouton later married Paula Kurman. They had six grandchildren. In 2012, Bouton had a stroke that did not impair him physically but damaged his memory and speech. 
Bouton promoted the Vintage Base Ball Federation to form vintage clubs and leagues internationally, to codify the rules and equipment of its 19th-century origins, and to organize competitions. Bouton was a delegate to the 1972 Democratic National Convention for George McGovern. Bouton died at home on July 10, 2019, after weeks of hospice care for cerebral amyloid angiopathy, at age 80. Writings Ball Four has been through numerous significantly revised editions, the most recent being Ball Four: The Final Pitch, Bulldog Publishing (April 2001). I'm Glad You Didn't Take It Personally. I Managed Good, But Boy Did They Play Bad – edited and annotated by Bouton, compiled by Neil Offen. Foul Ball, Bulldog Publishing (June 2003). Strike Zone, Signet Books (March 1995), with Eliot Asinof. See also List of knuckleball pitchers References External links 1939 births 2019 deaths 20th-century American memoirists Alacranes de Durango players Amarillo Gold Sox players American League All-Stars American diarists American expatriate baseball players in Canada American expatriate baseball players in Mexico American male non-fiction writers Atlanta Braves players Auburn Yankees players Baseball players from Chicago Baseball players from Newark, New Jersey Greensboro Yankees players Houston Astros players Journalists from Illinois Kearney Yankees players Knoxville Sox players Knuckleball pitchers Major League Baseball pitchers Mexican League baseball pitchers New Jersey Democrats New York (state) television reporters New York Yankees players Oklahoma City 89ers players People from Chicago Heights, Illinois Sportspeople from Ridgewood, New Jersey People from Rochelle Park, New Jersey Sportspeople from Teaneck, New Jersey Baseball players from Bergen County, New Jersey Portland Mavericks players Savannah Braves players Seattle Angels players Seattle Pilots players Syracuse Chiefs players Television anchors from New York City Vancouver Mounties players Western Michigan Broncos 
baseball players Writers from Newark, New Jersey
```go
package api

import (
	"fmt"
	"sync"
	"time"
)

const (
	// DefaultLockSessionName is the Session Name we assign if none is provided
	DefaultLockSessionName = "Consul API Lock"

	// DefaultLockSessionTTL is the default session TTL if no Session is provided
	// when creating a new Lock. This is used because we do not have any
	// other check to depend upon.
	DefaultLockSessionTTL = "15s"

	// DefaultLockWaitTime is how long we block for at a time to check if lock
	// acquisition is possible. This affects the minimum time it takes to cancel
	// a Lock acquisition.
	DefaultLockWaitTime = 15 * time.Second

	// DefaultLockRetryTime is how long we wait after a failed lock acquisition
	// before attempting to do the lock again. This is so that once a lock-delay
	// is in effect, we do not hot loop retrying the acquisition.
	DefaultLockRetryTime = 5 * time.Second

	// DefaultMonitorRetryTime is how long we wait after a failed monitor check
	// of a lock (500 response code). This allows the monitor to ride out brief
	// periods of unavailability, subject to the MonitorRetries setting in the
	// lock options which is by default set to 0, disabling this feature. This
	// affects locks and semaphores.
	DefaultMonitorRetryTime = 2 * time.Second

	// LockFlagValue is a magic flag we set to indicate a key
	// is being used for a lock. It is used to detect a potential
	// conflict with a semaphore.
	LockFlagValue = 0x2ddccbc058a50c18
)

var (
	// ErrLockHeld is returned if we attempt to double lock
	ErrLockHeld = fmt.Errorf("Lock already held")

	// ErrLockNotHeld is returned if we attempt to unlock a lock
	// that we do not hold.
	ErrLockNotHeld = fmt.Errorf("Lock not held")

	// ErrLockInUse is returned if we attempt to destroy a lock
	// that is in use.
	ErrLockInUse = fmt.Errorf("Lock in use")

	// ErrLockConflict is returned if the flags on a key
	// used for a lock do not match expectation
	ErrLockConflict = fmt.Errorf("Existing key does not match lock use")
)

// Lock is used to implement client-side leader election. It follows the
// algorithm as described here: path_to_url
type Lock struct {
	c    *Client
	opts *LockOptions

	isHeld       bool
	sessionRenew chan struct{}
	lockSession  string
	l            sync.Mutex
}

// LockOptions is used to parameterize the Lock behavior.
type LockOptions struct {
	Key              string        // Must be set and have write permissions
	Value            []byte        // Optional, value to associate with the lock
	Session          string        // Optional, created if not specified
	SessionOpts      *SessionEntry // Optional, options to use when creating a session
	SessionName      string        // Optional, defaults to DefaultLockSessionName (ignored if SessionOpts is given)
	SessionTTL       string        // Optional, defaults to DefaultLockSessionTTL (ignored if SessionOpts is given)
	MonitorRetries   int           // Optional, defaults to 0 which means no retries
	MonitorRetryTime time.Duration // Optional, defaults to DefaultMonitorRetryTime
	LockWaitTime     time.Duration // Optional, defaults to DefaultLockWaitTime
	LockTryOnce      bool          // Optional, defaults to false which means try forever
}

// LockKey returns a handle to a lock struct which can be used
// to acquire and release the mutex. The key used must have
// write permissions.
func (c *Client) LockKey(key string) (*Lock, error) {
	opts := &LockOptions{
		Key: key,
	}
	return c.LockOpts(opts)
}

// LockOpts returns a handle to a lock struct which can be used
// to acquire and release the mutex. The key used must have
// write permissions.
func (c *Client) LockOpts(opts *LockOptions) (*Lock, error) {
	if opts.Key == "" {
		return nil, fmt.Errorf("missing key")
	}
	if opts.SessionName == "" {
		opts.SessionName = DefaultLockSessionName
	}
	if opts.SessionTTL == "" {
		opts.SessionTTL = DefaultLockSessionTTL
	} else {
		if _, err := time.ParseDuration(opts.SessionTTL); err != nil {
			return nil, fmt.Errorf("invalid SessionTTL: %v", err)
		}
	}
	if opts.MonitorRetryTime == 0 {
		opts.MonitorRetryTime = DefaultMonitorRetryTime
	}
	if opts.LockWaitTime == 0 {
		opts.LockWaitTime = DefaultLockWaitTime
	}
	l := &Lock{
		c:    c,
		opts: opts,
	}
	return l, nil
}

// Lock attempts to acquire the lock and blocks while doing so.
// Providing a non-nil stopCh can be used to abort the lock attempt.
// Returns a channel that is closed if our lock is lost or an error.
// This channel could be closed at any time due to session invalidation,
// communication errors, operator intervention, etc. It is NOT safe to
// assume that the lock is held until Unlock() unless the Session is specifically
// created without any associated health checks. By default Consul sessions
// prefer liveness over safety and an application must be able to handle
// the lock being lost.
func (l *Lock) Lock(stopCh <-chan struct{}) (<-chan struct{}, error) {
	// Hold the lock as we try to acquire
	l.l.Lock()
	defer l.l.Unlock()

	// Check if we already hold the lock
	if l.isHeld {
		return nil, ErrLockHeld
	}

	// Check if we need to create a session first
	l.lockSession = l.opts.Session
	if l.lockSession == "" {
		s, err := l.createSession()
		if err != nil {
			return nil, fmt.Errorf("failed to create session: %v", err)
		}

		l.sessionRenew = make(chan struct{})
		l.lockSession = s
		session := l.c.Session()
		go session.RenewPeriodic(l.opts.SessionTTL, s, nil, l.sessionRenew)

		// If we fail to acquire the lock, cleanup the session
		defer func() {
			if !l.isHeld {
				close(l.sessionRenew)
				l.sessionRenew = nil
			}
		}()
	}

	// Setup the query options
	kv := l.c.KV()
	qOpts := &QueryOptions{
		WaitTime: l.opts.LockWaitTime,
	}

	start := time.Now()
	attempts := 0
WAIT:
	// Check if we should quit
	select {
	case <-stopCh:
		return nil, nil
	default:
	}

	// Handle the one-shot mode.
	if l.opts.LockTryOnce && attempts > 0 {
		elapsed := time.Since(start)
		if elapsed > l.opts.LockWaitTime {
			return nil, nil
		}

		// Query wait time should not exceed the lock wait time
		qOpts.WaitTime = l.opts.LockWaitTime - elapsed
	}
	attempts++

	// Look for an existing lock, blocking until not taken
	pair, meta, err := kv.Get(l.opts.Key, qOpts)
	if err != nil {
		return nil, fmt.Errorf("failed to read lock: %v", err)
	}
	if pair != nil && pair.Flags != LockFlagValue {
		return nil, ErrLockConflict
	}
	locked := false
	if pair != nil && pair.Session == l.lockSession {
		goto HELD
	}
	if pair != nil && pair.Session != "" {
		qOpts.WaitIndex = meta.LastIndex
		goto WAIT
	}

	// Try to acquire the lock
	pair = l.lockEntry(l.lockSession)
	locked, _, err = kv.Acquire(pair, nil)
	if err != nil {
		return nil, fmt.Errorf("failed to acquire lock: %v", err)
	}

	// Handle the case of not getting the lock
	if !locked {
		// Determine why the lock failed
		qOpts.WaitIndex = 0
		pair, meta, err = kv.Get(l.opts.Key, qOpts)
		if pair != nil && pair.Session != "" {
			// If the session is not null, this means that a wait can safely happen
			// using a long poll
			qOpts.WaitIndex = meta.LastIndex
			goto WAIT
		} else {
			// If the session is empty and the lock failed to acquire, then it means
			// a lock-delay is in effect and a timed wait must be used
			select {
			case <-time.After(DefaultLockRetryTime):
				goto WAIT
			case <-stopCh:
				return nil, nil
			}
		}
	}

HELD:
	// Watch to ensure we maintain leadership
	leaderCh := make(chan struct{})
	go l.monitorLock(l.lockSession, leaderCh)

	// Set that we own the lock
	l.isHeld = true

	// Locked! All done
	return leaderCh, nil
}

// Unlock releases the lock. It is an error to call this
// if the lock is not currently held.
func (l *Lock) Unlock() error {
	// Hold the lock as we try to release
	l.l.Lock()
	defer l.l.Unlock()

	// Ensure the lock is actually held
	if !l.isHeld {
		return ErrLockNotHeld
	}

	// Set that we no longer own the lock
	l.isHeld = false

	// Stop the session renew
	if l.sessionRenew != nil {
		defer func() {
			close(l.sessionRenew)
			l.sessionRenew = nil
		}()
	}

	// Get the lock entry, and clear the lock session
	lockEnt := l.lockEntry(l.lockSession)
	l.lockSession = ""

	// Release the lock explicitly
	kv := l.c.KV()
	_, _, err := kv.Release(lockEnt, nil)
	if err != nil {
		return fmt.Errorf("failed to release lock: %v", err)
	}
	return nil
}

// Destroy is used to cleanup the lock entry. It is not necessary
// to invoke. It will fail if the lock is in use.
func (l *Lock) Destroy() error {
	// Hold the lock as we try to release
	l.l.Lock()
	defer l.l.Unlock()

	// Check if we already hold the lock
	if l.isHeld {
		return ErrLockHeld
	}

	// Look for an existing lock
	kv := l.c.KV()
	pair, _, err := kv.Get(l.opts.Key, nil)
	if err != nil {
		return fmt.Errorf("failed to read lock: %v", err)
	}

	// Nothing to do if the lock does not exist
	if pair == nil {
		return nil
	}

	// Check for possible flag conflict
	if pair.Flags != LockFlagValue {
		return ErrLockConflict
	}

	// Check if it is in use
	if pair.Session != "" {
		return ErrLockInUse
	}

	// Attempt the delete
	didRemove, _, err := kv.DeleteCAS(pair, nil)
	if err != nil {
		return fmt.Errorf("failed to remove lock: %v", err)
	}
	if !didRemove {
		return ErrLockInUse
	}
	return nil
}

// createSession is used to create a new managed session
func (l *Lock) createSession() (string, error) {
	session := l.c.Session()
	se := l.opts.SessionOpts
	if se == nil {
		se = &SessionEntry{
			Name: l.opts.SessionName,
			TTL:  l.opts.SessionTTL,
		}
	}
	id, _, err := session.Create(se, nil)
	if err != nil {
		return "", err
	}
	return id, nil
}

// lockEntry returns a formatted KVPair for the lock
func (l *Lock) lockEntry(session string) *KVPair {
	return &KVPair{
		Key:     l.opts.Key,
		Value:   l.opts.Value,
		Session: session,
		Flags:   LockFlagValue,
	}
}

// monitorLock is a long running routine to monitor a lock ownership
// It closes the stopCh if we lose our leadership.
func (l *Lock) monitorLock(session string, stopCh chan struct{}) {
	defer close(stopCh)
	kv := l.c.KV()
	opts := &QueryOptions{RequireConsistent: true}
WAIT:
	retries := l.opts.MonitorRetries
RETRY:
	pair, meta, err := kv.Get(l.opts.Key, opts)
	if err != nil {
		// If configured we can try to ride out a brief Consul unavailability
		// by doing retries. Note that we have to attempt the retry in a non-
		// blocking fashion so that we have a clean place to reset the retry
		// counter if service is restored.
		if retries > 0 && IsRetryableError(err) {
			time.Sleep(l.opts.MonitorRetryTime)
			retries--
			opts.WaitIndex = 0
			goto RETRY
		}
		return
	}
	if pair != nil && pair.Session == session {
		opts.WaitIndex = meta.LastIndex
		goto WAIT
	}
}
```
Strength in Numbers is the ninth studio album by Swedish metal band The Haunted, released on 25 August 2017 via Century Media. Track listing Personnel Credits are adapted from the album's liner notes. The Haunted Marco Aro – vocals Patrik Jensen – rhythm guitar Ola Englund – lead guitar Jonas Björler – bass Adrian Erlandsson – drums Production and design Russ Russell − production, engineering, mixing, mastering Jocke Skog − producer Andreas Pettersson − artwork Nilay Pavlovic − photography References 2017 albums The Haunted (Swedish band) albums Century Media Records albums
```scala
/* */
package com.lightbend.lagom.internal.scaladsl.registry

import java.net.URI

import com.lightbend.lagom.internal.registry.AbstractLoggingServiceRegistryClient
import com.lightbend.lagom.scaladsl.api.transport.NotFound

import scala.collection.immutable
import scala.concurrent.ExecutionContext
import scala.concurrent.Future

private[lagom] class ScalaServiceRegistryClient(registry: ServiceRegistry)(implicit ec: ExecutionContext)
    extends AbstractLoggingServiceRegistryClient {

  protected override def internalLocateAll(serviceName: String, portName: Option[String]): Future[immutable.Seq[URI]] =
    registry
      .lookup(serviceName, portName)
      .invoke()
      .map(immutable.Seq[URI](_))
      .recover {
        case _: NotFound => Nil
      }
}
```
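The `recover { case _: NotFound => Nil }` step above encodes a common pattern: a "not found" failure from the registry is translated into an empty result, while other errors still propagate. A minimal Go analog of that pattern (Go is used here only for consistency with the other examples; `lookup` and `locateAll` are illustrative names, not Lagom API):

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the registry's NotFound transport exception.
var errNotFound = errors.New("not found")

// lookup is a placeholder for the remote registry call.
func lookup(name string) ([]string, error) {
	if name == "known" {
		return []string{"http://localhost:8080"}, nil
	}
	return nil, errNotFound
}

// locateAll mirrors the Scala client's behaviour: NotFound becomes an
// empty result rather than an error, while other failures propagate.
func locateAll(name string) ([]string, error) {
	uris, err := lookup(name)
	if errors.Is(err, errNotFound) {
		return nil, nil
	}
	return uris, err
}

func main() {
	uris, _ := locateAll("unknown")
	fmt.Println(len(uris)) // 0: NotFound becomes an empty list
}
```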
Heydarabad (, also Romanized as Ḩeydarābād) is a village in Kani Shirin Rural District, Karaftu District, Divandarreh County, Kurdistan Province, Iran. At the 2006 census, its population was 232, in 50 families. The village is populated by Kurds. References Towns and villages in Divandarreh County Kurdish settlements in Kurdistan Province
```toml
[engineer]
name = "waruna"

[student]
name = "riyafa"

[employee]
name = "manu"
id = 101

[employee1]
name = "riyafa"
id = 999

[officer]
name = "gabilan"
id = 101

[manager]
name = "hinduja"
id = 107

[teacher]
name = "gabilan"
id = 888

[farmer]
name = "waruna"
id = 999

[person]
name = "waruna"
id = 10
address.city = "San Francisco"
address.country.name = "USA"

[person2]
name = "manu"
id = 11
address.city = "Nugegoda"

[person3]
name = "riyafa"
id = 12

[lecturer]
name = "hinduja"
department1.name = "IT"
department2.name = "Finance"
department3.name = "HR"

[lawyer]
name = "riyafa"
address1.city = "Colombo"
address2.city = "Kandy"
address3.city = "Galle"

[lecturer2]
name = "hinduja"
department1.name = "IT"
department2.name = "Finance"
department3.name = "HR"

[lawyer2]
name = "riyafa"
place1.city = "Colombo"
place2.city = "Kandy"
place3.city = "Galle"

[configRecordType.imported_records.doctor]
name = "waruna"

[configRecordType.imported_records.student]
name = "riyafa"

[configRecordType.imported_records.employee]
name = "manu"
id = 101

[configRecordType.imported_records.employee1]
name = "waruna"
id = 404

[configRecordType.imported_records.officer]
name = "gabilan"
id = 101

[configRecordType.imported_records.manager]
name = "hinduja"
id = 107

[configRecordType.imported_records.teacher]
name = "hinduja"
id = 11

[configRecordType.imported_records.farmer]
name = "manu"
id = 22

[configRecordType.imported_records.person]
name = "hinduja"
id = 100
address.city = "Kandy"
address.country.name = "Sri Lanka"
```
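Dotted keys such as `address.country.name` in the tables above expand into nested tables when parsed. The toy Go helper below illustrates that expansion with plain maps; `setDotted` is a hand-rolled sketch for illustration only, not a TOML parser.

```go
package main

import (
	"fmt"
	"strings"
)

// setDotted expands a dotted key like "address.country.name" into nested
// maps, mirroring how TOML dotted keys define nested tables.
func setDotted(root map[string]any, dotted, value string) {
	parts := strings.Split(dotted, ".")
	m := root
	for _, p := range parts[:len(parts)-1] {
		child, ok := m[p].(map[string]any)
		if !ok {
			child = map[string]any{}
			m[p] = child
		}
		m = child
	}
	m[parts[len(parts)-1]] = value
}

func main() {
	person := map[string]any{}
	setDotted(person, "name", "waruna")
	setDotted(person, "address.city", "San Francisco")
	setDotted(person, "address.country.name", "USA")
	country := person["address"].(map[string]any)["country"].(map[string]any)
	fmt.Println(country["name"]) // USA
}
```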
Casting the Net is an oil-on-canvas painting by French artist Suzanne Valadon, executed in 1914. It measures 201 by 301 cm. It is held in the collection of the Museum of Fine Arts in Nancy. History and analysis The painting was executed in 1914 and exhibited from March 1 to April 30, 1914, at the Salon des Indépendants. Valadon's son, the painter Maurice Utrillo, and her lover and model for this painting, André Utter, also exhibited there. During this exhibition, the canvas, owing to its size, did not go unnoticed and drew some criticism. The Swiss writer and artist Arthur Cravan (known for scandalous outrageousness) subjected the picture and the artist herself to fierce criticism: “[she] knows all sorts of tricks well, but simplifying does not mean doing it in a simple way, old bitch!”. He was convicted of libel, and he later wrote: "Contrary to my assertion, Madame Suzanne Valadon is virtue itself". Valadon had become infatuated with Utter, a friend of her son and 25 years her junior; they married in 1914, the year the painting was made. It is Utter who is depicted on the canvas, as a naked man standing in three different poses, reproducing the same gesture in each. He embodies youth; the canvas emphasizes the power of the model in the action of casting a fishing net. The net actually serves as a pretext for depicting his naked body tense with effort. In the first two positions, the man leans on his left leg. The model's athletic body further enhances the erotic nature of the composition. The canvas represents a classical composition on an academic theme and has a geometric design. This study of movement is reminiscent of the painting Dance (1910) by Henri Matisse (Hermitage Museum, Saint Petersburg). The pink mountain and the blue lake were inspired by the artist's stay in Corsica and are also reminiscent of Paul Cézanne's tones. 
The artist uses outline, clearly delineating the silhouette of the model in space, a technique previously used by, among others, Edgar Degas and Henri de Toulouse-Lautrec. The colors of the painting are warm and sensual. This is also the last painting in which Valadon depicted a male nude. Subsequently, she became more interested in female and child nudes. Valadon was perceived as a symbol of a liberated and active woman, depicting a man as an object of desire. She breaks with the bourgeois conventions of her time by marrying a younger man and by depicting the male nude in her work. Provenance The work was purchased from Madame Vigneron, who received it from the artist herself. The canvas was acquired by the Musée National d'Art Moderne in Paris in 1937, and in 1998 it was transferred to the Museum of Fine Arts of Nancy for safekeeping. See also List of paintings by Suzanne Valadon References 1914 paintings Paintings by Suzanne Valadon Nude art
```javascript
"use strict";
// Compiled re-export shim: copies every property from the rxjs-compat
// implementation of this operator onto this module's exports.
function __export(m) {
    for (var p in m) if (!exports.hasOwnProperty(p)) exports[p] = m[p];
}
Object.defineProperty(exports, "__esModule", { value: true });
__export(require("rxjs-compat/operator/find"));
//# sourceMappingURL=find.js.map
```
`System.out` vs `System.err`
How to create directories in Java
Reading and writing text files
Retrieving file store attributes
Listing a file system's root directories
Ulmus 'Frontier' is an American hybrid cultivar, a United States National Arboretum introduction (NA 55393) derived from a 1971 crossing of the European Field Elm Ulmus minor (female parent) with the Chinese Elm Ulmus parvifolia. Released in 1990, the tree is a rare example of the hybridization of spring- and autumn-flowering elms. Tested in the US National Elm Trial coordinated by Colorado State University, 'Frontier' averaged a survival rate of 74% after 10 years. Description 'Frontier' develops a vase or pyramidal shape, with glossy green foliage turning, unusually for elms, to burgundy in autumn. The twigs are pubescent. Slow growing, the ultimate height of the tree has yet to be determined, but should exceed 15 m. The tree is autumn-flowering but rarely does so, and has not produced seed. Pests and diseases 'Frontier' has good resistance to Dutch elm disease, rated 4 out of 5, but its tolerance of Elm Yellows transmitted via grafting in the United States was found to be poor. However, no mortalities are known to have occurred from the latter disease in the field, and the cultivar is not known to be vulnerable to infection through natural means. The tree can be heavily damaged by the Elm Leaf Beetle Xanthogaleruca luteola, although it fared better than most of the cultivars assessed at UC Davis, suffering little more than 10% foliar damage. Although susceptible to attack by the Japanese Beetle, it is far less seriously affected than most hybrid cultivars available in the United States. Cultivation In trials in eastern Arizona, 'Frontier' and another American hybrid, 'Regal', were found to have the highest tolerance of the hot and arid climate, notably exhibiting minimal leaf scorch. However, 'Frontier' is known to have sustained winter damage where planted in the Great Plains. This failing was repeated in the elm trials conducted by the University of Minnesota, although the tree often recovered the following year. 
It was also criticized there for its form and integrity, being considered "unsuitable" for urban forestry. 'Frontier' fared better in 10-year trials at Atherton, California, to evaluate replacements for Californian elms lost to disease: "Strong structure, rapid growth rate, attractive leaf color in spring and fall, and relatively low pruning requirement suggest that Frontier has promise...", although the tree again proved only moderately tolerant of elm leaf beetles. 'Frontier' has had a limited introduction to Europe, where it is largely restricted to arboreta and elm collections; it also featured in trials in New Zealand during the 1990s at the Hortresearch station, Palmerston North. Accessions North America Bartlett Tree Experts, US. Acc. nos. 2001–097/8/9 Brooklyn Botanic Garden, US. Acc. no. 20040606. Chicago Botanic Garden, US. No details available. Dawes Arboretum, Newark, Ohio, US. 3 trees. No acc. details available. Holden Arboretum, US. Acc. no. 95–140 Morton Arboretum, US. Acc. nos. 1284–2004, 433–2005, 270–2008 Parker Arboretum, US. No acc. details. Scott Arboretum, US. Acc. no. 91–242 Smith College, US. Acc. nos. 19804, 36005 University of Idaho Arboretum, US. Acc. no. 1995010 US National Arboretum, Washington, D.C., US. Acc. no. 68984 Europe Grange Farm Arboretum, Lincolnshire, UK. Acc. no. 504 Great Fontley Butterfly Conservation Elm Trials plantation, UK. One small sapling planted 2021 Nurseries North America Backyard Trees, Park Hill, Oklahoma, US. Carlton Plants, LLC, Dayton, Oregon, US. Charles J. Fiore, Prairie View, Illinois, US. ForestFarm, Williams, Oregon, US. Herd Farm Nursery, Belvedere, Tennessee, US. J. Frank Schmidt & Son Co., Boring, Oregon, US. Johnson's Nursery, Menomonee Falls, Wisconsin, US. Jost Greenhouses, Missouri, US. North American Plants, Lafayette, Oregon, US. Pea Ridge Forest, Hermann, Missouri, US. Sester Farms, Gresham, Oregon, US. Sun Valley Garden Centre, Eden Prairie, Minnesota, US. Europe Golden Hill Plants, Marden, UK. 
Pan-Global Plants, Frampton-on-Severn, Gloucestershire, UK. Van Den Berk (UK) Ltd., London, UK References External links http://www.extension.iastate.edu/Publications/SUL4.pdf Summary, inc. photographs, of elm cultivars resistant to Dutch elm disease available in the United States. https://web.archive.org/web/20030413074605/http://fletcher.ces.state.nc.us/programs/nursery/metria/metria11/warren/elm.htm Warren, K., J. Frank Schmidt & Son Co. (2002). The Status of Elms in the Nursery Industry in 2000. Hybrid elm cultivar Ulmus articles with images Ulmus
```scheme
prelude: :gerbil/compiler/ssxi
package: gerbil/core

(begin)
```
Anthony Cottrell (21 March 1806 – 4 May 1860) was a farmer and one of fifteen investors in the Port Phillip Association. He was the son of Ellen and William Cottrell, a farmer living in the South Esk County of Cornwall, Tasmania, and immigrated to Tasmania in 1824 on the 'Cumberland'. He later befriended John Batman and John Charles Darke, fellow Tasmanian investors, and moved to Port Phillip on the Yarra River in 1835 as one of the original settlers in what was to become Victoria. He later returned to Tasmania. He is officially remembered in the name of a hill and an outer western Melbourne suburb, Mount Cottrell, near Melton. Marriage and business interests In January 1835 he married Frances Solomon. Seven months later he moved to Port Phillip to take up the holdings he had acquired through the later disputed Batman Treaty. The young couple quickly had three children, who were among the first Europeans born in the new settlement: Ellen Lowes (1835), Anthony Crisp (1837), and Harriet Ann (1839). The extraordinary claims to large areas of land made by members of the Association after Joseph Gellibrand returned from his exploration in February 1836 were such that each began to ship stock to holdings of 40,000 acres (160 km²) or more. The members of the Association claimed in 1838 to have shipped between 500 and 2500 sheep each, and Cottrell is listed with 1000. The Government of New South Wales did not, however, allow them to keep the land acquired by "treaty" with the Aborigines, and it is not known what happened to Cottrell's claim or to his sheep. His share in the Association was sold to the banker and fellow member Charles Swanston in July 1838 for 411 pounds, corresponding to one seventeenth, after expenses, of the assessed value of the 10,416 acres (42 km²) which the Government eventually allowed the members to purchase. By March 1839 Cottrell was working as a stock agent, the first in Geelong, and as an auctioneer in the area west of Melbourne. 
Cottrell sold a wide range of Batman's possessions under orders from his executors, who sought to recover funds to repay Batman's creditors after his death in May 1839. Cottrell's acquisition of his place of business came by virtue of an interesting legal agreement with Batman in January 1839, for which the deed still exists: he undertook to pay Eliza Batman, John Batman's wife, who was about to leave for England, 60 pounds a year for the rest of her life in exchange for a peppercorn rental on a building in William Street. In September 1840 he returned to Tasmania, where several more children were born to Anthony and Frances Cottrell: William Joseph, later known as William Ostler (1842); Fanny Randall (1843); Sarah Alicia Barbara (1844); and Joseph Solomon (1846). Cottrell's original land on the Nile River passed into other hands in 1839, and he acquired another smaller property near Launceston that year. His later years were lived in Hobart, where Anthony Cottrell died at his home in Elphinstone Road on 4 May 1860, aged 54. References von Steiglitz, K. R., A History of Evandale, Evandale History Society, Rev. Ed., 1992, p. 27. Hopton, Arthur James, "A Pioneer of Two Colonies: John Pascoe Fawkner, 1792–1869", The Victorian Historical Magazine (VHM), 30, 103–250 (1960), p. 153. Crawford, the Hon. Mr. Justice, W. F. Ellis, and G. H. Stancombe (eds.), The Diaries of John Helder Wedge 1824–35, The Royal Society of Tasmania, 1962, Introduction p. xvi ff. Settlers of Melbourne 1806 births 1860 deaths English emigrants to colonial Australia Australian people of Cornish descent British emigrants to Australia 19th-century Australian businesspeople
```cpp
//===- llvm/Use.h - Definition of the Use class -----------------*- C++ -*-===//
//
// See path_to_url for license information.
//
//===your_sha256_hash------===//
/// \file
///
/// This defines the Use class. The Use class represents the operand of an
/// instruction or some other User instance which refers to a Value. The Use
/// class keeps the "use list" of the referenced value up to date.
///
/// Pointer tagging is used to efficiently find the User corresponding to a Use
/// without having to store a User pointer in every Use. A User is preceded in
/// memory by all the Uses corresponding to its operands, and the low bits of
/// one of the fields (Prev) of the Use class are used to encode offsets to be
/// able to find that User given a pointer to any Use. For details, see:
///
///   path_to_url#UserLayout
///
//===your_sha256_hash------===//

#ifndef LLVM_IR_USE_H
#define LLVM_IR_USE_H

#include "llvm-c/Types.h"
#include "llvm/ADT/PointerIntPair.h"
#include "llvm/Support/CBindingWrapping.h"
#include "llvm/Support/Compiler.h"

namespace llvm {

template <typename> struct simplify_type;
class User;
class Value;

/// A Use represents the edge between a Value definition and its users.
///
/// This is notionally a two-dimensional linked list. It supports traversing
/// all of the uses for a particular value definition. It also supports jumping
/// directly to the used value when we arrive from the User's operands, and
/// jumping directly to the User when we arrive from the Value's uses.
///
/// The pointer to the used Value is explicit, and the pointer to the User is
/// implicit. The implicit pointer is found via a waymarking algorithm
/// described in the programmer's manual:
///
///   path_to_url#the-waymarking-algorithm
///
/// This is essentially the single most memory intensive object in LLVM because
/// of the number of uses in the system. At the same time, the constant time
/// operations it allows are essential to many optimizations having reasonable
/// time complexity.
class Use {
public:
  Use(const Use &U) = delete;

  /// Provide a fast substitute to std::swap<Use>
  /// that also works with less standard-compliant compilers
  void swap(Use &RHS);

  /// Pointer traits for the UserRef PointerIntPair. This ensures we always
  /// use the LSB regardless of pointer alignment on different targets.
  struct UserRefPointerTraits {
    static inline void *getAsVoidPointer(User *P) { return P; }
    static inline User *getFromVoidPointer(void *P) { return (User *)P; }
    enum { NumLowBitsAvailable = 1 };
  };

  // A type for the word following an array of hung-off Uses in memory, which is
  // a pointer back to their User with the bottom bit set.
  using UserRef = PointerIntPair<User *, 1, unsigned, UserRefPointerTraits>;

  /// Pointer traits for the Prev PointerIntPair. This ensures we always use
  /// the two LSBs regardless of pointer alignment on different targets.
  struct PrevPointerTraits {
    static inline void *getAsVoidPointer(Use **P) { return P; }
    static inline Use **getFromVoidPointer(void *P) { return (Use **)P; }
    enum { NumLowBitsAvailable = 2 };
  };

private:
  /// Destructor - Only for zap()
  ~Use() {
    if (Val)
      removeFromList();
  }

  enum PrevPtrTag { zeroDigitTag, oneDigitTag, stopTag, fullStopTag };

  /// Constructor
  Use(PrevPtrTag tag) { Prev.setInt(tag); }

public:
  friend class Value;

  operator Value *() const { return Val; }
  Value *get() const { return Val; }

  /// Returns the User that contains this Use.
  ///
  /// For an instruction operand, for example, this will return the
  /// instruction.
  User *getUser() const LLVM_READONLY;

  inline void set(Value *Val);

  inline Value *operator=(Value *RHS);
  inline const Use &operator=(const Use &RHS);

  Value *operator->() { return Val; }
  const Value *operator->() const { return Val; }

  Use *getNext() const { return Next; }

  /// Return the operand # of this use in its User.
  unsigned getOperandNo() const;

  /// Initializes the waymarking tags on an array of Uses.
  ///
  /// This sets up the array of Uses such that getUser() can find the User from
  /// any of those Uses.
  static Use *initTags(Use *Start, Use *Stop);

  /// Destroys Use operands when the number of operands of
  /// a User changes.
  static void zap(Use *Start, const Use *Stop, bool del = false);

private:
  const Use *getImpliedUser() const LLVM_READONLY;

  Value *Val = nullptr;
  Use *Next = nullptr;
  PointerIntPair<Use **, 2, PrevPtrTag, PrevPointerTraits> Prev;

  void setPrev(Use **NewPrev) { Prev.setPointer(NewPrev); }

  void addToList(Use **List) {
    Next = *List;
    if (Next)
      Next->setPrev(&Next);
    setPrev(List);
    *List = this;
  }

  void removeFromList() {
    Use **StrippedPrev = Prev.getPointer();
    *StrippedPrev = Next;
    if (Next)
      Next->setPrev(StrippedPrev);
  }
};

/// Allow clients to treat uses just like values when using
/// casting operators.
template <> struct simplify_type<Use> {
  using SimpleType = Value *;

  static SimpleType getSimplifiedValue(Use &Val) { return Val.get(); }
};
template <> struct simplify_type<const Use> {
  using SimpleType = /*const*/ Value *;

  static SimpleType getSimplifiedValue(const Use &Val) { return Val.get(); }
};

// Create wrappers for C Binding types (see CBindingWrapping.h).
DEFINE_SIMPLE_CONVERSION_FUNCTIONS(Use, LLVMUseRef)

} // end namespace llvm

#endif // LLVM_IR_USE_H
```
```go
//
// path_to_url
//
// Unless required by applicable law or agreed to in writing, software
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

//go:build !go1.17
// +build !go1.17

package prometheus

import (
	"runtime"
	"sync"
	"time"
)

type goCollector struct {
	base baseGoCollector

	// ms... are memstats related.
	msLast          *runtime.MemStats // Previously collected memstats.
	msLastTimestamp time.Time
	msMtx           sync.Mutex // Protects msLast and msLastTimestamp.
	msMetrics       memStatsMetrics
	msRead          func(*runtime.MemStats) // For mocking in tests.
	msMaxWait       time.Duration           // Wait time for fresh memstats.
	msMaxAge        time.Duration           // Maximum allowed age of old memstats.
}

// NewGoCollector is the obsolete version of collectors.NewGoCollector.
// See there for documentation.
//
// Deprecated: Use collectors.NewGoCollector instead.
func NewGoCollector() Collector {
	msMetrics := goRuntimeMemStats()
	msMetrics = append(msMetrics, struct {
		desc    *Desc
		eval    func(*runtime.MemStats) float64
		valType ValueType
	}{
		// This metric is omitted in Go1.17+, see path_to_url#issuecomment-861812034
		desc: NewDesc(
			memstatNamespace("gc_cpu_fraction"),
			"The fraction of this program's available CPU time used by the GC since the program started.",
			nil, nil,
		),
		eval:    func(ms *runtime.MemStats) float64 { return ms.GCCPUFraction },
		valType: GaugeValue,
	})
	return &goCollector{
		base:      newBaseGoCollector(),
		msLast:    &runtime.MemStats{},
		msRead:    runtime.ReadMemStats,
		msMaxWait: time.Second,
		msMaxAge:  5 * time.Minute,
		msMetrics: msMetrics,
	}
}

// Describe returns all descriptions of the collector.
func (c *goCollector) Describe(ch chan<- *Desc) {
	c.base.Describe(ch)
	for _, i := range c.msMetrics {
		ch <- i.desc
	}
}

// Collect returns the current state of all metrics of the collector.
func (c *goCollector) Collect(ch chan<- Metric) {
	var (
		ms   = &runtime.MemStats{}
		done = make(chan struct{})
	)
	// Start reading memstats first as it might take a while.
	go func() {
		c.msRead(ms)
		c.msMtx.Lock()
		c.msLast = ms
		c.msLastTimestamp = time.Now()
		c.msMtx.Unlock()
		close(done)
	}()

	// Collect base non-memory metrics.
	c.base.Collect(ch)

	timer := time.NewTimer(c.msMaxWait)
	select {
	case <-done: // Our own ReadMemStats succeeded in time. Use it.
		timer.Stop() // Important for high collection frequencies to not pile up timers.
		c.msCollect(ch, ms)
		return
	case <-timer.C: // Time out, use last memstats if possible. Continue below.
	}

	c.msMtx.Lock()
	if time.Since(c.msLastTimestamp) < c.msMaxAge {
		// Last memstats are recent enough. Collect from them under the lock.
		c.msCollect(ch, c.msLast)
		c.msMtx.Unlock()
		return
	}
	// If we are here, the last memstats are too old or don't exist. We have
	// to wait until our own ReadMemStats finally completes. For that to
	// happen, we have to release the lock.
	c.msMtx.Unlock()
	<-done
	c.msCollect(ch, ms)
}

func (c *goCollector) msCollect(ch chan<- Metric, ms *runtime.MemStats) {
	for _, i := range c.msMetrics {
		ch <- MustNewConstMetric(i.desc, i.valType, i.eval(ms))
	}
}
```
```html <!DOCTYPE html> <!--[if IE]><![endif]--> <html> <head> <meta charset="utf-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"> <title>Class EdmondKarpMaxFlow&lt;T, TW&gt; | Advanced Algorithms </title> <meta name="viewport" content="width=device-width"> <meta name="title" content="Class EdmondKarpMaxFlow&lt;T, TW&gt; | Advanced Algorithms "> <meta name="generator" content="docfx 2.59.4.0"> <link rel="shortcut icon" href="../favicon.ico"> <link rel="stylesheet" href="../styles/docfx.vendor.css"> <link rel="stylesheet" href="../styles/docfx.css"> <link rel="stylesheet" href="../styles/main.css"> <meta property="docfx:navrel" content=""> <meta property="docfx:tocrel" content="toc.html"> <meta property="docfx:rel" content="../"> </head> <body data-spy="scroll" data-target="#affix" data-offset="120"> <div id="wrapper"> <header> <nav id="autocollapse" class="navbar navbar-inverse ng-scope" role="navigation"> <div class="container"> <div class="navbar-header"> <button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#navbar"> <span class="sr-only">Toggle navigation</span> <span class="icon-bar"></span> <span class="icon-bar"></span> <span class="icon-bar"></span> </button> <a class="navbar-brand" href="../index.html"> <img id="logo" class="svg" src="../logo.svg" alt=""> </a> </div> <div class="collapse navbar-collapse" id="navbar"> <form class="navbar-form navbar-right" role="search" id="search"> <div class="form-group"> <input type="text" class="form-control" id="search-query" placeholder="Search" autocomplete="off"> </div> </form> </div> </div> </nav> <div class="subnav navbar navbar-default"> <div class="container hide-when-search" id="breadcrumb"> <ul class="breadcrumb"> <li></li> </ul> </div> </div> </header> <div class="container body-content"> <div id="search-results"> <div class="search-list">Search Results for <span></span></div> <div class="sr-items"> <p><i class="glyphicon glyphicon-refresh index-loading"></i></p> 
</div> <ul id="pagination" data-first="First" data-prev="Previous" data-next="Next" data-last="Last"></ul> </div> </div> <div role="main" class="container body-content hide-when-search"> <div class="sidenav hide-when-search"> <a class="btn toc-toggle collapse" data-toggle="collapse" href="#sidetoggle" aria-expanded="false" aria-controls="sidetoggle">Show / Hide Table of Contents</a> <div class="sidetoggle collapse" id="sidetoggle"> <div id="sidetoc"></div> </div> </div> <div class="article row grid-right"> <div class="col-md-10"> <article class="content wrap" id="_content" data-uid="Advanced.Algorithms.Graph.EdmondKarpMaxFlow`2"> <h1 id="Advanced_Algorithms_Graph_EdmondKarpMaxFlow_2" data-uid="Advanced.Algorithms.Graph.EdmondKarpMaxFlow`2" class="text-break">Class EdmondKarpMaxFlow&lt;T, TW&gt; </h1> <div class="markdown level0 summary"><p>An Edmond Karp max flow implementation on weighted directed graph using adjacency list representation of graph and residual graph.</p> </div> <div class="markdown level0 conceptual"></div> <div class="inheritance"> <h5>Inheritance</h5> <div class="level0"><a class="xref" href="path_to_url">Object</a></div> <div class="level1"><span class="xref">EdmondKarpMaxFlow&lt;T, TW&gt;</span></div> </div> <div class="inheritedMembers"> <h5>Inherited Members</h5> <div> <a class="xref" href="path_to_url#system-object-tostring">Object.ToString()</a> </div> <div> <a class="xref" href="path_to_url#system-object-equals(system-object)">Object.Equals(Object)</a> </div> <div> <a class="xref" href="path_to_url#system-object-equals(system-object-system-object)">Object.Equals(Object, Object)</a> </div> <div> <a class="xref" href="path_to_url#system-object-referenceequals(system-object-system-object)">Object.ReferenceEquals(Object, Object)</a> </div> <div> <a class="xref" href="path_to_url#system-object-gethashcode">Object.GetHashCode()</a> </div> <div> <a class="xref" href="path_to_url#system-object-gettype">Object.GetType()</a> </div> <div> <a 
class="xref" href="path_to_url#system-object-memberwiseclone">Object.MemberwiseClone()</a> </div> </div> <h6><strong>Namespace</strong>: <a class="xref" href="Advanced.Algorithms.Graph.html">Advanced.Algorithms.Graph</a></h6> <h6><strong>Assembly</strong>: Advanced.Algorithms.dll</h6> <h5 id="Advanced_Algorithms_Graph_EdmondKarpMaxFlow_2_syntax">Syntax</h5> <div class="codewrapper"> <pre><code class="lang-csharp hljs">public class EdmondKarpMaxFlow&lt;T, TW&gt; where TW : IComparable</code></pre> </div> <h5 class="typeParameters">Type Parameters</h5> <table class="table table-bordered table-striped table-condensed"> <thead> <tr> <th>Name</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><span class="parametername">T</span></td> <td></td> </tr> <tr> <td><span class="parametername">TW</span></td> <td></td> </tr> </tbody> </table> <h3 id="constructors">Constructors </h3> <span class="small pull-right mobile-hide"> <span class="divider">|</span> <a href="path_to_url">Improve this Doc</a> </span> <span class="small pull-right mobile-hide"> <a href="path_to_url#L16">View Source</a> </span> <a id="Advanced_Algorithms_Graph_EdmondKarpMaxFlow_2__ctor_" data-uid="Advanced.Algorithms.Graph.EdmondKarpMaxFlow`2.#ctor*"></a> <h4 id=your_sha256_hashorithms_Graph_IFlowOperators__1__" data-uid="Advanced.Algorithms.Graph.EdmondKarpMaxFlow`2.#ctor(Advanced.Algorithms.Graph.IFlowOperators{`1})">EdmondKarpMaxFlow(IFlowOperators&lt;TW&gt;)</h4> <div class="markdown level1 summary"></div> <div class="markdown level1 conceptual"></div> <h5 class="decalaration">Declaration</h5> <div class="codewrapper"> <pre><code class="lang-csharp hljs">public EdmondKarpMaxFlow(IFlowOperators&lt;TW&gt; operator)</code></pre> </div> <h5 class="parameters">Parameters</h5> <table class="table table-bordered table-striped table-condensed"> <thead> <tr> <th>Type</th> <th>Name</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><a class="xref" 
href="Advanced.Algorithms.Graph.IFlowOperators-1.html">IFlowOperators</a>&lt;TW&gt;</td> <td><span class="parametername">operator</span></td> <td></td> </tr> </tbody> </table> <h3 id="methods">Methods </h3> <span class="small pull-right mobile-hide"> <span class="divider">|</span> <a href="path_to_url">Improve this Doc</a> </span> <span class="small pull-right mobile-hide"> <a href="path_to_url#L26">View Source</a> </span> <a id="Advanced_Algorithms_Graph_EdmondKarpMaxFlow_2_ComputeMaxFlow_" data-uid="Advanced.Algorithms.Graph.EdmondKarpMaxFlow`2.ComputeMaxFlow*"></a> <h4 id=your_sha256_hashanced_Algorithms_DataStructures_Graph_IDiGraph__0___0__0_" data-uid="Advanced.Algorithms.Graph.EdmondKarpMaxFlow`2.ComputeMaxFlow(Advanced.Algorithms.DataStructures.Graph.IDiGraph{`0},`0,`0)">ComputeMaxFlow(IDiGraph&lt;T&gt;, T, T)</h4> <div class="markdown level1 summary"><p>Compute max flow by searching a path and then augmenting the residual graph until no more path exists in residual graph with possible flow.</p> </div> <div class="markdown level1 conceptual"></div> <h5 class="decalaration">Declaration</h5> <div class="codewrapper"> <pre><code class="lang-csharp hljs">public TW ComputeMaxFlow(IDiGraph&lt;T&gt; graph, T source, T sink)</code></pre> </div> <h5 class="parameters">Parameters</h5> <table class="table table-bordered table-striped table-condensed"> <thead> <tr> <th>Type</th> <th>Name</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><a class="xref" href="Advanced.Algorithms.DataStructures.Graph.IDiGraph-1.html">IDiGraph</a>&lt;T&gt;</td> <td><span class="parametername">graph</span></td> <td></td> </tr> <tr> <td><span class="xref">T</span></td> <td><span class="parametername">source</span></td> <td></td> </tr> <tr> <td><span class="xref">T</span></td> <td><span class="parametername">sink</span></td> <td></td> </tr> </tbody> </table> <h5 class="returns">Returns</h5> <table class="table table-bordered table-striped table-condensed"> <thead> <tr> <th>Type</th> 
<th>Description</th> </tr> </thead> <tbody> <tr> <td><span class="xref">TW</span></td> <td></td> </tr> </tbody> </table> <span class="small pull-right mobile-hide"> <span class="divider">|</span> <a href="path_to_url">Improve this Doc</a> </span> <span class="small pull-right mobile-hide"> <a href="path_to_url#L52">View Source</a> </span> <a id=your_sha256_hasheturnResidualGraph_" data-uid="Advanced.Algorithms.Graph.EdmondKarpMaxFlow`2.ComputeMaxFlowAndReturnResidualGraph*"></a> <h4 id=your_sha256_hashyour_sha256_hashraph__0___0__0_" data-uid="Advanced.Algorithms.Graph.EdmondKarpMaxFlow`2.ComputeMaxFlowAndReturnResidualGraph(Advanced.Algorithms.DataStructures.Graph.IDiGraph{`0},`0,`0)">ComputeMaxFlowAndReturnResidualGraph(IDiGraph&lt;T&gt;, T, T)</h4> <div class="markdown level1 summary"><p>Compute max flow by searching a path and then augmenting the residual graph until no more path exists in residual graph with possible flow.</p> </div> <div class="markdown level1 conceptual"></div> <h5 class="decalaration">Declaration</h5> <div class="codewrapper"> <pre><code class="lang-csharp hljs">public WeightedDiGraph&lt;T, TW&gt; ComputeMaxFlowAndReturnResidualGraph(IDiGraph&lt;T&gt; graph, T source, T sink)</code></pre> </div> <h5 class="parameters">Parameters</h5> <table class="table table-bordered table-striped table-condensed"> <thead> <tr> <th>Type</th> <th>Name</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><a class="xref" href="Advanced.Algorithms.DataStructures.Graph.IDiGraph-1.html">IDiGraph</a>&lt;T&gt;</td> <td><span class="parametername">graph</span></td> <td></td> </tr> <tr> <td><span class="xref">T</span></td> <td><span class="parametername">source</span></td> <td></td> </tr> <tr> <td><span class="xref">T</span></td> <td><span class="parametername">sink</span></td> <td></td> </tr> </tbody> </table> <h5 class="returns">Returns</h5> <table class="table table-bordered table-striped table-condensed"> <thead> <tr> <th>Type</th> <th>Description</th> 
</tr> </thead> <tbody> <tr> <td><a class="xref" href="Advanced.Algorithms.DataStructures.Graph.AdjacencyList.WeightedDiGraph-2.html">WeightedDiGraph</a>&lt;T, TW&gt;</td> <td></td> </tr> </tbody> </table> </article> </div> <div class="hidden-sm col-md-2" role="complementary"> <div class="sideaffix"> <div class="contribution"> <ul class="nav"> <li> <a href="path_to_url" class="contribution-link">Improve this Doc</a> </li> <li> <a href="path_to_url#L12" class="contribution-link">View Source</a> </li> </ul> </div> <nav class="bs-docs-sidebar hidden-print hidden-xs hidden-sm affix" id="affix"> <h5>In This Article</h5> <div></div> </nav> </div> </div> </div> </div> <footer> <div class="grad-bottom"></div> <div class="footer"> <div class="container"> <span class="pull-right"> <a href="#top">Back to top</a> </span> <span>Generated by <strong>DocFX</strong></span> </div> </div> </footer> </div> <script type="text/javascript" src="../styles/docfx.vendor.js"></script> <script type="text/javascript" src="../styles/docfx.js"></script> <script type="text/javascript" src="../styles/main.js"></script> </body> </html> ```
John Craig of Glasgow was elected in 1818 to the Fellowship of the Royal Society of Edinburgh, from which he resigned about 1840. Otherwise, very little is known about his life. Craig wrote the three-volume The Elements of Political Science (1814) and Remarks on Some Fundamental Doctrines in Political Economy (1821). He studied under the philosopher John Millar. Bruce (1938) states that Craig "is continually on the verge of expressing the idea of marginal utility", became "the first exponent of the idea of the connection between utility and value", and "comes close to expressing the idea of consumers' surplus". Craig "not only pioneered in opposing the theories of his time, but laid the groundwork for the utility theory to follow": "His outstanding contribution to the discussion of wages is his opposition to the theories of the classical school: first, their theory that wages are fixed by the standard of life; second, the theory that a tax on wages necessarily raises wages; and third, the doctrine that wages and profits must vary inversely to each other." Joseph Schumpeter (1954) states that Craig's Fundamental Doctrines was "a performance of considerable merit" and "a whole Marshall in nuce" (in a nutshell). References 19th-century British economists British economists
Mike McLarnon is a Canadian former politician who represented the electoral district of Whitehorse Centre in the Yukon Legislative Assembly from 2000 to 2002. McLarnon was born and raised in Whitehorse, Yukon, and operated several businesses there until assuming political office. While in office, he served as Deputy Speaker of the Yukon Legislature and Chair of the Committee of the Whole, and sat on the Standing Committee on Public Accounts on behalf of the Yukon. He was elected as a member of the Yukon Liberal Party in the 2000 election, beating the incumbent NDP MLA Todd Hardy and the Yukon Party candidate Vicky Durrant by a wide margin. In 2002, however, he was one of three MLAs, along with Wayne Jim and Don Roberts, who resigned from the Liberal caucus to protest the leadership of Pat Duncan, reducing Duncan's government to a minority. He continued to sit as an independent until the following election was called; analysts suggested that the party's new minority status, and the resulting pressure from the opposition, eventually brought about the end of the Liberal government. In the resulting 2002 election, McLarnon ran as an independent candidate and was defeated by New Democratic Party leader Todd Hardy. Before operating his own business and running for political office, McLarnon studied at Carleton University in Ottawa and the British Columbia Institute of Technology in Vancouver; while at Carleton, he served in the House of Commons Page Program. McLarnon is married and continues to live in Whitehorse. He is a recipient of the Queen Elizabeth II Golden Jubilee Medal for his work in representing the political interests of Yukoners. 
References British Columbia Institute of Technology alumni Yukon Liberal Party MLAs Living people 1965 births Politicians from Whitehorse
Jethro Johnson McCullough (March 8, 1810 – May 25, 1878) was an American politician and businessman from Maryland. He served as a member of the Maryland House of Delegates, representing Cecil County from 1865 to 1867. Early life Jethro Johnson McCullough was born on March 8, 1810, in White Clay Creek Hundred near Newark, Delaware, to Enoch McCullough, a carpet maker and weaver. At the age of six, McCullough went to work at the Roseville cotton factory, where he remained for two years. At the age of eight, he began working in his father's carpet and coverlet weaving shop, and stayed there until his father's death in 1827. He then worked for a farmer for one year before apprenticing for three years to become a millwright. Career McCullough next worked as a journeyman in Manayunk, Philadelphia, for two years, then started his own business as a millwright, working in Chester County, Pennsylvania; New Castle County, Delaware; and Cecil County, Maryland. In 1842, McCullough went into business with C. P. (or C. J.) Marshall and J. Marshall in a small rolling mill on Red Clay Creek near Stanton, Delaware, later named the Marshallton Mill, where he remained for five years. On February 2, 1847, McCullough purchased a large property in North East, Maryland, and moved there in March of that year; he also formed the partnership McCullough & Co in 1847. In 1853, he purchased the West Amwell Iron Works near Elkton and built the West Amwell Mill, and in that same year the business began manufacturing galvanized iron. In 1856, McCullough purchased the "Stony Chase" property near North East and built the Shannon Mill. In 1857, he purchased the Rowlandsville Mill. In 1861, the iron company with which he was associated was incorporated as the McCullough Iron Company of Cecil. In 1863, a steam mill was established in North East to manufacture iron. The McCullough Iron Company was reincorporated in 1865. 
McCullough sided with the Union in the Civil War. He was elected as county commissioner of Cecil County in 1855 and 1859. McCullough was a Republican. He served as a member of the Maryland House of Delegates, representing Cecil County, from 1865 to 1867. He was a supporter of the temperance movement. Personal life McCullough married Elizabeth Tull, daughter of John Tull, of Cecil County on January 2, 1834. They had nine sons and one daughter, including Enoch, George, John and Samuel D. His son Samuel served as town commissioner and town treasurer of North East. He lived in North East from 1847 to the 1860s. He then moved to Wilmington, Delaware. He was a member of the Methodist Episcopal church. McCullough died on May 25, 1878, at the home of Mrs. George Smyth in Philadelphia. He was buried at the Methodist Episcopal Church cemetery in North East. References 1810 births 1878 deaths People from Newark, Delaware People from Cecil County, Maryland Politicians from Wilmington, Delaware Businesspeople from Wilmington, Delaware County commissioners in Maryland Republican Party members of the Maryland House of Delegates 19th-century American politicians
```javascript
OC.L10N.register(
    "systemtags",
    {
    "System tag %1$s added by the system" : "Systemtagg %1$s tillagt av systemet",
    "Added system tag {systemtag}" : "La till systemtagg {systemtag}",
    "Added system tag %1$s" : "La till systemtagg %1$s",
    "%1$s added system tag %2$s" : "%1$s la till systemtagg %2$s",
    "{actor} added system tag {systemtag}" : "{actor} la till systemtagg {systemtag}",
    "System tag %1$s removed by the system" : "Systemtagg %1$s borttagen av systemet",
    "Removed system tag {systemtag}" : "Tog bort systemtagg {systemtag}",
    "Removed system tag %1$s" : "Tog bort systemtagg %1$s",
    "%1$s removed system tag %2$s" : "%1$s tog bort systemtagg %2$s",
    "{actor} removed system tag {systemtag}" : "{actor} tog bort systemtagg {systemtag}",
    "You created system tag %1$s" : "Du skapade systemtagg %1$s",
    "You created system tag {systemtag}" : "Du skapade systemtagg {systemtag}",
    "%1$s created system tag %2$s" : "%1$s skapade systemtagg %2$s",
    "{actor} created system tag {systemtag}" : "{actor} skapade systemtagg {systemtag}",
    "You deleted system tag %1$s" : "Du tog bort systemtagg %1$s",
    "You deleted system tag {systemtag}" : "Du tog bort systemtagg {systemtag}",
    "%1$s deleted system tag %2$s" : "%1$s tog bort systemtagg %2$s",
    "{actor} deleted system tag {systemtag}" : "{actor} tog bort systemtagg {systemtag}",
    "You updated system tag %2$s to %1$s" : "Du uppdaterade systemtagg %2$s till %1$s",
    "You updated system tag {oldsystemtag} to {newsystemtag}" : "Du uppdaterade systemtagg {oldsystemtag} till {newsystemtag}",
    "%1$s updated system tag %3$s to %2$s" : "%1$s uppdaterade systemtagg %3$s till %2$s",
    "{actor} updated system tag {oldsystemtag} to {newsystemtag}" : "{actor} uppdaterade systemtagg {oldsystemtag} till {newsystemtag}",
    "System tag %2$s was added to %1$s by the system" : "Systemtagg %2$s adderades till %1$s av systemet",
    "System tag {systemtag} was added to {file} by the system" : "Systemtagg {systemtag} adderades till {file} av systemet",
    "You added system tag %2$s to %1$s" : "Du la till systemtagg %2$s på %1$s",
    "You added system tag {systemtag} to {file}" : "Du la till systemtagg {systemtag} på {file}",
    "%1$s added system tag %3$s to %2$s" : "%1$s la till systemtagg %3$s på %2$s",
    "{actor} added system tag {systemtag} to {file}" : "{actor} la till systemtagg {systemtag} på {file}",
    "System tag %2$s was removed from %1$s by the system" : "Systemtagg %2$s togs bort från %1$s av systemet",
    "System tag {systemtag} was removed from {file} by the system" : "Systemtagg {systemtag} togs bort från {file} av systemet",
    "You removed system tag %2$s from %1$s" : "Du tog bort systemtagg %2$s från %1$s",
    "You removed system tag {systemtag} from {file}" : "Du tog bort systemtagg {systemtag} från {file}",
    "%1$s removed system tag %3$s from %2$s" : "%1$s tog bort systemtagg %3$s från %2$s",
    "{actor} removed system tag {systemtag} from {file}" : "{actor} tog bort systemtagg {systemtag} från {file}",
    "%s (restricted)" : "%s (begränsad)",
    "%s (invisible)" : "%s (osynlig)",
    "<strong>System tags</strong> for a file have been modified" : "<strong>Systemtaggar</strong> för en fil har blivit ändrade",
    "Files" : "Filer",
    "Tags" : "Taggar",
    "All tagged %s " : "Alla taggade %s ",
    "tagged %s" : "taggade %s",
    "Collaborative tags" : "Samarbetstaggar",
    "Collaborative tagging functionality which shares tags among people." : "Samarbetande tagg-funktionalitet som delar taggar bland användare.",
    "Collaborative tagging functionality which shares tags among people. Great for teams.\n\t(If you are a provider with a multi-tenancy installation, it is advised to deactivate this app as tags are shared.)" : "Samarbetande tagg-funktionalitet som delar taggar bland användare. Utmärkt för arbetsgrupper.\n\t(Om du är en leverantör med flera kunder, rekommenderas att deaktivera den här appen eftersom taggar delas.)",
    "Create or edit tags" : "Skapa eller redigera taggar",
    "Search for a tag to edit" : "Sök efter en tagg att redigera",
    "Collaborative tags …" : "Samarbetstaggar …",
    "No tags to select" : "Inga taggar att välja",
    "Tag name" : "Namn på tagg",
    "Tag level" : "Taggnivå",
    "Create" : "Skapa",
    "Update" : "Uppdatera",
    "Delete" : "Ta bort",
    "Reset" : "Återställ",
    "Loading …" : "Laddar …",
    "Public" : "Offentlig",
    "Restricted" : "Begränsad",
    "Invisible" : "Osynlig",
    "Created tag" : "Skapade tagg",
    "Failed to create tag" : "Det gick inte att skapa tagg",
    "Updated tag" : "Uppdaterade tagg",
    "Failed to update tag" : "Kunde inte uppdatera tagg",
    "Deleted tag" : "Raderade tagg",
    "Failed to delete tag" : "Det gick inte att ta bort tagg",
    "Loading collaborative tags …" : "Läser in samarbetstaggar …",
    "Search or create collaborative tags" : "Sök eller skapa samarbetstaggar",
    "No tags to select, type to create a new tag" : "Inga taggar att välja, skriv för att skapa en ny tagg",
    "Failed to load tags" : "Kunde inte läsa in taggar",
    "Failed to load selected tags" : "Det gick inte att läsa in valda taggar",
    "Failed to select tag" : "Det gick inte att välja tagg",
    "Collaborative tags are available for all users. Restricted tags are visible to users but cannot be assigned by them. Invisible tags are for internal use, since users cannot see or assign them." : "Samarbetstaggar är tillgängliga för alla användare. Begränsade taggar är synliga för användarna men kan inte tilldelas av dem. Osynliga taggar är för internt bruk, eftersom användarna inte kan se eller tilldela dem.",
    "Assigned collaborative tags" : "Tilldelade samarbetstaggar",
    "Open in Files" : "Öppna i Filer",
    "List of tags and their associated files and folders." : "Lista över taggar och deras tillhörande filer och mappar.",
    "No tags found" : "Inga taggar hittades",
    "Tags you have created will show up here." : "Taggar du har skapat kommer att visas här.",
    "Failed to load last used tags" : "Det gick inte att läsa in senast använda taggar",
    "Missing \"Content-Location\" header" : "\"Content-Location\" header saknas",
    "Failed to load tags for file" : "Kunde inte läsa in taggar för filen",
    "Failed to set tag for file" : "Kunde inte sätta tagg för filen",
    "Failed to delete tag for file" : "Kunde inte ta bort tagg för filen",
    "Collaborative tagging functionality which shares tags among users." : "Samarbetande tagg-funktionalitet som delar taggar bland användare.",
    "Collaborative tagging functionality which shares tags among users. Great for teams.\n\t(If you are a provider with a multi-tenancy installation, it is advised to deactivate this app as tags are shared.)" : "Samarbetande tagg-funktionalitet som delar taggar bland användare. Utmärkt för arbetsgrupper.\n\t(Om du är en leverantör med flera kunder, rekommenderas att deaktivera den här appen eftersom taggar delas.)",
    "This file has the tag {tag}" : "Den här filen har taggen {tag}",
    "This file has the tags {firstTags} and {lastTag}" : "Den här filen har taggarna {firstTags} och {lastTag}",
    "No files in here" : "Inga filer här inne",
    "No entries found in this folder" : "Ingenting hittades i denna mapp",
    "Name" : "Namn",
    "Size" : "Storlek",
    "Modified" : "Modifierad"
},
"nplurals=2; plural=(n != 1);");
```
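The catalog above uses positional `%1$s`/`%2$s` placeholders, which let a translation reorder arguments (as in "You updated system tag %2$s to %1$s"). A minimal sketch of how such substitution might work follows; this is not Nextcloud's actual implementation, and the helper name `substitute` is hypothetical.

```javascript
// Replace positional placeholders of the form %N$s with the N-th argument
// (1-based). This mirrors the placeholder syntax used in the catalog above.
function substitute(template, ...args) {
  return template.replace(/%(\d+)\$s/g, (match, n) => args[n - 1]);
}

// Arguments can be consumed out of order, as a translation may require:
console.log(substitute('Du uppdaterade systemtagg %2$s till %1$s', 'new', 'old'));
// Du uppdaterade systemtagg old till new
```

The `{systemtag}`-style placeholders in the same file are handled separately at render time, since they are substituted with rich (linked) values rather than plain strings.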
Sakargah is a town and one of twenty union councils in Battagram District in the Khyber Pakhtunkhwa Province of Pakistan. References Union councils of Battagram District Populated places in Battagram District
Anstruther Easter in Fife was a royal burgh, created in 1583, that returned one commissioner to the Parliament of Scotland and to the Convention of Estates. After the Acts of Union 1707, Anstruther Easter, Anstruther Wester, Crail, Kilrenny and Pittenweem formed the Anstruther Easter district of burghs, returning one member between them to the House of Commons of Great Britain. List of burgh commissioners 1661–63: Alexander Black, councillor 1665 convention: Andrew Martins, bailie 1667 convention: not represented 1673–74: Alexander Gibson 1678 convention: James Lauson, bailie 1681–82: Robert Anstruther 1685–86: Robert Innes of Blairtoun, councillor 1689 convention, 1689–90: David Spence, former baillie of Edinburgh (expelled 1693) 1696–1701: Patrick Murray of Dullary 1702–07: Sir John Anstruther References See also List of constituencies in the Parliament of Scotland at the time of the Union Constituencies of the Parliament of Scotland (to 1707) Politics of Fife History of Fife Constituencies disestablished in 1707 1707 disestablishments in Scotland
The Marbel River, also known as the Marbol River and the Sulphur River, is a river in the province of Cotabato in the Philippines. It is located at the foot of Mount Apo. The Marbel River connects to the Kabacan River in Cotabato, a tributary of the Pulangi River which empties into the Rio Grande de Mindanao in Cotabato City. Landforms of Cotabato Rivers of the Philippines
The Homorod () is a right tributary of the river Mureș in Romania. It discharges into the Mureș near Geoagiu. Its length is and its basin size is . References Rivers of Romania Rivers of Hunedoara County
```python
from .serializer import DictSerializer, JSONSerializer, SerializationBase  # noqa
```
```java
package it.sephiroth.android.library.bottomnavigation.app;

import android.animation.Animator;
import android.app.Activity;
import android.util.Log;
import android.view.View;
import android.view.ViewGroup;

import com.readystatesoftware.systembartint.SystemBarTintManager;

import androidx.annotation.NonNull;
import androidx.appcompat.widget.Toolbar;
import androidx.recyclerview.widget.RecyclerView;
import it.sephiroth.android.library.bottomnavigation.MiscUtils;

/**
 * Created by crugnola on 6/22/16.
 * BottomNavigation
 */
public class ToolbarScrollHelper extends RecyclerView.OnScrollListener
    implements View.OnAttachStateChangeListener, View.OnLayoutChangeListener {

    private static final String TAG = ToolbarScrollHelper.class.getSimpleName();
    private static final int ANIMATION_DURATION = 150;

    private final ScrollHelper scrollHelper;
    private int toolbarHeight;
    private Toolbar toolbar;
    private boolean expanding;
    private boolean collapsing;
    private boolean dragging;
    private boolean enabled;

    public ToolbarScrollHelper(@NonNull final Activity activity, @NonNull final Toolbar toolbar) {
        this.enabled = false;
        this.scrollHelper = new ScrollHelper();
        this.toolbar = toolbar;
        this.toolbarHeight = setupToolbar(activity);
        this.toolbar.addOnLayoutChangeListener(this);
        if (toolbarHeight > 0) {
            scrollHelper.setRange(-toolbarHeight, 0);
            enabled = true;
        }
    }

    public void setEnabled(final boolean enabled) {
        this.enabled = enabled;
    }

    private int setupToolbar(final Activity activity) {
        SystemBarTintManager manager = new SystemBarTintManager(activity);
        final SystemBarTintManager.SystemBarConfig config = manager.getConfig();
        if (config.getPixelInsetTop(false) > 0) {
            ViewGroup.MarginLayoutParams params = (ViewGroup.MarginLayoutParams) toolbar.getLayoutParams();
            params.topMargin = 0;
            params.height = config.getActionBarHeight() + config.getStatusBarHeight();
            toolbar.setLayoutParams(params);
            toolbar.setPadding(
                toolbar.getPaddingLeft(),
                toolbar.getPaddingTop() + config.getStatusBarHeight(),
                toolbar.getPaddingRight(),
                toolbar.getPaddingBottom()
            );
            return params.height;
        }
        return config.getActionBarHeight();
    }

    public int getToolbarHeight() {
        return toolbarHeight;
    }

    public void initialize(@NonNull final RecyclerView recyclerView) {
        recyclerView.addOnScrollListener(this);
        recyclerView.addOnAttachStateChangeListener(this);
    }

    @Override
    public void onScrollStateChanged(final RecyclerView recyclerView, final int newState) {
        super.onScrollStateChanged(recyclerView, newState);
        if (!enabled) {
            return;
        }
        if (newState == RecyclerView.SCROLL_STATE_IDLE) {
            dragging = false;
            if (scrollHelper.inRange()) {
                expand(true);
            } else {
                if (scrollHelper.getCurrentScroll() > -toolbarHeight / 2) {
                    if (!expanding) {
                        expand(true);
                    }
                } else if (scrollHelper.getCurrentScroll() < -toolbarHeight / 2) {
                    if (!collapsing) {
                        collapse(true);
                    }
                }
            }
        }
    }

    @Override
    public void onScrolled(final RecyclerView recyclerView, final int dx, final int dy) {
        super.onScrolled(recyclerView, dx, dy);
        if (!enabled) {
            return;
        }
        scrollHelper.scroll(-dy);
        if (scrollHelper.inRange() || dragging) {
            // MiscUtils.log(TAG, Log.DEBUG, "inRange: " + toolbar.getTranslationY() + ", " + scrollHelper.getCurrentScroll());
            toolbar.setTranslationY(scrollHelper.clamp(toolbar.getTranslationY() - dy));
            dragging = scrollHelper.valueInRange(toolbar.getTranslationY());
        } else {
            if (dy < 0 && scrollHelper.getCurrentScroll() > -toolbarHeight / 2) {
                if (!expanding) {
                    expand(true);
                }
            } else if (dy > 0 && scrollHelper.getCurrentScroll() < -toolbarHeight / 2) {
                if (!collapsing) {
                    collapse(true);
                }
            }
        }
    }

    private void expand(final boolean animate) {
        if (animate) {
            expanding = true;
            toolbar.animate().cancel();
            toolbar
                .animate()
                .translationY(0)
                .setListener(new Animator.AnimatorListener() {
                    @Override
                    public void onAnimationStart(final Animator animation) { }

                    @Override
                    public void onAnimationEnd(final Animator animation) {
                        onAnimationCompleted();
                        expanding = false;
                    }

                    @Override
                    public void onAnimationCancel(final Animator animation) {
                        onAnimationCompleted();
                        expanding = false;
                    }

                    @Override
                    public void onAnimationRepeat(final Animator animation) { }
                })
                .setDuration(ANIMATION_DURATION)
                .start();
        } else {
            toolbar.setTranslationY(0);
            onAnimationCompleted();
        }
    }

    private void collapse(final boolean animate) {
        if (animate) {
            collapsing = true;
            toolbar.animate().cancel();
            toolbar
                .animate()
                .translationY(-toolbarHeight)
                .setDuration(ANIMATION_DURATION)
                .setListener(new Animator.AnimatorListener() {
                    @Override
                    public void onAnimationStart(final Animator animation) { }

                    @Override
                    public void onAnimationEnd(final Animator animation) {
                        onAnimationCompleted();
                        collapsing = false;
                    }

                    @Override
                    public void onAnimationCancel(final Animator animation) {
                        onAnimationCompleted();
                        collapsing = false;
                    }

                    @Override
                    public void onAnimationRepeat(final Animator animation) { }
                })
                .start();
        } else {
            toolbar.setTranslationY(-toolbarHeight);
            onAnimationCompleted();
        }
    }

    private void onAnimationCompleted() {
        scrollHelper.setCurrent(toolbar.getTranslationY());
    }

    public boolean isCollapsing() {
        return collapsing;
    }

    public boolean isExpanded() {
        if (isAnimating()) {
            return isExpanding();
        } else {
            return toolbar.getTranslationY() == 0;
        }
    }

    public boolean isAnimating() {
        return expanding || collapsing;
    }

    public boolean isExpanding() {
        return expanding;
    }

    public void setExpanded(boolean expanded, boolean animate) {
        if (!enabled) {
            return;
        }
        if (expanded) {
            expand(animate);
        } else {
            collapse(animate);
        }
    }

    @Override
    public void onViewAttachedToWindow(final View v) { }

    @Override
    public void onViewDetachedFromWindow(final View v) {
        MiscUtils.INSTANCE.log(Log.INFO, "onViewDetachedFromWindow: " + v);
        ((RecyclerView) v).removeOnScrollListener(this);
        if (null != toolbar) {
            this.toolbar.removeOnLayoutChangeListener(this);
        }
        this.toolbar = null;
        this.enabled = false;
    }

    @Override
    public void onLayoutChange(
        final View v, final int left, final int top, final int right, final int bottom,
        final int oldLeft, final int oldTop, final int oldRight, final int oldBottom) {
        final int height = bottom - top;
        if (height > 0 && height != toolbarHeight) {
            MiscUtils.INSTANCE.log(Log.VERBOSE, "height: " + height);
            toolbarHeight = height;
            enabled = true;
            scrollHelper.setRange(-toolbarHeight, 0);
        }
    }

    public static class ScrollHelper {
        float max;
        float min;
        float total;
        float current;

        public void setRange(float min, float max) {
            setMin(min);
            setMax(max);
        }

        public void setMin(final float min) {
            this.min = min;
        }

        public void setMax(final float max) {
            this.max = max;
        }

        public boolean inRange() {
            return total <= max && total >= min;
        }

        public void setCurrent(float current) {
            this.current = clamp(current);
        }

        public float clamp(float value) {
            return Math.max(min, Math.min(max, value));
        }

        public void scroll(float dy) {
            total += dy;
            current = clamp(current + dy);
        }

        public float getCurrentScroll() {
            return current;
        }

        public float getTotalScroll() {
            return total;
        }

        public boolean valueInRange(float value) {
            // Was `value > min || value < max`, which is true for every value
            // of any non-empty range; strictly-inside needs both bounds.
            return value > min && value < max;
        }
    }
}
```
Bangladesh Chemical Industries Corporation College, BCIC College, is a private higher secondary educational institution in Dhaka, Bangladesh. It is situated at Mirpur near the National Zoo of Bangladesh. Bangladesh Chemical Industries Corporation operates the institution. The intelligence officer of the Bangladesh Army manages this institution. It is a coeducational institution with different shifts for girls and boys. It operates in two shifts (morning and day) and has over 2000 students. The institution has two separate campuses, one for the school and one for the college. Both campuses are situated beside each other and operated by one chief principal. The school section has 1200 students, offering education from first to tenth grade. The college section offers eleventh- and twelfth-grade education, with 900 students in total across the Science, Commerce, and Humanities groups. History BCIC College commenced its journey with the school section in 1983. The Science Group, the Business Studies Group and the Humanities Group of the college section started their operations in 1991, 1996 and 1997 respectively. Co-curricular activities BCIC College has had many successes in co-curricular activities. The college has won prizes in different categories at the National Education Week competitions, the National Drama Competition at NDC, EEE Day, the 1st National Television Debate, the Math Olympiad and science project competitions, including Victory Flower. Sports Cultural Activities Bangladesh Scouts There is also a four-house system: 1. Shahjalal House, 2. Karnaphuli House, 3. Ashuganj House, 4. Jamuna House. References External links Colleges in Dhaka District Universities and colleges in Dhaka 1983 establishments in Bangladesh Educational institutions established in 1983
The Original Mob is an album by jazz drummer Jimmy Cobb. It was released by Smoke Sessions. Background Cobb met the other three musicians on this album – pianist Brad Mehldau, guitarist Peter Bernstein, and bassist John Webber – "at workshops that he led at the New School University in New York in the early 1990s". The four recorded together on Bernstein's first album: Somethin's Burnin', released in 1994. Music and recording The album was recorded at Smoke when the club was closed. Cobb wrote two of the tracks – "Composition 101" and "Remembering You". Other tracks are standards or written by band members. The album was released by Smoke Sessions in 2014. Track listing "Old Devil Moon" "Amsterdam After Dark" "Sunday in New York" "Stranger in Paradise" "Unrequited" "Composition 101" "Remembering U" "Nobody Else But Me" "Minor Blues" "Lickety Split" Personnel Brad Mehldau – piano Peter Bernstein – guitar John Webber – bass Jimmy Cobb – drums References Smoke Sessions Records albums Jimmy Cobb albums
```yaml
---
parsed_sample:
  - appid_services: ""
    base_os_boot: ""
    base_os_software_suite: ""
    border_gateway_function_package: ""
    crypto_software_suite: ""
    fips_mode_utilities: "13.2X51-D35.3"
    hostname: "lab"
    idp_services: ""
    junos_version: "13.2X51-D35.3"
    kernel_software_suite: ""
    lab_package: ""
    model: "ex4550-32f"
    online_documentation: "13.2X51-D35.3"
    other_device_properties:
      - "EX 4500 Software Suite "
      - "Web Management "
      - "EX 4500 Software Suite "
      - "Web Management "
    other_properties_versions:
      - "13.2X51-D35.3"
      - "13.2X51-D35.3"
      - "13.2X51-D35.3"
      - "13.2X51-D35.3"
    packet_forwarding_engine_support_m_t_ex_common: ""
    packet_forwarding_engine_support_mx_common: ""
    platform_software_suite: ""
    py_base_i386: ""
    qfabric_system_id: ""
    redis_version: ""
    routing_software_suite: ""
    runtime_software_suite: ""
    serial_number: ""
    services_aacl_container_package: ""
    services_application_level_gateways: ""
    services_captive_portal_content_delivery_package: ""
    services_crypto: ""
    services_http_content_management_package: ""
    services_ipsec: ""
    services_jflow_container_package: ""
    services_ll_pdf_container_package: ""
    services_mobile_subscriber_service_package: ""
    services_mobilenext_software_package: ""
    services_nat: ""
    services_ptsp_container_package: ""
    services_rpm: ""
    services_ssl: ""
    services_stateful_firewall: ""
    voice_services_container_package: ""
```
```javascript
import React, { PropTypes } from 'react';
import { Provider } from 'react-redux';
import { Router, RoutingContext } from 'react-router';
import invariant from 'invariant';

import configRoutes from '../routes';

const propTypes = {
  routerHistory: PropTypes.object.isRequired,
  store: PropTypes.object.isRequired
};

const Root = ({ routerHistory, store }) => {
  invariant(
    routerHistory,
    '<Root /> needs either a routingContext or routerHistory to render.'
  );

  return (
    <Provider store={store}>
      <Router history={routerHistory}>
        {configRoutes(store)}
      </Router>
    </Provider>
  );
};

Root.propTypes = propTypes;

export default Root;
```
Orlando Palmeiro (born January 19, 1969) is an American former Major League Baseball outfielder. He attended high school at Miami Southridge High School and played college baseball at the University of Miami. Palmeiro, a star high school player in Miami, Florida, went on to play baseball at the community college level at Miami-Dade Community College South and then briefly at the University of Miami under UM coach Ron Fraser before being drafted by the California Angels, in whose farm system he played for several years before reaching the majors. Palmeiro spent his entire career as a backup outfielder, never having been a regular starter. He was the fourth outfielder of the 2002 World Series Champion Anaheim Angels team, batting .300 for the year. Palmeiro also made the last out of the 2005 World Series for the Houston Astros. His best season was arguably with the St. Louis Cardinals in 2003, when he batted .271 with 3 home runs and 33 RBI. He is the cousin of Rafael Palmeiro. See also List of Cuban Americans References External links 1969 births Living people Anaheim Angels players California Angels players Houston Astros players Major League Baseball designated hitters Major League Baseball outfielders Miami Hurricanes baseball players St. Louis Cardinals players Sportspeople from Hoboken, New Jersey Baseball players from Hudson County, New Jersey American expatriate baseball players in Canada Boise Hawks players Midland Angels players Quad Cities River Bandits players Vancouver Canadians players Miami Southridge Senior High School alumni
```go
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build darwin dragonfly freebsd linux netbsd openbsd solaris

package unix_test

import (
	"fmt"
	"testing"

	"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/sys/unix"
)

func testSetGetenv(t *testing.T, key, value string) {
	err := unix.Setenv(key, value)
	if err != nil {
		t.Fatalf("Setenv failed to set %q: %v", value, err)
	}
	newvalue, found := unix.Getenv(key)
	if !found {
		t.Fatalf("Getenv failed to find %v variable (want value %q)", key, value)
	}
	if newvalue != value {
		t.Fatalf("Getenv(%v) = %q; want %q", key, newvalue, value)
	}
}

func TestEnv(t *testing.T) {
	testSetGetenv(t, "TESTENV", "AVALUE")
	// make sure TESTENV gets set to "", not deleted
	testSetGetenv(t, "TESTENV", "")
}

func TestItoa(t *testing.T) {
	// Make most negative integer: 0x8000...
	i := 1
	for i<<1 != 0 {
		i <<= 1
	}
	if i >= 0 {
		t.Fatal("bad math")
	}
	s := unix.Itoa(i)
	f := fmt.Sprint(i)
	if s != f {
		t.Fatalf("itoa(%d) = %s, want %s", i, s, f)
	}
}
```