Kebek or Kepek (Kibak) (Tatar: Käbäk) was a Khan of the Golden Horde in 1413–1414.

Ancestry

Kebek was a son of Tokhtamysh and a brother of his immediate predecessor (and successor) Karīm Berdi. They were descendants of Tuqa-Timur, the son of Jochi, the son of Chinggis Khan.

Life

After their father was killed in battle against Shādī Beg Khan and Nūr ad-Dīn, the son of Edigu, in 1407, the sons of Tokhtamysh dispersed. Some, led by the eldest, Jalāl ad-Dīn, fled to Moscow and then to Lithuania. Others, including Kebek, sought refuge in Sighnaq, under Timurid protection. In early 1409, one of the brothers, Karīm Berdi, briefly seized the capital Sarai. In early 1412, Jalāl ad-Dīn did the same, eliminating his rival, Tīmūr Khan. In October of the same year, however, Jalāl ad-Dīn was murdered by one of his brothers, according to some accounts Kebek. Karīm Berdi became khan again, reversing Jalāl ad-Dīn's policies: he showed himself amenable to the reigning princes in Russia and hostile to Grand Prince Vytautas of Lithuania. Vytautas responded by proclaiming a certain "Betsabul" as khan in opposition to Karīm Berdi. In combination with other evidence, it is believed that "Betsabul" was actually Kebek, who succeeded in driving out Karīm Berdi with Lithuanian help in the spring of 1413. Now khan, Kebek naturally favored his Lithuanian protector and ally, Vytautas. Kebek's authority was quickly recognized throughout the Golden Horde. In the summer of 1413, he campaigned in the Crimea, besieging Genoese Kaffa, but he abandoned the siege on 12 June, apparently called away by the threat posed by Edigu. Engaged in negotiations with Jalāl ad-Dīn and Karīm Berdi, Edigu had refrained from setting up a new puppet khan of his own, but Kebek's usurpation led him to proclaim one Chekre, a descendant of Tuqa-Timur, as khan in Sibir and Bolghar in 1413. Despite an initial victory over Chekre, Kebek suffered reverses at the hands of Edigu; the main beneficiary of this was Karīm Berdi. After several battles, Karīm Berdi defeated, captured, and beheaded Kebek in 1414. The victory allowed Karīm Berdi to resume the throne, but before long another of his brothers, Jabbār Berdi, was set up as a rival khan by Vytautas of Lithuania.

Descendants

According to the Muʿizz al-ansāb, Kebek had a son, Chaghatāy-Sulṭān, and two daughters, Sarāy-Mulk and Shīrīn-Bīka. The Tawārīḫ-i guzīdah-i nuṣrat-nāmah mentions only the first and last.

Genealogy

Genghis Khan – Jochi – Tuqa-Timur – Saricha – Kuyunchak – Qutluq Khwāja – Tuy Khwāja – Tokhtamysh – Kebek

References

Gaev, A. G., "Genealogija i hronologija Džučidov," Numizmatičeskij sbornik 3 (2002) 9–55.
Howorth, H. H., History of the Mongols from the 9th to the 19th Century. Part II.1. London, 1880.
Sabitov, Ž. M., Genealogija "Tore", Astana, 2008.
Seleznëv, J. V., Èlita Zolotoj Ordy: Naučno-spravočnoe izdanie, Kazan', 2009.
Pilipčuk, J. V., and Ž. M. Sabitov, "Bor'ba Toktamyševičej za vlast' v 10–20-h gg. XV v.," Iz istorii i kul'tury narodov Srednego Povolž'ja 6 (2016) 110–125.
Počekaev, R. J., Cari ordynskie: Biografii hanov i pravitelej Zolotoj Ordy. Saint Petersburg, 2010.
Reva, R., "Bor'ba za vlast' v pervoj polovine XV v.," in Zolotaja Orda v mirovoj istorii, Kazan', 2016: 704–729.
Vohidov, Š. H. (trans.), Istorija Kazahstana v persidskih istočnikah. 3. Muʿizz al-ansāb. Almaty, 2006.

1414 deaths
Khans of the Golden Horde
People in the Battle of Grunwald
15th-century monarchs in Europe
Amos Emanuel Fowler (born February 11, 1956) is a former American football center who played seven seasons for the Detroit Lions in the National Football League (NFL). References 1956 births Living people Players of American football from Pensacola, Florida American football centers Southern Miss Golden Eagles football players Detroit Lions players
Robert H. Marsh (born August 15, 1959 in Boston) is an American politician who represented the 14th Norfolk District in the Massachusetts House of Representatives from 1987 until he resigned in 1992 to work for United States Secretary of Transportation Andrew Card. After working for Card, Marsh served as Congressman Peter I. Blute's chief aide and ran Mitt Romney's 1994 Senate campaign. In 1997 he was appointed by Governor William Weld to the Massachusetts Port Authority board. Prior to being elected to the General Court, Marsh served as a legislative aide to State Representative Royall H. Switzler.

References

1959 births
Republican Party members of the Massachusetts House of Representatives
People from Wellesley, Massachusetts
Hobart and William Smith Colleges alumni
Living people
Stenoglene hilaris is a moth in the family Eupterotidae. It was described by Felder in 1874. It is found in Mozambique and South Africa. Adults are uniform mouse-colour, with the forewings a shade darker than the hindwings, the former crossed by two curved lines, the first nearest the base very indistinct, the second broken into spots. References Moths described in 1874 Janinae Moths of Sub-Saharan Africa
This is a list of listed buildings in the parish of Dalry in North Ayrshire, Scotland.

List

Key

See also
List of listed buildings in North Ayrshire
Scheduled monuments in North Ayrshire

Notes

References
All entries, addresses and coordinates are based on data from Historic Scotland. This data falls under the Open Government Licence.

Dalry
Dalry, North Ayrshire
```php
<?php
/**
 * Inline HTML diff generator for PHP DiffLib.
 *
 * PHP version 5
 *
 * Copyright (c) 2009 Chris Boulton <chris.boulton@interspire.com>
 *
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *
 *  - Redistributions of source code must retain the above copyright notice,
 *    this list of conditions and the following disclaimer.
 *  - Redistributions in binary form must reproduce the above copyright notice,
 *    this list of conditions and the following disclaimer in the documentation
 *    and/or other materials provided with the distribution.
 *  - Neither the name of the Chris Boulton nor the names of its contributors
 *    may be used to endorse or promote products derived from this software
 *    without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
 * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
 * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
 * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
 * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
 * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
 * POSSIBILITY OF SUCH DAMAGE.
 *
 * @package DiffLib
 * @author Chris Boulton <chris.boulton@interspire.com>
 * @copyright (c) 2009 Chris Boulton
 * @version 1.1
 * @link path_to_url
 */

require_once dirname(__FILE__).'/Array.php';

class Diff_Renderer_Html_Inline extends Diff_Renderer_Html_Array
{
    /**
     * Render and return a diff with changes between the two sequences
     * displayed inline (under each other).
     *
     * @return string The generated inline diff.
     */
    public function render()
    {
        $changes = parent::render();
        $html = '';
        if(empty($changes)) {
            return $html;
        }

        $html .= '<table class="Differences DifferencesInline">';
        $html .= '<thead>';
        $html .= '<tr>';
        $html .= '<th>Old</th>';
        $html .= '<th>New</th>';
        $html .= '<th>Differences</th>';
        $html .= '</tr>';
        $html .= '</thead>';
        foreach($changes as $i => $blocks) {
            // If this is a separate block, we're condensing code so output ...,
            // indicating a significant portion of the code has been collapsed as
            // it is the same
            if($i > 0) {
                $html .= '<tbody class="Skipped">';
                $html .= '<th>&hellip;</th>';
                $html .= '<th>&hellip;</th>';
                $html .= '<td>&nbsp;</td>';
                $html .= '</tbody>';
            }

            foreach($blocks as $change) {
                $html .= '<tbody class="Change'.ucfirst($change['tag']).'">';
                // Equal changes should be shown on both sides of the diff
                if($change['tag'] == 'equal') {
                    foreach($change['base']['lines'] as $no => $line) {
                        $fromLine = $change['base']['offset'] + $no + 1;
                        $toLine = $change['changed']['offset'] + $no + 1;
                        $html .= '<tr>';
                        $html .= '<th>'.$fromLine.'</th>';
                        $html .= '<th>'.$toLine.'</th>';
                        $html .= '<td class="Left">'.$line.'</td>';
                        $html .= '</tr>';
                    }
                }
                // Added lines only on the right side
                else if($change['tag'] == 'insert') {
                    foreach($change['changed']['lines'] as $no => $line) {
                        $toLine = $change['changed']['offset'] + $no + 1;
                        $html .= '<tr>';
                        $html .= '<th>&nbsp;</th>';
                        $html .= '<th>'.$toLine.'</th>';
                        $html .= '<td class="Right"><ins>'.$line.'</ins>&nbsp;</td>';
                        $html .= '</tr>';
                    }
                }
                // Show deleted lines only on the left side
                else if($change['tag'] == 'delete') {
                    foreach($change['base']['lines'] as $no => $line) {
                        $fromLine = $change['base']['offset'] + $no + 1;
                        $html .= '<tr>';
                        $html .= '<th>'.$fromLine.'</th>';
                        $html .= '<th>&nbsp;</th>';
                        $html .= '<td class="Left"><del>'.$line.'</del>&nbsp;</td>';
                        $html .= '</tr>';
                    }
                }
                // Show modified lines on both sides: base lines in the left
                // column, changed lines in the right column
                else if($change['tag'] == 'replace') {
                    foreach($change['base']['lines'] as $no => $line) {
                        $fromLine = $change['base']['offset'] + $no + 1;
                        $html .= '<tr>';
                        $html .= '<th>'.$fromLine.'</th>';
                        $html .= '<th>&nbsp;</th>';
                        $html .= '<td class="Left"><span>'.$line.'</span></td>';
                        $html .= '</tr>';
                    }

                    foreach($change['changed']['lines'] as $no => $line) {
                        $toLine = $change['changed']['offset'] + $no + 1;
                        $html .= '<tr>';
                        $html .= '<th>&nbsp;</th>';
                        $html .= '<th>'.$toLine.'</th>';
                        $html .= '<td class="Right"><span>'.$line.'</span></td>';
                        $html .= '</tr>';
                    }
                }
                $html .= '</tbody>';
            }
        }
        $html .= '</table>';
        return $html;
    }
}
```
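For context, here is a minimal usage sketch of this renderer. The input file names are illustrative, and the require paths assume the stock php-diff layout where the `Diff` front-end class sits next to the `Diff/` renderer tree:

```php
<?php
// Illustrative paths; adjust to wherever DiffLib is installed.
require_once 'Diff.php';
require_once 'Diff/Renderer/Html/Inline.php';

// The differ works on arrays of lines.
$old = explode("\n", file_get_contents('old.txt'));
$new = explode("\n", file_get_contents('new.txt'));

$diff = new Diff($old, $new);
$renderer = new Diff_Renderer_Html_Inline();

// Emits the <table class="Differences DifferencesInline"> markup built above.
echo $diff->render($renderer);
```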
```sql
-- path_to_url
-- Regression test: reading an ALIAS column of the right-hand table
-- through a LEFT JOIN with the analyzer enabled.
SET enable_analyzer = 1;

create table t1 engine = MergeTree() order by tuple()
    as select 1 as user_id, 2 as level;

create table t2 engine = MergeTree() order by tuple()
    as select 1 as user_id, 'website' as event_source,
              '2023-01-01 00:00:00'::DateTime as timestamp;

-- `date` is not stored; it is computed from `timestamp` at read time.
alter table t2 add column date Date alias toDate(timestamp);

-- The single matching row (user_id = 1) yields any_val = '2023-01-01'.
SELECT any(t2.date) as any_val
FROM t1 AS t1
LEFT JOIN t2 as t2 ON (t1.user_id = t2.user_id);
```
The 2008 Melbourne Storm season was the 11th in the club's history. They competed in the NRL's 2008 Telstra Premiership and finished the regular season as minor premiers before reaching the grand final, in which they were beaten by the Manly-Warringah Sea Eagles 40–0, the largest margin in grand final history. The minor premiership won by the Storm in 2008 was later stripped by the NRL in 2010, when it was revealed the club had been in breach of salary cap rules.

Despite losing seven games, the Storm managed to finish in top spot on the NRL ladder for a third successive season. They had to wait until the final game to do it though, defeating South Sydney 42–4. A loss to the Warriors in the qualifying final meant the Storm had to do it the hard way, and they did just that, defeating the Broncos and Sharks on the road. That tough road eventually caught up with Melbourne in the decider, which they lost to Manly. Matt Geyer became the first Storm player to reach 250 games, while Billy Slater followed on from Cameron Smith the previous year in earning the Golden Boot award as the best player in the world.

Season Summary

World Club Challenge – With club captain Cameron Smith back home to be present at the birth of his first child, Melbourne go down 11–4 to Leeds Rhinos in the 2008 World Club Challenge at a rain-swept Elland Road. Ryan Hoffman scored the only try for Melbourne.

Round 1 – Billy Slater scores a hat-trick as Melbourne begin their title defence with a 32–18 opening-round victory over the New Zealand Warriors at the Telstra Dome. A twice-tardy Melbourne are fined $10,000 by the NRL for failing to take the field on time.

Round 2 – Melbourne forward Brett White and Cronulla forward Ben Ross are both sent off: Ross for striking Cooper Cronk with a late elbow, and White for punching Ross in the ensuing fight. White is later suspended for four matches. The 17–16 defeat ends the club's 15-match winning streak at Olympic Park.

Round 3 – The Storm experience successive losses for the first time since 2006 as the Sydney Roosters upset Melbourne 10–6.

30 March – 2007 Dally M Rookie of the Year Israel Folau announces he is leaving Melbourne at the end of the 2008 season, signing a four-year deal with the Brisbane Broncos reportedly worth $1.6m.

18 April – Coach Craig Bellamy signs a new contract extension, keeping him at the club until the end of the 2013 NRL season.

Round 5 – A man-of-the-match performance from Billy Slater sees Melbourne defeat Manly 26–4 in the grand final rematch at Olympic Park.

Round 6 – Wearing replica 1998 home jerseys, Melbourne stage a second-half comeback to defeat the Canberra Raiders 23–16 after trailing 16–4 at halftime. Aiden Tolman makes his NRL debut with Melbourne, becoming the first player in the club's history to graduate from playing in the NRL Under-20s competition, then in its inaugural season.

28 May – Michael Crocker announces he will leave the club at the completion of the 2008 season, signing a three-year deal with Super League's Hull F.C.

Round 10 – With Melbourne missing nine players to State of Origin selection, as well as coach Craig Bellamy, St George Illawarra snap the Storm's five-match winning streak.

Round 11 – With club stalwart Matt Geyer playing his 250th first-grade game, Melbourne outlast the South Sydney Rabbitohs 15–10 at Gosford, as eight players back up from the midweek Origin fixture.

Round 12 – The Storm hold the Bulldogs scoreless in a 46–0 win, with Cameron Smith scoring 18 points.
Round 13 – With Origin again disrupting team selections, a Melbourne side missing ten players is held scoreless in an 18–0 loss to the Gold Coast Titans, the first time since the 2003 NRL finals that Melbourne have been held scoreless.

Round 16 – Again missing nine players (and coach Craig Bellamy), Melbourne struggle against the Parramatta Eels, losing 24–22. Parramatta had not defeated Melbourne since 2005.

Round 17 – A dominant Greg Inglis leads Melbourne to a 30–14 win over Canberra at Olympic Park, the victory marking Craig Bellamy's 100th coaching victory at premiership level (from 147 games).

Round 19 – A wild brawl in the 23rd minute saw Billy Slater and Adam Blair sin-binned, while minutes later Jason Ryles was sent off by referee Gavin Badger as Melbourne defeated St George Illawarra 26–0.

Round 20 – Michael Crocker experiences defeat for the first time in a Melbourne jersey with the Warriors' 8–6 win over the Storm. Crocker had played 34 games since joining the Storm without tasting defeat.

30 July – Cooper Cronk re-signs with the club for a further five seasons.

9 August – Greg Inglis is named in the Australian Rugby League's Indigenous Team of the Century.

27 August – The Sydney Morning Herald reports that NRL CEO David Gallop held secret talks with Greg Inglis to ensure he did not follow other players in 'defecting' to rugby union.

Round 26 – Melbourne claim their third straight minor premiership, defeating South Sydney 42–4 in the final match of the regular season. Level on competition points with Manly, Melbourne took the J. J. Giltinan Shield on a superior points differential (+302 versus +290). In his final home game at Olympic Park, Matt Geyer scored the first try of the match and was honoured with a special presentation at full-time.

9 September – Billy Slater and Cameron Smith finish in a tie for second for the Dally M Medal behind former Storm halfback Matt Orford. Slater's suspension for fighting in Round 19 costs him the victory.

10 September – Despite strong interest from European rugby union clubs, Greg Inglis commits his future to the Storm, signing a new four-year contract reportedly worth $1.8m.

Semi Final – In a pulsating match in front of over 50,000 fans at Suncorp Stadium, Melbourne score a last-minute try to win 16–14 over the Brisbane Broncos. Forwards Jeremy Smith and Cameron Smith are cited for a tackle on Sam Thaiday during the second half. Jeremy Smith later accepts a one-match suspension, while Cameron Smith pleads not guilty to a charge of unnecessary contact to the head or neck. In a lengthy NRL judiciary hearing, Cameron Smith is suspended for two matches, ruling him out of the rest of the season.

Preliminary Final – After Melbourne's comfortable 28–0 win over Cronulla, coach Craig Bellamy launches into a long-winded attack on the NRL, the NRL judiciary, bookmakers, and the media following the suspension of Cameron Smith. Bellamy's comments, endorsed by club CEO Brian Waldron, result in the NRL fining the club $50,000, with NRL CEO David Gallop accusing the pair of an "unprecedented, irrational, premeditated and defamatory attack on the integrity of the judiciary panel and the game's judiciary process."

30 September – The Men of League charity announce the game's greatest club players at their annual ball, with Cameron Smith named as Melbourne's club great.

Milestone games

Jerseys

Apparel supplier Reebok kept the same home jersey design as worn in previous seasons.
The clash jersey changed to a mostly white design, featuring purple shoulder stripes and side panels together with sublimated purple thunderbolts, worn with purple shorts and white socks with two purple stripes. An alternate jersey was worn in the NRL's heritage round, with Melbourne wearing a replica uniform combination similar to their 1998 home colours. In line with the celebrations of the centenary of rugby league in Australia, an additional patch was worn above the NRL logo.

Fixtures

Pre season

Regular season
Source:
 – Golden Point extra time
(pen) – Penalty try

Finals

Ladder

2008 Coaching Staff
Head coach: Craig Bellamy
Assistant coaches: Michael Maguire & Stephen Kearney
Specialist coach: Matthew Johns
Strength and conditioning coach: Alex Corvo
Football manager: Frank Ponissi
NRL Under-20s coach: Brad Arthur
Feeder club coach: Jamie Feeney (Central Coast Storm)

2008 squad
List current as of 3 November 2021

Player movements

Losses
James Aubusson to Sydney Roosters
Ben Cross to Newcastle Knights
Garret Crossman to Hull Kingston Rovers
Matt King to Warrington Wolves
Clint Newton to Hull Kingston Rovers
Matt Rua to retirement
Ryan Shortland to New Zealand Warriors

Gains
Brett Anderson from North Queensland Cowboys
Clifford Manua from Brisbane Broncos
Dane Nielsen from Cronulla-Sutherland Sharks

Representative honours
This table lists all players who have played a representative match in 2008.

Statistics
This table contains playing statistics for all Melbourne Storm players to have played in the 2008 NRL season. Statistics sources:

Scorers
Most points in a game: 18 points
Round 12 – Cameron Smith (1 try, 7 goals) vs Canterbury-Bankstown Bulldogs
Most tries in a game: 3
Round 1 – Billy Slater vs New Zealand Warriors
Round 4 – Anthony Quinn vs Brisbane Broncos
Round 15 – Greg Inglis vs North Queensland Cowboys
Round 18 – Greg Inglis vs Wests Tigers
Round 24 – Greg Inglis vs Penrith Panthers

Winning games
Highest score in a winning game: 48 points
Round 15 vs North Queensland Cowboys
Lowest score in a winning game: 15 points
Round 11 vs South Sydney Rabbitohs
Greatest winning margin: 46 points
Round 12 vs Canterbury-Bankstown Bulldogs
Greatest number of games won consecutively: 5
Round 4 – Round 9

Losing games
Highest score in a losing game: 22 points
Round 16 vs Parramatta Eels
Lowest score in a losing game: 0 points
Round 13 vs Gold Coast Titans
Grand Final vs Manly Warringah Sea Eagles
Greatest losing margin: 40 points
Grand Final vs Manly Warringah Sea Eagles
Greatest number of games lost consecutively: 2
Round 2 – Round 3

NRL Under 20s
For the first time since the formation of the NRL in 1998, every team fielded a side in the same second-tier competition, the NRL Under-20s, guaranteeing fans a high-standard curtain-raiser before every NRL game. The National Youth Competition (known commercially as the Toyota Cup due to sponsorship from Toyota Australia) ran parallel to the NRL. Similar to the NRL, the NYC enforced a salary cap and put a heavy focus on life outside football for the players. In the competition's inaugural season, Melbourne, coached by Brad Arthur, finished in 13th position, failing to make the finals. Melbourne used 28 players across the season, with five players (Liam Foran, Sam Joe, Kevin Proctor, Joe Tomane, and Aiden Tolman) also making NRL appearances in 2008.
Ladder

Statistics
Source:

Scorers
Most points in a game: 16 points
Round 1 – Joe Tomane (2 tries, 4 goals) vs New Zealand Warriors
Round 9 – Trent Walker (4 tries) vs Newcastle Knights
Most tries in a game: 4
Round 9 – Trent Walker vs Newcastle Knights
Most points (season): 106
Liam Foran (3 tries, 47 goals)
Most tries (season): 13
Sam Joe

Winning games
Highest score in a winning game: 36 points
Round 17 vs Canberra Raiders
Lowest score in a winning game: 22 points
Round 5 vs Manly Warringah Sea Eagles
Greatest winning margin: 24 points
Round 9 vs Newcastle Knights
Greatest number of games won consecutively: 2
Round 4 – Round 5
Round 22 – Round 23

Losing games
Highest score in a losing game: 32 points
Round 3 vs Sydney Roosters
Lowest score in a losing game: 6 points
Round 7 vs North Queensland Cowboys
Round 13 vs Gold Coast Titans
Greatest losing margin: 32 points
Round 6 vs Canberra Raiders
Round 13 vs Gold Coast Titans
Greatest number of games lost consecutively: 4
Round 18 – Round 21

Feeder Team
Established in 2007 and coached by former Storm player Jamie Feeney, Melbourne sent their back-up players to play with the Central Coast Storm, with home games played at Morry Breen Oval, the base of Central Coast team the Wyong Roos. Central Coast missed the finals, finishing in 10th position (out of 12 teams). The Player of the Year award was won by former Newcastle Knights player Reegan Tanner.

Awards

Trophy Cabinet
2008 J. J. Giltinan Shield

Melbourne Storm Awards Night
Melbourne Storm Player of the Year: Billy Slater
Members' Player of the Year: Billy Slater
Best Back: Cooper Cronk
Best Forward: Jeff Lima
Most Improved: Sika Manu
Rookie of the Year: Aiden Tolman
Darren Bell U20s Player of the Year Award: Louis Fanene
U20s Most Improved: Pulou Vaituutuu
U20s Best Forward: Zeb Tawha
U20s Best Back: Luke Kelly
Mick Moore Club Person of the Year: Samantha Shaw
Life Member Inductee: Dallas Johnson

Dally M Awards Night
Dally M Representative Player of the Year: Greg Inglis
Dally M Hooker of the Year: Cameron Smith
Dally M Centre of the Year: Israel Folau
Dally M Five-Eighth of the Year: Greg Inglis
Dally M Fullback of the Year: Billy Slater
NRL Hall of Fame Inductee: Glenn Lazarus

Rugby League World Golden Boot Awards Night
Golden Boot Award: Billy Slater

RLPA Awards Night
RLPA Australia Representative Player of the Year: Billy Slater

RLIF Awards
RLIF Player of the Year: Billy Slater
RLIF Rookie of the Year: Israel Folau
RLIF Fullback of the Year: Billy Slater
RLIF Centre of the Year: Israel Folau
RLIF Five-Eighth of the Year: Greg Inglis
RLIF Hooker of the Year: Cameron Smith

Notes

References

Melbourne Storm seasons
Melbourne Storm
Kirschbaum is the German word for cherry tree, and also a surname. It may refer to:

People
Bill Kirschbaum (1902–1953), U.S. Olympic swimmer
Carl Ludwig Kirschbaum (1812–1880), German entomologist, professor of biology, and museum director
Charlotte von Kirschbaum (1899–1975), German theologian
Eliezer Simon Kirschbaum (1797–1860), Austrian physician and writer
Thorsten Kirschbaum (born 1987), German football player
Walter Kirschbaum (mid-20th century), West German slalom canoer

Other
Japigny kirschbaum, a species of glass knifefish

See also
Kirschenbaum (surname)

German-language surnames
Surnames of Jewish origin
Benfatto is a surname. Notable people with the surname include: Attilio Benfatto (1943–2017), Italian cyclist Luigi Benfatto (1551–1611), Italian painter Marco Benfatto (born 1988), Italian racing cyclist Italian-language surnames
Bagdad is a village in the administrative district of Gmina Wyrzysk, within Piła County, Greater Poland Voivodeship, in west-central Poland. It lies approximately north-east of Wyrzysk, east of Piła, and north of the regional capital Poznań. In 2006 the village had a population of 120. Before World War II, Bagdad was a farm belonging to Mieczysław Chłapowski, a Polish landowner and politician. The village currently has a manor house in the Gothic Revival style and a stud farm.

References

Bagdad
Cryptolechia microglyptis is a moth in the family Depressariidae. It was described by Edward Meyrick in 1936. It is found in Venezuela. References Moths described in 1936 Cryptolechia (moth) Taxa named by Edward Meyrick
Anthology of Patrice Rushen is a compilation of the mainly R&B-charting works of Patrice Rushen on the Elektra Records label, from late-1970s singles like "Hang It Up" to the then-recent 1984 hit "Feels So Real (Won't Let Go)". Anthology was released just after Rushen's last Elektra studio album, Now, and just before her Arista Records debut, Watch Out!. This collection was released only in the United States, on cassette and LP. There was never a compact disc issue of the release.

Track listing
"Remind Me" – 5:15
"Feels So Real (Won't Let Go)" – 6:48
"Number One (instrumental)" – 4:55
"Forget Me Nots" – 4:42
"Hang It Up" – 5:11
"Look Up" – 3:40
"Haven't You Heard" – 6:44
"Givin' It Up Is Givin' Up" – 4:58
"When I Found You" – 5:20
"High In Me" – 4:13
"All We Need" – 5:50

External links

Patrice Rushen albums
1985 greatest hits albums
Elektra Records compilation albums
The Batang Ai Dam is a concrete-face rock-fill dam in Batang Ai National Park in Sarawak, Malaysia. The power station comprises four turbines, totalling the installed capacity to . The station is operated by the Sarawak Electricity Supply Corporation. Preparations for the dam began as early as 1975, before the design was published in 1977. Construction started in 1982 with the river diversion work, and the last turbine was completed in 1985.

The Batang Ai project, a relatively modest dam financed by the Asian Development Bank, caused the displacement of approximately 3,000 people from 26 longhouses. These people have since been accommodated in the Batang Ai Resettlement Scheme to cultivate cocoa and rubber, but the programme has not been successful.

See also
List of power stations in Malaysia
National Grid, Malaysia

References
Notes
. Page 14 Table 2-1 Batang Ai Dam.
Kaur, Amarjit. "A History of Forestry in Sarawak." Modern Asian Studies 32.1 (1998): 117–47.

External links
Sarawak Electricity Supply Corporation

Hydroelectric power stations in Malaysia
Dams in Sarawak
Concrete-face rock-fill dams
Dams completed in 1985
```javascript (function(mod) { if (typeof exports == "object" && typeof module == "object") // CommonJS mod(require("../../lib/codemirror")); else if (typeof define == "function" && define.amd) // AMD define(["../../lib/codemirror"], mod); else // Plain browser env mod(CodeMirror); })(function(CodeMirror) { "use strict"; CodeMirror.multiplexingMode = function(outer /*, others */) { // Others should be {open, close, mode [, delimStyle] [, innerStyle]} objects var others = Array.prototype.slice.call(arguments, 1); var n_others = others.length; function indexOf(string, pattern, from) { if (typeof pattern == "string") return string.indexOf(pattern, from); var m = pattern.exec(from ? string.slice(from) : string); return m ? m.index + from : -1; } return { startState: function() { return { outer: CodeMirror.startState(outer), innerActive: null, inner: null }; }, copyState: function(state) { return { outer: CodeMirror.copyState(outer, state.outer), innerActive: state.innerActive, inner: state.innerActive && CodeMirror.copyState(state.innerActive.mode, state.inner) }; }, token: function(stream, state) { if (!state.innerActive) { var cutOff = Infinity, oldContent = stream.string; for (var i = 0; i < n_others; ++i) { var other = others[i]; var found = indexOf(oldContent, other.open, stream.pos); if (found == stream.pos) { stream.match(other.open); state.innerActive = other; state.inner = CodeMirror.startState(other.mode, outer.indent ? outer.indent(state.outer, "") : 0); return other.delimStyle; } else if (found != -1 && found < cutOff) { cutOff = found; } } if (cutOff != Infinity) stream.string = oldContent.slice(0, cutOff); var outerToken = outer.token(stream, state.outer); if (cutOff != Infinity) stream.string = oldContent; return outerToken; } else { var curInner = state.innerActive, oldContent = stream.string; if (!curInner.close && stream.sol()) { state.innerActive = state.inner = null; return this.token(stream, state); } var found = curInner.close ? indexOf(oldContent, curInner.close, stream.pos) : -1; if (found == stream.pos) { stream.match(curInner.close); state.innerActive = state.inner = null; return curInner.delimStyle; } if (found > -1) stream.string = oldContent.slice(0, found); var innerToken = curInner.mode.token(stream, state.inner); if (found > -1) stream.string = oldContent; if (curInner.innerStyle) { if (innerToken) innerToken = innerToken + ' ' + curInner.innerStyle; else innerToken = curInner.innerStyle; } return innerToken; } }, indent: function(state, textAfter) { var mode = state.innerActive ? state.innerActive.mode : outer; if (!mode.indent) return CodeMirror.Pass; return mode.indent(state.innerActive ? state.inner : state.outer, textAfter); }, blankLine: function(state) { var mode = state.innerActive ? state.innerActive.mode : outer; if (mode.blankLine) { mode.blankLine(state.innerActive ? state.inner : state.outer); } if (!state.innerActive) { for (var i = 0; i < n_others; ++i) { var other = others[i]; if (other.open === "\n") { state.innerActive = other; state.inner = CodeMirror.startState(other.mode, mode.indent ? mode.indent(state.outer, "") : 0); } } } else if (state.innerActive.close === "\n") { state.innerActive = state.inner = null; } }, electricChars: outer.electricChars, innerMode: function(state) { return state.inner ? {state: state.inner, mode: state.innerActive.mode} : {state: state.outer, mode: outer}; } }; }; }); ```
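For context, a short usage sketch of `multiplexingMode`: the mode name "demo" and the `<%` / `%>` delimiters below are illustrative, following the pattern in CodeMirror's multiplexing-mode documentation:

```javascript
// An outer HTML mode that hands everything between <% and %> to a
// plain-text inner mode, styling the delimiter tokens as "delimit".
CodeMirror.defineMode("demo", function(config) {
  return CodeMirror.multiplexingMode(
    CodeMirror.getMode(config, "text/html"),
    {open: "<%", close: "%>",
     mode: CodeMirror.getMode(config, "text/plain"),
     delimStyle: "delimit"}
  );
});
```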
Teenage Mutant Ninja Turtles: Tournament Fighters, or Teenage Mutant Hero Turtles: Tournament Fighters in Europe, is the title of three different fighting games based on the Teenage Mutant Ninja Turtles franchise, produced by Konami for the Nintendo Entertainment System, Sega Genesis, and Super NES and released between 1993 and 1994. Konami produced a different fighting game for each platform, each featuring a different cast of characters. All three versions of the game were re-released as part of Teenage Mutant Ninja Turtles: The Cowabunga Collection in 2022, with online play using rollback netcode for the SNES version of the game.

NES version

The NES version of Tournament Fighters was the final game Konami released for the platform in North America and the PAL region in 1994. It was also the fifth TMNT game released for Nintendo home consoles. Unlike the other versions of Tournament Fighters, it was not released in Japan. Tournament Fighters was one of the few fighting games released for the NES during the fighting game boom.

The game's single-player Story mode has the player taking control of one of the four Turtles (Leonardo, Raphael, Michaelangelo, and Donatello) as they hold a contest amongst themselves to see who is fit to take on Shredder's challenge. After defeating the first three opponents, the player proceeds to fight Casey Jones and then Hothead (a character based on the Dragon Warrior from the Teenage Mutant Ninja Turtles Adventures comics and the action figure of the same name) before the final match with the Shredder. In addition to the Story mode, the game also has two Versus modes (one against the CPU and another against a second player), as well as a four-player Tournament mode. An Option mode, where the player can adjust the game's difficulty, continues, and speed, is also available.

The gameplay follows many of the standard fighting game conventions. Battles consist of three-round matches, and the first player to win two rounds is the victor. Each character has their own repertoire of basic punch and kick techniques, as well as command-based special moves. During battle, a flying monitor with Splinter's face will sometimes appear and drop a red ball power-up at the middle of the stage that can be retrieved by either fighter. Whoever retrieves the ball power-up will be able to use it by inputting the appropriate command.

The NES version allows the player to match any character against a clone of himself, with the exception of Hothead. The game does not allow such a match under normal circumstances, but there is a way to bypass this restriction in the game's "Vs. CPU" mode. The second Hothead will be colored differently, as with all same-character matches in the game, but the game will also flicker due to the large size of both characters.

In Teenage Mutant Ninja Turtles: The Cowabunga Collection, the NES version of Teenage Mutant Ninja Turtles: Tournament Fighters has three enhancements:

Remove slowdown – This enhancement removes slowdown when too many characters are on screen, allowing fast action at all times.
Remove sprite flicker – This enhancement removes the NES limitations on character sprites and backgrounds, providing smoother animation during gameplay.
Clash of the Hotheads – This enhancement allows more than one player to play as Hothead in the Tournament and Versus modes; it is recommended that the "remove sprite flicker" enhancement (and perhaps the "remove slowdown" enhancement as well) be activated along with it.

Super NES version

A tournament has been organized and many fighters have entered, Shredder being one of them. The Turtles decide to participate in order to stop their nemesis, as well as to prove their strength in the tournament.

This game's controls use a four-button scheme (two punches and two kicks, weak and strong). A particular feature is the ability to use a super special attack. In order to perform one, the player must fill a green bar under the life bar by hitting their opponent. Once it is full, the player must press the two strong attack buttons simultaneously. There is also the option of increasing the speed of the game, making fights more intense but also more difficult to follow.

In addition to the main and versus modes, there is a story mode in which the Turtles must rescue April O'Neil and Splinter from Karai's clutches. The Turtles must travel across the US in their Turtle Blimp, defeating other fighters and collecting information. Only the four of them are playable, whereas the other characters are the opponents, including clone versions of the Turtles. There is no Mutagen Meter in story mode. There is also a watch mode, which features computer-controlled characters.

There are ten characters available, plus two bosses. Aside from the Turtles and Shredder (who goes under the name of Cyber Shredder in this game), these characters are also available:

War – A monstrous purple creature with big claws, one of the Four Horsemen of the Apocalypse from the Teenage Mutant Ninja Turtles Adventures comics published by Archie. The game version of the character is said to be an alien in the game's Tournament mode, as well as a mutant by the Turtles in the game's story mode.
Aska – A ninja girl seeking to open her own dojo. Aska is an original character (created by Takemasa Miyoshi) who makes her first and only appearance in the franchise. She is inspired by Mitsu from the film Teenage Mutant Ninja Turtles III, and was originally intended to be Mitsu, but the character was renamed after the film's poor reception.
Wingnut – A humanoid alien bat who appeared in several issues of the Archie Comics series, as well as in an episode of the animated series.
Chrome Dome – An android from the animated series, initially created by Shredder to destroy the Turtles.
Armaggon – A mutant shark from the future, also from the Archie Comics series.

The bosses are:

Rat King – A deranged man who cast away his humanity and considers himself a rat, even though he has not been mutated.
Karai – The leader of the Foot Clan in Japan. She had only appeared in the original comics by Mirage Studios at the time of the game's release.

Regional differences

The Super NES version of Tournament Fighters was later released in Japan under the title Teenage Mutant Ninja Turtles: Mutant Warriors. There are also some other slight differences:

Aska's outfit is more revealing and she has a different win animation.
The Turtles sound more like teenagers and their character icons are different.

In Teenage Mutant Ninja Turtles: The Cowabunga Collection, the SNES version of Teenage Mutant Ninja Turtles: Tournament Fighters has six enhancements, which were featured in the original game as button codes.
Playable bosses – Allows the player to play as Rat King and Karai in Versus Mode.
Extra versus stages – Allows two additional stages to be accessible in Versus mode.
Maximum speed – Grants the player access to the hi-speed 3 setting in the in-game options menu.
Extra lives – Allows the player to select up to 10 credits for Story Mode in the in-game options menu.
Ultimate Attacks in Story Mode – Allows Ultimate Attacks in Story Mode.
Group Mode – Enables the hidden Group Mode (Japanese version only).

Genesis version

The Genesis/Mega Drive version of Tournament Fighters was released in North America, the PAL region, and Japan around the same time as its SNES counterpart. The Genesis version uses the standard three-button controller, with only two buttons for attacking (punch and kick). To perform stronger punches or kicks, the player must hold the directional pad towards the opponent while pressing either attack button. The third button is used for taunting. Some of the stages in the game feature destroyable scenery that gives the player and their opponent access to new areas of the stage.

As well as their special moves, each character has a 'killer' attack which is only accessible when they are close to death and the red part of the character's life gauge at the top starts flashing. It is performed by pressing the Taunt button in conjunction with a specific D-Pad motion. These moves nearly deplete the other character's life gauge completely.

The game has eight playable characters: the four Turtles and Casey Jones, as well as April O'Neil (whose active role differs from the versions of the character featured in other games), Ray Fillet (a character from the Teenage Mutant Ninja Turtles Adventures comics), and Sisyphus (an original character, named Musha Beetle in the Japanese version). The player can adjust their power and speed after selecting their character. The music in this version was composed by renowned video game composer Miki Higashino, in collaboration with Masanori Adachi.

The main single-player mode features the Turtles and their allies traveling to various planets in Dimension X, fighting against clones of themselves, as they seek to rescue Splinter from Krang. After defeating the eight clones, the player travels to the final three stages to fight a Triceraton, Krang's Android, and Karai (in that order). The game has a two-player mode, as well as a practice mode in which the player faces the computer in a one-round match, and a "Tournament" mode where the player must defeat 88 opponents with one life gauge.

In Teenage Mutant Ninja Turtles: The Cowabunga Collection, the Genesis version of Teenage Mutant Ninja Turtles: Tournament Fighters has the sole enhancement of playable bosses, allowing the player to play as the Triceraton, Krang's Android, and Karai in any game mode, increasing the number of playable characters from 8 to 11. Each of the 3 boss characters is represented by a silhouetted character icon located above the original 8 playable characters. Each boss character's silhouetted icon is outlined in a different color, and below each icon is their respective character's name. Choosing one of the three allows the player to play as that character.

Reception

In the United Kingdom, it was the top-selling SNES game in January 1994. The SNES version received positive reviews, whereas the Sega version received mixed reviews. In 1993, Aska was rated #4 on the list of "Top Ten Fighting Women" by Electronic Gaming Monthly.
In the same issue, Electronic Gaming Monthly gave the Sega Genesis version average reviews, noting that the game is not as good as the SNES version and stating, "There aren't many moves and the fighters are unappealing. The game also has a darker look and feel." Mega magazine gave the Sega Genesis version an average review score, criticizing the game's sluggish gameplay and unresponsive controls, and stating, "It's an uninspired beat-em-up that's borrowed everything from Street Fighter 2 but the gameplay." GamePro magazine gave the SNES version ratings (out of 5) of 4.5 for graphics, 4.5 for sound, 5.0 for control, and 5.0 for fun factor. GameFan scored the SNES version 369/400 and the Genesis version 248/400. SNES Force gave the SNES version a 90% score. In 1995, Total! ranked the game 61st on its Top 100 SNES Games, summarizing: "This is a shockingly good beat-'em-up considering it's a license."

References

External links

1993 video games
1994 video games
Konami games
Nintendo Entertainment System games
Sega Genesis games
Super Nintendo Entertainment System games
Video games about dinosaurs
Tournament Fighters
Video games scored by Miki Higashino
Video games set in New York City
Video games set in Greece
Video games set in Japan
Fighting games
Video games with AI-versus-AI modes
Video games developed in Japan
Porto Editora is the largest Portuguese publisher, with a consolidated turnover of more than €90 million in 2010. It is the leading educational publisher in Portugal in the areas of educational books, dictionaries, and multimedia products, both offline and online. Porto Editora was founded in 1944 in Porto by a group of teachers from different areas of education. Since its move into multimedia in 1994, Porto Editora has published dozens of educational CD-ROMs and DVD-ROMs, including "Diciopédia".

Book series
Colecção Bolso Infantil
Colecção Mundo de Saberes
Colecção Portuguesa
Dicionários académicos

References

External links
Porto Editora
Wook
Educare
Edusurfa
Diciopédia
Infopédia
Escola Virtual

Bookstores of Portugal
Book publishing companies of Portugal
Companies based in Porto
The William F. Ekstrom Library is the main branch of the University of Louisville Libraries system. Located on the university's Belknap Campus in Louisville, Kentucky, Ekstrom Library contains collections in the humanities, sciences, and social sciences. The University of Louisville Libraries is a member of the Association of Research Libraries (ARL) and, along with Ekstrom, includes libraries for Art, Health Sciences, Law, and Music, as well as the Archives and Special Collections. The University of Louisville Libraries hold approximately 2.2 million print volumes, subscribe to several thousand serials, and provide full-text electronic access to approximately 74,000 journals. Ekstrom is a Federal Depository Library and houses the largest selective government document collection in Kentucky.

History

The University Library grew from an original donation of Dean John Letcher Patterson's personal collection in the early 1900s. By 1950, the library had over 36,000 books in its collection. In 1956, the university gave the library its own building (now Schneider Hall) to accommodate the growing collection. By 1970, the collection contained over 200,000 items. At the end of the 1970s, the University Library building reached capacity, forcing parts of the collection to be placed in storage. Plans were made to create a new $14 million library to accommodate the growing collection. Named after Dr. William F. Ekstrom, a noted English professor and the first Academic Vice-President of the university, the Ekstrom Library opened on August 28, 1981. At the same time as the opening of Ekstrom Library, all the university's branch libraries, except for Law, were placed under the leadership of the University Librarian rather than the deans of the corresponding schools.

The new building was designed by the architectural firm Louis & Henry of Louisville, Kentucky. The brick and exposed-concrete design of the building incorporated the character of the open spaces on campus and the surrounding structures. The large glass windows featured a glazing system that restricted solar heating. The new library earned the Kentucky Society of Architects Merit Award for outstanding design.

The University Libraries were invited to join the Association of Research Libraries (ARL) as its 124th member in 2002. The offer was based on ARL's assessment of the scope of the Libraries' collections, uniqueness of resources, contributions to scholarship, and research contributions to the field of information science.

In 2006, a new wing was added to the library at a cost of $14.9 million. The new 51,000 sq. ft., three-story addition featured the relocated McConnell Center for Political Leadership, a new 24-hour study area, a café, new instruction labs, an auditorium, and the new Robotic Retrieval System (RRS). This temperature- and humidity-controlled storage system, one of the first in the nation, has a 600,000-volume capacity and retrieves patron requests in approximately two minutes.
Collections and departments Ekstrom Library houses or provides access to several unique collections, including but not limited to: The Granville Bunton African American Collection—over 4,000 volumes focusing on African American and Pan African history, literature, and culture The Bingham Poetry Collection—over 6,000 volumes of poetry, with an emphasis on North American and British poetry The Louise Galloway Browsing Collection—a revolving collection of popular interest titles and bestsellers The Student Government Association Video Collection—a collection of feature films funded by the University of Louisville Student Government Association Archives and Special Collections, considered a separate library, is headquartered within Ekstrom Library. It includes 1) Photographic Archives housing nearly 2 million photographs, 2) Rare Books and manuscripts focusing on local, regional, national, and international topics, 3) the University Archives and Records Center, and 4) the Digital Initiatives office which is responsible for digital collections of images, documents, and oral histories. Reference librarians at Ekstrom Library answer questions in person at the reference desk on the first floor or by phone, instant message chat, or e-mail. Individual research advisory appointments with a reference librarian are also offered. Ekstrom Library's Special Services Office helps those with disabilities use the library. In addition to providing adaptive technology equipment, library staff aid in retrieving books, making photocopies, and locating library materials. Other departments at Ekstrom Library include Circulation, Collection Development, Copyright Permissions Services, Current Periodicals and Microforms, Distance Learning Library Services, Information Literacy, Interlibrary Loan, Media Resources, Office of Libraries' Technology, Office of the University Libraries' Dean, Stacks Maintenance, and Technical Services. Endowed Chair for Scholarly Communication Established in 1997 and funded by the estate of a former University of Louisville librarian and Kentucky's Research Challenge Trust Fund, the Evelyn J. Schneider Endowed Chair for Scholarly Communication was the state's first endowed library chair. The chair holder, whose office is located in Ekstrom Library, focuses on concerns and policies related to scholarly communication and copyright law at the university, state, and national levels. University units located in Ekstrom Library In addition to Ekstrom Library's services and collections, the library is home to several university organizations, including: Delphi Center for Enhancing Teaching & Learning - The center aids teaching through technological instruction assistance, distance education services, and development of education-related software. University Writing Center - The center helps students and faculty improve their writing through individualized and group assistance. Muhammad Ali Institute for Peace and Justice - The institute's mission is to "advance the work, study and practice of peacemaking, social justice and violence prevention through the development of innovative educational programs, training, service and research." Anne Braden Institute for Social Justice Research - The institute honors civil rights activist Anne Braden through academic research related to social justice, particularly in Louisville and the South. 
McConnell Center for Political Leadership - The center provides guidance to undergraduates interested in politics and government to become future leaders in the community. Notes External links William F. Ekstrom Library University of Louisville Libraries University of Louisville Libraries Twitter Ekstrom Library Facebook Evelyn J. Schneider Endowed Chair for Scholarly Communication Association of Research Libraries University of Louisville Library buildings completed in 1981 Libraries in Louisville, Kentucky 1981 establishments in Kentucky University and college academic libraries in the United States
```c
/*
===========================================================================
Copyright (C) 1999-2005 Id Software, Inc.

This file is part of Quake III Arena source code.

Quake III Arena source code is free software; you can redistribute it
and/or modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of the
License, or (at your option) any later version.

Quake III Arena source code is distributed in the hope that it will be
useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with Quake III Arena source code; if not, write to the Free Software
Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
===========================================================================
*/
//
// q_math.c -- stateless support routines that are included in each code module

// Some of the vector functions are static inline in q_shared.h. q3asm
// doesn't understand static functions though, so we only want them in
// one file. That's what this is about.
#ifdef Q3_VM
#define __Q3_VM_MATH
#endif

#include "q_shared.h"

const vec3_t vec3_origin = {0,0,0};
vec3_t axisDefault[3] = { { 1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } };

vec4_t colorBlack   = {0, 0, 0, 1};
vec4_t colorRed     = {1, 0, 0, 1};
vec4_t colorGreen   = {0, 1, 0, 1};
vec4_t colorBlue    = {0, 0, 1, 1};
vec4_t colorYellow  = {1, 1, 0, 1};
vec4_t colorMagenta = {1, 0, 1, 1};
vec4_t colorCyan    = {0, 1, 1, 1};
vec4_t colorWhite   = {1, 1, 1, 1};
vec4_t colorLtGrey  = {0.75, 0.75, 0.75, 1};
vec4_t colorMdGrey  = {0.5, 0.5, 0.5, 1};
vec4_t colorDkGrey  = {0.25, 0.25, 0.25, 1};

// actually there are 35 colors but we want to use bitmask safely
const vec4_t g_color_table[ 64 ] = {
	{0.0f, 0.0f, 0.0f, 1.0f},
	{1.0f, 0.0f, 0.0f, 1.0f},
	{0.0f, 1.0f, 0.0f, 1.0f},
	{1.0f, 1.0f, 0.0f, 1.0f},
	{0.2f, 0.2f, 1.0f, 1.0f}, //{0.0, 0.0, 1.0, 1.0},
	{0.0f, 1.0f, 1.0f, 1.0f},
	{1.0f, 0.0f, 1.0f, 1.0f},
	{1.0f, 1.0f, 1.0f, 1.0f},

	// extended color codes from CPMA/CNQ3:
	{ 1.00000f, 0.50000f, 0.00000f, 1.00000f },	// 8
	{ 0.60000f, 0.60000f, 1.00000f, 1.00000f },	// 9

	// CPMA's alphabet rainbow
	{ 1.00000f, 0.00000f, 0.00000f, 1.00000f },	// a
	{ 1.00000f, 0.26795f, 0.00000f, 1.00000f },	// b
	{ 1.00000f, 0.50000f, 0.00000f, 1.00000f },	// c
	{ 1.00000f, 0.73205f, 0.00000f, 1.00000f },	// d
	{ 1.00000f, 1.00000f, 0.00000f, 1.00000f },	// e
	{ 0.73205f, 1.00000f, 0.00000f, 1.00000f },	// f
	{ 0.50000f, 1.00000f, 0.00000f, 1.00000f },	// g
	{ 0.26795f, 1.00000f, 0.00000f, 1.00000f },	// h
	{ 0.00000f, 1.00000f, 0.00000f, 1.00000f },	// i
	{ 0.00000f, 1.00000f, 0.26795f, 1.00000f },	// j
	{ 0.00000f, 1.00000f, 0.50000f, 1.00000f },	// k
	{ 0.00000f, 1.00000f, 0.73205f, 1.00000f },	// l
	{ 0.00000f, 1.00000f, 1.00000f, 1.00000f },	// m
	{ 0.00000f, 0.73205f, 1.00000f, 1.00000f },	// n
	{ 0.00000f, 0.50000f, 1.00000f, 1.00000f },	// o
	{ 0.00000f, 0.26795f, 1.00000f, 1.00000f },	// p
	{ 0.00000f, 0.00000f, 1.00000f, 1.00000f },	// q
	{ 0.26795f, 0.00000f, 1.00000f, 1.00000f },	// r
	{ 0.50000f, 0.00000f, 1.00000f, 1.00000f },	// s
	{ 0.73205f, 0.00000f, 1.00000f, 1.00000f },	// t
	{ 1.00000f, 0.00000f, 1.00000f, 1.00000f },	// u
	{ 1.00000f, 0.00000f, 0.73205f, 1.00000f },	// v
	{ 1.00000f, 0.00000f, 0.50000f, 1.00000f },	// w
	{ 1.00000f, 0.00000f, 0.26795f, 1.00000f },	// x
	{ 1.0, 1.0, 1.0, 1.0 }, // y, white, duped so all colors can be expressed with this palette
	{ 0.5, 0.5, 0.5, 1.0 }, // z, grey
};

int ColorIndexFromChar( char ccode )
{
	if ( ccode >= '0' && ccode <= '9' ) {
		return ( ccode - '0' );
	} else if ( ccode >= 'a' && ccode <= 'z' ) {
		return ( ccode - 'a' + 10 );
	} else if ( ccode >= 'A' && ccode <= 'Z' ) {
		return ( ccode - 'A' + 10
); } else { return ColorIndex( COLOR_WHITE ); } } vec3_t bytedirs[NUMVERTEXNORMALS] = { {-0.525731f, 0.000000f, 0.850651f}, {-0.442863f, 0.238856f, 0.864188f}, {-0.295242f, 0.000000f, 0.955423f}, {-0.309017f, 0.500000f, 0.809017f}, {-0.162460f, 0.262866f, 0.951056f}, {0.000000f, 0.000000f, 1.000000f}, {0.000000f, 0.850651f, 0.525731f}, {-0.147621f, 0.716567f, 0.681718f}, {0.147621f, 0.716567f, 0.681718f}, {0.000000f, 0.525731f, 0.850651f}, {0.309017f, 0.500000f, 0.809017f}, {0.525731f, 0.000000f, 0.850651f}, {0.295242f, 0.000000f, 0.955423f}, {0.442863f, 0.238856f, 0.864188f}, {0.162460f, 0.262866f, 0.951056f}, {-0.681718f, 0.147621f, 0.716567f}, {-0.809017f, 0.309017f, 0.500000f},{-0.587785f, 0.425325f, 0.688191f}, {-0.850651f, 0.525731f, 0.000000f},{-0.864188f, 0.442863f, 0.238856f}, {-0.716567f, 0.681718f, 0.147621f},{-0.688191f, 0.587785f, 0.425325f}, {-0.500000f, 0.809017f, 0.309017f}, {-0.238856f, 0.864188f, 0.442863f}, {-0.425325f, 0.688191f, 0.587785f}, {-0.716567f, 0.681718f, -0.147621f}, {-0.500000f, 0.809017f, -0.309017f}, {-0.525731f, 0.850651f, 0.000000f}, {0.000000f, 0.850651f, -0.525731f}, {-0.238856f, 0.864188f, -0.442863f}, {0.000000f, 0.955423f, -0.295242f}, {-0.262866f, 0.951056f, -0.162460f}, {0.000000f, 1.000000f, 0.000000f}, {0.000000f, 0.955423f, 0.295242f}, {-0.262866f, 0.951056f, 0.162460f}, {0.238856f, 0.864188f, 0.442863f}, {0.262866f, 0.951056f, 0.162460f}, {0.500000f, 0.809017f, 0.309017f}, {0.238856f, 0.864188f, -0.442863f},{0.262866f, 0.951056f, -0.162460f}, {0.500000f, 0.809017f, -0.309017f},{0.850651f, 0.525731f, 0.000000f}, {0.716567f, 0.681718f, 0.147621f}, {0.716567f, 0.681718f, -0.147621f}, {0.525731f, 0.850651f, 0.000000f}, {0.425325f, 0.688191f, 0.587785f}, {0.864188f, 0.442863f, 0.238856f}, {0.688191f, 0.587785f, 0.425325f}, {0.809017f, 0.309017f, 0.500000f}, {0.681718f, 0.147621f, 0.716567f}, {0.587785f, 0.425325f, 0.688191f}, {0.955423f, 0.295242f, 0.000000f}, {1.000000f, 0.000000f, 0.000000f}, {0.951056f, 0.162460f, 0.262866f}, {0.850651f, -0.525731f, 0.000000f},{0.955423f, -0.295242f, 0.000000f}, {0.864188f, -0.442863f, 0.238856f}, {0.951056f, -0.162460f, 0.262866f}, {0.809017f, -0.309017f, 0.500000f}, {0.681718f, -0.147621f, 0.716567f}, {0.850651f, 0.000000f, 0.525731f}, {0.864188f, 0.442863f, -0.238856f}, {0.809017f, 0.309017f, -0.500000f}, {0.951056f, 0.162460f, -0.262866f}, {0.525731f, 0.000000f, -0.850651f}, {0.681718f, 0.147621f, -0.716567f}, {0.681718f, -0.147621f, -0.716567f},{0.850651f, 0.000000f, -0.525731f}, {0.809017f, -0.309017f, -0.500000f}, {0.864188f, -0.442863f, -0.238856f}, {0.951056f, -0.162460f, -0.262866f}, {0.147621f, 0.716567f, -0.681718f}, {0.309017f, 0.500000f, -0.809017f}, {0.425325f, 0.688191f, -0.587785f}, {0.442863f, 0.238856f, -0.864188f}, {0.587785f, 0.425325f, -0.688191f}, {0.688191f, 0.587785f, -0.425325f}, {-0.147621f, 0.716567f, -0.681718f}, {-0.309017f, 0.500000f, -0.809017f}, {0.000000f, 0.525731f, -0.850651f}, {-0.525731f, 0.000000f, -0.850651f}, {-0.442863f, 0.238856f, -0.864188f}, {-0.295242f, 0.000000f, -0.955423f}, {-0.162460f, 0.262866f, -0.951056f}, {0.000000f, 0.000000f, -1.000000f}, {0.295242f, 0.000000f, -0.955423f}, {0.162460f, 0.262866f, -0.951056f}, {-0.442863f, -0.238856f, -0.864188f}, {-0.309017f, -0.500000f, -0.809017f}, {-0.162460f, -0.262866f, -0.951056f}, {0.000000f, -0.850651f, -0.525731f}, {-0.147621f, -0.716567f, -0.681718f}, {0.147621f, -0.716567f, -0.681718f}, {0.000000f, -0.525731f, -0.850651f}, {0.309017f, -0.500000f, -0.809017f}, {0.442863f, -0.238856f, -0.864188f}, {0.162460f, 
-0.262866f, -0.951056f}, {0.238856f, -0.864188f, -0.442863f}, {0.500000f, -0.809017f, -0.309017f}, {0.425325f, -0.688191f, -0.587785f}, {0.716567f, -0.681718f, -0.147621f}, {0.688191f, -0.587785f, -0.425325f}, {0.587785f, -0.425325f, -0.688191f}, {0.000000f, -0.955423f, -0.295242f}, {0.000000f, -1.000000f, 0.000000f}, {0.262866f, -0.951056f, -0.162460f}, {0.000000f, -0.850651f, 0.525731f}, {0.000000f, -0.955423f, 0.295242f}, {0.238856f, -0.864188f, 0.442863f}, {0.262866f, -0.951056f, 0.162460f}, {0.500000f, -0.809017f, 0.309017f}, {0.716567f, -0.681718f, 0.147621f}, {0.525731f, -0.850651f, 0.000000f}, {-0.238856f, -0.864188f, -0.442863f}, {-0.500000f, -0.809017f, -0.309017f}, {-0.262866f, -0.951056f, -0.162460f}, {-0.850651f, -0.525731f, 0.000000f}, {-0.716567f, -0.681718f, -0.147621f}, {-0.716567f, -0.681718f, 0.147621f}, {-0.525731f, -0.850651f, 0.000000f}, {-0.500000f, -0.809017f, 0.309017f}, {-0.238856f, -0.864188f, 0.442863f}, {-0.262866f, -0.951056f, 0.162460f}, {-0.864188f, -0.442863f, 0.238856f}, {-0.809017f, -0.309017f, 0.500000f}, {-0.688191f, -0.587785f, 0.425325f}, {-0.681718f, -0.147621f, 0.716567f}, {-0.442863f, -0.238856f, 0.864188f}, {-0.587785f, -0.425325f, 0.688191f}, {-0.309017f, -0.500000f, 0.809017f}, {-0.147621f, -0.716567f, 0.681718f}, {-0.425325f, -0.688191f, 0.587785f}, {-0.162460f, -0.262866f, 0.951056f}, {0.442863f, -0.238856f, 0.864188f}, {0.162460f, -0.262866f, 0.951056f}, {0.309017f, -0.500000f, 0.809017f}, {0.147621f, -0.716567f, 0.681718f}, {0.000000f, -0.525731f, 0.850651f}, {0.425325f, -0.688191f, 0.587785f}, {0.587785f, -0.425325f, 0.688191f}, {0.688191f, -0.587785f, 0.425325f}, {-0.955423f, 0.295242f, 0.000000f}, {-0.951056f, 0.162460f, 0.262866f}, {-1.000000f, 0.000000f, 0.000000f}, {-0.850651f, 0.000000f, 0.525731f}, {-0.955423f, -0.295242f, 0.000000f}, {-0.951056f, -0.162460f, 0.262866f}, {-0.864188f, 0.442863f, -0.238856f}, {-0.951056f, 0.162460f, -0.262866f}, {-0.809017f, 0.309017f, -0.500000f}, {-0.864188f, -0.442863f, -0.238856f}, {-0.951056f, -0.162460f, -0.262866f}, {-0.809017f, -0.309017f, -0.500000f}, {-0.681718f, 0.147621f, -0.716567f}, {-0.681718f, -0.147621f, -0.716567f}, {-0.850651f, 0.000000f, -0.525731f}, {-0.688191f, 0.587785f, -0.425325f}, {-0.587785f, 0.425325f, -0.688191f}, {-0.425325f, 0.688191f, -0.587785f}, {-0.425325f, -0.688191f, -0.587785f}, {-0.587785f, -0.425325f, -0.688191f}, {-0.688191f, -0.587785f, -0.425325f} }; //============================================================== int Q_rand( int *seed ) { *seed = (69069 * *seed + 1); return *seed; } float Q_random( int *seed ) { return ( Q_rand( seed ) & 0xffff ) / (float)0x10000; } float Q_crandom( int *seed ) { return 2.0 * ( Q_random( seed ) - 0.5 ); } //======================================================= signed char ClampChar( int i ) { if ( i < -128 ) { return -128; } if ( i > 127 ) { return 127; } return i; } signed char ClampCharMove( int i ) { if ( i < -127 ) { return -127; } if ( i > 127 ) { return 127; } return i; } signed short ClampShort( int i ) { if ( i < -32768 ) { return -32768; } if ( i > 0x7fff ) { return 0x7fff; } return i; } // this isn't a real cheap function to call! 
int DirToByte( vec3_t dir ) { int i, best; float d, bestd; if ( !dir ) { return 0; } bestd = 0; best = 0; for (i=0 ; i<NUMVERTEXNORMALS ; i++) { d = DotProduct (dir, bytedirs[i]); if (d > bestd) { bestd = d; best = i; } } return best; } void ByteToDir( int b, vec3_t dir ) { if ( b < 0 || b >= NUMVERTEXNORMALS ) { VectorCopy( vec3_origin, dir ); return; } VectorCopy (bytedirs[b], dir); } unsigned ColorBytes3 (float r, float g, float b) { unsigned i; ( (byte *)&i )[0] = r * 255; ( (byte *)&i )[1] = g * 255; ( (byte *)&i )[2] = b * 255; return i; } unsigned ColorBytes4 (float r, float g, float b, float a) { unsigned i; ( (byte *)&i )[0] = r * 255; ( (byte *)&i )[1] = g * 255; ( (byte *)&i )[2] = b * 255; ( (byte *)&i )[3] = a * 255; return i; } float NormalizeColor( const vec3_t in, vec3_t out ) { float max; max = in[0]; if ( in[1] > max ) { max = in[1]; } if ( in[2] > max ) { max = in[2]; } if ( !max ) { VectorClear( out ); } else { out[0] = in[0] / max; out[1] = in[1] / max; out[2] = in[2] / max; } return max; } /* ===================== PlaneFromPoints Returns false if the triangle is degenerate. The normal will point out of the clock for clockwise ordered points ===================== */ qboolean PlaneFromPoints( vec4_t plane, const vec3_t a, const vec3_t b, const vec3_t c ) { vec3_t d1, d2; VectorSubtract( b, a, d1 ); VectorSubtract( c, a, d2 ); CrossProduct( d2, d1, plane ); if ( VectorNormalize( plane ) == 0 ) { return qfalse; } plane[3] = DotProduct( a, plane ); return qtrue; } /* =============== RotatePointAroundVector This is not implemented very well... =============== */ void RotatePointAroundVector( vec3_t dst, const vec3_t dir, const vec3_t point, float degrees ) { float m[3][3]; float im[3][3]; float zrot[3][3]; float tmpmat[3][3]; float rot[3][3]; int i; vec3_t vr, vup, vf; float rad; vf[0] = dir[0]; vf[1] = dir[1]; vf[2] = dir[2]; PerpendicularVector( vr, dir ); CrossProduct( vr, vf, vup ); m[0][0] = vr[0]; m[1][0] = vr[1]; m[2][0] = vr[2]; m[0][1] = vup[0]; m[1][1] = vup[1]; m[2][1] = vup[2]; m[0][2] = vf[0]; m[1][2] = vf[1]; m[2][2] = vf[2]; memcpy( im, m, sizeof( im ) ); im[0][1] = m[1][0]; im[0][2] = m[2][0]; im[1][0] = m[0][1]; im[1][2] = m[2][1]; im[2][0] = m[0][2]; im[2][1] = m[1][2]; memset( zrot, 0, sizeof( zrot ) ); zrot[0][0] = zrot[1][1] = zrot[2][2] = 1.0F; rad = DEG2RAD( degrees ); zrot[0][0] = cos( rad ); zrot[0][1] = sin( rad ); zrot[1][0] = -sin( rad ); zrot[1][1] = cos( rad ); MatrixMultiply( m, zrot, tmpmat ); MatrixMultiply( tmpmat, im, rot ); for ( i = 0; i < 3; i++ ) { dst[i] = rot[i][0] * point[0] + rot[i][1] * point[1] + rot[i][2] * point[2]; } } /* =============== RotateAroundDirection =============== */ void RotateAroundDirection( vec3_t axis[3], float yaw ) { // create an arbitrary axis[1] PerpendicularVector( axis[1], axis[0] ); // rotate it around axis[0] by yaw if ( yaw ) { vec3_t temp; VectorCopy( axis[1], temp ); RotatePointAroundVector( axis[1], axis[0], temp, yaw ); } // cross to get axis[2] CrossProduct( axis[0], axis[1], axis[2] ); } void vectoangles( const vec3_t value1, vec3_t angles ) { float forward; float yaw, pitch; if ( value1[1] == 0 && value1[0] == 0 ) { yaw = 0; if ( value1[2] > 0 ) { pitch = 90; } else { pitch = 270; } } else { if ( value1[0] ) { yaw = ( atan2 ( value1[1], value1[0] ) * 180 / M_PI ); } else if ( value1[1] > 0 ) { yaw = 90; } else { yaw = 270; } if ( yaw < 0 ) { yaw += 360; } forward = sqrt ( value1[0]*value1[0] + value1[1]*value1[1] ); pitch = ( atan2(value1[2], forward) * 180 / M_PI ); if ( pitch < 0 ) { pitch += 
360; } } angles[PITCH] = -pitch; angles[YAW] = yaw; angles[ROLL] = 0; } /* ================= AnglesToAxis ================= */ void AnglesToAxis( const vec3_t angles, vec3_t axis[3] ) { vec3_t right; // angle vectors returns "right" instead of "y axis" AngleVectors( angles, axis[0], right, axis[2] ); VectorSubtract( vec3_origin, right, axis[1] ); } void AxisClear( vec3_t axis[3] ) { axis[0][0] = 1; axis[0][1] = 0; axis[0][2] = 0; axis[1][0] = 0; axis[1][1] = 1; axis[1][2] = 0; axis[2][0] = 0; axis[2][1] = 0; axis[2][2] = 1; } void AxisCopy( vec3_t in[3], vec3_t out[3] ) { VectorCopy( in[0], out[0] ); VectorCopy( in[1], out[1] ); VectorCopy( in[2], out[2] ); } void ProjectPointOnPlane( vec3_t dst, const vec3_t p, const vec3_t normal ) { float d; vec3_t n; float inv_denom; inv_denom = DotProduct( normal, normal ); #ifndef Q3_VM assert( Q_fabs(inv_denom) != 0.0f ); // zero vectors get here #endif inv_denom = 1.0f / inv_denom; d = DotProduct( normal, p ) * inv_denom; n[0] = normal[0] * inv_denom; n[1] = normal[1] * inv_denom; n[2] = normal[2] * inv_denom; dst[0] = p[0] - d * n[0]; dst[1] = p[1] - d * n[1]; dst[2] = p[2] - d * n[2]; } /* ================ MakeNormalVectors Given a normalized forward vector, create two other perpendicular vectors ================ */ void MakeNormalVectors( const vec3_t forward, vec3_t right, vec3_t up) { float d; // this rotate and negate guarantees a vector // not colinear with the original right[1] = -forward[0]; right[2] = forward[1]; right[0] = forward[2]; d = DotProduct (right, forward); VectorMA (right, -d, forward, right); VectorNormalize (right); CrossProduct (right, forward, up); } void VectorRotate( const vec3_t in, const vec3_t matrix[3], vec3_t out ) { out[0] = DotProduct( in, matrix[0] ); out[1] = DotProduct( in, matrix[1] ); out[2] = DotProduct( in, matrix[2] ); } //============================================================================ #ifdef _MSC_SSE2 #include <intrin.h> #endif /* ** float Q_rsqrt( float number ) */ float Q_rsqrt( float number ) { #if defined(_MSC_SSE2) float ret; _mm_store_ss( &ret, _mm_rsqrt_ss( _mm_load_ss( &number ) ) ); return ret; #elif defined(_GCC_SSE2) /* writing it this way allows gcc to recognize that rsqrt can be used with -ffast-math */ return 1.0f / sqrtf( number ); #else floatint_t t; float x2, y; const float threehalfs = 1.5F; x2 = number * 0.5F; t.f = number; t.i = 0x5f3759df - ( t.i >> 1 ); // what the fuck? 
y = t.f; y = y * ( threehalfs - ( x2 * y * y ) ); // 1st iteration // y = y * ( threehalfs - ( x2 * y * y ) ); // 2nd iteration, this can be removed return y; #endif } float Q_fabs( float f ) { floatint_t fi; fi.f = f; fi.i &= 0x7FFFFFFF; return fi.f; } //============================================================ /* =============== LerpAngle =============== */ float LerpAngle (float from, float to, float frac) { float a; if ( to - from > 180 ) { to -= 360; } if ( to - from < -180 ) { to += 360; } a = from + frac * (to - from); return a; } /* ================= AngleSubtract Always returns a value from -180 to 180 ================= */ float AngleSubtract( float a1, float a2 ) { float a; a = a1 - a2; while ( a > 180 ) { a -= 360; } while ( a < -180 ) { a += 360; } return a; } void AnglesSubtract( vec3_t v1, vec3_t v2, vec3_t v3 ) { v3[0] = AngleSubtract( v1[0], v2[0] ); v3[1] = AngleSubtract( v1[1], v2[1] ); v3[2] = AngleSubtract( v1[2], v2[2] ); } float AngleMod(float a) { a = (360.0/65536) * ((int)(a*(65536/360.0)) & 65535); return a; } /* ================= AngleNormalize360 returns angle normalized to the range [0 <= angle < 360] ================= */ float AngleNormalize360 ( float angle ) { return (360.0 / 65536) * ((int)(angle * (65536 / 360.0)) & 65535); } /* ================= AngleNormalize180 returns angle normalized to the range [-180 < angle <= 180] ================= */ float AngleNormalize180 ( float angle ) { angle = AngleNormalize360( angle ); if ( angle > 180.0 ) { angle -= 360.0; } return angle; } /* ================= AngleDelta returns the normalized delta from angle1 to angle2 ================= */ float AngleDelta ( float angle1, float angle2 ) { return AngleNormalize180( angle1 - angle2 ); } //============================================================ /* ================= SetPlaneSignbits ================= */ void SetPlaneSignbits (cplane_t *out) { int bits, j; // for fast box on planeside test bits = 0; for (j=0 ; j<3 ; j++) { if (out->normal[j] < 0) { bits |= 1<<j; } } out->signbits = bits; } /* ================== BoxOnPlaneSide Returns 1, 2, or 1 + 2 ================== */ int BoxOnPlaneSide(vec3_t emins, vec3_t emaxs, struct cplane_s *p) { float dist[2]; int sides, b, i; // fast axial cases if (p->type < 3) { if (p->dist <= emins[p->type]) return 1; if (p->dist >= emaxs[p->type]) return 2; return 3; } // general case dist[0] = dist[1] = 0; if (p->signbits < 8) // >= 8: default case is original code (dist[0]=dist[1]=0) { for (i=0 ; i<3 ; i++) { b = (p->signbits >> i) & 1; dist[ b] += p->normal[i]*emaxs[i]; dist[!b] += p->normal[i]*emins[i]; } } sides = 0; if (dist[0] >= p->dist) sides = 1; if (dist[1] < p->dist) sides |= 2; return sides; } /* ================= RadiusFromBounds ================= */ float RadiusFromBounds( const vec3_t mins, const vec3_t maxs ) { int i; vec3_t corner; float a, b; for (i=0 ; i<3 ; i++) { a = fabs( mins[i] ); b = fabs( maxs[i] ); corner[i] = a > b ? 
a : b; } return VectorLength (corner); } void ClearBounds( vec3_t mins, vec3_t maxs ) { mins[0] = mins[1] = mins[2] = 99999; maxs[0] = maxs[1] = maxs[2] = -99999; } void AddPointToBounds( const vec3_t v, vec3_t mins, vec3_t maxs ) { if ( v[0] < mins[0] ) { mins[0] = v[0]; } if ( v[0] > maxs[0]) { maxs[0] = v[0]; } if ( v[1] < mins[1] ) { mins[1] = v[1]; } if ( v[1] > maxs[1]) { maxs[1] = v[1]; } if ( v[2] < mins[2] ) { mins[2] = v[2]; } if ( v[2] > maxs[2]) { maxs[2] = v[2]; } } qboolean BoundsIntersect(const vec3_t mins, const vec3_t maxs, const vec3_t mins2, const vec3_t maxs2) { if ( maxs[0] < mins2[0] || maxs[1] < mins2[1] || maxs[2] < mins2[2] || mins[0] > maxs2[0] || mins[1] > maxs2[1] || mins[2] > maxs2[2]) { return qfalse; } return qtrue; } qboolean BoundsIntersectSphere(const vec3_t mins, const vec3_t maxs, const vec3_t origin, vec_t radius) { if ( origin[0] - radius > maxs[0] || origin[0] + radius < mins[0] || origin[1] - radius > maxs[1] || origin[1] + radius < mins[1] || origin[2] - radius > maxs[2] || origin[2] + radius < mins[2]) { return qfalse; } return qtrue; } qboolean BoundsIntersectPoint(const vec3_t mins, const vec3_t maxs, const vec3_t origin) { if ( origin[0] > maxs[0] || origin[0] < mins[0] || origin[1] > maxs[1] || origin[1] < mins[1] || origin[2] > maxs[2] || origin[2] < mins[2]) { return qfalse; } return qtrue; } vec_t VectorNormalize( vec3_t v ) { // NOTE: TTimo - Apple G4 altivec source uses double? float length, ilength; length = v[0]*v[0] + v[1]*v[1] + v[2]*v[2]; if ( length ) { /* writing it this way allows gcc to recognize that rsqrt can be used */ ilength = 1/(float)sqrt (length); /* sqrt(length) = length * (1 / sqrt(length)) */ length *= ilength; v[0] *= ilength; v[1] *= ilength; v[2] *= ilength; } return length; } vec_t VectorNormalize2( const vec3_t v, vec3_t out) { float length, ilength; length = v[0]*v[0] + v[1]*v[1] + v[2]*v[2]; if (length) { /* writing it this way allows gcc to recognize that rsqrt can be used */ ilength = 1/(float)sqrt (length); /* sqrt(length) = length * (1 / sqrt(length)) */ length *= ilength; out[0] = v[0]*ilength; out[1] = v[1]*ilength; out[2] = v[2]*ilength; } else { VectorClear( out ); } return length; } void _VectorMA( const vec3_t veca, float scale, const vec3_t vecb, vec3_t vecc) { vecc[0] = veca[0] + scale*vecb[0]; vecc[1] = veca[1] + scale*vecb[1]; vecc[2] = veca[2] + scale*vecb[2]; } vec_t _DotProduct( const vec3_t v1, const vec3_t v2 ) { return v1[0]*v2[0] + v1[1]*v2[1] + v1[2]*v2[2]; } void _VectorSubtract( const vec3_t veca, const vec3_t vecb, vec3_t out ) { out[0] = veca[0]-vecb[0]; out[1] = veca[1]-vecb[1]; out[2] = veca[2]-vecb[2]; } void _VectorAdd( const vec3_t veca, const vec3_t vecb, vec3_t out ) { out[0] = veca[0]+vecb[0]; out[1] = veca[1]+vecb[1]; out[2] = veca[2]+vecb[2]; } void _VectorCopy( const vec3_t in, vec3_t out ) { out[0] = in[0]; out[1] = in[1]; out[2] = in[2]; } void _VectorScale( const vec3_t in, vec_t scale, vec3_t out ) { out[0] = in[0]*scale; out[1] = in[1]*scale; out[2] = in[2]*scale; } void Vector4Scale( const vec4_t in, vec_t scale, vec4_t out ) { out[0] = in[0]*scale; out[1] = in[1]*scale; out[2] = in[2]*scale; out[3] = in[3]*scale; } int Q_log2( int val ) { int answer; answer = 0; while ( ( val>>=1 ) != 0 ) { answer++; } return answer; } /* ================= PlaneTypeForNormal ================= */ /* int PlaneTypeForNormal (vec3_t normal) { if ( normal[0] == 1.0 ) return PLANE_X; if ( normal[1] == 1.0 ) return PLANE_Y; if ( normal[2] == 1.0 ) return PLANE_Z; return PLANE_NON_AXIAL; } */ 
/*
================
MatrixMultiply
================
*/
void MatrixMultiply(float in1[3][3], float in2[3][3], float out[3][3]) {
	out[0][0] = in1[0][0] * in2[0][0] + in1[0][1] * in2[1][0] + in1[0][2] * in2[2][0];
	out[0][1] = in1[0][0] * in2[0][1] + in1[0][1] * in2[1][1] + in1[0][2] * in2[2][1];
	out[0][2] = in1[0][0] * in2[0][2] + in1[0][1] * in2[1][2] + in1[0][2] * in2[2][2];
	out[1][0] = in1[1][0] * in2[0][0] + in1[1][1] * in2[1][0] + in1[1][2] * in2[2][0];
	out[1][1] = in1[1][0] * in2[0][1] + in1[1][1] * in2[1][1] + in1[1][2] * in2[2][1];
	out[1][2] = in1[1][0] * in2[0][2] + in1[1][1] * in2[1][2] + in1[1][2] * in2[2][2];
	out[2][0] = in1[2][0] * in2[0][0] + in1[2][1] * in2[1][0] + in1[2][2] * in2[2][0];
	out[2][1] = in1[2][0] * in2[0][1] + in1[2][1] * in2[1][1] + in1[2][2] * in2[2][1];
	out[2][2] = in1[2][0] * in2[0][2] + in1[2][1] * in2[1][2] + in1[2][2] * in2[2][2];
}


void AngleVectors( const vec3_t angles, vec3_t forward, vec3_t right, vec3_t up) {
	float angle;
	static float sr, sp, sy, cr, cp, cy;
	// static to help MS compiler fp bugs

	angle = angles[YAW] * (M_PI*2 / 360);
	sy = sin(angle);
	cy = cos(angle);
	angle = angles[PITCH] * (M_PI*2 / 360);
	sp = sin(angle);
	cp = cos(angle);
	angle = angles[ROLL] * (M_PI*2 / 360);
	sr = sin(angle);
	cr = cos(angle);

	if (forward) {
		forward[0] = cp*cy;
		forward[1] = cp*sy;
		forward[2] = -sp;
	}
	if (right) {
		right[0] = (-1*sr*sp*cy+-1*cr*-sy);
		right[1] = (-1*sr*sp*sy+-1*cr*cy);
		right[2] = -1*sr*cp;
	}
	if (up) {
		up[0] = (cr*sp*cy+-sr*-sy);
		up[1] = (cr*sp*sy+-sr*cy);
		up[2] = cr*cp;
	}
}

/*
** assumes "src" is normalized
*/
void PerpendicularVector( vec3_t dst, const vec3_t src ) {
	int pos;
	int i;
	float minelem = 1.0F;
	vec3_t tempvec;

	/*
	** find the smallest magnitude axially aligned vector
	*/
	for ( pos = 0, i = 0; i < 3; i++ ) {
		if ( fabs( src[i] ) < minelem ) {
			pos = i;
			minelem = fabs( src[i] );
		}
	}
	tempvec[0] = tempvec[1] = tempvec[2] = 0.0F;
	tempvec[pos] = 1.0F;

	/*
	** project the point onto the plane defined by src
	*/
	ProjectPointOnPlane( dst, tempvec, src );

	/*
	** normalize the result
	*/
	VectorNormalize( dst );
}

/*
================
Q_isnan

Don't pass doubles to this
================
*/
int Q_isnan( float x ) {
	floatint_t fi;

	fi.f = x;
	fi.u &= 0x7FFFFFFF;
	fi.u = 0x7F800000 - fi.u;

	return (int)( fi.u >> 31 );
}

//============================================================

/*
================
Q_isfinite
================
*/
static int Q_isfinite( float f ) {
	floatint_t fi;
	fi.f = f;

	if ( fi.u == 0xFF800000 || fi.u == 0x7F800000 )
		return 0; // -INF or +INF

	fi.u = 0x7F800000 - (fi.u & 0x7FFFFFFF);
	if ( (int)( fi.u >> 31 ) )
		return 0; // -NAN or +NAN

	return 1;
}

/*
================
Q_atof
================
*/
float Q_atof( const char *str ) {
	float f;

	f = atof( str );
	// modern C11-like implementations of atof() may return INF or NAN
	// which breaks all FP code where such values getting passed
	// and effectively corrupts range checks for cvars as well
	if ( !Q_isfinite( f ) )
		return 0.0f;

	return f;
}

/*
================
Q_log2f
================
*/
float Q_log2f( float f ) {
	const float v = logf( f );
	return v / M_LN2;
}

/*
================
Q_exp2f
================
*/
float Q_exp2f( float f ) {
	return powf( 2.0f, f );
}

#ifndef Q3_VM
/*
=====================
Q_acos

the msvc acos doesn't always return a value between -PI and PI:

int i;
i = 1065353246;
acos(*(float*) &i) == -1.#IND0
=====================
*/
float Q_acos(float c) {
	float angle;

	angle = acos(c);

	if (angle > M_PI) {
		return (float)M_PI;
	}
	if (angle < -M_PI) {
		return (float)M_PI;
	}
	return angle;
}
#endif
```
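The portable branch of `Q_rsqrt` earlier in this file is the famous fast inverse square root: reinterpret the float's bits as an integer, subtract the shifted bits from the magic constant 0x5f3759df to get a first guess, then polish with one Newton-Raphson step. A standalone Python sketch of the same bit trick (the helper name and the error tolerance are mine, not Quake's):

```python
import math
import struct

def q_rsqrt(number: float) -> float:
    """Approximate 1/sqrt(number) via the 0x5f3759df bit trick."""
    x2 = number * 0.5
    # Reinterpret the 32-bit float's bits as an unsigned integer,
    # mirroring the floatint_t union in q_math.c.
    i = struct.unpack('<I', struct.pack('<f', number))[0]
    i = (0x5f3759df - (i >> 1)) & 0xffffffff   # initial guess
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    y = y * (1.5 - x2 * y * y)                 # one Newton-Raphson step
    return y

# Rough check: one iteration keeps the relative error within ~0.2%.
for v in (1.0, 2.0, 10.0, 12345.0):
    approx, exact = q_rsqrt(v), 1.0 / math.sqrt(v)
    assert abs(approx - exact) / exact < 0.002
```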
Neny Matterhorn is a sharp, pyramid-shaped peak rising to over 1,125 m, standing in the northwest part of the Blackwall Mountains on the south side of Neny Fjord, Graham Land. It was first roughly surveyed in 1936-37 by the British Graham Land Expedition (BGLE) under Rymill, and resurveyed in 1948-49 by the Falkland Islands Dependencies Survey (FIDS). The name was apparently first used by members of the Ronne Antarctic Research Expedition (RARE), 1947–48, under Ronne, and the FIDS; it derives from the peak's location near Neny Fjord and its resemblance to the Matterhorn.

Mountains of Graham Land
Fallières Coast
```shell #!/bin/bash ./common.sh cd ../.. mvn -Ddatabasedmn=cockroachdb clean install ```
- Aliasing ssh connections
- Clear bash history
- Terminal based browser
- Sequential execution using the `;` statement separator
"You Beat All I Ever Saw" is a song written and originally recorded by Johnny Cash. Released in November 1966 as a single (Columbia 4-43921, with "Put the Sugar to Bed" on the opposite side), it debuted on the U.S. Billboard country chart at number 66 on the week of December 24, eventually reaching number 20. On the Cash Box country chart, the song peaked at number 28 Later the song was included on the U.K. compilation album More of Old Golden Throat (1969). Background and analysis Track listing Charts References External links "You Beat All I Ever Saw" on the Johnny Cash official website Johnny Cash songs 1966 songs 1966 singles Columbia Records singles Songs written by Johnny Cash Songs written by Maybelle Carter Song recordings produced by Don Law
```go package query import ( "chain/core/query/filter" ) var ( assetsTable = &filter.SQLTable{ Name: "annotated_assets", Alias: "ast", Columns: map[string]*filter.SQLColumn{ "id": {Name: "id", Type: filter.String, SQLType: filter.SQLBytea}, "alias": {Name: "alias", Type: filter.String, SQLType: filter.SQLText}, "issuance_program": {Name: "issuance_program", Type: filter.String, SQLType: filter.SQLBytea}, "quorum": {Name: "quorum", Type: filter.Integer, SQLType: filter.SQLInteger}, "tags": {Name: "tags", Type: filter.Object, SQLType: filter.SQLJSONB}, "definition": {Name: "definition", Type: filter.Object, SQLType: filter.SQLJSONB}, "is_local": {Name: "local", Type: filter.String, SQLType: filter.SQLBool}, }, } accountsTable = &filter.SQLTable{ Name: "annotated_accounts", Alias: "acc", Columns: map[string]*filter.SQLColumn{ "id": {Name: "id", Type: filter.String, SQLType: filter.SQLText}, "alias": {Name: "alias", Type: filter.String, SQLType: filter.SQLText}, "quorum": {Name: "quorum", Type: filter.Integer, SQLType: filter.SQLInteger}, "tags": {Name: "tags", Type: filter.Object, SQLType: filter.SQLJSONB}, }, } outputsTable = &filter.SQLTable{ Name: "annotated_outputs", Alias: "out", Columns: map[string]*filter.SQLColumn{ "id": {Name: "output_id", Type: filter.String, SQLType: filter.SQLBytea}, "type": {Name: "type", Type: filter.String, SQLType: filter.SQLText}, "purpose": {Name: "purpose", Type: filter.String, SQLType: filter.SQLText}, "transaction_id": {Name: "tx_hash", Type: filter.String, SQLType: filter.SQLBytea}, "position": {Name: "output_index", Type: filter.Integer, SQLType: filter.SQLInteger}, "asset_id": {Name: "asset_id", Type: filter.String, SQLType: filter.SQLBytea}, "asset_alias": {Name: "asset_alias", Type: filter.String, SQLType: filter.SQLText}, "asset_definition": {Name: "asset_definition", Type: filter.Object, SQLType: filter.SQLJSONB}, "asset_tags": {Name: "asset_tags", Type: filter.Object, SQLType: filter.SQLJSONB}, "asset_is_local": {Name: "asset_local", Type: filter.String, SQLType: filter.SQLBool}, "amount": {Name: "amount", Type: filter.Integer, SQLType: filter.SQLBigint}, "account_id": {Name: "account_id", Type: filter.String, SQLType: filter.SQLText}, "account_alias": {Name: "account_alias", Type: filter.String, SQLType: filter.SQLText}, "account_tags": {Name: "account_tags", Type: filter.Object, SQLType: filter.SQLJSONB}, "control_program": {Name: "control_program", Type: filter.String, SQLType: filter.SQLBytea}, "reference_data": {Name: "reference_data", Type: filter.Object, SQLType: filter.SQLJSONB}, "is_local": {Name: "local", Type: filter.String, SQLType: filter.SQLBool}, }, } inputsTable = &filter.SQLTable{ Name: "annotated_inputs", Alias: "inp", Columns: map[string]*filter.SQLColumn{ "type": {Name: "type", Type: filter.String, SQLType: filter.SQLText}, "asset_id": {Name: "asset_id", Type: filter.String, SQLType: filter.SQLBytea}, "asset_alias": {Name: "asset_alias", Type: filter.String, SQLType: filter.SQLText}, "asset_definition": {Name: "asset_definition", Type: filter.Object, SQLType: filter.SQLJSONB}, "asset_tags": {Name: "asset_tags", Type: filter.Object, SQLType: filter.SQLJSONB}, "asset_is_local": {Name: "asset_local", Type: filter.String, SQLType: filter.SQLBool}, "amount": {Name: "amount", Type: filter.Integer, SQLType: filter.SQLBigint}, "account_id": {Name: "account_id", Type: filter.String, SQLType: filter.SQLText}, "account_alias": {Name: "account_alias", Type: filter.String, SQLType: filter.SQLText}, "account_tags": {Name: "account_tags", 
Type: filter.Object, SQLType: filter.SQLJSONB}, "issuance_program": {Name: "issuance_program", Type: filter.String, SQLType: filter.SQLBytea}, "reference_data": {Name: "reference_data", Type: filter.Object, SQLType: filter.SQLJSONB}, "is_local": {Name: "local", Type: filter.String, SQLType: filter.SQLBool}, "spent_output_id": {Name: "spent_output_id", Type: filter.String, SQLType: filter.SQLBytea}, "spent_output": {Name: "spent_output", Type: filter.Object, SQLType: filter.SQLJSONB}, }, } transactionsTable = &filter.SQLTable{ Name: "annotated_txs", Alias: "txs", Columns: map[string]*filter.SQLColumn{ "id": {Name: "tx_hash", Type: filter.String, SQLType: filter.SQLBytea}, "timestamp": {Name: "timestamp", Type: filter.String, SQLType: filter.SQLTimestamp}, "block_id": {Name: "block_id", Type: filter.String, SQLType: filter.SQLBytea}, "block_height": {Name: "block_height", Type: filter.Integer, SQLType: filter.SQLBigint}, "position": {Name: "tx_pos", Type: filter.Integer, SQLType: filter.SQLInteger}, "block_transactions_count": {Name: "block_tx_count", Type: filter.Integer, SQLType: filter.SQLInteger}, "reference_data": {Name: "reference_data", Type: filter.Object, SQLType: filter.SQLJSONB}, "is_local": {Name: "local", Type: filter.String, SQLType: filter.SQLBool}, }, ForeignKeys: map[string]*filter.SQLForeignKey{ "inputs": {Table: inputsTable, LocalColumn: "tx_hash", ForeignColumn: "tx_hash"}, "outputs": {Table: outputsTable, LocalColumn: "tx_hash", ForeignColumn: "tx_hash"}, }, } ) ```
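Each `SQLTable` above is essentially a lookup table from filter-language field names (`asset_alias`, `amount`, and so on) to the physical column name, value type, and SQL type the query builder should emit. A toy Python sketch of that lookup idea, with hypothetical names and just enough structure to show the translation (this is illustrative, not Chain's actual API):

```python
# Miniature analogue of the Go SQLTable/SQLColumn mapping above.
OUTPUTS = {
    "table": "annotated_outputs",
    "alias": "out",
    "columns": {"asset_alias": "asset_alias", "amount": "amount"},
}

def where_clause(table: dict, field: str, placeholder: str = "$1") -> str:
    # KeyError here plays the role of "unknown filter field".
    col = table["columns"][field]
    return f'{table["alias"]}.{col} = {placeholder}'

# e.g. the filter term asset_alias = 'gold' would become:
print(where_clause(OUTPUTS, "asset_alias"))  # out.asset_alias = $1
```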
```cpp
//===-- ARMMCAsmInfo.h - ARM asm properties --------------------*- C++ -*--===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See path_to_url for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// This file contains the declaration of the ARMMCAsmInfo class.
//
//===----------------------------------------------------------------------===//

#ifndef LLVM_LIB_TARGET_ARM_MCTARGETDESC_ARMMCASMINFO_H
#define LLVM_LIB_TARGET_ARM_MCTARGETDESC_ARMMCASMINFO_H

#include "llvm/MC/MCAsmInfoCOFF.h"
#include "llvm/MC/MCAsmInfoDarwin.h"
#include "llvm/MC/MCAsmInfoELF.h"

namespace llvm {
class Triple;

class ARMMCAsmInfoDarwin : public MCAsmInfoDarwin {
  virtual void anchor();

public:
  explicit ARMMCAsmInfoDarwin(const Triple &TheTriple);
};

class ARMELFMCAsmInfo : public MCAsmInfoELF {
  void anchor() override;

public:
  explicit ARMELFMCAsmInfo(const Triple &TT);

  void setUseIntegratedAssembler(bool Value) override;
};

class ARMCOFFMCAsmInfoMicrosoft : public MCAsmInfoMicrosoft {
  void anchor() override;

public:
  explicit ARMCOFFMCAsmInfoMicrosoft();
};

class ARMCOFFMCAsmInfoGNU : public MCAsmInfoGNUCOFF {
  void anchor() override;

public:
  explicit ARMCOFFMCAsmInfoGNU();
};
} // namespace llvm

#endif
```
```php
<?php

declare(strict_types=1);

return [
    [
        'BCDE',
        'PhpSpreadsheet',
    ],
    [
        '877D',
        'Mark Baker',
    ],
    [
        'C0EA',
        '!+&=()~',
    ],
    [
        'C07E',
        '',
    ],
    [
        '99E8',
        'leyndarmál lykilorð',
    ],
    [
        'CE4B',
        '',
    ],
    [
        'O6EXRLpLEDNJDL/AzYtnnA4O4bY=',
        '',
        'SHA-1',
    ],
    [
        'GYvlIMljDI1Czc4jfWrGaxU5pxl9n5Og0KUzyAfYxwk=',
        'PhpSpreadsheet',
        'SHA-256',
        'Php_salt',
        1000,
    ],
    [
        'your_sha512_hash==',
        'Mark Baker',
        'SHA-512',
        'Mark_salt',
        10000,
    ],
    [
        'r9KVLLCKIYOILvE2rcby+g==',
        '!+&=()~',
        'MD5',
        'Symbols_salt',
        100000,
    ],
    // Additional tests suggested by Issue #1897
    ['DCDF', 'ABCDEFGHIJKLMNOPQRSTUVW'],
    ['ECD1', 'ABCDEFGHIJKLMNOPQRSTUVWX'],
    ['88D2', 'ABCDEFGHIJKLMNOPQRSTUVWXY'],
    'password too long' => ['exception', str_repeat('x', 256)],
];
```
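The four-hex-digit expectations in the first rows come from the legacy Excel worksheet-protection hash (a 15-bit rotate-and-XOR), while the base64 rows exercise the salted, iterated SHA/MD5 variants. A Python transcription of the commonly documented legacy algorithm, assuming single-byte (ASCII) passwords; note how an empty password reduces to the constant 0xCE4B, matching one of the fixtures above:

```python
def legacy_excel_hash(password: str) -> str:
    """15-bit rotate-and-XOR hash used by classic Excel sheet protection."""
    # Prepend the length byte, then fold bytes in from right to left.
    data = bytes([len(password)]) + password.encode("ascii")
    h = 0
    for b in reversed(data):
        h = ((h >> 14) & 0x01) | ((h << 1) & 0x7FFF)  # rotate left within 15 bits
        h ^= b
    return format(h ^ 0xCE4B, "04X")

# The empty-password row above: no bytes besides the zero length byte,
# so the result collapses to 0 ^ 0xCE4B.
assert legacy_excel_hash("") == "CE4B"
```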
Gerard Sont, also known as Gerry Sont, is an Australian actor and TV host.

Biography
Sont played the titular Melvin in Melvin, Son of Alvin. He played the recurring character Brett Mackin on Home and Away from the series' inception in 1988, with appearances until 2005, and played a main character, Cal Lawrence, in the TV series Chances. He was the first host of Australia's version of Double Dare and was a presenter on the ABC magazine-style TV series Antenna.

Sont has appeared on stage in productions such as How Does Your Garden Grow? at the State Theatre in 1996, The Cherry Orchard at the New Theatre in 1996, The Gospel of Mark at Belvoir St Downstairs in 2000 and The Object of Desire at La Mama in 2007. He appeared as Jean-Michel in a 1985 production of the musical La Cage aux Folles at Her Majesty's Theatre and the Palais Theatre.

Filmography

References

External links

Living people
Australian stage actors
Australian film actors
Australian television actors
21st-century Australian actors
Year of birth missing (living people)
"To My Sorrow" is a country music song written by Vernice J. McAlpin, sung by Eddy Arnold (and His Texas Plowboys), and released in 1947 on the RCA Victor label (catalog no. 20-2481-A). In November 1947, it reached No. 2 on the Billboard folk juke box chart. It was also ranked as the No. 12 record on the Billboard 1947 year-end folk juke box chart. References Eddy Arnold songs 1946 songs 1947 singles
```python
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     path_to_url
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import unittest

import numpy as np
from op_test_ipu import IPUOpTest

import paddle
import paddle.static


class TestBase(IPUOpTest):
    def setUp(self):
        self.set_atol()
        self.set_training()
        self.set_data_feed()
        self.set_feed_attr()
        self.set_op_attrs()

    def set_data_feed(self):
        data = np.random.uniform(size=[1, 3, 10, 10])
        self.feed_fp32 = {"x": data.astype(np.float32)}
        self.feed_fp16 = {"x": data.astype(np.float16)}

    def set_feed_attr(self):
        self.feed_shape = [x.shape for x in self.feed_fp32.values()]
        self.feed_list = list(self.feed_fp32.keys())
        self.feed_dtype = [x.dtype for x in self.feed_fp32.values()]

    def set_op_attrs(self):
        self.attrs = {"perm": [0, 2, 3, 1]}

    @IPUOpTest.static_graph
    def build_model(self):
        x = paddle.static.data(
            name=self.feed_list[0], shape=self.feed_shape[0], dtype='float32'
        )
        out = paddle.transpose(x, **self.attrs)
        self.fetch_list = [out.name]

    def run_model(self, exec_mode):
        self.run_op_test(exec_mode)

    def test(self):
        for m in IPUOpTest.ExecutionMode:
            if not self.skip_mode(m):
                self.build_model()
                self.run_model(m)
                self.check(check_shape=True)


class TestCase1(TestBase):
    def set_op_attrs(self):
        self.attrs = {"perm": [0, 1, 2, 3]}


class TestCase2(TestBase):
    def set_data_feed(self):
        data = np.random.uniform(size=[1, 2, 3, 4, 5])
        self.feed_fp32 = {"x": data.astype(np.float32)}
        self.feed_fp16 = {"x": data.astype(np.float16)}

    def set_op_attrs(self):
        self.attrs = {"perm": [4, 0, 2, 3, 1]}


class TestCase_ZeroDim(TestBase):
    def set_data_feed(self):
        data = np.random.uniform(size=[])
        self.feed_fp32 = {"x": data.astype(np.float32)}
        self.feed_fp16 = {"x": data.astype(np.float16)}

    def set_op_attrs(self):
        self.attrs = {"perm": []}


if __name__ == "__main__":
    unittest.main()
```
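The `perm` attribute in these tests is the axis permutation handed to `paddle.transpose`: axis `i` of the output is axis `perm[i]` of the input, so `[0, 2, 3, 1]` is the familiar NCHW-to-NHWC layout swap. A quick NumPy illustration of the reference behaviour the IPU output is checked against:

```python
import numpy as np

x = np.random.uniform(size=(1, 3, 10, 10)).astype(np.float32)  # NCHW

# perm [0, 2, 3, 1]: result axis i comes from input axis perm[i].
y = np.transpose(x, (0, 2, 3, 1))
print(y.shape)  # (1, 10, 10, 3) -- NHWC

# perm [0, 1, 2, 3] (TestCase1) is the identity permutation.
assert np.array_equal(np.transpose(x, (0, 1, 2, 3)), x)
```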
"The One with the Cast of Night Court" is the third episode of the third season of the American television comedy series 30 Rock. It was written by co-executive producer Jack Burditt, and directed by Gail Mancuso. The episode originally aired on NBC in the United States on November 13, 2008. The episode received mixed reception from television critics. According to the Nielsen ratings system, it was watched by 7.5 million households during its original broadcast, and received a 4.6 rating/7 share among viewers in the 18–49 demographic. For her performance in this episode, Jennifer Aniston received a Primetime Emmy Award nomination in the category for Outstanding Guest Actress in a Comedy Series. The title of the episode is a reference to Aniston's show Friends. Plot Liz Lemon (Tina Fey) and Jenna Maroney (Jane Krakowski) await the arrival of their old Chicago roommate, Claire Harper (Jennifer Aniston). The two are not thrilled with her visit as they find her exhausting to be around. Immediately, their boss Jack Donaghy (Alec Baldwin) is attracted to Claire, but Liz tells him not to get involved with her. Jack, however, reveals to Liz that the two have already slept together. At a General Electric formal function, Claire surprises Jack by singing a sexy rendition of "Happy Birthday" to him, an allusion to Marilyn Monroe's performance for John F. Kennedy's birthday. He tells her that she needs to leave, so Claire loudly threatens to kill herself. To help Jack, Liz gets Claire to abandon her plans with Jack and instead go out nightclubbing with her and Jenna. At the club, Claire does not show up, which prompts Liz to call Jack to warn him about potential danger. He finds Claire inside his apartment and ends up sleeping with her again. When asked to choose between Liz and Claire, Jack chooses Claire, but Claire, thinking that the relationship has gotten boring, turns on Jack. Meanwhile, NBC page Kenneth Parcell (Jack McBrayer) is not happy with the new page uniforms. Wanting to see Kenneth happy again, Tracy Jordan (Tracy Morgan) gets actors Harry Anderson, Markie Post, and Charlie Robinson from the television show Night Court to come to 30 Rock, where Anderson and Post agree to stage the wedding of their respective characters, Judge Harry Stone and Christine Sullivan. Kenneth is excited when he finds out that he can finally see the Night Court wedding, which never occurred before the show was canceled by the network. When a conflict between Anderson and Post ensues, it seems that the wedding will not take place. However, Anderson and Post make up and rehearse. As Tracy and Kenneth finish taping the final scenes of Harry and Christine's wedding, Harry declares it illegal to wear the new page uniforms and demands the old ones be brought back. Tracy tells Kenneth that he added that part in the script as he complained to Kenneth's superiors to bring back the old uniforms, which makes Kenneth happy. Production "The One with the Cast of Night Court" was written by co-executive producer Jack Burditt, and directed by Gail Mancuso. This was Burditt's eighth writing credit, and was Mancuso's fourth directed episode. It originally aired on NBC in the United States on November 13, 2008, as the third episode of the show's third season. Its title, "The One...", is the convention used to name episodes of guest star Jennifer Aniston's prior sitcom Friends. In August 2008, it was announced that actress Jennifer Aniston would guest star on 30 Rock. 
The following month it was confirmed by NBC that she would play a woman obsessed with Alec Baldwin's character, Jack Donaghy. She filmed her scenes on August 29 and September 4, 2008. In November 2008, it was announced that actors Markie Post, Harry Anderson and Charlie Robinson, the cast of the situation comedy show Night Court, would make a cameo on the show. Two filmed scenes from "The One with the Cast of Night Court" were cut out from the airing. Instead, the scenes were featured on 30 Rock's season 3 DVD as part of the deleted scenes in the Bonus feature. In the first scene, Liz and Jenna recall their wild nights with Claire, including when Jenna and Claire danced around an opened fire hydrant, while Liz tells them that she does not feel safe. They also remember when they crashed a Polish wedding, in which Claire is seen dancing around a group of men. In the second scene, Harry Anderson is in Tracy's dressing room, after leaving rehearsal. Tracy enters to convince him to make up with Markie Post. In another room, Kenneth is seen with Markie Post. Anderson complains to Tracy about Post, as does Post about Anderson to Kenneth. Tracy tells him to forget about the past and fulfill Kenneth's dreams of a Night Court wedding to make Kenneth happy, as he is displeased with the new page uniforms he is forced to wear. Reception In its original American broadcast, "The One with the Cast of Night Court" was watched by 7.5 million households, according to the Nielsen ratings system. It received a 4.6 rating/7 share among viewers in the 18–49 demographic, meaning that 4.6% of all people in that group, and 7% of all people from that group watching television at the time, watched the episode. This was a decrease from the previous episode, "Believe in the Stars", which was watched by 8.0 million American viewers. This episode was the tenth highest-rated show on the NBC network during the week of November 10–16, 2008. Jennifer Aniston received a Primetime Emmy Award nomination for Outstanding Guest Actress in a Comedy Series at the 61st Primetime Emmy Awards for her work in this episode, but lost to Tina Fey for her satirical portrayal of Sarah Palin on Saturday Night Live. Since airing, "The One with the Cast of Night Court" has received mixed reception from television critics. Nathan Rabin of The A.V. Club wrote that the episode was "kooky, ooky and over-the-top" and enjoyed every minute of it. "For the more esoteric viewer, it's a milestone episode", said The Age's Farah Farouque. Cameron Adams for the Herald Sun called the episode hilarious, while The Boston Globe's Matthew Gilbert felt that it was flaccid and clichéd. IGN contributor Robert Canning said that the episode "did have its moments" and that the storylines "had their potential, and their share of laughs, but I can't help but feel they both could have been so much more." He opined that the Kenneth and Tracy story was "more up 30 Rock's style" but that it was a shame that the story could not "quite knock the concept out of the park." Overall, Canning rated "The One with the Cast of Night Court" a 7.9 out of 10. Jeff Labrecque for Entertainment Weekly reported that the episode fell flat. Critical opinion was divided on Aniston's performance as Claire. TV Guide's Matt Mitovich wrote that Aniston "looked sweet, but the role was juuuuust a bit much ... [and] over-the-top." Jeremy Medina of Paste said that the show did not really seem to know what to do with Aniston in this episode. 
Tom Stempel for Slant Magazine said 30 Rock was "smart enough" not to make Claire resemble Aniston's former television character, Rachel, from Friends. Further in his review, Stempel said that Claire was a "great choice of character" for Aniston to play, and praised her for knocking the role "out of the park." Kerrie Murphy for The Australian was equally positive noting that Aniston fits in smoothly as Liz's former roommate. Murphy added, "Not only is it a reminder that Aniston is a gifted comic actor [...] With her, the show's regular cast easily hold their own." Bob Sassone of AOL's TV Squad enjoyed the cameos of Harry Anderson, Markie Post, and Charles Robinson in the episode. Robert Philpot for the Fort Worth Star-Telegram wrote that the Night Court cast stole the show from Aniston. Television columnist Alan Sepinwall for The Star-Ledger wrote that he "got a much bigger kick out" of the Night Court story. Medina, who wrote that the episode was "mostly a success", disliked the Night Court subplot, claiming it was not funny. References External links 2008 American television episodes 30 Rock (season 3) episodes
Sharwin I (Persian: شروین) was the fifth ruler of the Bavand dynasty from 772 to 817. He was the son and successor of Surkhab II. Background In 760, during the reign of Sharwin's father Surkhab II, Khurshid, the head of the Dabuyid dynasty that had ruled Tabaristan since the Muslim conquest of Persia, revolted against the Abbasid Caliphate. Khurshid was defeated and poisoned himself after learning that his family had been captured by the Abbasids. This marked the end of the Dabuyid dynasty, but other minor dynasties in the region such as the Bavandids, Karenids and Zarmihrids, who were all formerly subject to the Dabuyids, continued to control parts of Tabaristan as tributary vassals of the Abbasid government. Biography In 772, Surkhab II died, and was succeeded by Sharwin I. During the same period, Khalid ibn Barmak, the Abbasid governor of Tabaristan, left the region. Shortly after Khalid's departure, the Karenid ruler Vandad Hormozd sent Sharwin a letter which urged him to revolt against the Abbasids. Sharwin accepted, and along with Vandad Hormozd and the Zarmihrid ruler revolted against the Abbasids. Sharwin then began destroying the cities built by the Muslims in the region, and in 782, along with Vandad Hormozd, exterminated all the Muslims in Tabaristan. During the same period, the Karenids assumed the former Dabuyid title of Gilgilan, while Sharwin assumed the title of Padashwargarshah ("King of the Mountains"). Sharwin and the other rulers of Tabaristan managed to repel several Arab invasions of Tabaristan, until they were finally defeated in 785, and once again agreed to pay tribute to the Abbasid caliphs. In 805, the Abbasid caliph Harun al-Rashid visited Ray where he met Sharwin and Vandad Hormozd, who reaffirmed their submission to him and promised to pay tax. In order to ensure their loyalty, Harun took Sharwin's grandson Shahriyar I and Vandad Hormozd's son Karin as hostages to Baghdad. The two princes were allowed to return to Tabaristan after Harun's death four years later. Sharwin died in 817, and was succeeded by his grandson Shahriyar I. References Sources Bavand dynasty 9th-century monarchs in Asia 8th-century monarchs in Asia 8th-century Iranian people 9th-century Iranian people Rebellions against the Abbasid Caliphate 817 deaths Year of birth unknown Zoroastrian monarchs Vassal rulers of the Abbasid Caliphate
Eerde is a village in the Dutch province of North Brabant. It is part of the municipality of Meierijstad, located about 500 m west of the built-up area of Veghel and 3 km southwest of the town centre of Veghel. During Operation Market Garden, in September 1944, it changed hands several times between German and American forces but ended up in American hands; the village was severely damaged in the process.

The village was first mentioned in 1309 as Eirde, and means earth. Eerde was home to 174 people in 1840. The village used to be divided among three municipalities. In 1966, the whole area became part of Veghel, and since 2017 it has been part of Meierijstad.

Gallery

References

Populated places in North Brabant
Meierijstad
John Ash (fl. 1420–1439) was an English politician. He was a Member (MP) of the Parliament of England for Totnes in 1420 and for Middlesex in 1433 and 1439. References Year of birth missing Year of death missing Members of the Parliament of England (pre-1707) for Totnes English MPs 1420 English MPs 1433 English MPs 1439
```yaml apiVersion: release-notes/v2 # This YAML file describes the format for specifying a release notes entry for Istio. # This should be filled in for all user facing changes. # kind describes the type of change that this represents. # Valid Values are: # - bug-fix -- Used to specify that this change represents a bug fix. # - security-fix -- Used to specify that this change represents a vulnerability fix. # - feature -- Used to specify a new feature that has been added. # - test -- Used to describe additional testing added. This file is optional for # tests, but included for completeness. kind: bug-fix # area describes the area that this change affects. # Valid values are: # - traffic-management # - security # - telemetry # - installation # - istioctl # - documentation area: traffic-management # issue is a list of GitHub issues resolved in this note. # If issue is not in the current repo, specify its full URL instead. issue: - 52746 # releaseNotes is a markdown listing of any user facing changes. This will appear in the # release notes. releaseNotes: - | **Fixed** an issue where Waypoints required DNSProxy to be enabled in order to consume auto-allocated IPs ```
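Because the template comments enumerate the valid `kind` and `area` values, entries like this are easy to lint mechanically in CI. A minimal sketch of such a check (the file path is hypothetical and the field choices are mine; assumes PyYAML is installed):

```python
import yaml

VALID_KINDS = {"bug-fix", "security-fix", "feature", "test"}
VALID_AREAS = {"traffic-management", "security", "telemetry",
               "installation", "istioctl", "documentation"}

# Hypothetical location of the entry shown above.
with open("releasenotes/notes/52746.yaml") as f:
    note = yaml.safe_load(f)

assert note["kind"] in VALID_KINDS, "unknown kind"
assert note["area"] in VALID_AREAS, "unknown area"
assert note.get("releaseNotes"), "user-facing changes need release notes"
```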
Rachael Doyle (born 26 October 1989) is an Australian football (soccer) player who last played for Central Coast Mariners in the Australian W-League. Doyle made her debut against Melbourne Victory on 25 October 2008. External links Central Coast Mariners FC profile 1989 births Living people Australian women's soccer players Central Coast Mariners FC (A-League Women) players A-League Women players Women's association football defenders Soccer players from Sydney Sportswomen from New South Wales
Pileus may refer to: Pileus (hat), a brimless cap Pileus (mycology), the "cap" of a mushroom Pileus (meteorology), a cloud formation the crown of a bird's head the scales on the top of lizard and snake heads See also Jewish hat, pileus cornutus
HDMS Justitia was a Royal Dano-Norwegian Navy ship-of-the-line, built to a design by Henrik Gerner. Although launched in 1777, she was not fully commissioned until 1780. The British Royal Navy seized her in 1807, together with the rest of the Danish fleet, after the second battle of Copenhagen. The British never commissioned Justitia; a renaming to Orford in 1809 was cancelled, and she was broken up in 1817.

HDMS Justitia (1777)
HDMS Justitia served in the home fleet based in Copenhagen for the whole of her active life in the Danish navy, acting when new as flagship to the admiral commanding the home squadron. Her senior officers included Vice Admiral Carl Friderich de Fontenay, admiral on the flagship Justitia (1781 and 1782); Hans Georg Krog (1780) and Johan Peter Wleugel (1782), flag captains on Justitia; and Hans Schiønnebøl (1781), Anton Friderich Lützow (1789), and Svend Martin Ursin (1800), captains when Justitia was not the flagship. In 1786 Lorentz Henrik Fisker was second in command of Justitia in the home squadron.

In 1788 Commodore Just Bille put forward proposals for the testing of the new 36-pound cannon in HDMS Justitia. These trials took place in June and July 1788, with Poul de Løvenørn as the official observer.

In 1788 Peder Janus Bording was captain of HDMS Justitia in the home squadron, which served alongside the Russian squadron, commanded by the Russian vice admiral von Dessen, involved in the Russo-Swedish War (1788–1790). In August of that year Justitia accompanied the ship-of-the-line Lovisa Augusta and the frigate Møen on a secret mission to the North Sea (details of which are lacking), and later on artillery trials.

Justitia does not appear to have been involved in the 1801 battle of Copenhagen, but was present at the 1807 battle when the majority of the Danish fleet was surrendered to the British. At that point the Royal Danish Navy struck her from the lists.

HMS Justitia
Justitia was one of the many ships the British Royal Navy seized after the battle. She arrived at Portsmouth on 5 December 1807 and was then laid up.

Fate
The "Principal Officers and Commissioners of His Majesty's Navy" first advertised Justitia, of 74 guns and 1758 tons, for sale and breaking up in July 1814. The successful purchaser had to give a bond to complete the breaking up within one year. However, she did not sell. In February 1817 the Navy used her for experiments with Robert Seppings' diagonal braces. She was then broken up at Portsmouth in March 1817.

Notes

Citations

References
Balsved - Danish Naval History website
Threedecks website - Justitia (1777)
T. A. Topsøe-Jensen og Emil Marquard (1935) "Officerer i den dansk-norske Søetat 1660-1814 og den danske Søetat 1814-1932". Two volumes. Volume 1 and Volume 2. Hard copies are listed in libraries in Stockholm, Odense, Ballerup and Copenhagen.

1777 ships
Ships of the line of the Royal Danish Navy
Captured ships
Ships of the line of the Royal Navy
Ships designed by Henrik Gerner
Ships built in Copenhagen
```csharp
// Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.

namespace Microsoft.TemplateEngine.Cli.TabularOutput
{
    internal static class UnicodeLength
    {
        internal static int GetUnicodeLength(this string s)
        {
            int totalWidth = 0;
            for (int i = 0; i < s.Length; i++)
            {
                totalWidth += Wcwidth.UnicodeCalculator.GetWidth((int)s[i]);
            }

            return totalWidth;
        }
    }
}
```
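The extension method above sums per-character terminal cell widths, via the Wcwidth package, so that tabular output lines up even when East Asian wide characters occupy two columns. Python's third-party `wcwidth` module exposes the same POSIX-style wcwidth() logic; a rough equivalent, assuming that package is installed:

```python
from wcwidth import wcwidth

def unicode_length(s: str) -> int:
    """Terminal display width of s, counting wide CJK characters as 2."""
    # wcwidth() returns -1 for control characters; clamp those to 0.
    return sum(max(wcwidth(ch), 0) for ch in s)

print(unicode_length("abc"))   # 3
print(unicode_length("文字"))  # 4 -- two full-width characters
```

One small design difference worth noting: this sketch iterates code points, whereas the C# version walks UTF-16 code units, so the two can disagree on characters outside the Basic Multilingual Plane.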
```java
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *     path_to_url
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.shardingsphere.infra.executor.sql.process.yaml.swapper;

import org.apache.shardingsphere.infra.executor.sql.process.Process;
import org.apache.shardingsphere.infra.executor.sql.process.yaml.YamlProcessList;
import org.apache.shardingsphere.infra.util.yaml.swapper.YamlConfigurationSwapper;

import java.util.Collection;
import java.util.stream.Collectors;

/**
 * YAML process list swapper.
 */
public final class YamlProcessListSwapper implements YamlConfigurationSwapper<YamlProcessList, Collection<Process>> {

    private final YamlProcessSwapper yamlProcessSwapper = new YamlProcessSwapper();

    @Override
    public YamlProcessList swapToYamlConfiguration(final Collection<Process> data) {
        YamlProcessList result = new YamlProcessList();
        result.setProcesses(data.stream().map(yamlProcessSwapper::swapToYamlConfiguration).collect(Collectors.toList()));
        return result;
    }

    @Override
    public Collection<Process> swapToObject(final YamlProcessList yamlConfig) {
        return yamlConfig.getProcesses().stream().map(yamlProcessSwapper::swapToObject).collect(Collectors.toList());
    }
}
```
```css
/*
    Name:       Paraíso (Light)
    Author:     Jan T. Sott

    Color scheme by Jan T. Sott (path_to_url)
    Inspired by the art of Rubens LP (path_to_url)
*/

.cm-s-paraiso-light.CodeMirror {background: #e7e9db; color: #41323f;}
.cm-s-paraiso-light div.CodeMirror-selected {background: #b9b6b0 !important;}
.cm-s-paraiso-light.CodeMirror ::selection { background: #b9b6b0; }
.cm-s-paraiso-light.CodeMirror ::-moz-selection { background: #b9b6b0; }
.cm-s-paraiso-light .CodeMirror-gutters {background: #e7e9db; border-right: 0px;}
.cm-s-paraiso-light .CodeMirror-guttermarker { color: black; }
.cm-s-paraiso-light .CodeMirror-guttermarker-subtle { color: #8d8687; }
.cm-s-paraiso-light .CodeMirror-linenumber {color: #8d8687;}
.cm-s-paraiso-light .CodeMirror-cursor {border-left: 1px solid #776e71 !important;}

.cm-s-paraiso-light span.cm-comment {color: #e96ba8;}
.cm-s-paraiso-light span.cm-atom {color: #815ba4;}
.cm-s-paraiso-light span.cm-number {color: #815ba4;}

.cm-s-paraiso-light span.cm-property, .cm-s-paraiso-light span.cm-attribute {color: #48b685;}
.cm-s-paraiso-light span.cm-keyword {color: #ef6155;}
.cm-s-paraiso-light span.cm-string {color: #fec418;}

.cm-s-paraiso-light span.cm-variable {color: #48b685;}
.cm-s-paraiso-light span.cm-variable-2 {color: #06b6ef;}
.cm-s-paraiso-light span.cm-def {color: #f99b15;}
.cm-s-paraiso-light span.cm-bracket {color: #41323f;}
.cm-s-paraiso-light span.cm-tag {color: #ef6155;}
.cm-s-paraiso-light span.cm-link {color: #815ba4;}
.cm-s-paraiso-light span.cm-error {background: #ef6155; color: #776e71;}

.cm-s-paraiso-light .CodeMirror-activeline-background {background: #CFD1C4 !important;}
.cm-s-paraiso-light .CodeMirror-matchingbracket { text-decoration: underline; color: white !important;}
```
```yaml --- parsed_sample: - vpn_session_name: - "Site-to-Site VPN" - "IKEv2 IPsec" - "IKEv1 IPsec" vpn_session_active: - "99" - "9" - "99" vpn_session_cumulative: - "3506999" - "3999" - "3502999" vpn_session_peak_concurrent: - "99" - "9" - "99" vpn_session_inactive: - "None" - "None" - "None" total_active_and_inactive: "99" total_cumulative: "3506999" device_total_vpn_capacity: "750" device_load_percent: "2" tunnels_summary_name: - "IKEv1" - "IKEv2" - "IPsec" - "IPsecOverNatT" tunnels_summary_active: - "99" - "9" - "99" - "9" tunnels_summary_cumulative: - "3502999" - "3999" - "9302" - "1999" tunnels_summary_peak_concurrent: - "99" - "9" - "99" - "9" totals_active: "99" totals_cumulative: "351999" ```
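This fixture stores each column of the device's VPN summary as a parallel list, so consumers are expected to zip the lists back into rows. A small sketch of doing exactly that (the file name is mine; assumes PyYAML is installed):

```python
import yaml

with open("parsed_sample.yaml") as f:
    data = yaml.safe_load(f)["parsed_sample"][0]

# Recombine the parallel per-column lists into per-session rows.
rows = zip(data["vpn_session_name"],
           data["vpn_session_active"],
           data["vpn_session_cumulative"])
for name, active, cumulative in rows:
    print(f"{name:20} active={active:>4} cumulative={cumulative}")
```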
August is a 1996 British drama film directed by and starring Anthony Hopkins as Ieuan (Welsh pronunciation: [ˈjəɨan]) Davies, and featuring Rhys Ifans, in one of his earliest films, in the small role of Griffiths. It is an adaptation of Anton Chekhov's 1899 play Uncle Vanya, with the character Ieuan Davies taking over the title role. The film was Hopkins's first feature film with a full cast (he had previously directed the one-man performance of Dylan Thomas: Return Journey in 1990). It would be over a decade before his next directorial effort, Slipstream (2007), which he also wrote and for which he also composed the score. In an interview on the podcast The Ghost of Hollywood, cinematographer Robin Vidgeon stated that working with Anthony Hopkins on August was the highlight of his career.

Cast
Anthony Hopkins as Ieuan Davies
Leslie Phillips as Prof. Alexander Blathwaite
Kate Burton as Helen Blathwaite
Gawn Grainger as Dr. Michael Lloyd
Rhian Morgan as Sian Blathwaite
Menna Trussler as Gwen
Rhoda Lewis as Mair Davies
Hugh Lloyd as Thomas Prosser
Huw Garmon as Dafydd Edwards
Rhys Ifans as Griffiths
Susan Ellen Flynn as Rhianon
Buddug Morgan as Nesta

Adaptation and issues
The film adapts Uncle Vanya to a turn-of-the-century Welsh setting, emphasizing the hardships of Welsh industrial life in the slate quarries and Welsh-English tensions, as an English professor upsets normal Welsh life when he arrives at the Welsh estate that serves as his vacation home (at one point Ieuan states that he feels he has been cheated by Prof. Blathwaite, just as "the English have always cheated the Welsh").

Language
It is primarily in English, with a few lines in Welsh here and there - such as diolch yn fawr iawn ("thank you very much"), cariad (a term of endearment, meaning "love"), and iechyd da ("cheers").

See also
Meibion Glyndŵr, on Welsh-English relations surrounding the English taking vacation homes in Wales.
List of Welsh films

References

External links

1996 films
1990s English-language films
Films based on Uncle Vanya
Films directed by Anthony Hopkins
Films set in Wales
Cool Cymru
1996 drama films
The Samuel Goldwyn Company films
English-language Welsh films
```python from . import idnadata import bisect import unicodedata import re import sys from .intranges import intranges_contain _virama_combining_class = 9 _alabel_prefix = b'xn--' _unicode_dots_re = re.compile(u'[\u002e\u3002\uff0e\uff61]') if sys.version_info[0] == 3: unicode = str unichr = chr class IDNAError(UnicodeError): """ Base exception for all IDNA-encoding related problems """ pass class IDNABidiError(IDNAError): """ Exception when bidirectional requirements are not satisfied """ pass class InvalidCodepoint(IDNAError): """ Exception when a disallowed or unallocated codepoint is used """ pass class InvalidCodepointContext(IDNAError): """ Exception when the codepoint is not valid in the context it is used """ pass def _combining_class(cp): return unicodedata.combining(unichr(cp)) def _is_script(cp, script): return intranges_contain(ord(cp), idnadata.scripts[script]) def _punycode(s): return s.encode('punycode') def _unot(s): return 'U+{0:04X}'.format(s) def valid_label_length(label): if len(label) > 63: return False return True def valid_string_length(label, trailing_dot): if len(label) > (254 if trailing_dot else 253): return False return True def check_bidi(label, check_ltr=False): # Bidi rules should only be applied if string contains RTL characters bidi_label = False for (idx, cp) in enumerate(label, 1): direction = unicodedata.bidirectional(cp) if direction == '': # String likely comes from a newer version of Unicode raise IDNABidiError('Unknown directionality in label {0} at position {1}'.format(repr(label), idx)) if direction in ['R', 'AL', 'AN']: bidi_label = True break if not bidi_label and not check_ltr: return True # Bidi rule 1 direction = unicodedata.bidirectional(label[0]) if direction in ['R', 'AL']: rtl = True elif direction == 'L': rtl = False else: raise IDNABidiError('First codepoint in label {0} must be directionality L, R or AL'.format(repr(label))) valid_ending = False number_type = False for (idx, cp) in enumerate(label, 1): direction = unicodedata.bidirectional(cp) if rtl: # Bidi rule 2 if not direction in ['R', 'AL', 'AN', 'EN', 'ES', 'CS', 'ET', 'ON', 'BN', 'NSM']: raise IDNABidiError('Invalid direction for codepoint at position {0} in a right-to-left label'.format(idx)) # Bidi rule 3 if direction in ['R', 'AL', 'EN', 'AN']: valid_ending = True elif direction != 'NSM': valid_ending = False # Bidi rule 4 if direction in ['AN', 'EN']: if not number_type: number_type = direction else: if number_type != direction: raise IDNABidiError('Can not mix numeral types in a right-to-left label') else: # Bidi rule 5 if not direction in ['L', 'EN', 'ES', 'CS', 'ET', 'ON', 'BN', 'NSM']: raise IDNABidiError('Invalid direction for codepoint at position {0} in a left-to-right label'.format(idx)) # Bidi rule 6 if direction in ['L', 'EN']: valid_ending = True elif direction != 'NSM': valid_ending = False if not valid_ending: raise IDNABidiError('Label ends with illegal codepoint directionality') return True def check_initial_combiner(label): if unicodedata.category(label[0])[0] == 'M': raise IDNAError('Label begins with an illegal combining character') return True def check_hyphen_ok(label): if label[2:4] == '--': raise IDNAError('Label has disallowed hyphens in 3rd and 4th position') if label[0] == '-' or label[-1] == '-': raise IDNAError('Label must not start or end with a hyphen') return True def check_nfc(label): if unicodedata.normalize('NFC', label) != label: raise IDNAError('Label must be in Normalization Form C') def valid_contextj(label, pos): cp_value = 
ord(label[pos]) if cp_value == 0x200c: if pos > 0: if _combining_class(ord(label[pos - 1])) == _virama_combining_class: return True ok = False for i in range(pos-1, -1, -1): joining_type = idnadata.joining_types.get(ord(label[i])) if joining_type == 'T': continue if joining_type in ['L', 'D']: ok = True break if not ok: return False ok = False for i in range(pos+1, len(label)): joining_type = idnadata.joining_types.get(ord(label[i])) if joining_type == 'T': continue if joining_type in ['R', 'D']: ok = True break return ok if cp_value == 0x200d: if pos > 0: if _combining_class(ord(label[pos - 1])) == _virama_combining_class: return True return False else: return False def valid_contexto(label, pos, exception=False): cp_value = ord(label[pos]) if cp_value == 0x00b7: if 0 < pos < len(label)-1: if ord(label[pos - 1]) == 0x006c and ord(label[pos + 1]) == 0x006c: return True return False elif cp_value == 0x0375: if pos < len(label)-1 and len(label) > 1: return _is_script(label[pos + 1], 'Greek') return False elif cp_value == 0x05f3 or cp_value == 0x05f4: if pos > 0: return _is_script(label[pos - 1], 'Hebrew') return False elif cp_value == 0x30fb: for cp in label: if cp == u'\u30fb': continue if not _is_script(cp, 'Hiragana') and not _is_script(cp, 'Katakana') and not _is_script(cp, 'Han'): return False return True elif 0x660 <= cp_value <= 0x669: for cp in label: if 0x6f0 <= ord(cp) <= 0x06f9: return False return True elif 0x6f0 <= cp_value <= 0x6f9: for cp in label: if 0x660 <= ord(cp) <= 0x0669: return False return True def check_label(label): if isinstance(label, (bytes, bytearray)): label = label.decode('utf-8') if len(label) == 0: raise IDNAError('Empty Label') check_nfc(label) check_hyphen_ok(label) check_initial_combiner(label) for (pos, cp) in enumerate(label): cp_value = ord(cp) if intranges_contain(cp_value, idnadata.codepoint_classes['PVALID']): continue elif intranges_contain(cp_value, idnadata.codepoint_classes['CONTEXTJ']): if not valid_contextj(label, pos): raise InvalidCodepointContext('Joiner {0} not allowed at position {1} in {2}'.format(_unot(cp_value), pos+1, repr(label))) elif intranges_contain(cp_value, idnadata.codepoint_classes['CONTEXTO']): if not valid_contexto(label, pos): raise InvalidCodepointContext('Codepoint {0} not allowed at position {1} in {2}'.format(_unot(cp_value), pos+1, repr(label))) else: raise InvalidCodepoint('Codepoint {0} at position {1} of {2} not allowed'.format(_unot(cp_value), pos+1, repr(label))) check_bidi(label) def alabel(label): try: label = label.encode('ascii') try: ulabel(label) except: raise IDNAError('The label {0} is not a valid A-label'.format(label)) if not valid_label_length(label): raise IDNAError('Label too long') return label except UnicodeError: pass if not label: raise IDNAError('No Input') label = unicode(label) check_label(label) label = _punycode(label) label = _alabel_prefix + label if not valid_label_length(label): raise IDNAError('Label too long') return label def ulabel(label): if not isinstance(label, (bytes, bytearray)): try: label = label.encode('ascii') except UnicodeError: check_label(label) return label label = label.lower() if label.startswith(_alabel_prefix): label = label[len(_alabel_prefix):] else: check_label(label) return label.decode('ascii') label = label.decode('punycode') check_label(label) return label def uts46_remap(domain, std3_rules=True, transitional=False): """Re-map the characters in the string according to UTS46 processing.""" from .uts46data import uts46data output = u"" try: for pos, char in 
enumerate(domain): code_point = ord(char) uts46row = uts46data[code_point if code_point < 256 else bisect.bisect_left(uts46data, (code_point, "Z")) - 1] status = uts46row[1] replacement = uts46row[2] if len(uts46row) == 3 else None if (status == "V" or (status == "D" and not transitional) or (status == "3" and std3_rules and replacement is None)): output += char elif replacement is not None and (status == "M" or (status == "3" and std3_rules) or (status == "D" and transitional)): output += replacement elif status != "I": raise IndexError() return unicodedata.normalize("NFC", output) except IndexError: raise InvalidCodepoint( "Codepoint {0} not allowed at position {1} in {2}".format( _unot(code_point), pos + 1, repr(domain))) def encode(s, strict=False, uts46=False, std3_rules=False, transitional=False): if isinstance(s, (bytes, bytearray)): s = s.decode("ascii") if uts46: s = uts46_remap(s, std3_rules, transitional) trailing_dot = False result = [] if strict: labels = s.split('.') else: labels = _unicode_dots_re.split(s) while labels and not labels[0]: del labels[0] if not labels: raise IDNAError('Empty domain') if labels[-1] == '': del labels[-1] trailing_dot = True for label in labels: result.append(alabel(label)) if trailing_dot: result.append(b'') s = b'.'.join(result) if not valid_string_length(s, trailing_dot): raise IDNAError('Domain too long') return s def decode(s, strict=False, uts46=False, std3_rules=False): if isinstance(s, (bytes, bytearray)): s = s.decode("ascii") if uts46: s = uts46_remap(s, std3_rules, False) trailing_dot = False result = [] if not strict: labels = _unicode_dots_re.split(s) else: labels = s.split(u'.') while labels and not labels[0]: del labels[0] if not labels: raise IDNAError('Empty domain') if not labels[-1]: del labels[-1] trailing_dot = True for label in labels: result.append(ulabel(label)) if trailing_dot: result.append(u'') return u'.'.join(result) ```
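For orientation, here is a minimal usage sketch of the public helpers defined above (`encode`, `decode`, `IDNAError`); the sample domain mirrors the idna package's documentation and is illustrative, not taken from this file:

```python
import idna  # assumes the module above is importable as the `idna` package

# encode() runs the IDNA2008 label checks and returns the ASCII form,
# punycode-encoding non-ASCII labels behind the "xn--" prefix.
ascii_form = idna.encode('ドメイン.テスト')
print(ascii_form)               # b'xn--eckwd4c7c.xn--zckzah'

# decode() maps the ASCII form back to the Unicode form.
print(idna.decode(ascii_form))  # ドメイン.テスト

# Labels that violate the rules raise IDNAError (or a subclass of it).
try:
    idna.encode('-bad-label.example')
except idna.IDNAError as exc:
    print('rejected:', exc)
```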
```c++ /// Source : path_to_url /// Author : liuyubobobo /// Time : 2020-10-15 #include <iostream> #include <vector> using namespace std; /// Binary Search /// Time Complexity: O(|text| * log(|fonts|)) /// Space Complexity: O(1) // This is the FontInfo's API interface. // You should not implement it, or speculate about its implementation class FontInfo { public: // Return the width of char ch when fontSize is used. int getWidth(int fontSize, char ch); // Return Height of any char when fontSize is used. int getHeight(int fontSize); }; class Solution { public: int maxFont(string text, long long w, int h, vector<int>& fonts, FontInfo fontInfo) { int l = -1, r = fonts.size() - 1; while(l < r){ int mid = (l + r + 1) / 2; int H = fontInfo.getHeight(fonts[mid]); long long W = 0ll; for(char c: text) W += fontInfo.getWidth(fonts[mid], c); if(H <= h && W <= w) l = mid; else r = mid - 1; } return l == -1 ? -1 : fonts[l]; } }; int main() { return 0; } ```
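The same predicate-driven binary search can be reproduced outside the judging harness; the sketch below uses a hypothetical `SimpleFontInfo` (the real `FontInfo` is supplied by the judge) whose linear growth keeps the "text fits at this size" predicate monotone:

```python
# Hypothetical stand-in for the judge-provided FontInfo: width and height
# grow linearly with font size, so "text fits at size s" is monotone in s.
class SimpleFontInfo:
    def get_width(self, font_size: int, ch: str) -> int:
        return font_size          # square glyphs, purely for illustration

    def get_height(self, font_size: int) -> int:
        return 2 * font_size


def max_font(text: str, w: int, h: int, fonts: list, info: SimpleFontInfo) -> int:
    lo, hi = -1, len(fonts) - 1   # lo = largest index known to fit (-1: none)
    while lo < hi:
        mid = (lo + hi + 1) // 2  # round up so the loop always terminates
        height = info.get_height(fonts[mid])
        width = sum(info.get_width(fonts[mid], c) for c in text)
        if height <= h and width <= w:
            lo = mid              # fonts[mid] fits: try larger sizes
        else:
            hi = mid - 1          # too big: restrict to smaller sizes
    return -1 if lo == -1 else fonts[lo]


print(max_font("hello", 30, 20, [4, 6, 8, 10], SimpleFontInfo()))  # prints 6
```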
José Marín Sospedra (Catalan: Josep Marín i Sospedra; born 21 January 1950) is a retired Spanish racewalker. Achievements External links Josep Marín in the Catalonia's Championship 1950 births Living people Spanish male racewalkers Athletes from Catalonia Athletes (track and field) at the 1980 Summer Olympics Athletes (track and field) at the 1984 Summer Olympics Athletes (track and field) at the 1988 Summer Olympics Athletes (track and field) at the 1992 Summer Olympics Olympic athletes for Spain World record setters in athletics (track and field) World Athletics Championships medalists European Athletics Championships medalists Spanish masters athletes Mediterranean Games bronze medalists for Spain Mediterranean Games medalists in athletics Athletes (track and field) at the 1975 Mediterranean Games World Athletics Race Walking Team Championships winners
Thunberginol E is a dihydroisocoumarin found in Hydrangeae Dulcis Folium, the processed leaves of Hydrangea macrophylla var. thunbergii. References Dihydroisocoumarins Phenol ethers
```c++
// -*- mode: c++; c-basic-offset: 4; indent-tabs-mode: nil; -*-
// (c) 2020 Henner Zeller <h.zeller@acm.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation version 2.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <path_to_url

#ifndef VIDEO_SOURCE_H_
#define VIDEO_SOURCE_H_

#include <signal.h>

#include "image-source.h"
#include "terminal-canvas.h"
#include "timg-time.h"

struct AVCodecContext;
struct AVFormatContext;
struct AVFrame;
struct AVPacket;
struct SwsContext;

namespace timg {

// Video source, meant for one video to load, and if successful, Play().
class VideoSource final : public ImageSource {
public:
    explicit VideoSource(const std::string &filename);
    ~VideoSource() final;

    static const char *VersionInfo();

    // Attempt to load given filename as video, open stream and set-up scaling.
    // Returns true on success.
    bool LoadAndScale(const DisplayOptions &options,
                      int frame_offset, int frame_count) final;

    // Play video up to given duration.
    //
    // The reference to the "interrupt_received" can be updated by a signal
    // while the method is running and shall be checked often.
    void SendFrames(const Duration &duration, int loops,
                    const volatile sig_atomic_t &interrupt_received,
                    const Renderer::WriteFramebufferFun &sink) final;

    // Format title according to the format-string.
    std::string FormatTitle(const std::string &format_string) const final;

    bool IsAnimationBeforeFrameLimit() const override { return true; }

private:
    void AlphaBlendFramebuffer();

    DisplayOptions options_;
    bool maybe_transparent_ = false;
    int frame_offset_ = 0;
    int frame_count_ = -1;

    int orig_width_, orig_height_;
    int video_stream_index_ = -1;
    AVFormatContext *format_context_ = nullptr;
    AVCodecContext *codec_context_ = nullptr;
    SwsContext *sws_context_ = nullptr;
    timg::Duration frame_duration_;  // 1/fps
    timg::Framebuffer *terminal_fb_ = nullptr;
    int center_indentation_ = 0;
};

}  // namespace timg

#endif  // VIDEO_SOURCE_H_
```
The McGuinness Institute Te Hononga Waka is a non-partisan think tank based in Wellington, New Zealand, working towards a sustainable future and contributing strategic foresight through evidence-based research and policy analysis. Established in 2004 by Wendy McGuinness, the Institute endeavours to undertake research that is independent, innovative and relevant, in a professional manner. Previously the Sustainable Future Institute, it changed its name in February 2012. The McGuinness Institute produces publications in the form of research reports, think pieces, newsletters, submissions, working papers, and filmed interviews. As a registered charitable trust, the McGuinness Institute is required to produce annual reports detailing its financial statements.

Project 2058
Started in 2008, Project 2058 has the strategic aim of promoting integrated long-term thinking, leadership and capacity building so that New Zealand can effectively explore and manage risks and opportunities into the year 2058. The project is divided into a series of reports, each covering an important aspect of New Zealand's future. Within Project 2058, the Institute maintains a number of other ongoing projects. These are divided into policy projects and research projects.

Policy projects

Project ForesightNZ
Project ForesightNZ aims to build public policy capability in New Zealand by encouraging long-term, agile thinking about an uncertain future. Initiated in 2008, ForesightNZ is about conceptualising the broad range of possible futures for New Zealand using up-to-date tools and conceptual approaches from the field of futures studies. The project is carried out through a number of publications and events. The 2016 ForesightNZ: Untangling New Zealand’s long-term future workshop was a collaboration between the New Zealand Treasury and the McGuinness Institute; the ForesightNZ playing cards were this workshop's primary output. The 2017 WakaNZ: Navigating with foresight workshop was also a collaboration between the McGuinness Institute and the New Zealand Treasury and explored what a preferred future might look like in a post-Treaty of Waitangi settlement New Zealand.

Project ReportingNZ
Project ReportingNZ aims to contribute to a discussion on how to build an informed society. ReportingNZ began in 2016 and formed a major part of the Institute's work programme in 2017 and 2018. The significant pieces of work in this project are the Government Department Strategy (GDS) Index and two surveys, with accompanying publications, on Extended External Reporting (EER) in collaboration with the External Reporting Board.

Project StrategyNZ
Project StrategyNZ aims to contribute to a discussion on how to improve strategic decision-making, strategy stewardship and implementation in both the private and the public sector. This project has two parts that look at how New Zealand can improve long-term strategic thinking and strategy stewardship. The first explores a national sustainable development strategy for New Zealand; this work began in 2006 and led to a workshop in March 2011 called StrategyNZ: Mapping our Future. This workshop in turn led to the formation of the research project TalentNZ, based on a quote from speaker Sir Paul Callaghan about creating ‘a place where talent wants to live’. The second part explores strategy stewardship in the New Zealand public sector and involves the GDS Index and the upcoming Project 2058 Report 15: Strengthening Strategy Stewardship in the Public Service.
Research projects Project CivicsNZ Project CivicsNZ aims to build the social capital and empowerment of New Zealand citizens. Work in this project has involved building a constitution for the twenty-first century in the EmpowerNZ initiative, with workshops in 2012 and 2013, and more recently involves discussion around civic education. The CivicsNZ project is also linked to the TacklingPovertyNZ project and included a workshop evening in 2017 and publication of a think piece and working paper in 2018. Project LivestockNZ Project LivestockNZ aims to explore a new narrative for livestock farming in New Zealand – one that moves towards a more robust and ethically sound way of doing business while at the same time delivering better economic, environmental and social outcomes for all. This project is in its early stages. Project OneOceanNZ Project OneOceanNZ aims to explore New Zealand’s public policy landscape in order to contribute to a wider discussion on how we might best manage our oceans. It looks at public policy solutions around ocean governance as an important long-term issue for New Zealand. The Institute has made a number of submissions as part of this project and also facilitated the formation of the New Zealand Antarctic Youth Council. Project PublicScienceNZ Project PublicScienceNZ aims to contribute to a discussion on government-funded science in the hope that New Zealand invests its research dollar well and delivers sustainable outcomes for current and future generations. The project was established in 2012 and is ongoing. PublicScienceNZ also brings together the Institute's previous work on genetic modification policy and regulations, and pandemic management. Project TacklingPovertyNZ Project TacklingPovertyNZ aims to contribute to a national conversation on how to reduce poverty in New Zealand. This project began in 2015 with a workshop in December at the New Zealand Treasury. Since then, the Institute has held six more workshops throughout New Zealand, with the goal of gathering local perspectives on poverty. From this tour, the Institute sent a proposal to Prime Minister Bill English at the end of 2016 concerning the creation of demarcated zones for public policy innovation in three of the areas visited on the workshop tour. The proposal garnered some coverage in the New Zealand media. Project TalentNZ Project TalentNZ aims to contribute to Sir Paul Callaghan’s vision of making New Zealand ‘a place where talent wants to live’. Project TalentNZ began in 2011 at a StrategyNZ workshop with Sir Paul Callaghan’s keynote speech. Since then, the Institute has published the TalentNZ Journal and developed a Menu of Initiatives, which illustrates New Zealand’s talent ecosystem and lays out action points for growing, attracting, retaining and connecting talented individuals. Project Nation Dates In 2011 the Institute published Nation Dates, a book that presents a timeline of significant events that have shaped New Zealand as a nation. The second edition was published in 2012 and the third edition was published in 2017. Workshops One of the McGuinness Institute's core values is to provide platforms and opportunities for New Zealanders, with a particular focus on amplifying the voices of young people aged between 18 and 25. McGuinness Institute workshops are the primary tool for achieving this. The workshops focus on public policy issues that are strategic, complex, and long-term in nature. 
James Duncan Reference Library The James Duncan Reference Library is located at the office of McGuinness Institute in Wellington. Named after the former Chair of the Commission for the Future, Professor James Duncan (1921–2001), the library was established to provide a record of long-term thinking in New Zealand. The library and archive house over 4710 books and publications on New Zealand’s future-thinking initiatives and historical development, the theory and practice of future-thinking, strategy development, and national and international perspectives. References Think tanks based in New Zealand Charities based in New Zealand 2004 establishments in New Zealand
```go
package rfc

/*
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
 * implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */

import (
	"encoding/asn1"
	"time"

	"github.com/zmap/zcrypto/x509"
	"github.com/zmap/zlint/v3/lint"
	"github.com/zmap/zlint/v3/util"
)

type generalizedPre2050 struct{}

/*********************************************************************
CAs conforming to this profile MUST always encode certificate
validity dates through the year 2049 as UTCTime; certificate validity
dates in 2050 or later MUST be encoded as GeneralizedTime.
Conforming applications MUST be able to process validity dates that
are encoded in either UTCTime or GeneralizedTime.
*********************************************************************/

func init() {
	lint.RegisterLint(&lint.Lint{
		Name:          "e_wrong_time_format_pre2050",
		Description:   "Certificates valid through the year 2049 MUST be encoded in UTC time",
		Citation:      "RFC 5280: 4.1.2.5",
		Source:        lint.RFC5280,
		EffectiveDate: util.RFC2459Date,
		Lint:          &generalizedPre2050{},
	})
}

func (l *generalizedPre2050) Initialize() error {
	return nil
}

func (l *generalizedPre2050) CheckApplies(c *x509.Certificate) bool {
	return true
}

func (l *generalizedPre2050) Execute(c *x509.Certificate) *lint.LintResult {
	date1, date2 := util.GetTimes(c)
	var t time.Time
	// FindTimeType reports the ASN.1 tag of each validity time;
	// universal tag 24 is GeneralizedTime (tag 23 would be UTCTime).
	type1, type2 := util.FindTimeType(date1, date2)
	if type1 == 24 {
		temp, err := asn1.Marshal(date1)
		if err != nil {
			return &lint.LintResult{Status: lint.Fatal}
		}
		_, err = asn1.Unmarshal(temp, &t)
		if err != nil {
			return &lint.LintResult{Status: lint.Fatal}
		}
		// GeneralizedTime is only permitted for dates in 2050 or later.
		if t.Before(util.GeneralizedDate) {
			return &lint.LintResult{Status: lint.Error}
		}
	}
	if type2 == 24 {
		temp, err := asn1.Marshal(date2)
		if err != nil {
			return &lint.LintResult{Status: lint.Fatal}
		}
		_, err = asn1.Unmarshal(temp, &t)
		if err != nil {
			return &lint.LintResult{Status: lint.Fatal}
		}
		if t.Before(util.GeneralizedDate) {
			return &lint.LintResult{Status: lint.Error}
		}
	}
	return &lint.LintResult{Status: lint.Pass}
}
```
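Stripped of the ASN.1 plumbing, the lint reduces to a simple date cutoff; a minimal Python sketch of the same RFC 5280 rule (the helper name is ours, not zlint's):

```python
from datetime import datetime, timezone

# RFC 5280, section 4.1.2.5: validity dates through 2049 MUST be UTCTime,
# while dates in 2050 or later MUST be GeneralizedTime.
GENERALIZED_TIME_CUTOFF = datetime(2050, 1, 1, tzinfo=timezone.utc)

def required_time_encoding(when: datetime) -> str:
    """Return the ASN.1 time type RFC 5280 requires for a validity date."""
    return "GeneralizedTime" if when >= GENERALIZED_TIME_CUTOFF else "UTCTime"

print(required_time_encoding(datetime(2030, 6, 1, tzinfo=timezone.utc)))  # UTCTime
print(required_time_encoding(datetime(2051, 1, 1, tzinfo=timezone.utc)))  # GeneralizedTime
```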
```c++ /* Boost.MultiIndex test for projection capabilities. * * (See accompanying file LICENSE_1_0.txt or copy at * path_to_url * * See path_to_url for library home page. */ #include "test_projection.hpp" #include <boost/config.hpp> /* keep it first to prevent nasty warns in MSVC */ #include "pre_multi_index.hpp" #include "employee.hpp" #include <boost/detail/lightweight_test.hpp> using namespace boost::multi_index; void test_projection() { employee_set es; es.insert(employee(0,"Joe",31,1123)); es.insert(employee(1,"Robert",27,5601)); es.insert(employee(2,"John",40,7889)); es.insert(employee(3,"Albert",20,9012)); es.insert(employee(4,"John",57,1002)); employee_set::iterator it,itbis; employee_set_by_name::iterator it1; employee_set_by_age::iterator it2; employee_set_as_inserted::iterator it3; employee_set_by_ssn::iterator it4; employee_set_randomly::iterator it5; BOOST_STATIC_ASSERT((boost::is_same< employee_set::iterator, nth_index_iterator<employee_set,0>::type >::value)); BOOST_STATIC_ASSERT((boost::is_same< employee_set_by_name::iterator, nth_index_iterator<employee_set,1>::type >::value)); #if defined(BOOST_NO_MEMBER_TEMPLATES) BOOST_STATIC_ASSERT((boost::is_same< employee_set_by_age::iterator, index_iterator<employee_set,age>::type >::value)); #else BOOST_STATIC_ASSERT((boost::is_same< employee_set_by_age::iterator, employee_set::index_iterator<age>::type >::value)); #endif BOOST_STATIC_ASSERT((boost::is_same< employee_set_as_inserted::iterator, nth_index_iterator<employee_set,3>::type >::value)); BOOST_STATIC_ASSERT((boost::is_same< employee_set_by_ssn::iterator, nth_index_iterator<employee_set,4>::type >::value)); BOOST_STATIC_ASSERT((boost::is_same< employee_set_randomly::iterator, nth_index_iterator<employee_set,5>::type >::value)); it= es.find(employee(1,"Robert",27,5601)); it1= project<name>(es,it); it2= project<age>(es,it1); it3= project<as_inserted>(es,it2); it4= project<ssn>(es,it3); it5= project<randomly>(es,it4); #if defined(BOOST_NO_MEMBER_TEMPLATES) itbis=project<0>(es,it5); #else itbis=es.project<0>(it5); #endif BOOST_TEST( *it==*it1&&*it1==*it2&&*it2==*it3&&*it3==*it4&&*it4==*it5&&itbis==it); BOOST_TEST(project<name>(es,es.end())==get<name>(es).end()); BOOST_TEST(project<age>(es,es.end())==get<age>(es).end()); BOOST_TEST(project<as_inserted>(es,es.end())==get<as_inserted>(es).end()); BOOST_TEST(project<ssn>(es,es.end())==get<ssn>(es).end()); BOOST_TEST(project<randomly>(es,es.end())==get<randomly>(es).end()); const employee_set& ces=es; employee_set::const_iterator cit,citbis; employee_set_by_name::const_iterator cit1; employee_set_by_age::const_iterator cit2; employee_set_as_inserted::const_iterator cit3; employee_set_by_ssn::const_iterator cit4; employee_set_randomly::const_iterator cit5; BOOST_STATIC_ASSERT((boost::is_same< employee_set::const_iterator, nth_index_const_iterator<employee_set,0>::type >::value)); BOOST_STATIC_ASSERT((boost::is_same< employee_set_by_name::const_iterator, nth_index_const_iterator<employee_set,1>::type >::value)); #if defined(BOOST_NO_MEMBER_TEMPLATES) BOOST_STATIC_ASSERT((boost::is_same< employee_set_by_age::const_iterator, index_const_iterator<employee_set,age>::type >::value)); #else BOOST_STATIC_ASSERT((boost::is_same< employee_set_by_age::const_iterator, employee_set::index_const_iterator<age>::type >::value)); #endif BOOST_STATIC_ASSERT((boost::is_same< employee_set_as_inserted::const_iterator, nth_index_const_iterator<employee_set,3>::type >::value)); BOOST_STATIC_ASSERT((boost::is_same< employee_set_by_ssn::const_iterator, 
nth_index_const_iterator<employee_set,4>::type >::value)); BOOST_STATIC_ASSERT((boost::is_same< employee_set_randomly::const_iterator, nth_index_const_iterator<employee_set,5>::type >::value)); cit= ces.find(employee(4,"John",57,1002)); #if defined(BOOST_NO_MEMBER_TEMPLATES) cit1= project<by_name>(ces,cit); #else cit1= ces.project<by_name>(cit); #endif cit2= project<age>(ces,cit1); #if defined(BOOST_NO_MEMBER_TEMPLATES) cit3= project<as_inserted>(ces,cit2); #else cit3= ces.project<as_inserted>(cit2); #endif cit4= project<ssn>(ces,cit3); cit5= project<randomly>(ces,cit4); citbis=project<0>(ces,cit5); BOOST_TEST( *cit==*cit1&&*cit1==*cit2&&*cit2==*cit3&&*cit3==*cit4&&*cit4==*cit5&& citbis==cit); } ```
```java package com.example.polly; // snippet-start:[polly.java2.demo.main] // snippet-start:[polly.java2.demo.import] import javazoom.jl.decoder.JavaLayerException; import software.amazon.awssdk.core.ResponseInputStream; import software.amazon.awssdk.regions.Region; import software.amazon.awssdk.services.polly.PollyClient; import software.amazon.awssdk.services.polly.model.DescribeVoicesRequest; import software.amazon.awssdk.services.polly.model.Voice; import software.amazon.awssdk.services.polly.model.DescribeVoicesResponse; import software.amazon.awssdk.services.polly.model.OutputFormat; import software.amazon.awssdk.services.polly.model.PollyException; import software.amazon.awssdk.services.polly.model.SynthesizeSpeechRequest; import software.amazon.awssdk.services.polly.model.SynthesizeSpeechResponse; import java.io.IOException; import java.io.InputStream; import javazoom.jl.player.advanced.AdvancedPlayer; import javazoom.jl.player.advanced.PlaybackEvent; import javazoom.jl.player.advanced.PlaybackListener; // snippet-end:[polly.java2.demo.import] /** * Before running this Java V2 code example, set up your development * environment, including your credentials. * * For more information, see the following documentation topic: * * path_to_url */ public class PollyDemo { private static final String SAMPLE = "Congratulations. You have successfully built this working demo " + " of Amazon Polly in Java Version 2. Have fun building voice enabled apps with Amazon Polly (that's me!), and always " + " look at the AWS website for tips and tricks on using Amazon Polly and other great services from AWS"; public static void main(String args[]) { PollyClient polly = PollyClient.builder() .region(Region.US_WEST_2) .build(); talkPolly(polly); polly.close(); } public static void talkPolly(PollyClient polly) { try { DescribeVoicesRequest describeVoiceRequest = DescribeVoicesRequest.builder() .engine("standard") .build(); DescribeVoicesResponse describeVoicesResult = polly.describeVoices(describeVoiceRequest); Voice voice = describeVoicesResult.voices().stream() .filter(v -> v.name().equals("Joanna")) .findFirst() .orElseThrow(() -> new RuntimeException("Voice not found")); InputStream stream = synthesize(polly, SAMPLE, voice, OutputFormat.MP3); AdvancedPlayer player = new AdvancedPlayer(stream, javazoom.jl.player.FactoryRegistry.systemRegistry().createAudioDevice()); player.setPlayBackListener(new PlaybackListener() { public void playbackStarted(PlaybackEvent evt) { System.out.println("Playback started"); System.out.println(SAMPLE); } public void playbackFinished(PlaybackEvent evt) { System.out.println("Playback finished"); } }); // play it! player.play(); } catch (PollyException | JavaLayerException | IOException e) { System.err.println(e.getMessage()); System.exit(1); } } public static InputStream synthesize(PollyClient polly, String text, Voice voice, OutputFormat format) throws IOException { SynthesizeSpeechRequest synthReq = SynthesizeSpeechRequest.builder() .text(text) .voiceId(voice.id()) .outputFormat(format) .build(); ResponseInputStream<SynthesizeSpeechResponse> synthRes = polly.synthesizeSpeech(synthReq); return synthRes; } } // snippet-end:[polly.java2.demo.main] ```
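For comparison, a rough equivalent of the describe-then-synthesize flow in Python with boto3 (assumed available, with AWS credentials configured; the phrase and output file name are arbitrary):

```python
import boto3  # assumes boto3 is installed and AWS credentials are configured

polly = boto3.client("polly", region_name="us-west-2")

# Pick the "Joanna" standard voice, mirroring the Java sample above.
voices = polly.describe_voices(Engine="standard")["Voices"]
joanna = next(v for v in voices if v["Name"] == "Joanna")

# Synthesize a short phrase to MP3 and write the audio stream to disk.
resp = polly.synthesize_speech(
    Text="Hello from Amazon Polly.",
    VoiceId=joanna["Id"],
    OutputFormat="mp3",
)
with open("speech.mp3", "wb") as f:
    f.write(resp["AudioStream"].read())
```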
Xu Yang (born 27 June 1970 in Jiangsu) is a retired Chinese high jumper. He won the silver medals at the 1993 and 1995 Asian Championships and the bronze medal at the 1994 Asian Games. He also competed at the 1991 World Championships and the 1992 Olympic Games without reaching the final. His personal best jump is 2.31 metres, achieved in September 1993 in Beijing. References 1970 births Living people Chinese male high jumpers Athletes (track and field) at the 1992 Summer Olympics Olympic athletes for China Asian Games medalists in athletics (track and field) Athletes from Jiangsu Asian Games bronze medalists for China Medalists at the 1994 Asian Games Athletes (track and field) at the 1994 Asian Games 20th-century Chinese people
```lua local unicode = require('tools.utils.unicode') local myopt = { { '-mode', 'conservative', [[Define how aggressive should the tokenization be. `aggressive` only keeps sequences of letters/numbers, `conservative` allows a mix of alphanumeric as in: "2,000", "E65", "soft-landing", etc. `space` is doing space tokenization. `char` is doing character tokenization]], { enum = {'space', 'conservative', 'aggressive', 'char'} } } } local function declareOptsFn(cmd) cmd:setCmdLineOptions(myopt, 'Tokenizer') end local function mytokenization(opt, line) -- fancy tokenization, it has to return a table of tokens (possibly with features) if opt.mode == "char" then local tokens = {} for v, c, _ in unicode.utf8_iter(line) do if unicode.isSeparator(v) then table.insert(tokens, '') else table.insert(tokens, c) end end return tokens end end local function mydetokenization(opt, words, _) if opt.mode == "char" then return table.concat(words, ''):gsub('', ' ') end end return { tokenize = mytokenization, detokenize = mydetokenization, hookName = function() return "chartok" end, declareOpts = declareOptsFn } ```
The Martinborough Branch was a proposed railway line that would have connected the south Wairarapa town of Martinborough to the Wairarapa Line in New Zealand’s North Island. It was to have been used by passengers and by goods traffic for a productive agricultural area that was not well served with reliable transport links. Construction started, but was quickly suspended and never resumed.

History

Advocacy
The first attempt to have a railway constructed to Martinborough was a private effort initiated by a small group of local residents. On learning that the route of the Wairarapa Line would be some distance away, they formed the Waihenga Railway Committee and sought a survey of an appropriate route. The survey was completed, but an attempt to solicit donations to cover the committee's costs met with a less than enthusiastic response, and public indifference doomed the project.

Interest in the idea revived around 1905. The Premier, Joseph Ward, was the guest of honour at a luncheon hosted by the Featherston County Council on 18 December 1907. The Council chairman used the opportunity to impress upon Ward the importance of a railway to Martinborough, and also made clear the inadequacy of the roads in the region, which were not expected to cope with future traffic requirements. The Premier responded that the Railways Department was at present busy with other railway construction projects around the country, but that as soon as an engineer was available a preliminary survey would be made of the suggested route to gauge the costs involved. He indicated that he did not see any undue difficulty in eventually getting the project approved. Though this meeting was the beginning of the proposal that came closest to fruition, it ultimately failed to be realised.

World War I led to the establishment of a military training camp north of Featherston. This camp was served by a siding extended from a backshunt at Featherston. After the war the siding was used on several occasions for race trains to Tauherenikau Racecourse. In evidence presented to the Fay-Raven Commission of 1924, the Minister of Public Works stated that it was his understanding that the siding was to be the start of a branch line to Martinborough. The siding was abandoned by the Department of Defence by 1 November 1926 and offered to the Railways Department, but it remained unused after the closure of the camp and was lifted in the 1930s.

In 1925, concerned that Martinborough might not receive what it was due in terms of transport links, the town’s administration prevailed upon the Railways Department for improved services. A principal concern, given that the town was not to receive a direct rail connection as originally envisaged, was that merchants and passengers in the town should not be disadvantaged by having to pay the higher rates of private transport operators compared with the cheaper railway rates they would have paid had the railway come to Martinborough. As an alternative to the abandoned idea of a railway line, they suggested that the Department should provide services using an electric tram, trolley bus, or light rail system. The Department never seriously considered these proposals: after examining the relevant information, it concluded that the revenue to be derived from such a service would not come close to covering the capital and operational costs involved, and that the only viable option was to use petrol-powered lorries and buses.
No further proposals for the line were advocated. The final blow to any chance of a branch line to Martinborough came in 1953, when the Greytown Branch closed.

Construction
Following the meeting with Joseph Ward in 1907, local politician John Hornsby stated that if the government was unable to commit the necessary funds to the project, he was confident the funds could be raised privately. At his insistence, the Railways Department conducted a preliminary survey of both the Featherston – Martinborough and Greytown – Martinborough routes in 1908, the findings of which were made available internally in a report dated 4 August. The Department did not consider the line to be a matter of urgency, and with plenty of other work for its staff did little to advance the Martinborough project.

A new survey was called for by the Engineer-in-Chief of the Public Works Department on 10 March 1913, following the passage of the Railways Authorisation Act 1912 through Parliament the previous year. The Act authorised the construction of the Wellington-Napier line (Featherston-Martinborough Branch), from Featherston to Martinborough, a length of about eleven miles (17.7 km). The District Engineer, in response to a later missive on the subject, stated that he expected the survey to be completed by the end of February 1914 and the plans by 31 March.

The first signs of construction came later that year when a turning-of-the-first-sod ceremony was held in Martinborough on 20 July, following a commitment given by the Government the previous year to begin construction. The route from Martinborough to Featherston had been chosen by the Railways Department, but as the first section of the line to be constructed from the Martinborough end was common to both the Featherston and Greytown routes, the Department continued to receive pleas to reconsider the Greytown route as the preferred option. The Minister of Public Works, William Fraser, represented the Government and officiated at the ceremony. Also present were several members of parliament and around 1,000 locals from the town and surrounding region. The ministerial party had travelled to Featherston from Wellington by train, and was then conveyed to Martinborough by car, being shown the route of the railway line along the way. On arrival in Martinborough the party were guests at a luncheon, followed by the ceremony. It had been decided to mark the occasion at the site of the future Martinborough railway station, on the north side of Princess Street, near the intersection with Kitchener Street and behind the church. Following the ceremony, the Minister presented the ceremonial wheelbarrow, silver spade, and sod to the local school. While in the district, he also visited the Tauherenikau Racecourse, whose Racing Club members had requested a siding from the branch line to the racecourse.

It was not long before the future of the line was in jeopardy. By the following year, work on the line had effectively been suspended, with no progress made. Fate had intervened on two fronts. First, World War I deprived the Railways Department of the manpower required to continue the work. Second, at around this time the Department was considering options for a deviation of the Wairarapa Line over and/or through the Rimutaka Ranges, and it was felt that a decision on the junction of the Martinborough Branch with the main line should be deferred until it was known what route the Rimutaka Deviation would take.
However, the Department did affirm in writing to a correspondent that it was still committed to a Featherston – Martinborough route.

After the war, it was clear that little progress had been made since the promising start so many years earlier. By this time a change in Department policy meant that it was no longer in favour of maintaining numerous short feeder lines to its main lines. The cost of completing the line had risen to £150,000, a significant increase on the original estimates. Opposition to the line was growing amongst residents who did not expect that it would be able to earn a sufficient income even if it were built, and amongst local farmers who objected to having their lands bisected by the proposed railway corridor. When, in 1925, the Martinborough Town Board requested that the Railways Department provide improved railway facilities, and met with a representative of the Department to discuss their concerns, they acknowledged that they did not expect any progress to be made on a rail connection for Martinborough, and that private operators were already providing sufficient alternatives to a railway. The Railways Department had begun investigating transport options for Martinborough other than a railway line, and with the strident support of Alexander McLeod, the local member of parliament, the Reform Government abandoned the idea of a Martinborough Branch line.

Proposals
Although several variants were suggested, there were two main routes for this proposed line: from Featherston, and as an extension of the Greytown Branch.

Featherston – Martinborough route
This line would have started at Featherston Railway Station. Though this option was the generally preferred route, it was longer and would have involved significant additional construction costs compared with the alternative route, including an expensive crossing of the Tauherenikau River. It would also have involved greater maintenance costs, owing to its longer length and additional permanent structures (bridges, etc.). The 1913 survey found that the length of the line would be , with a maximum grade of 1 in 60, and a maximum curvature of radius. The most expensive works would be bridges crossing the Tauherenikau River ( long) and the Ruamahanga River ( long). Also required would be two flood openings of and respectively. Sidings were called for at (between No. 1 Line Road and the Tauherenikau Racecourse), at (west of Ward’s Line Road), and at (near Moiki Cutting). Estimated costs were provided in the original 1908 survey.

Greytown – Martinborough route
This option would have extended the Greytown Branch across State Highway 2 and then on to Martinborough, a distance of . It had several advantages over the rival proposal, including a shorter route, lower construction costs, and lower operational costs, as it would have been able to use the existing rolling stock and crews of the Greytown Branch. The shorter route was estimated to save between £15,000 and £18,000 in capital costs, but it would still have required the relatively expensive crossing of the Ruamahanga River. The total cost was estimated to be £79,000.

The chief opposition to the Greytown route focussed on the fact that the railway route to Wellington would have been up to longer than via Featherston. However, when investigating options for the Rimutaka Deviation, the Department considered a route via the Tauherenikau River Valley directly to Woodside, bypassing Featherston.
Had this deviation been adopted, the rail distance between Martinborough and Wellington would have been shorter via Greytown compared with a route via Featherston. Today There are no plans for a railway line to Martinborough. As roads in the region improved after World War I, and with the increasing popularity of road transport, the need and desire for a Martinborough Branch railway subsided. Public transport from Martinborough consists of bus routes to Masterton and Featherston, where passengers can connect with Wairarapa Connection trains. The school to which the wheelbarrow and silver spade were presented following the turning-of-the-first-sod ceremony was destroyed by fire in 1919, a blaze that destroyed the "first sod" and is also believed to have destroyed the ceremonial wheelbarrow. The silver spade was sold at auction in Wellington in June 1964. See also Greytown Branch Wairarapa Line References Further reading External links Martinborough New Zealand Colonial Museum Turning first sod for Martinborough railway track, a photo of the ceremony to commence construction of the line. Celebrations for turning sod of Martinborough railway, a photo of festivities for the start of construction on the line. Speaker at the 'turning the first sod' for Martinborough railway track ceremony, a photo of a guest speaker at the ceremony. Proposed railway lines in New Zealand Rail transport in Wellington 3 ft 6 in gauge railways in New Zealand Martinborough
The Piccadilly Murder is a 1929 mystery detective novel by the British writer Anthony Berkeley. Berkeley was a prominent writer during the Golden Age of Detective Fiction, known for his private detective Roger Sheringham series and for his development of the inverted detective story. Although not part of the Sheringham series, the novel features Chief Inspector Moresby of Scotland Yard, who also appeared several times alongside Sheringham. Moresby reappeared with the chief protagonist, Chitterwick, in a sequel, Trial and Error, in 1937.

Synopsis
Ambrose Chitterwick is a witness to the death of a lady in the lounge at the Piccadilly Palace Hotel, shortly after her companion dropped something into her coffee. Chief Inspector Moresby is convinced she was murdered by her nephew and sole heir, Major Sinclair, using prussic acid. He arrests Sinclair and plans to use Chitterwick as star witness for the prosecution. However, nagging doubts in Chitterwick's mind lead him to turn amateur detective and find the real truth.

References

Bibliography
Miskimmin, Esme. 100 British Crime Writers. Springer Nature, 2020.
Reilly, John M. Twentieth Century Crime & Mystery Writers. Springer, 2015.
Turnbull, Malcolm J. Elusion Aforethought: The Life and Writing of Anthony Berkeley Cox. Popular Press, 1996.

1929 British novels
Novels by Anthony Berkeley
British crime novels
British mystery novels
British detective novels
William Collins, Sons books
Novels set in London
Hurricane Emmy was the longest-lived hurricane of the 1976 Atlantic hurricane season. The fifth tropical cyclone and third hurricane of the season, Emmy developed from a tropical wave on August 20 to the east of the Lesser Antilles. After changing direction three times over several days, during which it reached a peak intensity of , it turned to the east and slowly weakened. Emmy passed through the Azores on September 3, and a day later it was absorbed by the approaching Hurricane Frances.

Emmy passed within of the Lesser Antilles, though only minor effects were experienced there. No damage was reported in the Azores, though strong winds from the hurricane caused a Venezuelan air force C-130 flight to crash near Lajes Field, killing all 68 aboard.

Meteorological history
A tropical wave moved off the coast of Africa between August 15 and August 16. The wave moved westward at , and developed atmospheric convection along the wave axis. It slowly organized, developing a low-level circulation, and formed into a tropical depression on August 20 while located about east of Barbados. A reconnaissance aircraft flight into the system on August 21 confirmed the existence of the depression, reporting winds of only with a pressure of 1,012 mbar. The depression slowly strengthened and organized, and after turning to the west-northwest, it intensified into Tropical Storm Emmy on August 22 while located east-northeast of Guadeloupe. The tropical wave from which Emmy developed continued westward through the Caribbean Sea and ultimately developed into Tropical Storm Joanna in the eastern Pacific Ocean.

Tropical Storm Emmy turned more to the northwest, and passed about northeast of Barbuda on August 23. The rapid development of an unseasonable frontal low pressure system to the northeast of the storm turned Emmy sharply east-northeastward on August 25; its eastward movement at such a low latitude for the time of year was unprecedented. The storm steadily intensified, and Emmy attained hurricane status later on the 25th while located north of Barbuda. After Emmy had moved eastward for about 24 hours, the westerlies retreated northward and the hurricane turned gradually to the northwest. A strong ridge over the north Atlantic Ocean turned Emmy sharply eastward on August 29. The hurricane continued to strengthen, and Emmy attained a peak intensity of shortly after turning to the east while located northeast of Bermuda. The hurricane maintained its peak intensity for 42 hours while moving eastward at , and slowly weakened thereafter. On September 1, Emmy turned to the east-southeast, and a day later it turned to the northeast as its forward motion decreased. Emmy passed through the Azores on September 3, and the following day it became extratropical to the north of the islands. The extratropical remnant persisted another six hours before being absorbed by the approaching Hurricane Frances.

Impact and preparations
Initially, it was uncertain whether Emmy would affect the Lesser Antilles, so officials issued a hurricane watch for the northeastern Leeward Islands. The watch was cancelled when the storm turned more to the north, with the outer fringes of the hurricane only slightly impacting Antigua. Several ships experienced rough seas and strong winds from Emmy, though none reported any damage.
After Emmy turned to the west for the final time, forecasters at the National Hurricane Center no longer considered the hurricane a significant threat to land, though they noted a remote chance that it could affect land masses. The hurricane initially also posed a threat to Bermuda, though it ultimately remained away from the island. Two days before Emmy passed through the Azores, the National Hurricane Center advised citizens there to closely monitor the progress of the storm. No damage reports exist from the Azores, though any damage was likely not severe.

On September 3, a C-130 Hercules air force flight left Caracas, Venezuela for Spain, with a flight crew of 8 and 60 members of the Central University of Venezuela choir. That night, heavy rainfall from the hurricane forced the plane to land on the Azorean island of Terceira, Portugal. After twice attempting to land in hurricane-force winds, the plane crashed into a hill one mile from the runway at Lajes Field, killing all 68 aboard.

See also
List of Azores hurricanes

References

External links
Hurricane Emmy Tropical Cyclone Report
1976 Monthly Weather Review

Category 2 Atlantic hurricanes
1976 Atlantic hurricane season
Hurricanes in the Azores
```yaml decrypt: my:name: '["value"]' paths: my:name: value: - value string: '["value"]' redacted: '["value"]' object: HRAADltdaW50ZXJmYWNlIHt9/4MCAQL/hAABEAAAFf+EEgABBnN0cmluZwwHAAV2YWx1ZQ== secure: false isObject: true my:name[0]: value: value string: value redacted: value object: EhAABnN0cmluZwwHAAV2YWx1ZQ== secure: false isObject: false ```
```yaml
plural : "3B"
direction : "LTR"
numbers {
  symbols {
    decimal : ","
    group : " "
    negative : "−"
    percent : "%"
    permille : "‰"
  }
  formats {
    decimal : "#,##0.###"
    currency : "#,##0.00"
    percent : "#,##0%"
  }
}
currencies {
  DKK { symbol : "Dkr" }
  NOK { symbol : "kr" }
  SEK { symbol : "Skr" }
}
datetime {
  formatNames {
    months {
      abbreviated {
        1 : "ođđj"
        2 : "guov"
        3 : "njuk"
        4 : "cuo"
        5 : "mies"
        6 : "geas"
        7 : "suoi"
        8 : "borg"
        9 : "čakč"
        10 : "golg"
        11 : "skáb"
        12 : "juov"
      }
      narrow {
        1 : "O"
        2 : "G"
        3 : "N"
        4 : "C"
        5 : "M"
        6 : "G"
        7 : "S"
        8 : "B"
        9 : "Č"
        10 : "G"
        11 : "S"
        12 : "J"
      }
      wide {
        1 : "ođđajagemánnu"
        2 : "guovvamánnu"
        3 : "njukčamánnu"
        4 : "cuoŋománnu"
        5 : "miessemánnu"
        6 : "geassemánnu"
        7 : "suoidnemánnu"
        8 : "borgemánnu"
        9 : "čakčamánnu"
        10 : "golggotmánnu"
        11 : "skábmamánnu"
        12 : "juovlamánnu"
      }
    }
    days {
      abbreviated {
        sun : "sotn"
        mon : "vuos"
        tue : "maŋ"
        wed : "gask"
        thu : "duor"
        fri : "bear"
        sat : "láv"
      }
      narrow {
        sun : "S"
        mon : "V"
        tue : "M"
        wed : "G"
        thu : "D"
        fri : "B"
        sat : "L"
      }
      short {
        sun : "sotn"
        mon : "vuos"
        tue : "maŋ"
        wed : "gask"
        thu : "duor"
        fri : "bear"
        sat : "láv"
      }
      wide {
        sun : "sotnabeaivi"
        mon : "vuossárga"
        tue : "maŋŋebárga"
        wed : "gaskavahkku"
        thu : "duorasdat"
        fri : "bearjadat"
        sat : "lávvardat"
      }
    }
    periods {
      abbreviated {
        am : "i.b."
        pm : "e.b."
      }
      narrow {
        am : "i.b."
        pm : "e.b."
      }
      wide {
        am : "iđitbeaivet"
        pm : "eahketbeaivet"
      }
    }
  }
}
```
The Chu Hummingbird was an experimental co-axial helicopter developed in China during the 1940s by Chinese aviation engineer Major General C. J. Chu (朱家仁), in two versions designated the Model A and the Model B. The Model A was a single-seat, twin-rotor test craft used for static (non-flying) tests and made its debut in March 1948; it was destroyed when a rotor broke off. A replacement craft, the Model B, was introduced in 1948 and was able to fly, but it was abandoned when Chu left for Formosa. Not much is known about either model, as both were left behind in China when Chu evacuated to Taiwan after the founding of the People's Republic of China in 1949. A successor model, the CJC-3, was developed by Chu in Taiwan in the 1950s.

Specifications (Hummingbird Model A/B)

Jia or Model A
Number of seats: 1
Engine: Kinner B-5
Rotor diameter: 4.8 m (approx.)
Gross weight: 589.5 kg
Maximum speed (level flight): 136 km/h (84 mph, 73 kn)
Maximum climb rate: 910 m
Ceiling: Unknown
Range: 219 km

Yi or Model B
Number of seats: 1
Engine: Kinner B-5
Rotor diameter: 4.8 m (approx.)
Gross weight: 589.5 kg
Maximum speed (level flight): 136 km/h (84 mph, 73 kn)
Maximum climb rate: 910 m
Ceiling: Unknown
Range: 219 km

References

Aviation in China
1940s Chinese experimental aircraft
1940s Republic of China helicopters
Single-engined piston helicopters
```python # mypy: allow-untyped-defs from __future__ import annotations import atexit import collections import contextlib import copy import dataclasses import datetime import dis import enum import functools import gc import importlib import inspect import itertools import linecache import logging import math import operator import os import re import sys import textwrap import threading import time import types import typing import warnings import weakref from contextlib import contextmanager from functools import lru_cache from types import MethodWrapperType from typing import ( Any, Callable, cast, ClassVar, Counter, DefaultDict, Deque, Dict, Iterable, Iterator, KeysView, List, Optional, overload, Set, Tuple, Type, TypeVar, Union, ValuesView, ) from typing_extensions import Literal, TypeGuard import torch import torch._functorch.config import torch._inductor.config as inductor_config import torch.fx.experimental.symbolic_shapes import torch.utils._pytree as pytree from torch import fx from torch._C import ( _get_function_stack_at, _len_torch_function_stack, _pop_torch_function_stack, _push_on_torch_function_stack, ) from torch._dispatch.python import enable_python_dispatcher from torch._guards import TracingContext from torch._subclasses.meta_utils import is_sparse_compressed from torch._utils_internal import log_compilation_event from torch.fx._utils import _format_graph_code, lazy_format_graph_code from torch.nn.modules.lazy import LazyModuleMixin from torch.utils._triton import has_triton, has_triton_package from torch.utils.hooks import RemovableHandle try: import numpy as np except ModuleNotFoundError: np = None # type: ignore[assignment] try: import torch._logging import torch._numpy as tnp from torch._guards import detect_fake_mode # noqa: F401n from torch._logging import LazyString from . import config # NOTE: Make sure `NP_SUPPORTED_MODULES` and `NP_TO_TNP_MODULE` are in sync. if np: NP_SUPPORTED_MODULES: Tuple[types.ModuleType, ...] = ( np, np.fft, np.linalg, np.random, ) NP_TO_TNP_MODULE = { np: tnp, np.fft: tnp.fft, np.linalg: tnp.linalg, np.random: tnp.random, } else: NP_SUPPORTED_MODULES = () NP_TO_TNP_MODULE = {} from torch._subclasses.fake_tensor import FakeTensor, is_fake, maybe_get_fake_mode except ImportError: pass T = TypeVar("T") unpatched_nn_module_getattr = torch.nn.Module.__getattr__ counters: DefaultDict[str, Counter[str]] = collections.defaultdict(collections.Counter) optimus_scuba_log: Dict[str, Any] = {} troubleshooting_url = ( "path_to_url" ) nnmodule_doc_url = "path_to_url" nnmodule_doc_url_msg = f"See {nnmodule_doc_url} for more information and limitations." log = logging.getLogger(__name__) # profiling compilation time by function compilation_time_metrics: Dict[str, List[float]] = {} # profiling compilation time by frame phase frame_phase_timing: Dict[str, Dict[str, float]] = collections.defaultdict( lambda: collections.defaultdict(float) ) timer_counter = itertools.count() def tabulate( rows: Union[List[Tuple[str, object]], List[List[object]]], headers: Union[Tuple[str, ...], List[str]], ) -> str: try: import tabulate return tabulate.tabulate(rows, headers=headers) except ImportError: return "\n".join( ", ".join(map(str, row)) for row in itertools.chain([headers], rows) ) curr_frame = 0 # Note: Called for you by dynamo - you almost never ever want to invoke this yourself. def increment_frame() -> None: global curr_frame curr_frame = curr_frame + 1 # Note: Called for you by dynamo - you almost never ever want to invoke this yourself. 
def reset_frame_count() -> None: global curr_frame frame_phase_timing.clear() compilation_time_metrics.clear() curr_frame = 0 op_count = 0 def increment_op_count(cnt: int) -> None: global op_count op_count += cnt # Calculate total time spent so far for each phase # For example, {'entire_frame_compile':8.574629999999999, 'backend_compile':5.26806} def calculate_time_spent() -> Dict[str, float]: total_wall_time = 0.0 total_by_key = {} for timings in frame_phase_timing.values(): total_wall_time += timings.get( "entire_frame_compile", timings.get("inductor_compile", 0) ) for key, timing in timings.items(): if key not in total_by_key: total_by_key[key] = timing else: total_by_key[key] += timing if total_by_key: total_by_key["total_wall_time"] = total_wall_time return total_by_key # Print a report of time spent so far # Ex: # TIMING: # entire_frame_compile:8.574629999999999 # backend_compile:5.26806 def print_time_report() -> None: total_by_key = calculate_time_spent() out = "TIMING:" for key, value in total_by_key.items(): out = f"{out} {key}:{round(value, 5)}" print(out) def _add_time_spent(key: str, phase_name: str, time_spent: float) -> None: frame_phase_timing[key][phase_name] += time_spent # dynamo_timed is a context manager # By wrapping a function in dynamo_timed, we can store a record in compilation_time_metrics # where the key is the functions name. # For example: # # def _foo(...): # with dynamo_timed("_foo"): # ... # # Would show up as an entry in our timing dict: # OrderedDict([('_foo', [0.083690, 0.23949, 3.1425e-05])]) # This is extremely useful for granular debugging. # # Although it is tempting to use dynamo_timed as a decorator, please do not. # In its decorator form it makes cProfile traces less useful as dynamo_timed # suddenly becomes a bottleneck for lots of function calls (as only one parent # pointer is recorded). # # For a higher-level mode, pass a phase_name into dynamo_timed # phase_names record an extra record into a separate compilation timing structure, # one keyed on frame+name rather than function. # The frame is incremented outside of this function, in def increment_frame() above. # `fwd_only` is used to identify if this phase or function is only called # during compiling fwd graphs, e.g, `entire_frame_compile` and `backend_compile`. # The other phases (`inductor_compile` and `code_gen`) are called for both fwd and bwd graphs. @contextmanager def dynamo_timed( key: str, phase_name: Optional[str] = None, fwd_only: bool = True, ): if key not in compilation_time_metrics: compilation_time_metrics[key] = [] fail_type: Optional[str] = None fail_reason: Optional[str] = None time_spent = float("-inf") try: with torch.profiler.record_function(f"{key} (dynamo_timed)"): t0 = time.time() ChromiumEventLogger.log_event_start(key, time.time_ns()) if phase_name: ChromiumEventLogger.log_event_start(phase_name, time.time_ns()) yield if phase_name: ChromiumEventLogger.log_event_end(phase_name, time.time_ns()) ChromiumEventLogger.log_event_end(key, time.time_ns()) time_spent = time.time() - t0 compilation_time_metrics[key].append(time_spent) except Exception as e: fail_type = str(type(e)) fail_reason = str(e) raise finally: # Only record backward compilation metrics if phase_name is not None! if phase_name: frame_key = str(curr_frame) # fwd only compilation stages: entire_frame_compile, backend_compile. # use frame_key as time aggregation key. 
if fwd_only and fail_type is None: _add_time_spent(frame_key, phase_name, time_spent) else: # fwd + bwd compilation stages: inductor_compile, code_gen. # use frame_key as time aggregation key for fwd graphs; # use compile_id as time aggregation key for bwd graphs. if torch._guards.TracingContext.try_get() is not None: aot_graph_name = str( torch._guards.TracingContext.get().aot_graph_name ) if ( "forward" in aot_graph_name or "inference" in aot_graph_name ) and fail_type is None: _add_time_spent(frame_key, phase_name, time_spent) elif "backward" in aot_graph_name: compile_id = str( torch._guards.CompileContext.current_compile_id() ) if fail_type is None: _add_time_spent(compile_id, phase_name, time_spent) # log backward compilation metrics at the end of `inductor_compile` of bwd graph, # one record for one bwd graph. if phase_name == "inductor_compile": if fail_type is None: inductor_compile_time = frame_phase_timing[ compile_id ].get("inductor_compile", None) code_gen_time = frame_phase_timing[compile_id].get( "code_gen", None ) else: inductor_compile_time = None code_gen_time = None metrics = BwdCompilationMetrics( compile_id, inductor_compile_time, code_gen_time, fail_type, fail_reason, ) record_compilation_metrics(metrics) @overload def compile_times(repr: Literal["str"], aggregate: bool = False) -> str: ... @overload def compile_times( repr: Literal["csv"], aggregate: bool = False ) -> Tuple[List[str], List[object]]: ... def compile_times(repr="str", aggregate: bool = False): """ Get metrics about torchdynamo frontend/backend compilation times. Accumulates information from functions tagged with `dynamo_timed`. repr='str' returns a printable string for user interaction, and 'csv' returns headers, rows which can be logged for output aggregate causes values from multiple compilations (e.g. split graphs) to be accumulated into one value. If false, expect more than one value per metric. 
""" def fmt_fn(values, item_fn=lambda x: x): if aggregate: return item_fn(sum(values)) return ", ".join(map(item_fn, values)) if repr == "str": rows = [ (k, fmt_fn(compilation_time_metrics[k], item_fn=lambda x: f"{x:.4f}")) for k in compilation_time_metrics ] out = "TorchDynamo compilation metrics:\n" out += tabulate(rows, headers=("Function", "Runtimes (s)")) return out elif repr == "csv": values = [ fmt_fn(v, item_fn=lambda x: f"{x:.6f}") for v in compilation_time_metrics.values() ] headers = list(compilation_time_metrics.keys()) return headers, values return None @atexit.register def dump_compile_times() -> None: log.info(compile_times(repr="str", aggregate=True)) tensortype_to_dtype = { torch.FloatTensor: (torch.float32, torch.float), torch.DoubleTensor: (torch.float64, torch.double), torch.HalfTensor: (torch.float16, torch.half), torch.BFloat16Tensor: (torch.bfloat16,), torch.ByteTensor: (torch.uint8,), torch.CharTensor: (torch.int8,), torch.LongTensor: (torch.int64, torch.long), torch.IntTensor: (torch.int32, torch.int), torch.ShortTensor: (torch.int16, torch.short), torch.BoolTensor: (torch.bool,), } class DuplicateWarningChecker: def __init__(self, maxsize: int = 4096) -> None: self.maxsize = maxsize self.reset() def reset(self): self.set = collections.OrderedDict() def add(self, key: Union[str, Tuple[object, object]]) -> bool: if key in self.set: self.set.move_to_end(key, last=True) if not config.verbose: return False else: self.set[key] = None while len(self.set) > self.maxsize: self.set.popitem(last=False) return True graph_break_dup_warning_checker = DuplicateWarningChecker() def setup_compile_debug(): compile_debug = os.environ.get("TORCH_COMPILE_DEBUG", "0") == "1" if compile_debug: return add_file_handler() return contextlib.ExitStack() def reset_graph_break_dup_checker() -> None: graph_break_dup_warning_checker.reset() def add_file_handler(): log_path = os.path.join(get_debug_dir(), "torchdynamo") os.makedirs(log_path, exist_ok=True) log_file_handler = logging.FileHandler(os.path.join(log_path, "debug.log")) logger = logging.getLogger("torch._dynamo") logger.addHandler(log_file_handler) exitstack = contextlib.ExitStack() exitstack.callback(lambda: logger.removeHandler(log_file_handler)) return exitstack def setup_log_file(): exitstack = contextlib.ExitStack() if config.log_file_name is not None: log_file_handler = logging.FileHandler(config.log_file_name) for logger in torch._logging._internal.get_loggers(): logger.addHandler(log_file_handler) exitstack.callback(lambda: logger.removeHandler(log_file_handler)) return exitstack return exitstack def gen_record_file_name(exc, code) -> str: return f"{get_debug_dir()}/error_recordings/\ {code.co_name}_{type(exc).__name__}_{code.co_firstlineno}.rec" def write_record_to_file(filename: str, exec_record) -> None: try: if os.path.exists(filename): log.warning( "Unable to write execution record %s; file already exists.", filename ) else: os.makedirs(os.path.dirname(filename), exist_ok=True) with open(filename, "wb") as f: exec_record.dump(f) except Exception: log.exception("Unable to write execution record %s", filename) def count_calls(g: fx.Graph) -> int: c = 0 for n in g.nodes: if "call" in n.op: c += 1 return c def identity(x): return x def hashable(x): try: hash(x) return True except TypeError: return False # cannot hash writable memoryview object except ValueError: return False def nothing(*args, **kwargs): pass class ExactWeakKeyDictionary: """Similar to weakref.WeakKeyDictionary, but use `is`/`id` rather than `==` to compare 
equality""" def __init__(self): self.values = {} self.refs = {} def __getitem__(self, key): return self.values[id(key)] def get(self, key, default=None): return self.values.get(id(key), default) def __contains__(self, key): return id(key) in self.values def __setitem__(self, key, value): idx = id(key) if idx not in self.refs: self.refs[idx] = weakref.ref(key, lambda ref: self._remove_id(idx)) self.values[idx] = value def _remove_id(self, idx): if idx in self.values: del self.values[idx] if idx in self.refs: del self.refs[idx] def clear(self): self.refs.clear() self.values.clear() @overload def istype(obj: object, allowed_types: Type[T]) -> TypeGuard[T]: ... @overload def istype( obj: object, allowed_types: Tuple[Type[List[T]], Type[Tuple[T, ...]]] ) -> TypeGuard[T]: ... @overload def istype(obj: object, allowed_types: Iterable[type]) -> bool: ... def istype(obj, allowed_types): """isinstance() without subclasses""" if isinstance(allowed_types, (tuple, list, set)): return type(obj) in allowed_types return type(obj) is allowed_types if sys.version_info >= (3, 12): # Some typing classes moved to C in 3.12, # which no longer have the _Final mixin. _builtin_final_typing_classes = ( typing.ParamSpecArgs, typing.ParamSpecKwargs, typing.ParamSpec, typing.TypeVar, typing.TypeVarTuple, typing.TypeAliasType, ) def is_typing(value): # _Final catches most of typing classes: # - Any # - Callable # - Union # ... # # NB: we intentionally ignore classes that inherit from Generic, since they # can be used as both TypingVariable as well as UserDefinedClassVariable. if sys.version_info >= (3, 12) and isinstance(value, _builtin_final_typing_classes): return True return isinstance(value, typing._Final) or value is typing.Generic # type: ignore[attr-defined] def is_numpy_int_type(value): if not np: return False return istype( value, ( np.int8, np.int16, np.int32, np.int64, np.uint8, np.uint16, np.uint32, np.uint64, ), ) def is_numpy_float_type(value): if not np: return False return istype( value, ( np.float16, np.float32, np.float64, ), ) def is_lru_cache_wrapped_function(value): return isinstance(value, functools._lru_cache_wrapper) and is_function( inspect.getattr_static(value, "__wrapped__") ) def is_function_or_wrapper(value): return is_function(value) or isinstance( value, (torch._ops.OpOverloadPacket, torch._ops.OpOverload) ) def is_function(value): return isinstance( value, ( types.FunctionType, types.BuiltinFunctionType, types.MethodDescriptorType, types.WrapperDescriptorType, ), ) def is_wrapper_or_member_descriptor(value): return isinstance( value, ( # set up by PyGetSetDef types.GetSetDescriptorType, # set by PyMethodDef, e.g. list.append types.MethodDescriptorType, # slots - list.__add__ types.WrapperDescriptorType, # set up by PyMemberDef types.MemberDescriptorType, # wrapper over C functions types.MethodWrapperType, ), ) def unwrap_if_wrapper(fn): return unwrap_with_attr_name_if_wrapper(fn)[0] def unwrap_with_attr_name_if_wrapper(fn): # TODO(anijain2305) - Investigate if we can get rid of this function # unpack @torch._dynamo.optimize()(fn) wrapped function if is_function(fn) and inspect.getattr_static(fn, "_torchdynamo_inline", False): fn = inspect.getattr_static(fn, "_torchdynamo_inline", fn) attr_name = "_torchdynamo_inline" else: attr_name = None return fn, attr_name def is_numpy_ndarray(value): if not np: return False return istype(value, np.ndarray) def istensor(obj): """Check of obj is a tensor""" tensor_list: Tuple[type, ...] 
= ( torch.Tensor, torch.nn.Parameter, *config.traceable_tensor_subclasses, ) tensor_list = tensor_list + (torch._subclasses.FakeTensor,) return istype(obj, tensor_list) def is_lazy_module(mod): return isinstance(mod, LazyModuleMixin) @functools.lru_cache(4096) def print_once(*args): print(*args) def make_cell(val=None): """Some black magic to create a cell object that usually only exists in a closure""" x = val def f(): return x assert f.__closure__ is not None and len(f.__closure__) == 1 return f.__closure__[0] def proxy_args_kwargs(args, kwargs): try: proxy_args = tuple(arg.as_proxy() for arg in args) proxy_kwargs = {key: arg.as_proxy() for key, arg in kwargs.items()} return proxy_args, proxy_kwargs except NotImplementedError as e: from .exc import unimplemented from .variables.base import typestr unimplemented( f"call_function args: {typestr(*args)} {typestr(*list(kwargs.values()))}", from_exc=e, ) @dataclasses.dataclass class CompilationMetrics: compile_id: str frame_key: str co_name: str co_filename: str co_firstlineno: int cache_size: int accumulated_cache_size: int guard_count: Optional[int] shape_env_guard_count: Optional[int] graph_op_count: Optional[int] graph_node_count: Optional[int] graph_input_count: Optional[int] start_time: float entire_frame_compile_time_s: Optional[float] backend_compile_time_s: Optional[float] inductor_compile_time_s: Optional[float] code_gen_time_s: Optional[float] fail_type: Optional[str] fail_reason: Optional[str] fail_user_frame_filename: Optional[str] fail_user_frame_lineno: Optional[int] non_compliant_ops: Set[str] compliant_custom_ops: Set[str] restart_reasons: Set[str] dynamo_time_before_restart_s: float # Sometimes, we will finish analyzing a frame but conclude we don't want # to install any guarded code. True means we actually decided to install # a compiled frame has_guarded_code: bool possibly_missed_reinplacing_opportunities: Optional[int] @dataclasses.dataclass class BwdCompilationMetrics: compile_id: str inductor_compile_time_s: Optional[float] code_gen_time_s: Optional[float] fail_type: Optional[str] fail_reason: Optional[str] DEFAULT_COMPILATION_METRICS_LIMIT = 64 _compilation_metrics: Deque[ Union[CompilationMetrics, BwdCompilationMetrics] ] = collections.deque(maxlen=DEFAULT_COMPILATION_METRICS_LIMIT) def record_compilation_metrics( compilation_metrics: Union[CompilationMetrics, BwdCompilationMetrics] ): global _compilation_metrics _compilation_metrics.append(compilation_metrics) if isinstance(compilation_metrics, CompilationMetrics): name = "compilation_metrics" else: name = "bwd_compilation_metrics" torch._logging.trace_structured( name, lambda: { k: list(v) if isinstance(v, set) else v for k, v in dataclasses.asdict(compilation_metrics).items() }, ) if config.log_compilation_metrics: log_compilation_event(compilation_metrics) def set_compilation_metrics_limit(new_size: int) -> None: global _compilation_metrics while len(_compilation_metrics) > new_size: _compilation_metrics.popleft() new_deque = collections.deque(_compilation_metrics, maxlen=new_size) _compilation_metrics = new_deque def clear_compilation_metrics() -> None: global _compilation_metrics _compilation_metrics.clear() def get_compilation_metrics() -> List[Union[CompilationMetrics, BwdCompilationMetrics]]: return list(_compilation_metrics) class ChromiumEventLogger: """Logs chromium events to structured logs. tlparse will concatenate these into a perfetto UI link. See path_to_url#heading=h.yr4qxyxotyw for a specification of the Chromium Event JSON format. 
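
    Illustrative usage sketch (timestamps come from time.time_ns(); the event
    name is arbitrary):

        ChromiumEventLogger.log_event_start("backend_compile", time.time_ns())
        ...  # the work being timed
        ChromiumEventLogger.log_event_end("backend_compile", time.time_ns())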
""" @staticmethod def log_event_start( event_name: str, time_ns: int, metadata: Optional[Dict[str, Any]] = None, ) -> None: """ Logs the start of a single event. :param str event_name Name of event to appear in trace :param time_ns Timestamp in nanoseconds :param metadata: Any extra metadata associated with this event """ ChromiumEventLogger._log_timed_event( event_name, time_ns, "B", metadata, ) @staticmethod def log_event_end( event_name: str, time_ns: int, metadata: Optional[Dict[str, Any]] = None, ) -> None: """ Logs the end of a single event. This function should only be called after log_event_start with the same event_name. :param event_name: Name of event to appear in trace :param time_ns: Timestamp in nanoseconds :param metadata: Any extra metadata associated with this event """ ChromiumEventLogger._log_timed_event( event_name, time_ns, "E", metadata, ) @staticmethod def _log_timed_event( event_name: str, time_ns: int, phase: str, metadata: Optional[Dict[str, Any]] = None, ) -> None: """ Logs a timed event in chromium format. See log_event_start, log_event_end, etc. """ event = { "name": event_name, "ts": time_ns / 1000, # Chromium events are in ms "args": metadata, "ph": phase, "pid": 0, # pid should be specified on all logs, we don't personally care about the actual process id } torch._logging.trace_structured( "chromium_event", payload_fn=lambda: event, suppress_context=False, expect_trace_id=False, # Not every chromium event will have a trace_id ) @staticmethod def log_instant_event( event_name: str, time_ns: int, metadata: Optional[Dict[str, Any]] = None, ) -> None: """ Log an instant event with no associated duration. :param str event_name: Name of event to appear in trace :param int time_ns Timestamp in nanoseconds :param Optional[Dict[str, Any]] metadata: Any extra metadata associated with this event :param str cname optional color for the arrow in the trace """ event = { "name": event_name, "ts": time_ns / 1000, "args": metadata, "ph": "i", "pid": 0, # pid should be specified on all logs, we don't personally care about the actual process id "s": "p", # We use "process" level instant events so they all appear on the same row in the trace. 
} torch._logging.trace_structured( "chromium_event", payload_fn=lambda: event, suppress_context=False, expect_trace_id=True, ) @dataclasses.dataclass class CleanupHook: """Remove a global variable when hook is called""" scope: Dict[str, Any] name: str def __call__(self, *args): # Make sure we're not shutting down if CleanupManager is not None: CleanupManager.count -= 1 del self.scope[self.name] @staticmethod def create(scope, name, val): assert name not in scope CleanupManager.count += 1 scope[name] = val return CleanupHook(scope, name) class CleanupManager(ExactWeakKeyDictionary): count = 0 instance: ClassVar[CleanupManager] def _remove_id(self, idx): for hook in self.values[idx]: hook() super()._remove_id(idx) CleanupManager.instance = CleanupManager() def clone_tensor(x): """Clone the tensor and its gradient""" y = x.clone().requires_grad_(x.requires_grad) if x.is_leaf and x.grad is not None: y.grad = x.grad.clone() return y def clone_input(x, *, dtype=None): """copy while preserving strides""" # TODO: this is questionable if is_fake(x): # this func fails on fake tensors in __torch_dispatch__ return x def torch_clone(x): y = torch.clone(x) if x.is_leaf: y.requires_grad_(x.requires_grad) if x.is_leaf and x.grad is not None: y.grad = clone_input(x.grad, dtype=dtype) if hasattr(x, "_dynamo_dynamic_indices"): y._dynamo_dynamic_indices = x._dynamo_dynamic_indices.copy() # type: ignore[attr-defined] return y with torch.no_grad(): if x.device.type == "xla": # Access data_ptr() for a xla tensor will cause crash return torch_clone(x) # Handle sparse storage (no stride). if x.layout is torch.sparse_coo: return torch.sparse_coo_tensor( torch_clone(x._indices()), torch_clone(x._values()), x.shape, is_coalesced=x.is_coalesced(), ) elif is_sparse_compressed(x): if x.layout in {torch.sparse_csr, torch.sparse_bsr}: compressed_indices = x.crow_indices() plain_indices = x.col_indices() else: compressed_indices = x.ccol_indices() plain_indices = x.row_indices() return torch.sparse_compressed_tensor( torch_clone(compressed_indices), torch_clone(plain_indices), torch_clone(x.values()), x.shape, layout=x.layout, ) needed_size = sum( (shape - 1) * stride for shape, stride in zip(x.size(), x.stride()) ) if x.is_quantized: result = torch.empty_quantized((needed_size + 32,), x) else: result = torch.empty( needed_size + 32, dtype=dtype or x.dtype, device=x.device ) cache_line_offset = ( (x.data_ptr() - result.data_ptr()) % 32 ) // x.element_size() result.as_strided_(x.size(), x.stride(), cache_line_offset) try: result.copy_(x.clone()) if x.is_leaf: result.requires_grad_(x.requires_grad) if x.is_leaf and x.grad is not None: result.grad = clone_input(x.grad, dtype=dtype) except RuntimeError: # RuntimeError: unsupported operation: more than one element of the written-to # tensor refers to a single memory location. Please clone() the tensor before # performing the operation. 
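            # Overlapping layouts (e.g. stride-0 / expanded tensors) make an
            # element-wise copy_ ill-defined, so fall back to the plain clone path.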
return torch_clone(x) if hasattr(x, "_dynamo_dynamic_indices"): result._dynamo_dynamic_indices = x._dynamo_dynamic_indices.copy() # type: ignore[attr-defined] return result def clone_inputs(example_inputs): res: Union[Dict[Any, Any], List[Any]] if type(example_inputs) is dict: res = dict(example_inputs) for key, value in res.items(): if isinstance(value, tuple): res[key] = clone_inputs(value) else: assert isinstance(value, torch.Tensor), type(value) res[key] = clone_input(value) return res res = list(example_inputs) for i in range(len(res)): if isinstance(res[i], torch.Tensor): res[i] = clone_input(res[i]) return res def skip_frame_if_in_functorch_mode(val: torch.Tensor): try: val.data_ptr() # will throw for functorch tensors except RuntimeError as e: from .exc import SkipFrame # This will be GradTrackingTensor/BatchedTensor/etc functorch_subclass_name = re.sub(r"\(.*", "", repr(val)) raise SkipFrame( f"torch.compile cannot be run in context: {functorch_subclass_name}" ) from e @contextmanager def preserve_rng_state(): disable_functorch = torch._C._DisableFuncTorch disable_current_modes = torch.utils._python_dispatch._disable_current_modes with disable_current_modes(), disable_functorch(): rng_state = torch.clone(torch.random.get_rng_state()) skip_frame_if_in_functorch_mode(rng_state) if torch.cuda.is_available(): cuda_rng_state = torch.clone(torch.cuda.get_rng_state()) try: yield finally: with torch.utils._python_dispatch._disable_current_modes(): torch.random.set_rng_state(rng_state) if torch.cuda.is_available(): torch.cuda.set_rng_state(cuda_rng_state) # type: ignore[possibly-undefined] def is_jit_model(model0): return isinstance( model0, ( torch.jit._trace.TopLevelTracedModule, torch.jit._script.RecursiveScriptModule, torch.jit.ScriptFunction, torch.jit.ScriptModule, ), ) def torchscript(model, example_inputs, verbose=False): if is_jit_model(model): # already done? return model try: return torch.jit.trace(model, example_inputs) except Exception: try: return torch.jit.script(model) except Exception: if verbose: log.exception("jit error") else: log.error("Both torch.jit.trace and torch.jit.script failed") return None def getfile(obj): try: return inspect.getfile(obj) except (TypeError, OSError): return None def is_namedtuple(obj): """Test if an object is a namedtuple or a torch.return_types.* quasi-namedtuple""" return is_namedtuple_cls(type(obj)) def is_namedtuple_cls(cls): """Test if an object is a namedtuple or a (torch.return_types|torch.autograd.forward_ad).* quasi-namedtuple""" try: if issubclass(cls, tuple): bases = getattr(cls, "__bases__", []) or [None] module = getattr(cls, "__module__", None) return module in ("torch.return_types", "torch.autograd.forward_ad") or ( bases[0] is tuple and hasattr(cls, "_make") and hasattr(cls, "_fields") ) except TypeError: pass return False @functools.lru_cache(1) def namedtuple_fields(cls): """Get the fields of a namedtuple or a torch.return_types.* quasi-namedtuple""" if cls is slice: return ["start", "stop", "step"] assert issubclass(cls, tuple) if hasattr(cls, "_fields"): # normal namedtuples return cls._fields @dataclasses.dataclass class Marker: index: int # frustrating ones e.g. 
torch.return_types.max assert cls.__module__ == "torch.return_types" obj = cls(map(Marker, range(cls.n_fields))) fields: List[Optional[str]] = [None] * cls.n_fields for name in dir(obj): if name[0] != "_" and isinstance(getattr(obj, name), Marker): fields[getattr(obj, name).index] = name return fields def checkpoint_params(gm): with torch.no_grad(): rng_state = torch.clone(torch.random.get_rng_state()) if torch.cuda.is_available(): cuda_rng_state = torch.clone(torch.cuda.get_rng_state()) saved_state = [] for param in itertools.chain(gm.parameters(), gm.buffers()): saved_state.append((param, param._version, torch.clone(param))) def restore(): with torch.no_grad(): torch.random.set_rng_state(rng_state) if torch.cuda.is_available(): torch.cuda.set_rng_state(cuda_rng_state) for param, version, original_value in saved_state: if param._version != version: param.copy_(original_value) return restore def timed(model, example_inputs, times=1): if torch.cuda.is_available(): synchronize = torch.cuda.synchronize else: synchronize = nothing synchronize() gc.collect() torch.manual_seed(1337) t0 = time.perf_counter() for _ in range(times): result = model(*example_inputs) synchronize() t1 = time.perf_counter() return result, t1 - t0 # type: ignore[possibly-undefined] def check_is_cuda(gm, example_inputs): return all(x.is_cuda for x in itertools.chain(example_inputs, gm.parameters(True))) @lru_cache(32) def rot_n_helper(n): assert n > 1 vars = [f"v{i}" for i in range(n)] rotated = reversed(vars[-1:] + vars[:-1]) fn = eval(f"lambda {','.join(vars)}: ({','.join(rotated)})") fn.__name__ = f"rot_{n}_helper" return fn common_constant_types: Set[type] = { int, float, complex, bool, str, bytes, type(None), Ellipsis.__class__, types.CodeType, torch.device, torch.dtype, torch.memory_format, torch.layout, } if has_triton_package(): import triton common_constant_types.add(triton.language.dtype) """ Difference between is_safe_constant and common_constant_types. * common_constant_types: Constants would be wrapped by VariableBuilder.wrap_literal as ConstantVariable. * is_safe_constant: Constants can be loaded by LOAD_CONST bytecode. 
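For example (illustrative): a nested tuple such as (1, "a", (2.0, None)) is
accepted by is_safe_constant via recursion, while common_constant_types only
lists the leaf types themselves.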
""" def is_safe_constant(v): if istype(v, (tuple, frozenset)): return all(map(is_safe_constant, v)) return isinstance(v, (enum.Enum, type, torch.Size)) or istype( v, common_constant_types | {slice}, ) def specialize_symnode(arg): from .variables import ConstantVariable, SymNodeVariable # Guard and specialize if isinstance(arg, SymNodeVariable): return ConstantVariable.create(arg.evaluate_expr()) return arg def guard_if_dyn(arg): from .variables import ConstantVariable arg = specialize_symnode(arg) if isinstance(arg, ConstantVariable): return arg.as_python_constant() return arg def check_constant_args(args, kwargs): return all(x.is_python_constant() for x in itertools.chain(args, kwargs.values())) def check_unspec_python_args(args, kwargs): from .variables.constant import ConstantVariable from .variables.tensor import UnspecializedPythonVariable unspec_count = 0 for x in itertools.chain(args, kwargs.values()): if isinstance(x, UnspecializedPythonVariable): unspec_count += 1 elif not isinstance(x, ConstantVariable): return False return unspec_count > 0 def check_unspec_or_constant_args(args, kwargs): # A fused version of: # return check_constant_args(args, kwargs) or check_unspec_python_args(args, kwargs) from .variables.tensor import UnspecializedPythonVariable for x in itertools.chain(args, kwargs.values()): if not (x.is_python_constant() or isinstance(x, UnspecializedPythonVariable)): return False return True def check_numpy_ndarray_args(args, kwargs): from .variables.tensor import NumpyNdarrayVariable return any( isinstance(x, NumpyNdarrayVariable) for x in itertools.chain(args, kwargs.values()) ) dict_keys: Type[KeysView[Any]] = type({}.keys()) dict_values: Type[ValuesView[Any]] = type({}.values()) odict_values: Type[ValuesView[Any]] = type(collections.OrderedDict().values()) tuple_iterator: Type[Iterator[Any]] = type(iter(())) tuple_iterator_len = tuple_iterator.__length_hint__ # type: ignore[attr-defined] object_new = object.__new__ def nn_module_new(cls): obj = object_new(cls) torch.nn.Module.__init__(obj) return obj def product(it): return functools.reduce(operator.mul, it, 1) def tuple_iterator_getitem(it, index): _, (obj,), start = it.__reduce__() return obj[start + index] iter_next = next def to_subclass(t, cls): return t.as_subclass(cls) def dict_keys_getitem(d, n): return next(itertools.islice(iter(d), n, n + 1)) def enum_repr(value, local): # enum class can override __str__ method. Use __class__ and name attribute # to extract the class name and key name. name = value.__class__.__name__ val = value.name scope = "L" if local else "G" local_name = f'{scope}["{name}"].{val}' return local_name def set_example_value(node, example_value): # NB: example_value is a bit of a misnomer, because this is always a fake # tensor of some sort. Furthermore, these example values serve as the # runtime state of Dynamo tracing, which means if metadata mutation # occurs, the example_value gets directly updated (so you can't rely on # this to accurately reflect what the state of the value was at the time # the program was traced). 
node.meta["example_value"] = example_value shape_env = TracingContext.get().fake_mode.shape_env if symbol_to_path := torch.fx.experimental.symbolic_shapes.compute_unbacked_bindings( shape_env, example_value ): node.meta["unbacked_bindings"] = symbol_to_path def _get_fake_tensor(vt): fake_tensor = vt.as_proxy().node.meta.get("example_value") if not is_fake(fake_tensor): from .exc import unimplemented unimplemented("Cannot check Tensor object identity without its fake value") return fake_tensor def iter_contains(items, search, tx, check_tensor_identity=False): from .variables import ( BuiltinVariable, ConstantVariable, TensorVariable, VariableTracker, ) if search.is_python_constant(): found_const = any( x.is_python_constant() and x.as_python_constant() == search.as_python_constant() for x in items ) return ConstantVariable.create(found_const) must_check_tensor_id = False if check_tensor_identity and isinstance(search, TensorVariable): must_check_tensor_id = True # Match of Tensor means match of FakeTensor search = _get_fake_tensor(search) found: Optional[VariableTracker] = None for x in items: if must_check_tensor_id: if isinstance(x, TensorVariable): if search is _get_fake_tensor(x): # Object equivalence return ConstantVariable.create(True) else: check = BuiltinVariable(operator.eq).call_function(tx, [x, search], {}) if found is None: found = check else: found = BuiltinVariable(operator.or_).call_function( tx, [check, found], {} ) if found is None: found = ConstantVariable.create(False) return found def key_is_id(k): """Returns whether it indexes dictionaries using its id""" return isinstance(k, (torch.Tensor, torch.nn.Module, MethodWrapperType)) def key_to_id(value): return [id(k) if key_is_id(k) else k for k in value.keys()] def const_repr(x, *, local) -> str: from .trace_rules import is_builtin_callable if isinstance(x, (list, tuple)): elems_repr = ",".join(const_repr(s, local=local) for s in x) if isinstance(x, list): return f"[{elems_repr}]" else: assert isinstance(x, tuple) if len(x) == 1: return f"({elems_repr},)" else: return f"({elems_repr})" elif isinstance(x, enum.Enum): # To workaround repr(Enum) returning invalid global reference before python 3.11 # by calling enum_repr and removing quotes to render enum in guard code. return enum_repr(x, local=local).replace("'", "") elif is_builtin_callable(x): return x.__name__ elif isinstance(x, type): def fullname(o): klass = o.__class__ module = klass.__module__ if module == "builtins": return klass.__qualname__ # avoid outputs like 'builtins.str' return module + "." + klass.__qualname__ return fullname(x) else: return f"{x!r}" def dict_keys_repr(const_keys, *, local) -> str: keys_str = ",".join(const_repr(s, local=local) for s in const_keys) return "[" + keys_str + "]" GLOBAL_KEY_PREFIX = "__dict_key" from torch._subclasses import UnsupportedFakeTensorException # noqa: F401 def get_safe_global_name(tx, root, obj): # The global_mangled_class_name should be different for different # invocations of torch.compile. Otherwise, we can run into a situation # where multiple torch.compile invocations re-use the same global name, # but the global's lifetime is tied to the first invocation (and # may be deleted when the first torch.compile invocation is deleted) # We mangle it based off of the output_graph's id. return f"{root}_{id(obj)}_c{tx.output.compile_id}" def wrap_fake_exception(fn): try: return fn() except UnsupportedFakeTensorException as e: from .exc import unimplemented msg = f"Unsupported: {e.reason} with fake tensor propagation." 
log.warning(msg) unimplemented(msg, from_exc=e) def deepcopy_to_fake_tensor(obj, fake_mode): with torch._subclasses.fake_tensor.FakeCopyMode(fake_mode): return wrap_fake_exception(lambda: copy.deepcopy(obj)) def rmse(ref, res): """ Calculate root mean squared error """ return torch.sqrt(torch.mean(torch.square(ref - res))) def same( ref, res, fp64_ref=None, cos_similarity=False, tol=1e-4, equal_nan=False, exact_dtype=True, relax_numpy_equality=False, ignore_non_fp=False, log_error=log.error, use_larger_multiplier_for_smaller_tensor=False, ): """Check correctness to see if ref and res match""" if fp64_ref is None: fp64_ref = ref if isinstance(ref, (list, tuple, torch.nn.ParameterList, torch.Size)): assert isinstance(res, (list, tuple)), f"type mismatch {type(ref)} {type(res)}" if len(ref) != len(res): log_error("Length mismatch") return False return len(ref) == len(res) and all( same( ai, bi, fp64_refi, cos_similarity, tol, equal_nan, exact_dtype, relax_numpy_equality, ignore_non_fp, log_error=log_error, use_larger_multiplier_for_smaller_tensor=use_larger_multiplier_for_smaller_tensor, ) for ai, bi, fp64_refi in zip(ref, res, fp64_ref) ) elif type(ref).__name__ == "QuestionAnsweringModelOutput": # This skips checking accuracy for start_logits/end_logits. # Tentatively, start_logits/end_logits appear to be very prone to # inaccuracies and is somewhat subsumed by checking the loss. return same( ref.loss, res.loss, fp64_ref.loss, cos_similarity, tol, equal_nan, exact_dtype, relax_numpy_equality, ignore_non_fp, log_error=log_error, use_larger_multiplier_for_smaller_tensor=use_larger_multiplier_for_smaller_tensor, ) elif isinstance(ref, dict): assert isinstance(res, dict) assert set(ref.keys()) == set( res.keys() ), f"keys mismatch {set(ref.keys())} == {set(res.keys())}" for k in sorted(ref.keys()): if not ( same( ref[k], res[k], fp64_ref[k], cos_similarity=cos_similarity, tol=tol, equal_nan=equal_nan, exact_dtype=exact_dtype, relax_numpy_equality=relax_numpy_equality, ignore_non_fp=ignore_non_fp, log_error=log_error, use_larger_multiplier_for_smaller_tensor=use_larger_multiplier_for_smaller_tensor, ) ): log_error("Accuracy failed for key name %s", k) return False return True elif isinstance(ref, set): assert isinstance(res, set) assert set(ref) == set(res), f"elements mismatch {set(ref)} == {set(res)}" return True elif isinstance(ref, (torch.Tensor, float)): assert not isinstance(ref, torch._subclasses.FakeTensor) assert not isinstance(res, torch._subclasses.FakeTensor) def to_tensor(t): return t if isinstance(t, torch.Tensor) else torch.tensor(t) ref, res, fp64_ref = (to_tensor(val) for val in (ref, res, fp64_ref)) if ref.is_sparse: assert res.is_sparse ref = ref.to_dense() res = res.to_dense() assert isinstance(res, torch.Tensor), f"type mismatch {type(ref)} {type(res)}" if exact_dtype: if ref.dtype != res.dtype: log_error("dtype mismatch %s, %s", ref.dtype, res.dtype) return False if ref.dtype == torch.bool: if ignore_non_fp: return True # triton stores bool as int8, so add this for more accurate checking r = torch.allclose( ref.to(dtype=torch.uint8), res.to(dtype=torch.uint8), atol=tol, rtol=tol, equal_nan=equal_nan, ) if not r: log_error("Accuracy failed: uint8 tensor did not match") return r if cos_similarity: ref = ref.flatten().to(torch.float32) res = res.flatten().to(torch.float32) if torch.allclose(ref, res, atol=tol, rtol=tol, equal_nan=True): # early exit that handles zero/nan better # cosine_similarity(zeros(10), zeros(10), dim=0) is 0 return True score = 
torch.nn.functional.cosine_similarity(ref, res, dim=0, eps=1e-6)
            if score < 0.99:
                log.warning("Similarity score=%s", score.cpu().detach().item())
            return score >= 0.99
        else:
            if not exact_dtype:
                ref = ref.to(res.dtype)

            # First try usual allclose
            if torch.allclose(ref, res, atol=tol, rtol=tol, equal_nan=equal_nan):
                return True

            # Check error from fp64 version
            if fp64_ref.dtype == torch.float64:
                ref_error = rmse(fp64_ref, ref).item()
                # ref unable to produce this with stable numerics in this precision, ignore
                if math.isnan(ref_error):
                    log.warning(
                        "Found nan in reference. Consider running in higher precision."
                    )

                res_error = rmse(fp64_ref, res).item()

                # In the case of using AMP (Automatic Mixed Precision), certain models have
                # failed the benchmark's correctness check. However, the end-to-end model's
                # accuracy when comparing AMP with FP32 is within a difference of less than 0.1%.
                # Thus, it's possible that the correctness check failures for these models are
                # false alarms. We use a multiplier of 3 instead of 2 to avoid these false alarms.
                multiplier = 3.0 if res.dtype == torch.bfloat16 else 2.0

                if use_larger_multiplier_for_smaller_tensor and (
                    fp64_ref.numel() <= 10 and tol >= 4 * 1e-2
                ):
                    multiplier = 10.0
                elif use_larger_multiplier_for_smaller_tensor and (
                    fp64_ref.numel() <= 500 and tol >= 4 * 1e-2
                ):
                    multiplier = 5.0
                elif (
                    fp64_ref.numel() < 1000
                    or (ref.ndim == 4 and ref.shape[-1] == ref.shape[-2] == 1)
                    # large tol means a benchmark has been specified as REQUIRE_HIGHER_TOLERANCE
                    or tol >= 2 * 1e-2
                ):
                    # In the presence of noise, noise might dominate our error
                    # metric for smaller tensors.
                    # Similarly, for 1x1 kernels, there seems to be high noise with amp.
                    multiplier = 3.0

                passes_test = res_error <= (multiplier * ref_error + tol / 10.0)
                if (
                    not passes_test
                    and equal_nan
                    and math.isnan(ref_error)
                    and math.isnan(res_error)
                    # Some unit test for the accuracy minifier relies on
                    # returning false in this case.
                    and not inductor_config.cpp.inject_relu_bug_TESTING_ONLY
                ):
                    passes_test = True
                if not passes_test:
                    log_error(
                        "RMSE (res-fp64): %.5f, (ref-fp64): %.5f and shape=%s.
res.dtype: %s, multiplier: %f, tol: %f" ", use_larger_multiplier_for_smaller_tensor: %d", res_error, ref_error, res.size(), res.dtype, multiplier, tol, use_larger_multiplier_for_smaller_tensor, ) return passes_test if ignore_non_fp: return True log_error("Accuracy failed: allclose not within tol=%s", tol) return False elif isinstance(ref, (str, int, type(None), bool, torch.device)): if ignore_non_fp: return True r = ref == res if not r: log_error("Accuracy failed (%s): %s != %s", type(ref), ref, res) return r elif is_numpy_int_type(ref) or is_numpy_float_type(ref): if relax_numpy_equality and not ( is_numpy_int_type(res) or is_numpy_float_type(res) ): ref = ref.item() r = (type(ref) is type(res)) and (ref == res) if not r: log_error("Accuracy failed (numpy): %s != %s", ref, res) return r elif is_numpy_ndarray(ref): return (type(ref) is type(res)) and same( torch.as_tensor(ref), torch.as_tensor(res), fp64_ref, cos_similarity=cos_similarity, tol=tol, equal_nan=equal_nan, exact_dtype=exact_dtype, relax_numpy_equality=relax_numpy_equality, ignore_non_fp=ignore_non_fp, log_error=log_error, use_larger_multiplier_for_smaller_tensor=use_larger_multiplier_for_smaller_tensor, ) elif type(ref).__name__ in ( "MaskedLMOutput", "Seq2SeqLMOutput", "CausalLMOutputWithCrossAttentions", "LongformerMaskedLMOutput", "Instances", "SquashedNormal", "Boxes", "Normal", "TanhTransform", "Foo", "Variable", ): assert type(ref) is type(res) return all( same( getattr(ref, key), getattr(res, key), getattr(fp64_ref, key), cos_similarity=cos_similarity, tol=tol, equal_nan=equal_nan, exact_dtype=exact_dtype, relax_numpy_equality=relax_numpy_equality, ignore_non_fp=ignore_non_fp, log_error=log_error, use_larger_multiplier_for_smaller_tensor=use_larger_multiplier_for_smaller_tensor, ) for key in ref.__dict__.keys() ) else: raise RuntimeError(f"unsupported type: {type(ref).__name__}") def format_func_info(code): short_filename = code.co_filename.split("/")[-1] return f"'{code.co_name}' ({short_filename}:{code.co_firstlineno})" @contextlib.contextmanager def disable_cache_limit(): prior = config.cache_size_limit config.cache_size_limit = sys.maxsize prior_acc_limit = config.accumulated_cache_size_limit config.accumulated_cache_size_limit = sys.maxsize try: yield finally: config.cache_size_limit = prior config.accumulated_cache_size_limit = prior_acc_limit # map from transformed code back to original user code orig_code_map = ExactWeakKeyDictionary() # keep a record of code_obj -> list of guard failure reasons for logging guard_failures: DefaultDict[Any, List[Any]] = collections.defaultdict(list) # Keep a record of graph break reasons for logging graph_break_reasons: List[torch._dynamo.output_graph.GraphCompileReason] = [] # keep record of compiled code, if we are in "error if recompile" # to track code that dynamo has compiled previously seen_code_map = ExactWeakKeyDictionary() class CompileProfiler: """Utility for profiling how and what dynamo would compile. 
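
    A minimal usage sketch (illustrative; ``fn`` and ``inp`` stand in for a real
    callable and example input):

        prof = CompileProfiler()
        opt_fn = torch.compile(fn, backend=prof)
        opt_fn(inp)
        print(prof.report())
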
Can be used for * diagnosing recompilation issues * determining an appropriate compile cache limit * (TODO)confirming which functions got compiled/skipped """ def __init__(self): self.frame_count = 0 self.op_count = 0 self.backend_ctx_ctor = disable_cache_limit def __call__(self, gm: torch.fx.GraphModule, example_inputs): self.frame_count += 1 for node in gm.graph.nodes: if "call" in node.op: self.op_count += 1 return gm.forward # no-op __enter__ and __exit__ to preserve BC def __enter__(self): return self def __exit__(self, typ, val, traceback): pass def get_metrics(self): return {"guard_failures": guard_failures} def report(self): metrics = self.get_metrics() gf = metrics["guard_failures"] def num_recompiles(code): return len(gf[code]) def recompile_reasons(code): return "\n".join([str(x) for x in gf[code]]) summarized_gf = [ [format_func_info(code), num_recompiles(code), recompile_reasons(code)] for code in gf ] def graph_break_report(): if "graph_break" in counters: graph_breaks = counters["graph_break"] return tabulate( [[msg, graph_breaks[msg]] for msg in graph_breaks], headers=["Graph Break Reason", "Count"], ) def recompilation_report(): if len(gf): max_recompiles = max(num_recompiles(code) for code in gf) recomp_table = tabulate( summarized_gf, headers=["Function", "Recompiles", "Recompile Reasons"], ) return recomp_table + textwrap.dedent( f""" Set torch._dynamo.config.cache_size_limit to {max_recompiles} to avoid being cache limited. """ ) report = textwrap.dedent( """ Torchdynamo Profiler Report =========================== Graph Breaks ------------ Graph breaks happen when torchdynamo encounters code it can't safely trace. If you want to find out why breaks are happening, check below for each break reason You may gain additional insight by passing `fullgraph=True` to torch.compile, to stop at the first break. """ ) report += graph_break_report() or "No graph breaks detected." report += textwrap.dedent( """ Recompilation ------------- These subgraphs were recompiled more than once due to guard failures Guard failures indicate some condition assumed to be static by the tracer changed, making it unsafe to reuse the compiled program. 
""" ) report += recompilation_report() or "No recompilation detected.\n" return report # return same dir unless user changes config between calls @functools.lru_cache(None) def _get_debug_dir(root_dir): dir_name = ( "run_" + datetime.datetime.now().strftime("%Y_%m_%d_%H_%M_%S_%f") # use pid to avoid conflicts among ranks + "-pid_" + str(os.getpid()) ) return os.path.join(root_dir, dir_name) def get_debug_dir(): debug_root = config.debug_dir_root return _get_debug_dir(debug_root) def extract_fake_example_value(node, required=True): if "example_value" in node.meta and is_fake(node.meta["example_value"]): return node.meta["example_value"] elif required: from torch._dynamo.exc import unimplemented unimplemented("`FakeTensor` example value was required but not available") else: return None def ensure_graph_fake(e, tx): assert maybe_get_fake_mode(e) is tx.fake_mode return e def get_fake_values_from_nodes(tx, nodes, allow_non_graph_fake): def visit(n: torch.fx.Node): if n.op == "call_function" and "example_value" not in n.meta: # fake tensor validity is checked inside get_fake_value using # ensure_graph_fake return get_fake_value(n, tx, allow_non_graph_fake) out = n.meta["example_value"] if not allow_non_graph_fake and isinstance(out, torch.Tensor): return ensure_graph_fake(out, tx) return out return torch.fx.node.map_arg(nodes, visit) def get_fake_value(node, tx, allow_non_graph_fake=False): """ Run the computation represented by `node` using fake tensors and return the result. allow_non_graph_fake: whether to allow the return result to be: 1. non-fake or 2. fake that is not created by this instance of Dynamo. If `True`, you must be prepared to deal with such return values, ideally by further wrapping them as this graph's fakes. """ from torch.utils._sympy.value_ranges import ValueRangeError from .exc import ( TorchRuntimeError, unimplemented, Unsupported, UserError, UserErrorType, ) op = node.op # FX Node should always return the same fake value if "example_value" in node.meta and is_fake(node.meta["example_value"]): return node.meta["example_value"] args, kwargs = get_fake_values_from_nodes( tx, (node.args, node.kwargs), allow_non_graph_fake ) nnmodule = None if op == "call_method" and len(args) > 0 and isinstance(args[0], torch.nn.Module): # If the first argument is nn.Module, should copy to fake mode. args = (deepcopy_to_fake_tensor(args[0], tx.fake_mode),) + tuple(args[1:]) if op == "call_module": nnmodule = tx.output.nn_modules[node.target] if is_lazy_module(nnmodule) and hasattr(nnmodule, "_initialize_hook"): # In the case of a lazy module, we want to run # the pre-hooks which initialize it. # Afterwards, lazy module deletes its pre-hooks # to avoid treating it as lazy on subsequent recompile. nnmodule._infer_parameters(nnmodule, args) # no matter it's lazy module or not, we should copy to fake mode. 
nnmodule = deepcopy_to_fake_tensor(nnmodule, tx.fake_mode) try: with tx.fake_mode, enable_python_dispatcher(): ret_val = wrap_fake_exception( lambda: run_node(tx.output, node, args, kwargs, nnmodule) ) except Unsupported: raise except RuntimeError as e: cause: BaseException = e if e.__cause__ is not None: cause = e.__cause__ if isinstance( cause, torch._subclasses.fake_tensor.DataDependentOutputException ): unimplemented( f"data dependent operator: {cause.func}; " "to enable, set torch._dynamo.config.capture_scalar_outputs = True" ) elif isinstance( cause, torch._subclasses.fake_tensor.DynamicOutputShapeException ): if not torch._dynamo.config.capture_dynamic_output_shape_ops: unimplemented( f"dynamic shape operator: {cause.func}; " "to enable, set torch._dynamo.config.capture_dynamic_output_shape_ops = True" ) else: unimplemented( f"dynamic shape operator: {cause.func}; " "Operator does not have a meta kernel that supports dynamic output shapes, " "please report an issue to PyTorch" ) elif isinstance( cause, torch._subclasses.fake_tensor.UnsupportedOperatorException ): op = cause.func import_suggestion = "" if isinstance(op, torch._ops.OpOverload): maybe_pystub = torch._C._dispatch_pystub( op._schema.name, op._schema.overload_name ) if maybe_pystub is not None: module, ctx = maybe_pystub import_suggestion = ( f"It's possible that the support was implemented in " f"module `{module}` and you may need to `import {module}`" f"({ctx}), otherwise " ) unimplemented( f"unsupported operator: {cause.func} ({import_suggestion}see " "path_to_url#heading=h.64r4npvq0w0" " for how to fix)" ) elif isinstance( cause, torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode ): raise UserError( # noqa: B904 UserErrorType.CONSTRAINT_VIOLATION, "Tried to use data-dependent value in the subsequent computation. " "This can happen when we encounter unbounded dynamic value that is unknown during tracing time. " "You will need to explicitly give hint to the compiler. Please take a look at " f"torch._check OR torch._check_is_size APIs. {cause}", case_name="constrain_as_size_example", ) elif isinstance(cause, ValueRangeError): raise UserError(UserErrorType.CONSTRAINT_VIOLATION, e.args[0]) from e elif isinstance(cause, TypeError) and "argument" in str(cause): unimplemented(f"TypeError {node.target}: {cause}") raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None if not allow_non_graph_fake: _ = pytree.tree_map_only( torch.Tensor, functools.partial(ensure_graph_fake, tx=tx), ret_val ) return ret_val _current_node = threading.local() def get_current_node(): return getattr(_current_node, "value", None) @contextmanager def set_current_node(node): old = get_current_node() _current_node.value = node try: yield finally: _current_node.value = old def run_node(tracer, node, args, kwargs, nnmodule): """ Runs a given node, with the given args and kwargs. Behavior is dictated by a node's op. run_node is useful for extracting real values out of nodes. See get_real_value for more info on common usage. Note: The tracer arg is only used for 'get_attr' ops Note: The nnmodule arg is only used for 'call_module' ops Nodes that are not call_function, call_method, call_module, or get_attr will raise an AssertionError. 
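
    Note: `placeholder` nodes are also handled, by returning their recorded
    "example_value". Example (illustrative): for a `call_function` node whose
    target is torch.add, this reduces to `torch.add(*args, **kwargs)`.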
""" op = node.op with set_current_node(node): def make_error_message(e): return f"Failed running {op} {node.target}(*{args}, **{kwargs}):\n" + str(e) try: if op == "call_function": return node.target(*args, **kwargs) elif op == "call_method": return getattr(args[0], node.target)(*args[1:], **kwargs) elif op == "call_module": assert nnmodule is not None return nnmodule(*args, **kwargs) elif op == "get_attr": return tracer.output_graph.get_submodule(node.target) elif op == "placeholder": assert "example_value" in node.meta return node.meta["example_value"] except (NotImplementedError, UnsupportedFakeTensorException) as e: # NB: mimic how wrap_fake_exception does it from .exc import unimplemented unimplemented(make_error_message(e), from_exc=e) except Exception as e: raise RuntimeError(make_error_message(e)).with_traceback( e.__traceback__ ) from e raise AssertionError(op) def get_real_value(node, tracer): """ Run the actual computation represented by `node` and return the result. This will execute any dependent nodes in the graph as well. """ from .exc import TorchRuntimeError cache = tracer.real_value_cache if node in cache: return cache[node] op = node.op args, kwargs = torch.fx.node.map_arg( (node.args, node.kwargs), lambda n: get_real_value(n, tracer), ) if op == "placeholder" and "grapharg" in node.meta: return node.meta["grapharg"].example if op == "call_module": nn_module = tracer.output_graph.nn_modules[node.target] if not is_lazy_module(nn_module): nn_module = copy.deepcopy(nn_module) else: # In the case of a lazy module, we want to run # the pre-hooks which initialize it nn_module(*args, **kwargs) else: nn_module = None try: real_value = run_node(tracer, node, args, kwargs, nn_module) cache[node] = real_value except RuntimeError as e: raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None return real_value def assert_no_fake_params_or_buffers(gm): from torch._subclasses.fake_tensor import FakeTensorConfig, is_fake def stack_or_hint(t): if FakeTensorConfig.debug: import traceback return f"FAKE TENSOR CREATION TRACEBACK: \n {traceback.format_list(t._debug_trace)}" else: return "Enable TORCH_FAKE_TENSOR_DEBUG=1 to get creation stack traces on fake tensors." for name, buffer in gm.named_buffers(): assert not is_fake( buffer ), f"Unexpected fake buffer {name} {stack_or_hint(buffer)}" for name, param in gm.named_parameters(): assert not is_fake( param ), f"Unexpected fake param {name} {stack_or_hint(param)}" def fqn(obj: Any): """ Returns the fully qualified name of the object. 
""" return f"{obj.__module__}.{obj.__qualname__}" def ifdynstaticdefault(count1, count2): if torch._dynamo.config.assume_static_by_default: return count1 else: return count2 def import_submodule(mod: types.ModuleType): """ Ensure all the files in a given submodule are imported """ for filename in sorted(os.listdir(os.path.dirname(cast(str, mod.__file__)))): if filename.endswith(".py") and filename[0] != "_": importlib.import_module(f"{mod.__name__}.{filename[:-3]}") def object_has_getattribute(value: Any): try: if isinstance( inspect.getattr_static(type(value), "__getattribute__"), types.FunctionType, ): return True except AttributeError: pass return False def get_custom_getattr(value: Any, ignore_nn_module_getattr: bool = False): try: getattr_fn = inspect.getattr_static(type(value), "__getattr__") except AttributeError: getattr_fn = None if ignore_nn_module_getattr and getattr_fn is torch.nn.Module.__getattr__: # ignore this case of getattr getattr_fn = None return getattr_fn class TensorStaticReason(enum.Enum): PARAMETER = 2 NOT_TENSOR = 4 NN_MODULE_PROPERTY = 5 def tensor_static_reason_to_message(reason: TensorStaticReason): if reason == TensorStaticReason.PARAMETER: return "mark_dynamic on parameter, parameters are always static today." if reason == TensorStaticReason.NOT_TENSOR: return "mark_dynamic on a non tensor, how did this happen?" if reason == TensorStaticReason.NN_MODULE_PROPERTY: return "tensor is static because it is nn module associated." raise AssertionError(f"Illegal reason {reason}") def tensor_always_has_static_shape( tensor: Union[torch.Tensor, Any], is_tensor: bool, guard_source: torch._guards.GuardSource, ) -> Tuple[bool, Optional[TensorStaticReason]]: """ Given a tensor, source, and is_tensor flag, determine if a shape should be static. Args: tensor - the real tensor to evaluate, parameters force a static shape. is_tensor - internal dynamo check, essentially "is_tensor": target_cls is TensorVariable, tensors not in a TensorVariable for whatever reason are forced static. Returns a tuple, where the first element is the bool of whether or not this tensor should have a static shape. The second element is a TensorStaticReason, useful for passing to tensor_static_reason_to_message if needed. 
""" if ( guard_source.is_specialized_nn_module() and config.force_nn_module_property_static_shapes ): return True, TensorStaticReason.NN_MODULE_PROPERTY if type(tensor) is torch.nn.Parameter and config.force_parameter_static_shapes: return True, TensorStaticReason.PARAMETER if not is_tensor: return True, TensorStaticReason.NOT_TENSOR return False, None def lazy_format_graph_tabular(fn_name, gm): def inner(): try: from tabulate import tabulate # TODO: Check that this is installed except ImportError: return ( "Tabulate module missing, please install tabulate to log the graph in tabular format, logging code instead:\n" + str(lazy_format_graph_code(fn_name, gm)) ) node_specs = [ [n.op, n.name, n.target, n.args, n.kwargs] for n in gm.graph.nodes ] graph_str = tabulate( node_specs, headers=["opcode", "name", "target", "args", "kwargs"] ) return _format_graph_code(fn_name, gm.forward.__code__.co_filename, graph_str) return LazyString(inner) def format_bytecode(prefix, name, filename, line_no, code): return f"{prefix} {name} {filename} line {line_no} \n{dis.Bytecode(code).dis()}\n" forward_hook_names = ["_forward_pre_hooks", "_forward_hooks"] backward_hook_names = ["_backward_pre_hooks", "_backward_hooks"] state_dict_hook_names = [ "_state_dict_pre_hooks", "_state_dict_hooks", "_load_state_dict_pre_hooks", "_load_state_dict_post_hooks", ] all_hook_names = forward_hook_names + backward_hook_names + state_dict_hook_names def nn_module_has_global_hooks(): # This is limited to backward hooks for now because NNModuleVariable # supports fwd hooks underneath. return len(torch.nn.modules.module._global_backward_hooks) or len( torch.nn.modules.module._global_backward_pre_hooks ) def nn_module_get_all_hooks( mod, check_forward_hooks=False, check_backward_hooks=False, check_state_dict_hooks=False, ): """ Sometimes its useful to differentiate between types of hooks such as forward/backward/pre hooks executed during module.__call__, and state_dict hooks which are executed separately. """ hook_dicts_to_check = [] check_all_hooks = ( not check_forward_hooks and not check_backward_hooks and not check_state_dict_hooks ) if check_forward_hooks or check_all_hooks: hook_dicts_to_check.extend(forward_hook_names) if check_backward_hooks or check_all_hooks: hook_dicts_to_check.extend(backward_hook_names) if check_state_dict_hooks: hook_dicts_to_check.extend(state_dict_hook_names) all_hooks = [] for hook_dict_name in hook_dicts_to_check: hooks = getattr(mod, hook_dict_name, []) for hook_name in hooks: hook = hooks[hook_name] all_hooks.append(hook) return all_hooks def nnmodule_has_hooks( mod, check_forward_hooks=False, check_backward_hooks=False, check_state_dict_hooks=False, ): """ Helper function to check if a module has any hooks attached to it. """ hooks = nn_module_get_all_hooks( mod, check_forward_hooks=check_forward_hooks, check_backward_hooks=check_backward_hooks, check_state_dict_hooks=check_state_dict_hooks, ) return bool(hooks) def to_numpy_helper(value): """Convert tensor and tnp.ndarray to numpy.ndarray.""" if is_fake(value): return value if isinstance(value, tnp.ndarray): return to_numpy_helper(value.tensor) elif isinstance(value, torch.Tensor): return value.numpy(force=True) elif isinstance(value, (tuple, list)): return type(value)(to_numpy_helper(obj) for obj in value) else: return value def numpy_to_tensor(value): """Convert tnp.ndarray to tensor, leave other types intact. 
If a list/tuple, loop through it to convert.""" assert np is not None if isinstance(value, np.ndarray): return torch.as_tensor(value) if isinstance(value, tnp.ndarray): return value.tensor elif isinstance(value, (tuple, list)): return type(value)(numpy_to_tensor(obj) for obj in value) else: return value class numpy_to_tensor_wrapper: def __init__(self, f): self.f = f self.__name__ = "wrapped_" + self.f.__name__ def __repr__(self): return f"<Wrapped function <original {self.f.__name__}>>" def __call__(self, *args, **kwargs): out = self.f(*args, **kwargs) return numpy_to_tensor(out) def numpy_attr_wrapper(obj, name): if isinstance(obj, tnp.ndarray): out = getattr(obj, name) return numpy_to_tensor(out) elif isinstance(obj, torch.Tensor): out = getattr(tnp.ndarray(obj), name) return numpy_to_tensor(out) class numpy_method_wrapper: """Convert obj from torch.Tensor to tnp.ndarray and call method. Then convert result back to torch.Tensor.""" def __init__(self, method: str): self.method = method self.__name__ = "wrapped_" + self.method def __repr__(self): return f"<Wrapped method <original {self.method}>>" def __call__(self, *args, **kwargs): obj = args[0] if isinstance(obj, torch.Tensor): obj = tnp.ndarray(obj) method_callable = getattr(obj, self.method) out = method_callable(*args[1:], **kwargs) return numpy_to_tensor(out) class numpy_operator_wrapper: """Implements dunder methods for tnp.ndarray via functions from the operator library""" def __init__(self, op: Callable[..., Any]): self.op = op self.__name__ = f"wrapped_{op.__name__}" def __repr__(self): return f"<Wrapped operator <original {self.__name__}>>" def __call__(self, *args, **kwargs): assert not kwargs args = ( tnp.ndarray(arg) if isinstance(arg, torch.Tensor) else arg for arg in args ) out = self.op(*args) return numpy_to_tensor(out) def defake(x): if not isinstance(x, FakeTensor): return x size: torch._prims_common.ShapeType stride: torch._prims_common.StrideType if x._has_symbolic_sizes_strides: size = [] for s in x.size(): if isinstance(s, torch.SymInt): size.append(s.node.shape_env.size_hint(s.node.expr)) else: size.append(s) stride = [] for s in x.stride(): if isinstance(s, torch.SymInt): stride.append(s.node.shape_env.size_hint(s.node.expr)) else: stride.append(s) else: size = x.size() stride = x.stride() y = torch.empty_strided( size, stride, dtype=x.dtype, device=x.device, requires_grad=x.requires_grad, ) y.zero_() return y def is_utils_checkpoint(obj): # Lazy import to avoid circular dependencies import torch.utils.checkpoint return obj is torch.utils.checkpoint.checkpoint def build_checkpoint_variable(**options): import torch._higher_order_ops.wrap as higher_order_ops from .variables.higher_order_ops import TorchHigherOrderOperatorVariable # TODO - This is a temporary situation where we have two versions of # checkpointing implementation. We will converge on one and remove the other. 
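    # Which higher-order op we pick below depends on whether RNG-op
    # functionalization is enabled (torch._functorch.config.functionalize_rng_ops).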
activation_checkpoint_op: torch._ops.HigherOrderOperator = ( higher_order_ops.tag_activation_checkpoint ) if torch._functorch.config.functionalize_rng_ops: activation_checkpoint_op = higher_order_ops.wrap_activation_checkpoint return TorchHigherOrderOperatorVariable.make( activation_checkpoint_op, **options, ) def is_compile_supported(device_type): from .eval_frame import is_dynamo_supported compile_supported = is_dynamo_supported() if device_type == "cpu": pass elif device_type == "cuda" and compile_supported: compile_supported = has_triton() else: compile_supported = False return compile_supported # The following 3.11 source code functions are adapted from # path_to_url # in order to output source code corresponding to bytecode in 3.11+. # We need our own versions since we want to support multiline expressions. def _fix_offset(str: str, offset: int) -> int: """ Convert byte offset `offset` of `str` into character offset. Byte offset is used for 3.11+ instruction column data. Takes things like unicode characters into consideration. Unchanged from CPython implementation. """ as_utf8 = str.encode("utf-8") return len(as_utf8[:offset].decode("utf-8", errors="replace")) @dataclasses.dataclass class _Anchors: # inclusive left_end_lineno: int left_end_offset: int right_start_lineno: int # exclusive right_start_offset: int def _extract_anchors_from_expr(segment: str) -> Optional[_Anchors]: """ Given source code `segment` corresponding to a bytecode instruction, determine: - for binary ops, the location of the binary op - for indexing, the location of the brackets. `segment` is expected to be a valid Python expression """ assert sys.version_info >= (3, 11) import ast try: # Without brackets, `segment` is parsed as a statement. # We expect an expression, so wrap `segment` in # brackets to handle multi-line expressions. tree = ast.parse("(\n" + segment + "\n)") except SyntaxError: return None if len(tree.body) != 1: return None lines = segment.split("\n") # get character index given byte offset def normalize(lineno, offset): return _fix_offset(lines[lineno], offset) # Gets the next valid character index in `lines`, if # the current location is not valid. Handles empty lines. def next_valid_char(lineno, col): while lineno < len(lines) and col >= len(lines[lineno]): col = 0 lineno += 1 assert lineno < len(lines) and col < len(lines[lineno]) return lineno, col # Get the next valid character index in `lines`. def increment(lineno, col): col += 1 lineno, col = next_valid_char(lineno, col) assert lineno < len(lines) and col < len(lines[lineno]) return lineno, col # Get the next valid character at least on the next line def nextline(lineno, col): col = 0 lineno += 1 lineno, col = next_valid_char(lineno, col) assert lineno < len(lines) and col < len(lines[lineno]) return lineno, col statement = tree.body[0] if isinstance(statement, ast.Expr): expr = statement.value if isinstance(expr, ast.BinOp): # ast gives locations for BinOp subexpressions, e.g. # ( left_expr ) + ( right_expr ) # left^^^^^ right^^^^^ # -2 since end_lineno is 1-indexed and because we added an extra # bracket to `segment` when calling ast.parse cur_lineno = cast(int, expr.left.end_lineno) - 2 cur_col = normalize(cur_lineno, expr.left.end_col_offset) cur_lineno, cur_col = next_valid_char(cur_lineno, cur_col) # Heuristic to find the operator character. # The original CPython implementation did not look for ), \, or #, # leading to incorrect anchor location, e.g. 
# (x) + (y) # ~~^~~~~~~ while (ch := lines[cur_lineno][cur_col]).isspace() or ch in ")\\#": if ch in "\\#": cur_lineno, cur_col = nextline(cur_lineno, cur_col) else: cur_lineno, cur_col = increment(cur_lineno, cur_col) # binary op is 1 or 2 characters long, on the same line right_col = cur_col + 1 if ( right_col < len(lines[cur_lineno]) and not (ch := lines[cur_lineno][right_col]).isspace() and ch not in "\\#" ): right_col += 1 # right_col can be invalid since it is exclusive return _Anchors(cur_lineno, cur_col, cur_lineno, right_col) elif isinstance(expr, ast.Subscript): # ast gives locations for value and slice subexpressions, e.g. # ( value_expr ) [ slice_expr ] # value^^^^^ slice^^^^^ # subscript^^^^^^^^^^^^^^^^^^^^ # find left bracket (first '[' after value) left_lineno = cast(int, expr.value.end_lineno) - 2 left_col = normalize(left_lineno, expr.value.end_col_offset) left_lineno, left_col = next_valid_char(left_lineno, left_col) while lines[left_lineno][left_col] != "[": left_lineno, left_col = increment(left_lineno, left_col) # find right bracket (final character of expression) right_lineno = cast(int, expr.end_lineno) - 2 right_col = normalize(right_lineno, expr.end_col_offset) return _Anchors(left_lineno, left_col, right_lineno, right_col) elif isinstance(expr, ast.Call): # ( func_expr ) (args, kwargs) # func^^^^^ # call^^^^^^^^^^^^^^^^^^^^^^^^ # find left bracket (first '(' after func) left_lineno = cast(int, expr.func.end_lineno) - 2 left_col = normalize(left_lineno, expr.func.end_col_offset) left_lineno, left_col = next_valid_char(left_lineno, left_col) while lines[left_lineno][left_col] != "(": left_lineno, left_col = increment(left_lineno, left_col) # find right bracket (final character of expression) right_lineno = cast(int, expr.end_lineno) - 2 right_col = normalize(right_lineno, expr.end_col_offset) return _Anchors(left_lineno, left_col, right_lineno, right_col) return None def get_instruction_source_311(code: types.CodeType, inst: dis.Instruction) -> str: """ Python 3.11+ only. Returns lines of source code (from code object `code`) corresponding to `inst`'s location data, and underlines relevant code to `inst`. Example: CALL on `g`: f(g( ^^ h(x))) ^^^^^ We need our own implementation since `format_frame_summary` in Python's `traceback` module doesn't handle multi-line expressions (and their anchor extraction code is not completely correct). """ assert inst.positions is not None if inst.positions.lineno is None: return "" # The rstrip + "\n" pattern is used throughout this function to handle # linecache.getline errors. Error lines are treated as empty strings "", but we want # to treat them as blank lines "\n". 
first_line = linecache.getline(code.co_filename, inst.positions.lineno).rstrip() if inst.positions.end_lineno is None: return first_line if inst.positions.col_offset is None or inst.positions.end_col_offset is None: return first_line # character index of the start of the instruction start_offset = _fix_offset(first_line, inst.positions.col_offset) # character index of the end of the instruction # compute later since end may be a different line end_offset = None # expression corresponding to the instruction so we can get anchors segment = "" # underline markers to be printed - start with `~` marker and replace with `^` later markers = [] # Compute segment and initial markers if inst.positions.end_lineno == inst.positions.lineno: end_offset = _fix_offset(first_line, inst.positions.end_col_offset) segment = first_line[start_offset:end_offset] markers.append(" " * start_offset + "~" * (end_offset - start_offset)) else: segment = first_line[start_offset:] + "\n" markers.append(" " * start_offset + "~" * (len(first_line) - start_offset)) last_line = linecache.getline( code.co_filename, inst.positions.end_lineno ).rstrip() end_offset = _fix_offset(last_line, inst.positions.end_col_offset) for lineno in range(inst.positions.lineno + 1, inst.positions.end_lineno): line = linecache.getline(code.co_filename, lineno).rstrip() segment += line + "\n" # don't underline leading spaces num_spaces = len(line) - len(line.lstrip()) markers.append(" " * num_spaces + "~" * (len(line) - num_spaces)) segment += last_line[:end_offset] num_spaces = len(last_line) - len(last_line.lstrip()) markers.append(" " * num_spaces + "~" * (end_offset - num_spaces)) anchors: Optional[_Anchors] = None try: anchors = _extract_anchors_from_expr(segment) except AssertionError: pass # replace `~` markers with `^` where necessary if anchors is None: markers = [marker.replace("~", "^") for marker in markers] else: # make markers mutable mutable_markers: List[List[str]] = [list(marker) for marker in markers] # anchor positions do not take start_offset into account if anchors.left_end_lineno == 0: anchors.left_end_offset += start_offset if anchors.right_start_lineno == 0: anchors.right_start_offset += start_offset # Turn `~`` markers between anchors to `^` for lineno in range(len(markers)): for col in range(len(mutable_markers[lineno])): if lineno < anchors.left_end_lineno: continue if lineno == anchors.left_end_lineno and col < anchors.left_end_offset: continue if ( lineno == anchors.right_start_lineno and col >= anchors.right_start_offset ): continue if lineno > anchors.right_start_lineno: continue if mutable_markers[lineno][col] == "~": mutable_markers[lineno][col] = "^" # make markers into strings again markers = ["".join(marker) for marker in mutable_markers] result = "" for i in range(len(markers)): result += ( linecache.getline(code.co_filename, inst.positions.lineno + i).rstrip() + "\n" ) result += markers[i] + "\n" return result def get_static_address_type(t): if isinstance(t, torch.Tensor): return getattr(t, "_dynamo_static_input_type", None) return None def is_rng_state_getter_or_setter(value): getters = ( # The following two functions are not identical, so don't remove anyone! 
torch._C.Generator.get_state, torch.default_generator.get_state, torch.get_rng_state, torch.cuda.get_rng_state, ) setters = ( torch._C.Generator.set_state, torch.default_generator.set_state, torch.set_rng_state, torch.cuda.set_rng_state, ) return value in (*setters, *getters) def is_tensor_base_attr_getter(value): return ( isinstance(value, types.MethodWrapperType) and value.__name__ == "__get__" and value.__self__.__objclass__ is torch._C._TensorBase # type: ignore[attr-defined] ) def is_torch_function_object(value): return hasattr(value, "__torch_function__") def has_torch_function(vt: torch._dynamo.variables.base.VariableTracker) -> bool: from torch._dynamo.variables import LazyVariableTracker, UserDefinedObjectVariable from torch._dynamo.variables.torch_function import TensorWithTFOverrideVariable if isinstance(vt, TensorWithTFOverrideVariable): return True if isinstance(vt, LazyVariableTracker): LazyVariableTracker.realize(vt) return isinstance(vt, UserDefinedObjectVariable) and hasattr( vt.value, "__torch_function__" ) # see note [Tensor Fakification and Symbol Caching] def to_fake_tensor(t, fake_mode): symbolic_context = None source = None if tracing_context := torch._guards.TracingContext.try_get(): if t in tracing_context.tensor_to_context: symbolic_context = tracing_context.tensor_to_context[t] source = symbolic_context.tensor_source return fake_mode.from_tensor( t, static_shapes=False, symbolic_context=symbolic_context, source=source ) def get_first_attr(obj, *attrs): """ Return the first available attribute or throw an exception if none is present. """ for attr in attrs: if hasattr(obj, attr): return getattr(obj, attr) raise AssertionError(f"{obj} does not has any of the attributes: {attrs}") @contextlib.contextmanager def maybe_enable_compiled_autograd(should_enable, fullgraph=True, dynamic=True): if not should_enable: yield else: def compiler_fn(gm): def inner_compiler(gm_, example_inputs_): torch._dynamo.utils.counters["compiled_autograd"]["compiles"] += 1 return torch._inductor.compile(gm_, example_inputs_) return torch.compile( gm, backend=inner_compiler, fullgraph=fullgraph, dynamic=dynamic ) with torch._dynamo.compiled_autograd.enable(compiler_fn) as ctx: yield ctx def invalid_removeable_handle(): # need a subclass so weakref works class Invalid(dict): # type: ignore[type-arg] pass return RemovableHandle(Invalid()) # Returns a "proxy" (new object with the same class and dict) for (non-GraphModule) nn.Module's. # Attribute changes to the original object/proxy will be reflected in the other. # This is useful for cases where we want a keep-alive reference to a module without increasing # its reference count. def nn_module_proxy(mod): if not isinstance(mod, torch.nn.Module): return mod if isinstance(mod, torch.fx.GraphModule): # Dynamo-generated GM's shouldn't contain user-created GM's return mod proxy = mod.__class__.__new__(mod.__class__) proxy.__dict__ = mod.__dict__ return proxy class GmWrapper(torch.nn.Module): def __init__(self, gm, unflatten_fn): super().__init__() self.gm = gm self.unflatten_fn = unflatten_fn def forward(self, *args): args: List[Any] = list(args) return self.gm(*self.unflatten_fn(args)) def flatten_graph_inputs(gm: torch.fx.GraphModule, inputs, compile_gm): """ Mutate inputs so that they are flat and wrap gm such that it accepts those inputs. This is needed for graphs that take bumpy inputs. 
""" inputs_idx_to_clear = [ i for i, node in enumerate(gm.graph.nodes) if node.op == "placeholder" and node.meta.get("steal_arg", False) ] if torch._dynamo.compiled_autograd.in_compiled_autograd_region: # fast path, avoid pytree overhead # compiled autograd inputs are always a list of tensors, maybe followed by symints assert inputs_idx_to_clear == [0] assert isinstance(inputs[0], list) boxed_inputs_count = len(inputs[0]) def flatten_fn(args): return args[0] + list(args[1:]) def unflatten_fn(flat_args): return (flat_args[:boxed_inputs_count], *flat_args[boxed_inputs_count:]) compiled_fn = compile_gm(GmWrapper(gm, unflatten_fn), flatten_fn(inputs)) else: # slow path, don't know inputs structure flat_inputs, spec = pytree.tree_flatten(inputs) unflatten_fn = functools.partial(pytree.tree_unflatten, treespec=spec) compiled_fn = compile_gm(GmWrapper(gm, unflatten_fn), flat_inputs) # note this doesn't check the spec, assuming it is the same flatten_fn = pytree.arg_tree_leaves def wrapper(*args): flat_args = flatten_fn(args) # flat_args is a new list, so we need to clear references from the old list for i in inputs_idx_to_clear: args[i].clear() # this call is boxed to avoid increasing refcount until we reach aot_module_simplified forward return compiled_fn(flat_args) return wrapper def get_locals_to_steal(maybe_gm): if not isinstance(maybe_gm, torch.fx.GraphModule) or not hasattr(maybe_gm, "meta"): return [] return maybe_gm.meta.get("locals_to_steal", []) def set_locals_to_steal(gm, locals_to_steal): gm.meta["locals_to_steal"] = locals_to_steal class Lit: def __init__(self, s): self.s = s def __repr__(self): return self.s warn_once_cache: Set[str] = set() def warn_once(msg, stacklevel=1): # Dynamo causes all warnings.warn (in user code and in Dynamo code) to print all the time. # path_to_url # warn_once is a workaround: if the msg has been warned on before, then we will not # warn again. # NB: it's totally ok to store a cache of all the strings: this is what warnings.warn does as well. 
if msg in warn_once_cache: return warn_once_cache.add(msg) warnings.warn(msg, stacklevel=stacklevel + 1) def strip_color_from_string(text): # This regular expression matches ANSI escape codes ansi_escape = re.compile(r"\x1B[@-_][0-?]*[ -/]*[@-~]") return ansi_escape.sub("", text) @contextlib.contextmanager def _disable_saved_tensors_hooks_during_tracing(): # See NOTE: [Deferring tensor pack/unpack hooks until runtime] try: prior = torch._C._autograd._saved_tensors_hooks_set_tracing(True) yield finally: torch._C._autograd._saved_tensors_hooks_set_tracing(prior) def is_parameter_freezing(): return torch._inductor.config.freezing and not torch.is_grad_enabled() def get_torch_function_mode_stack(filter_ignored=True): from .variables.torch_function import IGNORED_MODES stack = [_get_function_stack_at(i) for i in range(_len_torch_function_stack())] if filter_ignored: stack = [mode for mode in stack if type(mode) not in IGNORED_MODES] return stack def get_torch_function_mode_stack_at(ind): assert ind < _len_torch_function_stack() and ind >= 0 return torch._C._get_function_stack_at(ind) def set_torch_function_mode_stack(stack): for i in range(_len_torch_function_stack()): _pop_torch_function_stack() for mode in stack: _push_on_torch_function_stack(mode) def verify_guard_fn_signature(value): fn = value.__metadata_guard__ sig = inspect.signature(fn) if len(sig.parameters) != 2: from .exc import InternalTorchDynamoError raise InternalTorchDynamoError( "Tensor subclass method __metadata_guard__ must take exactly two subclass metadata arguments" ) if fn.__self__ != value.__class__: from .exc import InternalTorchDynamoError raise InternalTorchDynamoError( "Tensor subclass method __metadata_guard__ must be a classmethod" ) def does_not_override_dict_iter_methods(user_cls): return ( user_cls.items in (dict.items, collections.OrderedDict.items) and user_cls.values in (dict.values, collections.OrderedDict.values) and user_cls.keys in (dict.keys, collections.OrderedDict.keys) and user_cls.__iter__ in (dict.__iter__, collections.OrderedDict.__iter__) ) # Helper function to extract relevant parts of a tensor's __dict__ to store in node meta. # To avoid ref cycles, it's important that no tensors are present here, so leave those out. def _extract_tensor_dict(t): KEYS_TO_COPY = [ "_dynamo_static_input_type", "tag", ] tensor_dict = { key: copy.copy(t.__dict__[key]) for key in KEYS_TO_COPY if key in t.__dict__ } return tensor_dict # This is useful for reconstructing within the Dynamo graph the non-graph-input objects # whose lifetime is governed by the user. # e.g. torch.cuda.Event is a prime example. user_obj_id_to_weakref: Dict[int, weakref.ReferenceType[object]] = {} def get_user_object_from_id(obj_id): obj = user_obj_id_to_weakref[obj_id]() assert obj is not None, "User object is no longer alive" return obj def store_user_object_weakref(obj): obj_id = id(obj) user_obj_id_to_weakref[obj_id] = weakref.ref(obj) ```
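The 3.11 location handling above hinges on `_fix_offset`, which converts CPython's byte-based column offsets into character offsets before any underline markers are drawn. A minimal standalone sketch of that conversion (the function name and sample line here are illustrative, not imports from the module above):

```python
def fix_offset(line: str, byte_offset: int) -> int:
    # CPython 3.11 instruction positions store columns as byte offsets
    # into the UTF-8 encoding of the source line; convert to a character
    # index so multi-byte characters don't shift the underline markers.
    as_utf8 = line.encode("utf-8")
    return len(as_utf8[:byte_offset].decode("utf-8", errors="replace"))

line = "x = é + 1"          # 'é' occupies two bytes in UTF-8
print(fix_offset(line, 6))  # byte offset 6 -> character index 5
```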
```c
/* Public domain. */

#ifndef _LINUX_MUTEX_H
#define _LINUX_MUTEX_H

#include <sys/stdint.h>
#include <sys/rwlock.h>
#include <linux/list.h>
#include <linux/spinlock_types.h>
#include <linux/lockdep.h>

#define DEFINE_MUTEX(x)	struct rwlock x = RWLOCK_INITIALIZER(#x)

#define mutex_lock_interruptible_nested(rwl, subc) \
	mutex_lock_interruptible(rwl)
#define mutex_lock(rwl)			rw_enter_write(rwl)
#define mutex_lock_nest_lock(rwl, sub)	rw_enter_write(rwl)
#define mutex_lock_nested(rwl, sub)	rw_enter_write(rwl)
#define mutex_trylock(rwl)		(rw_enter(rwl, RW_WRITE | RW_NOSLEEP) == 0)
#define mutex_unlock(rwl)		rw_exit_write(rwl)
#define mutex_is_locked(rwl)		(rw_status(rwl) != 0)
#define mutex_destroy(rwl)

static inline int
mutex_lock_interruptible(struct rwlock *rwl)
{
	if (rw_enter(rwl, RW_WRITE | RW_INTR) != 0)
		return -EINTR;
	return 0;
}

enum mutex_trylock_recursive_result {
	MUTEX_TRYLOCK_FAILED,
	MUTEX_TRYLOCK_SUCCESS,
	MUTEX_TRYLOCK_RECURSIVE
};

static inline enum mutex_trylock_recursive_result
mutex_trylock_recursive(struct rwlock *rwl)
{
	if (rw_status(rwl) == RW_WRITE)
		return MUTEX_TRYLOCK_RECURSIVE;
	if (mutex_trylock(rwl))
		return MUTEX_TRYLOCK_SUCCESS;
	return MUTEX_TRYLOCK_FAILED;
}

int atomic_dec_and_mutex_lock(volatile int *, struct rwlock *);

#endif
```
Köklüce is a village in the Palu District of Elazığ Province in Turkey. Its population is 199 (2021).

References

Villages in Palu District
Qu Ding (ca. 1023–ca. 1056) (Chinese: 屈鼎) was a Chinese painter of the Song Dynasty. He learned the art of painting from Yan Wengui, a master artist of the period. His landscape paintings present panoramic views of mountains and rivers. His Summer Mountains, now held at the Metropolitan Museum of Art, is perhaps his only surviving work. It bears the seal of Emperor Huizong of Song, a noted patron of the arts and himself an artist, which may imply that Qu Ding served as a painter at Huizong's court.

See also
Qu (surname 屈)

References

Painters from Henan
Song dynasty landscape painters
Year of death unknown
People from Kaifeng
Year of birth unknown
Year of birth uncertain
1020s births
1050s deaths
```assembly ;****************************************************************************** ;* SSE-optimized functions for the DCA decoder ;* ;* This file is part of FFmpeg. ;* ;* FFmpeg is free software; you can redistribute it and/or ;* modify it under the terms of the GNU Lesser General Public ;* ;* FFmpeg is distributed in the hope that it will be useful, ;* but WITHOUT ANY WARRANTY; without even the implied warranty of ;* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU ;* ;* You should have received a copy of the GNU Lesser General Public ;* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA ;****************************************************************************** %include "libavutil/x86/x86util.asm" SECTION .text %macro SETZERO 1 %if cpuflag(sse2) && notcpuflag(avx) pxor %1, %1 %else xorps %1, %1, %1 %endif %endmacro %macro SHUF 3 %if cpuflag(avx) mova %3, [%2 - 16] vperm2f128 %1, %3, %3, 1 vshufps %1, %1, %1, q0123 %elif cpuflag(sse2) pshufd %1, [%2], q0123 %else mova %1, [%2] shufps %1, %1, q0123 %endif %endmacro %macro INNER_LOOP 1 ; reading backwards: ptr1 = synth_buf + j + i; ptr2 = synth_buf + j - i ;~ a += window[i + j] * (-synth_buf[15 - i + j]) ;~ b += window[i + j + 16] * (synth_buf[i + j]) SHUF m5, ptr2 + j + (15 - 3) * 4, m6 mova m6, [ptr1 + j] %if ARCH_X86_64 SHUF m11, ptr2 + j + (15 - 3) * 4 - mmsize, m12 mova m12, [ptr1 + j + mmsize] %endif %if cpuflag(fma3) fmaddps m2, m6, [win + %1 + j + 16 * 4], m2 fnmaddps m1, m5, [win + %1 + j], m1 %if ARCH_X86_64 fmaddps m8, m12, [win + %1 + j + mmsize + 16 * 4], m8 fnmaddps m7, m11, [win + %1 + j + mmsize], m7 %endif %else ; non-FMA mulps m6, m6, [win + %1 + j + 16 * 4] mulps m5, m5, [win + %1 + j] %if ARCH_X86_64 mulps m12, m12, [win + %1 + j + mmsize + 16 * 4] mulps m11, m11, [win + %1 + j + mmsize] %endif addps m2, m2, m6 subps m1, m1, m5 %if ARCH_X86_64 addps m8, m8, m12 subps m7, m7, m11 %endif %endif ; cpuflag(fma3) ;~ c += window[i + j + 32] * (synth_buf[16 + i + j]) ;~ d += window[i + j + 48] * (synth_buf[31 - i + j]) SHUF m6, ptr2 + j + (31 - 3) * 4, m5 mova m5, [ptr1 + j + 16 * 4] %if ARCH_X86_64 SHUF m12, ptr2 + j + (31 - 3) * 4 - mmsize, m11 mova m11, [ptr1 + j + mmsize + 16 * 4] %endif %if cpuflag(fma3) fmaddps m3, m5, [win + %1 + j + 32 * 4], m3 fmaddps m4, m6, [win + %1 + j + 48 * 4], m4 %if ARCH_X86_64 fmaddps m9, m11, [win + %1 + j + mmsize + 32 * 4], m9 fmaddps m10, m12, [win + %1 + j + mmsize + 48 * 4], m10 %endif %else ; non-FMA mulps m5, m5, [win + %1 + j + 32 * 4] mulps m6, m6, [win + %1 + j + 48 * 4] %if ARCH_X86_64 mulps m11, m11, [win + %1 + j + mmsize + 32 * 4] mulps m12, m12, [win + %1 + j + mmsize + 48 * 4] %endif addps m3, m3, m5 addps m4, m4, m6 %if ARCH_X86_64 addps m9, m9, m11 addps m10, m10, m12 %endif %endif ; cpuflag(fma3) sub j, 64 * 4 %endmacro ; void ff_synth_filter_inner_<opt>(float *synth_buf, float synth_buf2[32], ; const float window[512], float out[32], ; intptr_t offset, float scale) %macro SYNTH_FILTER 0 cglobal synth_filter_inner, 0, 6 + 4 * ARCH_X86_64, 7 + 6 * ARCH_X86_64, \ synth_buf, synth_buf2, window, out, off, scale %define scale m0 %if ARCH_X86_32 || WIN64 %if cpuflag(sse2) && notcpuflag(avx) movd scale, scalem SPLATD m0 %else VBROADCASTSS m0, scalem %endif ; Make sure offset is in a register and not on the stack %define OFFQ r4q %else SPLATD xmm0 %if cpuflag(avx) vinsertf128 m0, m0, xmm0, 1 %endif %define OFFQ offq %endif ; prepare inner counter limit 1 mov r5q, 480 sub r5q, offmp and r5q, -64 shl r5q, 2 %if ARCH_X86_32 || 
notcpuflag(avx) mov OFFQ, r5q %define i r5q mov i, 16 * 4 - (ARCH_X86_64 + 1) * mmsize ; main loop counter %else %define i 0 %define OFFQ r5q %endif %define buf2 synth_buf2q %if ARCH_X86_32 mov buf2, synth_buf2mp %endif .mainloop: ; m1 = a m2 = b m3 = c m4 = d SETZERO m3 SETZERO m4 mova m1, [buf2 + i] mova m2, [buf2 + i + 16 * 4] %if ARCH_X86_32 %define ptr1 r0q %define ptr2 r1q %define win r2q %define j r3q mov win, windowm mov ptr1, synth_bufm %if ARCH_X86_32 || notcpuflag(avx) add win, i add ptr1, i %endif %else ; ARCH_X86_64 %define ptr1 r6q %define ptr2 r7q ; must be loaded %define win r8q %define j r9q SETZERO m9 SETZERO m10 mova m7, [buf2 + i + mmsize] mova m8, [buf2 + i + mmsize + 16 * 4] lea win, [windowq + i] lea ptr1, [synth_bufq + i] %endif mov ptr2, synth_bufmp ; prepare the inner loop counter mov j, OFFQ %if ARCH_X86_32 || notcpuflag(avx) sub ptr2, i %endif .loop1: INNER_LOOP 0 jge .loop1 mov j, 448 * 4 sub j, OFFQ jz .end sub ptr1, j sub ptr2, j add win, OFFQ ; now at j-64, so define OFFSET sub j, 64 * 4 .loop2: INNER_LOOP 64 * 4 jge .loop2 .end: %if ARCH_X86_32 mov buf2, synth_buf2m ; needed for next iteration anyway mov outq, outmp ; j, which will be set again during it %endif ;~ out[i] = a * scale; ;~ out[i + 16] = b * scale; mulps m1, m1, scale mulps m2, m2, scale %if ARCH_X86_64 mulps m7, m7, scale mulps m8, m8, scale %endif ;~ synth_buf2[i] = c; ;~ synth_buf2[i + 16] = d; mova [buf2 + i + 0 * 4], m3 mova [buf2 + i + 16 * 4], m4 %if ARCH_X86_64 mova [buf2 + i + 0 * 4 + mmsize], m9 mova [buf2 + i + 16 * 4 + mmsize], m10 %endif ;~ out[i] = a; ;~ out[i + 16] = a; mova [outq + i + 0 * 4], m1 mova [outq + i + 16 * 4], m2 %if ARCH_X86_64 mova [outq + i + 0 * 4 + mmsize], m7 mova [outq + i + 16 * 4 + mmsize], m8 %endif %if ARCH_X86_32 || notcpuflag(avx) sub i, (ARCH_X86_64 + 1) * mmsize jge .mainloop %endif RET %endmacro %if ARCH_X86_32 INIT_XMM sse SYNTH_FILTER %endif INIT_XMM sse2 SYNTH_FILTER INIT_YMM avx SYNTH_FILTER INIT_YMM fma3 SYNTH_FILTER ```
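For reference, the scalar recurrences spelled out in the assembly's comments can be transliterated into plain Python. This is only a sketch: it covers the main accumulation loop, assumes `offset` is a multiple of 32 and that `window` and `synth_buf` each hold 512 floats, and omits the wrap-around second pass the assembly performs after `.loop1`.

```python
def synth_filter_inner_ref(synth_buf, synth_buf2, window, out, offset, scale):
    # Scalar version of the commented recurrences; the loop limit mirrors
    # the '480 - offset, align down to 64' computation in the prologue.
    limit = (480 - offset) & ~63
    for i in range(16):
        a = synth_buf2[i]
        b = synth_buf2[i + 16]
        c = 0.0
        d = 0.0
        for j in range(0, limit + 1, 64):
            a += window[i + j]      * -synth_buf[15 - i + j]
            b += window[i + j + 16] *  synth_buf[i + j]
            c += window[i + j + 32] *  synth_buf[16 + i + j]
            d += window[i + j + 48] *  synth_buf[31 - i + j]
        out[i]             = a * scale
        out[i + 16]        = b * scale
        synth_buf2[i]      = c
        synth_buf2[i + 16] = d
```

With, say, `offset = 32`, the largest index touched is `window[511]`, so the assumed 512-entry buffers keep every access in range.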
```javascript
/* For licensing, see LICENSE.md or path_to_url */
CKEDITOR.plugins.setLang("widget","fr",{move:"Cliquer et glisser pour déplacer",label:"Élément %1"});
```
Daf Sar (, also Romanized as Dāf Sār; also known as Davsar) is a village in Pasikhan Rural District, in the Central District of Rasht County, Gilan Province, Iran. At the 2006 census, its population was 527, in 140 families. References Populated places in Rasht County
```c /* * * Permission is hereby granted, free of charge, to any person obtaining a * copy of this software and associated documentation files (the "Software"), * to deal in the Software without restriction, including without limitation * the rights to use, copy, modify, merge, publish, distribute, sublicense, * and/or sell copies of the Software, and to permit persons to whom the * Software is furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. * * Authors: * Eric Anholt <eric@anholt.net> * Keith Packard <keithp@keithp.com> * */ #include <linux/sched/mm.h> #include <linux/sort.h> #include <linux/string_helpers.h> #include <drm/drm_debugfs.h> #include "gem/i915_gem_context.h" #include "gt/intel_gt.h" #include "gt/intel_gt_buffer_pool.h" #include "gt/intel_gt_clock_utils.h" #include "gt/intel_gt_debugfs.h" #include "gt/intel_gt_pm.h" #include "gt/intel_gt_pm_debugfs.h" #include "gt/intel_gt_regs.h" #include "gt/intel_gt_requests.h" #include "gt/intel_rc6.h" #include "gt/intel_reset.h" #include "gt/intel_rps.h" #include "gt/intel_sseu_debugfs.h" #include "i915_debugfs.h" #include "i915_debugfs_params.h" #include "i915_driver.h" #include "i915_irq.h" #include "i915_reg.h" #include "i915_scheduler.h" #include "intel_mchbar_regs.h" static inline struct drm_i915_private *node_to_i915(struct drm_info_node *node) { return to_i915(node->minor->dev); } static int i915_capabilities(struct seq_file *m, void *data) { struct drm_i915_private *i915 = node_to_i915(m->private); struct drm_printer p = drm_seq_file_printer(m); seq_printf(m, "pch: %d\n", INTEL_PCH_TYPE(i915)); intel_device_info_print(INTEL_INFO(i915), RUNTIME_INFO(i915), &p); intel_display_device_info_print(DISPLAY_INFO(i915), DISPLAY_RUNTIME_INFO(i915), &p); i915_print_iommu_status(i915, &p); intel_gt_info_print(&to_gt(i915)->info, &p); intel_driver_caps_print(&i915->caps, &p); kernel_param_lock(THIS_MODULE); i915_params_dump(&i915->params, &p); kernel_param_unlock(THIS_MODULE); return 0; } static char get_tiling_flag(struct drm_i915_gem_object *obj) { switch (i915_gem_object_get_tiling(obj)) { default: case I915_TILING_NONE: return ' '; case I915_TILING_X: return 'X'; case I915_TILING_Y: return 'Y'; } } static char get_global_flag(struct drm_i915_gem_object *obj) { return READ_ONCE(obj->userfault_count) ? 'g' : ' '; } static char get_pin_mapped_flag(struct drm_i915_gem_object *obj) { return obj->mm.mapping ? 
'M' : ' '; } static const char * stringify_page_sizes(unsigned int page_sizes, char *buf, size_t len) { size_t x = 0; switch (page_sizes) { case 0: return ""; case I915_GTT_PAGE_SIZE_4K: return "4K"; case I915_GTT_PAGE_SIZE_64K: return "64K"; case I915_GTT_PAGE_SIZE_2M: return "2M"; default: if (!buf) return "M"; if (page_sizes & I915_GTT_PAGE_SIZE_2M) x += snprintf(buf + x, len - x, "2M, "); if (page_sizes & I915_GTT_PAGE_SIZE_64K) x += snprintf(buf + x, len - x, "64K, "); if (page_sizes & I915_GTT_PAGE_SIZE_4K) x += snprintf(buf + x, len - x, "4K, "); buf[x-2] = '\0'; return buf; } } static const char *stringify_vma_type(const struct i915_vma *vma) { if (i915_vma_is_ggtt(vma)) return "ggtt"; if (i915_vma_is_dpt(vma)) return "dpt"; return "ppgtt"; } static const char *i915_cache_level_str(struct drm_i915_gem_object *obj) { struct drm_i915_private *i915 = obj_to_i915(obj); if (IS_GFX_GT_IP_RANGE(to_gt(i915), IP_VER(12, 70), IP_VER(12, 71))) { switch (obj->pat_index) { case 0: return " WB"; case 1: return " WT"; case 2: return " UC"; case 3: return " WB (1-Way Coh)"; case 4: return " WB (2-Way Coh)"; default: return " not defined"; } } else if (IS_PONTEVECCHIO(i915)) { switch (obj->pat_index) { case 0: return " UC"; case 1: return " WC"; case 2: return " WT"; case 3: return " WB"; case 4: return " WT (CLOS1)"; case 5: return " WB (CLOS1)"; case 6: return " WT (CLOS2)"; case 7: return " WT (CLOS2)"; default: return " not defined"; } } else if (GRAPHICS_VER(i915) >= 12) { switch (obj->pat_index) { case 0: return " WB"; case 1: return " WC"; case 2: return " WT"; case 3: return " UC"; default: return " not defined"; } } else { switch (obj->pat_index) { case 0: return " UC"; case 1: return HAS_LLC(i915) ? " LLC" : " snooped"; case 2: return " L3+LLC"; case 3: return " WT"; default: return " not defined"; } } } void i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj) { struct i915_vma *vma; int pin_count = 0; seq_printf(m, "%pK: %c%c%c %8zdKiB %02x %02x %s%s%s", &obj->base, get_tiling_flag(obj), get_global_flag(obj), get_pin_mapped_flag(obj), obj->base.size / 1024, obj->read_domains, obj->write_domain, i915_cache_level_str(obj), obj->mm.dirty ? " dirty" : "", obj->mm.madv == I915_MADV_DONTNEED ? 
" purgeable" : ""); if (obj->base.name) seq_printf(m, " (name: %d)", obj->base.name); spin_lock(&obj->vma.lock); list_for_each_entry(vma, &obj->vma.list, obj_link) { if (!drm_mm_node_allocated(&vma->node)) continue; spin_unlock(&obj->vma.lock); if (i915_vma_is_pinned(vma)) pin_count++; seq_printf(m, " (%s offset: %08llx, size: %08llx, pages: %s", stringify_vma_type(vma), i915_vma_offset(vma), i915_vma_size(vma), stringify_page_sizes(vma->resource->page_sizes_gtt, NULL, 0)); if (i915_vma_is_ggtt(vma) || i915_vma_is_dpt(vma)) { switch (vma->gtt_view.type) { case I915_GTT_VIEW_NORMAL: seq_puts(m, ", normal"); break; case I915_GTT_VIEW_PARTIAL: seq_printf(m, ", partial [%08llx+%x]", vma->gtt_view.partial.offset << PAGE_SHIFT, vma->gtt_view.partial.size << PAGE_SHIFT); break; case I915_GTT_VIEW_ROTATED: seq_printf(m, ", rotated [(%ux%u, src_stride=%u, dst_stride=%u, offset=%u), (%ux%u, src_stride=%u, dst_stride=%u, offset=%u)]", vma->gtt_view.rotated.plane[0].width, vma->gtt_view.rotated.plane[0].height, vma->gtt_view.rotated.plane[0].src_stride, vma->gtt_view.rotated.plane[0].dst_stride, vma->gtt_view.rotated.plane[0].offset, vma->gtt_view.rotated.plane[1].width, vma->gtt_view.rotated.plane[1].height, vma->gtt_view.rotated.plane[1].src_stride, vma->gtt_view.rotated.plane[1].dst_stride, vma->gtt_view.rotated.plane[1].offset); break; case I915_GTT_VIEW_REMAPPED: seq_printf(m, ", remapped [(%ux%u, src_stride=%u, dst_stride=%u, offset=%u), (%ux%u, src_stride=%u, dst_stride=%u, offset=%u)]", vma->gtt_view.remapped.plane[0].width, vma->gtt_view.remapped.plane[0].height, vma->gtt_view.remapped.plane[0].src_stride, vma->gtt_view.remapped.plane[0].dst_stride, vma->gtt_view.remapped.plane[0].offset, vma->gtt_view.remapped.plane[1].width, vma->gtt_view.remapped.plane[1].height, vma->gtt_view.remapped.plane[1].src_stride, vma->gtt_view.remapped.plane[1].dst_stride, vma->gtt_view.remapped.plane[1].offset); break; default: MISSING_CASE(vma->gtt_view.type); break; } } if (vma->fence) seq_printf(m, " , fence: %d", vma->fence->id); seq_puts(m, ")"); spin_lock(&obj->vma.lock); } spin_unlock(&obj->vma.lock); seq_printf(m, " (pinned x %d)", pin_count); if (i915_gem_object_is_stolen(obj)) seq_printf(m, " (stolen: %08llx)", obj->stolen->start); if (i915_gem_object_is_framebuffer(obj)) seq_printf(m, " (fb)"); } static int i915_gem_object_info(struct seq_file *m, void *data) { struct drm_i915_private *i915 = node_to_i915(m->private); struct drm_printer p = drm_seq_file_printer(m); struct intel_memory_region *mr; enum intel_region_id id; seq_printf(m, "%u shrinkable [%u free] objects, %llu bytes\n", i915->mm.shrink_count, atomic_read(&i915->mm.free_count), i915->mm.shrink_memory); for_each_memory_region(mr, i915, id) intel_memory_region_debug(mr, &p); return 0; } #if IS_ENABLED(CONFIG_DRM_I915_CAPTURE_ERROR) static ssize_t gpu_state_read(struct file *file, char __user *ubuf, size_t count, loff_t *pos) { struct i915_gpu_coredump *error; ssize_t ret; void *buf; error = file->private_data; if (!error) return 0; /* Bounce buffer required because of kernfs __user API convenience. 
*/ buf = kmalloc(count, GFP_KERNEL); if (!buf) return -ENOMEM; ret = i915_gpu_coredump_copy_to_buffer(error, buf, *pos, count); if (ret <= 0) goto out; if (!copy_to_user(ubuf, buf, ret)) *pos += ret; else ret = -EFAULT; out: kfree(buf); return ret; } static int gpu_state_release(struct inode *inode, struct file *file) { i915_gpu_coredump_put(file->private_data); return 0; } static int i915_gpu_info_open(struct inode *inode, struct file *file) { struct drm_i915_private *i915 = inode->i_private; struct i915_gpu_coredump *gpu; intel_wakeref_t wakeref; gpu = NULL; with_intel_runtime_pm(&i915->runtime_pm, wakeref) gpu = i915_gpu_coredump(to_gt(i915), ALL_ENGINES, CORE_DUMP_FLAG_NONE); if (IS_ERR(gpu)) return PTR_ERR(gpu); file->private_data = gpu; return 0; } static const struct file_operations i915_gpu_info_fops = { .owner = THIS_MODULE, .open = i915_gpu_info_open, .read = gpu_state_read, .llseek = default_llseek, .release = gpu_state_release, }; static ssize_t i915_error_state_write(struct file *filp, const char __user *ubuf, size_t cnt, loff_t *ppos) { struct i915_gpu_coredump *error = filp->private_data; if (!error) return 0; drm_dbg(&error->i915->drm, "Resetting error state\n"); i915_reset_error_state(error->i915); return cnt; } static int i915_error_state_open(struct inode *inode, struct file *file) { struct i915_gpu_coredump *error; error = i915_first_error_state(inode->i_private); if (IS_ERR(error)) return PTR_ERR(error); file->private_data = error; return 0; } static const struct file_operations i915_error_state_fops = { .owner = THIS_MODULE, .open = i915_error_state_open, .read = gpu_state_read, .write = i915_error_state_write, .llseek = default_llseek, .release = gpu_state_release, }; #endif static int i915_frequency_info(struct seq_file *m, void *unused) { struct drm_i915_private *i915 = node_to_i915(m->private); struct intel_gt *gt = to_gt(i915); struct drm_printer p = drm_seq_file_printer(m); intel_gt_pm_frequency_dump(gt, &p); return 0; } static const char *swizzle_string(unsigned swizzle) { switch (swizzle) { case I915_BIT_6_SWIZZLE_NONE: return "none"; case I915_BIT_6_SWIZZLE_9: return "bit9"; case I915_BIT_6_SWIZZLE_9_10: return "bit9/bit10"; case I915_BIT_6_SWIZZLE_9_11: return "bit9/bit11"; case I915_BIT_6_SWIZZLE_9_10_11: return "bit9/bit10/bit11"; case I915_BIT_6_SWIZZLE_9_17: return "bit9/bit17"; case I915_BIT_6_SWIZZLE_9_10_17: return "bit9/bit10/bit17"; case I915_BIT_6_SWIZZLE_UNKNOWN: return "unknown"; } return "bug"; } static int i915_swizzle_info(struct seq_file *m, void *data) { struct drm_i915_private *dev_priv = node_to_i915(m->private); struct intel_uncore *uncore = &dev_priv->uncore; intel_wakeref_t wakeref; seq_printf(m, "bit6 swizzle for X-tiling = %s\n", swizzle_string(to_gt(dev_priv)->ggtt->bit_6_swizzle_x)); seq_printf(m, "bit6 swizzle for Y-tiling = %s\n", swizzle_string(to_gt(dev_priv)->ggtt->bit_6_swizzle_y)); if (dev_priv->gem_quirks & GEM_QUIRK_PIN_SWIZZLED_PAGES) seq_puts(m, "L-shaped memory detected\n"); /* On BDW+, swizzling is not used. 
See detect_bit_6_swizzle() */ if (GRAPHICS_VER(dev_priv) >= 8 || IS_VALLEYVIEW(dev_priv)) return 0; wakeref = intel_runtime_pm_get(&dev_priv->runtime_pm); if (IS_GRAPHICS_VER(dev_priv, 3, 4)) { seq_printf(m, "DDC = 0x%08x\n", intel_uncore_read(uncore, DCC)); seq_printf(m, "DDC2 = 0x%08x\n", intel_uncore_read(uncore, DCC2)); seq_printf(m, "C0DRB3 = 0x%04x\n", intel_uncore_read16(uncore, C0DRB3_BW)); seq_printf(m, "C1DRB3 = 0x%04x\n", intel_uncore_read16(uncore, C1DRB3_BW)); } else if (GRAPHICS_VER(dev_priv) >= 6) { seq_printf(m, "MAD_DIMM_C0 = 0x%08x\n", intel_uncore_read(uncore, MAD_DIMM_C0)); seq_printf(m, "MAD_DIMM_C1 = 0x%08x\n", intel_uncore_read(uncore, MAD_DIMM_C1)); seq_printf(m, "MAD_DIMM_C2 = 0x%08x\n", intel_uncore_read(uncore, MAD_DIMM_C2)); seq_printf(m, "TILECTL = 0x%08x\n", intel_uncore_read(uncore, TILECTL)); if (GRAPHICS_VER(dev_priv) >= 8) seq_printf(m, "GAMTARBMODE = 0x%08x\n", intel_uncore_read(uncore, GAMTARBMODE)); else seq_printf(m, "ARB_MODE = 0x%08x\n", intel_uncore_read(uncore, ARB_MODE)); seq_printf(m, "DISP_ARB_CTL = 0x%08x\n", intel_uncore_read(uncore, DISP_ARB_CTL)); } intel_runtime_pm_put(&dev_priv->runtime_pm, wakeref); return 0; } static int i915_rps_boost_info(struct seq_file *m, void *data) { struct drm_i915_private *dev_priv = node_to_i915(m->private); struct intel_rps *rps = &to_gt(dev_priv)->rps; seq_printf(m, "RPS enabled? %s\n", str_yes_no(intel_rps_is_enabled(rps))); seq_printf(m, "RPS active? %s\n", str_yes_no(intel_rps_is_active(rps))); seq_printf(m, "GPU busy? %s\n", str_yes_no(to_gt(dev_priv)->awake)); seq_printf(m, "Boosts outstanding? %d\n", atomic_read(&rps->num_waiters)); seq_printf(m, "Interactive? %d\n", READ_ONCE(rps->power.interactive)); seq_printf(m, "Frequency requested %d, actual %d\n", intel_gpu_freq(rps, rps->cur_freq), intel_rps_read_actual_frequency(rps)); seq_printf(m, " min hard:%d, soft:%d; max soft:%d, hard:%d\n", intel_gpu_freq(rps, rps->min_freq), intel_gpu_freq(rps, rps->min_freq_softlimit), intel_gpu_freq(rps, rps->max_freq_softlimit), intel_gpu_freq(rps, rps->max_freq)); seq_printf(m, " idle:%d, efficient:%d, boost:%d\n", intel_gpu_freq(rps, rps->idle_freq), intel_gpu_freq(rps, rps->efficient_freq), intel_gpu_freq(rps, rps->boost_freq)); seq_printf(m, "Wait boosts: %d\n", READ_ONCE(rps->boosts)); return 0; } static int i915_runtime_pm_status(struct seq_file *m, void *unused) { struct drm_i915_private *dev_priv = node_to_i915(m->private); struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev); if (!HAS_RUNTIME_PM(dev_priv)) seq_puts(m, "Runtime power management not supported\n"); seq_printf(m, "Runtime power status: %s\n", str_enabled_disabled(!dev_priv->display.power.domains.init_wakeref)); seq_printf(m, "GPU idle: %s\n", str_yes_no(!to_gt(dev_priv)->awake)); seq_printf(m, "IRQs disabled: %s\n", str_yes_no(!intel_irqs_enabled(dev_priv))); #ifdef CONFIG_PM seq_printf(m, "Usage count: %d\n", atomic_read(&dev_priv->drm.dev->power.usage_count)); #else seq_printf(m, "Device Power Management (CONFIG_PM) disabled\n"); #endif seq_printf(m, "PCI device power state: %s [%d]\n", pci_power_name(pdev->current_state), pdev->current_state); if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_RUNTIME_PM)) { struct drm_printer p = drm_seq_file_printer(m); print_intel_runtime_pm_wakeref(&dev_priv->runtime_pm, &p); } return 0; } static int i915_engine_info(struct seq_file *m, void *unused) { struct drm_i915_private *i915 = node_to_i915(m->private); struct intel_engine_cs *engine; intel_wakeref_t wakeref; struct drm_printer p; wakeref = 
intel_runtime_pm_get(&i915->runtime_pm); seq_printf(m, "GT awake? %s [%d], %llums\n", str_yes_no(to_gt(i915)->awake), atomic_read(&to_gt(i915)->wakeref.count), ktime_to_ms(intel_gt_get_awake_time(to_gt(i915)))); seq_printf(m, "CS timestamp frequency: %u Hz, %d ns\n", to_gt(i915)->clock_frequency, to_gt(i915)->clock_period_ns); p = drm_seq_file_printer(m); for_each_uabi_engine(engine, i915) intel_engine_dump(engine, &p, "%s\n", engine->name); intel_gt_show_timelines(to_gt(i915), &p, i915_request_show_with_schedule); intel_runtime_pm_put(&i915->runtime_pm, wakeref); return 0; } static int i915_wa_registers(struct seq_file *m, void *unused) { struct drm_i915_private *i915 = node_to_i915(m->private); struct intel_engine_cs *engine; for_each_uabi_engine(engine, i915) { const struct i915_wa_list *wal = &engine->ctx_wa_list; const struct i915_wa *wa; unsigned int count; count = wal->count; if (!count) continue; seq_printf(m, "%s: Workarounds applied: %u\n", engine->name, count); for (wa = wal->list; count--; wa++) seq_printf(m, "0x%X: 0x%08X, mask: 0x%08X\n", i915_mmio_reg_offset(wa->reg), wa->set, wa->clr); seq_printf(m, "\n"); } return 0; } static int i915_wedged_get(void *data, u64 *val) { struct drm_i915_private *i915 = data; struct intel_gt *gt; unsigned int i; *val = 0; for_each_gt(gt, i915, i) { int ret; ret = intel_gt_debugfs_reset_show(gt, val); if (ret) return ret; /* at least one tile should be wedged */ if (*val) break; } return 0; } static int i915_wedged_set(void *data, u64 val) { struct drm_i915_private *i915 = data; struct intel_gt *gt; unsigned int i; for_each_gt(gt, i915, i) intel_gt_debugfs_reset_store(gt, val); return 0; } DEFINE_SIMPLE_ATTRIBUTE(i915_wedged_fops, i915_wedged_get, i915_wedged_set, "%llu\n"); static int i915_perf_noa_delay_set(void *data, u64 val) { struct drm_i915_private *i915 = data; /* * This would lead to infinite waits as we're doing timestamp * difference on the CS with only 32bits. 
*/ if (intel_gt_ns_to_clock_interval(to_gt(i915), val) > U32_MAX) return -EINVAL; atomic64_set(&i915->perf.noa_programming_delay, val); return 0; } static int i915_perf_noa_delay_get(void *data, u64 *val) { struct drm_i915_private *i915 = data; *val = atomic64_read(&i915->perf.noa_programming_delay); return 0; } DEFINE_SIMPLE_ATTRIBUTE(i915_perf_noa_delay_fops, i915_perf_noa_delay_get, i915_perf_noa_delay_set, "%llu\n"); #define DROP_UNBOUND BIT(0) #define DROP_BOUND BIT(1) #define DROP_RETIRE BIT(2) #define DROP_ACTIVE BIT(3) #define DROP_FREED BIT(4) #define DROP_SHRINK_ALL BIT(5) #define DROP_IDLE BIT(6) #define DROP_RESET_ACTIVE BIT(7) #define DROP_RESET_SEQNO BIT(8) #define DROP_RCU BIT(9) #define DROP_ALL (DROP_UNBOUND | \ DROP_BOUND | \ DROP_RETIRE | \ DROP_ACTIVE | \ DROP_FREED | \ DROP_SHRINK_ALL |\ DROP_IDLE | \ DROP_RESET_ACTIVE | \ DROP_RESET_SEQNO | \ DROP_RCU) static int i915_drop_caches_get(void *data, u64 *val) { *val = DROP_ALL; return 0; } static int gt_drop_caches(struct intel_gt *gt, u64 val) { int ret; if (val & DROP_RESET_ACTIVE && wait_for(intel_engines_are_idle(gt), 200)) intel_gt_set_wedged(gt); if (val & DROP_RETIRE) intel_gt_retire_requests(gt); if (val & (DROP_IDLE | DROP_ACTIVE)) { ret = intel_gt_wait_for_idle(gt, MAX_SCHEDULE_TIMEOUT); if (ret) return ret; } if (val & DROP_IDLE) { ret = intel_gt_pm_wait_for_idle(gt); if (ret) return ret; } if (val & DROP_RESET_ACTIVE && intel_gt_terminally_wedged(gt)) intel_gt_handle_error(gt, ALL_ENGINES, 0, NULL); if (val & DROP_FREED) intel_gt_flush_buffer_pool(gt); return 0; } static int i915_drop_caches_set(void *data, u64 val) { struct drm_i915_private *i915 = data; unsigned int flags; int ret; drm_dbg(&i915->drm, "Dropping caches: 0x%08llx [0x%08llx]\n", val, val & DROP_ALL); ret = gt_drop_caches(to_gt(i915), val); if (ret) return ret; fs_reclaim_acquire(GFP_KERNEL); flags = memalloc_noreclaim_save(); if (val & DROP_BOUND) i915_gem_shrink(NULL, i915, LONG_MAX, NULL, I915_SHRINK_BOUND); if (val & DROP_UNBOUND) i915_gem_shrink(NULL, i915, LONG_MAX, NULL, I915_SHRINK_UNBOUND); if (val & DROP_SHRINK_ALL) i915_gem_shrink_all(i915); memalloc_noreclaim_restore(flags); fs_reclaim_release(GFP_KERNEL); if (val & DROP_RCU) rcu_barrier(); if (val & DROP_FREED) i915_gem_drain_freed_objects(i915); return 0; } DEFINE_SIMPLE_ATTRIBUTE(i915_drop_caches_fops, i915_drop_caches_get, i915_drop_caches_set, "0x%08llx\n"); static int i915_sseu_status(struct seq_file *m, void *unused) { struct drm_i915_private *i915 = node_to_i915(m->private); struct intel_gt *gt = to_gt(i915); return intel_sseu_status(m, gt); } static int i915_forcewake_open(struct inode *inode, struct file *file) { struct drm_i915_private *i915 = inode->i_private; struct intel_gt *gt; unsigned int i; for_each_gt(gt, i915, i) intel_gt_pm_debugfs_forcewake_user_open(gt); return 0; } static int i915_forcewake_release(struct inode *inode, struct file *file) { struct drm_i915_private *i915 = inode->i_private; struct intel_gt *gt; unsigned int i; for_each_gt(gt, i915, i) intel_gt_pm_debugfs_forcewake_user_release(gt); return 0; } static const struct file_operations i915_forcewake_fops = { .owner = THIS_MODULE, .open = i915_forcewake_open, .release = i915_forcewake_release, }; static const struct drm_info_list i915_debugfs_list[] = { {"i915_capabilities", i915_capabilities, 0}, {"i915_gem_objects", i915_gem_object_info, 0}, {"i915_frequency_info", i915_frequency_info, 0}, {"i915_swizzle_info", i915_swizzle_info, 0}, {"i915_runtime_pm_status", i915_runtime_pm_status, 0}, 
{"i915_engine_info", i915_engine_info, 0}, {"i915_wa_registers", i915_wa_registers, 0}, {"i915_sseu_status", i915_sseu_status, 0}, {"i915_rps_boost_info", i915_rps_boost_info, 0}, }; static const struct i915_debugfs_files { const char *name; const struct file_operations *fops; } i915_debugfs_files[] = { {"i915_perf_noa_delay", &i915_perf_noa_delay_fops}, {"i915_wedged", &i915_wedged_fops}, {"i915_gem_drop_caches", &i915_drop_caches_fops}, #if IS_ENABLED(CONFIG_DRM_I915_CAPTURE_ERROR) {"i915_error_state", &i915_error_state_fops}, {"i915_gpu_info", &i915_gpu_info_fops}, #endif }; void i915_debugfs_register(struct drm_i915_private *dev_priv) { struct drm_minor *minor = dev_priv->drm.primary; int i; i915_debugfs_params(dev_priv); debugfs_create_file("i915_forcewake_user", S_IRUSR, minor->debugfs_root, to_i915(minor->dev), &i915_forcewake_fops); for (i = 0; i < ARRAY_SIZE(i915_debugfs_files); i++) { debugfs_create_file(i915_debugfs_files[i].name, S_IRUGO | S_IWUSR, minor->debugfs_root, to_i915(minor->dev), i915_debugfs_files[i].fops); } drm_debugfs_create_files(i915_debugfs_list, ARRAY_SIZE(i915_debugfs_list), minor->debugfs_root, minor); } ```
```haskell
-- | Reexports from modules from the @PlutusCore.Generators.Internal@ folder.
module PlutusCore.Generators.Hedgehog
    ( module Export
    ) where

import PlutusCore.Generators.Hedgehog.Builtin as Export
import PlutusCore.Generators.Hedgehog.Denotation as Export
import PlutusCore.Generators.Hedgehog.Entity as Export
import PlutusCore.Generators.Hedgehog.TypedBuiltinGen as Export
import PlutusCore.Generators.Hedgehog.TypeEvalCheck as Export
import PlutusCore.Generators.Hedgehog.Utils as Export
```
```python
from flask import Flask, request, make_response

app = Flask(__name__)

@app.route('/XSS_param', methods=['GET'])
def XSS1():
    param = request.args.get('param', 'not set')
    other_var = param + ''  # tainted copy; flows no further in this variant
    html = open('templates/XSS_param.html').read()
    # A constant is substituted instead of the request value, so nothing
    # attacker-controlled is reflected into the response.
    not_dangerous = ""
    resp = make_response(html.replace('{{ param }}', not_dangerous))
    return resp

if __name__ == '__main__':
    app.run(debug=True)
```
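The handler above sidesteps reflected XSS by substituting a constant rather than the request value. If the parameter did need to be rendered, HTML-escaping it first is the usual fix; a minimal sketch using `markupsafe` (the route and function names below are illustrative, not part of the file above):

```python
from flask import Flask, request, make_response
from markupsafe import escape

app = Flask(__name__)

@app.route('/XSS_escaped', methods=['GET'])
def xss_escaped():
    param = request.args.get('param', 'not set')
    html = open('templates/XSS_param.html').read()
    # escape() HTML-encodes <, >, & and quotes, so injected markup is inert
    return make_response(html.replace('{{ param }}', str(escape(param))))
```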
```c /* ** 2001 September 15 ** ** The author disclaims copyright to this source code. In place of ** a legal notice, here is a blessing: ** ** May you do good and not evil. ** May you find forgiveness for yourself and forgive others. ** May you share freely, never taking more than you give. ** ************************************************************************* ** This file contains C code routines that are called by the SQLite parser ** when syntax rules are reduced. The routines in this file handle the ** following kinds of SQL syntax: ** ** CREATE TABLE ** DROP TABLE ** CREATE INDEX ** DROP INDEX ** creating ID lists ** BEGIN TRANSACTION ** COMMIT ** ROLLBACK */ #include "sqliteInt.h" /* ** This routine is called when a new SQL statement is beginning to ** be parsed. Initialize the pParse structure as needed. */ void sqlite3BeginParse(Parse *pParse, int explainFlag){ pParse->explain = (u8)explainFlag; pParse->nVar = 0; } #ifndef SQLITE_OMIT_SHARED_CACHE /* ** The TableLock structure is only used by the sqlite3TableLock() and ** codeTableLocks() functions. */ struct TableLock { int iDb; /* The database containing the table to be locked */ int iTab; /* The root page of the table to be locked */ u8 isWriteLock; /* True for write lock. False for a read lock */ const char *zName; /* Name of the table */ }; /* ** Record the fact that we want to lock a table at run-time. ** ** The table to be locked has root page iTab and is found in database iDb. ** A read or a write lock can be taken depending on isWritelock. ** ** This routine just records the fact that the lock is desired. The ** code to make the lock occur is generated by a later call to ** codeTableLocks() which occurs during sqlite3FinishCoding(). */ void sqlite3TableLock( Parse *pParse, /* Parsing context */ int iDb, /* Index of the database containing the table to lock */ int iTab, /* Root page number of the table to be locked */ u8 isWriteLock, /* True for a write lock */ const char *zName /* Name of the table to be locked */ ){ Parse *pToplevel = sqlite3ParseToplevel(pParse); int i; int nBytes; TableLock *p; assert( iDb>=0 ); for(i=0; i<pToplevel->nTableLock; i++){ p = &pToplevel->aTableLock[i]; if( p->iDb==iDb && p->iTab==iTab ){ p->isWriteLock = (p->isWriteLock || isWriteLock); return; } } nBytes = sizeof(TableLock) * (pToplevel->nTableLock+1); pToplevel->aTableLock = sqlite3DbReallocOrFree(pToplevel->db, pToplevel->aTableLock, nBytes); if( pToplevel->aTableLock ){ p = &pToplevel->aTableLock[pToplevel->nTableLock++]; p->iDb = iDb; p->iTab = iTab; p->isWriteLock = isWriteLock; p->zName = zName; }else{ pToplevel->nTableLock = 0; pToplevel->db->mallocFailed = 1; } } /* ** Code an OP_TableLock instruction for each table locked by the ** statement (configured by calls to sqlite3TableLock()). */ static void codeTableLocks(Parse *pParse){ int i; Vdbe *pVdbe; pVdbe = sqlite3GetVdbe(pParse); assert( pVdbe!=0 ); /* sqlite3GetVdbe cannot fail: VDBE already allocated */ for(i=0; i<pParse->nTableLock; i++){ TableLock *p = &pParse->aTableLock[i]; int p1 = p->iDb; sqlite3VdbeAddOp4(pVdbe, OP_TableLock, p1, p->iTab, p->isWriteLock, p->zName, P4_STATIC); } } #else #define codeTableLocks(x) #endif /* ** Return TRUE if the given yDbMask object is empty - if it contains no ** 1 bits. This routine is used by the DbMaskAllZero() and DbMaskNotZero() ** macros when SQLITE_MAX_ATTACHED is greater than 30. 
*/ #if SQLITE_MAX_ATTACHED>30 int sqlite3DbMaskAllZero(yDbMask m){ int i; for(i=0; i<sizeof(yDbMask); i++) if( m[i] ) return 0; return 1; } #endif /* ** This routine is called after a single SQL statement has been ** parsed and a VDBE program to execute that statement has been ** prepared. This routine puts the finishing touches on the ** VDBE program and resets the pParse structure for the next ** parse. ** ** Note that if an error occurred, it might be the case that ** no VDBE code was generated. */ void sqlite3FinishCoding(Parse *pParse){ sqlite3 *db; Vdbe *v; assert( pParse->pToplevel==0 ); db = pParse->db; if( db->mallocFailed ) return; if( pParse->nested ) return; if( pParse->nErr ) return; /* Begin by generating some termination code at the end of the ** vdbe program */ v = sqlite3GetVdbe(pParse); assert( !pParse->isMultiWrite || sqlite3VdbeAssertMayAbort(v, pParse->mayAbort)); if( v ){ while( sqlite3VdbeDeletePriorOpcode(v, OP_Close) ){} sqlite3VdbeAddOp0(v, OP_Halt); #if SQLITE_USER_AUTHENTICATION if( pParse->nTableLock>0 && db->init.busy==0 ){ sqlite3UserAuthInit(db); if( db->auth.authLevel<UAUTH_User ){ pParse->rc = SQLITE_AUTH_USER; sqlite3ErrorMsg(pParse, "user not authenticated"); return; } } #endif /* The cookie mask contains one bit for each database file open. ** (Bit 0 is for main, bit 1 is for temp, and so forth.) Bits are ** set for each database that is used. Generate code to start a ** transaction on each used database and to verify the schema cookie ** on each used database. */ if( db->mallocFailed==0 && (DbMaskNonZero(pParse->cookieMask) || pParse->pConstExpr) ){ int iDb, i; assert( sqlite3VdbeGetOp(v, 0)->opcode==OP_Init ); sqlite3VdbeJumpHere(v, 0); for(iDb=0; iDb<db->nDb; iDb++){ if( DbMaskTest(pParse->cookieMask, iDb)==0 ) continue; sqlite3VdbeUsesBtree(v, iDb); sqlite3VdbeAddOp4Int(v, OP_Transaction, /* Opcode */ iDb, /* P1 */ DbMaskTest(pParse->writeMask,iDb), /* P2 */ pParse->cookieValue[iDb], /* P3 */ db->aDb[iDb].pSchema->iGeneration /* P4 */ ); if( db->init.busy==0 ) sqlite3VdbeChangeP5(v, 1); } #ifndef SQLITE_OMIT_VIRTUALTABLE for(i=0; i<pParse->nVtabLock; i++){ char *vtab = (char *)sqlite3GetVTable(db, pParse->apVtabLock[i]); sqlite3VdbeAddOp4(v, OP_VBegin, 0, 0, 0, vtab, P4_VTAB); } pParse->nVtabLock = 0; #endif /* Once all the cookies have been verified and transactions opened, ** obtain the required table-locks. This is a no-op unless the ** shared-cache feature is enabled. */ codeTableLocks(pParse); /* Initialize any AUTOINCREMENT data structures required. */ sqlite3AutoincrementBegin(pParse); /* Code constant expressions that where factored out of inner loops */ if( pParse->pConstExpr ){ ExprList *pEL = pParse->pConstExpr; pParse->okConstFactor = 0; for(i=0; i<pEL->nExpr; i++){ sqlite3ExprCode(pParse, pEL->a[i].pExpr, pEL->a[i].u.iConstExprReg); } } /* Finally, jump back to the beginning of the executable code. 
*/ sqlite3VdbeAddOp2(v, OP_Goto, 0, 1); } } /* Get the VDBE program ready for execution */ if( v && ALWAYS(pParse->nErr==0) && !db->mallocFailed ){ assert( pParse->iCacheLevel==0 ); /* Disables and re-enables match */ /* A minimum of one cursor is required if autoincrement is used * See ticket [a696379c1f08866] */ if( pParse->pAinc!=0 && pParse->nTab==0 ) pParse->nTab = 1; sqlite3VdbeMakeReady(v, pParse); pParse->rc = SQLITE_DONE; pParse->colNamesSet = 0; }else{ pParse->rc = SQLITE_ERROR; } pParse->nTab = 0; pParse->nMem = 0; pParse->nSet = 0; pParse->nVar = 0; DbMaskZero(pParse->cookieMask); } /* ** Run the parser and code generator recursively in order to generate ** code for the SQL statement given onto the end of the pParse context ** currently under construction. When the parser is run recursively ** this way, the final OP_Halt is not appended and other initialization ** and finalization steps are omitted because those are handling by the ** outermost parser. ** ** Not everything is nestable. This facility is designed to permit ** INSERT, UPDATE, and DELETE operations against SQLITE_MASTER. Use ** care if you decide to try to use this routine for some other purposes. */ void sqlite3NestedParse(Parse *pParse, const char *zFormat, ...){ va_list ap; char *zSql; char *zErrMsg = 0; sqlite3 *db = pParse->db; # define SAVE_SZ (sizeof(Parse) - offsetof(Parse,nVar)) char saveBuf[SAVE_SZ]; if( pParse->nErr ) return; assert( pParse->nested<10 ); /* Nesting should only be of limited depth */ va_start(ap, zFormat); zSql = sqlite3VMPrintf(db, zFormat, ap); va_end(ap); if( zSql==0 ){ return; /* A malloc must have failed */ } pParse->nested++; memcpy(saveBuf, &pParse->nVar, SAVE_SZ); memset(&pParse->nVar, 0, SAVE_SZ); sqlite3RunParser(pParse, zSql, &zErrMsg); sqlite3DbFree(db, zErrMsg); sqlite3DbFree(db, zSql); memcpy(&pParse->nVar, saveBuf, SAVE_SZ); pParse->nested--; } #if SQLITE_USER_AUTHENTICATION /* ** Return TRUE if zTable is the name of the system table that stores the ** list of users and their access credentials. */ int sqlite3UserAuthTable(const char *zTable){ return sqlite3_stricmp(zTable, "sqlite_user")==0; } #endif /* ** Locate the in-memory structure that describes a particular database ** table given the name of that table and (optionally) the name of the ** database containing the table. Return NULL if not found. ** ** If zDatabase is 0, all databases are searched for the table and the ** first matching table is returned. (No checking for duplicate table ** names is done.) The search order is TEMP first, then MAIN, then any ** auxiliary databases added using the ATTACH command. ** ** See also sqlite3LocateTable(). */ Table *sqlite3FindTable(sqlite3 *db, const char *zName, const char *zDatabase){ Table *p = 0; int i; assert( zName!=0 ); /* All mutexes are required for schema access. Make sure we hold them. */ assert( zDatabase!=0 || sqlite3BtreeHoldsAllMutexes(db) ); #if SQLITE_USER_AUTHENTICATION /* Only the admin user is allowed to know that the sqlite_user table ** exists */ if( db->auth.authLevel<UAUTH_Admin && sqlite3UserAuthTable(zName)!=0 ){ return 0; } #endif for(i=OMIT_TEMPDB; i<db->nDb; i++){ int j = (i<2) ? 
i^1 : i; /* Search TEMP before MAIN */ if( zDatabase!=0 && sqlite3StrICmp(zDatabase, db->aDb[j].zName) ) continue; assert( sqlite3SchemaMutexHeld(db, j, 0) ); p = sqlite3HashFind(&db->aDb[j].pSchema->tblHash, zName); if( p ) break; } return p; } /* ** Locate the in-memory structure that describes a particular database ** table given the name of that table and (optionally) the name of the ** database containing the table. Return NULL if not found. Also leave an ** error message in pParse->zErrMsg. ** ** The difference between this routine and sqlite3FindTable() is that this ** routine leaves an error message in pParse->zErrMsg where ** sqlite3FindTable() does not. */ Table *sqlite3LocateTable( Parse *pParse, /* context in which to report errors */ int isView, /* True if looking for a VIEW rather than a TABLE */ const char *zName, /* Name of the table we are looking for */ const char *zDbase /* Name of the database. Might be NULL */ ){ Table *p; /* Read the database schema. If an error occurs, leave an error message ** and code in pParse and return NULL. */ if( SQLITE_OK!=sqlite3ReadSchema(pParse) ){ return 0; } p = sqlite3FindTable(pParse->db, zName, zDbase); if( p==0 ){ const char *zMsg = isView ? "no such view" : "no such table"; if( zDbase ){ sqlite3ErrorMsg(pParse, "%s: %s.%s", zMsg, zDbase, zName); }else{ sqlite3ErrorMsg(pParse, "%s: %s", zMsg, zName); } pParse->checkSchema = 1; } #if SQLITE_USER_AUTHENICATION else if( pParse->db->auth.authLevel<UAUTH_User ){ sqlite3ErrorMsg(pParse, "user not authenticated"); p = 0; } #endif return p; } /* ** Locate the table identified by *p. ** ** This is a wrapper around sqlite3LocateTable(). The difference between ** sqlite3LocateTable() and this function is that this function restricts ** the search to schema (p->pSchema) if it is not NULL. p->pSchema may be ** non-NULL if it is part of a view or trigger program definition. See ** sqlite3FixSrcList() for details. */ Table *sqlite3LocateTableItem( Parse *pParse, int isView, struct SrcList_item *p ){ const char *zDb; assert( p->pSchema==0 || p->zDatabase==0 ); if( p->pSchema ){ int iDb = sqlite3SchemaToIndex(pParse->db, p->pSchema); zDb = pParse->db->aDb[iDb].zName; }else{ zDb = p->zDatabase; } return sqlite3LocateTable(pParse, isView, p->zName, zDb); } /* ** Locate the in-memory structure that describes ** a particular index given the name of that index ** and the name of the database that contains the index. ** Return NULL if not found. ** ** If zDatabase is 0, all databases are searched for the ** table and the first matching index is returned. (No checking ** for duplicate index names is done.) The search order is ** TEMP first, then MAIN, then any auxiliary databases added ** using the ATTACH command. */ Index *sqlite3FindIndex(sqlite3 *db, const char *zName, const char *zDb){ Index *p = 0; int i; /* All mutexes are required for schema access. Make sure we hold them. */ assert( zDb!=0 || sqlite3BtreeHoldsAllMutexes(db) ); for(i=OMIT_TEMPDB; i<db->nDb; i++){ int j = (i<2) ? 
i^1 : i;  /* Search TEMP before MAIN */
    Schema *pSchema = db->aDb[j].pSchema;
    assert( pSchema );
    if( zDb && sqlite3StrICmp(zDb, db->aDb[j].zName) ) continue;
    assert( sqlite3SchemaMutexHeld(db, j, 0) );
    p = sqlite3HashFind(&pSchema->idxHash, zName);
    if( p ) break;
  }
  return p;
}

/*
** Reclaim the memory used by an index
*/
static void freeIndex(sqlite3 *db, Index *p){
#ifndef SQLITE_OMIT_ANALYZE
  sqlite3DeleteIndexSamples(db, p);
#endif
  if( db==0 || db->pnBytesFreed==0 ) sqlite3KeyInfoUnref(p->pKeyInfo);
  sqlite3ExprDelete(db, p->pPartIdxWhere);
  sqlite3DbFree(db, p->zColAff);
  if( p->isResized ) sqlite3DbFree(db, p->azColl);
#ifdef SQLITE_ENABLE_STAT3_OR_STAT4
  sqlite3_free(p->aiRowEst);
#endif
  sqlite3DbFree(db, p);
}

/*
** For the index called zIdxName which is found in the database iDb,
** unlink that index from its Table, then remove the index from
** the index hash table and free all memory structures associated
** with the index.
*/
void sqlite3UnlinkAndDeleteIndex(sqlite3 *db, int iDb, const char *zIdxName){
  Index *pIndex;
  Hash *pHash;

  assert( sqlite3SchemaMutexHeld(db, iDb, 0) );
  pHash = &db->aDb[iDb].pSchema->idxHash;
  pIndex = sqlite3HashInsert(pHash, zIdxName, 0);
  if( ALWAYS(pIndex) ){
    if( pIndex->pTable->pIndex==pIndex ){
      pIndex->pTable->pIndex = pIndex->pNext;
    }else{
      Index *p;
      /* Justification of ALWAYS();  The index must be on the list of
      ** indices. */
      p = pIndex->pTable->pIndex;
      while( ALWAYS(p) && p->pNext!=pIndex ){ p = p->pNext; }
      if( ALWAYS(p && p->pNext==pIndex) ){
        p->pNext = pIndex->pNext;
      }
    }
    freeIndex(db, pIndex);
  }
  db->flags |= SQLITE_InternChanges;
}

/*
** Look through the list of open database files in db->aDb[] and if
** any have been closed, remove them from the list.  Reallocate the
** db->aDb[] structure to a smaller size, if possible.
**
** Entry 0 (the "main" database) and entry 1 (the "temp" database)
** are never candidates for being collapsed.
*/
void sqlite3CollapseDatabaseArray(sqlite3 *db){
  int i, j;
  for(i=j=2; i<db->nDb; i++){
    struct Db *pDb = &db->aDb[i];
    if( pDb->pBt==0 ){
      sqlite3DbFree(db, pDb->zName);
      pDb->zName = 0;
      continue;
    }
    if( j<i ){
      db->aDb[j] = db->aDb[i];
    }
    j++;
  }
  memset(&db->aDb[j], 0, (db->nDb-j)*sizeof(db->aDb[j]));
  db->nDb = j;
  if( db->nDb<=2 && db->aDb!=db->aDbStatic ){
    memcpy(db->aDbStatic, db->aDb, 2*sizeof(db->aDb[0]));
    sqlite3DbFree(db, db->aDb);
    db->aDb = db->aDbStatic;
  }
}

/*
** Reset the schema for the database at index iDb.  Also reset the
** TEMP schema.
*/
void sqlite3ResetOneSchema(sqlite3 *db, int iDb){
  Db *pDb;
  assert( iDb<db->nDb );

  /* Case 1:  Reset the single schema identified by iDb */
  pDb = &db->aDb[iDb];
  assert( sqlite3SchemaMutexHeld(db, iDb, 0) );
  assert( pDb->pSchema!=0 );
  sqlite3SchemaClear(pDb->pSchema);

  /* If any database other than TEMP is reset, then also reset TEMP
  ** since TEMP might be holding triggers that reference tables in the
  ** other database.
  */
  if( iDb!=1 ){
    pDb = &db->aDb[1];
    assert( pDb->pSchema!=0 );
    sqlite3SchemaClear(pDb->pSchema);
  }
  return;
}

/*
** Erase all schema information from all attached databases (including
** "main" and "temp") for a single database connection.
*/
void sqlite3ResetAllSchemasOfConnection(sqlite3 *db){
  int i;
  sqlite3BtreeEnterAll(db);
  for(i=0; i<db->nDb; i++){
    Db *pDb = &db->aDb[i];
    if( pDb->pSchema ){
      sqlite3SchemaClear(pDb->pSchema);
    }
  }
  db->flags &= ~SQLITE_InternChanges;
  sqlite3VtabUnlockList(db);
  sqlite3BtreeLeaveAll(db);
  sqlite3CollapseDatabaseArray(db);
}

/*
** This routine is called when a commit occurs.
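**
** (SQLITE_InternChanges is set while uncommitted changes to the in-memory
** schema exist; clearing it here records that the in-memory schema is once
** again in sync with the database file.  This reading of the flag is
** inferred from the surrounding code, not stated in the original sources.)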
*/
void sqlite3CommitInternalChanges(sqlite3 *db){
  db->flags &= ~SQLITE_InternChanges;
}

/*
** Delete memory allocated for the column names of a table or view (the
** Table.aCol[] array).
*/
static void sqliteDeleteColumnNames(sqlite3 *db, Table *pTable){
  int i;
  Column *pCol;
  assert( pTable!=0 );
  if( (pCol = pTable->aCol)!=0 ){
    for(i=0; i<pTable->nCol; i++, pCol++){
      sqlite3DbFree(db, pCol->zName);
      sqlite3ExprDelete(db, pCol->pDflt);
      sqlite3DbFree(db, pCol->zDflt);
      sqlite3DbFree(db, pCol->zType);
      sqlite3DbFree(db, pCol->zColl);
    }
    sqlite3DbFree(db, pTable->aCol);
  }
}

/*
** Remove the memory data structures associated with the given
** Table.  No changes are made to disk by this routine.
**
** This routine just deletes the data structure.  It does not unlink
** the table data structure from the hash table.  But it does destroy
** memory structures of the indices and foreign keys associated with
** the table.
**
** The db parameter is optional.  It is needed if the Table object
** contains lookaside memory.  (Table objects in the schema do not use
** lookaside memory, but some ephemeral Table objects do.)  Or the
** db parameter can be used with db->pnBytesFreed to measure the memory
** used by the Table object.
*/
void sqlite3DeleteTable(sqlite3 *db, Table *pTable){
  Index *pIndex, *pNext;
  TESTONLY( int nLookaside; ) /* Used to verify lookaside not used for schema */

  assert( !pTable || pTable->nRef>0 );

  /* Do not delete the table until the reference count reaches zero. */
  if( !pTable ) return;
  if( ((!db || db->pnBytesFreed==0) && (--pTable->nRef)>0) ) return;

  /* Record the number of outstanding lookaside allocations in schema Tables
  ** prior to doing any free() operations.  Since schema Tables do not use
  ** lookaside, this number should not change. */
  TESTONLY( nLookaside = (db && (pTable->tabFlags & TF_Ephemeral)==0) ?
                           db->lookaside.nOut : 0 );

  /* Delete all indices associated with this table. */
  for(pIndex = pTable->pIndex; pIndex; pIndex=pNext){
    pNext = pIndex->pNext;
    assert( pIndex->pSchema==pTable->pSchema );
    if( !db || db->pnBytesFreed==0 ){
      char *zName = pIndex->zName;
      TESTONLY ( Index *pOld = ) sqlite3HashInsert(
         &pIndex->pSchema->idxHash, zName, 0
      );
      assert( db==0 || sqlite3SchemaMutexHeld(db, 0, pIndex->pSchema) );
      assert( pOld==pIndex || pOld==0 );
    }
    freeIndex(db, pIndex);
  }

  /* Delete any foreign keys attached to this table. */
  sqlite3FkDelete(db, pTable);

  /* Delete the Table structure itself.
  */
  sqliteDeleteColumnNames(db, pTable);
  sqlite3DbFree(db, pTable->zName);
  sqlite3DbFree(db, pTable->zColAff);
  sqlite3SelectDelete(db, pTable->pSelect);
#ifndef SQLITE_OMIT_CHECK
  sqlite3ExprListDelete(db, pTable->pCheck);
#endif
#ifndef SQLITE_OMIT_VIRTUALTABLE
  sqlite3VtabClear(db, pTable);
#endif
  sqlite3DbFree(db, pTable);

  /* Verify that no lookaside memory was used by schema tables */
  assert( nLookaside==0 || nLookaside==db->lookaside.nOut );
}

/*
** Unlink the given table from the hash tables and then delete the
** table structure with all its indices and foreign keys.
*/
void sqlite3UnlinkAndDeleteTable(sqlite3 *db, int iDb, const char *zTabName){
  Table *p;
  Db *pDb;

  assert( db!=0 );
  assert( iDb>=0 && iDb<db->nDb );
  assert( zTabName );
  assert( sqlite3SchemaMutexHeld(db, iDb, 0) );
  testcase( zTabName[0]==0 );  /* Zero-length table names are allowed */
  pDb = &db->aDb[iDb];
  p = sqlite3HashInsert(&pDb->pSchema->tblHash, zTabName, 0);
  sqlite3DeleteTable(db, p);
  db->flags |= SQLITE_InternChanges;
}

/*
** Given a token, return a string that consists of the text of that
** token.
** Space to hold the returned string
** is obtained from sqliteMalloc() and must be freed by the calling
** function.
**
** Any quotation marks (ex:  "name", 'name', [name], or `name`) that
** surround the body of the token are removed.
**
** Tokens are often just pointers into the original SQL text and so
** are not \000 terminated and are not persistent.  The returned string
** is \000 terminated and is persistent.
*/
char *sqlite3NameFromToken(sqlite3 *db, Token *pName){
  char *zName;
  if( pName ){
    zName = sqlite3DbStrNDup(db, (char*)pName->z, pName->n);
    sqlite3Dequote(zName);
  }else{
    zName = 0;
  }
  return zName;
}

/*
** Open the sqlite_master table stored in database number iDb for
** writing. The table is opened using cursor 0.
*/
void sqlite3OpenMasterTable(Parse *p, int iDb){
  Vdbe *v = sqlite3GetVdbe(p);
  sqlite3TableLock(p, iDb, MASTER_ROOT, 1, SCHEMA_TABLE(iDb));
  sqlite3VdbeAddOp4Int(v, OP_OpenWrite, 0, MASTER_ROOT, iDb, 5);
  if( p->nTab==0 ){
    p->nTab = 1;
  }
}

/*
** Parameter zName points to a nul-terminated buffer containing the name
** of a database ("main", "temp" or the name of an attached db). This
** function returns the index of the named database in db->aDb[], or
** -1 if the named db cannot be found.
*/
int sqlite3FindDbName(sqlite3 *db, const char *zName){
  int i = -1;         /* Database number */
  if( zName ){
    Db *pDb;
    int n = sqlite3Strlen30(zName);
    for(i=(db->nDb-1), pDb=&db->aDb[i]; i>=0; i--, pDb--){
      if( (!OMIT_TEMPDB || i!=1 ) && n==sqlite3Strlen30(pDb->zName) &&
          0==sqlite3StrICmp(pDb->zName, zName) ){
        break;
      }
    }
  }
  return i;
}

/*
** The token *pName contains the name of a database (either "main" or
** "temp" or the name of an attached db). This routine returns the
** index of the named database in db->aDb[], or -1 if the named db
** does not exist.
*/
int sqlite3FindDb(sqlite3 *db, Token *pName){
  int i;                               /* Database number */
  char *zName;                         /* Name we are searching for */
  zName = sqlite3NameFromToken(db, pName);
  i = sqlite3FindDbName(db, zName);
  sqlite3DbFree(db, zName);
  return i;
}

/* The table or view or trigger name is passed to this routine via tokens
** pName1 and pName2. If the table name was fully qualified, for example:
**
** CREATE TABLE xxx.yyy (...);
**
** Then pName1 is set to "xxx" and pName2 "yyy". On the other hand if
** the table name is not fully qualified, i.e.:
**
** CREATE TABLE yyy(...);
**
** Then pName1 is set to "yyy" and pName2 is "".
**
** This routine sets the *pUnqual pointer to point at the token (pName1 or
** pName2) that stores the unqualified table name.  The index of the
** database "xxx" is returned.
*/
int sqlite3TwoPartName(
  Parse *pParse,      /* Parsing and code generating context */
  Token *pName1,      /* The "xxx" in the name "xxx.yyy" or "xxx" */
  Token *pName2,      /* The "yyy" in the name "xxx.yyy" */
  Token **pUnqual     /* Write the unqualified object name here */
){
  int iDb;                    /* Database holding the object */
  sqlite3 *db = pParse->db;

  if( ALWAYS(pName2!=0) && pName2->n>0 ){
    if( db->init.busy ) {
      sqlite3ErrorMsg(pParse, "corrupt database");
      pParse->nErr++;
      return -1;
    }
    *pUnqual = pName2;
    iDb = sqlite3FindDb(db, pName1);
    if( iDb<0 ){
      sqlite3ErrorMsg(pParse, "unknown database %T", pName1);
      pParse->nErr++;
      return -1;
    }
  }else{
    assert( db->init.iDb==0 || db->init.busy );
    iDb = db->init.iDb;
    *pUnqual = pName1;
  }
  return iDb;
}

/*
** This routine is used to check if the UTF-8 string zName is a legal
** unqualified name for a new schema object (table, index, view or
** trigger).
** All names are legal except those that begin with the string
** "sqlite_" (in upper, lower or mixed case). This portion of the namespace
** is reserved for internal use.
*/
int sqlite3CheckObjectName(Parse *pParse, const char *zName){
  if( !pParse->db->init.busy && pParse->nested==0
          && (pParse->db->flags & SQLITE_WriteSchema)==0
          && 0==sqlite3StrNICmp(zName, "sqlite_", 7) ){
    sqlite3ErrorMsg(pParse, "object name reserved for internal use: %s", zName);
    return SQLITE_ERROR;
  }
  return SQLITE_OK;
}

/*
** Return the PRIMARY KEY index of a table
*/
Index *sqlite3PrimaryKeyIndex(Table *pTab){
  Index *p;
  for(p=pTab->pIndex; p && !IsPrimaryKeyIndex(p); p=p->pNext){}
  return p;
}

/*
** Return the column of index pIdx that corresponds to table
** column iCol.  Return -1 if not found.
*/
i16 sqlite3ColumnOfIndex(Index *pIdx, i16 iCol){
  int i;
  for(i=0; i<pIdx->nColumn; i++){
    if( iCol==pIdx->aiColumn[i] ) return i;
  }
  return -1;
}

/*
** Begin constructing a new table representation in memory.  This is
** the first of several action routines that get called in response
** to a CREATE TABLE statement.  In particular, this routine is called
** after seeing tokens "CREATE" and "TABLE" and the table name. The isTemp
** flag is true if the table should be stored in the auxiliary database
** file instead of in the main database file.  This is normally the case
** when the "TEMP" or "TEMPORARY" keyword occurs in between
** CREATE and TABLE.
**
** The new table record is initialized and put in pParse->pNewTable.
** As more of the CREATE TABLE statement is parsed, additional action
** routines will be called to add more information to this record.
** At the end of the CREATE TABLE statement, the sqlite3EndTable() routine
** is called to complete the construction of the new table record.
*/
void sqlite3StartTable(
  Parse *pParse,   /* Parser context */
  Token *pName1,   /* First part of the name of the table or view */
  Token *pName2,   /* Second part of the name of the table or view */
  int isTemp,      /* True if this is a TEMP table */
  int isView,      /* True if this is a VIEW */
  int isVirtual,   /* True if this is a VIRTUAL table */
  int noErr        /* Do nothing if table already exists */
){
  Table *pTable;
  char *zName = 0; /* The name of the new table */
  sqlite3 *db = pParse->db;
  Vdbe *v;
  int iDb;         /* Database number to create the table in */
  Token *pName;    /* Unqualified name of the table to create */

  /* The table or view name to create is passed to this routine via tokens
  ** pName1 and pName2. If the table name was fully qualified, for example:
  **
  ** CREATE TABLE xxx.yyy (...);
  **
  ** Then pName1 is set to "xxx" and pName2 "yyy". On the other hand if
  ** the table name is not fully qualified, i.e.:
  **
  ** CREATE TABLE yyy(...);
  **
  ** Then pName1 is set to "yyy" and pName2 is "".
  **
  ** The call below sets the pName pointer to point at the token (pName1 or
  ** pName2) that stores the unqualified table name. The variable iDb is
  ** set to the index of the database that the table or view is to be
  ** created in.
  */
  iDb = sqlite3TwoPartName(pParse, pName1, pName2, &pName);
  if( iDb<0 ) return;
  if( !OMIT_TEMPDB && isTemp && pName2->n>0 && iDb!=1 ){
    /* If creating a temp table, the name may not be qualified. Unless
    ** the database name is "temp" anyway.
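    **
    ** For example, "CREATE TEMP TABLE temp.t1(x)" is accepted because the
    ** qualifier names the temp database itself, while
    ** "CREATE TEMP TABLE aux1.t1(x)" is rejected with the error generated
    ** below.  (The table and database names are illustrative only.)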
    */
    sqlite3ErrorMsg(pParse, "temporary table name must be unqualified");
    return;
  }
  if( !OMIT_TEMPDB && isTemp ) iDb = 1;

  pParse->sNameToken = *pName;
  zName = sqlite3NameFromToken(db, pName);
  if( zName==0 ) return;
  if( SQLITE_OK!=sqlite3CheckObjectName(pParse, zName) ){
    goto begin_table_error;
  }
  if( db->init.iDb==1 ) isTemp = 1;
#ifndef SQLITE_OMIT_AUTHORIZATION
  assert( (isTemp & 1)==isTemp );
  {
    int code;
    char *zDb = db->aDb[iDb].zName;
    if( sqlite3AuthCheck(pParse, SQLITE_INSERT, SCHEMA_TABLE(isTemp), 0, zDb) ){
      goto begin_table_error;
    }
    if( isView ){
      if( !OMIT_TEMPDB && isTemp ){
        code = SQLITE_CREATE_TEMP_VIEW;
      }else{
        code = SQLITE_CREATE_VIEW;
      }
    }else{
      if( !OMIT_TEMPDB && isTemp ){
        code = SQLITE_CREATE_TEMP_TABLE;
      }else{
        code = SQLITE_CREATE_TABLE;
      }
    }
    if( !isVirtual && sqlite3AuthCheck(pParse, code, zName, 0, zDb) ){
      goto begin_table_error;
    }
  }
#endif

  /* Make sure the new table name does not collide with an existing
  ** index or table name in the same database.  Issue an error message if
  ** it does. The exception is if the statement being parsed was passed
  ** to an sqlite3_declare_vtab() call. In that case only the column names
  ** and types will be used, so there is no need to test for namespace
  ** collisions.
  */
  if( !IN_DECLARE_VTAB ){
    char *zDb = db->aDb[iDb].zName;
    if( SQLITE_OK!=sqlite3ReadSchema(pParse) ){
      goto begin_table_error;
    }
    pTable = sqlite3FindTable(db, zName, zDb);
    if( pTable ){
      if( !noErr ){
        sqlite3ErrorMsg(pParse, "table %T already exists", pName);
      }else{
        assert( !db->init.busy );
        sqlite3CodeVerifySchema(pParse, iDb);
      }
      goto begin_table_error;
    }
    if( sqlite3FindIndex(db, zName, zDb)!=0 ){
      sqlite3ErrorMsg(pParse, "there is already an index named %s", zName);
      goto begin_table_error;
    }
  }

  pTable = sqlite3DbMallocZero(db, sizeof(Table));
  if( pTable==0 ){
    db->mallocFailed = 1;
    pParse->rc = SQLITE_NOMEM;
    pParse->nErr++;
    goto begin_table_error;
  }
  pTable->zName = zName;
  pTable->iPKey = -1;
  pTable->pSchema = db->aDb[iDb].pSchema;
  pTable->nRef = 1;
  pTable->nRowLogEst = 200; assert( 200==sqlite3LogEst(1048576) );
  assert( pParse->pNewTable==0 );
  pParse->pNewTable = pTable;

  /* If this is the magic sqlite_sequence table used by autoincrement,
  ** then record a pointer to this table in the main database structure
  ** so that INSERT can find the table easily.
  */
#ifndef SQLITE_OMIT_AUTOINCREMENT
  if( !pParse->nested && strcmp(zName, "sqlite_sequence")==0 ){
    assert( sqlite3SchemaMutexHeld(db, iDb, 0) );
    pTable->pSchema->pSeqTab = pTable;
  }
#endif

  /* Begin generating the code that will insert the table record into
  ** the SQLITE_MASTER table.  Note in particular that we must go ahead
  ** and allocate the record number for the table entry now.  Before any
  ** PRIMARY KEY or UNIQUE keywords are parsed.  Those keywords will cause
  ** indices to be created and the table record must come before the
  ** indices.  Hence, the record number for the table must be allocated
  ** now.
  */
  if( !db->init.busy && (v = sqlite3GetVdbe(pParse))!=0 ){
    int j1;
    int fileFormat;
    int reg1, reg2, reg3;
    sqlite3BeginWriteOperation(pParse, 0, iDb);

#ifndef SQLITE_OMIT_VIRTUALTABLE
    if( isVirtual ){
      sqlite3VdbeAddOp0(v, OP_VBegin);
    }
#endif

    /* If the file format and encoding in the database have not been set,
    ** set them now.
    */
    reg1 = pParse->regRowid = ++pParse->nMem;
    reg2 = pParse->regRoot = ++pParse->nMem;
    reg3 = ++pParse->nMem;
    sqlite3VdbeAddOp3(v, OP_ReadCookie, iDb, reg3, BTREE_FILE_FORMAT);
    sqlite3VdbeUsesBtree(v, iDb);
    j1 = sqlite3VdbeAddOp1(v, OP_If, reg3); VdbeCoverage(v);
    fileFormat = (db->flags & SQLITE_LegacyFileFmt)!=0 ?
1 : SQLITE_MAX_FILE_FORMAT;
    sqlite3VdbeAddOp2(v, OP_Integer, fileFormat, reg3);
    sqlite3VdbeAddOp3(v, OP_SetCookie, iDb, BTREE_FILE_FORMAT, reg3);
    sqlite3VdbeAddOp2(v, OP_Integer, ENC(db), reg3);
    sqlite3VdbeAddOp3(v, OP_SetCookie, iDb, BTREE_TEXT_ENCODING, reg3);
    sqlite3VdbeJumpHere(v, j1);

    /* This just creates a place-holder record in the sqlite_master table.
    ** The record created does not contain anything yet.  It will be replaced
    ** by the real entry in code generated at sqlite3EndTable().
    **
    ** The rowid for the new entry is left in register pParse->regRowid.
    ** The root page number of the new table is left in reg pParse->regRoot.
    ** The rowid and root page number values are needed by the code that
    ** sqlite3EndTable will generate.
    */
#if !defined(SQLITE_OMIT_VIEW) || !defined(SQLITE_OMIT_VIRTUALTABLE)
    if( isView || isVirtual ){
      sqlite3VdbeAddOp2(v, OP_Integer, 0, reg2);
    }else
#endif
    {
      pParse->addrCrTab = sqlite3VdbeAddOp2(v, OP_CreateTable, iDb, reg2);
    }
    sqlite3OpenMasterTable(pParse, iDb);
    sqlite3VdbeAddOp2(v, OP_NewRowid, 0, reg1);
    sqlite3VdbeAddOp2(v, OP_Null, 0, reg3);
    sqlite3VdbeAddOp3(v, OP_Insert, 0, reg3, reg1);
    sqlite3VdbeChangeP5(v, OPFLAG_APPEND);
    sqlite3VdbeAddOp0(v, OP_Close);
  }

  /* Normal (non-error) return. */
  return;

  /* If an error occurs, we jump here */
begin_table_error:
  sqlite3DbFree(db, zName);
  return;
}

/*
** This macro is used to compare two strings in a case-insensitive manner.
** It is slightly faster than calling sqlite3StrICmp() directly, but
** produces larger code.
**
** WARNING: This macro is not compatible with the strcmp() family. It
** returns true if the two strings are equal, otherwise false.
*/
#define STRICMP(x, y) (\
sqlite3UpperToLower[*(unsigned char *)(x)]==   \
sqlite3UpperToLower[*(unsigned char *)(y)]     \
&& sqlite3StrICmp((x)+1,(y)+1)==0 )

/*
** Add a new column to the table currently being constructed.
**
** The parser calls this routine once for each column declaration
** in a CREATE TABLE statement.  sqlite3StartTable() gets called
** first to get things going.  Then this routine is called for each
** column.
*/
void sqlite3AddColumn(Parse *pParse, Token *pName){
  Table *p;
  int i;
  char *z;
  Column *pCol;
  sqlite3 *db = pParse->db;
  if( (p = pParse->pNewTable)==0 ) return;
#if SQLITE_MAX_COLUMN
  if( p->nCol+1>db->aLimit[SQLITE_LIMIT_COLUMN] ){
    sqlite3ErrorMsg(pParse, "too many columns on %s", p->zName);
    return;
  }
#endif
  z = sqlite3NameFromToken(db, pName);
  if( z==0 ) return;
  for(i=0; i<p->nCol; i++){
    if( STRICMP(z, p->aCol[i].zName) ){
      sqlite3ErrorMsg(pParse, "duplicate column name: %s", z);
      sqlite3DbFree(db, z);
      return;
    }
  }
  if( (p->nCol & 0x7)==0 ){
    Column *aNew;
    aNew = sqlite3DbRealloc(db,p->aCol,(p->nCol+8)*sizeof(p->aCol[0]));
    if( aNew==0 ){
      sqlite3DbFree(db, z);
      return;
    }
    p->aCol = aNew;
  }
  pCol = &p->aCol[p->nCol];
  memset(pCol, 0, sizeof(p->aCol[0]));
  pCol->zName = z;

  /* If there is no type specified, columns have the default affinity
  ** 'NONE'. If there is a type specified, then sqlite3AddColumnType() will
  ** be called next to set pCol->affinity correctly.
  */
  pCol->affinity = SQLITE_AFF_NONE;
  pCol->szEst = 1;
  p->nCol++;
}

/*
** This routine is called by the parser while in the middle of
** parsing a CREATE TABLE statement.  A "NOT NULL" constraint has
** been seen on a column.  This routine sets the notNull flag on
** the column currently under construction.
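**
** For example, a column declared "a TEXT NOT NULL ON CONFLICT IGNORE"
** arrives here with onError set to OE_Ignore, while a bare NOT NULL uses
** the default conflict resolution (ABORT).  (The column name is
** illustrative only.)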
*/
void sqlite3AddNotNull(Parse *pParse, int onError){
  Table *p;
  p = pParse->pNewTable;
  if( p==0 || NEVER(p->nCol<1) ) return;
  p->aCol[p->nCol-1].notNull = (u8)onError;
}

/*
** Scan the column type name zType (length nType) and return the
** associated affinity type.
**
** This routine does a case-independent search of zType for the
** substrings in the following table. If one of the substrings is
** found, the corresponding affinity is returned. If zType contains
** more than one of the substrings, entries toward the top of
** the table take priority. For example, if zType is 'BLOBINT',
** SQLITE_AFF_INTEGER is returned.
**
** Substring     | Affinity
** --------------------------------
** 'INT'         | SQLITE_AFF_INTEGER
** 'CHAR'        | SQLITE_AFF_TEXT
** 'CLOB'        | SQLITE_AFF_TEXT
** 'TEXT'        | SQLITE_AFF_TEXT
** 'BLOB'        | SQLITE_AFF_NONE
** 'REAL'        | SQLITE_AFF_REAL
** 'FLOA'        | SQLITE_AFF_REAL
** 'DOUB'        | SQLITE_AFF_REAL
**
** If none of the substrings in the above table are found,
** SQLITE_AFF_NUMERIC is returned.
*/
char sqlite3AffinityType(const char *zIn, u8 *pszEst){
  u32 h = 0;
  char aff = SQLITE_AFF_NUMERIC;
  const char *zChar = 0;

  if( zIn==0 ) return aff;
  while( zIn[0] ){
    h = (h<<8) + sqlite3UpperToLower[(*zIn)&0xff];
    zIn++;
    if( h==(('c'<<24)+('h'<<16)+('a'<<8)+'r') ){             /* CHAR */
      aff = SQLITE_AFF_TEXT;
      zChar = zIn;
    }else if( h==(('c'<<24)+('l'<<16)+('o'<<8)+'b') ){       /* CLOB */
      aff = SQLITE_AFF_TEXT;
    }else if( h==(('t'<<24)+('e'<<16)+('x'<<8)+'t') ){       /* TEXT */
      aff = SQLITE_AFF_TEXT;
    }else if( h==(('b'<<24)+('l'<<16)+('o'<<8)+'b')          /* BLOB */
        && (aff==SQLITE_AFF_NUMERIC || aff==SQLITE_AFF_REAL) ){
      aff = SQLITE_AFF_NONE;
      if( zIn[0]=='(' ) zChar = zIn;
#ifndef SQLITE_OMIT_FLOATING_POINT
    }else if( h==(('r'<<24)+('e'<<16)+('a'<<8)+'l')          /* REAL */
        && aff==SQLITE_AFF_NUMERIC ){
      aff = SQLITE_AFF_REAL;
    }else if( h==(('f'<<24)+('l'<<16)+('o'<<8)+'a')          /* FLOA */
        && aff==SQLITE_AFF_NUMERIC ){
      aff = SQLITE_AFF_REAL;
    }else if( h==(('d'<<24)+('o'<<16)+('u'<<8)+'b')          /* DOUB */
        && aff==SQLITE_AFF_NUMERIC ){
      aff = SQLITE_AFF_REAL;
#endif
    }else if( (h&0x00FFFFFF)==(('i'<<16)+('n'<<8)+'t') ){    /* INT */
      aff = SQLITE_AFF_INTEGER;
      break;
    }
  }

  /* If pszEst is not NULL, store an estimate of the field size.  The
  ** estimate is scaled so that the size of an integer is 1.  */
  if( pszEst ){
    *pszEst = 1;   /* default size is approx 4 bytes */
    if( aff<SQLITE_AFF_NUMERIC ){
      if( zChar ){
        while( zChar[0] ){
          if( sqlite3Isdigit(zChar[0]) ){
            int v = 0;
            sqlite3GetInt32(zChar, &v);
            v = v/4 + 1;
            if( v>255 ) v = 255;
            *pszEst = v; /* BLOB(k), VARCHAR(k), CHAR(k) -> r=(k/4+1) */
            break;
          }
          zChar++;
        }
      }else{
        *pszEst = 5;   /* BLOB, TEXT, CLOB -> r=5  (approx 20 bytes)*/
      }
    }
  }
  return aff;
}

/*
** This routine is called by the parser while in the middle of
** parsing a CREATE TABLE statement.  The pType token holds the
** sequence of tokens that describe the type of the column currently
** under construction.  Use this information to construct a string
** that contains the typename of the column and store that string
** in zType.
*/
void sqlite3AddColumnType(Parse *pParse, Token *pType){
  Table *p;
  Column *pCol;

  p = pParse->pNewTable;
  if( p==0 || NEVER(p->nCol<1) ) return;
  pCol = &p->aCol[p->nCol-1];
  assert( pCol->zType==0 );
  pCol->zType = sqlite3NameFromToken(pParse->db, pType);
  pCol->affinity = sqlite3AffinityType(pCol->zType, &pCol->szEst);
}

/*
** The expression is the default value for the most recently added column
** of the table currently under construction.
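**
** For example, given "CREATE TABLE t1(a INT DEFAULT 3+4)", this routine
** attaches the constant expression 3+4 to column "a".  (The table and
** column names here are illustrative only.)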
**
** Default value expressions must be constant.  Raise an exception if this
** is not the case.
**
** This routine is called by the parser while in the middle of
** parsing a CREATE TABLE statement.
*/
void sqlite3AddDefaultValue(Parse *pParse, ExprSpan *pSpan){
  Table *p;
  Column *pCol;
  sqlite3 *db = pParse->db;
  p = pParse->pNewTable;
  if( p!=0 ){
    pCol = &(p->aCol[p->nCol-1]);
    if( !sqlite3ExprIsConstantOrFunction(pSpan->pExpr, db->init.busy) ){
      sqlite3ErrorMsg(pParse, "default value of column [%s] is not constant",
          pCol->zName);
    }else{
      /* A copy of pExpr is used instead of the original, as pExpr contains
      ** tokens that point to volatile memory. The 'span' of the expression
      ** is required by pragma table_info.
      */
      sqlite3ExprDelete(db, pCol->pDflt);
      pCol->pDflt = sqlite3ExprDup(db, pSpan->pExpr, EXPRDUP_REDUCE);
      sqlite3DbFree(db, pCol->zDflt);
      pCol->zDflt = sqlite3DbStrNDup(db, (char*)pSpan->zStart,
                                     (int)(pSpan->zEnd - pSpan->zStart));
    }
  }
  sqlite3ExprDelete(db, pSpan->pExpr);
}

/*
** Designate the PRIMARY KEY for the table.  pList is a list of names
** of columns that form the primary key.  If pList is NULL, then the
** most recently added column of the table is the primary key.
**
** A table can have at most one primary key.  If the table already has
** a primary key (and this is the second primary key) then create an
** error.
**
** If the PRIMARY KEY is on a single column whose datatype is INTEGER,
** then we will try to use that column as the rowid.  Set the Table.iPKey
** field of the table under construction to be the index of the
** INTEGER PRIMARY KEY column.  Table.iPKey is set to -1 if there is
** no INTEGER PRIMARY KEY.
**
** If the key is not an INTEGER PRIMARY KEY, then create a unique
** index for the key.  No index is created for INTEGER PRIMARY KEYs.
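**
** For example (illustrative schemas, not from the original sources):
**
**     CREATE TABLE t1(a INTEGER PRIMARY KEY);  -- "a" becomes an alias for
**                                              -- the rowid; no extra index
**     CREATE TABLE t2(a TEXT PRIMARY KEY);     -- an automatic unique index
**                                              -- on "a" is created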
*/
void sqlite3AddPrimaryKey(
  Parse *pParse,    /* Parsing context */
  ExprList *pList,  /* List of field names to be indexed */
  int onError,      /* What to do with a uniqueness conflict */
  int autoInc,      /* True if the AUTOINCREMENT keyword is present */
  int sortOrder     /* SQLITE_SO_ASC or SQLITE_SO_DESC */
){
  Table *pTab = pParse->pNewTable;
  char *zType = 0;
  int iCol = -1, i;
  int nTerm;
  if( pTab==0 || IN_DECLARE_VTAB ) goto primary_key_exit;
#ifdef GD_ENABLE_NEWSQL_SERVER
  /* for test */
  goto primary_key_exit;
#endif
  if( pTab->tabFlags & TF_HasPrimaryKey ){
    sqlite3ErrorMsg(pParse,
      "table \"%s\" has more than one primary key", pTab->zName);
    goto primary_key_exit;
  }
  pTab->tabFlags |= TF_HasPrimaryKey;
  if( pList==0 ){
    iCol = pTab->nCol - 1;
    pTab->aCol[iCol].colFlags |= COLFLAG_PRIMKEY;
    zType = pTab->aCol[iCol].zType;
    nTerm = 1;
  }else{
    nTerm = pList->nExpr;
    for(i=0; i<nTerm; i++){
      for(iCol=0; iCol<pTab->nCol; iCol++){
        if( sqlite3StrICmp(pList->a[i].zName, pTab->aCol[iCol].zName)==0 ){
          pTab->aCol[iCol].colFlags |= COLFLAG_PRIMKEY;
          zType = pTab->aCol[iCol].zType;
          break;
        }
      }
    }
  }
  if( nTerm==1
   && zType && sqlite3StrICmp(zType, "INTEGER")==0
   && sortOrder==SQLITE_SO_ASC
  ){
    pTab->iPKey = iCol;
    pTab->keyConf = (u8)onError;
    assert( autoInc==0 || autoInc==1 );
    pTab->tabFlags |= autoInc*TF_Autoincrement;
    if( pList ) pParse->iPkSortOrder = pList->a[0].sortOrder;
  }else if( autoInc ){
#ifndef SQLITE_OMIT_AUTOINCREMENT
    sqlite3ErrorMsg(pParse, "AUTOINCREMENT is only allowed on an "
       "INTEGER PRIMARY KEY");
#endif
  }else{
    Vdbe *v = pParse->pVdbe;
    Index *p;
    if( v ) pParse->addrSkipPK = sqlite3VdbeAddOp0(v, OP_Noop);
    p = sqlite3CreateIndex(pParse, 0, 0, 0, pList, onError, 0, 0, sortOrder, 0);
    if( p ){
      p->idxType = SQLITE_IDXTYPE_PRIMARYKEY;
      if( v ) sqlite3VdbeJumpHere(v, pParse->addrSkipPK);
    }
    pList = 0;
  }

primary_key_exit:
  sqlite3ExprListDelete(pParse->db, pList);
  return;
}

/*
** Add a new CHECK constraint to the table currently under construction.
*/
void sqlite3AddCheckConstraint(
  Parse *pParse,    /* Parsing context */
  Expr *pCheckExpr  /* The check expression */
){
#ifndef SQLITE_OMIT_CHECK
  Table *pTab = pParse->pNewTable;
  sqlite3 *db = pParse->db;
  if( pTab && !IN_DECLARE_VTAB
   && !sqlite3BtreeIsReadonly(db->aDb[db->init.iDb].pBt)
  ){
    pTab->pCheck = sqlite3ExprListAppend(pParse, pTab->pCheck, pCheckExpr);
    if( pParse->constraintName.n ){
      sqlite3ExprListSetName(pParse, pTab->pCheck, &pParse->constraintName, 1);
    }
  }else
#endif
  {
    sqlite3ExprDelete(pParse->db, pCheckExpr);
  }
}

/*
** Set the collation function of the most recently parsed table column
** to the CollSeq given.
*/
void sqlite3AddCollateType(Parse *pParse, Token *pToken){
  Table *p;
  int i;
  char *zColl;              /* Dequoted name of collation sequence */
  sqlite3 *db;

  if( (p = pParse->pNewTable)==0 ) return;
  i = p->nCol-1;
  db = pParse->db;
  zColl = sqlite3NameFromToken(db, pToken);
  if( !zColl ) return;

  if( sqlite3LocateCollSeq(pParse, zColl) ){
    Index *pIdx;
    sqlite3DbFree(db, p->aCol[i].zColl);
    p->aCol[i].zColl = zColl;

    /* If the column is declared as "<name> PRIMARY KEY COLLATE <type>",
    ** then an index may have been created on this column before the
    ** collation type was added. Correct this if it is the case.
    */
    for(pIdx=p->pIndex; pIdx; pIdx=pIdx->pNext){
      assert( pIdx->nKeyCol==1 );
      if( pIdx->aiColumn[0]==i ){
        pIdx->azColl[0] = p->aCol[i].zColl;
      }
    }
  }else{
    sqlite3DbFree(db, zColl);
  }
}

/*
** This function returns the collation sequence for database native text
** encoding identified by the string zName.
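**
** (Applications typically make named collations available ahead of time
** with sqlite3_create_collation(), or on demand through the
** sqlite3_collation_needed() callback; a "COLLATE xyz" clause in the
** schema is then resolved through this routine at parse time.  The
** collation name is illustrative only.)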
**
** If the requested collation sequence is not available, or not available
** in the database native encoding, the collation factory is invoked to
** request it. If the collation factory does not supply such a sequence,
** and the sequence is available in another text encoding, then that is
** returned instead.
**
** If no versions of the requested collation sequence are available, or
** another error occurs, NULL is returned and an error message written into
** pParse.
**
** This routine is a wrapper around sqlite3FindCollSeq().  This routine
** invokes the collation factory if the named collation cannot be found
** and generates an error message.
**
** See also: sqlite3FindCollSeq(), sqlite3GetCollSeq()
*/
CollSeq *sqlite3LocateCollSeq(Parse *pParse, const char *zName){
  sqlite3 *db = pParse->db;
  u8 enc = ENC(db);
  u8 initbusy = db->init.busy;
  CollSeq *pColl;

  pColl = sqlite3FindCollSeq(db, enc, zName, initbusy);
  if( !initbusy && (!pColl || !pColl->xCmp) ){
    pColl = sqlite3GetCollSeq(pParse, enc, pColl, zName);
  }

  return pColl;
}

/*
** Generate code that will increment the schema cookie.
**
** The schema cookie is used to determine when the schema for the
** database changes.  After each schema change, the cookie value
** changes.  When a process first reads the schema it records the
** cookie.  Thereafter, whenever it goes to access the database,
** it checks the cookie to make sure the schema has not changed
** since it was last read.
**
** This plan is not completely bullet-proof.  It is possible for
** the schema to change multiple times and for the cookie to be
** set back to a prior value.  But schema changes are infrequent
** and the probability of hitting the same cookie value is only
** 1 chance in 2^32.  So we're safe enough.
*/
void sqlite3ChangeCookie(Parse *pParse, int iDb){
  int r1 = sqlite3GetTempReg(pParse);
  sqlite3 *db = pParse->db;
  Vdbe *v = pParse->pVdbe;
  assert( sqlite3SchemaMutexHeld(db, iDb, 0) );
  sqlite3VdbeAddOp2(v, OP_Integer, db->aDb[iDb].pSchema->schema_cookie+1, r1);
  sqlite3VdbeAddOp3(v, OP_SetCookie, iDb, BTREE_SCHEMA_VERSION, r1);
  sqlite3ReleaseTempReg(pParse, r1);
}

/*
** Measure the number of characters needed to output the given
** identifier.  The number returned includes any quotes used
** but does not include the null terminator.
**
** The estimate is conservative.  It might be larger than what is
** really needed.
*/
static int identLength(const char *z){
  int n;
  for(n=0; *z; n++, z++){
    if( *z=='"' ){ n++; }
  }
  return n + 2;
}

/*
** The first parameter is a pointer to an output buffer. The second
** parameter is a pointer to an integer that contains the offset at
** which to write into the output buffer. This function copies the
** nul-terminated string pointed to by the third parameter, zSignedIdent,
** to the specified offset in the buffer and updates *pIdx to refer
** to the first byte after the last byte written before returning.
**
** If the string zSignedIdent consists entirely of alpha-numeric
** characters, does not begin with a digit and is not an SQL keyword,
** then it is copied to the output buffer exactly as it is. Otherwise,
** it is quoted using double-quotes.
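**
** For example, an identifier like my_col is copied through unchanged,
** while select (a keyword), 1st (leading digit), and a"b (embedded quote)
** come out as "select", "1st", and "a""b" respectively.  (The identifier
** names are illustrative only.)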
*/
static void identPut(char *z, int *pIdx, char *zSignedIdent){
  unsigned char *zIdent = (unsigned char*)zSignedIdent;
  int i, j, needQuote;
  i = *pIdx;

  for(j=0; zIdent[j]; j++){
    if( !sqlite3Isalnum(zIdent[j]) && zIdent[j]!='_' ) break;
  }
  needQuote = sqlite3Isdigit(zIdent[0])
            || sqlite3KeywordCode(zIdent, j)!=TK_ID
            || zIdent[j]!=0
            || j==0;

  if( needQuote ) z[i++] = '"';
  for(j=0; zIdent[j]; j++){
    z[i++] = zIdent[j];
    if( zIdent[j]=='"' ) z[i++] = '"';
  }
  if( needQuote ) z[i++] = '"';
  z[i] = 0;
  *pIdx = i;
}

/*
** Generate a CREATE TABLE statement appropriate for the given
** table.  Memory to hold the text of the statement is obtained
** from sqliteMalloc() and must be freed by the calling function.
*/
static char *createTableStmt(sqlite3 *db, Table *p){
  int i, k, n;
  char *zStmt;
  char *zSep, *zSep2, *zEnd;
  Column *pCol;
  n = 0;
  for(pCol = p->aCol, i=0; i<p->nCol; i++, pCol++){
    n += identLength(pCol->zName) + 5;
  }
  n += identLength(p->zName);
  if( n<50 ){
    zSep = "";
    zSep2 = ",";
    zEnd = ")";
  }else{
    zSep = "\n  ";
    zSep2 = ",\n  ";
    zEnd = "\n)";
  }
  n += 35 + 6*p->nCol;
  zStmt = sqlite3DbMallocRaw(0, n);
  if( zStmt==0 ){
    db->mallocFailed = 1;
    return 0;
  }
  sqlite3_snprintf(n, zStmt, "CREATE TABLE ");
  k = sqlite3Strlen30(zStmt);
  identPut(zStmt, &k, p->zName);
  zStmt[k++] = '(';
  for(pCol=p->aCol, i=0; i<p->nCol; i++, pCol++){
    static const char * const azType[] = {
        /* SQLITE_AFF_NONE    */ "",
        /* SQLITE_AFF_TEXT    */ " TEXT",
        /* SQLITE_AFF_NUMERIC */ " NUM",
        /* SQLITE_AFF_INTEGER */ " INT",
        /* SQLITE_AFF_REAL    */ " REAL"
    };
    int len;
    const char *zType;

    sqlite3_snprintf(n-k, &zStmt[k], zSep);
    k += sqlite3Strlen30(&zStmt[k]);
    zSep = zSep2;
    identPut(zStmt, &k, pCol->zName);
    assert( pCol->affinity-SQLITE_AFF_NONE >= 0 );
    assert( pCol->affinity-SQLITE_AFF_NONE < ArraySize(azType) );
    testcase( pCol->affinity==SQLITE_AFF_NONE );
    testcase( pCol->affinity==SQLITE_AFF_TEXT );
    testcase( pCol->affinity==SQLITE_AFF_NUMERIC );
    testcase( pCol->affinity==SQLITE_AFF_INTEGER );
    testcase( pCol->affinity==SQLITE_AFF_REAL );

    zType = azType[pCol->affinity - SQLITE_AFF_NONE];
    len = sqlite3Strlen30(zType);
    assert( pCol->affinity==SQLITE_AFF_NONE
            || pCol->affinity==sqlite3AffinityType(zType, 0) );
    memcpy(&zStmt[k], zType, len);
    k += len;
    assert( k<=n );
  }
  sqlite3_snprintf(n-k, &zStmt[k], "%s", zEnd);
  return zStmt;
}

/*
** Resize an Index object to hold N columns total.  Return SQLITE_OK
** on success and SQLITE_NOMEM on an OOM error.
*/
static int resizeIndexObject(sqlite3 *db, Index *pIdx, int N){
  char *zExtra;
  int nByte;
  if( pIdx->nColumn>=N ) return SQLITE_OK;
  assert( pIdx->isResized==0 );
  nByte = (sizeof(char*) + sizeof(i16) + 1)*N;
  zExtra = sqlite3DbMallocZero(db, nByte);
  if( zExtra==0 ) return SQLITE_NOMEM;
  memcpy(zExtra, pIdx->azColl, sizeof(char*)*pIdx->nColumn);
  pIdx->azColl = (char**)zExtra;
  zExtra += sizeof(char*)*N;
  memcpy(zExtra, pIdx->aiColumn, sizeof(i16)*pIdx->nColumn);
  pIdx->aiColumn = (i16*)zExtra;
  zExtra += sizeof(i16)*N;
  memcpy(zExtra, pIdx->aSortOrder, pIdx->nColumn);
  pIdx->aSortOrder = (u8*)zExtra;
  pIdx->nColumn = N;
  pIdx->isResized = 1;
  return SQLITE_OK;
}

/*
** Estimate the total row width for a table.
*/
static void estimateTableWidth(Table *pTab){
  unsigned wTable = 0;
  const Column *pTabCol;
  int i;
  for(i=pTab->nCol, pTabCol=pTab->aCol; i>0; i--, pTabCol++){
    wTable += pTabCol->szEst;
  }
  if( pTab->iPKey<0 ) wTable++;
  pTab->szTabRow = sqlite3LogEst(wTable*4);
}

/*
** Estimate the average size of a row for an index.
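**
** (As with estimateTableWidth() above, the width is the sum of the szEst
** values of the indexed columns, scaled so that an integer counts as 1,
** converted to approximate bytes (*4), and stored as a base-2 logarithmic
** estimate via sqlite3LogEst().  For instance, a three-integer key would
** give sqlite3LogEst(12); the numbers here are illustrative.)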
*/
static void estimateIndexWidth(Index *pIdx){
  unsigned wIndex = 0;
  int i;
  const Column *aCol = pIdx->pTable->aCol;
  for(i=0; i<pIdx->nColumn; i++){
    i16 x = pIdx->aiColumn[i];
    assert( x<pIdx->pTable->nCol );
    wIndex += x<0 ? 1 : aCol[pIdx->aiColumn[i]].szEst;
  }
  pIdx->szIdxRow = sqlite3LogEst(wIndex*4);
}

/* Return true if value x is found in any of the first nCol entries
** of aiCol[]
*/
static int hasColumn(const i16 *aiCol, int nCol, int x){
  while( nCol-- > 0 ) if( x==*(aiCol++) ) return 1;
  return 0;
}

/*
** This routine runs at the end of parsing a CREATE TABLE statement that
** has a WITHOUT ROWID clause.  The job of this routine is to convert both
** internal schema data structures and the generated VDBE code so that they
** are appropriate for a WITHOUT ROWID table instead of a rowid table.
** Changes include:
**
**     (1)  Convert the OP_CreateTable into an OP_CreateIndex.  There is
**          no rowid btree for a WITHOUT ROWID.  Instead, the canonical
**          data storage is a covering index btree.
**     (2)  Bypass the creation of the sqlite_master table entry
**          for the PRIMARY KEY as the primary key index is now
**          identified by the sqlite_master table entry of the table itself.
**     (3)  Set the Index.tnum of the PRIMARY KEY Index object in the
**          schema to the rootpage from the main table.
**     (4)  Set all columns of the PRIMARY KEY schema object to be NOT NULL.
**     (5)  Add all table columns to the PRIMARY KEY Index object
**          so that the PRIMARY KEY is a covering index.  The surplus
**          columns are part of KeyInfo.nXField and are not used for
**          sorting or lookup or uniqueness checks.
**     (6)  Replace the rowid tail on all automatically generated UNIQUE
**          indices with the PRIMARY KEY columns.
*/
static void convertToWithoutRowidTable(Parse *pParse, Table *pTab){
  Index *pIdx;
  Index *pPk;
  int nPk;
  int i, j;
  sqlite3 *db = pParse->db;
  Vdbe *v = pParse->pVdbe;

  /* Convert the OP_CreateTable opcode that would normally create the
  ** root-page for the table into an OP_CreateIndex opcode.  The index
  ** created will become the PRIMARY KEY index.
  */
  if( pParse->addrCrTab ){
    assert( v );
    sqlite3VdbeGetOp(v, pParse->addrCrTab)->opcode = OP_CreateIndex;
  }

  /* Bypass the creation of the PRIMARY KEY btree and the sqlite_master
  ** table entry.
  */
  if( pParse->addrSkipPK ){
    assert( v );
    sqlite3VdbeGetOp(v, pParse->addrSkipPK)->opcode = OP_Goto;
  }

  /* Locate the PRIMARY KEY index.  Or, if this table was originally
  ** an INTEGER PRIMARY KEY table, create a new PRIMARY KEY index.
  */
  if( pTab->iPKey>=0 ){
    ExprList *pList;
    pList = sqlite3ExprListAppend(pParse, 0, 0);
    if( pList==0 ) return;
    pList->a[0].zName = sqlite3DbStrDup(pParse->db,
                                        pTab->aCol[pTab->iPKey].zName);
    pList->a[0].sortOrder = pParse->iPkSortOrder;
    assert( pParse->pNewTable==pTab );
    pPk = sqlite3CreateIndex(pParse, 0, 0, 0, pList, pTab->keyConf, 0, 0, 0, 0);
    if( pPk==0 ) return;
    pPk->idxType = SQLITE_IDXTYPE_PRIMARYKEY;
    pTab->iPKey = -1;
  }else{
    pPk = sqlite3PrimaryKeyIndex(pTab);
  }
  assert( pPk!=0 );
  pPk->isCovering = 1;
  nPk = pPk->nKeyCol;

  /* Make sure every column of the PRIMARY KEY is NOT NULL */
  for(i=0; i<nPk; i++){
    pTab->aCol[pPk->aiColumn[i]].notNull = 1;
  }
  pPk->uniqNotNull = 1;

  /* The root page of the PRIMARY KEY is the table root page */
  pPk->tnum = pTab->tnum;

  /* Update the in-memory representation of all UNIQUE indices by converting
  ** the final rowid column into one or more columns of the PRIMARY KEY.
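  **
  ** For example (an illustrative schema, not from the original sources),
  ** given "CREATE TABLE t(a,b,c, PRIMARY KEY(a)) WITHOUT ROWID" and
  ** "CREATE UNIQUE INDEX i1 ON t(b)", the index i1 stores (b,a) rather
  ** than (b,rowid), since column "a" now plays the role of the rowid.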
  */
  for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){
    int n;
    if( IsPrimaryKeyIndex(pIdx) ) continue;
    for(i=n=0; i<nPk; i++){
      if( !hasColumn(pIdx->aiColumn, pIdx->nKeyCol, pPk->aiColumn[i]) ) n++;
    }
    if( n==0 ){
      /* This index is a superset of the primary key */
      pIdx->nColumn = pIdx->nKeyCol;
      continue;
    }
    if( resizeIndexObject(db, pIdx, pIdx->nKeyCol+n) ) return;
    for(i=0, j=pIdx->nKeyCol; i<nPk; i++){
      if( !hasColumn(pIdx->aiColumn, pIdx->nKeyCol, pPk->aiColumn[i]) ){
        pIdx->aiColumn[j] = pPk->aiColumn[i];
        pIdx->azColl[j] = pPk->azColl[i];
        j++;
      }
    }
    assert( pIdx->nColumn>=pIdx->nKeyCol+n );
    assert( pIdx->nColumn>=j );
  }

  /* Add all table columns to the PRIMARY KEY index
  */
  if( nPk<pTab->nCol ){
    if( resizeIndexObject(db, pPk, pTab->nCol) ) return;
    for(i=0, j=nPk; i<pTab->nCol; i++){
      if( !hasColumn(pPk->aiColumn, j, i) ){
        assert( j<pPk->nColumn );
        pPk->aiColumn[j] = i;
        pPk->azColl[j] = "BINARY";
        j++;
      }
    }
    assert( pPk->nColumn==j );
    assert( pTab->nCol==j );
  }else{
    pPk->nColumn = pTab->nCol;
  }
}

/*
** This routine is called to report the final ")" that terminates
** a CREATE TABLE statement.
**
** The table structure that other action routines have been building
** is added to the internal hash tables, assuming no errors have
** occurred.
**
** An entry for the table is made in the master table on disk, unless
** this is a temporary table or db->init.busy==1.  When db->init.busy==1
** it means we are reading the sqlite_master table because we just
** connected to the database or because the sqlite_master table has
** recently changed, so the entry for this table already exists in
** the sqlite_master table.  We do not want to create it again.
**
** If the pSelect argument is not NULL, it means that this routine
** was called to create a table generated from a
** "CREATE TABLE ... AS SELECT ..." statement.  The column names of
** the new table will match the result set of the SELECT.
*/
void sqlite3EndTable(
  Parse *pParse,          /* Parse context */
  Token *pCons,           /* The ',' token after the last column defn. */
  Token *pEnd,            /* The ')' before options in the CREATE TABLE */
  u8 tabOpts,             /* Extra table options. Usually 0. */
  Select *pSelect         /* Select from a "CREATE ... AS SELECT" */
){
  Table *p;                 /* The new table */
  sqlite3 *db = pParse->db; /* The database connection */
  int iDb;                  /* Database in which the table lives */
  Index *pIdx;              /* An implied index of the table */

  if( (pEnd==0 && pSelect==0) || db->mallocFailed ){
    return;
  }
  p = pParse->pNewTable;
  if( p==0 ) return;

  assert( !db->init.busy || !pSelect );

  /* If the db->init.busy is 1 it means we are reading the SQL off the
  ** "sqlite_master" or "sqlite_temp_master" table on the disk.
  ** So do not write to the disk again.  Extract the root page number
  ** for the table from the db->init.newTnum field.  (The page number
  ** should have been put there by the sqliteOpenCb routine.)
  */
  if( db->init.busy ){
    p->tnum = db->init.newTnum;
  }

  /* Special processing for WITHOUT ROWID Tables */
  if( tabOpts & TF_WithoutRowid ){
    if( (p->tabFlags & TF_Autoincrement) ){
      sqlite3ErrorMsg(pParse,
          "AUTOINCREMENT not allowed on WITHOUT ROWID tables");
      return;
    }
    if( (p->tabFlags & TF_HasPrimaryKey)==0 ){
      sqlite3ErrorMsg(pParse, "PRIMARY KEY missing on table %s", p->zName);
    }else{
      p->tabFlags |= TF_WithoutRowid;
      convertToWithoutRowidTable(pParse, p);
    }
  }

  iDb = sqlite3SchemaToIndex(db, p->pSchema);

#ifndef SQLITE_OMIT_CHECK
  /* Resolve names in all CHECK constraint expressions.
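  **
  ** (Name resolution binds column references in expressions such as
  ** "CHECK(a>0)" to the columns of the table being defined; the NC_IsCheck
  ** flag tells the resolver that the expression is part of a CHECK
  ** constraint and so may refer only to columns of this table.  The
  ** example constraint is illustrative.)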
  */
  if( p->pCheck ){
    sqlite3ResolveSelfReference(pParse, p, NC_IsCheck, 0, p->pCheck);
  }
#endif /* !defined(SQLITE_OMIT_CHECK) */

  /* Estimate the average row size for the table and for all implied indices */
  estimateTableWidth(p);
  for(pIdx=p->pIndex; pIdx; pIdx=pIdx->pNext){
    estimateIndexWidth(pIdx);
  }

  /* If not initializing, then create a record for the new table
  ** in the SQLITE_MASTER table of the database.
  **
  ** If this is a TEMPORARY table, write the entry into the auxiliary
  ** file instead of into the main database file.
  */
  if( !db->init.busy ){
    int n;
    Vdbe *v;
    char *zType;    /* "view" or "table" */
    char *zType2;   /* "VIEW" or "TABLE" */
    char *zStmt;    /* Text of the CREATE TABLE or CREATE VIEW statement */

    v = sqlite3GetVdbe(pParse);
    if( NEVER(v==0) ) return;

    sqlite3VdbeAddOp1(v, OP_Close, 0);

    /*
    ** Initialize zType for the new view or table.
    */
    if( p->pSelect==0 ){
      /* A regular table */
      zType = "table";
      zType2 = "TABLE";
#ifndef SQLITE_OMIT_VIEW
    }else{
      /* A view */
      zType = "view";
      zType2 = "VIEW";
#endif
    }

    /* If this is a CREATE TABLE xx AS SELECT ..., execute the SELECT
    ** statement to populate the new table. The root-page number for the
    ** new table is in register pParse->regRoot.
    **
    ** Once the SELECT has been coded by sqlite3Select(), it is in a
    ** suitable state to query for the column names and types to be used
    ** by the new table.
    **
    ** A shared-cache write-lock is not required to write to the new table,
    ** as a schema-lock must have already been obtained to create it. Since
    ** a schema-lock excludes all other database users, the write-lock would
    ** be redundant.
    */
    if( pSelect ){
      SelectDest dest;
      Table *pSelTab;

      assert(pParse->nTab==1);
      sqlite3VdbeAddOp3(v, OP_OpenWrite, 1, pParse->regRoot, iDb);
      sqlite3VdbeChangeP5(v, OPFLAG_P2ISREG);
      pParse->nTab = 2;
      sqlite3SelectDestInit(&dest, SRT_Table, 1);
      sqlite3Select(pParse, pSelect, &dest);
      sqlite3VdbeAddOp1(v, OP_Close, 1);
      if( pParse->nErr==0 ){
        pSelTab = sqlite3ResultSetOfSelect(pParse, pSelect);
        if( pSelTab==0 ) return;
        assert( p->aCol==0 );
        p->nCol = pSelTab->nCol;
        p->aCol = pSelTab->aCol;
        pSelTab->nCol = 0;
        pSelTab->aCol = 0;
        sqlite3DeleteTable(db, pSelTab);
      }
    }

    /* Compute the complete text of the CREATE statement */
    if( pSelect ){
      zStmt = createTableStmt(db, p);
    }else{
      Token *pEnd2 = tabOpts ? &pParse->sLastToken : pEnd;
      n = (int)(pEnd2->z - pParse->sNameToken.z);
      if( pEnd2->z[0]!=';' ) n += pEnd2->n;
      zStmt = sqlite3MPrintf(db,
          "CREATE %s %.*s", zType2, n, pParse->sNameToken.z
      );
    }

    /* A slot for the record has already been allocated in the
    ** SQLITE_MASTER table.  We just need to update that slot with all
    ** the information we've collected.
    */
    sqlite3NestedParse(pParse,
      "UPDATE %Q.%s "
         "SET type='%s', name=%Q, tbl_name=%Q, rootpage=#%d, sql=%Q "
       "WHERE rowid=#%d",
      db->aDb[iDb].zName, SCHEMA_TABLE(iDb),
      zType,
      p->zName,
      p->zName,
      pParse->regRoot,
      zStmt,
      pParse->regRowid
    );
    sqlite3DbFree(db, zStmt);
    sqlite3ChangeCookie(pParse, iDb);

#ifndef SQLITE_OMIT_AUTOINCREMENT
    /* Check to see if we need to create an sqlite_sequence table for
    ** keeping track of autoincrement keys.
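    **
    ** (The sqlite_sequence table holds one row per AUTOINCREMENT table,
    ** recording the largest rowid handed out so far.  The nested
    ** "CREATE TABLE ... sqlite_sequence(name,seq)" below creates it lazily
    ** the first time any AUTOINCREMENT table appears in the database.)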
    */
    if( p->tabFlags & TF_Autoincrement ){
      Db *pDb = &db->aDb[iDb];
      assert( sqlite3SchemaMutexHeld(db, iDb, 0) );
      if( pDb->pSchema->pSeqTab==0 ){
        sqlite3NestedParse(pParse,
          "CREATE TABLE %Q.sqlite_sequence(name,seq)",
          pDb->zName
        );
      }
    }
#endif

    /* Reparse everything to update our internal data structures */
    sqlite3VdbeAddParseSchemaOp(v, iDb,
           sqlite3MPrintf(db, "tbl_name='%q' AND type!='trigger'", p->zName));
  }

  /* Add the table to the in-memory representation of the database.
  */
  if( db->init.busy ){
    Table *pOld;
    Schema *pSchema = p->pSchema;
    assert( sqlite3SchemaMutexHeld(db, iDb, 0) );
    pOld = sqlite3HashInsert(&pSchema->tblHash, p->zName, p);
    if( pOld ){
      assert( p==pOld );  /* Malloc must have failed inside HashInsert() */
      db->mallocFailed = 1;
      return;
    }
    pParse->pNewTable = 0;
    db->flags |= SQLITE_InternChanges;

#ifndef SQLITE_OMIT_ALTERTABLE
    if( !p->pSelect ){
      const char *zName = (const char *)pParse->sNameToken.z;
      int nName;
      assert( !pSelect && pCons && pEnd );
      if( pCons->z==0 ){
        pCons = pEnd;
      }
      nName = (int)((const char *)pCons->z - zName);
      p->addColOffset = 13 + sqlite3Utf8CharLen(zName, nName);
    }
#endif
  }
}

#ifndef SQLITE_OMIT_VIEW
/*
** The parser calls this routine in order to create a new VIEW
*/
void sqlite3CreateView(
  Parse *pParse,     /* The parsing context */
  Token *pBegin,     /* The CREATE token that begins the statement */
  Token *pName1,     /* The token that holds the name of the view */
  Token *pName2,     /* The token that holds the name of the view */
  Select *pSelect,   /* A SELECT statement that will become the new view */
  int isTemp,        /* TRUE for a TEMPORARY view */
  int noErr          /* Suppress error messages if VIEW already exists */
){
  Table *p;
  int n;
  const char *z;
  Token sEnd;
  DbFixer sFix;
  Token *pName = 0;
  int iDb;
  sqlite3 *db = pParse->db;

  if( pParse->nVar>0 ){
    sqlite3ErrorMsg(pParse, "parameters are not allowed in views");
    sqlite3SelectDelete(db, pSelect);
    return;
  }
  sqlite3StartTable(pParse, pName1, pName2, isTemp, 1, 0, noErr);
  p = pParse->pNewTable;
  if( p==0 || pParse->nErr ){
    sqlite3SelectDelete(db, pSelect);
    return;
  }
  sqlite3TwoPartName(pParse, pName1, pName2, &pName);
  iDb = sqlite3SchemaToIndex(db, p->pSchema);
  sqlite3FixInit(&sFix, pParse, iDb, "view", pName);
  if( sqlite3FixSelect(&sFix, pSelect) ){
    sqlite3SelectDelete(db, pSelect);
    return;
  }

  /* Make a copy of the entire SELECT statement that defines the view.
  ** This will force all the Expr.token.z values to be dynamically
  ** allocated rather than point to the input string - which means that
  ** they will persist after the current sqlite3_exec() call returns.
  */
  p->pSelect = sqlite3SelectDup(db, pSelect, EXPRDUP_REDUCE);
  sqlite3SelectDelete(db, pSelect);
  if( db->mallocFailed ){
    return;
  }
  if( !db->init.busy ){
    sqlite3ViewGetColumnNames(pParse, p);
  }

  /* Locate the end of the CREATE VIEW statement.  Make sEnd point to
  ** the end.
  */
  sEnd = pParse->sLastToken;
  if( ALWAYS(sEnd.z[0]!=0) && sEnd.z[0]!=';' ){
    sEnd.z += sEnd.n;
  }
  sEnd.n = 0;
  n = (int)(sEnd.z - pBegin->z);
  z = pBegin->z;
  while( ALWAYS(n>0) && sqlite3Isspace(z[n-1]) ){ n--; }
  sEnd.z = &z[n-1];
  sEnd.n = 1;

  /* Use sqlite3EndTable() to add the view to the SQLITE_MASTER table */
  sqlite3EndTable(pParse, 0, &sEnd, 0, 0);
  return;
}
#endif /* SQLITE_OMIT_VIEW */

#if !defined(SQLITE_OMIT_VIEW) || !defined(SQLITE_OMIT_VIRTUALTABLE)
/*
** The Table structure pTable is really a VIEW.  Fill in the names of
** the columns of the view in the pTable structure.  Return the number
** of errors.  If an error is seen leave an error message in pParse->zErrMsg.
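**
** (For example, for "CREATE VIEW v1 AS SELECT a+1, b AS c FROM t1" the
** computed column names are typically "a+1" and "c", taken from the result
** set of the SELECT.  The view, table, and column names here are
** illustrative only.)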
*/
int sqlite3ViewGetColumnNames(Parse *pParse, Table *pTable){
  Table *pSelTab;   /* A fake table from which we get the result set */
  Select *pSel;     /* Copy of the SELECT that implements the view */
  int nErr = 0;     /* Number of errors encountered */
  int n;            /* Temporarily holds the number of cursors assigned */
  sqlite3 *db = pParse->db;  /* Database connection for malloc errors */
  sqlite3_xauth xAuth;       /* Saved xAuth pointer */

  assert( pTable );

#ifndef SQLITE_OMIT_VIRTUALTABLE
  if( sqlite3VtabCallConnect(pParse, pTable) ){
    return SQLITE_ERROR;
  }
  if( IsVirtual(pTable) ) return 0;
#endif

#ifndef SQLITE_OMIT_VIEW
  /* A positive nCol means the column names for this view are
  ** already known.
  */
  if( pTable->nCol>0 ) return 0;

  /* A negative nCol is a special marker meaning that we are currently
  ** trying to compute the column names.  If we enter this routine with
  ** a negative nCol, it means two or more views form a loop, like this:
  **
  **     CREATE VIEW one AS SELECT * FROM two;
  **     CREATE VIEW two AS SELECT * FROM one;
  **
  ** Actually, the error above is now caught prior to reaching this point.
  ** But the following test is still important as it does come up
  ** in the following:
  **
  **     CREATE TABLE main.ex1(a);
  **     CREATE TEMP VIEW ex1 AS SELECT a FROM ex1;
  **     SELECT * FROM temp.ex1;
  */
  if( pTable->nCol<0 ){
    sqlite3ErrorMsg(pParse, "view %s is circularly defined", pTable->zName);
    return 1;
  }
  assert( pTable->nCol>=0 );

  /* If we get this far, it means we need to compute the column names.
  ** Note that the call to sqlite3ResultSetOfSelect() will expand any
  ** "*" elements in the results set of the view and will assign cursors
  ** to the elements of the FROM clause.  But we do not want these changes
  ** to be permanent.  So the computation is done on a copy of the SELECT
  ** statement that defines the view.
  */
  assert( pTable->pSelect );
  pSel = sqlite3SelectDup(db, pTable->pSelect, 0);
  if( pSel ){
    u8 enableLookaside = db->lookaside.bEnabled;
    n = pParse->nTab;
    sqlite3SrcListAssignCursors(pParse, pSel->pSrc);
    pTable->nCol = -1;
    db->lookaside.bEnabled = 0;
#ifndef SQLITE_OMIT_AUTHORIZATION
    xAuth = db->xAuth;
    db->xAuth = 0;
    pSelTab = sqlite3ResultSetOfSelect(pParse, pSel);
    db->xAuth = xAuth;
#else
    pSelTab = sqlite3ResultSetOfSelect(pParse, pSel);
#endif
    db->lookaside.bEnabled = enableLookaside;
    pParse->nTab = n;
    if( pSelTab ){
      assert( pTable->aCol==0 );
      pTable->nCol = pSelTab->nCol;
      pTable->aCol = pSelTab->aCol;
      pSelTab->nCol = 0;
      pSelTab->aCol = 0;
      sqlite3DeleteTable(db, pSelTab);
      assert( sqlite3SchemaMutexHeld(db, 0, pTable->pSchema) );
      pTable->pSchema->schemaFlags |= DB_UnresetViews;
    }else{
      pTable->nCol = 0;
      nErr++;
    }
    sqlite3SelectDelete(db, pSel);
  } else {
    nErr++;
  }
#endif /* SQLITE_OMIT_VIEW */
  return nErr;
}
#endif /* !defined(SQLITE_OMIT_VIEW) || !defined(SQLITE_OMIT_VIRTUALTABLE) */

#ifndef SQLITE_OMIT_VIEW
/*
** Clear the column names from every VIEW in database idx.
*/
static void sqliteViewResetAll(sqlite3 *db, int idx){
  HashElem *i;
  assert( sqlite3SchemaMutexHeld(db, idx, 0) );
  if( !DbHasProperty(db, idx, DB_UnresetViews) ) return;
  for(i=sqliteHashFirst(&db->aDb[idx].pSchema->tblHash); i;i=sqliteHashNext(i)){
    Table *pTab = sqliteHashData(i);
    if( pTab->pSelect ){
      sqliteDeleteColumnNames(db, pTab);
      pTab->aCol = 0;
      pTab->nCol = 0;
    }
  }
  DbClearProperty(db, idx, DB_UnresetViews);
}
#else
# define sqliteViewResetAll(A,B)
#endif /* SQLITE_OMIT_VIEW */

/*
** This function is called by the VDBE to adjust the internal schema
** used by SQLite when the btree layer moves a table root page.
** The root-page of a table or index in database iDb has changed from iFrom
** to iTo.
**
** Ticket #1728:  The symbol table might still contain information
** on tables and/or indices that are in the process of being deleted.
** If you are unlucky, one of those deleted indices or tables might
** have the same rootpage number as the real table or index that is
** being moved.  So we cannot stop searching after the first match
** because the first match might be for one of the deleted indices
** or tables and not the table/index that is actually being moved.
** We must continue looping until all tables and indices with
** rootpage==iFrom have been converted to have a rootpage of iTo
** in order to be certain that we got the right one.
*/
#ifndef SQLITE_OMIT_AUTOVACUUM
void sqlite3RootPageMoved(sqlite3 *db, int iDb, int iFrom, int iTo){
  HashElem *pElem;
  Hash *pHash;
  Db *pDb;

  assert( sqlite3SchemaMutexHeld(db, iDb, 0) );
  pDb = &db->aDb[iDb];
  pHash = &pDb->pSchema->tblHash;
  for(pElem=sqliteHashFirst(pHash); pElem; pElem=sqliteHashNext(pElem)){
    Table *pTab = sqliteHashData(pElem);
    if( pTab->tnum==iFrom ){
      pTab->tnum = iTo;
    }
  }
  pHash = &pDb->pSchema->idxHash;
  for(pElem=sqliteHashFirst(pHash); pElem; pElem=sqliteHashNext(pElem)){
    Index *pIdx = sqliteHashData(pElem);
    if( pIdx->tnum==iFrom ){
      pIdx->tnum = iTo;
    }
  }
}
#endif

/*
** Write code to erase the table with root-page iTable from database iDb.
** Also write code to modify the sqlite_master table and internal schema
** if a root-page of another table is moved by the btree-layer whilst
** erasing iTable (this can happen with an auto-vacuum database).
*/
static void destroyRootPage(Parse *pParse, int iTable, int iDb){
  Vdbe *v = sqlite3GetVdbe(pParse);
  int r1 = sqlite3GetTempReg(pParse);
  sqlite3VdbeAddOp3(v, OP_Destroy, iTable, r1, iDb);
  sqlite3MayAbort(pParse);
#ifndef SQLITE_OMIT_AUTOVACUUM
  /* OP_Destroy stores an integer in register r1. If this integer
  ** is non-zero, then it is the root page number of a table moved to
  ** location iTable. The following code modifies the sqlite_master table to
  ** reflect this.
  **
  ** The "#NNN" in the SQL is a special constant that means whatever value
  ** is in register NNN.  See grammar rules associated with the TK_REGISTER
  ** token for additional information.
  */
  sqlite3NestedParse(pParse,
     "UPDATE %Q.%s SET rootpage=%d WHERE #%d AND rootpage=#%d",
     pParse->db->aDb[iDb].zName, SCHEMA_TABLE(iDb), iTable, r1, r1);
#endif
  sqlite3ReleaseTempReg(pParse, r1);
}

/*
** Write VDBE code to erase table pTab and all associated indices on disk.
** Code to update the sqlite_master tables and internal schema definitions
** in case a root-page belonging to another table is moved by the btree layer
** is also added (this can happen with an auto-vacuum database).
*/
static void destroyTable(Parse *pParse, Table *pTab){
#ifdef SQLITE_OMIT_AUTOVACUUM
  Index *pIdx;
  int iDb = sqlite3SchemaToIndex(pParse->db, pTab->pSchema);
  destroyRootPage(pParse, pTab->tnum, iDb);
  for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){
    destroyRootPage(pParse, pIdx->tnum, iDb);
  }
#else
  /* If the database may be auto-vacuum capable (if SQLITE_OMIT_AUTOVACUUM
  ** is not defined), then it is important to call OP_Destroy on the
  ** table and index root-pages in order, starting with the numerically
  ** largest root-page number. This guarantees that none of the root-pages
  ** to be destroyed is relocated by an earlier OP_Destroy. i.e. if the
  ** following were coded:
  **
  ** OP_Destroy 4 0
  ** ...
  ** OP_Destroy 5 0
  **
  ** and root page 5 happened to be the largest root-page number in the
  ** database, then root page 5 would be moved to page 4 by the
  ** "OP_Destroy 4 0" opcode. The subsequent "OP_Destroy 5 0" would hit
  ** a free-list page.
  */
  int iTab = pTab->tnum;
  int iDestroyed = 0;

  while( 1 ){
    Index *pIdx;
    int iLargest = 0;

    if( iDestroyed==0 || iTab<iDestroyed ){
      iLargest = iTab;
    }
    for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){
      int iIdx = pIdx->tnum;
      assert( pIdx->pSchema==pTab->pSchema );
      if( (iDestroyed==0 || (iIdx<iDestroyed)) && iIdx>iLargest ){
        iLargest = iIdx;
      }
    }
    if( iLargest==0 ){
      return;
    }else{
      int iDb = sqlite3SchemaToIndex(pParse->db, pTab->pSchema);
      assert( iDb>=0 && iDb<pParse->db->nDb );
      destroyRootPage(pParse, iLargest, iDb);
      iDestroyed = iLargest;
    }
  }
#endif
}

/*
** Remove entries from the sqlite_statN tables (for N in (1,2,3,4))
** after a DROP INDEX or DROP TABLE command.
*/
static void sqlite3ClearStatTables(
  Parse *pParse,         /* The parsing context */
  int iDb,               /* The database number */
  const char *zType,     /* "idx" or "tbl" */
  const char *zName      /* Name of index or table */
){
  int i;
  const char *zDbName = pParse->db->aDb[iDb].zName;
  for(i=1; i<=4; i++){
    char zTab[24];
    sqlite3_snprintf(sizeof(zTab),zTab,"sqlite_stat%d",i);
    if( sqlite3FindTable(pParse->db, zTab, zDbName) ){
      sqlite3NestedParse(pParse,
        "DELETE FROM %Q.%s WHERE %s=%Q",
        zDbName, zTab, zType, zName
      );
    }
  }
}

/*
** Generate code to drop a table.
*/
void sqlite3CodeDropTable(Parse *pParse, Table *pTab, int iDb, int isView){
  Vdbe *v;
  sqlite3 *db = pParse->db;
  Trigger *pTrigger;
  Db *pDb = &db->aDb[iDb];

  v = sqlite3GetVdbe(pParse);
  assert( v!=0 );
  sqlite3BeginWriteOperation(pParse, 1, iDb);

#ifndef SQLITE_OMIT_VIRTUALTABLE
  if( IsVirtual(pTab) ){
    sqlite3VdbeAddOp0(v, OP_VBegin);
  }
#endif

  /* Drop all triggers associated with the table being dropped. Code
  ** is generated to remove entries from sqlite_master and/or
  ** sqlite_temp_master if required.
  */
  pTrigger = sqlite3TriggerList(pParse, pTab);
  while( pTrigger ){
    assert( pTrigger->pSchema==pTab->pSchema ||
        pTrigger->pSchema==db->aDb[1].pSchema );
    sqlite3DropTriggerPtr(pParse, pTrigger);
    pTrigger = pTrigger->pNext;
  }

#ifndef SQLITE_OMIT_AUTOINCREMENT
  /* Remove any entries of the sqlite_sequence table associated with
  ** the table being dropped. This is done before the table is dropped
  ** at the btree level, in case the sqlite_sequence table needs to
  ** move as a result of the drop (can happen in auto-vacuum mode).
  */
  if( pTab->tabFlags & TF_Autoincrement ){
    sqlite3NestedParse(pParse,
      "DELETE FROM %Q.sqlite_sequence WHERE name=%Q",
      pDb->zName, pTab->zName
    );
  }
#endif

  /* Drop all SQLITE_MASTER table and index entries that refer to the
  ** table. The program loops through the master table and deletes
  ** every row that refers to a table of the same name as the one being
  ** dropped. Triggers are handled separately because a trigger can be
  ** created in the temp database that refers to a table in another
  ** database.
  */
  sqlite3NestedParse(pParse,
      "DELETE FROM %Q.%s WHERE tbl_name=%Q and type!='trigger'",
      pDb->zName, SCHEMA_TABLE(iDb), pTab->zName);
  if( !isView && !IsVirtual(pTab) ){
    destroyTable(pParse, pTab);
  }

  /* Remove the table entry from SQLite's internal schema and modify
  ** the schema cookie.
*/ if( IsVirtual(pTab) ){ sqlite3VdbeAddOp4(v, OP_VDestroy, iDb, 0, 0, pTab->zName, 0); } sqlite3VdbeAddOp4(v, OP_DropTable, iDb, 0, 0, pTab->zName, 0); sqlite3ChangeCookie(pParse, iDb); sqliteViewResetAll(db, iDb); } /* ** This routine is called to do the work of a DROP TABLE statement. ** pName is the name of the table to be dropped. */ void sqlite3DropTable(Parse *pParse, SrcList *pName, int isView, int noErr){ Table *pTab; Vdbe *v; sqlite3 *db = pParse->db; int iDb; if( db->mallocFailed ){ goto exit_drop_table; } assert( pParse->nErr==0 ); assert( pName->nSrc==1 ); if( noErr ) db->suppressErr++; pTab = sqlite3LocateTableItem(pParse, isView, &pName->a[0]); if( noErr ) db->suppressErr--; if( pTab==0 ){ if( noErr ) sqlite3CodeVerifyNamedSchema(pParse, pName->a[0].zDatabase); goto exit_drop_table; } iDb = sqlite3SchemaToIndex(db, pTab->pSchema); assert( iDb>=0 && iDb<db->nDb ); /* If pTab is a virtual table, call ViewGetColumnNames() to ensure ** it is initialized. */ if( IsVirtual(pTab) && sqlite3ViewGetColumnNames(pParse, pTab) ){ goto exit_drop_table; } #ifndef SQLITE_OMIT_AUTHORIZATION { int code; const char *zTab = SCHEMA_TABLE(iDb); const char *zDb = db->aDb[iDb].zName; const char *zArg2 = 0; if( sqlite3AuthCheck(pParse, SQLITE_DELETE, zTab, 0, zDb)){ goto exit_drop_table; } if( isView ){ if( !OMIT_TEMPDB && iDb==1 ){ code = SQLITE_DROP_TEMP_VIEW; }else{ code = SQLITE_DROP_VIEW; } #ifndef SQLITE_OMIT_VIRTUALTABLE }else if( IsVirtual(pTab) ){ code = SQLITE_DROP_VTABLE; zArg2 = sqlite3GetVTable(db, pTab)->pMod->zName; #endif }else{ if( !OMIT_TEMPDB && iDb==1 ){ code = SQLITE_DROP_TEMP_TABLE; }else{ code = SQLITE_DROP_TABLE; } } if( sqlite3AuthCheck(pParse, code, pTab->zName, zArg2, zDb) ){ goto exit_drop_table; } if( sqlite3AuthCheck(pParse, SQLITE_DELETE, pTab->zName, 0, zDb) ){ goto exit_drop_table; } } #endif if( sqlite3StrNICmp(pTab->zName, "sqlite_", 7)==0 && sqlite3StrNICmp(pTab->zName, "sqlite_stat", 11)!=0 ){ sqlite3ErrorMsg(pParse, "table %s may not be dropped", pTab->zName); goto exit_drop_table; } #ifndef SQLITE_OMIT_VIEW /* Ensure DROP TABLE is not used on a view, and DROP VIEW is not used ** on a table. */ if( isView && pTab->pSelect==0 ){ sqlite3ErrorMsg(pParse, "use DROP TABLE to delete table %s", pTab->zName); goto exit_drop_table; } if( !isView && pTab->pSelect ){ sqlite3ErrorMsg(pParse, "use DROP VIEW to delete view %s", pTab->zName); goto exit_drop_table; } #endif /* Generate code to remove the table from the master table ** on disk. */ v = sqlite3GetVdbe(pParse); if( v ){ sqlite3BeginWriteOperation(pParse, 1, iDb); sqlite3ClearStatTables(pParse, iDb, "tbl", pTab->zName); sqlite3FkDropTable(pParse, pName, pTab); sqlite3CodeDropTable(pParse, pTab, iDb, isView); } exit_drop_table: sqlite3SrcListDelete(db, pName); } /* ** This routine is called to create a new foreign key on the table ** currently under construction. pFromCol determines which columns ** in the current table point to the foreign key. If pFromCol==0 then ** connect the key to the last column inserted. pTo is the name of ** the table referred to (a.k.a the "parent" table). pToCol is a list ** of tables in the parent pTo table. flags contains all ** information about the conflict resolution algorithms specified ** in the ON DELETE, ON UPDATE and ON INSERT clauses. ** ** An FKey structure is created and added to the table currently ** under construction in the pParse->pNewTable field. ** ** The foreign key is set for IMMEDIATE processing. 
A subsequent call ** to sqlite3DeferForeignKey() might change this to DEFERRED. */ void sqlite3CreateForeignKey( Parse *pParse, /* Parsing context */ ExprList *pFromCol, /* Columns in this table that point to other table */ Token *pTo, /* Name of the other table */ ExprList *pToCol, /* Columns in the other table */ int flags /* Conflict resolution algorithms. */ ){ sqlite3 *db = pParse->db; #ifndef SQLITE_OMIT_FOREIGN_KEY FKey *pFKey = 0; FKey *pNextTo; Table *p = pParse->pNewTable; int nByte; int i; int nCol; char *z; assert( pTo!=0 ); if( p==0 || IN_DECLARE_VTAB ) goto fk_end; if( pFromCol==0 ){ int iCol = p->nCol-1; if( NEVER(iCol<0) ) goto fk_end; if( pToCol && pToCol->nExpr!=1 ){ sqlite3ErrorMsg(pParse, "foreign key on %s" " should reference only one column of table %T", p->aCol[iCol].zName, pTo); goto fk_end; } nCol = 1; }else if( pToCol && pToCol->nExpr!=pFromCol->nExpr ){ sqlite3ErrorMsg(pParse, "number of columns in foreign key does not match the number of " "columns in the referenced table"); goto fk_end; }else{ nCol = pFromCol->nExpr; } nByte = sizeof(*pFKey) + (nCol-1)*sizeof(pFKey->aCol[0]) + pTo->n + 1; if( pToCol ){ for(i=0; i<pToCol->nExpr; i++){ nByte += sqlite3Strlen30(pToCol->a[i].zName) + 1; } } pFKey = sqlite3DbMallocZero(db, nByte ); if( pFKey==0 ){ goto fk_end; } pFKey->pFrom = p; pFKey->pNextFrom = p->pFKey; z = (char*)&pFKey->aCol[nCol]; pFKey->zTo = z; memcpy(z, pTo->z, pTo->n); z[pTo->n] = 0; sqlite3Dequote(z); z += pTo->n+1; pFKey->nCol = nCol; if( pFromCol==0 ){ pFKey->aCol[0].iFrom = p->nCol-1; }else{ for(i=0; i<nCol; i++){ int j; for(j=0; j<p->nCol; j++){ if( sqlite3StrICmp(p->aCol[j].zName, pFromCol->a[i].zName)==0 ){ pFKey->aCol[i].iFrom = j; break; } } if( j>=p->nCol ){ sqlite3ErrorMsg(pParse, "unknown column \"%s\" in foreign key definition", pFromCol->a[i].zName); goto fk_end; } } } if( pToCol ){ for(i=0; i<nCol; i++){ int n = sqlite3Strlen30(pToCol->a[i].zName); pFKey->aCol[i].zCol = z; memcpy(z, pToCol->a[i].zName, n); z[n] = 0; z += n+1; } } pFKey->isDeferred = 0; pFKey->aAction[0] = (u8)(flags & 0xff); /* ON DELETE action */ pFKey->aAction[1] = (u8)((flags >> 8 ) & 0xff); /* ON UPDATE action */ assert( sqlite3SchemaMutexHeld(db, 0, p->pSchema) ); pNextTo = (FKey *)sqlite3HashInsert(&p->pSchema->fkeyHash, pFKey->zTo, (void *)pFKey ); if( pNextTo==pFKey ){ db->mallocFailed = 1; goto fk_end; } if( pNextTo ){ assert( pNextTo->pPrevTo==0 ); pFKey->pNextTo = pNextTo; pNextTo->pPrevTo = pFKey; } /* Link the foreign key to the table as the last step. */ p->pFKey = pFKey; pFKey = 0; fk_end: sqlite3DbFree(db, pFKey); #endif /* !defined(SQLITE_OMIT_FOREIGN_KEY) */ sqlite3ExprListDelete(db, pFromCol); sqlite3ExprListDelete(db, pToCol); } /* ** This routine is called when an INITIALLY IMMEDIATE or INITIALLY DEFERRED ** clause is seen as part of a foreign key definition. The isDeferred ** parameter is 1 for INITIALLY DEFERRED and 0 for INITIALLY IMMEDIATE. ** The behavior of the most recently created foreign key is adjusted ** accordingly. */ void sqlite3DeferForeignKey(Parse *pParse, int isDeferred){ #ifndef SQLITE_OMIT_FOREIGN_KEY Table *pTab; FKey *pFKey; if( (pTab = pParse->pNewTable)==0 || (pFKey = pTab->pFKey)==0 ) return; assert( isDeferred==0 || isDeferred==1 ); /* EV: R-30323-21917 */ pFKey->isDeferred = (u8)isDeferred; #endif } /* ** Generate code that will erase and refill index *pIdx. This is ** used to initialize a newly created index or to recompute the ** content of an index in response to a REINDEX command. 
**
** If memRootPage is not negative, it means that the index is newly
** created.  The register specified by memRootPage contains the
** root page number of the index.  If memRootPage is negative, then
** the index already exists and must be cleared before being refilled, and
** the root page number of the index is taken from pIndex->tnum.
*/
static void sqlite3RefillIndex(Parse *pParse, Index *pIndex, int memRootPage){
  Table *pTab = pIndex->pTable;  /* The table that is indexed */
  int iTab = pParse->nTab++;     /* Btree cursor used for pTab */
  int iIdx = pParse->nTab++;     /* Btree cursor used for pIndex */
  int iSorter;                   /* Cursor opened by OpenSorter (if in use) */
  int addr1;                     /* Address of top of loop */
  int addr2;                     /* Address to jump to for next iteration */
  int tnum;                      /* Root page of index */
  int iPartIdxLabel;             /* Jump to this label to skip a row */
  Vdbe *v;                       /* Generate code into this virtual machine */
  KeyInfo *pKey;                 /* KeyInfo for index */
  int regRecord;                 /* Register holding assembled index record */
  sqlite3 *db = pParse->db;      /* The database connection */
  int iDb = sqlite3SchemaToIndex(db, pIndex->pSchema);

#ifndef SQLITE_OMIT_AUTHORIZATION
  if( sqlite3AuthCheck(pParse, SQLITE_REINDEX, pIndex->zName, 0,
      db->aDb[iDb].zName ) ){
    return;
  }
#endif

  /* Require a write-lock on the table to perform this operation */
  sqlite3TableLock(pParse, iDb, pTab->tnum, 1, pTab->zName);

  v = sqlite3GetVdbe(pParse);
  if( v==0 ) return;
  if( memRootPage>=0 ){
    tnum = memRootPage;
  }else{
    tnum = pIndex->tnum;
  }
  pKey = sqlite3KeyInfoOfIndex(pParse, pIndex);

  /* Open the sorter cursor if we are to use one. */
  iSorter = pParse->nTab++;
  sqlite3VdbeAddOp4(v, OP_SorterOpen, iSorter, 0, pIndex->nKeyCol, (char*)
                    sqlite3KeyInfoRef(pKey), P4_KEYINFO);

  /* Open the table. Loop through all rows of the table, inserting index
  ** records into the sorter. */
  sqlite3OpenTable(pParse, iTab, iDb, pTab, OP_OpenRead);
  addr1 = sqlite3VdbeAddOp2(v, OP_Rewind, iTab, 0); VdbeCoverage(v);
  regRecord = sqlite3GetTempReg(pParse);

  sqlite3GenerateIndexKey(pParse,pIndex,iTab,regRecord,0,&iPartIdxLabel,0,0);
  sqlite3VdbeAddOp2(v, OP_SorterInsert, iSorter, regRecord);
  sqlite3ResolvePartIdxLabel(pParse, iPartIdxLabel);
  sqlite3VdbeAddOp2(v, OP_Next, iTab, addr1+1); VdbeCoverage(v);
  sqlite3VdbeJumpHere(v, addr1);
  if( memRootPage<0 ) sqlite3VdbeAddOp2(v, OP_Clear, tnum, iDb);
  sqlite3VdbeAddOp4(v, OP_OpenWrite, iIdx, tnum, iDb,
                    (char *)pKey, P4_KEYINFO);
  sqlite3VdbeChangeP5(v, OPFLAG_BULKCSR|((memRootPage>=0)?OPFLAG_P2ISREG:0));

  addr1 = sqlite3VdbeAddOp2(v, OP_SorterSort, iSorter, 0); VdbeCoverage(v);
  assert( pKey!=0 || db->mallocFailed || pParse->nErr );
  if( IsUniqueIndex(pIndex) && pKey!=0 ){
    int j2 = sqlite3VdbeCurrentAddr(v) + 3;
    sqlite3VdbeAddOp2(v, OP_Goto, 0, j2);
    addr2 = sqlite3VdbeCurrentAddr(v);
    sqlite3VdbeAddOp4Int(v, OP_SorterCompare, iSorter, j2, regRecord,
                         pIndex->nKeyCol); VdbeCoverage(v);
    sqlite3UniqueConstraint(pParse, OE_Abort, pIndex);
  }else{
    addr2 = sqlite3VdbeCurrentAddr(v);
  }
  sqlite3VdbeAddOp3(v, OP_SorterData, iSorter, regRecord, iIdx);
  sqlite3VdbeAddOp3(v, OP_IdxInsert, iIdx, regRecord, 1);
  sqlite3VdbeChangeP5(v, OPFLAG_USESEEKRESULT);
  sqlite3ReleaseTempReg(pParse, regRecord);
  sqlite3VdbeAddOp2(v, OP_SorterNext, iSorter, addr2); VdbeCoverage(v);
  sqlite3VdbeJumpHere(v, addr1);

  sqlite3VdbeAddOp1(v, OP_Close, iTab);
  sqlite3VdbeAddOp1(v, OP_Close, iIdx);
  sqlite3VdbeAddOp1(v, OP_Close, iSorter);
}

/*
** Allocate heap space to hold an Index object with nCol columns.
**
** Increase the allocation size to provide an extra nExtra bytes
** of 8-byte aligned space after the Index object and return a
** pointer to this extra space in *ppExtra.
*/
Index *sqlite3AllocateIndexObject(
  sqlite3 *db,         /* Database connection */
  i16 nCol,            /* Total number of columns in the index */
  int nExtra,          /* Number of bytes of extra space to alloc */
  char **ppExtra       /* Pointer to the "extra" space */
){
  Index *p;            /* Allocated index object */
  int nByte;           /* Bytes of space for Index object + arrays */

  nByte = ROUND8(sizeof(Index)) +              /* Index structure   */
          ROUND8(sizeof(char*)*nCol) +         /* Index.azColl      */
          ROUND8(sizeof(LogEst)*(nCol+1) +     /* Index.aiRowLogEst */
                 sizeof(i16)*nCol +            /* Index.aiColumn    */
                 sizeof(u8)*nCol);             /* Index.aSortOrder  */
  p = sqlite3DbMallocZero(db, nByte + nExtra);
  if( p ){
    char *pExtra = ((char*)p)+ROUND8(sizeof(Index));
    p->azColl = (char**)pExtra;       pExtra += ROUND8(sizeof(char*)*nCol);
    p->aiRowLogEst = (LogEst*)pExtra; pExtra += sizeof(LogEst)*(nCol+1);
    p->aiColumn = (i16*)pExtra;       pExtra += sizeof(i16)*nCol;
    p->aSortOrder = (u8*)pExtra;
    p->nColumn = nCol;
    p->nKeyCol = nCol - 1;
    *ppExtra = ((char*)p) + nByte;
  }
  return p;
}

/*
** Create a new index for an SQL table.  pName1.pName2 is the name of the index
** and pTblName is the name of the table that is to be indexed.  Both will
** be NULL for a primary key or an index that is created to satisfy a
** UNIQUE constraint.  If pTblName is NULL, use pParse->pNewTable
** as the table to be indexed.  pParse->pNewTable is a table that is
** currently being constructed by a CREATE TABLE statement.
**
** pList is a list of columns to be indexed.  pList will be NULL if this
** is a primary key or unique-constraint on the most recent column added
** to the table currently under construction.
**
** If the index is created successfully, return a pointer to the new Index
** structure. This is used by sqlite3AddPrimaryKey() to mark the index
** as the table's primary key (Index.idxType==SQLITE_IDXTYPE_PRIMARYKEY)
*/
Index *sqlite3CreateIndex(
  Parse *pParse,     /* All information about this parse */
  Token *pName1,     /* First part of index name. May be NULL */
  Token *pName2,     /* Second part of index name. May be NULL */
  SrcList *pTblName, /* Table to index. Use pParse->pNewTable if 0 */
  ExprList *pList,   /* A list of columns to be indexed */
  int onError,       /* OE_Abort, OE_Ignore, OE_Replace, or OE_None */
  Token *pStart,     /* The CREATE token that begins this statement */
  Expr *pPIWhere,    /* WHERE clause for partial indices */
  int sortOrder,     /* Sort order of primary key when pList==NULL */
  int ifNotExist     /* Omit error if index already exists */
){
  Index *pRet = 0;     /* Pointer to return */
  Table *pTab = 0;     /* Table to be indexed */
  Index *pIndex = 0;   /* The index to be created */
  char *zName = 0;     /* Name of the index */
  int nName;           /* Number of characters in zName */
  int i, j;
  DbFixer sFix;        /* For assigning database names to pTable */
  int sortOrderMask;   /* 1 to honor DESC in index.  0 to ignore. */
  sqlite3 *db = pParse->db;
  Db *pDb;             /* The specific database containing the indexed table */
  int iDb;             /* Index of the database that is being written */
  Token *pName = 0;    /* Unqualified name of the index to create */
  struct ExprList_item *pListItem; /* For looping over pList */
  const Column *pTabCol;           /* A column in the table */
  int nExtra = 0;                  /* Space allocated for zExtra[] */
  int nExtraCol;                   /* Number of extra columns needed */
  char *zExtra = 0;                /* Extra space after the Index object */
  Index *pPk = 0;      /* PRIMARY KEY index for WITHOUT ROWID tables */

  assert( pParse->nErr==0 );      /* Never called with prior errors */
  if( db->mallocFailed || IN_DECLARE_VTAB ){
    goto exit_create_index;
  }
  if( SQLITE_OK!=sqlite3ReadSchema(pParse) ){
    goto exit_create_index;
  }

  /*
  ** Find the table that is to be indexed.  Return early if not found.
  */
  if( pTblName!=0 ){

    /* Use the two-part index name to determine the database
    ** to search for the table. 'Fix' the table name to this db
    ** before looking up the table.
    */
    assert( pName1 && pName2 );
    iDb = sqlite3TwoPartName(pParse, pName1, pName2, &pName);
    if( iDb<0 ) goto exit_create_index;
    assert( pName && pName->z );

#ifndef SQLITE_OMIT_TEMPDB
    /* If the index name was unqualified, check if the table
    ** is a temp table. If so, set the database to 1. Do not do this
    ** if initialising a database schema.
    */
    if( !db->init.busy ){
      pTab = sqlite3SrcListLookup(pParse, pTblName);
      if( pName2->n==0 && pTab && pTab->pSchema==db->aDb[1].pSchema ){
        iDb = 1;
      }
    }
#endif

    sqlite3FixInit(&sFix, pParse, iDb, "index", pName);
    if( sqlite3FixSrcList(&sFix, pTblName) ){
      /* Because the parser constructs pTblName from a single identifier,
      ** sqlite3FixSrcList can never fail. */
      assert(0);
    }
    pTab = sqlite3LocateTableItem(pParse, 0, &pTblName->a[0]);
    assert( db->mallocFailed==0 || pTab==0 );
    if( pTab==0 ) goto exit_create_index;
    if( iDb==1 && db->aDb[iDb].pSchema!=pTab->pSchema ){
      sqlite3ErrorMsg(pParse,
           "cannot create a TEMP index on non-TEMP table \"%s\"",
           pTab->zName);
      goto exit_create_index;
    }
    if( !HasRowid(pTab) ) pPk = sqlite3PrimaryKeyIndex(pTab);
  }else{
    assert( pName==0 );
    assert( pStart==0 );
    pTab = pParse->pNewTable;
    if( !pTab ) goto exit_create_index;
    iDb = sqlite3SchemaToIndex(db, pTab->pSchema);
  }
  pDb = &db->aDb[iDb];

  assert( pTab!=0 );
  assert( pParse->nErr==0 );
  if( sqlite3StrNICmp(pTab->zName, "sqlite_", 7)==0
       && db->init.busy==0
#if SQLITE_USER_AUTHENTICATION
       && sqlite3UserAuthTable(pTab->zName)==0
#endif
       && sqlite3StrNICmp(&pTab->zName[7],"altertab_",9)!=0 ){
    sqlite3ErrorMsg(pParse, "table %s may not be indexed", pTab->zName);
    goto exit_create_index;
  }
#ifndef SQLITE_OMIT_VIEW
  if( pTab->pSelect ){
    sqlite3ErrorMsg(pParse, "views may not be indexed");
    goto exit_create_index;
  }
#endif
#ifndef SQLITE_OMIT_VIRTUALTABLE
  if( IsVirtual(pTab) ){
    sqlite3ErrorMsg(pParse, "virtual tables may not be indexed");
    goto exit_create_index;
  }
#endif

  /*
  ** Find the name of the index.  Make sure there is not already another
  ** index or table with the same name.
  **
  ** Exception:  If we are reading the names of permanent indices from the
  ** sqlite_master table (because some other process changed the schema) and
  ** one of the index names collides with the name of a temporary table or
  ** index, then we will continue to process this index.
  **
  ** If pName==0 it means that we are
  ** dealing with a primary key or UNIQUE constraint.  We have to invent our
  ** own name.
*/ if( pName ){ zName = sqlite3NameFromToken(db, pName); if( zName==0 ) goto exit_create_index; assert( pName->z!=0 ); if( SQLITE_OK!=sqlite3CheckObjectName(pParse, zName) ){ goto exit_create_index; } if( !db->init.busy ){ if( sqlite3FindTable(db, zName, 0)!=0 ){ sqlite3ErrorMsg(pParse, "there is already a table named %s", zName); goto exit_create_index; } } if( sqlite3FindIndex(db, zName, pDb->zName)!=0 ){ if( !ifNotExist ){ sqlite3ErrorMsg(pParse, "index %s already exists", zName); }else{ assert( !db->init.busy ); sqlite3CodeVerifySchema(pParse, iDb); } goto exit_create_index; } }else{ int n; Index *pLoop; for(pLoop=pTab->pIndex, n=1; pLoop; pLoop=pLoop->pNext, n++){} zName = sqlite3MPrintf(db, "sqlite_autoindex_%s_%d", pTab->zName, n); if( zName==0 ){ goto exit_create_index; } } /* Check for authorization to create an index. */ #ifndef SQLITE_OMIT_AUTHORIZATION { const char *zDb = pDb->zName; if( sqlite3AuthCheck(pParse, SQLITE_INSERT, SCHEMA_TABLE(iDb), 0, zDb) ){ goto exit_create_index; } i = SQLITE_CREATE_INDEX; if( !OMIT_TEMPDB && iDb==1 ) i = SQLITE_CREATE_TEMP_INDEX; if( sqlite3AuthCheck(pParse, i, zName, pTab->zName, zDb) ){ goto exit_create_index; } } #endif /* If pList==0, it means this routine was called to make a primary ** key out of the last column added to the table under construction. ** So create a fake list to simulate this. */ if( pList==0 ){ pList = sqlite3ExprListAppend(pParse, 0, 0); if( pList==0 ) goto exit_create_index; pList->a[0].zName = sqlite3DbStrDup(pParse->db, pTab->aCol[pTab->nCol-1].zName); pList->a[0].sortOrder = (u8)sortOrder; } /* Figure out how many bytes of space are required to store explicitly ** specified collation sequence names. */ for(i=0; i<pList->nExpr; i++){ Expr *pExpr = pList->a[i].pExpr; if( pExpr ){ assert( pExpr->op==TK_COLLATE ); nExtra += (1 + sqlite3Strlen30(pExpr->u.zToken)); } } /* ** Allocate the index structure. */ nName = sqlite3Strlen30(zName); nExtraCol = pPk ? pPk->nKeyCol : 1; pIndex = sqlite3AllocateIndexObject(db, pList->nExpr + nExtraCol, nName + nExtra + 1, &zExtra); if( db->mallocFailed ){ goto exit_create_index; } assert( EIGHT_BYTE_ALIGNMENT(pIndex->aiRowLogEst) ); assert( EIGHT_BYTE_ALIGNMENT(pIndex->azColl) ); pIndex->zName = zExtra; zExtra += nName + 1; memcpy(pIndex->zName, zName, nName+1); pIndex->pTable = pTab; pIndex->onError = (u8)onError; pIndex->uniqNotNull = onError!=OE_None; pIndex->idxType = pName ? SQLITE_IDXTYPE_APPDEF : SQLITE_IDXTYPE_UNIQUE; pIndex->pSchema = db->aDb[iDb].pSchema; pIndex->nKeyCol = pList->nExpr; if( pPIWhere ){ sqlite3ResolveSelfReference(pParse, pTab, NC_PartIdx, pPIWhere, 0); pIndex->pPartIdxWhere = pPIWhere; pPIWhere = 0; } assert( sqlite3SchemaMutexHeld(db, iDb, 0) ); /* Check to see if we should honor DESC requests on index columns */ if( pDb->pSchema->file_format>=4 ){ sortOrderMask = -1; /* Honor DESC */ }else{ sortOrderMask = 0; /* Ignore DESC */ } /* Scan the names of the columns of the table to be indexed and ** load the column indices into the Index structure. Report an error ** if any column is not found. ** ** TODO: Add a test to make sure that the same column is not named ** more than once within the same index. Only the first instance of ** the column will ever be used by the optimizer. Note that using the ** same column more than once cannot be an error because that would ** break backwards compatibility - it needs to be a warning. 
  */
  for(i=0, pListItem=pList->a; i<pList->nExpr; i++, pListItem++){
    const char *zColName = pListItem->zName;
    int requestedSortOrder;
    char *zColl;                   /* Collation sequence name */

    for(j=0, pTabCol=pTab->aCol; j<pTab->nCol; j++, pTabCol++){
      if( sqlite3StrICmp(zColName, pTabCol->zName)==0 ) break;
    }
    if( j>=pTab->nCol ){
      sqlite3ErrorMsg(pParse, "table %s has no column named %s",
        pTab->zName, zColName);
      pParse->checkSchema = 1;
      goto exit_create_index;
    }
    assert( j<=0x7fff );
    pIndex->aiColumn[i] = (i16)j;
    if( pListItem->pExpr ){
      int nColl;
      assert( pListItem->pExpr->op==TK_COLLATE );
      zColl = pListItem->pExpr->u.zToken;
      nColl = sqlite3Strlen30(zColl) + 1;
      assert( nExtra>=nColl );
      memcpy(zExtra, zColl, nColl);
      zColl = zExtra;
      zExtra += nColl;
      nExtra -= nColl;
    }else{
      zColl = pTab->aCol[j].zColl;
      if( !zColl ) zColl = "BINARY";
    }
    if( !db->init.busy && !sqlite3LocateCollSeq(pParse, zColl) ){
      goto exit_create_index;
    }
    pIndex->azColl[i] = zColl;
    requestedSortOrder = pListItem->sortOrder & sortOrderMask;
    pIndex->aSortOrder[i] = (u8)requestedSortOrder;
    if( pTab->aCol[j].notNull==0 ) pIndex->uniqNotNull = 0;
  }
  if( pPk ){
    for(j=0; j<pPk->nKeyCol; j++){
      int x = pPk->aiColumn[j];
      if( hasColumn(pIndex->aiColumn, pIndex->nKeyCol, x) ){
        pIndex->nColumn--;
      }else{
        pIndex->aiColumn[i] = x;
        pIndex->azColl[i] = pPk->azColl[j];
        pIndex->aSortOrder[i] = pPk->aSortOrder[j];
        i++;
      }
    }
    assert( i==pIndex->nColumn );
  }else{
    pIndex->aiColumn[i] = -1;
    pIndex->azColl[i] = "BINARY";
  }
  sqlite3DefaultRowEst(pIndex);
  if( pParse->pNewTable==0 ) estimateIndexWidth(pIndex);

  if( pTab==pParse->pNewTable ){
    /* This routine has been called to create an automatic index as a
    ** result of a PRIMARY KEY or UNIQUE clause on a column definition, or
    ** a PRIMARY KEY or UNIQUE clause following the column definitions.
    ** i.e. one of:
    **
    ** CREATE TABLE t(x PRIMARY KEY, y);
    ** CREATE TABLE t(x, y, UNIQUE(x, y));
    **
    ** Either way, check to see if the table already has such an index. If
    ** so, don't bother creating this one. This only applies to
    ** automatically created indices. Users can do as they wish with
    ** explicit indices.
    **
    ** Two UNIQUE or PRIMARY KEY constraints are considered equivalent
    ** (and the second of them is suppressed) even if they have different
    ** sort orders.
    **
    ** If there are different collating sequences or if the columns of
    ** the constraint occur in different orders, then the constraints are
    ** considered distinct and both result in separate indices.
    */
    Index *pIdx;
    for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){
      int k;
      assert( IsUniqueIndex(pIdx) );
      assert( pIdx->idxType!=SQLITE_IDXTYPE_APPDEF );
      assert( IsUniqueIndex(pIndex) );

      if( pIdx->nKeyCol!=pIndex->nKeyCol ) continue;
      for(k=0; k<pIdx->nKeyCol; k++){
        const char *z1;
        const char *z2;
        if( pIdx->aiColumn[k]!=pIndex->aiColumn[k] ) break;
        z1 = pIdx->azColl[k];
        z2 = pIndex->azColl[k];
        if( z1!=z2 && sqlite3StrICmp(z1, z2) ) break;
      }
      if( k==pIdx->nKeyCol ){
        if( pIdx->onError!=pIndex->onError ){
          /* This constraint creates the same index as a previous
          ** constraint specified somewhere in the CREATE TABLE statement.
          ** However the ON CONFLICT clauses are different. If both this
          ** constraint and the previous equivalent constraint have explicit
          ** ON CONFLICT clauses this is an error. Otherwise, use the
          ** explicitly specified behavior for the index.
*/ if( !(pIdx->onError==OE_Default || pIndex->onError==OE_Default) ){ sqlite3ErrorMsg(pParse, "conflicting ON CONFLICT clauses specified", 0); } if( pIdx->onError==OE_Default ){ pIdx->onError = pIndex->onError; } } goto exit_create_index; } } } /* Link the new Index structure to its table and to the other ** in-memory database structures. */ if( db->init.busy ){ Index *p; assert( sqlite3SchemaMutexHeld(db, 0, pIndex->pSchema) ); p = sqlite3HashInsert(&pIndex->pSchema->idxHash, pIndex->zName, pIndex); if( p ){ assert( p==pIndex ); /* Malloc must have failed */ db->mallocFailed = 1; goto exit_create_index; } db->flags |= SQLITE_InternChanges; if( pTblName!=0 ){ pIndex->tnum = db->init.newTnum; } } /* If this is the initial CREATE INDEX statement (or CREATE TABLE if the ** index is an implied index for a UNIQUE or PRIMARY KEY constraint) then ** emit code to allocate the index rootpage on disk and make an entry for ** the index in the sqlite_master table and populate the index with ** content. But, do not do this if we are simply reading the sqlite_master ** table to parse the schema, or if this index is the PRIMARY KEY index ** of a WITHOUT ROWID table. ** ** If pTblName==0 it means this index is generated as an implied PRIMARY KEY ** or UNIQUE index in a CREATE TABLE statement. Since the table ** has just been created, it contains no data and the index initialization ** step can be skipped. */ else if( pParse->nErr==0 && (HasRowid(pTab) || pTblName!=0) ){ Vdbe *v; char *zStmt; int iMem = ++pParse->nMem; v = sqlite3GetVdbe(pParse); if( v==0 ) goto exit_create_index; /* Create the rootpage for the index */ sqlite3BeginWriteOperation(pParse, 1, iDb); sqlite3VdbeAddOp2(v, OP_CreateIndex, iDb, iMem); /* Gather the complete text of the CREATE INDEX statement into ** the zStmt variable */ if( pStart ){ int n = (int)(pParse->sLastToken.z - pName->z) + pParse->sLastToken.n; if( pName->z[n-1]==';' ) n--; /* A named index with an explicit CREATE INDEX statement */ zStmt = sqlite3MPrintf(db, "CREATE%s INDEX %.*s", onError==OE_None ? "" : " UNIQUE", n, pName->z); }else{ /* An automatic index created by a PRIMARY KEY or UNIQUE constraint */ /* zStmt = sqlite3MPrintf(""); */ zStmt = 0; } /* Add an entry in sqlite_master for this index */ sqlite3NestedParse(pParse, "INSERT INTO %Q.%s VALUES('index',%Q,%Q,#%d,%Q);", db->aDb[iDb].zName, SCHEMA_TABLE(iDb), pIndex->zName, pTab->zName, iMem, zStmt ); sqlite3DbFree(db, zStmt); /* Fill the index with data and reparse the schema. Code an OP_Expire ** to invalidate all pre-compiled statements. */ if( pTblName ){ sqlite3RefillIndex(pParse, pIndex, iMem); sqlite3ChangeCookie(pParse, iDb); sqlite3VdbeAddParseSchemaOp(v, iDb, sqlite3MPrintf(db, "name='%q' AND type='index'", pIndex->zName)); sqlite3VdbeAddOp1(v, OP_Expire, 0); } } /* When adding an index to the list of indices for a table, make ** sure all indices labeled OE_Replace come after all those labeled ** OE_Ignore. This is necessary for the correct constraint check ** processing (in sqlite3GenerateConstraintChecks()) as part of ** UPDATE and INSERT statements. 
  */
  if( db->init.busy || pTblName==0 ){
    if( onError!=OE_Replace || pTab->pIndex==0
         || pTab->pIndex->onError==OE_Replace){
      pIndex->pNext = pTab->pIndex;
      pTab->pIndex = pIndex;
    }else{
      Index *pOther = pTab->pIndex;
      while( pOther->pNext && pOther->pNext->onError!=OE_Replace ){
        pOther = pOther->pNext;
      }
      pIndex->pNext = pOther->pNext;
      pOther->pNext = pIndex;
    }
    pRet = pIndex;
    pIndex = 0;
  }

  /* Clean up before exiting */
exit_create_index:
  if( pIndex ) freeIndex(db, pIndex);
  sqlite3ExprDelete(db, pPIWhere);
  sqlite3ExprListDelete(db, pList);
  sqlite3SrcListDelete(db, pTblName);
  sqlite3DbFree(db, zName);
  return pRet;
}

/*
** Fill the Index.aiRowEst[] array with default information - information
** to be used when we have not run the ANALYZE command.
**
** aiRowEst[0] is supposed to contain the number of elements in the index.
** Since we do not know, guess 1 million.  aiRowEst[1] is an estimate of the
** number of rows in the table that match any particular value of the
** first column of the index.  aiRowEst[2] is an estimate of the number
** of rows that match any particular combination of the first 2 columns
** of the index.  And so forth.  It must always be the case that
**
**           aiRowEst[N]<=aiRowEst[N-1]
**           aiRowEst[N]>=1
**
** Apart from that, we have little to go on besides intuition as to
** how aiRowEst[] should be initialized.  The numbers generated here
** are based on typical values found in actual indices.
*/
void sqlite3DefaultRowEst(Index *pIdx){
  /*                10,  9,  8,  7,  6 */
  LogEst aVal[] = { 33, 32, 30, 28, 26 };
  LogEst *a = pIdx->aiRowLogEst;
  int nCopy = MIN(ArraySize(aVal), pIdx->nKeyCol);
  int i;

  /* Set the first entry (number of rows in the index) to the estimated
  ** number of rows in the table. Or 10, if the estimated number of rows
  ** in the table is less than that.  */
  a[0] = pIdx->pTable->nRowLogEst;
  if( a[0]<33 ) a[0] = 33;        assert( 33==sqlite3LogEst(10) );

  /* Estimate that a[1] is 10, a[2] is 9, a[3] is 8, a[4] is 7, a[5] is
  ** 6 and each subsequent value (if any) is 5.  */
  memcpy(&a[1], aVal, nCopy*sizeof(LogEst));
  for(i=nCopy+1; i<=pIdx->nKeyCol; i++){
    a[i] = 23;                    assert( 23==sqlite3LogEst(5) );
  }

  assert( 0==sqlite3LogEst(1) );
  if( IsUniqueIndex(pIdx) ) a[pIdx->nKeyCol] = 0;
}

/*
** This routine will drop an existing named index.  This routine
** implements the DROP INDEX statement.
*/
void sqlite3DropIndex(Parse *pParse, SrcList *pName, int ifExists){
  Index *pIndex;
  Vdbe *v;
  sqlite3 *db = pParse->db;
  int iDb;

  assert( pParse->nErr==0 );   /* Never called with prior errors */
  if( db->mallocFailed ){
    goto exit_drop_index;
  }
  assert( pName->nSrc==1 );
  if( SQLITE_OK!=sqlite3ReadSchema(pParse) ){
    goto exit_drop_index;
  }
  pIndex = sqlite3FindIndex(db, pName->a[0].zName, pName->a[0].zDatabase);
  if( pIndex==0 ){
    if( !ifExists ){
      sqlite3ErrorMsg(pParse, "no such index: %S", pName, 0);
    }else{
      sqlite3CodeVerifyNamedSchema(pParse, pName->a[0].zDatabase);
    }
    pParse->checkSchema = 1;
    goto exit_drop_index;
  }
  if( pIndex->idxType!=SQLITE_IDXTYPE_APPDEF ){
    sqlite3ErrorMsg(pParse, "index associated with UNIQUE "
      "or PRIMARY KEY constraint cannot be dropped", 0);
    goto exit_drop_index;
  }
  iDb = sqlite3SchemaToIndex(db, pIndex->pSchema);
#ifndef SQLITE_OMIT_AUTHORIZATION
  {
    int code = SQLITE_DROP_INDEX;
    Table *pTab = pIndex->pTable;
    const char *zDb = db->aDb[iDb].zName;
    const char *zTab = SCHEMA_TABLE(iDb);
    if( sqlite3AuthCheck(pParse, SQLITE_DELETE, zTab, 0, zDb) ){
      goto exit_drop_index;
    }
    if( !OMIT_TEMPDB && iDb ) code = SQLITE_DROP_TEMP_INDEX;
    if( sqlite3AuthCheck(pParse, code, pIndex->zName, pTab->zName, zDb) ){
      goto exit_drop_index;
    }
  }
#endif

  /* Generate code to remove the index entry from the master table */
  v = sqlite3GetVdbe(pParse);
  if( v ){
    sqlite3BeginWriteOperation(pParse, 1, iDb);
    sqlite3NestedParse(pParse,
       "DELETE FROM %Q.%s WHERE name=%Q AND type='index'",
       db->aDb[iDb].zName, SCHEMA_TABLE(iDb), pIndex->zName
    );
    sqlite3ClearStatTables(pParse, iDb, "idx", pIndex->zName);
    sqlite3ChangeCookie(pParse, iDb);
    destroyRootPage(pParse, pIndex->tnum, iDb);
    sqlite3VdbeAddOp4(v, OP_DropIndex, iDb, 0, 0, pIndex->zName, 0);
  }

exit_drop_index:
  sqlite3SrcListDelete(db, pName);
}

/*
** pArray is a pointer to an array of objects. Each object in the
** array is szEntry bytes in size. This routine uses sqlite3DbRealloc()
** to extend the array so that there is space for a new object at the end.
**
** When this function is called, *pnEntry contains the current size of
** the array (in entries - so the allocation is ((*pnEntry) * szEntry) bytes
** in total).
**
** If the realloc() is successful (i.e. if no OOM condition occurs), the
** space allocated for the new object is zeroed, *pnEntry updated to
** reflect the new size of the array and a pointer to the new allocation
** returned. *pIdx is set to the index of the new array entry in this case.
**
** Otherwise, if the realloc() fails, *pIdx is set to -1, *pnEntry remains
** unchanged and a copy of pArray is returned.
*/
void *sqlite3ArrayAllocate(
  sqlite3 *db,      /* Connection to notify of malloc failures */
  void *pArray,     /* Array of objects.  Might be reallocated */
  int szEntry,      /* Size of each object in the array */
  int *pnEntry,     /* Number of objects currently in use */
  int *pIdx         /* Write the index of a new slot here */
){
  char *z;
  int n = *pnEntry;
  if( (n & (n-1))==0 ){
    int sz = (n==0) ? 1 : 2*n;
    void *pNew = sqlite3DbRealloc(db, pArray, sz*szEntry);
    if( pNew==0 ){
      *pIdx = -1;
      return pArray;
    }
    pArray = pNew;
  }
  z = (char*)pArray;
  memset(&z[n * szEntry], 0, szEntry);
  *pIdx = n;
  ++*pnEntry;
  return pArray;
}

/*
** Append a new element to the given IdList.  Create a new IdList if
** need be.
**
** A new IdList is returned, or NULL if malloc() fails.
*/
IdList *sqlite3IdListAppend(sqlite3 *db, IdList *pList, Token *pToken){
  int i;
  if( pList==0 ){
    pList = sqlite3DbMallocZero(db, sizeof(IdList) );
    if( pList==0 ) return 0;
  }
  pList->a = sqlite3ArrayAllocate(
      db,
      pList->a,
      sizeof(pList->a[0]),
      &pList->nId,
      &i
  );
  if( i<0 ){
    sqlite3IdListDelete(db, pList);
    return 0;
  }
  pList->a[i].zName = sqlite3NameFromToken(db, pToken);
  return pList;
}

/*
** Delete an IdList.
*/
void sqlite3IdListDelete(sqlite3 *db, IdList *pList){
  int i;
  if( pList==0 ) return;
  for(i=0; i<pList->nId; i++){
    sqlite3DbFree(db, pList->a[i].zName);
  }
  sqlite3DbFree(db, pList->a);
  sqlite3DbFree(db, pList);
}

/*
** Return the index in pList of the identifier named zId.  Return -1
** if not found.
*/
int sqlite3IdListIndex(IdList *pList, const char *zName){
  int i;
  if( pList==0 ) return -1;
  for(i=0; i<pList->nId; i++){
    if( sqlite3StrICmp(pList->a[i].zName, zName)==0 ) return i;
  }
  return -1;
}

/*
** Expand the space allocated for the given SrcList object by
** creating nExtra new slots beginning at iStart.  iStart is zero based.
** New slots are zeroed.
**
** For example, suppose a SrcList initially contains two entries:  A,B.
** To append 3 new entries onto the end, do this:
**
**    sqlite3SrcListEnlarge(db, pSrclist, 3, 2);
**
** After the call above it would contain:  A, B, nil, nil, nil.
** If the iStart argument had been 1 instead of 2, then the result
** would have been:  A, nil, nil, nil, B.  To prepend the new slots,
** the iStart value would be 0.  The result then would
** be: nil, nil, nil, A, B.
**
** If a memory allocation fails the SrcList is unchanged.  The
** db->mallocFailed flag will be set to true.
*/
SrcList *sqlite3SrcListEnlarge(
  sqlite3 *db,       /* Database connection to notify of OOM errors */
  SrcList *pSrc,     /* The SrcList to be enlarged */
  int nExtra,        /* Number of new slots to add to pSrc->a[] */
  int iStart         /* Index in pSrc->a[] of first new slot */
){
  int i;

  /* Sanity checking on calling parameters */
  assert( iStart>=0 );
  assert( nExtra>=1 );
  assert( pSrc!=0 );
  assert( iStart<=pSrc->nSrc );

  /* Allocate additional space if needed */
  if( (u32)pSrc->nSrc+nExtra>pSrc->nAlloc ){
    SrcList *pNew;
    int nAlloc = pSrc->nSrc+nExtra;
    int nGot;
    pNew = sqlite3DbRealloc(db, pSrc,
               sizeof(*pSrc) + (nAlloc-1)*sizeof(pSrc->a[0]) );
    if( pNew==0 ){
      assert( db->mallocFailed );
      return pSrc;
    }
    pSrc = pNew;
    nGot = (sqlite3DbMallocSize(db, pNew) - sizeof(*pSrc))/sizeof(pSrc->a[0])+1;
    pSrc->nAlloc = nGot;
  }

  /* Move existing slots that come after the newly inserted slots
  ** out of the way */
  for(i=pSrc->nSrc-1; i>=iStart; i--){
    pSrc->a[i+nExtra] = pSrc->a[i];
  }
  pSrc->nSrc += nExtra;

  /* Zero the newly allocated slots */
  memset(&pSrc->a[iStart], 0, sizeof(pSrc->a[0])*nExtra);
  for(i=iStart; i<iStart+nExtra; i++){
    pSrc->a[i].iCursor = -1;
  }

  /* Return a pointer to the enlarged SrcList */
  return pSrc;
}

/*
** Append a new table name to the given SrcList.  Create a new SrcList if
** need be.  A new entry is created in the SrcList even if pTable is NULL.
**
** A SrcList is returned, or NULL if there is an OOM error.  The returned
** SrcList might be the same as the SrcList that was input or it might be
** a new one.  If an OOM error occurs, then the prior value of pList
** that is input to this routine is automatically freed.
**
** If pDatabase is not null, it means that the table has an optional
** database name prefix.  Like this:  "database.table".  The pDatabase
** points to the table name and the pTable points to the database name.
** The SrcList.a[].zName field is filled with the table name which might
** come from pTable (if pDatabase is NULL) or from pDatabase.
** SrcList.a[].zDatabase is filled with the database name from pTable,
** or with NULL if no database is specified.
**
** In other words, if called like this:
**
**         sqlite3SrcListAppend(D,A,B,0);
**
** Then B is a table name and the database name is unspecified.  If called
** like this:
**
**         sqlite3SrcListAppend(D,A,B,C);
**
** Then C is the table name and B is the database name.  If C is defined
** then so is B.  In other words, we never have a case where:
**
**         sqlite3SrcListAppend(D,A,0,C);
**
** Both pTable and pDatabase are assumed to be quoted.  They are dequoted
** before being added to the SrcList.
*/
SrcList *sqlite3SrcListAppend(
  sqlite3 *db,        /* Connection to notify of malloc failures */
  SrcList *pList,     /* Append to this SrcList. NULL creates a new SrcList */
  Token *pTable,      /* Table to append */
  Token *pDatabase    /* Database of the table */
){
  struct SrcList_item *pItem;
  assert( pDatabase==0 || pTable!=0 );  /* Cannot have C without B */
  if( pList==0 ){
    pList = sqlite3DbMallocZero(db, sizeof(SrcList) );
    if( pList==0 ) return 0;
    pList->nAlloc = 1;
  }
  pList = sqlite3SrcListEnlarge(db, pList, 1, pList->nSrc);
  if( db->mallocFailed ){
    sqlite3SrcListDelete(db, pList);
    return 0;
  }
  pItem = &pList->a[pList->nSrc-1];
  if( pDatabase && pDatabase->z==0 ){
    pDatabase = 0;
  }
  if( pDatabase ){
    Token *pTemp = pDatabase;
    pDatabase = pTable;
    pTable = pTemp;
  }
  pItem->zName = sqlite3NameFromToken(db, pTable);
  pItem->zDatabase = sqlite3NameFromToken(db, pDatabase);
  return pList;
}

/*
** Assign VdbeCursor index numbers to all tables in a SrcList
*/
void sqlite3SrcListAssignCursors(Parse *pParse, SrcList *pList){
  int i;
  struct SrcList_item *pItem;
  assert(pList || pParse->db->mallocFailed );
  if( pList ){
    for(i=0, pItem=pList->a; i<pList->nSrc; i++, pItem++){
      if( pItem->iCursor>=0 ) break;
      pItem->iCursor = pParse->nTab++;
      if( pItem->pSelect ){
        sqlite3SrcListAssignCursors(pParse, pItem->pSelect->pSrc);
      }
    }
  }
}

/*
** Delete an entire SrcList including all its substructure.
*/
void sqlite3SrcListDelete(sqlite3 *db, SrcList *pList){
  int i;
  struct SrcList_item *pItem;
  if( pList==0 ) return;
  for(pItem=pList->a, i=0; i<pList->nSrc; i++, pItem++){
    sqlite3DbFree(db, pItem->zDatabase);
    sqlite3DbFree(db, pItem->zName);
    sqlite3DbFree(db, pItem->zAlias);
    sqlite3DbFree(db, pItem->zIndex);
    sqlite3DeleteTable(db, pItem->pTab);
    sqlite3SelectDelete(db, pItem->pSelect);
    sqlite3ExprDelete(db, pItem->pOn);
    sqlite3IdListDelete(db, pItem->pUsing);
  }
  sqlite3DbFree(db, pList);
}

/*
** This routine is called by the parser to add a new term to the
** end of a growing FROM clause.  The "p" parameter is the part of
** the FROM clause that has already been constructed.  "p" is NULL
** if this is the first term of the FROM clause.  pTable and pDatabase
** are the name of the table and database named in the FROM clause term.
** pDatabase is NULL if the database name qualifier is missing - the
** usual case.  If the term has an alias, then pAlias points to the
** alias token.  If the term is a subquery, then pSubquery is the
** SELECT statement that the subquery encodes.  The pTable and
** pDatabase parameters are NULL for subqueries.  The pOn and pUsing
** parameters are the content of the ON and USING clauses.
**
** Return a new SrcList which encodes the FROM clause with the new
** term added.
*/
SrcList *sqlite3SrcListAppendFromTerm(
  Parse *pParse,          /* Parsing context */
  SrcList *p,             /* The left part of the FROM clause already seen */
  Token *pTable,          /* Name of the table to add to the FROM clause */
  Token *pDatabase,       /* Name of the database containing pTable */
  Token *pAlias,          /* The right-hand side of the AS subexpression */
  Select *pSubquery,      /* A subquery used in place of a table name */
  Expr *pOn,              /* The ON clause of a join */
  IdList *pUsing          /* The USING clause of a join */
){
  struct SrcList_item *pItem;
  sqlite3 *db = pParse->db;
  if( !p && (pOn || pUsing) ){
    sqlite3ErrorMsg(pParse, "a JOIN clause is required before %s",
      (pOn ? "ON" : "USING")
    );
    goto append_from_error;
  }
  p = sqlite3SrcListAppend(db, p, pTable, pDatabase);
  if( p==0 || NEVER(p->nSrc==0) ){
    goto append_from_error;
  }
  pItem = &p->a[p->nSrc-1];
  assert( pAlias!=0 );
  if( pAlias->n ){
    pItem->zAlias = sqlite3NameFromToken(db, pAlias);
  }
  pItem->pSelect = pSubquery;
  pItem->pOn = pOn;
  pItem->pUsing = pUsing;
  return p;

 append_from_error:
  assert( p==0 );
  sqlite3ExprDelete(db, pOn);
  sqlite3IdListDelete(db, pUsing);
  sqlite3SelectDelete(db, pSubquery);
  return 0;
}

/*
** Add an INDEXED BY or NOT INDEXED clause to the most recently added
** element of the source-list passed as the second argument.
*/
void sqlite3SrcListIndexedBy(Parse *pParse, SrcList *p, Token *pIndexedBy){
  assert( pIndexedBy!=0 );
  if( p && ALWAYS(p->nSrc>0) ){
    struct SrcList_item *pItem = &p->a[p->nSrc-1];
    assert( pItem->notIndexed==0 && pItem->zIndex==0 );
    if( pIndexedBy->n==1 && !pIndexedBy->z ){
      /* A "NOT INDEXED" clause was supplied. See parse.y
      ** construct "indexed_opt" for details. */
      pItem->notIndexed = 1;
    }else{
      pItem->zIndex = sqlite3NameFromToken(pParse->db, pIndexedBy);
    }
  }
}

/*
** When building up a FROM clause in the parser, the join operator
** is initially attached to the left operand.  But the code generator
** expects the join operator to be on the right operand.  This routine
** shifts all join operators from left to right for an entire FROM
** clause.
**
** Example: Suppose the join is like this:
**
**           A natural cross join B
**
** The operator is "natural cross join".  The A and B operands are stored
** in p->a[0] and p->a[1], respectively.  The parser initially stores the
** operator with A.  This routine shifts that operator over to B.
*/ void sqlite3SrcListShiftJoinType(SrcList *p){ if( p ){ int i; assert( p->a || p->nSrc==0 ); for(i=p->nSrc-1; i>0; i--){ p->a[i].jointype = p->a[i-1].jointype; } p->a[0].jointype = 0; } } /* ** Begin a transaction */ void sqlite3BeginTransaction(Parse *pParse, int type){ sqlite3 *db; Vdbe *v; int i; assert( pParse!=0 ); db = pParse->db; assert( db!=0 ); /* if( db->aDb[0].pBt==0 ) return; */ if( sqlite3AuthCheck(pParse, SQLITE_TRANSACTION, "BEGIN", 0, 0) ){ return; } v = sqlite3GetVdbe(pParse); if( !v ) return; if( type!=TK_DEFERRED ){ for(i=0; i<db->nDb; i++){ sqlite3VdbeAddOp2(v, OP_Transaction, i, (type==TK_EXCLUSIVE)+1); sqlite3VdbeUsesBtree(v, i); } } sqlite3VdbeAddOp2(v, OP_AutoCommit, 0, 0); } /* ** Commit a transaction */ void sqlite3CommitTransaction(Parse *pParse){ Vdbe *v; assert( pParse!=0 ); assert( pParse->db!=0 ); if( sqlite3AuthCheck(pParse, SQLITE_TRANSACTION, "COMMIT", 0, 0) ){ return; } v = sqlite3GetVdbe(pParse); if( v ){ sqlite3VdbeAddOp2(v, OP_AutoCommit, 1, 0); } } /* ** Rollback a transaction */ void sqlite3RollbackTransaction(Parse *pParse){ Vdbe *v; assert( pParse!=0 ); assert( pParse->db!=0 ); if( sqlite3AuthCheck(pParse, SQLITE_TRANSACTION, "ROLLBACK", 0, 0) ){ return; } v = sqlite3GetVdbe(pParse); if( v ){ sqlite3VdbeAddOp2(v, OP_AutoCommit, 1, 1); } } /* ** This function is called by the parser when it parses a command to create, ** release or rollback an SQL savepoint. */ void sqlite3Savepoint(Parse *pParse, int op, Token *pName){ char *zName = sqlite3NameFromToken(pParse->db, pName); if( zName ){ Vdbe *v = sqlite3GetVdbe(pParse); #ifndef SQLITE_OMIT_AUTHORIZATION static const char * const az[] = { "BEGIN", "RELEASE", "ROLLBACK" }; assert( !SAVEPOINT_BEGIN && SAVEPOINT_RELEASE==1 && SAVEPOINT_ROLLBACK==2 ); #endif if( !v || sqlite3AuthCheck(pParse, SQLITE_SAVEPOINT, az[op], zName, 0) ){ sqlite3DbFree(pParse->db, zName); return; } sqlite3VdbeAddOp4(v, OP_Savepoint, op, 0, 0, zName, P4_DYNAMIC); } } /* ** Make sure the TEMP database is open and available for use. Return ** the number of errors. Leave any error messages in the pParse structure. */ int sqlite3OpenTempDatabase(Parse *pParse){ sqlite3 *db = pParse->db; if( db->aDb[1].pBt==0 && !pParse->explain ){ int rc; Btree *pBt; static const int flags = SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE | SQLITE_OPEN_EXCLUSIVE | SQLITE_OPEN_DELETEONCLOSE | SQLITE_OPEN_TEMP_DB; rc = sqlite3BtreeOpen(db->pVfs, 0, db, &pBt, 0, flags); if( rc!=SQLITE_OK ){ sqlite3ErrorMsg(pParse, "unable to open a temporary database " "file for storing temporary tables"); pParse->rc = rc; return 1; } db->aDb[1].pBt = pBt; assert( db->aDb[1].pSchema ); if( SQLITE_NOMEM==sqlite3BtreeSetPageSize(pBt, db->nextPagesize, -1, 0) ){ db->mallocFailed = 1; return 1; } } return 0; } /* ** Record the fact that the schema cookie will need to be verified ** for database iDb. The code to actually verify the schema cookie ** will occur at the end of the top-level VDBE and will be generated ** later, by sqlite3FinishCoding(). 
*/
void sqlite3CodeVerifySchema(Parse *pParse, int iDb){
  Parse *pToplevel = sqlite3ParseToplevel(pParse);
  sqlite3 *db = pToplevel->db;

  assert( iDb>=0 && iDb<db->nDb );
  assert( db->aDb[iDb].pBt!=0 || iDb==1 );
  assert( iDb<SQLITE_MAX_ATTACHED+2 );
  assert( sqlite3SchemaMutexHeld(db, iDb, 0) );
  if( DbMaskTest(pToplevel->cookieMask, iDb)==0 ){
    DbMaskSet(pToplevel->cookieMask, iDb);
    pToplevel->cookieValue[iDb] = db->aDb[iDb].pSchema->schema_cookie;
    if( !OMIT_TEMPDB && iDb==1 ){
      sqlite3OpenTempDatabase(pToplevel);
    }
  }
}

/*
** If argument zDb is NULL, then call sqlite3CodeVerifySchema() for each
** attached database. Otherwise, invoke it for the database named zDb only.
*/
void sqlite3CodeVerifyNamedSchema(Parse *pParse, const char *zDb){
  sqlite3 *db = pParse->db;
  int i;
  for(i=0; i<db->nDb; i++){
    Db *pDb = &db->aDb[i];
    if( pDb->pBt && (!zDb || 0==sqlite3StrICmp(zDb, pDb->zName)) ){
      sqlite3CodeVerifySchema(pParse, i);
    }
  }
}

/*
** Generate VDBE code that prepares for doing an operation that
** might change the database.
**
** This routine starts a new transaction if we are not already within
** a transaction.  If we are already within a transaction, then a checkpoint
** is set if the setStatement parameter is true.  A checkpoint should
** be set for operations that might fail (due to a constraint) part of
** the way through and which will need to undo some writes without having to
** rollback the whole transaction.  For operations where all constraints
** can be checked before any changes are made to the database, it is never
** necessary to undo a write and the checkpoint should not be set.
*/
void sqlite3BeginWriteOperation(Parse *pParse, int setStatement, int iDb){
  Parse *pToplevel = sqlite3ParseToplevel(pParse);
  sqlite3CodeVerifySchema(pParse, iDb);
  DbMaskSet(pToplevel->writeMask, iDb);
  pToplevel->isMultiWrite |= setStatement;
}

/*
** Indicate that the statement currently under construction might write
** more than one entry (example: deleting one row then inserting another,
** inserting multiple rows in a table, or inserting a row and index entries.)
** If an abort occurs after some of these writes have completed, then it will
** be necessary to undo the completed writes.
*/
void sqlite3MultiWrite(Parse *pParse){
  Parse *pToplevel = sqlite3ParseToplevel(pParse);
  pToplevel->isMultiWrite = 1;
}

/*
** The code generator calls this routine if it discovers that it is
** possible to abort a statement prior to completion.  In order to
** perform this abort without corrupting the database, we need to make
** sure that the statement is protected by a statement transaction.
**
** Technically, we only need to set the mayAbort flag if the
** isMultiWrite flag was previously set.  There is a time dependency
** such that the abort must occur after the multiwrite.  This makes
** some statements involving the REPLACE conflict resolution algorithm
** go a little faster.  But taking advantage of this time dependency
** makes it more difficult to prove that the code is correct (in
** particular, it prevents us from writing an effective
** implementation of sqlite3AssertMayAbort()) and so we have chosen
** to take the safe route and skip the optimization.
*/
void sqlite3MayAbort(Parse *pParse){
  Parse *pToplevel = sqlite3ParseToplevel(pParse);
  pToplevel->mayAbort = 1;
}

/*
** Code an OP_Halt that causes the vdbe to return an SQLITE_CONSTRAINT
** error. The onError parameter determines which (if any) of the statement
** and/or current transaction is rolled back.
*/ void sqlite3HaltConstraint( Parse *pParse, /* Parsing context */ int errCode, /* extended error code */ int onError, /* Constraint type */ char *p4, /* Error message */ i8 p4type, /* P4_STATIC or P4_TRANSIENT */ u8 p5Errmsg /* P5_ErrMsg type */ ){ Vdbe *v = sqlite3GetVdbe(pParse); assert( (errCode&0xff)==SQLITE_CONSTRAINT ); if( onError==OE_Abort ){ sqlite3MayAbort(pParse); } sqlite3VdbeAddOp4(v, OP_Halt, errCode, onError, 0, p4, p4type); if( p5Errmsg ) sqlite3VdbeChangeP5(v, p5Errmsg); } /* ** Code an OP_Halt due to UNIQUE or PRIMARY KEY constraint violation. */ void sqlite3UniqueConstraint( Parse *pParse, /* Parsing context */ int onError, /* Constraint type */ Index *pIdx /* The index that triggers the constraint */ ){ char *zErr; int j; StrAccum errMsg; Table *pTab = pIdx->pTable; sqlite3StrAccumInit(&errMsg, 0, 0, 200); errMsg.db = pParse->db; for(j=0; j<pIdx->nKeyCol; j++){ char *zCol = pTab->aCol[pIdx->aiColumn[j]].zName; if( j ) sqlite3StrAccumAppend(&errMsg, ", ", 2); sqlite3StrAccumAppendAll(&errMsg, pTab->zName); sqlite3StrAccumAppend(&errMsg, ".", 1); sqlite3StrAccumAppendAll(&errMsg, zCol); } zErr = sqlite3StrAccumFinish(&errMsg); sqlite3HaltConstraint(pParse, IsPrimaryKeyIndex(pIdx) ? SQLITE_CONSTRAINT_PRIMARYKEY : SQLITE_CONSTRAINT_UNIQUE, onError, zErr, P4_DYNAMIC, P5_ConstraintUnique); } /* ** Code an OP_Halt due to non-unique rowid. */ void sqlite3RowidConstraint( Parse *pParse, /* Parsing context */ int onError, /* Conflict resolution algorithm */ Table *pTab /* The table with the non-unique rowid */ ){ char *zMsg; int rc; if( pTab->iPKey>=0 ){ zMsg = sqlite3MPrintf(pParse->db, "%s.%s", pTab->zName, pTab->aCol[pTab->iPKey].zName); rc = SQLITE_CONSTRAINT_PRIMARYKEY; }else{ zMsg = sqlite3MPrintf(pParse->db, "%s.rowid", pTab->zName); rc = SQLITE_CONSTRAINT_ROWID; } sqlite3HaltConstraint(pParse, rc, onError, zMsg, P4_DYNAMIC, P5_ConstraintUnique); } /* ** Check to see if pIndex uses the collating sequence pColl. Return ** true if it does and false if it does not. */ #ifndef SQLITE_OMIT_REINDEX static int collationMatch(const char *zColl, Index *pIndex){ int i; assert( zColl!=0 ); for(i=0; i<pIndex->nColumn; i++){ const char *z = pIndex->azColl[i]; assert( z!=0 || pIndex->aiColumn[i]<0 ); if( pIndex->aiColumn[i]>=0 && 0==sqlite3StrICmp(z, zColl) ){ return 1; } } return 0; } #endif /* ** Recompute all indices of pTab that use the collating sequence pColl. ** If pColl==0 then recompute all indices of pTab. */ #ifndef SQLITE_OMIT_REINDEX static void reindexTable(Parse *pParse, Table *pTab, char const *zColl){ Index *pIndex; /* An index associated with pTab */ for(pIndex=pTab->pIndex; pIndex; pIndex=pIndex->pNext){ if( zColl==0 || collationMatch(zColl, pIndex) ){ int iDb = sqlite3SchemaToIndex(pParse->db, pTab->pSchema); sqlite3BeginWriteOperation(pParse, 0, iDb); sqlite3RefillIndex(pParse, pIndex, -1); } } } #endif /* ** Recompute all indices of all tables in all databases where the ** indices use the collating sequence pColl. If pColl==0 then recompute ** all indices everywhere. 
*/
#ifndef SQLITE_OMIT_REINDEX
static void reindexDatabases(Parse *pParse, char const *zColl){
  Db *pDb;                    /* A single database */
  int iDb;                    /* The database index number */
  sqlite3 *db = pParse->db;   /* The database connection */
  HashElem *k;                /* For looping over tables in pDb */
  Table *pTab;                /* A table in the database */

  assert( sqlite3BtreeHoldsAllMutexes(db) );  /* Needed for schema access */
  for(iDb=0, pDb=db->aDb; iDb<db->nDb; iDb++, pDb++){
    assert( pDb!=0 );
    for(k=sqliteHashFirst(&pDb->pSchema->tblHash);  k; k=sqliteHashNext(k)){
      pTab = (Table*)sqliteHashData(k);
      reindexTable(pParse, pTab, zColl);
    }
  }
}
#endif

/*
** Generate code for the REINDEX command.
**
**        REINDEX                            -- 1
**        REINDEX  <collation>               -- 2
**        REINDEX  ?<database>.?<tablename>  -- 3
**        REINDEX  ?<database>.?<indexname>  -- 4
**
** Form 1 causes all indices in all attached databases to be rebuilt.
** Form 2 rebuilds all indices in all databases that use the named
** collating function.  Forms 3 and 4 rebuild the named index or all
** indices associated with the named table.
*/
#ifndef SQLITE_OMIT_REINDEX
void sqlite3Reindex(Parse *pParse, Token *pName1, Token *pName2){
  CollSeq *pColl;             /* Collating sequence to be reindexed, or NULL */
  char *z;                    /* Name of a table or index */
  const char *zDb;            /* Name of the database */
  Table *pTab;                /* A table in the database */
  Index *pIndex;              /* An index associated with pTab */
  int iDb;                    /* The database index number */
  sqlite3 *db = pParse->db;   /* The database connection */
  Token *pObjName;            /* Name of the table or index to be reindexed */

  /* Read the database schema. If an error occurs, leave an error message
  ** and code in pParse and return. */
  if( SQLITE_OK!=sqlite3ReadSchema(pParse) ){
    return;
  }

  if( pName1==0 ){
    reindexDatabases(pParse, 0);
    return;
  }else if( NEVER(pName2==0) || pName2->z==0 ){
    char *zColl;
    assert( pName1->z );
    zColl = sqlite3NameFromToken(pParse->db, pName1);
    if( !zColl ) return;
    pColl = sqlite3FindCollSeq(db, ENC(db), zColl, 0);
    if( pColl ){
      reindexDatabases(pParse, zColl);
      sqlite3DbFree(db, zColl);
      return;
    }
    sqlite3DbFree(db, zColl);
  }
  iDb = sqlite3TwoPartName(pParse, pName1, pName2, &pObjName);
  if( iDb<0 ) return;
  z = sqlite3NameFromToken(db, pObjName);
  if( z==0 ) return;
  zDb = db->aDb[iDb].zName;
  pTab = sqlite3FindTable(db, z, zDb);
  if( pTab ){
    reindexTable(pParse, pTab, 0);
    sqlite3DbFree(db, z);
    return;
  }
  pIndex = sqlite3FindIndex(db, z, zDb);
  sqlite3DbFree(db, z);
  if( pIndex ){
    sqlite3BeginWriteOperation(pParse, 0, iDb);
    sqlite3RefillIndex(pParse, pIndex, -1);
    return;
  }
  sqlite3ErrorMsg(pParse, "unable to identify the object to be reindexed");
}
#endif

/*
** Return a KeyInfo structure that is appropriate for the given Index.
**
** The KeyInfo structure for an index is cached in the Index object.
** So there might be multiple references to the returned pointer.  The
** caller should not try to modify the KeyInfo object.
**
** The caller should invoke sqlite3KeyInfoUnref() on the returned object
** when it has finished using it.
*/ KeyInfo *sqlite3KeyInfoOfIndex(Parse *pParse, Index *pIdx){ if( pParse->nErr ) return 0; #ifndef SQLITE_OMIT_SHARED_CACHE if( pIdx->pKeyInfo && pIdx->pKeyInfo->db!=pParse->db ){ sqlite3KeyInfoUnref(pIdx->pKeyInfo); pIdx->pKeyInfo = 0; } #endif if( pIdx->pKeyInfo==0 ){ int i; int nCol = pIdx->nColumn; int nKey = pIdx->nKeyCol; KeyInfo *pKey; if( pIdx->uniqNotNull ){ pKey = sqlite3KeyInfoAlloc(pParse->db, nKey, nCol-nKey); }else{ pKey = sqlite3KeyInfoAlloc(pParse->db, nCol, 0); } if( pKey ){ assert( sqlite3KeyInfoIsWriteable(pKey) ); for(i=0; i<nCol; i++){ char *zColl = pIdx->azColl[i]; assert( zColl!=0 ); pKey->aColl[i] = strcmp(zColl,"BINARY")==0 ? 0 : sqlite3LocateCollSeq(pParse, zColl); pKey->aSortOrder[i] = pIdx->aSortOrder[i]; } if( pParse->nErr ){ sqlite3KeyInfoUnref(pKey); }else{ pIdx->pKeyInfo = pKey; } } } return sqlite3KeyInfoRef(pIdx->pKeyInfo); } #ifndef SQLITE_OMIT_CTE /* ** This routine is invoked once per CTE by the parser while parsing a ** WITH clause. */ With *sqlite3WithAdd( Parse *pParse, /* Parsing context */ With *pWith, /* Existing WITH clause, or NULL */ Token *pName, /* Name of the common-table */ ExprList *pArglist, /* Optional column name list for the table */ Select *pQuery /* Query used to initialize the table */ ){ sqlite3 *db = pParse->db; With *pNew; char *zName; /* Check that the CTE name is unique within this WITH clause. If ** not, store an error in the Parse structure. */ zName = sqlite3NameFromToken(pParse->db, pName); if( zName && pWith ){ int i; for(i=0; i<pWith->nCte; i++){ if( sqlite3StrICmp(zName, pWith->a[i].zName)==0 ){ sqlite3ErrorMsg(pParse, "duplicate WITH table name: %s", zName); } } } if( pWith ){ int nByte = sizeof(*pWith) + (sizeof(pWith->a[1]) * pWith->nCte); pNew = sqlite3DbRealloc(db, pWith, nByte); }else{ pNew = sqlite3DbMallocZero(db, sizeof(*pWith)); } assert( zName!=0 || pNew==0 ); assert( db->mallocFailed==0 || pNew==0 ); if( pNew==0 ){ sqlite3ExprListDelete(db, pArglist); sqlite3SelectDelete(db, pQuery); sqlite3DbFree(db, zName); pNew = pWith; }else{ pNew->a[pNew->nCte].pSelect = pQuery; pNew->a[pNew->nCte].pCols = pArglist; pNew->a[pNew->nCte].zName = zName; pNew->a[pNew->nCte].zErr = 0; pNew->nCte++; } return pNew; } /* ** Free the contents of the With object passed as the second argument. */ void sqlite3WithDelete(sqlite3 *db, With *pWith){ if( pWith ){ int i; for(i=0; i<pWith->nCte; i++){ struct Cte *pCte = &pWith->a[i]; sqlite3ExprListDelete(db, pCte->pCols); sqlite3SelectDelete(db, pCte->pSelect); sqlite3DbFree(db, pCte->zName); } sqlite3DbFree(db, pWith); } } #endif /* !defined(SQLITE_OMIT_CTE) */ ```
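A note on the constants in `sqlite3DefaultRowEst()` above: SQLite's `LogEst` type stores an estimate x as the integer closest to 10*log2(x), which is why the code asserts that 33 encodes 10 rows and 23 encodes 5. The following standalone sketch (not part of the SQLite sources; `logEst()` is a hypothetical stand-in for `sqlite3LogEst()`) reproduces those values:

```c
#include <math.h>
#include <stdio.h>

/* Approximate SQLite's LogEst encoding: the integer nearest 10*log2(x).
** Hypothetical helper for illustration only, not sqlite3LogEst(). */
static int logEst(double x){
  return (int)(10.0*log2(x) + 0.5);
}

int main(void){
  /* Output includes logEst(5)=23 and logEst(10)=33, matching the
  ** assert()s in sqlite3DefaultRowEst(). */
  int v;
  for(v=1; v<=10; v++){
    printf("logEst(%d) = %d\n", v, logEst(v));
  }
  return 0;
}
```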
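Similarly, `sqlite3ArrayAllocate()` reallocates only when the current entry count is zero or a power of two, detected with the classic `(n & (n-1))==0` test, so the allocation follows a doubling schedule. A small sketch under the same sizing rule (again standalone, not SQLite code):

```c
#include <stdio.h>

int main(void){
  /* sqlite3ArrayAllocate() grows its array only when the entry count n
  ** is 0 or a power of two ((n & (n-1))==0), giving capacities
  ** 1, 2, 4, 8, ... and amortized O(1) appends. */
  int n;
  for(n=0; n<10; n++){
    if( (n & (n-1))==0 ){
      int cap = (n==0) ? 1 : 2*n;   /* same sizing rule as the C above */
      printf("at n=%d grow capacity to %d\n", n, cap);
    }
  }
  return 0;
}
```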
```javascript

/**
 * Module dependencies.
 */

var assert = require('assert');
var isInNet = require('../isInNet');

describe('isInNet(host, pattern, mask)', function () {

  var tests = [
    ["198.95.249.79", "198.95.249.79", "255.255.255.255", true],
    ["198.95.249.78", "198.95.249.79", "255.255.255.255", false],
    ["198.95.1.1", "198.95.0.0", "255.255.0.0", true],
    ["198.94.1.1", "198.95.0.0", "255.255.0.0", false]
  ];

  tests.forEach(function (test) {
    // the last entry of each row is the expected result;
    // the first three are the (host, pattern, mask) arguments
    var expected = test.pop();
    it('should return `' + expected + '` for "' + test.join('", "') + '"', function (done) {
      isInNet(test[0], test[1], test[2], function (err, res) {
        if (err) return done(err);
        // assert.equal(actual, expected)
        assert.equal(res, expected);
        done();
      });
    });
  });

});
```
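The `isInNet` implementation under test is not shown here. As a rough illustration of the dotted-quad semantics the test table encodes (a host is in the network exactly when `host & mask == pattern & mask`), here is a minimal synchronous C sketch, assuming IPv4 only and a POSIX `inet_pton()`; `is_in_net` is a hypothetical name, not the module's API:

```c
#include <arpa/inet.h>
#include <stdio.h>

/* Hypothetical helper, not the module under test: returns 1 when
** (host & mask) == (pattern & mask), comparing all four octets at once. */
static int is_in_net(const char *host, const char *pattern, const char *mask){
  struct in_addr h, p, m;
  if( inet_pton(AF_INET, host,    &h)!=1 ) return 0;
  if( inet_pton(AF_INET, pattern, &p)!=1 ) return 0;
  if( inet_pton(AF_INET, mask,    &m)!=1 ) return 0;
  return (h.s_addr & m.s_addr) == (p.s_addr & m.s_addr);
}

int main(void){
  /* The same cases as the test table above. */
  printf("%d\n", is_in_net("198.95.249.79", "198.95.249.79", "255.255.255.255")); /* 1 */
  printf("%d\n", is_in_net("198.95.249.78", "198.95.249.79", "255.255.255.255")); /* 0 */
  printf("%d\n", is_in_net("198.95.1.1", "198.95.0.0", "255.255.0.0"));           /* 1 */
  printf("%d\n", is_in_net("198.94.1.1", "198.95.0.0", "255.255.0.0"));           /* 0 */
  return 0;
}
```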
Robert Walter Doyne (1857–1916) was an Anglo-Irish ophthalmologist. He was born in Monart, County Wexford, Ireland, second son of the Reverend Philip Walter Doyne (died 1861), vicar of Monart, and Emily Sophia Richards, daughter of John Goddard Richards, barrister, of Ardamine House, Gorey, County Wexford and his first wife Anna-Catherine Ward, and granddaughter of the noted physician Solomon Richards. He belonged to a junior branch of the long-established Doyne family of Wells House, County Wexford, who were descended from the eminent judge Sir Robert Doyne (1651-1733). Doyne studied medicine in Oxford and Bristol and at St George's Hospital in London. In 1886, he founded the Oxford Eye Hospital, and in 1909 became the first president of the Oxford Ophthalmological Congress. In 1899 Doyne discovered colloid bodies lying on Bruch's membrane that appeared to merge, forming a mosaic pattern that resembled a honeycomb. Afterwards, this disorder was referred to as "Doyne's honeycomb choroiditis". Today this condition is known to be a rare hereditary form of macular degeneration that results in progressive and irreversible loss of vision. Other names for the disorder are: "macular drusen", "malattia leventinese", "dominant radial drusen" and "Doyne honeycomb retinal dystrophy". In 1889, he was the first physician to describe angioid streaks, a disorder that affects Bruch's membrane, the innermost layer of the choroid. Two years after his death in 1916, a prized distinction in British ophthalmologic medicine known as the "Doyne Memorial Lecture" was established. He married Gertrude Hope Hollings, daughter of John Hollings of The Watchetts, Surrey, and had two sons, including Philip, who was an ophthalmologist like his father. References British ophthalmologists 1857 births 1916 deaths
```c++
#include "UEPyAssetUserData.h"

#if WITH_EDITOR

PyObject *py_ue_asset_import_data(ue_PyUObject * self, PyObject * args)
{
	ue_py_check(self);

	UStruct *u_struct = (UStruct *)self->ue_object->GetClass();
	UClassProperty *u_property = (UClassProperty *)u_struct->FindPropertyByName(TEXT("AssetImportData"));
	if (!u_property)
	{
		return PyErr_Format(PyExc_Exception, "UObject does not have asset import data.");
	}

	UAssetImportData *import_data = (UAssetImportData *)u_property->GetPropertyValue_InContainer(self->ue_object);
	FAssetImportInfo *import_info = &import_data->SourceData;

	PyObject *ret = PyList_New(import_info->SourceFiles.Num());
	for (int i = 0; i < import_info->SourceFiles.Num(); i++)
	{
		PyObject *py_source_file = PyDict_New();
		PyDict_SetItemString(py_source_file, "absolute_filepath", PyUnicode_FromString(TCHAR_TO_UTF8(*import_data->ResolveImportFilename(import_info->SourceFiles[i].RelativeFilename, NULL))));
		PyDict_SetItemString(py_source_file, "relative_filepath", PyUnicode_FromString(TCHAR_TO_UTF8(*import_info->SourceFiles[i].RelativeFilename)));
		PyDict_SetItemString(py_source_file, "timestamp", PyLong_FromLong(import_info->SourceFiles[i].Timestamp.ToUnixTimestamp()));
#if ENGINE_MINOR_VERSION > 19
		PyDict_SetItemString(py_source_file, "filehash", PyUnicode_FromString(TCHAR_TO_UTF8(*LexToString(import_info->SourceFiles[i].FileHash))));
#else
		PyDict_SetItemString(py_source_file, "filehash", PyUnicode_FromString(TCHAR_TO_UTF8(*LexicalConversion::ToString(import_info->SourceFiles[i].FileHash))));
#endif
		PyList_SetItem(ret, i, py_source_file);
	}

	return ret;
}

PyObject *py_ue_asset_import_data_set_sources(ue_PyUObject * self, PyObject * args)
{
	ue_py_check(self);

	PyObject *py_files;
	if (!PyArg_ParseTuple(args, "O:asset_import_data_set_sources", &py_files))
	{
		return nullptr;
	}

	TArray<FString> filenames;

	UStruct *u_struct = (UStruct *)self->ue_object->GetClass();
	UClassProperty *u_property = (UClassProperty *)u_struct->FindPropertyByName(TEXT("AssetImportData"));
	if (!u_property)
	{
		return PyErr_Format(PyExc_Exception, "UObject does not have asset import data.");
	}

	if (PyUnicodeOrString_Check(py_files))
	{
		filenames.Add(FString(UTF8_TO_TCHAR(UEPyUnicode_AsUTF8(py_files))));
	}
	else
	{
		PyObject *py_iter = PyObject_GetIter(py_files);
		if (!py_iter)
		{
			return PyErr_Format(PyExc_Exception, "argument is not a string or an iterable of strings");
		}
		while (PyObject *py_item = PyIter_Next(py_iter))
		{
			if (!PyUnicodeOrString_Check(py_item))
			{
				// Release both references before raising.
				Py_DECREF(py_item);
				Py_DECREF(py_iter);
				return PyErr_Format(PyExc_Exception, "argument is not a string or an iterable of strings");
			}
			filenames.Add(FString(UTF8_TO_TCHAR(UEPyUnicode_AsUTF8(py_item))));
			// PyIter_Next returns a new reference; drop it once the string is copied.
			Py_DECREF(py_item);
		}
		Py_DECREF(py_iter);
	}

	UAssetImportData *import_data = (UAssetImportData *)u_property->GetPropertyValue_InContainer(self->ue_object);
	FAssetImportInfo *import_info = &import_data->SourceData;

	TArray<FAssetImportInfo::FSourceFile> sources;
	for (FString filename : filenames)
	{
		sources.Add(FAssetImportInfo::FSourceFile(filename));
	}
	import_info->SourceFiles = sources;

	Py_RETURN_NONE;
}
#endif
```
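From the Python side, the two bindings above read and rewrite an asset's import sources. A hypothetical editor-script usage, assuming the functions are exposed on `uobject` as `asset_import_data` and `asset_import_data_set_sources`, and that `ue.get_asset` plus the asset path exist in your project:

```python
import unreal_engine as ue

# Inspect where the asset was imported from (the path is illustrative).
asset = ue.get_asset('/Game/Meshes/SK_Mannequin')
for source in asset.asset_import_data():
    print(source['absolute_filepath'], source['timestamp'])

# Re-point the asset at a new source; accepts a single string
# or any iterable of strings.
asset.asset_import_data_set_sources('D:/exports/mannequin.fbx')
```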
Among those who were born in the London Borough of Croydon, or have dwelt within the borders of the modern borough, are (in alphabetical order):

A
Feroz Abbasi, arrested in Afghanistan in 2001 and detained at Guantanamo Bay; lived in Shirley and attended school in Croydon
Adegbenga Adejumo (1987–), Croydon-born dubstep producer known as Benga
Allan Ahlberg (1938–), children's writer (Penguin)
Waheed Alli (1964–), born and raised in the north of Croydon; multimillionaire media entrepreneur and politician; co-founder of Planet 24 TV production company; MD at Carlton Television; currently chairman of ASOS.com and Chorion Ltd.; a Labour peer; one of very few openly gay Muslim politicians in the world
Aaron Wan-Bissaka, professional footballer who plays for Premier League club Manchester United; born in Croydon
Dame Peggy Ashcroft (1907–1991), actress, born in Croydon and lived in George Street as a child; honoured in the naming of the Ashcroft Theatre, part of the Fairfield Halls; was a school friend of architect Jane Drew
Lionel Atwill (1885–1946), stage and screen actor, born in Croydon

B
Jeannie Baker (1940–), artist, author, designer and animator
Cicely Mary Barker (1895–1973), illustrator and artist; created the famous Flower Fairies books; born in Croydon and lived locally; studied at the Croydon School of Art
Jon Benjamin (1964–), Chief Executive of the Board of Deputies of British Jews since 2005; born and grew up in Croydon, and educated at Park Hill Primary School and Dulwich College
Edward White Benson, Archbishop of Canterbury (1883–1896); lived at Addington Palace; invented the Christmas tradition of the Festival of Nine Lessons and Carols
Jeff Beck (1944–), guitarist
Jay Bernard (1988–), FRSL, writer, artist, film programmer, and activist, raised in Croydon
Keith Berry (1973–), musician and composer
Frederick Betts (1859–1944), donated Betts Park and built large areas in Croydon and Penge
Jamal Blackman (1993–), footballer
Emily Blunt (1983–), actress; she and husband John Krasinski own an apartment in East Croydon
James Booth (1927–2005), actor (Zulu)
Dane Bowers (1979–), singer, attended Trinity School
Derren Brown, illusionist; born and brought up in Purley
James Buckley, actor, best known for playing Jay Cartwright in The Inbetweeners
Raymond Burns (1954–), musician, member of punk rock band the Damned; also known by the name Captain Sensible
Mark Butcher (1972–), Surrey and England cricketer; born in Croydon, attended Trinity School

C
Alison Carroll, actress
Raymond Chandler (1888–1959), screenwriter and author
Anne Clark (1960–), poet, songwriter and electronic musician
Klariza Clayton (1989–), actress
Martin Clunes (1961–), actor, resident
Carlton Cole, ex-footballer, born in Croydon
Ronnie Corbett, comic actor, lived for many years in Addington, London
Frederick George Creed (1871–1957), electrical engineer and inventor of the teleprinter; lived and died at 20 Outram Road, Addiscombe
Peter Cushing (1913–1994), actor; born in Kenley, lived in Purley

D
Tasha Danvers-Smith (1977–), champion hurdler
Michael Dapaah (1991–), actor and comedian, attended Thomas More Catholic School, Purley
Bertrand Dawson (1864–1945), physician to the British Royal Family and President of the Royal College of Physicians
Desmond Dekker (1941–2006), ska musician, lived in Thornton Heath
R. F. Delderfield (1912–1972), writer and dramatist; lived at 22 Ashburton Avenue, Addiscombe, 1918–1923; his "Avenue" series is based on his life in Addiscombe & Shirley Park; many of his works were adapted for television
Norman Demuth (1898–1968), composer and musicologist, born at 91 St James' Road
Luol Deng (1985–), basketball player for the Chicago Bulls and Great Britain; raised in South Norwood
Sir Arthur Conan Doyle (1859–1930), author and creator of Sherlock Holmes; lived at 12 Tennison Road, South Norwood, 1891–1894
Jane Drew (1911–1996), architect and town planner; born at 8 Parchmore Road, Thornton Heath; went to Croydon High School and was a school friend of Dame Peggy Ashcroft
Jacqueline du Pré (1945–1987), British cellist, acknowledged as one of the greatest players of the instrument, but whose career was cut short by multiple sclerosis; lived in Purley and attended Croydon High School
Des'ree (1968–), award-winning English recording artist

E
Havelock Ellis (1859–1939), Victorian sexologist, born in Croydon
Tracey Emin (1963–), artist
Carlos Ezquerra (1947–2018), comics artist, co-creator of Judge Dredd

F
Noel Fielding (1973–), comedian, writer, actor, artist, co-creator of The Mighty Boosh
Matthew Fisher (1946–), musician, Procol Harum, composer of "A Whiter Shade of Pale"
Kenelm Foss (1885–1963), actor, theatre director, author, screenwriter and film director, born in Croydon
Alexander Francis (1995–), musician, composer
Vincent Frank (1985–), musician, Frankmusik
Donna Fraser (1972–), international athlete
Ian Frazer, poker player
Neil Fraser (1955–), dub musician/producer (AKA Mad Professor)
Jacqueline Froom (1929–2018), poet, lyricist, and teacher
Charles Burgess Fry (1872–1956), polymath – sportsman, politician, teacher, writer, editor, publisher

G
Paul Garelli (1924–2006), French Assyriologist
Trevor Goddard (1962–2003), actor
JB Gill (1986–), singer with British boyband JLS, farmer and TV presenter
Otis Grand (1950–2023), American blues guitarist, lived in Croydon
Sir Philip Green (1952–), Croydon-born billionaire, owner of the Arcadia Group
Deryck Guyler (1914–1999), actor

H
Ben Haenow (1985–), winner of the eleventh series of The X Factor
Will Hay (1888–1949), comic actor; lived at 45 The Chase, Norbury, 1927–1934
Simon Haynes (1967–), author, born in Croydon
Sir Francis Bond Head (1793–1875), soldier, traveller, author and Lieutenant Governor of Upper Canada (1836–1838); had his home at Duppas Hill, Croydon
Chris Heath (1959–), actor, author, comedian
Roy Hodgson, football manager and former player; born in Croydon, attended John Ruskin Grammar School
Joseph Holbrooke (1822–1876), composer of stage, choral, and orchestral music
Roy Hudd, comedian, born in Croydon in 1936

J
Len Jarrett (1921–), former Director of Administration of the World Scout Bureau; former World Organizer of Scouting's Jamboree-on-the-Air for thirty years; Croydon-born
Nora Johnston (1886–1952), carillon performer and inventor of the mobile carillon
Finn Jones (1988–), raised in Croydon
Oliver Jones (1986–), Croydon-born dubstep producer otherwise known as Skream

K
Steve Kember (1948–), footballer, born in Croydon
George Knowland (1922–1945), Victoria Cross recipient
Krept and Konan, UK rap duo, raised in Gipsy Hill, Lambeth and Thornton Heath, Croydon respectively
Rachel Keen (1997–), singer/songwriter, known as Raye, raised in Croydon
Nish Kumar (1985–), comedian, grew up in Bromley and Croydon

L
Andrew Lawrence
D. H. Lawrence (1885–1930), author; lived at 12 Colworth Road, Addiscombe, 1908–1912, whilst a teacher at Davidson Road School
Sir David Lean (1908–1991), film director, born in Croydon
Iain Lee (1973–), comedian, born in South Croydon
Mike Leeder (1968–), Hong Kong-based film producer, casting director and sometimes actor, born and raised in Croydon
E. G. Handel Lucas (1861–1936), artist, lived in Croydon from 1861 to 1909
Dani Luna (1999–), professional wrestler

M
Kirsty MacColl (1959–2000), singer and songwriter, born and grew up in Croydon
Miles Malleson (1888–1969), actor and dramatist, born in Croydon
Jimi Manuwa (1980–), American-born English mixed martial artist
Ursula Martinez (1966–), cabaret and burlesque entertainer
David McAlmont (1967–), British vocalist and songwriter, born in Croydon
Duke McKenzie (1963–), world champion boxer
Ralph McTell (1944–), musician, composer of "Streets of London"
Katie Melua (1984–), singer, songwriter, musician, went to the Brit School for Performing Arts at Selhurst, Croydon
Graham Moodie (1981–), Olympic hockey player
Kate Moss (1974–), model
Malcolm Muggeridge (1903–1990), author and media personality; son of H. T. Muggeridge, a prominent Croydon Labour councillor; taught at John Ruskin Central School in the 1920s

N
Habib Nasib Nader (1979–), actor, writer
Kate Nash (1987–), singer/songwriter; attended the Brit School, Croydon

O
Lawrence Okoye, athlete, attended Whitgift School
Tarik O'Regan (1978–), composer, attended Elmhurst and Whitgift Schools, Croydon

P
Sue Perkins (1969–), comedian, writer, performer
Christopher Pitcher (1973–), cricketer
Lucy Porter, comedian, raised in Croydon
Simon Prebble (1942–), actor, narrator
Dickie Pride (1941–1969), rock and roll and jazz singer
Luke Pritchard, lead singer of The Kooks, attended the Brit School, Croydon
David Prowse, actor, aka Darth Vader in Star Wars; born in Bristol, lived in Addiscombe, Croydon for over 40 years
Jason Puncheon (1986–), English professional footballer who plays in midfield for Crystal Palace

R
Chris Reed (1982–), BBC Radio One dubstep and grime DJ/producer (AKA Plastician)
Jamie Reid (1947–), situationist, artist, graphic designer
Robert Reid, rally driver, lives in a flat in South Croydon
Susanna Reid (1970–), BBC television presenter; born in Croydon, attended Croham Hurst School and Croydon High School
Nigel Reo-Coker, English midfielder, playing for Bolton Wanderers and formerly of Wimbledon F.C., West Ham United and Aston Villa; born in Thornton Heath
Phillip Rhys, actor
Bridget Riley (1931–), painter, one of the foremost proponents of op art; born in Norbury
Francis Ronalds (1788–1873), inventor, lived in Croydon in the period 1823–33 and manufactured his patented drawing instruments here
Emily Ronalds (1795–1889), social reformer and sister of Francis Ronalds, established an early preschool in Croydon in 1826
Martyn Rooney (1987–), international sprinter
Nadia Rose (1993–), recording artist
John Ruskin (1819–1900), art critic and social theorist; spent much of his childhood in Croydon at his mother's family home and visited often as an adult; his parents are buried in Shirley

S
Peter Sarstedt (1942–2017), singer, winner of an Ivor Novello Award; resident
Danny Schwarz, model
Kellie Shirley, EastEnders actress
Emile Smith Rowe (born 2000), Arsenal footballer, born in Croydon and spent his early life in Thornton Heath
Bernard Spear (1919–2003), actor (Yentl)
William Stanley (1829–1909), philanthropist, inventor, engineer, author, and artist;
lived most of his life in South Norwood, where he designed and built Stanley Halls, South Norwood
E. L. G. Stones (1914–1987), professor of medieval history at the University of Glasgow from 1956 to 1978
Dan Stevens, actor
Stormzy (1993–), musician, raised in Thornton Heath
Swift, rapper, part of the group Section Boyz, raised in Croydon

T
Samuel Coleridge-Taylor (1875–1912), composer; noted for his cantatas including the Song of Hiawatha trilogy; lived at 30 Dagnall Park, Selhurst, and worked and died in St Leonards Road, Waddon
Sam Taylor-Johnson (born 1967), artist and filmmaker, born in Croydon

V
Jonathan Vaughn (1981–), organist and choir director

W
Alfred Russel Wallace (1823–1913), naturalist; independently proposed a theory of evolution by natural selection and prompted Charles Darwin to reveal his own unpublished theory sooner than he had intended; lived at 44 St Peter's Road, Croydon
John Whitgift (ca. 1530–1604), Archbishop of Canterbury; buried in the Parish Church of St John the Baptist; several other Archbishops are buried in the Parish Church or St Mary's in Addington
Rickie Haywood Williams (1982–), TV and radio presenter currently working for MTV and Kiss 100 London
Karl "Konan" Wilson, half of the British rap duo Krept and Konan, from Thornton Heath
Amy Winehouse (1983–2011), singer, attended the Brit School, Croydon
Wilfred Wood, served as Bishop of Croydon 1985–2002; the first black Church of England bishop
Edward Woodward (1930–2009), actor, born in Croydon
Ian Wright MBE, former Crystal Palace, Arsenal and England footballer; lives in Shirley
Matthew Wright, journalist and television presenter; born and resides in Croydon
Tom Wright (1957–), architect of the Burj Al Arab

Y
Alfred Gregory Yewen, Australian agricultural writer, journalist and socialist

In fiction
Sarah Jane Smith, the popular fictional companion of the Third and Fourth Doctors in the British science fiction television series Doctor Who
Jeremy "Jez" Usborne and Mark Corrigan, the fictional protagonists of the Channel 4 sitcom Peep Show, live in a flat in West Croydon
Captain Kevin Darling from the BBC sitcom Blackadder Goes Forth lived in Croydon with his girlfriend Doris; Darling was also a wicket-keeper for the Croydon Gentlemen cricket team
Terry and June, the protagonists of the BBC sitcom of the same name, lived in Purley, a suburb of Croydon

References

Croydon
```scss
html, body {
  width: 100%;
  height: 100%;
  overscroll-behavior-y: none;
}

body {
  overflow-x: hidden;
}

@include media-breakpoint-up(xl) {
  .aside {
    min-width: $app-aside-min-width;
    max-width: $app-aside-width;
    min-height: 100vh;

    nav {
      height: 100%;
    }
  }
}

@include media-breakpoint-down(xl) {
  .aside-collapse {
    visibility: hidden;
    opacity: 0;
    transition: all 1ms ease !important;
  }

  .aside {
    height: calc(1.325rem + .9vw + 2em);
    transition: all 300ms ease;
  }

  body.menu-open {
    .aside-collapse {
      min-height: calc(100vh - 4rem);
      display: flex !important;
      visibility: visible;
      opacity: 1;
    }

    .aside {
      height: auto !important;
      min-height: 100vh;
    }

    .workspace {
      display: none !important;
    }
  }

  body:not(.menu-open) {
    .aside-collapse {
      .sub-menu {
        visibility: hidden !important;
      }
    }
  }
}

.full-height {
  height: 100vh;
  overflow: auto;
}

.command-bar {
  --#{$prefix}dropdown-item-padding-x: 1rem;
  --#{$prefix}dropdown-item-padding-y: .25rem;
}

.table td {
  --#{$prefix}dropdown-item-padding-x: 0.5rem;
  --#{$prefix}dropdown-item-padding-y: .25rem;
}

.command-bar,
.table td {
  --#{$prefix}dropdown-spacer: #{$dropdown-spacer};
  --#{$prefix}dropdown-color: #{$dropdown-color};
  --#{$prefix}dropdown-bg: #{$dropdown-bg};
  --#{$prefix}dropdown-border-color: #{$dropdown-border-color};
  --#{$prefix}dropdown-border-radius: #{$dropdown-border-radius};
  --#{$prefix}dropdown-border-width: #{$dropdown-border-width};
  --#{$prefix}dropdown-inner-border-radius: #{$btn-border-radius};
  --#{$prefix}dropdown-link-color: #{$dropdown-link-color};
  --#{$prefix}dropdown-link-hover-color: #{$dropdown-link-hover-color};
  --#{$prefix}dropdown-link-hover-bg: #{$dropdown-link-hover-bg};
  --#{$prefix}dropdown-link-active-color: #{$dropdown-link-active-color};
  --#{$prefix}dropdown-link-active-bg: #{$dropdown-link-active-bg};
  --#{$prefix}dropdown-link-disabled-color: #{$dropdown-link-disabled-color};

  .btn {
    @extend .dropdown-item;
  }
}

.dropdown-menu {
  .btn {
    @extend .dropdown-item;
  }
}

@include media-breakpoint-down(md) {
  // Styles
  .app:before {
    display: none;
  }

  .command-bar {
    display: inline-block !important;
    list-style: none;
    margin: 0;
    padding: 0;
    overflow-x: auto;
    white-space: nowrap;
    width: 100%;
    position: initial;
    vertical-align: middle;
    text-align: center;

    li {
      display: inline-block;
      /*
      &:last-child {
        .btn-link {
          margin-right: 0;
          padding-right: 0;
        }
      }
      &:first-child {
        .btn-link {
          margin-left: 0;
          padding-left: 0;
        }
      }
      */
    }
  }
}

.layout {
  @extend .bg-white;
  @extend .shadow-sm;
  @extend .rounded;
  @extend .p-4;
  @extend .mb-3;
}

.layout-wrapper-no-padder {
  > .p-4 {
    padding: 0 !important;
  }
}

.layout-wrapper {
  .shadow-sm {
    box-shadow: none !important;
  }

  > .mb-3:last-child {
    @extend .mb-0;
  }
}

.iframe-error {
  position: fixed;
  top: 0;
  left: 0;
  z-index: 2050;
  width: 100vw;
  height: 100vh;
  overflow: hidden;
  outline: 0;
  border: none;
}

.workspace-limit {
  max-width: calc(1120px + #{$spacer});
  overscroll-behavior-y: none;
}

.command-bar-wrapper {
  position: sticky;
  bottom: 0;
  -webkit-backdrop-filter: blur(2px);
  backdrop-filter: blur(2px);
  z-index: 5;
  left: 0;
  right: 0;
}

@include media-breakpoint-down(md) {
  .command-bar-wrapper {
    .layout {
      //background-color: $dark !important;
      padding: 1.5em 0 !important;
      margin-top: 0.3em;
      border-top: 1px solid $border-color;
    }

    /*
    .command-bar {
      .btn {
        //color: $white!important;
        display: flex !important;
        flex-direction: column !important;
        //font-size: 0.875rem;
        //padding-top: 0.8em;
        padding: 0.7em;
        border-radius: 0.5rem;

        svg {
          height: 1.5em;
          width: 1.5em;
          margin: 0 !important;
          //margin: 0 0 0.3rem 0 !important;
        }

        span {
          display: none;
        }
      }
    }
    */
  }
} ```
```c++ //===----- R600Packetizer.cpp - VLIW packetizer ---------------------------===// // // See path_to_url for license information. // //===your_sha256_hash------===// // /// \file /// This pass implements instructions packetization for R600. It unsets isLast /// bit of instructions inside a bundle and substitutes src register with /// PreviousVector when applicable. // //===your_sha256_hash------===// #include "MCTargetDesc/R600MCTargetDesc.h" #include "R600.h" #include "R600Subtarget.h" #include "llvm/CodeGen/DFAPacketizer.h" #include "llvm/CodeGen/MachineDominators.h" #include "llvm/CodeGen/MachineLoopInfo.h" #include "llvm/CodeGen/ScheduleDAG.h" using namespace llvm; #define DEBUG_TYPE "packets" namespace { class R600Packetizer : public MachineFunctionPass { public: static char ID; R600Packetizer() : MachineFunctionPass(ID) {} void getAnalysisUsage(AnalysisUsage &AU) const override { AU.setPreservesCFG(); AU.addRequired<MachineDominatorTree>(); AU.addPreserved<MachineDominatorTree>(); AU.addRequired<MachineLoopInfo>(); AU.addPreserved<MachineLoopInfo>(); MachineFunctionPass::getAnalysisUsage(AU); } StringRef getPassName() const override { return "R600 Packetizer"; } bool runOnMachineFunction(MachineFunction &Fn) override; }; class R600PacketizerList : public VLIWPacketizerList { private: const R600InstrInfo *TII; const R600RegisterInfo &TRI; bool VLIW5; bool ConsideredInstUsesAlreadyWrittenVectorElement; unsigned getSlot(const MachineInstr &MI) const { return TRI.getHWRegChan(MI.getOperand(0).getReg()); } /// \returns register to PV chan mapping for bundle/single instructions that /// immediately precedes I. DenseMap<unsigned, unsigned> getPreviousVector(MachineBasicBlock::iterator I) const { DenseMap<unsigned, unsigned> Result; I--; if (!TII->isALUInstr(I->getOpcode()) && !I->isBundle()) return Result; MachineBasicBlock::instr_iterator BI = I.getInstrIterator(); if (I->isBundle()) BI++; int LastDstChan = -1; do { bool isTrans = false; int BISlot = getSlot(*BI); if (LastDstChan >= BISlot) isTrans = true; LastDstChan = BISlot; if (TII->isPredicated(*BI)) continue; int OperandIdx = TII->getOperandIdx(BI->getOpcode(), R600::OpName::write); if (OperandIdx > -1 && BI->getOperand(OperandIdx).getImm() == 0) continue; int DstIdx = TII->getOperandIdx(BI->getOpcode(), R600::OpName::dst); if (DstIdx == -1) { continue; } Register Dst = BI->getOperand(DstIdx).getReg(); if (isTrans || TII->isTransOnly(*BI)) { Result[Dst] = R600::PS; continue; } if (BI->getOpcode() == R600::DOT4_r600 || BI->getOpcode() == R600::DOT4_eg) { Result[Dst] = R600::PV_X; continue; } if (Dst == R600::OQAP) { continue; } unsigned PVReg = 0; switch (TRI.getHWRegChan(Dst)) { case 0: PVReg = R600::PV_X; break; case 1: PVReg = R600::PV_Y; break; case 2: PVReg = R600::PV_Z; break; case 3: PVReg = R600::PV_W; break; default: llvm_unreachable("Invalid Chan"); } Result[Dst] = PVReg; } while ((++BI)->isBundledWithPred()); return Result; } void substitutePV(MachineInstr &MI, const DenseMap<unsigned, unsigned> &PVs) const { unsigned Ops[] = { R600::OpName::src0, R600::OpName::src1, R600::OpName::src2 }; for (unsigned Op : Ops) { int OperandIdx = TII->getOperandIdx(MI.getOpcode(), Op); if (OperandIdx < 0) continue; Register Src = MI.getOperand(OperandIdx).getReg(); const DenseMap<unsigned, unsigned>::const_iterator It = PVs.find(Src); if (It != PVs.end()) MI.getOperand(OperandIdx).setReg(It->second); } } public: // Ctor. 
R600PacketizerList(MachineFunction &MF, const R600Subtarget &ST, MachineLoopInfo &MLI) : VLIWPacketizerList(MF, MLI, nullptr), TII(ST.getInstrInfo()), TRI(TII->getRegisterInfo()) { VLIW5 = !ST.hasCaymanISA(); } // initPacketizerState - initialize some internal flags. void initPacketizerState() override { ConsideredInstUsesAlreadyWrittenVectorElement = false; } // ignorePseudoInstruction - Ignore bundling of pseudo instructions. bool ignorePseudoInstruction(const MachineInstr &MI, const MachineBasicBlock *MBB) override { return false; } // isSoloInstruction - return true if instruction MI can not be packetized // with any other instruction, which means that MI itself is a packet. bool isSoloInstruction(const MachineInstr &MI) override { if (TII->isVector(MI)) return true; if (!TII->isALUInstr(MI.getOpcode())) return true; if (MI.getOpcode() == R600::GROUP_BARRIER) return true; // XXX: This can be removed once the packetizer properly handles all the // LDS instruction group restrictions. return TII->isLDSInstr(MI.getOpcode()); } // isLegalToPacketizeTogether - Is it legal to packetize SUI and SUJ // together. bool isLegalToPacketizeTogether(SUnit *SUI, SUnit *SUJ) override { MachineInstr *MII = SUI->getInstr(), *MIJ = SUJ->getInstr(); if (getSlot(*MII) == getSlot(*MIJ)) ConsideredInstUsesAlreadyWrittenVectorElement = true; // Does MII and MIJ share the same pred_sel ? int OpI = TII->getOperandIdx(MII->getOpcode(), R600::OpName::pred_sel), OpJ = TII->getOperandIdx(MIJ->getOpcode(), R600::OpName::pred_sel); Register PredI = (OpI > -1)?MII->getOperand(OpI).getReg() : Register(), PredJ = (OpJ > -1)?MIJ->getOperand(OpJ).getReg() : Register(); if (PredI != PredJ) return false; if (SUJ->isSucc(SUI)) { for (unsigned i = 0, e = SUJ->Succs.size(); i < e; ++i) { const SDep &Dep = SUJ->Succs[i]; if (Dep.getSUnit() != SUI) continue; if (Dep.getKind() == SDep::Anti) continue; if (Dep.getKind() == SDep::Output) if (MII->getOperand(0).getReg() != MIJ->getOperand(0).getReg()) continue; return false; } } bool ARDef = TII->definesAddressRegister(*MII) || TII->definesAddressRegister(*MIJ); bool ARUse = TII->usesAddressRegister(*MII) || TII->usesAddressRegister(*MIJ); return !ARDef || !ARUse; } // isLegalToPruneDependencies - Is it legal to prune dependency between SUI // and SUJ. bool isLegalToPruneDependencies(SUnit *SUI, SUnit *SUJ) override { return false; } void setIsLastBit(MachineInstr *MI, unsigned Bit) const { unsigned LastOp = TII->getOperandIdx(MI->getOpcode(), R600::OpName::last); MI->getOperand(LastOp).setImm(Bit); } bool isBundlableWithCurrentPMI(MachineInstr &MI, const DenseMap<unsigned, unsigned> &PV, std::vector<R600InstrInfo::BankSwizzle> &BS, bool &isTransSlot) { isTransSlot = TII->isTransOnly(MI); assert (!isTransSlot || VLIW5); // Is the dst reg sequence legal ? if (!isTransSlot && !CurrentPacketMIs.empty()) { if (getSlot(MI) <= getSlot(*CurrentPacketMIs.back())) { if (ConsideredInstUsesAlreadyWrittenVectorElement && !TII->isVectorOnly(MI) && VLIW5) { isTransSlot = true; LLVM_DEBUG({ dbgs() << "Considering as Trans Inst :"; MI.dump(); }); } else return false; } } // Are the Constants limitations met ? 
CurrentPacketMIs.push_back(&MI); if (!TII->fitsConstReadLimitations(CurrentPacketMIs)) { LLVM_DEBUG({ dbgs() << "Couldn't pack :\n"; MI.dump(); dbgs() << "with the following packets :\n"; for (unsigned i = 0, e = CurrentPacketMIs.size() - 1; i < e; i++) { CurrentPacketMIs[i]->dump(); dbgs() << "\n"; } dbgs() << "because of Consts read limitations\n"; }); CurrentPacketMIs.pop_back(); return false; } // Is there a BankSwizzle set that meet Read Port limitations ? if (!TII->fitsReadPortLimitations(CurrentPacketMIs, PV, BS, isTransSlot)) { LLVM_DEBUG({ dbgs() << "Couldn't pack :\n"; MI.dump(); dbgs() << "with the following packets :\n"; for (unsigned i = 0, e = CurrentPacketMIs.size() - 1; i < e; i++) { CurrentPacketMIs[i]->dump(); dbgs() << "\n"; } dbgs() << "because of Read port limitations\n"; }); CurrentPacketMIs.pop_back(); return false; } // We cannot read LDS source registers from the Trans slot. if (isTransSlot && TII->readsLDSSrcReg(MI)) return false; CurrentPacketMIs.pop_back(); return true; } MachineBasicBlock::iterator addToPacket(MachineInstr &MI) override { MachineBasicBlock::iterator FirstInBundle = CurrentPacketMIs.empty() ? &MI : CurrentPacketMIs.front(); const DenseMap<unsigned, unsigned> &PV = getPreviousVector(FirstInBundle); std::vector<R600InstrInfo::BankSwizzle> BS; bool isTransSlot; if (isBundlableWithCurrentPMI(MI, PV, BS, isTransSlot)) { for (unsigned i = 0, e = CurrentPacketMIs.size(); i < e; i++) { MachineInstr *MI = CurrentPacketMIs[i]; unsigned Op = TII->getOperandIdx(MI->getOpcode(), R600::OpName::bank_swizzle); MI->getOperand(Op).setImm(BS[i]); } unsigned Op = TII->getOperandIdx(MI.getOpcode(), R600::OpName::bank_swizzle); MI.getOperand(Op).setImm(BS.back()); if (!CurrentPacketMIs.empty()) setIsLastBit(CurrentPacketMIs.back(), 0); substitutePV(MI, PV); MachineBasicBlock::iterator It = VLIWPacketizerList::addToPacket(MI); if (isTransSlot) { endPacket(std::next(It)->getParent(), std::next(It)); } return It; } endPacket(MI.getParent(), MI); if (TII->isTransOnly(MI)) return MI; return VLIWPacketizerList::addToPacket(MI); } }; bool R600Packetizer::runOnMachineFunction(MachineFunction &Fn) { const R600Subtarget &ST = Fn.getSubtarget<R600Subtarget>(); const R600InstrInfo *TII = ST.getInstrInfo(); MachineLoopInfo &MLI = getAnalysis<MachineLoopInfo>(); // Instantiate the packetizer. R600PacketizerList Packetizer(Fn, ST, MLI); // DFA state table should not be empty. assert(Packetizer.getResourceTracker() && "Empty DFA table!"); assert(Packetizer.getResourceTracker()->getInstrItins()); if (Packetizer.getResourceTracker()->getInstrItins()->isEmpty()) return false; // // Loop over all basic blocks and remove KILL pseudo-instructions // These instructions confuse the dependence analysis. Consider: // D0 = ... (Insn 0) // R0 = KILL R0, D0 (Insn 1) // R0 = ... (Insn 2) // Here, Insn 1 will result in the dependence graph not emitting an output // dependence between Insn 0 and Insn 2. This can lead to incorrect // packetization // for (MachineBasicBlock &MBB : Fn) { for (MachineInstr &MI : llvm::make_early_inc_range(MBB)) { if (MI.isKill() || MI.getOpcode() == R600::IMPLICIT_DEF || (MI.getOpcode() == R600::CF_ALU && !MI.getOperand(8).getImm())) MBB.erase(MI); } } // Loop over all of the basic blocks. for (MachineFunction::iterator MBB = Fn.begin(), MBBe = Fn.end(); MBB != MBBe; ++MBB) { // Find scheduling regions and schedule / packetize each region. 
unsigned RemainingCount = MBB->size(); for(MachineBasicBlock::iterator RegionEnd = MBB->end(); RegionEnd != MBB->begin();) { // The next region starts above the previous region. Look backward in the // instruction stream until we find the nearest boundary. MachineBasicBlock::iterator I = RegionEnd; for(;I != MBB->begin(); --I, --RemainingCount) { if (TII->isSchedulingBoundary(*std::prev(I), &*MBB, Fn)) break; } I = MBB->begin(); // Skip empty scheduling regions. if (I == RegionEnd) { RegionEnd = std::prev(RegionEnd); --RemainingCount; continue; } // Skip regions with one instruction. if (I == std::prev(RegionEnd)) { RegionEnd = std::prev(RegionEnd); continue; } Packetizer.PacketizeMIs(&*MBB, &*I, RegionEnd); RegionEnd = I; } } return true; } } // end anonymous namespace INITIALIZE_PASS_BEGIN(R600Packetizer, DEBUG_TYPE, "R600 Packetizer", false, false) INITIALIZE_PASS_END(R600Packetizer, DEBUG_TYPE, "R600 Packetizer", false, false) char R600Packetizer::ID = 0; char &llvm::R600PacketizerID = R600Packetizer::ID; llvm::FunctionPass *llvm::createR600Packetizer() { return new R600Packetizer(); } ```
```go package spec_iterator func ParallelizedIndexRange(length int, parallelTotal int, parallelNode int) (startIndex int, count int) { if length == 0 { return 0, 0 } // We have more nodes than tests. Trivial case. if parallelTotal >= length { if parallelNode > length { return 0, 0 } else { return parallelNode - 1, 1 } } // This is the minimum amount of tests that a node will be required to run minTestsPerNode := length / parallelTotal // This is the maximum amount of tests that a node will be required to run // The algorithm guarantees that this would be equal to at least the minimum amount // and at most one more maxTestsPerNode := minTestsPerNode if length%parallelTotal != 0 { maxTestsPerNode++ } // Number of nodes that will have to run the maximum amount of tests per node numMaxLoadNodes := length % parallelTotal // Number of nodes that precede the current node and will have to run the maximum amount of tests per node var numPrecedingMaxLoadNodes int if parallelNode > numMaxLoadNodes { numPrecedingMaxLoadNodes = numMaxLoadNodes } else { numPrecedingMaxLoadNodes = parallelNode - 1 } // Number of nodes that precede the current node and will have to run the minimum amount of tests per node var numPrecedingMinLoadNodes int if parallelNode <= numMaxLoadNodes { numPrecedingMinLoadNodes = 0 } else { numPrecedingMinLoadNodes = parallelNode - numMaxLoadNodes - 1 } // Evaluate the test start index and number of tests to run startIndex = numPrecedingMaxLoadNodes*maxTestsPerNode + numPrecedingMinLoadNodes*minTestsPerNode if parallelNode > numMaxLoadNodes { count = minTestsPerNode } else { count = maxTestsPerNode } return } ```
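The arithmetic above is easiest to see on a concrete split. A short example test in the same package: with 10 specs over 3 nodes, `10 % 3 == 1`, so exactly one node (node 1) carries the extra spec.

```go
package spec_iterator

import "fmt"

func ExampleParallelizedIndexRange() {
	// minTestsPerNode = 3, maxTestsPerNode = 4, numMaxLoadNodes = 1.
	for node := 1; node <= 3; node++ {
		start, count := ParallelizedIndexRange(10, 3, node)
		fmt.Printf("node %d: specs [%d, %d)\n", node, start, start+count)
	}
	// Output:
	// node 1: specs [0, 4)
	// node 2: specs [4, 7)
	// node 3: specs [7, 10)
}
```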
```javascript // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. // Flags: --allow-natives-syntax (function ShiftRightWithDeoptUsage() { function g() {} function f() { var tmp = 1264475713; var tmp1 = tmp - (-913041544); g(); return 1 >> tmp1; } %PrepareFunctionForOptimization(f); %OptimizeFunctionOnNextCall(f); assertEquals(0, f()); })(); (function ShiftRightWithCallUsage() { var f = (function() { "use asm" // This is not a valid asm.js, we use the "use asm" here to // trigger Turbofan without deoptimization support. function g(x) { return x; } function f() { var tmp = 1264475713; var tmp1 = tmp - (-913041544); return g(1 >> tmp1, tmp1); } return f; })(); %PrepareFunctionForOptimization(f); %OptimizeFunctionOnNextCall(f); assertEquals(0, f()); })(); ```
The House of Vizarrón (Spanish: La Casa Vizarrón o de las Cadenas) is a house located in El Puerto de Santa María, Spain. It was declared Bien de Interés Cultural in 2006. References See also List of Bien de Interés Cultural in the Province of Cádiz Bien de Interés Cultural landmarks in the Province of Cádiz Houses in Spain
The communauté de communes du Carrefour des Quatre Provinces was located in the Creuse département of the Nouvelle-Aquitaine region of central France. It was created in January 1999. It was merged into the new Communauté de communes Creuse Confluence in January 2017. It comprised the following 15 communes: Blaudeix Cressat La Celle-sous-Gouzon Domeyrot Gouzon Jarnages Ladapeyre Parsac-Rimondeix Pierrefitte Pionnat Saint-Julien-le-Châtel Saint-Loup Saint-Silvain-sous-Toulx Trois-Fonds Vigeville References Carrefour des Quatre Provinces
```xml <test> <settings> <allow_suspicious_low_cardinality_types>1</allow_suspicious_low_cardinality_types> </settings> <query>WITH toUInt8(number) AS k, toUInt64(k) AS k1, k AS k2 SELECT k1, k2, count() FROM numbers(100000000) GROUP BY k1, k2</query> <query>WITH toUInt8(number) AS k, toUInt16(k) AS k1, toUInt32(k) AS k2, k AS k3 SELECT k1, k2, k3, count() FROM numbers(100000000) GROUP BY k1, k2, k3</query> <query>WITH toUInt8(number) AS k, k AS k1, k + 1 AS k2 SELECT k1, k2, count() FROM numbers(100000000) GROUP BY k1, k2</query> <query>WITH toUInt8(number) AS k, k AS k1, k + 1 AS k2, k + 2 AS k3, k + 3 AS k4 SELECT k1, k2, k3, k4, count() FROM numbers(100000000) GROUP BY k1, k2, k3, k4</query> <query>WITH toUInt8(number) AS k, toUInt64(k) AS k1, k1 + 1 AS k2 SELECT k1, k2, count() FROM numbers(100000000) GROUP BY k1, k2</query> <create_query>create table group_by_fk(a UInt32, b UInt32, c LowCardinality(UInt32), d Nullable(UInt32), e UInt64, f UInt64, g UInt64, h LowCardinality(UInt64), i Nullable(UInt64)) engine=MergeTree order by tuple()</create_query> <fill_query>insert into group_by_fk select number, number, number % 10000, number % 2 == 0 ? number : Null, number, number, number, number % 10000, number % 2 == 0 ? number : Null from numbers_mt(1e7) settings max_insert_threads=8</fill_query> <!-- keys64_two_level --> <query>select a, b from group_by_fk group by a, b format Null</query> <!-- low_cardinality_keys128_two_level --> <query>select a, c from group_by_fk group by a, c format Null</query> <!-- nullable_keys128_two_level --> <query>select a, d from group_by_fk group by a, d format Null</query> <!-- keys128_two_level --> <query>select e, f from group_by_fk group by e, f format Null</query> <!-- low_cardinality_keys128_two_level --> <query>select e, h from group_by_fk group by e, h format Null</query> <!-- nullable_keys256_two_level --> <query>select e, i from group_by_fk group by e, i format Null</query> <!-- keys256_two_level --> <query>select e, f, g from group_by_fk group by e, f, g format Null</query> <!-- low_cardinality_keys256_two_level --> <query>select e, f, h from group_by_fk group by e, f, h format Null</query> <!-- nullable_keys256_two_level --> <query>select e, f, i from group_by_fk group by e, f, i format Null</query> <drop_query>drop table group_by_fk</drop_query> </test> ```
```java /* * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * * path_to_url * * Unless required by applicable law or agreed to in writing, software * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. */ package org.apache.beam.runners.dataflow.worker; import static org.apache.beam.sdk.testing.SystemNanoTimeSleeper.sleepMillis; import static org.hamcrest.Matchers.allOf; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.greaterThan; import static org.hamcrest.Matchers.greaterThanOrEqualTo; import static org.hamcrest.Matchers.lessThan; import static org.hamcrest.Matchers.not; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertFalse; import java.util.ArrayList; import java.util.Collection; import java.util.Collections; import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.Optional; import java.util.Queue; import java.util.Set; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentLinkedQueue; import java.util.concurrent.CountDownLatch; import java.util.concurrent.LinkedBlockingQueue; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicInteger; import java.util.function.Consumer; import java.util.function.Function; import javax.annotation.concurrent.GuardedBy; import org.apache.beam.runners.dataflow.worker.streaming.ComputationState; import org.apache.beam.runners.dataflow.worker.streaming.WorkHeartbeatResponseProcessor; import org.apache.beam.runners.dataflow.worker.streaming.WorkId; import org.apache.beam.runners.dataflow.worker.windmill.Windmill; import org.apache.beam.runners.dataflow.worker.windmill.Windmill.CommitWorkResponse; import org.apache.beam.runners.dataflow.worker.windmill.Windmill.ComputationCommitWorkRequest; import org.apache.beam.runners.dataflow.worker.windmill.Windmill.ComputationGetDataRequest; import org.apache.beam.runners.dataflow.worker.windmill.Windmill.ComputationHeartbeatRequest; import org.apache.beam.runners.dataflow.worker.windmill.Windmill.ComputationHeartbeatResponse; import org.apache.beam.runners.dataflow.worker.windmill.Windmill.GetDataRequest; import org.apache.beam.runners.dataflow.worker.windmill.Windmill.GetDataResponse; import org.apache.beam.runners.dataflow.worker.windmill.Windmill.HeartbeatRequest; import org.apache.beam.runners.dataflow.worker.windmill.Windmill.KeyedGetDataRequest; import org.apache.beam.runners.dataflow.worker.windmill.Windmill.LatencyAttribution; import org.apache.beam.runners.dataflow.worker.windmill.Windmill.LatencyAttribution.State; import org.apache.beam.runners.dataflow.worker.windmill.Windmill.WorkItemCommitRequest; import org.apache.beam.runners.dataflow.worker.windmill.WindmillServerStub; import org.apache.beam.runners.dataflow.worker.windmill.client.WindmillStream.CommitWorkStream; import org.apache.beam.runners.dataflow.worker.windmill.client.WindmillStream.GetDataStream; import org.apache.beam.runners.dataflow.worker.windmill.client.WindmillStream.GetWorkStream; import org.apache.beam.runners.dataflow.worker.windmill.work.WorkItemReceiver; import org.apache.beam.runners.dataflow.worker.windmill.work.budget.GetWorkBudget; import org.apache.beam.vendor.guava.v32_1_2_jre.com.google.common.collect.ImmutableSet; import org.apache.beam.vendor.guava.v32_1_2_jre.com.google.common.net.HostAndPort; import 
org.apache.beam.vendor.guava.v32_1_2_jre.com.google.common.util.concurrent.Uninterruptibles; import org.joda.time.Duration; import org.joda.time.Instant; import org.junit.rules.ErrorCollector; import org.slf4j.Logger; import org.slf4j.LoggerFactory; /** An in-memory Windmill server that offers provided work and data. */ public final class FakeWindmillServer extends WindmillServerStub { private static final Logger LOG = LoggerFactory.getLogger(FakeWindmillServer.class); private final ResponseQueue<Windmill.GetWorkRequest, Windmill.GetWorkResponse> workToOffer; private final ResponseQueue<GetDataRequest, GetDataResponse> dataToOffer; private final ResponseQueue<Windmill.CommitWorkRequest, CommitWorkResponse> commitsToOffer; private final Map<WorkId, Windmill.CommitStatus> streamingCommitsToOffer; // Keys are work tokens. private final Map<Long, WorkItemCommitRequest> commitsReceived; private final ArrayList<Windmill.ReportStatsRequest> statsReceived; private final LinkedBlockingQueue<Windmill.Exception> exceptions; private final AtomicInteger expectedExceptionCount; private final ErrorCollector errorCollector; private final ConcurrentHashMap<Long, Consumer<Windmill.CommitStatus>> droppedStreamingCommits; private final List<Windmill.GetDataRequest> getDataRequests = new ArrayList<>(); private final Consumer<List<Windmill.ComputationHeartbeatResponse>> processHeartbeatResponses; private int commitsRequested = 0; private boolean dropStreamingCommits = false; @GuardedBy("this") private ImmutableSet<HostAndPort> dispatcherEndpoints; public FakeWindmillServer( ErrorCollector errorCollector, Function<String, Optional<ComputationState>> computationStateFetcher) { workToOffer = new ResponseQueue<Windmill.GetWorkRequest, Windmill.GetWorkResponse>() .returnByDefault(Windmill.GetWorkResponse.getDefaultInstance()); dataToOffer = new ResponseQueue<GetDataRequest, GetDataResponse>() .returnByDefault(GetDataResponse.getDefaultInstance()) // Sleep for a bit to ensure that *-windmill-read state-sampled counters show up. 
.delayEachResponseBy(Duration.millis(500)); commitsToOffer = new ResponseQueue<Windmill.CommitWorkRequest, CommitWorkResponse>() .returnByDefault(CommitWorkResponse.getDefaultInstance()); streamingCommitsToOffer = new HashMap<>(); commitsReceived = new ConcurrentHashMap<>(); exceptions = new LinkedBlockingQueue<>(); expectedExceptionCount = new AtomicInteger(); this.errorCollector = errorCollector; statsReceived = new ArrayList<>(); droppedStreamingCommits = new ConcurrentHashMap<>(); this.processHeartbeatResponses = new WorkHeartbeatResponseProcessor(computationStateFetcher); } public void setDropStreamingCommits(boolean dropStreamingCommits) { this.dropStreamingCommits = dropStreamingCommits; } public ResponseQueue<Windmill.GetWorkRequest, Windmill.GetWorkResponse> whenGetWorkCalled() { return workToOffer; } public ResponseQueue<GetDataRequest, GetDataResponse> whenGetDataCalled() { return dataToOffer; } public void sendFailedHeartbeats(List<Windmill.ComputationHeartbeatResponse> responses) { getDataStream().onHeartbeatResponse(responses); } public ResponseQueue<Windmill.CommitWorkRequest, Windmill.CommitWorkResponse> whenCommitWorkCalled() { return commitsToOffer; } public Map<WorkId, Windmill.CommitStatus> whenCommitWorkStreamCalled() { return streamingCommitsToOffer; } @Override public Windmill.GetWorkResponse getWork(Windmill.GetWorkRequest request) { LOG.debug("getWorkRequest: {}", request.toString()); Windmill.GetWorkResponse response = workToOffer.getOrDefault(request); LOG.debug("getWorkResponse: {}", response.toString()); return response; } private void validateGetDataRequest(Windmill.GetDataRequest request) { for (ComputationGetDataRequest computationRequest : request.getRequestsList()) { for (KeyedGetDataRequest keyRequest : computationRequest.getRequestsList()) { errorCollector.checkThat(keyRequest.hasWorkToken(), equalTo(true)); errorCollector.checkThat( keyRequest.getShardingKey(), allOf(greaterThan(0L), lessThan(Long.MAX_VALUE))); errorCollector.checkThat(keyRequest.getMaxBytes(), greaterThanOrEqualTo(0L)); } } } @Override public Windmill.GetDataResponse getData(Windmill.GetDataRequest request) { LOG.info("getDataRequest: {}", request.toString()); validateGetDataRequest(request); getDataRequests.add(request); GetDataResponse response = dataToOffer.getOrDefault(request); LOG.debug("getDataResponse: {}", response.toString()); return response; } private void validateCommitWorkRequest(Windmill.CommitWorkRequest request) { for (ComputationCommitWorkRequest computationRequest : request.getRequestsList()) { for (WorkItemCommitRequest commit : computationRequest.getRequestsList()) { errorCollector.checkThat(commit.hasWorkToken(), equalTo(true)); errorCollector.checkThat( commit.getShardingKey(), allOf(greaterThan(0L), lessThan(Long.MAX_VALUE))); errorCollector.checkThat(commit.getCacheToken(), not(equalTo(0L))); } } } @Override public CommitWorkResponse commitWork(Windmill.CommitWorkRequest request) { LOG.debug("commitWorkRequest: {}", request); validateCommitWorkRequest(request); for (ComputationCommitWorkRequest computationRequest : request.getRequestsList()) { for (WorkItemCommitRequest commit : computationRequest.getRequestsList()) { commitsReceived.put(commit.getWorkToken(), commit); } } CommitWorkResponse response = commitsToOffer.getOrDefault(request); LOG.debug("commitWorkResponse: {}", response); return response; } @Override public Windmill.GetConfigResponse getConfig(Windmill.GetConfigRequest request) { return Windmill.GetConfigResponse.newBuilder().build(); } @Override 
public Windmill.ReportStatsResponse reportStats(Windmill.ReportStatsRequest request) { for (Windmill.Exception exception : request.getExceptionsList()) { Uninterruptibles.putUninterruptibly(exceptions, exception); } statsReceived.add(request); if (request.getExceptionsList().isEmpty() || expectedExceptionCount.getAndDecrement() > 0) { return Windmill.ReportStatsResponse.newBuilder().build(); } else { return Windmill.ReportStatsResponse.newBuilder().setFailed(true).build(); } } @Override public long getAndResetThrottleTime() { return 0; } @Override public GetWorkStream getWorkStream(Windmill.GetWorkRequest request, WorkItemReceiver receiver) { LOG.debug("getWorkStream: {}", request.toString()); Instant startTime = Instant.now(); final CountDownLatch done = new CountDownLatch(1); return new GetWorkStream() { @Override public String backendWorkerToken() { return ""; } @Override public void shutdown() {} @Override public void halfClose() { done.countDown(); } @Override public void adjustBudget(long itemsDelta, long bytesDelta) { // no-op. } @Override public GetWorkBudget remainingBudget() { return GetWorkBudget.builder() .setItems(request.getMaxItems()) .setBytes(request.getMaxBytes()) .build(); } @Override public boolean awaitTermination(int time, TimeUnit unit) throws InterruptedException { while (done.getCount() > 0) { Windmill.GetWorkResponse response = workToOffer.get(null); if (response == null) { try { sleepMillis(500); } catch (InterruptedException e) { halfClose(); Thread.currentThread().interrupt(); } continue; } for (Windmill.ComputationWorkItems computationWork : response.getWorkList()) { Instant inputDataWatermark = WindmillTimeUtils.windmillToHarnessWatermark( computationWork.getInputDataWatermark()); for (Windmill.WorkItem workItem : computationWork.getWorkList()) { receiver.receiveWork( computationWork.getComputationId(), inputDataWatermark, Instant.now(), workItem, Collections.singletonList( LatencyAttribution.newBuilder() .setState(State.GET_WORK_IN_TRANSIT_TO_USER_WORKER) .setTotalDurationMillis(1000) .build())); } } } return done.await(time, unit); } @Override public Instant startTime() { return startTime; } }; } @Override public GetDataStream getDataStream() { Instant startTime = Instant.now(); return new GetDataStream() { @Override public String backendWorkerToken() { return ""; } @Override public void shutdown() {} @Override public Windmill.KeyedGetDataResponse requestKeyedData( String computation, KeyedGetDataRequest request) { Windmill.GetDataRequest getDataRequest = GetDataRequest.newBuilder() .addRequests( ComputationGetDataRequest.newBuilder() .setComputationId(computation) .addRequests(request) .build()) .build(); GetDataResponse getDataResponse = getData(getDataRequest); if (getDataResponse.getDataList().isEmpty()) { return null; } assertEquals(1, getDataResponse.getDataCount()); if (getDataResponse.getData(0).getDataList().isEmpty()) { return null; } assertEquals(1, getDataResponse.getData(0).getDataCount()); return getDataResponse.getData(0).getData(0); } @Override public Windmill.GlobalData requestGlobalData(Windmill.GlobalDataRequest request) { Windmill.GetDataRequest getDataRequest = GetDataRequest.newBuilder().addGlobalDataFetchRequests(request).build(); GetDataResponse getDataResponse = getData(getDataRequest); if (getDataResponse.getGlobalDataList().isEmpty()) { return null; } assertEquals(1, getDataResponse.getGlobalDataCount()); return getDataResponse.getGlobalData(0); } @Override public void refreshActiveWork(Map<String, Collection<HeartbeatRequest>> 
heartbeats) { Windmill.GetDataRequest.Builder builder = Windmill.GetDataRequest.newBuilder(); for (Map.Entry<String, Collection<HeartbeatRequest>> entry : heartbeats.entrySet()) { builder.addComputationHeartbeatRequest( ComputationHeartbeatRequest.newBuilder() .setComputationId(entry.getKey()) .addAllHeartbeatRequests(entry.getValue())); } getData(builder.build()); } @Override public void onHeartbeatResponse(List<ComputationHeartbeatResponse> responses) { processHeartbeatResponses.accept(responses); } @Override public void halfClose() {} @Override public boolean awaitTermination(int time, TimeUnit unit) { return true; } @Override public Instant startTime() { return startTime; } }; } @Override public CommitWorkStream commitWorkStream() { Instant startTime = Instant.now(); return new CommitWorkStream() { @Override public String backendWorkerToken() { return ""; } @Override public void shutdown() {} @Override public RequestBatcher batcher() { return new RequestBatcher() { final List<RequestAndDone> requests = new ArrayList<>(); @Override public boolean commitWorkItem( String computation, WorkItemCommitRequest request, Consumer<Windmill.CommitStatus> onDone) { LOG.debug("commitWorkStream::commitWorkItem: {}", request); errorCollector.checkThat(request.hasWorkToken(), equalTo(true)); errorCollector.checkThat( request.getShardingKey(), allOf(greaterThan(0L), lessThan(Long.MAX_VALUE))); errorCollector.checkThat(request.getCacheToken(), not(equalTo(0L))); if (requests.size() > 5) return false; // Throws away the result, but allows to inject latency. Windmill.CommitWorkRequest.Builder builder = Windmill.CommitWorkRequest.newBuilder(); builder.addRequestsBuilder().setComputationId(computation).addRequests(request); commitsToOffer.getOrDefault(builder.build()); requests.add(new RequestAndDone(request, onDone)); flush(); return true; } @Override public void flush() { for (RequestAndDone elem : requests) { if (dropStreamingCommits) { droppedStreamingCommits.put(elem.request.getWorkToken(), elem.onDone); // Return true to indicate the request was accepted even if we are dropping the // commit to simulate a dropped commit. 
continue; } commitsReceived.put(elem.request.getWorkToken(), elem.request); elem.onDone.accept( Optional.ofNullable( streamingCommitsToOffer.remove( WorkId.builder() .setWorkToken(elem.request.getWorkToken()) .setCacheToken(elem.request.getCacheToken()) .build())) // Default to CommitStatus.OK .orElse(Windmill.CommitStatus.OK)); } requests.clear(); } class RequestAndDone { final Consumer<Windmill.CommitStatus> onDone; final WorkItemCommitRequest request; RequestAndDone(WorkItemCommitRequest request, Consumer<Windmill.CommitStatus> onDone) { this.request = request; this.onDone = onDone; } } }; } @Override public void halfClose() {} @Override public boolean awaitTermination(int time, TimeUnit unit) { return true; } @Override public Instant startTime() { return startTime; } }; } public void waitForEmptyWorkQueue() { while (!workToOffer.isEmpty()) { Uninterruptibles.sleepUninterruptibly(100, TimeUnit.MILLISECONDS); } } public Map<Long, WorkItemCommitRequest> waitForAndGetCommitsWithTimeout( int numCommits, Duration timeout) { LOG.debug("waitForAndGetCommitsWithTimeout: {} {}", numCommits, timeout); Instant waitStart = Instant.now(); while (commitsReceived.size() < commitsRequested + numCommits && Instant.now().isBefore(waitStart.plus(timeout))) { Uninterruptibles.sleepUninterruptibly(100, TimeUnit.MILLISECONDS); } commitsRequested += numCommits; return commitsReceived; } public Map<Long, WorkItemCommitRequest> waitForAndGetCommits(int numCommits) { LOG.debug("waitForAndGetCommitsRequest: {}", numCommits); int maxTries = 100; while (maxTries-- > 0 && commitsReceived.size() < commitsRequested + numCommits) { Uninterruptibles.sleepUninterruptibly(100, TimeUnit.MILLISECONDS); } assertFalse( "Should have received " + numCommits + " more commits beyond " + commitsRequested + " commits already seen, but after 10s have only seen " + commitsReceived.size() + ". 
Exceptions seen: " + exceptions, commitsReceived.size() < commitsRequested + numCommits); commitsRequested += numCommits; LOG.debug("waitForAndGetCommitsResponse: {}", commitsReceived); return commitsReceived; } public void clearCommitsReceived() { commitsRequested = 0; commitsReceived.clear(); } public ConcurrentHashMap<Long, Consumer<Windmill.CommitStatus>> waitForDroppedCommits( int droppedCommits) { LOG.debug("waitForDroppedCommits: {}", droppedCommits); int maxTries = 10; while (maxTries-- > 0 && droppedStreamingCommits.size() < droppedCommits) { Uninterruptibles.sleepUninterruptibly(1000, TimeUnit.MILLISECONDS); } assertEquals(droppedCommits, droppedStreamingCommits.size()); return droppedStreamingCommits; } public void setExpectedExceptionCount(int i) { expectedExceptionCount.getAndAdd(i); } public Windmill.Exception getException() throws InterruptedException { return exceptions.take(); } public int numGetDataRequests() { return getDataRequests.size(); } public List<Windmill.GetDataRequest> getGetDataRequests() { return getDataRequests; } public ArrayList<Windmill.ReportStatsRequest> getStatsReceived() { return statsReceived; } @Override public synchronized ImmutableSet<HostAndPort> getWindmillServiceEndpoints() { return dispatcherEndpoints; } @Override public synchronized void setWindmillServiceEndpoints(Set<HostAndPort> endpoints) { this.dispatcherEndpoints = ImmutableSet.copyOf(endpoints); } public static class ResponseQueue<T, U> { private final Queue<Function<T, U>> responses = new ConcurrentLinkedQueue<>(); Duration sleep = Duration.ZERO; private Function<T, U> defaultResponse; // (Fluent) interface for response producers, accessible from tests. public ResponseQueue<T, U> thenAnswer(Function<T, U> mapFun) { responses.add(mapFun); return this; } public ResponseQueue<T, U> thenReturn(U response) { return thenAnswer((request) -> response); } public ResponseQueue<T, U> answerByDefault(Function<T, U> mapFun) { defaultResponse = mapFun; return this; } public ResponseQueue<T, U> returnByDefault(U response) { return answerByDefault((request) -> response); } public ResponseQueue<T, U> delayEachResponseBy(Duration sleep) { this.sleep = sleep; return this; } // Interface for response consumers, accessible from the enclosing class. private U getOrDefault(T request) { Function<T, U> mapFun = responses.poll(); U response = mapFun == null ? defaultResponse.apply(request) : mapFun.apply(request); Uninterruptibles.sleepUninterruptibly(sleep.getMillis(), TimeUnit.MILLISECONDS); return response; } private U get(T request) { Function<T, U> mapFun = responses.poll(); U response = mapFun == null ? null : mapFun.apply(request); Uninterruptibles.sleepUninterruptibly(sleep.getMillis(), TimeUnit.MILLISECONDS); return response; } private boolean isEmpty() { return responses.isEmpty(); } } } ```
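A hypothetical test fragment showing the fluent `ResponseQueue` setup this fake is built around; the `collector` rule and `makeWorkResponse()` helper are illustrative stand-ins, not part of the class above:

```java
FakeWindmillServer server =
    new FakeWindmillServer(collector, computationId -> Optional.empty());

// Queue one canned GetWork response, then fall back to an empty default.
server
    .whenGetWorkCalled()
    .thenReturn(makeWorkResponse())
    .returnByDefault(Windmill.GetWorkResponse.getDefaultInstance());

// Simulate a slow state-fetch path.
server.whenGetDataCalled().delayEachResponseBy(Duration.millis(100));

// Drive the worker under test, then block until two commits arrive.
Map<Long, Windmill.WorkItemCommitRequest> commits = server.waitForAndGetCommits(2);
```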
```php
<?php
namespace Elementor\Core\Logger\Loggers;

use Elementor\Core\Logger\Items\Base as Log_Item;
use Elementor\Core\Logger\Items\Log_Item_Interface as Log_Item_Interface;

if ( ! defined( 'ABSPATH' ) ) {
	exit; // Exit if accessed directly.
}

abstract class Base implements Logger_Interface {

	abstract protected function save_log( Log_Item_Interface $item );

	/**
	 * @return Log_Item_Interface[]
	 */
	abstract public function get_log();

	public function log( $item, $type = self::LEVEL_INFO, $args = [] ) {
		if ( ! $item instanceof Log_Item ) {
			$item = $this->create_item( $item, $type, $args );
		}

		$this->save_log( $item );
	}

	public function info( $message, $args = [] ) {
		$this->log( $message, self::LEVEL_INFO, $args );
	}

	public function notice( $message, $args = [] ) {
		$this->log( $message, self::LEVEL_NOTICE, $args );
	}

	public function warning( $message, $args = [] ) {
		$this->log( $message, self::LEVEL_WARNING, $args );
	}

	public function error( $message, $args = [] ) {
		$this->log( $message, self::LEVEL_ERROR, $args );
	}

	/**
	 * @param string $message
	 * @param string $type
	 * @param array  $args
	 *
	 * @return Log_Item_Interface
	 */
	private function create_item( $message, $type, $args = [] ) {
		$args['message'] = $message;
		$args['type'] = $type;

		return new Log_Item( $args );
	}

	public function get_formatted_log_entries( $max_entries, $table = true ) {
		$entries = $this->get_log();

		if ( empty( $entries ) ) {
			return [
				'All' => [
					'total_count' => 0,
					'count' => 0,
					'entries' => '',
				],
			];
		}

		$sorted_entries = [];
		$open_tag = $table ? '<tr><td>' : '';
		$close_tag = $table ? '</td></tr>' : PHP_EOL;
		$format = $table ? 'html' : 'raw';

		foreach ( $entries as $entry ) {
			/** @var Log_Item $entry */
			$sorted_entries[ $entry->get_name() ][] = $open_tag . $entry->format( $format ) . $close_tag;
		}

		$formatted_entries = [];

		foreach ( $sorted_entries as $key => $sorted_entry ) {
			// Record the full count, then keep only the newest $max_entries items.
			$formatted_entries[ $key ]['total_count'] = count( $sorted_entry );
			$sorted_entry = array_slice( $sorted_entry, -$max_entries );
			$formatted_entries[ $key ]['count'] = count( $sorted_entry );
			$formatted_entries[ $key ]['entries'] = implode( $sorted_entry );
		}

		return $formatted_entries;
	}
}
```
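A minimal sketch of how the abstract logger might be exercised; the `In_Memory_Logger` subclass here is hypothetical and exists only to satisfy the two abstract methods (the `LEVEL_*` constants are assumed to come from `Logger_Interface`):

```php
<?php
// Hypothetical concrete logger that keeps items in memory.
class In_Memory_Logger extends Base {
	private $items = [];

	protected function save_log( Log_Item_Interface $item ) {
		$this->items[] = $item;
	}

	public function get_log() {
		return $this->items;
	}
}

$logger = new In_Memory_Logger();
$logger->warning( 'Template could not be loaded' );
$logger->error( 'Save failed', [ 'meta' => [ 'post_id' => 7 ] ] );

// false = plain-text entries separated by PHP_EOL rather than <tr><td> rows.
$report = $logger->get_formatted_log_entries( 20, false );
```

Note that `get_formatted_log_entries()` groups entries by `Log_Item::get_name()` and reports both the total count and the count remaining after slicing to the newest `$max_entries`.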
Goodson is an English surname. Notable people with the surname include: Adrienne Goodson (born 1966), American basketball player Alfred Goodson (1867–1940), British businessman and public servant Barbara Goodson (born 1949), American voice actress Clarence Goodson (born 1982), American soccer player Don Goodson (1932–2010), English cricketer Ed Goodson (born 1948), American baseball player Ida Goodson (1909–2000), American classic female blues and jazz singer and pianist Ivor Goodson (born 1943), British educationalist Jonathan Goodson (born 1945), American television producer Katharine Goodson (1872–1958), English pianist Len Goodson (1880–1922), English footballer Mark Goodson (1915–1992), American television producer Patricia Goodson, American concert pianist Tyler Goodson (born 2000), American football player William Goodsonn (1610–), English vice-admiral in the Royal Navy Fictional characters Joel Goodson, fictional character in the film Risky Business, played by Tom Cruise See also Charles Goodson-Wickes (born 1945), British politician Patricia Timmons-Goodson (born 1954), American judge English-language surnames
Quryug may refer to: Koruk Chutur Kuruk
```python
# coding=utf-8
# *** WARNING: this file was generated by test. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***

from ... import _utilities
import typing

# Export this package's modules as members:
from .instance import *
```
Carl Anthony Carter Sr. (March 7, 1964 – May 15, 2019) was a cornerback in the National Football League (NFL). He played for the St. Louis/Phoenix Cardinals, Cincinnati Bengals, Tampa Bay Buccaneers, and Green Bay Packers. He was drafted in the fourth round of the 1986 NFL Draft by the Cardinals. Collegiately, he played for the Texas Tech Red Raiders. According to the Star-Telegram, Carter, who played seven seasons in the NFL, died on May 15, 2019, at the age of 55. He had come to Texas Tech in the fall of 1982 and played in 33 games as a Red Raider. Notes 1964 births 2019 deaths American football cornerbacks Texas Tech Red Raiders football players St. Louis Cardinals (football) players Phoenix Cardinals players Cincinnati Bengals players Tampa Bay Buccaneers players Green Bay Packers players
Scream Silence is a German gothic rock/alternative rock band that was founded in 1998 in Berlin. History Their first album, "To Die For", was music magazine Orkus's album of the month. With the success of the first album, Scream Silence was able to tour with bands like Christian Death and Dreadful Shadows. Two years later its successor, "The 2nd", followed. On the subsequent tour they were the headlining band, and they also played at the Wave-Gotik-Treffen. Without much promotion they released "Seven Tears" two years later and received excellent reviews from music magazines. In spring 2004 they founded their own record label, Plainsong Records, and immediately began to work on their fourth album, "Elegy", which was released on October 25, 2004. Yuki Melchert (violin) and Anika Skusa (cello) made guest appearances. On January 30, 2006 its successor, "Saviourine", was released in Germany and in almost all of Central and Eastern Europe. On April 20, 2007 the sixth album, "Aphelia", was released. Their most recent album, "Heartburnt", was released in 2015. Members Current Line-Up Hardy Fieting (Vocals) Robert Klausch (Guitar) René Gödde (Guitar) Hagen Schneevoigt (Bass) Heiko Wolf (Drums) Past members Cornel Otto (Bass) Rene Schulze (Bass) Joerg Rennewald (Guitar) Guest Members Yuki Melchert (violin) Anika Skusa (cello) Discography Albums To Die For (1999) The 2nd (2001) Seven Tears (2003) Elegy (2004) Saviourine (2006) Aphelia (2007) Apathology (2008) Scream Silence (2012) Heartburnt (2015) Singles "The Sparrows and The Nightingales" (2000) "Forgotten Days" (2001) "Curious Changes" (2004) "Creed" (2005) "Dreamer's Court" (2012) "Art Remains" (2015) References External links Official Site Official MySpace Scream Silence.net - The Scream Silence Community Plainsong Records Profile Official Facebook Profile German rock music groups Musical groups established in 1998 German gothic rock groups
```javascript
/*!
 * align-text <path_to_url>
 */

'use strict';

var typeOf = require('kind-of');
var repeat = require('repeat-string');
var longest = require('longest');

module.exports = function alignText(val, fn) {
  var lines, type = typeOf(val);

  if (type === 'array') {
    lines = val;
  } else if (type === 'string') {
    lines = val.split(/(?:\r\n|\n)/);
  } else {
    throw new TypeError('align-text expects a string or array.');
  }

  var fnType = typeOf(fn);
  var len = lines.length;
  var max = longest(lines);
  var res = [], i = 0;

  while (len--) {
    var line = String(lines[i++]);
    var diff;

    if (fnType === 'function') {
      diff = fn(line.length, max.length, line, lines, i);
    } else if (fnType === 'number') {
      diff = fn;
    } else {
      diff = max.length - line.length;
    }

    if (typeOf(diff) === 'number') {
      res.push(repeat(' ', diff) + line);
    } else if (typeOf(diff) === 'object') {
      var result = repeat(diff.character || ' ', diff.indent || 0);
      res.push((diff.prefix || '') + result + line);
    }
  }

  if (type === 'array') return res;
  return res.join('\n');
};
```
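A usage sketch based on the exported signature above (`fn(len, longest, line, lines, i)`); the examples mirror the three accepted shapes of `fn`:

```javascript
var align = require('align-text');

// 1. Callback returning a number: center each line against the longest one.
align('foo\nbarbaz', function (len, longest) {
  return Math.floor((longest - len) / 2);
});
//=> ' foo\nbarbaz'

// 2. Plain number: indent every line by that many spaces.
align('a\nb', 2);
//=> '  a\n  b'

// 3. Callback returning an object: choose fill character, indent, and prefix.
align('a\nb', function () {
  return { character: '\t', indent: 2, prefix: '~ ' };
});
```

With no `fn` at all, each line is padded to the length of the longest line, i.e. right-aligned.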
Lawtons is a hamlet in the town of North Collins. It is located in southern Erie County, New York, United States. References Hamlets in New York (state) Hamlets in Erie County, New York
```python
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
# pylint: disable=invalid-name, unused-argument
"""CoreML codegen supported operators."""
import tvm.ir
from tvm.contrib.target.coreml import _convert_map

from ...expr import Constant


def _register_coreml_op(op_name):
    """Register a function to check whether the given operator is supported by Core ML.

    Parameters
    ----------
    op_name : str
        The name of the operator that will be registered.
    """

    def _check_supported(expr):
        attrs, args = expr.attrs, expr.args
        if op_name == "nn.conv2d":
            if not isinstance(args[1], Constant):
                return False
            if attrs["kernel_layout"] not in ["HWIO", "OIHW"]:
                return False
        return True

    tvm.ir.register_op_attr(op_name, "target.coremlcompiler", _check_supported)


for op in _convert_map:
    _register_coreml_op(op)
```
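For context, these registrations feed TVM's bring-your-own-codegen (BYOC) flow. A usage sketch, assuming a Relay module `mod`; the helper name `partition_for_coreml` is invented for illustration:

```python
import tvm
from tvm.relay import transform


def partition_for_coreml(mod):
    """Annotate and partition a Relay module for the CoreML codegen."""
    seq = tvm.transform.Sequential(
        [
            # Consults each op's registered "target.coremlcompiler" check.
            transform.AnnotateTarget("coremlcompiler"),
            # Groups adjacent supported ops into single regions.
            transform.MergeCompilerRegions(),
            # Splits annotated regions out into separate functions.
            transform.PartitionGraph(),
        ]
    )
    return seq(mod)
```

Under this flow, an `nn.conv2d` with a non-constant weight or an unexpected kernel layout fails `_check_supported` and stays on the default target.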
Guido Andreozzi and Guillermo Durán were the defending champions but chose not to defend their title. Fernando Romboli and Marcelo Zormann won the title after defeating Román Andrés Burruchaga and Orlando Luz 6–7(13–15), 6–4, [10–5] in the final. Seeds Draw References External links Main draw Internazionali di Tennis Città di Todi - Doubles 2023 Doubles
```javascript
// adds .remark-code-has-line-highlighted class to <pre> parent elements
// of code chunks containing highlighted lines with class .remark-code-line-highlighted
(function(d) {
  const hlines = d.querySelectorAll('.remark-code-line-highlighted');
  const preParents = [];
  const findPreParent = function(line, p = 0) {
    if (p > 1) return null; // traverse up no further than grandparent
    const el = line.parentElement;
    return el.tagName === "PRE" ? el : findPreParent(el, ++p);
  };

  for (let line of hlines) {
    let pre = findPreParent(line);
    if (pre && !preParents.includes(pre)) preParents.push(pre);
  }
  preParents.forEach(p => p.classList.add("remark-code-has-line-highlighted"));
})(document);
```
UTEC (University of Toronto Electronic Computer Mark I) was a computer built at the University of Toronto (UofT) in the early 1950s. It was the first computer in Canada and one of the first working computers in the world, although it was only ever built in prototype form while awaiting funding for expansion into a full-scale version. This funding was eventually used to purchase a surplus Manchester Mark 1 from Ferranti in the UK instead, and UTEC quickly disappeared. Background Immediately after the end of World War II, several members of the UofT staff met informally as the Committee on Computing Machines to discuss their computation needs over the next few years. In 1946 a small $1,000 grant was used to send one of the group's members on a tour of several US research labs to see their progress on computers and to gauge what was possible given UofT's likely funding. Due to UofT's preeminent position in the Canadian research world, the tour was also followed by members of the Canadian Research Council. In January 1947 the committee delivered a report suggesting the creation of a formal Computing Center, primarily as a service bureau to provide computing services both to the university and to commercial interests, as well as the nucleus of a research group into computing machinery. Specifically, they recommended the immediate rental of an IBM mechanical punched card-based calculator, the building of a simple differential analyzer, and the eventual purchase or construction of an electronic computer. The report noted that funding should be expected from both the National Research Council (NRC) and the Defense Research Board (DRB). The DRB soon provided a grant of $6,500 to set up the Computation Center, with the Committee eventually selecting Kelly Gotlieb to run it. Additional funding followed in February 1948 with a $20,000-a-year grant from a combined pool set up by the DRB and NRC. Although this was less than had been hoped for, the IBM machinery was soon in place and being used to calculate several tables for Atomic Energy of Canada Limited (AECL). Additionally, a small version of the differential analyzer was completed by September 1948, although it appears to have seen little use. Preliminary work on an electronic computer also started about the same time, with some experimental work on various circuit designs. However, the group felt that, in order to get a machine working quickly, a fully electronic design was simply too state-of-the-art and carried significant risk. Instead they considered building a copy of Bell Labs' Model 6 relay-based machine, which they had seen earlier. However, when they finally decided to go ahead with the project in August 1948, Northern Electric (Bell's arm in Canada) informed them it would charge $25,000 to license the Model 6 design. At a meeting with the NRC in March 1949, the NRC turned down their request for additional funding for the license, and instead suggested that the Center invest in a fully electronic computer, upping the yearly grants to $50,000 to that end. This turned out to be a major "win": relay-based computers quickly disappeared, and electronic systems rapidly proved themselves. UTEC Beatrice Helen Worsley and Perham Stanley, two graduate students working at the Computation Center, were sent to Cambridge University to work with Maurice Wilkes, who was in the process of completing the EDSAC. Worsley wrote the program that generated a table of squares, the first program to successfully run on EDSAC. 
Another two graduate students, Alf Ratz and Josef Kates, had been studying circuitry for some time by this point, and turned their attention to computer memory systems. Their first attempts were with a novel system based on neon tubes, but a 1949 visit by Freddie Williams led them to abandon this work and move to Williams tubes instead. Given the level of funding available, a full-scale machine was not possible, so it was decided to build a smaller machine to test out the various components. Williams tubes would store 256 12-bit words, with instructions using 3 bits of a word, leaving 9 bits for addressing (allowing up to 512 words of memory). Parts of the machine were up and running quickly, with the math and logic units (the arithmetic logic unit in modern terminology) running by the autumn of 1950. Memory reliability proved to be a serious problem, as it was for all systems using the Williams tube concept, but Kates introduced shielding that improved things somewhat. The machine was declared fully operational on October 1, 1951. Over the next few months major efforts were made to increase reliability, as well as to add a second bank of memory to bring it to the full 512 words. Libraries added math functions for 12-, 24-, 36- and 48-bit arithmetic. A basic 12-bit addition took about 240 microseconds, a multiplication about 18 milliseconds. With the basic system up and running, attention turned to a "full sized" version. This machine would use a 44-bit word with 1,024 words of memory, backed up with a 10,000-word magnetic drum to be supplied by Ferranti Canada. A new math unit would operate on an entire word in parallel, instead of bit-serially as in most machines of the era, dramatically improving performance: an addition would take only 20 microseconds and a multiplication about 200, making it faster at addition than the prototype despite the much larger word size. The success of UTEC created intense demand within the Canadian research establishment to start construction of the full-scale follow-on. The funding pool was increased to $300,000 to cover development and construction. FERUT While UTEC was being built, a similar machine was under construction at Manchester University, known as the "Baby". Once it started working, the university signed an agreement with Ferranti (in the UK) to build a full-scale machine eventually known as the Mark I. The new machine was delivered to the university in February 1951, making it the first commercial computer, about one month before the UNIVAC I was handed over to the US Census Bureau. Ferranti had high hopes for further sales of the machine, and was pleased when an order was placed by the British Atomic Energy Authority for delivery in the autumn of 1952. However, the government changed hands while the machine was being built, and all government contracts over £100,000 were cancelled outright. This left a partially completed Mark I sitting at Ferranti, which became interested in unloading it as soon as possible. Word of the machine quickly reached the AECL, who suggested using the $300,000 set aside for the "new" UTEC to purchase the Mark I instead. The Computation Center considered the Mark I to be inferior to their own design and rejected it, notably because it used a serial math unit like their prototype and would thus be much slower. 
The AECL was not terribly impressed but came up with a solution: if the Computation Center would buy the Mark I, another $150,000 would be made available to continue development of UTEC, with an equal amount to follow if they decided to actually build it. This was the sort of deal one does not refuse, and plans to ship the Mark I to Toronto were soon underway. The machine arrived on April 30, 1952, and at the time it was major news. Named Ferut (Ferranti, University of Toronto) by Worsley shortly before it arrived, the machine took the Ferranti engineers several months to set up. Even then, it was one of the first "large" machines to start operation in North America. Ferut went on to be a major research system in Canada, being used by Ontario Hydro to calculate changes in water levels due to the opening of the St. Lawrence Seaway, for development of the groundbreaking ReserVec system with Ferranti Canada for Trans Canada Airlines, and even for the rental of time for commercial seismic data processing. The arrival of Ferut also spelled the death of the UTEC project. Even with the additional funding, most of the engineers quickly drifted to the Ferut machine. References UTEC and Ferut: the University of Toronto's computation centre FERUT.ca One-of-a-kind computers Atomic Energy of Canada Limited University of Toronto Vacuum tube computers
```javascript
import superPropBase from "./superPropBase.js";
import defineProperty from "./defineProperty.js";

function set(target, property, value, receiver) {
  if (typeof Reflect !== "undefined" && Reflect.set) {
    set = Reflect.set;
  } else {
    set = function set(target, property, value, receiver) {
      var base = superPropBase(target, property);
      var desc;

      if (base) {
        desc = Object.getOwnPropertyDescriptor(base, property);

        if (desc.set) {
          desc.set.call(receiver, value);
          return true;
        } else if (!desc.writable) {
          return false;
        }
      }

      desc = Object.getOwnPropertyDescriptor(receiver, property);

      if (desc) {
        if (!desc.writable) {
          return false;
        }

        desc.value = value;
        Object.defineProperty(receiver, property, desc);
      } else {
        defineProperty(receiver, property, value);
      }

      return true;
    };
  }

  return set(target, property, value, receiver);
}

export default function _set(target, property, value, receiver, isStrict) {
  var s = set(target, property, value, receiver || target);

  if (!s && isStrict) {
    throw new TypeError('failed to set property');
  }

  return value;
}
```
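A usage sketch showing roughly what this helper is compiled from: Babel emits a call like the one below for `super.x = v` inside a subclass method. The class names here are invented, and the import path assumes the helper file is named `set.js`:

```javascript
import _set from "./set.js";

class Base {
  set x(v) { this._x = v; }
}

class Child extends Base {
  setX(v) {
    // Roughly what `super.x = v` compiles to (isStrict = true in strict mode):
    _set(Object.getPrototypeOf(Child.prototype), "x", v, this, true);
  }
}

const c = new Child();
c.setX(42); // Base's setter runs with `this === c`, so c._x === 42
```

The `receiver || target` fallback means the receiver defaults to the lookup target itself, and the `isStrict` flag reproduces strict mode's TypeError when an assignment silently fails.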
Nihal Chand Chauhan, also known as Nihal Chand Meghwal (born 4 February 1971), is an Indian politician belonging to the Bharatiya Janata Party. Early life Nihal Chand was born to Bega Ram Chauhan, a two-time M.P. from Ganganagar, and Surji Devi, into the Meghwal community. He completed a B.A. from Shri Nehru S.P Evening College at Bikaner, Rajasthan. He married Jyoti Chauhan in 1992. Political career In 1995, at the age of 24, Nihal Chand was elected as the "Panchayat Director" of Nanuwala, Sardarpura Bika and Bagicha. He was also elected as the pradhan (chief) of the Raisinghnagar panchayat committee. In 1996, he became the youngest Member of Parliament (MP) from Rajasthan at the age of 25. He was elected to the 11th Lok Sabha (lower house of the Parliament of India) on a BJP ticket from Ganganagar. In the next general election in 1998, he was defeated by Shankar Pannu of Congress. After his defeat in the general election, Nihal Chand contested the Rajasthan Legislative Assembly elections. He was declared as BJP's candidate from Raisinghnagar. However, just before the elections, BJP formed an alliance with Haryana Rastriya Lokdal (HRLD), and gave that seat to HRLD. Nihal Chand was asked to withdraw his candidature, but he refused to do so. As a result, BJP expelled him. Contesting on the BJP election symbol, Nihal Chand won the seat and became an MLA from Raisinghnagar. Subsequently, Nihal Chand won the 1999 and 2004 general elections from Ganganagar as a BJP candidate. In 2008, he lost the Assembly elections to Daulat Raj of Congress from Raisinghnagar. In 2009, he lost the Indian general elections to Bharat Ram Meghwal of Congress. In 2014, he defeated Bhanwarlal Meghwal of Congress on the same seat. He served as a Minister of State (MoS) in the Cabinet of Prime Minister Narendra Modi from May 2014 to July 2016. Controversy In 2011, Nihal Chand's name appeared as one of the 17 accused in a police FIR. The complainant, a woman from Sirsa, Haryana, alleged that her husband Om Prakash Godara had drugged her and then let his associates rape her in Jaipur. After a year of investigation, the police closed the case in 2012, calling the charges false and fabricated. The woman approached the trial court, which accepted the police report and dismissed the protest petition filed by her. The woman then approached the district court, which also dismissed the charges. In 2014, a few days after Nihal Chand was made a minister, the complainant sought revision, following which the district court issued notices to Nihal Chand and 16 others, asking them to respond to the court. This caused a controversy, with the opposition Congress party demanding Nihal Chand's resignation. BJP refused to oblige, pointing out that Nihal Chand had been given a "clean chit" in the case when Congress was in power in Rajasthan. References 1971 births Living people Bharatiya Janata Party politicians from Rajasthan India MPs 1996–1997 India MPs 1999–2004 India MPs 2004–2009 India MPs 2014–2019 India MPs 2019–present People from Sri Ganganagar Lok Sabha members from Rajasthan Narendra Modi ministry
The High Bridge (originally the Aqueduct Bridge) is the oldest bridge in New York City, having originally opened as part of the Croton Aqueduct in 1848 and reopened as a pedestrian walkway in 2015 after being closed for over 45 years. A steel arch bridge with a height of over the Harlem River, it connects the New York City boroughs of the Bronx and Manhattan. The eastern end is located in the Highbridge section of the Bronx near the western end of West 170th Street, and the western end is located in Highbridge Park in Manhattan, roughly parallel to the end of West 174th Street. High Bridge was originally completed in 1848 with 16 individual stone arches. In 1928, the five that spanned the Harlem River were replaced by a single steel arch. The bridge was closed to all traffic from around 1970 until its restoration, which began in 2009. The bridge was reopened to pedestrians and bicycles on June 9, 2015. The bridge is operated and maintained by the New York City Department of Parks and Recreation. Construction and history Construction Originally designed as a stone arch bridge, the High Bridge had the appearance of a Roman aqueduct. Construction on the bridge started in 1837 and was completed in 1848 as part of the Croton Aqueduct, which carried water from the Croton River to supply the then burgeoning city of New York some to the south. The bridge has a height of above the Harlem River, with a total length of . The design of the bridge was originally awarded to Major David Bates Douglass, who was fired from the project. The design then fell to the aqueduct's engineering team, led by John B. Jervis. James Renwick, Jr., who later went on to design the landmark Saint Patrick's Cathedral on Fifth Avenue in Midtown Manhattan, participated in the design. The Croton Aqueduct had to cross the Harlem River at some point, and the method was a major design decision. A tunnel under the river was considered, but tunneling technology was in its infancy at the time, and the uncertainty of pursuing this option led to its rejection. This left a bridge, with the Water Commission, engineers and the public split between a low bridge and a high bridge. A low bridge would have been simpler, faster, and cheaper to construct. When concerns were raised to the New York Legislature that a low bridge would obstruct passage along the Harlem River to the Hudson River, a high bridge was ultimately chosen. The contractors for the project were George Law, Samuel Roberts and Arnold Mason. Mason had prior engineering experience working on the Erie Canal and the Morris Canal. Usage In 1864, a walkway was built across the High Bridge. The New York City Department of Parks and Recreation (NYC Parks), the bridge's current maintainer, described the walkway as its era's version of the High Line. The bridge, however, was never used for vehicles. In 1928, to improve navigation in the Harlem River, the five masonry arches that spanned the river were demolished and replaced with a single steel arch of about . Of the masonry arches of the original 1848 bridge, only one survives on the Manhattan side, while nine survive on the Bronx side. Use of the structure to deliver water to Manhattan ceased on December 15, 1949. By 1954, The New York Times reported that the commissioner of the Department of Water Supply, Gas and Electricity said that "the bridge entailed serious problems of maintenance and vandalism." Robert Moses agreed to take responsibility for the bridge, which was transferred to the Parks Department in 1955. 
There were incidents, in 1957 and 1958, of pedestrians throwing sticks, stones, and bricks from the bridge, seriously injuring passengers on Circle Line tour boats which passed under the bridge. Concerns due to these incidents supposedly contributed to the bridge being closed as early as 1960, although NYC Parks said that it was not closed until 1970, when high crime and the city's fiscal crisis led to the contraction of many city services and public spaces. However, a reporter for The New York Times wrote that when he had tried to walk across the bridge in 1968, it was closed. The New York City Landmarks Preservation Commission (LPC) designated the High Bridge as a city landmark on November 15, 1970. Aqueduct The High Bridge was part of the first reliable and plentiful water supply system in New York City. As the City was devastated by cholera in 1832 and the Great Fire in 1835, the inadequacy of the water system of wells-and-cisterns became apparent. Numerous corrective measures were examined. In the final analysis, only the Croton River in northern Westchester County was found to carry water sufficient in quantity and quality to serve the City. The delivery system was begun in 1837, and was completed in 1848. The Old Croton Aqueduct was the first of its kind ever constructed in the United States. The innovative system used a classic gravity feed, dropping about 13 inches per mile, or about 1/4" per 100' (~0.02%), and running into New York City through an enclosed masonry structure crossing ridges, valleys, and rivers. University Avenue was later built over the southernmost mainland portion of the aqueduct, leading to the bridge. Though the carrying capacity was enlarged in 1861–1862 with a larger tube, the bridge became obsolete in 1910 with the opening of the New Croton Aqueduct. In the 1920s, the bridge's masonry arches were declared a hazard to ship navigation by the United States Army Corps of Engineers, and the City considered demolishing the entire structure. Local organizations called for the preservation of the historic bridge, and, in 1927, five of the original arches across the river were replaced by a single steel span, while the remaining arches were retained. Restoration In November 2006, the Department of Parks and Recreation announced that the bridge would reopen to pedestrians in 2009. This date was repeatedly put off. A $20 million renovation project would include strengthening the arch, improving staircases, cameras on both ends of the bridge, and boat beacon lights, among other features. In 2009, preliminary planning, funded by PlaNYC, began for restoring the High Bridge. The High Bridge Coalition raised funds and public awareness to restore High Bridge to pedestrian and bicycle traffic, joining the Highbridge Parks in both Manhattan and the Bronx that together cover more than of parkland, and providing a link in New York's greenway system. In early 2010, a contract was signed with Lichtenstein Consulting Engineers and Chu & Gassman Consulting Engineers (MEP sub-consultant) to provide designs for the restored bridge. On January 11, 2013, the mayor's office announced the bridge would reopen for pedestrian traffic by 2014, but in August 2014, the opening was postponed to the spring of 2015. In May 2015, the Parks Department announced a July opening and a July 25 festival. The ribbon was cut for the restored bridge at about 11:30 a.m. on June 9, 2015, with the bridge open to the general public at noon. The High Bridge was illuminated at night following its restoration. 
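As a quick arithmetic check of the gradient quoted above (1/4 inch of drop per 100 feet):

$$\frac{0.25\ \text{in}}{100\ \text{ft}} = \frac{0.25\ \text{in}}{1200\ \text{in}} \approx 0.02\%, \qquad 0.25\ \text{in} \times \frac{5280}{100} = 13.2\ \text{in per mile}$$

which matches the roughly 13 inches per mile given in the text.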
High Bridge Water Tower The High Bridge Water Tower, in Highbridge Park between West 173rd and 174th streets, on top of the ridge on the Manhattan side of High Bridge, was built in 1866–1872 to help meet the ever-increasing demands on the city's water system. The octagonal tower, which was authorized by the State Legislature in 1863, was designed by John B. Jervis, the engineer who supervised the building of the High Bridge Aqueduct. Water was pumped up to a reservoir next to the tower – now the site of a play center and public pool built in 1934–1936 – which then provided water to be lifted to the tower's tank. This "high service" improved the water system's gravity pressure, necessary because of the increased use of flush toilets. The High Bridge system reached its full capacity by 1875. With the opening of the New Croton Aqueduct in 1890, the High Bridge system became less relied upon; during World War I, it was completely shut down when sabotage was feared. In 1949, the tower was removed from service, and a carillon, donated by the Altman Foundation, was installed in 1958. The tower's cupola was damaged by an arson fire in 1984. It was reconstructed, and the tower's load-bearing exterior stonework – which Jervis designed in a mixture of Romanesque Revival and neo-Grec styles – was cleaned and restored in 1989–1990 by the William A. Hall Partnership. Christopher Gray has said of the tower's design that "Its rock-faced granite gives the tower a chunky, fortified appearance, as if it were a lookout for a much larger castle complex that was never built.... The granite is competently handled, but the details are not very inspired or elegant. The tower is more picturesque than beautiful." The interior of the tower, which was never open to the public, features a wide well-detailed iron spiral staircase with six large landings and paired windows. The High Bridge Water Tower was designated a New York City landmark by the Landmarks Preservation Commission in 1967. The High Bridge Water Tower underwent a 10-year, $5 million renovation during the 2010s and reopened to the public in November 2021. After the water tower reopened, NYC Parks began hosting free tours of the structure. 
Gallery See also Highbridge Reservoir List of bridges documented by the Historic American Engineering Record in New York (state) List of New York City Designated Landmarks in Manhattan above 110th Street List of New York City Designated Landmarks in the Bronx National Register of Historic Places listings in Manhattan above 110th Street National Register of Historic Places listings in the Bronx References Notes External links Friends of the Old Croton Aqueduct New York City Department of Parks: High Bridge High Bridge Park Development Association NYCRoads.com: High Bridge (Aqueduct Bridge) thehighbridge.org 2004 article about restoration plans High Bridge Documentary produced by The City Concealed bridgesnyc: High Bridge Highbridge Historic Structure Report Parade of horses on Speedway (1900 silent film showing High Bridge in the background) 1848 establishments in New York (state) Brick bridges in the United States Bridges completed in 1848 Bridges in Manhattan Bridges in the Bronx Bridges on the National Register of Historic Places in New York (state) Bridges on the National Register of Historic Places in New York City Bridges on the National Register of Historic Places Bridges over the Harlem River Buildings and structures on the National Register of Historic Places in Manhattan Deck arch bridges in the United States Highbridge, Bronx Historic American Engineering Record in New York City National Register of Historic Places in the Bronx New York City Designated Landmarks in Manhattan New York City Designated Landmarks in the Bronx Open-spandrel deck arch bridges in the United States Pedestrian bridges in New York City Steel bridges in the United States Washington Heights, Manhattan Bridge light displays
The Żejtun Roman villa is an archaeological complex in the city of Żejtun, in south-eastern Malta. The open-air remains contain areas of original Roman tiling and coloured stucco. The complex was an active settlement from the Bronze Age onward, although the presently visible remains date mainly from the Punic period right up to Late Antiquity. The site was discovered in 1961, and the complex has been the subject of two large-scale archaeological investigations, the first of which was carried out in the 1970s. More evidence of ancient habitation in the area comes from burial grounds, such as those around St Gregory's Church, Tal-Barrani, Tal-Ħotba and Bulebel. The excavation site at the villa confirms the presence of a thriving olive oil industry on the southern end of the islands. The site is located in the grounds of the St Thomas More Secondary School in Żejtun. Topography The remains rest on the highest point of a long, flat ridge stretching in an east-west direction. The villa can be found close to the eastern end of the ridge. Beyond the secondary school grounds on the east side, the ridge dips significantly towards Tas-Silġ and Delimara, along the main road leading to these destinations. The ridge dips gently to the north and south, beyond the main road. The road maintains more or less the same altitude to the west until Bir id-Deheb, with the ground rising again towards Gudja and the parish church of Ħal Għaxaq. The remains, therefore, are a couple of metres higher than the old parish church of Saint Catherine's (the present St Gregory's church) and considerably higher than the present Żejtun parish church. Discovery and excavation Although local people had long known of the existence of ancient remains in the vicinity, it was not until 1964 that archaeologists systematically excavated the site, which had been accidentally uncovered by workmen when a school was being built. Various Punic and Roman tombs were discovered in the area around Żejtun, the most interesting being the burial complex at Tal-Barrani, with evidence of substantial restructuring in the 7th century. On 8 April 1963 a tomb was discovered and investigated by the local authorities "in the field immediately to the east of the new village school." It contained both Punic and Roman material, including a "third century lamp, a glass unguentarium…and a large peg based amphora split to serve as a child's sarcophagus". The villa continued to be used as an olive oil-producing establishment until the end of the third century AD. No signs of the ancient remains were apparent in the fields before 1961, when "traces of masonry and some pottery came to light" during the building of a new school for the village. The authorities' investigations deemed the remains to be "slight", with no further action being taken. The discovery of more remains was reported again in 1964, with them being identified as a "Punico-Roman building". Three main features were surveyed, namely a large water cistern, a line of stone water channels, and a foundation wall which was separated from a nearby paved area by a trench. The Roman villa was excavated again between 1972 and 1976, as well as from 2006 onwards. A considerable number of Roman remains are found in the south-eastern part of Malta, such as Tas-Silġ and the Ta' Kaċċatura Roman villa. During the initial full-scale excavations in 1972, the work focused on the area containing the olive oil pressing remains. Flat floor slabs were also exposed. 
Archaeologists also found the remains of a number of rectangular rooms paved with lozenge-shaped tiles, in an area identified with residential habitation. The villa was a domestic country settlement with a residential area and an industrial area for olive pressing. The latter is confirmed by a large parallelepiped block, with various holes and channels, anchor blocks, and a square block hollowed out to form a circular liquid container. The residential area consisted of at least three rectangular rooms, one of which could be described as a long hall. All the rooms were paved with lozenge-shaped tiles, with coloured tiles forming a herring-bone pattern. These are different from the patterns discovered in the other villas in Malta and Gozo. The walls of the villa were also plastered and decorated with simple line paintings in red, yellow and green, traces of which survived. A hoard of 43 bronze Roman coins, dating mostly to the third century AD, was recovered during the excavations, as was a small stone oil press. The villa experienced a period of extensive renovation works during the Roman period. Bronze Age occupation was indicated by two rock-cut silos containing sherds of the Borġ in-Nadur Phase cut in the very soft bedrock. Evidence of activity in Punic times is suggested by finds of early ceramic fragments like the fragment of an imitation kylix and a black gloss sherd, possibly Attic. The most intensive use of the site was in Roman imperial times, as confirmed by the presence of several terra sigillata fragments, both Italian and North African. The most important finds from the 1976 excavation were two fragments of a cooking pot, one bearing an inscription in Punic characters which was read as a dedication to the goddess Astarte, to Anat, or to both. A large number of pottery sherds were also discovered, including half a flat red plate and more hand-made pottery. Vine trenches were also found below the Żejtun villa, indicating that the trenches pre-dated the Roman era. No permanent protection was ever erected to preserve the remains on site, bar a boundary wall which separates the villa from the school and residential roads. Archaeological investigations by the Department of Classics and Archaeology, University of Malta were resumed in 2006. Investigations have been carried out in Germany on artifacts found at Żejtun to establish if there was oil production, but the results have been inconclusive. A considerable amount of damage has occurred to parts of the villa unearthed in the 1970s. The plaster was left out in the open and is flaking and falling off, while the paint is fading. Most of the Roman remains unearthed in the 1970s are in a very fragile state. Notes Bibliography Further reading 1961 archaeological discoveries Ancient Roman buildings and structures in Malta Buildings and structures completed in the 1st century Żejtun Astarte Anat
Codelfa Construction Pty Ltd v State Rail Authority of New South Wales ("Codelfa") is a widely cited Australian contract law case, which serves as authority for the modern approach to contractual construction. The case arose out of the construction of the Eastern Suburbs railway line. In terms of contract law, the case addresses questions of frustration, construction and the parol evidence rule. The case diverged from the well-established English approach regarding the use of extrinsic evidence in contractual interpretation. Background The State Rail Authority engaged Codelfa Construction under a contract for services to excavate tunnels in the Eastern Suburbs, allowing for the development of the Eastern Suburbs railway line. The works were to include "the excavation of two single track tunnels commencing at Edgecliff and running through Woollahra to Bondi Junction, an open cut excavation at the site of the Woollahra station, and an underground excavation at the site of the Bondi Junction station." The State Rail Authority issued Codelfa Construction with a notice to proceed on 7 March 1972. From this date, Codelfa was bound to complete all works within 130 weeks. On the basis of legal advice, the contracting parties were led to believe that the work would be exempt from injunction, as it was authorised by s 11 of the City and Suburban Electric Railways (Amendment) Act 1967 (NSW), supposedly providing crown immunity. In 1972 Codelfa Construction commenced the work in three shifts each day, seven days a week. However, the noise generated by their underground drilling led several local residents and the Council to apply for an injunction. On 28 June 1972, the Supreme Court of New South Wales granted an injunction, significantly restricting the work that could be performed after 10 pm and on Sundays. Codelfa Construction incurred additional costs to complete the required work within the agreed-upon timeframe. Procedural history Pursuant to an arbitration clause within the contract, the parties started arbitration proceedings in 1976 to establish whether Codelfa Construction could recover the additional costs by reason of an implied term or, alternatively, if the contract was frustrated, to recover the reasonable value of the services provided (quantum meruit). As the arbitration proceedings had no jurisdiction with regard to frustration of the contract, they dealt principally with the issue of an implied term in the contract. The arbitrator found that a term could be implied into the contract to the effect that the deadline could be extended if workable hours varied. Both parties issued summons in the Supreme Court of NSW to reach a determination on a number of questions raised in the proceedings. Codelfa Construction alleged that the contract had been frustrated and further alleged that an implied provision of the contract, to pay a reasonable sum for work performed, had not been met. The State Rail Authority's allegations were to the effect that Codelfa Construction was bound to complete the works. Following the arbitrator's decision, a case was commenced in the Supreme Court of New South Wales. In his judgment Justice Ash found that the contract had not been frustrated; instead he extended the implied term found by the arbitrator to also account for the understanding that works could not continue where an injunction was granted. 
On appeal, Justices Reynolds, Glass and Samuels of the Court of Appeal varied Justice Ash's implied term but reached the same conclusion: an implied term could be found in the contract, and the contract was not frustrated. Codelfa Construction then appealed to the High Court, challenging the finding regarding the action in frustration. The State Rail Authority cross-appealed on a number of grounds, centrally challenging the court's finding that a term could be implied into the contract. The High Court decision Construction According to the parol evidence rule, it can be said that where a contract is wholly in writing "verbal evidence is not allowed to be given of what passed between the parties, either before the written document was made, or during the time that it was in a state of preparation, so as to add to or subtract from, or in any manner to vary or qualify the written contract." In order to ascertain whether the contract is wholly or partly in writing, the court will consider the oral statements which the parties claim form part of the final contract. On this point the law is uniform in Australia and the United Kingdom. The rationale behind contractual construction, as explained by J.W. Carter, is not to infer the subjective intentions of the parties or give meaning to a term of a contract consistent with those subjective understandings. Instead, the goal is to give meaning to the contract that is consistent with what a reasonable person in the position of the contracting party would have understood the term to mean. In implementing this principle, British and Australian courts have diverged in their allowance of extrinsic evidence which is said to form part of the "surrounding circumstances" of a contract when determining the meaning and effect of contractual terms. In English law, courts may consider the "matrix of fact" surrounding the formation of the contract. The "matrix of fact" extends to the words and conduct of the contractual parties, common industry knowledge and any other factor which may have affected the reasonable person's understanding of the language of the contract. The Court will interpret the meaning of the contract in light of these circumstances. In Australian law, however, the High Court deviated from the English rule of contractual construction and instead held that Australian courts should follow the 'true rule' of contractual construction. The 'true rule' Justice Mason held that: Under this rule, extrinsic evidence of the surrounding circumstances and commercial objectives of a contract may only be referred to where the Court has established that a term of a contract is ambiguous. However, Justice Mason did not define the kind of ambiguity required to meet the requirements of the 'true rule'. Implied term The court considered whether a term could be implied into the contract allowing for a reasonable extension of time to complete the works given the delays caused by the injunction. The High Court rejected that a term could be implied, holding that it was impossible to formulate a term with appropriate clarity and precision. Further, even if it could be established that such a term would be necessary to give business efficacy, it could not be held "so obvious that it went without saying" that the contracting parties intended for such a term to form part of their contractual relationship. Frustration However, Codelfa Construction succeeded on the second ground of appeal, with the majority finding that the contract was frustrated. 
In coming to this determination, the court followed the definition of frustration laid out in Davis Contractors Ltd v Fareham Urban District Council. That is, "frustration occurs whenever the law recognises that without default of either party a contractual obligation has become incapable of being performed because the circumstances in which performance is called for would render it a thing radically different from that which was undertaken by the contract". Therefore, the critical issue which the court had to determine was whether the grant of the injunction rendered performance "radically different" from that which was contemplated at the time of contractual formation. On this point, Justice Aickin said: Consequences A number of decisions made by the High Court following Codelfa contradicted the 'true rule', including Maggbury Pty Ltd v Hafele Australia Pty Ltd, Pacific Carriers Ltd v BNP Paribas, and Toll (FGCT) Pty Ltd v Alphapharm Pty Ltd. Following this apparent shift in judicial opinion, numerous intermediate appellate courts and lower courts followed the principles established by Investors Compensation Scheme Ltd v West Bromwich Building Society. The 'true rule' was understood by many courts to have lapsed in favour of the English approach to contractual construction. However, in other High Court cases the 'true rule' was affirmed as the correct approach to contractual construction. In Royal Botanic Gardens and Domain Trust v South Sydney City Council, the court indicated that the decision remained good law in Australia. In that case, the High Court noted that ambiguity must first be established before referring to extrinsic evidence. The court held that the use of the term "may" introduced ambiguity into the contract, and could refer to an exhaustive or inexhaustive number of considerations. This has attracted criticism from many academics, who have found that the term "may" was not open to ambiguity. Further, they argue that this demonstrates the difficulty of applying the "true rule" and determining which contextual factors are truly extrinsic to the language of the contract. Furthermore, in ruling on an application for special leave for a case to be heard in the High Court in Western Export Services Inc v Jireh International Pty Ltd, the bench stated that Codelfa remained good law in Australia. Justices Gummow, Bell and Heydon noted that primary judges and intermediate appellate courts are 'bound to follow that precedent' until the High Court holds otherwise. As an application for special leave is a procedural motion rather than a substantive hearing, the statements of the bench did not establish a binding precedent. However, this application for special leave is notable for being published in the Australian Law Reports and representing the unambiguous judicial opinion of three justices of the High Court. Yet in Mount Bruce Mining Pty Ltd v Wright Prospecting Pty Ltd, Chief Justice French, along with Justices Nettle and Gordon, made clear that lower courts had been incorrect in identifying Western Export Services Inc v Jireh International Pty Ltd as an authoritative statement on the correct approach to contractual construction, as a procedural motion in itself is not binding in Australian law. 
Decisions such as Electricity Generation Corporation v Woodside Energy Ltd, and Mount Bruce Mining Pty Ltd v Wright Prospecting Pty Ltd, have involved the High Court applying the approach set out in Investors Compensation Scheme Ltd v West Bromwich Building Society, despite affirming the 'true rule' in Western Export Services Inc v Jireh International Pty Ltd. This suggests that Codelfa may no longer be good law in Australia. The New South Wales Supreme Court has taken the view that Codelfa no longer represents the view of the court and as such has moved towards accepting the English approach laid out in Investors Compensation Scheme Ltd v West Bromwich Building Society. As the authority of the Codelfa decision remains an unsettled point in Australian law, many issues have arisen in contractual construction in lower courts. One common practice used to circumvent this issue has been the use of recitals at the start of the contract, per Adventure Golf Systems Australia Pty Ltd v Belgravia Health & Leisure Group Pty Ltd. This allows the contract to be read in light of circumstances that both parties agreed at the time of formation were relevant to the interpretation of terms. Legal scholars have noted that this is a significant area of law, in which a binding decision in favour of Investors Compensation Scheme Ltd v West Bromwich Building Society, or in favour of Justice Mason's 'true rule', would have significant implications for contractual disputes. Lower courts At present, contractual construction in Australian law is not consistent and uniform between different states and territories, with lower courts and intermediate appellate courts adopting different positions in relation to Codelfa. The Supreme Court of NSW in Mainteck Services Pty Ltd v Stein Heurtey SA supported the conclusion that Investors Compensation Scheme Ltd v West Bromwich Building Society had been accepted in Australian law; therefore, ambiguity did not have to be pointed to before referring to 'surrounding circumstances'. This position was supported by the Full Court of the Federal Court of Australia in Stratton Finance Pty Ltd v Webb. However, the Western Australian Supreme Court has stated that Codelfa remains good law in Australia in Technomin Australia Pty Ltd v Xstrata Nickel Australia Operations. References High Court of Australia cases 1982 in Australian law 1982 in case law
The Architects of History is a Big Finish Productions audio drama based on the long-running British science fiction television series Doctor Who. It contains a four-part story. The story was broadcast on BBC Radio 4 Extra in four episodes on 30 May, 31 May, 1 June and 4 June 2012. Plot In 2044, the Selachians attack Earth's Moonbase, and the Galactic Reich is threatened. Cast The Doctor — Sylvester McCoy Klein — Tracey Childs Rachel Cooper — Lenora Crichlow Sam Kirke — Ian Hayles Major Richter — Jamie Parker Generalleutnant Tendexter — Lloyd McGuire Selachian Leader — Chris Porter Feldwebel/Computer Voice — Rachel Laurence Pilot/Selachian — David Dobson References External links The Architects of History 2010 audio plays Seventh Doctor audio plays Radio plays based on Doctor Who 2012 radio dramas Works by Steve Lyons Fiction set in 2044 Fiction set on the Moon