```html
<HTML>
<!--
Use, modification and distribution is subject to the Boost Software
License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at
path_to_url)
-->
<Head>
<Title>Reference Property Map</Title>
</Head>
<BODY BGCOLOR="#ffffff" LINK="#0000ee" TEXT="#000000" VLINK="#551a8b"
ALINK="#ff0000">
<IMG SRC="../../../boost.png"
ALT="C++ Boost" width="277" height="86">
<BR Clear>
<H2><A NAME="sec:identity-property-map"></A>
Reference Property Map
</h2>
<PRE>
template <typename KeyType, typename ValueType>
class ref_property_map
</PRE>
This property map wraps a reference to a particular object and returns
that reference for every key with which it is queried.
<H3>Where Defined</H3>
<P>
<a href="../../../boost/property_map/property_map.hpp"><TT>boost/property_map/property_map.hpp</TT></a>
<h3>Model of</h3>
<a href="./LvaluePropertyMap.html">Lvalue Property Map</a>
<h3>Associated Types</h3>
<table border>
<tr>
<th>Type</th><th>Description</th>
</tr>
<tr>
<td><tt>
boost::property_traits<ref_property_map>::value_type
</tt></td>
<td>
This type is the <tt>ValueType</tt> with which the template was instantiated.
</td>
</tr>
<tr>
<td><tt>
boost::property_traits<ref_property_map>::key_type
</tt></td>
<td>
This type is the <tt>KeyType</tt> with which the template was instantiated.
</td>
</tr>
<tr>
<td><tt>
boost::property_traits<ref_property_map>::category
</tt></td>
<td>
This type is <tt>boost::lvalue_property_map_tag</tt>.
</td>
</tr>
</table>
<h3>Member Functions</h3>
<table border>
<tr>
<th>Member</th><th>Description</th>
</tr>
<tr>
<td><tt>
ref_property_map(ValueType& v)
</tt></td>
<td>
The constructor takes the reference that the property map will return
whenever it is queried.
</td>
</tr>
<tr>
<td><tt>
ref_property_map(const ref_property_map& x)
</tt></td>
<td>
Copy constructor.
</td>
</tr>
<tr>
<td><tt>
ValueType& operator[](KeyType const&) const
</tt></td>
<td>
Returns the contained reference.
</td>
</tr>
</table>
</BODY>
</HTML>
```
|
Frederick Albert McDonald (7 December 1872 – disappeared April 1926) was an Australian politician. He was a Labor member of the Australian House of Representatives for Barton from 1922 until 1925, when he was narrowly defeated by Nationalist Thomas Ley. McDonald was challenging the election result in court when he mysteriously disappeared in 1926. It is widely suspected that Ley, who later committed murder in England and died in an insane asylum there, and several of whose other rivals also died in mysterious circumstances, was responsible for McDonald's disappearance.
Early life
Frederick McDonald was born in Grafton, New South Wales, and studied at the Sydney Teachers' College and the University of Sydney before becoming a teacher. He had been teaching at Hurstville Superior Public School for thirteen years at the time of his election; among his previous postings was at Wellington. McDonald was president of both the New South Wales Teachers Federation (which he had been involved in founding) and the Assistant Teachers' Association, and had been credited with gaining the Teachers' Federation access to the Industrial Court.
McDonald was president of his local branch of the Labor Party and president of Labor's electorate council for the Lang seat. He was narrowly defeated as a Labor candidate at the 1920 state election. In the 1922 federal election, McDonald contested the new seat of Barton for Labor, defeating the Nationalist member for the abolished seat of Illawarra, Hector Lamond.
McDonald married Mrs. I. B. Burnett at Scots Church in Melbourne in April 1924.
Political career
McDonald was involved in an extremely contentious race for re-election at the 1925 federal election, when he was challenged by Nationalist candidate and former state minister Thomas Ley. During the campaign, Ley lambasted McDonald for his alleged links to communists. On the day before the election, 13 November 1925, McDonald alleged that the year before, Ley had tried to bribe him into not recontesting Barton. Ley ferociously denied the allegations, and on election day issued a writ against McDonald claiming £15,000 for defamation. Ley won the election by 1,090 votes, and McDonald acknowledged the defeat, stating "the Labor movement is ruled by its heart and not its head; had it been ruled by its head there would be a different story to tell".
In January 1926, McDonald challenged the election result in the Court of Disputed Returns on the basis of the bribery allegations. In March 1926, it was reported that he and Ley had agreed on a legal settlement wherein McDonald issued an apology for the bribery allegations and they both stated their intention to withdraw their respective lawsuits. However, it has been suggested that McDonald subsequently "had a fit of remorse" and refused to withdraw the petition.
Disappearance
On 15 April 1926, McDonald disappeared on his way to a meeting with New South Wales Premier Jack Lang, in which he was to have discussed a proposal to have the election result declared void. He was last seen at 2.30pm by his wife outside Challis House in Martin Place when he left for the appointment with Lang, but never arrived. Despite the March announcement, McDonald's Court of Disputed Returns litigation had not been formally withdrawn, and when the matter went to court on 23 April, his solicitor denied knowledge of the settlement and sought an adjournment in the hope that McDonald would be found alive; however, the matter was struck out on the basis of the March announcement. Despite an extensive search, neither McDonald nor his attaché case were ever found.
Initial reports suggested that McDonald was suffering from "nervous trouble" at the time of his disappearance. However, Ley was later deemed insane after committing murder in England, and was committed to Broadmoor Hospital, where he died. It is now widely suspected that Ley may have been responsible for McDonald's disappearance, and several other suspicious deaths of his political and personal opponents.
See also
List of people who disappeared
References
1872 births
1920s missing person cases
20th-century Australian politicians
Australian Labor Party members of the Parliament of Australia
Date of death unknown
Members of the Australian House of Representatives
Members of the Australian House of Representatives for Barton
Missing person cases in Australia
Australian schoolteachers
People from Grafton, New South Wales
|
```php
<?php
declare(strict_types=1);
namespace Psalm\Node\Expr\AssignOp;
use PhpParser\Node\Expr\AssignOp\Minus;
use Psalm\Node\VirtualNode;
final class VirtualMinus extends Minus implements VirtualNode
{
}
```
|
Landry Joel Tsafack N'Guémo (born 28 November 1985) is a Cameroonian former professional footballer who played as a defensive midfielder. He played for Nancy, Bordeaux and Saint-Étienne in France and for Scottish club Celtic on loan. N'Guémo played for the Cameroon national team from 2006, including at the 2010 and 2014 World Cups.
Club career
Early career
N'Guémo is a native of Dschang, a town in western Cameroon. He played for various local teams in Dschang before moving to Yaoundé aged 13, and spent a short time with EMC.
Nancy
N'Guémo was spotted by scouts of Nancy in Yaoundé and was promptly invited to France where he had trials before signing for the club aged 15. He made his debut aged 19 in August 2005 as a substitute against Lyon in a league match. He made his first start one month later against Troyes.
In January 2009, N'Guémo said he would welcome a move away from the club after being linked by French and English media with moves to Arsenal, Sunderland and Everton.
On 31 January 2009, N'Guémo scored his first goal for Nancy, a dramatic 90th-minute winner in a match against Le Havre. His second goal for the club also came in dramatic circumstances when he again netted in the 90th minute on 23 May 2009 against Marseille but Nancy were beaten 2–1.
Loan to Celtic
After days of speculation, on 16 July 2009 N'Guémo completed a one-year loan move to Celtic, with an option to make the move permanent. He wore the number 6 shirt, the squad number previously allotted to Bobo Baldé. N'Guémo's first game came in a 0–0 draw against Cardiff City, in which he was named Celtic's man of the match.
He made his competitive debut in the first leg of a Champions League qualifying tie against Dynamo Moscow in Glasgow, losing 1–0. He was also part of the team that won 2–0 in the return leg in Moscow, sending Celtic through to play Arsenal in the final qualifier for the Champions League. He made his league debut away to Aberdeen in a 3–1 win for Celtic.
In total N'Guémo made 35 appearances for Celtic without scoring. At the end of his loan period the two clubs were unable to agree a transfer fee for N'Guémo and so he returned to AS Nancy.
Bordeaux
On 4 July 2011, N'Guémo moved from Nancy to Ligue 1 rivals Bordeaux, signing a three-year contract. He played in 33 of Bordeaux's 38 league fixtures in his first season there, helping the club to fifth place and qualification for the following season's Europa League. On 3 October 2013 in a Europa League tie against Maccabi Tel Aviv, N'Guémo suffered what was initially suspected to be a minor heart attack. He was substituted and taken to hospital where he underwent extensive tests, with nothing untoward found. N'Guémo stated on his Twitter account afterwards that "There is nothing serious. I went back home from hospital after an electrocardiogram" and he returned to first team action just over two weeks later in a Ligue 1 match against Lyon.
Saint-Étienne
In January 2015, N'Guémo signed a six-month contract with Saint-Étienne.
Turkish football
On 29 August 2015, N'Guémo signed with Turkish Süper Lig club Akhisar Belediyespor on a three-year contract after being released from Saint-Étienne at the end of the 2014–15 season.
N'Guémo joined another Turkish club, Kayserispor, in January 2017, signing a contract until 2019.
International career
N'Guémo made 42 appearances for Cameroon, scoring 3 times.
Coaching career
After a spell at Norwegian side Kongsvinger in 2019, N'Guémo retired from football. In May 2020, he was named under-18 manager of the French club COS Villers-lès-Nancy. In June 2021, he was hired as a youth coach at his former club, AS Nancy.
Personal life
N'Guémo is a keen falconer and keeps a modest collection of birds of prey; his favourite, a white-tailed eagle, is named Mr George after George Weah.
He was naturalized French in December 2007.
Honours
Nancy
French League Cup: 2006
Cameroon
Africa Cup of Nations runner-up: 2008
References
External links
FIFA Profile
1985 births
Footballers from Yaoundé
Living people
Men's association football midfielders
Cameroonian men's footballers
Cameroon men's international footballers
2008 Africa Cup of Nations players
2010 Africa Cup of Nations players
2010 FIFA World Cup players
2014 FIFA World Cup players
AS Nancy Lorraine players
Celtic F.C. players
FC Girondins de Bordeaux players
AS Saint-Étienne players
Akhisarspor footballers
Kayserispor footballers
Kongsvinger IL Toppfotball players
Ligue 1 players
Scottish Premier League players
Süper Lig players
Norwegian First Division players
Expatriate men's footballers in France
Expatriate men's footballers in Norway
Expatriate men's footballers in Scotland
Expatriate men's footballers in Turkey
Cameroonian expatriate men's footballers
Cameroonian expatriate sportspeople in France
Cameroonian expatriate sportspeople in Norway
Cameroonian expatriate sportspeople in Scotland
Cameroonian expatriate sportspeople in Turkey
Naturalized citizens of France
|
This list of women Impressionists includes women artists who were involved with the Impressionist movement or associated with Impressionist artists.
The four best-known women Impressionists – Morisot, Cassatt, Bracquemond, and Gonzalès – emerged as artists at a time when the art world, at least in Paris, was becoming increasingly feminized. 609 works by women were shown in the 1900 Salon, as opposed to 66 in the 1800 Salon; women represented 20% of the artists shown in painting and graphic arts between 1818 and 1877, and close to 30% by the end of the 1890s.
Source: Women Artists in Paris 1850-1900
Anna Ancher, Danish, 1859-1935
Harriet Backer, Norwegian, 1845-1932
Marie Bashkirtseff, née Maria Konstantinovna Bashkirtseva, French, 1858-1884
Amélie Beaury-Saurel, French, 1848-1924
Cecilia Beaux, American, 1855-1942
Anna Bilinska-Bohdanowicz, Polish, 1857-1893
Marie Bracquemond, French, 1840-1916
Louise Catherine Breslau, German, 1856-1927
Lady Elizabeth Butler, née Elizabeth Southerden Thompson, British, 1846-1933
Mina Carlson-Bredberg, Swedish, 1857-1943
Mary Cassatt, American, 1844-1926
Mary Cazin, French, 1844-1924
Fanny Churberg, Finnish, 1845-1892
Elin Danielson-Gambogi, Finnish, 1861-1919
Julie Delance-Feurgard, French, 1859-1892
Virginie Demont-Breton, French, 1859-1935
Elizabeth Jane Gardner Bouguereau, American, 1837-1922
Eva Gonzalès, French, 1849-1883
Annie Hopf, Swiss, 1861-1918
Kitty Kielland, Norwegian, 1843-1914
Anna Elizabeth Klumpke, American, 1856-1942
Emma Löwstädt-Chadwick, Swedish, 1855-1932
Paula Modersohn-Becker, German, 1876-1907
Berthe Morisot, French, 1841-1895
Asta Nørregaard, Norwegian, 1853-1933
Elizabeth Nourse, American, 1859-1938
Hanna Pauli, Swedish, 1864-1940
Lilla Cabot Perry, American, 1848-1933
Marie Petiet, French, 1854-1893
Helene Schjerfbeck, Finnish, 1862-1946
Mary Shepard Greene Blumenschein, American, 1869-1958
Marianne Stokes, née Preindlsberger, Austrian, 1855-1927
Annie Louise Swynnerton, née Robinson, English, 1844-1933
Ellen Thesleff, Finnish, 1869-1954
References
Impressionist artists
|
```sql
INSERT INTO pageviews
( session_id, pathname, is_new_visitor, is_unique, is_bounce, referrer, duration, timestamp) VALUES
( LEFT(UUID(), 8), "/", 1, 1, 0, "", 15, "2018-05-03 15:00:00"),
( LEFT(UUID(), 8), "/", 1, 1, 1, "", 14, "2018-05-03 15:00:00"),
( LEFT(UUID(), 8), "/", 1, 1, 0, "", 13, "2018-05-04 15:00:00"),
( LEFT(UUID(), 8), "/", 0, 1, 0, "", 16, "2018-05-04 15:00:00"),
( LEFT(UUID(), 8), "/", 0, 1, 0, "", 16, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/", 0, 1, 0, "", 17, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/", 0, 1, 1, "", 18, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/", 1, 1, 0, "", 15, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/", 1, 1, 1, "path_to_url", 15, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/", 1, 1, 0, "path_to_url", 15, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/", 0, 1, 0, "path_to_url", 150, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/", 0, 1, 0, "path_to_url", 15, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/about", 1, 1, 0, "", 15, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/about", 1, 1, 0, "", 10, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/about", 1, 1, 0, "", 11, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/contact", 1, 1, 0, "", 21, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/contact", 1, 1, 0, "", 15, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/contact", 0, 1, 0, "", 15, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/contact", 0, 1, 1, "", 8, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/contact", 1, 1, 0, "", 15, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/contact", 1, 1, 1, "path_to_url", 15, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/pricing", 1, 0, 0, "", 15, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/pricing", 1, 0, 1, "", 15, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/pricing", 1, 1, 0, "", 24, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/pricing", 1, 1, 1, "", 15, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/pricing", 1, 0, 0, "", 8, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/pricing", 1, 0, 1, "", 24, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/pricing", 1, 1, 0, "", 15, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/pricing", 1, 1, 1, "", 15, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/pricing", 1, 1, 0, "", 14, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/pricing", 1, 1, 0, "", 15, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/pricing", 1, 1, 0, "", 15, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/", 1, 1, 0, "", 24, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/", 1, 1, 0, "", 15, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/", 1, 1, 0, "", 24, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/", 1, 1, 1, "", 15, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/", 1, 1, 0, "path_to_url", 8, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/", 1, 1, 1, "path_to_url", 24, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/", 1, 1, 0, "path_to_url", 15, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/", 1, 0, 1, "", 19, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/", 1, 0, 0, "", 15, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/", 1, 0, 0, "", 19, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/", 1, 1, 0, "path_to_url", 19, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/", 1, 1, 0, "path_to_url", 15, "2018-05-05 15:00:00"),
( LEFT(UUID(), 8), "/", 1, 1, 0, "path_to_url", 14, "2018-05-05 15:00:00");
```
|
Jean-Marie Kersauson de Goasmelquin (also spelled Kersauzon and Goasmelquen) was a French Navy officer. He fought in the War of American Independence, and took part in the French operations in the Indian Ocean under Suffren.
Biography
After the Battle of Negapatam, Suffren reshuffled his captains, notably appointing Beaulieu, captain of Bellone, to Brillant. After Pierrevert, Bellone's new captain, was killed in the action of 12 August 1782, Suffren returned Beaulieu to Bellone and appointed Lieutenant de Kersauson to replace him as captain of Brillant.
At the Battle of Trincomalee, from 25 August to 3 September 1782, Brillant was the only ship to return from the vanguard to support the French main battle body, as ordered. However, he failed to seize the opportunity to engage the British from point-blank range, instead taking a position behind Illustre.
Kersauson was promoted to Captain on 31 July 1784.
Notes
Citations
References
French Navy officers
|
The Frenchside Fishing Village is located in Two Rivers, Wisconsin. It was added to the National Register of Historic Places in 1987.
History
The area was originally largely inhabited by French Canadian immigrants. It became the longest-running commercial fishing district on the Great Lakes and would come to have the largest fleet as well.
References
Fishing communities in the United States
Historic districts on the National Register of Historic Places in Wisconsin
Geography of Manitowoc County, Wisconsin
French-Canadian American history
French-Canadian culture in Wisconsin
National Register of Historic Places in Manitowoc County, Wisconsin
|
Pietro Cerone (1566–1625) was an Italian music theorist, singer and priest of the late Renaissance. He is most famous for the enormous music treatise he wrote in 1613, which is useful in studying the compositional practices of the 16th century.
Life
Cerone was born in Bergamo. While Italian, he spent most of his life in Spanish-dominated Naples, Sardinia, and later in Spain: he did most of his writing in Spanish. He was unusual in being an Italian musician in Spain; far more often in the 16th century, Spanish musicians went to Italy, as in the case of Victoria. In 1603 he returned to Naples, where he was a priest and singer until his death. It was in Naples that he wrote his two most famous treatises.
Writings
The first of these, in Italian, was Le regole più necessarie per l'introduzione del canto fermo, which he published in 1609. It was a didactic and practical work on singing plainsong, which he probably used in his work at the Neapolitan church of Ss Annunziata. Four years later, however, he published a monumental volume on music theory, El melopeo y maestro: tractado de música theorica y pratica; en que se pone por extenso; lo que uno para hazerse perfecto musico ha menester saber, which consisted of 22 volumes, 849 chapters, and 1160 pages in the original Spanish.
El melopeo achieved considerable notoriety, and was sufficiently famous as late as 1803 to be lampooned by the Spanish novelist Antonio Eximeno, who compared it to the chivalric romances in Don Quixote: an impossibly detailed and absurd compilation of nonsense. Other writers in the 18th and 19th centuries have called it "monstrous." However the treatise contains passages which give insight into the compositional practices of the time.
Cerone was musically conservative, and his conservatism in this influential treatise doubtless had some effect on the delay of the Baroque style arriving in the Iberian peninsula. In his writing he was generally contemptuous of Spanish composers, and lavish in his praise of Italians (which may partially account for the abuse heaped on him by Spanish critics). He discusses the previous theoretical treatises of Zarlino, Vicentino, Juan Bermudo and others; he describes in detail how a composer can achieve expressive intensity when writing masses, motets, madrigals, frottolas, canzonettas, canticles, hymns, psalms, lamentations, ricercares, tientos, strambotti, and other forms of the time. The compositional ideal which he maintained was the style of Palestrina, though curiously he maintained that the "rules" of counterpoint were made to be broken, and should be abandoned as soon as a composer had learned his craft: paradoxically, even in the 21st century, no style of composition is taught in a more rigorous, rule-based way than the polyphonic idiom of Palestrina.
While the treatise shows that he possessed considerable compositional skill, no music by Cerone has survived and he is not known to have published any. He died in Naples.
Notes
References
Barton Hudson: "Pietro Cerone", Grove Music Online, ed. L. Macy (Accessed November 4, 2006), (subscription access)
Further reading
Gustave Reese, Music in the Renaissance. New York, W.W. Norton & Co., 1954. .
Article "Pietro Cerone", in The New Grove Dictionary of Music and Musicians, ed. Stanley Sadie. 20 vol. London, Macmillan Publishers Ltd., 1980. .
Oliver Strunk, Source Readings in Music History. New York, W.W. Norton & Co, 1950. Contains a portion of El melopeo y maestro in English.
External links
Complete copy of Cerone's treatise El melopeo y maestro at the World Digital Library
1566 births
1625 deaths
Italian music theorists
Musicians from Bergamo
Clergy from Bergamo
|
Memogate may refer to:
Mannygate, a 2004 controversy concerning confidential memos of Democratic staffers on the U.S. Senate Judiciary Committee obtained by Manuel Miranda
Killian documents controversy, a 2004 controversy involving apparently forged documents critical of George W. Bush's military service, reported on by 60 Minutes
Memogate (Pakistan), a 2011 controversy about an alleged Pakistani memo seeking the help of the US Government
Nunes memo, a 2018 controversy regarding a memo about the FBI's obtaining of a Foreign Intelligence Surveillance Act (FISA) warrant
|
```c++
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// path_to_url
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.
#include "kudu/tablet/tablet_replica.h"
#include <algorithm>
#include <deque>
#include <functional>
#include <map>
#include <memory>
#include <mutex>
#include <optional>
#include <ostream>
#include <string>
#include <type_traits>
#include <utility>
#include <vector>
#include <gflags/gflags.h>
#include <glog/logging.h>
#include "kudu/common/common.pb.h"
#include "kudu/common/partition.h"
#include "kudu/common/timestamp.h"
#include "kudu/consensus/consensus.pb.h"
#include "kudu/consensus/consensus_meta_manager.h"
#include "kudu/consensus/consensus_peers.h"
#include "kudu/consensus/log.h"
#include "kudu/consensus/log_anchor_registry.h"
#include "kudu/consensus/metadata.pb.h"
#include "kudu/consensus/opid.pb.h"
#include "kudu/consensus/quorum_util.h"
#include "kudu/consensus/raft_consensus.h"
#include "kudu/consensus/time_manager.h"
#include "kudu/fs/data_dirs.h"
#include "kudu/gutil/basictypes.h"
#include "kudu/gutil/map-util.h"
#include "kudu/gutil/port.h"
#include "kudu/gutil/strings/substitute.h"
#include "kudu/rpc/result_tracker.h"
#include "kudu/rpc/rpc_header.pb.h"
#include "kudu/tablet/mvcc.h"
#include "kudu/tablet/ops/alter_schema_op.h"
#include "kudu/tablet/ops/op_driver.h"
#include "kudu/tablet/ops/participant_op.h"
#include "kudu/tablet/ops/write_op.h"
#include "kudu/tablet/tablet.h"
#include "kudu/tablet/tablet.pb.h"
#include "kudu/tablet/tablet_replica_mm_ops.h"
#include "kudu/tablet/txn_coordinator.h"
#include "kudu/tserver/tserver.pb.h"
#include "kudu/tserver/tserver_admin.pb.h"
#include "kudu/util/flag_tags.h"
#include "kudu/util/locks.h"
#include "kudu/util/logging.h"
#include "kudu/util/maintenance_manager.h"
#include "kudu/util/metrics.h"
#include "kudu/util/monotime.h"
#include "kudu/util/pb_util.h"
#include "kudu/util/scoped_cleanup.h"
#include "kudu/util/stopwatch.h"
#include "kudu/util/threadpool.h"
#include "kudu/util/trace.h"
DEFINE_uint32(tablet_max_pending_txn_write_ops, 2,
"Maximum number of write operations to be pending at leader "
"tablet replica per transaction prior to registering the tablet "
"as a participant in the corresponding transaction");
TAG_FLAG(tablet_max_pending_txn_write_ops, experimental);
TAG_FLAG(tablet_max_pending_txn_write_ops, runtime);
METRIC_DEFINE_histogram(tablet, op_prepare_queue_length, "Operation Prepare Queue Length",
kudu::MetricUnit::kTasks,
"Number of operations waiting to be prepared within this tablet. "
"High queue lengths indicate that the server is unable to process "
"operations as fast as they are being written to the WAL.",
kudu::MetricLevel::kInfo,
10000, 2);
METRIC_DEFINE_histogram(tablet, op_prepare_queue_time, "Operation Prepare Queue Time",
kudu::MetricUnit::kMicroseconds,
"Time that operations spent waiting in the prepare queue before being "
"processed. High queue times indicate that the server is unable to "
"process operations as fast as they are being written to the WAL.",
kudu::MetricLevel::kInfo,
10000000, 2);
METRIC_DEFINE_histogram(tablet, op_prepare_run_time, "Operation Prepare Run Time",
kudu::MetricUnit::kMicroseconds,
"Time that operations spent being prepared in the tablet. "
"High values may indicate that the server is under-provisioned or "
"that operations are experiencing high contention with one another for "
"locks.",
kudu::MetricLevel::kInfo,
10000000, 2);
METRIC_DEFINE_gauge_size(tablet, on_disk_size, "Tablet Size On Disk",
kudu::MetricUnit::kBytes,
"Space used by this tablet on disk, including metadata.",
kudu::MetricLevel::kInfo);
METRIC_DEFINE_gauge_string(tablet, state, "Tablet State",
kudu::MetricUnit::kState,
"State of this tablet.",
kudu::MetricLevel::kInfo);
METRIC_DEFINE_gauge_uint64(tablet, live_row_count, "Tablet Live Row Count",
kudu::MetricUnit::kRows,
"Number of live rows in this tablet, excludes deleted rows.",
kudu::MetricLevel::kInfo);
DECLARE_bool(prevent_kudu_2233_corruption);
using kudu::consensus::ALTER_SCHEMA_OP;
using kudu::consensus::ConsensusBootstrapInfo;
using kudu::consensus::ConsensusOptions;
using kudu::consensus::ConsensusRound;
using kudu::consensus::ConsensusStatePB;
using kudu::consensus::MarkDirtyCallback;
using kudu::consensus::OpId;
using kudu::consensus::PARTICIPANT_OP;
using kudu::consensus::PeerProxyFactory;
using kudu::consensus::RaftConfigPB;
using kudu::consensus::RaftConsensus;
using kudu::consensus::RaftPeerPB;
using kudu::consensus::RpcPeerProxyFactory;
using kudu::consensus::ServerContext;
using kudu::consensus::TimeManager;
using kudu::consensus::WRITE_OP;
using kudu::log::Log;
using kudu::log::LogAnchorRegistry;
using kudu::pb_util::SecureDebugString;
using kudu::pb_util::SecureShortDebugString;
using kudu::rpc::Messenger;
using kudu::rpc::ResultTracker;
using kudu::tserver::ParticipantOpPB;
using kudu::tserver::ParticipantRequestPB;
using kudu::tserver::TabletServerErrorPB;
using std::deque;
using std::map;
using std::shared_ptr;
using std::string;
using std::unique_ptr;
using std::vector;
using strings::Substitute;
namespace kudu {
namespace tablet {
TabletReplica::TabletReplica(
scoped_refptr<TabletMetadata> meta,
scoped_refptr<consensus::ConsensusMetadataManager> cmeta_manager,
consensus::RaftPeerPB local_peer_pb,
ThreadPool* apply_pool,
ThreadPool* reload_txn_status_tablet_pool,
TxnCoordinatorFactory* txn_coordinator_factory,
MarkDirtyCallback cb)
: meta_(DCHECK_NOTNULL(std::move(meta))),
cmeta_manager_(DCHECK_NOTNULL(std::move(cmeta_manager))),
local_peer_pb_(std::move(local_peer_pb)),
log_anchor_registry_(new LogAnchorRegistry),
apply_pool_(apply_pool),
reload_txn_status_tablet_pool_(reload_txn_status_tablet_pool),
txn_coordinator_(meta_->table_type() &&
*meta_->table_type() == TableTypePB::TXN_STATUS_TABLE ?
DCHECK_NOTNULL(txn_coordinator_factory)->Create(this) : nullptr),
txn_coordinator_task_counter_(0),
mark_dirty_clbk_(meta_->table_type() &&
*meta_->table_type() == TableTypePB::TXN_STATUS_TABLE ?
[this, cb](const string& reason) {
cb(reason);
this->TxnStatusReplicaStateChanged(this->tablet_id(), reason);
} : std::move(cb)),
state_(NOT_INITIALIZED),
last_status_("Tablet initializing...") {
}
TabletReplica::TabletReplica()
: apply_pool_(nullptr),
reload_txn_status_tablet_pool_(nullptr),
state_(SHUTDOWN),
last_status_("Fake replica created") {
}
TabletReplica::~TabletReplica() {
// We are required to call Shutdown() before destroying a TabletReplica.
CHECK(state_ == SHUTDOWN || state_ == FAILED)
<< "TabletReplica not fully shut down. State: "
<< TabletStatePB_Name(state_);
}
Status TabletReplica::Init(ServerContext server_ctx) {
CHECK_EQ(NOT_INITIALIZED, state_);
TRACE("Creating consensus instance");
SetStatusMessage("Initializing consensus...");
ConsensusOptions options;
options.tablet_id = meta_->tablet_id();
shared_ptr<RaftConsensus> consensus;
RETURN_NOT_OK(RaftConsensus::Create(std::move(options),
local_peer_pb_,
cmeta_manager_,
std::move(server_ctx),
&consensus));
consensus_ = std::move(consensus);
set_state(INITIALIZED);
SetStatusMessage("Initialized. Waiting to start...");
return Status::OK();
}
Status TabletReplica::Start(
const ConsensusBootstrapInfo& bootstrap_info,
shared_ptr<Tablet> tablet,
clock::Clock* clock,
shared_ptr<Messenger> messenger,
scoped_refptr<ResultTracker> result_tracker,
scoped_refptr<Log> log,
ThreadPool* prepare_pool,
DnsResolver* resolver) {
DCHECK(tablet) << "A TabletReplica must be provided with a Tablet";
DCHECK(log) << "A TabletReplica must be provided with a Log";
{
std::lock_guard state_change_guard(state_change_lock_);
scoped_refptr<MetricEntity> metric_entity;
unique_ptr<PeerProxyFactory> peer_proxy_factory;
unique_ptr<TimeManager> time_manager;
{
std::lock_guard l(lock_);
CHECK_EQ(BOOTSTRAPPING, state_);
tablet_ = DCHECK_NOTNULL(std::move(tablet));
clock_ = DCHECK_NOTNULL(clock);
messenger_ = DCHECK_NOTNULL(std::move(messenger));
result_tracker_ = std::move(result_tracker); // Passed null in tablet_replica-test
log_ = DCHECK_NOTNULL(log); // Not moved because it's passed to RaftConsensus::Start() below.
metric_entity = tablet_->GetMetricEntity();
prepare_pool_token_ = prepare_pool->NewTokenWithMetrics(
ThreadPool::ExecutionMode::SERIAL,
{
METRIC_op_prepare_queue_length.Instantiate(metric_entity),
METRIC_op_prepare_queue_time.Instantiate(metric_entity),
METRIC_op_prepare_run_time.Instantiate(metric_entity)
});
if (tablet_->metrics() != nullptr) {
TRACE("Starting instrumentation");
op_tracker_.StartInstrumentation(tablet_->GetMetricEntity());
METRIC_on_disk_size.InstantiateFunctionGauge(
tablet_->GetMetricEntity(), [this]() { return this->OnDiskSize(); })
->AutoDetach(&metric_detacher_);
METRIC_state.InstantiateFunctionGauge(
tablet_->GetMetricEntity(), [this]() { return this->StateName(); })
->AutoDetach(&metric_detacher_);
if (tablet_->metadata()->supports_live_row_count()) {
METRIC_live_row_count.InstantiateFunctionGauge(
tablet_->GetMetricEntity(), [this]() { return this->CountLiveRowsNoFail(); })
->AutoDetach(&metric_detacher_);
} else {
METRIC_live_row_count.InstantiateInvalid(tablet_->GetMetricEntity(), 0);
}
}
op_tracker_.StartMemoryTracking(tablet_->mem_tracker());
TRACE("Starting consensus");
VLOG(2) << "T " << tablet_id() << " P " << consensus_->peer_uuid() << ": Peer starting";
VLOG(2) << "RaftConfig before starting: " << SecureDebugString(consensus_->CommittedConfig());
peer_proxy_factory.reset(new RpcPeerProxyFactory(messenger_, resolver));
time_manager.reset(new TimeManager(clock_, tablet_->mvcc_manager()->GetCleanTimestamp()));
}
// We cannot hold 'lock_' while we call RaftConsensus::Start() because it
// may invoke TabletReplica::StartFollowerOp() during startup,
// causing a self-deadlock. We take a ref to members protected by 'lock_'
// before unlocking.
RETURN_NOT_OK(consensus_->Start(
bootstrap_info,
std::move(peer_proxy_factory),
log,
std::move(time_manager),
this,
metric_entity,
mark_dirty_clbk_));
std::lock_guard l(lock_);
// If an error has been set (e.g. due to a disk failure from a separate
// thread), error out.
RETURN_NOT_OK(error_);
CHECK_EQ(BOOTSTRAPPING, state_); // We are still protected by 'state_change_lock_'.
set_state(RUNNING);
}
return Status::OK();
}
const string& TabletReplica::StateName() const {
return TabletStatePB_Name(state());
}
const consensus::RaftConfigPB TabletReplica::RaftConfig() const {
CHECK(consensus_) << "consensus is null";
return consensus_->CommittedConfig();
}
void TabletReplica::Stop() {
{
std::unique_lock<simple_spinlock> lock(lock_);
if (state_ == STOPPING || state_ == STOPPED ||
state_ == SHUTDOWN || state_ == FAILED) {
lock.unlock();
WaitUntilStopped();
return;
}
LOG_WITH_PREFIX(INFO) << "stopping tablet replica";
set_state(STOPPING);
}
WaitUntilTxnCoordinatorTasksFinished();
std::lock_guard l(state_change_lock_);
// Even though Tablet::Shutdown() also unregisters its ops, we have to do it here
// to ensure that any currently running operation finishes before we proceed with
// the rest of the shutdown sequence. In particular, a maintenance operation could
// indirectly end up calling into the log, which we are about to shut down.
if (tablet_) tablet_->UnregisterMaintenanceOps();
UnregisterMaintenanceOps();
if (consensus_) consensus_->Stop();
// TODO(KUDU-183): Keep track of the pending tasks and send an "abort" message.
LOG_SLOW_EXECUTION(WARNING, 1000,
Substitute("TabletReplica: tablet $0: Waiting for Ops to complete", tablet_id())) {
op_tracker_.WaitForAllToFinish();
}
if (prepare_pool_token_) {
prepare_pool_token_->Shutdown();
}
if (log_) {
WARN_NOT_OK(log_->Close(), "Error closing the Log.");
}
if (tablet_) {
tablet_->Shutdown();
}
if (txn_coordinator_) {
txn_coordinator_->Shutdown();
}
// Only mark the peer as STOPPED when all other components have shut down.
{
std::lock_guard lock(lock_);
tablet_.reset();
set_state(STOPPED);
}
VLOG(1) << "TabletReplica: tablet " << tablet_id() << " shut down!";
}
void TabletReplica::Shutdown() {
Stop();
if (consensus_) consensus_->Shutdown();
std::lock_guard lock(lock_);
if (state_ == SHUTDOWN || state_ == FAILED) return;
if (!error_.ok()) {
set_state(FAILED);
return;
}
set_state(SHUTDOWN);
}
void TabletReplica::WaitUntilStopped() {
while (true) {
{
std::lock_guard lock(lock_);
if (state_ == STOPPED || state_ == SHUTDOWN || state_ == FAILED) {
return;
}
}
SleepFor(MonoDelta::FromMilliseconds(10));
}
}
void TabletReplica::WaitUntilTxnCoordinatorTasksFinished() {
if (!txn_coordinator_) {
return;
}
while (true) {
{
std::lock_guard lock(lock_);
if (txn_coordinator_task_counter_ == 0) {
return;
}
}
SleepFor(MonoDelta::FromMilliseconds(10));
}
}
void TabletReplica::TxnStatusReplicaStateChanged(const string& tablet_id, const string& reason) {
if (PREDICT_FALSE(!ShouldRunTxnCoordinatorStateChangedTask())) {
return;
}
auto decrement_on_failure = MakeScopedCleanup([&] {
DecreaseTxnCoordinatorTaskCounter();
});
CHECK_EQ(tablet_id, this->tablet_id());
shared_ptr<RaftConsensus> consensus = shared_consensus();
if (!consensus) {
LOG_WITH_PREFIX(WARNING) << "Received notification of TxnStatusTablet state change "
<< "but the raft consensus is not initialized. Tablet ID: "
<< tablet_id << ". Reason: " << reason;
return;
}
ConsensusStatePB cstate;
Status s = consensus->ConsensusState(&cstate);
if (PREDICT_FALSE(!s.ok())) {
LOG_WITH_PREFIX(WARNING) << "Consensus state is not available. " << s.ToString();
return;
}
LOG_WITH_PREFIX(INFO) << "TxnStatusTablet state changed. Reason: " << reason << ". "
<< "Latest consensus state: " << SecureShortDebugString(cstate);
RaftPeerPB::Role new_role = GetConsensusRole(permanent_uuid(), cstate);
LOG_WITH_PREFIX(INFO) << "This TxnStatusTablet replica's current role is: "
<< RaftPeerPB::Role_Name(new_role);
if (new_role == RaftPeerPB::LEADER) {
// If we're going to schedule a task, only decrement our task count when
// that task finishes.
decrement_on_failure.cancel();
CHECK_OK(reload_txn_status_tablet_pool_->Submit([this] {
txn_coordinator_->PrepareLeadershipTask();
DecreaseTxnCoordinatorTaskCounter();
}));
}
}
void TabletReplica::set_state(TabletStatePB new_state) {
switch (new_state) {
case NOT_INITIALIZED:
LOG(FATAL) << "Cannot transition to NOT_INITIALIZED state";
return;
case INITIALIZED:
CHECK_EQ(NOT_INITIALIZED, state_);
break;
case BOOTSTRAPPING:
CHECK_EQ(INITIALIZED, state_);
break;
case RUNNING:
CHECK_EQ(BOOTSTRAPPING, state_);
break;
case STOPPING:
CHECK_NE(STOPPED, state_);
CHECK_NE(SHUTDOWN, state_);
break;
case STOPPED:
CHECK_EQ(STOPPING, state_);
break;
case SHUTDOWN: [[fallthrough]];
case FAILED:
CHECK_EQ(STOPPED, state_) << TabletStatePB_Name(state_);
break;
default:
break;
}
state_ = new_state;
}
Status TabletReplica::CheckRunning() const {
{
std::lock_guard lock(lock_);
if (state_ != RUNNING) {
return Status::IllegalState(Substitute("The tablet is not in a running state: $0",
TabletStatePB_Name(state_)));
}
}
return Status::OK();
}
bool TabletReplica::IsShuttingDown() const {
std::lock_guard l(lock_);
if (state_ == STOPPING || state_ == STOPPED) {
return true;
}
return false;
}
Status TabletReplica::WaitUntilConsensusRunning(const MonoDelta& timeout) {
MonoTime start(MonoTime::Now());
int backoff_exp = 0;
const int kMaxBackoffExp = 8;
while (true) {
bool has_consensus = false;
TabletStatePB cached_state;
{
std::lock_guard lock(lock_);
cached_state = state_;
if (consensus_) {
has_consensus = true; // consensus_ is a set-once object.
}
}
if (cached_state == STOPPING || cached_state == STOPPED) {
return Status::IllegalState(
Substitute("The tablet is already shutting down or shutdown. State: $0",
TabletStatePB_Name(cached_state)));
}
if (cached_state == RUNNING && has_consensus && consensus_->IsRunning()) {
break;
}
MonoTime now(MonoTime::Now());
MonoDelta elapsed(now - start);
if (elapsed > timeout) {
return Status::TimedOut(Substitute("Raft Consensus is not running after waiting for $0: $1",
elapsed.ToString(), TabletStatePB_Name(cached_state)));
}
SleepFor(MonoDelta::FromMilliseconds(1L << backoff_exp));
backoff_exp = std::min(backoff_exp + 1, kMaxBackoffExp);
}
return Status::OK();
}
Status TabletReplica::SubmitTxnWrite(
std::unique_ptr<WriteOpState> op_state,
const std::function<Status(int64_t txn_id, RegisteredTxnCallback cb)>& scheduler) {
DCHECK(op_state);
DCHECK(op_state->request()->has_txn_id());
RETURN_NOT_OK(CheckRunning());
op_state->SetResultTracker(result_tracker_);
const int64_t txn_id = op_state->request()->txn_id();
shared_ptr<TxnOpDispatcher> dispatcher;
{
std::lock_guard guard(txn_op_dispatchers_lock_);
dispatcher = LookupOrEmplace(
&txn_op_dispatchers_,
txn_id,
std::make_shared<TxnOpDispatcher>(
this, FLAGS_tablet_max_pending_txn_write_ops));
}
return dispatcher->Dispatch(std::move(op_state), scheduler);
}
Status TabletReplica::UnregisterTxnOpDispatcher(int64_t txn_id,
bool abort_pending_ops) {
Status unregister_status;
VLOG(3) << Substitute(
"received request to unregister TxnOpDispatcher (txn ID $0)", txn_id);
std::lock_guard guard(txn_op_dispatchers_lock_);
auto it = txn_op_dispatchers_.find(txn_id);
// It might happen that TxnOpDispatcher isn't there, and that's completely
// fine. One possible scenario that might lead to such a condition is:
// * There was a change in replica leadership, and the former leader replica
// received all the write requests in the scope of the transaction, while
// this new leader replica hasn't received any since its start. Now this
// replica is receiving BEGIN_COMMIT transaction coordination request
// for the participant (i.e. for the tablet) from TxnStatusManager.
if (it != txn_op_dispatchers_.end()) {
auto& dispatcher = it->second;
unregister_status = dispatcher->MarkUnregistered();
if (abort_pending_ops) {
dispatcher->Cancel(Status::Aborted("operation has been aborted"),
TabletServerErrorPB::TXN_ILLEGAL_STATE);
unregister_status = Status::OK();
}
if (unregister_status.ok()) {
txn_op_dispatchers_.erase(it);
}
VLOG(1) << Substitute("TxnOpDispatcher (txn ID $0) unregistration: $1",
txn_id, unregister_status.ToString());
}
return unregister_status;
}
Status TabletReplica::SubmitWrite(unique_ptr<WriteOpState> op_state,
MonoTime deadline) {
RETURN_NOT_OK(CheckRunning());
op_state->SetResultTracker(result_tracker_);
unique_ptr<WriteOp> op(new WriteOp(std::move(op_state), consensus::LEADER));
scoped_refptr<OpDriver> driver;
RETURN_NOT_OK(NewLeaderOpDriver(std::move(op), &driver, deadline));
driver->ExecuteAsync();
return Status::OK();
}
Status TabletReplica::SubmitTxnParticipantOp(std::unique_ptr<ParticipantOpState> op_state) {
RETURN_NOT_OK(CheckRunning());
op_state->SetResultTracker(result_tracker_);
unique_ptr<ParticipantOp> op(new ParticipantOp(std::move(op_state), consensus::LEADER));
scoped_refptr<OpDriver> driver;
RETURN_NOT_OK(NewLeaderOpDriver(std::move(op), &driver, MonoTime::Max()));
driver->ExecuteAsync();
return Status::OK();
}
Status TabletReplica::SubmitAlterSchema(unique_ptr<AlterSchemaOpState> state,
MonoTime deadline) {
RETURN_NOT_OK(CheckRunning());
unique_ptr<AlterSchemaOp> op(
new AlterSchemaOp(std::move(state), consensus::LEADER));
scoped_refptr<OpDriver> driver;
RETURN_NOT_OK(NewLeaderOpDriver(std::move(op), &driver, deadline));
driver->ExecuteAsync();
return Status::OK();
}
void TabletReplica::GetTabletStatusPB(TabletStatusPB* status_pb_out) const {
DCHECK(status_pb_out != nullptr);
{
std::lock_guard lock(lock_);
status_pb_out->set_state(state_);
status_pb_out->set_last_status(last_status_);
}
const string& tablet_id = meta_->tablet_id();
status_pb_out->set_tablet_id(tablet_id);
status_pb_out->set_table_name(meta_->table_name());
meta_->partition().ToPB(status_pb_out->mutable_partition());
status_pb_out->set_tablet_data_state(meta_->tablet_data_state());
status_pb_out->set_estimated_on_disk_size(OnDiskSize());
// There are circumstances where the call to 'FindDataDirsByTabletId' may
// fail, like if the tablet is tombstoned or failed. It's alright to return
// an empty 'data_dirs' in this case: the state and last status will inform
// the caller.
vector<string> data_dirs;
ignore_result(
meta_->fs_manager()->dd_manager()->FindDataDirsByTabletId(tablet_id,
&data_dirs));
for (auto& dir : data_dirs) {
status_pb_out->add_data_dirs(std::move(dir));
}
}
Status TabletReplica::RunLogGC() {
if (!CheckRunning().ok()) {
return Status::OK();
}
int32_t num_gced;
log::RetentionIndexes retention = GetRetentionIndexes();
Status s = log_->GC(retention, &num_gced);
if (!s.ok()) {
s = s.CloneAndPrepend("Unexpected error while running Log GC from TabletReplica");
LOG(ERROR) << s.ToString();
}
return Status::OK();
}
void TabletReplica::SetBootstrapping() {
std::lock_guard lock(lock_);
set_state(BOOTSTRAPPING);
}
void TabletReplica::SetStatusMessage(string status) {
std::lock_guard lock(lock_);
last_status_ = std::move(status);
}
string TabletReplica::last_status() const {
std::lock_guard lock(lock_);
return last_status_;
}
void TabletReplica::SetError(const Status& error) {
std::lock_guard lock(lock_);
CHECK(!error.ok());
error_ = error;
last_status_ = error.ToString();
}
string TabletReplica::HumanReadableState() const {
std::lock_guard lock(lock_);
TabletDataState data_state = meta_->tablet_data_state();
// If failed, any number of things could have gone wrong.
if (state_ == FAILED) {
return Substitute("$0 ($1): $2", TabletStatePB_Name(state_),
TabletDataState_Name(data_state),
error_.ToString());
// If it's copying, or tombstoned, that is the important thing
// to show.
} else if (data_state != TABLET_DATA_READY) {
return TabletDataState_Name(data_state);
}
// Otherwise, the tablet's data is in a "normal" state, so we just display
// the runtime state (BOOTSTRAPPING, RUNNING, etc).
return TabletStatePB_Name(state_);
}
void TabletReplica::GetInFlightOps(Op::TraceType trace_type,
vector<consensus::OpStatusPB>* out) const {
vector<scoped_refptr<OpDriver> > pending_ops;
op_tracker_.GetPendingOps(&pending_ops);
for (const scoped_refptr<OpDriver>& driver : pending_ops) {
if (driver->state() != nullptr) {
consensus::OpStatusPB status_pb;
status_pb.mutable_op_id()->CopyFrom(driver->GetOpId());
switch (driver->op_type()) {
case Op::WRITE_OP:
status_pb.set_op_type(consensus::WRITE_OP);
break;
case Op::ALTER_SCHEMA_OP:
status_pb.set_op_type(consensus::ALTER_SCHEMA_OP);
break;
case Op::PARTICIPANT_OP:
status_pb.set_op_type(consensus::PARTICIPANT_OP);
break;
}
status_pb.set_description(driver->ToString());
int64_t running_for_micros =
(MonoTime::Now() - driver->start_time()).ToMicroseconds();
status_pb.set_running_for_micros(running_for_micros);
if (trace_type == Op::TRACE_OPS) {
status_pb.set_trace_buffer(driver->trace()->DumpToString());
}
out->push_back(status_pb);
}
}
}
log::RetentionIndexes TabletReplica::GetRetentionIndexes() const {
// Let consensus set a minimum index that should be anchored.
// This ensures that we:
// (a) don't GC any operations which are still in flight
// (b) don't GC any operations that are needed to catch up lagging peers.
log::RetentionIndexes ret = consensus_->GetRetentionIndexes();
VLOG_WITH_PREFIX(4) << "Log GC: With Consensus retention: "
<< Substitute("{dur: $0, peers: $1}", ret.for_durability, ret.for_peers);
// If we have never written to the log, there is no need to proceed.
if (ret.for_durability == 0) return ret;
// Next, we interrogate the anchor registry.
// Returns OK if minimum known, NotFound if no anchors are registered.
{
int64_t min_anchor_index;
Status s = log_anchor_registry_->GetEarliestRegisteredLogIndex(&min_anchor_index);
if (PREDICT_FALSE(!s.ok())) {
DCHECK(s.IsNotFound()) << "Unexpected error calling LogAnchorRegistry: " << s.ToString();
} else {
ret.for_durability = std::min(ret.for_durability, min_anchor_index);
}
}
VLOG_WITH_PREFIX(4) << "Log GC: With Anchor retention: "
<< Substitute("{dur: $0, peers: $1}", ret.for_durability, ret.for_peers);
// Next, interrogate the OpTracker.
vector<scoped_refptr<OpDriver> > pending_ops;
op_tracker_.GetPendingOps(&pending_ops);
for (const scoped_refptr<OpDriver>& driver : pending_ops) {
OpId op_id = driver->GetOpId();
// An op which doesn't have an OpId hasn't been submitted for replication yet
// and thus has no need to anchor the log.
if (op_id.IsInitialized()) {
ret.for_durability = std::min(ret.for_durability, op_id.index());
}
}
VLOG_WITH_PREFIX(4) << "Log GC: With Op retention: "
<< Substitute("{dur: $0, peers: $1}", ret.for_durability, ret.for_peers);
return ret;
}
Status TabletReplica::GetReplaySizeMap(map<int64_t, int64_t>* replay_size_map) const {
RETURN_NOT_OK(CheckRunning());
log_->GetReplaySizeMap(replay_size_map);
return Status::OK();
}
Status TabletReplica::GetGCableDataSize(int64_t* retention_size) const {
RETURN_NOT_OK(CheckRunning());
*retention_size = log_->GetGCableDataSize(GetRetentionIndexes());
return Status::OK();
}
Status TabletReplica::StartFollowerOp(const scoped_refptr<ConsensusRound>& round) {
{
std::lock_guard lock(lock_);
if (state_ != RUNNING && state_ != BOOTSTRAPPING) {
return Status::IllegalState(TabletStatePB_Name(state_));
}
}
consensus::ReplicateMsg* replicate_msg = round->replicate_msg();
DCHECK(replicate_msg->has_timestamp());
unique_ptr<Op> op;
switch (replicate_msg->op_type()) {
case WRITE_OP:
{
DCHECK(replicate_msg->has_write_request()) << "WRITE_OP replica"
" op must receive a WriteRequestPB";
unique_ptr<WriteOpState> op_state(
new WriteOpState(
this,
&replicate_msg->write_request(),
replicate_msg->has_request_id() ? &replicate_msg->request_id() : nullptr));
op_state->SetResultTracker(result_tracker_);
op.reset(new WriteOp(std::move(op_state), consensus::FOLLOWER));
break;
}
case PARTICIPANT_OP:
{
DCHECK(replicate_msg->has_participant_request()) << "PARTICIPANT_OP replica"
" op must receive a ParticipantRequestPB";
unique_ptr<ParticipantOpState> op_state(
new ParticipantOpState(
this,
tablet_->txn_participant(),
&replicate_msg->participant_request()));
op_state->SetResultTracker(result_tracker_);
op.reset(new ParticipantOp(std::move(op_state), consensus::FOLLOWER));
break;
}
case ALTER_SCHEMA_OP:
{
DCHECK(replicate_msg->has_alter_schema_request()) << "ALTER_SCHEMA_OP replica"
" op must receive an AlterSchemaRequestPB";
unique_ptr<AlterSchemaOpState> op_state(
new AlterSchemaOpState(this, &replicate_msg->alter_schema_request(),
nullptr));
op.reset(
new AlterSchemaOp(std::move(op_state), consensus::FOLLOWER));
break;
}
default:
LOG(FATAL) << "Unsupported Operation Type";
}
// TODO(todd) Look at wiring the stuff below on the driver
OpState* state = op->state();
state->set_consensus_round(round);
scoped_refptr<OpDriver> driver;
RETURN_NOT_OK(NewFollowerOpDriver(std::move(op), &driver));
// A raw pointer is required to avoid a refcount cycle.
auto* driver_raw = driver.get();
state->consensus_round()->SetConsensusReplicatedCallback(
[driver_raw](const Status& s) { driver_raw->ReplicationFinished(s); });
driver->ExecuteAsync();
return Status::OK();
}
void TabletReplica::FinishConsensusOnlyRound(ConsensusRound* round) {
consensus::ReplicateMsg* replicate_msg = round->replicate_msg();
consensus::OperationType op_type = replicate_msg->op_type();
// The timestamp of a Raft no-op used to assert term leadership is guaranteed
// to be lower than the timestamps of writes in the same terms and those
// thereafter. As such, we are able to bump the MVCC safe time with the
// timestamps of such no-ops, as further op timestamps are
// guaranteed to be higher than them.
//
// It is important for MVCC safe time updates to be serialized with respect
// to ops. To ensure that we only advance the safe time with the
// no-op of term N after all ops of term N-1 have been prepared, we
// run the adjustment function on the prepare thread, which is the same
// mechanism we use to serialize ops.
//
// If the 'timestamp_in_opid_order' flag is unset, the no-op is assumed to be
// the Raft leadership no-op from a version of Kudu that only supported creating
// a no-op to assert a new leadership term, in which case it would be in order.
if (op_type == consensus::NO_OP &&
(!replicate_msg->noop_request().has_timestamp_in_opid_order() ||
replicate_msg->noop_request().timestamp_in_opid_order()) &&
PREDICT_TRUE(FLAGS_prevent_kudu_2233_corruption)) {
DCHECK(replicate_msg->has_noop_request());
int64_t ts = replicate_msg->timestamp();
// We are guaranteed that the prepare pool token is running now because
// TabletReplica::Stop() stops RaftConsensus before it stops the prepare
// pool token and this callback is invoked while the RaftConsensus lock is
// held.
CHECK_OK(prepare_pool_token_->Submit([this, ts] {
std::lock_guard l(lock_);
if (state_ == RUNNING || state_ == BOOTSTRAPPING) {
tablet_->mvcc_manager()->AdjustNewOpLowerBound(Timestamp(ts));
}
}));
}
}
Status TabletReplica::NewLeaderOpDriver(unique_ptr<Op> op,
scoped_refptr<OpDriver>* driver,
MonoTime deadline) {
scoped_refptr<OpDriver> op_driver = new OpDriver(
&op_tracker_,
consensus_.get(),
log_.get(),
prepare_pool_token_.get(),
apply_pool_,
&op_order_verifier_,
deadline);
RETURN_NOT_OK(op_driver->Init(std::move(op), consensus::LEADER));
*driver = std::move(op_driver);
return Status::OK();
}
Status TabletReplica::NewFollowerOpDriver(unique_ptr<Op> op,
scoped_refptr<OpDriver>* driver) {
scoped_refptr<OpDriver> op_driver = new OpDriver(
&op_tracker_,
consensus_.get(),
log_.get(),
prepare_pool_token_.get(),
apply_pool_,
&op_order_verifier_);
RETURN_NOT_OK(op_driver->Init(std::move(op), consensus::FOLLOWER));
*driver = std::move(op_driver);
return Status::OK();
}
void TabletReplica::RegisterMaintenanceOps(MaintenanceManager* maint_mgr) {
// Taking state_change_lock_ ensures that we don't shut down concurrently with
// this last start-up task.
std::lock_guard state_change_lock(state_change_lock_);
if (state() != RUNNING) {
LOG(WARNING) << "Not registering maintenance operations for " << tablet_
<< ": tablet not in RUNNING state";
return;
}
vector<unique_ptr<MaintenanceOp>> maintenance_ops;
maintenance_ops.emplace_back(new FlushMRSOp(this));
maint_mgr->RegisterOp(maintenance_ops.back().get());
maintenance_ops.emplace_back(new FlushDeltaMemStoresOp(this));
maint_mgr->RegisterOp(maintenance_ops.back().get());
maintenance_ops.emplace_back(new LogGCOp(this));
maint_mgr->RegisterOp(maintenance_ops.back().get());
std::shared_ptr<Tablet> tablet;
{
std::lock_guard l(lock_);
DCHECK(maintenance_ops_.empty());
maintenance_ops_ = std::move(maintenance_ops);
tablet = tablet_;
}
tablet->RegisterMaintenanceOps(maint_mgr);
}
void TabletReplica::CancelMaintenanceOpsForTests() {
std::lock_guard l(lock_);
for (auto& op : maintenance_ops_) {
op->CancelAndDisable();
}
}
void TabletReplica::UnregisterMaintenanceOps() {
DCHECK(state_change_lock_.is_locked());
decltype(maintenance_ops_) ops;
{
std::lock_guard l(lock_);
ops = std::move(maintenance_ops_);
maintenance_ops_.clear();
}
for (auto& op : ops) {
op->Unregister();
}
}
size_t TabletReplica::OnDiskSize() const {
size_t ret = 0;
// Consensus metadata.
if (consensus_ != nullptr) {
ret += consensus_->MetadataOnDiskSize();
}
shared_ptr<Tablet> tablet;
scoped_refptr<Log> log;
{
std::lock_guard l(lock_);
tablet = tablet_;
log = log_;
}
if (tablet) {
ret += tablet->OnDiskSize();
}
if (log) {
ret += log->OnDiskSize();
}
return ret;
}
Status TabletReplica::CountLiveRows(uint64_t* live_row_count) const {
shared_ptr<Tablet> tablet;
{
std::lock_guard l(lock_);
tablet = tablet_;
}
if (!tablet) {
return Status::IllegalState("The tablet is shut down.");
}
return tablet->CountLiveRows(live_row_count);
}
uint64_t TabletReplica::CountLiveRowsNoFail() const {
uint64_t live_row_count = 0;
ignore_result(CountLiveRows(&live_row_count));
return live_row_count;
}
void TabletReplica::UpdateTabletStats(vector<string>* dirty_tablets) {
// It's necessary to check the state before accessing 'consensus_'.
if (RUNNING != state()) {
return;
}
ReportedTabletStatsPB pb;
pb.set_on_disk_size(OnDiskSize());
uint64_t live_row_count;
Status s = CountLiveRows(&live_row_count);
if (s.ok()) {
pb.set_live_row_count(live_row_count);
}
// We cannot hold 'lock_' while calling RaftConsensus::role() because
// it may invoke TabletReplica::StartFollowerOp() and lead to
// a deadlock.
RaftPeerPB::Role role = consensus_->role();
std::lock_guard l(lock_);
if (stats_pb_.on_disk_size() != pb.on_disk_size() ||
stats_pb_.live_row_count() != pb.live_row_count()) {
if (consensus::RaftPeerPB_Role_LEADER == role) {
dirty_tablets->emplace_back(tablet_id());
}
stats_pb_ = std::move(pb);
}
}
ReportedTabletStatsPB TabletReplica::GetTabletStats() const {
std::lock_guard l(lock_);
return stats_pb_;
}
bool TabletReplica::ShouldRunTxnCoordinatorStalenessTask() {
if (!txn_coordinator_) {
return false;
}
std::lock_guard l(lock_);
if (state_ != RUNNING) {
LOG(WARNING) << Substitute("The tablet is not running. State: $0",
TabletStatePB_Name(state_));
return false;
}
txn_coordinator_task_counter_++;
return true;
}
bool TabletReplica::ShouldRunTxnCoordinatorStateChangedTask() {
if (!txn_coordinator_) {
return false;
}
std::lock_guard l(lock_);
// We only check if the tablet is shutting down here, since replica
// state change can happen even when the tablet is not running yet.
if (state_ == STOPPING || state_ == STOPPED) {
LOG(WARNING) << Substitute("The tablet is already shutting down or shutdown. State: $0",
TabletStatePB_Name(state_));
return false;
}
txn_coordinator_task_counter_++;
return true;
}
void TabletReplica::DecreaseTxnCoordinatorTaskCounter() {
DCHECK(txn_coordinator_);
std::lock_guard l(lock_);
txn_coordinator_task_counter_--;
DCHECK_GE(txn_coordinator_task_counter_, 0);
}
class ParticipantBeginTxnCallback : public OpCompletionCallback {
public:
ParticipantBeginTxnCallback(RegisteredTxnCallback cb,
unique_ptr<ParticipantRequestPB> req)
: cb_(std::move(cb)),
req_(std::move(req)),
txn_id_(req_->op().txn_id()) {
DCHECK(req_->has_tablet_id());
DCHECK(req_->has_op());
DCHECK(req_->op().has_txn_id());
DCHECK(req_->op().has_type());
DCHECK_EQ(ParticipantOpPB::BEGIN_TXN, req_->op().type());
DCHECK_LE(0, txn_id_);
}
void OpCompleted() override {
if (status_.IsIllegalState() &&
code_ == TabletServerErrorPB::TXN_OP_ALREADY_APPLIED) {
// This is the case of duplicate calls to TxnStatusManager to begin a
// transaction for a participant tablet. Those calls may happen if a
// transactional write request arrives at a tablet server which hasn't
// yet served a write request in the context of the specified
// transaction.
return cb_(Status::OK(), TabletServerErrorPB::UNKNOWN_ERROR);
}
return cb_(status_, code_);
}
private:
RegisteredTxnCallback cb_;
unique_ptr<ParticipantRequestPB> req_;
const int64_t txn_id_;
};
void TabletReplica::BeginTxnParticipantOp(int64_t txn_id, RegisteredTxnCallback began_txn_cb) {
auto s = CheckRunning();
if (PREDICT_FALSE(!s.ok())) {
return began_txn_cb(s, TabletServerErrorPB::UNKNOWN_ERROR);
}
unique_ptr<ParticipantRequestPB> req(new ParticipantRequestPB);
req->set_tablet_id(tablet_id());
{
ParticipantOpPB op_pb;
op_pb.set_txn_id(txn_id);
op_pb.set_type(ParticipantOpPB::BEGIN_TXN);
*req->mutable_op() = std::move(op_pb);
}
unique_ptr<ParticipantOpState> op_state(
new ParticipantOpState(this, tablet()->txn_participant(), req.get()));
op_state->set_completion_callback(unique_ptr<OpCompletionCallback>(
new ParticipantBeginTxnCallback(began_txn_cb, std::move(req))));
s = SubmitTxnParticipantOp(std::move(op_state));
VLOG(3) << Substitute(
"submitting BEGIN_TXN for participant $0 (txn ID $1): $2",
tablet_id(), txn_id, s.ToString());
if (PREDICT_FALSE(!s.ok())) {
// Now it's time to respond with appropriate error status to the RPCs
// corresponding to the pending write operations.
return began_txn_cb(s, TabletServerErrorPB::UNKNOWN_ERROR);
}
}
void TabletReplica::MakeUnavailable(const Status& error) {
std::shared_ptr<Tablet> tablet;
{
std::lock_guard lock(lock_);
tablet = tablet_;
for (auto& op : maintenance_ops_) {
op->CancelAndDisable();
}
}
// Stop the Tablet from doing further I/O.
if (tablet) {
tablet->Stop();
}
// Set the error; when the replica is shut down, it will end up FAILED.
SetError(error);
}
Status TabletReplica::TxnOpDispatcher::Dispatch(
std::unique_ptr<WriteOpState> op,
const std::function<Status(int64_t txn_id, RegisteredTxnCallback cb)>& scheduler) {
const auto txn_id = op->request()->txn_id();
std::lock_guard guard(lock_);
if (PREDICT_FALSE(unregistered_)) {
KLOG_EVERY_N_SECS(WARNING, 10)
<< Substitute("received request for unregistered TxnOpDispatcher (txn ID $0)",
txn_id)
<< THROTTLE_MSG;
// TODO(aserbin): Status::ServiceUnavailable() is more appropriate here?
return Status::IllegalState(
"tablet replica could not accept txn write operation");
}
if (preliminary_tasks_completed_) {
// All preparatory work is done: submit the write operation directly.
DCHECK(ops_queue_.empty());
return replica_->SubmitWrite(std::move(op));
}
DCHECK(!preliminary_tasks_completed_);
if (!inflight_status_.ok()) {
// Still in the process of cleaning up from a prior error condition: the
// request can be retried later.
return Status::ServiceUnavailable(Substitute(
"cleaning up from failure of prior write operation: $0",
inflight_status_.ToString()));
}
// This lambda is used as a status callback by the scheduler of "preliminary"
// tasks. In case of success, the callback is invoked after completion
// of the preliminary tasks with Status::OK(). In case of any failure down
// the road, the callback is called with corresponding non-OK status.
auto cb = [self = shared_from_this(), txn_id](const Status& s, TabletServerErrorPB::Code code) {
if (PREDICT_TRUE(s.ok())) {
// The all-is-well case: it's time to submit all the write operations
// accumulated in the queue.
auto submit_status = self->Submit();
if (PREDICT_FALSE(!submit_status.ok())) {
// If submitting the accumulated write operations fails, it's necessary
// to remove this TxnOpDispatcher entry from the registry.
WARN_NOT_OK(submit_status, "failed to submit pending txn write requests");
CHECK_OK(self->replica_->UnregisterTxnOpDispatcher(
txn_id, false/*abort_pending_ops*/));
}
} else {
// Something went wrong: cancel all the pending write operations
self->Cancel(s, code);
CHECK_OK(self->replica_->UnregisterTxnOpDispatcher(
txn_id, false/*abort_pending_ops*/));
}
};
// If nothing is in the queue yet after checking for the state of all other
// related fields above, this is the very first request received by this
// dispatcher and it's time to schedule the preliminary tasks.
if (ops_queue_.empty()) {
RETURN_NOT_OK(scheduler(txn_id, std::move(cb)));
}
return EnqueueUnlocked(std::move(op));
}
Status TabletReplica::TxnOpDispatcher::Submit() {
decltype(ops_queue_) failed_ops;
Status failed_status;
{
std::lock_guard guard(lock_);
DCHECK(!preliminary_tasks_completed_);
while (!ops_queue_.empty()) {
auto op = std::move(ops_queue_.front());
ops_queue_.pop_front();
// Store the information necessary to recreate the WriteOp instance: this is
// useful if TabletReplica::SubmitWrite() returns non-OK status.
// The pointers to the replica, request, and response are kept valid until
// the corresponding RPC is responded to, and the RPC response is sent
// upon invoking completion callback for the 'op'. Receiving non-OK status
// from TabletReplica::SubmitWrite() is a guarantee that the completion
// callback hasn't been called yet, so these pointers stay valid.
TabletReplica* replica = op->tablet_replica();
const tserver::WriteRequestPB* request = op->request();
const bool has_request_id = op->has_request_id();
rpc::RequestIdPB request_id;
if (has_request_id) {
request_id = op->request_id();
}
tserver::WriteResponsePB* response = op->response();
auto s = replica_->SubmitWrite(std::move(op));
if (PREDICT_FALSE(!s.ok())) {
// Put the operation back into the queue if SubmitWrite() fails: this
// is necessary to respond back to the corresponding RPC, as needed.
ops_queue_.emplace_front(new WriteOpState(
replica, request, has_request_id ? &request_id : nullptr, response));
inflight_status_ = s;
break;
}
}
if (PREDICT_TRUE(inflight_status_.ok())) {
DCHECK(ops_queue_.empty());
preliminary_tasks_completed_ = true;
return Status::OK();
}
DCHECK(!inflight_status_.ok());
DCHECK(!ops_queue_.empty());
failed_status = inflight_status_;
std::swap(failed_ops, ops_queue_);
}
return RespondWithStatus(failed_status,
TabletServerErrorPB::UNKNOWN_ERROR,
std::move(failed_ops));
}
void TabletReplica::TxnOpDispatcher::Cancel(const Status& status,
TabletServerErrorPB::Code code) {
CHECK(!status.ok());
KLOG_EVERY_N_SECS(WARNING, 1)
<< Substitute("$0: cancelling pending write operations",
status.ToString()) << THROTTLE_MSG;
decltype(ops_queue_) ops;
{
std::lock_guard guard(lock_);
inflight_status_ = status;
std::swap(ops, ops_queue_);
}
RespondWithStatus(status, code, std::move(ops));
}
Status TabletReplica::TxnOpDispatcher::MarkUnregistered() {
std::lock_guard guard(lock_);
unregistered_ = true;
// If there are still pending write operations, return ServiceUnavailable()
// to let the caller retry later, if needed: there is a chance that pending
// write operations will complete when the call is retried.
if (PREDICT_FALSE(!ops_queue_.empty())) {
// It might be Status::IllegalState() instead, but ServiceUnavailable()
// better reflects the transient nature of this state.
return Status::ServiceUnavailable("there are pending txn write operations");
}
return Status::OK();
}
Status TabletReplica::TxnOpDispatcher::EnqueueUnlocked(unique_ptr<WriteOpState> op) {
// TODO(aserbin): do we need to track username coming with write operations
// to make sure there is no way to slip in write operations for
// transactions of other users?
DCHECK(lock_.is_locked());
if (PREDICT_FALSE(ops_queue_.size() >= max_queue_size_)) {
return Status::ServiceUnavailable("pending operations queue is at capacity");
}
if (PREDICT_FALSE(!inflight_status_.ok())) {
return inflight_status_;
}
ops_queue_.emplace_back(std::move(op));
return Status::OK();
}
Status TabletReplica::TxnOpDispatcher::RespondWithStatus(
const Status& status,
TabletServerErrorPB::Code code,
deque<unique_ptr<WriteOpState>> ops) {
// Invoke the callback for every operation in the queue.
for (auto& op : ops) {
auto* cb = op->completion_callback();
DCHECK(cb);
cb->set_error(status, code);
cb->OpCompleted();
}
return status;
}
Status FlushInflightsToLogCallback::WaitForInflightsAndFlushLog() {
// This callback is triggered prior to any TabletMetadata flush.
// The guarantee that we are trying to enforce is this:
//
// If an operation has been flushed to stable storage (eg a DRS or DeltaFile)
// then its COMMIT message must be present in the log.
//
// The purpose for this is so that, during bootstrap, we can accurately identify
// whether each operation has been flushed. If we don't see a COMMIT message for
// an operation, then we assume it was not completely applied and needs to be
// re-applied. Thus, if we had something on disk but with no COMMIT message,
// we'd attempt to double-apply the write, resulting in an error (eg trying to
// delete an already-deleted row).
//
// So, to enforce this property, we do two steps:
//
// 1) Wait for any operations which are already mid-Apply() to FinishApplying() in MVCC.
//
// Because the operations always enqueue their COMMIT message to the log
// before calling FinishApplying(), this ensures that any in-flight operations have
// their commit messages "en route".
//
// NOTE: we only wait for those operations that have started their Apply() phase.
// Any operations which haven't yet started applying haven't made any changes
// to in-memory state: thus, they obviously couldn't have made any changes to
// on-disk storage either (data can only get to the disk by going through an in-memory
// store). Only those that have started Apply() could have potentially written some
// data which is now on disk.
//
// Perhaps more importantly, if we waited on operations that hadn't started their
// Apply() phase, we might be waiting forever -- for example, if a follower has been
// partitioned from its leader, it may have operations sitting around in flight
// for quite a long time before eventually aborting or committing. This would
// end up blocking all flushes if we waited on it.
//
// 2) Flush the log
//
// This ensures that the above-mentioned commit messages are not just enqueued
// to the log, but also on disk.
VLOG(1) << "T " << tablet_->metadata()->tablet_id()
<< ": waiting for in-flight ops to apply";
LOG_SLOW_EXECUTION(WARNING, 200, "applying in-flights took a long time") {
RETURN_NOT_OK(tablet_->mvcc_manager()->WaitForApplyingOpsToApply());
}
VLOG(1) << "T " << tablet_->metadata()->tablet_id()
<< ": waiting for the log queue to be flushed";
LOG_SLOW_EXECUTION(WARNING, 200, "flushing the Log queue took a long time") {
RETURN_NOT_OK(log_->WaitUntilAllFlushed());
}
return Status::OK();
}
} // namespace tablet
} // namespace kudu
```
|
```javascript
var searchData=
[
['compat_2edox',['compat.dox',['../compat_8dox.html',1,'']]],
['compiling_20glfw',['Compiling GLFW',['../compile.html',1,'']]],
['compile_2edox',['compile.dox',['../compile_8dox.html',1,'']]],
['context_20handling',['Context handling',['../group__context.html',1,'(Global Namespace)'],['../context.html',1,'(Global Namespace)']]],
['context_2edox',['context.dox',['../context_8dox.html',1,'']]]
];
```
|
The Fetu Afahye is a festival celebrated by the chiefs and people of Cape Coast in the Central Region of Ghana on the first Saturday of September every year. It is celebrated annually by the Oguaa people of Cape Coast because in the past an outbreak of disease killed many of the people. The people prayed to the gods to help them get rid of the disease, and the festival is thus celebrated to keep the town clean and to prevent another epidemic from befalling the people.
History
Fetu Afahye is an annual festival celebrated by the people and chiefs of the Cape Coast Traditional Area in the Central Region of Ghana. According to oral history, Cape Coast was once struck by a devastating plague, which led its people to call for an intervention from their gods. The inhabitants of Cape Coast and its environs are believed to have eliminated the plague with the help of their gods, hence the name "Fetu" – originally Efin Tu ("doing away with dirt"). The festival is also observed to commemorate a bumper harvest from the sea and to perform rituals thanking the 77 gods of the Oguaa Traditional Area.
The Fetu Afahye was once banned by the colonial administration of the country, and specifically of Cape Coast, and was termed "Black Christmas" to depict it as a bad traditional phenomenon. The Omanhen (paramount chief) of the time, Osabarimba Kodwo Mbra V, through his Okyeame Ekow Atta, debunked this conception as misleading. The festival finally resumed between 1948 and 1996, after religious struggles involving various important personalities in the Oguaa Traditional Area. It is now used as a calendar for the farming seasons of the Oguaa Traditional Area, a phenomenon also referred to as "Afehyia", meaning "a loop of seasons".
Celebrating the Fetu Afahye
Preparation for the festival starts in the last week of August. During this period, the Oguaa Traditional Area receives many visitors from all walks of life, as well as natives of the Oguaa state living in other parts of the country or abroad. The actual celebration follows on the first Saturday of September.
Prior to the actual celebration of the festival, the Omanhen is confined for a week. During this period of confinement, he meditates and asks for wisdom from the creator (Aboadze) and the ancestors, as well as seeking medical attention where necessary from his physician, to enable him to come out both physically and mentally fit for the impending activities and to deliver his tasks for the success of the festival. At the end of the Omanhen's confinement, he appears in public in pomp and dignity and goes to the stool house to pour libation, seeking blessings from the 77 gods of the Oguaa state, who the people believe steer the affairs of the Oguaa Traditional Area.
It is also noted that before the festival, as tradition demands, all drumming festivities and drumming sounds are banned, as is fishing in the Fosu Lagoon, which stretches from the Government Central Hospital to a place called Aquarium, to ensure a quiet and peaceful environment. It is believed that this is done to allow the spirits of the Oguaa state to take over and lead the planners of the festival. This is usually observed before 1 September.
The custodians of the Fosu Lagoon (Amissafo) of the Oguaa Traditional Area also pour libation at the estuary of the lagoon and invoke the spirits of their ancestors to ward off any bad omens that may befall visitors to the festival. The libation is also poured to call for a bumper harvest of fish and crops – in all, for prosperity.
Another important event observed is the "Amuntumadeze" – literally meaning "health day" – a day when both old and young make efforts to clean the environment, including clearing waste from choked gutters and painting all buildings in the area, with the aim of beautifying the vicinity before the actual grand durbar of "Bakatue".
A vigil is observed at the Fosu Lagoon, near its shrine, on the last Monday of August. A large number of people gather at the shrine to catch a glimpse of the display by the priests and priestesses of the traditional area, an exhibition that normally lasts from nightfall until the next morning. During the night, the priests and priestesses drum and dance, and the spirits of the ancestors are invoked to foretell what will happen in the coming year. The following Tuesday also sees many activities, such as rituals carried out at the Fosu shrine, and finally a daytime regatta and canoe riding on the Fosu Lagoon after the Omanhen's libation at the estuary.
As a result of the earlier ban on fishing in the Fosu Lagoon, the Omanhen is the first person to cast his net, three consecutive times, to officially reopen the lagoon to the general public. A plentiful catch for the Omanhen indicates a prosperous fishing season to come. The event, known as "Bakatue", draws a large crowd and is accompanied by the firing of musketry.
To give a warm welcome to natives who have travelled home, the chiefs of the Oguaa Traditional Area set aside Wednesday for receiving and welcoming citizens of Cape Coast. The day is characterized by drumming and dancing by the Asafo companies, the seven traditional militia groups, and is also noted as a day of socialization and resolution of issues.
A religious ceremony is held in front of the Nana Paprata shrine on the Thursday night, with accompanying rituals and dancing ("Adammba") to summon the spirits of the ancestors and enable the priests and priestesses to soothsay. The ceremony, whose main aim is to cleanse the Oguaa Traditional Area of any bad spirit, normally lasts until the next morning. During this same period a bull is needed to purify the Oguaa Traditional Area; it is first sent to Nana Tabir's shrine to be cleansed for sacrifice on the climax day, and is later sacrificed at Papratam, the durbar grounds of the Oguaa Traditional Area, marked by a silk cotton tree. There, on the climax day, the Omanhen sits in state with his divisional and sub-chiefs, flanked by the council of elders, and addresses the people and visitors of the Oguaa Traditional Area, recounting events of the past. After the state address, the Omanhen walks towards the entrance, flanked by his sub-chiefs and divisional chiefs, to Tabir's shrine, where the bull is tied by its limbs. The Omanhen pours libation and performs various rituals, calling on the forefathers to intervene in the Oguaa state, and at this juncture takes a dagger and slaughters the bull for the gods.
After the Omanhen's sacrifice, the Fetu festival reaches its climax on the first Saturday of September. This day attracts a large and attentive audience for the procession of the Asafo companies, which usually parade along the streets of Cape Coast from Kotokuraba through Chapel Square to the chief's palace, and people from all parts of the country visit Cape Coast to observe it. A durbar of chiefs is held to deliberate on issues affecting the Oguaa Traditional Area, with the seven Asafo companies contributing to its security. The day is marked by drumming, dancing and the pouring of libation to usher the state into a peaceful and prosperous new year.
Contemporary elements, such as the Afahye state dance, local cuisine, football games, clothing and traditional wear, among many other cultural expressions, add colour to the festival, particularly the stylish and eye-catching dance of the Miss Afahye.
After the festive days are over, a grand ceremony is held on Sunday, when a joint service of all Christian denominations takes place at Victoria Park to offer thanks to Almighty God for helping the Oguaa Traditional Area to have a peaceful festival. The day is also an occasion to appeal for funds for the Oguaa Traditional Area. In view of this, the Omanhen, his divisional chiefs and elders attend the church service and take the opportunity to announce the date of next year's celebration.
Important Highlights of the Fetu Afahye
The Omanhen is confined for a week prior to the actual celebration.
Pouring of libation to the 77 gods
All drumming festivities and drumming sounds are banned
Banning on fishing in the Fosu Lagoon
Observation of "Amuntumadeze", a day dedicated to the general cleaning of the towns.
The climax of the event is held on the first Saturday in September.
A grand ceremony is held on Sunday, when a joint service of all Christian denominations is held at Victoria Park.
References
External links
"Fetu-Afahye Festival", "Discover Ghana."
Festivals in Ghana
Annual events in Ghana
Cape Coast
|
There were no defending champions as the previous edition of the tournament was canceled due to Hurricane Ian.
Luke Johnson and Skander Mansouri won the title after defeating Nicholas Bybel and Oliver Crawford 6–4, 6–4 in the final.
Seeds
Draw
References
External links
Main draw
LTP Men's Open - Doubles
|
```shell
Fast file indexing with `updatedb` and `locate`
Finding files with regexes
Identify files using the `file` command
Preserving permissions and structure with `rsync`
Easy way of sharing files
```
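The techniques listed above can be sketched with a few portable commands. This is a minimal illustration: the paths under `/tmp` are made up for the demo, `updatedb`/`locate` are omitted because they need a prebuilt index (typically `sudo updatedb` first), and the `file` and `rsync` calls are guarded in case those tools are absent.

```shell
# Create a small sample tree to search.
mkdir -p /tmp/demo_dir/sub
printf 'hello\n' > /tmp/demo_dir/notes.txt

# Find files whose whole path matches a regex (find's -regex tests the full path).
find /tmp/demo_dir -regex '.*\.txt'

# Identify a file by its contents rather than its extension.
command -v file >/dev/null && file /tmp/demo_dir/notes.txt || true

# Mirror the tree, preserving permissions, timestamps and structure (-a = archive).
command -v rsync >/dev/null && rsync -a /tmp/demo_dir/ /tmp/demo_copy/ || true
```

Note that `find -regex` matches the entire path, not just the basename, which is why the pattern starts with `.*`.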
|
Phazer is the name of a model of snowmobile produced by the Yamaha Motor Company. Introduced in 1984, it became a popular model for Yamaha and spawned several follow-up models (such as the Phazer II, Phazer Deluxe, Phazer Mountain Lite, Phazer FX, and Phazer GT); its design features were also incorporated into other models (such as later-model Exciters as well as the Venture Lite).
Of particular note on the Phazer is the way in which the headlight is directly connected to the handlebars so that the headlight follows the direction of a turn. This feature was new among Yamaha models when it was introduced in 1984, and Yamaha claims that it and other features 'began a new era in snowmobiling'. The Phazer was always known for its sharp handling, free-revving 485cc fan-cooled engine, solid reliability (most notably the Phazer II), light weight and, most importantly, value. Ride quality was akin to that of most snowmobiles of the era.
Snowmobiles under the original Phazer name appeared until 1989; newer models such as the Phazer II, Phazer Deluxe and Phazer SS were sold until 1998, when the line was revamped into a more traditional-looking snowmobile known as the Phazer 500 (1999–2001). For the 2002 model year the Phazer nameplate was axed, beginning a five-year absence.
Yamaha released a bold new snowmobile under the Phazer nameplate for the 2007 model year, featuring a radical new design inspired by the YZ250F motocross bike. The new Phazer is powered by an 80 hp 499cc fuel-injected liquid-cooled four-stroke twin that makes its peak power at 11,000 RPM; the engine is based on Yamaha's highly successful and dependable YZ250F dirt bike engine. With a 487-pound estimated dry weight, it is one of the lightest production four-strokes.
The 2007-era Yamaha Phazer was discontinued at the end of the 2018 model year amidst Yamaha's restructuring of their model lineups for 2019.
References
External links
2009 Model Homepage from the Wayback Machine
Phazer
Snowmobile brands
Vehicles introduced in 1984
|
```python
# -*- coding: utf-8 -*-
import logging
import os
import subprocess
import sys
import time
import scrapydo
import utils
from importlib import import_module
VALIDATORS = {
'HttpBinSpider': 'crawler.spiders.validator.httpbin',
# 'DoubanSpider':'ipproxytool.spiders.validator.douban',
# 'AssetStoreSpider':'ipproxytool.spiders.validator.assetstore',
# 'GatherSpider' :'ipproxytool.spiders.validator.gather',
# 'HttpBinSpider' :'ipproxytool.spiders.validator.httpbin',
# 'SteamSpider' :'ipproxytool.spiders.validator.steam',
# 'BossSpider' :'ipproxytool.spiders.validator.boss',
# 'LagouSpider' :'ipproxytool.spiders.validator.lagou',
# 'LiepinSpider' :'ipproxytool.spiders.validator.liepin',
# 'JDSpider' :'ipproxytool.spiders.validator.jd',
# 'BBSSpider' :'ipproxytool.spiders.validator.bbs',
# 'ZhiLianSpider' :'ipproxytool.spiders.validator.zhilian',
# 'AmazonCnSpider' :'ipproxytool.spiders.validator.amazoncn',
}
scrapydo.setup()
def validator():
    process_list = []
    # Launch one subprocess per validator spider.
    for item, path in VALIDATORS.items():
        module = import_module(path)
        spider = getattr(module, item)
        popen = subprocess.Popen(['python', 'run_spider.py', spider.name], shell=False)
        data = {
            'name': spider.name,
            'popen': popen,
        }
        process_list.append(data)
    while True:
        time.sleep(60)
        for process in process_list:
            popen = process.get('popen', None)
            utils.log('name:%s poll:%s' % (process.get('name'), popen.poll()))
            # Restart the spider if its process has exited cleanly.
            if popen is not None and popen.poll() == 0:
                name = process.get('name')
                utils.log('%(name)s spider finish...\n' % {'name': name})
                process_list.remove(process)
                p = subprocess.Popen(['python', 'run_spider.py', name], shell=False)
                data = {
                    'name': name,
                    'popen': p,
                }
                process_list.append(data)
                time.sleep(1)
                break
if __name__ == '__main__':
    os.chdir(sys.path[0])
    if not os.path.exists('log'):
        os.makedirs('log')
    logging.basicConfig(
        filename='log/validator.log',
        format='%(asctime)s: %(message)s',
        level=logging.DEBUG
    )
    validator()
```
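The watch-and-restart loop above can be distilled into a small self-contained sketch. Here `supervise_once` and the `python -c 'pass'` child are illustrative stand-ins for the real `['python', 'run_spider.py', name]` invocations, with a much shorter poll period:

```python
import subprocess
import sys
import time

def supervise_once(cmd):
    """Run cmd; when it exits cleanly, restart it once and return both exit codes."""
    codes = []
    popen = subprocess.Popen(cmd, shell=False)
    while True:
        time.sleep(0.05)  # poll period (60 s in the script above)
        if popen is not None and popen.poll() == 0:
            codes.append(popen.returncode)
            if len(codes) == 2:
                return codes
            # Restart the finished child, mirroring the respawn in validator().
            popen = subprocess.Popen(cmd, shell=False)

print(supervise_once([sys.executable, '-c', 'pass']))  # → [0, 0]
```

`Popen.poll()` returns `None` while the child is running and its exit code afterwards, which is what lets the loop distinguish live spiders from finished ones.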
|
Evenus is a butterfly genus in the family Lycaenidae, with species ranging from North to South America.
Species list
Evenus regalis (Cramer, 1775) – type species
Evenus coronata (Hewitson, 1865) – crowned hairstreak
Evenus gabriela (Cramer, 1775)
Evenus batesii (Hewitson, 1865)
Evenus candidus (Druce, 1907)
Evenus sponsa (Möschler, 1877)
Evenus sumptuosa (Druce, 1907)
Evenus tagyra (Hewitson, 1865)
Evenus floralia (Druce, 1907)
Evenus temathea (Hewitson, 1865)
Evenus satyroides (Hewitson, 1865)
Evenus latreillii (Hewitson, 1865)
Evenus felix (Neild and Balint, 2014)
References
External links
Funet Taxonomy Distribution Image of Evenus regalis
Evenus images at EOL
Eumaeini
Lycaenidae of South America
Lycaenidae genera
Taxa named by Jacob Hübner
|
The Japanese martial art judo has been practised in Yukon, Canada since at least 1950.
History
Judo was introduced to Canada in the early twentieth century by Japanese migrants, and the first dojo was established in Vancouver by Shigetaka Sasaki in 1924. It was initially limited to British Columbia, but spread throughout the country following the forced expulsion, internment, and resettlement of Japanese-Canadians after Japan entered the Second World War in 1941.
It is difficult to determine when judo was first introduced to Yukon, but it was taught to members of the Forest Girl Guards and Junior Forest Wardens as part of their physical education in 1950, and courses were offered at the Whitehorse Gymnasium in 1953. The first dedicated judo club was the Keno Hill Judo Club in Elsa, founded in 1960 by Laurie Wayman, who had earned his nidan (second dan) at the Budokwai in London, England, and received financial support from the Budokwai to purchase mats. It is likely that Wayman was an employee of United Keno Hill Mines Ltd., given that many club members were employees and Wayman's successor as instructor, Fred Thode, was a Project Engineer for the company. At its peak, the club had 100 members.
The Whitehorse Judo Club was established at the Takhini Recreation Centre by Mike Waddell for members of the armed forces who were stationed in the city. By 1961 George Takahashi, a student of Thode, was in charge of the club. Chuck MacKenzie, one of Takahashi's students, founded his own club at the Whitehorse Elementary School in 1964, and went on to be Yukon's most successful judoka: he won the 1963 Yukon Judo Championships in the junior heavyweight division, won two gold ulus at the 1972 Arctic Winter Games, and was the first judoka from the Yukon to compete in the Senior National Championships, with George Peary as his coach. MacKenzie was inducted into the Sport Yukon Hall of Fame in 1982.
The Yukon Kodokan Black Belt Association was established as the judo governing body for the territory in 1974, and was replaced by Judo Yukon in 1996.
Competition
The 1973 Canadian National Judo Championships were held in Whitehorse.
See also
Judo in Canada
List of Canadian judoka
References
Yukon
Sport in Yukon
|
```java
package com.github.ma1co.openmemories.tweak;
import java.util.concurrent.TimeoutException;
public class Condition {
    public interface Runnable {
        boolean run();
    }

    // Polls the runnable every `interval` ms until it returns true, or until
    // roughly `timeout` ms of accumulated sleep time have elapsed (time spent
    // inside run() itself is not counted towards the timeout).
    public static void waitFor(Runnable runnable, long interval, long timeout) throws InterruptedException, TimeoutException {
        if (runnable.run())
            return;
        for (long t = 0; t < timeout; t += interval) {
            Thread.sleep(interval);
            if (runnable.run())
                return;
        }
        throw new TimeoutException("waitFor timed out");
    }
}
```
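A hypothetical usage sketch of the same polling pattern: the `ConditionDemo` class, its `Check` interface and the `waitFor` re-statement below are illustrative, not part of the original package. One thread flips a flag, another polls it until it becomes true:

```java
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicBoolean;

public class ConditionDemo {
    // Illustrative re-statement of the Condition utility above, so this
    // sketch is self-contained (the original lives in another package).
    interface Check { boolean run(); }

    static void waitFor(Check check, long interval, long timeout)
            throws InterruptedException, TimeoutException {
        if (check.run()) return;
        for (long t = 0; t < timeout; t += interval) {
            Thread.sleep(interval);
            if (check.run()) return;
        }
        throw new TimeoutException("waitFor timed out");
    }

    public static void main(String[] args) throws Exception {
        AtomicBoolean ready = new AtomicBoolean(false);
        // Flip the flag from another thread after roughly 50 ms.
        new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
            ready.set(true);
        }).start();
        waitFor(ready::get, 10, 1000); // polls every 10 ms, gives up after ~1 s
        System.out.println("condition met: " + ready.get());
    }
}
```

`waitFor` only returns once the check has succeeded, so the printed flag is always `true`; a check that never succeeds ends in a `TimeoutException` instead.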
|
```kotlin
package com.pinterest.ktlint.rule.engine.api
import com.pinterest.ktlint.rule.engine.core.api.RuleId
import dev.drewhamilton.poko.Poko
/**
* Lint error found by the [KtLintRuleEngine].
*
* [line]: line number (one-based)
* [col]: column number (one-based)
* [ruleId]: rule id
* [detail]: error message
* [canBeAutoCorrected]: flag indicating whether the error can be corrected by the rule if "format" is run
*/
@Poko
public class LintError(
public val line: Int,
public val col: Int,
public val ruleId: RuleId,
public val detail: String,
public val canBeAutoCorrected: Boolean,
)
```
|
```c++
//
//
// path_to_url
//
// Unless required by applicable law or agreed to in writing, software
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#include "source/fuzz/transformation_set_memory_operands_mask.h"
#include "gtest/gtest.h"
#include "source/fuzz/fuzzer_util.h"
#include "source/fuzz/instruction_descriptor.h"
#include "test/fuzz/fuzz_test_util.h"
namespace spvtools {
namespace fuzz {
namespace {
TEST(TransformationSetMemoryOperandsMaskTest, PreSpirv14) {
std::string shader = R"(
OpCapability Shader
%1 = OpExtInstImport "GLSL.std.450"
OpMemoryModel Logical GLSL450
OpEntryPoint Fragment %4 "main"
OpExecutionMode %4 OriginUpperLeft
OpSource ESSL 310
OpName %4 "main"
OpName %7 "Point3D"
OpMemberName %7 0 "x"
OpMemberName %7 1 "y"
OpMemberName %7 2 "z"
OpName %12 "global_points"
OpName %15 "block"
OpMemberName %15 0 "in_points"
OpMemberName %15 1 "in_point"
OpName %17 ""
OpName %133 "local_points"
OpMemberDecorate %7 0 Offset 0
OpMemberDecorate %7 1 Offset 4
OpMemberDecorate %7 2 Offset 8
OpDecorate %10 ArrayStride 16
OpMemberDecorate %15 0 Offset 0
OpMemberDecorate %15 1 Offset 192
OpDecorate %15 Block
OpDecorate %17 DescriptorSet 0
OpDecorate %17 Binding 0
%2 = OpTypeVoid
%3 = OpTypeFunction %2
%6 = OpTypeFloat 32
%7 = OpTypeStruct %6 %6 %6
%8 = OpTypeInt 32 0
%9 = OpConstant %8 12
%10 = OpTypeArray %7 %9
%11 = OpTypePointer Private %10
%12 = OpVariable %11 Private
%15 = OpTypeStruct %10 %7
%16 = OpTypePointer Uniform %15
%17 = OpVariable %16 Uniform
%18 = OpTypeInt 32 1
%19 = OpConstant %18 0
%20 = OpTypePointer Uniform %10
%24 = OpTypePointer Private %7
%27 = OpTypePointer Private %6
%30 = OpConstant %18 1
%132 = OpTypePointer Function %10
%135 = OpTypePointer Uniform %7
%145 = OpTypePointer Function %7
%4 = OpFunction %2 None %3
%5 = OpLabel
%133 = OpVariable %132 Function
%21 = OpAccessChain %20 %17 %19
OpCopyMemory %12 %21 Aligned 16
OpCopyMemory %133 %12 Volatile
OpCopyMemory %133 %12
%136 = OpAccessChain %135 %17 %30
%138 = OpAccessChain %24 %12 %19
OpCopyMemory %138 %136 None
%146 = OpAccessChain %145 %133 %30
%147 = OpLoad %7 %146 Volatile|Nontemporal|Aligned 16
%148 = OpAccessChain %24 %12 %19
OpStore %148 %147 Nontemporal
OpReturn
OpFunctionEnd
)";
for (auto env :
{SPV_ENV_UNIVERSAL_1_0, SPV_ENV_UNIVERSAL_1_1, SPV_ENV_UNIVERSAL_1_2,
SPV_ENV_UNIVERSAL_1_3, SPV_ENV_VULKAN_1_0, SPV_ENV_VULKAN_1_1}) {
const auto consumer = nullptr;
const auto context =
BuildModule(env, consumer, shader, kFuzzAssembleOption);
spvtools::ValidatorOptions validator_options;
ASSERT_TRUE(fuzzerutil::IsValidAndWellFormed(
context.get(), validator_options, kConsoleMessageConsumer));
TransformationContext transformation_context(
MakeUnique<FactManager>(context.get()), validator_options);
#ifndef NDEBUG
{
// Not OK: multiple operands are not supported pre SPIR-V 1.4.
TransformationSetMemoryOperandsMask transformation(
MakeInstructionDescriptor(21, spv::Op::OpCopyMemory, 3),
(uint32_t)spv::MemoryAccessMask::Nontemporal |
(uint32_t)spv::MemoryAccessMask::Volatile,
1);
ASSERT_DEATH(
transformation.IsApplicable(context.get(), transformation_context),
"Multiple memory operand masks are not supported");
}
#endif
// Not OK: the instruction is not a memory access.
ASSERT_FALSE(TransformationSetMemoryOperandsMask(
MakeInstructionDescriptor(21, spv::Op::OpAccessChain, 0),
(uint32_t)spv::MemoryAccessMask::MaskNone, 0)
.IsApplicable(context.get(), transformation_context));
// Not OK to remove Aligned
ASSERT_FALSE(TransformationSetMemoryOperandsMask(
MakeInstructionDescriptor(147, spv::Op::OpLoad, 0),
(uint32_t)spv::MemoryAccessMask::Volatile |
(uint32_t)spv::MemoryAccessMask::Nontemporal,
0)
.IsApplicable(context.get(), transformation_context));
{
TransformationSetMemoryOperandsMask transformation(
MakeInstructionDescriptor(147, spv::Op::OpLoad, 0),
(uint32_t)spv::MemoryAccessMask::Aligned |
(uint32_t)spv::MemoryAccessMask::Volatile,
0);
ASSERT_TRUE(
transformation.IsApplicable(context.get(), transformation_context));
ApplyAndCheckFreshIds(transformation, context.get(),
&transformation_context);
}
// Not OK to remove Aligned
ASSERT_FALSE(TransformationSetMemoryOperandsMask(
MakeInstructionDescriptor(21, spv::Op::OpCopyMemory, 0),
(uint32_t)spv::MemoryAccessMask::MaskNone, 0)
.IsApplicable(context.get(), transformation_context));
// OK: leaves the mask as is
ASSERT_TRUE(TransformationSetMemoryOperandsMask(
MakeInstructionDescriptor(21, spv::Op::OpCopyMemory, 0),
(uint32_t)spv::MemoryAccessMask::Aligned, 0)
.IsApplicable(context.get(), transformation_context));
{
// OK: adds Nontemporal and Volatile
TransformationSetMemoryOperandsMask transformation(
MakeInstructionDescriptor(21, spv::Op::OpCopyMemory, 0),
(uint32_t)spv::MemoryAccessMask::Aligned |
(uint32_t)spv::MemoryAccessMask::Nontemporal |
(uint32_t)spv::MemoryAccessMask::Volatile,
0);
ASSERT_TRUE(
transformation.IsApplicable(context.get(), transformation_context));
ApplyAndCheckFreshIds(transformation, context.get(),
&transformation_context);
}
// Not OK to remove Volatile
ASSERT_FALSE(TransformationSetMemoryOperandsMask(
MakeInstructionDescriptor(21, spv::Op::OpCopyMemory, 1),
(uint32_t)spv::MemoryAccessMask::Nontemporal, 0)
.IsApplicable(context.get(), transformation_context));
// Not OK to add Aligned
ASSERT_FALSE(TransformationSetMemoryOperandsMask(
MakeInstructionDescriptor(21, spv::Op::OpCopyMemory, 1),
(uint32_t)spv::MemoryAccessMask::Aligned |
(uint32_t)spv::MemoryAccessMask::Volatile,
0)
.IsApplicable(context.get(), transformation_context));
{
// OK: adds Nontemporal
TransformationSetMemoryOperandsMask transformation(
MakeInstructionDescriptor(21, spv::Op::OpCopyMemory, 1),
(uint32_t)spv::MemoryAccessMask::Nontemporal |
(uint32_t)spv::MemoryAccessMask::Volatile,
0);
ASSERT_TRUE(
transformation.IsApplicable(context.get(), transformation_context));
ApplyAndCheckFreshIds(transformation, context.get(),
&transformation_context);
}
{
// OK: adds Nontemporal (creates new operand)
TransformationSetMemoryOperandsMask transformation(
MakeInstructionDescriptor(21, spv::Op::OpCopyMemory, 2),
(uint32_t)spv::MemoryAccessMask::Nontemporal |
(uint32_t)spv::MemoryAccessMask::Volatile,
0);
ASSERT_TRUE(
transformation.IsApplicable(context.get(), transformation_context));
ApplyAndCheckFreshIds(transformation, context.get(),
&transformation_context);
}
{
// OK: adds Nontemporal and Volatile
TransformationSetMemoryOperandsMask transformation(
MakeInstructionDescriptor(138, spv::Op::OpCopyMemory, 0),
(uint32_t)spv::MemoryAccessMask::Nontemporal |
(uint32_t)spv::MemoryAccessMask::Volatile,
0);
ASSERT_TRUE(
transformation.IsApplicable(context.get(), transformation_context));
ApplyAndCheckFreshIds(transformation, context.get(),
&transformation_context);
}
{
// OK: removes Nontemporal, adds Volatile
TransformationSetMemoryOperandsMask transformation(
MakeInstructionDescriptor(148, spv::Op::OpStore, 0),
(uint32_t)spv::MemoryAccessMask::Volatile, 0);
ASSERT_TRUE(
transformation.IsApplicable(context.get(), transformation_context));
ApplyAndCheckFreshIds(transformation, context.get(),
&transformation_context);
}
std::string after_transformation = R"(
OpCapability Shader
%1 = OpExtInstImport "GLSL.std.450"
OpMemoryModel Logical GLSL450
OpEntryPoint Fragment %4 "main"
OpExecutionMode %4 OriginUpperLeft
OpSource ESSL 310
OpName %4 "main"
OpName %7 "Point3D"
OpMemberName %7 0 "x"
OpMemberName %7 1 "y"
OpMemberName %7 2 "z"
OpName %12 "global_points"
OpName %15 "block"
OpMemberName %15 0 "in_points"
OpMemberName %15 1 "in_point"
OpName %17 ""
OpName %133 "local_points"
OpMemberDecorate %7 0 Offset 0
OpMemberDecorate %7 1 Offset 4
OpMemberDecorate %7 2 Offset 8
OpDecorate %10 ArrayStride 16
OpMemberDecorate %15 0 Offset 0
OpMemberDecorate %15 1 Offset 192
OpDecorate %15 Block
OpDecorate %17 DescriptorSet 0
OpDecorate %17 Binding 0
%2 = OpTypeVoid
%3 = OpTypeFunction %2
%6 = OpTypeFloat 32
%7 = OpTypeStruct %6 %6 %6
%8 = OpTypeInt 32 0
%9 = OpConstant %8 12
%10 = OpTypeArray %7 %9
%11 = OpTypePointer Private %10
%12 = OpVariable %11 Private
%15 = OpTypeStruct %10 %7
%16 = OpTypePointer Uniform %15
%17 = OpVariable %16 Uniform
%18 = OpTypeInt 32 1
%19 = OpConstant %18 0
%20 = OpTypePointer Uniform %10
%24 = OpTypePointer Private %7
%27 = OpTypePointer Private %6
%30 = OpConstant %18 1
%132 = OpTypePointer Function %10
%135 = OpTypePointer Uniform %7
%145 = OpTypePointer Function %7
%4 = OpFunction %2 None %3
%5 = OpLabel
%133 = OpVariable %132 Function
%21 = OpAccessChain %20 %17 %19
OpCopyMemory %12 %21 Aligned|Nontemporal|Volatile 16
OpCopyMemory %133 %12 Nontemporal|Volatile
OpCopyMemory %133 %12 Nontemporal|Volatile
%136 = OpAccessChain %135 %17 %30
%138 = OpAccessChain %24 %12 %19
OpCopyMemory %138 %136 Nontemporal|Volatile
%146 = OpAccessChain %145 %133 %30
%147 = OpLoad %7 %146 Aligned|Volatile 16
%148 = OpAccessChain %24 %12 %19
OpStore %148 %147 Volatile
OpReturn
OpFunctionEnd
)";
ASSERT_TRUE(IsEqual(env, after_transformation, context.get()));
}
}
TEST(TransformationSetMemoryOperandsMaskTest, Spirv14OrHigher) {
std::string shader = R"(
OpCapability Shader
%1 = OpExtInstImport "GLSL.std.450"
OpMemoryModel Logical GLSL450
OpEntryPoint Fragment %4 "main" %12 %17
OpExecutionMode %4 OriginUpperLeft
OpSource ESSL 310
OpName %4 "main"
OpName %7 "Point3D"
OpMemberName %7 0 "x"
OpMemberName %7 1 "y"
OpMemberName %7 2 "z"
OpName %12 "global_points"
OpName %15 "block"
OpMemberName %15 0 "in_points"
OpMemberName %15 1 "in_point"
OpName %17 ""
OpName %133 "local_points"
OpMemberDecorate %7 0 Offset 0
OpMemberDecorate %7 1 Offset 4
OpMemberDecorate %7 2 Offset 8
OpDecorate %10 ArrayStride 16
OpMemberDecorate %15 0 Offset 0
OpMemberDecorate %15 1 Offset 192
OpDecorate %15 Block
OpDecorate %17 DescriptorSet 0
OpDecorate %17 Binding 0
%2 = OpTypeVoid
%3 = OpTypeFunction %2
%6 = OpTypeFloat 32
%7 = OpTypeStruct %6 %6 %6
%8 = OpTypeInt 32 0
%9 = OpConstant %8 12
%10 = OpTypeArray %7 %9
%11 = OpTypePointer Private %10
%12 = OpVariable %11 Private
%15 = OpTypeStruct %10 %7
%16 = OpTypePointer Uniform %15
%17 = OpVariable %16 Uniform
%18 = OpTypeInt 32 1
%19 = OpConstant %18 0
%20 = OpTypePointer Uniform %10
%24 = OpTypePointer Private %7
%27 = OpTypePointer Private %6
%30 = OpConstant %18 1
%132 = OpTypePointer Function %10
%135 = OpTypePointer Uniform %7
%145 = OpTypePointer Function %7
%4 = OpFunction %2 None %3
%5 = OpLabel
%133 = OpVariable %132 Function
%21 = OpAccessChain %20 %17 %19
OpCopyMemory %12 %21 Aligned 16 Nontemporal|Aligned 16
OpCopyMemory %133 %12 Volatile
OpCopyMemory %133 %12
OpCopyMemory %133 %12
%136 = OpAccessChain %135 %17 %30
%138 = OpAccessChain %24 %12 %19
OpCopyMemory %138 %136 None Aligned 16
OpCopyMemory %138 %136 Aligned 16
%146 = OpAccessChain %145 %133 %30
%147 = OpLoad %7 %146 Volatile|Nontemporal|Aligned 16
%148 = OpAccessChain %24 %12 %19
OpStore %148 %147 Nontemporal
OpReturn
OpFunctionEnd
)";
for (auto env : {SPV_ENV_UNIVERSAL_1_4, SPV_ENV_UNIVERSAL_1_5,
SPV_ENV_VULKAN_1_1_SPIRV_1_4, SPV_ENV_VULKAN_1_2}) {
const auto consumer = nullptr;
const auto context =
BuildModule(env, consumer, shader, kFuzzAssembleOption);
spvtools::ValidatorOptions validator_options;
ASSERT_TRUE(fuzzerutil::IsValidAndWellFormed(
context.get(), validator_options, kConsoleMessageConsumer));
TransformationContext transformation_context(
MakeUnique<FactManager>(context.get()), validator_options);
{
TransformationSetMemoryOperandsMask transformation(
MakeInstructionDescriptor(21, spv::Op::OpCopyMemory, 0),
(uint32_t)spv::MemoryAccessMask::Aligned |
(uint32_t)spv::MemoryAccessMask::Volatile,
1);
// Bad: cannot remove aligned
ASSERT_FALSE(TransformationSetMemoryOperandsMask(
MakeInstructionDescriptor(21, spv::Op::OpCopyMemory, 0),
(uint32_t)spv::MemoryAccessMask::Volatile, 1)
.IsApplicable(context.get(), transformation_context));
ASSERT_TRUE(
transformation.IsApplicable(context.get(), transformation_context));
ApplyAndCheckFreshIds(transformation, context.get(),
&transformation_context);
}
{
TransformationSetMemoryOperandsMask transformation(
MakeInstructionDescriptor(21, spv::Op::OpCopyMemory, 1),
(uint32_t)spv::MemoryAccessMask::Nontemporal |
(uint32_t)spv::MemoryAccessMask::Volatile,
1);
// Bad: cannot remove volatile
ASSERT_FALSE(TransformationSetMemoryOperandsMask(
MakeInstructionDescriptor(21, spv::Op::OpCopyMemory, 1),
(uint32_t)spv::MemoryAccessMask::Nontemporal, 0)
.IsApplicable(context.get(), transformation_context));
ASSERT_TRUE(
transformation.IsApplicable(context.get(), transformation_context));
ApplyAndCheckFreshIds(transformation, context.get(),
&transformation_context);
}
{
// Creates the first operand.
TransformationSetMemoryOperandsMask transformation(
MakeInstructionDescriptor(21, spv::Op::OpCopyMemory, 2),
(uint32_t)spv::MemoryAccessMask::Nontemporal |
(uint32_t)spv::MemoryAccessMask::Volatile,
0);
ASSERT_TRUE(
transformation.IsApplicable(context.get(), transformation_context));
ApplyAndCheckFreshIds(transformation, context.get(),
&transformation_context);
}
{
// Creates both operands.
TransformationSetMemoryOperandsMask transformation(
MakeInstructionDescriptor(21, spv::Op::OpCopyMemory, 3),
(uint32_t)spv::MemoryAccessMask::Nontemporal |
(uint32_t)spv::MemoryAccessMask::Volatile,
1);
ASSERT_TRUE(
transformation.IsApplicable(context.get(), transformation_context));
ApplyAndCheckFreshIds(transformation, context.get(),
&transformation_context);
}
{
TransformationSetMemoryOperandsMask transformation(
MakeInstructionDescriptor(138, spv::Op::OpCopyMemory, 0),
(uint32_t)spv::MemoryAccessMask::Aligned |
(uint32_t)spv::MemoryAccessMask::Nontemporal,
1);
// Bad: the first mask is None, so Aligned cannot be added to it.
ASSERT_FALSE(TransformationSetMemoryOperandsMask(
MakeInstructionDescriptor(138, spv::Op::OpCopyMemory, 0),
(uint32_t)spv::MemoryAccessMask::Aligned |
(uint32_t)spv::MemoryAccessMask::Nontemporal,
0)
.IsApplicable(context.get(), transformation_context));
ASSERT_TRUE(
transformation.IsApplicable(context.get(), transformation_context));
ApplyAndCheckFreshIds(transformation, context.get(),
&transformation_context);
}
{
TransformationSetMemoryOperandsMask transformation(
MakeInstructionDescriptor(138, spv::Op::OpCopyMemory, 1),
(uint32_t)spv::MemoryAccessMask::Volatile, 1);
ASSERT_TRUE(
transformation.IsApplicable(context.get(), transformation_context));
ApplyAndCheckFreshIds(transformation, context.get(),
&transformation_context);
}
{
TransformationSetMemoryOperandsMask transformation(
MakeInstructionDescriptor(147, spv::Op::OpLoad, 0),
(uint32_t)spv::MemoryAccessMask::Volatile |
(uint32_t)spv::MemoryAccessMask::Aligned,
0);
ASSERT_TRUE(
transformation.IsApplicable(context.get(), transformation_context));
ApplyAndCheckFreshIds(transformation, context.get(),
&transformation_context);
}
{
TransformationSetMemoryOperandsMask transformation(
MakeInstructionDescriptor(148, spv::Op::OpStore, 0),
(uint32_t)spv::MemoryAccessMask::MaskNone, 0);
ASSERT_TRUE(
transformation.IsApplicable(context.get(), transformation_context));
ApplyAndCheckFreshIds(transformation, context.get(),
&transformation_context);
}
std::string after_transformation = R"(
OpCapability Shader
%1 = OpExtInstImport "GLSL.std.450"
OpMemoryModel Logical GLSL450
OpEntryPoint Fragment %4 "main" %12 %17
OpExecutionMode %4 OriginUpperLeft
OpSource ESSL 310
OpName %4 "main"
OpName %7 "Point3D"
OpMemberName %7 0 "x"
OpMemberName %7 1 "y"
OpMemberName %7 2 "z"
OpName %12 "global_points"
OpName %15 "block"
OpMemberName %15 0 "in_points"
OpMemberName %15 1 "in_point"
OpName %17 ""
OpName %133 "local_points"
OpMemberDecorate %7 0 Offset 0
OpMemberDecorate %7 1 Offset 4
OpMemberDecorate %7 2 Offset 8
OpDecorate %10 ArrayStride 16
OpMemberDecorate %15 0 Offset 0
OpMemberDecorate %15 1 Offset 192
OpDecorate %15 Block
OpDecorate %17 DescriptorSet 0
OpDecorate %17 Binding 0
%2 = OpTypeVoid
%3 = OpTypeFunction %2
%6 = OpTypeFloat 32
%7 = OpTypeStruct %6 %6 %6
%8 = OpTypeInt 32 0
%9 = OpConstant %8 12
%10 = OpTypeArray %7 %9
%11 = OpTypePointer Private %10
%12 = OpVariable %11 Private
%15 = OpTypeStruct %10 %7
%16 = OpTypePointer Uniform %15
%17 = OpVariable %16 Uniform
%18 = OpTypeInt 32 1
%19 = OpConstant %18 0
%20 = OpTypePointer Uniform %10
%24 = OpTypePointer Private %7
%27 = OpTypePointer Private %6
%30 = OpConstant %18 1
%132 = OpTypePointer Function %10
%135 = OpTypePointer Uniform %7
%145 = OpTypePointer Function %7
%4 = OpFunction %2 None %3
%5 = OpLabel
%133 = OpVariable %132 Function
%21 = OpAccessChain %20 %17 %19
OpCopyMemory %12 %21 Aligned 16 Aligned|Volatile 16
OpCopyMemory %133 %12 Volatile Nontemporal|Volatile
OpCopyMemory %133 %12 Nontemporal|Volatile
OpCopyMemory %133 %12 None Nontemporal|Volatile
%136 = OpAccessChain %135 %17 %30
%138 = OpAccessChain %24 %12 %19
OpCopyMemory %138 %136 None Aligned|Nontemporal 16
OpCopyMemory %138 %136 Aligned 16 Volatile
%146 = OpAccessChain %145 %133 %30
%147 = OpLoad %7 %146 Volatile|Aligned 16
%148 = OpAccessChain %24 %12 %19
OpStore %148 %147 None
OpReturn
OpFunctionEnd
)";
ASSERT_TRUE(IsEqual(env, after_transformation, context.get()));
}
}
} // namespace
} // namespace fuzz
} // namespace spvtools
```
|
```java
/*
* This file is part of TelegramBots.
*
* TelegramBots is free software: you can redistribute it and/or modify
* (at your option) any later version.
*
* TelegramBots is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
*
* along with TelegramBots. If not, see <path_to_url
*/
package org.telegram.telegrambots.meta.api.objects.games;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;
import lombok.AllArgsConstructor;
import lombok.EqualsAndHashCode;
import lombok.Getter;
import lombok.NonNull;
import lombok.RequiredArgsConstructor;
import lombok.Setter;
import lombok.ToString;
import lombok.experimental.SuperBuilder;
import lombok.extern.jackson.Jacksonized;
import org.telegram.telegrambots.meta.api.interfaces.BotApiObject;
import org.telegram.telegrambots.meta.api.objects.MessageEntity;
import org.telegram.telegrambots.meta.api.objects.PhotoSize;
import java.util.List;
/**
* This object represents a game.
* @author Ruben Bermudez
* @version 2.4
* @apiNote Use BotFather to create and edit games, their short names will act as unique identifiers.
*/
@EqualsAndHashCode(callSuper = false)
@Getter
@Setter
@ToString
@RequiredArgsConstructor
@AllArgsConstructor
@SuperBuilder
@Jacksonized
@JsonIgnoreProperties(ignoreUnknown = true)
public class Game implements BotApiObject {
private static final String TITLE_FIELD = "title";
private static final String DESCRIPTION_FIELD = "description";
private static final String PHOTO_FIELD = "photo";
private static final String ANIMATION_FIELD = "animation";
private static final String TEXT_FIELD = "text";
private static final String TEXTENTITIES_FIELD = "text_entities";
/**
* Title of the game
*/
@JsonProperty(TITLE_FIELD)
@NonNull
private String title;
/**
* Description of the game
*/
@JsonProperty(DESCRIPTION_FIELD)
@NonNull
private String description;
/**
* Photo
*/
@JsonProperty(PHOTO_FIELD)
@NonNull
private List<PhotoSize> photo;
/**
* Optional. Brief description of the game or high scores included in the game message.
* Can be automatically edited to include current high scores for the game
* when the bot calls setGameScore, or manually edited using editMessageText.
* 0-4096 characters.
*/
@JsonProperty(TEXT_FIELD)
private String text;
/**
* Optional. Special entities that appear in text, such as usernames,
* URLs, bot commands, etc.
*/
@JsonProperty(TEXTENTITIES_FIELD)
private List<MessageEntity> entities;
/**
* Optional.
* Animation
*/
@JsonProperty(ANIMATION_FIELD)
private Animation animation;
public boolean hasEntities() {
return entities != null && !entities.isEmpty();
}
}
```
|
Charles Evenden (1 October 1894 – 1 April 1961) was an English cartoonist, known as the founder and guiding inspiration of the ex-servicemen organisation known as the Memorable Order of Tin Hats.
Early life
Charles Alfred Evenden, the eldest of thirteen children, was born in London to John Charles Evenden of Kent and his wife, Elizabeth Gregory, on 1 October 1894.
He was educated at Haggerston Road School in the London Borough of Hackney. At the age of twelve, he was top of the school and remained there for two years, winning two scholarships to Charterhouse School. However, his parents did not have the means to send him to Charterhouse and found him a job in a factory instead, at half-a-crown a week.
To supplement his income he took to selling newspapers. While doing this he began studying newspaper cartoons. This inspired him in the drawing classes he attended and on one occasion he sent a cartoon to the Daily Express. The psychological effect of this act was to influence his whole life.
World War I
He joined the Australian Army in World War I and was sent to Egypt. As a member of the Australian and New Zealand Army Corps, he took part in the Gallipoli campaign until he was badly shell-shocked and evacuated to Malta. He was hospitalised in England and at the cessation of hostilities returned to a farming life in Australia.
Post-war years
His farming efforts proved to be financially unsuccessful, and he took up newspaper work in Melbourne. After a brief period, he decided to try newspaper life in South Africa. In 1923 he arrived in Durban where he joined the staff of The Natal Mercury as its cartoonist under the nom-de-plume of "EVO". He remained with the paper from 1924 until 1953. With the startling simplicity of his ideas, he soon made a name for himself. To emphasise his attitude towards politicians and bureaucrats he created two characters, "Dr Mug" and "Mr Wump". His brand of Cockney humour had a special appeal for the newspaper's readers.
Memorable Order of Tin Hats
According to the Dictionary of South African Biography, one night in 1927 after he and the editor of The Natal Mercury, RJ Kingston Russell, had seen a war film, Evenden was persuaded to draw a cartoon on 'remembrance'. According to the Dictionary, "The cartoon showed a tin helmet surmounted by a burning candle. Around the flames of the candle were six words – True Comradeship – Mutual Help – Sound Memory".
However, the official MOTH website carries a cartoon captioned Forgetfulness and this led to the founding of the Order. This is confirmed by the Eastern Province Herald which describes the cartoon as follows: "a bullet- and shrapnel-riddled Allied helmet awash in the ocean. In the background a steamship passes over the horizon, leaving the forgotten, ghostly form of a veteran forlornly wading through the water."
The concepts of True Comradeship, Mutual Help and Sound Memory were to become the inspiration of a remarkable organisation of ex-front line soldiers, of all ranks, known as the Memorable Order of Tin Hats (MOTH). Evenden, as the founder of the movement and its guiding inspiration, was given the title of 'Moth O' – a position he held until his death.
The membership of the MOTH movement, under Evenden's vigorous direction and leadership, grew into thousands. Men and women of two world wars, of the Second Anglo Boer War (1899–1902) and even those of former enemy forces streamed into its ranks. All who were prepared to keep alive the memories of comradeship and self-sacrifice – the finer virtues that war brings forth – were welcomed and made at home in shellholes with colourful and meaningful names of war-time memories and occasions. The shellholes spread to the United Kingdom, Australia, New Zealand and to Rhodesia (Zimbabwe). Membership was extended to those who had participated in the South African Border War.
The MOTH national headquarters is situated in Warriors Gate, Durban, which is modelled on a Norman design from a photograph given to Evenden by Admiral Evans-of-the-Broke. In 1948 Evenden opened Mount Memory, a monument to the missing and dead of the Second World War, in the foothills of the Drakensberg mountains.
Family life
He married Reenie Carlos and had a son, Barrie, and a daughter. Barrie was posted as missing in action when his ship was torpedoed by a U-boat in the Mediterranean Sea.
After he died in Durban on 1 April 1961 Evenden was cremated, and his ashes were scattered over the Durban bay.
Writings
Evenden wrote the story of how the MOTH organisation was created in his book Old soldiers never die (Durban, 1952). He was also the author of Like a little candle (Durban, 1959).
Recognition
In 1955 he was received by Queen Elizabeth, the Queen Mother, at Clarence House.
On 11 November 1955, the freedom of the city of Durban was conferred on him, at a parade of 14 000 Moths, by the then Mayor, Councillor Vernon Essery.
References
External links
Official website of the Memorable Order of Tin Hats
1894 births
1961 deaths
People from Hackney Central
Writers from Durban
People from KwaZulu-Natal
English emigrants to South Africa
British editorial cartoonists
People educated at Charterhouse School
Artists from Durban
Australian military personnel of World War I
|
```shell
#!/usr/bin/env bash
# vim:ts=4:sts=4:sw=4:et
#
# Author: Hari Sekhon
# Date: 2020-07-20 00:26:08 +0100 (Mon, 20 Jul 2020)
#
# path_to_url
#
#
# If you're using my code you're welcome to connect with me on LinkedIn and optionally send me feedback to help steer this or other code I publish
#
# path_to_url
#
set -euo pipefail
[ -n "${DEBUG:-}" ] && set -x
libdir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck disable=SC1090
. "$libdir/utils.sh"
offset="${SPOTIFY_OFFSET:-0}"
# conservative, some API endpoints can take 100, others 50 - this is the safe choice but will go faster for those APIs if you can set 100 where appropriate
limit="${SPOTIFY_LIMIT:-50}"
if ! [[ "$offset" =~ ^[[:digit:]]+$ ]]; then
echo "Invalid \$SPOTIFY_OFFSET = $offset found in environment" >&2
exit 1
fi
if ! [[ "$limit" =~ ^[[:digit:]]+$ ]]; then
echo "Invalid \$SPOTIFY_LIMIT = $limit found in environment" >&2
exit 1
fi
spotify_token(){
# tokens with authorization have length 281, tokens without 219, so if there is an unauthorized token in the environment, regenerate it to avoid 401 errors
# [ -a ] works in bash and is the most concise way of doing this
# shellcheck disable=SC2166
if [ -z "${SPOTIFY_ACCESS_TOKEN:-}" ] ||
[ -n "${SPOTIFY_PRIVATE:-}" -a "${#SPOTIFY_ACCESS_TOKEN}" -lt 280 ]; then
SPOTIFY_ACCESS_TOKEN="$("$libdir/../spotify_api_token.sh")"
fi
export SPOTIFY_ACCESS_TOKEN
}
spotify_user(){
spotify_user="${spotify_user:-${SPOTIFY_USER:-}}"
if [ -z "$spotify_user" ]; then
if [ -n "${SPOTIFY_PRIVATE:-}" ]; then
spotify_user="$(SPOTIFY_PRIVATE=1 "$libdir/../spotify_api.sh" "/v1/me" | jq -r '.id')"
else
usage "\$SPOTIFY_USER not defined, and not using SPOTIFY_PRIVATE to auto-infer user from an authorized token"
fi
fi
}
# used by client scripts
# shellcheck disable=SC2034
usage_auth_msg="Requires \$SPOTIFY_ACCESS_TOKEN, or \$SPOTIFY_ID and \$SPOTIFY_SECRET to be defined in the environment"
# srcdir defined in client scripts
# shellcheck disable=SC2034,SC2154
usage_token_private="export SPOTIFY_ACCESS_TOKEN=\"\$(SPOTIFY_PRIVATE=1 '$libdir/../spotify_api_token.sh')\""
# shellcheck disable=SC2034
usage_playlist_help="See spotify_playlists.sh --help for details on accessing private playlists"
# shellcheck disable=SC2034
usage_auth_help="See spotify_api_token.sh --help for authentication details and setting up your SPOTIFY_ID, SPOTIFY_SECRET and callback URL"
get_next(){
jq -r '.next' <<< "$*"
}
is_local_uri(){
[[ "$1" =~ ^spotify:local:|open.spotify.com/local/ ]]
}
is_spotify_playlist_id(){
#local playlist_id="${1:-}"
#if [ -z "$playlist_id" ]; then
# die "no playlist id passed to function is_spotify_playlist_id()"
#fi
[[ "$1" =~ [[:alnum:]]{22} ]]
}
validate_spotify_uri(){
local uri="$1"
if ! [[ "$uri" =~ ^(spotify:(track|album|artist):|^https?://open.spotify.com/(track|album|artist)/)?[[:alnum:]]+(\?.+)?$ ]]; then
echo "Invalid URI provided: $uri" >&2
return 1
fi
if [[ "$uri" =~ open.spotify.com/|^spotify: ]]; then
if ! [[ "$uri" =~ open.spotify.com/${uri_type:-track}|^spotify:${uri_type:-track} ]]; then
echo "Invalid URI type '${uri_type:-track}' vs URI '$uri'" >&2
return 1
fi
fi
uri="${uri##*[:/]}"
uri="${uri%%\?*}"
echo "$uri"
}
```
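As a usage sketch, the URI-normalisation helper above accepts a `spotify:` URI, an `open.spotify.com` URL, or a bare ID, and prints the bare ID in each case. The function body is reproduced here from the library so the snippet runs standalone; the track ID is just an illustrative value:

```shell
#!/usr/bin/env bash
# validate_spotify_uri reproduced from the library above so this runs standalone
validate_spotify_uri(){
    local uri="$1"
    if ! [[ "$uri" =~ ^(spotify:(track|album|artist):|^https?://open.spotify.com/(track|album|artist)/)?[[:alnum:]]+(\?.+)?$ ]]; then
        echo "Invalid URI provided: $uri" >&2
        return 1
    fi
    if [[ "$uri" =~ open.spotify.com/|^spotify: ]]; then
        if ! [[ "$uri" =~ open.spotify.com/${uri_type:-track}|^spotify:${uri_type:-track} ]]; then
            echo "Invalid URI type '${uri_type:-track}' vs URI '$uri'" >&2
            return 1
        fi
    fi
    uri="${uri##*[:/]}"   # strip everything up to the last ':' or '/'
    uri="${uri%%\?*}"     # strip any trailing query string
    echo "$uri"
}
# All three accepted forms normalise to the bare track ID:
validate_spotify_uri "spotify:track:4uLU6hMCjMI75M1A2tKUQC"
validate_spotify_uri "https://open.spotify.com/track/4uLU6hMCjMI75M1A2tKUQC?si=abc123"
validate_spotify_uri "4uLU6hMCjMI75M1A2tKUQC"
```

Note that because the ID pattern is `[[:alnum:]]+`, the function only rejects clearly malformed strings; it does not verify that the ID actually exists on Spotify. Since `uri_type` defaults to `track`, an album or artist URI is rejected unless the caller sets `uri_type` accordingly.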
|
{{Infobox person
| name = James Daly
| image = James Daly Medical Center 1975.jpg
| imagesize =
| caption = Daly in Medical Center, 1975
| birth_name = James Firman Daly
| birth_date =
| birth_place = Wisconsin Rapids, Wisconsin, U.S.
| death_date =
| death_place = Nyack, New York, U.S.
| othername =
| years_active = 1946–1978
| television = Medical Center, Twelve O'Clock High
| occupation = Actor
| alma_mater = Cornell College
| spouse =
| website =
| awards =
| children = 4, including Tyne and Tim Daly
}}
James Firman Daly (October 23, 1918 – July 3, 1978) was an American theater, film, and television actor, who is perhaps best known for his role as Paul Lochner in the hospital drama series Medical Center, in which he played Chad Everett's superior.
Early life
Daly was born in Wisconsin Rapids in Wood County in central Wisconsin, to Dorothy Ethelbert (Hogan) Mullen, who later worked for the Central Intelligence Agency, and Percifer Charles Daly, a fuel merchant. During the 1930s, Daly studied drama and acted in shows before serving in three branches of the armed forces, including six months as an infantryman in the U.S. Army, two months as a cadet in the Army Air Corps, and more than four years in the Navy as an ensign during World War II.
Daly attended the University of Wisconsin, State University of Iowa, and Carroll College before receiving a degree from Cornell College in Mount Vernon, Iowa. Cornell College later presented him with an honorary Doctor of Fine Arts degree.
Career
Daly was an accomplished stage actor, starting out in 1946 as Gary Merrill's understudy in Born Yesterday. His starring roles on Broadway included Archibald MacLeish's Pulitzer Prize- winning J.B. and Tennessee Williams' Period of Adjustment.
Between 1953 and 1955, Daly appeared in the TV series Foreign Intrigue. He guest-starred on many television series, including Appointment with Adventure (two episodes), Breaking Point, Mission: Impossible ("Shock"), DuPont Cavalcade Theater ("One Day at a Time", 1955) portraying Bill Wilson, the co-founder of Alcoholics Anonymous, The Twilight Zone ("A Stop at Willoughby"), The Tenderfoot (1964) for Walt Disney's Wonderful World of Color, The Road West (1966 episode "The Gunfighter"), Custer, Gunsmoke (1968 episode "The Favor"), Combat!, The Fugitive, The Virginian, and Twelve O'Clock High. He portrayed Mr. Flint (an apparently immortal human) in the Star Trek episode "Requiem for Methuselah" (1969). He starred in Medical Center on CBS from 1969 to 1975.
In 1958, Daly signed a contract with the R.J. Reynolds Tobacco Company to do television commercials for Camel cigarettes. He served as the Camel representative for seven years, being flown by Reynolds throughout the United States to be filmed smoking a Camel cigarette at various locations.
In addition to his acting career, Daly was one of the hosts on NBC Radio's weekend Monitor program in 1963–1964.
Daly's last screen role was as Mr. Boyce in the mini-series Roots: The Next Generations.
Personal life
According to his son Tim Daly during an interview on CBS News Sunday Morning, James Daly came out to Tim as gay a decade after divorcing his wife Hope. His struggle to come to terms with his sexual orientation nearly caused a rift between him and his family. As homosexuality was still considered a mental illness until the early 1970s, he and his wife tried and failed at "curing" him. After their divorce, Daly decided to limit his contact with his children out of fear that they would end up mentally ill themselves.
Two of Daly's children, Tyne Daly and Tim Daly, and his granddaughter, Kathryne Dora Brown, and grandson, Sam Daly, are actors. Tyne appeared on Daly's TV series, Foreign Intrigue, as a child. She also played Jennifer Lochner, Paul Lochner's adult daughter, on Medical Center in the 1970 season 1 episode Moment of Decision. The elder Daly and his daughter both guest-starred separately in the original Mission: Impossible TV series. Tim appeared as a child with his father in Henrik Ibsen's play, An Enemy of the People. Daly had two other children: daughters Mary Glynn and Pegeen Michael.
Death
Daly died on July 3, 1978, of heart failure in Nyack, New York, two years after Medical Center ended, and while he was preparing to star in the play Equus in Tarrytown, New York. His ashes were sprinkled into the Atlantic Ocean.
Filmography
Theatre
Awards
References
External links
1918 births
1978 deaths
20th-century American male actors
20th-century American LGBT people
American gay actors
American male film actors
American male television actors
Carroll University alumni
Cornell College alumni
Iowa State University alumni
LGBT people from Wisconsin
Male actors from Wisconsin
Male Spaghetti Western actors
Military personnel from Wisconsin
Outstanding Performance by a Supporting Actor in a Drama Series Primetime Emmy Award winners
People from Wisconsin Rapids, Wisconsin
United States Army personnel of World War II
United States Army Air Forces personnel of World War II
United States Navy officers
United States Navy personnel of World War II
University of Wisconsin–Madison College of Letters and Science alumni
|
Guénin (; ) is a commune in the Morbihan department of Brittany in north-western France.
Geography
The River Ével flows southwestwards through the middle of the commune and forms its south-western border.
Demographics
The inhabitants of Guénin are called in French Guéninois.
See also
Communes of the Morbihan department
References
External links
Cultural Heritage
Mayors of Morbihan Association
Communes of Morbihan
|
```java
/*
*
*
* path_to_url
*
* Unless required by applicable law or agreed to in writing, software
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
*
* Contributors:
* ohun@live.cn ()
*/
package com.mpush.client.push;
import com.mpush.api.service.BaseService;
import com.mpush.api.service.Listener;
import com.mpush.client.MPushClient;
import com.mpush.monitor.service.ThreadPoolManager;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.Map;
import java.util.concurrent.*;
/**
* Created by ohun on 2015/12/30.
*
* @author ohun@live.cn
*/
public class PushRequestBus extends BaseService {
private final Logger logger = LoggerFactory.getLogger(PushRequestBus.class);
private final Map<Integer, PushRequest> reqQueue = new ConcurrentHashMap<>(1024);
private ScheduledExecutorService scheduledExecutor;
private final MPushClient mPushClient;
public PushRequestBus(MPushClient mPushClient) {
this.mPushClient = mPushClient;
}
public Future<?> put(int sessionId, PushRequest request) {
reqQueue.put(sessionId, request);
return scheduledExecutor.schedule(request, request.getTimeout(), TimeUnit.MILLISECONDS);
}
public PushRequest getAndRemove(int sessionId) {
return reqQueue.remove(sessionId);
}
public void asyncCall(Runnable runnable) {
scheduledExecutor.execute(runnable);
}
@Override
protected void doStart(Listener listener) throws Throwable {
scheduledExecutor = mPushClient.getThreadPoolManager().getPushClientTimer();
listener.onSuccess();
}
@Override
protected void doStop(Listener listener) throws Throwable {
if (scheduledExecutor != null) {
scheduledExecutor.shutdown();
}
listener.onSuccess();
}
}
```
|
This article lists the rulers of Tyrconnell (Irish: Tír Chonaill), a medieval Irish kingdom which covered much of what is now County Donegal.
Oral history
It was founded in the fifth century by a son of Niall of the Nine Hostages, Conall Gulban, of whom the Cenél Conaill are descended. They ruled the kingdom until the Flight of the Earls in September 1607, which marked the end of the kingdom.
Early Chiefs of Cenél Conaill
Conall Gulban mac Néill (died 464)
.......
Ninnid mac Dauach (flourished 544-563)
Ainmuire mac Sétnai (died 569)
Báetán mac Ninneda (died 586)
Áed mac Ainmuirech (died 598)
Conall Cú mac Áedo (died 604)
Máel Coba mac Áedo (died 615)
Domnall mac Áedo (died 642)
Conall Cóel mac Máele Coba (died 654)
Cellach mac Máele Coba (died 658)
......
Loingsech mac Óengusso (died 703)
Congal Cennmagair mac Fergusa (died 710)
Flaithbertach mac Loingsig (died 765)
Áed Muinderg mac Flaithbertaig (died 747)
Loingsech mac Flaithbertaig (died 754)
Murchad mac Flaithbertaig (died 767)
Domnall mac Áeda Muindeirg (died 804)
Máel Bresail mac Murchada (died 819)
Ruaidrí ua Canannáin (died 950)
Kings of Tyrconnell (Rí Thír Chonaill) from c. 1201 to 1608
Eneas MacDaly (Eigneachan mac Dalach), 1201-1207
Donall Mor MacEneas (Domhnall Mór mac Eicnechain), 1207-1241
Melaghlin O'Donnell, 1241–1247
Goffraid O'Donnell, 1247-1258
Donal Óg O'Donnell, 1258-1281
Turlough (Toirdhealbhach) O'Donnell (son of a daughter of Angus Mor Macdonald, Lord of the Isles), 1290–91
Turlogh O'Donnell (Tairrdelbach an Fhiona Ó Domhnaill), 1380-1422
Niall Garve O'Donnell, 1422-1439
Naughton O'Donnell (Neachtan Ó Domhnaill), 1439-1452
Hugh Roe O'Donnell, 1461-1505
Hugh Duff O'Donnell (Aodh Dubh Ó Domhnaill), 1505-1537
Manus O'Donnell (d. 1564)
Calvagh O'Donnell (d. 1566)
Sir Hugh O'Donnell (d. 1600)
Hugh Roe O'Donnell (d. 1602)
Rory O'Donnell, 1st Earl of Tyrconnell (d. 1608)
References
Further reading
Annals of Ulster University College Cork
Annals of Tigernach University College Cork
Byrne, Francis John (2001), Irish Kings and High-Kings, Dublin: Four Courts Press,
Charles-Edwards, T. M. (2000), Early Christian Ireland, Cambridge: Cambridge University Press,
Mac Niocaill, Gearoid (1972), Ireland before the Vikings, Dublin: Gill and Macmillan
O'Donnell dynasty
Tir Connaill
|
```shell
#!/usr/bin/env bash
CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck source=../shell_config.sh
. "$CURDIR"/../shell_config.sh
EXCEPTION_TEXT="Code: 457."
$CLICKHOUSE_CLIENT -q "DROP TABLE IF EXISTS ps";
$CLICKHOUSE_CLIENT -q "CREATE TABLE ps (
a Array(UInt32), da Array(Array(UInt8)),
t Tuple(Int16, String), dt Tuple(UInt8, Tuple(String, UInt8)),
n Nullable(Date)
) ENGINE = Memory";
$CLICKHOUSE_CLIENT -q "INSERT INTO ps VALUES (
[1, 2], [[1, 1], [2, 2]],
(1, 'Hello'), (1, ('dt', 2)),
NULL)";
$CLICKHOUSE_CLIENT -q "INSERT INTO ps VALUES (
[10, 10, 10], [[10], [10], [10]],
(10, 'Test'), (10, ('dt', 10)),
'2015-02-15')";
$CLICKHOUSE_CLIENT --max_threads=1 --param_aui="[1, 2]" \
-q "SELECT t FROM ps WHERE a = {aui:Array(UInt16)}";
$CLICKHOUSE_CLIENT --max_threads=1 --param_d_a="[[1, 1], [2, 2]]" \
-q "SELECT dt FROM ps WHERE da = {d_a:Array(Array(UInt8))}";
$CLICKHOUSE_CLIENT --max_threads=1 --param_tisd="(10, 'Test')" \
-q "SELECT a FROM ps WHERE t = {tisd:Tuple(Int16, String)}";
$CLICKHOUSE_CLIENT --max_threads=1 --param_d_t="(10, ('dt', 10))" \
-q "SELECT da FROM ps WHERE dt = {d_t:Tuple(UInt8, Tuple(String, UInt8))}";
$CLICKHOUSE_CLIENT --max_threads=1 --param_nd="2015-02-15" \
-q "SELECT * FROM ps WHERE n = {nd:Nullable(Date)}";
# Must throw an exception to avoid SQL injection
$CLICKHOUSE_CLIENT --max_threads=1 --param_injection="[1] OR 1" \
-q "SELECT * FROM ps WHERE a = {injection:Array(UInt32)}" 2>&1 \
| grep -o "$EXCEPTION_TEXT"
$CLICKHOUSE_CLIENT -q "DROP TABLE ps";
```
|
```csharp
// Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.DotNet.Cli.Utils;
using Microsoft.NET.TestFramework;
using Xunit;
using Xunit.Abstractions;
namespace StreamForwarderTests
{
public class StreamForwarderTests : SdkTest
{
public StreamForwarderTests(ITestOutputHelper log) : base(log)
{
}
public static IEnumerable<object[]> ForwardingTheoryVariations
{
get
{
return new[]
{
new object[] { "123", new string[]{"123"} },
new object[] { "123\n", new string[] {"123"} },
new object[] { "123\r\n", new string[] {"123"} },
new object[] { "1234\n5678", new string[] {"1234", "5678"} },
new object[] { "1234\r\n5678", new string[] {"1234", "5678"} },
new object[] { "1234\n5678\n", new string[] {"1234", "5678"} },
new object[] { "1234\r\n5678\r\n", new string[] {"1234", "5678"} },
new object[] { "1234\n5678\nabcdefghijklmnopqrstuvwxyz", new string[] {"1234", "5678", "abcdefghijklmnopqrstuvwxyz"} },
new object[] { "1234\r\n5678\r\nabcdefghijklmnopqrstuvwxyz", new string[] {"1234", "5678", "abcdefghijklmnopqrstuvwxyz"} },
new object[] { "1234\n5678\nabcdefghijklmnopqrstuvwxyz\n", new string[] {"1234", "5678", "abcdefghijklmnopqrstuvwxyz"} },
new object[] { "1234\r\n5678\r\nabcdefghijklmnopqrstuvwxyz\r\n", new string[] {"1234", "5678", "abcdefghijklmnopqrstuvwxyz"} }
};
}
}
[Theory]
[InlineData("123")]
[InlineData("123\n")]
public void TestNoForwardingNoCapture(string inputStr)
{
TestCapturingAndForwardingHelper(ForwardOptions.None, inputStr, null, new string[0]);
}
[Theory]
[MemberData(nameof(ForwardingTheoryVariations))]
public void TestForwardingOnly(string inputStr, string[] expectedWrites)
{
for (int i = 0; i < expectedWrites.Length; ++i)
{
expectedWrites[i] += Environment.NewLine;
}
TestCapturingAndForwardingHelper(ForwardOptions.WriteLine, inputStr, null, expectedWrites);
}
[Theory]
[MemberData(nameof(ForwardingTheoryVariations))]
public void TestCaptureOnly(string inputStr, string[] expectedWrites)
{
for (int i = 0; i < expectedWrites.Length; ++i)
{
expectedWrites[i] += Environment.NewLine;
}
var expectedCaptured = string.Join("", expectedWrites);
TestCapturingAndForwardingHelper(ForwardOptions.Capture, inputStr, expectedCaptured, new string[0]);
}
[Theory]
[MemberData(nameof(ForwardingTheoryVariations))]
public void TestCaptureAndForwardingTogether(string inputStr, string[] expectedWrites)
{
for (int i = 0; i < expectedWrites.Length; ++i)
{
expectedWrites[i] += Environment.NewLine;
}
var expectedCaptured = string.Join("", expectedWrites);
TestCapturingAndForwardingHelper(ForwardOptions.WriteLine | ForwardOptions.Capture, inputStr, expectedCaptured, expectedWrites);
}
private enum ForwardOptions
{
None = 0x0,
Capture = 0x1,
WriteLine = 0x02,
}
private void TestCapturingAndForwardingHelper(ForwardOptions options, string str, string expectedCaptured, string[] expectedWrites)
{
var forwarder = new StreamForwarder();
var writes = new List<string>();
if ((options & ForwardOptions.WriteLine) != 0)
{
forwarder.ForwardTo(writeLine: s => writes.Add(s + Environment.NewLine));
}
if ((options & ForwardOptions.Capture) != 0)
{
forwarder.Capture();
}
forwarder.Read(new StringReader(str));
Assert.Equal(expectedWrites, writes);
var captured = forwarder.CapturedOutput;
Assert.Equal(expectedCaptured, captured);
}
}
}
```
|
Davydikha () is a rural locality (a village) in Velikodvorskoye Rural Settlement, Totemsky District, Vologda Oblast, Russia. The population was 29 as of 2002.
Geography
Davydikha is located 48 km south of Totma (the district's administrative centre) by road. Veliky Dvor is the nearest rural locality.
References
Rural localities in Totemsky District
|
The Cottage House, formerly known as the White Horse Inn and Vernon Stiles Inn, is a historic bed and breakfast located in Thompson, Connecticut, United States.
History
Built in 1814 by Stephen Tefft, Dr. James Webb, Noadiah Comins, and Hezekiah Olney, the inn began as one of many public houses in the area. After Captain Vernon Stiles purchased it in 1830, it became Stiles Tavern and quickly gained popularity, boasting that “more stage passengers dined there every day than at any other house in New England.”
In addition to its fame as an inn, Stiles Tavern also became known as a wedding facility. Couples who disliked their state's requirements for publishing their intentions to marry fled to Connecticut. There, Captain Stiles, also a Justice of the Peace, wed them in his tavern. The unions of these runaways earned Stiles Tavern its celebrated reputation as the “Gretna Green of New England.”
When the temperance movement arose in the mid-1830s, Captain Stiles disposed of his liquor, transforming his tavern into a temperance house. After Captain Stiles, several innkeepers owned and managed the inn over the years. Eventually changing from a tavern to a restaurant, the building took on different titles, such as the Vernon Stiles Inn, the White Horse Inn at Vernon Stiles, and simply the White Horse Inn.
Present state
Now a bed and breakfast, the inn currently is called The Cottage House. Though it no longer holds wedding ceremonies, it still accommodates several brides and guests of couples who host their weddings or receptions at its nearby sister property, Lord Thompson Manor, also a historic site.
The Cottage House is part of the Thompson Hill Historic District, registered with the National Register of Historic Places, the State Register, and the Local Register, and is also recognized as a National Historical Site, a State Historical Site, and a City Historical Site.
Claims to fame
Since Captain Vernon Stiles's purchase of the inn in the 1830s, the signs outside The Cottage House have portrayed a gentleman riding in a carriage pulled by a white horse. This image depicts a visit from the famous Marquis de Lafayette in 1824. Lafayette stayed at the inn for three days during his tour of America, and visited with a few of the town locals.
The Cottage House also was used in the 1959 filming of the mystery movie, The Man in the Net.
References
Iamartino, Joseph, ed. (2003). Echoes of Old Thompson: A Pictorial History of Thompson, Connecticut. The Donning Company Publishers.
Larned, Ellen D. (2000). History of Windham County Connecticut, Vol. 2, 1760-1880. Swordsmith Productions.
Bayles, Richard M. History of the Village of Thompson. Connecticut Genealogy. Retrieved on 2007-05-31.
External links
The Cottage House
Thompson Historical Society
Buildings and structures in Windham County, Connecticut
Bed and breakfasts in Connecticut
Thompson, Connecticut
|
Gulnara Iskanderovna Samitova-Galkina (, ) (born 9 July 1978 in Naberezhnye Chelny, Tatarstan) is a Russian distance runner. In July 2004 she ran 3000 metres steeplechase in a new world record of 9:01.59 minutes. Early that year she won a bronze medal over 1500 metres at the 2004 IAAF World Indoor Championships.
Gulnara Samitova-Galkina is of mixed Tatar and Russian origin. She is a two-time national champion in the women's 5000 metres.
Samitova claimed the gold medal at the 2008 Olympics in the 3000 m steeplechase, breaking her own world record in the final with a time of 8:58.81 min, becoming the first woman in history to run under 9 minutes for the steeplechase.
She won both the 800 and 1500 metres races at the Russian Team Championships in June 2009, clocking a personal best of 2:00.29 in the 800 m and a world leading time over 1500 m.
She missed the entire 2010 season, including the 2010 European Athletics Championships, after she fell pregnant. She gave birth to her daughter Alina in June, and returned to training that autumn.
International competitions
Personal bests
800 metres - 2:00.29 min (2009)
1500 metres - 4:01.29 min (2004)
Mile run - 4:20.23 min (2007)
3000 metres - 8:42.96 min (2008)
3000 metres steeplechase - 8:58.81 min (2008)
5000 metres - 14:33.13 min (2008)
See also
List of Olympic medalists in athletics (women)
List of 2008 Summer Olympics medal winners
Steeplechase at the Olympics
List of IAAF World Indoor Championships medalists (women)
List of 5000 metres national champions (women)
List of long-distance runners
List of Tatars
List of Russian sportspeople
References
External links
Focus on Athletes – in-depth article from IAAF
1978 births
Living people
Sportspeople from Naberezhnye Chelny
Russian female steeplechase runners
Russian female middle-distance runners
Russian female long-distance runners
Russian female cross country runners
Olympic female steeplechase runners
Olympic female long-distance runners
Olympic athletes for Russia
Olympic gold medalists for Russia
Olympic gold medalists in athletics (track and field)
Athletes (track and field) at the 2004 Summer Olympics
Athletes (track and field) at the 2008 Summer Olympics
Athletes (track and field) at the 2012 Summer Olympics
Medalists at the 2008 Summer Olympics
World Athletics Championships athletes for Russia
Russian Athletics Championships winners
World record setters in athletics (track and field)
Tatar people of Russia
|
```ruby
class Twm < Formula
desc "Tab Window Manager for X Window System"
homepage "path_to_url"
url "path_to_url"
sha256 your_sha256_hash
license "X11"
bottle do
sha256 arm64_sonoma: your_sha256_hash
sha256 arm64_ventura: your_sha256_hash
sha256 arm64_monterey: your_sha256_hash
sha256 arm64_big_sur: your_sha256_hash
sha256 sonoma: your_sha256_hash
sha256 ventura: your_sha256_hash
sha256 monterey: your_sha256_hash
sha256 big_sur: your_sha256_hash
sha256 x86_64_linux: your_sha256_hash
end
depends_on "pkg-config" => :build
depends_on "libxmu"
depends_on "libxrandr"
uses_from_macos "bison" => :build
def install
system "./configure", *std_configure_args
system "make"
system "make", "install"
end
test do
fork do
exec Formula["xorg-server"].bin/"Xvfb", ":1"
end
ENV["DISPLAY"] = ":1"
sleep 10
fork do
exec bin/"twm"
end
end
end
```
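The `test do` block above relies on Ruby's `fork`/`exec` to run Xvfb and twm in the background. A minimal sketch of that pattern outside Homebrew, using `sleep` as a stand-in for the real programs:

```ruby
# Minimal sketch of the fork/exec pattern used in the test block above:
# start a long-running program in a child process, then stop and reap it.
# "sleep" stands in for Xvfb/twm here.
pid = fork do
  exec "sleep", "60"
end
sleep 1                      # give the child time to exec, as the formula does
Process.kill("TERM", pid)    # stop the background process
Process.wait(pid)            # reap it so no zombie is left behind
puts "child #{pid} reaped"
```

Homebrew's sandbox normally cleans up leftover children after `test do` finishes, which is why the formula itself can omit the explicit kill/wait.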
|
```yaml
#
#
# path_to_url
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
---
apiVersion: platform.confluent.io/v1beta1
kind: Kafka
metadata:
name: my-cluster
spec:
replicas: 3
tls:
autoGeneratedCerts: true
image:
application: confluentinc/cp-server:7.4.0
init: confluentinc/confluent-init-container:2.6.0
dataVolumeCapacity: 100Gi
storageClass:
name: premium-rwo
configOverrides:
server:
- offsets.topic.replication.factor=3
- transaction.state.log.replication.factor=3
- transaction.state.log.min.isr=2
- default.replication.factor=3
- min.insync.replicas=2
- auto.create.topics.enable=true
listeners:
custom:
- name: tls
port: 9093
tls:
enabled: true
podTemplate:
tolerations:
- key: "app.stateful/component"
operator: "Equal"
value: "kafka-broker"
effect: NoSchedule
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: "app.stateful/component"
operator: In
values:
- "kafka-broker"
topologySpreadConstraints:
- maxSkew: 1
topologyKey: "topology.kubernetes.io/zone"
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
app: my-cluster
clusterId: kafka
platform.confluent.io/type: kafka
envVars:
- name: KAFKA_HEAP_OPTS
value: "-Xmx4G -Xms4G"
resources:
requests:
memory: 5Gi
cpu: "1"
limits:
memory: 5Gi
cpu: "2"
probe:
readiness:
failureThreshold: 15
metricReporter:
enabled: true
metrics:
prometheus:
rules:
# Special cases and very specific rules
- pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>Value
name: kafka_server_$1_$2
type: GAUGE
labels:
clientId: "$3"
topic: "$4"
partition: "$5"
- pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), brokerHost=(.+), brokerPort=(.+)><>Value
name: kafka_server_$1_$2
type: GAUGE
labels:
clientId: "$3"
broker: "$4:$5"
- pattern: kafka.server<type=(.+), cipher=(.+), protocol=(.+), listener=(.+), networkProcessor=(.+)><>connections
name: kafka_server_$1_connections_tls_info
type: GAUGE
labels:
cipher: "$2"
protocol: "$3"
listener: "$4"
networkProcessor: "$5"
- pattern: kafka.server<type=(.+), clientSoftwareName=(.+), clientSoftwareVersion=(.+), listener=(.+), networkProcessor=(.+)><>connections
name: kafka_server_$1_connections_software
type: GAUGE
labels:
clientSoftwareName: "$2"
clientSoftwareVersion: "$3"
listener: "$4"
networkProcessor: "$5"
- pattern: "kafka.server<type=(.+), listener=(.+), networkProcessor=(.+)><>(.+):"
name: kafka_server_$1_$4
type: GAUGE
labels:
listener: "$2"
networkProcessor: "$3"
- pattern: kafka.server<type=(.+), listener=(.+), networkProcessor=(.+)><>(.+)
name: kafka_server_$1_$4
type: GAUGE
labels:
listener: "$2"
networkProcessor: "$3"
# Some percent metrics use MeanRate attribute
# Ex) kafka.server<type=(KafkaRequestHandlerPool), name=(RequestHandlerAvgIdlePercent)><>MeanRate
- pattern: kafka.(\w+)<type=(.+), name=(.+)Percent\w*><>MeanRate
name: kafka_$1_$2_$3_percent
type: GAUGE
# Generic gauges for percents
- pattern: kafka.(\w+)<type=(.+), name=(.+)Percent\w*><>Value
name: kafka_$1_$2_$3_percent
type: GAUGE
- pattern: kafka.(\w+)<type=(.+), name=(.+)Percent\w*, (.+)=(.+)><>Value
name: kafka_$1_$2_$3_percent
type: GAUGE
labels:
"$4": "$5"
# Generic per-second counters with 0-2 key/value pairs
- pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*, (.+)=(.+), (.+)=(.+)><>Count
name: kafka_$1_$2_$3_total
type: COUNTER
labels:
"$4": "$5"
"$6": "$7"
- pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*, (.+)=(.+)><>Count
name: kafka_$1_$2_$3_total
type: COUNTER
labels:
"$4": "$5"
- pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*><>Count
name: kafka_$1_$2_$3_total
type: COUNTER
# Generic gauges with 0-2 key/value pairs
- pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+), (.+)=(.+)><>Value
name: kafka_$1_$2_$3
type: GAUGE
labels:
"$4": "$5"
"$6": "$7"
- pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+)><>Value
name: kafka_$1_$2_$3
type: GAUGE
labels:
"$4": "$5"
- pattern: kafka.(\w+)<type=(.+), name=(.+)><>Value
name: kafka_$1_$2_$3
type: GAUGE
# Emulate Prometheus 'Summary' metrics for the exported 'Histogram's.
# Note that these are missing the '_sum' metric!
- pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+), (.+)=(.+)><>Count
name: kafka_$1_$2_$3_count
type: COUNTER
labels:
"$4": "$5"
"$6": "$7"
- pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.*), (.+)=(.+)><>(\d+)thPercentile
name: kafka_$1_$2_$3
type: GAUGE
labels:
"$4": "$5"
"$6": "$7"
quantile: "0.$8"
- pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+)><>Count
name: kafka_$1_$2_$3_count
type: COUNTER
labels:
"$4": "$5"
- pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.*)><>(\d+)thPercentile
name: kafka_$1_$2_$3
type: GAUGE
labels:
"$4": "$5"
quantile: "0.$6"
- pattern: kafka.(\w+)<type=(.+), name=(.+)><>Count
name: kafka_$1_$2_$3_count
type: COUNTER
- pattern: kafka.(\w+)<type=(.+), name=(.+)><>(\d+)thPercentile
name: kafka_$1_$2_$3
type: GAUGE
labels:
quantile: "0.$4"
dependencies:
zookeeper:
endpoint: zookeeper.kafka.svc.cluster.local:2182
tls:
enabled: true
---
apiVersion: platform.confluent.io/v1beta1
kind: Zookeeper
metadata:
name: zookeeper
spec:
replicas: 3
tls:
autoGeneratedCerts: true
image:
application: confluentinc/cp-zookeeper:7.4.0
init: confluentinc/confluent-init-container:2.6.0
dataVolumeCapacity: 90Gi
logVolumeCapacity: 10Gi
storageClass:
name: premium-rwo
podTemplate:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: "app.stateful/component"
operator: In
values:
- "zookeeper"
topologySpreadConstraints:
- maxSkew: 1
topologyKey: "topology.kubernetes.io/zone"
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
app: zookeeper
clusterId: kafka
platform.confluent.io/type: zookeeper
resources:
requests:
memory: 3Gi
cpu: "1"
limits:
memory: 3Gi
cpu: "2"
```
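The `metricReporter` rename rules above are regex rewrites of JMX mBean names into Prometheus metric names, with `$1`, `$2`, … referring to capture groups. A small illustrative Python sketch of how one of the `PerSec` counter rules maps (the real JMX exporter also handles types and labels):

```python
import re

# One of the rules above:
#   pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*><>Count
#   name:    kafka_$1_$2_$3_total
pattern = re.compile(r"kafka\.(\w+)<type=(.+), name=(.+)PerSec\w*><>Count")

def rename(mbean):
    """Return the Prometheus metric name for a matching mBean, else None."""
    m = pattern.fullmatch(mbean)
    if not m:
        return None
    # $1/$2/$3 in the YAML correspond to the regex capture groups
    return f"kafka_{m.group(1)}_{m.group(2)}_{m.group(3)}_total"

print(rename("kafka.server<type=BrokerTopicMetrics, name=MessagesInPerSec><>Count"))
# kafka_server_BrokerTopicMetrics_MessagesIn_total
```

Note that rule order matters: the exporter applies the first matching pattern, which is why the most specific rules come first in the list.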
|
```shell
#!/bin/bash
#nesting for loops
for (( a=1; a<=3; a++ ))
do
echo "Starting loop $a:"
for (( b=1; b<=3; b++))
do
echo "Inside loog: $b:"
done
done
```
|
Trichotichnus autumnalis is a species of ground beetle in the family Carabidae. It is found in North America.
References
Further reading
Harpalinae
Articles created by Qbugbot
Beetles described in 1823
|
Berthe de Courrière (June 1852, Lille – 14 June 1916, Paris) was a French artists' model and demimondaine. She was the mistress, model, and heir of the sculptor and painter Auguste Clésinger.
Life
Born Caroline Louise Victoire Courrière in June 1852 in Lille, de Courrière set out for Paris at age 20 and first became the mistress of General Georges Boulanger and several ministers. The sculptor Auguste Clésinger, son-in-law of George Sand, remarked on Courrière's full form and gigantic proportions, bringing her the nicknames la grande dame ("the big woman") or Berthe aux grands pieds ("Bigfoot Bertha"). She was his model for the bust of Marianne for the Sénat as well as for the colossal statue of the Republic for the 1878 Exposition Universelle. On Clésinger's death, in 1883, Berthe was his sole heiress and found herself with a large fortune.
In 1886, she met Remy de Gourmont, then making his literary debut, and commissioned him to write a memorial of Clésinger. She became Gourmont's mistress and muse. Gourmont lived with her, at first on rue de Varenne then at 71 rue des Saints-Pères, until his death in 1915. de Courrière had him buried in the same vault as Clésinger. She died in 1916 and was laid to rest beside the two men in Clésinger's vault at the cimetière du Père-Lachaise. Gourmont's passionate letters to her during the year 1887 were published together in one volume as Lettres à Sixtine (1921).
Occultism
de Courrière was interested in occultism and found herself involved in a Black Mass affair that nearly went awry and earned her a month's stay in a psychiatric hospital. In the early morning of 8 September 1890 the Bruges police were informed that a naked woman was parading on the fortifications near the Smedenpoort. She showed signs of mental disorder and was taken to the Sint-Juliaan psychiatric institution in the Boeveriestraat, where she was identified as Berthe de Courrière. On 6 October, Gourmont travelled from Paris and removed her from the institution. It turned out that de Courrière had spent the night of 7 to 8 September at Moerstraat 36, the house of Canon Louis Van Haecke, rector of the chapel of the Holy Blood and alleged exorcist. She was also in touch with ex-Father Joseph-Antoine Boullan, who had been laicized as a heretic.
de Courrière was, it seems, quite unbalanced. She had to be interned a second time in Brussels in 1906, and she wrote a violent booklet, Nero Prince of Science, against Jean-Martin Charcot, which is characteristic of the hatred that patients sometimes direct at their psychiatrist.
She had a morbid passion for ecclesiastics, whom she endeavored to seduce by all means. Rachilde claims to have seen her take consecrated hosts out of her tapestry bag to throw them to stray dogs. The interior of her residence, according to Henry de Groux, "is the most heterogeneous thing I could ever have imagined in the taste of this half-pagan, half-Catholic, or so-called world. These are only chasubles, altar cloths, objects of worship adapted to the most unexpected destinations, monstrances, corporals, dalmatics, candelabra with multicolored candles, mysteriously lit in corners of shadow, near a superb lectern bearing on its wings works by Félicien Rops or the Marquis de Sade. The scents of benzoin, amber and rose essence alternately suffocate with those of incense."
Because of her reputation as a practitioner of the cult of Satan, her vault in Père Lachaise Cemetery still attracts devotees of black masses.
In popular culture
In 1889, Gourmont presented de Courrière to Joris-Karl Huysmans who based the character of Mme Hyacinthe Chantelouve in his novel Là-Bas (1891) on her.
Gourmont based his novels Sixtine, roman de la vie cérébrale (1890) and Le Fantôme (1893), both tales of religio-sadistic eroticism, on de Courrière.
References
Bibliography
External links
1852 births
1916 deaths
French artists' models
Courtesans from Paris
French occultists
French Satanists
People from Lille
Burials at Père Lachaise Cemetery
19th-century occultists
|
AllStar, sometimes referred to as La Secta AllStar, is the second album released by the Puerto Rican rock band La Secta AllStar. It was released independently in 2001 and includes the hit songs "Dame lo Que Quieras", "Asesino" and "Eso es Vivir".
Track listing
"Sequía"
"Dame lo Que Quieras"
"Asesino"
"X"
"Cruzado Pero Claro"
"Vino Viejo"
"Canaguey"
"Por Qué Volver"
"Dámela"
"Eso Es Vivir"
"End of the Story"
"Asesino [Vampy Dance Remix]"
2001 albums
La Secta AllStar albums
|
```html
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "path_to_url">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=US-ASCII">
<title>Class template ostream_time_duration_formatter</title>
<link rel="stylesheet" href="../../../../doc/src/boostbook.css" type="text/css">
<meta name="generator" content="DocBook XSL Stylesheets V1.79.1">
<link rel="home" href="../../index.html" title="The Boost C++ Libraries BoostBook Documentation Subset">
<link rel="up" href="../../date_time/doxy.html#header.boost.date_time.time_formatting_streams_hpp" title="Header <boost/date_time/time_formatting_streams.hpp>">
<link rel="prev" href="time_input_facet.html" title="Class template time_input_facet">
<link rel="next" href="ostream_time_formatter.html" title="Class template ostream_time_formatter">
</head>
<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
<table cellpadding="2" width="100%"><tr>
<td valign="top"><img alt="Boost C++ Libraries" width="277" height="86" src="../../../../boost.png"></td>
<td align="center"><a href="../../../../index.html">Home</a></td>
<td align="center"><a href="../../../../libs/libraries.htm">Libraries</a></td>
<td align="center"><a href="path_to_url">People</a></td>
<td align="center"><a href="path_to_url">FAQ</a></td>
<td align="center"><a href="../../../../more/index.htm">More</a></td>
</tr></table>
<hr>
<div class="spirit-nav">
<a accesskey="p" href="time_input_facet.html"><img src="../../../../doc/src/images/prev.png" alt="Prev"></a><a accesskey="u" href="../../date_time/doxy.html#header.boost.date_time.time_formatting_streams_hpp"><img src="../../../../doc/src/images/up.png" alt="Up"></a><a accesskey="h" href="../../index.html"><img src="../../../../doc/src/images/home.png" alt="Home"></a><a accesskey="n" href="ostream_time_formatter.html"><img src="../../../../doc/src/images/next.png" alt="Next"></a>
</div>
<div class="refentry">
<a name="boost.date_time.ostre_1_3_12_15_3_49_1_1_1"></a><div class="titlepage"></div>
<div class="refnamediv">
<h2><span class="refentrytitle">Class template ostream_time_duration_formatter</span></h2>
<p>boost::date_time::ostream_time_duration_formatter — Put a time type into a stream using appropriate facets. </p>
</div>
<h2 xmlns:rev="path_to_url~gregod/boost/tools/doc/revision" class="refsynopsisdiv-title">Synopsis</h2>
<div xmlns:rev="path_to_url~gregod/boost/tools/doc/revision" class="refsynopsisdiv"><pre class="synopsis"><span class="comment">// In header: <<a class="link" href="../../date_time/doxy.html#header.boost.date_time.time_formatting_streams_hpp" title="Header <boost/date_time/time_formatting_streams.hpp>">boost/date_time/time_formatting_streams.hpp</a>>
</span><span class="keyword">template</span><span class="special"><</span><span class="keyword">typename</span> time_duration_type<span class="special">,</span> <span class="keyword">typename</span> charT <span class="special">=</span> <span class="keyword">char</span><span class="special">></span>
<span class="keyword">class</span> <a class="link" href="ostre_1_3_12_15_3_49_1_1_1.html" title="Class template ostream_time_duration_formatter">ostream_time_duration_formatter</a> <span class="special">{</span>
<span class="keyword">public</span><span class="special">:</span>
<span class="comment">// types</span>
<span class="keyword">typedef</span> <span class="identifier">std</span><span class="special">::</span><span class="identifier">basic_ostream</span><span class="special"><</span> <span class="identifier">charT</span> <span class="special">></span> <a name="boost.date_time.ostre_1_3_12_15_3_49_1_1_1.ostream_type"></a><span class="identifier">ostream_type</span><span class="special">;</span>
<span class="keyword">typedef</span> <span class="identifier">time_duration_type</span><span class="special">::</span><span class="identifier">fractional_seconds_type</span> <a name="boost.date_time.ostre_1_3_12_15_3_49_1_1_1.fractional_seconds_type"></a><span class="identifier">fractional_seconds_type</span><span class="special">;</span>
<span class="comment">// <a class="link" href="ostre_1_3_12_15_3_49_1_1_1.html#id-1_3_12_15_3_49_1_1_1_5-bb">public static functions</a></span>
<span class="keyword">static</span> <span class="keyword">void</span> <a class="link" href="ostre_1_3_12_15_3_49_1_1_1.html#id-1_3_12_15_3_49_1_1_1_5_1-bb"><span class="identifier">duration_put</span></a><span class="special">(</span><span class="keyword">const</span> <span class="identifier">time_duration_type</span> <span class="special">&</span><span class="special">,</span> <span class="identifier">ostream_type</span> <span class="special">&</span><span class="special">)</span><span class="special">;</span>
<span class="special">}</span><span class="special">;</span></pre></div>
<div class="refsect1">
<a name="id-1.3.12.15.3.48.3.4"></a><h2>Description</h2>
<div class="refsect2">
<a name="id-1.3.12.15.3.48.3.4.2"></a><h3>
<a name="id-1_3_12_15_3_49_1_1_1_5-bb"></a><code class="computeroutput">ostream_time_duration_formatter</code> public static functions</h3>
<div class="orderedlist"><ol class="orderedlist" type="1"><li class="listitem">
<pre class="literallayout"><span class="keyword">static</span> <span class="keyword">void</span> <a name="id-1_3_12_15_3_49_1_1_1_5_1-bb"></a><span class="identifier">duration_put</span><span class="special">(</span><span class="keyword">const</span> <span class="identifier">time_duration_type</span> <span class="special">&</span> td<span class="special">,</span> <span class="identifier">ostream_type</span> <span class="special">&</span> os<span class="special">)</span><span class="special">;</span></pre>Put time into an ostream. </li></ol></div>
</div>
</div>
</div>
<table xmlns:rev="path_to_url~gregod/boost/tools/doc/revision" width="100%"><tr>
<td align="left"></td>
<code class="filename">LICENSE_1_0.txt</code> or copy at <a href="path_to_url" target="_top">path_to_url
</div></td>
</tr></table>
<hr>
<div class="spirit-nav">
<a accesskey="p" href="time_input_facet.html"><img src="../../../../doc/src/images/prev.png" alt="Prev"></a><a accesskey="u" href="../../date_time/doxy.html#header.boost.date_time.time_formatting_streams_hpp"><img src="../../../../doc/src/images/up.png" alt="Up"></a><a accesskey="h" href="../../index.html"><img src="../../../../doc/src/images/home.png" alt="Home"></a><a accesskey="n" href="ostream_time_formatter.html"><img src="../../../../doc/src/images/next.png" alt="Next"></a>
</div>
</body>
</html>
```
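The interface documented above follows a simple pattern: a class exposing a static `duration_put` that writes a duration into a `std::basic_ostream`. A standalone C++ sketch of that pattern, not the Boost implementation (`simple_duration` is a made-up stand-in for `time_duration_type`):

```cpp
#include <ostream>
#include <iomanip>

// Illustrative stand-in for time_duration_type.
struct simple_duration { int hours, minutes, seconds; };

// Sketch of the duration_put pattern: a static function that formats
// a duration as h:mm:ss into a stream it is handed.
struct simple_duration_formatter {
  typedef std::ostream ostream_type;
  static void duration_put(const simple_duration& td, ostream_type& os) {
    os << td.hours << ':'
       << std::setw(2) << std::setfill('0') << td.minutes << ':'
       << std::setw(2) << std::setfill('0') << td.seconds;
  }
};
```

Because the formatter holds no state, callers can use it with any compatible stream, e.g. a `std::ostringstream` for testing.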
|
```cpp
// //////////////////////////////////////////////////////////
// Crc32.h
// Slicing-by-16 contributed by Bulat Ziganshin
// Tableless bytewise CRC contributed by Hagai Gold
// see path_to_url
//
// if running on an embedded system, you might consider shrinking the
// big Crc32Lookup table by undefining these lines:
#if UINTPTR_MAX == 0xffffffff
// Don't define lookup tables on 32bit (mostly armhf/armel) because it seems to
// break and cause weird bus errors
#elif UINTPTR_MAX == 0xffffffffffffffff
#define CRC32_USE_LOOKUP_TABLE_BYTE
#define CRC32_USE_LOOKUP_TABLE_SLICING_BY_4
#define CRC32_USE_LOOKUP_TABLE_SLICING_BY_8
#define CRC32_USE_LOOKUP_TABLE_SLICING_BY_16
#endif
//
// - crc32_bitwise doesn't need it at all
// - crc32_halfbyte has its own small lookup table
// - crc32_1byte_tableless and crc32_1byte_tableless2 don't need it at all
// - crc32_1byte needs only Crc32Lookup[0]
// - crc32_4bytes needs only Crc32Lookup[0..3]
// - crc32_8bytes needs only Crc32Lookup[0..7]
// - crc32_4x8bytes needs only Crc32Lookup[0..7]
// - crc32_16bytes needs all of Crc32Lookup
// using the aforementioned #defines the table is automatically fitted to your needs
// uint8_t, uint32_t, int32_t
#include <stdint.h>
// size_t
#include <cstddef>
// crc32_fast selects the fastest algorithm depending on flags (CRC32_USE_LOOKUP_...)
/// compute CRC32 using the fastest algorithm for large datasets on modern CPUs
uint32_t crc32_fast (const void* data, size_t length, uint32_t previousCrc32 = 0);
/// merge two CRC32 such that result = crc32(dataB, lengthB, crc32(dataA, lengthA))
uint32_t crc32_combine (uint32_t crcA, uint32_t crcB, size_t lengthB);
/// compute CRC32 (bitwise algorithm)
uint32_t crc32_bitwise (const void* data, size_t length, uint32_t previousCrc32 = 0);
/// compute CRC32 (half-byte algorithm)
uint32_t crc32_halfbyte(const void* data, size_t length, uint32_t previousCrc32 = 0);
#ifdef CRC32_USE_LOOKUP_TABLE_BYTE
/// compute CRC32 (standard algorithm)
uint32_t crc32_1byte (const void* data, size_t length, uint32_t previousCrc32 = 0);
#endif
/// compute CRC32 (byte algorithm) without lookup tables
uint32_t crc32_1byte_tableless (const void* data, size_t length, uint32_t previousCrc32 = 0);
/// compute CRC32 (byte algorithm) without lookup tables
uint32_t crc32_1byte_tableless2(const void* data, size_t length, uint32_t previousCrc32 = 0);
#ifdef CRC32_USE_LOOKUP_TABLE_SLICING_BY_4
/// compute CRC32 (Slicing-by-4 algorithm)
uint32_t crc32_4bytes (const void* data, size_t length, uint32_t previousCrc32 = 0);
#endif
#ifdef CRC32_USE_LOOKUP_TABLE_SLICING_BY_8
/// compute CRC32 (Slicing-by-8 algorithm)
uint32_t crc32_8bytes (const void* data, size_t length, uint32_t previousCrc32 = 0);
/// compute CRC32 (Slicing-by-8 algorithm), unroll inner loop 4 times
uint32_t crc32_4x8bytes(const void* data, size_t length, uint32_t previousCrc32 = 0);
#endif
#ifdef CRC32_USE_LOOKUP_TABLE_SLICING_BY_16
/// compute CRC32 (Slicing-by-16 algorithm)
uint32_t crc32_16bytes (const void* data, size_t length, uint32_t previousCrc32 = 0);
/// compute CRC32 (Slicing-by-16 algorithm, prefetch upcoming data blocks)
uint32_t crc32_16bytes_prefetch(const void* data, size_t length, uint32_t previousCrc32 = 0, size_t prefetchAhead = 256);
#endif
```
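For reference, the bitwise variant declared above admits a very small implementation. The following is a sketch assuming the standard reflected polynomial 0xEDB88320 (as used by zlib-style CRC32), not the library's tuned code:

```cpp
#include <stdint.h>
#include <stddef.h>

// Sketch of crc32_bitwise: one bit at a time, reflected polynomial
// 0xEDB88320, no lookup table. Matches the declaration above, with
// the default argument left to the header.
uint32_t crc32_bitwise(const void* data, size_t length, uint32_t previousCrc32)
{
  uint32_t crc = ~previousCrc32;                 // initial XOR
  const uint8_t* current = (const uint8_t*)data;
  while (length-- != 0)
  {
    crc ^= *current++;
    for (int bit = 0; bit < 8; bit++)
      // if the low bit is set, shift and fold in the polynomial
      crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)(-(int32_t)(crc & 1)));
  }
  return ~crc;                                   // final XOR
}
```

The well-known check value for the ASCII string "123456789" is 0xCBF43926, which makes a quick sanity test for any CRC32 variant.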
|
Shavei Shomron (, lit. Returnees of Samaria) is an Israeli settlement in the northern West Bank. Built on lands confiscated from the neighboring Palestinian villages of An-Naqura and Deir Sharaf, it is located to the west of Nablus, on the road to Tulkarm. It is organised as a community settlement and falls under the jurisdiction of Shomron Regional Council. In it had a population of , mostly religious Zionist and Modern Orthodox Jews. Its municipal jurisdiction is 664 dunams, of which 272 dunams are built up.
The international community considers Israeli settlements in the West Bank illegal under international law, but the Israeli government disputes this.
History
In late 1976, supporters of the Gush Emunim (Bloc of the Faithful) staged a takeover of the abandoned Sebastia railroad station, located outside an Arab village of the same name. The location is in proximity to the ruins of Samaria, the capital city of the northern Kingdom of Israel, built by King Omri. Using this as justification to secure the Israeli claim to the region, the demonstrators demanded that settlement be initiated in this region. With the support of newly elected Prime Minister Menachem Begin, a residential community was built the following year alongside a military base at a strategically valuable crossroads by residents of nearby Netanya, and with the assistance of the Amana settlement organisation.
According to ARIJ, Israel confiscated land from two nearby Palestinian villages in order to construct Shavei Shomron: 680 dunums of land were taken from An-Naqura, while 236 dunums were taken from Deir Sharaf.
On 11 October 2022, an Israeli soldier was shot dead by Palestinian gunmen in the settlement. The Palestinian group the Lions' Den claimed responsibility.
Intifada and disengagement
Like all Israeli settlers, the residents of Shavei Shomron traveled through and conducted business in Nablus and neighboring Arab villages. However, as tensions increased following the First Intifada, Israeli travel to Nablus was restricted, and new roads were built to bypass certain villages. Twenty-four hundred dunums were confiscated from Palestinian villages to build a bypass road from Shavei Shomron to the Mt. Ibal military installation. In 2002, the Israeli Supreme Court approved the construction of part of the West Bank barrier around the community. Many local residents opposed its construction, fearing that it may become a future border between Israel and a Palestinian state. Others were concerned that an incident like the one earlier in the year, when a terrorist infiltrated the community and targeted the kindergarten with grenades and firearms before being shot by a local resident, could be repeated without such measures.
In August 2005, the community hosted mass demonstrations in opposition to the Israeli disengagement from Gaza, which included the forced evacuation of four settlements to the north of Shavei Shomron, and brought a potential frontier to the settlement's backyard. Following their evacuation and demolition, the community hosted some of the former residents of Homesh and Sa-Nur.
As of 2008, residents of Shavei Shomron were being trained by Mishmeret Yesha in counterterrorism tactics and the use of guns.
Demographics
The community has a swimming pool and ulpan (Hebrew classes) for newcomers which serve to attract a population including many olim (Jewish immigrants) from English-speaking countries as well as Russian Jews from the former USSR, Yemenite Jews, Bnei Menashe, and some Incan Jewish families from Trujillo, Peru that converted to Orthodox Judaism.
References
External links
Shavey-shomron / Official website
Religious Israeli settlements
Populated places established in 1977
1977 establishments in the Israeli Military Governorate
Israeli settlements in the West Bank
|
The Upper Volga (Verkhne-Volzhskaya) railway (Russian: Верхне-Волжская железная дорога) was a private railway in the upper Volga region of Russia, built in 1914–1918 and in the second half of the 1930s. It was planned as part of a backup route from St. Petersburg to Moscow. Today, the lines are part of the Moscow region of the October Railway.
Main lines
Savelovo-Kalyazin (completed 1918)
Kashin-Kalyazin (completed 1918)
Kalyazin-Uglich (completed 1930s)
Kalyazin-Novki (not constructed)
Historical background
The Verkhne-Volzhskaya railroad was designed and built at the beginning of the 20th century along the Tver-Rybinsk-Nizhny Novgorod corridor, a route promising from a transport and economic point of view but barely served by fast railway transport at the time. The Verkhne-Volzhskaya railway was to connect the cities of Kashin, Kalyazin, Rybinsk, and Uglich both with each other and with the Moscow-Savyolovo branch completed in 1900, which made it possible to speed up the delivery of goods to Moscow from the Volga; until then, goods were carried on slow-moving punts.
The construction of the Verkhne-Volzhskaya railway, whose main branches were located and projected to the north-northwest of Moscow in the Tver, Yaroslavl, and Vladimir regions, was organized through a joint-stock company created for the purpose. This pre-revolutionary arrangement is analogous to what is today called a public-private partnership.
Such a concessionary, or mixed public-private, form of financing and management was typical for the construction of railways in Russia. Against the background of the mid-19th-century railway construction boom, the Russian state combined private construction of lines with construction at the expense of the treasury, periodically shifting priority from one system to the other.
From the second half of the 19th century until the revolution of 1917, several main railway branches were laid and operated with the money of private founders, with the participation of the state; these formed the basis of the modern railway network of Russia.
The Chinese Eastern Railway (KVZhD), the Kulundinskaya and Minusinsk railways, the Akkerman Railway, the Southeastern Railway network, and the Olonets Railway were all built with capital from private entrepreneur-concessionaires, united in the form of joint-stock companies. (The very concept of a joint-stock company in pre-revolutionary Russia was enshrined in the "Regulations on Companies on Shares", approved by the Decree of 6 December 1836 and in force until the October Revolution.)
The founders of railway societies were given permission and administrative support by the state, but they bore colossal financial risks. To obtain permission to establish a joint-stock company, they required the consent of the supreme authority or of subordinate administrative bodies, as well as of the Minister of Finance.
Projects were very risky for entrepreneurs: before they were allowed to establish a company, they were obliged to survey the route, draw up a complete road design, work out estimates for all construction projects, and deposit with the State Bank 5% of the total construction capital. At the same time, the government could reject the project or even completely abandon the concession.
Despite the risks, the operation of railways was economically attractive. The government guaranteed fulfillment of obligations on securities of railway companies. Owners of shares were provided with a solid dividend not from the moment of putting the line into operation, but from the day of the organization of the joint-stock company.
Founders of the Company and its charter
The founders of the Verkhne-Volzhskaya Railroad Society were the entrepreneur and railway pioneer Nikolai Vasilievich Belyaev (1859-1920), the honorary citizen of Pereslavl Zalessky Leonid Pavlov (1870-1917), the railway engineer Fedor Nikolayevich Mamontov, the nobleman Nikolai Mitrofanovich Andreev, the personal honorary citizen Ivan Orestovich Kurlyukov, and the major-general Anatoly Anatolyevich Reinboth (1868-1918). The chairman of the board was Nikolai Vasilievich Belyaev.
The board of the company was located in Moscow, while the management of railway construction was based in St. Petersburg, in the House of Pertsov (N.N. Pertsov was one of the founders of the Black Sea Railroad Society).
The most significant right that the shareholders of the company received was the right to own the highway and its subsidiary enterprises for 81 years from the day the movement was opened.
At the same time, the Society assumed many obligations to the state. Thus, after the expiry of the 81-year term, the railway and other property of the Society were to pass free of charge to the treasury. The property of the company, both immovable and movable, constituting the railroad's ownership, could not be alienated or mortgaged without the permission of the government.
Among the company's other duties were surveying for the construction of the Uglich-Rybinsk line, the carriage of mail and, if necessary, troops, the allocation of apartments for postal and telegraph officials, and the construction of a military food station on the Kashin-Novki line.
The carrying capacity of the railway was to be 3 pairs of passenger and 6 pairs of freight trains per day. It was also planned to use the road for the movement of military trains.
The road was planned to be built in 3 years. The total cost of the project, including the rolling stock, as well as the working capital of the company, was to be 21 million 620 thousand rubles.
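For a sense of scale, the 5% State Bank deposit described earlier can be set against the total project cost; a quick sketch, illustrative only, with the assumption that the stated total cost stands in for the construction capital:

```javascript
// Illustrative only: the founders had to deposit 5% of construction capital
// with the State Bank; here the stated total project cost stands in for that capital.
const totalCostRubles = 21620000; // 21 million 620 thousand rubles (from the text)
const depositRate = 0.05;         // 5% deposit requirement

const deposit = Math.round(totalCostRubles * depositRate);
console.log(`Required deposit: about ${deposit} rubles`); // about 1081000 rubles
```

On this reading, the founders would have had to lock up roughly a million rubles before construction could even be authorized.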
Planning and construction of railway lines before the Revolution
The Verkhne-Volzhskaya railway was built in order to connect the cities of Kashin (at that time the station of the Moscow-Vindavo-Rybinsk railway), Kalyazin and Novki station of the Moscow-Nizhny Novgorod railway.
Also, two more branches were to be built from Kalyazin: the first to Savelovo station (then run by the Northern Railways), and the second to the city of Uglich.
According to data for 1914, the total length of the road was 332 versts (354 km). Of these, the Kashin-Novki line was to be the longest, at about 239 versts (255 km); the Kalyazin-Savelovo branch measured 50 versts (53 km), and the Kalyazin-Uglich branch about 43 versts (46 km).
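The verst figures above can be checked against their kilometre equivalents; a small sketch, with the conversion factor (one verst is about 1.0668 km) as the only assumption:

```javascript
// Convert the line lengths given in versts to kilometres (1 verst ≈ 1.0668 km).
const KM_PER_VERST = 1.0668;

const lines = [
  { name: "Total road length", versts: 332 },  // stated as 354 km
  { name: "Kashin-Novki", versts: 239 },
  { name: "Kalyazin-Savelovo", versts: 50 },   // stated as 53 km
  { name: "Kalyazin-Uglich", versts: 43 },     // stated as 46 km
];

for (const line of lines) {
  console.log(`${line.name}: ${line.versts} versts ≈ ${Math.round(line.versts * KM_PER_VERST)} km`);
}
```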
Because of the First World War and the revolution in Russia, construction was very slow. During the years 1914–1917, the company built the Kalyazin-Savelovo railway line, which was opened in 1918.
The second, shorter branch from Kashin to Kalyazin was also built. The opening of this section completed the reserve route from Moscow to St. Petersburg, passing through Kalyazin, Ovinische, Khvoynaya, and Mga.
In 1916, the price lists for the construction of the Kashin-Novki and Savelovo-Uglich lines were developed. However, owing to the country's difficult financial situation, construction of the Savyolovsky radius from Kalyazin through Uglich to Rybinsk (designed in pre-revolutionary Russia) never began.
Verkhne-Volzhskaya railway in the Soviet era
After 1918, several members of the Upper Volga Railway Society, including the entrepreneur and railway pioneer N.V. Belyaev, suffered repressions, and the Society, along with the railway lines it had built, was nationalized and handed over to the People's Commissariat of Railways.
The "tsarist" plans for the construction of the Kalyazin-Uglich-Rybinsk line were revived in the 1930s during industrialization, when it became necessary to deliver construction materials for the Uglich hydroelectric plant. In a short time, the Kalyazin-Uglich branch (48 km) was completed on the basis of the project plans of the Upper Volga Railroad Society and opened for traffic in 1937.
With the flooding of the bed of the Uglich Reservoir, it became necessary to relocate the tracks in some areas, and some settlements were flooded, including the historic town of Kalyazin. The same happened to settlements on the section between Uglich and Rybinsk with the creation of the Rybinsk Reservoir.
Upper-Volga railroad today
Today, the Savelovo-Kalyazin and Kalyazin-Uglich branches are part of the Savelovsky direction of the Moscow region of the October Railway. They carry freight trains and the Moscow-Rybinsk passenger train.
References
Literature
The Upper Volga Railway project: An explanatory note to the project. - St. Petersburg, 1913. http://rr.aroundspb.ru/1913_Kashin_Novki_Savelovo_Uglija_zd-1.pdf
Technical conditions for design and construction of the Upper Volga railway. - Moscow, 1914. http://rr.aroundspb.ru/1913_Kashin_Novki_Savelovo_Uglija_zd-2.pdf
Essay on the petition of Kalyazin's public figures for the routing of a railway through Kalyazin, Tver province. Kalyazin, 1915.
Archives
Moscow Central State Archive. MF 1434. Op.1. The board of society of the Verkhne-Volzhskaya (Upper Volga) railway. 1914–1917. Dd.1-10.
State Archive of St. Petersburg. F.-9346. Op.1. Department of Construction of the Upper Volga Railway - Committee of Public Institutions of the Supreme Economic Council of the RSFSR (1917 - [1921]). Dd. 158. 1917-1921 yy.
RGIA. F.350. Plans and drawings for the construction of railways (collection). Op.77. Verkhne-Volzhskaya (Upper Volga) railway, etc. 1873-1917 yy.
External links
Korshunov A. The Kalyazin zemstvo got the railway
McKay J.P. Pioneers for Profit: Foreign Entrepreneurship and Russian Industrialization, 1885-1913. p. 171
Railway companies of Russia
Railway lines in Russia
|
Katupathwela is a village in Sri Lanka. It is located within Central Province.
See also
List of towns in Central Province, Sri Lanka
External links
Populated places in Nuwara Eliya District
|
```c++
// (See accompanying file LICENSE.md or copy at path_to_url
#include <boost/hana/ext/std/array.hpp>
#include <boost/hana/less.hpp>
#include <array>
namespace hana = boost::hana;
constexpr std::array<int, 4> evens = {{2, 4, 6, 8}};
constexpr std::array<int, 4> odds = {{1, 3, 5, 7}};
constexpr std::array<int, 5> up_to_5 = {{1, 2, 3, 4, 5}};
// arrays with same length
static_assert(hana::less(odds, evens), "");
// arrays with different lengths
static_assert(hana::less(up_to_5, odds), "");
int main() { }
```
|
Bogaha Kotuwe Gedara Niluka Geethani Rajasekara (born 17 March 1982) is a female Sri Lankan long-distance runner. With a time of 2:40:07, a new Sri Lankan record, at the 2015 Hong Kong Marathon, Rajasekara achieved the qualifying standard for the marathon at the 2016 Summer Olympics. She competed in the marathon event at the 2015 World Championships in Athletics in Beijing, China, finishing 49th.
At the 2015 Hong Kong Marathon on 25 January 2015 she set a Sri Lankan marathon record with a time of 2:40:07. At the 2016 Summer Olympics she finished 129th out of a field of 157.
See also
Sri Lanka at the 2015 World Championships in Athletics
References
External links
1982 births
Living people
Sportspeople from Kandy
Sri Lankan female long-distance runners
Sri Lankan female marathon runners
World Athletics Championships athletes for Sri Lanka
Athletes (track and field) at the 2016 Summer Olympics
Olympic athletes for Sri Lanka
20th-century Sri Lankan women
21st-century Sri Lankan women
|
The Nigerian National Integrated Power Project (NIPP) was conceived in 2004 when Olusegun Obasanjo was the President of the Federal Government of Nigeria. It was formed to address the issues of insufficient electric power generation and excessive gas flaring from oil exploration in the Niger Delta region. Seven power plants were designed in gas-producing states as part of the project.
Planned power plants included:
Ihovbor Power Station Benin, Edo State with the capacity of 4 x 112.5 MW (ISO 126 MW).
Calabar Power Station, Cross River State with the capacity of 5 x 112.5 MW (ISO 126 MW).
Egbema Power Station, Imo State with the capacity of 3 x 112.5 MW (ISO 126 MW).
Gbarain Power Station, Yenagoa, Bayelsa State with the capacity of 2 x 112.5 MW (ISO 126 MW).
Sapele Power Station, Delta State with the capacity of 4 x 112.5 MW (ISO 126 MW).
Omoku Power Station, Rivers State with the capacity of 2 x 112.5 MW (ISO 126 MW).
Ikot Abasi Power Station, Akwa Ibom with the capacity of 2 x 112.5 MW (ISO 126 MW) (replaced later by Ibom Power Station).
Together, the projects generated contracts worth $414,000,000 for the supply of turbines and electricity generation equipment to General Electric (GE). The primary turbine is the GE 9E gas turbine, with a nominal ISO rating of 126 MW. After adjusting for site conditions, the capacity was set at 112.5 MW.
The plants are low-efficiency simple cycle but have provision for future extension to combined cycle.
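Summing the unit counts listed for the seven original plants gives the programme's combined nameplate capacity at site conditions; this is an illustrative tally from the figures above, not an official total:

```javascript
// Gas-turbine units per plant, from the list above; each rated 112.5 MW at site conditions.
const SITE_RATING_MW = 112.5;
const unitsPerPlant = {
  Ihovbor: 4,
  Calabar: 5,
  Egbema: 3,
  Gbarain: 2,
  Sapele: 4,
  Omoku: 2,
  "Ikot Abasi": 2,
};

const totalUnits = Object.values(unitsPerPlant).reduce((sum, n) => sum + n, 0);
const totalMW = totalUnits * SITE_RATING_MW;
console.log(`${totalUnits} units x ${SITE_RATING_MW} MW = ${totalMW} MW`); // 22 units x 112.5 MW = 2475 MW
```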
Administration changes in 2007 interrupted funding for more than two years.
The NIPP project includes 11 power plants and 4 FGN Power Stations:
Alaoji Power Station, Abia State, combined cycle plant with the capacity of 4 x 112.5 MW (ISO 125 MW) and 2x steam 255 MW
Omotosho II Power Station, Ondo State, with the capacity of 4 x 112.5 (ISO 125 MW)
Olorunsogo II Power Station, Ogun State, combined cycle plant with the capacity of 4 x 125 MW and 2 x steam 125 MW
Geregu II Power Station, Kogi State, with the capacity of 434 MW
Following the Afam V and Geregu I plants, Geregu II is now the third gas-turbine power plant to be constructed by Siemens in Nigeria as a turnkey project and completed on schedule. The scope of delivery supplied by Siemens for Geregu II included three SGT5-2000E gas turbines, three SGen5-100A generators, as well as all the electrical systems and the SPPA-T3000 control system.
The Ikot Abasi NIPP power plant has been replaced by Ibom Power, which is a 190 MW project of the Akwa Ibom State Government.
The revised project involves large-scale transmission projects across all of Nigeria, which are crucial to ensure power distribution from generation plants to final customers.
References
Power stations in Nigeria
|
```javascript
// Common JavaScript snippet topics:
// - Anonymous functions
// - IIFE pattern
// - Get query/URL variables
// - Function call method
// - Check if a document is done loading
```
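The topics listed above are common JavaScript snippet patterns. A minimal sketch of each follows; the browser-only check (`document.readyState`) is left as a comment since it assumes a DOM:

```javascript
// Anonymous function: a function expression with no name, assigned to a variable.
const greet = function (name) {
  return `Hello, ${name}`;
};

// IIFE (Immediately Invoked Function Expression): runs once and keeps `n` private.
const counter = (function () {
  let n = 0;
  return () => ++n;
})();

// Get query/URL variables: parse a query string with URLSearchParams.
function getQueryVariable(queryString, key) {
  return new URLSearchParams(queryString).get(key);
}

// Function call method: invoke a function with an explicit `this` via Function.prototype.call.
function describe() {
  return `${this.name} is ${this.age}`;
}
const described = describe.call({ name: "Ada", age: 36 }); // "Ada is 36"

// Check if a document is done loading (browser-only, shown as a comment):
// if (document.readyState === "complete") { onReady(); }
// else { window.addEventListener("load", onReady); }

console.log(greet("world"), getQueryVariable("?a=1&b=2", "b"), described);
```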
|
```c++
// Boost.Geometry Index
// Additional tests
// Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at
// path_to_url
#include <iostream>
#include <fstream>
#define BOOST_GEOMETRY_INDEX_DETAIL_EXPERIMENTAL
#include <boost/geometry.hpp>
#include <boost/geometry/index/rtree.hpp>
#include <boost/geometry/index/detail/rtree/utilities/statistics.hpp>
#include <boost/archive/binary_oarchive.hpp>
#include <boost/archive/binary_iarchive.hpp>
#include <boost/archive/xml_oarchive.hpp>
#include <boost/archive/xml_iarchive.hpp>
#include <boost/serialization/vector.hpp>
#include <boost/foreach.hpp>
#include <boost/timer.hpp>
template <typename T, size_t I = 0, size_t S = boost::tuples::length<T>::value>
struct print_tuple
{
template <typename Os>
static inline Os & apply(Os & os, T const& t)
{
os << boost::get<I>(t) << ", ";
return print_tuple<T, I+1>::apply(os, t);
}
};
template <typename T, size_t S>
struct print_tuple<T, S, S>
{
template <typename Os>
static inline Os & apply(Os & os, T const&)
{
return os;
}
};
int main()
{
namespace bg = boost::geometry;
namespace bgi = bg::index;
typedef boost::tuple<std::size_t, std::size_t, std::size_t, std::size_t, std::size_t, std::size_t> S;
typedef bg::model::point<double, 2, bg::cs::cartesian> P;
typedef bg::model::box<P> B;
typedef B V;
//typedef bgi::rtree<V, bgi::linear<16> > RT;
//typedef bgi::rtree<V, bgi::quadratic<8, 3> > RT;
//typedef bgi::rtree<V, bgi::rstar<8, 3> > RT;
typedef bgi::rtree<V, bgi::dynamic_linear > RT;
//RT tree;
RT tree(bgi::dynamic_linear(16));
std::vector<V> vect;
boost::timer t;
//insert values
{
for ( double x = 0 ; x < 1000 ; x += 1 )
for ( double y = 0 ; y < 1000 ; y += 1 )
vect.push_back(B(P(x, y), P(x+0.5, y+0.5)));
RT tmp(vect, tree.parameters());
tree = boost::move(tmp);
}
B q(P(5, 5), P(6, 6));
// gather the tree statistics tuple once and reuse it, instead of printing an uninitialized S
S s = bgi::detail::rtree::utilities::statistics(tree);
std::cout << "vector and tree created in: " << t.elapsed() << std::endl;
print_tuple<S>::apply(std::cout, s) << std::endl;
std::cout << boost::get<0>(s) << std::endl;
BOOST_FOREACH(V const& v, tree | bgi::adaptors::queried(bgi::intersects(q)))
std::cout << bg::wkt<V>(v) << std::endl;
// save
{
std::ofstream ofs("serialized_vector.bin", std::ios::binary | std::ios::trunc);
boost::archive::binary_oarchive oa(ofs);
t.restart();
oa << vect;
std::cout << "vector saved to bin in: " << t.elapsed() << std::endl;
}
{
std::ofstream ofs("serialized_tree.bin", std::ios::binary | std::ios::trunc);
boost::archive::binary_oarchive oa(ofs);
t.restart();
oa << tree;
std::cout << "tree saved to bin in: " << t.elapsed() << std::endl;
}
{
std::ofstream ofs("serialized_tree.xml", std::ios::trunc);
boost::archive::xml_oarchive oa(ofs);
t.restart();
oa << boost::serialization::make_nvp("rtree", tree);
std::cout << "tree saved to xml in: " << t.elapsed() << std::endl;
}
t.restart();
vect.clear();
std::cout << "vector cleared in: " << t.elapsed() << std::endl;
t.restart();
tree.clear();
std::cout << "tree cleared in: " << t.elapsed() << std::endl;
// load
{
std::ifstream ifs("serialized_vector.bin", std::ios::binary);
boost::archive::binary_iarchive ia(ifs);
t.restart();
ia >> vect;
std::cout << "vector loaded from bin in: " << t.elapsed() << std::endl;
t.restart();
RT tmp(vect, tree.parameters());
tree = boost::move(tmp);
std::cout << "tree rebuilt from vector in: " << t.elapsed() << std::endl;
}
t.restart();
tree.clear();
std::cout << "tree cleared in: " << t.elapsed() << std::endl;
{
std::ifstream ifs("serialized_tree.bin", std::ios::binary);
boost::archive::binary_iarchive ia(ifs);
t.restart();
ia >> tree;
std::cout << "tree loaded from bin in: " << t.elapsed() << std::endl;
}
std::cout << "loaded from bin" << std::endl;
print_tuple<S>::apply(std::cout, bgi::detail::rtree::utilities::statistics(tree)) << std::endl;
BOOST_FOREACH(V const& v, tree | bgi::adaptors::queried(bgi::intersects(q)))
std::cout << bg::wkt<V>(v) << std::endl;
t.restart();
tree.clear();
std::cout << "tree cleared in: " << t.elapsed() << std::endl;
{
std::ifstream ifs("serialized_tree.xml");
boost::archive::xml_iarchive ia(ifs);
t.restart();
ia >> boost::serialization::make_nvp("rtree", tree);
std::cout << "tree loaded from xml in: " << t.elapsed() << std::endl;
}
std::cout << "loaded from xml" << std::endl;
print_tuple<S>::apply(std::cout, bgi::detail::rtree::utilities::statistics(tree)) << std::endl;
BOOST_FOREACH(V const& v, tree | bgi::adaptors::queried(bgi::intersects(q)))
std::cout << bg::wkt<V>(v) << std::endl;
return 0;
}
```
|
David or Dave Wight may refer to:
David Wight (cricketer) (born 1959), cricketer from the Cayman Islands
David Wight (rower) (1934–2017), American gold medalist at the 1956 Melbourne Olympics
Dave Wight, member of the band London
See also
David White (disambiguation)
|
Nico Pelamonia (16 March 1940 – 15 May 2017) was an Indonesian actor turned film director who won the Citra Award for Best Director in 1976 for his film Semalam di Malaysia. He has been involved in 33 feature film productions since his debut in Fred Young's Dibalik Awan in 1963.
Biography
Pelamonia was born in Ambon, Maluku, on 16 March 1940. His parents intended for him to become a priest, and after senior high school he was sent to Java to prepare for the priesthood. In 1962, however, Pelamonia enrolled at the Indonesian National Theater Academy (ATNI). He had his first film role, a bit part in Fred Young's Dibalik Awan (Behind the Clouds), the following year. During his school years he acted in two films, D. Djajakusuma's Masa Topan dan Badai (Time of Storms and Gales; 1963) and Pitrajaya Burnama's Karma (1965). He was also involved in television, becoming a presenter for the newly established TVRI.
In 1965 Pelamonia dropped out of ATNI and signed with Pertisa Film, later known as Tuti Mutia Film Production. Pelamonia's first film for the company, Fadjar Ditengah Kabut (Dawn Amidst the Fog), saw him act and serve as assistant director to Danu Umbara. The following year Pelamonia made his directorial debut with Sendja di Jakarta (Twilight in Jakarta), an adaptation of the novel of the same name by Mochtar Lubis. Two years later he departed for West Germany, where he studied filmmaking.
Upon his return to Indonesia in 1970, Pelamonia resumed acting. He took a role in Hasmanan's Samiun dan Dasima (Samiun and Dasima) in 1970. The following year he directed his first film since returning from West Germany, Jang Djatuh Dikaki Lelaki (Those Who Fall at the Feet of Men) from the novel by Abdullah Harahap. He directed a further four films for Tuti Mutia, acting in several of them. This culminated with Semalam di Malaysia (One Night in Malaysia) in 1975. After the film garnered him a Citra Award for Best Director at the 1976 Indonesian Film Festival, Pelamonia left Tuti Mutia and began working as a freelancer.
Between 1976 and 1988 Pelamonia directed another 13 films, mostly romances. His 1981 adaptation of Marga T's Dr. Karmila garnered him another Citra Award nomination, although he lost to Arifin C. Noer for Serangan Fajar (Attack at Dawn). Pelamonia also acted in a further ten films, including Nagabonar in 1986; that film was submitted for an Academy Award for Best Foreign Language Film, although it was not nominated. During the 1980s he was active as a member of the Indonesian Film Festival selection committee and as chairman of the Film and Television Employees Association.
Filmography
As actor
Dibalik Awan (1963)
Masa Topan dan Badai (1963)
Karma (1965)
Fadjar Ditengah Kabut (1966)
Sendja di Djakarta (1967)
Hidup, Tjinta dan Air Mata (1970)
Samiun dan Dasima (1970)
Anjing-Anjing Geladak (1972)
Laki-Laki Pilihan (1973)
Tragedi Tante Sex (1976)
Si Doel Anak Modern (1976)
Pengalaman Pertama (1977)
Yoan (1977)
Istriku Sayang Istriku Malang (1977)
Ombaknya Laut Mabuknya Cinta (1978)
Musang Berjanggut (1983)
Penyesalan Seumur Hidup (1986)
Nagabonar (1986)
Akibat Kanker Payudara (1987)
Nagabonar Jadi 2 (2007)
As crew
Sendja di Djakarta (1967)
Jang Djatuh Dikaki Lelaki (1971)
Anjing-Anjing Geladak (1972)
Laki-Laki Pilihan (1973)
Prahara (Betinanya Seorang Perempuan) (1974)
Semalam di Malaysia (1975)
Wajah Tiga Perempuan (1976)
Marina (1977)
Anggrek Merah (1977)
Yoan (1977)
Perempuan Tanpa Dosa (1978)
Di Ujung Malam (1979)
Karena Dia (1979)
Cantik (1980)
Permainan Bulan Desember (1980)
Amalia SH (1981)
Dr. Karmila (1981)
Luka Hati Sang Bidadari (1983)
Kenikmatan (1984)
Yang Terbelenggu (1984)
Gema Kampus 66 (1988)
References
Works cited
External links
1940 births
2017 deaths
Citra Award winners
People from Maluku Islands
Moluccan people
Indonesian Christians
Indonesian film directors
|
Space Emperor God Sigma is a mecha anime television series that aired from 1980 to 1981. It ran for 50 episodes. It is also referred to as "God Sigma, Empire of Space" and "Space Combination God Sigma".
Concept
Space Emperor God Sigma was created by Toei's Television Division, under the name "Saburo Yatsude" and produced by Academy Production (who subcontracted Greenbox). The series was produced by the main Toei company, and not by Toei Animation; Yoshinobu Nishizaki's Academy Productions provided production assistance. Toei Agency handled the advertising for the show, and its main sponsor was Popy (now Bandai's Boy's Toys Division).
This anime was the last that Takashi Ijima would work on; he had been part of Toei's TV division's projects since Chōdenji Robo Combattler V, which aired in 1976. Joined by Katsuhiko Taguchi, the chief director, the two created an anime that lasted for four full seasons, which was rare at the time of its broadcast.
Story
The story is set in the year 2050 AD, and mankind has been steadily advancing its space technology. However, the planet is suddenly set upon by a mysterious enemy: the forces of Eldar, who came from 250 years in the future. In their time, 2300 AD, their planet Eldar was invaded by Earth, and soundly defeated by Earth's Trinity Energy, a mysterious energy used in their weaponry that possesses power many times that of a hydrogen bomb. The Eldar people's objective is to steal this Trinity Energy before it can be used against them.
The Eldar forces begin by taking over Jupiter's moon Io, one of the places humanity has immigrated to by then. After that, they begin to attack Trinity City with their legions of Cosmosauruses in order to steal the Trinity Energy. Toshiya and his friends use God Sigma to protect the planet and the Trinity Energy, and the battle evolves into a long war to retake Io.
Characters
Trinity City
Toshiya Dan
Voiced by Tomiyama Kei; by Tomokazu Seki in Super Robot Wars Z.
The protagonist of the show. He is an 18-year-old second-generation pioneer living on Jupiter's moon, Io. The main pilot of God Sigma.
Julie Noguchi
Assistant to Dr. Kazami, the head scientist and researcher at Trinity City. One of the three pilots of God Sigma.
Kira Kensaku
One of Toshiya's good friends, and also a native of Io.
Eldar's Forces
Supreme Commander Teral
Voiced in Super Robot Wars Z by Hiromi Tsuru.
The supreme commander of the Eldar forces sent to Earth to steal the Trinity Energy.
Commander Leats
The commander of the Cosmosauruses, under Teral's command.
Jeela
A commander under Teral's command.
Mecha
God Sigma
The completed giant robot formed when the Kuuraioh, Kaimeioh, and Rikushin'oh combine with the Big-Wing.
With the command "Sigma Formation", the three robots form a triangle, and when they shout "Trinity Charge", the Big-Wing flies to them and combines. Two separate combination scenes exist in the anime - one during the beginning, and one from the middle onwards.
Its total height was originally stated to be 265 meters tall, but since this would cause difficulties for the actual animation, it was changed to 66 meters. It weighs 1200 tons, and its power source is Trinity Energy. It possesses the ability to cruise from Earth to Jupiter and back. It can be controlled by the main pilot, Toshiya, but without the other two component robots' generators to supplement his own, it's not as powerful.
In the final episode, it and the giant mecha Gargos destroy each other. However, it is repaired after the war and revamped to be a single-pilot machine, and Toshiya sets off with it by himself.
Staff
Original Work: Saburo Yatsude
Producer: Takashi Ijima, Heita Ezu (Tokyo Channel 12), Itaru Orita (Toei TV Division)
Music: Hiroshi Tsutsui
Music Performance: Tokyo Indoors Music Society
Chief Director: Takeyuki Kanda (episodes 1-10), Katsuhiko Taguchi (episodes 11-50)
Character Design: Kazuhiko Utaka
Character Concepts: Kaoru Shintani
Design Assistance: Yutaka Izubuchi
Mechanical Design: Katsushi Murakami, Submarine
Artistic Director: Kazuo Okada, Tadanoumi Shimokawa
Music Director: Jouhei Kawamura
Animation Production: Greenbox, Anime City, Sunluck
Production Assistance: Academy Production, Tokyo Dôga
Production: Toei Company, Ltd., Toei Agency (advertising)
Episodes
Theme songs
Opening Theme: "Ganbare! Uchuu no Senshi" (Do Your Best, Space Warrior!)
Vocals: Isao Sasaki, Koroogi '73, Columbia Yurikago-kai
Lyrics: Saburo Yatsude
Composition: Asei Kobayashi
Arrangement: Masahisa Takeichi
Ending Theme: "Red Blue Yellow"
Vocals: Kumiko Kaori, Koroogi '73, Columbia Yurikago-kai
Lyrics: Saburo Yatsude
Composition: Asei Kobayashi
Arrangement: Masahisa Takeichi
The ending theme, "Red Blue Yellow", consists of the planets in the Solar System in order. At the time of the show's broadcast, Pluto was actually inside the orbit of Neptune. This was not reflected in the animation, the reason being that the show itself is set in the year 2050. Also, the rings of Jupiter had just been discovered the year before, so in the ending animation, Jupiter is depicted with rings. However, in contrast to the actual rings of Jupiter, which are too thin to see a shadow cast on the planet, the picture of Jupiter in the ending sequence does have the shadow of rings on it.
Media
The series was telecast in Italy, simply titled God Sigma.
God Sigma toys were released under the premier Godaikin toy label in Japan and Hong Kong in the 1980s.
On March 16, 2011, Pony Canyon released a complete DVD box set, which only had a limited production run. This is the first and only digital version of the show.
Appearances in other works
Space Emperor God Sigma appears in various games by Banpresto (currently Bandai Namco Games). Its first appearance was in a 1992 Famicom game called Shuffle Fight. Its second was in the 2008 PS2 game Super Robot Wars Z. However, by this time, Toshiya's voice actor Tomiyama Kei had died, so the role was given to Tomokazu Seki instead. In the same game, a robot from Gravion called God Σ Gravion appears, triggering a conversation between the characters of each series. God Sigma also appears in the game's two-part sequel, 2nd Super Robot Wars Z: Hakai-hen and Saisei-hen.
References
External links
Encirobot.com (Italian)
1980 anime television series debuts
1980 Japanese television series debuts
1981 Japanese television series endings
Adventure anime and manga
Animated space adventure television series
Kaoru Shintani
Super robot anime and manga
TV Tokyo original programming
Toei Animation television
|
```go
//
//
// path_to_url
//
// Unless required by applicable law or agreed to in writing, software
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
package nodejs
import (
"encoding/json"
"fmt"
"io"
"regexp"
"strings"
"unicode"
"github.com/pulumi/pulumi/sdk/v3/go/common/util/contract"
"github.com/pulumi/pulumi/pkg/v3/codegen"
"github.com/pulumi/pulumi/pkg/v3/codegen/schema"
)
// isReservedWord returns true if s is a reserved word as per ECMA-262.
func isReservedWord(s string) bool {
switch s {
case "break", "case", "catch", "class", "const", "continue", "debugger", "default", "delete",
"do", "else", "export", "extends", "finally", "for", "function", "if", "import",
"in", "instanceof", "new", "return", "super", "switch", "this", "throw", "try",
"typeof", "var", "void", "while", "with", "yield":
// Keywords
return true
case "enum", "await", "implements", "interface", "package", "private", "protected", "public":
// Future reserved words
return true
case "null", "true", "false":
// Null and boolean literals
return true
default:
return false
}
}
// isLegalIdentifierStart returns true if it is legal for c to be the first character of a JavaScript identifier as per
// ECMA-262.
func isLegalIdentifierStart(c rune) bool {
return c == '$' || c == '_' ||
unicode.In(c, unicode.Lu, unicode.Ll, unicode.Lt, unicode.Lm, unicode.Lo, unicode.Nl)
}
// isLegalIdentifierPart returns true if it is legal for c to be part of a JavaScript identifier (besides the first
// character) as per ECMA-262.
func isLegalIdentifierPart(c rune) bool {
return isLegalIdentifierStart(c) || unicode.In(c, unicode.Mn, unicode.Mc, unicode.Nd, unicode.Pc)
}
// isLegalIdentifier returns true if s is a legal JavaScript identifier as per ECMA-262.
func isLegalIdentifier(s string) bool {
if isReservedWord(s) {
return false
}
reader := strings.NewReader(s)
c, _, _ := reader.ReadRune()
if !isLegalIdentifierStart(c) {
return false
}
for {
c, _, err := reader.ReadRune()
if err != nil {
return err == io.EOF
}
if !isLegalIdentifierPart(c) {
return false
}
}
}
// makeValidIdentifier replaces characters that are not allowed in JavaScript identifiers with underscores. No attempt
// is made to ensure that the result is unique.
func makeValidIdentifier(name string) string {
var builder strings.Builder
for i, c := range name {
if !isLegalIdentifierPart(c) {
builder.WriteRune('_')
} else {
if i == 0 && !isLegalIdentifierStart(c) {
builder.WriteRune('_')
}
builder.WriteRune(c)
}
}
name = builder.String()
if isReservedWord(name) {
return "_" + name
}
return name
}
func makeSafeEnumName(name, typeName string) (string, error) {
// Replace common single character enum names.
safeName := codegen.ExpandShortEnumName(name)
// If the name is one illegal character, return an error.
if len(safeName) == 1 && !isLegalIdentifierStart(rune(safeName[0])) {
return "", fmt.Errorf("enum name %s is not a valid identifier", safeName)
}
// Capitalize and make a valid identifier.
safeName = makeValidIdentifier(title(safeName))
// If there are multiple underscores in a row, replace with one.
regex := regexp.MustCompile(`_+`)
safeName = regex.ReplaceAllString(safeName, "_")
// If the enum name starts with an underscore, add the type name as a prefix.
if strings.HasPrefix(safeName, "_") {
safeName = typeName + safeName
}
return safeName, nil
}
// escape returns the string escaped for a JS string literal
func escape(s string) string {
// Seems the most fool-proof way of doing this is by using the JSON marshaler and then stripping the surrounding quotes
escaped, err := json.Marshal(s)
contract.AssertNoErrorf(err, "JSON(%q)", s)
contract.Assertf(len(escaped) >= 2, "JSON(%s) expected a quoted string but returned %s", s, escaped)
contract.Assertf(
escaped[0] == byte('"') && escaped[len(escaped)-1] == byte('"'),
"JSON(%s) expected a quoted string but returned %s", s, escaped)
return string(escaped)[1:(len(escaped) - 1)]
}
func lookupNodePackageInfo(pkg *schema.Package) NodePackageInfo {
nodePackageInfo := NodePackageInfo{}
if pkg == nil {
return nodePackageInfo
}
if languageInfo, ok := pkg.Language["nodejs"]; ok {
if info, ok2 := languageInfo.(NodePackageInfo); ok2 {
nodePackageInfo = info
}
}
return nodePackageInfo
}
func nonEmptyStrings(candidates []string) []string {
res := []string{}
for _, c := range candidates {
if c != "" {
res = append(res, c)
}
}
return res
}
```
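The enum-name sanitization above proceeds in distinct steps: make the capitalized name a legal identifier, collapse runs of underscores, then prefix the type name if an underscore still leads. A rough Python sketch of the same pipeline (illustrative only — the helper names here are not part of the Go codegen package, and the identifier rules are simplified to ASCII alphanumerics):

```python
import re


def make_valid_identifier(name: str) -> str:
    """Replace characters not legal in an identifier with underscores.

    A simplified analogue of the Go makeValidIdentifier above: underscores
    replace illegal characters, and a leading digit gets an underscore prefix.
    """
    out = []
    for i, c in enumerate(name):
        if not (c.isalnum() or c == "_"):
            out.append("_")
        else:
            if i == 0 and c.isdigit():
                out.append("_")
            out.append(c)
    return "".join(out)


def make_safe_enum_name(name: str, type_name: str) -> str:
    # Capitalize, sanitize, collapse repeated underscores, then prefix the
    # type name when the result still starts with an underscore.
    safe = make_valid_identifier(name.title())
    safe = re.sub(r"_+", "_", safe)
    if safe.startswith("_"):
        safe = type_name + safe
    return safe
```

For example, a purely numeric enum value such as `"8.3"` sanitizes to `"_8_3"`, which then picks up the type-name prefix so the result is a legal identifier.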
|
```javascript
/**
* @license Apache-2.0
*
*
*
* path_to_url
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
*/
'use strict';
// MODULES //
var PINF = require( '@stdlib/constants/float64/pinf' );
// MAIN //
/**
* Tests if a double-precision floating-point numeric value is a positive finite number.
*
* @param {number} x - value to test
* @returns {boolean} boolean indicating whether the value is a positive finite number
*
* @example
* var bool = isPositiveFinite( 5.0 );
* // returns true
*
* @example
* var bool = isPositiveFinite( 3.14 );
* // returns true
*
* @example
* var bool = isPositiveFinite( 0.0 );
* // returns false
*
* @example
* var bool = isPositiveFinite( Infinity );
* // returns false
*
* @example
* var bool = isPositiveFinite( -3.14 );
* // returns false
*
* @example
* var bool = isPositiveFinite( NaN );
* // returns false
*/
function isPositiveFinite( x ) {
return ( x > 0.0 && x < PINF );
}
// EXPORTS //
module.exports = isPositiveFinite;
```
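The same predicate can be written in one comparison chain in Python. A detail worth noting, and the reason the implementation above needs no explicit NaN check: every ordered comparison involving NaN evaluates to false, so NaN fails the lower bound automatically (a sketch for illustration, not part of any library):

```python
import math


def is_positive_finite(x: float) -> bool:
    # NaN fails the lower bound (all comparisons with NaN are false) and
    # +inf fails the upper bound, so no explicit special-casing is needed.
    return 0.0 < x < math.inf
```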
|
```cpp
//
//
// path_to_url
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#ifndef SOURCE_VAL_VALIDATE_H_
#define SOURCE_VAL_VALIDATE_H_
#include <functional>
#include <memory>
#include <utility>
#include <vector>
#include "source/instruction.h"
#include "source/table.h"
#include "spirv-tools/libspirv.h"
namespace spvtools {
namespace val {
class ValidationState_t;
class BasicBlock;
class Instruction;
/// @brief Performs the Control Flow Graph checks
///
/// @param[in] _ the validation state of the module
///
/// @return SPV_SUCCESS if no errors are found. SPV_ERROR_INVALID_CFG otherwise
spv_result_t PerformCfgChecks(ValidationState_t& _);
/// @brief Updates the use vectors of all instructions that can be referenced
///
/// This function will update the vectors which record where an instruction was
/// referenced in the binary.
///
/// @param[in] _ the validation state of the module
///
/// @return SPV_SUCCESS if no errors are found.
spv_result_t UpdateIdUse(ValidationState_t& _, const Instruction* inst);
/// @brief This function checks that all ID definitions dominate their uses in
/// the CFG.
///
/// This function will iterate over all ID definitions that are defined in the
/// functions of a module and make sure that the definitions appear in a
/// block that dominates their use.
///
/// @param[in] _ the validation state of the module
///
/// @return SPV_SUCCESS if no errors are found. SPV_ERROR_INVALID_ID otherwise
spv_result_t CheckIdDefinitionDominateUse(ValidationState_t& _);
/// @brief This function checks for preconditions involving the adjacent
/// instructions.
///
/// This function will iterate over all instructions and check for any required
/// predecessor and/or successor instructions. e.g. spv::Op::OpPhi must only be
/// preceded by spv::Op::OpLabel, spv::Op::OpPhi, or spv::Op::OpLine.
///
/// @param[in] _ the validation state of the module
///
/// @return SPV_SUCCESS if no errors are found. SPV_ERROR_INVALID_DATA otherwise
spv_result_t ValidateAdjacency(ValidationState_t& _);
/// @brief Validates static uses of input and output variables
///
/// Checks that any entry point that uses an input or output variable lists that
/// variable in its interface.
///
/// @param[in] _ the validation state of the module
///
/// @return SPV_SUCCESS if no errors are found.
spv_result_t ValidateInterfaces(ValidationState_t& _);
/// @brief Validates entry point call tree requirements of
/// SPV_KHR_float_controls2
///
/// Checks that no entry point using FPFastMathDefault uses:
/// * FPFastMathMode Fast
/// * NoContraction
///
/// @param[in] _ the validation state of the module
///
/// @return SPV_SUCCESS if no errors are found.
spv_result_t ValidateFloatControls2(ValidationState_t& _);
/// @brief Validates duplicated execution modes for each entry point.
///
/// @param[in] _ the validation state of the module
///
/// @return SPV_SUCCESS if no errors are found.
spv_result_t ValidateDuplicateExecutionModes(ValidationState_t& _);
/// @brief Validates memory instructions
///
/// @param[in] _ the validation state of the module
/// @return SPV_SUCCESS if no errors are found.
spv_result_t MemoryPass(ValidationState_t& _, const Instruction* inst);
/// @brief Updates the immediate dominator for each of the block edges
///
/// Updates the immediate dominator of the blocks for each of the edges
/// provided by the @p dom_edges parameter
///
/// @param[in,out] dom_edges The edges of the dominator tree
/// @param[in] set_func This function will be called to update the immediate
/// dominator
void UpdateImmediateDominators(
const std::vector<std::pair<BasicBlock*, BasicBlock*>>& dom_edges,
std::function<void(BasicBlock*, BasicBlock*)> set_func);
/// @brief Prints all of the dominators of a BasicBlock
///
/// @param[in] block The dominators of this block will be printed
void printDominatorList(BasicBlock& block);
/// Performs logical layout validation as described in section 2.4 of the SPIR-V
/// spec.
spv_result_t ModuleLayoutPass(ValidationState_t& _, const Instruction* inst);
/// Performs Control Flow Graph validation and construction.
spv_result_t CfgPass(ValidationState_t& _, const Instruction* inst);
/// Validates Control Flow Graph instructions.
spv_result_t ControlFlowPass(ValidationState_t& _, const Instruction* inst);
/// Performs Id and SSA validation of a module
spv_result_t IdPass(ValidationState_t& _, Instruction* inst);
/// Performs instruction validation.
spv_result_t InstructionPass(ValidationState_t& _, const Instruction* inst);
/// Performs decoration validation. Assumes each decoration on a group
/// has been propagated down to the group members.
spv_result_t ValidateDecorations(ValidationState_t& _);
/// Performs validation of built-in variables.
spv_result_t ValidateBuiltIns(ValidationState_t& _);
/// Validates type instructions.
spv_result_t TypePass(ValidationState_t& _, const Instruction* inst);
/// Validates constant instructions.
spv_result_t ConstantPass(ValidationState_t& _, const Instruction* inst);
/// Validates correctness of arithmetic instructions.
spv_result_t ArithmeticsPass(ValidationState_t& _, const Instruction* inst);
/// Validates correctness of composite instructions.
spv_result_t CompositesPass(ValidationState_t& _, const Instruction* inst);
/// Validates correctness of conversion instructions.
spv_result_t ConversionPass(ValidationState_t& _, const Instruction* inst);
/// Validates correctness of derivative instructions.
spv_result_t DerivativesPass(ValidationState_t& _, const Instruction* inst);
/// Validates correctness of logical instructions.
spv_result_t LogicalsPass(ValidationState_t& _, const Instruction* inst);
/// Validates correctness of bitwise instructions.
spv_result_t BitwisePass(ValidationState_t& _, const Instruction* inst);
/// Validates correctness of image instructions.
spv_result_t ImagePass(ValidationState_t& _, const Instruction* inst);
/// Validates correctness of atomic instructions.
spv_result_t AtomicsPass(ValidationState_t& _, const Instruction* inst);
/// Validates correctness of barrier instructions.
spv_result_t BarriersPass(ValidationState_t& _, const Instruction* inst);
/// Validates correctness of literal numbers.
spv_result_t LiteralsPass(ValidationState_t& _, const Instruction* inst);
/// Validates correctness of extension instructions.
spv_result_t ExtensionPass(ValidationState_t& _, const Instruction* inst);
/// Validates correctness of annotation instructions.
spv_result_t AnnotationPass(ValidationState_t& _, const Instruction* inst);
/// Validates correctness of non-uniform group instructions.
spv_result_t NonUniformPass(ValidationState_t& _, const Instruction* inst);
/// Validates correctness of debug instructions.
spv_result_t DebugPass(ValidationState_t& _, const Instruction* inst);
// Validates that capability declarations use operands allowed in the current
// context.
spv_result_t CapabilityPass(ValidationState_t& _, const Instruction* inst);
/// Validates correctness of primitive instructions.
spv_result_t PrimitivesPass(ValidationState_t& _, const Instruction* inst);
/// Validates correctness of mode setting instructions.
spv_result_t ModeSettingPass(ValidationState_t& _, const Instruction* inst);
/// Validates correctness of function instructions.
spv_result_t FunctionPass(ValidationState_t& _, const Instruction* inst);
/// Validates correctness of miscellaneous instructions.
spv_result_t MiscPass(ValidationState_t& _, const Instruction* inst);
/// Validates correctness of ray query instructions.
spv_result_t RayQueryPass(ValidationState_t& _, const Instruction* inst);
/// Validates correctness of ray tracing instructions.
spv_result_t RayTracingPass(ValidationState_t& _, const Instruction* inst);
/// Validates correctness of shader execution reorder instructions.
spv_result_t RayReorderNVPass(ValidationState_t& _, const Instruction* inst);
/// Validates correctness of mesh shading instructions.
spv_result_t MeshShadingPass(ValidationState_t& _, const Instruction* inst);
/// Calculates the reachability of basic blocks.
void ReachabilityPass(ValidationState_t& _);
/// Validates execution limitations.
///
/// Verifies execution models are allowed for all functionality they contain.
spv_result_t ValidateExecutionLimitations(ValidationState_t& _,
const Instruction* inst);
/// Validates restricted uses of 8- and 16-bit types.
///
/// Validates that shaders which use 8- or 16-bit storage capabilities, but not
/// the full capabilities, only have appropriate uses of those types.
spv_result_t ValidateSmallTypeUses(ValidationState_t& _,
const Instruction* inst);
/// Validates restricted uses of QCOM decorated textures
///
/// The textures that are decorated with some of QCOM image processing
/// decorations must be used in the specified QCOM image processing built-in
/// functions and not used in any other image functions.
spv_result_t ValidateQCOMImageProcessingTextureUsages(ValidationState_t& _,
const Instruction* inst);
/// @brief Validate the ID's within a SPIR-V binary
///
/// @param[in] pInstructions array of instructions
/// @param[in] count number of elements in instruction array
/// @param[in] bound the binary header
/// @param[in,out] position current word in the binary
/// @param[in] consumer message consumer callback
///
/// @return result code
spv_result_t spvValidateIDs(const spv_instruction_t* pInstructions,
const uint64_t count, const uint32_t bound,
spv_position position,
const MessageConsumer& consumer);
// Performs validation for the SPIR-V module binary.
// The main difference between this API and spvValidateBinary is that the
// "Validation State" is not destroyed upon function return; it lives on and is
// pointed to by the vstate unique_ptr.
spv_result_t ValidateBinaryAndKeepValidationState(
const spv_const_context context, spv_const_validator_options options,
const uint32_t* words, const size_t num_words, spv_diagnostic* pDiagnostic,
std::unique_ptr<ValidationState_t>* vstate);
} // namespace val
} // namespace spvtools
#endif // SOURCE_VAL_VALIDATE_H_
```
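Each per-instruction pass declared above takes the validation state and an instruction and returns `SPV_SUCCESS` or a specific error code; a validator drives them in sequence and stops at the first failure. A minimal Python sketch of that dispatch shape (the constant values and function names here are illustrative, not the SPIRV-Tools API):

```python
# Illustrative result codes; the real spv_result_t values live in libspirv.h.
SPV_SUCCESS = 0
SPV_ERROR_INVALID_DATA = -7


def run_passes(passes, state, inst):
    """Run each validation pass in order, short-circuiting on the first error."""
    for validation_pass in passes:
        result = validation_pass(state, inst)
        if result != SPV_SUCCESS:
            return result  # propagate the first failing pass's error code
    return SPV_SUCCESS
```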
|
```python
import contextlib
import glob
import ntpath
import os
import posixpath
import re
from pathlib import Path, PosixPath, WindowsPath
from typing import (
AsyncIterator,
Callable,
Final,
Iterator,
MutableMapping,
Optional,
Pattern,
Tuple,
Union,
)
from urllib.parse import urlparse
import lavalink
from red_commons.logging import getLogger
from redbot.core.i18n import Translator
from redbot.core.utils import AsyncIter
_ = Translator("Audio", Path(__file__))
_RE_REMOVE_START: Final[Pattern] = re.compile(r"^(sc|list) ")
_RE_YOUTUBE_TIMESTAMP: Final[Pattern] = re.compile(r"[&|?]t=(\d+)s?")
_RE_YOUTUBE_INDEX: Final[Pattern] = re.compile(r"&index=(\d+)")
_RE_SPOTIFY_URL: Final[Pattern] = re.compile(r"(http[s]?://)?(open\.spotify\.com)/")
_RE_SPOTIFY_TIMESTAMP: Final[Pattern] = re.compile(r"#(\d+):(\d+)")
_RE_SOUNDCLOUD_TIMESTAMP: Final[Pattern] = re.compile(r"#t=(\d+):(\d+)s?")
_RE_TWITCH_TIMESTAMP: Final[Pattern] = re.compile(r"\?t=(\d+)h(\d+)m(\d+)s")
_PATH_SEPS: Final[Tuple[str, str]] = (posixpath.sep, ntpath.sep)
_FULLY_SUPPORTED_MUSIC_EXT: Final[Tuple[str, ...]] = (".mp3", ".flac", ".ogg")
_PARTIALLY_SUPPORTED_MUSIC_EXT: Tuple[str, ...] = (
".m3u",
".m4a",
".aac",
".ra",
".wav",
".opus",
".wma",
".ts",
".au",
# These do not work
# ".mid",
# ".mka",
# ".amr",
# ".aiff",
# ".ac3",
# ".voc",
# ".dsf",
)
_PARTIALLY_SUPPORTED_VIDEO_EXT: Tuple[str, ...] = (
".mp4",
".mov",
".flv",
".webm",
".mkv",
".wmv",
".3gp",
".m4v",
".mk3d", # path_to_url
".mka", # path_to_url
".mks", # path_to_url
# These do not work
# ".vob",
# ".mts",
# ".avi",
# ".mpg",
# ".mpeg",
# ".swf",
)
_PARTIALLY_SUPPORTED_MUSIC_EXT += _PARTIALLY_SUPPORTED_VIDEO_EXT
log = getLogger("red.cogs.Audio.audio_dataclasses")
class LocalPath:
"""Local tracks class.
Used to handle system dir trees in a cross system manner. The only use of this class is for
`localtracks`.
"""
_all_music_ext = _FULLY_SUPPORTED_MUSIC_EXT + _PARTIALLY_SUPPORTED_MUSIC_EXT
def __init__(self, path, localtrack_folder, **kwargs):
self._localtrack_folder = localtrack_folder
self._path = path
if isinstance(path, (Path, WindowsPath, PosixPath, LocalPath)):
path = str(path.absolute())
elif path is not None:
path = str(path)
self.cwd = Path.cwd()
_lt_folder = Path(self._localtrack_folder) if self._localtrack_folder else self.cwd
_path = Path(path) if path else self.cwd
if _lt_folder.parts[-1].lower() == "localtracks" and not kwargs.get("forced"):
self.localtrack_folder = _lt_folder
elif kwargs.get("forced"):
if _path.parts[-1].lower() == "localtracks":
self.localtrack_folder = _path
else:
self.localtrack_folder = _path / "localtracks"
else:
self.localtrack_folder = _lt_folder / "localtracks"
try:
_path = Path(path)
_path.relative_to(self.localtrack_folder)
self.path = _path
except (ValueError, TypeError):
for sep in _PATH_SEPS:
if path and path.startswith(f"localtracks{sep}{sep}"):
path = path.replace(f"localtracks{sep}{sep}", "", 1)
elif path and path.startswith(f"localtracks{sep}"):
path = path.replace(f"localtracks{sep}", "", 1)
self.path = self.localtrack_folder.joinpath(path) if path else self.localtrack_folder
try:
if self.path.is_file():
parent = self.path.parent
else:
parent = self.path
self.parent = Path(parent)
except OSError:
self.parent = None
@property
def name(self):
return str(self.path.name)
@property
def suffix(self):
return str(self.path.suffix)
def is_dir(self):
try:
return self.path.is_dir()
except OSError:
return False
def exists(self):
try:
return self.path.exists()
except OSError:
return False
def is_file(self):
try:
return self.path.is_file()
except OSError:
return False
def absolute(self):
try:
return self.path.absolute()
except OSError:
return self._path
@classmethod
def joinpath(cls, localpath, *args):
modified = cls(None, localpath)
modified.path = modified.path.joinpath(*args)
return modified
def rglob(self, pattern, folder=False) -> Iterator[str]:
if folder:
return glob.iglob(f"{glob.escape(self.path)}{os.sep}**{os.sep}", recursive=True)
else:
return glob.iglob(
f"{glob.escape(self.path)}{os.sep}**{os.sep}*{pattern}", recursive=True
)
def glob(self, pattern, folder=False) -> Iterator[str]:
if folder:
return glob.iglob(f"{glob.escape(self.path)}{os.sep}*{os.sep}", recursive=False)
else:
return glob.iglob(f"{glob.escape(self.path)}{os.sep}*{pattern}", recursive=False)
async def _multiglob(self, pattern: str, folder: bool, method: Callable):
async for rp in AsyncIter(method(pattern)):
rp_local = LocalPath(rp, self._localtrack_folder)
if (
(folder and rp_local.is_dir() and rp_local.exists())
or (not folder and rp_local.suffix in self._all_music_ext and rp_local.is_file())
and rp_local.exists()
):
yield rp_local
async def multiglob(self, *patterns, folder=False) -> AsyncIterator["LocalPath"]:
async for p in AsyncIter(patterns):
async for path in self._multiglob(p, folder, self.glob):
yield path
async def multirglob(self, *patterns, folder=False) -> AsyncIterator["LocalPath"]:
async for p in AsyncIter(patterns):
async for path in self._multiglob(p, folder, self.rglob):
yield path
def __str__(self):
return self.to_string()
def __repr__(self):
return str(self)
def to_string(self):
try:
return str(self.path.absolute())
except OSError:
return str(self._path)
def to_string_user(self, arg: str = None):
string = str(self.absolute()).replace(
(str(self.localtrack_folder.absolute()) + os.sep) if arg is None else arg, ""
)
chunked = False
while len(string) > 145 and os.sep in string:
string = string.split(os.sep, 1)[-1]
chunked = True
if chunked:
string = f"...{os.sep}{string}"
return string
async def tracks_in_tree(self):
tracks = []
async for track in self.multirglob(*[f"{ext}" for ext in self._all_music_ext]):
with contextlib.suppress(ValueError):
if track.path.parent != self.localtrack_folder and track.path.relative_to(
self.path
):
tracks.append(Query.process_input(track, self._localtrack_folder))
return sorted(tracks, key=lambda x: x.to_string_user().lower())
async def subfolders_in_tree(self):
return_folders = []
async for f in self.multirglob("", folder=True):
with contextlib.suppress(ValueError):
if (
f not in return_folders
and f.is_dir()
and f.path != self.localtrack_folder
and f.path.relative_to(self.path)
):
return_folders.append(f)
return sorted(return_folders, key=lambda x: x.to_string_user().lower())
async def tracks_in_folder(self):
tracks = []
async for track in self.multiglob(*[f"{ext}" for ext in self._all_music_ext]):
with contextlib.suppress(ValueError):
if track.path.parent != self.localtrack_folder and track.path.relative_to(
self.path
):
tracks.append(Query.process_input(track, self._localtrack_folder))
return sorted(tracks, key=lambda x: x.to_string_user().lower())
async def subfolders(self):
return_folders = []
async for f in self.multiglob("", folder=True):
with contextlib.suppress(ValueError):
if (
f not in return_folders
and f.path != self.localtrack_folder
and f.path.relative_to(self.path)
):
return_folders.append(f)
return sorted(return_folders, key=lambda x: x.to_string_user().lower())
def __eq__(self, other):
if isinstance(other, LocalPath):
return self.path._cparts == other.path._cparts
elif isinstance(other, Path):
return self.path._cparts == other._cparts
return NotImplemented
def __hash__(self):
try:
return self._hash
except AttributeError:
self._hash = hash(tuple(self.path._cparts))
return self._hash
def __lt__(self, other):
if isinstance(other, LocalPath):
return self.path._cparts < other.path._cparts
elif isinstance(other, Path):
return self.path._cparts < other._cparts
return NotImplemented
def __le__(self, other):
if isinstance(other, LocalPath):
return self.path._cparts <= other.path._cparts
elif isinstance(other, Path):
return self.path._cparts <= other._cparts
return NotImplemented
def __gt__(self, other):
if isinstance(other, LocalPath):
return self.path._cparts > other.path._cparts
elif isinstance(other, Path):
return self.path._cparts > other._cparts
return NotImplemented
def __ge__(self, other):
if isinstance(other, LocalPath):
return self.path._cparts >= other.path._cparts
elif isinstance(other, Path):
return self.path._cparts >= other._cparts
return NotImplemented
class Query:
"""Query data class.
Use: Query.process_input(query, localtrack_folder) to generate the Query object.
"""
def __init__(self, query: Union[LocalPath, str], local_folder_current_path: Path, **kwargs):
query = kwargs.get("queryforced", query)
self._raw: Union[LocalPath, str] = query
self._local_folder_current_path = local_folder_current_path
_localtrack: LocalPath = LocalPath(query, local_folder_current_path)
self.valid: bool = query != "InvalidQueryPlaceHolderName"
self.is_local: bool = kwargs.get("local", False)
self.is_spotify: bool = kwargs.get("spotify", False)
self.is_youtube: bool = kwargs.get("youtube", False)
self.is_soundcloud: bool = kwargs.get("soundcloud", False)
self.is_bandcamp: bool = kwargs.get("bandcamp", False)
self.is_vimeo: bool = kwargs.get("vimeo", False)
self.is_mixer: bool = kwargs.get("mixer", False)
self.is_twitch: bool = kwargs.get("twitch", False)
self.is_other: bool = kwargs.get("other", False)
self.is_pornhub: bool = kwargs.get("pornhub", False)
self.is_playlist: bool = kwargs.get("playlist", False)
self.is_album: bool = kwargs.get("album", False)
self.is_search: bool = kwargs.get("search", False)
self.is_stream: bool = kwargs.get("stream", False)
self.single_track: bool = kwargs.get("single", False)
self.id: Optional[str] = kwargs.get("id", None)
self.invoked_from: Optional[str] = kwargs.get("invoked_from", None)
self.local_name: Optional[str] = kwargs.get("name", None)
self.search_subfolders: bool = kwargs.get("search_subfolders", False)
self.spotify_uri: Optional[str] = kwargs.get("uri", None)
self.uri: Optional[str] = kwargs.get("url", None)
self.is_url: bool = kwargs.get("is_url", False)
self.start_time: int = kwargs.get("start_time", 0)
self.track_index: Optional[int] = kwargs.get("track_index", None)
if self.invoked_from == "sc search":
self.is_youtube = False
self.is_soundcloud = True
if (_localtrack.is_file() or _localtrack.is_dir()) and _localtrack.exists():
self.local_track_path: Optional[LocalPath] = _localtrack
self.track: str = str(_localtrack.absolute())
self.is_local: bool = True
self.uri = self.track
else:
self.local_track_path: Optional[LocalPath] = None
self.track: str = str(query)
self.lavalink_query: str = self._get_query()
if self.is_playlist or self.is_album:
self.single_track = False
self._hash = hash(
(
self.valid,
self.is_local,
self.is_spotify,
self.is_youtube,
self.is_soundcloud,
self.is_bandcamp,
self.is_vimeo,
self.is_mixer,
self.is_twitch,
self.is_other,
self.is_playlist,
self.is_album,
self.is_search,
self.is_stream,
self.single_track,
self.id,
self.spotify_uri,
self.start_time,
self.track_index,
self.uri,
)
)
def __str__(self):
return str(self.lavalink_query)
@classmethod
def process_input(
cls,
query: Union[LocalPath, lavalink.Track, "Query", str],
_local_folder_current_path: Path,
**kwargs,
) -> "Query":
"""Process the input query into its type.
Parameters
----------
query : Union[Query, LocalPath, lavalink.Track, str]
The query string or LocalPath object.
_local_folder_current_path: Path
The Current Local Track folder
Returns
-------
Query
Returns a parsed Query object.
"""
if not query:
query = "InvalidQueryPlaceHolderName"
possible_values = {}
if isinstance(query, str):
query = query.strip("<>")
while "ytsearch:" in query:
query = query.replace("ytsearch:", "")
while "scsearch:" in query:
query = query.replace("scsearch:", "")
elif isinstance(query, Query):
for key, val in kwargs.items():
setattr(query, key, val)
return query
elif isinstance(query, lavalink.Track):
possible_values["stream"] = query.is_stream
query = query.uri
possible_values.update(dict(**kwargs))
possible_values.update(cls._parse(query, _local_folder_current_path, **kwargs))
return cls(query, _local_folder_current_path, **possible_values)
@staticmethod
def _parse(track, _local_folder_current_path: Path, **kwargs) -> MutableMapping:
"""Parse a track into all the relevant metadata."""
returning: MutableMapping = {}
if (
isinstance(track, LocalPath)
and (track.is_file() or track.is_dir())
and track.exists()
):
returning["local"] = True
returning["name"] = track.name
if track.is_file():
returning["single"] = True
elif track.is_dir():
returning["album"] = True
else:
track = str(track)
if track.startswith("spotify:"):
returning["spotify"] = True
if ":playlist:" in track:
returning["playlist"] = True
elif ":album:" in track:
returning["album"] = True
elif ":track:" in track:
returning["single"] = True
_id = track.split(":", 2)[-1]
_id = _id.split("?")[0]
returning["id"] = _id
if "#" in _id:
match = re.search(_RE_SPOTIFY_TIMESTAMP, track)
if match:
returning["start_time"] = (int(match.group(1)) * 60) + int(match.group(2))
returning["uri"] = track
return returning
if track.startswith("sc ") or track.startswith("list "):
if track.startswith("sc "):
returning["invoked_from"] = "sc search"
returning["soundcloud"] = True
elif track.startswith("list "):
returning["invoked_from"] = "search list"
track = _RE_REMOVE_START.sub("", track, 1)
returning["queryforced"] = track
_localtrack = LocalPath(track, _local_folder_current_path)
if _localtrack.exists():
if _localtrack.is_file():
returning["local"] = True
returning["single"] = True
returning["name"] = _localtrack.name
return returning
elif _localtrack.is_dir():
returning["album"] = True
returning["local"] = True
returning["name"] = _localtrack.name
return returning
try:
query_url = urlparse(track)
if all([query_url.scheme, query_url.netloc, query_url.path]):
returning["url"] = track
returning["is_url"] = True
url_domain = ".".join(query_url.netloc.split(".")[-2:])
if not query_url.netloc:
url_domain = ".".join(query_url.path.split("/")[0].split(".")[-2:])
if url_domain in ["youtube.com", "youtu.be"]:
returning["youtube"] = True
_has_index = "&index=" in track
if "&t=" in track or "?t=" in track:
match = re.search(_RE_YOUTUBE_TIMESTAMP, track)
if match:
returning["start_time"] = int(match.group(1))
if _has_index:
match = re.search(_RE_YOUTUBE_INDEX, track)
if match:
returning["track_index"] = int(match.group(1)) - 1
if all(k in track for k in ["&list=", "watch?"]):
returning["track_index"] = 0
returning["playlist"] = True
returning["single"] = False
elif all(x in track for x in ["playlist?"]):
returning["playlist"] = not _has_index
returning["single"] = _has_index
elif any(k in track for k in ["list="]):
returning["track_index"] = 0
returning["playlist"] = True
returning["single"] = False
else:
returning["single"] = True
elif url_domain == "spotify.com":
returning["spotify"] = True
if "/playlist/" in track:
returning["playlist"] = True
elif "/album/" in track:
returning["album"] = True
elif "/track/" in track:
returning["single"] = True
val = re.sub(_RE_SPOTIFY_URL, "", track).replace("/", ":")
if "user:" in val:
val = val.split(":", 2)[-1]
_id = val.split(":", 1)[-1]
_id = _id.split("?")[0]
if "#" in _id:
_id = _id.split("#")[0]
match = re.search(_RE_SPOTIFY_TIMESTAMP, track)
if match:
returning["start_time"] = (int(match.group(1)) * 60) + int(
match.group(2)
)
returning["id"] = _id
returning["uri"] = f"spotify:{val}"
elif url_domain == "soundcloud.com":
returning["soundcloud"] = True
if "#t=" in track:
match = re.search(_RE_SOUNDCLOUD_TIMESTAMP, track)
if match:
returning["start_time"] = (int(match.group(1)) * 60) + int(
match.group(2)
)
if "/sets/" in track:
if "?in=" in track:
returning["single"] = True
else:
returning["playlist"] = True
else:
returning["single"] = True
elif url_domain == "bandcamp.com":
returning["bandcamp"] = True
if "/album/" in track:
returning["album"] = True
else:
returning["single"] = True
elif url_domain == "vimeo.com":
returning["vimeo"] = True
elif url_domain == "twitch.tv":
returning["twitch"] = True
if "?t=" in track:
match = re.search(_RE_TWITCH_TIMESTAMP, track)
if match:
returning["start_time"] = (
(int(match.group(1)) * 60 * 60)
+ (int(match.group(2)) * 60)
+ int(match.group(3))
)
if not any(x in track for x in ["/clip/", "/videos/"]):
returning["stream"] = True
else:
returning["other"] = True
returning["single"] = True
else:
if kwargs.get("soundcloud", False):
returning["soundcloud"] = True
else:
returning["youtube"] = True
returning["search"] = True
returning["single"] = True
except Exception:
returning["search"] = True
returning["youtube"] = True
returning["single"] = True
return returning
def _get_query(self):
if self.is_local:
return self.local_track_path.to_string()
elif self.is_spotify:
return self.spotify_uri
elif self.is_search and self.is_youtube:
return f"ytsearch:{self.track}"
elif self.is_search and self.is_soundcloud:
return f"scsearch:{self.track}"
return self.track
def to_string_user(self):
if self.is_local:
return str(self.local_track_path.to_string_user())
return str(self._raw)
@property
def suffix(self):
if self.is_local:
return self.local_track_path.suffix
return None
def __eq__(self, other):
if not isinstance(other, Query):
return NotImplemented
return self.to_string_user() == other.to_string_user()
def __hash__(self):
try:
return self._hash
except AttributeError:
self._hash = hash(
(
self.valid,
self.is_local,
self.is_spotify,
self.is_youtube,
self.is_soundcloud,
self.is_bandcamp,
self.is_vimeo,
self.is_mixer,
self.is_twitch,
self.is_other,
self.is_playlist,
self.is_album,
self.is_search,
self.is_stream,
self.single_track,
self.id,
self.spotify_uri,
self.start_time,
self.track_index,
self.uri,
)
)
return self._hash
def __lt__(self, other):
if not isinstance(other, Query):
return NotImplemented
return self.to_string_user() < other.to_string_user()
def __le__(self, other):
if not isinstance(other, Query):
return NotImplemented
return self.to_string_user() <= other.to_string_user()
def __gt__(self, other):
if not isinstance(other, Query):
return NotImplemented
return self.to_string_user() > other.to_string_user()
def __ge__(self, other):
if not isinstance(other, Query):
return NotImplemented
return self.to_string_user() >= other.to_string_user()
```
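The `_parse` heuristics above key off the last two labels of the URL's domain to decide which source a query targets. The core of that classification can be sketched with just `urllib.parse` (a simplified illustration of the approach, not the cog's actual API):

```python
from urllib.parse import urlparse


def classify_url(track: str) -> str:
    """Return a rough source label based on the URL's registrable domain."""
    url = urlparse(track)
    # Mirror the cog's trick: keep only the last two dot-separated labels,
    # so "open.spotify.com" and "www.youtube.com" normalize cleanly.
    domain = ".".join(url.netloc.split(".")[-2:])
    if domain in ("youtube.com", "youtu.be"):
        return "youtube"
    if domain == "spotify.com":
        return "spotify"
    if domain == "soundcloud.com":
        return "soundcloud"
    return "other"
```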
|
```javascript
module.exports = function(source) {
this.cacheable && this.cacheable();
return [
'// loader.js',
source,
].join('\n');
};
```
|
Provincial Secretary of Prince Edward Island v Egan, [1941] S.C.R. 396 is a famous constitutional decision of the Supreme Court of Canada. The Court upheld a provincial Act, which provided that anyone convicted of an impaired driving offence under the Criminal Code would have their licence suspended, on the basis that the law was in relation to the regulation of highway safety, a valid provincial subject.
The case later became central to another key constitutional decision of O'Grady v. Sparling, [1960] S.C.R. 804.
See also
List of Supreme Court of Canada cases (Richards Court through Fauteux Court)
|
```javascript
const { render } = require('@govuk-frontend/helpers/puppeteer')
const { getExamples } = require('@govuk-frontend/lib/components')
const { KnownDevices } = require('puppeteer')
const iPhone = KnownDevices['iPhone 6']
describe('/components/tabs', () => {
let examples
beforeAll(async () => {
examples = await getExamples('tabs')
})
describe('/components/tabs/preview', () => {
describe('when JavaScript is unavailable or fails', () => {
beforeAll(async () => {
await page.setJavaScriptEnabled(false)
await render(page, 'tabs', examples.default)
})
afterAll(async () => {
await page.setJavaScriptEnabled(true)
})
it('falls back to making all tab containers visible', async () => {
const isContentVisible = await page.waitForSelector(
'.govuk-tabs__panel',
{ visible: true, timeout: 10000 }
)
expect(isContentVisible).toBeTruthy()
})
})
describe('when JavaScript is available', () => {
beforeAll(async () => {
await render(page, 'tabs', examples.default)
})
it('should indicate the open state of the first tab', async () => {
const firstTabAriaSelected = await page.evaluate(() =>
document.body
.querySelector(
'.govuk-tabs__list-item:first-child .govuk-tabs__tab'
)
.getAttribute('aria-selected')
)
expect(firstTabAriaSelected).toBe('true')
const firstTabClasses = await page.evaluate(
() =>
document.body.querySelector('.govuk-tabs__list-item:first-child')
.className
)
expect(firstTabClasses).toContain('govuk-tabs__list-item--selected')
})
it('should display the first tab panel', async () => {
const tabPanelIsHidden = await page.evaluate(() =>
document.body
.querySelector('.govuk-tabs > .govuk-tabs__panel')
.classList.contains('govuk-tabs__panel--hidden')
)
expect(tabPanelIsHidden).toBeFalsy()
})
it('should hide all the tab panels except for the first one', async () => {
const tabPanelIsHidden = await page.evaluate(() =>
document.body
.querySelector(
'.govuk-tabs > .govuk-tabs__panel ~ .govuk-tabs__panel'
)
.classList.contains('govuk-tabs__panel--hidden')
)
expect(tabPanelIsHidden).toBeTruthy()
})
})
describe('when a tab is pressed', () => {
beforeEach(async () => {
await render(page, 'tabs', examples.default)
})
it('should indicate the open state of the pressed tab', async () => {
// Click the second tab
await page.click('.govuk-tabs__list-item:nth-child(2) .govuk-tabs__tab')
const secondTabAriaSelected = await page.evaluate(() =>
document.body
.querySelector(
'.govuk-tabs__list-item:nth-child(2) .govuk-tabs__tab'
)
.getAttribute('aria-selected')
)
expect(secondTabAriaSelected).toBe('true')
const secondTabClasses = await page.evaluate(
() =>
document.body.querySelector('.govuk-tabs__list-item:nth-child(2)')
.className
)
expect(secondTabClasses).toContain('govuk-tabs__list-item--selected')
})
it('should display the tab panel associated with the selected tab', async () => {
// Click the second tab
await page.click('.govuk-tabs__list-item:nth-child(2) .govuk-tabs__tab')
const secondTabPanelIsHidden = await page.evaluate(() => {
const secondTabAriaControls = document.body
.querySelector(
'.govuk-tabs__list-item:nth-child(2) .govuk-tabs__tab'
)
.getAttribute('aria-controls')
return document.body
.querySelector(`[id="${secondTabAriaControls}"]`)
.classList.contains('govuk-tabs__panel--hidden')
})
expect(secondTabPanelIsHidden).toBeFalsy()
})
describe('when the tab contains a DOM element', () => {
beforeAll(async () => {
await render(page, 'tabs', examples.default)
})
it('should display the tab panel associated with the selected tab', async () => {
await page.evaluate(() => {
// Replace contents of second tab with a DOM element
const secondTab = document.body.querySelector(
'.govuk-tabs__list-item:nth-child(2) .govuk-tabs__tab'
)
secondTab.innerHTML = '<span>Tab 2</span>'
})
// Click the DOM element inside the second tab
await page.click(
'.govuk-tabs__list-item:nth-child(2) .govuk-tabs__tab span'
)
const secondTabPanelIsHidden = await page.evaluate(() => {
const secondTabAriaControls = document.body
.querySelector(
'.govuk-tabs__list-item:nth-child(2) .govuk-tabs__tab'
)
.getAttribute('aria-controls')
return document.body
.querySelector(`[id="${secondTabAriaControls}"]`)
.classList.contains('govuk-tabs__panel--hidden')
})
expect(secondTabPanelIsHidden).toBeFalsy()
})
})
})
describe('when first tab is focused and the right arrow key is pressed', () => {
beforeEach(async () => {
await render(page, 'tabs', examples.default)
})
it('should indicate the open state of the next tab', async () => {
// Press right arrow when focused on the first tab
await page.focus('.govuk-tabs__list-item:first-child .govuk-tabs__tab')
await page.keyboard.press('ArrowRight')
const secondTabAriaSelected = await page.evaluate(() =>
document.body
.querySelector(
'.govuk-tabs__list-item:nth-child(2) .govuk-tabs__tab'
)
.getAttribute('aria-selected')
)
expect(secondTabAriaSelected).toBe('true')
const secondTabClasses = await page.evaluate(
() =>
document.body.querySelector('.govuk-tabs__list-item:nth-child(2)')
.className
)
expect(secondTabClasses).toContain('govuk-tabs__list-item--selected')
})
it('should display the tab panel associated with the selected tab', async () => {
// Press right arrow
await page.focus('.govuk-tabs__list-item:first-child .govuk-tabs__tab')
        await page.keyboard.press('ArrowRight')
const secondTabPanelIsHidden = await page.evaluate(() => {
const secondTabAriaControls = document.body
.querySelector(
'.govuk-tabs__list-item:nth-child(2) .govuk-tabs__tab'
)
.getAttribute('aria-controls')
return document.body
.querySelector(`[id="${secondTabAriaControls}"]`)
.classList.contains('govuk-tabs__panel--hidden')
})
expect(secondTabPanelIsHidden).toBeFalsy()
})
})
describe('when a hash associated with a tab panel is passed in the URL', () => {
it('should indicate the open state of the associated tab', async () => {
await render(page, 'tabs', examples.default)
await page.evaluate(() => {
window.location.hash = '#past-week'
})
const currentTabAriaSelected = await page.evaluate(() =>
document.body
.querySelector('.govuk-tabs__tab[href="#past-week"]')
.getAttribute('aria-selected')
)
expect(currentTabAriaSelected).toBe('true')
const currentTabClasses = await page.evaluate(
() =>
document.body.querySelector('.govuk-tabs__tab[href="#past-week"]')
.parentElement.className
)
expect(currentTabClasses).toContain('govuk-tabs__list-item--selected')
const currentTabPanelIsHidden = await page.evaluate(() =>
document
.getElementById('past-week')
.classList.contains('govuk-tabs__panel--hidden')
)
expect(currentTabPanelIsHidden).toBeFalsy()
})
it('should only update based on hashes that are tabs', async () => {
await render(page, 'tabs', examples['tabs-with-anchor-in-panel'])
await page.click('[href="#anchor"]')
const activeElementId = await page.evaluate(
() => document.activeElement.id
)
expect(activeElementId).toBe('anchor')
})
})
describe('when rendered on a small device', () => {
      it('falls back to making all the tab containers visible', async () => {
await page.emulate(iPhone)
await render(page, 'tabs', examples.default)
const isContentVisible = await page.waitForSelector(
'.govuk-tabs__panel',
{ visible: true, timeout: 10000 }
)
expect(isContentVisible).toBeTruthy()
})
})
describe('errors at instantiation', () => {
it('can throw a SupportError if appropriate', async () => {
await expect(
render(page, 'tabs', examples.default, {
beforeInitialisation() {
document.body.classList.remove('govuk-frontend-supported')
}
})
).rejects.toMatchObject({
cause: {
name: 'SupportError',
message:
'GOV.UK Frontend initialised without `<body class="govuk-frontend-supported">` from template `<script>` snippet'
}
})
})
it('throws when $module is not set', async () => {
await expect(
render(page, 'tabs', examples.default, {
beforeInitialisation($module) {
$module.remove()
}
})
).rejects.toMatchObject({
cause: {
name: 'ElementError',
message: 'Tabs: Root element (`$module`) not found'
}
})
})
it('throws when there are no tabs', async () => {
await expect(
render(page, 'tabs', examples.default, {
beforeInitialisation($module, { selector }) {
$module
.querySelectorAll(selector)
.forEach((item) => item.remove())
},
context: {
selector: 'a.govuk-tabs__tab'
}
})
).rejects.toMatchObject({
cause: {
name: 'ElementError',
message: 'Tabs: Links (`<a class="govuk-tabs__tab">`) not found'
}
})
})
it('throws when the tab list is missing', async () => {
await expect(
render(page, 'tabs', examples.default, {
beforeInitialisation($module, { selector }) {
$module
.querySelector(selector)
.setAttribute('class', 'govuk-tabs__typo')
},
context: {
selector: '.govuk-tabs__list'
}
})
).rejects.toMatchObject({
cause: {
name: 'ElementError',
message: 'Tabs: List (`<ul class="govuk-tabs__list">`) not found'
}
})
})
      it('throws when the tab list is empty', async () => {
await expect(
render(page, 'tabs', examples.default, {
beforeInitialisation($module, { selector, className }) {
$module
.querySelectorAll(selector)
.forEach((item) => item.setAttribute('class', className))
},
context: {
selector: '.govuk-tabs__list-item',
className: 'govuk-tabs__list-typo'
}
})
).rejects.toMatchObject({
cause: {
name: 'ElementError',
message:
'Tabs: List items (`<li class="govuk-tabs__list-item">`) not found'
}
})
})
})
})
})
```
|
```python
# -*- coding: utf-8 -*-
#
#
# path_to_url
#
# Unless required by applicable law or agreed to in writing, software
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#
from __future__ import print_function
from builtins import zip
from past.builtins import basestring
import collections
import unicodecsv as csv
import itertools
import logging
import re
import os
import subprocess
import time
from tempfile import NamedTemporaryFile
import hive_metastore
from airflow.exceptions import AirflowException
from airflow.hooks.base_hook import BaseHook
from airflow.utils.helpers import as_flattened_list
from airflow.utils.file import TemporaryDirectory
from airflow import configuration
import airflow.security.utils as utils
HIVE_QUEUE_PRIORITIES = ['VERY_HIGH', 'HIGH', 'NORMAL', 'LOW', 'VERY_LOW']
class HiveCliHook(BaseHook):
"""Simple wrapper around the hive CLI.
    It also supports ``beeline``,
    a lighter CLI that runs over JDBC and is replacing the heavier
traditional CLI. To enable ``beeline``, set the use_beeline param in the
extra field of your connection as in ``{ "use_beeline": true }``
Note that you can also set default hive CLI parameters using the
``hive_cli_params`` to be used in your connection as in
``{"hive_cli_params": "-hiveconf mapred.job.tracker=some.jobtracker:444"}``
Parameters passed here can be overridden by run_cli's hive_conf param
The extra connection parameter ``auth`` gets passed as in the ``jdbc``
connection string as is.
:param mapred_queue: queue used by the Hadoop Scheduler (Capacity or Fair)
:type mapred_queue: string
:param mapred_queue_priority: priority within the job queue.
Possible settings include: VERY_HIGH, HIGH, NORMAL, LOW, VERY_LOW
:type mapred_queue_priority: string
:param mapred_job_name: This name will appear in the jobtracker.
This can make monitoring easier.
:type mapred_job_name: string
"""
def __init__(
self,
hive_cli_conn_id="hive_cli_default",
run_as=None,
mapred_queue=None,
mapred_queue_priority=None,
mapred_job_name=None):
conn = self.get_connection(hive_cli_conn_id)
self.hive_cli_params = conn.extra_dejson.get('hive_cli_params', '')
self.use_beeline = conn.extra_dejson.get('use_beeline', False)
self.auth = conn.extra_dejson.get('auth', 'noSasl')
self.conn = conn
self.run_as = run_as
if mapred_queue_priority:
mapred_queue_priority = mapred_queue_priority.upper()
if mapred_queue_priority not in HIVE_QUEUE_PRIORITIES:
raise AirflowException(
"Invalid Mapred Queue Priority. Valid values are: "
"{}".format(', '.join(HIVE_QUEUE_PRIORITIES)))
self.mapred_queue = mapred_queue
self.mapred_queue_priority = mapred_queue_priority
self.mapred_job_name = mapred_job_name
def _prepare_cli_cmd(self):
"""
This function creates the command list from available information
"""
conn = self.conn
hive_bin = 'hive'
cmd_extra = []
if self.use_beeline:
hive_bin = 'beeline'
jdbc_url = "jdbc:hive2://{conn.host}:{conn.port}/{conn.schema}"
if configuration.get('core', 'security') == 'kerberos':
template = conn.extra_dejson.get(
'principal', "hive/_HOST@EXAMPLE.COM")
if "_HOST" in template:
template = utils.replace_hostname_pattern(
utils.get_components(template))
proxy_user = "" # noqa
if conn.extra_dejson.get('proxy_user') == "login" and conn.login:
proxy_user = "hive.server2.proxy.user={0}".format(conn.login)
elif conn.extra_dejson.get('proxy_user') == "owner" and self.run_as:
proxy_user = "hive.server2.proxy.user={0}".format(self.run_as)
jdbc_url += ";principal={template};{proxy_user}"
elif self.auth:
jdbc_url += ";auth=" + self.auth
jdbc_url = jdbc_url.format(**locals())
cmd_extra += ['-u', jdbc_url]
if conn.login:
cmd_extra += ['-n', conn.login]
if conn.password:
cmd_extra += ['-p', conn.password]
hive_params_list = self.hive_cli_params.split()
return [hive_bin] + cmd_extra + hive_params_list
def _prepare_hiveconf(self, d):
"""
This function prepares a list of hiveconf params
from a dictionary of key value pairs.
:param d:
:type d: dict
>>> hh = HiveCliHook()
>>> hive_conf = {"hive.exec.dynamic.partition": "true",
... "hive.exec.dynamic.partition.mode": "nonstrict"}
        >>> hh._prepare_hiveconf(hive_conf)
        ['-hiveconf', 'hive.exec.dynamic.partition=true',\
 '-hiveconf', 'hive.exec.dynamic.partition.mode=nonstrict']
"""
if not d:
return []
        return as_flattened_list(
            zip(
                ["-hiveconf"] * len(d),
                ["{}={}".format(k, v) for k, v in d.items()]
            )
        )
def run_cli(self, hql, schema=None, verbose=True, hive_conf=None):
"""
Run an hql statement using the hive cli. If hive_conf is specified
it should be a dict and the entries will be set as key/value pairs
in HiveConf
:param hive_conf: if specified these key value pairs will be passed
to hive as ``-hiveconf "key"="value"``. Note that they will be
passed after the ``hive_cli_params`` and thus will override
whatever values are specified in the database.
:type hive_conf: dict
>>> hh = HiveCliHook()
>>> result = hh.run_cli("USE airflow;")
>>> ("OK" in result)
True
"""
conn = self.conn
schema = schema or conn.schema
if schema:
hql = "USE {schema};\n{hql}".format(**locals())
with TemporaryDirectory(prefix='airflow_hiveop_') as tmp_dir:
with NamedTemporaryFile(dir=tmp_dir) as f:
f.write(hql.encode('UTF-8'))
f.flush()
hive_cmd = self._prepare_cli_cmd()
hive_conf_params = self._prepare_hiveconf(hive_conf)
if self.mapred_queue:
hive_conf_params.extend(
['-hiveconf',
'mapreduce.job.queuename={}'
.format(self.mapred_queue)])
if self.mapred_queue_priority:
hive_conf_params.extend(
['-hiveconf',
'mapreduce.job.priority={}'
.format(self.mapred_queue_priority)])
if self.mapred_job_name:
hive_conf_params.extend(
['-hiveconf',
'mapred.job.name={}'
.format(self.mapred_job_name)])
hive_cmd.extend(hive_conf_params)
hive_cmd.extend(['-f', f.name])
if verbose:
logging.info(" ".join(hive_cmd))
sp = subprocess.Popen(
hive_cmd,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
cwd=tmp_dir)
self.sp = sp
stdout = ''
while True:
line = sp.stdout.readline()
if not line:
break
stdout += line.decode('UTF-8')
if verbose:
logging.info(line.decode('UTF-8').strip())
sp.wait()
if sp.returncode:
raise AirflowException(stdout)
return stdout
def transfer_data_file(self, filepath, verbose=True):
filename = os.path.basename(filepath)
destination_path = '/user/cloudera/{0}'.format(filename)
        hdfs_cmd = ['hdfs', 'dfs', '-put', filepath, destination_path]
if verbose:
logging.info(" ".join(hdfs_cmd))
sp = subprocess.Popen(
hdfs_cmd,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
self.sp = sp
stdout = ''
while True:
line = sp.stdout.readline()
if not line:
break
stdout += line.decode('UTF-8')
if verbose:
logging.info(line.decode('UTF-8').strip())
sp.wait()
if sp.returncode:
raise AirflowException(stdout)
return destination_path
def test_hql(self, hql):
"""
Test an hql statement using the hive cli and EXPLAIN
"""
create, insert, other = [], [], []
for query in hql.split(';'): # naive
query_original = query
query = query.lower().strip()
if query.startswith('create table'):
create.append(query_original)
elif query.startswith(('set ',
'add jar ',
'create temporary function')):
other.append(query_original)
elif query.startswith('insert'):
insert.append(query_original)
other = ';'.join(other)
for query_set in [create, insert]:
for query in query_set:
query_preview = ' '.join(query.split())[:50]
logging.info("Testing HQL [{0} (...)]".format(query_preview))
if query_set == insert:
query = other + '; explain ' + query
else:
query = 'explain ' + query
try:
self.run_cli(query, verbose=False)
except AirflowException as e:
message = e.args[0].split('\n')[-2]
logging.info(message)
                error_loc = re.search(r'(\d+):(\d+)', message)
                if error_loc and error_loc.group(1).isdigit():
                    lineno = int(error_loc.group(1))
                    begin = max(lineno - 2, 0)
                    end = min(lineno + 3, len(query.split('\n')))
                    context = '\n'.join(query.split('\n')[begin:end])
logging.info("Context :\n {0}".format(context))
else:
logging.info("SUCCESS")
def load_df(
self,
df,
table,
create=True,
recreate=False,
field_dict=None,
delimiter=',',
encoding='utf8',
pandas_kwargs=None, **kwargs):
"""
Loads a pandas DataFrame into hive.
Hive data types will be inferred if not passed but column names will
not be sanitized.
:param table: target Hive table, use dot notation to target a
specific database
:type table: str
:param create: whether to create the table if it doesn't exist
:type create: bool
:param recreate: whether to drop and recreate the table at every
execution
:type recreate: bool
:param field_dict: mapping from column name to hive data type
:type field_dict: dict
:param encoding: string encoding to use when writing DataFrame to file
:type encoding: str
:param pandas_kwargs: passed to DataFrame.to_csv
:type pandas_kwargs: dict
:param kwargs: passed to self.load_file
"""
def _infer_field_types_from_df(df):
DTYPE_KIND_HIVE_TYPE = {
'b': 'BOOLEAN', # boolean
'i': 'BIGINT', # signed integer
'u': 'BIGINT', # unsigned integer
'f': 'DOUBLE', # floating-point
'c': 'STRING', # complex floating-point
'O': 'STRING', # object
'S': 'STRING', # (byte-)string
'U': 'STRING', # Unicode
'V': 'STRING' # void
}
            return {col: DTYPE_KIND_HIVE_TYPE[dtype.kind]
                    for col, dtype in df.dtypes.items()}
if pandas_kwargs is None:
pandas_kwargs = {}
with TemporaryDirectory(prefix='airflow_hiveop_') as tmp_dir:
with NamedTemporaryFile(dir=tmp_dir) as f:
if field_dict is None and (create or recreate):
field_dict = _infer_field_types_from_df(df)
df.to_csv(f, sep=delimiter, **pandas_kwargs)
return self.load_file(filepath=f.name,
table=table,
delimiter=delimiter,
field_dict=field_dict,
**kwargs)
def load_file(
self,
filepath,
table,
delimiter=",",
field_dict=None,
create=True,
overwrite=True,
partition=None,
recreate=False):
"""
Loads a local file into Hive
Note that the table generated in Hive uses ``STORED AS textfile``
which isn't the most efficient serialization format. If a
large amount of data is loaded and/or if the tables gets
queried considerably, you may want to use this operator only to
stage the data into a temporary table before loading it into its
final destination using a ``HiveOperator``.
:param table: target Hive table, use dot notation to target a
specific database
:type table: str
:param create: whether to create the table if it doesn't exist
:type create: bool
:param recreate: whether to drop and recreate the table at every
execution
:type recreate: bool
:param partition: target partition as a dict of partition columns
and values
:type partition: dict
:param delimiter: field delimiter in the file
:type delimiter: str
"""
ord_delimiter = ord(delimiter)
hql = ''
if recreate:
hql += "DROP TABLE IF EXISTS {table};\n"
if create or recreate:
if field_dict is None:
raise ValueError("Must provide a field dict when creating a table")
            fields = ",\n    ".join(
                [k + ' ' + v for k, v in field_dict.items()
                 if k not in (partition or {})])
hql += "CREATE TABLE IF NOT EXISTS {table} (\n{fields})\n"
if partition:
pfields = ",\n ".join(
[p + " STRING" for p in partition])
hql += "PARTITIONED BY ({pfields})\n"
hql += "ROW FORMAT DELIMITED\n"
            # Hive expects the delimiter as an octal escape, e.g. ',' -> '\054'
            hql += "FIELDS TERMINATED BY '\{ord_delimiter:03o}'\n"
hql += "STORED AS textfile;"
hql = hql.format(**locals())
logging.info(hql)
self.run_cli(hql)
hql = None
destination_path = None
if self.use_beeline:
destination_path = self.transfer_data_file(filepath)
hql = "LOAD DATA INPATH '{destination_path}' "
else:
hql = "LOAD DATA LOCAL INPATH '{filepath}' "
if overwrite:
hql += "OVERWRITE "
hql += "INTO TABLE {table} "
if partition:
pvals = ", ".join(
["{0}='{1}'".format(k, v) for k, v in partition.items()])
hql += "PARTITION ({pvals});"
hql = hql.format(**locals())
logging.info(hql)
self.run_cli(hql)
def kill(self):
if hasattr(self, 'sp'):
if self.sp.poll() is None:
print("Killing the Hive job")
self.sp.terminate()
time.sleep(60)
self.sp.kill()
class HiveMetastoreHook(BaseHook):
""" Wrapper to interact with the Hive Metastore"""
def __init__(self, metastore_conn_id='metastore_default'):
self.metastore_conn = self.get_connection(metastore_conn_id)
self.metastore = self.get_metastore_client()
def __getstate__(self):
        # This is for pickling to work despite the thrift hive client not
        # being picklable
d = dict(self.__dict__)
del d['metastore']
return d
def __setstate__(self, d):
self.__dict__.update(d)
self.__dict__['metastore'] = self.get_metastore_client()
def get_metastore_client(self):
"""
Returns a Hive thrift client.
"""
from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from hive_service import ThriftHive
ms = self.metastore_conn
auth_mechanism = ms.extra_dejson.get('authMechanism', 'NOSASL')
if configuration.get('core', 'security') == 'kerberos':
auth_mechanism = ms.extra_dejson.get('authMechanism', 'GSSAPI')
kerberos_service_name = ms.extra_dejson.get('kerberos_service_name', 'hive')
socket = TSocket.TSocket(ms.host, ms.port)
if configuration.get('core', 'security') == 'kerberos' and auth_mechanism == 'GSSAPI':
try:
import saslwrapper as sasl
except ImportError:
import sasl
def sasl_factory():
sasl_client = sasl.Client()
sasl_client.setAttr("host", ms.host)
sasl_client.setAttr("service", kerberos_service_name)
sasl_client.init()
return sasl_client
from thrift_sasl import TSaslClientTransport
transport = TSaslClientTransport(sasl_factory, "GSSAPI", socket)
else:
transport = TTransport.TBufferedTransport(socket)
protocol = TBinaryProtocol.TBinaryProtocol(transport)
return ThriftHive.Client(protocol)
def get_conn(self):
return self.metastore
def check_for_partition(self, schema, table, partition):
"""
Checks whether a partition exists
        :param schema: Name of hive schema (database) @table belongs to
        :type schema: string
        :param table: Name of hive table @partition belongs to
        :type table: string
        :param partition: Expression that matches the partitions to check for
            (eg `a = 'b' AND c = 'd'`)
        :type partition: string
        :rtype: boolean
>>> hh = HiveMetastoreHook()
>>> t = 'static_babynames_partitioned'
>>> hh.check_for_partition('airflow', t, "ds='2015-01-01'")
True
"""
self.metastore._oprot.trans.open()
partitions = self.metastore.get_partitions_by_filter(
schema, table, partition, 1)
self.metastore._oprot.trans.close()
if partitions:
return True
else:
return False
def check_for_named_partition(self, schema, table, partition_name):
"""
Checks whether a partition with a given name exists
        :param schema: Name of hive schema (database) @table belongs to
        :type schema: string
        :param table: Name of hive table @partition belongs to
        :type table: string
        :param partition_name: Name of the partition to check for (eg `a=b/c=d`)
        :type partition_name: string
        :rtype: boolean
>>> hh = HiveMetastoreHook()
>>> t = 'static_babynames_partitioned'
>>> hh.check_for_named_partition('airflow', t, "ds=2015-01-01")
True
>>> hh.check_for_named_partition('airflow', t, "ds=xxx")
False
"""
self.metastore._oprot.trans.open()
try:
self.metastore.get_partition_by_name(
schema, table, partition_name)
return True
except hive_metastore.ttypes.NoSuchObjectException:
return False
finally:
self.metastore._oprot.trans.close()
def get_table(self, table_name, db='default'):
"""Get a metastore table object
>>> hh = HiveMetastoreHook()
>>> t = hh.get_table(db='airflow', table_name='static_babynames')
>>> t.tableName
'static_babynames'
>>> [col.name for col in t.sd.cols]
['state', 'year', 'name', 'gender', 'num']
"""
self.metastore._oprot.trans.open()
if db == 'default' and '.' in table_name:
db, table_name = table_name.split('.')[:2]
table = self.metastore.get_table(dbname=db, tbl_name=table_name)
self.metastore._oprot.trans.close()
return table
def get_tables(self, db, pattern='*'):
"""
        Get metastore table objects matching the given pattern
"""
self.metastore._oprot.trans.open()
tables = self.metastore.get_tables(db_name=db, pattern=pattern)
objs = self.metastore.get_table_objects_by_name(db, tables)
self.metastore._oprot.trans.close()
return objs
def get_databases(self, pattern='*'):
"""
        Get metastore databases matching the given pattern
"""
self.metastore._oprot.trans.open()
dbs = self.metastore.get_databases(pattern)
self.metastore._oprot.trans.close()
return dbs
def get_partitions(
self, schema, table_name, filter=None):
"""
Returns a list of all partitions in a table. Works only
for tables with less than 32767 (java short max val).
For subpartitioned table, the number might easily exceed this.
>>> hh = HiveMetastoreHook()
>>> t = 'static_babynames_partitioned'
>>> parts = hh.get_partitions(schema='airflow', table_name=t)
>>> len(parts)
1
>>> parts
[{'ds': '2015-01-01'}]
"""
self.metastore._oprot.trans.open()
table = self.metastore.get_table(dbname=schema, tbl_name=table_name)
if len(table.partitionKeys) == 0:
raise AirflowException("The table isn't partitioned")
else:
if filter:
parts = self.metastore.get_partitions_by_filter(
db_name=schema, tbl_name=table_name,
filter=filter, max_parts=32767)
else:
parts = self.metastore.get_partitions(
db_name=schema, tbl_name=table_name, max_parts=32767)
self.metastore._oprot.trans.close()
pnames = [p.name for p in table.partitionKeys]
return [dict(zip(pnames, p.values)) for p in parts]
def max_partition(self, schema, table_name, field=None, filter=None):
"""
Returns the maximum value for all partitions in a table. Works only
for tables that have a single partition key. For subpartitioned
table, we recommend using signal tables.
>>> hh = HiveMetastoreHook()
>>> t = 'static_babynames_partitioned'
>>> hh.max_partition(schema='airflow', table_name=t)
'2015-01-01'
"""
parts = self.get_partitions(schema, table_name, filter)
if not parts:
return None
elif len(parts[0]) == 1:
field = list(parts[0].keys())[0]
elif not field:
raise AirflowException(
"Please specify the field you want the max "
"value for")
return max([p[field] for p in parts])
def table_exists(self, table_name, db='default'):
"""
Check if table exists
>>> hh = HiveMetastoreHook()
>>> hh.table_exists(db='airflow', table_name='static_babynames')
True
>>> hh.table_exists(db='airflow', table_name='does_not_exist')
False
"""
        try:
            self.get_table(table_name, db)
            return True
        except Exception:
            return False
class HiveServer2Hook(BaseHook):
"""
Wrapper around the impyla library
Note that the default authMechanism is PLAIN, to override it you
    can specify it in the ``extra`` of your connection in the UI as in
    ``{"authMechanism": "GSSAPI"}``
"""
def __init__(self, hiveserver2_conn_id='hiveserver2_default'):
self.hiveserver2_conn_id = hiveserver2_conn_id
def get_conn(self, schema=None):
db = self.get_connection(self.hiveserver2_conn_id)
auth_mechanism = db.extra_dejson.get('authMechanism', 'PLAIN')
kerberos_service_name = None
if configuration.get('core', 'security') == 'kerberos':
auth_mechanism = db.extra_dejson.get('authMechanism', 'GSSAPI')
kerberos_service_name = db.extra_dejson.get('kerberos_service_name', 'hive')
# impyla uses GSSAPI instead of KERBEROS as a auth_mechanism identifier
if auth_mechanism == 'KERBEROS':
logging.warning("Detected deprecated 'KERBEROS' for authMechanism for %s. Please use 'GSSAPI' instead",
self.hiveserver2_conn_id)
auth_mechanism = 'GSSAPI'
from impala.dbapi import connect
return connect(
host=db.host,
port=db.port,
auth_mechanism=auth_mechanism,
kerberos_service_name=kerberos_service_name,
user=db.login,
database=schema or db.schema or 'default')
def get_results(self, hql, schema='default', arraysize=1000):
from impala.error import ProgrammingError
with self.get_conn(schema) as conn:
if isinstance(hql, basestring):
hql = [hql]
results = {
'data': [],
'header': [],
}
cur = conn.cursor()
for statement in hql:
cur.execute(statement)
records = []
try:
# impala Lib raises when no results are returned
# we're silencing here as some statements in the list
# may be `SET` or DDL
records = cur.fetchall()
except ProgrammingError:
logging.debug("get_results returned no records")
if records:
results = {
'data': records,
'header': cur.description,
}
return results
def to_csv(
self,
hql,
csv_filepath,
schema='default',
delimiter=',',
lineterminator='\r\n',
output_header=True,
fetch_size=1000):
schema = schema or 'default'
with self.get_conn(schema) as conn:
with conn.cursor() as cur:
logging.info("Running query: " + hql)
cur.execute(hql)
schema = cur.description
with open(csv_filepath, 'wb') as f:
writer = csv.writer(f,
delimiter=delimiter,
lineterminator=lineterminator,
encoding='utf-8')
if output_header:
writer.writerow([c[0] for c in cur.description])
i = 0
while True:
rows = [row for row in cur.fetchmany(fetch_size) if row]
if not rows:
break
writer.writerows(rows)
i += len(rows)
logging.info("Written {0} rows so far.".format(i))
logging.info("Done. Loaded a total of {0} rows.".format(i))
def get_records(self, hql, schema='default'):
"""
Get a set of records from a Hive query.
>>> hh = HiveServer2Hook()
>>> sql = "SELECT * FROM airflow.static_babynames LIMIT 100"
>>> len(hh.get_records(sql))
100
"""
return self.get_results(hql, schema=schema)['data']
def get_pandas_df(self, hql, schema='default'):
"""
Get a pandas dataframe from a Hive query
>>> hh = HiveServer2Hook()
>>> sql = "SELECT * FROM airflow.static_babynames LIMIT 100"
>>> df = hh.get_pandas_df(sql)
>>> len(df.index)
100
"""
import pandas as pd
res = self.get_results(hql, schema=schema)
df = pd.DataFrame(res['data'])
df.columns = [c[0] for c in res['header']]
return df
```
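The `_prepare_hiveconf` method above flattens a dict of settings into the alternating `-hiveconf key=value` argument list the hive CLI expects. A minimal standalone sketch of that flattening (the function name here is illustrative, and `itertools.chain.from_iterable` stands in for Airflow's `as_flattened_list` helper):

```python
import itertools

def prepare_hiveconf(d):
    """Flatten {'key': 'value', ...} into ['-hiveconf', 'key=value', ...]."""
    if not d:
        return []
    # Pair each 'key=value' string with a '-hiveconf' flag, then flatten
    # the resulting (flag, setting) tuples into a single argument list.
    pairs = zip(["-hiveconf"] * len(d),
                ["{}={}".format(k, v) for k, v in d.items()])
    return list(itertools.chain.from_iterable(pairs))

print(prepare_hiveconf({"hive.exec.dynamic.partition": "true"}))
# → ['-hiveconf', 'hive.exec.dynamic.partition=true']
```

Each setting arrives as its own pair of argv entries, which is why `run_cli` can simply `extend` the command list with the result before appending `-f` and the script path.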
|
The following is a list of massacres and mass executions that occurred in Yugoslavia during World War II. The areas that were once part of Yugoslavia are now parts of Bosnia and Herzegovina, Croatia, Kosovo, Serbia, Slovenia, North Macedonia, and Montenegro; see the lists of massacres in those countries for more details.
Perpetrators
The majority of massacres were committed by Yugoslav factions during the civil war, while a number were committed by invading Axis forces.
Ustaše
After the invasion of Yugoslavia, the puppet state Independent State of Croatia (NDH) was created by the Axis powers in the areas of most of modern-day Croatia and Bosnia and Herzegovina. The Ustaše sought to create an ethnically clean state by eradicating Serbs, Jews and Romani through genocidal policies. According to Ustaše officials, the creation of an ethnically pure Greater Croatian state would ensure the safety of the Croats from the Serbs. From the data calculated by the German Ministry of Foreign Affairs, at the creation of the state the population of Serbs was approximately 1,925,000. The Ustaše's largest genocidal massacres were carried out in Bosanska Krajina and in places in Croatia where Serbs constituted a large proportion of the population, including Banija, Kordun, Lika, and northern Dalmatia. Between 300,000 and 350,000 Serbs were killed in massacres and in concentration camps like Jasenovac and Jadovno. Some 100,000 Serbs, Jews, and anti-fascist Croats were killed at Jasenovac alone.
Chetniks
The Chetniks wanted to forge an ethnically pure Greater Serbia, claiming it was to ensure the survival of Serbs in Axis/Ustaše-controlled areas by violently "cleansing" these areas of Croats and Muslims. Several historians view Chetnik actions against Muslims and Croats as constituting genocide. Estimates of the number of deaths caused by the Chetniks in Croatia and Bosnia and Herzegovina range from 50,000 to 68,000, while more than 5,000 victims are registered in the region of Sandžak. About 300 villages and small towns were destroyed, along with a large number of mosques and Catholic churches. Chetnik massacres of the Bosniak population took place in eastern Bosnia which, according to historian Marko Attila Hoare, had been "relatively untouched" by the Ustaše until the spring of 1942. Bosnian historian Enver Redžić has a different opinion and claims that eastern Bosnia was not in relative peace at all during the period 1941–1942. He writes that in the summer of 1941, killings of Serbs had already started and acquired broader proportions in eastern Bosnia, and that anti-Serb propaganda by the Ustaše had, by that time, found success among local Muslims and Croats. Bosniak Muslims, particularly in eastern Bosnia, comprised a large contingent of Ustaše units in the region and played a large role in the genocide of ethnic Serbs in the area that began in 1941. Bosniaks, later in the war, also joined the Waffen-SS units that were notorious for their cruelty to the Serbian population. The Serbian population in the Podrinje region (eastern Bosnia) declined significantly as a result of these massacres and ethnic cleansing. Hoare argues that the latter-referenced massacres were not acts of revenge, but "an expression of the genocidal policy and ideology of the Chetnik movement."
Yugoslav Partisans
Yugoslav Partisans committed various massacres, notably as part of the so-called "leftist errors" against ideological opponents and suspected collaborators. At the end of the war and in its immediate aftermath, the Partisans carried out purges in Serbia (1944–45) and massacred tens of thousands of suspected collaborators during the Bleiburg repatriations. Ethnic minorities, such as Italians (namely Istrian Italians and Dalmatian Italians), were persecuted during the Foibe massacres in the Julian March, Kvarner and Dalmatia, while ethnic Germans were massacred during the flight and expulsion of Germans from Yugoslavia.
Axis occupying forces
German, Italian, Hungarian and Bulgarian occupying forces engaged in atrocities against the Yugoslav population, in the form of mass killings of civilians and hostages in retaliation for Partisan attacks and resistance. Infamous examples include the Kragujevac massacre, committed by German forces; the massacre at Andrijevica, in which Albanian Waffen-SS units murdered more than 400 Orthodox Christian civilians; the Novi Sad raid, committed by Hungarian forces; and crimes committed by Italian forces, such as in Podhum.
List
See also
Bloody Christmas (1945)
List of massacres in the Bosnian War
List of massacres in the Croatian War
List of massacres in the Kosovo War
References
Sources
Books
Reports
Journals
Conference papers and proceedings
Web
Yugo
Yugoslavia, World War II
Massacres, World War II
World War II
World War II, Yugoslavia
|
```xml
<UserControl x:Class="AsmDude.QuickInfo.RegisterTooltipWindow"
xmlns="path_to_url"
xmlns:x="path_to_url"
xmlns:mc="path_to_url"
xmlns:d="path_to_url"
mc:Ignorable="d" x:Name="MainWindow"
Focusable="True">
<UserControl.InputBindings>
<KeyBinding Key="Escape" Command="{Binding Path=CloseCommand, Mode=OneTime}" CommandTarget="{Binding ElementName=MainWindow}" />
</UserControl.InputBindings>
<StackPanel>
<TextBlock x:Name="Description" TextWrapping="Wrap"/>
<Border x:Name="AsmSimGridBorder" BorderBrush = "LightGray" CornerRadius="2" BorderThickness="1" Margin="0,6,0,0" Visibility="Collapsed">
<Expander x:Name="AsmSimGridExpander" Visibility="Collapsed" IsExpanded="False">
<Expander.Header>
<DockPanel LastChildFill="False">
<Label DockPanel.Dock="Left">
<Label.VerticalAlignment>Center</Label.VerticalAlignment>
<Label.VerticalContentAlignment>Center</Label.VerticalContentAlignment>
<Label.Content>Register Contents</Label.Content>
</Label>
<ComboBox x:Name="AsmSimGridExpanderNumeration" DockPanel.Dock="Right">
<ComboBox.VerticalAlignment>Center</ComboBox.VerticalAlignment>
<ComboBox.VerticalContentAlignment>Center</ComboBox.VerticalContentAlignment>
<ComboBox.Items>
<ComboBoxItem>HEX</ComboBoxItem>
<ComboBoxItem>BIN</ComboBoxItem>
<ComboBoxItem>DEC</ComboBoxItem>
<ComboBoxItem>OCT</ComboBoxItem>
</ComboBox.Items>
</ComboBox>
</DockPanel>
</Expander.Header>
<Grid x:Name="AsmSimGrid">
<Grid.RowDefinitions>
<RowDefinition Height="*" />
<RowDefinition Height="*" />
<RowDefinition Height="*" />
</Grid.RowDefinitions>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="*" />
<ColumnDefinition Width="*" />
</Grid.ColumnDefinitions>
</Grid>
</Expander>
</Border>
</StackPanel>
</UserControl>
```
|
```ts
import { vol } from 'memfs';
import { parseXMLAsync } from '../../utils/XML';
import { fileExistsAsync } from '../../utils/modules';
import { getResourceXMLPathAsync } from '../Paths';
import { getObjectAsResourceGroup, readResourcesXMLAsync } from '../Resources';
jest.mock('fs');
describe(getResourceXMLPathAsync, () => {
beforeAll(async () => {
vol.fromJSON(
{
'./android/app/src/main/res/values/colors.xml': '<resources></resources>',
// './android/app/src/main/res/values-night/colors.xml': '<resources></resources>',
},
'/app'
);
vol.fromJSON(
{
// no files -- specifically no android folder
},
'/managed'
);
});
afterAll(async () => {
vol.reset();
});
it(`returns a fallback value for a missing file`, async () => {
const path = await getResourceXMLPathAsync('/app', { name: 'colors', kind: 'values-night' });
// ensure the file is missing so the test works as expected.
expect(await fileExistsAsync(path)).toBe(false);
// read the file with a default fallback
expect(await readResourcesXMLAsync({ path })).toStrictEqual({ resources: {} });
});
it(`returns a default value for an XML file`, async () => {
const path = await getResourceXMLPathAsync('/app', { name: 'colors', kind: 'values' });
// ensure the file exists so the test works as expected.
expect(await fileExistsAsync(path)).toBe(true);
// read the file with a default fallback
expect(await readResourcesXMLAsync({ path })).toStrictEqual({ resources: {} });
});
});
describe(getObjectAsResourceGroup, () => {
it(`matches parsed xml`, async () => {
// Parsed from a string for DX readability
const styles = await parseXMLAsync(
[
'<?xml version="1.0" encoding="UTF-8" standalone="yes"?>',
'<resources>',
' <style name="AppTheme" parent="Theme.AppCompat.Light.NoActionBar">',
' <item name="key">value</item>',
' <item name="foo">bar</item>',
' </style>',
'</resources>',
].join('\n')
);
expect(
getObjectAsResourceGroup({
name: 'AppTheme',
parent: 'Theme.AppCompat.Light.NoActionBar',
item: { key: 'value', foo: 'bar' },
})
).toStrictEqual((styles.resources as any).style[0]);
});
});
```
|
Alexander Bittroff (born 19 September 1989) is a German professional footballer who plays as a full-back for Jahn Regensburg.
References
External links
1989 births
Living people
People from Lauchhammer
People from Bezirk Cottbus
German men's footballers
Footballers from Brandenburg
Men's association football fullbacks
2. Bundesliga players
3. Liga players
Regionalliga players
FC Energie Cottbus II players
FC Energie Cottbus players
FSV Frankfurt players
Chemnitzer FC players
KFC Uerdingen 05 players
1. FC Magdeburg players
SSV Jahn Regensburg players
|
Rosemary Gladstar is an American herbalist.
Biography
She began her work in herbalism in California, and she founded the California School of Herbal Studies in Forestville, California, the first herbal school in California, in 1978. Gladstar taught at the school with the help of herbalists such as Christopher Hobbs. She moved to Vermont in 1987 and co-founded Sage Mountain Herbs. Gladstar founded United Plant Savers out of concern over the ecological sustainability of the herb trade; she serves as president of the Board of Directors of United Plant Savers. Gladstar helped found the Northeast Herb Association in 1991. She is the co-founder of the International Herb Symposium and The New England Women's Herbal Conference. The New England Women's Herbal Conference is held yearly in August at Camp Wicosuta in Newfound Lake, New Hampshire. Gladstar announced her retirement from being the director of the conference during the 2018 event.
Gladstar is the author of several books, including Herbal Healing for Women, Herbs for Natural Beauty, Herbs for the Home Medicine Chest, Herbal Recipes for Vibrant Health and Planting the Future: Saving Our Medicinal Herbs. She has taught herbalism extensively throughout the U.S. She organizes the International Herb Symposium and the United Plant Savers conference, and speaks widely at other herbal conferences including the Southwest Conference, Medicines from the Earth, the Green Nations Gathering and Breitenbush. She also leads herbal travel adventures in various parts of the world.
Works
Gladstar, Rosemary (2001). Rosemary Gladstar's Family Herbal: A Guide to Living Life with Energy, Health, and Vitality. North Adams, Mass.: Storey Books. vii, 400 p.
Gladstar, Rosemary; Hirsch, Pamela, eds. (2000). Planting the Future: Saving Our Medicinal Herbs. Rochester, Vt.: Healing Arts Press. x, 310 p.
References
External links
Sage Mountain
United Plant Savers
Herbalists
American conservationists
Activists from the San Francisco Bay Area
People from Forestville, California
Year of birth missing (living people)
Living people
American women non-fiction writers
21st-century American women
|
Garland is a city in the U.S. state of Texas. It is located northeast of Dallas and is part of the Dallas–Fort Worth metroplex. It lies within Dallas County except for small portions located in Collin and Rockwall Counties. At the 2010 census, the city had a population of 226,876. In 2019, the population rose to 239,928, making it the 93rd-most populous city in the United States and the 12th-most populous city in Texas; by 2020, it had a population of 246,018. Garland is the third-largest city in Dallas County by population and has access to downtown Dallas via public transportation, including two DART Blue Line stations and buses.
History
Immigrants began arriving in the Peters colony area around 1850, but a community was not created until 1874. Two communities sprang up in the area: Embree, named for physician K. H. Embree, and Duck Creek, named for the local creek of the same name. A rivalry between the two towns ensued as the area began to grow around the Santa Fe Railroad depot.
Eventually, to settle a dispute regarding which town should have the local post office, Dallas County Judge Thomas A. Nash asked visiting Congressman Joe Abbott to move the post office between the two towns. The move was completed in 1887. The new location was named Garland after U.S. Attorney General Augustus Hill Garland.
Soon after, the towns of Embree and Duck Creek were combined, and the three areas combined to form the city of Garland, which was incorporated in 1891. By 1904, the town had a population of 819 people.
In 1920, local businessmen financed a new electrical generator plant (sold by Fairbanks-Morse) for the town. This later led to the formation of Garland Power and Light, the municipal electric provider that still powers the city today.
On May 9, 1927, a devastating F4 tornado struck the town and killed 15 people, including the former mayor, S. E. Nicholson.
Businesses began to move back into the area in the late 1930s. The Craddock food company and later the Byer-Rolnick hat factory (now owned by Resistol) moved into the area. In 1937, KRLD, a major Dallas radio station, built its radio antenna tower in Garland, and it is operational to this day.
During World War II, several aircraft plants were operated in the area, and the Kraft Foods company purchased a vacant one after the war for its own use. By 1950, the population of Garland exceeded 10,000 people. From 1950 to 1954, though, the Dallas/Garland area suffered from a serious and extended drought, so to supplement the water provided by wells, Garland began using the water from the nearby Lake Lavon.
The suburban population boom that the whole country experienced after World War II also reached Garland by 1960, when the population nearly quadrupled from the 1950 figure to about 38,500. By 1970, the population had doubled to about 81,500. By 1980, the population reached 138,850. Charles R. Matthews served as mayor in the 1980s; he was later a member of the elected Texas Railroad Commission.
In the 2000s, Garland added several notable developments, mostly in the northern portion of the city. Hawaiian Falls waterpark opened in 2003. (Garland formerly had a Wet 'n Wild waterpark, which closed in 1993). The Garland Independent School District's Curtis Culwell Center (formerly called the Special Events Center), an arena and conference facility, opened in 2005. Later that year, Firewheel Town Center, a Main Street-style outdoor mall, owned by Simon Property Group, opened in October 2005.
It has over 100 businesses and includes an AMC theater. In 2009, the city, in conjunction with developer Trammell Crow Company, finished a public/private partnership to develop the old parking lot (the land between 5th Street, 6th Street, and on the north side of Austin Street) into a new mixed-use, transit-oriented development named 5th Street Crossing. Cater-corner to both City Hall and the downtown DART Rail station, the project consists of 189 residential apartment units, of flex retail, and six live-work units.
The southeast side of Garland suffered a major blow on the night of December 26, 2015, after a large EF4 tornado struck the area, moving north from Sunnyvale. At least eight fatalities were confirmed in the city from this event.
Geography
Garland is located at (32.907325, –96.635197). According to the United States Census Bureau, the city has a total area of 57.1 sq mi (147.9 km²), all land.
Neighborhoods and historical communities
Buckingham North
Duck Creek
Centerville
Club Hill
Eastern Hills
Embree
Firewheel
Oaks
Rose Hill
Spring Park
Travis College Hill Addition
Valley Creek
The 5
Oakridge
Brentwood Place
Brentwood Village
Climate
Garland is part of the humid subtropical region. The average warmest month is July, with the highest recorded temperature being in 2000. Typically, the coolest month is January, when the lowest recorded temperature was in 1989. The maximum average precipitation occurs in May.
Demographics
According to the 2020 United States census, there were 246,018 people, 75,886 households, and 56,868 families residing in the city, up from 226,876 people, 75,696 households, and 56,272 families at the 2010 census. The population density was 3,973.3 people/sq mi (1,534.1/km²). The 80,834 housing units averaged 1,415.7/sq mi (546.5/km²). The 2019 census estimates placed the population at 239,928.
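As a quick sanity check on the figures above (a minimal sketch using only numbers from this article), the quoted density corresponds to the 2010 population divided by the city's land area of 57.1 sq mi:

```python
# Sanity check: the quoted density of 3,973.3 people/sq mi equals the
# 2010 population (226,876) divided by Garland's land area (57.1 sq mi).
population_2010 = 226_876
land_area_sq_mi = 57.1

density = population_2010 / land_area_sq_mi
print(round(density, 1))  # 3973.3
```

Note that the density figure tracks the 2010 count rather than the 2020 count.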
Of the 75,696 households in 2010, 36.9% had children under 18 living with them, 52.0% were married couples living together, 16.1% had a female householder with no husband present, and 25.7% were not families. About 20.8% of all households were made up of individuals, and 6.5% had someone living alone who was 65 or older. The average household size was 2.99, and the average family size was 3.48.
At the 2018 American Community Survey, 25.9% of households had children under the age of 18 living with them, and the median age was 34.1. Of the adult population, 48.1% were male and 51.9% were female. The average household size was 3.25 and the average family size was 3.71. Roughly 0.3% of households in Garland were same-sex, unmarried-partner households and 5.3% were opposite-sex, unmarried-partner households.
According to the U.S. Census Bureau's 2007–2011 American Community Survey, the median income for a household in the city was $52,441, and for a family was $57,293. Males had a median income of $36,041 versus $33,950 for females. The per capita income for the city was $20,000. About 11.1% of families and 14.5% of the population were below the poverty line, including 21.7% of those under age 18 and 7.3% of those age 65 or over. The median income for a household in Garland as of 2018 was $60,374. In 2018, an estimated 242,402 people, 74,489 households, and 77,626 housing units were in the city.
In the city, the population was distributed as 28.5% under the age of 18, 9.6% from 18 to 24, 28.0% from 25 to 44, 24.7% from 45 to 64, and 9.2% who were 65 years of age or older at the 2010 United States census. The median age was 33.7 years. For every 100 females, there were 96.1 males. For every 100 females age 18 and over, there were 92.6 males.
Race and ethnicity
The racial and ethnic makeup of the city was 57.5% White, 14.5% African American, 0.8% Native American, 9.4% Asian, 0.04% Pacific Islander, 14.4% some other race, and 3.3% from two or more races in 2010. Hispanics or Latinos of any race were 37.8% of the population. Non-Hispanic whites were 36.7% of the population, down from 86.5% in 1980. Following continued trends of diversification, the racial and ethnic makeup at 2018's census estimates were 27% non-Hispanic White, 14% African American, 0.7% American Indian or Alaska Native, 12.4% Asian, 0.5% some other race, 1.7% two or more races, and 43.2% Hispanic or Latino American of any race. Within the local Hispanic or Latino demographic, the largest nationality were Mexican Americans (34.2%). Puerto Ricans made up the second largest single Latin group (0.5%) followed by 42 Cuban Americans and 8.5% other Hispanic and Latino Americans. In 2020, the composition of the city was 27.31% non-Hispanic white, 14.77% Black or African American, 0.25% Native American, 11.88% Asian, 0.03% Pacific Islander, 0.38% some other race, 2.72% multiracial, and 42.66% Hispanic or Latino of any race.
As of 2000, 12% of the foreign-born population of Garland originated from Vietnam. Two strip-style shopping malls along Walnut Street cater to Vietnamese people, and a community center as of 2009 hosts first-generation Vietnamese immigrants. According to the 2010 U.S. census, Garland has the 16th-largest number of Vietnamese Americans in the United States.
Religion
The majority of Garland's local population are affiliated with a religion, being part of the largest Christian-dominated metropolitan area in the United States. As of 2020, the Catholic Church is the largest single Christian denomination in the city and wider Dallas–Fort Worth–Arlington metropolitan statistical area. Garland's Catholic population is served by the Roman Catholic Diocese of Dallas, one of the largest jurisdictions of the Catholic Church in the United States.
Baptists were the second-largest Christian denomination, and the largest Protestant group, within the city limits. Baptists are traditionally divided among the Southern Baptist Convention, National Baptists (USA and America) and Texas Baptists. Methodists form the third-largest Christian denomination in the city of Garland. Other prominent Christian denominations were the Church of Jesus Christ of Latter-day Saints, Pentecostalism, Lutheranism, Presbyterianism, and Episcopalianism. An estimated 12.2% of the total religious population professed another Christian faith. The largest non-Christian religion according to Sperling's BestPlaces was Islam, followed by Judaism and eastern religions including Buddhism, Sikhism, and Hinduism.
In 1997, the Taiwanese UFO religion Chen Tao moved many of its members to Garland, where they believed the Second Coming of Jesus Christ would occur.
Economy
In the late 1930s, the Craddock food company, which manufactured pickles, moved to town. In 1937, the KRLD (Dallas) radio tower was constructed in Garland. During World War II, several aircraft plants operated in the Garland area. After the war, Kraft Foods bought the Continental Motors plant and retooled it for its own manufacturing; the Kraft plant still operates to this day. As a station on two railroads, Garland was a major onion-shipping point in the 1940s.
Resistol Hats in Garland is a notable manufacturer of premium hats, many of which have been worn by or given to notable figures around the world. The company has long been an important part of Garland's manufacturing base. The company was founded by E.R. Byer and Harry Rolnick, who established Byer-Rolnick in Dallas in 1927. At the time, the company produced men's felt hats. The company used the name "Resistol Hats" to indicate that the hats could "resist-all" weather conditions. Some accounts contend the name was given because the headbands in the company's hats were more resistant to scalp oil. The growing firm needed to expand. In 1938, it moved to a larger facility in Garland, where Resistol hats continue to be manufactured today. For decades, residents surrounding the hat factory could set their clocks to its whistle.
In the early 1980s, Garland had one of the lowest poverty rates of cities in the country. In 1990, it had a population of 180,650 and 2,227 businesses, making it Dallas County's second-largest city and the 10th-largest in the state. Today, Garland has a variety of industries, including electronics, steel fabrication, oilfield equipment, aluminum die casting, hat manufacturing, dairy products, and food processing.
Top employers
According to the City of Garland's Economic Development Partnership website, the top employers in the city are:
Garland has lost many of its major employers over the last few years. Raytheon moved to Richardson; Baylor Scott & White closed (but later reopened as a VA hospital); L3 Technologies closed, as did many others.
Arts and culture
Garland is home to numerous historic and recent entertainment venues.
Entertainment
The Granville Arts Center is a complex owned and operated by the city. Included within the complex are two elegant proscenium theatres which seat 720 and 200, respectively. Also included as part of the complex is the Plaza Theatre, which has seating for 350. The Plaza Theatre is a historic entertainment venue. The Plaza Theatre was refurbished and is utilized for business conferences, concerts, receptions, and stage productions. It is also host to paintings by artist Bruce Cody. The Atrium at the Granville Arts Center is a ballroom encased in glass on two sides and opening onto an elegant outdoor courtyard. The Atrium provides civic, community and commercial organizations the opportunity to house banquets, receptions, trade shows, and conventions.
Landmarks
Garland is home to the Pace House, which was the original home of John H. Pace and his wife; it was built in the Queen Anne-style architecture. The Pace House was recognized as a historic landmark by the Dallas County Historic Resource Survey of 1982.
Other historic areas of the city include the Garland Landmark Museum, housed in the former 1901 Santa Fe depot. Inside are historical artifacts and documents representing the period from 1850 to the present. Historic Downtown Garland is another local landmark. Historic Downtown Garland was listed in the National Register of Historic Places in 2017.
Travis College Hill Historic District, a residential neighborhood in downtown Garland, was the first site in Garland history to be added to the National Register of Historic Places, administered by the U.S. Department of the Interior through its National Park Service. Two months later, the downtown square and surrounding buildings became the second site in Garland added to the listing. Travis College Hill consists of 12 homes whose period of significance is 1913 to 1960. Travis College Hill was platted in January 1913 by developer R.O. Travis.
On May 9, 1927, a tornado destroyed much of the city and killed 17 people, including a former mayor, S. E. Nicholson. Six years later, the Nicholson Memorial Library opened in his honor.
The Nicholson Memorial Library System is also the Major Resource Center, or headquarters, of the Northeast Texas Library System (NETLS). NETLS serves a 33-county area that includes 105 member libraries. The Nicholson Memorial Library System headquarters and offices have been housed in NMLS' Central Library since 1983.
Parks and recreation
Garland includes over of park land, six recreation centers, and 63 parks.
Government
The city of Garland is a voluntary member of the North Central Texas Council of Governments association, the purpose of which is to coordinate individual and collective local governments and facilitate regional solutions, eliminate unnecessary duplication, and enable joint decisions.
The Parkland Health & Hospital System (Dallas County Hospital District) operates the Garland Health Center.
The Texas Department of Public Safety operates the Region I office in Garland.
The Texas Department of Criminal Justice operates the Dallas II District Parole Offices in Garland.
The United States Postal Service operates the Garland, Kingsley, and North Garland post offices.
Politics
Education
Primary and secondary schools
Most of Garland is in the Garland Independent School District (GISD). Parts of Garland extend into other districts, including the Dallas, Mesquite, and Richardson Independent School Districts.
The GISD does not have school zoning, so GISD residents may apply to any GISD school.
The GISD portion of Garland is served by several high schools. Garland High School is home to the district's international baccalaureate program. North Garland High School is the math, science and technology magnet. Lakeview Centennial High School is GISD's "College and Career" magnet school. South Garland High School is known within the community for its vocational cosmetology program. Other GISD high schools include Naaman Forest, Rowlett, and Sachse High Schools.
The Mesquite ISD portion of Garland is served by Price Elementary School, Vanston Middle School, and North Mesquite High School.
The Richardson ISD portion is served by Big Springs Elementary School, O. Henry Elementary School, Apollo Junior High School, and Berkner High School, which are in the western and northern portions of Garland.
As of November 2006, the GISD had 52,391 students and 3,236 teachers, for an average ratio of 16.2 students per teacher. The 2006 GISD property tax rate was $1.5449 per hundred dollars of assessed property value.
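As a worked example of how such a per-$100 rate applies (the $200,000 assessed value below is hypothetical, not from the source):

```python
# 2006 GISD property tax: $1.5449 per $100 of assessed value.
RATE_PER_100 = 1.5449

def gisd_tax(assessed_value: float) -> float:
    """Annual GISD property tax at the 2006 rate."""
    return assessed_value / 100 * RATE_PER_100

# A hypothetical $200,000 home would owe about $3,089.80 per year.
print(round(gisd_tax(200_000), 2))
```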
For a private Christian school option, hundreds of families have chosen for their children to attend Garland Christian Academy, which was founded in 1972. The city also has a Pre-K–12 Islamic school, Brighter Horizons Academy.
Colleges and universities
Dallas County residents are zoned to Dallas College (formerly the Dallas County Community College District, or DCCCD). Richland College, a member of Dallas College, operates a Garland Campus, which has been in operation since June 30, 2009.
Garland is also the home of Amberton University, a fully accredited private university with both undergraduate and graduate degree programs. Amberton University was formerly known as Amber University and previously known as Abilene Christian University at Dallas.
Infrastructure
Transportation
The city of Garland has a lower-than-average percentage of households without a car. In 2015, 4.6 percent of Garland households lacked a car, and that figure was virtually unchanged in 2016 (4.4 percent); the national average was 8.7 percent in 2016. Garland averaged 2.04 cars per household in 2016, compared to a national average of 1.8. According to the American Community Survey for 2016 (5-year average), 78.8 percent of Garland residents commuted by driving alone, 13.1 percent carpooled, 2.5 percent used public transportation, and 0.9 percent walked. About 1.3 percent of Garland residents commuted to work by bicycle, taxi, motorcycle, or some other means, while 3.5 percent worked from home.
Major highways
Interstate 30 is a major east–west interstate that runs through the south side of Garland. I-30 connects with Mesquite and Dallas to the west and Rockwall to the east.
Interstate 635 (Lyndon B. Johnson Freeway) is an auxiliary interstate serving as a partial loop around Dallas and its suburbs. The interstate runs along the southwest section of Garland and serves as a border between Dallas and Garland. I-635 connects Garland with major freeways (such as North Central Expressway and Stemmons Freeway) and Dallas/Fort Worth International Airport.
Texas State Highway 66 is an east–west highway that terminates at a junction with Highway 78 in downtown Garland. East of downtown the highway connects with Rowlett and Rockwall.
Texas State Highway 78 (Garland Road, Lavon Drive) is a north–south highway that bisects the city of Garland and goes through the downtown area. The highway is known as Garland Road south of downtown and Lavon Drive north of downtown. In downtown Garland it transverses as a pair of roads known as Avenue D and 1st Street on the northbound section and Avenue B on the southbound section. Highway 78 connects with Sachse and Wylie to the north and East Dallas to the south.
President George Bush Turnpike is a toll road that serves as a loop around Dallas County. George Bush Turnpike runs through the northern parts of Garland. The turnpike connects with Richardson and Plano to the west and Rowlett to the east.
Belt Line Loop (parts of which are named First Street and Broadway Blvd) serves as an outer loop around the Dallas suburbs.
Trains
A Kansas City Southern track runs parallel to State Highway 78 (Garland Road and Lavon Drive), coming out of Dallas and heading all the way through the other side of Garland towards Wylie. There is also a Dallas, Garland and Northeastern Railroad line serving industries around the city.
Light rail
DART: Blue Line
Forest/Jupiter station
Downtown Garland station
Air
The city of Garland owns the Garland/DFW Heloplex. The facility was the first municipal heliport in Texas when it opened in November 1989. Located at 2559 S. Jupiter Road, the heliport is operated by SKY Helicopters Inc., which was initially awarded a lease of the facility in January 1993.
Utilities
The city of Garland operates the city's water system and waste services. Electricity for about 85% of Garland is provided by the city's municipal utility, Garland Power and Light (GP&L). Electricity for the other 15% was formerly provided by TXU, but is now supplied by multiple companies after deregulation of the Texas electricity market.
Water and wastewater utilities
Garland is an original member city of the North Texas Municipal Water District (NTMWD). The vision of the city fathers in the early 1940s resulted in Garland and its companion member cities benefiting from reliable, high-quality, affordable water from the water district's many reservoirs.
The effluent from Garland's wastewater treatment plant flows through an NTMWD man-made wetland. This provides a natural habitat for a wide variety of birds and reduces the sediment, nitrogen, and phosphorus content of the water to a drinkable level. Through the use of selected aquatic plants, this environmentally friendly project will provide millions of gallons of reusable water and reduce the environmental impact.
Garland Power and Light
GP&L was founded in 1923 to provide Garland residents not-for-profit public utility services, locally controlled by its citizens. GP&L provides services to over 69,000 customers, making it the fourth-largest municipal utility in Texas and the 41st-largest in the nation.
It has two gas-fired generating plants, which combined have 640 megawatts of generation capacity. In addition, Garland partners with the Texas Municipal Power Agency, which operates the 462-megawatt coal-fired Gibbons Creek Power Plant. Garland's electric distribution system has of overhead lines and of underground lines. Its transmission system consists of 23 substations and of transmission lines. Garland's peak load for 2007 was 483 megawatts, with annual operating revenues of nearly $238 million.
Notable people
Hakeem Adeniji, offensive lineman for the Cincinnati Bengals of the National Football League
Troy Baker, voice and screen actor known for video game performances, attended Naaman Forest High School
Tyson Ballou, model
Crystal Bernard, starred as K.C. Cunningham on the TV sitcom Happy Days and as Helen in the show Wings
Mookie Blaylock, NBA basketball player
Johnny Yong Bosch, actor, musician, and martial artist, raised in Garland
C. L. Bryant, Baptist minister and conservative talk-show host, resided in Garland
Amber Dotson, country music artist
Brian Adam Douglas, Brooklyn-based artist
Samuel Eguavoen, football player
Anu Emmanuel, actress
William Jackson Harper, actor and playwright, grew up in Garland
Caleb Landry Jones, actor
Chris Jones (born 1993), basketball player for Valencia Basket and the Armenian national team.
Tyrese Maxey, University of Kentucky basketball and Philadelphia 76ers basketball player
Adrienne Reese, professional wrestler Athena, currently of All Elite Wrestling and Ring of Honor, previously known as Ember Moon in WWE
Mitchel Musso, actor and musician
Adrian Phillips, NFL football player
Ricky Pierce, NBA guard, NBA All-Star, 2x winner of NBA 6th Man Of The Year Award, raised in Garland
LeAnn Rimes, musician, grew up in Garland
Gene Summers, musician
Lee Trevino, professional golfer, winner of six major championships and 29 PGA Tour events, World Golf Hall of Fame member (1981), was born in Garland (1939)
LTC Allen West, chair of Texas GOP; former Florida Congressman
See also
Curtis Culwell Center attack
Notes
References
Bibliography
External links
City of Garland
Garland Landmark Society
Cities in Collin County, Texas
Cities in Dallas County, Texas
Dallas–Fort Worth metroplex
Populated places established in 1891
1891 establishments in Texas
Cities in Texas
|
```c++
//your_sha256_hash---------------------------------------
//your_sha256_hash---------------------------------------
#include "Backend.h"
#if DBG_DUMP
#define DO_MEMOP_TRACE() (PHASE_TRACE(Js::MemOpPhase, this->func) ||\
PHASE_TRACE(Js::MemSetPhase, this->func) ||\
PHASE_TRACE(Js::MemCopyPhase, this->func))
#define DO_MEMOP_TRACE_PHASE(phase) (PHASE_TRACE(Js::MemOpPhase, this->func) || PHASE_TRACE(Js::phase ## Phase, this->func))
#define OUTPUT_MEMOP_TRACE(loop, instr, ...) {\
char16 debugStringBuffer[MAX_FUNCTION_BODY_DEBUG_STRING_SIZE];\
Output::Print(15, _u("Function: %s%s, Loop: %u: "), this->func->GetJITFunctionBody()->GetDisplayName(), this->func->GetDebugNumberSet(debugStringBuffer), loop->GetLoopNumber());\
Output::Print(__VA_ARGS__);\
IR::Instr* __instr__ = instr;\
if(__instr__) __instr__->DumpByteCodeOffset();\
if(__instr__) Output::Print(_u(" (%s)"), Js::OpCodeUtil::GetOpCodeName(__instr__->m_opcode));\
Output::Print(_u("\n"));\
Output::Flush(); \
}
#define TRACE_MEMOP(loop, instr, ...) \
if (DO_MEMOP_TRACE()) {\
Output::Print(_u("TRACE MemOp:"));\
OUTPUT_MEMOP_TRACE(loop, instr, __VA_ARGS__)\
}
#define TRACE_MEMOP_VERBOSE(loop, instr, ...) if(CONFIG_FLAG(Verbose)) {TRACE_MEMOP(loop, instr, __VA_ARGS__)}
#define TRACE_MEMOP_PHASE(phase, loop, instr, ...) \
if (DO_MEMOP_TRACE_PHASE(phase))\
{\
Output::Print(_u("TRACE ") _u(#phase) _u(":"));\
OUTPUT_MEMOP_TRACE(loop, instr, __VA_ARGS__)\
}
#define TRACE_MEMOP_PHASE_VERBOSE(phase, loop, instr, ...) if(CONFIG_FLAG(Verbose)) {TRACE_MEMOP_PHASE(phase, loop, instr, __VA_ARGS__)}
#else
#define DO_MEMOP_TRACE()
#define DO_MEMOP_TRACE_PHASE(phase)
#define OUTPUT_MEMOP_TRACE(loop, instr, ...)
#define TRACE_MEMOP(loop, instr, ...)
#define TRACE_MEMOP_VERBOSE(loop, instr, ...)
#define TRACE_MEMOP_PHASE(phase, loop, instr, ...)
#define TRACE_MEMOP_PHASE_VERBOSE(phase, loop, instr, ...)
#endif
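// Usage sketch (hedged, not from the original source): a typical call site
// for the tracing macros above looks roughly like
//   TRACE_MEMOP_PHASE(MemSet, loop, instr, _u("MemSet candidate found"));
// Under DBG_DUMP this prints "TRACE MemSet:" plus the function name, loop
// number, and byte-code offset of `instr`; in non-DBG_DUMP builds the macros
// expand to nothing, so tracing carries no release-build cost.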
class AutoRestoreVal
{
private:
Value *const originalValue;
Value *const tempValue;
Value * *const valueRef;
public:
AutoRestoreVal(Value *const originalValue, Value * *const tempValueRef)
: originalValue(originalValue), tempValue(*tempValueRef), valueRef(tempValueRef)
{
}
~AutoRestoreVal()
{
if(*valueRef == tempValue)
{
*valueRef = originalValue;
}
}
PREVENT_COPY(AutoRestoreVal);
};
GlobOpt::GlobOpt(Func * func)
: func(func),
intConstantToStackSymMap(nullptr),
intConstantToValueMap(nullptr),
currentValue(FirstNewValueNumber),
prePassLoop(nullptr),
alloc(nullptr),
isCallHelper(false),
inInlinedBuiltIn(false),
rootLoopPrePass(nullptr),
noImplicitCallUsesToInsert(nullptr),
valuesCreatedForClone(nullptr),
valuesCreatedForMerge(nullptr),
instrCountSinceLastCleanUp(0),
isRecursiveCallOnLandingPad(false),
updateInductionVariableValueNumber(false),
isPerformingLoopBackEdgeCompensation(false),
currentRegion(nullptr),
auxSlotPtrSyms(nullptr),
changedSymsAfterIncBailoutCandidate(nullptr),
doTypeSpec(
!IsTypeSpecPhaseOff(func)),
doAggressiveIntTypeSpec(
doTypeSpec &&
DoAggressiveIntTypeSpec(func)),
doAggressiveMulIntTypeSpec(
doTypeSpec &&
!PHASE_OFF(Js::AggressiveMulIntTypeSpecPhase, func) &&
(!func->HasProfileInfo() || !func->GetReadOnlyProfileInfo()->IsAggressiveMulIntTypeSpecDisabled(func->IsLoopBody()))),
doDivIntTypeSpec(
doAggressiveIntTypeSpec &&
(!func->HasProfileInfo() || !func->GetReadOnlyProfileInfo()->IsDivIntTypeSpecDisabled(func->IsLoopBody()))),
doLossyIntTypeSpec(
doTypeSpec &&
DoLossyIntTypeSpec(func)),
doFloatTypeSpec(
doTypeSpec &&
DoFloatTypeSpec(func)),
doArrayCheckHoist(
DoArrayCheckHoist(func)),
doArrayMissingValueCheckHoist(
doArrayCheckHoist &&
DoArrayMissingValueCheckHoist(func)),
doArraySegmentHoist(
doArrayCheckHoist &&
DoArraySegmentHoist(ValueType::GetObject(ObjectType::Int32Array), func)),
doJsArraySegmentHoist(
doArraySegmentHoist &&
DoArraySegmentHoist(ValueType::GetObject(ObjectType::Array), func)),
doArrayLengthHoist(
doArrayCheckHoist &&
DoArrayLengthHoist(func)),
doEliminateArrayAccessHelperCall(
doArrayCheckHoist &&
!PHASE_OFF(Js::EliminateArrayAccessHelperCallPhase, func)),
doTrackRelativeIntBounds(
doAggressiveIntTypeSpec &&
DoPathDependentValues() &&
!PHASE_OFF(Js::Phase::TrackRelativeIntBoundsPhase, func)),
doBoundCheckElimination(
doTrackRelativeIntBounds &&
!PHASE_OFF(Js::Phase::BoundCheckEliminationPhase, func)),
doBoundCheckHoist(
doEliminateArrayAccessHelperCall &&
doBoundCheckElimination &&
DoConstFold() &&
!PHASE_OFF(Js::Phase::BoundCheckHoistPhase, func) &&
(!func->HasProfileInfo() || !func->GetReadOnlyProfileInfo()->IsBoundCheckHoistDisabled(func->IsLoopBody()))),
doLoopCountBasedBoundCheckHoist(
doBoundCheckHoist &&
!PHASE_OFF(Js::Phase::LoopCountBasedBoundCheckHoistPhase, func) &&
(!func->HasProfileInfo() || !func->GetReadOnlyProfileInfo()->IsLoopCountBasedBoundCheckHoistDisabled(func->IsLoopBody()))),
doPowIntIntTypeSpec(
doAggressiveIntTypeSpec &&
(!func->HasProfileInfo() || !func->GetReadOnlyProfileInfo()->IsPowIntIntTypeSpecDisabled())),
doTagChecks(
(!func->HasProfileInfo() || !func->GetReadOnlyProfileInfo()->IsTagCheckDisabled())),
isAsmJSFunc(func->GetJITFunctionBody()->IsAsmJsMode())
{
}
void
GlobOpt::BackwardPass(Js::Phase tag)
{
BEGIN_CODEGEN_PHASE(this->func, tag);
::BackwardPass backwardPass(this->func, this, tag);
backwardPass.Optimize();
END_CODEGEN_PHASE(this->func, tag);
}
void
GlobOpt::Optimize()
{
this->objectTypeSyms = nullptr;
this->func->argInsCount = this->func->GetInParamsCount();
if (!func->GetJITFunctionBody()->IsAsmJsMode())
{
// Don't include "this" pointer in the count when not in AsmJs mode (AsmJS does not have "this").
this->func->argInsCount--;
}
if (!func->DoGlobOpt())
{
this->lengthEquivBv = nullptr;
this->argumentsEquivBv = nullptr;
this->callerEquivBv = nullptr;
// Still need to run the dead store phase to calculate the live reg on back edge
this->BackwardPass(Js::DeadStorePhase);
CannotAllocateArgumentsObjectOnStack(nullptr);
return;
}
{
this->lengthEquivBv = this->func->m_symTable->m_propertyEquivBvMap->Lookup(Js::PropertyIds::length, nullptr); // Used to kill live "length" properties
this->argumentsEquivBv = func->m_symTable->m_propertyEquivBvMap->Lookup(Js::PropertyIds::arguments, nullptr); // Used to kill live "arguments" properties
this->callerEquivBv = func->m_symTable->m_propertyEquivBvMap->Lookup(Js::PropertyIds::caller, nullptr); // Used to kill live "caller" properties
// The backward phase needs the glob opt's allocator to allocate the propertyTypeValueMap
// in GlobOpt::EnsurePropertyTypeValue and ranges of instructions where int overflow may be ignored.
// (see BackwardPass::TrackIntUsage)
PageAllocator * pageAllocator = this->func->m_alloc->GetPageAllocator();
NoRecoverMemoryJitArenaAllocator localAlloc(_u("BE-GlobOpt"), pageAllocator, Js::Throw::OutOfMemory);
this->alloc = &localAlloc;
NoRecoverMemoryJitArenaAllocator localTempAlloc(_u("BE-GlobOpt temp"), pageAllocator, Js::Throw::OutOfMemory);
this->tempAlloc = &localTempAlloc;
// The forward passes use info (upwardExposedUses) from the backward pass. This info
// isn't available for some of the symbols created during the backward pass, or the forward pass.
// Keep track of the last symbol for which we're guaranteed to have data.
this->maxInitialSymID = this->func->m_symTable->GetMaxSymID();
#if DBG
this->BackwardPass(Js::CaptureByteCodeRegUsePhase);
#endif
this->BackwardPass(Js::BackwardPhase);
this->ForwardPass();
this->BackwardPass(Js::DeadStorePhase);
}
this->TailDupPass();
}
bool GlobOpt::ShouldExpectConventionalArrayIndexValue(IR::IndirOpnd *const indirOpnd)
{
Assert(indirOpnd);
if(!indirOpnd->GetIndexOpnd())
{
return indirOpnd->GetOffset() >= 0;
}
IR::RegOpnd *const indexOpnd = indirOpnd->GetIndexOpnd();
if(indexOpnd->m_sym->m_isNotNumber)
{
// Typically, single-def or any sym-specific information for type-specialized syms should not be used because all of
// their defs will not have been accounted for until after the forward pass. But m_isNotNumber is only ever changed from
// false to true, so it's okay in this case.
return false;
}
StackSym *indexVarSym = indexOpnd->m_sym;
if(indexVarSym->IsTypeSpec())
{
indexVarSym = indexVarSym->GetVarEquivSym(nullptr);
Assert(indexVarSym);
}
else if(!IsLoopPrePass())
{
// Don't use single-def info or const flags for type-specialized syms, as all of their defs will not have been accounted
// for until after the forward pass. Also, don't use the const flags in a loop prepass because the const flags may not
// be up-to-date.
if (indexOpnd->IsNotInt())
{
return false;
}
StackSym *const indexSym = indexOpnd->m_sym;
if(indexSym->IsIntConst())
{
return indexSym->GetIntConstValue() >= 0;
}
}
Value *const indexValue = CurrentBlockData()->FindValue(indexVarSym);
if(!indexValue)
{
// Treat it as Uninitialized, assume it's going to be valid
return true;
}
ValueInfo *const indexValueInfo = indexValue->GetValueInfo();
int32 indexConstantValue;
if(indexValueInfo->TryGetIntConstantValue(&indexConstantValue))
{
return indexConstantValue >= 0;
}
if(indexValueInfo->IsUninitialized())
{
// Assume it's going to be valid
return true;
}
return indexValueInfo->HasBeenNumber() && !indexValueInfo->HasBeenFloat();
}
//
// Either result is float or 1/x or cst1/cst2 where cst1%cst2 != 0
//
ValueType GlobOpt::GetDivValueType(IR::Instr* instr, Value* src1Val, Value* src2Val, bool specialize)
{
ValueInfo *src1ValueInfo = (src1Val ? src1Val->GetValueInfo() : nullptr);
ValueInfo *src2ValueInfo = (src2Val ? src2Val->GetValueInfo() : nullptr);
if (instr->IsProfiledInstr() && instr->m_func->HasProfileInfo())
{
ValueType resultType = instr->m_func->GetReadOnlyProfileInfo()->GetDivProfileInfo(static_cast<Js::ProfileId>(instr->AsProfiledInstr()->u.profileId));
if (resultType.IsLikelyInt())
{
if (specialize && src1ValueInfo && src2ValueInfo
&& ((src1ValueInfo->IsInt() && src2ValueInfo->IsInt()) ||
(this->DoDivIntTypeSpec() && src1ValueInfo->IsLikelyInt() && src2ValueInfo->IsLikelyInt())))
{
return ValueType::GetInt(true);
}
return resultType;
}
// Consider: Checking that the sources are numbers.
if (resultType.IsLikelyFloat())
{
return ValueType::Float;
}
return resultType;
}
int32 src1IntConstantValue;
if(!src1ValueInfo || !src1ValueInfo->TryGetIntConstantValue(&src1IntConstantValue))
{
return ValueType::Number;
}
if (src1IntConstantValue == 1)
{
return ValueType::Float;
}
int32 src2IntConstantValue;
if(!src2Val || !src2ValueInfo->TryGetIntConstantValue(&src2IntConstantValue))
{
return ValueType::Number;
}
if (src2IntConstantValue // Avoid divide by zero
&& !(src1IntConstantValue == 0x80000000 && src2IntConstantValue == -1) // Avoid integer overflow
&& (src1IntConstantValue % src2IntConstantValue) != 0)
{
return ValueType::Float;
}
return ValueType::Number;
}
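// Illustrative sketch (not part of the original source): the constant-folding
// tail of GetDivValueType above reduces to a small predicate over the two
// constants, modeled here as standalone C++ for clarity:
//
//   bool DivOfConstantsIsFloat(int32_t c1, int32_t c2)
//   {
//       if (c1 == 1) return true;                        // 1/x is fractional for |x| > 1
//       if (c2 == 0) return false;                       // avoid divide by zero
//       if (c1 == INT32_MIN && c2 == -1) return false;   // avoid integer overflow
//       return (c1 % c2) != 0;                           // inexact quotient => Float
//   }
//
// e.g. a 6/4 division yields Float (remainder 2), while 6/3 stays Number.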
void
GlobOpt::ForwardPass()
{
BEGIN_CODEGEN_PHASE(this->func, Js::ForwardPhase);
#if DBG_DUMP
if (Js::Configuration::Global.flags.Trace.IsEnabled(Js::GlobOptPhase, this->func->GetSourceContextId(), this->func->GetLocalFunctionId()))
{
this->func->DumpHeader();
}
if (Js::Configuration::Global.flags.TestTrace.IsEnabled(Js::GlobOptPhase))
{
this->TraceSettings();
}
#endif
// GetConstantCount() gives us the right size to pick for the SparseArray, but we may need more if we've inlined
// functions with constants. There will be a gap in the symbol numbering between the main constants and
// the inlined ones, so we'll most likely need a new array chunk. Make the min size of the array chunks be 64
// in case we have a main function with very few constants and a bunch of constants from inlined functions.
this->byteCodeConstantValueArray = SparseArray<Value>::New(this->alloc, max(this->func->GetJITFunctionBody()->GetConstCount(), 64U));
this->byteCodeConstantValueNumbersBv = JitAnew(this->alloc, BVSparse<JitArenaAllocator>, this->alloc);
this->tempBv = JitAnew(this->alloc, BVSparse<JitArenaAllocator>, this->alloc);
this->prePassCopyPropSym = JitAnew(this->alloc, BVSparse<JitArenaAllocator>, this->alloc);
this->slotSyms = JitAnew(this->alloc, BVSparse<JitArenaAllocator>, this->alloc);
this->byteCodeUses = nullptr;
this->propertySymUse = nullptr;
// changedSymsAfterIncBailoutCandidate helps track building incremental bailout in ForwardPass
this->changedSymsAfterIncBailoutCandidate = JitAnew(alloc, BVSparse<JitArenaAllocator>, alloc);
this->auxSlotPtrSyms = JitAnew(alloc, BVSparse<JitArenaAllocator>, alloc);
#if DBG
this->byteCodeUsesBeforeOpt = JitAnew(this->alloc, BVSparse<JitArenaAllocator>, this->alloc);
if (Js::Configuration::Global.flags.Trace.IsEnabled(Js::FieldCopyPropPhase) && this->DoFunctionFieldCopyProp())
{
Output::Print(_u("TRACE: CanDoFieldCopyProp Func: "));
this->func->DumpFullFunctionName();
Output::Print(_u("\n"));
}
#endif
OpndList localNoImplicitCallUsesToInsert(alloc);
this->noImplicitCallUsesToInsert = &localNoImplicitCallUsesToInsert;
IntConstantToStackSymMap localIntConstantToStackSymMap(alloc);
this->intConstantToStackSymMap = &localIntConstantToStackSymMap;
IntConstantToValueMap localIntConstantToValueMap(alloc);
this->intConstantToValueMap = &localIntConstantToValueMap;
Int64ConstantToValueMap localInt64ConstantToValueMap(alloc);
this->int64ConstantToValueMap = &localInt64ConstantToValueMap;
AddrConstantToValueMap localAddrConstantToValueMap(alloc);
this->addrConstantToValueMap = &localAddrConstantToValueMap;
StringConstantToValueMap localStringConstantToValueMap(alloc);
this->stringConstantToValueMap = &localStringConstantToValueMap;
SymIdToInstrMap localPrePassInstrMap(alloc);
this->prePassInstrMap = &localPrePassInstrMap;
ValueSetByValueNumber localValuesCreatedForClone(alloc, 64);
this->valuesCreatedForClone = &localValuesCreatedForClone;
ValueNumberPairToValueMap localValuesCreatedForMerge(alloc, 64);
this->valuesCreatedForMerge = &localValuesCreatedForMerge;
#if DBG
BVSparse<JitArenaAllocator> localFinishedStackLiteralInitFld(alloc);
this->finishedStackLiteralInitFld = &localFinishedStackLiteralInitFld;
#endif
FOREACH_BLOCK_IN_FUNC_EDITING(block, this->func)
{
this->OptBlock(block);
} NEXT_BLOCK_IN_FUNC_EDITING;
if (!PHASE_OFF(Js::MemOpPhase, this->func))
{
ProcessMemOp();
}
this->noImplicitCallUsesToInsert = nullptr;
this->intConstantToStackSymMap = nullptr;
this->intConstantToValueMap = nullptr;
this->int64ConstantToValueMap = nullptr;
this->addrConstantToValueMap = nullptr;
this->stringConstantToValueMap = nullptr;
#if DBG
this->finishedStackLiteralInitFld = nullptr;
uint freedCount = 0;
uint spilledCount = 0;
#endif
FOREACH_BLOCK_IN_FUNC(block, this->func)
{
#if DBG
if (block->GetDataUseCount() == 0)
{
freedCount++;
}
else
{
spilledCount++;
}
#endif
block->SetDataUseCount(0);
if (block->cloneStrCandidates)
{
JitAdelete(this->alloc, block->cloneStrCandidates);
block->cloneStrCandidates = nullptr;
}
} NEXT_BLOCK_IN_FUNC;
// Make sure we free most of them.
Assert(freedCount >= spilledCount);
// this->alloc will be freed right after return, no need to free it here
this->changedSymsAfterIncBailoutCandidate = nullptr;
this->auxSlotPtrSyms = nullptr;
END_CODEGEN_PHASE(this->func, Js::ForwardPhase);
}
void
GlobOpt::OptBlock(BasicBlock *block)
{
if (this->func->m_fg->RemoveUnreachableBlock(block, this))
{
GOPT_TRACE(_u("Removing unreachable block #%d\n"), block->GetBlockNum());
return;
}
Loop * loop = block->loop;
if (loop && block->isLoopHeader)
{
if (loop != this->prePassLoop)
{
OptLoops(loop);
if (!IsLoopPrePass() && loop->parent)
{
loop->fieldPRESymStores->Or(loop->parent->fieldPRESymStores);
}
if (!this->IsLoopPrePass() && DoFieldPRE(loop))
{
// Note: !IsLoopPrePass means this was a root loop pre-pass. FieldPre() is called once per loop.
this->FieldPRE(loop);
// Re-optimize the landing pad
BasicBlock *landingPad = loop->landingPad;
this->isRecursiveCallOnLandingPad = true;
this->OptBlock(landingPad);
this->isRecursiveCallOnLandingPad = false;
}
}
}
this->currentBlock = block;
PrepareLoopArrayCheckHoist();
block->MergePredBlocksValueMaps(this);
this->intOverflowCurrentlyMattersInRange = true;
this->intOverflowDoesNotMatterRange = this->currentBlock->intOverflowDoesNotMatterRange;
if (!DoFieldCopyProp() && !DoFieldRefOpts())
{
this->KillAllFields(CurrentBlockData()->liveFields);
}
this->tempAlloc->Reset();
if(loop && block->isLoopHeader)
{
loop->firstValueNumberInLoop = this->currentValue;
}
GOPT_TRACE_BLOCK(block, true);
FOREACH_INSTR_IN_BLOCK_EDITING(instr, instrNext, block)
{
GOPT_TRACE_INSTRTRACE(instr);
BailOutInfo* oldBailOutInfo = nullptr;
bool isCheckAuxBailoutNeeded = this->func->IsJitInDebugMode() && !this->IsLoopPrePass();
if (isCheckAuxBailoutNeeded && instr->HasAuxBailOut() && !instr->HasBailOutInfo())
{
oldBailOutInfo = instr->GetBailOutInfo();
Assert(oldBailOutInfo);
}
bool isInstrRemoved = false;
instrNext = this->OptInstr(instr, &isInstrRemoved);
// If we still have instrs with only aux bail out, convert aux bail out back to regular bail out and fill it.
// During OptInstr some instr can be moved out to a different block, in this case bailout info is going to be replaced
// with e.g. loop bailout info which is filled as part of processing that block, thus we don't need to fill it here.
if (isCheckAuxBailoutNeeded && !isInstrRemoved && instr->HasAuxBailOut() && !instr->HasBailOutInfo())
{
if (instr->GetBailOutInfo() == oldBailOutInfo)
{
instr->PromoteAuxBailOut();
FillBailOutInfo(block, instr);
}
else
{
AssertMsg(instr->GetBailOutInfo(), "With aux bailout, the bailout info should not be removed by OptInstr.");
}
}
} NEXT_INSTR_IN_BLOCK_EDITING;
GOPT_TRACE_BLOCK(block, false);
if (block->loop)
{
if (IsLoopPrePass())
{
if (DoBoundCheckHoist())
{
DetectUnknownChangesToInductionVariables(&block->globOptData);
}
}
else
{
isPerformingLoopBackEdgeCompensation = true;
Assert(this->tempBv->IsEmpty());
BVSparse<JitArenaAllocator> tempBv2(this->tempAlloc);
// On loop back-edges, we need to restore the state of the type specialized
// symbols to that of the loop header.
FOREACH_SUCCESSOR_BLOCK(succ, block)
{
if (succ->isLoopHeader && succ->loop->IsDescendentOrSelf(block->loop))
{
BVSparse<JitArenaAllocator> *liveOnBackEdge = block->loop->regAlloc.liveOnBackEdgeSyms;
liveOnBackEdge->Or(block->loop->fieldPRESymStores);
this->tempBv->Minus(block->loop->varSymsOnEntry, block->globOptData.liveVarSyms);
this->tempBv->And(liveOnBackEdge);
this->ToVar(this->tempBv, block);
// Lossy int in the loop header, and no int on the back-edge - need a lossy conversion to int
this->tempBv->Minus(block->loop->lossyInt32SymsOnEntry, block->globOptData.liveInt32Syms);
this->tempBv->And(liveOnBackEdge);
this->ToInt32(this->tempBv, block, true /* lossy */);
// Lossless int in the loop header, and no lossless int on the back-edge - need a lossless conversion to int
this->tempBv->Minus(block->loop->int32SymsOnEntry, block->loop->lossyInt32SymsOnEntry);
tempBv2.Minus(block->globOptData.liveInt32Syms, block->globOptData.liveLossyInt32Syms);
this->tempBv->Minus(&tempBv2);
this->tempBv->And(liveOnBackEdge);
this->ToInt32(this->tempBv, block, false /* lossy */);
this->tempBv->Minus(block->loop->float64SymsOnEntry, block->globOptData.liveFloat64Syms);
this->tempBv->And(liveOnBackEdge);
this->ToFloat64(this->tempBv, block);
// For ints and floats, go aggressive and type specialize in the landing pad any symbol which was specialized on
// entry to the loop body (in the loop header), and is still specialized on this tail, but wasn't specialized in
// the landing pad.
// Lossy int in the loop header and no int in the landing pad - need a lossy conversion to int
// (entry.lossyInt32 - landingPad.int32)
this->tempBv->Minus(block->loop->lossyInt32SymsOnEntry, block->loop->landingPad->globOptData.liveInt32Syms);
this->tempBv->And(liveOnBackEdge);
this->ToInt32(this->tempBv, block->loop->landingPad, true /* lossy */);
// Lossless int in the loop header, and no lossless int in the landing pad - need a lossless conversion to int
// ((entry.int32 - entry.lossyInt32) - (landingPad.int32 - landingPad.lossyInt32))
this->tempBv->Minus(block->loop->int32SymsOnEntry, block->loop->lossyInt32SymsOnEntry);
tempBv2.Minus(
block->loop->landingPad->globOptData.liveInt32Syms,
block->loop->landingPad->globOptData.liveLossyInt32Syms);
this->tempBv->Minus(&tempBv2);
this->tempBv->And(liveOnBackEdge);
this->ToInt32(this->tempBv, block->loop->landingPad, false /* lossy */);
// ((entry.float64 - landingPad.float64) & block.float64)
this->tempBv->Minus(block->loop->float64SymsOnEntry, block->loop->landingPad->globOptData.liveFloat64Syms);
this->tempBv->And(block->globOptData.liveFloat64Syms);
this->tempBv->And(liveOnBackEdge);
this->ToFloat64(this->tempBv, block->loop->landingPad);
if (block->loop->symsRequiringCompensationToMergedValueInfoMap)
{
InsertValueCompensation(block, succ, block->loop->symsRequiringCompensationToMergedValueInfoMap);
}
// Now that we're done with the liveFields within this loop, trim the set to those syms
// that the backward pass told us were live out of the loop.
// This assumes we have no further need of the liveFields within the loop.
if (block->loop->liveOutFields)
{
block->globOptData.liveFields->And(block->loop->liveOutFields);
}
}
} NEXT_SUCCESSOR_BLOCK;
this->tempBv->ClearAll();
isPerformingLoopBackEdgeCompensation = false;
}
}
block->PathDepBranchFolding(this);
#if DBG
// The set of live lossy int32 syms should be a subset of all live int32 syms
this->tempBv->And(block->globOptData.liveInt32Syms, block->globOptData.liveLossyInt32Syms);
Assert(this->tempBv->Count() == block->globOptData.liveLossyInt32Syms->Count());
// The set of live lossy int32 syms should be a subset of live var or float syms (var or float sym containing the lossless
// value of the sym should be live)
this->tempBv->Or(block->globOptData.liveVarSyms, block->globOptData.liveFloat64Syms);
this->tempBv->And(block->globOptData.liveLossyInt32Syms);
Assert(this->tempBv->Count() == block->globOptData.liveLossyInt32Syms->Count());
this->tempBv->ClearAll();
Assert(this->currentBlock == block);
#endif
}
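// The loop back-edge compensation above is bit-vector set algebra. Writing
// "entry.X" for the syms live as X at the loop header and "tail.X" for those
// live in this tail block, the conversions inserted on the back edge are
// (sketch, not code from this file):
//     toVar      = (entry.var      - tail.var)      & liveOnBackEdge
//     toLossyInt = (entry.lossyInt - tail.int32)    & liveOnBackEdge
//     toInt      = ((entry.int32 - entry.lossyInt)
//                   - (tail.int32 - tail.lossyInt)) & liveOnBackEdge
//     toFloat64  = (entry.float64  - tail.float64)  & liveOnBackEdge
// with analogous (entry - landingPad) differences used to pre-specialize
// syms in the landing pad.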
void
GlobOpt::OptLoops(Loop *loop)
{
Assert(loop != nullptr);
#if DBG
if (Js::Configuration::Global.flags.Trace.IsEnabled(Js::FieldCopyPropPhase) &&
!DoFunctionFieldCopyProp() && DoFieldCopyProp(loop))
{
Output::Print(_u("TRACE: CanDoFieldCopyProp Loop: "));
this->func->DumpFullFunctionName();
uint loopNumber = loop->GetLoopNumber();
Assert(loopNumber != Js::LoopHeader::NoLoop);
Output::Print(_u(" Loop: %d\n"), loopNumber);
}
#endif
Loop *previousLoop = this->prePassLoop;
this->prePassLoop = loop;
if (previousLoop == nullptr)
{
Assert(this->rootLoopPrePass == nullptr);
this->rootLoopPrePass = loop;
this->prePassInstrMap->Clear();
if (loop->parent == nullptr)
{
// Outer most loop...
this->prePassCopyPropSym->ClearAll();
}
}
Assert(loop->symsAssignedToInLoop != nullptr);
if (loop->symsUsedBeforeDefined == nullptr)
{
loop->symsUsedBeforeDefined = JitAnew(alloc, BVSparse<JitArenaAllocator>, this->alloc);
loop->likelyIntSymsUsedBeforeDefined = JitAnew(alloc, BVSparse<JitArenaAllocator>, this->alloc);
loop->likelyNumberSymsUsedBeforeDefined = JitAnew(alloc, BVSparse<JitArenaAllocator>, this->alloc);
loop->forceFloat64SymsOnEntry = JitAnew(this->alloc, BVSparse<JitArenaAllocator>, this->alloc);
loop->symsDefInLoop = JitAnew(this->alloc, BVSparse<JitArenaAllocator>, this->alloc);
loop->fieldKilled = JitAnew(alloc, BVSparse<JitArenaAllocator>, this->alloc);
loop->fieldPRESymStores = JitAnew(alloc, BVSparse<JitArenaAllocator>, this->alloc);
loop->allFieldsKilled = false;
}
else
{
loop->symsUsedBeforeDefined->ClearAll();
loop->likelyIntSymsUsedBeforeDefined->ClearAll();
loop->likelyNumberSymsUsedBeforeDefined->ClearAll();
loop->forceFloat64SymsOnEntry->ClearAll();
loop->symsDefInLoop->ClearAll();
loop->fieldKilled->ClearAll();
loop->allFieldsKilled = false;
loop->initialValueFieldMap.Reset();
}
FOREACH_BLOCK_IN_LOOP(block, loop)
{
block->SetDataUseCount(block->GetSuccList()->Count());
OptBlock(block);
} NEXT_BLOCK_IN_LOOP;
if (previousLoop == nullptr)
{
Assert(this->rootLoopPrePass == loop);
this->rootLoopPrePass = nullptr;
}
this->prePassLoop = previousLoop;
}
void
GlobOpt::TailDupPass()
{
FOREACH_LOOP_IN_FUNC_EDITING(loop, this->func)
{
BasicBlock* header = loop->GetHeadBlock();
BasicBlock* loopTail = nullptr;
FOREACH_PREDECESSOR_BLOCK(pred, header)
{
if (loop->IsDescendentOrSelf(pred->loop))
{
loopTail = pred;
break;
}
} NEXT_PREDECESSOR_BLOCK;
if (loopTail)
{
AssertMsg(loopTail->GetLastInstr()->IsBranchInstr(), "LastInstr of loop should always be a branch no?");
if (!loopTail->GetPredList()->HasOne())
{
TryTailDup(loopTail->GetLastInstr()->AsBranchInstr());
}
}
} NEXT_LOOP_IN_FUNC_EDITING;
}
bool
GlobOpt::TryTailDup(IR::BranchInstr *tailBranch)
{
if (PHASE_OFF(Js::TailDupPhase, tailBranch->m_func->GetTopFunc()))
{
return false;
}
if (tailBranch->IsConditional())
{
return false;
}
IR::Instr *instr;
uint instrCount = 0;
for (instr = tailBranch->GetPrevRealInstrOrLabel(); !instr->IsLabelInstr(); instr = instr->GetPrevRealInstrOrLabel())
{
if (instr->HasBailOutInfo())
{
break;
}
if (!OpCodeAttr::CanCSE(instr->m_opcode))
{
// Consider: We could be more aggressive here
break;
}
instrCount++;
if (instrCount > 1)
{
// Consider: If copy handled single-def tmps renaming, we could do more instrs
break;
}
}
if (!instr->IsLabelInstr())
{
return false;
}
IR::LabelInstr *mergeLabel = instr->AsLabelInstr();
IR::Instr *mergeLabelPrev = mergeLabel->m_prev;
// Skip unreferenced labels
while (mergeLabelPrev->IsLabelInstr() && mergeLabelPrev->AsLabelInstr()->labelRefs.Empty())
{
mergeLabelPrev = mergeLabelPrev->m_prev;
}
BasicBlock* labelBlock = mergeLabel->GetBasicBlock();
uint origPredCount = labelBlock->GetPredList()->Count();
uint dupCount = 0;
// We are good to go. Let's do the tail duplication.
FOREACH_SLISTCOUNTED_ENTRY_EDITING(IR::BranchInstr*, branchEntry, &mergeLabel->labelRefs, iter)
{
if (branchEntry->IsUnconditional() && !branchEntry->IsMultiBranch() && branchEntry != mergeLabelPrev && branchEntry != tailBranch)
{
for (instr = mergeLabel->m_next; instr != tailBranch; instr = instr->m_next)
{
branchEntry->InsertBefore(instr->Copy());
}
instr = branchEntry;
branchEntry->ReplaceTarget(mergeLabel, tailBranch->GetTarget());
while(!instr->IsLabelInstr())
{
instr = instr->m_prev;
}
BasicBlock* branchBlock = instr->AsLabelInstr()->GetBasicBlock();
labelBlock->RemovePred(branchBlock, func->m_fg);
func->m_fg->AddEdge(branchBlock, tailBranch->GetTarget()->GetBasicBlock());
dupCount++;
}
} NEXT_SLISTCOUNTED_ENTRY_EDITING;
// If we've duplicated everywhere, tail block is dead and should be removed.
if (dupCount == origPredCount)
{
AssertMsg(mergeLabel->labelRefs.Empty(), "Should not remove block with referenced label.");
func->m_fg->RemoveBlock(labelBlock, nullptr, true);
}
return true;
}
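// Shape of the transformation (hypothetical labels, sketch only):
//
//   before:                          after TryTailDup:
//     B1: ...  Br $merge               B1: ...  t = <cse-able instr>  Br $target
//     B2: ...  Br $merge               B2: ...  t = <cse-able instr>  Br $target
//     $merge:  t = <cse-able instr>    ($merge block removed if every
//              Br $target               predecessor was duplicated into)
//
// Only a single CSE-safe, non-bailing instruction is duplicated (see the
// instrCount check above), keeping code growth bounded.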
void
GlobOpt::ToVar(BVSparse<JitArenaAllocator> *bv, BasicBlock *block, IR::Instr* insertBeforeInstr /* = nullptr */)
{
FOREACH_BITSET_IN_SPARSEBV(id, bv)
{
StackSym *stackSym = this->func->m_symTable->FindStackSym(id);
IR::RegOpnd *newOpnd = IR::RegOpnd::New(stackSym, TyVar, this->func);
IR::Instr* lastInstr = block->GetLastInstr();
if (insertBeforeInstr != nullptr)
{
this->ToVar(insertBeforeInstr, newOpnd, block, nullptr, false);
}
else if (lastInstr->IsBranchInstr() || lastInstr->m_opcode == Js::OpCode::BailTarget)
{
// If branch is using this symbol, hoist the operand as the ToVar load will get
// inserted right before the branch.
IR::Opnd *src1 = lastInstr->GetSrc1();
if (src1)
{
if (src1->IsRegOpnd() && src1->AsRegOpnd()->m_sym == stackSym)
{
lastInstr->HoistSrc1(Js::OpCode::Ld_A);
}
IR::Opnd *src2 = lastInstr->GetSrc2();
if (src2)
{
if (src2->IsRegOpnd() && src2->AsRegOpnd()->m_sym == stackSym)
{
lastInstr->HoistSrc2(Js::OpCode::Ld_A);
}
}
}
this->ToVar(lastInstr, newOpnd, block, nullptr, false);
}
else
{
IR::Instr *lastNextInstr = lastInstr->m_next;
this->ToVar(lastNextInstr, newOpnd, block, nullptr, false);
}
} NEXT_BITSET_IN_SPARSEBV;
}
void
GlobOpt::ToInt32(BVSparse<JitArenaAllocator> *bv, BasicBlock *block, bool lossy, IR::Instr *insertBeforeInstr)
{
return this->ToTypeSpec(bv, block, TyInt32, IR::BailOutIntOnly, lossy, insertBeforeInstr);
}
void
GlobOpt::ToFloat64(BVSparse<JitArenaAllocator> *bv, BasicBlock *block)
{
return this->ToTypeSpec(bv, block, TyFloat64, IR::BailOutNumberOnly);
}
void
GlobOpt::ToTypeSpec(BVSparse<JitArenaAllocator> *bv, BasicBlock *block, IRType toType, IR::BailOutKind bailOutKind, bool lossy, IR::Instr *insertBeforeInstr)
{
FOREACH_BITSET_IN_SPARSEBV(id, bv)
{
StackSym *stackSym = this->func->m_symTable->FindStackSym(id);
IRType fromType = TyIllegal;
// Win8 bug: 757126. If we are trying to type specialize the arguments object,
// let's make sure stack args optimization is not enabled. This is a problem, particularly,
// if the instruction comes from an unreachable block. In other cases, the pass on the
// instruction itself should disable arguments object optimization.
if(block->globOptData.argObjSyms && block->globOptData.IsArgumentsSymID(id))
{
CannotAllocateArgumentsObjectOnStack(nullptr);
}
if (block->globOptData.liveVarSyms->Test(id))
{
fromType = TyVar;
}
else if (block->globOptData.liveInt32Syms->Test(id) && !block->globOptData.liveLossyInt32Syms->Test(id))
{
fromType = TyInt32;
stackSym = stackSym->GetInt32EquivSym(this->func);
}
else if (block->globOptData.liveFloat64Syms->Test(id))
{
fromType = TyFloat64;
stackSym = stackSym->GetFloat64EquivSym(this->func);
}
else
{
Assert(UNREACHED);
}
IR::RegOpnd *newOpnd = IR::RegOpnd::New(stackSym, fromType, this->func);
this->ToTypeSpecUse(nullptr, newOpnd, block, nullptr, nullptr, toType, bailOutKind, lossy, insertBeforeInstr);
} NEXT_BITSET_IN_SPARSEBV;
}
void GlobOpt::PRE::FindPossiblePRECandidates(Loop *loop, JitArenaAllocator *alloc)
{
// Find the set of PRE candidates
BasicBlock *loopHeader = loop->GetHeadBlock();
PRECandidates *candidates = nullptr;
bool firstBackEdge = true;
FOREACH_PREDECESSOR_BLOCK(blockPred, loopHeader)
{
if (!loop->IsDescendentOrSelf(blockPred->loop))
{
// Not a loop back-edge
continue;
}
if (firstBackEdge)
{
candidates = this->globOpt->FindBackEdgePRECandidates(blockPred, alloc);
}
else
{
blockPred->globOptData.RemoveUnavailableCandidates(candidates);
}
} NEXT_PREDECESSOR_BLOCK;
this->candidates = candidates;
}
BOOL GlobOpt::PRE::PreloadPRECandidate(Loop *loop, GlobHashBucket* candidate)
{
// Insert a load for each field PRE candidate.
PropertySym *propertySym = candidate->value->AsPropertySym();
if (!candidates->candidatesToProcess->TestAndClear(propertySym->m_id))
{
return false;
}
Value * propSymValueOnBackEdge = candidate->element;
StackSym *objPtrSym = propertySym->m_stackSym;
Sym * objPtrCopyPropSym = nullptr;
if (!loop->landingPad->globOptData.IsLive(objPtrSym))
{
if (PHASE_OFF(Js::MakeObjSymLiveInLandingPadPhase, this->globOpt->func))
{
return false;
}
if (objPtrSym->IsSingleDef())
{
// We can still try to do PRE if the object sym is single def, even if it's not live in the landing pad.
// We'll have to add a def instruction for the object sym in the landing pad, and then we can continue
// pre-loading the current PRE candidate.
// Case in point:
// $L1
// value|symStore
// t1 = o.x (v1|t3)
// t2 = t1.y (v2|t4) <-- t1 is not live in the loop landing pad
// jmp $L1
if (!InsertSymDefinitionInLandingPad(objPtrSym, loop, &objPtrCopyPropSym))
{
#if DBG_DUMP
TraceFailedPreloadInLandingPad(loop, propertySym, _u("Failed to insert load of object sym in landing pad"));
#endif
return false;
}
}
else
{
#if DBG_DUMP
TraceFailedPreloadInLandingPad(loop, propertySym, _u("Object sym not live in landing pad and not single-def"));
#endif
return false;
}
}
Assert(loop->landingPad->globOptData.IsLive(objPtrSym));
BasicBlock *landingPad = loop->landingPad;
Sym *symStore = propSymValueOnBackEdge->GetValueInfo()->GetSymStore();
// The symStore can't be live into the loop
// The symStore needs to still have the same value
Assert(symStore && symStore->IsStackSym());
if (loop->landingPad->globOptData.IsLive(symStore))
{
// May have already been hoisted:
// o.x = t1;
// o.y = t1;
return false;
}
Value *landingPadValue = landingPad->globOptData.FindValue(propertySym);
// The value should have been added as an initial value, or should already be there.
Assert(landingPadValue);
IR::Instr * ldInstrInLoop = this->globOpt->prePassInstrMap->Lookup(propertySym->m_id, nullptr);
Assert(ldInstrInLoop);
Assert(ldInstrInLoop->GetDst() == nullptr);
// Create instr to put in landing pad for compensation
Assert(IsPREInstrCandidateLoad(ldInstrInLoop->m_opcode));
IR::Instr * ldInstr = InsertPropertySymPreloadInLandingPad(ldInstrInLoop, loop, propertySym);
if (!ldInstr)
{
return false;
}
Assert(ldInstr->GetDst() == nullptr);
ldInstr->SetDst(IR::RegOpnd::New(symStore->AsStackSym(), TyVar, this->globOpt->func));
loop->fieldPRESymStores->Set(symStore->m_id);
landingPad->globOptData.liveVarSyms->Set(symStore->m_id);
Value * objPtrValue = landingPad->globOptData.FindValue(objPtrSym);
objPtrCopyPropSym = objPtrCopyPropSym ? objPtrCopyPropSym : objPtrValue ? landingPad->globOptData.GetCopyPropSym(objPtrSym, objPtrValue) : nullptr;
if (objPtrCopyPropSym)
{
// If we inserted T4 = T1.y, and T3 is the copy prop sym for T1 in the landing pad, we need T3.y
// to be live on back edges to have the merge produce a value for T3.y. Having a value for T1.y
// produced from the merge is not enough as the T1.y in the loop will get obj-ptr-copy-propped to
// T3.y
PropertySym *newPropSym = PropertySym::FindOrCreate(
objPtrCopyPropSym->m_id, propertySym->m_propertyId, propertySym->GetPropertyIdIndex(), propertySym->GetInlineCacheIndex(), propertySym->m_fieldKind, this->globOpt->func);
if (!landingPad->globOptData.FindValue(newPropSym))
{
landingPad->globOptData.SetValue(landingPadValue, newPropSym);
landingPad->globOptData.liveFields->Set(newPropSym->m_id);
MakePropertySymLiveOnBackEdges(newPropSym, loop, propSymValueOnBackEdge);
}
}
ValueType valueType(ValueType::Uninitialized);
Value *initialValue = nullptr;
if (loop->initialValueFieldMap.TryGetValue(propertySym, &initialValue))
{
if (ldInstr->IsProfiledInstr())
{
if (initialValue->GetValueNumber() == propSymValueOnBackEdge->GetValueNumber())
{
if (propSymValueOnBackEdge->GetValueInfo()->IsUninitialized())
{
valueType = ldInstr->AsProfiledInstr()->u.FldInfo().valueType;
}
else
{
valueType = propSymValueOnBackEdge->GetValueInfo()->Type();
}
}
else
{
valueType = ValueType::Uninitialized;
}
ldInstr->AsProfiledInstr()->u.FldInfo().valueType = valueType;
}
}
else
{
valueType = landingPadValue->GetValueInfo()->Type();
}
loop->symsUsedBeforeDefined->Set(symStore->m_id);
if (valueType.IsLikelyNumber())
{
loop->likelyNumberSymsUsedBeforeDefined->Set(symStore->m_id);
if (globOpt->DoAggressiveIntTypeSpec() ? valueType.IsLikelyInt() : valueType.IsInt())
{
// Can only force int conversions in the landing pad based on likely-int values if aggressive int type
// specialization is enabled
loop->likelyIntSymsUsedBeforeDefined->Set(symStore->m_id);
}
}
#if DBG_DUMP
if (Js::Configuration::Global.flags.Trace.IsEnabled(Js::FieldPREPhase, this->globOpt->func->GetSourceContextId(), this->globOpt->func->GetLocalFunctionId()))
{
Output::Print(_u("** TRACE: Field PRE: field pre-loaded in landing pad of loop head #%-3d: "), loop->GetHeadBlock()->GetBlockNum());
ldInstr->Dump();
Output::Print(_u("\n"));
Output::Flush();
}
#endif
return true;
}
void GlobOpt::PRE::PreloadPRECandidates(Loop *loop)
{
// Insert loads in landing pad for field PRE candidates. Iterate while(changed)
// for the o.x.y cases.
BOOL changed = true;
if (!candidates || !candidates->candidatesList)
{
return;
}
Assert(loop->landingPad->GetFirstInstr() == loop->landingPad->GetLastInstr());
while (changed)
{
changed = false;
FOREACH_SLIST_ENTRY_EDITING(GlobHashBucket*, candidate, (SList<GlobHashBucket*>*)candidates->candidatesList, iter)
{
if (this->PreloadPRECandidate(loop, candidate))
{
changed = true;
iter.RemoveCurrent();
}
if (PHASE_TRACE(Js::FieldPREPhase, this->globOpt->func))
{
Output::Print(_u("============================\n"));
Output::Flush();
}
} NEXT_SLIST_ENTRY_EDITING;
}
}
void GlobOpt::FieldPRE(Loop *loop)
{
if (!DoFieldPRE(loop))
{
return;
}
GlobOpt::PRE pre(this);
pre.FieldPRE(loop);
}
void GlobOpt::InsertValueCompensation(
BasicBlock *const predecessor,
BasicBlock *const successor,
const SymToValueInfoMap *symsRequiringCompensationToMergedValueInfoMap)
{
Assert(predecessor);
Assert(successor);
AssertOrFailFast(predecessor != successor);
Assert(symsRequiringCompensationToMergedValueInfoMap->Count() != 0);
IR::Instr *insertBeforeInstr = predecessor->GetLastInstr();
Func *const func = insertBeforeInstr->m_func;
bool setLastInstrInPredecessor;
// If this is a loop back edge, and the successor has been completed, don't attempt to update its block data.
// The update is unnecessary, and the data has likely been freed.
bool updateSuccessorBlockData = !this->isPerformingLoopBackEdgeCompensation || successor->GetDataUseCount() > 0;
if(insertBeforeInstr->IsBranchInstr() || insertBeforeInstr->m_opcode == Js::OpCode::BailTarget)
{
// Don't insert code between the branch and the corresponding ByteCodeUses instructions
while(insertBeforeInstr->m_prev->m_opcode == Js::OpCode::ByteCodeUses)
{
insertBeforeInstr = insertBeforeInstr->m_prev;
}
setLastInstrInPredecessor = false;
}
else
{
// Insert at the end of the block and set the last instruction
Assert(insertBeforeInstr->m_next);
insertBeforeInstr = insertBeforeInstr->m_next; // Instruction after the last instruction in the predecessor
setLastInstrInPredecessor = true;
}
GlobOptBlockData &predecessorBlockData = predecessor->globOptData;
GlobOptBlockData &successorBlockData = successor->globOptData;
struct DelayChangeValueInfo
{
Value* predecessorValue;
ArrayValueInfo* valueInfo;
void ChangeValueInfo(BasicBlock* predecessor, GlobOpt* g)
{
g->ChangeValueInfo(
predecessor,
predecessorValue,
valueInfo,
false /*allowIncompatibleType*/,
true /*compensated*/);
}
};
JsUtil::List<DelayChangeValueInfo, ArenaAllocator> delayChangeValueInfo(alloc);
for(auto it = symsRequiringCompensationToMergedValueInfoMap->GetIterator(); it.IsValid(); it.MoveNext())
{
const auto &entry = it.Current();
Sym *const sym = entry.Key();
Value *const predecessorValue = predecessorBlockData.FindValue(sym);
Assert(predecessorValue);
ValueInfo *const predecessorValueInfo = predecessorValue->GetValueInfo();
// Currently, array value infos are the only ones that require compensation based on values
Assert(predecessorValueInfo->IsAnyOptimizedArray());
const ArrayValueInfo *const predecessorArrayValueInfo = predecessorValueInfo->AsArrayValueInfo();
StackSym *const predecessorHeadSegmentSym = predecessorArrayValueInfo->HeadSegmentSym();
StackSym *const predecessorHeadSegmentLengthSym = predecessorArrayValueInfo->HeadSegmentLengthSym();
StackSym *const predecessorLengthSym = predecessorArrayValueInfo->LengthSym();
ValueInfo *const mergedValueInfo = entry.Value();
const ArrayValueInfo *const mergedArrayValueInfo = mergedValueInfo->AsArrayValueInfo();
StackSym *const mergedHeadSegmentSym = mergedArrayValueInfo->HeadSegmentSym();
StackSym *const mergedHeadSegmentLengthSym = mergedArrayValueInfo->HeadSegmentLengthSym();
StackSym *const mergedLengthSym = mergedArrayValueInfo->LengthSym();
Assert(!mergedHeadSegmentSym || predecessorHeadSegmentSym);
Assert(!mergedHeadSegmentLengthSym || predecessorHeadSegmentLengthSym);
Assert(!mergedLengthSym || predecessorLengthSym);
bool compensated = false;
if(mergedHeadSegmentSym && predecessorHeadSegmentSym != mergedHeadSegmentSym)
{
IR::Instr *const newInstr =
IR::Instr::New(
Js::OpCode::Ld_A,
IR::RegOpnd::New(mergedHeadSegmentSym, mergedHeadSegmentSym->GetType(), func),
IR::RegOpnd::New(predecessorHeadSegmentSym, predecessorHeadSegmentSym->GetType(), func),
func);
newInstr->GetDst()->SetIsJITOptimizedReg(true);
newInstr->GetSrc1()->SetIsJITOptimizedReg(true);
newInstr->SetByteCodeOffset(insertBeforeInstr);
insertBeforeInstr->InsertBefore(newInstr);
compensated = true;
}
if(mergedHeadSegmentLengthSym && predecessorHeadSegmentLengthSym != mergedHeadSegmentLengthSym)
{
IR::Instr *const newInstr =
IR::Instr::New(
Js::OpCode::Ld_A,
IR::RegOpnd::New(mergedHeadSegmentLengthSym, mergedHeadSegmentLengthSym->GetType(), func),
IR::RegOpnd::New(predecessorHeadSegmentLengthSym, predecessorHeadSegmentLengthSym->GetType(), func),
func);
newInstr->GetDst()->SetIsJITOptimizedReg(true);
newInstr->GetSrc1()->SetIsJITOptimizedReg(true);
newInstr->SetByteCodeOffset(insertBeforeInstr);
insertBeforeInstr->InsertBefore(newInstr);
compensated = true;
// Merge the head segment length value
Assert(predecessorBlockData.liveVarSyms->Test(predecessorHeadSegmentLengthSym->m_id));
predecessorBlockData.liveVarSyms->Set(mergedHeadSegmentLengthSym->m_id);
Value *const predecessorHeadSegmentLengthValue =
predecessorBlockData.FindValue(predecessorHeadSegmentLengthSym);
Assert(predecessorHeadSegmentLengthValue);
predecessorBlockData.SetValue(predecessorHeadSegmentLengthValue, mergedHeadSegmentLengthSym);
if (updateSuccessorBlockData)
{
successorBlockData.liveVarSyms->Set(mergedHeadSegmentLengthSym->m_id);
Value *const mergedHeadSegmentLengthValue = successorBlockData.FindValue(mergedHeadSegmentLengthSym);
if(mergedHeadSegmentLengthValue)
{
Assert(mergedHeadSegmentLengthValue->GetValueNumber() != predecessorHeadSegmentLengthValue->GetValueNumber());
if(predecessorHeadSegmentLengthValue->GetValueInfo() != mergedHeadSegmentLengthValue->GetValueInfo())
{
mergedHeadSegmentLengthValue->SetValueInfo(
ValueInfo::MergeLikelyIntValueInfo(
this->alloc,
mergedHeadSegmentLengthValue,
predecessorHeadSegmentLengthValue,
mergedHeadSegmentLengthValue->GetValueInfo()->Type()
.Merge(predecessorHeadSegmentLengthValue->GetValueInfo()->Type())));
}
}
else
{
successorBlockData.SetValue(CopyValue(predecessorHeadSegmentLengthValue), mergedHeadSegmentLengthSym);
}
}
}
if(mergedLengthSym && predecessorLengthSym != mergedLengthSym)
{
IR::Instr *const newInstr =
IR::Instr::New(
Js::OpCode::Ld_I4,
IR::RegOpnd::New(mergedLengthSym, mergedLengthSym->GetType(), func),
IR::RegOpnd::New(predecessorLengthSym, predecessorLengthSym->GetType(), func),
func);
newInstr->GetDst()->SetIsJITOptimizedReg(true);
newInstr->GetSrc1()->SetIsJITOptimizedReg(true);
newInstr->SetByteCodeOffset(insertBeforeInstr);
insertBeforeInstr->InsertBefore(newInstr);
compensated = true;
// Merge the length value
Assert(predecessorBlockData.liveVarSyms->Test(predecessorLengthSym->m_id));
predecessorBlockData.liveVarSyms->Set(mergedLengthSym->m_id);
Value *const predecessorLengthValue = predecessorBlockData.FindValue(predecessorLengthSym);
Assert(predecessorLengthValue);
predecessorBlockData.SetValue(predecessorLengthValue, mergedLengthSym);
if (updateSuccessorBlockData)
{
successorBlockData.liveVarSyms->Set(mergedLengthSym->m_id);
Value *const mergedLengthValue = successorBlockData.FindValue(mergedLengthSym);
if(mergedLengthValue)
{
Assert(mergedLengthValue->GetValueNumber() != predecessorLengthValue->GetValueNumber());
if(predecessorLengthValue->GetValueInfo() != mergedLengthValue->GetValueInfo())
{
mergedLengthValue->SetValueInfo(
ValueInfo::MergeLikelyIntValueInfo(
this->alloc,
mergedLengthValue,
predecessorLengthValue,
mergedLengthValue->GetValueInfo()->Type().Merge(predecessorLengthValue->GetValueInfo()->Type())));
}
}
else
{
successorBlockData.SetValue(CopyValue(predecessorLengthValue), mergedLengthSym);
}
}
}
if(compensated)
{
// Save the new ValueInfo for later.
// We don't want other symbols needing compensation to see this new one
delayChangeValueInfo.Add({
predecessorValue,
ArrayValueInfo::New(
alloc,
predecessorValueInfo->Type(),
mergedHeadSegmentSym ? mergedHeadSegmentSym : predecessorHeadSegmentSym,
mergedHeadSegmentLengthSym ? mergedHeadSegmentLengthSym : predecessorHeadSegmentLengthSym,
mergedLengthSym ? mergedLengthSym : predecessorLengthSym,
predecessorValueInfo->GetSymStore())
});
}
}
// Once we've compensated all the symbols, update the new ValueInfo.
delayChangeValueInfo.Map([predecessor, this](int, DelayChangeValueInfo d) { d.ChangeValueInfo(predecessor, this); });
if(setLastInstrInPredecessor)
{
predecessor->SetLastInstr(insertBeforeInstr->m_prev);
}
}
bool
GlobOpt::AreFromSameBytecodeFunc(IR::RegOpnd const* src1, IR::RegOpnd const* dst) const
{
Assert(this->func->m_symTable->FindStackSym(src1->m_sym->m_id) == src1->m_sym);
Assert(this->func->m_symTable->FindStackSym(dst->m_sym->m_id) == dst->m_sym);
if (dst->m_sym->HasByteCodeRegSlot() && src1->m_sym->HasByteCodeRegSlot())
{
return src1->m_sym->GetByteCodeFunc() == dst->m_sym->GetByteCodeFunc();
}
return false;
}
/*
 * This supports scope object removal as part of the Heap Arguments optimization.
 * We track several instructions to facilitate the removal of the scope object:
 * - LdSlotArr - tracked so that we know the formals array (the dst)
 * - InlineeStart - tracked so that we know the stack syms for the formals of the inlinee.
 */
void
GlobOpt::TrackInstrsForScopeObjectRemoval(IR::Instr * instr)
{
IR::Opnd* dst = instr->GetDst();
IR::Opnd* src1 = instr->GetSrc1();
if (instr->m_opcode == Js::OpCode::Ld_A && src1->IsRegOpnd())
{
AssertMsg(!instr->m_func->IsStackArgsEnabled() || !src1->IsScopeObjOpnd(instr->m_func), "There can be no aliasing for scope object.");
}
// The following is to track formals array for Stack Arguments optimization with Formals
if (instr->m_func->IsStackArgsEnabled() && !this->IsLoopPrePass())
{
if (instr->m_opcode == Js::OpCode::LdSlotArr)
{
if (instr->GetSrc1()->IsScopeObjOpnd(instr->m_func))
{
AssertMsg(!instr->m_func->GetJITFunctionBody()->HasImplicitArgIns(), "No mapping is required in this case. So it should already be generating ArgIns.");
instr->m_func->TrackFormalsArraySym(dst->GetStackSym()->m_id);
}
}
else if (instr->m_opcode == Js::OpCode::InlineeStart)
{
Assert(instr->m_func->IsInlined());
Js::ArgSlot actualsCount = instr->m_func->actualCount - 1;
Js::ArgSlot formalsCount = instr->m_func->GetJITFunctionBody()->GetInParamsCount() - 1;
Func * func = instr->m_func;
Func * inlinerFunc = func->GetParentFunc(); //Inliner's func
IR::Instr * argOutInstr = instr->GetSrc2()->GetStackSym()->GetInstrDef();
// The argout immediately before the InlineeStart will be the ArgOut for NewScObject,
// so we don't want to track the stack sym for this argout; skip it here.
if (instr->m_func->IsInlinedConstructor())
{
// PRE might introduce a second definition for the Src1, so assert on the opcode only when it has a single definition.
Assert(argOutInstr->GetSrc1()->GetStackSym()->GetInstrDef() == nullptr ||
argOutInstr->GetSrc1()->GetStackSym()->GetInstrDef()->m_opcode == Js::OpCode::NewScObjectNoCtor);
argOutInstr = argOutInstr->GetSrc2()->GetStackSym()->GetInstrDef();
}
if (formalsCount < actualsCount)
{
Js::ArgSlot extraActuals = actualsCount - formalsCount;
//Skipping extra actuals passed
for (Js::ArgSlot i = 0; i < extraActuals; i++)
{
argOutInstr = argOutInstr->GetSrc2()->GetStackSym()->GetInstrDef();
}
}
StackSym * undefinedSym = nullptr;
for (Js::ArgSlot param = formalsCount; param > 0; param--)
{
StackSym * argOutSym = nullptr;
if (argOutInstr->GetSrc1())
{
if (argOutInstr->GetSrc1()->IsRegOpnd())
{
argOutSym = argOutInstr->GetSrc1()->GetStackSym();
}
else
{
// We will always have an ArgOut instr, so the source operand will not be removed.
argOutSym = StackSym::New(inlinerFunc);
IR::Opnd * srcOpnd = argOutInstr->GetSrc1();
IR::Opnd * dstOpnd = IR::RegOpnd::New(argOutSym, TyVar, inlinerFunc);
IR::Instr * assignInstr = IR::Instr::New(Js::OpCode::Ld_A, dstOpnd, srcOpnd, inlinerFunc);
instr->InsertBefore(assignInstr);
}
}
Assert(!func->HasStackSymForFormal(param - 1));
if (param <= actualsCount)
{
Assert(argOutSym);
func->TrackStackSymForFormalIndex(param - 1, argOutSym);
argOutInstr = argOutInstr->GetSrc2()->GetStackSym()->GetInstrDef();
}
else
{
/* When param is out of the range of the actuals count, load undefined */
// TODO: saravind: This will insert undefined for each param not having an actual. Clean this up by having a sym for undefined on the func?
Assert(formalsCount > actualsCount);
if (undefinedSym == nullptr)
{
undefinedSym = StackSym::New(inlinerFunc);
IR::Opnd * srcOpnd = IR::AddrOpnd::New(inlinerFunc->GetScriptContextInfo()->GetUndefinedAddr(), IR::AddrOpndKindDynamicMisc, inlinerFunc);
IR::Opnd * dstOpnd = IR::RegOpnd::New(undefinedSym, TyVar, inlinerFunc);
IR::Instr * assignUndefined = IR::Instr::New(Js::OpCode::Ld_A, dstOpnd, srcOpnd, inlinerFunc);
instr->InsertBefore(assignUndefined);
}
func->TrackStackSymForFormalIndex(param - 1, undefinedSym);
}
}
}
}
}
void
GlobOpt::OptArguments(IR::Instr *instr)
{
IR::Opnd* dst = instr->GetDst();
IR::Opnd* src1 = instr->GetSrc1();
IR::Opnd* src2 = instr->GetSrc2();
TrackInstrsForScopeObjectRemoval(instr);
if (!TrackArgumentsObject())
{
return;
}
if (instr->HasAnyLoadHeapArgsOpCode())
{
#ifdef ENABLE_DEBUG_CONFIG_OPTIONS
if (instr->m_func->IsStackArgsEnabled())
{
if (instr->GetSrc1()->IsRegOpnd() && instr->m_func->GetJITFunctionBody()->GetInParamsCount() > 1)
{
StackSym * scopeObjSym = instr->GetSrc1()->GetStackSym();
Assert(scopeObjSym);
Assert(scopeObjSym->GetInstrDef()->m_opcode == Js::OpCode::InitCachedScope || scopeObjSym->GetInstrDef()->m_opcode == Js::OpCode::NewScopeObject);
Assert(instr->m_func->GetScopeObjSym() == scopeObjSym);
if (PHASE_VERBOSE_TRACE1(Js::StackArgFormalsOptPhase))
{
Output::Print(_u("StackArgFormals : %s (%d) :Setting scopeObjSym in forward pass. \n"), instr->m_func->GetJITFunctionBody()->GetDisplayName(), instr->m_func->GetJITFunctionBody()->GetFunctionNumber());
Output::Flush();
}
}
}
#endif
if (instr->m_func->GetJITFunctionBody()->GetInParamsCount() != 1 && !instr->m_func->IsStackArgsEnabled())
{
CannotAllocateArgumentsObjectOnStack(instr->m_func);
}
else
{
CurrentBlockData()->TrackArgumentsSym(dst->AsRegOpnd());
}
return;
}
// Keep track of the arguments object and its aliases.
// LdHeapArguments loads the arguments object, and Ld_A tracks the aliases.
if ((instr->m_opcode == Js::OpCode::Ld_A || instr->m_opcode == Js::OpCode::BytecodeArgOutCapture) && (src1->IsRegOpnd() && CurrentBlockData()->IsArgumentsOpnd(src1)))
{
// In debug mode, we don't want to optimize away the aliases, since we may have to show them during inspection.
if (((!AreFromSameBytecodeFunc(src1->AsRegOpnd(), dst->AsRegOpnd()) || this->currentBlock->loop) && instr->m_opcode != Js::OpCode::BytecodeArgOutCapture) || this->func->IsJitInDebugMode())
{
CannotAllocateArgumentsObjectOnStack(instr->m_func);
return;
}
// Disable stack args if we are aliasing arguments inside try block to a writethrough symbol.
// We don't have precise tracking of these symbols, so bailout couldn't know if it needs to restore arguments object or not after exception
Region* tryRegion = this->currentRegion ? this->currentRegion->GetSelfOrFirstTryAncestor() : nullptr;
if (tryRegion && tryRegion->GetType() == RegionTypeTry &&
tryRegion->writeThroughSymbolsSet &&
tryRegion->writeThroughSymbolsSet->Test(dst->AsRegOpnd()->m_sym->m_id))
{
CannotAllocateArgumentsObjectOnStack(instr->m_func);
return;
}
if(!dst->AsRegOpnd()->GetStackSym()->m_nonEscapingArgObjAlias)
{
CurrentBlockData()->TrackArgumentsSym(dst->AsRegOpnd());
}
return;
}
if (!CurrentBlockData()->TestAnyArgumentsSym())
{
// There are no syms to track yet, don't start tracking arguments sym.
return;
}
// Avoid loop prepass
if (this->currentBlock->loop && this->IsLoopPrePass())
{
return;
}
SymID id = 0;
switch(instr->m_opcode)
{
case Js::OpCode::LdElemI_A:
case Js::OpCode::TypeofElem:
{
Assert(src1->IsIndirOpnd());
IR::RegOpnd *indexOpnd = src1->AsIndirOpnd()->GetIndexOpnd();
if (indexOpnd && CurrentBlockData()->IsArgumentsSymID(indexOpnd->m_sym->m_id))
{
// Pathological test cases such as a[arguments]
CannotAllocateArgumentsObjectOnStack(instr->m_func);
return;
}
IR::RegOpnd *baseOpnd = src1->AsIndirOpnd()->GetBaseOpnd();
id = baseOpnd->m_sym->m_id;
if (CurrentBlockData()->IsArgumentsSymID(id))
{
instr->usesStackArgumentsObject = true;
}
break;
}
case Js::OpCode::LdLen_A:
{
Assert(src1->IsRegOpnd());
if(CurrentBlockData()->IsArgumentsOpnd(src1))
{
instr->usesStackArgumentsObject = true;
}
break;
}
case Js::OpCode::ArgOut_A_InlineBuiltIn:
{
if (CurrentBlockData()->IsArgumentsOpnd(src1))
{
instr->usesStackArgumentsObject = true;
instr->m_func->unoptimizableArgumentsObjReference++;
}
if (CurrentBlockData()->IsArgumentsOpnd(src1) &&
src1->AsRegOpnd()->m_sym->GetInstrDef()->m_opcode == Js::OpCode::BytecodeArgOutCapture)
{
// Apply inlining results in such usage; this is to ignore the sym that is def'd by BytecodeArgOutCapture.
// It's needed because we do not have block-level merging of the arguments object, and this def due to inlining can turn off the stack args opt.
IR::Instr* builtinStart = instr->GetNextRealInstr();
if (builtinStart->m_opcode == Js::OpCode::InlineBuiltInStart)
{
IR::Opnd* builtinOpnd = builtinStart->GetSrc1();
if (builtinStart->GetSrc1()->IsAddrOpnd())
{
Assert(builtinOpnd->AsAddrOpnd()->m_isFunction);
Js::BuiltinFunction builtinFunction = Js::JavascriptLibrary::GetBuiltInForFuncInfo(((FixedFieldInfo*)builtinOpnd->AsAddrOpnd()->m_metadata)->GetLocalFuncId());
if (builtinFunction == Js::BuiltinFunction::JavascriptFunction_Apply)
{
CurrentBlockData()->ClearArgumentsSym(src1->AsRegOpnd());
instr->m_func->unoptimizableArgumentsObjReference--;
}
}
else if (builtinOpnd->IsRegOpnd())
{
if (builtinOpnd->AsRegOpnd()->m_sym->m_builtInIndex == Js::BuiltinFunction::JavascriptFunction_Apply)
{
CurrentBlockData()->ClearArgumentsSym(src1->AsRegOpnd());
instr->m_func->unoptimizableArgumentsObjReference--;
}
}
}
}
break;
}
case Js::OpCode::BailOnNotStackArgs:
case Js::OpCode::ArgOut_A_FromStackArgs:
case Js::OpCode::BytecodeArgOutUse:
{
if (src1 && CurrentBlockData()->IsArgumentsOpnd(src1))
{
instr->usesStackArgumentsObject = true;
}
break;
}
default:
{
// Be super conservative here: if we see the arguments object or any of its aliases being used in any
// other opcode, just don't do this optimization. Revisit this to optimize further if we see any common
// case being missed.
if (src1)
{
if (src1->IsRegOpnd() || src1->IsSymOpnd() || src1->IsIndirOpnd())
{
if (CurrentBlockData()->IsArgumentsOpnd(src1))
{
#ifdef PERF_HINT
if (PHASE_TRACE1(Js::PerfHintPhase))
{
WritePerfHint(PerfHints::HeapArgumentsCreated, instr->m_func, instr->GetByteCodeOffset());
}
#endif
CannotAllocateArgumentsObjectOnStack(instr->m_func);
return;
}
}
}
if (src2)
{
if (src2->IsRegOpnd() || src2->IsSymOpnd() || src2->IsIndirOpnd())
{
if (CurrentBlockData()->IsArgumentsOpnd(src2))
{
#ifdef PERF_HINT
if (PHASE_TRACE1(Js::PerfHintPhase))
{
WritePerfHint(PerfHints::HeapArgumentsCreated, instr->m_func, instr->GetByteCodeOffset());
}
#endif
CannotAllocateArgumentsObjectOnStack(instr->m_func);
return;
}
}
}
// We should look at dst last to correctly handle cases where it's the same as one of the src operands.
if (dst)
{
if (dst->IsIndirOpnd() || dst->IsSymOpnd())
{
if (CurrentBlockData()->IsArgumentsOpnd(dst))
{
#ifdef PERF_HINT
if (PHASE_TRACE1(Js::PerfHintPhase))
{
WritePerfHint(PerfHints::HeapArgumentsModification, instr->m_func, instr->GetByteCodeOffset());
}
#endif
CannotAllocateArgumentsObjectOnStack(instr->m_func);
return;
}
}
else if (dst->IsRegOpnd())
{
if (this->currentBlock->loop && CurrentBlockData()->IsArgumentsOpnd(dst))
{
#ifdef PERF_HINT
if (PHASE_TRACE1(Js::PerfHintPhase))
{
WritePerfHint(PerfHints::HeapArgumentsModification, instr->m_func, instr->GetByteCodeOffset());
}
#endif
CannotAllocateArgumentsObjectOnStack(instr->m_func);
return;
}
CurrentBlockData()->ClearArgumentsSym(dst->AsRegOpnd());
}
}
}
break;
}
return;
}
void
GlobOpt::MarkArgumentsUsedForBranch(IR::Instr * instr)
{
// If it's a conditional branch instruction and the operand used for branching is one of the arguments
// to the function, tag the m_argUsedForBranch of the functionBody so that it can be used later for inlining decisions.
if (instr->IsBranchInstr() && !instr->AsBranchInstr()->IsUnconditional())
{
IR::BranchInstr * bInstr = instr->AsBranchInstr();
IR::Opnd *src1 = bInstr->GetSrc1();
IR::Opnd *src2 = bInstr->GetSrc2();
// This is used because we don't want to rely on src1 or src2 always being the register/constant
IR::RegOpnd *regOpnd = nullptr;
if (!src2 && (instr->m_opcode == Js::OpCode::BrFalse_A || instr->m_opcode == Js::OpCode::BrTrue_A) && src1->IsRegOpnd())
{
regOpnd = src1->AsRegOpnd();
}
// We need to check for (0===arg) and (arg===0); this is especially important since some minifiers
// change all instances of one to the other.
else if (src2 && src2->IsConstOpnd() && src1->IsRegOpnd())
{
regOpnd = src1->AsRegOpnd();
}
else if (src2 && src2->IsRegOpnd() && src1->IsConstOpnd())
{
regOpnd = src2->AsRegOpnd();
}
if (regOpnd != nullptr)
{
if (regOpnd->m_sym->IsSingleDef())
{
IR::Instr * defInst = regOpnd->m_sym->GetInstrDef();
IR::Opnd *defSym = defInst->GetSrc1();
if (defSym && defSym->IsSymOpnd() && defSym->AsSymOpnd()->m_sym->IsStackSym()
&& defSym->AsSymOpnd()->m_sym->AsStackSym()->IsParamSlotSym())
{
uint16 param = defSym->AsSymOpnd()->m_sym->AsStackSym()->GetParamSlotNum();
// We only support functions with up to 13 arguments, to ensure optimal size of callSiteInfo
if (param < Js::Constants::MaximumArgumentCountForConstantArgumentInlining)
{
this->func->GetJITOutput()->SetArgUsedForBranch((uint8)param);
}
}
}
}
}
}
const InductionVariable*
GlobOpt::GetInductionVariable(SymID sym, Loop *loop)
{
if (loop->inductionVariables)
{
for (auto it = loop->inductionVariables->GetIterator(); it.IsValid(); it.MoveNext())
{
InductionVariable* iv = &it.CurrentValueReference();
if (!iv->IsChangeDeterminate() || !iv->IsChangeUnidirectional())
{
continue;
}
if (iv->Sym()->m_id == sym)
{
return iv;
}
}
}
return nullptr;
}
bool
GlobOpt::IsSymIDInductionVariable(SymID sym, Loop *loop)
{
return GetInductionVariable(sym, loop) != nullptr;
}
SymID
GlobOpt::GetVarSymID(StackSym *sym)
{
if (sym && sym->m_type != TyVar)
{
sym = sym->GetVarEquivSym(nullptr);
}
if (!sym)
{
return Js::Constants::InvalidSymID;
}
return sym->m_id;
}
bool
GlobOpt::IsAllowedForMemOpt(IR::Instr* instr, bool isMemset, IR::RegOpnd *baseOpnd, IR::Opnd *indexOpnd)
{
Assert(instr);
if (!baseOpnd || !indexOpnd)
{
return false;
}
Loop* loop = this->currentBlock->loop;
const ValueType baseValueType(baseOpnd->GetValueType());
const ValueType indexValueType(indexOpnd->GetValueType());
// Validate the array and index types
if (
!indexValueType.IsInt() ||
!(
baseValueType.IsTypedIntOrFloatArray() ||
baseValueType.IsArray()
)
)
{
#if DBG_DUMP
wchar indexValueTypeStr[VALUE_TYPE_MAX_STRING_SIZE];
indexValueType.ToString(indexValueTypeStr);
wchar baseValueTypeStr[VALUE_TYPE_MAX_STRING_SIZE];
baseValueType.ToString(baseValueTypeStr);
TRACE_MEMOP_VERBOSE(loop, instr, _u("Index[%s] or Array[%s] value type is invalid"), indexValueTypeStr, baseValueTypeStr);
#endif
return false;
}
// The following is conservative and works around a bug in induction variable analysis.
if (baseOpnd->IsArrayRegOpnd())
{
IR::ArrayRegOpnd *baseArrayOp = baseOpnd->AsArrayRegOpnd();
bool hasBoundChecksRemoved = (
baseArrayOp->EliminatedLowerBoundCheck() &&
baseArrayOp->EliminatedUpperBoundCheck() &&
!instr->extractedUpperBoundCheckWithoutHoisting &&
!instr->loadedArrayHeadSegment &&
!instr->loadedArrayHeadSegmentLength
);
if (!hasBoundChecksRemoved)
{
TRACE_MEMOP_VERBOSE(loop, instr, _u("Missing bounds check optimization"));
return false;
}
}
else
{
return false;
}
if (!baseValueType.IsTypedArray())
{
// Check if the instr can kill the value type of the array
JsArrayKills arrayKills = CheckJsArrayKills(instr);
if (arrayKills.KillsValueType(baseValueType))
{
TRACE_MEMOP_VERBOSE(loop, instr, _u("The array (s%d) can lose its value type"), GetVarSymID(baseOpnd->GetStackSym()));
return false;
}
}
// The base operand must be invariant in the loop
if (!this->OptIsInvariant(baseOpnd, this->currentBlock, loop, CurrentBlockData()->FindValue(baseOpnd->m_sym), false, true))
{
TRACE_MEMOP_VERBOSE(loop, instr, _u("Base (s%d) is not invariant"), GetVarSymID(baseOpnd->GetStackSym()));
return false;
}
// Validate the index
Assert(indexOpnd->GetStackSym());
SymID indexSymID = GetVarSymID(indexOpnd->GetStackSym());
const InductionVariable* iv = GetInductionVariable(indexSymID, loop);
if (!iv)
{
// If the index is not an induction variable return
TRACE_MEMOP_VERBOSE(loop, instr, _u("Index (s%d) is not an induction variable"), indexSymID);
return false;
}
Assert(iv->IsChangeDeterminate() && iv->IsChangeUnidirectional());
const IntConstantBounds & bounds = iv->ChangeBounds();
if (loop->memOpInfo)
{
// Only accept induction variables that change by exactly 1 (increment or decrement) per iteration
Loop::InductionVariableChangeInfo inductionVariableChangeInfo = { 0, 0 };
inductionVariableChangeInfo = loop->memOpInfo->inductionVariableChangeInfoMap->Lookup(indexSymID, inductionVariableChangeInfo);
if (
(bounds.LowerBound() != 1 && bounds.LowerBound() != -1) ||
(bounds.UpperBound() != bounds.LowerBound()) ||
inductionVariableChangeInfo.unroll > 1 // Must be 0 (not seen yet) or 1 (already seen)
)
{
TRACE_MEMOP_VERBOSE(loop, instr, _u("The index does not change by 1: %d><%d, unroll=%d"), bounds.LowerBound(), bounds.UpperBound(), inductionVariableChangeInfo.unroll);
return false;
}
// Check if the index is the same in all MemOp optimizations in this loop
if (!loop->memOpInfo->candidates->Empty())
{
Loop::MemOpCandidate* previousCandidate = loop->memOpInfo->candidates->Head();
// All MemOp operations within the same loop must use the same index
if (previousCandidate->index != indexSymID)
{
TRACE_MEMOP_VERBOSE(loop, instr, _u("The index is not the same as other MemOp in the loop"));
return false;
}
}
}
return true;
}
bool
GlobOpt::CollectMemcopyLdElementI(IR::Instr *instr, Loop *loop)
{
Assert(instr->GetSrc1()->IsIndirOpnd());
IR::IndirOpnd *src1 = instr->GetSrc1()->AsIndirOpnd();
IR::Opnd *indexOpnd = src1->GetIndexOpnd();
IR::RegOpnd *baseOpnd = src1->GetBaseOpnd()->AsRegOpnd();
SymID baseSymID = GetVarSymID(baseOpnd->GetStackSym());
if (!IsAllowedForMemOpt(instr, false, baseOpnd, indexOpnd))
{
return false;
}
SymID inductionSymID = GetVarSymID(indexOpnd->GetStackSym());
Assert(IsSymIDInductionVariable(inductionSymID, loop));
loop->EnsureMemOpVariablesInitialized();
bool isIndexPreIncr = loop->memOpInfo->inductionVariableChangeInfoMap->ContainsKey(inductionSymID);
IR::Opnd * dst = instr->GetDst();
if (!dst->IsRegOpnd() || !dst->AsRegOpnd()->GetStackSym()->IsSingleDef())
{
return false;
}
Loop::MemCopyCandidate* memcopyInfo = JitAnewStruct(this->func->GetTopFunc()->m_fg->alloc, Loop::MemCopyCandidate);
memcopyInfo->ldBase = baseSymID;
memcopyInfo->ldCount = 1;
memcopyInfo->count = 0;
memcopyInfo->bIndexAlreadyChanged = isIndexPreIncr;
memcopyInfo->base = Js::Constants::InvalidSymID; // need to find the matching stElem first
memcopyInfo->index = inductionSymID;
memcopyInfo->transferSym = dst->AsRegOpnd()->GetStackSym();
loop->memOpInfo->candidates->Prepend(memcopyInfo);
return true;
}
bool
GlobOpt::CollectMemsetStElementI(IR::Instr *instr, Loop *loop)
{
Assert(instr->GetDst()->IsIndirOpnd());
IR::IndirOpnd *dst = instr->GetDst()->AsIndirOpnd();
IR::Opnd *indexOp = dst->GetIndexOpnd();
IR::RegOpnd *baseOp = dst->GetBaseOpnd()->AsRegOpnd();
if (!IsAllowedForMemOpt(instr, true, baseOp, indexOp))
{
return false;
}
SymID baseSymID = GetVarSymID(baseOp->GetStackSym());
IR::Opnd *srcDef = instr->GetSrc1();
StackSym *srcSym = nullptr;
if (srcDef->IsRegOpnd())
{
IR::RegOpnd* opnd = srcDef->AsRegOpnd();
if (this->OptIsInvariant(opnd, this->currentBlock, loop, CurrentBlockData()->FindValue(opnd->m_sym), true, true))
{
srcSym = opnd->GetStackSym();
}
}
BailoutConstantValue constant = {TyIllegal, 0};
if (srcDef->IsFloatConstOpnd())
{
constant.InitFloatConstValue(srcDef->AsFloatConstOpnd()->m_value);
}
else if (srcDef->IsIntConstOpnd())
{
constant.InitIntConstValue(srcDef->AsIntConstOpnd()->GetValue(), srcDef->AsIntConstOpnd()->GetType());
}
else if (srcDef->IsAddrOpnd())
{
constant.InitVarConstValue(srcDef->AsAddrOpnd()->m_address);
}
else if(!srcSym)
{
TRACE_MEMOP_PHASE_VERBOSE(MemSet, loop, instr, _u("Source is not an invariant"));
return false;
}
// Process the Index Operand
Assert(indexOp->GetStackSym());
SymID inductionSymID = GetVarSymID(indexOp->GetStackSym());
Assert(IsSymIDInductionVariable(inductionSymID, loop));
loop->EnsureMemOpVariablesInitialized();
bool isIndexPreIncr = loop->memOpInfo->inductionVariableChangeInfoMap->ContainsKey(inductionSymID);
Loop::MemSetCandidate* memsetInfo = JitAnewStruct(this->func->GetTopFunc()->m_fg->alloc, Loop::MemSetCandidate);
memsetInfo->base = baseSymID;
memsetInfo->index = inductionSymID;
memsetInfo->constant = constant;
memsetInfo->srcSym = srcSym;
memsetInfo->count = 1;
memsetInfo->bIndexAlreadyChanged = isIndexPreIncr;
loop->memOpInfo->candidates->Prepend(memsetInfo);
return true;
}
bool GlobOpt::CollectMemcopyStElementI(IR::Instr *instr, Loop *loop)
{
if (!loop->memOpInfo || loop->memOpInfo->candidates->Empty())
{
// There is no ldElem matching this stElem
return false;
}
Assert(instr->GetDst()->IsIndirOpnd());
IR::IndirOpnd *dst = instr->GetDst()->AsIndirOpnd();
IR::Opnd *indexOp = dst->GetIndexOpnd();
IR::RegOpnd *baseOp = dst->GetBaseOpnd()->AsRegOpnd();
SymID baseSymID = GetVarSymID(baseOp->GetStackSym());
if (!instr->GetSrc1()->IsRegOpnd())
{
return false;
}
IR::RegOpnd* src1 = instr->GetSrc1()->AsRegOpnd();
if (!src1->GetIsDead())
{
// This must be the last use of the register.
// It will invalidate `var m = a[i]; b[i] = m;` but this is not a very interesting case.
TRACE_MEMOP_PHASE_VERBOSE(MemCopy, loop, instr, _u("Source (s%d) is still alive after StElemI"), baseSymID);
return false;
}
if (!IsAllowedForMemOpt(instr, false, baseOp, indexOp))
{
return false;
}
SymID srcSymID = GetVarSymID(src1->GetStackSym());
// Prepare the memcopyCandidate entry
Loop::MemOpCandidate* previousCandidate = loop->memOpInfo->candidates->Head();
if (!previousCandidate->IsMemCopy())
{
return false;
}
Loop::MemCopyCandidate* memcopyInfo = previousCandidate->AsMemCopy();
// The previous candidate has to have been created by the matching ldElem
if (
memcopyInfo->base != Js::Constants::InvalidSymID ||
GetVarSymID(memcopyInfo->transferSym) != srcSymID
)
{
TRACE_MEMOP_PHASE_VERBOSE(MemCopy, loop, instr, _u("No matching LdElem found (s%d)"), baseSymID);
return false;
}
Assert(indexOp->GetStackSym());
SymID inductionSymID = GetVarSymID(indexOp->GetStackSym());
Assert(IsSymIDInductionVariable(inductionSymID, loop));
bool isIndexPreIncr = loop->memOpInfo->inductionVariableChangeInfoMap->ContainsKey(inductionSymID);
if (isIndexPreIncr != memcopyInfo->bIndexAlreadyChanged)
{
// The index changed between the load and the store
TRACE_MEMOP_PHASE_VERBOSE(MemCopy, loop, instr, _u("Index value changed between ldElem and stElem"));
return false;
}
// Consider: Can we remove the count field?
memcopyInfo->count++;
AssertOrFailFast(memcopyInfo->count <= 1);
memcopyInfo->base = baseSymID;
return true;
}
bool
GlobOpt::CollectMemOpLdElementI(IR::Instr *instr, Loop *loop)
{
Assert(instr->m_opcode == Js::OpCode::LdElemI_A);
return (!PHASE_OFF(Js::MemCopyPhase, this->func) && CollectMemcopyLdElementI(instr, loop));
}
bool
GlobOpt::CollectMemOpStElementI(IR::Instr *instr, Loop *loop)
{
Assert(instr->m_opcode == Js::OpCode::StElemI_A || instr->m_opcode == Js::OpCode::StElemI_A_Strict);
Assert(instr->GetSrc1());
return (!PHASE_OFF(Js::MemSetPhase, this->func) && CollectMemsetStElementI(instr, loop)) ||
(!PHASE_OFF(Js::MemCopyPhase, this->func) && CollectMemcopyStElementI(instr, loop));
}
bool
GlobOpt::CollectMemOpInfo(IR::Instr *instrBegin, IR::Instr *instr, Value *src1Val, Value *src2Val)
{
Assert(this->currentBlock->loop);
Loop *loop = this->currentBlock->loop;
if (!loop->blockList.HasTwo())
{
// We support memcopy and memset for loops which have only two blocks.
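// For intuition, the loop shapes this pass targets look like the following
// (illustrative JS for exposition, not the exhaustive set of accepted forms):
//     for (var i = 0; i < n; i++) { a[i] = c; }    // memset candidate
//     for (var i = 0; i < n; i++) { a[i] = b[i]; } // memcopy candidate
// Such loops compile down to the two-block (header + body) form required here.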
return false;
}
if (loop->GetLoopFlags().isInterpreted && !loop->GetLoopFlags().memopMinCountReached)
{
TRACE_MEMOP_VERBOSE(loop, instr, _u("minimum loop count not reached"))
loop->doMemOp = false;
return false;
}
Assert(loop->doMemOp);
bool isIncr = true, isChangedByOne = false;
switch (instr->m_opcode)
{
case Js::OpCode::StElemI_A:
case Js::OpCode::StElemI_A_Strict:
if (!CollectMemOpStElementI(instr, loop))
{
loop->doMemOp = false;
return false;
}
break;
case Js::OpCode::LdElemI_A:
if (!CollectMemOpLdElementI(instr, loop))
{
loop->doMemOp = false;
return false;
}
break;
case Js::OpCode::Sub_I4:
isIncr = false;
// Fall through: Sub_I4 is handled like Add_I4 with a decrementing induction variable.
case Js::OpCode::Add_I4:
{
// The only case in which these OpCodes can contribute to an inductionVariableChangeInfo
// is when the induction variable is being modified and overwritten as well (ex: j = j + 1)
// and not when the induction variable is modified but not overwritten (ex: k = j + 1).
// This can either be detected in IR as
// s1 = Add_I4 s1 1 // Case #1, can be seen with "j++".
// or as
// s4(s2) = Add_I4 s3(s1) 1 // Case #2, can be seen with "j = j + 1".
// s1 = Ld_A s2
bool isInductionVar = false;
IR::Instr* nextInstr = instr->m_next;
if (
// Checks for Case #1 and Case #2
instr->GetDst()->GetStackSym() != nullptr &&
instr->GetDst()->IsRegOpnd() &&
(
// Checks for Case #1
(instr->GetDst()->GetStackSym() == instr->GetSrc1()->GetStackSym()) ||
// Checks for Case #2
(nextInstr && nextInstr->m_opcode == Js::OpCode::Ld_A &&
nextInstr->GetSrc1()->IsRegOpnd() &&
nextInstr->GetDst()->IsRegOpnd() &&
GetVarSymID(instr->GetDst()->GetStackSym()) == nextInstr->GetSrc1()->GetStackSym()->m_id &&
GetVarSymID(instr->GetSrc1()->GetStackSym()) == nextInstr->GetDst()->GetStackSym()->m_id)
)
)
{
isInductionVar = true;
}
// Whether Case #1 or Case #2, src1 holds the induction variable (in Case #1, dst == src1), so it is always safe to use src1 as the induction sym.
StackSym* sym = instr->GetSrc1()->GetStackSym();
SymID inductionSymID = GetVarSymID(sym);
if (isInductionVar && IsSymIDInductionVariable(inductionSymID, this->currentBlock->loop))
{
if (!isChangedByOne)
{
IR::Opnd *src1, *src2;
src1 = instr->GetSrc1();
src2 = instr->GetSrc2();
if (src2->IsRegOpnd())
{
Value *val = CurrentBlockData()->FindValue(src2->AsRegOpnd()->m_sym);
if (val)
{
ValueInfo *vi = val->GetValueInfo();
int constValue;
if (vi && vi->TryGetIntConstantValue(&constValue))
{
if (constValue == 1)
{
isChangedByOne = true;
}
}
}
}
else if (src2->IsIntConstOpnd())
{
if (src2->AsIntConstOpnd()->GetValue() == 1)
{
isChangedByOne = true;
}
}
}
loop->EnsureMemOpVariablesInitialized();
if (!isChangedByOne)
{
Loop::InductionVariableChangeInfo inductionVariableChangeInfo = { Js::Constants::InvalidLoopUnrollFactor, 0 };
if (!loop->memOpInfo->inductionVariableChangeInfoMap->ContainsKey(inductionSymID))
{
loop->memOpInfo->inductionVariableChangeInfoMap->Add(inductionSymID, inductionVariableChangeInfo);
if (sym->m_id != inductionSymID)
{
// Backwards pass uses this bit-vector to look up upwardExposedUsed/bytecodeUpwardExposedUsed symbols, which are not necessarily vars. Just add both.
loop->memOpInfo->inductionVariableChangeInfoMap->Add(sym->m_id, inductionVariableChangeInfo);
}
}
else
{
loop->memOpInfo->inductionVariableChangeInfoMap->Item(inductionSymID, inductionVariableChangeInfo);
if (sym->m_id != inductionSymID)
{
// Backwards pass uses this bit-vector to look up upwardExposedUsed/bytecodeUpwardExposedUsed symbols, which are not necessarily vars. Just add both.
loop->memOpInfo->inductionVariableChangeInfoMap->Item(sym->m_id, inductionVariableChangeInfo);
}
}
}
else
{
if (!loop->memOpInfo->inductionVariableChangeInfoMap->ContainsKey(inductionSymID))
{
Loop::InductionVariableChangeInfo inductionVariableChangeInfo = { 1, isIncr };
loop->memOpInfo->inductionVariableChangeInfoMap->Add(inductionSymID, inductionVariableChangeInfo);
if (sym->m_id != inductionSymID)
{
// Backwards pass uses this bit-vector to look up upwardExposedUsed/bytecodeUpwardExposedUsed symbols, which are not necessarily vars. Just add both.
loop->memOpInfo->inductionVariableChangeInfoMap->Add(sym->m_id, inductionVariableChangeInfo);
}
}
else
{
Loop::InductionVariableChangeInfo inductionVariableChangeInfo = { 0, 0 };
inductionVariableChangeInfo = loop->memOpInfo->inductionVariableChangeInfoMap->Lookup(inductionSymID, inductionVariableChangeInfo);
// If inductionVariableChangeInfo.unroll has been invalidated, do
// not modify the Js::Constants::InvalidLoopUnrollFactor value
if (inductionVariableChangeInfo.unroll != Js::Constants::InvalidLoopUnrollFactor)
{
inductionVariableChangeInfo.unroll++;
}
inductionVariableChangeInfo.isIncremental = isIncr;
loop->memOpInfo->inductionVariableChangeInfoMap->Item(inductionSymID, inductionVariableChangeInfo);
if (sym->m_id != inductionSymID)
{
// Backwards pass uses this bit-vector to look up upwardExposedUsed/bytecodeUpwardExposedUsed symbols, which are not necessarily vars. Just add both.
loop->memOpInfo->inductionVariableChangeInfoMap->Item(sym->m_id, inductionVariableChangeInfo);
}
}
}
break;
}
// Fallthrough if not an induction variable
}
default:
FOREACH_INSTR_IN_RANGE(chkInstr, instrBegin->m_next, instr)
{
if (IsInstrInvalidForMemOp(chkInstr, loop, src1Val, src2Val))
{
loop->doMemOp = false;
return false;
}
// Make sure this instruction doesn't use the memcopy transfer sym before it is checked by StElemI
if (loop->memOpInfo && !loop->memOpInfo->candidates->Empty())
{
Loop::MemOpCandidate* prevCandidate = loop->memOpInfo->candidates->Head();
if (prevCandidate->IsMemCopy())
{
Loop::MemCopyCandidate* memcopyCandidate = prevCandidate->AsMemCopy();
if (memcopyCandidate->base == Js::Constants::InvalidSymID)
{
if (chkInstr->HasSymUse(memcopyCandidate->transferSym))
{
loop->doMemOp = false;
TRACE_MEMOP_PHASE_VERBOSE(MemCopy, loop, chkInstr, _u("Found illegal use of LdElemI value(s%d)"), GetVarSymID(memcopyCandidate->transferSym));
return false;
}
}
}
}
}
NEXT_INSTR_IN_RANGE;
IR::Instr* prevInstr = instr->m_prev;
// If an instr where the dst is an induction variable (and thus is being written to) is not caught by a case in the above
// switch statement (which implies that this instr does not contribute to an inductionVariableChangeInfo) and in the default
// case does not set doMemOp to false (which implies that this instr does not invalidate this MemOp), then FailFast as we
// should not be performing a MemOp under these conditions.
AssertOrFailFast(!instr->GetDst() || instr->m_opcode == Js::OpCode::IncrLoopBodyCount || !loop->memOpInfo ||
// Refer to "Case #2" described above in this function. For the following IR:
// Line #1: s4(s2) = Add_I4 s3(s1) 1
// Line #2: s3(s1) = Ld_A s4(s2)
// do not consider line #2 as a violating instr
(instr->m_opcode == Js::OpCode::Ld_I4 &&
// note Ld_A is for the case where the add was 0
prevInstr && (prevInstr->m_opcode == Js::OpCode::Add_I4 ||
prevInstr->m_opcode == Js::OpCode::Sub_I4 ||
prevInstr->m_opcode == Js::OpCode::Ld_A ) &&
instr->GetSrc1()->IsRegOpnd() &&
instr->GetDst()->IsRegOpnd() &&
prevInstr->GetDst()->IsRegOpnd() &&
instr->GetDst()->GetStackSym() == prevInstr->GetSrc1()->GetStackSym() &&
instr->GetSrc1()->GetStackSym() == prevInstr->GetDst()->GetStackSym()) ||
!loop->memOpInfo->inductionVariableChangeInfoMap->ContainsKey(GetVarSymID(instr->GetDst()->GetStackSym())));
}
return true;
}
bool
GlobOpt::IsInstrInvalidForMemOp(IR::Instr *instr, Loop *loop, Value *src1Val, Value *src2Val)
{
// List of instructions that are valid with memop (i.e. instrs that get removed if memop is emitted)
if (
this->currentBlock != loop->GetHeadBlock() &&
!instr->IsLabelInstr() &&
instr->IsRealInstr() &&
instr->m_opcode != Js::OpCode::IncrLoopBodyCount &&
instr->m_opcode != Js::OpCode::StLoopBodyCount &&
instr->m_opcode != Js::OpCode::Ld_A &&
instr->m_opcode != Js::OpCode::Ld_I4 &&
!(instr->IsBranchInstr() && instr->AsBranchInstr()->IsUnconditional())
)
{
TRACE_MEMOP_VERBOSE(loop, instr, _u("Instruction not accepted for memop"));
return true;
}
// Check prev instr because it could have been added by an optimization and we won't see it here.
if (OpCodeAttr::FastFldInstr(instr->m_opcode) || (instr->m_prev && OpCodeAttr::FastFldInstr(instr->m_prev->m_opcode)))
{
// Refuse any operations interacting with Fields
TRACE_MEMOP_VERBOSE(loop, instr, _u("Field interaction detected"));
return true;
}
if (Js::OpCodeUtil::GetOpCodeLayout(instr->m_opcode) == Js::OpLayoutType::ElementSlot)
{
// Refuse any operations interacting with slots
TRACE_MEMOP_VERBOSE(loop, instr, _u("Slot interaction detected"));
return true;
}
if (this->MayNeedBailOnImplicitCall(instr, src1Val, src2Val))
{
TRACE_MEMOP_VERBOSE(loop, instr, _u("Implicit call bailout detected"));
return true;
}
return false;
}
void
GlobOpt::TryReplaceLdLen(IR::Instr *& instr)
{
// Change LdLen on objects other than arrays, strings, and 'arguments' to LdFld. Otherwise, convert the SymOpnd to a RegOpnd here.
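// Illustrative JS (hypothetical examples, not an exhaustive list): "o.length" on a
// plain object is demoted to an ordinary LdFld of the "length" property, while
// "arr.length", "str.length" and "arguments.length" keep the LdLen fast path.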
if (instr->m_opcode == Js::OpCode::LdLen_A && instr->GetSrc1() && instr->GetSrc1()->IsSymOpnd())
{
IR::SymOpnd * opnd = instr->GetSrc1()->AsSymOpnd();
Sym *sym = opnd->m_sym;
Assert(sym->IsPropertySym());
PropertySym *originalPropertySym = sym->AsPropertySym();
IR::RegOpnd* newopnd = IR::RegOpnd::New(originalPropertySym->m_stackSym, IRType::TyVar, instr->m_func);
ValueInfo *const objectValueInfo = CurrentBlockData()->FindValue(originalPropertySym->m_stackSym)->GetValueInfo();
// things we'd emit a fast path for
if (
objectValueInfo->IsLikelyAnyArray() ||
objectValueInfo->HasHadStringTag() ||
objectValueInfo->IsLikelyString() ||
newopnd->IsArgumentsObject() ||
(CurrentBlockData()->argObjSyms && CurrentBlockData()->IsArgumentsOpnd(newopnd))
)
{
// We need to properly transfer over the information from the old operand, which is
// a SymOpnd, to the new one, which is a RegOpnd. Unfortunately, the types mean the
// normal copy methods won't work here, so we're going to directly copy data.
newopnd->SetIsJITOptimizedReg(opnd->GetIsJITOptimizedReg());
newopnd->SetValueType(objectValueInfo->Type());
newopnd->SetIsDead(opnd->GetIsDead());
instr->ReplaceSrc1(newopnd);
}
else
{
// otherwise, change the instruction to an LdFld here.
instr->m_opcode = Js::OpCode::LdFld;
}
}
}
IR::Instr *
GlobOpt::OptInstr(IR::Instr *&instr, bool* isInstrRemoved)
{
Assert(instr->m_func->IsTopFunc() || instr->m_func->isGetterSetter || instr->m_func->callSiteIdInParentFunc != UINT16_MAX);
IR::Opnd *src1, *src2;
Value *src1Val = nullptr, *src2Val = nullptr, *dstVal = nullptr;
Value *src1IndirIndexVal = nullptr, *dstIndirIndexVal = nullptr;
IR::Instr *instrPrev = instr->m_prev;
IR::Instr *instrNext = instr->m_next;
if (instr->IsLabelInstr() && this->func->HasTry() && this->func->DoOptimizeTry())
{
this->currentRegion = instr->AsLabelInstr()->GetRegion();
Assert(this->currentRegion);
}
if(PrepareForIgnoringIntOverflow(instr))
{
if(!IsLoopPrePass())
{
*isInstrRemoved = true;
currentBlock->RemoveInstr(instr);
}
return instrNext;
}
if (instr->m_opcode == Js::OpCode::Yield)
{
// TODO[generators][ianhall]: Can this and the FillBailOutInfo call below be moved to after Src1 and Src2 so that Yield can be optimized right up to the actual yield?
this->ProcessKills(instr);
}
if (!instr->IsRealInstr() || instr->IsByteCodeUsesInstr() || instr->m_opcode == Js::OpCode::Conv_Bool)
{
return instrNext;
}
if (!IsLoopPrePass())
{
// Change LdLen on objects other than arrays, strings, and 'arguments' to LdFld.
this->TryReplaceLdLen(instr);
}
// Consider: Do we ever get post-op bailout here, and if so is the FillBailOutInfo call in the right place?
if (instr->HasBailOutInfo() && !this->IsLoopPrePass())
{
this->FillBailOutInfo(this->currentBlock, instr);
}
this->instrCountSinceLastCleanUp++;
instr = this->PreOptPeep(instr);
this->OptArguments(instr);
// StackArguments optimization - we bail out if the index is out of range of the actuals.
if ((instr->m_opcode == Js::OpCode::LdElemI_A || instr->m_opcode == Js::OpCode::TypeofElem) &&
instr->DoStackArgsOpt() && !this->IsLoopPrePass())
{
GenerateBailAtOperation(&instr, IR::BailOnStackArgsOutOfActualsRange);
}
#if DBG
PropertySym *propertySymUseBefore = nullptr;
Assert(this->byteCodeUses == nullptr);
this->byteCodeUsesBeforeOpt->ClearAll();
GlobOpt::TrackByteCodeSymUsed(instr, this->byteCodeUsesBeforeOpt, &propertySymUseBefore);
Assert(noImplicitCallUsesToInsert->Count() == 0);
#endif
this->ignoredIntOverflowForCurrentInstr = false;
this->ignoredNegativeZeroForCurrentInstr = false;
src1 = instr->GetSrc1();
src2 = instr->GetSrc2();
if (src1)
{
src1Val = this->OptSrc(src1, &instr, &src1IndirIndexVal);
GOPT_TRACE_VALUENUMBER(_u("[src1] "), instr->GetSrc1(), _u("%d"), src1Val ? src1Val->GetValueNumber() : -1);
instr = this->SetTypeCheckBailOut(instr->GetSrc1(), instr, nullptr);
if (src2)
{
src2Val = this->OptSrc(src2, &instr);
GOPT_TRACE_VALUENUMBER(_u("[src2] "), instr->GetSrc2(), _u("%d"), src2Val ? src2Val->GetValueNumber() : -1);
}
}
if(instr->GetDst() && instr->GetDst()->IsIndirOpnd())
{
this->OptSrc(instr->GetDst(), &instr, &dstIndirIndexVal);
}
MarkArgumentsUsedForBranch(instr);
CSEOptimize(this->currentBlock, &instr, &src1Val, &src2Val, &src1IndirIndexVal);
OptimizeChecks(instr);
OptArraySrc(&instr, &src1Val, &src2Val);
OptNewScObject(&instr, src1Val);
OptStackArgLenAndConst(instr, &src1Val);
instr = this->OptPeep(instr, src1Val, src2Val);
if (instr->m_opcode == Js::OpCode::Nop ||
(instr->m_opcode == Js::OpCode::CheckThis &&
instr->GetSrc1()->IsRegOpnd() &&
instr->GetSrc1()->AsRegOpnd()->m_sym->m_isSafeThis))
{
instrNext = instr->m_next;
InsertNoImplicitCallUses(instr);
if (this->byteCodeUses)
{
this->InsertByteCodeUses(instr);
}
*isInstrRemoved = true;
this->currentBlock->RemoveInstr(instr);
return instrNext;
}
else if (instr->m_opcode == Js::OpCode::GetNewScObject && !this->IsLoopPrePass() && src1Val->GetValueInfo()->IsPrimitive())
{
// Constructor returned (src1) a primitive value, so fold this into "dst = Ld_A src2", where src2 is the new object that
// was passed into the constructor as its 'this' parameter
instr->FreeSrc1();
instr->SetSrc1(instr->UnlinkSrc2());
instr->m_opcode = Js::OpCode::Ld_A;
src1Val = src2Val;
src2Val = nullptr;
}
else if ((instr->m_opcode == Js::OpCode::TryCatch && this->func->DoOptimizeTry()) || (instr->m_opcode == Js::OpCode::TryFinally && this->func->DoOptimizeTry()))
{
ProcessTryHandler(instr);
}
else if (instr->m_opcode == Js::OpCode::BrOnException || instr->m_opcode == Js::OpCode::BrOnNoException)
{
if (this->ProcessExceptionHandlingEdges(instr))
{
*isInstrRemoved = true;
return instrNext;
}
}
bool isAlreadyTypeSpecialized = false;
if (!IsLoopPrePass() && instr->HasBailOutInfo())
{
if (instr->GetBailOutKind() == IR::BailOutExpectingInteger)
{
isAlreadyTypeSpecialized = TypeSpecializeBailoutExpectedInteger(instr, src1Val, &dstVal);
}
else if (instr->GetBailOutKind() == IR::BailOutExpectingString)
{
if (instr->GetSrc1()->IsRegOpnd())
{
if (!src1Val || !src1Val->GetValueInfo()->IsLikelyString())
{
// Disable SwitchOpt if the source is definitely not a string - this may be realized only in GlobOpt
Assert(IsSwitchOptEnabled());
throw Js::RejitException(RejitReason::DisableSwitchOptExpectingString);
}
}
}
}
bool forceInvariantHoisting = false;
const bool ignoreIntOverflowInRangeForInstr = instr->ignoreIntOverflowInRange; // Save it since the instr can change
if (!isAlreadyTypeSpecialized)
{
bool redoTypeSpec;
instr = this->TypeSpecialization(instr, &src1Val, &src2Val, &dstVal, &redoTypeSpec, &forceInvariantHoisting);
if(redoTypeSpec && instr->m_opcode != Js::OpCode::Nop)
{
forceInvariantHoisting = false;
instr = this->TypeSpecialization(instr, &src1Val, &src2Val, &dstVal, &redoTypeSpec, &forceInvariantHoisting);
Assert(!redoTypeSpec);
}
if (instr->m_opcode == Js::OpCode::Nop)
{
InsertNoImplicitCallUses(instr);
if (this->byteCodeUses)
{
this->InsertByteCodeUses(instr);
}
instrNext = instr->m_next;
*isInstrRemoved = true;
this->currentBlock->RemoveInstr(instr);
return instrNext;
}
}
if (ignoreIntOverflowInRangeForInstr)
{
VerifyIntSpecForIgnoringIntOverflow(instr);
}
// Track calls after any pre-op bailouts have been inserted before the call, because they will need to restore out params.
this->TrackCalls(instr);
if (instr->GetSrc1())
{
this->UpdateObjPtrValueType(instr->GetSrc1(), instr);
}
IR::Opnd *dst = instr->GetDst();
if (dst)
{
// Copy prop dst uses and mark live/available type syms before tracking kills.
CopyPropDstUses(dst, instr, src1Val);
}
// Track mark temp object before we process the dst so we can generate pre-op bailout
instr = this->TrackMarkTempObject(instrPrev->m_next, instr);
bool removed = OptTagChecks(instr);
if (removed)
{
*isInstrRemoved = true;
return instrNext;
}
dstVal = this->OptDst(&instr, dstVal, src1Val, src2Val, dstIndirIndexVal, src1IndirIndexVal);
if (dst)
{
GOPT_TRACE_VALUENUMBER(_u("[dst] "), instr->GetDst(), _u("%d\n"), dstVal ? dstVal->GetValueNumber() : -1);
}
dst = instr->GetDst();
instrNext = instr->m_next;
if (dst)
{
if (this->func->HasTry() && this->func->DoOptimizeTry())
{
this->InsertToVarAtDefInTryRegion(instr, dst);
}
instr = this->SetTypeCheckBailOut(dst, instr, nullptr);
this->UpdateObjPtrValueType(dst, instr);
}
BVSparse<JitArenaAllocator> instrByteCodeStackSymUsedAfter(this->alloc);
PropertySym *propertySymUseAfter = nullptr;
if (this->byteCodeUses != nullptr)
{
GlobOpt::TrackByteCodeSymUsed(instr, &instrByteCodeStackSymUsedAfter, &propertySymUseAfter);
}
#if DBG
else
{
GlobOpt::TrackByteCodeSymUsed(instr, &instrByteCodeStackSymUsedAfter, &propertySymUseAfter);
instrByteCodeStackSymUsedAfter.Equal(this->byteCodeUsesBeforeOpt);
Assert(propertySymUseAfter == propertySymUseBefore);
}
#endif
bool isHoisted = false;
if (this->currentBlock->loop && !this->IsLoopPrePass())
{
isHoisted = this->TryHoistInvariant(instr, this->currentBlock, dstVal, src1Val, src2Val, true, false, forceInvariantHoisting);
}
src1 = instr->GetSrc1();
if (!this->IsLoopPrePass() && src1)
{
// instr const, nonConst => canonicalize by swapping operands
// This simplifies lowering. (somewhat machine dependent)
// Note that because of Var overflows, src1 may not have been constant prop'd to an IntConst
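// Illustrative sketch (assumed shape; the exact swap rules live in PreLowerCanonicalize):
//     t = Add_I4 5, s2   =>   t = Add_I4 s2, 5
// i.e. a constant src1 with a non-constant src2 is swapped so the constant ends up in src2.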
this->PreLowerCanonicalize(instr, &src1Val, &src2Val);
}
if (!PHASE_OFF(Js::MemOpPhase, this->func) &&
!isHoisted &&
!(instr->IsJitProfilingInstr()) &&
this->currentBlock->loop && !IsLoopPrePass() &&
!func->IsJitInDebugMode() &&
!func->IsMemOpDisabled() &&
this->currentBlock->loop->doMemOp)
{
CollectMemOpInfo(instrPrev, instr, src1Val, src2Val);
}
InsertNoImplicitCallUses(instr);
if (this->byteCodeUses != nullptr)
{
// Optimization removed some uses from the instruction.
// Need to insert fake uses so we can get the correct live register to restore in bailout.
this->byteCodeUses->Minus(&instrByteCodeStackSymUsedAfter);
if (this->propertySymUse == propertySymUseAfter)
{
this->propertySymUse = nullptr;
}
this->InsertByteCodeUses(instr);
}
if (!this->IsLoopPrePass() && !isHoisted && this->IsImplicitCallBailOutCurrentlyNeeded(instr, src1Val, src2Val))
{
IR::BailOutKind kind = IR::BailOutOnImplicitCalls;
if(instr->HasBailOutInfo())
{
Assert(instr->GetBailOutInfo()->bailOutOffset == instr->GetByteCodeOffset());
const IR::BailOutKind bailOutKind = instr->GetBailOutKind();
if((bailOutKind & ~IR::BailOutKindBits) != IR::BailOutOnImplicitCallsPreOp)
{
Assert(!(bailOutKind & ~IR::BailOutKindBits));
instr->SetBailOutKind(bailOutKind + IR::BailOutOnImplicitCallsPreOp);
}
}
else if (instr->forcePreOpBailOutIfNeeded || this->isRecursiveCallOnLandingPad)
{
// We can't have a byte code reg slot as dst to generate a
// pre-op implicit call after we have processed the dst.
// Consider: This might miss an opportunity to use a copy prop sym to restore
// some other byte code reg if the dst is that copy prop that we already killed.
Assert(!instr->GetDst()
|| !instr->GetDst()->IsRegOpnd()
|| instr->GetDst()->AsRegOpnd()->GetIsJITOptimizedReg()
|| !instr->GetDst()->AsRegOpnd()->m_sym->HasByteCodeRegSlot());
this->GenerateBailAtOperation(&instr, IR::BailOutOnImplicitCallsPreOp);
}
else
{
// Capture value of the bailout after the operation is done.
this->GenerateBailAfterOperation(&instr, kind);
}
}
if (this->IsLazyBailOutCurrentlyNeeded(instr, src1Val, src2Val, isHoisted))
{
this->GenerateLazyBailOut(instr);
}
if (CurrentBlockData()->capturedValuesCandidate && !this->IsLoopPrePass())
{
this->CommitCapturedValuesCandidate();
}
#if DBG
if (CONFIG_FLAG(ValidateIntRanges) && !IsLoopPrePass())
{
if (instr->ShouldEmitIntRangeCheck())
{
this->EmitIntRangeChecks(instr);
}
}
#endif
return instrNext;
}
bool
GlobOpt::IsNonNumericRegOpnd(IR::RegOpnd* opnd, bool inGlobOpt, bool* isSafeToTransferInPrepass /*=nullptr*/) const
{
if (opnd == nullptr)
{
return false;
}
if (opnd->m_sym->m_isNotNumber)
{
return true;
}
if (!inGlobOpt)
{
return false;
}
if (opnd->GetValueType().IsNumber() || currentBlock->globOptData.IsTypeSpecialized(opnd->m_sym))
{
if (!this->IsLoopPrePass())
{
return false;
}
Value * opndValue = this->currentBlock->globOptData.FindValue(opnd->m_sym);
ValueInfo * opndValueInfo = opndValue ? opndValue->GetValueInfo() : nullptr;
if (!opndValueInfo)
{
return true;
}
bool isSafeToTransfer = this->IsSafeToTransferInPrepass(opnd->m_sym, opndValueInfo);
if (isSafeToTransferInPrepass != nullptr)
{
*isSafeToTransferInPrepass = isSafeToTransfer;
}
if (this->prePassLoop->preservesNumberValue->Test(opnd->m_sym->m_id))
{
return false;
}
return !isSafeToTransfer;
}
return true;
}
bool
GlobOpt::OptTagChecks(IR::Instr *instr)
{
if (PHASE_OFF(Js::OptTagChecksPhase, this->func) || !this->DoTagChecks())
{
return false;
}
StackSym *stackSym = nullptr;
IR::SymOpnd *symOpnd = nullptr;
IR::RegOpnd *regOpnd = nullptr;
switch(instr->m_opcode)
{
case Js::OpCode::LdFld:
case Js::OpCode::LdMethodFld:
case Js::OpCode::CheckFixedFld:
case Js::OpCode::CheckPropertyGuardAndLoadType:
symOpnd = instr->GetSrc1()->AsSymOpnd();
stackSym = symOpnd->m_sym->AsPropertySym()->m_stackSym;
break;
case Js::OpCode::BailOnNotObject:
case Js::OpCode::BailOnNotArray:
if (instr->GetSrc1()->IsRegOpnd())
{
regOpnd = instr->GetSrc1()->AsRegOpnd();
stackSym = regOpnd->m_sym;
}
break;
case Js::OpCode::StFld:
symOpnd = instr->GetDst()->AsSymOpnd();
stackSym = symOpnd->m_sym->AsPropertySym()->m_stackSym;
break;
}
if (stackSym)
{
Value *value = CurrentBlockData()->FindValue(stackSym);
if (value)
{
ValueInfo *valInfo = value->GetValueInfo();
if (valInfo->GetSymStore() && valInfo->GetSymStore()->IsStackSym() && valInfo->GetSymStore()->AsStackSym()->IsFromByteCodeConstantTable())
{
return false;
}
ValueType valueType = value->GetValueInfo()->Type();
if (instr->m_opcode == Js::OpCode::BailOnNotObject)
{
if (valueType.CanBeTaggedValue())
{
// We're not adding new information to the value other than changing the value type. Preserve any existing
// information and just change the value type.
ChangeValueType(nullptr, value, valueType.SetCanBeTaggedValue(false), true /*preserveSubClassInfo*/);
return false;
}
if (!this->IsLoopPrePass())
{
if (this->byteCodeUses)
{
this->InsertByteCodeUses(instr);
}
this->currentBlock->RemoveInstr(instr);
}
return true;
}
if (valueType.CanBeTaggedValue() &&
!valueType.HasBeenNumber() &&
!this->IsLoopPrePass())
{
ValueType newValueType = valueType.SetCanBeTaggedValue(false);
// Split out the tag check as a separate instruction.
IR::Instr *bailOutInstr;
bailOutInstr = IR::BailOutInstr::New(Js::OpCode::BailOnNotObject, IR::BailOutOnTaggedValue, instr, instr->m_func);
if (!this->IsLoopPrePass())
{
FillBailOutInfo(this->currentBlock, bailOutInstr);
}
IR::RegOpnd *srcOpnd = regOpnd;
if (!srcOpnd)
{
srcOpnd = IR::RegOpnd::New(stackSym, stackSym->GetType(), instr->m_func);
AnalysisAssert(symOpnd);
if (symOpnd->GetIsJITOptimizedReg())
{
srcOpnd->SetIsJITOptimizedReg(true);
}
}
bailOutInstr->SetSrc1(srcOpnd);
bailOutInstr->GetSrc1()->SetValueType(valueType);
bailOutInstr->SetByteCodeOffset(instr);
instr->InsertBefore(bailOutInstr);
if (this->currentBlock->loop)
{
// Try hoisting the BailOnNotObject instr.
// But since this isn't the current instr being optimized, we need to play tricks with
// the byteCodeUse fields...
TrackByteCodeUsesForInstrAddedInOptInstr(bailOutInstr, [&]()
{
if (TryHoistInvariant(bailOutInstr, this->currentBlock, nullptr, value, nullptr, true, false, false, IR::BailOutOnTaggedValue))
{
Value* landingPadValue = this->currentBlock->loop->landingPad->globOptData.FindValue(stackSym);
ValueType newLandingPadValueType = landingPadValue->GetValueInfo()->Type().SetCanBeTaggedValue(false);
ChangeValueType(nullptr, landingPadValue, newLandingPadValueType, false);
}
});
}
if (symOpnd)
{
symOpnd->SetPropertyOwnerValueType(newValueType);
}
else
{
regOpnd->SetValueType(newValueType);
}
ChangeValueType(nullptr, value, newValueType, false);
}
}
}
return false;
}
bool
GlobOpt::TypeSpecializeBailoutExpectedInteger(IR::Instr* instr, Value* src1Val, Value** dstVal)
{
bool isAlreadyTypeSpecialized = false;
if(instr->GetSrc1()->IsRegOpnd())
{
if (!src1Val || !src1Val->GetValueInfo()->IsLikelyInt() || instr->GetSrc1()->AsRegOpnd()->m_sym->m_isNotNumber)
{
Assert(IsSwitchOptEnabledForIntTypeSpec());
throw Js::RejitException(RejitReason::DisableSwitchOptExpectingInteger);
}
// Attach the BailOutExpectingInteger to the FromVar and remove the bailout info on the Ld_A (BeginSwitch) instr.
this->ToTypeSpecUse(instr, instr->GetSrc1(), this->currentBlock, src1Val, nullptr, TyInt32, IR::BailOutExpectingInteger, false, instr);
// Type-specialize the dst of the Ld_A.
TypeSpecializeIntDst(instr, instr->m_opcode, src1Val, src1Val, nullptr, IR::BailOutInvalid, INT32_MIN, INT32_MAX, dstVal);
isAlreadyTypeSpecialized = true;
}
instr->ClearBailOutInfo();
return isAlreadyTypeSpecialized;
}
Value*
GlobOpt::OptDst(
IR::Instr ** pInstr,
Value *dstVal,
Value *src1Val,
Value *src2Val,
Value *dstIndirIndexVal,
Value *src1IndirIndexVal)
{
IR::Instr *&instr = *pInstr;
IR::Opnd *opnd = instr->GetDst();
if (opnd)
{
if (opnd->IsSymOpnd() && opnd->AsSymOpnd()->IsPropertySymOpnd())
{
this->FinishOptPropOp(instr, opnd->AsPropertySymOpnd());
}
if (opnd->IsIndirOpnd() && !this->IsLoopPrePass())
{
IR::RegOpnd *baseOpnd = opnd->AsIndirOpnd()->GetBaseOpnd();
const ValueType baseValueType(baseOpnd->GetValueType());
if ((
baseValueType.IsLikelyNativeArray() ||
#ifdef _M_IX86
(
!AutoSystemInfo::Data.SSE2Available() &&
baseValueType.IsLikelyObject() &&
(
baseValueType.GetObjectType() == ObjectType::Float32Array ||
baseValueType.GetObjectType() == ObjectType::Float64Array
)
)
#else
false
#endif
) &&
instr->GetSrc1()->IsVar())
{
if(instr->m_opcode == Js::OpCode::StElemC)
{
// StElemC has different code that handles native array conversion or missing value stores. Add a bailout
// for those cases.
Assert(baseValueType.IsLikelyNativeArray());
Assert(!instr->HasBailOutInfo());
GenerateBailAtOperation(&instr, IR::BailOutConventionalNativeArrayAccessOnly);
}
else if(instr->HasBailOutInfo())
{
// The lowerer is not going to generate a fast path for this case. Remove any bailouts that require the fast
// path. Note that the removed bailouts should not be necessary for correctness. Bailout on native array
// conversion will be handled automatically as normal.
IR::BailOutKind bailOutKind = instr->GetBailOutKind();
if(bailOutKind & IR::BailOutOnArrayAccessHelperCall)
{
bailOutKind -= IR::BailOutOnArrayAccessHelperCall;
}
if(bailOutKind == IR::BailOutOnImplicitCallsPreOp)
{
bailOutKind -= IR::BailOutOnImplicitCallsPreOp;
}
if(bailOutKind)
{
instr->SetBailOutKind(bailOutKind);
}
else
{
instr->ClearBailOutInfo();
}
}
}
}
}
this->ProcessKills(instr);
if (opnd)
{
if (dstVal == nullptr)
{
dstVal = ValueNumberDst(pInstr, src1Val, src2Val);
}
if (this->IsLoopPrePass())
{
// Keep track of symbols defined in the loop.
if (opnd->IsRegOpnd())
{
StackSym *symDst = opnd->AsRegOpnd()->m_sym;
rootLoopPrePass->symsDefInLoop->Set(symDst->m_id);
}
}
else if (dstVal)
{
opnd->SetValueType(dstVal->GetValueInfo()->Type());
if (currentBlock->loop &&
!IsLoopPrePass() &&
(instr->m_opcode == Js::OpCode::Ld_A || instr->m_opcode == Js::OpCode::Ld_I4) &&
instr->GetSrc1()->IsRegOpnd() &&
!func->IsJitInDebugMode())
{
// Look for the following patterns:
//
// Pattern 1:
// s1[liveOnBackEdge] = s3[dead]
//
// Pattern 2:
// s3 = operation(s1[liveOnBackEdge], s2)
// s1[liveOnBackEdge] = s3
//
// In both patterns, s1 and s3 have the same value by the end. Prefer to use s1 as the sym store instead of s3
// since s1 is live on back-edge, as otherwise, their lifetimes overlap, requiring two registers to hold the
// value instead of one.
do
{
IR::RegOpnd *const src = instr->GetSrc1()->AsRegOpnd();
StackSym *srcVarSym = src->m_sym;
if(srcVarSym->IsTypeSpec())
{
srcVarSym = srcVarSym->GetVarEquivSym(nullptr);
Assert(srcVarSym);
}
if(dstVal->GetValueInfo()->GetSymStore() != srcVarSym)
{
break;
}
IR::RegOpnd *const dst = opnd->AsRegOpnd();
StackSym *dstVarSym = dst->m_sym;
if(dstVarSym->IsTypeSpec())
{
dstVarSym = dstVarSym->GetVarEquivSym(nullptr);
Assert(dstVarSym);
}
if(!currentBlock->loop->regAlloc.liveOnBackEdgeSyms->Test(dstVarSym->m_id))
{
break;
}
Value *const srcValue = CurrentBlockData()->FindValue(srcVarSym);
if(srcValue->GetValueNumber() != dstVal->GetValueNumber())
{
break;
}
if(!src->GetIsDead())
{
IR::Instr *const prevInstr = instr->GetPrevRealInstrOrLabel();
IR::Opnd *const prevDst = prevInstr->GetDst();
if(!prevDst ||
!src->IsEqualInternal(prevDst) ||
!(
(prevInstr->GetSrc1() && dst->IsEqual(prevInstr->GetSrc1())) ||
(prevInstr->GetSrc2() && dst->IsEqual(prevInstr->GetSrc2()))
))
{
break;
}
}
this->SetSymStoreDirect(dstVal->GetValueInfo(), dstVarSym);
} while(false);
}
}
this->ValueNumberObjectType(opnd, instr);
}
this->CSEAddInstr(this->currentBlock, *pInstr, dstVal, src1Val, src2Val, dstIndirIndexVal, src1IndirIndexVal);
return dstVal;
}
void
GlobOpt::CopyPropDstUses(IR::Opnd *opnd, IR::Instr *instr, Value *src1Val)
{
if (opnd->IsSymOpnd())
{
IR::SymOpnd *symOpnd = opnd->AsSymOpnd();
if (symOpnd->m_sym->IsPropertySym())
{
PropertySym * originalPropertySym = symOpnd->m_sym->AsPropertySym();
Value *const objectValue = CurrentBlockData()->FindValue(originalPropertySym->m_stackSym);
symOpnd->SetPropertyOwnerValueType(objectValue ? objectValue->GetValueInfo()->Type() : ValueType::Uninitialized);
this->CopyPropPropertySymObj(symOpnd, instr);
}
}
}
void
GlobOpt::SetLoopFieldInitialValue(Loop *loop, IR::Instr *instr, PropertySym *propertySym, PropertySym *originalPropertySym)
{
Value *initialValue = nullptr;
StackSym *symStore;
if (loop->allFieldsKilled || loop->fieldKilled->Test(originalPropertySym->m_id) || loop->fieldKilled->Test(propertySym->m_id))
{
return;
}
// Value already exists
if (CurrentBlockData()->FindValue(propertySym))
{
return;
}
    // If this initial value had already been added, we would find it in the current value table.
Assert(!loop->initialValueFieldMap.TryGetValue(propertySym, &initialValue));
// If propertySym is live in landingPad, we don't need an initial value.
if (loop->landingPad->globOptData.liveFields->Test(propertySym->m_id))
{
return;
}
StackSym * objectSym = propertySym->m_stackSym;
Value *landingPadObjPtrVal, *currentObjPtrVal;
landingPadObjPtrVal = loop->landingPad->globOptData.FindValue(objectSym);
currentObjPtrVal = CurrentBlockData()->FindValue(objectSym);
auto CanSetInitialValue = [&]() -> bool {
if (!currentObjPtrVal)
{
return false;
}
if (landingPadObjPtrVal)
{
return currentObjPtrVal->GetValueNumber() == landingPadObjPtrVal->GetValueNumber();
}
else
{
if (!objectSym->IsSingleDef())
{
return false;
}
IR::Instr * defInstr = objectSym->GetInstrDef();
IR::Opnd * src1 = defInstr->GetSrc1();
while (!(src1 && src1->IsSymOpnd() && src1->AsSymOpnd()->m_sym->IsPropertySym()))
{
if (src1 && src1->IsRegOpnd() && src1->AsRegOpnd()->GetStackSym()->IsSingleDef())
{
defInstr = src1->AsRegOpnd()->GetStackSym()->GetInstrDef();
src1 = defInstr->GetSrc1();
}
else
{
return false;
}
}
return true;
// Todo: allow other kinds of operands as src1 of instr def of the object sym of the current propertySym
// SymOpnd, but not PropertySymOpnd - LdSlotArr, some LdSlots (?)
// nullptr - NewScObject
}
};
if (!CanSetInitialValue())
{
// objPtr has a different value in the landing pad.
return;
}
// The opnd's value type has not yet been initialized. Since the property sym doesn't have a value, it effectively has an
// Uninitialized value type. Use the profiled value type from the instruction.
const ValueType profiledValueType =
instr->IsProfiledInstr() ? instr->AsProfiledInstr()->u.FldInfo().valueType : ValueType::Uninitialized;
Assert(!profiledValueType.IsDefinite()); // Hence the values created here don't need to be tracked for kills
initialValue = this->NewGenericValue(profiledValueType, propertySym);
symStore = StackSym::New(this->func);
initialValue->GetValueInfo()->SetSymStore(symStore);
loop->initialValueFieldMap.Add(propertySym, initialValue->Copy(this->alloc, initialValue->GetValueNumber()));
// Copy the initial value into the landing pad, but without a symStore
Value *landingPadInitialValue = Value::New(this->alloc, initialValue->GetValueNumber(),
ValueInfo::New(this->alloc, initialValue->GetValueInfo()->Type()));
loop->landingPad->globOptData.SetValue(landingPadInitialValue, propertySym);
loop->landingPad->globOptData.liveFields->Set(propertySym->m_id);
#if DBG_DUMP
if (PHASE_TRACE(Js::FieldPREPhase, this->func))
{
Output::Print(_u("** TRACE: Field PRE initial value for loop head #%d. Val:%d symStore:"),
loop->GetHeadBlock()->GetBlockNum(), initialValue->GetValueNumber());
symStore->Dump();
Output::Print(_u("\n Instr: "));
instr->Dump();
Output::Flush();
}
#endif
// Add initial value to all the previous blocks in the loop.
FOREACH_BLOCK_BACKWARD_IN_RANGE(block, this->currentBlock->GetPrev(), loop->GetHeadBlock())
{
if (block->GetDataUseCount() == 0)
{
// All successor blocks have been processed, no point in adding the value.
continue;
}
Value *newValue = initialValue->Copy(this->alloc, initialValue->GetValueNumber());
block->globOptData.SetValue(newValue, propertySym);
block->globOptData.liveFields->Set(propertySym->m_id);
block->globOptData.SetValue(newValue, symStore);
block->globOptData.liveVarSyms->Set(symStore->m_id);
} NEXT_BLOCK_BACKWARD_IN_RANGE;
CurrentBlockData()->SetValue(initialValue, symStore);
CurrentBlockData()->liveVarSyms->Set(symStore->m_id);
CurrentBlockData()->liveFields->Set(propertySym->m_id);
}
// Examine src, apply copy prop and value number it
Value*
GlobOpt::OptSrc(IR::Opnd *opnd, IR::Instr * *pInstr, Value **indirIndexValRef, IR::IndirOpnd *parentIndirOpnd)
{
IR::Instr * &instr = *pInstr;
Assert(!indirIndexValRef || !*indirIndexValRef);
Assert(
parentIndirOpnd
? opnd == parentIndirOpnd->GetBaseOpnd() || opnd == parentIndirOpnd->GetIndexOpnd()
: opnd == instr->GetSrc1() || opnd == instr->GetSrc2() || opnd == instr->GetDst() && opnd->IsIndirOpnd());
Sym *sym;
Value *val;
PropertySym *originalPropertySym = nullptr;
switch(opnd->GetKind())
{
case IR::OpndKindIntConst:
val = this->GetIntConstantValue(opnd->AsIntConstOpnd()->AsInt32(), instr);
opnd->SetValueType(val->GetValueInfo()->Type());
return val;
case IR::OpndKindInt64Const:
val = this->GetIntConstantValue(opnd->AsInt64ConstOpnd()->GetValue(), instr);
opnd->SetValueType(val->GetValueInfo()->Type());
return val;
case IR::OpndKindFloatConst:
{
const FloatConstType floatValue = opnd->AsFloatConstOpnd()->m_value;
int32 int32Value;
if(Js::JavascriptNumber::TryGetInt32Value(floatValue, &int32Value))
{
val = GetIntConstantValue(int32Value, instr);
}
else
{
val = NewFloatConstantValue(floatValue);
}
opnd->SetValueType(val->GetValueInfo()->Type());
return val;
}
case IR::OpndKindAddr:
{
IR::AddrOpnd *addrOpnd = opnd->AsAddrOpnd();
if (addrOpnd->m_isFunction)
{
AssertMsg(!PHASE_OFF(Js::FixedMethodsPhase, instr->m_func), "Fixed function address operand with fixed method calls phase disabled?");
val = NewFixedFunctionValue((Js::JavascriptFunction *)addrOpnd->m_address, addrOpnd);
opnd->SetValueType(val->GetValueInfo()->Type());
return val;
}
else if (addrOpnd->IsVar() && Js::TaggedInt::Is(addrOpnd->m_address))
{
val = this->GetIntConstantValue(Js::TaggedInt::ToInt32(addrOpnd->m_address), instr);
opnd->SetValueType(val->GetValueInfo()->Type());
return val;
}
val = this->GetVarConstantValue(addrOpnd);
return val;
}
case IR::OpndKindSym:
{
// Clear the opnd's value type up-front, so that this code cannot accidentally use the value type set from a previous
// OptSrc on the same instruction (for instance, from an earlier loop prepass). The value type will be set from the
// value if available, before returning from this function.
opnd->SetValueType(ValueType::Uninitialized);
sym = opnd->AsSymOpnd()->m_sym;
// Don't create a new value for ArgSlots and don't copy prop them away.
if (sym->IsStackSym() && sym->AsStackSym()->IsArgSlotSym())
{
return nullptr;
}
// Unless we have profile info, don't create a new value for ArgSlots and don't copy prop them away.
if (sym->IsStackSym() && sym->AsStackSym()->IsParamSlotSym())
{
if (!instr->m_func->IsLoopBody() && instr->m_func->HasProfileInfo())
{
// Skip "this" pointer.
int paramSlotNum = sym->AsStackSym()->GetParamSlotNum() - 2;
if (paramSlotNum >= 0)
{
const auto parameterType = instr->m_func->GetReadOnlyProfileInfo()->GetParameterInfo(static_cast<Js::ArgSlot>(paramSlotNum));
val = NewGenericValue(parameterType);
opnd->SetValueType(val->GetValueInfo()->Type());
return val;
}
}
return nullptr;
}
if (!sym->IsPropertySym())
{
break;
}
originalPropertySym = sym->AsPropertySym();
// Don't give a value to 'arguments' property sym to prevent field copy prop of 'arguments'
if (originalPropertySym->AsPropertySym()->m_propertyId == Js::PropertyIds::arguments &&
originalPropertySym->AsPropertySym()->m_fieldKind == PropertyKindData)
{
if (opnd->AsSymOpnd()->IsPropertySymOpnd())
{
this->FinishOptPropOp(instr, opnd->AsPropertySymOpnd());
}
return nullptr;
}
Value *const objectValue = CurrentBlockData()->FindValue(originalPropertySym->m_stackSym);
opnd->AsSymOpnd()->SetPropertyOwnerValueType(
objectValue ? objectValue->GetValueInfo()->Type() : ValueType::Uninitialized);
sym = this->CopyPropPropertySymObj(opnd->AsSymOpnd(), instr);
if (!DoFieldCopyProp())
{
if (opnd->AsSymOpnd()->IsPropertySymOpnd())
{
this->FinishOptPropOp(instr, opnd->AsPropertySymOpnd());
}
return nullptr;
}
switch (instr->m_opcode)
{
// These need the symbolic reference to the field, don't copy prop the value of the field
case Js::OpCode::DeleteFld:
case Js::OpCode::DeleteRootFld:
case Js::OpCode::DeleteFldStrict:
case Js::OpCode::DeleteRootFldStrict:
case Js::OpCode::ScopedDeleteFld:
case Js::OpCode::ScopedDeleteFldStrict:
case Js::OpCode::LdMethodFromFlags:
case Js::OpCode::BrOnNoProperty:
case Js::OpCode::BrOnNoLocalProperty:
case Js::OpCode::BrOnHasProperty:
case Js::OpCode::BrOnHasLocalProperty:
case Js::OpCode::LdMethodFldPolyInlineMiss:
case Js::OpCode::StSlotChkUndecl:
case Js::OpCode::ScopedLdInst:
return nullptr;
};
if (instr->CallsGetter())
{
return nullptr;
}
if (this->IsLoopPrePass() && this->DoFieldPRE(this->rootLoopPrePass))
{
if (!this->prePassLoop->allFieldsKilled && !this->prePassLoop->fieldKilled->Test(sym->m_id))
{
this->SetLoopFieldInitialValue(this->rootLoopPrePass, instr, sym->AsPropertySym(), originalPropertySym);
}
if (this->IsPREInstrCandidateLoad(instr->m_opcode))
{
// Foreach property sym, remember the first instruction that loads it.
// Can this be done in one call?
if (!this->prePassInstrMap->ContainsKey(sym->m_id))
{
this->prePassInstrMap->AddNew(sym->m_id, instr->CopyWithoutDst());
}
}
}
break;
}
case IR::OpndKindReg:
// Clear the opnd's value type up-front, so that this code cannot accidentally use the value type set from a previous
// OptSrc on the same instruction (for instance, from an earlier loop prepass). The value type will be set from the
// value if available, before returning from this function.
opnd->SetValueType(ValueType::Uninitialized);
sym = opnd->AsRegOpnd()->m_sym;
CurrentBlockData()->MarkTempLastUse(instr, opnd->AsRegOpnd());
if (sym->AsStackSym()->IsTypeSpec())
{
sym = sym->AsStackSym()->GetVarEquivSym(this->func);
}
break;
case IR::OpndKindIndir:
this->OptimizeIndirUses(opnd->AsIndirOpnd(), &instr, indirIndexValRef);
return nullptr;
default:
return nullptr;
}
val = CurrentBlockData()->FindValue(sym);
if (val)
{
Assert(CurrentBlockData()->IsLive(sym) || (sym->IsPropertySym()));
if (instr)
{
opnd = this->CopyProp(opnd, instr, val, parentIndirOpnd);
}
// Check if we freed the operand.
if (opnd == nullptr)
{
return nullptr;
}
// In a loop prepass, determine stack syms that are used before they are defined in the root loop for which the prepass
// is being done. This information is used to do type specialization conversions in the landing pad where appropriate.
if(IsLoopPrePass() &&
sym->IsStackSym() &&
!rootLoopPrePass->symsUsedBeforeDefined->Test(sym->m_id) &&
rootLoopPrePass->landingPad->globOptData.IsLive(sym) && !isAsmJSFunc) // no typespec in asmjs and hence skipping this
{
Value *const landingPadValue = rootLoopPrePass->landingPad->globOptData.FindValue(sym);
if(landingPadValue && val->GetValueNumber() == landingPadValue->GetValueNumber())
{
rootLoopPrePass->symsUsedBeforeDefined->Set(sym->m_id);
ValueInfo *landingPadValueInfo = landingPadValue->GetValueInfo();
if(landingPadValueInfo->IsLikelyNumber())
{
rootLoopPrePass->likelyNumberSymsUsedBeforeDefined->Set(sym->m_id);
if(DoAggressiveIntTypeSpec() ? landingPadValueInfo->IsLikelyInt() : landingPadValueInfo->IsInt())
{
// Can only force int conversions in the landing pad based on likely-int values if aggressive int type
// specialization is enabled.
rootLoopPrePass->likelyIntSymsUsedBeforeDefined->Set(sym->m_id);
}
}
}
}
}
else if ((instr->TransfersSrcValue() || OpCodeAttr::CanCSE(instr->m_opcode)) && (opnd == instr->GetSrc1() || opnd == instr->GetSrc2()))
{
if (sym->IsPropertySym())
{
val = this->CreateFieldSrcValue(sym->AsPropertySym(), originalPropertySym, &opnd, instr);
}
else
{
val = this->NewGenericValue(ValueType::Uninitialized, opnd);
}
}
if (opnd->IsSymOpnd() && opnd->AsSymOpnd()->IsPropertySymOpnd())
{
TryOptimizeInstrWithFixedDataProperty(&instr);
this->FinishOptPropOp(instr, opnd->AsPropertySymOpnd());
}
if (val)
{
ValueType valueType(val->GetValueInfo()->Type());
// This block uses per-instruction profile information on array types to optimize using the best available profile
// information and to prevent infinite bailouts by ensuring array type information is updated on bailouts.
if (valueType.IsLikelyArray() && !valueType.IsDefinite() && !valueType.IsObject() && instr->IsProfiledInstr())
{
// See if we have profile data for the array type
IR::ProfiledInstr *const profiledInstr = instr->AsProfiledInstr();
ValueType profiledArrayType;
bool useAggressiveSpecialization = true;
switch(instr->m_opcode)
{
case Js::OpCode::LdElemI_A:
if(instr->GetSrc1()->IsIndirOpnd() && opnd == instr->GetSrc1()->AsIndirOpnd()->GetBaseOpnd())
{
profiledArrayType = profiledInstr->u.ldElemInfo->GetArrayType();
useAggressiveSpecialization = !profiledInstr->u.ldElemInfo->IsAggressiveSpecializationDisabled();
}
break;
case Js::OpCode::StElemI_A:
case Js::OpCode::StElemI_A_Strict:
case Js::OpCode::StElemC:
if(instr->GetDst()->IsIndirOpnd() && opnd == instr->GetDst()->AsIndirOpnd()->GetBaseOpnd())
{
profiledArrayType = profiledInstr->u.stElemInfo->GetArrayType();
useAggressiveSpecialization = !profiledInstr->u.stElemInfo->IsAggressiveSpecializationDisabled();
}
break;
case Js::OpCode::LdLen_A:
if(instr->GetSrc1()->IsRegOpnd() && opnd == instr->GetSrc1())
{
profiledArrayType = profiledInstr->u.LdLenInfo().GetArrayType();
useAggressiveSpecialization = !profiledInstr->u.LdLenInfo().IsAggressiveSpecializationDisabled();
}
break;
case Js::OpCode::IsIn:
if (instr->GetSrc2()->IsRegOpnd() && opnd == instr->GetSrc2())
{
profiledArrayType = profiledInstr->u.ldElemInfo->GetArrayType();
useAggressiveSpecialization = !profiledInstr->u.ldElemInfo->IsAggressiveSpecializationDisabled();
}
break;
}
if (profiledArrayType.IsLikelyObject())
{
// Ideally we want to use the most specialized type seen by this path, but when that causes bailouts use the least specialized type instead.
if (useAggressiveSpecialization &&
profiledArrayType.GetObjectType() == valueType.GetObjectType() &&
!valueType.IsLikelyNativeIntArray() &&
(
profiledArrayType.HasIntElements() || (valueType.HasVarElements() && profiledArrayType.HasFloatElements())
))
{
// use the more specialized type profiled by the instruction.
valueType = profiledArrayType.SetHasNoMissingValues(valueType.HasNoMissingValues());
ChangeValueType(this->currentBlock, CurrentBlockData()->FindValue(opnd->AsRegOpnd()->m_sym), valueType, false);
}
else if (!useAggressiveSpecialization &&
(profiledArrayType.GetObjectType() != valueType.GetObjectType() ||
(
valueType.IsLikelyNativeArray() &&
(
profiledArrayType.HasVarElements() || (valueType.HasIntElements() && profiledArrayType.HasFloatElements())
)
)
))
{
// Merge array type we pulled from profile with type propagated by dataflow.
if (profiledArrayType.IsLikelyArray())
{
valueType = valueType.Merge(profiledArrayType).SetHasNoMissingValues(valueType.HasNoMissingValues());
}
else
{
valueType = valueType.Merge(profiledArrayType);
}
ChangeValueType(this->currentBlock, CurrentBlockData()->FindValue(opnd->AsRegOpnd()->m_sym), valueType, false, true);
}
}
}
opnd->SetValueType(valueType);
if(!IsLoopPrePass() && opnd->IsSymOpnd() && (valueType.IsDefinite() || valueType.IsNotTaggedValue()))
{
if (opnd->AsSymOpnd()->m_sym->IsPropertySym())
{
// A property sym can only be guaranteed to have a definite value type when implicit calls are disabled from the
// point where the sym was defined with the definite value type. Insert an instruction to indicate to the
// dead-store pass that implicit calls need to be kept disabled until after this instruction.
Assert(DoFieldCopyProp());
CaptureNoImplicitCallUses(opnd, false, instr);
}
}
}
else
{
opnd->SetValueType(ValueType::Uninitialized);
}
return val;
}
/*
* GlobOpt::TryOptimizeInstrWithFixedDataProperty
* Converts Ld[Root]Fld instr to
* * CheckFixedFld
* * Dst = Ld_A <int Constant value>
* This API assumes that the source operand is a Sym/PropertySym kind.
*/
void
GlobOpt::TryOptimizeInstrWithFixedDataProperty(IR::Instr ** const pInstr)
{
Assert(pInstr);
IR::Instr * &instr = *pInstr;
IR::Opnd * src1 = instr->GetSrc1();
Assert(src1 && src1->IsSymOpnd() && src1->AsSymOpnd()->IsPropertySymOpnd());
if(PHASE_OFF(Js::UseFixedDataPropsPhase, instr->m_func))
{
return;
}
if (!this->IsLoopPrePass() && !this->isRecursiveCallOnLandingPad &&
OpCodeAttr::CanLoadFixedFields(instr->m_opcode))
{
instr->TryOptimizeInstrWithFixedDataProperty(&instr, this);
}
}
// Constant prop if possible, otherwise if this value already resides in another
// symbol, reuse this previous symbol. This should help register allocation.
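// For illustration (hypothetical IR sketch, not taken from an actual trace):
//   s2 = Ld_A  5        // s2's value is known to be the int constant 5
//   s3 = Add_A s2, s4   // constant prop rewrites the use of s2 into:
//   s3 = Add_A 5, s4
// When the value is not a constant but is already held in another live sym
// (say s1), the use is rewritten to s1 instead, shortening s2's lifetime.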
IR::Opnd *
GlobOpt::CopyProp(IR::Opnd *opnd, IR::Instr *instr, Value *val, IR::IndirOpnd *parentIndirOpnd)
{
Assert(
parentIndirOpnd
? opnd == parentIndirOpnd->GetBaseOpnd() || opnd == parentIndirOpnd->GetIndexOpnd()
: opnd == instr->GetSrc1() || opnd == instr->GetSrc2() || opnd == instr->GetDst() && opnd->IsIndirOpnd());
if (this->IsLoopPrePass())
{
// Transformations are not legal in prepass...
return opnd;
}
if (instr->m_opcode == Js::OpCode::CheckFixedFld || instr->m_opcode == Js::OpCode::CheckPropertyGuardAndLoadType)
{
// Don't copy prop into CheckFixedFld or CheckPropertyGuardAndLoadType
return opnd;
}
// Don't copy-prop link operands of ExtendedArgs
if (instr->m_opcode == Js::OpCode::ExtendArg_A && opnd == instr->GetSrc2())
{
return opnd;
}
// Don't copy-prop operand of SIMD instr with ExtendedArg operands. Each instr should have its exclusive EA sequence.
if (
Js::IsSimd128Opcode(instr->m_opcode) &&
instr->GetSrc1() != nullptr &&
instr->GetSrc1()->IsRegOpnd() &&
instr->GetSrc2() == nullptr
)
{
StackSym *sym = instr->GetSrc1()->GetStackSym();
if (sym && sym->IsSingleDef() && sym->GetInstrDef()->m_opcode == Js::OpCode::ExtendArg_A)
{
return opnd;
}
}
ValueInfo *valueInfo = val->GetValueInfo();
if (this->func->HasFinally())
{
        // s0 = undefined is added to functions with early exits in try-finally regions; it can get copy-propped and cause incorrect results
if (instr->m_opcode == Js::OpCode::ArgOut_A_Inline && valueInfo->GetSymStore() &&
valueInfo->GetSymStore()->m_id == 0)
{
// We don't want to copy-prop s0 (return symbol) into inlinee code
return opnd;
}
}
// Constant prop?
int32 intConstantValue;
int64 int64ConstantValue;
if (valueInfo->TryGetIntConstantValue(&intConstantValue))
{
if (PHASE_OFF(Js::ConstPropPhase, this->func))
{
return opnd;
}
if ((
instr->m_opcode == Js::OpCode::StElemI_A ||
instr->m_opcode == Js::OpCode::StElemI_A_Strict ||
instr->m_opcode == Js::OpCode::StElemC
) && instr->GetSrc1() == opnd)
{
// Disabling prop to src of native array store, because we were losing the chance to type specialize.
// Is it possible to type specialize this src if we allow constants, etc., to be prop'd here?
if (instr->GetDst()->AsIndirOpnd()->GetBaseOpnd()->GetValueType().IsLikelyNativeArray())
{
return opnd;
}
}
if(opnd != instr->GetSrc1() && opnd != instr->GetSrc2())
{
if(PHASE_OFF(Js::IndirCopyPropPhase, instr->m_func))
{
return opnd;
}
// Const-prop an indir opnd's constant index into its offset
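            // For example (hypothetical IR sketch): given [s1 + s2] where s2 is known
            // to be the constant 4, the index is folded into the offset, yielding
            // [s1 + 4] with no index opnd, provided the sum still fits in an int32.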
IR::Opnd *srcs[] = { instr->GetSrc1(), instr->GetSrc2(), instr->GetDst() };
for(int i = 0; i < sizeof(srcs) / sizeof(srcs[0]); ++i)
{
const auto src = srcs[i];
if(!src || !src->IsIndirOpnd())
{
continue;
}
const auto indir = src->AsIndirOpnd();
if ((int64)indir->GetOffset() + intConstantValue > INT32_MAX)
{
continue;
}
if(opnd == indir->GetIndexOpnd())
{
Assert(indir->GetScale() == 0);
GOPT_TRACE_OPND(opnd, _u("Constant prop indir index into offset (value: %d)\n"), intConstantValue);
this->CaptureByteCodeSymUses(instr);
indir->SetOffset(indir->GetOffset() + intConstantValue);
indir->SetIndexOpnd(nullptr);
}
}
return opnd;
}
if (Js::TaggedInt::IsOverflow(intConstantValue))
{
return opnd;
}
IR::Opnd *constOpnd;
if (opnd->IsVar())
{
IR::AddrOpnd *addrOpnd = IR::AddrOpnd::New(Js::TaggedInt::ToVarUnchecked((int)intConstantValue), IR::AddrOpndKindConstantVar, instr->m_func);
GOPT_TRACE_OPND(opnd, _u("Constant prop %d (value:%d)\n"), addrOpnd->m_address, intConstantValue);
constOpnd = addrOpnd;
}
else
{
// Note: Jit loop body generates some i32 operands...
Assert(opnd->IsInt32() || opnd->IsInt64() || opnd->IsUInt32());
IRType opndType;
IntConstType constVal;
if (opnd->IsUInt32())
{
// avoid sign extension
constVal = (uint32)intConstantValue;
opndType = TyUint32;
}
else
{
constVal = intConstantValue;
opndType = TyInt32;
}
IR::IntConstOpnd *intOpnd = IR::IntConstOpnd::New(constVal, opndType, instr->m_func);
GOPT_TRACE_OPND(opnd, _u("Constant prop %d (value:%d)\n"), intOpnd->GetImmediateValue(instr->m_func), intConstantValue);
constOpnd = intOpnd;
}
#if ENABLE_DEBUG_CONFIG_OPTIONS
//Need to update DumpFieldCopyPropTestTrace for every new opcode that is added for fieldcopyprop
if(Js::Configuration::Global.flags.TestTrace.IsEnabled(Js::FieldCopyPropPhase))
{
instr->DumpFieldCopyPropTestTrace(this->isRecursiveCallOnLandingPad);
}
#endif
this->CaptureByteCodeSymUses(instr);
opnd = instr->ReplaceSrc(opnd, constOpnd);
switch (instr->m_opcode)
{
case Js::OpCode::LdSlot:
case Js::OpCode::LdSlotArr:
case Js::OpCode::LdFld:
case Js::OpCode::LdFldForTypeOf:
case Js::OpCode::LdRootFldForTypeOf:
case Js::OpCode::LdFldForCallApplyTarget:
case Js::OpCode::LdRootFld:
case Js::OpCode::LdMethodFld:
case Js::OpCode::LdRootMethodFld:
case Js::OpCode::LdMethodFromFlags:
case Js::OpCode::ScopedLdMethodFld:
case Js::OpCode::ScopedLdFld:
case Js::OpCode::ScopedLdFldForTypeOf:
instr->m_opcode = Js::OpCode::Ld_A;
case Js::OpCode::Ld_A:
{
IR::Opnd * dst = instr->GetDst();
if (dst->IsRegOpnd() && dst->AsRegOpnd()->m_sym->IsSingleDef())
{
dst->AsRegOpnd()->m_sym->SetIsIntConst((int)intConstantValue);
}
break;
}
case Js::OpCode::ArgOut_A:
case Js::OpCode::ArgOut_A_Inline:
case Js::OpCode::ArgOut_A_FixupForStackArgs:
case Js::OpCode::ArgOut_A_InlineBuiltIn:
if (instr->GetDst()->IsRegOpnd())
{
Assert(instr->GetDst()->AsRegOpnd()->m_sym->m_isSingleDef);
instr->GetDst()->AsRegOpnd()->m_sym->AsStackSym()->SetIsIntConst((int)intConstantValue);
}
else
{
instr->GetDst()->AsSymOpnd()->m_sym->AsStackSym()->SetIsIntConst((int)intConstantValue);
}
break;
case Js::OpCode::TypeofElem:
instr->m_opcode = Js::OpCode::Typeof;
break;
case Js::OpCode::StSlotChkUndecl:
if (instr->GetSrc2() == opnd)
{
// Src2 here should refer to the same location as the Dst operand, which we need to keep live
// due to the implicit read for ChkUndecl.
instr->m_opcode = Js::OpCode::StSlot;
instr->FreeSrc2();
opnd = nullptr;
}
break;
}
return opnd;
}
else if (valueInfo->TryGetIntConstantValue(&int64ConstantValue, false))
{
if (PHASE_OFF(Js::ConstPropPhase, this->func) || !PHASE_ON(Js::Int64ConstPropPhase, this->func))
{
return opnd;
}
Assert(this->func->GetJITFunctionBody()->IsWasmFunction());
if (this->func->GetJITFunctionBody()->IsWasmFunction() && opnd->IsInt64())
{
IR::Int64ConstOpnd *intOpnd = IR::Int64ConstOpnd::New(int64ConstantValue, opnd->GetType(), instr->m_func);
GOPT_TRACE_OPND(opnd, _u("Constant prop %lld (value:%lld)\n"), intOpnd->GetImmediateValue(instr->m_func), int64ConstantValue);
this->CaptureByteCodeSymUses(instr);
opnd = instr->ReplaceSrc(opnd, intOpnd);
}
return opnd;
}
Sym *opndSym = nullptr;
if (opnd->IsRegOpnd())
{
IR::RegOpnd *regOpnd = opnd->AsRegOpnd();
opndSym = regOpnd->m_sym;
}
else if (opnd->IsSymOpnd())
{
IR::SymOpnd *symOpnd = opnd->AsSymOpnd();
opndSym = symOpnd->m_sym;
}
if (!opndSym)
{
return opnd;
}
if (PHASE_OFF(Js::CopyPropPhase, this->func))
{
this->SetSymStoreDirect(valueInfo, opndSym);
return opnd;
}
StackSym *copySym = CurrentBlockData()->GetCopyPropSym(opndSym, val);
if (copySym != nullptr)
{
Assert(!opndSym->IsStackSym() || copySym->GetSymSize() == opndSym->AsStackSym()->GetSymSize());
// Copy prop.
return CopyPropReplaceOpnd(instr, opnd, copySym, parentIndirOpnd);
}
else
{
if (valueInfo->GetSymStore() && instr->m_opcode == Js::OpCode::Ld_A && instr->GetDst()->IsRegOpnd()
&& valueInfo->GetSymStore() == instr->GetDst()->AsRegOpnd()->m_sym)
{
// Avoid resetting symStore after fieldHoisting:
// t1 = LdFld field <- set symStore to fieldHoistSym
// fieldHoistSym = Ld_A t1 <- we're looking at t1 now, but want to copy-prop fieldHoistSym forward
return opnd;
}
this->SetSymStoreDirect(valueInfo, opndSym);
}
return opnd;
}
IR::Opnd *
GlobOpt::CopyPropReplaceOpnd(IR::Instr * instr, IR::Opnd * opnd, StackSym * copySym, IR::IndirOpnd *parentIndirOpnd)
{
Assert(
parentIndirOpnd
? opnd == parentIndirOpnd->GetBaseOpnd() || opnd == parentIndirOpnd->GetIndexOpnd()
: opnd == instr->GetSrc1() || opnd == instr->GetSrc2() || opnd == instr->GetDst() && opnd->IsIndirOpnd());
Assert(CurrentBlockData()->IsLive(copySym));
IR::RegOpnd *regOpnd;
StackSym *newSym = copySym;
GOPT_TRACE_OPND(opnd, _u("Copy prop s%d\n"), newSym->m_id);
#if ENABLE_DEBUG_CONFIG_OPTIONS
//Need to update DumpFieldCopyPropTestTrace for every new opcode that is added for fieldcopyprop
if(Js::Configuration::Global.flags.TestTrace.IsEnabled(Js::FieldCopyPropPhase))
{
instr->DumpFieldCopyPropTestTrace(this->isRecursiveCallOnLandingPad);
}
#endif
this->CaptureByteCodeSymUses(instr);
if (opnd->IsRegOpnd())
{
regOpnd = opnd->AsRegOpnd();
regOpnd->m_sym = newSym;
regOpnd->SetIsJITOptimizedReg(true);
// The dead bit on the opnd is specific to the sym it is referencing. Since we replaced the sym, the bit is reset.
regOpnd->SetIsDead(false);
if(parentIndirOpnd)
{
return regOpnd;
}
}
else
{
// If this is an object type specialized field load inside a loop, and it produces a type value which wasn't live
// before, make sure the type check is left in the loop, because it may be the last type check in the loop protecting
// other fields which are not hoistable and are lexically upstream in the loop. If the check is not ultimately
// needed, the dead store pass will remove it.
if (this->currentBlock->loop != nullptr && opnd->IsSymOpnd() && opnd->AsSymOpnd()->IsPropertySymOpnd())
{
IR::PropertySymOpnd* propertySymOpnd = opnd->AsPropertySymOpnd();
if (CheckIfPropOpEmitsTypeCheck(instr, propertySymOpnd))
{
// We only set guarded properties in the dead store pass, so they shouldn't be set here yet. If they were
// we would need to move them from this operand to the operand which is being copy propagated.
Assert(propertySymOpnd->GetGuardedPropOps() == nullptr);
// We're creating a copy of this operand to be reused in the same spot in the flow, so we can copy all
// flow sensitive fields. However, we will do only a type check here (no property access) and only for
// the sake of downstream instructions, so the flags pertaining to this property access are irrelevant.
IR::PropertySymOpnd* checkObjTypeOpnd = CreateOpndForTypeCheckOnly(propertySymOpnd, instr->m_func);
IR::Instr* checkObjTypeInstr = IR::Instr::New(Js::OpCode::CheckObjType, instr->m_func);
checkObjTypeInstr->SetSrc1(checkObjTypeOpnd);
checkObjTypeInstr->SetByteCodeOffset(instr);
instr->InsertBefore(checkObjTypeInstr);
// Since we inserted this instruction before the one that is being processed in natural flow, we must process
// it for object type spec explicitly here.
FinishOptPropOp(checkObjTypeInstr, checkObjTypeOpnd);
Assert(!propertySymOpnd->IsTypeChecked());
checkObjTypeInstr = this->SetTypeCheckBailOut(checkObjTypeOpnd, checkObjTypeInstr, nullptr);
Assert(checkObjTypeInstr->HasBailOutInfo());
if (this->currentBlock->loop && !this->IsLoopPrePass())
{
// Try hoisting this checkObjType.
// But since this isn't the current instr being optimized, we need to play tricks with
// the byteCodeUse fields...
TrackByteCodeUsesForInstrAddedInOptInstr(checkObjTypeInstr, [&]()
{
TryHoistInvariant(checkObjTypeInstr, this->currentBlock, NULL, CurrentBlockData()->FindValue(copySym), NULL, true);
});
}
}
}
if (opnd->IsSymOpnd() && opnd->GetIsDead())
{
// Take the property sym out of the live fields set
this->EndFieldLifetime(opnd->AsSymOpnd());
}
regOpnd = IR::RegOpnd::New(newSym, opnd->GetType(), instr->m_func);
regOpnd->SetIsJITOptimizedReg(true);
instr->ReplaceSrc(opnd, regOpnd);
}
switch (instr->m_opcode)
{
case Js::OpCode::Ld_A:
if (instr->GetDst()->IsRegOpnd() && instr->GetSrc1()->IsRegOpnd() &&
instr->GetDst()->AsRegOpnd()->GetStackSym() == instr->GetSrc1()->AsRegOpnd()->GetStackSym())
{
this->InsertByteCodeUses(instr, true);
instr->m_opcode = Js::OpCode::Nop;
}
break;
case Js::OpCode::LdSlot:
case Js::OpCode::LdSlotArr:
if (instr->GetDst()->IsRegOpnd() && instr->GetSrc1()->IsRegOpnd() &&
instr->GetDst()->AsRegOpnd()->GetStackSym() == instr->GetSrc1()->AsRegOpnd()->GetStackSym())
{
this->InsertByteCodeUses(instr, true);
instr->m_opcode = Js::OpCode::Nop;
}
else
{
instr->m_opcode = Js::OpCode::Ld_A;
}
break;
case Js::OpCode::StSlotChkUndecl:
if (instr->GetSrc2()->IsRegOpnd())
{
// Src2 here should refer to the same location as the Dst operand, which we need to keep live
// due to the implicit read for ChkUndecl.
instr->m_opcode = Js::OpCode::StSlot;
instr->FreeSrc2();
return nullptr;
}
break;
case Js::OpCode::LdFld:
case Js::OpCode::LdFldForTypeOf:
case Js::OpCode::LdRootFldForTypeOf:
case Js::OpCode::LdFldForCallApplyTarget:
case Js::OpCode::LdRootFld:
case Js::OpCode::LdMethodFld:
case Js::OpCode::LdRootMethodFld:
case Js::OpCode::ScopedLdMethodFld:
case Js::OpCode::ScopedLdFld:
case Js::OpCode::ScopedLdFldForTypeOf:
instr->m_opcode = Js::OpCode::Ld_A;
break;
case Js::OpCode::LdMethodFromFlags:
    // The bailout was already checked at the loop top, so we don't need to check for bailout again inside the loop.
instr->m_opcode = Js::OpCode::Ld_A;
instr->ClearBailOutInfo();
break;
case Js::OpCode::TypeofElem:
instr->m_opcode = Js::OpCode::Typeof;
break;
}
CurrentBlockData()->MarkTempLastUse(instr, regOpnd);
return regOpnd;
}
ValueNumber
GlobOpt::NewValueNumber()
{
ValueNumber valueNumber = this->currentValue++;
if (valueNumber == 0)
{
Js::Throw::OutOfMemory();
}
return valueNumber;
}
Value *GlobOpt::NewValue(ValueInfo *const valueInfo)
{
return NewValue(NewValueNumber(), valueInfo);
}
Value *GlobOpt::NewValue(const ValueNumber valueNumber, ValueInfo *const valueInfo)
{
Assert(valueInfo);
return Value::New(alloc, valueNumber, valueInfo);
}
Value *GlobOpt::CopyValue(Value const *const value)
{
return CopyValue(value, NewValueNumber());
}
Value *GlobOpt::CopyValue(Value const *const value, const ValueNumber valueNumber)
{
Assert(value);
return value->Copy(alloc, valueNumber);
}
Value *
GlobOpt::NewGenericValue(const ValueType valueType)
{
return NewGenericValue(valueType, static_cast<IR::Opnd *>(nullptr));
}
Value *
GlobOpt::NewGenericValue(const ValueType valueType, IR::Opnd *const opnd)
{
// Shouldn't assign a likely-int value to something that is definitely not an int
Assert(!(valueType.IsLikelyInt() && opnd && opnd->IsNotInt()));
ValueInfo *valueInfo = ValueInfo::New(this->alloc, valueType);
Value *val = NewValue(valueInfo);
TrackNewValueForKills(val);
CurrentBlockData()->InsertNewValue(val, opnd);
return val;
}
Value *
GlobOpt::NewGenericValue(const ValueType valueType, Sym *const sym)
{
ValueInfo *valueInfo = ValueInfo::New(this->alloc, valueType);
Value *val = NewValue(valueInfo);
TrackNewValueForKills(val);
CurrentBlockData()->SetValue(val, sym);
return val;
}
Value *
GlobOpt::GetIntConstantValue(const int32 intConst, IR::Instr * instr, IR::Opnd *const opnd)
{
Value *value = nullptr;
Value *const cachedValue = this->intConstantToValueMap->Lookup(intConst, nullptr);
if(cachedValue)
{
// The cached value could be from a different block since this is a global (as opposed to a per-block) cache. Since
// values are cloned for each block, we can't use the same value object. We also can't have two values with the same
// number in one block, so we can't simply copy the cached value either. And finally, there is no deterministic and fast
// way to determine if a value with the same value number exists for this block. So the best we can do with a global
// cache is to check the sym-store's value in the current block to see if it has a value with the same number.
// Otherwise, we have to create a new value with a new value number.
Sym *const symStore = cachedValue->GetValueInfo()->GetSymStore();
if (symStore && CurrentBlockData()->IsLive(symStore))
{
Value *const symStoreValue = CurrentBlockData()->FindValue(symStore);
int32 symStoreIntConstantValue;
if (symStoreValue &&
symStoreValue->GetValueNumber() == cachedValue->GetValueNumber() &&
symStoreValue->GetValueInfo()->TryGetIntConstantValue(&symStoreIntConstantValue) &&
symStoreIntConstantValue == intConst)
{
value = symStoreValue;
}
}
}
if (!value)
{
value = NewIntConstantValue(intConst, instr, !Js::TaggedInt::IsOverflow(intConst));
}
return CurrentBlockData()->InsertNewValue(value, opnd);
}
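// Illustrative sketch (comment only; the constant 5 and sym s7 are hypothetical):
// the global-cache check above amounts to:
//     cached   = intConstantToValueMap[5];     // possibly minted in another block
//     symStore = cached->valueInfo->symStore;  // e.g. stack sym s7
//     if s7 is live in this block
//        && FindValue(s7)->valueNumber == cached->valueNumber
//        && FindValue(s7) still holds the int constant 5
//     then reuse FindValue(s7); otherwise mint a fresh Value with a new value number.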
Value *
GlobOpt::GetIntConstantValue(const int64 intConst, IR::Instr * instr, IR::Opnd *const opnd)
{
Assert(instr->m_func->GetJITFunctionBody()->IsWasmFunction());
Value *value = nullptr;
Value *const cachedValue = this->int64ConstantToValueMap->Lookup(intConst, nullptr);
if (cachedValue)
{
// The cached value could be from a different block since this is a global (as opposed to a per-block) cache. Since
// values are cloned for each block, we can't use the same value object. We also can't have two values with the same
// number in one block, so we can't simply copy the cached value either. And finally, there is no deterministic and fast
// way to determine if a value with the same value number exists for this block. So the best we can do with a global
// cache is to check the sym-store's value in the current block to see if it has a value with the same number.
// Otherwise, we have to create a new value with a new value number.
Sym *const symStore = cachedValue->GetValueInfo()->GetSymStore();
if (symStore && this->currentBlock->globOptData.IsLive(symStore))
{
Value *const symStoreValue = this->currentBlock->globOptData.FindValue(symStore);
int64 symStoreIntConstantValue;
if (symStoreValue &&
symStoreValue->GetValueNumber() == cachedValue->GetValueNumber() &&
symStoreValue->GetValueInfo()->TryGetInt64ConstantValue(&symStoreIntConstantValue, false) &&
symStoreIntConstantValue == intConst)
{
value = symStoreValue;
}
}
}
if (!value)
{
value = NewInt64ConstantValue(intConst, instr);
}
return this->currentBlock->globOptData.InsertNewValue(value, opnd);
}
Value *
GlobOpt::NewInt64ConstantValue(const int64 intConst, IR::Instr* instr)
{
Value * value = NewValue(Int64ConstantValueInfo::New(this->alloc, intConst));
this->int64ConstantToValueMap->Item(intConst, value);
if (!value->GetValueInfo()->GetSymStore() &&
(instr->m_opcode == Js::OpCode::LdC_A_I4 || instr->m_opcode == Js::OpCode::Ld_I4))
{
StackSym * sym = instr->GetDst()->GetStackSym();
Assert(sym && !sym->IsTypeSpec());
this->currentBlock->globOptData.SetValue(value, sym);
this->currentBlock->globOptData.liveVarSyms->Set(sym->m_id);
}
return value;
}
Value *
GlobOpt::NewIntConstantValue(const int32 intConst, IR::Instr * instr, bool isTaggable)
{
Value * value = NewValue(IntConstantValueInfo::New(this->alloc, intConst));
this->intConstantToValueMap->Item(intConst, value);
if (isTaggable &&
!PHASE_OFF(Js::HoistConstIntPhase, this->func))
{
// When creating a new int constant value, make sure it gets a symstore. If the int const doesn't have a symstore,
// any downstream instruction using the same int will have to create a new value (object) for the int.
// This gets in the way of CSE.
value = HoistConstantLoadAndPropagateValueBackward(Js::TaggedInt::ToVarUnchecked(intConst), instr, value);
if (!value->GetValueInfo()->GetSymStore() &&
(instr->m_opcode == Js::OpCode::LdC_A_I4 || instr->m_opcode == Js::OpCode::Ld_I4))
{
StackSym * sym = instr->GetDst()->GetStackSym();
Assert(sym);
if (sym->IsTypeSpec())
{
Assert(sym->IsInt32());
StackSym * varSym = sym->GetVarEquivSym(instr->m_func);
CurrentBlockData()->SetValue(value, varSym);
CurrentBlockData()->liveInt32Syms->Set(varSym->m_id);
}
else
{
CurrentBlockData()->SetValue(value, sym);
CurrentBlockData()->liveVarSyms->Set(sym->m_id);
}
}
}
return value;
}
ValueInfo *
GlobOpt::NewIntRangeValueInfo(const int32 min, const int32 max, const bool wasNegativeZeroPreventedByBailout)
{
return ValueInfo::NewIntRangeValueInfo(this->alloc, min, max, wasNegativeZeroPreventedByBailout);
}
ValueInfo *GlobOpt::NewIntRangeValueInfo(
const ValueInfo *const originalValueInfo,
const int32 min,
const int32 max) const
{
Assert(originalValueInfo);
ValueInfo *valueInfo;
if(min == max)
{
// Since int constant values are const-propped, negative zero tracking does not track them, and so it's okay to ignore
// 'wasNegativeZeroPreventedByBailout'
valueInfo = IntConstantValueInfo::New(alloc, min);
}
else
{
valueInfo =
IntRangeValueInfo::New(
alloc,
min,
max,
min <= 0 && max >= 0 && originalValueInfo->WasNegativeZeroPreventedByBailout());
}
valueInfo->SetSymStore(originalValueInfo->GetSymStore());
return valueInfo;
}
Value *
GlobOpt::NewIntRangeValue(
const int32 min,
const int32 max,
const bool wasNegativeZeroPreventedByBailout,
IR::Opnd *const opnd)
{
ValueInfo *valueInfo = this->NewIntRangeValueInfo(min, max, wasNegativeZeroPreventedByBailout);
Value *val = NewValue(valueInfo);
if (opnd)
{
GOPT_TRACE_OPND(opnd, _u("Range %d (0x%X) to %d (0x%X)\n"), min, min, max, max);
}
CurrentBlockData()->InsertNewValue(val, opnd);
return val;
}
IntBoundedValueInfo *GlobOpt::NewIntBoundedValueInfo(
const ValueInfo *const originalValueInfo,
const IntBounds *const bounds) const
{
Assert(originalValueInfo);
bounds->Verify();
IntBoundedValueInfo *const valueInfo =
IntBoundedValueInfo::New(
originalValueInfo->Type(),
bounds,
(
bounds->ConstantLowerBound() <= 0 &&
bounds->ConstantUpperBound() >= 0 &&
originalValueInfo->WasNegativeZeroPreventedByBailout()
),
alloc);
valueInfo->SetSymStore(originalValueInfo->GetSymStore());
return valueInfo;
}
Value *GlobOpt::NewIntBoundedValue(
const ValueType valueType,
const IntBounds *const bounds,
const bool wasNegativeZeroPreventedByBailout,
IR::Opnd *const opnd)
{
Value *const value = NewValue(IntBoundedValueInfo::New(valueType, bounds, wasNegativeZeroPreventedByBailout, alloc));
CurrentBlockData()->InsertNewValue(value, opnd);
return value;
}
Value *
GlobOpt::NewFloatConstantValue(const FloatConstType floatValue, IR::Opnd *const opnd)
{
FloatConstantValueInfo *valueInfo = FloatConstantValueInfo::New(this->alloc, floatValue);
Value *val = NewValue(valueInfo);
CurrentBlockData()->InsertNewValue(val, opnd);
return val;
}
Value *
GlobOpt::GetVarConstantValue(IR::AddrOpnd *addrOpnd)
{
bool isVar = addrOpnd->IsVar();
bool isString = isVar && addrOpnd->m_localAddress && JITJavascriptString::Is(addrOpnd->m_localAddress);
Value *val = nullptr;
Value *cachedValue = nullptr;
if(this->addrConstantToValueMap->TryGetValue(addrOpnd->m_address, &cachedValue))
{
// The cached value could be from a different block since this is a global (as opposed to a per-block) cache. Since
// values are cloned for each block, we can't use the same value object. We also can't have two values with the same
// number in one block, so we can't simply copy the cached value either. And finally, there is no deterministic and fast
// way to determine if a value with the same value number exists for this block. So the best we can do with a global
// cache is to check the sym-store's value in the current block to see if it has a value with the same number.
// Otherwise, we have to create a new value with a new value number.
Sym *symStore = cachedValue->GetValueInfo()->GetSymStore();
if(symStore && CurrentBlockData()->IsLive(symStore))
{
Value *const symStoreValue = CurrentBlockData()->FindValue(symStore);
if(symStoreValue && symStoreValue->GetValueNumber() == cachedValue->GetValueNumber())
{
ValueInfo *const symStoreValueInfo = symStoreValue->GetValueInfo();
if(symStoreValueInfo->IsVarConstant() && symStoreValueInfo->AsVarConstant()->VarValue() == addrOpnd->m_address)
{
val = symStoreValue;
}
}
}
}
else if (isString)
{
JITJavascriptString* jsString = JITJavascriptString::FromVar(addrOpnd->m_localAddress);
Js::InternalString internalString(jsString->GetString(), jsString->GetLength());
if (this->stringConstantToValueMap->TryGetValue(internalString, &cachedValue))
{
Sym *symStore = cachedValue->GetValueInfo()->GetSymStore();
if (symStore && CurrentBlockData()->IsLive(symStore))
{
Value *const symStoreValue = CurrentBlockData()->FindValue(symStore);
if (symStoreValue && symStoreValue->GetValueNumber() == cachedValue->GetValueNumber())
{
ValueInfo *const symStoreValueInfo = symStoreValue->GetValueInfo();
if (symStoreValueInfo->IsVarConstant())
{
JITJavascriptString * cachedString = JITJavascriptString::FromVar(symStoreValue->GetValueInfo()->AsVarConstant()->VarValue(true));
Js::InternalString cachedInternalString(cachedString->GetString(), cachedString->GetLength());
if (Js::InternalStringComparer::Equals(internalString, cachedInternalString))
{
val = symStoreValue;
}
}
}
}
}
}
if(!val)
{
val = NewVarConstantValue(addrOpnd, isString);
}
addrOpnd->SetValueType(val->GetValueInfo()->Type());
return val;
}
Value *
GlobOpt::NewVarConstantValue(IR::AddrOpnd *addrOpnd, bool isString)
{
VarConstantValueInfo *valueInfo = VarConstantValueInfo::New(this->alloc, addrOpnd->m_address, addrOpnd->GetValueType(), false, addrOpnd->m_localAddress);
Value * value = NewValue(valueInfo);
this->addrConstantToValueMap->Item(addrOpnd->m_address, value);
if (isString)
{
JITJavascriptString* jsString = JITJavascriptString::FromVar(addrOpnd->m_localAddress);
Js::InternalString internalString(jsString->GetString(), jsString->GetLength());
this->stringConstantToValueMap->Item(internalString, value);
}
return value;
}
Value *
GlobOpt::HoistConstantLoadAndPropagateValueBackward(Js::Var varConst, IR::Instr * origInstr, Value * value)
{
if (this->IsLoopPrePass() ||
((this->currentBlock == this->func->m_fg->blockList) &&
origInstr->TransfersSrcValue()))
{
return value;
}
// Only hoisting taggable int const loads for now. Could be extended to other constants (floats, strings, addr opnds) if we see some benefit.
Assert(Js::TaggedInt::Is(varConst));
// Insert a load of the constant at the top of the function
StackSym * dstSym = StackSym::New(this->func);
IR::RegOpnd * constRegOpnd = IR::RegOpnd::New(dstSym, TyVar, this->func);
IR::Instr * loadInstr = IR::Instr::NewConstantLoad(constRegOpnd, (intptr_t)varConst, ValueType::GetInt(true), this->func);
this->func->m_fg->blockList->GetFirstInstr()->InsertAfter(loadInstr);
// Type-spec the load (Support for floats needs to be added when we start hoisting float constants).
bool typeSpecedToInt = false;
if (Js::TaggedInt::Is(varConst) && !IsTypeSpecPhaseOff(this->func))
{
typeSpecedToInt = true;
loadInstr->m_opcode = Js::OpCode::Ld_I4;
ToInt32Dst(loadInstr, loadInstr->GetDst()->AsRegOpnd(), this->currentBlock);
loadInstr->GetDst()->GetStackSym()->SetIsConst();
}
else
{
CurrentBlockData()->liveVarSyms->Set(dstSym->m_id);
}
// Add the value (object) to the current block's symToValueMap and propagate the value backward to all relevant blocks so it is available on merges.
value = CurrentBlockData()->InsertNewValue(value, constRegOpnd);
BVSparse<JitArenaAllocator>* GlobOptBlockData::*bv;
bv = typeSpecedToInt ? &GlobOptBlockData::liveInt32Syms : &GlobOptBlockData::liveVarSyms; // Will need to be expanded when we start hoisting float constants.
if (this->currentBlock != this->func->m_fg->blockList)
{
for (InvariantBlockBackwardIterator it(this, this->currentBlock, this->func->m_fg->blockList, nullptr);
it.IsValid();
it.MoveNext())
{
BasicBlock * block = it.Block();
(block->globOptData.*bv)->Set(dstSym->m_id);
if (!block->globOptData.FindValue(dstSym))
{
Value *const valueCopy = CopyValue(value, value->GetValueNumber());
block->globOptData.SetValue(valueCopy, dstSym);
}
}
}
return value;
}
Value *
GlobOpt::NewFixedFunctionValue(Js::JavascriptFunction *function, IR::AddrOpnd *addrOpnd)
{
Assert(function != nullptr);
Value *val = nullptr;
Value *cachedValue = nullptr;
if(this->addrConstantToValueMap->TryGetValue(addrOpnd->m_address, &cachedValue))
{
// The cached value could be from a different block since this is a global (as opposed to a per-block) cache. Since
// values are cloned for each block, we can't use the same value object. We also can't have two values with the same
// number in one block, so we can't simply copy the cached value either. And finally, there is no deterministic and fast
// way to determine if a value with the same value number exists for this block. So the best we can do with a global
// cache is to check the sym-store's value in the current block to see if it has a value with the same number.
// Otherwise, we have to create a new value with a new value number.
Sym *symStore = cachedValue->GetValueInfo()->GetSymStore();
if(symStore && CurrentBlockData()->IsLive(symStore))
{
Value *const symStoreValue = CurrentBlockData()->FindValue(symStore);
if(symStoreValue && symStoreValue->GetValueNumber() == cachedValue->GetValueNumber())
{
ValueInfo *const symStoreValueInfo = symStoreValue->GetValueInfo();
if(symStoreValueInfo->IsVarConstant())
{
VarConstantValueInfo *const symStoreVarConstantValueInfo = symStoreValueInfo->AsVarConstant();
if(symStoreVarConstantValueInfo->VarValue() == addrOpnd->m_address &&
symStoreVarConstantValueInfo->IsFunction())
{
val = symStoreValue;
}
}
}
}
}
if(!val)
{
VarConstantValueInfo *valueInfo = VarConstantValueInfo::New(this->alloc, function, addrOpnd->GetValueType(), true, addrOpnd->m_localAddress);
val = NewValue(valueInfo);
this->addrConstantToValueMap->AddNew(addrOpnd->m_address, val);
}
CurrentBlockData()->InsertNewValue(val, addrOpnd);
return val;
}
StackSym *GlobOpt::GetTaggedIntConstantStackSym(const int32 intConstantValue) const
{
Assert(!Js::TaggedInt::IsOverflow(intConstantValue));
return intConstantToStackSymMap->Lookup(intConstantValue, nullptr);
}
StackSym *GlobOpt::GetOrCreateTaggedIntConstantStackSym(const int32 intConstantValue) const
{
StackSym *stackSym = GetTaggedIntConstantStackSym(intConstantValue);
if(stackSym)
{
return stackSym;
}
    stackSym = StackSym::New(TyVar, func);
intConstantToStackSymMap->Add(intConstantValue, stackSym);
return stackSym;
}
Sym *
GlobOpt::SetSymStore(ValueInfo *valueInfo, Sym *sym)
{
if (sym->IsStackSym())
{
StackSym *stackSym = sym->AsStackSym();
if (stackSym->IsTypeSpec())
{
stackSym = stackSym->GetVarEquivSym(this->func);
sym = stackSym;
}
}
if (valueInfo->GetSymStore() == nullptr || valueInfo->GetSymStore()->IsPropertySym())
{
SetSymStoreDirect(valueInfo, sym);
}
return sym;
}
void
GlobOpt::SetSymStoreDirect(ValueInfo * valueInfo, Sym * sym)
{
Sym * prevSymStore = valueInfo->GetSymStore();
CurrentBlockData()->SetChangedSym(prevSymStore);
valueInfo->SetSymStore(sym);
}
// Figure out the Value of this dst.
Value *
GlobOpt::ValueNumberDst(IR::Instr **pInstr, Value *src1Val, Value *src2Val)
{
IR::Instr *&instr = *pInstr;
IR::Opnd *dst = instr->GetDst();
Value *dstVal = nullptr;
Sym *sym;
if (instr->CallsSetter())
{
return nullptr;
}
if (dst == nullptr)
{
return nullptr;
}
switch (dst->GetKind())
{
case IR::OpndKindSym:
sym = dst->AsSymOpnd()->m_sym;
break;
case IR::OpndKindReg:
sym = dst->AsRegOpnd()->m_sym;
if (OpCodeAttr::TempNumberProducing(instr->m_opcode))
{
CurrentBlockData()->isTempSrc->Set(sym->m_id);
}
else if (OpCodeAttr::TempNumberTransfer(instr->m_opcode))
{
IR::Opnd *src1 = instr->GetSrc1();
if (src1->IsRegOpnd() && CurrentBlockData()->isTempSrc->Test(src1->AsRegOpnd()->m_sym->m_id))
{
StackSym *src1Sym = src1->AsRegOpnd()->m_sym;
                // isTempSrc is used for marking isTempLastUse, which is used to generate AddLeftDead()
                // calls instead of the normal Add helpers. It tells the runtime that concats can use string
                // builders.
                // We need to be careful in the case where src1 points to a string builder and is getting aliased.
                // Clear the bit on src and dst of the transfer instr in this case, unless we can prove src1
                // isn't pointing at a string builder, e.g. if it is single-def and the def instr is not an Add
                // but is TempNumberProducing.
if (src1Sym->IsSingleDef() && src1Sym->m_instrDef->m_opcode != Js::OpCode::Add_A
&& OpCodeAttr::TempNumberProducing(src1Sym->m_instrDef->m_opcode))
{
CurrentBlockData()->isTempSrc->Set(sym->m_id);
}
else
{
CurrentBlockData()->isTempSrc->Clear(src1->AsRegOpnd()->m_sym->m_id);
CurrentBlockData()->isTempSrc->Clear(sym->m_id);
}
}
else
{
CurrentBlockData()->isTempSrc->Clear(sym->m_id);
}
}
else
{
CurrentBlockData()->isTempSrc->Clear(sym->m_id);
}
break;
case IR::OpndKindIndir:
return nullptr;
default:
return nullptr;
}
int32 min1, max1, min2, max2, newMin, newMax;
ValueInfo *src1ValueInfo = (src1Val ? src1Val->GetValueInfo() : nullptr);
ValueInfo *src2ValueInfo = (src2Val ? src2Val->GetValueInfo() : nullptr);
switch (instr->m_opcode)
{
case Js::OpCode::Conv_PrimStr:
AssertMsg(instr->GetDst()->GetValueType().IsString(),
"Creator of this instruction should have set the type");
if (this->IsLoopPrePass() || src1ValueInfo == nullptr || !src1ValueInfo->IsPrimitive())
{
break;
}
instr->m_opcode = Js::OpCode::Conv_Str;
// fall-through
case Js::OpCode::Conv_Str:
// This opcode is commented out since we don't track regex information in GlobOpt now.
//case Js::OpCode::Coerce_Regex:
case Js::OpCode::Coerce_Str:
AssertMsg(instr->GetDst()->GetValueType().IsString(),
"Creator of this instruction should have set the type");
// Due to fall through and the fact that Ld_A only takes one source,
// free the other source here.
if (instr->GetSrc2() && !(this->IsLoopPrePass() || src1ValueInfo == nullptr || !src1ValueInfo->IsString()))
{
instr->FreeSrc2();
}
// fall-through
case Js::OpCode::Coerce_StrOrRegex:
// We don't set the ValueType of src1 for Coerce_StrOrRegex, hence skip the ASSERT
if (this->IsLoopPrePass() || src1ValueInfo == nullptr || !src1ValueInfo->IsString())
{
break;
}
instr->m_opcode = Js::OpCode::Ld_A;
// fall-through
case Js::OpCode::BytecodeArgOutCapture:
case Js::OpCode::LdAsmJsFunc:
case Js::OpCode::Ld_A:
case Js::OpCode::Ld_I4:
// Propagate sym attributes across the reg copy.
if (!this->IsLoopPrePass() && instr->GetSrc1()->IsRegOpnd())
{
if (dst->AsRegOpnd()->m_sym->IsSingleDef())
{
dst->AsRegOpnd()->m_sym->CopySymAttrs(instr->GetSrc1()->AsRegOpnd()->m_sym);
}
}
if (instr->IsProfiledInstr())
{
const ValueType profiledValueType(instr->AsProfiledInstr()->u.FldInfo().valueType);
if(!(
profiledValueType.IsLikelyInt() &&
(
(dst->IsRegOpnd() && dst->AsRegOpnd()->m_sym->m_isNotNumber) ||
(instr->GetSrc1()->IsRegOpnd() && instr->GetSrc1()->AsRegOpnd()->m_sym->m_isNotNumber)
)
))
{
if(!src1ValueInfo)
{
dstVal = this->NewGenericValue(profiledValueType, dst);
}
else if(src1ValueInfo->IsUninitialized())
{
if(IsLoopPrePass())
{
dstVal = this->NewGenericValue(profiledValueType, dst);
}
else
{
// Assuming the profile data gives more precise value types based on the path it took at runtime, we
// can improve the original value type.
src1ValueInfo->Type() = profiledValueType;
instr->GetSrc1()->SetValueType(profiledValueType);
}
}
}
}
if (dstVal == nullptr)
{
// Ld_A is just transferring the value
dstVal = this->ValueNumberTransferDst(instr, src1Val);
}
break;
case Js::OpCode::ExtendArg_A:
{
// SIMD_JS
// We avoid transforming EAs to Lds to keep the IR shape consistent and avoid CSEing of EAs.
// CSEOptimize only assigns a Value to the EA dst, and doesn't turn it to a Ld. If this happened, we shouldn't assign a new Value here.
if (DoCSE())
{
IR::Opnd * currDst = instr->GetDst();
Value * currDstVal = CurrentBlockData()->FindValue(currDst->GetStackSym());
if (currDstVal != nullptr)
{
return currDstVal;
}
}
break;
}
case Js::OpCode::CheckFixedFld:
AssertMsg(false, "CheckFixedFld doesn't have a dst, so we should never get here");
break;
case Js::OpCode::LdSlot:
case Js::OpCode::LdSlotArr:
case Js::OpCode::LdFld:
case Js::OpCode::LdFldForTypeOf:
case Js::OpCode::LdFldForCallApplyTarget:
// Do not transfer value type on LdRootFldForTypeOf to prevent copy-prop to LdRootFld in case the field doesn't exist since LdRootFldForTypeOf does not throw.
// Same goes for ScopedLdFldForTypeOf as we'll end up loading the property from the root object if the property is not in the scope chain.
//case Js::OpCode::LdRootFldForTypeOf:
//case Js::OpCode::ScopedLdFldForTypeOf:
case Js::OpCode::LdRootFld:
case Js::OpCode::LdMethodFld:
case Js::OpCode::LdRootMethodFld:
case Js::OpCode::ScopedLdMethodFld:
case Js::OpCode::LdMethodFromFlags:
case Js::OpCode::ScopedLdFld:
if (instr->IsProfiledInstr())
{
ValueType profiledValueType(instr->AsProfiledInstr()->u.FldInfo().valueType);
if(!(profiledValueType.IsLikelyInt() && dst->IsRegOpnd() && dst->AsRegOpnd()->m_sym->m_isNotNumber))
{
if(!src1ValueInfo)
{
dstVal = this->NewGenericValue(profiledValueType, dst);
}
else if(src1ValueInfo->IsUninitialized())
{
if(IsLoopPrePass() && (!dst->IsRegOpnd() || !dst->AsRegOpnd()->m_sym->IsSingleDef()))
{
dstVal = this->NewGenericValue(profiledValueType, dst);
}
else
{
// Assuming the profile data gives more precise value types based on the path it took at runtime, we
// can improve the original value type.
src1ValueInfo->Type() = profiledValueType;
instr->GetSrc1()->SetValueType(profiledValueType);
}
}
}
}
if (dstVal == nullptr)
{
dstVal = this->ValueNumberTransferDst(instr, src1Val);
}
if(!this->IsLoopPrePass())
{
            // We cannot transfer the value if the field hasn't been copy-prop'd, because we don't generate
            // an implicit call bailout between those values if we don't have "live fields", unless we are hoisting the field.
ValueInfo *dstValueInfo = (dstVal ? dstVal->GetValueInfo() : nullptr);
// Update symStore if it isn't a stackSym
if (dstVal && (!dstValueInfo->GetSymStore() || !dstValueInfo->GetSymStore()->IsStackSym()))
{
Assert(dst->IsRegOpnd());
this->SetSymStoreDirect(dstValueInfo, dst->AsRegOpnd()->m_sym);
}
if (src1Val != dstVal)
{
CurrentBlockData()->SetValue(dstVal, instr->GetSrc1());
}
}
break;
case Js::OpCode::LdC_A_R8:
case Js::OpCode::LdC_A_I4:
case Js::OpCode::ArgIn_A:
dstVal = src1Val;
break;
case Js::OpCode::LdStr:
if (src1Val == nullptr)
{
src1Val = NewGenericValue(ValueType::String, dst);
}
dstVal = src1Val;
break;
    // LdElemUndef only assigns undef if the field doesn't exist.
    // Since we don't actually know what the value is, we can't really copy-prop it.
//case Js::OpCode::LdElemUndef:
case Js::OpCode::StSlot:
case Js::OpCode::StSlotChkUndecl:
case Js::OpCode::StFld:
case Js::OpCode::StRootFld:
case Js::OpCode::StFldStrict:
case Js::OpCode::StRootFldStrict:
case Js::OpCode::InitFld:
case Js::OpCode::InitComputedProperty:
if (DoFieldCopyProp())
{
if (src1Val == nullptr)
{
// src1 may have no value if it's not a valid var, e.g., NULL for let/const initialization.
// Consider creating generic values for such things.
return nullptr;
}
AssertMsg(!src2Val, "Bad src Values...");
Assert(sym->IsPropertySym());
SymID symId = sym->m_id;
Assert(instr->m_opcode == Js::OpCode::StSlot || instr->m_opcode == Js::OpCode::StSlotChkUndecl || !CurrentBlockData()->liveFields->Test(symId));
CurrentBlockData()->liveFields->Set(symId);
if (!this->IsLoopPrePass() && dst->GetIsDead())
{
// Take the property sym out of the live fields set (with special handling for loops).
this->EndFieldLifetime(dst->AsSymOpnd());
}
dstVal = this->ValueNumberTransferDst(instr, src1Val);
}
else
{
return nullptr;
}
break;
case Js::OpCode::Conv_Num:
if(src1ValueInfo->IsNumber())
{
dstVal = ValueNumberTransferDst(instr, src1Val);
}
else
{
return NewGenericValue(src1ValueInfo->Type().ToDefiniteAnyNumber().SetCanBeTaggedValue(true), dst);
}
break;
case Js::OpCode::Not_A:
{
if (!src1Val || !src1ValueInfo->GetIntValMinMax(&min1, &max1, this->DoAggressiveIntTypeSpec()))
{
min1 = INT32_MIN;
max1 = INT32_MAX;
}
this->PropagateIntRangeForNot(min1, max1, &newMin, &newMax);
return CreateDstUntransferredIntValue(newMin, newMax, instr, src1Val, src2Val);
}
case Js::OpCode::Xor_A:
case Js::OpCode::Or_A:
case Js::OpCode::And_A:
case Js::OpCode::Shl_A:
case Js::OpCode::Shr_A:
case Js::OpCode::ShrU_A:
{
if (!src1Val || !src1ValueInfo->GetIntValMinMax(&min1, &max1, this->DoAggressiveIntTypeSpec()))
{
min1 = INT32_MIN;
max1 = INT32_MAX;
}
if (!src2Val || !src2ValueInfo->GetIntValMinMax(&min2, &max2, this->DoAggressiveIntTypeSpec()))
{
min2 = INT32_MIN;
max2 = INT32_MAX;
}
if (instr->m_opcode == Js::OpCode::ShrU_A &&
min1 < 0 &&
IntConstantBounds(min2, max2).And_0x1f().Contains(0))
{
// Src1 may be too large to represent as a signed int32, and src2 may be zero.
// Since the result can therefore be too large to represent as a signed int32,
// include Number in the value type.
return CreateDstUntransferredValue(
ValueType::AnyNumber.SetCanBeTaggedValue(true), instr, src1Val, src2Val);
}
this->PropagateIntRangeBinary(instr, min1, max1, min2, max2, &newMin, &newMax);
return CreateDstUntransferredIntValue(newMin, newMax, instr, src1Val, src2Val);
}
case Js::OpCode::Incr_A:
case Js::OpCode::Decr_A:
{
ValueType valueType;
if(src1Val)
{
valueType = src1Val->GetValueInfo()->Type().ToDefiniteAnyNumber();
}
else
{
valueType = ValueType::Number;
}
return CreateDstUntransferredValue(valueType.SetCanBeTaggedValue(true), instr, src1Val, src2Val);
}
case Js::OpCode::Add_A:
{
ValueType valueType;
if (src1Val && src1ValueInfo->IsLikelyNumber() && src2Val && src2ValueInfo->IsLikelyNumber())
{
if(src1ValueInfo->IsLikelyInt() && src2ValueInfo->IsLikelyInt())
{
                // When doing aggressive int type spec, just assume the result is likely going to be int
                // if both inputs are ints.
const bool isLikelyTagged = src1ValueInfo->IsLikelyTaggedInt() && src2ValueInfo->IsLikelyTaggedInt();
if(src1ValueInfo->IsNumber() && src2ValueInfo->IsNumber())
{
// If both of them are numbers then we can definitely say that the result is a number.
valueType = ValueType::GetNumberAndLikelyInt(isLikelyTagged);
}
else
{
// This is only likely going to be int but can be a string as well.
valueType = ValueType::GetInt(isLikelyTagged).ToLikely();
}
}
else
{
                // We can only be certain of anything if both of them are numbers.
                // Otherwise, the result could be a string.
if (src1ValueInfo->IsNumber() && src2ValueInfo->IsNumber())
{
if (src1ValueInfo->IsFloat() || src2ValueInfo->IsFloat())
{
// If one of them is a float, the result probably is a float instead of just int
// but should always be a number.
valueType = ValueType::Float.SetCanBeTaggedValue(true);
}
else
{
// Could be int, could be number
valueType = ValueType::Number.SetCanBeTaggedValue(true);
}
}
else if (src1ValueInfo->IsLikelyFloat() || src2ValueInfo->IsLikelyFloat())
{
// Result is likely a float (but can be anything)
valueType = ValueType::Float.ToLikely();
}
else
{
// Otherwise it is a likely int or float (but can be anything)
valueType = ValueType::Number.ToLikely();
}
}
}
else if((src1Val && src1ValueInfo->IsString()) || (src2Val && src2ValueInfo->IsString()))
{
// String + anything should always result in a string
valueType = ValueType::String;
}
else if((src1Val && src1ValueInfo->IsNotString() && src1ValueInfo->IsPrimitive())
&& (src2Val && src2ValueInfo->IsNotString() && src2ValueInfo->IsPrimitive()))
{
// If src1 and src2 are not strings and primitive, add should yield a number.
valueType = ValueType::Number.SetCanBeTaggedValue(true);
}
else if((src1Val && src1ValueInfo->IsLikelyString()) || (src2Val && src2ValueInfo->IsLikelyString()))
{
            // Likely-string + anything should always result in a likely string
valueType = ValueType::String.ToLikely();
}
else
{
// Number or string. Could make the value a merge of Number and String, but Uninitialized is more useful at the moment.
Assert(valueType.IsUninitialized());
}
return CreateDstUntransferredValue(valueType, instr, src1Val, src2Val);
}
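    // Illustrative summary (comment only) of the Add_A inference above:
    //     definite int    + definite int    -> number, likely (tagged) int
    //     likely int      + likely int      -> likely int (result could still be a string)
    //     definite number + definite number -> float if either side is float, else number
    //     string          + anything        -> string
    //     non-string prim + non-string prim -> number
    //     likely string   + anything        -> likely string
    //     otherwise                         -> uninitialized (unknown)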
case Js::OpCode::Div_A:
{
ValueType divValueType = GetDivValueType(instr, src1Val, src2Val, false);
if (divValueType.IsLikelyInt() || divValueType.IsFloat())
{
return CreateDstUntransferredValue(divValueType.SetCanBeTaggedValue(true), instr, src1Val, src2Val);
}
}
// fall-through
case Js::OpCode::Sub_A:
case Js::OpCode::Mul_A:
case Js::OpCode::Rem_A:
{
ValueType valueType;
if( src1Val &&
src1ValueInfo->IsLikelyInt() &&
src2Val &&
src2ValueInfo->IsLikelyInt() &&
instr->m_opcode != Js::OpCode::Div_A)
{
const bool isLikelyTagged =
src1ValueInfo->IsLikelyTaggedInt() && (src2ValueInfo->IsLikelyTaggedInt() || instr->m_opcode == Js::OpCode::Rem_A);
if(src1ValueInfo->IsNumber() && src2ValueInfo->IsNumber())
{
valueType = ValueType::GetNumberAndLikelyInt(isLikelyTagged);
}
else
{
valueType = ValueType::GetInt(isLikelyTagged).ToLikely();
}
}
else if ((src1Val && src1ValueInfo->IsLikelyFloat()) || (src2Val && src2ValueInfo->IsLikelyFloat()))
{
// This should ideally be NewNumberAndLikelyFloatValue since we know the result is a number but not sure if it will
// be a float value. However, that Number/LikelyFloat value type doesn't exist currently and all the necessary
// checks are done for float values (tagged int checks, etc.) so it's sufficient to just create a float value here.
valueType = ValueType::Float.SetCanBeTaggedValue(true);
}
else
{
valueType = ValueType::Number.SetCanBeTaggedValue(true);
}
return CreateDstUntransferredValue(valueType, instr, src1Val, src2Val);
}
case Js::OpCode::CallI:
Assert(dst->IsRegOpnd());
return NewGenericValue(dst->AsRegOpnd()->GetValueType(), dst);
case Js::OpCode::LdElemI_A:
{
dstVal = ValueNumberLdElemDst(pInstr, src1Val);
const ValueType baseValueType(instr->GetSrc1()->AsIndirOpnd()->GetBaseOpnd()->GetValueType());
if( (
baseValueType.IsLikelyNativeArray() ||
#ifdef _M_IX86
(
!AutoSystemInfo::Data.SSE2Available() &&
baseValueType.IsLikelyObject() &&
(
baseValueType.GetObjectType() == ObjectType::Float32Array ||
baseValueType.GetObjectType() == ObjectType::Float64Array
)
)
#else
false
#endif
) &&
instr->GetDst()->IsVar() &&
instr->HasBailOutInfo())
{
// The lowerer is not going to generate a fast path for this case. Remove any bailouts that require the fast
// path. Note that the removed bailouts should not be necessary for correctness.
IR::BailOutKind bailOutKind = instr->GetBailOutKind();
if(bailOutKind & IR::BailOutOnArrayAccessHelperCall)
{
bailOutKind -= IR::BailOutOnArrayAccessHelperCall;
}
if(bailOutKind == IR::BailOutOnImplicitCallsPreOp)
{
bailOutKind -= IR::BailOutOnImplicitCallsPreOp;
}
if(bailOutKind)
{
instr->SetBailOutKind(bailOutKind);
}
else
{
instr->ClearBailOutInfo();
}
}
return dstVal;
}
case Js::OpCode::LdMethodElem:
        // Not worth profiling this; just assume it's likely object (it should be likely function, but ValueType does not
        // track functions currently, so use ObjectType::Object instead)
dstVal = NewGenericValue(ValueType::GetObject(ObjectType::Object).ToLikely(), dst);
if(instr->GetSrc1()->AsIndirOpnd()->GetBaseOpnd()->GetValueType().IsLikelyNativeArray() && instr->HasBailOutInfo())
{
// The lowerer is not going to generate a fast path for this case. Remove any bailouts that require the fast
// path. Note that the removed bailouts should not be necessary for correctness.
IR::BailOutKind bailOutKind = instr->GetBailOutKind();
if(bailOutKind & IR::BailOutOnArrayAccessHelperCall)
{
bailOutKind -= IR::BailOutOnArrayAccessHelperCall;
}
if(bailOutKind == IR::BailOutOnImplicitCallsPreOp)
{
bailOutKind -= IR::BailOutOnImplicitCallsPreOp;
}
if(bailOutKind)
{
instr->SetBailOutKind(bailOutKind);
}
else
{
instr->ClearBailOutInfo();
}
}
return dstVal;
case Js::OpCode::StElemI_A:
case Js::OpCode::StElemI_A_Strict:
dstVal = this->ValueNumberTransferDst(instr, src1Val);
break;
case Js::OpCode::LdLen_A:
if (instr->IsProfiledInstr())
{
const ValueType profiledValueType(instr->AsProfiledInstr()->u.FldInfo().valueType);
if(!(profiledValueType.IsLikelyInt() && dst->AsRegOpnd()->m_sym->m_isNotNumber))
{
return this->NewGenericValue(profiledValueType, dst);
}
}
break;
case Js::OpCode::BrOnEmpty:
case Js::OpCode::BrOnNotEmpty:
Assert(dst->IsRegOpnd());
Assert(dst->GetValueType().IsString());
return this->NewGenericValue(ValueType::String, dst);
case Js::OpCode::IsInst:
case Js::OpCode::LdTrue:
case Js::OpCode::LdFalse:
case Js::OpCode::CmEq_A:
case Js::OpCode::CmSrEq_A:
case Js::OpCode::CmNeq_A:
case Js::OpCode::CmSrNeq_A:
case Js::OpCode::CmLe_A:
case Js::OpCode::CmUnLe_A:
case Js::OpCode::CmLt_A:
case Js::OpCode::CmUnLt_A:
case Js::OpCode::CmGe_A:
case Js::OpCode::CmUnGe_A:
case Js::OpCode::CmGt_A:
case Js::OpCode::CmUnGt_A:
return this->NewGenericValue(ValueType::Boolean, dst);
case Js::OpCode::LdUndef:
return this->NewGenericValue(ValueType::Undefined, dst);
case Js::OpCode::LdC_A_Null:
return this->NewGenericValue(ValueType::Null, dst);
case Js::OpCode::LdThis:
if (!PHASE_OFF(Js::OptTagChecksPhase, this->func) &&
(src1ValueInfo == nullptr || src1ValueInfo->IsUninitialized()))
{
return this->NewGenericValue(ValueType::GetObject(ObjectType::Object).ToLikely().SetCanBeTaggedValue(false), dst);
}
break;
case Js::OpCode::Typeof:
case Js::OpCode::TypeofElem:
return this->NewGenericValue(ValueType::String, dst);
case Js::OpCode::InitLocalClosure:
Assert(instr->GetDst());
Assert(instr->GetDst()->IsRegOpnd());
IR::RegOpnd *regOpnd = instr->GetDst()->AsRegOpnd();
StackSym *opndStackSym = regOpnd->m_sym;
Assert(opndStackSym != nullptr);
ObjectSymInfo *objectSymInfo = opndStackSym->m_objectInfo;
Assert(objectSymInfo != nullptr);
for (PropertySym *localVarSlotList = objectSymInfo->m_propertySymList; localVarSlotList; localVarSlotList = localVarSlotList->m_nextInStackSymList)
{
this->slotSyms->Set(localVarSlotList->m_id);
}
break;
}
if (dstVal == nullptr)
{
return this->NewGenericValue(dst->GetValueType(), dst);
}
return CurrentBlockData()->SetValue(dstVal, dst);
}
Value *
GlobOpt::ValueNumberLdElemDst(IR::Instr **pInstr, Value *srcVal)
{
IR::Instr *&instr = *pInstr;
IR::Opnd *dst = instr->GetDst();
Value *dstVal = nullptr;
int32 newMin, newMax;
ValueInfo *srcValueInfo = (srcVal ? srcVal->GetValueInfo() : nullptr);
ValueType profiledElementType;
if (instr->IsProfiledInstr())
{
profiledElementType = instr->AsProfiledInstr()->u.ldElemInfo->GetElementType();
if(!(profiledElementType.IsLikelyInt() && dst->IsRegOpnd() && dst->AsRegOpnd()->m_sym->m_isNotNumber) &&
srcVal &&
srcValueInfo->IsUninitialized())
{
if(IsLoopPrePass())
{
dstVal = NewGenericValue(profiledElementType, dst);
}
else
{
// Assuming the profile data gives more precise value types based on the path it took at runtime, we
// can improve the original value type.
srcValueInfo->Type() = profiledElementType;
instr->GetSrc1()->SetValueType(profiledElementType);
}
}
}
IR::IndirOpnd *src = instr->GetSrc1()->AsIndirOpnd();
const ValueType baseValueType(src->GetBaseOpnd()->GetValueType());
if (instr->DoStackArgsOpt() ||
!(
baseValueType.IsLikelyOptimizedTypedArray() ||
(baseValueType.IsLikelyNativeArray() && instr->IsProfiledInstr()) // Specialized native array lowering for LdElem requires that it is profiled.
) ||
(!this->DoTypedArrayTypeSpec() && baseValueType.IsLikelyOptimizedTypedArray()) ||
// Don't do type spec on a native array with a history of accessing gaps, as this would cause a bailout
(!this->DoNativeArrayTypeSpec() && baseValueType.IsLikelyNativeArray()) ||
!ShouldExpectConventionalArrayIndexValue(src))
{
if(DoTypedArrayTypeSpec() && !IsLoopPrePass())
{
GOPT_TRACE_INSTR(instr, _u("Didn't specialize array access.\n"));
if (PHASE_TRACE(Js::TypedArrayTypeSpecPhase, this->func))
{
char16 debugStringBuffer[MAX_FUNCTION_BODY_DEBUG_STRING_SIZE];
char baseValueTypeStr[VALUE_TYPE_MAX_STRING_SIZE];
baseValueType.ToString(baseValueTypeStr);
Output::Print(_u("Typed Array Optimization: function: %s (%s): instr: %s, base value type: %S, did not type specialize, because %s.\n"),
this->func->GetJITFunctionBody()->GetDisplayName(),
this->func->GetDebugNumberSet(debugStringBuffer),
Js::OpCodeUtil::GetOpCodeName(instr->m_opcode),
baseValueTypeStr,
instr->DoStackArgsOpt() ? _u("instruction uses the arguments object") :
baseValueType.IsLikelyOptimizedTypedArray() ? _u("index is negative or likely not int") : _u("of array type"));
Output::Flush();
}
}
if(!dstVal)
{
if(srcVal)
{
dstVal = this->ValueNumberTransferDst(instr, srcVal);
}
else
{
dstVal = NewGenericValue(profiledElementType, dst);
}
}
return dstVal;
}
Assert(instr->GetSrc1()->IsIndirOpnd());
IRType toType = TyVar;
IR::BailOutKind bailOutKind = IR::BailOutConventionalTypedArrayAccessOnly;
switch(baseValueType.GetObjectType())
{
case ObjectType::Int8Array:
case ObjectType::Int8VirtualArray:
case ObjectType::Int8MixedArray:
newMin = Int8ConstMin;
newMax = Int8ConstMax;
goto IntArrayCommon;
case ObjectType::Uint8Array:
case ObjectType::Uint8VirtualArray:
case ObjectType::Uint8MixedArray:
case ObjectType::Uint8ClampedArray:
case ObjectType::Uint8ClampedVirtualArray:
case ObjectType::Uint8ClampedMixedArray:
newMin = Uint8ConstMin;
newMax = Uint8ConstMax;
goto IntArrayCommon;
case ObjectType::Int16Array:
case ObjectType::Int16VirtualArray:
case ObjectType::Int16MixedArray:
newMin = Int16ConstMin;
newMax = Int16ConstMax;
goto IntArrayCommon;
case ObjectType::Uint16Array:
case ObjectType::Uint16VirtualArray:
case ObjectType::Uint16MixedArray:
newMin = Uint16ConstMin;
newMax = Uint16ConstMax;
goto IntArrayCommon;
case ObjectType::Int32Array:
case ObjectType::Int32VirtualArray:
case ObjectType::Int32MixedArray:
case ObjectType::Uint32Array: // int-specialized loads from uint32 arrays will bail out on values that don't fit in an int32
case ObjectType::Uint32VirtualArray:
case ObjectType::Uint32MixedArray:
Int32Array:
newMin = Int32ConstMin;
newMax = Int32ConstMax;
goto IntArrayCommon;
IntArrayCommon:
Assert(dst->IsRegOpnd());
// If int type spec is disabled, it is ok to load int values as they can help float type spec, and merging int32 with float64 => float64.
// But if float type spec is also disabled, we'll have problems because float64 merged with var => float64...
if (!this->DoAggressiveIntTypeSpec() && !this->DoFloatTypeSpec())
{
if (!dstVal)
{
if (srcVal)
{
dstVal = this->ValueNumberTransferDst(instr, srcVal);
}
else
{
dstVal = NewGenericValue(profiledElementType, dst);
}
}
return dstVal;
}
if (!this->IsLoopPrePass())
{
if (instr->HasBailOutInfo())
{
const IR::BailOutKind oldBailOutKind = instr->GetBailOutKind();
Assert(
(
!(oldBailOutKind & ~IR::BailOutKindBits) ||
(oldBailOutKind & ~IR::BailOutKindBits) == IR::BailOutOnImplicitCallsPreOp
) &&
!(oldBailOutKind & IR::BailOutKindBits & ~(IR::BailOutOnArrayAccessHelperCall | IR::BailOutMarkTempObject)));
if (bailOutKind == IR::BailOutConventionalTypedArrayAccessOnly)
{
// BailOutConventionalTypedArrayAccessOnly also bails out if the array access is outside the head
// segment bounds, and guarantees no implicit calls. Override the bailout kind so that the instruction
// bails out for the right reason.
instr->SetBailOutKind(
bailOutKind | (oldBailOutKind & (IR::BailOutKindBits - IR::BailOutOnArrayAccessHelperCall)));
}
else
{
// BailOutConventionalNativeArrayAccessOnly by itself may generate a helper call, and may cause implicit
// calls to occur, so it must be merged in to eliminate generating the helper call
Assert(bailOutKind == IR::BailOutConventionalNativeArrayAccessOnly);
instr->SetBailOutKind(oldBailOutKind | bailOutKind);
}
}
else
{
GenerateBailAtOperation(&instr, bailOutKind);
}
}
TypeSpecializeIntDst(instr, instr->m_opcode, nullptr, nullptr, nullptr, bailOutKind, newMin, newMax, &dstVal);
toType = TyInt32;
break;
case ObjectType::Float32Array:
case ObjectType::Float32VirtualArray:
case ObjectType::Float32MixedArray:
case ObjectType::Float64Array:
case ObjectType::Float64VirtualArray:
case ObjectType::Float64MixedArray:
Float64Array:
Assert(dst->IsRegOpnd());
// If float type spec is disabled, don't load float64 values
if (!this->DoFloatTypeSpec())
{
if (!dstVal)
{
if (srcVal)
{
dstVal = this->ValueNumberTransferDst(instr, srcVal);
}
else
{
dstVal = NewGenericValue(profiledElementType, dst);
}
}
return dstVal;
}
if (!this->IsLoopPrePass())
{
if (instr->HasBailOutInfo())
{
const IR::BailOutKind oldBailOutKind = instr->GetBailOutKind();
Assert(
(
!(oldBailOutKind & ~IR::BailOutKindBits) ||
(oldBailOutKind & ~IR::BailOutKindBits) == IR::BailOutOnImplicitCallsPreOp
) &&
!(oldBailOutKind & IR::BailOutKindBits & ~(IR::BailOutOnArrayAccessHelperCall | IR::BailOutMarkTempObject)));
if (bailOutKind == IR::BailOutConventionalTypedArrayAccessOnly)
{
// BailOutConventionalTypedArrayAccessOnly also bails out if the array access is outside the head
// segment bounds, and guarantees no implicit calls. Override the bailout kind so that the instruction
// bails out for the right reason.
instr->SetBailOutKind(
bailOutKind | (oldBailOutKind & (IR::BailOutKindBits - IR::BailOutOnArrayAccessHelperCall)));
}
else
{
// BailOutConventionalNativeArrayAccessOnly by itself may generate a helper call, and may cause implicit
// calls to occur, so it must be merged in to eliminate generating the helper call
Assert(bailOutKind == IR::BailOutConventionalNativeArrayAccessOnly);
instr->SetBailOutKind(oldBailOutKind | bailOutKind);
}
}
else
{
GenerateBailAtOperation(&instr, bailOutKind);
}
}
TypeSpecializeFloatDst(instr, nullptr, nullptr, nullptr, &dstVal);
toType = TyFloat64;
break;
default:
Assert(baseValueType.IsLikelyNativeArray());
bailOutKind = IR::BailOutConventionalNativeArrayAccessOnly;
if(baseValueType.HasIntElements())
{
goto Int32Array;
}
Assert(baseValueType.HasFloatElements());
goto Float64Array;
}
if(!dstVal)
{
dstVal = NewGenericValue(profiledElementType, dst);
}
Assert(toType != TyVar);
GOPT_TRACE_INSTR(instr, _u("Type specialized array access.\n"));
if (PHASE_TRACE(Js::TypedArrayTypeSpecPhase, this->func))
{
char16 debugStringBuffer[MAX_FUNCTION_BODY_DEBUG_STRING_SIZE];
char baseValueTypeStr[VALUE_TYPE_MAX_STRING_SIZE];
baseValueType.ToString(baseValueTypeStr);
char dstValTypeStr[VALUE_TYPE_MAX_STRING_SIZE];
dstVal->GetValueInfo()->Type().ToString(dstValTypeStr);
Output::Print(_u("Typed Array Optimization: function: %s (%s): instr: %s, base value type: %S, type specialized to %s producing %S"),
this->func->GetJITFunctionBody()->GetDisplayName(),
this->func->GetDebugNumberSet(debugStringBuffer),
Js::OpCodeUtil::GetOpCodeName(instr->m_opcode),
baseValueTypeStr,
toType == TyInt32 ? _u("int32") : _u("float64"),
dstValTypeStr);
#if DBG_DUMP
Output::Print(_u(" ("));
dstVal->Dump();
Output::Print(_u(").\n"));
#else
Output::Print(_u(".\n"));
#endif
Output::Flush();
}
return dstVal;
}
ValueType
GlobOpt::GetPrepassValueTypeForDst(
const ValueType desiredValueType,
IR::Instr *const instr,
Value *const src1Value,
Value *const src2Value,
bool const isValueInfoPrecise,
bool const isSafeToTransferInPrepass) const
{
// Values with definite types can be created in the loop prepass only when it is guaranteed that the value type will be the
// same on any iteration of the loop. The heuristics currently used are:
// - If the source sym is not live on the back-edge, then it acquires a new value for each iteration of the loop, so
// that value type can be definite
// - Consider: A better solution for this is to track values that originate in this loop, which can have definite value
// types. That catches more cases, should look into that in the future.
// - If the source sym has a constant value that doesn't change for the duration of the function
// - The operation always results in a definite value type. For instance, signed bitwise operations always result in an
// int32, conv_num and ++ always result in a number, etc.
// - For operations that always result in an int32, the resulting int range is precise only if the source syms pass
// the above heuristics. Otherwise, the range must be expanded to the full int32 range.
Assert(IsLoopPrePass());
Assert(instr);
if(!isValueInfoPrecise)
{
if(!desiredValueType.IsDefinite())
{
return isSafeToTransferInPrepass ? desiredValueType : desiredValueType.SetCanBeTaggedValue(true);
}
// If the desired value type is not precise, the value type of the destination is derived from the value types of the
// sources. Since the value type of a source sym is not definite, the destination value type also cannot be definite.
if(desiredValueType.IsInt() && OpCodeAttr::IsInt32(instr->m_opcode))
{
// The op always produces an int32, but not always a tagged int
return ValueType::GetInt(desiredValueType.IsLikelyTaggedInt());
}
if(desiredValueType.IsNumber() && OpCodeAttr::ProducesNumber(instr->m_opcode))
{
// The op always produces a number, but not always an int
return desiredValueType.ToDefiniteAnyNumber();
}
// Note: ToLikely() also sets CanBeTaggedValue
return desiredValueType.ToLikely();
}
return desiredValueType;
}
bool
GlobOpt::IsPrepassSrcValueInfoPrecise(IR::Instr *const instr, Value *const src1Value, Value *const src2Value, bool * isSafeToTransferInPrepass) const
{
return
(!instr->GetSrc1() || IsPrepassSrcValueInfoPrecise(instr->GetSrc1(), src1Value, isSafeToTransferInPrepass)) &&
(!instr->GetSrc2() || IsPrepassSrcValueInfoPrecise(instr->GetSrc2(), src2Value, isSafeToTransferInPrepass));
}
bool
GlobOpt::IsPrepassSrcValueInfoPrecise(IR::Opnd *const src, Value *const srcValue, bool * isSafeToTransferInPrepass) const
{
Assert(IsLoopPrePass());
Assert(src);
if (isSafeToTransferInPrepass)
{
*isSafeToTransferInPrepass = false;
}
if (src->IsAddrOpnd() &&
srcValue->GetValueInfo()->GetSymStore() &&
srcValue->GetValueInfo()->GetSymStore()->IsStackSym() &&
srcValue->GetValueInfo()->GetSymStore()->AsStackSym()->IsFromByteCodeConstantTable())
{
if (isSafeToTransferInPrepass)
{
*isSafeToTransferInPrepass = false;
}
return true;
}
if (!src->IsRegOpnd() || !srcValue)
{
return false;
}
ValueInfo *const srcValueInfo = srcValue->GetValueInfo();
bool isValueInfoDefinite = srcValueInfo->IsDefinite();
StackSym * srcSym = src->AsRegOpnd()->m_sym;
bool isSafeToTransfer = IsSafeToTransferInPrepass(srcSym, srcValueInfo);
if (isSafeToTransferInPrepass)
{
*isSafeToTransferInPrepass = isSafeToTransfer;
}
return isValueInfoDefinite && isSafeToTransfer;
}
bool
GlobOpt::IsSafeToTransferInPrepass(StackSym * const srcSym, ValueInfo *const srcValueInfo) const
{
int32 intConstantValue;
return
srcSym->IsFromByteCodeConstantTable() ||
(
srcValueInfo->TryGetIntConstantValue(&intConstantValue) &&
!Js::TaggedInt::IsOverflow(intConstantValue) &&
GetTaggedIntConstantStackSym(intConstantValue) == srcSym
) ||
!currentBlock->loop->regAlloc.liveOnBackEdgeSyms->Test(srcSym->m_id) ||
!currentBlock->loop->IsSymAssignedToInSelfOrParents(srcSym);
}
bool
GlobOpt::SafeToCopyPropInPrepass(StackSym * const originalSym, StackSym * const copySym, Value *const value) const
{
Assert(this->currentBlock->globOptData.GetCopyPropSym(originalSym, value) == copySym);
// In the following example, to copy-prop s2 into s1, it is not enough to check if s1 and s2 are safe to transfer.
// In fact, both s1 and s2 are safe to transfer, but it is not legal to copy prop s2 into s1.
//
// s1 = s2
// $Loop:
// s3 = s1
// s2 = s4
// Br $Loop
//
// In general, requirements for copy-propping in prepass are more restricted than those for transferring values.
// For copy prop in prepass, if the original sym is live on back-edge, then the copy-prop sym should not be written to
// in the loop (or its parents)
ValueInfo* const valueInfo = value->GetValueInfo();
return IsSafeToTransferInPrepass(originalSym, valueInfo) &&
IsSafeToTransferInPrepass(copySym, valueInfo) &&
(!currentBlock->loop->regAlloc.liveOnBackEdgeSyms->Test(originalSym->m_id) || !currentBlock->loop->IsSymAssignedToInSelfOrParents(copySym));
}
Value *GlobOpt::CreateDstUntransferredIntValue(
const int32 min,
const int32 max,
IR::Instr *const instr,
Value *const src1Value,
Value *const src2Value)
{
Assert(instr);
Assert(instr->GetDst());
Assert(OpCodeAttr::ProducesNumber(instr->m_opcode)
|| (instr->m_opcode == Js::OpCode::Add_A && src1Value->GetValueInfo()->IsNumber()
&& src2Value->GetValueInfo()->IsNumber()));
ValueType valueType(ValueType::GetInt(IntConstantBounds(min, max).IsLikelyTaggable()));
Assert(valueType.IsInt());
bool isValueInfoPrecise;
if(IsLoopPrePass())
{
isValueInfoPrecise = IsPrepassSrcValueInfoPrecise(instr, src1Value, src2Value);
valueType = GetPrepassValueTypeForDst(valueType, instr, src1Value, src2Value, isValueInfoPrecise);
}
else
{
isValueInfoPrecise = true;
}
IR::Opnd *const dst = instr->GetDst();
if(isValueInfoPrecise)
{
Assert(valueType == ValueType::GetInt(IntConstantBounds(min, max).IsLikelyTaggable()));
Assert(!(dst->IsRegOpnd() && dst->AsRegOpnd()->m_sym->IsTypeSpec()));
return NewIntRangeValue(min, max, false, dst);
}
return NewGenericValue(valueType, dst);
}
Value *
GlobOpt::CreateDstUntransferredValue(
const ValueType desiredValueType,
IR::Instr *const instr,
Value *const src1Value,
Value *const src2Value)
{
Assert(instr);
Assert(instr->GetDst());
Assert(!desiredValueType.IsInt()); // use CreateDstUntransferredIntValue instead
ValueType valueType(desiredValueType);
if(IsLoopPrePass())
{
valueType = GetPrepassValueTypeForDst(valueType, instr, src1Value, src2Value, IsPrepassSrcValueInfoPrecise(instr, src1Value, src2Value));
}
return NewGenericValue(valueType, instr->GetDst());
}
Value *
GlobOpt::ValueNumberTransferDst(IR::Instr *const instr, Value * src1Val)
{
Value *dstVal = this->IsLoopPrePass() ? this->ValueNumberTransferDstInPrepass(instr, src1Val) : src1Val;
// Don't copy-prop a temp over a user symbol. This is likely to extend the temp's lifetime, as the user symbol
// is more likely to already have later references.
// REVIEW: Enabling this does cause perf issues...
#if 0
if (dstVal != src1Val)
{
return dstVal;
}
Sym *dstSym = dst->GetStackSym();
if (dstVal && dstSym && dstSym->IsStackSym() && !dstSym->AsStackSym()->m_isBytecodeTmp)
{
Sym *dstValSym = dstVal->GetValueInfo()->GetSymStore();
if (dstValSym && dstValSym->AsStackSym()->m_isBytecodeTmp /* src->GetIsDead()*/)
{
dstVal->GetValueInfo()->SetSymStore(dstSym);
}
}
#endif
return dstVal;
}
bool
GlobOpt::IsSafeToTransferInPrePass(IR::Opnd *src, Value *srcValue)
{
if (src->IsRegOpnd())
{
StackSym *srcSym = src->AsRegOpnd()->m_sym;
if (srcSym->IsFromByteCodeConstantTable())
{
return true;
}
ValueInfo *srcValueInfo = srcValue->GetValueInfo();
int32 srcIntConstantValue;
if (srcValueInfo->TryGetIntConstantValue(&srcIntConstantValue) && !Js::TaggedInt::IsOverflow(srcIntConstantValue)
&& GetTaggedIntConstantStackSym(srcIntConstantValue) == srcSym)
{
return true;
}
}
return false;
}
Value *
GlobOpt::ValueNumberTransferDstInPrepass(IR::Instr *const instr, Value *const src1Val)
{
Value *dstVal = nullptr;
if (!src1Val)
{
return nullptr;
}
bool isValueInfoPrecise;
ValueInfo *const src1ValueInfo = src1Val->GetValueInfo();
// TODO: This conflicts with new values created by the type specialization code.
// We should re-enable this if we change that code to avoid the new values.
#if 0
if (this->IsSafeToTransferInPrePass(instr->GetSrc1(), src1Val))
{
return src1Val;
}
if (this->IsPREInstrCandidateLoad(instr->m_opcode) && instr->GetDst())
{
StackSym *dstSym = instr->GetDst()->AsRegOpnd()->m_sym;
for (Loop *curLoop = this->currentBlock->loop; curLoop; curLoop = curLoop->parent)
{
if (curLoop->fieldPRESymStore->Test(dstSym->m_id))
{
return src1Val;
}
}
}
if (instr->GetDst()->IsRegOpnd())
{
StackSym *stackSym = instr->GetDst()->AsRegOpnd()->m_sym;
if (stackSym->IsSingleDef() || this->IsLive(stackSym, this->prePassLoop->landingPad))
{
IntConstantBounds src1IntConstantBounds;
if (src1ValueInfo->TryGetIntConstantBounds(&src1IntConstantBounds) &&
!(
src1IntConstantBounds.LowerBound() == INT32_MIN &&
src1IntConstantBounds.UpperBound() == INT32_MAX
))
{
const ValueType valueType(
GetPrepassValueTypeForDst(src1ValueInfo->Type(), instr, src1Val, nullptr, &isValueInfoPrecise));
if (isValueInfoPrecise)
{
return src1Val;
}
}
else
{
return src1Val;
}
}
}
#endif
// Src1's value could change later in the loop, so the value wouldn't be the same for each
// iteration. Since we don't iterate over loops "while (!changed)", go conservative on the
// first pass when transferring a value that is live on the back-edge.
// In prepass we are going to copy the value but with a different value number
// for aggressive int type spec.
bool isSafeToTransferInPrepass = false;
isValueInfoPrecise = IsPrepassSrcValueInfoPrecise(instr, src1Val, nullptr, &isSafeToTransferInPrepass);
const ValueType valueType(GetPrepassValueTypeForDst(src1ValueInfo->Type(), instr, src1Val, nullptr, isValueInfoPrecise, isSafeToTransferInPrepass));
if(isValueInfoPrecise || isSafeToTransferInPrepass)
{
Assert(valueType == src1ValueInfo->Type());
if (!PHASE_OFF1(Js::AVTInPrePassPhase))
{
dstVal = src1Val;
}
else
{
dstVal = CopyValue(src1Val);
TrackCopiedValueForKills(dstVal);
}
}
else if (valueType == src1ValueInfo->Type() && src1ValueInfo->IsGeneric()) // this else branch is probably not needed
{
Assert(valueType == src1ValueInfo->Type());
dstVal = CopyValue(src1Val);
TrackCopiedValueForKills(dstVal);
}
else
{
dstVal = NewGenericValue(valueType);
dstVal->GetValueInfo()->SetSymStore(src1ValueInfo->GetSymStore());
}
return dstVal;
}
void
GlobOpt::PropagateIntRangeForNot(int32 minimum, int32 maximum, int32 *pNewMin, int32* pNewMax)
{
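// A quick sanity check on this computation (illustrative example, not from the original source): since
// ~x == -x - 1 is monotonically decreasing, the endpoints of the input range map to the endpoints of the
// result. For [minimum, maximum] = [-4, 10], ~(-4) == 3 and ~10 == -11, so the propagated range is [-11, 3].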
int32 tmp;
Int32Math::Not(minimum, pNewMin);
*pNewMax = *pNewMin;
Int32Math::Not(maximum, &tmp);
*pNewMin = min(*pNewMin, tmp);
*pNewMax = max(*pNewMax, tmp);
}
void
GlobOpt::PropagateIntRangeBinary(IR::Instr *instr, int32 min1, int32 max1,
int32 min2, int32 max2, int32 *pNewMin, int32* pNewMax)
{
int32 min, max, tmp, tmp2;
min = INT32_MIN;
max = INT32_MAX;
switch (instr->m_opcode)
{
case Js::OpCode::Xor_A:
case Js::OpCode::Or_A:
// Find the range with the highest high-order bit
tmp = ::max((uint32)min1, (uint32)max1);
tmp2 = ::max((uint32)min2, (uint32)max2);
if ((uint32)tmp > (uint32)tmp2)
{
max = tmp;
}
else
{
max = tmp2;
}
if (max < 0)
{
min = INT32_MIN; // REVIEW: conservative...
max = INT32_MAX;
}
else
{
// Turn values like binary 1010 into 1111
max = 1 << Math::Log2(max);
max = (uint32)(max << 1) - 1;
min = 0;
}
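// Illustrative example (not from the original source): if the larger operand range tops out at
// max == 10 (binary 1010), Math::Log2(10) == 3, so max becomes ((1 << 3) << 1) - 1 == 15
// (binary 1111), i.e. every bit at or below the highest set bit may be set in the result.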
break;
case Js::OpCode::And_A:
if (min1 == INT32_MIN && min2 == INT32_MIN)
{
// Shortcut
break;
}
// Find the range with the lowest high bit
tmp = ::max((uint32)min1, (uint32)max1);
tmp2 = ::max((uint32)min2, (uint32)max2);
if ((uint32)tmp < (uint32)tmp2)
{
min = min1;
max = max1;
}
else
{
min = min2;
max = max2;
}
// To compute max, check whether min has a higher high bit
if ((uint32)min > (uint32)max)
{
max = min;
}
// If max is negative, let's assume it could be -1, so the result could be as large as INT32_MAX
if (max < 0)
{
max = INT32_MAX;
}
// If min is positive, the resulting min is zero
if (min >= 0)
{
min = 0;
}
else
{
min = INT32_MIN;
}
break;
case Js::OpCode::Shl_A:
{
// Shift count
if (min2 != max2 && ((uint32)min2 > 0x1F || (uint32)max2 > 0x1F))
{
min2 = 0;
max2 = 0x1F;
}
else
{
min2 &= 0x1F;
max2 &= 0x1F;
}
int32 min1FreeTopBitCount = min1 ? (sizeof(int32) * 8) - (Math::Log2(min1) + 1) : (sizeof(int32) * 8);
int32 max1FreeTopBitCount = max1 ? (sizeof(int32) * 8) - (Math::Log2(max1) + 1) : (sizeof(int32) * 8);
if (min1FreeTopBitCount <= max2 || max1FreeTopBitCount <= max2)
{
// If the shift is going to touch the sign bit return the max range
min = INT32_MIN;
max = INT32_MAX;
}
else
{
// Compute max
// Turn values like binary 1010 into 1111
if (min1)
{
min1 = 1 << Math::Log2(min1);
min1 = (min1 << 1) - 1;
}
if (max1)
{
max1 = 1 << Math::Log2(max1);
max1 = (uint32)(max1 << 1) - 1;
}
if (max1 > 0)
{
int32 nrTopBits = (sizeof(int32) * 8) - Math::Log2(max1);
if (nrTopBits < ::min(max2, 30))
max = INT32_MAX;
else
max = ::max((max1 << ::min(max2, 30)) & ~0x80000000, (min1 << min2) & ~0x80000000);
}
else
{
max = (max1 << min2) & ~0x80000000;
}
// Compute min
if (min1 < 0)
{
min = ::min(min1 << max2, max1 << max2);
}
else
{
min = ::min(min1 << min2, max1 << max2);
}
// Turn values like binary 1110 into 1000
if (min)
{
min = 1 << Math::Log2(min);
}
}
}
break;
case Js::OpCode::Shr_A:
// Shift count
if (min2 != max2 && ((uint32)min2 > 0x1F || (uint32)max2 > 0x1F))
{
min2 = 0;
max2 = 0x1F;
}
else
{
min2 &= 0x1F;
max2 &= 0x1F;
}
// Compute max
if (max1 < 0)
{
max = max1 >> max2;
}
else
{
max = max1 >> min2;
}
// Compute min
if (min1 < 0)
{
min = min1 >> min2;
}
else
{
min = min1 >> max2;
}
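// Illustrative example (not from the original source): for a value range of [-8, 16] shifted right
// by a count range of [1, 2], max1 >= 0 gives max = 16 >> 1 == 8, and min1 < 0 gives
// min = -8 >> 1 == -4, so the propagated result range is [-4, 8].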
break;
case Js::OpCode::ShrU_A:
// shift count is constant zero
if ((min2 == max2) && (max2 & 0x1f) == 0)
{
// We can't encode a uint32 result, so either it is only used as an int32 or the original value is non-negative.
Assert(instr->ignoreIntOverflow || min1 >= 0);
// We can transfer the signed int32 range.
min = min1;
max = max1;
break;
}
const IntConstantBounds src2NewBounds = IntConstantBounds(min2, max2).And_0x1f();
// Zero is only allowed if result is always a signed int32 or always used as a signed int32
Assert(min1 >= 0 || instr->ignoreIntOverflow || !src2NewBounds.Contains(0));
min2 = src2NewBounds.LowerBound();
max2 = src2NewBounds.UpperBound();
Assert(min2 <= max2);
// zero shift count is only allowed if result is used as int32 and/or value is positive
Assert(min2 > 0 || instr->ignoreIntOverflow || min1 >= 0);
uint32 umin1 = (uint32)min1;
uint32 umax1 = (uint32)max1;
if (umin1 > umax1)
{
uint32 temp = umax1;
umax1 = umin1;
umin1 = temp;
}
Assert(min2 >= 0 && max2 < 32);
// Compute max
if (min1 < 0)
{
umax1 = UINT32_MAX;
}
max = umax1 >> min2;
// Compute min
if (min1 <= 0 && max1 >= 0)
{
min = 0;
}
else
{
min = umin1 >> max2;
}
// We should be able to fit uint32 range as int32
Assert(instr->ignoreIntOverflow || (min >= 0 && max >= 0) );
if (min > max)
{
// can only happen if shift count can be zero
Assert(min2 == 0 && (instr->ignoreIntOverflow || min1 >= 0));
min = Int32ConstMin;
max = Int32ConstMax;
}
break;
}
*pNewMin = min;
*pNewMax = max;
}
IR::Instr *
GlobOpt::TypeSpecialization(
IR::Instr *instr,
Value **pSrc1Val,
Value **pSrc2Val,
Value **pDstVal,
bool *redoTypeSpecRef,
bool *const forceInvariantHoistingRef)
{
Value *&src1Val = *pSrc1Val;
Value *&src2Val = *pSrc2Val;
*redoTypeSpecRef = false;
Assert(!*forceInvariantHoistingRef);
this->ignoredIntOverflowForCurrentInstr = false;
this->ignoredNegativeZeroForCurrentInstr = false;
// - Int32 values that can't be tagged are created as float constant values instead because a JavascriptNumber var is needed
// for that value at runtime. For the purposes of type specialization, recover the int32 values so that they will be
// treated as ints.
// - If int overflow does not matter for the instruction, we can additionally treat uint32 values as int32 values because
// the value resulting from the operation will eventually be converted to int32 anyway
Value *const src1OriginalVal = src1Val;
Value *const src2OriginalVal = src2Val;
if(!instr->ShouldCheckForIntOverflow())
{
if(src1Val && src1Val->GetValueInfo()->IsFloatConstant())
{
int32 int32Value;
bool isInt32;
if(Js::JavascriptNumber::TryGetInt32OrUInt32Value(
src1Val->GetValueInfo()->AsFloatConstant()->FloatValue(),
&int32Value,
&isInt32))
{
src1Val = GetIntConstantValue(int32Value, instr);
if(!isInt32)
{
this->ignoredIntOverflowForCurrentInstr = true;
}
}
}
if(src2Val && src2Val->GetValueInfo()->IsFloatConstant())
{
int32 int32Value;
bool isInt32;
if(Js::JavascriptNumber::TryGetInt32OrUInt32Value(
src2Val->GetValueInfo()->AsFloatConstant()->FloatValue(),
&int32Value,
&isInt32))
{
src2Val = GetIntConstantValue(int32Value, instr);
if(!isInt32)
{
this->ignoredIntOverflowForCurrentInstr = true;
}
}
}
}
const AutoRestoreVal autoRestoreSrc1Val(src1OriginalVal, &src1Val);
const AutoRestoreVal autoRestoreSrc2Val(src2OriginalVal, &src2Val);
if (src1Val && instr->GetSrc2() == nullptr)
{
// Unary
// Note: make sure that native array StElemI gets to TypeSpecializeStElem. Do this for typed arrays, too?
int32 intConstantValue;
if (!this->IsLoopPrePass() &&
!instr->IsBranchInstr() &&
src1Val->GetValueInfo()->TryGetIntConstantValue(&intConstantValue) &&
!(
// Nothing to fold for element stores. Go into type specialization to see if they can at least be specialized.
instr->m_opcode == Js::OpCode::StElemI_A ||
instr->m_opcode == Js::OpCode::StElemI_A_Strict ||
instr->m_opcode == Js::OpCode::StElemC ||
instr->m_opcode == Js::OpCode::MultiBr ||
instr->m_opcode == Js::OpCode::InlineArrayPop
))
{
if (OptConstFoldUnary(&instr, intConstantValue, src1Val == src1OriginalVal, pDstVal))
{
return instr;
}
}
else if (this->TypeSpecializeUnary(
&instr,
&src1Val,
pDstVal,
src1OriginalVal,
redoTypeSpecRef,
forceInvariantHoistingRef))
{
return instr;
}
else if(*redoTypeSpecRef)
{
return instr;
}
}
else if (instr->GetSrc2() && !instr->IsBranchInstr())
{
// Binary
if (!this->IsLoopPrePass())
{
if (GetIsAsmJSFunc())
{
if (CONFIG_FLAG(WasmFold))
{
bool success = instr->GetSrc1()->IsInt64() ?
this->OptConstFoldBinaryWasm<int64>(&instr, src1Val, src2Val, pDstVal) :
this->OptConstFoldBinaryWasm<int>(&instr, src1Val, src2Val, pDstVal);
if (success)
{
return instr;
}
}
}
else
{
// OptConstFoldBinary doesn't do type spec, so only deal with things we are sure are int (IntConstant and IntRange),
// not just likely ints. TypeSpecializeBinary will deal with type specializing them and folding them again.
IntConstantBounds src1IntConstantBounds, src2IntConstantBounds;
if (src1Val && src1Val->GetValueInfo()->TryGetIntConstantBounds(&src1IntConstantBounds))
{
if (src2Val && src2Val->GetValueInfo()->TryGetIntConstantBounds(&src2IntConstantBounds))
{
if (this->OptConstFoldBinary(&instr, src1IntConstantBounds, src2IntConstantBounds, pDstVal))
{
return instr;
}
}
}
}
}
}
if (instr->GetSrc2() && this->TypeSpecializeBinary(&instr, pSrc1Val, pSrc2Val, pDstVal, src1OriginalVal, src2OriginalVal, redoTypeSpecRef))
{
if (!this->IsLoopPrePass() &&
instr->m_opcode != Js::OpCode::Nop &&
instr->m_opcode != Js::OpCode::Br && // We may have const-folded a branch
// Cannot const-peep if the result of the operation is required for a bailout check
!(instr->HasBailOutInfo() && instr->GetBailOutKind() & IR::BailOutOnResultConditions))
{
if (src1Val && src1Val->GetValueInfo()->HasIntConstantValue())
{
if (this->OptConstPeep(instr, instr->GetSrc1(), pDstVal, src1Val->GetValueInfo()))
{
return instr;
}
}
else if (src2Val && src2Val->GetValueInfo()->HasIntConstantValue())
{
if (this->OptConstPeep(instr, instr->GetSrc2(), pDstVal, src2Val->GetValueInfo()))
{
return instr;
}
}
}
return instr;
}
else if(*redoTypeSpecRef)
{
return instr;
}
if (instr->IsBranchInstr() && !this->IsLoopPrePass())
{
if (this->OptConstFoldBranch(instr, src1Val, src2Val, pDstVal))
{
return instr;
}
}
// We didn't type specialize, make sure the srcs are unspecialized
IR::Opnd *src1 = instr->GetSrc1();
if (src1)
{
instr = this->ToVarUses(instr, src1, false, src1Val);
IR::Opnd *src2 = instr->GetSrc2();
if (src2)
{
instr = this->ToVarUses(instr, src2, false, src2Val);
}
}
IR::Opnd *dst = instr->GetDst();
if (dst)
{
instr = this->ToVarUses(instr, dst, true, nullptr);
// Handling for instructions other than built-ins that may require only dst type specialization
// should be added here.
if(OpCodeAttr::IsInlineBuiltIn(instr->m_opcode) && !GetIsAsmJSFunc()) // don't need to do typespec for asmjs
{
this->TypeSpecializeInlineBuiltInDst(&instr, pDstVal);
return instr;
}
// Clear the int specialized bit on the dst.
if (dst->IsRegOpnd())
{
IR::RegOpnd *dstRegOpnd = dst->AsRegOpnd();
if (!dstRegOpnd->m_sym->IsTypeSpec())
{
this->ToVarRegOpnd(dstRegOpnd, this->currentBlock);
}
else if (dstRegOpnd->m_sym->IsInt32())
{
this->ToInt32Dst(instr, dstRegOpnd, this->currentBlock);
}
else if (dstRegOpnd->m_sym->IsUInt32() && GetIsAsmJSFunc())
{
this->ToUInt32Dst(instr, dstRegOpnd, this->currentBlock);
}
else if (dstRegOpnd->m_sym->IsFloat64())
{
this->ToFloat64Dst(instr, dstRegOpnd, this->currentBlock);
}
}
else if (dst->IsSymOpnd() && dst->AsSymOpnd()->m_sym->IsStackSym())
{
this->ToVarStackSym(dst->AsSymOpnd()->m_sym->AsStackSym(), this->currentBlock);
}
}
return instr;
}
bool
GlobOpt::OptConstPeep(IR::Instr *instr, IR::Opnd *constSrc, Value **pDstVal, ValueInfo *valueInfo)
{
int32 value;
IR::Opnd *src;
IR::Opnd *nonConstSrc = (constSrc == instr->GetSrc1() ? instr->GetSrc2() : instr->GetSrc1());
// Try to find the value from value info first
if (valueInfo->TryGetIntConstantValue(&value))
{
}
else if (constSrc->IsAddrOpnd())
{
IR::AddrOpnd *addrOpnd = constSrc->AsAddrOpnd();
#ifdef _M_X64
Assert(addrOpnd->IsVar() || Math::FitsInDWord((size_t)addrOpnd->m_address));
#else
Assert(sizeof(value) == sizeof(addrOpnd->m_address));
#endif
if (addrOpnd->IsVar())
{
value = Js::TaggedInt::ToInt32(addrOpnd->m_address);
}
else
{
// We asserted that the address will fit in a DWORD above
value = ::Math::PointerCastToIntegral<int32>(constSrc->AsAddrOpnd()->m_address);
}
}
else if (constSrc->IsIntConstOpnd())
{
value = constSrc->AsIntConstOpnd()->AsInt32();
}
else
{
return false;
}
switch(instr->m_opcode)
{
// Can't do all Add_A because of string concats.
// Sub_A cannot be transformed to a Neg_A because 0 - 0 != -0
case Js::OpCode::Add_A:
src = nonConstSrc;
if (!src->GetValueType().IsInt())
{
// 0 + -0 != -0
// "Foo" + 0 != "Foo"
return false;
}
// fall-through
case Js::OpCode::Add_I4:
if (value != 0)
{
return false;
}
if (constSrc == instr->GetSrc1())
{
src = instr->GetSrc2();
}
else
{
src = instr->GetSrc1();
}
break;
case Js::OpCode::Mul_A:
case Js::OpCode::Mul_I4:
if (value == 0)
{
// -0 * 0 != 0
return false;
}
else if (value == 1)
{
src = nonConstSrc;
}
else
{
return false;
}
break;
case Js::OpCode::Div_A:
if (value == 1 && constSrc == instr->GetSrc2())
{
src = instr->GetSrc1();
}
else
{
return false;
}
break;
case Js::OpCode::Or_I4:
if (value == -1)
{
src = constSrc;
}
else if (value == 0)
{
src = nonConstSrc;
}
else
{
return false;
}
break;
case Js::OpCode::And_I4:
if (value == -1)
{
src = nonConstSrc;
}
else if (value == 0)
{
src = constSrc;
}
else
{
return false;
}
break;
case Js::OpCode::Shl_I4:
case Js::OpCode::ShrU_I4:
case Js::OpCode::Shr_I4:
if (value != 0 || constSrc != instr->GetSrc2())
{
return false;
}
src = instr->GetSrc1();
break;
default:
return false;
}
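// At this point the peep has succeeded: the operation reduces to a move of the surviving source,
// e.g. "t1 = Add_A t2, 0" becomes "t1 = Ld_A t2" via the opcode rewrite below.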
this->CaptureByteCodeSymUses(instr);
if (src == instr->GetSrc1())
{
instr->FreeSrc2();
}
else
{
Assert(src == instr->GetSrc2());
instr->ReplaceSrc1(instr->UnlinkSrc2());
}
instr->m_opcode = Js::OpCode::Ld_A;
InvalidateInductionVariables(instr);
return true;
}
Js::Var // TODO: michhol OOP JIT, shouldn't play with Vars
GlobOpt::GetConstantVar(IR::Opnd *opnd, Value *val)
{
ValueInfo *valueInfo = val->GetValueInfo();
if (valueInfo->IsVarConstant() && valueInfo->IsPrimitive())
{
return valueInfo->AsVarConstant()->VarValue();
}
if (opnd->IsAddrOpnd())
{
IR::AddrOpnd *addrOpnd = opnd->AsAddrOpnd();
if (addrOpnd->IsVar())
{
return addrOpnd->m_address;
}
}
else if (opnd->IsIntConstOpnd())
{
if (!Js::TaggedInt::IsOverflow(opnd->AsIntConstOpnd()->AsInt32()))
{
return Js::TaggedInt::ToVarUnchecked(opnd->AsIntConstOpnd()->AsInt32());
}
}
#if FLOATVAR
else if (opnd->IsFloatConstOpnd())
{
return Js::JavascriptNumber::ToVar(opnd->AsFloatConstOpnd()->m_value);
}
#endif
else if (opnd->IsRegOpnd() && opnd->AsRegOpnd()->m_sym->IsSingleDef())
{
if (valueInfo->IsBoolean())
{
IR::Instr * defInstr = opnd->AsRegOpnd()->m_sym->GetInstrDef();
if (defInstr->m_opcode != Js::OpCode::Ld_A || !defInstr->GetSrc1()->IsAddrOpnd())
{
return nullptr;
}
Assert(defInstr->GetSrc1()->AsAddrOpnd()->IsVar());
return defInstr->GetSrc1()->AsAddrOpnd()->m_address;
}
else if (valueInfo->IsUndefined())
{
return (Js::Var)this->func->GetScriptContextInfo()->GetUndefinedAddr();
}
else if (valueInfo->IsNull())
{
return (Js::Var)this->func->GetScriptContextInfo()->GetNullAddr();
}
#if FLOATVAR
else if (valueInfo->IsFloat())
{
IR::Instr * defInstr = opnd->AsRegOpnd()->m_sym->GetInstrDef();
if ((defInstr->m_opcode == Js::OpCode::LdC_F8_R8 || defInstr->m_opcode == Js::OpCode::LdC_A_R8) && defInstr->GetSrc1()->IsFloatConstOpnd())
{
return Js::JavascriptNumber::ToVar(defInstr->GetSrc1()->AsFloatConstOpnd()->m_value);
}
}
#endif
}
return nullptr;
}
namespace
{
bool TryCompIntAndFloat(bool * result, Js::Var left, Js::Var right)
{
if (Js::TaggedInt::Is(left))
{
// If both are tagged ints we should not get here.
Assert(!Js::TaggedInt::Is(right));
if (Js::JavascriptNumber::Is_NoTaggedIntCheck(right))
{
double value = Js::JavascriptNumber::GetValue(right);
*result = (Js::TaggedInt::ToInt32(left) == value);
return true;
}
}
return false;
}
bool Op_JitEq(bool * result, Value * src1Val, Value * src2Val, Js::Var src1Var, Js::Var src2Var, Func * func, bool isStrict)
{
Assert(src1Val != nullptr && src2Val != nullptr);
Assert(src1Var != nullptr && src2Var != nullptr);
if (src1Var == src2Var)
{
if (Js::TaggedInt::Is(src1Var))
{
*result = true;
return true;
}
if (!isStrict && src1Val->GetValueInfo()->IsNotFloat())
{
// If the vars are equal and they are not NaN, non-strict equal returns true. Not float guarantees not NaN.
*result = true;
return true;
}
#if FLOATVAR
if (Js::JavascriptNumber::Is_NoTaggedIntCheck(src1Var))
{
*result = !Js::JavascriptNumber::IsNan(Js::JavascriptNumber::GetValue(src1Var));
return true;
}
#endif
if (src1Var == reinterpret_cast<Js::Var>(func->GetScriptContextInfo()->GetTrueAddr()) ||
src1Var == reinterpret_cast<Js::Var>(func->GetScriptContextInfo()->GetFalseAddr()) ||
src1Var == reinterpret_cast<Js::Var>(func->GetScriptContextInfo()->GetNullAddr()) ||
src1Var == reinterpret_cast<Js::Var>(func->GetScriptContextInfo()->GetUndefinedAddr()))
{
*result = true;
return true;
}
// Other var comparisons require the runtime to prove.
return false;
}
#if FLOATVAR
if (TryCompIntAndFloat(result, src1Var, src2Var) || TryCompIntAndFloat(result, src2Var, src1Var))
{
return true;
}
#endif
return false;
}
bool Op_JitNeq(bool * result, Value * src1Val, Value * src2Val, Js::Var src1Var, Js::Var src2Var, Func * func, bool isStrict)
{
if (Op_JitEq(result, src1Val, src2Val, src1Var, src2Var, func, isStrict))
{
*result = !*result;
return true;
}
return false;
}
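// Returns true when one source is statically a number other than 0 or 1 while the other is a
// boolean; loose equality is then provably false, since booleans coerce only to 0 or 1.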
bool BoolAndIntStaticAndTypeMismatch(Value* src1Val, Value* src2Val, Js::Var src1Var, Js::Var src2Var)
{
ValueInfo *src1ValInfo = src1Val->GetValueInfo();
ValueInfo *src2ValInfo = src2Val->GetValueInfo();
return (src1ValInfo->IsNumber() && src1Var && src2ValInfo->IsBoolean() && src1Var != Js::TaggedInt::ToVarUnchecked(0) && src1Var != Js::TaggedInt::ToVarUnchecked(1)) ||
(src2ValInfo->IsNumber() && src2Var && src1ValInfo->IsBoolean() && src2Var != Js::TaggedInt::ToVarUnchecked(0) && src2Var != Js::TaggedInt::ToVarUnchecked(1));
}
}
bool
GlobOpt::CanProveConditionalBranch(IR::Instr *instr, Value *src1Val, Value *src2Val, Js::Var src1Var, Js::Var src2Var, bool *result)
{
auto AreSourcesEqual = [&](Value * val1, Value * val2, bool undefinedCmp) -> bool
{
// NaN !== NaN, and objects can have valueOf/toString
if (val1->IsEqualTo(val2))
{
if (val1->GetValueInfo()->IsUndefined())
{
return undefinedCmp;
}
ValueInfo * valInfo = val1->GetValueInfo();
return !valInfo->HasBeenUndefined() && valInfo->IsPrimitive() && valInfo->IsNotFloat();
}
return false;
};
// Make sure GetConstantVar only returns primitives.
// TODO: OOP JIT, enable these asserts
//Assert(!src1Var || !Js::JavascriptOperators::IsObject(src1Var));
//Assert(!src2Var || !Js::JavascriptOperators::IsObject(src2Var));
int64 left64, right64;
int32 left, right;
int32 constVal;
switch (instr->m_opcode)
{
#define BRANCHSIGNED(OPCODE,CMP,TYPE,UNSIGNEDNESS,UNDEFINEDCMP) \
case Js::OpCode::##OPCODE: \
if (src1Val && src2Val) \
{ \
if (src1Val->GetValueInfo()->TryGetIntConstantValue(&left, UNSIGNEDNESS) && \
src2Val->GetValueInfo()->TryGetIntConstantValue(&right, UNSIGNEDNESS)) \
{ \
*result = (TYPE)left CMP(TYPE)right; \
} \
if (src1Val->GetValueInfo()->TryGetInt64ConstantValue(&left64, UNSIGNEDNESS) && \
src2Val->GetValueInfo()->TryGetInt64ConstantValue(&right64, UNSIGNEDNESS)) \
{ \
*result = (TYPE)left64 CMP(TYPE)right64; \
} \
else if (AreSourcesEqual(src1Val, src2Val, UNDEFINEDCMP)) \
{ \
*result = 0 CMP 0; \
} \
else \
{ \
return false; \
} \
} \
else \
{ \
return false; \
} \
break;
BRANCHSIGNED(BrEq_I4, == , int64, false, true)
BRANCHSIGNED(BrGe_I4, >= , int64, false, false)
BRANCHSIGNED(BrGt_I4, > , int64, false, false)
BRANCHSIGNED(BrLt_I4, < , int64, false, false)
BRANCHSIGNED(BrLe_I4, <= , int64, false, false)
BRANCHSIGNED(BrNeq_I4, != , int64, false, false)
BRANCHSIGNED(BrUnGe_I4, >= , uint64, true, false)
BRANCHSIGNED(BrUnGt_I4, > , uint64, true, false)
BRANCHSIGNED(BrUnLt_I4, < , uint64, true, false)
BRANCHSIGNED(BrUnLe_I4, <= , uint64, true, false)
#undef BRANCHSIGNED
#define BRANCH(OPCODE,CMP,VARCMPFUNC,UNDEFINEDCMP) \
case Js::OpCode::##OPCODE: \
if (src1Val && src2Val && src1Val->GetValueInfo()->TryGetIntConstantValue(&left) && \
src2Val->GetValueInfo()->TryGetIntConstantValue(&right)) \
{ \
*result = left CMP right; \
} \
else if (src1Val && src2Val && AreSourcesEqual(src1Val, src2Val, UNDEFINEDCMP)) \
{ \
*result = 0 CMP 0; \
} \
else if (src1Var && src2Var) \
{ \
if (func->IsOOPJIT() || !CONFIG_FLAG(OOPJITMissingOpts)) \
{ \
return false; \
} \
*result = VARCMPFUNC(src1Var, src2Var, this->func->GetScriptContext()); \
} \
else \
{ \
return false; \
} \
break;
BRANCH(BrGe_A, >= , Js::JavascriptOperators::GreaterEqual, /*undefinedEquality*/ false)
BRANCH(BrNotGe_A, <, !Js::JavascriptOperators::GreaterEqual, false)
BRANCH(BrLt_A, <, Js::JavascriptOperators::Less, false)
BRANCH(BrNotLt_A, >= , !Js::JavascriptOperators::Less, false)
BRANCH(BrGt_A, >, Js::JavascriptOperators::Greater, false)
BRANCH(BrNotGt_A, <= , !Js::JavascriptOperators::Greater, false)
BRANCH(BrLe_A, <= , Js::JavascriptOperators::LessEqual, false)
BRANCH(BrNotLe_A, >, !Js::JavascriptOperators::LessEqual, false)
#undef BRANCH
case Js::OpCode::BrEq_A:
case Js::OpCode::BrNotNeq_A:
if (src1Val && src2Val && src1Val->GetValueInfo()->TryGetIntConstantValue(&left) &&
src2Val->GetValueInfo()->TryGetIntConstantValue(&right))
{
*result = left == right;
}
else if (src1Val && src2Val && AreSourcesEqual(src1Val, src2Val, true))
{
*result = true;
}
else if (!src1Var || !src2Var)
{
if (BoolAndIntStaticAndTypeMismatch(src1Val, src2Val, src1Var, src2Var))
{
*result = false;
}
else
{
return false;
}
}
else
{
if (!Op_JitEq(result, src1Val, src2Val, src1Var, src2Var, this->func, false /* isStrict */))
{
return false;
}
}
break;
case Js::OpCode::BrNeq_A:
case Js::OpCode::BrNotEq_A:
if (src1Val && src2Val && src1Val->GetValueInfo()->TryGetIntConstantValue(&left) &&
src2Val->GetValueInfo()->TryGetIntConstantValue(&right))
{
*result = left != right;
}
else if (src1Val && src2Val && AreSourcesEqual(src1Val, src2Val, true))
{
*result = false;
}
else if (!src1Var || !src2Var)
{
if (BoolAndIntStaticAndTypeMismatch(src1Val, src2Val, src1Var, src2Var))
{
*result = true;
}
else
{
return false;
}
}
else
{
if (!Op_JitNeq(result, src1Val, src2Val, src1Var, src2Var, this->func, false /* isStrict */))
{
return false;
}
}
break;
case Js::OpCode::BrSrEq_A:
case Js::OpCode::BrSrNotNeq_A:
if (!src1Var || !src2Var)
{
ValueInfo *src1ValInfo = src1Val->GetValueInfo();
ValueInfo *src2ValInfo = src2Val->GetValueInfo();
if (
(src1ValInfo->IsUndefined() && src2ValInfo->IsDefinite() && !src2ValInfo->HasBeenUndefined()) ||
(src1ValInfo->IsNull() && src2ValInfo->IsDefinite() && !src2ValInfo->HasBeenNull()) ||
(src1ValInfo->IsBoolean() && src2ValInfo->IsDefinite() && !src2ValInfo->HasBeenBoolean()) ||
(src1ValInfo->IsNumber() && src2ValInfo->IsDefinite() && !src2ValInfo->HasBeenNumber()) ||
(src1ValInfo->IsString() && src2ValInfo->IsDefinite() && !src2ValInfo->HasBeenString()) ||
(src2ValInfo->IsUndefined() && src1ValInfo->IsDefinite() && !src1ValInfo->HasBeenUndefined()) ||
(src2ValInfo->IsNull() && src1ValInfo->IsDefinite() && !src1ValInfo->HasBeenNull()) ||
(src2ValInfo->IsBoolean() && src1ValInfo->IsDefinite() && !src1ValInfo->HasBeenBoolean()) ||
(src2ValInfo->IsNumber() && src1ValInfo->IsDefinite() && !src1ValInfo->HasBeenNumber()) ||
(src2ValInfo->IsString() && src1ValInfo->IsDefinite() && !src1ValInfo->HasBeenString())
)
{
*result = false;
}
else if (AreSourcesEqual(src1Val, src2Val, true))
{
*result = true;
}
else
{
return false;
}
}
else
{
if (!Op_JitEq(result, src1Val, src2Val, src1Var, src2Var, this->func, true /* isStrict */))
{
return false;
}
}
break;
case Js::OpCode::BrSrNeq_A:
case Js::OpCode::BrSrNotEq_A:
if (!src1Var || !src2Var)
{
ValueInfo *src1ValInfo = src1Val->GetValueInfo();
ValueInfo *src2ValInfo = src2Val->GetValueInfo();
if (
(src1ValInfo->IsUndefined() && src2ValInfo->IsDefinite() && !src2ValInfo->HasBeenUndefined()) ||
(src1ValInfo->IsNull() && src2ValInfo->IsDefinite() && !src2ValInfo->HasBeenNull()) ||
(src1ValInfo->IsBoolean() && src2ValInfo->IsDefinite() && !src2ValInfo->HasBeenBoolean()) ||
(src1ValInfo->IsNumber() && src2ValInfo->IsDefinite() && !src2ValInfo->HasBeenNumber()) ||
(src1ValInfo->IsString() && src2ValInfo->IsDefinite() && !src2ValInfo->HasBeenString()) ||
(src2ValInfo->IsUndefined() && src1ValInfo->IsDefinite() && !src1ValInfo->HasBeenUndefined()) ||
(src2ValInfo->IsNull() && src1ValInfo->IsDefinite() && !src1ValInfo->HasBeenNull()) ||
(src2ValInfo->IsBoolean() && src1ValInfo->IsDefinite() && !src1ValInfo->HasBeenBoolean()) ||
(src2ValInfo->IsNumber() && src1ValInfo->IsDefinite() && !src1ValInfo->HasBeenNumber()) ||
(src2ValInfo->IsString() && src1ValInfo->IsDefinite() && !src1ValInfo->HasBeenString())
)
{
*result = true;
}
else if (AreSourcesEqual(src1Val, src2Val, true))
{
*result = false;
}
else
{
return false;
}
}
else
{
if (!Op_JitNeq(result, src1Val, src2Val, src1Var, src2Var, this->func, true /* isStrict */))
{
return false;
}
}
break;
case Js::OpCode::BrFalse_A:
case Js::OpCode::BrTrue_A:
{
ValueInfo *const src1ValueInfo = src1Val->GetValueInfo();
if (src1ValueInfo->IsNull() || src1ValueInfo->IsUndefined())
{
*result = instr->m_opcode == Js::OpCode::BrFalse_A;
break;
}
if (src1ValueInfo->IsObject() && src1ValueInfo->GetObjectType() > ObjectType::Object)
{
// Specific object types that are tracked are equivalent to 'true'
*result = instr->m_opcode == Js::OpCode::BrTrue_A;
break;
}
if (!src1Var)
{
return false;
}
// Compute *result as if the branch were BrTrue_A; it is negated below for BrFalse_A
if (src1Var == reinterpret_cast<Js::Var>(this->func->GetScriptContextInfo()->GetTrueAddr()))
{
*result = true;
}
else if (src1Var == reinterpret_cast<Js::Var>(this->func->GetScriptContextInfo()->GetFalseAddr()))
{
*result = false;
}
else if (Js::TaggedInt::Is(src1Var))
{
*result = (src1Var != reinterpret_cast<Js::Var>(Js::AtomTag_IntPtr));
}
#if FLOATVAR
else if (Js::JavascriptNumber::Is_NoTaggedIntCheck(src1Var))
{
double value = Js::JavascriptNumber::GetValue(src1Var);
*result = (!Js::JavascriptNumber::IsNan(value)) && (!Js::JavascriptNumber::IsZero(value));
}
#endif
else
{
return false;
}
if (instr->m_opcode == Js::OpCode::BrFalse_A)
{
*result = !(*result);
}
break;
}
case Js::OpCode::BrFalse_I4:
{
constVal = 0;
if (!src1Val->GetValueInfo()->TryGetIntConstantValue(&constVal))
{
return false;
}
*result = constVal == 0;
break;
}
case Js::OpCode::BrOnObject_A:
{
ValueInfo *const src1ValueInfo = src1Val->GetValueInfo();
if (!src1ValueInfo->IsDefinite())
{
return false;
}
if (src1ValueInfo->IsPrimitive())
{
*result = false;
}
else
{
if (src1ValueInfo->HasBeenPrimitive())
{
return false;
}
*result = true;
}
break;
}
default:
return false;
}
return true;
}
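// Tries to prove the outcome of a conditional branch from the sources' constant values and value
// info via CanProveConditionalBranch; if provable, the branch is const-folded (OptConstFoldBr)
// and true is returned, otherwise the branch is left unchanged.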
bool
GlobOpt::OptConstFoldBranch(IR::Instr *instr, Value *src1Val, Value*src2Val, Value **pDstVal)
{
if (!src1Val)
{
return false;
}
Js::Var src1Var = this->GetConstantVar(instr->GetSrc1(), src1Val);
Js::Var src2Var = nullptr;
if (instr->GetSrc2())
{
if (!src2Val)
{
return false;
}
src2Var = this->GetConstantVar(instr->GetSrc2(), src2Val);
}
bool result;
if (!CanProveConditionalBranch(instr, src1Val, src2Val, src1Var, src2Var, &result))
{
return false;
}
this->OptConstFoldBr(!!result, instr);
return true;
}
bool
GlobOpt::OptConstFoldUnary(
IR::Instr * *pInstr,
const int32 intConstantValue,
const bool isUsingOriginalSrc1Value,
Value **pDstVal)
{
IR::Instr * &instr = *pInstr;
int32 value = 0;
IR::Opnd *constOpnd;
bool isInt = true;
bool doSetDstVal = true;
FloatConstType fValue = 0.0;
if (!DoConstFold())
{
return false;
}
if (instr->GetDst() && !instr->GetDst()->IsRegOpnd())
{
return false;
}
switch(instr->m_opcode)
{
case Js::OpCode::Neg_A:
if (intConstantValue == 0)
{
// Could fold to -0.0
return false;
}
if (Int32Math::Neg(intConstantValue, &value))
{
return false;
}
break;
case Js::OpCode::Not_A:
Int32Math::Not(intConstantValue, &value);
break;
case Js::OpCode::Ld_A:
if (instr->HasBailOutInfo())
{
// The profile data for a switch expression can be string, and in GlobOpt we realize it is an int.
if(instr->GetBailOutKind() == IR::BailOutExpectingString)
{
throw Js::RejitException(RejitReason::DisableSwitchOptExpectingString);
}
Assert(instr->GetBailOutKind() == IR::BailOutExpectingInteger);
instr->ClearBailOutInfo();
}
value = intConstantValue;
if(isUsingOriginalSrc1Value)
{
doSetDstVal = false; // Let OptDst do it by copying src1Val
}
break;
case Js::OpCode::Conv_Num:
case Js::OpCode::LdC_A_I4:
value = intConstantValue;
if(isUsingOriginalSrc1Value)
{
doSetDstVal = false; // Let OptDst do it by copying src1Val
}
break;
case Js::OpCode::Incr_A:
if (Int32Math::Inc(intConstantValue, &value))
{
return false;
}
break;
case Js::OpCode::Decr_A:
if (Int32Math::Dec(intConstantValue, &value))
{
return false;
}
break;
case Js::OpCode::InlineMathAcos:
fValue = Js::Math::Acos((double)intConstantValue);
isInt = false;
break;
case Js::OpCode::InlineMathAsin:
fValue = Js::Math::Asin((double)intConstantValue);
isInt = false;
break;
case Js::OpCode::InlineMathAtan:
fValue = Js::Math::Atan((double)intConstantValue);
isInt = false;
break;
case Js::OpCode::InlineMathCos:
fValue = Js::Math::Cos((double)intConstantValue);
isInt = false;
break;
case Js::OpCode::InlineMathExp:
fValue = Js::Math::Exp((double)intConstantValue);
isInt = false;
break;
case Js::OpCode::InlineMathLog:
fValue = Js::Math::Log((double)intConstantValue);
isInt = false;
break;
case Js::OpCode::InlineMathSin:
fValue = Js::Math::Sin((double)intConstantValue);
isInt = false;
break;
case Js::OpCode::InlineMathSqrt:
fValue = ::sqrt((double)intConstantValue);
isInt = false;
break;
case Js::OpCode::InlineMathTan:
fValue = ::tan((double)intConstantValue);
isInt = false;
break;
case Js::OpCode::InlineMathFround:
fValue = (double) (float) intConstantValue;
isInt = false;
break;
case Js::OpCode::InlineMathAbs:
if (intConstantValue == INT32_MIN)
{
if (instr->GetDst()->IsInt32())
{
// if dst is an int (e.g. in asm.js), we should coerce it, not convert to float
value = static_cast<int32>(2147483648U);
}
else
{
// Rejit with AggressiveIntTypeSpecDisabled for Math.abs(INT32_MIN) because it causes dst
// to be float-typed, which could differ from the previous type spec result in the loop prepass
throw Js::RejitException(RejitReason::AggressiveIntTypeSpecDisabled);
}
}
else
{
value = ::abs(intConstantValue);
}
break;
case Js::OpCode::InlineMathClz:
DWORD clz;
if (_BitScanReverse(&clz, intConstantValue))
{
value = 31 - clz;
}
else
{
value = 32;
}
instr->ClearBailOutInfo();
break;
case Js::OpCode::Ctz:
Assert(func->GetJITFunctionBody()->IsWasmFunction());
Assert(!instr->HasBailOutInfo());
DWORD ctz;
if (_BitScanForward(&ctz, intConstantValue))
{
value = ctz;
}
else
{
value = 32;
}
break;
case Js::OpCode::InlineMathFloor:
value = intConstantValue;
instr->ClearBailOutInfo();
break;
case Js::OpCode::InlineMathCeil:
value = intConstantValue;
instr->ClearBailOutInfo();
break;
case Js::OpCode::InlineMathRound:
value = intConstantValue;
instr->ClearBailOutInfo();
break;
case Js::OpCode::ToVar:
if (Js::TaggedInt::IsOverflow(intConstantValue))
{
return false;
}
else
{
value = intConstantValue;
instr->ClearBailOutInfo();
break;
}
default:
return false;
}
this->CaptureByteCodeSymUses(instr);
Assert(!instr->HasBailOutInfo()); // If we are, in fact, successful in constant folding the instruction, there is no point in having the bailoutinfo around anymore.
// Make sure that it is cleared if it was initially present.
if (!isInt)
{
value = (int32)fValue;
if (fValue == (double)value)
{
isInt = true;
}
}
if (isInt)
{
constOpnd = IR::IntConstOpnd::New(value, TyInt32, instr->m_func);
GOPT_TRACE(_u("Constant folding to %d\n"), value);
}
else
{
constOpnd = IR::FloatConstOpnd::New(fValue, TyFloat64, instr->m_func);
GOPT_TRACE(_u("Constant folding to %f\n"), fValue);
}
instr->ReplaceSrc1(constOpnd);
this->OptSrc(constOpnd, &instr);
IR::Opnd *dst = instr->GetDst();
Assert(dst->IsRegOpnd());
StackSym *dstSym = dst->AsRegOpnd()->m_sym;
if (isInt)
{
if (dstSym->IsSingleDef())
{
dstSym->SetIsIntConst(value);
}
if (doSetDstVal)
{
*pDstVal = GetIntConstantValue(value, instr, dst);
}
if (IsTypeSpecPhaseOff(this->func))
{
instr->m_opcode = Js::OpCode::LdC_A_I4;
this->ToVarRegOpnd(dst->AsRegOpnd(), this->currentBlock);
}
else
{
instr->m_opcode = Js::OpCode::Ld_I4;
this->ToInt32Dst(instr, dst->AsRegOpnd(), this->currentBlock);
StackSym * currDstSym = instr->GetDst()->AsRegOpnd()->m_sym;
if (currDstSym->IsSingleDef())
{
currDstSym->SetIsIntConst(value);
}
}
}
else
{
*pDstVal = NewFloatConstantValue(fValue, dst);
if (IsTypeSpecPhaseOff(this->func))
{
instr->m_opcode = Js::OpCode::LdC_A_R8;
this->ToVarRegOpnd(dst->AsRegOpnd(), this->currentBlock);
}
else
{
instr->m_opcode = Js::OpCode::LdC_F8_R8;
this->ToFloat64Dst(instr, dst->AsRegOpnd(), this->currentBlock);
}
}
InvalidateInductionVariables(instr);
return true;
}
//------------------------------------------------------------------------------------------------------
// Type specialization
//------------------------------------------------------------------------------------------------------
bool
GlobOpt::IsWorthSpecializingToInt32DueToSrc(IR::Opnd *const src, Value *const val)
{
Assert(src);
Assert(val);
ValueInfo *valueInfo = val->GetValueInfo();
Assert(valueInfo->IsLikelyInt());
// If it is not known that the operand is definitely an int, the operand is not already type-specialized, and it's not live
// in the loop landing pad (if we're in a loop), it's probably not worth type-specializing this instruction. The common case
// where type-specializing this would be bad is where the operations are entirely on properties or array elements, where the
// ratio of FromVars and ToVars to the number of actual operations is high, and the conversions would dominate the time
// spent. On the other hand, if we're using a function formal parameter more than once, it would probably be worth
// type-specializing it, hence the IsDead check on the operands.
return
valueInfo->IsInt() ||
valueInfo->HasIntConstantValue(true) ||
!src->GetIsDead() ||
!src->IsRegOpnd() ||
CurrentBlockData()->IsInt32TypeSpecialized(src->AsRegOpnd()->m_sym) ||
(this->currentBlock->loop && this->currentBlock->loop->landingPad->globOptData.IsLive(src->AsRegOpnd()->m_sym));
}
bool
GlobOpt::IsWorthSpecializingToInt32DueToDst(IR::Opnd *const dst)
{
Assert(dst);
const auto sym = dst->AsRegOpnd()->m_sym;
return
CurrentBlockData()->IsInt32TypeSpecialized(sym) ||
(this->currentBlock->loop && this->currentBlock->loop->landingPad->globOptData.IsLive(sym));
}
bool
GlobOpt::IsWorthSpecializingToInt32(IR::Instr *const instr, Value *const src1Val, Value *const src2Val)
{
Assert(instr);
const auto src1 = instr->GetSrc1();
const auto src2 = instr->GetSrc2();
// In addition to checking each operand and the destination, if for any reason we only have to do a maximum of two
// conversions instead of the worst-case 3 conversions, it's probably worth specializing.
if (IsWorthSpecializingToInt32DueToSrc(src1, src1Val) ||
(src2Val && IsWorthSpecializingToInt32DueToSrc(src2, src2Val)))
{
return true;
}
IR::Opnd *dst = instr->GetDst();
if (!dst || IsWorthSpecializingToInt32DueToDst(dst))
{
return true;
}
if (dst->IsEqual(src1) || (src2Val && (dst->IsEqual(src2) || src1->IsEqual(src2))))
{
return true;
}
IR::Instr *instrNext = instr->GetNextRealInstrOrLabel();
// Skip useless Ld_A's
do
{
switch (instrNext->m_opcode)
{
case Js::OpCode::Ld_A:
if (!dst->IsEqual(instrNext->GetSrc1()))
{
goto done;
}
dst = instrNext->GetDst();
break;
case Js::OpCode::LdFld:
case Js::OpCode::LdRootFld:
case Js::OpCode::LdRootFldForTypeOf:
case Js::OpCode::LdFldForTypeOf:
case Js::OpCode::LdElemI_A:
case Js::OpCode::ByteCodeUses:
break;
default:
goto done;
}
instrNext = instrNext->GetNextRealInstrOrLabel();
} while (true);
done:
// If the next instr could also be type specialized, then it is probably worth it.
if ((instrNext->GetSrc1() && dst->IsEqual(instrNext->GetSrc1())) || (instrNext->GetSrc2() && dst->IsEqual(instrNext->GetSrc2())))
{
switch (instrNext->m_opcode)
{
case Js::OpCode::Add_A:
case Js::OpCode::Sub_A:
case Js::OpCode::Mul_A:
case Js::OpCode::Div_A:
case Js::OpCode::Rem_A:
case Js::OpCode::Xor_A:
case Js::OpCode::And_A:
case Js::OpCode::Or_A:
case Js::OpCode::Shl_A:
case Js::OpCode::Shr_A:
case Js::OpCode::Incr_A:
case Js::OpCode::Decr_A:
case Js::OpCode::Neg_A:
case Js::OpCode::Not_A:
case Js::OpCode::Conv_Num:
case Js::OpCode::BrEq_I4:
case Js::OpCode::BrTrue_I4:
case Js::OpCode::BrFalse_I4:
case Js::OpCode::BrGe_I4:
case Js::OpCode::BrGt_I4:
case Js::OpCode::BrLt_I4:
case Js::OpCode::BrLe_I4:
case Js::OpCode::BrNeq_I4:
return true;
}
}
return false;
}
bool
GlobOpt::TypeSpecializeNumberUnary(IR::Instr *instr, Value *src1Val, Value **pDstVal)
{
Assert(src1Val->GetValueInfo()->IsNumber());
if (this->IsLoopPrePass())
{
return false;
}
switch (instr->m_opcode)
{
case Js::OpCode::Conv_Num:
// Optimize Conv_Num away since we know this is a number
instr->m_opcode = Js::OpCode::Ld_A;
return false;
}
return false;
}
bool
GlobOpt::TypeSpecializeUnary(
IR::Instr **pInstr,
Value **pSrc1Val,
Value **pDstVal,
Value *const src1OriginalVal,
bool *redoTypeSpecRef,
bool *const forceInvariantHoistingRef)
{
Assert(pSrc1Val);
Value *&src1Val = *pSrc1Val;
Assert(src1Val);
// We don't need to do typespec for asmjs
if (IsTypeSpecPhaseOff(this->func) || GetIsAsmJSFunc())
{
return false;
}
IR::Instr *&instr = *pInstr;
int32 min, max;
// Inline built-ins explicitly specify how srcs/dst must be specialized.
if (OpCodeAttr::IsInlineBuiltIn(instr->m_opcode))
{
TypeSpecializeInlineBuiltInUnary(pInstr, &src1Val, pDstVal, src1OriginalVal, redoTypeSpecRef);
return true;
}
// Consider: If type spec wasn't completely done, make sure that we don't type-spec the dst 2nd time.
if(instr->m_opcode == Js::OpCode::LdLen_A && TypeSpecializeLdLen(&instr, &src1Val, pDstVal, forceInvariantHoistingRef))
{
return true;
}
if (!src1Val->GetValueInfo()->GetIntValMinMax(&min, &max, this->DoAggressiveIntTypeSpec()))
{
src1Val = src1OriginalVal;
if (src1Val->GetValueInfo()->IsLikelyFloat())
{
// Try to type specialize to float
return this->TypeSpecializeFloatUnary(pInstr, src1Val, pDstVal);
}
else if (src1Val->GetValueInfo()->IsNumber())
{
return TypeSpecializeNumberUnary(instr, src1Val, pDstVal);
}
return TryTypeSpecializeUnaryToFloatHelper(pInstr, &src1Val, src1OriginalVal, pDstVal);
}
return this->TypeSpecializeIntUnary(pInstr, &src1Val, pDstVal, min, max, src1OriginalVal, redoTypeSpecRef);
}
// Type-specializes the sources and destination of an inline built-in unary instruction
// according to the built-in's flags.
void
GlobOpt::TypeSpecializeInlineBuiltInUnary(IR::Instr **pInstr, Value **pSrc1Val, Value **pDstVal, Value *const src1OriginalVal, bool *redoTypeSpecRef)
{
IR::Instr *&instr = *pInstr;
Assert(pSrc1Val);
Value *&src1Val = *pSrc1Val;
Assert(OpCodeAttr::IsInlineBuiltIn(instr->m_opcode));
Js::BuiltinFunction builtInId = Js::JavascriptLibrary::GetBuiltInInlineCandidateId(instr->m_opcode); // From actual instr, not profile based.
Assert(builtInId != Js::BuiltinFunction::None);
// Consider using different bailout for float/int FromVars, so that when the arg cannot be converted to number we don't disable
// type spec for other parts of the big function but rather just don't inline that built-in instr.
// E.g. could do that if the value is not likelyInt/likelyFloat.
Js::BuiltInFlags builtInFlags = Js::JavascriptLibrary::GetFlagsForBuiltIn(builtInId);
bool areAllArgsAlwaysFloat = (builtInFlags & Js::BuiltInFlags::BIF_Args) == Js::BuiltInFlags::BIF_TypeSpecUnaryToFloat;
if (areAllArgsAlwaysFloat)
{
// InlineMathAcos, InlineMathAsin, InlineMathAtan, InlineMathCos, InlineMathExp, InlineMathLog, InlineMathSin, InlineMathSqrt, InlineMathTan.
Assert(this->DoFloatTypeSpec());
// Type-spec the src.
src1Val = src1OriginalVal;
bool retVal = this->TypeSpecializeFloatUnary(pInstr, src1Val, pDstVal, /* skipDst = */ true);
AssertMsg(retVal, "For inline built-ins the args have to be type-specialized to float, but something failed during the process.");
// Type-spec the dst.
this->TypeSpecializeFloatDst(instr, nullptr, src1Val, nullptr, pDstVal);
}
else if (instr->m_opcode == Js::OpCode::InlineMathAbs)
{
// Consider the case when the value is unknown - because of bailout in abs we may disable type spec for the whole function which is too much.
// First, try int.
int minVal, maxVal;
bool shouldTypeSpecToInt = src1Val->GetValueInfo()->GetIntValMinMax(&minVal, &maxVal, /* doAggressiveIntTypeSpec = */ true);
if (shouldTypeSpecToInt)
{
Assert(this->DoAggressiveIntTypeSpec());
bool retVal = this->TypeSpecializeIntUnary(pInstr, &src1Val, pDstVal, minVal, maxVal, src1OriginalVal, redoTypeSpecRef, true);
AssertMsg(retVal, "For inline built-ins the args have to be type-specialized (int), but something failed during the process.");
if (!this->IsLoopPrePass())
{
// Create bailout for INT_MIN which does not have corresponding int value on the positive side.
// Check int range: if we know the range is out of overflow, we do not need the bail out at all.
if (minVal == INT32_MIN)
{
GenerateBailAtOperation(&instr, IR::BailOnIntMin);
}
}
// Account for ::abs(INT_MIN) == INT_MIN (which is less than 0).
maxVal = ::max(
::abs(Int32Math::NearestInRangeTo(minVal, INT_MIN + 1, INT_MAX)),
::abs(Int32Math::NearestInRangeTo(maxVal, INT_MIN + 1, INT_MAX)));
minVal = minVal >= 0 ? minVal : 0;
this->TypeSpecializeIntDst(instr, instr->m_opcode, nullptr, src1Val, nullptr, IR::BailOutInvalid, minVal, maxVal, pDstVal);
}
else
{
// If we couldn't do int, do float.
Assert(this->DoFloatTypeSpec());
src1Val = src1OriginalVal;
bool retVal = this->TypeSpecializeFloatUnary(pInstr, src1Val, pDstVal, true);
AssertMsg(retVal, "For inline built-ins the args have to be type-specialized (float), but something failed during the process.");
this->TypeSpecializeFloatDst(instr, nullptr, src1Val, nullptr, pDstVal);
}
}
else if (instr->m_opcode == Js::OpCode::InlineMathFloor || instr->m_opcode == Js::OpCode::InlineMathCeil || instr->m_opcode == Js::OpCode::InlineMathRound)
{
// Type specialize src to float
src1Val = src1OriginalVal;
bool retVal = this->TypeSpecializeFloatUnary(pInstr, src1Val, pDstVal, /* skipDst = */ true);
AssertMsg(retVal, "For inline Math.floor and Math.ceil the src has to be type-specialized to float, but something failed during the process.");
// Type specialize dst to int
this->TypeSpecializeIntDst(
instr,
instr->m_opcode,
nullptr,
src1Val,
nullptr,
IR::BailOutInvalid,
INT32_MIN,
INT32_MAX,
pDstVal);
}
else if(instr->m_opcode == Js::OpCode::InlineArrayPop)
{
IR::Opnd *const thisOpnd = instr->GetSrc1();
Assert(thisOpnd);
// Ensure src1 (Array) is a var
this->ToVarUses(instr, thisOpnd, false, src1Val);
if(!this->IsLoopPrePass() && thisOpnd->GetValueType().IsLikelyNativeArray())
{
            // Bail out if, at run time, there is an illegal access or a mismatch in the native array type that the code is optimized for.
GenerateBailAtOperation(&instr, IR::BailOutConventionalNativeArrayAccessOnly);
}
if(!instr->GetDst())
{
return;
}
        // Try type specializing the element (the item returned by Pop) based on the array's profile data.
if(thisOpnd->GetValueType().IsLikelyNativeIntArray())
{
this->TypeSpecializeIntDst(instr, instr->m_opcode, nullptr, nullptr, nullptr, IR::BailOutInvalid, INT32_MIN, INT32_MAX, pDstVal);
}
else if(thisOpnd->GetValueType().IsLikelyNativeFloatArray())
{
this->TypeSpecializeFloatDst(instr, nullptr, nullptr, nullptr, pDstVal);
}
else
{
            // The element is not yet type specialized at this point. Ensure the element is a var.
if(instr->GetDst()->IsRegOpnd())
{
this->ToVarRegOpnd(instr->GetDst()->AsRegOpnd(), currentBlock);
}
}
}
else if (instr->m_opcode == Js::OpCode::InlineMathClz)
{
Assert(this->DoAggressiveIntTypeSpec());
Assert(this->DoLossyIntTypeSpec());
        // Type specialize to int
bool retVal = this->TypeSpecializeIntUnary(pInstr, &src1Val, pDstVal, INT32_MIN, INT32_MAX, src1OriginalVal, redoTypeSpecRef);
AssertMsg(retVal, "For clz32, the arg has to be type-specialized to int.");
}
else
{
AssertMsg(FALSE, "Unsupported built-in!");
}
}
void
GlobOpt::TypeSpecializeInlineBuiltInBinary(IR::Instr **pInstr, Value *src1Val, Value* src2Val, Value **pDstVal, Value *const src1OriginalVal, Value *const src2OriginalVal)
{
IR::Instr *&instr = *pInstr;
Assert(OpCodeAttr::IsInlineBuiltIn(instr->m_opcode));
switch(instr->m_opcode)
{
case Js::OpCode::InlineMathAtan2:
{
Js::BuiltinFunction builtInId = Js::JavascriptLibrary::GetBuiltInInlineCandidateId(instr->m_opcode); // From actual instr, not profile based.
Js::BuiltInFlags builtInFlags = Js::JavascriptLibrary::GetFlagsForBuiltIn(builtInId);
bool areAllArgsAlwaysFloat = (builtInFlags & Js::BuiltInFlags::BIF_TypeSpecAllToFloat) != 0;
Assert(areAllArgsAlwaysFloat);
Assert(this->DoFloatTypeSpec());
// Type-spec the src1, src2 and dst.
src1Val = src1OriginalVal;
src2Val = src2OriginalVal;
bool retVal = this->TypeSpecializeFloatBinary(instr, src1Val, src2Val, pDstVal);
        AssertMsg(retVal, "For atan2 the args have to be type-specialized to float, but something failed during the process.");
break;
}
case Js::OpCode::InlineMathPow:
{
#ifndef _M_ARM32_OR_ARM64
if (src2Val->GetValueInfo()->IsLikelyInt())
{
bool lossy = false;
this->ToInt32(instr, instr->GetSrc2(), this->currentBlock, src2Val, nullptr, lossy);
IR::Opnd* src1 = instr->GetSrc1();
int32 valueMin, valueMax;
if (src1Val->GetValueInfo()->IsLikelyInt() &&
this->DoPowIntIntTypeSpec() &&
src2Val->GetValueInfo()->GetIntValMinMax(&valueMin, &valueMax, this->DoAggressiveIntTypeSpec()) &&
valueMin >= 0)
{
this->ToInt32(instr, src1, this->currentBlock, src1Val, nullptr, lossy);
this->TypeSpecializeIntDst(instr, instr->m_opcode, nullptr, src1Val, src2Val, IR::BailOutInvalid, INT32_MIN, INT32_MAX, pDstVal);
if(!this->IsLoopPrePass())
{
GenerateBailAtOperation(&instr, IR::BailOutOnPowIntIntOverflow);
}
}
else
{
this->ToFloat64(instr, src1, this->currentBlock, src1Val, nullptr, IR::BailOutPrimitiveButString);
TypeSpecializeFloatDst(instr, nullptr, src1Val, src2Val, pDstVal);
}
}
else
{
#endif
this->TypeSpecializeFloatBinary(instr, src1Val, src2Val, pDstVal);
#ifndef _M_ARM32_OR_ARM64
}
#endif
break;
}
case Js::OpCode::InlineMathImul:
{
Assert(this->DoAggressiveIntTypeSpec());
Assert(this->DoLossyIntTypeSpec());
        // Type specialize to int
bool retVal = this->TypeSpecializeIntBinary(pInstr, src1Val, src2Val, pDstVal, INT32_MIN, INT32_MAX, false /* skipDst */);
AssertMsg(retVal, "For imul, the args have to be type-specialized to int but something failed during the process.");
break;
}
case Js::OpCode::InlineMathMin:
case Js::OpCode::InlineMathMax:
{
if(src1Val->GetValueInfo()->IsLikelyInt() && src2Val->GetValueInfo()->IsLikelyInt())
{
// Compute resulting range info
int32 min1 = INT32_MIN;
int32 max1 = INT32_MAX;
int32 min2 = INT32_MIN;
int32 max2 = INT32_MAX;
int32 newMin, newMax;
Assert(this->DoAggressiveIntTypeSpec());
src1Val->GetValueInfo()->GetIntValMinMax(&min1, &max1, this->DoAggressiveIntTypeSpec());
src2Val->GetValueInfo()->GetIntValMinMax(&min2, &max2, this->DoAggressiveIntTypeSpec());
if (instr->m_opcode == Js::OpCode::InlineMathMin)
{
newMin = min(min1, min2);
newMax = min(max1, max2);
}
else
{
Assert(instr->m_opcode == Js::OpCode::InlineMathMax);
newMin = max(min1, min2);
newMax = max(max1, max2);
}
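            // For example, for Math.min with src1 in [0, 10] and src2 in [5, 20],
            // the result is at least min(0, 5) == 0 and at most min(10, 20) == 10,
            // so the new range is [0, 10]; Math.max mirrors this with max().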
// Type specialize to int
bool retVal = this->TypeSpecializeIntBinary(pInstr, src1Val, src2Val, pDstVal, newMin, newMax, false /* skipDst */);
AssertMsg(retVal, "For min and max, the args have to be type-specialized to int if any one of the sources is an int, but something failed during the process.");
}
// Couldn't type specialize to int, type specialize to float
else
{
Assert(this->DoFloatTypeSpec());
src1Val = src1OriginalVal;
src2Val = src2OriginalVal;
bool retVal = this->TypeSpecializeFloatBinary(instr, src1Val, src2Val, pDstVal);
AssertMsg(retVal, "For min and max, the args have to be type-specialized to float if any one of the sources is a float, but something failed during the process.");
}
break;
}
case Js::OpCode::InlineArrayPush:
{
IR::Opnd *const thisOpnd = instr->GetSrc1();
Assert(thisOpnd);
if(instr->GetDst() && instr->GetDst()->IsRegOpnd())
{
            // Set the dst as live here, since the built-ins return early from the type specialization functions, before the dst is marked as live.
            // Also, we are not specializing the dst separately; the dst is skipped when the instruction is specialized above.
this->ToVarRegOpnd(instr->GetDst()->AsRegOpnd(), currentBlock);
}
// Ensure src1 (Array) is a var
this->ToVarUses(instr, thisOpnd, false, src1Val);
if(!this->IsLoopPrePass())
{
if(thisOpnd->GetValueType().IsLikelyNativeArray())
{
                // Bail out if, at run time, there is an illegal access or a mismatch in the native array type that the code is optimized for.
GenerateBailAtOperation(&instr, IR::BailOutConventionalNativeArrayAccessOnly);
}
else
{
GenerateBailAtOperation(&instr, IR::BailOutOnImplicitCallsPreOp);
}
}
// Try Type Specializing the element based on the array's profile data.
if(thisOpnd->GetValueType().IsLikelyNativeFloatArray())
{
src1Val = src1OriginalVal;
src2Val = src2OriginalVal;
}
if((thisOpnd->GetValueType().IsLikelyNativeIntArray() && this->TypeSpecializeIntBinary(pInstr, src1Val, src2Val, pDstVal, INT32_MIN, INT32_MAX, true))
|| (thisOpnd->GetValueType().IsLikelyNativeFloatArray() && this->TypeSpecializeFloatBinary(instr, src1Val, src2Val, pDstVal)))
{
break;
}
// The Element is not yet type specialized. Ensure element is a var
this->ToVarUses(instr, instr->GetSrc2(), false, src2Val);
break;
}
}
}
void
GlobOpt::TypeSpecializeInlineBuiltInDst(IR::Instr **pInstr, Value **pDstVal)
{
IR::Instr *&instr = *pInstr;
Assert(OpCodeAttr::IsInlineBuiltIn(instr->m_opcode));
if (instr->m_opcode == Js::OpCode::InlineMathRandom)
{
Assert(this->DoFloatTypeSpec());
// Type specialize dst to float
this->TypeSpecializeFloatDst(instr, nullptr, nullptr, nullptr, pDstVal);
}
}
bool
GlobOpt::TryTypeSpecializeUnaryToFloatHelper(IR::Instr** pInstr, Value** pSrc1Val, Value* const src1OriginalVal, Value **pDstVal)
{
// It has been determined that this instruction cannot be int-specialized. We need to determine whether to attempt to
// float-specialize the instruction, or leave it unspecialized.
#if !INT32VAR
Value*& src1Val = *pSrc1Val;
if(src1Val->GetValueInfo()->IsLikelyUntaggedInt())
{
// An input range is completely outside the range of an int31. Even if the operation may overflow, it is
// unlikely to overflow on these operations, so we leave it unspecialized on 64-bit platforms. However, on
// 32-bit platforms, the value is untaggable and will be a JavascriptNumber, which is significantly slower to
// use in an unspecialized operation compared to a tagged int. So, try to float-specialize the instruction.
src1Val = src1OriginalVal;
return this->TypeSpecializeFloatUnary(pInstr, src1Val, pDstVal);
}
#endif
return false;
}
bool
GlobOpt::TypeSpecializeIntBinary(IR::Instr **pInstr, Value *src1Val, Value *src2Val, Value **pDstVal, int32 min, int32 max, bool skipDst /* = false */)
{
// Consider moving the code for int type spec-ing binary functions here.
IR::Instr *&instr = *pInstr;
bool lossy = false;
if(OpCodeAttr::IsInlineBuiltIn(instr->m_opcode))
{
if(instr->m_opcode == Js::OpCode::InlineArrayPush)
{
int32 intConstantValue;
bool isIntConstMissingItem = src2Val->GetValueInfo()->TryGetIntConstantValue(&intConstantValue);
if(isIntConstMissingItem)
{
isIntConstMissingItem = Js::SparseArraySegment<int>::IsMissingItem(&intConstantValue);
}
        // Don't specialize if the element is not likely int, or if it is an int constant that is a missing-item value.
if(!(src2Val->GetValueInfo()->IsLikelyInt()) || isIntConstMissingItem)
{
return false;
}
            // Although this is a binary instr, we only specialize the element operand (src2), not the array operand.
IR::Opnd * elementOpnd = instr->GetSrc2();
this->ToInt32(instr, elementOpnd, this->currentBlock, src2Val, nullptr, lossy);
}
else
{
IR::Opnd *src1 = instr->GetSrc1();
this->ToInt32(instr, src1, this->currentBlock, src1Val, nullptr, lossy);
IR::Opnd *src2 = instr->GetSrc2();
this->ToInt32(instr, src2, this->currentBlock, src2Val, nullptr, lossy);
}
if(!skipDst)
{
IR::Opnd *dst = instr->GetDst();
if (dst)
{
TypeSpecializeIntDst(instr, instr->m_opcode, nullptr, src1Val, src2Val, IR::BailOutInvalid, min, max, pDstVal);
}
}
return true;
}
else
{
AssertMsg(false, "Yet to move code for other binary functions here");
return false;
}
}
bool
GlobOpt::TypeSpecializeIntUnary(
IR::Instr **pInstr,
Value **pSrc1Val,
Value **pDstVal,
int32 min,
int32 max,
Value *const src1OriginalVal,
bool *redoTypeSpecRef,
bool skipDst /* = false */)
{
IR::Instr *&instr = *pInstr;
Assert(pSrc1Val);
Value *&src1Val = *pSrc1Val;
bool isTransfer = false;
Js::OpCode opcode;
int32 newMin, newMax;
bool lossy = false;
IR::BailOutKind bailOutKind = IR::BailOutInvalid;
bool ignoredIntOverflow = this->ignoredIntOverflowForCurrentInstr;
bool ignoredNegativeZero = false;
bool checkTypeSpecWorth = false;
if(instr->GetSrc1()->IsRegOpnd() && instr->GetSrc1()->AsRegOpnd()->m_sym->m_isNotNumber)
{
return TryTypeSpecializeUnaryToFloatHelper(pInstr, &src1Val, src1OriginalVal, pDstVal);
}
AddSubConstantInfo addSubConstantInfo;
switch(instr->m_opcode)
{
case Js::OpCode::Ld_A:
if (instr->GetSrc1()->IsRegOpnd())
{
StackSym *sym = instr->GetSrc1()->AsRegOpnd()->m_sym;
if (CurrentBlockData()->IsInt32TypeSpecialized(sym) == false)
{
// Type specializing an Ld_A isn't worth it, unless the src
// is already type specialized.
return false;
}
}
newMin = min;
newMax = max;
opcode = Js::OpCode::Ld_I4;
isTransfer = true;
break;
case Js::OpCode::Conv_Num:
newMin = min;
newMax = max;
opcode = Js::OpCode::Ld_I4;
isTransfer = true;
break;
case Js::OpCode::LdC_A_I4:
newMin = newMax = instr->GetSrc1()->AsIntConstOpnd()->AsInt32();
opcode = Js::OpCode::Ld_I4;
break;
case Js::OpCode::Neg_A:
if (min <= 0 && max >= 0)
{
if(instr->ShouldCheckForNegativeZero())
{
// -0 matters since the sym is not a local, or is used in a way in which -0 would differ from +0
if(!DoAggressiveIntTypeSpec())
{
// May result in -0
// Consider adding a dynamic check for src1 == 0
return TryTypeSpecializeUnaryToFloatHelper(pInstr, &src1Val, src1OriginalVal, pDstVal);
}
if(min == 0 && max == 0)
{
// Always results in -0
return TryTypeSpecializeUnaryToFloatHelper(pInstr, &src1Val, src1OriginalVal, pDstVal);
}
bailOutKind |= IR::BailOutOnNegativeZero;
}
else
{
ignoredNegativeZero = true;
}
}
if (Int32Math::Neg(min, &newMax))
{
if(instr->ShouldCheckForIntOverflow())
{
if(!DoAggressiveIntTypeSpec())
{
// May overflow
return TryTypeSpecializeUnaryToFloatHelper(pInstr, &src1Val, src1OriginalVal, pDstVal);
}
if(min == max)
{
// Always overflows
return TryTypeSpecializeUnaryToFloatHelper(pInstr, &src1Val, src1OriginalVal, pDstVal);
}
bailOutKind |= IR::BailOutOnOverflow;
newMax = INT32_MAX;
}
else
{
ignoredIntOverflow = true;
}
}
if (Int32Math::Neg(max, &newMin))
{
if(instr->ShouldCheckForIntOverflow())
{
if(!DoAggressiveIntTypeSpec())
{
// May overflow
return TryTypeSpecializeUnaryToFloatHelper(pInstr, &src1Val, src1OriginalVal, pDstVal);
}
bailOutKind |= IR::BailOutOnOverflow;
newMin = INT32_MAX;
}
else
{
ignoredIntOverflow = true;
}
}
if(!instr->ShouldCheckForIntOverflow() && newMin > newMax)
{
// When ignoring overflow, the range needs to account for overflow. Since MIN_INT is the only int32 value that
// overflows on Neg, and the value resulting from overflow is also MIN_INT, if calculating only the new min or new
// max overflowed but not both, then the new min will be greater than the new max. In that case we need to consider
// the full range of int32s as possible resulting values.
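            // For example, for the range [INT32_MIN, 5] with overflow ignored:
            // negating the min wraps newMax around to INT32_MIN, while negating the
            // max gives newMin == -5; newMin > newMax, so the full range is used.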
newMin = INT32_MIN;
newMax = INT32_MAX;
}
opcode = Js::OpCode::Neg_I4;
checkTypeSpecWorth = true;
break;
case Js::OpCode::Not_A:
if(!DoLossyIntTypeSpec())
{
return false;
}
this->PropagateIntRangeForNot(min, max, &newMin, &newMax);
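        // Bitwise not satisfies ~x == -x - 1, so the exact result range for an
        // input range [min, max] is [~max, ~min]; e.g. [0, 5] maps to [-6, -1].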
opcode = Js::OpCode::Not_I4;
lossy = true;
break;
case Js::OpCode::Incr_A:
do // while(false)
{
const auto CannotOverflowBasedOnRelativeBounds = [&]()
{
const ValueInfo *const src1ValueInfo = src1Val->GetValueInfo();
return
(src1ValueInfo->IsInt() || DoAggressiveIntTypeSpec()) &&
src1ValueInfo->IsIntBounded() &&
src1ValueInfo->AsIntBounded()->Bounds()->AddCannotOverflowBasedOnRelativeBounds(1);
};
if (Int32Math::Inc(min, &newMin))
{
if(CannotOverflowBasedOnRelativeBounds())
{
newMin = INT32_MAX;
}
else if(instr->ShouldCheckForIntOverflow())
{
// Always overflows
return TryTypeSpecializeUnaryToFloatHelper(pInstr, &src1Val, src1OriginalVal, pDstVal);
}
else
{
// When ignoring overflow, the range needs to account for overflow. For any Add or Sub, since overflow
// causes the value to wrap around, and we don't have a way to specify a lower and upper range of ints,
// we use the full range of int32s.
ignoredIntOverflow = true;
newMin = INT32_MIN;
newMax = INT32_MAX;
break;
}
}
if (Int32Math::Inc(max, &newMax))
{
if(CannotOverflowBasedOnRelativeBounds())
{
newMax = INT32_MAX;
}
else if(instr->ShouldCheckForIntOverflow())
{
if(!DoAggressiveIntTypeSpec())
{
// May overflow
return TryTypeSpecializeUnaryToFloatHelper(pInstr, &src1Val, src1OriginalVal, pDstVal);
}
bailOutKind |= IR::BailOutOnOverflow;
newMax = INT32_MAX;
}
else
{
// See comment about ignoring overflow above
ignoredIntOverflow = true;
newMin = INT32_MIN;
newMax = INT32_MAX;
break;
}
}
} while(false);
if(!ignoredIntOverflow && instr->GetSrc1()->IsRegOpnd())
{
addSubConstantInfo.Set(instr->GetSrc1()->AsRegOpnd()->m_sym, src1Val, min == max, 1);
}
opcode = Js::OpCode::Add_I4;
if (!this->IsLoopPrePass())
{
instr->SetSrc2(IR::IntConstOpnd::New(1, TyInt32, instr->m_func));
}
checkTypeSpecWorth = true;
break;
case Js::OpCode::Decr_A:
do // while(false)
{
const auto CannotOverflowBasedOnRelativeBounds = [&]()
{
const ValueInfo *const src1ValueInfo = src1Val->GetValueInfo();
return
(src1ValueInfo->IsInt() || DoAggressiveIntTypeSpec()) &&
src1ValueInfo->IsIntBounded() &&
src1ValueInfo->AsIntBounded()->Bounds()->SubCannotOverflowBasedOnRelativeBounds(1);
};
if (Int32Math::Dec(max, &newMax))
{
if(CannotOverflowBasedOnRelativeBounds())
{
newMax = INT32_MIN;
}
else if(instr->ShouldCheckForIntOverflow())
{
// Always overflows
return TryTypeSpecializeUnaryToFloatHelper(pInstr, &src1Val, src1OriginalVal, pDstVal);
}
else
{
// When ignoring overflow, the range needs to account for overflow. For any Add or Sub, since overflow
// causes the value to wrap around, and we don't have a way to specify a lower and upper range of ints, we
// use the full range of int32s.
ignoredIntOverflow = true;
newMin = INT32_MIN;
newMax = INT32_MAX;
break;
}
}
if (Int32Math::Dec(min, &newMin))
{
if(CannotOverflowBasedOnRelativeBounds())
{
newMin = INT32_MIN;
}
else if(instr->ShouldCheckForIntOverflow())
{
if(!DoAggressiveIntTypeSpec())
{
// May overflow
return TryTypeSpecializeUnaryToFloatHelper(pInstr, &src1Val, src1OriginalVal, pDstVal);
}
bailOutKind |= IR::BailOutOnOverflow;
newMin = INT32_MIN;
}
else
{
// See comment about ignoring overflow above
ignoredIntOverflow = true;
newMin = INT32_MIN;
newMax = INT32_MAX;
break;
}
}
} while(false);
if(!ignoredIntOverflow && instr->GetSrc1()->IsRegOpnd())
{
addSubConstantInfo.Set(instr->GetSrc1()->AsRegOpnd()->m_sym, src1Val, min == max, -1);
}
opcode = Js::OpCode::Sub_I4;
if (!this->IsLoopPrePass())
{
instr->SetSrc2(IR::IntConstOpnd::New(1, TyInt32, instr->m_func));
}
checkTypeSpecWorth = true;
break;
case Js::OpCode::BrFalse_A:
case Js::OpCode::BrTrue_A:
{
if(DoConstFold() && !IsLoopPrePass() && TryOptConstFoldBrFalse(instr, src1Val, min, max))
{
return true;
}
bool specialize = true;
if (!src1Val->GetValueInfo()->HasIntConstantValue() && instr->GetSrc1()->IsRegOpnd())
{
StackSym *sym = instr->GetSrc1()->AsRegOpnd()->m_sym;
if (CurrentBlockData()->IsInt32TypeSpecialized(sym) == false)
{
// Type specializing a BrTrue_A/BrFalse_A isn't worth it, unless the src
// is already type specialized
specialize = false;
}
}
if(instr->m_opcode == Js::OpCode::BrTrue_A)
{
UpdateIntBoundsForNotEqualBranch(src1Val, nullptr, 0);
opcode = Js::OpCode::BrTrue_I4;
}
else
{
UpdateIntBoundsForEqualBranch(src1Val, nullptr, 0);
opcode = Js::OpCode::BrFalse_I4;
}
if(!specialize)
{
return false;
}
newMin = 2; newMax = 1; // We'll assert if we make a range where min > max
break;
}
case Js::OpCode::MultiBr:
newMin = min;
newMax = max;
opcode = instr->m_opcode;
break;
case Js::OpCode::StElemI_A:
case Js::OpCode::StElemI_A_Strict:
case Js::OpCode::StElemC:
if(instr->GetDst()->AsIndirOpnd()->GetBaseOpnd()->GetValueType().IsLikelyAnyArrayWithNativeFloatValues())
{
src1Val = src1OriginalVal;
}
return TypeSpecializeStElem(pInstr, src1Val, pDstVal);
case Js::OpCode::NewScArray:
case Js::OpCode::NewScArrayWithMissingValues:
case Js::OpCode::InitFld:
case Js::OpCode::InitRootFld:
case Js::OpCode::StSlot:
case Js::OpCode::StSlotChkUndecl:
#if !FLOATVAR
case Js::OpCode::StSlotBoxTemp:
#endif
case Js::OpCode::StFld:
case Js::OpCode::StRootFld:
case Js::OpCode::StFldStrict:
case Js::OpCode::StRootFldStrict:
case Js::OpCode::ArgOut_A:
case Js::OpCode::ArgOut_A_Inline:
case Js::OpCode::ArgOut_A_FixupForStackArgs:
case Js::OpCode::ArgOut_A_Dynamic:
case Js::OpCode::ArgOut_A_FromStackArgs:
case Js::OpCode::ArgOut_A_SpreadArg:
// For this one we need to implement type specialization
//case Js::OpCode::ArgOut_A_InlineBuiltIn:
case Js::OpCode::Ret:
case Js::OpCode::LdElemUndef:
case Js::OpCode::LdElemUndefScoped:
return false;
default:
if (OpCodeAttr::IsInlineBuiltIn(instr->m_opcode))
{
newMin = min;
newMax = max;
opcode = instr->m_opcode;
break; // Note: we must keep checkTypeSpecWorth = false to make sure we never return false from this function.
}
return false;
}
// If this instruction is in a range of instructions where int overflow does not matter, we will still specialize it (won't
// leave it unspecialized based on heuristics), since it is most likely worth specializing, and the dst value needs to be
// guaranteed to be an int
if(checkTypeSpecWorth &&
!ignoredIntOverflow &&
!ignoredNegativeZero &&
instr->ShouldCheckForIntOverflow() &&
!IsWorthSpecializingToInt32(instr, src1Val))
{
// Even though type specialization is being skipped since it may not be worth it, the proper value should still be
// maintained so that the result may be type specialized later. An int value is not created for the dst in any of
// the following cases.
// - A bailout check is necessary to specialize this instruction. The bailout check is what guarantees the result to be
// an int, but since we're not going to specialize this instruction, there won't be a bailout check.
// - Aggressive int type specialization is disabled and we're in a loop prepass. We're conservative on dst values in
// that case, especially if the dst sym is live on the back-edge.
if(bailOutKind == IR::BailOutInvalid &&
instr->GetDst() &&
(DoAggressiveIntTypeSpec() || !this->IsLoopPrePass()))
{
*pDstVal = CreateDstUntransferredIntValue(newMin, newMax, instr, src1Val, nullptr);
}
if(instr->GetSrc2())
{
instr->FreeSrc2();
}
return false;
}
this->ignoredIntOverflowForCurrentInstr = ignoredIntOverflow;
this->ignoredNegativeZeroForCurrentInstr = ignoredNegativeZero;
{
// Try CSE again before modifying the IR, in case some attributes are required for successful CSE
Value *src1IndirIndexVal = nullptr;
Value *src2Val = nullptr;
if(CSEOptimize(currentBlock, &instr, &src1Val, &src2Val, &src1IndirIndexVal, true /* intMathExprOnly */))
{
*redoTypeSpecRef = true;
return false;
}
}
const Js::OpCode originalOpCode = instr->m_opcode;
if (!this->IsLoopPrePass())
{
// No re-write on prepass
instr->m_opcode = opcode;
}
Value *src1ValueToSpecialize = src1Val;
if(lossy)
{
// Lossy conversions to int32 must be done based on the original source values. For instance, if one of the values is a
// float constant with a value that fits in a uint32 but not an int32, and the instruction can ignore int overflow, the
// source value for the purposes of int specialization would have been changed to an int constant value by ignoring
// overflow. If we were to specialize the sym using the int constant value, it would be treated as a lossless
// conversion, but since there may be subsequent uses of the same float constant value that may not ignore overflow,
// this must be treated as a lossy conversion by specializing the sym using the original float constant value.
src1ValueToSpecialize = src1OriginalVal;
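        // For example, a float constant of 4e9 fits in a uint32 but not in an int32;
        // with overflow ignored it may have been reinterpreted as an int constant,
        // but a later use that does not ignore overflow must still see the original
        // float constant, so the conversion here must stay lossy.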
}
// Make sure the srcs are specialized
IR::Opnd *src1 = instr->GetSrc1();
this->ToInt32(instr, src1, this->currentBlock, src1ValueToSpecialize, nullptr, lossy);
if(bailOutKind != IR::BailOutInvalid && !this->IsLoopPrePass())
{
GenerateBailAtOperation(&instr, bailOutKind);
}
if (!skipDst)
{
IR::Opnd *dst = instr->GetDst();
if (dst)
{
AssertMsg(!(isTransfer && !this->IsLoopPrePass()) || min == newMin && max == newMax, "If this is just a copy, old/new min/max should be the same");
TypeSpecializeIntDst(
instr,
originalOpCode,
isTransfer ? src1Val : nullptr,
src1Val,
nullptr,
bailOutKind,
newMin,
newMax,
pDstVal,
addSubConstantInfo.HasInfo() ? &addSubConstantInfo : nullptr);
}
}
if(bailOutKind == IR::BailOutInvalid)
{
GOPT_TRACE(_u("Type specialized to INT\n"));
#if ENABLE_DEBUG_CONFIG_OPTIONS
if (Js::Configuration::Global.flags.TestTrace.IsEnabled(Js::AggressiveIntTypeSpecPhase))
{
Output::Print(_u("Type specialized to INT: "));
Output::Print(_u("%s \n"), Js::OpCodeUtil::GetOpCodeName(instr->m_opcode));
}
#endif
}
else
{
GOPT_TRACE(_u("Type specialized to INT with bailout on:\n"));
if(bailOutKind & IR::BailOutOnOverflow)
{
GOPT_TRACE(_u(" Overflow\n"));
#if ENABLE_DEBUG_CONFIG_OPTIONS
if (Js::Configuration::Global.flags.TestTrace.IsEnabled(Js::AggressiveIntTypeSpecPhase))
{
Output::Print(_u("Type specialized to INT with bailout (%S): "), "Overflow");
Output::Print(_u("%s \n"), Js::OpCodeUtil::GetOpCodeName(instr->m_opcode));
}
#endif
}
if(bailOutKind & IR::BailOutOnNegativeZero)
{
GOPT_TRACE(_u(" Zero\n"));
#if ENABLE_DEBUG_CONFIG_OPTIONS
if (Js::Configuration::Global.flags.TestTrace.IsEnabled(Js::AggressiveIntTypeSpecPhase))
{
Output::Print(_u("Type specialized to INT with bailout (%S): "), "Zero");
Output::Print(_u("%s \n"), Js::OpCodeUtil::GetOpCodeName(instr->m_opcode));
}
#endif
}
}
return true;
}
void
GlobOpt::TypeSpecializeIntDst(IR::Instr* instr, Js::OpCode originalOpCode, Value* valToTransfer, Value *const src1Value, Value *const src2Value, const IR::BailOutKind bailOutKind, int32 newMin, int32 newMax, Value** pDstVal, const AddSubConstantInfo *const addSubConstantInfo)
{
this->TypeSpecializeIntDst(instr, originalOpCode, valToTransfer, src1Value, src2Value, bailOutKind, ValueType::GetInt(IntConstantBounds(newMin, newMax).IsLikelyTaggable()), newMin, newMax, pDstVal, addSubConstantInfo);
}
void
GlobOpt::TypeSpecializeIntDst(IR::Instr* instr, Js::OpCode originalOpCode, Value* valToTransfer, Value *const src1Value, Value *const src2Value, const IR::BailOutKind bailOutKind, ValueType valueType, Value** pDstVal, const AddSubConstantInfo *const addSubConstantInfo)
{
this->TypeSpecializeIntDst(instr, originalOpCode, valToTransfer, src1Value, src2Value, bailOutKind, valueType, 0, 0, pDstVal, addSubConstantInfo);
}
void
GlobOpt::TypeSpecializeIntDst(IR::Instr* instr, Js::OpCode originalOpCode, Value* valToTransfer, Value *const src1Value, Value *const src2Value, const IR::BailOutKind bailOutKind, ValueType valueType, int32 newMin, int32 newMax, Value** pDstVal, const AddSubConstantInfo *const addSubConstantInfo)
{
Assert(valueType.IsInt() || (valueType.IsNumber() && valueType.IsLikelyInt() && newMin == 0 && newMax == 0));
Assert(!valToTransfer || valToTransfer == src1Value);
Assert(!addSubConstantInfo || addSubConstantInfo->HasInfo());
IR::Opnd *dst = instr->GetDst();
Assert(dst);
bool isValueInfoPrecise;
if(IsLoopPrePass())
{
isValueInfoPrecise = IsPrepassSrcValueInfoPrecise(instr, src1Value, src2Value);
valueType = GetPrepassValueTypeForDst(valueType, instr, src1Value, src2Value, isValueInfoPrecise);
}
else
{
isValueInfoPrecise = true;
}
// If dst has a circular reference in a loop, it probably won't get specialized. Don't mark the dst as type-specialized on
// the pre-pass. With aggressive int spec though, it will take care of bailing out if necessary so there's no need to assume
// that the dst will be a var even if it's live on the back-edge. Also if the op always produces an int32, then there's no
// ambiguity in the dst's value type even in the prepass.
if (!DoAggressiveIntTypeSpec() && this->IsLoopPrePass() && !valueType.IsInt())
{
if (dst->IsRegOpnd())
{
this->ToVarRegOpnd(dst->AsRegOpnd(), this->currentBlock);
}
return;
}
const IntBounds *dstBounds = nullptr;
if(addSubConstantInfo && !addSubConstantInfo->SrcValueIsLikelyConstant() && DoTrackRelativeIntBounds())
{
Assert(!ignoredIntOverflowForCurrentInstr);
// Track bounds for add or sub with a constant. For instance, consider (b = a + 2). The value of 'b' should track that
// it is equal to (the value of 'a') + 2. Additionally, the value of 'b' should inherit the bounds of 'a', offset by
// the constant value.
if(!valueType.IsInt() || !isValueInfoPrecise)
{
newMin = INT32_MIN;
newMax = INT32_MAX;
}
dstBounds =
IntBounds::Add(
addSubConstantInfo->SrcValue(),
addSubConstantInfo->Offset(),
isValueInfoPrecise,
IntConstantBounds(newMin, newMax),
alloc);
}
// Src1's value could change later in the loop, so the value wouldn't be the same for each
// iteration. Since we don't iterate over loops "while (!changed)", go conservative on the
// pre-pass.
if (valToTransfer)
{
// If this is just a copy, no need for creating a new value.
Assert(!addSubConstantInfo);
*pDstVal = this->ValueNumberTransferDst(instr, valToTransfer);
CurrentBlockData()->InsertNewValue(*pDstVal, dst);
}
else if (valueType.IsInt() && isValueInfoPrecise)
{
bool wasNegativeZeroPreventedByBailout = false;
if(newMin <= 0 && newMax >= 0)
{
switch(originalOpCode)
{
case Js::OpCode::Add_A:
// -0 + -0 == -0
Assert(src1Value);
Assert(src2Value);
wasNegativeZeroPreventedByBailout =
src1Value->GetValueInfo()->WasNegativeZeroPreventedByBailout() &&
src2Value->GetValueInfo()->WasNegativeZeroPreventedByBailout();
break;
case Js::OpCode::Sub_A:
// -0 - 0 == -0
Assert(src1Value);
wasNegativeZeroPreventedByBailout = src1Value->GetValueInfo()->WasNegativeZeroPreventedByBailout();
break;
case Js::OpCode::Neg_A:
case Js::OpCode::Mul_A:
case Js::OpCode::Div_A:
case Js::OpCode::Rem_A:
wasNegativeZeroPreventedByBailout = !!(bailOutKind & IR::BailOutOnNegativeZero);
break;
}
}
*pDstVal =
dstBounds
? NewIntBoundedValue(valueType, dstBounds, wasNegativeZeroPreventedByBailout, nullptr)
: NewIntRangeValue(newMin, newMax, wasNegativeZeroPreventedByBailout, nullptr);
}
else
{
*pDstVal = dstBounds ? NewIntBoundedValue(valueType, dstBounds, false, nullptr) : NewGenericValue(valueType);
}
if(addSubConstantInfo || updateInductionVariableValueNumber)
{
TrackIntSpecializedAddSubConstant(instr, addSubConstantInfo, *pDstVal, !!dstBounds);
}
CurrentBlockData()->SetValue(*pDstVal, dst);
AssertMsg(dst->IsRegOpnd(), "What else?");
this->ToInt32Dst(instr, dst->AsRegOpnd(), this->currentBlock);
}
bool
GlobOpt::TypeSpecializeBinary(IR::Instr **pInstr, Value **pSrc1Val, Value **pSrc2Val, Value **pDstVal, Value *const src1OriginalVal, Value *const src2OriginalVal, bool *redoTypeSpecRef)
{
IR::Instr *&instr = *pInstr;
int32 min1 = INT32_MIN, max1 = INT32_MAX, min2 = INT32_MIN, max2 = INT32_MAX, newMin, newMax, tmp;
Js::OpCode opcode;
Value *&src1Val = *pSrc1Val;
Value *&src2Val = *pSrc2Val;
// We don't need to do typespec for asmjs
if (IsTypeSpecPhaseOff(this->func) || GetIsAsmJSFunc())
{
return false;
}
if (OpCodeAttr::IsInlineBuiltIn(instr->m_opcode))
{
this->TypeSpecializeInlineBuiltInBinary(pInstr, src1Val, src2Val, pDstVal, src1OriginalVal, src2OriginalVal);
return true;
}
if (src1Val)
{
src1Val->GetValueInfo()->GetIntValMinMax(&min1, &max1, this->DoAggressiveIntTypeSpec());
}
if (src2Val)
{
src2Val->GetValueInfo()->GetIntValMinMax(&min2, &max2, this->DoAggressiveIntTypeSpec());
}
// Type specialize binary operators to int32
bool src1Lossy = true;
bool src2Lossy = true;
IR::BailOutKind bailOutKind = IR::BailOutInvalid;
bool ignoredIntOverflow = this->ignoredIntOverflowForCurrentInstr;
bool ignoredNegativeZero = false;
bool skipSrc2 = false;
bool skipDst = false;
bool needsBoolConv = false;
AddSubConstantInfo addSubConstantInfo;
switch (instr->m_opcode)
{
case Js::OpCode::Or_A:
if (!DoLossyIntTypeSpec())
{
return false;
}
this->PropagateIntRangeBinary(instr, min1, max1, min2, max2, &newMin, &newMax);
opcode = Js::OpCode::Or_I4;
break;
case Js::OpCode::And_A:
if (!DoLossyIntTypeSpec())
{
return false;
}
this->PropagateIntRangeBinary(instr, min1, max1, min2, max2, &newMin, &newMax);
opcode = Js::OpCode::And_I4;
break;
case Js::OpCode::Xor_A:
if (!DoLossyIntTypeSpec())
{
return false;
}
this->PropagateIntRangeBinary(instr, min1, max1, min2, max2, &newMin, &newMax);
opcode = Js::OpCode::Xor_I4;
break;
case Js::OpCode::Shl_A:
if (!DoLossyIntTypeSpec())
{
return false;
}
this->PropagateIntRangeBinary(instr, min1, max1, min2, max2, &newMin, &newMax);
opcode = Js::OpCode::Shl_I4;
break;
case Js::OpCode::Shr_A:
if (!DoLossyIntTypeSpec())
{
return false;
}
this->PropagateIntRangeBinary(instr, min1, max1, min2, max2, &newMin, &newMax);
opcode = Js::OpCode::Shr_I4;
break;
case Js::OpCode::ShrU_A:
if (!DoLossyIntTypeSpec())
{
return false;
}
if (min1 < 0 && IntConstantBounds(min2, max2).And_0x1f().Contains(0))
{
// Src1 may be too large to represent as a signed int32, and src2 may be zero. Unless the resulting value is only
// used as a signed int32 (hence allowing us to ignore the result's sign), don't specialize the instruction.
if (!instr->ignoreIntOverflow)
return false;
ignoredIntOverflow = true;
}
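        // For example, with src1 == -1 and src2 == 0, (src1 >>> src2) evaluates to
        // 4294967295, which fits in a uint32 but not in an int32.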
this->PropagateIntRangeBinary(instr, min1, max1, min2, max2, &newMin, &newMax);
opcode = Js::OpCode::ShrU_I4;
break;
case Js::OpCode::BrUnLe_A:
// Folding the branch based on bounds will attempt a lossless int32 conversion of the sources if they are not definitely
// int already, so require that both sources are likely int for folding.
if (DoConstFold() &&
!IsLoopPrePass() &&
TryOptConstFoldBrUnsignedGreaterThan(instr, false, src1Val, min1, max1, src2Val, min2, max2))
{
return true;
}
if (min1 >= 0 && min2 >= 0)
{
// Only handle positive values since this is unsigned...
// Bounds are tracked only for likely int values. Only likely int values may have bounds that are not the defaults
// (INT32_MIN, INT32_MAX), so we're good.
Assert(src1Val);
Assert(src1Val->GetValueInfo()->IsLikelyInt());
Assert(src2Val);
Assert(src2Val->GetValueInfo()->IsLikelyInt());
UpdateIntBoundsForLessThanOrEqualBranch(src1Val, src2Val);
}
if (!DoLossyIntTypeSpec())
{
return false;
}
newMin = newMax = 0;
opcode = Js::OpCode::BrUnLe_I4;
break;
case Js::OpCode::BrUnLt_A:
// Folding the branch based on bounds will attempt a lossless int32 conversion of the sources if they are not definitely
// int already, so require that both sources are likely int for folding.
if (DoConstFold() &&
!IsLoopPrePass() &&
TryOptConstFoldBrUnsignedLessThan(instr, true, src1Val, min1, max1, src2Val, min2, max2))
{
return true;
}
if (min1 >= 0 && min2 >= 0)
{
// Only handle positive values since this is unsigned...
// Bounds are tracked only for likely int values. Only likely int values may have bounds that are not the defaults
// (INT32_MIN, INT32_MAX), so we're good.
Assert(src1Val);
Assert(src1Val->GetValueInfo()->IsLikelyInt());
Assert(src2Val);
Assert(src2Val->GetValueInfo()->IsLikelyInt());
UpdateIntBoundsForLessThanBranch(src1Val, src2Val);
}
if (!DoLossyIntTypeSpec())
{
return false;
}
newMin = newMax = 0;
opcode = Js::OpCode::BrUnLt_I4;
break;
case Js::OpCode::BrUnGe_A:
// Folding the branch based on bounds will attempt a lossless int32 conversion of the sources if they are not definitely
// int already, so require that both sources are likely int for folding.
if (DoConstFold() &&
!IsLoopPrePass() &&
TryOptConstFoldBrUnsignedLessThan(instr, false, src1Val, min1, max1, src2Val, min2, max2))
{
return true;
}
if (min1 >= 0 && min2 >= 0)
{
// Only handle positive values since this is unsigned...
// Bounds are tracked only for likely int values. Only likely int values may have bounds that are not the defaults
// (INT32_MIN, INT32_MAX), so we're good.
Assert(src1Val);
Assert(src1Val->GetValueInfo()->IsLikelyInt());
Assert(src2Val);
Assert(src2Val->GetValueInfo()->IsLikelyInt());
UpdateIntBoundsForGreaterThanOrEqualBranch(src1Val, src2Val);
}
if (!DoLossyIntTypeSpec())
{
return false;
}
newMin = newMax = 0;
opcode = Js::OpCode::BrUnGe_I4;
break;
case Js::OpCode::BrUnGt_A:
// Folding the branch based on bounds will attempt a lossless int32 conversion of the sources if they are not definitely
// int already, so require that both sources are likely int for folding.
if (DoConstFold() &&
!IsLoopPrePass() &&
TryOptConstFoldBrUnsignedGreaterThan(instr, true, src1Val, min1, max1, src2Val, min2, max2))
{
return true;
}
if (min1 >= 0 && min2 >= 0)
{
// Only handle positive values since this is unsigned...
// Bounds are tracked only for likely int values. Only likely int values may have bounds that are not the defaults
// (INT32_MIN, INT32_MAX), so we're good.
Assert(src1Val);
Assert(src1Val->GetValueInfo()->IsLikelyInt());
Assert(src2Val);
Assert(src2Val->GetValueInfo()->IsLikelyInt());
UpdateIntBoundsForGreaterThanBranch(src1Val, src2Val);
}
if (!DoLossyIntTypeSpec())
{
return false;
}
newMin = newMax = 0;
opcode = Js::OpCode::BrUnGt_I4;
break;
case Js::OpCode::CmUnLe_A:
if (!DoLossyIntTypeSpec())
{
return false;
}
newMin = 0;
newMax = 1;
opcode = Js::OpCode::CmUnLe_I4;
needsBoolConv = true;
break;
case Js::OpCode::CmUnLt_A:
if (!DoLossyIntTypeSpec())
{
return false;
}
newMin = 0;
newMax = 1;
opcode = Js::OpCode::CmUnLt_I4;
needsBoolConv = true;
break;
case Js::OpCode::CmUnGe_A:
if (!DoLossyIntTypeSpec())
{
return false;
}
newMin = 0;
newMax = 1;
opcode = Js::OpCode::CmUnGe_I4;
needsBoolConv = true;
break;
case Js::OpCode::CmUnGt_A:
if (!DoLossyIntTypeSpec())
{
return false;
}
newMin = 0;
newMax = 1;
opcode = Js::OpCode::CmUnGt_I4;
needsBoolConv = true;
break;
case Js::OpCode::Expo_A:
{
src1Val = src1OriginalVal;
src2Val = src2OriginalVal;
return this->TypeSpecializeFloatBinary(instr, src1Val, src2Val, pDstVal);
}
case Js::OpCode::Div_A:
{
ValueType specializedValueType = GetDivValueType(instr, src1Val, src2Val, true);
if (specializedValueType.IsFloat())
{
// Either the result is float, or this is 1/x or cst1/cst2 where cst1 % cst2 != 0
// Note: We should really constant fold cst1%cst2...
src1Val = src1OriginalVal;
src2Val = src2OriginalVal;
return this->TypeSpecializeFloatBinary(instr, src1Val, src2Val, pDstVal);
}
#ifdef _M_ARM
if (!AutoSystemInfo::Data.ArmDivAvailable())
{
return false;
}
#endif
if (specializedValueType.IsInt())
{
if (max2 == 0x80000000 || (min2 == 0 && max2 == 0))
{
return false;
}
if (min1 == 0x80000000 && min2 <= -1 && max2 >= -1)
{
// Prevent integer overflow, as div by zero or MIN_INT / -1 will throw an exception
// Or we know we are dividing by zero (which is weird to have, because the profile data
// says we got an int)
bailOutKind = IR::BailOutOnDivOfMinInt;
}
src1Lossy = false; // Detect -0 on the sources
src2Lossy = false;
opcode = Js::OpCode::Div_I4;
Assert(!instr->GetSrc1()->IsUnsigned());
bailOutKind |= IR::BailOnDivResultNotInt;
if (max2 >= 0 && min2 <= 0)
{
// Need to check for divide by zero if the denominator range includes 0
bailOutKind |= IR::BailOutOnDivByZero;
}
if (max1 >= 0 && min1 <= 0)
{
// The numerator range contains 0, so the result range contains 0
newMin = 0;
newMax = 0;
if (min2 < 0)
{
// Denominator may be negative, so the result could be negative 0
if (instr->ShouldCheckForNegativeZero())
{
bailOutKind |= IR::BailOutOnNegativeZero;
}
else
{
ignoredNegativeZero = true;
}
}
}
else
{
// Initialize to an invalid value; one of the conditions below will update it correctly
newMin = INT_MAX;
newMax = INT_MIN;
}
// Deal with the positive and negative range separately for both the numerator and the denominator,
// and integrate to the overall min and max.
// If the result is positive (positive/positive or negative/negative):
// The min should be the smallest magnitude numerator (positive_Min1 | negative_Max1)
// divided by ---------------------------------------------------------------
// largest magnitude denominator (positive_Max2 | negative_Min2)
//
// The max should be the largest magnitude numerator (positive_Max1 | negative_Min1)
// divided by ---------------------------------------------------------------
// smallest magnitude denominator (positive_Min2 | negative_Max2)
// If the result is negative (positive/negative or negative/positive):
// The min should be the largest magnitude numerator (positive_Max1 | negative_Min1)
// divided by ---------------------------------------------------------------
// smallest magnitude denominator (negative_Max2 | positive_Min2)
//
// The max should be the smallest magnitude numerator (positive_Min1 | negative_Max1)
// divided by ---------------------------------------------------------------
// largest magnitude denominator (negative_Min2 | positive_Max2)
// Consider: The range can be slightly more precise if we take care of the rounding
if (max1 > 0)
{
// Take only the positive numerator range
int32 positive_Min1 = max(1, min1);
int32 positive_Max1 = max1;
if (max2 > 0)
{
// Take only the positive denominator range
int32 positive_Min2 = max(1, min2);
int32 positive_Max2 = max2;
// Positive / Positive
int32 quadrant1_Min = positive_Min1 <= positive_Max2? 1 : positive_Min1 / positive_Max2;
int32 quadrant1_Max = positive_Max1 <= positive_Min2? 1 : positive_Max1 / positive_Min2;
Assert(1 <= quadrant1_Min && quadrant1_Min <= quadrant1_Max);
// The result should be positive
newMin = min(newMin, quadrant1_Min);
newMax = max(newMax, quadrant1_Max);
}
if (min2 < 0)
{
// Take only the negative denominator range
int32 negative_Min2 = min2;
int32 negative_Max2 = min(-1, max2);
// Positive / Negative
int32 quadrant2_Min = -positive_Max1 >= negative_Max2? -1 : positive_Max1 / negative_Max2;
int32 quadrant2_Max = -positive_Min1 >= negative_Min2? -1 : positive_Min1 / negative_Min2;
// The result should be negative
Assert(quadrant2_Min <= quadrant2_Max && quadrant2_Max <= -1);
newMin = min(newMin, quadrant2_Min);
newMax = max(newMax, quadrant2_Max);
}
}
if (min1 < 0)
{
// Take only the negative numerator range
int32 negative_Min1 = min1;
int32 negative_Max1 = min(-1, max1);
if (max2 > 0)
{
// Take only the positive denominator range
int32 positive_Min2 = max(1, min2);
int32 positive_Max2 = max2;
// Negative / Positive
int32 quadrant4_Min = negative_Min1 >= -positive_Min2? -1 : negative_Min1 / positive_Min2;
int32 quadrant4_Max = negative_Max1 >= -positive_Max2? -1 : negative_Max1 / positive_Max2;
// The result should be negative
Assert(quadrant4_Min <= quadrant4_Max && quadrant4_Max <= -1);
newMin = min(newMin, quadrant4_Min);
newMax = max(newMax, quadrant4_Max);
}
if (min2 < 0)
{
// Take only the negative denominator range
int32 negative_Min2 = min2;
int32 negative_Max2 = min(-1, max2);
int32 quadrant3_Min;
int32 quadrant3_Max;
// Negative / Negative
if (negative_Max1 == 0x80000000 && negative_Min2 == -1)
{
quadrant3_Min = negative_Max1 >= negative_Min2? 1 : (negative_Max1+1) / negative_Min2;
}
else
{
quadrant3_Min = negative_Max1 >= negative_Min2? 1 : negative_Max1 / negative_Min2;
}
if (negative_Min1 == 0x80000000 && negative_Max2 == -1)
{
quadrant3_Max = negative_Min1 >= negative_Max2? 1 : (negative_Min1+1) / negative_Max2;
}
else
{
quadrant3_Max = negative_Min1 >= negative_Max2? 1 : negative_Min1 / negative_Max2;
}
// The result should be positive
Assert(1 <= quadrant3_Min && quadrant3_Min <= quadrant3_Max);
newMin = min(newMin, quadrant3_Min);
newMax = max(newMax, quadrant3_Max);
}
}
Assert(newMin <= newMax);
// Continue to int type spec
break;
}
}
// fall-through
default:
{
const bool involesLargeInt32 =
(src1Val && src1Val->GetValueInfo()->IsLikelyUntaggedInt()) ||
(src2Val && src2Val->GetValueInfo()->IsLikelyUntaggedInt());
const auto trySpecializeToFloat =
[&](const bool mayOverflow) -> bool
{
// It has been determined that this instruction cannot be int-specialized. Need to determine whether to attempt
// to float-specialize the instruction, or leave it unspecialized.
if((involesLargeInt32
#if INT32VAR
&& mayOverflow
#endif
) || (instr->m_opcode == Js::OpCode::Mul_A && !this->DoAggressiveMulIntTypeSpec())
)
{
// An input range is completely outside the range of an int31 and the operation is likely to overflow.
// Additionally, on 32-bit platforms, the value is untaggable and will be a JavascriptNumber, which is
// significantly slower to use in an unspecialized operation compared to a tagged int. So, try to
// float-specialize the instruction.
src1Val = src1OriginalVal;
src2Val = src2OriginalVal;
return TypeSpecializeFloatBinary(instr, src1Val, src2Val, pDstVal);
}
return false;
};
if (instr->m_opcode != Js::OpCode::ArgOut_A_InlineBuiltIn)
{
if ((src1Val && src1Val->GetValueInfo()->IsLikelyFloat()) || (src2Val && src2Val->GetValueInfo()->IsLikelyFloat()))
{
// Try to type specialize to float
src1Val = src1OriginalVal;
src2Val = src2OriginalVal;
return this->TypeSpecializeFloatBinary(instr, src1Val, src2Val, pDstVal);
}
if (src1Val == nullptr ||
src2Val == nullptr ||
!src1Val->GetValueInfo()->IsLikelyInt() ||
!src2Val->GetValueInfo()->IsLikelyInt() ||
(
!DoAggressiveIntTypeSpec() &&
(
!(src1Val->GetValueInfo()->IsInt() || CurrentBlockData()->IsSwitchInt32TypeSpecialized(instr)) ||
!src2Val->GetValueInfo()->IsInt()
)
) ||
(instr->GetSrc1()->IsRegOpnd() && instr->GetSrc1()->AsRegOpnd()->m_sym->m_isNotNumber) ||
(instr->GetSrc2()->IsRegOpnd() && instr->GetSrc2()->AsRegOpnd()->m_sym->m_isNotNumber))
{
return trySpecializeToFloat(true);
}
}
// Try to type specialize to int32
// If one of the values is a float constant with a value that fits in a uint32 but not an int32,
// and the instruction can ignore int overflow, the source value for the purposes of int specialization
// would have been changed to an int constant value by ignoring overflow. But, the conversion is still lossy.
if (!(src1OriginalVal && src1OriginalVal->GetValueInfo()->IsFloatConstant() && src1Val && src1Val->GetValueInfo()->HasIntConstantValue()))
{
src1Lossy = false;
}
if (!(src2OriginalVal && src2OriginalVal->GetValueInfo()->IsFloatConstant() && src2Val && src2Val->GetValueInfo()->HasIntConstantValue()))
{
src2Lossy = false;
}
switch(instr->m_opcode)
{
case Js::OpCode::ArgOut_A_InlineBuiltIn:
// If the src is already type-specialized and we don't type-specialize the ArgOut_A_InlineBuiltIn instr, we'll get an additional ToVar.
// So, to avoid that, type-specialize the ArgOut_A_InlineBuiltIn instr.
// Otherwise we don't need to type-specialize the instr; we are fine with the src being Var.
if (instr->GetSrc1()->IsRegOpnd())
{
StackSym *sym = instr->GetSrc1()->AsRegOpnd()->m_sym;
if (CurrentBlockData()->IsInt32TypeSpecialized(sym))
{
opcode = instr->m_opcode;
skipDst = true; // We should keep dst as is, otherwise the link opnd for next ArgOut/InlineBuiltInStart would be broken.
skipSrc2 = true; // src2 is linkOpnd. We don't need to type-specialize it.
newMin = min1; newMax = max1; // Values don't matter, these are unused.
goto LOutsideSwitch; // Continue to int-type-specialize.
}
else if (CurrentBlockData()->IsFloat64TypeSpecialized(sym))
{
src1Val = src1OriginalVal;
src2Val = src2OriginalVal;
return this->TypeSpecializeFloatBinary(instr, src1Val, src2Val, pDstVal);
}
}
return false;
case Js::OpCode::Add_A:
do // while(false)
{
const auto CannotOverflowBasedOnRelativeBounds = [&](int32 *const constantValueRef)
{
Assert(constantValueRef);
if(min2 == max2 &&
src1Val->GetValueInfo()->IsIntBounded() &&
src1Val->GetValueInfo()->AsIntBounded()->Bounds()->AddCannotOverflowBasedOnRelativeBounds(min2))
{
*constantValueRef = min2;
return true;
}
else if(
min1 == max1 &&
src2Val->GetValueInfo()->IsIntBounded() &&
src2Val->GetValueInfo()->AsIntBounded()->Bounds()->AddCannotOverflowBasedOnRelativeBounds(min1))
{
*constantValueRef = min1;
return true;
}
return false;
};
if (Int32Math::Add(min1, min2, &newMin))
{
int32 constantSrcValue;
if(CannotOverflowBasedOnRelativeBounds(&constantSrcValue))
{
newMin = constantSrcValue >= 0 ? INT32_MAX : INT32_MIN;
}
else if(instr->ShouldCheckForIntOverflow())
{
if(involesLargeInt32 || !DoAggressiveIntTypeSpec())
{
// May overflow
return trySpecializeToFloat(true);
}
bailOutKind |= IR::BailOutOnOverflow;
newMin = min1 < 0 ? INT32_MIN : INT32_MAX;
}
else
{
// When ignoring overflow, the range needs to account for overflow. For any Add or Sub, since
// overflow causes the value to wrap around, and we don't have a way to specify a lower and upper
// range of ints, we use the full range of int32s.
ignoredIntOverflow = true;
newMin = INT32_MIN;
newMax = INT32_MAX;
break;
}
}
if (Int32Math::Add(max1, max2, &newMax))
{
int32 constantSrcValue;
if(CannotOverflowBasedOnRelativeBounds(&constantSrcValue))
{
newMax = constantSrcValue >= 0 ? INT32_MAX : INT32_MIN;
}
else if(instr->ShouldCheckForIntOverflow())
{
if(involesLargeInt32 || !DoAggressiveIntTypeSpec())
{
// May overflow
return trySpecializeToFloat(true);
}
bailOutKind |= IR::BailOutOnOverflow;
newMax = max1 < 0 ? INT32_MIN : INT32_MAX;
}
else
{
// See comment about ignoring overflow above
ignoredIntOverflow = true;
newMin = INT32_MIN;
newMax = INT32_MAX;
break;
}
}
if(bailOutKind & IR::BailOutOnOverflow)
{
Assert(bailOutKind == IR::BailOutOnOverflow);
Assert(instr->ShouldCheckForIntOverflow());
int32 temp;
if(Int32Math::Add(
Int32Math::NearestInRangeTo(0, min1, max1),
Int32Math::NearestInRangeTo(0, min2, max2),
&temp))
{
// Always overflows
return trySpecializeToFloat(true);
}
}
} while(false);
if (!this->IsLoopPrePass() && newMin == newMax && bailOutKind == IR::BailOutInvalid)
{
// Take care of Add with zero here, since we know we're dealing with 2 numbers.
this->CaptureByteCodeSymUses(instr);
IR::Opnd *src;
bool isAddZero = true;
int32 intConstantValue;
if (src1Val->GetValueInfo()->TryGetIntConstantValue(&intConstantValue) && intConstantValue == 0)
{
src = instr->UnlinkSrc2();
instr->FreeSrc1();
}
else if (src2Val->GetValueInfo()->TryGetIntConstantValue(&intConstantValue) && intConstantValue == 0)
{
src = instr->UnlinkSrc1();
instr->FreeSrc2();
}
else
{
// This should have been handled by const folding, unless:
// - A source's value was substituted with a different value here, which is after const folding happened
// - A value is not definitely int, but once converted to definite int, it would be zero due to a
// condition in the source code such as if(a === 0). Ideally, we would specialize the sources and
// remove the add, but that doesn't seem too important for now.
Assert(
!DoConstFold() ||
src1Val != src1OriginalVal ||
src2Val != src2OriginalVal ||
!src1Val->GetValueInfo()->IsInt() ||
!src2Val->GetValueInfo()->IsInt());
isAddZero = false;
src = nullptr;
}
if (isAddZero)
{
IR::Instr *newInstr = IR::Instr::New(Js::OpCode::Ld_A, instr->UnlinkDst(), src, instr->m_func);
newInstr->SetByteCodeOffset(instr);
instr->m_opcode = Js::OpCode::Nop;
this->currentBlock->InsertInstrAfter(newInstr, instr);
return true;
}
}
if(!ignoredIntOverflow)
{
if(min2 == max2 &&
(!IsLoopPrePass() || IsPrepassSrcValueInfoPrecise(instr->GetSrc2(), src2Val)) &&
instr->GetSrc1()->IsRegOpnd())
{
addSubConstantInfo.Set(instr->GetSrc1()->AsRegOpnd()->m_sym, src1Val, min1 == max1, min2);
}
else if(
min1 == max1 &&
(!IsLoopPrePass() || IsPrepassSrcValueInfoPrecise(instr->GetSrc1(), src1Val)) &&
instr->GetSrc2()->IsRegOpnd())
{
addSubConstantInfo.Set(instr->GetSrc2()->AsRegOpnd()->m_sym, src2Val, min2 == max2, min1);
}
}
opcode = Js::OpCode::Add_I4;
break;
case Js::OpCode::Sub_A:
do // while(false)
{
const auto CannotOverflowBasedOnRelativeBounds = [&]()
{
return
min2 == max2 &&
src1Val->GetValueInfo()->IsIntBounded() &&
src1Val->GetValueInfo()->AsIntBounded()->Bounds()->SubCannotOverflowBasedOnRelativeBounds(min2);
};
if (Int32Math::Sub(min1, max2, &newMin))
{
if(CannotOverflowBasedOnRelativeBounds())
{
Assert(min2 == max2);
newMin = min2 >= 0 ? INT32_MIN : INT32_MAX;
}
else if(instr->ShouldCheckForIntOverflow())
{
if(involesLargeInt32 || !DoAggressiveIntTypeSpec())
{
// May overflow
return trySpecializeToFloat(true);
}
bailOutKind |= IR::BailOutOnOverflow;
newMin = min1 < 0 ? INT32_MIN : INT32_MAX;
}
else
{
// When ignoring overflow, the range needs to account for overflow. For any Add or Sub, since overflow
// causes the value to wrap around, and we don't have a way to specify a lower and upper range of ints,
// we use the full range of int32s.
ignoredIntOverflow = true;
newMin = INT32_MIN;
newMax = INT32_MAX;
break;
}
}
if (Int32Math::Sub(max1, min2, &newMax))
{
if(CannotOverflowBasedOnRelativeBounds())
{
Assert(min2 == max2);
newMax = min2 >= 0 ? INT32_MIN : INT32_MAX;
}
else if(instr->ShouldCheckForIntOverflow())
{
if(involesLargeInt32 || !DoAggressiveIntTypeSpec())
{
// May overflow
return trySpecializeToFloat(true);
}
bailOutKind |= IR::BailOutOnOverflow;
newMax = max1 < 0 ? INT32_MIN : INT32_MAX;
}
else
{
// See comment about ignoring overflow above
ignoredIntOverflow = true;
newMin = INT32_MIN;
newMax = INT32_MAX;
break;
}
}
if(bailOutKind & IR::BailOutOnOverflow)
{
Assert(bailOutKind == IR::BailOutOnOverflow);
Assert(instr->ShouldCheckForIntOverflow());
int32 temp;
if(Int32Math::Sub(
Int32Math::NearestInRangeTo(-1, min1, max1),
Int32Math::NearestInRangeTo(0, min2, max2),
&temp))
{
// Always overflows
return trySpecializeToFloat(true);
}
}
} while(false);
if(!ignoredIntOverflow &&
min2 == max2 &&
min2 != INT32_MIN &&
(!IsLoopPrePass() || IsPrepassSrcValueInfoPrecise(instr->GetSrc2(), src2Val)) &&
instr->GetSrc1()->IsRegOpnd())
{
addSubConstantInfo.Set(instr->GetSrc1()->AsRegOpnd()->m_sym, src1Val, min1 == max1, -min2);
}
opcode = Js::OpCode::Sub_I4;
break;
case Js::OpCode::Mul_A:
{
bool isConservativeMulInt = !DoAggressiveMulIntTypeSpec() || !DoAggressiveIntTypeSpec();
// Be conservative about predicting Mul overflow in prepass.
// Operands that are live on the back edge may be denied lossless conversion to int32 and
// trigger a rejit with AggressiveIntTypeSpec off.
// Besides, multiplying a variable in a loop can overflow in just a few iterations, even in simple cases like v *= 2.
// So, make sure we definitely know the source max/min values; otherwise assume the full range.
if (isConservativeMulInt && IsLoopPrePass())
{
if (!IsPrepassSrcValueInfoPrecise(instr->GetSrc1(), src1Val))
{
max1 = INT32_MAX;
min1 = INT32_MIN;
}
if (!IsPrepassSrcValueInfoPrecise(instr->GetSrc2(), src2Val))
{
max2 = INT32_MAX;
min2 = INT32_MIN;
}
}
if (Int32Math::Mul(min1, min2, &newMin))
{
if (involesLargeInt32 || isConservativeMulInt)
{
// May overflow
return trySpecializeToFloat(true);
}
bailOutKind |= IR::BailOutOnMulOverflow;
newMin = (min1 < 0) ^ (min2 < 0) ? INT32_MIN : INT32_MAX;
}
newMax = newMin;
if (Int32Math::Mul(max1, max2, &tmp))
{
if (involesLargeInt32 || isConservativeMulInt)
{
// May overflow
return trySpecializeToFloat(true);
}
bailOutKind |= IR::BailOutOnMulOverflow;
tmp = (max1 < 0) ^ (max2 < 0) ? INT32_MIN : INT32_MAX;
}
newMin = min(newMin, tmp);
newMax = max(newMax, tmp);
if (Int32Math::Mul(min1, max2, &tmp))
{
if (involesLargeInt32 || isConservativeMulInt)
{
// May overflow
return trySpecializeToFloat(true);
}
bailOutKind |= IR::BailOutOnMulOverflow;
tmp = (min1 < 0) ^ (max2 < 0) ? INT32_MIN : INT32_MAX;
}
newMin = min(newMin, tmp);
newMax = max(newMax, tmp);
if (Int32Math::Mul(max1, min2, &tmp))
{
if (involesLargeInt32 || isConservativeMulInt)
{
// May overflow
return trySpecializeToFloat(true);
}
bailOutKind |= IR::BailOutOnMulOverflow;
tmp = (max1 < 0) ^ (min2 < 0) ? INT32_MIN : INT32_MAX;
}
newMin = min(newMin, tmp);
newMax = max(newMax, tmp);
if (bailOutKind & IR::BailOutOnMulOverflow)
{
// CSE only if two MULs have the same overflow check behavior.
// Currently this is either "ignore int32 overflow, but not 53-bit overflow" or "int32 overflow matters".
if (!instr->ShouldCheckFor32BitOverflow() && instr->ShouldCheckForNon32BitOverflow())
{
// If we allow int to overflow then there can be anything in the resulting int
newMin = IntConstMin;
newMax = IntConstMax;
ignoredIntOverflow = true;
}
int32 temp, overflowValue;
if (Int32Math::Mul(
Int32Math::NearestInRangeTo(0, min1, max1),
Int32Math::NearestInRangeTo(0, min2, max2),
&temp,
&overflowValue))
{
Assert(instr->ignoreOverflowBitCount >= 32);
int overflowMatters = 64 - instr->ignoreOverflowBitCount;
if (!ignoredIntOverflow ||
// Use shifts to check the high bits, in case it's negative
((overflowValue << overflowMatters) >> overflowMatters) != overflowValue
)
{
// Always overflows
return trySpecializeToFloat(true);
}
}
}
if (newMin <= 0 && newMax >= 0 && // New range crosses zero
(min1 < 0 || min2 < 0) && // An operand's range contains a negative integer
!(min1 > 0 || min2 > 0) && // Neither operand's range contains only positive integers
!instr->GetSrc1()->IsEqual(instr->GetSrc2())) // The operands don't have the same value
{
if (instr->ShouldCheckForNegativeZero())
{
// -0 matters since the sym is not a local, or is used in a way in which -0 would differ from +0
if (!DoAggressiveIntTypeSpec())
{
// May result in -0
return trySpecializeToFloat(false);
}
if (((min1 == 0 && max1 == 0) || (min2 == 0 && max2 == 0)) && (max1 < 0 || max2 < 0))
{
// Always results in -0
return trySpecializeToFloat(false);
}
bailOutKind |= IR::BailOutOnNegativeZero;
}
else
{
ignoredNegativeZero = true;
}
}
opcode = Js::OpCode::Mul_I4;
break;
}
case Js::OpCode::Rem_A:
{
IR::Opnd* src2 = instr->GetSrc2();
if (!this->IsLoopPrePass() && min2 == max2 && min1 >= 0)
{
int32 value = min2;
if (value == (1 << Math::Log2(value)) && src2->IsAddrOpnd())
{
Assert(src2->AsAddrOpnd()->IsVar());
instr->m_opcode = Js::OpCode::And_A;
src2->AsAddrOpnd()->SetAddress(Js::TaggedInt::ToVarUnchecked(value - 1),
IR::AddrOpndKindConstantVar);
*pSrc2Val = GetIntConstantValue(value - 1, instr);
src2Val = *pSrc2Val;
return this->TypeSpecializeBinary(&instr, pSrc1Val, pSrc2Val, pDstVal, src1OriginalVal, src2Val, redoTypeSpecRef);
}
}
#ifdef _M_ARM
if (!AutoSystemInfo::Data.ArmDivAvailable())
{
return false;
}
#endif
if (min1 < 0)
{
// The most negative it can be is min1, unless limited by min2/max2
int32 negMaxAbs2;
if (min2 == INT32_MIN)
{
negMaxAbs2 = INT32_MIN;
}
else
{
negMaxAbs2 = -max(abs(min2), abs(max2)) + 1;
}
newMin = max(min1, negMaxAbs2);
}
else
{
newMin = 0;
}
bool isModByPowerOf2 = (instr->IsProfiledInstr() && instr->m_func->HasProfileInfo() &&
instr->m_func->GetReadOnlyProfileInfo()->IsModulusOpByPowerOf2(static_cast<Js::ProfileId>(instr->AsProfiledInstr()->u.profileId)));
if(isModByPowerOf2)
{
Assert(bailOutKind == IR::BailOutInvalid);
bailOutKind = IR::BailOnModByPowerOf2;
newMin = 0;
}
else
{
if (min2 <= 0 && max2 >= 0)
{
// Consider: We could handle the zero case with a check and bailout...
return false;
}
if (min1 == 0x80000000 && (min2 <= -1 && max2 >= -1))
{
// Prevent integer overflow, as div by zero or MIN_INT / -1 will throw an exception
return false;
}
if (min1 < 0)
{
if(instr->ShouldCheckForNegativeZero())
{
if (!DoAggressiveIntTypeSpec())
{
return false;
}
bailOutKind |= IR::BailOutOnNegativeZero;
}
else
{
ignoredNegativeZero = true;
}
}
}
{
int32 absMax2;
if (min2 == INT32_MIN)
{
// abs(INT32_MIN) would overflow, so cap at INT32_MAX
absMax2 = INT32_MAX;
}
else
{
absMax2 = max(abs(min2), abs(max2)) - 1;
}
newMax = min(absMax2, max(max1, 0));
newMax = max(newMin, newMax);
}
opcode = Js::OpCode::Rem_I4;
Assert(!instr->GetSrc1()->IsUnsigned());
break;
}
case Js::OpCode::CmEq_A:
case Js::OpCode::CmSrEq_A:
if (!IsWorthSpecializingToInt32Branch(instr, src1Val, src2Val))
{
return false;
}
newMin = 0;
newMax = 1;
opcode = Js::OpCode::CmEq_I4;
needsBoolConv = true;
break;
case Js::OpCode::CmNeq_A:
case Js::OpCode::CmSrNeq_A:
if (!IsWorthSpecializingToInt32Branch(instr, src1Val, src2Val))
{
return false;
}
newMin = 0;
newMax = 1;
opcode = Js::OpCode::CmNeq_I4;
needsBoolConv = true;
break;
case Js::OpCode::CmLe_A:
if (!IsWorthSpecializingToInt32Branch(instr, src1Val, src2Val))
{
return false;
}
newMin = 0;
newMax = 1;
opcode = Js::OpCode::CmLe_I4;
needsBoolConv = true;
break;
case Js::OpCode::CmLt_A:
if (!IsWorthSpecializingToInt32Branch(instr, src1Val, src2Val))
{
return false;
}
newMin = 0;
newMax = 1;
opcode = Js::OpCode::CmLt_I4;
needsBoolConv = true;
break;
case Js::OpCode::CmGe_A:
if (!IsWorthSpecializingToInt32Branch(instr, src1Val, src2Val))
{
return false;
}
newMin = 0;
newMax = 1;
opcode = Js::OpCode::CmGe_I4;
needsBoolConv = true;
break;
case Js::OpCode::CmGt_A:
if (!IsWorthSpecializingToInt32Branch(instr, src1Val, src2Val))
{
return false;
}
newMin = 0;
newMax = 1;
opcode = Js::OpCode::CmGt_I4;
needsBoolConv = true;
break;
case Js::OpCode::BrSrEq_A:
case Js::OpCode::BrEq_A:
case Js::OpCode::BrNotNeq_A:
case Js::OpCode::BrSrNotNeq_A:
{
if(DoConstFold() &&
!IsLoopPrePass() &&
TryOptConstFoldBrEqual(instr, true, src1Val, min1, max1, src2Val, min2, max2))
{
return true;
}
const bool specialize = IsWorthSpecializingToInt32Branch(instr, src1Val, src2Val);
UpdateIntBoundsForEqualBranch(src1Val, src2Val);
if(!specialize)
{
return false;
}
opcode = Js::OpCode::BrEq_I4;
// We'll get a warning if we don't assign a value to these...
// We'll assert if we use them and make a range where min > max
newMin = 2; newMax = 1;
break;
}
case Js::OpCode::BrSrNeq_A:
case Js::OpCode::BrNeq_A:
case Js::OpCode::BrSrNotEq_A:
case Js::OpCode::BrNotEq_A:
{
if(DoConstFold() &&
!IsLoopPrePass() &&
TryOptConstFoldBrEqual(instr, false, src1Val, min1, max1, src2Val, min2, max2))
{
return true;
}
const bool specialize = IsWorthSpecializingToInt32Branch(instr, src1Val, src2Val);
UpdateIntBoundsForNotEqualBranch(src1Val, src2Val);
if(!specialize)
{
return false;
}
opcode = Js::OpCode::BrNeq_I4;
// We'll get a warning if we don't assign a value to these...
// We'll assert if we use them and make a range where min > max
newMin = 2; newMax = 1;
break;
}
case Js::OpCode::BrGt_A:
case Js::OpCode::BrNotLe_A:
{
if(DoConstFold() &&
!IsLoopPrePass() &&
TryOptConstFoldBrGreaterThan(instr, true, src1Val, min1, max1, src2Val, min2, max2))
{
return true;
}
const bool specialize = IsWorthSpecializingToInt32Branch(instr, src1Val, src2Val);
UpdateIntBoundsForGreaterThanBranch(src1Val, src2Val);
if(!specialize)
{
return false;
}
opcode = Js::OpCode::BrGt_I4;
// We'll get a warning if we don't assign a value to these...
// We'll assert if we use them and make a range where min > max
newMin = 2; newMax = 1;
break;
}
case Js::OpCode::BrGe_A:
case Js::OpCode::BrNotLt_A:
{
if(DoConstFold() &&
!IsLoopPrePass() &&
TryOptConstFoldBrGreaterThanOrEqual(instr, true, src1Val, min1, max1, src2Val, min2, max2))
{
return true;
}
const bool specialize = IsWorthSpecializingToInt32Branch(instr, src1Val, src2Val);
UpdateIntBoundsForGreaterThanOrEqualBranch(src1Val, src2Val);
if(!specialize)
{
return false;
}
opcode = Js::OpCode::BrGe_I4;
// We'll get a warning if we don't assign a value to these...
// We'll assert if we use them and make a range where min > max
newMin = 2; newMax = 1;
break;
}
case Js::OpCode::BrLt_A:
case Js::OpCode::BrNotGe_A:
{
if(DoConstFold() &&
!IsLoopPrePass() &&
TryOptConstFoldBrGreaterThanOrEqual(instr, false, src1Val, min1, max1, src2Val, min2, max2))
{
return true;
}
const bool specialize = IsWorthSpecializingToInt32Branch(instr, src1Val, src2Val);
UpdateIntBoundsForLessThanBranch(src1Val, src2Val);
if(!specialize)
{
return false;
}
opcode = Js::OpCode::BrLt_I4;
// We'll get a warning if we don't assign a value to these...
// We'll assert if we use them and make a range where min > max
newMin = 2; newMax = 1;
break;
}
case Js::OpCode::BrLe_A:
case Js::OpCode::BrNotGt_A:
{
if(DoConstFold() &&
!IsLoopPrePass() &&
TryOptConstFoldBrGreaterThan(instr, false, src1Val, min1, max1, src2Val, min2, max2))
{
return true;
}
const bool specialize = IsWorthSpecializingToInt32Branch(instr, src1Val, src2Val);
UpdateIntBoundsForLessThanOrEqualBranch(src1Val, src2Val);
if(!specialize)
{
return false;
}
opcode = Js::OpCode::BrLe_I4;
// We'll get a warning if we don't assign a value to these...
// We'll assert if we use them and make a range where min > max
newMin = 2; newMax = 1;
break;
}
default:
return false;
}
// If this instruction is in a range of instructions where int overflow does not matter, we will still specialize it
// (won't leave it unspecialized based on heuristics), since it is most likely worth specializing, and the dst value
// needs to be guaranteed to be an int
if(!ignoredIntOverflow &&
!ignoredNegativeZero &&
!needsBoolConv &&
instr->ShouldCheckForIntOverflow() &&
!IsWorthSpecializingToInt32(instr, src1Val, src2Val))
{
// Even though type specialization is being skipped since it may not be worth it, the proper value should still be
// maintained so that the result may be type specialized later. An int value is not created for the dst in any of
// the following cases.
// - A bailout check is necessary to specialize this instruction. The bailout check is what guarantees the result to
// be an int, but since we're not going to specialize this instruction, there won't be a bailout check.
// - Aggressive int type specialization is disabled and we're in a loop prepass. We're conservative on dst values in
// that case, especially if the dst sym is live on the back-edge.
if(bailOutKind == IR::BailOutInvalid &&
instr->GetDst() &&
src1Val->GetValueInfo()->IsInt() &&
src2Val->GetValueInfo()->IsInt() &&
(DoAggressiveIntTypeSpec() || !this->IsLoopPrePass()))
{
*pDstVal = CreateDstUntransferredIntValue(newMin, newMax, instr, src1Val, src2Val);
}
return false;
}
} // case default
} // switch
LOutsideSwitch:
this->ignoredIntOverflowForCurrentInstr = ignoredIntOverflow;
this->ignoredNegativeZeroForCurrentInstr = ignoredNegativeZero;
{
// Try CSE again before modifying the IR, in case some attributes are required for successful CSE
Value *src1IndirIndexVal = nullptr;
if(CSEOptimize(currentBlock, &instr, &src1Val, &src2Val, &src1IndirIndexVal, true /* intMathExprOnly */))
{
*redoTypeSpecRef = true;
return false;
}
}
const Js::OpCode originalOpCode = instr->m_opcode;
if (!this->IsLoopPrePass())
{
// No re-write on prepass
instr->m_opcode = opcode;
}
Value *src1ValueToSpecialize = src1Val, *src2ValueToSpecialize = src2Val;
// Lossy conversions to int32 must be done based on the original source values. For instance, if one of the values is a
// float constant with a value that fits in a uint32 but not an int32, and the instruction can ignore int overflow, the
// source value for the purposes of int specialization would have been changed to an int constant value by ignoring
// overflow. If we were to specialize the sym using the int constant value, it would be treated as a lossless
// conversion, but since there may be subsequent uses of the same float constant value that may not ignore overflow,
// this must be treated as a lossy conversion by specializing the sym using the original float constant value.
if(src1Lossy)
{
src1ValueToSpecialize = src1OriginalVal;
}
if (src2Lossy)
{
src2ValueToSpecialize = src2OriginalVal;
}
// Make sure the srcs are specialized
IR::Opnd* src1 = instr->GetSrc1();
this->ToInt32(instr, src1, this->currentBlock, src1ValueToSpecialize, nullptr, src1Lossy);
if (!skipSrc2)
{
IR::Opnd* src2 = instr->GetSrc2();
this->ToInt32(instr, src2, this->currentBlock, src2ValueToSpecialize, nullptr, src2Lossy);
}
if(bailOutKind != IR::BailOutInvalid && !this->IsLoopPrePass())
{
GenerateBailAtOperation(&instr, bailOutKind);
}
if (!skipDst && instr->GetDst())
{
if (needsBoolConv)
{
IR::RegOpnd *varDst;
if (this->IsLoopPrePass())
{
varDst = instr->GetDst()->AsRegOpnd();
this->ToVarRegOpnd(varDst, this->currentBlock);
}
else
{
// Generate:
// t1.i = CmCC t2.i, t3.i
// t1.v = Conv_bool t1.i
//
// If the only uses of t1 are ints, the conv_bool will get dead-stored
TypeSpecializeIntDst(instr, originalOpCode, nullptr, src1Val, src2Val, bailOutKind, newMin, newMax, pDstVal);
IR::RegOpnd *intDst = instr->GetDst()->AsRegOpnd();
intDst->SetIsJITOptimizedReg(true);
varDst = IR::RegOpnd::New(intDst->m_sym->GetVarEquivSym(this->func), TyVar, this->func);
IR::Instr *convBoolInstr = IR::Instr::New(Js::OpCode::Conv_Bool, varDst, intDst, this->func);
// In some cases (e.g. unsigned compare peep code), a comparison will use variables
// other than the ones initially intended for it, if we can determine that we would
// arrive at the same result. This means that we get a ByteCodeUses operation after
// the actual comparison. Since inserting the Conv_bool just after the compare, and
// just before the ByteCodeUses, would cause issues later on with register lifetime
// calculation, we want to insert the Conv_bool after the whole compare instruction
// block.
IR::Instr *putAfter = instr;
while (putAfter->m_next && putAfter->m_next->IsByteCodeUsesInstrFor(instr))
{
putAfter = putAfter->m_next;
}
putAfter->InsertAfter(convBoolInstr);
convBoolInstr->SetByteCodeOffset(instr);
this->ToVarRegOpnd(varDst, this->currentBlock);
CurrentBlockData()->liveInt32Syms->Set(varDst->m_sym->m_id);
CurrentBlockData()->liveLossyInt32Syms->Set(varDst->m_sym->m_id);
}
*pDstVal = this->NewGenericValue(ValueType::Boolean, varDst);
}
else
{
TypeSpecializeIntDst(
instr,
originalOpCode,
nullptr,
src1Val,
src2Val,
bailOutKind,
newMin,
newMax,
pDstVal,
addSubConstantInfo.HasInfo() ? &addSubConstantInfo : nullptr);
}
}
if(bailOutKind == IR::BailOutInvalid)
{
GOPT_TRACE(_u("Type specialized to INT\n"));
#if ENABLE_DEBUG_CONFIG_OPTIONS
if (Js::Configuration::Global.flags.TestTrace.IsEnabled(Js::AggressiveIntTypeSpecPhase))
{
Output::Print(_u("Type specialized to INT: "));
Output::Print(_u("%s \n"), Js::OpCodeUtil::GetOpCodeName(instr->m_opcode));
}
#endif
}
else
{
GOPT_TRACE(_u("Type specialized to INT with bailout on:\n"));
if(bailOutKind & (IR::BailOutOnOverflow | IR::BailOutOnMulOverflow))
{
GOPT_TRACE(_u(" Overflow\n"));
#if ENABLE_DEBUG_CONFIG_OPTIONS
if (Js::Configuration::Global.flags.TestTrace.IsEnabled(Js::AggressiveIntTypeSpecPhase))
{
Output::Print(_u("Type specialized to INT with bailout (%S): "), "Overflow");
Output::Print(_u("%s \n"), Js::OpCodeUtil::GetOpCodeName(instr->m_opcode));
}
#endif
}
if(bailOutKind & IR::BailOutOnNegativeZero)
{
GOPT_TRACE(_u(" Zero\n"));
#if ENABLE_DEBUG_CONFIG_OPTIONS
if (Js::Configuration::Global.flags.TestTrace.IsEnabled(Js::AggressiveIntTypeSpecPhase))
{
Output::Print(_u("Type specialized to INT with bailout (%S): "), "Zero");
Output::Print(_u("%s \n"), Js::OpCodeUtil::GetOpCodeName(instr->m_opcode));
}
#endif
}
}
return true;
}
bool
GlobOpt::IsWorthSpecializingToInt32Branch(IR::Instr const * instr, Value const * src1Val, Value const * src2Val) const
{
if (!src1Val->GetValueInfo()->HasIntConstantValue() && instr->GetSrc1()->IsRegOpnd())
{
StackSym const *sym1 = instr->GetSrc1()->AsRegOpnd()->m_sym;
if (CurrentBlockData()->IsInt32TypeSpecialized(sym1) == false)
{
if (!src2Val->GetValueInfo()->HasIntConstantValue() && instr->GetSrc2()->IsRegOpnd())
{
StackSym const *sym2 = instr->GetSrc2()->AsRegOpnd()->m_sym;
if (CurrentBlockData()->IsInt32TypeSpecialized(sym2) == false)
{
// Type specializing a Br itself isn't worth it, unless one src
// is already type specialized
return false;
}
}
}
}
return true;
}
bool
GlobOpt::TryOptConstFoldBrFalse(
IR::Instr *const instr,
Value *const srcValue,
const int32 min,
const int32 max)
{
Assert(instr);
Assert(instr->m_opcode == Js::OpCode::BrFalse_A || instr->m_opcode == Js::OpCode::BrTrue_A);
Assert(srcValue);
if(!(DoAggressiveIntTypeSpec() ? srcValue->GetValueInfo()->IsLikelyInt() : srcValue->GetValueInfo()->IsInt()))
{
return false;
}
if(ValueInfo::IsEqualTo(srcValue, min, max, nullptr, 0, 0))
{
OptConstFoldBr(instr->m_opcode == Js::OpCode::BrFalse_A, instr, srcValue);
return true;
}
if(ValueInfo::IsNotEqualTo(srcValue, min, max, nullptr, 0, 0))
{
OptConstFoldBr(instr->m_opcode == Js::OpCode::BrTrue_A, instr, srcValue);
return true;
}
return false;
}
bool
GlobOpt::TryOptConstFoldBrEqual(
IR::Instr *const instr,
const bool branchOnEqual,
Value *const src1Value,
const int32 min1,
const int32 max1,
Value *const src2Value,
const int32 min2,
const int32 max2)
{
Assert(instr);
Assert(src1Value);
Assert(DoAggressiveIntTypeSpec() ? src1Value->GetValueInfo()->IsLikelyInt() : src1Value->GetValueInfo()->IsInt());
Assert(src2Value);
Assert(DoAggressiveIntTypeSpec() ? src2Value->GetValueInfo()->IsLikelyInt() : src2Value->GetValueInfo()->IsInt());
if(ValueInfo::IsEqualTo(src1Value, min1, max1, src2Value, min2, max2))
{
OptConstFoldBr(branchOnEqual, instr, src1Value, src2Value);
return true;
}
if(ValueInfo::IsNotEqualTo(src1Value, min1, max1, src2Value, min2, max2))
{
OptConstFoldBr(!branchOnEqual, instr, src1Value, src2Value);
return true;
}
return false;
}
bool
GlobOpt::TryOptConstFoldBrGreaterThan(
IR::Instr *const instr,
const bool branchOnGreaterThan,
Value *const src1Value,
const int32 min1,
const int32 max1,
Value *const src2Value,
const int32 min2,
const int32 max2)
{
Assert(instr);
Assert(src1Value);
Assert(DoAggressiveIntTypeSpec() ? src1Value->GetValueInfo()->IsLikelyInt() : src1Value->GetValueInfo()->IsInt());
Assert(src2Value);
Assert(DoAggressiveIntTypeSpec() ? src2Value->GetValueInfo()->IsLikelyInt() : src2Value->GetValueInfo()->IsInt());
if(ValueInfo::IsGreaterThan(src1Value, min1, max1, src2Value, min2, max2))
{
OptConstFoldBr(branchOnGreaterThan, instr, src1Value, src2Value);
return true;
}
if(ValueInfo::IsLessThanOrEqualTo(src1Value, min1, max1, src2Value, min2, max2))
{
OptConstFoldBr(!branchOnGreaterThan, instr, src1Value, src2Value);
return true;
}
return false;
}
bool
GlobOpt::TryOptConstFoldBrGreaterThanOrEqual(
IR::Instr *const instr,
const bool branchOnGreaterThanOrEqual,
Value *const src1Value,
const int32 min1,
const int32 max1,
Value *const src2Value,
const int32 min2,
const int32 max2)
{
Assert(instr);
Assert(src1Value);
Assert(DoAggressiveIntTypeSpec() ? src1Value->GetValueInfo()->IsLikelyInt() : src1Value->GetValueInfo()->IsInt());
Assert(src2Value);
Assert(DoAggressiveIntTypeSpec() ? src2Value->GetValueInfo()->IsLikelyInt() : src2Value->GetValueInfo()->IsInt());
if(ValueInfo::IsGreaterThanOrEqualTo(src1Value, min1, max1, src2Value, min2, max2))
{
OptConstFoldBr(branchOnGreaterThanOrEqual, instr, src1Value, src2Value);
return true;
}
if(ValueInfo::IsLessThan(src1Value, min1, max1, src2Value, min2, max2))
{
OptConstFoldBr(!branchOnGreaterThanOrEqual, instr, src1Value, src2Value);
return true;
}
return false;
}
bool
GlobOpt::TryOptConstFoldBrUnsignedLessThan(
IR::Instr *const instr,
const bool branchOnLessThan,
Value *const src1Value,
const int32 min1,
const int32 max1,
Value *const src2Value,
const int32 min2,
const int32 max2)
{
Assert(DoConstFold());
Assert(!IsLoopPrePass());
if(!src1Value ||
!src2Value ||
!(
DoAggressiveIntTypeSpec()
? src1Value->GetValueInfo()->IsLikelyInt() && src2Value->GetValueInfo()->IsLikelyInt()
: src1Value->GetValueInfo()->IsInt() && src2Value->GetValueInfo()->IsInt()
))
{
return false;
}
uint uMin1 = (min1 < 0 ? (max1 < 0 ? min((uint)min1, (uint)max1) : 0) : min1);
uint uMax1 = max((uint)min1, (uint)max1);
uint uMin2 = (min2 < 0 ? (max2 < 0 ? min((uint)min2, (uint)max2) : 0) : min2);
uint uMax2 = max((uint)min2, (uint)max2);
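// Worked example (illustrative, not from the original source): mapping a signed
// [min, max] range into unsigned space for the comparison below. For a range
// that stays on one side of zero, reinterpreting the endpoints is exact:
//   [2, 7]   -> uMin = 2,          uMax = 7
//   [-5, -3] -> uMin = 0xFFFFFFFB, uMax = 0xFFFFFFFD
// For a range that crosses zero (min < 0 <= max), the unsigned values wrap
// around, so uMin is conservatively taken as 0 and uMax as the larger of the
// two reinterpreted endpoints.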
if (uMax1 < uMin2)
{
// Range 1 is always less than Range 2
OptConstFoldBr(branchOnLessThan, instr, src1Value, src2Value);
return true;
}
if (uMin1 >= uMax2)
{
// Range 1 is always greater than or equal to Range 2 (uMin1 >= uMax2)
OptConstFoldBr(!branchOnLessThan, instr, src1Value, src2Value);
return true;
}
return false;
}
bool
GlobOpt::TryOptConstFoldBrUnsignedGreaterThan(
IR::Instr *const instr,
const bool branchOnGreaterThan,
Value *const src1Value,
const int32 min1,
const int32 max1,
Value *const src2Value,
const int32 min2,
const int32 max2)
{
Assert(DoConstFold());
Assert(!IsLoopPrePass());
if(!src1Value ||
!src2Value ||
!(
DoAggressiveIntTypeSpec()
? src1Value->GetValueInfo()->IsLikelyInt() && src2Value->GetValueInfo()->IsLikelyInt()
: src1Value->GetValueInfo()->IsInt() && src2Value->GetValueInfo()->IsInt()
))
{
return false;
}
uint uMin1 = (min1 < 0 ? (max1 < 0 ? min((uint)min1, (uint)max1) : 0) : min1);
uint uMax1 = max((uint)min1, (uint)max1);
uint uMin2 = (min2 < 0 ? (max2 < 0 ? min((uint)min2, (uint)max2) : 0) : min2);
uint uMax2 = max((uint)min2, (uint)max2);
if (uMin1 > uMax2)
{
// Range 1 is always greater than Range 2
OptConstFoldBr(branchOnGreaterThan, instr, src1Value, src2Value);
return true;
}
if (uMax1 <= uMin2)
{
// Range 1 is always less than or equal to Range 2 (uMax1 <= uMin2)
OptConstFoldBr(!branchOnGreaterThan, instr, src1Value, src2Value);
return true;
}
return false;
}
void
GlobOpt::SetPathDependentInfo(const bool conditionToBranch, const PathDependentInfo &info)
{
Assert(this->currentBlock->GetSuccList()->Count() == 2);
IR::Instr * fallthrough = this->currentBlock->GetNext()->GetFirstInstr();
FOREACH_SLISTBASECOUNTED_ENTRY(FlowEdge*, edge, this->currentBlock->GetSuccList())
{
if (conditionToBranch == (edge->GetSucc()->GetFirstInstr() != fallthrough))
{
edge->SetPathDependentInfo(info, alloc);
return;
}
}
NEXT_SLISTBASECOUNTED_ENTRY;
// If flow graph peeps are disabled, we could have a conditional branch to the next instruction
Assert(this->func->HasTry() || PHASE_OFF(Js::FGPeepsPhase, this->func));
}
PathDependentInfoToRestore
GlobOpt::UpdatePathDependentInfo(PathDependentInfo *const info)
{
Assert(info);
if(!info->HasInfo())
{
return PathDependentInfoToRestore();
}
decltype(&GlobOpt::UpdateIntBoundsForEqual) UpdateIntBoundsForLeftValue, UpdateIntBoundsForRightValue;
switch(info->Relationship())
{
case PathDependentRelationship::Equal:
UpdateIntBoundsForLeftValue = &GlobOpt::UpdateIntBoundsForEqual;
UpdateIntBoundsForRightValue = &GlobOpt::UpdateIntBoundsForEqual;
break;
case PathDependentRelationship::NotEqual:
UpdateIntBoundsForLeftValue = &GlobOpt::UpdateIntBoundsForNotEqual;
UpdateIntBoundsForRightValue = &GlobOpt::UpdateIntBoundsForNotEqual;
break;
case PathDependentRelationship::GreaterThanOrEqual:
UpdateIntBoundsForLeftValue = &GlobOpt::UpdateIntBoundsForGreaterThanOrEqual;
UpdateIntBoundsForRightValue = &GlobOpt::UpdateIntBoundsForLessThanOrEqual;
break;
case PathDependentRelationship::GreaterThan:
UpdateIntBoundsForLeftValue = &GlobOpt::UpdateIntBoundsForGreaterThan;
UpdateIntBoundsForRightValue = &GlobOpt::UpdateIntBoundsForLessThan;
break;
case PathDependentRelationship::LessThanOrEqual:
UpdateIntBoundsForLeftValue = &GlobOpt::UpdateIntBoundsForLessThanOrEqual;
UpdateIntBoundsForRightValue = &GlobOpt::UpdateIntBoundsForGreaterThanOrEqual;
break;
case PathDependentRelationship::LessThan:
UpdateIntBoundsForLeftValue = &GlobOpt::UpdateIntBoundsForLessThan;
UpdateIntBoundsForRightValue = &GlobOpt::UpdateIntBoundsForGreaterThan;
break;
default:
Assert(false);
__assume(false);
}
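// Example (illustrative): if the taken edge recorded the relationship
// "left < right", the left value's bounds are tightened via
// UpdateIntBoundsForLessThan against the right value's constant bounds, and,
// symmetrically, the right value's bounds are tightened via
// UpdateIntBoundsForGreaterThan against the left value's bounds.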
ValueInfo *leftValueInfo = info->LeftValue()->GetValueInfo();
IntConstantBounds leftConstantBounds;
AssertVerify(leftValueInfo->TryGetIntConstantBounds(&leftConstantBounds, true));
ValueInfo *rightValueInfo;
IntConstantBounds rightConstantBounds;
if(info->RightValue())
{
rightValueInfo = info->RightValue()->GetValueInfo();
AssertVerify(rightValueInfo->TryGetIntConstantBounds(&rightConstantBounds, true));
}
else
{
rightValueInfo = nullptr;
rightConstantBounds = IntConstantBounds(info->RightConstantValue(), info->RightConstantValue());
}
ValueInfo *const newLeftValueInfo =
(this->*UpdateIntBoundsForLeftValue)(
info->LeftValue(),
leftConstantBounds,
info->RightValue(),
rightConstantBounds,
true);
if(newLeftValueInfo)
{
ChangeValueInfo(nullptr, info->LeftValue(), newLeftValueInfo);
AssertVerify(newLeftValueInfo->TryGetIntConstantBounds(&leftConstantBounds, true));
}
else
{
leftValueInfo = nullptr;
}
ValueInfo *const newRightValueInfo =
(this->*UpdateIntBoundsForRightValue)(
info->RightValue(),
rightConstantBounds,
info->LeftValue(),
leftConstantBounds,
true);
if(newRightValueInfo)
{
ChangeValueInfo(nullptr, info->RightValue(), newRightValueInfo);
}
else
{
rightValueInfo = nullptr;
}
return PathDependentInfoToRestore(leftValueInfo, rightValueInfo);
}
void
GlobOpt::RestorePathDependentInfo(PathDependentInfo *const info, const PathDependentInfoToRestore infoToRestore)
{
Assert(info);
if(infoToRestore.LeftValueInfo())
{
Assert(info->LeftValue());
ChangeValueInfo(nullptr, info->LeftValue(), infoToRestore.LeftValueInfo());
}
if(infoToRestore.RightValueInfo())
{
Assert(info->RightValue());
ChangeValueInfo(nullptr, info->RightValue(), infoToRestore.RightValueInfo());
}
}
bool
GlobOpt::TypeSpecializeFloatUnary(IR::Instr **pInstr, Value *src1Val, Value **pDstVal, bool skipDst /* = false */)
{
IR::Instr *&instr = *pInstr;
IR::Opnd *src1;
IR::Opnd *dst;
Js::OpCode opcode = instr->m_opcode;
Value *valueToTransfer = nullptr;
Assert((src1Val && src1Val->GetValueInfo()->IsLikelyNumber()) || OpCodeAttr::IsInlineBuiltIn(instr->m_opcode));
if (!this->DoFloatTypeSpec())
{
return false;
}
// For inline built-ins we always need to do type specialization. Check upfront to avoid duplicating the same case labels.
if (!OpCodeAttr::IsInlineBuiltIn(instr->m_opcode))
{
switch (opcode)
{
case Js::OpCode::ArgOut_A_InlineBuiltIn:
skipDst = true;
// fall-through
case Js::OpCode::Ld_A:
case Js::OpCode::BrTrue_A:
case Js::OpCode::BrFalse_A:
if (instr->GetSrc1()->IsRegOpnd())
{
StackSym *sym = instr->GetSrc1()->AsRegOpnd()->m_sym;
if (CurrentBlockData()->IsFloat64TypeSpecialized(sym) == false)
{
// Type specializing an Ld_A isn't worth it, unless the src
// is already type specialized
return false;
}
}
if (instr->m_opcode == Js::OpCode::Ld_A)
{
valueToTransfer = src1Val;
}
break;
case Js::OpCode::Neg_A:
break;
case Js::OpCode::Conv_Num:
Assert(src1Val);
opcode = Js::OpCode::Ld_A;
valueToTransfer = src1Val;
if (!src1Val->GetValueInfo()->IsNumber())
{
StackSym *sym = instr->GetSrc1()->AsRegOpnd()->m_sym;
valueToTransfer = NewGenericValue(ValueType::Float, instr->GetDst()->GetStackSym());
if (CurrentBlockData()->IsFloat64TypeSpecialized(sym) == false)
{
// Mark the dst as don't-dead-store. We want to keep the Ld_A to prevent the FromVar from
// being dead-stored, as it could cause implicit calls.
dst = instr->GetDst();
dst->AsRegOpnd()->m_dontDeadStore = true;
}
}
break;
case Js::OpCode::StElemI_A:
case Js::OpCode::StElemI_A_Strict:
case Js::OpCode::StElemC:
return TypeSpecializeStElem(pInstr, src1Val, pDstVal);
default:
return false;
}
}
// Make sure the srcs are specialized
src1 = instr->GetSrc1();
// Use the original value when calling ToFloat64, as this is what we'll use to try to hoist the FromVar if we're in a loop.
this->ToFloat64(instr, src1, this->currentBlock, src1Val, nullptr, IR::BailOutPrimitiveButString);
if (!skipDst)
{
dst = instr->GetDst();
if (dst)
{
this->TypeSpecializeFloatDst(instr, valueToTransfer, src1Val, nullptr, pDstVal);
if (!this->IsLoopPrePass())
{
instr->m_opcode = opcode;
}
}
}
GOPT_TRACE_INSTR(instr, _u("Type specialized to FLOAT: "));
#if ENABLE_DEBUG_CONFIG_OPTIONS
if (Js::Configuration::Global.flags.TestTrace.IsEnabled(Js::FloatTypeSpecPhase))
{
Output::Print(_u("Type specialized to FLOAT: "));
Output::Print(_u("%s \n"), Js::OpCodeUtil::GetOpCodeName(instr->m_opcode));
}
#endif
return true;
}
// Unconditionally type-spec dst to float.
void
GlobOpt::TypeSpecializeFloatDst(IR::Instr *instr, Value *valToTransfer, Value *const src1Value, Value *const src2Value, Value **pDstVal)
{
IR::Opnd* dst = instr->GetDst();
Assert(dst);
AssertMsg(dst->IsRegOpnd(), "What else?");
this->ToFloat64Dst(instr, dst->AsRegOpnd(), this->currentBlock);
if(valToTransfer)
{
*pDstVal = this->ValueNumberTransferDst(instr, valToTransfer);
CurrentBlockData()->InsertNewValue(*pDstVal, dst);
}
else
{
*pDstVal = CreateDstUntransferredValue(ValueType::Float, instr, src1Value, src2Value);
}
}
bool
GlobOpt::TypeSpecializeLdLen(
IR::Instr * *const instrRef,
Value * *const src1ValueRef,
Value * *const dstValueRef,
bool *const forceInvariantHoistingRef)
{
Assert(instrRef);
IR::Instr *&instr = *instrRef;
Assert(instr);
Assert(instr->m_opcode == Js::OpCode::LdLen_A);
Assert(src1ValueRef);
Value *&src1Value = *src1ValueRef;
Assert(dstValueRef);
Value *&dstValue = *dstValueRef;
Assert(forceInvariantHoistingRef);
bool &forceInvariantHoisting = *forceInvariantHoistingRef;
if(!DoLdLenIntSpec(instr, instr->GetSrc1()->GetValueType()))
{
return false;
}
IR::BailOutKind bailOutKind = IR::BailOutOnIrregularLength;
if(!IsLoopPrePass())
{
IR::RegOpnd *const baseOpnd = instr->GetSrc1()->AsRegOpnd();
if(baseOpnd->IsArrayRegOpnd())
{
StackSym *const lengthSym = baseOpnd->AsArrayRegOpnd()->LengthSym();
if(lengthSym)
{
CaptureByteCodeSymUses(instr);
instr->m_opcode = Js::OpCode::Ld_I4;
instr->ReplaceSrc1(IR::RegOpnd::New(lengthSym, lengthSym->GetType(), func));
instr->ClearBailOutInfo();
// Find the hoisted length value
Value *const lengthValue = CurrentBlockData()->FindValue(lengthSym);
Assert(lengthValue);
src1Value = lengthValue;
ValueInfo *const lengthValueInfo = lengthValue->GetValueInfo();
IntConstantBounds lengthConstantBounds;
AssertVerify(lengthValueInfo->TryGetIntConstantBounds(&lengthConstantBounds));
Assert(lengthConstantBounds.LowerBound() >= 0);
if (lengthValueInfo->GetSymStore() == lengthSym)
{
// When type specializing the dst below, we will end up inserting lengthSym.u32 as the sym store for a var.
// Clear the sym store here so that we don't end up with copy-prop problems later on.
lengthValueInfo->SetSymStore(nullptr);
}
// Int-specialize, and transfer the value to the dst
TypeSpecializeIntDst(
instr,
Js::OpCode::LdLen_A,
src1Value,
src1Value,
nullptr,
bailOutKind,
lengthConstantBounds.LowerBound(),
lengthConstantBounds.UpperBound(),
&dstValue);
// Try to force hoisting the Ld_I4 so that the length will have an invariant sym store that can be
// copy-propped. Invariant hoisting does not automatically hoist Ld_I4.
forceInvariantHoisting = true;
return true;
}
}
if (instr->HasBailOutInfo())
{
Assert(instr->GetBailOutKind() == IR::BailOutMarkTempObject);
bailOutKind = IR::BailOutOnIrregularLength | IR::BailOutMarkTempObject;
instr->SetBailOutKind(bailOutKind);
}
else
{
Assert(bailOutKind == IR::BailOutOnIrregularLength);
GenerateBailAtOperation(&instr, bailOutKind);
}
}
TypeSpecializeIntDst(
instr,
Js::OpCode::LdLen_A,
nullptr,
nullptr,
nullptr,
bailOutKind,
0,
INT32_MAX,
&dstValue);
return true;
}
bool
GlobOpt::TypeSpecializeFloatBinary(IR::Instr *instr, Value *src1Val, Value *src2Val, Value **pDstVal)
{
IR::Opnd *src1;
IR::Opnd *src2;
IR::Opnd *dst;
bool allowUndefinedOrNullSrc1 = true;
bool allowUndefinedOrNullSrc2 = true;
bool skipSrc1 = false;
bool skipSrc2 = false;
bool skipDst = false;
bool convertDstToBool = false;
if (!this->DoFloatTypeSpec())
{
return false;
}
// For inline built-ins we always need to do type specialization. Check upfront to avoid duplicating the same case labels.
if (!OpCodeAttr::IsInlineBuiltIn(instr->m_opcode))
{
switch (instr->m_opcode)
{
case Js::OpCode::Sub_A:
case Js::OpCode::Mul_A:
case Js::OpCode::Div_A:
case Js::OpCode::Expo_A:
// Avoid if one source is known not to be a number.
if (src1Val->GetValueInfo()->IsNotNumber() || src2Val->GetValueInfo()->IsNotNumber())
{
return false;
}
break;
case Js::OpCode::BrSrEq_A:
case Js::OpCode::BrSrNeq_A:
case Js::OpCode::BrEq_A:
case Js::OpCode::BrNeq_A:
case Js::OpCode::BrSrNotEq_A:
case Js::OpCode::BrNotEq_A:
case Js::OpCode::BrSrNotNeq_A:
case Js::OpCode::BrNotNeq_A:
// Avoid if one source is known not to be a number.
if (src1Val->GetValueInfo()->IsNotNumber() || src2Val->GetValueInfo()->IsNotNumber())
{
return false;
}
// Undef == Undef, but +Undef != +Undef
// 0.0 != null, but 0.0 == +null
//
// So Bailout on anything but numbers for both src1 and src2
allowUndefinedOrNullSrc1 = false;
allowUndefinedOrNullSrc2 = false;
break;
case Js::OpCode::BrGt_A:
case Js::OpCode::BrGe_A:
case Js::OpCode::BrLt_A:
case Js::OpCode::BrLe_A:
case Js::OpCode::BrNotGt_A:
case Js::OpCode::BrNotGe_A:
case Js::OpCode::BrNotLt_A:
case Js::OpCode::BrNotLe_A:
// Avoid if one source is known not to be a number.
if (src1Val->GetValueInfo()->IsNotNumber() || src2Val->GetValueInfo()->IsNotNumber())
{
return false;
}
break;
case Js::OpCode::Add_A:
// For Add, we need both sources to be numbers; otherwise it could be a string concat
if (!src1Val || !src2Val || !(src1Val->GetValueInfo()->IsLikelyNumber() && src2Val->GetValueInfo()->IsLikelyNumber()))
{
return false;
}
break;
case Js::OpCode::ArgOut_A_InlineBuiltIn:
skipSrc2 = true;
skipDst = true;
break;
case Js::OpCode::CmEq_A:
case Js::OpCode::CmSrEq_A:
case Js::OpCode::CmNeq_A:
case Js::OpCode::CmSrNeq_A:
{
if (src1Val->GetValueInfo()->IsNotNumber() || src2Val->GetValueInfo()->IsNotNumber())
{
return false;
}
allowUndefinedOrNullSrc1 = false;
allowUndefinedOrNullSrc2 = false;
convertDstToBool = true;
break;
}
case Js::OpCode::CmLe_A:
case Js::OpCode::CmLt_A:
case Js::OpCode::CmGe_A:
case Js::OpCode::CmGt_A:
{
if (src1Val->GetValueInfo()->IsNotNumber() || src2Val->GetValueInfo()->IsNotNumber())
{
return false;
}
convertDstToBool = true;
break;
}
default:
return false;
}
}
else
{
switch (instr->m_opcode)
{
case Js::OpCode::InlineArrayPush:
{
bool isFloatConstMissingItem = src2Val->GetValueInfo()->IsFloatConstant();
if(isFloatConstMissingItem)
{
FloatConstType floatValue = src2Val->GetValueInfo()->AsFloatConstant()->FloatValue();
isFloatConstMissingItem = Js::SparseArraySegment<double>::IsMissingItem(&floatValue);
}
// Don't specialize if the element is not likely a number - we will surely bail out
if(!(src2Val->GetValueInfo()->IsLikelyNumber()) || isFloatConstMissingItem)
{
return false;
}
// Only specialize the second source (the element)
skipSrc1 = true;
skipDst = true;
allowUndefinedOrNullSrc2 = false;
break;
}
}
}
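// Background note (an assumption based on the IsMissingItem check above, not a
// comment from the original source): native array segments reserve one sentinel
// value per element type to mark "missing" (hole) slots, which is what
// Js::SparseArraySegment<double>::IsMissingItem detects. Storing a value equal
// to that sentinel would make a real element indistinguishable from a hole, so
// such a store is not type specialized here.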
// Make sure the srcs are specialized
if(!skipSrc1)
{
src1 = instr->GetSrc1();
this->ToFloat64(instr, src1, this->currentBlock, src1Val, nullptr, (allowUndefinedOrNullSrc1 ? IR::BailOutPrimitiveButString : IR::BailOutNumberOnly));
}
if (!skipSrc2)
{
src2 = instr->GetSrc2();
this->ToFloat64(instr, src2, this->currentBlock, src2Val, nullptr, (allowUndefinedOrNullSrc2 ? IR::BailOutPrimitiveButString : IR::BailOutNumberOnly));
}
if (!skipDst)
{
dst = instr->GetDst();
if (dst)
{
if (convertDstToBool)
{
*pDstVal = CreateDstUntransferredValue(ValueType::Boolean, instr, src1Val, src2Val);
ToVarRegOpnd(dst->AsRegOpnd(), currentBlock);
}
else
{
*pDstVal = CreateDstUntransferredValue(ValueType::Float, instr, src1Val, src2Val);
AssertMsg(dst->IsRegOpnd(), "What else?");
this->ToFloat64Dst(instr, dst->AsRegOpnd(), this->currentBlock);
}
}
}
GOPT_TRACE_INSTR(instr, _u("Type specialized to FLOAT: "));
#if ENABLE_DEBUG_CONFIG_OPTIONS
if (Js::Configuration::Global.flags.TestTrace.IsEnabled(Js::FloatTypeSpecPhase))
{
Output::Print(_u("Type specialized to FLOAT: "));
Output::Print(_u("%s \n"), Js::OpCodeUtil::GetOpCodeName(instr->m_opcode));
}
#endif
return true;
}
bool
GlobOpt::TypeSpecializeStElem(IR::Instr ** pInstr, Value *src1Val, Value **pDstVal)
{
IR::Instr *&instr = *pInstr;
IR::RegOpnd *baseOpnd = instr->GetDst()->AsIndirOpnd()->GetBaseOpnd();
ValueType baseValueType(baseOpnd->GetValueType());
if (instr->DoStackArgsOpt() ||
(!this->DoTypedArrayTypeSpec() && baseValueType.IsLikelyOptimizedTypedArray()) ||
(!this->DoNativeArrayTypeSpec() && baseValueType.IsLikelyNativeArray()) ||
!(baseValueType.IsLikelyOptimizedTypedArray() || baseValueType.IsLikelyNativeArray()))
{
GOPT_TRACE_INSTR(instr, _u("Didn't type specialize array access, because typed array type specialization is disabled, or base is not an optimized typed array.\n"));
if (PHASE_TRACE(Js::TypedArrayTypeSpecPhase, this->func))
{
char16 debugStringBuffer[MAX_FUNCTION_BODY_DEBUG_STRING_SIZE];
char baseValueTypeStr[VALUE_TYPE_MAX_STRING_SIZE];
baseValueType.ToString(baseValueTypeStr);
Output::Print(_u("Typed Array Optimization: function: %s (%s): instr: %s, base value type: %S, did not specialize because %s.\n"),
this->func->GetJITFunctionBody()->GetDisplayName(),
this->func->GetDebugNumberSet(debugStringBuffer),
Js::OpCodeUtil::GetOpCodeName(instr->m_opcode),
baseValueTypeStr,
instr->DoStackArgsOpt() ?
_u("instruction uses the arguments object") :
_u("typed array type specialization is disabled, or base is not an optimized typed array"));
Output::Flush();
}
return false;
}
Assert(instr->GetSrc1()->IsRegOpnd() || (src1Val && src1Val->GetValueInfo()->HasIntConstantValue()));
StackSym *sym = instr->GetSrc1()->IsRegOpnd() ? instr->GetSrc1()->AsRegOpnd()->m_sym : nullptr;
// Only type specialize the source of store element if the source symbol is already type specialized to int or float.
if (sym)
{
if (baseValueType.IsLikelyNativeArray())
{
// Gently coerce these srcs into native if it seems likely to work.
// Otherwise we can't use the fast path to store.
// But don't try to put a float-specialized number into an int array this way.
if (!(
CurrentBlockData()->IsInt32TypeSpecialized(sym) ||
(
src1Val &&
(
DoAggressiveIntTypeSpec()
? src1Val->GetValueInfo()->IsLikelyInt()
: src1Val->GetValueInfo()->IsInt()
)
)
))
{
if (!(
CurrentBlockData()->IsFloat64TypeSpecialized(sym) ||
(src1Val && src1Val->GetValueInfo()->IsLikelyNumber())
) ||
baseValueType.HasIntElements())
{
return false;
}
}
}
else if (!CurrentBlockData()->IsInt32TypeSpecialized(sym) && !CurrentBlockData()->IsFloat64TypeSpecialized(sym))
{
GOPT_TRACE_INSTR(instr, _u("Didn't specialize array access, because src is not type specialized.\n"));
if (PHASE_TRACE(Js::TypedArrayTypeSpecPhase, this->func))
{
char16 debugStringBuffer[MAX_FUNCTION_BODY_DEBUG_STRING_SIZE];
char baseValueTypeStr[VALUE_TYPE_MAX_STRING_SIZE];
baseValueType.ToString(baseValueTypeStr);
Output::Print(_u("Typed Array Optimization: function: %s (%s): instr: %s, base value type: %S, did not specialize because src is not specialized.\n"),
this->func->GetJITFunctionBody()->GetDisplayName(),
this->func->GetDebugNumberSet(debugStringBuffer),
Js::OpCodeUtil::GetOpCodeName(instr->m_opcode),
baseValueTypeStr);
Output::Flush();
}
return false;
}
}
int32 src1IntConstantValue;
if(baseValueType.IsLikelyNativeIntArray() && src1Val && src1Val->GetValueInfo()->TryGetIntConstantValue(&src1IntConstantValue))
{
if(Js::SparseArraySegment<int32>::IsMissingItem(&src1IntConstantValue))
{
return false;
}
}
// Note: do ToVarUses to make sure we get the int32 version of the index before trying to access its value in
// ShouldExpectConventionalArrayIndexValue. Not sure why that never gave us a problem before.
Assert(instr->GetDst()->IsIndirOpnd());
IR::IndirOpnd *dst = instr->GetDst()->AsIndirOpnd();
// Make sure we use the int32 version of the index operand symbol, if available. Otherwise, ensure the var symbol is live (by
// potentially inserting a ToVar).
this->ToVarUses(instr, dst, /* isDst = */ true, nullptr);
if (!ShouldExpectConventionalArrayIndexValue(dst))
{
GOPT_TRACE_INSTR(instr, _u("Didn't specialize array access, because index is negative or likely not int.\n"));
if (PHASE_TRACE(Js::TypedArrayTypeSpecPhase, this->func))
{
char16 debugStringBuffer[MAX_FUNCTION_BODY_DEBUG_STRING_SIZE];
char baseValueTypeStr[VALUE_TYPE_MAX_STRING_SIZE];
baseValueType.ToString(baseValueTypeStr);
Output::Print(_u("Typed Array Optimization: function: %s (%s): instr: %s, base value type: %S, did not specialize because index is negative or likely not int.\n"),
this->func->GetJITFunctionBody()->GetDisplayName(),
this->func->GetDebugNumberSet(debugStringBuffer),
Js::OpCodeUtil::GetOpCodeName(instr->m_opcode),
baseValueTypeStr);
Output::Flush();
}
return false;
}
IRType toType = TyVar;
bool isLossyAllowed = true;
IR::BailOutKind arrayBailOutKind = IR::BailOutConventionalTypedArrayAccessOnly;
switch(baseValueType.GetObjectType())
{
case ObjectType::Int8Array:
case ObjectType::Uint8Array:
case ObjectType::Int16Array:
case ObjectType::Uint16Array:
case ObjectType::Int32Array:
case ObjectType::Int8VirtualArray:
case ObjectType::Uint8VirtualArray:
case ObjectType::Int16VirtualArray:
case ObjectType::Uint16VirtualArray:
case ObjectType::Int32VirtualArray:
case ObjectType::Int8MixedArray:
case ObjectType::Uint8MixedArray:
case ObjectType::Int16MixedArray:
case ObjectType::Uint16MixedArray:
case ObjectType::Int32MixedArray:
Int32Array:
if (this->DoAggressiveIntTypeSpec() || this->DoFloatTypeSpec())
{
toType = TyInt32;
}
break;
case ObjectType::Uint32Array:
case ObjectType::Uint32VirtualArray:
case ObjectType::Uint32MixedArray:
// Uint32Arrays may store values that overflow int32. If the value being stored comes from a symbol that's
// already losslessly type specialized to int32, we'll use it. Otherwise, if we only have a float64 specialized
// value, we don't want to force bailout if it doesn't fit in int32. Instead, we'll emit conversion in the
// lowerer, and handle overflow, if necessary.
if (!sym || CurrentBlockData()->IsInt32TypeSpecialized(sym))
{
toType = TyInt32;
}
else if (CurrentBlockData()->IsFloat64TypeSpecialized(sym))
{
toType = TyFloat64;
}
break;
case ObjectType::Float32Array:
case ObjectType::Float64Array:
case ObjectType::Float32VirtualArray:
case ObjectType::Float32MixedArray:
case ObjectType::Float64VirtualArray:
case ObjectType::Float64MixedArray:
Float64Array:
if (this->DoFloatTypeSpec())
{
toType = TyFloat64;
}
break;
case ObjectType::Uint8ClampedArray:
case ObjectType::Uint8ClampedVirtualArray:
case ObjectType::Uint8ClampedMixedArray:
// Uint8ClampedArray requires rounding (as opposed to truncation) of floating point values. If source symbol is
// float type specialized, type specialize this instruction to float as well, and handle rounding in the
// lowerer.
if (!sym || CurrentBlockData()->IsInt32TypeSpecialized(sym))
{
toType = TyInt32;
isLossyAllowed = false;
}
else if (CurrentBlockData()->IsFloat64TypeSpecialized(sym))
{
toType = TyFloat64;
}
break;
default:
Assert(baseValueType.IsLikelyNativeArray());
isLossyAllowed = false;
arrayBailOutKind = IR::BailOutConventionalNativeArrayAccessOnly;
if(baseValueType.HasIntElements())
{
goto Int32Array;
}
Assert(baseValueType.HasFloatElements());
goto Float64Array;
}
if (toType != TyVar)
{
GOPT_TRACE_INSTR(instr, _u("Type specialized array access.\n"));
if (PHASE_TRACE(Js::TypedArrayTypeSpecPhase, this->func))
{
char16 debugStringBuffer[MAX_FUNCTION_BODY_DEBUG_STRING_SIZE];
char baseValueTypeStr[VALUE_TYPE_MAX_STRING_SIZE];
baseValueType.ToString(baseValueTypeStr);
Output::Print(_u("Typed Array Optimization: function: %s (%s): instr: %s, base value type: %S, type specialized to %s.\n"),
this->func->GetJITFunctionBody()->GetDisplayName(),
this->func->GetDebugNumberSet(debugStringBuffer),
Js::OpCodeUtil::GetOpCodeName(instr->m_opcode),
baseValueTypeStr,
toType == TyInt32 ? _u("int32") : _u("float64"));
Output::Flush();
}
IR::BailOutKind bailOutKind = ((toType == TyInt32) ? IR::BailOutIntOnly : IR::BailOutNumberOnly);
this->ToTypeSpecUse(instr, instr->GetSrc1(), this->currentBlock, src1Val, nullptr, toType, bailOutKind, /* lossy = */ isLossyAllowed);
if (!this->IsLoopPrePass())
{
bool bConvertToBailoutInstr = true;
// Definite StElemC doesn't need bailout, because it can't fail or cause conversion.
if (instr->m_opcode == Js::OpCode::StElemC && baseValueType.IsObject())
{
if (baseValueType.HasIntElements())
{
// Native int array requires a missing-element check & bailout
int32 min = INT32_MIN;
int32 max = INT32_MAX;
if (src1Val->GetValueInfo()->GetIntValMinMax(&min, &max, false))
{
bConvertToBailoutInstr = ((min <= Js::JavascriptNativeIntArray::MissingItem) && (max >= Js::JavascriptNativeIntArray::MissingItem));
}
}
else
{
bConvertToBailoutInstr = false;
}
}
if (bConvertToBailoutInstr)
{
if(instr->HasBailOutInfo())
{
const IR::BailOutKind oldBailOutKind = instr->GetBailOutKind();
Assert(
(
!(oldBailOutKind & ~IR::BailOutKindBits) ||
(oldBailOutKind & ~IR::BailOutKindBits) == IR::BailOutOnImplicitCallsPreOp
) &&
!(oldBailOutKind & IR::BailOutKindBits & ~(IR::BailOutOnArrayAccessHelperCall | IR::BailOutMarkTempObject)));
if(arrayBailOutKind == IR::BailOutConventionalTypedArrayAccessOnly)
{
// BailOutConventionalTypedArrayAccessOnly also bails out if the array access is outside the head
// segment bounds, and guarantees no implicit calls. Override the bailout kind so that the instruction
// bails out for the right reason.
instr->SetBailOutKind(
arrayBailOutKind | (oldBailOutKind & (IR::BailOutKindBits - IR::BailOutOnArrayAccessHelperCall)));
}
else
{
// BailOutConventionalNativeArrayAccessOnly by itself may generate a helper call, and may cause implicit
// calls to occur, so it must be merged in to eliminate generating the helper call.
Assert(arrayBailOutKind == IR::BailOutConventionalNativeArrayAccessOnly);
instr->SetBailOutKind(oldBailOutKind | arrayBailOutKind);
}
}
else
{
GenerateBailAtOperation(&instr, arrayBailOutKind);
}
}
}
}
else
{
GOPT_TRACE_INSTR(instr, _u("Didn't specialize array access, because the source was not already specialized.\n"));
if (PHASE_TRACE(Js::TypedArrayTypeSpecPhase, this->func))
{
char16 debugStringBuffer[MAX_FUNCTION_BODY_DEBUG_STRING_SIZE];
char baseValueTypeStr[VALUE_TYPE_MAX_STRING_SIZE];
baseValueType.ToString(baseValueTypeStr);
Output::Print(_u("Typed Array Optimization: function: %s (%s): instr: %s, base value type: %S, did not type specialize, because of array type.\n"),
this->func->GetJITFunctionBody()->GetDisplayName(),
this->func->GetDebugNumberSet(debugStringBuffer),
Js::OpCodeUtil::GetOpCodeName(instr->m_opcode),
baseValueTypeStr);
Output::Flush();
}
}
return toType != TyVar;
}
IR::Instr *
GlobOpt::ToVarUses(IR::Instr *instr, IR::Opnd *opnd, bool isDst, Value *val)
{
Sym *sym;
switch (opnd->GetKind())
{
case IR::OpndKindReg:
if (!isDst && !CurrentBlockData()->liveVarSyms->Test(opnd->AsRegOpnd()->m_sym->m_id))
{
instr = this->ToVar(instr, opnd->AsRegOpnd(), this->currentBlock, val, true);
}
break;
case IR::OpndKindSym:
sym = opnd->AsSymOpnd()->m_sym;
if (sym->IsPropertySym() && !CurrentBlockData()->liveVarSyms->Test(sym->AsPropertySym()->m_stackSym->m_id)
&& sym->AsPropertySym()->m_stackSym->IsVar())
{
StackSym *propertyBase = sym->AsPropertySym()->m_stackSym;
IR::RegOpnd *newOpnd = IR::RegOpnd::New(propertyBase, TyVar, instr->m_func);
instr = this->ToVar(instr, newOpnd, this->currentBlock, CurrentBlockData()->FindValue(propertyBase), true);
}
break;
case IR::OpndKindIndir:
IR::RegOpnd *baseOpnd = opnd->AsIndirOpnd()->GetBaseOpnd();
if (!CurrentBlockData()->liveVarSyms->Test(baseOpnd->m_sym->m_id))
{
instr = this->ToVar(instr, baseOpnd, this->currentBlock, CurrentBlockData()->FindValue(baseOpnd->m_sym), true);
}
IR::RegOpnd *indexOpnd = opnd->AsIndirOpnd()->GetIndexOpnd();
if (indexOpnd && !indexOpnd->m_sym->IsTypeSpec())
{
instr = ToTypeSpecIndex(instr, indexOpnd, opnd->AsIndirOpnd());
}
break;
}
return instr;
}
IR::Instr *
GlobOpt::ToTypeSpecIndex(IR::Instr * instr, IR::RegOpnd * indexOpnd, IR::IndirOpnd * indirOpnd)
{
Assert(indirOpnd != nullptr || indexOpnd == instr->GetSrc1());
bool isGetterOrSetter = instr->m_opcode == Js::OpCode::InitGetElemI ||
instr->m_opcode == Js::OpCode::InitSetElemI ||
instr->m_opcode == Js::OpCode::InitClassMemberGetComputedName ||
instr->m_opcode == Js::OpCode::InitClassMemberSetComputedName;
if (!isGetterOrSetter // typespec is disabled for getters, setters
&& (indexOpnd->GetValueType().IsInt()
? !IsTypeSpecPhaseOff(func)
: indexOpnd->GetValueType().IsLikelyInt() && DoAggressiveIntTypeSpec())
&& !GetIsAsmJSFunc()) // typespec is disabled for asmjs
{
StackSym *const indexVarSym = indexOpnd->m_sym;
Value *const indexValue = CurrentBlockData()->FindValue(indexVarSym);
Assert(indexValue);
Assert(indexValue->GetValueInfo()->IsLikelyInt());
ToInt32(instr, indexOpnd, currentBlock, indexValue, indirOpnd, false);
Assert(indexValue->GetValueInfo()->IsInt() || IsLoopPrePass());
if (!IsLoopPrePass())
{
IR::Opnd * intOpnd = indirOpnd ? indirOpnd->GetIndexOpnd() : instr->GetSrc1();
if (intOpnd != nullptr)
{
Assert(!intOpnd->IsRegOpnd() || intOpnd->AsRegOpnd()->m_sym->IsTypeSpec());
IntConstantBounds indexConstantBounds;
AssertVerify(indexValue->GetValueInfo()->TryGetIntConstantBounds(&indexConstantBounds));
if (ValueInfo::IsGreaterThanOrEqualTo(
indexValue,
indexConstantBounds.LowerBound(),
indexConstantBounds.UpperBound(),
nullptr,
0,
0))
{
intOpnd->SetType(TyUint32);
}
}
}
}
else if (!CurrentBlockData()->liveVarSyms->Test(indexOpnd->m_sym->m_id))
{
instr = this->ToVar(instr, indexOpnd, this->currentBlock, CurrentBlockData()->FindValue(indexOpnd->m_sym), true);
}
return instr;
}
IR::Instr *
GlobOpt::ToVar(IR::Instr *instr, IR::RegOpnd *regOpnd, BasicBlock *block, Value *value, bool needsUpdate)
{
IR::Instr *newInstr;
StackSym *varSym = regOpnd->m_sym;
if (IsTypeSpecPhaseOff(this->func))
{
return instr;
}
if (this->IsLoopPrePass())
{
block->globOptData.liveVarSyms->Set(varSym->m_id);
return instr;
}
if (block->globOptData.liveVarSyms->Test(varSym->m_id))
{
// Already live, nothing to do
return instr;
}
if (!varSym->IsVar())
{
Assert(!varSym->IsTypeSpec());
// Leave non-vars alone.
return instr;
}
Assert(block->globOptData.IsTypeSpecialized(varSym));
if (!value)
{
value = block->globOptData.FindValue(varSym);
}
ValueInfo *valueInfo = value ? value->GetValueInfo() : nullptr;
if(valueInfo && valueInfo->IsInt())
{
// If two syms have the same value, one is lossy-int-specialized, and then the other is int-specialized, the value
// would have been updated to definitely int. Upon using the lossy-int-specialized sym later, it would be flagged as
// lossy while the value is definitely int. Since the bit-vectors are based on the sym and not the value, update the
// lossy state.
block->globOptData.liveLossyInt32Syms->Clear(varSym->m_id);
}
IRType fromType = TyIllegal;
StackSym *typeSpecSym = nullptr;
if (block->globOptData.liveInt32Syms->Test(varSym->m_id) && !block->globOptData.liveLossyInt32Syms->Test(varSym->m_id))
{
fromType = TyInt32;
typeSpecSym = varSym->GetInt32EquivSym(this->func);
Assert(valueInfo);
Assert(valueInfo->IsInt());
}
else if (block->globOptData.liveFloat64Syms->Test(varSym->m_id))
{
fromType = TyFloat64;
typeSpecSym = varSym->GetFloat64EquivSym(this->func);
// Ensure that all bailout FromVars that generate a value for this type-specialized sym will bail out on any non-number
// value, even ones that have already been generated before. Float-specialized non-number values cannot be converted
// back to Var since they will not go back to the original non-number value. The dead-store pass will update the bailout
// kind on already-generated FromVars based on this bit.
typeSpecSym->m_requiresBailOnNotNumber = true;
        // A previous float conversion may have used BailOutPrimitiveButString, which does not change the value type to say
        // definitely float, since it can also be a non-string primitive. The convert back to Var, though, will cause that
        // bailout kind to be changed to BailOutNumberOnly in the dead-store phase, so from the point of the initial conversion
        // to float, the value is definitely a number. Since we don't know where the FromVar is, change the value type here.
if(valueInfo)
{
if(!valueInfo->IsNumber())
{
valueInfo = valueInfo->SpecializeToFloat64(alloc);
ChangeValueInfo(block, value, valueInfo);
regOpnd->SetValueType(valueInfo->Type());
}
}
else
{
value = NewGenericValue(ValueType::Float);
valueInfo = value->GetValueInfo();
block->globOptData.SetValue(value, varSym);
regOpnd->SetValueType(valueInfo->Type());
}
}
else
{
Assert(UNREACHED);
}
AssertOrFailFast(valueInfo);
int32 intConstantValue;
if (valueInfo->TryGetIntConstantValue(&intConstantValue))
{
// Lower will tag or create a number directly
newInstr = IR::Instr::New(Js::OpCode::LdC_A_I4, regOpnd,
IR::IntConstOpnd::New(intConstantValue, TyInt32, instr->m_func), instr->m_func);
}
else
{
IR::RegOpnd * regNew = IR::RegOpnd::New(typeSpecSym, fromType, instr->m_func);
Js::OpCode opcode = Js::OpCode::ToVar;
regNew->SetIsJITOptimizedReg(true);
newInstr = IR::Instr::New(opcode, regOpnd, regNew, instr->m_func);
}
newInstr->SetByteCodeOffset(instr);
newInstr->GetDst()->AsRegOpnd()->SetIsJITOptimizedReg(true);
ValueType valueType = valueInfo->Type();
if(fromType == TyInt32)
{
#if !INT32VAR // All 32-bit ints are taggable on 64-bit architectures
IntConstantBounds constantBounds;
AssertVerify(valueInfo->TryGetIntConstantBounds(&constantBounds));
if(constantBounds.IsTaggable())
#endif
{
// The value is within the taggable range, so set the opnd value types to TaggedInt to avoid the overflow check
valueType = ValueType::GetTaggedInt();
}
}
newInstr->GetDst()->SetValueType(valueType);
newInstr->GetSrc1()->SetValueType(valueType);
IR::Instr *insertAfterInstr = instr->m_prev;
if (instr == block->GetLastInstr() &&
(instr->IsBranchInstr() || instr->m_opcode == Js::OpCode::BailTarget))
{
// Don't insert code between the branch and the preceding ByteCodeUses instrs...
while(insertAfterInstr->m_opcode == Js::OpCode::ByteCodeUses)
{
insertAfterInstr = insertAfterInstr->m_prev;
}
}
block->InsertInstrAfter(newInstr, insertAfterInstr);
block->globOptData.liveVarSyms->Set(varSym->m_id);
GOPT_TRACE_OPND(regOpnd, _u("Converting to var\n"));
if (block->loop)
{
Assert(!this->IsLoopPrePass());
this->TryHoistInvariant(newInstr, block, value, value, nullptr, false);
}
if (needsUpdate)
{
// Make sure that the kill effect of the ToVar instruction is tracked and that the kill of a property
// type is reflected in the current instruction.
this->ProcessKills(newInstr);
this->ValueNumberObjectType(newInstr->GetDst(), newInstr);
if (instr->GetSrc1() && instr->GetSrc1()->IsSymOpnd() && instr->GetSrc1()->AsSymOpnd()->IsPropertySymOpnd())
{
// Reprocess the load source. We need to reset the PropertySymOpnd fields first.
IR::PropertySymOpnd *propertySymOpnd = instr->GetSrc1()->AsPropertySymOpnd();
if (propertySymOpnd->IsTypeCheckSeqCandidate())
{
propertySymOpnd->SetTypeChecked(false);
propertySymOpnd->SetTypeAvailable(false);
propertySymOpnd->SetWriteGuardChecked(false);
}
this->FinishOptPropOp(instr, propertySymOpnd);
instr = this->SetTypeCheckBailOut(instr->GetSrc1(), instr, nullptr);
}
}
return instr;
}
IR::Instr *
GlobOpt::ToInt32(IR::Instr *instr, IR::Opnd *opnd, BasicBlock *block, Value *val, IR::IndirOpnd *indir, bool lossy)
{
return this->ToTypeSpecUse(instr, opnd, block, val, indir, TyInt32, IR::BailOutIntOnly, lossy);
}
IR::Instr *
GlobOpt::ToFloat64(IR::Instr *instr, IR::Opnd *opnd, BasicBlock *block, Value *val, IR::IndirOpnd *indir, IR::BailOutKind bailOutKind)
{
return this->ToTypeSpecUse(instr, opnd, block, val, indir, TyFloat64, bailOutKind);
}
IR::Instr *
GlobOpt::ToTypeSpecUse(IR::Instr *instr, IR::Opnd *opnd, BasicBlock *block, Value *val, IR::IndirOpnd *indir, IRType toType, IR::BailOutKind bailOutKind, bool lossy, IR::Instr *insertBeforeInstr)
{
Assert(bailOutKind != IR::BailOutInvalid);
IR::Instr *newInstr;
if (!val && opnd->IsRegOpnd())
{
val = block->globOptData.FindValue(opnd->AsRegOpnd()->m_sym);
}
ValueInfo *valueInfo = val ? val->GetValueInfo() : nullptr;
bool needReplaceSrc = false;
bool updateBlockLastInstr = false;
if (instr)
{
needReplaceSrc = true;
if (!insertBeforeInstr)
{
insertBeforeInstr = instr;
}
}
else if (!insertBeforeInstr)
{
// Insert it at the end of the block
insertBeforeInstr = block->GetLastInstr();
if (insertBeforeInstr->IsBranchInstr() || insertBeforeInstr->m_opcode == Js::OpCode::BailTarget)
{
// Don't insert code between the branch and the preceding ByteCodeUses instrs...
while(insertBeforeInstr->m_prev->m_opcode == Js::OpCode::ByteCodeUses)
{
insertBeforeInstr = insertBeforeInstr->m_prev;
}
}
else
{
insertBeforeInstr = insertBeforeInstr->m_next;
updateBlockLastInstr = true;
}
}
// Int constant values will be propagated into the instruction. For ArgOut_A_InlineBuiltIn, there's no benefit from
// const-propping, so those are excluded.
if (opnd->IsRegOpnd() &&
!(
valueInfo &&
(valueInfo->HasIntConstantValue() || valueInfo->IsFloatConstant()) &&
(!instr || instr->m_opcode != Js::OpCode::ArgOut_A_InlineBuiltIn)
))
{
IR::RegOpnd *regSrc = opnd->AsRegOpnd();
StackSym *varSym = regSrc->m_sym;
Js::OpCode opcode = Js::OpCode::FromVar;
if (varSym->IsTypeSpec() || !block->globOptData.liveVarSyms->Test(varSym->m_id))
{
// Conversion between int32 and float64
if (varSym->IsTypeSpec())
{
varSym = varSym->GetVarEquivSym(this->func);
}
opcode = Js::OpCode::Conv_Prim;
}
Assert(block->globOptData.liveVarSyms->Test(varSym->m_id) || block->globOptData.IsTypeSpecialized(varSym));
StackSym *typeSpecSym = nullptr;
BOOL isLive = FALSE;
BVSparse<JitArenaAllocator> *livenessBv = nullptr;
if(valueInfo && valueInfo->IsInt())
{
// If two syms have the same value, one is lossy-int-specialized, and then the other is int-specialized, the value
// would have been updated to definitely int. Upon using the lossy-int-specialized sym later, it would be flagged as
// lossy while the value is definitely int. Since the bit-vectors are based on the sym and not the value, update the
// lossy state.
block->globOptData.liveLossyInt32Syms->Clear(varSym->m_id);
}
if (toType == TyInt32)
{
// Need to determine whether the conversion is actually lossy or lossless. If the value is an int, then it's a
// lossless conversion despite the type of conversion requested. The liveness of the converted int32 sym needs to be
// set to reflect the actual type of conversion done. Also, a lossless conversion needs the value to determine
// whether the conversion may need to bail out.
Assert(valueInfo);
if(valueInfo->IsInt())
{
lossy = false;
}
else
{
Assert(IsLoopPrePass() || !block->globOptData.IsInt32TypeSpecialized(varSym));
}
livenessBv = block->globOptData.liveInt32Syms;
isLive = livenessBv->Test(varSym->m_id) && (lossy || !block->globOptData.liveLossyInt32Syms->Test(varSym->m_id));
if (this->IsLoopPrePass())
{
if (!isLive)
{
livenessBv->Set(varSym->m_id);
if (lossy)
{
block->globOptData.liveLossyInt32Syms->Set(varSym->m_id);
}
else
{
block->globOptData.liveLossyInt32Syms->Clear(varSym->m_id);
}
}
return instr;
}
typeSpecSym = varSym->GetInt32EquivSym(this->func);
if (!isLive)
{
if (!opnd->IsVar() ||
!block->globOptData.liveVarSyms->Test(varSym->m_id) ||
(block->globOptData.liveFloat64Syms->Test(varSym->m_id) && valueInfo && valueInfo->IsLikelyFloat()))
{
Assert(block->globOptData.liveFloat64Syms->Test(varSym->m_id));
if(!lossy && !valueInfo->IsInt())
{
// Shouldn't try to do a lossless conversion from float64 to int32 when the value is not known to be an
// int. There are cases where we need more than two passes over loops to flush out all dependencies.
// It's possible for the loop prepass to think that a sym s1 remains an int because it acquires the
// value of another sym s2 that is an int in the prepass at that time. However, s2 can become a float
// later in the loop body, in which case s1 would become a float on the second iteration of the loop. By
// that time, we would have already committed to having s1 live as a lossless int on entry into the
// loop, and we end up having to compensate by doing a lossless conversion from float to int, which will
// need a bailout and will most likely bail out.
//
// If s2 becomes a var instead of a float, then the compensation is legal although not ideal. After
// enough bailouts, rejit would be triggered with aggressive int type spec turned off. For the
// float-to-int conversion though, there's no point in emitting a bailout because we already know that
// the value is a float and has high probability of bailing out (whereas a var has a chance to be a
// tagged int), and so currently lossless conversion from float to int with bailout is not supported.
//
// So, treating this case as a compile-time bailout. The exception will trigger the jit work item to be
// restarted with aggressive int type specialization disabled.
if(bailOutKind == IR::BailOutExpectingInteger)
{
Assert(IsSwitchOptEnabledForIntTypeSpec());
throw Js::RejitException(RejitReason::DisableSwitchOptExpectingInteger);
}
else
{
Assert(DoAggressiveIntTypeSpec());
if(PHASE_TRACE(Js::BailOutPhase, this->func))
{
char16 debugStringBuffer[MAX_FUNCTION_BODY_DEBUG_STRING_SIZE];
Output::Print(
_u("BailOut (compile-time): function: %s (%s) varSym: "),
this->func->GetJITFunctionBody()->GetDisplayName(),
this->func->GetDebugNumberSet(debugStringBuffer),
varSym->m_id);
#if DBG_DUMP
varSym->Dump();
#else
Output::Print(_u("s%u"), varSym->m_id);
#endif
if(varSym->HasByteCodeRegSlot())
{
Output::Print(_u(" byteCodeReg: R%u"), varSym->GetByteCodeRegSlot());
}
Output::Print(_u(" (lossless conversion from float64 to int32)\n"));
Output::Flush();
}
if(!DoAggressiveIntTypeSpec())
{
// Aggressive int type specialization is already off for some reason. Prevent trying to rejit again
// because it won't help and the same thing will happen again. Just abort jitting this function.
if(PHASE_TRACE(Js::BailOutPhase, this->func))
{
Output::Print(_u(" Aborting JIT because AggressiveIntTypeSpec is already off\n"));
Output::Flush();
}
throw Js::OperationAbortedException();
}
throw Js::RejitException(RejitReason::AggressiveIntTypeSpecDisabled);
}
}
if(opnd->IsVar())
{
regSrc->SetType(TyFloat64);
regSrc->m_sym = varSym->GetFloat64EquivSym(this->func);
opcode = Js::OpCode::Conv_Prim;
}
else
{
Assert(regSrc->IsFloat64());
Assert(regSrc->m_sym->IsFloat64());
Assert(opcode == Js::OpCode::Conv_Prim);
}
}
}
GOPT_TRACE_OPND(regSrc, _u("Converting to int32\n"));
}
else if (toType == TyFloat64)
{
// float64
typeSpecSym = varSym->GetFloat64EquivSym(this->func);
if(!IsLoopPrePass() && typeSpecSym->m_requiresBailOnNotNumber && block->globOptData.IsFloat64TypeSpecialized(varSym))
{
// This conversion is already protected by a BailOutNumberOnly bailout (or at least it will be after the
// dead-store phase). Since 'requiresBailOnNotNumber' is not flow-based, change the value to definitely float.
if(valueInfo)
{
if(!valueInfo->IsNumber())
{
valueInfo = valueInfo->SpecializeToFloat64(alloc);
ChangeValueInfo(block, val, valueInfo);
opnd->SetValueType(valueInfo->Type());
}
}
else
{
val = NewGenericValue(ValueType::Float);
valueInfo = val->GetValueInfo();
block->globOptData.SetValue(val, varSym);
opnd->SetValueType(valueInfo->Type());
}
}
if(bailOutKind == IR::BailOutNumberOnly)
{
if(!IsLoopPrePass())
{
// Ensure that all bailout FromVars that generate a value for this type-specialized sym will bail out on any
// non-number value, even ones that have already been generated before. The dead-store pass will update the
// bailout kind on already-generated FromVars based on this bit.
typeSpecSym->m_requiresBailOnNotNumber = true;
}
}
else if(typeSpecSym->m_requiresBailOnNotNumber)
{
Assert(bailOutKind == IR::BailOutPrimitiveButString);
bailOutKind = IR::BailOutNumberOnly;
}
livenessBv = block->globOptData.liveFloat64Syms;
isLive = livenessBv->Test(varSym->m_id);
if (this->IsLoopPrePass())
{
if(!isLive)
{
livenessBv->Set(varSym->m_id);
}
if (this->OptIsInvariant(opnd, block, this->prePassLoop, val, false, true))
{
this->prePassLoop->forceFloat64SymsOnEntry->Set(varSym->m_id);
}
else
{
Sym *symStore = (valueInfo ? valueInfo->GetSymStore() : NULL);
if (symStore && symStore != varSym
&& this->OptIsInvariant(symStore, block, this->prePassLoop, block->globOptData.FindValue(symStore), false, true))
{
                        // If symStore is assigned to sym and we want sym to be type-specialized, force symStore to be
                        // specialized outside the loop as well.
this->prePassLoop->forceFloat64SymsOnEntry->Set(symStore->m_id);
}
}
return instr;
}
if (!isLive && regSrc->IsVar())
{
if (!block->globOptData.liveVarSyms->Test(varSym->m_id) ||
(
block->globOptData.liveInt32Syms->Test(varSym->m_id) &&
!block->globOptData.liveLossyInt32Syms->Test(varSym->m_id) &&
valueInfo &&
valueInfo->IsLikelyInt()
))
{
Assert(block->globOptData.liveInt32Syms->Test(varSym->m_id));
Assert(!block->globOptData.liveLossyInt32Syms->Test(varSym->m_id)); // Shouldn't try to convert a lossy int32 to anything
regSrc->SetType(TyInt32);
regSrc->m_sym = varSym->GetInt32EquivSym(this->func);
opcode = Js::OpCode::Conv_Prim;
}
}
GOPT_TRACE_OPND(regSrc, _u("Converting to float64\n"));
}
bool needLoad = false;
if (needReplaceSrc)
{
bool wasDead = regSrc->GetIsDead();
// needReplaceSrc means we are type specializing a use, and need to replace the src on the instr
if (!isLive)
{
needLoad = true;
// ReplaceSrc will delete it.
regSrc = regSrc->Copy(instr->m_func)->AsRegOpnd();
}
IR::RegOpnd * regNew = IR::RegOpnd::New(typeSpecSym, toType, instr->m_func);
if(valueInfo)
{
regNew->SetValueType(valueInfo->Type());
regNew->m_wasNegativeZeroPreventedByBailout = valueInfo->WasNegativeZeroPreventedByBailout();
}
regNew->SetIsDead(wasDead);
regNew->SetIsJITOptimizedReg(true);
this->CaptureByteCodeSymUses(instr);
if (indir == nullptr)
{
instr->ReplaceSrc(opnd, regNew);
}
else
{
indir->ReplaceIndexOpnd(regNew);
}
opnd = regNew;
if (!needLoad)
{
Assert(isLive);
return instr;
}
}
else
{
// We just need to insert a load of a type spec sym
if(isLive)
{
return instr;
}
// Insert it before the specified instruction
instr = insertBeforeInstr;
}
IR::RegOpnd *regDst = IR::RegOpnd::New(typeSpecSym, toType, instr->m_func);
bool isBailout = false;
bool isHoisted = false;
bool isInLandingPad = (block->next && !block->next->isDeleted && block->next->isLoopHeader);
if (isInLandingPad)
{
Loop *loop = block->next->loop;
Assert(loop && loop->landingPad == block);
Assert(loop->bailOutInfo);
}
if (opcode == Js::OpCode::FromVar)
{
if (toType == TyInt32)
{
Assert(valueInfo);
if (lossy)
{
if (!valueInfo->IsPrimitive() && !block->globOptData.IsTypeSpecialized(varSym))
{
// Lossy conversions to int32 on non-primitive values may have implicit calls to toString or valueOf, which
// may be overridden to have a side effect. The side effect needs to happen every time the conversion is
// supposed to happen, so the resulting lossy int32 value cannot be reused. Bail out on implicit calls.
Assert(DoLossyIntTypeSpec());
bailOutKind = IR::BailOutOnNotPrimitive;
isBailout = true;
}
}
else if (!valueInfo->IsInt())
{
// The operand is likely an int (hence the request to convert to int), so bail out if it's not an int. Only
// bail out if a lossless conversion to int is requested. Lossy conversions to int such as in (a | 0) don't
// need to bail out.
if (bailOutKind == IR::BailOutExpectingInteger)
{
Assert(IsSwitchOptEnabledForIntTypeSpec());
}
else
{
Assert(DoAggressiveIntTypeSpec());
}
isBailout = true;
}
}
else if (toType == TyFloat64 &&
(!valueInfo || !valueInfo->IsNumber()))
{
// Bailout if converting vars to float if we can't prove they are floats:
// x = str + float; -> need to bailout if str is a string
//
// x = obj * 0.1;
// y = obj * 0.2; -> if obj has valueof, we'll only call valueof once on the FromVar conversion...
Assert(bailOutKind != IR::BailOutInvalid);
isBailout = true;
}
}
if (isBailout)
{
if (isInLandingPad)
{
Loop *loop = block->next->loop;
this->EnsureBailTarget(loop);
instr = loop->bailOutInfo->bailOutInstr;
updateBlockLastInstr = false;
newInstr = IR::BailOutInstr::New(opcode, bailOutKind, loop->bailOutInfo, instr->m_func);
newInstr->SetDst(regDst);
newInstr->SetSrc1(regSrc);
}
else
{
newInstr = IR::BailOutInstr::New(opcode, regDst, regSrc, bailOutKind, instr, instr->m_func);
}
}
else
{
newInstr = IR::Instr::New(opcode, regDst, regSrc, instr->m_func);
}
newInstr->SetByteCodeOffset(instr);
instr->InsertBefore(newInstr);
if (updateBlockLastInstr)
{
block->SetLastInstr(newInstr);
}
regDst->SetIsJITOptimizedReg(true);
newInstr->GetSrc1()->AsRegOpnd()->SetIsJITOptimizedReg(true);
ValueInfo *const oldValueInfo = valueInfo;
if(valueInfo)
{
newInstr->GetSrc1()->SetValueType(valueInfo->Type());
}
if(isBailout)
{
Assert(opcode == Js::OpCode::FromVar);
if(toType == TyInt32)
{
Assert(valueInfo);
if(!lossy)
{
Assert(bailOutKind == IR::BailOutIntOnly || bailOutKind == IR::BailOutExpectingInteger);
valueInfo = valueInfo->SpecializeToInt32(alloc, isPerformingLoopBackEdgeCompensation);
ChangeValueInfo(nullptr, val, valueInfo);
int32 intConstantValue;
if(indir && needReplaceSrc && valueInfo->TryGetIntConstantValue(&intConstantValue))
{
// A likely-int value can have constant bounds due to conditional branches narrowing its range. Now that
// the sym has been proven to be an int, the likely-int value, after specialization, will be constant.
// Replace the index opnd in the indir with an offset.
Assert(opnd == indir->GetIndexOpnd());
Assert(indir->GetScale() == 0);
indir->UnlinkIndexOpnd()->Free(instr->m_func);
opnd = nullptr;
indir->SetOffset(intConstantValue);
}
}
}
else if (toType == TyFloat64)
{
if(bailOutKind == IR::BailOutNumberOnly)
{
if(valueInfo)
{
valueInfo = valueInfo->SpecializeToFloat64(alloc);
ChangeValueInfo(block, val, valueInfo);
}
else
{
val = NewGenericValue(ValueType::Float);
valueInfo = val->GetValueInfo();
block->globOptData.SetValue(val, varSym);
}
}
}
else
{
Assert(UNREACHED);
}
}
if(valueInfo)
{
newInstr->GetDst()->SetValueType(valueInfo->Type());
if(needReplaceSrc && opnd)
{
opnd->SetValueType(valueInfo->Type());
}
}
if (block->loop)
{
Assert(!this->IsLoopPrePass());
isHoisted = this->TryHoistInvariant(newInstr, block, val, val, nullptr, false, lossy, false, bailOutKind);
}
if (isBailout)
{
if (!isHoisted && !isInLandingPad)
{
if(valueInfo)
{
// Since this is a pre-op bailout, the old value info should be used for the purposes of bailout. For
// instance, the value info could be LikelyInt but with a constant range. Once specialized to int, the value
// info would be an int constant. However, the int constant is only guaranteed if the value is actually an
// int, which this conversion is verifying, so bailout cannot assume the constant value.
if(oldValueInfo)
{
val->SetValueInfo(oldValueInfo);
}
else
{
block->globOptData.ClearSymValue(varSym);
}
}
// Fill in bail out info if the FromVar is a bailout instr, and it wasn't hoisted as invariant.
// If it was hoisted, the invariant code will fill out the bailout info with the loop landing pad bailout info.
this->FillBailOutInfo(block, newInstr);
if(valueInfo)
{
// Restore the new value info after filling the bailout info
if(oldValueInfo)
{
val->SetValueInfo(valueInfo);
}
else
{
block->globOptData.SetValue(val, varSym);
}
}
}
}
// Now that we've captured the liveness in the bailout info, we can mark this as live.
// This type specialized sym isn't live if the FromVar bails out.
livenessBv->Set(varSym->m_id);
if(toType == TyInt32)
{
if(lossy)
{
block->globOptData.liveLossyInt32Syms->Set(varSym->m_id);
}
else
{
block->globOptData.liveLossyInt32Syms->Clear(varSym->m_id);
}
}
}
else
{
Assert(valueInfo);
if(opnd->IsRegOpnd() && valueInfo->IsInt())
{
// If two syms have the same value, one is lossy-int-specialized, and then the other is int-specialized, the value
// would have been updated to definitely int. Upon using the lossy-int-specialized sym later, it would be flagged as
// lossy while the value is definitely int. Since the bit-vectors are based on the sym and not the value, update the
// lossy state.
block->globOptData.liveLossyInt32Syms->Clear(opnd->AsRegOpnd()->m_sym->m_id);
if(toType == TyInt32)
{
lossy = false;
}
}
if (this->IsLoopPrePass())
{
if(opnd->IsRegOpnd())
{
StackSym *const sym = opnd->AsRegOpnd()->m_sym;
if(toType == TyInt32)
{
Assert(!sym->IsTypeSpec());
block->globOptData.liveInt32Syms->Set(sym->m_id);
if(lossy)
{
block->globOptData.liveLossyInt32Syms->Set(sym->m_id);
}
else
{
block->globOptData.liveLossyInt32Syms->Clear(sym->m_id);
}
}
else
{
Assert(toType == TyFloat64);
AnalysisAssert(instr);
StackSym *const varSym = sym->IsTypeSpec() ? sym->GetVarEquivSym(instr->m_func) : sym;
block->globOptData.liveFloat64Syms->Set(varSym->m_id);
}
}
return instr;
}
if (!needReplaceSrc)
{
instr = insertBeforeInstr;
}
IR::Opnd *constOpnd;
int32 intConstantValue;
if(valueInfo->TryGetIntConstantValue(&intConstantValue))
{
if(toType == TyInt32)
{
constOpnd = IR::IntConstOpnd::New(intConstantValue, TyInt32, instr->m_func);
}
else
{
Assert(toType == TyFloat64);
constOpnd = IR::FloatConstOpnd::New(static_cast<FloatConstType>(intConstantValue), TyFloat64, instr->m_func);
}
}
else if(valueInfo->IsFloatConstant())
{
const FloatConstType floatValue = valueInfo->AsFloatConstant()->FloatValue();
if(toType == TyInt32)
{
            // In some loop scenarios, a sym can be specialized to int32 on loop entry
            // during the prepass and then subsequently specialized to float within
            // the loop, leading to an attempted lossy conversion from float64 to int32
            // on the backedge. For these cases, disable aggressive int type specialization
            // and try again.
if (!lossy)
{
AssertOrFailFast(DoAggressiveIntTypeSpec());
throw Js::RejitException(RejitReason::AggressiveIntTypeSpecDisabled);
}
constOpnd =
IR::IntConstOpnd::New(
Js::JavascriptMath::ToInt32(floatValue),
TyInt32,
instr->m_func);
}
else
{
Assert(toType == TyFloat64);
constOpnd = IR::FloatConstOpnd::New(floatValue, TyFloat64, instr->m_func);
}
}
else
{
Assert(opnd->IsVar());
Assert(opnd->IsAddrOpnd());
AssertMsg(opnd->AsAddrOpnd()->IsVar(), "We only expect to see addr that are var before lower.");
// Don't need to capture uses, we are only replacing an addr opnd
if(toType == TyInt32)
{
constOpnd = IR::IntConstOpnd::New(Js::TaggedInt::ToInt32(opnd->AsAddrOpnd()->m_address), TyInt32, instr->m_func);
}
else
{
Assert(toType == TyFloat64);
constOpnd = IR::FloatConstOpnd::New(Js::TaggedInt::ToDouble(opnd->AsAddrOpnd()->m_address), TyFloat64, instr->m_func);
}
}
if (toType == TyInt32)
{
if (needReplaceSrc)
{
CaptureByteCodeSymUses(instr);
if(indir)
{
Assert(opnd == indir->GetIndexOpnd());
Assert(indir->GetScale() == 0);
indir->UnlinkIndexOpnd()->Free(instr->m_func);
indir->SetOffset(constOpnd->AsIntConstOpnd()->AsInt32());
}
else
{
instr->ReplaceSrc(opnd, constOpnd);
}
}
else
{
StackSym *varSym = opnd->AsRegOpnd()->m_sym;
if(varSym->IsTypeSpec())
{
varSym = varSym->GetVarEquivSym(nullptr);
Assert(varSym);
}
if(block->globOptData.liveInt32Syms->TestAndSet(varSym->m_id))
{
Assert(!!block->globOptData.liveLossyInt32Syms->Test(varSym->m_id) == lossy);
}
else
{
if(lossy)
{
block->globOptData.liveLossyInt32Syms->Set(varSym->m_id);
}
StackSym *int32Sym = varSym->GetInt32EquivSym(instr->m_func);
IR::RegOpnd *int32Reg = IR::RegOpnd::New(int32Sym, TyInt32, instr->m_func);
int32Reg->SetIsJITOptimizedReg(true);
newInstr = IR::Instr::New(Js::OpCode::Ld_I4, int32Reg, constOpnd, instr->m_func);
newInstr->SetByteCodeOffset(instr);
instr->InsertBefore(newInstr);
if (updateBlockLastInstr)
{
block->SetLastInstr(newInstr);
}
}
}
}
else
{
StackSym *floatSym;
bool newFloatSym = false;
StackSym* varSym;
if (opnd->IsRegOpnd())
{
varSym = opnd->AsRegOpnd()->m_sym;
if (varSym->IsTypeSpec())
{
varSym = varSym->GetVarEquivSym(nullptr);
Assert(varSym);
}
floatSym = varSym->GetFloat64EquivSym(instr->m_func);
}
else
{
varSym = block->globOptData.GetCopyPropSym(nullptr, val);
if(!varSym)
{
// Clear the symstore to ensure it's set below to this new symbol
this->SetSymStoreDirect(val->GetValueInfo(), nullptr);
varSym = StackSym::New(TyVar, instr->m_func);
newFloatSym = true;
}
floatSym = varSym->GetFloat64EquivSym(instr->m_func);
}
IR::RegOpnd *floatReg = IR::RegOpnd::New(floatSym, TyFloat64, instr->m_func);
floatReg->SetIsJITOptimizedReg(true);
// If the value is not live - let's load it.
if(!block->globOptData.liveFloat64Syms->TestAndSet(varSym->m_id))
{
newInstr = IR::Instr::New(Js::OpCode::LdC_F8_R8, floatReg, constOpnd, instr->m_func);
newInstr->SetByteCodeOffset(instr);
instr->InsertBefore(newInstr);
if (updateBlockLastInstr)
{
block->SetLastInstr(newInstr);
}
if(newFloatSym)
{
block->globOptData.SetValue(val, varSym);
}
// The src is always invariant, but check whether the dst is, and hoist if so.
if (block->loop &&
(
(newFloatSym && block->loop->CanHoistInvariants()) ||
this->OptIsInvariant(floatReg, block, block->loop, val, false, false)
))
{
Assert(!this->IsLoopPrePass());
this->OptHoistInvariant(newInstr, block, block->loop, val, val, nullptr, false);
}
}
if (needReplaceSrc)
{
CaptureByteCodeSymUses(instr);
instr->ReplaceSrc(opnd, floatReg);
}
}
return instr;
}
return newInstr;
}
void
GlobOpt::ToVarRegOpnd(IR::RegOpnd *dst, BasicBlock *block)
{
ToVarStackSym(dst->m_sym, block);
}
void
GlobOpt::ToVarStackSym(StackSym *varSym, BasicBlock *block)
{
// Also check the sym itself: for asm.js there are mostly no var syms, so verify that this is the primary sym.
Assert(!varSym->IsTypeSpec());
block->globOptData.liveVarSyms->Set(varSym->m_id);
block->globOptData.liveInt32Syms->Clear(varSym->m_id);
block->globOptData.liveLossyInt32Syms->Clear(varSym->m_id);
block->globOptData.liveFloat64Syms->Clear(varSym->m_id);
}
void
GlobOpt::ToInt32Dst(IR::Instr *instr, IR::RegOpnd *dst, BasicBlock *block)
{
StackSym *varSym = dst->m_sym;
Assert(!varSym->IsTypeSpec());
if (!this->IsLoopPrePass() && varSym->IsVar())
{
StackSym *int32Sym = varSym->GetInt32EquivSym(instr->m_func);
// Use UnlinkDst / SetDst to make sure isSingleDef is tracked properly,
// since we'll just be hammering the symbol.
dst = instr->UnlinkDst()->AsRegOpnd();
dst->m_sym = int32Sym;
dst->SetType(TyInt32);
instr->SetDst(dst);
}
block->globOptData.liveInt32Syms->Set(varSym->m_id);
block->globOptData.liveLossyInt32Syms->Clear(varSym->m_id); // The store makes it lossless
block->globOptData.liveVarSyms->Clear(varSym->m_id);
block->globOptData.liveFloat64Syms->Clear(varSym->m_id);
}
void
GlobOpt::ToUInt32Dst(IR::Instr *instr, IR::RegOpnd *dst, BasicBlock *block)
{
// This should be called only for asm.js functions
Assert(GetIsAsmJSFunc());
StackSym *varSym = dst->m_sym;
Assert(!varSym->IsTypeSpec());
block->globOptData.liveInt32Syms->Set(varSym->m_id);
block->globOptData.liveLossyInt32Syms->Clear(varSym->m_id); // The store makes it lossless
block->globOptData.liveVarSyms->Clear(varSym->m_id);
block->globOptData.liveFloat64Syms->Clear(varSym->m_id);
}
void
GlobOpt::ToFloat64Dst(IR::Instr *instr, IR::RegOpnd *dst, BasicBlock *block)
{
StackSym *varSym = dst->m_sym;
Assert(!varSym->IsTypeSpec());
if (!this->IsLoopPrePass() && varSym->IsVar())
{
StackSym *float64Sym = varSym->GetFloat64EquivSym(this->func);
// Use UnlinkDst / SetDst to make sure isSingleDef is tracked properly,
// since we'll just be hammering the symbol.
dst = instr->UnlinkDst()->AsRegOpnd();
dst->m_sym = float64Sym;
dst->SetType(TyFloat64);
instr->SetDst(dst);
}
block->globOptData.liveFloat64Syms->Set(varSym->m_id);
block->globOptData.liveVarSyms->Clear(varSym->m_id);
block->globOptData.liveInt32Syms->Clear(varSym->m_id);
block->globOptData.liveLossyInt32Syms->Clear(varSym->m_id);
}
static void SetIsConstFlag(StackSym* dstSym, int64 value)
{
Assert(dstSym);
dstSym->SetIsInt64Const();
}
static void SetIsConstFlag(StackSym* dstSym, int value)
{
Assert(dstSym);
dstSym->SetIsIntConst(value);
}
static IR::Opnd* CreateIntConstOpnd(IR::Instr* instr, int64 value)
{
return (IR::Opnd*)IR::Int64ConstOpnd::New(value, instr->GetDst()->GetType(), instr->m_func);
}
static IR::Opnd* CreateIntConstOpnd(IR::Instr* instr, int value)
{
IntConstType constVal;
if (instr->GetDst()->IsUnsigned())
{
// Zero-extend in the case of uint
constVal = (uint32)value;
}
else
{
constVal = value;
}
return (IR::Opnd*)IR::IntConstOpnd::New(constVal, instr->GetDst()->GetType(), instr->m_func);
}
template <typename T>
IR::Opnd* GlobOpt::ReplaceWConst(IR::Instr **pInstr, T value, Value **pDstVal)
{
IR::Instr * &instr = *pInstr;
IR::Opnd * constOpnd = CreateIntConstOpnd(instr, value);
instr->ReplaceSrc1(constOpnd);
instr->FreeSrc2();
this->OptSrc(constOpnd, &instr);
IR::Opnd *dst = instr->GetDst();
StackSym *dstSym = dst->AsRegOpnd()->m_sym;
if (dstSym->IsSingleDef())
{
SetIsConstFlag(dstSym, value);
}
GOPT_TRACE_INSTR(instr, _u("Constant folding to %d: \n"), value);
*pDstVal = GetIntConstantValue(value, instr, dst);
return dst;
}
template <typename T>
bool GlobOpt::OptConstFoldBinaryWasm(
IR::Instr** pInstr,
const Value* src1,
const Value* src2,
Value **pDstVal)
{
IR::Instr* &instr = *pInstr;
if (!DoConstFold())
{
return false;
}
T src1IntConstantValue, src2IntConstantValue;
if (!src1 || !src1->GetValueInfo()->TryGetIntConstantValue(&src1IntConstantValue, false) || // A bit sketchy: for int32, false means likelyInt = false,
!src2 || !src2->GetValueInfo()->TryGetIntConstantValue(&src2IntConstantValue, false) // and for int64 it means unsigned = false
)
{
return false;
}
int64 tmpValueOut;
if (!instr->BinaryCalculatorT<T>(src1IntConstantValue, src2IntConstantValue, &tmpValueOut, func->GetJITFunctionBody()->IsWasmFunction()))
{
return false;
}
this->CaptureByteCodeSymUses(instr);
IR::Opnd *dst = (instr->GetDst()->IsInt64()) ? // dst can be int32 for int64 comparison operators
ReplaceWConst(pInstr, tmpValueOut, pDstVal) :
ReplaceWConst(pInstr, (int)tmpValueOut, pDstVal);
instr->m_opcode = Js::OpCode::Ld_I4;
this->ToInt32Dst(instr, dst->AsRegOpnd(), this->currentBlock);
return true;
}
bool
GlobOpt::OptConstFoldBinary(
IR::Instr * *pInstr,
const IntConstantBounds &src1IntConstantBounds,
const IntConstantBounds &src2IntConstantBounds,
Value **pDstVal)
{
IR::Instr * &instr = *pInstr;
int32 value;
IR::IntConstOpnd *constOpnd;
if (!DoConstFold())
{
return false;
}
int32 src1IntConstantValue = -1;
int32 src2IntConstantValue = -1;
int32 src1MaxIntConstantValue = -1;
int32 src2MaxIntConstantValue = -1;
int32 src1MinIntConstantValue = -1;
int32 src2MinIntConstantValue = -1;
if (instr->IsBranchInstr())
{
src1MinIntConstantValue = src1IntConstantBounds.LowerBound();
src1MaxIntConstantValue = src1IntConstantBounds.UpperBound();
src2MinIntConstantValue = src2IntConstantBounds.LowerBound();
src2MaxIntConstantValue = src2IntConstantBounds.UpperBound();
}
else if (src1IntConstantBounds.IsConstant() && src2IntConstantBounds.IsConstant())
{
src1IntConstantValue = src1IntConstantBounds.LowerBound();
src2IntConstantValue = src2IntConstantBounds.LowerBound();
}
else
{
return false;
}
IntConstType tmpValueOut;
if (!instr->BinaryCalculator(src1IntConstantValue, src2IntConstantValue, &tmpValueOut, TyInt32)
|| !Math::FitsInDWord(tmpValueOut))
{
return false;
}
value = (int32)tmpValueOut;
this->CaptureByteCodeSymUses(instr);
constOpnd = IR::IntConstOpnd::New(value, TyInt32, instr->m_func);
instr->ReplaceSrc1(constOpnd);
instr->FreeSrc2();
this->OptSrc(constOpnd, &instr);
IR::Opnd *dst = instr->GetDst();
Assert(dst->IsRegOpnd());
StackSym *dstSym = dst->AsRegOpnd()->m_sym;
if (dstSym->IsSingleDef())
{
dstSym->SetIsIntConst(value);
}
GOPT_TRACE_INSTR(instr, _u("Constant folding to %d: \n"), value);
*pDstVal = GetIntConstantValue(value, instr, dst);
if (IsTypeSpecPhaseOff(this->func))
{
instr->m_opcode = Js::OpCode::LdC_A_I4;
this->ToVarRegOpnd(dst->AsRegOpnd(), this->currentBlock);
}
else
{
instr->m_opcode = Js::OpCode::Ld_I4;
this->ToInt32Dst(instr, dst->AsRegOpnd(), this->currentBlock);
}
InvalidateInductionVariables(instr);
return true;
}
void
GlobOpt::OptConstFoldBr(bool test, IR::Instr *instr, Value * src1Val, Value * src2Val)
{
GOPT_TRACE_INSTR(instr, _u("Constant folding to branch: "));
BasicBlock *deadBlock;
if (src1Val)
{
this->ToInt32(instr, instr->GetSrc1(), this->currentBlock, src1Val, nullptr, false);
}
if (src2Val)
{
this->ToInt32(instr, instr->GetSrc2(), this->currentBlock, src2Val, nullptr, false);
}
this->CaptureByteCodeSymUses(instr);
if (test)
{
instr->m_opcode = Js::OpCode::Br;
instr->FreeSrc1();
if(instr->GetSrc2())
{
instr->FreeSrc2();
}
deadBlock = instr->m_next->AsLabelInstr()->GetBasicBlock();
}
else
{
AssertMsg(instr->m_next->IsLabelInstr(), "Next instr of branch should be a label...");
if(instr->AsBranchInstr()->IsMultiBranch())
{
return;
}
deadBlock = instr->AsBranchInstr()->GetTarget()->GetBasicBlock();
instr->FreeSrc1();
if(instr->GetSrc2())
{
instr->FreeSrc2();
}
instr->m_opcode = Js::OpCode::Nop;
}
// Loop back edge: we would have already decremented data use count for the tail block when we processed the loop header.
if (!(this->currentBlock->loop && this->currentBlock->loop->GetHeadBlock() == deadBlock))
{
this->currentBlock->DecrementDataUseCount();
}
this->currentBlock->RemoveDeadSucc(deadBlock, this->func->m_fg);
if (deadBlock->GetPredList()->Count() == 0)
{
deadBlock->SetDataUseCount(0);
}
}
void
GlobOpt::ChangeValueType(
BasicBlock *const block,
Value *const value,
const ValueType newValueType,
const bool preserveSubclassInfo,
const bool allowIncompatibleType) const
{
Assert(value);
// Why are we trying to change the value type of the type sym value? Asserting here to make sure we don't deep copy the type sym's value info.
Assert(!value->GetValueInfo()->IsJsType());
ValueInfo *const valueInfo = value->GetValueInfo();
const ValueType valueType(valueInfo->Type());
if(valueType == newValueType && (preserveSubclassInfo || valueInfo->IsGeneric()))
{
return;
}
// ArrayValueInfo has information specific to the array type, so make sure that doesn't change
Assert(
!preserveSubclassInfo ||
!valueInfo->IsArrayValueInfo() ||
newValueType.IsObject() && newValueType.GetObjectType() == valueInfo->GetObjectType());
Assert(!valueInfo->GetSymStore() || !valueInfo->GetSymStore()->IsStackSym() || !valueInfo->GetSymStore()->AsStackSym()->IsFromByteCodeConstantTable());
ValueInfo *const newValueInfo =
preserveSubclassInfo
? valueInfo->Copy(alloc)
: valueInfo->CopyWithGenericStructureKind(alloc);
newValueInfo->Type() = newValueType;
ChangeValueInfo(block, value, newValueInfo, allowIncompatibleType);
}
void
GlobOpt::ChangeValueInfo(BasicBlock *const block, Value *const value, ValueInfo *const newValueInfo, const bool allowIncompatibleType, const bool compensated) const
{
Assert(value);
Assert(newValueInfo);
// The value type must be changed to something more specific or something more generic. For instance, it would be changed to
// something more specific if the current value type is LikelyArray and checks have been done to ensure that it's an array,
// and it would be changed to something more generic if a call kills the Array value type and it must be treated as
// LikelyArray going forward.
// There are cases where we change the type because of differing profile information, and because of rejit, that profile
// information may conflict. We need to allow incompatible types in those cases; however, the old type should be indefinite.
Assert((allowIncompatibleType && !value->GetValueInfo()->IsDefinite()) ||
AreValueInfosCompatible(newValueInfo, value->GetValueInfo()));
// ArrayValueInfo has information specific to the array type, so make sure that doesn't change
Assert(
!value->GetValueInfo()->IsArrayValueInfo() ||
!newValueInfo->IsArrayValueInfo() ||
newValueInfo->GetObjectType() == value->GetValueInfo()->GetObjectType());
if(block)
{
TrackValueInfoChangeForKills(block, value, newValueInfo, compensated);
}
value->SetValueInfo(newValueInfo);
}
bool
GlobOpt::AreValueInfosCompatible(const ValueInfo *const v0, const ValueInfo *const v1) const
{
Assert(v0);
Assert(v1);
if(v0->IsUninitialized() || v1->IsUninitialized())
{
return true;
}
const bool doAggressiveIntTypeSpec = DoAggressiveIntTypeSpec();
if(doAggressiveIntTypeSpec && (v0->IsInt() || v1->IsInt()))
{
// Int specialization in some uncommon loop cases involving dependencies needs to allow specializing values of
// arbitrary types, even values that are definitely not int, to compensate for aggressive assumptions made by a loop
// prepass
return true;
}
if ((v0->Type()).IsMixedTypedArrayPair(v1->Type()) || (v1->Type()).IsMixedTypedArrayPair(v0->Type()))
{
return true;
}
const bool doFloatTypeSpec = DoFloatTypeSpec();
if(doFloatTypeSpec && (v0->IsFloat() || v1->IsFloat()))
{
// Float specialization allows specializing values of arbitrary types, even values that are definitely not float
return true;
}
const bool doArrayMissingValueCheckHoist = DoArrayMissingValueCheckHoist();
const bool doNativeArrayTypeSpec = DoNativeArrayTypeSpec();
const auto AreValueTypesCompatible = [=](const ValueType t0, const ValueType t1)
{
return
t0.IsSubsetOf(t1, doAggressiveIntTypeSpec, doFloatTypeSpec, doArrayMissingValueCheckHoist, doNativeArrayTypeSpec) ||
t1.IsSubsetOf(t0, doAggressiveIntTypeSpec, doFloatTypeSpec, doArrayMissingValueCheckHoist, doNativeArrayTypeSpec);
};
const ValueType t0(v0->Type().ToDefinite()), t1(v1->Type().ToDefinite());
if(t0.IsLikelyObject() && t1.IsLikelyObject())
{
// Check compatibility for the primitive portions and the object portions of the value types separately
if(AreValueTypesCompatible(t0.ToDefiniteObject(), t1.ToDefiniteObject()) &&
(
!t0.HasBeenPrimitive() ||
!t1.HasBeenPrimitive() ||
AreValueTypesCompatible(t0.ToDefinitePrimitiveSubset(), t1.ToDefinitePrimitiveSubset())
))
{
return true;
}
}
else if(AreValueTypesCompatible(t0, t1))
{
return true;
}
const FloatConstantValueInfo *floatConstantValueInfo;
const ValueInfo *likelyIntValueinfo;
if(v0->IsFloatConstant() && v1->IsLikelyInt())
{
floatConstantValueInfo = v0->AsFloatConstant();
likelyIntValueinfo = v1;
}
else if(v0->IsLikelyInt() && v1->IsFloatConstant())
{
floatConstantValueInfo = v1->AsFloatConstant();
likelyIntValueinfo = v0;
}
else
{
return false;
}
// A float constant value with a value that is actually an int is a subset of a likely-int value.
// Ideally, we should create an int constant value for this up front, such that IsInt() also returns true. There
// were other issues with doing that; we should revisit whether it can be done.
int32 int32Value;
return
Js::JavascriptNumber::TryGetInt32Value(floatConstantValueInfo->FloatValue(), &int32Value) &&
(!likelyIntValueinfo->IsLikelyTaggedInt() || !Js::TaggedInt::IsOverflow(int32Value));
}
#if DBG
void
GlobOpt::VerifyArrayValueInfoForTracking(
const ValueInfo *const valueInfo,
const bool isJsArray,
const BasicBlock *const block,
const bool ignoreKnownImplicitCalls) const
{
Assert(valueInfo);
Assert(valueInfo->IsAnyOptimizedArray());
Assert(isJsArray == valueInfo->IsArrayOrObjectWithArray());
Assert(!isJsArray == valueInfo->IsOptimizedTypedArray());
Assert(block);
Loop *implicitCallsLoop;
if(block->next && !block->next->isDeleted && block->next->isLoopHeader)
{
// Since a loop's landing pad does not have user code, determine whether disabling implicit calls is allowed in the
// landing pad based on the loop for which this block is the landing pad.
implicitCallsLoop = block->next->loop;
Assert(implicitCallsLoop);
Assert(implicitCallsLoop->landingPad == block);
}
else
{
implicitCallsLoop = block->loop;
}
Assert(
!isJsArray ||
DoArrayCheckHoist(valueInfo->Type(), implicitCallsLoop) ||
(
ignoreKnownImplicitCalls &&
!(implicitCallsLoop ? ImplicitCallFlagsAllowOpts(implicitCallsLoop) : ImplicitCallFlagsAllowOpts(func))
));
Assert(!(isJsArray && valueInfo->HasNoMissingValues() && !DoArrayMissingValueCheckHoist()));
Assert(
!(
valueInfo->IsArrayValueInfo() &&
(
valueInfo->AsArrayValueInfo()->HeadSegmentSym() ||
valueInfo->AsArrayValueInfo()->HeadSegmentLengthSym()
) &&
!DoArraySegmentHoist(valueInfo->Type())
));
#if 0
// We can't assert here that there is only a head segment length sym if hoisting is allowed in the current block,
// because we may have propagated the sym forward out of a loop, and hoisting may be allowed inside but not
// outside the loop.
Assert(
isJsArray ||
!valueInfo->IsArrayValueInfo() ||
!valueInfo->AsArrayValueInfo()->HeadSegmentLengthSym() ||
DoTypedArraySegmentLengthHoist(implicitCallsLoop) ||
ignoreKnownImplicitCalls ||
(implicitCallsLoop ? ImplicitCallFlagsAllowOpts(implicitCallsLoop) : ImplicitCallFlagsAllowOpts(func))
);
#endif
Assert(
!(
isJsArray &&
valueInfo->IsArrayValueInfo() &&
valueInfo->AsArrayValueInfo()->LengthSym() &&
!DoArrayLengthHoist()
));
}
#endif
void
GlobOpt::TrackNewValueForKills(Value *const value)
{
Assert(value);
if(!value->GetValueInfo()->IsAnyOptimizedArray())
{
return;
}
DoTrackNewValueForKills(value);
}
void
GlobOpt::DoTrackNewValueForKills(Value *const value)
{
Assert(value);
ValueInfo *const valueInfo = value->GetValueInfo();
Assert(valueInfo->IsAnyOptimizedArray());
Assert(!valueInfo->IsArrayValueInfo());
// The value and value info here are new, so it's okay to modify the value info in-place
Assert(!valueInfo->GetSymStore());
const bool isJsArray = valueInfo->IsArrayOrObjectWithArray();
Assert(!isJsArray == valueInfo->IsOptimizedTypedArray());
const bool isVirtualTypedArray = valueInfo->IsOptimizedVirtualTypedArray();
Loop *implicitCallsLoop;
if(currentBlock->next && !currentBlock->next->isDeleted && currentBlock->next->isLoopHeader)
{
// Since a loop's landing pad does not have user code, determine whether disabling implicit calls is allowed in the
// landing pad based on the loop for which this block is the landing pad.
implicitCallsLoop = currentBlock->next->loop;
Assert(implicitCallsLoop);
Assert(implicitCallsLoop->landingPad == currentBlock);
}
else
{
implicitCallsLoop = currentBlock->loop;
}
if(isJsArray || isVirtualTypedArray)
{
if(!DoArrayCheckHoist(valueInfo->Type(), implicitCallsLoop))
{
// Array opts are disabled for this value type, so treat it as an indefinite value type going forward
valueInfo->Type() = valueInfo->Type().ToLikely();
return;
}
if(isJsArray && valueInfo->HasNoMissingValues() && !DoArrayMissingValueCheckHoist())
{
valueInfo->Type() = valueInfo->Type().SetHasNoMissingValues(false);
}
}
#if DBG
VerifyArrayValueInfoForTracking(valueInfo, isJsArray, currentBlock);
#endif
if(!isJsArray && !isVirtualTypedArray)
{
return;
}
// Can't assume going forward that it will definitely be an array without disabling implicit calls, because the
// array may be transformed into an ES5 array. Since array opts are enabled, implicit calls can be disabled, and we can
// treat it as a definite value type going forward, but the value needs to be tracked so that something like a call can
// revert the value type to a likely version.
CurrentBlockData()->valuesToKillOnCalls->Add(value);
}
void
GlobOpt::TrackCopiedValueForKills(Value *const value)
{
Assert(value);
if(!value->GetValueInfo()->IsAnyOptimizedArray())
{
return;
}
DoTrackCopiedValueForKills(value);
}
void
GlobOpt::DoTrackCopiedValueForKills(Value *const value)
{
Assert(value);
ValueInfo *const valueInfo = value->GetValueInfo();
Assert(valueInfo->IsAnyOptimizedArray());
const bool isJsArray = valueInfo->IsArrayOrObjectWithArray();
Assert(!isJsArray == valueInfo->IsOptimizedTypedArray());
const bool isVirtualTypedArray = valueInfo->IsOptimizedVirtualTypedArray();
#if DBG
VerifyArrayValueInfoForTracking(valueInfo, isJsArray, currentBlock);
#endif
if(!isJsArray && !isVirtualTypedArray && !(valueInfo->IsArrayValueInfo() && valueInfo->AsArrayValueInfo()->HeadSegmentLengthSym()))
{
return;
}
// Can't assume going forward that it will definitely be an array without disabling implicit calls, because the
// array may be transformed into an ES5 array. Since array opts are enabled, implicit calls can be disabled, and we can
// treat it as a definite value type going forward, but the value needs to be tracked so that something like a call can
// revert the value type to a likely version.
CurrentBlockData()->valuesToKillOnCalls->Add(value);
}
void
GlobOpt::TrackMergedValueForKills(
Value *const value,
GlobOptBlockData *const blockData,
BVSparse<JitArenaAllocator> *const mergedValueTypesTrackedForKills) const
{
Assert(value);
if(!value->GetValueInfo()->IsAnyOptimizedArray())
{
return;
}
DoTrackMergedValueForKills(value, blockData, mergedValueTypesTrackedForKills);
}
void
GlobOpt::DoTrackMergedValueForKills(
Value *const value,
GlobOptBlockData *const blockData,
BVSparse<JitArenaAllocator> *const mergedValueTypesTrackedForKills) const
{
Assert(value);
Assert(blockData);
ValueInfo *valueInfo = value->GetValueInfo();
Assert(valueInfo->IsAnyOptimizedArray());
const bool isJsArray = valueInfo->IsArrayOrObjectWithArray();
Assert(!isJsArray == valueInfo->IsOptimizedTypedArray());
const bool isVirtualTypedArray = valueInfo->IsOptimizedVirtualTypedArray();
#if DBG
VerifyArrayValueInfoForTracking(valueInfo, isJsArray, currentBlock, true);
#endif
if(!isJsArray && !isVirtualTypedArray && !(valueInfo->IsArrayValueInfo() && valueInfo->AsArrayValueInfo()->HeadSegmentLengthSym()))
{
return;
}
// Can't assume going forward that it will definitely be an array without disabling implicit calls, because the
// array may be transformed into an ES5 array. Since array opts are enabled, implicit calls can be disabled, and we can
// treat it as a definite value type going forward, but the value needs to be tracked so that something like a call can
// revert the value type to a likely version.
if(!mergedValueTypesTrackedForKills || !mergedValueTypesTrackedForKills->TestAndSet(value->GetValueNumber()))
{
blockData->valuesToKillOnCalls->Add(value);
}
}
void
GlobOpt::TrackValueInfoChangeForKills(BasicBlock *const block, Value *const value, ValueInfo *const newValueInfo, const bool compensated) const
{
Assert(block);
Assert(value);
Assert(newValueInfo);
ValueInfo *const oldValueInfo = value->GetValueInfo();
#if DBG
if(oldValueInfo->IsAnyOptimizedArray())
{
VerifyArrayValueInfoForTracking(oldValueInfo, oldValueInfo->IsArrayOrObjectWithArray(), block, compensated);
}
#endif
const bool trackOldValueInfo =
oldValueInfo->IsArrayOrObjectWithArray() ||
oldValueInfo->IsOptimizedVirtualTypedArray() ||
(
oldValueInfo->IsOptimizedTypedArray() &&
oldValueInfo->IsArrayValueInfo() &&
oldValueInfo->AsArrayValueInfo()->HeadSegmentLengthSym()
);
Assert(trackOldValueInfo == block->globOptData.valuesToKillOnCalls->ContainsKey(value));
#if DBG
if(newValueInfo->IsAnyOptimizedArray())
{
VerifyArrayValueInfoForTracking(newValueInfo, newValueInfo->IsArrayOrObjectWithArray(), block, compensated);
}
#endif
const bool trackNewValueInfo =
newValueInfo->IsArrayOrObjectWithArray() ||
newValueInfo->IsOptimizedVirtualTypedArray() ||
(
newValueInfo->IsOptimizedTypedArray() &&
newValueInfo->IsArrayValueInfo() &&
newValueInfo->AsArrayValueInfo()->HeadSegmentLengthSym()
);
if(trackOldValueInfo == trackNewValueInfo)
{
return;
}
if(trackNewValueInfo)
{
block->globOptData.valuesToKillOnCalls->Add(value);
}
else
{
block->globOptData.valuesToKillOnCalls->Remove(value);
}
}
void
GlobOpt::ProcessValueKills(IR::Instr *const instr)
{
Assert(instr);
ValueSet *const valuesToKillOnCalls = CurrentBlockData()->valuesToKillOnCalls;
if(!IsLoopPrePass() && valuesToKillOnCalls->Count() == 0)
{
return;
}
const JsArrayKills kills = CheckJsArrayKills(instr);
Assert(!kills.KillsArrayHeadSegments() || kills.KillsArrayHeadSegmentLengths());
if(IsLoopPrePass())
{
rootLoopPrePass->jsArrayKills = rootLoopPrePass->jsArrayKills.Merge(kills);
Assert(
!rootLoopPrePass->parent ||
rootLoopPrePass->jsArrayKills.AreSubsetOf(rootLoopPrePass->parent->jsArrayKills));
if(kills.KillsAllArrays())
{
rootLoopPrePass->needImplicitCallBailoutChecksForJsArrayCheckHoist = false;
}
if(valuesToKillOnCalls->Count() == 0)
{
return;
}
}
if(kills.KillsAllArrays())
{
Assert(kills.KillsTypedArrayHeadSegmentLengths());
// - Calls need to kill the value types of values in the following list. For instance, calls can transform a JS array
// into an ES5 array, so any definitely-array value types need to be killed. Also, virtual typed arrays do not have
// bounds checks; this can be problematic if the array is detached, so check to ensure that it is a virtual array.
// Update the value types to likely to ensure a bailout that asserts the array type is generated.
// - Calls also need to kill typed array head segment lengths. A typed array's array buffer may be transferred to a web
// worker, in which case the typed array's length is set to zero.
for(auto it = valuesToKillOnCalls->GetIterator(); it.IsValid(); it.MoveNext())
{
Value *const value = it.CurrentValue();
ValueInfo *const valueInfo = value->GetValueInfo();
Assert(
valueInfo->IsArrayOrObjectWithArray() ||
valueInfo->IsOptimizedVirtualTypedArray() ||
valueInfo->IsOptimizedTypedArray() && valueInfo->AsArrayValueInfo()->HeadSegmentLengthSym());
if (valueInfo->IsArrayOrObjectWithArray() || valueInfo->IsOptimizedVirtualTypedArray())
{
ChangeValueType(nullptr, value, valueInfo->Type().ToLikely(), false);
continue;
}
ChangeValueInfo(
nullptr,
value,
valueInfo->AsArrayValueInfo()->Copy(alloc, true, false /* copyHeadSegmentLength */, true));
}
valuesToKillOnCalls->Clear();
return;
}
if(kills.KillsArraysWithNoMissingValues())
{
// Some operations may kill arrays with no missing values in unlikely circumstances. Convert their value types to likely
// versions so that the checks have to be redone.
for(auto it = valuesToKillOnCalls->GetIteratorWithRemovalSupport(); it.IsValid(); it.MoveNext())
{
Value *const value = it.CurrentValue();
ValueInfo *const valueInfo = value->GetValueInfo();
Assert(
valueInfo->IsArrayOrObjectWithArray() ||
valueInfo->IsOptimizedVirtualTypedArray() ||
valueInfo->IsOptimizedTypedArray() && valueInfo->AsArrayValueInfo()->HeadSegmentLengthSym());
if(!valueInfo->IsArrayOrObjectWithArray() || !valueInfo->HasNoMissingValues())
{
continue;
}
ChangeValueType(nullptr, value, valueInfo->Type().ToLikely(), false);
it.RemoveCurrent();
}
}
else if(kills.KillsObjectArraysWithNoMissingValues())
{
// Some operations may kill objects with arrays-with-no-missing-values in unlikely circumstances. Convert their value types to likely
// versions so that the checks have to be redone.
for(auto it = valuesToKillOnCalls->GetIteratorWithRemovalSupport(); it.IsValid(); it.MoveNext())
{
Value *const value = it.CurrentValue();
ValueInfo *const valueInfo = value->GetValueInfo();
Assert(
valueInfo->IsArrayOrObjectWithArray() ||
valueInfo->IsOptimizedVirtualTypedArray() ||
valueInfo->IsOptimizedTypedArray() && valueInfo->AsArrayValueInfo()->HeadSegmentLengthSym());
if(!valueInfo->IsArrayOrObjectWithArray() || valueInfo->IsArray() || !valueInfo->HasNoMissingValues())
{
continue;
}
ChangeValueType(nullptr, value, valueInfo->Type().ToLikely(), false);
it.RemoveCurrent();
}
}
if(kills.KillsNativeArrays())
{
// Some operations may kill native arrays in (what should be) unlikely circumstances. Convert their value types to
// likely versions so that the checks have to be redone.
for(auto it = valuesToKillOnCalls->GetIteratorWithRemovalSupport(); it.IsValid(); it.MoveNext())
{
Value *const value = it.CurrentValue();
ValueInfo *const valueInfo = value->GetValueInfo();
Assert(
valueInfo->IsArrayOrObjectWithArray() ||
valueInfo->IsOptimizedVirtualTypedArray() ||
valueInfo->IsOptimizedTypedArray() && valueInfo->AsArrayValueInfo()->HeadSegmentLengthSym());
if(!valueInfo->IsArrayOrObjectWithArray() || valueInfo->HasVarElements())
{
continue;
}
ChangeValueType(nullptr, value, valueInfo->Type().ToLikely(), false);
it.RemoveCurrent();
}
}
const bool likelyKillsJsArraysWithNoMissingValues = IsOperationThatLikelyKillsJsArraysWithNoMissingValues(instr);
if(!kills.KillsArrayHeadSegmentLengths())
{
Assert(!kills.KillsArrayHeadSegments());
if(!likelyKillsJsArraysWithNoMissingValues && !kills.KillsArrayLengths())
{
return;
}
}
for(auto it = valuesToKillOnCalls->GetIterator(); it.IsValid(); it.MoveNext())
{
Value *const value = it.CurrentValue();
ValueInfo *valueInfo = value->GetValueInfo();
Assert(
valueInfo->IsArrayOrObjectWithArray() ||
valueInfo->IsOptimizedVirtualTypedArray() ||
valueInfo->IsOptimizedTypedArray() && valueInfo->AsArrayValueInfo()->HeadSegmentLengthSym());
if(!valueInfo->IsArrayOrObjectWithArray())
{
continue;
}
if(likelyKillsJsArraysWithNoMissingValues && valueInfo->HasNoMissingValues())
{
ChangeValueType(nullptr, value, valueInfo->Type().SetHasNoMissingValues(false), true);
valueInfo = value->GetValueInfo();
}
if(!valueInfo->IsArrayValueInfo())
{
continue;
}
ArrayValueInfo *const arrayValueInfo = valueInfo->AsArrayValueInfo();
const bool removeHeadSegment = kills.KillsArrayHeadSegments() && arrayValueInfo->HeadSegmentSym();
const bool removeHeadSegmentLength = kills.KillsArrayHeadSegmentLengths() && arrayValueInfo->HeadSegmentLengthSym();
const bool removeLength = kills.KillsArrayLengths() && arrayValueInfo->LengthSym();
if(removeHeadSegment || removeHeadSegmentLength || removeLength)
{
ChangeValueInfo(
nullptr,
value,
arrayValueInfo->Copy(alloc, !removeHeadSegment, !removeHeadSegmentLength, !removeLength));
valueInfo = value->GetValueInfo();
}
}
}
void
GlobOpt::ProcessValueKills(BasicBlock *const block, GlobOptBlockData *const blockData)
{
Assert(block);
Assert(blockData);
ValueSet *const valuesToKillOnCalls = blockData->valuesToKillOnCalls;
if(!IsLoopPrePass() && valuesToKillOnCalls->Count() == 0)
{
return;
}
// If the current block or loop has implicit calls, kill all definitely-array value types, as using that info will cause
// implicit calls to be disabled, resulting in unnecessary bailouts
const bool killValuesOnImplicitCalls =
(block->loop ? !this->ImplicitCallFlagsAllowOpts(block->loop) : !this->ImplicitCallFlagsAllowOpts(func));
if (!killValuesOnImplicitCalls)
{
return;
}
if(IsLoopPrePass() && block->loop == rootLoopPrePass)
{
AnalysisAssert(rootLoopPrePass);
for (Loop * loop = rootLoopPrePass; loop != nullptr; loop = loop->parent)
{
loop->jsArrayKills.SetKillsAllArrays();
}
Assert(!rootLoopPrePass->parent || rootLoopPrePass->jsArrayKills.AreSubsetOf(rootLoopPrePass->parent->jsArrayKills));
if(valuesToKillOnCalls->Count() == 0)
{
return;
}
}
for(auto it = valuesToKillOnCalls->GetIterator(); it.IsValid(); it.MoveNext())
{
Value *const value = it.CurrentValue();
ValueInfo *const valueInfo = value->GetValueInfo();
Assert(
valueInfo->IsArrayOrObjectWithArray() ||
valueInfo->IsOptimizedVirtualTypedArray() ||
valueInfo->IsOptimizedTypedArray() && valueInfo->AsArrayValueInfo()->HeadSegmentLengthSym());
if(valueInfo->IsArrayOrObjectWithArray() || valueInfo->IsOptimizedVirtualTypedArray())
{
ChangeValueType(nullptr, value, valueInfo->Type().ToLikely(), false);
continue;
}
ChangeValueInfo(
nullptr,
value,
valueInfo->AsArrayValueInfo()->Copy(alloc, true, false /* copyHeadSegmentLength */, true));
}
valuesToKillOnCalls->Clear();
}
void
GlobOpt::ProcessValueKillsForLoopHeaderAfterBackEdgeMerge(BasicBlock *const block, GlobOptBlockData *const blockData)
{
Assert(block);
Assert(block->isLoopHeader);
Assert(blockData);
ValueSet *const valuesToKillOnCalls = blockData->valuesToKillOnCalls;
if(valuesToKillOnCalls->Count() == 0)
{
return;
}
const JsArrayKills loopKills(block->loop->jsArrayKills);
for(auto it = valuesToKillOnCalls->GetIteratorWithRemovalSupport(); it.IsValid(); it.MoveNext())
{
Value *const value = it.CurrentValue();
ValueInfo *valueInfo = value->GetValueInfo();
Assert(
valueInfo->IsArrayOrObjectWithArray() ||
valueInfo->IsOptimizedVirtualTypedArray() ||
valueInfo->IsOptimizedTypedArray() && valueInfo->AsArrayValueInfo()->HeadSegmentLengthSym());
const bool isJsArray = valueInfo->IsArrayOrObjectWithArray();
Assert(!isJsArray == valueInfo->IsOptimizedTypedArray());
const bool isVirtualTypedArray = valueInfo->IsOptimizedVirtualTypedArray();
if((isJsArray || isVirtualTypedArray) ? loopKills.KillsValueType(valueInfo->Type()) : loopKills.KillsTypedArrayHeadSegmentLengths())
{
// Hoisting array checks and other related things for this type is disabled for the loop due to the kill, as
// compensation code is currently not added on back-edges. When merging values from a back-edge, the array value
// type cannot be definite, as that may require adding compensation code on the back-edge if the optimization pass
// chooses to not optimize the array.
if(isJsArray || isVirtualTypedArray)
{
ChangeValueType(nullptr, value, valueInfo->Type().ToLikely(), false);
}
else
{
ChangeValueInfo(
nullptr,
value,
valueInfo->AsArrayValueInfo()->Copy(alloc, true, false /* copyHeadSegmentLength */, true));
}
it.RemoveCurrent();
continue;
}
if(!isJsArray || !valueInfo->IsArrayValueInfo())
{
continue;
}
// Similarly, if the loop contains an operation that kills JS array segments, don't make the segment or other related
// syms available initially inside the loop.
ArrayValueInfo *const arrayValueInfo = valueInfo->AsArrayValueInfo();
const bool removeHeadSegment = loopKills.KillsArrayHeadSegments() && arrayValueInfo->HeadSegmentSym();
const bool removeHeadSegmentLength = loopKills.KillsArrayHeadSegmentLengths() && arrayValueInfo->HeadSegmentLengthSym();
const bool removeLength = loopKills.KillsArrayLengths() && arrayValueInfo->LengthSym();
if(removeHeadSegment || removeHeadSegmentLength || removeLength)
{
ChangeValueInfo(
nullptr,
value,
arrayValueInfo->Copy(alloc, !removeHeadSegment, !removeHeadSegmentLength, !removeLength));
valueInfo = value->GetValueInfo();
}
}
}
bool
GlobOpt::NeedBailOnImplicitCallForLiveValues(BasicBlock const * const block, const bool isForwardPass) const
{
if(isForwardPass)
{
return block->globOptData.valuesToKillOnCalls->Count() != 0;
}
if(block->noImplicitCallUses->IsEmpty())
{
Assert(block->noImplicitCallNoMissingValuesUses->IsEmpty());
Assert(block->noImplicitCallNativeArrayUses->IsEmpty());
Assert(block->noImplicitCallJsArrayHeadSegmentSymUses->IsEmpty());
Assert(block->noImplicitCallArrayLengthSymUses->IsEmpty());
return false;
}
return true;
}
IR::Instr*
GlobOpt::CreateBoundsCheckInstr(IR::Opnd* lowerBound, IR::Opnd* upperBound, int offset, Func* func)
{
IR::Instr* instr = IR::Instr::New(Js::OpCode::BoundCheck, func);
return AttachBoundsCheckData(instr, lowerBound, upperBound, offset);
}
IR::Instr*
GlobOpt::CreateBoundsCheckInstr(IR::Opnd* lowerBound, IR::Opnd* upperBound, int offset, IR::BailOutKind bailoutkind, BailOutInfo* bailoutInfo, Func * func)
{
IR::Instr* instr = IR::BailOutInstr::New(Js::OpCode::BoundCheck, bailoutkind, bailoutInfo, func);
return AttachBoundsCheckData(instr, lowerBound, upperBound, offset);
}
IR::Instr*
GlobOpt::AttachBoundsCheckData(IR::Instr* instr, IR::Opnd* lowerBound, IR::Opnd* upperBound, int offset)
{
instr->SetSrc1(lowerBound);
instr->SetSrc2(upperBound);
if (offset != 0)
{
instr->SetDst(IR::IntConstOpnd::New(offset, TyInt32, instr->m_func));
}
return instr;
}
void
GlobOpt::OptArraySrc(IR::Instr ** const instrRef, Value ** src1Val, Value ** src2Val)
{
Assert(instrRef != nullptr);
ArraySrcOpt arraySrcOpt(this, instrRef, src1Val, src2Val);
arraySrcOpt.Optimize();
}
void
GlobOpt::ProcessNoImplicitCallArrayUses(IR::RegOpnd * baseOpnd, IR::ArrayRegOpnd * baseArrayOpnd, IR::Instr * instr, bool isLikelyJsArray, bool useNoMissingValues)
{
if (isLikelyJsArray)
{
// Insert an instruction to indicate to the dead-store pass that implicit calls need to be kept disabled until this
// instruction. Operations other than LdElem, StElem and IsIn don't benefit much from arrays having no missing values,
// so no need to ensure that the array still has no missing values. For a particular array, if none of the accesses
// benefit much from the no-missing-values information, it may be beneficial to avoid checking for no missing
// values, especially in the case for a single array access, where the cost of the check could be relatively
// significant. An StElem has to do additional checks in the common path if the array may have missing values, and
// a StElem that operates on an array that has no missing values is more likely to keep the no-missing-values info
// on the array more precise, so it still benefits a little from the no-missing-values info.
this->CaptureNoImplicitCallUses(baseOpnd, isLikelyJsArray);
}
else if (baseArrayOpnd && baseArrayOpnd->HeadSegmentLengthSym())
{
// A typed array's array buffer may be transferred to a web worker as part of an implicit call, in which case the typed
// array's length is set to zero. Insert an instruction to indicate to the dead-store pass that implicit calls need to
// be disabled until this instruction.
IR::RegOpnd *const headSegmentLengthOpnd =
IR::RegOpnd::New(
baseArrayOpnd->HeadSegmentLengthSym(),
baseArrayOpnd->HeadSegmentLengthSym()->GetType(),
instr->m_func);
const IR::AutoReuseOpnd autoReuseHeadSegmentLengthOpnd(headSegmentLengthOpnd, instr->m_func);
this->CaptureNoImplicitCallUses(headSegmentLengthOpnd, false);
}
}
void
GlobOpt::OptStackArgLenAndConst(IR::Instr* instr, Value** src1Val)
{
if (!PHASE_OFF(Js::StackArgLenConstOptPhase, instr->m_func) && instr->m_func->IsStackArgsEnabled() && instr->usesStackArgumentsObject && instr->IsInlined())
{
IR::Opnd* src1 = instr->GetSrc1();
auto replaceInstr = [&](IR::Opnd* newopnd, Js::OpCode opcode)
{
if (PHASE_TESTTRACE(Js::StackArgLenConstOptPhase, instr->m_func))
{
Output::Print(_u("Inlined function %s has replaced opcode %s with opcode %s for stack arg optimization.\n"), instr->m_func->GetJITFunctionBody()->GetDisplayName(),
Js::OpCodeUtil::GetOpCodeName(instr->m_opcode), Js::OpCodeUtil::GetOpCodeName(opcode));
Output::Flush();
}
this->CaptureByteCodeSymUses(instr);
instr->m_opcode = opcode;
instr->ReplaceSrc1(newopnd);
if (instr->HasBailOutInfo())
{
instr->ClearBailOutInfo();
}
if (instr->IsProfiledInstr())
{
Assert(opcode == Js::OpCode::Ld_A || opcode == Js::OpCode::Typeof);
instr->AsProfiledInstr()->u.FldInfo().valueType = ValueType::Uninitialized;
}
*src1Val = this->OptSrc(instr->GetSrc1(), &instr);
instr->m_func->hasArgLenAndConstOpt = true;
};
Assert(CurrentBlockData()->IsArgumentsOpnd(src1));
switch(instr->m_opcode)
{
case Js::OpCode::LdLen_A:
{
IR::AddrOpnd* newopnd = IR::AddrOpnd::New(Js::TaggedInt::ToVarUnchecked(instr->m_func->actualCount - 1), IR::AddrOpndKindConstantVar, instr->m_func);
replaceInstr(newopnd, Js::OpCode::Ld_A);
break;
}
case Js::OpCode::LdElemI_A:
case Js::OpCode::TypeofElem:
{
IR::IndirOpnd* indirOpndSrc1 = src1->AsIndirOpnd();
if (!indirOpndSrc1->GetIndexOpnd())
{
int argIndex = indirOpndSrc1->GetOffset() + 1;
IR::Instr* defInstr = nullptr;
if (argIndex > 0)
{
IR::Instr* inlineeStart = instr->m_func->GetInlineeStart();
inlineeStart->IterateArgInstrs([&](IR::Instr* argInstr) {
StackSym *argSym = argInstr->GetDst()->AsSymOpnd()->m_sym->AsStackSym();
if (argSym->GetArgSlotNum() - 1 == argIndex)
{
defInstr = argInstr;
return true;
}
return false;
});
}
Js::OpCode replacementOpcode;
if (instr->m_opcode == Js::OpCode::TypeofElem)
{
replacementOpcode = Js::OpCode::Typeof;
}
else
{
replacementOpcode = Js::OpCode::Ld_A;
}
// If we cannot find the right instruction (e.g. when accessing arguments[2] and no arguments were passed to the func), use undefined.
if (defInstr == nullptr)
{
IR::Opnd * undefined = IR::AddrOpnd::New(instr->m_func->GetScriptContextInfo()->GetUndefinedAddr(), IR::AddrOpndKindDynamicVar, instr->m_func, true);
undefined->SetValueType(ValueType::Undefined);
replaceInstr(undefined, replacementOpcode);
}
else
{
replaceInstr(defInstr->GetSrc1(), replacementOpcode);
}
}
else
{
instr->m_func->unoptimizableArgumentsObjReference++;
}
break;
}
}
}
}
void
GlobOpt::CaptureNoImplicitCallUses(
IR::Opnd *opnd,
const bool usesNoMissingValuesInfo,
IR::Instr *const includeCurrentInstr)
{
Assert(!IsLoopPrePass());
Assert(noImplicitCallUsesToInsert);
Assert(opnd);
// The opnd may be deleted later, so make a copy to ensure it is alive for inserting NoImplicitCallUses later
opnd = opnd->Copy(func);
if(!usesNoMissingValuesInfo)
{
const ValueType valueType(opnd->GetValueType());
if(valueType.IsArrayOrObjectWithArray() && valueType.HasNoMissingValues())
{
// Inserting NoImplicitCallUses for an opnd with a definitely-array-with-no-missing-values value type means that the
// instruction following it uses the information that the array has no missing values in some way, for instance, it
// may omit missing value checks. Based on that, the dead-store phase in turn ensures that the necessary bailouts
// are inserted to ensure that the array still has no missing values until the following instruction. Since
// 'usesNoMissingValuesInfo' is false, change the value type to indicate to the dead-store phase that the following
// instruction does not use the no-missing-values information.
opnd->SetValueType(valueType.SetHasNoMissingValues(false));
}
}
if(includeCurrentInstr)
{
IR::Instr *const noImplicitCallUses =
IR::PragmaInstr::New(Js::OpCode::NoImplicitCallUses, 0, includeCurrentInstr->m_func);
noImplicitCallUses->SetSrc1(opnd);
noImplicitCallUses->GetSrc1()->SetIsJITOptimizedReg(true);
includeCurrentInstr->InsertAfter(noImplicitCallUses);
return;
}
noImplicitCallUsesToInsert->Add(opnd);
}
void
GlobOpt::InsertNoImplicitCallUses(IR::Instr *const instr)
{
Assert(noImplicitCallUsesToInsert);
const int n = noImplicitCallUsesToInsert->Count();
if(n == 0)
{
return;
}
IR::Instr *const insertBeforeInstr = instr->GetInsertBeforeByteCodeUsesInstr();
for(int i = 0; i < n;)
{
IR::Instr *const noImplicitCallUses = IR::PragmaInstr::New(Js::OpCode::NoImplicitCallUses, 0, instr->m_func);
noImplicitCallUses->SetSrc1(noImplicitCallUsesToInsert->Item(i));
noImplicitCallUses->GetSrc1()->SetIsJITOptimizedReg(true);
++i;
if(i < n)
{
noImplicitCallUses->SetSrc2(noImplicitCallUsesToInsert->Item(i));
noImplicitCallUses->GetSrc2()->SetIsJITOptimizedReg(true);
++i;
}
noImplicitCallUses->SetByteCodeOffset(instr);
insertBeforeInstr->InsertBefore(noImplicitCallUses);
}
noImplicitCallUsesToInsert->Clear();
}
void
GlobOpt::PrepareLoopArrayCheckHoist()
{
if(IsLoopPrePass() || !currentBlock->loop || !currentBlock->isLoopHeader || !currentBlock->loop->parent)
{
return;
}
if(currentBlock->loop->parent->needImplicitCallBailoutChecksForJsArrayCheckHoist)
{
// If the parent loop is an array check elimination candidate, so is the current loop. Even though the current loop may
// not have array accesses, if the parent loop hoists array checks, the current loop also needs implicit call checks.
currentBlock->loop->needImplicitCallBailoutChecksForJsArrayCheckHoist = true;
}
}
JsArrayKills
GlobOpt::CheckJsArrayKills(IR::Instr *const instr)
{
Assert(instr);
JsArrayKills kills;
if(instr->UsesAllFields())
{
// Calls can (but are unlikely to) change a JavaScript array into an ES5 array, which may have different behavior for
// index properties.
kills.SetKillsAllArrays();
return kills;
}
const bool doArrayMissingValueCheckHoist = DoArrayMissingValueCheckHoist();
const bool doNativeArrayTypeSpec = DoNativeArrayTypeSpec();
const bool doArraySegmentHoist = DoArraySegmentHoist(ValueType::GetObject(ObjectType::Array));
Assert(doArraySegmentHoist == DoArraySegmentHoist(ValueType::GetObject(ObjectType::ObjectWithArray)));
const bool doArrayLengthHoist = DoArrayLengthHoist();
if(!doArrayMissingValueCheckHoist && !doNativeArrayTypeSpec && !doArraySegmentHoist && !doArrayLengthHoist)
{
return kills;
}
// The following operations may create missing values in an array in an unlikely circumstance. Even though they don't kill
// the fact that the 'this' parameter is an array (when implicit calls are disabled), we don't have a way to say the value
// type is definitely array but it likely has no missing values. So, these will kill the definite value type as well, making
// it likely array, such that the array checks will have to be redone.
const bool useValueTypes = !IsLoopPrePass(); // Source value types are not guaranteed to be correct in a loop prepass
switch(instr->m_opcode)
{
case Js::OpCode::StElemC:
case Js::OpCode::StElemI_A:
case Js::OpCode::StElemI_A_Strict:
{
Assert(instr->GetDst());
if(!instr->GetDst()->IsIndirOpnd())
{
break;
}
const ValueType baseValueType =
useValueTypes ? instr->GetDst()->AsIndirOpnd()->GetBaseOpnd()->GetValueType() : ValueType::Uninitialized;
if(useValueTypes && baseValueType.IsNotArrayOrObjectWithArray())
{
break;
}
if(instr->IsProfiledInstr())
{
const Js::StElemInfo *const stElemInfo = instr->AsProfiledInstr()->u.stElemInfo;
if(doArraySegmentHoist && stElemInfo->LikelyStoresOutsideHeadSegmentBounds())
{
kills.SetKillsArrayHeadSegments();
kills.SetKillsArrayHeadSegmentLengths();
}
if(doArrayLengthHoist &&
!(useValueTypes && baseValueType.IsNotArray()) &&
stElemInfo->LikelyStoresOutsideArrayBounds())
{
kills.SetKillsArrayLengths();
}
}
break;
}
case Js::OpCode::DeleteElemI_A:
case Js::OpCode::DeleteElemIStrict_A:
Assert(instr->GetSrc1());
if(!instr->GetSrc1()->IsIndirOpnd() ||
(useValueTypes && instr->GetSrc1()->AsIndirOpnd()->GetBaseOpnd()->GetValueType().IsNotArrayOrObjectWithArray()))
{
break;
}
if(doArrayMissingValueCheckHoist)
{
kills.SetKillsArraysWithNoMissingValues();
}
if(doArraySegmentHoist)
{
kills.SetKillsArrayHeadSegmentLengths();
}
break;
case Js::OpCode::ConsoleScopedStFld:
case Js::OpCode::ConsoleScopedStFldStrict:
case Js::OpCode::ScopedStFld:
case Js::OpCode::ScopedStFldStrict:
case Js::OpCode::StFld:
case Js::OpCode::StFldStrict:
case Js::OpCode::StSuperFld:
case Js::OpCode::StSuperFldStrict:
{
Assert(instr->GetDst());
if(!doArraySegmentHoist && !doArrayLengthHoist)
{
break;
}
IR::SymOpnd *const symDst = instr->GetDst()->AsSymOpnd();
if(!symDst->IsPropertySymOpnd())
{
break;
}
IR::PropertySymOpnd *const dst = symDst->AsPropertySymOpnd();
if(dst->m_sym->AsPropertySym()->m_propertyId != Js::PropertyIds::length)
{
break;
}
if(useValueTypes && dst->GetPropertyOwnerValueType().IsNotArray())
{
// Setting the 'length' property of an object that is not an array, even if it has an internal array, does
// not kill the head segment or head segment length of any arrays.
break;
}
if(doArraySegmentHoist)
{
kills.SetKillsArrayHeadSegmentLengths();
}
if(doArrayLengthHoist)
{
kills.SetKillsArrayLengths();
}
break;
}
case Js::OpCode::InlineArrayPush:
{
Assert(instr->GetSrc2());
IR::Opnd *const arrayOpnd = instr->GetSrc1();
Assert(arrayOpnd);
const ValueType arrayValueType(arrayOpnd->GetValueType());
if(!arrayOpnd->IsRegOpnd() || (useValueTypes && arrayValueType.IsNotArrayOrObjectWithArray()))
{
break;
}
if(doArrayMissingValueCheckHoist)
{
kills.SetKillsArraysWithNoMissingValues();
}
if(doArraySegmentHoist)
{
kills.SetKillsArrayHeadSegments();
kills.SetKillsArrayHeadSegmentLengths();
}
if(doArrayLengthHoist && !(useValueTypes && arrayValueType.IsNotArray()))
{
kills.SetKillsArrayLengths();
}
// Don't kill native arrays if there is no mismatch between the array's type and the element's type.
if(doNativeArrayTypeSpec &&
!(useValueTypes && arrayValueType.IsNativeArray() &&
((arrayValueType.IsLikelyNativeIntArray() && instr->GetSrc2()->IsInt32()) ||
(arrayValueType.IsLikelyNativeFloatArray() && instr->GetSrc2()->IsFloat()))
) &&
!(useValueTypes && arrayValueType.IsNotNativeArray()))
{
kills.SetKillsNativeArrays();
}
break;
}
case Js::OpCode::InlineArrayPop:
{
IR::Opnd *const arrayOpnd = instr->GetSrc1();
Assert(arrayOpnd);
const ValueType arrayValueType(arrayOpnd->GetValueType());
if(!arrayOpnd->IsRegOpnd() || (useValueTypes && arrayValueType.IsNotArrayOrObjectWithArray()))
{
break;
}
if(doArraySegmentHoist)
{
kills.SetKillsArrayHeadSegmentLengths();
}
if(doArrayLengthHoist && !(useValueTypes && arrayValueType.IsNotArray()))
{
kills.SetKillsArrayLengths();
}
if(doArrayMissingValueCheckHoist && !(useValueTypes && arrayValueType.IsArray()))
{
kills.SetKillsObjectArraysWithNoMissingValues();
}
break;
}
case Js::OpCode::CallDirect:
{
Assert(instr->GetSrc1());
// Find the 'this' parameter and check if it's possible for it to be an array
IR::Opnd *const arrayOpnd = instr->FindCallArgumentOpnd(1);
Assert(arrayOpnd);
const ValueType arrayValueType(arrayOpnd->GetValueType());
if(!arrayOpnd->IsRegOpnd() || (useValueTypes && arrayValueType.IsNotArrayOrObjectWithArray()))
{
break;
}
const IR::JnHelperMethod helperMethod = instr->GetSrc1()->AsHelperCallOpnd()->m_fnHelper;
if(doArrayMissingValueCheckHoist)
{
switch(helperMethod)
{
case IR::HelperArray_Reverse:
case IR::HelperArray_Shift:
case IR::HelperArray_Splice:
case IR::HelperArray_Unshift:
kills.SetKillsArraysWithNoMissingValues();
break;
}
}
if(doArraySegmentHoist)
{
switch(helperMethod)
{
case IR::HelperArray_Reverse:
case IR::HelperArray_Shift:
case IR::HelperArray_Splice:
case IR::HelperArray_Unshift:
case IR::HelperArray_Concat:
kills.SetKillsArrayHeadSegments();
kills.SetKillsArrayHeadSegmentLengths();
break;
}
}
if(doArrayLengthHoist && !(useValueTypes && arrayValueType.IsNotArray()))
{
switch(helperMethod)
{
case IR::HelperArray_Shift:
case IR::HelperArray_Splice:
case IR::HelperArray_Unshift:
kills.SetKillsArrayLengths();
break;
}
}
if(doNativeArrayTypeSpec && !(useValueTypes && arrayValueType.IsNotNativeArray()))
{
switch(helperMethod)
{
case IR::HelperArray_Reverse:
case IR::HelperArray_Shift:
case IR::HelperArray_Slice:
// Currently not inlined.
//case IR::HelperArray_Sort:
case IR::HelperArray_Splice:
case IR::HelperArray_Unshift:
case IR::HelperArray_Concat:
kills.SetKillsNativeArrays();
break;
}
}
break;
}
case Js::OpCode::InitProto:
{
// Find the 'this' parameter and check if it's possible for it to be an array
IR::Opnd *const arrayOpnd = instr->GetSrc1();
Assert(arrayOpnd);
const ValueType arrayValueType(arrayOpnd->GetValueType());
if(!arrayOpnd->IsRegOpnd() || (useValueTypes && arrayValueType.IsNotArrayOrObjectWithArray()))
{
break;
}
if(doNativeArrayTypeSpec && !(useValueTypes && arrayValueType.IsNotNativeArray()))
{
kills.SetKillsNativeArrays();
}
break;
}
case Js::OpCode::NewClassProto:
Assert(instr->GetSrc1());
if (IR::AddrOpnd::IsEqualAddr(instr->GetSrc1(), (void*)func->GetScriptContextInfo()->GetObjectPrototypeAddr()))
{
// No extends operand, the proto parent is the Object prototype
break;
}
// Fall through
case Js::OpCode::NewScObjectNoCtor:
case Js::OpCode::NewScObjectNoCtorFull:
if(doNativeArrayTypeSpec)
{
// Class/object construction can make something a prototype
kills.SetKillsNativeArrays();
}
break;
}
return kills;
}
GlobOptBlockData const * GlobOpt::CurrentBlockData() const
{
return &this->currentBlock->globOptData;
}
GlobOptBlockData * GlobOpt::CurrentBlockData()
{
return &this->currentBlock->globOptData;
}
void GlobOpt::CommitCapturedValuesCandidate()
{
GlobOptBlockData * globOptData = CurrentBlockData();
globOptData->changedSyms->ClearAll();
if (!this->changedSymsAfterIncBailoutCandidate->IsEmpty())
{
//
// Some symbols are changed after the values for the current bailout have been
// captured (GlobOpt::CapturedValues). We need to restore such symbols as changed
// for the following incremental bailout construction, or we will miss capturing
// values for a later bailout.
//
// Swap changedSyms and changedSymsAfterIncBailoutCandidate,
// which is safe because both are allocated from this->alloc.
BVSparse<JitArenaAllocator> * tempBvSwap = globOptData->changedSyms;
globOptData->changedSyms = this->changedSymsAfterIncBailoutCandidate;
this->changedSymsAfterIncBailoutCandidate = tempBvSwap;
}
if (globOptData->capturedValues)
{
globOptData->capturedValues->DecrementRefCount();
}
globOptData->capturedValues = globOptData->capturedValuesCandidate;
// null out capturedValuesCandidate to stop tracking symbols change for it
globOptData->capturedValuesCandidate = nullptr;
}
bool
GlobOpt::IsOperationThatLikelyKillsJsArraysWithNoMissingValues(IR::Instr *const instr)
{
// StElem is profiled with information indicating whether it will likely create a missing value in the array. In that case,
// we prefer to kill the no-missing-values information in the value so that we don't bail out in a likely circumstance.
return
(instr->m_opcode == Js::OpCode::StElemI_A || instr->m_opcode == Js::OpCode::StElemI_A_Strict) &&
DoArrayMissingValueCheckHoist() &&
instr->IsProfiledInstr() &&
instr->AsProfiledInstr()->u.stElemInfo->LikelyCreatesMissingValue();
}
bool
GlobOpt::NeedBailOnImplicitCallForArrayCheckHoist(BasicBlock const * const block, const bool isForwardPass) const
{
Assert(block);
return isForwardPass && block->loop && block->loop->needImplicitCallBailoutChecksForJsArrayCheckHoist;
}
bool
GlobOpt::PrepareForIgnoringIntOverflow(IR::Instr *const instr)
{
Assert(instr);
const bool isBoundary = instr->m_opcode == Js::OpCode::NoIntOverflowBoundary;
// Update the instruction's "int overflow matters" flag based on whether we are currently allowing ignoring int overflows.
// Some operations convert their srcs to int32s; those can still ignore int overflow.
if(instr->ignoreIntOverflowInRange)
{
instr->ignoreIntOverflowInRange = !intOverflowCurrentlyMattersInRange || OpCodeAttr::IsInt32(instr->m_opcode);
}
if(!intOverflowDoesNotMatterRange)
{
Assert(intOverflowCurrentlyMattersInRange);
// There are no more ranges of instructions where int overflow does not matter, in this block.
return isBoundary;
}
if(instr == intOverflowDoesNotMatterRange->LastInstr())
{
Assert(isBoundary);
// Reached the last instruction in the range
intOverflowCurrentlyMattersInRange = true;
intOverflowDoesNotMatterRange = intOverflowDoesNotMatterRange->Next();
return isBoundary;
}
if(!intOverflowCurrentlyMattersInRange)
{
return isBoundary;
}
if(instr != intOverflowDoesNotMatterRange->FirstInstr())
{
// Have not reached the next range
return isBoundary;
}
Assert(isBoundary);
// This is the first instruction in a range of instructions where int overflow does not matter. There can be many inputs to
// instructions in the range, some of which are inputs to the range itself (that is, the values are not defined in the
// range). Ignoring int overflow is only valid for int operations, so we need to ensure that all inputs to the range are
// int (not "likely int") before ignoring any overflows in the range. Ensuring that a sym with a "likely int" value is an
// int requires a bail-out. These bail-out checks need to happen before any overflows are ignored, otherwise it's too late.
// The backward pass tracked all inputs into the range. Iterate over them and verify the values, and insert lossless
// conversions to int as necessary, before the first instruction in the range. If for any reason all values cannot be
// guaranteed to be ints, the optimization will be disabled for this range.
intOverflowCurrentlyMattersInRange = false;
{
BVSparse<JitArenaAllocator> tempBv1(tempAlloc);
BVSparse<JitArenaAllocator> tempBv2(tempAlloc);
{
// Just renaming the temp BVs for this section to indicate how they're used so that it makes sense
BVSparse<JitArenaAllocator> &symsToExclude = tempBv1;
BVSparse<JitArenaAllocator> &symsToInclude = tempBv2;
#if DBG_DUMP
SymID couldNotConvertSymId = 0;
#endif
FOREACH_BITSET_IN_SPARSEBV(id, intOverflowDoesNotMatterRange->SymsRequiredToBeInt())
{
Sym *const sym = func->m_symTable->Find(id);
Assert(sym);
// Some instructions with property syms are also tracked by the backward pass, and may be included in the range
// (LdSlot for instance). These property syms don't get their values until either copy-prop resolves a value for
// them, or a new value is created once the use of the property sym is reached. In either case, we're not that
// far yet, so we need to find the future value of the property sym by evaluating copy-prop in reverse.
Value *const value = sym->IsStackSym() ? CurrentBlockData()->FindValue(sym) : CurrentBlockData()->FindFuturePropertyValue(sym->AsPropertySym());
if(!value)
{
#if DBG_DUMP
couldNotConvertSymId = id;
#endif
intOverflowCurrentlyMattersInRange = true;
BREAK_BITSET_IN_SPARSEBV;
}
const bool isInt32OrUInt32Float =
value->GetValueInfo()->IsFloatConstant() &&
Js::JavascriptNumber::IsInt32OrUInt32(value->GetValueInfo()->AsFloatConstant()->FloatValue());
if(value->GetValueInfo()->IsInt() || isInt32OrUInt32Float)
{
if(!IsLoopPrePass())
{
// Input values that are already int can be excluded from int-specialization. We can treat unsigned
// int32 values as int32 values (ignoring the overflow), since the values will only be used inside the
// range where overflow does not matter.
symsToExclude.Set(sym->m_id);
}
continue;
}
if(!DoAggressiveIntTypeSpec() || !value->GetValueInfo()->IsLikelyInt())
{
// When aggressive int specialization is off, syms with "likely int" values cannot be forced to int since
// int bail-out checks are not allowed in that mode. Similarly, with aggressive int specialization on, it
// wouldn't make sense to force non-"likely int" values to int since it would almost guarantee a bail-out at
// runtime. In both cases, just disable ignoring overflow for this range.
#if DBG_DUMP
couldNotConvertSymId = id;
#endif
intOverflowCurrentlyMattersInRange = true;
BREAK_BITSET_IN_SPARSEBV;
}
if(IsLoopPrePass())
{
// The loop prepass does not modify bit-vectors. Since it doesn't add bail-out checks, it also does not need
// to specialize anything up-front. It only needs to be consistent in how it determines whether to allow
// ignoring overflow for a range, based on the values of inputs into the range.
continue;
}
// Since input syms are tracked in the backward pass, where there is no value tracking, it will not be aware of
// copy-prop. If a copy-prop sym is available, it will be used instead, so exclude the original sym and include
// the copy-prop sym for specialization.
StackSym *const copyPropSym = CurrentBlockData()->GetCopyPropSym(sym, value);
if(copyPropSym)
{
symsToExclude.Set(sym->m_id);
Assert(!symsToExclude.Test(copyPropSym->m_id));
const bool needsToBeLossless =
!intOverflowDoesNotMatterRange->SymsRequiredToBeLossyInt()->Test(sym->m_id);
if(intOverflowDoesNotMatterRange->SymsRequiredToBeInt()->Test(copyPropSym->m_id) ||
symsToInclude.TestAndSet(copyPropSym->m_id))
{
// The copy-prop sym is already included
if(needsToBeLossless)
{
// The original sym needs to be lossless, so make the copy-prop sym lossless as well.
intOverflowDoesNotMatterRange->SymsRequiredToBeLossyInt()->Clear(copyPropSym->m_id);
}
}
else if(!needsToBeLossless)
{
// The copy-prop sym was not included before, and the original sym can be lossy, so make it lossy.
intOverflowDoesNotMatterRange->SymsRequiredToBeLossyInt()->Set(copyPropSym->m_id);
}
}
else if(!sym->IsStackSym())
{
// Only stack syms can be converted to int, and copy-prop syms are stack syms. If a copy-prop sym was not
// found for the property sym, we can't ignore overflows in this range.
#if DBG_DUMP
couldNotConvertSymId = id;
#endif
intOverflowCurrentlyMattersInRange = true;
BREAK_BITSET_IN_SPARSEBV;
}
} NEXT_BITSET_IN_SPARSEBV;
if(intOverflowCurrentlyMattersInRange)
{
#if DBG_DUMP
if(PHASE_TRACE(Js::TrackCompoundedIntOverflowPhase, func) && !IsLoopPrePass())
{
char16 debugStringBuffer[MAX_FUNCTION_BODY_DEBUG_STRING_SIZE];
Output::Print(
_u("TrackCompoundedIntOverflow - Top function: %s (%s), Phase: %s, Block: %u, Disabled ignoring overflows\n"),
func->GetJITFunctionBody()->GetDisplayName(),
func->GetDebugNumberSet(debugStringBuffer),
Js::PhaseNames[Js::ForwardPhase],
currentBlock->GetBlockNum());
Output::Print(_u(" Input sym could not be turned into an int: %u\n"), couldNotConvertSymId);
Output::Print(_u(" First instr: "));
instr->m_next->Dump();
Output::Flush();
}
#endif
intOverflowDoesNotMatterRange = intOverflowDoesNotMatterRange->Next();
return isBoundary;
}
if(IsLoopPrePass())
{
return isBoundary;
}
// Update the syms to specialize after enumeration
intOverflowDoesNotMatterRange->SymsRequiredToBeInt()->Minus(&symsToExclude);
intOverflowDoesNotMatterRange->SymsRequiredToBeLossyInt()->Minus(&symsToExclude);
intOverflowDoesNotMatterRange->SymsRequiredToBeInt()->Or(&symsToInclude);
}
{
// Exclude syms that are already live as lossless int32, and exclude lossy conversions of syms that are already live
// as lossy int32.
// symsToExclude = liveInt32Syms - liveLossyInt32Syms // syms live as lossless int
// lossySymsToExclude = symsRequiredToBeLossyInt & liveLossyInt32Syms; // syms we want as lossy int that are already live as lossy int
// symsToExclude |= lossySymsToExclude
// symsRequiredToBeInt -= symsToExclude
// symsRequiredToBeLossyInt -= symsToExclude
BVSparse<JitArenaAllocator> &symsToExclude = tempBv1;
BVSparse<JitArenaAllocator> &lossySymsToExclude = tempBv2;
symsToExclude.Minus(CurrentBlockData()->liveInt32Syms, CurrentBlockData()->liveLossyInt32Syms);
lossySymsToExclude.And(
intOverflowDoesNotMatterRange->SymsRequiredToBeLossyInt(),
CurrentBlockData()->liveLossyInt32Syms);
symsToExclude.Or(&lossySymsToExclude);
intOverflowDoesNotMatterRange->SymsRequiredToBeInt()->Minus(&symsToExclude);
intOverflowDoesNotMatterRange->SymsRequiredToBeLossyInt()->Minus(&symsToExclude);
}
#if DBG
{
// Verify that the syms to be converted are live
// liveSyms = liveInt32Syms | liveFloat64Syms | liveVarSyms
// deadSymsRequiredToBeInt = symsRequiredToBeInt - liveSyms
BVSparse<JitArenaAllocator> &liveSyms = tempBv1;
BVSparse<JitArenaAllocator> &deadSymsRequiredToBeInt = tempBv2;
liveSyms.Or(CurrentBlockData()->liveInt32Syms, CurrentBlockData()->liveFloat64Syms);
liveSyms.Or(CurrentBlockData()->liveVarSyms);
deadSymsRequiredToBeInt.Minus(intOverflowDoesNotMatterRange->SymsRequiredToBeInt(), &liveSyms);
Assert(deadSymsRequiredToBeInt.IsEmpty());
}
#endif
}
// Int-specialize the syms before the first instruction of the range (the current instruction)
intOverflowDoesNotMatterRange->SymsRequiredToBeInt()->Minus(intOverflowDoesNotMatterRange->SymsRequiredToBeLossyInt());
#if DBG_DUMP
if(PHASE_TRACE(Js::TrackCompoundedIntOverflowPhase, func))
{
char16 debugStringBuffer[MAX_FUNCTION_BODY_DEBUG_STRING_SIZE];
Output::Print(
_u("TrackCompoundedIntOverflow - Top function: %s (%s), Phase: %s, Block: %u\n"),
func->GetJITFunctionBody()->GetDisplayName(),
func->GetDebugNumberSet(debugStringBuffer),
Js::PhaseNames[Js::ForwardPhase],
currentBlock->GetBlockNum());
Output::Print(_u(" Input syms to be int-specialized (lossless): "));
intOverflowDoesNotMatterRange->SymsRequiredToBeInt()->Dump();
Output::Print(_u(" Input syms to be converted to int (lossy): "));
intOverflowDoesNotMatterRange->SymsRequiredToBeLossyInt()->Dump();
Output::Print(_u(" First instr: "));
instr->m_next->Dump();
Output::Flush();
}
#endif
ToInt32(intOverflowDoesNotMatterRange->SymsRequiredToBeInt(), currentBlock, false /* lossy */, instr);
ToInt32(intOverflowDoesNotMatterRange->SymsRequiredToBeLossyInt(), currentBlock, true /* lossy */, instr);
return isBoundary;
}
void
GlobOpt::VerifyIntSpecForIgnoringIntOverflow(IR::Instr *const instr)
{
if(intOverflowCurrentlyMattersInRange || IsLoopPrePass())
{
return;
}
Assert(instr->m_opcode != Js::OpCode::Mul_I4 ||
(instr->m_opcode == Js::OpCode::Mul_I4 && !instr->ShouldCheckFor32BitOverflow() && instr->ShouldCheckForNon32BitOverflow() ));
// Instructions that are marked as "overflow doesn't matter" in the range must guarantee that they operate on int values and
// result in int values, for ignoring overflow to be valid. So, int-specialization is required for such instructions in the
// range. Ld_A is an exception because it only specializes if the src sym is available as a required specialized sym, and it
// doesn't generate bailouts or cause ignoring int overflow to be invalid.
// MULs are allowed to start a region and have BailOutInfo since they will bailout on non-32 bit overflow.
if(instr->m_opcode == Js::OpCode::Ld_A ||
((!instr->HasBailOutInfo() || instr->m_opcode == Js::OpCode::Mul_I4) &&
(!instr->GetDst() || instr->GetDst()->IsInt32()) &&
(!instr->GetSrc1() || instr->GetSrc1()->IsInt32()) &&
(!instr->GetSrc2() || instr->GetSrc2()->IsInt32())))
{
return;
}
if (!instr->HasBailOutInfo() && !instr->HasAnySideEffects())
{
return;
}
// This can happen for Neg_A if it needs to bail out on negative zero, and perhaps other cases as well. It's too late to fix
// the problem (overflows may already be ignored), so handle it by bailing out at compile-time and disabling tracking int
// overflow.
Assert(!func->IsTrackCompoundedIntOverflowDisabled());
if(PHASE_TRACE(Js::BailOutPhase, this->func))
{
char16 debugStringBuffer[MAX_FUNCTION_BODY_DEBUG_STRING_SIZE];
Output::Print(
_u("BailOut (compile-time): function: %s (%s) instr: "),
func->GetJITFunctionBody()->GetDisplayName(),
func->GetDebugNumberSet(debugStringBuffer));
#if DBG_DUMP
instr->Dump();
#else
Output::Print(_u("%s "), Js::OpCodeUtil::GetOpCodeName(instr->m_opcode));
#endif
Output::Print(_u("(overflow does not matter but could not int-spec or needed bailout)\n"));
Output::Flush();
}
if(func->IsTrackCompoundedIntOverflowDisabled())
{
// Tracking int overflows is already off for some reason. Prevent trying to rejit again because it won't help and the
// same thing will happen again and cause an infinite loop. Just abort jitting this function.
if(PHASE_TRACE(Js::BailOutPhase, this->func))
{
Output::Print(_u(" Aborting JIT because TrackIntOverflow is already off\n"));
Output::Flush();
}
throw Js::OperationAbortedException();
}
throw Js::RejitException(RejitReason::TrackIntOverflowDisabled);
}
// It makes lowering easier if it can assume that the first src is never a constant,
// at least for commutative operators. For non-commutative, just hoist the constant.
void
GlobOpt::PreLowerCanonicalize(IR::Instr *instr, Value **pSrc1Val, Value **pSrc2Val)
{
IR::Opnd *dst = instr->GetDst();
IR::Opnd *src1 = instr->GetSrc1();
IR::Opnd *src2 = instr->GetSrc2();
if (src1->IsImmediateOpnd())
{
// Swap for dst, src
}
else if (src2 && dst && src2->IsRegOpnd())
{
if (src2->GetIsDead() && !src1->GetIsDead() && !src1->IsEqual(dst))
{
// Swap if src2 is dead, as the reg can be reused for the dst for opEqs like on x86 (ADD r1, r2)
}
else if (src2->IsEqual(dst))
{
// Helps lowering of opEqs
}
else
{
return;
}
// Make sure we don't swap 2 srcs with valueOf calls.
if (OpCodeAttr::OpndHasImplicitCall(instr->m_opcode))
{
if (instr->IsBranchInstr())
{
if (!src1->GetValueType().IsPrimitive() || !src2->GetValueType().IsPrimitive())
{
return;
}
}
else if (!src1->GetValueType().IsPrimitive() && !src2->GetValueType().IsPrimitive())
{
return;
}
}
}
else
{
return;
}
Js::OpCode opcode = instr->m_opcode;
switch (opcode)
{
case Js::OpCode::And_A:
case Js::OpCode::Mul_A:
case Js::OpCode::Or_A:
case Js::OpCode::Xor_A:
case Js::OpCode::And_I4:
case Js::OpCode::Mul_I4:
case Js::OpCode::Or_I4:
case Js::OpCode::Xor_I4:
case Js::OpCode::Add_I4:
swap_srcs:
if (!instr->GetSrc2()->IsImmediateOpnd())
{
instr->m_opcode = opcode;
instr->SwapOpnds();
Value *tempVal = *pSrc1Val;
*pSrc1Val = *pSrc2Val;
*pSrc2Val = tempVal;
return;
}
break;
case Js::OpCode::BrSrEq_A:
case Js::OpCode::BrSrNotNeq_A:
case Js::OpCode::BrEq_I4:
goto swap_srcs;
case Js::OpCode::BrSrNeq_A:
case Js::OpCode::BrNeq_A:
case Js::OpCode::BrSrNotEq_A:
case Js::OpCode::BrNotEq_A:
case Js::OpCode::BrNeq_I4:
goto swap_srcs;
case Js::OpCode::BrGe_A:
opcode = Js::OpCode::BrLe_A;
goto swap_srcs;
case Js::OpCode::BrNotGe_A:
opcode = Js::OpCode::BrNotLe_A;
goto swap_srcs;
case Js::OpCode::BrGe_I4:
opcode = Js::OpCode::BrLe_I4;
goto swap_srcs;
case Js::OpCode::BrGt_A:
opcode = Js::OpCode::BrLt_A;
goto swap_srcs;
case Js::OpCode::BrNotGt_A:
opcode = Js::OpCode::BrNotLt_A;
goto swap_srcs;
case Js::OpCode::BrGt_I4:
opcode = Js::OpCode::BrLt_I4;
goto swap_srcs;
case Js::OpCode::BrLe_A:
opcode = Js::OpCode::BrGe_A;
goto swap_srcs;
case Js::OpCode::BrNotLe_A:
opcode = Js::OpCode::BrNotGe_A;
goto swap_srcs;
case Js::OpCode::BrLe_I4:
opcode = Js::OpCode::BrGe_I4;
goto swap_srcs;
case Js::OpCode::BrLt_A:
opcode = Js::OpCode::BrGt_A;
goto swap_srcs;
case Js::OpCode::BrNotLt_A:
opcode = Js::OpCode::BrNotGt_A;
goto swap_srcs;
case Js::OpCode::BrLt_I4:
opcode = Js::OpCode::BrGt_I4;
goto swap_srcs;
case Js::OpCode::BrEq_A:
case Js::OpCode::BrNotNeq_A:
case Js::OpCode::CmEq_A:
case Js::OpCode::CmNeq_A:
// this == "" not the same as "" == this...
if (!src1->IsImmediateOpnd() && (!src1->GetValueType().IsPrimitive() || !src2->GetValueType().IsPrimitive()))
{
return;
}
goto swap_srcs;
case Js::OpCode::CmGe_A:
if (!src1->IsImmediateOpnd() && (!src1->GetValueType().IsPrimitive() || !src2->GetValueType().IsPrimitive()))
{
return;
}
opcode = Js::OpCode::CmLe_A;
goto swap_srcs;
case Js::OpCode::CmGt_A:
if (!src1->IsImmediateOpnd() && (!src1->GetValueType().IsPrimitive() || !src2->GetValueType().IsPrimitive()))
{
return;
}
opcode = Js::OpCode::CmLt_A;
goto swap_srcs;
case Js::OpCode::CmLe_A:
if (!src1->IsImmediateOpnd() && (!src1->GetValueType().IsPrimitive() || !src2->GetValueType().IsPrimitive()))
{
return;
}
opcode = Js::OpCode::CmGe_A;
goto swap_srcs;
case Js::OpCode::CmLt_A:
if (!src1->IsImmediateOpnd() && (!src1->GetValueType().IsPrimitive() || !src2->GetValueType().IsPrimitive()))
{
return;
}
opcode = Js::OpCode::CmGt_A;
goto swap_srcs;
case Js::OpCode::CallI:
case Js::OpCode::CallIFixed:
case Js::OpCode::NewScObject:
case Js::OpCode::NewScObjectSpread:
case Js::OpCode::NewScObjArray:
case Js::OpCode::NewScObjArraySpread:
case Js::OpCode::NewScObjectNoCtor:
// Don't insert load to register if the function operand is a fixed function.
if (instr->HasFixedFunctionAddressTarget())
{
return;
}
break;
// Can't do add because <32 + "Hello"> isn't equal to <"Hello" + 32>
// Lower can do the swap. Other op-codes listed below don't need immediate source hoisting, as the fast paths handle it,
// or the lowering handles the hoisting.
case Js::OpCode::Add_A:
if (src1->IsFloat())
{
goto swap_srcs;
}
return;
case Js::OpCode::Sub_I4:
case Js::OpCode::Neg_I4:
case Js::OpCode::Not_I4:
case Js::OpCode::NewScFunc:
case Js::OpCode::NewScGenFunc:
case Js::OpCode::NewScFuncHomeObj:
case Js::OpCode::NewScGenFuncHomeObj:
case Js::OpCode::NewScArray:
case Js::OpCode::NewScIntArray:
case Js::OpCode::NewScFltArray:
case Js::OpCode::NewScArrayWithMissingValues:
case Js::OpCode::NewRegEx:
case Js::OpCode::Ld_A:
case Js::OpCode::Ld_I4:
case Js::OpCode::ThrowRuntimeError:
case Js::OpCode::TrapIfMinIntOverNegOne:
case Js::OpCode::TrapIfTruncOverflow:
case Js::OpCode::TrapIfZero:
case Js::OpCode::TrapIfUnalignedAccess:
case Js::OpCode::FromVar:
case Js::OpCode::Conv_Prim:
case Js::OpCode::Conv_Prim_Sat:
case Js::OpCode::LdC_A_I4:
case Js::OpCode::LdStr:
case Js::OpCode::InitFld:
case Js::OpCode::InitRootFld:
case Js::OpCode::StartCall:
case Js::OpCode::ArgOut_A:
case Js::OpCode::ArgOut_A_Inline:
case Js::OpCode::ArgOut_A_Dynamic:
case Js::OpCode::ArgOut_A_FromStackArgs:
case Js::OpCode::ArgOut_A_InlineBuiltIn:
case Js::OpCode::ArgOut_A_InlineSpecialized:
case Js::OpCode::ArgOut_A_SpreadArg:
case Js::OpCode::InlineeEnd:
case Js::OpCode::EndCallForPolymorphicInlinee:
case Js::OpCode::InlineeMetaArg:
case Js::OpCode::InlineBuiltInEnd:
case Js::OpCode::InlineNonTrackingBuiltInEnd:
case Js::OpCode::CallHelper:
case Js::OpCode::LdElemUndef:
case Js::OpCode::LdElemUndefScoped:
case Js::OpCode::RuntimeTypeError:
case Js::OpCode::RuntimeReferenceError:
case Js::OpCode::Ret:
case Js::OpCode::NewScObjectSimple:
case Js::OpCode::NewScObjectLiteral:
case Js::OpCode::StFld:
case Js::OpCode::StRootFld:
case Js::OpCode::StSlot:
case Js::OpCode::StSlotChkUndecl:
case Js::OpCode::StElemC:
case Js::OpCode::StArrSegElemC:
case Js::OpCode::StElemI_A:
case Js::OpCode::StElemI_A_Strict:
case Js::OpCode::CallDirect:
case Js::OpCode::BrNotHasSideEffects:
case Js::OpCode::NewConcatStrMulti:
case Js::OpCode::NewConcatStrMultiBE:
case Js::OpCode::ExtendArg_A:
case Js::OpCode::NewScopeSlots:
case Js::OpCode::NewScopeSlotsWithoutPropIds:
case Js::OpCode::NewStackScopeSlots:
case Js::OpCode::IsInst:
case Js::OpCode::BailOnEqual:
case Js::OpCode::BailOnNotEqual:
case Js::OpCode::StArrViewElem:
return;
}
if (!src1->IsImmediateOpnd())
{
return;
}
// The fast paths or lowering of the remaining instructions may not support handling immediate opnds for the first src. The
// immediate src1 is hoisted here into a separate instruction.
if (src1->IsIntConstOpnd())
{
IR::Instr *newInstr = instr->HoistSrc1(Js::OpCode::Ld_I4);
ToInt32Dst(newInstr, newInstr->GetDst()->AsRegOpnd(), this->currentBlock);
}
else if (src1->IsInt64ConstOpnd())
{
instr->HoistSrc1(Js::OpCode::Ld_I4);
}
else
{
instr->HoistSrc1(Js::OpCode::Ld_A);
}
src1 = instr->GetSrc1();
src1->AsRegOpnd()->m_sym->SetIsConst();
}
// Clear the ValueMap of the values invalidated by this instr.
void
GlobOpt::ProcessKills(IR::Instr *instr)
{
if (instr->m_opcode == Js::OpCode::Yield)
{
this->CurrentBlockData()->KillStateForGeneratorYield(instr);
}
this->ProcessFieldKills(instr);
this->ProcessValueKills(instr);
this->ProcessArrayValueKills(instr);
}
bool
GlobOpt::OptIsInvariant(IR::Opnd *src, BasicBlock *block, Loop *loop, Value *srcVal, bool isNotTypeSpecConv, bool allowNonPrimitives)
{
if(!loop->CanHoistInvariants())
{
return false;
}
Sym *sym;
switch(src->GetKind())
{
case IR::OpndKindAddr:
case IR::OpndKindFloatConst:
case IR::OpndKindIntConst:
return true;
case IR::OpndKindReg:
sym = src->AsRegOpnd()->m_sym;
break;
case IR::OpndKindSym:
sym = src->AsSymOpnd()->m_sym;
if (src->AsSymOpnd()->IsPropertySymOpnd())
{
if (src->AsSymOpnd()->AsPropertySymOpnd()->IsTypeChecked())
{
// We do not handle hoisting these yet: we might hoist this across the instr whose type check protects this one,
// and the dead-store pass could later remove the type check on that instr.
// For CheckFixedFld, there is no benefit to hoisting these if they don't have a type check, as they won't generate code.
return false;
}
}
break;
case IR::OpndKindHelperCall:
// Helper calls, like the private slot getter, can be invariant.
// Consider marking more math builtins as invariant?
return HelperMethodAttributes::IsInVariant(src->AsHelperCallOpnd()->m_fnHelper);
default:
return false;
}
return OptIsInvariant(sym, block, loop, srcVal, isNotTypeSpecConv, allowNonPrimitives);
}
bool
GlobOpt::OptIsInvariant(Sym *sym, BasicBlock *block, Loop *loop, Value *srcVal, bool isNotTypeSpecConv, bool allowNonPrimitives, Value **loopHeadValRef)
{
Value *localLoopHeadVal;
if(!loopHeadValRef)
{
loopHeadValRef = &localLoopHeadVal;
}
Value *&loopHeadVal = *loopHeadValRef;
loopHeadVal = nullptr;
if(!loop->CanHoistInvariants())
{
return false;
}
if (sym->IsStackSym())
{
if (sym->AsStackSym()->IsTypeSpec())
{
StackSym *varSym = sym->AsStackSym()->GetVarEquivSym(this->func);
// Make sure the int32/float64 version of this is available.
// Note: We could handle this by converting the src, but usually the
// conversion is hoistable if this is hoistable anyway.
// In some weird cases it may not be however, so we'll bail out.
if (sym->AsStackSym()->IsInt32())
{
Assert(block->globOptData.liveInt32Syms->Test(varSym->m_id));
if (!loop->landingPad->globOptData.liveInt32Syms->Test(varSym->m_id) ||
(loop->landingPad->globOptData.liveLossyInt32Syms->Test(varSym->m_id) &&
!block->globOptData.liveLossyInt32Syms->Test(varSym->m_id)))
{
// Either the int32 sym is not live in the landing pad, or it's lossy in the landing pad and the
// instruction's block is using the lossless version. In either case, the instruction cannot be hoisted
// without doing a conversion of this operand.
return false;
}
}
else if (sym->AsStackSym()->IsFloat64())
{
if (!loop->landingPad->globOptData.liveFloat64Syms->Test(varSym->m_id))
{
return false;
}
}
sym = sym->AsStackSym()->GetVarEquivSym(this->func);
}
else
{
// Make sure the var version of this is available.
// Note: We could handle this by converting the src, but usually the
// conversion is hoistable if this is hoistable anyway.
// In some weird cases it may not be however, so we'll bail out.
if (!loop->landingPad->globOptData.liveVarSyms->Test(sym->m_id))
{
return false;
}
}
}
else if (sym->IsPropertySym())
{
if (!loop->landingPad->globOptData.liveVarSyms->Test(sym->AsPropertySym()->m_stackSym->m_id))
{
return false;
}
}
else
{
return false;
}
// We rely on having a value.
if (srcVal == NULL)
{
return false;
}
// A symbol is invariant if its current value is the same as it was upon entering the loop.
loopHeadVal = loop->landingPad->globOptData.FindValue(sym);
if (loopHeadVal == NULL || loopHeadVal->GetValueNumber() != srcVal->GetValueNumber())
{
return false;
}
// Can't hoist non-primitives, unless we have safeguards against valueof/tostring. Additionally, we need to consider
// the value annotations on the source *before* the loop: if we hoist this instruction outside the loop, we can't
// necessarily rely on type annotations added (and enforced) earlier in the loop's body.
//
// It might look as though !loopHeadVal->GetValueInfo()->IsPrimitive() implies
// !loop->landingPad->globOptData.IsTypeSpecialized(sym), but it turns out that this is not always the case. We
// encountered a test case in which we had previously hoisted a FromVar (to float 64) instruction, but its bailout code was
// BailoutPrimitiveButString, rather than BailoutNumberOnly, which would have allowed us to conclude that the dest was
// definitely a float64. Instead, it was only *likely* a float64, causing IsPrimitive to return false.
if (!allowNonPrimitives && !loopHeadVal->GetValueInfo()->IsPrimitive() && !loop->landingPad->globOptData.IsTypeSpecialized(sym))
{
return false;
}
if(!isNotTypeSpecConv && loop->symsDefInLoop->Test(sym->m_id))
{
// Typically, a sym is considered invariant if it has the same value in the current block and in the loop landing pad.
// The sym may have had a different value earlier in the loop or on the back-edge, but as long as it's reassigned to its
// value outside the loop, it would be considered invariant in this block. Consider this case:
// s1 = s2[invariant]
// <loop start>
// s1 = s2[invariant]
// // s1 now has the same value as in the landing pad, and is considered invariant
// s1 += s3
// // s1 is not invariant here, or on the back-edge
// ++s3 // s3 is not invariant, so the add above cannot be hoisted
// <loop end>
//
// A problem occurs at the point of (s1 += s3) when:
// - At (s1 = s2) inside the loop, s1 was made to be the sym store of that value. This by itself is legal, because
// after that transfer, s1 and s2 have the same value.
// - (s1 += s3) is type-specialized but s1 is not specialized in the loop header. This happens when s1 is not
// specialized entering the loop, and since s1 is not used before it's defined in the loop, it's not specialized
// on back-edges.
//
// With that, at (s1 += s3), the conversion of s1 to the type-specialized version would be hoisted because s1 is
// invariant just before that instruction. Since this add is specialized, the specialized version of the sym is modified
// in the loop without a reassignment at (s1 = s2) inside the loop, and (s1 += s3) would then use an incorrect value of
// s1 (it would use the value of s1 from the previous loop iteration, instead of using the value of s2).
//
// The problem here, is that we cannot hoist the conversion of s1 into its specialized version across the assignment
// (s1 = s2) inside the loop. So for the purposes of type specialization, don't consider a sym invariant if it has a def
// inside the loop.
return false;
}
// For values with an int range, require additionally that the range is the same as in the landing pad, as the range may
// have been changed on this path based on branches, and int specialization and invariant hoisting may rely on the range
// being the same. For type spec conversions, only require that if the value is an int constant in the current block, that
// it is also an int constant with the same value in the landing pad. Other range differences don't matter for type spec.
IntConstantBounds srcIntConstantBounds, loopHeadIntConstantBounds;
if(srcVal->GetValueInfo()->TryGetIntConstantBounds(&srcIntConstantBounds) &&
(isNotTypeSpecConv || srcIntConstantBounds.IsConstant()) &&
(
!loopHeadVal->GetValueInfo()->TryGetIntConstantBounds(&loopHeadIntConstantBounds) ||
loopHeadIntConstantBounds.LowerBound() != srcIntConstantBounds.LowerBound() ||
loopHeadIntConstantBounds.UpperBound() != srcIntConstantBounds.UpperBound()
))
{
return false;
}
// Disabling this assert, because it does not hold true when we force specialize in the loop landing pad
//Assert((!loopHeadVal->GetValueInfo()->IsPrimitive()) || srcVal->GetValueInfo()->IsLikelyPrimitive());
return true;
}
bool
GlobOpt::OptIsInvariant(
IR::Instr *instr,
BasicBlock *block,
Loop *loop,
Value *src1Val,
Value *src2Val,
bool isNotTypeSpecConv,
const bool forceInvariantHoisting)
{
if (!loop->CanHoistInvariants())
{
return false;
}
if (!OpCodeAttr::CanCSE(instr->m_opcode))
{
return false;
}
bool allowNonPrimitives = !OpCodeAttr::OpndHasImplicitCall(instr->m_opcode);
switch(instr->m_opcode)
{
// Can't legally hoist these
case Js::OpCode::LdLen_A:
return false;
// Can't hoist BailOnNotStackArgs, as InlineArgsOptimization relies on this opcode
// to decide whether to throw a rejit exception or not.
case Js::OpCode::BailOnNotStackArgs:
return false;
// Usually not worth hoisting these
case Js::OpCode::Ld_A:
case Js::OpCode::Ld_I4:
case Js::OpCode::LdC_A_I4:
if(!forceInvariantHoisting)
{
return false;
}
break;
// Can't hoist these outside the function it's for. The LdArgumentsFromFrame for an inlinee depends on the inlinee meta arg
// that holds the arguments object, which is only initialized at the start of the inlinee. So, can't hoist this outside the
// inlinee.
case Js::OpCode::LdArgumentsFromFrame:
if(instr->m_func != loop->GetFunc())
{
return false;
}
break;
case Js::OpCode::FromVar:
if (instr->HasBailOutInfo())
{
allowNonPrimitives = true;
}
break;
case Js::OpCode::CheckObjType:
// Bug 11712101: If the operand is a field, ensure that its containing object type is invariant
// before hoisting -- that is, don't hoist a CheckObjType over a DeleteFld on that object.
// (CheckObjType only checks the operand and its immediate parent, so we don't need to go
// any farther up the object graph.)
Assert(instr->GetSrc1());
PropertySym *propertySym = instr->GetSrc1()->AsPropertySymOpnd()->GetPropertySym();
if (propertySym->HasObjectTypeSym()) {
StackSym *objectTypeSym = propertySym->GetObjectTypeSym();
if (!this->OptIsInvariant(objectTypeSym, block, loop, this->CurrentBlockData()->FindValue(objectTypeSym), true, true)) {
return false;
}
}
break;
}
IR::Opnd *dst = instr->GetDst();
if (dst && !dst->IsRegOpnd())
{
return false;
}
IR::Opnd *src1 = instr->GetSrc1();
if (src1)
{
if (!this->OptIsInvariant(src1, block, loop, src1Val, isNotTypeSpecConv, allowNonPrimitives))
{
return false;
}
IR::Opnd *src2 = instr->GetSrc2();
if (src2)
{
if (!this->OptIsInvariant(src2, block, loop, src2Val, isNotTypeSpecConv, allowNonPrimitives))
{
return false;
}
}
}
return true;
}
bool
GlobOpt::OptDstIsInvariant(IR::RegOpnd *dst)
{
StackSym *dstSym = dst->m_sym;
if (dstSym->IsTypeSpec())
{
// The type-specialized sym may be single def, but not the original...
dstSym = dstSym->GetVarEquivSym(this->func);
}
return (dstSym->m_isSingleDef);
}
void
GlobOpt::OptHoistUpdateValueType(
Loop* loop,
IR::Instr* instr,
IR::Opnd** srcOpndPtr /* All code paths that change src should update srcOpndPtr */,
Value* opndVal)
{
if (opndVal == nullptr || instr->m_opcode == Js::OpCode::FromVar || srcOpndPtr == nullptr || *srcOpndPtr == nullptr)
{
return;
}
IR::Opnd* srcOpnd = *srcOpndPtr;
Sym* opndSym = srcOpnd->GetSym();
if (opndSym)
{
BasicBlock* landingPad = loop->landingPad;
Value* opndValueInLandingPad = landingPad->globOptData.FindValue(opndSym);
Assert(opndVal->GetValueNumber() == opndValueInLandingPad->GetValueNumber());
ValueType opndValueTypeInLandingPad = opndValueInLandingPad->GetValueInfo()->Type();
if (srcOpnd->GetValueType() != opndValueTypeInLandingPad)
{
srcOpnd->SetValueType(opndValueTypeInLandingPad);
if (instr->m_opcode == Js::OpCode::SetConcatStrMultiItemBE)
{
Assert(!opndSym->IsPropertySym());
Assert(!opndValueTypeInLandingPad.IsString());
Assert(instr->GetDst());
IR::RegOpnd* strOpnd = IR::RegOpnd::New(TyVar, instr->m_func);
strOpnd->SetValueType(ValueType::String);
strOpnd->SetValueTypeFixed();
IR::Instr* convPrimStrInstr =
IR::Instr::New(Js::OpCode::Conv_PrimStr, strOpnd, srcOpnd->Use(instr->m_func), instr->m_func);
instr->ReplaceSrc(srcOpnd, strOpnd);
// Replace above will free srcOpnd, so reassign it
*srcOpndPtr = srcOpnd = reinterpret_cast<IR::Opnd *>(strOpnd);
// We add Conv_PrimStr in the landing pad, and since this instruction doesn't go through the checks in OptInstr, the bailout is never added.
// As we expand hoisting of instructions to new opcodes, we need a better framework to handle such cases.
if (IsImplicitCallBailOutCurrentlyNeeded(convPrimStrInstr, opndValueInLandingPad, nullptr, landingPad, landingPad->globOptData.liveFields->IsEmpty(), true, true))
{
EnsureBailTarget(loop);
loop->bailOutInfo->bailOutInstr->InsertBefore(convPrimStrInstr);
convPrimStrInstr = convPrimStrInstr->ConvertToBailOutInstr(convPrimStrInstr, IR::BailOutOnImplicitCallsPreOp, loop->bailOutInfo->bailOutOffset);
convPrimStrInstr->ReplaceBailOutInfo(loop->bailOutInfo);
}
else
{
if (loop->bailOutInfo->bailOutInstr)
{
loop->bailOutInfo->bailOutInstr->InsertBefore(convPrimStrInstr);
}
else
{
landingPad->InsertAfter(convPrimStrInstr);
}
}
// If we came here, opndSym can't be a PropertySym.
return;
}
}
if (opndSym->IsPropertySym())
{
// Also fix valueInfo on objPtr
StackSym* opndObjPtrSym = opndSym->AsPropertySym()->m_stackSym;
Value* opndObjPtrSymValInLandingPad = landingPad->globOptData.FindValue(opndObjPtrSym);
ValueInfo* opndObjPtrSymValueInfoInLandingPad = opndObjPtrSymValInLandingPad->GetValueInfo();
srcOpnd->AsSymOpnd()->SetPropertyOwnerValueType(opndObjPtrSymValueInfoInLandingPad->Type());
}
}
}
void
GlobOpt::OptHoistInvariant(
IR::Instr *instr,
BasicBlock *block,
Loop *loop,
Value *dstVal,
Value *const src1Val,
Value *const src2Val,
bool isNotTypeSpecConv,
bool lossy,
IR::BailOutKind bailoutKind)
{
BasicBlock *landingPad = loop->landingPad;
IR::Opnd* src1 = instr->GetSrc1();
if (src1)
{
// We are hoisting this instruction possibly past other uses, which might invalidate the last use info. Clear it.
OptHoistUpdateValueType(loop, instr, &src1, src1Val);
if (src1->IsRegOpnd())
{
src1->AsRegOpnd()->m_isTempLastUse = false;
}
IR::Opnd* src2 = instr->GetSrc2();
if (src2)
{
OptHoistUpdateValueType(loop, instr, &src2, src2Val);
if (src2->IsRegOpnd())
{
src2->AsRegOpnd()->m_isTempLastUse = false;
}
}
}
IR::RegOpnd *dst = instr->GetDst() ? instr->GetDst()->AsRegOpnd() : nullptr;
if(dst)
{
switch (instr->m_opcode)
{
case Js::OpCode::CmEq_I4:
case Js::OpCode::CmNeq_I4:
case Js::OpCode::CmLt_I4:
case Js::OpCode::CmLe_I4:
case Js::OpCode::CmGt_I4:
case Js::OpCode::CmGe_I4:
case Js::OpCode::CmUnLt_I4:
case Js::OpCode::CmUnLe_I4:
case Js::OpCode::CmUnGt_I4:
case Js::OpCode::CmUnGe_I4:
// These operations are a special case. They generate a lossy int value, and the var sym is initialized using
// Conv_Bool. A sym cannot be live only as a lossy int sym, the var needs to be live as well since the lossy int
// sym cannot be used to convert to var. We don't know however, whether the Conv_Bool will be hoisted. The idea
// currently is that the sym is only used on the path in which it is initialized inside the loop. So, don't
// hoist any liveness info for the dst.
if (!this->GetIsAsmJSFunc())
{
lossy = true;
}
break;
case Js::OpCode::FromVar:
{
StackSym* src1StackSym = IR::RegOpnd::TryGetStackSym(instr->GetSrc1());
if (instr->HasBailOutInfo())
{
IR::BailOutKind instrBailoutKind = instr->GetBailOutKind();
Assert(instrBailoutKind == IR::BailOutIntOnly ||
instrBailoutKind == IR::BailOutExpectingInteger ||
instrBailoutKind == IR::BailOutOnNotPrimitive ||
instrBailoutKind == IR::BailOutNumberOnly ||
instrBailoutKind == IR::BailOutPrimitiveButString);
}
else if (src1StackSym && bailoutKind != IR::BailOutInvalid)
{
// We may be hoisting FromVar from a region where it didn't need a bailout (src1 had a definite value type) to a region
// where it would. In such cases, the FromVar needs a bailout based on the value type of src1 in its new position.
Assert(!src1StackSym->IsTypeSpec());
Value* landingPadSrc1val = landingPad->globOptData.FindValue(src1StackSym);
Assert(src1Val->GetValueNumber() == landingPadSrc1val->GetValueNumber());
ValueInfo *src1ValueInfo = src1Val->GetValueInfo();
ValueInfo *landingPadSrc1ValueInfo = landingPadSrc1val->GetValueInfo();
IRType dstType = dst->GetType();
const auto AddBailOutToFromVar = [&]()
{
instr->GetSrc1()->SetValueType(landingPadSrc1val->GetValueInfo()->Type());
EnsureBailTarget(loop);
if (block->IsLandingPad())
{
instr = instr->ConvertToBailOutInstr(instr, bailoutKind, loop->bailOutInfo->bailOutOffset);
}
else
{
instr = instr->ConvertToBailOutInstr(instr, bailoutKind);
}
};
// If src1 has a definite type at the source position but not in the destination (landing pad),
// and the instruction has no bailout, we should put a bailout on the hoisted instruction.
if (dstType == TyInt32)
{
if (lossy)
{
if ((src1ValueInfo->IsPrimitive() || block->globOptData.IsTypeSpecialized(src1StackSym)) && // didn't need a lossy type spec bailout in the source block
(!landingPadSrc1ValueInfo->IsPrimitive() && !landingPad->globOptData.IsTypeSpecialized(src1StackSym))) // needs a lossy type spec bailout in the landing pad
{
bailoutKind = IR::BailOutOnNotPrimitive;
AddBailOutToFromVar();
}
}
else if (src1ValueInfo->IsInt() && !landingPadSrc1ValueInfo->IsInt())
{
AddBailOutToFromVar();
}
}
else if ((dstType == TyFloat64 && src1ValueInfo->IsNumber() && !landingPadSrc1ValueInfo->IsNumber()))
{
AddBailOutToFromVar();
}
}
break;
}
}
if (dstVal == NULL)
{
dstVal = this->NewGenericValue(ValueType::Uninitialized, dst);
}
// ToVar/FromVar don't need a new dst because their dst has to be invariant if their src is invariant.
bool dstDoesntNeedLoad = (!isNotTypeSpecConv && instr->m_opcode != Js::OpCode::LdC_A_I4);
StackSym *varSym = dst->m_sym;
if (varSym->IsTypeSpec())
{
varSym = varSym->GetVarEquivSym(this->func);
}
Value *const landingPadDstVal = loop->landingPad->globOptData.FindValue(varSym);
if(landingPadDstVal
? dstVal->GetValueNumber() != landingPadDstVal->GetValueNumber()
: loop->symsDefInLoop->Test(varSym->m_id))
{
// We need a temp for FromVar/ToVar if dst changes in the loop.
dstDoesntNeedLoad = false;
}
if (!dstDoesntNeedLoad && this->OptDstIsInvariant(dst) == false)
{
// Keep dst in place, hoist instr using a new dst.
instr->UnlinkDst();
// Set type specialization info correctly for this new sym
StackSym *copyVarSym;
IR::RegOpnd *copyReg;
if (dst->m_sym->IsTypeSpec())
{
copyVarSym = StackSym::New(TyVar, instr->m_func);
StackSym *copySym = copyVarSym;
if (dst->m_sym->IsInt32())
{
if(lossy)
{
// The new sym would only be live as a lossy int since we're only hoisting the store to the int version
// of the sym, and cannot be converted to var. It is not legal to have a sym only live as a lossy int,
// so don't update liveness info for this sym.
}
else
{
block->globOptData.liveInt32Syms->Set(copyVarSym->m_id);
}
copySym = copySym->GetInt32EquivSym(instr->m_func);
}
else if (dst->m_sym->IsFloat64())
{
block->globOptData.liveFloat64Syms->Set(copyVarSym->m_id);
copySym = copySym->GetFloat64EquivSym(instr->m_func);
}
copyReg = IR::RegOpnd::New(copySym, copySym->GetType(), instr->m_func);
}
else
{
copyReg = IR::RegOpnd::New(dst->GetType(), instr->m_func);
copyVarSym = copyReg->m_sym;
block->globOptData.liveVarSyms->Set(copyVarSym->m_id);
}
copyReg->SetValueType(dst->GetValueType());
IR::Instr *copyInstr = IR::Instr::New(Js::OpCode::Ld_A, dst, copyReg, instr->m_func);
copyInstr->SetByteCodeOffset(instr);
instr->SetDst(copyReg);
instr->InsertBefore(copyInstr);
dst->m_sym->m_mayNotBeTempLastUse = true;
if (instr->GetSrc1() && instr->GetSrc1()->IsImmediateOpnd())
{
// Propagate IsIntConst if appropriate
switch(instr->m_opcode)
{
case Js::OpCode::Ld_A:
case Js::OpCode::Ld_I4:
case Js::OpCode::LdC_A_I4:
copyReg->m_sym->SetIsConst();
break;
}
}
ValueInfo *dstValueInfo = dstVal->GetValueInfo();
if((!dstValueInfo->GetSymStore() || dstValueInfo->GetSymStore() == varSym) && !lossy)
{
// The destination's value may have been transferred from one of the invariant sources, in which case we should
// keep the sym store intact, as that sym will likely have a better lifetime than this new copy sym. For
// instance, if we're inside a conditioned block, because we don't make the copy sym live and set its value in
// all preceding blocks, this sym would not be live after exiting this block, causing this value to not
// participate in copy-prop after this block.
this->SetSymStoreDirect(dstValueInfo, copyVarSym);
}
block->globOptData.InsertNewValue(dstVal, copyReg);
dst = copyReg;
}
}
// Move to landing pad
block->UnlinkInstr(instr);
if (loop->bailOutInfo->bailOutInstr)
{
loop->bailOutInfo->bailOutInstr->InsertBefore(instr);
}
else
{
landingPad->InsertAfter(instr);
}
GlobOpt::MarkNonByteCodeUsed(instr);
if (instr->HasBailOutInfo() || instr->HasAuxBailOut())
{
Assert(loop->bailOutInfo);
EnsureBailTarget(loop);
// Copy bailout info of loop top.
instr->ReplaceBailOutInfo(loop->bailOutInfo);
}
if(!dst)
{
return;
}
// The bailout info's liveness for the dst sym is not updated in loop landing pads because bailout instructions previously
// hoisted into the loop's landing pad may bail out before the current type of the dst sym became live (perhaps due to this
// instruction). Since the landing pad will have a shared bailout point, the bailout info cannot assume that the current
// type of the dst sym was live during every bailout hoisted into the landing pad.
StackSym *const dstSym = dst->m_sym;
StackSym *const dstVarSym = dstSym->IsTypeSpec() ? dstSym->GetVarEquivSym(nullptr) : dstSym;
Assert(dstVarSym);
if(isNotTypeSpecConv || !loop->landingPad->globOptData.IsLive(dstVarSym))
{
// A new dst is being hoisted, or the same single-def dst that would not be live before this block. So, make it live and
// update the value info with the same value info in this block.
if(lossy)
{
// This is a lossy conversion to int. The instruction was given a new dst specifically for hoisting, so this new dst
// will not be live as a var before this block. A sym cannot be live only as a lossy int sym, the var needs to be
// live as well since the lossy int sym cannot be used to convert to var. Since the var version of the sym is not
// going to be initialized, don't hoist any liveness info for the dst. The sym is only going to be used on the path
// in which it is initialized inside the loop.
Assert(dstSym->IsTypeSpec());
Assert(dstSym->IsInt32());
return;
}
// Check if the dst value was transferred from the src. If so, the value transfer needs to be replicated.
bool isTransfer = dstVal == src1Val;
StackSym *transferValueOfSym = nullptr;
if(isTransfer)
{
Assert(instr->GetSrc1());
if(instr->GetSrc1()->IsRegOpnd())
{
StackSym *src1Sym = instr->GetSrc1()->AsRegOpnd()->m_sym;
if(src1Sym->IsTypeSpec())
{
src1Sym = src1Sym->GetVarEquivSym(nullptr);
Assert(src1Sym);
}
if(dstVal == block->globOptData.FindValue(src1Sym))
{
transferValueOfSym = src1Sym;
}
}
}
// SIMD_JS
if (instr->m_opcode == Js::OpCode::ExtendArg_A)
{
// Check if we should have CSE'ed this EA
Assert(instr->GetSrc1());
// If the dstVal symstore is not the dst itself, then we copied the Value from another expression.
if (dstVal->GetValueInfo()->GetSymStore() != instr->GetDst()->GetStackSym())
{
isTransfer = true;
transferValueOfSym = dstVal->GetValueInfo()->GetSymStore()->AsStackSym();
}
}
const ValueNumber dstValueNumber = dstVal->GetValueNumber();
ValueNumber dstNewValueNumber = InvalidValueNumber;
for(InvariantBlockBackwardIterator it(this, block, loop->landingPad, nullptr); it.IsValid(); it.MoveNext())
{
BasicBlock *const hoistBlock = it.Block();
GlobOptBlockData &hoistBlockData = hoistBlock->globOptData;
Assert(!hoistBlockData.IsLive(dstVarSym));
hoistBlockData.MakeLive(dstSym, lossy);
Value *newDstValue;
do
{
if(isTransfer)
{
if(transferValueOfSym)
{
newDstValue = hoistBlockData.FindValue(transferValueOfSym);
if(newDstValue && newDstValue->GetValueNumber() == dstValueNumber)
{
break;
}
}
// It's a transfer, but we don't have a sym whose value number matches in the target block. Use a new value
// number since we don't know if there is already a value with the current number for the target block.
if(dstNewValueNumber == InvalidValueNumber)
{
dstNewValueNumber = NewValueNumber();
}
newDstValue = CopyValue(dstVal, dstNewValueNumber);
break;
}
newDstValue = CopyValue(dstVal, dstValueNumber);
} while(false);
hoistBlockData.SetValue(newDstValue, dstVarSym);
}
return;
}
#if DBG
if(instr->GetSrc1()->IsRegOpnd()) // Type spec conversion may load a constant into a dst sym
{
StackSym *const srcSym = instr->GetSrc1()->AsRegOpnd()->m_sym;
Assert(srcSym != dstSym); // Type spec conversion must be changing the type, so the syms must be different
StackSym *const srcVarSym = srcSym->IsTypeSpec() ? srcSym->GetVarEquivSym(nullptr) : srcSym;
Assert(srcVarSym == dstVarSym); // Type spec conversion must be between variants of the same var sym
}
#endif
bool changeValueType = false, changeValueTypeToInt = false;
if(dstSym->IsTypeSpec())
{
if(dst->IsInt32())
{
if(!lossy)
{
Assert(
!instr->HasBailOutInfo() ||
instr->GetBailOutKind() == IR::BailOutIntOnly ||
instr->GetBailOutKind() == IR::BailOutExpectingInteger);
changeValueType = changeValueTypeToInt = true;
}
}
else if (dst->IsFloat64())
{
if(instr->HasBailOutInfo() && instr->GetBailOutKind() == IR::BailOutNumberOnly)
{
changeValueType = true;
}
}
}
ValueInfo *previousValueInfoBeforeUpdate = nullptr, *previousValueInfoAfterUpdate = nullptr;
for(InvariantBlockBackwardIterator it(
this,
block,
loop->landingPad,
dstVarSym,
dstVal->GetValueNumber());
it.IsValid();
it.MoveNext())
{
BasicBlock *const hoistBlock = it.Block();
GlobOptBlockData &hoistBlockData = hoistBlock->globOptData;
#if DBG
// TODO: There are some odd cases with field hoisting where the sym is invariant in only part of the loop and the info
// does not flow through all blocks. Un-comment the verification below after PRE replaces field hoisting.
//// Verify that the src sym is live as the required type, and that the conversion is valid
//Assert(IsLive(dstVarSym, &hoistBlockData));
//if(instr->GetSrc1()->IsRegOpnd())
//{
// IR::RegOpnd *const src = instr->GetSrc1()->AsRegOpnd();
// StackSym *const srcSym = instr->GetSrc1()->AsRegOpnd()->m_sym;
// if(srcSym->IsTypeSpec())
// {
// if(src->IsInt32())
// {
// Assert(hoistBlockData.liveInt32Syms->Test(dstVarSym->m_id));
// Assert(!hoistBlockData.liveLossyInt32Syms->Test(dstVarSym->m_id)); // shouldn't try to convert a lossy int32 to anything
// }
// else
// {
// Assert(src->IsFloat64());
// Assert(hoistBlockData.liveFloat64Syms->Test(dstVarSym->m_id));
// if(dstSym->IsTypeSpec() && dst->IsInt32())
// {
// Assert(lossy); // shouldn't try to do a lossless conversion from float64 to int32
// }
// }
// }
// else
// {
// Assert(hoistBlockData.liveVarSyms->Test(dstVarSym->m_id));
// }
//}
//if(dstSym->IsTypeSpec() && dst->IsInt32())
//{
// // If the sym is already specialized as required in the block to which we are attempting to hoist the conversion,
// // that info should have flowed into this block
// if(lossy)
// {
// Assert(!hoistBlockData.liveInt32Syms->Test(dstVarSym->m_id));
// }
// else
// {
// Assert(!IsInt32TypeSpecialized(dstVarSym, hoistBlock));
// }
//}
#endif
hoistBlockData.MakeLive(dstSym, lossy);
if(!changeValueType)
{
continue;
}
Value *const hoistBlockValue = it.InvariantSymValue();
ValueInfo *const hoistBlockValueInfo = hoistBlockValue->GetValueInfo();
if(hoistBlockValueInfo == previousValueInfoBeforeUpdate)
{
if(hoistBlockValueInfo != previousValueInfoAfterUpdate)
{
HoistInvariantValueInfo(previousValueInfoAfterUpdate, hoistBlockValue, hoistBlock);
}
}
else
{
previousValueInfoBeforeUpdate = hoistBlockValueInfo;
ValueInfo *const newValueInfo =
changeValueTypeToInt
? hoistBlockValueInfo->SpecializeToInt32(alloc)
: hoistBlockValueInfo->SpecializeToFloat64(alloc);
previousValueInfoAfterUpdate = newValueInfo;
ChangeValueInfo(changeValueTypeToInt ? nullptr : hoistBlock, hoistBlockValue, newValueInfo);
}
}
}
bool
GlobOpt::TryHoistInvariant(
IR::Instr *instr,
BasicBlock *block,
Value *dstVal,
Value *src1Val,
Value *src2Val,
bool isNotTypeSpecConv,
const bool lossy,
const bool forceInvariantHoisting,
IR::BailOutKind bailoutKind)
{
Assert(!this->IsLoopPrePass());
if (OptIsInvariant(instr, block, block->loop, src1Val, src2Val, isNotTypeSpecConv, forceInvariantHoisting))
{
#if DBG
if (Js::Configuration::Global.flags.Trace.IsEnabled(Js::InvariantsPhase, this->func->GetSourceContextId(), this->func->GetLocalFunctionId()))
{
Output::Print(_u(" **** INVARIANT *** "));
instr->Dump();
}
#endif
#if ENABLE_DEBUG_CONFIG_OPTIONS
if (Js::Configuration::Global.flags.TestTrace.IsEnabled(Js::InvariantsPhase))
{
Output::Print(_u(" **** INVARIANT *** "));
Output::Print(_u("%s \n"), Js::OpCodeUtil::GetOpCodeName(instr->m_opcode));
}
#endif
Loop *loop = block->loop;
// Try hoisting out to the outermost loop in which the instruction is invariant
while (loop->parent && OptIsInvariant(instr, block, loop->parent, src1Val, src2Val, isNotTypeSpecConv, forceInvariantHoisting))
{
loop = loop->parent;
}
// Record the byte code use here since we are going to move this instruction up
if (isNotTypeSpecConv)
{
InsertNoImplicitCallUses(instr);
this->CaptureByteCodeSymUses(instr);
this->InsertByteCodeUses(instr, true);
}
#if DBG
else
{
PropertySym *propertySymUse = NULL;
NoRecoverMemoryJitArenaAllocator tempAllocator(_u("BE-GlobOpt-Temp"), this->alloc->GetPageAllocator(), Js::Throw::OutOfMemory);
BVSparse<JitArenaAllocator> * tempByteCodeUse = JitAnew(&tempAllocator, BVSparse<JitArenaAllocator>, &tempAllocator);
GlobOpt::TrackByteCodeSymUsed(instr, tempByteCodeUse, &propertySymUse);
Assert(tempByteCodeUse->Count() == 0 && propertySymUse == NULL);
}
#endif
OptHoistInvariant(instr, block, loop, dstVal, src1Val, src2Val, isNotTypeSpecConv, lossy, bailoutKind);
return true;
}
return false;
}
InvariantBlockBackwardIterator::InvariantBlockBackwardIterator(
GlobOpt *const globOpt,
BasicBlock *const exclusiveBeginBlock,
BasicBlock *const inclusiveEndBlock,
StackSym *const invariantSym,
const ValueNumber invariantSymValueNumber,
bool followFlow)
: globOpt(globOpt),
exclusiveEndBlock(inclusiveEndBlock->prev),
invariantSym(invariantSym),
invariantSymValueNumber(invariantSymValueNumber),
block(exclusiveBeginBlock),
blockBV(globOpt->tempAlloc),
followFlow(followFlow)
#if DBG
,
inclusiveEndBlock(inclusiveEndBlock)
#endif
{
Assert(exclusiveBeginBlock);
Assert(inclusiveEndBlock);
Assert(!inclusiveEndBlock->isDeleted);
Assert(exclusiveBeginBlock != inclusiveEndBlock);
Assert(!invariantSym == (invariantSymValueNumber == InvalidValueNumber));
MoveNext();
}
bool
InvariantBlockBackwardIterator::IsValid() const
{
return block != exclusiveEndBlock;
}
void
InvariantBlockBackwardIterator::MoveNext()
{
Assert(IsValid());
while(true)
{
#if DBG
BasicBlock *const previouslyIteratedBlock = block;
#endif
block = block->prev;
if(!IsValid())
{
Assert(previouslyIteratedBlock == inclusiveEndBlock);
break;
}
if (!this->UpdatePredBlockBV())
{
continue;
}
if(block->isDeleted)
{
continue;
}
if(!block->globOptData.HasData())
{
// This block's info has already been merged with all of its successors
continue;
}
if(!invariantSym)
{
break;
}
invariantSymValue = block->globOptData.FindValue(invariantSym);
if(!invariantSymValue || invariantSymValue->GetValueNumber() != invariantSymValueNumber)
{
// BailOnNoProfile and throw blocks are not moved outside loops. A sym table cleanup on these paths may delete the
// values. Field hoisting also has some odd cases where the hoisted stack sym is invariant in only part of the loop.
continue;
}
break;
}
}
bool
InvariantBlockBackwardIterator::UpdatePredBlockBV()
{
if (!this->followFlow)
{
return true;
}
// Track blocks we've visited to ensure that we only iterate over predecessor blocks
if (!this->blockBV.IsEmpty() && !this->blockBV.Test(this->block->GetBlockNum()))
{
return false;
}
FOREACH_SLISTBASECOUNTED_ENTRY(FlowEdge*, edge, this->block->GetPredList())
{
this->blockBV.Set(edge->GetPred()->GetBlockNum());
} NEXT_SLISTBASECOUNTED_ENTRY;
return true;
}
BasicBlock *
InvariantBlockBackwardIterator::Block() const
{
Assert(IsValid());
return block;
}
Value *
InvariantBlockBackwardIterator::InvariantSymValue() const
{
Assert(IsValid());
Assert(invariantSym);
return invariantSymValue;
}
void
GlobOpt::HoistInvariantValueInfo(
ValueInfo *const invariantValueInfoToHoist,
Value *const valueToUpdate,
BasicBlock *const targetBlock)
{
Assert(invariantValueInfoToHoist);
Assert(valueToUpdate);
Assert(targetBlock);
// Why are we trying to change the value type of the type sym value? Asserting here to make sure we don't deep copy the type sym's value info.
Assert(!invariantValueInfoToHoist->IsJsType());
Sym *const symStore = valueToUpdate->GetValueInfo()->GetSymStore();
ValueInfo *newValueInfo;
if(invariantValueInfoToHoist->GetSymStore() == symStore)
{
newValueInfo = invariantValueInfoToHoist;
}
else
{
newValueInfo = invariantValueInfoToHoist->Copy(alloc);
this->SetSymStoreDirect(newValueInfo, symStore);
}
ChangeValueInfo(targetBlock, valueToUpdate, newValueInfo, true);
}
// static
bool
GlobOpt::DoInlineArgsOpt(Func const * func)
{
Func const * topFunc = func->GetTopFunc();
Assert(topFunc != func);
bool doInlineArgsOpt =
!PHASE_OFF(Js::InlineArgsOptPhase, topFunc) &&
!func->GetHasCalls() &&
!func->GetHasUnoptimizedArgumentsAccess() &&
func->m_canDoInlineArgsOpt;
return doInlineArgsOpt;
}
bool
GlobOpt::IsSwitchOptEnabled(Func const * func)
{
Assert(func->IsTopFunc());
return !PHASE_OFF(Js::SwitchOptPhase, func) && !func->IsSwitchOptDisabled() && func->DoGlobOpt();
}
bool
GlobOpt::IsSwitchOptEnabledForIntTypeSpec(Func const * func)
{
return IsSwitchOptEnabled(func) && !IsTypeSpecPhaseOff(func) && DoAggressiveIntTypeSpec(func);
}
bool
GlobOpt::DoConstFold() const
{
return !PHASE_OFF(Js::ConstFoldPhase, func);
}
bool
GlobOpt::IsTypeSpecPhaseOff(Func const *func)
{
return PHASE_OFF(Js::TypeSpecPhase, func) || func->IsJitInDebugMode();
}
bool
GlobOpt::DoTypeSpec() const
{
return doTypeSpec;
}
bool
GlobOpt::DoAggressiveIntTypeSpec(Func const * func)
{
return
!PHASE_OFF(Js::AggressiveIntTypeSpecPhase, func) &&
!IsTypeSpecPhaseOff(func) &&
!func->IsAggressiveIntTypeSpecDisabled();
}
bool
GlobOpt::DoAggressiveIntTypeSpec() const
{
return doAggressiveIntTypeSpec;
}
bool
GlobOpt::DoAggressiveMulIntTypeSpec() const
{
return doAggressiveMulIntTypeSpec;
}
bool
GlobOpt::DoDivIntTypeSpec() const
{
return doDivIntTypeSpec;
}
// static
bool
GlobOpt::DoLossyIntTypeSpec(Func const * func)
{
return
!PHASE_OFF(Js::LossyIntTypeSpecPhase, func) &&
!IsTypeSpecPhaseOff(func) &&
(!func->HasProfileInfo() || !func->GetReadOnlyProfileInfo()->IsLossyIntTypeSpecDisabled());
}
bool
GlobOpt::DoLossyIntTypeSpec() const
{
return doLossyIntTypeSpec;
}
// static
bool
GlobOpt::DoFloatTypeSpec(Func const * func)
{
return
!PHASE_OFF(Js::FloatTypeSpecPhase, func) &&
!IsTypeSpecPhaseOff(func) &&
(!func->HasProfileInfo() || !func->GetReadOnlyProfileInfo()->IsFloatTypeSpecDisabled()) &&
AutoSystemInfo::Data.SSE2Available();
}
bool
GlobOpt::DoFloatTypeSpec() const
{
return doFloatTypeSpec;
}
bool
GlobOpt::DoStringTypeSpec(Func const * func)
{
return !PHASE_OFF(Js::StringTypeSpecPhase, func) && !IsTypeSpecPhaseOff(func);
}
// static
bool
GlobOpt::DoTypedArrayTypeSpec(Func const * func)
{
return !PHASE_OFF(Js::TypedArrayTypeSpecPhase, func) &&
!IsTypeSpecPhaseOff(func) &&
(!func->HasProfileInfo() || !func->GetReadOnlyProfileInfo()->IsTypedArrayTypeSpecDisabled(func->IsLoopBody()))
#if defined(_M_IX86)
&& AutoSystemInfo::Data.SSE2Available()
#endif
;
}
// static
bool
GlobOpt::DoNativeArrayTypeSpec(Func const * func)
{
return !PHASE_OFF(Js::NativeArrayPhase, func) &&
!IsTypeSpecPhaseOff(func)
#if defined(_M_IX86)
&& AutoSystemInfo::Data.SSE2Available()
#endif
;
}
bool
GlobOpt::DoArrayCheckHoist(Func const * const func)
{
Assert(func->IsTopFunc());
return
!PHASE_OFF(Js::ArrayCheckHoistPhase, func) &&
!func->IsArrayCheckHoistDisabled() &&
!func->IsJitInDebugMode(); // StElemI fast path is not allowed when in debug mode, so it cannot have bailout
}
bool
GlobOpt::DoArrayCheckHoist() const
{
return doArrayCheckHoist;
}
bool
GlobOpt::DoArrayCheckHoist(const ValueType baseValueType, Loop* loop, IR::Instr const * const instr) const
{
if(!DoArrayCheckHoist() || (instr && !IsLoopPrePass() && instr->DoStackArgsOpt()))
{
return false;
}
// This includes typed arrays, but not virtual typed arrays, whose vtable can change if the buffer goes away.
// Note that in the virtual case the vtable check is the only way to catch this, since there's no bound check.
if(!(baseValueType.IsLikelyArrayOrObjectWithArray() || baseValueType.IsLikelyOptimizedVirtualTypedArray()) ||
(loop ? ImplicitCallFlagsAllowOpts(loop) : ImplicitCallFlagsAllowOpts(func)))
{
return true;
}
// The function or loop does not allow disabling implicit calls, which is required to eliminate redundant JS array checks
#if DBG_DUMP
if((((loop ? loop->GetImplicitCallFlags() : func->m_fg->implicitCallFlags) & ~Js::ImplicitCall_External) == 0) &&
Js::Configuration::Global.flags.Trace.IsEnabled(Js::HostOptPhase))
{
Output::Print(_u("DoArrayCheckHoist disabled for JS arrays because of external: "));
func->DumpFullFunctionName();
Output::Print(_u("\n"));
Output::Flush();
}
#endif
return false;
}
bool
GlobOpt::DoArrayMissingValueCheckHoist(Func const * const func)
{
return
DoArrayCheckHoist(func) &&
!PHASE_OFF(Js::ArrayMissingValueCheckHoistPhase, func) &&
(!func->HasProfileInfo() || !func->GetReadOnlyProfileInfo()->IsArrayMissingValueCheckHoistDisabled(func->IsLoopBody()));
}
bool
GlobOpt::DoArrayMissingValueCheckHoist() const
{
return doArrayMissingValueCheckHoist;
}
bool
GlobOpt::DoArraySegmentHoist(const ValueType baseValueType, Func const * const func)
{
Assert(baseValueType.IsLikelyAnyOptimizedArray());
if(!DoArrayCheckHoist(func) || PHASE_OFF(Js::ArraySegmentHoistPhase, func))
{
return false;
}
if(!baseValueType.IsLikelyArrayOrObjectWithArray())
{
return true;
}
return
!PHASE_OFF(Js::JsArraySegmentHoistPhase, func) &&
(!func->HasProfileInfo() || !func->GetReadOnlyProfileInfo()->IsJsArraySegmentHoistDisabled(func->IsLoopBody()));
}
bool
GlobOpt::DoArraySegmentHoist(const ValueType baseValueType) const
{
Assert(baseValueType.IsLikelyAnyOptimizedArray());
return baseValueType.IsLikelyArrayOrObjectWithArray() ? doJsArraySegmentHoist : doArraySegmentHoist;
}
bool
GlobOpt::DoTypedArraySegmentLengthHoist(Loop *const loop) const
{
if(!DoArraySegmentHoist(ValueType::GetObject(ObjectType::Int32Array)))
{
return false;
}
if(loop ? ImplicitCallFlagsAllowOpts(loop) : ImplicitCallFlagsAllowOpts(func))
{
return true;
}
// The function or loop does not allow disabling implicit calls, which is required to eliminate redundant typed array
// segment length loads.
#if DBG_DUMP
if((((loop ? loop->GetImplicitCallFlags() : func->m_fg->implicitCallFlags) & ~Js::ImplicitCall_External) == 0) &&
Js::Configuration::Global.flags.Trace.IsEnabled(Js::HostOptPhase))
{
Output::Print(_u("DoArraySegmentLengthHoist disabled for typed arrays because of external: "));
func->DumpFullFunctionName();
Output::Print(_u("\n"));
Output::Flush();
}
#endif
return false;
}
bool
GlobOpt::DoArrayLengthHoist(Func const * const func)
{
return
DoArrayCheckHoist(func) &&
!PHASE_OFF(Js::Phase::ArrayLengthHoistPhase, func) &&
(!func->HasProfileInfo() || !func->GetReadOnlyProfileInfo()->IsArrayLengthHoistDisabled(func->IsLoopBody()));
}
bool
GlobOpt::DoArrayLengthHoist() const
{
return doArrayLengthHoist;
}
bool
GlobOpt::DoEliminateArrayAccessHelperCall(Func *const func)
{
return DoArrayCheckHoist(func);
}
bool
GlobOpt::DoEliminateArrayAccessHelperCall() const
{
return doEliminateArrayAccessHelperCall;
}
bool
GlobOpt::DoLdLenIntSpec(IR::Instr * const instr, const ValueType baseValueType)
{
Assert(!instr || instr->m_opcode == Js::OpCode::LdLen_A);
Assert(!instr || instr->GetDst());
Assert(!instr || instr->GetSrc1());
if(PHASE_OFF(Js::LdLenIntSpecPhase, func) ||
IsTypeSpecPhaseOff(func) ||
(func->HasProfileInfo() && func->GetReadOnlyProfileInfo()->IsLdLenIntSpecDisabled()) ||
(instr && !IsLoopPrePass() && instr->DoStackArgsOpt()))
{
return false;
}
if(instr &&
instr->IsProfiledInstr() &&
(
!instr->AsProfiledInstr()->u.FldInfo().valueType.IsLikelyInt() ||
instr->GetDst()->AsRegOpnd()->m_sym->m_isNotNumber
))
{
return false;
}
Assert(!instr || baseValueType == instr->GetSrc1()->GetValueType());
return
baseValueType.HasBeenString() ||
(baseValueType.IsLikelyAnyOptimizedArray() && baseValueType.GetObjectType() != ObjectType::ObjectWithArray);
}
bool
GlobOpt::DoPathDependentValues() const
{
return !PHASE_OFF(Js::Phase::PathDependentValuesPhase, func);
}
bool
GlobOpt::DoTrackRelativeIntBounds() const
{
return doTrackRelativeIntBounds;
}
bool
GlobOpt::DoBoundCheckElimination() const
{
return doBoundCheckElimination;
}
bool
GlobOpt::DoBoundCheckHoist() const
{
return doBoundCheckHoist;
}
bool
GlobOpt::DoLoopCountBasedBoundCheckHoist() const
{
return doLoopCountBasedBoundCheckHoist;
}
bool
GlobOpt::DoPowIntIntTypeSpec() const
{
return doPowIntIntTypeSpec;
}
bool
GlobOpt::DoTagChecks() const
{
return doTagChecks;
}
bool
GlobOpt::TrackArgumentsObject()
{
if (PHASE_OFF(Js::StackArgOptPhase, this->func))
{
this->CannotAllocateArgumentsObjectOnStack(nullptr);
return false;
}
return func->GetHasStackArgs();
}
void
GlobOpt::CannotAllocateArgumentsObjectOnStack(Func * curFunc)
{
if (curFunc != nullptr && curFunc->hasArgLenAndConstOpt)
{
Assert(!curFunc->GetJITOutput()->GetOutputData()->disableStackArgOpt);
curFunc->GetJITOutput()->GetOutputData()->disableStackArgOpt = true;
throw Js::RejitException(RejitReason::DisableStackArgLenAndConstOpt);
}
func->SetHasStackArgs(false);
#ifdef ENABLE_DEBUG_CONFIG_OPTIONS
if (PHASE_TESTTRACE(Js::StackArgOptPhase, this->func))
{
char16 debugStringBuffer[MAX_FUNCTION_BODY_DEBUG_STRING_SIZE];
Output::Print(_u("Stack args disabled for function %s(%s)\n"), func->GetJITFunctionBody()->GetDisplayName(), func->GetDebugNumberSet(debugStringBuffer));
Output::Flush();
}
#endif
}
IR::Instr *
GlobOpt::PreOptPeep(IR::Instr *instr)
{
if (OpCodeAttr::HasDeadFallThrough(instr->m_opcode))
{
switch (instr->m_opcode)
{
case Js::OpCode::BailOnNoProfile:
{
// Handle BailOnNoProfile
if (instr->HasBailOutInfo())
{
if (!this->prePassLoop)
{
FillBailOutInfo(this->currentBlock, instr);
}
// Already processed.
return instr;
}
// Convert to bailout instr
IR::Instr *nextBytecodeOffsetInstr = instr->GetNextRealInstrOrLabel();
while(nextBytecodeOffsetInstr->GetByteCodeOffset() == Js::Constants::NoByteCodeOffset)
{
nextBytecodeOffsetInstr = nextBytecodeOffsetInstr->GetNextRealInstrOrLabel();
Assert(!nextBytecodeOffsetInstr->IsLabelInstr());
}
instr = instr->ConvertToBailOutInstr(nextBytecodeOffsetInstr, IR::BailOutOnNoProfile);
instr->ClearByteCodeOffset();
instr->SetByteCodeOffset(nextBytecodeOffsetInstr);
if (!this->currentBlock->loop)
{
FillBailOutInfo(this->currentBlock, instr);
}
else
{
Assert(this->prePassLoop);
}
break;
}
case Js::OpCode::BailOnException:
{
Assert(
(
this->func->HasTry() && this->func->DoOptimizeTry() &&
instr->m_prev->m_opcode == Js::OpCode::Catch &&
instr->m_prev->m_prev->IsLabelInstr() &&
instr->m_prev->m_prev->AsLabelInstr()->GetRegion()->GetType() == RegionType::RegionTypeCatch
)
||
(
this->func->HasFinally() && this->func->DoOptimizeTry() &&
instr->m_prev->AsLabelInstr() &&
instr->m_prev->AsLabelInstr()->GetRegion()->GetType() == RegionType::RegionTypeFinally
)
);
break;
}
case Js::OpCode::BailOnEarlyExit:
{
Assert(this->func->HasFinally() && this->func->DoOptimizeTry());
break;
}
default:
{
if(this->currentBlock->loop && !this->IsLoopPrePass())
{
return instr;
}
break;
}
}
RemoveCodeAfterNoFallthroughInstr(instr);
}
return instr;
}
void
GlobOpt::RemoveCodeAfterNoFallthroughInstr(IR::Instr *instr)
{
if (instr != this->currentBlock->GetLastInstr())
{
// Remove dead code after bailout
IR::Instr *instrDead = instr->m_next;
IR::Instr *instrNext;
for (; instrDead != this->currentBlock->GetLastInstr(); instrDead = instrNext)
{
instrNext = instrDead->m_next;
if (instrNext->m_opcode == Js::OpCode::FunctionExit)
{
break;
}
this->func->m_fg->RemoveInstr(instrDead, this);
}
IR::Instr *instrNextBlock = instrDead->m_next;
this->func->m_fg->RemoveInstr(instrDead, this);
this->currentBlock->SetLastInstr(instrNextBlock->m_prev);
}
// Cleanup dead successors
FOREACH_SUCCESSOR_BLOCK_EDITING(deadBlock, this->currentBlock, iter)
{
this->currentBlock->RemoveDeadSucc(deadBlock, this->func->m_fg);
if (this->currentBlock->GetDataUseCount() > 0)
{
this->currentBlock->DecrementDataUseCount();
}
} NEXT_SUCCESSOR_BLOCK_EDITING;
}
void
GlobOpt::ProcessTryHandler(IR::Instr* instr)
{
Assert(instr->m_next->IsLabelInstr() && instr->m_next->AsLabelInstr()->GetRegion()->GetType() == RegionType::RegionTypeTry);
Region* tryRegion = instr->m_next->AsLabelInstr()->GetRegion();
BVSparse<JitArenaAllocator> * writeThroughSymbolsSet = tryRegion->writeThroughSymbolsSet;
ToVar(writeThroughSymbolsSet, this->currentBlock);
}
bool
GlobOpt::ProcessExceptionHandlingEdges(IR::Instr* instr)
{
Assert(instr->m_opcode == Js::OpCode::BrOnException || instr->m_opcode == Js::OpCode::BrOnNoException);
if (instr->m_opcode == Js::OpCode::BrOnException)
{
if (instr->AsBranchInstr()->GetTarget()->GetRegion()->GetType() == RegionType::RegionTypeCatch)
{
// BrOnException was added to model flow from try region to the catch region to assist
// the backward pass in propagating bytecode upward exposed info from the catch block
// to the try, and to handle break blocks. Removing it here as it has served its purpose
// and keeping it around might also have unintended effects while merging block data for
// the catch block's predecessors.
// Note that the Deadstore pass will still be able to propagate bytecode upward exposed info
// because it doesn't skip dead blocks for that.
this->RemoveFlowEdgeToCatchBlock(instr);
this->currentBlock->RemoveInstr(instr);
return true;
}
else
{
// We add BrOnException from a finally region to early exit, remove that since it has served its purpose
return this->RemoveFlowEdgeToFinallyOnExceptionBlock(instr);
}
}
else if (instr->m_opcode == Js::OpCode::BrOnNoException)
{
if (instr->AsBranchInstr()->GetTarget()->GetRegion()->GetType() == RegionType::RegionTypeCatch)
{
this->RemoveFlowEdgeToCatchBlock(instr);
}
else
{
this->RemoveFlowEdgeToFinallyOnExceptionBlock(instr);
}
}
return false;
}
void
GlobOpt::InsertToVarAtDefInTryRegion(IR::Instr * instr, IR::Opnd * dstOpnd)
{
if ((this->currentRegion->GetType() == RegionTypeTry || this->currentRegion->GetType() == RegionTypeFinally) &&
dstOpnd->IsRegOpnd() && dstOpnd->AsRegOpnd()->m_sym->HasByteCodeRegSlot())
{
StackSym * sym = dstOpnd->AsRegOpnd()->m_sym;
if (sym->IsVar())
{
return;
}
StackSym * varSym = sym->GetVarEquivSym(nullptr);
if ((this->currentRegion->GetType() == RegionTypeTry && this->currentRegion->writeThroughSymbolsSet->Test(varSym->m_id)) ||
((this->currentRegion->GetType() == RegionTypeFinally && this->currentRegion->GetMatchingTryRegion()->writeThroughSymbolsSet->Test(varSym->m_id))))
{
IR::RegOpnd * regOpnd = IR::RegOpnd::New(varSym, IRType::TyVar, instr->m_func);
this->ToVar(instr->m_next, regOpnd, this->currentBlock, NULL, false);
}
}
}
void
GlobOpt::RemoveFlowEdgeToCatchBlock(IR::Instr * instr)
{
Assert(instr->IsBranchInstr());
BasicBlock * catchBlock = nullptr;
BasicBlock * predBlock = nullptr;
if (instr->m_opcode == Js::OpCode::BrOnException)
{
catchBlock = instr->AsBranchInstr()->GetTarget()->GetBasicBlock();
predBlock = this->currentBlock;
}
else
{
Assert(instr->m_opcode == Js::OpCode::BrOnNoException);
IR::Instr * nextInstr = instr->GetNextRealInstrOrLabel();
Assert(nextInstr->IsLabelInstr());
IR::LabelInstr * nextLabel = nextInstr->AsLabelInstr();
if (nextLabel->GetRegion() && nextLabel->GetRegion()->GetType() == RegionTypeCatch)
{
catchBlock = nextLabel->GetBasicBlock();
predBlock = this->currentBlock;
}
else
{
Assert(nextLabel->m_next->IsBranchInstr() && nextLabel->m_next->AsBranchInstr()->IsUnconditional());
BasicBlock * nextBlock = nextLabel->GetBasicBlock();
IR::BranchInstr * branchToCatchBlock = nextLabel->m_next->AsBranchInstr();
IR::LabelInstr * catchBlockLabel = branchToCatchBlock->GetTarget();
Assert(catchBlockLabel->GetRegion()->GetType() == RegionTypeCatch);
catchBlock = catchBlockLabel->GetBasicBlock();
predBlock = nextBlock;
}
}
Assert(catchBlock);
Assert(predBlock);
if (this->func->m_fg->FindEdge(predBlock, catchBlock))
{
predBlock->RemoveDeadSucc(catchBlock, this->func->m_fg);
if (predBlock == this->currentBlock)
{
predBlock->DecrementDataUseCount();
}
}
}
bool
GlobOpt::RemoveFlowEdgeToFinallyOnExceptionBlock(IR::Instr * instr)
{
Assert(instr->IsBranchInstr());
if (instr->m_opcode == Js::OpCode::BrOnNoException && instr->AsBranchInstr()->m_brFinallyToEarlyExit)
{
// We add an edge from the finally region to the early exit block.
// We should not remove this edge.
// If a loop has a continue, and we add an edge in the finally to that continue,
// break block removal can move all continues inside the loop to branch to the continue added within the finally.
// If we get rid of this edge, the loop may lose all of its backedges.
// Ideally, doing tail duplication before globopt would enable us to remove these edges, but since we do it after globopt, keep it this way for now.
// See test1() in core/test/tryfinallytests.js
return false;
}
BasicBlock * finallyBlock = nullptr;
BasicBlock * predBlock = nullptr;
if (instr->m_opcode == Js::OpCode::BrOnException)
{
finallyBlock = instr->AsBranchInstr()->GetTarget()->GetBasicBlock();
predBlock = this->currentBlock;
}
else
{
Assert(instr->m_opcode == Js::OpCode::BrOnNoException);
IR::Instr * nextInstr = instr->GetNextRealInstrOrLabel();
Assert(nextInstr->IsLabelInstr());
IR::LabelInstr * nextLabel = nextInstr->AsLabelInstr();
if (nextLabel->GetRegion() && nextLabel->GetRegion()->GetType() == RegionTypeFinally)
{
finallyBlock = nextLabel->GetBasicBlock();
predBlock = this->currentBlock;
}
else
{
if (!(nextLabel->m_next->IsBranchInstr() && nextLabel->m_next->AsBranchInstr()->IsUnconditional()))
{
return false;
}
BasicBlock * nextBlock = nextLabel->GetBasicBlock();
IR::BranchInstr * branchTofinallyBlockOrEarlyExit = nextLabel->m_next->AsBranchInstr();
IR::LabelInstr * finallyBlockLabelOrEarlyExitLabel = branchTofinallyBlockOrEarlyExit->GetTarget();
finallyBlock = finallyBlockLabelOrEarlyExitLabel->GetBasicBlock();
predBlock = nextBlock;
}
}
Assert(finallyBlock && predBlock);
if (this->func->m_fg->FindEdge(predBlock, finallyBlock))
{
predBlock->RemoveDeadSucc(finallyBlock, this->func->m_fg);
if (instr->m_opcode == Js::OpCode::BrOnException)
{
this->currentBlock->RemoveInstr(instr);
}
if (finallyBlock->GetFirstInstr()->AsLabelInstr()->IsUnreferenced())
{
// Traverse the pred blocks of finallyBlock; if any of the preds has a different region, set m_hasNonBranchRef to true.
// Otherwise, this label can get eliminated and an incorrect region from the predecessor can get propagated in lowered code.
// See test3() in tryfinallytests.js
Region * finallyRegion = finallyBlock->GetFirstInstr()->AsLabelInstr()->GetRegion();
FOREACH_PREDECESSOR_BLOCK(pred, finallyBlock)
{
Region * predRegion = pred->GetFirstInstr()->AsLabelInstr()->GetRegion();
if (predRegion != finallyRegion)
{
finallyBlock->GetFirstInstr()->AsLabelInstr()->m_hasNonBranchRef = true;
}
} NEXT_PREDECESSOR_BLOCK;
}
if (predBlock == this->currentBlock)
{
predBlock->DecrementDataUseCount();
}
}
return true;
}
IR::Instr *
GlobOpt::OptPeep(IR::Instr *instr, Value *src1Val, Value *src2Val)
{
IR::Opnd *dst, *src1, *src2;
if (this->IsLoopPrePass())
{
return instr;
}
switch (instr->m_opcode)
{
case Js::OpCode::DeadBrEqual:
case Js::OpCode::DeadBrRelational:
case Js::OpCode::DeadBrSrEqual:
src1 = instr->GetSrc1();
src2 = instr->GetSrc2();
// These branches were turned into dead branches because they were unnecessary (branch to next, ...).
// The DeadBr instructions are necessary in case evaluating the sources has side effects.
// If we know for sure the srcs are primitive or have been type specialized, we don't need these instructions.
if (((src1Val && src1Val->GetValueInfo()->IsPrimitive()) || (src1->IsRegOpnd() && CurrentBlockData()->IsTypeSpecialized(src1->AsRegOpnd()->m_sym))) &&
((src2Val && src2Val->GetValueInfo()->IsPrimitive()) || (src2->IsRegOpnd() && CurrentBlockData()->IsTypeSpecialized(src2->AsRegOpnd()->m_sym))))
{
this->CaptureByteCodeSymUses(instr);
instr->m_opcode = Js::OpCode::Nop;
}
break;
case Js::OpCode::DeadBrOnHasProperty:
src1 = instr->GetSrc1();
if (((src1Val && src1Val->GetValueInfo()->IsPrimitive()) || (src1->IsRegOpnd() && CurrentBlockData()->IsTypeSpecialized(src1->AsRegOpnd()->m_sym))))
{
this->CaptureByteCodeSymUses(instr);
instr->m_opcode = Js::OpCode::Nop;
}
break;
case Js::OpCode::Ld_A:
case Js::OpCode::Ld_I4:
src1 = instr->GetSrc1();
dst = instr->GetDst();
if (dst->IsRegOpnd() && dst->IsEqual(src1))
{
dst = instr->UnlinkDst();
if (!dst->GetIsJITOptimizedReg())
{
IR::ByteCodeUsesInstr *bytecodeUse = IR::ByteCodeUsesInstr::New(instr);
bytecodeUse->SetDst(dst);
instr->InsertAfter(bytecodeUse);
}
instr->FreeSrc1();
instr->m_opcode = Js::OpCode::Nop;
}
break;
}
return instr;
}
void
GlobOpt::OptimizeIndirUses(IR::IndirOpnd *indirOpnd, IR::Instr * *pInstr, Value **indirIndexValRef)
{
IR::Instr * &instr = *pInstr;
Assert(!indirIndexValRef || !*indirIndexValRef);
// Update value types and copy-prop the base
OptSrc(indirOpnd->GetBaseOpnd(), &instr, nullptr, indirOpnd);
IR::RegOpnd *indexOpnd = indirOpnd->GetIndexOpnd();
if (!indexOpnd)
{
return;
}
// Update value types and copy-prop the index
Value *indexVal = OptSrc(indexOpnd, &instr, nullptr, indirOpnd);
if(indirIndexValRef)
{
*indirIndexValRef = indexVal;
}
}
bool
GlobOpt::IsPREInstrCandidateLoad(Js::OpCode opcode)
{
switch (opcode)
{
case Js::OpCode::LdFld:
case Js::OpCode::LdFldForTypeOf:
case Js::OpCode::LdRootFld:
case Js::OpCode::LdRootFldForTypeOf:
case Js::OpCode::LdMethodFld:
case Js::OpCode::LdRootMethodFld:
case Js::OpCode::LdSlot:
case Js::OpCode::LdSlotArr:
return true;
}
return false;
}
bool
GlobOpt::IsPREInstrSequenceCandidateLoad(Js::OpCode opcode)
{
switch (opcode)
{
default:
return IsPREInstrCandidateLoad(opcode);
case Js::OpCode::Ld_A:
case Js::OpCode::BytecodeArgOutCapture:
return true;
}
}
bool
GlobOpt::IsPREInstrCandidateStore(Js::OpCode opcode)
{
switch (opcode)
{
case Js::OpCode::StFld:
case Js::OpCode::StRootFld:
case Js::OpCode::StSlot:
return true;
}
return false;
}
bool
GlobOpt::ImplicitCallFlagsAllowOpts(Loop *loop)
{
return loop->GetImplicitCallFlags() != Js::ImplicitCall_HasNoInfo &&
(((loop->GetImplicitCallFlags() & ~Js::ImplicitCall_Accessor) | Js::ImplicitCall_None) == Js::ImplicitCall_None);
}
bool
GlobOpt::ImplicitCallFlagsAllowOpts(Func const *func)
{
return func->m_fg->implicitCallFlags != Js::ImplicitCall_HasNoInfo &&
(((func->m_fg->implicitCallFlags & ~Js::ImplicitCall_Accessor) | Js::ImplicitCall_None) == Js::ImplicitCall_None);
}
#if DBG_DUMP
void
GlobOpt::Dump() const
{
this->DumpSymToValueMap();
}
void
GlobOpt::DumpSymToValueMap(BasicBlock const * block) const
{
Output::Print(_u("\n*** SymToValueMap ***\n"));
block->globOptData.DumpSymToValueMap();
}
void
GlobOpt::DumpSymToValueMap() const
{
DumpSymToValueMap(this->currentBlock);
}
void
GlobOpt::DumpSymVal(int index)
{
SymID id = index;
extern Func *CurrentFunc;
Sym *sym = this->func->m_symTable->Find(id);
AssertMsg(sym, "Sym not found!!!");
Output::Print(_u("Sym: "));
sym->Dump();
Output::Print(_u("\t\tValueNumber: "));
Value * pValue = CurrentBlockData()->FindValueFromMapDirect(sym->m_id);
pValue->Dump();
Output::Print(_u("\n"));
}
void
GlobOpt::Trace(BasicBlock * block, bool before) const
{
bool globOptTrace = Js::Configuration::Global.flags.Trace.IsEnabled(Js::GlobOptPhase, this->func->GetSourceContextId(), this->func->GetLocalFunctionId());
bool typeSpecTrace = Js::Configuration::Global.flags.Trace.IsEnabled(Js::TypeSpecPhase, this->func->GetSourceContextId(), this->func->GetLocalFunctionId());
bool floatTypeSpecTrace = Js::Configuration::Global.flags.Trace.IsEnabled(Js::FloatTypeSpecPhase, this->func->GetSourceContextId(), this->func->GetLocalFunctionId());
bool fieldCopyPropTrace = Js::Configuration::Global.flags.Trace.IsEnabled(Js::FieldCopyPropPhase, this->func->GetSourceContextId(), this->func->GetLocalFunctionId());
bool objTypeSpecTrace = Js::Configuration::Global.flags.Trace.IsEnabled(Js::ObjTypeSpecPhase, this->func->GetSourceContextId(), this->func->GetLocalFunctionId());
bool valueTableTrace = Js::Configuration::Global.flags.Trace.IsEnabled(Js::ValueTablePhase, this->func->GetSourceContextId(), this->func->GetLocalFunctionId());
bool fieldPRETrace = Js::Configuration::Global.flags.Trace.IsEnabled(Js::FieldPREPhase, this->func->GetSourceContextId(), this->func->GetLocalFunctionId());
bool anyTrace = globOptTrace || typeSpecTrace || floatTypeSpecTrace || fieldCopyPropTrace || objTypeSpecTrace || valueTableTrace || fieldPRETrace;
if (!anyTrace)
{
return;
}
if (fieldPRETrace && this->IsLoopPrePass())
{
if (block->isLoopHeader && before)
{
Output::Print(_u("==== Loop Prepass block header #%-3d, Visiting Loop block head #%-3d\n"),
this->prePassLoop->GetHeadBlock()->GetBlockNum(), block->GetBlockNum());
}
}
if (!typeSpecTrace && !floatTypeSpecTrace && !valueTableTrace && !Js::Configuration::Global.flags.Verbose)
{
return;
}
if (before)
{
Output::Print(_u("========================================================================\n"));
Output::Print(_u("Begin OptBlock: Block #%-3d"), block->GetBlockNum());
if (block->loop)
{
Output::Print(_u(" Loop block header:%-3d currentLoop block head:%-3d %s"),
block->loop->GetHeadBlock()->GetBlockNum(),
this->prePassLoop ? this->prePassLoop->GetHeadBlock()->GetBlockNum() : 0,
this->IsLoopPrePass() ? _u("PrePass") : _u(""));
}
Output::Print(_u("\n"));
}
else
{
Output::Print(_u("-----------------------------------------------------------------------\n"));
Output::Print(_u("After OptBlock: Block #%-3d\n"), block->GetBlockNum());
}
if ((typeSpecTrace || floatTypeSpecTrace) && !block->globOptData.liveVarSyms->IsEmpty())
{
Output::Print(_u(" Live var syms: "));
block->globOptData.liveVarSyms->Dump();
}
if (typeSpecTrace && !block->globOptData.liveInt32Syms->IsEmpty())
{
Assert(this->tempBv->IsEmpty());
this->tempBv->Minus(block->globOptData.liveInt32Syms, block->globOptData.liveLossyInt32Syms);
if(!this->tempBv->IsEmpty())
{
Output::Print(_u(" Int32 type specialized (lossless) syms: "));
this->tempBv->Dump();
}
this->tempBv->ClearAll();
if(!block->globOptData.liveLossyInt32Syms->IsEmpty())
{
Output::Print(_u(" Int32 converted (lossy) syms: "));
block->globOptData.liveLossyInt32Syms->Dump();
}
}
if (floatTypeSpecTrace && !block->globOptData.liveFloat64Syms->IsEmpty())
{
Output::Print(_u(" Float64 type specialized syms: "));
block->globOptData.liveFloat64Syms->Dump();
}
if ((fieldCopyPropTrace || objTypeSpecTrace) && this->DoFieldCopyProp(block->loop) && !block->globOptData.liveFields->IsEmpty())
{
Output::Print(_u(" Live field syms: "));
block->globOptData.liveFields->Dump();
}
if (objTypeSpecTrace || valueTableTrace)
{
Output::Print(_u(" Value table:\n"));
block->globOptData.DumpSymToValueMap();
}
if (before)
{
Output::Print(_u("-----------------------------------------------------------------------\n"));
}
Output::Flush();
}
void
GlobOpt::TraceSettings() const
{
Output::Print(_u("GlobOpt Settings:\r\n"));
Output::Print(_u(" FloatTypeSpec: %s\r\n"), this->DoFloatTypeSpec() ? _u("enabled") : _u("disabled"));
Output::Print(_u(" AggressiveIntTypeSpec: %s\r\n"), this->DoAggressiveIntTypeSpec() ? _u("enabled") : _u("disabled"));
Output::Print(_u(" LossyIntTypeSpec: %s\r\n"), this->DoLossyIntTypeSpec() ? _u("enabled") : _u("disabled"));
Output::Print(_u(" ArrayCheckHoist: %s\r\n"), this->func->IsArrayCheckHoistDisabled() ? _u("disabled") : _u("enabled"));
Output::Print(_u(" ImplicitCallFlags: %s\r\n"), Js::DynamicProfileInfo::GetImplicitCallFlagsString(this->func->m_fg->implicitCallFlags));
for (Loop * loop = this->func->m_fg->loopList; loop != NULL; loop = loop->next)
{
Output::Print(_u(" loop: %d, ImplicitCallFlags: %s\r\n"), loop->GetLoopNumber(),
Js::DynamicProfileInfo::GetImplicitCallFlagsString(loop->GetImplicitCallFlags()));
}
Output::Flush();
}
#endif // DBG_DUMP
IR::Instr *
GlobOpt::TrackMarkTempObject(IR::Instr * instrStart, IR::Instr * instrLast)
{
if (!this->func->GetHasMarkTempObjects())
{
return instrLast;
}
IR::Instr * instr = instrStart;
IR::Instr * instrEnd = instrLast->m_next;
IR::Instr * lastInstr = nullptr;
GlobOptBlockData& globOptData = *CurrentBlockData();
do
{
bool mayNeedBailOnImplicitCallsPreOp = !this->IsLoopPrePass()
&& instr->HasAnyImplicitCalls()
&& globOptData.maybeTempObjectSyms != nullptr;
if (mayNeedBailOnImplicitCallsPreOp)
{
IR::Opnd * src1 = instr->GetSrc1();
if (src1)
{
instr = GenerateBailOutMarkTempObjectIfNeeded(instr, src1, false);
IR::Opnd * src2 = instr->GetSrc2();
if (src2)
{
instr = GenerateBailOutMarkTempObjectIfNeeded(instr, src2, false);
}
}
}
IR::Opnd *dst = instr->GetDst();
if (dst)
{
if (dst->IsRegOpnd())
{
TrackTempObjectSyms(instr, dst->AsRegOpnd());
}
else if (mayNeedBailOnImplicitCallsPreOp)
{
instr = GenerateBailOutMarkTempObjectIfNeeded(instr, dst, true);
}
}
lastInstr = instr;
instr = instr->m_next;
}
while (instr != instrEnd);
return lastInstr;
}
void
GlobOpt::TrackTempObjectSyms(IR::Instr * instr, IR::RegOpnd * opnd)
{
// If it is marked as dstIsTempObject, we should have mark temped it, or type specialized it to Ld_I4.
Assert(!instr->dstIsTempObject || ObjectTempVerify::CanMarkTemp(instr, nullptr));
GlobOptBlockData& globOptData = *CurrentBlockData();
bool canStoreTemp = false;
bool maybeTemp = false;
if (OpCodeAttr::TempObjectProducing(instr->m_opcode))
{
maybeTemp = instr->dstIsTempObject;
// We have to make sure that lower will always generate code to do the stack allocation
// before we can store any other stack instance onto it. Otherwise, we would not be able
// to walk the object to box the stack properties.
canStoreTemp = instr->dstIsTempObject && ObjectTemp::CanStoreTemp(instr);
}
else if (OpCodeAttr::TempObjectTransfer(instr->m_opcode))
{
// Need to check both sources; GetNewScObject has two srcs for the transfer.
// No need to get the var equiv sym here, as a transfer of a type-spec value does not transfer a mark temp object.
maybeTemp = globOptData.maybeTempObjectSyms && (
(instr->GetSrc1()->IsRegOpnd() && globOptData.maybeTempObjectSyms->Test(instr->GetSrc1()->AsRegOpnd()->m_sym->m_id))
|| (instr->GetSrc2() && instr->GetSrc2()->IsRegOpnd() && globOptData.maybeTempObjectSyms->Test(instr->GetSrc2()->AsRegOpnd()->m_sym->m_id)));
canStoreTemp = globOptData.canStoreTempObjectSyms && (
(instr->GetSrc1()->IsRegOpnd() && globOptData.canStoreTempObjectSyms->Test(instr->GetSrc1()->AsRegOpnd()->m_sym->m_id))
&& (!instr->GetSrc2() || (instr->GetSrc2()->IsRegOpnd() && globOptData.canStoreTempObjectSyms->Test(instr->GetSrc2()->AsRegOpnd()->m_sym->m_id))));
AssertOrFailFast(!canStoreTemp || instr->dstIsTempObject);
AssertOrFailFast(!maybeTemp || instr->dstIsTempObject);
}
// Need to get the var equiv sym, as an assignment to a type-specialized sym kills the var sym's value anyway.
StackSym * sym = opnd->m_sym;
if (!sym->IsVar())
{
sym = sym->GetVarEquivSym(nullptr);
if (sym == nullptr)
{
return;
}
}
SymID symId = sym->m_id;
if (maybeTemp)
{
// Only var syms should be temp objects
Assert(opnd->m_sym == sym);
if (globOptData.maybeTempObjectSyms == nullptr)
{
globOptData.maybeTempObjectSyms = JitAnew(this->alloc, BVSparse<JitArenaAllocator>, this->alloc);
}
globOptData.maybeTempObjectSyms->Set(symId);
if (canStoreTemp)
{
if (instr->m_opcode == Js::OpCode::NewScObjectLiteral && !this->IsLoopPrePass())
{
// For object literals, we install the final type up front.
// If there is a bailout before we finish initializing all the fields, we need to
// zero out the rest if we stack-allocate the literal, so that boxing does not
// try to box trash pointers in the properties.
// Although object literal initialization can be done lexically, BailOnNoProfile may cause some
// paths to disappear. Doing it flow-based makes it easier to stop propagating those entries.
IR::IntConstOpnd * propertyArrayIdOpnd = instr->GetSrc1()->AsIntConstOpnd();
const Js::PropertyIdArray * propIds = instr->m_func->GetJITFunctionBody()->ReadPropertyIdArrayFromAuxData(propertyArrayIdOpnd->AsUint32());
// Duplicates are removed by the parser
Assert(!propIds->hadDuplicates);
if (globOptData.stackLiteralInitFldDataMap == nullptr)
{
globOptData.stackLiteralInitFldDataMap = JitAnew(alloc, StackLiteralInitFldDataMap, alloc);
}
else
{
Assert(!globOptData.stackLiteralInitFldDataMap->ContainsKey(sym));
}
StackLiteralInitFldData data = { propIds, 0};
globOptData.stackLiteralInitFldDataMap->AddNew(sym, data);
}
if (globOptData.canStoreTempObjectSyms == nullptr)
{
globOptData.canStoreTempObjectSyms = JitAnew(this->alloc, BVSparse<JitArenaAllocator>, this->alloc);
}
globOptData.canStoreTempObjectSyms->Set(symId);
}
else if (globOptData.canStoreTempObjectSyms)
{
globOptData.canStoreTempObjectSyms->Clear(symId);
}
}
else
{
Assert(!canStoreTemp);
if (globOptData.maybeTempObjectSyms)
{
if (globOptData.canStoreTempObjectSyms)
{
globOptData.canStoreTempObjectSyms->Clear(symId);
}
globOptData.maybeTempObjectSyms->Clear(symId);
}
else
{
Assert(!globOptData.canStoreTempObjectSyms);
}
// The symbol is being assigned to, so it shouldn't still be in the stackLiteralInitFldDataMap
Assert(this->IsLoopPrePass() ||
globOptData.stackLiteralInitFldDataMap == nullptr
|| globOptData.stackLiteralInitFldDataMap->Count() == 0
|| !globOptData.stackLiteralInitFldDataMap->ContainsKey(sym));
}
}
IR::Instr *
GlobOpt::GenerateBailOutMarkTempObjectIfNeeded(IR::Instr * instr, IR::Opnd * opnd, bool isDst)
{
Assert(opnd);
Assert(isDst == (opnd == instr->GetDst()));
Assert(opnd != instr->GetDst() || !opnd->IsRegOpnd());
Assert(!this->IsLoopPrePass());
Assert(instr->HasAnyImplicitCalls());
// Only an opcode with a reg opnd dst, or ArgOut_A, should have dstIsTempObject marked
Assert(!isDst || !instr->dstIsTempObject || instr->m_opcode == Js::OpCode::ArgOut_A);
// The post-op implicit call bailout shouldn't have been installed yet
Assert(!instr->HasBailOutInfo() || (instr->GetBailOutKind() & IR::BailOutKindBits) != IR::BailOutOnImplicitCalls);
GlobOptBlockData& globOptData = *CurrentBlockData();
Assert(globOptData.maybeTempObjectSyms != nullptr);
IR::PropertySymOpnd * propertySymOpnd = nullptr;
StackSym * stackSym = ObjectTemp::GetStackSym(opnd, &propertySymOpnd);
// It is okay not to get the var equiv sym here, as a use of a type-specialized sym is not a use of the temp object,
// so there is no need to add the mark temp bailout.
// maybeTempObjectSyms doesn't contain any type spec syms, so we will get false here for all type spec syms.
if (stackSym && globOptData.maybeTempObjectSyms->Test(stackSym->m_id))
{
if (instr->HasBailOutInfo())
{
instr->SetBailOutKind(instr->GetBailOutKind() | IR::BailOutMarkTempObject);
instr->GetBailOutInfo()->canDeadStore = false;
}
else
{
// Insert the pre-op bailout only if this is not a direct field access; don't check the dst yet.
// SetTypeCheckBailout will clear this out if it is a direct field access.
if (isDst
|| (instr->m_opcode == Js::OpCode::FromVar && !opnd->GetValueType().IsPrimitive())
|| propertySymOpnd == nullptr
|| !propertySymOpnd->IsTypeCheckProtected())
{
this->GenerateBailAtOperation(&instr, IR::BailOutMarkTempObject);
instr->GetBailOutInfo()->canDeadStore = false;
}
else if (propertySymOpnd->MayHaveImplicitCall())
{
this->GenerateBailAtOperation(&instr, IR::BailOutMarkTempObject);
}
}
if (!opnd->IsRegOpnd() && (!isDst || (globOptData.canStoreTempObjectSyms && globOptData.canStoreTempObjectSyms->Test(stackSym->m_id))))
{
// If this opnd is a dst, that means that the object pointer is a stack object,
// and we can store temp object/number on it.
// If the opnd is a src, that means that the object pointer may be a stack object
// so the load may be a temp object/number and we need to track its use.
// Don't mark start of indir as can store temp, because we don't actually know
// what it is assigning to.
if (!isDst || !opnd->IsIndirOpnd())
{
opnd->SetCanStoreTemp();
}
if (propertySymOpnd)
{
// Track initfld of stack literals
if (isDst && instr->m_opcode == Js::OpCode::InitFld)
{
const Js::PropertyId propertyId = propertySymOpnd->m_sym->AsPropertySym()->m_propertyId;
// We don't need to track initialization of numeric properties
if (!this->func->GetThreadContextInfo()->IsNumericProperty(propertyId))
{
DebugOnly(bool found = false);
globOptData.stackLiteralInitFldDataMap->RemoveIf(stackSym,
[&](StackSym * key, StackLiteralInitFldData & data)
{
DebugOnly(found = true);
Assert(key == stackSym);
Assert(data.currentInitFldCount < data.propIds->count);
if (data.propIds->elements[data.currentInitFldCount] != propertyId)
{
#if DBG
bool duplicate = false;
for (uint i = 0; i < data.currentInitFldCount; i++)
{
if (data.propIds->elements[i] == propertyId)
{
duplicate = true;
break;
}
}
Assert(duplicate);
#endif
// duplicate initialization
return false;
}
bool finished = (++data.currentInitFldCount == data.propIds->count);
#if DBG
if (finished)
{
// We can still track the finished stack literal InitFld lexically.
this->finishedStackLiteralInitFld->Set(stackSym->m_id);
}
#endif
return finished;
});
// We might still see InitFld even after we have finished with all the property ids, because
// of duplicate entries at the end
Assert(found || finishedStackLiteralInitFld->Test(stackSym->m_id));
}
}
}
}
}
return instr;
}
LoopCount *
GlobOpt::GetOrGenerateLoopCountForMemOp(Loop *loop)
{
LoopCount *loopCount = loop->loopCount;
if (loopCount && !loopCount->HasGeneratedLoopCountSym())
{
Assert(loop->bailOutInfo);
EnsureBailTarget(loop);
GenerateLoopCountPlusOne(loop, loopCount);
}
return loopCount;
}
IR::Opnd *
GlobOpt::GenerateInductionVariableChangeForMemOp(Loop *loop, byte unroll, IR::Instr *insertBeforeInstr)
{
AssertOrFailFast(unroll != Js::Constants::InvalidLoopUnrollFactor);
LoopCount *loopCount = loop->loopCount;
IR::Opnd *sizeOpnd = nullptr;
Assert(loopCount);
Assert(loop->memOpInfo->inductionVariableOpndPerUnrollMap);
if (loop->memOpInfo->inductionVariableOpndPerUnrollMap->TryGetValue(unroll, &sizeOpnd))
{
return sizeOpnd;
}
Func *localFunc = loop->GetFunc();
const auto InsertInstr = [&](IR::Instr *instr)
{
if (insertBeforeInstr == nullptr)
{
loop->landingPad->InsertAfter(instr);
}
else
{
insertBeforeInstr->InsertBefore(instr);
}
};
if (loopCount->LoopCountMinusOneSym())
{
IRType type = loopCount->LoopCountSym()->GetType();
// LoopCountMinusOneSym is off by one; LoopCountSym (generated by GenerateLoopCountPlusOne) already has one added
IR::RegOpnd *loopCountOpnd = IR::RegOpnd::New(loopCount->LoopCountSym(), type, localFunc);
sizeOpnd = loopCountOpnd;
if (unroll != 1)
{
sizeOpnd = IR::RegOpnd::New(TyUint32, this->func);
IR::Opnd *unrollOpnd = IR::IntConstOpnd::New(unroll, type, localFunc);
IR::Instr *inductionChangeMultiplier = IR::Instr::New(
Js::OpCode::Mul_I4, sizeOpnd, loopCountOpnd, unrollOpnd, localFunc);
InsertInstr(inductionChangeMultiplier);
inductionChangeMultiplier->ConvertToBailOutInstr(loop->bailOutInfo, IR::BailOutOnOverflow);
}
}
else
{
int32 loopCountMinusOnePlusOne;
int32 size;
if (Int32Math::Add(loopCount->LoopCountMinusOneConstantValue(), 1, &loopCountMinusOnePlusOne) ||
Int32Math::Mul(loopCountMinusOnePlusOne, unroll, &size))
{
throw Js::RejitException(RejitReason::MemOpDisabled);
}
Assert(size > 0);
sizeOpnd = IR::IntConstOpnd::New(size, IRType::TyUint32, localFunc);
}
loop->memOpInfo->inductionVariableOpndPerUnrollMap->Add(unroll, sizeOpnd);
return sizeOpnd;
}
IR::RegOpnd*
GlobOpt::GenerateStartIndexOpndForMemop(Loop *loop, IR::Opnd *indexOpnd, IR::Opnd *sizeOpnd, bool isInductionVariableChangeIncremental, bool bIndexAlreadyChanged, IR::Instr *insertBeforeInstr)
{
IR::RegOpnd *startIndexOpnd = nullptr;
Func *localFunc = loop->GetFunc();
IRType type = indexOpnd->GetType();
const int cacheIndex = ((int)isInductionVariableChangeIncremental << 1) | (int)bIndexAlreadyChanged;
if (loop->memOpInfo->startIndexOpndCache[cacheIndex])
{
return loop->memOpInfo->startIndexOpndCache[cacheIndex];
}
const auto InsertInstr = [&](IR::Instr *instr)
{
if (insertBeforeInstr == nullptr)
{
loop->landingPad->InsertAfter(instr);
}
else
{
insertBeforeInstr->InsertBefore(instr);
}
};
startIndexOpnd = IR::RegOpnd::New(type, localFunc);
// If the two flags differ we can simply use indexOpnd
if (isInductionVariableChangeIncremental != bIndexAlreadyChanged)
{
InsertInstr(IR::Instr::New(Js::OpCode::Ld_A,
startIndexOpnd,
indexOpnd,
localFunc));
}
else
{
// Otherwise add 1 to it
InsertInstr(IR::Instr::New(Js::OpCode::Add_I4,
startIndexOpnd,
indexOpnd,
IR::IntConstOpnd::New(1, type, localFunc, true),
localFunc));
}
if (!isInductionVariableChangeIncremental)
{
InsertInstr(IR::Instr::New(Js::OpCode::Sub_I4,
startIndexOpnd,
startIndexOpnd,
sizeOpnd,
localFunc));
}
loop->memOpInfo->startIndexOpndCache[cacheIndex] = startIndexOpnd;
return startIndexOpnd;
}
IR::Instr*
GlobOpt::FindUpperBoundsCheckInstr(IR::Instr* fromInstr)
{
IR::Instr *upperBoundCheck = fromInstr;
do
{
upperBoundCheck = upperBoundCheck->m_prev;
Assert(upperBoundCheck);
Assert(!upperBoundCheck->IsLabelInstr());
} while (upperBoundCheck->m_opcode != Js::OpCode::BoundCheck);
return upperBoundCheck;
}
IR::Instr*
GlobOpt::FindArraySegmentLoadInstr(IR::Instr* fromInstr)
{
IR::Instr *headSegmentLengthLoad = fromInstr;
do
{
headSegmentLengthLoad = headSegmentLengthLoad->m_prev;
Assert(headSegmentLengthLoad);
Assert(!headSegmentLengthLoad->IsLabelInstr());
} while (headSegmentLengthLoad->m_opcode != Js::OpCode::LdIndir);
return headSegmentLengthLoad;
}
void
GlobOpt::RemoveMemOpSrcInstr(IR::Instr* memopInstr, IR::Instr* srcInstr, BasicBlock* block)
{
Assert(srcInstr && (srcInstr->m_opcode == Js::OpCode::LdElemI_A || srcInstr->m_opcode == Js::OpCode::StElemI_A || srcInstr->m_opcode == Js::OpCode::StElemI_A_Strict));
Assert(memopInstr && (memopInstr->m_opcode == Js::OpCode::Memcopy || memopInstr->m_opcode == Js::OpCode::Memset));
Assert(block);
const bool isDst = srcInstr->m_opcode == Js::OpCode::StElemI_A || srcInstr->m_opcode == Js::OpCode::StElemI_A_Strict;
IR::RegOpnd* opnd = (isDst ? memopInstr->GetDst() : memopInstr->GetSrc1())->AsIndirOpnd()->GetBaseOpnd();
IR::ArrayRegOpnd* arrayOpnd = opnd->IsArrayRegOpnd() ? opnd->AsArrayRegOpnd() : nullptr;
IR::Instr* topInstr = srcInstr;
if (srcInstr->extractedUpperBoundCheckWithoutHoisting)
{
IR::Instr *upperBoundCheck = FindUpperBoundsCheckInstr(srcInstr);
Assert(upperBoundCheck && upperBoundCheck != srcInstr);
topInstr = upperBoundCheck;
}
if (srcInstr->loadedArrayHeadSegmentLength && arrayOpnd && arrayOpnd->HeadSegmentLengthSym())
{
IR::Instr *arrayLoadSegmentHeadLength = FindArraySegmentLoadInstr(topInstr);
Assert(arrayLoadSegmentHeadLength);
topInstr = arrayLoadSegmentHeadLength;
arrayOpnd->RemoveHeadSegmentLengthSym();
}
if (srcInstr->loadedArrayHeadSegment && arrayOpnd && arrayOpnd->HeadSegmentSym())
{
IR::Instr *arrayLoadSegmentHead = FindArraySegmentLoadInstr(topInstr);
Assert(arrayLoadSegmentHead);
topInstr = arrayLoadSegmentHead;
arrayOpnd->RemoveHeadSegmentSym();
}
// If no bounds checks are present, simply look upward for instructions added for instrumentation
if(topInstr == srcInstr)
{
bool checkPrev = true;
while (checkPrev)
{
switch (topInstr->m_prev->m_opcode)
{
case Js::OpCode::BailOnNotArray:
case Js::OpCode::NoImplicitCallUses:
case Js::OpCode::ByteCodeUses:
topInstr = topInstr->m_prev;
checkPrev = !!topInstr->m_prev;
break;
default:
checkPrev = false;
break;
}
}
}
while (topInstr != srcInstr)
{
IR::Instr* removeInstr = topInstr;
topInstr = topInstr->m_next;
Assert(
removeInstr->m_opcode == Js::OpCode::BailOnNotArray ||
removeInstr->m_opcode == Js::OpCode::NoImplicitCallUses ||
removeInstr->m_opcode == Js::OpCode::ByteCodeUses ||
removeInstr->m_opcode == Js::OpCode::LdIndir ||
removeInstr->m_opcode == Js::OpCode::BoundCheck
);
if (removeInstr->m_opcode != Js::OpCode::ByteCodeUses)
{
block->RemoveInstr(removeInstr);
}
}
this->ConvertToByteCodeUses(srcInstr);
}
void
GlobOpt::GetMemOpSrcInfo(Loop* loop, IR::Instr* instr, IR::RegOpnd*& base, IR::RegOpnd*& index, IRType& arrayType)
{
Assert(instr && (instr->m_opcode == Js::OpCode::LdElemI_A || instr->m_opcode == Js::OpCode::StElemI_A || instr->m_opcode == Js::OpCode::StElemI_A_Strict));
IR::Opnd* arrayOpnd = instr->m_opcode == Js::OpCode::LdElemI_A ? instr->GetSrc1() : instr->GetDst();
Assert(arrayOpnd->IsIndirOpnd());
IR::IndirOpnd* indirArrayOpnd = arrayOpnd->AsIndirOpnd();
IR::RegOpnd* baseOpnd = (IR::RegOpnd*)indirArrayOpnd->GetBaseOpnd();
IR::RegOpnd* indexOpnd = (IR::RegOpnd*)indirArrayOpnd->GetIndexOpnd();
Assert(baseOpnd);
Assert(indexOpnd);
// Process Out Params
base = baseOpnd;
index = indexOpnd;
arrayType = indirArrayOpnd->GetType();
}
void
GlobOpt::EmitMemop(Loop * loop, LoopCount *loopCount, const MemOpEmitData* emitData)
{
Assert(emitData);
Assert(emitData->candidate);
Assert(emitData->stElemInstr);
Assert(emitData->stElemInstr->m_opcode == Js::OpCode::StElemI_A || emitData->stElemInstr->m_opcode == Js::OpCode::StElemI_A_Strict);
IR::BailOutKind bailOutKind = emitData->bailOutKind;
const byte unroll = emitData->inductionVar.unroll;
Assert(unroll == 1);
const bool isInductionVariableChangeIncremental = emitData->inductionVar.isIncremental;
const bool bIndexAlreadyChanged = emitData->candidate->bIndexAlreadyChanged;
IR::RegOpnd *baseOpnd = nullptr;
IR::RegOpnd *indexOpnd = nullptr;
IRType dstType;
GetMemOpSrcInfo(loop, emitData->stElemInstr, baseOpnd, indexOpnd, dstType);
Func *localFunc = loop->GetFunc();
// Handle bailout info
EnsureBailTarget(loop);
Assert(bailOutKind != IR::BailOutInvalid);
// Keep only the array-bits bailouts. Consider handling these bailouts instead of simply ignoring them
bailOutKind &= IR::BailOutForArrayBits;
// Add our custom bailout to handle Op_MemCopy return value.
bailOutKind |= IR::BailOutOnMemOpError;
BailOutInfo *const bailOutInfo = loop->bailOutInfo;
Assert(bailOutInfo);
IR::Instr *insertBeforeInstr = bailOutInfo->bailOutInstr;
Assert(insertBeforeInstr);
IR::Opnd *sizeOpnd = GenerateInductionVariableChangeForMemOp(loop, unroll, insertBeforeInstr);
IR::RegOpnd *startIndexOpnd = GenerateStartIndexOpndForMemop(loop, indexOpnd, sizeOpnd, isInductionVariableChangeIncremental, bIndexAlreadyChanged, insertBeforeInstr);
IR::IndirOpnd* dstOpnd = IR::IndirOpnd::New(baseOpnd, startIndexOpnd, dstType, localFunc);
IR::Opnd *src1;
const bool isMemset = emitData->candidate->IsMemSet();
// Get the source according to the memop type
if (isMemset)
{
MemSetEmitData* data = (MemSetEmitData*)emitData;
const Loop::MemSetCandidate* candidate = data->candidate->AsMemSet();
if (candidate->srcSym)
{
IR::RegOpnd* regSrc = IR::RegOpnd::New(candidate->srcSym, candidate->srcSym->GetType(), func);
regSrc->SetIsJITOptimizedReg(true);
src1 = regSrc;
}
else
{
src1 = IR::AddrOpnd::New(candidate->constant.ToVar(localFunc), IR::AddrOpndKindConstantAddress, localFunc);
}
}
else
{
Assert(emitData->candidate->IsMemCopy());
MemCopyEmitData* data = (MemCopyEmitData*)emitData;
Assert(data->ldElemInstr);
Assert(data->ldElemInstr->m_opcode == Js::OpCode::LdElemI_A);
IR::RegOpnd *srcBaseOpnd = nullptr;
IR::RegOpnd *srcIndexOpnd = nullptr;
IRType srcType;
GetMemOpSrcInfo(loop, data->ldElemInstr, srcBaseOpnd, srcIndexOpnd, srcType);
Assert(GetVarSymID(srcIndexOpnd->GetStackSym()) == GetVarSymID(indexOpnd->GetStackSym()));
src1 = IR::IndirOpnd::New(srcBaseOpnd, startIndexOpnd, srcType, localFunc);
}
// Generate the memset/memcopy instruction
IR::Instr* memopInstr = IR::BailOutInstr::New(isMemset ? Js::OpCode::Memset : Js::OpCode::Memcopy, bailOutKind, bailOutInfo, localFunc);
memopInstr->SetDst(dstOpnd);
memopInstr->SetSrc1(src1);
memopInstr->SetSrc2(sizeOpnd);
insertBeforeInstr->InsertBefore(memopInstr);
loop->memOpInfo->instr = memopInstr;
#if DBG_DUMP
if (DO_MEMOP_TRACE())
{
char valueTypeStr[VALUE_TYPE_MAX_STRING_SIZE];
baseOpnd->GetValueType().ToString(valueTypeStr);
const int loopCountBufSize = 16;
char16 loopCountBuf[loopCountBufSize];
if (loopCount->LoopCountMinusOneSym())
{
swprintf_s(loopCountBuf, _u("s%u"), loopCount->LoopCountMinusOneSym()->m_id);
}
else
{
swprintf_s(loopCountBuf, _u("%u"), loopCount->LoopCountMinusOneConstantValue() + 1);
}
if (isMemset)
{
const Loop::MemSetCandidate* candidate = emitData->candidate->AsMemSet();
const int constBufSize = 32;
char16 constBuf[constBufSize];
if (candidate->srcSym)
{
swprintf_s(constBuf, _u("s%u"), candidate->srcSym->m_id);
}
else
{
switch (candidate->constant.type)
{
case TyInt8:
case TyInt16:
case TyInt32:
case TyInt64:
swprintf_s(constBuf, sizeof(IntConstType) == 8 ? _u("%lld") : _u("%d"), candidate->constant.u.intConst.value);
break;
case TyFloat32:
case TyFloat64:
swprintf_s(constBuf, _u("%.4f"), candidate->constant.u.floatConst.value);
break;
case TyVar:
swprintf_s(constBuf, sizeof(Js::Var) == 8 ? _u("0x%.16llX") : _u("0x%.8X"), candidate->constant.u.varConst.value);
break;
default:
AssertMsg(false, "Unsupported constant type");
swprintf_s(constBuf, _u("Unknown"));
break;
}
}
TRACE_MEMOP_PHASE(MemSet, loop, emitData->stElemInstr,
_u("ValueType: %S, Base: s%u, Index: s%u, Constant: %s, LoopCount: %s, IsIndexChangedBeforeUse: %d"),
valueTypeStr,
candidate->base,
candidate->index,
constBuf,
loopCountBuf,
bIndexAlreadyChanged);
}
else
{
const Loop::MemCopyCandidate* candidate = emitData->candidate->AsMemCopy();
TRACE_MEMOP_PHASE(MemCopy, loop, emitData->stElemInstr,
_u("ValueType: %S, StBase: s%u, Index: s%u, LdBase: s%u, LoopCount: %s, IsIndexChangedBeforeUse: %d"),
valueTypeStr,
candidate->base,
candidate->index,
candidate->ldBase,
loopCountBuf,
bIndexAlreadyChanged);
}
}
#endif
Assert(noImplicitCallUsesToInsert->Count() == 0);
bool isLikelyJsArray;
if (emitData->stElemInstr->GetDst()->IsIndirOpnd())
{
baseOpnd = emitData->stElemInstr->GetDst()->AsIndirOpnd()->GetBaseOpnd();
isLikelyJsArray = baseOpnd->GetValueType().IsLikelyArrayOrObjectWithArray();
ProcessNoImplicitCallArrayUses(baseOpnd, baseOpnd->IsArrayRegOpnd() ? baseOpnd->AsArrayRegOpnd() : nullptr, emitData->stElemInstr, isLikelyJsArray, true);
}
RemoveMemOpSrcInstr(memopInstr, emitData->stElemInstr, emitData->block);
if (!isMemset)
{
IR::Instr* ldElemInstr = ((MemCopyEmitData*)emitData)->ldElemInstr;
if (ldElemInstr->GetSrc1()->IsIndirOpnd())
{
baseOpnd = ldElemInstr->GetSrc1()->AsIndirOpnd()->GetBaseOpnd();
isLikelyJsArray = baseOpnd->GetValueType().IsLikelyArrayOrObjectWithArray();
ProcessNoImplicitCallArrayUses(baseOpnd, baseOpnd->IsArrayRegOpnd() ? baseOpnd->AsArrayRegOpnd() : nullptr, ldElemInstr, isLikelyJsArray, true);
}
RemoveMemOpSrcInstr(memopInstr, ldElemInstr, emitData->block);
}
InsertNoImplicitCallUses(memopInstr);
noImplicitCallUsesToInsert->Clear();
}
bool
GlobOpt::InspectInstrForMemSetCandidate(Loop* loop, IR::Instr* instr, MemSetEmitData* emitData, bool& errorInInstr)
{
Assert(emitData && emitData->candidate && emitData->candidate->IsMemSet());
Loop::MemSetCandidate* candidate = (Loop::MemSetCandidate*)emitData->candidate;
if (instr->m_opcode == Js::OpCode::StElemI_A || instr->m_opcode == Js::OpCode::StElemI_A_Strict)
{
if (instr->GetDst()->IsIndirOpnd()
&& (GetVarSymID(instr->GetDst()->AsIndirOpnd()->GetBaseOpnd()->GetStackSym()) == candidate->base)
&& (GetVarSymID(instr->GetDst()->AsIndirOpnd()->GetIndexOpnd()->GetStackSym()) == candidate->index)
)
{
Assert(instr->IsProfiledInstr());
emitData->stElemInstr = instr;
emitData->bailOutKind = instr->GetBailOutKind();
return true;
}
TRACE_MEMOP_PHASE_VERBOSE(MemSet, loop, instr, _u("Orphan StElemI_A detected"));
errorInInstr = true;
}
else if (instr->m_opcode == Js::OpCode::LdElemI_A)
{
TRACE_MEMOP_PHASE_VERBOSE(MemSet, loop, instr, _u("Orphan LdElemI_A detected"));
errorInInstr = true;
}
return false;
}
bool
GlobOpt::InspectInstrForMemCopyCandidate(Loop* loop, IR::Instr* instr, MemCopyEmitData* emitData, bool& errorInInstr)
{
Assert(emitData && emitData->candidate && emitData->candidate->IsMemCopy());
Loop::MemCopyCandidate* candidate = (Loop::MemCopyCandidate*)emitData->candidate;
if (instr->m_opcode == Js::OpCode::StElemI_A || instr->m_opcode == Js::OpCode::StElemI_A_Strict)
{
if (
instr->GetDst()->IsIndirOpnd() &&
(GetVarSymID(instr->GetDst()->AsIndirOpnd()->GetBaseOpnd()->GetStackSym()) == candidate->base) &&
(GetVarSymID(instr->GetDst()->AsIndirOpnd()->GetIndexOpnd()->GetStackSym()) == candidate->index)
)
{
Assert(instr->IsProfiledInstr());
emitData->stElemInstr = instr;
emitData->bailOutKind = instr->GetBailOutKind();
// Still need to find the LdElem
return false;
}
TRACE_MEMOP_PHASE_VERBOSE(MemCopy, loop, instr, _u("Orphan StElemI_A detected"));
errorInInstr = true;
}
else if (instr->m_opcode == Js::OpCode::LdElemI_A)
{
if (
emitData->stElemInstr &&
instr->GetSrc1()->IsIndirOpnd() &&
(GetVarSymID(instr->GetSrc1()->AsIndirOpnd()->GetBaseOpnd()->GetStackSym()) == candidate->ldBase) &&
(GetVarSymID(instr->GetSrc1()->AsIndirOpnd()->GetIndexOpnd()->GetStackSym()) == candidate->index)
)
{
Assert(instr->IsProfiledInstr());
emitData->ldElemInstr = instr;
ValueType stValueType = emitData->stElemInstr->GetDst()->AsIndirOpnd()->GetBaseOpnd()->GetValueType();
ValueType ldValueType = emitData->ldElemInstr->GetSrc1()->AsIndirOpnd()->GetBaseOpnd()->GetValueType();
if (stValueType != ldValueType)
{
#if DBG_DUMP
char16 stValueTypeStr[VALUE_TYPE_MAX_STRING_SIZE];
stValueType.ToString(stValueTypeStr);
char16 ldValueTypeStr[VALUE_TYPE_MAX_STRING_SIZE];
ldValueType.ToString(ldValueTypeStr);
TRACE_MEMOP_PHASE_VERBOSE(MemCopy, loop, instr, _u("for mismatch in Load(%s) and Store(%s) value type"), ldValueTypeStr, stValueTypeStr);
#endif
errorInInstr = true;
return false;
}
// We found both instructions for this candidate
return true;
}
TRACE_MEMOP_PHASE_VERBOSE(MemCopy, loop, instr, _u("Orphan LdElemI_A detected"));
errorInInstr = true;
}
return false;
}
// The caller is responsible for freeing the memory allocated between inOrderEmitData[iEmitData -> end]
bool
GlobOpt::ValidateMemOpCandidates(Loop * loop, _Out_writes_(iEmitData) MemOpEmitData** inOrderEmitData, int& iEmitData)
{
AnalysisAssert(iEmitData == (int)loop->memOpInfo->candidates->Count());
// We iterate over the second block of the loop only. MemOp works only if the loop has exactly 2 blocks
Assert(loop->blockList.HasTwo());
Loop::MemOpList::Iterator iter(loop->memOpInfo->candidates);
BasicBlock* bblock = loop->blockList.Head()->next;
Loop::MemOpCandidate* candidate = nullptr;
MemOpEmitData* emitData = nullptr;
// Iterate backward because the list of candidates is reversed
FOREACH_INSTR_BACKWARD_IN_BLOCK(instr, bblock)
{
if (!candidate)
{
// Time to check next candidate
if (!iter.Next())
{
// We have been through the whole list of candidates, finish
break;
}
candidate = iter.Data();
if (!candidate)
{
continue;
}
// Common check for memset and memcopy
Loop::InductionVariableChangeInfo inductionVariableChangeInfo = { 0, 0 };
// Get the inductionVariable changeInfo
if (!loop->memOpInfo->inductionVariableChangeInfoMap->TryGetValue(candidate->index, &inductionVariableChangeInfo))
{
TRACE_MEMOP_VERBOSE(loop, nullptr, _u("MemOp skipped (s%d): no induction variable"), candidate->base);
return false;
}
if (inductionVariableChangeInfo.unroll != candidate->count)
{
TRACE_MEMOP_VERBOSE(loop, nullptr, _u("MemOp skipped (s%d): not matching unroll count"), candidate->base);
return false;
}
if (candidate->IsMemSet())
{
Assert(!PHASE_OFF(Js::MemSetPhase, this->func));
emitData = JitAnew(this->alloc, MemSetEmitData);
}
else
{
Assert(!PHASE_OFF(Js::MemCopyPhase, this->func));
// Specific check for memcopy
Assert(candidate->IsMemCopy());
Loop::MemCopyCandidate* memcopyCandidate = candidate->AsMemCopy();
if (memcopyCandidate->base == Js::Constants::InvalidSymID
|| memcopyCandidate->ldBase == Js::Constants::InvalidSymID
|| (memcopyCandidate->ldCount != memcopyCandidate->count))
{
TRACE_MEMOP_PHASE(MemCopy, loop, nullptr, _u("(s%d): not matching ldElem and stElem"), candidate->base);
return false;
}
emitData = JitAnew(this->alloc, MemCopyEmitData);
}
Assert(emitData);
emitData->block = bblock;
emitData->inductionVar = inductionVariableChangeInfo;
emitData->candidate = candidate;
}
bool errorInInstr = false;
bool candidateFound = candidate->IsMemSet() ?
InspectInstrForMemSetCandidate(loop, instr, (MemSetEmitData*)emitData, errorInInstr)
: InspectInstrForMemCopyCandidate(loop, instr, (MemCopyEmitData*)emitData, errorInInstr);
if (errorInInstr)
{
JitAdelete(this->alloc, emitData);
return false;
}
if (candidateFound)
{
AnalysisAssert(iEmitData > 0);
if (iEmitData == 0)
{
// Explicit for OACR
break;
}
inOrderEmitData[--iEmitData] = emitData;
candidate = nullptr;
emitData = nullptr;
}
} NEXT_INSTR_BACKWARD_IN_BLOCK;
if (iter.IsValid())
{
TRACE_MEMOP(loop, nullptr, _u("Candidates not found in loop while validating"));
return false;
}
return true;
}
void
GlobOpt::ProcessMemOp()
{
FOREACH_LOOP_IN_FUNC_EDITING(loop, this->func)
{
if (HasMemOp(loop))
{
const int candidateCount = loop->memOpInfo->candidates->Count();
Assert(candidateCount > 0);
LoopCount * loopCount = GetOrGenerateLoopCountForMemOp(loop);
// If loopCount is not available we cannot continue with memop
if (!loopCount || !(loopCount->LoopCountMinusOneSym() || loopCount->LoopCountMinusOneConstantValue()))
{
TRACE_MEMOP(loop, nullptr, _u("MemOp skipped for no loop count"));
loop->doMemOp = false;
loop->memOpInfo->candidates->Clear();
continue;
}
// The list is reversed, check them and place them in order in the following array
MemOpEmitData** inOrderCandidates = JitAnewArray(this->alloc, MemOpEmitData*, candidateCount);
int i = candidateCount;
if (ValidateMemOpCandidates(loop, inOrderCandidates, i))
{
Assert(i == 0);
// Process the valid MemOp candidate in order.
for (; i < candidateCount; ++i)
{
// Emit
EmitMemop(loop, loopCount, inOrderCandidates[i]);
JitAdelete(this->alloc, inOrderCandidates[i]);
}
}
else
{
Assert(i != 0);
for (; i < candidateCount; ++i)
{
JitAdelete(this->alloc, inOrderCandidates[i]);
}
// One of the memop candidates did not validate. Do not emit for this loop.
loop->doMemOp = false;
loop->memOpInfo->candidates->Clear();
}
// Free memory
JitAdeleteArray(this->alloc, candidateCount, inOrderCandidates);
}
} NEXT_LOOP_EDITING;
}
void GlobOpt::PRE::FieldPRE(Loop *loop)
{
JitArenaAllocator *alloc = this->globOpt->tempAlloc;
this->FindPossiblePRECandidates(loop, alloc);
this->PreloadPRECandidates(loop);
this->RemoveOverlyOptimisticInitialValues(loop);
}
bool
GlobOpt::PRE::InsertSymDefinitionInLandingPad(StackSym * sym, Loop * loop, Sym ** objPtrCopyPropSym)
{
Assert(sym->IsSingleDef());
IR::Instr * symDefInstr = sym->GetInstrDef();
if (!GlobOpt::IsPREInstrSequenceCandidateLoad(symDefInstr->m_opcode))
{
return false;
}
IR::Opnd * symDefInstrSrc1 = symDefInstr->GetSrc1();
if (symDefInstrSrc1->IsSymOpnd())
{
Assert(symDefInstrSrc1->AsSymOpnd()->m_sym->IsPropertySym());
// $L1
// T1 = o.x (v1|T3)
// T2 = T1.y (v2|T4) <-- T1 is not live in the loop landing pad
// jmp $L1
// Trying to make T1 live in the landing pad
// o.x
PropertySym* propSym = symDefInstrSrc1->AsSymOpnd()->m_sym->AsPropertySym();
if (candidates->candidatesBv->Test(propSym->m_id))
{
// If propsym is a PRE candidate, then it must have had the same value on all back edges.
// So, just look up the value on one of the back edges.
BasicBlock* loopTail = loop->GetAnyTailBlock();
Value * valueOnBackEdge = loopTail->globOptData.FindValue(propSym);
// If o.x is not invariant in the loop, we can't use the preloaded value of o.x.y in the landing pad
Value * valueInLandingPad = loop->landingPad->globOptData.FindValue(propSym);
if (valueOnBackEdge->GetValueNumber() != valueInLandingPad->GetValueNumber())
{
return false;
}
*objPtrCopyPropSym = valueOnBackEdge->GetValueInfo()->GetSymStore();
if (candidates->candidatesToProcess->Test(propSym->m_id))
{
GlobHashBucket bucket;
bucket.element = valueOnBackEdge;
bucket.value = propSym;
if (!PreloadPRECandidate(loop, &bucket))
{
return false;
}
Assert(!candidates->candidatesToProcess->Test(propSym->m_id));
Assert(loop->landingPad->globOptData.IsLive(valueOnBackEdge->GetValueInfo()->GetSymStore()));
// Inserted T3 = o.x
// Now, we want to
// 1. Insert T1 = o.x
// 2. Insert T4 = T1.y
            // 3. Identify T3 as the objptr copy prop sym for T1, and make T3.y live on the back-edges
// #1 is done next. #2 and #3 are done as part of preloading T1.y
// Insert T1 = o.x
if (!InsertPropertySymPreloadInLandingPad(symDefInstr->Copy(), loop, propSym))
{
return false;
}
return true;
}
else
{
// o.x was already processed as a PRE candidate. If we were successful in preloading o.x,
// we can now insert T1 = o.x
if (loop->landingPad->globOptData.IsLive(*objPtrCopyPropSym))
{
// insert T1 = o.x
if (!InsertPropertySymPreloadInLandingPad(symDefInstr->Copy(), loop, propSym))
{
return false;
}
return true;
}
else
{
return false;
}
}
}
else
{
return false;
}
}
else if (symDefInstrSrc1->IsRegOpnd())
{
// T2 = T1
// T3 = T2.y
// trying to insert def of T2
// T1
StackSym * symDefInstrSrc1Sym = symDefInstrSrc1->AsRegOpnd()->GetStackSym();
if (!loop->landingPad->globOptData.IsLive(symDefInstrSrc1Sym))
{
if (symDefInstrSrc1Sym->IsSingleDef())
{
if (!InsertSymDefinitionInLandingPad(symDefInstrSrc1Sym, loop, objPtrCopyPropSym))
{
return false;
}
}
}
else
{
*objPtrCopyPropSym = symDefInstrSrc1Sym;
}
if (!(OpCodeAttr::TempNumberTransfer(symDefInstr->m_opcode) && OpCodeAttr::TempObjectTransfer(symDefInstr->m_opcode)))
{
*objPtrCopyPropSym = sym;
}
IR::Instr * instr = symDefInstr->Copy();
if (instr->m_opcode == Js::OpCode::BytecodeArgOutCapture)
{
instr->m_opcode = Js::OpCode::Ld_A;
}
InsertInstrInLandingPad(instr, loop);
return true;
}
else
{
return false;
}
}
void
GlobOpt::PRE::InsertInstrInLandingPad(IR::Instr * instr, Loop * loop)
{
instr->GetSrc1()->SetIsJITOptimizedReg(true);
if (instr->GetDst())
{
instr->GetDst()->SetIsJITOptimizedReg(true);
loop->landingPad->globOptData.liveVarSyms->Set(instr->GetDst()->GetStackSym()->m_id);
}
if (instr->HasAnyImplicitCalls())
{
IR::Instr * bailInstr = globOpt->EnsureDisableImplicitCallRegion(loop);
bailInstr->InsertBefore(instr);
}
else if (loop->endDisableImplicitCall)
{
loop->endDisableImplicitCall->InsertBefore(instr);
}
else
{
loop->landingPad->InsertAfter(instr);
}
instr->ClearByteCodeOffset();
instr->SetByteCodeOffset(loop->landingPad->GetFirstInstr());
}
IR::Instr *
GlobOpt::PRE::InsertPropertySymPreloadInLandingPad(IR::Instr * ldInstr, Loop * loop, PropertySym * propertySym)
{
IR::SymOpnd *ldSrc = ldInstr->GetSrc1()->AsSymOpnd();
if (ldSrc->m_sym != propertySym)
{
// It's possible that the property syms are different but have equivalent objPtrs. Verify their values.
Value *val1 = globOpt->CurrentBlockData()->FindValue(ldSrc->m_sym->AsPropertySym()->m_stackSym);
Value *val2 = globOpt->CurrentBlockData()->FindValue(propertySym->m_stackSym);
if (!val1 || !val2 || val1->GetValueNumber() != val2->GetValueNumber())
{
return nullptr;
}
}
// Consider: Shouldn't be necessary once we have copy-prop in prepass...
ldInstr->GetSrc1()->AsSymOpnd()->m_sym = propertySym;
ldSrc = ldInstr->GetSrc1()->AsSymOpnd();
if (ldSrc->IsPropertySymOpnd())
{
IR::PropertySymOpnd *propSymOpnd = ldSrc->AsPropertySymOpnd();
IR::PropertySymOpnd *newPropSymOpnd;
newPropSymOpnd = propSymOpnd->AsPropertySymOpnd()->CopyWithoutFlowSensitiveInfo(this->globOpt->func);
ldInstr->ReplaceSrc1(newPropSymOpnd);
}
if (ldInstr->GetDst())
{
loop->landingPad->globOptData.liveVarSyms->Set(ldInstr->GetDst()->GetStackSym()->m_id);
}
InsertInstrInLandingPad(ldInstr, loop);
return ldInstr;
}
void
GlobOpt::PRE::MakePropertySymLiveOnBackEdges(PropertySym * propertySym, Loop * loop, Value * valueToAdd)
{
BasicBlock * loopHeader = loop->GetHeadBlock();
FOREACH_PREDECESSOR_BLOCK(blockPred, loopHeader)
{
if (!loop->IsDescendentOrSelf(blockPred->loop))
{
// Not a loop back-edge
continue;
}
// Insert it in the value table
blockPred->globOptData.SetValue(valueToAdd, propertySym);
// Make it a live field
blockPred->globOptData.liveFields->Set(propertySym->m_id);
} NEXT_PREDECESSOR_BLOCK;
}
void GlobOpt::PRE::RemoveOverlyOptimisticInitialValues(Loop * loop)
{
BasicBlock * landingPad = loop->landingPad;
// For a property sym whose obj ptr sym wasn't live in the landing pad, we can optimistically (if the obj ptr sym was
// single def) insert an initial value in the landing pad, with the hope that PRE could make the obj ptr sym live.
// But, if PRE couldn't make the obj ptr sym live, we need to clear the value for the property sym from the landing pad
for (auto it = loop->initialValueFieldMap.GetIteratorWithRemovalSupport(); it.IsValid(); it.MoveNext())
{
PropertySym * propertySym = it.CurrentKey();
StackSym * objPtrSym = propertySym->m_stackSym;
if (!landingPad->globOptData.IsLive(objPtrSym))
{
Value * landingPadPropSymValue = landingPad->globOptData.FindValue(propertySym);
Assert(landingPadPropSymValue);
Assert(landingPadPropSymValue->GetValueNumber() == it.CurrentValue()->GetValueNumber());
Assert(landingPadPropSymValue->GetValueInfo()->GetSymStore() == propertySym);
landingPad->globOptData.ClearSymValue(propertySym);
it.RemoveCurrent();
}
}
}
#if DBG_DUMP
void GlobOpt::PRE::TraceFailedPreloadInLandingPad(const Loop *const loop, PropertySym * propertySym, const char16* reason) const
{
if (PHASE_TRACE(Js::FieldPREPhase, this->globOpt->func))
{
int32 propertyId = propertySym->m_propertyId;
SymID objectSymId = propertySym->m_stackSym->m_id;
char16 propSymStr[32];
switch (propertySym->m_fieldKind)
{
case PropertyKindData:
if (JITManager::GetJITManager()->IsOOPJITEnabled())
{
swprintf_s(propSymStr, _u("s%d->#%d"), objectSymId, propertyId);
}
else
{
Js::PropertyRecord const* fieldName = propertySym->m_func->GetInProcThreadContext()->GetPropertyRecord(propertyId);
swprintf_s(propSymStr, _u("s%d->%s"), objectSymId, fieldName->GetBuffer());
}
break;
case PropertyKindSlots:
case PropertyKindSlotArray:
swprintf_s(propSymStr, _u("s%d[%d]"), objectSymId, propertyId);
break;
case PropertyKindLocalSlots:
swprintf_s(propSymStr, _u("s%dl[%d]"), objectSymId, propertyId);
break;
default:
AssertMsg(0, "Unknown field kind");
break;
}
Output::Print(_u("** TRACE: Field PRE: "));
this->globOpt->func->DumpFullFunctionName();
Output::Print(_u(": Failed to pre-load (%s) in landing pad of loop #%d. Reason: %s "), propSymStr, loop->GetLoopNumber(), reason);
Output::Print(_u("\n"));
}
}
#endif
```
|
Salisbury Law Courts is a Crown Court venue, which deals with criminal cases, and a County Court venue, which deals with civil cases, in Wilton Road, Salisbury, England. It also accommodates the local magistrates' court.
History
All magistrates' court hearings in Salisbury were originally held in the courtroom in the west wing of Salisbury Guildhall. Additional judicial facilities, to accommodate the crown and county courts, were established in Alexandra House in St John's Street in the mid-1980s. However, as the number of court cases in Salisbury grew, it became necessary to commission a more modern courthouse to accommodate the crown and county courts as well as the magistrates' court. The site selected by the Lord Chancellor's Department, on the north side of Wilton Road, had been occupied by the recreation ground for the Old Manor Hospital which had closed in 2003.
Work on the new building started in October 2007. It was designed by Stride Treglown / Feilden & Mawson in the Modernist style, built in buff brick and glass by Mansall Construction at a cost of £18 million, and was officially opened in September 2009. The design involved a symmetrical main frontage in three sections facing onto Wilton Road. The central section of eleven bays featured a single-storey entrance block, which was projected forward and accessed by a glass sliding doorway at the right-hand end. The first floor was fenestrated by a row of glass panels fronted by a forward-projecting slatted structure. The end sections were slightly projected forward and faced entirely with buff brick with no fenestration. A Royal coat of arms was mounted on the right-hand end section at first floor level. At roof level, the building featured prominent modillioned eaves. Internally, the building was laid out to accommodate six courtrooms. The project was awarded first prize under the courts scheme at the BREEAM Awards 2010.
Notable cases have included the trial and conviction of an Afghan man, Lawangeen Abdulrahimzai, in January 2023, for the murder of an aspiring Royal Marine, Thomas Roberts; he had already killed two other men in Serbia.
Notes
References
External links
Court information
Buildings and structures in Salisbury
Crown Court buildings
Government buildings completed in 2009
Court buildings in England
|
Saint-Saphorin is a municipality in the Swiss canton of Vaud, located on the shore of Lake Geneva, in the district of Lavaux-Oron.
History
Glerula or Calarona (from , 'gravel') was a Gallo-Roman village. The chroniclers Gregory of Tours and Marius of Avenches described what is now called the Tauredunum event of 563. A landslip into the eastern end of Lake Geneva caused a tsunami which swept along the lake causing immense damage. Glerula was among the villages which were destroyed. It was never rebuilt. Instead, a new community was founded a short distance to the east, taking its name from the new church there, dedicated to Saint-Symphorien. That name has, with the passage of years, transformed into Saint-Saphorin.
Saint-Saphorin (Lavaux) is first mentioned in 1138 as de Sancto Sufforiano.
Geography
Saint-Saphorin (Lavaux) has an area, , of . Of this area, or 56.2% is used for agricultural purposes, while or 11.2% is forested. Of the rest of the land, or 30.3% is settled (buildings or roads).
Of the built up area, housing and buildings made up 11.2% and transportation infrastructure made up 18.0%. while parks, green belts and sports fields made up 1.1%. Out of the forested land, 9.0% of the total land area is heavily forested and 2.2% is covered with orchards or small clusters of trees. Of the agricultural land, 11.2% is used for growing crops and 12.4% is pastures, while 32.6% is used for orchards or vine crops.
The municipality was part of the Lavaux District until it was dissolved on 31 August 2006, and Saint-Saphorin (Lavaux) became part of the new district of Lavaux-Oron.
It consists of the village of Saint-Saphorin and the hamlets of Glérolles, Les Faverges, Ogoz and Lignières.
Coat of arms
The blazon of the municipal coat of arms is Per fess Argent and Gules, overall a Bend wavy counterchanged.
Demographics
Saint-Saphorin (Lavaux) has a population () of . , 24.9% of the population are resident foreign nationals. Over the last 10 years (1999–2009) the population has changed at a rate of 2.2%. It has changed at a rate of -5% due to migration and at a rate of 6.6% due to births and deaths.
Most of the population () speaks French (307 or 88.2%), with German being second most common (19 or 5.5%) and English being third (10 or 2.9%). There are 2 people who speak Italian.
Of the population in the municipality 97 or about 27.9% were born in Saint-Saphorin (Lavaux) and lived there in 2000. There were 119 or 34.2% who were born in the same canton, while 63 or 18.1% were born somewhere else in Switzerland, and 61 or 17.5% were born outside of Switzerland.
In there were 4 live births to Swiss citizens and 3 births to non-Swiss citizens, and in same time span there were 2 deaths of Swiss citizens. Ignoring immigration and emigration, the population of Swiss citizens increased by 2 while the foreign population increased by 3. There were 2 Swiss men who emigrated from Switzerland and 1 Swiss woman who immigrated back to Switzerland. At the same time, there were 3 non-Swiss men and 8 non-Swiss women who immigrated from another country to Switzerland. The total Swiss population change in 2008 (from all sources, including moves across municipal borders) was a decrease of 21 and the non-Swiss population increased by 15 people. This represents a population growth rate of -1.6%.
The age distribution of the population () is children and teenagers (0–19 years old) make up 20.7% of the population, while adults (20–64 years old) make up 66.1% and seniors (over 64 years old) make up 13.2%.
, there were 144 people who were single and never married in the municipality. There were 148 married individuals, 16 widows or widowers and 40 individuals who are divorced.
, there were 159 private households in the municipality, and an average of 2.1 persons per household. There were 60 households that consist of only one person and 7 households with five or more people. Out of a total of 164 households that answered this question, 36.6% were households made up of just one person. Of the rest of the households, there are 49 married couples without children, 40 married couples with children There were 8 single parents with a child or children. There were 2 households that were made up of unrelated people and 5 households that were made up of some sort of institution or another collective housing.
there were 53 single family homes (or 48.2% of the total) out of a total of 110 inhabited buildings. There were 39 multi-family buildings (35.5%), along with 12 multi-purpose buildings that were mostly used for housing (10.9%) and 6 other use buildings (commercial or industrial) that also had some housing (5.5%). Of the single family homes 37 were built before 1919, while 3 were built between 1990 and 2000. The most multi-family homes (28) were built before 1919 and the next most (3) were built between 1961 and 1970.
there were 198 apartments in the municipality. The most common apartment size was 4 rooms of which there were 54. There were 15 single room apartments and 49 apartments with five or more rooms. Of these apartments, a total of 152 apartments (76.8% of the total) were permanently occupied, while 41 apartments (20.7%) were seasonally occupied and 5 apartments (2.5%) were empty. , the construction rate of new housing units was 0 new units per 1000 residents. The vacancy rate for the municipality, , was 3.03%.
The historical population is given in the following chart:
Heritage sites of national significance
The Swiss Reformed Church of Saint-Symphorien with a Gallo-Roman villa as well as part of the UNESCO World Heritage Site: Lavaux, Vineyard Terraces are listed as Swiss heritage site of national significance. The entire village of Saint-Saphorin is part of the Inventory of Swiss Heritage Sites.
Politics
In the 2007 federal election the most popular party was the SP which received 27.84% of the vote. The next three most popular parties were the FDP (19.92%), the Green Party (17.15%) and the SVP (9.64%). In the federal election, a total of 138 votes were cast, and the voter turnout was 57.5%.
Economy
, Saint-Saphorin (Lavaux) had an unemployment rate of 1.6%. , there were 28 people employed in the primary economic sector and about 6 businesses involved in this sector. 1 person was employed in the secondary sector and there was 1 business in this sector. 46 people were employed in the tertiary sector, with 12 businesses in this sector. There were 205 residents of the municipality who were employed in some capacity, of which females made up 46.3% of the workforce.
the total number of full-time equivalent jobs was 63. The number of jobs in the primary sector was 20, all of which were in agriculture. The number of jobs in the secondary sector was 1, all of which were in manufacturing. The number of jobs in the tertiary sector was 42. In the tertiary sector; 9 or 21.4% were in wholesale or retail sales or the repair of motor vehicles, 21 or 50.0% were in a hotel or restaurant, 7 or 16.7% were in the information industry, 1 was the insurance or financial industry, 1 was a technical professional or scientist.
, there were 10 workers who commuted into the municipality and 148 workers who commuted away. The municipality is a net exporter of workers, with about 14.8 workers leaving the municipality for every one entering. Of the working population, 16.6% used public transportation to get to work, and 60% used a private car.
Religion
From the , 80 or 23.0% were Roman Catholic, while 194 or 55.7% belonged to the Swiss Reformed Church. Of the rest of the population, there were 3 members of an Orthodox church (or about 0.86% of the population), and there were 4 individuals (or about 1.15% of the population) who belonged to another Christian church. There was 1 individual who was Jewish, and 60 (or about 17.24% of the population) belonged to no church, are agnostic or atheist, and 8 individuals (or about 2.30% of the population) did not answer the question.
Education
In Saint-Saphorin (Lavaux) about 144 or (41.4%) of the population have completed non-mandatory upper secondary education, and 69 or (19.8%) have completed additional higher education (either university or a Fachhochschule). Of the 69 who completed tertiary schooling, 44.9% were Swiss men, 36.2% were Swiss women, 13.0% were non-Swiss men.
In the 2009/2010 school year there were a total of 32 students in the Saint-Saphorin (Lavaux) school district. In the Vaud cantonal school system, two years of non-obligatory pre-school are provided by the political districts. During the school year, the political district provided pre-school care for a total of 665 children of which 232 children (34.9%) received subsidized pre-school care. The canton's primary school program requires students to attend for four years. There were 20 students in the municipal primary school program. The obligatory lower secondary school program lasts for six years and there were 12 students in those schools.
, there were 48 students from Saint-Saphorin (Lavaux) who attended schools outside the municipality.
Transportation
The municipality has a railway station, , on the Simplon line. It has regular service to , , and .
See also
Glérolles Castle
Notes and references
External links
Official website
Populated places on Lake Geneva
Cultural property of national significance in the canton of Vaud
|
Foote Field is a multi-purpose sports facility on the University of Alberta South Campus in Edmonton, Alberta, Canada, built as a legacy facility for the 2001 World Championships in Athletics. It was named for University of Alberta alumnus, former varsity track athlete, and philanthropist Eldon Foote, who donated $2 million toward the construction costs.
Design
Foote Field features two separate athletic fields on either side of a multi-purpose indoor facility. The East Field is a fully lit stadium that serves as home for the Alberta Golden Bears football. It features a CFL-sized surface, press box, electronic scoreboard, and has a capacity of 3,500 spectators. The East Field also features a four-lane, 125 m warm-up runway. In 2007, the field's older AstroTurf surface was replaced with a newer type of hybrid artificial surface made by AstroTurf LLC, called PureGrass.
The West Field is designed for track-and-field training and competition. It features a 400 m Beynon Sports running track, as well as separate areas for long jump/triple jump, high jump, pole vault, discus, hammer, shot put, and javelin. Inside the track is a natural-turf soccer field. Like the East Field, the West Field features a press box, electronic scoreboard, and has a capacity of 1,500 spectators.
Between the two fields is a multi-purpose indoor facility, which includes locker rooms, press box, and concession area. Other indoor facilities include classroom space, meeting rooms, and a high-performance weight-training area. The fitness centre is for the use of high-performance student-athletes only.
References
External links
U of A Foote Field page
Aerial view image of Foote Field
f
North American Soccer League (2011–2017) stadiums
Soccer venues in Canada
Canadian football venues
Athletics (track and field) venues in Canada
Sports venues in Edmonton
University of Alberta buildings
University sports venues in Canada
University and college buildings completed in 2001
2001 establishments in Alberta
|
```ruby
module ConstantSpecs
CONST_LOCATION = __LINE__
end
```
|
```typescript
import "reflect-metadata"
import { expect } from "chai"
import {
closeTestingConnections,
createTestingConnections,
reloadTestingDatabases,
} from "../../../../utils/test-utils"
import { DataSource } from "../../../../../src/data-source"
import { LegacyOracleNamingStrategy } from "../../../../../src/naming-strategy/LegacyOracleNamingStrategy"
describe("LegacyOracleNamingStrategy > create table using this naming strategy", () => {
let connections: DataSource[]
before(
async () =>
(connections = await createTestingConnections({
entities: [__dirname + "/entity/*{.js,.ts}"],
enabledDrivers: ["oracle"],
namingStrategy: new LegacyOracleNamingStrategy("hash"),
})),
)
// without reloadTestingDatabases(connections) -> tables should be created later
after(() => closeTestingConnections(connections))
it("should create the table", () =>
Promise.all(
connections.map(async (connection) => {
await expect(reloadTestingDatabases([connection])).to.be
.fulfilled
}),
))
})
```
|
Banak is a small peninsula in Porsanger Municipality in Finnmark county, Norway. It juts into the Vestbotn bay of the vast Porsangerfjorden. Located immediately north of the village of Lakselv, the peninsula has Brennelvfjorden to its east and the river Lakselva to its west. Banak is the site of Lakselv Airport, Banak and Banak Air Station.
A temperature of was recorded in Banak on 30 July 2018, considered very unusual for a location within the Arctic Circle, as part of the 2018 European heat wave. On 29 June 2022, as part of the June 2022 European heat wave, a temperature of was recorded, reported as the highest temperature ever recorded within the Arctic Circle. However, both Verkhoyansk and Fort Yukon, Alaska have recorded higher temperatures of and respectively.
References
Porsanger
Peninsulas of Troms og Finnmark
|
George Dow (30 June 1907 – 28 January 1987) was an employee of the London and North Eastern Railway (LNER) and British Railways known for his public relations work and railway maps produced for his employers, and also a writer of railway literature, in particular his three-volume history of the Great Central Railway.
Biography
George Dow joined London and North Eastern Railway (LNER) as a grade five clerk at Kings Cross railway station in London, England. He held many offices on the LNER (particularly as Press Relations Officer throughout the Second World War) and British Railways.
He is perhaps best known as a draughtsman for his diagrammatic railway maps for the LNER and London, Midland and Scottish Railway and as an inspiration to the celebrated designer Harry Beck on the tube map. Their work led to a style of design which has revolutionised the world of urban rail and metro maps.
On the creation of British Railways in 1948, he was appointed Public Relations and Publicity Officer for the Eastern and North Eastern Regions. In 1949 he took the same post at the larger London Midland Region. He rose to Divisional Manager, Birmingham, and later Stoke-on-Trent, and retired in 1968.
He also wrote twenty-one railway histories, starting with studies for the LNER, and later including his three-volume history of the Great Central Railway and a two-volume work on the carriages of the Midland Railway.
He was the founding President of the Model Engineering Trade Association in 1944, and of the Historical Model Railway Society in 1950.
He died on 28 January 1987 aged 79 years.
Bibliography
, 3 Volumes
Republished by Ian Allan 1985: Vol.1 ; Vol.2 ; Vol.3
A fuller bibliography is given in:
References
Biographical material
Further reading
Telling the Passenger Where to Get Off by Andrew Dow, Capital Transport, London, 2005.
1907 births
1987 deaths
20th-century British historians
British graphic designers
London and North Eastern Railway people
Rail transport writers
|
Scrutinyite is a rare oxide mineral and is the alpha crystalline form of lead dioxide (α-PbO2), plattnerite being the other, beta form. The mineral was first reported in 1988 and its name reflects the scrutiny and efforts required to identify it from a very limited amount of available sample material.
Identification
The synthetic orthorhombic form of lead dioxide, α-PbO2, was known from 1941. Although natural lead dioxide has been known, as the mineral plattnerite (β-PbO2), since 1845, its alpha form could only be recognized in 1981 and reliably identified in 1988.
The new mineral was spotted in several samples collected at Bingham, New Mexico and Mapimí, Durango, Mexico. It was first thought to be minium (lead tetroxide mineral) because of its high lead content, brown color and association with the other lead oxide minerals plattnerite and murdochite. Its holotype specimen consisted of crystalline plates 25–30 micrometers (µm) across and 1–2 µm thick with a total weight below 1 mg. The flakes were collected from fluorite, quartz, limonite and rosasite matrices. Identification and characterization of scrutinyite by the standard X-ray diffraction (XRD) technique were hindered by the scarcity of material and strong signal interference with plattnerite. The unusual amount of effort required for the analysis resulted in its name, derived from the word "scrutiny". The holotype specimen is preserved in the US National Museum (catalog number NMNH 165479).
Characterization
The PbO2 composition of scrutinyite was deduced by energy-dispersive X-ray spectroscopy. A slight oxygen deficiency is generally attributed to surface effects, especially in thin samples: oxygen in the surface layers of PbO2 is usually substituted by hydroxyl groups.
The crystal structure was deduced by XRD as orthorhombic, space group Pbcn (No. 60), Pearson symbol oP12; the lattice constants a = 0.497 nm, b = 0.596 nm, c = 0.544 nm, with Z = 4 (four formula units per unit cell), were in reasonable agreement with previous results obtained on synthetic samples.
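The lattice constants above fix the unit-cell volume, so the theoretical density of scrutinyite can be estimated from them. The short sketch below does this computation; the lattice constants and Z are from the article, while the Avogadro constant and the atomic masses of Pb and O are standard reference values, not from the source.

```python
# Theoretical density of scrutinyite (alpha-PbO2) from the XRD lattice constants.
N_A = 6.02214076e23       # Avogadro constant, mol^-1 (standard value)
M_PB = 207.2              # atomic mass of Pb, g/mol (standard value)
M_O = 15.999              # atomic mass of O, g/mol (standard value)

# Orthorhombic lattice constants from the article, converted to cm (1 nm = 1e-7 cm).
a, b, c = 0.497e-7, 0.596e-7, 0.544e-7
Z = 4                     # formula units of PbO2 per unit cell

volume = a * b * c                         # orthorhombic cell volume, cm^3
molar_mass = M_PB + 2 * M_O                # PbO2, g/mol
density = Z * molar_mass / (N_A * volume)  # g/cm^3

print(round(density, 2))   # about 9.86 g/cm^3
```

The result, roughly 9.9 g/cm³, is consistent with the very dense lead oxide minerals this article groups scrutinyite with.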
References
External links
Spectroscopic data on Scrutinyite
Oxide minerals
Orthorhombic minerals
Minerals in space group 60
|
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE generatorConfiguration
PUBLIC "-//mybatis.org//DTD MyBatis Generator Configuration 1.0//EN"
"path_to_url">
<generatorConfiguration>
<context id="caigouTables" targetRuntime="MyBatis3">
<commentGenerator>
                        <!-- suppress auto-generated comments: true = suppress, false = generate -->
<property name="suppressAllComments" value="true"/>
</commentGenerator>
<!-- -->
<!-- <jdbcConnection driverClass="com.mysql.jdbc.Driver"
connectionURL="jdbc:mysql://localhost:3306/mybatis" userId="root"
password="mysql">
</jdbcConnection> -->
<jdbcConnection driverClass="oracle.jdbc.OracleDriver"
connectionURL="jdbc:oracle:thin:@127.0.0.1:1521:yycg"
userId="yycg"
password="yycg">
</jdbcConnection>
                <!-- false: resolve JDBC DECIMAL and NUMERIC types as Integer; true: resolve them as
                        java.math.BigDecimal -->
<javaTypeResolver>
<property name="forceBigDecimals" value="false"/>
</javaTypeResolver>
<!-- targetProject:PO -->
<javaModelGenerator targetPackage="yycg.base.pojo.po"
targetProject=".\src">
<!-- enableSubPackages:schema -->
<property name="enableSubPackages" value="false"/>
<!-- -->
<property name="trimStrings" value="true"/>
</javaModelGenerator>
<!-- targetPackage:mapper -->
<sqlMapGenerator targetPackage="yycg.base.dao.mapper"
targetProject=".\src">
<property name="enableSubPackages" value="false"/>
</sqlMapGenerator>
<!-- targetPackagemapper -->
<javaClientGenerator type="XMLMAPPER"
targetPackage="yycg.base.dao.mapper"
targetProject=".\src">
<property name="enableSubPackages" value="false"/>
</javaClientGenerator>
<!-- -->
<!-- <table schema="" tableName="sysuser" /> -->
<!--
schemasysuserschemaschema
-->
<!-- <table schema="yycg" tableName="sysuser" /> -->
<!-- -->
<table schema="" tableName="userjd"/>
<!-- -->
<table schema="" tableName="usergys"/>
<table schema="" tableName="usergysarea"/>
<!-- -->
<table schema="" tableName="useryy"/>
<!-- -->
<table schema="" tableName="dictinfo"/>
<table schema="" tableName="dicttype"/>
<!-- -->
<table schema="" tableName="basicinfo"/>
<!-- -->
<table schema="" tableName="bss_sys_area"/>
</context>
</generatorConfiguration>
```
|
```javascript
/**
* @license Apache-2.0
*
*
*
* path_to_url
*
* Unless required by applicable law or agreed to in writing, software
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
*/
'use strict';
// MODULES //
var resolve = require( 'path' ).resolve;
var bench = require( '@stdlib/bench' );
var randu = require( '@stdlib/random/base/randu' );
var isnan = require( '@stdlib/math/base/assert/is-nan' );
var pow = require( '@stdlib/math/base/special/pow' );
var Float64Array = require( '@stdlib/array/float64' );
var tryRequire = require( '@stdlib/utils/try-require' );
var pkg = require( './../package.json' ).name;
// VARIABLES //
var dstdevpn = tryRequire( resolve( __dirname, './../lib/ndarray.native.js' ) );
var opts = {
'skip': ( dstdevpn instanceof Error )
};
// FUNCTIONS //
/**
* Creates a benchmark function.
*
* @private
* @param {PositiveInteger} len - array length
* @returns {Function} benchmark function
*/
function createBenchmark( len ) {
var x;
var i;
x = new Float64Array( len );
for ( i = 0; i < x.length; i++ ) {
x[ i ] = ( randu()*20.0 ) - 10.0;
}
return benchmark;
function benchmark( b ) {
var v;
var i;
b.tic();
for ( i = 0; i < b.iterations; i++ ) {
v = dstdevpn( x.length, 1, x, 1, 0 );
if ( isnan( v ) ) {
b.fail( 'should not return NaN' );
}
}
b.toc();
if ( isnan( v ) ) {
b.fail( 'should not return NaN' );
}
b.pass( 'benchmark finished' );
b.end();
}
}
// MAIN //
/**
* Main execution sequence.
*
* @private
*/
function main() {
var len;
var min;
var max;
var f;
var i;
min = 1; // 10^min
max = 6; // 10^max
for ( i = min; i <= max; i++ ) {
len = pow( 10, i );
f = createBenchmark( len );
bench( pkg+'::native:ndarray:len='+len, opts, f );
}
}
main();
```
|
```csharp
using CoreGraphics;
using System;
using UIKit;
namespace Xamarin.Forms.Platform.iOS
{
public class SlideFlyoutTransition : IShellFlyoutTransition
{
internal double Height { get; private set; } = -1d;
internal double Width { get; private set; } = -1d;
public virtual bool UpdateFlyoutSize(double height, double width)
{
if (Height != height ||
Width != width)
{
Height = height;
Width = width;
return true;
}
return false;
}
public virtual void LayoutViews(CGRect bounds, nfloat openPercent, UIView flyout, UIView shell, FlyoutBehavior behavior)
{
if (behavior == FlyoutBehavior.Locked)
openPercent = 1;
nfloat flyoutHeight;
nfloat flyoutWidth;
if (Width != -1d)
flyoutWidth = (nfloat)Width;
else if (UIDevice.CurrentDevice.UserInterfaceIdiom == UIUserInterfaceIdiom.Pad)
flyoutWidth = 320;
else
flyoutWidth = (nfloat)(Math.Min(bounds.Width, bounds.Height) * 0.8);
if (Height == -1d)
flyoutHeight = bounds.Height;
else
flyoutHeight = (nfloat)Height;
nfloat openLimit = flyoutWidth;
nfloat openPixels = openLimit * openPercent;
if (behavior == FlyoutBehavior.Locked)
shell.Frame = new CGRect(bounds.X + flyoutWidth, bounds.Y, bounds.Width - flyoutWidth, flyoutHeight);
else
shell.Frame = bounds;
var shellWidth = shell.Frame.Width;
if(shell.SemanticContentAttribute == UISemanticContentAttribute.ForceRightToLeft)
{
var positionY = shellWidth - openPixels;
flyout.Frame = new CGRect(positionY, 0, flyoutWidth, flyoutHeight);
}
else
{
flyout.Frame = new CGRect(-openLimit + openPixels, 0, flyoutWidth, flyoutHeight);
}
}
}
}
```
|
```csharp
using FlaUI.Core.AutomationElements;
using FlaUI.Core.Identifiers;
using FlaUI.Core.Patterns.Infrastructure;
namespace FlaUI.Core.Patterns
{
public interface IGridItemPattern : IPattern
{
IGridItemPatternPropertyIds PropertyIds { get; }
AutomationProperty<int> Column { get; }
AutomationProperty<int> ColumnSpan { get; }
AutomationProperty<AutomationElement> ContainingGrid { get; }
AutomationProperty<int> Row { get; }
AutomationProperty<int> RowSpan { get; }
}
public interface IGridItemPatternPropertyIds
{
PropertyId Column { get; }
PropertyId ColumnSpan { get; }
PropertyId ContainingGrid { get; }
PropertyId Row { get; }
PropertyId RowSpan { get; }
}
public abstract class GridItemPatternBase<TNativePattern> : PatternBase<TNativePattern>, IGridItemPattern
where TNativePattern : class
{
private AutomationProperty<int> _column;
private AutomationProperty<int> _columnSpan;
private AutomationProperty<AutomationElement> _containingGrid;
private AutomationProperty<int> _row;
private AutomationProperty<int> _rowSpan;
protected GridItemPatternBase(FrameworkAutomationElementBase frameworkAutomationElement, TNativePattern nativePattern) : base(frameworkAutomationElement, nativePattern)
{
}
public IGridItemPatternPropertyIds PropertyIds => Automation.PropertyLibrary.GridItem;
public AutomationProperty<int> Column => GetOrCreate(ref _column, PropertyIds.Column);
public AutomationProperty<int> ColumnSpan => GetOrCreate(ref _columnSpan, PropertyIds.ColumnSpan);
public AutomationProperty<AutomationElement> ContainingGrid => GetOrCreate(ref _containingGrid, PropertyIds.ContainingGrid);
public AutomationProperty<int> Row => GetOrCreate(ref _row, PropertyIds.Row);
public AutomationProperty<int> RowSpan => GetOrCreate(ref _rowSpan, PropertyIds.RowSpan);
}
}
```
|
```java
/*
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
*
* path_to_url
*
* Unless required by applicable law or agreed to in writing, software
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
*/
package org.apache.shardingsphere.agent.plugin.metrics.core.advice.jdbc;
import org.apache.shardingsphere.agent.plugin.core.recorder.MethodTimeRecorder;
/**
 * Execute latency histogram advice for ShardingSphereStatement.
*/
public final class StatementExecuteLatencyHistogramAdvice extends AbstractExecuteLatencyHistogramAdvice {
private final MethodTimeRecorder methodTimeRecorder = new MethodTimeRecorder(StatementExecuteLatencyHistogramAdvice.class);
@Override
protected MethodTimeRecorder getMethodTimeRecorder() {
return methodTimeRecorder;
}
}
```
|
```c
/********************************************************************
* *
* THIS FILE IS PART OF THE OggVorbis SOFTWARE CODEC SOURCE CODE. *
* USE, DISTRIBUTION AND REPRODUCTION OF THIS LIBRARY SOURCE IS *
* GOVERNED BY A BSD-STYLE SOURCE LICENSE INCLUDED WITH THIS SOURCE *
* IN 'COPYING'. PLEASE READ THESE TERMS BEFORE DISTRIBUTING. *
* *
* THE OggVorbis SOURCE CODE IS (C) COPYRIGHT 1994-2009 *
* by the Xiph.Org Foundation path_to_url *
* *
********************************************************************
function: normalized modified discrete cosine transform
power of two length transform only [64 <= n ]
last mod: $Id: mdct.c 16227 2009-07-08 06:58:46Z xiphmont $
Original algorithm adapted long ago from _The use of multirate filter
banks for coding of high quality digital audio_, by T. Sporer,
K. Brandenburg and B. Edler, collection of the European Signal
Processing Conference (EUSIPCO), Amsterdam, June 1992, Vol.1, pp
211-214
The below code implements an algorithm that no longer looks much like
that presented in the paper, but the basic structure remains if you
dig deep enough to see it.
This module DOES NOT INCLUDE code to generate/apply the window
function. Everybody has their own weird favorite including me... I
happen to like the properties of y=sin(.5PI*sin^2(x)), but others may
vehemently disagree.
********************************************************************/
/* this can also be run as an integer transform by uncommenting a
define in mdct.h; the integerization is a first pass and although
it's likely stable for Vorbis, the dynamic range is constrained and
roundoff isn't done (so it's noisy). Consider it functional, but
only a starting point. There's no point on a machine with an FPU */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include "vorbis/codec.h"
#include "mdct.h"
#include "os.h"
#include "misc.h"
/* build lookups for trig functions; also pre-figure scaling and
some window function algebra. */
void mdct_init(mdct_lookup *lookup,int n){
int *bitrev=_ogg_malloc(sizeof(*bitrev)*(n/4));
DATA_TYPE *T=_ogg_malloc(sizeof(*T)*(n+n/4));
int i;
int n2=n>>1;
int log2n=lookup->log2n=rint(log((float)n)/log(2.f));
lookup->n=n;
lookup->trig=T;
lookup->bitrev=bitrev;
/* trig lookups... */
for(i=0;i<n/4;i++){
T[i*2]=FLOAT_CONV(cos((M_PI/n)*(4*i)));
T[i*2+1]=FLOAT_CONV(-sin((M_PI/n)*(4*i)));
T[n2+i*2]=FLOAT_CONV(cos((M_PI/(2*n))*(2*i+1)));
T[n2+i*2+1]=FLOAT_CONV(sin((M_PI/(2*n))*(2*i+1)));
}
for(i=0;i<n/8;i++){
T[n+i*2]=FLOAT_CONV(cos((M_PI/n)*(4*i+2))*.5);
T[n+i*2+1]=FLOAT_CONV(-sin((M_PI/n)*(4*i+2))*.5);
}
/* bitreverse lookup... */
{
int mask=(1<<(log2n-1))-1,i,j;
int msb=1<<(log2n-2);
for(i=0;i<n/8;i++){
int acc=0;
for(j=0;msb>>j;j++)
if((msb>>j)&i)acc|=1<<j;
bitrev[i*2]=((~acc)&mask)-1;
bitrev[i*2+1]=acc;
}
}
lookup->scale=FLOAT_CONV(4.f/n);
}
/* 8 point butterfly (in place, 4 register) */
STIN void mdct_butterfly_8(DATA_TYPE *x){
REG_TYPE r0 = x[6] + x[2];
REG_TYPE r1 = x[6] - x[2];
REG_TYPE r2 = x[4] + x[0];
REG_TYPE r3 = x[4] - x[0];
x[6] = r0 + r2;
x[4] = r0 - r2;
r0 = x[5] - x[1];
r2 = x[7] - x[3];
x[0] = r1 + r0;
x[2] = r1 - r0;
r0 = x[5] + x[1];
r1 = x[7] + x[3];
x[3] = r2 + r3;
x[1] = r2 - r3;
x[7] = r1 + r0;
x[5] = r1 - r0;
}
/* 16 point butterfly (in place, 4 register) */
STIN void mdct_butterfly_16(DATA_TYPE *x){
REG_TYPE r0 = x[1] - x[9];
REG_TYPE r1 = x[0] - x[8];
x[8] += x[0];
x[9] += x[1];
x[0] = MULT_NORM((r0 + r1) * cPI2_8);
x[1] = MULT_NORM((r0 - r1) * cPI2_8);
r0 = x[3] - x[11];
r1 = x[10] - x[2];
x[10] += x[2];
x[11] += x[3];
x[2] = r0;
x[3] = r1;
r0 = x[12] - x[4];
r1 = x[13] - x[5];
x[12] += x[4];
x[13] += x[5];
x[4] = MULT_NORM((r0 - r1) * cPI2_8);
x[5] = MULT_NORM((r0 + r1) * cPI2_8);
r0 = x[14] - x[6];
r1 = x[15] - x[7];
x[14] += x[6];
x[15] += x[7];
x[6] = r0;
x[7] = r1;
mdct_butterfly_8(x);
mdct_butterfly_8(x+8);
}
/* 32 point butterfly (in place, 4 register) */
STIN void mdct_butterfly_32(DATA_TYPE *x){
REG_TYPE r0 = x[30] - x[14];
REG_TYPE r1 = x[31] - x[15];
x[30] += x[14];
x[31] += x[15];
x[14] = r0;
x[15] = r1;
r0 = x[28] - x[12];
r1 = x[29] - x[13];
x[28] += x[12];
x[29] += x[13];
x[12] = MULT_NORM( r0 * cPI1_8 - r1 * cPI3_8 );
x[13] = MULT_NORM( r0 * cPI3_8 + r1 * cPI1_8 );
r0 = x[26] - x[10];
r1 = x[27] - x[11];
x[26] += x[10];
x[27] += x[11];
x[10] = MULT_NORM(( r0 - r1 ) * cPI2_8);
x[11] = MULT_NORM(( r0 + r1 ) * cPI2_8);
r0 = x[24] - x[8];
r1 = x[25] - x[9];
x[24] += x[8];
x[25] += x[9];
x[8] = MULT_NORM( r0 * cPI3_8 - r1 * cPI1_8 );
x[9] = MULT_NORM( r1 * cPI3_8 + r0 * cPI1_8 );
r0 = x[22] - x[6];
r1 = x[7] - x[23];
x[22] += x[6];
x[23] += x[7];
x[6] = r1;
x[7] = r0;
r0 = x[4] - x[20];
r1 = x[5] - x[21];
x[20] += x[4];
x[21] += x[5];
x[4] = MULT_NORM( r1 * cPI1_8 + r0 * cPI3_8 );
x[5] = MULT_NORM( r1 * cPI3_8 - r0 * cPI1_8 );
r0 = x[2] - x[18];
r1 = x[3] - x[19];
x[18] += x[2];
x[19] += x[3];
x[2] = MULT_NORM(( r1 + r0 ) * cPI2_8);
x[3] = MULT_NORM(( r1 - r0 ) * cPI2_8);
r0 = x[0] - x[16];
r1 = x[1] - x[17];
x[16] += x[0];
x[17] += x[1];
x[0] = MULT_NORM( r1 * cPI3_8 + r0 * cPI1_8 );
x[1] = MULT_NORM( r1 * cPI1_8 - r0 * cPI3_8 );
mdct_butterfly_16(x);
mdct_butterfly_16(x+16);
}
/* N point first stage butterfly (in place, 2 register) */
STIN void mdct_butterfly_first(DATA_TYPE *T,
DATA_TYPE *x,
int points){
DATA_TYPE *x1 = x + points - 8;
DATA_TYPE *x2 = x + (points>>1) - 8;
REG_TYPE r0;
REG_TYPE r1;
do{
r0 = x1[6] - x2[6];
r1 = x1[7] - x2[7];
x1[6] += x2[6];
x1[7] += x2[7];
x2[6] = MULT_NORM(r1 * T[1] + r0 * T[0]);
x2[7] = MULT_NORM(r1 * T[0] - r0 * T[1]);
r0 = x1[4] - x2[4];
r1 = x1[5] - x2[5];
x1[4] += x2[4];
x1[5] += x2[5];
x2[4] = MULT_NORM(r1 * T[5] + r0 * T[4]);
x2[5] = MULT_NORM(r1 * T[4] - r0 * T[5]);
r0 = x1[2] - x2[2];
r1 = x1[3] - x2[3];
x1[2] += x2[2];
x1[3] += x2[3];
x2[2] = MULT_NORM(r1 * T[9] + r0 * T[8]);
x2[3] = MULT_NORM(r1 * T[8] - r0 * T[9]);
r0 = x1[0] - x2[0];
r1 = x1[1] - x2[1];
x1[0] += x2[0];
x1[1] += x2[1];
x2[0] = MULT_NORM(r1 * T[13] + r0 * T[12]);
x2[1] = MULT_NORM(r1 * T[12] - r0 * T[13]);
x1-=8;
x2-=8;
T+=16;
}while(x2>=x);
}
/* N/stage point generic N stage butterfly (in place, 2 register) */
STIN void mdct_butterfly_generic(DATA_TYPE *T,
DATA_TYPE *x,
int points,
int trigint){
DATA_TYPE *x1 = x + points - 8;
DATA_TYPE *x2 = x + (points>>1) - 8;
REG_TYPE r0;
REG_TYPE r1;
do{
r0 = x1[6] - x2[6];
r1 = x1[7] - x2[7];
x1[6] += x2[6];
x1[7] += x2[7];
x2[6] = MULT_NORM(r1 * T[1] + r0 * T[0]);
x2[7] = MULT_NORM(r1 * T[0] - r0 * T[1]);
T+=trigint;
r0 = x1[4] - x2[4];
r1 = x1[5] - x2[5];
x1[4] += x2[4];
x1[5] += x2[5];
x2[4] = MULT_NORM(r1 * T[1] + r0 * T[0]);
x2[5] = MULT_NORM(r1 * T[0] - r0 * T[1]);
T+=trigint;
r0 = x1[2] - x2[2];
r1 = x1[3] - x2[3];
x1[2] += x2[2];
x1[3] += x2[3];
x2[2] = MULT_NORM(r1 * T[1] + r0 * T[0]);
x2[3] = MULT_NORM(r1 * T[0] - r0 * T[1]);
T+=trigint;
r0 = x1[0] - x2[0];
r1 = x1[1] - x2[1];
x1[0] += x2[0];
x1[1] += x2[1];
x2[0] = MULT_NORM(r1 * T[1] + r0 * T[0]);
x2[1] = MULT_NORM(r1 * T[0] - r0 * T[1]);
T+=trigint;
x1-=8;
x2-=8;
}while(x2>=x);
}
STIN void mdct_butterflies(mdct_lookup *init,
DATA_TYPE *x,
int points){
DATA_TYPE *T=init->trig;
int stages=init->log2n-5;
int i,j;
if(--stages>0){
mdct_butterfly_first(T,x,points);
}
for(i=1;--stages>0;i++){
for(j=0;j<(1<<i);j++)
mdct_butterfly_generic(T,x+(points>>i)*j,points>>i,4<<i);
}
for(j=0;j<points;j+=32)
mdct_butterfly_32(x+j);
}
void mdct_clear(mdct_lookup *l){
if(l){
if(l->trig)_ogg_free(l->trig);
if(l->bitrev)_ogg_free(l->bitrev);
memset(l,0,sizeof(*l));
}
}
STIN void mdct_bitreverse(mdct_lookup *init,
DATA_TYPE *x){
int n = init->n;
int *bit = init->bitrev;
DATA_TYPE *w0 = x;
DATA_TYPE *w1 = x = w0+(n>>1);
DATA_TYPE *T = init->trig+n;
do{
DATA_TYPE *x0 = x+bit[0];
DATA_TYPE *x1 = x+bit[1];
REG_TYPE r0 = x0[1] - x1[1];
REG_TYPE r1 = x0[0] + x1[0];
REG_TYPE r2 = MULT_NORM(r1 * T[0] + r0 * T[1]);
REG_TYPE r3 = MULT_NORM(r1 * T[1] - r0 * T[0]);
w1 -= 4;
r0 = HALVE(x0[1] + x1[1]);
r1 = HALVE(x0[0] - x1[0]);
w0[0] = r0 + r2;
w1[2] = r0 - r2;
w0[1] = r1 + r3;
w1[3] = r3 - r1;
x0 = x+bit[2];
x1 = x+bit[3];
r0 = x0[1] - x1[1];
r1 = x0[0] + x1[0];
r2 = MULT_NORM(r1 * T[2] + r0 * T[3]);
r3 = MULT_NORM(r1 * T[3] - r0 * T[2]);
r0 = HALVE(x0[1] + x1[1]);
r1 = HALVE(x0[0] - x1[0]);
w0[2] = r0 + r2;
w1[0] = r0 - r2;
w0[3] = r1 + r3;
w1[1] = r3 - r1;
T += 4;
bit += 4;
w0 += 4;
}while(w0<w1);
}
void mdct_backward(mdct_lookup *init, DATA_TYPE *in, DATA_TYPE *out){
int n=init->n;
int n2=n>>1;
int n4=n>>2;
/* rotate */
DATA_TYPE *iX = in+n2-7;
DATA_TYPE *oX = out+n2+n4;
DATA_TYPE *T = init->trig+n4;
do{
oX -= 4;
oX[0] = MULT_NORM(-iX[2] * T[3] - iX[0] * T[2]);
oX[1] = MULT_NORM (iX[0] * T[3] - iX[2] * T[2]);
oX[2] = MULT_NORM(-iX[6] * T[1] - iX[4] * T[0]);
oX[3] = MULT_NORM (iX[4] * T[1] - iX[6] * T[0]);
iX -= 8;
T += 4;
}while(iX>=in);
iX = in+n2-8;
oX = out+n2+n4;
T = init->trig+n4;
do{
T -= 4;
oX[0] = MULT_NORM (iX[4] * T[3] + iX[6] * T[2]);
oX[1] = MULT_NORM (iX[4] * T[2] - iX[6] * T[3]);
oX[2] = MULT_NORM (iX[0] * T[1] + iX[2] * T[0]);
oX[3] = MULT_NORM (iX[0] * T[0] - iX[2] * T[1]);
iX -= 8;
oX += 4;
}while(iX>=in);
mdct_butterflies(init,out+n2,n2);
mdct_bitreverse(init,out);
/* rotate + window */
{
DATA_TYPE *oX1=out+n2+n4;
DATA_TYPE *oX2=out+n2+n4;
DATA_TYPE *iX =out;
T =init->trig+n2;
do{
oX1-=4;
oX1[3] = MULT_NORM (iX[0] * T[1] - iX[1] * T[0]);
oX2[0] = -MULT_NORM (iX[0] * T[0] + iX[1] * T[1]);
oX1[2] = MULT_NORM (iX[2] * T[3] - iX[3] * T[2]);
oX2[1] = -MULT_NORM (iX[2] * T[2] + iX[3] * T[3]);
oX1[1] = MULT_NORM (iX[4] * T[5] - iX[5] * T[4]);
oX2[2] = -MULT_NORM (iX[4] * T[4] + iX[5] * T[5]);
oX1[0] = MULT_NORM (iX[6] * T[7] - iX[7] * T[6]);
oX2[3] = -MULT_NORM (iX[6] * T[6] + iX[7] * T[7]);
oX2+=4;
iX += 8;
T += 8;
}while(iX<oX1);
iX=out+n2+n4;
oX1=out+n4;
oX2=oX1;
do{
oX1-=4;
iX-=4;
oX2[0] = -(oX1[3] = iX[3]);
oX2[1] = -(oX1[2] = iX[2]);
oX2[2] = -(oX1[1] = iX[1]);
oX2[3] = -(oX1[0] = iX[0]);
oX2+=4;
}while(oX2<iX);
iX=out+n2+n4;
oX1=out+n2+n4;
oX2=out+n2;
do{
oX1-=4;
oX1[0]= iX[3];
oX1[1]= iX[2];
oX1[2]= iX[1];
oX1[3]= iX[0];
iX+=4;
}while(oX1>oX2);
}
}
void mdct_forward(mdct_lookup *init, DATA_TYPE *in, DATA_TYPE *out){
int n=init->n;
int n2=n>>1;
int n4=n>>2;
int n8=n>>3;
DATA_TYPE *w=alloca(n*sizeof(*w)); /* forward needs working space */
DATA_TYPE *w2=w+n2;
/* rotate */
/* window + rotate + step 1 */
REG_TYPE r0;
REG_TYPE r1;
DATA_TYPE *x0=in+n2+n4;
DATA_TYPE *x1=x0+1;
DATA_TYPE *T=init->trig+n2;
int i=0;
for(i=0;i<n8;i+=2){
x0 -=4;
T-=2;
r0= x0[2] + x1[0];
r1= x0[0] + x1[2];
w2[i]= MULT_NORM(r1*T[1] + r0*T[0]);
w2[i+1]= MULT_NORM(r1*T[0] - r0*T[1]);
x1 +=4;
}
x1=in+1;
for(;i<n2-n8;i+=2){
T-=2;
x0 -=4;
r0= x0[2] - x1[0];
r1= x0[0] - x1[2];
w2[i]= MULT_NORM(r1*T[1] + r0*T[0]);
w2[i+1]= MULT_NORM(r1*T[0] - r0*T[1]);
x1 +=4;
}
x0=in+n;
for(;i<n2;i+=2){
T-=2;
x0 -=4;
r0= -x0[2] - x1[0];
r1= -x0[0] - x1[2];
w2[i]= MULT_NORM(r1*T[1] + r0*T[0]);
w2[i+1]= MULT_NORM(r1*T[0] - r0*T[1]);
x1 +=4;
}
mdct_butterflies(init,w+n2,n2);
mdct_bitreverse(init,w);
/* rotate + window */
T=init->trig+n2;
x0=out+n2;
for(i=0;i<n4;i++){
x0--;
out[i] =MULT_NORM((w[0]*T[0]+w[1]*T[1])*init->scale);
x0[0] =MULT_NORM((w[0]*T[1]-w[1]*T[0])*init->scale);
w+=2;
T+=2;
}
}
```
|
Country Code: +509
International Call Prefix: 00
Nationally Significant Numbers (NSN): eight digits.
Format: +509 XX XX XXXX
Telephone numbers in the Republic of Haiti were lengthened from seven to eight digits on 1 March 2008.
|
```css
.mce-container,.mce-container *,.mce-widget,.mce-widget *,.mce-reset{margin:0;padding:0;border:0;outline:0;vertical-align:top;background:transparent;text-decoration:none;color:#595959;font-family:"Helvetica Neue",Helvetica,Arial,sans-serif;font-size:14px;text-shadow:none;float:none;position:static;width:auto;height:auto;white-space:nowrap;cursor:inherit;-webkit-tap-highlight-color:transparent;line-height:normal;font-weight:normal;text-align:left;-moz-box-sizing:content-box;-webkit-box-sizing:content-box;box-sizing:content-box;direction:ltr;max-width:none}.mce-widget button{-moz-box-sizing:border-box;-webkit-box-sizing:border-box;box-sizing:border-box}.mce-container *[unselectable]{-moz-user-select:none;-webkit-user-select:none;-o-user-select:none;user-select:none}.word-wrap{word-wrap:break-word;-ms-word-break:break-all;word-break:break-all;word-break:break-word;-ms-hyphens:auto;-moz-hyphens:auto;-webkit-hyphens:auto;hyphens:auto}.mce-fade{opacity:0;-webkit-transition:opacity .15s linear;transition:opacity .15s linear}.mce-fade.mce-in{opacity:1}.mce-tinymce{visibility:inherit !important;position:relative}.mce-fullscreen{border:0;padding:0;margin:0;overflow:hidden;height:100%;z-index:100}div.mce-fullscreen{position:fixed;top:0;left:0;width:100%;height:auto}.mce-tinymce{display:block;-webkit-box-shadow:0 1px 2px rgba(0, 0, 0, 0.2);-moz-box-shadow:0 1px 2px rgba(0, 0, 0, 0.2);box-shadow:0 1px 2px rgba(0, 0, 0, 0.2)}.mce-statusbar>.mce-container-body{display:flex;padding-right:16px}.mce-statusbar>.mce-container-body .mce-path{flex:1}.mce-wordcount{font-size:inherit;text-transform:uppercase;padding:8px 0}div.mce-edit-area{background:#FFF;filter:none}.mce-statusbar{position:relative}.mce-statusbar .mce-container-body{position:relative;font-size:11px}.mce-fullscreen .mce-resizehandle{display:none}.mce-statusbar .mce-flow-layout-item{margin:0}.mce-charmap{border-collapse:collapse}.mce-charmap td{cursor:default;border:1px solid 
#c5c5c5;width:20px;height:20px;line-height:20px;text-align:center;vertical-align:middle;padding:2px}.mce-charmap td div{text-align:center}.mce-charmap td:hover{background:white}.mce-grid td.mce-grid-cell div{border:1px solid #c5c5c5;width:15px;height:15px;margin:0;cursor:pointer}.mce-grid td.mce-grid-cell div:focus{border-color:#91bbe9}.mce-grid td.mce-grid-cell div[disabled]{cursor:not-allowed}.mce-grid{border-spacing:2px;border-collapse:separate}.mce-grid a{display:block;border:1px solid transparent}.mce-grid a:hover,.mce-grid a:focus{border-color:#91bbe9}.mce-grid-border{margin:0 4px 0 4px}.mce-grid-border a{border-color:#c5c5c5;width:13px;height:13px}.mce-grid-border a:hover,.mce-grid-border a.mce-active{border-color:#91bbe9;background:#bdd6f2}.mce-text-center{text-align:center}div.mce-tinymce-inline{width:100%}.mce-colorbtn-trans div{text-align:center;vertical-align:middle;font-weight:bold;font-size:20px;line-height:16px;color:#8b8b8b}.mce-monospace{font-family:"Courier New",Courier,monospace}.mce-toolbar-grp .mce-flow-layout-item{margin-bottom:0}.mce-container b{font-weight:bold}.mce-container p{margin-bottom:5px}.mce-container a{cursor:pointer;color:#2276d2}.mce-container a:hover{text-decoration:underline}.mce-container ul{margin-left:15px}.mce-container .mce-table-striped{border-collapse:collapse;margin:10px}.mce-container .mce-table-striped thead>tr{background-color:#fafafa}.mce-container .mce-table-striped thead>tr th{font-weight:bold}.mce-container .mce-table-striped td,.mce-container .mce-table-striped th{padding:5px}.mce-container .mce-table-striped tr:nth-child(even){background-color:#fafafa}.mce-container .mce-table-striped tbody>tr:hover{background-color:#e1e1e1}.mce-branding{font-size:inherit;text-transform:uppercase;white-space:pre;padding:8px 0}.mce-branding a{font-size:inherit;color:inherit}.mce-top-part{position:relative}.mce-top-part::before{content:'';position:absolute;-webkit-box-shadow:0 1px 2px rgba(0, 0, 0, 0.2);-moz-box-shadow:0 1px 2px 
rgba(0, 0, 0, 0.2);box-shadow:0 1px 2px rgba(0, 0, 0, 0.2);top:0;right:0;bottom:0;left:0;pointer-events:none}.mce-rtl .mce-wordcount{left:0;right:auto}.mce-rtl .mce-statusbar>.mce-container-body>*:last-child{padding-right:0;padding-left:10px}.mce-rtl .mce-path{text-align:right;padding-right:16px}.mce-croprect-container{position:absolute;top:0;left:0}.mce-croprect-handle{position:absolute;top:0;left:0;width:20px;height:20px;border:2px solid white}.mce-croprect-handle-nw{border-width:2px 0 0 2px;margin:-2px 0 0 -2px;cursor:nw-resize;top:100px;left:100px}.mce-croprect-handle-ne{border-width:2px 2px 0 0;margin:-2px 0 0 -20px;cursor:ne-resize;top:100px;left:200px}.mce-croprect-handle-sw{border-width:0 0 2px 2px;margin:-20px 2px 0 -2px;cursor:sw-resize;top:200px;left:100px}.mce-croprect-handle-se{border-width:0 2px 2px 0;margin:-20px 0 0 -20px;cursor:se-resize;top:200px;left:200px}.mce-croprect-handle-move{position:absolute;cursor:move;border:0}.mce-croprect-block{opacity:.5;filter:alpha(opacity=50);zoom:1;position:absolute;background:black}.mce-croprect-handle:focus{border-color:#2276d2}.mce-croprect-handle-move:focus{outline:1px solid #2276d2}.mce-imagepanel{overflow:auto;background:black}.mce-imagepanel-bg{position:absolute;background:url('data:image/gif;base64,R0lGODdhDAAMAIABAMzMzP///ywAAAAADAAMAAACFoQfqYeabNyDMkBQb81Uat85nxguUAEAOw==')}.mce-imagepanel img{position:absolute}.mce-imagetool.mce-btn .mce-ico{display:block;width:20px;height:20px;text-align:center;line-height:20px;font-size:20px;padding:5px}.mce-arrow-up{margin-top:12px}.mce-arrow-down{margin-top:-12px}.mce-arrow:before,.mce-arrow:after{position:absolute;left:50%;display:block;width:0;height:0;border-style:solid;border-color:transparent;content:""}.mce-arrow.mce-arrow-up:before{top:-9px;border-bottom-color:#c5c5c5;border-width:0 9px 9px;margin-left:-9px}.mce-arrow.mce-arrow-down:before{bottom:-9px;border-top-color:#c5c5c5;border-width:9px 9px 
0;margin-left:-9px}.mce-arrow.mce-arrow-up:after{top:-8px;border-bottom-color:#fff;border-width:0 8px 8px;margin-left:-8px}.mce-arrow.mce-arrow-down:after{bottom:-8px;border-top-color:#fff;border-width:8px 8px 0;margin-left:-8px}.mce-arrow.mce-arrow-left:before,.mce-arrow.mce-arrow-left:after{margin:0}.mce-arrow.mce-arrow-left:before{left:8px}.mce-arrow.mce-arrow-left:after{left:9px}.mce-arrow.mce-arrow-right:before,.mce-arrow.mce-arrow-right:after{left:auto;margin:0}.mce-arrow.mce-arrow-right:before{right:8px}.mce-arrow.mce-arrow-right:after{right:9px}.mce-arrow.mce-arrow-center.mce-arrow.mce-arrow-left:before{left:-9px;top:50%;border-right-color:#c5c5c5;border-width:9px 9px 9px 0;margin-top:-9px}.mce-arrow.mce-arrow-center.mce-arrow.mce-arrow-left:after{left:-8px;top:50%;border-right-color:#fff;border-width:8px 8px 8px 0;margin-top:-8px}.mce-arrow.mce-arrow-center.mce-arrow.mce-arrow-left{margin-left:12px}.mce-arrow.mce-arrow-center.mce-arrow.mce-arrow-right:before{right:-9px;top:50%;border-left-color:#c5c5c5;border-width:9px 0 9px 9px;margin-top:-9px}.mce-arrow.mce-arrow-center.mce-arrow.mce-arrow-right:after{right:-8px;top:50%;border-left-color:#fff;border-width:8px 0 8px 8px;margin-top:-8px}.mce-arrow.mce-arrow-center.mce-arrow.mce-arrow-right{margin-left:-14px}.mce-edit-aria-container>.mce-container-body{display:flex}.mce-edit-aria-container>.mce-container-body .mce-edit-area{flex:1}.mce-edit-aria-container>.mce-container-body .mce-sidebar>.mce-container-body{display:flex;align-items:stretch;height:100%}.mce-edit-aria-container>.mce-container-body .mce-sidebar-panel{min-width:250px;max-width:250px;position:relative}.mce-edit-aria-container>.mce-container-body .mce-sidebar-panel>.mce-container-body{position:absolute;width:100%;height:100%;overflow:auto;top:0;left:0}.mce-sidebar-toolbar{border:0 solid #c5c5c5;border-left-width:1px}.mce-sidebar-toolbar .mce-btn{border-left:0;border-right:0}.mce-sidebar-toolbar .mce-btn.mce-active,.mce-sidebar-toolbar 
.mce-btn.mce-active:hover{background-color:#555c66}.mce-sidebar-toolbar .mce-btn.mce-active button,.mce-sidebar-toolbar .mce-btn.mce-active:hover button,.mce-sidebar-toolbar .mce-btn.mce-active button i,.mce-sidebar-toolbar .mce-btn.mce-active:hover button i{color:white;text-shadow:1px 1px none}.mce-sidebar-panel{border:0 solid #c5c5c5;border-left-width:1px}.mce-container,.mce-container-body{display:block}.mce-autoscroll{overflow:hidden}.mce-scrollbar{position:absolute;width:7px;height:100%;top:2px;right:2px;opacity:.4;filter:alpha(opacity=40);zoom:1}.mce-scrollbar-h{top:auto;right:auto;left:2px;bottom:2px;width:100%;height:7px}.mce-scrollbar-thumb{position:absolute;background-color:#000;border:1px solid #888;border-color:rgba(85,85,85,0.6);width:5px;height:100%}.mce-scrollbar-h .mce-scrollbar-thumb{width:100%;height:5px}.mce-scrollbar:hover,.mce-scrollbar.mce-active{background-color:#AAA;opacity:.6;filter:alpha(opacity=60);zoom:1}.mce-scroll{position:relative}.mce-panel{border:0 solid #f3f3f3;border:0 solid #c5c5c5;background-color:#fff}.mce-floatpanel{position:absolute;-webkit-box-shadow:0 1px 2px rgba(0, 0, 0, 0.2);-moz-box-shadow:0 1px 2px rgba(0, 0, 0, 0.2);box-shadow:0 1px 2px rgba(0, 0, 0, 0.2)}.mce-floatpanel.mce-fixed{position:fixed}.mce-floatpanel .mce-arrow,.mce-floatpanel .mce-arrow:after{position:absolute;display:block;width:0;height:0;border-color:transparent;border-style:solid}.mce-floatpanel .mce-arrow{border-width:11px}.mce-floatpanel .mce-arrow:after{border-width:10px;content:""}.mce-floatpanel.mce-popover{filter:progid:DXImageTransform.Microsoft.gradient(enabled = false);background:transparent;-webkit-box-shadow:0 1px 2px rgba(0, 0, 0, 0.2);-moz-box-shadow:0 1px 2px rgba(0, 0, 0, 0.2);box-shadow:0 1px 2px rgba(0, 0, 0, 0.2);top:0;left:0;background:#FFF;border:1px solid #c5c5c5;border:1px solid 
rgba(0,0,0,0.25)}.mce-floatpanel.mce-popover.mce-bottom{margin-top:10px;*margin-top:0}.mce-floatpanel.mce-popover.mce-bottom>.mce-arrow{left:50%;margin-left:-11px;border-top-width:0;border-bottom-color:#c5c5c5;border-bottom-color:rgba(0,0,0,0.25);top:-11px}.mce-floatpanel.mce-popover.mce-bottom>.mce-arrow:after{top:1px;margin-left:-10px;border-top-width:0;border-bottom-color:#FFF}.mce-floatpanel.mce-popover.mce-bottom.mce-start{margin-left:-22px}.mce-floatpanel.mce-popover.mce-bottom.mce-start>.mce-arrow{left:20px}.mce-floatpanel.mce-popover.mce-bottom.mce-end{margin-left:22px}.mce-floatpanel.mce-popover.mce-bottom.mce-end>.mce-arrow{right:10px;left:auto}.mce-fullscreen{border:0;padding:0;margin:0;overflow:hidden;height:100%}div.mce-fullscreen{position:fixed;top:0;left:0}#mce-modal-block{opacity:0;filter:alpha(opacity=0);zoom:1;position:fixed;left:0;top:0;width:100%;height:100%;background:#FFF}#mce-modal-block.mce-in{opacity:.5;filter:alpha(opacity=50);zoom:1}.mce-window-move{cursor:move}.mce-window{-webkit-box-shadow:0 3px 7px rgba(0, 0, 0, 0.3);-moz-box-shadow:0 3px 7px rgba(0, 0, 0, 0.3);box-shadow:0 3px 7px rgba(0, 0, 0, 0.3);filter:progid:DXImageTransform.Microsoft.gradient(enabled = false);background:transparent;background:#FFF;position:fixed;top:0;left:0;opacity:0;transform:scale(.1);transition:transform 100ms ease-in,opacity 150ms ease-in}.mce-window.mce-in{transform:scale(1);opacity:1}.mce-window-head{padding:9px 15px;border-bottom:1px solid #c5c5c5;position:relative}.mce-window-head .mce-close{position:absolute;right:0;top:0;height:38px;width:38px;text-align:center;cursor:pointer}.mce-window-head .mce-close i{color:#9b9b9b}.mce-close:hover i{color:#bdbdbd}.mce-window-head .mce-title{line-height:20px;font-size:20px;font-weight:bold;text-rendering:optimizelegibility;padding-right:20px}.mce-window .mce-container-body{display:block}.mce-foot{display:block;background-color:#FFF;border-top:1px solid #c5c5c5}.mce-window-head 
.mce-dragh{position:absolute;top:0;left:0;cursor:move;width:90%;height:100%}.mce-window iframe{width:100%;height:100%}.mce-window-body .mce-listbox{border-color:#e2e4e7}.mce-window .mce-btn:hover{border-color:#c5c5c5}.mce-window .mce-btn:focus{border-color:#2276d2}.mce-window-body .mce-btn,.mce-foot .mce-btn{border-color:#c5c5c5}.mce-foot .mce-btn.mce-primary{border-color:transparent}.mce-rtl .mce-window-head .mce-close{position:absolute;right:auto;left:15px}.mce-rtl .mce-window-head .mce-dragh{left:auto;right:0}.mce-rtl .mce-window-head .mce-title{direction:rtl;text-align:right}.mce-tooltip{position:absolute;padding:5px;opacity:.8;filter:alpha(opacity=80);zoom:1;margin-top:1px}.mce-tooltip-inner{font-size:11px;background-color:#000;color:white;max-width:200px;padding:5px 8px 4px 8px;text-align:center;white-space:normal}.mce-tooltip-inner{-webkit-box-shadow:none;-moz-box-shadow:none;box-shadow:none}.mce-tooltip-arrow{position:absolute;width:0;height:0;line-height:0;border:5px dashed #000}.mce-tooltip-arrow-n{border-bottom-color:#000}.mce-tooltip-arrow-s{border-top-color:#000}.mce-tooltip-arrow-e{border-left-color:#000}.mce-tooltip-arrow-w{border-right-color:#000}.mce-tooltip-nw,.mce-tooltip-sw{margin-left:-14px}.mce-tooltip-ne,.mce-tooltip-se{margin-left:14px}.mce-tooltip-n .mce-tooltip-arrow{top:0;left:50%;margin-left:-5px;border-bottom-style:solid;border-top:none;border-left-color:transparent;border-right-color:transparent}.mce-tooltip-nw .mce-tooltip-arrow{top:0;left:10px;border-bottom-style:solid;border-top:none;border-left-color:transparent;border-right-color:transparent}.mce-tooltip-ne .mce-tooltip-arrow{top:0;right:10px;border-bottom-style:solid;border-top:none;border-left-color:transparent;border-right-color:transparent}.mce-tooltip-s .mce-tooltip-arrow{bottom:0;left:50%;margin-left:-5px;border-top-style:solid;border-bottom:none;border-left-color:transparent;border-right-color:transparent}.mce-tooltip-sw 
.mce-tooltip-arrow{bottom:0;left:10px;border-top-style:solid;border-bottom:none;border-left-color:transparent;border-right-color:transparent}.mce-tooltip-se .mce-tooltip-arrow{bottom:0;right:10px;border-top-style:solid;border-bottom:none;border-left-color:transparent;border-right-color:transparent}.mce-tooltip-e .mce-tooltip-arrow{right:0;top:50%;margin-top:-5px;border-left-style:solid;border-right:none;border-top-color:transparent;border-bottom-color:transparent}.mce-tooltip-w .mce-tooltip-arrow{left:0;top:50%;margin-top:-5px;border-right-style:solid;border-left:none;border-top-color:transparent;border-bottom-color:transparent}.mce-progress{display:inline-block;position:relative;height:20px}.mce-progress .mce-bar-container{display:inline-block;width:100px;height:100%;margin-right:8px;border:1px solid #ccc;overflow:hidden}.mce-progress .mce-text{display:inline-block;margin-top:auto;margin-bottom:auto;font-size:14px;width:40px;color:#595959}.mce-bar{display:block;width:0;height:100%;background-color:#dfdfdf;-webkit-transition:width .2s ease;transition:width .2s ease}.mce-notification{position:absolute;background-color:#fff;padding:5px;margin-top:5px;border-width:1px;border-style:solid;border-color:#c5c5c5;transition:transform 100ms ease-in,opacity 150ms ease-in;opacity:0;box-sizing:border-box}.mce-notification.mce-in{opacity:1}.mce-notification-success{background-color:#dff0d8;border-color:#d6e9c6}.mce-notification-info{background-color:#d9edf7;border-color:#779ECB}.mce-notification-warning{background-color:#fcf8e3;border-color:#faebcc}.mce-notification-error{background-color:#f2dede;border-color:#ebccd1}.mce-notification.mce-has-close{padding-right:15px}.mce-notification .mce-ico{margin-top:5px}.mce-notification-inner{word-wrap:break-word;-ms-word-break:break-all;word-break:break-all;word-break:break-word;-ms-hyphens:auto;-moz-hyphens:auto;-webkit-hyphens:auto;hyphens:auto;display:inline-block;font-size:14px;margin:5px 8px 4px 
8px;text-align:center;white-space:normal;color:#31708f}.mce-notification-inner a{text-decoration:underline;cursor:pointer}.mce-notification .mce-progress{margin-right:8px}.mce-notification .mce-progress .mce-text{margin-top:5px}.mce-notification *,.mce-notification .mce-progress .mce-text{color:#595959}.mce-notification .mce-progress .mce-bar-container{border-color:#c5c5c5}.mce-notification .mce-progress .mce-bar-container .mce-bar{background-color:#595959}.mce-notification-success *,.mce-notification-success .mce-progress .mce-text{color:#3c763d}.mce-notification-success .mce-progress .mce-bar-container{border-color:#d6e9c6}.mce-notification-success .mce-progress .mce-bar-container .mce-bar{background-color:#3c763d}.mce-notification-info *,.mce-notification-info .mce-progress .mce-text{color:#31708f}.mce-notification-info .mce-progress .mce-bar-container{border-color:#779ECB}.mce-notification-info .mce-progress .mce-bar-container .mce-bar{background-color:#31708f}.mce-notification-warning *,.mce-notification-warning .mce-progress .mce-text{color:#8a6d3b}.mce-notification-warning .mce-progress .mce-bar-container{border-color:#faebcc}.mce-notification-warning .mce-progress .mce-bar-container .mce-bar{background-color:#8a6d3b}.mce-notification-error *,.mce-notification-error .mce-progress .mce-text{color:#a94442}.mce-notification-error .mce-progress .mce-bar-container{border-color:#ebccd1}.mce-notification-error .mce-progress .mce-bar-container .mce-bar{background-color:#a94442}.mce-notification .mce-close{position:absolute;top:6px;right:8px;font-size:20px;font-weight:bold;line-height:20px;color:#9b9b9b;cursor:pointer}.mce-abs-layout{position:relative}body .mce-abs-layout-item,.mce-abs-end{position:absolute}.mce-abs-end{width:1px;height:1px}.mce-container-body.mce-abs-layout{overflow:hidden}.mce-btn{border:1px solid #b3b3b3;border-color:transparent transparent transparent transparent;position:relative;text-shadow:0 1px 1px 
rgba(255,255,255,0.75);background:white;display:inline-block;*display:inline;*zoom:1;-webkit-box-shadow:none;-moz-box-shadow:none;box-shadow:none}.mce-btn:hover,.mce-btn:active{background:white;color:#595959;border-color:#e2e4e7}.mce-btn:focus{background:white;color:#595959;border-color:#e2e4e7}.mce-btn.mce-disabled button,.mce-btn.mce-disabled:hover button{cursor:default;-webkit-box-shadow:none;-moz-box-shadow:none;box-shadow:none;opacity:.4;filter:alpha(opacity=40);zoom:1}.mce-btn.mce-active,.mce-btn.mce-active:hover,.mce-btn.mce-active:focus,.mce-btn.mce-active:active{-webkit-box-shadow:none;-moz-box-shadow:none;box-shadow:none;background:#555c66;color:white;border-color:transparent}.mce-btn.mce-active button,.mce-btn.mce-active:hover button,.mce-btn.mce-active i,.mce-btn.mce-active:hover i{color:white}.mce-btn:hover .mce-caret{border-top-color:#b5bcc2}.mce-btn.mce-active .mce-caret,.mce-btn.mce-active:hover .mce-caret{border-top-color:white}.mce-btn button{padding:4px 6px;font-size:14px;line-height:20px;*line-height:16px;cursor:pointer;color:#595959;text-align:center;overflow:visible;-webkit-appearance:none}.mce-btn button::-moz-focus-inner{border:0;padding:0}.mce-btn i{text-shadow:1px 1px none}.mce-primary.mce-btn-has-text{min-width:50px}.mce-primary{color:white;border:1px solid transparent;border-color:transparent;background-color:#2276d2}.mce-primary:hover,.mce-primary:focus{background-color:#1e6abc;border-color:transparent}.mce-primary.mce-disabled button,.mce-primary.mce-disabled:hover button{cursor:default;-webkit-box-shadow:none;-moz-box-shadow:none;box-shadow:none;opacity:.4;filter:alpha(opacity=40);zoom:1}.mce-primary.mce-active,.mce-primary.mce-active:hover,.mce-primary:not(.mce-disabled):active{background-color:#1e6abc;-webkit-box-shadow:none;-moz-box-shadow:none;box-shadow:none}.mce-primary button,.mce-primary button i{color:white;text-shadow:1px 1px none}.mce-btn .mce-txt{font-size:inherit;line-height:inherit;color:inherit}.mce-btn-large 
button{padding:9px 14px;font-size:16px;line-height:normal}.mce-btn-large i{margin-top:2px}.mce-btn-small button{padding:1px 5px;font-size:12px;*padding-bottom:2px}.mce-btn-small i{line-height:20px;vertical-align:top;*line-height:18px}.mce-btn .mce-caret{margin-top:8px;margin-left:0}.mce-btn-small .mce-caret{margin-top:8px;margin-left:0}.mce-caret{display:inline-block;*display:inline;*zoom:1;width:0;height:0;vertical-align:top;border-top:4px solid #b5bcc2;border-right:4px solid transparent;border-left:4px solid transparent;content:""}.mce-disabled .mce-caret{border-top-color:#aaa}.mce-caret.mce-up{border-bottom:4px solid #b5bcc2;border-top:0}.mce-btn-flat{border:0;background:transparent;-webkit-box-shadow:none;-moz-box-shadow:none;box-shadow:none;filter:none}.mce-btn-flat:hover,.mce-btn-flat.mce-active,.mce-btn-flat:focus,.mce-btn-flat:active{border:0;background:#e6e6e6;filter:none;-webkit-box-shadow:none;-moz-box-shadow:none;box-shadow:none}.mce-btn-has-text .mce-ico{padding-right:5px}.mce-rtl .mce-btn button{direction:rtl}.mce-toolbar .mce-btn-group{margin:0;padding:2px 0}.mce-btn-group .mce-btn{border-width:1px;margin:0;margin-left:2px}.mce-btn-group:not(:first-child){border-left:1px solid #d9d9d9;padding-left:0;margin-left:2px}.mce-btn-group{margin-left:2px}.mce-btn-group .mce-btn.mce-flow-layout-item{margin:0}.mce-rtl .mce-btn-group .mce-btn{margin-left:0;margin-right:2px}.mce-rtl .mce-btn-group .mce-first{margin-right:0}.mce-rtl .mce-btn-group:not(:first-child){border-left:none;border-right:1px solid #d9d9d9;padding-right:4px;margin-right:4px}.mce-checkbox{cursor:pointer}i.mce-i-checkbox{margin:0 3px 0 0;border:1px solid #c5c5c5;-webkit-box-shadow:none;-moz-box-shadow:none;box-shadow:none;background-color:white;text-indent:-10em;overflow:hidden}.mce-checked i.mce-i-checkbox{color:#595959;font-size:16px;line-height:16px;text-indent:0}.mce-checkbox:focus i.mce-i-checkbox,.mce-checkbox.mce-focus i.mce-i-checkbox{border:1px solid 
#2276d2;-webkit-box-shadow:none;-moz-box-shadow:none;box-shadow:none}.mce-checkbox.mce-disabled .mce-label,.mce-checkbox.mce-disabled i.mce-i-checkbox{color:#bdbdbd}.mce-checkbox .mce-label{vertical-align:middle}.mce-rtl .mce-checkbox{direction:rtl;text-align:right}.mce-rtl i.mce-i-checkbox{margin:0 0 0 3px}.mce-combobox{position:relative;display:inline-block;*display:inline;*zoom:1;-webkit-box-shadow:none;-moz-box-shadow:none;box-shadow:none;*height:32px}.mce-combobox input{border:1px solid #c5c5c5;border-right-color:#c5c5c5;height:28px}.mce-combobox.mce-disabled input{color:#bdbdbd}.mce-combobox .mce-btn{border:1px solid #c5c5c5;border-left:0;margin:0}.mce-combobox button{padding-right:8px;padding-left:8px}.mce-combobox.mce-disabled .mce-btn button{cursor:default;-webkit-box-shadow:none;-moz-box-shadow:none;box-shadow:none;opacity:.4;filter:alpha(opacity=40);zoom:1}.mce-combobox .mce-status{position:absolute;right:2px;top:50%;line-height:16px;margin-top:-8px;font-size:12px;width:15px;height:15px;text-align:center;cursor:pointer}.mce-combobox.mce-has-status input{padding-right:20px}.mce-combobox.mce-has-open .mce-status{right:37px}.mce-combobox .mce-status.mce-i-warning{color:#c09853}.mce-combobox .mce-status.mce-i-checkmark{color:#468847}.mce-menu.mce-combobox-menu{border-top:0;margin-top:0;max-height:200px}.mce-menu.mce-combobox-menu .mce-menu-item{padding:4px 6px 4px 4px;font-size:11px}.mce-menu.mce-combobox-menu .mce-menu-item-sep{padding:0}.mce-menu.mce-combobox-menu .mce-text{font-size:11px}.mce-menu.mce-combobox-menu .mce-menu-item-link,.mce-menu.mce-combobox-menu .mce-menu-item-link b{font-size:11px}.mce-menu.mce-combobox-menu .mce-text b{font-size:11px}.mce-colorbox i{border:1px solid #c5c5c5;width:14px;height:14px}.mce-colorbutton .mce-ico{position:relative}.mce-colorbutton-grid{margin:4px}.mce-colorbutton 
.mce-preview{padding-right:3px;display:block;position:absolute;left:50%;top:50%;margin-left:-17px;margin-top:7px;background:gray;width:13px;height:2px;overflow:hidden}.mce-colorbutton.mce-btn-small .mce-preview{margin-left:-16px;padding-right:0;width:16px}.mce-rtl .mce-colorbutton{direction:rtl}.mce-rtl .mce-colorbutton .mce-preview{margin-left:0;padding-right:0;padding-left:3px}.mce-rtl .mce-colorbutton.mce-btn-small .mce-preview{margin-left:0;padding-right:0;padding-left:2px}.mce-rtl .mce-colorbutton .mce-open{padding-left:4px;padding-right:4px;border-left:0}.mce-colorpicker{position:relative;width:250px;height:220px}.mce-colorpicker-sv{position:absolute;top:0;left:0;width:90%;height:100%;border:1px solid #c5c5c5;cursor:crosshair;overflow:hidden}.mce-colorpicker-h-chunk{width:100%}.mce-colorpicker-overlay1,.mce-colorpicker-overlay2{width:100%;height:100%;position:absolute;top:0;left:0}.mce-colorpicker-overlay1{filter:progid:DXImageTransform.Microsoft.gradient(GradientType=1, startColorstr='#ffffff', endColorstr='#00ffffff');-ms-filter:"progid:DXImageTransform.Microsoft.gradient(GradientType=1,startColorstr='#ffffff', endColorstr='#00ffffff')";background:linear-gradient(to right, #fff, rgba(255,255,255,0))}.mce-colorpicker-overlay2{filter:progid:DXImageTransform.Microsoft.gradient(GradientType=0, startColorstr='#00000000', endColorstr='#000000');-ms-filter:"progid:DXImageTransform.Microsoft.gradient(GradientType=0,startColorstr='#00000000', endColorstr='#000000')";background:linear-gradient(to bottom, rgba(0,0,0,0), #000)}.mce-colorpicker-selector1{background:none;position:absolute;width:12px;height:12px;margin:-8px 0 0 -8px;border:1px solid black;border-radius:50%}.mce-colorpicker-selector2{position:absolute;width:10px;height:10px;border:1px solid white;border-radius:50%}.mce-colorpicker-h{position:absolute;top:0;right:0;width:6.5%;height:100%;border:1px solid 
#c5c5c5;cursor:crosshair}.mce-colorpicker-h-marker{margin-top:-4px;position:absolute;top:0;left:-1px;width:100%;border:1px solid black;background:white;height:4px;z-index:100}.mce-path{display:inline-block;*display:inline;*zoom:1;padding:8px;white-space:normal;font-size:inherit}.mce-path .mce-txt{display:inline-block;padding-right:3px}.mce-path .mce-path-body{display:inline-block}.mce-path-item{display:inline-block;*display:inline;*zoom:1;cursor:pointer;color:#595959;font-size:inherit;text-transform:uppercase}.mce-path-item:hover{text-decoration:underline}.mce-path-item:focus{background:#555c66;color:white}.mce-path .mce-divider{display:inline;font-size:inherit}.mce-disabled .mce-path-item{color:#aaa}.mce-rtl .mce-path{direction:rtl}.mce-fieldset{border:0 solid #9E9E9E}.mce-fieldset>.mce-container-body{margin-top:-15px}.mce-fieldset-title{margin-left:5px;padding:0 5px 0 5px}.mce-fit-layout{display:inline-block;*display:inline;*zoom:1}.mce-fit-layout-item{position:absolute}.mce-flow-layout-item{display:inline-block;*display:inline;*zoom:1}.mce-flow-layout-item{margin:2px 0 2px 2px}.mce-flow-layout-item.mce-last{margin-right:2px}.mce-flow-layout{white-space:normal}.mce-tinymce-inline .mce-flow-layout{white-space:nowrap}.mce-rtl .mce-flow-layout{text-align:right;direction:rtl}.mce-rtl .mce-flow-layout-item{margin:2px 2px 2px 0}.mce-rtl .mce-flow-layout-item.mce-last{margin-left:2px}.mce-iframe{border:0 solid #c5c5c5;width:100%;height:100%}.mce-infobox{display:inline-block;*display:inline;*zoom:1;text-shadow:0 1px 1px rgba(255,255,255,0.75);overflow:hidden;border:1px solid red}.mce-infobox div{display:block;margin:5px}.mce-infobox div button{position:absolute;top:50%;right:4px;cursor:pointer;margin-top:-8px;display:none}.mce-infobox div button:focus{outline:2px solid #e2e4e7}.mce-infobox.mce-has-help div{margin-right:25px}.mce-infobox.mce-has-help button{display:block}.mce-infobox.mce-success{background:#dff0d8;border-color:#d6e9c6}.mce-infobox.mce-success 
div{color:#3c763d}.mce-infobox.mce-warning{background:#fcf8e3;border-color:#faebcc}.mce-infobox.mce-warning div{color:#8a6d3b}.mce-infobox.mce-error{background:#f2dede;border-color:#ebccd1}.mce-infobox.mce-error div{color:#a94442}.mce-rtl .mce-infobox div{text-align:right;direction:rtl}.mce-label{display:inline-block;*display:inline;*zoom:1;text-shadow:0 1px 1px rgba(255,255,255,0.75);overflow:hidden}.mce-label.mce-autoscroll{overflow:auto}.mce-label.mce-disabled{color:#aaa}.mce-label.mce-multiline{white-space:pre-wrap}.mce-label.mce-success{color:#468847}.mce-label.mce-warning{color:#c09853}.mce-label.mce-error{color:#b94a48}.mce-rtl .mce-label{text-align:right;direction:rtl}.mce-menubar{border:1px solid #e2e4e7}.mce-menubar .mce-menubtn{border-color:transparent;background:transparent;-webkit-box-shadow:none;-moz-box-shadow:none;box-shadow:none;filter:none}.mce-menubar .mce-menubtn button span{color:#595959}.mce-menubar .mce-caret{border-top-color:#b5bcc2}.mce-menubar .mce-active .mce-caret,.mce-menubar .mce-menubtn:hover .mce-caret{border-top-color:#b5bcc2}.mce-menubar .mce-menubtn:hover,.mce-menubar .mce-menubtn.mce-active,.mce-menubar .mce-menubtn:focus{border-color:#e2e4e7;background:white;filter:none;-webkit-box-shadow:none;-moz-box-shadow:none;box-shadow:none}.mce-menubar .mce-menubtn.mce-active{border-bottom:none;z-index:65537}div.mce-menubtn.mce-opened{border-bottom-color:white;z-index:65537}.mce-menubtn button{color:#595959}.mce-menubtn.mce-btn-small span{font-size:12px}.mce-menubtn.mce-fixed-width span{display:inline-block;overflow-x:hidden;text-overflow:ellipsis;width:90px}.mce-menubtn.mce-fixed-width.mce-btn-small span{width:70px}.mce-menubtn .mce-caret{*margin-top:6px}.mce-rtl .mce-menubtn button{direction:rtl;text-align:right}.mce-rtl .mce-menubtn.mce-fixed-width span{direction:rtl;text-align:right}.mce-menu-item{display:block;padding:6px 4px 6px 
4px;clear:both;font-weight:normal;line-height:20px;color:#595959;white-space:nowrap;cursor:pointer;line-height:normal;border-left:4px solid transparent;margin-bottom:1px}.mce-menu-item .mce-caret{margin-top:4px;margin-right:6px;border-top:4px solid transparent;border-bottom:4px solid transparent;border-left:4px solid #595959}.mce-menu-item .mce-menu-shortcut{display:inline-block;padding:0 10px 0 20px;color:#aaa}.mce-menu-item .mce-ico{padding-right:4px}.mce-menu-item:hover,.mce-menu-item:focus{background:#ededee}.mce-menu-item:hover .mce-menu-shortcut,.mce-menu-item:focus .mce-menu-shortcut{color:#aaa}.mce-menu-item:hover .mce-text,.mce-menu-item:focus .mce-text,.mce-menu-item:hover .mce-ico,.mce-menu-item:focus .mce-ico{color:#595959}.mce-menu-item.mce-selected{background:#ededee}.mce-menu-item.mce-selected .mce-text,.mce-menu-item.mce-selected .mce-ico{color:#595959}.mce-menu-item.mce-active.mce-menu-item-normal{background:#555c66}.mce-menu-item.mce-active.mce-menu-item-normal .mce-text,.mce-menu-item.mce-active.mce-menu-item-normal .mce-ico{color:white}.mce-menu-item.mce-active.mce-menu-item-checkbox .mce-ico{visibility:visible}.mce-menu-item.mce-disabled,.mce-menu-item.mce-disabled:hover{background:white}.mce-menu-item.mce-disabled:focus,.mce-menu-item.mce-disabled:hover:focus{background:#ededee}.mce-menu-item.mce-disabled .mce-text,.mce-menu-item.mce-disabled:hover .mce-text,.mce-menu-item.mce-disabled .mce-ico,.mce-menu-item.mce-disabled:hover .mce-ico{color:#aaa}.mce-menu-item.mce-menu-item-preview.mce-active{border-left:5px solid #555c66;background:white}.mce-menu-item.mce-menu-item-preview.mce-active .mce-text,.mce-menu-item.mce-menu-item-preview.mce-active .mce-ico{color:#595959}.mce-menu-item.mce-menu-item-preview.mce-active:hover{background:#ededee}.mce-menu-item-link{color:#093;overflow:hidden;text-overflow:ellipsis;white-space:nowrap}.mce-menu-item-link 
b{color:#093}.mce-menu-item-ellipsis{display:block;text-overflow:ellipsis;white-space:nowrap;overflow:hidden}.mce-menu-item:hover *,.mce-menu-item.mce-selected *,.mce-menu-item:focus *{color:#595959}div.mce-menu .mce-menu-item-sep,.mce-menu-item-sep:hover{border:0;padding:0;height:1px;margin:9px 1px;overflow:hidden;background:transparent;border-bottom:1px solid rgba(0,0,0,0.1);cursor:default;filter:none}div.mce-menu .mce-menu-item b{font-weight:bold}.mce-menu-item-indent-1{padding-left:20px}.mce-menu-item-indent-2{padding-left:35px}.mce-menu-item-indent-2{padding-left:35px}.mce-menu-item-indent-3{padding-left:40px}.mce-menu-item-indent-4{padding-left:45px}.mce-menu-item-indent-5{padding-left:50px}.mce-menu-item-indent-6{padding-left:55px}.mce-menu.mce-rtl{direction:rtl}.mce-rtl .mce-menu-item{text-align:right;direction:rtl;padding:6px 12px 6px 15px}.mce-rtl .mce-menu-item .mce-caret{margin-left:6px;margin-right:0;border-right:4px solid #595959;border-left:0}.mce-rtl .mce-menu-item.mce-selected .mce-caret,.mce-rtl .mce-menu-item:focus .mce-caret,.mce-rtl .mce-menu-item:hover .mce-caret{border-left-color:transparent;border-right-color:#595959}.mce-rtl .mce-menu-item .mce-ico{padding-right:0;padding-left:4px}.mce-throbber{position:absolute;top:0;left:0;width:100%;height:100%;opacity:.6;filter:alpha(opacity=60);zoom:1;background:#fff url('img/loader.gif') no-repeat center center}.mce-throbber-inline{position:static;height:50px}.mce-menu .mce-throbber-inline{height:25px;background-size:contain}.mce-menu{position:absolute;left:0;top:0;filter:progid:DXImageTransform.Microsoft.gradient(enabled = false);background:transparent;z-index:1000;padding:5px 0 5px 0;margin:-1px 0 0;min-width:180px;background:white;border:1px solid #c5c9cf;border:1px solid #e2e4e7;z-index:1002;-webkit-box-shadow:0 1px 2px rgba(0, 0, 0, 0.2);-moz-box-shadow:0 1px 2px rgba(0, 0, 0, 0.2);box-shadow:0 1px 2px rgba(0, 0, 0, 
0.2);max-height:500px;overflow:auto;overflow-x:hidden}.mce-menu.mce-animate{opacity:.01;transform:rotateY(10deg) rotateX(-10deg);transform-origin:left top}.mce-menu.mce-menu-align .mce-menu-shortcut,.mce-menu.mce-menu-align .mce-caret{position:absolute;right:0}.mce-menu i{display:none}.mce-menu-has-icons i{display:inline-block}.mce-menu.mce-in.mce-animate{opacity:1;transform:rotateY(0) rotateX(0);transition:opacity .075s ease,transform .1s ease}.mce-menu-sub-tr-tl{margin:-6px 0 0 -1px}.mce-menu-sub-br-bl{margin:6px 0 0 -1px}.mce-menu-sub-tl-tr{margin:-6px 0 0 1px}.mce-menu-sub-bl-br{margin:6px 0 0 1px}.mce-rtl .mce-menu-item .mce-ico{padding-right:0;padding-left:4px}.mce-rtl.mce-menu-align .mce-caret,.mce-rtl .mce-menu-shortcut{right:auto;left:0}.mce-listbox button{text-align:left;padding-right:20px;position:relative}.mce-listbox .mce-caret{position:absolute;margin-top:-2px;right:8px;top:50%}.mce-rtl .mce-listbox .mce-caret{right:auto;left:8px}.mce-rtl .mce-listbox button{padding-right:10px;padding-left:20px}.mce-container-body .mce-resizehandle{position:absolute;right:0;bottom:0;width:16px;height:16px;visibility:visible;cursor:s-resize;margin:0}.mce-container-body .mce-resizehandle-both{cursor:se-resize}i.mce-i-resize{color:#595959}.mce-selectbox{background:#fff;border:1px solid #c5c5c5}.mce-slider{border:1px solid #c5c5c5;background:#fff;width:100px;height:10px;position:relative;display:block}.mce-slider.mce-vertical{width:10px;height:100px}.mce-slider-handle{border:1px solid #c5c5c5;background:#e6e6e6;display:block;width:13px;height:13px;position:absolute;top:0;left:0;margin-left:-1px;margin-top:-2px}.mce-slider-handle:focus{border-color:#2276d2}.mce-spacer{visibility:hidden}.mce-splitbtn:hover .mce-open{border-left:1px solid #e2e4e7}.mce-splitbtn .mce-open{border-left:1px solid transparent;padding-right:4px;padding-left:4px}.mce-splitbtn .mce-open:focus{border-left:1px solid #e2e4e7}.mce-splitbtn .mce-open:hover,.mce-splitbtn .mce-open:active{border-left:1px 
solid #e2e4e7}.mce-splitbtn.mce-active:hover .mce-open{border-left:1px solid white}.mce-splitbtn.mce-opened{border-color:#e2e4e7}.mce-splitbtn.mce-btn-small .mce-open{padding:0 3px 0 3px}.mce-rtl .mce-splitbtn{direction:rtl;text-align:right}.mce-rtl .mce-splitbtn button{padding-right:4px;padding-left:4px}.mce-rtl .mce-splitbtn .mce-open{border-left:0}.mce-stack-layout-item{display:block}.mce-tabs{display:block;border-bottom:1px solid #c5c5c5}.mce-tabs,.mce-tabs+.mce-container-body{background:#fff}.mce-tab{display:inline-block;*display:inline;*zoom:1;border:1px solid #c5c5c5;border-width:0 1px 0 0;background:#fff;padding:8px 15px;text-shadow:0 1px 1px rgba(255,255,255,0.75);height:13px;cursor:pointer}.mce-tab:hover{background:#FDFDFD}.mce-tab.mce-active{background:#FDFDFD;border-bottom-color:transparent;margin-bottom:-1px;height:14px}.mce-tab:focus{color:#2276d2}.mce-rtl .mce-tabs{text-align:right;direction:rtl}.mce-rtl .mce-tab{border-width:0 0 0 1px}.mce-textbox{background:#fff;border:1px solid #c5c5c5;-webkit-box-shadow:none;-moz-box-shadow:none;box-shadow:none;display:inline-block;-webkit-transition:border linear .2s, box-shadow linear .2s;transition:border linear .2s, box-shadow linear .2s;height:28px;resize:none;padding:0 4px 0 4px;white-space:pre-wrap;*white-space:pre;color:#595959}.mce-textbox:focus,.mce-textbox.mce-focus{border-color:#2276d2;-webkit-box-shadow:none;-moz-box-shadow:none;box-shadow:none}.mce-placeholder .mce-textbox{color:#aaa}.mce-textbox.mce-multiline{padding:4px;height:auto}.mce-textbox.mce-disabled{color:#bdbdbd}.mce-rtl .mce-textbox{text-align:right;direction:rtl}.mce-dropzone{border:3px dashed gray;text-align:center}.mce-dropzone 
span{text-transform:uppercase;display:inline-block;vertical-align:middle}.mce-dropzone:after{content:"";height:100%;display:inline-block;vertical-align:middle}.mce-dropzone.mce-disabled{opacity:.4;filter:alpha(opacity=40);zoom:1}.mce-dropzone.mce-disabled.mce-dragenter{cursor:not-allowed}.mce-browsebutton{position:relative;overflow:hidden}.mce-browsebutton button{position:relative;z-index:1}.mce-browsebutton input{opacity:0;filter:alpha(opacity=0);zoom:1;position:absolute;top:0;left:0;width:100%;height:100%;z-index:0}@font-face{font-family:'tinymce';src:url('fonts/tinymce.eot');src:url('fonts/tinymce.eot?#iefix') format('embedded-opentype'),url('fonts/tinymce.woff') format('woff'),url('fonts/tinymce.ttf') format('truetype'),url('fonts/tinymce.svg#tinymce') format('svg');font-weight:normal;font-style:normal}@font-face{font-family:'tinymce-small';src:url('fonts/tinymce-small.eot');src:url('fonts/tinymce-small.eot?#iefix') format('embedded-opentype'),url('fonts/tinymce-small.woff') format('woff'),url('fonts/tinymce-small.ttf') format('truetype'),url('fonts/tinymce-small.svg#tinymce') format('svg');font-weight:normal;font-style:normal}.mce-ico{font-family:'tinymce',Arial;font-style:normal;font-weight:normal;font-variant:normal;font-size:16px;line-height:16px;speak:none;vertical-align:text-top;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale;display:inline-block;background:transparent center center;background-size:cover;width:16px;height:16px;color:#595959}.mce-btn-small 
.mce-ico{font-family:'tinymce-small',Arial}.mce-i-save:before{content:"\e000"}.mce-i-newdocument:before{content:"\e001"}.mce-i-fullpage:before{content:"\e002"}.mce-i-alignleft:before{content:"\e003"}.mce-i-aligncenter:before{content:"\e004"}.mce-i-alignright:before{content:"\e005"}.mce-i-alignjustify:before{content:"\e006"}.mce-i-alignnone:before{content:"\e003"}.mce-i-cut:before{content:"\e007"}.mce-i-paste:before{content:"\e008"}.mce-i-searchreplace:before{content:"\e009"}.mce-i-bullist:before{content:"\e00a"}.mce-i-numlist:before{content:"\e00b"}.mce-i-indent:before{content:"\e00c"}.mce-i-outdent:before{content:"\e00d"}.mce-i-blockquote:before{content:"\e00e"}.mce-i-undo:before{content:"\e00f"}.mce-i-redo:before{content:"\e010"}.mce-i-link:before{content:"\e011"}.mce-i-unlink:before{content:"\e012"}.mce-i-anchor:before{content:"\e013"}.mce-i-image:before{content:"\e014"}.mce-i-media:before{content:"\e015"}.mce-i-help:before{content:"\e016"}.mce-i-code:before{content:"\e017"}.mce-i-insertdatetime:before{content:"\e018"}.mce-i-preview:before{content:"\e019"}.mce-i-forecolor:before{content:"\e01a"}.mce-i-backcolor:before{content:"\e01a"}.mce-i-table:before{content:"\e01b"}.mce-i-hr:before{content:"\e01c"}.mce-i-removeformat:before{content:"\e01d"}.mce-i-subscript:before{content:"\e01e"}.mce-i-superscript:before{content:"\e01f"}.mce-i-charmap:before{content:"\e020"}.mce-i-emoticons:before{content:"\e021"}.mce-i-print:before{content:"\e022"}.mce-i-fullscreen:before{content:"\e023"}.mce-i-spellchecker:before{content:"\e024"}.mce-i-nonbreaking:before{content:"\e025"}.mce-i-template:before{content:"\e026"}.mce-i-pagebreak:before{content:"\e027"}.mce-i-restoredraft:before{content:"\e028"}.mce-i-bold:before{content:"\e02a"}.mce-i-italic:before{content:"\e02b"}.mce-i-underline:before{content:"\e02c"}.mce-i-strikethrough:before{content:"\e02d"}.mce-i-visualchars:before{content:"\e02e"}.mce-i-visualblocks:before{content:"\e02e"}.mce-i-ltr:before{content:"\e02f"}.mce-i-rtl:bef
ore{content:"\e030"}.mce-i-copy:before{content:"\e031"}.mce-i-resize:before{content:"\e032"}.mce-i-browse:before{content:"\e034"}.mce-i-pastetext:before{content:"\e035"}.mce-i-rotateleft:before{content:"\eaa8"}.mce-i-rotateright:before{content:"\eaa9"}.mce-i-crop:before{content:"\ee78"}.mce-i-editimage:before{content:"\e915"}.mce-i-options:before{content:"\ec6a"}.mce-i-flipv:before{content:"\eaaa"}.mce-i-fliph:before{content:"\eaac"}.mce-i-zoomin:before{content:"\eb35"}.mce-i-zoomout:before{content:"\eb36"}.mce-i-sun:before{content:"\eccc"}.mce-i-moon:before{content:"\eccd"}.mce-i-arrowleft:before{content:"\edc0"}.mce-i-arrowright:before{content:"\e93c"}.mce-i-drop:before{content:"\e935"}.mce-i-contrast:before{content:"\ecd4"}.mce-i-sharpen:before{content:"\eba7"}.mce-i-resize2:before{content:"\edf9"}.mce-i-orientation:before{content:"\e601"}.mce-i-invert:before{content:"\e602"}.mce-i-gamma:before{content:"\e600"}.mce-i-remove:before{content:"\ed6a"}.mce-i-tablerowprops:before{content:"\e604"}.mce-i-tablecellprops:before{content:"\e605"}.mce-i-table2:before{content:"\e606"}.mce-i-tablemergecells:before{content:"\e607"}.mce-i-tableinsertcolbefore:before{content:"\e608"}.mce-i-tableinsertcolafter:before{content:"\e609"}.mce-i-tableinsertrowbefore:before{content:"\e60a"}.mce-i-tableinsertrowafter:before{content:"\e60b"}.mce-i-tablesplitcells:before{content:"\e60d"}.mce-i-tabledelete:before{content:"\e60e"}.mce-i-tableleftheader:before{content:"\e62a"}.mce-i-tabletopheader:before{content:"\e62b"}.mce-i-tabledeleterow:before{content:"\e800"}.mce-i-tabledeletecol:before{content:"\e801"}.mce-i-codesample:before{content:"\e603"}.mce-i-fill:before{content:"\e902"}.mce-i-borderwidth:before{content:"\e903"}.mce-i-line:before{content:"\e904"}.mce-i-count:before{content:"\e905"}.mce-i-translate:before{content:"\e907"}.mce-i-drag:before{content:"\e908"}.mce-i-home:before{content:"\e90b"}.mce-i-upload:before{content:"\e914"}.mce-i-bubble:before{content:"\e91c"}.mce-i-user:before{c
ontent:"\e91d"}.mce-i-lock:before{content:"\e926"}.mce-i-unlock:before{content:"\e927"}.mce-i-settings:before{content:"\e928"}.mce-i-remove2:before{content:"\e92a"}.mce-i-menu:before{content:"\e92d"}.mce-i-warning:before{content:"\e930"}.mce-i-question:before{content:"\e931"}.mce-i-pluscircle:before{content:"\e932"}.mce-i-info:before{content:"\e933"}.mce-i-notice:before{content:"\e934"}.mce-i-arrowup:before{content:"\e93b"}.mce-i-arrowdown:before{content:"\e93d"}.mce-i-arrowup2:before{content:"\e93f"}.mce-i-arrowdown2:before{content:"\e940"}.mce-i-menu2:before{content:"\e941"}.mce-i-newtab:before{content:"\e961"}.mce-i-a11y:before{content:"\e900"}.mce-i-plus:before{content:"\e93a"}.mce-i-insert:before{content:"\e93a"}.mce-i-minus:before{content:"\e939"}.mce-i-books:before{content:"\e911"}.mce-i-reload:before{content:"\e906"}.mce-i-toc:before{content:"\e901"}.mce-i-checkmark:before{content:"\e033"}.mce-i-checkbox:before,.mce-i-selected:before{content:"\e033"}.mce-i-insert{font-size:14px}.mce-i-selected{visibility:hidden}i.mce-i-backcolor{text-shadow:none;background:#BBB}.mce-rtl .mce-filepicker input{direction:ltr}/*# sourceMappingURL=skin.min.css.map */
```
|
Palam is a historical village near Dwarka in Delhi, India.
References
Geography of Delhi
|
Charles Reginald Jackson (April 6, 1903 – September 21, 1968) was an American writer. He wrote the 1944 novel The Lost Weekend.
Early life
Charles R. Jackson was born in Summit, New Jersey, on April 6, 1903, the son of Frederick George and Sarah Williams Jackson. His family moved to Newark, New York, in 1907, and nine years later his older sister, Thelma, and younger brother, Richard, were killed while riding in a car that was struck by an express train. He graduated from Newark High School in 1921. He attended Syracuse University, joining a fraternity there, but left during his freshman year after a "furtive sexual encounter with a fellow member of his fraternity, who then spread word of the incident in such a way that only Jackson came in for public disgrace"; a fictionalized version of that experience was later incorporated into The Lost Weekend.
As a young man he worked as an editor for local newspapers and in various bookstores in Chicago and New York prior to falling ill with tuberculosis. From 1927 to 1931, Jackson was confined to sanatoriums and eventually recovered in Davos, Switzerland. His battle with tuberculosis cost him a lung and served as a catalyst for his alcoholism.
Career
He returned to New York at the height of the Great Depression and his difficulty in finding work spurred on his binge drinking. His battle to stop drinking started in late 1936 and was largely won by 1938. On March 4, 1938, Jackson married magazine writer Rhoda Booth. They later had two daughters, Sarah (born 1940) and Kate (born 1943).
During this time he worked as a freelance writer and wrote radio scripts. Jackson's first published story, "Palm Sunday", appeared in the Partisan Review in 1939. It focused on the debauched organist of a church the narrators attended as children.
In the 1940s, Jackson wrote a trio of novels, beginning with The Lost Weekend published by Farrar & Rinehart in 1944. The autobiographical novel chronicled a struggling writer's five-day drinking binge. It earned Jackson lasting recognition. While working on The Lost Weekend, Jackson earned as much as $1000 per week writing scripts for the radio soap opera Sweet River, about a widowed minister and his two sons.
In 1945, Paramount Pictures paid $35,000 for the rights to adapt The Lost Weekend into a film of the same name. The Academy Award-winning film was directed by Billy Wilder and starred Ray Milland in the lead role of Don Birnam. At the height of his career, Jackson lectured at various colleges.
Jackson's second published novel of the 1940s, titled The Fall of Valor, was released in 1946 and takes its name from a passage in Herman Melville's Moby-Dick. Set in 1943, it detailed a professor's obsession with a young, handsome Marine. The Fall of Valor received mixed reviews, and, though sales were respectable, was considerably less successful than Jackson's famous first novel. Jackson's The Outer Edges was released in 1948 and dealt with the gruesome rape and murder of two girls in Westchester County, New York. The Outer Edges also received mixed reviews, and sales were poor relative to his previous novels. Jackson's later works included two collections of short stories, The Sunnier Side: Twelve Arcadian Tales (1950) and Earthly Creatures (1953).
Later years
Throughout his career, Jackson continued to struggle with an addiction to alcohol and pills. Over the years, he underwent psychoanalysis to help him kick his addictions. After the success of The Lost Weekend, Jackson began taking pills (mainly the sedative Seconal) and drinking again. He later told his wife that unless he was under the influence of Seconal, he would suffer from writer's block and become depressed.
In September 1952, he attempted suicide and was committed to Bellevue Hospital. He was readmitted four months later after suffering a nervous breakdown. After his release, he went on an alcohol and paraldehyde binge during which he wrote six short stories and began writing A Second-Hand Life. In 1953, he checked into an alcoholism clinic and joined Alcoholics Anonymous (AA). Jackson later also spoke about alcoholism to large groups, sharing his experience. A recording of his talk in Cleveland, Ohio in May 1959 is still distributed in the AA community. He was the first speaker in Alcoholics Anonymous to address drug dependence (barbiturates and paraldehyde) openly as part of his story.
By the mid-1950s, Jackson was sober but was no longer writing. As a result, he and his family began struggling financially. He and his wife had to sell their New Hampshire home and eventually moved to Sandy Hook, Connecticut. Jackson's wife got a job at the Yale Center of Alcohol Studies while Jackson moved to New York City, where he rented an apartment at The Dakota. He continued to attend Alcoholics Anonymous meetings and attempted to begin writing again. In the early 1960s, three of his short stories appeared in McCall's magazine but Jackson still struggled with periodic bouts of writer's block. He later worked as a story editor for the anthology television series Kraft Television Theatre and got a job teaching writing at Rutgers University.
A long-time heavy smoker, Jackson suffered from chronic obstructive pulmonary disease. Towards the end of his life, he was admitted to the Will Rogers Memorial Hospital in Saranac Lake, New York, after a relapse of tuberculosis. The Will Rogers Institute filmed a short theatrical release called "Place in the Country" about his second visit to the hospital. After his release, Macmillan Publishers gave him an advance for a new book. Jackson moved to the Hotel Chelsea and resumed work on A Second-Hand Life, a novel he had begun writing some 15 years earlier. Upon its release, the book received mediocre reviews but sold well.
Death
On September 21, 1968, Jackson died of barbiturate poisoning at St. Vincent's Hospital in New York City. His death was ruled a suicide. At the time of his death, Jackson was working on a sequel to The Lost Weekend entitled Farther and Wider.
Jackson had relapsed into alcoholism during the months before his death and had become estranged from his family. Closeted for the greater part of his life, he came to terms with his bisexuality in his later years, identifying openly as bisexual and living with his male lover from 1965.
Bibliography
The Lost Weekend (1944)
The Fall of Valor (1946)
The Outer Edges (1948)
The Sunnier Side: Twelve Arcadian Tales (1950)
Earthly Creatures (1953)
A Second-Hand Life (1967)
References
Bibliography
External links
The Papers of Charles R. Jackson in the Dartmouth College Library
Charles Jackson's "The Fall of Valor" (Archived)
Discussion of Charles Jackson by Blake Bailey, biographer and author of Farther & Wilder: The Lost Weekends and Literary Dreams of Charles Jackson, on WNYC's Leonard Lopate Show, March 20, 2013.
1903 births
1968 suicides
20th-century American male writers
20th-century American novelists
20th-century American screenwriters
American male novelists
American male screenwriters
American male television writers
American radio writers
American television writers
Barbiturates-related deaths
Bisexual men
Drug-related suicides in New York City
American LGBT novelists
LGBT people from New Jersey
Novelists from New Jersey
Novelists from New York (state)
People from Newark, New York
People from Sandy Hook, Connecticut
Screenwriters from New York (state)
Suicides in New York City
Writers from New York City
Writers from Summit, New Jersey
Drug-related deaths in New York City
1968 deaths
20th-century American LGBT people
American bisexual writers
|
```python
from unittest import mock
from prowler.providers.aws.services.efs.efs_service import FileSystem
# Mock Test Region
AWS_REGION = "eu-west-1"
AWS_ACCOUNT_NUMBER = "123456789012"
file_system_id = "fs-c7a0456e"
filesystem_policy = {
"Id": "1",
"Statement": [
{
"Effect": "Allow",
"Action": ["elasticfilesystem:ClientMount"],
"Principal": {"AWS": f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root"},
}
],
}
filesystem_invalid_policy = {
"Id": "1",
"Statement": [
{
"Effect": "Allow",
"Action": ["elasticfilesystem:ClientMount"],
"Principal": {"AWS": "*"},
}
],
}
# path_to_url#what-is-a-public-policy
filesystem_policy_with_source_arn_condition = {
"Version": "2012-10-17",
"Id": "efs-policy-wizard-15ad9567-2546-4bbb-8168-5541b6fc0e55",
"Statement": [
{
"Sid": "efs-statement-14a7191c-9401-40e7-a388-6af6cfb7dd9c",
"Effect": "Allow",
"Principal": {"AWS": "*"},
"Action": [
"elasticfilesystem:ClientMount",
"elasticfilesystem:ClientWrite",
"elasticfilesystem:ClientRootAccess",
],
"Condition": {
"ArnEquals": {
"aws:SourceArn": f"arn:aws:iam::{AWS_ACCOUNT_NUMBER}:root"
}
},
}
],
}
# path_to_url#what-is-a-public-policy
filesystem_policy_with_mount_target_condition = {
"Version": "2012-10-17",
"Id": "efs-policy-wizard-15ad9567-2546-4bbb-8168-5541b6fc0e55",
"Statement": [
{
"Sid": "efs-statement-14a7191c-9401-40e7-a388-6af6cfb7dd9c",
"Effect": "Allow",
"Principal": {"AWS": "*"},
"Action": [
"elasticfilesystem:ClientMount",
"elasticfilesystem:ClientWrite",
"elasticfilesystem:ClientRootAccess",
],
"Condition": {"Bool": {"elasticfilesystem:AccessedViaMountTarget": "true"}},
}
],
}
class Test_efs_not_publicly_accessible:
def test_efs_valid_policy(self):
efs_client = mock.MagicMock()
efs_arn = f"arn:aws:elasticfilesystem:{AWS_REGION}:{AWS_ACCOUNT_NUMBER}:file-system/{file_system_id}"
efs_client.filesystems = [
FileSystem(
id=file_system_id,
arn=efs_arn,
region=AWS_REGION,
policy=filesystem_policy,
backup_policy=None,
encrypted=True,
)
]
with mock.patch(
"prowler.providers.aws.services.efs.efs_service.EFS",
efs_client,
):
from prowler.providers.aws.services.efs.efs_not_publicly_accessible.efs_not_publicly_accessible import (
efs_not_publicly_accessible,
)
check = efs_not_publicly_accessible()
result = check.execute()
assert len(result) == 1
assert result[0].status == "PASS"
assert (
result[0].status_extended
== f"EFS {file_system_id} has a policy which does not allow access to any client within the VPC."
)
assert result[0].resource_id == file_system_id
assert result[0].resource_arn == efs_arn
assert result[0].region == AWS_REGION
assert result[0].resource_tags == []
def test_efs_valid_policy_with_mount_target_condition(self):
efs_client = mock.MagicMock()
efs_arn = f"arn:aws:elasticfilesystem:{AWS_REGION}:{AWS_ACCOUNT_NUMBER}:file-system/{file_system_id}"
efs_client.filesystems = [
FileSystem(
id=file_system_id,
arn=efs_arn,
region=AWS_REGION,
policy=filesystem_policy_with_mount_target_condition,
backup_policy=None,
encrypted=True,
)
]
with mock.patch(
"prowler.providers.aws.services.efs.efs_service.EFS",
efs_client,
):
from prowler.providers.aws.services.efs.efs_not_publicly_accessible.efs_not_publicly_accessible import (
efs_not_publicly_accessible,
)
check = efs_not_publicly_accessible()
result = check.execute()
assert len(result) == 1
assert result[0].status == "PASS"
assert (
result[0].status_extended
== f"EFS {file_system_id} has a policy which does not allow access to any client within the VPC."
)
assert result[0].resource_id == file_system_id
assert result[0].resource_arn == efs_arn
assert result[0].region == AWS_REGION
assert result[0].resource_tags == []
def test_efs_valid_policy_with_source_arn_condition(self):
efs_client = mock.MagicMock()
efs_arn = f"arn:aws:elasticfilesystem:{AWS_REGION}:{AWS_ACCOUNT_NUMBER}:file-system/{file_system_id}"
efs_client.filesystems = [
FileSystem(
id=file_system_id,
arn=efs_arn,
region=AWS_REGION,
policy=filesystem_policy_with_source_arn_condition,
backup_policy=None,
encrypted=True,
)
]
with mock.patch(
"prowler.providers.aws.services.efs.efs_service.EFS",
efs_client,
):
from prowler.providers.aws.services.efs.efs_not_publicly_accessible.efs_not_publicly_accessible import (
efs_not_publicly_accessible,
)
check = efs_not_publicly_accessible()
result = check.execute()
assert len(result) == 1
assert result[0].status == "PASS"
assert (
result[0].status_extended
== f"EFS {file_system_id} has a policy which does not allow access to any client within the VPC."
)
assert result[0].resource_id == file_system_id
assert result[0].resource_arn == efs_arn
assert result[0].region == AWS_REGION
assert result[0].resource_tags == []
def test_efs_invalid_policy(self):
efs_client = mock.MagicMock()
efs_arn = f"arn:aws:elasticfilesystem:{AWS_REGION}:{AWS_ACCOUNT_NUMBER}:file-system/{file_system_id}"
efs_client.filesystems = [
FileSystem(
id=file_system_id,
arn=efs_arn,
region=AWS_REGION,
policy=filesystem_invalid_policy,
backup_policy=None,
encrypted=True,
)
]
with mock.patch(
"prowler.providers.aws.services.efs.efs_service.EFS",
efs_client,
):
from prowler.providers.aws.services.efs.efs_not_publicly_accessible.efs_not_publicly_accessible import (
efs_not_publicly_accessible,
)
check = efs_not_publicly_accessible()
result = check.execute()
assert len(result) == 1
assert result[0].status == "FAIL"
assert (
result[0].status_extended
== f"EFS {file_system_id} has a policy which allows access to any client within the VPC."
)
assert result[0].resource_id == file_system_id
assert result[0].resource_arn == efs_arn
assert result[0].region == AWS_REGION
assert result[0].resource_tags == []
def test_efs_no_policy(self):
efs_client = mock.MagicMock()
efs_arn = f"arn:aws:elasticfilesystem:{AWS_REGION}:{AWS_ACCOUNT_NUMBER}:file-system/{file_system_id}"
efs_client.filesystems = [
FileSystem(
id=file_system_id,
arn=efs_arn,
region=AWS_REGION,
policy=None,
backup_policy=None,
encrypted=True,
)
]
with mock.patch(
"prowler.providers.aws.services.efs.efs_service.EFS",
efs_client,
):
from prowler.providers.aws.services.efs.efs_not_publicly_accessible.efs_not_publicly_accessible import (
efs_not_publicly_accessible,
)
check = efs_not_publicly_accessible()
result = check.execute()
assert len(result) == 1
assert result[0].status == "FAIL"
assert (
result[0].status_extended
== f"EFS {file_system_id} doesn't have any policy which means it grants full access to any client within the VPC."
)
assert result[0].resource_id == file_system_id
assert result[0].resource_arn == efs_arn
assert result[0].region == AWS_REGION
assert result[0].resource_tags == []
```
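The contrast between the policies above is what the check keys on: an Allow statement granting the wildcard principal with no restricting Condition makes the file system public, while a scoped principal or a Condition (on `aws:SourceArn` or `elasticfilesystem:AccessedViaMountTarget`) does not. A minimal sketch of that evaluation logic — a hypothetical helper, not Prowler's actual implementation:

```python
# Hypothetical sketch of the public-policy evaluation the tests above
# exercise; Prowler's real check is more thorough than this.
def is_public(policy):
    """Treat a policy as public when an Allow statement grants the
    wildcard principal without any restricting Condition."""
    if policy is None:
        # No policy: EFS grants full access to any client within the VPC.
        return True
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal", {})
        aws = principal.get("AWS") if isinstance(principal, dict) else principal
        if aws == "*" and "Condition" not in stmt:
            return True
    return False

# Wildcard without a condition is public; scoped or conditioned is not.
assert is_public(None)
assert is_public({"Statement": [{"Effect": "Allow", "Principal": {"AWS": "*"}}]})
assert not is_public({"Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "*"},
    "Condition": {"Bool": {"elasticfilesystem:AccessedViaMountTarget": "true"}},
}]})
```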
|
Ouled Aissa () (also written Ouled Aïssa) is a town and commune in Charouine District, Adrar Province, in south-central Algeria. According to the 2008 census it has a population of 7,034, up from 5,497 in 1998, with an annual growth rate of 2.5%.
Geography
Ouled Aissa commune lies at an elevation of about in the Gourara region of northern Adrar Province. The commune consists of several villages (including the town of Ouled Aissa) that populate small oases between Talmine commune to the west and Ksar Kaddour and Ouled Said communes to the east. The oases mainly lie at the northern edge of the habitable Gourara region just south of the Grand Erg Occidental, a large area of sand dunes stretching well into Béchar and El Bayadh provinces.
Climate
Ouled Aissa has a hot desert climate (Köppen climate classification BWh), with extremely hot summers and mild winters, and very little precipitation throughout the year.
Transportation
The main road through the commune is a provincial road that starts at the village of Taouennza, passes through Haiha, Yako and Ouled Aissa town before connecting to the N51 national highway northeast of Charouine. A local road branches off south of Ouled Aissa town connecting to Djimjane.
Education
3.2% of the population has a tertiary education, and another 7.3% has completed secondary education. The overall literacy rate is 63.7%, and is 81.0% among males and 45.5% among females.
Localities
As of 1984, the commune was composed of nine localities:
Ouled Aïssa
Tasfaout
Guentour
Djimjane
Taouennza
Haïha
Yako
El Kort
Lahmer
References
Neighbouring towns and cities
Communes of Adrar Province
Cities in Algeria
|
```csharp
using System.Collections.Generic;
using System.Runtime.Serialization;
namespace Amazon.Lambda.CognitoEvents
{
/// <summary>
/// path_to_url
/// </summary>
[DataContract]
public class IdTokenGeneration
{
/// <summary>
/// A map of one or more key-value pairs of claims to add or override. For group related claims, use groupOverrideDetails instead.
/// </summary>
[DataMember(Name = "claimsToAddOrOverride")]
#if NETCOREAPP3_1_OR_GREATER
[System.Text.Json.Serialization.JsonPropertyName("claimsToAddOrOverride")]
#endif
public Dictionary<string, string> ClaimsToAddOrOverride { get; set; } = new Dictionary<string, string>();
/// <summary>
/// A list that contains claims to be suppressed from the identity token.
/// </summary>
[DataMember(Name = "claimsToSuppress")]
#if NETCOREAPP3_1_OR_GREATER
[System.Text.Json.Serialization.JsonPropertyName("claimsToSuppress")]
#endif
public List<string> ClaimsToSuppress { get; set; } = new List<string>();
}
}
```
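The `DataMember` names above bind to the `claimsToAddOrOverride` and `claimsToSuppress` keys of the Cognito pre token generation event. A small sketch of the JSON shape being deserialized — the claim names and values below are illustrative, not from the event schema:

```python
import json

# Illustrative fragment of the claims-override section of a Cognito
# pre token generation response; keys match the DataMember attributes above.
claims_override = {
    "claimsToAddOrOverride": {"department": "engineering"},
    "claimsToSuppress": ["email", "phone_number"],
}
payload = json.dumps(claims_override)
decoded = json.loads(payload)
assert decoded["claimsToAddOrOverride"]["department"] == "engineering"
assert decoded["claimsToSuppress"] == ["email", "phone_number"]
```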
|
Mustafa Burak Bozan (born 23 August 2000) is a Turkish professional footballer who plays as a goalkeeper for Turkish club Gaziantep.
Career
Bozan is a youth product of Gaziantep and signed his first professional contract with the club in February 2018. He made his senior debut for Gaziantep in a 3–0 Turkish Cup win over Turgutluspor on 31 October 2019, and his Süper Lig debut in a 1–0 win over Hatayspor on 15 May 2021.
International career
Bozan represented the Turkey U18s once, in a friendly 2–1 loss to the Russia U18s on 23 February 2018.
References
External links
2000 births
Living people
People from Mardin
Turkish men's footballers
Men's association football goalkeepers
Turkey men's youth international footballers
Gaziantep F.K. footballers
Tuzlaspor players
Süper Lig players
TFF First League players
|
Joseph Francis Cicero (November 18, 1910 – March 30, 1983) was an American professional baseball player and scout. He was a backup outfielder in Major League Baseball who played for the Boston Red Sox and Philadelphia Athletics. Listed at , 167 lb., Cicero batted and threw right-handed. He was born in Atlantic City, New Jersey and attended Atlantic City High School.
An all-around high school athletic standout, Cicero spent most of his 19-year baseball career in minor league baseball, with two brief stops in the major leagues 15 years apart. He signed a contract with the Boston Red Sox when he was only 16, and reached the majors in 1929 with Boston, hitting .312 with a .500 slugging average in just 10 games, an especially impressive accomplishment given that Cicero was the youngest player in the major leagues that season at age 18. The next season, he hit .167 and also lost his youngest player title to Hank Greenberg. After that, he spent the next 14 years in the minors.
In May 1944, while playing for the Newark Bears of the International League, Cicero hit three home runs in a single game, including two grand slams, and drove in 10 runs to lead his team to a 17–8 victory over the Montreal Royals. At the end of the season he was signed by the Philadelphia Athletics, appearing for them in 12 games during 1945, his last major league season.
In 1945 he set a (post-1914) record for a gap of over 14 years between major league hits.
In three major league seasons, Cicero hit .222 (18-for-81) in 40 games, including eight RBI, 14 runs, three doubles, and four triples without any home runs.
A vision problem prevented Cicero from serving in the military during World War II. During the baseball off-season he played semipro football, played winter league baseball in Panama until 1952, and served as a scout for the Brooklyn Dodgers between 1953 and 1954.
Cicero died in Clearwater, Florida, at the age of 72.
See also
Boston Red Sox all-time roster
References
Sources
Baseball Library
Baseball Reference
Retrosheet
This Day in Baseball History
The Deadball Era
1910 births
1983 deaths
Atlantic City High School alumni
Baseball players from Atlantic City, New Jersey
Boston Red Sox players
Brooklyn Dodgers scouts
Elmira Pioneers players
Harrisburg Senators players
Hazleton Mountaineers players
Hazleton Red Sox players
Indianapolis Indians players
Major League Baseball outfielders
Minneapolis Millers (baseball) players
Nashville Vols players
Newark Bears (International League) players
Peoria Tractors players
Philadelphia Athletics players
Pittsfield Hillies players
Quebec Athletics players
Reading Red Sox players
St. Hyacinthe Saints players
Scranton Miners players
Syracuse Chiefs players
Wilkes-Barre Barons players
Williamsport Grays players
|
Rane was a progressive pop jam band based in Hartford, CT. The band formed in 1995 and released eight albums, including a self-titled album not officially recognized as part of the band's discography. The band also created its own record company, Tides Records, in 1998, the same year it released its first full-length album.
Overview
The five member band features:
Alan Veniscofsky (guitar, vocals)
Ryan 'Bowman' Bowman (guitar, vocals)
Dan Prindle (bass, cello, vocals)
Bryan Kelly (drums)
Kurt Rinaldi (percussion, marimba)
Previous Members:
Travis LaMothe (drums)
Bruce Menard (drums)
Discography
Magnetic North (2005)
Telescope EP (2005)
The Hope Seed (2003)
From the Vine vol. 1 (2003)
From the Vine vol. 2 (2003)
Camelopardalis (2000)
At War With The Moon (1998)
Rane (1997)
External links
thesoundofrane.com
tidesrecords.com
Rane collection at the Internet Archive's live music archive
References
Jam bands
|
Heat flux measurements of thermal insulation are applied in laboratory and industrial environments to obtain reference or in-situ measurements of the thermal properties of an insulation material.
Thermal insulation is tested using nondestructive testing techniques relying on heat flux sensors. Procedures and requirements for in-situ measurements are standardized in ASTM C1041 standard: "Standard Practice for In-Situ Measurements of Heat Flux in Industrial Thermal Insulation Using Heat Flux Transducers".
Laboratory methods
On-site methods
On-site heat flux measurements are often focused on testing the thermal transport properties of, for example, pipes, tanks, ovens and boilers, by calculating the heat flux q or the apparent thermal conductivity. The real-time energy gain or loss is measured under pseudo steady-state conditions with minimal disturbance by a heat flux transducer (HFT). This on-site method is for flat surfaces (non-pipes) only.
Measurement procedure
Placement of the HFT:
The sensor should be placed on an area of insulation that represents the overall system. For example, it should not be placed close to an inlet or outlet of a boiler or near a heating element.
Shield the sensor from other sources of heat flux that are not relevant to the measurement, e.g. solar radiation.
Make sure that the HFT is connected to the insulation surface via thermal paste or other conductive material. The emittance of the HFT should match the emittance of the surface as closely as possible. Air or other material between the sensor and the surface of measurement could lead to measurement errors.
Pre-measurements:
Measure the thickness of the insulation material to the nearest millimetre.
Log ambient weather conditions if necessary. Humidity, air movement and precipitation may be of interest for the interpretation of the results.
Measure the temperature of the insulation surface near the sensor and the temperature at the inside of the insulation material, i.e. the process surface.
After these preparations, connect the sensor to a datalogger or integrating voltmeter and wait until pseudo steady-state is achieved. It is advised to average the readings over a short time period once steady-state is achieved. This voltage measurement is the final measurement, but for good measure these steps should be repeated at multiple relevant locations on the insulation.
Calculation and precision
The heat flux can be calculated from the voltage as q = V / S, where:
V is the voltage measured by the HFT (measured in volt, V)
S is the sensitivity of the HFT (measured in volt per watt per square meter, V/(W/m²))
The apparent thermal conductivity λ can be calculated as λ = q · D / (T_in − T_out), where:
q is the heat flux calculated from the HFT (measured in watt per square meter, W/m²)
D is the thickness of the insulation material (measured in millimeter, mm)
T_in is the temperature of the process surface, the inside of the material
T_out is the temperature of the surface near the HFT, the outside of the material
The interpretation and precision of the results depends on the section of measurement, the choice of HFT and external conditions. The correct heat flux sensor and measurement test section are of importance for a good in-situ measurement and should be based on manufacturer recommendations, past experience and careful consideration of the testing area.
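The two calculations above can be combined into a short worked example. The reading, sensitivity, thickness and temperatures below are illustrative values, not taken from the standard; the thickness is expressed in metres here so the conductivity comes out in W/(m·K):

```python
def heat_flux(voltage, sensitivity):
    """q = V / S: sensor voltage (V) divided by sensitivity (V per W/m^2)."""
    return voltage / sensitivity

def apparent_conductivity(q, thickness_m, t_inside, t_outside):
    """lambda = q * D / (T_in - T_out), with D in metres."""
    return q * thickness_m / (t_inside - t_outside)

# Illustrative numbers: a 2 mV reading on a 25 uV/(W/m^2) sensor over
# 50 mm of insulation with a 50 K temperature drop across it.
q = heat_flux(0.002, 2.5e-5)                      # about 80 W/m^2
lam = apparent_conductivity(q, 0.05, 80.0, 30.0)  # about 0.08 W/(m K)
print(round(q, 3), round(lam, 5))
```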
Standards
ASTM C1041: Standard Practice for In-Situ Measurements of Heat Flux in Industrial Thermal Insulation Using Heat Flux Transducers
See also
R-value (insulation)
References
Bibliography
Johannesson, G., "Heat Flow Measurements, Thermoelectrical Meters, Function Principles, and Sources of Error", Division of Building Technology, Lund Institute of Technology, Report TUBH-3003, Lund, Sweden, 1979. (Draft Translation, March 1982, U.S. Army Corps of Engineers)
Poppendiek, H. F., “Why Not Measure Heat Flux Directly?”, Environmental Quarterly 15, No. 1, March 1, 1969.
Gilbo, C. F., “Conductimeters, Their Construction and Use”, ASTM Bulletin No. 212, February, 1956.
Materials testing
ASTM standards
Heat conduction
|
Weet-Bix cards are a series of collectors' cards issued in cereal boxes by the Sanitarium Health and Wellbeing Company in Australia and New Zealand.
Sanitarium started the Weet-Bix cards in 1942 in Australia to market their Weet-Bix cereal. The company later expanded the cards to its Granose, Bixies, Cerix and later Puffed Wheat, Puffed Rice, Weeta Puffs, Weeta Flakes and Corn Flakes brands.
In 1972, Sanitarium released a different series of Weet-Bix cards in their New Zealand products. These cards were sometimes similar to the Australian series, but frequently had a New Zealand focus.
Sanitarium discontinued the Australian cards in 2012.
Australian cards
The following is a list of Australian card releases.
New Zealand cards
The following is a list of New Zealand card releases.
Notes
References
Seventh-day Adventist Church
Trading cards
|
```yaml
description: Orise Tech OTM8009A Panel
compatible: "orisetech,otm8009a"
include: [mipi-dsi-device.yaml, display-controller.yaml]
properties:
reset-gpios:
type: phandle-array
description: |
The RESETn pin is asserted to disable the sensor causing a hard
reset. The sensor receives this as an active-low signal.
bl-gpios:
type: phandle-array
description: |
The BLn pin is asserted to control the backlight of the panel.
The sensor receives this as an active-high signal.
rotation:
type: int
default: 0
enum:
- 0
- 90
- 180
- 270
description: |
Display rotation (CW) in degrees. Defaults to 0, display default.
```
|
```c
/* libmumblelink.c -- mumble link interface
This software is provided 'as-is', without any express or implied
warranty. In no event will the authors be held liable for any damages
arising from the use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software
in a product, an acknowledgment in the product documentation would be
appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.
*/
#ifdef WIN32
#include <windows.h>
#define uint32_t UINT32
#else
#include <unistd.h>
#ifdef __sun
#define _POSIX_C_SOURCE 199309L
#endif
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/time.h>
#endif
#include <fcntl.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include "libmumblelink.h"
#ifndef MIN
#define MIN(a, b) ((a)<(b)?(a):(b))
#endif
typedef struct
{
uint32_t uiVersion;
uint32_t uiTick;
float fAvatarPosition[3];
float fAvatarFront[3];
float fAvatarTop[3];
wchar_t name[256];
/* new in mumble 1.2 */
float fCameraPosition[3];
float fCameraFront[3];
float fCameraTop[3];
wchar_t identity[256];
uint32_t context_len;
unsigned char context[256];
wchar_t description[2048];
} LinkedMem;
static LinkedMem *lm = NULL;
#ifdef WIN32
static HANDLE hMapObject = NULL;
#else
static int32_t GetTickCount(void)
{
struct timeval tv;
gettimeofday(&tv,NULL);
return tv.tv_usec / 1000 + tv.tv_sec * 1000;
}
#endif
int mumble_link(const char* name)
{
#ifdef WIN32
if(lm)
return 0;
hMapObject = OpenFileMappingW(FILE_MAP_ALL_ACCESS, FALSE, L"MumbleLink");
if (hMapObject == NULL)
return -1;
lm = (LinkedMem *) MapViewOfFile(hMapObject, FILE_MAP_ALL_ACCESS, 0, 0, sizeof(LinkedMem));
if (lm == NULL) {
CloseHandle(hMapObject);
hMapObject = NULL;
return -1;
}
#else
char file[256];
int shmfd;
if(lm)
return 0;
snprintf(file, sizeof (file), "/MumbleLink.%d", getuid());
shmfd = shm_open(file, O_RDWR, S_IRUSR | S_IWUSR);
if(shmfd < 0) {
return -1;
}
lm = (LinkedMem *) (mmap(NULL, sizeof(LinkedMem), PROT_READ | PROT_WRITE, MAP_SHARED, shmfd,0));
if (lm == (void *) (-1)) {
lm = NULL;
close(shmfd);
return -1;
}
close(shmfd);
#endif
memset(lm, 0, sizeof(LinkedMem));
mbstowcs(lm->name, name, sizeof(lm->name) / sizeof(wchar_t));
return 0;
}
void mumble_update_coordinates(float fPosition[3], float fFront[3], float fTop[3])
{
mumble_update_coordinates2(fPosition, fFront, fTop, fPosition, fFront, fTop);
}
void mumble_update_coordinates2(float fAvatarPosition[3], float fAvatarFront[3], float fAvatarTop[3],
float fCameraPosition[3], float fCameraFront[3], float fCameraTop[3])
{
if (!lm)
return;
memcpy(lm->fAvatarPosition, fAvatarPosition, sizeof(lm->fAvatarPosition));
memcpy(lm->fAvatarFront, fAvatarFront, sizeof(lm->fAvatarFront));
memcpy(lm->fAvatarTop, fAvatarTop, sizeof(lm->fAvatarTop));
memcpy(lm->fCameraPosition, fCameraPosition, sizeof(lm->fCameraPosition));
memcpy(lm->fCameraFront, fCameraFront, sizeof(lm->fCameraFront));
memcpy(lm->fCameraTop, fCameraTop, sizeof(lm->fCameraTop));
lm->uiVersion = 2;
lm->uiTick = GetTickCount();
}
void mumble_set_identity(const char* identity)
{
size_t len;
if (!lm)
return;
len = MIN(sizeof(lm->identity)/sizeof(wchar_t), strlen(identity)+1);
mbstowcs(lm->identity, identity, len);
}
void mumble_set_context(const unsigned char* context, size_t len)
{
if (!lm)
return;
len = MIN(sizeof(lm->context), len);
lm->context_len = len;
memcpy(lm->context, context, len);
}
void mumble_set_description(const char* description)
{
size_t len;
if (!lm)
return;
len = MIN(sizeof(lm->description)/sizeof(wchar_t), strlen(description)+1);
mbstowcs(lm->description, description, len);
}
void mumble_unlink()
{
if(!lm)
return;
#ifdef WIN32
UnmapViewOfFile(lm);
CloseHandle(hMapObject);
hMapObject = NULL;
#else
munmap(lm, sizeof(LinkedMem));
#endif
lm = NULL;
}
int mumble_islinked(void)
{
return lm != NULL;
}
```
|
```c++
//===-- AVRRegisterInfo.cpp - AVR Register Information --------------------===//
//
// See path_to_url for license information.
//
//===your_sha256_hash------===//
//
// This file contains the AVR implementation of the TargetRegisterInfo class.
//
//===your_sha256_hash------===//
#include "AVRRegisterInfo.h"
#include "llvm/ADT/BitVector.h"
#include "llvm/CodeGen/MachineFrameInfo.h"
#include "llvm/CodeGen/MachineFunction.h"
#include "llvm/CodeGen/MachineInstrBuilder.h"
#include "llvm/CodeGen/MachineRegisterInfo.h"
#include "llvm/IR/Function.h"
#include "llvm/CodeGen/TargetFrameLowering.h"
#include "AVR.h"
#include "AVRInstrInfo.h"
#include "AVRTargetMachine.h"
#include "MCTargetDesc/AVRMCTargetDesc.h"
#define GET_REGINFO_TARGET_DESC
#include "AVRGenRegisterInfo.inc"
namespace llvm {
AVRRegisterInfo::AVRRegisterInfo() : AVRGenRegisterInfo(0) {}
const uint16_t *
AVRRegisterInfo::getCalleeSavedRegs(const MachineFunction *MF) const {
CallingConv::ID CC = MF->getFunction().getCallingConv();
return ((CC == CallingConv::AVR_INTR || CC == CallingConv::AVR_SIGNAL)
? CSR_Interrupts_SaveList
: CSR_Normal_SaveList);
}
const uint32_t *
AVRRegisterInfo::getCallPreservedMask(const MachineFunction &MF,
CallingConv::ID CC) const {
return ((CC == CallingConv::AVR_INTR || CC == CallingConv::AVR_SIGNAL)
? CSR_Interrupts_RegMask
: CSR_Normal_RegMask);
}
BitVector AVRRegisterInfo::getReservedRegs(const MachineFunction &MF) const {
BitVector Reserved(getNumRegs());
// Reserve the intermediate result registers r1 and r2
// The result of instructions like 'mul' is always stored here.
Reserved.set(AVR::R0);
Reserved.set(AVR::R1);
Reserved.set(AVR::R1R0);
// Reserve the stack pointer.
Reserved.set(AVR::SPL);
Reserved.set(AVR::SPH);
Reserved.set(AVR::SP);
// We tentatively reserve the frame pointer register r29:r28 because the
// function may require one, but we cannot tell until register allocation
// is complete, which can be too late.
//
// Instead we just unconditionally reserve the Y register.
//
// TODO: Write a pass to enumerate functions which reserved the Y register
// but didn't end up needing a frame pointer. In these, we can
// convert one or two of the spills inside to use the Y register.
Reserved.set(AVR::R28);
Reserved.set(AVR::R29);
Reserved.set(AVR::R29R28);
return Reserved;
}
const TargetRegisterClass *
AVRRegisterInfo::getLargestLegalSuperClass(const TargetRegisterClass *RC,
const MachineFunction &MF) const {
const TargetRegisterInfo *TRI = MF.getSubtarget().getRegisterInfo();
if (TRI->isTypeLegalForClass(*RC, MVT::i16)) {
return &AVR::DREGSRegClass;
}
if (TRI->isTypeLegalForClass(*RC, MVT::i8)) {
return &AVR::GPR8RegClass;
}
llvm_unreachable("Invalid register size");
}
/// Fold a frame offset shared between two add instructions into a single one.
static void foldFrameOffset(MachineBasicBlock::iterator &II, int &Offset, unsigned DstReg) {
MachineInstr &MI = *II;
int Opcode = MI.getOpcode();
// Don't bother trying if the next instruction is not an add or a sub.
if ((Opcode != AVR::SUBIWRdK) && (Opcode != AVR::ADIWRdK)) {
return;
}
// Check that DstReg matches with next instruction, otherwise the instruction
// is not related to stack address manipulation.
if (DstReg != MI.getOperand(0).getReg()) {
return;
}
// Add the offset in the next instruction to our offset.
switch (Opcode) {
case AVR::SUBIWRdK:
Offset += -MI.getOperand(2).getImm();
break;
case AVR::ADIWRdK:
Offset += MI.getOperand(2).getImm();
break;
}
// Finally remove the instruction.
II++;
MI.eraseFromParent();
}
void AVRRegisterInfo::eliminateFrameIndex(MachineBasicBlock::iterator II,
int SPAdj, unsigned FIOperandNum,
RegScavenger *RS) const {
assert(SPAdj == 0 && "Unexpected SPAdj value");
MachineInstr &MI = *II;
DebugLoc dl = MI.getDebugLoc();
MachineBasicBlock &MBB = *MI.getParent();
const MachineFunction &MF = *MBB.getParent();
const AVRTargetMachine &TM = (const AVRTargetMachine &)MF.getTarget();
const TargetInstrInfo &TII = *TM.getSubtargetImpl()->getInstrInfo();
const MachineFrameInfo &MFI = MF.getFrameInfo();
const TargetFrameLowering *TFI = TM.getSubtargetImpl()->getFrameLowering();
int FrameIndex = MI.getOperand(FIOperandNum).getIndex();
int Offset = MFI.getObjectOffset(FrameIndex);
// Add one to the offset because SP points to an empty slot.
Offset += MFI.getStackSize() - TFI->getOffsetOfLocalArea() + 1;
// Fold incoming offset.
Offset += MI.getOperand(FIOperandNum + 1).getImm();
// This is actually "load effective address" of the stack slot
// instruction. We have only two-address instructions, thus we need to
// expand it into move + add.
if (MI.getOpcode() == AVR::FRMIDX) {
MI.setDesc(TII.get(AVR::MOVWRdRr));
MI.getOperand(FIOperandNum).ChangeToRegister(AVR::R29R28, false);
MI.RemoveOperand(2);
assert(Offset > 0 && "Invalid offset");
// We need to materialize the offset via an add instruction.
unsigned Opcode;
Register DstReg = MI.getOperand(0).getReg();
assert(DstReg != AVR::R29R28 && "Dest reg cannot be the frame pointer");
II++; // Skip over the FRMIDX (and now MOVW) instruction.
// Generally, to load a frame address two add instructions are emitted that
// could get folded into a single one:
// movw r31:r30, r29:r28
// adiw r31:r30, 29
// adiw r31:r30, 16
// to:
// movw r31:r30, r29:r28
    // adiw r31:r30, 45
    if (II != MBB.end())
      foldFrameOffset(II, Offset, DstReg);

    // Select the best opcode based on DstReg and the offset size.
    switch (DstReg) {
    case AVR::R25R24:
    case AVR::R27R26:
    case AVR::R31R30: {
      if (isUInt<6>(Offset)) {
        Opcode = AVR::ADIWRdK;
        break;
      }
      LLVM_FALLTHROUGH;
    }
    default: {
      // This opcode will get expanded into a pair of subi/sbci.
      Opcode = AVR::SUBIWRdK;
      Offset = -Offset;
      break;
    }
    }

    MachineInstr *New = BuildMI(MBB, II, dl, TII.get(Opcode), DstReg)
                            .addReg(DstReg, RegState::Kill)
                            .addImm(Offset);
    New->getOperand(3).setIsDead();

    return;
  }

  // If the offset is too big we have to adjust and restore the frame pointer
  // to materialize a valid load/store with displacement.
  //:TODO: consider using only one adiw/sbiw chain for more than one frame index
  if (Offset > 62) {
    unsigned AddOpc = AVR::ADIWRdK, SubOpc = AVR::SBIWRdK;
    int AddOffset = Offset - 63 + 1;

    // For huge offsets where adiw/sbiw cannot be used use a pair of subi/sbci.
    if ((Offset - 63 + 1) > 63) {
      AddOpc = AVR::SUBIWRdK;
      SubOpc = AVR::SUBIWRdK;
      AddOffset = -AddOffset;
    }

    // It is possible that the spiller places this frame instruction in between
    // a compare and branch, invalidating the contents of SREG set by the
    // compare instruction because of the add/sub pairs. Conservatively save and
    // restore SREG before and after each add/sub pair.
    BuildMI(MBB, II, dl, TII.get(AVR::INRdA), AVR::R0).addImm(0x3f);

    MachineInstr *New = BuildMI(MBB, II, dl, TII.get(AddOpc), AVR::R29R28)
                            .addReg(AVR::R29R28, RegState::Kill)
                            .addImm(AddOffset);
    New->getOperand(3).setIsDead();

    // Restore SREG.
    BuildMI(MBB, std::next(II), dl, TII.get(AVR::OUTARr))
        .addImm(0x3f)
        .addReg(AVR::R0, RegState::Kill);

    // No need to set SREG as dead here otherwise if the next instruction is a
    // cond branch it will be using a dead register.
    BuildMI(MBB, std::next(II), dl, TII.get(SubOpc), AVR::R29R28)
        .addReg(AVR::R29R28, RegState::Kill)
        .addImm(Offset - 63 + 1);

    Offset = 62;
  }

  MI.getOperand(FIOperandNum).ChangeToRegister(AVR::R29R28, false);
  assert(isUInt<6>(Offset) && "Offset is out of range");
  MI.getOperand(FIOperandNum + 1).ChangeToImmediate(Offset);
}

Register AVRRegisterInfo::getFrameRegister(const MachineFunction &MF) const {
  const TargetFrameLowering *TFI = MF.getSubtarget().getFrameLowering();

  if (TFI->hasFP(MF)) {
    // The Y pointer register
    return AVR::R28;
  }

  return AVR::SP;
}

const TargetRegisterClass *
AVRRegisterInfo::getPointerRegClass(const MachineFunction &MF,
                                    unsigned Kind) const {
  // FIXME: Currently we're using avr-gcc as reference, so we restrict
  // ptrs to Y and Z regs. Though avr-gcc has buggy implementation
  // of memory constraint, so we can fix it and bit avr-gcc here ;-)
  return &AVR::PTRDISPREGSRegClass;
}

void AVRRegisterInfo::splitReg(unsigned Reg, unsigned &LoReg,
                               unsigned &HiReg) const {
  assert(AVR::DREGSRegClass.contains(Reg) && "can only split 16-bit registers");

  LoReg = getSubReg(Reg, AVR::sub_lo);
  HiReg = getSubReg(Reg, AVR::sub_hi);
}

bool AVRRegisterInfo::shouldCoalesce(MachineInstr *MI,
                                     const TargetRegisterClass *SrcRC,
                                     unsigned SubReg,
                                     const TargetRegisterClass *DstRC,
                                     unsigned DstSubReg,
                                     const TargetRegisterClass *NewRC,
                                     LiveIntervals &LIS) const {
  if (this->getRegClass(AVR::PTRDISPREGSRegClassID)->hasSubClassEq(NewRC)) {
    return false;
  }

  return TargetRegisterInfo::shouldCoalesce(MI, SrcRC, SubReg, DstRC, DstSubReg,
                                            NewRC, LIS);
}

} // end of namespace llvm
```
|
Rent (Original Broadway Cast Recording) is an album of music from the Tony Award- and Pulitzer Prize-winning 1996 musical Rent. It was produced by DreamWorks, with music and lyrics by Jonathan Larson. The album is a two-disc (in its CD format) collection of every song from the musical; some small segments of narration and spoken dialogue from the play are not included in the recording. The collection ends with a studio-recorded rearrangement of the song "Seasons of Love" featuring Stevie Wonder. The album was recorded by the original Broadway cast of Rent and was released on August 27, 1996. A second, one-disc album containing highlights from the original cast album was released in 1999.
Track listing
Original Broadway Cast Recording
The Best of Rent: Highlights From The Original Cast Album
"Rent" - Anthony Rapp, Adam Pascal, Daphne Rubin-Vega, Jesse L. Martin, Idina Menzel, Fredi Walker, Taye Diggs
"One Song Glory" - Adam Pascal
"Light My Candle" - Adam Pascal, Daphne Rubin-Vega
"Today 4 U" - Wilson Jermaine Heredia, Jesse L. Martin, Anthony Rapp, Adam Pascal
"Tango: Maureen" - Anthony Rapp, Fredi Walker
"Life Support" - Wilson Jermaine Heredia, Jesse L. Martin, Anthony Rapp, Adam Pascal, Gilles Chiasson, Timothy Britten Parker, Rodney Hicks
"Out Tonight" - Daphne Rubin-Vega
"Another Day" - Adam Pascal, Daphne Rubin-Vega
"Will I?" - Gilles Chiasson, Wilson Jermaine Heredia, Jesse L. Martin, Anthony Rapp, Adam Pascal
"Santa Fe" - Wilson Jermaine Heredia, Jesse L. Martin, Anthony Rapp, Adam Pascal
"I'll Cover You" - Wilson Jermaine Heredia, Jesse L. Martin
"La Vie Bohème" - Anthony Rapp, Adam Pascal, Daphne Rubin-Vega, Jesse L. Martin, Wilson Jermaine Heredia, Idina Menzel, Fredi Walker, Taye Diggs
"I Should Tell You" - Adam Pascal, Daphne Rubin-Vega
"La Vie Bohème B" - Anthony Rapp, Adam Pascal, Daphne Rubin-Vega, Jesse L. Martin, Wilson Jermaine Heredia, Idina Menzel, Fredi Walker
"Seasons of Love" - Anthony Rapp, Adam Pascal, Daphne Rubin-Vega, Jesse L. Martin, Wilson Jermaine Heredia, Idina Menzel, Fredi Walker, Taye Diggs, Gwen Stewart, Byron Utley
"Take Me or Leave Me" - Idina Menzel, Fredi Walker
"Without You" - Daphne Rubin-Vega, Adam Pascal
"I'll Cover You (Reprise)" - Jesse L. Martin, Fredi Walker, Anthony Rapp, Adam Pascal, Idina Menzel, Daphne Rubin-Vega, Taye Diggs
"What You Own" - Anthony Rapp, Adam Pascal
"Finale B" - Daphne Rubin-Vega, Adam Pascal, Fredi Walker, Idina Menzel, Jesse L. Martin, Anthony Rapp
"Seasons of Love (Arif Mardin's Remix)" - Anthony Rapp, Adam Pascal, Daphne Rubin-Vega, Jesse L. Martin, Wilson Jermaine Heredia, Idina Menzel, Fredi Walker, Taye Diggs, Gwen Stewart, Byron Utley
See also
Rent: Original Motion Picture Soundtrack
Cast recordings
1996 soundtrack albums
1999 soundtrack albums
Theatre soundtracks
DreamWorks Records soundtracks
Works based on Scenes of Bohemian Life
LGBT-related albums
|
```java
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *     path_to_url
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.rocketmq.remoting.protocol.header;

import org.apache.rocketmq.remoting.CommandCustomHeader;
import org.apache.rocketmq.remoting.exception.RemotingCommandException;

public class GetConsumerListByGroupResponseHeader implements CommandCustomHeader {

    @Override
    public void checkFields() throws RemotingCommandException {
    }
}
```
|
Stanley Akoy (born 17 November 1996) is a Dutch professional footballer who plays as a midfielder for Cypriot Second Division club Olympias Lympion.
Career
Born in Purmerend, Akoy made his debut for SC Cambuur on 8 August 2019, coming on as a second-half substitute in a 2–0 defeat away at De Graafschap.
In January 2022, he signed for Eerste Divisie club SC Telstar.
Akoy joined Cypriot Second Division club Olympias Lympion on 31 August 2022.
Personal life
Born in the Netherlands, Akoy is of Surinamese descent.
1996 births
Living people
Footballers from Purmerend
Men's association football midfielders
Dutch men's footballers
Dutch expatriate men's footballers
Dutch sportspeople of Surinamese descent
AFC Ajax players
SC Cambuur players
SC Telstar players
Eerste Divisie players
Expatriate men's footballers in Cyprus
Dutch expatriate sportspeople in Cyprus
|