Möckern is a town in the Jerichower Land district, in Saxony-Anhalt, Germany. It is situated east of Magdeburg. The Battle of Möckern took place south of the town in 1813.
History
Möckern was originally called "Mokrianici" by the Slavs who settled the area in the 7th and 8th centuries. The name meant a humid place, a reference to the extensive marshes that had formed around the Ehle River at that time. By the middle of the 10th century, the settlement was an established German burgward, though it is believed to have been under German influence by the end of the 9th century. As such, the burgward was obliged by a document issued by Otto I the Great in 948 to pay a tithe to the Moritz monastery in Magdeburg; this document is considered the first mention of the place. During this period, a fortress was built on the site of the old Slavic settlement, and its keep is still part of the castle today. The castle served as an outpost protecting Magdeburg and secured the important roads to Brandenburg and Zerbst. In 955, Otto I is said to have founded Möckern's parish church following his victory over Hungary on August 10 of that year; because that day is St. Lawrence Day, the church was named for the saint. In the 11th century, Möckern acquired a defensive wall (built of boulders from the 12th century onward), which was equipped with three gates. By this time, Möckern already held its town charter.
Over the centuries, sovereignty over Möckern took several twists and turns. In the 12th century, the Margrave of Brandenburg held sovereignty, but in 1196 Otto II, Margrave of Brandenburg, gave it to the bishopric of Magdeburg. By the 14th century, Möckern had become the property of Quedlinburg Abbey, which mortgaged the town, as a manor, to the Count of Arnstein. In 1376, the abbey returned sovereignty to Brandenburg. After that, Möckern was mortgaged several times, including to a family of nobles and to the bishopric of Magdeburg. In 1472, after several trials, the Prince-Elector of Brandenburg renounced the bishopric in favor of his vassal. Ownership of the fief then passed to the Counts of Arnstein-Lindow, who held it until they died out in 1524. In 1710, ownership went to Christian Wilhelm von Münchhausen, and in 1742 to another family, which held it until 1945.
In the 17th century, the town suffered heavy damage from an occupation in 1626 during the Thirty Years' War and from a conflagration in 1688. After 1680, the town belonged to the Brandenburg-Prussian Duchy of Magdeburg and was part of the former district of Jerichow. A new town hall was built in 1700, and in 1715 Münchhausen built a new castle to replace the old fortress. His successor, William Hagen, enlarged the castle in 1840.
A series of heavy clashes between allied Prusso-Russian troops and the Napoleonic French army south of Möckern ended in defeat for the French on April 5, 1813. This became the prelude to the war of liberation against Napoleon and is known as the Battle of Möckern.
After Prussia's final victory, the kingdom re-organized its district administration, bringing Möckern into the newly formed Jerichower Land district, with Burg as the district seat. Möckern had previously been a farming town with breweries and open-air markets, but infrastructure began to develop with sawmills, a steam mill, and a starch factory, spurred by the opening of a rail line between Magdeburg and Loburg in 1892. In 1895, the former town hall was replaced by a three-story Renaissance-style building. At the end of the 19th century, Möckern had more than 1,700 inhabitants.
Modern times
The relative prosperity of the town was reflected in the private construction that began in the second half of the 19th century and continued until the beginning of World War I. A row of new streets was built in the western part of town, some with Jugendstil houses. On May 5, 1945, Möckern was occupied by the Red Army, an event that cost the lives of 42 residents.
After the end of World War II the Soviet occupying forces instituted a land reform, confiscating land held by nobility. The Hagen family lost the Möckern castle and a branch of the State Archives Magdeburg was installed there. Territorial reform in 1952 placed Möckern first in the Loburg urban district and later back in the Burg district. In 1964, Möckern had a population of 2,904.
In the 1960s, a large poultry factory was established, among the largest of its kind in East Germany. After German reunification, the plant was taken over by a corporate group, securing 400 jobs for the town. Another major employer manufactures laminate flooring, which is sold throughout Europe. The former castle owners, the Hagens, also prospered; in 1991 they returned to Möckern and re-purchased parts of their former property. The castle, which remained town property, became Möckern's elementary school in 1998, after the state archives moved out. In 2005, despite significant local protest, a secure facility for mentally ill offenders was established on a former army base in the Lochow section of town.
Geography
The territory of the town of Möckern was expanded by absorbing 26 former municipalities between 2002 and 2010. In 2002 it absorbed Friedensau, Lübars, Stegelitz and Wörmlitz; in 2003 Büden and Ziepel; in 2004 Hohenziatz; in 2007 Zeppernick; and in 2008 Theeßen. On 1 January 2009 it absorbed the former municipalities Dörnitz, Hobeck, Küsel, Loburg, Rosian, Schweinitz, Tryppehna, Wallwitz and Zeddenick, and on 2 July of the same year Magdeburgerforth and Reesdorf. Drewitz, Grabow, Krüssau, Rietzel, Schopsdorf, Stresow and Wüstenjerichow were absorbed in 2010, but the merger with Schopsdorf was repealed in 2011. Möckern was part of the Verwaltungsgemeinschaft ("collective municipality") Möckern-Loburg-Fläming until it was disbanded in 2012.
Divisions
The town of Möckern consists of the following 27 Ortschaften, or municipal divisions:
Büden
Dörnitz
Drewitz
Friedensau
Grabow
Hobeck
Hohenziatz
Krüssau
Küsel
Loburg
Lübars
Magdeburgerforth
Möckern
Reesdorf
Rietzel
Rosian
Schweinitz
Stegelitz
Stresow
Theeßen
Tryppehna
Wallwitz
Wörmlitz
Wüstenjerichow
Zeddenick
Zeppernick
Ziepel
Notable people
Aga vom Hagen (1872-1949), German painter, author, and art patron
References
External links
Official website
Möckern
Jerichower Land
Fläming Heath
|
```swift
import Prelude
extension Config {
  public enum lens {
    public static let abExperiments = Lens<Config, [String: String]>(
      view: { $0.abExperiments },
      set: { Config(
        abExperiments: $0, appId: $1.appId, applePayCountries: $1.applePayCountries,
        countryCode: $1.countryCode, features: $1.features, iTunesLink: $1.iTunesLink,
        launchedCountries: $1.launchedCountries, locale: $1.locale,
        stripePublishableKey: $1.stripePublishableKey
      ) }
    )

    public static let applePayCountries = Lens<Config, [String]>(
      view: { $0.applePayCountries },
      set: { Config(
        abExperiments: $1.abExperiments, appId: $1.appId, applePayCountries: $0,
        countryCode: $1.countryCode, features: $1.features, iTunesLink: $1.iTunesLink,
        launchedCountries: $1.launchedCountries, locale: $1.locale,
        stripePublishableKey: $1.stripePublishableKey
      ) }
    )

    public static let countryCode = Lens<Config, String>(
      view: { $0.countryCode },
      set: { Config(
        abExperiments: $1.abExperiments, appId: $1.appId, applePayCountries: $1.applePayCountries,
        countryCode: $0, features: $1.features, iTunesLink: $1.iTunesLink,
        launchedCountries: $1.launchedCountries, locale: $1.locale,
        stripePublishableKey: $1.stripePublishableKey
      ) }
    )

    public static let features = Lens<Config, Features>(
      view: { $0.features },
      set: { Config(
        abExperiments: $1.abExperiments, appId: $1.appId, applePayCountries: $1.applePayCountries,
        countryCode: $1.countryCode, features: $0, iTunesLink: $1.iTunesLink,
        launchedCountries: $1.launchedCountries, locale: $1.locale,
        stripePublishableKey: $1.stripePublishableKey
      ) }
    )

    public static let launchedCountries = Lens<Config, [Project.Country]>(
      view: { $0.launchedCountries },
      set: { Config(
        abExperiments: $1.abExperiments, appId: $1.appId, applePayCountries: $1.applePayCountries,
        countryCode: $1.countryCode, features: $1.features, iTunesLink: $1.iTunesLink,
        launchedCountries: $0, locale: $1.locale, stripePublishableKey: $1.stripePublishableKey
      ) }
    )

    public static let locale = Lens<Config, String>(
      view: { $0.locale },
      set: { Config(
        abExperiments: $1.abExperiments, appId: $1.appId, applePayCountries: $1.applePayCountries,
        countryCode: $1.countryCode, features: $1.features, iTunesLink: $1.iTunesLink,
        launchedCountries: $1.launchedCountries, locale: $0, stripePublishableKey: $1.stripePublishableKey
      ) }
    )

    public static let stripePublishableKey = Lens<Config, String>(
      view: { $0.stripePublishableKey },
      set: { Config(
        abExperiments: $1.abExperiments, appId: $1.appId, applePayCountries: $1.applePayCountries,
        countryCode: $1.countryCode, features: $1.features, iTunesLink: $1.iTunesLink,
        launchedCountries: $1.launchedCountries, locale: $1.locale, stripePublishableKey: $0
      ) }
    )
  }
}
```
|
```scala
package org.apache.spark.sql.jdbc
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}
import org.scalatest.funsuite.AnyFunSuite
class UpsertBuilderTest extends AnyFunSuite {
  val idField = Seq(StructField("c1", StringType, nullable = false))
  val schema = StructType(Seq(
    StructField("c1", StringType, nullable = false),
    StructField("c2", IntegerType, nullable = true)))

  test("generating oracle merge into statement and schema") {
    val (stmt, upsertSchema) = OracleUpsertBuilder.generateStatement("test_table", idField, schema)
    println(stmt)
    assert(upsertSchema.fields.length == 4, "There should be 4 fields in schema")
    assert(stmt.startsWith("MERGE INTO test_table"))
  }

  test("generating mysql insert on duplicate statement and schema") {
    val dialect = JdbcDialects.get("jdbc:mysql://127.0.0.1:3306")
    val (stmt, upsertSchema) = MysqlUpsertBuilder.generateStatement("table_1", dialect, idField, schema)
    println(stmt)
    assert(stmt.startsWith("insert into table_1"))
    assert(upsertSchema.length == 3)
  }
}
```
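The tests above only assert on the statement prefix, not its body. As a rough, language-neutral illustration (in C, independent of the Spark builders being tested, with the column names `c1`/`c2` taken from the test schema), the skeleton of an Oracle `MERGE INTO` statement for one id column and one value column might be assembled like this:

```c
#include <stdio.h>
#include <string.h>

/* Illustrative sketch only: build the shape of an upsert MERGE
 * statement for a single id column and a single value column.
 * Real builders must also quote identifiers and handle many columns. */
void build_merge(char *out, size_t n, const char *table,
                 const char *id_col, const char *val_col) {
    snprintf(out, n,
             "MERGE INTO %s t USING (SELECT ? %s, ? %s FROM dual) s "
             "ON (t.%s = s.%s) "
             "WHEN MATCHED THEN UPDATE SET t.%s = s.%s "
             "WHEN NOT MATCHED THEN INSERT (%s, %s) VALUES (s.%s, s.%s)",
             table, id_col, val_col, id_col, id_col,
             val_col, val_col, id_col, val_col, id_col, val_col);
}
```

A statement built this way starts with `MERGE INTO test_table`, matching the prefix the Oracle test checks for.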
|
The Dictionnaire des ouvrages anonymes et pseudonymes, whose full title is Dictionnaire des ouvrages anonymes et pseudonymes composés, traduits ou publiés en français, avec les noms des auteurs, traducteurs et éditeurs, is a four-volume dictionary (1806–1809) by Antoine Alexandre Barbier identifying the authors behind anonymous and pseudonymous French and Latin works.
External links
On Google Books :
1806 edition: tome I, tome II, tome III, tome IV
1822 edition: tome I, tome II, tome III, tome IV
On Internet Archive :
1872 edition: tome I, tome II, tome III, tome IV
1882 edition: tome I, tome II, tome III, tome IV
Book series introduced in 1806
1806 non-fiction books
1807 non-fiction books
1808 non-fiction books
1809 non-fiction books
Ouvrages anonymes et pseudonymes
|
```c
/*
*
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modification, are
* permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this list of
* conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice, this list of
* conditions and the following disclaimer in the documentation and/or other materials provided
* with the distribution.
*
* 3. Neither the name of the copyright holder nor the names of its contributors may be used to
* endorse or promote products derived from this software without specific prior written
* permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS
* OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
* COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
* GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
* AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
* OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <stdlib.h>
int *p[100];
int main() {
  int i;
  /* allocate 100 blocks of 800 bytes each */
  for (i = 0; i < 100; i++) {
    p[i] = (int *) malloc(8 * 100);
  }
  /* grow each block to 2400 bytes */
  for (i = 0; i < 100; i++) {
    p[i] = (int *) realloc(p[i], 8 * 300);
  }
  /* shrink each block to 240 bytes */
  for (i = 0; i < 100; i++) {
    p[i] = (int *) realloc(p[i], 8 * 30);
  }
  /* release in reverse order of allocation */
  for (i = 99; i >= 0; i--) {
    free(p[i]);
  }
  return 9;
}
```
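One caveat about the loops above: assigning `realloc`'s result directly back into `p[i]` loses the original pointer (and leaks the block) if `realloc` returns `NULL`. A minimal sketch of the usual safe-realloc idiom; the helper name `safe_grow` is illustrative, not part of the program above:

```c
#include <stdlib.h>

/* Resize *pp to new_bytes without losing the old pointer.
 * On failure, *pp is left untouched and still valid; returns 0.
 * On success, *pp points at the resized block; returns 1. */
int safe_grow(int **pp, size_t new_bytes) {
    int *tmp = (int *) realloc(*pp, new_bytes);
    if (tmp == NULL)
        return 0;
    *pp = tmp;
    return 1;
}
```

With this helper, a failed resize leaves the caller free to keep using (or free) the original allocation instead of leaking it.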
|
```cpp
/* This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation.
   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
   GNU General Public License for more details.
   You should have received a copy of the GNU General Public License along
   with this program; if not, write to the Free Software Foundation, Inc.,
   51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. */
206
%{
#include "StdH.h"
#include "EntitiesMP/Projectile.h"
%}
%{
BOOL ConsiderAll(CEntity*pen)
{
return TRUE;
}
BOOL ConsiderPlayers(CEntity*pen)
{
return IsDerivedFromClass(pen, "Player");
}
%}
class CTouchField: CRationalEntity {
name "Touch Field";
thumbnail "Thumbnails\\TouchField.tbn";
features "HasName", "IsTargetable";
properties:
1 CTString m_strName "Name" 'N' = "Touch Field", // class name
2 CEntityPointer m_penEnter "Enter Target" 'T' COLOR(C_BROWN|0xFF), // target to send event to
3 enum EventEType m_eetEnter "Enter Event" 'E' = EET_TRIGGER, // event to send on enter
7 CEntityPointer m_penExit "Exit Target" COLOR(C_dRED|0xFF), // target to send event to
8 enum EventEType m_eetExit "Exit Event" = EET_TRIGGER, // event to send on exit
4 BOOL m_bActive "Active" 'A' = TRUE, // is field active
5 BOOL m_bPlayersOnly "Players only" 'P' = TRUE, // reacts only on players
6 FLOAT m_tmExitCheck "Exit check time" 'X' = 0.0f, // how often to check for exit
9 BOOL m_bBlockNonPlayers "Block non-players" 'B' = FALSE, // everything except players cannot pass
100 CEntityPointer m_penLastIn,
{
CFieldSettings m_fsField;
}
components:
1 texture TEXTURE_FIELD "Models\\Editor\\CollisionBox.tex",
functions:
void SetupFieldSettings(void)
{
m_fsField.fs_toTexture.SetData(GetTextureDataForComponent(TEXTURE_FIELD));
m_fsField.fs_colColor = C_WHITE|CT_OPAQUE;
}
CFieldSettings *GetFieldSettings(void) {
if (m_fsField.fs_toTexture.GetData()==NULL) {
SetupFieldSettings();
}
return &m_fsField;
};
// returns bytes of memory used by this object
SLONG GetUsedMemory(void)
{
// initial
SLONG slUsedMemory = sizeof(CTouchField) - sizeof(CRationalEntity) + CRationalEntity::GetUsedMemory();
// add some more
slUsedMemory += m_strName.Length();
return slUsedMemory;
}
procedures:
// field is active
WaitingEntry() {
m_bActive = TRUE;
wait() {
on (EBegin) : { resume; }
on (EDeactivate) : { jump Frozen(); }
// when someone passes the polygons
on (EPass ep) : {
// stop enemy projectiles if blocks non players
if (m_bBlockNonPlayers && IsOfClass(ep.penOther, "Projectile"))
{
if (!IsOfClass(((CProjectile *)&*ep.penOther)->m_penLauncher, "Player")) {
EPass epass;
epass.penOther = this;
ep.penOther->SendEvent(epass);
}
}
// if should react only on players and not player,
if (m_bPlayersOnly && !IsDerivedFromClass(ep.penOther, "Player")) {
// ignore
resume;
}
// send event
SendToTarget(m_penEnter, m_eetEnter, ep.penOther);
// if checking for exit
if (m_tmExitCheck>0) {
// remember who entered
m_penLastIn = ep.penOther;
// wait for exit
jump WaitingExit();
}
resume;
}
}
};
// waiting for entity to exit
WaitingExit() {
while(TRUE) {
// wait
wait(m_tmExitCheck) {
on (EBegin) : { resume; }
on (EDeactivate) : { jump Frozen(); }
on (ETimer) : {
// check for entities inside
CEntity *penNewIn;
if (m_bPlayersOnly) {
penNewIn = TouchingEntity(ConsiderPlayers, m_penLastIn);
} else {
penNewIn = TouchingEntity(ConsiderAll, m_penLastIn);
}
// if there are no entities in anymore
if (penNewIn==NULL) {
// send event
SendToTarget(m_penExit, m_eetExit, m_penLastIn);
// wait new entry
jump WaitingEntry();
}
m_penLastIn = penNewIn;
stop;
}
}
}
};
// field is frozen
Frozen() {
m_bActive = FALSE;
wait() {
on (EBegin) : { resume; }
on (EActivate) : { jump WaitingEntry(); }
}
};
// main initialization
Main(EVoid) {
InitAsFieldBrush();
SetPhysicsFlags(EPF_BRUSH_FIXED);
if ( !m_bBlockNonPlayers ) {
SetCollisionFlags( ((ECBI_MODEL)<<ECB_TEST) | ((ECBI_BRUSH)<<ECB_IS) | ((ECBI_MODEL)<<ECB_PASS) );
} else {
SetCollisionFlags( ((ECBI_MODEL|ECBI_PLAYER|ECBI_PROJECTILE_SOLID|ECBI_PROJECTILE_MAGIC)<<ECB_TEST)
| ((ECBI_BRUSH)<<ECB_IS) | ((ECBI_PLAYER|ECBI_PROJECTILE_SOLID|ECBI_PROJECTILE_MAGIC)<<ECB_PASS) );
}
if (m_bActive) {
jump WaitingEntry();
} else {
jump Frozen();
}
return;
};
};
```
|
James Cotterill (born 3 August 1982) is an English former professional footballer who plays as a central defender for Handsworth Parramore.
Footballing career
He began his career at Scunthorpe United, and was released in 2003 after making 24 appearances for the first team. Following this, Cotterill joined Barrow in August 2003, where he remained until December 2006. He made 115 appearances in all competitions for Barrow.
On 1 March 2007, two weeks after his release from prison, Cotterill joined Northern Premier League side Ossett Town. During the 2007–08 season he joined Ossett's ex-manager Steve Kittrick at local rivals Guiseley A.F.C.
He joined Bradford Park Avenue on a short-term loan in November 2010, making two appearances for the club.
He left Guiseley in February 2011 to look for regular first-team football and later re-signed for Ossett Town.
Assault conviction
On 11 November 2006, during an FA Cup first-round match between Barrow and Bristol Rovers, Cotterill was involved in an off-the-ball incident with Rovers player Sean Rigg. Cotterill was seen to punch Rigg in the face, leaving him with a double fracture of the jaw; the incident was screened later that evening on Match of the Day. Afterwards, Rigg could only eat with a teaspoon and drink through a straw, and his treatment involved the insertion of two metal plates into his jaw, which will remain there permanently. Cotterill was banned from all football activity by The FA until March 2007, and on 11 January 2007 he was jailed for four months after pleading guilty to causing grievous bodily harm. An appeal to free Cotterill failed, but he was released from prison on 14 February 2007, although he was required to wear an electronic tag in his home town of Barnsley until 11 March 2007. Hoping to rebuild his life, Cotterill thanked the Barrow fans who had helped secure his release and apologised to Rigg, saying he "never intended to hurt Sean".
References
External links
1982 births
Living people
Footballers from Barnsley
English men's footballers
Men's association football central defenders
Scunthorpe United F.C. players
Barrow A.F.C. players
Ossett Town A.F.C. players
Guiseley A.F.C. players
Bradford (Park Avenue) A.F.C. players
Handsworth F.C. players
English Football League players
|
Obrium maculatum is a species of beetle in the family Cerambycidae. It was described by Olivier in 1795.
References
Obriini
Beetles described in 1795
|
Bazaar Canton was an Asian food and gift store founded by Amy Gee and Stanley Gee in Livermore, California (United States). Bazaar Canton operated from 1971 to 1988 and was the first Asian food and retail store in the Livermore-Amador Valley. It provided an introduction to Asian culture for many residents in the Tri-Valley area.
History
Bazaar Canton was opened by Livermore resident Amy Gee, a native of Shanghai, China, and an accomplished Chinese watercolor artist. Family friends who had Chinese food for dinner at the Gee residence would ask Ms. Gee and her husband, Stanley Gee, a native of Guangzhou, China, and a mechanical design draftsman since 1956 at what is now the Lawrence Livermore National Laboratory, to purchase Chinese groceries for their own meals from the Chinatowns of San Francisco and Oakland.
Eventually, the Gees decided to open a small Chinese grocery store called Bazaar Canton in a small Livermore shopping center at the east end of Second Street known as The Mall. When Bazaar Canton opened on April 1, 1971, Livermore had very few Asian residents and only two small Chinese American restaurants, the Yin Yin and Maly's.
Canton Bazaar was originally considered for the name of the business but rejected because a San Francisco Chinatown store catering to tourists already had the name. An owner of the Way Up Gallery, a neighboring art gallery, suggested reversing the name, and the new business had its name.
Teaching at the store
The Livermore Symphony Guild asked Ms. Gee if she would teach Chinese cooking as a fundraiser for the Guild. She agreed, and the lessons became extremely popular, prompting Ms. Gee to continue giving Chinese cooking lessons at her home. The popularity of her lessons led Bazaar Canton to quickly double in size by moving downstairs in The Mall and to begin selling Chinese cookware, dishes, and utensils. Sales continued to grow, and Bazaar Canton tripled its space by moving to a street-front location at The Mall, where it began to feature a wide variety of Asian groceries, cookware, and gift items.
Move
Bazaar Canton's final move to the JC Penney's shopping center in 1973 quadrupled the size of the store, dramatically increasing its selection of Asian gift items of all kinds, shapes, and varieties to the point where the store became more known to local shoppers as a gift store. At that time, the JC Penney's shopping center at Second Street and South L Street was the primary shopping center in Livermore. The store also began selling refrigerated Asian food items. Due to popular demand, the store in later years even began selling fresh Chinese dim sum dumplings and pastries from Oakland Chinatown on Saturday mornings. By 1973, Bazaar Canton became one of the larger retail establishments in Livermore. Bazaar Canton later opened a large branch gift store to service the growing suburban population in San Ramon, California. As the population of Livermore grew, and with it the city's Asian population, Bazaar Canton also served the new refugee Vietnamese American families in the town as a clearinghouse for community charitable donations and making available merchandise for those immigrant families.
After a very successful 17 years in business, Ms. Gee had become a well-known and popular business owner in Livermore. In 1988, Mr. and Ms. Gee, Livermore residents since 1964, decided to retire after raising five children and close the store, to the disappointment of Ms. Gee's many customers. Their son Delbert Gee is an Alameda County Superior Court judge. Although the population of Livermore, and its Asian population, has significantly increased since 1988, Bazaar Canton is remembered by many long-time Livermore residents as their first introduction to Asian culture.
Notes
External links
Livermore Heritage Guild
Stanley Gee memoriam at Lawrence Livermore National Laboratory "Newsline" website
Stanley Gee obituary at the Independent newspaper website
Companies based in Livermore, California
Defunct companies based in the San Francisco Bay Area
Food and drink in the San Francisco Bay Area
1971 establishments in California
1988 disestablishments in California
Retail companies based in California
Food and drink companies based in California
|
```cpp
// hashtable.h header -*- C++ -*-
//
// This file is part of the GNU ISO C++ Library. This library is free
// software; you can redistribute it and/or modify it under the
// terms of the GNU General Public License as published by the
// Free Software Foundation; either version 3, or (at your option)
// any later version.
//
// This library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// Under Section 7 of GPL version 3, you are granted additional
// permissions described in the GCC Runtime Library Exception, version
// 3.1, as published by the Free Software Foundation.
//
// You should have received a copy of the GNU General Public License and
// a copy of the GCC Runtime Library Exception along with this program;
// see the files COPYING3 and COPYING.RUNTIME respectively. If not, see
// <path_to_url>.
/** @file bits/hashtable.h
* This is an internal header file, included by other library headers.
* Do not attempt to use it directly. @headername{unordered_map, unordered_set}
*/
#ifndef _HASHTABLE_H
#define _HASHTABLE_H 1
#pragma GCC system_header
#include <bits/hashtable_policy.h>
namespace std _GLIBCXX_VISIBILITY(default)
{
_GLIBCXX_BEGIN_NAMESPACE_VERSION
template<typename _Tp, typename _Hash>
using __cache_default
= __not_<__and_<// Do not cache for fast hasher.
__is_fast_hash<_Hash>,
// Mandatory to have erase not throwing.
__detail::__is_noexcept_hash<_Tp, _Hash>>>;
/**
* Primary class template _Hashtable.
*
* @ingroup hashtable-detail
*
* @tparam _Value CopyConstructible type.
*
* @tparam _Key CopyConstructible type.
*
* @tparam _Alloc An allocator type
* ([lib.allocator.requirements]) whose _Alloc::value_type is
* _Value. As a conforming extension, we allow for
* _Alloc::value_type != _Value.
*
* @tparam _ExtractKey Function object that takes an object of type
* _Value and returns a value of type _Key.
*
* @tparam _Equal Function object that takes two objects of type k
* and returns a bool-like value that is true if the two objects
* are considered equal.
*
* @tparam _H1 The hash function. A unary function object with
* argument type _Key and result type size_t. Return values should
* be distributed over the entire range [0, numeric_limits<size_t>::max()].
*
* @tparam _H2 The range-hashing function (in the terminology of
* Tavori and Dreizin). A binary function object whose argument
* types and result type are all size_t. Given arguments r and N,
* the return value is in the range [0, N).
*
* @tparam _Hash The ranged hash function (Tavori and Dreizin). A
* binary function whose argument types are _Key and size_t and
* whose result type is size_t. Given arguments k and N, the
* return value is in the range [0, N). Default: hash(k, N) =
* h2(h1(k), N). If _Hash is anything other than the default, _H1
* and _H2 are ignored.
*
* @tparam _RehashPolicy Policy class with three members, all of
* which govern the bucket count. _M_next_bkt(n) returns a bucket
* count no smaller than n. _M_bkt_for_elements(n) returns a
* bucket count appropriate for an element count of n.
* _M_need_rehash(n_bkt, n_elt, n_ins) determines whether, if the
* current bucket count is n_bkt and the current element count is
* n_elt, we need to increase the bucket count. If so, returns
* make_pair(true, n), where n is the new bucket count. If not,
* returns make_pair(false, <anything>)
*
* @tparam _Traits Compile-time class with three boolean
* std::integral_constant members: __cache_hash_code, __constant_iterators,
* __unique_keys.
*
* Each _Hashtable data structure has:
*
* - _Bucket[] _M_buckets
* - _Hash_node_base _M_before_begin
* - size_type _M_bucket_count
* - size_type _M_element_count
*
* with _Bucket being _Hash_node* and _Hash_node containing:
*
* - _Hash_node* _M_next
* - Tp _M_value
* - size_t _M_hash_code if cache_hash_code is true
*
* In terms of Standard containers the hashtable is like the aggregation of:
*
* - std::forward_list<_Node> containing the elements
* - std::vector<std::forward_list<_Node>::iterator> representing the buckets
*
* The non-empty buckets contain the node before the first node in the
* bucket. This design makes it possible to implement something like a
* std::forward_list::insert_after on container insertion and
* std::forward_list::erase_after on container erase
* calls. _M_before_begin is equivalent to
* std::forward_list::before_begin. Empty buckets contain
* nullptr. Note that one of the non-empty buckets contains
* &_M_before_begin which is not a dereferenceable node so the
* node pointer in a bucket shall never be dereferenced, only its
* next node can be.
*
* Walking through a bucket's nodes requires a check on the hash code to
* see if each node is still in the bucket. Such a design assumes a
* quite efficient hash functor and is one of the reasons it is
* highly advisable to set __cache_hash_code to true.
*
* The container iterators are simply built from nodes. This way
* incrementing the iterator is perfectly efficient independent of
* how many empty buckets there are in the container.
*
* On insert we compute the element's hash code and use it to find the
* bucket index. If the element must be inserted in an empty bucket
* we add it at the beginning of the singly linked list and make the
* bucket point to _M_before_begin. The bucket that used to point to
* _M_before_begin, if any, is updated to point to its new before
* begin node.
*
* On erase, the simple iterator design requires using the hash
* functor to get the index of the bucket to update. For this
* reason, when __cache_hash_code is set to false the hash functor must
* not throw and this is enforced by a static assertion.
*
* Functionality is implemented by decomposition into base classes,
* where the derived _Hashtable class is used in _Map_base,
* _Insert, _Rehash_base, and _Equality base classes to access the
* "this" pointer. _Hashtable_base is used in the base classes as a
* non-recursive, fully-completed-type so that detailed nested type
* information, such as iterator type and node type, can be
* used. This is similar to the "Curiously Recurring Template
* Pattern" (CRTP) technique, but uses a reconstructed, not
* explicitly passed, template pattern.
*
* Base class templates are:
* - __detail::_Hashtable_base
* - __detail::_Map_base
* - __detail::_Insert
* - __detail::_Rehash_base
* - __detail::_Equality
*/
template<typename _Key, typename _Value, typename _Alloc,
typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash,
typename _RehashPolicy, typename _Traits>
class _Hashtable
: public __detail::_Hashtable_base<_Key, _Value, _ExtractKey, _Equal,
_H1, _H2, _Hash, _Traits>,
public __detail::_Map_base<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>,
public __detail::_Insert<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>,
public __detail::_Rehash_base<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>,
public __detail::_Equality<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>,
private __detail::_Hashtable_alloc<
typename __alloctr_rebind<_Alloc,
__detail::_Hash_node<_Value,
_Traits::__hash_cached::value> >::__type>
{
using __traits_type = _Traits;
using __hash_cached = typename __traits_type::__hash_cached;
using __node_type = __detail::_Hash_node<_Value, __hash_cached::value>;
using __node_alloc_type =
typename __alloctr_rebind<_Alloc, __node_type>::__type;
using __hashtable_alloc = __detail::_Hashtable_alloc<__node_alloc_type>;
using __value_alloc_traits =
typename __hashtable_alloc::__value_alloc_traits;
using __node_alloc_traits =
typename __hashtable_alloc::__node_alloc_traits;
using __node_base = typename __hashtable_alloc::__node_base;
using __bucket_type = typename __hashtable_alloc::__bucket_type;
public:
typedef _Key key_type;
typedef _Value value_type;
typedef _Alloc allocator_type;
typedef _Equal key_equal;
// mapped_type, if present, comes from _Map_base.
// hasher, if present, comes from _Hash_code_base/_Hashtable_base.
typedef typename __value_alloc_traits::pointer pointer;
typedef typename __value_alloc_traits::const_pointer const_pointer;
typedef value_type& reference;
typedef const value_type& const_reference;
private:
using __rehash_type = _RehashPolicy;
using __rehash_state = typename __rehash_type::_State;
using __constant_iterators = typename __traits_type::__constant_iterators;
using __unique_keys = typename __traits_type::__unique_keys;
using __key_extract = typename std::conditional<
__constant_iterators::value,
__detail::_Identity,
__detail::_Select1st>::type;
using __hashtable_base = __detail::
_Hashtable_base<_Key, _Value, _ExtractKey,
_Equal, _H1, _H2, _Hash, _Traits>;
using __hash_code_base = typename __hashtable_base::__hash_code_base;
using __hash_code = typename __hashtable_base::__hash_code;
using __ireturn_type = typename __hashtable_base::__ireturn_type;
using __map_base = __detail::_Map_base<_Key, _Value, _Alloc, _ExtractKey,
_Equal, _H1, _H2, _Hash,
_RehashPolicy, _Traits>;
using __rehash_base = __detail::_Rehash_base<_Key, _Value, _Alloc,
_ExtractKey, _Equal,
_H1, _H2, _Hash,
_RehashPolicy, _Traits>;
using __eq_base = __detail::_Equality<_Key, _Value, _Alloc, _ExtractKey,
_Equal, _H1, _H2, _Hash,
_RehashPolicy, _Traits>;
using __reuse_or_alloc_node_type =
__detail::_ReuseOrAllocNode<__node_alloc_type>;
// Metaprogramming for picking apart hash caching.
template<typename _Cond>
using __if_hash_cached = __or_<__not_<__hash_cached>, _Cond>;
template<typename _Cond>
using __if_hash_not_cached = __or_<__hash_cached, _Cond>;
// Compile-time diagnostics.
// _Hash_code_base has everything protected, so use this derived type to
// access it.
struct __hash_code_base_access : __hash_code_base
{ using __hash_code_base::_M_bucket_index; };
// Getting a bucket index from a node must not throw, because it is used
// in member functions (erase, swap, ...) that must not throw.
static_assert(noexcept(declval<const __hash_code_base_access&>()
._M_bucket_index((const __node_type*)nullptr,
(std::size_t)0)),
"Cache the hash code or qualify your functors involved"
" in hash code and bucket index computation with noexcept");
// The following static assertion is necessary to guarantee that
// local_iterator will be default constructible.
// When hash codes are cached, the local iterator inherits from the _H2
// functor, which must then be default constructible.
static_assert(__if_hash_cached<is_default_constructible<_H2>>::value,
"Functor used to map hash code to bucket index"
" must be default constructible");
template<typename _Keya, typename _Valuea, typename _Alloca,
typename _ExtractKeya, typename _Equala,
typename _H1a, typename _H2a, typename _Hasha,
typename _RehashPolicya, typename _Traitsa,
bool _Unique_keysa>
friend struct __detail::_Map_base;
template<typename _Keya, typename _Valuea, typename _Alloca,
typename _ExtractKeya, typename _Equala,
typename _H1a, typename _H2a, typename _Hasha,
typename _RehashPolicya, typename _Traitsa>
friend struct __detail::_Insert_base;
template<typename _Keya, typename _Valuea, typename _Alloca,
typename _ExtractKeya, typename _Equala,
typename _H1a, typename _H2a, typename _Hasha,
typename _RehashPolicya, typename _Traitsa,
bool _Constant_iteratorsa, bool _Unique_keysa>
friend struct __detail::_Insert;
public:
using size_type = typename __hashtable_base::size_type;
using difference_type = typename __hashtable_base::difference_type;
using iterator = typename __hashtable_base::iterator;
using const_iterator = typename __hashtable_base::const_iterator;
using local_iterator = typename __hashtable_base::local_iterator;
using const_local_iterator = typename __hashtable_base::
const_local_iterator;
private:
__bucket_type* _M_buckets = &_M_single_bucket;
size_type _M_bucket_count = 1;
__node_base _M_before_begin;
size_type _M_element_count = 0;
_RehashPolicy _M_rehash_policy;
// A single bucket used when only one bucket is needed. Especially
// interesting for move semantics: a moved-from hashtable is left with a
// single, non-allocated bucket, so those operations can be noexcept
// qualified.
// Note that we can't leave the hashtable with zero buckets without adding
// numerous checks in the code to avoid a modulus by zero.
__bucket_type _M_single_bucket = nullptr;
bool
_M_uses_single_bucket(__bucket_type* __bkts) const
{ return __builtin_expect(__bkts == &_M_single_bucket, false); }
bool
_M_uses_single_bucket() const
{ return _M_uses_single_bucket(_M_buckets); }
__hashtable_alloc&
_M_base_alloc() { return *this; }
__bucket_type*
_M_allocate_buckets(size_type __n)
{
if (__builtin_expect(__n == 1, false))
{
_M_single_bucket = nullptr;
return &_M_single_bucket;
}
return __hashtable_alloc::_M_allocate_buckets(__n);
}
void
_M_deallocate_buckets(__bucket_type* __bkts, size_type __n)
{
if (_M_uses_single_bucket(__bkts))
return;
__hashtable_alloc::_M_deallocate_buckets(__bkts, __n);
}
void
_M_deallocate_buckets()
{ _M_deallocate_buckets(_M_buckets, _M_bucket_count); }
// Gets a bucket's first node; deals with the fact that a non-empty
// bucket stores the node preceding its first node.
__node_type*
_M_bucket_begin(size_type __bkt) const;
__node_type*
_M_begin() const
{ return static_cast<__node_type*>(_M_before_begin._M_nxt); }
template<typename _NodeGenerator>
void
_M_assign(const _Hashtable&, const _NodeGenerator&);
void
_M_move_assign(_Hashtable&&, std::true_type);
void
_M_move_assign(_Hashtable&&, std::false_type);
void
_M_reset() noexcept;
_Hashtable(const _H1& __h1, const _H2& __h2, const _Hash& __h,
const _Equal& __eq, const _ExtractKey& __exk,
const allocator_type& __a)
: __hashtable_base(__exk, __h1, __h2, __h, __eq),
__hashtable_alloc(__node_alloc_type(__a))
{ }
public:
// Constructor, destructor, assignment, swap
_Hashtable() = default;
_Hashtable(size_type __bucket_hint,
const _H1&, const _H2&, const _Hash&,
const _Equal&, const _ExtractKey&,
const allocator_type&);
template<typename _InputIterator>
_Hashtable(_InputIterator __first, _InputIterator __last,
size_type __bucket_hint,
const _H1&, const _H2&, const _Hash&,
const _Equal&, const _ExtractKey&,
const allocator_type&);
_Hashtable(const _Hashtable&);
_Hashtable(_Hashtable&&) noexcept;
_Hashtable(const _Hashtable&, const allocator_type&);
_Hashtable(_Hashtable&&, const allocator_type&);
// Use delegating constructors.
explicit
_Hashtable(const allocator_type& __a)
: __hashtable_alloc(__node_alloc_type(__a))
{ }
explicit
_Hashtable(size_type __n,
const _H1& __hf = _H1(),
const key_equal& __eql = key_equal(),
const allocator_type& __a = allocator_type())
: _Hashtable(__n, __hf, _H2(), _Hash(), __eql,
__key_extract(), __a)
{ }
template<typename _InputIterator>
_Hashtable(_InputIterator __f, _InputIterator __l,
size_type __n = 0,
const _H1& __hf = _H1(),
const key_equal& __eql = key_equal(),
const allocator_type& __a = allocator_type())
: _Hashtable(__f, __l, __n, __hf, _H2(), _Hash(), __eql,
__key_extract(), __a)
{ }
_Hashtable(initializer_list<value_type> __l,
size_type __n = 0,
const _H1& __hf = _H1(),
const key_equal& __eql = key_equal(),
const allocator_type& __a = allocator_type())
: _Hashtable(__l.begin(), __l.end(), __n, __hf, _H2(), _Hash(), __eql,
__key_extract(), __a)
{ }
_Hashtable&
operator=(const _Hashtable& __ht);
_Hashtable&
operator=(_Hashtable&& __ht)
noexcept(__node_alloc_traits::_S_nothrow_move())
{
constexpr bool __move_storage =
__node_alloc_traits::_S_propagate_on_move_assign()
|| __node_alloc_traits::_S_always_equal();
_M_move_assign(std::move(__ht),
integral_constant<bool, __move_storage>());
return *this;
}
_Hashtable&
operator=(initializer_list<value_type> __l)
{
__reuse_or_alloc_node_type __roan(_M_begin(), *this);
_M_before_begin._M_nxt = nullptr;
clear();
this->_M_insert_range(__l.begin(), __l.end(), __roan);
return *this;
}
~_Hashtable() noexcept;
void
swap(_Hashtable&)
noexcept(__node_alloc_traits::_S_nothrow_swap());
// Basic container operations
iterator
begin() noexcept
{ return iterator(_M_begin()); }
const_iterator
begin() const noexcept
{ return const_iterator(_M_begin()); }
iterator
end() noexcept
{ return iterator(nullptr); }
const_iterator
end() const noexcept
{ return const_iterator(nullptr); }
const_iterator
cbegin() const noexcept
{ return const_iterator(_M_begin()); }
const_iterator
cend() const noexcept
{ return const_iterator(nullptr); }
size_type
size() const noexcept
{ return _M_element_count; }
bool
empty() const noexcept
{ return size() == 0; }
allocator_type
get_allocator() const noexcept
{ return allocator_type(this->_M_node_allocator()); }
size_type
max_size() const noexcept
{ return __node_alloc_traits::max_size(this->_M_node_allocator()); }
// Observers
key_equal
key_eq() const
{ return this->_M_eq(); }
// hash_function, if present, comes from _Hash_code_base.
// Bucket operations
size_type
bucket_count() const noexcept
{ return _M_bucket_count; }
size_type
max_bucket_count() const noexcept
{ return max_size(); }
size_type
bucket_size(size_type __n) const
{ return std::distance(begin(__n), end(__n)); }
size_type
bucket(const key_type& __k) const
{ return _M_bucket_index(__k, this->_M_hash_code(__k)); }
local_iterator
begin(size_type __n)
{
return local_iterator(*this, _M_bucket_begin(__n),
__n, _M_bucket_count);
}
local_iterator
end(size_type __n)
{ return local_iterator(*this, nullptr, __n, _M_bucket_count); }
const_local_iterator
begin(size_type __n) const
{
return const_local_iterator(*this, _M_bucket_begin(__n),
__n, _M_bucket_count);
}
const_local_iterator
end(size_type __n) const
{ return const_local_iterator(*this, nullptr, __n, _M_bucket_count); }
// DR 691.
const_local_iterator
cbegin(size_type __n) const
{
return const_local_iterator(*this, _M_bucket_begin(__n),
__n, _M_bucket_count);
}
const_local_iterator
cend(size_type __n) const
{ return const_local_iterator(*this, nullptr, __n, _M_bucket_count); }
float
load_factor() const noexcept
{
return static_cast<float>(size()) / static_cast<float>(bucket_count());
}
// max_load_factor, if present, comes from _Rehash_base.
// Generalization of max_load_factor. Extension, not found in
// TR1. Only useful if _RehashPolicy is something other than
// the default.
const _RehashPolicy&
__rehash_policy() const
{ return _M_rehash_policy; }
void
__rehash_policy(const _RehashPolicy&);
// Lookup.
iterator
find(const key_type& __k);
const_iterator
find(const key_type& __k) const;
size_type
count(const key_type& __k) const;
std::pair<iterator, iterator>
equal_range(const key_type& __k);
std::pair<const_iterator, const_iterator>
equal_range(const key_type& __k) const;
protected:
// Bucket index computation helpers.
size_type
_M_bucket_index(__node_type* __n) const noexcept
{ return __hash_code_base::_M_bucket_index(__n, _M_bucket_count); }
size_type
_M_bucket_index(const key_type& __k, __hash_code __c) const
{ return __hash_code_base::_M_bucket_index(__k, __c, _M_bucket_count); }
// Find and insert helper functions and types
// Find the node before the one matching the criteria.
__node_base*
_M_find_before_node(size_type, const key_type&, __hash_code) const;
__node_type*
_M_find_node(size_type __bkt, const key_type& __key,
__hash_code __c) const
{
__node_base* __before_n = _M_find_before_node(__bkt, __key, __c);
if (__before_n)
return static_cast<__node_type*>(__before_n->_M_nxt);
return nullptr;
}
// Insert a node at the beginning of a bucket.
void
_M_insert_bucket_begin(size_type, __node_type*);
// Remove the bucket's first node.
void
_M_remove_bucket_begin(size_type __bkt, __node_type* __next_n,
size_type __next_bkt);
// Get the node before __n in bucket __bkt.
__node_base*
_M_get_previous_node(size_type __bkt, __node_base* __n);
// Insert a node with hash code __code, in bucket __bkt if no rehash
// occurs (assumes no element with the same key is already present).
// Takes ownership of the node and deallocates it on exception.
iterator
_M_insert_unique_node(size_type __bkt, __hash_code __code,
__node_type* __n);
// Insert a node with hash code __code. Takes ownership of the node
// and deallocates it on exception.
iterator
_M_insert_multi_node(__node_type* __hint,
__hash_code __code, __node_type* __n);
template<typename... _Args>
std::pair<iterator, bool>
_M_emplace(std::true_type, _Args&&... __args);
template<typename... _Args>
iterator
_M_emplace(std::false_type __uk, _Args&&... __args)
{ return _M_emplace(cend(), __uk, std::forward<_Args>(__args)...); }
// Emplace with hint, useless when keys are unique.
template<typename... _Args>
iterator
_M_emplace(const_iterator, std::true_type __uk, _Args&&... __args)
{ return _M_emplace(__uk, std::forward<_Args>(__args)...).first; }
template<typename... _Args>
iterator
_M_emplace(const_iterator, std::false_type, _Args&&... __args);
template<typename _Arg, typename _NodeGenerator>
std::pair<iterator, bool>
_M_insert(_Arg&&, const _NodeGenerator&, std::true_type);
template<typename _Arg, typename _NodeGenerator>
iterator
_M_insert(_Arg&& __arg, const _NodeGenerator& __node_gen,
std::false_type __uk)
{
return _M_insert(cend(), std::forward<_Arg>(__arg), __node_gen,
__uk);
}
// Insert with hint, not used when keys are unique.
template<typename _Arg, typename _NodeGenerator>
iterator
_M_insert(const_iterator, _Arg&& __arg,
const _NodeGenerator& __node_gen, std::true_type __uk)
{
return
_M_insert(std::forward<_Arg>(__arg), __node_gen, __uk).first;
}
// Insert with hint when keys are not unique.
template<typename _Arg, typename _NodeGenerator>
iterator
_M_insert(const_iterator, _Arg&&,
const _NodeGenerator&, std::false_type);
size_type
_M_erase(std::true_type, const key_type&);
size_type
_M_erase(std::false_type, const key_type&);
iterator
_M_erase(size_type __bkt, __node_base* __prev_n, __node_type* __n);
public:
// Emplace
template<typename... _Args>
__ireturn_type
emplace(_Args&&... __args)
{ return _M_emplace(__unique_keys(), std::forward<_Args>(__args)...); }
template<typename... _Args>
iterator
emplace_hint(const_iterator __hint, _Args&&... __args)
{
return _M_emplace(__hint, __unique_keys(),
std::forward<_Args>(__args)...);
}
// Insert member functions via inheritance.
// Erase
iterator
erase(const_iterator);
// LWG 2059.
iterator
erase(iterator __it)
{ return erase(const_iterator(__it)); }
size_type
erase(const key_type& __k)
{ return _M_erase(__unique_keys(), __k); }
iterator
erase(const_iterator, const_iterator);
void
clear() noexcept;
// Set the number of buckets to be appropriate for a container of __n elements.
void rehash(size_type __n);
// DR 1189.
// reserve, if present, comes from _Rehash_base.
private:
// Helper rehash method used when keys are unique.
void _M_rehash_aux(size_type __n, std::true_type);
// Helper rehash method used when keys can be non-unique.
void _M_rehash_aux(size_type __n, std::false_type);
// Unconditionally change the size of the bucket array to __n; restore
// the hash policy state to __state on exception.
void _M_rehash(size_type __n, const __rehash_state& __state);
};
// Definitions of class template _Hashtable's out-of-line member functions.
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
auto
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
_M_bucket_begin(size_type __bkt) const
-> __node_type*
{
__node_base* __n = _M_buckets[__bkt];
return __n ? static_cast<__node_type*>(__n->_M_nxt) : nullptr;
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
_Hashtable(size_type __bucket_hint,
const _H1& __h1, const _H2& __h2, const _Hash& __h,
const _Equal& __eq, const _ExtractKey& __exk,
const allocator_type& __a)
: _Hashtable(__h1, __h2, __h, __eq, __exk, __a)
{
auto __bkt = _M_rehash_policy._M_next_bkt(__bucket_hint);
if (__bkt > _M_bucket_count)
{
_M_buckets = _M_allocate_buckets(__bkt);
_M_bucket_count = __bkt;
}
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
template<typename _InputIterator>
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
_Hashtable(_InputIterator __f, _InputIterator __l,
size_type __bucket_hint,
const _H1& __h1, const _H2& __h2, const _Hash& __h,
const _Equal& __eq, const _ExtractKey& __exk,
const allocator_type& __a)
: _Hashtable(__h1, __h2, __h, __eq, __exk, __a)
{
auto __nb_elems = __detail::__distance_fw(__f, __l);
auto __bkt_count =
_M_rehash_policy._M_next_bkt(
std::max(_M_rehash_policy._M_bkt_for_elements(__nb_elems),
__bucket_hint));
if (__bkt_count > _M_bucket_count)
{
_M_buckets = _M_allocate_buckets(__bkt_count);
_M_bucket_count = __bkt_count;
}
__try
{
for (; __f != __l; ++__f)
this->insert(*__f);
}
__catch(...)
{
clear();
_M_deallocate_buckets();
__throw_exception_again;
}
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
auto
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
operator=(const _Hashtable& __ht)
-> _Hashtable&
{
if (&__ht == this)
return *this;
if (__node_alloc_traits::_S_propagate_on_copy_assign())
{
auto& __this_alloc = this->_M_node_allocator();
auto& __that_alloc = __ht._M_node_allocator();
if (!__node_alloc_traits::_S_always_equal()
&& __this_alloc != __that_alloc)
{
// Replacement allocator cannot free existing storage.
this->_M_deallocate_nodes(_M_begin());
_M_before_begin._M_nxt = nullptr;
_M_deallocate_buckets();
_M_buckets = nullptr;
std::__alloc_on_copy(__this_alloc, __that_alloc);
__hashtable_base::operator=(__ht);
_M_bucket_count = __ht._M_bucket_count;
_M_element_count = __ht._M_element_count;
_M_rehash_policy = __ht._M_rehash_policy;
__try
{
_M_assign(__ht,
[this](const __node_type* __n)
{ return this->_M_allocate_node(__n->_M_v()); });
}
__catch(...)
{
// _M_assign took care of deallocating all memory. Now we
// must make sure this instance remains in a usable state.
_M_reset();
__throw_exception_again;
}
return *this;
}
std::__alloc_on_copy(__this_alloc, __that_alloc);
}
// Reuse allocated buckets and nodes.
__bucket_type* __former_buckets = nullptr;
std::size_t __former_bucket_count = _M_bucket_count;
const __rehash_state& __former_state = _M_rehash_policy._M_state();
if (_M_bucket_count != __ht._M_bucket_count)
{
__former_buckets = _M_buckets;
_M_buckets = _M_allocate_buckets(__ht._M_bucket_count);
_M_bucket_count = __ht._M_bucket_count;
}
else
__builtin_memset(_M_buckets, 0,
_M_bucket_count * sizeof(__bucket_type));
__try
{
__hashtable_base::operator=(__ht);
_M_element_count = __ht._M_element_count;
_M_rehash_policy = __ht._M_rehash_policy;
__reuse_or_alloc_node_type __roan(_M_begin(), *this);
_M_before_begin._M_nxt = nullptr;
_M_assign(__ht,
[&__roan](const __node_type* __n)
{ return __roan(__n->_M_v()); });
if (__former_buckets)
_M_deallocate_buckets(__former_buckets, __former_bucket_count);
}
__catch(...)
{
if (__former_buckets)
{
// Restore previous buckets.
_M_deallocate_buckets();
_M_rehash_policy._M_reset(__former_state);
_M_buckets = __former_buckets;
_M_bucket_count = __former_bucket_count;
}
__builtin_memset(_M_buckets, 0,
_M_bucket_count * sizeof(__bucket_type));
__throw_exception_again;
}
return *this;
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
template<typename _NodeGenerator>
void
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
_M_assign(const _Hashtable& __ht, const _NodeGenerator& __node_gen)
{
__bucket_type* __buckets = nullptr;
if (!_M_buckets)
_M_buckets = __buckets = _M_allocate_buckets(_M_bucket_count);
__try
{
if (!__ht._M_before_begin._M_nxt)
return;
// First deal with the special first node pointed to by
// _M_before_begin.
__node_type* __ht_n = __ht._M_begin();
__node_type* __this_n = __node_gen(__ht_n);
this->_M_copy_code(__this_n, __ht_n);
_M_before_begin._M_nxt = __this_n;
_M_buckets[_M_bucket_index(__this_n)] = &_M_before_begin;
// Then deal with other nodes.
__node_base* __prev_n = __this_n;
for (__ht_n = __ht_n->_M_next(); __ht_n; __ht_n = __ht_n->_M_next())
{
__this_n = __node_gen(__ht_n);
__prev_n->_M_nxt = __this_n;
this->_M_copy_code(__this_n, __ht_n);
size_type __bkt = _M_bucket_index(__this_n);
if (!_M_buckets[__bkt])
_M_buckets[__bkt] = __prev_n;
__prev_n = __this_n;
}
}
__catch(...)
{
clear();
if (__buckets)
_M_deallocate_buckets();
__throw_exception_again;
}
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
void
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
_M_reset() noexcept
{
_M_rehash_policy._M_reset();
_M_bucket_count = 1;
_M_single_bucket = nullptr;
_M_buckets = &_M_single_bucket;
_M_before_begin._M_nxt = nullptr;
_M_element_count = 0;
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
void
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
_M_move_assign(_Hashtable&& __ht, std::true_type)
{
this->_M_deallocate_nodes(_M_begin());
_M_deallocate_buckets();
__hashtable_base::operator=(std::move(__ht));
_M_rehash_policy = __ht._M_rehash_policy;
if (!__ht._M_uses_single_bucket())
_M_buckets = __ht._M_buckets;
else
{
_M_buckets = &_M_single_bucket;
_M_single_bucket = __ht._M_single_bucket;
}
_M_bucket_count = __ht._M_bucket_count;
_M_before_begin._M_nxt = __ht._M_before_begin._M_nxt;
_M_element_count = __ht._M_element_count;
std::__alloc_on_move(this->_M_node_allocator(), __ht._M_node_allocator());
// Fix buckets containing the _M_before_begin pointers that can't be
// moved.
if (_M_begin())
_M_buckets[_M_bucket_index(_M_begin())] = &_M_before_begin;
__ht._M_reset();
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
void
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
_M_move_assign(_Hashtable&& __ht, std::false_type)
{
if (__ht._M_node_allocator() == this->_M_node_allocator())
_M_move_assign(std::move(__ht), std::true_type());
else
{
// Can't move memory; move the elements instead.
__bucket_type* __former_buckets = nullptr;
size_type __former_bucket_count = _M_bucket_count;
const __rehash_state& __former_state = _M_rehash_policy._M_state();
if (_M_bucket_count != __ht._M_bucket_count)
{
__former_buckets = _M_buckets;
_M_buckets = _M_allocate_buckets(__ht._M_bucket_count);
_M_bucket_count = __ht._M_bucket_count;
}
else
__builtin_memset(_M_buckets, 0,
_M_bucket_count * sizeof(__bucket_type));
__try
{
__hashtable_base::operator=(std::move(__ht));
_M_element_count = __ht._M_element_count;
_M_rehash_policy = __ht._M_rehash_policy;
__reuse_or_alloc_node_type __roan(_M_begin(), *this);
_M_before_begin._M_nxt = nullptr;
_M_assign(__ht,
[&__roan](__node_type* __n)
{ return __roan(std::move_if_noexcept(__n->_M_v())); });
__ht.clear();
}
__catch(...)
{
if (__former_buckets)
{
_M_deallocate_buckets();
_M_rehash_policy._M_reset(__former_state);
_M_buckets = __former_buckets;
_M_bucket_count = __former_bucket_count;
}
__builtin_memset(_M_buckets, 0,
_M_bucket_count * sizeof(__bucket_type));
__throw_exception_again;
}
}
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
_Hashtable(const _Hashtable& __ht)
: __hashtable_base(__ht),
__map_base(__ht),
__rehash_base(__ht),
__hashtable_alloc(
__node_alloc_traits::_S_select_on_copy(__ht._M_node_allocator())),
_M_buckets(nullptr),
_M_bucket_count(__ht._M_bucket_count),
_M_element_count(__ht._M_element_count),
_M_rehash_policy(__ht._M_rehash_policy)
{
_M_assign(__ht,
[this](const __node_type* __n)
{ return this->_M_allocate_node(__n->_M_v()); });
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
_Hashtable(_Hashtable&& __ht) noexcept
: __hashtable_base(__ht),
__map_base(__ht),
__rehash_base(__ht),
__hashtable_alloc(std::move(__ht._M_base_alloc())),
_M_buckets(__ht._M_buckets),
_M_bucket_count(__ht._M_bucket_count),
_M_before_begin(__ht._M_before_begin._M_nxt),
_M_element_count(__ht._M_element_count),
_M_rehash_policy(__ht._M_rehash_policy)
{
// Update the bucket pointer if __ht is using its single inline bucket.
if (__ht._M_uses_single_bucket())
{
_M_buckets = &_M_single_bucket;
_M_single_bucket = __ht._M_single_bucket;
}
// Update, if necessary, the bucket pointing to _M_before_begin, which
// hasn't moved.
if (_M_begin())
_M_buckets[_M_bucket_index(_M_begin())] = &_M_before_begin;
__ht._M_reset();
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
_Hashtable(const _Hashtable& __ht, const allocator_type& __a)
: __hashtable_base(__ht),
__map_base(__ht),
__rehash_base(__ht),
__hashtable_alloc(__node_alloc_type(__a)),
_M_buckets(),
_M_bucket_count(__ht._M_bucket_count),
_M_element_count(__ht._M_element_count),
_M_rehash_policy(__ht._M_rehash_policy)
{
_M_assign(__ht,
[this](const __node_type* __n)
{ return this->_M_allocate_node(__n->_M_v()); });
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
_Hashtable(_Hashtable&& __ht, const allocator_type& __a)
: __hashtable_base(__ht),
__map_base(__ht),
__rehash_base(__ht),
__hashtable_alloc(__node_alloc_type(__a)),
_M_buckets(nullptr),
_M_bucket_count(__ht._M_bucket_count),
_M_element_count(__ht._M_element_count),
_M_rehash_policy(__ht._M_rehash_policy)
{
if (__ht._M_node_allocator() == this->_M_node_allocator())
{
if (__ht._M_uses_single_bucket())
{
_M_buckets = &_M_single_bucket;
_M_single_bucket = __ht._M_single_bucket;
}
else
_M_buckets = __ht._M_buckets;
_M_before_begin._M_nxt = __ht._M_before_begin._M_nxt;
// Update, if necessary, the bucket pointing to _M_before_begin, which
// hasn't moved.
if (_M_begin())
_M_buckets[_M_bucket_index(_M_begin())] = &_M_before_begin;
__ht._M_reset();
}
else
{
_M_assign(__ht,
[this](__node_type* __n)
{
return this->_M_allocate_node(
std::move_if_noexcept(__n->_M_v()));
});
__ht.clear();
}
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
~_Hashtable() noexcept
{
clear();
_M_deallocate_buckets();
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
void
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
swap(_Hashtable& __x)
noexcept(__node_alloc_traits::_S_nothrow_swap())
{
// The only base class with member variables is hash_code_base.
// We define _Hash_code_base::_M_swap because different
// specializations have different members.
this->_M_swap(__x);
std::__alloc_on_swap(this->_M_node_allocator(), __x._M_node_allocator());
std::swap(_M_rehash_policy, __x._M_rehash_policy);
// Deal properly with potentially moved instances.
if (this->_M_uses_single_bucket())
{
if (!__x._M_uses_single_bucket())
{
_M_buckets = __x._M_buckets;
__x._M_buckets = &__x._M_single_bucket;
}
}
else if (__x._M_uses_single_bucket())
{
__x._M_buckets = _M_buckets;
_M_buckets = &_M_single_bucket;
}
else
std::swap(_M_buckets, __x._M_buckets);
std::swap(_M_bucket_count, __x._M_bucket_count);
std::swap(_M_before_begin._M_nxt, __x._M_before_begin._M_nxt);
std::swap(_M_element_count, __x._M_element_count);
std::swap(_M_single_bucket, __x._M_single_bucket);
// Fix buckets containing the _M_before_begin pointers that can't be
// swapped.
if (_M_begin())
_M_buckets[_M_bucket_index(_M_begin())] = &_M_before_begin;
if (__x._M_begin())
__x._M_buckets[__x._M_bucket_index(__x._M_begin())]
= &__x._M_before_begin;
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
void
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
__rehash_policy(const _RehashPolicy& __pol)
{
auto __do_rehash =
__pol._M_need_rehash(_M_bucket_count, _M_element_count, 0);
if (__do_rehash.first)
_M_rehash(__do_rehash.second, _M_rehash_policy._M_state());
_M_rehash_policy = __pol;
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
auto
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
find(const key_type& __k)
-> iterator
{
__hash_code __code = this->_M_hash_code(__k);
std::size_t __n = _M_bucket_index(__k, __code);
__node_type* __p = _M_find_node(__n, __k, __code);
return __p ? iterator(__p) : end();
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
auto
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
find(const key_type& __k) const
-> const_iterator
{
__hash_code __code = this->_M_hash_code(__k);
std::size_t __n = _M_bucket_index(__k, __code);
__node_type* __p = _M_find_node(__n, __k, __code);
return __p ? const_iterator(__p) : end();
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
auto
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
count(const key_type& __k) const
-> size_type
{
__hash_code __code = this->_M_hash_code(__k);
std::size_t __n = _M_bucket_index(__k, __code);
__node_type* __p = _M_bucket_begin(__n);
if (!__p)
return 0;
std::size_t __result = 0;
for (;; __p = __p->_M_next())
{
if (this->_M_equals(__k, __code, __p))
++__result;
else if (__result)
// All equivalent values are adjacent; if we find a non-equivalent
// value after an equivalent one, there are no further equivalent
// values.
break;
if (!__p->_M_nxt || _M_bucket_index(__p->_M_next()) != __n)
break;
}
return __result;
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
auto
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
equal_range(const key_type& __k)
-> pair<iterator, iterator>
{
__hash_code __code = this->_M_hash_code(__k);
std::size_t __n = _M_bucket_index(__k, __code);
__node_type* __p = _M_find_node(__n, __k, __code);
if (__p)
{
__node_type* __p1 = __p->_M_next();
while (__p1 && _M_bucket_index(__p1) == __n
&& this->_M_equals(__k, __code, __p1))
__p1 = __p1->_M_next();
return std::make_pair(iterator(__p), iterator(__p1));
}
else
return std::make_pair(end(), end());
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
auto
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
equal_range(const key_type& __k) const
-> pair<const_iterator, const_iterator>
{
__hash_code __code = this->_M_hash_code(__k);
std::size_t __n = _M_bucket_index(__k, __code);
__node_type* __p = _M_find_node(__n, __k, __code);
if (__p)
{
__node_type* __p1 = __p->_M_next();
while (__p1 && _M_bucket_index(__p1) == __n
&& this->_M_equals(__k, __code, __p1))
__p1 = __p1->_M_next();
return std::make_pair(const_iterator(__p), const_iterator(__p1));
}
else
return std::make_pair(end(), end());
}
// Find the node whose key compares equal to k in the bucket n.
// Return nullptr if no node is found.
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
auto
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
_M_find_before_node(size_type __n, const key_type& __k,
__hash_code __code) const
-> __node_base*
{
__node_base* __prev_p = _M_buckets[__n];
if (!__prev_p)
return nullptr;
for (__node_type* __p = static_cast<__node_type*>(__prev_p->_M_nxt);;
__p = __p->_M_next())
{
if (this->_M_equals(__k, __code, __p))
return __prev_p;
if (!__p->_M_nxt || _M_bucket_index(__p->_M_next()) != __n)
break;
__prev_p = __p;
}
return nullptr;
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
void
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
_M_insert_bucket_begin(size_type __bkt, __node_type* __node)
{
if (_M_buckets[__bkt])
{
// The bucket is not empty; we just need to insert the new node
// after the bucket's before-begin node.
__node->_M_nxt = _M_buckets[__bkt]->_M_nxt;
_M_buckets[__bkt]->_M_nxt = __node;
}
else
{
// The bucket is empty; the new node is inserted at the
// beginning of the singly-linked list and the bucket will
// point to _M_before_begin.
__node->_M_nxt = _M_before_begin._M_nxt;
_M_before_begin._M_nxt = __node;
if (__node->_M_nxt)
// We must update the former begin bucket, which is pointing to
// _M_before_begin.
_M_buckets[_M_bucket_index(__node->_M_next())] = __node;
_M_buckets[__bkt] = &_M_before_begin;
}
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
void
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
_M_remove_bucket_begin(size_type __bkt, __node_type* __next,
size_type __next_bkt)
{
if (!__next || __next_bkt != __bkt)
{
// The bucket is now empty.
// First, update the next bucket, if any.
if (__next)
_M_buckets[__next_bkt] = _M_buckets[__bkt];
// Second, update the before-begin node if necessary.
if (&_M_before_begin == _M_buckets[__bkt])
_M_before_begin._M_nxt = __next;
_M_buckets[__bkt] = nullptr;
}
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
auto
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
_M_get_previous_node(size_type __bkt, __node_base* __n)
-> __node_base*
{
__node_base* __prev_n = _M_buckets[__bkt];
while (__prev_n->_M_nxt != __n)
__prev_n = __prev_n->_M_nxt;
return __prev_n;
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
template<typename... _Args>
auto
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
_M_emplace(std::true_type, _Args&&... __args)
-> pair<iterator, bool>
{
// First build the node to get access to the hash code
__node_type* __node = this->_M_allocate_node(std::forward<_Args>(__args)...);
const key_type& __k = this->_M_extract()(__node->_M_v());
__hash_code __code;
__try
{
__code = this->_M_hash_code(__k);
}
__catch(...)
{
this->_M_deallocate_node(__node);
__throw_exception_again;
}
size_type __bkt = _M_bucket_index(__k, __code);
if (__node_type* __p = _M_find_node(__bkt, __k, __code))
{
// There is already an equivalent node, no insertion
this->_M_deallocate_node(__node);
return std::make_pair(iterator(__p), false);
}
// Insert the node
return std::make_pair(_M_insert_unique_node(__bkt, __code, __node),
true);
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
template<typename... _Args>
auto
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
_M_emplace(const_iterator __hint, std::false_type, _Args&&... __args)
-> iterator
{
// First build the node to get its hash code.
__node_type* __node =
this->_M_allocate_node(std::forward<_Args>(__args)...);
__hash_code __code;
__try
{
__code = this->_M_hash_code(this->_M_extract()(__node->_M_v()));
}
__catch(...)
{
this->_M_deallocate_node(__node);
__throw_exception_again;
}
return _M_insert_multi_node(__hint._M_cur, __code, __node);
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
auto
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
_M_insert_unique_node(size_type __bkt, __hash_code __code,
__node_type* __node)
-> iterator
{
const __rehash_state& __saved_state = _M_rehash_policy._M_state();
std::pair<bool, std::size_t> __do_rehash
= _M_rehash_policy._M_need_rehash(_M_bucket_count, _M_element_count, 1);
__try
{
if (__do_rehash.first)
{
_M_rehash(__do_rehash.second, __saved_state);
__bkt = _M_bucket_index(this->_M_extract()(__node->_M_v()), __code);
}
this->_M_store_code(__node, __code);
// Always insert at the beginning of the bucket.
_M_insert_bucket_begin(__bkt, __node);
++_M_element_count;
return iterator(__node);
}
__catch(...)
{
this->_M_deallocate_node(__node);
__throw_exception_again;
}
}
// Insert node with hash code __code (equivalent elements may already be
// present). Take ownership of the node; deallocate it on exception.
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
auto
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
_M_insert_multi_node(__node_type* __hint, __hash_code __code,
__node_type* __node)
-> iterator
{
const __rehash_state& __saved_state = _M_rehash_policy._M_state();
std::pair<bool, std::size_t> __do_rehash
= _M_rehash_policy._M_need_rehash(_M_bucket_count, _M_element_count, 1);
__try
{
if (__do_rehash.first)
_M_rehash(__do_rehash.second, __saved_state);
this->_M_store_code(__node, __code);
const key_type& __k = this->_M_extract()(__node->_M_v());
size_type __bkt = _M_bucket_index(__k, __code);
// Find the node before an equivalent one or use hint if it exists and
// if it is equivalent.
__node_base* __prev
= __builtin_expect(__hint != nullptr, false)
&& this->_M_equals(__k, __code, __hint)
? __hint
: _M_find_before_node(__bkt, __k, __code);
if (__prev)
{
// Insert after the node before the equivalent one.
__node->_M_nxt = __prev->_M_nxt;
__prev->_M_nxt = __node;
if (__builtin_expect(__prev == __hint, false))
// The hint might be the bucket's last node; in this case we need
// to update the next bucket.
if (__node->_M_nxt
&& !this->_M_equals(__k, __code, __node->_M_next()))
{
size_type __next_bkt = _M_bucket_index(__node->_M_next());
if (__next_bkt != __bkt)
_M_buckets[__next_bkt] = __node;
}
}
else
// The inserted node has no equivalent in the
// hashtable. We must insert the new node at the
// beginning of the bucket to preserve equivalent
// elements' relative positions.
_M_insert_bucket_begin(__bkt, __node);
++_M_element_count;
return iterator(__node);
}
__catch(...)
{
this->_M_deallocate_node(__node);
__throw_exception_again;
}
}
// Insert v if no element with its key is already present.
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
template<typename _Arg, typename _NodeGenerator>
auto
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
_M_insert(_Arg&& __v, const _NodeGenerator& __node_gen, std::true_type)
-> pair<iterator, bool>
{
const key_type& __k = this->_M_extract()(__v);
__hash_code __code = this->_M_hash_code(__k);
size_type __bkt = _M_bucket_index(__k, __code);
__node_type* __n = _M_find_node(__bkt, __k, __code);
if (__n)
return std::make_pair(iterator(__n), false);
__n = __node_gen(std::forward<_Arg>(__v));
return std::make_pair(_M_insert_unique_node(__bkt, __code, __n), true);
}
// Insert v unconditionally.
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
template<typename _Arg, typename _NodeGenerator>
auto
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
_M_insert(const_iterator __hint, _Arg&& __v,
const _NodeGenerator& __node_gen, std::false_type)
-> iterator
{
// First compute the hash code so that we don't do anything if it
// throws.
__hash_code __code = this->_M_hash_code(this->_M_extract()(__v));
// Second allocate new node so that we don't rehash if it throws.
__node_type* __node = __node_gen(std::forward<_Arg>(__v));
return _M_insert_multi_node(__hint._M_cur, __code, __node);
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
auto
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
erase(const_iterator __it)
-> iterator
{
__node_type* __n = __it._M_cur;
std::size_t __bkt = _M_bucket_index(__n);
// Look for the previous node to unlink it from the node being
// erased; this is why buckets store the before-begin node: it
// makes this search fast.
__node_base* __prev_n = _M_get_previous_node(__bkt, __n);
return _M_erase(__bkt, __prev_n, __n);
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
auto
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
_M_erase(size_type __bkt, __node_base* __prev_n, __node_type* __n)
-> iterator
{
if (__prev_n == _M_buckets[__bkt])
_M_remove_bucket_begin(__bkt, __n->_M_next(),
__n->_M_nxt ? _M_bucket_index(__n->_M_next()) : 0);
else if (__n->_M_nxt)
{
size_type __next_bkt = _M_bucket_index(__n->_M_next());
if (__next_bkt != __bkt)
_M_buckets[__next_bkt] = __prev_n;
}
__prev_n->_M_nxt = __n->_M_nxt;
iterator __result(__n->_M_next());
this->_M_deallocate_node(__n);
--_M_element_count;
return __result;
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
auto
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
_M_erase(std::true_type, const key_type& __k)
-> size_type
{
__hash_code __code = this->_M_hash_code(__k);
std::size_t __bkt = _M_bucket_index(__k, __code);
// Look for the node before the first matching node.
__node_base* __prev_n = _M_find_before_node(__bkt, __k, __code);
if (!__prev_n)
return 0;
// We found a matching node, erase it.
__node_type* __n = static_cast<__node_type*>(__prev_n->_M_nxt);
_M_erase(__bkt, __prev_n, __n);
return 1;
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
auto
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
_M_erase(std::false_type, const key_type& __k)
-> size_type
{
__hash_code __code = this->_M_hash_code(__k);
std::size_t __bkt = _M_bucket_index(__k, __code);
// Look for the node before the first matching node.
__node_base* __prev_n = _M_find_before_node(__bkt, __k, __code);
if (!__prev_n)
return 0;
// _GLIBCXX_RESOLVE_LIB_DEFECTS
// 526. Is it undefined if a function in the standard changes
// in parameters?
// We use one loop to find all matching nodes and another to deallocate
// them so that the key stays valid during the first loop. It might be
// invalidated indirectly when destroying nodes.
__node_type* __n = static_cast<__node_type*>(__prev_n->_M_nxt);
__node_type* __n_last = __n;
std::size_t __n_last_bkt = __bkt;
do
{
__n_last = __n_last->_M_next();
if (!__n_last)
break;
__n_last_bkt = _M_bucket_index(__n_last);
}
while (__n_last_bkt == __bkt && this->_M_equals(__k, __code, __n_last));
// Deallocate nodes.
size_type __result = 0;
do
{
__node_type* __p = __n->_M_next();
this->_M_deallocate_node(__n);
__n = __p;
++__result;
--_M_element_count;
}
while (__n != __n_last);
if (__prev_n == _M_buckets[__bkt])
_M_remove_bucket_begin(__bkt, __n_last, __n_last_bkt);
else if (__n_last && __n_last_bkt != __bkt)
_M_buckets[__n_last_bkt] = __prev_n;
__prev_n->_M_nxt = __n_last;
return __result;
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
auto
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
erase(const_iterator __first, const_iterator __last)
-> iterator
{
__node_type* __n = __first._M_cur;
__node_type* __last_n = __last._M_cur;
if (__n == __last_n)
return iterator(__n);
std::size_t __bkt = _M_bucket_index(__n);
__node_base* __prev_n = _M_get_previous_node(__bkt, __n);
bool __is_bucket_begin = __n == _M_bucket_begin(__bkt);
std::size_t __n_bkt = __bkt;
for (;;)
{
do
{
__node_type* __tmp = __n;
__n = __n->_M_next();
this->_M_deallocate_node(__tmp);
--_M_element_count;
if (!__n)
break;
__n_bkt = _M_bucket_index(__n);
}
while (__n != __last_n && __n_bkt == __bkt);
if (__is_bucket_begin)
_M_remove_bucket_begin(__bkt, __n, __n_bkt);
if (__n == __last_n)
break;
__is_bucket_begin = true;
__bkt = __n_bkt;
}
if (__n && (__n_bkt != __bkt || __is_bucket_begin))
_M_buckets[__n_bkt] = __prev_n;
__prev_n->_M_nxt = __n;
return iterator(__n);
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
void
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
clear() noexcept
{
this->_M_deallocate_nodes(_M_begin());
__builtin_memset(_M_buckets, 0, _M_bucket_count * sizeof(__bucket_type));
_M_element_count = 0;
_M_before_begin._M_nxt = nullptr;
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
void
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
rehash(size_type __n)
{
const __rehash_state& __saved_state = _M_rehash_policy._M_state();
std::size_t __buckets
= std::max(_M_rehash_policy._M_bkt_for_elements(_M_element_count + 1),
__n);
__buckets = _M_rehash_policy._M_next_bkt(__buckets);
if (__buckets != _M_bucket_count)
_M_rehash(__buckets, __saved_state);
else
// No rehash; restore the previous state to keep the policy consistent.
_M_rehash_policy._M_reset(__saved_state);
}
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
void
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
_M_rehash(size_type __n, const __rehash_state& __state)
{
__try
{
_M_rehash_aux(__n, __unique_keys());
}
__catch(...)
{
// A failure here means that bucket allocation failed. We only
// have to restore the hash policy's previous state.
_M_rehash_policy._M_reset(__state);
__throw_exception_again;
}
}
// Rehash when there are no equivalent elements.
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
void
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
_M_rehash_aux(size_type __n, std::true_type)
{
__bucket_type* __new_buckets = _M_allocate_buckets(__n);
__node_type* __p = _M_begin();
_M_before_begin._M_nxt = nullptr;
std::size_t __bbegin_bkt = 0;
while (__p)
{
__node_type* __next = __p->_M_next();
std::size_t __bkt = __hash_code_base::_M_bucket_index(__p, __n);
if (!__new_buckets[__bkt])
{
__p->_M_nxt = _M_before_begin._M_nxt;
_M_before_begin._M_nxt = __p;
__new_buckets[__bkt] = &_M_before_begin;
if (__p->_M_nxt)
__new_buckets[__bbegin_bkt] = __p;
__bbegin_bkt = __bkt;
}
else
{
__p->_M_nxt = __new_buckets[__bkt]->_M_nxt;
__new_buckets[__bkt]->_M_nxt = __p;
}
__p = __next;
}
_M_deallocate_buckets();
_M_bucket_count = __n;
_M_buckets = __new_buckets;
}
// Rehash when there can be equivalent elements, preserving their
// relative order.
template<typename _Key, typename _Value,
typename _Alloc, typename _ExtractKey, typename _Equal,
typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
typename _Traits>
void
_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
_H1, _H2, _Hash, _RehashPolicy, _Traits>::
_M_rehash_aux(size_type __n, std::false_type)
{
__bucket_type* __new_buckets = _M_allocate_buckets(__n);
__node_type* __p = _M_begin();
_M_before_begin._M_nxt = nullptr;
std::size_t __bbegin_bkt = 0;
std::size_t __prev_bkt = 0;
__node_type* __prev_p = nullptr;
bool __check_bucket = false;
while (__p)
{
__node_type* __next = __p->_M_next();
std::size_t __bkt = __hash_code_base::_M_bucket_index(__p, __n);
if (__prev_p && __prev_bkt == __bkt)
{
// The previous insertion was into this bucket; we insert after the
// previously inserted node to preserve the relative order of
// equivalent elements.
__p->_M_nxt = __prev_p->_M_nxt;
__prev_p->_M_nxt = __p;
// Inserting after a node in a bucket requires checking that we
// haven't changed the bucket's last node; if we have, the next
// bucket (which stores its before-begin node) must be updated. We
// schedule a check as soon as we move out of a sequence of
// equivalent nodes, to limit the number of checks.
__check_bucket = true;
}
else
{
if (__check_bucket)
{
// Check whether we must update the next bucket because of
// insertions into the __prev_bkt bucket.
if (__prev_p->_M_nxt)
{
std::size_t __next_bkt
= __hash_code_base::_M_bucket_index(__prev_p->_M_next(),
__n);
if (__next_bkt != __prev_bkt)
__new_buckets[__next_bkt] = __prev_p;
}
__check_bucket = false;
}
if (!__new_buckets[__bkt])
{
__p->_M_nxt = _M_before_begin._M_nxt;
_M_before_begin._M_nxt = __p;
__new_buckets[__bkt] = &_M_before_begin;
if (__p->_M_nxt)
__new_buckets[__bbegin_bkt] = __p;
__bbegin_bkt = __bkt;
}
else
{
__p->_M_nxt = __new_buckets[__bkt]->_M_nxt;
__new_buckets[__bkt]->_M_nxt = __p;
}
}
__prev_p = __p;
__prev_bkt = __bkt;
__p = __next;
}
if (__check_bucket && __prev_p->_M_nxt)
{
std::size_t __next_bkt
= __hash_code_base::_M_bucket_index(__prev_p->_M_next(), __n);
if (__next_bkt != __prev_bkt)
__new_buckets[__next_bkt] = __prev_p;
}
_M_deallocate_buckets();
_M_bucket_count = __n;
_M_buckets = __new_buckets;
}
_GLIBCXX_END_NAMESPACE_VERSION
} // namespace std
#endif // _HASHTABLE_H
```
|
```go
//go:build windows

package dhcp
import (
"github.com/alecthomas/kingpin/v2"
"github.com/go-kit/log"
"github.com/prometheus-community/windows_exporter/pkg/perflib"
"github.com/prometheus-community/windows_exporter/pkg/types"
"github.com/prometheus/client_golang/prometheus"
)
const Name = "dhcp"
type Config struct{}
var ConfigDefaults = Config{}
// A Collector is a Prometheus collector for perflib DHCP metrics.
type Collector struct {
config Config
logger log.Logger
acksTotal *prometheus.Desc
activeQueueLength *prometheus.Desc
conflictCheckQueueLength *prometheus.Desc
declinesTotal *prometheus.Desc
deniedDueToMatch *prometheus.Desc
deniedDueToNonMatch *prometheus.Desc
discoversTotal *prometheus.Desc
duplicatesDroppedTotal *prometheus.Desc
failoverBndackReceivedTotal *prometheus.Desc
failoverBndackSentTotal *prometheus.Desc
failoverBndupdDropped *prometheus.Desc
failoverBndupdPendingOutboundQueue *prometheus.Desc
failoverBndupdReceivedTotal *prometheus.Desc
failoverBndupdSentTotal *prometheus.Desc
failoverTransitionsCommunicationInterruptedState *prometheus.Desc
failoverTransitionsPartnerDownState *prometheus.Desc
failoverTransitionsRecoverState *prometheus.Desc
informsTotal *prometheus.Desc
nACKsTotal *prometheus.Desc
offerQueueLength *prometheus.Desc
offersTotal *prometheus.Desc
packetsExpiredTotal *prometheus.Desc
packetsReceivedTotal *prometheus.Desc
releasesTotal *prometheus.Desc
requestsTotal *prometheus.Desc
}
func New(logger log.Logger, config *Config) *Collector {
if config == nil {
config = &ConfigDefaults
}
c := &Collector{
config: *config,
}
c.SetLogger(logger)
return c
}
func NewWithFlags(_ *kingpin.Application) *Collector {
return &Collector{}
}
func (c *Collector) GetName() string {
return Name
}
func (c *Collector) SetLogger(logger log.Logger) {
c.logger = log.With(logger, "collector", Name)
}
func (c *Collector) GetPerfCounter() ([]string, error) {
return []string{"DHCP Server"}, nil
}
func (c *Collector) Close() error {
return nil
}
func (c *Collector) Build() error {
c.packetsReceivedTotal = prometheus.NewDesc(
prometheus.BuildFQName(types.Namespace, Name, "packets_received_total"),
"Total number of packets received by the DHCP server (PacketsReceivedTotal)",
nil,
nil,
)
c.duplicatesDroppedTotal = prometheus.NewDesc(
prometheus.BuildFQName(types.Namespace, Name, "duplicates_dropped_total"),
"Total number of duplicate packets received by the DHCP server (DuplicatesDroppedTotal)",
nil,
nil,
)
c.packetsExpiredTotal = prometheus.NewDesc(
prometheus.BuildFQName(types.Namespace, Name, "packets_expired_total"),
"Total number of packets expired in the DHCP server message queue (PacketsExpiredTotal)",
nil,
nil,
)
c.activeQueueLength = prometheus.NewDesc(
prometheus.BuildFQName(types.Namespace, Name, "active_queue_length"),
"Number of packets in the processing queue of the DHCP server (ActiveQueueLength)",
nil,
nil,
)
c.conflictCheckQueueLength = prometheus.NewDesc(
prometheus.BuildFQName(types.Namespace, Name, "conflict_check_queue_length"),
"Number of packets in the DHCP server queue waiting on conflict detection (ping). (ConflictCheckQueueLength)",
nil,
nil,
)
c.discoversTotal = prometheus.NewDesc(
prometheus.BuildFQName(types.Namespace, Name, "discovers_total"),
"Total DHCP Discovers received by the DHCP server (DiscoversTotal)",
nil,
nil,
)
c.offersTotal = prometheus.NewDesc(
prometheus.BuildFQName(types.Namespace, Name, "offers_total"),
"Total DHCP Offers sent by the DHCP server (OffersTotal)",
nil,
nil,
)
c.requestsTotal = prometheus.NewDesc(
prometheus.BuildFQName(types.Namespace, Name, "requests_total"),
"Total DHCP Requests received by the DHCP server (RequestsTotal)",
nil,
nil,
)
c.informsTotal = prometheus.NewDesc(
prometheus.BuildFQName(types.Namespace, Name, "informs_total"),
"Total DHCP Informs received by the DHCP server (InformsTotal)",
nil,
nil,
)
c.acksTotal = prometheus.NewDesc(
prometheus.BuildFQName(types.Namespace, Name, "acks_total"),
"Total DHCP Acks sent by the DHCP server (AcksTotal)",
nil,
nil,
)
c.nACKsTotal = prometheus.NewDesc(
prometheus.BuildFQName(types.Namespace, Name, "nacks_total"),
"Total DHCP Nacks sent by the DHCP server (NacksTotal)",
nil,
nil,
)
c.declinesTotal = prometheus.NewDesc(
prometheus.BuildFQName(types.Namespace, Name, "declines_total"),
"Total DHCP Declines received by the DHCP server (DeclinesTotal)",
nil,
nil,
)
c.releasesTotal = prometheus.NewDesc(
prometheus.BuildFQName(types.Namespace, Name, "releases_total"),
"Total DHCP Releases received by the DHCP server (ReleasesTotal)",
nil,
nil,
)
c.offerQueueLength = prometheus.NewDesc(
prometheus.BuildFQName(types.Namespace, Name, "offer_queue_length"),
"Number of packets in the offer queue of the DHCP server (OfferQueueLength)",
nil,
nil,
)
c.deniedDueToMatch = prometheus.NewDesc(
prometheus.BuildFQName(types.Namespace, Name, "denied_due_to_match_total"),
"Total number of DHCP requests denied, based on matches from the Deny list (DeniedDueToMatch)",
nil,
nil,
)
c.deniedDueToNonMatch = prometheus.NewDesc(
prometheus.BuildFQName(types.Namespace, Name, "denied_due_to_nonmatch_total"),
"Total number of DHCP requests denied, based on non-matches from the Allow list (DeniedDueToNonMatch)",
nil,
nil,
)
c.failoverBndupdSentTotal = prometheus.NewDesc(
prometheus.BuildFQName(types.Namespace, Name, "failover_bndupd_sent_total"),
"Number of DHCP fail over Binding Update messages sent (FailoverBndupdSentTotal)",
nil,
nil,
)
c.failoverBndupdReceivedTotal = prometheus.NewDesc(
prometheus.BuildFQName(types.Namespace, Name, "failover_bndupd_received_total"),
"Number of DHCP fail over Binding Update messages received (FailoverBndupdReceivedTotal)",
nil,
nil,
)
c.failoverBndackSentTotal = prometheus.NewDesc(
prometheus.BuildFQName(types.Namespace, Name, "failover_bndack_sent_total"),
"Number of DHCP fail over Binding Ack messages sent (FailoverBndackSentTotal)",
nil,
nil,
)
c.failoverBndackReceivedTotal = prometheus.NewDesc(
prometheus.BuildFQName(types.Namespace, Name, "failover_bndack_received_total"),
"Number of DHCP fail over Binding Ack messages received (FailoverBndackReceivedTotal)",
nil,
nil,
)
c.failoverBndupdPendingOutboundQueue = prometheus.NewDesc(
prometheus.BuildFQName(types.Namespace, Name, "failover_bndupd_pending_in_outbound_queue"),
"Number of pending outbound DHCP fail over Binding Update messages (FailoverBndupdPendingOutboundQueue)",
nil,
nil,
)
c.failoverTransitionsCommunicationInterruptedState = prometheus.NewDesc(
prometheus.BuildFQName(types.Namespace, Name, "failover_transitions_communicationinterrupted_state_total"),
"Total number of transitions into COMMUNICATION INTERRUPTED state (FailoverTransitionsCommunicationinterruptedState)",
nil,
nil,
)
c.failoverTransitionsPartnerDownState = prometheus.NewDesc(
prometheus.BuildFQName(types.Namespace, Name, "failover_transitions_partnerdown_state_total"),
"Total number of transitions into PARTNER DOWN state (FailoverTransitionsPartnerdownState)",
nil,
nil,
)
c.failoverTransitionsRecoverState = prometheus.NewDesc(
prometheus.BuildFQName(types.Namespace, Name, "failover_transitions_recover_total"),
"Total number of transitions into RECOVER state (FailoverTransitionsRecoverState)",
nil,
nil,
)
c.failoverBndupdDropped = prometheus.NewDesc(
prometheus.BuildFQName(types.Namespace, Name, "failover_bndupd_dropped_total"),
"Total number of DHCP fail over Binding Updates dropped (FailoverBndupdDropped)",
nil,
nil,
)
return nil
}
// dhcpPerf represents perflib metrics from the DHCP Server class.
// While the names of a number of perflib metrics suggest a rate is being returned (e.g. Packets Received/sec),
// perflib actually returns a counter, hence the "Total" suffix in some of the field names.
type dhcpPerf struct {
PacketsReceivedTotal float64 `perflib:"Packets Received/sec"`
DuplicatesDroppedTotal float64 `perflib:"Duplicates Dropped/sec"`
PacketsExpiredTotal float64 `perflib:"Packets Expired/sec"`
ActiveQueueLength float64 `perflib:"Active Queue Length"`
ConflictCheckQueueLength float64 `perflib:"Conflict Check Queue Length"`
DiscoversTotal float64 `perflib:"Discovers/sec"`
OffersTotal float64 `perflib:"Offers/sec"`
RequestsTotal float64 `perflib:"Requests/sec"`
InformsTotal float64 `perflib:"Informs/sec"`
AcksTotal float64 `perflib:"Acks/sec"`
NacksTotal float64 `perflib:"Nacks/sec"`
DeclinesTotal float64 `perflib:"Declines/sec"`
ReleasesTotal float64 `perflib:"Releases/sec"`
DeniedDueToMatch float64 `perflib:"Denied due to match."`
DeniedDueToNonMatch float64 `perflib:"Denied due to nonmatch."`
OfferQueueLength float64 `perflib:"Offer Queue Length"`
FailoverBndupdSentTotal float64 `perflib:"Failover: BndUpd sent/sec."`
FailoverBndupdReceivedTotal float64 `perflib:"Failover: BndUpd received/sec."`
FailoverBndackSentTotal float64 `perflib:"Failover: BndAck sent/sec."`
FailoverBndackReceivedTotal float64 `perflib:"Failover: BndAck received/sec."`
FailoverBndupdPendingOutboundQueue float64 `perflib:"Failover: BndUpd pending in outbound queue."`
FailoverTransitionsCommunicationinterruptedState float64 `perflib:"Failover: Transitions to COMMUNICATION-INTERRUPTED state."`
FailoverTransitionsPartnerdownState float64 `perflib:"Failover: Transitions to PARTNER-DOWN state."`
FailoverTransitionsRecoverState float64 `perflib:"Failover: Transitions to RECOVER state."`
FailoverBndupdDropped float64 `perflib:"Failover: BndUpd Dropped."`
}
func (c *Collector) Collect(ctx *types.ScrapeContext, ch chan<- prometheus.Metric) error {
var dhcpPerfs []dhcpPerf
if err := perflib.UnmarshalObject(ctx.PerfObjects["DHCP Server"], &dhcpPerfs, c.logger); err != nil {
return err
}
if len(dhcpPerfs) == 0 {
// Nothing to collect; avoid indexing an empty slice below.
return nil
}
ch <- prometheus.MustNewConstMetric(
c.packetsReceivedTotal,
prometheus.CounterValue,
dhcpPerfs[0].PacketsReceivedTotal,
)
ch <- prometheus.MustNewConstMetric(
c.duplicatesDroppedTotal,
prometheus.CounterValue,
dhcpPerfs[0].DuplicatesDroppedTotal,
)
ch <- prometheus.MustNewConstMetric(
c.packetsExpiredTotal,
prometheus.CounterValue,
dhcpPerfs[0].PacketsExpiredTotal,
)
ch <- prometheus.MustNewConstMetric(
c.activeQueueLength,
prometheus.GaugeValue,
dhcpPerfs[0].ActiveQueueLength,
)
ch <- prometheus.MustNewConstMetric(
c.conflictCheckQueueLength,
prometheus.GaugeValue,
dhcpPerfs[0].ConflictCheckQueueLength,
)
ch <- prometheus.MustNewConstMetric(
c.discoversTotal,
prometheus.CounterValue,
dhcpPerfs[0].DiscoversTotal,
)
ch <- prometheus.MustNewConstMetric(
c.offersTotal,
prometheus.CounterValue,
dhcpPerfs[0].OffersTotal,
)
ch <- prometheus.MustNewConstMetric(
c.requestsTotal,
prometheus.CounterValue,
dhcpPerfs[0].RequestsTotal,
)
ch <- prometheus.MustNewConstMetric(
c.informsTotal,
prometheus.CounterValue,
dhcpPerfs[0].InformsTotal,
)
ch <- prometheus.MustNewConstMetric(
c.acksTotal,
prometheus.CounterValue,
dhcpPerfs[0].AcksTotal,
)
ch <- prometheus.MustNewConstMetric(
c.nACKsTotal,
prometheus.CounterValue,
dhcpPerfs[0].NacksTotal,
)
ch <- prometheus.MustNewConstMetric(
c.declinesTotal,
prometheus.CounterValue,
dhcpPerfs[0].DeclinesTotal,
)
ch <- prometheus.MustNewConstMetric(
c.releasesTotal,
prometheus.CounterValue,
dhcpPerfs[0].ReleasesTotal,
)
ch <- prometheus.MustNewConstMetric(
c.offerQueueLength,
prometheus.GaugeValue,
dhcpPerfs[0].OfferQueueLength,
)
ch <- prometheus.MustNewConstMetric(
c.deniedDueToMatch,
prometheus.CounterValue,
dhcpPerfs[0].DeniedDueToMatch,
)
ch <- prometheus.MustNewConstMetric(
c.deniedDueToNonMatch,
prometheus.CounterValue,
dhcpPerfs[0].DeniedDueToNonMatch,
)
ch <- prometheus.MustNewConstMetric(
c.failoverBndupdSentTotal,
prometheus.CounterValue,
dhcpPerfs[0].FailoverBndupdSentTotal,
)
ch <- prometheus.MustNewConstMetric(
c.failoverBndupdReceivedTotal,
prometheus.CounterValue,
dhcpPerfs[0].FailoverBndupdReceivedTotal,
)
ch <- prometheus.MustNewConstMetric(
c.failoverBndackSentTotal,
prometheus.CounterValue,
dhcpPerfs[0].FailoverBndackSentTotal,
)
ch <- prometheus.MustNewConstMetric(
c.failoverBndackReceivedTotal,
prometheus.CounterValue,
dhcpPerfs[0].FailoverBndackReceivedTotal,
)
ch <- prometheus.MustNewConstMetric(
c.failoverBndupdPendingOutboundQueue,
prometheus.GaugeValue,
dhcpPerfs[0].FailoverBndupdPendingOutboundQueue,
)
ch <- prometheus.MustNewConstMetric(
c.failoverTransitionsCommunicationInterruptedState,
prometheus.CounterValue,
dhcpPerfs[0].FailoverTransitionsCommunicationinterruptedState,
)
ch <- prometheus.MustNewConstMetric(
c.failoverTransitionsPartnerDownState,
prometheus.CounterValue,
dhcpPerfs[0].FailoverTransitionsPartnerdownState,
)
ch <- prometheus.MustNewConstMetric(
c.failoverTransitionsRecoverState,
prometheus.CounterValue,
dhcpPerfs[0].FailoverTransitionsRecoverState,
)
ch <- prometheus.MustNewConstMetric(
c.failoverBndupdDropped,
prometheus.CounterValue,
dhcpPerfs[0].FailoverBndupdDropped,
)
return nil
}
```
|
Toot may refer to:
Places
Toot or Tut, Markazi, a village in Iran
Mount Victoria Tunnel, a road tunnel in Wellington, New Zealand, colloquially known as "Toot Tunnel"
Toot Oilfield, an oil field in northern Pakistan
Toot Sahib, a temple in Amritsar, Punjab, India
People with the name
Don Cahoon (born 1949), American retired college ice hockey coach, nicknamed "Toot"
Madelyn Dunham (1922–2008), grandmother of U.S. president Barack Obama, nicknamed "Toot"
Fictional characters
Toot, the title character of Toot the Tiny Tugboat, a British children's animated television series
Toot, in Holly Hobbie's Toot & Puddle children's book series
Toot Braunstein, in the animated series Drawn Together
Music
"Toot", a song by Brant Bjork from the 1999 album Jalamanta
Other uses
Toot, to "pass gas"; flatulence
Toot, to play a horn (instrument)
Toot, the historical term for messages posted on Mastodon (social network)
Toot, to sound an automobile or vehicle horn
See also
Toon (disambiguation)
Toos (disambiguation)
Toot Hill (disambiguation)
Toot Toot (disambiguation)
Tooting (disambiguation)
Toots (disambiguation)
|
```javascript
import noop from './noop';
describe('noop()', () => {
it('returns undefined', () => {
expect(noop()).toBe(undefined);
});
});
```
|
```c
/**
* @file lv_imgfont.h
*
*/
#ifndef LV_IMGFONT_H
#define LV_IMGFONT_H
#ifdef __cplusplus
extern "C" {
#endif
/*********************
* INCLUDES
*********************/
#include "../../lv_conf_internal.h"
#include "../../font/lv_font.h"
#if LV_USE_IMGFONT
/*********************
* DEFINES
*********************/
/**********************
* TYPEDEFS
**********************/
/* Gets the image path name for a character */
typedef const void * (*lv_imgfont_get_path_cb_t)(const lv_font_t * font,
uint32_t unicode, uint32_t unicode_next,
int32_t * offset_y, void * user_data);
/**********************
* GLOBAL PROTOTYPES
**********************/
/**
* Creates an image font with the specified parameters.
* @param height font size
* @param path_cb a function to get the image path name of a character.
* @param user_data pointer to user data
* @return pointer to the new imgfont, or NULL on creation error.
*/
lv_font_t * lv_imgfont_create(uint16_t height, lv_imgfont_get_path_cb_t path_cb, void * user_data);
/**
* Destroy an image font that was previously created.
* @param font pointer to the image font handle.
*/
void lv_imgfont_destroy(lv_font_t * font);
/**********************
* MACROS
**********************/
#endif /*LV_USE_IMGFONT*/
#ifdef __cplusplus
} /*extern "C"*/
#endif
#endif /* LV_IMGFONT_H */
```
|
```yaml
periodics:
- cron: "54 * * * *" # Every hour at 54 minutes past the hour
name: ci-test-infra-branchprotector
cluster: k8s-infra-prow-build-trusted
labels:
app: branchprotector
decorate: true
decoration_config:
timeout: 5h
extra_refs:
- org: kubernetes
repo: test-infra
base_ref: master
spec:
containers:
- name: branchprotector
image: us-docker.pkg.dev/k8s-infra-prow/images/branchprotector:v20240802-66b115076
command:
- branchprotector
args:
- --config-path=config/prow/config.yaml
- --job-config-path=config/jobs
- --github-token-path=/etc/github/token
- --confirm
- --github-endpoint=path_to_url
- --github-endpoint=path_to_url
- --github-hourly-tokens=1000 # Up the rate limit from default (300) to 1000
volumeMounts:
- name: github
mountPath: /etc/github
readOnly: true
volumes:
- name: github
secret:
secretName: k8s-github-robot-github-token
annotations:
testgrid-num-failures-to-alert: '6'
testgrid-alert-stale-results-hours: '12'
testgrid-dashboards: sig-testing-misc
testgrid-tab-name: branchprotector
testgrid-alert-email: kubernetes-sig-testing-alerts@googlegroups.com
description: Runs Prow's branchprotector to apply configured GitHub status context requirements and merge policies.
```
|
```cpp
//===-- llvm/Remarks/RemarkFormat.h - The format of remarks -----*- C++ -*-===//
//
// See path_to_url for license information.
//
//===----------------------------------------------------------------------===//
//
// This file defines utilities to deal with the format of remarks.
//
//===----------------------------------------------------------------------===//
#ifndef LLVM_REMARKS_REMARKFORMAT_H
#define LLVM_REMARKS_REMARKFORMAT_H
#include "llvm/ADT/StringRef.h"
#include "llvm/Support/Error.h"
namespace llvm {
namespace remarks {
constexpr StringLiteral Magic("REMARKS");
/// The format used for serializing/deserializing remarks.
enum class Format { Unknown, YAML, YAMLStrTab, Bitstream };
/// Parse and validate a string for the remark format.
Expected<Format> parseFormat(StringRef FormatStr);
/// Parse and validate a magic number to a remark format.
Expected<Format> magicToFormat(StringRef Magic);
} // end namespace remarks
} // end namespace llvm
#endif // LLVM_REMARKS_REMARKFORMAT_H
```
|
Wyre was a parliamentary constituency in the Wyre district of Lancashire. It returned one Member of Parliament (MP) to the House of Commons of the Parliament of the United Kingdom from 1983 until it was abolished for the 1997 general election. It was then partially replaced by the new constituency of Lancaster and Wyre.
Boundaries
The Borough of Wyre wards of Bailey, Bourne, Breck, Carleton, Cleveleys Park, Hambleton, Hardhorn, High Cross, Jubilee, Mount, Norcross, Park, Pharos, Preesall, Rossall, Staina, Tithebarn, Victoria, and Warren.
Members of Parliament
Elections
Elections in the 1980s
Elections in the 1990s
See also
List of parliamentary constituencies in Lancashire
Notes and references
Parliamentary constituencies in North West England (historic)
Constituencies of the Parliament of the United Kingdom established in 1983
Constituencies of the Parliament of the United Kingdom disestablished in 1997
Borough of Wyre
|
Serpent Lake is a 1,103-acre lake in Crosby and Deerwood, Crow Wing County, in the U.S. state of Minnesota. Fish species present include black bullhead, bluegill, brown bullhead, largemouth bass, northern pike, pumpkinseed, rock bass, smallmouth bass, walleye, yellow bullhead, and yellow perch. The name Serpent Lake is an English translation of the lake's Ojibwe-language name.
See also
List of lakes in Minnesota
References
Lakes of Minnesota
Lakes of Crow Wing County, Minnesota
|
```yaml
---
- name: Configure Jenkins Port
lineinfile: dest=/etc/default/jenkins regexp=^HTTP_PORT= line=HTTP_PORT={{port}}
register: config_changed
- name: Restart jenkins now
service: name=jenkins state=restarted
when: config_changed.changed
- name: Configure Jenkins Prefix
when: prefix is defined
lineinfile: dest=/etc/default/jenkins regexp=^PREFIX= line=PREFIX={{prefix}}
- name: Configure Jenkins E-mail
when: email is defined
template: src=hudson.tasks.Mailer.xml.j2 dest={{ jenkins_lib }}/hudson.tasks.Mailer.xml owner=jenkins group=jenkins mode=0644
```
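A more idiomatic variant of the restart logic above uses a handler notified by the port task, rather than registering the task result and checking `changed`. The sketch below is hypothetical (not part of the playbook above) and assumes the same `/etc/default/jenkins` path and `port` variable:

```yaml
# tasks file (hypothetical handler-based variant)
- name: Configure Jenkins Port
  lineinfile:
    dest: /etc/default/jenkins
    regexp: '^HTTP_PORT='
    line: 'HTTP_PORT={{ port }}'
  notify: Restart jenkins

# handlers file
- name: Restart jenkins
  service:
    name: jenkins
    state: restarted
```

Handlers run once at the end of the play even if notified by several tasks, which avoids redundant restarts when multiple configuration lines change.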
|
Kittisak Rawangpa (born January 3, 1975) is a Thai former footballer. He played for the Thailand national football team from 1997 to 2010.
International career
On the back of performing extremely well in the Thailand Premier League, Kittisak was called up to the full national side in coach Peter Reid's first squad announcement. He was called up with 35 other players to the 2008 T&T Cup hosted by Vietnam.
Kittisak was a member of the victorious T&T Cup 2008 winning squad.
He appeared for Thailand in eleven qualifying matches for the 2002 FIFA World Cup.
Honours
Player
International
Thailand
Asian Games: Fourth place (1): 2002
ASEAN Football Championship: Champions (2): 2000, 2002; Runners-up: 2007, 2008
SEA Games: Gold medal (1): 1997
T&T Cup: Winner (1): 2008
Queen's Cup: Winner (1): 2010
References
External links
1975 births
Living people
2000 AFC Asian Cup players
Expatriate men's footballers in Vietnam
Men's association football goalkeepers
Footballers at the 2002 Asian Games
SEA Games medalists in football
Competitors at the 1997 SEA Games
Thai expatriate sportspeople in Vietnam
|
```javascript
const assert = require('assert');
const { BucketInfo, BackendInfo } = require('arsenal').models;
const DummyRequest = require('../DummyRequest');
const { DummyRequestLogger } = require('../helpers');
const locationConstraintCheck
= require('../../../lib/api/apiUtils/object/locationConstraintCheck');
const memLocation = 'scality-internal-mem';
const fileLocation = 'scality-internal-file';
const bucketName = 'nameOfBucket';
const owner = 'canonicalID';
const ownerDisplayName = 'bucketOwner';
const testDate = new Date().toJSON();
const locationConstraint = fileLocation;
const namespace = 'default';
const objectKey = 'someobject';
const postBody = Buffer.from('I am a body', 'utf8');
const log = new DummyRequestLogger();
const testBucket = new BucketInfo(bucketName, owner, ownerDisplayName,
testDate, null, null, null, null, null, null, locationConstraint);
function createTestRequest(locationConstraint) {
const testRequest = new DummyRequest({
bucketName,
namespace,
objectKey,
headers: { 'x-amz-meta-scal-location-constraint': locationConstraint },
url: `/${bucketName}/${objectKey}`,
parsedHost: 'localhost',
}, postBody);
return testRequest;
}
describe('Location Constraint Check', () => {
it('should return error if controlling location constraint is ' +
'not valid', done => {
const backendInfoObj = locationConstraintCheck(
createTestRequest('fail-region'), null, testBucket, log);
assert.strictEqual(backendInfoObj.err.code, 400,
'Expected "Invalid Argument" code error');
assert(backendInfoObj.err.is.InvalidArgument, 'Expected "Invalid ' +
'Argument" error');
done();
});
it('should return instance of BackendInfo with correct ' +
'locationConstraints', done => {
const backendInfoObj = locationConstraintCheck(
createTestRequest(memLocation), null, testBucket, log);
assert.strictEqual(backendInfoObj.err, null, 'Expected success ' +
`but got error ${backendInfoObj.err}`);
assert.strictEqual(typeof backendInfoObj.controllingLC, 'string');
assert.equal(backendInfoObj.backendInfo instanceof BackendInfo,
true);
assert.strictEqual(backendInfoObj.
backendInfo.getObjectLocationConstraint(), memLocation);
assert.strictEqual(backendInfoObj.
backendInfo.getBucketLocationConstraint(), fileLocation);
assert.strictEqual(backendInfoObj.backendInfo.getRequestEndpoint(),
'localhost');
done();
});
});
```
|
```scss
/** THIS FILE IS AUTOGENERATED do not modify it manually. See generateDefaultThemeSassFiles.js. New slots should be added to the appropriate interfaces and defaults files. */
$ms-color-themeDarker: "[theme:themeDarker, default: #004578]";
$ms-color-themeDark: "[theme:themeDark, default: #005a9e]";
$ms-color-themeDarkAlt: "[theme:themeDarkAlt, default: #106ebe]";
$ms-color-themePrimary: "[theme:themePrimary, default: #0078d4]";
$ms-color-themeSecondary: "[theme:themeSecondary, default: #2b88d8]";
$ms-color-themeTertiary: "[theme:themeTertiary, default: #71afe5]";
$ms-color-themeLight: "[theme:themeLight, default: #c7e0f4]";
$ms-color-themeLighter: "[theme:themeLighter, default: #deecf9]";
$ms-color-themeLighterAlt: "[theme:themeLighterAlt, default: #eff6fc]";
$ms-color-black: "[theme:black, default: #000000]";
$ms-color-blackTranslucent40: "[theme:blackTranslucent40, default: rgba(0,0,0,.4)]";
$ms-color-neutralDark: "[theme:neutralDark, default: #201f1e]";
$ms-color-neutralPrimary: "[theme:neutralPrimary, default: #323130]";
$ms-color-neutralPrimaryAlt: "[theme:neutralPrimaryAlt, default: #3b3a39]";
$ms-color-neutralSecondary: "[theme:neutralSecondary, default: #605e5c]";
$ms-color-neutralSecondaryAlt: "[theme:neutralSecondaryAlt, default: #8a8886]";
$ms-color-neutralTertiary: "[theme:neutralTertiary, default: #a19f9d]";
$ms-color-neutralTertiaryAlt: "[theme:neutralTertiaryAlt, default: #c8c6c4]";
$ms-color-neutralQuaternary: "[theme:neutralQuaternary, default: #d2d0ce]";
$ms-color-neutralQuaternaryAlt: "[theme:neutralQuaternaryAlt, default: #e1dfdd]";
$ms-color-neutralLight: "[theme:neutralLight, default: #edebe9]";
$ms-color-neutralLighter: "[theme:neutralLighter, default: #f3f2f1]";
$ms-color-neutralLighterAlt: "[theme:neutralLighterAlt, default: #faf9f8]";
$ms-color-accent: "[theme:accent, default: #0078d4]";
$ms-color-white: "[theme:white, default: #ffffff]";
$ms-color-whiteTranslucent40: "[theme:whiteTranslucent40, default: rgba(255,255,255,.4)]";
$ms-color-yellowDark: "[theme:yellowDark, default: #d29200]";
$ms-color-yellow: "[theme:yellow, default: #ffb900]";
$ms-color-yellowLight: "[theme:yellowLight, default: #fff100]";
$ms-color-orange: "[theme:orange, default: #d83b01]";
$ms-color-orangeLight: "[theme:orangeLight, default: #ea4300]";
$ms-color-orangeLighter: "[theme:orangeLighter, default: #ff8c00]";
$ms-color-redDark: "[theme:redDark, default: #a4262c]";
$ms-color-red: "[theme:red, default: #e81123]";
$ms-color-magentaDark: "[theme:magentaDark, default: #5c005c]";
$ms-color-magenta: "[theme:magenta, default: #b4009e]";
$ms-color-magentaLight: "[theme:magentaLight, default: #e3008c]";
$ms-color-purpleDark: "[theme:purpleDark, default: #32145a]";
$ms-color-purple: "[theme:purple, default: #5c2d91]";
$ms-color-purpleLight: "[theme:purpleLight, default: #b4a0ff]";
$ms-color-blueDark: "[theme:blueDark, default: #002050]";
$ms-color-blueMid: "[theme:blueMid, default: #00188f]";
$ms-color-blue: "[theme:blue, default: #0078d4]";
$ms-color-blueLight: "[theme:blueLight, default: #00bcf2]";
$ms-color-tealDark: "[theme:tealDark, default: #004b50]";
$ms-color-teal: "[theme:teal, default: #008272]";
$ms-color-tealLight: "[theme:tealLight, default: #00b294]";
$ms-color-greenDark: "[theme:greenDark, default: #004b1c]";
$ms-color-green: "[theme:green, default: #107c10]";
$ms-color-greenLight: "[theme:greenLight, default: #bad80a]";
```
|
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<link rel="shortcut icon" type="image/png" href="../assets/img/favicon.ico">
<link rel="stylesheet" href="../assets/lib/cssgrids.css">
<link rel="stylesheet" href="../assets/css/main.css" id="site_styles">
<script src="../assets/lib/yui-min.js"></script>
<script src="../assets/js/api-prettify.js"></script>
<script src="../assets/js/api-filter.js"></script>
<script src="../assets/js/api-list.js"></script>
<script src="../assets/js/api-search.js"></script>
<script src="../assets/js/api-docs.js"></script>
<title>nunuStudio PhysicsObject</title>
</head>
<body class="yui3-skin-sam">
<div id="doc">
<div id="hd" class="yui3-g header">
<div class="yui3-u-3-4">
<h1><a href="../index.html"><img src="../assets/img/logo.png" title=""></a></h1>
</div>
</div>
<div id="bd" class="yui3-g">
<div class="yui3-u-1-4">
<div id="docs-sidebar" class="sidebar">
<div id="api-list">
<h2 class="off-left">APIs</h2>
<div id="api-tabview" class="tabview">
<div id="api-tabview-filter">
<input type="search" id="api-filter" placeholder="Type to filter APIs">
</div>
<ul class="tabs">
<li><a href="#api-classes">Classes</a></li>
<li><a href="#api-modules">Modules</a></li>
</ul>
<div id="api-tabview-panel">
<ul id="api-classes" class="apis classes">
<li><a href="../classes/AfterimagePass.html">AfterimagePass</a></li>
<li><a href="../classes/AmbientLight.html">AmbientLight</a></li>
<li><a href="../classes/AnimationMixer.html">AnimationMixer</a></li>
<li><a href="../classes/AnimationTimer.html">AnimationTimer</a></li>
<li><a href="../classes/App.html">App</a></li>
<li><a href="../classes/ARHandler.html">ARHandler</a></li>
<li><a href="../classes/ArraybufferUtils.html">ArraybufferUtils</a></li>
<li><a href="../classes/Audio.html">Audio</a></li>
<li><a href="../classes/AudioEmitter.html">AudioEmitter</a></li>
<li><a href="../classes/AudioLoader.html">AudioLoader</a></li>
<li><a href="../classes/Base64Utils.html">Base64Utils</a></li>
<li><a href="../classes/BaseNode.html">BaseNode</a></li>
<li><a href="../classes/BillboardGroup.html">BillboardGroup</a></li>
<li><a href="../classes/BloomPass.html">BloomPass</a></li>
<li><a href="../classes/BokehPass.html">BokehPass</a></li>
<li><a href="../classes/BufferUtils.html">BufferUtils</a></li>
<li><a href="../classes/ByteArrayUtils.html">ByteArrayUtils</a></li>
<li><a href="../classes/CanvasSprite.html">CanvasSprite</a></li>
<li><a href="../classes/CanvasTexture.html">CanvasTexture</a></li>
<li><a href="../classes/CapsuleBufferGeometry.html">CapsuleBufferGeometry</a></li>
<li><a href="../classes/ColorifyPass.html">ColorifyPass</a></li>
<li><a href="../classes/CompressedTexture.html">CompressedTexture</a></li>
<li><a href="../classes/CopyPass.html">CopyPass</a></li>
<li><a href="../classes/CSS3DObject.html">CSS3DObject</a></li>
<li><a href="../classes/CSS3DRenderer.html">CSS3DRenderer</a></li>
<li><a href="../classes/CSS3DSprite.html">CSS3DSprite</a></li>
<li><a href="../classes/CubeCamera.html">CubeCamera</a></li>
<li><a href="../classes/CubeTexture.html">CubeTexture</a></li>
<li><a href="../classes/DataTexture.html">DataTexture</a></li>
<li><a href="../classes/DirectionalLight.html">DirectionalLight</a></li>
<li><a href="../classes/DirectionalLightCSM.html">DirectionalLightCSM</a></li>
<li><a href="../classes/DotScreenPass.html">DotScreenPass</a></li>
<li><a href="../classes/EffectComposer.html">EffectComposer</a></li>
<li><a href="../classes/EventManager.html">EventManager</a></li>
<li><a href="../classes/FileSystem.html">FileSystem</a></li>
<li><a href="../classes/FilmPass.html">FilmPass</a></li>
<li><a href="../classes/FirstPersonControls.html">FirstPersonControls</a></li>
<li><a href="../classes/Fog.html">Fog</a></li>
<li><a href="../classes/Font.html">Font</a></li>
<li><a href="../classes/FontLoader.html">FontLoader</a></li>
<li><a href="../classes/FXAAPass.html">FXAAPass</a></li>
<li><a href="../classes/Gamepad.html">Gamepad</a></li>
<li><a href="../classes/GeometryLoader.html">GeometryLoader</a></li>
<li><a href="../classes/Group.html">Group</a></li>
<li><a href="../classes/Gyroscope.html">Gyroscope</a></li>
<li><a href="../classes/HemisphereLight.html">HemisphereLight</a></li>
<li><a href="../classes/HTMLView.html">HTMLView</a></li>
<li><a href="../classes/HueSaturationPass.html">HueSaturationPass</a></li>
<li><a href="../classes/Image.html">Image</a></li>
<li><a href="../classes/ImageLoader.html">ImageLoader</a></li>
<li><a href="../classes/InstancedMesh.html">InstancedMesh</a></li>
<li><a href="../classes/Key.html">Key</a></li>
<li><a href="../classes/Keyboard.html">Keyboard</a></li>
<li><a href="../classes/LegacyGeometryLoader.html">LegacyGeometryLoader</a></li>
<li><a href="../classes/LensFlare.html">LensFlare</a></li>
<li><a href="../classes/LightProbe.html">LightProbe</a></li>
<li><a href="../classes/LocalStorage.html">LocalStorage</a></li>
<li><a href="../classes/Material.html">Material</a></li>
<li><a href="../classes/MaterialLoader.html">MaterialLoader</a></li>
<li><a href="../classes/MathUtils.html">MathUtils</a></li>
<li><a href="../classes/Mesh.html">Mesh</a></li>
<li><a href="../classes/Model.html">Model</a></li>
<li><a href="../classes/Mouse.html">Mouse</a></li>
<li><a href="../classes/NodeScript.html">NodeScript</a></li>
<li><a href="../classes/Nunu.html">Nunu</a></li>
<li><a href="../classes/Object3D.html">Object3D</a></li>
<li><a href="../classes/ObjectLoader.html">ObjectLoader</a></li>
<li><a href="../classes/ObjectUtils.html">ObjectUtils</a></li>
<li><a href="../classes/OperationNode.html">OperationNode</a></li>
<li><a href="../classes/OrbitControls.html">OrbitControls</a></li>
<li><a href="../classes/OrthographicCamera.html">OrthographicCamera</a></li>
<li><a href="../classes/ParametricBufferGeometry.html">ParametricBufferGeometry</a></li>
<li><a href="../classes/ParticleDistributions.html">ParticleDistributions</a></li>
<li><a href="../classes/ParticleEmitter.html">ParticleEmitter</a></li>
<li><a href="../classes/ParticleEmitterControl.html">ParticleEmitterControl</a></li>
<li><a href="../classes/ParticleEmitterControlOptions.html">ParticleEmitterControlOptions</a></li>
<li><a href="../classes/ParticleGroup.html">ParticleGroup</a></li>
<li><a href="../classes/Pass.html">Pass</a></li>
<li><a href="../classes/PerspectiveCamera.html">PerspectiveCamera</a></li>
<li><a href="../classes/PhysicsGenerator.html">PhysicsGenerator</a></li>
<li><a href="../classes/PhysicsObject.html">PhysicsObject</a></li>
<li><a href="../classes/PointLight.html">PointLight</a></li>
<li><a href="../classes/PositionalAudio.html">PositionalAudio</a></li>
<li><a href="../classes/Program.html">Program</a></li>
<li><a href="../classes/PythonScript.html">PythonScript</a></li>
<li><a href="../classes/RectAreaLight.html">RectAreaLight</a></li>
<li><a href="../classes/RendererConfiguration.html">RendererConfiguration</a></li>
<li><a href="../classes/RendererState.html">RendererState</a></li>
<li><a href="../classes/RenderPass.html">RenderPass</a></li>
<li><a href="../classes/Resource.html">Resource</a></li>
<li><a href="../classes/ResourceManager.html">ResourceManager</a></li>
<li><a href="../classes/RoundedBoxBufferGeometry.html">RoundedBoxBufferGeometry</a></li>
<li><a href="../classes/Scene.html">Scene</a></li>
<li><a href="../classes/Script.html">Script</a></li>
<li><a href="../classes/ShaderAttribute.html">ShaderAttribute</a></li>
<li><a href="../classes/ShaderPass.html">ShaderPass</a></li>
<li><a href="../classes/ShaderUtils.html">ShaderUtils</a></li>
<li><a href="../classes/SimplexNoise.html">SimplexNoise</a></li>
<li><a href="../classes/Skeleton.html">Skeleton</a></li>
<li><a href="../classes/SkinnedMesh.html">SkinnedMesh</a></li>
<li><a href="../classes/Sky.html">Sky</a></li>
<li><a href="../classes/SobelPass.html">SobelPass</a></li>
<li><a href="../classes/SpineAnimation.html">SpineAnimation</a></li>
<li><a href="../classes/SpineTexture.html">SpineTexture</a></li>
<li><a href="../classes/SpotLight.html">SpotLight</a></li>
<li><a href="../classes/Sprite.html">Sprite</a></li>
<li><a href="../classes/SpriteSheetTexture.html">SpriteSheetTexture</a></li>
<li><a href="../classes/SSAONOHPass.html">SSAONOHPass</a></li>
<li><a href="../classes/SSAOPass.html">SSAOPass</a></li>
<li><a href="../classes/SSAOShader.html">SSAOShader</a></li>
<li><a href="../classes/TargetConfig.html">TargetConfig</a></li>
<li><a href="../classes/TechnicolorPass.html">TechnicolorPass</a></li>
<li><a href="../classes/TerrainBufferGeometry.html">TerrainBufferGeometry</a></li>
<li><a href="../classes/TextBitmap.html">TextBitmap</a></li>
<li><a href="../classes/TextFile.html">TextFile</a></li>
<li><a href="../classes/TextMesh.html">TextMesh</a></li>
<li><a href="../classes/TextSprite.html">TextSprite</a></li>
<li><a href="../classes/Texture.html">Texture</a></li>
<li><a href="../classes/TextureLoader.html">TextureLoader</a></li>
<li><a href="../classes/Timer.html">Timer</a></li>
<li><a href="../classes/TizenKeyboard.html">TizenKeyboard</a></li>
<li><a href="../classes/Tree.html">Tree</a></li>
<li><a href="../classes/TreeUtils.html">TreeUtils</a></li>
<li><a href="../classes/TwistModifier.html">TwistModifier</a></li>
<li><a href="../classes/TypedArrayHelper.html">TypedArrayHelper</a></li>
<li><a href="../classes/UnitConverter.html">UnitConverter</a></li>
<li><a href="../classes/UnrealBloomPass.html">UnrealBloomPass</a></li>
<li><a href="../classes/Video.html">Video</a></li>
<li><a href="../classes/VideoLoader.html">VideoLoader</a></li>
<li><a href="../classes/VideoStream.html">VideoStream</a></li>
<li><a href="../classes/VideoTexture.html">VideoTexture</a></li>
<li><a href="../classes/Viewport.html">Viewport</a></li>
<li><a href="../classes/VRHandler.html">VRHandler</a></li>
<li><a href="../classes/WebcamTexture.html">WebcamTexture</a></li>
<li><a href="../classes/WorkerPool.html">WorkerPool</a></li>
<li><a href="../classes/WorkerTask.html">WorkerTask</a></li>
<li><a href="../classes/{Object} ParticleGroupOptions.html">{Object} ParticleGroupOptions</a></li>
</ul>
<ul id="api-modules" class="apis modules">
<li><a href="../modules/Animation.html">Animation</a></li>
<li><a href="../modules/Animations.html">Animations</a></li>
<li><a href="../modules/Audio.html">Audio</a></li>
<li><a href="../modules/BinaryUtils.html">BinaryUtils</a></li>
<li><a href="../modules/Cameras.html">Cameras</a></li>
<li><a href="../modules/Controls.html">Controls</a></li>
<li><a href="../modules/Core.html">Core</a></li>
<li><a href="../modules/Files.html">Files</a></li>
<li><a href="../modules/Input.html">Input</a></li>
<li><a href="../modules/Lights.html">Lights</a></li>
<li><a href="../modules/Loaders.html">Loaders</a></li>
<li><a href="../modules/Meshes.html">Meshes</a></li>
<li><a href="../modules/Misc.html">Misc</a></li>
<li><a href="../modules/Particles.html">Particles</a></li>
<li><a href="../modules/Physics.html">Physics</a></li>
<li><a href="../modules/Postprocessing.html">Postprocessing</a></li>
<li><a href="../modules/Resources.html">Resources</a></li>
<li><a href="../modules/Runtime.html">Runtime</a></li>
<li><a href="../modules/Script.html">Script</a></li>
<li><a href="../modules/Sprite.html">Sprite</a></li>
<li><a href="../modules/Textures.html">Textures</a></li>
<li><a href="../modules/THREE.html">THREE</a></li>
<li><a href="../modules/Utils.html">Utils</a></li>
</ul>
</div>
</div>
</div>
</div>
</div>
<div class="yui3-u-3-4">
<!--<div id="api-options">
Show:
<label for="api-show-inherited">
<input type="checkbox" id="api-show-inherited" checked>
Inherited
</label>
<label for="api-show-protected">
<input type="checkbox" id="api-show-protected">
Protected
</label>
<label for="api-show-private">
<input type="checkbox" id="api-show-private">
Private
</label>
<label for="api-show-deprecated">
<input type="checkbox" id="api-show-deprecated">
Deprecated
</label>
</div>--> <div class="apidocs">
<div id="docs-main">
<div class="content">
<h1>PhysicsObject Class</h1>
<div class="box meta">
<div class="extends">
Extends <a href="../classes/Group.html" class="crosslink">Group</a>
</div>
Module: <a href="../modules/Physics.html">Physics</a>
</div>
<div class="box intro">
<p>Wrapper for cannon.js physics objects.</p>
<p>The editor includes tools to create cannon shapes from three.js geometry objects.</p>
<p>Documentation for cannon.js physics is available at <a href="path_to_url">schteppe.github.io/cannon.js/docs/</a></p>
</div>
<div id="classdocs" class="tabview">
<ul class="api-class-tabs">
<li class="api-class-tab index"><a href="#index">Index</a></li>
<li class="api-class-tab methods"><a href="#methods">Methods</a></li>
<li class="api-class-tab attrs"><a href="#attrs">Attributes</a></li>
</ul>
<div>
<div id="index" class="api-class-tabpanel index">
<h2 class="off-left">Item Index</h2>
<div class="index-section methods">
<h3>Methods</h3>
<ul class="index-list methods extends">
<li class="index-item method">
<a href="#method_addShape">addShape</a>
</li>
<li class="index-item method">
<a href="#method_initialize">initialize</a>
</li>
<li class="index-item method">
<a href="#method_update">update</a>
</li>
</ul>
</div>
<div class="index-section attrs">
<h3>Attributes</h3>
<ul class="index-list attrs extends">
<li class="index-item attr">
<a href="#attr_body">body</a>
</li>
<li class="index-item attr">
<a href="#attr_LOCAL">LOCAL</a>
</li>
<li class="index-item attr">
<a href="#attr_mode">mode</a>
</li>
<li class="index-item attr">
<a href="#attr_world">world</a>
</li>
<li class="index-item attr">
<a href="#attr_WORLD">WORLD</a>
</li>
</ul>
</div>
</div>
<div id="methods" class="api-class-tabpanel">
<h2 class="off-left">Methods</h2>
<div id="method_addShape" class="method item">
<h3 class="name"><code>addShape</code></h3>
<div class="args">
<span class="paren">(</span><ul class="args-list inline commas">
<li class="arg">
<code>shape</code>
</li>
</ul><span class="paren">)</span>
</div>
<div class="meta">
<p>
</p>
</div>
<div class="description">
<p>Add shape to physics object body.</p>
</div>
<div class="params">
<h4>Parameters:</h4>
<ul class="params-list">
<li class="param">
<code class="param-name">shape</code>
<span class="type">Shape</span>
<div class="param-description">
</div>
</li>
</ul>
</div>
</div>
<div id="method_initialize" class="method item">
<h3 class="name"><code>initialize</code></h3>
<span class="paren">()</span>
<div class="meta">
<p>
</p>
</div>
<div class="description">
<p>Initialize the physics object and add it to the scene physics world.</p>
</div>
</div>
<div id="method_update" class="method item">
<h3 class="name"><code>update</code></h3>
<span class="paren">()</span>
<div class="meta">
<p>
</p>
</div>
<div class="description">
<p>Update object position and rotation based on cannon.js body.</p>
</div>
</div>
</div>
<div id="attrs" class="api-class-tabpanel">
<h2 class="off-left">Attributes</h2>
<div id="attr_body" class="attr item">
<a name="config_body"></a>
<h3 class="name"><code>body</code></h3>
<span class="type">Body</span>
<div class="meta">
<p>
</p>
</div>
<div class="description">
<p>Physics body contains the following attributes:</p>
<ul>
<li>position Vec3</li>
<li>velocity Vec3</li>
<li>torque Vec3</li>
<li>angularVelocity Vec3</li>
<li>quaternion Quaternion</li>
<li>mass Number</li>
<li>material Material</li>
<li>type Number</li>
<li>linearDamping Number</li>
<li>angularDamping Number</li>
<li>allowSleep Boolean</li>
<li>sleepSpeedLimit Number</li>
<li>sleepTimeLimit Number</li>
<li>collisionFilterGroup Number</li>
<li>collisionFilterMask Number</li>
<li>fixedRotation Boolean</li>
<li>shape Array</li>
</ul>
</div>
</div>
<div id="attr_LOCAL" class="attr item">
<a name="config_LOCAL"></a>
<h3 class="name"><code>LOCAL</code></h3>
<span class="type">Number</span>
<span class="flag static">static</span>
<div class="meta">
<p>
</p>
</div>
<div class="description">
<p>The position of the object is copied directly from the body.</p>
<p>Ignores the world transforms inherited from parent objects.</p>
<p>Faster but the physics object should not carry any world transformations.</p>
</div>
</div>
<div id="attr_mode" class="attr item">
<a name="config_mode"></a>
<h3 class="name"><code>mode</code></h3>
<span class="type">Number</span>
<div class="meta">
<p>
</p>
</div>
<div class="description">
<p>Physics object position mode, indicates how coordinates from the physics engine are transformed into object coordinates.</p>
</div>
</div>
<div id="attr_world" class="attr item">
<a name="config_world"></a>
<h3 class="name"><code>world</code></h3>
<span class="type">World</span>
<div class="meta">
<p>
</p>
</div>
<div class="description">
<p>Reference to the physics world.</p>
</div>
</div>
<div id="attr_WORLD" class="attr item">
<a name="config_WORLD"></a>
<h3 class="name"><code>WORLD</code></h3>
<span class="type">Number</span>
<span class="flag static">static</span>
<div class="meta">
<p>
</p>
</div>
<div class="description">
<p>The position of the object is adjusted to follow the parent object transformation.</p>
<p>This mode should be used for objects placed inside others.</p>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</body>
</html>
```
|
```go
// Licensed to the Apache Software Foundation (ASF) under one or more
// contributor license agreements. See the NOTICE file distributed with
// this work for additional information regarding copyright ownership.
// The ASF licenses this file to You under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance with
// the License. You may obtain a copy of the License at
//
// path_to_url
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// beam-playground:
// name: CommonTransformsSolution
// description: Common Transforms motivating challenge solution.
// multifile: false
// context_line: 43
// categories:
// - Quickstart
// complexity: BASIC
// tags:
// - hellobeam
package main
import (
"context"
"strconv"
"strings"
"github.com/apache/beam/sdks/v2/go/pkg/beam"
"github.com/apache/beam/sdks/v2/go/pkg/beam/io/textio"
"github.com/apache/beam/sdks/v2/go/pkg/beam/log"
"github.com/apache/beam/sdks/v2/go/pkg/beam/transforms/filter"
"github.com/apache/beam/sdks/v2/go/pkg/beam/transforms/stats"
"github.com/apache/beam/sdks/v2/go/pkg/beam/x/beamx"
"github.com/apache/beam/sdks/v2/go/pkg/beam/x/debug"
)
func main() {
ctx := context.Background()
beam.Init()
p, s := beam.NewPipelineWithRoot()
input := textio.Read(s, "gs://apache-beam-samples/nyc_taxi/misc/sample1000.csv")
// Extract cost from PCollection
cost := ExtractCostFromFile(s, input)
// Keep only costs at or above the fixed threshold
aboveCosts := getAboveCosts(s, cost)
// Keep only costs below the fixed threshold
belowCosts := getBelowCosts(s, cost)
// Sum the costs above the threshold
aboveCostsSum := getSum(s, aboveCosts)
// Sum the costs below the threshold
belowCostsSum := getSum(s, belowCosts)
// Create map[key,value]
aboveKV := getMap(s, aboveCostsSum, "above")
// Create map[key,value]
belowKV := getMap(s, belowCostsSum, "below")
debug.Printf(s, "Above pCollection output", aboveKV)
debug.Printf(s, "Below pCollection output", belowKV)
err := beamx.Run(ctx, p)
if err != nil {
log.Exitf(ctx, "Failed to execute job: %v", err)
}
}
func ExtractCostFromFile(s beam.Scope, input beam.PCollection) beam.PCollection {
return beam.ParDo(s, func(line string) float64 {
taxi := strings.Split(strings.TrimSpace(line), ",")
if len(taxi) > 16 {
cost, _ := strconv.ParseFloat(taxi[16], 64)
return cost
}
return 0.0
}, input)
}
func getSum(s beam.Scope, input beam.PCollection) beam.PCollection {
return stats.Sum(s, input)
}
func getAboveCosts(s beam.Scope, input beam.PCollection) beam.PCollection {
return filter.Include(s, input, func(element float64) bool {
return element >= 15
})
}
func getBelowCosts(s beam.Scope, input beam.PCollection) beam.PCollection {
return filter.Include(s, input, func(element float64) bool {
return element < 15
})
}
func getMap(s beam.Scope, input beam.PCollection, key string) beam.PCollection {
return beam.ParDo(s, func(number float64) (string, float64) {
return key, number
}, input)
}
```
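Outside Beam, the core of the pipeline above (partition the costs at a fixed threshold, then sum each side into a keyed pair) reduces to a few lines. This is a plain-Python sketch, not Beam code; the threshold of 15 is taken from the filters above:

```python
# Plain-Python sketch of the pipeline's core logic: split taxi costs at a
# fixed threshold and sum each side, yielding ("above", sum) / ("below", sum).

THRESHOLD = 15.0

def split_and_sum(costs):
    above = sum(c for c in costs if c >= THRESHOLD)
    below = sum(c for c in costs if c < THRESHOLD)
    return [("above", above), ("below", below)]

print(split_and_sum([5.0, 20.0, 15.0, 3.5]))  # [('above', 35.0), ('below', 8.5)]
```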
|
```php
<?php
/*
*
*
* path_to_url
*
* Unless required by applicable law or agreed to in writing, software
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
*/
namespace Google\Service\Dataproc;
class InstanceReference extends \Google\Model
{
/**
* @var string
*/
public $instanceId;
/**
* @var string
*/
public $instanceName;
/**
* @var string
*/
public $publicEciesKey;
/**
* @var string
*/
public $publicKey;
/**
* @param string
*/
public function setInstanceId($instanceId)
{
$this->instanceId = $instanceId;
}
/**
* @return string
*/
public function getInstanceId()
{
return $this->instanceId;
}
/**
* @param string
*/
public function setInstanceName($instanceName)
{
$this->instanceName = $instanceName;
}
/**
* @return string
*/
public function getInstanceName()
{
return $this->instanceName;
}
/**
* @param string
*/
public function setPublicEciesKey($publicEciesKey)
{
$this->publicEciesKey = $publicEciesKey;
}
/**
* @return string
*/
public function getPublicEciesKey()
{
return $this->publicEciesKey;
}
/**
* @param string
*/
public function setPublicKey($publicKey)
{
$this->publicKey = $publicKey;
}
/**
* @return string
*/
public function getPublicKey()
{
return $this->publicKey;
}
}
// Adding a class alias for backwards compatibility with the previous class name.
class_alias(InstanceReference::class, 'Google_Service_Dataproc_InstanceReference');
```
|
Thornton is a village in the Metropolitan Borough of Sefton, in Merseyside, England. Within the boundaries of the historic county of Lancashire and situated to the north east of Crosby, it is a residential area of semi-detached and detached housing which dates mainly from the 1930s. Many of the houses, particularly those around Edge Lane and Water Street, feature notably long gardens. The A565 Liverpool-Southport road serves the area. At the 2001 Census the population of the village and civil parish was recorded as 2,262, falling to 2,139 at the 2011 Census.
History
Historically part of Lancashire, the settlement is recorded as Torentún in the Domesday Book of 1086, along with the settlement of Homer Green, which predates any claim that Ince Blundell is the oldest village in Sefton. Thornton was combined with Crosby Village and Blundellsands to form the Great Crosby urban district, which subsequently became part of the municipal borough of Crosby in 1937. Thornton was served by West Lancashire Council until the formation of the Metropolitan Borough of Sefton on 1 April 1974. Thornton nevertheless retains parish council status and therefore has its own historical boundary.
Governance
From 1950 until 2010 Thornton was within the boundaries of the Crosby constituency, whose MP from 1997 to 2010 was Claire Curtis-Thomas of the Labour Party. Prior to her election, the Crosby seat was generally considered a safe Conservative Party stronghold, with Tory MPs elected at every election barring the 1981 Crosby by-election, when Shirley Williams of the Social Democratic Party was elected to represent the constituency. As a result of boundary revisions for the 2010 general election, the Crosby constituency was abolished; its northern parts, including Thornton, were merged with the eastern parts of Sefton formerly in the Knowsley North and Sefton East constituency to form the new constituency of Sefton Central, which is currently represented by the Labour Party MP Bill Esterson.
For elections to Sefton Council Thornton is within the Manor electoral ward and is represented by three councillors. The councillors of Manor ward are Martyn Barber of the Conservative Party, John Gibson of the Liberal Democrats, and Steve McGinnity of the Labour Party.
Description
The Parish Council consists of seven councillors, all local residents. They meet on the first Monday of each month at Holy Family Catholic High School. Notices of these meetings are displayed on the parish website and on the village notice board on The Crescent.
There are three schools in Thornton: St William Of York RC Primary School, Holy Family Catholic High School and Thornton College, which is an annexe of Hugh Baird College. In 2003, Cherie Blair QC, wife of former British Prime Minister Tony Blair, officially came to Thornton to open Holy Family's new Sixth Form building.
There are two churches in Thornton: The King's Church, Drummond Road, and St William Of York RC Church, St William Road. St Frideswyde's C of E Church, formerly on Water Street, was demolished in 2012, its congregation merging with All Saints C of E Church at the north end of Great Crosby.
Thornton also has two historic public houses dating back to the early 19th century: the Nags Head, situated opposite Water Street, and the Grapes Hotel.
Thornton has a set of stocks located at the junction of Green Lane and Water Street. These, along with the local sundial, date back to the late 18th century and are Grade II listed monuments. Another Grade II listed monument, Brooms Cross, is located in Back Lane. This wayside cross lies on what was the old bridleway that ran from Hightown / Ince Blundell to Sefton Parish Church, St. Helens. It was here that funeral processions would come to rest and take refreshments before continuing to the church. Unfortunately this monument, which was restored to celebrate the Queen's Silver Jubilee Year of 1977, has become the target of vandals.
Freedom of the Parish
The following people and military units have received the Freedom of the Parish of Thornton.
Individuals
The Reverend Canon Katherine "Kath" Rogers: 18 June 2021.
See also
Listed buildings in Thornton, Merseyside
References
External links
A satellite view of Thornton from Google Maps
Liverpool Street Gallery - Liverpool 23
Towns and villages in the Metropolitan Borough of Sefton
Civil parishes in Merseyside
|
```typescript
import { ApplicationRef, NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { bootloader, createInputTransfer, createNewHosts, hmrModule, removeNgStyles } from '@angularclass/hmr';
import { HttpClientModule } from '@angular/common/http';
import { ApolloModule, Apollo } from 'apollo-angular';
import { HttpLinkModule } from 'apollo-angular-link-http';
import { RouterModule } from '@angular/router';
import { take } from 'rxjs/operators';
import { StoreModule, Store } from '@ngrx/store';
import { apiUrl, createApolloClient, log } from '@gqlapp/core-common';
import ClientModule from '@gqlapp/module-client-angular';
import { MainComponent, metaReducers } from './Main';
const createApp = async (modules: ClientModule) => {
const client = createApolloClient({
apiUrl,
createNetLink: modules.createNetLink,
createLink: modules.createLink,
connectionParams: modules.connectionParams,
clientResolvers: modules.resolvers,
});
@NgModule({
declarations: [MainComponent],
bootstrap: [MainComponent],
imports: [
BrowserModule,
HttpClientModule,
ApolloModule,
HttpLinkModule,
RouterModule.forRoot(modules.routes),
StoreModule.forRoot(modules.reducers, { metaReducers }),
...modules.modules,
],
providers: [],
})
class MainModule {
constructor(public appRef: ApplicationRef, apollo: Apollo, private appStore: Store<any>) {
apollo.setClient(client);
}
public hmrOnInit(store: any) {
if (!store || !store.state) {
return;
}
this.appStore.dispatch({ type: 'SET_ROOT_STATE', payload: store.state });
log.debug('Updating front-end', store.state.data);
// inject AppStore here and update it
// this.AppStore.update(store.state)
if ('restoreInputValues' in store) {
store.restoreInputValues();
}
// change detection
this.appRef.tick();
delete store.state;
delete store.restoreInputValues;
}
public hmrOnDestroy(store: any) {
store.disposeOldHosts = createNewHosts(this.appRef.components.map((cmp) => cmp.location.nativeElement));
this.appStore.pipe(take(1)).subscribe((state) => (store.state = state));
// save input values
store.restoreInputValues = createInputTransfer();
// remove styles
removeNgStyles();
}
public hmrAfterDestroy(store: any) {
// display new elements
store.disposeOldHosts();
delete store.disposeOldHosts;
// anything you need done the component is removed
}
}
function main() {
const result = platformBrowserDynamic().bootstrapModule(MainModule);
if (__DEV__) {
result.then((ngModuleRef: any) => {
return hmrModule(ngModuleRef, module);
});
}
}
// boot on document ready
bootloader(main);
};
export default new ClientModule({
onAppCreate: [createApp],
});
```
|
```typescript
//
//
// path_to_url
//
// Unless required by applicable law or agreed to in writing, software
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
import * as pulumi from "@pulumi/pulumi";
export class MyRandom extends pulumi.ComponentResource {
public readonly randomID: pulumi.Output<string>;
constructor(name: string, opts: pulumi.ResourceOptions) {
super("pkg:index:MyRandom", name, {}, opts);
this.randomID = pulumi.output(`${name}-${Math.floor(Math.random() * 1000)}`);
this.registerOutputs({ randomID: this.randomID });
}
}
```
|
Paul James Crowe (October 23, 1924 – December 13, 1989) was an American football player who played at the halfback and defensive back positions. He played college football for Saint Mary's, military football for the 1944 Saint Mary's Pre-Flight Air Devils team, and professional football for the San Francisco 49ers, Los Angeles Dons, and New York Yanks.
Early years
Crowe was born in 1924 in Chino, California. He attended Chino High School, where he starred in football, basketball, and track from 1939 to 1942.
Military and college football
Crowe served in the United States Army beginning in 1942. After the war, he played college football for the Saint Mary's Gaels from 1945 to 1947. He also played for the Saint Mary's basketball team.
Professional football
Crowe played professional football in the All-America Football Conference for the San Francisco 49ers during their 1948 season and for the 49ers and Los Angeles Dons during their 1949 seasons. He also played in the National Football League (NFL) for the 1951 New York Yanks. He appeared in a total of 32 AAFC and NFL games.
Family and later years
After retiring from football, Crowe became a general contractor. He also served on the Feather River Recreation and Park District Board of Directors. Crowe died in 1989 at age 65 in Butte County, California.
References
1924 births
1989 deaths
San Francisco 49ers (AAFC) players
Players of American football from San Bernardino County, California
Los Angeles Dons players
New York Yanks players
Saint Mary's Gaels football players
People from Chino, California
American football halfbacks
United States Army personnel of World War II
Saint Mary's Gaels men's basketball players
San Francisco 49ers players
|
```csharp
using System.IO;
using System.Threading;
using JetBrains.Annotations;
namespace Volo.Abp.BlobStoring;
public class BlobProviderSaveArgs : BlobProviderArgs
{
[NotNull]
public Stream BlobStream { get; }
public bool OverrideExisting { get; }
public BlobProviderSaveArgs(
[NotNull] string containerName,
[NotNull] BlobContainerConfiguration configuration,
[NotNull] string blobName,
[NotNull] Stream blobStream,
bool overrideExisting = false,
CancellationToken cancellationToken = default)
: base(
containerName,
configuration,
blobName,
cancellationToken)
{
BlobStream = Check.NotNull(blobStream, nameof(blobStream));
OverrideExisting = overrideExisting;
}
}
```
|
```python
#
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
#
# path_to_url
#
# Unless required by applicable law or agreed to in writing, software
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#
"""Unit tests for the processes module."""
# pytype: skip-file
import glob
import os
import random
import re
import shutil
import socketserver
import subprocess
import tempfile
import threading
import unittest
from apache_beam.utils import subprocess_server
class JavaJarServerTest(unittest.TestCase):
def test_gradle_jar_release(self):
self.assertEqual(
'path_to_url'
'beam-sdks-java-fake/VERSION/beam-sdks-java-fake-VERSION.jar',
subprocess_server.JavaJarServer.path_to_beam_jar(
':sdks:java:fake:fatJar', version='VERSION'))
self.assertEqual(
'path_to_url'
'beam-sdks-java-fake/VERSION/beam-sdks-java-fake-A-VERSION.jar',
subprocess_server.JavaJarServer.path_to_beam_jar(
':sdks:java:fake:fatJar', appendix='A', version='VERSION'))
self.assertEqual(
'path_to_url'
'beam-sdks-java-fake/VERSION/beam-sdks-java-fake-A-VERSION.jar',
subprocess_server.JavaJarServer.path_to_beam_jar(
':gradle:target:doesnt:matter',
appendix='A',
version='VERSION',
artifact_id='beam-sdks-java-fake'))
def test_gradle_jar_dev(self):
with self.assertRaisesRegex(
Exception,
re.escape(os.path.join('sdks',
'java',
'fake',
'build',
'libs',
'beam-sdks-java-fake-VERSION-SNAPSHOT.jar')) +
' not found.'):
subprocess_server.JavaJarServer.path_to_beam_jar(
':sdks:java:fake:fatJar', version='VERSION.dev')
with self.assertRaisesRegex(
Exception,
re.escape(os.path.join('sdks',
'java',
'fake',
'build',
'libs',
'beam-sdks-java-fake-A-VERSION-SNAPSHOT.jar')) +
' not found.'):
subprocess_server.JavaJarServer.path_to_beam_jar(
':sdks:java:fake:fatJar', appendix='A', version='VERSION.dev')
with self.assertRaisesRegex(
Exception,
re.escape(os.path.join('sdks',
'java',
'fake',
'build',
'libs',
'fake-artifact-id-A-VERSION-SNAPSHOT.jar')) +
' not found.'):
subprocess_server.JavaJarServer.path_to_beam_jar(
':sdks:java:fake:fatJar',
appendix='A',
version='VERSION.dev',
artifact_id='fake-artifact-id')
def test_beam_services(self):
with subprocess_server.JavaJarServer.beam_services({':some:target': 'foo'}):
self.assertEqual(
'foo',
subprocess_server.JavaJarServer.path_to_beam_jar(':some:target'))
def test_local_jar(self):
class Handler(socketserver.BaseRequestHandler):
timeout = 1
def handle(self):
self.request.recv(1024)
self.request.sendall(b'HTTP/1.1 200 OK\n\ndata')
port, = subprocess_server.pick_port(None)
server = socketserver.TCPServer(('localhost', port), Handler)
t = threading.Thread(target=server.handle_request)
t.daemon = True
t.start()
with tempfile.TemporaryDirectory() as temp_dir:
subprocess_server.JavaJarServer.local_jar(
'path_to_url' % port, temp_dir)
with open(os.path.join(temp_dir, 'file.jar')) as fin:
self.assertEqual(fin.read(), 'data')
@unittest.skipUnless(shutil.which('javac'), 'missing java jdk')
def test_classpath_jar(self):
with tempfile.TemporaryDirectory() as temp_dir:
try:
# Avoid having to prefix everything in our test strings.
oldwd = os.getcwd()
os.chdir(temp_dir)
with open('Main.java', 'w') as fout:
fout.write(
"""
public class Main {
public static void main(String[] args) { Other.greet(); }
}
""")
with open('Other.java', 'w') as fout:
fout.write(
"""
public class Other {
public static void greet() { System.out.println("You got me!"); }
}
""")
os.mkdir('jars')
# Using split just for readability/copyability.
subprocess.check_call('javac Main.java Other.java'.split())
subprocess.check_call('jar cfe jars/Main.jar Main Main.class'.split())
subprocess.check_call('jar cf jars/Other.jar Other.class'.split())
# Make sure the java and class files don't get picked up.
for path in glob.glob('*.*'):
os.unlink(path)
# These should fail.
self.assertNotEqual(
subprocess.call('java -jar jars/Main.jar'.split()), 0)
self.assertNotEqual(
subprocess.call('java -jar jars/Other.jar'.split()), 0)
os.mkdir('beam_temp')
composite_jar = subprocess_server.JavaJarServer.make_classpath_jar(
'jars/Main.jar', ['jars/Other.jar'], cache_dir='beam_temp')
# This, however, should work.
subprocess.check_call(f'java -jar {composite_jar}'.split())
finally:
os.chdir(oldwd)
class CacheTest(unittest.TestCase):
@staticmethod
def with_prefix(prefix):
return '%s-%s' % (prefix, random.random())
def test_memoization(self):
cache = subprocess_server._SharedCache(self.with_prefix, lambda x: None)
try:
token = cache.register()
a = cache.get('a')
self.assertEqual(a[0], 'a')
self.assertEqual(cache.get('a'), a)
b = cache.get('b')
self.assertEqual(b[0], 'b')
self.assertEqual(cache.get('b'), b)
finally:
cache.purge(token)
def test_purged(self):
cache = subprocess_server._SharedCache(self.with_prefix, lambda x: None)
try:
token = cache.register()
a = cache.get('a')
self.assertEqual(cache.get('a'), a)
finally:
cache.purge(token)
try:
token = cache.register()
new_a = cache.get('a')
self.assertNotEqual(new_a, a)
finally:
cache.purge(token)
def test_multiple_owners(self):
cache = subprocess_server._SharedCache(self.with_prefix, lambda x: None)
try:
owner1 = cache.register()
a = cache.get('a')
try:
self.assertEqual(cache.get('a'), a)
owner2 = cache.register()
b = cache.get('b')
self.assertEqual(cache.get('b'), b)
finally:
cache.purge(owner2)
self.assertEqual(cache.get('a'), a)
self.assertEqual(cache.get('b'), b)
finally:
cache.purge(owner1)
try:
owner3 = cache.register()
self.assertNotEqual(cache.get('a'), a)
self.assertNotEqual(cache.get('b'), b)
finally:
cache.purge(owner3)
def test_interleaved_owners(self):
cache = subprocess_server._SharedCache(self.with_prefix, lambda x: None)
owner1 = cache.register()
a = cache.get('a')
self.assertEqual(cache.get('a'), a)
owner2 = cache.register()
b = cache.get('b')
self.assertEqual(cache.get('b'), b)
cache.purge(owner1)
self.assertNotEqual(cache.get('a'), a)
self.assertEqual(cache.get('b'), b)
cache.purge(owner2)
owner3 = cache.register()
self.assertNotEqual(cache.get('b'), b)
cache.purge(owner3)
if __name__ == '__main__':
unittest.main()
```
|
```protobuf
syntax = "proto3";
package envoy.admin.v2alpha;
import "envoy/service/tap/v2alpha/common.proto";
import "udpa/annotations/status.proto";
import "validate/validate.proto";
option java_package = "io.envoyproxy.envoy.admin.v2alpha";
option java_outer_classname = "TapProto";
option java_multiple_files = true;
option go_package = "github.com/envoyproxy/go-control-plane/envoy/admin/v2alpha";
option (udpa.annotations.file_status).package_version_status = FROZEN;
// [#protodoc-title: Tap]
// The /tap admin request body that is used to configure an active tap session.
message TapRequest {
// The opaque configuration ID used to match the configuration to a loaded extension.
// A tap extension configures a similar opaque ID that is used to match.
string config_id = 1 [(validate.rules).string = {min_bytes: 1}];
// The tap configuration to load.
service.tap.v2alpha.TapConfig tap_config = 2 [(validate.rules).message = {required: true}];
}
```
|
```typescript
import { expect } from '@playwright/test';
import { loadFixture } from '../../playwright/paths';
import { test } from '../../playwright/test';
test.describe('Design interactions', async () => {
test.slow(process.platform === 'darwin' || process.platform === 'win32', 'Slow app start on these platforms');
test('Unit Test interactions', async ({ app, page }) => {
// Setup
await page.getByRole('button', { name: 'Create in project' }).click();
const text = await loadFixture('unit-test.yaml');
await app.evaluate(async ({ clipboard }, text) => clipboard.writeText(text), text);
await page.getByRole('menuitemradio', { name: 'Import' }).click();
await page.locator('[data-test-id="import-from-clipboard"]').click();
await page.getByRole('button', { name: 'Scan' }).click();
await page.getByRole('dialog').getByRole('button', { name: 'Import' }).click();
await page.getByText('unit-test.yaml').click();
// Switch to Test tab
await page.click('a:has-text("Test")');
// Run tests and check results
await page.getByLabel('Run all tests').click();
await expect(page.locator('.app')).toContainText('Request A is found');
await expect(page.locator('.app')).toContainText('Request B is not found');
await expect(page.locator('.app')).toContainText('Tests passed');
// Create a new test suite
await page.click('text=New test suite');
// Rename test suite
await page.getByRole('heading', { name: 'New Suite' }).locator('span').dblclick();
await page.getByRole('textbox').fill('New Suite 2');
await page.getByRole('textbox').press('Enter');
// Add a new test
await page.getByLabel('New test').click();
// Rename test
await page.getByLabel('Unit tests').getByRole('heading', { name: 'Returns' }).locator('div').dblclick();
await page.getByLabel('Unit tests').getByRole('textbox').fill('Returns 200 and works');
await page.getByLabel('Unit tests').getByRole('textbox').press('Enter');
await page.getByLabel('Unit tests').getByText('Returns 200 and works').click();
// Use autocomplete inside the test code
// TODO(filipe) - add this in another PR
});
});
```
|
```java
package com.yahoo.log;
import com.yahoo.text.Utf8;
import java.nio.MappedByteBuffer;
import java.util.HashMap;
import java.util.Map;
/**
 * Contains a repository of mapped log level controllers.
 * Should only be used internally in the log library.
 *
 * @author Ulf Lilleengen
 * @since 5.1
 */
class MappedLevelControllerRepo {
private final Map<String, LevelController> levelControllerMap = new HashMap<>();
private final MappedByteBuffer mapBuf;
private final int controlFileHeaderLength;
private final int numLevels;
private final String logControlFilename;
MappedLevelControllerRepo(MappedByteBuffer mapBuf, int controlFileHeaderLength, int numLevels, String logControlFilename) {
this.mapBuf = mapBuf;
this.controlFileHeaderLength = controlFileHeaderLength;
this.numLevels = numLevels;
this.logControlFilename = logControlFilename;
buildMap();
}
private void buildMap() {
int len = mapBuf.capacity();
int startOfLine = controlFileHeaderLength;
int numLine = 1;
int i = 0;
while (i < len) {
if (mapBuf.get(i) == '\n') {
startOfLine = ++i;
++numLine;
} else if (i < controlFileHeaderLength) {
++i;
} else if (mapBuf.get(i) == ':') {
int endOfName = i;
int levels = i;
levels += 2;
while ((levels % 4) != 0) {
levels++;
}
int endLine = levels + 4*numLevels;
if (checkLine(startOfLine, endOfName, levels, endLine)) {
int l = endOfName - startOfLine;
if (l > 1 && mapBuf.get(startOfLine) == '.') {
++startOfLine;
--l;
}
byte[] namebytes = new byte[l];
for (int j = 0; j < l; j++) {
namebytes[j] = mapBuf.get(startOfLine + j);
}
String name = Utf8.toString(namebytes);
if (name.equals("default")) {
name = "";
}
MappedLevelController ctrl = new MappedLevelController(mapBuf, levels, name);
levelControllerMap.put(name, ctrl);
i = endLine;
continue; // good line
}
// bad line, skip
while (i < len && mapBuf.get(i) != '\n') {
i++;
}
int bll = i - startOfLine;
byte[] badline = new byte[bll];
for (int j = 0; j < bll; j++) {
badline[j] = mapBuf.get(startOfLine + j);
}
System.err.println("bad loglevel line "+numLine+" in "
+ logControlFilename + ": " + Utf8.toString(badline));
} else {
i++;
}
}
}
private boolean checkLine(int sol, int endnam, int levstart, int eol) {
if (eol >= mapBuf.capacity()) {
System.err.println("line would end after end of file");
return false;
}
if (mapBuf.get(eol) != '\n') {
System.err.println("line must end with newline, was: "+mapBuf.get(eol));
return false;
}
if (endnam < sol + 1) {
System.err.println("name must be at least one character after start of line");
return false;
}
return MappedLevelController.checkOnOff(mapBuf, levstart);
}
LevelController getLevelController(String suffix) {
return levelControllerMap.get(suffix);
}
void checkBack() {
for (LevelController ctrl : levelControllerMap.values()) {
ctrl.checkBack();
}
}
}
```
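The offset arithmetic in buildMap (advance two bytes past the ':', round up to a 4-byte boundary, then add four bytes per level) can be checked with a small sketch; the numeric inputs below are hypothetical, not taken from a real control file:

```python
# Sketch of the offset arithmetic in buildMap: the level block starts at
# least two bytes after the ':' and is rounded up to a 4-byte boundary.

def levels_offset(colon_index, num_levels=0):
    levels = colon_index + 2
    while levels % 4 != 0:          # round up to the next multiple of 4
        levels += 1
    end_line = levels + 4 * num_levels  # each level occupies 4 bytes
    return levels, end_line

print(levels_offset(5))     # (8, 8)   -> 5+2=7, rounded up to 8
print(levels_offset(6, 3))  # (8, 20)  -> 6+2=8, already aligned; 8+12=20
```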
|
```go
package tengo
import (
"fmt"
"strings"
)
// ForeignKey represents a single foreign key constraint in a table. Note that
// the "referenced" side of the FK is tracked as strings, rather than *Schema,
// *Table, *[]Column to avoid potentially having to introspect multiple schemas
// in a particular order. Also, the referenced side is not gauranteed to exist,
// especially if foreign_key_checks=0 has been used at any point in the past.
type ForeignKey struct {
Name string `json:"name"`
ColumnNames []string `json:"columnNames"`
ReferencedSchemaName string `json:"referencedSchemaName,omitempty"` // will be empty string if same schema
ReferencedTableName string `json:"referencedTableName"`
ReferencedColumnNames []string `json:"referencedColumnNames"` // slice length always identical to len(ColumnNames)
UpdateRule string `json:"updateRule"`
DeleteRule string `json:"deleteRule"`
}
// Definition returns this ForeignKey's definition clause, for use as part of a DDL
// statement.
func (fk *ForeignKey) Definition(flavor Flavor) string {
colParts := make([]string, len(fk.ColumnNames))
for n, colName := range fk.ColumnNames {
colParts[n] = EscapeIdentifier(colName)
}
childCols := strings.Join(colParts, ", ")
referencedTable := EscapeIdentifier(fk.ReferencedTableName)
if fk.ReferencedSchemaName != "" {
referencedTable = fmt.Sprintf("%s.%s", EscapeIdentifier(fk.ReferencedSchemaName), referencedTable)
}
for n, col := range fk.ReferencedColumnNames {
colParts[n] = EscapeIdentifier(col)
}
parentCols := strings.Join(colParts, ", ")
// MySQL 8 omits NO ACTION clauses, but includes RESTRICT clauses. In all other
// flavors the opposite is true. (Even though NO ACTION and RESTRICT are
// completely equivalent...)
var hiddenRule, deleteRule, updateRule string
if flavor.MinMySQL(8) {
hiddenRule = "NO ACTION"
} else {
hiddenRule = "RESTRICT"
}
if fk.DeleteRule != hiddenRule {
deleteRule = fmt.Sprintf(" ON DELETE %s", fk.DeleteRule)
}
if fk.UpdateRule != hiddenRule {
updateRule = fmt.Sprintf(" ON UPDATE %s", fk.UpdateRule)
}
return fmt.Sprintf("CONSTRAINT %s FOREIGN KEY (%s) REFERENCES %s (%s)%s%s", EscapeIdentifier(fk.Name), childCols, referencedTable, parentCols, deleteRule, updateRule)
}
// Equals returns true if two ForeignKeys are completely identical (even in
// terms of cosmetic differences), false otherwise.
func (fk *ForeignKey) Equals(other *ForeignKey) bool {
if fk == nil || other == nil {
return fk == other // only equal if BOTH are nil
}
return fk.Name == other.Name && fk.UpdateRule == other.UpdateRule && fk.DeleteRule == other.DeleteRule && fk.Equivalent(other)
}
// Equivalent returns true if two ForeignKeys are functionally equivalent,
// regardless of whether or not they have the same names.
func (fk *ForeignKey) Equivalent(other *ForeignKey) bool {
if fk == nil || other == nil {
return fk == other // only equivalent if BOTH are nil
}
if fk.ReferencedSchemaName != other.ReferencedSchemaName || fk.ReferencedTableName != other.ReferencedTableName {
return false
}
if fk.normalizedUpdateRule() != other.normalizedUpdateRule() || fk.normalizedDeleteRule() != other.normalizedDeleteRule() {
return false
}
if len(fk.ColumnNames) != len(other.ColumnNames) {
return false
}
for n := range fk.ColumnNames {
if fk.ColumnNames[n] != other.ColumnNames[n] || fk.ReferencedColumnNames[n] != other.ReferencedColumnNames[n] {
return false
}
}
return true
}
func (fk *ForeignKey) normalizedUpdateRule() string {
// MySQL and MariaDB both treat RESTRICT, NO ACTION, and lack of a rule
// equivalently in terms of functionality.
if fk.UpdateRule == "RESTRICT" || fk.UpdateRule == "NO ACTION" {
return ""
}
return fk.UpdateRule
}
func (fk *ForeignKey) normalizedDeleteRule() string {
// MySQL and MariaDB both treat RESTRICT, NO ACTION, and lack of a rule
// equivalently in terms of functionality.
if fk.DeleteRule == "RESTRICT" || fk.DeleteRule == "NO ACTION" {
return ""
}
return fk.DeleteRule
}
```
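The normalization above (RESTRICT, NO ACTION, and an absent rule are functionally identical in MySQL and MariaDB) can be sketched as:

```python
# Sketch of the foreign-key rule normalization above: RESTRICT, NO ACTION,
# and no rule at all behave identically, so they compare equal.

def normalized_rule(rule):
    return "" if rule in ("RESTRICT", "NO ACTION") else rule

def rules_equivalent(a, b):
    return normalized_rule(a) == normalized_rule(b)

print(rules_equivalent("RESTRICT", "NO ACTION"))  # True
print(rules_equivalent("CASCADE", "NO ACTION"))   # False
```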
|
```cpp
/* path_to_url
Unless required by applicable law or agreed to in writing, software
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
==============================================================================*/
#ifndef TENSORFLOW_SERVING_CORE_ASPIRED_VERSION_POLICY_H_
#define TENSORFLOW_SERVING_CORE_ASPIRED_VERSION_POLICY_H_
#include <string>
#include <vector>
#include "absl/types/optional.h"
#include "tensorflow/core/lib/strings/strcat.h"
#include "tensorflow/core/platform/types.h"
#include "tensorflow_serving/core/loader_harness.h"
#include "tensorflow_serving/core/servable_id.h"
namespace tensorflow {
namespace serving {
/// A snapshot of a servable's state and aspiredness.
struct AspiredServableStateSnapshot final {
ServableId id;
LoaderHarness::State state;
bool is_aspired;
};
/// An interface for the policy to be applied for transitioning servable
/// versions in a servable stream.
///
/// Policies should be entirely stateless and idempotent. Asking the same policy
/// multiple times for the next action, for an identical vector of
/// AspiredServableStateSnapshots, should return the same result.
///
/// If additional state is required to implement a Policy, such state shall be
/// shared via AspiredServableStateSnapshots. Depending on the kind of state,
/// the most likely candidates for originating or tracking state are Sources or
/// the Harness and Manager.
class AspiredVersionPolicy {
public:
/// The different actions that could be recommended by a policy.
enum class Action : int {
/// Call load on the servable.
kLoad,
/// Call unload on the servable.
kUnload,
};
virtual ~AspiredVersionPolicy() = default;
/// Action and the id of the servable associated with it.
struct ServableAction final {
Action action;
ServableId id;
string DebugString() const {
return strings::StrCat("{ action: ", static_cast<int>(action),
" id: ", id.DebugString(), " }");
}
};
/// Takes in a vector of state snapshots of all versions of a servable stream
/// and returns an action to be performed for a particular servable version,
/// depending only on the states of all the versions.
///
/// If no action is to be performed, we don't return an action, meaning
/// that the servable stream is up to date.
virtual absl::optional<ServableAction> GetNextAction(
const std::vector<AspiredServableStateSnapshot>& all_versions) const = 0;
protected:
/// Returns the aspired ServableId with the highest version that matches
/// kNew state, if any exists.
static absl::optional<ServableId> GetHighestAspiredNewServableId(
const std::vector<AspiredServableStateSnapshot>& all_versions);
private:
friend class AspiredVersionPolicyTest;
};
inline bool operator==(const AspiredVersionPolicy::ServableAction& lhs,
const AspiredVersionPolicy::ServableAction& rhs) {
return lhs.action == rhs.action && lhs.id == rhs.id;
}
} // namespace serving
} // namespace tensorflow
#endif // TENSORFLOW_SERVING_CORE_ASPIRED_VERSION_POLICY_H_
```
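
The contract documented above, a stateless and idempotent policy whose decision depends only on the vector of state snapshots, can be illustrated outside C++. The following Python sketch is illustrative only: the names and the resource-preserving strategy (unload before load) are assumptions, not TensorFlow Serving's actual policy implementations.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional, Tuple

class State(Enum):
    NEW = "new"
    READY = "ready"

class Action(Enum):
    LOAD = "load"
    UNLOAD = "unload"

@dataclass(frozen=True)
class Snapshot:
    version: int
    state: State
    is_aspired: bool

def get_next_action(all_versions: List[Snapshot]) -> Optional[Tuple[Action, int]]:
    """Stateless, idempotent policy: identical input always yields an
    identical decision. Unload any ready-but-no-longer-aspired version
    first; otherwise load the highest-versioned aspired NEW servable."""
    for snap in all_versions:
        if snap.state is State.READY and not snap.is_aspired:
            return (Action.UNLOAD, snap.version)
    aspired_new = [s for s in all_versions
                   if s.is_aspired and s.state is State.NEW]
    if aspired_new:
        highest = max(aspired_new, key=lambda s: s.version)
        return (Action.LOAD, highest.version)
    return None  # no action: the servable stream is up to date
```

Because the function closes over no state, asking it twice for the same snapshot vector necessarily returns the same action, which is exactly the idempotence requirement stated in the interface comment.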
|
```smalltalk
Class {
#name : 'FFIAbstractTest',
#superclass : 'TestCase',
#instVars : [
'intType',
'int32Type',
'voidType',
'charType',
'uint32Type'
],
#category : 'UnifiedFFI-Tests-Tests',
#package : 'UnifiedFFI-Tests',
#tag : 'Tests'
}
{ #category : 'testing' }
FFIAbstractTest class >> isAbstract [
^ self == FFIAbstractTest
]
{ #category : 'private' }
FFIAbstractTest >> externalTypeAlias: aTypeName [
"Prefix the type so we have control over it.
If it is already prefixed, do not prefix it (otherwise this loops)."
(aTypeName beginsWith: '_test_type_')
ifTrue: [ ^ aTypeName ].
^ '_test_type_', aTypeName
]
{ #category : 'private' }
FFIAbstractTest >> ffiBindingOf: aString [
aString = '_test_type_int'
ifTrue: [ ^ aString -> intType ].
aString = '_test_type_bool'
ifTrue: [ ^ aString -> int32Type ].
aString = '_test_type_uint32'
ifTrue: [ ^ aString -> uint32Type ].
aString = '_test_type_int32'
ifTrue: [ ^ aString -> int32Type ].
aString = '_test_type_void'
ifTrue: [ ^ aString -> voidType ].
aString = '_test_type_char'
ifTrue: [ ^ aString -> charType ].
self error: 'Type not recognized: ', aString
]
{ #category : 'running' }
FFIAbstractTest >> setUp [
super setUp.
intType := FFIInt32.
int32Type := FFIInt32.
uint32Type := FFIUInt32.
voidType := FFIVoid.
charType := FFICharacterType
]
```
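
The double-prefix guard in `externalTypeAlias:` is a small idempotence pattern: applying the alias twice must not stack prefixes, otherwise type resolution would loop, as the method comment warns. A Python rendering of the same guard (the prefix and function name are illustrative, not part of UnifiedFFI):

```python
PREFIX = "_test_type_"

def external_type_alias(type_name: str) -> str:
    """Prefix a type name so the test controls its resolution.
    Idempotent: an already-prefixed name is returned unchanged,
    so repeated application cannot stack prefixes or loop."""
    if type_name.startswith(PREFIX):
        return type_name
    return PREFIX + type_name
```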
|
```php
<?php
declare(strict_types=1);
namespace Psalm\Internal\LanguageServer;
use function array_multisort;
use function call_user_func_array;
use function count;
use const SORT_NUMERIC;
/**
* Event Emitter Trait
*
* This trait contains all the basic functions to implement an
* EventEmitterInterface.
*
* Using the trait + interface allows you to add EventEmitter capabilities
* without having to change your base-class.
*
* @author Evert Pot (path_to_url)
* @internal
*/
trait EmitterTrait
{
/**
* The list of listeners
*
* @var array<string, array{0: bool, 1: int[], 2: callable[]}>
*/
protected array $listeners = [];
/**
* Subscribe to an event.
*/
public function on(string $eventName, callable $callBack, int $priority = 100): void
{
if (!isset($this->listeners[$eventName])) {
$this->listeners[$eventName] = [
true, // If there's only one item, it's sorted
[$priority],
[$callBack],
];
} else {
$this->listeners[$eventName][0] = false; // marked as unsorted
$this->listeners[$eventName][1][] = $priority;
$this->listeners[$eventName][2][] = $callBack;
}
}
/**
* Emits an event.
*
* Emission stops as soon as a listener breaks the event chain by
* returning false.
*
* If the continueCallBack is specified, this callback will be called every
* time before the next event handler is called.
*
* If the continueCallback returns false, event propagation stops. This
* allows you to use the eventEmitter as a means for listeners to implement
* functionality in your application, and break the event loop as soon as
* some condition is fulfilled.
*
* Note that a listener returning false breaks propagation mid-chain,
* while the continue-callback stopping propagation is still considered
* a normal, successful completion.
*
* Lastly, if there are 5 event handlers for an event, the continueCallback
* will be called at most 4 times.
*
* @param list<mixed> $arguments
*/
public function emit(
string $eventName,
array $arguments = [],
?callable $continueCallBack = null
): void {
if ($continueCallBack === null) {
foreach ($this->listeners($eventName) as $listener) {
/** @psalm-suppress MixedAssignment */
$result = call_user_func_array($listener, $arguments);
if ($result === false) {
return;
}
}
} else {
$listeners = $this->listeners($eventName);
$counter = count($listeners);
foreach ($listeners as $listener) {
--$counter;
/** @psalm-suppress MixedAssignment */
$result = call_user_func_array($listener, $arguments);
if ($result === false) {
return;
}
if ($counter > 0) {
if (!$continueCallBack()) {
break;
}
}
}
}
}
/**
* Returns the list of listeners for an event.
*
* The list is returned as an array, and the listeners are sorted by
* their priority.
*
* @return callable[]
*/
public function listeners(string $eventName): array
{
if (!isset($this->listeners[$eventName])) {
return [];
}
// The list is not sorted
if (!$this->listeners[$eventName][0]) {
// Sorting
array_multisort($this->listeners[$eventName][1], SORT_NUMERIC, $this->listeners[$eventName][2]);
// Marking the listeners as sorted
$this->listeners[$eventName][0] = true;
}
return $this->listeners[$eventName][2];
}
/**
* Removes a specific listener from an event.
*
* If the listener could not be found, this method will return false. If it
* was removed it will return true.
*/
public function removeListener(string $eventName, callable $listener): bool
{
if (!isset($this->listeners[$eventName])) {
return false;
}
foreach ($this->listeners[$eventName][2] as $index => $check) {
if ($check === $listener) {
unset($this->listeners[$eventName][1][$index], $this->listeners[$eventName][2][$index]);
return true;
}
}
return false;
}
}
```
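
The trait has three moving parts: lazy priority sorting (the boolean "sorted" flag that `on` clears and `listeners` restores), propagation that stops when a listener returns false, and a continue-callback invoked between listeners (hence at most n-1 times for n listeners). The following Python sketch mirrors those semantics; it is an illustrative analog, not the Psalm implementation.

```python
class Emitter:
    """Priority-ordered event emitter. Listener lists stay unsorted
    until first read (lazy sort), emit stops when a listener returns
    False, and an optional continue callback runs between listeners."""

    def __init__(self):
        # event -> [sorted_flag, priorities, callbacks], parallel lists
        self._listeners = {}

    def on(self, event, callback, priority=100):
        entry = self._listeners.setdefault(event, [True, [], []])
        if entry[1]:
            entry[0] = False  # second listener added: mark unsorted
        entry[1].append(priority)
        entry[2].append(callback)

    def listeners(self, event):
        if event not in self._listeners:
            return []
        entry = self._listeners[event]
        if not entry[0]:
            # Sort callbacks by priority; stable, so equal priorities
            # keep registration order.
            pairs = sorted(zip(entry[1], entry[2]), key=lambda p: p[0])
            entry[1] = [p for p, _ in pairs]
            entry[2] = [c for _, c in pairs]
            entry[0] = True
        return entry[2]

    def emit(self, event, args=(), continue_cb=None):
        callbacks = self.listeners(event)
        remaining = len(callbacks)
        for cb in callbacks:
            remaining -= 1
            if cb(*args) is False:
                return  # listener broke the event chain
            # Continue callback runs only between listeners, so it is
            # invoked at most len(callbacks) - 1 times.
            if continue_cb is not None and remaining > 0:
                if not continue_cb():
                    break
```

Amortizing the sort this way means a burst of `on` calls costs O(1) each, with one O(n log n) sort deferred to the first `emit`, the same trade-off the PHP trait makes with `array_multisort`.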
|
```c
/* Protoize program - Original version by Ron Guilmette (rfg@segfault.us.com).
Copyright (C) 1999, 2000, 2001, 2002 Free Software Foundation, Inc.
This file is part of GCC.
GCC is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 2, or (at your option) any later
version.
GCC is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
for more details.
You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING. If not, write to the Free
Software Foundation, 59 Temple Place - Suite 330, Boston, MA
02111-1307, USA. */
#include "config.h"
#include "system.h"
#include "intl.h"
#include "cppdefault.h"
#include <setjmp.h>
#include <signal.h>
#if ! defined( SIGCHLD ) && defined( SIGCLD )
# define SIGCHLD SIGCLD
#endif
#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif
#undef abort
#include "version.h"
/* Include getopt.h for the sake of getopt_long. */
#include "getopt.h"
/* Macro to see if the path elements match. */
#ifdef HAVE_DOS_BASED_FILE_SYSTEM
#define IS_SAME_PATH_CHAR(a,b) (TOUPPER (a) == TOUPPER (b))
#else
#define IS_SAME_PATH_CHAR(a,b) ((a) == (b))
#endif
/* Macro to see if the paths match. */
#ifdef HAVE_DOS_BASED_FILE_SYSTEM
#define IS_SAME_PATH(a,b) (strcasecmp (a, b) == 0)
#else
#define IS_SAME_PATH(a,b) (strcmp (a, b) == 0)
#endif
/* Suffix for aux-info files. */
#ifdef __MSDOS__
#define AUX_INFO_SUFFIX "X"
#else
#define AUX_INFO_SUFFIX ".X"
#endif
/* Suffix for saved files. */
#ifdef __MSDOS__
#define SAVE_SUFFIX "sav"
#else
#define SAVE_SUFFIX ".save"
#endif
/* Suffix for renamed C++ files. */
#ifdef HAVE_DOS_BASED_FILE_SYSTEM
#define CPLUS_FILE_SUFFIX "cc"
#else
#define CPLUS_FILE_SUFFIX "C"
#endif
static void usage PARAMS ((void)) ATTRIBUTE_NORETURN;
static void aux_info_corrupted PARAMS ((void)) ATTRIBUTE_NORETURN;
static void declare_source_confusing PARAMS ((const char *)) ATTRIBUTE_NORETURN;
static const char *shortpath PARAMS ((const char *, const char *));
extern void fancy_abort PARAMS ((void)) ATTRIBUTE_NORETURN;
static void notice PARAMS ((const char *, ...)) ATTRIBUTE_PRINTF_1;
static char *savestring PARAMS ((const char *, unsigned int));
static char *dupnstr PARAMS ((const char *, size_t));
static const char *substr PARAMS ((const char *, const char * const));
static int safe_read PARAMS ((int, PTR, int));
static void safe_write PARAMS ((int, PTR, int, const char *));
static void save_pointers PARAMS ((void));
static void restore_pointers PARAMS ((void));
static int is_id_char PARAMS ((int));
static int in_system_include_dir PARAMS ((const char *));
static int directory_specified_p PARAMS ((const char *));
static int file_excluded_p PARAMS ((const char *));
static char *unexpand_if_needed PARAMS ((const char *));
static char *abspath PARAMS ((const char *, const char *));
static int is_abspath PARAMS ((const char *));
static void check_aux_info PARAMS ((int));
static const char *find_corresponding_lparen PARAMS ((const char *));
static int referenced_file_is_newer PARAMS ((const char *, time_t));
static void save_def_or_dec PARAMS ((const char *, int));
static void munge_compile_params PARAMS ((const char *));
static int gen_aux_info_file PARAMS ((const char *));
static void process_aux_info_file PARAMS ((const char *, int, int));
static int identify_lineno PARAMS ((const char *));
static void check_source PARAMS ((int, const char *));
static const char *seek_to_line PARAMS ((int));
static const char *forward_to_next_token_char PARAMS ((const char *));
static void output_bytes PARAMS ((const char *, size_t));
static void output_string PARAMS ((const char *));
static void output_up_to PARAMS ((const char *));
static int other_variable_style_function PARAMS ((const char *));
static const char *find_rightmost_formals_list PARAMS ((const char *));
static void do_cleaning PARAMS ((char *, const char *));
static const char *careful_find_l_paren PARAMS ((const char *));
static void do_processing PARAMS ((void));
/* Look for these where the `const' qualifier is intentionally cast aside. */
#define NONCONST
/* Define a default place to find the SYSCALLS.X file. */
#ifndef UNPROTOIZE
#ifndef STANDARD_EXEC_PREFIX
#define STANDARD_EXEC_PREFIX "/usr/local/lib/gcc-lib/"
#endif /* !defined STANDARD_EXEC_PREFIX */
static const char * const standard_exec_prefix = STANDARD_EXEC_PREFIX;
static const char * const target_machine = DEFAULT_TARGET_MACHINE;
static const char * const target_version = DEFAULT_TARGET_VERSION;
#endif /* !defined (UNPROTOIZE) */
/* Suffix of aux_info files. */
static const char * const aux_info_suffix = AUX_INFO_SUFFIX;
/* String to attach to filenames for saved versions of original files. */
static const char * const save_suffix = SAVE_SUFFIX;
/* String to attach to C filenames renamed to C++. */
static const char * const cplus_suffix = CPLUS_FILE_SUFFIX;
#ifndef UNPROTOIZE
/* File name of the file which contains descriptions of standard system
routines. Note that we never actually do anything with this file per se,
but we do read in its corresponding aux_info file. */
static const char syscalls_filename[] = "SYSCALLS.c";
/* Default place to find the above file. */
static const char * default_syscalls_dir;
/* Variable to hold the complete absolutized filename of the SYSCALLS.c.X
file. */
static char * syscalls_absolute_filename;
#endif /* !defined (UNPROTOIZE) */
/* Type of the structure that holds information about macro unexpansions. */
struct unexpansion_struct {
const char *const expanded;
const char *const contracted;
};
typedef struct unexpansion_struct unexpansion;
/* A table of conversions that may need to be made for some (stupid) older
operating systems where these types are preprocessor macros rather than
typedefs (as they really ought to be).
WARNING: The contracted forms must be as small (or smaller) as the
expanded forms, or else havoc will ensue. */
static const unexpansion unexpansions[] = {
{ "struct _iobuf", "FILE" },
{ 0, 0 }
};
/* The number of "primary" slots in the hash tables for filenames and for
function names. This can be as big or as small as you like, except that
it must be a power of two. */
#define HASH_TABLE_SIZE (1 << 9)
/* Bit mask to use when computing hash values. */
static const int hash_mask = (HASH_TABLE_SIZE - 1);
/* Datatype for lists of directories or filenames. */
struct string_list
{
const char *name;
struct string_list *next;
};
static struct string_list *string_list_cons PARAMS ((const char *,
struct string_list *));
/* List of directories in which files should be converted. */
struct string_list *directory_list;
/* List of file names which should not be converted.
A file is excluded if the end of its name, following a /,
matches one of the names in this list. */
struct string_list *exclude_list;
/* The name of the other style of variable-number-of-parameters functions
(i.e. the style that we want to leave unconverted because we don't yet
know how to convert them to this style). This string is used in warning
messages. */
/* Also define here the string that we can search for in the parameter lists
taken from the .X files which will unambiguously indicate that we have
found a varargs style function. */
#ifdef UNPROTOIZE
static const char * const other_var_style = "stdarg";
#else /* !defined (UNPROTOIZE) */
static const char * const other_var_style = "varargs";
/* Note that this is a string containing the expansion of va_alist.
But in `main' we discard all but the first token. */
static const char *varargs_style_indicator = STRINGX (va_alist);
#endif /* !defined (UNPROTOIZE) */
/* The following two types are used to create hash tables. In this program,
there are two hash tables which are used to store and quickly lookup two
different classes of strings. The first type of strings stored in the
first hash table are absolute filenames of files which protoize needs to
know about. The second type of strings (stored in the second hash table)
are function names. It is this second class of strings which really
inspired the use of the hash tables, because there may be a lot of them. */
typedef struct hash_table_entry_struct hash_table_entry;
/* Do some typedefs so that we don't have to write "struct" so often. */
typedef struct def_dec_info_struct def_dec_info;
typedef struct file_info_struct file_info;
typedef struct f_list_chain_item_struct f_list_chain_item;
#ifndef UNPROTOIZE
static int is_syscalls_file PARAMS ((const file_info *));
static void rename_c_file PARAMS ((const hash_table_entry *));
static const def_dec_info *find_extern_def PARAMS ((const def_dec_info *,
const def_dec_info *));
static const def_dec_info *find_static_definition PARAMS ((const def_dec_info *));
static void connect_defs_and_decs PARAMS ((const hash_table_entry *));
static void add_local_decl PARAMS ((const def_dec_info *, const char *));
static void add_global_decls PARAMS ((const file_info *, const char *));
#endif /* ! UNPROTOIZE */
static int needs_to_be_converted PARAMS ((const file_info *));
static void visit_each_hash_node PARAMS ((const hash_table_entry *,
void (*)(const hash_table_entry *)));
static hash_table_entry *add_symbol PARAMS ((hash_table_entry *, const char *));
static hash_table_entry *lookup PARAMS ((hash_table_entry *, const char *));
static void free_def_dec PARAMS ((def_dec_info *));
static file_info *find_file PARAMS ((const char *, int));
static void reverse_def_dec_list PARAMS ((const hash_table_entry *));
static void edit_fn_declaration PARAMS ((const def_dec_info *, const char *));
static int edit_formals_lists PARAMS ((const char *, unsigned int,
const def_dec_info *));
static void edit_fn_definition PARAMS ((const def_dec_info *, const char *));
static void scan_for_missed_items PARAMS ((const file_info *));
static void edit_file PARAMS ((const hash_table_entry *));
/* In the struct below, note that the "_info" field has two different uses
depending on the type of hash table we are in (i.e. either the filenames
hash table or the function names hash table). In the filenames hash table
the info fields of the entries point to the file_info struct which is
associated with each filename (1 per filename). In the function names
hash table, the info field points to the head of a singly linked list of
def_dec_info entries which are all defs or decs of the function whose
name is pointed to by the "symbol" field. Keeping all of the defs/decs
for a given function name on a special list specifically for that function
name makes it quick and easy to find out all of the important information
about a given (named) function. */
struct hash_table_entry_struct {
hash_table_entry * hash_next; /* -> to secondary entries */
const char * symbol; /* -> to the hashed string */
union {
const def_dec_info * _ddip;
file_info * _fip;
} _info;
};
#define ddip _info._ddip
#define fip _info._fip
/* Define a type specifically for our two hash tables. */
typedef hash_table_entry hash_table[HASH_TABLE_SIZE];
/* The following struct holds all of the important information about any
single filename (e.g. file) which we need to know about. */
struct file_info_struct {
const hash_table_entry * hash_entry; /* -> to associated hash entry */
const def_dec_info * defs_decs; /* -> to chain of defs/decs */
time_t mtime; /* Time of last modification. */
};
/* Due to the possibility that functions may return pointers to functions,
(which may themselves have their own parameter lists) and due to the
fact that returned pointers-to-functions may be of type "pointer-to-
function-returning-pointer-to-function" (ad nauseam) we have to keep
an entire chain of ANSI style formal parameter lists for each function.
Normally, for any given function, there will only be one formals list
on the chain, but you never know.
Note that the head of each chain of formals lists is pointed to by the
`f_list_chain' field of the corresponding def_dec_info record.
For any given chain, the item at the head of the chain is the *leftmost*
parameter list seen in the actual C language function declaration. If
there are other members of the chain, then these are linked in left-to-right
order from the head of the chain. */
struct f_list_chain_item_struct {
const f_list_chain_item * chain_next; /* -> to next item on chain */
const char * formals_list; /* -> to formals list string */
};
/* The following struct holds all of the important information about any
single function definition or declaration which we need to know about.
Note that for unprotoize we don't need to know very much because we
never even create records for stuff that we don't intend to convert
(like for instance defs and decs which are already in old K&R format
and "implicit" function declarations). */
struct def_dec_info_struct {
const def_dec_info * next_in_file; /* -> to rest of chain for file */
file_info * file; /* -> file_info for containing file */
int line; /* source line number of def/dec */
const char * ansi_decl; /* -> left end of ansi decl */
hash_table_entry * hash_entry; /* -> hash entry for function name */
unsigned int is_func_def; /* = 0 means this is a declaration */
const def_dec_info * next_for_func; /* -> to rest of chain for func name */
unsigned int f_list_count; /* count of formals lists we expect */
char prototyped; /* = 0 means already prototyped */
#ifndef UNPROTOIZE
const f_list_chain_item * f_list_chain; /* -> chain of formals lists */
const def_dec_info * definition; /* -> def/dec containing related def */
char is_static; /* = 0 means visibility is "extern" */
char is_implicit; /* != 0 for implicit func decl's */
char written; /* != 0 means written for implicit */
#else /* !defined (UNPROTOIZE) */
const char * formal_names; /* -> to list of names of formals */
const char * formal_decls; /* -> to string of formal declarations */
#endif /* !defined (UNPROTOIZE) */
};
/* Pointer to the tail component of the filename by which this program was
invoked. Used everywhere in error and warning messages. */
static const char *pname;
/* Error counter. Will be nonzero if we should give up at the next convenient
stopping point. */
static int errors = 0;
/* Option flags. */
/* ??? These comments should say what the flags mean as well as the options
that set them. */
/* File name to use for running gcc. Allows GCC 2 to be named
something other than gcc. */
static const char *compiler_file_name = "gcc";
static int version_flag = 0; /* Print our version number. */
static int quiet_flag = 0; /* Don't print messages normally. */
static int nochange_flag = 0; /* Don't convert, just say what files
we would have converted. */
static int nosave_flag = 0; /* Don't save the old version. */
static int keep_flag = 0; /* Don't delete the .X files. */
static const char ** compile_params = 0; /* Option string for gcc. */
#ifdef UNPROTOIZE
static const char *indent_string = " "; /* Indentation for newly
inserted parm decls. */
#else /* !defined (UNPROTOIZE) */
static int local_flag = 0; /* Insert new local decls (when?). */
static int global_flag = 0; /* set by -g option */
static int cplusplus_flag = 0; /* Rename converted files to *.C. */
static const char *nondefault_syscalls_dir = 0; /* Dir to look for
SYSCALLS.c.X in. */
#endif /* !defined (UNPROTOIZE) */
/* An index into the compile_params array where we should insert the source
file name when we are ready to exec the C compiler. A zero value indicates
that we have not yet called munge_compile_params. */
static int input_file_name_index = 0;
/* An index into the compile_params array where we should insert the filename
for the aux info file, when we run the C compiler. */
static int aux_info_file_name_index = 0;
/* Count of command line arguments which were "filename" arguments. */
static int n_base_source_files = 0;
/* Points to a malloc'ed list of pointers to all of the filenames of base
source files which were specified on the command line. */
static const char **base_source_filenames;
/* Line number of the line within the current aux_info file that we
are currently processing. Used for error messages in case the prototypes
info file is corrupted somehow. */
static int current_aux_info_lineno;
/* Pointer to the name of the source file currently being converted. */
static const char *convert_filename;
/* Pointer to relative root string (taken from aux_info file) which indicates
which directory the user was in when he did the compilation step that
produced the containing aux_info file. */
static const char *invocation_filename;
/* Pointer to the base of the input buffer that holds the original text for the
source file currently being converted. */
static const char *orig_text_base;
/* Pointer to the byte just beyond the end of the input buffer that holds the
original text for the source file currently being converted. */
static const char *orig_text_limit;
/* Pointer to the base of the input buffer that holds the cleaned text for the
source file currently being converted. */
static const char *clean_text_base;
/* Pointer to the byte just beyond the end of the input buffer that holds the
cleaned text for the source file currently being converted. */
static const char *clean_text_limit;
/* Pointer to the last byte in the cleaned text buffer that we have already
(virtually) copied to the output buffer (or decided to ignore). */
static const char * clean_read_ptr;
/* Pointer to the base of the output buffer that holds the replacement text
for the source file currently being converted. */
static char *repl_text_base;
/* Pointer to the byte just beyond the end of the output buffer that holds the
replacement text for the source file currently being converted. */
static char *repl_text_limit;
/* Pointer to the last byte which has been stored into the output buffer.
The next byte to be stored should be stored just past where this points
to. */
static char * repl_write_ptr;
/* Pointer into the cleaned text buffer for the source file we are currently
converting. This points to the first character of the line that we last
did a "seek_to_line" to (see below). */
static const char *last_known_line_start;
/* Number of the line (in the cleaned text buffer) that we last did a
"seek_to_line" to. Will be one if we just read a new source file
into the cleaned text buffer. */
static int last_known_line_number;
/* The filenames hash table. */
static hash_table filename_primary;
/* The function names hash table. */
static hash_table function_name_primary;
/* The place to keep the recovery address which is used only in cases where
we get hopelessly confused by something in the cleaned original text. */
static jmp_buf source_confusion_recovery;
/* A pointer to the current directory filename (used by abspath). */
static char *cwd_buffer;
/* A place to save the read pointer until we are sure that an individual
attempt at editing will succeed. */
static const char * saved_clean_read_ptr;
/* A place to save the write pointer until we are sure that an individual
attempt at editing will succeed. */
static char * saved_repl_write_ptr;
/* Translate and output an error message. */
static void
notice VPARAMS ((const char *msgid, ...))
{
VA_OPEN (ap, msgid);
VA_FIXEDARG (ap, const char *, msgid);
vfprintf (stderr, _(msgid), ap);
VA_CLOSE (ap);
}
/* Make a copy of a string INPUT with size SIZE. */
static char *
savestring (input, size)
const char *input;
unsigned int size;
{
char *output = (char *) xmalloc (size + 1);
strcpy (output, input);
return output;
}
/* More 'friendly' abort that prints the line and file.
config.h can #define abort fancy_abort if you like that sort of thing. */
void
fancy_abort ()
{
notice ("%s: internal abort\n", pname);
exit (FATAL_EXIT_CODE);
}
/* Make a duplicate of the first N bytes of a given string in a newly
allocated area. */
static char *
dupnstr (s, n)
const char *s;
size_t n;
{
char *ret_val = (char *) xmalloc (n + 1);
strncpy (ret_val, s, n);
ret_val[n] = '\0';
return ret_val;
}
/* Return a pointer to the first occurrence of s2 within s1 or NULL if s2
does not occur within s1. Assume neither s1 nor s2 are null pointers. */
static const char *
substr (s1, s2)
const char *s1;
const char *const s2;
{
for (; *s1 ; s1++)
{
const char *p1;
const char *p2;
int c;
for (p1 = s1, p2 = s2; (c = *p2); p1++, p2++)
if (*p1 != c)
goto outer;
return s1;
outer:
;
}
return 0;
}
/* Read LEN bytes at PTR from descriptor DESC, for file FILENAME,
retrying if necessary. Return the actual number of bytes read. */
static int
safe_read (desc, ptr, len)
int desc;
PTR ptr;
int len;
{
int left = len;
while (left > 0) {
int nchars = read (desc, ptr, left);
if (nchars < 0)
{
#ifdef EINTR
if (errno == EINTR)
continue;
#endif
return nchars;
}
if (nchars == 0)
break;
/* Arithmetic on void pointers is a gcc extension. */
ptr = (char *) ptr + nchars;
left -= nchars;
}
return len - left;
}
/* Write LEN bytes at PTR to descriptor DESC,
retrying if necessary, and treating any real error as fatal. */
static void
safe_write (desc, ptr, len, out_fname)
int desc;
PTR ptr;
int len;
const char *out_fname;
{
while (len > 0) {
int written = write (desc, ptr, len);
if (written < 0)
{
int errno_val = errno;
#ifdef EINTR
if (errno_val == EINTR)
continue;
#endif
notice ("%s: error writing file `%s': %s\n",
pname, shortpath (NULL, out_fname), xstrerror (errno_val));
return;
}
/* Arithmetic on void pointers is a gcc extension. */
ptr = (char *) ptr + written;
len -= written;
}
}
/* Get setup to recover in case the edit we are about to do goes awry. */
static void
save_pointers ()
{
saved_clean_read_ptr = clean_read_ptr;
saved_repl_write_ptr = repl_write_ptr;
}
/* Call this routine to recover our previous state whenever something looks
too confusing in the source code we are trying to edit. */
static void
restore_pointers ()
{
clean_read_ptr = saved_clean_read_ptr;
repl_write_ptr = saved_repl_write_ptr;
}
/* Return true if the given character is a valid identifier character. */
static int
is_id_char (ch)
int ch;
{
return (ISIDNUM (ch) || (ch == '$'));
}
/* Give a message indicating the proper way to invoke this program and then
exit with nonzero status. */
static void
usage ()
{
#ifdef UNPROTOIZE
notice ("%s: usage '%s [ -VqfnkN ] [ -i <istring> ] [ filename ... ]'\n",
pname, pname);
#else /* !defined (UNPROTOIZE) */
notice ("%s: usage '%s [ -VqfnkNlgC ] [ -B <dirname> ] [ filename ... ]'\n",
pname, pname);
#endif /* !defined (UNPROTOIZE) */
exit (FATAL_EXIT_CODE);
}
/* Return true if the given filename (assumed to be an absolute filename)
designates a file residing anywhere beneath any one of the "system"
include directories. */
static int
in_system_include_dir (path)
const char *path;
{
const struct default_include *p;
if (! is_abspath (path))
abort (); /* Must be an absolutized filename. */
for (p = cpp_include_defaults; p->fname; p++)
if (!strncmp (path, p->fname, strlen (p->fname))
&& IS_DIR_SEPARATOR (path[strlen (p->fname)]))
return 1;
return 0;
}
#if 0
/* Return true if the given filename designates a file that the user has
read access to and for which the user has write access to the containing
directory. */
static int
file_could_be_converted (const char *path)
{
char *const dir_name = (char *) alloca (strlen (path) + 1);
if (access (path, R_OK))
return 0;
{
char *dir_last_slash;
strcpy (dir_name, path);
dir_last_slash = strrchr (dir_name, DIR_SEPARATOR);
#ifdef DIR_SEPARATOR_2
{
char *slash;
slash = strrchr (dir_last_slash ? dir_last_slash : dir_name,
DIR_SEPARATOR_2);
if (slash)
dir_last_slash = slash;
}
#endif
if (dir_last_slash)
*dir_last_slash = '\0';
else
abort (); /* Should have been an absolutized filename. */
}
if (access (path, W_OK))
return 0;
return 1;
}
/* Return true if the given filename designates a file that we are allowed
to modify. Files which we should not attempt to modify are (a) "system"
include files, and (b) files which the user doesn't have write access to,
and (c) files which reside in directories which the user doesn't have
write access to. Unless requested to be quiet, give warnings about
files that we will not try to convert for one reason or another. An
exception is made for "system" include files, which we never try to
convert and for which we don't issue the usual warnings. */
static int
file_normally_convertible (const char *path)
{
char *const dir_name = alloca (strlen (path) + 1);
if (in_system_include_dir (path))
return 0;
{
char *dir_last_slash;
strcpy (dir_name, path);
dir_last_slash = strrchr (dir_name, DIR_SEPARATOR);
#ifdef DIR_SEPARATOR_2
{
char *slash;
slash = strrchr (dir_last_slash ? dir_last_slash : dir_name,
DIR_SEPARATOR_2);
if (slash)
dir_last_slash = slash;
}
#endif
if (dir_last_slash)
*dir_last_slash = '\0';
else
abort (); /* Should have been an absolutized filename. */
}
if (access (path, R_OK))
{
if (!quiet_flag)
notice ("%s: warning: no read access for file `%s'\n",
pname, shortpath (NULL, path));
return 0;
}
if (access (path, W_OK))
{
if (!quiet_flag)
notice ("%s: warning: no write access for file `%s'\n",
pname, shortpath (NULL, path));
return 0;
}
if (access (dir_name, W_OK))
{
if (!quiet_flag)
notice ("%s: warning: no write access for dir containing `%s'\n",
pname, shortpath (NULL, path));
return 0;
}
return 1;
}
#endif /* 0 */
#ifndef UNPROTOIZE
/* Return true if the given file_info struct refers to the special SYSCALLS.c.X
file. Return false otherwise. */
static int
is_syscalls_file (fi_p)
const file_info *fi_p;
{
char const *f = fi_p->hash_entry->symbol;
size_t fl = strlen (f), sysl = sizeof (syscalls_filename) - 1;
return sysl <= fl && strcmp (f + fl - sysl, syscalls_filename) == 0;
}
#endif /* !defined (UNPROTOIZE) */
/* Check to see if this file will need to have anything done to it on this
run. If there is nothing in the given file which both needs conversion
and for which we have the necessary stuff to do the conversion, return
false. Otherwise, return true.
Note that (for protoize) it is only valid to call this function *after*
the connections between declarations and definitions have all been made
by connect_defs_and_decs. */
static int
needs_to_be_converted (file_p)
const file_info *file_p;
{
const def_dec_info *ddp;
#ifndef UNPROTOIZE
if (is_syscalls_file (file_p))
return 0;
#endif /* !defined (UNPROTOIZE) */
for (ddp = file_p->defs_decs; ddp; ddp = ddp->next_in_file)
if (
#ifndef UNPROTOIZE
/* ... and if we a protoizing and this function is in old style ... */
!ddp->prototyped
/* ... and if this a definition or is a decl with an associated def ... */
&& (ddp->is_func_def || (!ddp->is_func_def && ddp->definition))
#else /* defined (UNPROTOIZE) */
/* ... and if we are unprotoizing and this function is in new style ... */
ddp->prototyped
#endif /* defined (UNPROTOIZE) */
)
/* ... then the containing file needs converting. */
return -1;
return 0;
}
/* Return 1 if the file name NAME is in a directory
that should be converted. */
static int
directory_specified_p (name)
const char *name;
{
struct string_list *p;
for (p = directory_list; p; p = p->next)
if (!strncmp (name, p->name, strlen (p->name))
&& IS_DIR_SEPARATOR (name[strlen (p->name)]))
{
const char *q = name + strlen (p->name) + 1;
/* If there are more slashes, it's in a subdir, so
this match doesn't count. */
while (*q++)
if (IS_DIR_SEPARATOR (*(q-1)))
goto lose;
return 1;
lose: ;
}
return 0;
}
/* Return 1 if the file named NAME should be excluded from conversion. */
static int
file_excluded_p (name)
const char *name;
{
struct string_list *p;
int len = strlen (name);
for (p = exclude_list; p; p = p->next)
if (!strcmp (name + len - strlen (p->name), p->name)
&& IS_DIR_SEPARATOR (name[len - strlen (p->name) - 1]))
return 1;
return 0;
}
/* Construct a new element of a string_list.
STRING is the new element value, and REST holds the remaining elements. */
static struct string_list *
string_list_cons (string, rest)
const char *string;
struct string_list *rest;
{
struct string_list *temp
= (struct string_list *) xmalloc (sizeof (struct string_list));
temp->next = rest;
temp->name = string;
return temp;
}
/* ??? The GNU convention for mentioning function args in its comments
is to capitalize them. So change "hash_tab_p" to HASH_TAB_P below.
Likewise for all the other functions. */
/* Given a hash table, apply some function to each node in the table. The
table to traverse is given as the "hash_tab_p" argument, and the
function to be applied to each node in the table is given as "func"
argument. */
static void
visit_each_hash_node (hash_tab_p, func)
const hash_table_entry *hash_tab_p;
void (*func) PARAMS ((const hash_table_entry *));
{
const hash_table_entry *primary;
for (primary = hash_tab_p; primary < &hash_tab_p[HASH_TABLE_SIZE]; primary++)
if (primary->symbol)
{
hash_table_entry *second;
(*func)(primary);
for (second = primary->hash_next; second; second = second->hash_next)
(*func) (second);
}
}
/* Initialize all of the fields of a new hash table entry, pointed
to by the "p" parameter. Note that the space to hold the entry
is assumed to have already been allocated before this routine is
called. */
static hash_table_entry *
add_symbol (p, s)
hash_table_entry *p;
const char *s;
{
p->hash_next = NULL;
p->symbol = xstrdup (s);
p->ddip = NULL;
p->fip = NULL;
return p;
}
/* Look for a particular function name or filename in the particular
hash table indicated by "hash_tab_p". If the name is not in the
given hash table, add it. Either way, return a pointer to the
hash table entry for the given name. */
static hash_table_entry *
lookup (hash_tab_p, search_symbol)
hash_table_entry *hash_tab_p;
const char *search_symbol;
{
int hash_value = 0;
const char *search_symbol_char_p = search_symbol;
hash_table_entry *p;
while (*search_symbol_char_p)
hash_value += *search_symbol_char_p++;
hash_value &= hash_mask;
p = &hash_tab_p[hash_value];
if (! p->symbol)
return add_symbol (p, search_symbol);
if (!strcmp (p->symbol, search_symbol))
return p;
while (p->hash_next)
{
p = p->hash_next;
if (!strcmp (p->symbol, search_symbol))
return p;
}
p->hash_next = (hash_table_entry *) xmalloc (sizeof (hash_table_entry));
p = p->hash_next;
return add_symbol (p, search_symbol);
}
/* Throw a def/dec record on the junk heap.
Also, since we are not using this record anymore, free up all of the
stuff it pointed to. */
static void
free_def_dec (p)
def_dec_info *p;
{
free ((NONCONST PTR) p->ansi_decl);
#ifndef UNPROTOIZE
{
const f_list_chain_item * curr;
const f_list_chain_item * next;
for (curr = p->f_list_chain; curr; curr = next)
{
next = curr->chain_next;
free ((NONCONST PTR) curr);
}
}
#endif /* !defined (UNPROTOIZE) */
free (p);
}
/* Unexpand as many macro symbol as we can find.
If the given line must be unexpanded, make a copy of it in the heap and
return a pointer to the unexpanded copy. Otherwise return NULL. */
static char *
unexpand_if_needed (aux_info_line)
const char *aux_info_line;
{
static char *line_buf = 0;
static int line_buf_size = 0;
const unexpansion *unexp_p;
int got_unexpanded = 0;
const char *s;
char *copy_p = line_buf;
if (line_buf == 0)
{
line_buf_size = 1024;
line_buf = (char *) xmalloc (line_buf_size);
}
copy_p = line_buf;
/* Make a copy of the input string in line_buf, expanding as necessary. */
for (s = aux_info_line; *s != '\n'; )
{
for (unexp_p = unexpansions; unexp_p->expanded; unexp_p++)
{
const char *in_p = unexp_p->expanded;
size_t len = strlen (in_p);
if (*s == *in_p && !strncmp (s, in_p, len) && !is_id_char (s[len]))
{
int size = strlen (unexp_p->contracted);
got_unexpanded = 1;
if (copy_p + size - line_buf >= line_buf_size)
{
int offset = copy_p - line_buf;
line_buf_size *= 2;
line_buf_size += size;
line_buf = (char *) xrealloc (line_buf, line_buf_size);
copy_p = line_buf + offset;
}
strcpy (copy_p, unexp_p->contracted);
copy_p += size;
/* Assume that there will not be another replacement required
within the text just replaced. */
s += len;
goto continue_outer;
}
}
if (copy_p - line_buf == line_buf_size)
{
int offset = copy_p - line_buf;
line_buf_size *= 2;
line_buf = (char *) xrealloc (line_buf, line_buf_size);
copy_p = line_buf + offset;
}
*copy_p++ = *s++;
continue_outer: ;
}
if (copy_p + 2 - line_buf >= line_buf_size)
{
int offset = copy_p - line_buf;
line_buf_size *= 2;
line_buf = (char *) xrealloc (line_buf, line_buf_size);
copy_p = line_buf + offset;
}
*copy_p++ = '\n';
*copy_p = '\0';
return (got_unexpanded ? savestring (line_buf, copy_p - line_buf) : 0);
}
/* Return 1 if pathname is absolute. */
static int
is_abspath (path)
const char *path;
{
return (IS_DIR_SEPARATOR (path[0])
#ifdef HAVE_DOS_BASED_FILE_SYSTEM
/* Check for disk name on MS-DOS-based systems. */
|| (path[0] && path[1] == ':' && IS_DIR_SEPARATOR (path[2]))
#endif
);
}
/* Return the absolutized filename for the given relative
filename. Note that if that filename is already absolute, it may
still be returned in a modified form because this routine also
eliminates redundant slashes and single dots and eliminates double
dots to get a shortest possible filename from the given input
filename. The absolutization of relative filenames is made by
assuming that the given filename is to be taken as relative to
the first argument (cwd) or to the current directory if cwd is
NULL. */
static char *
abspath (cwd, rel_filename)
const char *cwd;
const char *rel_filename;
{
/* Setup the current working directory as needed. */
const char *const cwd2 = (cwd) ? cwd : cwd_buffer;
char *const abs_buffer
= (char *) alloca (strlen (cwd2) + strlen (rel_filename) + 2);
char *endp = abs_buffer;
char *outp, *inp;
/* Copy the filename (possibly preceded by the current working
directory name) into the absolutization buffer. */
{
const char *src_p;
if (! is_abspath (rel_filename))
{
src_p = cwd2;
while ((*endp++ = *src_p++))
continue;
*(endp-1) = DIR_SEPARATOR; /* overwrite null */
}
#ifdef HAVE_DOS_BASED_FILE_SYSTEM
else if (IS_DIR_SEPARATOR (rel_filename[0]))
{
/* A path starting with a directory separator is considered absolute
for dos based filesystems, but it's really not -- it's just the
convention used throughout GCC and it works. However, in this
case, we still need to prepend the drive spec from cwd_buffer. */
*endp++ = cwd2[0];
*endp++ = cwd2[1];
}
#endif
src_p = rel_filename;
while ((*endp++ = *src_p++))
continue;
}
/* Now make a copy of abs_buffer into abs_buffer, shortening the
filename (by taking out slashes and dots) as we go. */
outp = inp = abs_buffer;
*outp++ = *inp++; /* copy first slash */
#if defined (apollo) || defined (_WIN32) || defined (__INTERIX)
if (IS_DIR_SEPARATOR (inp[0]))
*outp++ = *inp++; /* copy second slash */
#endif
for (;;)
{
if (!inp[0])
break;
else if (IS_DIR_SEPARATOR (inp[0]) && IS_DIR_SEPARATOR (outp[-1]))
{
inp++;
continue;
}
else if (inp[0] == '.' && IS_DIR_SEPARATOR (outp[-1]))
{
if (!inp[1])
break;
else if (IS_DIR_SEPARATOR (inp[1]))
{
inp += 2;
continue;
}
else if ((inp[1] == '.') && (inp[2] == 0
|| IS_DIR_SEPARATOR (inp[2])))
{
inp += (IS_DIR_SEPARATOR (inp[2])) ? 3 : 2;
outp -= 2;
while (outp >= abs_buffer && ! IS_DIR_SEPARATOR (*outp))
outp--;
if (outp < abs_buffer)
{
/* Catch cases like /.. where we try to backup to a
point above the absolute root of the logical file
system. */
notice ("%s: invalid file name: %s\n",
pname, rel_filename);
exit (FATAL_EXIT_CODE);
}
*++outp = '\0';
continue;
}
}
*outp++ = *inp++;
}
/* On exit, make sure that there is a trailing null, and make sure that
the last character of the returned string is *not* a slash. */
*outp = '\0';
if (IS_DIR_SEPARATOR (outp[-1]))
*--outp = '\0';
/* Make a copy (in the heap) of the stuff left in the absolutization
buffer and return a pointer to the copy. */
return savestring (abs_buffer, outp - abs_buffer);
}
/* Given a filename (and possibly a directory name from which the filename
is relative) return a string which is the shortest possible
equivalent for the corresponding full (absolutized) filename. The
shortest possible equivalent may be constructed by converting the
absolutized filename to be a relative filename (i.e. relative to
the actual current working directory). However if a relative filename
is longer, then the full absolute filename is returned.
KNOWN BUG:
Note that "simple-minded" conversion of any given type of filename (either
relative or absolute) may not result in a valid equivalent filename if any
subpart of the original filename is actually a symbolic link. */
static const char *
shortpath (cwd, filename)
const char *cwd;
const char *filename;
{
char *rel_buffer;
char *rel_buf_p;
char *cwd_p = cwd_buffer;
char *path_p;
int unmatched_slash_count = 0;
size_t filename_len = strlen (filename);
path_p = abspath (cwd, filename);
rel_buf_p = rel_buffer = (char *) xmalloc (filename_len);
while (*cwd_p && IS_SAME_PATH_CHAR (*cwd_p, *path_p))
{
cwd_p++;
path_p++;
}
if (!*cwd_p && (!*path_p || IS_DIR_SEPARATOR (*path_p)))
{
/* whole pwd matched */
if (!*path_p) /* input *is* the current path! */
return ".";
else
return ++path_p;
}
else
{
if (*path_p)
{
--cwd_p;
--path_p;
while (! IS_DIR_SEPARATOR (*cwd_p)) /* backup to last slash */
{
--cwd_p;
--path_p;
}
cwd_p++;
path_p++;
unmatched_slash_count++;
}
/* Find out how many directory levels in cwd were *not* matched. */
while (*cwd_p++)
if (IS_DIR_SEPARATOR (*(cwd_p-1)))
unmatched_slash_count++;
/* Now we know how long the "short name" will be.
Reject it if longer than the input. */
if (unmatched_slash_count * 3 + strlen (path_p) >= filename_len)
return filename;
/* For each of them, put a `../' at the beginning of the short name. */
while (unmatched_slash_count--)
{
/* Give up if the result gets to be longer
than the absolute path name. */
if (rel_buffer + filename_len <= rel_buf_p + 3)
return filename;
*rel_buf_p++ = '.';
*rel_buf_p++ = '.';
*rel_buf_p++ = DIR_SEPARATOR;
}
/* Then tack on the unmatched part of the desired file's name. */
do
{
if (rel_buffer + filename_len <= rel_buf_p)
return filename;
}
while ((*rel_buf_p++ = *path_p++));
--rel_buf_p;
if (IS_DIR_SEPARATOR (*(rel_buf_p-1)))
*--rel_buf_p = '\0';
return rel_buffer;
}
}
/* Lookup the given filename in the hash table for filenames. If it is a
new one, then the hash table info pointer will be null. In this case,
we create a new file_info record to go with the filename, and we initialize
that record with some reasonable values. */
/* FILENAME was const, but that causes a warning on AIX when calling stat.
That is probably a bug in AIX, but might as well avoid the warning. */
static file_info *
find_file (filename, do_not_stat)
const char *filename;
int do_not_stat;
{
hash_table_entry *hash_entry_p;
hash_entry_p = lookup (filename_primary, filename);
if (hash_entry_p->fip)
return hash_entry_p->fip;
else
{
struct stat stat_buf;
file_info *file_p = (file_info *) xmalloc (sizeof (file_info));
/* If we cannot get status on any given source file, give a warning
and then just set its time of last modification to infinity. */
if (do_not_stat)
stat_buf.st_mtime = (time_t) 0;
else
{
if (stat (filename, &stat_buf) == -1)
{
int errno_val = errno;
notice ("%s: %s: can't get status: %s\n",
pname, shortpath (NULL, filename),
xstrerror (errno_val));
stat_buf.st_mtime = (time_t) -1;
}
}
hash_entry_p->fip = file_p;
file_p->hash_entry = hash_entry_p;
file_p->defs_decs = NULL;
file_p->mtime = stat_buf.st_mtime;
return file_p;
}
}
/* Generate a fatal error because some part of the aux_info file is
messed up. */
static void
aux_info_corrupted ()
{
notice ("\n%s: fatal error: aux info file corrupted at line %d\n",
pname, current_aux_info_lineno);
exit (FATAL_EXIT_CODE);
}
/* ??? This comment is vague. Say what the condition is for. */
/* Check to see that a condition is true. This is kind of like an assert. */
static void
check_aux_info (cond)
int cond;
{
if (! cond)
aux_info_corrupted ();
}
/* Given a pointer to the closing right parenthesis for a particular formals
list (in an aux_info file) find the corresponding left parenthesis and
return a pointer to it. */
static const char *
find_corresponding_lparen (p)
const char *p;
{
const char *q;
int paren_depth;
for (paren_depth = 1, q = p-1; paren_depth; q--)
{
switch (*q)
{
case ')':
paren_depth++;
break;
case '(':
paren_depth--;
break;
}
}
return ++q;
}
/* Given a line from an aux info file, and a time at which the aux info
file it came from was created, check to see if the item described in
the line comes from a file which has been modified since the aux info
file was created. If so, return nonzero, else return zero. */
static int
referenced_file_is_newer (l, aux_info_mtime)
const char *l;
time_t aux_info_mtime;
{
const char *p;
file_info *fi_p;
char *filename;
check_aux_info (l[0] == '/');
check_aux_info (l[1] == '*');
check_aux_info (l[2] == ' ');
{
const char *filename_start = p = l + 3;
while (*p != ':'
#ifdef HAVE_DOS_BASED_FILE_SYSTEM
|| (*p == ':' && *p && *(p+1) && IS_DIR_SEPARATOR (*(p+1)))
#endif
)
p++;
filename = (char *) alloca ((size_t) (p - filename_start) + 1);
strncpy (filename, filename_start, (size_t) (p - filename_start));
filename[p-filename_start] = '\0';
}
/* Call find_file to find the file_info record associated with the file
which contained this particular def or dec item. Note that this call
may cause a new file_info record to be created if this is the first time
that we have ever known about this particular file. */
fi_p = find_file (abspath (invocation_filename, filename), 0);
return (fi_p->mtime > aux_info_mtime);
}
/* Given a line of info from the aux_info file, create a new
def_dec_info record to remember all of the important information about
a function definition or declaration.
Link this record onto the list of such records for the particular file in
which it occurred in proper (descending) line number order (for now).
If there is an identical record already on the list for the file, throw
this one away. Doing so takes care of the (useless and troublesome)
duplicates which are bound to crop up due to multiple inclusions of any
given individual header file.
Finally, link the new def_dec record onto the list of such records
pertaining to this particular function name. */
static void
save_def_or_dec (l, is_syscalls)
const char *l;
int is_syscalls;
{
const char *p;
const char *semicolon_p;
def_dec_info *def_dec_p = (def_dec_info *) xmalloc (sizeof (def_dec_info));
#ifndef UNPROTOIZE
def_dec_p->written = 0;
#endif /* !defined (UNPROTOIZE) */
/* Start processing the line by picking off 5 pieces of information from
the left hand end of the line. These are filename, line number,
new/old/implicit flag (new = ANSI prototype format), definition or
declaration flag, and extern/static flag). */
check_aux_info (l[0] == '/');
check_aux_info (l[1] == '*');
check_aux_info (l[2] == ' ');
{
const char *filename_start = p = l + 3;
char *filename;
while (*p != ':'
#ifdef HAVE_DOS_BASED_FILE_SYSTEM
|| (*p == ':' && *p && *(p+1) && IS_DIR_SEPARATOR (*(p+1)))
#endif
)
p++;
filename = (char *) alloca ((size_t) (p - filename_start) + 1);
strncpy (filename, filename_start, (size_t) (p - filename_start));
filename[p-filename_start] = '\0';
/* Call find_file to find the file_info record associated with the file
which contained this particular def or dec item. Note that this call
may cause a new file_info record to be created if this is the first time
that we have ever known about this particular file.
Note that we started out by forcing all of the base source file names
(i.e. the names of the aux_info files with the .X stripped off) into the
filenames hash table, and we simultaneously setup file_info records for
all of these base file names (even if they may be useless later).
The file_info records for all of these "base" file names (properly)
act as file_info records for the "original" (i.e. un-included) files
which were submitted to gcc for compilation (when the -aux-info
option was used). */
def_dec_p->file = find_file (abspath (invocation_filename, filename), is_syscalls);
}
{
const char *line_number_start = ++p;
char line_number[10];
while (*p != ':'
#ifdef HAVE_DOS_BASED_FILE_SYSTEM
|| (*p == ':' && *p && *(p+1) && IS_DIR_SEPARATOR (*(p+1)))
#endif
)
p++;
strncpy (line_number, line_number_start, (size_t) (p - line_number_start));
line_number[p-line_number_start] = '\0';
def_dec_p->line = atoi (line_number);
}
/* Check that this record describes a new-style, old-style, or implicit
definition or declaration. */
p++; /* Skip over the `:'. */
check_aux_info ((*p == 'N') || (*p == 'O') || (*p == 'I'));
/* Is this a new style (ANSI prototyped) definition or declaration? */
def_dec_p->prototyped = (*p == 'N');
#ifndef UNPROTOIZE
/* Is this an implicit declaration? */
def_dec_p->is_implicit = (*p == 'I');
#endif /* !defined (UNPROTOIZE) */
p++;
check_aux_info ((*p == 'C') || (*p == 'F'));
/* Is this item a function definition (F) or a declaration (C). Note that
we treat item taken from the syscalls file as though they were function
definitions regardless of what the stuff in the file says. */
def_dec_p->is_func_def = ((*p++ == 'F') || is_syscalls);
#ifndef UNPROTOIZE
def_dec_p->definition = 0; /* Fill this in later if protoizing. */
#endif /* !defined (UNPROTOIZE) */
check_aux_info (*p++ == ' ');
check_aux_info (*p++ == '*');
check_aux_info (*p++ == '/');
check_aux_info (*p++ == ' ');
#ifdef UNPROTOIZE
check_aux_info ((!strncmp (p, "static", 6)) || (!strncmp (p, "extern", 6)));
#else /* !defined (UNPROTOIZE) */
if (!strncmp (p, "static", 6))
def_dec_p->is_static = -1;
else if (!strncmp (p, "extern", 6))
def_dec_p->is_static = 0;
else
check_aux_info (0); /* Didn't find either `extern' or `static'. */
#endif /* !defined (UNPROTOIZE) */
{
const char *ansi_start = p;
p += 6; /* Pass over the "static" or "extern". */
/* We are now past the initial stuff. Search forward from here to find
the terminating semicolon that should immediately follow the entire
ANSI format function declaration. */
while (*++p != ';')
continue;
semicolon_p = p;
/* Make a copy of the ansi declaration part of the line from the aux_info
file. */
def_dec_p->ansi_decl
= dupnstr (ansi_start, (size_t) ((semicolon_p+1) - ansi_start));
/* Backup and point at the final right paren of the final argument list. */
p--;
#ifndef UNPROTOIZE
def_dec_p->f_list_chain = NULL;
#endif /* !defined (UNPROTOIZE) */
while (p != ansi_start && (p[-1] == ' ' || p[-1] == '\t')) p--;
if (*p != ')')
{
free_def_dec (def_dec_p);
return;
}
}
/* Now isolate a whole set of formal argument lists, one-by-one. Normally,
there will only be one list to isolate, but there could be more. */
def_dec_p->f_list_count = 0;
for (;;)
{
const char *left_paren_p = find_corresponding_lparen (p);
#ifndef UNPROTOIZE
{
f_list_chain_item *cip
= (f_list_chain_item *) xmalloc (sizeof (f_list_chain_item));
cip->formals_list
= dupnstr (left_paren_p + 1, (size_t) (p - (left_paren_p+1)));
/* Add the new chain item at the head of the current list. */
cip->chain_next = def_dec_p->f_list_chain;
def_dec_p->f_list_chain = cip;
}
#endif /* !defined (UNPROTOIZE) */
def_dec_p->f_list_count++;
p = left_paren_p - 2;
/* p must now point either to another right paren, or to the last
character of the name of the function that was declared/defined.
If p points to another right paren, then this indicates that we
are dealing with multiple formals lists. In that case, there
really should be another right paren preceding this right paren. */
if (*p != ')')
break;
else
check_aux_info (*--p == ')');
}
{
const char *past_fn = p + 1;
check_aux_info (*past_fn == ' ');
/* Scan leftwards over the identifier that names the function. */
while (is_id_char (*p))
p--;
p++;
/* p now points to the leftmost character of the function name. */
{
char *fn_string = (char *) alloca (past_fn - p + 1);
strncpy (fn_string, p, (size_t) (past_fn - p));
fn_string[past_fn-p] = '\0';
def_dec_p->hash_entry = lookup (function_name_primary, fn_string);
}
}
/* Look at all of the defs and decs for this function name that we have
collected so far. If there is already one which is at the same
line number in the same file, then we can discard this new def_dec_info
record.
As an extra assurance that any such pair of (nominally) identical
function declarations are in fact identical, we also compare the
ansi_decl parts of the lines from the aux_info files just to be on
the safe side.
This comparison will fail if (for instance) the user was playing
messy games with the preprocessor which ultimately causes one
function declaration in one header file to look differently when
that file is included by two (or more) other files. */
{
const def_dec_info *other;
for (other = def_dec_p->hash_entry->ddip; other; other = other->next_for_func)
{
if (def_dec_p->line == other->line && def_dec_p->file == other->file)
{
if (strcmp (def_dec_p->ansi_decl, other->ansi_decl))
{
notice ("%s:%d: declaration of function `%s' takes different forms\n",
def_dec_p->file->hash_entry->symbol,
def_dec_p->line,
def_dec_p->hash_entry->symbol);
exit (FATAL_EXIT_CODE);
}
free_def_dec (def_dec_p);
return;
}
}
}
#ifdef UNPROTOIZE
/* If we are doing unprotoizing, we must now setup the pointers that will
point to the K&R name list and to the K&R argument declarations list.
Note that if this is only a function declaration, then we should not
expect to find any K&R style formals list following the ANSI-style
formals list. This is because GCC knows that such information is
useless in the case of function declarations (function definitions
are a different story however).
Since we are unprotoizing, we don't need any such lists anyway.
All we plan to do is to delete all characters between ()'s in any
case. */
def_dec_p->formal_names = NULL;
def_dec_p->formal_decls = NULL;
if (def_dec_p->is_func_def)
{
p = semicolon_p;
check_aux_info (*++p == ' ');
check_aux_info (*++p == '/');
check_aux_info (*++p == '*');
check_aux_info (*++p == ' ');
check_aux_info (*++p == '(');
{
const char *kr_names_start = ++p; /* Point just inside '('. */
while (*p++ != ')')
continue;
p--; /* point to closing right paren */
/* Make a copy of the K&R parameter names list. */
def_dec_p->formal_names
= dupnstr (kr_names_start, (size_t) (p - kr_names_start));
}
check_aux_info (*++p == ' ');
p++;
/* p now points to the first character of the K&R style declarations
list (if there is one) or to the star-slash combination that ends
the comment in which such lists get embedded. */
/* Make a copy of the K&R formal decls list and set the def_dec record
to point to it. */
if (*p == '*') /* Are there no K&R declarations? */
{
check_aux_info (*++p == '/');
def_dec_p->formal_decls = "";
}
else
{
const char *kr_decls_start = p;
while (p[0] != '*' || p[1] != '/')
p++;
p--;
check_aux_info (*p == ' ');
def_dec_p->formal_decls
= dupnstr (kr_decls_start, (size_t) (p - kr_decls_start));
}
/* Handle a special case. If we have a function definition marked as
being in "old" style, and if its formal names list is empty, then
it may actually have the string "void" in its real formals list
in the original source code. Just to make sure, we will get setup
to convert such things anyway.
This kludge only needs to be here because of an insurmountable
problem with generating .X files. */
if (!def_dec_p->prototyped && !*def_dec_p->formal_names)
def_dec_p->prototyped = 1;
}
/* Since we are unprotoizing, if this item is already in old (K&R) style,
we can just ignore it. If that is true, throw away the itme now. */
if (!def_dec_p->prototyped)
{
free_def_dec (def_dec_p);
return;
}
#endif /* defined (UNPROTOIZE) */
/* Add this record to the head of the list of records pertaining to this
particular function name. */
def_dec_p->next_for_func = def_dec_p->hash_entry->ddip;
def_dec_p->hash_entry->ddip = def_dec_p;
/* Add this new def_dec_info record to the sorted list of def_dec_info
records for this file. Note that we don't have to worry about duplicates
(caused by multiple inclusions of header files) here because we have
already eliminated duplicates above. */
if (!def_dec_p->file->defs_decs)
{
def_dec_p->file->defs_decs = def_dec_p;
def_dec_p->next_in_file = NULL;
}
else
{
int line = def_dec_p->line;
const def_dec_info *prev = NULL;
const def_dec_info *curr = def_dec_p->file->defs_decs;
const def_dec_info *next = curr->next_in_file;
while (next && (line < curr->line))
{
prev = curr;
curr = next;
next = next->next_in_file;
}
if (line >= curr->line)
{
def_dec_p->next_in_file = curr;
if (prev)
((NONCONST def_dec_info *) prev)->next_in_file = def_dec_p;
else
def_dec_p->file->defs_decs = def_dec_p;
}
else /* assert (next == NULL); */
{
((NONCONST def_dec_info *) curr)->next_in_file = def_dec_p;
/* assert (next == NULL); */
def_dec_p->next_in_file = next;
}
}
}
/* Set up the vector COMPILE_PARAMS which is the argument list for running GCC.
Also set input_file_name_index and aux_info_file_name_index
to the indices of the slots where the file names should go. */
/* We initialize the vector by removing -g, -O, -S, -c, and -o options,
and adding '-aux-info AUXFILE -S -o /dev/null INFILE' at the end. */
static void
munge_compile_params (params_list)
const char *params_list;
{
/* Build up the contents in a temporary vector
that is so big that to has to be big enough. */
const char **temp_params
= (const char **) alloca ((strlen (params_list) + 8) * sizeof (char *));
int param_count = 0;
const char *param;
struct stat st;
temp_params[param_count++] = compiler_file_name;
for (;;)
{
while (ISSPACE ((const unsigned char)*params_list))
params_list++;
if (!*params_list)
break;
param = params_list;
while (*params_list && !ISSPACE ((const unsigned char)*params_list))
params_list++;
if (param[0] != '-')
temp_params[param_count++]
= dupnstr (param, (size_t) (params_list - param));
else
{
switch (param[1])
{
case 'g':
case 'O':
case 'S':
case 'c':
break; /* Don't copy these. */
case 'o':
while (ISSPACE ((const unsigned char)*params_list))
params_list++;
while (*params_list
&& !ISSPACE ((const unsigned char)*params_list))
params_list++;
break;
default:
temp_params[param_count++]
= dupnstr (param, (size_t) (params_list - param));
}
}
if (!*params_list)
break;
}
temp_params[param_count++] = "-aux-info";
/* Leave room for the aux-info file name argument. */
aux_info_file_name_index = param_count;
temp_params[param_count++] = NULL;
temp_params[param_count++] = "-S";
temp_params[param_count++] = "-o";
if ((stat (HOST_BIT_BUCKET, &st) == 0)
&& (!S_ISDIR (st.st_mode))
&& (access (HOST_BIT_BUCKET, W_OK) == 0))
temp_params[param_count++] = HOST_BIT_BUCKET;
else
/* FIXME: This is hardly likely to be right, if HOST_BIT_BUCKET is not
writable. But until this is rejigged to use make_temp_file(), this
is the best we can do. */
temp_params[param_count++] = "/dev/null";
/* Leave room for the input file name argument. */
input_file_name_index = param_count;
temp_params[param_count++] = NULL;
/* Terminate the list. */
temp_params[param_count++] = NULL;
/* Make a copy of the compile_params in heap space. */
compile_params
= (const char **) xmalloc (sizeof (char *) * (param_count+1));
memcpy (compile_params, temp_params, sizeof (char *) * param_count);
}
/* Do a recompilation for the express purpose of generating a new aux_info
file to go with a specific base source file.
The result is a boolean indicating success. */
static int
gen_aux_info_file (base_filename)
const char *base_filename;
{
if (!input_file_name_index)
munge_compile_params ("");
/* Store the full source file name in the argument vector. */
compile_params[input_file_name_index] = shortpath (NULL, base_filename);
/* Add .X to source file name to get aux-info file name. */
compile_params[aux_info_file_name_index] =
concat (compile_params[input_file_name_index], aux_info_suffix, NULL);
if (!quiet_flag)
notice ("%s: compiling `%s'\n",
pname, compile_params[input_file_name_index]);
{
char *errmsg_fmt, *errmsg_arg;
int wait_status, pid;
pid = pexecute (compile_params[0], (char * const *) compile_params,
pname, NULL, &errmsg_fmt, &errmsg_arg,
PEXECUTE_FIRST | PEXECUTE_LAST | PEXECUTE_SEARCH);
if (pid == -1)
{
int errno_val = errno;
fprintf (stderr, "%s: ", pname);
fprintf (stderr, errmsg_fmt, errmsg_arg);
fprintf (stderr, ": %s\n", xstrerror (errno_val));
return 0;
}
pid = pwait (pid, &wait_status, 0);
if (pid == -1)
{
notice ("%s: wait: %s\n", pname, xstrerror (errno));
return 0;
}
if (WIFSIGNALED (wait_status))
{
notice ("%s: subprocess got fatal signal %d\n",
pname, WTERMSIG (wait_status));
return 0;
}
if (WIFEXITED (wait_status))
{
if (WEXITSTATUS (wait_status) != 0)
{
notice ("%s: %s exited with status %d\n",
pname, compile_params[0], WEXITSTATUS (wait_status));
return 0;
}
return 1;
}
abort ();
}
}
/* Read in all of the information contained in a single aux_info file.
Save all of the important stuff for later. */
static void
process_aux_info_file (base_source_filename, keep_it, is_syscalls)
const char *base_source_filename;
int keep_it;
int is_syscalls;
{
size_t base_len = strlen (base_source_filename);
char * aux_info_filename
= (char *) alloca (base_len + strlen (aux_info_suffix) + 1);
char *aux_info_base;
char *aux_info_limit;
char *aux_info_relocated_name;
const char *aux_info_second_line;
time_t aux_info_mtime;
size_t aux_info_size;
int must_create;
/* Construct the aux_info filename from the base source filename. */
strcpy (aux_info_filename, base_source_filename);
strcat (aux_info_filename, aux_info_suffix);
/* Check that the aux_info file exists and is readable. If it does not
exist, try to create it (once only). */
/* If file doesn't exist, set must_create.
Likewise if it exists and we can read it but it is obsolete.
Otherwise, report an error. */
must_create = 0;
/* Come here with must_create set to 1 if file is out of date. */
start_over: ;
if (access (aux_info_filename, R_OK) == -1)
{
if (errno == ENOENT)
{
if (is_syscalls)
{
notice ("%s: warning: missing SYSCALLS file `%s'\n",
pname, aux_info_filename);
return;
}
must_create = 1;
}
else
{
int errno_val = errno;
notice ("%s: can't read aux info file `%s': %s\n",
pname, shortpath (NULL, aux_info_filename),
xstrerror (errno_val));
errors++;
return;
}
}
#if 0 /* There is code farther down to take care of this. */
else
{
struct stat s1, s2;
stat (aux_info_file_name, &s1);
stat (base_source_file_name, &s2);
if (s2.st_mtime > s1.st_mtime)
must_create = 1;
}
#endif /* 0 */
/* If we need a .X file, create it, and verify we can read it. */
if (must_create)
{
if (!gen_aux_info_file (base_source_filename))
{
errors++;
return;
}
if (access (aux_info_filename, R_OK) == -1)
{
int errno_val = errno;
notice ("%s: can't read aux info file `%s': %s\n",
pname, shortpath (NULL, aux_info_filename),
xstrerror (errno_val));
errors++;
return;
}
}
{
struct stat stat_buf;
/* Get some status information about this aux_info file. */
if (stat (aux_info_filename, &stat_buf) == -1)
{
int errno_val = errno;
notice ("%s: can't get status of aux info file `%s': %s\n",
pname, shortpath (NULL, aux_info_filename),
xstrerror (errno_val));
errors++;
return;
}
/* Check whether this aux_info file is zero length. If it is,
then just ignore it and return. */
if ((aux_info_size = stat_buf.st_size) == 0)
return;
/* Get the date/time of last modification for this aux_info file and
remember it. We will have to check that any source files that it
contains information about are at least this old or older. */
aux_info_mtime = stat_buf.st_mtime;
if (!is_syscalls)
{
/* Compare mod time with the .c file; update .X file if obsolete.
The code later on can fail to check the .c file
if it did not directly define any functions. */
if (stat (base_source_filename, &stat_buf) == -1)
{
int errno_val = errno;
notice ("%s: can't get status of source file `%s': %s\n",
pname, shortpath (NULL, base_source_filename),
xstrerror (errno_val));
errors++;
return;
}
if (stat_buf.st_mtime > aux_info_mtime)
{
must_create = 1;
goto start_over;
}
}
}
{
int aux_info_file;
int fd_flags;
/* Open the aux_info file. */
fd_flags = O_RDONLY;
#ifdef O_BINARY
/* Use binary mode to avoid having to deal with different EOL characters. */
fd_flags |= O_BINARY;
#endif
if ((aux_info_file = open (aux_info_filename, fd_flags, 0444 )) == -1)
{
int errno_val = errno;
notice ("%s: can't open aux info file `%s' for reading: %s\n",
pname, shortpath (NULL, aux_info_filename),
xstrerror (errno_val));
return;
}
/* Allocate space to hold the aux_info file in memory. */
aux_info_base = xmalloc (aux_info_size + 1);
aux_info_limit = aux_info_base + aux_info_size;
*aux_info_limit = '\0';
/* Read the aux_info file into memory. */
if (safe_read (aux_info_file, aux_info_base, aux_info_size) !=
(int) aux_info_size)
{
int errno_val = errno;
notice ("%s: error reading aux info file `%s': %s\n",
pname, shortpath (NULL, aux_info_filename),
xstrerror (errno_val));
free (aux_info_base);
close (aux_info_file);
return;
}
/* Close the aux info file. */
if (close (aux_info_file))
{
int errno_val = errno;
notice ("%s: error closing aux info file `%s': %s\n",
pname, shortpath (NULL, aux_info_filename),
xstrerror (errno_val));
free (aux_info_base);
return;
}
}
/* Delete the aux_info file (unless requested not to). If the deletion
fails for some reason, don't even worry about it. */
if (must_create && !keep_it)
if (unlink (aux_info_filename) == -1)
{
int errno_val = errno;
notice ("%s: can't delete aux info file `%s': %s\n",
pname, shortpath (NULL, aux_info_filename),
xstrerror (errno_val));
}
/* Save a pointer into the first line of the aux_info file which
contains the filename of the directory from which the compiler
was invoked when the associated source file was compiled.
This information is used later to help create complete
filenames out of the (potentially) relative filenames in
the aux_info file. */
{
char *p = aux_info_base;
while (*p != ':'
#ifdef HAVE_DOS_BASED_FILE_SYSTEM
|| (*(p+1) && IS_DIR_SEPARATOR (*(p+1)))
#endif
)
p++;
p++;
while (*p == ' ')
p++;
invocation_filename = p; /* Save a pointer to first byte of path. */
while (*p != ' ')
p++;
*p++ = DIR_SEPARATOR;
*p++ = '\0';
while (*p++ != '\n')
continue;
aux_info_second_line = p;
aux_info_relocated_name = 0;
if (! is_abspath (invocation_filename))
{
/* INVOCATION_FILENAME is relative;
append it to BASE_SOURCE_FILENAME's dir. */
char *dir_end;
aux_info_relocated_name = xmalloc (base_len + (p-invocation_filename));
strcpy (aux_info_relocated_name, base_source_filename);
dir_end = strrchr (aux_info_relocated_name, DIR_SEPARATOR);
#ifdef DIR_SEPARATOR_2
{
char *slash;
slash = strrchr (dir_end ? dir_end : aux_info_relocated_name,
DIR_SEPARATOR_2);
if (slash)
dir_end = slash;
}
#endif
if (dir_end)
dir_end++;
else
dir_end = aux_info_relocated_name;
strcpy (dir_end, invocation_filename);
invocation_filename = aux_info_relocated_name;
}
}
{
const char *aux_info_p;
/* Do a pre-pass on the lines in the aux_info file, making sure that all
of the source files referenced in there are at least as old as this
aux_info file itself. If not, go back and regenerate the aux_info
file anew. Don't do any of this for the syscalls file. */
if (!is_syscalls)
{
current_aux_info_lineno = 2;
for (aux_info_p = aux_info_second_line; *aux_info_p; )
{
if (referenced_file_is_newer (aux_info_p, aux_info_mtime))
{
free (aux_info_base);
free (aux_info_relocated_name);
if (keep_it && unlink (aux_info_filename) == -1)
{
int errno_val = errno;
notice ("%s: can't delete file `%s': %s\n",
pname, shortpath (NULL, aux_info_filename),
xstrerror (errno_val));
return;
}
must_create = 1;
goto start_over;
}
/* Skip over the rest of this line to start of next line. */
while (*aux_info_p != '\n')
aux_info_p++;
aux_info_p++;
current_aux_info_lineno++;
}
}
/* Now do the real pass on the aux_info lines. Save their information in
the in-core data base. */
current_aux_info_lineno = 2;
for (aux_info_p = aux_info_second_line; *aux_info_p;)
{
char *unexpanded_line = unexpand_if_needed (aux_info_p);
if (unexpanded_line)
{
save_def_or_dec (unexpanded_line, is_syscalls);
free (unexpanded_line);
}
else
save_def_or_dec (aux_info_p, is_syscalls);
/* Skip over the rest of this line and get to start of next line. */
while (*aux_info_p != '\n')
aux_info_p++;
aux_info_p++;
current_aux_info_lineno++;
}
}
free (aux_info_base);
free (aux_info_relocated_name);
}
#ifndef UNPROTOIZE
/* Check an individual filename for a .c suffix. If the filename has this
suffix, rename the file such that its suffix is changed to .C. This
function implements the -C option. */
static void
rename_c_file (hp)
const hash_table_entry *hp;
{
const char *filename = hp->symbol;
int last_char_index = strlen (filename) - 1;
char *const new_filename = (char *) alloca (strlen (filename)
+ strlen (cplus_suffix) + 1);
/* Note that we don't care here if the given file was converted or not. It
is possible that the given file was *not* converted, simply because there
was nothing in it which actually required conversion. Even in this case,
we want to do the renaming. Note that we only rename files with the .c
suffix (except for the syscalls file, which is left alone). */
if (filename[last_char_index] != 'c' || filename[last_char_index-1] != '.'
|| IS_SAME_PATH (syscalls_absolute_filename, filename))
return;
strcpy (new_filename, filename);
strcpy (&new_filename[last_char_index], cplus_suffix);
if (rename (filename, new_filename) == -1)
{
int errno_val = errno;
notice ("%s: warning: can't rename file `%s' to `%s': %s\n",
pname, shortpath (NULL, filename),
shortpath (NULL, new_filename), xstrerror (errno_val));
errors++;
return;
}
}
#endif /* !defined (UNPROTOIZE) */
/* Take the list of definitions and declarations attached to a particular
file_info node and reverse the order of the list. This should get the
list into an order such that the item with the lowest associated line
number is nearest the head of the list. When these lists are originally
built, they are in the opposite order. We want to traverse them in
normal line number order later (i.e. lowest to highest) so reverse the
order here. */
static void
reverse_def_dec_list (hp)
const hash_table_entry *hp;
{
file_info *file_p = hp->fip;
def_dec_info *prev = NULL;
def_dec_info *current = (def_dec_info *) file_p->defs_decs;
if (!current)
return; /* no list to reverse */
prev = current;
if (! (current = (def_dec_info *) current->next_in_file))
return; /* can't reverse a single list element */
prev->next_in_file = NULL;
while (current)
{
def_dec_info *next = (def_dec_info *) current->next_in_file;
current->next_in_file = prev;
prev = current;
current = next;
}
file_p->defs_decs = prev;
}
#ifndef UNPROTOIZE
/* Find the (only?) extern definition for a particular function name, starting
from the head of the linked list of entries for the given name. If we
cannot find an extern definition for the given function name, issue a
warning and scrounge around for the next best thing, i.e. an extern
function declaration with a prototype attached to it. Note that we only
allow such substitutions for extern declarations and never for static
declarations. That's because the only reason we allow them at all is
to let un-prototyped function declarations for system-supplied library
functions get their prototypes from our own extra SYSCALLS.c.X file which
contains all of the correct prototypes for system functions. */
static const def_dec_info *
find_extern_def (head, user)
const def_dec_info *head;
const def_dec_info *user;
{
const def_dec_info *dd_p;
const def_dec_info *extern_def_p = NULL;
int conflict_noted = 0;
/* Don't act too stupid here. Somebody may try to convert an entire system
in one swell fwoop (rather than one program at a time, as should be done)
and in that case, we may find that there are multiple extern definitions
of a given function name in the entire set of source files that we are
converting. If however one of these definitions resides in exactly the
same source file as the reference we are trying to satisfy then in that
case it would be stupid for us to fail to realize that this one definition
*must* be the precise one we are looking for.
To make sure that we don't miss an opportunity to make this "same file"
leap of faith, we do a prescan of the list of records relating to the
given function name, and we look (on this first scan) *only* for a
definition of the function which is in the same file as the reference
we are currently trying to satisfy. */
for (dd_p = head; dd_p; dd_p = dd_p->next_for_func)
if (dd_p->is_func_def && !dd_p->is_static && dd_p->file == user->file)
return dd_p;
/* Now, since we have not found a definition in the same file as the
reference, we scan the list again and consider all possibilities from
all files. Here we may get conflicts with the things listed in the
SYSCALLS.c.X file, but if that happens it only means that the source
code being converted contains its own definition of a function which
could have been supplied by libc.a. In such cases, we should avoid
issuing the normal warning, and defer to the definition given in the
user's own code. */
for (dd_p = head; dd_p; dd_p = dd_p->next_for_func)
if (dd_p->is_func_def && !dd_p->is_static)
{
if (!extern_def_p) /* Previous definition? */
extern_def_p = dd_p; /* Remember the first definition found. */
else
{
/* Ignore definition just found if it came from SYSCALLS.c.X. */
if (is_syscalls_file (dd_p->file))
continue;
/* Quietly replace the definition previously found with the one
just found if the previous one was from SYSCALLS.c.X. */
if (is_syscalls_file (extern_def_p->file))
{
extern_def_p = dd_p;
continue;
}
/* If we get here, then there is a conflict between two function
declarations for the same function, both of which came from the
user's own code. */
if (!conflict_noted) /* first time we noticed? */
{
conflict_noted = 1;
notice ("%s: conflicting extern definitions of '%s'\n",
pname, head->hash_entry->symbol);
if (!quiet_flag)
{
notice ("%s: declarations of '%s' will not be converted\n",
pname, head->hash_entry->symbol);
notice ("%s: conflict list for '%s' follows:\n",
pname, head->hash_entry->symbol);
fprintf (stderr, "%s: %s(%d): %s\n",
pname,
shortpath (NULL, extern_def_p->file->hash_entry->symbol),
extern_def_p->line, extern_def_p->ansi_decl);
}
}
if (!quiet_flag)
fprintf (stderr, "%s: %s(%d): %s\n",
pname,
shortpath (NULL, dd_p->file->hash_entry->symbol),
dd_p->line, dd_p->ansi_decl);
}
}
/* We want to err on the side of caution, so if we found multiple conflicting
definitions for the same function, treat this the same as if we
had found no definitions (i.e. return NULL). */
if (conflict_noted)
return NULL;
if (!extern_def_p)
{
/* We have no definitions for this function so do the next best thing.
Search for an extern declaration already in prototype form. */
for (dd_p = head; dd_p; dd_p = dd_p->next_for_func)
if (!dd_p->is_func_def && !dd_p->is_static && dd_p->prototyped)
{
extern_def_p = dd_p; /* save a pointer to the definition */
if (!quiet_flag)
notice ("%s: warning: using formals list from %s(%d) for function `%s'\n",
pname,
shortpath (NULL, dd_p->file->hash_entry->symbol),
dd_p->line, dd_p->hash_entry->symbol);
break;
}
/* Gripe about unprototyped function declarations for which we found no
corresponding definition (or other source of prototype information).
Gripe even if the unprototyped declaration we are worried about
exists in a file in one of the "system" include directories. We
can gripe about these because we should have at least found a
corresponding (pseudo) definition in the SYSCALLS.c.X file. If we
didn't, then that means that the SYSCALLS.c.X file is missing some
needed prototypes for this particular system. That is worth telling
the user about! */
if (!extern_def_p)
{
const char *file = user->file->hash_entry->symbol;
if (!quiet_flag)
if (in_system_include_dir (file))
{
/* Why copy this string into `needed' at all?
Why not just use user->ansi_decl without copying? */
char *needed = (char *) alloca (strlen (user->ansi_decl) + 1);
char *p;
strcpy (needed, user->ansi_decl);
p = (NONCONST char *) substr (needed, user->hash_entry->symbol)
+ strlen (user->hash_entry->symbol) + 2;
/* Avoid having ??? in the string. */
*p++ = '?';
*p++ = '?';
*p++ = '?';
strcpy (p, ");");
notice ("%s: %d: `%s' used but missing from SYSCALLS\n",
shortpath (NULL, file), user->line,
needed+7); /* Don't print "extern " */
}
#if 0
else
notice ("%s: %d: warning: no extern definition for `%s'\n",
shortpath (NULL, file), user->line,
user->hash_entry->symbol);
#endif
}
}
return extern_def_p;
}
/* Find the (only?) static definition for a particular function name in a
given file. Here we get the function-name and the file info indirectly
from the def_dec_info record pointer which is passed in. */
static const def_dec_info *
find_static_definition (user)
const def_dec_info *user;
{
const def_dec_info *head = user->hash_entry->ddip;
const def_dec_info *dd_p;
int num_static_defs = 0;
const def_dec_info *static_def_p = NULL;
for (dd_p = head; dd_p; dd_p = dd_p->next_for_func)
if (dd_p->is_func_def && dd_p->is_static && (dd_p->file == user->file))
{
static_def_p = dd_p; /* save a pointer to the definition */
num_static_defs++;
}
if (num_static_defs == 0)
{
if (!quiet_flag)
notice ("%s: warning: no static definition for `%s' in file `%s'\n",
pname, head->hash_entry->symbol,
shortpath (NULL, user->file->hash_entry->symbol));
}
else if (num_static_defs > 1)
{
notice ("%s: multiple static defs of `%s' in file `%s'\n",
pname, head->hash_entry->symbol,
shortpath (NULL, user->file->hash_entry->symbol));
return NULL;
}
return static_def_p;
}
/* Find good prototype style formal argument lists for all of the function
declarations which didn't have them before now.
To do this we consider each function name one at a time. For each function
name, we look at the items on the linked list of def_dec_info records for
that particular name.
Somewhere on this list we should find one (and only one) def_dec_info
record which represents the actual function definition, and this record
should have a nice formal argument list already associated with it.
Thus, all we have to do is to connect up all of the other def_dec_info
records for this particular function name to the special one which has
the full-blown formals list.
Of course it is a little more complicated than just that. See below for
more details. */
static void
connect_defs_and_decs (hp)
const hash_table_entry *hp;
{
const def_dec_info *dd_p;
const def_dec_info *extern_def_p = NULL;
int first_extern_reference = 1;
/* Traverse the list of definitions and declarations for this particular
function name. For each item on the list, if it is a function
definition (either old style or new style) then GCC has already been
kind enough to produce a prototype for us, and it is associated with
the item already, so declare the item as its own associated "definition".
Also, for each item which is only a function declaration, but which
nonetheless has its own prototype already (obviously supplied by the user)
declare the item as its own definition.
Note that when/if there are multiple user-supplied prototypes already
present for multiple declarations of any given function, these multiple
prototypes *should* all match exactly with one another and with the
prototype for the actual function definition. We don't check for this
here however, since we assume that the compiler must have already done
this consistency checking when it was creating the .X files. */
for (dd_p = hp->ddip; dd_p; dd_p = dd_p->next_for_func)
if (dd_p->prototyped)
((NONCONST def_dec_info *) dd_p)->definition = dd_p;
/* Traverse the list of definitions and declarations for this particular
function name. For each item on the list, if it is an extern function
declaration and if it has no associated definition yet, go try to find
the matching extern definition for the declaration.
When looking for the matching function definition, warn the user if we
fail to find one.
If we find more than one function definition, also issue a warning.
Do the search for the matching definition only once per unique function
name (and only when absolutely needed) so that we can avoid putting out
redundant warning messages, and so that we will only put out warning
messages when there is actually a reference (i.e. a declaration) for
which we need to find a matching definition. */
for (dd_p = hp->ddip; dd_p; dd_p = dd_p->next_for_func)
if (!dd_p->is_func_def && !dd_p->is_static && !dd_p->definition)
{
if (first_extern_reference)
{
extern_def_p = find_extern_def (hp->ddip, dd_p);
first_extern_reference = 0;
}
((NONCONST def_dec_info *) dd_p)->definition = extern_def_p;
}
/* Traverse the list of definitions and declarations for this particular
function name. For each item on the list, if it is a static function
declaration and if it has no associated definition yet, go try to find
the matching static definition for the declaration within the same file.
When looking for the matching function definition, warn the user if we
fail to find one in the same file with the declaration, and refuse to
convert this kind of cross-file static function declaration. After all,
this is stupid practice and should be discouraged.
We don't have to worry about the possibility that there is more than one
matching function definition in the given file because that would have
been flagged as an error by the compiler.
Do the search for the matching definition only once per unique
function-name/source-file pair (and only when absolutely needed) so that
we can avoid putting out redundant warning messages, and so that we will
only put out warning messages when there is actually a reference (i.e. a
declaration) for which we actually need to find a matching definition. */
for (dd_p = hp->ddip; dd_p; dd_p = dd_p->next_for_func)
if (!dd_p->is_func_def && dd_p->is_static && !dd_p->definition)
{
const def_dec_info *dd_p2;
const def_dec_info *static_def;
/* We have now found a single static declaration for which we need to
find a matching definition. We want to minimize the work (and the
number of warnings), so we will find an appropriate (matching)
static definition for this declaration, and then distribute it
(as the definition for) any and all other static declarations
for this function name which occur within the same file, and which
do not already have definitions.
Note that a trick is used here to prevent subsequent attempts to
call find_static_definition for a given function-name & file
if the first such call returns NULL. Essentially, we convert
these NULL return values to -1, and put the -1 into the definition
field for each other static declaration from the same file which
does not already have an associated definition.
This makes these other static declarations look like they are
actually defined already when the outer loop here revisits them
later on. Thus, the outer loop will skip over them. Later, we
turn the -1's back to NULL's. */
((NONCONST def_dec_info *) dd_p)->definition =
(static_def = find_static_definition (dd_p))
? static_def
: (const def_dec_info *) -1;
for (dd_p2 = dd_p->next_for_func; dd_p2; dd_p2 = dd_p2->next_for_func)
if (!dd_p2->is_func_def && dd_p2->is_static
&& !dd_p2->definition && (dd_p2->file == dd_p->file))
((NONCONST def_dec_info *) dd_p2)->definition = dd_p->definition;
}
/* Convert any dummy (-1) definitions we created in the step above back to
NULL's (as they should be). */
for (dd_p = hp->ddip; dd_p; dd_p = dd_p->next_for_func)
if (dd_p->definition == (def_dec_info *) -1)
((NONCONST def_dec_info *) dd_p)->definition = NULL;
}
#endif /* !defined (UNPROTOIZE) */
/* Give a pointer into the clean text buffer, return a number which is the
original source line number that the given pointer points into. */
static int
identify_lineno (clean_p)
const char *clean_p;
{
int line_num = 1;
const char *scan_p;
for (scan_p = clean_text_base; scan_p <= clean_p; scan_p++)
if (*scan_p == '\n')
line_num++;
return line_num;
}
/* Issue an error message and give up on doing this particular edit. */
static void
declare_source_confusing (clean_p)
const char *clean_p;
{
if (!quiet_flag)
{
if (clean_p == 0)
notice ("%s: %d: warning: source too confusing\n",
shortpath (NULL, convert_filename), last_known_line_number);
else
notice ("%s: %d: warning: source too confusing\n",
shortpath (NULL, convert_filename),
identify_lineno (clean_p));
}
longjmp (source_confusion_recovery, 1);
}
/* Check that a condition which is expected to be true in the original source
code is in fact true. If not, issue an error message and give up on
converting this particular source file. */
static void
check_source (cond, clean_p)
int cond;
const char *clean_p;
{
if (!cond)
declare_source_confusing (clean_p);
}
/* If we think of the in-core cleaned text buffer as a memory mapped
file (with the variable last_known_line_start acting as sort of a
file pointer) then we can imagine doing "seeks" on the buffer. The
following routine implements a kind of "seek" operation for the in-core
(cleaned) copy of the source file. When finished, it returns a pointer to
the start of a given (numbered) line in the cleaned text buffer.
Note that protoize only has to "seek" in the forward direction on the
in-core cleaned text file buffers, and it never needs to back up.
This routine is made a little bit faster by remembering the line number
(and pointer value) supplied (and returned) from the previous "seek".
This prevents us from always having to start all over back at the top
of the in-core cleaned buffer again. */
static const char *
seek_to_line (n)
int n;
{
if (n < last_known_line_number)
abort ();
while (n > last_known_line_number)
{
while (*last_known_line_start != '\n')
check_source (++last_known_line_start < clean_text_limit, 0);
last_known_line_start++;
last_known_line_number++;
}
return last_known_line_start;
}
/* Given a pointer to a character in the cleaned text buffer, return a pointer
to the next non-whitespace character which follows it. */
static const char *
forward_to_next_token_char (ptr)
const char *ptr;
{
for (++ptr; ISSPACE ((const unsigned char)*ptr);
check_source (++ptr < clean_text_limit, 0))
continue;
return ptr;
}
/* Copy a chunk of text of length `len' and starting at `str' to the current
output buffer. Note that all attempts to add stuff to the current output
buffer ultimately go through here. */
static void
output_bytes (str, len)
const char *str;
size_t len;
{
if ((repl_write_ptr + 1) + len >= repl_text_limit)
{
size_t new_size = (repl_text_limit - repl_text_base) << 1;
char *new_buf;
/* A single doubling may not suffice when LEN is large, so keep
doubling until the buffer can hold the additional LEN bytes. */
while ((size_t) ((repl_write_ptr + 1) - repl_text_base) + len >= new_size)
new_size <<= 1;
new_buf = (char *) xrealloc (repl_text_base, new_size);
repl_write_ptr = new_buf + (repl_write_ptr - repl_text_base);
repl_text_base = new_buf;
repl_text_limit = new_buf + new_size;
}
memcpy (repl_write_ptr + 1, str, len);
repl_write_ptr += len;
}
/* Copy all bytes (except the trailing null) of a null terminated string to
the current output buffer. */
static void
output_string (str)
const char *str;
{
output_bytes (str, strlen (str));
}
/* Copy some characters from the original text buffer to the current output
buffer.
This routine takes a pointer argument `p' which is assumed to be a pointer
into the cleaned text buffer. The bytes which are copied are the `original'
equivalents for the set of bytes between the last value of `clean_read_ptr'
and the argument value `p'.
The set of bytes copied however, comes *not* from the cleaned text buffer,
but rather from the direct counterparts of these bytes within the original
text buffer.
Thus, when this function is called, some bytes from the original text
buffer (which may include original comments and preprocessing directives)
will be copied into the output buffer.
Note that the request implied when this routine is called includes the
byte pointed to by the argument pointer `p'. */
static void
output_up_to (p)
const char *p;
{
size_t copy_length = (size_t) (p - clean_read_ptr);
const char *copy_start = orig_text_base+(clean_read_ptr-clean_text_base)+1;
if (copy_length == 0)
return;
output_bytes (copy_start, copy_length);
clean_read_ptr = p;
}
/* Given a pointer to a def_dec_info record which represents some form of
definition of a function (perhaps a real definition, or in lieu of that
perhaps just a declaration with a full prototype) return true if this
function is one which we should avoid converting. Return false
otherwise. */
static int
other_variable_style_function (ansi_header)
const char *ansi_header;
{
#ifdef UNPROTOIZE
/* See if we have a stdarg function, or a function which has stdarg style
parameters or a stdarg style return type. */
return substr (ansi_header, "...") != 0;
#else /* !defined (UNPROTOIZE) */
/* See if we have a varargs function, or a function which has varargs style
parameters or a varargs style return type. */
const char *p;
int len = strlen (varargs_style_indicator);
for (p = ansi_header; p; )
{
const char *candidate;
if ((candidate = substr (p, varargs_style_indicator)) == 0)
return 0;
else
if (!is_id_char (candidate[-1]) && !is_id_char (candidate[len]))
return 1;
else
p = candidate + 1;
}
return 0;
#endif /* !defined (UNPROTOIZE) */
}
/* Do the editing operation specifically for a function "declaration". Note
that editing for function "definitions" is handled in a separate routine
below. */
static void
edit_fn_declaration (def_dec_p, clean_text_p)
const def_dec_info *def_dec_p;
const char *volatile clean_text_p;
{
const char *start_formals;
const char *end_formals;
const char *function_to_edit = def_dec_p->hash_entry->symbol;
size_t func_name_len = strlen (function_to_edit);
const char *end_of_fn_name;
#ifndef UNPROTOIZE
const f_list_chain_item *this_f_list_chain_item;
const def_dec_info *definition = def_dec_p->definition;
/* If we are protoizing, and if we found no corresponding definition for
this particular function declaration, then just leave this declaration
exactly as it is. */
if (!definition)
return;
/* If we are protoizing, and if the corresponding definition that we found
for this particular function declaration defined an old style varargs
function, then we want to issue a warning and just leave this function
declaration unconverted. */
if (other_variable_style_function (definition->ansi_decl))
{
if (!quiet_flag)
notice ("%s: %d: warning: varargs function declaration not converted\n",
shortpath (NULL, def_dec_p->file->hash_entry->symbol),
def_dec_p->line);
return;
}
#endif /* !defined (UNPROTOIZE) */
/* Set up here to recover from confusing source code detected during this
particular "edit". */
save_pointers ();
if (setjmp (source_confusion_recovery))
{
restore_pointers ();
notice ("%s: declaration of function `%s' not converted\n",
pname, function_to_edit);
return;
}
/* We are editing a function declaration. The line number we did a seek to
contains the comma or semicolon which follows the declaration. Our job
now is to scan backwards looking for the function name. This name *must*
be followed by open paren (ignoring whitespace, of course). We need to
replace everything between that open paren and the corresponding closing
paren. If we are protoizing, we need to insert the prototype-style
formals lists. If we are unprotoizing, we need to just delete everything
between the pairs of opening and closing parens. */
/* First move up to the end of the line. */
while (*clean_text_p != '\n')
check_source (++clean_text_p < clean_text_limit, 0);
clean_text_p--; /* Point to just before the newline character. */
/* Now we can scan backwards for the function name. */
do
{
for (;;)
{
/* Scan leftwards until we find some character which can be
part of an identifier. */
while (!is_id_char (*clean_text_p))
check_source (--clean_text_p > clean_read_ptr, 0);
/* Scan backwards until we find a char that cannot be part of an
identifier. */
while (is_id_char (*clean_text_p))
check_source (--clean_text_p > clean_read_ptr, 0);
/* Having found an "id break", see if the following id is the one
that we are looking for. If so, then exit from this loop. */
if (!strncmp (clean_text_p+1, function_to_edit, func_name_len))
{
char ch = *(clean_text_p + 1 + func_name_len);
/* Must also check to see that the name in the source text
ends where it should (in order to prevent bogus matches
on similar but longer identifiers). */
if (! is_id_char (ch))
break; /* exit from loop */
}
}
/* We have now found the first perfect match for the function name in
our backward search. This may or may not be the actual function
name at the start of the actual function declaration (i.e. we could
have easily been misled). We will try to avoid getting fooled too
often by looking forward for the open paren which should follow the
identifier we just found. We ignore whitespace while hunting. If
the next non-whitespace byte we see is *not* an open left paren,
then we must assume that we have been fooled and we start over
again accordingly. Note that there is no guarantee that, even if
we do see the open paren, we are in the right place.
Programmers do the strangest things sometimes! */
end_of_fn_name = clean_text_p + strlen (def_dec_p->hash_entry->symbol);
start_formals = forward_to_next_token_char (end_of_fn_name);
}
while (*start_formals != '(');
/* start_formals now points to the opening left paren which immediately
follows the name of the function. */
/* Note that there may be several formals lists which need to be modified
due to the possibility that the return type of this function is a
pointer-to-function type. If there are several formals lists, we
convert them in left-to-right order here. */
#ifndef UNPROTOIZE
this_f_list_chain_item = definition->f_list_chain;
#endif /* !defined (UNPROTOIZE) */
for (;;)
{
{
int depth;
end_formals = start_formals + 1;
depth = 1;
for (; depth; check_source (++end_formals < clean_text_limit, 0))
{
switch (*end_formals)
{
case '(':
depth++;
break;
case ')':
depth--;
break;
}
}
end_formals--;
}
/* end_formals now points to the closing right paren of the formals
list whose left paren is pointed to by start_formals. */
/* Now, if we are protoizing, we insert the new ANSI-style formals list
attached to the associated definition of this function. If however
we are unprotoizing, then we simply delete any formals list which
may be present. */
output_up_to (start_formals);
#ifndef UNPROTOIZE
if (this_f_list_chain_item)
{
output_string (this_f_list_chain_item->formals_list);
this_f_list_chain_item = this_f_list_chain_item->chain_next;
}
else
{
if (!quiet_flag)
notice ("%s: warning: too many parameter lists in declaration of `%s'\n",
pname, def_dec_p->hash_entry->symbol);
check_source (0, end_formals); /* leave the declaration intact */
}
#endif /* !defined (UNPROTOIZE) */
clean_read_ptr = end_formals - 1;
/* Now see if it looks like there may be another formals list associated
with the function declaration that we are converting (following the
formals list that we just converted). */
{
const char *another_r_paren = forward_to_next_token_char (end_formals);
if ((*another_r_paren != ')')
|| (*(start_formals = forward_to_next_token_char (another_r_paren)) != '('))
{
#ifndef UNPROTOIZE
if (this_f_list_chain_item)
{
if (!quiet_flag)
notice ("\n%s: warning: too few parameter lists in declaration of `%s'\n",
pname, def_dec_p->hash_entry->symbol);
check_source (0, start_formals); /* leave the decl intact */
}
#endif /* !defined (UNPROTOIZE) */
break;
}
}
/* There does appear to be yet another formals list, so loop around
again, and convert it also. */
}
}
/* Edit a whole group of formals lists, starting with the rightmost one
from some set of formals lists. This routine is called once (from the
outside) for each function declaration which is converted. It is
recursive however, and it calls itself once for each remaining formal
list that lies to the left of the one it was originally called to work
on. Thus, a whole set gets done in right-to-left order.
This routine returns nonzero if it thinks that it should not be trying
to convert this particular function definition (because the name of the
function doesn't match the one expected). */
static int
edit_formals_lists (end_formals, f_list_count, def_dec_p)
const char *end_formals;
unsigned int f_list_count;
const def_dec_info *def_dec_p;
{
const char *start_formals;
int depth;
start_formals = end_formals - 1;
depth = 1;
for (; depth; check_source (--start_formals > clean_read_ptr, 0))
{
switch (*start_formals)
{
case '(':
depth--;
break;
case ')':
depth++;
break;
}
}
start_formals++;
/* start_formals now points to the opening left paren of the formals list. */
f_list_count--;
if (f_list_count)
{
const char *next_end;
/* There should be more formal lists to the left of here. */
next_end = start_formals - 1;
check_source (next_end > clean_read_ptr, 0);
while (ISSPACE ((const unsigned char)*next_end))
check_source (--next_end > clean_read_ptr, 0);
check_source (*next_end == ')', next_end);
check_source (--next_end > clean_read_ptr, 0);
check_source (*next_end == ')', next_end);
if (edit_formals_lists (next_end, f_list_count, def_dec_p))
return 1;
}
/* Check that the function name in the header we are working on is the same
as the one we would expect to find. If not, issue a warning and return
nonzero. */
if (f_list_count == 0)
{
const char *expected = def_dec_p->hash_entry->symbol;
const char *func_name_start;
const char *func_name_limit;
size_t func_name_len;
for (func_name_limit = start_formals-1;
ISSPACE ((const unsigned char)*func_name_limit); )
check_source (--func_name_limit > clean_read_ptr, 0);
for (func_name_start = func_name_limit++;
is_id_char (*func_name_start);
func_name_start--)
check_source (func_name_start > clean_read_ptr, 0);
func_name_start++;
func_name_len = func_name_limit - func_name_start;
if (func_name_len == 0)
check_source (0, func_name_start);
if (func_name_len != strlen (expected)
|| strncmp (func_name_start, expected, func_name_len))
{
notice ("%s: %d: warning: found `%s' but expected `%s'\n",
shortpath (NULL, def_dec_p->file->hash_entry->symbol),
identify_lineno (func_name_start),
dupnstr (func_name_start, func_name_len),
expected);
return 1;
}
}
output_up_to (start_formals);
#ifdef UNPROTOIZE
if (f_list_count == 0)
output_string (def_dec_p->formal_names);
#else /* !defined (UNPROTOIZE) */
{
unsigned f_list_depth;
const f_list_chain_item *flci_p = def_dec_p->f_list_chain;
/* At this point, the current value of f_list_count says how many
links we have to follow through the f_list_chain to get to the
particular formals list that we need to output next. */
for (f_list_depth = 0; f_list_depth < f_list_count; f_list_depth++)
flci_p = flci_p->chain_next;
output_string (flci_p->formals_list);
}
#endif /* !defined (UNPROTOIZE) */
clean_read_ptr = end_formals - 1;
return 0;
}
/* Given a pointer to a byte in the clean text buffer which points to
the beginning of a line that contains a "follower" token for a
function definition header, do whatever is necessary to find the
right closing paren for the rightmost formals list of the function
definition header. */
static const char *
find_rightmost_formals_list (clean_text_p)
const char *clean_text_p;
{
const char *end_formals;
/* We are editing a function definition. The line number we did a seek
to contains the first token which immediately follows the entire set of
formals lists which are part of this particular function definition
header.
Our job now is to scan leftwards in the clean text looking for the
right-paren which is at the end of the function header's rightmost
formals list.
If we ignore whitespace, this right paren should be the first one we
see which is (ignoring whitespace) immediately followed either by the
open curly-brace beginning the function body or by an alphabetic
character (in the case where the function definition is in old (K&R)
style and there are some declarations of formal parameters). */
/* It is possible that the right paren we are looking for is on the
current line (together with its following token). Just in case that
might be true, we start out here by skipping down to the right end of
the current line before starting our scan. */
for (end_formals = clean_text_p; *end_formals != '\n'; end_formals++)
continue;
end_formals--;
#ifdef UNPROTOIZE
/* Now scan backwards while looking for the right end of the rightmost
formals list associated with this function definition. */
{
char ch;
const char *l_brace_p;
/* Look leftward and try to find a right-paren. */
while (*end_formals != ')')
{
if (ISSPACE ((unsigned char)*end_formals))
while (ISSPACE ((unsigned char)*end_formals))
check_source (--end_formals > clean_read_ptr, 0);
else
check_source (--end_formals > clean_read_ptr, 0);
}
ch = *(l_brace_p = forward_to_next_token_char (end_formals));
/* Since we are unprotoizing an ANSI-style (prototyped) function
definition, there had better not be anything (except whitespace)
between the end of the ANSI formals list and the beginning of the
function body (i.e. the '{'). */
check_source (ch == '{', l_brace_p);
}
#else /* !defined (UNPROTOIZE) */
/* Now scan backwards while looking for the right end of the rightmost
formals list associated with this function definition. */
while (1)
{
char ch;
const char *l_brace_p;
/* Look leftward and try to find a right-paren. */
while (*end_formals != ')')
{
if (ISSPACE ((const unsigned char)*end_formals))
while (ISSPACE ((const unsigned char)*end_formals))
check_source (--end_formals > clean_read_ptr, 0);
else
check_source (--end_formals > clean_read_ptr, 0);
}
ch = *(l_brace_p = forward_to_next_token_char (end_formals));
/* Since it is possible that we found a right paren before the starting
'{' of the body which IS NOT the one at the end of the real K&R
formals list (say for instance, we found one embedded inside one of
the old K&R formal parameter declarations) we have to check to be
sure that this is in fact the right paren that we were looking for.
The one we were looking for *must* be followed by either a '{' or
by an alphabetic character, while others *cannot* validly be followed
by such characters. */
if ((ch == '{') || ISALPHA ((unsigned char) ch))
break;
/* At this point, we have found a right paren, but we know that it is
not the one we were looking for, so backup one character and keep
looking. */
check_source (--end_formals > clean_read_ptr, 0);
}
#endif /* !defined (UNPROTOIZE) */
return end_formals;
}
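The discrimination that find_rightmost_formals_list performs (a right paren followed, ignoring whitespace, by `{' is a prototyped header; followed by a letter it is a K&R header with formal declarations) can be sketched in isolation. The helper names below (`next_token_char_sketch', `is_kr_header') are illustrative only and are not part of protoize:

```c
#include <assert.h>
#include <ctype.h>

/* Illustrative sketch (not protoize code): return the first
   non-whitespace character at or after P.  */
char next_token_char_sketch (const char *p)
{
  while (isspace ((unsigned char) *p))
    p++;
  return *p;
}

/* Classify the text following the rightmost right paren of a
   definition header: a '{' means the body starts at once (the
   prototyped case), while a letter means old-style (K&R) formal
   parameter declarations intervene before the body.  */
int is_kr_header (const char *after_rparen)
{
  char ch = next_token_char_sketch (after_rparen);
  return ch != '{' && isalpha ((unsigned char) ch) != 0;
}
```

For `f (x)\nint x;\n{' the first token after the ')' begins with a letter, so the header is treated as K&R; for `f (int x)\n{' it is the open brace.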
#ifndef UNPROTOIZE
/* Insert into the output file a totally new declaration for a function
which (up until now) was being called from within the current block
without having been declared at any point such that the declaration
was visible (i.e. in scope) at the point of the call.
We need to add in explicit declarations for all such function calls
in order to get the full benefit of prototype-based function call
parameter type checking. */
static void
add_local_decl (def_dec_p, clean_text_p)
const def_dec_info *def_dec_p;
const char *clean_text_p;
{
const char *start_of_block;
const char *function_to_edit = def_dec_p->hash_entry->symbol;
/* Don't insert new local explicit declarations unless explicitly requested
to do so. */
if (!local_flag)
return;
/* Setup here to recover from confusing source code detected during this
particular "edit". */
save_pointers ();
if (setjmp (source_confusion_recovery))
{
restore_pointers ();
notice ("%s: local declaration for function `%s' not inserted\n",
pname, function_to_edit);
return;
}
/* We have already done a seek to the start of the line which should
contain *the* open curly brace which begins the block in which we need
to insert an explicit function declaration (to replace the implicit one).
Now we scan that line, starting from the left, until we find the
open curly brace we are looking for. Note that there may actually be
multiple open curly braces on the given line, but we will be happy
with the leftmost one no matter what. */
start_of_block = clean_text_p;
while (*start_of_block != '{' && *start_of_block != '\n')
check_source (++start_of_block < clean_text_limit, 0);
/* Note that the line from the original source could possibly
contain *no* open curly braces! This happens if the line contains
a macro call which expands into a chunk of text which includes a
block (and that block's associated open and close curly braces).
In cases like this, we give up, issue a warning, and do nothing. */
if (*start_of_block != '{')
{
if (!quiet_flag)
notice ("\n%s: %d: warning: can't add declaration of `%s' into macro call\n",
def_dec_p->file->hash_entry->symbol, def_dec_p->line,
def_dec_p->hash_entry->symbol);
return;
}
/* Figure out what a nice (pretty) indentation would be for the new
declaration we are adding. In order to do this, we must scan forward
from the '{' until we find the first line which starts with some
non-whitespace characters (i.e. real "token" material). */
{
const char *ep = forward_to_next_token_char (start_of_block) - 1;
const char *sp;
/* Now we have ep pointing at the rightmost byte of some existing indent
stuff. At least that is the hope.
We can now just scan backwards and find the left end of the existing
indentation string, and then copy it to the output buffer. */
for (sp = ep; ISSPACE ((const unsigned char)*sp) && *sp != '\n'; sp--)
continue;
/* Now write out the open { which began this block, and any following
trash up to and including the last byte of the existing indent that
we just found. */
output_up_to (ep);
/* Now we go ahead and insert the new declaration at this point.
If the definition of the given function is in the same file that we
are currently editing, and if its full ANSI declaration normally
would start with the keyword `extern', suppress the `extern'. */
{
const char *decl = def_dec_p->definition->ansi_decl;
if ((*decl == 'e') && (def_dec_p->file == def_dec_p->definition->file))
decl += 7; /* skip over `extern ' */
output_string (decl);
}
/* Finally, write out a new indent string, just like the preceding one
that we found. This will typically include a newline as the first
character of the indent string. */
output_bytes (sp, (size_t) (ep - sp) + 1);
}
}
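The indent-recycling trick used above (scan forward from the `{' to the first real token, scan back to the preceding newline, and reuse those bytes as the indent for the inserted declaration) can be sketched as a standalone helper. `copy_block_indent' is a hypothetical name, not a protoize routine; as in the real code, the copied indent typically begins with the newline itself:

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Illustrative sketch (not protoize code): given the text that
   follows a block's open brace, copy the indentation of the block's
   first real line (including its leading newline) into OUT.  */
void copy_block_indent (const char *after_brace, char *out, size_t outsz)
{
  const char *ep = after_brace;
  while (isspace ((unsigned char) *ep))    /* find the first real token */
    ep++;
  if (ep == after_brace || outsz == 0)     /* no indent to copy */
    {
      if (outsz)
        out[0] = '\0';
      return;
    }
  const char *sp = ep - 1;                 /* rightmost indent byte */
  while (sp > after_brace && *sp != '\n')  /* back up to the newline */
    sp--;
  size_t n = (size_t) (ep - sp);           /* bytes from newline to token */
  if (n >= outsz)
    n = outsz - 1;
  memcpy (out, sp, n);
  out[n] = '\0';
}
```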
/* Given a pointer to a file_info record, and a pointer to the beginning
of a line (in the clean text buffer) which is assumed to contain the
first "follower" token for the first function definition header in the
given file, find a good place to insert some new global function
declarations (which will replace scattered and imprecise implicit ones)
and then insert the new explicit declaration at that point in the file. */
static void
add_global_decls (file_p, clean_text_p)
const file_info *file_p;
const char *clean_text_p;
{
const def_dec_info *dd_p;
const char *scan_p;
/* Setup here to recover from confusing source code detected during this
particular "edit". */
save_pointers ();
if (setjmp (source_confusion_recovery))
{
restore_pointers ();
notice ("%s: global declarations for file `%s' not inserted\n",
pname, shortpath (NULL, file_p->hash_entry->symbol));
return;
}
/* Start by finding a good location for adding the new explicit function
declarations. To do this, we scan backwards, ignoring whitespace
and comments and other junk until we find either a semicolon, or until
we hit the beginning of the file. */
scan_p = find_rightmost_formals_list (clean_text_p);
for (;; --scan_p)
{
if (scan_p < clean_text_base)
break;
check_source (scan_p > clean_read_ptr, 0);
if (*scan_p == ';')
break;
}
/* scan_p now points either to a semicolon, or to just before the start
of the whole file. */
/* Now scan forward for the first non-whitespace character. In theory,
this should be the first character of the following function definition
header. We will put in the added declarations just prior to that. */
scan_p++;
while (ISSPACE ((const unsigned char)*scan_p))
scan_p++;
scan_p--;
output_up_to (scan_p);
/* Now write out full prototypes for all of the things that had been
implicitly declared in this file (but only those for which we were
actually able to find unique matching definitions). Avoid duplicates
by marking things that we write out as we go. */
{
int some_decls_added = 0;
for (dd_p = file_p->defs_decs; dd_p; dd_p = dd_p->next_in_file)
if (dd_p->is_implicit && dd_p->definition && !dd_p->definition->written)
{
const char *decl = dd_p->definition->ansi_decl;
/* If the function for which we are inserting a declaration is
actually defined later in the same file, then suppress the
leading `extern' keyword (if there is one). */
if (*decl == 'e' && (dd_p->file == dd_p->definition->file))
decl += 7; /* skip over `extern ' */
output_string ("\n");
output_string (decl);
some_decls_added = 1;
((NONCONST def_dec_info *) dd_p->definition)->written = 1;
}
if (some_decls_added)
output_string ("\n\n");
}
/* Unmark all of the definitions that we just marked. */
for (dd_p = file_p->defs_decs; dd_p; dd_p = dd_p->next_in_file)
if (dd_p->definition)
((NONCONST def_dec_info *) dd_p->definition)->written = 0;
}
#endif /* !defined (UNPROTOIZE) */
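Both insertion paths above suppress a leading `extern' by skipping 7 bytes (the keyword plus the following space) when the declaration lands in the same file as its definition. A hedged standalone version of that micro-transformation, using an explicit prefix test rather than the single-character `*decl == 'e'` check in the real code:

```c
#include <assert.h>
#include <string.h>

/* Illustrative sketch (not protoize code): drop a leading "extern "
   from an ANSI declaration string, leaving other strings untouched.  */
const char *strip_extern (const char *decl)
{
  if (strncmp (decl, "extern ", 7) == 0)
    return decl + 7;    /* skip the keyword and its trailing space */
  return decl;
}
```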
/* Do the editing operation specifically for a function "definition". Note
that editing operations for function "declarations" are handled by a
separate routine above. */
static void
edit_fn_definition (def_dec_p, clean_text_p)
const def_dec_info *def_dec_p;
const char *clean_text_p;
{
const char *end_formals;
const char *function_to_edit = def_dec_p->hash_entry->symbol;
/* Setup here to recover from confusing source code detected during this
particular "edit". */
save_pointers ();
if (setjmp (source_confusion_recovery))
{
restore_pointers ();
notice ("%s: definition of function `%s' not converted\n",
pname, function_to_edit);
return;
}
end_formals = find_rightmost_formals_list (clean_text_p);
/* end_formals now points to the closing right paren of the rightmost
formals list which is actually part of the `header' of the function
definition that we are converting. */
/* If the header of this function definition looks like it declares a
function with a variable number of arguments, and if the way it does
that is different from the way we would like it (i.e. varargs vs.
stdarg) then issue a warning and leave the header unconverted. */
if (other_variable_style_function (def_dec_p->ansi_decl))
{
if (!quiet_flag)
notice ("%s: %d: warning: definition of %s not converted\n",
shortpath (NULL, def_dec_p->file->hash_entry->symbol),
identify_lineno (end_formals),
other_var_style);
output_up_to (end_formals);
return;
}
if (edit_formals_lists (end_formals, def_dec_p->f_list_count, def_dec_p))
{
restore_pointers ();
notice ("%s: definition of function `%s' not converted\n",
pname, function_to_edit);
return;
}
/* Have to output the last right paren because this never gets flushed by
edit_formals_lists. */
output_up_to (end_formals);
#ifdef UNPROTOIZE
{
const char *decl_p;
const char *semicolon_p;
const char *limit_p;
const char *scan_p;
int had_newlines = 0;
/* Now write out the K&R style formal declarations, one per line. */
decl_p = def_dec_p->formal_decls;
limit_p = decl_p + strlen (decl_p);
for (;decl_p < limit_p; decl_p = semicolon_p + 2)
{
for (semicolon_p = decl_p; *semicolon_p != ';'; semicolon_p++)
continue;
output_string ("\n");
output_string (indent_string);
output_bytes (decl_p, (size_t) ((semicolon_p + 1) - decl_p));
}
/* If there are no newlines between the end of the formals list and the
start of the body, we should insert one now. */
for (scan_p = end_formals+1; *scan_p != '{'; )
{
if (*scan_p == '\n')
{
had_newlines = 1;
break;
}
check_source (++scan_p < clean_text_limit, 0);
}
if (!had_newlines)
output_string ("\n");
}
#else /* !defined (UNPROTOIZE) */
/* If we are protoizing, there may be some flotsam & jetsam (like comments
and preprocessing directives) after the old formals list but before
the following `{', and we would like to preserve that stuff while effectively
deleting the existing K&R formal parameter declarations. We do so here
in a rather tricky way. Basically, we white out any stuff *except*
the comments/pp-directives in the original text buffer, then, if there
is anything in this area *other* than whitespace, we output it. */
{
const char *end_formals_orig;
const char *start_body;
const char *start_body_orig;
const char *scan;
const char *scan_orig;
int have_flotsam = 0;
int have_newlines = 0;
for (start_body = end_formals + 1; *start_body != '{';)
check_source (++start_body < clean_text_limit, 0);
end_formals_orig = orig_text_base + (end_formals - clean_text_base);
start_body_orig = orig_text_base + (start_body - clean_text_base);
scan = end_formals + 1;
scan_orig = end_formals_orig + 1;
for (; scan < start_body; scan++, scan_orig++)
{
if (*scan == *scan_orig)
{
have_newlines |= (*scan_orig == '\n');
/* Leave identical whitespace alone. */
if (!ISSPACE ((const unsigned char)*scan_orig))
*((NONCONST char *) scan_orig) = ' '; /* identical - so whiteout */
}
else
have_flotsam = 1;
}
if (have_flotsam)
output_bytes (end_formals_orig + 1,
(size_t) (start_body_orig - end_formals_orig) - 1);
else
if (have_newlines)
output_string ("\n");
else
output_string (" ");
clean_read_ptr = start_body - 1;
}
#endif /* !defined (UNPROTOIZE) */
}
/* Clean up the clean text buffer. Do this by converting comments and
preprocessing directives into spaces. Also convert line continuations
into whitespace. Also, whiteout string and character literals. */
static void
do_cleaning (new_clean_text_base, new_clean_text_limit)
char *new_clean_text_base;
const char *new_clean_text_limit;
{
char *scan_p;
int non_whitespace_since_newline = 0;
for (scan_p = new_clean_text_base; scan_p < new_clean_text_limit; scan_p++)
{
switch (*scan_p)
{
case '/': /* Handle comments. */
if (scan_p[1] != '*')
goto regular;
non_whitespace_since_newline = 1;
scan_p[0] = ' ';
scan_p[1] = ' ';
scan_p += 2;
while (scan_p[1] != '/' || scan_p[0] != '*')
{
if (!ISSPACE ((const unsigned char)*scan_p))
*scan_p = ' ';
if (++scan_p >= new_clean_text_limit)
abort ();
}
*scan_p++ = ' ';
*scan_p = ' ';
break;
case '#': /* Handle pp directives. */
if (non_whitespace_since_newline)
goto regular;
*scan_p = ' ';
while (scan_p[1] != '\n' || scan_p[0] == '\\')
{
if (!ISSPACE ((const unsigned char)*scan_p))
*scan_p = ' ';
if (++scan_p >= new_clean_text_limit)
abort ();
}
*scan_p++ = ' ';
break;
case '\'': /* Handle character literals. */
non_whitespace_since_newline = 1;
while (scan_p[1] != '\'' || scan_p[0] == '\\')
{
if (scan_p[0] == '\\'
&& !ISSPACE ((const unsigned char) scan_p[1]))
scan_p[1] = ' ';
if (!ISSPACE ((const unsigned char)*scan_p))
*scan_p = ' ';
if (++scan_p >= new_clean_text_limit)
abort ();
}
*scan_p++ = ' ';
break;
case '"': /* Handle string literals. */
non_whitespace_since_newline = 1;
while (scan_p[1] != '"' || scan_p[0] == '\\')
{
if (scan_p[0] == '\\'
&& !ISSPACE ((const unsigned char) scan_p[1]))
scan_p[1] = ' ';
if (!ISSPACE ((const unsigned char)*scan_p))
*scan_p = ' ';
if (++scan_p >= new_clean_text_limit)
abort ();
}
if (!ISSPACE ((const unsigned char)*scan_p))
*scan_p = ' ';
scan_p++;
break;
case '\\': /* Handle line continuations. */
if (scan_p[1] != '\n')
goto regular;
*scan_p = ' ';
break;
case '\n':
non_whitespace_since_newline = 0; /* Reset. */
break;
case ' ':
case '\v':
case '\t':
case '\r':
case '\f':
case '\b':
break; /* Whitespace characters. */
default:
regular:
non_whitespace_since_newline = 1;
break;
}
}
}
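The essential idea of do_cleaning (white out anything that could confuse a later lexical scan, while preserving byte offsets and newlines so line numbers still line up) can be shown on the comment case alone. This is a simplified, hypothetical sketch; the real routine also handles pp-directives, string and character literals, and line continuations:

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Illustrative sketch (not protoize code): overwrite the interiors of
   C comments with spaces, preserving whitespace (and thus newlines and
   the total length) so that offsets into the buffer stay valid.  */
void white_out_comments (char *s)
{
  for (; *s; s++)
    if (s[0] == '/' && s[1] == '*')
      {
        s[0] = s[1] = ' ';
        for (s += 2; !(s[0] == '*' && s[1] == '/'); s++)
          {
            if (*s == '\0')
              return;            /* unterminated comment: give up */
            if (!isspace ((unsigned char) *s))
              *s = ' ';
          }
        s[0] = s[1] = ' ';       /* blank the closing star-slash */
        s++;                     /* outer loop's s++ passes the '/' slot */
      }
}
```

After cleaning, parens and braces hidden inside comments can no longer fool the backward paren-matching scans.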
/* Given a pointer to the closing right parenthesis for a particular formals
list (in the clean text buffer) find the corresponding left parenthesis
and return a pointer to it. */
static const char *
careful_find_l_paren (p)
const char *p;
{
const char *q;
int paren_depth;
for (paren_depth = 1, q = p-1; paren_depth; check_source (--q >= clean_text_base, 0))
{
switch (*q)
{
case ')':
paren_depth++;
break;
case '(':
paren_depth--;
break;
}
}
return ++q;
}
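The depth-counting leftward scan above is the same technique used throughout this file. A self-contained sketch that returns an index instead of a pointer and reports imbalance instead of longjmp-ing (`find_matching_l_paren' is an illustrative name, not a protoize routine):

```c
#include <assert.h>

/* Illustrative sketch (not protoize code): given the index POS of a
   ')' in TEXT, walk leftward counting nesting depth until the
   matching '(' is found; return its index, or -1 if unbalanced.  */
long find_matching_l_paren (const char *text, long pos)
{
  long q = pos - 1;
  int depth = 1;
  for (; depth && q >= 0; q--)
    switch (text[q])
      {
      case ')':
        depth++;
        break;
      case '(':
        depth--;
        break;
      }
  return depth ? -1 : q + 1;   /* loop overshoots by one, as in the original */
}
```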
/* Scan the clean text buffer for cases of function definitions that we
don't really know about because they were preprocessed out when the
aux info files were created.
In this version of protoize/unprotoize we just give a warning for each
one found. A later version may be able to at least unprotoize such
missed items.
Note that we may easily find all function definitions simply by
looking for places where there is a left paren which is (ignoring
whitespace) immediately followed by either a left-brace or by an
upper or lower case letter. Whenever we find this combination, we
have also found a function definition header.
Finding function *declarations* using syntactic clues is much harder.
I will probably try to do this in a later version though. */
static void
scan_for_missed_items (file_p)
const file_info *file_p;
{
static const char *scan_p;
const char *limit = clean_text_limit - 3;
static const char *backup_limit;
backup_limit = clean_text_base - 1;
for (scan_p = clean_text_base; scan_p < limit; scan_p++)
{
if (*scan_p == ')')
{
static const char *last_r_paren;
const char *ahead_p;
last_r_paren = scan_p;
for (ahead_p = scan_p + 1; ISSPACE ((const unsigned char)*ahead_p); )
check_source (++ahead_p < limit, limit);
scan_p = ahead_p - 1;
if (ISALPHA ((const unsigned char)*ahead_p) || *ahead_p == '{')
{
const char *last_l_paren;
const int lineno = identify_lineno (ahead_p);
if (setjmp (source_confusion_recovery))
continue;
/* We know we have a function definition header. Now skip
leftwards over all of its associated formals lists. */
do
{
last_l_paren = careful_find_l_paren (last_r_paren);
for (last_r_paren = last_l_paren-1;
ISSPACE ((const unsigned char)*last_r_paren); )
check_source (--last_r_paren >= backup_limit, backup_limit);
}
while (*last_r_paren == ')');
if (is_id_char (*last_r_paren))
{
const char *id_limit = last_r_paren + 1;
const char *id_start;
size_t id_length;
const def_dec_info *dd_p;
for (id_start = id_limit-1; is_id_char (*id_start); )
check_source (--id_start >= backup_limit, backup_limit);
id_start++;
backup_limit = id_start;
if ((id_length = (size_t) (id_limit - id_start)) == 0)
goto not_missed;
{
char *func_name = (char *) alloca (id_length + 1);
static const char * const stmt_keywords[]
= { "if", "else", "do", "while", "for", "switch", "case", "return", 0 };
const char * const *stmt_keyword;
strncpy (func_name, id_start, id_length);
func_name[id_length] = '\0';
/* We must check here to see if we are actually looking at
a statement rather than an actual function call. */
for (stmt_keyword = stmt_keywords; *stmt_keyword; stmt_keyword++)
if (!strcmp (func_name, *stmt_keyword))
goto not_missed;
#if 0
notice ("%s: found definition of `%s' at %s(%d)\n",
pname,
func_name,
shortpath (NULL, file_p->hash_entry->symbol),
identify_lineno (id_start));
#endif /* 0 */
/* We really should check for a match of the function name
here also, but why bother. */
for (dd_p = file_p->defs_decs; dd_p; dd_p = dd_p->next_in_file)
if (dd_p->is_func_def && dd_p->line == lineno)
goto not_missed;
/* If we make it here, then we did not know about this
function definition. */
notice ("%s: %d: warning: `%s' excluded by preprocessing\n",
shortpath (NULL, file_p->hash_entry->symbol),
identify_lineno (id_start), func_name);
notice ("%s: function definition not converted\n",
pname);
}
not_missed: ;
}
}
}
}
}
/* Do all editing operations for a single source file (either a "base" file
or an "include" file). To do this we read the file into memory, keep a
virgin copy there, make another cleaned in-core copy of the original file
(i.e. one in which all of the comments and preprocessing directives have
been replaced with whitespace), then use these two in-core copies of the
file to make a new edited in-core copy of the file. Finally, rename the
original file (as a way of saving it), and then write the edited version
of the file from core to a disk file of the same name as the original.
Note that the trick of making a copy of the original sans comments &
preprocessing directives makes the editing a whole lot easier. */
static void
edit_file (hp)
const hash_table_entry *hp;
{
struct stat stat_buf;
const file_info *file_p = hp->fip;
char *new_orig_text_base;
char *new_orig_text_limit;
char *new_clean_text_base;
char *new_clean_text_limit;
size_t orig_size;
size_t repl_size;
int first_definition_in_file;
/* If we are not supposed to be converting this file, or if there is
nothing in there which needs converting, just skip this file. */
if (!needs_to_be_converted (file_p))
return;
convert_filename = file_p->hash_entry->symbol;
/* Convert a file if it is in a directory where we want conversion
and the file is not excluded. */
if (!directory_specified_p (convert_filename)
|| file_excluded_p (convert_filename))
{
if (!quiet_flag
#ifdef UNPROTOIZE
/* Don't even mention "system" include files unless we are
protoizing. If we are protoizing, we mention these as a
gentle way of prodding the user to convert his "system"
include files to prototype format. */
&& !in_system_include_dir (convert_filename)
#endif /* defined (UNPROTOIZE) */
)
notice ("%s: `%s' not converted\n",
pname, shortpath (NULL, convert_filename));
return;
}
/* Let the user know what we are up to. */
if (nochange_flag)
notice ("%s: would convert file `%s'\n",
pname, shortpath (NULL, convert_filename));
else
notice ("%s: converting file `%s'\n",
pname, shortpath (NULL, convert_filename));
fflush (stderr);
/* Find out the size (in bytes) of the original file. */
if (stat (convert_filename, &stat_buf) == -1)
{
int errno_val = errno;
notice ("%s: can't get status for file `%s': %s\n",
pname, shortpath (NULL, convert_filename),
xstrerror (errno_val));
return;
}
orig_size = stat_buf.st_size;
/* Allocate a buffer to hold the original text. */
orig_text_base = new_orig_text_base = (char *) xmalloc (orig_size + 2);
orig_text_limit = new_orig_text_limit = new_orig_text_base + orig_size;
/* Allocate a buffer to hold the cleaned-up version of the original text. */
clean_text_base = new_clean_text_base = (char *) xmalloc (orig_size + 2);
clean_text_limit = new_clean_text_limit = new_clean_text_base + orig_size;
clean_read_ptr = clean_text_base - 1;
/* Allocate a buffer that will hopefully be large enough to hold the entire
converted output text. As an initial guess for the maximum size of the
output buffer, use 125% of the size of the original + some extra. This
buffer can be expanded later as needed. */
repl_size = orig_size + (orig_size >> 2) + 4096;
repl_text_base = (char *) xmalloc (repl_size + 2);
repl_text_limit = repl_text_base + repl_size - 1;
repl_write_ptr = repl_text_base - 1;
{
int input_file;
int fd_flags;
/* Open the file to be converted in READ ONLY mode. */
fd_flags = O_RDONLY;
#ifdef O_BINARY
/* Use binary mode to avoid having to deal with different EOL characters. */
fd_flags |= O_BINARY;
#endif
if ((input_file = open (convert_filename, fd_flags, 0444)) == -1)
{
int errno_val = errno;
notice ("%s: can't open file `%s' for reading: %s\n",
pname, shortpath (NULL, convert_filename),
xstrerror (errno_val));
return;
}
/* Read the entire original source text file into the original text buffer
in one swell fwoop. Then figure out where the end of the text is and
make sure that it ends with a newline followed by a null. */
if (safe_read (input_file, new_orig_text_base, orig_size) !=
(int) orig_size)
{
int errno_val = errno;
close (input_file);
notice ("\n%s: error reading input file `%s': %s\n",
pname, shortpath (NULL, convert_filename),
xstrerror (errno_val));
return;
}
close (input_file);
}
if (orig_size == 0 || orig_text_limit[-1] != '\n')
{
*new_orig_text_limit++ = '\n';
orig_text_limit++;
}
/* Create the cleaned up copy of the original text. */
memcpy (new_clean_text_base, orig_text_base,
(size_t) (orig_text_limit - orig_text_base));
do_cleaning (new_clean_text_base, new_clean_text_limit);
#if 0
{
int clean_file;
size_t clean_size = orig_text_limit - orig_text_base;
char *const clean_filename = (char *) alloca (strlen (convert_filename) + 6 + 1);
/* Open (and create) the clean file. */
strcpy (clean_filename, convert_filename);
strcat (clean_filename, ".clean");
if ((clean_file = creat (clean_filename, 0666)) == -1)
{
int errno_val = errno;
notice ("%s: can't create/open clean file `%s': %s\n",
pname, shortpath (NULL, clean_filename),
xstrerror (errno_val));
return;
}
/* Write the clean file. */
safe_write (clean_file, new_clean_text_base, clean_size, clean_filename);
close (clean_file);
}
#endif /* 0 */
/* Do a simplified scan of the input looking for things that were not
mentioned in the aux info files because of the fact that they were
in a region of the source which was preprocessed-out (via #if or
via #ifdef). */
scan_for_missed_items (file_p);
/* Setup to do line-oriented forward seeking in the clean text buffer. */
last_known_line_number = 1;
last_known_line_start = clean_text_base;
/* Now get down to business and make all of the necessary edits. */
{
const def_dec_info *def_dec_p;
first_definition_in_file = 1;
def_dec_p = file_p->defs_decs;
for (; def_dec_p; def_dec_p = def_dec_p->next_in_file)
{
const char *clean_text_p = seek_to_line (def_dec_p->line);
/* clean_text_p now points to the first character of the line which
contains the `terminator' for the declaration or definition that
we are about to process. */
#ifndef UNPROTOIZE
if (global_flag && def_dec_p->is_func_def && first_definition_in_file)
{
add_global_decls (def_dec_p->file, clean_text_p);
first_definition_in_file = 0;
}
/* Don't edit this item if it is already in prototype format or if it
is a function declaration and we have found no corresponding
definition. */
if (def_dec_p->prototyped
|| (!def_dec_p->is_func_def && !def_dec_p->definition))
continue;
#endif /* !defined (UNPROTOIZE) */
if (def_dec_p->is_func_def)
edit_fn_definition (def_dec_p, clean_text_p);
else
#ifndef UNPROTOIZE
if (def_dec_p->is_implicit)
add_local_decl (def_dec_p, clean_text_p);
else
#endif /* !defined (UNPROTOIZE) */
edit_fn_declaration (def_dec_p, clean_text_p);
}
}
/* Finalize things. Output the last trailing part of the original text. */
output_up_to (clean_text_limit - 1);
/* If this is just a test run, stop now and just deallocate the buffers. */
if (nochange_flag)
{
free (new_orig_text_base);
free (new_clean_text_base);
free (repl_text_base);
return;
}
/* Change the name of the original input file. This is just a quick way of
saving the original file. */
if (!nosave_flag)
{
char *new_filename
= (char *) xmalloc (strlen (convert_filename) + strlen (save_suffix) + 2);
strcpy (new_filename, convert_filename);
#ifdef __MSDOS__
/* MSDOS filenames are restricted to 8.3 format, so we save `foo.c'
as `foo.<save_suffix>'. */
new_filename[strlen (convert_filename) - 1] = '\0';
#endif
strcat (new_filename, save_suffix);
/* Don't overwrite existing file. */
if (access (new_filename, F_OK) == 0)
{
if (!quiet_flag)
notice ("%s: warning: file `%s' already saved in `%s'\n",
pname,
shortpath (NULL, convert_filename),
shortpath (NULL, new_filename));
}
else if (rename (convert_filename, new_filename) == -1)
{
int errno_val = errno;
notice ("%s: can't link file `%s' to `%s': %s\n",
pname,
shortpath (NULL, convert_filename),
shortpath (NULL, new_filename),
xstrerror (errno_val));
return;
}
}
if (unlink (convert_filename) == -1)
{
int errno_val = errno;
/* The file may have already been renamed. */
if (errno_val != ENOENT)
{
notice ("%s: can't delete file `%s': %s\n",
pname, shortpath (NULL, convert_filename),
xstrerror (errno_val));
return;
}
}
{
int output_file;
/* Open (and create) the output file. */
if ((output_file = creat (convert_filename, 0666)) == -1)
{
int errno_val = errno;
notice ("%s: can't create/open output file `%s': %s\n",
pname, shortpath (NULL, convert_filename),
xstrerror (errno_val));
return;
}
#ifdef O_BINARY
/* Use binary mode to avoid changing the existing EOL character. */
setmode (output_file, O_BINARY);
#endif
/* Write the output file. */
{
unsigned int out_size = (repl_write_ptr + 1) - repl_text_base;
safe_write (output_file, repl_text_base, out_size, convert_filename);
}
close (output_file);
}
/* Deallocate the conversion buffers. */
free (new_orig_text_base);
free (new_clean_text_base);
free (repl_text_base);
/* Change the mode of the output file to match the original file. */
/* The cast avoids an erroneous warning on AIX. */
if (chmod (convert_filename, stat_buf.st_mode) == -1)
{
int errno_val = errno;
notice ("%s: can't change mode of file `%s': %s\n",
pname, shortpath (NULL, convert_filename),
xstrerror (errno_val));
}
/* Note: We would try to change the owner and group of the output file
to match those of the input file here, except that may not be a good
thing to do because it might be misleading. Also, it might not even
be possible to do that (on BSD systems with quotas for instance). */
}
/* Do all of the individual steps needed to do the protoization (or
unprotoization) of the files referenced in the aux_info files given
in the command line. */
static void
do_processing ()
{
const char * const *base_pp;
const char * const * const end_pps
= &base_source_filenames[n_base_source_files];
#ifndef UNPROTOIZE
int syscalls_len;
#endif /* !defined (UNPROTOIZE) */
/* One-by-one, check (and create if necessary), open, and read all of the
stuff in each aux_info file. After reading each aux_info file, the
aux_info_file just read will be automatically deleted unless the
keep_flag is set. */
for (base_pp = base_source_filenames; base_pp < end_pps; base_pp++)
process_aux_info_file (*base_pp, keep_flag, 0);
#ifndef UNPROTOIZE
/* Also open and read the special SYSCALLS.c aux_info file which gives us
the prototypes for all of the standard system-supplied functions. */
if (nondefault_syscalls_dir)
{
syscalls_absolute_filename
= (char *) xmalloc (strlen (nondefault_syscalls_dir) + 1
+ sizeof (syscalls_filename));
strcpy (syscalls_absolute_filename, nondefault_syscalls_dir);
}
else
{
GET_ENVIRONMENT (default_syscalls_dir, "GCC_EXEC_PREFIX");
if (!default_syscalls_dir)
{
default_syscalls_dir = standard_exec_prefix;
}
syscalls_absolute_filename
= (char *) xmalloc (strlen (default_syscalls_dir) + 0
+ strlen (target_machine) + 1
+ strlen (target_version) + 1
+ sizeof (syscalls_filename));
strcpy (syscalls_absolute_filename, default_syscalls_dir);
strcat (syscalls_absolute_filename, target_machine);
strcat (syscalls_absolute_filename, "/");
strcat (syscalls_absolute_filename, target_version);
strcat (syscalls_absolute_filename, "/");
}
syscalls_len = strlen (syscalls_absolute_filename);
if (! IS_DIR_SEPARATOR (*(syscalls_absolute_filename + syscalls_len - 1)))
{
*(syscalls_absolute_filename + syscalls_len++) = DIR_SEPARATOR;
*(syscalls_absolute_filename + syscalls_len) = '\0';
}
strcat (syscalls_absolute_filename, syscalls_filename);
/* Call process_aux_info_file in such a way that it does not try to
delete the SYSCALLS aux_info file. */
process_aux_info_file (syscalls_absolute_filename, 1, 1);
#endif /* !defined (UNPROTOIZE) */
/* When we first read in all of the information from the aux_info files
we saved in it descending line number order, because that was likely to
be faster. Now however, we want the chains of def & dec records to
appear in ascending line number order as we get further away from the
file_info record that they hang from. The following line causes all of
these lists to be rearranged into ascending line number order. */
visit_each_hash_node (filename_primary, reverse_def_dec_list);
#ifndef UNPROTOIZE
/* Now do the "real" work. The following line causes each declaration record
to be "visited". For each of these nodes, an attempt is made to match
up the function declaration with a corresponding function definition,
which should have a full prototype-format formals list with it. Once
these match-ups are made, the conversion of the function declarations
to prototype format can be made. */
visit_each_hash_node (function_name_primary, connect_defs_and_decs);
#endif /* !defined (UNPROTOIZE) */
/* Now convert each file that can be converted (and needs to be). */
visit_each_hash_node (filename_primary, edit_file);
#ifndef UNPROTOIZE
/* If we are working in cplusplus mode, try to rename all .c files to .C
files. Don't panic if some of the renames don't work. */
if (cplusplus_flag && !nochange_flag)
visit_each_hash_node (filename_primary, rename_c_file);
#endif /* !defined (UNPROTOIZE) */
}
static const struct option longopts[] =
{
{"version", 0, 0, 'V'},
{"file_name", 0, 0, 'p'},
{"quiet", 0, 0, 'q'},
{"silent", 0, 0, 'q'},
{"force", 0, 0, 'f'},
{"keep", 0, 0, 'k'},
{"nosave", 0, 0, 'N'},
{"nochange", 0, 0, 'n'},
{"compiler-options", 1, 0, 'c'},
{"exclude", 1, 0, 'x'},
{"directory", 1, 0, 'd'},
#ifdef UNPROTOIZE
{"indent", 1, 0, 'i'},
#else
{"local", 0, 0, 'l'},
{"global", 0, 0, 'g'},
{"c++", 0, 0, 'C'},
{"syscalls-dir", 1, 0, 'B'},
#endif
{0, 0, 0, 0}
};
extern int main PARAMS ((int, char **const));
int
main (argc, argv)
int argc;
char **const argv;
{
int longind;
int c;
const char *params = "";
pname = strrchr (argv[0], DIR_SEPARATOR);
#ifdef DIR_SEPARATOR_2
{
char *slash;
slash = strrchr (pname ? pname : argv[0], DIR_SEPARATOR_2);
if (slash)
pname = slash;
}
#endif
pname = pname ? pname+1 : argv[0];
#ifdef SIGCHLD
/* We *MUST* set SIGCHLD to SIG_DFL so that the wait4() call will
receive the signal. A different setting is inheritable. */
signal (SIGCHLD, SIG_DFL);
#endif
gcc_init_libintl ();
cwd_buffer = getpwd ();
if (!cwd_buffer)
{
notice ("%s: cannot get working directory: %s\n",
pname, xstrerror(errno));
return (FATAL_EXIT_CODE);
}
/* By default, convert the files in the current directory. */
directory_list = string_list_cons (cwd_buffer, NULL);
while ((c = getopt_long (argc, argv,
#ifdef UNPROTOIZE
"c:d:i:knNp:qvVx:",
#else
"B:c:Cd:gklnNp:qvVx:",
#endif
longopts, &longind)) != EOF)
{
if (c == 0) /* Long option. */
c = longopts[longind].val;
switch (c)
{
case 'p':
compiler_file_name = optarg;
break;
case 'd':
directory_list
= string_list_cons (abspath (NULL, optarg), directory_list);
break;
case 'x':
exclude_list = string_list_cons (optarg, exclude_list);
break;
case 'v':
case 'V':
version_flag = 1;
break;
case 'q':
quiet_flag = 1;
break;
#if 0
case 'f':
force_flag = 1;
break;
#endif
case 'n':
nochange_flag = 1;
keep_flag = 1;
break;
case 'N':
nosave_flag = 1;
break;
case 'k':
keep_flag = 1;
break;
case 'c':
params = optarg;
break;
#ifdef UNPROTOIZE
case 'i':
indent_string = optarg;
break;
#else /* !defined (UNPROTOIZE) */
case 'l':
local_flag = 1;
break;
case 'g':
global_flag = 1;
break;
case 'C':
cplusplus_flag = 1;
break;
case 'B':
nondefault_syscalls_dir = optarg;
break;
#endif /* !defined (UNPROTOIZE) */
default:
usage ();
}
}
/* Set up compile_params based on -p and -c options. */
munge_compile_params (params);
n_base_source_files = argc - optind;
/* Now actually make a list of the base source filenames. */
base_source_filenames
= (const char **) xmalloc ((n_base_source_files + 1) * sizeof (char *));
n_base_source_files = 0;
for (; optind < argc; optind++)
{
const char *path = abspath (NULL, argv[optind]);
int len = strlen (path);
if (path[len-1] == 'c' && path[len-2] == '.')
base_source_filenames[n_base_source_files++] = path;
else
{
notice ("%s: input file names must have .c suffixes: %s\n",
pname, shortpath (NULL, path));
errors++;
}
}
#ifndef UNPROTOIZE
/* We are only interested in the very first identifier token in the
definition of `va_list', so if there is more junk after that first
identifier token, delete it from the `varargs_style_indicator'. */
{
const char *cp;
for (cp = varargs_style_indicator; ISIDNUM (*cp); cp++)
continue;
if (*cp != 0)
varargs_style_indicator = savestring (varargs_style_indicator,
cp - varargs_style_indicator);
}
#endif /* !defined (UNPROTOIZE) */
if (errors)
usage ();
else
{
if (version_flag)
fprintf (stderr, "%s: %s\n", pname, version_string);
do_processing ();
}
return (errors ? FATAL_EXIT_CODE : SUCCESS_EXIT_CODE);
}
```
|
Journey Through China () is a 2015 French drama film directed by Zoltan Mayer. For her starring role, Yolande Moreau was nominated for Best Actress at the 6th Magritte Awards.
Plot
Liliane travels to China for the first time in her life to repatriate the body of her son, who died in an accident. Immersed in a culture far removed from her own, she finds that this trip, marked by mourning, becomes a journey of initiation.
Cast
Yolande Moreau as Liliane Rousseau
Jingjing Qu as Danjie
Dong Fu Lin as Chao
Ling Zi Liu as Li Shu Lan
Qing Dong as Ruo Yu
Yilin Yang as Yun
André Wilms as Richard Rousseau
Chenwei Li as Master Sanchen
Sophie Chen as Mademoiselle Yang
Production
The movie was shot in China.
References
External links
2015 films
French drama films
2010s French-language films
2015 drama films
2010s French films
|
The Courtyard is a 1995 made-for-television thriller film that premiered on the Showtime network. Directed by Fred Walton, the film uses a screenplay by Wendy Biller and Christopher Hawthorne. It centers on a yuppie architect who suspects his neighbor is a murderer. The film stars Andrew McCarthy as Jonathan, Mädchen Amick as Lauren, Cheech Marin as Angel Steiner, David Packer as Jack Morgan, Bonnie Bartlett as Cathleen Fitzgerald, and Vincent Schiavelli as Ivan. Judith Dolan designed the costumes for the production.
References
External links
1995 television films
1995 films
1995 thriller films
American thriller films
Showtime (TV network) films
American thriller television films
1990s American films
|
```javascript
const aws = require('aws-sdk'),
s3 = new aws.S3();
exports.handler = function (event, context) {
'use strict';
const eventRecord = event.Records && event.Records.length && event.Records[0];
console.log('got record', eventRecord);
if (eventRecord) {
if (eventRecord.eventSource === 'aws:s3' && eventRecord.s3) {
s3.deleteObject({Bucket: eventRecord.s3.bucket.name, Key: eventRecord.s3.object.key}, context.done);
}
}
};
```
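The fields the handler reads can be sketched with a mock S3 delete-notification record; the bucket and key values below are hypothetical stand-ins for a real event payload:

```javascript
// Minimal sketch of the event shape the handler above inspects.
const mockEvent = {
  Records: [{
    eventSource: 'aws:s3',
    s3: {
      bucket: { name: 'example-bucket' },
      object: { key: 'uploads/example.txt' }
    }
  }]
};

// Same guard pattern as the handler: only proceed when a record exists.
const record = mockEvent.Records && mockEvent.Records.length && mockEvent.Records[0];
console.log(record.s3.bucket.name); // the Bucket passed to deleteObject
console.log(record.s3.object.key);  // the Key passed to deleteObject
```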
|
Brian Fitzgerald (born 22 March 1947) is an Irish politician. He was a Labour Party Teachta Dála (TD) for the Meath constituency from 1992 to 1997, and since 1999 has been an independent member of Meath County Council.
Career
Previously a SIPTU trade union official, Fitzgerald was elected to Dáil Éireann for Meath during the swing to Labour at the 1992 general election. He had contested the seat unsuccessfully at the November 1982 and 1989 general elections.
Like many other Labour TDs elected in 1992, he lost his seat at the 1997 general election. His seat was taken by John V. Farrelly of Fine Gael whom he had defeated in 1992.
Fitzgerald was an opponent of the Labour Party's decision to merge with Democratic Left and resigned from the party in 1999. He was re-elected to Meath County Council as an independent councillor for the Dunshaughlin local electoral area at the 1999 local elections. At the 2002 and 2007 general elections, he stood as an independent candidate in the Meath and Meath East constituencies respectively, but failed to win a seat on either occasion.
Fitzgerald was re-elected at the 2019 local elections as an independent candidate for Ratoath electoral area.
References
1947 births
Living people
Labour Party (Ireland) TDs
Trade unionists from County Meath
Members of the 27th Dáil
Members of Meath County Council
Independent local councillors in the Republic of Ireland
Independent candidates in Dáil elections
|
```c
/***************************************************************************/
/* */
/* ftmac.h */
/* */
/* Additional Mac-specific API. */
/* */
/* Just van Rossum, David Turner, Robert Wilhelm, and Werner Lemberg. */
/* */
/* This file is part of the FreeType project, and may only be used, */
/* modified, and distributed under the terms of the FreeType project */
/* license, LICENSE.TXT. By continuing to use, modify, or distribute */
/* this file you indicate that you have read the license and */
/* understand and accept it fully. */
/* */
/***************************************************************************/
/***************************************************************************/
/* */
/* NOTE: Include this file after FT_FREETYPE_H and after any */
/* Mac-specific headers (because this header uses Mac types such as */
/* Handle, FSSpec, FSRef, etc.) */
/* */
/***************************************************************************/
#ifndef __FTMAC_H__
#define __FTMAC_H__
#include <ft2build.h>
FT_BEGIN_HEADER
/* gcc-3.4.1 and later can warn about functions tagged as deprecated */
#ifndef FT_DEPRECATED_ATTRIBUTE
#if defined(__GNUC__) && \
((__GNUC__ >= 4) || ((__GNUC__ == 3) && (__GNUC_MINOR__ >= 1)))
#define FT_DEPRECATED_ATTRIBUTE __attribute__((deprecated))
#else
#define FT_DEPRECATED_ATTRIBUTE
#endif
#endif
/*************************************************************************/
/* */
/* <Section> */
/* mac_specific */
/* */
/* <Title> */
/* Mac Specific Interface */
/* */
/* <Abstract> */
/* Only available on the Macintosh. */
/* */
/* <Description> */
/* The following definitions are only available if FreeType is */
/* compiled on a Macintosh. */
/* */
/*************************************************************************/
/*************************************************************************/
/* */
/* <Function> */
/* FT_New_Face_From_FOND */
/* */
/* <Description> */
/* Create a new face object from a FOND resource. */
/* */
/* <InOut> */
/* library :: A handle to the library resource. */
/* */
/* <Input> */
/* fond :: A FOND resource. */
/* */
/* face_index :: Only supported for the -1 `sanity check' special */
/* case. */
/* */
/* <Output> */
/* aface :: A handle to a new face object. */
/* */
/* <Return> */
/* FreeType error code. 0~means success. */
/* */
/* <Note> */
/* This function can be used to create @FT_Face objects from fonts */
/* that are installed in the system as follows. */
/* */
/* { */
/* fond = GetResource( 'FOND', fontName ); */
/* error = FT_New_Face_From_FOND( library, fond, 0, &face ); */
/* } */
/* */
FT_EXPORT( FT_Error )
FT_New_Face_From_FOND( FT_Library library,
Handle fond,
FT_Long face_index,
FT_Face *aface )
FT_DEPRECATED_ATTRIBUTE;
/*************************************************************************/
/* */
/* <Function> */
/* FT_GetFile_From_Mac_Name */
/* */
/* <Description> */
/* Return an FSSpec for the disk file containing the named font. */
/* */
/* <Input> */
/* fontName :: Mac OS name of the font (e.g., Times New Roman */
/* Bold). */
/* */
/* <Output> */
/* pathSpec :: FSSpec to the file. For passing to */
/* @FT_New_Face_From_FSSpec. */
/* */
/* face_index :: Index of the face. For passing to */
/* @FT_New_Face_From_FSSpec. */
/* */
/* <Return> */
/* FreeType error code. 0~means success. */
/* */
FT_EXPORT( FT_Error )
FT_GetFile_From_Mac_Name( const char* fontName,
FSSpec* pathSpec,
FT_Long* face_index )
FT_DEPRECATED_ATTRIBUTE;
/*************************************************************************/
/* */
/* <Function> */
/* FT_GetFile_From_Mac_ATS_Name */
/* */
/* <Description> */
/* Return an FSSpec for the disk file containing the named font. */
/* */
/* <Input> */
/* fontName :: Mac OS name of the font in ATS framework. */
/* */
/* <Output> */
/* pathSpec :: FSSpec to the file. For passing to */
/* @FT_New_Face_From_FSSpec. */
/* */
/* face_index :: Index of the face. For passing to */
/* @FT_New_Face_From_FSSpec. */
/* */
/* <Return> */
/* FreeType error code. 0~means success. */
/* */
FT_EXPORT( FT_Error )
FT_GetFile_From_Mac_ATS_Name( const char* fontName,
FSSpec* pathSpec,
FT_Long* face_index )
FT_DEPRECATED_ATTRIBUTE;
/*************************************************************************/
/* */
/* <Function> */
/* FT_GetFilePath_From_Mac_ATS_Name */
/* */
/* <Description> */
/* Return a pathname of the disk file and face index for given font */
/* name that is handled by ATS framework. */
/* */
/* <Input> */
/* fontName :: Mac OS name of the font in ATS framework. */
/* */
/* <Output> */
/* path :: Buffer to store pathname of the file. For passing */
/* to @FT_New_Face. The client must allocate this */
/* buffer before calling this function. */
/* */
/* maxPathSize :: Lengths of the buffer `path' that client allocated. */
/* */
/* face_index :: Index of the face. For passing to @FT_New_Face. */
/* */
/* <Return> */
/* FreeType error code. 0~means success. */
/* */
FT_EXPORT( FT_Error )
FT_GetFilePath_From_Mac_ATS_Name( const char* fontName,
UInt8* path,
UInt32 maxPathSize,
FT_Long* face_index )
FT_DEPRECATED_ATTRIBUTE;
/*************************************************************************/
/* */
/* <Function> */
/* FT_New_Face_From_FSSpec */
/* */
/* <Description> */
/* Create a new face object from a given resource and typeface index */
/* using an FSSpec to the font file. */
/* */
/* <InOut> */
/* library :: A handle to the library resource. */
/* */
/* <Input> */
/* spec :: FSSpec to the font file. */
/* */
/* face_index :: The index of the face within the resource. The */
/* first face has index~0. */
/* <Output> */
/* aface :: A handle to a new face object. */
/* */
/* <Return> */
/* FreeType error code. 0~means success. */
/* */
/* <Note> */
/* @FT_New_Face_From_FSSpec is identical to @FT_New_Face except */
/* it accepts an FSSpec instead of a path. */
/* */
FT_EXPORT( FT_Error )
FT_New_Face_From_FSSpec( FT_Library library,
const FSSpec *spec,
FT_Long face_index,
FT_Face *aface )
FT_DEPRECATED_ATTRIBUTE;
/*************************************************************************/
/* */
/* <Function> */
/* FT_New_Face_From_FSRef */
/* */
/* <Description> */
/* Create a new face object from a given resource and typeface index */
/* using an FSRef to the font file. */
/* */
/* <InOut> */
/* library :: A handle to the library resource. */
/* */
/* <Input> */
/* spec :: FSRef to the font file. */
/* */
/* face_index :: The index of the face within the resource. The */
/* first face has index~0. */
/* <Output> */
/* aface :: A handle to a new face object. */
/* */
/* <Return> */
/* FreeType error code. 0~means success. */
/* */
/* <Note> */
/* @FT_New_Face_From_FSRef is identical to @FT_New_Face except */
/* it accepts an FSRef instead of a path. */
/* */
FT_EXPORT( FT_Error )
FT_New_Face_From_FSRef( FT_Library library,
const FSRef *ref,
FT_Long face_index,
FT_Face *aface )
FT_DEPRECATED_ATTRIBUTE;
/* */
FT_END_HEADER
#endif /* __FTMAC_H__ */
/* END */
```
|
```python
__author__ = "saeedamen" # Saeed Amen
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#
import pytest
import pandas as pd
from findatapy.timeseries import Filter
def test_filtering_by_dates():
filter = Filter()
# filter S&P500 between specific working days
start_date = '01 Oct 2008'
finish_date = '29 Oct 2008'
# read CSV from disk, and make sure to parse dates
df = pd.read_csv("S&P500.csv", parse_dates=['Date'], index_col=['Date'])
df = filter.filter_time_series_by_date(start_date=start_date,
finish_date=finish_date,
data_frame=df)
assert df.index[0] == pd.to_datetime(start_date)
assert df.index[-1] == pd.to_datetime(finish_date)
if __name__ == '__main__':
pytest.main()
```
|
Ugar Island is an island locality in the Torres Strait Island Region, Queensland, Australia. It consists of a single island, Stephens Island in the Torres Strait. In the , Ugar Island had a population of 85 people.
Education
Stephen Island Campus is a primary (Early Childhood-6) campus of Tagai State College ().
References
Torres Strait Island Region
Localities in Queensland
|
```go
package veneur
import (
"net"
"os"
"syscall"
"golang.org/x/sys/unix"
)
// see also path_to_url#L279
func NewSocket(addr *net.UDPAddr, recvBuf int, reuseport bool) (net.PacketConn, error) {
// default to AF_INET6 to be equivalent to net.ListenUDP()
domain := unix.AF_INET6
if addr.IP.To4() != nil {
domain = unix.AF_INET
}
sockFD, err := unix.Socket(domain, unix.SOCK_DGRAM|syscall.SOCK_CLOEXEC|syscall.SOCK_NONBLOCK, 0)
if err != nil {
return nil, err
}
// unix.SO_REUSEPORT is not defined on linux 386/amd64, see
// path_to_url
if reuseport {
if err := unix.SetsockoptInt(sockFD, unix.SOL_SOCKET, 0xf, 1); err != nil {
unix.Close(sockFD)
return nil, err
}
}
if err = unix.SetsockoptInt(sockFD, unix.SOL_SOCKET, unix.SO_RCVBUF, recvBuf); err != nil {
unix.Close(sockFD)
return nil, err
}
var sa unix.Sockaddr
if domain == unix.AF_INET {
sockaddr := &unix.SockaddrInet4{
Port: addr.Port,
}
if copied := copy(sockaddr.Addr[:], addr.IP.To4()); copied != net.IPv4len {
panic("did not copy enough bytes of ip address")
}
sa = sockaddr
} else {
sockaddr := &unix.SockaddrInet6{
Port: addr.Port,
}
// addr.IP will be length 0 for "bind all interfaces"
if copied := copy(sockaddr.Addr[:], addr.IP.To16()); !(copied == net.IPv6len || copied == 0) {
panic("did not copy enough bytes of ip address")
}
sa = sockaddr
}
if err = unix.Bind(sockFD, sa); err != nil {
unix.Close(sockFD)
return nil, err
}
osFD := os.NewFile(uintptr(sockFD), "veneursock")
// this will close the FD we passed to NewFile
defer osFD.Close()
// however, FilePacketConn duplicates the FD, so closing the File's FD does
// not affect this object's FD
ret, err := net.FilePacketConn(osFD)
if err != nil {
return nil, err
}
return ret, nil
}
```
|
```go
// _ _
// __ _____ __ ___ ___ __ _| |_ ___
// \ \ /\ / / _ \/ _` \ \ / / |/ _` | __/ _ \
// \ V V / __/ (_| |\ V /| | (_| | || __/
// \_/\_/ \___|\__,_| \_/ |_|\__,_|\__\___|
//
//
// CONTACT: hello@weaviate.io
//
package named_vectors_tests
import (
"context"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
wvt "github.com/weaviate/weaviate-go-client/v4/weaviate"
"github.com/weaviate/weaviate/entities/models"
"github.com/weaviate/weaviate/entities/schema"
)
func testCreateSchemaWithMixedVectorizers(host string) func(t *testing.T) {
return func(t *testing.T) {
ctx := context.Background()
client, err := wvt.NewClient(wvt.Config{Scheme: "http", Host: host})
require.Nil(t, err)
cleanup := func() {
err := client.Schema().AllDeleter().Do(context.Background())
require.Nil(t, err)
}
t.Run("none vectorizer", func(t *testing.T) {
cleanup()
className := "BringYourOwnVector"
none1 := "none1"
none2 := "none2"
mixedTargetVectors := []string{none1, none2, c11y, transformers_bq}
vector1a := []float32{0.1, 0.2, 0.3}
vector2a := []float32{-0.1001, 0.2002, -0.3003, -0.4, -0.5}
vector1b := []float32{0.1111, 0.4, 0.3}
vector2b := []float32{-0.11, 0.11111, -0.2222, -0.4, -0.5}
class := &models.Class{
Class: className,
Properties: []*models.Property{
{
Name: "text", DataType: []string{schema.DataTypeText.String()},
},
},
VectorConfig: map[string]models.VectorConfig{
none1: {
Vectorizer: map[string]interface{}{
"none": nil,
},
VectorIndexType: "hnsw",
},
none2: {
Vectorizer: map[string]interface{}{
"none": nil,
},
VectorIndexType: "flat",
},
c11y: {
Vectorizer: map[string]interface{}{
text2vecContextionary: map[string]interface{}{
"vectorizeClassName": false,
},
},
VectorIndexType: "hnsw",
},
transformers_bq: {
Vectorizer: map[string]interface{}{
text2vecTransformers: map[string]interface{}{
"vectorizeClassName": false,
},
},
VectorIndexType: "flat",
VectorIndexConfig: bqFlatIndexConfig(),
},
},
}
t.Run("create schema", func(t *testing.T) {
err := client.Schema().ClassCreator().WithClass(class).Do(ctx)
require.NoError(t, err)
cls, err := client.Schema().ClassGetter().WithClassName(className).Do(ctx)
require.NoError(t, err)
assert.Equal(t, class.Class, cls.Class)
require.NotEmpty(t, cls.VectorConfig)
require.Len(t, cls.VectorConfig, len(mixedTargetVectors))
for _, targetVector := range mixedTargetVectors {
require.NotEmpty(t, cls.VectorConfig[targetVector])
assert.NotEmpty(t, cls.VectorConfig[targetVector].VectorIndexType)
vectorizerConfig, ok := cls.VectorConfig[targetVector].Vectorizer.(map[string]interface{})
require.True(t, ok)
assert.Len(t, vectorizerConfig, 1)
}
})
t.Run("add objects", func(t *testing.T) {
objects := []struct {
id string
text string
vectors models.Vectors
}{
{
id: id1,
text: "bring your own first vector",
vectors: models.Vectors{
none1: vector1a,
none2: vector2a,
},
},
{
id: id2,
text: "bring your own second vector",
vectors: models.Vectors{
none1: vector1b,
none2: vector2b,
},
},
}
for _, tt := range objects {
objWrapper, err := client.Data().Creator().
WithClassName(className).
WithID(tt.id).
WithProperties(map[string]interface{}{
"text": tt.text,
}).
WithVectors(tt.vectors).
Do(ctx)
require.NoError(t, err)
require.NotNil(t, objWrapper)
assert.Len(t, objWrapper.Object.Vectors, 4)
objs, err := client.Data().ObjectsGetter().
WithClassName(className).
WithID(tt.id).
WithVector().
Do(ctx)
require.NoError(t, err)
require.Len(t, objs, 1)
require.NotNil(t, objs[0])
properties, ok := objs[0].Properties.(map[string]interface{})
require.True(t, ok)
assert.Equal(t, tt.text, properties["text"])
assert.Nil(t, objs[0].Vector)
assert.Len(t, objs[0].Vectors, len(mixedTargetVectors))
for targetVector, vector := range tt.vectors {
require.NotNil(t, objs[0].Vectors[targetVector])
assert.Equal(t, vector, objs[0].Vectors[targetVector])
}
}
})
t.Run("update vectors", func(t *testing.T) {
beforeUpdateVectors := getVectors(t, client, className, id1, mixedTargetVectors...)
updatedVector1 := []float32{0.11111111111, 0.2222222222, 0.3333333333}
updatedVector2 := []float32{0.1, 0.2, 0.3, 0.4, 0.5}
updatedVectors := models.Vectors{
none1: updatedVector1,
none2: updatedVector2,
}
err := client.Data().Updater().
WithClassName(className).
WithID(id1).
WithVectors(updatedVectors).
Do(ctx)
require.NoError(t, err)
afterUpdateVectors := getVectors(t, client, className, id1, mixedTargetVectors...)
for targetVector, vector := range updatedVectors {
assert.NotEqual(t, beforeUpdateVectors[targetVector], afterUpdateVectors[targetVector])
assert.Equal(t, vector, models.Vector(afterUpdateVectors[targetVector]))
}
})
t.Run("update vectors with merge", func(t *testing.T) {
beforeUpdateVectors := getVectors(t, client, className, id1, mixedTargetVectors...)
updatedVector1 := []float32{0.00001, 0.0002, 0.00003}
updatedVector2 := []float32{1.1, 1.2, 1.3, 1.4, 1.5}
updatedVectors := models.Vectors{
none1: updatedVector1,
none2: updatedVector2,
}
err := client.Data().Updater().
WithMerge().
WithClassName(className).
WithID(id1).
WithProperties(map[string]interface{}{
"text": "This change should change vector",
}).
WithVectors(updatedVectors).
Do(ctx)
require.NoError(t, err)
afterUpdateVectors := getVectors(t, client, className, id1, mixedTargetVectors...)
for _, targetVector := range mixedTargetVectors {
assert.NotEqual(t, beforeUpdateVectors[targetVector], afterUpdateVectors[targetVector])
}
for targetVector, vector := range updatedVectors {
assert.Equal(t, vector, models.Vector(afterUpdateVectors[targetVector]))
}
})
})
}
}
```
|
Leander Paes and Adil Shamasdin were the defending champions but chose not to defend their title.
Austin Krajicek and Jeevan Nedunchezhiyan won the title after defeating Kevin Krawietz and Andreas Mies 6–3, 6–3 in the final.
Seeds
Draw
References
Main Draw
Fuzion 100 Ilkley Trophy - Men's Doubles
2018 Men's Doubles
|
```ruby
# frozen_string_literal: true
# This migration adds the optional `object_changes` column, in which PaperTrail
# will store the `changes` diff for each update event. See the readme for
# details.
class AddObjectChangesToVersions < ActiveRecord::Migration[6.0]
# The largest text column available in all supported RDBMS.
# See `create_versions.rb` for details.
TEXT_BYTES = 1_073_741_823
def change
add_column :versions, :object_changes, :text, limit: TEXT_BYTES
end
end
```
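The `changes` diff stored in this column is, conceptually, a hash mapping each changed attribute to its `[before, after]` pair, serialized before being written (PaperTrail defaults to YAML, though the serializer is pluggable). A rough sketch of that shape using plain YAML; the attribute names below are made up for illustration:

```ruby
require 'yaml'

# Hypothetical diff for one update event: attribute => [before, after].
changes = {
  'title'  => ['Draft', 'Published'],
  'status' => ['pending', 'live']
}

# Serialize into the kind of text payload stored in `object_changes`...
object_changes = YAML.dump(changes)

# ...and read it back to recover each attribute's before/after pair.
restored = YAML.safe_load(object_changes)
puts restored['title'].inspect
```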
|
Pelargonium insularis is a species of plant in the family Geraniaceae. It is endemic to the Samhah island in the Socotra Archipelago of Yemen. It was discovered in 1999 on the north-facing limestone escarpment on the island. It represents the first record of the genus from the Socotra Archipelago. At the time of its discovery only a single plant was found, and an extensive search failed to find more. The cliffs in which it occurs are frequently enveloped in low cloud and provide a relatively moist refugium occupying less than 5 km² on an otherwise arid island. Local informants know of the plant and comment that it is found along the cliffs and, when fresh, provides incidental grazing for livestock. Its natural habitat is rocky areas. It is threatened by habitat loss.
Botanical notes
Most closely related to P. alchemilloides (a plant of SW Arabia and tropical NE Africa) from which it differs in its pink (not white) flowers and the development of a pronounced woody stock.
References
The Ethnoflora of the Soqotra Archipelago by Anthony G Miller and Mirranda Morris, Royal Botanic Garden Edinburgh, 2004,
Endemic flora of Socotra
insularis
Critically endangered plants
Taxonomy articles created by Polbot
|
```shell
How to unstage a staged file
How to unmodify a modified file
Pushing tags to a server
You can use git offline!
Limiting log output by time
```
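The topics listed above map to a handful of everyday commands. A quick sketch using the modern `git restore` syntax (git 2.23+), with the older equivalents noted in comments; it sets up a throwaway repository so it is safe to run anywhere:

```shell
# Throwaway repo so the commands below have something to act on.
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
git config user.email you@example.com && git config user.name you
echo hello > notes.txt
git add notes.txt && git commit -qm init
git tag v1.0

echo draft >> notes.txt
git add notes.txt                 # stage the change
git restore --staged notes.txt    # unstage it (older form: git reset HEAD notes.txt)
git restore notes.txt             # discard the modification (older form: git checkout -- notes.txt)

# Pushing tags: `git push origin v1.0` for one tag, `git push origin --tags`
# for all of them (skipped here: this throwaway repo has no remote).

git log --since="2 weeks ago" --oneline   # limit log output by time
```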
|
```javascript
`left${0}\u000g${1}right`
```
|
The women's 4 × 100 metre freestyle relay event in swimming at the 2013 World Aquatics Championships took place on 28 July at the Palau Sant Jordi in Barcelona, Spain.
Records
Prior to this competition, the existing world and championship records were:
Results
Heats
The heats were held at 12:10.
Final
The final was held at 19:17.
References
External links
Barcelona 2013 Swimming Coverage
Freestyle 4x100 metre, women's
World Aquatics Championships
2013 in women's swimming
|
```php
<?php
namespace App\Containers\AppSection\User\Actions;
use Apiato\Core\Exceptions\IncorrectIdException;
use App\Containers\AppSection\User\Models\User;
use App\Containers\AppSection\User\Notifications\PasswordUpdatedNotification;
use App\Containers\AppSection\User\Tasks\UpdateUserTask;
use App\Containers\AppSection\User\UI\API\Requests\UpdatePasswordRequest;
use App\Ship\Exceptions\NotFoundException;
use App\Ship\Exceptions\UpdateResourceFailedException;
use App\Ship\Parents\Actions\Action as ParentAction;
class UpdatePasswordAction extends ParentAction
{
public function __construct(
private readonly UpdateUserTask $updateUserTask,
) {
}
/**
* @throws IncorrectIdException
* @throws NotFoundException
* @throws UpdateResourceFailedException
*/
public function run(UpdatePasswordRequest $request): User
{
$sanitizedData = $request->sanitizeInput([
'password',
]);
$user = $this->updateUserTask->run($request->user_id, $sanitizedData);
$user->notify(new PasswordUpdatedNotification());
return $user;
}
}
```
|
```python
import math
import os
import cupy
import numpy as np
from ._util import _get_inttype
from ._pba_2d import (_check_distances, _check_indices,
_distance_tranform_arg_check, _generate_indices_ops,
_generate_shape, _get_block_size, lcm)
pba3d_defines_template = """
#define MARKER {marker}
#define MAX_INT {max_int}
#define BLOCKSIZE {block_size_3d}
"""
# For efficiency, the original PBA+ packs three 10-bit integers and two binary
# flags into a single 32-bit integer. The defines in
# `pba3d_defines_encode_32bit` handle this format.
pba3d_defines_encode_32bit = """
// Sites : ENCODE(x, y, z, 0, 0)
// Not sites : ENCODE(0, 0, 0, 1, 0) or MARKER
#define ENCODED_INT_TYPE int
#define ZERO 0
#define ONE 1
#define ENCODE(x, y, z, a, b) (((x) << 20) | ((y) << 10) | (z) | ((a) << 31) | ((b) << 30))
#define DECODE(value, x, y, z) \
x = ((value) >> 20) & 0x3ff; \
y = ((value) >> 10) & 0x3ff; \
z = (value) & 0x3ff
#define NOTSITE(value) (((value) >> 31) & 1)
#define HASNEXT(value) (((value) >> 30) & 1)
#define GET_X(value) (((value) >> 20) & 0x3ff)
#define GET_Y(value) (((value) >> 10) & 0x3ff)
#define GET_Z(value) ((NOTSITE((value))) ? MAX_INT : ((value) & 0x3ff))
""" # noqa
# 64bit version of ENCODE/DECODE to allow a 20-bit integer per coordinate axis.
pba3d_defines_encode_64bit = """
// Sites : ENCODE(x, y, z, 0, 0)
// Not sites : ENCODE(0, 0, 0, 1, 0) or MARKER
#define ENCODED_INT_TYPE long long
#define ZERO 0L
#define ONE 1L
#define ENCODE(x, y, z, a, b) (((x) << 40) | ((y) << 20) | (z) | ((a) << 61) | ((b) << 60))
#define DECODE(value, x, y, z) \
x = ((value) >> 40) & 0xfffff; \
y = ((value) >> 20) & 0xfffff; \
z = (value) & 0xfffff
#define NOTSITE(value) (((value) >> 61) & 1)
#define HASNEXT(value) (((value) >> 60) & 1)
#define GET_X(value) (((value) >> 40) & 0xfffff)
#define GET_Y(value) (((value) >> 20) & 0xfffff)
#define GET_Z(value) ((NOTSITE((value))) ? MAX_INT : ((value) & 0xfffff))
""" # noqa
@cupy.memoize(for_each_device=True)
def get_pba3d_src(block_size_3d=32, marker=-2147483648, max_int=2147483647,
size_max=1024):
pba3d_code = pba3d_defines_template.format(
block_size_3d=block_size_3d, marker=marker, max_int=max_int
)
if size_max > 1024:
pba3d_code += pba3d_defines_encode_64bit
else:
pba3d_code += pba3d_defines_encode_32bit
kernel_directory = os.path.join(os.path.dirname(__file__), "cuda")
with open(os.path.join(kernel_directory, "pba_kernels_3d.h"), "rt") as f:
pba3d_kernels = "\n".join(f.readlines())
pba3d_code += pba3d_kernels
return pba3d_code
@cupy.memoize(for_each_device=True)
def _get_encode3d_kernel(size_max, marker=-2147483648):
"""Pack array coordinates into a single integer."""
if size_max > 1024:
int_type = "ptrdiff_t" # int64_t
else:
int_type = "int" # int32_t
# value must match TOID macro in the C++ code!
if size_max > 1024:
value = """(((x) << 40) | ((y) << 20) | (z))"""
else:
value = """(((x) << 20) | ((y) << 10) | (z))"""
code = f"""
if (arr[i]) {{
out[i] = {marker};
}} else {{
{int_type} shape_2 = arr.shape()[2];
{int_type} shape_1 = arr.shape()[1];
{int_type} _i = i;
{int_type} x = _i % shape_2;
_i /= shape_2;
{int_type} y = _i % shape_1;
_i /= shape_1;
{int_type} z = _i;
out[i] = {value};
}}
"""
return cupy.ElementwiseKernel(
in_params="raw B arr",
out_params="raw I out",
operation=code,
options=("--std=c++11",),
)
def encode3d(arr, marker=-2147483648, bit_depth=32, size_max=1024):
if arr.ndim != 3:
raise ValueError("only 3d arr supported")
if bit_depth not in [32, 64]:
raise ValueError("only bit_depth of 32 or 64 is supported")
if size_max > 1024:
dtype = np.int64
else:
dtype = np.int32
image = cupy.zeros(arr.shape, dtype=dtype, order="C")
kern = _get_encode3d_kernel(size_max, marker=marker)
kern(arr, image, size=image.size)
return image
def _get_decode3d_code(size_max, int_type=""):
# bit shifts here must match those used in the encode3d kernel
if size_max > 1024:
code = f"""
{int_type} x = (encoded >> 40) & 0xfffff;
{int_type} y = (encoded >> 20) & 0xfffff;
{int_type} z = encoded & 0xfffff;
"""
else:
code = f"""
{int_type} x = (encoded >> 20) & 0x3ff;
{int_type} y = (encoded >> 10) & 0x3ff;
{int_type} z = encoded & 0x3ff;
"""
return code
@cupy.memoize(for_each_device=True)
def _get_decode3d_kernel(size_max):
"""Unpack 3 coordinates encoded as a single integer."""
# int_type = "" here because x, y, z were already allocated externally
code = _get_decode3d_code(size_max, int_type="")
return cupy.ElementwiseKernel(
in_params="E encoded",
out_params="I x, I y, I z",
operation=code,
options=("--std=c++11",),
)
def decode3d(encoded, size_max=1024):
coord_dtype = cupy.int32 if size_max < 2**31 else cupy.int64
x = cupy.empty_like(encoded, dtype=coord_dtype)
y = cupy.empty_like(x)
z = cupy.empty_like(x)
kern = _get_decode3d_kernel(size_max)
kern(encoded, x, y, z)
return (x, y, z)
def _determine_padding(shape, block_size, m1, m2, m3, blockx, blocky):
# TODO: can possibly revise to consider only particular factors for LCM on
# a given axis
LCM = lcm(block_size, m1, m2, m3, blockx, blocky)
orig_sz, orig_sy, orig_sx = shape
round_up = False
if orig_sx % LCM != 0:
# round up size to a multiple of the band size
round_up = True
sx = LCM * math.ceil(orig_sx / LCM)
else:
sx = orig_sx
if orig_sy % LCM != 0:
# round up size to a multiple of the band size
round_up = True
sy = LCM * math.ceil(orig_sy / LCM)
else:
sy = orig_sy
if orig_sz % LCM != 0:
# round up size to a multiple of the band size
round_up = True
sz = LCM * math.ceil(orig_sz / LCM)
else:
sz = orig_sz
aniso = not (sx == sy == sz)
if aniso or round_up:
smax = max(sz, sy, sx)
padding_width = (
(0, smax - orig_sz), (0, smax - orig_sy), (0, smax - orig_sx)
)
else:
padding_width = None
return padding_width
def _generate_distance_computation(int_type, dist_int_type):
"""
Compute euclidean distance from current coordinate (ind_0, ind_1, ind_2) to
the coordinates of the nearest point (z, y, x)."""
return f"""
{int_type} tmp = z - ind_0;
{dist_int_type} sq_dist = tmp * tmp;
tmp = y - ind_1;
sq_dist += tmp * tmp;
tmp = x - ind_2;
sq_dist += tmp * tmp;
dist[i] = sqrt(static_cast<F>(sq_dist));
"""
def _get_distance_kernel_code(int_type, dist_int_type, raw_out_var=True):
code = _generate_shape(
ndim=3, int_type=int_type, var_name="dist", raw_var=raw_out_var
)
code += _generate_indices_ops(ndim=3, int_type=int_type)
code += _generate_distance_computation(int_type, dist_int_type)
return code
@cupy.memoize(for_each_device=True)
def _get_distance_kernel(int_type, large_dist=False):
"""Returns kernel computing the Euclidean distance from coordinates."""
dist_int_type = "ptrdiff_t" if large_dist else "int"
operation = _get_distance_kernel_code(
int_type, dist_int_type, raw_out_var=True
)
return cupy.ElementwiseKernel(
in_params="I z, I y, I x",
out_params="raw F dist",
operation=operation,
options=("--std=c++11",),
)
def _generate_aniso_distance_computation():
"""
Compute euclidean distance from current coordinate (ind_0, ind_1, ind_2) to
the coordinates of the nearest point (z, y, x)."""
return """
F tmp = static_cast<F>(z - ind_0) * sampling[0];
F sq_dist = tmp * tmp;
tmp = static_cast<F>(y - ind_1) * sampling[1];
sq_dist += tmp * tmp;
tmp = static_cast<F>(x - ind_2) * sampling[2];
sq_dist += tmp * tmp;
dist[i] = sqrt(static_cast<F>(sq_dist));
"""
def _get_aniso_distance_kernel_code(int_type, raw_out_var=True):
code = _generate_shape(
ndim=3, int_type=int_type, var_name="dist", raw_var=raw_out_var
)
code += _generate_indices_ops(ndim=3, int_type=int_type)
code += _generate_aniso_distance_computation()
return code
@cupy.memoize(for_each_device=True)
def _get_aniso_distance_kernel(int_type):
"""Returns kernel computing the Euclidean distance from coordinates with
axis spacing != 1."""
operation = _get_aniso_distance_kernel_code(
int_type, raw_out_var=True
)
return cupy.ElementwiseKernel(
in_params="I z, I y, I x, raw F sampling",
out_params="raw F dist",
operation=operation,
options=("--std=c++11",),
)
@cupy.memoize(for_each_device=True)
def _get_decode_as_distance_kernel(size_max, large_dist=False, sampling=None):
"""Fused decode3d and distance computation.
This kernel is for use when `return_distances=True`, but
`return_indices=False`. It replaces the separate calls to
`_get_decode3d_kernel` and `_get_distance_kernel`, avoiding the overhead of
generating full arrays containing the coordinates since the coordinate
arrays are not going to be returned.
"""
if sampling is None:
dist_int_type = "ptrdiff_t" if large_dist else "int"
int_type = "int"
# Step 1: decode the (z, y, x) coordinate
code = _get_decode3d_code(size_max, int_type=int_type)
# Step 2: compute the Euclidean distance based on this (z, y, x).
code += _generate_shape(
ndim=3, int_type=int_type, var_name="dist", raw_var=True
)
code += _generate_indices_ops(ndim=3, int_type=int_type)
if sampling is None:
code += _generate_distance_computation(int_type, dist_int_type)
in_params = "E encoded"
else:
code += _generate_aniso_distance_computation()
in_params = "E encoded, raw F sampling"
return cupy.ElementwiseKernel(
in_params=in_params,
out_params="raw F dist",
operation=code,
options=("--std=c++11",),
)
def _pba_3d(arr, sampling=None, return_distances=True, return_indices=False,
block_params=None, check_warp_size=False, *,
float64_distances=False, distances=None, indices=None):
indices_inplace = isinstance(indices, cupy.ndarray)
dt_inplace = isinstance(distances, cupy.ndarray)
_distance_tranform_arg_check(
dt_inplace, indices_inplace, return_distances, return_indices
)
if arr.ndim != 3:
raise ValueError(f"expected a 3D array, got {arr.ndim}D")
if block_params is None:
m1 = 1
m2 = 1
m3 = 2
else:
m1, m2, m3 = block_params
# reduce blockx for small inputs
s_min = min(arr.shape)
if s_min <= 4:
blockx = 4
elif s_min <= 8:
blockx = 8
elif s_min <= 16:
blockx = 16
else:
blockx = 32
blocky = 4
block_size = _get_block_size(check_warp_size)
orig_sz, orig_sy, orig_sx = arr.shape
padding_width = _determine_padding(
arr.shape, block_size, m1, m2, m3, blockx, blocky
)
if padding_width is not None:
arr = cupy.pad(arr, padding_width, mode="constant", constant_values=1)
size = arr.shape[0]
# pba algorithm was implemented to use 32-bit integer to store compressed
# coordinates. input_arr will be C-contiguous, int32
size_max = max(arr.shape)
input_arr = encode3d(arr, size_max=size_max)
buffer_idx = 0
output = cupy.zeros_like(input_arr)
pba_images = [input_arr, output]
block = (blockx, blocky, 1)
grid = (size // block[0], size // block[1], 1)
pba3d = cupy.RawModule(
code=get_pba3d_src(block_size_3d=block_size, size_max=size_max)
)
kernelFloodZ = pba3d.get_function("kernelFloodZ")
if sampling is None:
kernelMaurerAxis = pba3d.get_function("kernelMaurerAxis")
kernelColorAxis = pba3d.get_function("kernelColorAxis")
sampling_args = ()
else:
kernelMaurerAxis = pba3d.get_function("kernelMaurerAxisWithSpacing")
kernelColorAxis = pba3d.get_function("kernelColorAxisWithSpacing")
sampling = tuple(map(float, sampling))
sampling_args = (sampling[2], sampling[1], sampling[0])
kernelFloodZ(
grid,
block,
(pba_images[buffer_idx], pba_images[1 - buffer_idx], size)
)
buffer_idx = 1 - buffer_idx
block = (blockx, blocky, 1)
grid = (size // block[0], size // block[1], 1)
kernelMaurerAxis(
grid,
block,
(pba_images[buffer_idx], pba_images[1 - buffer_idx], size) + sampling_args, # noqa
)
block = (block_size, m3, 1)
grid = (size // block[0], size, 1)
kernelColorAxis(
grid,
block,
(pba_images[1 - buffer_idx], pba_images[buffer_idx], size) + sampling_args, # noqa
)
if sampling is not None:
# kernelColorAxis transposes the first two axis, so have to reorder
# the sampling_args tuple correspondingly
sampling_args = (sampling[1], sampling[2], sampling[0])
block = (blockx, blocky, 1)
grid = (size // block[0], size // block[1], 1)
kernelMaurerAxis(
grid,
block,
(pba_images[buffer_idx], pba_images[1 - buffer_idx], size) + sampling_args, # noqa
)
block = (block_size, m3, 1)
grid = (size // block[0], size, 1)
kernelColorAxis(
grid,
block,
(pba_images[1 - buffer_idx], pba_images[buffer_idx], size) + sampling_args, # noqa
)
output = pba_images[buffer_idx]
if return_distances:
out_shape = (orig_sz, orig_sy, orig_sx)
dtype_out = cupy.float64 if float64_distances else cupy.float32
if dt_inplace:
_check_distances(distances, out_shape, dtype_out)
else:
distances = cupy.zeros(out_shape, dtype=dtype_out)
# make sure maximum possible distance doesn't overflow
max_possible_dist = sum((s - 1)**2 for s in out_shape)
large_dist = max_possible_dist >= 2**31
if not return_indices:
# Compute distances without forming explicit coordinate arrays.
kern = _get_decode_as_distance_kernel(
size_max=size_max,
large_dist=large_dist,
sampling=sampling
)
if sampling is None:
kern(output[:orig_sz, :orig_sy, :orig_sx], distances)
else:
sampling = cupy.asarray(sampling, dtype=distances.dtype)
kern(output[:orig_sz, :orig_sy, :orig_sx], sampling, distances)
return (distances,)
if return_indices:
x, y, z = decode3d(output[:orig_sz, :orig_sy, :orig_sx],
size_max=size_max)
vals = ()
if return_distances:
if sampling is None:
kern = _get_distance_kernel(
int_type=_get_inttype(distances), large_dist=large_dist,
)
kern(z, y, x, distances)
else:
kern = _get_aniso_distance_kernel(int_type=_get_inttype(distances))
sampling = cupy.asarray(sampling, dtype=distances.dtype)
kern(z, y, x, sampling, distances)
vals = vals + (distances,)
if return_indices:
if indices_inplace:
_check_indices(indices, (arr.ndim,) + arr.shape, x.dtype.itemsize)
indices[0, ...] = z
indices[1, ...] = y
indices[2, ...] = x
else:
indices = cupy.stack((z, y, x), axis=0)
vals = vals + (indices,)
return vals
```
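The packing scheme described in the comments above — three 10-bit coordinates plus two flag bits in a single 32-bit word — can be sketched in plain Python, mirroring the `ENCODE`/`DECODE` macros in `pba3d_defines_encode_32bit` (the helper names here are illustrative, not part of the module):

```python
# Pure-Python sketch of the 32-bit coordinate packing used by PBA+.
# Each coordinate must fit in 10 bits (0..1023); bits 30 and 31 hold flags.

def encode(x, y, z, notsite=0, hasnext=0):
    """Pack three 10-bit coordinates and two flag bits into one integer."""
    assert 0 <= x < 1024 and 0 <= y < 1024 and 0 <= z < 1024
    return (notsite << 31) | (hasnext << 30) | (x << 20) | (y << 10) | z

def decode(value):
    """Recover the three 10-bit coordinates (ignores the flag bits)."""
    x = (value >> 20) & 0x3FF
    y = (value >> 10) & 0x3FF
    z = value & 0x3FF
    return x, y, z

packed = encode(5, 700, 1023)
assert decode(packed) == (5, 700, 1023)
assert (encode(0, 0, 0, notsite=1) >> 31) & 1 == 1  # NOTSITE flag set
```

The 64-bit variant is identical in shape, except each coordinate field widens to 20 bits (shifts of 40/20/0, mask `0xfffff`), which is why `size_max > 1024` switches the code to the 64-bit encode/decode path.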
|
Club Atlético Alto Perú is a football club from Montevideo, Uruguay. The club is affiliated with the Uruguayan Football Association's Segunda División Amateur, the third and lowest tier of the league pyramid and its only amateur level.
Titles
Uruguayan Primera División: 0
Amateur Era (0):
Professional Era (0):
Segunda División Uruguay: 0
Segunda División Amateur Uruguay: 1
Torneo Apertura 2006
External links
Football clubs in Uruguay
Association football clubs established in 1940
Football clubs in Montevideo
1940 establishments in Uruguay
|
Euaspidoceras is an extinct ammonoid cephalopod genus that lived during the Middle to Late Jurassic.
The ancestor of Euaspidoceras is probably Aspidoceras, and it is considered to be related to genera such as Orthaspidoceras, Simaspidoceras, and Intranodites.
Species
Euaspidoceras ajax Leanza 1947
Euaspidoceras akantheen (or Aspidoceras akantheen) Buckman 1928
Euaspidoceras davouxi Bert and Bonnot 2004
Euaspidoceras babeanum d’Orbigny, 1848
Euaspidoceras perarmatum J. Sowerby, 1822
Euaspidoceras veranadaense Parent 2006
Distribution
Euaspidoceras species may be found in the Jurassic of Argentina, France, Germany, India, Italy, Madagascar, Saudi Arabia, Spain, the United Kingdom and Yemen.
References
Notes
Jurassic ammonites
Ammonites of Europe
Callovian first appearances
Late Jurassic extinctions
Ammonitida genera
Aspidoceratidae
|
Raelene Ann Boyle (born 24 June 1951) is an Australian retired athlete, who represented Australia at three Olympic Games as a sprinter, winning three silver medals, and was named one of 100 National Living Treasures by the National Trust of Australia in 1998. Boyle was diagnosed with breast cancer in 1996 and subsequently became a board member of Breast Cancer Network Australia (BCNA). In 2017, she was named a Legend in the Sport Australia Hall of Fame.
Early life
Boyle was born on 24 June 1951, the daughter of Gilbert and Irene Boyle, in Coburg, a suburb of Melbourne. She was educated at Coburg High School in Melbourne.
Sporting career
After strong performances in the 1968 Australian Championships and Olympic trials, Boyle was selected to represent Australia at the 1968 Summer Olympics in Mexico City, at the age of 16. At 17, she won a silver medal in the 200-metre sprint and placed 4th in the 100 metres, setting world junior records in both distances, of 22.73 and 11.20 seconds respectively. The 200-metre record stood for 12 years before being broken; the 100-metre record, for 8.
Boyle competed in the 1970 British Commonwealth Games in Edinburgh, where she contributed to Australia's number one position on the medal tally with three gold medals, in the 100 and 200-metre sprints and the 4 × 100-metre relay.
At the 1972 Olympics in Munich, Boyle collected two more silver medals, in the 100-metre and 200-metre sprints. In both races, she came second to East German Renate Stecher.
In 1974, at the Christchurch British Commonwealth Games, Boyle duplicated her results from the Edinburgh Games, winning three more gold medals in the same three events and breaking the Games record in both the 100 metres (11.27) and the 200 metres (22.50).
In January 1976, she and her team-mates broke an eight-year-old world record for the 4 × 200 metre relay in Brisbane.
At the 1976 Olympics in Montreal, Boyle finished fourth in the final of the 100-metre sprint, but was disqualified from the 200-metre race for making two false starts. A video replay later showed that she had not false-started on her first start. However, Boyle did receive the honour of acting as the flag bearer for the Australian team, the first woman to do so.
Boyle was unable to replicate her previous Commonwealth success at the 1978 Commonwealth Games in Edmonton, winning only a silver medal in the 100-metre sprint before withdrawing from the 200m and relay due to injury.
Boyle was selected to compete in the 1980 Olympics in Moscow but eventually withdrew from the team for what she stated were personal reasons, during the long dispute within Australian sporting circles over whether to join the US-led boycott of the Games.
Her final major competitive appearance was at the 1982 Commonwealth Games in Brisbane, where she won gold in the 400-metre sprint, and silver in the 4 × 400-metre relay.
Through her successful career, Boyle won seven gold and two silver medals at the Commonwealth Games, in addition to her three silver medals at the Olympic Games.
Many East German athletes were later revealed to have used anabolic steroids within a systematic state-sponsored doping program. Boyle has stated she believes that she would have won gold at the 1972 Olympics if not for drug use by her competitor. The IOC only banned the use of anabolic steroids in 1975.
Honours
15 June 1974 – appointed a Member of the Order of the British Empire (MBE) for services to sport.
1974 – awarded the ABC Sportsman of the Year Award
1985 – inducted into the Sport Australia Hall of Fame.
1991 – received an "award for excellence" in her sport, and for contributions to the Commonwealth Games by the Australian Commonwealth Games Association
25 September 1989 – awarded the Douglas Wilkie Medal by the Anti-Football League, for doing the least for football in the best and fairest manner.
2000 – Boyle pushed Betty Cuthbert in her wheelchair in the torch relay during the 2000 Sydney Olympics Opening Ceremony.
2000 – awarded Australian Sports Medal.
2001 – awarded Centenary Medal.
2004 – inducted into the Athletics Australia Hall of Fame.
2007 – appointed as a Member of the Order of Australia (AM) for service to the community through a range of roles with organisations that support people with cancer, particularly Breast Cancer Network Australia.
2013 – named in Australia's Top 100 Sportswomen of All Time.
2017 – upgraded to a Sport Australia Hall of Fame Legend.
Personal life
Boyle was diagnosed with breast cancer in 1996 and with ovarian cancer in 2000 and 2001.
Boyle works to raise community awareness about breast cancer and has been a very active board member of Breast Cancer Network Australia (BCNA) since 1999.
Boyle currently lives on the Sunshine Coast in Queensland with her partner Judy Wild.
See also
List of Olympic medalists in athletics (women)
Australian athletics champions (Women)
References
External links
Raelene Boyle at Australian Athletics Historical Results
Sporting Chance Cancer Foundation
Raelene Boyle Australian Women's Archive Project –
Board members: Raelene Boyle Breast Cancer Network Australia
Elite Sports Australia – Raelene Boyle
National Australia Bank – Ambassadors: Raelene Boyle
Graham Thomas –
Sports for women – Australia's Top 100 Sportswomen of All Time
1951 births
Living people
Australian female sprinters
Commonwealth Games gold medallists for Australia
Commonwealth Games silver medallists for Australia
Commonwealth Games medallists in athletics
Olympic athletes for Australia
Olympic silver medalists for Australia
Athletes (track and field) at the 1970 British Commonwealth Games
Athletes (track and field) at the 1974 British Commonwealth Games
Athletes (track and field) at the 1978 Commonwealth Games
Athletes (track and field) at the 1982 Commonwealth Games
Athletes (track and field) at the 1968 Summer Olympics
Athletes (track and field) at the 1972 Summer Olympics
Athletes (track and field) at the 1976 Summer Olympics
Sportswomen from Victoria (state)
Douglas Wilkie Medal winners
Australian Members of the Order of the British Empire
Members of the Order of Australia
Sport Australia Hall of Fame inductees
Athletes from Melbourne
Medalists at the 1972 Summer Olympics
Medalists at the 1968 Summer Olympics
Olympic silver medalists in athletics (track and field)
Olympic female sprinters
People from Coburg, Victoria
Medallists at the 1970 British Commonwealth Games
Medallists at the 1974 British Commonwealth Games
Medallists at the 1978 Commonwealth Games
Medallists at the 1982 Commonwealth Games
|
```swift
//
// AuditDisabledView.swift
// Strongbox
//
// Created by Strongbox on 07/08/2024.
//
import SwiftUI
struct AuditDisabledView: View {
var body: some View {
VStack {
Image(systemName: "shield.slash")
.font(.system(size: 50))
.foregroundColor(.secondary)
Text("audit_disabled")
.foregroundStyle(.secondary)
.font(.title)
Text("generic_tap_the_settings_button_to_configure")
.foregroundStyle(.secondary)
.font(.subheadline)
}
}
}
#Preview {
AuditDisabledView()
}
```
|
Tun Jeanne Abdullah née Danker (born 29 July 1953) is the wife of the former Prime Minister of Malaysia, Tun Abdullah Ahmad Badawi, whom she married while he was in office. She is his second wife, following the death of his first wife, Endon Mahmood.
Jeanne was formerly married to the younger brother of Abdullah's late first wife. She was also a manager at the Seri Perdana residential complex and has two children from her previous marriage. Earlier, in March 2007, the premier had dismissed rumours about his plans to remarry, even though the rumours had been circulating for more than a year.
Early life
Born Jeanne Danker on 29 July 1953 in Kuala Lumpur to a Roman Catholic Portuguese-Eurasian (Kristang) family with roots in the state of Malacca, she was the eldest of four siblings. She is an alumna of SMK Assunta in Petaling Jaya, Selangor.
Jeanne later converted to Islam at the age of 23, when she married her first husband, Othman Mahmood, the younger brother of Abdullah's late first wife, Tun Endon Mahmood.
Jeanne worked in the hotel management field at major hotels including Kuala Lumpur Hilton and the Pan Pacific Hotel. At one point, she was supervisor of the Malaysian Deputy Prime Minister's official residence while Abdullah Badawi was Deputy Prime Minister, and became the manager of Seri Perdana, the Prime Minister's residence, when Abdullah assumed the premiership.
Family
Jeanne has two daughters, Nadiah Kimie, and Nadene Kimie, from her previous marriage. Nadiah runs a visual communications company in Kuala Lumpur. Nadene is involved in the fashion industry, dealing with fashion-related and lifestyle projects.
Marriage with Abdullah
On 9 June 2007, she was escorted in a Proton Chancellor car bearing the Prime Minister's favourite license number 13, by police from her home in Damansara Perdana to be married to Abdullah Badawi at a private ceremony attended by close family members at the Prime Minister's official residence, Seri Perdana in Putrajaya. The bride-to-be arrived at around 2.20 pm. Among the wedding guests which included some 50 relatives were Abdullah's daughter, Nori, daughter-in-law Azrene Abdullah, his four grandchildren, Jeanne's two daughters, her father Mathew Danker and Endon's nine surviving brothers and sisters. Also present at the private wedding were Yayasan Budi Penyayang executive officer Leela Mohd Ali, Abdullah's private secretary Datuk Mohamed Thajudeen Abdul Wahab, his brother-in-law Telekom Malaysia Berhad chairman, Tan Sri Ir. Muhammad Radzi Mansor and wife Puan Sri Aizah Mahmood.
The marriage was solemnised in the surau of Seri Perdana by the Imam from the Putra Mosque of Putrajaya, Haji Abd Manaf Mat, at 2.50 pm and was witnessed by the prime minister's son, Kamaluddin, and son-in-law, Khairy Jamaluddin. Abdullah slipped a solitaire diamond ring onto Jeanne's left ring finger and kissed her on the cheek. Jeanne then slipped a wedding band on Abdullah's finger before taking his hands together into hers and kissing them.
Later in the evening, Abdullah and Jeanne visited Endon's grave which is located at the Taman Selatan Muslim cemetery in Precinct 20 of Putrajaya.
Her first official engagement as the wife of the Prime Minister was when she accompanied Abdullah to Brunei on 11 June 2007 to attend a banquet hosted by Sultan Hassanal Bolkiah of Brunei in conjunction with the wedding of the Sultan's fourth daughter Princess Hajah Majeedah Nuurul Bulqiah to Pengiran Khairul Khalil Pengiran Syed Haji Jaafar.
At a press conference after the engagement was announced, Abdullah stated he had known her for about two decades, as she had been his first wife's sister-in-law; Jeanne had been married to Othman Mahmood, who was the younger brother of Abdullah's first wife, Tun Endon Mahmood. Abdullah also denied that Endon had asked him to marry Jeanne, but said Endon had loved her as "Otherwise she would not have asked her to manage our official residence". Abdullah also said that there would be no bersanding ceremony or hantaran (exchange of wedding gifts) as this was not his first marriage.
Public life
In late 2007, Toh Puan (then Datin Seri) Jeanne Abdullah was made Open University Malaysia's second chancellor; the first had been the late Tun Endon Mahmood, the Prime Minister's first wife. She is also chairperson of Landskap Malaysia and patron of the Paralympic Council of Malaysia.
Honours
Upon her marriage to Abdullah, Jeanne automatically received the female version of her husband's honorific title, Dato' Seri, which is Datin Seri. Later, on 3 April 2009, the King of Malaysia, Yang di-Pertuan Agong Tuanku Mizan Zainal Abidin, conferred the Seri Setia Mahkota (SSM) upon Jeanne at Istana Negara. The SSM carries the honorific title Tun for its recipients. The award was conferred in conjunction with the handover of the Prime Minister's office from her husband to Najib Razak.
Honours of Malaysia
:
Grand Commander of the Order of Loyalty to the Crown of Malaysia (SSM) – Tun (2009)
:
Knight Grand Commander of the Order of the Crown of Selangor (SPMS) – Datin Paduka Seri (2007)
:
Knight Grand Commander of the Premier and Exalted Order of Malacca (DUNM) – Datuk Seri Utama (2007)
:
Knight Commander of the Order of the Star of Hornbill Sarawak (DA) – Datuk Amar (2008)
See also
Spouse of the Prime Minister of Malaysia
Notes and references
1953 births
Living people
People from Kuala Lumpur
Spouses of prime ministers of Malaysia
Converts to Sunni Islam from Catholicism
Malaysian Muslims
Kristang people
Malaysian people of Portuguese descent
Grand Commanders of the Order of Loyalty to the Crown of Malaysia
Knights Commander of the Order of the Star of Hornbill Sarawak
Knights Grand Commander of the Order of the Crown of Selangor
|
Relative pitch is the ability of a person to identify or re-create a given musical note by comparing it to a reference note and identifying the interval between those two notes. For example, if the notes Do and Fa are played on a piano, a person with relative pitch can identify the second note without looking, given that they know the first note is Do.
Detailed definition
Relative pitch implies some or all of the following abilities:
Determine the distance of a musical note from a set point of reference, e.g. "three octaves above middle C"
Identify the intervals between given tones, regardless of their relation to concert pitch (A = 440 Hz)
Correctly sing a melody by following musical notation, by pitching each note in the melody according to its distance from the previous note.
Hear a melody for the first time, then name the notes relative to a reference pitch.
This last criterion, which applies not only to singers but also to instrumentalists who rely on their own skill to determine the precise pitch of the notes played (wind instruments, fretless string instruments like the violin or viola, etc.), is an essential skill for musicians playing with others. An example is the different concert pitches used by orchestras playing music of different styles (a baroque orchestra using period instruments might decide to use a lower-tuned pitch, such as A = 415 Hz).
Compound intervals (intervals greater than an octave) can be more difficult to detect than simple intervals (intervals less than an octave).
Interval recognition is used to identify chords, and can be applied to accurately tune an instrument with respect to a given reference tone, even when the tone is not in concert pitch.
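Interval recognition as described above can be sketched as a lookup over semitone distances. In this illustrative snippet, the solfège-to-semitone table assumes a C major scale with Do = C (an assumption for the example, not something stated in the article):

```python
# Semitone offset of each solfège syllable within one octave (Do = C, major scale).
SEMITONES = {"Do": 0, "Re": 2, "Mi": 4, "Fa": 5, "Sol": 7, "La": 9, "Ti": 11}

# Names of the simple intervals, indexed by semitone count 0..11.
INTERVALS = ["unison", "minor second", "major second", "minor third",
             "major third", "perfect fourth", "tritone", "perfect fifth",
             "minor sixth", "major sixth", "minor seventh", "major seventh"]

def interval_name(low, high):
    """Name the ascending interval from `low` to `high`."""
    return INTERVALS[(SEMITONES[high] - SEMITONES[low]) % 12]

print(interval_name("Do", "Fa"))  # perfect fourth, the example from the lead
```

A listener with relative pitch performs this mapping by ear: hearing the distance, naming the interval, and hence the second note relative to the known first one.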
Prevalence and training
Unlike absolute pitch (sometimes called "perfect pitch"), relative pitch is quite common among musicians, especially musicians who are used to playing "by ear", and a precise relative pitch is a constant characteristic among good musicians.
Unlike perfect pitch, relative pitch can be developed through ear training. Computer-aided ear training is becoming a popular tool for musicians and music students, and various software is available for improving relative pitch.
Some music teachers teach their students relative pitch by having them associate each possible interval with the first two notes of a popular song. Another method of developing relative pitch is playing melodies by ear on a musical instrument, especially one that, unlike a piano or other keyboard or fretted instrument, requires a specific manual or blown adjustment for each particular tone.
Indian musicians learn relative pitch by singing intervals over a drone, which Mathieu (1997) described in terms of occidental just intonation terminology. Many Western ear training classes use solfège to teach students relative pitch, while others use numerical sight-singing.
See also
Absolute pitch
Tonal memory
References
Music cognition
Music psychology
Pitch (music)
Cognitive musicology
Singing
|
```asciidoc
xref::overview/apoc.monitor/apoc.monitor.tx.adoc[apoc.monitor.tx icon:book[]] +
`apoc.monitor.tx() returns information about the neo4j transaction manager`
label:procedure[]
label:apoc-full[]
```
|
Jyrgalang (, officially named Shakta Jyrgalang) is a village in the Ak-Suu District of Issyk-Kul Region of Kyrgyzstan. It is located on the right bank of the Jyrgalang River. It was established in 1964 to support the operation of the Jyrgalan coal mine. Its population was 1,033 in 2021. Until 2012 it was an urban-type settlement. In recent years numerous residents have moved to Karakol in search of economic opportunities. The coal mine is still operating, albeit at relatively low levels. Since 2013 the village has seen some investment in guest house development, including the reconstruction of an existing house into a six-room guest house and the construction of a three-story guest house. Several other homes were renovated in 2016 to serve as guest houses for back-country skiers and hikers.
Population
References
Populated places in Issyk-Kul Region
|
```javascript
export default {
"printWidth": 77,
"semi": false
}
```
|
```csharp
namespace Veldrid.NeoDemo
{
class Program
{
unsafe static void Main(string[] args)
{
Sdl2.SDL_version version;
Sdl2.Sdl2Native.SDL_GetVersion(&version);
new NeoDemo().Run();
}
}
}
```
|
Cucaita is a municipality in the Central Boyacá Province, part of Boyacá Department, Colombia. The urban centre is situated on the Altiplano Cundiboyacense, near the department capital Tunja. Cucaita borders Sora in the north, Tunja in the east and south, and Samacá in the south and west.
Etymology
The name Cucaita is derived from Chibcha and means either "Seminary enclosure" or "Shade of the farming fields".
History
The area of Cucaita in the times before the Spanish conquest was inhabited by the Muisca, organised in their loose Muisca Confederation. Cucaita was ruled by the zaque of nearby Hunza.
Modern Cucaita was founded on August 12, 1556 by friar Juan de Los Barrios.
Economy
The main economic activities of Cucaita are agriculture (predominantly onions and peas), livestock farming, and small-scale coal mining.
Born in Cucaita
Rafael Antonio Niño, former professional cyclist
Gallery
References
External links
Colombian ministry of culture; Cucaita, 450 years old
Municipalities of Boyacá Department
Populated places established in 1556
1556 establishments in the Spanish Empire
|
```java
package com.sohu.cache.ssh;
import lombok.Builder;
import lombok.Data;
import java.util.Objects;
/**
* @Author: zengyizhao
* @CreateTime: 2024/4/3 12:38
* @Description: ssh session key
* @Version: 1.0
*/
@Data
@Builder
public class SSHMachineInfo {
/**
* ip
*/
private String ip;
/**
* ssh username
*/
private String username;
/**
* authentication type
*/
private int authType;
/**
* ssh password
*/
private String password;
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
SSHMachineInfo that = (SSHMachineInfo) o;
return authType == that.authType && Objects.equals(ip, that.ip) && Objects.equals(password, that.password) && Objects.equals(username, that.username);
}
@Override
public int hashCode() {
return Objects.hash(ip, username, authType, password);
}
@Override
public String toString() {
return "SSHMachineInfo{" +
"ip='" + ip + '\'' +
", authType=" + authType +
'}';
}
}
```
|
```swift
//
// URLSession+Rx.swift
// RxCocoa
//
// Created by Krunoslav Zaher on 3/23/15.
//
import Foundation
import RxSwift
#if canImport(FoundationNetworking)
import FoundationNetworking
#endif
/// RxCocoa URL errors.
public enum RxCocoaURLError
: Swift.Error {
/// Unknown error occurred.
case unknown
/// Response is not NSHTTPURLResponse
case nonHTTPResponse(response: URLResponse)
/// Response is not successful. (not in `200 ..< 300` range)
case httpRequestFailed(response: HTTPURLResponse, data: Data?)
/// Deserialization error.
case deserializationError(error: Swift.Error)
}
extension RxCocoaURLError
: CustomDebugStringConvertible {
/// A textual representation of `self`, suitable for debugging.
public var debugDescription: String {
switch self {
case .unknown:
return "Unknown error has occurred."
case let .nonHTTPResponse(response):
return "Response is not NSHTTPURLResponse `\(response)`."
case let .httpRequestFailed(response, _):
return "HTTP request failed with `\(response.statusCode)`."
case let .deserializationError(error):
return "Error during deserialization of the response: \(error)"
}
}
}
private func escapeTerminalString(_ value: String) -> String {
return value.replacingOccurrences(of: "\"", with: "\\\"", options:[], range: nil)
}
private func convertURLRequestToCurlCommand(_ request: URLRequest) -> String {
let method = request.httpMethod ?? "GET"
var returnValue = "curl -X \(method) "
if let httpBody = request.httpBody {
let maybeBody = String(data: httpBody, encoding: String.Encoding.utf8)
if let body = maybeBody {
returnValue += "-d \"\(escapeTerminalString(body))\" "
}
}
for (key, value) in request.allHTTPHeaderFields ?? [:] {
let escapedKey = escapeTerminalString(key as String)
let escapedValue = escapeTerminalString(value as String)
returnValue += "\n -H \"\(escapedKey): \(escapedValue)\" "
}
let URLString = request.url?.absoluteString ?? "<unknown url>"
returnValue += "\n\"\(escapeTerminalString(URLString))\""
returnValue += " -i -v"
return returnValue
}
private func convertResponseToString(_ response: URLResponse?, _ error: NSError?, _ interval: TimeInterval) -> String {
let ms = Int(interval * 1000)
if let response = response as? HTTPURLResponse {
if 200 ..< 300 ~= response.statusCode {
return "Success (\(ms)ms): Status \(response.statusCode)"
}
else {
return "Failure (\(ms)ms): Status \(response.statusCode)"
}
}
if let error = error {
if error.domain == NSURLErrorDomain && error.code == NSURLErrorCancelled {
return "Canceled (\(ms)ms)"
}
return "Failure (\(ms)ms): NSError > \(error)"
}
return "<Unhandled response from server>"
}
extension Reactive where Base: URLSession {
/**
Observable sequence of responses for URL request.
Performing of request starts after observer is subscribed and not after invoking this method.
**URL requests will be performed per subscribed observer.**
Any error during fetching of the response will cause observed sequence to terminate with error.
- parameter request: URL request.
- returns: Observable sequence of URL responses.
*/
public func response(request: URLRequest) -> Observable<(response: HTTPURLResponse, data: Data)> {
return Observable.create { observer in
// smart compiler should be able to optimize this out
let d: Date?
if URLSession.rx.shouldLogRequest(request) {
d = Date()
}
else {
d = nil
}
let task = self.base.dataTask(with: request) { data, response, error in
if URLSession.rx.shouldLogRequest(request) {
let interval = Date().timeIntervalSince(d ?? Date())
print(convertURLRequestToCurlCommand(request))
#if os(Linux)
print(convertResponseToString(response, error.flatMap { $0 as NSError }, interval))
#else
print(convertResponseToString(response, error.map { $0 as NSError }, interval))
#endif
}
guard let response = response, let data = data else {
observer.on(.error(error ?? RxCocoaURLError.unknown))
return
}
guard let httpResponse = response as? HTTPURLResponse else {
observer.on(.error(RxCocoaURLError.nonHTTPResponse(response: response)))
return
}
observer.on(.next((httpResponse, data)))
observer.on(.completed)
}
task.resume()
return Disposables.create(with: task.cancel)
}
}
/**
Observable sequence of response data for URL request.
Performing of request starts after observer is subscribed and not after invoking this method.
**URL requests will be performed per subscribed observer.**
Any error during fetching of the response will cause observed sequence to terminate with error.
If response is not HTTP response with status code in the range of `200 ..< 300`, sequence
will terminate with `(RxCocoaErrorDomain, RxCocoaError.NetworkError)`.
- parameter request: URL request.
- returns: Observable sequence of response data.
*/
public func data(request: URLRequest) -> Observable<Data> {
return self.response(request: request).map { pair -> Data in
if 200 ..< 300 ~= pair.0.statusCode {
return pair.1
}
else {
throw RxCocoaURLError.httpRequestFailed(response: pair.0, data: pair.1)
}
}
}
/**
Observable sequence of response JSON for URL request.
Performing of request starts after observer is subscribed and not after invoking this method.
**URL requests will be performed per subscribed observer.**
Any error during fetching of the response will cause observed sequence to terminate with error.
If response is not HTTP response with status code in the range of `200 ..< 300`, sequence
will terminate with `(RxCocoaErrorDomain, RxCocoaError.NetworkError)`.
If there is an error during JSON deserialization observable sequence will fail with that error.
- parameter request: URL request.
- returns: Observable sequence of response JSON.
*/
public func json(request: URLRequest, options: JSONSerialization.ReadingOptions = []) -> Observable<Any> {
return self.data(request: request).map { data -> Any in
do {
return try JSONSerialization.jsonObject(with: data, options: options)
} catch let error {
throw RxCocoaURLError.deserializationError(error: error)
}
}
}
/**
Observable sequence of response JSON for GET request with `URL`.
Performing of request starts after observer is subscribed and not after invoking this method.
**URL requests will be performed per subscribed observer.**
Any error during fetching of the response will cause observed sequence to terminate with error.
If response is not HTTP response with status code in the range of `200 ..< 300`, sequence
will terminate with `(RxCocoaErrorDomain, RxCocoaError.NetworkError)`.
If there is an error during JSON deserialization observable sequence will fail with that error.
- parameter url: URL of `NSURLRequest` request.
- returns: Observable sequence of response JSON.
*/
public func json(url: Foundation.URL) -> Observable<Any> {
self.json(request: URLRequest(url: url))
}
}
extension Reactive where Base == URLSession {
/// Log URL requests to standard output in curl format.
public static var shouldLogRequest: (URLRequest) -> Bool = { _ in
#if DEBUG
return true
#else
return false
#endif
}
}
```
|
```c
/* this ALWAYS GENERATED file contains the definitions for the interfaces */
/* File created by MIDL compiler version 8.01.0622 */
/* verify that the <rpcndr.h> version is high enough to compile this file*/
#ifndef __REQUIRED_RPCNDR_H_VERSION__
#define __REQUIRED_RPCNDR_H_VERSION__ 500
#endif
/* verify that the <rpcsal.h> version is high enough to compile this file*/
#ifndef __REQUIRED_RPCSAL_H_VERSION__
#define __REQUIRED_RPCSAL_H_VERSION__ 100
#endif
#include "rpc.h"
#include "rpcndr.h"
#ifndef __RPCNDR_H_VERSION__
#error this stub requires an updated version of <rpcndr.h>
#endif /* __RPCNDR_H_VERSION__ */
#ifndef COM_NO_WINDOWS_H
#include "windows.h"
#include "ole2.h"
#endif /*COM_NO_WINDOWS_H*/
#ifndef __d3d12downlevel_h__
#define __d3d12downlevel_h__
#if defined(_MSC_VER) && (_MSC_VER >= 1020)
#pragma once
#endif
/* Forward Declarations */
#ifndef __ID3D12CommandQueueDownlevel_FWD_DEFINED__
#define __ID3D12CommandQueueDownlevel_FWD_DEFINED__
typedef interface ID3D12CommandQueueDownlevel ID3D12CommandQueueDownlevel;
#endif /* __ID3D12CommandQueueDownlevel_FWD_DEFINED__ */
#ifndef __ID3D12DeviceDownlevel_FWD_DEFINED__
#define __ID3D12DeviceDownlevel_FWD_DEFINED__
typedef interface ID3D12DeviceDownlevel ID3D12DeviceDownlevel;
#endif /* __ID3D12DeviceDownlevel_FWD_DEFINED__ */
/* header files for imported files */
#include "oaidl.h"
#include "ocidl.h"
#include "d3d12.h"
#include "dxgi1_4.h"
#ifdef __cplusplus
extern "C"{
#endif
/* interface __MIDL_itf_d3d12downlevel_0000_0000 */
/* [local] */
#include <winapifamily.h>
#pragma region Desktop Family
#if WINAPI_FAMILY_PARTITION(WINAPI_PARTITION_DESKTOP)
typedef
enum D3D12_DOWNLEVEL_PRESENT_FLAGS
{
D3D12_DOWNLEVEL_PRESENT_FLAG_NONE = 0,
D3D12_DOWNLEVEL_PRESENT_FLAG_WAIT_FOR_VBLANK = ( D3D12_DOWNLEVEL_PRESENT_FLAG_NONE + 1 )
} D3D12_DOWNLEVEL_PRESENT_FLAGS;
DEFINE_ENUM_FLAG_OPERATORS( D3D12_DOWNLEVEL_PRESENT_FLAGS );
extern RPC_IF_HANDLE __MIDL_itf_d3d12downlevel_0000_0000_v0_0_c_ifspec;
extern RPC_IF_HANDLE __MIDL_itf_d3d12downlevel_0000_0000_v0_0_s_ifspec;
#ifndef __ID3D12CommandQueueDownlevel_INTERFACE_DEFINED__
#define __ID3D12CommandQueueDownlevel_INTERFACE_DEFINED__
/* interface ID3D12CommandQueueDownlevel */
/* [unique][local][object][uuid] */
EXTERN_C const IID IID_ID3D12CommandQueueDownlevel;
#if defined(__cplusplus) && !defined(CINTERFACE)
MIDL_INTERFACE("38a8c5ef-7ccb-4e81-914f-a6e9d072c494")
ID3D12CommandQueueDownlevel : public IUnknown
{
public:
virtual HRESULT STDMETHODCALLTYPE Present(
_In_ ID3D12GraphicsCommandList *pOpenCommandList,
_In_ ID3D12Resource *pSourceTex2D,
_In_ HWND hWindow,
D3D12_DOWNLEVEL_PRESENT_FLAGS Flags) = 0;
};
#else /* C style interface */
typedef struct ID3D12CommandQueueDownlevelVtbl
{
BEGIN_INTERFACE
HRESULT ( STDMETHODCALLTYPE *QueryInterface )(
ID3D12CommandQueueDownlevel * This,
REFIID riid,
_COM_Outptr_ void **ppvObject);
ULONG ( STDMETHODCALLTYPE *AddRef )(
ID3D12CommandQueueDownlevel * This);
ULONG ( STDMETHODCALLTYPE *Release )(
ID3D12CommandQueueDownlevel * This);
HRESULT ( STDMETHODCALLTYPE *Present )(
ID3D12CommandQueueDownlevel * This,
_In_ ID3D12GraphicsCommandList *pOpenCommandList,
_In_ ID3D12Resource *pSourceTex2D,
_In_ HWND hWindow,
D3D12_DOWNLEVEL_PRESENT_FLAGS Flags);
END_INTERFACE
} ID3D12CommandQueueDownlevelVtbl;
interface ID3D12CommandQueueDownlevel
{
CONST_VTBL struct ID3D12CommandQueueDownlevelVtbl *lpVtbl;
};
#ifdef COBJMACROS
#define ID3D12CommandQueueDownlevel_QueryInterface(This,riid,ppvObject) \
( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) )
#define ID3D12CommandQueueDownlevel_AddRef(This) \
( (This)->lpVtbl -> AddRef(This) )
#define ID3D12CommandQueueDownlevel_Release(This) \
( (This)->lpVtbl -> Release(This) )
#define ID3D12CommandQueueDownlevel_Present(This,pOpenCommandList,pSourceTex2D,hWindow,Flags) \
( (This)->lpVtbl -> Present(This,pOpenCommandList,pSourceTex2D,hWindow,Flags) )
#endif /* COBJMACROS */
#endif /* C style interface */
#endif /* __ID3D12CommandQueueDownlevel_INTERFACE_DEFINED__ */
#ifndef __ID3D12DeviceDownlevel_INTERFACE_DEFINED__
#define __ID3D12DeviceDownlevel_INTERFACE_DEFINED__
/* interface ID3D12DeviceDownlevel */
/* [unique][local][object][uuid] */
EXTERN_C const IID IID_ID3D12DeviceDownlevel;
#if defined(__cplusplus) && !defined(CINTERFACE)
MIDL_INTERFACE("74eaee3f-2f4b-476d-82ba-2b85cb49e310")
ID3D12DeviceDownlevel : public IUnknown
{
public:
virtual HRESULT STDMETHODCALLTYPE QueryVideoMemoryInfo(
UINT NodeIndex,
DXGI_MEMORY_SEGMENT_GROUP MemorySegmentGroup,
_Out_ DXGI_QUERY_VIDEO_MEMORY_INFO *pVideoMemoryInfo) = 0;
};
#else /* C style interface */
typedef struct ID3D12DeviceDownlevelVtbl
{
BEGIN_INTERFACE
HRESULT ( STDMETHODCALLTYPE *QueryInterface )(
ID3D12DeviceDownlevel * This,
REFIID riid,
_COM_Outptr_ void **ppvObject);
ULONG ( STDMETHODCALLTYPE *AddRef )(
ID3D12DeviceDownlevel * This);
ULONG ( STDMETHODCALLTYPE *Release )(
ID3D12DeviceDownlevel * This);
HRESULT ( STDMETHODCALLTYPE *QueryVideoMemoryInfo )(
ID3D12DeviceDownlevel * This,
UINT NodeIndex,
DXGI_MEMORY_SEGMENT_GROUP MemorySegmentGroup,
_Out_ DXGI_QUERY_VIDEO_MEMORY_INFO *pVideoMemoryInfo);
END_INTERFACE
} ID3D12DeviceDownlevelVtbl;
interface ID3D12DeviceDownlevel
{
CONST_VTBL struct ID3D12DeviceDownlevelVtbl *lpVtbl;
};
#ifdef COBJMACROS
#define ID3D12DeviceDownlevel_QueryInterface(This,riid,ppvObject) \
( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) )
#define ID3D12DeviceDownlevel_AddRef(This) \
( (This)->lpVtbl -> AddRef(This) )
#define ID3D12DeviceDownlevel_Release(This) \
( (This)->lpVtbl -> Release(This) )
#define ID3D12DeviceDownlevel_QueryVideoMemoryInfo(This,NodeIndex,MemorySegmentGroup,pVideoMemoryInfo) \
( (This)->lpVtbl -> QueryVideoMemoryInfo(This,NodeIndex,MemorySegmentGroup,pVideoMemoryInfo) )
#endif /* COBJMACROS */
#endif /* C style interface */
#endif /* __ID3D12DeviceDownlevel_INTERFACE_DEFINED__ */
/* interface __MIDL_itf_d3d12downlevel_0000_0002 */
/* [local] */
#endif /* WINAPI_FAMILY_PARTITION(WINAPI_PARTITION_DESKTOP) */
#pragma endregion
DEFINE_GUID(IID_ID3D12CommandQueueDownlevel,0x38a8c5ef,0x7ccb,0x4e81,0x91,0x4f,0xa6,0xe9,0xd0,0x72,0xc4,0x94);
DEFINE_GUID(IID_ID3D12DeviceDownlevel,0x74eaee3f,0x2f4b,0x476d,0x82,0xba,0x2b,0x85,0xcb,0x49,0xe3,0x10);
extern RPC_IF_HANDLE __MIDL_itf_d3d12downlevel_0000_0002_v0_0_c_ifspec;
extern RPC_IF_HANDLE __MIDL_itf_d3d12downlevel_0000_0002_v0_0_s_ifspec;
/* Additional Prototypes for ALL interfaces */
/* end of Additional Prototypes */
#ifdef __cplusplus
}
#endif
#endif
```
|
Devario kakhienensis is a freshwater fish found in the Irrawaddy basin of Myanmar and China.
References
Freshwater fish of China
Fish of Myanmar
Taxa named by John Anderson (zoologist)
Fish described in 1879
Devario
|
```yaml
interactions:
- request:
body: '{"Offset": 0, "Limit": 20}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '26'
Content-Type:
- application/json
Host:
- cvm.ap-singapore.tencentcloudapi.com
User-Agent:
- python-requests/2.28.1
X-TC-Action:
- DescribeInstances
X-TC-Region:
- ap-singapore
X-TC-Version:
- '2017-03-12'
method: POST
uri: path_to_url
response:
body:
string: "{\"Response\": {\"TotalCount\": 6, \"InstanceSet\": [{\"Placement\":
{\"Zone\": \"ap-singapore-4\", \"HostId\": null, \"ProjectId\": 0}, \"InstanceId\":
\"ins-dq1dmpgk\", \"Uuid\": \"2e56464f-5d8c-4449-960f-75ad37e5a625\", \"OperatorUin\":
\"\", \"InstanceState\": \"RUNNING\", \"RestrictState\": \"NORMAL\", \"InstanceType\":
\"SA2.MEDIUM4\", \"CPU\": 2, \"Memory\": 4, \"InstanceName\": \"arron_cvm\",
\"InstanceChargeType\": \"POSTPAID_BY_HOUR\", \"SystemDisk\": {\"DiskType\":
\"CLOUD_BSSD\", \"DiskId\": \"disk-86s0fjos\", \"DiskSize\": 50, \"Encrypt\":
false, \"KmsKeyId\": null, \"ThroughputPerformance\": 0, \"CdcId\": null},
\"DataDisks\": [], \"PrivateIpAddresses\": [\"192.168.0.11\"], \"PublicIpAddresses\":
[\"43.156.31.218\"], \"IPv6Addresses\": null, \"InternetAccessible\": {\"InternetMaxBandwidthOut\":
10, \"InternetChargeType\": \"TRAFFIC_POSTPAID_BY_HOUR\"}, \"VirtualPrivateCloud\":
{\"VpcId\": \"vpc-rdsfj6ot\", \"SubnetId\": \"subnet-9hl8a56w\", \"AsVpcGateway\":
false}, \"SecurityGroupIds\": [\"sg-0tifsp1w\", \"sg-cj259yog\"], \"LoginSettings\":
{\"KeyIds\": [\"skey-0uvvu4xh\"]}, \"ImageId\": \"img-eb30mz89\", \"OsName\":
\"TencentOS Server 3.1 (TK4)\", \"DefaultLoginUser\": \"root\", \"DefaultLoginPort\":
22, \"RenewFlag\": null, \"CreatedTime\": \"2022-09-14T02:09:47Z\", \"ExpiredTime\":
null, \"Tags\": [], \"PlatformProjectId\": null, \"DisasterRecoverGroupId\":
\"\", \"DedicatedClusterId\": \"\", \"CamRoleName\": \"\", \"LatestOperation\":
\"ModifyInstancesAttribute.SecurityGroups\", \"LatestOperationState\": \"SUCCESS\",
\"LatestOperationRequestId\": \"76438a01-31a9-4e8e-a112-8acf49ff4aca\", \"IsolatedSource\":
\"NOTISOLATED\", \"HpcClusterId\": \"\", \"DisableApiTermination\": false,
\"NOT_APPLICABLE\"}, {\"Placement\": {\"Zone\": \"ap-singapore-4\", \"HostId\":
null, \"ProjectId\": 0}, \"InstanceId\": \"ins-00lycyy6\", \"Uuid\": \"9683a76c-12cb-45f5-b7e9-70ae917a6ae4\",
\"OperatorUin\": \"\", \"InstanceState\": \"STOPPED\", \"RestrictState\":
\"NORMAL\", \"InstanceType\": \"SA2.MEDIUM4\", \"CPU\": 2, \"Memory\": 4,
\"InstanceName\": \"Unnamed\", \"InstanceChargeType\": \"POSTPAID_BY_HOUR\",
\"SystemDisk\": {\"DiskType\": \"CLOUD_BSSD\", \"DiskId\": \"disk-f4cbs4nc\",
\"DiskSize\": 50, \"Encrypt\": false, \"KmsKeyId\": null, \"ThroughputPerformance\":
0, \"CdcId\": null}, \"DataDisks\": [], \"PrivateIpAddresses\": [\"192.168.0.8\"],
\"PublicIpAddresses\": null, \"IPv6Addresses\": null, \"InternetAccessible\":
{\"InternetMaxBandwidthOut\": 5, \"InternetChargeType\": null}, \"VirtualPrivateCloud\":
{\"VpcId\": \"vpc-2hxl6jir\", \"SubnetId\": \"subnet-gnl4awp6\", \"AsVpcGateway\":
false}, \"SecurityGroupIds\": [\"sg-4rdhd5n6\"], \"LoginSettings\": {\"KeyIds\":
[\"skey-83mjf79v\"]}, \"ImageId\": \"img-eb30mz89\", \"OsName\": \"TencentOS
Server 3.1 (TK4)\", \"DefaultLoginUser\": \"root\", \"DefaultLoginPort\":
22, \"RenewFlag\": null, \"CreatedTime\": \"2022-09-06T03:07:37Z\", \"ExpiredTime\":
null, \"Tags\": [{\"tagKey\": \"maid_status\", \"tagValue\": \"Resource does
not meet policy: stop@2022-10-11T06:17:00+00:00\", \"Key\": \"maid_status\",
\"Value\": \"Resource does not meet policy: stop@2022-10-11T06:17:00+00:00\"}],
\"PlatformProjectId\": null, \"DisasterRecoverGroupId\": \"\", \"DedicatedClusterId\":
\"\", \"CamRoleName\": \"\", \"LatestOperation\": \"StopInstances\", \"LatestOperationState\":
\"SUCCESS\", \"LatestOperationRequestId\": \"ef0646f8-8cb5-4a8d-b5fb-6358fb38cd99\",
\"IsolatedSource\": \"NOTISOLATED\", \"HpcClusterId\": \"\", \"DisableApiTermination\":
\"STOP_CHARGING\"}, {\"Placement\": {\"Zone\": \"ap-singapore-4\", \"HostId\":
null, \"ProjectId\": 0}, \"InstanceId\": \"ins-a4vgayks\", \"Uuid\": \"67280d6c-7dee-4b06-b794-778535a61366\",
\"OperatorUin\": \"\", \"InstanceState\": \"STOPPED\", \"RestrictState\":
\"NORMAL\", \"InstanceType\": \"SA2.MEDIUM4\", \"CPU\": 2, \"Memory\": 4,
\"InstanceName\": \"ins-riot-cvmtest\", \"InstanceChargeType\": \"POSTPAID_BY_HOUR\",
\"SystemDisk\": {\"DiskType\": \"CLOUD_BSSD\", \"DiskId\": \"disk-96jhcorq\",
\"DiskSize\": 50, \"Encrypt\": false, \"KmsKeyId\": null, \"ThroughputPerformance\":
0, \"CdcId\": null}, \"DataDisks\": [], \"PrivateIpAddresses\": [\"192.168.0.12\"],
\"PublicIpAddresses\": null, \"IPv6Addresses\": null, \"InternetAccessible\":
{\"InternetMaxBandwidthOut\": 5, \"InternetChargeType\": null}, \"VirtualPrivateCloud\":
{\"VpcId\": \"vpc-2hxl6jir\", \"SubnetId\": \"subnet-gnl4awp6\", \"AsVpcGateway\":
false}, \"SecurityGroupIds\": [\"sg-4rdhd5n6\"], \"LoginSettings\": {\"KeyIds\":
null}, \"ImageId\": \"img-eb30mz89\", \"OsName\": \"TencentOS Server 3.1 (TK4)\",
\"DefaultLoginUser\": \"root\", \"DefaultLoginPort\": 22, \"RenewFlag\": null,
\"CreatedTime\": \"2022-08-30T06:24:34Z\", \"ExpiredTime\": null, \"Tags\":
[{\"tagKey\": \"tag_add_test_key_for_test_rename\", \"tagValue\": \"tag_add_test_value_for_test\",
\"Key\": \"tag_add_test_key_for_test_rename\", \"Value\": \"tag_add_test_value_for_test\"},
{\"tagKey\": \"test\", \"tagValue\": \"test\", \"Key\": \"test\", \"Value\":
\"test\"}, {\"tagKey\": \"test_pro\", \"tagValue\": \"this is test\", \"Key\":
\"test_pro\", \"Value\": \"this is test\"}, {\"tagKey\": \"test_pro_00122\",
\"tagValue\": \"andyxbchen\", \"Key\": \"test_pro_00122\", \"Value\": \"andyxbchen\"},
{\"tagKey\": \"test_pro_001_2\", \"tagValue\": \"andyxbchen\", \"Key\": \"test_pro_001_2\",
\"Value\": \"andyxbchen\"}, {\"tagKey\": \"test_pro_1\", \"tagValue\": \"this
is test\", \"Key\": \"test_pro_1\", \"Value\": \"this is test\"}, {\"tagKey\":
\"test_pro_10\", \"tagValue\": \"this is test\", \"Key\": \"test_pro_10\",
\"Value\": \"this is test\"}, {\"tagKey\": \"test_pro_11\", \"tagValue\":
\"this is test\", \"Key\": \"test_pro_11\", \"Value\": \"this is test\"},
{\"tagKey\": \"test_pro_12\", \"tagValue\": \"this is test\", \"Key\": \"test_pro_12\",
\"Value\": \"this is test\"}, {\"tagKey\": \"test_pro_13\", \"tagValue\":
\"this is test\", \"Key\": \"test_pro_13\", \"Value\": \"this is test\"},
{\"tagKey\": \"test_pro_14\", \"tagValue\": \"this is test\", \"Key\": \"test_pro_14\",
\"Value\": \"this is test\"}, {\"tagKey\": \"test_pro_15\", \"tagValue\":
\"this is test\", \"Key\": \"test_pro_15\", \"Value\": \"this is test\"},
{\"tagKey\": \"test_pro_16\", \"tagValue\": \"test_pro_16\", \"Key\": \"test_pro_16\",
\"Value\": \"test_pro_16\"}, {\"tagKey\": \"test_pro_17\", \"tagValue\": \"test_pro_17\",
\"Key\": \"test_pro_17\", \"Value\": \"test_pro_17\"}, {\"tagKey\": \"test_pro_18\",
\"tagValue\": \"test_pro_18\", \"Key\": \"test_pro_18\", \"Value\": \"test_pro_18\"},
{\"tagKey\": \"test_pro_2\", \"tagValue\": \"this is test\", \"Key\": \"test_pro_2\",
\"Value\": \"this is test\"}, {\"tagKey\": \"test_pro_3\", \"tagValue\": \"this
is test\", \"Key\": \"test_pro_3\", \"Value\": \"this is test\"}, {\"tagKey\":
\"test_pro_4\", \"tagValue\": \"this is test\", \"Key\": \"test_pro_4\", \"Value\":
\"this is test\"}, {\"tagKey\": \"test_pro_5\", \"tagValue\": \"this is test\",
\"Key\": \"test_pro_5\", \"Value\": \"this is test\"}, {\"tagKey\": \"test_pro_6\",
\"tagValue\": \"this is test\", \"Key\": \"test_pro_6\", \"Value\": \"this
is test\"}, {\"tagKey\": \"test_pro_7\", \"tagValue\": \"this is test\", \"Key\":
\"test_pro_7\", \"Value\": \"this is test\"}, {\"tagKey\": \"test_pro_8\",
\"tagValue\": \"this is test\", \"Key\": \"test_pro_8\", \"Value\": \"this
is test\"}, {\"tagKey\": \"test_pro_9\", \"tagValue\": \"this is test\", \"Key\":
\"test_pro_9\", \"Value\": \"this is test\"}, {\"tagKey\": \"test_test-mark-for\",
\"tagValue\": \"Resource does not meet policy: tag@2022-09-08T02:46:00+00:00\",
\"Key\": \"test_test-mark-for\", \"Value\": \"Resource does not meet policy:
tag@2022-09-08T02:46:00+00:00\"}, {\"tagKey\": \"\u8FD0\u7EF4\u8D1F\u8D23\u4EBA2\",
\"tagValue\": \"andyxbchen\", \"Key\": \"\u8FD0\u7EF4\u8D1F\u8D23\u4EBA2\",
\"Value\": \"andyxbchen\"}], \"PlatformProjectId\": null, \"DisasterRecoverGroupId\":
\"\", \"DedicatedClusterId\": \"\", \"CamRoleName\": \"\", \"LatestOperation\":
\"StopInstances\", \"LatestOperationState\": \"SUCCESS\", \"LatestOperationRequestId\":
\"52829930-c6e0-4842-949a-ac5d99cfc14e\", \"IsolatedSource\": \"NOTISOLATED\",
\"HpcClusterId\": \"\", \"DisableApiTermination\": false, \"RdmaIpAddresses\":
{\"Placement\": {\"Zone\": \"ap-singapore-1\", \"HostId\": null, \"ProjectId\":
1000000511}, \"InstanceId\": \"ins-n198q4gc\", \"Uuid\": \"507d7ee0-bb64-48ff-82b1-960f03ad904f\",
\"OperatorUin\": \"\", \"InstanceState\": \"RUNNING\", \"RestrictState\":
\"NORMAL\", \"InstanceType\": \"S5.LARGE8\", \"CPU\": 4, \"Memory\": 8, \"InstanceName\":
\"Unnamed\", \"InstanceChargeType\": \"POSTPAID_BY_HOUR\", \"SystemDisk\":
{\"DiskType\": \"CLOUD_PREMIUM\", \"DiskId\": \"disk-2xu821gm\", \"DiskSize\":
50, \"Encrypt\": false, \"KmsKeyId\": null, \"ThroughputPerformance\": 0,
\"CdcId\": null}, \"DataDisks\": [], \"PrivateIpAddresses\": [\"10.0.1.8\"],
\"PublicIpAddresses\": [\"43.156.4.43\"], \"IPv6Addresses\": null, \"InternetAccessible\":
{\"InternetMaxBandwidthOut\": 5, \"InternetChargeType\": \"TRAFFIC_POSTPAID_BY_HOUR\"},
\"VirtualPrivateCloud\": {\"VpcId\": \"vpc-aik1957f\", \"SubnetId\": \"subnet-5k1ixnpo\",
\"AsVpcGateway\": false}, \"SecurityGroupIds\": [\"sg-b3fnpwk6\"], \"LoginSettings\":
{\"KeyIds\": null}, \"ImageId\": \"img-9id7emv7\", \"OsName\": \"Windows Server
2016 \u6570\u636E\u4E2D\u5FC3\u7248 64\u4F4D\u4E2D\u6587\u7248\", \"DefaultLoginUser\":
\"Administrator\", \"DefaultLoginPort\": 3389, \"RenewFlag\": null, \"CreatedTime\":
\"2022-06-20T06:24:35Z\", \"ExpiredTime\": null, \"Tags\": [], \"PlatformProjectId\":
null, \"DisasterRecoverGroupId\": \"\", \"DedicatedClusterId\": \"\", \"CamRoleName\":
\"\", \"LatestOperation\": null, \"LatestOperationState\": null, \"LatestOperationRequestId\":
null, \"IsolatedSource\": \"NOTISOLATED\", \"HpcClusterId\": \"\", \"DisableApiTermination\":
\"NOT_APPLICABLE\"}, {\"Placement\": {\"Zone\": \"ap-singapore-4\", \"HostId\":
null, \"ProjectId\": 0}, \"InstanceId\": \"ins-beetmuio\", \"Uuid\": \"61c6aa25-9fd8-404c-9e91-3c2af566f050\",
\"OperatorUin\": \"\", \"InstanceState\": \"RUNNING\", \"RestrictState\":
\"NORMAL\", \"InstanceType\": \"S5.MEDIUM4\", \"CPU\": 2, \"Memory\": 4, \"InstanceName\":
\"Unnamed\", \"InstanceChargeType\": \"POSTPAID_BY_HOUR\", \"SystemDisk\":
{\"DiskType\": \"CLOUD_PREMIUM\", \"DiskId\": \"disk-fubdgqjm\", \"DiskSize\":
10, \"Encrypt\": false, \"KmsKeyId\": null, \"ThroughputPerformance\": 0,
\"CdcId\": null}, \"DataDisks\": [], \"PrivateIpAddresses\": [\"10.48.3.4\"],
\"PublicIpAddresses\": null, \"IPv6Addresses\": null, \"InternetAccessible\":
{\"InternetMaxBandwidthOut\": 0, \"InternetChargeType\": null}, \"VirtualPrivateCloud\":
{\"VpcId\": \"vpc-o7cqtz6h\", \"SubnetId\": \"subnet-8prst8z2\", \"AsVpcGateway\":
false}, \"SecurityGroupIds\": [\"sg-bbxj9vro\"], \"LoginSettings\": {\"KeyIds\":
[\"skey-puewei5v\"]}, \"ImageId\": \"img-eb30mz89\", \"OsName\": \"TencentOS
Server 3.1 (TK4)\", \"DefaultLoginUser\": \"root\", \"DefaultLoginPort\":
22, \"RenewFlag\": null, \"CreatedTime\": \"2022-05-10T15:19:55Z\", \"ExpiredTime\":
null, \"Tags\": [], \"PlatformProjectId\": null, \"DisasterRecoverGroupId\":
\"\", \"DedicatedClusterId\": \"\", \"CamRoleName\": \"\", \"LatestOperation\":
\"ResetInstancesPassword\", \"LatestOperationState\": \"SUCCESS\", \"LatestOperationRequestId\":
\"4486141d-9d4b-43a9-ba33-1deb9588e1dd\", \"IsolatedSource\": \"NOTISOLATED\",
\"HpcClusterId\": \"\", \"DisableApiTermination\": false, \"RdmaIpAddresses\":
{\"Placement\": {\"Zone\": \"ap-singapore-1\", \"HostId\": null, \"ProjectId\":
0}, \"InstanceId\": \"ins-5xpbvkm8\", \"Uuid\": \"9fc07c81-4425-40bb-8c19-6842f1d00406\",
\"OperatorUin\": \"\", \"InstanceState\": \"RUNNING\", \"RestrictState\":
\"NORMAL\", \"InstanceType\": \"S5.4XLARGE64\", \"CPU\": 16, \"Memory\": 64,
\"InstanceName\": \"tke_cls-4bqctahq_worker\", \"InstanceChargeType\": \"POSTPAID_BY_HOUR\",
\"SystemDisk\": {\"DiskType\": \"CLOUD_PREMIUM\", \"DiskId\": \"disk-0eu2baru\",
\"DiskSize\": 100, \"Encrypt\": false, \"KmsKeyId\": null, \"ThroughputPerformance\":
0, \"CdcId\": null}, \"DataDisks\": [{\"DiskType\": \"CLOUD_PREMIUM\", \"DiskId\":
\"disk-0fdcml0u\", \"DiskSize\": 10, \"DeleteWithInstance\": false, \"Encrypt\":
false, \"KmsKeyId\": null, \"ThroughputPerformance\": 0, \"CdcId\": null},
{\"DiskType\": \"CLOUD_PREMIUM\", \"DiskId\": \"disk-klhdsopu\", \"DiskSize\":
10, \"DeleteWithInstance\": false, \"Encrypt\": false, \"KmsKeyId\": null,
\"ThroughputPerformance\": 0, \"CdcId\": null}, {\"DiskType\": \"CLOUD_PREMIUM\",
\"DiskId\": \"disk-peuqflbc\", \"DiskSize\": 10, \"DeleteWithInstance\": false,
\"Encrypt\": false, \"KmsKeyId\": null, \"ThroughputPerformance\": 0, \"CdcId\":
null}, {\"DiskType\": \"CLOUD_PREMIUM\", \"DiskId\": \"disk-9uwxw17k\", \"DiskSize\":
10, \"DeleteWithInstance\": false, \"Encrypt\": false, \"KmsKeyId\": null,
\"ThroughputPerformance\": 0, \"CdcId\": null}], \"PrivateIpAddresses\": [\"172.17.0.10\"],
\"PublicIpAddresses\": [\"43.156.7.63\"], \"IPv6Addresses\": null, \"InternetAccessible\":
{\"InternetMaxBandwidthOut\": 100, \"InternetChargeType\": \"TRAFFIC_POSTPAID_BY_HOUR\"},
\"VirtualPrivateCloud\": {\"VpcId\": \"vpc-jls90tbv\", \"SubnetId\": \"subnet-2tvpnc9w\",
\"AsVpcGateway\": false}, \"SecurityGroupIds\": [\"sg-epfytda6\"], \"LoginSettings\":
{\"KeyIds\": [\"skey-3cxbfyxp\"]}, \"ImageId\": \"img-pi0ii46r\", \"OsName\":
\"Ubuntu Server 18.04.1 LTS 64\u4F4D\", \"DefaultLoginUser\": \"ubuntu\",
\"DefaultLoginPort\": 22, \"RenewFlag\": null, \"CreatedTime\": \"2022-02-24T00:55:54Z\",
\"ExpiredTime\": null, \"Tags\": [], \"PlatformProjectId\": null, \"DisasterRecoverGroupId\":
\"\", \"DedicatedClusterId\": \"\", \"CamRoleName\": \"\", \"LatestOperation\":
null, \"LatestOperationState\": null, \"LatestOperationRequestId\": null,
\"IsolatedSource\": \"NOTISOLATED\", \"HpcClusterId\": \"\", \"DisableApiTermination\":
\"NOT_APPLICABLE\"}], \"RequestId\": \"2bcc2139-2f9d-46dc-9d00-443108d8962e\"}}"
headers:
Connection:
- keep-alive
Content-Length:
- '12942'
Content-Type:
- application/json
Date:
- Tue, 27 Sep 2022 09:36:48 GMT
Server:
- nginx
status:
code: 200
message: OK
- request:
body: '{"Offset": 20, "Limit": 20}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '27'
Content-Type:
- application/json
Host:
- cvm.ap-singapore.tencentcloudapi.com
User-Agent:
- python-requests/2.28.1
X-TC-Action:
- DescribeInstances
X-TC-Region:
- ap-singapore
X-TC-Version:
- '2017-03-12'
method: POST
uri: path_to_url
response:
body:
string: '{"Response": {"TotalCount": 6, "InstanceSet": [], "RequestId": "ffb4b0c1-ed59-49ef-87a2-83156b807d12"}}'
headers:
Connection:
- keep-alive
Content-Length:
- '103'
Content-Type:
- application/json
Date:
- Tue, 27 Sep 2022 09:36:48 GMT
Server:
- nginx
status:
code: 200
message: OK
version: 1
```
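The cassette above records an offset/limit-paginated `DescribeInstances` call: a first request with `Offset: 0, Limit: 20` returns six instances, and a second with `Offset: 20` returns an empty `InstanceSet`, ending the scan. The following is a minimal, hypothetical sketch of that pagination loop; `fetch_page` is an assumed stand-in for the real API client, not Tencent Cloud's SDK.

```python
def list_all_instances(fetch_page, limit=20):
    """Collect every instance by paging with Offset/Limit until a page is empty."""
    instances = []
    offset = 0
    while True:
        response = fetch_page({"Offset": offset, "Limit": limit})["Response"]
        page = response["InstanceSet"]
        if not page:  # the cassette's second response: empty InstanceSet
            break
        instances.extend(page)
        offset += limit
    return instances

# Replay the two recorded pages: six instances, then an empty set.
recorded = [
    {"Response": {"TotalCount": 6,
                  "InstanceSet": [{"InstanceId": f"ins-{i}"} for i in range(6)]}},
    {"Response": {"TotalCount": 6, "InstanceSet": []}},
]
pages = iter(recorded)
result = list_all_instances(lambda body: next(pages))
print(len(result))  # 6
```

Stopping on an empty page (rather than comparing against `TotalCount`) matches what the cassette shows: the client issued a second request even though the first page already contained all six instances.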
|
```tsx
import * as React from 'react';
import createSvgIcon from '../utils/createSvgIcon';
const AllAppsMirroredIcon = createSvgIcon({
svg: ({ classes }) => (
<svg xmlns="path_to_url" viewBox="0 0 2048 2048" className={classes.svg} focusable="false">
<path d="M2048 your_sha256_hashyour_sha256_hashm-128-256v128h-128V128h128zm-512 640V640H256v128h1152zm-896 384v128h896v-128H512zm896-1024H0v128h1408V128zm640 1792v-384h-384v384h384zm-128-256v128h-128v-128h128zm-512 128v-128H256v128h1152z" />
</svg>
),
displayName: 'AllAppsMirroredIcon',
});
export default AllAppsMirroredIcon;
```
|
Sahib Kaur (d. 1841) was the second wife of Nau Nihal Singh, third Maharaja of the Sikh Empire, and the mother of his son, Jawahar Singh.
Biography
Kaur was born to Sardar Gurdit Singh Gilwaliwala of Amritsar. She became the second wife of Nau Nihal Singh, who was second in line of succession to the throne of Punjab after his father, Kharak Singh. Nau Nihal Singh was the only son of Maharaja Kharak Singh and his queen consort, Maharani Chand Kaur, and a grandson of the legendary Maharaja Ranjit Singh and his queen consort, Maharani Datar Kaur of the Nakai Misl.
After the accession of Kharak Singh as the Maharaja, Kunwar Nau Nihal Singh became the Yuvraj (Crown Prince).
The growing influence of Chet Singh Bajwa, a relative of Inder Kaur Bajwa, the fourth wife of Maharaja Kharak Singh, over the newly crowned king began to strain his relationship with the Lahore Darbar as well as with his own son. It was decided to kill Chet Singh Bajwa, to divest the Maharaja of all powers, and to entrust Sri Tikka Kanwar Naunihal Singh with the responsibility of running the administration. From October 8, 1839, Kharak Singh was deprived of all his administrative powers, and all authority passed to Nau Nihal Singh, who thus began his reign. Kharak Singh died on November 5, 1840, and Nau Nihal Singh met with a fatal accident on the very day of his father's cremation. The collapse of the Hazuri Bagh gateway remains a mystery: many think it was engineered by the Dogras, the British, or the partisans of Chet Singh Bajwa. Dr Honigberger states that Nau Nihal Singh was alive when he was taken inside the fort by Dhian Singh Dogra. According to Alexander Gardner, who was just steps behind Nau Nihal when the incident took place, the prince had sustained only minor injuries: he was well enough to walk on his own, and agreed to be carried on a stretcher only at Gardner's insistence. However, when the court physician Johann Martin Honigberger came to attend Nau Nihal in a tent, he observed that the prince's skull had been crushed and that the bedsheet was covered with blood and brain tissue. Dhian Singh insisted that the prince had suffered these injuries during the alleged accident in Hazuri Bagh. Nau Nihal died hours later, although the courtiers did not make the news public until three days later in an attempt to avoid panic. According to Gardner, five artillerymen had carried Nau Nihal from Hazuri Bagh to the tent: two of them died under mysterious circumstances, two went on leave and never rejoined the service, and one disappeared without explanation.
Gardner puts the blame on Dhian Singh, and Honigberger too suggests that Dhian Singh was responsible, along with people who were loyal to Chet Singh Bajwa. The official court historian, Sohan Lal Suri, labels it a conspiracy by Sher Singh and Dhian Singh to claim the throne of the Punjab.
After the deaths of Kharak Singh and Nau Nihal Singh, Maharani Chand Kaur took over the reins of power, claiming that since Sahib Kaur was pregnant, her child would become the next ruler. On 2 December 1840 Chand Kaur was proclaimed Maharani of the Punjab with the title Malika Muqaddasa (Empress Immaculate), becoming the only female ruler of the Sikh Empire. Sher Singh, whose claim to the throne was supported by Dhian Singh Dogra, left the capital after she and her supporters gained complete control of the administration. But Sher Singh still had the support of the army, and in 1841 he arrived in Lahore and secured a ceasefire. Chand Kaur was persuaded to accept a jagir and relinquish her claim to the throne, and she retired to her late son's palace in Lahore. Sahib Kaur gave birth to a stillborn son, who was named Shahzada Jawahar Singh, and she passed away. This ended any justification for a renewed claim to the regency by Chand Kaur, and she too was killed, in 1842.
Sohan Lal Suri notes with great horror how Sher Singh secretly ordered "hot medicines" to be administered to Nau Nihal Singh's widows to make them miscarry any pregnancies, so as to secure the throne for himself. Dr Honigberger claims that, as a doctor, he was always suspicious of the manner of the miscarriage of Sahib Kaur's baby boy. Maharani Nanaki Kaur was also pregnant and was likewise given medicines, which resulted in a miscarriage.
Her samadhi is located in the Royal Garden, between the samadhi of her mother-in-law, Chand Kaur, and that of her grandmother-in-law, Datar Kaur, lovingly called Mai Nakain by Maharaja Ranjit Singh.
References
1841 deaths
Deaths in childbirth
|
```c
/* Perform optimizations on tree structure.
Written by Mark Mitchell (mark@codesourcery.com).
This file is part of GNU CC.
GNU CC is free software; you can redistribute it and/or modify it
under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.
GNU CC is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
You should have received a copy of the GNU General Public License
along with GNU CC; see the file COPYING. If not, write to the Free
Software Foundation, 59 Temple Place - Suite 330, Boston, MA
02111-1307, USA. */
#include "config.h"
#include "system.h"
#include "tree.h"
#include "cp-tree.h"
#include "rtl.h"
#include "insn-config.h"
#include "input.h"
#include "integrate.h"
#include "toplev.h"
#include "varray.h"
#include "ggc.h"
#include "params.h"
#include "hashtab.h"
#include "debug.h"
#include "tree-inline.h"
/* Prototypes. */
static tree calls_setjmp_r PARAMS ((tree *, int *, void *));
static void update_cloned_parm PARAMS ((tree, tree));
static void dump_function PARAMS ((enum tree_dump_index, tree));
/* Optimize the body of FN. */
void
optimize_function (fn)
tree fn;
{
dump_function (TDI_original, fn);
/* While in this function, we may choose to go off and compile
another function. For example, we might instantiate a function
in the hopes of inlining it. Normally, that wouldn't trigger any
actual RTL code-generation -- but it will if the template is
actually needed. (For example, if its address is taken, or if
some other function already refers to the template.) If
code-generation occurs, then garbage collection will occur, so we
must protect ourselves, just as we do while building up the body
of the function. */
++function_depth;
if (flag_inline_trees
/* We do not inline thunks, as (a) the backend tries to optimize
the call to the thunkee, (b) tree based inlining breaks that
optimization, (c) virtual functions are rarely inlineable,
and (d) TARGET_ASM_OUTPUT_MI_THUNK is there to DTRT anyway. */
&& !DECL_THUNK_P (fn))
{
optimize_inline_calls (fn);
dump_function (TDI_inlined, fn);
}
/* Balance the increment of function_depth above. */
--function_depth;
dump_function (TDI_optimized, fn);
}
/* Called from calls_setjmp_p via walk_tree. */
static tree
calls_setjmp_r (tp, walk_subtrees, data)
tree *tp;
int *walk_subtrees ATTRIBUTE_UNUSED;
void *data ATTRIBUTE_UNUSED;
{
/* We're only interested in FUNCTION_DECLS. */
if (TREE_CODE (*tp) != FUNCTION_DECL)
return NULL_TREE;
return setjmp_call_p (*tp) ? *tp : NULL_TREE;
}
/* Returns nonzero if FN calls `setjmp' or some other function that
can return more than once. This function is conservative; it may
occasionally return a nonzero value even when FN does not actually
call `setjmp'. */
int
calls_setjmp_p (fn)
tree fn;
{
return walk_tree_without_duplicates (&DECL_SAVED_TREE (fn),
calls_setjmp_r,
NULL) != NULL_TREE;
}
/* CLONED_PARM is a copy of PARM, generated for a cloned constructor
or destructor. Update it to ensure that the source position for
the cloned parameter matches that for the original, and that the
debugging generation code will be able to find the original PARM. */
static void
update_cloned_parm (parm, cloned_parm)
tree parm;
tree cloned_parm;
{
DECL_ABSTRACT_ORIGIN (cloned_parm) = parm;
/* We may have taken its address. */
TREE_ADDRESSABLE (cloned_parm) = TREE_ADDRESSABLE (parm);
/* The definition might have different constness. */
TREE_READONLY (cloned_parm) = TREE_READONLY (parm);
TREE_USED (cloned_parm) = TREE_USED (parm);
/* The name may have changed from the declaration. */
DECL_NAME (cloned_parm) = DECL_NAME (parm);
DECL_SOURCE_LOCATION (cloned_parm) = DECL_SOURCE_LOCATION (parm);
}
/* FN is a function that has a complete body. Clone the body as
necessary. Returns nonzero if there's no longer any need to
process the main body. */
int
maybe_clone_body (fn)
tree fn;
{
tree clone;
int first = 1;
/* We only clone constructors and destructors. */
if (!DECL_MAYBE_IN_CHARGE_CONSTRUCTOR_P (fn)
&& !DECL_MAYBE_IN_CHARGE_DESTRUCTOR_P (fn))
return 0;
/* Emit the DWARF1 abstract instance. */
(*debug_hooks->deferred_inline_function) (fn);
/* We know that any clones immediately follow FN in the TYPE_METHODS
list. */
for (clone = TREE_CHAIN (fn);
clone && DECL_CLONED_FUNCTION_P (clone);
clone = TREE_CHAIN (clone), first = 0)
{
tree parm;
tree clone_parm;
int parmno;
splay_tree decl_map;
/* Update CLONE's source position information to match FN's. */
DECL_SOURCE_LOCATION (clone) = DECL_SOURCE_LOCATION (fn);
DECL_INLINE (clone) = DECL_INLINE (fn);
DID_INLINE_FUNC (clone) = DID_INLINE_FUNC (fn);
DECL_DECLARED_INLINE_P (clone) = DECL_DECLARED_INLINE_P (fn);
DECL_COMDAT (clone) = DECL_COMDAT (fn);
DECL_WEAK (clone) = DECL_WEAK (fn);
DECL_ONE_ONLY (clone) = DECL_ONE_ONLY (fn);
DECL_SECTION_NAME (clone) = DECL_SECTION_NAME (fn);
DECL_USE_TEMPLATE (clone) = DECL_USE_TEMPLATE (fn);
DECL_EXTERNAL (clone) = DECL_EXTERNAL (fn);
DECL_INTERFACE_KNOWN (clone) = DECL_INTERFACE_KNOWN (fn);
DECL_NOT_REALLY_EXTERN (clone) = DECL_NOT_REALLY_EXTERN (fn);
TREE_PUBLIC (clone) = TREE_PUBLIC (fn);
/* Adjust the parameter names and locations. */
parm = DECL_ARGUMENTS (fn);
clone_parm = DECL_ARGUMENTS (clone);
/* Update the `this' parameter, which is always first. */
update_cloned_parm (parm, clone_parm);
parm = TREE_CHAIN (parm);
clone_parm = TREE_CHAIN (clone_parm);
if (DECL_HAS_IN_CHARGE_PARM_P (fn))
parm = TREE_CHAIN (parm);
if (DECL_HAS_VTT_PARM_P (fn))
parm = TREE_CHAIN (parm);
if (DECL_HAS_VTT_PARM_P (clone))
clone_parm = TREE_CHAIN (clone_parm);
for (; parm;
parm = TREE_CHAIN (parm), clone_parm = TREE_CHAIN (clone_parm))
{
/* Update this parameter. */
update_cloned_parm (parm, clone_parm);
/* We should only give unused information for one clone. */
if (!first)
TREE_USED (clone_parm) = 1;
}
/* Start processing the function. */
push_to_top_level ();
start_function (NULL_TREE, clone, NULL_TREE, SF_PRE_PARSED);
/* Remap the parameters. */
decl_map = splay_tree_new (splay_tree_compare_pointers, NULL, NULL);
for (parmno = 0,
parm = DECL_ARGUMENTS (fn),
clone_parm = DECL_ARGUMENTS (clone);
parm;
++parmno,
parm = TREE_CHAIN (parm))
{
/* Map the in-charge parameter to an appropriate constant. */
if (DECL_HAS_IN_CHARGE_PARM_P (fn) && parmno == 1)
{
tree in_charge;
in_charge = in_charge_arg_for_name (DECL_NAME (clone));
splay_tree_insert (decl_map,
(splay_tree_key) parm,
(splay_tree_value) in_charge);
}
else if (DECL_ARTIFICIAL (parm)
&& DECL_NAME (parm) == vtt_parm_identifier)
{
/* For a subobject constructor or destructor, the next
argument is the VTT parameter. Remap the VTT_PARM
from the CLONE to this parameter. */
if (DECL_HAS_VTT_PARM_P (clone))
{
DECL_ABSTRACT_ORIGIN (clone_parm) = parm;
splay_tree_insert (decl_map,
(splay_tree_key) parm,
(splay_tree_value) clone_parm);
clone_parm = TREE_CHAIN (clone_parm);
}
/* Otherwise, map the VTT parameter to `NULL'. */
else
{
splay_tree_insert (decl_map,
(splay_tree_key) parm,
(splay_tree_value) null_pointer_node);
}
}
/* Map other parameters to their equivalents in the cloned
function. */
else
{
splay_tree_insert (decl_map,
(splay_tree_key) parm,
(splay_tree_value) clone_parm);
clone_parm = TREE_CHAIN (clone_parm);
}
}
/* Clone the body. */
clone_body (clone, fn, decl_map);
/* There are as many statements in the clone as in the
original. */
DECL_NUM_STMTS (clone) = DECL_NUM_STMTS (fn);
/* Clean up. */
splay_tree_delete (decl_map);
/* The clone can throw iff the original function can throw. */
cp_function_chain->can_throw = !TREE_NOTHROW (fn);
/* Now, expand this function into RTL, if appropriate. */
finish_function (0);
BLOCK_ABSTRACT_ORIGIN (DECL_INITIAL (clone)) = DECL_INITIAL (fn);
expand_body (clone);
pop_from_top_level ();
}
/* We don't need to process the original function any further. */
return 1;
}
/* Dump FUNCTION_DECL FN as tree dump PHASE. */
static void
dump_function (phase, fn)
enum tree_dump_index phase;
tree fn;
{
FILE *stream;
int flags;
stream = dump_begin (phase, &flags);
if (stream)
{
fprintf (stream, "\n;; Function %s",
decl_as_string (fn, TFF_DECL_SPECIFIERS));
fprintf (stream, " (%s)\n",
decl_as_string (DECL_ASSEMBLER_NAME (fn), 0));
fprintf (stream, ";; enabled by -%s\n", dump_flag_name (phase));
fprintf (stream, "\n");
dump_node (fn, TDF_SLIM | flags, stream);
dump_end (phase, stream);
}
}
```
|
```typescript
import {JsonHookContext, JsonSchema} from "@tsed/schema";
export function alterOnSerialize(schema: JsonSchema, value: any, options: JsonHookContext) {
return schema.$hooks.alter("onSerialize", value, [options]);
}
```
|
```html
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "path_to_url">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=US-ASCII">
<title>Struct multiplies</title>
<link rel="stylesheet" href="../../../../../doc/src/boostbook.css" type="text/css">
<meta name="generator" content="DocBook XSL Stylesheets V1.79.1">
<link rel="home" href="../../../index.html" title="The Boost C++ Libraries BoostBook Documentation Subset">
<link rel="up" href="../../../accumulators/reference.html#header.boost.accumulators.numeric.functional_hpp" title="Header <boost/accumulators/numeric/functional.hpp>">
<link rel="prev" href="minus.html" title="Struct minus">
<link rel="next" href="divides.html" title="Struct divides">
</head>
<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
<table cellpadding="2" width="100%"><tr>
<td valign="top"><img alt="Boost C++ Libraries" width="277" height="86" src="../../../../../boost.png"></td>
<td align="center"><a href="../../../../../index.html">Home</a></td>
<td align="center"><a href="../../../../../libs/libraries.htm">Libraries</a></td>
<td align="center"><a href="path_to_url">People</a></td>
<td align="center"><a href="path_to_url">FAQ</a></td>
<td align="center"><a href="../../../../../more/index.htm">More</a></td>
</tr></table>
<hr>
<div class="spirit-nav">
<a accesskey="p" href="minus.html"><img src="../../../../../doc/src/images/prev.png" alt="Prev"></a><a accesskey="u" href="../../../accumulators/reference.html#header.boost.accumulators.numeric.functional_hpp"><img src="../../../../../doc/src/images/up.png" alt="Up"></a><a accesskey="h" href="../../../index.html"><img src="../../../../../doc/src/images/home.png" alt="Home"></a><a accesskey="n" href="divides.html"><img src="../../../../../doc/src/images/next.png" alt="Next"></a>
</div>
<div class="refentry">
<a name="boost.numeric.op.multiplies"></a><div class="titlepage"></div>
<div class="refnamediv">
<h2><span class="refentrytitle">Struct multiplies</span></h2>
<p>boost::numeric::op::multiplies</p>
</div>
<h2 xmlns:rev="path_to_url~gregod/boost/tools/doc/revision" class="refsynopsisdiv-title">Synopsis</h2>
<div xmlns:rev="path_to_url~gregod/boost/tools/doc/revision" class="refsynopsisdiv"><pre class="synopsis"><span class="comment">// In header: <<a class="link" href="../../../accumulators/reference.html#header.boost.accumulators.numeric.functional_hpp" title="Header <boost/accumulators/numeric/functional.hpp>">boost/accumulators/numeric/functional.hpp</a>>
</span>
<span class="keyword">struct</span> <a class="link" href="multiplies.html" title="Struct multiplies">multiplies</a> <span class="special">{</span>
<span class="special">}</span><span class="special">;</span></pre></div>
</div>
<table xmlns:rev="path_to_url~gregod/boost/tools/doc/revision" width="100%"><tr>
<td align="left"></td>
<td align="right"><div class="copyright-footer"><p>Distributed under the Boost Software License, Version 1.0. (See accompanying
file LICENSE_1_0.txt or copy at <a href="path_to_url" target="_top">path_to_url</a>)
</p>
</div></td>
</tr></table>
<hr>
<div class="spirit-nav">
<a accesskey="p" href="minus.html"><img src="../../../../../doc/src/images/prev.png" alt="Prev"></a><a accesskey="u" href="../../../accumulators/reference.html#header.boost.accumulators.numeric.functional_hpp"><img src="../../../../../doc/src/images/up.png" alt="Up"></a><a accesskey="h" href="../../../index.html"><img src="../../../../../doc/src/images/home.png" alt="Home"></a><a accesskey="n" href="divides.html"><img src="../../../../../doc/src/images/next.png" alt="Next"></a>
</div>
</body>
</html>
```
|
```csharp
//
//
// Microsoft Cognitive Services: path_to_url
//
// Microsoft Cognitive Services Github:
// path_to_url
//
// All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining
// a copy of this software and associated documentation files (the
// "Software"), to deal in the Software without restriction, including
// without limitation the rights to use, copy, modify, merge, publish,
// distribute, sublicense, and/or sell copies of the Software, and to
// permit persons to whom the Software is furnished to do so, subject to
// the following conditions:
//
// The above copyright notice and this permission notice shall be
// included in all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
//
using IntelligentKioskSample.Controls.Overlays.Primitives;
using IntelligentKioskSample.Models;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;
using ServiceHelpers;
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.ComponentModel;
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Navigation;
namespace IntelligentKioskSample.Views.FaceApiExplorer
{
[KioskExperience(Id = "Face API Explorer",
DisplayName = "Face API Explorer",
Description = "See ages, genders, and landmarks from faces in an image",
ImagePath = "ms-appx:/Assets/DemoGallery/Face API Explorer.jpg",
ExperienceType = ExperienceType.Guided | ExperienceType.Business,
TechnologiesUsed = TechnologyType.BingAutoSuggest | TechnologyType.BingImages | TechnologyType.Face | TechnologyType.Emotion,
TechnologyArea = TechnologyAreaType.Vision,
DateUpdated = "2019/06/04",
DateAdded = "2015/10/16")]
public sealed partial class FaceApiExplorerPage : Page, INotifyPropertyChanged
{
const string NoneDesc = "None";
public static bool ShowAgeAndGender { get { return SettingsHelper.Instance.ShowAgeAndGender; } }
public ObservableCollection<ImageCrop<DetectedFaceViewModel>> DetectedFaceCollection { get; set; } = new ObservableCollection<ImageCrop<DetectedFaceViewModel>>();
public TabHeader AppearanceTab { get; set; } = new TabHeader() { Name = "Appearance" };
public TabHeader EmotionTab { get; set; } = new TabHeader() { Name = "Emotion" };
public TabHeader PoseTab { get; set; } = new TabHeader() { Name = "Pose" };
public TabHeader ImageQualityTab { get; set; } = new TabHeader() { Name = "Image quality" };
private PivotItem selectedTab;
public PivotItem SelectedTab
{
get { return selectedTab; }
set
{
selectedTab = value;
NotifyPropertyChanged("SelectedTab");
}
}
private ImageCrop<DetectedFaceViewModel> currentDetectedFace;
public ImageCrop<DetectedFaceViewModel> CurrentDetectedFace
{
get { return currentDetectedFace; }
set
{
currentDetectedFace = value;
NotifyPropertyChanged("CurrentDetectedFace");
UpdateResultDetails();
}
}
public event PropertyChangedEventHandler PropertyChanged;
public void NotifyPropertyChanged(string name)
{
PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
}
public FaceApiExplorerPage()
{
this.InitializeComponent();
this.DataContext = this;
}
protected override async void OnNavigatedTo(NavigationEventArgs e)
{
await this.imagePicker.SetSuggestedImageList(
new Uri("ms-appx:///Assets/DemoSamples/FaceApiExplorer/1.jpg"),
new Uri("ms-appx:///Assets/DemoSamples/FaceApiExplorer/2.jpg"),
new Uri("ms-appx:///Assets/DemoSamples/FaceApiExplorer/3.jpg"),
new Uri("ms-appx:///Assets/DemoSamples/FaceApiExplorer/4.jpg"),
new Uri("ms-appx:///Assets/DemoSamples/FaceApiExplorer/5.jpg"));
}
private async void OnImageSearchCompleted(object sender, IEnumerable<ImageAnalyzer> args)
{
ImageAnalyzer image = args.First();
image.ShowDialogOnFaceApiErrors = true;
//detect faces
OverlayPresenter.ItemsSource = null;
OverlayPresenter.Source = await image.GetImageSource();
if (image.DetectedFaces == null)
{
ProgressIndicator.IsActive = true;
DisplayProcessingUI();
// detect faces
await image.DetectFacesAsync(true, true, new[] { FaceAttributeType.Accessories, FaceAttributeType.Age, FaceAttributeType.Blur, FaceAttributeType.Emotion, FaceAttributeType.Exposure, FaceAttributeType.FacialHair, FaceAttributeType.Gender, FaceAttributeType.Glasses, FaceAttributeType.Hair, FaceAttributeType.HeadPose, FaceAttributeType.Makeup, FaceAttributeType.Noise, FaceAttributeType.Occlusion, FaceAttributeType.Smile });
// try to identify the faces
await image.IdentifyFacesAsync();
ProgressIndicator.IsActive = false;
}
//show results
OverlayPresenter.ItemsSource = image.DetectedFaces.Select(i => new OverlayInfo() { Rect = i.FaceRectangle.ToRect() }).ToArray();
await UpdateResultsAsync(image);
}
private async Task UpdateResultsAsync(ImageAnalyzer img)
{
try
{
DetectedFaceCollection.Clear();
if (img.DetectedFaces.Any())
{
// extract crops from the image
IList<DetectedFaceViewModel> detectedFaceViewModels = GetDetectedFaceViewModels(img.DetectedFaces, img.IdentifiedPersons);
var stream = img.ImageUrl == null ? await img.GetImageStreamCallback() : new MemoryStream(await new HttpClient().GetByteArrayAsync(img.ImageUrl));
using (stream)
{
var faces = await Util.GetImageCrops(detectedFaceViewModels, i => i.FaceRectangle.ToRect(), stream.AsRandomAccessStream());
if (faces != null)
{
DetectedFaceCollection.AddRange(faces);
}
}
}
CurrentDetectedFace = DetectedFaceCollection.FirstOrDefault();
this.notFoundGrid.Visibility = DetectedFaceCollection.Any() ? Visibility.Collapsed : Visibility.Visible;
}
finally
{
this.progressRing.IsActive = false;
}
}
private void UpdateResultDetails()
{
DetectedFaceViewModel detectedFace = CurrentDetectedFace?.Entity;
FaceAttributes faceAttributes = detectedFace?.FaceAttributes;
if (faceAttributes != null)
{
if (ShowAgeAndGender)
{
// gender
this.genderTextBlock.Text = faceAttributes.Gender.HasValue ? Util.UppercaseFirst(faceAttributes.Gender.ToString()) : NoneDesc;
// age
this.ageTextBlock.Text = faceAttributes.Age.HasValue ? faceAttributes.Age.ToString() : NoneDesc;
}
// hair color
this.haircolorsGridView.ItemsSource = faceAttributes.Hair?.HairColor != null && faceAttributes.Hair.HairColor.Any()
? faceAttributes.Hair.HairColor.Where(x => x.Confidence >= 0.6).Select(x => new { Confidence = string.Format("({0}%)", Math.Round(x.Confidence * 100)), HairColor = x.Color.ToString() })
: (object)(new[] { new { HairColor = NoneDesc } });
// facial hair
var facialHair = new List<KeyValuePair<string, double>>()
{
new KeyValuePair<string, double>("Moustache", faceAttributes.FacialHair?.Moustache ?? 0),
new KeyValuePair<string, double>("Beard", faceAttributes.FacialHair?.Beard ?? 0),
new KeyValuePair<string, double>("Sideburns", faceAttributes.FacialHair?.Sideburns ?? 0)
};
if (facialHair.Any(x => x.Value > 0))
{
this.facialHairGridView.ItemsSource = facialHair.Select(x => new { Value = 100 * x.Value, Type = x.Key });
this.facialHairGridView.Visibility = Visibility.Visible;
this.facialHairTextBlock.Visibility = Visibility.Collapsed;
}
else
{
this.facialHairTextBlock.Text = NoneDesc;
this.facialHairGridView.Visibility = Visibility.Collapsed;
this.facialHairTextBlock.Visibility = Visibility.Visible;
}
// glasses
this.glassesTextBlock.Text = faceAttributes.Glasses.HasValue && faceAttributes.Glasses != GlassesType.NoGlasses ? faceAttributes.Glasses.ToString() : NoneDesc;
// makeup
var makeup = new List<string>()
{
faceAttributes.Makeup != null && faceAttributes.Makeup.EyeMakeup ? "Eye" : string.Empty,
faceAttributes.Makeup != null && faceAttributes.Makeup.LipMakeup ? "Lip" : string.Empty
};
this.makeupTextBlock.Text = makeup.Any(x => !string.IsNullOrEmpty(x)) ? string.Join(", ", makeup.Where(x => !string.IsNullOrEmpty(x))) : NoneDesc;
// accessories
this.accessoriesGridView.ItemsSource = faceAttributes.Accessories != null && faceAttributes.Accessories.Any()
? faceAttributes.Accessories.Where(x => x.Confidence >= 0.6).Select(x => new { Confidence = string.Format("({0}%)", Math.Round(x.Confidence * 100)), Accessory = x.Type.ToString() })
: (object)(new[] { new { Accessory = NoneDesc } });
// emotion
var emotionList = new List<KeyValuePair<string, double>>()
{
new KeyValuePair<string, double>("Anger", faceAttributes.Emotion?.Anger ?? 0),
new KeyValuePair<string, double>("Contempt", faceAttributes.Emotion?.Contempt ?? 0),
new KeyValuePair<string, double>("Disgust", faceAttributes.Emotion?.Disgust ?? 0),
new KeyValuePair<string, double>("Fear", faceAttributes.Emotion?.Fear ?? 0),
new KeyValuePair<string, double>("Happiness", faceAttributes.Emotion?.Happiness ?? 0),
new KeyValuePair<string, double>("Neutral", faceAttributes.Emotion?.Neutral ?? 0),
new KeyValuePair<string, double>("Sadness", faceAttributes.Emotion?.Sadness ?? 0),
new KeyValuePair<string, double>("Surprise", faceAttributes.Emotion?.Surprise ?? 0)
};
var detectedEmotions = emotionList.Where(x => x.Value > 0).Select(x => new { Value = 100 * x.Value, Type = x.Key });
string notDetectedEmotions = string.Join(", ", emotionList.Where(x => x.Value <= 0).Select(x => x.Key));
if (detectedEmotions.Any())
{
this.detectedEmotionGridView.ItemsSource = detectedEmotions;
this.detectedEmotionGridView.Visibility = Visibility.Visible;
this.detectedEmotionTextBlock.Visibility = Visibility.Collapsed;
}
else
{
this.detectedEmotionTextBlock.Text = NoneDesc;
this.detectedEmotionTextBlock.Visibility = Visibility.Visible;
this.detectedEmotionGridView.Visibility = Visibility.Collapsed;
}
this.notDetectedEmotionTextBlock.Text = !string.IsNullOrEmpty(notDetectedEmotions) ? notDetectedEmotions : NoneDesc;
// pose
double rollAngle = faceAttributes.HeadPose?.Roll ?? 0;
string rollDirection = rollAngle > 0 ? "right" : "left";
this.headTiltTextBlock.Text = rollAngle != 0 ? $"{Math.Abs(rollAngle)} {rollDirection}" : string.Empty;
this.headTiltControl.DrawFacePoseData(rollAngle, angleArr: new double[] { -60, -30, 0, 30, 60 });
double pitchAngle = faceAttributes.HeadPose?.Pitch ?? 0;
string pitchDirection = pitchAngle > 0 ? "up" : "down";
this.chinAngleTextBlock.Text = pitchAngle != 0 ? $"{Math.Abs(pitchAngle)} {pitchDirection}" : string.Empty;
this.chinAngleControl.DrawFacePoseData(pitchAngle, angleArr: new double[] { 60, 30, 0, -30, -60 });
double yawAngle = faceAttributes.HeadPose?.Yaw ?? 0;
string yawDirection = yawAngle > 0 ? "right" : "left";
this.faceRotationTextBlock.Text = yawAngle != 0 ? $"{Math.Abs(yawAngle)} {yawDirection}" : string.Empty;
this.faceRotationControl.DrawFacePoseData(yawAngle, angleArr: new double[] { 60, 30, 0, -30, -60 });
// exposure
this.expouseTextBlock.Text = faceAttributes.Exposure?.ExposureLevel != null ? Util.UppercaseFirst(faceAttributes.Exposure.ExposureLevel.ToString()) : NoneDesc;
this.expouseProgressBar.Value = faceAttributes.Exposure != null ? 100 * faceAttributes.Exposure.Value : 0;
// blur
this.blurTextBlock.Text = faceAttributes.Blur?.BlurLevel != null ? Util.UppercaseFirst(faceAttributes.Blur.BlurLevel.ToString()) : NoneDesc;
this.blurProgressBar.Value = faceAttributes.Blur != null ? 100 * faceAttributes.Blur.Value : 0;
// noise
this.noiseTextBlock.Text = faceAttributes.Noise?.NoiseLevel != null ? Util.UppercaseFirst(faceAttributes.Noise.NoiseLevel.ToString()) : NoneDesc;
this.noiseProgressBar.Value = faceAttributes.Noise != null ? 100 * faceAttributes.Noise.Value : 0;
// occlusion
var occlusionList = new List<string>()
{
faceAttributes.Occlusion != null && faceAttributes.Occlusion.ForeheadOccluded ? "Forehead" : string.Empty,
faceAttributes.Occlusion != null && faceAttributes.Occlusion.EyeOccluded ? "Eye" : string.Empty,
faceAttributes.Occlusion != null && faceAttributes.Occlusion.MouthOccluded ? "Mouth" : string.Empty
};
this.occlusionTextBlock.Text = occlusionList.Any(x => !string.IsNullOrEmpty(x)) ? string.Join(", ", occlusionList.Where(x => !string.IsNullOrEmpty(x))) : NoneDesc;
}
ShowFaceLandmarks();
}
private IList<DetectedFaceViewModel> GetDetectedFaceViewModels(IEnumerable<DetectedFace> detectedFaces, IEnumerable<IdentifiedPerson> identifiedPersons)
{
var result = new List<DetectedFaceViewModel>();
foreach (var (face, index) in detectedFaces.Select((v, i) => (v, i)))
{
string faceTitle = $"Face {index + 1}";
IdentifiedPerson identifiedPerson = identifiedPersons?.FirstOrDefault(x => x.FaceId == face.FaceId);
if (identifiedPerson?.Person != null)
{
faceTitle = $"{identifiedPerson.Person.Name} ({(uint)Math.Round(identifiedPerson.Confidence * 100)}%)";
}
else if (ShowAgeAndGender)
{
var genderWithAge = new List<string>() { face.FaceAttributes.Gender?.ToString() ?? string.Empty, face.FaceAttributes.Age?.ToString() ?? string.Empty };
faceTitle = string.Join(", ", genderWithAge.Where(x => !string.IsNullOrEmpty(x)));
}
KeyValuePair<string, double> topEmotion = Util.EmotionToRankedList(face.FaceAttributes.Emotion).FirstOrDefault();
var faceDescription = new List<string>()
{
face.FaceAttributes.Hair.HairColor.Any() ? $"{face.FaceAttributes.Hair.HairColor.OrderByDescending(x => x.Confidence).First().Color} hair" : string.Empty,
topEmotion.Key != null ? $"{topEmotion.Key} expression" : string.Empty
};
result.Add(new DetectedFaceViewModel()
{
FaceRectangle = face.FaceRectangle,
FaceAttributes = face.FaceAttributes,
FaceLandmarks = face.FaceLandmarks,
IdentifiedPerson = identifiedPerson,
FaceTitle = faceTitle,
FaceDescription = string.Join(", ", faceDescription.Where(x => !string.IsNullOrEmpty(x)))
});
}
return result;
}
private void OnFaceImageSizeChanged(object sender, SizeChangedEventArgs e)
{
ShowFaceLandmarks();
}
private void OnShowFaceLandmarksToggleChanged(object sender, RoutedEventArgs e)
{
ShowFaceLandmarks();
}
private void ShowFaceLandmarks()
{
DetectedFaceViewModel face = CurrentDetectedFace?.Entity;
if (this.showFaceLandmarksToggle.IsOn && face != null)
{
double scaleX = this.faceImage.RenderSize.Width / face.FaceRectangle.Width;
double scaleY = this.faceImage.RenderSize.Height / face.FaceRectangle.Height;
this.faceLandmarksControl.DisplayFaceLandmarks(face.FaceRectangle, face.FaceLandmarks, scaleX, scaleY);
}
else
{
this.faceLandmarksControl.HideFaceLandmarks();
}
}
private void DisplayProcessingUI()
{
CurrentDetectedFace = null;
DetectedFaceCollection.Clear();
this.progressRing.IsActive = true;
}
}
public class DetectedFaceViewModel
{
public FaceRectangle FaceRectangle { get; set; }
public FaceLandmarks FaceLandmarks { get; set; }
public FaceAttributes FaceAttributes { get; set; }
public IdentifiedPerson IdentifiedPerson { get; set; }
public string FaceTitle { get; set; }
public string FaceDescription { get; set; }
}
}
```
|
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "path_to_url">
<plist version="1.0">
<dict>
<key>CFBundleDevelopmentRegion</key>
<string>English</string>
<key>CFBundleExecutable</key>
<string>${EXECUTABLE_NAME}</string>
<key>CFBundleIconFile</key>
<string></string>
<key>CFBundleIdentifier</key>
<string>$(PRODUCT_BUNDLE_IDENTIFIER)</string>
<key>CFBundleInfoDictionaryVersion</key>
<string>6.0</string>
<key>CFBundleName</key>
<string>${PRODUCT_NAME}</string>
<key>CFBundlePackageType</key>
<string>BNDL</string>
<key>CFBundleShortVersionString</key>
<string>1.1.2</string>
<key>CFBundleSignature</key>
<string>????</string>
<key>CFBundleVersion</key>
<string>2</string>
<key>NSPrincipalClass</key>
<string>PathElementEditor</string>
</dict>
</plist>
```
|
Pollenia margarita is a species of cluster fly in the family Polleniidae.
Distribution
P. margarita has been recorded in Austria.
References
Polleniidae
Insects described in 2021
Diptera of Europe
|
```javascript
/**
*
* This source code is licensed under the MIT license found in the
* LICENSE file in the root directory of this source tree.
*
* @format
*/
// used for testing react fiber
global.requestAnimationFrame = (callback) => global.setTimeout(callback, 0);
```
|
Danijela Dimitrovska (Serbian Cyrillic: Данијела Димитровска; born 22 April 1987) is a Serbian model. She began her modelling career by winning the Elite Model Look competition in Serbia in 2004.
Career
Modeling
Dimitrovska began her professional career as a model in 2004, aged 16, by winning the Elite Model Look competition in Serbia. She has since been a spokesperson for various brands, such as Vera Wang, Anne Fontaine, Benetton, BCBG Max Azria, Christian Lacroix, French Connection, Leonard, Marella, Miu Miu, Betty Jackson, YSL and Louis Vuitton. Her campaigns also include Emporio Armani, for which she posed alongside Giorgio Armani and other models for the Italian edition of Marie Claire, and a television commercial for YSL with French actor Vincent Cassel.
In November 2009, Dimitrovska was chosen as one of promotional models for Victoria's Secret. She had her first photo session in April 2010.
Other work
In 2011, Dimitrovska founded her own modeling agency, Models Inc., along with fellow model Marija Vujović and modeling agent Mirjana Udovičić. Since 2012, the agency has organized Elite Model Look competitions in Serbia and Montenegro.
Personal life
On 26 May 2012, Dimitrovska married Serbian television personality and musician Ognjen Amidžić. Their first child, a son Matija, was born on 29 November 2013. Dimitrovska is close friends with Brazilian model Gisele Bündchen.
See also
List of Victoria's Secret models
References
External links
1987 births
Living people
People from Požarevac
Serbian female models
Serbian people of Macedonian descent
|
```java
/*
 * This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at path_to_url
*/
package com.vaticle.typedb.core.graph.edge.impl;
import com.vaticle.typedb.core.common.collection.ByteArray;
import com.vaticle.typedb.core.common.exception.TypeDBException;
import com.vaticle.typedb.core.common.parameters.Concept.Existence;
import com.vaticle.typedb.core.encoding.Encoding;
import com.vaticle.typedb.core.encoding.iid.EdgeViewIID;
import com.vaticle.typedb.core.encoding.iid.InfixIID;
import com.vaticle.typedb.core.encoding.iid.KeyIID;
import com.vaticle.typedb.core.encoding.iid.VertexIID;
import com.vaticle.typedb.core.graph.ThingGraph;
import com.vaticle.typedb.core.graph.edge.ThingEdge;
import com.vaticle.typedb.core.graph.vertex.ThingVertex;
import com.vaticle.typedb.core.graph.vertex.TypeVertex;
import javax.annotation.Nullable;
import java.util.Objects;
import java.util.Optional;
import java.util.concurrent.atomic.AtomicBoolean;
import static com.vaticle.typedb.core.common.collection.ByteArray.join;
import static com.vaticle.typedb.core.common.exception.ErrorMessage.Transaction.ILLEGAL_OPERATION;
import static com.vaticle.typedb.core.common.parameters.Concept.Existence.INFERRED;
import static com.vaticle.typedb.core.common.parameters.Concept.Existence.STORED;
import static com.vaticle.typedb.core.encoding.Encoding.Prefix.VERTEX_ROLE;
import static com.vaticle.typedb.core.encoding.Encoding.Status.PERSISTED;
import static java.util.Objects.hash;
public abstract class ThingEdgeImpl implements ThingEdge {
final ThingGraph graph;
final Encoding.Edge.Thing encoding;
final View.Forward forward;
final View.Backward backward;
final AtomicBoolean deleted;
final Existence existence;
ThingEdgeImpl(ThingGraph graph, Encoding.Edge.Thing encoding, Existence existence) {
this.graph = graph;
this.encoding = encoding;
this.deleted = new AtomicBoolean(false);
this.existence = existence;
this.forward = new View.Forward(this);
this.backward = new View.Backward(this);
}
@Override
public Existence existence() {
return existence;
}
@Override
public View.Forward forwardView() {
return forward;
}
@Override
public View.Backward backwardView() {
return backward;
}
abstract EdgeViewIID.Thing computeForwardIID();
abstract EdgeViewIID.Thing computeBackwardIID();
private static abstract class View<T extends ThingEdge.View<T>> implements ThingEdge.View<T> {
final ThingEdgeImpl edge;
EdgeViewIID.Thing iidCache = null;
private View(ThingEdgeImpl edge) {
this.edge = edge;
}
@Override
public ThingEdge edge() {
return edge;
}
@Override
public boolean equals(Object object) {
if (this == object) return true;
if (object == null || this.getClass() != object.getClass()) return false;
return edge.equals(((ThingEdgeImpl.View<?>) object).edge);
}
@Override
public int hashCode() {
return edge.hashCode();
}
private static class Forward extends ThingEdgeImpl.View<ThingEdge.View.Forward> implements ThingEdge.View.Forward {
private Forward(ThingEdgeImpl edge) {
super(edge);
}
@Override
public EdgeViewIID.Thing iid() {
if (iidCache == null) iidCache = edge.computeForwardIID();
return iidCache;
}
@Override
public int compareTo(ThingEdge.View.Forward other) {
return iid().compareTo(other.iid());
}
}
private static class Backward extends ThingEdgeImpl.View<ThingEdge.View.Backward> implements ThingEdge.View.Backward {
private Backward(ThingEdgeImpl edge) {
super(edge);
}
@Override
public EdgeViewIID.Thing iid() {
if (iidCache == null) iidCache = edge.computeBackwardIID();
return iidCache;
}
@Override
public int compareTo(ThingEdge.View.Backward other) {
return iid().compareTo(other.iid());
}
}
}
public static class Buffered extends ThingEdgeImpl implements ThingEdge {
private final AtomicBoolean committed;
private final ThingVertex.Write from;
private final ThingVertex.Write to;
private final ThingVertex.Write optimised;
private final int hash;
/**
* Default constructor for {@code ThingEdgeImpl.Buffered}.
*
* @param encoding the edge {@code Encoding}
* @param from the tail vertex
* @param to the head vertex
* @param existence whether the edge is stored or inferred
*/
public Buffered(Encoding.Edge.Thing encoding, ThingVertex.Write from, ThingVertex.Write to, Existence existence) {
this(encoding, from, to, null, existence);
}
/**
* Constructor for an optimised {@code ThingEdgeImpl.Buffered}.
*
* @param encoding the edge {@code Encoding}
* @param from the tail vertex
* @param to the head vertex
* @param optimised vertex that this optimised edge is compressing
* @param existence whether the edge is stored or inferred
*/
public Buffered(Encoding.Edge.Thing encoding, ThingVertex.Write from, ThingVertex.Write to,
@Nullable ThingVertex.Write optimised, Existence existence) {
super(from.graph(), encoding, existence);
assert this.graph == to.graph();
assert encoding.isOptimisation() || optimised == null;
this.from = from;
this.to = to;
this.optimised = optimised;
this.hash = hash(Buffered.class, encoding, from, to);
committed = new AtomicBoolean(false);
}
@Override
public Encoding.Edge.Thing encoding() {
return encoding;
}
@Override
public ThingVertex.Write from() {
return from;
}
@Override
public VertexIID.Thing fromIID() {
return from.iid();
}
@Override
public ThingVertex.Write to() {
return to;
}
@Override
public VertexIID.Thing toIID() {
return to.iid();
}
@Override
public Optional<ThingVertex> optimised() {
return Optional.ofNullable(optimised);
}
@Override
EdgeViewIID.Thing computeForwardIID() {
if (encoding().isOptimisation()) {
return EdgeViewIID.Thing.of(
fromIID(), InfixIID.Thing.of(encoding().forward(), optimised().get().type().iid()),
toIID(), optimised().get().iid().key()
);
} else {
return EdgeViewIID.Thing.of(fromIID(), InfixIID.Thing.of(encoding().forward()), toIID());
}
}
@Override
EdgeViewIID.Thing computeBackwardIID() {
if (encoding().isOptimisation()) {
return EdgeViewIID.Thing.of(
toIID(), InfixIID.Thing.of(encoding().backward(), optimised().get().type().iid()),
fromIID(), optimised().get().iid().key()
);
} else {
return EdgeViewIID.Thing.of(toIID(), InfixIID.Thing.of(encoding().backward()), fromIID());
}
}
/**
* Deletes this {@code Edge}, disconnecting the two {@code Vertex} objects it joins.
*
* A {@code ThingEdgeImpl.Buffered} can only exist in the adjacency cache of
* each {@code Vertex}, and does not exist in storage.
*/
@Override
public void delete() {
if (deleted.compareAndSet(false, true)) {
from.outs().remove(this);
to.ins().remove(this);
if (from.status().equals(PERSISTED) && to.status().equals(PERSISTED)) {
graph.storage().deleteTracked(forward.iid());
graph.storage().deleteUntracked(backward.iid());
}
graph.edgeDeleted(this);
}
}
@Override
public boolean isDeleted() {
return deleted.get();
}
@Override
public void commit() {
if (existence() == INFERRED) throw TypeDBException.of(ILLEGAL_OPERATION);
if (committed.compareAndSet(false, true)) {
graph.storage().putTracked(computeForwardIID()); // re-compute IID because vertices may be committed
graph.storage().putUntracked(computeBackwardIID());
}
}
/**
* Determine the equality of a {@code ThingEdgeImpl.Buffered} against another.
*
* We only use {@code encoding}, {@code from} and {@code to} as they are
* the fixed properties that do not change, unlike {@code overridden}.
* They are also the canonical properties required to uniquely identify
* a {@code ThingEdgeImpl.Buffered}.
*
* @param object that we want to compare against
* @return true if equal, else false
*/
@Override
public final boolean equals(Object object) {
if (this == object) return true;
if (object == null || getClass() != object.getClass()) return false;
ThingEdgeImpl.Buffered that = (ThingEdgeImpl.Buffered) object;
return (this.encoding.equals(that.encoding) &&
this.from.equals(that.from) &&
this.to.equals(that.to) &&
Objects.equals(this.optimised, that.optimised));
}
/**
* HashCode of a {@code ThingEdgeImpl.Buffered}.
*
* We only use {@code encoding}, {@code from} and {@code to} as they are
* the fixed properties that do not change, unlike {@code overridden}.
* They are also the canonical properties required to uniquely identify
* a {@code ThingEdgeImpl.Buffered}.
*
* @return int of the hashcode
*/
@Override
public final int hashCode() {
return hash;
}
}
public static class Target extends ThingEdgeImpl implements ThingEdge {
private final ThingVertex from;
private final ThingVertex to;
private final TypeVertex optimisedType;
private final int hash;
public Target(Encoding.Edge.Thing encoding, ThingVertex from, ThingVertex to, @Nullable TypeVertex optimisedType) {
super(from.graph(), encoding, STORED);
assert !encoding.isOptimisation() || optimisedType != null;
this.from = from;
this.to = to;
this.optimisedType = optimisedType;
this.hash = hash(Target.class, encoding, from, to, optimisedType);
}
@Override
public Encoding.Edge.Thing encoding() {
return encoding;
}
@Override
EdgeViewIID.Thing computeForwardIID() {
if (encoding().isOptimisation()) {
return EdgeViewIID.Thing.of(
fromIID(), InfixIID.Thing.of(encoding().forward(), optimisedType.iid()),
toIID(), KeyIID.of(ByteArray.empty())
);
} else {
return EdgeViewIID.Thing.of(fromIID(), InfixIID.Thing.of(encoding().forward()), toIID());
}
}
@Override
EdgeViewIID.Thing computeBackwardIID() {
if (encoding.isOptimisation()) {
return EdgeViewIID.Thing.of(toIID(), InfixIID.Thing.of(encoding().backward(), optimisedType.iid()),
fromIID(), KeyIID.of(ByteArray.empty()));
} else {
return EdgeViewIID.Thing.of(toIID(), InfixIID.Thing.of(encoding().backward()), fromIID());
}
}
@Override
public ThingVertex from() {
return from;
}
@Override
public VertexIID.Thing fromIID() {
return from.iid();
}
@Override
public ThingVertex to() {
return to;
}
@Override
public VertexIID.Thing toIID() {
return to.iid();
}
@Override
public Optional<ThingVertex> optimised() {
return Optional.empty();
}
@Override
public void delete() {
throw TypeDBException.of(ILLEGAL_OPERATION);
}
@Override
public boolean isDeleted() {
throw TypeDBException.of(ILLEGAL_OPERATION);
}
@Override
public void commit() {
throw TypeDBException.of(ILLEGAL_OPERATION);
}
@Override
public final boolean equals(Object object) {
if (this == object) return true;
if (object == null || getClass() != object.getClass()) return false;
ThingEdgeImpl.Target that = (ThingEdgeImpl.Target) object;
return this.encoding.equals(that.encoding) &&
this.from.equals(that.from) &&
this.to.equals(that.to) &&
Objects.equals(this.optimisedType, that.optimisedType);
}
@Override
public final int hashCode() {
return hash;
}
}
public static class Persisted extends ThingEdgeImpl implements ThingEdge {
private final VertexIID.Thing fromIID;
private final VertexIID.Thing toIID;
private final VertexIID.Thing optimisedIID;
private final int hash;
/**
* Default constructor for {@code Edge.Persisted}.
*
* The edge can be constructed from an {@code iid} that represents
* either an inwards or outwards pointing edge. Thus, we extract the
* {@code start} and {@code end} of it, and use the {@code infix} of the
* edge {@code iid} to determine the direction, and which vertex becomes
* {@code fromIID} or {@code toIID}.
*
* The head of this edge may or may not be overriding another vertex.
* If it does, the {@code overriddenIID} will not be null.
*
* @param graph the graph comprised of all the vertices
* @param iid the {@code iid} of a persisted edge
*/
public Persisted(ThingGraph graph, EdgeViewIID.Thing iid) {
super(graph, iid.encoding(), STORED);
if (iid.isForward()) {
fromIID = iid.start();
toIID = iid.end();
} else {
fromIID = iid.end();
toIID = iid.start();
}
if (!iid.suffix().isEmpty()) {
optimisedIID = VertexIID.Thing.of(join(
VERTEX_ROLE.bytes(), iid.infix().asRolePlayer().tail().get().bytes(), iid.suffix().bytes()
));
} else {
optimisedIID = null;
}
this.hash = hash(Persisted.class, encoding, fromIID.hashCode(), toIID.hashCode());
}
@Override
public Encoding.Edge.Thing encoding() {
return encoding;
}
@Override
public ThingVertex from() {
// note: do not cache, since a readable vertex can become a writable vertex at any time
return graph.convertToReadable(fromIID);
}
@Override
public VertexIID.Thing fromIID() {
return fromIID;
}
@Override
public ThingVertex to() {
return graph.convertToReadable(toIID);
}
@Override
public VertexIID.Thing toIID() {
return toIID;
}
@Override
public Optional<ThingVertex> optimised() {
return Optional.ofNullable(graph.convertToReadable(optimisedIID));
}
@Override
EdgeViewIID.Thing computeForwardIID() {
if (encoding().isOptimisation()) {
return EdgeViewIID.Thing.of(
fromIID(), InfixIID.Thing.of(encoding().forward(), optimisedIID.type()),
toIID(), optimisedIID.key()
);
} else {
return EdgeViewIID.Thing.of(fromIID(), InfixIID.Thing.of(encoding().forward()), toIID());
}
}
@Override
EdgeViewIID.Thing computeBackwardIID() {
if (encoding().isOptimisation()) {
return EdgeViewIID.Thing.of(
toIID(), InfixIID.Thing.of(encoding().backward(), optimisedIID.type()),
fromIID(), optimisedIID.key()
);
} else {
return EdgeViewIID.Thing.of(toIID(), InfixIID.Thing.of(encoding().backward()), fromIID());
}
}
/**
* Delete operation of a persisted edge.
*
* This operation can only be performed once, and is thus protected by the
* {@code deleted} atomic boolean. The delete operation involves
* removing this edge from the graph storage and notifying the from/to vertices of the modification.
*/
@Override
public void delete() {
if (deleted.compareAndSet(false, true)) {
graph.convertToWritable(fromIID).setModified();
graph.convertToWritable(toIID).setModified();
graph.storage().deleteTracked(forward.iid());
graph.storage().deleteUntracked(backward.iid());
graph.edgeDeleted(this);
}
}
@Override
public boolean isDeleted() {
return deleted.get();
}
/**
* No-op commit operation of a persisted edge.
*
* Persisted edges do not need to be committed back to the graph storage.
* The only property of a persisted edge that can be changed is the
* {@code overriddenIID}, and that is immediately written to storage when changed.
*/
@Override
public void commit() {
}
/**
* Determine the equality of a {@code Edge} against another.
*
* We only use {@code encoding}, {@code fromIID} and {@code toIID} as they
* are the fixed properties that do not change, unlike
* {@code overriddenIID} and {@code isDeleted}. They are also the
* canonical properties required to identify a {@code Persisted} edge.
*
* @param object that we want to compare against
* @return true if equal, else false
*/
@Override
public final boolean equals(Object object) {
if (this == object) return true;
if (object == null || getClass() != object.getClass()) return false;
ThingEdgeImpl.Persisted that = (ThingEdgeImpl.Persisted) object;
return (this.encoding.equals(that.encoding) &&
this.fromIID.equals(that.fromIID) &&
this.toIID.equals(that.toIID) &&
Objects.equals(this.optimisedIID, that.optimisedIID));
}
/**
* HashCode of a {@code ThingEdgeImpl.Persisted}.
*
* We only use {@code encoding}, {@code fromIID} and {@code toIID} as they
* are the fixed properties that do not change, unlike
* {@code overriddenIID} and {@code isDeleted}. They are also the
* canonical properties required to uniquely identify a
* {@code ThingEdgeImpl.Persisted}.
*
* @return int of the hashcode
*/
@Override
public final int hashCode() {
return hash;
}
}
}
```
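Two idioms recur throughout the file above: the `View` classes compute their edge IID lazily on first access and then serve the cached value, and `delete()`/`commit()` use `AtomicBoolean.compareAndSet` so the operation body runs at most once even under concurrent calls. A minimal, self-contained sketch of both idioms (the `CachedEdge` class and its string IIDs are hypothetical illustrations, not part of the TypeDB codebase):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical, simplified edge: illustrates the lazy-cache and
// once-only patterns used by ThingEdgeImpl, nothing more.
class CachedEdge {
    private final String from;
    private final String to;
    private final AtomicBoolean deleted = new AtomicBoolean(false);
    private String iidCache = null; // lazily computed, like View.iidCache
    private int computations = 0;   // counts how often the IID was built

    CachedEdge(String from, String to) {
        this.from = from;
        this.to = to;
    }

    // Compute on first access, then serve the cached value.
    String iid() {
        if (iidCache == null) {
            computations++;
            iidCache = from + "->" + to;
        }
        return iidCache;
    }

    // compareAndSet flips false -> true exactly once, so the delete
    // body executes at most once even if delete() is called repeatedly
    // or from several threads.
    boolean delete() {
        return deleted.compareAndSet(false, true);
    }

    boolean isDeleted() {
        return deleted.get();
    }

    int computations() {
        return computations;
    }

    public static void main(String[] args) {
        CachedEdge e = new CachedEdge("a", "b");
        e.iid();               // builds the IID
        e.iid();               // served from cache, no recomputation
        e.delete();            // true: first deletion wins
        e.delete();            // false: already deleted, body skipped
    }
}
```

The same trade-off applies as in the real code: the cache is safe here because the inputs (`from`, `to`) are final, mirroring how `View.iidCache` relies on the edge's canonical properties never changing.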
|
KSAL may refer to:
KSAL (AM), a radio station (1150 AM) licensed to Salina, Kansas, United States
KSAL-FM, a radio station (104.9 FM) licensed to Salina, Kansas, United States
|
Shihuiyao Township () is a township in Ping'an District, Haidong, Qinghai, China. In 2010, Shihuiyao Township had a total population of 4,849 (2,427 males and 2,377 females); 1,111 residents were aged under 14, 3,311 were between 15 and 65, and 427 were over 65.
Administrative divisions
Shihuiyao Township administers the following 14 administrative villages:
Shihuiyao Township ()
Yima Village ()
Yelong Village ()
Liming Village ()
Yangposhan Village ()
Yaozhuang Village ()
Chuchugou Village ()
Hongya Village ()
Xiahetan Village ()
Tanglongtai Village ()
Shangtanglong Village ()
Shangfatai Village ()
Xiafatai Village ()
Shiguasi Village ()
References
Township-level divisions of Qinghai
Ping'an District
|
The Battle of Pont du Feneau was the last battle of the siege of Saint-Martin-de-Ré by the English forces that had come to help the Huguenot rebellions of La Rochelle. It took place on 8 November 1627. The English lost the battle, and this final failure forced them to withdraw back to England.
Before the battle
English side
The English forces, having been defeated earlier that day at the siege of Saint-Martin, pulled back to the village of Loix, where their ships were anchored. Because of bad food, many soldiers in the army were sick. The army was commanded by George Villiers, 1st Duke of Buckingham, and was composed of 12 infantry regiments and 4 cannons, as well as several Rochelais Protestant volunteers, all covered by the cavalry, which consisted of about 68 horses.
Not having seen the French forces since their retreat from Saint-Martin, Buckingham's troops assumed the French would not attack and, growing reckless, marched in neither closed nor open ranks. They arrived at the wooden bridge of the Feneau, which they had built when they first landed on the island and which linked the island of Loix to the rest of the Île de Ré.
French side
The French troops were commanded by the Maréchal Henri de Schomberg who, informed of the English retreat by Jean Caylar d'Anduze de Saint-Bonnet, had reassembled his army and begun pursuing the English. The French cavalry was commanded by Louis de Marillac.
The battle
The English vanguard marched towards Loix. Two English battalions, commanded by colonels Sir Edward Conway, Sir Peregrine Barty and Sir Henry Spry, had crossed the bridge. Another battalion, led by Colonel Sir Charles Rich (brother of Henry Rich, 1st Earl of Holland, and half-brother of Mountjoy Blount, 1st Earl of Newport), Sir Alexander Brett and the lieutenant of Sir Thomas Morton (Morton himself being sick), was preparing to cross the bridge with the four cannons. The last two battalions, which had not yet stepped onto the bridge, were commanded by Colonel Sir William Courtney, Lieutenant-Colonel Sir Edward Hawley and Sir Ralph Bingley.
The Maréchal de Schomberg, having passed through the village of La Couarde, was informed by Marillac of the English position. The English vanguard began to file slowly onto the bridge of Feneau, constrained by its narrowness. Judging the moment right, the Maréchal ordered the cavalry captain Bussi-Lamet to charge the English rearguard with his squadron. He was followed closely by Marillac and Schomberg himself, leading the rest of the cavalry. The English cavalry fought back but was defeated. Many English were killed, including Sir William Cunningham, and others were captured, including Mountjoy Blount, 1st Earl of Newport, colonel of the English cavalry and half-brother of Sir Charles Rich and of Henry Rich, 1st Earl of Holland. The English infantry intervened but was also defeated. Two French infantry regiments, the Piémont regiment, commanded by François Duval de Fontenay-Mareuil, and the Champagne regiment, commanded by Pierre Arnaud, arrived in the melee. A large part of the English force was violently pushed into the many muddy ditches in the area. The two English battalions before the bridge were defeated. A few Rochelais Protestants managed to cross the bridge. The French then started crossing the bridge, killing everyone in their way, notably Sir Charles Rich and Sir Alexander Brett, who tried to defend it. There are two different versions of the outcome of the battle.
English version
After crossing the bridge, the French forces, guided by Marillac, confronted Sir Thomas Fryar and Lieutenant-Colonel Hackluit, who commanded 40 pikes and 20 musketeers guarding the artillery and munitions. They were quickly reinforced by Sir Piers Crosby, commanding an Irish regiment. The French then panicked and withdrew to the bridge, disobeying Marillac's orders. They rallied, and the Count of Saligny, with a group of pikemen, charged again. They were repelled, however, and lost a commander. They charged a final time, were repelled again, withdrew from the battle and crossed back over the bridge, chased by the English. In the rout, the fleeing French knocked Marillac from his horse. The English broke off the chase and crossed the bridge back to the island of Loix. The guarding of the bridge was given to Sir Piers Crosby. A few days later, when the sailing conditions improved, Crosby burned the bridge, and the surviving English forces embarked on their ships.
French version
Marillac dismounted and took over command of the forces. After crossing the bridge, the French captured the four small cannons of the English artillery. Few French soldiers fought on the front line, as a large part of the force was occupied taking prisoners and seizing the English arsenal. Marillac gave the order to move the arsenal a hundred feet back. The Count of Saligny then arrived with a few fresh men. He fought remarkably, and the English soldiers raised their pikes and marched towards him. He had only a dozen men left, including Feuquières and Porcheux, captain of the guards regiment, who stood their ground and held the line. The men around Marillac pulled back in disorder despite his orders, and even knocked him down. Saligny and his men nonetheless held on and contained the English, allowing the French soldiers to recover and return to the melee. The fight lasted two hours, after which the French triumphed over the English.
The English army was in full rout. The causeway to Loix was covered in bodies, and the ditches were full of men knocked unconscious in the mud. Some English soldiers swore they were Catholic, even showing rosaries and asking for mercy, but the French were ruthless. Many French noblemen were in the melee: the Marquis of Annonay, Charles de Lévis-Ventadour, who would later become Duke of Ventadour; Antoine d'Aumont de Rochebaron, Marquis of Villequier; the knight of Chappe and his brother; Jean of Estampes-Valençay, Baron of Valençay; the Count of Charraux or Chârost, Louis of Béthune, who would later become captain of the King's bodyguards; the Count of Saligny, "a man of heart and of unique virtue"; L'Isle-Cerillac; Manassès de Pas de Feuquières; L'Isle Montmartin; Pierre Arnaud, mestre de camp of the regiment of Champagne; Alexandre of Garnier, Lord of the Garets; de Jonquières; Jean-Louis I of Louët of Calvisson; and Jacques of Castelnau of La Mauvissière. Toiras himself fought, sword in hand. The English soldiers fled on every side, into the marshes, over the ditch crossings and into the vineyards, constantly under fire from the French troops pursuing them. Schomberg ordered the French troops pursuing the English to Loix to stop, because they needed to rally and reorganize; furthermore, night was falling. The retreat was signalled. The Maréchal had the bridge guarded until he was certain that the English army had withdrawn from the island.
Outcome of the battle
The air was fire and smoke, the land was covered in cadavers, and the water was reddened by the blood.
The outcome of the battle was very costly for the English: about 1,800 dead, including 5 colonels, 3 lieutenant-colonels, 20 gentlemen (among them Sir Henry Spry, Sir Charles Rich, Sir Alexander Brett, Sir Ralph Bingley and Sir William Cunningham) and 150 officers, plus a thousand wounded. 46 flags were captured. The prisoners included Milord Montjoye, cavalry commandant; the colonel Milord Grey (possibly Henry Grey, 1st Earl of Stamford), grand master of the artillery; 35 captains or officers; 12 gentlemen; and between 100 and 120 soldiers. Every English horse, including Buckingham's, was captured, as well as the 4 cannons.
Colonel Grey fell in a salt-pit during the battle, and shouted out to save his life, "A hundred thousand crowns for my ransom!". He was therefore captured and not killed.
The Maréchal de Schomberg did not say a word about the French casualties; however, they were probably close to 500 or 600. The injured included the Général des Galères, Pierre de Gondi, who received two pistol shots to the shoulder; the Marquis of Villequier, who received a musket shot through his body (though "the injury was without danger"); De Iade, the captain of Schomberg's bodyguards, and his squire, who took a pistol shot to the knee; Cussigny (André, lord of Cussigny and the Barres?), injured by a pike thrust to the throat; and Porcheux, whose thigh was torn open. Toiras himself was nearly injured, taking two pistol shots that pierced his hat.
Aftermath
Of the 46 flags, the first of which had been captured by the Sieur de Belinghem with the help of the Sieur de Moüy de la Mailleraye, all were sent to Paris by Claude de Saint-Simon and displayed on the vaults of Notre-Dame.
Toiras went back to the citadel of Saint-Martin, where the prisoners were also taken, to keep track of the Anglo-Rochelaise fleet. Some of the prisoners were ransomed. Schomberg returned to Saint-Martin as well to rest. He preferred not to leave the island until he was sure of the final departure of Buckingham.
On the English side, about 2,000 survivors were on the ships, including Benjamin de Rohan, lord of Soubise and younger brother of Henri II de Rohan, a French Protestant. Buckingham, after letting his troops rest, left on 17 November to return to England after 3 months and 6 days of campaigning, promising the Rochelais Protestants, who had twice re-supplied his troops with fresh water, that he would return with a bigger army. He was, however, unable to keep his word: he was assassinated by John Felton at the Greyhound Pub in Portsmouth on 23 August 1628, before the departure of the second expedition.
Two centuries later, salt producers near the bridge of Feneau opened a pit to bury the many bones, bullets, and cannonballs scattered around on the ground.
Posterity
The battle was painted by Laurent de la Hyre in La défaite des Anglais en l'île de Ré par l'armée française le 8 novembre 1627 (The defeat of the English on the Île de Ré by the French army on 8 November 1627), executed between December 1627 and early 1628. The painting, a large canvas at 112 × 120 cm, is held at the Musée de l'Armée, Hôtel des Invalides, in Paris.
See also
French Wars of Religion
Notes and references
Military history of France
Warfare of the early modern period
Conflicts in 1627
1627 in France
Battles of the Thirty Years' War
Battles involving France
Battles involving England
Battles in Nouvelle-Aquitaine
History of Charente-Maritime
Île de Ré
Anglo-French War (1627–1629)
|
The Black Diamonds were an Australian garage rock band from Lithgow, New South Wales, which were active under different names from 1959 to 1971. By 1965 the line-up consisted of Glenn Bland on vocals and harmonica, Allen Keogh on bass guitar, Colin McAuley on drums, Alan "Olly" Oloman on lead guitar and vocals, and his younger brother Neil Oloman on rhythm guitar. They signed with Festival Records, where they released two singles. The better-known B-side track, "I Want, Need, Love You", appeared on their first single in 1966 and became a regional hit. It features a pleading vocal over a driving rhythm section and fast guitar breaks. The band toured in support of the Easybeats. In 1967 their second single, "Outside Looking In", was a hit in the Sydney area. In 1968 the group changed their name to Tymepiece and evolved into a more eclectic and progressive style. Briefly changing their name to Love Machine, they released a cover version of the Tokens' single, "Lion Sleeps Tonight" (1968). They reverted to Tymepiece and issued an album, Sweet Release, in February 1971 but broke up soon after. According to Australian musicologist Ian McFarlane, "[they] will be remembered as one of the most ferocious garage/punk outfits Australia ever produced in the 1960s."
Origins: Johnny Kett's Black Diamonds 1959–1965
The Black Diamonds were founded as Johnny Kett's Black Diamonds in 1959 in Lithgow, a coal-mining town in New South Wales. Allen Michael Keogh and Alan Stewart Oloman, both twelve years old, learned guitar from an older friend, Brandt Newton. The three started playing regularly, mostly rockabilly instrumentals. Johnny Kett joined on drums, but initially they had no bass guitarist. Oloman's father, Bill Oloman (c.1911–2006), became their manager and allowed them to rehearse at the family home. He also provided their name, Johnny Kett's Black Diamonds, after their then-leader and a local term for coal.
The group gained a residency at Scottish Reunion Dance, a local dance hall. In 1963 Newton departed and Oloman's younger brother, Neil Oloman joined on rhythm guitar, while Keogh switched to bass guitar. They largely performed surf instrumentals. With the advent of the Beatles and the British Invasion in 1964, they found a lead vocalist, Glenn Christopher Bland. Bland initially also provided rhythm guitar, allowing Alan to concentrate on lead guitar. The group's leadership shifted from Kett to Alan, whose increasingly virtuoso lead guitar was emerging as a key feature in their sound. Bland dropped rhythm guitar but continued on lead vocals and added harmonica.
Recording and touring 1965–1968
In 1965 Kett departed and their name was shortened to the Black Diamonds, with a line-up of Bland on vocals and harmonica, Keogh on bass guitar, Alan on lead guitar and vocals, and Neil on rhythm guitar; they were joined by Colin McAuley on drums. They became a popular band in the Blue Mountains area. Alan was a part-time radio announcer at the local station 2LT, whose programming director, Bob Jolly, recorded their demos in the broadcasting studio. Jolly sent demos to record labels and producers. Festival Records' Pat Aulton was impressed with "See the Way" and had the group re-record it for their debut single. "See the Way" features Alan's "spacey"-sounding guitar, which was put through a tape delay to achieve the effect.
For its B-side, the group proposed a cover version of a Rolling Stones track. However, Aulton heard Alan practising a riff and recommended building a song around it, which resulted in "I Want, Need, Love You". It is an intense and ground-breaking slab of hard rock and proto-punk, which featured an over-driven instrumental interlude replete with pounding jungle rhythms and a lightning-fast guitar solo by Alan Oloman. The single was released in November 1966 via Festival Music. Both sides were written by Alan Oloman. In a contemporary review in the teen newspaper Go-Set, a staff writer rated "See the Way" as C for mediocre and quipped, "[it] leaves us in the dark." Australian musicologist Ian McFarlane described the group: "[they] were equally adept at producing both jubilant pop and tough garage-punk on either side of the one single."
Though it failed to reach the national charts, the single was popular around Lithgow and nearby Bathurst. The Black Diamonds had a repertoire of thirty original tracks as well as cover versions. They supported the Easybeats, whose members cited the Black Diamonds as the best opening band they had. They appeared on ABC's TV series Be Our Guest (1966), lip-synching to "See the Way" and "I Want, Need, Love You" while standing on a rocky beach. They also appeared on the Saturday Date programme. The group encountered difficulties: live shows could be gruelling, with some gigs requiring a four-hour stint late into the night; promoters and club owners ripped the band off on several occasions; and while on tour they got into scuffles with reactionary youths.
Their second single was released in March 1967. Aulton selected their cover version of J. J. Cale's "Outside Looking In" for its A-side – a decision the band members later felt was a mistake. The B-side was the Who-influenced power pop track, "Not This Time", which found them more in their element. For the latter Alan played a home-made 12-string guitar. The single reached the top 30 in Sydney, but failed to chart nationally. Later that year, recently married, Neil left and was replaced by Brian "Felix" Wilkinson on piano and organ. At the end of the year the band moved to Sydney and secured residencies at the Caesar's Palace and Hawaiian Eye discothèques. In 1968 Keogh departed and was replaced on bass guitar by Darcy Rosser.
Tymepiece years 1968–1971
At the urging of Aulton and Festival Records, the group changed their name to Tymepiece, and signed an extended contract with Festival Records imprint Infinity Records. They issued three singles: "Bird in the Tree" (August 1968), "Become Like You" (November 1969) and "Won't You Try" (October 1971). In 1968, under the pseudonym Love Machine, they had also released a cover version of the Tokens' 1961 song "The Lion Sleeps Tonight", which reached the top 10 in Sydney and Brisbane. McFarlane observed, "[it] was a hit, but the band members soon tired of the Love Machine pop trappings and moved on."
In February 1971, as Tymepiece, they issued an album, Sweet Release, which featured an eclectic blend of psychedelic pop ("Why?"), folk ("Reflections"), country ("Sweet Release"), R&B ("I Love, You Love") and heavy progressive blues ("Shake Off") influences. Soon after, the band broke up. In 1974 Alan Oloman joined the Executives on bass guitar. On 9 August 2008 Alan "Olly" Oloman died of cancer, aged 61.
Legacy
The Black Diamonds' work came to the attention of garage rock enthusiasts around the world. Songs such as "I Want, Need, Love You" and "See the Way" have appeared on various vinyl and CD anthologies. The latter was covered by Brisbane rock band, the Screaming Tribesmen, in a live rendition from a 1982 concert on their compilation album, The Savage Beat of the Screaming Tribesmen (2003). Raven Records issued the "pulsating, eight-minute" track, "Shake Off" on their Various Artists compilation album, Golden Miles: Australian Progressive Rock 1969–1974 (1994).
"I Want, Need, Love You" was included on the Down Under Nuggets: Original Australian Artyfacts (1965–67) compilation issued by Festival Records in conjunction with Warner Bros. Records and Rhino Records in 2013. In 1995 Australian garage band, the Hunchbacks, provided their rendition as "Want Need Love You" on an EP, Play to Lose. The track was also covered by American garage rockers, the Dirtbombs, on their album, If You Don't Already Have a Look (2005). "See the Way" was included on the Obscure 60s Garage, Volume 5: Australian Edition compilation. The Black Diamonds are recognised as a trailblazing and innovative group. According to McFarlane, "[they] will be remembered as one of the most ferocious garage/punk outfits Australia ever produced in the 1960s."
Members
Johnny Kett's Black Diamonds 1959–1965
Johnny Kett (drums)
Alan Oloman (rhythm guitar)
Brandt Newton (lead guitar)
Allan "Banzai" Keogh (guitar, bass)
Neil Oloman (rhythm guitar)
Glenn Bland (lead vocals, rhythm guitar)
The Black Diamonds 1965–1967
Glenn Bland (vocals, harmonica)
Alan Oloman (lead guitar)
Neil Oloman (rhythm guitar)
Allan Keogh (bass, vocals)
Colin McAuley (drums)
Tymepiece/Love Machine 1968–1971
Glenn Bland (vocals, harmonica)
Alan Oloman (lead guitar)
Brian "Felix" Wilkinson (keyboards)
Darcy Rosser (bass, vocals)
Colin McAuley (drums)
Discography
"See the Way" b/w "I Want, Need, Love You" (Festival FK-1549, November 1966)
"Outside Looking In" b/w "Not This Time" (Festival FK-1693, March 1967)
References
New South Wales musical groups
Australian garage rock groups
Musical groups established in 1959
Musical groups disestablished in 1971
1959 establishments in Australia
|
Kim Jae-ho (Hangul: 김재호, Hanja: 金宰鎬; born March 21, 1985) is a South Korean infielder who plays for the Doosan Bears in the KBO League. Kim graduated from Choong Ang High School and joined the Doosan Bears through the first draft in 2004. His main position is shortstop, though he sometimes plays second base. He won the KBO League Golden Glove Award in 2015 and 2016 consecutively.
He handed over the captaincy to Kim Jae-hwan after being removed from the roster with an injury in 2017. He regained his batting form on his return, but was removed from the roster again on August 29, 2017, after being hit by Kim Jae-hwan while fielding in a game against the Lotte Giants.
Career
References
External links
Career statistics and player information at Korea Baseball Organization
1985 births
Living people
Baseball players from Seoul
South Korean baseball players
Doosan Bears players
KBO League shortstops
2015 WBSC Premier12 players
2017 World Baseball Classic players
|
```java
package com.shuyu.gsyvideoplayer.video;
import android.app.AlertDialog;
import android.app.Dialog;
import android.content.Context;
import android.content.DialogInterface;
import android.graphics.drawable.Drawable;
import android.util.AttributeSet;
import android.view.Gravity;
import android.view.LayoutInflater;
import android.view.MotionEvent;
import android.view.View;
import android.view.ViewGroup;
import android.view.Window;
import android.view.WindowManager;
import android.widget.ImageView;
import android.widget.ProgressBar;
import android.widget.TextView;
import android.widget.Toast;
import com.shuyu.gsyvideoplayer.R;
import com.shuyu.gsyvideoplayer.listener.GSYVideoShotListener;
import com.shuyu.gsyvideoplayer.listener.GSYVideoShotSaveListener;
import com.shuyu.gsyvideoplayer.utils.Debuger;
import com.shuyu.gsyvideoplayer.utils.NetworkUtils;
import com.shuyu.gsyvideoplayer.video.base.GSYBaseVideoPlayer;
import com.shuyu.gsyvideoplayer.video.base.GSYVideoPlayer;
import java.io.File;
import moe.codeest.enviews.ENDownloadView;
import moe.codeest.enviews.ENPlayView;
/**
* Standard video player with built-in UI: seek, volume and brightness dialogs, lock screen and thumbnail.
* Created by shuyu on 2016/11/11.
*/
public class StandardGSYVideoPlayer extends GSYVideoPlayer {
// brightness dialog
protected Dialog mBrightnessDialog;
// volume dialog
protected Dialog mVolumeDialog;
// seek-progress dialog
protected Dialog mProgressDialog;
// progress bar inside the seek-progress dialog
protected ProgressBar mDialogProgressBar;
// progress bar inside the volume dialog
protected ProgressBar mDialogVolumeProgressBar;
// brightness percentage text
protected TextView mBrightnessDialogTv;
// current seek position text
protected TextView mDialogSeekTime;
// total duration text
protected TextView mDialogTotalTime;
// fast-forward/rewind icon
protected ImageView mDialogIcon;
protected Drawable mBottomProgressDrawable;
protected Drawable mBottomShowProgressDrawable;
protected Drawable mBottomShowProgressThumbDrawable;
protected Drawable mVolumeProgressDrawable;
protected Drawable mDialogProgressBarDrawable;
protected int mDialogProgressHighLightColor = -11;
protected int mDialogProgressNormalColor = -11;
/**
 * Since 1.5.0: constructor taking the full-screen flag.
 */
public StandardGSYVideoPlayer(Context context, Boolean fullFlag) {
super(context, fullFlag);
}
public StandardGSYVideoPlayer(Context context) {
super(context);
}
public StandardGSYVideoPlayer(Context context, AttributeSet attrs) {
super(context, attrs);
}
@Override
protected void init(Context context) {
super.init(context);
// apply any custom UI drawables configured before inflation
if (mBottomProgressDrawable != null) {
mBottomProgressBar.setProgressDrawable(mBottomProgressDrawable);
}
if (mBottomShowProgressDrawable != null) {
mProgressBar.setProgressDrawable(mBottomShowProgressDrawable);
}
if (mBottomShowProgressThumbDrawable != null) {
mProgressBar.setThumb(mBottomShowProgressThumbDrawable);
}
}
/**
 * Layout resource for this player.
 *
 * @return the layout id
 */
@Override
public int getLayoutId() {
return R.layout.video_layout_standard;
}
/**
 * Start playback (e.g. after clicking the thumbnail).
 */
@Override
public void startPlayLogic() {
if (mVideoAllCallBack != null) {
Debuger.printfLog("onClickStartThumb");
mVideoAllCallBack.onClickStartThumb(mOriginUrl, mTitle, StandardGSYVideoPlayer.this);
}
prepareVideo();
startDismissControlViewTimer();
}
/**
 * Prompt the user before playing over a non-WiFi network.
 */
@Override
protected void showWifiDialog() {
if (!NetworkUtils.isAvailable(mContext)) {
//Toast.makeText(mContext, getResources().getString(R.string.no_net), Toast.LENGTH_LONG).show();
startPlayLogic();
return;
}
AlertDialog.Builder builder = new AlertDialog.Builder(getActivityContext());
builder.setMessage(getResources().getString(R.string.tips_not_wifi));
builder.setPositiveButton(getResources().getString(R.string.tips_not_wifi_confirm), new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
dialog.dismiss();
startPlayLogic();
}
});
builder.setNegativeButton(getResources().getString(R.string.tips_not_wifi_cancel), new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
dialog.dismiss();
}
});
builder.create().show();
}
/**
 * Show the seek-progress dialog while dragging; override together with dismissProgressDialog to customize.
 */
@Override
@SuppressWarnings("ResourceType")
protected void showProgressDialog(float deltaX, String seekTime, long seekTimePosition, String totalTime, long totalTimeDuration) {
if (mProgressDialog == null) {
View localView = LayoutInflater.from(getActivityContext()).inflate(getProgressDialogLayoutId(), null);
if (localView.findViewById(getProgressDialogProgressId()) instanceof ProgressBar) {
mDialogProgressBar = ((ProgressBar) localView.findViewById(getProgressDialogProgressId()));
if (mDialogProgressBarDrawable != null) {
mDialogProgressBar.setProgressDrawable(mDialogProgressBarDrawable);
}
}
if (localView.findViewById(getProgressDialogCurrentDurationTextId()) instanceof TextView) {
mDialogSeekTime = ((TextView) localView.findViewById(getProgressDialogCurrentDurationTextId()));
}
if (localView.findViewById(getProgressDialogAllDurationTextId()) instanceof TextView) {
mDialogTotalTime = ((TextView) localView.findViewById(getProgressDialogAllDurationTextId()));
}
if (localView.findViewById(getProgressDialogImageId()) instanceof ImageView) {
mDialogIcon = ((ImageView) localView.findViewById(getProgressDialogImageId()));
}
mProgressDialog = new Dialog(getActivityContext(), R.style.video_style_dialog_progress);
mProgressDialog.setContentView(localView);
mProgressDialog.getWindow().addFlags(Window.FEATURE_ACTION_BAR);
mProgressDialog.getWindow().addFlags(32); // WindowManager.LayoutParams.FLAG_NOT_TOUCH_MODAL
mProgressDialog.getWindow().addFlags(16); // WindowManager.LayoutParams.FLAG_NOT_TOUCHABLE
mProgressDialog.getWindow().setLayout(getWidth(), getHeight());
if (mDialogProgressNormalColor != -11 && mDialogTotalTime != null) {
mDialogTotalTime.setTextColor(mDialogProgressNormalColor);
}
if (mDialogProgressHighLightColor != -11 && mDialogSeekTime != null) {
mDialogSeekTime.setTextColor(mDialogProgressHighLightColor);
}
WindowManager.LayoutParams localLayoutParams = mProgressDialog.getWindow().getAttributes();
localLayoutParams.gravity = Gravity.TOP;
localLayoutParams.width = getWidth();
localLayoutParams.height = getHeight();
int location[] = new int[2];
getLocationOnScreen(location);
localLayoutParams.x = location[0];
localLayoutParams.y = location[1];
mProgressDialog.getWindow().setAttributes(localLayoutParams);
}
if (!mProgressDialog.isShowing()) {
mProgressDialog.show();
}
if (mDialogSeekTime != null) {
mDialogSeekTime.setText(seekTime);
}
if (mDialogTotalTime != null) {
mDialogTotalTime.setText(" / " + totalTime);
}
if (totalTimeDuration > 0)
if (mDialogProgressBar != null) {
mDialogProgressBar.setProgress((int)(seekTimePosition * 100 / totalTimeDuration));
}
if (deltaX > 0) {
if (mDialogIcon != null) {
mDialogIcon.setBackgroundResource(R.drawable.video_forward_icon);
}
} else {
if (mDialogIcon != null) {
mDialogIcon.setBackgroundResource(R.drawable.video_backward_icon);
}
}
}
@Override
protected void dismissProgressDialog() {
if (mProgressDialog != null) {
mProgressDialog.dismiss();
mProgressDialog = null;
}
}
/**
 * Show the volume dialog while sliding; override together with dismissVolumeDialog to customize.
 */
@Override
protected void showVolumeDialog(float deltaY, int volumePercent) {
if (mVolumeDialog == null) {
View localView = LayoutInflater.from(getActivityContext()).inflate(getVolumeLayoutId(), null);
if (localView.findViewById(getVolumeProgressId()) instanceof ProgressBar) {
mDialogVolumeProgressBar = ((ProgressBar) localView.findViewById(getVolumeProgressId()));
if (mVolumeProgressDrawable != null && mDialogVolumeProgressBar != null) {
mDialogVolumeProgressBar.setProgressDrawable(mVolumeProgressDrawable);
}
}
mVolumeDialog = new Dialog(getActivityContext(), R.style.video_style_dialog_progress);
mVolumeDialog.setContentView(localView);
mVolumeDialog.getWindow().addFlags(WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE);
mVolumeDialog.getWindow().addFlags(WindowManager.LayoutParams.FLAG_NOT_TOUCH_MODAL);
mVolumeDialog.getWindow().addFlags(WindowManager.LayoutParams.FLAG_NOT_TOUCHABLE);
mVolumeDialog.getWindow().setLayout(ViewGroup.LayoutParams.WRAP_CONTENT, ViewGroup.LayoutParams.WRAP_CONTENT);
WindowManager.LayoutParams localLayoutParams = mVolumeDialog.getWindow().getAttributes();
localLayoutParams.gravity = Gravity.TOP | Gravity.START;
localLayoutParams.width = getWidth();
localLayoutParams.height = getHeight();
int location[] = new int[2];
getLocationOnScreen(location);
localLayoutParams.x = location[0];
localLayoutParams.y = location[1];
mVolumeDialog.getWindow().setAttributes(localLayoutParams);
}
if (!mVolumeDialog.isShowing()) {
mVolumeDialog.show();
}
if (mDialogVolumeProgressBar != null) {
mDialogVolumeProgressBar.setProgress(volumePercent);
}
}
@Override
protected void dismissVolumeDialog() {
if (mVolumeDialog != null) {
mVolumeDialog.dismiss();
mVolumeDialog = null;
}
}
/**
 * Show the brightness dialog while sliding; override together with dismissBrightnessDialog to customize.
 */
@Override
protected void showBrightnessDialog(float percent) {
if (mBrightnessDialog == null) {
View localView = LayoutInflater.from(getActivityContext()).inflate(getBrightnessLayoutId(), null);
if (localView.findViewById(getBrightnessTextId()) instanceof TextView) {
mBrightnessDialogTv = (TextView) localView.findViewById(getBrightnessTextId());
}
mBrightnessDialog = new Dialog(getActivityContext(), R.style.video_style_dialog_progress);
mBrightnessDialog.setContentView(localView);
mBrightnessDialog.getWindow().addFlags(WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE);
mBrightnessDialog.getWindow().addFlags(WindowManager.LayoutParams.FLAG_NOT_TOUCH_MODAL);
mBrightnessDialog.getWindow().addFlags(WindowManager.LayoutParams.FLAG_NOT_TOUCHABLE);
mBrightnessDialog.getWindow().getDecorView().setSystemUiVisibility(View.SYSTEM_UI_FLAG_HIDE_NAVIGATION);
mBrightnessDialog.getWindow().setLayout(ViewGroup.LayoutParams.WRAP_CONTENT, ViewGroup.LayoutParams.WRAP_CONTENT);
WindowManager.LayoutParams localLayoutParams = mBrightnessDialog.getWindow().getAttributes();
localLayoutParams.gravity = Gravity.TOP | Gravity.END;
localLayoutParams.width = getWidth();
localLayoutParams.height = getHeight();
int location[] = new int[2];
getLocationOnScreen(location);
localLayoutParams.x = location[0];
localLayoutParams.y = location[1];
mBrightnessDialog.getWindow().setAttributes(localLayoutParams);
}
if (!mBrightnessDialog.isShowing()) {
mBrightnessDialog.show();
}
if (mBrightnessDialogTv != null)
mBrightnessDialogTv.setText((int) (percent * 100) + "%");
}
@Override
protected void dismissBrightnessDialog() {
if (mBrightnessDialog != null) {
mBrightnessDialog.dismiss();
mBrightnessDialog = null;
}
}
@Override
protected void cloneParams(GSYBaseVideoPlayer from, GSYBaseVideoPlayer to) {
super.cloneParams(from, to);
StandardGSYVideoPlayer sf = (StandardGSYVideoPlayer) from;
StandardGSYVideoPlayer st = (StandardGSYVideoPlayer) to;
if (st.mProgressBar != null && sf.mProgressBar != null) {
st.mProgressBar.setProgress(sf.mProgressBar.getProgress());
st.mProgressBar.setSecondaryProgress(sf.mProgressBar.getSecondaryProgress());
}
if (st.mTotalTimeTextView != null && sf.mTotalTimeTextView != null) {
st.mTotalTimeTextView.setText(sf.mTotalTimeTextView.getText());
}
if (st.mCurrentTimeTextView != null && sf.mCurrentTimeTextView != null) {
st.mCurrentTimeTextView.setText(sf.mCurrentTimeTextView.getText());
}
}
/**
 * Enter window fullscreen.
 *
 * @param context   Activity context
 * @param actionBar whether the page has an ActionBar to hide
 * @param statusBar whether the page has a status bar to hide
 * @return the fullscreen player instance
 */
@Override
public GSYBaseVideoPlayer startWindowFullscreen(Context context, boolean actionBar, boolean statusBar) {
GSYBaseVideoPlayer gsyBaseVideoPlayer = super.startWindowFullscreen(context, actionBar, statusBar);
if (gsyBaseVideoPlayer != null) {
StandardGSYVideoPlayer gsyVideoPlayer = (StandardGSYVideoPlayer) gsyBaseVideoPlayer;
gsyVideoPlayer.setLockClickListener(mLockClickListener);
gsyVideoPlayer.setNeedLockFull(isNeedLockFull());
initFullUI(gsyVideoPlayer);
//
}
return gsyBaseVideoPlayer;
}
/********************************UI*********************************************/
/**
 * Toggle the control UI on touch, according to the current playback state.
 */
@Override
protected void onClickUiToggle(MotionEvent e) {
if (mIfCurrentIsFullscreen && mLockCurScreen && mNeedLockFull) {
setViewShowState(mLockScreen, VISIBLE);
return;
}
if (mIfCurrentIsFullscreen && !mSurfaceErrorPlay && mCurrentState == CURRENT_STATE_ERROR) {
if (mBottomContainer != null) {
if (mBottomContainer.getVisibility() == View.VISIBLE) {
changeUiToPlayingClear();
} else {
changeUiToPlayingShow();
}
}
} else if (mCurrentState == CURRENT_STATE_PREPAREING) {
if (mBottomContainer != null) {
if (mBottomContainer.getVisibility() == View.VISIBLE) {
changeUiToPrepareingClear();
} else {
changeUiToPreparingShow();
}
}
} else if (mCurrentState == CURRENT_STATE_PLAYING) {
if (mBottomContainer != null) {
if (mBottomContainer.getVisibility() == View.VISIBLE) {
changeUiToPlayingClear();
} else {
changeUiToPlayingShow();
}
}
} else if (mCurrentState == CURRENT_STATE_PAUSE) {
if (mBottomContainer != null) {
if (mBottomContainer.getVisibility() == View.VISIBLE) {
changeUiToPauseClear();
} else {
changeUiToPauseShow();
}
}
} else if (mCurrentState == CURRENT_STATE_AUTO_COMPLETE) {
if (mBottomContainer != null) {
if (mBottomContainer.getVisibility() == View.VISIBLE) {
changeUiToCompleteClear();
} else {
changeUiToCompleteShow();
}
}
} else if (mCurrentState == CURRENT_STATE_PLAYING_BUFFERING_START) {
if (mBottomContainer != null) {
if (mBottomContainer.getVisibility() == View.VISIBLE) {
changeUiToPlayingBufferingClear();
} else {
changeUiToPlayingBufferingShow();
}
}
}
}
@Override
protected void hideAllWidget() {
setViewShowState(mBottomContainer, INVISIBLE);
setViewShowState(mTopContainer, INVISIBLE);
setViewShowState(mBottomProgressBar, VISIBLE);
setViewShowState(mStartButton, INVISIBLE);
}
@Override
protected void changeUiToNormal() {
Debuger.printfLog("changeUiToNormal");
setViewShowState(mTopContainer, VISIBLE);
setViewShowState(mBottomContainer, INVISIBLE);
setViewShowState(mStartButton, VISIBLE);
setViewShowState(mLoadingProgressBar, INVISIBLE);
setViewShowState(mThumbImageViewLayout, VISIBLE);
setViewShowState(mBottomProgressBar, INVISIBLE);
setViewShowState(mLockScreen, (mIfCurrentIsFullscreen && mNeedLockFull) ? VISIBLE : GONE);
updateStartImage();
if (mLoadingProgressBar instanceof ENDownloadView) {
((ENDownloadView) mLoadingProgressBar).reset();
}
}
@Override
protected void changeUiToPreparingShow() {
Debuger.printfLog("changeUiToPreparingShow");
setViewShowState(mTopContainer, VISIBLE);
setViewShowState(mBottomContainer, VISIBLE);
setViewShowState(mStartButton, INVISIBLE);
setViewShowState(mLoadingProgressBar, VISIBLE);
setViewShowState(mThumbImageViewLayout, INVISIBLE);
setViewShowState(mBottomProgressBar, INVISIBLE);
setViewShowState(mLockScreen, GONE);
if (mLoadingProgressBar instanceof ENDownloadView) {
ENDownloadView enDownloadView = (ENDownloadView) mLoadingProgressBar;
if (enDownloadView.getCurrentState() == ENDownloadView.STATE_PRE) {
((ENDownloadView) mLoadingProgressBar).start();
}
}
}
@Override
protected void changeUiToPlayingShow() {
Debuger.printfLog("changeUiToPlayingShow");
if (mLockCurScreen && mNeedLockFull) {
setViewShowState(mLockScreen, VISIBLE);
return;
}
setViewShowState(mTopContainer, VISIBLE);
setViewShowState(mBottomContainer, VISIBLE);
setViewShowState(mStartButton, VISIBLE);
setViewShowState(mLoadingProgressBar, INVISIBLE);
setViewShowState(mThumbImageViewLayout, INVISIBLE);
setViewShowState(mBottomProgressBar, INVISIBLE);
setViewShowState(mLockScreen, (mIfCurrentIsFullscreen && mNeedLockFull) ? VISIBLE : GONE);
if (mLoadingProgressBar instanceof ENDownloadView) {
((ENDownloadView) mLoadingProgressBar).reset();
}
updateStartImage();
}
@Override
protected void changeUiToPauseShow() {
Debuger.printfLog("changeUiToPauseShow");
if (mLockCurScreen && mNeedLockFull) {
setViewShowState(mLockScreen, VISIBLE);
return;
}
setViewShowState(mTopContainer, VISIBLE);
setViewShowState(mBottomContainer, VISIBLE);
setViewShowState(mStartButton, VISIBLE);
setViewShowState(mLoadingProgressBar, INVISIBLE);
setViewShowState(mThumbImageViewLayout, INVISIBLE);
setViewShowState(mBottomProgressBar, INVISIBLE);
setViewShowState(mLockScreen, (mIfCurrentIsFullscreen && mNeedLockFull) ? VISIBLE : GONE);
if (mLoadingProgressBar instanceof ENDownloadView) {
((ENDownloadView) mLoadingProgressBar).reset();
}
updateStartImage();
updatePauseCover();
}
@Override
protected void changeUiToPlayingBufferingShow() {
Debuger.printfLog("changeUiToPlayingBufferingShow");
setViewShowState(mTopContainer, VISIBLE);
setViewShowState(mBottomContainer, VISIBLE);
setViewShowState(mStartButton, INVISIBLE);
setViewShowState(mLoadingProgressBar, VISIBLE);
setViewShowState(mThumbImageViewLayout, INVISIBLE);
setViewShowState(mBottomProgressBar, INVISIBLE);
setViewShowState(mLockScreen, GONE);
if (mLoadingProgressBar instanceof ENDownloadView) {
ENDownloadView enDownloadView = (ENDownloadView) mLoadingProgressBar;
if (enDownloadView.getCurrentState() == ENDownloadView.STATE_PRE) {
((ENDownloadView) mLoadingProgressBar).start();
}
}
}
@Override
protected void changeUiToCompleteShow() {
Debuger.printfLog("changeUiToCompleteShow");
setViewShowState(mTopContainer, VISIBLE);
setViewShowState(mBottomContainer, VISIBLE);
setViewShowState(mStartButton, VISIBLE);
setViewShowState(mLoadingProgressBar, INVISIBLE);
setViewShowState(mThumbImageViewLayout, VISIBLE);
setViewShowState(mBottomProgressBar, INVISIBLE);
setViewShowState(mLockScreen, (mIfCurrentIsFullscreen && mNeedLockFull) ? VISIBLE : GONE);
if (mLoadingProgressBar instanceof ENDownloadView) {
((ENDownloadView) mLoadingProgressBar).reset();
}
updateStartImage();
}
@Override
protected void changeUiToError() {
Debuger.printfLog("changeUiToError");
setViewShowState(mTopContainer, INVISIBLE);
setViewShowState(mBottomContainer, INVISIBLE);
setViewShowState(mStartButton, VISIBLE);
setViewShowState(mLoadingProgressBar, INVISIBLE);
setViewShowState(mThumbImageViewLayout, INVISIBLE);
setViewShowState(mBottomProgressBar, INVISIBLE);
setViewShowState(mLockScreen, (mIfCurrentIsFullscreen && mNeedLockFull) ? VISIBLE : GONE);
if (mLoadingProgressBar instanceof ENDownloadView) {
((ENDownloadView) mLoadingProgressBar).reset();
}
updateStartImage();
}
@Override
protected void onDetachedFromWindow() {
super.onDetachedFromWindow();
dismissVolumeDialog();
dismissBrightnessDialog();
}
/**
 * Layout id for the seek-progress dialog.
 *
 * Used by showProgressDialog.
 */
protected int getProgressDialogLayoutId() {
return R.layout.video_progress_dialog;
}
/**
 * Id of the progress bar inside the seek-progress dialog.
 *
 * Used by showProgressDialog.
 */
protected int getProgressDialogProgressId() {
return R.id.duration_progressbar;
}
/**
 * Id of the current-position text inside the seek-progress dialog.
 *
 * Used by showProgressDialog.
 */
protected int getProgressDialogCurrentDurationTextId() {
return R.id.tv_current;
}
/**
 * Id of the total-duration text inside the seek-progress dialog.
 *
 * Used by showProgressDialog.
 */
protected int getProgressDialogAllDurationTextId() {
return R.id.tv_duration;
}
/**
 * Id of the fast-forward/rewind icon inside the seek-progress dialog.
 *
 * Used by showProgressDialog.
 */
protected int getProgressDialogImageId() {
return R.id.duration_image_tip;
}
/**
 * Layout id for the volume dialog.
 *
 * Used by showVolumeDialog.
 */
protected int getVolumeLayoutId() {
return R.layout.video_volume_dialog;
}
/**
 * Id of the progress bar inside the volume dialog.
 *
 * Used by showVolumeDialog.
 */
protected int getVolumeProgressId() {
return R.id.volume_progressbar;
}
/**
 * Layout id for the brightness dialog.
 *
 * Used by showBrightnessDialog.
 */
protected int getBrightnessLayoutId() {
return R.layout.video_brightness;
}
/**
 * Id of the percentage text inside the brightness dialog.
 *
 * Used by showBrightnessDialog.
 */
protected int getBrightnessTextId() {
return R.id.app_video_brightness;
}
protected void changeUiToPrepareingClear() {
Debuger.printfLog("changeUiToPrepareingClear");
setViewShowState(mTopContainer, INVISIBLE);
setViewShowState(mBottomContainer, INVISIBLE);
setViewShowState(mStartButton, INVISIBLE);
setViewShowState(mLoadingProgressBar, INVISIBLE);
setViewShowState(mThumbImageViewLayout, INVISIBLE);
setViewShowState(mBottomProgressBar, INVISIBLE);
setViewShowState(mLockScreen, GONE);
if (mLoadingProgressBar instanceof ENDownloadView) {
((ENDownloadView) mLoadingProgressBar).reset();
}
}
protected void changeUiToPlayingClear() {
Debuger.printfLog("changeUiToPlayingClear");
changeUiToClear();
setViewShowState(mBottomProgressBar, VISIBLE);
}
protected void changeUiToPauseClear() {
Debuger.printfLog("changeUiToPauseClear");
changeUiToClear();
setViewShowState(mBottomProgressBar, VISIBLE);
updatePauseCover();
}
protected void changeUiToPlayingBufferingClear() {
Debuger.printfLog("changeUiToPlayingBufferingClear");
setViewShowState(mTopContainer, INVISIBLE);
setViewShowState(mBottomContainer, INVISIBLE);
setViewShowState(mStartButton, INVISIBLE);
setViewShowState(mLoadingProgressBar, VISIBLE);
setViewShowState(mThumbImageViewLayout, INVISIBLE);
setViewShowState(mBottomProgressBar, VISIBLE);
setViewShowState(mLockScreen, GONE);
if (mLoadingProgressBar instanceof ENDownloadView) {
ENDownloadView enDownloadView = (ENDownloadView) mLoadingProgressBar;
if (enDownloadView.getCurrentState() == ENDownloadView.STATE_PRE) {
((ENDownloadView) mLoadingProgressBar).start();
}
}
updateStartImage();
}
protected void changeUiToClear() {
Debuger.printfLog("changeUiToClear");
setViewShowState(mTopContainer, INVISIBLE);
setViewShowState(mBottomContainer, INVISIBLE);
setViewShowState(mStartButton, INVISIBLE);
setViewShowState(mLoadingProgressBar, INVISIBLE);
setViewShowState(mThumbImageViewLayout, INVISIBLE);
setViewShowState(mBottomProgressBar, INVISIBLE);
setViewShowState(mLockScreen, GONE);
if (mLoadingProgressBar instanceof ENDownloadView) {
((ENDownloadView) mLoadingProgressBar).reset();
}
}
protected void changeUiToCompleteClear() {
Debuger.printfLog("changeUiToCompleteClear");
setViewShowState(mTopContainer, INVISIBLE);
setViewShowState(mBottomContainer, INVISIBLE);
setViewShowState(mStartButton, VISIBLE);
setViewShowState(mLoadingProgressBar, INVISIBLE);
setViewShowState(mThumbImageViewLayout, VISIBLE);
setViewShowState(mBottomProgressBar, VISIBLE);
setViewShowState(mLockScreen, (mIfCurrentIsFullscreen && mNeedLockFull) ? VISIBLE : GONE);
if (mLoadingProgressBar instanceof ENDownloadView) {
((ENDownloadView) mLoadingProgressBar).reset();
}
updateStartImage();
}
/**
 * Update the start button image to match the current playback state.
 */
protected void updateStartImage() {
if (mStartButton instanceof ENPlayView) {
ENPlayView enPlayView = (ENPlayView) mStartButton;
enPlayView.setDuration(500);
if (mCurrentState == CURRENT_STATE_PLAYING) {
enPlayView.play();
} else if (mCurrentState == CURRENT_STATE_ERROR) {
enPlayView.pause();
} else {
enPlayView.pause();
}
} else if (mStartButton instanceof ImageView) {
ImageView imageView = (ImageView) mStartButton;
if (mCurrentState == CURRENT_STATE_PLAYING) {
imageView.setImageResource(R.drawable.video_click_pause_selector);
} else if (mCurrentState == CURRENT_STATE_ERROR) {
imageView.setImageResource(R.drawable.video_click_error_selector);
} else {
imageView.setImageResource(R.drawable.video_click_play_selector);
}
}
}
/**
 * Copy custom UI settings to the fullscreen player instance.
 */
private void initFullUI(StandardGSYVideoPlayer standardGSYVideoPlayer) {
if (mBottomProgressDrawable != null) {
standardGSYVideoPlayer.setBottomProgressBarDrawable(mBottomProgressDrawable);
}
if (mBottomShowProgressDrawable != null && mBottomShowProgressThumbDrawable != null) {
standardGSYVideoPlayer.setBottomShowProgressBarDrawable(mBottomShowProgressDrawable,
mBottomShowProgressThumbDrawable);
}
if (mVolumeProgressDrawable != null) {
standardGSYVideoPlayer.setDialogVolumeProgressBar(mVolumeProgressDrawable);
}
if (mDialogProgressBarDrawable != null) {
standardGSYVideoPlayer.setDialogProgressBar(mDialogProgressBarDrawable);
}
if (mDialogProgressHighLightColor != -11 && mDialogProgressNormalColor != -11) {
standardGSYVideoPlayer.setDialogProgressColor(mDialogProgressHighLightColor, mDialogProgressNormalColor);
}
}
/**
 * Set the drawable and thumb of the bottom seek bar.
 */
public void setBottomShowProgressBarDrawable(Drawable drawable, Drawable thumb) {
mBottomShowProgressDrawable = drawable;
mBottomShowProgressThumbDrawable = thumb;
if (mProgressBar != null) {
mProgressBar.setProgressDrawable(drawable);
mProgressBar.setThumb(thumb);
}
}
/**
 * Set the drawable of the thin bottom progress bar.
 */
public void setBottomProgressBarDrawable(Drawable drawable) {
mBottomProgressDrawable = drawable;
if (mBottomProgressBar != null) {
mBottomProgressBar.setProgressDrawable(drawable);
}
}
/**
 * Set the drawable of the volume dialog's progress bar.
 */
public void setDialogVolumeProgressBar(Drawable drawable) {
mVolumeProgressDrawable = drawable;
}
/**
 * Set the drawable of the seek-progress dialog's bar.
 */
public void setDialogProgressBar(Drawable drawable) {
mDialogProgressBarDrawable = drawable;
}
/**
 * Set the highlight and normal text colors of the seek-progress dialog.
 */
public void setDialogProgressColor(int highLightColor, int normalColor) {
mDialogProgressHighLightColor = highLightColor;
mDialogProgressNormalColor = normalColor;
}
/************************************* ****************************************/
/**
 * Capture the current video frame.
 */
public void taskShotPic(GSYVideoShotListener gsyVideoShotListener) {
this.taskShotPic(gsyVideoShotListener, false);
}
/**
 * Capture the current video frame.
 *
 * @param high capture at full quality (slower)
 */
public void taskShotPic(GSYVideoShotListener gsyVideoShotListener, boolean high) {
if (getCurrentPlayer().getRenderProxy() != null) {
getCurrentPlayer().getRenderProxy().taskShotPic(gsyVideoShotListener, high);
}
}
/**
 * Save the current frame to a file.
 */
public void saveFrame(final File file, GSYVideoShotSaveListener gsyVideoShotSaveListener) {
saveFrame(file, false, gsyVideoShotSaveListener);
}
/**
 * Save the current frame to a file.
 *
 * @param high capture at full quality (slower)
 */
public void saveFrame(final File file, final boolean high, final GSYVideoShotSaveListener gsyVideoShotSaveListener) {
if (getCurrentPlayer().getRenderProxy() != null) {
getCurrentPlayer().getRenderProxy().saveFrame(file, high, gsyVideoShotSaveListener);
}
}
/**
 * Restart the progress and control-dismiss timers.
 * Useful after GSYVideoHelper removes the view, since
 * GSYVideoControlView cancels its timers in onDetachedFromWindow.
 */
public void restartTimerTask() {
startProgressTimer();
startDismissControlViewTimer();
}
}
```
|
```python
import copy
import io
import json
import random
from datetime import datetime, timedelta
import pytest
from AzureADIdentityProtection import (AADClient, OUTPUTS_PREFIX, DATE_FORMAT,
azure_ad_identity_protection_risk_detection_list_command,
azure_ad_identity_protection_risky_users_list_command,
azure_ad_identity_protection_risky_users_history_list_command,
azure_ad_identity_protection_risky_users_confirm_compromised_command,
azure_ad_identity_protection_risky_users_dismiss_command,
parse_list)
dummy_user_id = 'dummy_id'
def util_load_json(path):
with io.open(path, mode='r', encoding='utf-8') as f:
return json.loads(f.read())
@pytest.fixture()
def client(mocker):
mocker.patch('AzureADIdentityProtection.MicrosoftClient.get_access_token', return_value='token')
return AADClient(app_id='dummy_app_id',
subscription_id='dummy_subscription_id',
verify=False,
proxy=False,
azure_ad_endpoint='path_to_url')
@pytest.mark.parametrize('command,test_data_file,url_suffix,context_path,kwargs',
((azure_ad_identity_protection_risk_detection_list_command,
'test_data/risk_detections_response.json',
'riskDetections',
'Risks',
{}),
(azure_ad_identity_protection_risky_users_list_command,
'test_data/risky_users_response.json',
'RiskyUsers',
'RiskyUsers',
{}),
(azure_ad_identity_protection_risky_users_history_list_command,
'test_data/risky_user_history_response.json',
f'RiskyUsers/{dummy_user_id}/history',
"RiskyUserHistory",
{'user_id': dummy_user_id})
))
def test_list_commands(client, requests_mock, command, test_data_file, url_suffix, context_path,
kwargs):
"""
Given:
- AAD Client
When:
- Listing (risks, risky users, user history)
Then:
- Verify API request sent as expected
- Verify command outputs
"""
with open(test_data_file) as f:
api_response = json.load(f)
requests_mock.get(f'{client._base_url}/{url_suffix}?$top=50', json=api_response)
result = command(client, limit=50, **kwargs)
expected_values = api_response.get('value')
actual_values = result.outputs.get(f'{OUTPUTS_PREFIX}.{context_path}(val.id === obj.id)')
assert actual_values == expected_values
expected_next_link = api_response.get('@odata.nextLink')
if expected_next_link: # risky_users_history_list does not have next link
actual_next_url = result.outputs.get(f'{OUTPUTS_PREFIX}.NextLink(obj.Description === "{context_path}")', {}) \
.get('URL')
assert actual_next_url == expected_next_link
@pytest.mark.parametrize('method,expected_output,url_suffix,kwargs',
((azure_ad_identity_protection_risky_users_confirm_compromised_command,
' Confirmed successfully.',
'riskyUsers/confirmCompromised',
{'user_ids': [dummy_user_id]}
),
(azure_ad_identity_protection_risky_users_dismiss_command,
' Dismissed successfully.',
'riskyUsers/dismiss',
{'user_ids': [dummy_user_id]}
)
)
)
def test_status_update_commands(client, requests_mock, method, expected_output, url_suffix, kwargs):
"""
Given:
- AAD Client
- User name whose status we want to update
When:
- Calling a user-status-changing method (dismiss, confirm compromised)
Then:
- Verify API request sent as expected
- Verify command outputs
"""
requests_mock.post(f'{client._base_url}/{url_suffix}', status_code=204)
result = method(client, **kwargs)
assert requests_mock.request_history[0].json() == {'userIds': [dummy_user_id]}
assert result == expected_output
def test_parse_list():
    """
    Given
        - A Microsoft Graph list response (collection of objects)
    When
        - calling parse_list()
    Then
        - Validate output parsing
    """
    with open('test_data/risk_detections_response.json') as f:
        response = json.load(f)

    human_readable_title = "Risks"
    context_path = "Risks_path"
    parsed = parse_list(response, human_readable_title=human_readable_title, context_path=context_path)
    outputs = parsed.outputs
    assert len(outputs) == 2

    values = outputs[f'AADIdentityProtection.{context_path}(val.id === obj.id)'][0]
    assert len(values) == len(response['value'][0])  # all fields parsed

    next_link_dict = outputs[f'AADIdentityProtection.NextLink(obj.Description === "{context_path}")']
    assert next_link_dict == {'Description': context_path,
                              'URL': 'path_to_url'}
    assert parsed.readable_output.startswith("### Risks (1 result)")


def test_parse_list_empty():
    """
    Given
        - An empty Microsoft Graph list response
    When
        - calling parse_list()
    Then
        - Validate output parsing
    """
    empty_response = dict()
    human_readable_title = "Risks"
    context_path = "Risks_path"
    parsed = parse_list(empty_response, human_readable_title=human_readable_title, context_path=context_path)
    outputs = parsed.outputs
    assert outputs == {f'AADIdentityProtection.{context_path}(val.id === obj.id)': []}  # no next_link
    assert f"{human_readable_title} (0 results)" in parsed.readable_output
    assert "**No entries.**" in parsed.readable_output
def test_fetch_all_incidents(mocker):
    """
    Given
        - A last-run time earlier than all test detections.
    When
        - Converting the mocked detections to incidents.
    Then
        - Validate that all relevant incidents are returned, along with the latest detection time.
    """
    from AzureADIdentityProtection import detections_to_incidents, get_last_fetch_time
    test_incidents = util_load_json('test_data/incidents.json')
    last_run = {
        'latest_detection_found': '2021-07-10T11:02:54Z'
    }
    last_fetch = get_last_fetch_time(last_run, {})
    incidents, last_item_time = detections_to_incidents(test_incidents.get('value', []), last_fetch)
    assert len(incidents) == 10
    assert incidents[0].get('name') == 'Azure AD: 17 newCountry adminDismissedAllRiskForUser'
    assert last_item_time == '2021-07-17T14:11:57Z'


def test_fetch_new_incidents(mocker):
    """
    Given
        - A last-run time later than all test detections.
    When
        - Converting the mocked detections to incidents.
    Then
        - Validate that the last-run detection time is carried over unchanged.
    """
    from AzureADIdentityProtection import detections_to_incidents, get_last_fetch_time
    test_incidents = util_load_json('test_data/incidents.json')
    last_run = {
        'latest_detection_found': '2021-07-20T11:02:54Z'
    }
    last_fetch = get_last_fetch_time(last_run, {})
    incidents, last_item_time = detections_to_incidents(test_incidents.get('value', []), last_fetch)
    assert len(incidents) == 10
    assert incidents[0].get('name') == 'Azure AD: 17 newCountry adminDismissedAllRiskForUser'
    assert last_item_time == '2021-07-20T11:02:54Z'
# set time to 2021-07-29 11:10:00
def test_first_fetch_start_time():
    from AzureADIdentityProtection import get_last_fetch_time
    last_run = {}
    params = {
        "first_fetch": "2 days"
    }
    expected_datetime = datetime.now() - timedelta(days=2)
    last_fetch = get_last_fetch_time(last_run, params)
    last_fetch_datetime = datetime.strptime(last_fetch.removesuffix('Z'), DATE_FORMAT)
    assert expected_datetime - timedelta(minutes=1) < last_fetch_datetime < expected_datetime + timedelta(minutes=1)


def test_non_first_fetch_start_time():
    from AzureADIdentityProtection import get_last_fetch_time
    last_run = {
        "latest_detection_found": '2021-07-28T00:10:00.000Z'
    }
    params = {
        "first_fetch": "2 days"
    }
    last_fetch = get_last_fetch_time(last_run, params)
    assert last_fetch == '2021-07-28T00:10:00.000Z'


def test_filter_creation_with_user_filter():
    from AzureADIdentityProtection import build_filter
    last_fetch = '2021-07-28T00:10:00.000Z'
    params = {
        "fetch_filter_expression": "id gt 1234"
    }
    user_filter = params['fetch_filter_expression']
    constructed_filter = build_filter(last_fetch, params)
    assert constructed_filter == f"({user_filter}) and detectedDateTime gt {last_fetch}"


def test_filter_creation_without_user_filter():
    from AzureADIdentityProtection import build_filter
    last_fetch = '2021-07-28T00:10:00.000Z'
    params = {
        "first_fetch": "2 days",
        "fetch_filter_expression": ""
    }
    constructed_filter = build_filter(last_fetch, params)
    assert constructed_filter == f"detectedDateTime gt {last_fetch}"

    params = {
        "first_fetch": "2 days",
    }
    constructed_filter = build_filter(last_fetch, params)
    assert constructed_filter == f"detectedDateTime gt {last_fetch}"
@pytest.mark.parametrize('date_to_test', [('2021-07-28T00:10:00.000Z'), ('2021-07-28T00:10:00Z')])
def test_date_str_to_azure_format_z_suffix(date_to_test):
    """
    Given:
        - A date string that includes a Z at the end
    When:
        - The date string is converted to Azure format
    Then:
        - The result is a string without a Z at the end
    """
    from AzureADIdentityProtection import date_str_to_azure_format
    assert date_str_to_azure_format(date_to_test)[-1].lower() != 'z'


@pytest.mark.parametrize('date_to_test, expected', [
    ('2021-07-28T00:10:00.123456Z', '2021-07-28T00:10:00.123456'),
    ('2021-07-28T00:10:00.123456789Z', '2021-07-28T00:10:00.123456'),
    ('2021-07-28T00:10:00.123Z', '2021-07-28T00:10:00.123'),
    ('2021-07-28T00:10:00.123456', '2021-07-28T00:10:00.123456'),
    ('2021-07-28T00:10:00.123456789', '2021-07-28T00:10:00.123456'),
    ('2021-07-28T00:10:00.123', '2021-07-28T00:10:00.123')
])
def test_date_str_to_azure_format_with_ms(date_to_test, expected):
    """
    Given:
        - A date string with fractional seconds of varying precision.
    When:
        - The date string is converted to Azure format
    Then:
        - The result keeps the fractional digits, truncated to at most six.
    """
    from AzureADIdentityProtection import date_str_to_azure_format
    assert date_str_to_azure_format(date_to_test) == expected


@pytest.mark.parametrize('date_to_test, expected', [
    ('2021-07-28T00:10:00Z', '2021-07-28T00:10:00.000'),
    ('2021-07-28T00:10:00', '2021-07-28T00:10:00.000'),
])
def test_date_str_to_azure_format_without_ms(date_to_test, expected):
    """
    Given:
        - Two dates, without milliseconds, with and without a Z at the end.
    When:
        - Transforming the dates to Azure format
    Then:
        - Both dates have milliseconds and do not have a Z at the end.
    """
    from AzureADIdentityProtection import date_str_to_azure_format
    assert date_str_to_azure_format(date_to_test) == expected
def test_detections_to_incident():
    """
    Given:
        - 10 detections, sorted by their detection time.
        - 10 detections, shuffled.
    When:
        - Calling detections_to_incidents on the sorted detections.
        - Calling detections_to_incidents on the shuffled detections.
    Then:
        - Both calls return 10 incidents and the latest detection time among the detections.
    """
    from AzureADIdentityProtection import detections_to_incidents
    detections_in_order = util_load_json('test_data/incidents.json')['value']
    detections_out_of_order = copy.deepcopy(detections_in_order)
    random.shuffle(detections_out_of_order)

    last_fetch = '2019-07-28T00:10:00.123456'
    incidents, latest_incident_time = detections_to_incidents(detections_in_order, last_fetch)
    assert len(incidents) == 10
    assert latest_incident_time == '2021-07-17T14:11:57Z'

    incidents, latest_incident_time = detections_to_incidents(detections_out_of_order, last_fetch)
    assert len(incidents) == 10
    assert latest_incident_time == '2021-07-17T14:11:57Z'
def mock_list_detections(limit, filter_expression, user_id, user_principal_name):
    """
    Mocks the request to list detections from the API.
    The mock manually takes into consideration the filter and limit supplied as parameters.
    It also accepts user_id and user_principal_name, to allow fully running fetch (as the actual
    function receives these parameters).
    """
    from AzureADIdentityProtection import DATE_FORMAT, date_str_to_azure_format
    test_incidents = util_load_json('test_data/incidents.json')
    all_possible_results = test_incidents.get('value')

    start_time = filter_expression.split('gt ')[-1]
    start_time = date_str_to_azure_format(start_time)
    start_time_datetime = datetime.strptime(start_time, DATE_FORMAT)

    incidents_compliant_with_filter = []
    for detection in all_possible_results:
        detection_time = date_str_to_azure_format(detection['detectedDateTime'])
        detection_datetime = datetime.strptime(detection_time, DATE_FORMAT)
        if detection_datetime > start_time_datetime:
            incidents_compliant_with_filter.append(detection)

    incidents_compliant_with_limit = incidents_compliant_with_filter[:limit]
    res = {
        'value': incidents_compliant_with_limit
    }
    return res


def mock_get_last_fetch_time(last_run, params):
    """
    Mocks the function that retrieves the fetch time that should be used.
    Args:
        last_run: the last run's data.
        params: the instance parameters (mocked).
    Returns:
        last_fetch (str): the date to start the fetch from.
    """
    last_fetch = last_run.get('latest_detection_found')
    if not last_fetch:
        # We can't freeze the time and still parse relative time expressions such as "2 days",
        # so fall back to a fixed start time.
        last_fetch = "2021-07-16T11:08:55.000Z"
    return last_fetch
def test_fetch_complete_flow(mocker, client):
    """
    Given:
        - A start time of 2021-07-16T11:08:55.000.
        - 10 possible incidents to fetch, the first 2 with a detection date before the start time.
    When:
        - Running fetch for the first time
    Then:
        - The two incidents before the start time are not fetched.
        - The 5 incidents after them are fetched (max_fetch is 5).
        - The last run is updated with the detection date of the latest fetched incident.
    When:
        - Running fetch for the second time.
    Then:
        - Exactly 3 incidents (the last 3 incidents that can be fetched) are fetched.
        - The first fetched incident is the earliest one not fetched in the previous run.
        - The last run is updated with the detection date of the latest fetched incident.
    When:
        - Running fetch for the third time.
    Then:
        - No incidents are fetched.
        - The fetch end time remains unchanged.
    """
    from AzureADIdentityProtection import fetch_incidents
    mocker.patch('AzureADIdentityProtection.get_last_fetch_time', side_effect=mock_get_last_fetch_time)
    mocker.patch('AzureADIdentityProtection.MicrosoftClient.get_access_token', return_value='token')
    mocker.patch('AzureADIdentityProtection.AADClient.azure_ad_identity_protection_risk_detection_list_raw',
                 side_effect=mock_list_detections)
    mock_params = {
        'max_fetch': 5
    }
    last_run = {}
    mocker.patch('demistomock.params', return_value=mock_params)
    mocker.patch('demistomock.getLastRun', return_value=last_run)

    incidents, last_run = fetch_incidents(client, mock_params)
    first_incident = incidents[0].get('name')
    assert first_incident == 'Azure AD: 37 newCountry adminDismissedAllRiskForUser'
    assert len(incidents) == 5
    assert last_run['latest_detection_found'] == '2021-07-17T14:09:54Z'

    mocker.patch('demistomock.getLastRun', return_value=last_run)
    incidents, last_run = fetch_incidents(client, mock_params)
    first_incident = incidents[0].get('name')
    assert first_incident == 'Azure AD: 87 newCountry adminDismissedAllRiskForUser'
    assert len(incidents) == 3
    assert last_run['latest_detection_found'] == '2021-07-17T14:11:57Z'

    mocker.patch('demistomock.getLastRun', return_value=last_run)
    incidents, last_run = fetch_incidents(client, mock_params)
    assert len(incidents) == 0
    assert last_run['latest_detection_found'] == '2021-07-17T14:11:57Z'
```
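The date-normalization and filter-construction tests above fully pin down the expected behavior of `date_str_to_azure_format` and `build_filter`. A minimal sketch consistent with those cases follows; the names are borrowed from the tests and the actual module's implementation may differ in details:

```python
# Hypothetical reference implementations matching the test expectations above;
# not the actual AzureADIdentityProtection source.

def date_str_to_azure_format(date_str: str) -> str:
    """Drop a trailing 'Z' and cap fractional seconds at six digits,
    appending '.000' when no fraction is present."""
    if date_str.lower().endswith('z'):
        date_str = date_str[:-1]
    if '.' not in date_str:
        date_str += '.000'
    base, _, fraction = date_str.partition('.')
    return f'{base}.{fraction[:6]}'


def build_filter(last_fetch: str, params: dict) -> str:
    """Combine an optional user-supplied OData filter with the time filter."""
    time_filter = f'detectedDateTime gt {last_fetch}'
    user_filter = params.get('fetch_filter_expression')
    return f'({user_filter}) and {time_filter}' if user_filter else time_filter
```

Both helpers are pure string manipulation, which is what lets the tests cover them exhaustively without mocking any HTTP traffic.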
|
Lewis Alfred Vasquez Tenorio (born July 9, 1984) is a Filipino professional basketball player for Barangay Ginebra San Miguel of the Philippine Basketball Association (PBA). He is also an assistant coach for the Letran Knights in the Philippines' NCAA.
He was nicknamed Showtime while he was on Alaska due to his speed, scoring and skills, and The Tinyente due to the initials of his name (L.T.). In Barangay Ginebra, he earned the nickname GINeral (combination of Ginebra's product, gin, and the word general) for his great skills as a point guard and his passing ability.
Early life and high school years
Tenorio started playing basketball at age six. Largely unnoticed at first, he tried out for a basketball team in grade three at Don Bosco Makati. In sixth grade, he played a nationally televised exhibition game in front of a PBA audience, his team facing the Ateneo Grade School's Small Basketeers team. Tenorio's team did not win, but he stole the show, scoring 31 points in only 21 minutes of play.
After his elementary days were over, he first went to Adamson under coach Charlie Dy before eventually transferring to San Beda under legendary bench tactician Ato Badolato. LA became part of a Bedan squad that was rife with future collegiate stars – Magnum Membrere, Arjun Cordero, Toti Almeda, and Jon Jon Tabique. He won a title in his junior year, but finished just third in his last year with the Red Cubs.
College and amateur career
Tenorio made an immediate impact as a rookie for the Blue Eagles of Ateneo de Manila, helping lead his team to the 2001 basketball finals of the University Athletic Association of the Philippines (UAAP). He was practically unstoppable in Game 3 of the best-of-3 finals series, scoring 30 points against college rival De La Salle University-Manila. DLSU-Manila, however, went on to win the series.
The following year, in 2002, he would once again lead the Ateneo de Manila back to the UAAP Finals. This time he and his team would not be denied as they exacted vengeance on DLSU-Manila to win the UAAP Men's Seniors basketball championship.
He would make a third straight Finals appearance in 2003 but he and his Blue Eagle team would yield their crown to the veteran Far Eastern University Tamaraws.
He played a total of five seasons with Ateneo de Manila and also graduated with a Bachelor of Arts degree in 2006, something he considers a far more important achievement than any of the basketball accolades he ever got. He played under four college coaches: Joe Lipa, Joel Banal, Sandy Arespacochaga and Norman Black.
After completing his collegiate eligibility, he saw action in the Philippine Basketball League (PBL), the quasi-commercial basketball league of the Philippines and the last stepping stone toward a professional basketball career. In his last PBL conference, he led his Harbour Centre Portmasters team to the 2006 PBL Unity Cup championship, a fitting end to his amateur career.
PBA career
In the 2006 PBA draft, Tenorio was selected fourth overall by the San Miguel Beermen. He averaged 25.5 minutes for Magnolia, with a respectable 7.8 points, 4.6 assists and 3.6 rebounds in nine games.
In a surprise move in March 2008, he and Larry Fonacier were traded to the Alaska Aces for Mike Cortez and Ken Bono. The Aces were happy with the trade, as they gained a pure point guard in Tenorio, allowing Willie Miller to concentrate on his scoring.
In the first four games of the 2009–2010 KFC-PBA Philippine Cup, Tenorio did not disappoint Alaska's expectations. As the starting point guard, he led the Alaska team to a scrambling victory over San Miguel Beer in their first game. In their next three games, Tenorio was the ever-reliable point guard who led his team to the top of the PBA standings.
On August 31, 2012, Tenorio was traded to Barangay Ginebra San Miguel in a six-player blockbuster deal. Tenorio also became known for his "Pambansang reverse", a reverse lay-up that made him famous in international basketball.
On October 14, 2016, Tenorio was recognized during the PBA Leo Awards Night as he was named to the PBA Mythical Second Team. On October 19, 2016, Tenorio was named as the 2016 PBA Governor's Cup Finals Most Valuable Player after averaging 17.2 points, 4.7 assists and 3.8 rebounds against the Meralco Bolts.
On June 12, 2022, Tenorio played in his 700th consecutive game, the most consecutive games played by a PBA player. On December 10, 2022, he made his 1,178th three-point field goal, tying James Yap for third most all time. On March 1, 2023, Tenorio's consecutive games played streak ended at 744 due to a groin injury.
PBA career statistics
As of the end of the 2022–23 season
Season-by-season averages
{| class="wikitable sortable"
! Year !! Team !! GP !! MPG !! FG% !! 3P% !! FT% !! RPG !! APG !! SPG !! BPG !! PPG
|-
| align=left |
| align=left | San Miguel
| 62 || 22.3 || .363 || .297 || .824 || 2.5 || 3.1 || 1.0 || .0 || 8.2
|-
| align=left rowspan=2|
| align=left | Magnolia
| rowspan=2|39 || rowspan=2|28.4 || rowspan=2|.405 || rowspan=2|.338 || rowspan=2|.701 || rowspan=2|3.6 || rowspan=2|4.5 || rowspan=2|1.2 || rowspan=2|.1 || rowspan=2|8.6
|-
| align=left | Alaska
|-
| align=left |
| align=left | Alaska
| 47 || 33.3 || .712 || .312 || .785 || 4.2 || 4.7 || 1.1 || .0 || 11.0
|-
| align=left |
| align=left | Alaska
| 62 || 35.3 || .399 || .337 || .844 || 4.6 || 4.6 || 1.2 || .0 || 12.8
|-
| align=left |
| align=left | Alaska
| 42 || 35.5 || .394 || .378 || .833 || 4.8 || 4.5 || 1.3 || .1 || 13.5
|-
| align=left |
| align=left | Alaska
| 35 || 36.1 || .373 || .246 || .800 || 5.4 || 5.4 || 1.2 || .1 || 14.0
|-
| align=left |
| align=left | Barangay Ginebra
| 52 || 36.0 || .364 || .297 || .753 || 5.0 || 5.8 || 1.5 || .1 || 14.0
|-
| align=left |
| align=left | Barangay Ginebra
| 43 || 32.8 || .376 || .275 || .830 || 4.3 || 5.5 || 1.3 || .1 || 11.2
|-
| align=left |
| align=left | Barangay Ginebra
| 37 || 29.2 || .382 || .333 || .793 || 4.3 || 3.9 || 1.5 || .0 || 10.0
|-
| align=left |
| align=left | Barangay Ginebra
| 49 || 33.7 || .433 || .387 || .804 || 4.1 || 4.5 || 1.2 || .1 || 13.0
|-
| align=left |
| align=left | Barangay Ginebra
| 64 || 34.3 || .403 || .370 || .780 || 3.5 || 4.7 || 1.3 || .0 || 14.2
|-
| align=left |
| align=left | Barangay Ginebra
| 57 || 35.9 || .372 || .335 || .837 || 3.5 || 4.6 || 1.6 || .1 || 12.5
|-
| align=left |
| align=left | Barangay Ginebra
| 52 || 35.4 || .387 || .361 || .863 || 3.3 || 4.6 || 1.2 || .0 || 11.8
|-
| align=left |
| align=left | Barangay Ginebra
| 22 || 31.3 || .423 || .400 || .750 || 2.9 || 4.6 || .8 || .1 || 9.6
|-
| align=left |
| align=left | Barangay Ginebra
| 36 || 37.8 || .405 || .338 || .853 || 3.3 || 5.0 || .5 || .1 || 12.7
|-
| align=left |
| align=left | Barangay Ginebra
| 45 || 26.8 || .338 || .313 || .739 || 2.0 || 3.6 || .8 || .0 || 7.8
|-class=sortbottom
| align="center" colspan="2" | Career
| 744 || 32.7 || .398 || .332 || .801 || 3.8 || 4.6 || 1.2 || .1 || 11.7
|}
National team career
Tenorio made the final list of the Smart Gilas 2.0 roster. The team's first tournament was the prestigious 2012 William Jones Cup, held from August 18–26 in Taipei. Gilas compiled an impressive 6–1 record before facing the USA team in their last game. Tenorio led the team to a 76–75 win, finishing with 20 points and grabbing the most important rebound of the game. Gilas won the tournament with a 7–1 record, the Philippines' fourth Jones Cup championship. Tenorio was named the tournament's Most Valuable Player after his performance against the tough USA team.
Personal life
On March 21, 2023, Tenorio announced that he had been diagnosed with stage 3 colon cancer. He was declared cancer-free in September 2023.
References
1984 births
Living people
2014 FIBA Basketball World Cup players
Alaska Aces (PBA) players
Asian Games competitors for the Philippines
Ateneo Blue Eagles men's basketball players
Barangay Ginebra San Miguel players
Basketball players at the 2014 Asian Games
Basketball players from Batangas
Competitors at the 2019 SEA Games
Filipino men's basketball coaches
Filipino men's basketball players
Letran Knights basketball coaches
Philippine Basketball Association All-Stars
Philippines men's national basketball team players
Point guards
San Beda University alumni
San Miguel Beermen draft picks
San Miguel Beermen players
SEA Games gold medalists for the Philippines
SEA Games medalists in basketball
Barangay Ginebra San Miguel coaches
|
```javascript
var monthNames = [
"January",
"February",
"March",
"April",
"May",
"June",
"July",
"August",
"September",
"October",
"November",
"December",
];
module.exports = function prettyDate(date) {
date = new Date(date);
return (
monthNames[date.getMonth()] +
" " +
date.getDate() +
", " +
date.getFullYear()
);
};
```
|
Kosambi is an old city and Buddhist pilgrimage site in India.
Kosambi may also refer to:
Kosambi, Tangerang, a subdistrict of Tangerang Regency, Banten, Indonesia
Damodar Dharmananda Kosambi (1907–1966), Indian polymath
Dharmananda Damodar Kosambi (1876–1947), Indian Buddhist and Pali language scholar
Duri Kosambi, Cengkareng, an administrative village (kelurahan) of Cengkareng, West Jakarta, Indonesia
See also
Kaushambi (disambiguation)
|
Die Lit is the debut studio album by American rapper Playboi Carti. It was released through AWGE and Interscope Records on May 11, 2018. Production was primarily handled by Pi'erre Bourne alongside several other record producers, including Don Cannon, Maaly Raw and IndigoChildRick. The album features a wide range of guest appearances from artists including Skepta, Travis Scott, Lil Uzi Vert, Pi'erre Bourne, Nicki Minaj, Bryson Tiller, Chief Keef, Gunna, Red Coldhearted, Young Thug, and Young Nudy. The album serves as a follow-up to his debut self-titled mixtape (2017).
Die Lit received generally positive reviews from critics and debuted at number three on the US Billboard 200, earning 61,000 album-equivalent units in its first week. In July 2019, the album was certified gold by the Recording Industry Association of America (RIAA).
Background and release
In December 2017, a video showed Playboi Carti behind a mixing board, presumably listening to new material, with the caption "Album Mode". In March 2018, Pi'erre Bourne teased via Twitter that the project had been finalized. On May 10, 2018, Carti announced the album's release date via Twitter. On the same day, ASAP Rocky revealed the album's artwork. Upon release, the album was briefly exclusive to Tidal before appearing on other streaming services.
Critical reception
Die Lit was met with generally positive reviews. At Metacritic, which assigns a normalized rating out of 100 to reviews from professional publications, the album received an average score of 71, based on seven reviews.
Evan Rytlewski of Pitchfork stated that Die Lit is "an album that works almost completely from its own lunatic script. It is a perversely infectious sugar high, rap that fundamentally recalibrates the brain's reward centers", praising the production and simplicity. Corrigan B of Tiny Mix Tapes said, "Like its predecessor, it's an album of party records; these are songs that will be played ad infinitum at functions until the hooks, the breaks, and, of course, the bass are burned into the brain of every attendee". Maxwell Cavaseno of HotNewHipHop concluded that Die Lit "feels like the closest to a fully realized album as Carti is ever going to come close to achieving" and that it "should, at the very least, be respected for doing so much while doing so very little". Online publication Sputnikmusic stated: "A lot of the gratification of this record is in the production, which takes the age-old hip-hop trick of taking a fractional melodic idea, barely a song by itself, and spinning out of it a thick sonic weave."
A. Harmony from Exclaim! criticized the album for having a lack of variety, saying "It's fun enough but, save for a few keepers, has the lifespan of a mayfly. Rock to it for the summer and forget most of it by September". Riley Wallace of HipHopDX stated, "Through a more respectable body of work—is unlikely to win over any naysayers". Neil Z. Yeung of AllMusic said, "Like Rae Sremmurd and Migos, these big-bass trap anthems owe much to their club-friendly vibe, but offer little in terms of substance or lasting impact".
Year-end lists
Commercial performance
Die Lit debuted at number three on the US Billboard 200 chart, earning 61,000 album-equivalent units (including 5,000 copies as pure album sales) in its first week. On July 31, 2019, the album was certified gold by the Recording Industry Association of America (RIAA) for combined sales and streams in excess of 500,000 units in the United States. As of January 2021, the album had earned 1.1 million album-equivalent units, with its tracks drawing 1.67 billion on-demand streams.
Track listing
All tracks written by Jordan Carter and Jordan Jenks, and produced by Pi'erre Bourne, except where noted.
Notes
signifies an uncredited co-producer
Sample credits
"R.I.P." contains a sample from "What About Us", written by Donald DeGrate, Jr., Reginald Moore, Shirley Murdock, Larry Troutman and Roger Troutman, as performed by Jodeci.
"Fell in Luv" contains a sample from "Grandloves", written by Megan James, Corin Roddick and Isaac Gerasimou, as performed by Purity Ring.
Charts
Weekly charts
Year-end charts
Certifications
References
2018 debut albums
Playboi Carti albums
Albums produced by Don Cannon
Albums produced by Pi'erre Bourne
Interscope Records albums
|
The Ponte Corvo, rarely Ponte Corbo, is a Roman segmental arch bridge across the Bacchiglione in Padua, Italy (Roman Patavium). Dating to the 1st or 2nd century AD, its three remaining arches cross a branch of the river and are today partly buried or walled up. The span-to-rise ratio of the bridge varies between 2.8 and 3.4 to 1, and the ratio of clear span to pier thickness from 4.9 to 6.9 to 1.
Besides the Ponte Corvo, there are three more ancient segmental arch bridges in Padua: Ponte San Lorenzo, Ponte Altinate and Ponte Molino, as well as Ponte San Matteo.
See also
Roman bridges
List of Roman bridges
Roman architecture
Roman engineering
References
Sources
Roman bridges in Italy
Roman segmental arch bridges
Deck arch bridges
Stone bridges in Italy
Bridges completed in the 1st century
Bridges in Padua
|
We Met in Virtual Reality is a 2022 documentary film that takes place entirely within the video game VRChat. It explores the social relations developed by the users of VRChat during the pandemic, and how their lives were changed by their time on the platform. It was created by Joe Hunting, who was the director and writer of the script.
Plot
The film follows several subjects in chronological order over more than a year, switching back and forth between their lives on the platform as their relationships evolve. Among them are a teacher who runs an online sign language school, a couple who met online, one of whom teaches dance classes on the platform, and another couple who found love there, along with other users. The film explores why they use the platform and how it helped them during the COVID-19 pandemic.
Release
It had its world premiere at the 2022 Sundance Film Festival on January 21, 2022. The film premiered on HBO and began streaming on HBO Max on July 27, 2022.
Reception
We Met in Virtual Reality holds a 94% critic rating on Rotten Tomatoes, with 33 of the 34 reviews being favorable. The consensus reads, "We Met in Virtual Reality takes a visually striking approach to its investigation of human interactions on the VR plane, with surprisingly poignant results."
According to The Hollywood Reporter, "watching We Met in Virtual Reality, you very quickly notice that the two people cuddling have horns and a tail and that the airplane they seem to be sitting on doesn’t exist. The young woman with pink hair talking about her suicide attempt is laying underneath the stars, but until she laments that the clouds aren’t moving, you could almost forget that they’re virtual as well. And when the deaf ASL instructor talks about losing his brother during COVID and lights a virtual Japanese lantern in his honor, there’s nothing synthetic about the emotions you feel."
Engadget states, "it’s clear from We Met in Virtual Reality that he’s not just dropping into the community for a quick story. Instead, he sees the humanity behind the avatars and virtual connections." On the other hand, Wired gave a more critical review, noting that the documentary leaves out the drastic ways VR is changing in the wake of competitors such as Meta's conception of the metaverse, stating, "Hunting spends a lot of time showing there’s a culture worth preserving; if only he’d shown if anyone is trying to do it."
References
External links
We Met in Virtual Reality at Field of Vision
2022 documentary films
2022 films
Films about virtual reality
Documentary films about video games
2020s English-language films
2022 independent films
|
```yaml
---
Resources:
KinesisFirehoseDeliveryStream:
Type: AWS::KinesisFirehose::DeliveryStream
Properties:
DeliveryStreamName: foobar
DeliveryStreamType: DirectPut
SplunkDestinationConfiguration:
HECEndpoint: String
HECEndpointType: path_to_url
HECToken: '{{resolve:ssm:UnsecureSecretString:1}}'
S3Configuration:
BucketARN: arn:aws:s3:::foobar-bucket
BufferingHints:
IntervalInSeconds: 60
SizeInMBs: 1
CompressionFormat: GZIP
RoleARN: arn:aws:iam::123456789012:role/KinesisFirehose-S3Configuration-foobar
```
|
```c++
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements.  See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership.  The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License.  You may obtain a copy of the License at
//
//   path_to_url
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied.  See the License for the
// specific language governing permissions and limitations
// under the License.
#include "kudu/codegen/jit_wrapper.h"
#include <utility>
#include <llvm/ExecutionEngine/ExecutionEngine.h>
using llvm::ExecutionEngine;
using std::unique_ptr;
namespace kudu {
namespace codegen {
JITWrapper::JITWrapper(unique_ptr<JITCodeOwner> owner)
: owner_(std::move(owner)) {}
JITWrapper::~JITWrapper() {}
} // namespace codegen
} // namespace kudu
```
|
```cpp
/**
 * Tencent is pleased to support the open source community by making MSEC available.
 *
 * Copyright (C) 2016 THL A29 Limited, a Tencent company. All rights reserved.
 *
 * Licensed under the GNU General Public License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License. You may
 * obtain a copy of the License at
 *
 *     path_to_url
 *
 * Unless required by applicable law or agreed to in writing, software distributed under the
 * License is distributed on an "AS IS" basis, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
 * either express or implied. See the License for the specific language governing permissions
 * and limitations under the License.
 */
#ifndef _TBASE_TPROCMON_H_
#define _TBASE_TPROCMON_H_
#include <unistd.h>
//#include <assert.h>
#include <sys/msg.h>
#include <map>
#include <errno.h>
#include <string.h>
#include <stdio.h>
#include <time.h>
#include <signal.h>
#include <stdint.h>
#include "list.h"
#include "tstat.h"
#include "tlog.h"
#define MAX_FILEPATH_LEN 128 // maximum length of a file path
#define BUCKET_SIZE 10 // number of hash buckets for the process list
#define MAX_PROC_GROUP_NUM 256 // maximum number of process groups
#define MAX_MSG_BUFF 100 // maximum message payload size
#define MSG_EXPIRE_TIME 120 // message expiry time in seconds
#define MSG_VERSION 0x01 // message protocol version
#define MSG_ID_SERVER 0x01 // message type id for server-bound messages
#define MSG_SRC_SERVER 0x00 // message originated from the server
#define MSG_SRC_CLIENT 0x01 // message originated from a client
#define DEFAULT_MQ_KEY 0x800100 // default message queue key (8388864)
#define EXCEPTION_STARTBIT 20
#define EXCEPTION_TYPE(msg_srctype__) ((msg_srctype__) >> EXCEPTION_STARTBIT)
#define PROCMON_EVENT_PROCDEAD 1 // a process died
#define PROCMON_EVENT_OVERLOAD (1<<1) // process group overloaded
#define PROCMON_EVENT_LOWSRATE (1<<2) // low success rate
#define PROCMON_EVENT_LATENCY (1<<3) // high latency
#define PROCMON_EVENT_OTFMEM (1<<4) // memory usage over threshold
#define PROCMON_EVENT_PROCDOWN (1<<5) // process count scaled down
#define PROCMON_EVENT_PROCUP (1<<6) // process count scaled up
#define PROCMON_EVENT_FORKFAIL (1<<7) // fork failed
#define PROCMON_EVENT_MQKEYCONFIGERROR (1<<8) // message queue key misconfigured
#define PROCMON_EVENT_TASKPAUSE (1<<9) // task paused
#define PROCMON_EVENT_QSTATNULL (1<<10) // queue statistics unavailable
#define PROCMON_CMD_KILL 0x1 // kill a process
#define PROCMON_CMD_LOAD 0x2 // reload a process
#define PROCMON_CMD_FORK 0x4 // fork a new process
#define PROCMON_STATUS_OK 0x0 // process healthy
#define PROCMON_STATUS_OVERLOAD 1 // overloaded
#define PROCMON_STATUS_LOWSRATE (1<<1) // low success rate
#define PROCMON_STATUS_LATENCY (1<<2) // high latency
#define PROCMON_STATUS_OTFMEM (1<<3) // memory usage over threshold
#define PROC_RELOAD_NORMAL 0 // not reloading
#define PROC_RELOAD_START 1 // reload started, forking new processes
#define PROC_RELOAD_WAIT_NEW 2 // waiting for forked processes to come up
#define PROC_RELOAD_KILLOLD 3 // killing the old processes
#define PROC_RELOAD_CLEAR 4 // reload cleanup
using namespace spp::comm;
using namespace tbase::tlog;
namespace tbase
{
namespace tprocmon
{
////////////////////////////////////////////////////////////////////
typedef struct
{
int groupid_; // process group ID
time_t adjust_proc_time;
char basepath_[MAX_FILEPATH_LEN]; // working directory of the group
char exefile_[MAX_FILEPATH_LEN]; // executable file path
char etcfile_[MAX_FILEPATH_LEN]; // configuration file path
int exitsignal_; // signal used to kill processes in the group
unsigned maxprocnum_; // maximum number of processes
unsigned minprocnum_; // minimum number of processes
unsigned heartbeat_; // heartbeat timeout in seconds
char mapfile_[MAX_FILEPATH_LEN];
void* q_recv_pstat;
unsigned int type;
unsigned int sendkey;
unsigned int sendkeysize;
unsigned int recvkey;
unsigned int recvkeysize;
int mqkey;
char reserve_[8];
unsigned group_type_;
unsigned affinity_;
unsigned reload_;
time_t reload_time;
} TGroupInfo; // static configuration of a process group
typedef struct
{
int groupid_; // process group ID
int procid_; // process ID
time_t timestamp_; // last heartbeat timestamp
char reserve_[8]; // reserved; byte 0 carries the spp_handle_init result (>0 on failure)
} TProcInfo; // runtime info reported by a worker process
typedef struct
{
int groupid_; // process group ID
int procid_; // process ID
unsigned cmd_; // command ID
int arg1_; // command argument 1
int arg2_; // command argument 2
char reserve_[8];
} TProcEvent; // control event sent to a worker process
typedef struct
{
long msgtype_; // message type ID
long msglen_; // payload length
long srctype_; // 0 -- server, 1 -- client
time_t timestamp_; // send timestamp
char msgcontent_[MAX_MSG_BUFF]; // payload: a TProcInfo or a TProcEvent
} TProcMonMsg; // message exchanged over the monitor channel
/////////////////////////////////////////////////////////////////
// abstract communication channel between the monitor server and its clients
class CCommu
{
public:
CCommu() {}
virtual ~CCommu() {}
// initialize the channel
// args: implementation-specific initialization argument
// returns 0 on success
virtual int init(void* args) = 0;
// release channel resources
virtual void fini() = 0;
// receive one message
// msg: output buffer
// msgtype: message type to receive (default 1)
// returns the number of bytes received (>0) on success
virtual int recv(TProcMonMsg* msg, long msgtype = 1) = 0;
// send one message
// msg: message to send
// returns 0 on success
virtual int send(TProcMonMsg* msg) = 0;
};
// channel implementation backed by a System V message queue
class CMQCommu : public CCommu
{
public:
CMQCommu();
CMQCommu(int mqkey);
~CMQCommu();
int init(void* args);
void fini();
int recv(TProcMonMsg* msg, long msgtype = 0);
int send(TProcMonMsg* msg);
protected:
int mqid_; // queue identifier returned by msgget()
};
// ... other possible transports: udp, fifo, shared memory ...
//////////////////////////////////////////////////////////////////////
typedef struct
{
TGroupInfo groupinfo_; // group configuration
list_head_t bucket_[BUCKET_SIZE]; // hash buckets holding the group's processes
int curprocnum_; // current number of processes
int errprocnum_; // number of processes in an error state
} TProcGroupObj; // runtime state of a process group
typedef struct
{
TProcInfo procinfo_; // process info
int status_; // process status (PROCMON_STATUS_*)
list_head_t list_;
} TProcObj; // runtime state of a single process
////////////////////////////////////////////////////////////////////////
typedef void (*monsrv_cb)(const TGroupInfo* groupinfo /* group configuration */,
const TProcInfo* procinfo /* process info */,
int event /* PROCMON_EVENT_* bitmask */,
void* arg /* user-supplied argument */);
typedef struct
{
TProcGroupObj* group; // the queried group
TProcObj** proc; // processes of the group (NULL when none)
} TProcQueryObj;
//class tbase::tstat::CTStat;
class CTProcMonSrv
{
public:
CTProcMonSrv();
virtual ~CTProcMonSrv();
// set the communication channel
// commu: channel object, owned by the caller
void set_commu(CCommu* commu);
void set_tlog(CTLog* log);
// add a process group
// groupinfo: group configuration
// returns 0 on success
int add_group(const TGroupInfo* groupinfo);
// modify a process group
// groupid: group ID
// groupinfo: new group configuration
// returns 0 on success
int mod_group(int groupid, const TGroupInfo* groupinfo);
// main monitoring loop
void run();
// collect statistics as text
// buf: output buffer
// buf_len: in/out, buffer capacity on input, bytes written on output
void stat(char* buf, int* buf_len);
//add by jeremy
void stat(tbase::tstat::CTStat *stat);
// send signo to every monitored process
void killall(int signo);
// send signo to every process of one group
void kill_group(int grp_id, int signo);
// prepare a group for graceful reload
void pre_reload_group(int groupid, const TGroupInfo *groupinfo);
// query the state of all groups and their processes
void query(TProcQueryObj*& result, int& num);
// dump the pid list of every group into buf
int dump_pid_list(char* buf, int len);
protected:
CTLog* log_;
CCommu* commu_;
TProcGroupObj proc_groups_[MAX_PROC_GROUP_NUM];
int cur_group_;
TProcMonMsg msg_[2]; // 0 -- receive buffer, 1 -- send buffer
std::map<int,std::map<int,int> > reload_tag;
void reload_kill_group(int groupid, int signo);
bool reload_check_group_dead(int groupid);
void segv_wait(TProcInfo *procinfo);
void process_exception(void);
bool do_recv(long msgtype);
bool do_check();
TProcGroupObj* find_group(int groupid);
int add_proc(int groupid, const TProcInfo* procinfo);
TProcObj* find_proc(int groupid, int procid);
void del_proc(int groupid, int procid);
bool check_reload_old_proc(int groupid, int procid);
// overridable policy hooks
virtual void check_group(TGroupInfo* group, int curprocnum);
virtual bool check_proc(TGroupInfo* group, TProcInfo* proc);
virtual bool do_event(int event, void* arg1, void* arg2);
virtual void do_fork(const char* basepath, const char* exefile, const char* etcfile, int num, int groupid, unsigned mask);
virtual void do_kill(int procid, int signo = SIGKILL);
virtual void do_order(int groupid, int procid, int eventno, int cmd, int arg1 = 0, int arg2 = 0);
bool check_groupbusy(int groupid);
int set_affinity(const uint64_t mask);
};
/////////////////////////////////////////////////////////////////////////////
class CTProcMonCli
{
public:
CTProcMonCli();
virtual ~CTProcMonCli();
// set the communication channel
// commu: channel object, owned by the caller
void set_commu(CCommu* commu);
// report heartbeat and handle pending control events
void run();
TProcMonMsg msg_[2]; // 0 -- send buffer, 1 -- receive buffer
protected:
CCommu* commu_;
};
#define CLI_SEND_INFO(cli) ((TProcInfo*)(cli)->msg_[0].msgcontent_) // outgoing payload, viewed as TProcInfo
#define CLI_RECV_INFO(cli) ((TProcEvent*)(cli)->msg_[1].msgcontent_) // received payload, viewed as TProcEvent
} // namespace tprocmon
} // namespace tbase
#endif
```
|
```java
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 *
 * path_to_url
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.shardingsphere.example.generator.core.yaml.config;
import com.google.common.base.Preconditions;
import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.Set;
/**
* Example configuration validator.
*/
public final class YamlExampleConfigurationValidator {
/**
* Validate the example configuration.
*
* @param config YAML example configuration
*/
public static void validate(final YamlExampleConfiguration config) {
Map<String, List<String>> configMap = new HashMap<>(5, 1);
configMap.put("modes", config.getModes());
configMap.put("transactions", config.getTransactions());
configMap.put("features", config.getFeatures());
configMap.put("frameworks", config.getFrameworks());
validateConfigurationValues(configMap);
validateAccountConfigProperties(config.getProps());
}
private static void validateConfigurationValues(final Map<String, List<String>> configMap) {
configMap.forEach((key, value) -> {
YamlExampleConfigurationSupportedValue supportedValueEnum = YamlExampleConfigurationSupportedValue.of(key);
Set<String> supportedValues = supportedValueEnum.getSupportedValues();
value.forEach(each -> Preconditions.checkArgument(supportedValues.contains(each), getConfigValueErrorMessage(key, supportedValues, each)));
});
}
private static void validateAccountConfigProperties(final Properties props) {
Collection<String> accountConfigItems = Arrays.asList("host", "port", "username", "password");
accountConfigItems.forEach(each -> Preconditions.checkArgument(null != props.get(each), getConfigItemErrorMessage(each)));
}
private static String getConfigValueErrorMessage(final String configItem, final Set<String> supportedValues, final String errorValue) {
return "Example configuration (in config.yaml) error in \"" + configItem + "\": it only supports " + supportedValues + ", but the configured value is " + errorValue;
}
private static String getConfigItemErrorMessage(final String configItem) {
return "Example configuration (in config.yaml) error in \"" + configItem + "\": the configuration item is missing or its value is null";
}
}
```