```python
# -*- coding: utf-8 -*-
"""
oauthlib.oauth1
~~~~~~~~~~~~~~
This module is a wrapper for the most recent implementation of OAuth 1.0 Client
and Server classes.
"""
from __future__ import absolute_import, unicode_literals
from .rfc5849 import Client
from .rfc5849 import SIGNATURE_HMAC, SIGNATURE_HMAC_SHA1, SIGNATURE_HMAC_SHA256, SIGNATURE_RSA, SIGNATURE_PLAINTEXT
from .rfc5849 import SIGNATURE_TYPE_AUTH_HEADER, SIGNATURE_TYPE_QUERY
from .rfc5849 import SIGNATURE_TYPE_BODY
from .rfc5849.request_validator import RequestValidator
from .rfc5849.endpoints import RequestTokenEndpoint, AuthorizationEndpoint
from .rfc5849.endpoints import AccessTokenEndpoint, ResourceEndpoint
from .rfc5849.endpoints import SignatureOnlyEndpoint, WebApplicationServer
from .rfc5849.errors import InsecureTransportError, InvalidClientError, InvalidRequestError, InvalidSignatureMethodError, OAuth1Error
```
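The re-exported SIGNATURE_HMAC constant selects HMAC-SHA1 request signing as defined in RFC 5849. As a rough sketch of what that entails, the signature base string and HMAC-SHA1 signature can be computed with only the standard library. The helper names below are illustrative, not oauthlib's internal API:

```python
# Illustrative sketch of RFC 5849 HMAC-SHA1 signing (the mechanism behind
# SIGNATURE_HMAC); helper names here are hypothetical, not oauthlib's own.
import base64
import hashlib
import hmac
import urllib.parse


def _pct(s):
    # RFC 5849 percent-encoding: only A-Z, a-z, 0-9, '-', '.', '_', '~' are safe
    return urllib.parse.quote(s, safe="~")


def signature_base_string(method, base_url, params):
    # Percent-encode, sort, and join the request parameters, then join
    # method, base URL, and normalized parameters with '&'
    pairs = sorted((_pct(k), _pct(v)) for k, v in params.items())
    normalized = "&".join(f"{k}={v}" for k, v in pairs)
    return "&".join([method.upper(), _pct(base_url), _pct(normalized)])


def hmac_sha1_signature(base_string, client_secret, token_secret=""):
    # Signing key is "client_secret&token_secret", both percent-encoded
    key = f"{_pct(client_secret)}&{_pct(token_secret)}".encode()
    digest = hmac.new(key, base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()


base = signature_base_string("get", "http://example.com/r", {"a": "1"})
print(base)  # GET&http%3A%2F%2Fexample.com%2Fr&a%3D1
```

In practice the Client class collects the oauth_* parameters (nonce, timestamp, consumer key) and performs the signing itself; the sketch only shows the core string construction.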
|
Siyavashabad-e Chendar (also Romanized as Sīyāvashābād-e Chendār) is a village in Doab Rural District, Bazoft District, Kuhrang County, Chaharmahal and Bakhtiari Province, Iran. At the 2006 census, its population was 169, in 30 families. The village is populated by Lurs.
References
Populated places in Kuhrang County
Luri settlements in Chaharmahal and Bakhtiari Province
|
Muckalee/Ballyfoyle Rangers GAA was a Gaelic Athletic Association club located in the Muckalee and Ballyfoyle areas of County Kilkenny, Ireland. The club was primarily concerned with the game of hurling.
History
The Muckalee/Ballyfoyle Rangers club was formed in 1971. The new club claimed the Kilkenny JHC title after a defeat of Mullinavat in 1974. It gained a second promotion just a year later when senior status was secured by winning the Kilkenny IHC. Muckalee/Ballyfoyle Rangers' time in the top flight saw them lose the 1980 SHC final to Ballyhale Shamrocks. Muckalee/Ballyfoyle Rangers amalgamated with the Coon club in 1982, resulting in the creation of the St. Martin's club.
Honours
Kilkenny Intermediate Hurling Championship (1): 1975
Kilkenny Junior Hurling Championship (1): 1974
Northern Kilkenny Junior Hurling Championship (1): 1974
Notable players
Patsy Moran: All-Ireland SHC-winner (1979)
References
Gaelic games clubs in County Kilkenny
Hurling clubs in County Kilkenny
Defunct Gaelic games clubs
|
Kargach is a rural locality (a village) in Yugskoye Rural Settlement, Cherepovetsky District, Vologda Oblast, Russia. The population was 24 as of 2002.
Geography
Kargach is located southeast of Cherepovets (the district's administrative centre) by road. Seltso-Ryabovo is the nearest rural locality.
References
Rural localities in Cherepovetsky District
|
Zhang Deguang (simplified Chinese: 张德广; born 10 February 1941) is a Chinese diplomat who served as Executive Secretary of the Shanghai Cooperation Organisation from 2004 to 2006.
Biography
Zhang was born in Jining, in Shandong Province, China, the homeland of the Chinese philosophers Confucius and Mencius. In 1965 he graduated from the Beijing Institute of Foreign Languages, Faculty of Russian literature. He speaks Chinese, Russian and English.
In 1965 he joined the Ministry of Foreign Affairs of the People's Republic of China, where he held the following posts:
1965-1973 Employee, translation branch, Ministry of Foreign Affairs of the People's Republic of China
1973-1977 Attaché, Embassy of the People's Republic of China in the USSR
1977-1987 Second Secretary, First Secretary, deputy director of Chancery, Sino-Russian Negotiations, Department of USSR and European affairs, Ministry of Foreign Affairs, People's Republic of China
1987-1992 Counselor, Embassy of the People's Republic of China in the United States
1992-1993 Ambassador Extraordinary and Plenipotentiary of the People's Republic of China to the Republic of Kazakhstan
1993-1995 Head of the Department of Eastern Europe and Central Asia, Ministry of Foreign Affairs, People's Republic of China
1995-2001 Deputy Minister of Foreign Affairs, People's Republic of China
2001-2003 Ambassador Extraordinary and Plenipotentiary of the People's Republic of China to Russia
On 29 May 2003, at the meeting of Heads of SCO member states, he was appointed Executive Secretary of the Secretariat of the Shanghai Cooperation Organisation. On 15 January 2004, he took up his duties at this post.
In December 1999 President of the Russian Federation Boris Yeltsin awarded Zhang Deguang the "Friendship Order". In December 2001 President of Kazakhstan Nursultan Nazarbayev awarded him the First Grade Friendship Order of Kazakhstan. In February 2003 he was elected an Academician of the Russian Academy of Social Sciences. In April 2003 he received an honorary doctorate from the Institute of Far Eastern Studies and the First Grade Medal of the Sino-Russian Friendship Society. In October 2003 President of the Russian Federation Vladimir Putin presented him with a Commemorative Diploma for special contribution to strengthening Sino-Russian friendship. In March 2004 President Putin awarded Zhang Deguang a Commemorative Medal on the occasion of the 300th anniversary of Saint Petersburg.
External links
Zhang Deguang's short biography, SCO website
People's Republic of China politicians from Shandong
Living people
1941 births
Politicians from Jining
Diplomats of the People's Republic of China
Ambassadors of China to Kazakhstan
Ambassadors of China to Russia
|
```sql
select 1 as id
union all
select * from {{ ref('node_0') }}
union all
select * from {{ ref('node_3') }}
union all
select * from {{ ref('node_6') }}
union all
select * from {{ ref('node_8') }}
union all
select * from {{ ref('node_10') }}
union all
select * from {{ ref('node_109') }}
union all
select * from {{ ref('node_1641') }}
```
|
Iris spuria subsp. demetrii is a species of the genus Iris, part of a subgenus series known as Iris subg. Limniris and in the series Iris ser. Spuriae. It is a subspecies of Iris spuria, a rhizomatous perennial plant, from the Caucasus region, with blue-violet flowers. It is commonly known as Dimitry iris in Russia. It is cultivated as an ornamental plant in temperate regions.
Description
The iris is very similar in form to Iris notha, another spuria Iris from the Caucasus region. Both dislike wet soils.
It has a rhizome which has not been generally described.
It has stiff, dark green leaves that can grow up to between long. They are narrower than Iris spuria subsp. carthaliniae, (10–18 mm wide).
It has a stiff stem that can grow up to between long.
It has dark green, compact, slightly inflated, spathes (leaves of the flower bud).
The stems hold between 2–5 terminal (top of stem) flowers, in late spring.
The flowers come in shades of blue, from dark blue, to blue-violet.
It has two pairs of petals: 3 large sepals (outer petals), known as the 'falls', and 3 smaller inner petals (or tepals), known as the 'standards'. The narrow falls have a blade that is shorter than the claw (the section of the petal closest to the stem). The petals are veined with darker colours or white.
The capsules and seeds produced by the plant after flowering, have not been generally described.
Biochemistry
As most irises are diploid, having two sets of chromosomes, chromosome counts can be used to identify hybrids and classify groupings.
It has a chromosome count of 2n=38, counted by O.I. Zakharyeva and L.M. Makushenko in 1969.
Taxonomy
It is commonly known as Dimitry iris in Russia.
It is known as Iris Demetriou in Czechoslovakia.
It is unknown what the Latin specific epithet demetrii refers to, but an insect (a beetle), Chioneosoma demetrii, shares the same epithet.
It was originally published and described by Agazi Asaturovich Achverdov and Nina Vasilevna Mirzoeva as Iris demetrii in Transactions of the Botanical Institute of the Academy of Sciences of the Armenian SSR (Trudy Bot. Inst. Akad. Nauk Armyansk) Vol. 7 page 27, in 1950. The name was identical to Iris prilipkoana, which had not been officially described; Iris prilipkoana was later classified as a synonym of Iris spuria subsp. demetrii.
Later, in 1981, Brian Mathew re-classified the species as a subspecies of Iris spuria, publishing it as Iris spuria subsp. demetrii (Fomin) B.Mathew in his book The Iris, on page 117.
It was verified by United States Department of Agriculture Agricultural Research Service on 9 January 2003 and then updated on 1 March 2007.
Iris spuria subsp. demetrii is a tentatively accepted name by the RHS.
Distribution and habitat
It is native to temperate regions of Asia.
Range
It is found in the Transcaucasia regions, of Armenia, and Azerbaijan.
In Armenia, it is found in Zangezur.
Habitat
Similar to Iris notha, it grows on dry slopes in the foothills and mountains of Azerbaijan and Armenia.
It has been found at altitudes of 2000 m above sea level.
Conservation
The wide distribution of the species within Armenia has helped the plant survive various threats, including being picked for flower bouquets.
It was listed in the 1st edition of the Red Data Book of Armenia, under Iris prilipkoana (a synonym of Iris spuria subsp. demetrii) as 'Near Threatened' (NT). It was also listed in the Azerbaijan Red Data Book.
It is not included in the Annexes of CITES and the Bern Convention.
Cultivation
It prefers rich, well-drained soils, including clay soils, and dislikes wet soils.
It also prefers positions in full sun or part shade.
It can be susceptible to mustard-seed fungus.
Hybrids and cultivars
Because of its tolerance of dry soils, it is of interest to iris plant breeders.
References
Sources
Czerepanov, S. K. 1995. Vascular plants of Russia and adjacent states (the former USSR). [as I. prilipkoana Kem.-Nath.]
Mathew, B. 1981. The Iris. 117.
External links
spuria subsp. demetrii
Plants described in 1981
Flora of Azerbaijan
Flora of Armenia
Plant subspecies
|
```php
<?php
namespace Hamcrest\Type;
use Hamcrest\Core\IsTypeOf;
/**
* Tests whether the value is an integer.
*/
class IsInteger extends IsTypeOf
{
/**
* Creates a new instance of IsInteger
*/
public function __construct()
{
parent::__construct('integer');
}
/**
* Is the value an integer?
*
* @factory intValue
*/
public static function integerValue()
{
return new self;
}
}
```
|
In fluid mechanics, external flow is a flow in which boundary layers develop freely, without constraints imposed by adjacent surfaces. It can be defined as the flow of a fluid around a body that is completely submerged in it. Examples include fluid motion over a flat plate (inclined or parallel to the free stream velocity) and flow over curved surfaces such as a sphere, cylinder, airfoil, or turbine blade, as well as water flowing around submarines and air flowing around a truck; a 2000 paper analyzing the latter used computational fluid dynamics to model the three-dimensional flow structure and pressure distribution on the external surface of the truck. In a 2008 paper, external flow was described as "arguably the most common and best studied case in soft matter systems".
The term can also be used simply to describe flow in any body of fluid external to the system under consideration.
In external co-flow, fluid in the external region occurs in the same direction as flow within the system of interest; this contrasts with external counterflow.
References
Aerodynamics
Flow regimes
|
```yaml
define: DUK_USE_32BIT_PTRS
introduced: 1.0.0
removed: 1.4.0
default: false
tags:
- portability
description: >
Pointers are 32-bit integer compatible.
```
|
```c
/*
*
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modification, are
* permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this list of
* conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice, this list of
* conditions and the following disclaimer in the documentation and/or other materials provided
* with the distribution.
*
* 3. Neither the name of the copyright holder nor the names of its contributors may be used to
* endorse or promote products derived from this software without specific prior written
* permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS
* OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
* COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
* GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
* AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
* OF THE POSSIBILITY OF SUCH DAMAGE.
*/
int main() {
int i;
int res = 0;
goto L1;
for (i = 0; i < 10; i++) {
L2:
res++;
goto L3;
return 42;
L3:
if (i > 100) {
L1:
i = 0;
goto L2;
break;
}
}
return res;
}
```
|
```c
/*
* XZ decompressor
*
* Authors: Lasse Collin <lasse.collin@tukaani.org>
* Igor Pavlov <path_to_url
*
* This file has been put into the public domain.
* You can do whatever you want with this file.
*/
#ifndef XZ_H
#define XZ_H
#ifdef __KERNEL__
# include <linux/stddef.h>
# include <linux/types.h>
#else
# include <stddef.h>
# include <stdint.h>
#endif
#ifdef __cplusplus
extern "C" {
#endif
/* In Linux, this is used to make extern functions static when needed. */
#ifndef XZ_EXTERN
# define XZ_EXTERN extern
#endif
/**
* enum xz_mode - Operation mode
*
 * @XZ_SINGLE: Single-call mode. This uses less RAM than
 * multi-call modes, because the LZMA2
* dictionary doesn't need to be allocated as
* part of the decoder state. All required data
* structures are allocated at initialization,
* so xz_dec_run() cannot return XZ_MEM_ERROR.
* @XZ_PREALLOC: Multi-call mode with preallocated LZMA2
* dictionary buffer. All data structures are
* allocated at initialization, so xz_dec_run()
* cannot return XZ_MEM_ERROR.
* @XZ_DYNALLOC: Multi-call mode. The LZMA2 dictionary is
* allocated once the required size has been
* parsed from the stream headers. If the
* allocation fails, xz_dec_run() will return
* XZ_MEM_ERROR.
*
* It is possible to enable support only for a subset of the above
* modes at compile time by defining XZ_DEC_SINGLE, XZ_DEC_PREALLOC,
* or XZ_DEC_DYNALLOC. The xz_dec kernel module is always compiled
* with support for all operation modes, but the preboot code may
* be built with fewer features to minimize code size.
*/
enum xz_mode {
XZ_SINGLE,
XZ_PREALLOC,
XZ_DYNALLOC
};
/**
* enum xz_ret - Return codes
* @XZ_OK: Everything is OK so far. More input or more
* output space is required to continue. This
* return code is possible only in multi-call mode
* (XZ_PREALLOC or XZ_DYNALLOC).
* @XZ_STREAM_END: Operation finished successfully.
* @XZ_UNSUPPORTED_CHECK: Integrity check type is not supported. Decoding
* is still possible in multi-call mode by simply
* calling xz_dec_run() again.
* Note that this return value is used only if
* XZ_DEC_ANY_CHECK was defined at build time,
* which is not used in the kernel. Unsupported
* check types return XZ_OPTIONS_ERROR if
* XZ_DEC_ANY_CHECK was not defined at build time.
* @XZ_MEM_ERROR: Allocating memory failed. This return code is
* possible only if the decoder was initialized
* with XZ_DYNALLOC. The amount of memory that was
* tried to be allocated was no more than the
* dict_max argument given to xz_dec_init().
* @XZ_MEMLIMIT_ERROR: A bigger LZMA2 dictionary would be needed than
* allowed by the dict_max argument given to
* xz_dec_init(). This return value is possible
* only in multi-call mode (XZ_PREALLOC or
* XZ_DYNALLOC); the single-call mode (XZ_SINGLE)
* ignores the dict_max argument.
* @XZ_FORMAT_ERROR: File format was not recognized (wrong magic
* bytes).
* @XZ_OPTIONS_ERROR: This implementation doesn't support the requested
* compression options. In the decoder this means
* that the header CRC32 matches, but the header
* itself specifies something that we don't support.
* @XZ_DATA_ERROR: Compressed data is corrupt.
* @XZ_BUF_ERROR: Cannot make any progress. Details are slightly
* different between multi-call and single-call
* mode; more information below.
*
* In multi-call mode, XZ_BUF_ERROR is returned when two consecutive calls
* to XZ code cannot consume any input and cannot produce any new output.
* This happens when there is no new input available, or the output buffer
* is full while at least one output byte is still pending. Assuming your
* code is not buggy, you can get this error only when decoding a compressed
* stream that is truncated or otherwise corrupt.
*
* In single-call mode, XZ_BUF_ERROR is returned only when the output buffer
* is too small or the compressed input is corrupt in a way that makes the
* decoder produce more output than the caller expected. When it is
* (relatively) clear that the compressed input is truncated, XZ_DATA_ERROR
* is used instead of XZ_BUF_ERROR.
*/
enum xz_ret {
XZ_OK,
XZ_STREAM_END,
XZ_UNSUPPORTED_CHECK,
XZ_MEM_ERROR,
XZ_MEMLIMIT_ERROR,
XZ_FORMAT_ERROR,
XZ_OPTIONS_ERROR,
XZ_DATA_ERROR,
XZ_BUF_ERROR
};
/**
* struct xz_buf - Passing input and output buffers to XZ code
* @in: Beginning of the input buffer. This may be NULL if and only
* if in_pos is equal to in_size.
* @in_pos: Current position in the input buffer. This must not exceed
* in_size.
* @in_size: Size of the input buffer
* @out: Beginning of the output buffer. This may be NULL if and only
* if out_pos is equal to out_size.
* @out_pos: Current position in the output buffer. This must not exceed
* out_size.
* @out_size: Size of the output buffer
*
* Only the contents of the output buffer from out[out_pos] onward, and
* the variables in_pos and out_pos are modified by the XZ code.
*/
struct xz_buf {
const uint8_t *in;
size_t in_pos;
size_t in_size;
uint8_t *out;
size_t out_pos;
size_t out_size;
};
/**
* struct xz_dec - Opaque type to hold the XZ decoder state
*/
struct xz_dec;
/**
* xz_dec_init() - Allocate and initialize a XZ decoder state
* @mode: Operation mode
* @dict_max: Maximum size of the LZMA2 dictionary (history buffer) for
* multi-call decoding. This is ignored in single-call mode
* (mode == XZ_SINGLE). LZMA2 dictionary is always 2^n bytes
* or 2^n + 2^(n-1) bytes (the latter sizes are less common
* in practice), so other values for dict_max don't make sense.
* In the kernel, dictionary sizes of 64 KiB, 128 KiB, 256 KiB,
* 512 KiB, and 1 MiB are probably the only reasonable values,
* except for kernel and initramfs images where a bigger
* dictionary can be fine and useful.
*
* Single-call mode (XZ_SINGLE): xz_dec_run() decodes the whole stream at
* once. The caller must provide enough output space or the decoding will
* fail. The output space is used as the dictionary buffer, which is why
* there is no need to allocate the dictionary as part of the decoder's
* internal state.
*
* Because the output buffer is used as the workspace, streams encoded using
* a big dictionary are not a problem in single-call mode. It is enough that
* the output buffer is big enough to hold the actual uncompressed data; it
* can be smaller than the dictionary size stored in the stream headers.
*
* Multi-call mode with preallocated dictionary (XZ_PREALLOC): dict_max bytes
* of memory is preallocated for the LZMA2 dictionary. This way there is no
* risk that xz_dec_run() could run out of memory, since xz_dec_run() will
* never allocate any memory. Instead, if the preallocated dictionary is too
* small for decoding the given input stream, xz_dec_run() will return
* XZ_MEMLIMIT_ERROR. Thus, it is important to know what kind of data will be
* decoded to avoid allocating excessive amount of memory for the dictionary.
*
* Multi-call mode with dynamically allocated dictionary (XZ_DYNALLOC):
* dict_max specifies the maximum allowed dictionary size that xz_dec_run()
* may allocate once it has parsed the dictionary size from the stream
* headers. This way excessive allocations can be avoided while still
* limiting the maximum memory usage to a sane value to prevent running the
* system out of memory when decompressing streams from untrusted sources.
*
* On success, xz_dec_init() returns a pointer to struct xz_dec, which is
* ready to be used with xz_dec_run(). If memory allocation fails,
* xz_dec_init() returns NULL.
*/
XZ_EXTERN struct xz_dec *xz_dec_init(enum xz_mode mode, uint32_t dict_max);
/**
* xz_dec_run() - Run the XZ decoder
* @s: Decoder state allocated using xz_dec_init()
* @b: Input and output buffers
*
* The possible return values depend on build options and operation mode.
* See enum xz_ret for details.
*
* Note that if an error occurs in single-call mode (return value is not
* XZ_STREAM_END), b->in_pos and b->out_pos are not modified and the
* contents of the output buffer from b->out[b->out_pos] onward are
* undefined. This is true even after XZ_BUF_ERROR, because with some filter
* chains, there may be a second pass over the output buffer, and this pass
* cannot be properly done if the output buffer is truncated. Thus, you
* cannot give the single-call decoder a too small buffer and then expect to
* get that amount valid data from the beginning of the stream. You must use
* the multi-call decoder if you don't want to uncompress the whole stream.
*/
XZ_EXTERN enum xz_ret xz_dec_run(struct xz_dec *s, struct xz_buf *b);
/**
* xz_dec_reset() - Reset an already allocated decoder state
* @s: Decoder state allocated using xz_dec_init()
*
* This function can be used to reset the multi-call decoder state without
* freeing and reallocating memory with xz_dec_end() and xz_dec_init().
*
* In single-call mode, xz_dec_reset() is always called in the beginning of
* xz_dec_run(). Thus, explicit call to xz_dec_reset() is useful only in
* multi-call mode.
*/
XZ_EXTERN void xz_dec_reset(struct xz_dec *s);
/**
* xz_dec_end() - Free the memory allocated for the decoder state
* @s: Decoder state allocated using xz_dec_init(). If s is NULL,
* this function does nothing.
*/
XZ_EXTERN void xz_dec_end(struct xz_dec *s);
/*
* Standalone build (userspace build or in-kernel build for boot time use)
* needs a CRC32 implementation. For normal in-kernel use, kernel's own
* CRC32 module is used instead, and users of this module don't need to
* care about the functions below.
*/
#ifndef XZ_INTERNAL_CRC32
# ifdef __KERNEL__
# define XZ_INTERNAL_CRC32 0
# else
# define XZ_INTERNAL_CRC32 1
# endif
#endif
/*
* If CRC64 support has been enabled with XZ_USE_CRC64, a CRC64
* implementation is needed too.
*/
#ifndef XZ_USE_CRC64
# undef XZ_INTERNAL_CRC64
# define XZ_INTERNAL_CRC64 0
#endif
#ifndef XZ_INTERNAL_CRC64
# ifdef __KERNEL__
# error Using CRC64 in the kernel has not been implemented.
# else
# define XZ_INTERNAL_CRC64 1
# endif
#endif
#if XZ_INTERNAL_CRC32
/*
* This must be called before any other xz_* function to initialize
* the CRC32 lookup table.
*/
XZ_EXTERN void xz_crc32_init(void);
/*
* Update CRC32 value using the polynomial from IEEE-802.3. To start a new
* calculation, the third argument must be zero. To continue the calculation,
* the previously returned value is passed as the third argument.
*/
XZ_EXTERN uint32_t xz_crc32(const uint8_t *buf, size_t size, uint32_t crc);
#endif
#if XZ_INTERNAL_CRC64
/*
* This must be called before any other xz_* function (except xz_crc32_init())
* to initialize the CRC64 lookup table.
*/
XZ_EXTERN void xz_crc64_init(void);
/*
* Update CRC64 value using the polynomial from ECMA-182. To start a new
* calculation, the third argument must be zero. To continue the calculation,
* the previously returned value is passed as the third argument.
*/
XZ_EXTERN uint64_t xz_crc64(const uint8_t *buf, size_t size, uint64_t crc);
#endif
#ifdef __cplusplus
}
#endif
#endif
```
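The multi-call contract documented above (feed bounded input, drain bounded output, repeat until end of stream) can be illustrated with Python's standard-library lzma module, which exposes an analogous incremental XZ decoder. This is a sketch of the calling pattern, not the C API itself:

```python
# Incremental ("multi-call"-style) XZ decompression with Python's stdlib lzma,
# mirroring the xz_dec_run() loop: feed small input chunks and collect output
# until the decoder reports end-of-stream (analogous to XZ_STREAM_END).
import lzma


def decompress_multicall(data: bytes, chunk: int = 4) -> bytes:
    dec = lzma.LZMADecompressor(format=lzma.FORMAT_XZ)
    out = []
    for i in range(0, len(data), chunk):
        # Each call may produce no output yet, like XZ_OK in multi-call mode
        out.append(dec.decompress(data[i:i + chunk]))
    if not dec.eof:
        # Input ran out before the stream ended, roughly XZ_BUF_ERROR territory
        raise ValueError("truncated XZ stream")
    return b"".join(out)


compressed = lzma.compress(b"hello, xz", format=lzma.FORMAT_XZ)
print(decompress_multicall(compressed))  # b'hello, xz'
```

The chunk size is deliberately tiny to exercise the incremental path; real callers would use buffer sizes matching their I/O.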
|
1st & Ten is an American sitcom that aired between December 1984 and January 1991 on the cable television network HBO. Featuring series regulars Delta Burke and veteran Reid Shelton, it was one of cable's first attempts to lure the lucrative sitcom audience away from the then-dominant "Big Three" broadcast television networks, by taking advantage of their freedom to include occasional profanity and nudity.
Plot
The sports-themed series follows the on- and off-field antics of the fictional American football team, the California Bulls. The team changed owners throughout the series' history, with the recurring premise that a woman is in charge.
During the first season Diane Barrow (Delta Burke) becomes the owner of her ex-husband's team as part of a divorce settlement, after he has an affair with the team's tight end. She quickly learns the ups and downs of pro football. In one episode, she is forced to coach the team herself after the head coach, Ernie Denardo, is placed in the hospital. She also has constant battles with her General Manager/husband's nephew, who has dealings with the local mob, and fights off advances made by her quarterback (played by Geoffrey Scott).
The second season dealt with two themes: training camp and the playoffs. Barrow was dealing with her players taking recreational drugs during training camp. During this season, O. J. Simpson joined the cast as T.D. Parker, a veteran running back who is forced to make the transition from player to coach. Two real-life football stars made cameo appearances: Marcus Allen portrayed a rookie who was taking over T.D.'s spot on the team, and Vince Ferragamo played "Mainstreet" Manneti, a veteran quarterback. Jason Beghe joined the cast to play Tom Yinessa, a walk-on quarterback who deals with his overnight celebrity.
Delta Burke left the show midway through the third season, after committing herself exclusively to CBS' Designing Women, which she had begun starring on in 1986, and which was renewed. Diane loses control of the Bulls to Teddy Schraeder, her former lover, who manipulates everyone to his own ends. His antics include having T.D. fire Ernie as coach, letting Yinessa practice without a contract, and ignoring steroid use. Legal issues force him to leave the country and turn control over to his daughter, played by Leah Ayres.
Season 4 was briefly renamed 1st and Ten: The Bulls Mean Business. Shanna Reed joins the cast as the team's new female president, representing the new owners, the Dodds Corporation. Her attempts to innovate include bringing a female soccer player in to kick, and signing an Olympic sprinter as wide receiver. Joe Namath has a cameo appearance. Shannon Tweed would replace her in Season 5, and remain with the show to the end. The show was renamed 1st and Ten: Do it Again for the fifth season. The final season was 1st and Ten: In Your Face.
Series themes
The Bulls somehow manage to make it to the championship football game, yet lose in a controversial, heartbreaking manner.
Mad Dog and Dr. Death haze the rookies and rally the defense.
Bubba and Jethro help each other with their various (often sex-related) mishaps. Bubba's voracious appetite is also a running gag.
The volatile ownership position of the franchise.
Controversial aspects of professional sports in the late 1980s: steroids, the instant replay, women in the locker room, the role of free agency, multi-sport stars, endorsements.
Game footage
Footage was used from the USFL's Los Angeles Express. During simulated game shots, the Bulls football helmet has a decal of horns on the side. When the show uses actual game footage, the letters "L" and "A" are clearly visible on the helmet's side, representing the L.A. Express. The Bulls quarterbacks wore #14 to match the actual game footage of real-life L.A. Express quarterback Tom Ramsey. Many generic shots of USFL stadiums were used to depict where the Bulls were playing. As the series went on, aerial shots of the Los Angeles Memorial Coliseum were used to represent the Bulls' home stadium. Game footage from the USFL stopped midway through the third season, as scripted football plays were used instead, and the USFL had ceased operations by that point.
At one point, Denardo suggests trading for a running back. He mentions the Bulls from "that other league." He was talking about the Jacksonville Bulls from the United States Football League.
Characters
Only Donald Gibb, Cliff Frazier, Prince Hughes and Reid Shelton appeared in all six seasons. John Kassir and O. J. Simpson joined the cast in the second season and stayed until the show's end.
Main
Delta Burke as Diane Barrow (seasons 1-3)
Jason Beghe as Tom Yinessa
Cliff Frazier as Jethro Snell
Donald Gibb as Leslie "Dr. Death" Krunchner
Prince Hughes as Buford "Bubba" Kincaid
Stan Kamber as Coach Grier
Shannon Tweed as Kristy Fulbright
John Kassir as Zagreb Shkenusky
Tommy 'Tiny' Lister as Otis
Tony Longo as Mad Dog Smears
Michael Toland as Billy Cooper
O. J. Simpson as T.D. Parker
Leah Ayres as Jill Schrader
Guest Stars
Mariann Aalda as Ellen
Robert Costanzo as Jake
Alexa Hamilton as Kay
Liam Sullivan as Doctor
Special Guest Stars
Roy Thinnes as Teddy Schrader
Mark Lonow as Max Green
John Matuszak as John Manzak
Keith Amos as "Miracle Miles" Coolidge
Reid Shelton as Coach Ernie Denardo
Shanna Reed as Gail
Jim Antonio as Dr. Doc Phillips
Sam J. Jones as Johnny Valentine
Jeff Hochendoner as Elvin Putts
Jeff Kaake as Jamie Waldren
Jay Kerr as Mac Daniels
Clayton Landey as Roger Barrow
Ruta Lee as Rona Gold
John Benjamin Martin as Deacon
Christopher Meloni as Johnny Gunn
Sam Scarber as Carl Witherspoon
Geoffrey Scott as Bob Dorsey
Tricia Pettitt as Cheerleader
Ron Shipp as Police officer
Vince Ferragamo as Joe "Mainstreet" Manneti
Lawrence Taylor as 'Tombstone' Packer
Marshall R. Teague as Mace Petty
Marcus Allen as Rick Lambert
Michael Toland as Billy Cooper
Arthur Avant as Bulls lineman
A. J. DiSpirito as Bulls wide receiver
Episodes
Season 1: 1984–85
Season 2 (The Championship, 1986–87)
Season 3 (Going for Broke, 1987)
Season 4 (The Bulls Mean Business, 1988–89)
Season 5 (Do It Again, 1989–90)
Season 6 (1990–91)
Syndication and home media
At the height of the O. J. Simpson murder case, the show made its way to syndicated reruns. The complete series was released on DVD on January 24, 2006.
The original HBO versions ran for 30 minutes, while the edited-for-syndication versions ran for 22 minutes, and had some dialog and scenes edited for content, as well as the addition of a laugh-track. The majority of episodes on the "Complete Collection" DVD are the syndicated versions.
The original opening credits showed former professional football player Fran Tarkenton introducing the players and the plot points at the beginning of each episode. Completely different closing credits were originally used, too, rolling over scenes from the episode. In syndication, these were replaced, respectively, with later opening credits featuring Miracle Miles Coolidge (even though he did not join the cast until the last season) and a generic "Copyright 1991" disclaimer on a blue background.
In popular culture
Outtake promos for the "championship" season with OJ and Marcus Allen were featured in the 2016 Oscar-winning documentary OJ: Made in America.
References
External links
1984 American television series debuts
1991 American television series endings
1980s American sitcoms
1990s American sitcoms
American football television series
English-language television shows
HBO original programming
Television shows set in California
Television series by The Kushner-Locke Company
|
```typescript
/*
* @license Apache-2.0
*
*
*
* path_to_url
*
* Unless required by applicable law or agreed to in writing, software
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
*/
// TypeScript Version: 4.1
/**
* Returns an example associated with a specified alias.
*
* @param alias - alias
* @returns example
*
* @example
* var out = example( 'base.sin' );
*/
declare function example( alias: string ): string | null;
// EXPORTS //
export = example;
```
|
The Whole Earth Telescope is an international network of astronomers that collaborate to study variable stars. The distribution of the observatories in longitude allow the selected targets to be continuously monitored despite the rotation of the Earth.
History
This concept was devised by American astronomers R. Edward Nather and Don E. Winget of the University of Texas at Austin. The consortium consists of individual astronomers interested in collaborating to study targets designated by a principal investigator. Where colleagues are not available, astronomers are dispatched to sites that allow telescope time to visitors. Initial funding for WET came from a grant by the US National Science Foundation, which lasted through 1998.
For each site, an observing run begins when the sky is dark, and continues until stopped by weather or dawn. A photometer is used to observe the target object, a nearby comparison star, and the background sky. The data is then sent to the control center. Each site in turn takes up an overlapping observation run, so the result is, ideally, a continuous sequence of data that can then be processed. After constructing a light curve, the data is subject to a Fourier transform to obtain the frequencies of pulsation. Referred to as an XCov, the typical observing run with the WET lasts from 10 to 14 days, and is scheduled for once or twice a year.
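The Fourier-transform step described above can be sketched in a few lines. This is an illustrative toy, not WET's actual pipeline: the cadence, duration, and pulsation frequency below are invented for the example, and real light curves have gaps and noise that require more careful spectral analysis.

```python
import numpy as np

# Synthetic, evenly sampled light curve: one day of continuous coverage.
cadence = 10.0                      # seconds between photometric samples (invented)
t = np.arange(0, 86400, cadence)
f_true = 0.004                      # pulsation frequency in Hz, ~250 s period (invented)
flux = 1.0 + 0.01 * np.sin(2 * np.pi * f_true * t)

# Subtract the mean so the zero-frequency term does not dominate the spectrum.
spectrum = np.abs(np.fft.rfft(flux - flux.mean()))
freqs = np.fft.rfftfreq(len(flux), d=cadence)

# The strongest peak in the amplitude spectrum recovers the pulsation frequency.
f_peak = freqs[np.argmax(spectrum)]
print(f_peak)  # close to f_true, within one frequency bin (1/86400 Hz)
```

In practice a longer continuous run narrows the frequency bins (resolution is roughly one over the run length), which is exactly why uninterrupted multi-site coverage matters for closely spaced pulsation modes.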
The first observation run took place in March 1988, and it included the Multiple Mirror Telescope in the US, an aperture telescope at the South African Astronomical Observatory, and the IUE observatory in orbit around the Earth. The first target for the run was the star PG 1346+082, or CR Boötis, an AM CVn star. The second target was V803 Centauri, a cataclysmic binary. The campaign was able to monitor the star systems for a continual period of 15 days from six participating sites.
The early focus of the program was the study of pulsating white dwarfs. Most such stars exhibiting non-radial pulsations have multiple pulsation modes, with some having frequencies on the order of a cycle per day. The only way to observe these extended frequencies is continually over durations longer than 24 hours. The observations of PG 1159-035 with the WET, reported in 1991, initiated the study of white dwarf seismology, later termed astroseismology. By 1998, WET runs had been performed on pulsating white dwarfs of the DOV, DBV, and DAV types, Delta Scuti variables, a rapidly oscillating Ap star, and cataclysmic variables. A total of 16 XCov runs had been completed by May 1998, often covering more than one target per run. Only one failure was reported, for the roAp star HD 166473.
Operations for WET moved to Iowa State University in 1995 when the International Institute for Theoretical and Applied Physics offered to help fund the WET program. In 2004, the governing council of WET agreed to study private funding for its operations. This resulted in the formation of the Delaware Astroseismic Research Center (DARC) the following year, and WET operations were moved from Iowa to Delaware. The first run supported by DARC was XCONV25 during May 2006. Operations are supported by the Mount Cuba Astronomical Observatory and the University of Delaware.
The ability to collect photometric data over a long period is vulnerable to weather conditions, the need to allocate time for each telescope, and the situation of each participating astronomer. It was recognized that satellites could accomplish the same task with fewer issues, but at a far higher cost. The MOST spacecraft, launched in 2003, was an early effort to pursue this application. It was able to monitor individual stars for periods of up to 30 days, but was limited to a visual magnitude of 6 or brighter. The Kepler space telescope was launched in 2009 and was able to observe some stars continuously for up to four years. As of 2021, the TESS satellite is performing astroseismology down to magnitude 17.
References
Further reading
External links
Astronomy organizations
Scientific organizations established in 1988
Astronomy projects
Asteroseismology
|
```javascript
module.exports = {
  up: async (queryInterface, Sequelize) => {
    // Add a boolean guestSignin flag to teams, defaulting to false.
    await queryInterface.addColumn("teams", "guestSignin", {
      type: Sequelize.BOOLEAN,
      allowNull: false,
      defaultValue: false,
    });
    // Track when the last sign-in email was sent to each user.
    await queryInterface.addColumn("users", "lastSigninEmailSentAt", {
      type: Sequelize.DATE,
    });
    // Relax users.email to be nullable with no default.
    await queryInterface.changeColumn("users", "email", {
      type: Sequelize.STRING,
      allowNull: true,
      defaultValue: null,
    });
  },
  down: async (queryInterface, Sequelize) => {
    await queryInterface.removeColumn("teams", "guestSignin");
    await queryInterface.removeColumn("users", "lastSigninEmailSentAt");
    // Restore the NOT NULL constraint on users.email.
    await queryInterface.changeColumn("users", "email", {
      type: Sequelize.STRING,
      allowNull: false,
    });
  },
};
```
|
```typescript
/*
* This software is released under MIT license.
* The full license information can be found in LICENSE in the root directory of this project.
*/
import { html } from 'lit';
import { removeTestElement, createTestElement, componentIsStable } from '@cds/core/test';
import { CdsInternalControlInline } from '@cds/core/forms';
import '@cds/core/forms/register.js';
let element: HTMLElement;
let control: CdsInternalControlInline;
let input: HTMLInputElement;
let inputInControlGroup: HTMLInputElement;
describe('cds-internal-control-inline', () => {
beforeEach(async () => {
element = await createTestElement(html`
<cds-internal-control-inline>
<label>control</label>
<input type="checkbox" />
</cds-internal-control-inline>
<cds-internal-control-group>
<cds-internal-control-inline>
<label>control</label>
<input type="checkbox" />
</cds-internal-control-inline>
</cds-internal-control-group>
`);
control = element.querySelectorAll<CdsInternalControlInline>('cds-internal-control-inline')[0];
input = element.querySelector<HTMLInputElement>('input');
inputInControlGroup = element.querySelector<HTMLInputElement>('cds-internal-control-group input');
});
afterEach(() => {
removeTestElement(element);
});
it('should set identifier attribute', () => {
expect(control.getAttribute('cds-control')).toBe('');
});
  it('should allow inline label to be placed left or right of the control', async () => {
await componentIsStable(control);
expect(control.shadowRoot.innerHTML).not.toContain('order:reverse');
control.controlAlign = 'right';
await componentIsStable(control);
expect(control.shadowRoot.innerHTML).toContain('order:reverse');
});
it('should allow style input to trigger click of native input', done => {
let clicked = false;
input.addEventListener('click', () => {
clicked = true;
done();
});
control.shadowRoot.querySelector<HTMLElement>('.input').click();
expect(clicked).toBe(true);
});
it('should mark ui div input/focus elements as presentational only roles', async () => {
await componentIsStable(control);
expect(control.shadowRoot.querySelector<HTMLElement>('.input').getAttribute('role')).toEqual('presentation');
expect(control.shadowRoot.querySelector<HTMLElement>('[focusable]').getAttribute('role')).toEqual('presentation');
});
it('to prevent empty gap space it should not render messages slot wrapper when no messages are provided', async () => {
await componentIsStable(control);
expect(control.shadowRoot.querySelector('slot[message]')).toEqual(null);
});
  it('should allow inline control messages when within an inline group', async () => {
await componentIsStable(control);
expect(control.shadowRoot.querySelector('.private-host').getAttribute('cds-layout').includes('vertical')).toBe(
true
);
control.isControlGroup = true;
await componentIsStable(control);
expect(control.shadowRoot.querySelector('.private-host').getAttribute('cds-layout').includes('horizontal')).toBe(
true
);
});
it('should only bubble events if control is not managed by a control group', async () => {
await componentIsStable(control);
let eventCount = 0;
element.addEventListener('checkedChange', () => eventCount++);
inputInControlGroup.click();
input.click();
await componentIsStable(control);
expect(eventCount).toBe(1);
});
});
```
|
Grigston is an unincorporated community in Scott County, Kansas, United States.
History
A post office was opened in Grigston (formerly Grigsby) in 1886, and remained in operation until it was discontinued in 1955. Currently, the town consists only of several houses and a moderate-sized grain elevator complex.
References
Further reading
External links
Scott County maps: Current, Historic, KDOT
Unincorporated communities in Scott County, Kansas
Unincorporated communities in Kansas
|
```python
# mypy: allow-untyped-decorators
# mypy: allow-untyped-defs
import logging
from collections import defaultdict
from threading import Lock
from typing import List, Optional
import torch
import torch.distributed.autograd as dist_autograd
import torch.distributed.rpc as rpc
import torch.jit as jit
import torch.nn as nn
from torch import Tensor
from torch.distributed.rpc import RRef
from .utils import functional_optim_map
__all__ = ["DistributedOptimizer"]
logger = logging.getLogger(__name__)
# XXX: we define a _ScriptModuleOptimizer here to explicitly
# compile the FunctionalOptimizer class into TorchScript
# This is because ScriptClass instance still lives in
# python unless you explicitly compile it as an attribute
# in ScriptModule or pass it to a ScriptFunction
# _ScriptLocalOptimizerInterface serves as a common
# interface type for Optimizer ScriptModules.
#
# TODO (wanchaol): remove this once we added TorchScript
# class reference semantics
@jit.interface
class _ScriptLocalOptimizerInterface:
def step(self, autograd_ctx_id: int) -> None:
pass
class _ScriptLocalOptimizer(nn.Module):
# TorchScript does not support multithread concurrent compiling.
# request_callback might invoke concurrent compiling, so we
# serialize the compiling with a lock
compile_lock = Lock()
def __init__(self, optim_cls, local_params_rref, *args, **kwargs):
super().__init__()
self._local_params = [rref.local_value() for rref in local_params_rref]
self.optim = optim_cls(self._local_params, *args, **kwargs)
@jit.export
def step(self, autograd_ctx_id: int):
all_local_grads = dist_autograd.get_gradients(autograd_ctx_id)
# apply functional optimizer step with a list of gradients
grads: List[Optional[Tensor]] = [
all_local_grads[p] if p in all_local_grads else None
for p in self._local_params
]
self.optim.step(grads)
# TODO (wanchaol): remove/merge this with ScriptLocalOptimizer once
# we have converted all to functional optimizer in distributed.optim
class _LocalOptimizer:
# Ideally we would only need to share a lock for instances of
# _LocalOptimizer that deal with the same parameters. We are
# making a simplifying assumption here that if there is more
# than one instance of _LocalOptimizer per worker, they will
# be optimizing the same parameters (e.g. each data parallel
# trainer will create its own instance of _LocalOptimizer but
# they will all optimize the same parameters on each worker)
global_lock = Lock()
def __init__(self, optim_cls, local_params_rref, *args, **kwargs):
self._local_params = [rref.local_value() for rref in local_params_rref]
self.optim = optim_cls(self._local_params, *args, **kwargs)
def step(self, autograd_ctx_id):
all_local_grads = dist_autograd.get_gradients(autograd_ctx_id)
with _LocalOptimizer.global_lock:
for param, grad in all_local_grads.items():
param.grad = grad
self.optim.step()
def _new_local_optimizer(optim_cls, local_params_rref, *args, **kwargs):
return rpc.RRef(_LocalOptimizer(optim_cls, local_params_rref, *args, **kwargs))
def _local_optimizer_step(local_optim_rref, autograd_ctx_id):
local_optim = local_optim_rref.local_value()
local_optim.step(autograd_ctx_id)
# new/step functions combined with _ScriptLocalOptimizer to provide GIL-free optimizer
def _new_script_local_optimizer(optim_cls, local_params_rref, *args, **kwargs):
optim = _ScriptLocalOptimizer(optim_cls, local_params_rref, *args, **kwargs)
with _ScriptLocalOptimizer.compile_lock:
script_optim = jit.script(optim)
return rpc.RRef(script_optim, _ScriptLocalOptimizerInterface)
@jit.script
def _script_local_optimizer_step(
local_optim_rref: RRef[_ScriptLocalOptimizerInterface], autograd_ctx_id: int
) -> None:
local_optim = local_optim_rref.local_value()
local_optim.step(autograd_ctx_id)
def _wait_for_all(rpc_futs):
# TODO: improve error propagation
exception = None
results = []
for fut in rpc_futs:
try:
results.append(fut.wait())
except Exception as e:
results.append(e)
exception = e
if exception is not None:
raise exception
return results
class DistributedOptimizer:
"""
DistributedOptimizer takes remote references to parameters scattered
across workers and applies the given optimizer locally for each parameter.
This class uses :meth:`~torch.distributed.autograd.get_gradients` in order
to retrieve the gradients for specific parameters.
Concurrent calls to
:meth:`~torch.distributed.optim.DistributedOptimizer.step`,
either from the same or different clients, will
be serialized on each worker -- as each worker's optimizer can only work
on one set of gradients at a time. However, there is no guarantee that
the full forward-backward-optimizer sequence will execute for one client
at a time. This means that the gradients being applied may not correspond
to the latest forward pass executed on a given worker. Also, there is no
guaranteed ordering across workers.
`DistributedOptimizer` creates the local optimizer with TorchScript enabled
by default, so that optimizer updates are not blocked by the Python Global
Interpreter Lock (GIL) in the case of multithreaded training (e.g. Distributed
Model Parallel). This feature is currently enabled for most optimizers. You
can also follow `the recipe`__ in PyTorch tutorials to enable TorchScript support
for your own custom optimizers.
Args:
optimizer_class (optim.Optimizer): the class of optimizer to
instantiate on each worker.
params_rref (list[RRef]): list of RRefs to local or remote parameters
to optimize.
args: arguments to pass to the optimizer constructor on each worker.
kwargs: arguments to pass to the optimizer constructor on each worker.
Example::
>>> # xdoctest: +SKIP("distributed")
>>> import torch.distributed.autograd as dist_autograd
>>> import torch.distributed.rpc as rpc
>>> from torch import optim
>>> from torch.distributed.optim import DistributedOptimizer
>>>
>>> with dist_autograd.context() as context_id:
>>> # Forward pass.
>>> rref1 = rpc.remote("worker1", torch.add, args=(torch.ones(2), 3))
>>> rref2 = rpc.remote("worker1", torch.add, args=(torch.ones(2), 1))
>>> loss = rref1.to_here() + rref2.to_here()
>>>
>>> # Backward pass.
>>> dist_autograd.backward(context_id, [loss.sum()])
>>>
>>> # Optimizer.
>>> dist_optim = DistributedOptimizer(
>>> optim.SGD,
>>> [rref1, rref2],
>>> lr=0.05,
>>> )
>>> dist_optim.step(context_id)
__ path_to_url
"""
def __init__(self, optimizer_class, params_rref, *args, **kwargs):
torch._C._log_api_usage_once("torch.distributed.optim.DistributedOptimizer")
per_worker_params_rref = defaultdict(list)
for param in params_rref:
per_worker_params_rref[param.owner()].append(param)
if optimizer_class in functional_optim_map and jit._state._enabled:
optim_ctor = functional_optim_map.get(optimizer_class)
else:
optim_ctor = optimizer_class
self.is_functional_optim = optim_ctor != optimizer_class
if self.is_functional_optim:
optimizer_new_func = _new_script_local_optimizer
else:
logger.warning(
"Creating the optimizer %s without TorchScript support, "
"this might result in slow computation time in multithreading environment"
"(i.e. Distributed Model Parallel training on CPU) due to the Python's "
"Global Interpreter Lock (GIL). Please file an issue if you need this "
"optimizer in TorchScript. ",
optimizer_class,
)
optimizer_new_func = _new_local_optimizer
remote_optim_futs = []
for worker, param_rrefs in per_worker_params_rref.items():
remote_optim_rref_fut = rpc.rpc_async(
worker,
optimizer_new_func,
args=(optim_ctor, param_rrefs) + args,
kwargs=kwargs,
)
remote_optim_futs.append(remote_optim_rref_fut)
self.remote_optimizers = _wait_for_all(remote_optim_futs)
def step(self, context_id):
"""
Performs a single optimization step.
This will call :meth:`torch.optim.Optimizer.step` on each worker
containing parameters to be optimized, and will block until all workers
return. The provided ``context_id`` will be used to retrieve the
corresponding :class:`~torch.distributed.autograd.context` that
contains the gradients that should be applied to the parameters.
Args:
context_id: the autograd context id for which we should run the
optimizer step.
"""
dist_autograd._is_valid_context(context_id)
optimizer_step_func = (
_script_local_optimizer_step
if self.is_functional_optim
else _local_optimizer_step
)
rpc_futs = []
for optimizer in self.remote_optimizers:
rpc_futs.append(
rpc.rpc_async(
optimizer.owner(),
optimizer_step_func,
args=(optimizer, context_id),
)
)
_wait_for_all(rpc_futs)
```
|
Parliamentary elections were held in Andorra on 10 December 1989, with a second round of voting on 17 December. Following the elections, Òscar Ribas Reig became Prime Minister, elected on 12 January 1990 by a vote of 23−5.
Electoral system
All 28 seats of the General Council were up for election. Each parish formed a constituency, electing four members each. Members of the Parliament were elected using a two-round plurality voting system. The voting age was 18 years old.
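As a rough illustration of two-round voting, the sketch below implements a generic majority-runoff rule: a candidate winning an absolute majority in the first round is elected outright, otherwise the top two contest a second round decided by plurality. The exact thresholds and multi-seat handling used in the 1989 Andorran parish elections are not modeled here, and the vote counts are invented.

```python
from collections import Counter

def two_round_winner(first_round, second_round=None):
    """Generic two-round runoff sketch. Each argument maps candidate -> votes."""
    total = sum(first_round.values())
    leader, votes = Counter(first_round).most_common(1)[0]
    if votes * 2 > total:
        # Absolute majority in the first round: elected, no runoff needed.
        return leader
    # Otherwise the top two candidates go to a second round,
    # which is decided by simple plurality.
    top_two = [c for c, _ in Counter(first_round).most_common(2)]
    runoff = {c: second_round[c] for c in top_two}
    return max(runoff, key=runoff.get)

# No majority in round one (40/100), so A and B go to a runoff, which B wins.
print(two_round_winner({"A": 40, "B": 35, "C": 25}, {"A": 48, "B": 52}))  # prints: B
```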
As political parties were not legalised until 1993, all candidates ran as independents, although the press considered some candidates to be government-endorsed (supporting the Pintat government) and others to be opposition candidates.
Following the elections, the General Council elected the Prime Minister of Andorra and the General Syndic (speaker).
Results
Voter turnout was 82.3%. A second round of voting was held in Andorra la Vella, Canillo and Ordino.
By affiliation
Although government-endorsed candidates won the elections in terms of seats, in the most populated parishes (Andorra la Vella and Escaldes-Engordany) the opposition candidates received more votes. This was seen as a decline in support for Josep Pintat-Solans's policies, and Òscar Ribas Reig was elected Prime Minister of Andorra.
References
Andorra
Parliamentary election
Parliamentary elections in Andorra
Non-partisan elections
December 1989 events in Europe
Election and referendum articles with incomplete results
|
Wanborough () is a rural village and civil parish in Surrey approximately 4 miles (6 km) west of Guildford on the northern slopes of the Hog's Back. Wanborough lies between Puttenham and Normandy. The village grew around, and to service, Wanborough Manor, which stands on the site of ancient springs.
History
Prehistory
According to a local publication Wanborough and its Church, humans in prehistory travelled along the Hog's Back, attracted by the spring in the locality. The earliest settlement dates to 8000 BC.
The "Wanborough Coins" are part of a votive offering deposited at a Romano-Celtic temple (i.e., late 1st century BC to 4th century AD); this site was looted between 1983 and 1985, but over one thousand silver coins, a small part of the original assemblage, were eventually added to the collection of the British Museum. The British Museum calls the destruction of the Romano-Celtic temple at Wanborough in Surrey 'one of the saddest stories in British archaeology'.
A headdress and sceptre handles were also recovered. These were probably used by a priest during rituals. Subsequent excavations have shown that there were in fact two temples on the site. A circular temple had been built during the late first century BC, replaced in the second century AD by a square temple.
The Saxon name of Wenberge means bump-barrow; this barrow was on the southern border of Wanborough on the top of the Hog's Back.
Pre-dissolution
Wanborough appears in Domesday Book of 1086 as Weneberge held by Goisfrid (Geoffrey) de Mandville. Its assets were: 3 hides; 1 church, 9 ploughs, of meadow, woodland worth 30 hogs (per year). Its people rendered £7 per year to their overlords. It also states that it had been held before the Norman conquest by two thegns, Sweign and Leofwin, who may have been brothers of King Harold.
In 1130 the Manor was sold to Waverley Abbey for £80 and put to use in great part to farm sheep to feed, clothe and endow the Cistercian community. The present Great Barn was built in 1388 and was used for storing and processing crops (threshing and winnowing). Having been built for the Cistercian Abbey, the barn was not a tithe barn as it would have stored the entire manor crop. The barn is made from massive oak timbers and is an aisled barn with large doors on either long side to permit entry by carts. It was extended in 1705. The dates have been obtained using tree-ring dating techniques.
In 1511 the Abbey obtained the right to hold an annual fair at Wanborough for 3 days from 23 August. By 1536 the fair was making £35 for the abbey and had a pie poudre court to try trading offences.
Post-dissolution
In 1536, Waverley Abbey was dissolved and the manor passed into secular ownership. St Bartholomew's Church was in regular use until at least 1675. By 1794, as leaseholder, the Quaker, Morris Birkbeck was farming an estate of at Wanborough, where he joined others in England and France who were experimenting with crossbreeding Merino sheep and innovating with modern techniques. He used the church as a wood store and barn.
In 1613, a court case recorded that someone was assaulted with a "cricket staffe" (an early term for the cricket bat) at Wanborough.
The present manor house was built, starting in about 1670, by Thomas Dalmahoy, who was MP for Guildford for most of the reign of the restored monarch, Charles II.
From 1880, Sir Algernon West lived at Wanborough Manor. He was Principal Private Secretary to Prime Minister W. E. Gladstone. West entertained many political figures at the manor, including Gladstone, Queen Victoria and Bismarck. He was also a director of the South Eastern Railway and was responsible for the opening of Wanborough Station (in nearby Normandy) in 1891. In 1900, the manor passed to Asquith who lived there until he became Prime Minister. In 1908 West returned and stayed until his death in 1921. The Manor passed to the Perkins family who introduced one of the first combine harvesters in the country.
World War II
During World War II the Manor was used as a training centre for Special Operations Executive agents. The manor was designated Special Training School 5, and handled the first three phases of agent training. It operated from spring 1941 to March 1943 under the command of Major Roger de Wesselow, who had been a Coldstream Guards officer in World War I. Many agents in 'Section F' (France) passed through STS5 and courses lasted 3 weeks. Each course was specific to one country and all conversation during the course was in the target language. Trainees were taught theoretical and practical subjects including physical training, shooting, explosives, sabotage, map-reading, Morse code, and observation skills. Among the 130 agents trained at Wanborough were Peter Churchill and Noor Inayat Khan.
One of the tests in the course was to invite beautiful women to seduce the agents through alcohol and flirtation and try to get them to divulge secrets. The test was dropped, however, as almost all the agents appeared to have failed to keep sensitive information to themselves.
Post-war
In the 1950s the Manor became a country club and restaurant. It acquired a reputation amongst the taxi drivers of Guildford who would be called to collect girls from Guildford station at weekends and then drive them back for an early train up to London on Monday morning.
The Manor House is now split into three private dwellings.
Since the 1960s development has been constrained by its rural isolation and protected status of much of its land, Wanborough has gradually become a mixture of a commuter and retirement settlement. Principal employment areas are the Aldershot Urban Area, Guildford and London.
St Bartholomew's Church
The village church is small, only by internally. It was originally built around 1060 replacing an earlier wooden Saxon church. It was rebuilt in the 13th century. In Victorian times, whilst nearby Puttenham church was closed for repairs, the rector of Puttenham, the Rev. W A Duckworth, decided to hold services in Wanborough's church. He arranged and paid for the restoration of St Bartholomew's by architect Henry Woodyer. It was rededicated in 1861. Thus the various walls and windows have significantly different heritage. The Victorian west brick wall now supports an external bell. The church's architectural importance is reflected in its Grade I listing.
Demography and housing
On average across the region, detached houses made up 28% of accommodation and apartments 22.6%.
The proportion of households in the civil parish who owned their home outright compares to the regional average of 35.1%. The proportion who owned their home with a loan compares to the regional average of 32.5%. The remaining % is made up of rented dwellings (plus a negligible % of households living rent-free).
Transport links
The nearest railway station is in the large, generally 20th century, neighbourhood of Flexford, to the north, served by South Western Railway, who manage the station, and by Great Western Railway. It is on the Ascot to Guildford section of the North Downs Line.
The through road in the village leads south towards the edge of the village where there is an intersection with the A31 dual carriageway that runs along the top of the Hog's Back.
The only bus service is a once-a-day school bus, operated by Carlone, connecting the village to the Broadwater School in Farncombe.
References
Bibliography
External links
Wanborough Great Barn
Detailed historical record about the Great Barn, Wanborough
Wanborough Barn
Stained Glass Windows at St Bartholomew, Wanborough, Surrey
Borough of Guildford
Civil parishes in Surrey
Cricket in Surrey
English cricket in the 14th to 17th centuries
Villages in Surrey
|
Giselle Itié Ramos (born October 3, 1981) is a Mexican-born Brazilian actress. In 2001, she began her career as an actress in a Brazilian telenovela. In 2009, she debuted as the protagonist of the telenovela Bela, a Feia, the Brazilian version of the Colombian Yo soy Betty, la fea. Itié also took part in the film The Expendables, co-written, directed by, and starring Sylvester Stallone.
Biography
Giselle was born in Mexico City on 3 October 1981 to a Mexican father and a Brazilian mother from São Paulo. Her parents lost everything in the earthquake that devastated Mexico City in 1985, after which the family moved to her mother's native Brazil.
Giselle lied to her family to pursue her artistic career; telling her father that she needed money to pay for a gym membership, she instead spent the money on a theater course at Televisa, the leading television station in Mexico. For eight months the actress stayed at the home of relatives in Mexico, convinced that she wanted to continue pursuing an acting career.
Career
Her long-standing desire to work as an actress finally came true at age 18. Disheartened by her modeling career, she signed up for an audition for the miniseries Os Maias after much insistence by an employee of Elite Model Management. Two months later, after auditioning for the Rede Globo miniseries Engraçadinha: Seus Amores e Seus Pecados, based on a novel by Nelson Rodrigues, she received word from the broadcaster confirming her in the role of Lola in the miniseries. The character in question was a courtesan, forcing Itié to set aside her shyness during some risqué scenes. Still, Giselle was frustrated after Os Maias: although she expected further work with Globo, no new roles followed.
The same year, she accepted an audition for Pícara Sonhadora, adapted from the original Mexican production, which revived SBT's telenovela output. Giselle won the role of the villainess Bárbara, lover of the protagonist played by Petrônio Gontijo.
She was then called by the director Luiz Fernando Carvalho who offered her the role of the character Eulália in the novela Esperança in 2002.
In 2004, she was one of the protagonists of the novela Começar de Novo and in 2005, she appeared in the TV series Mandrake for HBO Brazil. In 2006, she was also one of the protagonists of the show Avassaladoras. In the same year, she acted in the novela Pé na Jaca.
In 2007, she appeared on the show O Mistério da Estrada de Sintra. That same year she acted in the telenovela O Profeta and appeared in the third edition of "Dança no Gelo", a segment of the show Domingão do Faustão. She was performing well and being featured on the program, but a head injury forced her to withdraw; she subsequently made a full recovery.
In 2008, she participated in several episodes of the series Casos e Acasos and Faça Sua História.
In 2009, she worked on the movie Inversão directed by Edu Felistoque. That same year, the actress was the protagonist of the telenovela Bela, a Feia aired by Rede Record.
The following year, the film The Expendables premiered, in which she played Sandra, opposite Sylvester Stallone.
In 2012, she was one of the protagonists of the telenovela Máscaras, produced by Rede Record. The same year she appeared in two foreign films: Walter Salles's On the Road, and the Chilean film Caleuche: El llamado del mar, in which she played the protagonist.
Between 2015 and 2016, she played Zipporah in Os Dez Mandamentos.
Personal life
In 2014, Itié married actor Emilio Dantas. That same year she had to leave the cast of Vitória, where she would have played the character Renata (subsequently played by Maytê Piragibe), due to a motorcycle accident during the couple's honeymoon in Thailand. She and Dantas separated in 2015, and on November 11, 2015, Itié began a relationship with actor Guilherme Winter.
Filmography
Television
Film
References
External links
Living people
Actresses from Mexico City
Actresses of Mexican descent
Mexican people of Brazilian descent
Mexican actresses
Mexican emigrants to Brazil
Brazilian telenovela actresses
Brazilian television actresses
Brazilian film actresses
Brazilian people of Mexican descent
1981 births
|
```c
/*
* All rights reserved.
*
* This source code is licensed under the BSD-style license found in the
* LICENSE file in the root directory of this source tree.
*/
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#ifdef _WIN32
#include <windows.h>
#else
#include <unistd.h>
#endif
#ifdef __ANDROID__
#include <android/log.h>
#endif
#ifndef CLOG_LOG_TO_STDIO
#ifdef __ANDROID__
#define CLOG_LOG_TO_STDIO 0
#else
#define CLOG_LOG_TO_STDIO 1
#endif
#endif
#include <clog.h>
/* Messages up to this size are formatted entirely on-stack, and don't allocate
* heap memory */
#define CLOG_STACK_BUFFER_SIZE 1024
#define CLOG_FATAL_PREFIX "Fatal error: "
#define CLOG_FATAL_PREFIX_LENGTH 13
#define CLOG_FATAL_PREFIX_FORMAT "Fatal error in %s: "
#define CLOG_ERROR_PREFIX "Error: "
#define CLOG_ERROR_PREFIX_LENGTH 7
#define CLOG_ERROR_PREFIX_FORMAT "Error in %s: "
#define CLOG_WARNING_PREFIX "Warning: "
#define CLOG_WARNING_PREFIX_LENGTH 9
#define CLOG_WARNING_PREFIX_FORMAT "Warning in %s: "
#define CLOG_INFO_PREFIX "Note: "
#define CLOG_INFO_PREFIX_LENGTH 6
#define CLOG_INFO_PREFIX_FORMAT "Note (%s): "
#define CLOG_DEBUG_PREFIX "Debug: "
#define CLOG_DEBUG_PREFIX_LENGTH 7
#define CLOG_DEBUG_PREFIX_FORMAT "Debug (%s): "
#define CLOG_SUFFIX_LENGTH 1
void clog_vlog_fatal(const char* module, const char* format, va_list args) {
#if defined(__ANDROID__) && !CLOG_LOG_TO_STDIO
__android_log_vprint(ANDROID_LOG_FATAL, module, format, args);
#else
char stack_buffer[CLOG_STACK_BUFFER_SIZE];
char* heap_buffer = NULL;
char* out_buffer = &stack_buffer[0];
/* The first call to vsnprintf will clobber args, thus need a copy in case a
* second vsnprintf call is needed */
va_list args_copy;
va_copy(args_copy, args);
int prefix_chars = CLOG_FATAL_PREFIX_LENGTH;
if (module == NULL) {
memcpy(stack_buffer, CLOG_FATAL_PREFIX, CLOG_FATAL_PREFIX_LENGTH);
} else {
prefix_chars = snprintf(
stack_buffer, CLOG_STACK_BUFFER_SIZE, CLOG_FATAL_PREFIX_FORMAT, module);
if (prefix_chars < 0) {
/* Format error in prefix (possible if prefix is modified): skip prefix
* and continue as if nothing happened. */
prefix_chars = 0;
}
}
int format_chars;
if (prefix_chars + CLOG_SUFFIX_LENGTH >= CLOG_STACK_BUFFER_SIZE) {
/*
* Prefix + suffix alone would overflow the on-stack buffer, thus need to
* use on-heap buffer. Do not even try to format the string into on-stack
* buffer.
*/
format_chars = vsnprintf(NULL, 0, format, args);
} else {
format_chars = vsnprintf(
&stack_buffer[prefix_chars],
CLOG_STACK_BUFFER_SIZE - prefix_chars - CLOG_SUFFIX_LENGTH,
format,
args);
}
if (format_chars < 0) {
/* Format error in the message: silently ignore this particular message. */
goto cleanup;
}
if (prefix_chars + format_chars + CLOG_SUFFIX_LENGTH >
CLOG_STACK_BUFFER_SIZE) {
/* Allocate a buffer on heap, and vsnprintf to this buffer */
heap_buffer = malloc(prefix_chars + format_chars + CLOG_SUFFIX_LENGTH);
if (heap_buffer == NULL) {
goto cleanup;
}
if (prefix_chars > CLOG_STACK_BUFFER_SIZE) {
/* Prefix didn't fit into on-stack buffer, re-format it again to on-heap
* buffer */
snprintf(
heap_buffer,
prefix_chars + 1 /* for '\0'-terminator */,
CLOG_FATAL_PREFIX_FORMAT,
module);
} else {
/* Copy pre-formatted prefix from on-stack buffer to on-heap buffer */
memcpy(heap_buffer, stack_buffer, prefix_chars);
}
vsnprintf(
heap_buffer + prefix_chars,
format_chars + CLOG_SUFFIX_LENGTH,
format,
args_copy);
out_buffer = heap_buffer;
}
out_buffer[prefix_chars + format_chars] = '\n';
#ifdef _WIN32
DWORD bytes_written;
WriteFile(
GetStdHandle(STD_ERROR_HANDLE),
out_buffer,
prefix_chars + format_chars + CLOG_SUFFIX_LENGTH,
&bytes_written,
NULL);
#else
write(
STDERR_FILENO,
out_buffer,
prefix_chars + format_chars + CLOG_SUFFIX_LENGTH);
#endif
cleanup:
free(heap_buffer);
va_end(args_copy);
#endif
}
void clog_vlog_error(const char* module, const char* format, va_list args) {
#if defined(__ANDROID__) && !CLOG_LOG_TO_STDIO
__android_log_vprint(ANDROID_LOG_ERROR, module, format, args);
#else
char stack_buffer[CLOG_STACK_BUFFER_SIZE];
char* heap_buffer = NULL;
char* out_buffer = &stack_buffer[0];
/* The first call to vsnprintf will clobber args, thus need a copy in case a
* second vsnprintf call is needed */
va_list args_copy;
va_copy(args_copy, args);
int prefix_chars = CLOG_ERROR_PREFIX_LENGTH;
if (module == NULL) {
memcpy(stack_buffer, CLOG_ERROR_PREFIX, CLOG_ERROR_PREFIX_LENGTH);
} else {
prefix_chars = snprintf(
stack_buffer, CLOG_STACK_BUFFER_SIZE, CLOG_ERROR_PREFIX_FORMAT, module);
if (prefix_chars < 0) {
/* Format error in prefix (possible if prefix is modified): skip prefix
* and continue as if nothing happened. */
prefix_chars = 0;
}
}
int format_chars;
if (prefix_chars + CLOG_SUFFIX_LENGTH >= CLOG_STACK_BUFFER_SIZE) {
/*
* Prefix + suffix alone would overflow the on-stack buffer, thus need to
* use on-heap buffer. Do not even try to format the string into on-stack
* buffer.
*/
format_chars = vsnprintf(NULL, 0, format, args);
} else {
format_chars = vsnprintf(
&stack_buffer[prefix_chars],
CLOG_STACK_BUFFER_SIZE - prefix_chars - CLOG_SUFFIX_LENGTH,
format,
args);
}
if (format_chars < 0) {
/* Format error in the message: silently ignore this particular message. */
goto cleanup;
}
if (prefix_chars + format_chars + CLOG_SUFFIX_LENGTH >
CLOG_STACK_BUFFER_SIZE) {
/* Allocate a buffer on heap, and vsnprintf to this buffer */
heap_buffer = malloc(prefix_chars + format_chars + CLOG_SUFFIX_LENGTH);
if (heap_buffer == NULL) {
goto cleanup;
}
if (prefix_chars > CLOG_STACK_BUFFER_SIZE) {
/* Prefix didn't fit into on-stack buffer, re-format it again to on-heap
* buffer */
snprintf(
heap_buffer,
prefix_chars + 1 /* for '\0'-terminator */,
CLOG_ERROR_PREFIX_FORMAT,
module);
} else {
/* Copy pre-formatted prefix from on-stack buffer to on-heap buffer */
memcpy(heap_buffer, stack_buffer, prefix_chars);
}
vsnprintf(
heap_buffer + prefix_chars,
format_chars + CLOG_SUFFIX_LENGTH,
format,
args_copy);
out_buffer = heap_buffer;
}
out_buffer[prefix_chars + format_chars] = '\n';
#ifdef _WIN32
DWORD bytes_written;
WriteFile(
GetStdHandle(STD_ERROR_HANDLE),
out_buffer,
prefix_chars + format_chars + CLOG_SUFFIX_LENGTH,
&bytes_written,
NULL);
#else
write(
STDERR_FILENO,
out_buffer,
prefix_chars + format_chars + CLOG_SUFFIX_LENGTH);
#endif
cleanup:
free(heap_buffer);
va_end(args_copy);
#endif
}
void clog_vlog_warning(const char* module, const char* format, va_list args) {
#if defined(__ANDROID__) && !CLOG_LOG_TO_STDIO
__android_log_vprint(ANDROID_LOG_WARN, module, format, args);
#else
char stack_buffer[CLOG_STACK_BUFFER_SIZE];
char* heap_buffer = NULL;
char* out_buffer = &stack_buffer[0];
/* The first call to vsnprintf will clobber args, thus need a copy in case a
* second vsnprintf call is needed */
va_list args_copy;
va_copy(args_copy, args);
int prefix_chars = CLOG_WARNING_PREFIX_LENGTH;
if (module == NULL) {
memcpy(stack_buffer, CLOG_WARNING_PREFIX, CLOG_WARNING_PREFIX_LENGTH);
} else {
prefix_chars = snprintf(
stack_buffer,
CLOG_STACK_BUFFER_SIZE,
CLOG_WARNING_PREFIX_FORMAT,
module);
if (prefix_chars < 0) {
/* Format error in prefix (possible if prefix is modified): skip prefix
* and continue as if nothing happened. */
prefix_chars = 0;
}
}
int format_chars;
if (prefix_chars + CLOG_SUFFIX_LENGTH >= CLOG_STACK_BUFFER_SIZE) {
/*
* Prefix + suffix alone would overflow the on-stack buffer, thus need to
* use on-heap buffer. Do not even try to format the string into on-stack
* buffer.
*/
format_chars = vsnprintf(NULL, 0, format, args);
} else {
format_chars = vsnprintf(
&stack_buffer[prefix_chars],
CLOG_STACK_BUFFER_SIZE - prefix_chars - CLOG_SUFFIX_LENGTH,
format,
args);
}
if (format_chars < 0) {
/* Format error in the message: silently ignore this particular message. */
goto cleanup;
}
if (prefix_chars + format_chars + CLOG_SUFFIX_LENGTH >
CLOG_STACK_BUFFER_SIZE) {
/* Allocate a buffer on heap, and vsnprintf to this buffer */
heap_buffer = malloc(prefix_chars + format_chars + CLOG_SUFFIX_LENGTH);
if (heap_buffer == NULL) {
goto cleanup;
}
if (prefix_chars > CLOG_STACK_BUFFER_SIZE) {
/* Prefix didn't fit into on-stack buffer, re-format it again to on-heap
* buffer */
snprintf(
heap_buffer,
prefix_chars + 1 /* for '\0'-terminator */,
CLOG_WARNING_PREFIX_FORMAT,
module);
} else {
/* Copy pre-formatted prefix from on-stack buffer to on-heap buffer */
memcpy(heap_buffer, stack_buffer, prefix_chars);
}
vsnprintf(
heap_buffer + prefix_chars,
format_chars + CLOG_SUFFIX_LENGTH,
format,
args_copy);
out_buffer = heap_buffer;
}
out_buffer[prefix_chars + format_chars] = '\n';
#ifdef _WIN32
DWORD bytes_written;
WriteFile(
GetStdHandle(STD_ERROR_HANDLE),
out_buffer,
prefix_chars + format_chars + CLOG_SUFFIX_LENGTH,
&bytes_written,
NULL);
#else
write(
STDERR_FILENO,
out_buffer,
prefix_chars + format_chars + CLOG_SUFFIX_LENGTH);
#endif
cleanup:
free(heap_buffer);
va_end(args_copy);
#endif
}
void clog_vlog_info(const char* module, const char* format, va_list args) {
#if defined(__ANDROID__) && !CLOG_LOG_TO_STDIO
__android_log_vprint(ANDROID_LOG_INFO, module, format, args);
#else
char stack_buffer[CLOG_STACK_BUFFER_SIZE];
char* heap_buffer = NULL;
char* out_buffer = &stack_buffer[0];
/* The first call to vsnprintf will clobber args, thus need a copy in case a
* second vsnprintf call is needed */
va_list args_copy;
va_copy(args_copy, args);
int prefix_chars = CLOG_INFO_PREFIX_LENGTH;
if (module == NULL) {
memcpy(stack_buffer, CLOG_INFO_PREFIX, CLOG_INFO_PREFIX_LENGTH);
} else {
prefix_chars = snprintf(
stack_buffer, CLOG_STACK_BUFFER_SIZE, CLOG_INFO_PREFIX_FORMAT, module);
if (prefix_chars < 0) {
/* Format error in prefix (possible if prefix is modified): skip prefix
* and continue as if nothing happened. */
prefix_chars = 0;
}
}
int format_chars;
if (prefix_chars + CLOG_SUFFIX_LENGTH >= CLOG_STACK_BUFFER_SIZE) {
/*
* Prefix + suffix alone would overflow the on-stack buffer, thus need to
* use on-heap buffer. Do not even try to format the string into on-stack
* buffer.
*/
format_chars = vsnprintf(NULL, 0, format, args);
} else {
format_chars = vsnprintf(
&stack_buffer[prefix_chars],
CLOG_STACK_BUFFER_SIZE - prefix_chars - CLOG_SUFFIX_LENGTH,
format,
args);
}
if (format_chars < 0) {
/* Format error in the message: silently ignore this particular message. */
goto cleanup;
}
if (prefix_chars + format_chars + CLOG_SUFFIX_LENGTH >
CLOG_STACK_BUFFER_SIZE) {
/* Allocate a buffer on heap, and vsnprintf to this buffer */
heap_buffer = malloc(prefix_chars + format_chars + CLOG_SUFFIX_LENGTH);
if (heap_buffer == NULL) {
goto cleanup;
}
if (prefix_chars > CLOG_STACK_BUFFER_SIZE) {
/* Prefix didn't fit into on-stack buffer, re-format it again to on-heap
* buffer */
snprintf(
heap_buffer,
prefix_chars + 1 /* for '\0'-terminator */,
CLOG_INFO_PREFIX_FORMAT,
module);
} else {
/* Copy pre-formatted prefix from on-stack buffer to on-heap buffer */
memcpy(heap_buffer, stack_buffer, prefix_chars);
}
vsnprintf(
heap_buffer + prefix_chars,
format_chars + CLOG_SUFFIX_LENGTH,
format,
args_copy);
out_buffer = heap_buffer;
}
out_buffer[prefix_chars + format_chars] = '\n';
#ifdef _WIN32
DWORD bytes_written;
WriteFile(
GetStdHandle(STD_OUTPUT_HANDLE),
out_buffer,
prefix_chars + format_chars + CLOG_SUFFIX_LENGTH,
&bytes_written,
NULL);
#else
write(
STDOUT_FILENO,
out_buffer,
prefix_chars + format_chars + CLOG_SUFFIX_LENGTH);
#endif
cleanup:
free(heap_buffer);
va_end(args_copy);
#endif
}
void clog_vlog_debug(const char* module, const char* format, va_list args) {
#if defined(__ANDROID__) && !CLOG_LOG_TO_STDIO
__android_log_vprint(ANDROID_LOG_DEBUG, module, format, args);
#else
char stack_buffer[CLOG_STACK_BUFFER_SIZE];
char* heap_buffer = NULL;
char* out_buffer = &stack_buffer[0];
/* The first call to vsnprintf will clobber args, thus need a copy in case a
* second vsnprintf call is needed */
va_list args_copy;
va_copy(args_copy, args);
int prefix_chars = CLOG_DEBUG_PREFIX_LENGTH;
if (module == NULL) {
memcpy(stack_buffer, CLOG_DEBUG_PREFIX, CLOG_DEBUG_PREFIX_LENGTH);
} else {
prefix_chars = snprintf(
stack_buffer, CLOG_STACK_BUFFER_SIZE, CLOG_DEBUG_PREFIX_FORMAT, module);
if (prefix_chars < 0) {
/* Format error in prefix (possible if prefix is modified): skip prefix
* and continue as if nothing happened. */
prefix_chars = 0;
}
}
int format_chars;
if (prefix_chars + CLOG_SUFFIX_LENGTH >= CLOG_STACK_BUFFER_SIZE) {
/*
* Prefix + suffix alone would overflow the on-stack buffer, thus need to
* use on-heap buffer. Do not even try to format the string into on-stack
* buffer.
*/
format_chars = vsnprintf(NULL, 0, format, args);
} else {
format_chars = vsnprintf(
&stack_buffer[prefix_chars],
CLOG_STACK_BUFFER_SIZE - prefix_chars - CLOG_SUFFIX_LENGTH,
format,
args);
}
if (format_chars < 0) {
/* Format error in the message: silently ignore this particular message. */
goto cleanup;
}
if (prefix_chars + format_chars + CLOG_SUFFIX_LENGTH >
CLOG_STACK_BUFFER_SIZE) {
/* Allocate a buffer on heap, and vsnprintf to this buffer */
heap_buffer = malloc(prefix_chars + format_chars + CLOG_SUFFIX_LENGTH);
if (heap_buffer == NULL) {
goto cleanup;
}
if (prefix_chars > CLOG_STACK_BUFFER_SIZE) {
/* Prefix didn't fit into on-stack buffer, re-format it again to on-heap
* buffer */
snprintf(
heap_buffer,
prefix_chars + 1 /* for '\0'-terminator */,
CLOG_DEBUG_PREFIX_FORMAT,
module);
} else {
/* Copy pre-formatted prefix from on-stack buffer to on-heap buffer */
memcpy(heap_buffer, stack_buffer, prefix_chars);
}
vsnprintf(
heap_buffer + prefix_chars,
format_chars + CLOG_SUFFIX_LENGTH,
format,
args_copy);
out_buffer = heap_buffer;
}
out_buffer[prefix_chars + format_chars] = '\n';
#ifdef _WIN32
DWORD bytes_written;
WriteFile(
GetStdHandle(STD_OUTPUT_HANDLE),
out_buffer,
prefix_chars + format_chars + CLOG_SUFFIX_LENGTH,
&bytes_written,
NULL);
#else
write(
STDOUT_FILENO,
out_buffer,
prefix_chars + format_chars + CLOG_SUFFIX_LENGTH);
#endif
cleanup:
free(heap_buffer);
va_end(args_copy);
#endif
}
```
|
```shell
How to unstage a staged file
Adding a remote repository
Using tags for version control
Use `short` status to make output more compact
`master` and `origin` aren't special
```
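Each of the tips listed above maps to a short command. A minimal walkthrough in a scratch repository (the directory name, file name, and remote URL are placeholders):

```shell
set -e
git init -q demo && cd demo
git config user.email "demo@example.com"   # local identity so commit works
git config user.name "Demo"
echo "hello" > notes.txt
git add notes.txt
git commit -qm "initial commit"

echo "world" >> notes.txt
git add notes.txt
git restore --staged notes.txt             # unstage a staged file (Git >= 2.23)
git remote add origin https://example.com/demo.git   # add a remote repository
git tag v0.1                               # lightweight tag on the current commit
git status --short                         # compact status: " M notes.txt"
```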
|
The 1998 Denmark Open in badminton was held in Vejle, from October 14 to October 18, 1998. It was a four-star tournament and the prize money was US$120,000.
Venue
Vejle Idraets Center, Vejle
Final results
References
Denmark Open
Denmark
|
```vue
<template>
<table cellspacing="0" cellpadding="0" border="0" :style="styles">
<colgroup>
<col v-for="(column, index) in columns" :width="setCellWidth(column)">
<col v-if="$parent.showVerticalScrollBar" :width="$parent.scrollBarWidth"/>
</colgroup>
<thead>
<tr v-for="(cols, rowIndex) in headRows">
<th
v-for="(column, index) in cols"
:colspan="column.colSpan"
:rowspan="column.rowSpan"
:class="alignCls(column)">
<div :class="cellClasses(column)">
<template v-if="column.type === 'expand'">
<span v-if="!column.renderHeader">{{ column.title || '' }}</span>
<render-header v-else :render="column.renderHeader" :column="column" :index="index"></render-header>
</template>
<template v-else-if="column.type === 'selection'"><Checkbox :value="isSelectAll" :disabled="isSelectDisabled" @on-change="selectAll"></Checkbox></template>
<template v-else>
<span v-if="!column.renderHeader" :class="{[prefixCls + '-cell-sort']: column.sortable}" @click="handleSortByHead(getColumn(rowIndex, index)._index)">{{ column.title || '#' }}</span>
<render-header v-else :render="column.renderHeader" :column="column" :index="index"></render-header>
<span :class="[prefixCls + '-sort']" v-if="column.sortable">
<i class="ivu-icon ivu-icon-md-arrow-dropup" :class="{on: getColumn(rowIndex, index)._sortType === 'asc'}" @click="handleSort(getColumn(rowIndex, index)._index, 'asc')"></i>
<i class="ivu-icon ivu-icon-md-arrow-dropdown" :class="{on: getColumn(rowIndex, index)._sortType === 'desc'}" @click="handleSort(getColumn(rowIndex, index)._index, 'desc')"></i>
</span>
<Poptip
v-if="isPopperShow(column)"
v-model="getColumn(rowIndex, index)._filterVisible"
placement="bottom"
popper-class="ivu-table-popper"
transfer
@on-popper-hide="handleFilterHide(getColumn(rowIndex, index)._index)">
<span :class="[prefixCls + '-filter']">
<i class="ivu-icon ivu-icon-ios-funnel" :class="{on: getColumn(rowIndex, index)._isFiltered}"></i>
</span>
<div slot="content" :class="[prefixCls + '-filter-list']" v-if="getColumn(rowIndex, index)._filterMultiple">
<div :class="[prefixCls + '-filter-list-item']">
<checkbox-group v-model="getColumn(rowIndex, index)._filterChecked">
<checkbox v-for="(item, index) in column.filters" :key="index" :label="item.value">{{ item.label }}</checkbox>
</checkbox-group>
</div>
<div :class="[prefixCls + '-filter-footer']">
<i-button type="text" size="small" :disabled="!getColumn(rowIndex, index)._filterChecked.length" @click.native="handleFilter(getColumn(rowIndex, index)._index)">{{ t('i.table.confirmFilter') }}</i-button>
<i-button type="text" size="small" @click.native="handleReset(getColumn(rowIndex, index)._index)">{{ t('i.table.resetFilter') }}</i-button>
</div>
</div>
<div slot="content" :class="[prefixCls + '-filter-list']" v-else>
<ul :class="[prefixCls + '-filter-list-single']">
<li
:class="itemAllClasses(getColumn(rowIndex, index))"
@click="handleReset(getColumn(rowIndex, index)._index)">{{ t('i.table.clearFilter') }}</li>
<li
:class="itemClasses(getColumn(rowIndex, index), item)"
v-for="item in column.filters"
@click="handleSelect(getColumn(rowIndex, index)._index, item.value)">{{ item.label }}</li>
</ul>
</div>
</Poptip>
</template>
</div>
</th>
<th v-if="$parent.showVerticalScrollBar && rowIndex===0" :class='scrollBarCellClass()' :rowspan="headRows.length"></th>
</tr>
</thead>
</table>
</template>
<script>
import CheckboxGroup from '../checkbox/checkbox-group.vue';
import Checkbox from '../checkbox/checkbox.vue';
import Poptip from '../poptip/poptip.vue';
import iButton from '../button/button.vue';
import renderHeader from './header';
import Mixin from './mixin';
import Locale from '../../mixins/locale';
export default {
name: 'TableHead',
mixins: [ Mixin, Locale ],
components: { CheckboxGroup, Checkbox, Poptip, iButton, renderHeader },
props: {
prefixCls: String,
styleObject: Object,
columns: Array,
objData: Object,
data: Array, // rebuildData
columnsWidth: Object,
fixed: {
type: [Boolean, String],
default: false
},
columnRows: Array,
fixedColumnRows: Array
},
computed: {
styles () {
const style = Object.assign({}, this.styleObject);
const width = parseInt(this.styleObject.width);
style.width = `${width}px`;
return style;
},
isSelectAll () {
let isSelectAll = true;
if (!this.data.length) isSelectAll = false;
if (!this.data.find(item => !item._disabled)) isSelectAll = false; // #1751
for (let i = 0; i < this.data.length; i++) {
if (!this.objData[this.data[i]._index]._isChecked && !this.objData[this.data[i]._index]._isDisabled) {
isSelectAll = false;
break;
}
}
return isSelectAll;
},
headRows () {
const isGroup = this.columnRows.length > 1;
if (isGroup) {
return this.fixed ? this.fixedColumnRows : this.columnRows;
} else {
return [this.columns];
}
},
isSelectDisabled () {
let isSelectDisabled = false;
if (!this.data.length) isSelectDisabled = true;
if (!this.data.find(item => !item._disabled)) isSelectDisabled = true;
return isSelectDisabled;
}
},
methods: {
cellClasses (column) {
return [
`${this.prefixCls}-cell`,
{
[`${this.prefixCls}-hidden`]: !this.fixed && column.fixed && (column.fixed === 'left' || column.fixed === 'right'),
[`${this.prefixCls}-cell-with-selection`]: column.type === 'selection'
}
];
},
scrollBarCellClass(){
let hasRightFixed = false;
for(let i in this.headRows){
for(let j in this.headRows[i]){
if(this.headRows[i][j].fixed === 'right') {
hasRightFixed = true;
break;
}
}
// stop scanning further rows once a right-fixed column is found
if(hasRightFixed) break;
}
return [
{
[`${this.prefixCls}-hidden`]: hasRightFixed
}
];
},
itemClasses (column, item) {
return [
`${this.prefixCls}-filter-select-item`,
{
[`${this.prefixCls}-filter-select-item-selected`]: column._filterChecked[0] === item.value
}
];
},
itemAllClasses (column) {
return [
`${this.prefixCls}-filter-select-item`,
{
[`${this.prefixCls}-filter-select-item-selected`]: !column._filterChecked.length
}
];
},
selectAll () {
const status = !this.isSelectAll;
this.$parent.selectAll(status);
},
handleSort (index, type) {
// index #5580
const column = this.columns.find(item => item._index === index);
const _index = column._index;
if (column._sortType === type) {
type = 'normal';
}
this.$parent.handleSort(_index, type);
},
handleSortByHead (index) {
// index #5580
const column = this.columns.find(item => item._index === index);
if (column.sortable) {
const type = column._sortType;
if (type === 'normal') {
this.handleSort(index, 'asc');
} else if (type === 'asc') {
this.handleSort(index, 'desc');
} else {
this.handleSort(index, 'normal');
}
}
},
handleFilter (index) {
this.$parent.handleFilter(index);
},
handleSelect (index, value) {
this.$parent.handleFilterSelect(index, value);
},
handleReset (index) {
this.$parent.handleFilterReset(index);
},
handleFilterHide (index) {
this.$parent.handleFilterHide(index);
},
// _ isGroup
getColumn (rowIndex, index) {
const isGroup = this.columnRows.length > 1;
if (isGroup) {
const id = this.headRows[rowIndex][index].__id;
return this.columns.filter(item => item.__id === id)[0];
} else {
return this.headRows[rowIndex][index];
}
}
}
};
</script>
```
|
Jan "JD" Derbyshire is a Canadian theatre artist, comedian, and writer. She has performed her one-woman show, Certified, across Canada, including in Vancouver where it won two Jessie Richardson Theatre Awards.
Early life and education
Derbyshire was born and raised in Calgary, Alberta. She completed a master's degree in Inclusive Design at the Ontario College of Art and Design in 2014.
Career
In 2010 alongside the Paralympic Games, Derbyshire performed her one-woman show Funny in the Head at the Kickstart Disability Arts and Culture Festival. Funny in the Head, a show about Derbyshire's experiences with bipolar disorder, was the only show performed at Kickstart.
Derbyshire developed the one-woman show, Certified, in 2018. She has performed Certified for Handsome Alice Theatre (Calgary) in 2018, Touchstone Theatre (Vancouver) in 2019, at One Yellow Rabbit's High Performance Rodeo (Calgary) in 2020, and at the Progress Festival (Toronto) in 2020.
In 2020, Derbyshire was writer-in-residence at Mount Royal University.
Personal life
Derbyshire and her ex-husband, TV director Michael Rohl, have a daughter, actress Kacey Rohl.
Plays
Labour Unions, the Brotherhood of Mothers
Freaky Jane Fine Takes on the Serious World
Bearded Circus Ladies
Ingenius Speculations co-written with Kim Selody, Rita Bozi, and Roy Surette
Joke You
Maharani and the Maple Leaf
Under the Big Top
All In
Funny in the Head
Turkey in the Woods
Sorry Toronto, Really I Am
Certified
Awards
References
OCAD University alumni
Canadian women dramatists and playwrights
Canadian women comedians
Living people
Writers from Calgary
20th-century Canadian dramatists and playwrights
21st-century Canadian dramatists and playwrights
Date of birth missing (living people)
Year of birth missing (living people)
Comedians from Alberta
20th-century Canadian women writers
|
```c++
#define TORCH_ASSERT_ONLY_METHOD_OPERATORS
#include <ATen/core/Tensor.h>
#include <ATen/AccumulateType.h>
#include <ATen/Dispatch.h>
#include <ATen/TensorUtils.h>
#include <ATen/native/LossMulti.h>
#include <c10/util/irange.h>
#ifndef AT_PER_OPERATOR_HEADERS
#include <ATen/Functions.h>
#include <ATen/NativeFunctions.h>
#else
#include <ATen/ops/empty.h>
#include <ATen/ops/multilabel_margin_loss_backward_native.h>
#include <ATen/ops/multilabel_margin_loss_forward.h>
#include <ATen/ops/multilabel_margin_loss_forward_native.h>
#include <ATen/ops/multilabel_margin_loss_native.h>
#include <ATen/ops/zeros_like.h>
#endif
namespace at::native {
namespace {
template <typename scalar_t>
inline scalar_t multilabel_margin_loss_forward_inner_sum_cpu(
const scalar_t* input_data,
const int64_t* target_data,
scalar_t* is_target_data,
int64_t dim) {
using accscalar_t = at::acc_type<scalar_t, false>;
accscalar_t sum = 0;
for (const auto ddt : c10::irange(dim)) {
int64_t target_idx = target_data[ddt];
if (target_idx < 0) {
break;
}
is_target_data[target_idx] = 1;
}
for (const auto dt : c10::irange(dim)) {
int64_t target_idx = target_data[dt];
if (target_idx < 0) {
break;
}
scalar_t input_target = input_data[target_idx];
for (const auto d : c10::irange(dim)) {
if (!is_target_data[d]) {
scalar_t z = 1 - input_target + input_data[d];
if (z > 0) {
sum += z;
}
}
}
}
return sum;
}
template <typename scalar_t>
static void multilabel_margin_loss_forward_out_frame(
const Tensor& input_contiguous,
const Tensor& target_contiguous,
Tensor& output,
Tensor& is_target,
int64_t reduction,
int64_t nframe,
int64_t dim) {
using accscalar_t = at::acc_type<scalar_t, false>;
const scalar_t* input_data = input_contiguous.const_data_ptr<scalar_t>();
const int64_t* target_data = target_contiguous.const_data_ptr<int64_t>();
scalar_t* is_target_data = is_target.data_ptr<scalar_t>();
if (reduction != Reduction::None || output.dim() == 0) {
scalar_t* output_data = output.data_ptr<scalar_t>();
accscalar_t sum = 0;
for (C10_UNUSED const auto t : c10::irange(nframe)) {
sum += multilabel_margin_loss_forward_inner_sum_cpu(
input_data, target_data, is_target_data, dim);
input_data += dim;
target_data += dim;
is_target_data += dim;
}
sum /= dim;
if (reduction == Reduction::Mean) {
sum /= nframe;
}
*output_data = sum; // write scalar output value
} else {
auto output_acc = output.accessor<scalar_t, 1>();
for (const auto t : c10::irange(nframe)) {
scalar_t sum = multilabel_margin_loss_forward_inner_sum_cpu(
input_data, target_data, is_target_data, dim);
sum /= dim;
output_acc[t] = sum;
input_data += dim;
target_data += dim;
is_target_data += dim;
}
}
}
static void multilabel_margin_loss_forward_out_cpu_template(
const Tensor& input,
const Tensor& target,
Tensor& output,
Tensor& is_target,
int64_t reduction) {
#ifndef STRIP_ERROR_MESSAGES
auto target_arg = TensorArg(target, "target", 2);
#endif
// NOLINTNEXTLINE(cppcoreguidelines-init-variables)
int64_t nframe, dim;
const int64_t ndims = input.dim();
multilabel_margin_loss_shape_check(nframe, dim, ndims, input, target);
// special case target.dim() <= 1: produce scalar output for scalar inputs
// even if reduction == Reduction::None
if (reduction != Reduction::None || target.dim() <= 1) {
output.resize_({});
} else {
output.resize_({nframe});
}
is_target.resize_as_(target);
TORCH_CHECK(is_target.is_contiguous(), "is_target must be contiguous");
is_target.zero_();
if (input.numel() == 0) {
return;
}
TORCH_CHECK(
target.min().item<int64_t>() >= -1, target_arg, " is out of range");
TORCH_CHECK(
target.max().item<int64_t>() < dim, target_arg, " is out of range");
auto input_contiguous = input.contiguous();
auto target_contiguous = target.contiguous();
AT_DISPATCH_FLOATING_TYPES(
input.scalar_type(), "multilabel_margin_loss_forward_out_frame", [&] {
multilabel_margin_loss_forward_out_frame<scalar_t>(
input_contiguous, target_contiguous, output, is_target, reduction, nframe, dim);
});
}
template <typename scalar_t>
static void multilabel_margin_loss_backward_out_frame(
Tensor& grad_input,
const Tensor& grad_output,
const Tensor& input_contiguous,
const Tensor& target_contiguous,
int64_t reduction,
const Tensor& is_target_contiguous,
int64_t nframe,
int64_t dim) {
#ifndef STRIP_ERROR_MESSAGES
auto is_target_arg = TensorArg(is_target_contiguous, "is_target", 5);
#endif
TORCH_CHECK(
is_target_contiguous.min().item<scalar_t>() >= 0, is_target_arg, " is out of range");
TORCH_CHECK(
is_target_contiguous.max().item<scalar_t>() <= 1, is_target_arg, " is out of range");
const scalar_t* input_data = input_contiguous.const_data_ptr<scalar_t>();
const int64_t* target_data = target_contiguous.const_data_ptr<int64_t>();
const scalar_t* is_target_data = is_target_contiguous.const_data_ptr<scalar_t>();
scalar_t g = static_cast<scalar_t>(
// NOLINTNEXTLINE(cppcoreguidelines-narrowing-conversions,bugprone-narrowing-conversions)
reduction == Reduction::Mean ? 1. / (nframe * dim) : 1. / dim);
scalar_t* grad_input_row_data = grad_input.mutable_data_ptr<scalar_t>();
for (C10_UNUSED const auto t : c10::irange(nframe)) {
for (const auto dt : c10::irange(dim)) {
int64_t target_idx = target_data[dt];
if (target_idx < 0) {
break;
}
scalar_t input_target = input_data[target_idx];
for (const auto d : c10::irange(dim)) {
if (!is_target_data[d]) {
scalar_t z = 1 - input_target + input_data[d];
if (z > 0) {
grad_input_row_data[target_idx] -= g;
grad_input_row_data[d] += g;
}
}
}
}
input_data += dim;
target_data += dim;
is_target_data += dim;
grad_input_row_data += dim;
}
scalar_t* grad_input_data = grad_input.mutable_data_ptr<scalar_t>();
if (reduction != Reduction::None || grad_output.dim() == 0) {
assert(
reduction != Reduction::None || grad_output.dim() > 0 || nframe == 1);
const auto d = *grad_output.const_data_ptr<scalar_t>();
for (int64_t t = 0; t < nframe * dim; t++) {
grad_input_data[t] *= d;
}
} else {
check_dim_size(grad_output, 1, 0, nframe);
auto grad_output_acc = grad_output.accessor<const scalar_t, 1>();
for (const auto t : c10::irange(nframe)) {
for (const auto d : c10::irange(dim)) {
grad_input_data[t * dim + d] *= grad_output_acc[t];
}
}
}
}
static void multilabel_margin_loss_backward_out_cpu_template(
Tensor& grad_input,
const Tensor& grad_output,
const Tensor& input,
const Tensor& target,
int64_t reduction,
const Tensor& is_target) {
// NOLINTNEXTLINE(cppcoreguidelines-init-variables)
int64_t nframe, dim;
CheckedFrom c = "multilabel_margin_loss_backward_cpu_template";
auto target_arg = TensorArg(target, "target", 3);
auto is_target_arg = TensorArg(is_target, "is_target", 5);
const int64_t ndims = input.dim();
multilabel_margin_loss_shape_check(nframe, dim, ndims, input, target);
checkSameSize(c, target_arg, is_target_arg);
grad_input.resize_as_(input);
if (grad_input.numel() == 0) {
return;
}
TORCH_CHECK(grad_input.is_contiguous(), "grad_input must be contiguous");
grad_input.zero_();
TORCH_CHECK(
target.min().item<int64_t>() >= -1, target_arg, " is out of range");
TORCH_CHECK(
target.max().item<int64_t>() < dim, target_arg, " is out of range");
auto input_contiguous = input.contiguous();
auto target_contiguous = target.contiguous();
auto is_target_contiguous = is_target.contiguous();
AT_DISPATCH_FLOATING_TYPES(
input.scalar_type(), "multilabel_margin_loss_backward_out_frame", [&] {
multilabel_margin_loss_backward_out_frame<scalar_t>(
grad_input,
grad_output,
input_contiguous,
target_contiguous,
reduction,
is_target_contiguous,
nframe,
dim);
});
}
} // namespace
std::tuple<Tensor&, Tensor&> multilabel_margin_loss_forward_out_cpu(const Tensor& self,
const Tensor& target,
int64_t reduction,
Tensor& output,
Tensor& is_target) {
multilabel_margin_loss_forward_out_cpu_template(
self, target, output, is_target, reduction);
return std::tuple<Tensor&, Tensor&>(output, is_target);
}
std::tuple<Tensor, Tensor> multilabel_margin_loss_forward_cpu(
const Tensor& self,
const Tensor& target,
int64_t reduction) {
auto output = at::empty({0}, self.options());
auto is_target = at::empty({0}, self.options());
at::native::multilabel_margin_loss_forward_out_cpu(
self, target, reduction, output, is_target);
return std::make_tuple(output, is_target);
}
Tensor& multilabel_margin_loss_backward_cpu_out(const Tensor& grad_output,
const Tensor& self,
const Tensor& target,
int64_t reduction,
const Tensor& is_target,
Tensor& grad_input) {
multilabel_margin_loss_backward_out_cpu_template(
grad_input, grad_output, self, target, reduction, is_target);
return grad_input;
}
Tensor multilabel_margin_loss_backward_cpu(
const Tensor& grad_output,
const Tensor& self,
const Tensor& target,
int64_t reduction,
const Tensor& is_target) {
auto grad_input = at::zeros_like(self, LEGACY_CONTIGUOUS_MEMORY_FORMAT);
at::native::multilabel_margin_loss_backward_cpu_out(
grad_output, self, target, reduction, is_target, grad_input);
return grad_input;
}
Tensor & multilabel_margin_loss_out(const Tensor & self, const Tensor & target, int64_t reduction, Tensor & output) {
Tensor is_target = at::empty({0}, self.options());
return std::get<0>(at::multilabel_margin_loss_forward_out(output, is_target, self, target, reduction));
}
Tensor multilabel_margin_loss(const Tensor & self, const Tensor & target, int64_t reduction) {
return std::get<0>(at::multilabel_margin_loss_forward(self, target, reduction));
}
} // namespace at::native
```
|
```java
package com.fishercoder.thirdthousand;
import com.fishercoder.solutions.thirdthousand._2012;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
public class _2012Test {
private _2012.Solution1 solution1;
@BeforeEach
public void setup() {
solution1 = new _2012.Solution1();
}
@Test
public void test1() {
assertEquals(1, solution1.sumOfBeauties(new int[]{2, 4, 6, 4}));
}
@Test
public void test2() {
assertEquals(14, solution1.sumOfBeauties(new int[]{1, 2, 3, 4, 5, 7, 8, 9, 10}));
}
@Test
public void test3() {
assertEquals(0, solution1.sumOfBeauties(new int[]{9, 9, 3, 8, 7, 9, 6, 10}));
}
@Test
public void test4() {
assertEquals(0, solution1.sumOfBeauties(new int[]{8, 4, 6, 3, 10, 5, 8, 5, 5, 9}));
}
}
```
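For context, `_2012` refers to LeetCode problem 2012 ("Sum of Beauty in the Array"). A straightforward prefix-max/suffix-min implementation consistent with the expected values in the tests above (the real `_2012.Solution1` may be written differently) looks like:

```java
public class SumOfBeauties {
    // Beauty of nums[i] for 0 < i < n-1: 2 if nums[i] is strictly greater
    // than everything before it and strictly less than everything after it;
    // otherwise 1 if nums[i-1] < nums[i] < nums[i+1]; otherwise 0.
    public static int sumOfBeauties(int[] nums) {
        int n = nums.length;
        int[] suffixMin = new int[n];
        suffixMin[n - 1] = nums[n - 1];
        for (int i = n - 2; i >= 0; i--) {
            suffixMin[i] = Math.min(nums[i], suffixMin[i + 1]);
        }
        int prefixMax = nums[0]; // max of nums[0..i-1] while scanning
        int sum = 0;
        for (int i = 1; i <= n - 2; i++) {
            if (prefixMax < nums[i] && nums[i] < suffixMin[i + 1]) {
                sum += 2;
            } else if (nums[i - 1] < nums[i] && nums[i] < nums[i + 1]) {
                sum += 1;
            }
            prefixMax = Math.max(prefixMax, nums[i]);
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumOfBeauties(new int[]{2, 4, 6, 4})); // 1
    }
}
```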
|
```javascript
/**
* @license Apache-2.0
*
*
*
*
* Unless required by applicable law or agreed to in writing, software
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
*/
'use strict';
/**
* Lint rule to enforce that semicolons are omitted if output is not suppressed.
*
* @module @stdlib/_tools/repl-txt/rules/code-semicolons
*
* @example
* var rule = require( '@stdlib/_tools/repl-txt/rules/code-semicolons' );
*
* console.log( rule );
*/
// MODULES //
var main = require( './main.js' );
// EXPORTS //
module.exports = main;
```
|
The Reeves AN/MSQ-51 Aerial Target Control Central (ATCC) was a combination radar/computer/communications system (a "Q" system) developed 1961–3 for United States Navy "aerial target out-of-sight control". In addition to the "Target Control System AN/SRW-4D" with radios and "Antenna Assemblies for Target Control and Communications (7 Units)", the ATCC included acquisition/surveillance and tracking radars, a Mark X IFF/SIF, and an analog computer. The ATCC's automatic tracking radar was derived from the Western Electric M-33 gun-laying radar and could process double-pulse 9340-9370 MHz beacon returns from transponders up to away from the AN/MSQ-51 transmitting 9215-9285 MHz radar pulses. If an ATCC was equipped with a "Telemetry Receiving Station", IRIG channels 5-14 could also be received from QF-9G and Q-2C unmanned aerial vehicles. Other ATCC-controlled drones included the QF-9F, KDA-1, KDA-4, KDB-1 and KD2R-5. For "RF communications (2 to 25 mc.)" to command the drone, the ATCC used a "Collins Radio Co. Model 618T-3" Single Sideband Transceiver (SST) with Control Unit 714E-2 for 28,000 channels (400 watts PEP, 100 w AM). The 1000 watt voice radio system had 2 UHF AN/GRC-27 sets "with Control-Indicator 6-806/GR" for 1750 channels (consoles had headsets, footpedals, and crew intercommunications).
Configuration
Similar to the USMC's post-WWII AN/TPQ-2 and Korean War AN/MPQ-14, the AN/MSQ-51 used an Operations Trailer, a flatbed trailer with Electric Generator units, and a Maintenance Trailer. An additional flatbed was used for transporting the acquisition (surveillance) antenna assembly, which included a 16' oblong dish and a ground pedestal. When emplaced, the "Operations Trailer Assembly" was the box-style fifth wheel trailer with antennas on top (tracking antenna with vertically polarized lens and conical scan, two AT-781/AU voice communications antennas, and two AT-197/GR telemetry antennas), a combination air conditioning and electric heating system, and interior operator equipment:
Tracking Console with Plan Position Indicator (PPI), A-scans for range/azimuth/elevation, AN/TPA-3 IFF/SIF electronics (e.g., Video Decoder MX1995 & Remote Switching Control C-1903), and, for the surveillance antenna signal, an "Acquisition Receiver Control" panel.
Tactical Control Console, a "Modified M33 Unit", with vertical 30" plotting board, command signal Transmitter Control C-2802/SRW-4, Thirty Channel Event Recorder ("on-off"/"Beep" type) and an additional PPI.
Computer cabinet with vacuum tube amplifiers for analog summing, a Horizontal Range Servo for trigonometric computation, and, mounted on the right cabinet door, a Command Monitor Display Panel along with Telemetry Indicators (if equipped).
As with the M-33's computation of an enemy aircraft location, the AN/MSQ-51 computer derived "target altitude" from the elevation resolver, timed the "rate of change of altitude" using a differentiating amplifier, and "resolved horizontal plane coordinates" from the antenna's azimuth orientation.
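The geometry described above reduces to plane trigonometry on the tracker's slant range, elevation, and azimuth. As a rough illustration only (the AN/MSQ-51 performed this with analog resolvers and servo amplifiers, not software), the computation can be sketched in Python; the function and variable names here are illustrative, not from the source:

```python
import math

def resolve_target(slant_range, elevation_deg, azimuth_deg):
    """Resolve tracking-antenna polar data into Cartesian coordinates.

    Mirrors the computation described above: target altitude from the
    elevation angle, horizontal-plane (x, y) coordinates from the
    antenna's azimuth orientation.  A digital sketch of what the
    AN/MSQ-51 mechanized with analog servos and resolvers.
    """
    el = math.radians(elevation_deg)
    az = math.radians(azimuth_deg)
    altitude = slant_range * math.sin(el)       # "target altitude"
    ground_range = slant_range * math.cos(el)   # horizontal range
    x = ground_range * math.sin(az)             # east component
    y = ground_range * math.cos(az)             # north component
    return x, y, altitude
```

The "rate of change of altitude" mentioned above would then simply be the time derivative of `altitude`, which the analog computer formed with a differentiating amplifier.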
Personnel
To "permit one man [tracking] operation instead of three", the M-33 tracking console was modified for the AN/MSQ-51's "Tracking Radar Operator"; the remainder of the 4-man ATCC operations crew consisted of a "Surveillance Radar Operator" and 2 Drone Controllers, with the primary in the Operations Trailer. The "Secondary Operator's Position [was] located outside the van", and a "telemetry directional helical receiving antenna…on top of a mast ten feet high [was] 100 feet from the operating van." ATCC maintenance was by 6 radar technicians, 4 AN/SRW-4D technicians, and 4 for HVAC, generators, and accessories.
At the China Lake "Target Radar Site", a Reeves Instrument Corporation representative served as an AN/MSQ-51 civilian contractor until ~1968.
References
Cold War military computer systems of the United States
Cold War military equipment of the United States Navy
Ground radars
M
|
Robert Kiss may refer to:
Róbert Kiss (born 1967), Hungarian fencer
Robert S. Kiss (1957–2021), American politician
|
Eriptychiida is an extinct marine taxon of vertebrates in the group Pteraspidomorphi.
The order contains the genus Eriptychius, and fossilized specimens from this genus have been found in the Gull River Formation of Ontario, the Harding Formation of Colorado, and the Bighorn Dolomite of Wyoming. The group contains two documented species: Eriptychius americanus and Eriptychius orvigi.
Characteristics
The structure of the dentine of eriptychiids is in many respects closer to that of heterostracans than to that of astraspids. This is the only argument for placing them as the closest relatives of heterostracans among the Ordovician vertebrates. However, eriptychiids differ from all other pteraspidomorphs in having a massively calcified endoskeleton, pervaded by canals for blood vessels.
Taxonomy
Order †Eriptychiiformes Ørvig 1958
Genus ?†Eleochera Sansom & Smith 2005
Family †Eriptychiidae Tarlo 1962
Genus †Eriptychius Walcott 1892
Family †Oniscolepididae Märss & Karatajūtė-Talimaa 2009
Genus †Kallostrakon Lankester 1870
Genus †Oniscolepis Pander 1856 non Groß 1961 [Strosipherus Pander 1856]
In a 2023 study, Eriptychius was placed directly under Vertebrata, without assignment to a class or order.
See also
Fish
References
External links
Eriptychiida
Pteraspidomorphi
Prehistoric jawless fish orders
Ordovician jawless fish
Late Ordovician animals
Ordovician fish of North America
Late Ordovician first appearances
Late Ordovician taxonomic orders
Late Ordovician extinctions
|
```c++
/*******************************************************************************
*
*
* path_to_url
*
* Unless required by applicable law or agreed to in writing, software
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
*******************************************************************************/
#ifndef GPU_MICROKERNELS_SHIM_HPP
#define GPU_MICROKERNELS_SHIM_HPP
#include <string>
#include <vector>
#include "package.hpp"
namespace dnnl {
namespace impl {
namespace gpu {
namespace intel {
namespace micro {
enum class HostLanguage { None, OpenCL_C, SYCL, vISA };
struct ShimOptions {
std::string decorator;
int subgroupSize = 0;
bool copyScalarArgs = true;
bool copyTensorArgs = false;
bool useTileOps = false;
uint32_t microkernelID = 0;
};
std::string generateShim(const Package &package, HostLanguage language,
const ShimOptions &options = ShimOptions());
} // namespace micro
} // namespace intel
} // namespace gpu
} // namespace impl
} // namespace dnnl
#endif
```
|
The 2022 Asian Track Cycling Championships (41st edition) took place at the Indira Gandhi Stadium Velodrome in New Delhi, India from 18 to 22 June 2022.
Medal summary
Men
Women
Medal table
References
External links
Official website
Asian Cycling Championships
Asia
Cycling
International cycle races hosted by India
Asian Track Cycling Championships
|
```java
/*
*/
package com.microsoft.sqlserver.jdbc.unit.statement;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;
import java.lang.reflect.Field;
import java.sql.BatchUpdateException;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Arrays;
import java.util.logging.Level;
import java.util.logging.Logger;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import org.junit.platform.runner.JUnitPlatform;
import org.junit.runner.RunWith;
import com.microsoft.sqlserver.jdbc.RandomUtil;
import com.microsoft.sqlserver.jdbc.SQLServerConnection;
import com.microsoft.sqlserver.jdbc.TestResource;
import com.microsoft.sqlserver.jdbc.TestUtils;
import com.microsoft.sqlserver.testframework.AbstractSQLGenerator;
import com.microsoft.sqlserver.testframework.AbstractTest;
import com.microsoft.sqlserver.testframework.Constants;
/**
* Tests batch execution with errors
*
*/
@RunWith(JUnitPlatform.class)
public class BatchExecuteWithErrorsTest extends AbstractTest {
public static final Logger log = Logger.getLogger("BatchExecuteWithErrors");
final String tableName = RandomUtil.getIdentifier("t_Repro47239");
final String insertStmt = "INSERT INTO " + AbstractSQLGenerator.escapeIdentifier(tableName)
+ " VALUES (999, 'HELLO', '4/12/1994')";
final String error16 = "RAISERROR ('raiserror level 16',16,42)";
final String select = "SELECT 1";
final String dateConversionError = "insert into " + AbstractSQLGenerator.escapeIdentifier(tableName)
+ " values (999999, 'Hello again', 'asdfasdf')";
String warning;
String error;
String severe;
@BeforeAll
public static void setupTests() throws Exception {
setConnection();
}
/**
* Batch test
*
* @throws Exception
*/
@Test
@DisplayName("Batch Test")
public void Repro47239() throws Exception {
Repro47239Internal("BatchInsert");
}
@Test
@DisplayName("Batch Test using bulk copy API")
public void Repro47239UseBulkCopyAPI() throws Exception {
Repro47239Internal("BulkCopy");
}
/**
* Tests large methods
*
* @throws Exception
*/
@Test
@DisplayName("Regression test for using 'large' methods")
@Tag(Constants.xAzureSQLDW)
public void Repro47239large() throws Exception {
Repro47239largeInternal("BatchInsert");
}
@Test
@DisplayName("Regression test for using 'large' methods using bulk copy API")
@Tag(Constants.xAzureSQLDW)
public void Repro47239largeUseBulkCopyAPI() throws Exception {
Repro47239largeInternal("BulkCopy");
}
private void Repro47239Internal(String mode) throws Exception {
try (Connection con = getConnection()) {
if (isSqlAzure()) {
// SQL Azure will throw an exception for "raiserror WITH LOG", so the following RAISERROR statements do
// not have the "with log" option
warning = "RAISERROR ('raiserror level 4',4,1)";
error = "RAISERROR ('raiserror level 11',11,1)";
// On SQL Azure, raising FATAL error by RAISERROR() is not supported and there is no way to
// cut the current connection by a statement inside a SQL batch.
// Details: Although one can simulate a fatal error (that cuts the connections) by dropping the
// database,
// this simulation cannot be written entirely in TSQL (because it needs a new connection),
// and thus it cannot be put into a TSQL batch and it is useless here.
// So we have to skip the last scenario of this test case, i.e. "Test Severe (connection-closing)
// errors"
// It is worthwhile to still execute the first 5 test scenarios of this test case, in order to have best
// test coverage.
severe = "--Not executed when testing against SQL Azure"; // this is a dummy statement that is never
// executed on SQL Azure
} else {
warning = "RAISERROR ('raiserror level 4',4,1) WITH LOG";
error = "RAISERROR ('raiserror level 11',11,1) WITH LOG";
severe = "RAISERROR ('raiserror level 20',20,1) WITH LOG";
}
}
int[] actualUpdateCounts;
int[] expectedUpdateCounts;
String actualExceptionText;
try (Connection conn = getConnection(); Statement stmt = conn.createStatement()) {
if (mode.equalsIgnoreCase("bulkcopy")) {
modifyConnectionForBulkCopyAPI((SQLServerConnection) conn);
}
TestUtils.dropTableIfExists(AbstractSQLGenerator.escapeIdentifier(tableName), stmt);
stmt.executeUpdate("create table " + AbstractSQLGenerator.escapeIdentifier(tableName)
+ " (c1_int int, c2_varchar varchar(20), c3_date datetime, c4_int int identity(1,1))");
// Regular Statement batch update
expectedUpdateCounts = new int[] {1, -2, 1, -2, 1, -2};
try (Statement batchStmt = conn.createStatement()) {
batchStmt.addBatch(insertStmt);
batchStmt.addBatch(warning);
batchStmt.addBatch(insertStmt);
batchStmt.addBatch(warning);
batchStmt.addBatch(insertStmt);
batchStmt.addBatch(warning);
try {
actualUpdateCounts = batchStmt.executeBatch();
actualExceptionText = "";
} catch (BatchUpdateException bue) {
actualUpdateCounts = bue.getUpdateCounts();
actualExceptionText = bue.getMessage();
if (log.isLoggable(Level.FINE)) {
log.fine("BatchUpdateException occurred. Message:" + actualExceptionText);
}
} finally {
batchStmt.close();
}
}
if (log.isLoggable(Level.FINE)) {
log.fine("UpdateCounts:");
}
for (int updateCount : actualUpdateCounts) {
log.fine("" + updateCount + ",");
}
assertTrue(Arrays.equals(actualUpdateCounts, expectedUpdateCounts),
TestResource.getResource("R_testInterleaved"));
expectedUpdateCounts = new int[] {-3, 1, 1, 1};
stmt.addBatch(error);
stmt.addBatch(insertStmt);
stmt.addBatch(insertStmt);
stmt.addBatch(insertStmt);
try {
actualUpdateCounts = stmt.executeBatch();
actualExceptionText = "";
} catch (BatchUpdateException bue) {
actualUpdateCounts = bue.getUpdateCounts();
actualExceptionText = bue.getMessage();
}
log.fine("UpdateCounts:");
for (int updateCount : actualUpdateCounts) {
log.fine("" + updateCount + ",");
}
assertTrue(Arrays.equals(actualUpdateCounts, expectedUpdateCounts),
TestResource.getResource("R_errorFollowInserts"));
// 50280
expectedUpdateCounts = new int[] {1, -3};
stmt.addBatch(insertStmt);
stmt.addBatch(error16);
try {
actualUpdateCounts = stmt.executeBatch();
actualExceptionText = "";
} catch (BatchUpdateException bue) {
actualUpdateCounts = bue.getUpdateCounts();
actualExceptionText = bue.getMessage();
}
for (int updateCount : actualUpdateCounts) {
log.fine("" + updateCount + ",");
}
assertTrue(Arrays.equals(actualUpdateCounts, expectedUpdateCounts),
TestResource.getResource("R_errorFollow50280"));
// Test "soft" errors
conn.setAutoCommit(false);
stmt.addBatch(select);
stmt.addBatch(insertStmt);
stmt.addBatch(select);
stmt.addBatch(insertStmt);
try {
stmt.executeBatch();
// Soft error test: executeBatch unexpectedly succeeded
assertEquals(true, false, TestResource.getResource("R_shouldThrowException"));
} catch (BatchUpdateException bue) {
assertEquals("A result set was generated for update.", bue.getMessage(),
TestResource.getResource("R_unexpectedExceptionContent"));
assertEquals(Arrays.equals(bue.getUpdateCounts(), new int[] {-3, 1, -3, 1}), true,
TestResource.getResource("R_incorrectUpdateCount"));
}
conn.rollback();
stmt.addBatch(dateConversionError);
stmt.addBatch(insertStmt);
stmt.addBatch(insertStmt);
stmt.addBatch(insertStmt);
try {
stmt.executeBatch();
} catch (BatchUpdateException bue) {
if (isSqlAzureDW()) {
assert (bue.getMessage().contains(TestResource.getResource("R_syntaxErrorDateConvertDW")));
} else {
assert (bue.getMessage().contains(TestResource.getResource("R_syntaxErrorDateConvert")));
}
} catch (SQLException e) {
assert (e.getMessage().contains(TestResource.getResource("R_dateConvertError")));
}
conn.setAutoCommit(true);
// On SQL Azure, raising FATAL error by RAISERROR() is not supported and there is no way to
// cut the current connection by a statement inside a SQL batch.
// Details: Although one can simulate a fatal error (that cuts the connections) by dropping the
// database,
// this simulation cannot be written entirely in TSQL (because it needs a new connection),
// and thus it cannot be put into a TSQL batch and it is useless here.
// So we have to skip the last scenario of this test case, i.e. "Test Severe (connection-closing)
// errors"
// It is worthwhile to still execute the first 5 test scenarios of this test case, in order to have best
// test coverage.
if (!isSqlAzure()) {
// Test Severe (connection-closing) errors
stmt.addBatch(error);
stmt.addBatch(insertStmt);
stmt.addBatch(warning);
stmt.addBatch(select);
stmt.addBatch(insertStmt);
stmt.addBatch(severe);
stmt.addBatch(insertStmt);
stmt.addBatch(insertStmt);
try {
stmt.executeBatch();
// Test fatal errors batch execution succeeded (should have failed)
assertEquals(false, true, TestResource.getResource("R_shouldThrowException"));
} catch (BatchUpdateException bue) {
// Test fatal errors returned BatchUpdateException rather than SQLException
assertEquals(false, true, TestResource.getResource("R_unexpectedException") + bue.getMessage());
} catch (SQLException e) {
actualExceptionText = e.getMessage();
if (actualExceptionText.endsWith("reset")) {
assertTrue(actualExceptionText.equalsIgnoreCase("Connection reset"),
TestResource.getResource("R_unexpectedExceptionContent") + ": " + actualExceptionText);
} else {
assertTrue(actualExceptionText.equalsIgnoreCase("raiserror level 20"),
TestResource.getResource("R_unexpectedExceptionContent") + ": " + actualExceptionText);
}
}
}
} finally {
try (Connection conn = getConnection(); Statement stmt = conn.createStatement()) {
TestUtils.dropTableIfExists(AbstractSQLGenerator.escapeIdentifier(tableName), stmt);
}
}
}
private void Repro47239largeInternal(String mode) throws Exception {
// the DBConnection for detecting whether the server is SQL Azure or SQL Server.
try (Connection con = getConnection()) {
if (isSqlAzure()) {
// SQL Azure will throw an exception for "raiserror WITH LOG", so the following RAISERROR statements do
// not have the "with log" option
warning = "RAISERROR ('raiserror level 4',4,1)";
error = "RAISERROR ('raiserror level 11',11,1)";
// On SQL Azure, raising FATAL error by RAISERROR() is not supported and there is no way to
// cut the current connection by a statement inside a SQL batch.
// Details: Although one can simulate a fatal error (that cuts the connections) by dropping the
// database,
// this simulation cannot be written entirely in TSQL (because it needs a new connection),
// and thus it cannot be put into a TSQL batch and it is useless here.
// So we have to skip the last scenario of this test case, i.e. "Test Severe (connection-closing)
// errors"
// It is worthwhile to still execute the first 5 test scenarios of this test case, in order to have best
// test coverage.
severe = "--Not executed when testing against SQL Azure"; // this is a dummy statement that is never
// executed on SQL Azure
} else {
warning = "RAISERROR ('raiserror level 4',4,1) WITH LOG";
error = "RAISERROR ('raiserror level 11',11,1) WITH LOG";
severe = "RAISERROR ('raiserror level 20',20,1) WITH LOG";
}
con.close();
long[] actualUpdateCounts;
long[] expectedUpdateCounts;
String actualExceptionText;
try (Connection conn = getConnection(); Statement stmt = conn.createStatement()) {
if (mode.equalsIgnoreCase("bulkcopy")) {
modifyConnectionForBulkCopyAPI((SQLServerConnection) conn);
}
TestUtils.dropTableIfExists(AbstractSQLGenerator.escapeIdentifier(tableName), stmt);
stmt.executeLargeUpdate("create table " + AbstractSQLGenerator.escapeIdentifier(tableName)
+ " (c1_int int, c2_varchar varchar(20), c3_date datetime, c4_int int identity(1,1) primary key)");
// Regular Statement batch update
expectedUpdateCounts = new long[] {1, -2, 1, -2, 1, -2};
try (Statement batchStmt = conn.createStatement()) {
batchStmt.addBatch(insertStmt);
batchStmt.addBatch(warning);
batchStmt.addBatch(insertStmt);
batchStmt.addBatch(warning);
batchStmt.addBatch(insertStmt);
batchStmt.addBatch(warning);
try {
actualUpdateCounts = batchStmt.executeLargeBatch();
actualExceptionText = "";
} catch (BatchUpdateException bue) {
actualUpdateCounts = bue.getLargeUpdateCounts();
actualExceptionText = bue.getMessage();
log.fine("BatchUpdateException occurred. Message:" + actualExceptionText);
}
}
log.fine("UpdateCounts:");
for (long updateCount : actualUpdateCounts) {
log.fine("" + updateCount + ",");
}
assertTrue(Arrays.equals(actualUpdateCounts, expectedUpdateCounts),
TestResource.getResource("R_testInterleaved"));
expectedUpdateCounts = new long[] {-3, 1, 1, 1};
stmt.addBatch(error);
stmt.addBatch(insertStmt);
stmt.addBatch(insertStmt);
stmt.addBatch(insertStmt);
try {
actualUpdateCounts = stmt.executeLargeBatch();
actualExceptionText = "";
} catch (BatchUpdateException bue) {
actualUpdateCounts = bue.getLargeUpdateCounts();
actualExceptionText = bue.getMessage();
}
log.fine("UpdateCounts:");
for (long updateCount : actualUpdateCounts) {
log.fine("" + updateCount + ",");
}
assertTrue(Arrays.equals(actualUpdateCounts, expectedUpdateCounts),
TestResource.getResource("R_errorFollowInserts"));
// 50280
expectedUpdateCounts = new long[] {1, -3};
stmt.addBatch(insertStmt);
stmt.addBatch(error16);
try {
actualUpdateCounts = stmt.executeLargeBatch();
actualExceptionText = "";
} catch (BatchUpdateException bue) {
actualUpdateCounts = bue.getLargeUpdateCounts();
actualExceptionText = bue.getMessage();
}
for (long updateCount : actualUpdateCounts) {
log.fine("" + updateCount + ",");
}
assertTrue(Arrays.equals(actualUpdateCounts, expectedUpdateCounts),
TestResource.getResource("R_errorFollow50280"));
// Test "soft" errors
conn.setAutoCommit(false);
stmt.addBatch(select);
stmt.addBatch(insertStmt);
stmt.addBatch(select);
stmt.addBatch(insertStmt);
try {
stmt.executeLargeBatch();
// Soft error test: executeLargeBatch unexpectedly succeeded
assertEquals(false, true, TestResource.getResource("R_shouldThrowException"));
} catch (BatchUpdateException bue) {
// Soft error test: wrong error message in BatchUpdateException
assertEquals("A result set was generated for update.", bue.getMessage(),
TestResource.getResource("R_unexpectedExceptionContent"));
// Soft error test: wrong update counts in BatchUpdateException
assertEquals(Arrays.equals(bue.getLargeUpdateCounts(), new long[] {-3, 1, -3, 1}), true,
TestResource.getResource("R_incorrectUpdateCount"));
}
conn.rollback();
// Defect 128801: Rollback (with conversion error) should throw SQLException
stmt.addBatch(dateConversionError);
stmt.addBatch(insertStmt);
stmt.addBatch(insertStmt);
stmt.addBatch(insertStmt);
try {
stmt.executeLargeBatch();
} catch (BatchUpdateException bue) {
assert (bue.getMessage().contains(TestResource.getResource("R_syntaxErrorDateConvert")));
} catch (SQLException e) {
assert (e.getMessage().contains(TestResource.getResource("R_dateConvertError")));
}
conn.setAutoCommit(true);
// On SQL Azure, raising FATAL error by RAISERROR() is not supported and there is no way to
// cut the current connection by a statement inside a SQL batch.
// Details: Although one can simulate a fatal error (that cuts the connections) by dropping the
// database,
// this simulation cannot be written entirely in TSQL (because it needs a new connection),
// and thus it cannot be put into a TSQL batch and it is useless here.
// So we have to skip the last scenario of this test case, i.e. "Test Severe (connection-closing)
// errors"
// It is worthwhile to still execute the first 5 test scenarios of this test case, in order to have
// best
// test coverage.
if (!isSqlAzure()) {
// Test Severe (connection-closing) errors
stmt.addBatch(error);
stmt.addBatch(insertStmt);
stmt.addBatch(warning);
stmt.addBatch(insertStmt);
stmt.addBatch(severe);
stmt.addBatch(insertStmt);
stmt.addBatch(insertStmt);
try {
stmt.executeLargeBatch();
// Test fatal errors batch execution succeeded (should have failed)
assertEquals(false, true, TestResource.getResource("R_shouldThrowException"));
} catch (BatchUpdateException bue) {
// Test fatal errors returned BatchUpdateException rather than SQLException
assertEquals(false, true, TestResource.getResource("R_unexpectedException") + bue.getMessage());
} catch (SQLException e) {
actualExceptionText = e.getMessage();
if (actualExceptionText.endsWith("reset")) {
assertTrue(actualExceptionText.equalsIgnoreCase("Connection reset"),
TestResource.getResource("R_unexpectedExceptionContent") + ": "
+ actualExceptionText);
} else {
assertTrue(actualExceptionText.equalsIgnoreCase("raiserror level 20"),
TestResource.getResource("R_unexpectedExceptionContent") + ": "
+ actualExceptionText);
}
}
}
} finally {
try (Connection conn = getConnection(); Statement stmt = conn.createStatement()) {
TestUtils.dropTableIfExists(AbstractSQLGenerator.escapeIdentifier(tableName), stmt);
}
}
}
}
private void modifyConnectionForBulkCopyAPI(SQLServerConnection con) throws Exception {
Field f1 = SQLServerConnection.class.getDeclaredField("isAzureDW");
f1.setAccessible(true);
f1.set(con, true);
con.setUseBulkCopyForBatchInsert(true);
}
}
```
|
Tradd Moore (born October 31, 1987) is an American comic book artist. His work has appeared in series published by Marvel Comics, Image Comics and DC Comics.
Comics career
Moore graduated from Savannah College of Art and Design in 2010. While still in college he was discovered on DeviantArt by writer Justin Jordan, who pitched him the script for a new comic. Their first collaboration, the six-issue action-horror series The Strange Talent of Luther Strode, was published in 2011 by Image Comics. A sequel, The Legend of Luther Strode was released the following year.
Moore was then noticed by DC Comics, who asked him to illustrate an issue of the Batman anthology Legends of the Dark Knight. In 2013 he also started working for Marvel Comics, drawing several covers for Deadpool. The same year he drew the second issue of writer Aleš Kot's espionage thriller Zero for Image Comics. In 2014 Moore drew the first five issues of All-New Ghost Rider for Marvel's All-New, All-Different relaunch. He also contributed covers for Kot's whole run on Secret Avengers.
The following year he reunited with writer Justin Jordan for the last installment in the Luther Strode trilogy, The Legacy of Luther Strode. He also returned to Ghost Rider with a backup story in the first issue of the relaunch, and to Deadpool, creating the covers for the Deadpool vs. Thanos mini series.
In 2017 he illustrated the landmark issue #150 of Venom, and continued to contribute covers for Marvel's Secret Warriors and IDW's Revolutionaries.
Moore returned to Image comics in 2018, co-plotting, penciling and inking the five-issue sci-fi action romance series The New World with writer Aleš Kot.
In 2019 Moore collaborated with fellow Savannah College of Art alumnus Donny Cates on the five-issue series Silver Surfer: Black. It was released to critical acclaim.
In early 2020 it was revealed that The New World would be adapted into a movie by Warner Bros., with Jeremy O. Harris on screenwriting duties. Later that year he wrote and illustrated a short story for the milestone issue #850 of The Amazing Spider-Man. At the 2022 San Diego Comic-Con it was announced that Moore's next project would be Doctor Strange: Fall Sunrise, a four-issue miniseries that he would be both writing and illustrating.
Bibliography
Interior work
Image Comics
The Strange Talent of Luther Strode #1–6 (art, with writer Justin Jordan, 2011–12)
The Legend of Luther Strode #1–6 (art, with writer Justin Jordan, 2012–13)
Zero #2 (art, with writer Aleš Kot, 2013)
The Legacy of Luther Strode #1–6 (art, with writer Justin Jordan, 2015–16)
The New World #1–5 (art, with writer Aleš Kot, 2018)
Marvel Comics
All-New Ghost Rider #1–5 (art, with writer Felipe Smith, 2014)
Ghost Rider #1 (backup story art with writer Felipe Smith, 2016)
Venom #150 (art, among other artists with writer Mike Costa, 2017)
Silver Surfer: Black #1–5 (art and story, with writer Donny Cates, 2019)
Guardians of the Galaxy #12 (art, among other artists with writer Donny Cates, 2019)
The Amazing Spider-Man #850 (art and story, among other writers and artists, 2020)
Doctor Strange: Fall Sunrise #1–4 (art and story, 2022–23)
DC Comics
Legends of the Dark Knight #8 (art, with writers Paul Tobin and Ricardo Sanchez, 2013)
Batman Black and White #1 (art, with writer James Tynion IV, 2020)
References
External links
Tradd Moore on Tumblr
Living people
1987 births
American comics artists
Artists from Georgia (U.S. state)
Savannah College of Art and Design alumni
People from Snellville, Georgia
|
John McMillan (August 4, 1816 – September 12, 1886) was a New Brunswick businessman and political figure. He represented Restigouche in the House of Commons of Canada as a Liberal member from 1867 to 1868.
He was born on the Isle of Arran, Scotland in 1816 and came to New Brunswick with his father in 1832. McMillan worked at the timber trade and later with a partner established a lumber firm and a general store at Campbellton. He was named a justice of the peace in 1845. In 1857, he was elected to the Legislative Assembly of New Brunswick for Restigouche County; he was reelected in 1861, 1865 and 1866. McMillan was Surveyor General from 1861 to 1865. He served as postmaster general for New Brunswick from 1866 until Confederation. He was elected to the House of Commons in 1867 but resigned in 1868 to become Inspector of Post Offices for the province.
He died in Saint John, New Brunswick in 1886.
External links
Biography at the Dictionary of Canadian Biography Online
1816 births
1886 deaths
Members of the Legislative Assembly of New Brunswick
Members of the Executive Council of New Brunswick
Liberal Party of Canada MPs
Members of the House of Commons of Canada from New Brunswick
Scottish emigrants to pre-Confederation New Brunswick
People from Restigouche County, New Brunswick
Colony of New Brunswick people
Canadian justices of the peace
|
```c++
// PropertyName.cpp
#include "StdAfx.h"
#include "../../PropID.h"
#include "Windows/ResourceString.h"
#include "resource.h"
#include "PropertyName.h"
#include "PropertyNameRes.h"
#include "LangUtils.h"
struct CPropertyIDNamePair
{
PROPID PropID;
UINT ResourceID;
UINT LangID;
};
static CPropertyIDNamePair kPropertyIDNamePairs[] =
{
{ kpidPath, IDS_PROPERTY_PATH, 0x02000203 },
{ kpidName, IDS_PROPERTY_NAME, 0x02000204 },
// { kpidExtension, L"Extension" },
{ kpidIsFolder, IDS_PROPERTY_IS_FOLDER, 0x02000206},
{ kpidSize, IDS_PROPERTY_SIZE, 0x02000207},
{ kpidPackedSize, IDS_PROPERTY_PACKED_SIZE, 0x02000208 },
{ kpidAttributes, IDS_PROPERTY_ATTRIBUTES, 0x02000209 },
{ kpidCreationTime, IDS_PROPERTY_CREATION_TIME, 0x0200020A },
{ kpidLastAccessTime, IDS_PROPERTY_LAST_ACCESS_TIME, 0x0200020B },
{ kpidLastWriteTime, IDS_PROPERTY_LAST_WRITE_TIME, 0x0200020C },
{ kpidSolid, IDS_PROPERTY_SOLID, 0x0200020D },
{ kpidCommented, IDS_PROPERTY_COMMENTED, 0x0200020E },
{ kpidEncrypted, IDS_PROPERTY_ENCRYPTED, 0x0200020F },
{ kpidSplitBefore, IDS_PROPERTY_SPLIT_BEFORE, 0x02000210 },
{ kpidSplitAfter, IDS_PROPERTY_SPLIT_AFTER, 0x02000211 },
{ kpidDictionarySize, IDS_PROPERTY_DICTIONARY_SIZE, 0x02000212 },
{ kpidCRC, IDS_PROPERTY_CRC, 0x02000213 },
{ kpidType, IDS_PROPERTY_FILE_TYPE, 0x02000214},
{ kpidIsAnti, IDS_PROPERTY_ANTI, 0x02000215 },
{ kpidMethod, IDS_PROPERTY_METHOD, 0x02000216 },
{ kpidHostOS, IDS_PROPERTY_HOST_OS, 0x02000217 },
{ kpidFileSystem, IDS_PROPERTY_FILE_SYSTEM, 0x02000218},
{ kpidUser, IDS_PROPERTY_USER, 0x02000219},
{ kpidGroup, IDS_PROPERTY_GROUP, 0x0200021A},
{ kpidBlock, IDS_PROPERTY_BLOCK, 0x0200021B },
{ kpidComment, IDS_PROPERTY_COMMENT, 0x0200021C },
{ kpidPosition, IDS_PROPERTY_POSITION, 0x0200021D },
{ kpidPrefix, IDS_PROPERTY_PREFIX, 0x0200021E },
{ kpidNumSubFolders, IDS_PROPERTY_FOLDERS, 0x0200021F },
{ kpidNumSubFiles, IDS_PROPERTY_FILES, 0x02000220 },
{ kpidUnpackVer, IDS_PROPERTY_VERSION, 0x02000221},
{ kpidVolume, IDS_PROPERTY_VOLUME, 0x02000222},
{ kpidIsVolume, IDS_PROPERTY_IS_VOLUME, 0x02000223},
{ kpidOffset, IDS_PROPERTY_OFFSET, 0x02000224},
{ kpidLinks, IDS_PROPERTY_LINKS, 0x02000225},
{ kpidNumBlocks, IDS_PROPERTY_NUM_BLOCKS, 0x02000226},
{ kpidNumVolumes, IDS_PROPERTY_NUM_VOLUMES, 0x02000227},
{ kpidTotalSize, IDS_PROPERTY_TOTAL_SIZE, 0x03031100 },
{ kpidFreeSpace, IDS_PROPERTY_FREE_SPACE, 0x03031101 },
{ kpidClusterSize, IDS_PROPERTY_CLUSTER_SIZE, 0x03031102},
{ kpidVolumeName, IDS_PROPERTY_VOLUME_NAME, 0x03031103 },
{ kpidLocalName, IDS_PROPERTY_LOCAL_NAME, 0x03031200 },
{ kpidProvider, IDS_PROPERTY_PROVIDER, 0x03031201 }
};
int FindProperty(PROPID propID)
{
for (int i = 0; i < sizeof(kPropertyIDNamePairs) / sizeof(kPropertyIDNamePairs[0]); i++)
if(kPropertyIDNamePairs[i].PropID == propID)
return i;
return -1;
}
UString GetNameOfProperty(PROPID propID)
{
int index = FindProperty(propID);
if (index < 0)
return UString();
const CPropertyIDNamePair &pair = kPropertyIDNamePairs[index];
return LangString(pair.ResourceID, pair.LangID);
}
```
|
Fenton Lee Robinson (September 23, 1935 – November 25, 1997) was an American blues singer and exponent of the Chicago blues guitar. In 2023, he was inducted into the Blues Hall of Fame.
Biography
Robinson was born near Greenwood, Mississippi. He left home at the age of 18 and moved to Memphis, Tennessee, where he recorded his first single "Tennessee Woman" in 1957. In 1959, he made his first recording of "As the Years Go Passing By", later recorded by several other blues artists. He settled in Chicago in 1962. He recorded his signature song, "Somebody Loan Me a Dime", in 1967 for the Palos label, the nationwide distribution of which was aborted by a freak snowstorm that hit Chicago. A cover version was recorded by Boz Scaggs in 1969, but the song was misattributed, and legal battles ensued. It has since become a blues standard, being "part of the repertoire of one out of every two blues artists", according to the Encyclopedia of Blues (1997).
Robinson re-recorded the song for the critically acclaimed album Somebody Loan Me a Dime in 1974, the first of three he recorded for Alligator Records. Robinson was nominated for a Grammy Award for the second, 1977's I Hear Some Blues Downstairs, which contained a rerecording of "As the Years Go Passing By". Robinson's third album for Alligator, Nightflight, was released in 1984.
Robinson played guitar on Larry Davis's original recording of "Texas Flood". Davis later became a guitar player, but for "Texas Flood" Robinson provided the distinctive guitar parts, with Davis on vocals and bass, flamboyant keyboardist James Booker on piano, David Dean on tenor saxophone, Booker Crutchfield on baritone saxophone and an unknown drummer.
In the 1970s Robinson was arrested and imprisoned for involuntary manslaughter in connection with a car accident. Paroled after nine months, he continued playing in Chicago clubs and later taught guitar.
Robinson died of complications from brain cancer in Rockford, Illinois.
His signature song, "Somebody Loan Me a Dime", was used in the film The Blues Brothers; the song is playing on the radio when Jake (John Belushi) is being transported and paroled.
Discography
Monday Morning Boogie & Blues (1972), Seventy Seven Records; Sunset Blvd Records
The Getaway (1973), Seventy Seven
Somebody Loan Me A Dime (1974), Alligator
I Hear Some Blues Downstairs (1977), Alligator
Blues In Progress (AKA Nightflight) (1984), Black Magic; Alligator
Special Road (1989), Black Magic; Evidence
See also
List of blues musicians
List of Chicago blues musicians
List of Texas blues musicians
List of electric blues musicians
Chicago Blues Festival
References
External links
Biography at AllMusic
1935 births
1997 deaths
People from Greenwood, Mississippi
American blues guitarists
American male guitarists
American blues singers
Blues musicians from Mississippi
Electric blues musicians
Duke Records artists
Deaths from brain cancer in the United States
Deaths from cancer in Illinois
Texas blues musicians
Chicago blues musicians
20th-century American singers
20th-century American guitarists
Guitarists from Illinois
Guitarists from Mississippi
Guitarists from Texas
Alligator Records artists
USA Records artists
Meteor Records artists
20th-century American male musicians
|
Patriarch Basil of Bulgaria may refer to:
Basil I of Bulgaria, Patriarch of Bulgaria c. 1186 – c. 1232
Basil II of Bulgaria, Patriarch of Bulgaria c. 1246–1263
Basil III of Bulgaria, Patriarch of Bulgaria c. 1254–1263
|
```php
<?php
/*
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not
 * use this file except in compliance with the License. You may obtain a copy of
 * the License at
 *
 *   path_to_url
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 * License for the specific language governing permissions and limitations under
 * the License.
 */
namespace Google\Service\ServiceNetworking\Resource;
use Google\Service\ServiceNetworking\DnsRecordSet;
/**
* The "dnsRecordSet" collection of methods.
* Typical usage is:
* <code>
* $servicenetworkingService = new Google\Service\ServiceNetworking(...);
* $dnsRecordSet = $servicenetworkingService->services_dnsRecordSet;
* </code>
*/
class ServicesDnsRecordSet extends \Google\Service\Resource
{
/**
* Producers can use this method to retrieve information about the DNS record
* set added to the private zone inside the shared tenant host project
* associated with a consumer network. (dnsRecordSet.get)
*
* @param string $parent Required. Parent resource identifying the connection
* which owns this collection of DNS zones in the format services/{service}.
* @param array $optParams Optional parameters.
*
* @opt_param string consumerNetwork Required. The consumer network containing
* the record set. Must be in the form of
* projects/{project}/global/networks/{network}
* @opt_param string domain Required. The domain name of the zone containing the
* recordset.
 * @opt_param string type Required. RecordSet Type eg. type='A'. See the list of
 * [Supported DNS Types](path_to_url).
* @opt_param string zone Required. The name of the zone containing the record
* set.
* @return DnsRecordSet
*/
public function get($parent, $optParams = [])
{
$params = ['parent' => $parent];
$params = array_merge($params, $optParams);
return $this->call('get', [$params], DnsRecordSet::class);
}
}
// Adding a class alias for backwards compatibility with the previous class name.
class_alias(ServicesDnsRecordSet::class, 'Google_Service_ServiceNetworking_Resource_ServicesDnsRecordSet');
```
|
The Punjab Legislative Assembly election, 2007 was held in the Indian state of Punjab to elect 117 members to the 13th Punjab Legislative Assembly. The Shiromani Akali Dal and its alliance partner, the Bharatiya Janata Party, won a majority of the seats, and Parkash Singh Badal was elected Chief Minister.
Background
The 2007 general elections in Punjab witnessed one of the most closely fought contests between the Indian National Congress and the Shiromani Akali Dal. The turnout of 76% among the state's 1.69 crore eligible voters was exceptionally high compared to previous elections.
The 2007 Punjab Assembly Elections at a Glance
Parties and Alliances
Others
Results
{| class="wikitable"
!colspan=2|Party
!Candidates
!Seats won
!Votes
!% of Votes
|-
|colspan=2|Shiromani Akali Dal
|93||48||4,689,018||37.09%
|-
|colspan=2|Indian National Congress
|116||44||5,170,548||40.90%
|-
|colspan=2|Bharatiya Janata Party
|23||19||1,046,451||8.28%
|-
|colspan=2|Independents
|431||5||861,595||6.82%
|-
!colspan=2|Total
!1043!!116!!12,641,706!!
|}
Results by Region
Result by Constituency
List of Successful Candidates in Punjab Assembly Election in 2007
Government Formation
On March 2, 2007, Parkash Singh Badal took the oath of office for a record fourth time, with a large crowd gathered at the Mohali cricket stadium for the swearing-in ceremony.
See also
Elections in Punjab, India
References
Punjab
2007
2007
|
Lauricocha is a small parish in the central Andean highlands of Perú.
Lauricocha is located on the shores of Lago Lauricocha in the Huánuco Region. The parish consists of half a dozen houses, scattered at an elevation of 3,850 m in a sparsely populated area near the village of Antacolpa, and 25 km northwest of Yanahuanca, the capital of the province of Daniel Alcides Carrión.
25 km west of the parish rises Cerro Yerupajá (6,634 m), the highest mountain in the Cordillera Huayhuash and among the ten highest summits in South America.
Populated places in the Huánuco Region
|
```java
package com.fishercoder.firstthousand;
import com.fishercoder.common.utils.CommonUtils;
import com.fishercoder.solutions.firstthousand._79;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
public class _79Test {
private _79.Solution1 solution1;
private _79.Solution2 solution2;
private _79.Solution3 solution3;
private static char[][] board;
@BeforeEach
public void setup() {
solution1 = new _79.Solution1();
solution2 = new _79.Solution2();
solution3 = new _79.Solution3();
}
@Test
public void test1() {
board = new char[][]{
{'A', 'B', 'C', 'E'},
{'S', 'F', 'E', 'S'},
{'A', 'D', 'E', 'E'},
};
assertEquals(true, solution1.exist(board, "ABCEFSADEESE"));
}
@Test
public void test2() {
board = new char[][]{
{'A', 'B', 'C', 'E'},
{'S', 'F', 'C', 'S'},
{'A', 'D', 'E', 'E'},
};
assertEquals(true, solution1.exist(board, "ABCCED"));
assertEquals(true, solution1.exist(board, "SEE"));
assertEquals(false, solution1.exist(board, "ABCD"));
}
@Test
public void test3() {
board = new char[][]{
{'a'},
{'a'},
};
assertEquals(false, solution1.exist(board, "aaa"));
}
@Test
public void test4() {
board = new char[][]{
{'A', 'B', 'H', 'I'},
{'K', 'E', 'H', 'S'},
{'A', 'D', 'E', 'E'},
};
assertEquals(true, solution2.exist(board, "ABHISHEK"));
}
@Test
public void test5() {
board = CommonUtils.convertLeetCodeRegular2DCharArrayInputIntoJavaArray("[\"A\",\"B\",\"C\",\"E\"],[\"S\",\"F\",\"C\",\"S\"],[\"A\",\"D\",\"E\",\"E\"]");
assertEquals(true, solution3.exist(board, "ABCCED"));
}
}
```
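The solution classes exercised by these tests are not shown here. A minimal backtracking implementation of the word-search `exist` method — the standard DFS approach such tests target — might look like the following sketch; the class name `WordSearchSketch` is hypothetical, not the repository's actual `_79` solutions:

```java
public class WordSearchSketch {
    // Returns true if `word` can be traced through horizontally or vertically
    // adjacent cells of `board`, using each cell at most once.
    public static boolean exist(char[][] board, String word) {
        for (int r = 0; r < board.length; r++) {
            for (int c = 0; c < board[0].length; c++) {
                if (dfs(board, word, r, c, 0)) {
                    return true;
                }
            }
        }
        return false;
    }

    private static boolean dfs(char[][] board, String word, int r, int c, int i) {
        if (i == word.length()) {
            return true; // every character matched
        }
        if (r < 0 || r >= board.length || c < 0 || c >= board[0].length
                || board[r][c] != word.charAt(i)) {
            return false;
        }
        char saved = board[r][c];
        board[r][c] = '#'; // mark the cell visited for this path
        boolean found = dfs(board, word, r + 1, c, i + 1)
                || dfs(board, word, r - 1, c, i + 1)
                || dfs(board, word, r, c + 1, i + 1)
                || dfs(board, word, r, c - 1, i + 1);
        board[r][c] = saved; // backtrack so other paths may reuse the cell
        return found;
    }

    public static void main(String[] args) {
        char[][] board = {
            {'A', 'B', 'C', 'E'},
            {'S', 'F', 'C', 'S'},
            {'A', 'D', 'E', 'E'},
        };
        System.out.println(exist(board, "ABCCED")); // prints true
        System.out.println(exist(board, "ABCD"));   // prints false
    }
}
```

Marking visited cells in place (and restoring them on backtrack) avoids allocating a separate `boolean[][]` per search, which is the usual trick in these solutions.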
|
```go
// mkerrors.sh
// Code generated by the command above; see README.md. DO NOT EDIT.
// +build arm,freebsd
// Code generated by cmd/cgo -godefs; DO NOT EDIT.
// cgo -godefs -- _const.go
package unix
import "syscall"
const (
AF_APPLETALK = 0x10
AF_ARP = 0x23
AF_ATM = 0x1e
AF_BLUETOOTH = 0x24
AF_CCITT = 0xa
AF_CHAOS = 0x5
AF_CNT = 0x15
AF_COIP = 0x14
AF_DATAKIT = 0x9
AF_DECnet = 0xc
AF_DLI = 0xd
AF_E164 = 0x1a
AF_ECMA = 0x8
AF_HYLINK = 0xf
AF_IEEE80211 = 0x25
AF_IMPLINK = 0x3
AF_INET = 0x2
AF_INET6 = 0x1c
AF_INET6_SDP = 0x2a
AF_INET_SDP = 0x28
AF_IPX = 0x17
AF_ISDN = 0x1a
AF_ISO = 0x7
AF_LAT = 0xe
AF_LINK = 0x12
AF_LOCAL = 0x1
AF_MAX = 0x2a
AF_NATM = 0x1d
AF_NETBIOS = 0x6
AF_NETGRAPH = 0x20
AF_OSI = 0x7
AF_PUP = 0x4
AF_ROUTE = 0x11
AF_SCLUSTER = 0x22
AF_SIP = 0x18
AF_SLOW = 0x21
AF_SNA = 0xb
AF_UNIX = 0x1
AF_UNSPEC = 0x0
AF_VENDOR00 = 0x27
AF_VENDOR01 = 0x29
AF_VENDOR02 = 0x2b
AF_VENDOR03 = 0x2d
AF_VENDOR04 = 0x2f
AF_VENDOR05 = 0x31
AF_VENDOR06 = 0x33
AF_VENDOR07 = 0x35
AF_VENDOR08 = 0x37
AF_VENDOR09 = 0x39
AF_VENDOR10 = 0x3b
AF_VENDOR11 = 0x3d
AF_VENDOR12 = 0x3f
AF_VENDOR13 = 0x41
AF_VENDOR14 = 0x43
AF_VENDOR15 = 0x45
AF_VENDOR16 = 0x47
AF_VENDOR17 = 0x49
AF_VENDOR18 = 0x4b
AF_VENDOR19 = 0x4d
AF_VENDOR20 = 0x4f
AF_VENDOR21 = 0x51
AF_VENDOR22 = 0x53
AF_VENDOR23 = 0x55
AF_VENDOR24 = 0x57
AF_VENDOR25 = 0x59
AF_VENDOR26 = 0x5b
AF_VENDOR27 = 0x5d
AF_VENDOR28 = 0x5f
AF_VENDOR29 = 0x61
AF_VENDOR30 = 0x63
AF_VENDOR31 = 0x65
AF_VENDOR32 = 0x67
AF_VENDOR33 = 0x69
AF_VENDOR34 = 0x6b
AF_VENDOR35 = 0x6d
AF_VENDOR36 = 0x6f
AF_VENDOR37 = 0x71
AF_VENDOR38 = 0x73
AF_VENDOR39 = 0x75
AF_VENDOR40 = 0x77
AF_VENDOR41 = 0x79
AF_VENDOR42 = 0x7b
AF_VENDOR43 = 0x7d
AF_VENDOR44 = 0x7f
AF_VENDOR45 = 0x81
AF_VENDOR46 = 0x83
AF_VENDOR47 = 0x85
ALTWERASE = 0x200
B0 = 0x0
B110 = 0x6e
B115200 = 0x1c200
B1200 = 0x4b0
B134 = 0x86
B14400 = 0x3840
B150 = 0x96
B1800 = 0x708
B19200 = 0x4b00
B200 = 0xc8
B230400 = 0x38400
B2400 = 0x960
B28800 = 0x7080
B300 = 0x12c
B38400 = 0x9600
B460800 = 0x70800
B4800 = 0x12c0
B50 = 0x32
B57600 = 0xe100
B600 = 0x258
B7200 = 0x1c20
B75 = 0x4b
B76800 = 0x12c00
B921600 = 0xe1000
B9600 = 0x2580
BIOCFEEDBACK = 0x8004427c
BIOCFLUSH = 0x20004268
BIOCGBLEN = 0x40044266
BIOCGDIRECTION = 0x40044276
BIOCGDLT = 0x4004426a
BIOCGDLTLIST = 0xc0084279
BIOCGETBUFMODE = 0x4004427d
BIOCGETIF = 0x4020426b
BIOCGETZMAX = 0x4004427f
BIOCGHDRCMPLT = 0x40044274
BIOCGRSIG = 0x40044272
BIOCGRTIMEOUT = 0x4010426e
BIOCGSEESENT = 0x40044276
BIOCGSTATS = 0x4008426f
BIOCGTSTAMP = 0x40044283
BIOCIMMEDIATE = 0x80044270
BIOCLOCK = 0x2000427a
BIOCPROMISC = 0x20004269
BIOCROTZBUF = 0x400c4280
BIOCSBLEN = 0xc0044266
BIOCSDIRECTION = 0x80044277
BIOCSDLT = 0x80044278
BIOCSETBUFMODE = 0x8004427e
BIOCSETF = 0x80084267
BIOCSETFNR = 0x80084282
BIOCSETIF = 0x8020426c
BIOCSETWF = 0x8008427b
BIOCSETZBUF = 0x800c4281
BIOCSHDRCMPLT = 0x80044275
BIOCSRSIG = 0x80044273
BIOCSRTIMEOUT = 0x8010426d
BIOCSSEESENT = 0x80044277
BIOCSTSTAMP = 0x80044284
BIOCVERSION = 0x40044271
BPF_A = 0x10
BPF_ABS = 0x20
BPF_ADD = 0x0
BPF_ALIGNMENT = 0x4
BPF_ALU = 0x4
BPF_AND = 0x50
BPF_B = 0x10
BPF_BUFMODE_BUFFER = 0x1
BPF_BUFMODE_ZBUF = 0x2
BPF_DIV = 0x30
BPF_H = 0x8
BPF_IMM = 0x0
BPF_IND = 0x40
BPF_JA = 0x0
BPF_JEQ = 0x10
BPF_JGE = 0x30
BPF_JGT = 0x20
BPF_JMP = 0x5
BPF_JSET = 0x40
BPF_K = 0x0
BPF_LD = 0x0
BPF_LDX = 0x1
BPF_LEN = 0x80
BPF_LSH = 0x60
BPF_MAJOR_VERSION = 0x1
BPF_MAXBUFSIZE = 0x80000
BPF_MAXINSNS = 0x200
BPF_MEM = 0x60
BPF_MEMWORDS = 0x10
BPF_MINBUFSIZE = 0x20
BPF_MINOR_VERSION = 0x1
BPF_MISC = 0x7
BPF_MOD = 0x90
BPF_MSH = 0xa0
BPF_MUL = 0x20
BPF_NEG = 0x80
BPF_OR = 0x40
BPF_RELEASE = 0x30bb6
BPF_RET = 0x6
BPF_RSH = 0x70
BPF_ST = 0x2
BPF_STX = 0x3
BPF_SUB = 0x10
BPF_TAX = 0x0
BPF_TXA = 0x80
BPF_T_BINTIME = 0x2
BPF_T_BINTIME_FAST = 0x102
BPF_T_BINTIME_MONOTONIC = 0x202
BPF_T_BINTIME_MONOTONIC_FAST = 0x302
BPF_T_FAST = 0x100
BPF_T_FLAG_MASK = 0x300
BPF_T_FORMAT_MASK = 0x3
BPF_T_MICROTIME = 0x0
BPF_T_MICROTIME_FAST = 0x100
BPF_T_MICROTIME_MONOTONIC = 0x200
BPF_T_MICROTIME_MONOTONIC_FAST = 0x300
BPF_T_MONOTONIC = 0x200
BPF_T_MONOTONIC_FAST = 0x300
BPF_T_NANOTIME = 0x1
BPF_T_NANOTIME_FAST = 0x101
BPF_T_NANOTIME_MONOTONIC = 0x201
BPF_T_NANOTIME_MONOTONIC_FAST = 0x301
BPF_T_NONE = 0x3
BPF_T_NORMAL = 0x0
BPF_W = 0x0
BPF_X = 0x8
BPF_XOR = 0xa0
BRKINT = 0x2
CAP_ACCEPT = 0x200000020000000
CAP_ACL_CHECK = 0x400000000010000
CAP_ACL_DELETE = 0x400000000020000
CAP_ACL_GET = 0x400000000040000
CAP_ACL_SET = 0x400000000080000
CAP_ALL0 = 0x20007ffffffffff
CAP_ALL1 = 0x4000000001fffff
CAP_BIND = 0x200000040000000
CAP_BINDAT = 0x200008000000400
CAP_CHFLAGSAT = 0x200000000001400
CAP_CONNECT = 0x200000080000000
CAP_CONNECTAT = 0x200010000000400
CAP_CREATE = 0x200000000000040
CAP_EVENT = 0x400000000000020
CAP_EXTATTR_DELETE = 0x400000000001000
CAP_EXTATTR_GET = 0x400000000002000
CAP_EXTATTR_LIST = 0x400000000004000
CAP_EXTATTR_SET = 0x400000000008000
CAP_FCHDIR = 0x200000000000800
CAP_FCHFLAGS = 0x200000000001000
CAP_FCHMOD = 0x200000000002000
CAP_FCHMODAT = 0x200000000002400
CAP_FCHOWN = 0x200000000004000
CAP_FCHOWNAT = 0x200000000004400
CAP_FCNTL = 0x200000000008000
CAP_FCNTL_ALL = 0x78
CAP_FCNTL_GETFL = 0x8
CAP_FCNTL_GETOWN = 0x20
CAP_FCNTL_SETFL = 0x10
CAP_FCNTL_SETOWN = 0x40
CAP_FEXECVE = 0x200000000000080
CAP_FLOCK = 0x200000000010000
CAP_FPATHCONF = 0x200000000020000
CAP_FSCK = 0x200000000040000
CAP_FSTAT = 0x200000000080000
CAP_FSTATAT = 0x200000000080400
CAP_FSTATFS = 0x200000000100000
CAP_FSYNC = 0x200000000000100
CAP_FTRUNCATE = 0x200000000000200
CAP_FUTIMES = 0x200000000200000
CAP_FUTIMESAT = 0x200000000200400
CAP_GETPEERNAME = 0x200000100000000
CAP_GETSOCKNAME = 0x200000200000000
CAP_GETSOCKOPT = 0x200000400000000
CAP_IOCTL = 0x400000000000080
CAP_IOCTLS_ALL = 0x7fffffff
CAP_KQUEUE = 0x400000000100040
CAP_KQUEUE_CHANGE = 0x400000000100000
CAP_KQUEUE_EVENT = 0x400000000000040
CAP_LINKAT_SOURCE = 0x200020000000400
CAP_LINKAT_TARGET = 0x200000000400400
CAP_LISTEN = 0x200000800000000
CAP_LOOKUP = 0x200000000000400
CAP_MAC_GET = 0x400000000000001
CAP_MAC_SET = 0x400000000000002
CAP_MKDIRAT = 0x200000000800400
CAP_MKFIFOAT = 0x200000001000400
CAP_MKNODAT = 0x200000002000400
CAP_MMAP = 0x200000000000010
CAP_MMAP_R = 0x20000000000001d
CAP_MMAP_RW = 0x20000000000001f
CAP_MMAP_RWX = 0x20000000000003f
CAP_MMAP_RX = 0x20000000000003d
CAP_MMAP_W = 0x20000000000001e
CAP_MMAP_WX = 0x20000000000003e
CAP_MMAP_X = 0x20000000000003c
CAP_PDGETPID = 0x400000000000200
CAP_PDKILL = 0x400000000000800
CAP_PDWAIT = 0x400000000000400
CAP_PEELOFF = 0x200001000000000
CAP_POLL_EVENT = 0x400000000000020
CAP_PREAD = 0x20000000000000d
CAP_PWRITE = 0x20000000000000e
CAP_READ = 0x200000000000001
CAP_RECV = 0x200000000000001
CAP_RENAMEAT_SOURCE = 0x200000004000400
CAP_RENAMEAT_TARGET = 0x200040000000400
CAP_RIGHTS_VERSION = 0x0
CAP_RIGHTS_VERSION_00 = 0x0
CAP_SEEK = 0x20000000000000c
CAP_SEEK_TELL = 0x200000000000004
CAP_SEM_GETVALUE = 0x400000000000004
CAP_SEM_POST = 0x400000000000008
CAP_SEM_WAIT = 0x400000000000010
CAP_SEND = 0x200000000000002
CAP_SETSOCKOPT = 0x200002000000000
CAP_SHUTDOWN = 0x200004000000000
CAP_SOCK_CLIENT = 0x200007780000003
CAP_SOCK_SERVER = 0x200007f60000003
CAP_SYMLINKAT = 0x200000008000400
CAP_TTYHOOK = 0x400000000000100
CAP_UNLINKAT = 0x200000010000400
CAP_UNUSED0_44 = 0x200080000000000
CAP_UNUSED0_57 = 0x300000000000000
CAP_UNUSED1_22 = 0x400000000200000
CAP_UNUSED1_57 = 0x500000000000000
CAP_WRITE = 0x200000000000002
CFLUSH = 0xf
CLOCAL = 0x8000
CLOCK_MONOTONIC = 0x4
CLOCK_MONOTONIC_FAST = 0xc
CLOCK_MONOTONIC_PRECISE = 0xb
CLOCK_PROCESS_CPUTIME_ID = 0xf
CLOCK_PROF = 0x2
CLOCK_REALTIME = 0x0
CLOCK_REALTIME_FAST = 0xa
CLOCK_REALTIME_PRECISE = 0x9
CLOCK_SECOND = 0xd
CLOCK_THREAD_CPUTIME_ID = 0xe
CLOCK_UPTIME = 0x5
CLOCK_UPTIME_FAST = 0x8
CLOCK_UPTIME_PRECISE = 0x7
CLOCK_VIRTUAL = 0x1
CREAD = 0x800
CRTSCTS = 0x30000
CS5 = 0x0
CS6 = 0x100
CS7 = 0x200
CS8 = 0x300
CSIZE = 0x300
CSTART = 0x11
CSTATUS = 0x14
CSTOP = 0x13
CSTOPB = 0x400
CSUSP = 0x1a
CTL_HW = 0x6
CTL_KERN = 0x1
CTL_MAXNAME = 0x18
CTL_NET = 0x4
DIOCGATTR = 0xc144648e
DIOCGDELETE = 0x80106488
DIOCGFLUSH = 0x20006487
DIOCGFRONTSTUFF = 0x40086486
DIOCGFWHEADS = 0x40046483
DIOCGFWSECTORS = 0x40046482
DIOCGIDENT = 0x41006489
DIOCGMEDIASIZE = 0x40086481
DIOCGPHYSPATH = 0x4400648d
DIOCGPROVIDERNAME = 0x4400648a
DIOCGSECTORSIZE = 0x40046480
DIOCGSTRIPEOFFSET = 0x4008648c
DIOCGSTRIPESIZE = 0x4008648b
DIOCSKERNELDUMP = 0x804c6490
DIOCSKERNELDUMP_FREEBSD11 = 0x80046485
DIOCZONECMD = 0xc06c648f
DLT_A429 = 0xb8
DLT_A653_ICM = 0xb9
DLT_AIRONET_HEADER = 0x78
DLT_AOS = 0xde
DLT_APPLE_IP_OVER_IEEE1394 = 0x8a
DLT_ARCNET = 0x7
DLT_ARCNET_LINUX = 0x81
DLT_ATM_CLIP = 0x13
DLT_ATM_RFC1483 = 0xb
DLT_AURORA = 0x7e
DLT_AX25 = 0x3
DLT_AX25_KISS = 0xca
DLT_BACNET_MS_TP = 0xa5
DLT_BLUETOOTH_BREDR_BB = 0xff
DLT_BLUETOOTH_HCI_H4 = 0xbb
DLT_BLUETOOTH_HCI_H4_WITH_PHDR = 0xc9
DLT_BLUETOOTH_LE_LL = 0xfb
DLT_BLUETOOTH_LE_LL_WITH_PHDR = 0x100
DLT_BLUETOOTH_LINUX_MONITOR = 0xfe
DLT_CAN20B = 0xbe
DLT_CAN_SOCKETCAN = 0xe3
DLT_CHAOS = 0x5
DLT_CHDLC = 0x68
DLT_CISCO_IOS = 0x76
DLT_CLASS_NETBSD_RAWAF = 0x2240000
DLT_C_HDLC = 0x68
DLT_C_HDLC_WITH_DIR = 0xcd
DLT_DBUS = 0xe7
DLT_DECT = 0xdd
DLT_DOCSIS = 0x8f
DLT_DVB_CI = 0xeb
DLT_ECONET = 0x73
DLT_EN10MB = 0x1
DLT_EN3MB = 0x2
DLT_ENC = 0x6d
DLT_EPON = 0x103
DLT_ERF = 0xc5
DLT_ERF_ETH = 0xaf
DLT_ERF_POS = 0xb0
DLT_FC_2 = 0xe0
DLT_FC_2_WITH_FRAME_DELIMS = 0xe1
DLT_FDDI = 0xa
DLT_FLEXRAY = 0xd2
DLT_FRELAY = 0x6b
DLT_FRELAY_WITH_DIR = 0xce
DLT_GCOM_SERIAL = 0xad
DLT_GCOM_T1E1 = 0xac
DLT_GPF_F = 0xab
DLT_GPF_T = 0xaa
DLT_GPRS_LLC = 0xa9
DLT_GSMTAP_ABIS = 0xda
DLT_GSMTAP_UM = 0xd9
DLT_IBM_SN = 0x92
DLT_IBM_SP = 0x91
DLT_IEEE802 = 0x6
DLT_IEEE802_11 = 0x69
DLT_IEEE802_11_RADIO = 0x7f
DLT_IEEE802_11_RADIO_AVS = 0xa3
DLT_IEEE802_15_4 = 0xc3
DLT_IEEE802_15_4_LINUX = 0xbf
DLT_IEEE802_15_4_NOFCS = 0xe6
DLT_IEEE802_15_4_NONASK_PHY = 0xd7
DLT_IEEE802_16_MAC_CPS = 0xbc
DLT_IEEE802_16_MAC_CPS_RADIO = 0xc1
DLT_INFINIBAND = 0xf7
DLT_IPFILTER = 0x74
DLT_IPMB = 0xc7
DLT_IPMB_LINUX = 0xd1
DLT_IPMI_HPM_2 = 0x104
DLT_IPNET = 0xe2
DLT_IPOIB = 0xf2
DLT_IPV4 = 0xe4
DLT_IPV6 = 0xe5
DLT_IP_OVER_FC = 0x7a
DLT_ISO_14443 = 0x108
DLT_JUNIPER_ATM1 = 0x89
DLT_JUNIPER_ATM2 = 0x87
DLT_JUNIPER_ATM_CEMIC = 0xee
DLT_JUNIPER_CHDLC = 0xb5
DLT_JUNIPER_ES = 0x84
DLT_JUNIPER_ETHER = 0xb2
DLT_JUNIPER_FIBRECHANNEL = 0xea
DLT_JUNIPER_FRELAY = 0xb4
DLT_JUNIPER_GGSN = 0x85
DLT_JUNIPER_ISM = 0xc2
DLT_JUNIPER_MFR = 0x86
DLT_JUNIPER_MLFR = 0x83
DLT_JUNIPER_MLPPP = 0x82
DLT_JUNIPER_MONITOR = 0xa4
DLT_JUNIPER_PIC_PEER = 0xae
DLT_JUNIPER_PPP = 0xb3
DLT_JUNIPER_PPPOE = 0xa7
DLT_JUNIPER_PPPOE_ATM = 0xa8
DLT_JUNIPER_SERVICES = 0x88
DLT_JUNIPER_SRX_E2E = 0xe9
DLT_JUNIPER_ST = 0xc8
DLT_JUNIPER_VP = 0xb7
DLT_JUNIPER_VS = 0xe8
DLT_LAPB_WITH_DIR = 0xcf
DLT_LAPD = 0xcb
DLT_LIN = 0xd4
DLT_LINUX_EVDEV = 0xd8
DLT_LINUX_IRDA = 0x90
DLT_LINUX_LAPD = 0xb1
DLT_LINUX_PPP_WITHDIRECTION = 0xa6
DLT_LINUX_SLL = 0x71
DLT_LOOP = 0x6c
DLT_LTALK = 0x72
DLT_MATCHING_MAX = 0x109
DLT_MATCHING_MIN = 0x68
DLT_MFR = 0xb6
DLT_MOST = 0xd3
DLT_MPEG_2_TS = 0xf3
DLT_MPLS = 0xdb
DLT_MTP2 = 0x8c
DLT_MTP2_WITH_PHDR = 0x8b
DLT_MTP3 = 0x8d
DLT_MUX27010 = 0xec
DLT_NETANALYZER = 0xf0
DLT_NETANALYZER_TRANSPARENT = 0xf1
DLT_NETLINK = 0xfd
DLT_NFC_LLCP = 0xf5
DLT_NFLOG = 0xef
DLT_NG40 = 0xf4
DLT_NULL = 0x0
DLT_PCI_EXP = 0x7d
DLT_PFLOG = 0x75
DLT_PFSYNC = 0x79
DLT_PKTAP = 0x102
DLT_PPI = 0xc0
DLT_PPP = 0x9
DLT_PPP_BSDOS = 0xe
DLT_PPP_ETHER = 0x33
DLT_PPP_PPPD = 0xa6
DLT_PPP_SERIAL = 0x32
DLT_PPP_WITH_DIR = 0xcc
DLT_PPP_WITH_DIRECTION = 0xa6
DLT_PRISM_HEADER = 0x77
DLT_PROFIBUS_DL = 0x101
DLT_PRONET = 0x4
DLT_RAIF1 = 0xc6
DLT_RAW = 0xc
DLT_RDS = 0x109
DLT_REDBACK_SMARTEDGE = 0x20
DLT_RIO = 0x7c
DLT_RTAC_SERIAL = 0xfa
DLT_SCCP = 0x8e
DLT_SCTP = 0xf8
DLT_SITA = 0xc4
DLT_SLIP = 0x8
DLT_SLIP_BSDOS = 0xd
DLT_STANAG_5066_D_PDU = 0xed
DLT_SUNATM = 0x7b
DLT_SYMANTEC_FIREWALL = 0x63
DLT_TZSP = 0x80
DLT_USB = 0xba
DLT_USBPCAP = 0xf9
DLT_USB_FREEBSD = 0xba
DLT_USB_LINUX = 0xbd
DLT_USB_LINUX_MMAPPED = 0xdc
DLT_USER0 = 0x93
DLT_USER1 = 0x94
DLT_USER10 = 0x9d
DLT_USER11 = 0x9e
DLT_USER12 = 0x9f
DLT_USER13 = 0xa0
DLT_USER14 = 0xa1
DLT_USER15 = 0xa2
DLT_USER2 = 0x95
DLT_USER3 = 0x96
DLT_USER4 = 0x97
DLT_USER5 = 0x98
DLT_USER6 = 0x99
DLT_USER7 = 0x9a
DLT_USER8 = 0x9b
DLT_USER9 = 0x9c
DLT_WATTSTOPPER_DLM = 0x107
DLT_WIHART = 0xdf
DLT_WIRESHARK_UPPER_PDU = 0xfc
DLT_X2E_SERIAL = 0xd5
DLT_X2E_XORAYA = 0xd6
DLT_ZWAVE_R1_R2 = 0x105
DLT_ZWAVE_R3 = 0x106
DT_BLK = 0x6
DT_CHR = 0x2
DT_DIR = 0x4
DT_FIFO = 0x1
DT_LNK = 0xa
DT_REG = 0x8
DT_SOCK = 0xc
DT_UNKNOWN = 0x0
DT_WHT = 0xe
ECHO = 0x8
ECHOCTL = 0x40
ECHOE = 0x2
ECHOK = 0x4
ECHOKE = 0x1
ECHONL = 0x10
ECHOPRT = 0x20
EVFILT_AIO = -0x3
EVFILT_FS = -0x9
EVFILT_LIO = -0xa
EVFILT_PROC = -0x5
EVFILT_PROCDESC = -0x8
EVFILT_READ = -0x1
EVFILT_SENDFILE = -0xc
EVFILT_SIGNAL = -0x6
EVFILT_SYSCOUNT = 0xc
EVFILT_TIMER = -0x7
EVFILT_USER = -0xb
EVFILT_VNODE = -0x4
EVFILT_WRITE = -0x2
EV_ADD = 0x1
EV_CLEAR = 0x20
EV_DELETE = 0x2
EV_DISABLE = 0x8
EV_DISPATCH = 0x80
EV_DROP = 0x1000
EV_ENABLE = 0x4
EV_EOF = 0x8000
EV_ERROR = 0x4000
EV_FLAG1 = 0x2000
EV_FLAG2 = 0x4000
EV_FORCEONESHOT = 0x100
EV_ONESHOT = 0x10
EV_RECEIPT = 0x40
EV_SYSFLAGS = 0xf000
EXTA = 0x4b00
EXTATTR_NAMESPACE_EMPTY = 0x0
EXTATTR_NAMESPACE_SYSTEM = 0x2
EXTATTR_NAMESPACE_USER = 0x1
EXTB = 0x9600
EXTPROC = 0x800
FD_CLOEXEC = 0x1
FD_SETSIZE = 0x400
FLUSHO = 0x800000
F_CANCEL = 0x5
F_DUP2FD = 0xa
F_DUP2FD_CLOEXEC = 0x12
F_DUPFD = 0x0
F_DUPFD_CLOEXEC = 0x11
F_GETFD = 0x1
F_GETFL = 0x3
F_GETLK = 0xb
F_GETOWN = 0x5
F_OGETLK = 0x7
F_OK = 0x0
F_OSETLK = 0x8
F_OSETLKW = 0x9
F_RDAHEAD = 0x10
F_RDLCK = 0x1
F_READAHEAD = 0xf
F_SETFD = 0x2
F_SETFL = 0x4
F_SETLK = 0xc
F_SETLKW = 0xd
F_SETLK_REMOTE = 0xe
F_SETOWN = 0x6
F_UNLCK = 0x2
F_UNLCKSYS = 0x4
F_WRLCK = 0x3
HUPCL = 0x4000
HW_MACHINE = 0x1
ICANON = 0x100
ICMP6_FILTER = 0x12
ICRNL = 0x100
IEXTEN = 0x400
IFAN_ARRIVAL = 0x0
IFAN_DEPARTURE = 0x1
IFF_ALLMULTI = 0x200
IFF_ALTPHYS = 0x4000
IFF_BROADCAST = 0x2
IFF_CANTCHANGE = 0x218f52
IFF_CANTCONFIG = 0x10000
IFF_DEBUG = 0x4
IFF_DRV_OACTIVE = 0x400
IFF_DRV_RUNNING = 0x40
IFF_DYING = 0x200000
IFF_LINK0 = 0x1000
IFF_LINK1 = 0x2000
IFF_LINK2 = 0x4000
IFF_LOOPBACK = 0x8
IFF_MONITOR = 0x40000
IFF_MULTICAST = 0x8000
IFF_NOARP = 0x80
IFF_OACTIVE = 0x400
IFF_POINTOPOINT = 0x10
IFF_PPROMISC = 0x20000
IFF_PROMISC = 0x100
IFF_RENAMING = 0x400000
IFF_RUNNING = 0x40
IFF_SIMPLEX = 0x800
IFF_STATICARP = 0x80000
IFF_UP = 0x1
IFNAMSIZ = 0x10
IFT_BRIDGE = 0xd1
IFT_CARP = 0xf8
IFT_IEEE1394 = 0x90
IFT_INFINIBAND = 0xc7
IFT_L2VLAN = 0x87
IFT_L3IPVLAN = 0x88
IFT_PPP = 0x17
IFT_PROPVIRTUAL = 0x35
IGNBRK = 0x1
IGNCR = 0x80
IGNPAR = 0x4
IMAXBEL = 0x2000
INLCR = 0x40
INPCK = 0x10
IN_CLASSA_HOST = 0xffffff
IN_CLASSA_MAX = 0x80
IN_CLASSA_NET = 0xff000000
IN_CLASSA_NSHIFT = 0x18
IN_CLASSB_HOST = 0xffff
IN_CLASSB_MAX = 0x10000
IN_CLASSB_NET = 0xffff0000
IN_CLASSB_NSHIFT = 0x10
IN_CLASSC_HOST = 0xff
IN_CLASSC_NET = 0xffffff00
IN_CLASSC_NSHIFT = 0x8
IN_CLASSD_HOST = 0xfffffff
IN_CLASSD_NET = 0xf0000000
IN_CLASSD_NSHIFT = 0x1c
IN_LOOPBACKNET = 0x7f
IN_RFC3021_MASK = 0xfffffffe
IPPROTO_3PC = 0x22
IPPROTO_ADFS = 0x44
IPPROTO_AH = 0x33
IPPROTO_AHIP = 0x3d
IPPROTO_APES = 0x63
IPPROTO_ARGUS = 0xd
IPPROTO_AX25 = 0x5d
IPPROTO_BHA = 0x31
IPPROTO_BLT = 0x1e
IPPROTO_BRSATMON = 0x4c
IPPROTO_CARP = 0x70
IPPROTO_CFTP = 0x3e
IPPROTO_CHAOS = 0x10
IPPROTO_CMTP = 0x26
IPPROTO_CPHB = 0x49
IPPROTO_CPNX = 0x48
IPPROTO_DDP = 0x25
IPPROTO_DGP = 0x56
IPPROTO_DIVERT = 0x102
IPPROTO_DONE = 0x101
IPPROTO_DSTOPTS = 0x3c
IPPROTO_EGP = 0x8
IPPROTO_EMCON = 0xe
IPPROTO_ENCAP = 0x62
IPPROTO_EON = 0x50
IPPROTO_ESP = 0x32
IPPROTO_ETHERIP = 0x61
IPPROTO_FRAGMENT = 0x2c
IPPROTO_GGP = 0x3
IPPROTO_GMTP = 0x64
IPPROTO_GRE = 0x2f
IPPROTO_HELLO = 0x3f
IPPROTO_HIP = 0x8b
IPPROTO_HMP = 0x14
IPPROTO_HOPOPTS = 0x0
IPPROTO_ICMP = 0x1
IPPROTO_ICMPV6 = 0x3a
IPPROTO_IDP = 0x16
IPPROTO_IDPR = 0x23
IPPROTO_IDRP = 0x2d
IPPROTO_IGMP = 0x2
IPPROTO_IGP = 0x55
IPPROTO_IGRP = 0x58
IPPROTO_IL = 0x28
IPPROTO_INLSP = 0x34
IPPROTO_INP = 0x20
IPPROTO_IP = 0x0
IPPROTO_IPCOMP = 0x6c
IPPROTO_IPCV = 0x47
IPPROTO_IPEIP = 0x5e
IPPROTO_IPIP = 0x4
IPPROTO_IPPC = 0x43
IPPROTO_IPV4 = 0x4
IPPROTO_IPV6 = 0x29
IPPROTO_IRTP = 0x1c
IPPROTO_KRYPTOLAN = 0x41
IPPROTO_LARP = 0x5b
IPPROTO_LEAF1 = 0x19
IPPROTO_LEAF2 = 0x1a
IPPROTO_MAX = 0x100
IPPROTO_MEAS = 0x13
IPPROTO_MH = 0x87
IPPROTO_MHRP = 0x30
IPPROTO_MICP = 0x5f
IPPROTO_MOBILE = 0x37
IPPROTO_MPLS = 0x89
IPPROTO_MTP = 0x5c
IPPROTO_MUX = 0x12
IPPROTO_ND = 0x4d
IPPROTO_NHRP = 0x36
IPPROTO_NONE = 0x3b
IPPROTO_NSP = 0x1f
IPPROTO_NVPII = 0xb
IPPROTO_OLD_DIVERT = 0xfe
IPPROTO_OSPFIGP = 0x59
IPPROTO_PFSYNC = 0xf0
IPPROTO_PGM = 0x71
IPPROTO_PIGP = 0x9
IPPROTO_PIM = 0x67
IPPROTO_PRM = 0x15
IPPROTO_PUP = 0xc
IPPROTO_PVP = 0x4b
IPPROTO_RAW = 0xff
IPPROTO_RCCMON = 0xa
IPPROTO_RDP = 0x1b
IPPROTO_RESERVED_253 = 0xfd
IPPROTO_RESERVED_254 = 0xfe
IPPROTO_ROUTING = 0x2b
IPPROTO_RSVP = 0x2e
IPPROTO_RVD = 0x42
IPPROTO_SATEXPAK = 0x40
IPPROTO_SATMON = 0x45
IPPROTO_SCCSP = 0x60
IPPROTO_SCTP = 0x84
IPPROTO_SDRP = 0x2a
IPPROTO_SEND = 0x103
IPPROTO_SEP = 0x21
IPPROTO_SHIM6 = 0x8c
IPPROTO_SKIP = 0x39
IPPROTO_SPACER = 0x7fff
IPPROTO_SRPC = 0x5a
IPPROTO_ST = 0x7
IPPROTO_SVMTP = 0x52
IPPROTO_SWIPE = 0x35
IPPROTO_TCF = 0x57
IPPROTO_TCP = 0x6
IPPROTO_TLSP = 0x38
IPPROTO_TP = 0x1d
IPPROTO_TPXX = 0x27
IPPROTO_TRUNK1 = 0x17
IPPROTO_TRUNK2 = 0x18
IPPROTO_TTP = 0x54
IPPROTO_UDP = 0x11
IPPROTO_UDPLITE = 0x88
IPPROTO_VINES = 0x53
IPPROTO_VISA = 0x46
IPPROTO_VMTP = 0x51
IPPROTO_WBEXPAK = 0x4f
IPPROTO_WBMON = 0x4e
IPPROTO_WSN = 0x4a
IPPROTO_XNET = 0xf
IPPROTO_XTP = 0x24
IPV6_AUTOFLOWLABEL = 0x3b
IPV6_BINDANY = 0x40
IPV6_BINDMULTI = 0x41
IPV6_BINDV6ONLY = 0x1b
IPV6_CHECKSUM = 0x1a
IPV6_DEFAULT_MULTICAST_HOPS = 0x1
IPV6_DEFAULT_MULTICAST_LOOP = 0x1
IPV6_DEFHLIM = 0x40
IPV6_DONTFRAG = 0x3e
IPV6_DSTOPTS = 0x32
IPV6_FLOWID = 0x43
IPV6_FLOWINFO_MASK = 0xffffff0f
IPV6_FLOWLABEL_MASK = 0xffff0f00
IPV6_FLOWTYPE = 0x44
IPV6_FRAGTTL = 0x78
IPV6_FW_ADD = 0x1e
IPV6_FW_DEL = 0x1f
IPV6_FW_FLUSH = 0x20
IPV6_FW_GET = 0x22
IPV6_FW_ZERO = 0x21
IPV6_HLIMDEC = 0x1
IPV6_HOPLIMIT = 0x2f
IPV6_HOPOPTS = 0x31
IPV6_IPSEC_POLICY = 0x1c
IPV6_JOIN_GROUP = 0xc
IPV6_LEAVE_GROUP = 0xd
IPV6_MAXHLIM = 0xff
IPV6_MAXOPTHDR = 0x800
IPV6_MAXPACKET = 0xffff
IPV6_MAX_GROUP_SRC_FILTER = 0x200
IPV6_MAX_MEMBERSHIPS = 0xfff
IPV6_MAX_SOCK_SRC_FILTER = 0x80
IPV6_MIN_MEMBERSHIPS = 0x1f
IPV6_MMTU = 0x500
IPV6_MSFILTER = 0x4a
IPV6_MULTICAST_HOPS = 0xa
IPV6_MULTICAST_IF = 0x9
IPV6_MULTICAST_LOOP = 0xb
IPV6_NEXTHOP = 0x30
IPV6_PATHMTU = 0x2c
IPV6_PKTINFO = 0x2e
IPV6_PORTRANGE = 0xe
IPV6_PORTRANGE_DEFAULT = 0x0
IPV6_PORTRANGE_HIGH = 0x1
IPV6_PORTRANGE_LOW = 0x2
IPV6_PREFER_TEMPADDR = 0x3f
IPV6_RECVDSTOPTS = 0x28
IPV6_RECVFLOWID = 0x46
IPV6_RECVHOPLIMIT = 0x25
IPV6_RECVHOPOPTS = 0x27
IPV6_RECVPATHMTU = 0x2b
IPV6_RECVPKTINFO = 0x24
IPV6_RECVRSSBUCKETID = 0x47
IPV6_RECVRTHDR = 0x26
IPV6_RECVTCLASS = 0x39
IPV6_RSSBUCKETID = 0x45
IPV6_RSS_LISTEN_BUCKET = 0x42
IPV6_RTHDR = 0x33
IPV6_RTHDRDSTOPTS = 0x23
IPV6_RTHDR_LOOSE = 0x0
IPV6_RTHDR_STRICT = 0x1
IPV6_RTHDR_TYPE_0 = 0x0
IPV6_SOCKOPT_RESERVED1 = 0x3
IPV6_TCLASS = 0x3d
IPV6_UNICAST_HOPS = 0x4
IPV6_USE_MIN_MTU = 0x2a
IPV6_V6ONLY = 0x1b
IPV6_VERSION = 0x60
IPV6_VERSION_MASK = 0xf0
IP_ADD_MEMBERSHIP = 0xc
IP_ADD_SOURCE_MEMBERSHIP = 0x46
IP_BINDANY = 0x18
IP_BINDMULTI = 0x19
IP_BLOCK_SOURCE = 0x48
IP_DEFAULT_MULTICAST_LOOP = 0x1
IP_DEFAULT_MULTICAST_TTL = 0x1
IP_DF = 0x4000
IP_DONTFRAG = 0x43
IP_DROP_MEMBERSHIP = 0xd
IP_DROP_SOURCE_MEMBERSHIP = 0x47
IP_DUMMYNET3 = 0x31
IP_DUMMYNET_CONFIGURE = 0x3c
IP_DUMMYNET_DEL = 0x3d
IP_DUMMYNET_FLUSH = 0x3e
IP_DUMMYNET_GET = 0x40
IP_FLOWID = 0x5a
IP_FLOWTYPE = 0x5b
IP_FW3 = 0x30
IP_FW_ADD = 0x32
IP_FW_DEL = 0x33
IP_FW_FLUSH = 0x34
IP_FW_GET = 0x36
IP_FW_NAT_CFG = 0x38
IP_FW_NAT_DEL = 0x39
IP_FW_NAT_GET_CONFIG = 0x3a
IP_FW_NAT_GET_LOG = 0x3b
IP_FW_RESETLOG = 0x37
IP_FW_TABLE_ADD = 0x28
IP_FW_TABLE_DEL = 0x29
IP_FW_TABLE_FLUSH = 0x2a
IP_FW_TABLE_GETSIZE = 0x2b
IP_FW_TABLE_LIST = 0x2c
IP_FW_ZERO = 0x35
IP_HDRINCL = 0x2
IP_IPSEC_POLICY = 0x15
IP_MAXPACKET = 0xffff
IP_MAX_GROUP_SRC_FILTER = 0x200
IP_MAX_MEMBERSHIPS = 0xfff
IP_MAX_SOCK_MUTE_FILTER = 0x80
IP_MAX_SOCK_SRC_FILTER = 0x80
IP_MAX_SOURCE_FILTER = 0x400
IP_MF = 0x2000
IP_MINTTL = 0x42
IP_MIN_MEMBERSHIPS = 0x1f
IP_MSFILTER = 0x4a
IP_MSS = 0x240
IP_MULTICAST_IF = 0x9
IP_MULTICAST_LOOP = 0xb
IP_MULTICAST_TTL = 0xa
IP_MULTICAST_VIF = 0xe
IP_OFFMASK = 0x1fff
IP_ONESBCAST = 0x17
IP_OPTIONS = 0x1
IP_PORTRANGE = 0x13
IP_PORTRANGE_DEFAULT = 0x0
IP_PORTRANGE_HIGH = 0x1
IP_PORTRANGE_LOW = 0x2
IP_RECVDSTADDR = 0x7
IP_RECVFLOWID = 0x5d
IP_RECVIF = 0x14
IP_RECVOPTS = 0x5
IP_RECVRETOPTS = 0x6
IP_RECVRSSBUCKETID = 0x5e
IP_RECVTOS = 0x44
IP_RECVTTL = 0x41
IP_RETOPTS = 0x8
IP_RF = 0x8000
IP_RSSBUCKETID = 0x5c
IP_RSS_LISTEN_BUCKET = 0x1a
IP_RSVP_OFF = 0x10
IP_RSVP_ON = 0xf
IP_RSVP_VIF_OFF = 0x12
IP_RSVP_VIF_ON = 0x11
IP_SENDSRCADDR = 0x7
IP_TOS = 0x3
IP_TTL = 0x4
IP_UNBLOCK_SOURCE = 0x49
ISIG = 0x80
ISTRIP = 0x20
IXANY = 0x800
IXOFF = 0x400
IXON = 0x200
KERN_HOSTNAME = 0xa
KERN_OSRELEASE = 0x2
KERN_OSTYPE = 0x1
KERN_VERSION = 0x4
LOCK_EX = 0x2
LOCK_NB = 0x4
LOCK_SH = 0x1
LOCK_UN = 0x8
MADV_AUTOSYNC = 0x7
MADV_CORE = 0x9
MADV_DONTNEED = 0x4
MADV_FREE = 0x5
MADV_NOCORE = 0x8
MADV_NORMAL = 0x0
MADV_NOSYNC = 0x6
MADV_PROTECT = 0xa
MADV_RANDOM = 0x1
MADV_SEQUENTIAL = 0x2
MADV_WILLNEED = 0x3
MAP_ALIGNED_SUPER = 0x1000000
MAP_ALIGNMENT_MASK = -0x1000000
MAP_ALIGNMENT_SHIFT = 0x18
MAP_ANON = 0x1000
MAP_ANONYMOUS = 0x1000
MAP_COPY = 0x2
MAP_EXCL = 0x4000
MAP_FILE = 0x0
MAP_FIXED = 0x10
MAP_GUARD = 0x2000
MAP_HASSEMAPHORE = 0x200
MAP_NOCORE = 0x20000
MAP_NOSYNC = 0x800
MAP_PREFAULT_READ = 0x40000
MAP_PRIVATE = 0x2
MAP_RESERVED0020 = 0x20
MAP_RESERVED0040 = 0x40
MAP_RESERVED0080 = 0x80
MAP_RESERVED0100 = 0x100
MAP_SHARED = 0x1
MAP_STACK = 0x400
MCL_CURRENT = 0x1
MCL_FUTURE = 0x2
MNT_ACLS = 0x8000000
MNT_ASYNC = 0x40
MNT_AUTOMOUNTED = 0x200000000
MNT_BYFSID = 0x8000000
MNT_CMDFLAGS = 0xd0f0000
MNT_DEFEXPORTED = 0x200
MNT_DELEXPORT = 0x20000
MNT_EXKERB = 0x800
MNT_EXPORTANON = 0x400
MNT_EXPORTED = 0x100
MNT_EXPUBLIC = 0x20000000
MNT_EXRDONLY = 0x80
MNT_FORCE = 0x80000
MNT_GJOURNAL = 0x2000000
MNT_IGNORE = 0x800000
MNT_LAZY = 0x3
MNT_LOCAL = 0x1000
MNT_MULTILABEL = 0x4000000
MNT_NFS4ACLS = 0x10
MNT_NOATIME = 0x10000000
MNT_NOCLUSTERR = 0x40000000
MNT_NOCLUSTERW = 0x80000000
MNT_NOEXEC = 0x4
MNT_NONBUSY = 0x4000000
MNT_NOSUID = 0x8
MNT_NOSYMFOLLOW = 0x400000
MNT_NOWAIT = 0x2
MNT_QUOTA = 0x2000
MNT_RDONLY = 0x1
MNT_RELOAD = 0x40000
MNT_ROOTFS = 0x4000
MNT_SNAPSHOT = 0x1000000
MNT_SOFTDEP = 0x200000
MNT_SUIDDIR = 0x100000
MNT_SUJ = 0x100000000
MNT_SUSPEND = 0x4
MNT_SYNCHRONOUS = 0x2
MNT_UNION = 0x20
MNT_UPDATE = 0x10000
MNT_UPDATEMASK = 0x2d8d0807e
MNT_USER = 0x8000
MNT_VISFLAGMASK = 0x3fef0ffff
MNT_WAIT = 0x1
MSG_CMSG_CLOEXEC = 0x40000
MSG_COMPAT = 0x8000
MSG_CTRUNC = 0x20
MSG_DONTROUTE = 0x4
MSG_DONTWAIT = 0x80
MSG_EOF = 0x100
MSG_EOR = 0x8
MSG_NBIO = 0x4000
MSG_NOSIGNAL = 0x20000
MSG_NOTIFICATION = 0x2000
MSG_OOB = 0x1
MSG_PEEK = 0x2
MSG_TRUNC = 0x10
MSG_WAITALL = 0x40
MSG_WAITFORONE = 0x80000
MS_ASYNC = 0x1
MS_INVALIDATE = 0x2
MS_SYNC = 0x0
NAME_MAX = 0xff
NET_RT_DUMP = 0x1
NET_RT_FLAGS = 0x2
NET_RT_IFLIST = 0x3
NET_RT_IFLISTL = 0x5
NET_RT_IFMALIST = 0x4
NFDBITS = 0x20
NOFLSH = 0x80000000
NOKERNINFO = 0x2000000
NOTE_ATTRIB = 0x8
NOTE_CHILD = 0x4
NOTE_CLOSE = 0x100
NOTE_CLOSE_WRITE = 0x200
NOTE_DELETE = 0x1
NOTE_EXEC = 0x20000000
NOTE_EXIT = 0x80000000
NOTE_EXTEND = 0x4
NOTE_FFAND = 0x40000000
NOTE_FFCOPY = 0xc0000000
NOTE_FFCTRLMASK = 0xc0000000
NOTE_FFLAGSMASK = 0xffffff
NOTE_FFNOP = 0x0
NOTE_FFOR = 0x80000000
NOTE_FILE_POLL = 0x2
NOTE_FORK = 0x40000000
NOTE_LINK = 0x10
NOTE_LOWAT = 0x1
NOTE_MSECONDS = 0x2
NOTE_NSECONDS = 0x8
NOTE_OPEN = 0x80
NOTE_PCTRLMASK = 0xf0000000
NOTE_PDATAMASK = 0xfffff
NOTE_READ = 0x400
NOTE_RENAME = 0x20
NOTE_REVOKE = 0x40
NOTE_SECONDS = 0x1
NOTE_TRACK = 0x1
NOTE_TRACKERR = 0x2
NOTE_TRIGGER = 0x1000000
NOTE_USECONDS = 0x4
NOTE_WRITE = 0x2
OCRNL = 0x10
ONLCR = 0x2
ONLRET = 0x40
ONOCR = 0x20
ONOEOT = 0x8
OPOST = 0x1
OXTABS = 0x4
O_ACCMODE = 0x3
O_APPEND = 0x8
O_ASYNC = 0x40
O_CLOEXEC = 0x100000
O_CREAT = 0x200
O_DIRECT = 0x10000
O_DIRECTORY = 0x20000
O_EXCL = 0x800
O_EXEC = 0x40000
O_EXLOCK = 0x20
O_FSYNC = 0x80
O_NDELAY = 0x4
O_NOCTTY = 0x8000
O_NOFOLLOW = 0x100
O_NONBLOCK = 0x4
O_RDONLY = 0x0
O_RDWR = 0x2
O_SHLOCK = 0x10
O_SYNC = 0x80
O_TRUNC = 0x400
O_TTY_INIT = 0x80000
O_VERIFY = 0x200000
O_WRONLY = 0x1
PARENB = 0x1000
PARMRK = 0x8
PARODD = 0x2000
PENDIN = 0x20000000
PRIO_PGRP = 0x1
PRIO_PROCESS = 0x0
PRIO_USER = 0x2
PROT_EXEC = 0x4
PROT_NONE = 0x0
PROT_READ = 0x1
PROT_WRITE = 0x2
RLIMIT_AS = 0xa
RLIMIT_CORE = 0x4
RLIMIT_CPU = 0x0
RLIMIT_DATA = 0x2
RLIMIT_FSIZE = 0x1
RLIMIT_MEMLOCK = 0x6
RLIMIT_NOFILE = 0x8
RLIMIT_NPROC = 0x7
RLIMIT_RSS = 0x5
RLIMIT_STACK = 0x3
RLIM_INFINITY = 0x7fffffffffffffff
RTAX_AUTHOR = 0x6
RTAX_BRD = 0x7
RTAX_DST = 0x0
RTAX_GATEWAY = 0x1
RTAX_GENMASK = 0x3
RTAX_IFA = 0x5
RTAX_IFP = 0x4
RTAX_MAX = 0x8
RTAX_NETMASK = 0x2
RTA_AUTHOR = 0x40
RTA_BRD = 0x80
RTA_DST = 0x1
RTA_GATEWAY = 0x2
RTA_GENMASK = 0x8
RTA_IFA = 0x20
RTA_IFP = 0x10
RTA_NETMASK = 0x4
RTF_BLACKHOLE = 0x1000
RTF_BROADCAST = 0x400000
RTF_DONE = 0x40
RTF_DYNAMIC = 0x10
RTF_FIXEDMTU = 0x80000
RTF_FMASK = 0x1004d808
RTF_GATEWAY = 0x2
RTF_GWFLAG_COMPAT = 0x80000000
RTF_HOST = 0x4
RTF_LLDATA = 0x400
RTF_LLINFO = 0x400
RTF_LOCAL = 0x200000
RTF_MODIFIED = 0x20
RTF_MULTICAST = 0x800000
RTF_PINNED = 0x100000
RTF_PROTO1 = 0x8000
RTF_PROTO2 = 0x4000
RTF_PROTO3 = 0x40000
RTF_REJECT = 0x8
RTF_RNH_LOCKED = 0x40000000
RTF_STATIC = 0x800
RTF_STICKY = 0x10000000
RTF_UP = 0x1
RTF_XRESOLVE = 0x200
RTM_ADD = 0x1
RTM_CHANGE = 0x3
RTM_DELADDR = 0xd
RTM_DELETE = 0x2
RTM_DELMADDR = 0x10
RTM_GET = 0x4
RTM_IEEE80211 = 0x12
RTM_IFANNOUNCE = 0x11
RTM_IFINFO = 0xe
RTM_LOCK = 0x8
RTM_LOSING = 0x5
RTM_MISS = 0x7
RTM_NEWADDR = 0xc
RTM_NEWMADDR = 0xf
RTM_REDIRECT = 0x6
RTM_RESOLVE = 0xb
RTM_RTTUNIT = 0xf4240
RTM_VERSION = 0x5
RTV_EXPIRE = 0x4
RTV_HOPCOUNT = 0x2
RTV_MTU = 0x1
RTV_RPIPE = 0x8
RTV_RTT = 0x40
RTV_RTTVAR = 0x80
RTV_SPIPE = 0x10
RTV_SSTHRESH = 0x20
RTV_WEIGHT = 0x100
RT_ALL_FIBS = -0x1
RT_BLACKHOLE = 0x40
RT_CACHING_CONTEXT = 0x1
RT_DEFAULT_FIB = 0x0
RT_HAS_GW = 0x80
RT_HAS_HEADER = 0x10
RT_HAS_HEADER_BIT = 0x4
RT_L2_ME = 0x4
RT_L2_ME_BIT = 0x2
RT_LLE_CACHE = 0x100
RT_MAY_LOOP = 0x8
RT_MAY_LOOP_BIT = 0x3
RT_NORTREF = 0x2
RT_REJECT = 0x20
RUSAGE_CHILDREN = -0x1
RUSAGE_SELF = 0x0
RUSAGE_THREAD = 0x1
SCM_BINTIME = 0x4
SCM_CREDS = 0x3
SCM_RIGHTS = 0x1
SCM_TIMESTAMP = 0x2
SHUT_RD = 0x0
SHUT_RDWR = 0x2
SHUT_WR = 0x1
SIOCADDMULTI = 0x80206931
SIOCAIFADDR = 0x8040691a
SIOCAIFGROUP = 0x80246987
SIOCATMARK = 0x40047307
SIOCDELMULTI = 0x80206932
SIOCDIFADDR = 0x80206919
SIOCDIFGROUP = 0x80246989
SIOCDIFPHYADDR = 0x80206949
SIOCGDRVSPEC = 0xc01c697b
SIOCGETSGCNT = 0xc0147210
SIOCGETVIFCNT = 0xc014720f
SIOCGHIWAT = 0x40047301
SIOCGHWADDR = 0xc020693e
SIOCGI2C = 0xc020693d
SIOCGIFADDR = 0xc0206921
SIOCGIFBRDADDR = 0xc0206923
SIOCGIFCAP = 0xc020691f
SIOCGIFCONF = 0xc0086924
SIOCGIFDESCR = 0xc020692a
SIOCGIFDSTADDR = 0xc0206922
SIOCGIFFIB = 0xc020695c
SIOCGIFFLAGS = 0xc0206911
SIOCGIFGENERIC = 0xc020693a
SIOCGIFGMEMB = 0xc024698a
SIOCGIFGROUP = 0xc0246988
SIOCGIFINDEX = 0xc0206920
SIOCGIFMAC = 0xc0206926
SIOCGIFMEDIA = 0xc0286938
SIOCGIFMETRIC = 0xc0206917
SIOCGIFMTU = 0xc0206933
SIOCGIFNETMASK = 0xc0206925
SIOCGIFPDSTADDR = 0xc0206948
SIOCGIFPHYS = 0xc0206935
SIOCGIFPSRCADDR = 0xc0206947
SIOCGIFSTATUS = 0xc331693b
SIOCGIFXMEDIA = 0xc028698b
SIOCGLOWAT = 0x40047303
SIOCGPGRP = 0x40047309
SIOCGPRIVATE_0 = 0xc0206950
SIOCGPRIVATE_1 = 0xc0206951
SIOCGTUNFIB = 0xc020695e
SIOCIFCREATE = 0xc020697a
SIOCIFCREATE2 = 0xc020697c
SIOCIFDESTROY = 0x80206979
SIOCIFGCLONERS = 0xc00c6978
SIOCSDRVSPEC = 0x801c697b
SIOCSHIWAT = 0x80047300
SIOCSIFADDR = 0x8020690c
SIOCSIFBRDADDR = 0x80206913
SIOCSIFCAP = 0x8020691e
SIOCSIFDESCR = 0x80206929
SIOCSIFDSTADDR = 0x8020690e
SIOCSIFFIB = 0x8020695d
SIOCSIFFLAGS = 0x80206910
SIOCSIFGENERIC = 0x80206939
SIOCSIFLLADDR = 0x8020693c
SIOCSIFMAC = 0x80206927
SIOCSIFMEDIA = 0xc0206937
SIOCSIFMETRIC = 0x80206918
SIOCSIFMTU = 0x80206934
SIOCSIFNAME = 0x80206928
SIOCSIFNETMASK = 0x80206916
SIOCSIFPHYADDR = 0x80406946
SIOCSIFPHYS = 0x80206936
SIOCSIFRVNET = 0xc020695b
SIOCSIFVNET = 0xc020695a
SIOCSLOWAT = 0x80047302
SIOCSPGRP = 0x80047308
SIOCSTUNFIB = 0x8020695f
SOCK_CLOEXEC = 0x10000000
SOCK_DGRAM = 0x2
SOCK_MAXADDRLEN = 0xff
SOCK_NONBLOCK = 0x20000000
SOCK_RAW = 0x3
SOCK_RDM = 0x4
SOCK_SEQPACKET = 0x5
SOCK_STREAM = 0x1
SOL_SOCKET = 0xffff
SOMAXCONN = 0x80
SO_ACCEPTCONN = 0x2
SO_ACCEPTFILTER = 0x1000
SO_BINTIME = 0x2000
SO_BROADCAST = 0x20
SO_DEBUG = 0x1
SO_DONTROUTE = 0x10
SO_ERROR = 0x1007
SO_KEEPALIVE = 0x8
SO_LABEL = 0x1009
SO_LINGER = 0x80
SO_LISTENINCQLEN = 0x1013
SO_LISTENQLEN = 0x1012
SO_LISTENQLIMIT = 0x1011
SO_NOSIGPIPE = 0x800
SO_NO_DDP = 0x8000
SO_NO_OFFLOAD = 0x4000
SO_OOBINLINE = 0x100
SO_PEERLABEL = 0x1010
SO_PROTOCOL = 0x1016
SO_PROTOTYPE = 0x1016
SO_RCVBUF = 0x1002
SO_RCVLOWAT = 0x1004
SO_RCVTIMEO = 0x1006
SO_REUSEADDR = 0x4
SO_REUSEPORT = 0x200
SO_SETFIB = 0x1014
SO_SNDBUF = 0x1001
SO_SNDLOWAT = 0x1003
SO_SNDTIMEO = 0x1005
SO_TIMESTAMP = 0x400
SO_TYPE = 0x1008
SO_USELOOPBACK = 0x40
SO_USER_COOKIE = 0x1015
SO_VENDOR = 0x80000000
S_BLKSIZE = 0x200
S_IEXEC = 0x40
S_IFBLK = 0x6000
S_IFCHR = 0x2000
S_IFDIR = 0x4000
S_IFIFO = 0x1000
S_IFLNK = 0xa000
S_IFMT = 0xf000
S_IFREG = 0x8000
S_IFSOCK = 0xc000
S_IFWHT = 0xe000
S_IREAD = 0x100
S_IRGRP = 0x20
S_IROTH = 0x4
S_IRUSR = 0x100
S_IRWXG = 0x38
S_IRWXO = 0x7
S_IRWXU = 0x1c0
S_ISGID = 0x400
S_ISTXT = 0x200
S_ISUID = 0x800
S_ISVTX = 0x200
S_IWGRP = 0x10
S_IWOTH = 0x2
S_IWRITE = 0x80
S_IWUSR = 0x80
S_IXGRP = 0x8
S_IXOTH = 0x1
S_IXUSR = 0x40
TAB0 = 0x0
TAB3 = 0x4
TABDLY = 0x4
TCIFLUSH = 0x1
TCIOFF = 0x3
TCIOFLUSH = 0x3
TCION = 0x4
TCOFLUSH = 0x2
TCOOFF = 0x1
TCOON = 0x2
TCP_CA_NAME_MAX = 0x10
TCP_CCALGOOPT = 0x41
TCP_CONGESTION = 0x40
TCP_FASTOPEN = 0x401
TCP_FUNCTION_BLK = 0x2000
TCP_FUNCTION_NAME_LEN_MAX = 0x20
TCP_INFO = 0x20
TCP_KEEPCNT = 0x400
TCP_KEEPIDLE = 0x100
TCP_KEEPINIT = 0x80
TCP_KEEPINTVL = 0x200
TCP_MAXBURST = 0x4
TCP_MAXHLEN = 0x3c
TCP_MAXOLEN = 0x28
TCP_MAXSEG = 0x2
TCP_MAXWIN = 0xffff
TCP_MAX_SACK = 0x4
TCP_MAX_WINSHIFT = 0xe
TCP_MD5SIG = 0x10
TCP_MINMSS = 0xd8
TCP_MSS = 0x218
TCP_NODELAY = 0x1
TCP_NOOPT = 0x8
TCP_NOPUSH = 0x4
TCP_PCAP_IN = 0x1000
TCP_PCAP_OUT = 0x800
TCP_VENDOR = 0x80000000
TCSAFLUSH = 0x2
TIOCCBRK = 0x2000747a
TIOCCDTR = 0x20007478
TIOCCONS = 0x80047462
TIOCDRAIN = 0x2000745e
TIOCEXCL = 0x2000740d
TIOCEXT = 0x80047460
TIOCFLUSH = 0x80047410
TIOCGDRAINWAIT = 0x40047456
TIOCGETA = 0x402c7413
TIOCGETD = 0x4004741a
TIOCGPGRP = 0x40047477
TIOCGPTN = 0x4004740f
TIOCGSID = 0x40047463
TIOCGWINSZ = 0x40087468
TIOCMBIC = 0x8004746b
TIOCMBIS = 0x8004746c
TIOCMGDTRWAIT = 0x4004745a
TIOCMGET = 0x4004746a
TIOCMSDTRWAIT = 0x8004745b
TIOCMSET = 0x8004746d
TIOCM_CAR = 0x40
TIOCM_CD = 0x40
TIOCM_CTS = 0x20
TIOCM_DCD = 0x40
TIOCM_DSR = 0x100
TIOCM_DTR = 0x2
TIOCM_LE = 0x1
TIOCM_RI = 0x80
TIOCM_RNG = 0x80
TIOCM_RTS = 0x4
TIOCM_SR = 0x10
TIOCM_ST = 0x8
TIOCNOTTY = 0x20007471
TIOCNXCL = 0x2000740e
TIOCOUTQ = 0x40047473
TIOCPKT = 0x80047470
TIOCPKT_DATA = 0x0
TIOCPKT_DOSTOP = 0x20
TIOCPKT_FLUSHREAD = 0x1
TIOCPKT_FLUSHWRITE = 0x2
TIOCPKT_IOCTL = 0x40
TIOCPKT_NOSTOP = 0x10
TIOCPKT_START = 0x8
TIOCPKT_STOP = 0x4
TIOCPTMASTER = 0x2000741c
TIOCSBRK = 0x2000747b
TIOCSCTTY = 0x20007461
TIOCSDRAINWAIT = 0x80047457
TIOCSDTR = 0x20007479
TIOCSETA = 0x802c7414
TIOCSETAF = 0x802c7416
TIOCSETAW = 0x802c7415
TIOCSETD = 0x8004741b
TIOCSIG = 0x2004745f
TIOCSPGRP = 0x80047476
TIOCSTART = 0x2000746e
TIOCSTAT = 0x20007465
TIOCSTI = 0x80017472
TIOCSTOP = 0x2000746f
TIOCSWINSZ = 0x80087467
TIOCTIMESTAMP = 0x40107459
TIOCUCNTL = 0x80047466
TOSTOP = 0x400000
VDISCARD = 0xf
VDSUSP = 0xb
VEOF = 0x0
VEOL = 0x1
VEOL2 = 0x2
VERASE = 0x3
VERASE2 = 0x7
VINTR = 0x8
VKILL = 0x5
VLNEXT = 0xe
VMIN = 0x10
VQUIT = 0x9
VREPRINT = 0x6
VSTART = 0xc
VSTATUS = 0x12
VSTOP = 0xd
VSUSP = 0xa
VTIME = 0x11
VWERASE = 0x4
WCONTINUED = 0x4
WCOREFLAG = 0x80
WEXITED = 0x10
WLINUXCLONE = 0x80000000
WNOHANG = 0x1
WNOWAIT = 0x8
WSTOPPED = 0x2
WTRAPPED = 0x20
WUNTRACED = 0x2
)
// Errors
const (
E2BIG = syscall.Errno(0x7)
EACCES = syscall.Errno(0xd)
EADDRINUSE = syscall.Errno(0x30)
EADDRNOTAVAIL = syscall.Errno(0x31)
EAFNOSUPPORT = syscall.Errno(0x2f)
EAGAIN = syscall.Errno(0x23)
EALREADY = syscall.Errno(0x25)
EAUTH = syscall.Errno(0x50)
EBADF = syscall.Errno(0x9)
EBADMSG = syscall.Errno(0x59)
EBADRPC = syscall.Errno(0x48)
EBUSY = syscall.Errno(0x10)
ECANCELED = syscall.Errno(0x55)
ECAPMODE = syscall.Errno(0x5e)
ECHILD = syscall.Errno(0xa)
ECONNABORTED = syscall.Errno(0x35)
ECONNREFUSED = syscall.Errno(0x3d)
ECONNRESET = syscall.Errno(0x36)
EDEADLK = syscall.Errno(0xb)
EDESTADDRREQ = syscall.Errno(0x27)
EDOM = syscall.Errno(0x21)
EDOOFUS = syscall.Errno(0x58)
EDQUOT = syscall.Errno(0x45)
EEXIST = syscall.Errno(0x11)
EFAULT = syscall.Errno(0xe)
EFBIG = syscall.Errno(0x1b)
EFTYPE = syscall.Errno(0x4f)
EHOSTDOWN = syscall.Errno(0x40)
EHOSTUNREACH = syscall.Errno(0x41)
EIDRM = syscall.Errno(0x52)
EILSEQ = syscall.Errno(0x56)
EINPROGRESS = syscall.Errno(0x24)
EINTR = syscall.Errno(0x4)
EINVAL = syscall.Errno(0x16)
EIO = syscall.Errno(0x5)
EISCONN = syscall.Errno(0x38)
EISDIR = syscall.Errno(0x15)
ELAST = syscall.Errno(0x60)
ELOOP = syscall.Errno(0x3e)
EMFILE = syscall.Errno(0x18)
EMLINK = syscall.Errno(0x1f)
EMSGSIZE = syscall.Errno(0x28)
EMULTIHOP = syscall.Errno(0x5a)
ENAMETOOLONG = syscall.Errno(0x3f)
ENEEDAUTH = syscall.Errno(0x51)
ENETDOWN = syscall.Errno(0x32)
ENETRESET = syscall.Errno(0x34)
ENETUNREACH = syscall.Errno(0x33)
ENFILE = syscall.Errno(0x17)
ENOATTR = syscall.Errno(0x57)
ENOBUFS = syscall.Errno(0x37)
ENODEV = syscall.Errno(0x13)
ENOENT = syscall.Errno(0x2)
ENOEXEC = syscall.Errno(0x8)
ENOLCK = syscall.Errno(0x4d)
ENOLINK = syscall.Errno(0x5b)
ENOMEM = syscall.Errno(0xc)
ENOMSG = syscall.Errno(0x53)
ENOPROTOOPT = syscall.Errno(0x2a)
ENOSPC = syscall.Errno(0x1c)
ENOSYS = syscall.Errno(0x4e)
ENOTBLK = syscall.Errno(0xf)
ENOTCAPABLE = syscall.Errno(0x5d)
ENOTCONN = syscall.Errno(0x39)
ENOTDIR = syscall.Errno(0x14)
ENOTEMPTY = syscall.Errno(0x42)
ENOTRECOVERABLE = syscall.Errno(0x5f)
ENOTSOCK = syscall.Errno(0x26)
ENOTSUP = syscall.Errno(0x2d)
ENOTTY = syscall.Errno(0x19)
ENXIO = syscall.Errno(0x6)
EOPNOTSUPP = syscall.Errno(0x2d)
EOVERFLOW = syscall.Errno(0x54)
EOWNERDEAD = syscall.Errno(0x60)
EPERM = syscall.Errno(0x1)
EPFNOSUPPORT = syscall.Errno(0x2e)
EPIPE = syscall.Errno(0x20)
EPROCLIM = syscall.Errno(0x43)
EPROCUNAVAIL = syscall.Errno(0x4c)
EPROGMISMATCH = syscall.Errno(0x4b)
EPROGUNAVAIL = syscall.Errno(0x4a)
EPROTO = syscall.Errno(0x5c)
EPROTONOSUPPORT = syscall.Errno(0x2b)
EPROTOTYPE = syscall.Errno(0x29)
ERANGE = syscall.Errno(0x22)
EREMOTE = syscall.Errno(0x47)
EROFS = syscall.Errno(0x1e)
ERPCMISMATCH = syscall.Errno(0x49)
ESHUTDOWN = syscall.Errno(0x3a)
ESOCKTNOSUPPORT = syscall.Errno(0x2c)
ESPIPE = syscall.Errno(0x1d)
ESRCH = syscall.Errno(0x3)
ESTALE = syscall.Errno(0x46)
ETIMEDOUT = syscall.Errno(0x3c)
ETOOMANYREFS = syscall.Errno(0x3b)
ETXTBSY = syscall.Errno(0x1a)
EUSERS = syscall.Errno(0x44)
EWOULDBLOCK = syscall.Errno(0x23)
EXDEV = syscall.Errno(0x12)
)
// Signals
const (
SIGABRT = syscall.Signal(0x6)
SIGALRM = syscall.Signal(0xe)
SIGBUS = syscall.Signal(0xa)
SIGCHLD = syscall.Signal(0x14)
SIGCONT = syscall.Signal(0x13)
SIGEMT = syscall.Signal(0x7)
SIGFPE = syscall.Signal(0x8)
SIGHUP = syscall.Signal(0x1)
SIGILL = syscall.Signal(0x4)
SIGINFO = syscall.Signal(0x1d)
SIGINT = syscall.Signal(0x2)
SIGIO = syscall.Signal(0x17)
SIGIOT = syscall.Signal(0x6)
SIGKILL = syscall.Signal(0x9)
SIGLIBRT = syscall.Signal(0x21)
SIGLWP = syscall.Signal(0x20)
SIGPIPE = syscall.Signal(0xd)
SIGPROF = syscall.Signal(0x1b)
SIGQUIT = syscall.Signal(0x3)
SIGSEGV = syscall.Signal(0xb)
SIGSTOP = syscall.Signal(0x11)
SIGSYS = syscall.Signal(0xc)
SIGTERM = syscall.Signal(0xf)
SIGTHR = syscall.Signal(0x20)
SIGTRAP = syscall.Signal(0x5)
SIGTSTP = syscall.Signal(0x12)
SIGTTIN = syscall.Signal(0x15)
SIGTTOU = syscall.Signal(0x16)
SIGURG = syscall.Signal(0x10)
SIGUSR1 = syscall.Signal(0x1e)
SIGUSR2 = syscall.Signal(0x1f)
SIGVTALRM = syscall.Signal(0x1a)
SIGWINCH = syscall.Signal(0x1c)
SIGXCPU = syscall.Signal(0x18)
SIGXFSZ = syscall.Signal(0x19)
)
// Error table
var errorList = [...]struct {
num syscall.Errno
name string
desc string
}{
{1, "EPERM", "operation not permitted"},
{2, "ENOENT", "no such file or directory"},
{3, "ESRCH", "no such process"},
{4, "EINTR", "interrupted system call"},
{5, "EIO", "input/output error"},
{6, "ENXIO", "device not configured"},
{7, "E2BIG", "argument list too long"},
{8, "ENOEXEC", "exec format error"},
{9, "EBADF", "bad file descriptor"},
{10, "ECHILD", "no child processes"},
{11, "EDEADLK", "resource deadlock avoided"},
{12, "ENOMEM", "cannot allocate memory"},
{13, "EACCES", "permission denied"},
{14, "EFAULT", "bad address"},
{15, "ENOTBLK", "block device required"},
{16, "EBUSY", "device busy"},
{17, "EEXIST", "file exists"},
{18, "EXDEV", "cross-device link"},
{19, "ENODEV", "operation not supported by device"},
{20, "ENOTDIR", "not a directory"},
{21, "EISDIR", "is a directory"},
{22, "EINVAL", "invalid argument"},
{23, "ENFILE", "too many open files in system"},
{24, "EMFILE", "too many open files"},
{25, "ENOTTY", "inappropriate ioctl for device"},
{26, "ETXTBSY", "text file busy"},
{27, "EFBIG", "file too large"},
{28, "ENOSPC", "no space left on device"},
{29, "ESPIPE", "illegal seek"},
{30, "EROFS", "read-only file system"},
{31, "EMLINK", "too many links"},
{32, "EPIPE", "broken pipe"},
{33, "EDOM", "numerical argument out of domain"},
{34, "ERANGE", "result too large"},
{35, "EAGAIN", "resource temporarily unavailable"},
{36, "EINPROGRESS", "operation now in progress"},
{37, "EALREADY", "operation already in progress"},
{38, "ENOTSOCK", "socket operation on non-socket"},
{39, "EDESTADDRREQ", "destination address required"},
{40, "EMSGSIZE", "message too long"},
{41, "EPROTOTYPE", "protocol wrong type for socket"},
{42, "ENOPROTOOPT", "protocol not available"},
{43, "EPROTONOSUPPORT", "protocol not supported"},
{44, "ESOCKTNOSUPPORT", "socket type not supported"},
{45, "EOPNOTSUPP", "operation not supported"},
{46, "EPFNOSUPPORT", "protocol family not supported"},
{47, "EAFNOSUPPORT", "address family not supported by protocol family"},
{48, "EADDRINUSE", "address already in use"},
{49, "EADDRNOTAVAIL", "can't assign requested address"},
{50, "ENETDOWN", "network is down"},
{51, "ENETUNREACH", "network is unreachable"},
{52, "ENETRESET", "network dropped connection on reset"},
{53, "ECONNABORTED", "software caused connection abort"},
{54, "ECONNRESET", "connection reset by peer"},
{55, "ENOBUFS", "no buffer space available"},
{56, "EISCONN", "socket is already connected"},
{57, "ENOTCONN", "socket is not connected"},
{58, "ESHUTDOWN", "can't send after socket shutdown"},
{59, "ETOOMANYREFS", "too many references: can't splice"},
{60, "ETIMEDOUT", "operation timed out"},
{61, "ECONNREFUSED", "connection refused"},
{62, "ELOOP", "too many levels of symbolic links"},
{63, "ENAMETOOLONG", "file name too long"},
{64, "EHOSTDOWN", "host is down"},
{65, "EHOSTUNREACH", "no route to host"},
{66, "ENOTEMPTY", "directory not empty"},
{67, "EPROCLIM", "too many processes"},
{68, "EUSERS", "too many users"},
{69, "EDQUOT", "disc quota exceeded"},
{70, "ESTALE", "stale NFS file handle"},
{71, "EREMOTE", "too many levels of remote in path"},
{72, "EBADRPC", "RPC struct is bad"},
{73, "ERPCMISMATCH", "RPC version wrong"},
{74, "EPROGUNAVAIL", "RPC prog. not avail"},
{75, "EPROGMISMATCH", "program version wrong"},
{76, "EPROCUNAVAIL", "bad procedure for program"},
{77, "ENOLCK", "no locks available"},
{78, "ENOSYS", "function not implemented"},
{79, "EFTYPE", "inappropriate file type or format"},
{80, "EAUTH", "authentication error"},
{81, "ENEEDAUTH", "need authenticator"},
{82, "EIDRM", "identifier removed"},
{83, "ENOMSG", "no message of desired type"},
{84, "EOVERFLOW", "value too large to be stored in data type"},
{85, "ECANCELED", "operation canceled"},
{86, "EILSEQ", "illegal byte sequence"},
{87, "ENOATTR", "attribute not found"},
{88, "EDOOFUS", "programming error"},
{89, "EBADMSG", "bad message"},
{90, "EMULTIHOP", "multihop attempted"},
{91, "ENOLINK", "link has been severed"},
{92, "EPROTO", "protocol error"},
{93, "ENOTCAPABLE", "capabilities insufficient"},
{94, "ECAPMODE", "not permitted in capability mode"},
{95, "ENOTRECOVERABLE", "state not recoverable"},
{96, "EOWNERDEAD", "previous owner died"},
}
// Signal table
var signalList = [...]struct {
num syscall.Signal
name string
desc string
}{
{1, "SIGHUP", "hangup"},
{2, "SIGINT", "interrupt"},
{3, "SIGQUIT", "quit"},
{4, "SIGILL", "illegal instruction"},
{5, "SIGTRAP", "trace/BPT trap"},
{6, "SIGIOT", "abort trap"},
{7, "SIGEMT", "EMT trap"},
{8, "SIGFPE", "floating point exception"},
{9, "SIGKILL", "killed"},
{10, "SIGBUS", "bus error"},
{11, "SIGSEGV", "segmentation fault"},
{12, "SIGSYS", "bad system call"},
{13, "SIGPIPE", "broken pipe"},
{14, "SIGALRM", "alarm clock"},
{15, "SIGTERM", "terminated"},
{16, "SIGURG", "urgent I/O condition"},
{17, "SIGSTOP", "suspended (signal)"},
{18, "SIGTSTP", "suspended"},
{19, "SIGCONT", "continued"},
{20, "SIGCHLD", "child exited"},
{21, "SIGTTIN", "stopped (tty input)"},
{22, "SIGTTOU", "stopped (tty output)"},
{23, "SIGIO", "I/O possible"},
{24, "SIGXCPU", "cputime limit exceeded"},
{25, "SIGXFSZ", "filesize limit exceeded"},
{26, "SIGVTALRM", "virtual timer expired"},
{27, "SIGPROF", "profiling timer expired"},
{28, "SIGWINCH", "window size changes"},
{29, "SIGINFO", "information request"},
{30, "SIGUSR1", "user defined signal 1"},
{31, "SIGUSR2", "user defined signal 2"},
{32, "SIGTHR", "unknown signal"},
{33, "SIGLIBRT", "unknown signal"},
}
```
|
Marcelo Lopes de Faria, or simply Marcelo Lopes (born 17 May 1975), is a Brazilian football defender.
He previously played for União São João and Fortaleza in the Campeonato Brasileiro Série A, and for Potiguar de Mossoró in the Campeonato Potiguar.
References
External links
CBF
1975 births
Living people
Brazilian men's footballers
Guarani FC players
Mirassol Futebol Clube players
Mogi Mirim Esporte Clube players
Fortaleza Esporte Clube players
União São João Esporte Clube players
Esporte Clube Santo André players
Esporte Clube XV de Novembro (Piracicaba) players
Footballers from Paraná (state)
Men's association football defenders
|
Weingarten is a surname. Notable people with the surname include:
Carl Weingarten, musician and photographer
Gene Weingarten (born 1951), humor writer and journalist
Johnny Wayne (born Louis Weingarten; 1918–1990), Canadian comedian and comedy writer
Joe Weingarten (born 1962), German politician
Julius Weingarten (1836–1910), German mathematician
Lawrence Weingarten (1897–1975), film producer
Mordechai Weingarten, Jewish leader in Jerusalem from 1935 to 1948
Paul Weingarten (1886–1948), Moravia-born pianist
Randi Weingarten (born 1957), president of the American Federation of Teachers
Romain Weingarten (born 1926), French writer
|
```scala
package streaming.core.datasource.impl
import org.apache.spark.sql.mlsql.session.MLSQLException
import org.apache.spark.sql.{DataFrame, DataFrameReader}
import streaming.core.datasource._
import streaming.core.datasource.util.MLSQLJobCollect
import streaming.dsl.ScriptSQLExec
import streaming.dsl.auth.{OperateType, TableType}
import streaming.dsl.load.batch.{LogTail, MLSQLAPIExplain, MLSQLConfExplain}
import tech.mlsql.MLSQLEnvKey
import tech.mlsql.core.version.MLSQLVersion
import tech.mlsql.job.MLSQLJobInfo
/**
* 2019-01-11 WilliamZhu(allwefantasy@gmail.com)
*/
class MLSQLSystemTables extends MLSQLSource with MLSQLSourceInfo with MLSQLRegistry {
override def load(reader: DataFrameReader, config: DataSourceConfig): DataFrame = {
    val context = ScriptSQLExec.contextGetOrForTest()
val owner = context.owner
context.execListener.addEnv(MLSQLEnvKey.CONTEXT_SYSTEM_TABLE, "true")
val spark = config.df.get.sparkSession
import spark.implicits._
val jobCollect = new MLSQLJobCollect(spark, owner)
val pathSplitter = "/"
config.path.stripPrefix(pathSplitter).stripSuffix(pathSplitter).split(pathSplitter) match {
case Array("datasources") => {
spark.createDataset(DataSourceRegistry.allSourceNames.toSet.toSeq ++ Seq(
"parquet", "csv", "jsonStr", "csvStr", "json", "text", "orc", "kafka", "kafka8", "kafka9", "crawlersql", "image",
"script", "hive", "xml", "mlsqlAPI", "mlsqlConf"
)).toDF("name")
}
case Array("datasources", "params", item: String) => {
DataSourceRegistry.fetch(item, Map[String, String]()) match {
case Some(ds) => ds.asInstanceOf[MLSQLSourceInfo].explainParams(spark)
case None => spark.createDataset[String](Seq()).toDF("name")
}
}
case Array("jobs") =>
spark.createDataset[MLSQLJobInfo](jobCollect.jobs).toDF()
case Array("jobs", jobGroupId) =>
spark.createDataset(Seq(jobCollect.jobDetail(jobGroupId))).toDF()
case Array("jobs", "v2", jobGroupId) =>
spark.createDataset(Seq(jobCollect.jobDetail(jobGroupId, 2))).toDF()
case Array("jobs", "get", jobGroupId) =>
spark.createDataset[MLSQLJobInfo](jobCollect.getJob(jobGroupId)).toDF()
case Array("progress", jobGroupId) =>
spark.createDataset(jobCollect.jobProgress(jobGroupId)).toDF()
case Array("resource") =>
spark.createDataset(Seq(jobCollect.resourceSummary(null))).toDF()
case Array("resource", jobGroupId) =>
val detail = jobCollect.jobDetail(jobGroupId)
detail.activeJobs.map(_.numActiveTasks)
spark.createDataset(Seq(jobCollect.resourceSummary(jobGroupId))).toDF()
case Array("tables", "tableTypes") =>
spark.createDataset(TableType.toList).toDF()
case Array("tables", "sourceTypes") =>
spark.createDataset(SourceTypeRegistry.sources ++ TableType.toIncludesList).toDF()
case Array("tables", "operateTypes") =>
val res = OperateType.toList
spark.createDataset(res).toDF()
case Array("api", "list") =>
new MLSQLAPIExplain(spark).explain
case Array("conf", "list") =>
new MLSQLConfExplain(spark).explain
case Array("log", offset) =>
val filePath = config.config.getOrElse("filePath", "")
val msgs = LogTail.log(owner, filePath, offset.toLong)
spark.createDataset(Seq(msgs)).toDF("offset", "value")
case Array("version") =>
spark.createDataset(Seq(MLSQLVersion.version())).toDF()
case _ => throw new MLSQLException(
s"""
|path [${config.path}] is not found. please check the doc website for more details:
|path_to_url
|or
|path_to_url
""".stripMargin)
}
}
override def sourceInfo(config: DataAuthConfig): SourceInfo = SourceInfo(fullFormat, "mlsql_system_db", "system_info")
override def register(): Unit = {
DataSourceRegistry.register(MLSQLDataSourceKey(fullFormat, MLSQLSparkDataSourceType), this)
DataSourceRegistry.register(MLSQLDataSourceKey(shortFormat, MLSQLSparkDataSourceType), this)
}
override def fullFormat: String = "_mlsql_"
override def shortFormat: String = "_mlsql_"
}
```
|
```python
from handlers.douban import DBFavoriteHandler, DBLoginHandler, \
ValidCodeHandler, DBLogoutHandler
from handlers.home import HomeHandler
from handlers.player import AlbumHandler, ArtistHandler
from handlers.playlist import AddMyPlaylistHandler, ClonePlaylistHandler, \
CreateMyPlaylistHandler, PlaylistHandler, \
RemoveMyPlaylistHandler, RemoveTrackHandler, \
ShowMyPlaylistHandler, ShowPlaylistHandler
from handlers.search import SearchHandler
from handlers.trackfile import TrackFileHandler
url_patterns = [
(r"/", HomeHandler),
(r"/search", SearchHandler),
# player handlers
(r"/playlist", PlaylistHandler),
(r"/artist", ArtistHandler),
(r"/album", AlbumHandler),
# track proxy
(r"/track_file", TrackFileHandler),
# playlist
(r"/show_playlist", ShowPlaylistHandler),
(r"/add_myplaylist", AddMyPlaylistHandler),
(r"/create_myplaylist", CreateMyPlaylistHandler),
(r"/show_myplaylist", ShowMyPlaylistHandler),
(r"/remove_track_from_myplaylist", RemoveTrackHandler),
(r"/remove_myplaylist", RemoveMyPlaylistHandler),
(r"/clone_playlist", ClonePlaylistHandler),
# douban handlers
(r"/dbvalidcode", ValidCodeHandler),
(r"/dblogin", DBLoginHandler),
(r"/dblogout", DBLogoutHandler),
(r"/dbfav", DBFavoriteHandler),
]
```
|
The following is a list of the television networks and announcers who have broadcast college football's Alamo Bowl throughout the years.
Television
2020s
2010s
2000s
1990s
Radio
2020s
2010s
2000s
References
Alamo
Broadcasters
Alamo Bowl
Alamo Bowl
Alamo Bowl broadcasters
|
```objective-c
/* dso.h -*- mode:C; c-file-style: "eay" -*- */
/* Written by Geoff Thorpe (geoff@geoffthorpe.net) for the OpenSSL
* project 2000.
*/
/* ====================================================================
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
*
* 3. All advertising materials mentioning features or use of this
* software must display the following acknowledgment:
* "This product includes software developed by the OpenSSL Project
* for use in the OpenSSL Toolkit. (path_to_url"
*
* 4. The names "OpenSSL Toolkit" and "OpenSSL Project" must not be used to
* endorse or promote products derived from this software without
* prior written permission. For written permission, please contact
* licensing@OpenSSL.org.
*
* 5. Products derived from this software may not be called "OpenSSL"
* nor may "OpenSSL" appear in their names without prior written
* permission of the OpenSSL Project.
*
* 6. Redistributions of any form whatsoever must retain the following
* acknowledgment:
* "This product includes software developed by the OpenSSL Project
* for use in the OpenSSL Toolkit (path_to_url"
*
* THIS SOFTWARE IS PROVIDED BY THE OpenSSL PROJECT ``AS IS'' AND ANY
* EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE OpenSSL PROJECT OR
* ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
* OF THE POSSIBILITY OF SUCH DAMAGE.
* ====================================================================
*
* This product includes cryptographic software written by Eric Young
* (eay@cryptsoft.com). This product includes software written by Tim
* Hudson (tjh@cryptsoft.com).
*
*/
#ifndef HEADER_DSO_H
#define HEADER_DSO_H
#include <openssl/crypto.h>
#ifdef __cplusplus
extern "C" {
#endif
/* These values are used as commands to DSO_ctrl() */
#define DSO_CTRL_GET_FLAGS 1
#define DSO_CTRL_SET_FLAGS 2
#define DSO_CTRL_OR_FLAGS 3
/* By default, DSO_load() will translate the provided filename into a form
* typical for the platform (more specifically the DSO_METHOD) using the
* dso_name_converter function of the method. Eg. win32 will transform "blah"
* into "blah.dll", and dlfcn will transform it into "libblah.so". The
 * behaviour can be overridden by setting the name_converter callback in the DSO
* object (using DSO_set_name_converter()). This callback could even utilise
* the DSO_METHOD's converter too if it only wants to override behaviour for
* one or two possible DSO methods. However, the following flag can be set in a
* DSO to prevent *any* native name-translation at all - eg. if the caller has
* prompted the user for a path to a driver library so the filename should be
* interpreted as-is. */
#define DSO_FLAG_NO_NAME_TRANSLATION 0x01
/* An extra flag to give if only the extension should be added as
* translation. This is obviously only of importance on Unix and
* other operating systems where the translation also may prefix
* the name with something, like 'lib', and ignored everywhere else.
* This flag is also ignored if DSO_FLAG_NO_NAME_TRANSLATION is used
* at the same time. */
#define DSO_FLAG_NAME_TRANSLATION_EXT_ONLY 0x02
/* The following flag controls the translation of symbol names to upper
* case. This is currently only being implemented for OpenVMS.
*/
#define DSO_FLAG_UPCASE_SYMBOL 0x10
/* This flag loads the library with public symbols.
* Meaning: The exported symbols of this library are public
* to all libraries loaded after this library.
* At the moment only implemented in unix.
*/
#define DSO_FLAG_GLOBAL_SYMBOLS 0x20
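/*
 * Usage sketch (illustrative only; not part of the original header).
 * DSO_load(), DSO_bind_func() and DSO_free() are the public DSO API;
 * the library path and symbol name below are made up. Passing
 * DSO_FLAG_NO_NAME_TRANSLATION loads the user-supplied path as-is:
 *
 *     DSO *dso = DSO_load(NULL, "/opt/drivers/custom.so", NULL,
 *                         DSO_FLAG_NO_NAME_TRANSLATION);
 *     if (dso != NULL) {
 *         DSO_FUNC_TYPE fn = DSO_bind_func(dso, "driver_entry");
 *         if (fn != NULL)
 *             fn();
 *         DSO_free(dso);
 *     }
 */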
typedef void (*DSO_FUNC_TYPE)(void);
typedef struct dso_st DSO;
/* The function prototype used for method functions (or caller-provided
* callbacks) that transform filenames. They are passed a DSO structure pointer
 * (or NULL if they are to be used independently of a DSO object) and a
* filename to transform. They should either return NULL (if there is an error
* condition) or a newly allocated string containing the transformed form that
* the caller will need to free with OPENSSL_free() when done. */
typedef char* (*DSO_NAME_CONVERTER_FUNC)(DSO *, const char *);
/* The function prototype used for method functions (or caller-provided
* callbacks) that merge two file specifications. They are passed a
 * DSO structure pointer (or NULL if they are to be used independently of
* a DSO object) and two file specifications to merge. They should
* either return NULL (if there is an error condition) or a newly allocated
* string containing the result of merging that the caller will need
* to free with OPENSSL_free() when done.
* Here, merging means that bits and pieces are taken from each of the
* file specifications and added together in whatever fashion that is
* sensible for the DSO method in question. The only rule that really
 * applies is that if the two specifications contain pieces of the same
* type, the copy from the first string takes priority. One could see
* it as the first specification is the one given by the user and the
* second being a bunch of defaults to add on if they're missing in the
* first. */
typedef char* (*DSO_MERGER_FUNC)(DSO *, const char *, const char *);
typedef struct dso_meth_st
{
const char *name;
/* Loads a shared library, NB: new DSO_METHODs must ensure that a
* successful load populates the loaded_filename field, and likewise a
* successful unload OPENSSL_frees and NULLs it out. */
int (*dso_load)(DSO *dso);
/* Unloads a shared library */
int (*dso_unload)(DSO *dso);
/* Binds a variable */
void *(*dso_bind_var)(DSO *dso, const char *symname);
/* Binds a function - assumes a return type of DSO_FUNC_TYPE.
* This should be cast to the real function prototype by the
* caller. Platforms that don't have compatible representations
* for different prototypes (this is possible within ANSI C)
* are highly unlikely to have shared libraries at all, let
* alone a DSO_METHOD implemented for them. */
DSO_FUNC_TYPE (*dso_bind_func)(DSO *dso, const char *symname);
/* I don't think this would actually be used in any circumstances. */
#if 0
/* Unbinds a variable */
int (*dso_unbind_var)(DSO *dso, char *symname, void *symptr);
/* Unbinds a function */
int (*dso_unbind_func)(DSO *dso, char *symname, DSO_FUNC_TYPE symptr);
#endif
/* The generic (yuck) "ctrl()" function. NB: Negative return
* values (rather than zero) indicate errors. */
long (*dso_ctrl)(DSO *dso, int cmd, long larg, void *parg);
/* The default DSO_METHOD-specific function for converting filenames to
* a canonical native form. */
DSO_NAME_CONVERTER_FUNC dso_name_converter;
/* The default DSO_METHOD-specific function for merging two file
* specifications into one. */
DSO_MERGER_FUNC dso_merger;
/* [De]Initialisation handlers. */
int (*init)(DSO *dso);
int (*finish)(DSO *dso);
/* Return pathname of the module containing location */
int (*pathbyaddr)(void *addr,char *path,int sz);
/* Perform global symbol lookup, i.e. among *all* modules */
void *(*globallookup)(const char *symname);
} DSO_METHOD;
/**********************************************************************/
/* The low-level handle type used to refer to a loaded shared library */
struct dso_st
{
DSO_METHOD *meth;
/* Standard dlopen uses a (void *). Win32 uses a HANDLE. VMS
* doesn't use anything but will need to cache the filename
* for use in the dso_bind handler. All in all, let each
* method control its own destiny. "Handles" and such go in
* a STACK. */
STACK_OF(void) *meth_data;
int references;
int flags;
/* For use by applications etc ... use this for your bits'n'pieces,
* don't touch meth_data! */
CRYPTO_EX_DATA ex_data;
/* If this callback function pointer is set to non-NULL, then it will
* be used in DSO_load() in place of meth->dso_name_converter. NB: This
* should normally be set using DSO_set_name_converter(). */
DSO_NAME_CONVERTER_FUNC name_converter;
/* If this callback function pointer is set to non-NULL, then it will
* be used in DSO_load() in place of meth->dso_merger. NB: This
* should normally be set using DSO_set_merger(). */
DSO_MERGER_FUNC merger;
/* This is populated with (a copy of) the platform-independent
* filename used for this DSO. */
char *filename;
/* This is populated with (a copy of) the translated filename by which
* the DSO was actually loaded. It is NULL iff the DSO is not currently
* loaded. NB: This is here because the filename translation process
* may involve a callback being invoked more than once not only to
* convert to a platform-specific form, but also to try different
* filenames in the process of trying to perform a load. As such, this
* variable can be used to indicate (a) whether this DSO structure
* corresponds to a loaded library or not, and (b) the filename with
* which it was actually loaded. */
char *loaded_filename;
};
DSO * DSO_new(void);
DSO * DSO_new_method(DSO_METHOD *method);
int DSO_free(DSO *dso);
int DSO_flags(DSO *dso);
int DSO_up_ref(DSO *dso);
long DSO_ctrl(DSO *dso, int cmd, long larg, void *parg);
/* This function sets the DSO's name_converter callback. If it is non-NULL,
* then it will be used instead of the associated DSO_METHOD's function. If
* oldcb is non-NULL then it is set to the function pointer value being
* replaced. Return value is non-zero for success. */
int DSO_set_name_converter(DSO *dso, DSO_NAME_CONVERTER_FUNC cb,
DSO_NAME_CONVERTER_FUNC *oldcb);
/* These functions can be used to get/set the platform-independent filename
* used for a DSO. NB: set will fail if the DSO is already loaded. */
const char *DSO_get_filename(DSO *dso);
int DSO_set_filename(DSO *dso, const char *filename);
/* This function will invoke the DSO's name_converter callback to translate a
* filename, or if the callback isn't set it will instead use the DSO_METHOD's
* converter. If "filename" is NULL, the "filename" in the DSO itself will be
* used. If the DSO_FLAG_NO_NAME_TRANSLATION flag is set, then the filename is
* simply duplicated. NB: This function is usually called from within a
* DSO_METHOD during the processing of a DSO_load() call, and is exposed so that
* caller-created DSO_METHODs can do the same thing. A non-NULL return value
* will need to be OPENSSL_free()'d. */
char *DSO_convert_filename(DSO *dso, const char *filename);
/* This function will invoke the DSO's merger callback to merge two file
* specifications, or if the callback isn't set it will instead use the
* DSO_METHOD's merger. A non-NULL return value will need to be
* OPENSSL_free()'d. */
char *DSO_merge(DSO *dso, const char *filespec1, const char *filespec2);
/* If the DSO is currently loaded, this returns the filename that it was loaded
* under, otherwise it returns NULL. So it is also useful as a test as to
* whether the DSO is currently loaded. NB: This will not necessarily return
* the same value as DSO_convert_filename(dso, dso->filename), because the
* DSO_METHOD's load function may have tried a variety of filenames (with
* and/or without the aid of the converters) before settling on the one it
* actually loaded. */
const char *DSO_get_loaded_filename(DSO *dso);
void DSO_set_default_method(DSO_METHOD *meth);
DSO_METHOD *DSO_get_default_method(void);
DSO_METHOD *DSO_get_method(DSO *dso);
DSO_METHOD *DSO_set_method(DSO *dso, DSO_METHOD *meth);
/* The all-singing all-dancing load function, you normally pass NULL
* for the first and third parameters. Use DSO_up_ref and DSO_free for
* subsequent reference count handling. Any flags passed in will be set
* in the constructed DSO after its init() function but before the
* load operation. If 'dso' is non-NULL, 'flags' is ignored. */
DSO *DSO_load(DSO *dso, const char *filename, DSO_METHOD *meth, int flags);
/* This function binds to a variable inside a shared library. */
void *DSO_bind_var(DSO *dso, const char *symname);
/* This function binds to a function inside a shared library. */
DSO_FUNC_TYPE DSO_bind_func(DSO *dso, const char *symname);
/* This method is the default, but will beg, borrow, or steal whatever
* method should be the default on any particular platform (including
* DSO_METH_null() if necessary). */
DSO_METHOD *DSO_METHOD_openssl(void);
/* This method is defined for all platforms - if a platform has no
* DSO support then this will be the only method! */
DSO_METHOD *DSO_METHOD_null(void);
/* If DSO_DLFCN is defined, the standard dlfcn.h-style functions
* (dlopen, dlclose, dlsym, etc) will be used and incorporated into
* this method. If not, this method will return NULL. */
DSO_METHOD *DSO_METHOD_dlfcn(void);
/* If DSO_DL is defined, the standard dl.h-style functions (shl_load,
* shl_unload, shl_findsym, etc) will be used and incorporated into
* this method. If not, this method will return NULL. */
DSO_METHOD *DSO_METHOD_dl(void);
/* If WIN32 is defined, use DLLs. If not, return NULL. */
DSO_METHOD *DSO_METHOD_win32(void);
/* If VMS is defined, use shared images. If not, return NULL. */
DSO_METHOD *DSO_METHOD_vms(void);
/* This function writes the null-terminated pathname of the DSO module
* containing 'addr' into the 'sz' large caller-provided 'path' and
* returns the number of characters [including the trailing zero]
* written to it. If 'sz' is 0 or negative, 'path' is ignored and the
* required number of characters [including the trailing zero] to
* accommodate the pathname is returned. If 'addr' is NULL, then the
* pathname of cryptolib itself is returned. A negative or zero
* return value denotes an error.
*/
int DSO_pathbyaddr(void *addr,char *path,int sz);
/* This function should be used with caution! It looks up symbols in
* *all* loaded modules, and if a module gets unloaded by somebody else,
* any attempt to dereference the returned pointer is doomed to have
* fatal consequences. The primary use for this function is to probe
* *core* system functionality, e.g. to check whether getnameinfo(3) is
* available at run-time without bothering about OS-specific details
* such as libc.so versioning or where it actually resides: in libc
* itself or in libsocket. */
void *DSO_global_lookup(const char *name);
/* If BeOS is defined, use shared images. If not, return NULL. */
DSO_METHOD *DSO_METHOD_beos(void);
/* BEGIN ERROR CODES */
/* The following lines are auto generated by the script mkerr.pl. Any changes
* made after this point may be overwritten when the script is next run.
*/
void ERR_load_DSO_strings(void);
/* Error codes for the DSO functions. */
/* Function codes. */
#define DSO_F_BEOS_BIND_FUNC 144
#define DSO_F_BEOS_BIND_VAR 145
#define DSO_F_BEOS_LOAD 146
#define DSO_F_BEOS_NAME_CONVERTER 147
#define DSO_F_BEOS_UNLOAD 148
#define DSO_F_DLFCN_BIND_FUNC 100
#define DSO_F_DLFCN_BIND_VAR 101
#define DSO_F_DLFCN_LOAD 102
#define DSO_F_DLFCN_MERGER 130
#define DSO_F_DLFCN_NAME_CONVERTER 123
#define DSO_F_DLFCN_UNLOAD 103
#define DSO_F_DL_BIND_FUNC 104
#define DSO_F_DL_BIND_VAR 105
#define DSO_F_DL_LOAD 106
#define DSO_F_DL_MERGER 131
#define DSO_F_DL_NAME_CONVERTER 124
#define DSO_F_DL_UNLOAD 107
#define DSO_F_DSO_BIND_FUNC 108
#define DSO_F_DSO_BIND_VAR 109
#define DSO_F_DSO_CONVERT_FILENAME 126
#define DSO_F_DSO_CTRL 110
#define DSO_F_DSO_FREE 111
#define DSO_F_DSO_GET_FILENAME 127
#define DSO_F_DSO_GET_LOADED_FILENAME 128
#define DSO_F_DSO_GLOBAL_LOOKUP 139
#define DSO_F_DSO_LOAD 112
#define DSO_F_DSO_MERGE 132
#define DSO_F_DSO_NEW_METHOD 113
#define DSO_F_DSO_PATHBYADDR 140
#define DSO_F_DSO_SET_FILENAME 129
#define DSO_F_DSO_SET_NAME_CONVERTER 122
#define DSO_F_DSO_UP_REF 114
#define DSO_F_GLOBAL_LOOKUP_FUNC 138
#define DSO_F_PATHBYADDR 137
#define DSO_F_VMS_BIND_SYM 115
#define DSO_F_VMS_LOAD 116
#define DSO_F_VMS_MERGER 133
#define DSO_F_VMS_UNLOAD 117
#define DSO_F_WIN32_BIND_FUNC 118
#define DSO_F_WIN32_BIND_VAR 119
#define DSO_F_WIN32_GLOBALLOOKUP 142
#define DSO_F_WIN32_GLOBALLOOKUP_FUNC 143
#define DSO_F_WIN32_JOINER 135
#define DSO_F_WIN32_LOAD 120
#define DSO_F_WIN32_MERGER 134
#define DSO_F_WIN32_NAME_CONVERTER 125
#define DSO_F_WIN32_PATHBYADDR 141
#define DSO_F_WIN32_SPLITTER 136
#define DSO_F_WIN32_UNLOAD 121
/* Reason codes. */
#define DSO_R_CTRL_FAILED 100
#define DSO_R_DSO_ALREADY_LOADED 110
#define DSO_R_EMPTY_FILE_STRUCTURE 113
#define DSO_R_FAILURE 114
#define DSO_R_FILENAME_TOO_BIG 101
#define DSO_R_FINISH_FAILED 102
#define DSO_R_INCORRECT_FILE_SYNTAX 115
#define DSO_R_LOAD_FAILED 103
#define DSO_R_NAME_TRANSLATION_FAILED 109
#define DSO_R_NO_FILENAME 111
#define DSO_R_NO_FILE_SPECIFICATION 116
#define DSO_R_NULL_HANDLE 104
#define DSO_R_SET_FILENAME_FAILED 112
#define DSO_R_STACK_ERROR 105
#define DSO_R_SYM_FAILURE 106
#define DSO_R_UNLOAD_FAILED 107
#define DSO_R_UNSUPPORTED 108
#ifdef __cplusplus
}
#endif
#endif
```
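The DSO_METHOD structure above is essentially a C vtable: a table of function pointers that each platform backend (dlfcn, dl, win32, vms) fills in, while DSO_load() and friends dispatch through whichever table is selected. A minimal, self-contained sketch of that pattern (hypothetical names, not the OpenSSL API):

```c
#include <stddef.h>

/* A stripped-down method table, analogous to DSO_METHOD:
 * each backend supplies its own load behaviour. */
typedef struct toy_method_st {
    const char *name;
    int (*load)(const char *filename); /* returns non-zero on success */
} TOY_METHOD;

/* Like DSO_METHOD_null: no DSO support, loading always fails. */
static int null_load(const char *filename) {
    (void)filename;
    return 0;
}

/* A stand-in backend; a real one would call dlopen() here. */
static int fake_dlfcn_load(const char *filename) {
    return filename != NULL && filename[0] != '\0';
}

static const TOY_METHOD toy_null  = { "null",  null_load };
static const TOY_METHOD toy_dlfcn = { "dlfcn", fake_dlfcn_load };

/* Dispatch through the selected method table,
 * mirroring how DSO_load() calls dso->meth->dso_load(). */
int toy_load(const TOY_METHOD *meth, const char *filename) {
    return meth->load(filename);
}
```

Selecting `toy_null` versus `toy_dlfcn` at runtime mirrors DSO_set_default_method(): callers never change, only the table behind the pointer does.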
|
```javascript
// @flow
// Define the "types" of data a docker cli flag can represent in yaml.
export type ArgType =
// Used for lists of things
// e.g. --device (path_to_url#devices)
| 'Array'
// Used to store a "limits" value of the input format: <type>=<soft limit>[:<hard limit>]
// e.g. --ulimit
// @see path_to_url#ulimits
// @see path_to_url#set-ulimits-in-container---ulimit
| 'Ulimits'
// Used to store a boolean value for an option
// e.g. --privileged (path_to_url#your_sha256_hashe-stdin_open-tty-user-working_dir)
| 'Switch'
// Used to store an arbitrary text value for an option
| 'Value'
| 'IntValue'
| 'FloatValue'
| 'DeviceBlockIOConfigRate'
| 'DeviceBlockIOConfigWeight'
| 'Networks'
| 'MapArray'
| 'Map'
| 'Envs'
| 'Gpus';
// Type to represent the structure of the docker compose mapping
export type Mapping = {
type: ArgType,
path: string,
};
// Type to represent a compose file entry
export type ArrayComposeEntry = {
path: string,
value: Array<string>,
};
export type KVComposeEntry = {
path: string,
value: {
[string]: string | number | any,
},
};
export type SwitchComposeEntry = {
path: string,
value: boolean,
};
export type ValueComposeEntry = {
path: string,
value: string | number | any,
};
export type IgnoreComposeEntry = {
path?: null,
value?: null,
};
export type ComposeEntry =
| ArrayComposeEntry
| KVComposeEntry
| SwitchComposeEntry
| ValueComposeEntry
| IgnoreComposeEntry;
export const getMapping = (type: ArgType, path: string): Mapping => ({
type,
path,
});
// docker cli -> docker-compose options
export const MAPPINGS: { [string]: Mapping } = {
'add-host': getMapping('Array', 'extra_hosts'),
'blkio-weight': getMapping('IntValue', 'blkio_config/weight'),
'blkio-weight-device': getMapping('DeviceBlockIOConfigWeight', 'blkio_config/weight_device'),
'cap-add': getMapping('Array', 'cap_add'),
'cap-drop': getMapping('Array', 'cap_drop'),
'cgroup-parent': getMapping('Value', 'cgroup_parent'),
cgroupns: getMapping('Value', 'cgroup'),
'cpu-period': getMapping('Value', 'cpu_period'),
'cpu-quota': getMapping('Value', 'cpu_quota'),
'cpu-rt-period': getMapping('Value', 'cpu_rt_period'),
'cpu-rt-runtime': getMapping('Value', 'cpu_rt_runtime'),
'cpu-shares': getMapping('IntValue', 'cpu_shares'),
cpus: getMapping('FloatValue', 'deploy/resources/limits/cpus'),
detached: getMapping('Switch', ''),
'device-cgroup-rule': getMapping('Array', 'device_cgroup_rules'),
'device-read-bps': getMapping('DeviceBlockIOConfigRate', 'blkio_config/device_read_bps'),
'device-read-iops': getMapping('DeviceBlockIOConfigRate', 'blkio_config/device_read_iops'),
'device-write-bps': getMapping('DeviceBlockIOConfigRate', 'blkio_config/device_write_bps'),
'device-write-iops': getMapping('DeviceBlockIOConfigRate', 'blkio_config/device_write_iops'),
device: getMapping('Array', 'devices'),
'dns-opt': getMapping('Array', 'dns_opt'),
'dns-search': getMapping('Array', 'dns_search'),
dns: getMapping('Array', 'dns'),
domainname: getMapping('Value', 'domainname'),
entrypoint: getMapping('Array', 'entrypoint'),
'env-file': getMapping('Array', 'env_file'),
env: getMapping('Envs', 'environment'),
expose: getMapping('Array', 'expose'),
gpus: getMapping('Gpus', 'deploy'),
'group-add': getMapping('Array', 'group_add'),
'health-cmd': getMapping('Value', 'healthcheck/test'),
'health-interval': getMapping('Value', 'healthcheck/interval'),
'health-retries': getMapping('Value', 'healthcheck/retries'),
'health-start-period': getMapping('Value', 'healthcheck/start_period'),
'health-timeout': getMapping('Value', 'healthcheck/timeout'),
hostname: getMapping('Value', 'hostname'),
init: getMapping('Switch', 'init'),
interactive: getMapping('Switch', 'stdin_open'),
ip6: getMapping('Value', 'networks/network/ipv6_address'),
ip: getMapping('Value', 'networks/network/ipv4_address'),
ipc: getMapping('Value', 'ipc'),
isolation: getMapping('Value', 'isolation'),
label: getMapping('Array', 'labels'),
'link-local-ip': getMapping('Array', 'networks/network/link_local_ips'),
link: getMapping('Array', 'links'),
'log-driver': getMapping('Value', 'logging/driver'),
'log-opt': getMapping('Map', 'logging/options'),
'mac-address': getMapping('Value', 'mac_address'),
'memory-reservation': getMapping('Value', 'deploy/resources/reservations/memory'),
'memory-swap': getMapping('Value', 'memswap_limit'),
'memory-swappiness': getMapping('Value', 'mem_swappiness'),
memory: getMapping('Value', 'deploy/resources/limits/memory'),
mount: getMapping('MapArray', 'volumes'),
name: getMapping('Value', 'container_name'),
net: getMapping('Networks', 'network_mode'), // alias for network
'network-alias': getMapping('Array', 'networks/network/aliases'),
network: getMapping('Networks', 'network_mode'),
'no-healthcheck': getMapping('Switch', 'healthcheck/disable'),
'oom-kill-disable': getMapping('Switch', 'oom_kill_disable'),
'oom-score-adj': getMapping('Value', 'oom_score_adj'),
pid: getMapping('Value', 'pid'),
'pids-limit': getMapping('IntValue', 'deploy/resources/limits/pids'),
platform: getMapping('Value', 'platform'),
privileged: getMapping('Switch', 'privileged'),
publish: getMapping('Array', 'ports'),
pull: getMapping('Value', 'pull_policy'),
'read-only': getMapping('Switch', 'read_only'),
restart: getMapping('Value', 'restart'),
rm: getMapping('Switch', ''),
runtime: getMapping('Value', 'runtime'),
'security-opt': getMapping('Array', 'security_opt'),
'shm-size': getMapping('Value', 'shm_size'),
'stop-signal': getMapping('Value', 'stop_signal'),
'stop-timeout': getMapping('Value', 'stop_grace_period'),
'storage-opt': getMapping('Map', 'storage_opt'),
sysctl: getMapping('Array', 'sysctls'),
tmpfs: getMapping('Value', 'tmpfs'),
tty: getMapping('Switch', 'tty'),
ulimit: getMapping('Ulimits', 'ulimits'),
user: getMapping('Value', 'user'),
userns: getMapping('Value', 'userns_mode'),
uts: getMapping('Value', 'uts'),
volume: getMapping('Array', 'volumes'),
'volumes-from': getMapping('Array', 'volumes_from'),
workdir: getMapping('Value', 'working_dir'),
};
// Add flag mappings
MAPPINGS.v = MAPPINGS.volume;
MAPPINGS.p = MAPPINGS.publish;
MAPPINGS.e = MAPPINGS.env;
MAPPINGS.l = MAPPINGS.label;
MAPPINGS.h = MAPPINGS.hostname;
MAPPINGS.u = MAPPINGS.user;
MAPPINGS.w = MAPPINGS.workdir;
MAPPINGS.c = MAPPINGS['cpu-shares'];
MAPPINGS.t = MAPPINGS.tty;
MAPPINGS.i = MAPPINGS.interactive;
MAPPINGS.m = MAPPINGS.memory;
MAPPINGS.d = MAPPINGS.detached;
```
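The MAPPINGS table pairs each CLI flag with a parse-type tag and a slash-separated compose path, and the single-letter aliases at the bottom simply reuse the same entry. The same lookup-table idea can be sketched in C (hypothetical names and a small subset of flags, not this project's code):

```c
#include <stddef.h>
#include <string.h>

typedef struct {
    const char *flag; /* docker cli flag, e.g. "volume", or an alias "v" */
    const char *type; /* how the value is parsed, e.g. "Array", "Switch" */
    const char *path; /* slash-separated compose path */
} FLAG_MAPPING;

static const FLAG_MAPPING mappings[] = {
    { "volume",  "Array",  "volumes" },
    { "v",       "Array",  "volumes" }, /* alias for volume */
    { "publish", "Array",  "ports" },
    { "p",       "Array",  "ports" },   /* alias for publish */
    { "tty",     "Switch", "tty" },
};

/* Linear scan over the table; returns NULL for unknown flags. */
const FLAG_MAPPING *lookup_mapping(const char *flag) {
    for (size_t i = 0; i < sizeof(mappings) / sizeof(mappings[0]); i++)
        if (strcmp(mappings[i].flag, flag) == 0)
            return &mappings[i];
    return NULL;
}
```

Aliases are duplicate rows pointing at the same type/path, just as `MAPPINGS.v = MAPPINGS.volume` reuses the same mapping object above.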
|
```c
/* Function object interface */
#ifndef Py_LIMITED_API
#ifndef Py_FUNCOBJECT_H
#define Py_FUNCOBJECT_H
#ifdef __cplusplus
extern "C" {
#endif
/* Function objects and code objects should not be confused with each other:
*
* Function objects are created by the execution of the 'def' statement.
* They reference a code object in their __code__ attribute, which is a
* purely syntactic object, i.e. nothing more than a compiled version of some
* source code lines. There is one code object per source code "fragment",
* but each code object can be referenced by zero or many function objects
* depending only on how many times the 'def' statement in the source was
* executed so far.
*/
typedef struct {
PyObject_HEAD
PyObject *func_code; /* A code object, the __code__ attribute */
PyObject *func_globals; /* A dictionary (other mappings won't do) */
PyObject *func_defaults; /* NULL or a tuple */
PyObject *func_kwdefaults; /* NULL or a dict */
PyObject *func_closure; /* NULL or a tuple of cell objects */
PyObject *func_doc; /* The __doc__ attribute, can be anything */
PyObject *func_name; /* The __name__ attribute, a string object */
PyObject *func_dict; /* The __dict__ attribute, a dict or NULL */
PyObject *func_weakreflist; /* List of weak references */
PyObject *func_module; /* The __module__ attribute, can be anything */
PyObject *func_annotations; /* Annotations, a dict or NULL */
PyObject *func_qualname; /* The qualified name */
/* Invariant:
* func_closure contains the bindings for func_code->co_freevars, so
* PyTuple_Size(func_closure) == PyCode_GetNumFree(func_code)
* (func_closure may be NULL if PyCode_GetNumFree(func_code) == 0).
*/
} PyFunctionObject;
PyAPI_DATA(PyTypeObject) PyFunction_Type;
#define PyFunction_Check(op) (Py_TYPE(op) == &PyFunction_Type)
PyAPI_FUNC(PyObject *) PyFunction_New(PyObject *, PyObject *);
PyAPI_FUNC(PyObject *) PyFunction_NewWithQualName(PyObject *, PyObject *, PyObject *);
PyAPI_FUNC(PyObject *) PyFunction_GetCode(PyObject *);
PyAPI_FUNC(PyObject *) PyFunction_GetGlobals(PyObject *);
PyAPI_FUNC(PyObject *) PyFunction_GetModule(PyObject *);
PyAPI_FUNC(PyObject *) PyFunction_GetDefaults(PyObject *);
PyAPI_FUNC(int) PyFunction_SetDefaults(PyObject *, PyObject *);
PyAPI_FUNC(PyObject *) PyFunction_GetKwDefaults(PyObject *);
PyAPI_FUNC(int) PyFunction_SetKwDefaults(PyObject *, PyObject *);
PyAPI_FUNC(PyObject *) PyFunction_GetClosure(PyObject *);
PyAPI_FUNC(int) PyFunction_SetClosure(PyObject *, PyObject *);
PyAPI_FUNC(PyObject *) PyFunction_GetAnnotations(PyObject *);
PyAPI_FUNC(int) PyFunction_SetAnnotations(PyObject *, PyObject *);
/* Macros for direct access to these values. Type checks are *not*
done, so use with care. */
#define PyFunction_GET_CODE(func) \
(((PyFunctionObject *)func) -> func_code)
#define PyFunction_GET_GLOBALS(func) \
(((PyFunctionObject *)func) -> func_globals)
#define PyFunction_GET_MODULE(func) \
(((PyFunctionObject *)func) -> func_module)
#define PyFunction_GET_DEFAULTS(func) \
(((PyFunctionObject *)func) -> func_defaults)
#define PyFunction_GET_KW_DEFAULTS(func) \
(((PyFunctionObject *)func) -> func_kwdefaults)
#define PyFunction_GET_CLOSURE(func) \
(((PyFunctionObject *)func) -> func_closure)
#define PyFunction_GET_ANNOTATIONS(func) \
(((PyFunctionObject *)func) -> func_annotations)
/* The classmethod and staticmethod types live here, too */
PyAPI_DATA(PyTypeObject) PyClassMethod_Type;
PyAPI_DATA(PyTypeObject) PyStaticMethod_Type;
PyAPI_FUNC(PyObject *) PyClassMethod_New(PyObject *);
PyAPI_FUNC(PyObject *) PyStaticMethod_New(PyObject *);
#ifdef __cplusplus
}
#endif
#endif /* !Py_FUNCOBJECT_H */
#endif /* Py_LIMITED_API */
```
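The header pairs each checked accessor (PyFunction_GetCode) with an unchecked macro (PyFunction_GET_CODE) that casts blindly, with the warning that the macros do no type checks. A self-contained sketch of that checked-function/unchecked-macro split (hypothetical toy types, not the CPython API):

```c
#include <stddef.h>

/* A tagged object, standing in for PyObject and its type pointer. */
typedef struct {
    int type_tag;     /* stands in for Py_TYPE(op) */
    const char *code; /* stands in for the func_code slot */
} toy_func;

enum { TOY_FUNC_TYPE = 1, TOY_OTHER_TYPE = 2 };

/* Checked accessor: verifies the tag first, like PyFunction_GetCode().
 * CPython would set an exception on mismatch; here we return NULL. */
const char *toy_func_get_code(const toy_func *op) {
    if (op == NULL || op->type_tag != TOY_FUNC_TYPE)
        return NULL;
    return op->code;
}

/* Unchecked macro: direct slot access, like PyFunction_GET_CODE().
 * Faster, but the caller must already know the type is right. */
#define TOY_FUNC_GET_CODE(op) (((const toy_func *)(op))->code)

static const toy_func example_func  = { TOY_FUNC_TYPE,  "bytecode" };
static const toy_func example_other = { TOY_OTHER_TYPE, "not-code" };
```

The macro applied to `example_other` would happily return garbage, which is exactly why the header says to use the macros "with care".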
|
Thomas Robinson, 2nd Baron Grantham PC (30 November 173820 July 1786) was a British statesman. He notably served as Foreign Secretary between 1782 and 1783.
Background and education
Grantham was born in Vienna, Austria, the son of Thomas Robinson, 1st Baron Grantham, British Ambassador to Austria at the time, by his wife Frances, daughter of Thomas Worsley. He was educated at Westminster School and at Christ's College, Cambridge.
Political career
Grantham entered parliament as member for Christchurch in 1761, and succeeded to the peerage on his father's death in 1770. That year he was appointed to the Privy Council. In 1771 he was sent as British Ambassador to Spain and retained this post until war broke out between Great Britain and Spain in 1779. In 1772, while at the Spanish summer court in Aranjuez, he received correspondence from Richard Wall, the former Spanish first minister. From 1780 to 1782 Grantham was President of the Board of Trade, and from July 1782 to April 1783 Foreign Secretary under Lord Shelburne.
Marriage and progeny
In 1780 Lord Grantham married Lady Mary Yorke (1757–1830), younger daughter of Philip Yorke, 2nd Earl of Hardwicke by his wife Lady Jemima Campbell (1723–1797), suo jure 2nd Marchioness Grey, a daughter of John Campbell, 3rd Earl of Breadalbane and Holland by his wife Lady Amabel Grey, a daughter of Henry Grey, 1st Duke of Kent (1671–1740).
In 1740 Lord Grantham's mother-in-law Lady Jemima Campbell (1723–1797) succeeded as 2nd Marchioness Grey by a special remainder upon the death of her maternal grandfather Henry Grey, 1st Duke of Kent, 1st Marquess Grey, 3rd Baron Lucas. As she had no male heirs, the title later became extinct upon her own death in 1797, but in 1816 her elder daughter Lady Amabel Yorke (1750–1833) (wife of Alexander Hume-Campbell, Lord Polwarth) was created Countess de Grey in her own right.
Lord Grantham and his wife lived at Grantham House in Whitehall Yard, Westminster. By his wife he had two sons:
Thomas de Grey, 2nd Earl de Grey, eldest son and heir. He was born Thomas Philip Robinson; his surname was Weddell from 1803 and de Grey from 1833.
Frederick John Robinson, 1st Viscount Goderich, 1st Earl of Ripon (1782–1859), Prime Minister of the United Kingdom in 1827 and 1828.
Death
He died on 20 July 1786, aged 47, and was succeeded in the barony by his eldest son, Thomas de Grey, 2nd Earl de Grey. His widow continued to live at Grantham House until her own death in January 1830, aged 72 years.
See also
Wrest Park
References
External links
1738 births
1786 deaths
Politicians from Vienna
Alumni of Christ's College, Cambridge
Barons in the Peerage of Great Britain
British MPs 1761–1768
British MPs 1768–1774
British Secretaries of State for Foreign Affairs
Diplomatic peers
Members of the Privy Council of Great Britain
Robinson, Thomas
People educated at Westminster School, London
Ambassadors of Great Britain to Spain
Thomas
Parents of prime ministers of the United Kingdom
Presidents of the Board of Trade
|
Below are the rosters for teams competing in the 2006 World Junior Ice Hockey Championships.
Group A
Head coach: Brent Sutter
Head coach: Walt Kyle
Head coach: Hannu Aravirta
Head coach: Jakob Kölliker
Head coach: Petter Thoresen
Group B
Head coach: Sergei Mikhalev
Head coach: Torgny Bendelin
Head coach: Radim Rulík
Head coach: Branislav Šajban
Head coach: Olegs Znaroks
External links
International Ice Hockey Federation
Rosters
World Junior Ice Hockey Championships rosters
|
```kotlin
package org.jetbrains.kotlinx.jupyter.api.session
interface JupyterSessionProvider {
fun getCurrentSession(): JupyterSession
}
```
|
The Australian Underwater Federation (AUF) is the governing body for underwater sports in Australia.
Mission
The mission of the AUF is:
Bringing Sport, Conservation and Awareness to the Underwater World.
Organisation
The AUF is a membership-based organisation whose day-to-day operations are overseen by a federal board and by a number of committees (known as commissions) for the following activities – finswimming (commission known as Ozfin Inc.), scuba, snorkel, spearfishing and underwater hockey (commission known as Underwater Hockey Australia). It also currently has state branches in New South Wales (incorporated as the Underwater Skindivers & Fishermen's Association Inc) and Queensland, and state commissions for finswimming and underwater hockey in most states.
Recognition
The AUF is recognised by the Australian Sports Commission as the national sporting organisation (NSO) for underwater sports in Australia.
It is the Australian representative to Confédération Mondiale des Activités Subaquatiques (CMAS), with full voting rights to the Sports and Technical Committees and non-voting rights to the Scientific Committee.
The AUF is a member of the World AquaChallenge Association (WAA) and Recfish Australia. It is also one of the organisations represented on the Standards Australia's Committee CS/83, Recreational Underwater Diving.
Underwater sports
The AUF is the governing body for the following underwater sports within Australia: spearfishing, underwater hockey, finswimming and underwater rugby.
Diver training
Historically, the AUF operated as a diver training organisation offering instructor training and certification, and recreational diver certification in both snorkel and scuba diving. It currently issues CMAS International Diving certificates in its capacity as a member of the CMAS Technical Committee in respect to its own training programs and those offered by FAUI (formerly the Federation of Australian Underwater Instructors and now known as the Formation of Australian Underwater Instructors) and the now-defunct Australian branch of NAUI.
AUF currently offers training in snorkelling (including breath-hold technique) for open water and pool environments, and in coaching levels accredited with the Australian Government's National Coaching Accreditation Scheme (NCAS) for three sports. Originally launched in 1985 under the name of the School Snorkelling Programme, the openwater training stream (known as Ocean) supports both recreational diving as well as the sports of spearfishing and photofishing (a breath-hold version of the sport of underwater photography offered by CMAS) while the pool stream is intended to develop proficiencies in finswimming and underwater hockey. The following instructional levels are currently offered – Finswimming Coach Level 1 and 2, Ocean Coach level 1 and 2, and Underwater Hockey Coach Level 1 and 2.
See also
Ron Taylor
Valerie Taylor
References
External links
Official AUF website
Australian Underwater Federation – Queensland Inc. homepage
Underwater Skindivers & Fishermen's Association Inc. homepage
Underwater Hockey Australian Commission website
Welcome to Ozfin – Australian Fin Swimming
AUF Spearfishing homepage
Sports governing bodies in Australia
Diver organizations
Underwater sports organizations
Finswimming
Underwater hockey governing bodies
Underwater rugby
1953 establishments in Australia
Underwater sport in Australia
|
Headrush may refer to:
Vertigo (medical), a medical symptom of a balance disorder
Orthostatic hypotension, a sudden drop in blood pressure and coordination when a person stands up too quickly
Headrush (film), a 2003 Irish film starring Steven Berkoff
Head Rush (TV series), a 2010 Discovery show
Headrush EP, by the band Creaming Jesus
HeadRush, a spinoff of the PC CD-ROM game You Don't Know Jack
"Headrush", a song on the 2015 album Walk the Plank by Zebrahead
"Headrush", a song on the 2018 album What Happens Next by Joe Satriani
Headrush, a line of guitar effects processing products from the company inMusic Brands
|
```ruby
class AddQuoteToPodcastEpisodes < ActiveRecord::Migration[4.2]
def change
add_column :podcast_episodes, :quote, :text
end
end
```
|
```cpp
// The LLVM backend for CPUs/NVPTX/AMDGPU
#pragma once
#include <set>
#include <unordered_map>
#ifdef TI_WITH_LLVM
#include "taichi/ir/ir.h"
#include "taichi/codegen/llvm/llvm_codegen_utils.h"
#include "taichi/codegen/llvm/llvm_compiled_data.h"
#include "taichi/program/program.h"
namespace taichi::lang {
class TaskCodeGenLLVM;
class FunctionCreationGuard {
public:
TaskCodeGenLLVM *mb;
llvm::Function *old_func;
llvm::Function *body;
llvm::BasicBlock *old_entry, *allocas, *entry, *old_final, *final;
llvm::IRBuilder<>::InsertPoint ip;
FunctionCreationGuard(TaskCodeGenLLVM *mb,
std::vector<llvm::Type *> arguments,
const std::string &func_name);
~FunctionCreationGuard();
};
class TaskCodeGenLLVM : public IRVisitor, public LLVMModuleBuilder {
public:
const CompileConfig &compile_config;
const Kernel *kernel;
IRNode *ir;
Program *prog;
std::string kernel_name;
std::vector<llvm::Value *> kernel_args;
llvm::Type *context_ty;
llvm::Type *physical_coordinate_ty;
llvm::Value *current_coordinates;
llvm::Value *parent_coordinates{nullptr};
llvm::Value *block_corner_coordinates{nullptr};
llvm::GlobalVariable *bls_buffer{nullptr};
// Mainly for supporting continue stmt
llvm::BasicBlock *current_loop_reentry;
// Mainly for supporting break stmt
llvm::BasicBlock *current_while_after_loop;
llvm::FunctionType *task_function_type;
std::unordered_map<Stmt *, llvm::Value *> llvm_val;
llvm::Function *func;
OffloadedStmt *current_offload{nullptr};
std::unique_ptr<OffloadedTask> current_task;
std::vector<OffloadedTask> offloaded_tasks;
llvm::BasicBlock *func_body_bb;
llvm::BasicBlock *final_block;
std::set<std::string> linked_modules;
bool returned{false};
std::unordered_set<int> used_tree_ids;
std::unordered_set<int> struct_for_tls_sizes;
const Callable *current_callable{nullptr};
// The task_codegen_id represents the id of the offloaded task
int task_codegen_id{0};
std::unordered_map<const Stmt *, std::vector<llvm::Value *>> loop_vars_llvm;
std::unordered_map<Function *, llvm::Function *> func_map;
using IRVisitor::visit;
using LLVMModuleBuilder::call;
explicit TaskCodeGenLLVM(int id,
const CompileConfig &config,
TaichiLLVMContext &tlctx,
const Kernel *kernel,
IRNode *ir,
std::unique_ptr<llvm::Module> &&module = nullptr);
Arch current_arch() const {
return compile_config.arch;
}
void initialize_context();
llvm::Value *get_arg(int i);
llvm::Value *get_argpack_arg(const std::vector<int> &index,
int arg_depth,
bool create_load);
llvm::Value *get_struct_arg(const std::vector<int> &index, bool create_load);
llvm::Value *get_args_ptr(const Callable *callable, llvm::Value *context);
void set_args_ptr(Callable *callable, llvm::Value *context, llvm::Value *ptr);
llvm::Value *get_context();
llvm::Value *get_tls_base_ptr();
llvm::Type *get_tls_buffer_type();
std::vector<llvm::Type *> get_xlogue_argument_types();
std::vector<llvm::Type *> get_mesh_xlogue_argument_types();
llvm::Type *get_xlogue_function_type();
llvm::Type *get_mesh_xlogue_function_type();
llvm::PointerType *get_integer_ptr_type(int bits);
llvm::IntegerType *get_integer_type(int bits);
llvm::Value *get_root(int snode_tree_id);
llvm::Value *get_runtime();
void emit_struct_meta_base(const std::string &name,
llvm::Value *node_meta,
SNode *snode);
void create_elementwise_binary(
BinaryOpStmt *stmt,
std::function<llvm::Value *(llvm::Value *lhs, llvm::Value *rhs)> f);
void create_elementwise_cast(
UnaryOpStmt *stmt,
llvm::Type *to_ty,
std::function<llvm::Value *(llvm::Value *, llvm::Type *)> f,
bool on_self = false);
std::unique_ptr<RuntimeObject> emit_struct_meta_object(SNode *snode);
llvm::Value *emit_struct_meta(SNode *snode);
virtual void emit_to_module();
void eliminate_unused_functions();
/**
* @brief Runs the codegen and produces the compiled result.
*
* After this call, `module` and `tasks` will be moved.
*
* @return LLVMCompiledTask
*/
virtual LLVMCompiledTask run_compilation();
// For debugging only
virtual llvm::Value *create_print(std::string tag,
DataType dt,
llvm::Value *value);
llvm::Value *create_print(std::string tag, llvm::Value *value);
void set_struct_to_buffer(const StructType *struct_type,
llvm::Value *buffer,
const std::vector<Stmt *> &elements);
llvm::Value *cast_pointer(llvm::Value *val,
std::string dest_ty_name,
int addr_space = 0);
void emit_list_gen(OffloadedStmt *listgen);
void emit_gc(OffloadedStmt *stmt);
llvm::Value *call(SNode *snode,
llvm::Value *node_ptr,
const std::string &method,
const std::vector<llvm::Value *> &arguments);
llvm::Function *get_struct_function(const std::string &name, int tree_id);
template <typename... Args>
llvm::Value *call_struct_func(int tree_id,
const std::string &func_name,
Args &&...args);
void create_increment(llvm::Value *ptr, llvm::Value *value);
// Direct translation
void create_naive_range_for(RangeForStmt *for_stmt);
static std::string get_runtime_snode_name(SNode *snode);
void visit(Block *stmt_list) override;
void visit(AllocaStmt *stmt) override;
void visit(RandStmt *stmt) override;
virtual void emit_extra_unary(UnaryOpStmt *stmt);
void visit(DecorationStmt *stmt) override;
void visit(UnaryOpStmt *stmt) override;
void visit(BinaryOpStmt *stmt) override;
void visit(TernaryOpStmt *stmt) override;
void visit(IfStmt *if_stmt) override;
void visit(PrintStmt *stmt) override;
void visit(ConstStmt *stmt) override;
void visit(WhileControlStmt *stmt) override;
void visit(ContinueStmt *stmt) override;
void visit(WhileStmt *stmt) override;
void visit(RangeForStmt *for_stmt) override;
void visit(ArgLoadStmt *stmt) override;
void visit(ReturnStmt *stmt) override;
void visit(LocalLoadStmt *stmt) override;
void visit(LocalStoreStmt *stmt) override;
void visit(AssertStmt *stmt) override;
void visit(SNodeOpStmt *stmt) override;
llvm::Value *atomic_add_quant_fixed(llvm::Value *ptr,
llvm::Type *physical_type,
QuantFixedType *qfxt,
llvm::Value *value);
llvm::Value *atomic_add_quant_int(llvm::Value *ptr,
llvm::Type *physical_type,
QuantIntType *qit,
llvm::Value *value,
bool value_is_signed);
llvm::Value *to_quant_fixed(llvm::Value *real, QuantFixedType *qfxt);
virtual llvm::Value *optimized_reduction(AtomicOpStmt *stmt);
virtual llvm::Value *quant_type_atomic(AtomicOpStmt *stmt);
virtual llvm::Value *integral_type_atomic(AtomicOpStmt *stmt);
virtual llvm::Value *atomic_op_using_cas(
llvm::Value *output_address,
llvm::Value *val,
std::function<llvm::Value *(llvm::Value *, llvm::Value *)> op,
const DataType &type);
virtual llvm::Value *real_type_atomic(AtomicOpStmt *stmt);
void visit(AtomicOpStmt *stmt) override;
void visit(GlobalPtrStmt *stmt) override;
void visit(MatrixPtrStmt *stmt) override;
void store_quant_int(llvm::Value *ptr,
llvm::Type *physical_type,
QuantIntType *qit,
llvm::Value *value,
bool atomic);
void store_quant_fixed(llvm::Value *ptr,
llvm::Type *physical_type,
QuantFixedType *qfxt,
llvm::Value *value,
bool atomic);
void store_masked(llvm::Value *ptr,
llvm::Type *ty,
uint64 mask,
llvm::Value *value,
bool atomic);
void visit(GlobalStoreStmt *stmt) override;
llvm::Value *quant_int_or_quant_fixed_to_bits(llvm::Value *val,
Type *input_type,
llvm::Type *output_type);
void visit(BitStructStoreStmt *stmt) override;
void store_quant_floats_with_shared_exponents(BitStructStoreStmt *stmt);
llvm::Value *extract_quant_float(llvm::Value *physical_value,
BitStructType *bit_struct,
int digits_id);
llvm::Value *extract_quant_int(llvm::Value *physical_value,
llvm::Value *bit_offset,
QuantIntType *qit);
llvm::Value *reconstruct_quant_fixed(llvm::Value *digits,
QuantFixedType *qfxt);
llvm::Value *reconstruct_quant_float(llvm::Value *input_digits,
llvm::Value *input_exponent_val,
QuantFloatType *qflt,
bool shared_exponent);
virtual llvm::Value *create_intrinsic_load(llvm::Value *ptr, llvm::Type *ty);
void create_global_load(GlobalLoadStmt *stmt, bool should_cache_as_read_only);
void visit(GlobalLoadStmt *stmt) override;
void visit(GetRootStmt *stmt) override;
void visit(LinearizeStmt *stmt) override;
void visit(IntegerOffsetStmt *stmt) override;
llvm::Value *create_bit_ptr(llvm::Value *byte_ptr, llvm::Value *bit_offset);
std::tuple<llvm::Value *, llvm::Value *> load_bit_ptr(llvm::Value *bit_ptr);
void visit(SNodeLookupStmt *stmt) override;
void visit(GetChStmt *stmt) override;
void visit(ExternalPtrStmt *stmt) override;
void visit(ExternalTensorShapeAlongAxisStmt *stmt) override;
void visit(ExternalTensorBasePtrStmt *stmt) override;
virtual bool kernel_argument_by_val() const {
return false; // on CPU devices just pass in a pointer
}
std::string init_offloaded_task_function(OffloadedStmt *stmt,
std::string suffix = "");
void finalize_offloaded_task_function();
FunctionCreationGuard get_function_creation_guard(
std::vector<llvm::Type *> argument_types,
const std::string &func_name = "function_body");
std::tuple<llvm::Value *, llvm::Value *> get_range_for_bounds(
OffloadedStmt *stmt);
virtual void create_offload_range_for(OffloadedStmt *stmt) = 0;
virtual void create_offload_mesh_for(OffloadedStmt *stmt) {
TI_NOT_IMPLEMENTED;
}
void create_offload_struct_for(OffloadedStmt *stmt);
void visit(LoopIndexStmt *stmt) override;
void visit(LoopLinearIndexStmt *stmt) override;
void visit(BlockCornerIndexStmt *stmt) override;
void visit(GlobalTemporaryStmt *stmt) override;
void visit(ThreadLocalPtrStmt *stmt) override;
void visit(BlockLocalPtrStmt *stmt) override;
void visit(ClearListStmt *stmt) override;
void visit(InternalFuncStmt *stmt) override;
// Stack statements
void visit(AdStackAllocaStmt *stmt) override;
void visit(AdStackPopStmt *stmt) override;
void visit(AdStackPushStmt *stmt) override;
void visit(AdStackLoadTopStmt *stmt) override;
void visit(AdStackLoadTopAdjStmt *stmt) override;
void visit(AdStackAccAdjointStmt *stmt) override;
void visit(RangeAssumptionStmt *stmt) override;
void visit(LoopUniqueStmt *stmt) override;
void visit_call_bitcode(ExternalFuncCallStmt *stmt);
void visit_call_shared_object(ExternalFuncCallStmt *stmt);
void visit(ExternalFuncCallStmt *stmt) override;
void visit(MeshPatchIndexStmt *stmt) override;
void visit(ReferenceStmt *stmt) override;
void visit(MatrixInitStmt *stmt) override;
llvm::Value *create_xlogue(std::unique_ptr<Block> &block);
llvm::Value *create_mesh_xlogue(std::unique_ptr<Block> &block);
llvm::Value *extract_exponent_from_f32(llvm::Value *f);
llvm::Value *extract_digits_from_f32(llvm::Value *f, bool full);
llvm::Value *extract_digits_from_f32_with_shared_exponent(
llvm::Value *f,
llvm::Value *shared_exp);
llvm::Value *get_exponent_offset(llvm::Value *exponent, QuantFloatType *qflt);
void visit(FuncCallStmt *stmt) override;
void visit(GetElementStmt *stmt) override;
llvm::Value *bitcast_from_u64(llvm::Value *val, DataType type);
llvm::Value *bitcast_to_u64(llvm::Value *val, DataType type);
~TaskCodeGenLLVM() override = default;
private:
void set_struct_to_buffer(llvm::Value *buffer,
llvm::Type *buffer_type,
const std::vector<Stmt *> &elements,
const Type *current_type,
int &current_element,
std::vector<llvm::Value *> &current_index);
virtual std::tuple<llvm::Value *, llvm::Value *> get_spmd_info() = 0;
};
} // namespace taichi::lang
#endif // #ifdef TI_WITH_LLVM
```
|
Ludwig Steeg (22 December 1894 – 6 September 1945) was a German Nazi politician who was the Oberbürgermeister (Lord Mayor) and Stadtpräsident (City President) of Berlin during the Third Reich.
Early life
Steeg was born in Ottweiler near Saarbrücken, the son of a teacher. As a young man he moved to Berlin and gained a junior post in the city administration. In World War I he served in the infantry, attaining the rank of Oberleutnant of reserves and earning the Iron Cross, 1st and 2nd class. Returning to Berlin in 1919, he rejoined the administration and was, among other things, responsible for the city sanitation services and transport. He joined the Nazi Party in 1933 (membership number 1,485,884) and thus gained rapid promotion as the majority of Berlin's civil servants who were loyal to the Social Democrats were removed from office. He became deputy to Julius Lippert, the Staatskommissar (State Commissioner) of Berlin from 1933, who became Oberbürgermeister and Stadtpräsident in January 1937.
Oberbürgermeister and Stadtpräsident
The real power in Berlin under the Nazis was, however, the Party Gauleiter (Regional Leader), Joseph Goebbels, and in July 1940 he persuaded Adolf Hitler to dismiss Lippert, who he believed had become a rival to his authority. Steeg, a loyal nonentity, was appointed acting Oberbürgermeister and Stadtpräsident in his place, "for lack of anyone better," as Goebbels put it. A member of the SS (SS number 127,531), he attained the rank of SS-Brigadeführer on 30 January 1943.
Steeg was responsible, under Goebbels, for the city's budget, traffic, building regulations, schools, youth facilities and health services. All these came under increasing strain as World War II went on, partly because of the inexperience of the Nazi loyalists who had been placed in responsible jobs following the removal of experienced but politically unreliable officials, and partly because of the increasing shortage of staff as a result of wartime conscription.
Steeg was also responsible for preparing Berlin for the air-raids which were widely expected. He prepared plans for the production and distribution of food, the construction of shelters and the evacuation of women and children. At the time of the first severe raid on 23–24 August 1943, Goebbels blamed Steeg for the poor performance of the municipal authorities and threatened to dismiss him if matters failed to improve. The raids began to occur with increasing frequency and severity. About a million people were evacuated; even so, between 1943 and 1945 approximately 50,000 Berliners were killed in air-raids. Steeg performed his tasks competently but did not exercise a public leadership role, which was undertaken by Goebbels.
Removal as Stadtpräsident and death
By December 1943, Hitler wanted to settle the question of a permanent political leadership for Berlin and asked Goebbels to take on the job of Stadtpräsident while Steeg would be relegated to the more ceremonial post of Oberbürgermeister. Goebbels agreed to this as a means of obtaining more direct control over municipal authorities. On 7 April 1944, Goebbels took over direct administrative control of the city when he was formally named Stadtpräsident.
During the last months of the war, Steeg was made permanent Oberbürgermeister in February 1945 but was eclipsed by Goebbels in preparing the defences of Berlin against the approaching Red Army. He does not seem to have played any significant role in these events: he is not mentioned in any of the histories of Berlin at this time, such as Cornelius Ryan's The Last Battle or Anthony Read and David Fisher's The Fall of Berlin. Nevertheless, his post as Oberbürgermeister made him the symbolic head of the capital of the German Reich. When the city surrendered to the Soviet forces on 2 May 1945, he was arrested and taken to a Soviet internment camp, where he died in unexplained circumstances in September.
References
Sources
1894 births
1945 deaths
German Army personnel of World War I
German people who died in Soviet detention
Mayors of Berlin
Military personnel from Saarland
Nazi Party politicians
People from Neunkirchen (German district)
People from the Rhine Province
Place of death missing
Recipients of the Iron Cross (1914), 1st class
Recipients of the Iron Cross (1914), 2nd class
SS-Brigadeführer
|
```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Dynamic.Core;
using System.Threading;
using System.Threading.Tasks;
using MongoDB.Driver;
using MongoDB.Driver.Linq;
using Volo.Abp.Domain.Repositories.MongoDB;
using Volo.Abp.MongoDB;
namespace Volo.Abp.FeatureManagement.MongoDB;
public class MongoFeatureValueRepository :
MongoDbRepository<IFeatureManagementMongoDbContext, FeatureValue, Guid>,
IFeatureValueRepository
{
public MongoFeatureValueRepository(IMongoDbContextProvider<IFeatureManagementMongoDbContext> dbContextProvider)
: base(dbContextProvider)
{
}
public virtual async Task<FeatureValue> FindAsync(
string name,
string providerName,
string providerKey,
CancellationToken cancellationToken = default)
{
return await (await GetMongoQueryableAsync(cancellationToken))
.OrderBy(x => x.Id)
.FirstOrDefaultAsync(s => s.Name == name && s.ProviderName == providerName && s.ProviderKey == providerKey, GetCancellationToken(cancellationToken));
}
public virtual async Task<List<FeatureValue>> FindAllAsync(
string name,
string providerName,
string providerKey,
CancellationToken cancellationToken = default)
{
return await (await GetMongoQueryableAsync(cancellationToken))
.Where(s => s.Name == name && s.ProviderName == providerName && s.ProviderKey == providerKey).ToListAsync(GetCancellationToken(cancellationToken));
}
public virtual async Task<List<FeatureValue>> GetListAsync(
string providerName,
string providerKey,
CancellationToken cancellationToken = default)
{
return await (await GetMongoQueryableAsync(cancellationToken))
.Where(s => s.ProviderName == providerName && s.ProviderKey == providerKey)
.ToListAsync(GetCancellationToken(cancellationToken));
}
public virtual async Task DeleteAsync(
string providerName,
string providerKey,
CancellationToken cancellationToken = default)
{
var dbContext = await GetDbContextAsync();
await dbContext.FeatureValues
.DeleteManyAsync(x => x.ProviderName == providerName && x.ProviderKey == providerKey, GetCancellationToken(cancellationToken));
}
}
```
|
Marie Elizabeth Macarte (1827 –20 September 1892) was an English equestrienne and circus performer who found success in Britain and the United States in the 1840s to 1860s.
Early life and career
Born in her mother's home town of Leigh-on-Sea in Essex in 1827 as Marie Elizabeth Ginnett, she was the daughter of Ann née Partridge (1803–1877) and the circus performer Jean Pierre Ginnett (1798–1861). Her older brother was John Frederick Ginnett (1825–1892), who later was the proprietor of Ginnett's Circus. A distant relative was the lion-tamer Thomas Macarte, who was killed in the ring in 1872. Marie Ginnett was a pupil of Andrew Ducrow and started performing as Miss Ginnett when she was about 3 years old. In 1841 she married Michael 'John' Macarthy, an equestrian artiste performing with her father and later a vaulter, tumbler and acrobat who had performed at Astley's Royal Amphitheatre in London and who was a member of the Macarte dynasty of acrobats and circus performers who claimed to have been performing since the early 18th century. With Michael Macarthy her children were: Marie Louise Macarthy (1848–); Adelaide Macarthy (1850–1930); Frederick Macarthy (1852–), a high wire and general circus performer who later had a performing dog and monkey act; Henry Macarthy (1853–1924); Blanche Macarthy (1855–), and Kate Macarthy (1856–).
Marie Macarte made her American début in 1842, and in October 1845 she was with Howe's Circus, with the critic of the New York Daily Herald writing of her:
"But what shall we say of Madame Macarte? - the most graceful and beautiful female equestrian of the age. Nature formed her in one of its happiest moods, as her physique would be a good study for a sculptor. Her act of horsemanship is of the most daring and brilliant description, while her attitudes in almost every variety of grace, charm and fascinate. She rivets the attention of the whole audience, and the eye is dazzled in following the mazy of her beautiful and fantastic evolutions. Now she looks the Hindoostanee shawl girl to perfection–and now the pious nun–again she changes into a voluptuous Sultana, and then transforms herself into the happiest peasant girl of vine-encircled France. But her riding must be seen to be appreciated."
She went on to perform with Sands, Lent & Co Circus (1847) followed by the Welch & Delavan Circus (1847). The proprietors of Welch and Delavan had entered into an expensive written agreement with the Macartes, paying them a weekly salary of $100 plus expenses, supplying two horses for Marie Macarte's carriage and paying her a third of a benefit at each venue where she performed. In return Welch & Delavan would receive one of her horses and the right to use her name in any publicity for their circus while she was appearing with them. They had spent $2,000 in printing bills and posters announcing her performances throughout the tour. However, the Macartes broke the agreement by leaving to join "Dr" Gilbert R. Spalding's Monster Circus, which was performing just ahead of Welch & Delavan at the same venues where they were booked to appear. The Macartes for their part stated that they had left because the conduct of Mr. Delavan had made Marie Macarte "very uncomfortable and unhappy." The ruling was that as the Macartes had entered into an agreement for personal services it was not enforceable in law. In 1848 she performed at the Vauxhall Gardens in London and at Astley's Amphitheatre in 1849.
Back in Britain, from 28 January to 2 February 1850 she was with Franconi's Circus in Birmingham, before embarking on an extensive tour of the provinces from 1850 to 1853 as "the only real troupe of lady equestrians" with her and her husband's newly-formed Macarte and Bell's Grand American Circus - actually quite a small circus by the standards of the time - which was owned by her husband Michael Macarthy, his brother Dan and Dick Bell. In 1853 their circus was joined by the famous clown Thomas Barry.
Success in America
On crossing the Atlantic Macarte toured with the Nixon-Macarte Circus in Washington D.C. (1863); 'the fearless and graceful equestrian, Mme. Marie Macarte' was with James M. Nixon's Alhambra Circus in New York City (1863); the Hippotheatron in New York City (1864); Rivers & Derious Circus in Washington D.C. (1864); the National Circus in Cincinnati (1864–65); Frank Howes (1866); Palmer's (1866); Mike Lipman's (1866–67); Haight & Chambers Circus (1867); Michael O'Connor's Circus (1869); Stowe & Norton Circus (1869), and G. G. Grady's Old-Fashioned Circus (1870).
In her bareback riding act she jumped over ribbons, leapt through paper balloons, and performed a scarf dance. A new feature she introduced to the circus ring was the performance of scenes from mythology along with her riding act, which originated from Ducrow's work in England. Her act, which included broad and high leaps and mock sword fights on horseback, was considered new and novel for the time.
Later years
After the death of her husband Michael Macarte at Ipswich in 1856, she married George Clark at Cranbrook, Kent on 11 September 1856. Her children with him were: Georgina Clark (1858–); George Clark (1859–) and Charles Clark (1860–). Clark was killed in an accident at sea in 1863 while returning from the United States, when he fell down a cargo hatch. She married Daniel Rhodes (real name Rose, d. 13 February 1890), a veteran advertiser and manager, in Harrison, Texas on 9 March 1868.
Marie Macarte retired in 1874 to found an equestrian and gymnastic furnishing business for the sale of all equipment needed for circus acts. In 1879 her four daughters Marie Louise, Adelaide, Blanche and Kate Macarthy formed a circus in their own right in the United States as the Macarte Sisters Parisian Circus. Her daughter Adelaide Macarte married Hubert Cooke, an equestrian performer who was killed in the ring while performing with the Circus Strepetow in Odessa in 1917.
Marie Macarte died in New York in 1892.
Her granddaughters were the acrobats and high wire act the Macarte Sisters.
References
1827 births
1892 deaths
English female equestrians
British circus performers
Circus owners
|
CGNET Services International, based in Mountain View, California, is an example of an early pioneer in international data communications. Beginning with landmark achievements in the 1980s and 1990s, CGNET has become one of the world's most well-known email providers in the international nonprofit community. Founded in 1983 by Georg Lindsey, CGNET built some of the earliest international email networks, providing custom email services over the Dialcom network in 1984 and later moved to the Internet.
Before the Internet, organizations usually sent mail over private data networks. While individuals could use email services on some value-added carriers, those services had not been configured to serve organizations. CGNET changed this when it set up organizational communications over Dialcom. In the 1990s, it performed a similar feat by providing its customers with an early version of Voice over IP, which did not use the VoIP protocols but made it possible to transmit voice over IP networks using statistical multiplexing.
CGNET's first client was the Consultative Group on International Agricultural Research (CGIAR), from which CGNET derived its name. Since then, virtually all of its customers are reported as nonprofits, foundations or non-governmental organizations. CGNET's international experience includes several times when it has relocated clients' facilities or rerouted their email because their offices have been caught in the middle of armed rebellions or similar disorders.
External links
Official Website
Notes
Information technology companies of the United States
Companies based in Mountain View, California
Technology companies established in 1983
1983 establishments in California
|
Bruno Hübner (1899–1983) was an Austrian film and television actor known for his work in Germany. He was born in Reichenberg, then in the Austro-Hungarian Empire, which later became part of Czechoslovakia.
Selected filmography
Punks Arrives from America (1935)
Under Blazing Heavens (1936)
A Wedding Dream (1936)
Patriots (1937)
The Broken Jug (1937)
Such Great Foolishness (1937)
The Mountain Calls (1938)
Maria Ilona (1939)
The Green Emperor (1939)
New Year's Eve on Alexanderplatz (1939)
Falstaff in Vienna (1940)
Judgement Day (1940)
Bismarck (1940)
The Rothschilds (1940)
The Fox of Glenarvon (1940)
Counterfeiters (1940)
The Rainer Case (1942)
Nora (1944)
The Millionaire (1947)
The Lost Face (1948)
Friday the Thirteenth (1949)
Doctor Praetorius (1950)
A Devil of a Woman (1951)
My Name is Niki (1952)
The Great Temptation (1952)
Knall and Fall as Detectives (1953)
Fanfare of Marriage (1953)
A Girl from Paris (1954)
Annie from Tharau (1954)
The Missing Miniature (1954)
The Flying Classroom (1954)
Holiday in Tyrol (1956)
The Last Ones Shall Be First (1957)
You Don't Shoot at Angels (1960)
Love Has to Be Learned (1963)
Don't Get Angry (1972)
References
Bibliography
Goble, Alan. The Complete Index to Literary Sources in Film. Walter de Gruyter, 1999.
External links
1899 births
1983 deaths
People from Liberec District
Austrian male film actors
Austrian male television actors
German Bohemian people
|
Clarence Blakiston (23 April 1864 – 21 March 1943) was a British film and stage actor, comedian and singer. During a career spanning five decades he played the title role in the Sherlock Holmes parody Sheerluck Jones, or Why D’Gillette Him Off at Terry's Theatre (1901–02), which ran for 138 performances, and appeared in the original production of The Admirable Crichton at the Duke of York's Theatre in 1902.
Early life
He was born at Giggleswick in North Yorkshire in England, one of five children of Marie Jane née Simon (1825–1908) and John Richard Blakiston (1829–1917), HM Chief Inspector of Schools and Headmaster of Giggleswick School (1858–1866). In 1880 aged 16 Clarence Blakiston joined the Merchant Navy as an apprentice at Cardiff while in 1884 he was awarded a Certificate of Competency to serve as Second Mate. In 1888 he married Glasgow-born Clementina Lindsay née Low (1864–1936) and they had a daughter, Marie Blakiston (1889–1890).
Stage career
After leaving the Merchant Navy Blakiston determined to try his hand at the theatre. His brief biography in The Play Pictorial review of The Blue Moon in 1905 recorded that:
Blakiston's first engagement was with a modern, but somewhat shady, repertoire Company whose manager cast him for juvenile lead, only because he possessed a presentable wardrobe. The thirsty members of the company were most eager to show him how his parts really should be played, but stipulated that the coaching was done in the nearest bar-parlour. Soon after joining, his manager called him aside and said, "My boy, you're too good for juveniles. When I see real talent I always help it on. I'll sacrifice myself by exchanging parts with you – I'll lend you my clothes (take care of them as it has taken me years to collect them), and you shall lend me yours." "Whereupon this gentleman" says Mr. Blakiston, "possessed himself of all my available suits and linen, and two weeks later gave me 12/- for two weeks' work, explaining with tears in his eyes that business was so bad owing to my inability to play such important parts. I never saw my clothes again. The manager's clothes (mostly rags) I sold for a few shillings, and got insulted over the transaction." After going through many vicissitudes Mr. Blakiston obtained an introduction to Mr. Edward Compton who engaged him as prompter, and thenceforth he worked his way up to the position of leading man, which position he retained for five years before trying his fate in town.
His stage roles included Chastelard in The Queen's Room (1891) at the Opera Comique; Harry Dornton in The Road to Ruin (1891) at the Opera Comique; Roger Conant in The Mayflower (1892); Captain Simmonds in Delia Harding (1895) at the Comedy Theatre; Mr Goldie in A Breezy Morning (1895) at the Comedy Theatre; Butler in The Manoeuvres of Jane at the Haymarket Theatre (1899); Stingo in She Stoops to Conquer (1900) at the Theatre Royal Haymarket; Mr Fenwick in The Second in Command (1900) at the Theatre Royal Haymarket; Captain Trent in The New Clown and Sheerluck Jones in the Sherlock Holmes parody Sheerluck Jones, or Why D’Gillette Him Off (1901) and Edgar Blatcher in A Tight Corner (1901) at Terry's Theatre; Harry Brandon in The Little French Milliner (1902) at the Avenue Theatre; John Treherne in The Admirable Crichton (1902) at the Duke of York's Theatre; Dr Topping in Little Mary (1903); Grieve in Du Barry (1905) at the Savoy Theatre; Sm in The Faddists (1905) at St James's Theatre; Prince Badahur Sanatsinjhi of Kharikar in The Blue Moon (1905) at the Lyric Theatre; Prince Hassan in A Persian Princess (1909) at the Queen's Theatre; Richard Gilder in Within the Law (1916) at the Theatre Royal in Melbourne in Australia and the same role at the Kingsway Theatre (1920); Dr Macfarlane in The Unknown by W. Somerset Maugham at the Aldwych Theatre (1920) starring Basil Rathbone and Lady Tree; Harding in Send for Dr. O'Grady (1923) at the Criterion Theatre; Sir Robert Shale in The Lie (1924) at the Regent Theatre, and Archbishop in High Treason at the Strand Theatre (1928).
Film roles
Blakiston's film roles include Richard Gilder in the Australian film Within the Law (1916), M. Duval in The Lady of the Camellias (1922), Sir John Edmonds in Somebody's Darling (1925), Sir George Venning in Rogues of the Turf (1923), Henry Leslie in A Peep Behind the Scenes (1929), Mr Peabody in The Girl in the Crowd (1935) (Denis Gifford, The British Film Catalogue, Routledge, 2000), Love Up the Pole (1936), and the Duke of Sussex in Victoria the Great (1937).
By 1939 Clarence Blakiston was living in Ainsdale in Southport, Merseyside and here he died in 1943. In his will he left £221 3s 11d to Ellen Rosemary Blakiston.
References
External links
Blakiston on Internet Movie Database
1864 births
1943 deaths
People educated at Giggleswick School
English male silent film actors
English male film actors
English male stage actors
English male musical theatre actors
English operatic baritones
19th-century English male actors
20th-century English male actors
|
111 Squadron or 111th Squadron may refer to:
No. 111 Squadron RAF, a unit of the United Kingdom Royal Air Force
111th Reconnaissance Squadron, a unit of the United States Air Force
111th Space Operations Squadron, a unit of the United States Air Force
111 Squadron, Republic of Singapore Air Force
See also
111th Division (disambiguation)
111th Regiment (disambiguation)
|
James Smith (November 26, 1737 – April 11, 1813) was a frontiersman, farmer and soldier in British North America. In 1765, he led the "Black Boys", a group of Pennsylvania men, in a nine-month rebellion against British rule ten years before the outbreak of the American Revolutionary War. He participated in the Revolutionary War as a colonel of the Pennsylvania militia and was a legislator in the Kentucky General Assembly. Smith was also an author, publishing a memoir about his captivity by Native Americans in his Narrative in 1799, and in 1812 an in-depth analysis of Native-American fighting techniques, based on observations during his captivity.
Early life
Smith was born in Lancaster County, Pennsylvania, in an area now part of Franklin County, Pennsylvania. Some later sources suggest that he had little formal education.
French and Indian War and aftermath
In May 1755, he worked on the Braddock Road, a road built west from Alexandria, Virginia in support of General Edward Braddock's ill-fated expedition against the French. He was captured by Delaware Indians and brought to Fort Duquesne at the Forks of the Ohio River, where he was forced to run a gauntlet before being given over to the French. He was adopted by a Mohawk family, ritually cleansed, and made to practice tribal ways – ultimately gaining respect for Indian culture. He escaped near Montreal, but was jailed by the French for four months until his release in a prisoner exchange with the British. He returned to the Conococheague Valley in Pennsylvania and took up farming, marrying Anne Wilson in May 1763.
During Pontiac's War, he fought in the 1763 Battle of Bushy Run and accompanied the 1764 British expedition led by Henry Bouquet into the Ohio Country. When the unrest subsided, however, the British allowed trading with the Native Americans to resume, angering the colonists.
Black Boys Rebellion
In the 1760s, Smith took part in an unofficial band called the Black Boys, so called because they resided in Black's Town (then named after resident James Black, present-day Mercersburg, Pennsylvania) and disguised themselves in Native American dress. The band was upset with British policy regarding American Indians following Pontiac's War. On March 6, 1765, out of hatred for the Native Americans, they stopped a pack train and burned goods, including rum and gunpowder, that Irish-born official George Croghan sought to trade to the Indians.
British authorities, however, supported Croghan's trading, and this led to the Black Boys Rebellion. The rebels laid siege to Fort Loudoun in the Pennsylvania mountain country and captured enough soldiers to exchange them two-for-one for settlers imprisoned, rightly or wrongly, for raids on wagon trains. The rebellion subsided in November.
In June 1766, Smith left to explore Kentucky.
In 1769, Smith and the Black Boys surprised Fort Bedford, freeing some prisoners being held there.
Later in 1769, while passing through Bedford with two companions, Smith was accosted by several men intent upon his arrest for being the leader of the Black Boys. Shots were fired, and one of Smith's companions was accidentally killed. Smith was initially found guilty of murder and jailed for four months before being exonerated and released. During his jail time, a group of 300 people, some of them Black Boys, came to free Smith from the jail, but Smith convinced them to return home in peace.
American Revolutionary War
Smith represented Westmoreland County, Pennsylvania at the 1776 Constitutional Convention.
When the American Revolutionary War broke out, he joined the Pennsylvania militia as captain, and was made a colonel in 1778.
Smith described his orders for at least one action against Indians: "In case of an attack, the officers were immediately to order the men to face out and take trees – in this position the Indians could not avail themselves by surrounding us, or have an opportunity of shooting a man from either side of the tree."
After his wife died in 1778, Smith moved to Westmoreland County. In 1785, he married Margaret Irwin. By the late 1780s, he and his family were living in Bourbon County, Kentucky. He served as a member of the Kentucky General Assembly for a number of years.
In 1799, he published his narrative, An Account of the Remarkable Occurrences in the Life and Travels of Col. James Smith, consisting of an autobiography and an analysis of Indian culture.
Missionary work
Smith became a Presbyterian missionary to the Native Americans, aided by the knowledge he had acquired of their customs in his early captivity. His son became a Shaker, but he himself, after living with his son among the Shakers for a few months, concluded they were a cult and denounced them in a pamphlet entitled Remarkable Occurrences Lately Discovered Among The People Called Shakers, printed in 1810. He continued his attack in another pamphlet, "Shakerism Detected", also printed in 1810.
In 1812, in response to the nation's continuing troubles with the Indians, Smith published "A Treatise on the Mode and Manner of Indian War".
Death
According to the May 8, 1813, edition of the Kentucky newspaper The Reporter, "DIED, at the house of Mr. John Rodgers, Green County, on Sunday, the 11th of April, Colonel JAMES SMITH, late of Bourbon County ... after an illness of four weeks" from an unspecified disease.
Book, film, and television
James Smith was the subject of the 1937 book The First Rebel by Neil H. Swanson. He was portrayed by John Wayne in the 1939 movie Allegheny Uprising, which was based on the book. A segment in the 2006 PBS miniseries The War that Made America shows a dramatization of Smith running the Native American gauntlet, following his capture in 1755.
References
External links
Digitized images of An account of the remarkable occurrences in the life and travels of Col. James Smith, during his captivity with the Indians, in the years 1755, 56, 57, 58, & 59 housed at the University of Kentucky Libraries Special Collections Research Center
1737 births
1813 deaths
People of colonial Pennsylvania
Pennsylvania militiamen in the American Revolution
Members of the Kentucky General Assembly
People from Franklin County, Pennsylvania
American Presbyterians
|
Yandi may refer to:
Yandi, Achkhoy-Martanovsky District, a rural locality in Chechnya
Yan Emperor, aka Yán Dì, ancient Chinese ruler
Yandi Munawar (born 1992), Indonesian footballer
Yandi mine, iron ore mine in Western Australia
Yandicoogina mine, iron ore mine in Western Australia
Yandhi, announced 2019 album by Kanye West
|
```swift
//
// extensionVCMain.swift
// RsyncOSX
//
// Created by Thomas Evensen on 31.05.2018.
//
import Cocoa
import Foundation
// Get output from rsync command
extension ViewControllerMain: GetOutput {
// Get information from rsync output.
func getoutput() -> [String] {
return TrimTwo(outputprocess?.getOutput() ?? []).trimmeddata
}
}
// Scheduled tasks are changed, read the schedule again and redraw the table
extension ViewControllerMain: Reloadandrefresh {
// Refresh tableView in main
func reloadtabledata() {
globalMainQueue.async { () in
self.mainTableView.reloadData()
}
}
}
// Get index of selected row
extension ViewControllerMain: GetSelecetedIndex {
func getindex() -> Int? {
return localindex
}
}
// New profile is loaded.
extension ViewControllerMain: NewProfile {
// Function is called from profiles when a new or the default profile is selected
func newprofile(profile: String?, selectedindex: Int?) {
if let index = selectedindex {
profilepopupbutton.selectItem(at: index)
} else {
initpopupbutton()
}
reset()
singletask = nil
deselect()
// Read configurations
configurations = createconfigurationsobject(profile: profile)
// Make sure the loaded profile is displayed
displayProfile()
reloadtabledata()
}
func reloadprofilepopupbutton() {
globalMainQueue.async { () in
self.displayProfile()
}
}
func createconfigurationsobject(profile: String?) -> Configurations? {
configurations = nil
configurations = Configurations(profile: profile)
return configurations
}
}
// Check for remote connections, reload table when completed.
extension ViewControllerMain: Connections {
func displayConnections() {
globalMainQueue.async { () in
self.mainTableView.reloadData()
}
}
}
extension ViewControllerMain: NewVersionDiscovered {
func notifyNewVersion() {
globalMainQueue.async { () in
self.info.stringValue = Infoexecute().info(num: 9)
}
}
}
extension ViewControllerMain: DismissViewController {
func dismiss_view(viewcontroller: NSViewController) {
dismiss(viewcontroller)
globalMainQueue.async { () in
self.mainTableView.reloadData()
self.displayProfile()
}
}
}
// Deselect a row
extension ViewControllerMain: DeselectRowTable {
// deselect a row after row is deleted
func deselect() {
if let index = localindex {
SharedReference.shared.process = nil
localindex = nil
mainTableView.deselectRow(index)
}
}
}
// If rsync throws any error
extension ViewControllerMain: RsyncError {
func rsyncerror() {
// Set on or off in user configuration
globalMainQueue.async { () in
self.info.stringValue = "Rsync error, see logfile..."
self.info.textColor = self.setcolor(nsviewcontroller: self, color: .red)
self.info.isHidden = false
guard SharedReference.shared.haltonerror == true else { return }
self.deselect()
_ = InterruptProcess()
self.singletask?.error()
}
}
}
// If, for any reason, handling files or directory throws an error
extension ViewControllerMain: ErrorMessage {
func errormessage(errorstr: String, error errortype: RsyncOSXTypeErrors) {
globalMainQueue.async { () in
if errortype == .logfilesize {
self.info.stringValue = "Reduce size logfile, filesize is: " + errorstr
self.info.textColor = self.setcolor(nsviewcontroller: self, color: .red)
self.info.isHidden = false
} else {
self.outputprocess?.addlinefromoutput(str: errorstr + "\n" + errorstr)
self.info.stringValue = "Some error: see logfile."
self.info.textColor = self.setcolor(nsviewcontroller: self, color: .red)
self.info.isHidden = false
}
var message = [String]()
message.append(errorstr)
_ = Logfile(message, error: true)
}
}
}
// Abort task from progressview
extension ViewControllerMain: Abort {
// Abort the task
func abortOperations() {
_ = InterruptProcess()
working.stopAnimation(nil)
localindex = nil
info.stringValue = ""
}
}
// Extensions from here are used in newSingleTask
extension ViewControllerMain: StartStopProgressIndicatorSingleTask {
func startIndicatorExecuteTaskNow() {
working.startAnimation(nil)
}
func startIndicator() {
working.startAnimation(nil)
}
func stopIndicator() {
working.stopAnimation(nil)
}
}
extension ViewControllerMain: GetConfigurationsObject {
func getconfigurationsobject() -> Configurations? {
guard configurations != nil else { return nil }
return configurations
}
}
extension ViewControllerMain: ErrorOutput {
func erroroutput() {
info.stringValue = Infoexecute().info(num: 2)
}
}
extension ViewControllerMain: SendOutputProcessreference {
func sendoutputprocessreference(outputprocess: OutputfromProcess?) {
self.outputprocess = outputprocess
}
}
extension ViewControllerMain: OpenQuickBackup {
func openquickbackup() {
globalMainQueue.async { () in
self.presentAsSheet(self.viewControllerQuickBackup!)
}
}
}
extension ViewControllerMain: Count {
func maxCount() -> Int {
return TrimTwo(outputprocess?.getOutput() ?? []).maxnumber
}
func inprogressCount() -> Int {
return outputprocess?.getOutput()?.count ?? 0
}
}
extension ViewControllerMain: ViewOutputDetails {
func getalloutput() -> [String] {
return outputprocess?.getOutput() ?? []
}
func reloadtable() {
weak var localreloadDelegate: Reloadandrefresh?
localreloadDelegate = SharedReference.shared.getvcref(viewcontroller: .vcalloutput) as? ViewControllerAllOutput
localreloadDelegate?.reloadtabledata()
}
func appendnow() -> Bool {
if SharedReference.shared.getvcref(viewcontroller: .vcalloutput) != nil {
return true
} else {
return false
}
}
func outputfromrsync(data: [String]?) {
if outputprocess == nil {
outputprocess = OutputfromProcess()
}
if let data = data {
for i in 0 ..< data.count {
outputprocess?.addlinefromoutput(str: data[i])
}
} else {
outputprocess?.output = []
}
}
}
extension ViewControllerMain: OpenOutputfromrsync {
func openoutputfromrsync() {
presentAsModalWindow(viewControllerAllOutput!)
}
}
enum Color {
case red
case white
case green
case black
}
protocol Setcolor: AnyObject {
func setcolor(nsviewcontroller: NSViewController, color: Color) -> NSColor
}
extension Setcolor {
private func isDarkMode(view: NSView) -> Bool {
return view.effectiveAppearance.bestMatch(from: [.darkAqua, .aqua]) == .darkAqua
}
func setcolor(nsviewcontroller: NSViewController, color: Color) -> NSColor {
let darkmode = isDarkMode(view: nsviewcontroller.view)
switch color {
case .red:
return .red
case .white:
if darkmode {
return .white
} else {
return .black
}
case .green:
if darkmode {
return .green
} else {
return .blue
}
case .black:
if darkmode {
return .white
} else {
return .black
}
}
}
}
protocol Checkforrsync: AnyObject {
func checkforrsync() -> Bool
}
extension Checkforrsync {
func checkforrsync() -> Bool {
if SharedReference.shared.norsync == true {
_ = Norsync()
return true
} else {
return false
}
}
}
// Protocol for start,stop, complete progressviewindicator
protocol StartStopProgressIndicator: AnyObject {
func start()
func stop()
}
// Protocol for either completion of work or update progress when Process discovers a
// process termination and when filehandler discover data
protocol UpdateProgress: AnyObject {
func processTermination()
func fileHandler()
}
protocol ViewOutputDetails: AnyObject {
func reloadtable()
func appendnow() -> Bool
func getalloutput() -> [String]
func outputfromrsync(data: [String]?)
}
// Get multiple selected indexes
protocol GetMultipleSelectedIndexes: AnyObject {
func getindexes() -> [Int]
func multipleselection() -> Bool
}
extension ViewControllerMain: GetMultipleSelectedIndexes {
func multipleselection() -> Bool {
return multipeselection
}
func getindexes() -> [Int] {
if let indexes = indexset {
return indexes.map { $0 }
} else {
return []
}
}
}
extension ViewControllerMain: DeinitExecuteTaskNow {
func deinitexecutetasknow() {
executetasknow = nil
info.stringValue = Infoexecute().info(num: 0)
}
}
extension ViewControllerMain: DisableEnablePopupSelectProfile {
func enableselectpopupprofile() {
profilepopupbutton.isEnabled = true
}
func disableselectpopupprofile() {
profilepopupbutton.isEnabled = false
}
}
extension ViewControllerMain: Sidebarbuttonactions {
func sidebarbuttonactions(action: Sidebaractionsmessages) {
switch action {
case .Delete:
delete()
default:
return
}
}
}
```
|
```c
/* Target-dependent code for FreeBSD/amd64.
This file is part of GDB.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place - Suite 330,
Boston, MA 02111-1307, USA. */
#include "defs.h"
#include "arch-utils.h"
#include "frame.h"
#include "gdbcore.h"
#include "regcache.h"
#include "osabi.h"
#include "gdb_string.h"
#include "amd64-tdep.h"
#include "solib-svr4.h"
/* Support for signal handlers. */
/* Assuming NEXT_FRAME is for a frame following a BSD sigtramp
routine, return the address of the associated sigcontext structure. */
static CORE_ADDR
amd64fbsd_sigcontext_addr (struct frame_info *next_frame)
{
CORE_ADDR sp;
/* The `struct sigcontext' (which really is an `ucontext_t' on
FreeBSD/amd64) lives at a fixed offset in the signal frame. See
<machine/sigframe.h>. */
sp = frame_unwind_register_unsigned (next_frame, AMD64_RSP_REGNUM);
return sp + 16;
}
/* FreeBSD 5.1-RELEASE or later. */
/* Mapping between the general-purpose registers in `struct reg'
format and GDB's register cache layout.
Note that some registers are 32-bit, but since we're little-endian
we get away with that. */
/* From <machine/reg.h>. */
static int amd64fbsd_r_reg_offset[] =
{
14 * 8, /* %rax */
11 * 8, /* %rbx */
13 * 8, /* %rcx */
12 * 8, /* %rdx */
9 * 8, /* %rsi */
8 * 8, /* %rdi */
10 * 8, /* %rbp */
20 * 8, /* %rsp */
7 * 8, /* %r8 ... */
6 * 8,
5 * 8,
4 * 8,
3 * 8,
2 * 8,
1 * 8,
0 * 8, /* ... %r15 */
17 * 8, /* %rip */
19 * 8, /* %eflags */
18 * 8, /* %cs */
21 * 8, /* %ss */
-1, /* %ds */
-1, /* %es */
-1, /* %fs */
-1 /* %gs */
};
/* Location of the signal trampoline. */
CORE_ADDR amd64fbsd_sigtramp_start_addr = 0x7fffffffffc0;
CORE_ADDR amd64fbsd_sigtramp_end_addr = 0x7fffffffffe0;
/* From <machine/signal.h>. */
int amd64fbsd_sc_reg_offset[] =
{
24 + 6 * 8, /* %rax */
24 + 7 * 8, /* %rbx */
24 + 3 * 8, /* %rcx */
24 + 2 * 8, /* %rdx */
24 + 1 * 8, /* %rsi */
24 + 0 * 8, /* %rdi */
24 + 8 * 8, /* %rbp */
24 + 22 * 8, /* %rsp */
24 + 4 * 8, /* %r8 ... */
24 + 5 * 8,
24 + 9 * 8,
24 + 10 * 8,
24 + 11 * 8,
24 + 12 * 8,
24 + 13 * 8,
24 + 14 * 8, /* ... %r15 */
24 + 19 * 8, /* %rip */
24 + 21 * 8, /* %eflags */
24 + 20 * 8, /* %cs */
24 + 23 * 8, /* %ss */
-1, /* %ds */
-1, /* %es */
-1, /* %fs */
-1 /* %gs */
};
void
amd64fbsd_init_abi (struct gdbarch_info info, struct gdbarch *gdbarch)
{
struct gdbarch_tdep *tdep = gdbarch_tdep (gdbarch);
/* Obviously FreeBSD is BSD-based. */
i386bsd_init_abi (info, gdbarch);
tdep->gregset_reg_offset = amd64fbsd_r_reg_offset;
tdep->gregset_num_regs = ARRAY_SIZE (amd64fbsd_r_reg_offset);
tdep->sizeof_gregset = 22 * 8;
amd64_init_abi (info, gdbarch);
tdep->sigtramp_start = amd64fbsd_sigtramp_start_addr;
tdep->sigtramp_end = amd64fbsd_sigtramp_end_addr;
tdep->sigcontext_addr = amd64fbsd_sigcontext_addr;
tdep->sc_reg_offset = amd64fbsd_sc_reg_offset;
tdep->sc_num_regs = ARRAY_SIZE (amd64fbsd_sc_reg_offset);
/* FreeBSD uses SVR4-style shared libraries. */
set_solib_svr4_fetch_link_map_offsets
(gdbarch, svr4_lp64_fetch_link_map_offsets);
}
/* Provide a prototype to silence -Wmissing-prototypes. */
void _initialize_amd64fbsd_tdep (void);
void
_initialize_amd64fbsd_tdep (void)
{
gdbarch_register_osabi (bfd_arch_i386, bfd_mach_x86_64,
GDB_OSABI_FREEBSD_ELF, amd64fbsd_init_abi);
}
```
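The `amd64fbsd_r_reg_offset` and `amd64fbsd_sc_reg_offset` tables above encode a simple idiom: index by GDB register number, get a byte offset into a raw register block, with -1 marking registers that are absent from the layout. A minimal Python sketch of that lookup (hypothetical miniature table, not GDB's real API):

```python
import struct

# Hypothetical miniature of the offset-table idiom: index by register
# number, get a byte offset into the raw register block; -1 means the
# register is not present in this layout.
reg_offset = [14 * 8, 11 * 8, -1]  # e.g. %rax, %rbx, %ds

def fetch_reg(block, regnum):
    """Return the 64-bit little-endian value at the register's offset, or None."""
    off = reg_offset[regnum]
    if off < 0:
        return None
    return struct.unpack_from("<Q", block, off)[0]

# Fake register block: 22 registers of 8 bytes, with %rax set to 0x1234.
block = bytearray(22 * 8)
struct.pack_into("<Q", block, 14 * 8, 0x1234)
```

Because the target is little-endian, 32-bit fields such as %eflags can be read through the same 8-byte slots, which is why the real tables get away with uniform 8-byte strides.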
|
Scinax elaeochroa, commonly known as the Sipurio snouted treefrog, or olive snouted treefrog, is a species of frog in the family Hylidae. It is found in the Caribbean lowlands of Nicaragua and Panama and in the Pacific lowlands of Costa Rica and Panama, with an isolated population in Colombia.
Description
Males grow to and females to in snout–vent length. The snout is protruding. The dorsum is yellowish, sometimes with a hint of green or light brown, and turns brilliant yellow in breeding males. There are usually some darker markings on the dorsum. The arms and legs are usually barred. The venter varies from cream to yellow to orange; the throat is usually yellow.
The vocal sac in breeding males is bright yellow-orange. The advertisement call is a series of short "waaks".
Habitat and conservation
The natural habitats of Scinax elaeochroa are humid lowland and lower premontane forests, occurring also in secondary and disturbed forest habitats. It can be found from sea level to above sea level (to asl in Colombia). It is primarily a nocturnal species that breeds in temporary ponds during the wet season. Eggs are laid in ponds or on adjacent vegetation, and the tadpoles develop in the pond.
Though a common and somewhat adaptable species, Scinax elaeochroa is potentially threatened by deforestation.
References
elaeochroa
Amphibians of Colombia
Amphibians of Costa Rica
Amphibians of Nicaragua
Amphibians of Panama
Amphibians described in 1875
Taxa named by Edward Drinker Cope
Taxonomy articles created by Polbot
|
```python
# -*- coding: utf-8 -*-
from __future__ import print_function
import ast
import copy
import os
import pathlib
import pickle
import pprint
import shutil
import sys
import time
import warnings
import requests
from verta._protos.public.common import CommonService_pb2 as _CommonCommonService
from verta._protos.public.modeldb import (
CommonService_pb2 as _CommonService,
ProjectService_pb2,
)
from verta._protos.public.modeldb import (
ExperimentRunService_pb2 as _ExperimentRunService,
)
from verta._vendored import six
from verta._internal_utils import (
_artifact_utils,
_pip_requirements_utils,
_request_utils,
_utils,
importer,
)
from verta.dataset.entities import (
_dataset,
_dataset_version,
)
from verta import data_types
from verta import deployment
from verta import utils
from verta.environment import _Environment
from ._entity import _MODEL_ARTIFACTS_ATTR_KEY
from ._deployable_entity import _DeployableEntity
class ExperimentRun(_DeployableEntity):
"""
Object representing a machine learning Experiment Run.
This class provides read/write functionality for Experiment Run metadata.
There should not be a need to instantiate this class directly; please use
:meth:`Client.set_experiment_run() <verta.Client.set_experiment_run>`.
Attributes
----------
id : str
ID of this Experiment Run.
name : str
Name of this Experiment Run.
has_environment : bool
Whether there is an environment associated with this Experiment Run.
url : str
Verta web app URL.
"""
def __init__(self, conn, conf, msg):
super(ExperimentRun, self).__init__(
conn, conf, _ExperimentRunService, "experiment-run", msg
)
def __repr__(self):
self._refresh_cache()
run_msg = self._msg
return "\n".join(
(
"name: {}".format(run_msg.name),
"url: {}".format(self.url),
"date created: {}".format(
_utils.timestamp_to_str(int(run_msg.date_created))
),
"date updated: {}".format(
_utils.timestamp_to_str(int(run_msg.date_updated))
),
"start time: {}".format(
_utils.timestamp_to_str(int(run_msg.start_time))
),
"end time: {}".format(_utils.timestamp_to_str(int(run_msg.end_time))),
"description: {}".format(run_msg.description),
"tags: {}".format(run_msg.tags),
"attributes: {}".format(_utils.unravel_key_values(run_msg.attributes)),
"id: {}".format(run_msg.id),
"experiment id: {}".format(run_msg.experiment_id),
"project id: {}".format(run_msg.project_id),
"hyperparameters: {}".format(
_utils.unravel_key_values(run_msg.hyperparameters)
),
"observations: {}".format(
_utils.unravel_observations(run_msg.observations)
),
"metrics: {}".format(_utils.unravel_key_values(run_msg.metrics)),
"artifact keys: {}".format(_utils.unravel_artifacts(run_msg.artifacts)),
)
)
def _update_cache(self):
self._hyperparameters = _utils.unravel_key_values(self._msg.hyperparameters)
self._metrics = _utils.unravel_key_values(self._msg.metrics)
@property
def _MODEL_KEY(self):
return _artifact_utils.MODEL_KEY
@property
def workspace(self):
self._refresh_cache()
msg = ProjectService_pb2.GetProjectById(id=self._msg.project_id)
url = "/api/v1/modeldb/project/getProjectById"
response = self._conn.make_proto_request("GET", url, params=msg)
response = self._conn.must_proto_response(response, msg.Response)
proj_proto = response.project
if proj_proto.workspace_service_id:
return self._conn.get_workspace_name_from_id(
proj_proto.workspace_service_id
)
else:
return self._conn._OSS_DEFAULT_WORKSPACE
@property
def name(self):
self._refresh_cache()
return self._msg.name
@property
def url(self):
return "{}://{}/{}/projects/{}/exp-runs/{}".format(
self._conn.scheme,
self._conn.socket,
self.workspace,
self._msg.project_id,
self.id,
)
@classmethod
def _generate_default_name(cls):
return "Run {}".format(_utils.generate_default_name())
@classmethod
def _get_proto_by_id(cls, conn, id):
Message = _ExperimentRunService.GetExperimentRunById
msg = Message(id=id)
response = conn.make_proto_request(
"GET", "/api/v1/modeldb/experiment-run/getExperimentRunById", params=msg
)
return conn.maybe_proto_response(response, Message.Response).experiment_run
@classmethod
def _get_proto_by_name(cls, conn, name, expt_id):
Message = _ExperimentRunService.GetExperimentRunByName
msg = Message(experiment_id=expt_id, name=name)
response = conn.make_proto_request(
"GET", "/api/v1/modeldb/experiment-run/getExperimentRunByName", params=msg
)
return conn.maybe_proto_response(response, Message.Response).experiment_run
@classmethod
def _create_proto_internal(
cls,
conn,
ctx,
name,
desc=None,
tags=None,
attrs=None,
date_created=None,
start_time=None,
end_time=None,
):
Message = _ExperimentRunService.CreateExperimentRun
msg = Message(
project_id=ctx.proj.id,
experiment_id=ctx.expt.id,
name=name,
description=desc,
tags=tags,
attributes=attrs,
date_created=date_created,
date_updated=date_created,
start_time=start_time,
end_time=end_time,
)
response = conn.make_proto_request(
"POST", "/api/v1/modeldb/experiment-run/createExperimentRun", body=msg
)
expt_run = conn.must_proto_response(response, Message.Response).experiment_run
print("created new ExperimentRun: {}".format(expt_run.name))
return expt_run
def _log_artifact(
self,
key,
artifact,
artifact_type,
extension=None,
method=None,
framework=None,
overwrite=False,
):
"""
Logs an artifact to this Experiment Run.
Parameters
----------
key : str
Name of the artifact.
artifact : str or file-like or object
Artifact or some representation thereof.
- If str, then it will be interpreted as a filesystem path, its contents read as bytes,
and uploaded as an artifact.
- If file-like, then the contents will be read as bytes and uploaded as an artifact.
- Otherwise, the object will be serialized and uploaded as an artifact.
artifact_type : int
Variant of `_CommonCommonService.ArtifactTypeEnum`.
extension : str, optional
Filename extension associated with the artifact.
method : str, optional
Serialization method used to produce the bytestream, if `artifact`
was already serialized by Verta.
framework : str, optional
Framework with which the artifact was created. This is
`model_type` returned by `_artifact_utils.serialize_model()`
overwrite : bool, default False
Whether to allow overwriting an existing artifact with key `key`.
"""
if isinstance(artifact, six.string_types):
artifact = os.path.expanduser(artifact)
artifact = open(artifact, "rb")
if (
hasattr(artifact, "read") and method is not None
): # already a verta-produced stream
artifact_stream = artifact
else:
artifact_stream, method = _artifact_utils.ensure_bytestream(artifact)
artifact_msg = self._create_artifact_msg(
key,
artifact_stream,
artifact_type=artifact_type,
method=method,
framework=framework,
extension=extension,
)
# log key to ModelDB
msg = _ExperimentRunService.LogArtifact(id=self.id, artifact=artifact_msg)
data = _utils.proto_to_json(msg)
if overwrite:
response = _utils.make_request(
"DELETE",
"{}://{}/api/v1/modeldb/experiment-run/deleteArtifact".format(
self._conn.scheme, self._conn.socket
),
self._conn,
json={"id": self.id, "key": key},
)
_utils.raise_for_http_error(response)
response = _utils.make_request(
"POST",
"{}://{}/api/v1/modeldb/experiment-run/logArtifact".format(
self._conn.scheme, self._conn.socket
),
self._conn,
json=data,
)
if not response.ok:
if response.status_code == 409:
raise ValueError(
"artifact with key {} already exists;"
" consider setting overwrite=True".format(key)
)
else:
_utils.raise_for_http_error(response)
self._upload_artifact(key, artifact_stream)
self._clear_cache()
def _upload_artifact(self, key, artifact_stream, part_size=_artifact_utils._64MB):
"""
Uploads `artifact_stream` to ModelDB artifact store.
Parameters
----------
key : str
artifact_stream : file-like
part_size : int, default 64 MB
If using multipart upload, number of bytes to upload per part.
"""
# TODO: add to Client config
env_part_size = os.environ.get("VERTA_ARTIFACT_PART_SIZE", "")
try:
part_size = int(float(env_part_size))
except ValueError: # not an int
pass
else:
print("set artifact part size {} from environment".format(part_size))
artifact_stream.seek(0)
if self._conf.debug:
print(
"[DEBUG] uploading {} bytes ({})".format(
_artifact_utils.get_stream_length(artifact_stream), key
)
)
artifact_stream.seek(0)
# check if multipart upload ok
url_for_artifact = self._get_url_for_artifact(key, "PUT", part_num=1)
if url_for_artifact.multipart_upload_ok:
# TODO: parallelize this
file_parts = iter(lambda: artifact_stream.read(part_size), b"")
for part_num, file_part in enumerate(file_parts, start=1):
print("uploading part {}".format(part_num), end="\r")
# get presigned URL
url = self._get_url_for_artifact(key, "PUT", part_num=part_num).url
# wrap file part into bytestream to avoid OverflowError
# Passing a bytestring >2 GB (num bytes > max val of int32) directly to
# ``requests`` will overwhelm CPython's SSL lib when it tries to sign the
# payload. But passing a buffered bytestream instead of the raw bytestring
# indicates to ``requests`` that it should perform a streaming upload via
# HTTP/1.1 chunked transfer encoding and avoid this issue.
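# For illustration (hypothetical values): the part iteration above relies
# on ``iter()`` with a b"" sentinel, which yields fixed-size parts until
# read() is exhausted, e.g.
#     stream = six.BytesIO(b"abcdef")
#     list(iter(lambda: stream.read(4), b""))  # [b'abcd', b'ef']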
# path_to_url
part_stream = six.BytesIO(file_part)
# upload part
response = _utils.make_request("PUT", url, self._conn, data=part_stream)
_utils.raise_for_http_error(response)
# commit part
url = "{}://{}/api/v1/modeldb/experiment-run/commitArtifactPart".format(
self._conn.scheme,
self._conn.socket,
)
msg = _CommonService.CommitArtifactPart(id=self.id, key=key)
msg.artifact_part.part_number = part_num
msg.artifact_part.etag = response.headers["ETag"]
data = _utils.proto_to_json(msg)
# TODO: increase retries
response = _utils.make_request("POST", url, self._conn, json=data)
_utils.raise_for_http_error(response)
print()
# complete upload
url = (
"{}://{}/api/v1/modeldb/experiment-run/commitMultipartArtifact".format(
self._conn.scheme,
self._conn.socket,
)
)
msg = _CommonService.CommitMultipartArtifact(id=self.id, key=key)
data = _utils.proto_to_json(msg)
response = _utils.make_request("POST", url, self._conn, json=data)
_utils.raise_for_http_error(response)
else:
# upload full artifact
if url_for_artifact.fields:
# if fields were returned by backend, make a POST request and supply them as form fields
response = _utils.make_request(
"POST",
url_for_artifact.url,
self._conn,
# requests uses the `files` parameter for sending multipart/form-data POSTs.
# path_to_url
# the file contents must be the final form field
# path_to_url#HTTPPOSTFormFields
files=list(url_for_artifact.fields.items())
+ [("file", artifact_stream)],
)
else:
response = _utils.make_request(
"PUT", url_for_artifact.url, self._conn, data=artifact_stream
)
_utils.raise_for_http_error(response)
print("upload complete ({})".format(key))
def _get_artifact_msg(self, key):
# get key-path from ModelDB
msg = _CommonService.GetArtifacts(id=self.id, key=key)
endpoint = "/api/v1/modeldb/experiment-run/getArtifacts"
response = self._conn.make_proto_request("GET", endpoint, params=msg)
response = self._conn.must_proto_response(response, msg.Response)
artifact_msg = {artifact.key: artifact for artifact in response.artifacts}.get(
key
)
if artifact_msg is None:
raise KeyError("no artifact found with key {}".format(key))
return artifact_msg
def _get_artifact(self, key):
"""
Gets the artifact with name `key` from this Experiment Run.
If the artifact was originally logged as just a filesystem path, that path will be returned.
Otherwise, bytes representing the artifact object will be returned.
Parameters
----------
key : str
Name of the artifact.
Returns
-------
str or bytes
Filesystem path or bytes representing the artifact.
bool
True if the artifact was only logged as its filesystem path.
"""
artifact = self._get_artifact_msg(key)
# TODO: remove handling of path_only since log_artifact_path() was removed
# which should also let us consolidate _get_artifact() in _DeployableEntity
if artifact.path_only:
return artifact.path, artifact.path_only
else:
# download artifact from artifact store
url = self._get_url_for_artifact(key, "GET").url
response = _utils.make_request("GET", url, self._conn)
_utils.raise_for_http_error(response)
return response.content, artifact.path_only
def _get_artifact_parts(self, key):
endpoint = (
"{}://{}/api/v1/modeldb/experiment-run/getCommittedArtifactParts".format(
self._conn.scheme,
self._conn.socket,
)
)
data = {"id": self.id, "key": key}
response = _utils.make_request("GET", endpoint, self._conn, params=data)
_utils.raise_for_http_error(response)
committed_parts = _utils.body_to_json(response).get("artifact_parts", [])
committed_parts = list(
sorted(
committed_parts,
key=lambda part: int(part["part_number"]),
)
)
return committed_parts
# TODO: Fix up get dataset to handle the Dataset class when logging dataset
# version
def _get_dataset(self, key):
"""
Gets the dataset with name `key` from this Experiment Run.
If the dataset was originally logged as just a filesystem path, that path will be returned.
Otherwise, bytes representing the dataset object will be returned.
Parameters
----------
key : str
Name of the artifact.
Returns
-------
str or bytes
Filesystem path or bytes representing the artifact.
bool
True if the artifact was only logged as its filesystem path.
"""
# get key-path from ModelDB
Message = _ExperimentRunService.GetDatasets
msg = Message(id=self.id)
data = _utils.proto_to_json(msg)
response = _utils.make_request(
"GET",
"{}://{}/api/v1/modeldb/experiment-run/getDatasets".format(
self._conn.scheme, self._conn.socket
),
self._conn,
params=data,
)
_utils.raise_for_http_error(response)
response_msg = _utils.json_to_proto(
_utils.body_to_json(response), Message.Response
)
dataset = {dataset.key: dataset for dataset in response_msg.datasets}.get(key)
if dataset is None:
# may be old artifact-based dataset
try:
dataset, path_only = self._get_artifact(key)
except KeyError:
six.raise_from(
KeyError("no dataset found with key {}".format(key)), None
)
else:
return dataset, path_only, None
else:
return dataset.path, dataset.path_only, dataset.linked_artifact_id
def clone(self, experiment_id=None):
"""
Returns a newly-created copy of this experiment run.
Parameters
----------
experiment_id : str, optional
ID of experiment to clone this run into. If not provided, the new
run will be cloned into this run's experiment.
Returns
-------
:class:`~verta.tracking.entities.ExperimentRun`
"""
# get info for the current run
Message = _ExperimentRunService.CloneExperimentRun
msg = Message(src_experiment_run_id=self.id, dest_experiment_id=experiment_id)
response = self._conn.make_proto_request(
"POST", "/api/v1/modeldb/experiment-run/cloneExperimentRun", body=msg
)
new_run_msg = self._conn.maybe_proto_response(response, Message.Response).run
new_run = ExperimentRun(self._conn, self._conf, new_run_msg)
return new_run
def get_date_created(self):
"""
Gets a timestamp representing the time (in UTC) this Experiment Run was created.
Returns
-------
timestamp : int
Unix timestamp in milliseconds.
"""
self._refresh_cache()
return int(self._msg.date_created)
def get_date_updated(self):
"""
Gets a timestamp representing the time (in UTC) this Experiment Run was updated.
Returns
-------
timestamp : int
Unix timestamp in milliseconds.
"""
self._refresh_cache()
return int(self._msg.date_updated)
def log_tag(self, tag):
"""
Logs a tag to this Experiment Run.
Parameters
----------
tag : str
Tag.
"""
if not isinstance(tag, six.string_types):
raise TypeError("`tag` must be a string")
Message = _ExperimentRunService.AddExperimentRunTags
msg = Message(id=self.id, tags=[tag])
data = _utils.proto_to_json(msg)
response = _utils.make_request(
"POST",
"{}://{}/api/v1/modeldb/experiment-run/addExperimentRunTags".format(
self._conn.scheme, self._conn.socket
),
self._conn,
json=data,
)
_utils.raise_for_http_error(response)
self._clear_cache()
def log_tags(self, tags):
"""
Logs multiple tags to this Experiment Run.
Parameters
----------
tags : list of str
Tags.
"""
tags = _utils.as_list_of_str(tags)
Message = _ExperimentRunService.AddExperimentRunTags
msg = Message(id=self.id, tags=tags)
data = _utils.proto_to_json(msg)
response = _utils.make_request(
"POST",
"{}://{}/api/v1/modeldb/experiment-run/addExperimentRunTags".format(
self._conn.scheme, self._conn.socket
),
self._conn,
json=data,
)
_utils.raise_for_http_error(response)
self._clear_cache()
def get_tags(self):
"""
Gets all tags from this Experiment Run.
Returns
-------
list of str
All tags.
"""
Message = _CommonService.GetTags
msg = Message(id=self.id)
data = _utils.proto_to_json(msg)
response = _utils.make_request(
"GET",
"{}://{}/api/v1/modeldb/experiment-run/getExperimentRunTags".format(
self._conn.scheme, self._conn.socket
),
self._conn,
params=data,
)
_utils.raise_for_http_error(response)
response_msg = _utils.json_to_proto(
_utils.body_to_json(response), Message.Response
)
return response_msg.tags
def log_attribute(self, key, value, overwrite=False):
"""
Logs an attribute to this Experiment Run.
Parameters
----------
key : str
Name of the attribute.
value : one of {None, bool, float, int, str, list, dict}
Value of the attribute.
overwrite : bool, default False
Whether to allow overwriting an existing attribute with key `key`.
"""
_utils.validate_flat_key(key)
if isinstance(value, data_types._VertaDataType):
value = value._as_dict()
if overwrite:
self._delete_attributes([key])
attribute = _CommonCommonService.KeyValue(
key=key, value=_utils.python_to_val_proto(value, allow_collection=True)
)
msg = _ExperimentRunService.LogAttribute(id=self.id, attribute=attribute)
data = _utils.proto_to_json(msg)
response = _utils.make_request(
"POST",
"{}://{}/api/v1/modeldb/experiment-run/logAttribute".format(
self._conn.scheme, self._conn.socket
),
self._conn,
json=data,
)
if not response.ok:
if response.status_code == 409:
raise ValueError(
"attribute with key {} already exists;"
" consider using observations instead, or setting overwrite=True.".format(
key
)
)
else:
_utils.raise_for_http_error(response)
self._clear_cache()
def log_attributes(self, attributes, overwrite=False):
"""
Logs potentially multiple attributes to this Experiment Run.
Parameters
----------
attributes : dict of str to {None, bool, float, int, str, list, dict}
Attributes.
overwrite : bool, default False
Whether to allow overwriting existing attributes.
"""
# validate all keys first
for key in six.viewkeys(attributes):
_utils.validate_flat_key(key)
for key, value in six.viewitems(attributes):
if isinstance(value, data_types._VertaDataType):
attributes[key] = value._as_dict()
if overwrite:
keys = list(six.viewkeys(attributes))
self._delete_attributes(keys)
# build KeyValues
attribute_keyvals = []
for key, value in six.viewitems(attributes):
attribute_keyvals.append(
_CommonCommonService.KeyValue(
key=key,
value=_utils.python_to_val_proto(value, allow_collection=True),
)
)
msg = _ExperimentRunService.LogAttributes(
id=self.id, attributes=attribute_keyvals
)
data = _utils.proto_to_json(msg)
response = _utils.make_request(
"POST",
"{}://{}/api/v1/modeldb/experiment-run/logAttributes".format(
self._conn.scheme, self._conn.socket
),
self._conn,
json=data,
)
if not response.ok:
if response.status_code == 409:
raise ValueError(
"some attribute with some input key already exists;"
" consider using observations instead, or setting overwrite=True."
)
else:
_utils.raise_for_http_error(response)
self._clear_cache()
def get_attribute(self, key):
"""
Gets the attribute with name `key` from this Experiment Run.
Parameters
----------
key : str
Name of the attribute.
Returns
-------
one of {None, bool, float, int, str}
Value of the attribute.
"""
_utils.validate_flat_key(key)
Message = _CommonService.GetAttributes
msg = Message(id=self.id, attribute_keys=[key])
data = _utils.proto_to_json(msg)
response = _utils.make_request(
"GET",
"{}://{}/api/v1/modeldb/experiment-run/getAttributes".format(
self._conn.scheme, self._conn.socket
),
self._conn,
params=data,
)
_utils.raise_for_http_error(response)
response_msg = _utils.json_to_proto(
_utils.body_to_json(response), Message.Response
)
attributes = _utils.unravel_key_values(response_msg.attributes)
try:
attribute = attributes[key]
try:
return data_types._VertaDataType._from_dict(attribute)
except (KeyError, TypeError, ValueError):
return attribute
except KeyError:
six.raise_from(KeyError("no attribute found with key {}".format(key)), None)
def get_attributes(self):
"""
Gets all attributes from this Experiment Run.
Returns
-------
dict of str to {None, bool, float, int, str}
Names and values of all attributes.
"""
Message = _CommonService.GetAttributes
msg = Message(id=self.id, get_all=True)
data = _utils.proto_to_json(msg)
response = _utils.make_request(
"GET",
"{}://{}/api/v1/modeldb/experiment-run/getAttributes".format(
self._conn.scheme, self._conn.socket
),
self._conn,
params=data,
)
_utils.raise_for_http_error(response)
response_msg = _utils.json_to_proto(
_utils.body_to_json(response), Message.Response
)
attributes = _utils.unravel_key_values(response_msg.attributes)
for key, attribute in attributes.items():
try:
attributes[key] = data_types._VertaDataType._from_dict(attribute)
except (KeyError, TypeError, ValueError):
pass
return attributes
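# --- Illustrative usage of the attribute APIs above (not part of the class). ---
# Assumes `run` is an ExperimentRun obtained from a configured Client with a
# live backend, so this sketch is not executable on its own.
#
#     run.log_attribute("optimizer", "adam")
#     run.log_attributes({"layers": 12, "residual": True}, overwrite=True)
#     run.get_attribute("optimizer")  # -> "adam"
#     run.get_attributes()            # -> dict of all logged attributes
#
# Values logged as data_types._VertaDataType instances are stored as dicts
# and transparently reconstructed by get_attribute()/get_attributes().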
def _delete_attributes(self, keys):
response = _utils.make_request(
"DELETE",
"{}://{}/api/v1/modeldb/experiment-run/deleteExperimentRunAttributes".format(
self._conn.scheme, self._conn.socket
),
self._conn,
json={"id": self.id, "attribute_keys": keys},
)
_utils.raise_for_http_error(response)
def _delete_metrics(self, keys):
response = _utils.make_request(
"DELETE",
"{}://{}/api/v1/modeldb/experiment-run/deleteMetrics".format(
self._conn.scheme, self._conn.socket
),
self._conn,
json={"id": self.id, "metric_keys": keys},
)
_utils.raise_for_http_error(response)
def _delete_observations(self, keys):
response = _utils.make_request(
"DELETE",
"{}://{}/api/v1/modeldb/experiment-run/deleteObservations".format(
self._conn.scheme, self._conn.socket
),
self._conn,
json={"id": self.id, "observation_keys": keys},
)
_utils.raise_for_http_error(response)
def _delete_hyperparameters(self, keys):
response = _utils.make_request(
"DELETE",
"{}://{}/api/v1/modeldb/experiment-run/deleteHyperparameters".format(
self._conn.scheme, self._conn.socket
),
self._conn,
json={"id": self.id, "hyperparameter_keys": keys},
)
_utils.raise_for_http_error(response)
def log_metric(self, key, value, overwrite=False):
"""
Logs a metric to this Experiment Run.
If the metadatum of interest might recur, :meth:`.log_observation` should be used instead.
Parameters
----------
key : str
Name of the metric.
value : one of {None, bool, float, int, str}
Value of the metric.
overwrite : bool, default False
Whether to allow overwriting an existing metric with key `key`.
"""
_utils.validate_flat_key(key)
metric = _CommonCommonService.KeyValue(
key=key, value=_utils.python_to_val_proto(value)
)
msg = _ExperimentRunService.LogMetric(id=self.id, metric=metric)
data = _utils.proto_to_json(msg)
if overwrite:
self._delete_metrics([key])
response = _utils.make_request(
"POST",
"{}://{}/api/v1/modeldb/experiment-run/logMetric".format(
self._conn.scheme, self._conn.socket
),
self._conn,
json=data,
)
if not response.ok:
if response.status_code == 409:
raise ValueError(
"metric with key {} already exists;"
" consider using observations instead".format(key)
)
else:
_utils.raise_for_http_error(response)
self._clear_cache()
def log_metrics(self, metrics, overwrite=False):
"""
Logs potentially multiple metrics to this Experiment Run.
Parameters
----------
metrics : dict of str to {None, bool, float, int, str}
Metrics.
overwrite : bool, default False
Whether to allow overwriting an existing metric with key `key`.
"""
# validate all keys first
for key in six.viewkeys(metrics):
_utils.validate_flat_key(key)
# build KeyValues
metric_keyvals = []
keys = []
for key, value in six.viewitems(metrics):
metric_keyvals.append(
_CommonCommonService.KeyValue(
key=key, value=_utils.python_to_val_proto(value)
)
)
keys.append(key)
msg = _ExperimentRunService.LogMetrics(id=self.id, metrics=metric_keyvals)
data = _utils.proto_to_json(msg)
if overwrite:
self._delete_metrics(keys)
response = _utils.make_request(
"POST",
"{}://{}/api/v1/modeldb/experiment-run/logMetrics".format(
self._conn.scheme, self._conn.socket
),
self._conn,
json=data,
)
if not response.ok:
if response.status_code == 409:
raise ValueError(
"some metric with some input key already exists;"
" consider using observations instead"
)
else:
_utils.raise_for_http_error(response)
self._clear_cache()
def get_metric(self, key):
"""
Gets the metric with name `key` from this Experiment Run.
Parameters
----------
key : str
Name of the metric.
Returns
-------
one of {None, bool, float, int, str}
Value of the metric.
"""
self._refresh_cache()
if key in self._metrics:
return self._metrics[key]
else:
six.raise_from(KeyError("no metric found with key {}".format(key)), None)
def get_metrics(self):
"""
Gets all metrics from this Experiment Run.
Returns
-------
dict of str to {None, bool, float, int, str}
Names and values of all metrics.
"""
self._refresh_cache()
return self._metrics
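# --- Illustrative usage (not part of the class): metrics vs. observations. ---
# A metric is a single summary value per key; re-logging the same key raises
# ValueError (HTTP 409) unless overwrite=True. For values that recur over
# time (e.g. per-epoch loss), log_observation() should be used instead.
# Assumes `run` is an ExperimentRun from a configured Client.
#
#     run.log_metric("val_acc", 0.91)
#     run.log_metric("val_acc", 0.93, overwrite=True)  # replaces prior value
#     run.get_metric("val_acc")  # -> 0.93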
def log_hyperparameter(self, key, value, overwrite=False):
"""
Logs a hyperparameter to this Experiment Run.
Parameters
----------
key : str
Name of the hyperparameter.
value : one of {None, bool, float, int, str}
Value of the hyperparameter.
overwrite : bool, default False
Whether to allow overwriting an existing hyperparameter with key `key`.
"""
_utils.validate_flat_key(key)
hyperparameter = _CommonCommonService.KeyValue(
key=key, value=_utils.python_to_val_proto(value)
)
msg = _ExperimentRunService.LogHyperparameter(
id=self.id, hyperparameter=hyperparameter
)
data = _utils.proto_to_json(msg)
if overwrite:
self._delete_hyperparameters([key])
response = _utils.make_request(
"POST",
"{}://{}/api/v1/modeldb/experiment-run/logHyperparameter".format(
self._conn.scheme, self._conn.socket
),
self._conn,
json=data,
)
if not response.ok:
if response.status_code == 409:
raise ValueError(
"hyperparameter with key {} already exists;"
" consider using observations instead".format(key)
)
else:
_utils.raise_for_http_error(response)
self._clear_cache()
def log_hyperparameters(self, hyperparams, overwrite=False):
"""
Logs potentially multiple hyperparameters to this Experiment Run.
Parameters
----------
hyperparams : dict of str to {None, bool, float, int, str}
Hyperparameters.
overwrite : bool, default False
Whether to allow overwriting an existing hyperparameter with key `key`.
"""
# validate all keys first
for key in six.viewkeys(hyperparams):
_utils.validate_flat_key(key)
# build KeyValues
hyperparameter_keyvals = []
keys = []
for key, value in six.viewitems(hyperparams):
hyperparameter_keyvals.append(
_CommonCommonService.KeyValue(
key=key, value=_utils.python_to_val_proto(value)
)
)
keys.append(key)
msg = _ExperimentRunService.LogHyperparameters(
id=self.id, hyperparameters=hyperparameter_keyvals
)
data = _utils.proto_to_json(msg)
if overwrite:
self._delete_hyperparameters(keys)
response = _utils.make_request(
"POST",
"{}://{}/api/v1/modeldb/experiment-run/logHyperparameters".format(
self._conn.scheme, self._conn.socket
),
self._conn,
json=data,
)
if not response.ok:
if response.status_code == 409:
raise ValueError(
"some hyperparameter with some input key already exists;"
" consider using observations instead"
)
else:
_utils.raise_for_http_error(response)
self._clear_cache()
def get_hyperparameter(self, key):
"""
Gets the hyperparameter with name `key` from this Experiment Run.
Parameters
----------
key : str
Name of the hyperparameter.
Returns
-------
one of {None, bool, float, int, str}
Value of the hyperparameter.
"""
self._refresh_cache()
if key in self._hyperparameters:
return self._hyperparameters[key]
else:
six.raise_from(
KeyError("no hyperparameter found with key {}".format(key)), None
)
def get_hyperparameters(self):
"""
Gets all hyperparameters from this Experiment Run.
Returns
-------
dict of str to {None, bool, float, int, str}
Names and values of all hyperparameters.
"""
self._refresh_cache()
return self._hyperparameters
def log_dataset_version(self, key, dataset_version, overwrite=False):
"""
Logs a Verta DatasetVersion to this ExperimentRun with the given key.
Parameters
----------
key : str
Name of the dataset version.
dataset_version : :class:`~verta.dataset.entities.DatasetVersion`
Dataset version.
overwrite : bool, default False
Whether to allow overwriting a dataset version.
"""
if not isinstance(dataset_version, _dataset.DatasetVersion):
raise TypeError("`dataset_version` must be of type DatasetVersion")
# TODO: hack because path_only artifact needs a placeholder path
dataset_path = "See attached dataset version"
# log key-path to ModelDB
Message = _ExperimentRunService.LogDataset
artifact_msg = _CommonCommonService.Artifact(
key=key,
path=dataset_path,
path_only=True,
artifact_type=_CommonCommonService.ArtifactTypeEnum.DATA,
linked_artifact_id=dataset_version.id,
)
msg = Message(id=self.id, dataset=artifact_msg, overwrite=overwrite)
data = _utils.proto_to_json(msg)
response = _utils.make_request(
"POST",
"{}://{}/api/v1/modeldb/experiment-run/logDataset".format(
self._conn.scheme, self._conn.socket
),
self._conn,
json=data,
)
if not response.ok:
if response.status_code == 409:
raise ValueError(
"dataset with key {} already exists;"
" consider setting overwrite=True".format(key)
)
else:
_utils.raise_for_http_error(response)
self._clear_cache()
def get_dataset(self, key):
"""
Gets the dataset artifact with name `key` from this Experiment Run.
If the dataset was originally logged as just a filesystem path, that path will be returned.
Otherwise, the dataset object itself will be returned. If the object is unable to be
deserialized, the raw bytes are returned instead.
Parameters
----------
key : str
Name of the dataset.
Returns
-------
str or object or file-like
DatasetVersion if logged using :meth:`log_dataset_version()`.
Filesystem path of the dataset, the dataset object, or a bytestream representing the
dataset.
"""
dataset, path_only, linked_id = self._get_dataset(key)
if path_only:
if linked_id:
return _dataset_version.DatasetVersion(
self._conn,
self._conf,
_dataset_version.DatasetVersion._get_proto_by_id(
self._conn, linked_id
),
)
else:
return dataset
else:
# TODO: may need to be updated for raw
try:
return pickle.loads(dataset)
except pickle.UnpicklingError:
return six.BytesIO(dataset)
def get_dataset_version(self, key):
"""
Gets the DatasetVersion with name `key` from this Experiment Run.
Parameters
----------
key : str
Name of the dataset version.
Returns
-------
`DatasetVersion <dataset.html>`_
DatasetVersion associated with the given key.
"""
return self.get_dataset(key)
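# --- Illustrative usage (not part of the class): dataset versions. ---
# log_dataset_version() stores only a path-only placeholder artifact whose
# linked_artifact_id points at the DatasetVersion; get_dataset() and
# get_dataset_version() then re-fetch the DatasetVersion by that ID.
# Assumes `client` and `run` come from a configured Verta Client.
#
#     dataset = client.set_dataset(name="train-data")
#     version = dataset.create_version(...)  # dataset content elided
#     run.log_dataset_version("train", version)
#     run.get_dataset_version("train")  # -> the logged DatasetVersion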
def log_tf_saved_model(self, export_dir):
tempf = _artifact_utils.zip_dir(export_dir)
# TODO: change _log_artifact() to not read file into memory
self._log_artifact(
"tf_saved_model", tempf, _CommonCommonService.ArtifactTypeEnum.BLOB, "zip"
)
def log_model(
self,
model,
custom_modules=None,
model_api=None,
artifacts=None,
overwrite=False,
):
if model_api and not isinstance(model_api, utils.ModelAPI):
raise ValueError(
"`model_api` must be `verta.utils.ModelAPI`, not {}".format(
type(model_api)
)
)
if artifacts is not None and not (
isinstance(artifacts, list)
and all(
isinstance(artifact_key, six.string_types) for artifact_key in artifacts
)
):
raise TypeError(
"`artifacts` must be list of str, not {}".format(type(artifacts))
)
# validate that `artifacts` are actually logged
if artifacts:
self._refresh_cache()
run_msg = self._msg
existing_artifact_keys = {artifact.key for artifact in run_msg.artifacts}
unlogged_artifact_keys = set(artifacts) - existing_artifact_keys
if unlogged_artifact_keys:
raise ValueError(
"`artifacts` contains keys that have not been logged: {}".format(
sorted(unlogged_artifact_keys)
)
)
# serialize model
_utils.THREAD_LOCALS.active_experiment_run = self
try:
serialized_model, method, model_type = _artifact_utils.serialize_model(
model
)
finally:
_utils.THREAD_LOCALS.active_experiment_run = None
try:
extension = _artifact_utils.get_file_ext(serialized_model)
except (TypeError, ValueError):
extension = _artifact_utils.ext_from_method(method)
if self._conf.debug:
print("[DEBUG] model is type {}".format(model_type))
if artifacts and model_type != "class":
raise ValueError("`artifacts` can only be provided if `model` is a class")
# associate artifact dependencies
if artifacts:
self.log_attribute(_MODEL_ARTIFACTS_ATTR_KEY, artifacts, overwrite)
# create and upload model API
if model_type or model_api: # only if provided or model is deployable
if model_api is None:
model_api = utils.ModelAPI()
if self._conf.debug:
print("[DEBUG] model API is:")
pprint.pprint(model_api.to_dict())
self._log_artifact(
_artifact_utils.MODEL_API_KEY,
model_api,
_CommonCommonService.ArtifactTypeEnum.BLOB,
"json",
overwrite=overwrite,
)
# create and upload custom modules
if model_type or custom_modules: # only if provided or model is deployable
custom_modules_artifact = self._custom_modules_as_artifact(custom_modules)
self._log_artifact(
_artifact_utils.CUSTOM_MODULES_KEY,
custom_modules_artifact,
_CommonCommonService.ArtifactTypeEnum.BLOB,
"zip",
overwrite=overwrite,
)
# upload model
self._log_artifact(
self._MODEL_KEY,
serialized_model,
_CommonCommonService.ArtifactTypeEnum.MODEL,
extension,
method,
model_type,
overwrite=overwrite,
)
def get_model(self):
"""
Gets the model artifact for Verta model deployment from this Experiment Run.
Returns
-------
object
Model for deployment.
"""
model, _ = self._get_artifact(self._MODEL_KEY)
return _artifact_utils.deserialize_model(model, error_ok=True)
def log_image(self, key, image, overwrite=False):
"""
Logs an image artifact to this Experiment Run.
Parameters
----------
key : str
Name of the image.
image : one of {str, file-like, pyplot, matplotlib Figure, PIL Image, object}
Image or some representation thereof.
- If str, then it will be interpreted as a filesystem path, its contents read as bytes,
and uploaded as an artifact.
- If file-like, then the contents will be read as bytes and uploaded as an artifact.
- If matplotlib pyplot, then the image will be serialized and uploaded as an artifact.
- If matplotlib Figure, then the image will be serialized and uploaded as an artifact.
- If PIL Image, then the image will be serialized and uploaded as an artifact.
- Otherwise, the object will be serialized and uploaded as an artifact.
overwrite : bool, default False
Whether to allow overwriting an existing image with key `key`.
"""
_artifact_utils.validate_key(key)
_utils.validate_flat_key(key)
# convert pyplot, Figure or Image to bytestream
bytestream, extension = six.BytesIO(), "png"
try: # handle matplotlib
image.savefig(bytestream, format=extension)
except AttributeError:
try: # handle PIL Image
colors = image.getcolors()
except AttributeError:
try:
extension = _artifact_utils.get_file_ext(image)
except (TypeError, ValueError):
extension = None
else:
if len(colors) == 1 and all(val == 255 for val in colors[0][1]):
warnings.warn("the image being logged is blank")
image.save(bytestream, extension)
bytestream.seek(0)
if bytestream.read(1):
bytestream.seek(0)
image = bytestream
self._log_artifact(
key,
image,
_CommonCommonService.ArtifactTypeEnum.IMAGE,
extension,
overwrite=overwrite,
)
def get_image(self, key):
"""
Gets the image artifact with name `key` from this Experiment Run.
If the image was originally logged as just a filesystem path, that path will be returned.
Otherwise, the image object will be returned. If the object is unable to be deserialized,
the raw bytes are returned instead.
Parameters
----------
key : str
Name of the image.
Returns
-------
str or PIL Image or file-like
Filesystem path of the image, the image object, or a bytestream representing the image.
"""
image, path_only = self._get_artifact(key)
if path_only:
return image
else:
Image = importer.maybe_dependency("PIL.Image")
if Image is None: # Pillow not installed
return six.BytesIO(image)
try:
return Image.open(six.BytesIO(image))
except IOError: # can't be handled by Pillow
return six.BytesIO(image)
def log_artifact(self, key, artifact, overwrite=False):
"""
Logs an artifact to this Experiment Run.
.. note::
The following artifact keys are reserved for internal use within the
Verta system:
- ``"custom_modules"``
- ``"model"``
- ``"model.pkl"``
- ``"model_api.json"``
- ``"requirements.txt"``
- ``"train_data"``
- ``"tf_saved_model"``
- ``"setup_script"``
Parameters
----------
key : str
Name of the artifact.
artifact : str or file-like or object
Artifact or some representation thereof.
- If str, then it will be interpreted as a filesystem path, its contents read as bytes,
and uploaded as an artifact. If it is a directory path, its contents will be zipped.
- If file-like, then the contents will be read as bytes and uploaded as an artifact.
- Otherwise, the object will be serialized and uploaded as an artifact.
overwrite : bool, default False
Whether to allow overwriting an existing artifact with key `key`.
"""
_artifact_utils.validate_key(key)
_utils.validate_flat_key(key)
# zip if `artifact` is directory path
if isinstance(artifact, six.string_types) and os.path.isdir(artifact):
artifact = _artifact_utils.zip_dir(artifact)
try:
extension = _artifact_utils.get_file_ext(artifact)
except (TypeError, ValueError):
extension = None
self._log_artifact(
key,
artifact,
_CommonCommonService.ArtifactTypeEnum.BLOB,
extension,
overwrite=overwrite,
)
def get_artifact(self, key):
"""
Gets the artifact with name `key` from this Experiment Run.
If the artifact was originally logged as just a filesystem path, that path will be returned.
Otherwise, the artifact object will be returned. If the object is unable to be deserialized,
the raw bytes are returned instead.
Parameters
----------
key : str
Name of the artifact.
Returns
-------
str or bytes
Filesystem path of the artifact, the artifact object, or a bytestream representing the
artifact.
"""
artifact, path_only = self._get_artifact(key)
if path_only:
if not os.path.exists(artifact):
# path-only artifact; `artifact` is its path
return artifact
else:
# clientside storage; `artifact` is its path
# NOTE: can cause problem if accidentally picks up unrelated file w/ same name
artifact_stream = open(artifact, "rb")
else:
# uploaded artifact; `artifact` is its bytes
artifact_stream = six.BytesIO(artifact)
torch = importer.maybe_dependency("torch")
if torch is not None:
try:
obj = torch.load(artifact_stream)
except Exception: # not something torch can deserialize
artifact_stream.seek(0)
else:
artifact_stream.close()
return obj
try:
obj = pickle.load(artifact_stream)
except Exception: # not something pickle can deserialize
artifact_stream.seek(0)
else:
artifact_stream.close()
return obj
return artifact_stream
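# --- Note on the deserialization fallback in get_artifact() above. ---
# The retrieved bytes are tried against torch.load() first (if torch is
# importable), then pickle.load(); each failed attempt rewinds the stream
# with seek(0). If neither succeeds, the raw file-like stream is returned
# so the caller can deserialize it manually, e.g. (hypothetical key):
#
#     blob = run.get_artifact("vocab")
#     if hasattr(blob, "read"):           # fell through to raw bytes
#         data = json.loads(blob.read())  # caller-side deserialization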
def get_artifact_keys(self):
"""
Gets the artifact keys of this Experiment Run.
Returns
-------
list of str
List of artifact keys of this Experiment Run.
"""
self._refresh_cache()
return list(map(lambda artifact: artifact.key, self._msg.artifacts))
def download_artifact(self, key, download_to_path):
download_to_path = os.path.abspath(download_to_path)
artifact = self._get_artifact_msg(key)
# create parent dirs
pathlib.Path(download_to_path).parent.mkdir(parents=True, exist_ok=True)
# TODO: clean up empty parent dirs if something later fails
# get a stream of the file bytes, without loading into memory, and write to file
# TODO: consolidate this with _get_artifact() and get_artifact()
print("downloading {} from ModelDB".format(key))
if artifact.path_only:
if os.path.exists(artifact.path):
# copy from clientside storage
shutil.copyfile(artifact.path, download_to_path)
else:
raise ValueError(
"artifact {} appears to have been logged as path-only,"
" and cannot be downloaded".format(key)
)
print("download complete; file written to {}".format(download_to_path))
else:
# download artifact from artifact store
url = self._get_url_for_artifact(key, "GET").url
with _utils.make_request("GET", url, self._conn, stream=True) as response:
_utils.raise_for_http_error(response)
if (
artifact.filename_extension == _artifact_utils.ZIP_EXTENSION
): # verta-created ZIP
downloader = _request_utils.download_zipped_dir
else:
downloader = _request_utils.download_file
downloader(response, download_to_path, overwrite_ok=True)
return download_to_path
def download_model(self, download_to_path):
return self.download_artifact(self._MODEL_KEY, download_to_path)
def log_observation(
self, key, value, timestamp=None, epoch_num=None, overwrite=False
):
"""
Logs an observation to this Experiment Run.
Parameters
----------
key : str
Name of the observation.
value : one of {None, bool, float, int, str}
Value of the observation.
timestamp : str or float or int, optional
String representation of a datetime or numerical Unix timestamp. If not provided, the
current time will be used.
epoch_num : non-negative int, optional
Epoch number associated with this observation. If not provided, it will automatically
be incremented from prior observations for the same `key`.
overwrite : bool, default False
Whether to allow overwriting an existing observation with key `key`.
Warnings
--------
If `timestamp` is provided by the user, it must contain timezone information. Otherwise,
it will be interpreted as UTC.
"""
_utils.validate_flat_key(key)
if timestamp is None:
timestamp = _utils.now()
else:
timestamp = _utils.ensure_timestamp(timestamp)
if epoch_num is not None:
if not isinstance(epoch_num, six.integer_types) and not (
isinstance(epoch_num, float) and epoch_num.is_integer()
):
raise TypeError(
"`epoch_num` must be int, not {}".format(type(epoch_num))
)
if epoch_num < 0:
raise ValueError("`epoch_num` must be non-negative")
attribute = _CommonCommonService.KeyValue(
key=key, value=_utils.python_to_val_proto(value)
)
observation = _ExperimentRunService.Observation(
attribute=attribute, timestamp=timestamp
) # TODO: support Artifacts
if epoch_num is not None:
observation.epoch_number.number_value = (
epoch_num # pylint: disable=no-member
)
msg = _ExperimentRunService.LogObservation(id=self.id, observation=observation)
data = _utils.proto_to_json(msg)
if overwrite:
self._delete_observations([key])
response = _utils.make_request(
"POST",
"{}://{}/api/v1/modeldb/experiment-run/logObservation".format(
self._conn.scheme, self._conn.socket
),
self._conn,
json=data,
)
_utils.raise_for_http_error(response)
self._clear_cache()
def get_observation(self, key):
"""
Gets the observation series with name `key` from this Experiment Run.
Parameters
----------
key : str
Name of observation series.
Returns
-------
list of {None, bool, float, int, str}
Values of observation series.
"""
_utils.validate_flat_key(key)
Message = _ExperimentRunService.GetObservations
msg = Message(id=self.id, observation_key=key)
data = _utils.proto_to_json(msg)
response = _utils.make_request(
"GET",
"{}://{}/api/v1/modeldb/experiment-run/getObservations".format(
self._conn.scheme, self._conn.socket
),
self._conn,
params=data,
)
_utils.raise_for_http_error(response)
response_msg = _utils.json_to_proto(
_utils.body_to_json(response), Message.Response
)
if len(response_msg.observations) == 0:
raise KeyError("no observation found with key {}".format(key))
else:
return [
_utils.unravel_observation(observation)[1:] # drop key from tuple
for observation in response_msg.observations
] # TODO: support Artifacts
def get_observations(self):
"""
Gets all observations from this Experiment Run.
Returns
-------
dict of str to list of {None, bool, float, int, str}
Names and values of all observation series.
"""
self._refresh_cache()
return _utils.unravel_observations(self._msg.observations)
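# --- Illustrative usage (not part of the class): observation series. ---
# Unlike metrics, the same observation key may be logged repeatedly; each
# call appends a timestamped entry, optionally tagged with an epoch number.
# Assumes `run` is an ExperimentRun from a configured Client.
#
#     for epoch, loss in enumerate([0.9, 0.5, 0.3]):
#         run.log_observation("train_loss", loss, epoch_num=epoch)
#     run.get_observation("train_loss")  # series of (value, timestamp) entries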
def log_environment(self, env, overwrite=False):
if not isinstance(env, _Environment):
raise TypeError(
"`env` must be of type Environment, not {}".format(type(env))
)
if self.has_environment and not overwrite:
raise ValueError(
"environment already exists; consider setting overwrite=True"
)
msg = _ExperimentRunService.LogEnvironment(
id=self.id, environment=env._as_env_proto()
)
response = self._conn.make_proto_request(
"POST",
"/api/v1/modeldb/experiment-run/logEnvironment",
body=msg,
)
if response.ok:
# self._refresh_cache()
self._fetch_with_no_cache()
else:
_utils.raise_for_http_error(response)
def log_modules(self, paths, search_path=None):
"""
Logs local files that are dependencies for a deployed model to this Experiment Run.
.. deprecated:: 0.13.13
The behavior of this function has been merged into :meth:`log_model` as its
``custom_modules`` parameter; consider using that instead.
.. deprecated:: 0.12.4
The `search_path` parameter is no longer necessary and will be removed in an upcoming version; consider
removing it from the function call.
Parameters
----------
paths : str or list of str
Paths to local Python modules and other files that the deployed model depends on. If a
directory is provided, all files within will be included.
"""
warnings.warn(
"The behavior of this function has been merged into log_model() as its"
" `custom_modules` parameter; consider using that instead",
category=FutureWarning,
)
if search_path is not None:
warnings.warn(
"`search_path` is no longer used and will be removed in a later version;"
" consider removing it from the function call",
category=FutureWarning,
)
custom_modules_artifact = self._custom_modules_as_artifact(paths)
self._log_artifact(
_artifact_utils.CUSTOM_MODULES_KEY,
custom_modules_artifact,
_CommonCommonService.ArtifactTypeEnum.BLOB,
"zip",
)
def log_setup_script(self, script, overwrite=False):
"""
Associate a model deployment setup script with this Experiment Run.
.. versionadded:: 0.13.8
Parameters
----------
script : str
String composed of valid Python code for executing setup steps at the beginning of model
deployment. An on-disk file can be passed in using ``open("path/to/file.py", 'r').read()``.
overwrite : bool, default False
Whether to allow overwriting an existing setup script.
Raises
------
SyntaxError
If `script` contains invalid Python.
"""
# validate `script`'s syntax
try:
ast.parse(script)
except SyntaxError as e:
# clarify that the syntax error comes from `script`, and propagate details
reason = e.args[0]
line_no = e.args[1][1]
line = script.splitlines()[line_no - 1]
six.raise_from(
SyntaxError(
"{} in provided script on line {}:\n{}".format(
reason, line_no, line
)
),
e,
)
# convert into bytes for upload
script = six.ensure_binary(script)
# convert to file-like for `_log_artifact()`
script = six.BytesIO(script)
self._log_artifact(
"setup_script",
script,
_CommonCommonService.ArtifactTypeEnum.BLOB,
"py",
overwrite=overwrite,
)
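# --- Note on log_setup_script() validation above. ---
# The script is parsed with ast.parse() before upload, so syntax errors are
# surfaced client-side with the offending line quoted. For example, the
# validation step alone behaves like this (standalone, runnable):
#
#     import ast
#     try:
#         ast.parse("def broken(:\n    pass")
#     except SyntaxError as e:
#         print(e.args[0])  # reason string reported by the parser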
def download_docker_context(self, download_to_path, self_contained=False):
"""
Downloads this Experiment Run's Docker context ``tgz``.
Parameters
----------
download_to_path : str
Path to download Docker context to.
self_contained : bool, default False
Whether the downloaded Docker context should be self-contained.
Returns
-------
downloaded_to_path : str
Absolute path where Docker context was downloaded to. Matches `download_to_path`.
"""
self._refresh_cache()
endpoint = "{}://{}/api/v1/deployment/builds/dockercontext".format(
self._conn.scheme,
self._conn.socket,
)
body = {
"run_id": self.id,
"self_contained": self_contained,
}
with _utils.make_request(
"POST", endpoint, self._conn, json=body, stream=True
) as response:
try:
_utils.raise_for_http_error(response)
except requests.HTTPError as e:
# propagate error caused by missing artifact
error_text = e.response.text.strip()
if error_text.startswith("missing artifact"):
new_e = RuntimeError(
"unable to obtain Docker context due to " + error_text
)
six.raise_from(new_e, None)
else:
raise e
downloaded_to_path = _request_utils.download_file(
response, download_to_path, overwrite_ok=True
)
return os.path.abspath(downloaded_to_path)
def rename(self, name: str) -> None:
"""Rename this experiment run
Parameters
----------
name : str
New name for this experiment run.
"""
msg = _ExperimentRunService.UpdateExperimentRunName(id=self.id, name=name)
endpoint = "/api/v1/modeldb/experiment-run/updateExperimentRunName"
response = self._conn.make_proto_request("POST", endpoint, body=msg)
self._conn.must_response(response)
self._clear_cache()
def _get_url_for_artifact(self, key, method, artifact_type=0, part_num=0):
"""
Obtains a URL to use for accessing stored artifacts.
Parameters
----------
key : str
Name of the artifact.
method : {'GET', 'PUT'}
HTTP method to request for the generated URL.
artifact_type : int, optional
Variant of `_CommonCommonService.ArtifactTypeEnum`. This informs the backend what slot to check
for the artifact, if necessary.
part_num : int, optional
If using Multipart Upload, number of part to be uploaded.
Returns
-------
response_msg : `_CommonService.GetUrlForArtifact.Response`
Backend response.
"""
if method.upper() not in ("GET", "PUT"):
raise ValueError("`method` must be one of {'GET', 'PUT'}")
Message = _CommonService.GetUrlForArtifact
msg = Message(
id=self.id,
key=key,
method=method.upper(),
artifact_type=artifact_type,
part_number=part_num,
)
data = _utils.proto_to_json(msg)
response = _utils.make_request(
"POST", self._request_url.format("getUrlForArtifact"), self._conn, json=data
)
_utils.raise_for_http_error(response)
response_msg = _utils.json_to_proto(
_utils.body_to_json(response), Message.Response
)
url = response_msg.url
# accommodate port-forwarded NFS store
if "https://localhost" in url[:20]:
url = "http" + url[5:]
if "localhost%3a" in url[:20]:
url = url.replace("localhost%3a", "localhost:")
if "localhost%3A" in url[:20]:
url = url.replace("localhost%3A", "localhost:")
response_msg.url = url
return response_msg
def delete(self):
"""
Deletes this experiment run.
"""
request_url = (
"{}://{}/api/v1/modeldb/experiment-run/deleteExperimentRun".format(
self._conn.scheme, self._conn.socket
)
)
response = _utils.make_request(
"DELETE",
request_url,
self._conn,
json={"id": self.id},
)
_utils.raise_for_http_error(response)
```
|
```csharp
// See license.txt file in the project root for full license information.
using System.Reflection;
namespace Scriban.Runtime
{
/// <summary>
/// Allows renaming a member.
/// </summary>
/// <param name="member">A member info</param>
/// <returns>The new name of the member</returns>
#if SCRIBAN_PUBLIC
public
#else
internal
#endif
delegate string MemberRenamerDelegate(MemberInfo member);
}
```
|
```typescript
/*
* @license Apache-2.0
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
*/
import umuldw = require( './index' );
// TESTS //
// The function returns an array-like object of numbers...
{
umuldw( 0xAAAAAAAA, 0x55555555 ); // $ExpectType number[]
}
// The compiler throws an error if the function is provided non-numbers for the last two arguments...
{
umuldw( 0xAAAAAAAA, true ); // $ExpectError
umuldw( 0xAAAAAAAA, false ); // $ExpectError
umuldw( 0xAAAAAAAA, [] ); // $ExpectError
umuldw( 0xAAAAAAAA, {} ); // $ExpectError
umuldw( 0xAAAAAAAA, 'abc' ); // $ExpectError
umuldw( 0xAAAAAAAA, ( x: number ): number => x ); // $ExpectError
umuldw( true, 0x55555555 ); // $ExpectError
umuldw( false, 0x55555555 ); // $ExpectError
umuldw( [], 0x55555555 ); // $ExpectError
umuldw( {}, 0x55555555 ); // $ExpectError
umuldw( 'abc', 0x55555555 ); // $ExpectError
umuldw( ( x: number ): number => x, 0x55555555 ); // $ExpectError
}
// The compiler throws an error if the function is provided an insufficient number of arguments...
{
umuldw(); // $ExpectError
umuldw( 1 ); // $ExpectError
}
// Attached to the main export is an `assign` method which returns an array-like object containing numbers...
{
const out = [ 0, 0 ];
umuldw.assign( 1, 5, out, 1, 0 ); // $ExpectType Collection<number>
}
// The compiler throws an error if the `assign` method is provided a first argument which is not a number...
{
const out = [ 0, 0 ];
umuldw.assign( true, 4, out, 1, 0 ); // $ExpectError
umuldw.assign( false, 4, out, 1, 0 ); // $ExpectError
umuldw.assign( '5', 4, out, 1, 0 ); // $ExpectError
umuldw.assign( null, 4, out, 1, 0 ); // $ExpectError
umuldw.assign( [], 4, out, 1, 0 ); // $ExpectError
umuldw.assign( {}, 4, out, 1, 0 ); // $ExpectError
umuldw.assign( ( x: number ): number => x, 4, out, 1, 0 ); // $ExpectError
}
// The compiler throws an error if the `assign` method is provided a second argument which is not a number...
{
const out = [ 0, 0 ];
umuldw.assign( 4, true, out, 1, 0 ); // $ExpectError
umuldw.assign( 4, false, out, 1, 0 ); // $ExpectError
umuldw.assign( 4, '5', out, 1, 0 ); // $ExpectError
umuldw.assign( 4, null, out, 1, 0 ); // $ExpectError
umuldw.assign( 4, [], out, 1, 0 ); // $ExpectError
umuldw.assign( 4, {}, out, 1, 0 ); // $ExpectError
umuldw.assign( 4, ( x: number ): number => x, out, 1, 0 ); // $ExpectError
}
// The compiler throws an error if the `assign` method is provided a third argument which is not an array-like object...
{
umuldw.assign( 4, 1.0, 1, 1, 0 ); // $ExpectError
umuldw.assign( 4, 1.0, true, 1, 0 ); // $ExpectError
umuldw.assign( 4, 1.0, false, 1, 0 ); // $ExpectError
umuldw.assign( 4, 1.0, null, 1, 0 ); // $ExpectError
umuldw.assign( 4, 1.0, {}, 1, 0 ); // $ExpectError
}
// The compiler throws an error if the `assign` method is provided a fourth argument which is not a number...
{
const out = [ 0, 0 ];
umuldw.assign( 4, 1.0, out, '5', 0 ); // $ExpectError
umuldw.assign( 4, 1.0, out, true, 0 ); // $ExpectError
umuldw.assign( 4, 1.0, out, false, 0 ); // $ExpectError
umuldw.assign( 4, 1.0, out, null, 0 ); // $ExpectError
umuldw.assign( 4, 1.0, out, [], 0 ); // $ExpectError
umuldw.assign( 4, 1.0, out, {}, 0 ); // $ExpectError
umuldw.assign( 4, 1.0, out, ( x: number ): number => x, 0 ); // $ExpectError
}
// The compiler throws an error if the `assign` method is provided a fifth argument which is not a number...
{
const out = [ 0, 0 ];
umuldw.assign( 4, 1.0, out, 1, '5' ); // $ExpectError
umuldw.assign( 4, 1.0, out, 1, true ); // $ExpectError
umuldw.assign( 4, 1.0, out, 1, false ); // $ExpectError
umuldw.assign( 4, 1.0, out, 1, null ); // $ExpectError
umuldw.assign( 4, 1.0, out, 1, [] ); // $ExpectError
umuldw.assign( 4, 1.0, out, 1, {} ); // $ExpectError
umuldw.assign( 4, 1.0, out, 1, ( x: number ): number => x ); // $ExpectError
}
// The compiler throws an error if the `assign` method is provided an unsupported number of arguments...
{
const out = [ 0, 0 ];
umuldw.assign(); // $ExpectError
umuldw.assign( 1.0 ); // $ExpectError
umuldw.assign( 1.0, out ); // $ExpectError
umuldw.assign( 1.0, out, 1 ); // $ExpectError
umuldw.assign( 1.0, out, 1, 0 ); // $ExpectError
umuldw.assign( 1.0, out, 1, 0, 1, 1 ); // $ExpectError
}
```
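The declarations exercised above compute the unsigned 64-bit product of two 32-bit integers as `[ high, low ]` words. A minimal JavaScript sketch of one common technique, splitting each operand into 16-bit halves and propagating carries (an illustration, not the stdlib implementation):

```javascript
// Double-word (64-bit) product of two unsigned 32-bit integers, returned as
// [ high, low ] words, computed without BigInt: accumulate four 16x16-bit
// partial products and carry the overflow into the high word.
function umuldw(a, b) {
	a >>>= 0;
	b >>>= 0;
	const aHi = a >>> 16, aLo = a & 0xffff;
	const bHi = b >>> 16, bLo = b & 0xffff;
	// Partial products; each fits exactly in a double.
	const ll = aLo * bLo;
	const lh = aLo * bHi;
	const hl = aHi * bLo;
	const hh = aHi * bHi;
	// Low word: low halves of the cross terms plus the carry out of `ll`.
	const cross = (lh & 0xffff) + (hl & 0xffff) + (ll >>> 16);
	const lo = ((cross << 16) | (ll & 0xffff)) >>> 0;
	// High word: `hh` plus the high halves of the cross terms and the carry.
	const hi = (hh + (lh >>> 16) + (hl >>> 16) + (cross >>> 16)) >>> 0;
	return [ hi, lo ];
}
```

Checking the result against `BigInt` arithmetic confirms the carry handling.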
|
The Belgian railway line 50 is a railway line in Belgium connecting Brussels to Ghent. The first section between Ghent and Schellebelle was finished in 1837, offering a connection to Brussels through Dendermonde and Mechelen. The section between Schellebelle and Brussels was completed on 1 May 1856.
A section between Ghent and Ostend was completed in 1838 and is named line 50A. Between 1923 and 1933, the line 50A was extended to Brussels, which provides a fast connection between Brussels and Ghent. Where the original line 50 enters Brussels from the North, after passing through the city and station of Aalst, the later 50A enters Brussels from the South; this allows through trains from the West of the country to the East or vice versa without having to reverse. In 2016, line 50A was widened with two extra tracks, labelled 50C, between Denderleeuw and Brussels; the main reason was the increase in traffic from the GEN/RER commuter trains.
The following stations are located on the original line 50:
Brussels-North
Bockstael
Jette
Berchem-Sainte-Agathe
Groot-Bijgaarden
Dilbeek
Sint-Martens-Bodegem
Ternat
Essene Lombeek
Liedekerke
Denderleeuw
Erembodegem
Aalst
Lede
Serskamp
Schellebelle
Wetteren
Kwatrecht
Melle
Merelbeke
Gent-Sint-Pieters
A new station was built in Anderlecht, near the COOVI/CERIA campus; it entered service with the updated schedules on 14 December 2020, with an hourly service by the S3 line of the Brussels Regional Express Network.
References
50
50
Aalst, Belgium
City of Brussels
Transport in Ghent
|
```javascript
const os = require('os')
const path = require('path')
const { globalShortcut } = require('electron')
const config = require('./config')
const home = os.homedir()
exports.configDir = function (...args) {
return path.join(home, '.devdocs', ...args)
}
exports.toggleGlobalShortcut = function ({
name,
registered,
accelerator,
action
}) {
if (registered) {
globalShortcut.unregister(accelerator)
config.delete(`shortcut.${name}`)
} else {
const ret = globalShortcut.register(accelerator, action)
if (ret) {
config.set(`shortcut.${name}`, accelerator)
} else {
console.error(`Failed to register ${accelerator}`)
}
}
}
```
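The toggle flow above can be exercised without Electron by injecting stand-in registry and config objects. A minimal sketch (`makeToggler` is a hypothetical wrapper, not part of this module; it persists the accelerator only when registration succeeds):

```javascript
// Factory over injectable stand-ins for Electron's globalShortcut (registry)
// and the persisted config store (a plain object here).
function makeToggler(registry, config) {
	return function toggle({ name, registered, accelerator, action }) {
		if (registered) {
			// Currently on: unregister and forget the saved accelerator.
			registry.unregister(accelerator)
			delete config[`shortcut.${name}`]
		} else if (registry.register(accelerator, action)) {
			// Persist only after the OS-level registration succeeded.
			config[`shortcut.${name}`] = accelerator
		}
	}
}
```

Injecting the dependencies keeps the on/off state machine testable in plain Node.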
|
```javascript
import View from '../../components/view';
import { __ } from '@wordpress/i18n';
import ImageToolsPanel from './image-tools-panel';
import ImageToolsContent from './image-tools-content';
import { useLocation } from '../../context/location-context';
import { LOCATIONS } from '../../constants';
const ImageTools = () => {
const { navigate } = useLocation();
return (
<View>
<View.Panel>
<View.BackButton onClick={ () => navigate( LOCATIONS.GENERATE ) }>
{ __( 'Generate with a prompt', 'elementor' ) }
</View.BackButton>
<View.PanelHeading primary={ __( 'Edit with AI', 'elementor' ) } />
<ImageToolsPanel />
</View.Panel>
<View.Content>
<ImageToolsContent />
</View.Content>
</View>
);
};
export default ImageTools;
```
|
```c++
#include "../../share/json_helper.hpp"
#include "manipulator/types.hpp"
#include <boost/ut.hpp>
namespace {
using namespace boost::ut;
using namespace boost::ut::literals;
void handle_json(const nlohmann::json& json) {
auto c = json.at("class").get<std::string>();
if (c == "event_definition") {
krbn::manipulator::event_definition event_definition;
for (const auto& [key, value] : json.at("input").items()) {
event_definition.handle_json(key, value, json.at("input"));
}
} else if (c == "from_modifiers_definition") {
json.at("input").get<krbn::manipulator::from_modifiers_definition>();
} else if (c == "modifier") {
json.at("input").get<krbn::manipulator::modifier_definition::modifier>();
} else if (c == "modifier_definition") {
krbn::manipulator::modifier_definition::make_modifiers(json.at("input"));
} else if (c == "to_event_definition") {
json.at("input").get<krbn::manipulator::to_event_definition>();
} else {
expect(false);
}
}
} // namespace
void run_errors_test(void) {
using namespace boost::ut;
using namespace boost::ut::literals;
"errors"_test = [] {
auto json = krbn::unit_testing::json_helper::load_jsonc("json/errors.jsonc");
for (const auto& j : json) {
auto error_json = krbn::unit_testing::json_helper::load_jsonc("json/" + j.get<std::string>());
for (const auto& e : error_json) {
try {
handle_json(e);
expect(false) << e.at("input") << " does not throw the expected exceptions";
} catch (pqrs::json::unmarshal_error& ex) {
expect(std::string_view(e.at("error").get<std::string>()) == ex.what());
} catch (...) {
expect(false);
}
}
}
};
}
```
|
```c++
#pragma once
#include "tree.hpp"
#include "tree-controller.hpp"
#include "wayfire/view-helpers.hpp"
#include "wayfire/txn/transaction-manager.hpp"
#include "wayfire/scene-operations.hpp"
#include <wayfire/workarea.hpp>
#include <wayfire/window-manager.hpp>
struct autocommit_transaction_t
{
public:
wf::txn::transaction_uptr tx;
autocommit_transaction_t()
{
tx = wf::txn::transaction_t::create();
}
~autocommit_transaction_t()
{
if (!tx->get_objects().empty())
{
wf::get_core().tx_manager->schedule_transaction(std::move(tx));
}
}
};
namespace wf
{
/**
* When a view is moved from one output to the other, we want to keep its tiled
* status. To achieve this, we do the following:
*
* 1. In view-pre-moved-to-output handler, we set view_auto_tile_t custom data.
* 2. In detach handler, we just remove the view as usual.
* 3. We now know we will receive attach as next event.
* Check for view_auto_tile_t, and tile the view again.
*/
class view_auto_tile_t : public wf::custom_data_t
{};
class tile_workspace_set_data_t : public wf::custom_data_t
{
public:
std::vector<std::vector<std::unique_ptr<wf::tile::tree_node_t>>> roots;
std::vector<std::vector<wf::scene::floating_inner_ptr>> tiled_sublayer;
static constexpr wf::tile::split_direction_t default_split = wf::tile::SPLIT_VERTICAL;
wf::option_wrapper_t<int> inner_gaps{"simple-tile/inner_gap_size"};
wf::option_wrapper_t<int> outer_horiz_gaps{"simple-tile/outer_horiz_gap_size"};
wf::option_wrapper_t<int> outer_vert_gaps{"simple-tile/outer_vert_gap_size"};
tile_workspace_set_data_t(std::shared_ptr<wf::workspace_set_t> wset)
{
this->wset = wset;
wset->connect(&on_wset_attached);
wset->connect(&on_workspace_grid_changed);
resize_roots(wset->get_workspace_grid_size());
if (wset->get_attached_output())
{
wset->get_attached_output()->connect(&on_workarea_changed);
}
inner_gaps.set_callback(update_gaps);
outer_horiz_gaps.set_callback(update_gaps);
outer_vert_gaps.set_callback(update_gaps);
}
wf::signal::connection_t<workarea_changed_signal> on_workarea_changed = [=] (auto)
{
update_root_size();
};
wf::signal::connection_t<workspace_set_attached_signal> on_wset_attached = [=] (auto)
{
on_workarea_changed.disconnect();
if (wset.lock()->get_attached_output())
{
wset.lock()->get_attached_output()->connect(&on_workarea_changed);
update_root_size();
}
};
wf::signal::connection_t<wf::workspace_grid_changed_signal> on_workspace_grid_changed = [=] (auto)
{
wf::dassert(!wset.expired(), "wset should not expire, ever!");
resize_roots(wset.lock()->get_workspace_grid_size());
};
void resize_roots(wf::dimensions_t wsize)
{
for (size_t i = 0; i < tiled_sublayer.size(); i++)
{
for (size_t j = 0; j < tiled_sublayer[i].size(); j++)
{
if (wset.lock()->is_workspace_valid({(int)i, (int)j}))
{
destroy_sublayer(tiled_sublayer[i][j]);
}
}
}
roots.resize(wsize.width);
tiled_sublayer.resize(wsize.width);
for (int i = 0; i < wsize.width; i++)
{
roots[i].resize(wsize.height);
tiled_sublayer[i].resize(wsize.height);
for (int j = 0; j < wsize.height; j++)
{
roots[i][j] = std::make_unique<wf::tile::split_node_t>(default_split);
tiled_sublayer[i][j] = std::make_shared<wf::scene::floating_inner_node_t>(false);
wf::scene::add_front(wset.lock()->get_node(), tiled_sublayer[i][j]);
}
}
update_root_size();
update_gaps();
}
void update_root_size()
{
auto wo = wset.lock()->get_attached_output();
wf::geometry_t workarea = wo ? wo->workarea->get_workarea() : tile::default_output_resolution;
wf::geometry_t output_geometry =
wset.lock()->get_last_output_geometry().value_or(tile::default_output_resolution);
auto wsize = wset.lock()->get_workspace_grid_size();
for (int i = 0; i < wsize.width; i++)
{
for (int j = 0; j < wsize.height; j++)
{
/* Set size */
auto vp_geometry = workarea;
vp_geometry.x += i * output_geometry.width;
vp_geometry.y += j * output_geometry.height;
autocommit_transaction_t tx;
roots[i][j]->set_geometry(vp_geometry, tx.tx);
}
}
}
void destroy_sublayer(wf::scene::floating_inner_ptr sublayer)
{
// Transfer views to the top
auto root = wset.lock()->get_node();
auto children = root->get_children();
auto sublayer_children = sublayer->get_children();
sublayer->set_children_list({});
children.insert(children.end(), sublayer_children.begin(), sublayer_children.end());
root->set_children_list(children);
wf::scene::update(root, wf::scene::update_flag::CHILDREN_LIST);
wf::scene::remove_child(sublayer);
}
tile::gap_size_t get_gaps() const
{
return {
.left = outer_horiz_gaps,
.right = outer_horiz_gaps,
.top = outer_vert_gaps,
.bottom = outer_vert_gaps,
.internal = inner_gaps,
};
}
void update_gaps_with_tx(wf::txn::transaction_uptr& tx)
{
for (auto& col : roots)
{
for (auto& root : col)
{
root->set_gaps(get_gaps());
root->set_geometry(root->geometry, tx);
}
}
}
void refresh(wf::txn::transaction_uptr& tx)
{
flatten_roots();
update_gaps_with_tx(tx);
}
std::function<void()> update_gaps = [=] ()
{
autocommit_transaction_t tx;
update_gaps_with_tx(tx.tx);
};
void flatten_roots()
{
for (auto& col : roots)
{
for (auto& root : col)
{
tile::flatten_tree(root);
}
}
}
static tile_workspace_set_data_t& get(std::shared_ptr<workspace_set_t> set)
{
if (!set->has_data<tile_workspace_set_data_t>())
{
set->store_data(std::make_unique<tile_workspace_set_data_t>(set));
}
return *set->get_data<tile_workspace_set_data_t>();
}
static tile_workspace_set_data_t& get(wf::output_t *output)
{
return get(output->wset());
}
static std::unique_ptr<tile::tree_node_t>& get_current_root(wf::output_t *output)
{
auto set = output->wset();
auto vp = set->get_current_workspace();
auto& data = get(output);
return data.roots[vp.x][vp.y];
}
static scene::floating_inner_ptr get_current_sublayer(wf::output_t *output)
{
auto set = output->wset();
auto vp = set->get_current_workspace();
auto& data = get(output);
return data.tiled_sublayer[vp.x][vp.y];
}
std::weak_ptr<workspace_set_t> wset;
std::unique_ptr<wf::tile::view_node_t> setup_view_tiling(wayfire_toplevel_view view, wf::point_t vp)
{
view->set_allowed_actions(VIEW_ALLOW_WS_CHANGE);
auto node = view->get_root_node();
wf::scene::readd_front(tiled_sublayer[vp.x][vp.y], node);
view_bring_to_front(view);
return std::make_unique<wf::tile::view_node_t>(view);
}
void attach_view(wayfire_toplevel_view view, std::optional<wf::point_t> _vp = {})
{
auto vp = _vp.value_or(wset.lock()->get_current_workspace());
auto view_node = setup_view_tiling(view, vp);
{
autocommit_transaction_t tx;
roots[vp.x][vp.y]->as_split_node()->add_child(std::move(view_node), tx.tx);
}
consider_exit_fullscreen(view);
}
/** Remove the given view from its tiling container */
void detach_views(std::vector<nonstd::observer_ptr<tile::view_node_t>> views,
bool reinsert = true)
{
{
autocommit_transaction_t tx;
for (auto& v : views)
{
auto view = v->view;
view->set_allowed_actions(VIEW_ALLOW_ALL);
// After this, `v` is freed.
v->parent->remove_child(v, tx.tx);
if (view->pending_fullscreen() && view->is_mapped())
{
wf::get_core().default_wm->fullscreen_request(view, nullptr, false);
}
if (reinsert && view->get_output())
{
wf::scene::readd_front(view->get_output()->wset()->get_node(), view->get_root_node());
}
}
}
/* View node is invalid now */
flatten_roots();
update_root_size();
}
/**
* Consider unfullscreening all fullscreen views because a new view has been focused or attached to the
* tiling tree.
*/
void consider_exit_fullscreen(wayfire_toplevel_view view)
{
if (tile::view_node_t::get_node(view) && !view->pending_fullscreen())
{
auto vp = this->wset.lock()->get_current_workspace();
for_each_view(roots[vp.x][vp.y], [&] (wayfire_toplevel_view view)
{
if (view->pending_fullscreen())
{
set_view_fullscreen(view, false);
}
});
}
}
void set_view_fullscreen(wayfire_toplevel_view view, bool fullscreen)
{
/* Set fullscreen, and trigger resizing of the views (which will commit the view) */
view->toplevel()->pending().fullscreen = fullscreen;
update_root_size();
}
};
}
```
|
```c
#include "soc/adc_periph.h"
/* Store IO number corresponding to the ADC channel number. */
const int adc_channel_io_map[SOC_ADC_PERIPH_NUM][SOC_ADC_MAX_CHANNEL_NUM] = {
/* ADC1 */
{
ADC1_CHANNEL_0_GPIO_NUM, ADC1_CHANNEL_1_GPIO_NUM, ADC1_CHANNEL_2_GPIO_NUM, ADC1_CHANNEL_3_GPIO_NUM,
ADC1_CHANNEL_4_GPIO_NUM, ADC1_CHANNEL_5_GPIO_NUM, ADC1_CHANNEL_6_GPIO_NUM, ADC1_CHANNEL_7_GPIO_NUM
},
/* ADC2 */
{
ADC2_CHANNEL_0_GPIO_NUM, ADC2_CHANNEL_1_GPIO_NUM, ADC2_CHANNEL_2_GPIO_NUM,
ADC2_CHANNEL_3_GPIO_NUM, ADC2_CHANNEL_4_GPIO_NUM, ADC2_CHANNEL_5_GPIO_NUM
}
};
```
|
Captain Richard Boswell Rushall (April 1865 – 3 February 1953) was a British sea captain and businessman who served as mayor of Rangoon, Burma, during the 1930s. He was the first Englishman to hold this position. Born in Braunston, Northamptonshire, Rushall was the eldest of eight children. After finishing school he left for sea, joined the UK's Merchant Navy, and became a ship's captain. He spent 20 years with the Irrawaddy Flotilla Company, of which 17 were in command of steamships belonging to the company. In 1908 he settled in Rangoon with his family, resigned from the Irrawaddy Flotilla Company and founded Rushall & Co. Ltd., a stevedoring and contracting business that employed between 3,000 and 4,000 men.
In December 1922 Rushall was elected as an Honorary Magistrate, and was subsequently made a Member of the Order of the British Empire (MBE) for his distinguished service during the First World War. He was elected as mayor of Rangoon in January 1930, in an election that was described by Singapore's The Straits Times as having given "universal satisfaction". During his time as mayor, he sought to improve the accommodation and quality of care in the city hospital and to ensure that a fair share of stevedoring jobs in Rangoon were allotted to native dock labourers. During the Second World War Rushall evacuated to Bombay; he died at the age of 87 in Rangoon, where he was commended by U Kyaw Tha for his work and character as mayor.
Early life and naval career
Richard Boswell Rushall was born in April 1865 in Braunston, Northamptonshire, and was the eldest of eight children. His father, Benjamin Rushall (1825–1900), was a saddler; his mother was Mary Boswell (1843–1918). After finishing school as a young man, Rushall left for sea and joined the Merchant Navy; he served as third officer on one of the British-India Steam Navigation Company's coasting steamers. He first began to reside permanently in Rangoon at the age of 20, and in 1886 he joined the Irrawaddy Flotilla Company. He stayed with the company for 20 years, of which 17 were spent in command of their steamships. He earned his certificate of competency as second mate from the Lords of Trade on 8 March 1888, and eventually rose to the rank of ship's captain – he was aboard one of the final ships to travel under sail around Cape Horn. He married his first wife, Jane Amelia Graham (1872–1899), on 10 September 1892 in Burma, at the age of 27. He and Jane had two children together: Nancy (born 1897) and Benjamin Thomas (1898–1980). Jane died on 19 June 1899.
Business
While in Rangoon, Rushall met and married Charlotte Sarah Trype (1882–1933)—the daughter of the local station manager—and settled in the city in 1908 with his second wife and their three daughters: Ella Irene (born 1905), Charlotte Mary (1907–1963), and Cecelia. Whilst there, he resigned from the Irrawaddy Flotilla Company and, in 1906, founded Rushall & Co. Ltd., a stevedoring and contracting business located at 121 Judah Ezekiel Street (now Thein Phyu Road) next to the docks of the city. The company employed between 3,000 and 4,000 men. Over the following years, Rushall and Charlotte had three further children: Edna Helen (1909–1910), Richard Boswell (1911–2002) and Edgar Boswell (1916–2002). Charlotte left with the family in 1913 for Rugby, Warwickshire, where she set up and managed two businesses: a brick factory and Rugby Motor Transport Co., a haulage contracting business dealing in lorries and charabancs. Rushall remained in Rangoon to tend to his own company. Charlotte died in Rugby on 30 April 1933, with the Rugby Motor Transport Co. being wound up two months later.
Politics
During the First World War, Rushall worked as harbourmaster at Rangoon's harbour, and in December 1922 he was elected as an Honorary Magistrate in the Rangoon Municipal Elections, whereupon he devoted himself to the improvement of the city's public parks and war memorial. He worked for eight years as a Councillor of the Corporation of Rangoon, and was subsequently made an MBE for his distinguished service during the war. He served as Chairman of the Roads and Buildings Committee, and also sat on the committees for public health and markets, playgrounds, and the protection of waifs and strays. From 1928 he was vice president of the hospital and governor of Rangoon University. Other public offices that he held included governor of the gaol and member of the Reformatory School Board.
As a result of his public service in Rangoon, Rushall became known to Thibaw Min, the last king of Burma's Konbaung dynasty, and in 1925 he attended the funeral of Supayalat, the king's favourite wife. On 6 January 1930, Rushall became the first Englishman to be elected mayor of Rangoon, and was seen as a popular choice for the position – at the time, Singapore's paper The Straits Times described his election as having given "universal satisfaction". According to the Rugby Advertiser, Rushall was "extremely popular both among the European and the native population of the city", and was "well known for his numerous acts of kindliness and charity".
Rushall's first year as mayor proved to be challenging: in March he was compelled to give evidence at the trial of Jatindra Mohan Sengupta, the mayor of Calcutta, who was accused of sedition in speeches he had made during a visit to Rangoon. During the trial a riot erupted outside the courthouse. In May, further riots, this time fuelled by anti-Indian sentiment among the Burmese, sprang up in Rangoon and across the rest of the country following a strike by Indian coolies. One such riot lasted throughout the night of 26 May, and resulted in the deaths of 120 Indians and more than 900 injuries. When Rushall's son Richard came to visit him during this time, Rushall immediately sent him up the Rangoon River and away from the civil disorder for 2–3 months. In November, Rushall supported a resolution to improve the accommodation and quality of care in the city hospital, and the following year, he sat on a committee to ensure that a fair share of stevedoring jobs in Rangoon were allotted to native dock labourers.
Later life and death
Following the Japanese invasion of Burma in early 1942, Rushall evacuated from the country with his daughter Nancy. He spent the rest of the Second World War in Bombay, but eventually returned to Rangoon, where he died on 3 February 1953, at the age of 87. Upon his death, U Kyaw Tha—chairman of the Commissioners of the Port of Rangoon—commended him as a "born gentleman", and praised his work at the city's hospital and his "kindliness and infectious friendliness".
Notes
References
1864 births
1953 deaths
19th-century English businesspeople
British expatriates in British Burma
Mayors of Yangon
Members of the Order of the British Empire
Sea captains
People from Braunston
|
Rudolf Erich Edgar Hübner (29 April 1897 – 28 February 1965) was a German general during World War II. He was a recipient of the Knight's Cross of the Iron Cross of Nazi Germany.
Hübner entered the Army during the First World War on 25 July 1916 as a volunteer in the replacement battalion of the Grenadier Regiment "Prince Carl of Prussia" (2nd Brandenburg) No. 12. Later in 1916 he went to the front with the 4th Lower Silesian Infantry Regiment No. 51. From 1917 until the end of the war he served with Sturm Battalion No. 16; with this unit he was promoted to lieutenant on 27 September 1918 and was then detailed to an officer's course, where he experienced the end of the war. On 28 November 1918 he was released from active service.
He then studied dentistry, graduating with a doctorate (Dr. med. dent.). Hübner subsequently worked as a practising dentist.
In 1934 he joined the Reichswehr as a supplementary officer candidate. In the spring of 1935 he was employed as a company commander in the supplementary battalion Oppeln A (later supplementary battalion 41), and on 1 June 1935 he was appointed a supplementary officer. On 15 July 1936, in the course of the expansion of the Wehrmacht, he was taken into active service. On 1 March 1937 he was appointed chief of the 6th Company of Infantry Regiment 18.
Upon mobilization before the Second World War, he was appointed a company commander in Infantry Regiment 167, which belonged to the 86th Infantry Division. In late January 1940 he was appointed commander of the second battalion of Infantry Regiment 529 and was promoted to major on 1 March 1940. He led the battalion in the Western campaign, which ended on 22 June 1940 with the capitulation of France. On 1 April 1942 he was promoted to lieutenant colonel, and on 9 April 1942 he was charged with the leadership of Infantry Regiment 529, which belonged to the 299th Infantry Division; on 26 August 1942 he was formally appointed its commander. From its renaming in October 1942 he commanded the unit as Grenadier Regiment 529. On 1 December 1942 he was promoted to colonel, and on 21 April 1943 he was awarded the German Cross in Gold. In May 1943, Hübner issued his memorandum on military education (titled What do we fight for?), which the Wehrmacht High Command (OKW) distributed to the officer corps in 300,000 copies. On 1 July 1943 he relinquished his command and was transferred to the Führerreserve. In September 1943 he was transferred to the Army Personnel Office, and from spring 1944 he was attached to the National Socialist command staff of the OKW (see National Socialist Leadership Officer).
From 1 August 1944 he served as Chief of Staff of the National Socialist command staff of the OKH. On 1 January 1945 he was promoted to major general. On 1 February 1945 he relinquished that post and was simultaneously charged with the leadership of the 303rd Infantry Division. On 1 March 1945 he was promoted to lieutenant general, and eight days later he was awarded the Knight's Cross of the Iron Cross.
Hitler was incensed by the loss of the Ludendorff Bridge during the Battle of Remagen on 7 March 1945. He summoned the "fanatical and reliable Nazi" Generalleutnant Hübner from the Eastern Front and personally appointed him Commander of Fliegendes Sonder-Standgericht West ("Flying Special Court-Martial West"). He directed him to court-martial and execute the officers who failed to destroy the bridge.
Hübner tried Major Hans Scheller, Captain Willi Bratge, Lieutenant Karl Heinz Peters, Major Herbert Strobel and Major August Kraft.
Hübner, who had no legal experience, acted as both prosecutor and judge. He conducted extremely brief show trials during which he harangued the defendants for their alleged command failures, and then pronounced sentence. All of the officers were sentenced to death. Except for Bratge, who had been captured, the others were taken to a nearby woods within 24 hours, executed with a shot to the back of the neck, and buried where they fell.
On 28 April 1945, on the orders of Albert Kesselring, Hübner was appointed combat commander of Munich. Under his command, 200 people were hanged or shot in the last days of the war. Hübner "left quietly" (in Henke's words) as Munich was taken on 30 April 1945. On 8 May 1945 he was taken prisoner, first by the Americans and later by the British, and was released from captivity in April 1948. In a post-war trial in Koblenz, he was sentenced to 10 years in prison for the Rimbach death sentences.
Awards
Knight's Cross of the Iron Cross on 9 March 1945 as Generalmajor and leader of Grenadier-Regiment 529
References
Citations
Bibliography
1897 births
1965 deaths
German Army personnel of World War I
German mass murderers
German people convicted of manslaughter
German people convicted of war crimes
German prisoners of war in World War II held by the United Kingdom
German prisoners of war in World War II held by the United States
Lieutenant generals of the German Army (Wehrmacht)
People from Posen-West Prussia
Prisoners and detainees of Germany
Recipients of the Gold German Cross
Recipients of the Knight's Cross of the Iron Cross
|
Romain Genevois (born 28 October 1987) is a Haitian professional footballer who most recently played as a defender for Stade Malherbe Caen of Ligue 1 in France.
Early years
Genevois was born in L'Estère, Haiti. At the age of 3, he was separated from his biological parents, who had difficulty raising him. He was then adopted, along with his younger brother, by a French couple who could not have children. He arrived in France on 5 February 1991 and grew up in Montcenis, a small town in Saône-et-Loire where his parents had chosen to live.
Club career
Genevois came up through FC Gueugnon's youth ranks and made his first-team debut in the 2006–07 season. He moved to Tours FC in 2009, returning to Ligue 2. He then signed with OGC Nice in 2012 and concluded his tenure there with a fourth-place finish and qualification for the UEFA Europa League in 2016, after which he signed a three-year contract with Stade Malherbe Caen.
In his career, Genevois has recorded over one hundred matches in Ligue 1 and 286 as a professional, and has made several appearances in European competition.
International career
He made his debut for Haiti in the February 2008 friendly series against Venezuela, which served as a warm-up for the 2010 FIFA World Cup qualification match against Nicaragua or the Netherlands Antilles.
In 2013, Genevois was interviewed by Nice-Matin, in which he made clear that he was not a Haitian international, as was often said, and considered himself 100% French. He said that he had never signed any document to keep dual citizenship and did not accept Haitian nationality. He claims to have answered a call-up for a friendly match in Venezuela, but said it did not officially count as he did not go through Haiti. He recalls the week spent with the team as a good memory, but organization was an issue: he arrived in the morning at a Florida airport, was only picked up in the evening, and did not know who the national coach was. This bad experience led him to pursue his professional career in France instead. He was later contacted by team officials to rejoin, but did not want to go to Haiti, citing its difficult conditions, and had not revisited the island since leaving for France in his early childhood.
That changed in 2016, when he finally accepted the solicitations of the Haitian selection under its new coach, Patrice Neveu. During this second stint with the national team, his outlook differed: he stated his pleasure at being able to play for his country of origin and said that things had really changed. He felt that something was building and that there was room for optimism in Haitian football, describing its entry in the Copa América as "a very good experience, a very nice competition." He was one of the 23 players selected for the 2016 Copa América, the first edition in the tournament's history to allow qualification from countries outside CONMEBOL, which enabled Haiti to qualify and enter for the first time.
Personal life
Genevois is married and is a father of two, a daughter and a son. He, along with his parents, has sent donations to Haiti through the association that allowed his adoption.
References
External links
1987 births
Living people
Men's association football defenders
Haitian men's footballers
Haitian expatriate men's footballers
Haiti men's international footballers
Haitian emigrants to France
French men's footballers
FC Gueugnon players
Tours FC players
OGC Nice players
Stade Malherbe Caen players
Ligue 1 players
Ligue 2 players
Copa América Centenario players
|
Casahuiria is a genus of flies in the family Tachinidae.
Species
Casahuiria cornuta Townsend, 1919
Distribution
Peru.
References
Exoristinae
Tachinidae genera
Diptera of South America
Monotypic Brachycera genera
Taxa named by Charles Henry Tyler Townsend
|
Antigua and Barbuda participated at the 2018 Summer Youth Olympics in Buenos Aires, Argentina from 6 October to 18 October 2018.
Athletics
Girls
Track
Field
Sailing
Antigua and Barbuda qualified one boat based on its performance at the North American and Caribbean IKA Twin Tip Qualifiers.
Boys' IKA Twin Tip Racing - 1 boat
Swimming
Boys
Girls
References
2018 in Antigua and Barbuda sport
Nations at the 2018 Summer Youth Olympics
Antigua and Barbuda at the Youth Olympics
|
```shell
Finding a tag
You can use git offline!
Search by commit message keyword
Remote repositories: viewing, editing and deleting
Ignore files in git
```
|
Immigration reform in the United Kingdom is a term used in political discussion regarding changes to the current immigration policy of the United Kingdom.
In the United Kingdom, the Strangers into Citizens campaign has been supported by the Liberal Democrats. Labour MP John McDonnell, the IPPR (a Labour-leaning think-tank) and Boris Johnson (the Conservative Prime Minister) have also backed selective amnesty for illegal immigrants. The Liberal Democrat proposal would regularise the status of illegal immigrants who have lived in the country for at least ten years and who do not have a criminal record. Advocates have argued that bringing such individuals (estimates range from 300,000 to 800,000) into the legal economy would raise tax revenue, save on policing expenses, and reduce expenditures on deportation.
More recently, UK Prime Minister David Cameron announced “a series of proposals to curb immigration,” noting that the overall inflow of foreigners had increased considerably since 2004. The UK Independence Party, having apparently “harnessed” voter “frustration” about immigration levels, received 12.6% of the vote in the May 2015 parliamentary election, up from 3.1% in May 2010, winning one seat in the House of Commons.
See also
Immigration, Asylum and Nationality Act 2006
References
Immigration to the United Kingdom
Human rights in the United Kingdom
Immigration law in the United Kingdom
|
Alexander Y. Tetelbaum (born 1948 in Kiev, Ukrainian SSR) is an educator, inventor, scientist, academician, and entrepreneur. He has been a pioneer in the Electronic Design Automation (EDA) and Artificial Intelligence (AI) industries since the 1960s. He has held high-level positions in academia and industry. He is a Fellow and Honorary Doctor of several universities, academies, and societies. He holds more than 40 US patents and is the author or co-author of 300 publications, including 16 books, most recently "Minimum Number of Timing Signoff Corners".
Education and career
He holds a Doctor of Engineering Science (Grand PhD) degree in Computer Science and Engineering as well as a PhD in Electrical and Computer Engineering. Tetelbaum was Professor of Design Automation and a Distinguished Scientist at the National Technical University of Ukraine. In 1991, he founded and presided over the International Solomon University. He has served as a reviewer for the American Mathematical Society since 1994. Dr. Tetelbaum led design methodology and automation teams at LSI Corporation, Silicon Graphics (SGI), and Zycad Corporation. Currently, he is president and CEO of Abelite Design Automation, Inc.
Alexander Tetelbaum was selected for inclusion in Who's Who in the World, Men of Achievement, Who's Who in Technology, Who's Who in American Education, 5000 Personalities of the World, Who's Who in Science and Engineering, The International Directory of Distinguished Leadership, and the Longman Reference on Research Directories. WorldAtlas.com has listed Dr. Tetelbaum as a famous Ukrainian inventor and scientist who “has made a significant contribution to the country in his personal endeavors”. A star (ID: HD92636) at the astronomically verified position in the constellation Leo (right ascension 10h41m55.30s, declination +08°24′52.0″) has been named “Dr. Alexander Tetelbaum”. His hobbies include oil painting, table tennis, chess, and solving and developing puzzles (books: "Yes-No Puzzles & Games", "Puzzle Games For Kids", "Solving Non-Standard Problems", and "Solving Non-Standard Very Hard Problems") in non-standard thinking and critical problem-solving.
Note: Doctor of Science is a "higher doctorate" awarded in recognition of a national/international substantial and sustained contribution to scientific knowledge beyond that required for a PhD.
References
External links
HISTORY OF FOUNDATION, OUTSTANDING PERSONALITIES OF ISU
1948 births
Living people
Engineers from Kyiv
Artificial intelligence researchers
Logic programming researchers
20th-century American mathematicians
21st-century American mathematicians
20th-century Ukrainian Jews
American computer scientists
Jewish American scientists
Electronic design automation people
American electronics engineers
Silicon Valley people
Soviet emigrants to the United States
Russian inventors
21st-century American Jews
20th-century Ukrainian engineers
Ukrainian computer scientists
|
```xml
<dict>
<key>LayoutID</key>
<integer>87</integer>
<key>PathMapRef</key>
<array>
<dict>
<key>CodecID</key>
<array>
<integer>283902601</integer>
</array>
<key>Headphone</key>
<dict/>
<key>Inputs</key>
<array>
<string>Mic</string>
<string>LineIn</string>
</array>
<key>IntSpeaker</key>
<dict/>
<key>LineIn</key>
<dict/>
<key>Mic</key>
<dict>
<key>MuteGPIO</key>
<integer>0</integer>
<key>SignalProcessing</key>
<dict>
<key>SoftwareDSP</key>
<dict>
<key>DspFunction0</key>
<dict>
<key>FunctionInfo</key>
<dict>
<key>DspFuncInstance</key>
<integer>0</integer>
<key>DspFuncName</key>
<string>DspNoiseReduction</string>
<key>DspFuncProcessingIndex</key>
<integer>0</integer>
</dict>
<key>ParameterInfo</key>
<dict>
<key>1</key>
<integer>0</integer>
<key>2</key>
<integer>1</integer>
<key>3</key>
<integer>0</integer>
<key>4</key>
<integer>-1063256063</integer>
<key>5</key>
<data>O7qJwvAsd8IxFYLCNC+Iwgh8h8JYT3zCTGxtwjCQbMLsb3/C58KIwmIAjcKqEZTCM22Xwr5/k8L6Q5DCUXiPwhlqksKOQ5TCQS2XwkCYnMLSmqPCbK+your_sha256_hashqwvxyqcLWr6XCdkajwulQpMJs1afCbmCqwqbpqcIaSKrCSrmpwjv+p8KjIqjCVkOowh9WqMLun6nCudimwvISp8K686rC+your_sha256_hash8GswgJLr8Ku2a/your_sha256_hashCQGK1woYFtcIw7LHCOMuxwiKZs8K8YrXC6nO4ws5cu8KCa73CJjG+wqekvMK9RLnC4/a2wuKBt8Jyyour_sha512_hash/Rwhmf0cImvtPClErXwmrF18JUfdvCNi7fwty43cL+WdvCuqrawiIL3cKCR+HCYPDnwqQ67MLYserCshHowl7L6MK2guzCsvrvwu4o8cJyfv7C</data>
</dict>
<key>PatchbayInfo</key>
<dict/>
</dict>
<key>DspFunction1</key>
<dict>
<key>FunctionInfo</key>
<dict>
<key>DspFuncInstance</key>
<integer>1</integer>
<key>DspFuncName</key>
<string>DspEqualization32</string>
<key>DspFuncProcessingIndex</key>
<integer>1</integer>
</dict>
<key>ParameterInfo</key>
<dict>
<key>1</key>
<integer>0</integer>
<key>9</key>
<integer>0</integer>
<key>Filter</key>
<array>
<dict>
<key>2</key>
<integer>0</integer>
<key>3</key>
<integer>0</integer>
<key>4</key>
<integer>0</integer>
<key>5</key>
<integer>1</integer>
<key>6</key>
<integer>1120623594</integer>
<key>7</key>
<integer>1060439283</integer>
<key>8</key>
<integer>-1069504319</integer>
</dict>
<dict>
<key>2</key>
<integer>0</integer>
<key>3</key>
<integer>3</integer>
<key>4</key>
<integer>0</integer>
<key>5</key>
<integer>4</integer>
<key>6</key>
<integer>1134130816</integer>
<key>7</key>
<integer>1068239080</integer>
<key>8</key>
<integer>-1073964333</integer>
</dict>
<dict>
<key>2</key>
<integer>0</integer>
<key>3</key>
<integer>4</integer>
<key>4</key>
<integer>0</integer>
<key>5</key>
<integer>4</integer>
<key>6</key>
<integer>1143149396</integer>
<key>7</key>
<integer>1069838081</integer>
<key>8</key>
<integer>-1072785033</integer>
</dict>
<dict>
<key>2</key>
<integer>0</integer>
<key>3</key>
<integer>5</integer>
<key>4</key>
<integer>0</integer>
<key>5</key>
<integer>4</integer>
<key>6</key>
<integer>1161109679</integer>
<key>7</key>
<integer>1093706804</integer>
<key>8</key>
<integer>-1069580896</integer>
</dict>
<dict>
<key>2</key>
<integer>0</integer>
<key>3</key>
<integer>7</integer>
<key>4</key>
<integer>0</integer>
<key>5</key>
<integer>4</integer>
<key>6</key>
<integer>1138536183</integer>
<key>7</key>
<integer>1094714319</integer>
<key>8</key>
<integer>-1069046873</integer>
</dict>
<dict>
<key>2</key>
<integer>0</integer>
<key>3</key>
<integer>9</integer>
<key>4</key>
<integer>0</integer>
<key>5</key>
<integer>4</integer>
<key>6</key>
<integer>1134823262</integer>
<key>7</key>
<integer>1088568216</integer>
<key>8</key>
<integer>-1073319056</integer>
</dict>
<dict>
<key>2</key>
<integer>0</integer>
<key>3</key>
<integer>10</integer>
<key>4</key>
<integer>0</integer>
<key>5</key>
<integer>4</integer>
<key>6</key>
<integer>1140763936</integer>
<key>7</key>
<integer>1095878445</integer>
<key>8</key>
<integer>-1066910782</integer>
</dict>
<dict>
<key>2</key>
<integer>0</integer>
<key>3</key>
<integer>11</integer>
<key>4</key>
<integer>0</integer>
<key>5</key>
<integer>4</integer>
<key>6</key>
<integer>1150711009</integer>
<key>7</key>
<integer>1082220668</integer>
<key>8</key>
<integer>-1072251010</integer>
</dict>
<dict>
<key>2</key>
<integer>0</integer>
<key>3</key>
<integer>22</integer>
<key>4</key>
<integer>0</integer>
<key>5</key>
<integer>4</integer>
<key>6</key>
<integer>1169045837</integer>
<key>7</key>
<integer>1080998247</integer>
<key>8</key>
<integer>-1076100424</integer>
</dict>
<dict>
<key>2</key>
<integer>0</integer>
<key>3</key>
<integer>23</integer>
<key>4</key>
<integer>0</integer>
<key>5</key>
<integer>6</integer>
<key>6</key>
<integer>1174718752</integer>
<key>7</key>
<integer>1074226939</integer>
<key>8</key>
<integer>-1065842737</integer>
</dict>
<dict>
<key>2</key>
<integer>0</integer>
<key>3</key>
<integer>24</integer>
<key>4</key>
<integer>0</integer>
<key>5</key>
<integer>4</integer>
<key>6</key>
<integer>1174256827</integer>
<key>7</key>
<integer>1091118565</integer>
<key>8</key>
<integer>-1065842737</integer>
</dict>
<dict>
<key>2</key>
<integer>1</integer>
<key>3</key>
<integer>0</integer>
<key>4</key>
<integer>0</integer>
<key>5</key>
<integer>1</integer>
<key>6</key>
<integer>1120623594</integer>
<key>7</key>
<integer>1060439283</integer>
<key>8</key>
<integer>-1069504319</integer>
</dict>
<dict>
<key>2</key>
<integer>1</integer>
<key>3</key>
<integer>3</integer>
<key>4</key>
<integer>0</integer>
<key>5</key>
<integer>4</integer>
<key>6</key>
<integer>1134130816</integer>
<key>7</key>
<integer>1068239080</integer>
<key>8</key>
<integer>-1073964333</integer>
</dict>
<dict>
<key>2</key>
<integer>1</integer>
<key>3</key>
<integer>4</integer>
<key>4</key>
<integer>0</integer>
<key>5</key>
<integer>4</integer>
<key>6</key>
<integer>1143149396</integer>
<key>7</key>
<integer>1069838081</integer>
<key>8</key>
<integer>-1072785033</integer>
</dict>
<dict>
<key>2</key>
<integer>1</integer>
<key>3</key>
<integer>5</integer>
<key>4</key>
<integer>0</integer>
<key>5</key>
<integer>4</integer>
<key>6</key>
<integer>1161109679</integer>
<key>7</key>
<integer>1093706804</integer>
<key>8</key>
<integer>-1069580896</integer>
</dict>
<dict>
<key>2</key>
<integer>1</integer>
<key>3</key>
<integer>7</integer>
<key>4</key>
<integer>0</integer>
<key>5</key>
<integer>4</integer>
<key>6</key>
<integer>1138536183</integer>
<key>7</key>
<integer>1094714319</integer>
<key>8</key>
<integer>-1069046873</integer>
</dict>
<dict>
<key>2</key>
<integer>1</integer>
<key>3</key>
<integer>9</integer>
<key>4</key>
<integer>0</integer>
<key>5</key>
<integer>4</integer>
<key>6</key>
<integer>1134823262</integer>
<key>7</key>
<integer>1088568216</integer>
<key>8</key>
<integer>-1073319056</integer>
</dict>
<dict>
<key>2</key>
<integer>1</integer>
<key>3</key>
<integer>10</integer>
<key>4</key>
<integer>0</integer>
<key>5</key>
<integer>4</integer>
<key>6</key>
<integer>1140763936</integer>
<key>7</key>
<integer>1095878445</integer>
<key>8</key>
<integer>-1066910782</integer>
</dict>
<dict>
<key>2</key>
<integer>1</integer>
<key>3</key>
<integer>11</integer>
<key>4</key>
<integer>0</integer>
<key>5</key>
<integer>4</integer>
<key>6</key>
<integer>1150711009</integer>
<key>7</key>
<integer>1082220668</integer>
<key>8</key>
<integer>-1072251010</integer>
</dict>
<dict>
<key>2</key>
<integer>1</integer>
<key>3</key>
<integer>22</integer>
<key>4</key>
<integer>0</integer>
<key>5</key>
<integer>4</integer>
<key>6</key>
<integer>1169045837</integer>
<key>7</key>
<integer>1080998247</integer>
<key>8</key>
<integer>-1076100424</integer>
</dict>
<dict>
<key>2</key>
<integer>1</integer>
<key>3</key>
<integer>23</integer>
<key>4</key>
<integer>0</integer>
<key>5</key>
<integer>6</integer>
<key>6</key>
<integer>1174718752</integer>
<key>7</key>
<integer>1074226939</integer>
<key>8</key>
<integer>-1065842737</integer>
</dict>
<dict>
<key>2</key>
<integer>1</integer>
<key>3</key>
<integer>24</integer>
<key>4</key>
<integer>0</integer>
<key>5</key>
<integer>4</integer>
<key>6</key>
<integer>1174256827</integer>
<key>7</key>
<integer>1091118565</integer>
<key>8</key>
<integer>-1065842737</integer>
</dict>
</array>
</dict>
<key>PatchbayInfo</key>
<dict>
<key>InputPort0</key>
<dict>
<key>PortInstance</key>
<integer>0</integer>
<key>PortWidth</key>
<integer>1</integer>
<key>SourceFuncInstance</key>
<integer>0</integer>
<key>SourcePortIndex</key>
<integer>0</integer>
</dict>
<key>InputPort1</key>
<dict>
<key>PortInstance</key>
<integer>1</integer>
<key>PortWidth</key>
<integer>1</integer>
<key>SourceFuncInstance</key>
<integer>0</integer>
<key>SourcePortIndex</key>
<integer>1</integer>
</dict>
</dict>
</dict>
<key>DspFunction2</key>
<dict>
<key>FunctionInfo</key>
<dict>
<key>DspFuncInstance</key>
<integer>2</integer>
<key>DspFuncName</key>
<string>DspGainStage</string>
<key>DspFuncProcessingIndex</key>
<integer>2</integer>
</dict>
<key>ParameterInfo</key>
<dict>
<key>1</key>
<integer>0</integer>
<key>2</key>
<integer>1065353216</integer>
<key>3</key>
<integer>1065353216</integer>
</dict>
<key>PatchbayInfo</key>
<dict>
<key>InputPort0</key>
<dict>
<key>PortInstance</key>
<integer>0</integer>
<key>PortWidth</key>
<integer>1</integer>
<key>SourceFuncInstance</key>
<integer>1</integer>
<key>SourcePortIndex</key>
<integer>0</integer>
</dict>
<key>InputPort1</key>
<dict>
<key>PortInstance</key>
<integer>1</integer>
<key>PortWidth</key>
<integer>1</integer>
<key>SourceFuncInstance</key>
<integer>1</integer>
<key>SourcePortIndex</key>
<integer>1</integer>
</dict>
</dict>
</dict>
<key>DspFunction3</key>
<dict>
<key>FunctionInfo</key>
<dict>
<key>DspFuncInstance</key>
<integer>3</integer>
<key>DspFuncName</key>
<string>DspClientGainAdjustStage</string>
<key>DspFuncProcessingIndex</key>
<integer>3</integer>
</dict>
<key>ParameterInfo</key>
<dict>
<key>1</key>
<integer>1</integer>
<key>2</key>
<integer>0</integer>
<key>3</key>
<integer>1082130432</integer>
<key>4</key>
<integer>1103626240</integer>
<key>5</key>
<integer>1</integer>
<key>6</key>
<integer>1082130432</integer>
<key>7</key>
<integer>3</integer>
<key>8</key>
<integer>0</integer>
</dict>
<key>PatchbayInfo</key>
<dict>
<key>InputPort0</key>
<dict>
<key>PortInstance</key>
<integer>0</integer>
<key>PortWidth</key>
<integer>1</integer>
<key>SourceFuncInstance</key>
<integer>2</integer>
<key>SourcePortIndex</key>
<integer>0</integer>
</dict>
<key>InputPort1</key>
<dict>
<key>PortInstance</key>
<integer>1</integer>
<key>PortWidth</key>
<integer>1</integer>
<key>SourceFuncInstance</key>
<integer>2</integer>
<key>SourcePortIndex</key>
<integer>1</integer>
</dict>
</dict>
</dict>
</dict>
</dict>
</dict>
<key>Outputs</key>
<array>
<string>Headphone</string>
<string>IntSpeaker</string>
</array>
<key>PathMapID</key>
<integer>289</integer>
</dict>
</array>
</dict>
```
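The fragment above is an excerpt from an Apple-style property list that chains four software-DSP stages (noise reduction, a 32-band equalizer, a gain stage, and a client gain adjust) together, with each stage's `PatchbayInfo` naming its upstream source. As a rough sketch, such a layout can be inspected with Python's `plistlib`; the `<plist>` wrapper and the abridged two-stage dictionary below are assumptions, since only a fragment appears here:

```python
import plistlib
from io import BytesIO

# Minimal excerpt modeled on the layout above (values abridged, wrapper assumed).
PLIST = b"""<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
  <key>SoftwareDSP</key>
  <dict>
    <key>DspFunction0</key>
    <dict>
      <key>FunctionInfo</key>
      <dict>
        <key>DspFuncName</key><string>DspNoiseReduction</string>
        <key>DspFuncProcessingIndex</key><integer>0</integer>
      </dict>
    </dict>
    <key>DspFunction1</key>
    <dict>
      <key>FunctionInfo</key>
      <dict>
        <key>DspFuncName</key><string>DspEqualization32</string>
        <key>DspFuncProcessingIndex</key><integer>1</integer>
      </dict>
    </dict>
  </dict>
</dict>
</plist>
"""

layout = plistlib.load(BytesIO(PLIST))
dsp = layout["SoftwareDSP"]

# Order the stages by their processing index and print the resulting chain.
stages = sorted(
    (v["FunctionInfo"] for k, v in dsp.items() if k.startswith("DspFunction")),
    key=lambda info: info["DspFuncProcessingIndex"],
)
chain = " -> ".join(info["DspFuncName"] for info in stages)
print(chain)  # DspNoiseReduction -> DspEqualization32
```

Sorting on `DspFuncProcessingIndex` rather than on the `DspFunctionN` key names avoids relying on dictionary key ordering.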
|
Page 2 – A Collection of Her Most Famous Songs is a compilation album by Patti Page. It was released in October 1955 on Mercury Records. It was distributed as a vinyl LP.
This was the second album in a series of four, titled "Page 1" to "Page 4".
Reception
Billboard welcomed the album, saying: “Page 2,” second in Mercury’s new Patti Page LP series, features memorable tunes from the late 1920s and early 1930s—“It All Depends on You,” “My Ideal,” “Rockin’ Chair,” etc.—sung with warmth, taste and sincerity by the thrush. Perfect programming for romantic jock segs.
Track listing
References
Patti Page albums
1955 compilation albums
Mercury Records compilation albums
|
```css
/* atma-300normal - latin */
@font-face {
font-family: 'Atma';
font-style: normal;
font-display: swap;
font-weight: 300;
src:
local('Atma Light '),
local('Atma-Light'),
url('./files/atma-latin-300.woff2') format('woff2'), /* Super Modern Browsers */
url('./files/atma-latin-300.woff') format('woff'); /* Modern Browsers */
}
/* atma-400normal - latin */
@font-face {
font-family: 'Atma';
font-style: normal;
font-display: swap;
font-weight: 400;
src:
local('Atma Regular '),
local('Atma-Regular'),
url('./files/atma-latin-400.woff2') format('woff2'), /* Super Modern Browsers */
url('./files/atma-latin-400.woff') format('woff'); /* Modern Browsers */
}
/* atma-500normal - latin */
@font-face {
font-family: 'Atma';
font-style: normal;
font-display: swap;
font-weight: 500;
src:
local('Atma Medium '),
local('Atma-Medium'),
url('./files/atma-latin-500.woff2') format('woff2'), /* Super Modern Browsers */
url('./files/atma-latin-500.woff') format('woff'); /* Modern Browsers */
}
/* atma-600normal - latin */
@font-face {
font-family: 'Atma';
font-style: normal;
font-display: swap;
font-weight: 600;
src:
local('Atma SemiBold '),
local('Atma-SemiBold'),
url('./files/atma-latin-600.woff2') format('woff2'), /* Super Modern Browsers */
url('./files/atma-latin-600.woff') format('woff'); /* Modern Browsers */
}
/* atma-700normal - latin */
@font-face {
font-family: 'Atma';
font-style: normal;
font-display: swap;
font-weight: 700;
src:
local('Atma Bold '),
local('Atma-Bold'),
url('./files/atma-latin-700.woff2') format('woff2'), /* Super Modern Browsers */
url('./files/atma-latin-700.woff') format('woff'); /* Modern Browsers */
}
```
|
```java
/*
*
*
* path_to_url
*
* Unless required by applicable law or agreed to in writing,
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* specific language governing permissions and limitations
*/
package org.wso2.ballerinalang.compiler.tree.clauses;
import org.ballerinalang.model.clauses.InputClauseNode;
import org.ballerinalang.model.tree.expressions.ExpressionNode;
import org.ballerinalang.model.tree.statements.VariableDefinitionNode;
import org.wso2.ballerinalang.compiler.semantics.model.types.BType;
import org.wso2.ballerinalang.compiler.tree.BLangNode;
import org.wso2.ballerinalang.compiler.tree.expressions.BLangExpression;
/**
* Abstract implementation of from/join clause statements.
*
* @since 1.3.0
*/
public abstract class BLangInputClause extends BLangNode implements InputClauseNode {
// BLangNodes
public BLangExpression collection;
public VariableDefinitionNode variableDefinitionNode;
// Parser Flags and Data
public boolean isDeclaredWithVar;
// Semantic Data
public BType varType; // T
public BType resultType; // map<T>
public BType nillableResultType; // map<T>?
@Override
public ExpressionNode getCollection() {
return collection;
}
@Override
public void setCollection(ExpressionNode collection) {
this.collection = (BLangExpression) collection;
}
@Override
public boolean setDeclaredWithVar() {
return false;
}
@Override
public boolean isDeclaredWithVar() {
return isDeclaredWithVar;
}
@Override
public VariableDefinitionNode getVariableDefinitionNode() {
return variableDefinitionNode;
}
@Override
public void setVariableDefinitionNode(VariableDefinitionNode variableDefinitionNode) {
this.variableDefinitionNode = variableDefinitionNode;
}
}
```
|
Bugulminka (Bögölmä) is a rural locality (a village) in Vozdvizhensky Selsoviet, Alsheyevsky District, Bashkortostan, Russia. The population was 131 as of 2010. There are 5 streets.
Geography
Bugulminka is located 43 km southwest of Rayevsky (the district's administrative centre) by road. Sanatoriya imeni Chekhova is the nearest rural locality.
References
Rural localities in Alsheyevsky District
|
```javascript
import { computeArcBoundingBox } from '../src/boundingBox'
const boundingBoxCases = [
{
args: [0, 0, 100, 0, 360],
expected: {
x: -100,
y: -100,
width: 200,
height: 200,
},
},
{
args: [0, 0, 100, 0, 90],
expected: {
x: 0,
y: 0,
width: 100,
height: 100,
},
},
{
args: [0, 0, 100, -90, 0],
expected: {
x: 0,
y: -100,
width: 100,
height: 100,
},
},
{
args: [0, 0, 100, 90, 180],
expected: {
x: -100,
y: 0,
width: 100,
height: 100,
},
},
]
for (const boundingBoxCase of boundingBoxCases) {
const { args, expected } = boundingBoxCase
test(`computeArcBoundingBox() for position ${args[0]}, ${args[1]} with radius ${args[2]}, starting at ${args[3]}, ending at ${args[4]}`, () => {
const box = computeArcBoundingBox(...args)
for (const prop in expected) {
expect(box).toHaveProperty(prop, expected[prop])
}
})
}
```
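The cases above pin down a convention: angles are in degrees, measured clockwise from the positive x axis in screen coordinates (y grows downward), so a point on the arc is `(cx + r·cos a, cy + r·sin a)`. Below is a sketch of an implementation consistent with these cases, written in Python for illustration; the real `computeArcBoundingBox` lives in `../src/boundingBox` and may differ:

```python
import math

def compute_arc_bounding_box(cx, cy, r, start_deg, end_deg):
    """Axis-aligned bounding box of a circular arc.

    Angles are in degrees; a point on the arc is
    (cx + r*cos(a), cy + r*sin(a)) with y growing downward.
    """
    # Candidate extremes: the two endpoints plus every axis crossing
    # (multiple of 90 degrees) that lies inside the sweep.
    angles = [start_deg, end_deg]
    a = math.ceil(start_deg / 90) * 90
    while a <= end_deg:
        angles.append(a)
        a += 90
    xs = [cx + r * math.cos(math.radians(ang)) for ang in angles]
    ys = [cy + r * math.sin(math.radians(ang)) for ang in angles]
    x, y = min(xs), min(ys)
    return {"x": x, "y": y, "width": max(xs) - x, "height": max(ys) - y}
```

Only the endpoints and the axis crossings inside the sweep can be extreme points of a circular arc, which is why no other samples are needed.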
|
```css
Vertical percentages are relative to container width, not height
Vertical centering with `margin-top`
Use `z-index` to specify the stack order of elements that overlap
Controlling cellpadding and cellspacing in CSS
Inherit `box-sizing`
```
|
MapEasy is a travel publishing company located in Wainscott, New York. The company was founded in 1990, starting with 3 titles, and currently produces maps and other travel content for over 150 cities worldwide.
The company's maps have received generally favorable reviews from major publications. The New York Times has described the maps as "friendly" but "with attitude", while the Washington Post has called them "charmingly illustrated and elegantly hand-lettered". Reviews have not always been positive, as one reviewer from about.com, while giving the map generally favorable ratings, found the San Francisco map "too busy".
Maps from MapEasy also contain colored-pencil illustrations with comments and descriptions.
References
External links
MapEasy
Map companies of the United States
Companies based in New York (state)
|
```python
#
#
# path_to_url
#
# Unless required by applicable law or agreed to in writing, software
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
import copy
import itertools
import os
import unittest
import numpy as np
import scipy
import scipy.linalg
from op_test import OpTest
import paddle
from paddle import base
from paddle.base import core
from paddle.pir_utils import test_with_pir_api
def scipy_lu(A, pivot):
shape = A.shape
if len(shape) == 2:
return scipy.linalg.lu(A, permute_l=not pivot)
else:
preshape = shape[:-2]
batchsize = np.prod(shape) // (shape[-2] * shape[-1])
PP = []
PL = []
PU = []
NA = A.reshape((-1, shape[-2], shape[-1]))
for b in range(batchsize):
P, L, U = scipy.linalg.lu(NA[b], permute_l=not pivot)
pshape = P.shape
lshape = L.shape
ushape = U.shape
PP.append(P)
PL.append(L)
PU.append(U)
return (
np.array(PP).reshape(preshape + pshape),
np.array(PL).reshape(preshape + lshape),
np.array(PU).reshape(preshape + ushape),
)
def Pmat_to_perm(Pmat_org, cut):
Pmat = copy.deepcopy(Pmat_org)
shape = Pmat.shape
rows = shape[-2]
cols = shape[-1]
batchsize = max(1, np.prod(shape[:-2]))
P = Pmat.reshape(batchsize, rows, cols)
permmat = []
for b in range(batchsize):
permlst = []
sP = P[b]
for c in range(min(rows, cols)):
idx = np.argmax(sP[:, c])
permlst.append(idx)
tmp = copy.deepcopy(sP[c, :])
sP[c, :] = sP[idx, :]
sP[idx, :] = tmp
permmat.append(permlst)
Pivot = (
np.array(permmat).reshape(
[
*shape[:-2],
rows,
]
)
+ 1
)
return Pivot[..., :cut]
def perm_to_Pmat(perm, dim):
pshape = perm.shape
bs = int(np.prod(perm.shape[:-1]).item())
perm = perm.reshape((bs, pshape[-1]))
oneslst = []
for i in range(bs):
idlst = np.arange(dim)
perm_item = perm[i, :]
for idx, p in enumerate(perm_item - 1):
temp = idlst[idx]
idlst[idx] = idlst[p]
idlst[p] = temp
ones = paddle.eye(dim)
nmat = paddle.scatter(ones, paddle.to_tensor(idlst), ones)
oneslst.append(nmat)
return np.array(oneslst).reshape([*pshape[:-1], dim, dim])
# m < n
class TestLUOp(OpTest):
"""
case 1
"""
def config(self):
self.x_shape = [3, 10, 12]
self.pivot = True
self.get_infos = True
self.dtype = "float64"
def set_output(self):
X = self.inputs['X']
sP, sl, sU = scipy_lu(X, self.pivot)
sL = np.tril(sl, -1)
ashape = np.array(X.shape)
lshape = np.array(sL.shape)
ushape = np.array(sU.shape)
lpad = (len(sL.shape) - 2) * [(0, 0)] + [
(0, (ashape - lshape)[-2]),
(0, (ashape - lshape)[-1]),
]
upad = (len(sU.shape) - 2) * [(0, 0)] + [
(0, (ashape - ushape)[-2]),
(0, (ashape - ushape)[-1]),
]
NsL = np.pad(sL, lpad)
NsU = np.pad(sU, upad)
NLU = NsL + NsU
self.output = NLU
self.Pivots = Pmat_to_perm(sP, min(ashape[-2], ashape[-1]))
self.Infos = (
np.zeros(self.x_shape[:-2]) if len(X.shape) > 2 else np.array(0)
)
def setUp(self):
self.op_type = "lu"
self.python_api = paddle.tensor.linalg.lu
self.python_out_sig = ["Out", "Pivots"]
self.config()
self.inputs = {'X': np.random.random(self.x_shape).astype(self.dtype)}
self.attrs = {'pivots': self.pivot}
self.set_output()
self.outputs = {
'Out': self.output,
'Pivots': self.Pivots,
'Infos': self.Infos,
}
def test_check_output(self):
self.check_output(check_pir=True)
def test_check_grad(self):
self.check_grad(['X'], ['Out'], check_pir=True)
# m = n 2D
class TestLUOp2(TestLUOp):
"""
case 2
"""
def config(self):
self.x_shape = [10, 10]
self.pivot = True
self.get_infos = True
self.dtype = "float64"
# m > n
class TestLUOp3(TestLUOp):
"""
case 3
"""
def config(self):
self.x_shape = [2, 12, 10]
self.pivot = True
self.get_infos = True
self.dtype = "float64"
class TestLUAPI(unittest.TestCase):
def test_dygraph(self):
def run_lu_dygraph(shape, dtype):
if dtype == "float32":
np_dtype = np.float32
elif dtype == "float64":
np_dtype = np.float64
np.random.seed(1024)
a = np.random.rand(*shape).astype(np_dtype)
m = a.shape[-2]
n = a.shape[-1]
min_mn = min(m, n)
pivot = True
places = []
if (
os.environ.get('FLAGS_CI_both_cpu_and_gpu', 'False').lower()
in ['1', 'true', 'on']
or not core.is_compiled_with_cuda()
):
places.append(base.CPUPlace())
if core.is_compiled_with_cuda():
places.append(base.CUDAPlace(0))
for place in places:
paddle.disable_static(place)
batch_size = a.size // (a.shape[-1] * a.shape[-2])
x = paddle.to_tensor(a, dtype=dtype)
sP, sl, sU = scipy_lu(a, pivot)
sL = np.tril(sl, -1)
LU, P, Info = paddle.linalg.lu(x, pivot=pivot, get_infos=True)
m, n = LU.shape[-2], LU.shape[-1]
tril = np.tril(LU, -1)[..., :m, :m]
triu = np.triu(LU)[..., :n, :n]
mtp = Pmat_to_perm(sP, min(m, n))
nP = perm_to_Pmat(P, sP.shape[-1])
np.testing.assert_allclose(sU, triu, rtol=1e-05, atol=1e-05)
np.testing.assert_allclose(sL, tril, rtol=1e-05, atol=1e-05)
np.testing.assert_allclose(P, mtp, rtol=1e-05, atol=1e-05)
np.testing.assert_allclose(nP, sP, rtol=1e-05, atol=1e-05)
tensor_shapes = [
(3, 5),
(5, 5),
(5, 3), # 2-dim Tensors
(2, 3, 5),
(3, 5, 5),
(4, 5, 3), # 3-dim Tensors
(2, 5, 3, 5),
(3, 5, 5, 5),
(4, 5, 5, 3), # 4-dim Tensors
]
dtypes = ["float32", "float64"]
for tensor_shape, dtype in itertools.product(tensor_shapes, dtypes):
run_lu_dygraph(tensor_shape, dtype)
@test_with_pir_api
def test_static(self):
paddle.enable_static()
def run_lu_static(shape, dtype):
if dtype == "float32":
np_dtype = np.float32
elif dtype == "float64":
np_dtype = np.float64
a = np.random.rand(*shape).astype(np_dtype)
m = a.shape[-2]
n = a.shape[-1]
min_mn = min(m, n)
pivot = True
places = []
if (
os.environ.get('FLAGS_CI_both_cpu_and_gpu', 'False').lower()
in ['1', 'true', 'on']
or not core.is_compiled_with_cuda()
):
places.append(base.CPUPlace())
if core.is_compiled_with_cuda():
places.append(base.CUDAPlace(0))
for place in places:
with paddle.static.program_guard(
paddle.static.Program(), paddle.static.Program()
):
batch_size = a.size // (a.shape[-1] * a.shape[-2])
sP, sl, sU = scipy_lu(a, pivot)
sL = np.tril(sl, -1)
ashape = np.array(a.shape)
lshape = np.array(sL.shape)
ushape = np.array(sU.shape)
lpad = (len(sL.shape) - 2) * [(0, 0)] + [
(0, (ashape - lshape)[-2]),
(0, (ashape - lshape)[-1]),
]
upad = (len(sU.shape) - 2) * [(0, 0)] + [
(0, (ashape - ushape)[-2]),
(0, (ashape - ushape)[-1]),
]
NsL = np.pad(sL, lpad)
NsU = np.pad(sU, upad)
NLU = NsL + NsU
x = paddle.static.data(
name="input", shape=shape, dtype=dtype
)
lu, p = paddle.linalg.lu(x, pivot=pivot)
exe = base.Executor(place)
fetches = exe.run(
feed={"input": a},
fetch_list=[lu, p],
)
np.testing.assert_allclose(
fetches[0], NLU, rtol=1e-05, atol=1e-05
)
tensor_shapes = [
(3, 5),
(5, 5),
(5, 3), # 2-dim Tensors
(2, 3, 5),
(3, 5, 5),
(4, 5, 3), # 3-dim Tensors
(2, 5, 3, 5),
(3, 5, 5, 5),
(4, 5, 5, 3), # 4-dim Tensors
]
dtypes = ["float32", "float64"]
for tensor_shape, dtype in itertools.product(tensor_shapes, dtypes):
run_lu_static(tensor_shape, dtype)
class TestLUAPIError(unittest.TestCase):
def test_errors(self):
with paddle.base.dygraph.guard():
# The size of input in lu should not be 0.
def test_0_size():
array = np.array([], dtype=np.float32)
x = paddle.to_tensor(
np.reshape(array, [0, 0, 0]), dtype='float32'
)
paddle.linalg.lu(x, get_infos=True)
self.assertRaises(ValueError, test_0_size)
if __name__ == "__main__":
paddle.enable_static()
unittest.main()
```
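The static-graph test above verifies `paddle.linalg.lu` by taking scipy's `P, L, U`, zero-padding `L` and `U` to the input shape, and summing them into the packed factor the op returns. The same pack-and-verify idea can be sketched with a plain-numpy LU with partial pivoting (an illustrative sketch with hypothetical helper names, independent of paddle and scipy):

```python
import numpy as np

def lu_decompose(a):
    """Doolittle LU with partial pivoting: returns P, L, U with P @ a == L @ U."""
    n = a.shape[0]
    U = a.astype(np.float64).copy()
    L = np.eye(n)
    P = np.eye(n)
    for k in range(n - 1):
        # pivot: bring the row with the largest |U[k:, k]| into position k
        p = k + np.argmax(np.abs(U[k:, k]))
        if p != k:
            U[[k, p], k:] = U[[p, k], k:]
            L[[k, p], :k] = L[[p, k], :k]
            P[[k, p]] = P[[p, k]]
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return P, L, U

def pack_lu(L, U):
    # pack as the test does: strictly lower part of L plus upper part of U
    return np.tril(L, -1) + np.triu(U)

np.random.seed(0)
a = np.random.rand(5, 5)
P, L, U = lu_decompose(a)
assert np.allclose(P @ a, L @ U, atol=1e-10)
# the unit-diagonal L is recoverable from the packed factor
assert np.allclose(np.tril(pack_lu(L, U), -1) + np.eye(5), L)
```

Because `L` has a unit diagonal, the strictly-lower part of the packed matrix plus the identity reconstructs `L` exactly, which is why the test only needs to compare the single packed tensor.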
|
```c++
/*******************************************************************************
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
*******************************************************************************/
#ifndef CPU_AARCH64_JIT_GENERATOR_HPP
#define CPU_AARCH64_JIT_GENERATOR_HPP
#include <limits.h>
#include "common/bit_cast.hpp"
#include "common/type_helpers.hpp"
#include "common/utils.hpp"
#include "cpu/aarch64/cpu_isa_traits.hpp"
#include "cpu/jit_utils/jit_utils.hpp"
#if defined(_WIN32) && !defined(__GNUC__)
#define STRUCT_ALIGN(al, ...) __declspec(align(al)) __VA_ARGS__
#else
#define STRUCT_ALIGN(al, ...) __VA_ARGS__ __attribute__((__aligned__(al)))
#endif
#define DECLARE_CPU_JIT_AUX_FUNCTIONS(jit_name) \
const char *name() const override { return STRINGIFY(jit_name); } \
const char *source_file() const override { return __FILE__; }
static const size_t CSIZE = sizeof(uint32_t);
namespace dnnl {
namespace impl {
namespace cpu {
namespace aarch64 {
// TODO: move this to jit_generator class?
namespace {
typedef enum {
MAX_CODE_SIZE = 256 * 1024,
} max_code_size_t;
// Callee-saved registers
constexpr Xbyak_aarch64::Operand::Code abi_save_gpr_regs[]
= {Xbyak_aarch64::Operand::X16, Xbyak_aarch64::Operand::X17,
Xbyak_aarch64::Operand::X19, Xbyak_aarch64::Operand::X20,
Xbyak_aarch64::Operand::X21, Xbyak_aarch64::Operand::X22,
Xbyak_aarch64::Operand::X23, Xbyak_aarch64::Operand::X24,
Xbyak_aarch64::Operand::X25, Xbyak_aarch64::Operand::X26,
Xbyak_aarch64::Operand::X27, Xbyak_aarch64::Operand::X28};
// See "Procedure Call Standard for the ARM 64-bit Architecture (AArch64)"
static const Xbyak_aarch64::XReg abi_param1(Xbyak_aarch64::Operand::X0),
abi_param2(Xbyak_aarch64::Operand::X1),
abi_param3(Xbyak_aarch64::Operand::X2),
abi_param4(Xbyak_aarch64::Operand::X3),
abi_param5(Xbyak_aarch64::Operand::X4),
abi_param6(Xbyak_aarch64::Operand::X5),
abi_param7(Xbyak_aarch64::Operand::X6),
abi_param8(Xbyak_aarch64::Operand::X7),
abi_not_param1(Xbyak_aarch64::Operand::X15);
} // namespace
class jit_generator : public Xbyak_aarch64::CodeGenerator, public c_compatible {
public:
using c_compatible::operator new;
using c_compatible::operator new[];
using c_compatible::operator delete;
using c_compatible::operator delete[];
private:
const size_t xreg_len = 8;
const size_t vreg_len_preserve = 8; // Only the bottom 8 bytes must be preserved.
const size_t vreg_to_preserve = 8; // VREG8 - VREG15
const size_t num_abi_save_gpr_regs
= sizeof(abi_save_gpr_regs) / sizeof(abi_save_gpr_regs[0]);
const size_t preserved_stack_size = xreg_len * (2 + num_abi_save_gpr_regs)
+ vreg_len_preserve * vreg_to_preserve;
const size_t size_of_abi_save_regs = num_abi_save_gpr_regs * x0.getBit() / 8
+ vreg_to_preserve * vreg_len_preserve;
public:
enum {
_cmp_eq_oq = 0u,
_cmp_lt_os = 1u,
_cmp_le_os = 2u,
_cmp_neq_uq = 4u,
_cmp_nlt_us = 5u,
_cmp_nle_us = 6u,
_op_floor = 1u,
_op_mxcsr = 4u,
};
const uint64_t cpu_sveLen = get_sve_length();
const Xbyak_aarch64::WReg W_TMP_0 = w23;
const Xbyak_aarch64::WReg W_TMP_1 = w24;
const Xbyak_aarch64::WReg W_TMP_2 = w25;
const Xbyak_aarch64::WReg W_TMP_3 = w26;
const Xbyak_aarch64::WReg W_TMP_4 = w27;
const Xbyak_aarch64::XReg X_TMP_0 = x23;
const Xbyak_aarch64::XReg X_TMP_1 = x24;
const Xbyak_aarch64::XReg X_TMP_2 = x25;
const Xbyak_aarch64::XReg X_TMP_3 = x26;
const Xbyak_aarch64::XReg X_TMP_4 = x27;
const Xbyak_aarch64::XReg X_DEFAULT_ADDR = x28;
const Xbyak_aarch64::XReg X_SP = x21;
const Xbyak_aarch64::XReg X_TRANSLATOR_STACK = x22;
const Xbyak_aarch64::PReg P_TMP = p7;
const Xbyak_aarch64::PReg P_TMP_0 = p11;
const Xbyak_aarch64::PReg P_TMP_1 = p12;
const Xbyak_aarch64::PReg P_ALL_ZERO = p10;
const Xbyak_aarch64::PReg P_NOT_256 = p13;
const Xbyak_aarch64::PReg P_NOT_128 = p14;
const Xbyak_aarch64::PReg P_ALL_ONE = p0;
const std::vector<Xbyak_aarch64::XReg> x_tmp_vec
= {X_TMP_0, X_TMP_1, X_TMP_2, X_TMP_3, X_TMP_4};
const int x_tmp_vec_size = x_tmp_vec.size();
const Xbyak_aarch64::XReg param1 = abi_param1;
constexpr static size_t translator_stack_offset = 1024 * 128;
constexpr static uint32_t DUMMY_IDX = 99;
inline size_t get_size_of_abi_save_regs() { return size_of_abi_save_regs; }
void preamble() {
using namespace Xbyak_aarch64::util;
uint64_t sveLen = get_sve_length();
stp(x29, x30, pre_ptr(sp, -16));
/* x29 is a frame pointer. */
mov(x29, sp);
sub(sp, sp, static_cast<int64_t>(preserved_stack_size) - 16);
/* x9 can be used as a temporal register. */
mov(x9, sp);
if (vreg_to_preserve) {
st4((v8.d - v11.d)[0], post_ptr(x9, vreg_len_preserve * 4));
st4((v12.d - v15.d)[0], post_ptr(x9, vreg_len_preserve * 4));
}
for (size_t i = 0; i < num_abi_save_gpr_regs; i += 2) {
stp(Xbyak_aarch64::XReg(abi_save_gpr_regs[i]),
Xbyak_aarch64::XReg(abi_save_gpr_regs[i + 1]),
post_ptr(x9, xreg_len * 2));
}
if (sveLen) { /* SVE is available. */
ptrue(P_ALL_ONE.b);
pfalse(P_ALL_ZERO.b);
}
if (sveLen >= SVE_256) {
ptrue(P_NOT_128.b, Xbyak_aarch64::VL16);
not_(P_NOT_128.b, P_ALL_ONE / Xbyak_aarch64::T_z, P_NOT_128.b);
}
if (sveLen >= SVE_512) {
ptrue(P_NOT_256.b, Xbyak_aarch64::VL32);
not_(P_NOT_256.b, P_ALL_ONE / Xbyak_aarch64::T_z, P_NOT_256.b);
}
mov(X_SP, sp);
sub_imm(X_TRANSLATOR_STACK, X_SP, translator_stack_offset, X_TMP_0);
}
void postamble() {
using namespace Xbyak_aarch64::util;
mov(x9, sp);
if (vreg_to_preserve) {
ld4((v8.d - v11.d)[0], post_ptr(x9, vreg_len_preserve * 4));
ld4((v12.d - v15.d)[0], post_ptr(x9, vreg_len_preserve * 4));
}
for (size_t i = 0; i < num_abi_save_gpr_regs; i += 2) {
ldp(Xbyak_aarch64::XReg(abi_save_gpr_regs[i]),
Xbyak_aarch64::XReg(abi_save_gpr_regs[i + 1]),
post_ptr(x9, xreg_len * 2));
}
add(sp, sp, static_cast<int64_t>(preserved_stack_size) - 16);
ldp(x29, x30, post_ptr(sp, 16));
ret();
}
// Disallow char-based labels completely
void L(const char *label) = delete;
void L(Xbyak_aarch64::Label &label) {
Xbyak_aarch64::CodeGenerator::L(label);
}
void L_aligned(Xbyak_aarch64::Label &label, int alignment = 16) {
align(alignment);
L(label);
}
template <typename T>
Xbyak_aarch64::XReg addr_off(const Xbyak_aarch64::XReg &base, const T off,
const Xbyak_aarch64::XReg &addr, const Xbyak_aarch64::XReg &x_tmp) {
if (off == 0) return base;
add_imm(addr, base, off, x_tmp);
return addr;
}
template <typename PRegBHSD, typename T>
void set_preg(const PRegBHSD &p, T tail_size,
const Xbyak_aarch64::XReg x_tmp0 = Xbyak_aarch64::XReg(DUMMY_IDX),
const Xbyak_aarch64::XReg x_tmp1 = Xbyak_aarch64::XReg(DUMMY_IDX)) {
using namespace Xbyak_aarch64;
assert(tail_size <= 64); // Implemented only for "SVE size <= 512"
switch (tail_size) {
case 0: pfalse(PRegB(p.getIdx())); return;
case 1: ptrue(p, VL1); return;
case 2: ptrue(p, VL2); return;
case 3: ptrue(p, VL3); return;
case 4: ptrue(p, VL4); return;
case 5: ptrue(p, VL5); return;
case 6: ptrue(p, VL6); return;
case 7: ptrue(p, VL7); return;
case 8: ptrue(p, VL8); return;
case 16: ptrue(p, VL16); return;
case 32: ptrue(p, VL32); return;
case 64: ptrue(p, VL64); return;
}
assert(x_tmp0.getIdx() != DUMMY_IDX && x_tmp1.getIdx() != DUMMY_IDX);
mov_imm(x_tmp0, 0);
mov_imm(x_tmp1, tail_size);
whilelt(p, x_tmp0, x_tmp1);
}
template <typename T>
void uni_add(const T &x1, const T &x2, const T &op) {
add(x1, x2, op);
}
void uni_add(const Xbyak_aarch64::VReg4S &x1,
const Xbyak_aarch64::VReg4S &x2, const Xbyak_aarch64::VReg4S &op) {
add(x1, x2, op);
}
void uni_add(const Xbyak_aarch64::ZReg &x1, const Xbyak_aarch64::ZReg &x2,
const Xbyak_aarch64::ZReg &op) {
add(Xbyak_aarch64::ZRegS(x1.getIdx()),
Xbyak_aarch64::ZRegS(x2.getIdx()),
Xbyak_aarch64::ZRegS(op.getIdx()));
}
template <typename T>
void udiv_mod(const T &q, const T &r, const T &divend, const T &divisor) {
assert(q.getIdx() != divisor.getIdx());
assert(q.getIdx() != divend.getIdx());
assert(r.getIdx() != divend.getIdx());
udiv(q, divend, divisor);
mul(r, q, divisor);
sub(r, divend, r);
}
template <typename T>
void umod(const T &r, const T &divend, const T &divisor) {
assert(r.getIdx() != divend.getIdx());
assert(r.getIdx() != divisor.getIdx());
udiv(r, divend, divisor);
mul(r, r, divisor);
sub(r, divend, r);
}
void uni_clear(const Xbyak_aarch64::VReg &dst) { eor(dst.b, dst.b, dst.b); }
void uni_clear(const Xbyak_aarch64::ZReg &dst) { eor(dst.d, dst.d, dst.d); }
template <typename T>
void uni_fadd(const T &dst, const T &src, const T &src2) {
fadd(dst, src, src2);
}
void uni_fcvtzs(
const Xbyak_aarch64::VReg4S &d, const Xbyak_aarch64::VReg4S &s) {
fcvtzs(d, s);
}
void uni_fcvtzs(
const Xbyak_aarch64::ZRegS &d, const Xbyak_aarch64::ZRegS &s) {
fcvtzs(d, P_ALL_ONE / Xbyak_aarch64::T_z, s);
}
template <typename TReg>
void uni_fdiv(const TReg &dst, const TReg &src, const TReg &src2,
const TReg &tmp, const Xbyak_aarch64::PReg &pred) {
uint32_t dstIdx = dst.getIdx();
uint32_t srcIdx = src.getIdx();
uint32_t src2Idx = src2.getIdx();
uint32_t tmpIdx = tmp.getIdx();
if (dstIdx == src2Idx) {
assert(tmpIdx != srcIdx && tmpIdx != src2Idx);
mov(Xbyak_aarch64::ZRegD(tmpIdx), Xbyak_aarch64::ZRegD(src2Idx));
mov(dst, pred / Xbyak_aarch64::T_m, src);
fdiv(dst, pred / Xbyak_aarch64::T_m, tmp);
} else if (dstIdx == srcIdx) {
fdiv(dst, pred / Xbyak_aarch64::T_m, src2);
} else {
mov(dst, P_ALL_ONE / Xbyak_aarch64::T_m, src);
fdiv(dst, pred / Xbyak_aarch64::T_m, src2);
}
}
template <typename TReg>
void uni_fdiv(const TReg &dst, const TReg &src, const TReg &src2) {
fdiv(dst, src, src2);
}
void uni_fdiv(const Xbyak_aarch64::VReg4S &dst,
const Xbyak_aarch64::VReg4S &src, const Xbyak_aarch64::VReg4S &src2,
const Xbyak_aarch64::VReg4S &tmp, const Xbyak_aarch64::PReg &pred) {
UNUSED(tmp);
UNUSED(pred);
fdiv(dst, src, src2);
}
template <typename T>
void uni_fmad(const T &dst, const T &src, const T &src2) {
fmad(dst, P_ALL_ONE / Xbyak_aarch64::T_m, src, src2);
}
void uni_fmad(const Xbyak_aarch64::VReg4S &dst,
const Xbyak_aarch64::VReg4S &src,
const Xbyak_aarch64::VReg4S &src2) {
fmul(dst, dst, src);
fadd(dst, dst, src2);
}
template <typename T>
void uni_fmax(const T &dst, const T &src, const T &src2) {
uint32_t dstIdx = dst.getIdx();
uint32_t srcIdx = src.getIdx();
if (dstIdx != srcIdx)
mov(Xbyak_aarch64::ZRegD(dstIdx), Xbyak_aarch64::ZRegD(srcIdx));
fmax(dst, P_ALL_ONE / Xbyak_aarch64::T_m, src2);
}
template <typename T>
void uni_fmaxnm(const T &dst, const T &src, const T &src2) {
uint32_t dstIdx = dst.getIdx();
uint32_t srcIdx = src.getIdx();
if (dstIdx != srcIdx)
mov(Xbyak_aarch64::ZRegD(dstIdx), Xbyak_aarch64::ZRegD(srcIdx));
fmaxnm(dst, P_ALL_ONE / Xbyak_aarch64::T_m, src2);
}
void uni_fmaxnm(const Xbyak_aarch64::VReg4S &dst,
const Xbyak_aarch64::VReg4S &src,
const Xbyak_aarch64::VReg4S &src2) {
fmaxnm(dst, src, src2);
}
template <typename T>
void uni_fmin(const T &dst, const T &src, const T &src2) {
uint32_t dstIdx = dst.getIdx();
uint32_t srcIdx = src.getIdx();
if (dstIdx != srcIdx)
mov(Xbyak_aarch64::ZRegD(dstIdx), Xbyak_aarch64::ZRegD(srcIdx));
fmin(dst, P_ALL_ONE / Xbyak_aarch64::T_m, src2);
}
template <typename T>
void uni_fmul(const T &dst, const T &src, const T &src2) {
fmul(dst, src, src2);
}
void uni_frinti(
const Xbyak_aarch64::VReg4S &d, const Xbyak_aarch64::VReg4S &s) {
frinti(d, s);
}
void uni_frinti(
const Xbyak_aarch64::ZRegS &d, const Xbyak_aarch64::ZRegS &s) {
frinti(d, P_ALL_ONE / Xbyak_aarch64::T_m, s);
}
template <typename T>
void uni_fsqrt(const T &dst, const T &src) {
fsqrt(dst, P_ALL_ONE / Xbyak_aarch64::T_m, src);
}
void uni_fsqrt(const Xbyak_aarch64::VReg4S &dst,
const Xbyak_aarch64::VReg4S &src) {
fsqrt(dst, src);
}
void uni_fsub(const Xbyak_aarch64::VReg4S &v1,
const Xbyak_aarch64::VReg4S &v2, const Xbyak_aarch64::VReg4S &v3) {
fsub(v1, v2, v3);
}
template <typename T>
void uni_fsub(const T &dst, const T &src, const T &src2) {
fsub(dst, src, src2);
}
void uni_fsub(const Xbyak_aarch64::ZRegS &z1,
const Xbyak_aarch64::ZRegS &z2, const Xbyak_aarch64::ZRegS &z3) {
fsub(z1, z2, z3);
}
void uni_eor(const Xbyak_aarch64::VReg &v1, const Xbyak_aarch64::VReg &v2,
const Xbyak_aarch64::VReg &v3) {
eor(Xbyak_aarch64::VReg16B(v1.getIdx()),
Xbyak_aarch64::VReg16B(v2.getIdx()),
Xbyak_aarch64::VReg16B(v3.getIdx()));
}
void uni_eor(const Xbyak_aarch64::ZReg &z1, const Xbyak_aarch64::ZReg &z2,
const Xbyak_aarch64::ZReg &z3) {
eor(Xbyak_aarch64::ZRegD(z1.getIdx()),
Xbyak_aarch64::ZRegD(z2.getIdx()),
Xbyak_aarch64::ZRegD(z3.getIdx()));
}
void uni_ld1rw(const Xbyak_aarch64::VReg4S &dst,
const Xbyak_aarch64::XReg &base, const int64_t off) {
if (off == 0) {
ld1r(dst, ptr(base));
} else {
add_imm(X_DEFAULT_ADDR, base, off, X_TMP_0);
ld1r(dst, ptr(X_DEFAULT_ADDR));
}
}
void uni_ld1rw(const Xbyak_aarch64::ZRegS &dst,
const Xbyak_aarch64::XReg &base, const int64_t off) {
if (-32 <= off && off < 32) {
ld1rw(dst, P_ALL_ONE / Xbyak_aarch64::T_z, ptr(base, (int)off));
} else {
add_imm(X_DEFAULT_ADDR, base, off, X_TMP_0);
ld1rw(dst, P_ALL_ONE / Xbyak_aarch64::T_z, ptr(X_DEFAULT_ADDR));
}
}
void uni_ldr(
const Xbyak_aarch64::VReg &dst, const Xbyak_aarch64::XReg &addr) {
ldr(Xbyak_aarch64::QReg(dst.getIdx()), ptr(addr));
}
void uni_ldr(
const Xbyak_aarch64::ZReg &dst, const Xbyak_aarch64::XReg &addr) {
ldr(dst, ptr(addr));
}
template <typename T>
void uni_ldr(const Xbyak_aarch64::ZReg &r, const Xbyak_aarch64::XReg &base,
const T off) {
const int off_mod = off % cpu_sveLen;
const int off_mul_vl = off / cpu_sveLen;
if (off_mod == 0 && -256 <= off_mul_vl && off_mul_vl <= 255) {
ldr(r, Xbyak_aarch64::ptr(base, off_mul_vl, Xbyak_aarch64::MUL_VL));
} else {
const int offset = off_mod * 0x10 * (cpu_sveLen / 16);
add_imm(X_DEFAULT_ADDR, base, offset, X_TMP_0);
ldr(r, Xbyak_aarch64::ptr(X_DEFAULT_ADDR));
}
}
template <typename T>
void uni_ldr(const Xbyak_aarch64::VReg &r, const Xbyak_aarch64::XReg &base,
const T off) {
const int off_mod = off % 16;
const int off_mul_vl = off / 16;
if (off_mod == 0 && 0 <= off_mul_vl && off_mul_vl <= 65520) {
ldr(Xbyak_aarch64::QReg(r.getIdx()), Xbyak_aarch64::ptr(base, off));
} else {
add_imm(X_DEFAULT_ADDR, base, off_mod * 4, X_TMP_0);
ldr(Xbyak_aarch64::QReg(r.getIdx()),
Xbyak_aarch64::ptr(X_DEFAULT_ADDR));
}
}
void uni_orr(const Xbyak_aarch64::VReg &d, const Xbyak_aarch64::VReg &s0,
const Xbyak_aarch64::VReg &s1) {
orr(d.b16, s0.b16, s1.b16);
}
void uni_orr(const Xbyak_aarch64::ZReg &d, const Xbyak_aarch64::ZReg &s0,
const Xbyak_aarch64::ZReg &s1) {
orr(d.d, s0.d, s1.d);
}
void uni_scvtf(
const Xbyak_aarch64::VReg4S &t0, const Xbyak_aarch64::VReg4S &t1) {
scvtf(t0, t1);
}
void uni_scvtf(
const Xbyak_aarch64::ZRegS &t0, const Xbyak_aarch64::ZRegS &t1) {
scvtf(t0, P_ALL_ONE / Xbyak_aarch64::T_m, t1);
}
void uni_str(
const Xbyak_aarch64::VReg &src, const Xbyak_aarch64::XReg &addr) {
str(Xbyak_aarch64::QReg(src.getIdx()), ptr(addr));
}
void uni_str(
const Xbyak_aarch64::ZReg &src, const Xbyak_aarch64::XReg &addr) {
str(src, ptr(addr));
}
template <typename T>
void uni_sub(const T &x1, const T &x2, const T &op) {
sub(x1, x2, op);
}
/*
Saturation facility functions: prepare a register holding the
saturation upper bound and apply the saturation to a
floating-point register.
*/
template <typename Vmm>
void init_vmm(Vmm vmm, Xbyak_aarch64::XReg reg_tmp, float value) {
using namespace data_type;
bool isSVE = get_sve_length() ? true : false;
Xbyak_aarch64::ZRegS z_tmp(vmm.getIdx());
Xbyak_aarch64::VReg4S v_tmp(vmm.getIdx());
Xbyak_aarch64::WReg w_tmp(reg_tmp.getIdx());
mov_imm(w_tmp, float2int(value));
if (isSVE) /* SVE is available. */
dup(z_tmp, w_tmp);
else
dup(v_tmp, w_tmp);
}
template <typename Vmm>
void init_saturate_f32(Vmm vmm_lbound, Vmm vmm_ubound,
Xbyak_aarch64::XReg reg_tmp, data_type_t idt, data_type_t odt,
bool force_lbound = false) {
using namespace data_type;
bool isSVE = get_sve_length() ? true : false;
if (!((idt == f32) && utils::one_of(odt, u8, data_type::s8, s32)))
return;
assert(IMPLICATION(
idt == u8, vmm_lbound.getIdx() != vmm_ubound.getIdx()));
// No need to saturate on lower bound for signed integer types, as
// the conversion to int would return INT_MIN, and then proper
// saturation will happen in store_data. The param force_lbound, will
// force saturate values unconditionally to lbound.
if (odt == u8) {
if (isSVE) /* SVE is available. */
dup(Xbyak_aarch64::ZRegS(vmm_lbound.getIdx()), 0);
else if (mayiuse(asimd))
movi(Xbyak_aarch64::VReg4S(vmm_lbound.getIdx()), 0);
else
assert(!"unreachable");
} else if (force_lbound) {
const float saturation_lbound
= odt == data_type::s8 ? INT8_MIN : INT32_MIN;
init_vmm(vmm_lbound, reg_tmp, saturation_lbound);
}
float saturation_ubound = types::max_value<float>(odt);
init_vmm(vmm_ubound, reg_tmp, saturation_ubound);
}
template <typename Vmm>
void saturate_f32(const Vmm &vmm, const Vmm &vmm_lbound,
const Vmm &vmm_ubound, data_type_t odt,
const Xbyak_aarch64::PReg &p_true, bool force_lbound = false) {
// This function is used to saturate to odt in f32 before converting
// to s32 in order to avoid bad saturation due to cvtps2dq
// behavior (it returns INT_MIN if the f32 is out of the
// s32 range)
using namespace data_type;
bool isSVE = get_sve_length() ? true : false;
if (!utils::one_of(odt, u8, data_type::s8, s32)) return;
Xbyak_aarch64::VReg4S v_tmp(vmm.getIdx());
Xbyak_aarch64::VReg4S v_lbound(vmm_lbound.getIdx());
Xbyak_aarch64::VReg4S v_ubound(vmm_ubound.getIdx());
Xbyak_aarch64::ZRegS z_tmp(vmm.getIdx());
Xbyak_aarch64::ZRegS z_lbound(vmm_lbound.getIdx());
Xbyak_aarch64::ZRegS z_ubound(vmm_ubound.getIdx());
// no need to apply lower saturation bound when odt is
// signed, as cvtps2dq will return MIN_INT if the value
// does not fit. The param force_lbound, will force saturate values
// unconditionally to lbound.
if (odt == u8 || force_lbound) {
if (isSVE) /* SVE is available. */
fmax(z_tmp, p_true / Xbyak_aarch64::T_m, z_lbound);
else if (mayiuse(asimd))
fmax(v_tmp, v_tmp, v_lbound);
else
assert(!"unreachable");
}
if (isSVE) /* SVE is available. */
fmin(z_tmp, p_true / Xbyak_aarch64::T_m, z_ubound);
else if (mayiuse(asimd))
fmin(v_tmp, v_tmp, v_ubound);
else
assert(!"unreachable");
}
/* A utility function to process an f32 tail (load, store or other) depending
* on the tail size, stored in a register. The tail size must be a value from
* 0 to 3/7 (Xmm/Ymm). Tail-process functions take an integer argument
* specifying the behavior for each tail size.
*
* Only supported for Xmm and Ymm.
*/
template <cpu_isa_t isa>
void runtime_tail_process(const Xbyak_aarch64::XReg ®_tail,
const Xbyak_aarch64::XReg ®_tmp,
const std::function<void(int)> &tail_process) {
constexpr int simd_w_ymm = 8;
constexpr int f32_bits = sizeof(float) * 8;
const auto simd_w = cpu_isa_traits<isa>::vlen * 8 / f32_bits;
assert(simd_w <= simd_w_ymm); // the jump table below has simd_w_ymm entries
Xbyak_aarch64::Label label_tbl, label_tbl_end;
Xbyak_aarch64::Label l_case[simd_w_ymm];
adr(reg_tmp, label_tbl);
mov_imm(X_TMP_0, sizeof(void *));
madd(X_DEFAULT_ADDR, reg_tail, X_TMP_0, reg_tmp);
br(X_DEFAULT_ADDR);
// create jump table
L(label_tbl);
for (size_t i = 0; i < simd_w; i++)
putL(l_case[i]);
// cases for each tail size - from 0 to 3/7
L(l_case[0]);
b(label_tbl_end);
for (size_t i = 1; i < simd_w; i++) {
L(l_case[i]);
tail_process(i);
b(label_tbl_end);
}
L(label_tbl_end);
}
DNNL_DISALLOW_COPY_AND_ASSIGN(jit_generator);
public:
jit_generator(void *code_ptr = nullptr, size_t code_size = MAX_CODE_SIZE,
bool use_autogrow = true, cpu_isa_t max_cpu_isa = isa_all)
: Xbyak_aarch64::CodeGenerator(code_size,
(code_ptr == nullptr && use_autogrow) ? Xbyak_aarch64::AutoGrow
: code_ptr)
, max_cpu_isa_(max_cpu_isa) {}
virtual ~jit_generator() {}
virtual const char *name() const = 0;
virtual const char *source_file() const = 0;
void register_jit_code(const uint8_t *code, size_t code_size) const {
jit_utils::register_jit_code(code, code_size, name(), source_file());
}
const uint8_t *jit_ker() const { return jit_ker_; }
template <typename... kernel_args_t>
void operator()(kernel_args_t... args) const {
using jit_kernel_func_t = void (*)(const kernel_args_t... args);
auto *fptr = (jit_kernel_func_t)jit_ker_;
(*fptr)(std::forward<kernel_args_t>(args)...);
}
virtual status_t create_kernel() {
generate();
jit_ker_ = getCode();
return (jit_ker_) ? status::success : status::runtime_error;
}
private:
const cpu_isa_t max_cpu_isa_;
const uint8_t *getCode() {
this->ready();
if (!is_initialized()) return nullptr;
const uint8_t *code
= reinterpret_cast<const uint8_t *>(CodeGenerator::getCode());
register_jit_code(code, getSize() * CSIZE);
return code;
}
inline bool is_valid_isa(cpu_isa_t isa) {
return is_subset(isa, max_cpu_isa_) && mayiuse(isa);
}
static inline bool is_initialized() {
/* At the moment, Xbyak_aarch64 does not have GetError(),
so we return a dummy result. */
return true;
}
protected:
virtual void generate() = 0;
const uint8_t *jit_ker_ = nullptr;
};
} // namespace aarch64
} // namespace cpu
} // namespace impl
} // namespace dnnl
#endif
```
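`saturate_f32` above clamps values in f32 to the output type's range before the f32-to-s32 conversion, because converting an out-of-range float directly yields INT_MIN rather than a saturated value. The intended semantics can be modeled in numpy (an illustrative sketch with hypothetical names, not part of oneDNN):

```python
import numpy as np

def saturate_then_convert(x, lbound, ubound):
    # mirror saturate_f32: clamp in float32 first, then convert to int32;
    # without the clamp, out-of-range floats would not saturate correctly
    return np.clip(x, lbound, ubound).astype(np.int32)

x = np.array([-300.0, -1.5, 0.0, 127.9, 300.0], dtype=np.float32)
# saturating to the s8 value range before the integer conversion
out = saturate_then_convert(x, -128.0, 127.0)
assert out.tolist() == [-128, -1, 0, 127, 127]
```

Note that the conversion truncates toward zero, so `-1.5` becomes `-1` while the out-of-range inputs are pinned to the bounds by the preceding clamp.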
|